The idea of robotic cars that drive themselves is a good one, I think, and one whose time is rapidly coming. The New York Times reports that Google has been test-driving robotically controlled cars in the San Francisco area, including a drive down Lombard Street, a famously steep and winding street. The engineers are those who took honors at recent DARPA contests for creating vehicles able to self-navigate urban settings; these are the top people in the business:
During a half-hour drive beginning on Google’s campus 35 miles south of San Francisco last Wednesday, a Prius equipped with a variety of sensors and following a route programmed into the GPS navigation system nimbly accelerated in the entrance lane and merged into fast-moving traffic on Highway 101, the freeway through Silicon Valley. It drove at the speed limit, which it knew because the limit for every road is included in its database, and left the freeway several exits later. The device atop the car produced a detailed map of the environment.
The car then drove in city traffic through Mountain View, stopping for lights and stop signs, as well as making announcements like “approaching a crosswalk” (to warn the human at the wheel) or “turn ahead” in a pleasant female voice. This same pleasant voice would, engineers said, alert the driver if a master control system detected anything amiss with the various sensors.
The test drives have a human navigator in the car as well as an expert human driver at the wheel to take control if something goes wrong; the Times article says that assuming human control is no more difficult than disengaging cruise control. None of which I doubt at all. The article added, however, that Google had carefully examined the California vehicle code to determine that the experimental cars were legal to drive on the road:
But the advent of autonomous vehicles poses thorny legal issues, the Google researchers acknowledged. Under current law, a human must be in control of a car at all times, but what does that mean if the human is not really paying attention as the car crosses through, say, a school zone, figuring that the robot is driving more safely than he would? And in the event of an accident, who would be liable — the person behind the wheel or the maker of the software?
“The technology is ahead of the law in many areas,” said Bernard Lu, senior staff counsel for the California Department of Motor Vehicles. “If you look at the vehicle code, there are dozens of laws pertaining to the driver of a vehicle, and they all presume to have a human being operating the vehicle.”
The Google researchers said they had carefully examined California’s motor vehicle regulations and determined that because a human driver can override any error, the experimental cars are legal. Mr. Lu agreed.
I am particularly curious whether Google had, or perhaps ought to have in the future, an obligation to let the appropriate California authorities know that it was test-driving experimental cars when there is at least some question as to whether the configuration of a human poised to take over is safe and effective. Did it do so in advance? The article gives a California Department of Motor Vehicles counsel’s view, but did that view come before or after the fact? Is there some obligation to warn local law enforcement before undertaking something like this? And to be perfectly blunt, suppose it had been Toyota doing this – would the reaction have been quite so agreeable?
I am delighted to see this kind of technology moving forward, and definitely agree that the technology is ahead of the law in some of these areas. As a lawyer interested in robotics and the law, I do have some questions about the appropriate protocols for testing new technologies, and whether there are obligations to let the public, or the authorities, know in advance what one is doing. I have no doubt that Google carefully checked its legal position beforehand. Query whether this is exactly the right set of legal rules for testing new technologies that, in order to see whether they can function as intended, must be tested in, among, and with the public. I do not have a settled view on how these rules should work.
Update: An alert commenter points to an article at Jalopnik on the question of whether Google’s cars are legal on California roads; the article says that Google alerted local authorities. But my question remains: what should the rules be for deciding whether the car is safe to take into traffic among people like you and me? And who should be able to decide that?
In a certain sense, after all, what Google is doing is a form of human subject experimentation, of the kind that, were it a university, would likely at a minimum require discussion with a human subjects committee, informing the subjects, etc. Of course, one can say that this is not the proper analogy – and I would agree as a lawyer looking at California’s codes – but I would still ask the question, without having at this point a fixed answer: should it be? The nature of Google’s testing requires that it be done among the driving public, by assumption among people who don’t know there is a robot among them; should this be regarded as ethical, and should the legal rules allow for it in dealing with robots?
From a technology and society standpoint, the long-term gains from this kind of technology will presumably come once we mandate that all cars be driven by robots. Among other things, if things work as hoped, robotic driving will allow much, much smaller spacings between cars and higher road speeds, extracting greater efficiencies from roads and highways and helping manage congestion. Being able to read the Volokh Conspiracy on your way to work is vitally important, of course, but it is not the only potential gain of the new technologies.
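The claim about spacing and throughput can be made concrete with a back-of-the-envelope calculation. The following is purely my own illustration with assumed numbers (a simple uniform-flow model, a 2-second human following gap versus a hypothetical half-second robotic gap), not anything from the article:

```python
# Back-of-the-envelope sketch of lane throughput versus following gap.
# Model: flow (vehicles/hour) = speed / (car length + speed * time gap).
# All numbers here are illustrative assumptions, not measured data.

def lane_capacity(speed_mph, gap_seconds, car_length_ft=15.0):
    """Approximate vehicles per hour per lane at a given speed and
    time gap between vehicles, under a simple uniform-flow model."""
    speed_fps = speed_mph * 5280 / 3600           # convert mph to feet/second
    spacing_ft = car_length_ft + speed_fps * gap_seconds  # nose-to-nose spacing
    return speed_fps * 3600 / spacing_ft          # vehicles per hour

human = lane_capacity(65, gap_seconds=2.0)   # typical human "two-second rule"
robot = lane_capacity(65, gap_seconds=0.5)   # hypothetical robotic spacing

print(f"human-driven lane: ~{human:.0f} vehicles/hour")
print(f"robot-driven lane: ~{robot:.0f} vehicles/hour")
```

On these assumptions the robotic lane carries roughly three times the traffic of the human-driven lane at the same speed, which is the intuition behind the efficiency argument above.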
Update 2: Several of the comments suggest that the issue isn’t really a big one, since the “driver” that Google has placed behind the wheel is ultimately legally responsible for the operation of the vehicle under the law as it currently stands. I’m sure that Google has thoroughly researched this question as a matter of current law, and I don’t doubt the conclusion. Query once again, however, whether that would be a stable legal rule of liability for ordinary people whose vehicles come equipped with this technology: if something goes wrong, you are responsible, and then you turn around and sue the manufacturer?
But we’ve been down these paths before; it seems to me it would be hard, in some future where the technology is widespread and beyond the ability of a human operator to have much sense of whether the machine is properly operational, to maintain any very direct form of liability, whether as a matter of tort or of the criminal law of the vehicle code. People will simply conclude, with good reason, that it is unfair and inefficient to hold them responsible for the operation of technology whose proper functioning they have no clear way of verifying. The responsibility then moves backwards to the manufacturer and other points of expert contact, but it is hard not to conclude that, in terms of the actual operation of the vehicle, that point of accountability is lost. It might well be made up – I am pretty certain it would be – by safety and other gains from the widespread use of the technology, but I would not expect the current liability rules, in either tort or the criminal aspects of the vehicle code, to remain legally stable.
And the idea of the “system” acting as the “driver” when everyone is mandated to use the system – well, I assume at that point we shift entirely to a different arrangement for liability. The idea of the “system” as “driver” as “liable” is interesting, but I would assume it eventually morphs into some insurance system of liability and compensation.
(No more updates for me – I’m allowing myself to be distracted from writing about the operational and strategic uses of UAVs. Can I get a robotic system to write that for me?)