Self-driving cars are receiving a lot of attention these days – partly because the technologies that make them possible are advancing, and partly because, well, we the public are more aware of them and are realizing there is quite a lot to discuss regarding their regulation and use. As those technologies move from science fiction to the hypothetical to the possible to the likely, the technological paths become sufficiently determinate that it makes sense to talk about the social, legal, and regulatory structures for their use.
Indeed, we are probably a little late in holding these discussions, because knowledge of the social and regulatory conditions can, and does, influence technological design – so generally, the earlier the better. A new and quite interesting debate at the Economist asks whether, and how soon, these cars will be ready for market (it’s not a debate over whether they are desirable, but instead whether they will be feasible in the foreseeable future). It’s striking that the pro side (holding that they will be, and sooner rather than later) essentially rests on technological feasibility, while the con side rests partly on skepticism about the technologies but very considerably on whether the social, economic, legal, and regulatory hurdles will have been overcome.
Self-driving cars are special for a couple of reasons. One is that they will (and already do) consist of a bundle of technologies – in one sense conceived in the usual robotics formulation of sensors, computation, and physical movement. But in the case of cars, it’s better understood as automation of the distinct systems of a car: acceleration, braking, steering, etc. These are being automated in separate systems, and combined together in the computer control of the total vehicle.
A second feature of autonomous cars arises from this “bundling.” Automation, leading eventually to genuinely autonomous driving, is coming about gradually, as these systems are introduced into new versions of vehicles. The process is gradual not only for automation of the vehicle as a whole, but also in that this bit of the bundle of technologies or that bit is introduced, rather than every piece at once. At the top end of the luxury car range, mostly, you can buy a car that incorporates more and more of these systems and capabilities. They are marketed not as self-driving – which would be both untrue at this stage and a huge flag for litigation – but as giving the human driver greater safety and convenience. The automation is somewhat like an advanced form of cruise control – still entirely in the hands of the human driver – except that it gradually becomes more sophisticated, as in the case, for example, of holding an approximate speed while maintaining a safe distance to the car ahead and slowing appropriately. Steering offers another example: the ability to parallel park, which is a headache in tight urban spaces for many people (me included), is amenable to machines because, at bottom, it is a matter of calculation and geometry, not interaction with other moving vehicles, provided the car has sufficiently good sensors and steering control to do what the computer can calculate.
These kinds of features can be introduced to lines of vehicles – presumably starting with the most expensive cars and gradually moving down the food chain – without any particular level of automation requiring that the car be autonomous. It’s gradual – incremental increases in the automation of separate systems until, finally, taken together, the car is able to drive itself. This final step of autonomy presumably requires complex integration of all these systems, as well as much additional programming for how certain decisions will be made by the vehicle, but the systems fundamental to the car moving as a physical vehicle will already have been automated by that point.
A third feature is that self-driving cars will be introduced as individual autonomous units into a more general ecosystem of cars that might also be autonomous but, more likely, at least at first, will be driven by human beings. This sharply distinguishes the combined social and technological model from centralized transportation systems that contemplate central computerized control of all vehicles. Many people – me included – would have imagined that self-driving cars could only work as part of a centralized grid controlling all vehicles at once, but the trajectory of the technologies involved is to try to find a way for autonomously driving vehicles to operate among human-driven cars. This requires certain capabilities of the technologies involved, of course, and it is not wholly certain that they will get there any time soon – to the point that one could go beyond Google’s test cars, which, note, operate on roads already mapped in enormous detail by Google engineers, to a general self-driving car capable of driving among human drivers. Still, it is remarkable how far Google’s cars have come, and how fast.
Given the speed with which the technologies are taking off, then, the social, economic, legal, and regulatory questions require answers. Automobiles are very special, after all – driving is a complex, highly structured social activity featuring many formal rules and standards as well as many informal ones. For most of us, it is the most trusting activity we engage in – trusting other drivers to behave as expected in a multitude of formal and informal ways – and this in a social space that is remarkable both for its “natural” features (the law of inertia comes to mind) and its entirely artificial and socially constructed features (red means stop). The answers are not yet there – not really. In one sense, all the necessary fields of law are in place – products liability, insurance law and markets, driving laws, etc. But even if there is no new “law of robots” to be introduced, the answers specific to these emerging technologies are not yet in place.
This was discussed in several very interesting sessions at the (fabulous – thank you, Ryan Calo and Michael Froomkin among others!) We Robot 2013 conference a few weeks ago at Stanford Law School (there is video as well as the draft papers presented at the link). Bryant Walker Smith, an automotive engineer and lawyer (and fellow at Stanford Law School’s Center for Internet and Society), and Josh Blackman, a law professor at South Texas College of Law, walked through some of the issues related to vehicle and driving codes. Smith released last November, through CIS, a terrific report on the driving laws of the 50 states plus international driving law (who knew there’s a Geneva Convention on driving law, aimed at standardizing some basic things like red and green?) that shows both that there is probably room for self-driving vehicles under the laws of many states and that there are equally many ambiguities and questions – any of which might lead to legal headaches if litigated in accidents. The liability issue is under discussion in many quarters, of course, such as the special 2012 law review symposium of the Santa Clara University School of Law (which, given its location, has carved out an important role in these emerging technologies) on legal issues related to driverless cars.
One important feature of the discussion over liability and insurance bears mentioning. As writers such as Megan McArdle have correctly noted, the practical outcome in the existing tort system will likely be “functional” strict liability for the machine’s manufacturer or programmer or both in case of accidents. This was noted as well at the We Robot 2013 conference. The resulting liability awards might be sufficient to deter the technology – even if the overall effect of many people using self-driving cars would be much safer roads. This worries Google; one engineer remarked at We Robot that Detroit car companies might have introduced various parts of these self-driving technologies, but thought about the liability issues and thought again. A recent article in the Economist points to several possible solutions – each of which, however, essentially shifts liability off the manufacturer or programmer of the vehicle and raises serious questions about who bears the costs and benefits of driverless cars, and what would be efficient or fair:
A study in 2009 of the legal risks of increasingly autonomous cars by the RAND Corporation, a research body, suggested two possible solutions: changing the liability laws to require courts to take the benefits of driverless technology into account when punishing carmakers for any failings; and limiting motorists’ ability to sue in state courts when driverless technology mandated by federal laws fails to prevent an accident.