(Updated) The Incremental Progress of Self-Driving Cars and Current Safety Systems

I'm continuing my series of posts on automated vehicles (the last one offered some initial thoughts on comparisons between self-driving cars and autonomous weapon systems). Today I want to recommend this January 12, 2013 NYT story, by John Markoff and Somini Sengupta, on the current state of safety systems for cars in the incremental advance toward fully automated and, finally, self-driving vehicles. Plus, to understand the regulatory and legal context in which this transformation necessarily takes place, I also highly recommend the new report by Bryant Walker Smith (Stanford Center for Internet and Society) on the legality of self-driving cars in the US. It provides a useful basis for comparing the ways in which vehicle codes will gradually have to take account of these evolving technologies.

New York State, for example, requires in its vehicle code that drivers keep one hand on the steering wheel at all times; that obviously won't be compatible with the emergence of self-driving cars. Even Nevada (a state that has positioned itself ahead of the curve by adopting a self-driving car provision) requires that the car have a human driver who is responsible and able to take over the driving. Texting while the car drives itself is okay, in other words, but getting into the vehicle drunk and telling it to drive you home is not, because you would not be able to drive if necessary. Yet the technology will presumably alter that, and vehicle codes will adapt as it improves, given that a core purpose of self-driving vehicles is to drive people who are incapacitated, whether by alcohol or, more importantly, by age. After all, Google is betting its self-driving cars on a market among elderly baby boomers who can't (or shouldn't) be driving.

Which goes to illustrate that a key focus and market for robotics in America will be, one way or another, the elderly. It isn't necessarily about Robbie the Robot (or Awesom-O, if you prefer), your robot friend and servant. As Gary Marcus points out in another useful New Yorker column, "Why Making Robots Is So Darn Hard," there are important reasons why personal or genuinely useful consumer robotics is so much harder than, say, robots on the industrial factory floor. But robots will increasingly feature further back in the supply chain of, for example, elder care. Amazon's genius in no small part consists of convincing aging boomers like me that we're cool and hip and not old at all because … we order all our stuff online and it arrives like magic at the front door; we don't stop to tell ourselves that, both by inclination and capacity, we're less and less interested in going out. But Amazon will increasingly automate its warehouses and fulfillment processes, eventually the aircraft delivering at least some of those goods will become remotely piloted, and finally the delivery trucks will automate too, though it's an open question how the goods will be dropped at your door as you and I and our fellow boomers struggle with our canes and walkers (we can hope that automation improves those as well).

As Matthew Waxman and I have argued in a different context from vehicles (automated weapon systems, in the latest Policy Review), changes that increase automation (to the point of the machine making decisions and executing them without human intervention) in self-driving vehicles (and, for that matter, weapons, but that will be the topic of my next post) are coming incrementally. That would not be true of some technological systems, where the change-over has to be a genuine paradigm shift of the whole system. Many believed this would be true of cars, for example; I certainly did. A technological change to self-driving vehicles was widely presumed to require a centralized computer network to control all the cars in the system. Yet in this case it is not turning out that way, because the sensors and other automated technologies can be applied (and sold mostly as safety features) car by car, and they are able to cope with non-self-driving cars and other hazards that are not corralled within a single controlled system. The Times article addresses exactly this point:

The systems offer auditory, visual and mechanical warnings if a collision is imminent — and increasingly, if needed, take evasive actions automatically. By the middle of this decade, under certain conditions, they will take over the task of driving completely at both high and low speeds.

But the new systems are poised to refashion the nature of driving fundamentally long before completely autonomous vehicles arrive. “This is really a bridge,” said Ragunathan Rajkumar, a computer science professor who is leading a Carnegie Mellon University automated driving research project partly financed by General Motors. “The driver is still in control. But if the driver is not doing the right thing, the technology takes over.”

Although drivers — at least for now — remain responsible for their vehicles, a host of related legal and insurance issues have already arisen, and researchers are opening a new line of study about how humans interact with the automatic systems.
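To make the escalation the Times describes slightly more concrete, here is a minimal sketch, in Python, of how such a system might arbitrate between warning the driver and braking automatically. The time-to-collision thresholds, the function names, and the check on whether the driver is already braking are my own illustrative assumptions, not details from the article or from any production system.

```python
# Hypothetical sketch of a graduated collision-avoidance response: warn first,
# then brake automatically if time-to-collision keeps shrinking and the driver
# is not reacting. Thresholds are illustrative assumptions, not real calibrations.

def time_to_collision(gap_m: float, closing_speed_mps: float) -> float:
    """Seconds until impact at the current closing speed; infinite if the gap is opening."""
    if closing_speed_mps <= 0:
        return float("inf")
    return gap_m / closing_speed_mps

def collision_response(gap_m: float, closing_speed_mps: float, driver_braking: bool) -> str:
    """Return the assistance level: 'none', 'warn', or 'auto_brake'."""
    ttc = time_to_collision(gap_m, closing_speed_mps)
    if ttc > 3.0:                       # comfortable margin: stay quiet
        return "none"
    if ttc > 1.5 or driver_braking:     # driver still has time, or is already reacting: warn only
        return "warn"
    return "auto_brake"                 # imminent collision and no driver response: take over

if __name__ == "__main__":
    # Closing at 20 m/s on a car 25 m ahead with the driver not braking -> 'auto_brake'.
    print(collision_response(gap_m=25.0, closing_speed_mps=20.0, driver_braking=False))
```

The point of the sketch is the ordering of the logic, not the numbers: the driver stays in the loop until the system concludes the driver is not doing the right thing.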

I agree that there will be an important field of study concerning human-robot interaction (calling Susan Calvin, etc.). Some of it will be at the level of general human behavior, in the sense of the Uncanny Valley and that kind of psychological study of human beings. But much of it will be at the level of highly particular technological systems dealing with such apparently mundane, un-Susan-Calvin behavior as how humans brake, accelerate, and struggle to estimate distances and angles in parallel parking, and what kinds of machine systems can address those limitations. A lot of it won't be psychology as such, but simply the machine consequences of the fact that automating some parts of the system ("platoons" of self-driving cars on the highway, for example, operating at distances and speeds that would be radically unsafe, if not impossible, for human drivers) will put the decision making and its execution beyond human capability.
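To give a flavor of that last point, here is a minimal sketch of a platoon gap-keeping rule: the following car holds a fixed time headway far shorter than a human reaction time by continuously adjusting its acceleration from the measured gap and relative speed. The half-second headway, the gains, and the function name are illustrative assumptions on my part, not parameters from any real platooning system.

```python
# Hypothetical constant-time-headway controller for a following car in a platoon.
# It commands an acceleration that closes the error between the actual gap and a
# desired gap of (standstill gap + headway * own speed). All numbers are illustrative.

def platoon_acceleration(gap_m: float, own_speed: float, lead_speed: float,
                         headway_s: float = 0.5, standstill_gap_m: float = 2.0,
                         k_gap: float = 0.4, k_speed: float = 0.8) -> float:
    """Commanded acceleration (m/s^2) to hold the desired time-headway gap."""
    desired_gap = standstill_gap_m + headway_s * own_speed
    gap_error = gap_m - desired_gap        # positive: too far back, so close the gap
    speed_error = lead_speed - own_speed   # positive: lead car is pulling away
    return k_gap * gap_error + k_speed * speed_error

if __name__ == "__main__":
    # At 30 m/s (about 108 km/h) with a 17 m gap and matched speeds, the controller
    # holds station; a human driver would want roughly a 2-second (60 m) gap.
    print(platoon_acceleration(gap_m=17.0, own_speed=30.0, lead_speed=30.0))
```

A machine can run this loop many times a second and react within a fraction of a human's reaction time, which is exactly why the resulting spacing is beyond what a human driver could safely maintain.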

We are still a ways from that, however. To get a sense of the incremental changes arriving today, consider this sidebar list from the Times article, which names the kinds of driving tasks ripe for automation and the automated safety systems already available or coming soon:

Already in some cars:

Antilock brakes
Electronic stability control
Lane keeping
Lane departure warning
Pedestrian detection
Driver fatigue/distraction alert
Cruise control/adaptive cruise control
Forward collision avoidance
Automatic braking
Automated parking
Adaptive headlights
Traffic sign detection

Coming soon:

Traffic jam assistance
Super cruise control
Night assistance thermal imaging
V2X communications
Intersection assistance
Traffic light detection

(In my next post, I return to the comparison of self-driving cars and automated weapon systems, and raise the idea of an "Ethical Turing Test" for evaluating the ethical behavior of human beings and machines.)
