Robotics as Social and Legal Policy

I write a lot about drones and warfare, robotics on the battlefield and the legal questions it raises, but my interest in robotics is much broader than that.  It extends to the effort to build and utilize (see, I resisted the battlefield word “deploy”) robots in society, and particularly in day-to-day interactions.  Consider, for example, the use of robot technologies in the nursing profession and eldercare.  Such uses range from technologies more or less available now, such as machines that can take over the pickup and distribution of medicines in a nursing center using existing warehouse technologies, to technologies that don’t yet exist, such as general-purpose robots that can assist the elderly in their homes in multiple ways (walking assistance, carrying the groceries, and so on), as distinguished from single-purpose, Roomba-like appliances.

The New York Times has a good piece today in the Science section, by John Markoff, on the current level of achievement in robotics at tasks that you or I would find simple to master, such as folding laundry.  For a robot, folding laundry is really hard.  The difficulties are daunting in all three areas typically associated with robots: mechanics of movement, computational processing, and sensors.  Markoff is particularly good at describing the “brittleness” of robot behavior:

Today’s robots can often do one such task in limited circumstances, but researchers describe their skills as “brittle.” They fail if the tiniest change is introduced. Moreover, they must be reprogrammed in a cumbersome fashion to do something else.

Then there is the famous (if you follow robotics, anyway) YouTube video of a robot at UC Berkeley doing what appears to be a remarkable job of folding towels.

As the Times article explains, however, it is slightly less remarkable than it seems once you learn that the video runs at fifty times actual speed.  There are big debates among roboticists over the best way to conquer these problems, and over whether doing so requires a wholly different approach to robot learning.  A fun introduction to the whole area is Lee Gutkind’s 2006 account of the Carnegie Mellon Robotics Institute, Almost Human: Making Robots Think.

Carnegie Mellon was the site of a visit by President Obama a few weeks ago to announce a new robotics initiative.  But the real questions of legal and social policy for robotic development are much more basic than those raised in the President’s visit.  President Obama was mostly focused on manufacturing and the effort to use robotics to bring manufacturing jobs back to the US at a high value-added level; a not unimportant part of the visit was addressed to private sector labor union concerns.  Stanford Law School scholar Ryan Calo (one of the few scholars studying the intersection of law and robotics) has a new paper out on SSRN, Open Robotics, asking much more fundamental questions; it is summarized in a fine online essay here.  Calo describes the difference between “closed” and “open” robotics:

“Closed” robots resemble any contemporary appliance: They are designed to perform a set task. They run proprietary software and are no more amenable to casual tinkering than a dishwasher. The popular Roomba robotic vacuum cleaner and the first AIBO mechanical pet are closed in this sense. “Open” robots are just the opposite. By definition, they invite contribution. An open robot has no predetermined function, runs third-party or even open-source software, and can be physically altered and extended without compromising performance.
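
To make Calo’s distinction concrete, here is a minimal sketch, in Python, of what an “open” robot platform might look like in software.  Everything in it — the RobotPlatform class, the Skill interface, the FoldTowels example, and the method names — is a hypothetical illustration of the architecture, not any actual manufacturer’s API:

```python
from abc import ABC, abstractmethod


class Skill(ABC):
    """A third-party behavior loadable onto an open robot (hypothetical)."""

    @abstractmethod
    def run(self, robot: "RobotPlatform") -> None:
        """Drive the robot's primitive capabilities to accomplish some task."""


class RobotPlatform:
    """An 'open' robot in Calo's sense: no predetermined function, just
    primitive capabilities that loaded skills may compose however they like."""

    def move_arm(self, x: float, y: float) -> None:
        print(f"arm -> ({x}, {y})")

    def grip(self, force: float) -> None:
        print(f"grip at force {force}")

    def load(self, skill: Skill) -> None:
        # The platform maker cannot know in advance what a loaded skill
        # will make the hardware do -- the crux of the liability problem.
        skill.run(self)


class FoldTowels(Skill):
    """A third-party skill the platform maker never anticipated."""

    def run(self, robot: RobotPlatform) -> None:
        robot.move_arm(0.3, 0.1)
        robot.grip(0.5)


if __name__ == "__main__":
    RobotPlatform().load(FoldTowels())
```

A “closed” robot, by contrast, would hard-wire a single behavior into the device, with no load() hook at all; that hook is exactly where third-party code, and with it third-party fault, enters the system.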

“Open” systems are more valuable because they invite the development of new and differentiated uses built on pre-existing platforms.  Open robotics follows the same path as the development of personal computing: a machine able to run software created by third parties creates vastly more value, value which, in no small part, lies in what someone might program the machine to do.  So far so good, says Calo.  But then, enter the lawyers.

The trouble with open platforms is that they open the manufacturer up to a universe of potential lawsuits. If a robot is built to do anything, it can do something bad. If it can run any software, it can run buggy or malicious software. The next killer app could, well, kill someone.

Liability in a closed world is fairly straightforward. A Roomba is supposed to do one thing and do it safely. Should the Roomba cause an injury in the course of vacuuming the floor, then iRobot generally will be held liable as it built the hardware and wrote or licensed the software. If someone hacks the Roomba and uses it to reenact the video game Frogger on the streets of Austin (this really happened), then iRobot can argue product misuse.

But what about in an open world? Open robots have no intended use. The hardware, the operating system, and the individual software — any of which could be responsible for an accident — might each have a different author. Open source software could have many authors. But plaintiffs will always sue the deep pockets. And courts could well place the burden on the defendants to sort it out.

An obvious question is why this wasn’t an issue in personal computing and its open model.  The difference, Calo notes, is largely that when things went wrong in the computer world, the losses were intangible and digital, especially in the early years, before computers started running things like power grids, plants, and other real-world systems.  Robots, however, act directly in the gross physical world, and so the nature of the injuries is very different from the very beginning:

The damage caused by home computers is intangible. The only casualties are bits. Courts were able to invoke doctrines such as economic loss, which provides that, in the absence of physical injury, a contracting party may recover no more than the value of the contract. Where damage from software is physical, however, when the software can touch you, lawsuits can and do gain traction. Examples include plane crashes based on navigation errors, the delivery of excessive levels of radiation in medical tests, and “sudden acceleration”—a charge respecting which it took a team of NASA scientists ten months to clear Toyota software of fault.

Open robots combine, arguably for the first time, the versatility, complexity, and collaborative ecosystem of a PC with the potential for physical damage or injury. The same norms and legal expedients do not necessarily apply. In robotics no less than in the context of computers or the Internet, the possibility that providers of a platform will be sued for what users do with their products may lead many to reconsider investing in the technology. At a minimum, robotics companies will have an incentive to pursue the slow, manageable route of closing their technology.

To recap: Robots may well be the next big thing in technology. The best way to foster innovation and to grow the consumer robotics industry is through an open model. But open robots also open robotic platform manufacturers to the potential for crippling liability for what users do with those platforms. Where do we go from here?

Where indeed?  Calo provides some suggestions; I urge you to read both the short essay and the full SSRN article.  I also urge you to check out the work done at a Silicon Valley firm that figures in many of these technology development accounts, Willow Garage.  But I recall, a couple of years ago, discussing robotics with an economist-lawyer friend who, when I told him I thought the technology raised whole new legal questions (questions I was then thinking about in relation to battlefield technologies), shrugged it off: it’s just regular old tort law, nothing really new there.  In one sense that’s always true of these liability questions; in another sense, well, the answer determines the path we collectively select for our technological development: open or closed.
