Robots, Law and Society

I mentioned earlier a panel discussion I participated in on robots, law and society at Stanford Law School a couple of weeks ago.  Adam Gorlick at Physorg.com has a good article summing up the discussion.

Ryan Calo, of Stanford Law School’s Center for Internet and Society, raises some of the fundamental liability issues and the implications for research and development – and I fully share his concerns, and more.  (You can read an early draft of one of his academic law papers on privacy at SSRN.)  What happens, for example, if the robot fails to call 911 as programmed?

[W]ho will be to blame if a robot-controlled weapon kills a civilian? Who can be sued if one of those new cars takes an unexpected turn into a crowd of pedestrians? And who is liable if the robot you programmed to bathe your elderly mother drowns her in the tub?

As mechanical engineers and computer scientists at Stanford develop technology that will transform the stuff of science fiction into everyday machinery, scholars at the Law School are thinking about the legal challenges that will arise.

“I worry that in the absence of some good, up-front thought about the question of liability, we’ll have some high-profile cases that will turn the public against robots or chill innovation and make it less likely for engineers to go into the field and less likely for capital to flow in the area,” said M. Ryan Calo, a residential fellow at the Law School’s Center for Internet and Society.

And the consequence of a flood of lawsuits, he said, is that the United States will fall behind other countries – like Japan and others – that are also at the forefront of robotics technology, a field that some analysts expect to exceed $5 billion in annual sales by 2015.

“We’re going to need to think about how to immunize manufacturers from lawsuits in appropriate circumstances,” Calo said, adding that defense contractors are usually shielded from liability when the robots and machines they make for the military accidentally injure a soldier.

One feature of this discussion, as well as of a discussion at Harvard Law School last week, was the question of what makes robots different as a matter of technology.  A decent rule of thumb is that robots feature:

  • mobility or locomotion, or at least some mechanism for physically interacting with the world at the ‘gross motor’ level (yes, one can talk of computers interacting with the world by running the phone system or something, but robot technologies are special in part because the machine itself physically takes motor actions);
  • sensors that allow the robot to take in signals and streams of input from the outside world; and
  • computational power (specifically, computation that allows for learning of some kind, adapting the other two features of physical action and sensing) – a minimal sense/compute/act loop is sketched just below this list.
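
To make those three rules of thumb concrete, here is a minimal sense/compute/act loop in Python.  It is only an illustrative sketch – every name in it (read_distance_sensor, drive, Robot) is a hypothetical stand-in rather than any real robotics API – but it puts the three ingredients from the list above into a single program: sensor input, computation that adapts over time, and a motor action taken in the physical world.

    # A toy sense/compute/act loop. Every name here is hypothetical;
    # this is not any particular robotics framework or API.
    import random

    def read_distance_sensor():
        # Sensor stand-in: distance to the nearest obstacle, in meters.
        return random.uniform(0.0, 5.0)

    def drive(speed):
        # Actuator stand-in: command a forward speed, in meters per second.
        print(f"driving at {speed:.2f} m/s")

    class Robot:
        def __init__(self):
            self.caution = 1.0  # a single "learned" parameter

        def step(self):
            distance = read_distance_sensor()           # sense the world
            speed = max(0.0, distance - self.caution)   # compute a decision
            drive(speed)                                # act on the world
            if distance < 0.5:
                # Crude adaptation: become more cautious after close calls.
                self.caution = min(2.0, self.caution + 0.1)

    if __name__ == "__main__":
        robot = Robot()
        for _ in range(10):
            robot.step()

The sketch’s only point is that all three features appear in one loop – which is also why questions of liability attach differently to robots than to a program that merely computes: the last step acts on the world.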

I don’t want to get doctrinal or theological here about the Essential Nature of Robots; these are just rules of thumb.  But they figure in how one thinks of robots as loosely distinguished from other technologies – and another thing noted in one of these discussions, and not a surprise, is that a larger proportion of the people who work on robots come from mechanical engineering, rather than computer science, than is the case for computing or internet technologies.

(Update:  One of the comments suggests that Asimov already addressed these questions fifty years ago, albeit speculatively.  It’s an interesting sideline question.  In my possibly fallible memory of the Robot stories, Asimov addressed many questions on the assumption of the three laws; the stories turned on interpretations of those laws.  I often thought the most interesting aspect of the Robot stories was the construction of the different societies built around robotics technology – Earth, crowded with humans and robot-phobic, at one extreme, and Solaria, nearly empty of humans and filled with robots, at the other.  It wasn’t the robots that were interesting – it was the comparison of the two societies.  But I also think that the assumptions Asimov built into his robot stories in order to give them – as with all good fantasy, speculative fiction, fairy tales, ghost stories, and horror – a structure with its own internal set of rules that keeps us interested in the narrative also meant that his stories did not address many of the things referenced in the post above.  What do you think?  Did Asimov cover the bases, and the rest is simply inflection; or am I right in thinking that there are vast areas quite untouched?)
