“Guilty” Battlefield Robots

One of my favorite issues of the New York Times Magazine is its “year in ideas” issue, which comes annually in December.  This year it has a short item by Dara Kerr, “Guilty Robots,” on battlefield robotics, law, and ethics. (If you go over to the Opinio Juris version of this post, you can pick up the “robots” tag to get all the blogging there on the international law of war and ethics and battlefield robots.)

[I]magine robots that obey injunctions like Immanuel Kant’s categorical imperative — acting rationally and with a sense of moral duty. This July, the roboticist Ronald Arkin of Georgia Tech finished a three-year project with the U.S. Army designing prototype software for autonomous ethical robots. He maintains that in limited situations, like countersniper operations or storming buildings, the software will actually allow robots to outperform humans from an ethical perspective.

“I believe these systems will have more information available to them than any human soldier could possibly process and manage at a given point in time and thus be able to make better informed decisions,” he says.

The software consists of what Arkin calls “ethical architecture,” which is based on international laws of war and rules of engagement.

The “guilty” part comes from a feature of Professor Arkin’s ethical architecture in which certain parameters cause the robot to become more “worried” as its calculations of collateral damage and other such factors rise.

After considering several moral emotions like remorse, compassion and shame, Arkin decided to focus on modeling guilt because it can be used to condemn specific behavior and generate constructive change. While fighting, his robots assess battlefield damage and then use algorithms to calculate the appropriate level of guilt. If the damage includes noncombatant casualties or harm to civilian property, for instance, their guilt level increases. As the level grows, the robots may choose weapons with less risk of collateral damage or may refuse to fight altogether.
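
To make the mechanism a bit more concrete, here is a minimal sketch, in Python, of the kind of guilt-accumulation logic the article describes: assessed harm to noncombatants and civilian property raises a running “guilt” value, which in turn restricts weapon choice or blocks engagement entirely. This is purely illustrative; the class names, weights, and thresholds are my own assumptions, not Professor Arkin’s actual architecture.

```python
from dataclasses import dataclass


@dataclass
class DamageReport:
    noncombatant_casualties: int
    civilian_property_hits: int


class EthicalGovernor:
    """Toy model: accumulate 'guilt' from battlefield damage and constrain weapon choice.

    All weights and thresholds below are invented placeholders, not Arkin's values.
    """

    RESTRICT_THRESHOLD = 5.0   # above this, prefer low-collateral options
    REFUSE_THRESHOLD = 10.0    # above this, decline to engage at all

    def __init__(self) -> None:
        self.guilt = 0.0

    def assess(self, report: DamageReport) -> None:
        # Guilt rises with harm to noncombatants and to civilian property.
        self.guilt += 2.0 * report.noncombatant_casualties
        self.guilt += 0.5 * report.civilian_property_hits

    def select_weapon(self, weapons: list[dict]) -> dict | None:
        """Return the permitted weapon option, or None to refuse engagement."""
        if self.guilt >= self.REFUSE_THRESHOLD:
            return None  # refuse to fight altogether
        if self.guilt >= self.RESTRICT_THRESHOLD:
            # Restrict to the option with the lowest expected collateral damage.
            return min(weapons, key=lambda w: w["expected_collateral"])
        # Otherwise choose the most effective option.
        return max(weapons, key=lambda w: w["effectiveness"])


if __name__ == "__main__":
    governor = EthicalGovernor()
    governor.assess(DamageReport(noncombatant_casualties=2, civilian_property_hits=3))
    weapons = [
        {"name": "precision", "expected_collateral": 0.1, "effectiveness": 0.6},
        {"name": "area", "expected_collateral": 0.8, "effectiveness": 0.9},
    ]
    # Guilt is now 5.5, so the low-collateral "precision" option is selected.
    print(governor.select_weapon(weapons))
```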

I am agnostic as to whether, at some point in the future, robots might prove to be ethically superior to humans in making decisions about firing weapons on the battlefield.  When I say agnostic, I mean genuinely agnostic – it seems to me an open question where technology goes, and in, say, a hundred years, who can say?  For one thing, I can fully imagine that roboticized medicine – surgery and operations – will very possibly have reached the point where it might well be presumptive malpractice for the human doctor to override the machine.  It is not impossible for me to imagine – far from it – a time in which it would be a presumptive war crime for the human soldier to override the ethical decisions of the machine.

But maybe not.  Although I am strongly in favor of the kinds of research programs that Professor Arkin is undertaking, I think the ethical and legal issues of warfare – whether the categorical rules or the proportionality rules – involve questions that humans have not managed to answer at the conceptual level.  Proportionality, and what it means to weigh up radically incommensurable goods – military necessity and harm to civilians, for example – is only the starting point in the law and ethics of war.  One reason I am excited by Professor Arkin’s attempt to perform these functions in machine terms, however, is that the detailed, step-by-step project forces us to think through difficult conceptual issues regarding human ethics at a granular level that we might otherwise skip over with some quick assumptions.  Programming does not allow one to do that quite so easily.

And it is open to Professor Arkin to reply to the concern that humans don’t have a fully articulated framework, even at the basic conceptual level, for the ethics of warfare – so how is a machine going to manage it? – along these lines: “Well, in order to develop a machine, I don’t actually have to address those questions or solve those problems.  The robot doesn’t have to have more ethical answers than you humans – it just has to be able to do as well, even with the gaps and holes.”  I’m not sure that answer (which I am putting into Professor Arkin’s mouth entirely hypothetically, let me emphasize) would be sufficient – partly because I suspect that intuitions applied casuistically by human beings often encode and respond to facts that affect our ethical senses in ways that would not really be articulable, by human or machine, and partly because we probably do think that, in various ways, the machine has to be better than the human.

Many readers will by now be familiar with Peter W. Singer’s widely noticed Wired for War.  But I would suggest following it up with Professor Arkin’s own new book, Governing Lethal Behavior in Autonomous Robots (particularly now that Amazon has dropped the price from $60 to $40).  (I should also add that this discussion is about battlefield robotics in the sense of “autonomous” firing systems – not the current robotics question of human-controlled, remote-platform unmanned combat vehicles, Predators and drones, and targeted killing.)

(Update:  Thanks, Instapundit!  Let me add a few things, looking to the comments.  First, I agree that “regret” is the correct term here, not “guilt.”  Guilt is the term used in the NYT article – but as moral emotions go, it is one of the most difficult conceptually to frame, and I say this as a student of one of the great philosophers on this topic, UCLA’s Herbert Morris.  Second, I agree that Professor Arkin does indeed mean “better” than human in a very limited, deliberately limited frame; to get a sense of the parameters he means, look at his excellent book.  Finally, for those concerned that without consciousness or intentionality on the part of a robot, no matter how sophisticated the programming, it isn’t really “morality,” I am happy to call it some form of moral simulacrum, because the issue, for these purposes, is how close the robot can come behaviorally via its programming.)
