One of my favorite issues of the New York Times Magazine is its “year in ideas” issue, which appears each December. This year it includes a short item by Dara Kerr, “Guilty Robots,” on battlefield robotics, law, and ethics. (If you go over to the Opinio Juris version of this post, you can pick up the “robots” tag to get all the blogging there on the international law of war, ethics, and battlefield robots.)
[I]magine robots that obey injunctions like Immanuel Kant’s categorical imperative — acting rationally and with a sense of moral duty. This July, the roboticist Ronald Arkin of Georgia Tech finished a three-year project with the U.S. Army designing prototype software for autonomous ethical robots. He maintains that in limited situations, like countersniper operations or storming buildings, the software will actually allow robots to outperform humans from an ethical perspective.
“I believe these systems will have more information available to them than any human soldier could possibly process and manage at a given point in time and thus be able to make better informed decisions,” he says.
The software consists of what Arkin calls “ethical architecture,” which is based on international laws of war and rules of engagement.
The “guilty” part comes from a feature of Professor Arkin’s ethical architecture in which certain parameters cause the robot to become more “worried” as its calculated levels of collateral damage and similar factors rise.
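To make the idea concrete, here is a minimal sketch of how such a guilt parameter might work, assuming a simple accumulator model in which guilt rises with estimated harm and, past a threshold, suppresses further lethal engagement. The class name, weights, and threshold below are illustrative assumptions, not Arkin’s actual software.

```python
# Hypothetical sketch of a guilt-based constraint on lethal action.
# Guilt accumulates with assessed battlefield damage; once it crosses
# a threshold, the system refuses further lethal engagement.
# All names and numeric weights here are invented for illustration.

class EthicalAdaptor:
    GUILT_THRESHOLD = 1.0  # assumed cutoff above which lethal force is barred

    def __init__(self):
        self.guilt = 0.0

    def record_damage(self, noncombatant_casualties, civilian_property_hits):
        # Weight harm to noncombatants more heavily than property damage.
        self.guilt += 0.5 * noncombatant_casualties
        self.guilt += 0.1 * civilian_property_hits

    def lethal_force_permitted(self):
        # Below the threshold, engagement remains permitted; above it,
        # the adaptor vetoes further lethal action.
        return self.guilt < self.GUILT_THRESHOLD

adaptor = EthicalAdaptor()
adaptor.record_damage(noncombatant_casualties=0, civilian_property_hits=2)
print(adaptor.lethal_force_permitted())  # still permitted: guilt = 0.2
adaptor.record_damage(noncombatant_casualties=2, civilian_property_hits=0)
print(adaptor.lethal_force_permitted())  # now vetoed: guilt = 1.2
```

The design choice worth noting is that guilt here only ratchets upward and only restricts behavior; it never authorizes force it would not otherwise permit, which matches the role described in the article of condemning specific behavior rather than rewarding it.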
After considering several moral emotions like remorse, compassion and shame, Arkin decided to focus on modeling guilt because it can be used to condemn specific behavior and generate constructive change. While fighting, his robots assess battlefield damage and then use algorithms to calculate the appropriate level of guilt. If the damage includes noncombatant casualties or harm to civilian property, for instance, their guilt