At almost the same moment that Human Rights Watch and the Harvard Law School International Human Rights Clinic released their report, “Losing Humanity: The Case Against Killer Robots,” which calls for a treaty prohibiting the “development, production, and use” of “fully autonomous weapons,” the Pentagon issued, under Deputy Defense Secretary Ashton Carter’s signature, a DOD Directive on “Autonomy in Weapon Systems.” The Directive mandates review of the autonomy and automation features of rapidly proliferating “automating” military systems – from the very beginning of design, through development and later evolution – to ensure compliance with the laws of war and, more broadly, to ensure that both design and operational practice in the field maintain “appropriate” levels of human control over any use of weapons. Matthew Waxman and I discussed the HRW report at Lawfare; Spencer Ackerman of Wired’s Danger Room discusses the HRW report, the DOD Directive, and the approach Matt and I take in our “Law and Ethics for Robot Soldiers”; and Benjamin Wittes at Lawfare excerpts some important chunks of the DOD Directive.
Ackerman says of the DOD Directive that the “Pentagon wants to make sure that there isn’t a circumstance when one of the military’s many Predators, Reapers, drone-like missiles or other deadly robots effectively automatizes the decision to harm a human being.” The Directive seeks to “‘minimize the probability and consequences of failures’ in autonomous or semi-autonomous armed robots ‘that could lead to unintended engagements,’ starting at the design stage.” Its solution – unlike HRW’s call for what its summary terms an “absolute ban” – is constant review of the military system, from the inception of design forward (unintended effects on weapons systems might arise from changes to non-weapons systems, after all). The DOD Directive is intended to be flexible in application and to apply to all military systems, so it relies on a general standard of “appropriate” levels of human control over the system at issue, without specifying in advance what that standard will require in each case.
Ackerman adds that Matt Waxman and I should be pleased with the Directive’s approach – and we are; we think the Directive, unlike the HRW report, embraces the better approach. Our “Law and Ethics for Robot Soldiers” is premised on the observation, Ackerman notes, that technological advancement in
robotic weapons autonomy is far from predictable, and the definition of “autonomy” is murky enough to make it unwise to tell the world that it has to curtail those advancements at an arbitrary point. Better, they write, for the U.S. to start an international conversation about how much autonomy on a killer robot is appropriate, so as to “embed evolving internal state standards into incrementally advancing automation.”
Waxman and Anderson should be pleased with Carter’s memo, since those standards are exactly what Carter wants the Pentagon to bake into its next drone arsenal. Before the Pentagon agrees to develop or buy new autonomous or somewhat autonomous weapons, a team of senior Pentagon officials and military officers will have to certify that the design itself “incorporates the necessary capabilities to allow commanders and operators to exercise appropriate levels of human judgment in the use of force.” The machines and their software need to provide reliability assurances and failsafes to make sure that’s how they work in practice, too. And anyone operating any such deadly robot needs sufficient certification in both the system they’re using and the rule of law. The phrase “appropriate levels of human judgment” is frequently repeated, to make sure everyone gets the idea. (Now for the lawyers to argue about the meaning of “appropriate.”)
HRW could say that this is what its report calls for, in a sense, since the report too tries to build a notion of incremental reviews into what a treaty should mandate under a general ban. But the purpose of those reviews, in HRW’s proposal, seems to be to indicate when the absolute ban on “development” of autonomous weapons systems is triggered. The HRW report is not, to my reading at least, completely clear on what “development” means in the context of incremental reviews, or in the context of what the report itself calls an absolute ban. It seems to be trying to mix absolute apples with incremental oranges.
The role of incremental reviews in the Directive, by contrast, is not to determine whether some point triggering an absolute ban has been reached, but instead to determine whether the technological system, at that point in its development, preserves the “appropriate” amount of human control – and, in the case of a system still being designed and developed, whether it will continue to do so as development proceeds toward a final system that must be legally evaluated for deployment. This is a quite distinct meaning of “reviews”; it’s certainly not an absolute ban on “development” of systems that, in a world of murky, incremental technological progress, might be closing in on human-less autonomy but might not. It’s flexible as applied to incrementally advancing technology, not absolute. It’s also worth pointing out that while there is a fundamental legal standard at issue here – the requirement of legal review of weapons for compliance with the laws of war – most of this is really policy aimed at implementing law at the front end, particularly with regard to the incremental, and in some cases incidental-but-dangerous, progression of systems that are still at the design stage.
The Directive also takes up the complex policy issues of ensuring that the human beings operating these systems have the training and knowledge to avoid two inappropriate extremes: leaving the machine to decide things that the design in fact anticipates will be human decisions, or entering into an operational process in which human intervention in the machine’s functions will bring about, or is likely to bring about, serious errors and collateral damage. It makes sense as a matter of operational policy to consider the features of the machine in conjunction with how humans actually operate it and can be trained to operate it.
At the end of the day, I think the DOD Directive’s approach will prevail – indeed, I don’t think it has serious competition. For one thing, it doesn’t attempt to prejudge how technology will change or what its future capabilities and limits will be. What the US does will set the tone, at least for the limited set of countries that both engage in serious weapons design and development and care about serious legal review of weapons. Matt Waxman and I will have more to say about each of these documents and their respective approaches over the next while.