The Debate About to Heat Up Over HRW’s Call to Ban “Killer Robots,” AKA Autonomous Weapon Systems

(And: thanks to Instapundit for linking to the new policy essay by Matthew Waxman and me from the Hoover Institution, referenced at the end of this post, Law and Ethics for Autonomous Weapon Systems – thanks Glenn!)

Last November, two documents appeared within a few days of each other, each addressing the emerging legal and policy issues of autonomous weapon systems – and taking strongly incompatible, indeed opposite, approaches.  One was from Human Rights Watch, whose report, Losing Humanity: The Case Against Killer Robots, made a sweeping, preemptive, provocative call for an international treaty ban on the use, production, and development of what it defined as “fully autonomous weapons” and dubbed “Killer Robots.”  Human Rights Watch has followed that up with a public campaign for signatures on a petition supporting a ban, as well as a number of publicity initiatives that (I think I can say pretty neutrally) seem as much drawn from sci-fi and pop culture as anything.  It plans to launch this global campaign at an event at the House of Commons in London later in April.

The other was the Department of Defense Directive, “Autonomy in Weapon Systems” (3000.09, November 21, 2012).  The Directive establishes DOD policy and “assigns responsibilities for the development and use of autonomous and semi-autonomous functions in weapon systems … [and] establishes guidelines designed to minimize the probability and consequences of failures in autonomous and semi-autonomous weapon systems.”

In contrast to the sweeping, preemptive treaty ban approach embraced by HRW, the DOD Directive calls for a review and regulatory process – in part an administrative expansion of the existing legal weapons review process within DOD, but reaching back to the very beginning of the research and development process.  In part it aims to ensure that whatever level of autonomy a weapon system might have, and in whatever component, the autonomous function is intentional and not inadvertent, and has been subjected to design, operational, and legal review to ensure that it both complies with the laws of war in the operational environment for which it is intended and will actually work in that environment as advertised.  (The DOD Directive is not very long and, if you are looking for an introduction to DOD’s conceptual approach, makes the most sense read against the background of a briefing paper issued earlier, in July 2012, by DOD’s Defense Science Board, The Role of Autonomy in DOD Systems.)

In essence, HRW seeks to ban autonomous weapon systems, rooting a ban on autonomous lethal targeting by machine per se in its interpretation of existing IHL, while calling for new affirmative treaty law specifically to codify it.  By contrast, DOD adopts a regulatory approach grounded in existing processes and the law of weapons and weapons reviews.  Michael Schmitt and Jeffrey Thurnher offer the basic legal position underlying DOD’s approach in a new article forthcoming in the Harvard National Security Journal, “‘Out of the Loop’: Autonomous Weapon Systems and the Law of Armed Conflict.”  They argue that autonomous weapon systems are not per se illegal under the law of weapons, and that their legality, or restrictions on their lawful use, in any particular operational environment depends upon the usual principles of targeting law.  There will be machine systems that will never be lawful for use in some operational environments, or even in any operational environment – but perhaps some that will.

II

I think Schmitt and Thurnher have it right as a legal matter – and quite clearly so – but there are important dissenting voices.  A different view is offered by the University of Miami’s Markus Wagner in, for example, “Autonomy in the Battlespace: Independently Operating Weapon Systems and the Law of Armed Conflict” (a chapter in International Humanitarian Law and the Changing Technology of War, 2012).  New School for Social Research professor Peter Asaro has offered a reading of Protocol I and other law of armed conflict treaties aiming to show that human beings are assumed in these texts to be present as moral agents engaged in targeting (forthcoming in a special section of the International Review of the Red Cross).  Asaro is careful to hold out only that this interpretation is implicit, rather than explicit – a thoughtful and creative reading, though ultimately not one that persuades the hard-hearted lex lata lawyer in me.  (Asaro is not a lawyer but a “philosopher of technology,” thus establishing himself as having the Coolest of Jobs, and also co-founder of an organization that has been calling for a ban for several years; Peter and I have cordially disagreed at several academic discussions, most recently at the outstanding WeRobot 2013 conference at Stanford Law School earlier this week.)

A debate over autonomous weapon systems is thus underway in academic law and policy – and in the Real World.  It promises to heat up considerably.  Much of the debate (as Peter’s and my exchange at the WeRobot 2013 conference suggests) goes to what one believes is the bedrock moral principle for targeting and weapons (and which, if true, ought to be embraced as law).  Is it per se immoral for a human being ever to be targeted autonomously by a machine that (as “full autonomy” is defined by DOD) has no human being “in” or “on” the loop, either in target selection or engagement with the target?  Is a human being essential to those two actions – target selection and target engagement – and is the absence of a human being fatal to the act’s morality, irrespective of how well or how badly the machine does at targeting only what it ought to and minimizing collateral harms?  Peter takes the position that the human being is essential; my position is that the bottom-level moral principle at issue here is not whether the agent is a human or not, but whether whatever does the targeting is able to comply with the requirements of the laws of war.  The “package” is simply an incident of nature, contingent, and not morally controlling.

Peter’s position, not mine, is the one taken by a number of very smart ethicists and philosophers, including, for example, Wendell Wallach, who describes a machine taking such a lethal decision as “mala in se.”  University of Sheffield computer science professor Noel Sharkey (the well-known public commentator on these issues, with whom I’ve had the pleasure of friendly disagreement before and no doubt will again) also takes this position, though he advances other arguments as well that are factual in nature.  But on this moral argument, the requirement of a human being is the end of the moral chain, so to speak.  I don’t agree with it, but I understand the arguments driving it.  HRW’s report, by contrast, launches into quite a different kind of argument, and a much more problematic one.  Though it appears to accept the buck-stopping moral position, it also – and mostly – argues strenuously for two factual claims.

The first is that, no matter how much time goes by, as a matter of fact, machine intelligence will never be adequate to the moral decision-making that lethal targeting requires.  To which, of course, the proper response is: fifty years?  A hundred years?  Two hundred years?  Maybe HRW is right.  But how does it know, and what gives a human rights monitor any special ability to see the future of technology – and to tell us what to ban and not ban today, in order to ensure that a future it purports to see does not come about?  Not all of us are quite as certain about where technology might go and what it might yield – and we are quite unwilling, on HRW’s say-so, to give up the possible future social gains (including reducing harm on the battlefield) that such technologies might produce along the way, merely because HRW foresees a future somewhere between a Philip K. Dick novel and Terminator.  (Or as a friend put it, knowing Ken co-blogs with Ilya, “So who sailed from the Grey Havens and gave HRW a palantir?” -ed.)

The second is that, no matter what technological developments take place, machines could never offer the affective and emotional qualities that targeting decisions in war do, and properly should, require on the battlefield – sympathy, empathy, compassion.  Again, this is a factual claim about the future of machine intelligence – a prediction extending into the future, forever – that leaves one to ask: how does HRW claim to know any such thing?  And it’s a particularly peculiar claim coming from a human rights monitor whose bread and butter in armed conflict reporting not infrequently involves things soldiers did on the battlefield because of fear, desire for vengeance, simple bad judgment from cold and hunger, and the limits of human cognition in the fog of war – a conspicuous, yet all-too-human, absence of empathy and compassion.  One wonders why HRW didn’t just as easily focus on those less praiseworthy human emotions and at least entertain the possibility that a machine that has no emotions either way – but which might be programmed to behave in ways that respect the humanity of non-combatants and, further, might be programmed simply to sacrifice itself in order to spare them – might, when all is said and done, be a very good thing.

III

In conversations with HRW, I’ve been told, and encouraged to note publicly, that it does not want its report and call for a ban to be understood in extreme ways.  I’m happy to do that, with one caveat.  So, for example, its call for a ban on the “development” of fully autonomous weapons does not mean everything one might read it to say.  It also appears to want to find a way not to be interpreted as declaring the future history of technology, though that appears more difficult, given the language of the report.  My (genuine) advice to HRW on this point (though not my view, of course) is to say that it is not predicting where technology will and won’t go as a matter of necessity.  Instead, it is saying that, in its judgment, it is overwhelmingly likely that all these bad scenarios would emerge over the long run – and that these scenarios are sufficiently bad to justify banning all these many things today.

In other words, I suppose HRW should have said: first, we invoke the Precautionary Principle, so if you want to introduce these technologies, you have to show that harms will not come from these new technologies in war.  (There are stronger and weaker versions of the Precautionary Principle, which I’m ignoring here.)  Second, continues HRW, our factual judgment is something close to maximum catastrophe with maximum likelihood.  Third, in any case, we think any gains anticipated are highly unlikely to come about, so the opportunity cost is actually near zero.  Hence a ban on all these things is justified – and this includes giving up all the gains that might also arise from the development of technologies of autonomy in terms of reducing harms on the battlefield.  The expected value of future gains cannot be set against the expected value of disaster, because the Precautionary Principle says, first, do no harm; in any event, the disaster in terms of humanity on the battlefield, we believe, will be much worse – more severe and more likely – than any foregone possible gains; and anyway, we don’t think there will be any gains.

That’s not what the report actually says, however – though my recommendation to HRW is that this would be a modestly more defensible position, or at least not a patently absurd one that claims to know the future.  It’s still wildly wrong, in my view (Cass Sunstein explains why in Worst-Case Scenarios), but not a flat-out conversation stopper.  Still, I welcome HRW looking for ways to walk back some of the more extreme things in its report.  But in order to do that for real, it needs to find some way to express those limits in writing, not simply in the way its spokespeople present the position.  Otherwise what stands as its position is not what its spokespeople have said, but what it actually calls for in its written document – and it merely looks like it is hedging its bets.  In a public advocacy campaign over the long term, what it says in writing matters.  Of course there is plenty of room for HRW to offer more specific statements on different parts of its report – and to qualify, interpret, limit, and walk back some things without having to say “walk back.”  But I don’t think any of that much matters unless it is in writing.

In a way, however, these arguments all miss the point of the actual discussion surrounding actual weapon systems.  In real life, DOD is not seeking to operationalize some sci-fi robot – it has no practical reason in the foreseeable future to want weapons that raise these deep existential questions.  It has plenty of real-world scary scenarios to worry about that drive it toward autonomous systems having little to do with targeting human beings.  DARPA might be funding various existentially cool things and exciting the neurons of Silicon Valley’s dopamine-driven techno-pagans – but it’s important not to confuse DOD and DARPA.  DOD’s actual weapon systems involving autonomy and automation today are much less sci-fi-thrilling – and much more necessary to military operations, unless one would like to see naval vessels, for example, sunk by swarms of high-speed missiles beyond the reaction time of human beings.  Autonomy for weapons today is driven mostly by machine-on-machine encounters, in which preservation of the naval vessel, or preventing Hamas missiles from reaching their targets in Israel, is the core issue.  The practical issues involve, for example, the chunks of intercepted missiles that might fall from the sky and injure civilians – is that harm excessive or not?  Not Killer Robots.

Some months before these two documents appeared, Matthew Waxman and I published a short policy paper in the journal Policy Review, “Law and Ethics for Robot Soldiers.”  It made note of arguments by those favoring a complete ban, but mostly focused on the United States (as well as other technologically advanced states; the US is far from the only country doing cutting-edge robotics, in weapons and many other things) and the possibility of developing weapon systems that might move from “automated” to “autonomous.”  That paper endorsed a regulatory approach to these weapon systems, embracing transparency of standards, best practices in weapons reviews, close interaction between lawyers and engineers from the beginning of weapon system design, and so on.  The Policy Review essay was devoted to setting out the problem for a lay audience without much prior knowledge, however, and was oriented toward policy and process questions: how DOD would formulate policy, conduct legal reviews, and deal with other states and their weapon development policies.  It was not primarily directed to arguments for or against a sweeping ban, since HRW had not yet launched its Killer Robots campaign.

IV

Since then, however, Matt and I have been … busy.  And we’re pleased to announce that the Hoover Institution has just published our new policy essay, Law and Ethics for Autonomous Weapon Systems: Why a Ban Won’t Work and How the Laws of War Can.  It revises and substantially extends our arguments on autonomous and automated robotic weapons, and shifts the focus to address the arguments for a ban more directly.  Though longer than our first essay, it is still not long (at some 12,000 words) and is intended to be readable by a general audience, not an academic one.  It is available at SSRN, here (and the same pdf at the Hoover Institution website, here).
