I’ve written in the past about debates over autonomous weapon systems – their legality and morality, ways to review proposed or emerging systems to ensure their conformity to the laws of war, and the general problem of how to regulate weapons technologies that promise enormous benefits but also carry significant risks of harm while still in the process of gradual development. (There’s an important broader discussion about the regulation of future and emerging technologies generally, such as the law and ethics of Google’s driverless cars, but here we focus on weapon systems.) Since I last posted, there have been some important new interventions in these debates.
As it happens, Human Rights Watch and the Harvard International Human Rights Clinic launched their report on autonomous weapon systems, “Losing Humanity: The Case Against Killer Robots,” the same weekend that the Defense Department issued a DOD Directive, “Autonomy in Weapon Systems.” The two are very different in aim and in their understanding of what needs to be regulated and why.
The HRW report is best understood not as a “neutral” report, but as a quasi-brief intended to justify a sweeping call for a preemptive, prohibitory multilateral treaty that would ban the “development, production, and use” of autonomous weapon systems. (It uses a particular definition of “autonomous,” as in “fully autonomous weapon,” directed to the question of where, if at all, a human being is situated in the firing loop – a usage different from how the term is often used elsewhere. Much of what the report proposes to ban would count as “highly automated” rather than “autonomous,” but its general point is to ban systems that take humans out of the firing loop in various ways, so as to create, or lead to the development of, a “fully autonomous weapon.”) From an international-politics perspective, if this seems like an effort to recreate the international campaign to ban landmines from the 1990s, well, it is, at least to judge by its reception in the international NGO community.
It would be hard to overstate the sweeping nature of its call for a ban; it is nothing if not comprehensive. Most notably, the ban would cover not just the production or use of what the report defines as “fully autonomous weapons,” but also their “development.” It is to be implemented in part through “reviews of technologies and components that could lead to fully autonomous weapons.” These reviews, which apparently would lead to legal decisions about whether the ban applies to particular “technologies or components” (because they “could lead to fully autonomous weapons”), are to take place at the “very beginning of the development process and continue throughout” development. The report is unclear as to whether the “technologies or components” to be reviewed – and thus perhaps banned – must arise in the context of weapons development specifically or might arise in other kinds of robotics activities.
The DOD Directive, for its part, calls for integrated review of weapon systems as they acquire more automated features, along with measures such as the training of DOD personnel, to ensure that humans retain the “appropriate” level and kind of role for each system and its use (including the interactions between systems). Depending on how you see it, that “appropriate” either makes the Directive admirably flexible enough to cover the many different systems and activities across DOD while still imposing a real requirement, or else is just a weasel-word that can mean anything to anybody. The devil is in the details. Even those who do not share DOD’s basic approach, however, would probably appreciate its attempt to draw together not just legal reviews in the formal sense, but also the informal activities around them: training, understanding how systems might overlap and interact in intended and unintended ways, and recognizing what personnel can get right – but also get wrong – in actually using systems that might not be intended to have much autonomy at all.
One thing that’s clear is that the HRW report and the DOD Directive are headed in very different directions – and that is something of an understatement. Each talks about “reviews” of technology from the beginning of the development process to the end; the similarity mostly ends there, however. DOD’s review is meant to ensure that “appropriate” human control is maintained across systems that are understood to be gradually embracing automation as part of technological advancement. HRW’s reviews, by contrast, apparently aim to identify as early as possible technologies or components that “could lead to fully autonomous weapons” in order to determine whether, as a consequence, they are subject to the ban on “development” under the treaty regime that HRW urges.
After the HRW report appeared, the Naval War College’s Michael N. Schmitt posted to SSRN a short response, “Autonomous Weapon Systems and International Humanitarian Law: A Reply to the Critics.” Mike frames his critique very much from inside the doctrinal terms of the law of armed conflict – arguing that international humanitarian law does not prohibit autonomous weapons or weapon systems as a category of weapon, nor does it prohibit their use as such (other requirements of law being met in specific circumstances). He also walks through the elaborate process by which DOD reviews weapons for their legality, as to both the weapon itself and its use, under customary law-of-war obligations codified at Article 36 of 1977 Additional Protocol I (the US is not a party to API, but accepts this provision as restating customary law).
By contrast, Matthew Waxman’s and my critique at Lawfare, as well as Ben Wittes’ separate critical post (and John Dehn’s comment), were much more about policy and, especially, questioned the many factual premises of the HRW report. These run from HRW’s confidence that it can predict the empirical outcomes of technology over a long run of time, to its remarkably self-assured factual assertions about the superiority of human emotions – empathy over fear – in controlling the targeting and firing of weapons.
Ultimately, however, Mike, Ben, Matt, and I come to the same general conclusion – HRW’s preemptive ban call is not likely to gain much traction and, in our various separate ways, each of us thinks it shouldn’t: it is wrong in principle and, in any case, this short, factually speculative report simply can’t support the sweeping, categorical calls it makes. Perhaps it will galvanize the like-minded community of international NGOs to try to rerun the landmines campaign, but (speaking for myself, not for Matt or others) I don’t think the report will be seen as persuasive by many outside HRW’s epistemic community. Be that as it may, Tom Malinowski has responded on behalf of HRW to Matt and me, and separately to Ben, at Lawfare. Meanwhile, the final published version of Matt Waxman’s and my “Law and Ethics for Robot Soldiers,” of which we had posted a working version with footnotes at SSRN, appeared in the new issue of Policy Review. Finally, because of the arguments over definitions of autonomy and automation, I recommend a highly useful 2013 article by William Marra and Sonia McNeil, “Understanding ‘The Loop’: Regulating the Next Generation of War Machines,” 36 Harvard Journal of Law and Public Policy 3 (2013), which also appeared as a working paper in the Lawfare Research Paper Series (1-2012).
To this list let’s add a December 3, 2012 article in the Guardian by the prominent artificial intelligence scientist Noel Sharkey, who has been pressing for just such an international ban campaign for years and who has served as something of the intellectual inspiration and adviser behind HRW’s embrace of the ban treaty agenda. I mostly don’t share Professor Sharkey’s views (with some I disagree on principle; on others, such as the factual future of the technology, I’m agnostic, but unwilling to give up the possible benefits and certainly not sympathetic to HRW’s ban proposals). But he is the most persuasive voice for the ban campaign (as well as a model of civility in debating it, which is no small thing), and I’m much looking forward to meeting him at a January conference on artificial agents at the University of Virginia.
I think there’s more that could be said by way of critique of the HRW report, distinct from Mike Schmitt’s legal critique – questions about how HRW purports to know all the many things this report says it knows: about the future, about human nature and emotion, about so many things. And questions about why, given the potentially large humanitarian opportunity costs of forgoing technologies that might make war less destructive should HRW’s factual judgments turn out to be wrong, HRW is entitled to any moral authority or expert deference from the rest of us on these essentially factual predictions. No one doubts HRW’s moral seriousness and expertise in many areas, but I’m quite certain I’m not alone in wondering what it is about being a human rights organization that makes it reliable, or an authority, in predicting the empirical future of technology over the next several generations. In the end, though, I don’t think the HRW report is at the center of the real policy debate. It has essentially made its call as to the right analysis and the right policy; what happens afterwards is politics of one kind or another to try to make it happen. If HRW can make itself politically central to the determination of policy, then its views will matter; otherwise, not so much.
That is to say, the self-confidently categorical nature of the report’s conclusions and recommendations means, in practical terms, that HRW has decided not to be part of what the US government and other technologically important governments, at least, would likely regard as the mainstream policy and legal debate over autonomous weapons and their regulation. HRW can perhaps generate enough political strength to steer the argument in its direction or force its views into the political debate. But if it can’t generate that political force through NGO campaigning, it would have to walk back or ignore many things in this report simply to join the debate on the basis of reasoned persuasion, because it has already categorically pronounced on so many of them.
That means, then, that the real policy action is likely to be elsewhere – inside DOD, and informally among governments, especially. NATO, we should add, might not be the most important locus of informal discussion at the governmental level – it might well be that the more important discussions over the long haul take place with technologically advanced Asian allies who are at least as far ahead in robotics as the US and have more looming defense concerns than the Western European countries. Global civil society movements can likely strongly affect the positions taken by NATO governments in Western Europe; I’m skeptical, however, that they will have so great an effect on Japan, South Korea, or Taiwan (or, for that matter, Israel), quite apart from China. And while that kind of inside-DOD (or intergovernmental) policy discussion would probably always have predominated just by the nature of the activity, DOD, like all of us, can get locked inside its own epistemic community. The overall questions of regulation would benefit greatly from having HRW as a serious nongovernmental interlocutor, but (again, speaking just for myself) whatever the substantive merits of its position today, HRW’s categorical, ex ante stance means that, having answered the hard questions to its own satisfaction if not to everyone else’s, the rest is politics.
(To be clear, I’m speaking just for myself in this post, not for my co-author on Law and Ethics for Robot Soldiers. I’ve created a page at Lawfare where I’ll collect leading documents on the autonomous weapon systems debate as they are published.)