Banning Autonomous Weapon Systems Won’t Solve the Problems the Ban Campaign Thinks It Will

Although much less visible in the United States than in Europe, the campaign to ban “killer robots” has not gone away. If anything, it’s gathering steam in Europe and also at the UN, where it is likely to be taken up following a report by Special Rapporteur Christof Heyns calling, not precisely for a ban, but for a “moratorium.” The International Coalition for Robot Arms Control (ICRAC) has released a letter signed by 270+ “computing scientists” calling for a “ban on the development and deployment of weapon systems in which the decision to apply violent force is made autonomously.”

One can share the “computing scientists’” overall concerns about humanity and accountability in war, however, without thinking that a sweeping, preemptive “ban” is the right way to approach these issues of emerging technology. Over at The New Republic blog “Security States” (a joint project with the national security law website Lawfare), Matthew Waxman and I have a new post discussing these developments and explaining why the ban approach to regulating the gradual automation of weapon systems is unlikely to be effective, and is deeply mistaken in any case because, if somehow it did take hold, it would give up the potential gains that automation technologies offer in reducing the harms of war. The post follows on a policy paper we wrote for the Hoover Institution a few months ago, Law and Ethics for Autonomous Weapon Systems: Why a Ban Won’t Work and How the Laws of War Can. Here is the opening (the piece, title notwithstanding, is actually about weapons and war, not domestic drones):

What if armed drones were not just piloted remotely by humans in far-away bunkers, but were programmed under certain circumstances to select and fire at some targets entirely on their own? This may sound like science fiction, and deployment of such systems is, indeed, far off. But research programs, policy decisions, and legal debates are taking place now that could radically affect the future development and use of autonomous weapon systems. To many human rights NGOs, joined this week by a new international coalition of computing scientists, the solution is to preemptively ban the development and use of autonomous weapon systems (which a recent U.S. Defense Department directive on the topic defines as a system “that, once activated, can select and engage targets without further intervention by a human operator”). While a preemptive ban may seem like the safest path, it is unnecessary and dangerous ….

Besides the self-protective advantages to military forces that might use them, it is quite possible that autonomous machine decision-making may, at least in some contexts, reduce risks to civilians by making targeting decisions more precise and firing decisions more controlled. True, believers in artificial intelligence have overpromised before, but we also know for certain that humans are limited in their capacity to make sound and ethical decisions on the battlefield, as a result of sensory error, fear, anger, fatigue, and so on. As a moral matter, states should strive to use the most sparing methods and means of war—and at some point that may involve autonomous weapon systems. No one can say with certainty how much automation technologies might gradually reduce the harms of warfare, but it would be morally wrong not to seek such gains as can be had—and especially pernicious to ban research and development into such technologies at the front end, before knowing what benefits they might offer.

This is not to say that autonomous weapons warrant no special regulation, or that the United States should heedlessly rush to develop them. After all, the United States’ interest has never been trigger-pulling robot soldiers chasing down their human enemies—the cartoonish “killer robots” of the ban campaign—or taking humans out of targeting for its own sake.
