An “Ethical Turing Test”? More on Comparing Self-Driving Vehicles and Autonomous Weapon Systems

In my earlier posts comparing self-driving cars and autonomous weapon systems, I pointed out that in neither case are we seeing a sudden, systemic paradigm change, a shift from one whole technological system to another.  Not, at least, in the sense I had long assumed – that a change-over to driverless cars would necessarily require a systemic change from individuals driving their own cars to a centralized computer managing all the vehicles as a single whole – including things like sensors in the roads, no commingling of system-controlled cars with individually driven cars, and so on.

Instead, the changes in these particular technologies are occurring incrementally.  It might be different for other technologies, but for these, the changes are taking place bit by bit.  Cars sold one by one gradually incorporate more and more of these automated systems as safety and convenience features.  This fact alters the nature of the legal, ethical, and policy review that has to be made of the systems – regulatory review, too, has to be incremental.  Moreover, changes toward automation often occur in highly discrete technological functions within the larger activity – braking systems in cars, for example, or the detailed and particular criteria used for target identification in weapons.  Legal, ethical, and policy decisions have to address both the particular function and its impact on the overall machine system.  In this regard, I once again highly recommend the new report by Bryant Walker Smith (Stanford Center for Internet and Society) on the legality of self-driving cars in the US.  For those of us interested in weapon systems, it provides a useful basis for comparing how vehicle codes will gradually have to take account of evolutionary technologies with what the legal review of automated weapon systems will have to look like.

Still, vehicles and weapons are different for many reasons – starting with the intentions behind their uses as technology.  Vehicles are not intended to be violent; weapons are – by design potentially violent and often lethal.  But what does that difference in intention finally net us?  The machine itself doesn’t have an “intention” as a human being does; it has its programming.  The programmer’s problem is to mimic that human intention in the machine’s behaviors.  We might refer to this as an “Ethical Turing Test” – behind the veil, so to speak, can we distinguish between the behaviors of the intentionally ethical human and the “behaviorally” (i.e., programmed) ethical robot?  I don’t know (with respect to either vehicles or weapons) whether, to what extent, or in what particular activities machines might surpass humans on the “Ethical Turing Test.”  That will only be answered by the progress of technology.  But technology has made remarkable advances up to this point – I would not have guessed how quickly self-driving vehicles would be emerging, for example, and I would not have guessed that it would be possible to create the technology without a complete technological paradigm shift.
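Purely to fix ideas about what such a test might look like in practice, here is a minimal sketch of a blinded comparison.  Everything in it (the DecisionRecord structure, the judge function, the pass criterion) is invented for illustration, not drawn from any actual proposal.

```python
import random
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class DecisionRecord:
    """One decision in a scenario, stripped of any hint of who (or what) made it."""
    scenario: str
    action_taken: str
    source: str  # "human" or "machine"; never shown to the judge

def blinded_ethical_comparison(
    records: List[DecisionRecord],
    judge: Callable[[str, str], float],  # maps (scenario, action) to a 0..1 adequacy score
) -> Dict[str, float]:
    """Score human and machine decisions without revealing their source.

    On this toy framing, the machine 'passes' if its average score is at
    least as high as the human's.
    """
    shuffled = list(records)
    random.shuffle(shuffled)  # present records to the judge in arbitrary order

    scores: Dict[str, List[float]] = {"human": [], "machine": []}
    for rec in shuffled:
        scores[rec.source].append(judge(rec.scenario, rec.action_taken))

    averages = {src: sum(vals) / len(vals) for src, vals in scores.items() if vals}
    averages["machine_passes"] = float(
        averages.get("machine", 0.0) >= averages.get("human", 0.0)
    )
    return averages
```

The only point of the sketch is the blinding: the judge evaluates behaviors, not the presence or absence of human intention behind them.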

The benefits from these innovations might well be so great that we would make a tragic mistake not to explore them, irrespective of whether, or how closely, they approach ethical adequacy – and that’s as true of weapon systems as it is of self-driving cars.  This holds contra Human Rights Watch’s recent, problematic report (remarkably unconsidered for an HRW report and, to be blunt, simply unserious) calling for a preemptive international ban on autonomous weapon systems and on any research and development that could lead to such a system.  Not to undertake the research and development into how automation can increase precision and discrimination, finally lessening the harms of war – particularly when highly relevant and similar development is already proceeding in such areas as vehicles – entails a potentially steep and tragic opportunity cost.  To give up what HRW’s quite irresponsible call to ban even such research and development would forfeit seems to me profoundly wrong – indeed immoral.  We owe it to future generations to seek to use the same technologies that we might find will gradually, incrementally protect human life in activities from driving to surgery to the care of the elderly – all technologies that will involve machine decisions about potentially lethal actions – to decrease the harms of war.  It’s frankly inconceivable to me, in any case, that future generations accustomed to automation in the name of superior safety in all these other areas of human activity, if that’s what the new technologies succeed in bringing about, would not simply presume as a matter of course that it would be applied as feasible in weapons and conflict.

What does seem important in comparing vehicles and weapons, then, is that at the granular level, though they differ in the intentions behind their use, each inevitably involves decisions implying the possibility of violence and even lethality.  Yes, it matters ethically and legally that a weapon is intended to have lethal application, whereas the technologies of self-driving cars are intended to avoid violence and lethality.  But at the granular level of the actions that the technologies take, they are programmed to make decisions and initiate actions that might well still cause violence directly, or collaterally, or by error.  As Gary Marcus pointed out in the New Yorker a few weeks ago, even vehicular automation technologies will one day (not so long from now) have to start grappling with programming decisions about whether to risk the driver in order to spare, say, the school bus full of children; that is an intentional decision about life and death made by a human programmer and written into a machine.  And that decision will likely be part of a system beyond human intervention, at least at the speed at which it will have to be made in a possible real-life crash.
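To make concrete what it means for that decision to be written by a programmer in advance, here is a deliberately toy sketch.  It describes no actual automotive system; the maneuver names, harm estimates, and weighting are all invented for the example.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Maneuver:
    """One evasive option the vehicle could take in an imminent-crash scenario."""
    name: str
    expected_occupant_harm: float   # rough estimate, 0 (none) to 1 (fatal)
    expected_bystander_harm: float  # rough estimate, 0 (none) to 1 (fatal)

# The weighting below is the programmer's value judgment, written in advance
# and executed later at machine speed with no human in the loop.
BYSTANDER_WEIGHT = 2.0  # hypothetical: bystander harm counts double

def choose_maneuver(options: List[Maneuver]) -> Maneuver:
    """Pick the option with the lowest weighted expected harm."""
    return min(
        options,
        key=lambda m: m.expected_occupant_harm
        + BYSTANDER_WEIGHT * m.expected_bystander_harm,
    )

# Example: brake straight (risk the occupant) vs. swerve (risk the school bus).
options = [
    Maneuver("brake_straight", expected_occupant_harm=0.6, expected_bystander_harm=0.0),
    Maneuver("swerve_left", expected_occupant_harm=0.1, expected_bystander_harm=0.8),
]
print(choose_maneuver(options).name)  # -> "brake_straight" under this weighting
```

The trade-off is encoded once, by a human, and then carried out by the machine faster than any human could intervene.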

Moreover, as far as “humans in the decision loop” are concerned, automotive technology already overrides human actions in some narrow yet crucial matters – anti-lock brakes, for example, exist to prevent and override a systemically predictable but wrong human response, at least among most drivers.  In one sense, anti-lock brakes carry out the highest-level human intention – stopping the car – but at the granular level they override the immediate human intentional action in braking.  The technology does so at the level of one crucial, yet relatively small, part of driving the car; a self-driving car is thus better understood as a bundle of particular systems and functions – braking, steering, acceleration, and so on – that, in a fully self-driving vehicle, must all come together.  The automation of the vehicle has to address each of them, and all of them together.
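As a rough illustration of that “bundle of discrete functions” point, and nothing more, here is a sketch in which each automated function (an anti-lock-style brake override, a lane-keeping nudge) is its own small piece and the “whole vehicle” is simply their composition.  Every function, name, and number here is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class DriverInput:
    brake_pressure: float  # 0..1, how hard the driver is pressing the pedal
    steering_angle: float  # degrees

def antilock_brake(driver_brake: float, wheels_locking: bool) -> float:
    """Discrete automated function: override the driver's predictably wrong
    instinct to keep the pedal floored when the wheels lock."""
    if wheels_locking and driver_brake > 0.8:
        return 0.6  # reduced, pulsed pressure (hypothetical value)
    return driver_brake

def lane_keep_steering(driver_angle: float, lane_offset_m: float) -> float:
    """Another discrete function: nudge steering back toward the lane center."""
    correction = -0.5 * lane_offset_m  # hypothetical proportional correction
    return driver_angle + correction

def vehicle_controller(inp: DriverInput, wheels_locking: bool, lane_offset_m: float):
    """The 'whole vehicle' is these functions composed; automating the car means
    automating each of them and then reviewing how they behave together."""
    return {
        "brake": antilock_brake(inp.brake_pressure, wheels_locking),
        "steer": lane_keep_steering(inp.steering_angle, lane_offset_m),
    }

print(vehicle_controller(DriverInput(brake_pressure=1.0, steering_angle=0.0),
                         wheels_locking=True, lane_offset_m=0.4))
```

Each small function can be assessed on its own terms, and then again as part of the assembled whole.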

The same is true of complex weapon systems.  They are bundles of particular systems and functions forming a larger whole.  Permitting or not permitting automation has to consider the unbundled parts on their own; then the system as a whole; and finally the weapon system in relation to other military systems with which it might interact, sometimes in unexpected and unintended ways.  This is, by the way, the approach taken by the best-practice policy in this area, the recent Department of Defense Directive on Autonomous Weapon Systems, which requires legal and policy evaluation of any autonomous or semi-autonomous weapon system on a basis integrating incremental review of the system’s parts throughout the development process, review of the system as a whole, and finally review of its interaction with other military systems.

(In my next post on this topic, I question the framing of real-life robot debates in terms of pop culture.)

 
