(Notes from my panel presentation on covert action and international law, at the Harvard National Security Journal annual symposium yesterday. Very fine conference and congratulations to HLS student James Moxness, who served as organizer.) Seen from the standpoint of the emerging regulation of covert uses of force, cyberwarfare and targeted killing using drones share several important features, but differ in at least one. They share at least the following:
- Each can be used to gather intelligence – to engage in surveillance.
- Each can also be used to intervene, that is, to take action with “physical world” results that can be characterized as a use of force.
- Each allows intervention to be taken at a distance, so that one’s own personnel are not risked.
- Each favors the attack – although defensive and counter-technologies will no doubt emerge, at this point both cyber and drones favor offense over defense.
- Each tends to make “attribution” of the intervention or attack difficult, and thereby weakens deterrence and the threat of retaliation – the reciprocal threat that has traditionally undergirded international law on the resort to force. The difficulty of attribution also hampers any response by international mechanisms for enforcing the laws on resort to force, such as action by the Security Council.
- The combination of favoring offense over defense and the difficulty of attribution means that these tend to be de-stabilizing strategies, undermining rather than reinforcing the status quo.
These are all features that the two technologies tend to share. However, there are differences, and one stands out in the question of regulation of covert action:
- Cyberwarfare tends, at least as we imagine it now, toward lack of discrimination in its attacks and consequences. Stuxnet appears to have been created to be extraordinarily precise in its targeting, but the general tendency of cyberwarfare is either to target widely or even, in legal terms, indiscriminately, without much attention to the collateral consequences of attacks on infrastructure such as electrical grids – or else to target infrastructure deliberately for the purpose of attacking civilians. This is not necessarily so, but it does appear to be a likely tendency of cyberwar.
- Targeted killing using drones, by contrast, is by its strategic design aimed at greater precision and greater discrimination, exercised at greater and more personally remote distance. That is its point. However, the remoteness that permits greater discretion in targeting is the same feature that makes attribution more difficult; the benefit of precision and discrimination in the conduct of operations carries with it the problem of attribution.
Which is to say that in the case of cyberwar, a tendency toward indiscriminate attack goes hand in hand with the difficulty of attribution. In the case of targeted killing using drones, there is instead a tradeoff between the increased difficulty of attribution and the greater discrimination offered by precision technology. The tradeoff situation is the morally more difficult one, because it presents genuine tradeoffs.
I have argued strenuously that in the case of such tradeoffs, the moral answer cannot be to pass up the possibility of more discriminating targeting technology because we fear that the greater difficulty of attribution makes it easier for states to resort to force anonymously. If there is a problem with states using force illegally or immorally, that has to be addressed on its own terms – it is immoral to hold hostage the civilians or military personnel whose lives would otherwise be protected against the behavior of political leadership. But that does not make the tradeoffs go away – the argument merely says that, faced with this kind of tradeoff, this is how one must make the moral trade.
(Tufts’ Michael Glennon, who moderated my panel yesterday at which I made the above comments, explains the attribution problem as a matter of tradeoffs in a new paper to which I will link once it is up on SSRN.)