File this under “extremely speculative” (and it also goes on for a while, so be warned). But I’ve been working – mostly with Columbia’s Matthew Waxman – on autonomous weapons systems for a while now, with particular focus on the incremental ways in which weapons platforms gradually increase their levels of automation until, someday, they might become genuinely autonomous in their programming. We talk in a forthcoming Policy Review article about ways in which, as the platform – such as the remotely piloted aerial vehicle – gradually becomes automated, the weapons system moves toward automation as well, in order to remain in sync with the rest of the system. Matt’s and my article is in footnoted draft as “Law and Ethics for Robot Soldiers,” at SSRN, but there is a fabulous discussion of the drivers of all this in a much longer, more involved article by William Marra and Sonia McNeil, “Understanding the Loop: Autonomy, Decision-Making, and the Next Generation of War Machines,” also at SSRN.
One of the drivers of automation in weapons platforms is speed – and specifically speed to compete with the other guy’s machines. Speed is not the only driver, but it is an important one for certain activities. The arms race for speed – simplifying from the OODA loop and all that – has been thoroughly studied in military aircraft design and strategy for a long time, and it is easy to imagine something similar emerging in drone technology. I don’t want to underestimate the difficulties involved either in speeding up the operation of the drone, even leaving the weapons aside, or in automating all this; perhaps we’ll never get to the point of an air vehicle that can essentially fly, maneuver, and engage in aerial combat at speeds and with capabilities that exceed those of human pilots. I don’t know. But it might turn out that way.
But imagine that such an aircraft did emerge over time. The weapons system might simply have to be automated because it has to operate in sync with, and at the same speeds as, the rest of the craft; there won’t be a human being able to address weapons control quickly enough in real time. Moreover, the speed and perhaps even the nature of targeting information might well be such that a human still in the weapons loop would have no special information about incoming threats that would give him or her a reason to override the machine. Whether this is possible, feasible, a good idea or a bad idea – or, for certain kinds of systems such as those involved in air-to-air combat, driven by what the other guy does with technology – I leave aside for now.
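To put some rough numbers on the speed problem – these are my own back-of-envelope figures, purely illustrative, not drawn from our article – consider how much separation two fast-closing aircraft burn through during a single human decision cycle:

```python
# Back-of-envelope sketch (illustrative numbers only): why a human "in the
# weapons loop" may simply be too slow. Two aircraft closing head-on at
# Mach 2 each approach at roughly 1,372 m/s combined; every fraction of a
# second of human perception and decision costs kilometers of separation.
MACH = 343.0                      # approx. speed of sound at sea level, m/s
closing_speed = 2 * (2 * MACH)    # two aircraft, each at Mach 2, head-on

for human_latency in (1.0, 0.5, 0.25):   # assumed seconds to perceive + decide
    distance_lost = closing_speed * human_latency
    print(f"{human_latency:.2f}s of human decision time = "
          f"{distance_lost:,.0f} m of closure")
```

A machine deciding in microseconds faces effectively none of that closure; the asymmetry, not any particular number, is the point.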
Instead I want to ask how this might compare to automation in the financial markets, and particularly high frequency, high speed algorithmic trading. The two areas share a tendency toward automation, driven in considerable part by the desire for what we might call “post-human” speeds deliverable only by machines. There has been considerable debate in the press, within regulatory agencies and among supervisors of financial markets, and among other parties over whether this is a good thing for the financial markets. On one side, the firms that engage in this argue that it increases liquidity and, perhaps less convincingly, price discovery – but in any case, efficiency overall.
On the other side, there is the argument that whatever gains there are in liquidity or similar factors are overwhelmingly offset by the increase in tail risk – uncontrollable meltdowns in the market that take place within nanoseconds as the algorithms drive whatever they are programmed to drive before any human can even take note. This is usually conjoined with an argument that – even without high frequency trading firms implicitly depending upon moral hazard in the form of a rescue from the exchange/government in case of a genuinely galactic smashup – market players systematically underestimate the remote but colossal tail risk. Maybe that’s because of irrational risk estimation; maybe it’s because of a more diffuse belief that the risk is shared by all the market players, so that taking it on is rational for any particular firm – a kind of “tragedy of the commons” approach to risk; or maybe both.
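The underestimation claim has a simple quantitative core. Here is a minimal sketch – my own illustration, with assumed distributions and thresholds, not anyone’s actual risk model – of how badly a thin-tailed model understates the odds of an extreme move when the true process is fat-tailed:

```python
# Minimal sketch (assumed distributions, illustrative only): if risk managers
# model price moves as normally distributed but the true process has fat
# tails, the estimated probability of an extreme move is off by orders of
# magnitude -- the "remote but colossal" tail risk gets underestimated.
import numpy as np

rng = np.random.default_rng(0)
n = 5_000_000
threshold = 4.0  # a "4-sigma" daily move

# Thin-tailed model: standard normal returns.
normal = rng.standard_normal(n)

# Fat-tailed alternative: Student-t with 3 degrees of freedom,
# rescaled to unit variance for an apples-to-apples comparison.
df = 3
fat = rng.standard_t(df, size=n) / np.sqrt(df / (df - 2))

p_normal = np.mean(np.abs(normal) > threshold)
p_fat = np.mean(np.abs(fat) > threshold)
print(f"P(|move| > {threshold} sigma), thin-tailed model: {p_normal:.2e}")
print(f"P(|move| > {threshold} sigma), fat-tailed model:  {p_fat:.2e}")
print(f"tail risk understated by a factor of ~{p_fat / p_normal:.0f}")
```

The factor of roughly a hundred here is an artifact of the assumed parameters; the point is only that when tails are fat, models that assume they are thin can be wrong by orders of magnitude – exactly the kind of error that complacent market players would not notice until the smashup.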
My own view tends to look skeptically at HFT, for the reasons stated above. There are other reasons as well – outlined in a New York Times opinion piece by the financial writer Roger Lowenstein, “Putting the Brakes on High-Frequency Trading.” Lowenstein says that the “‘liquidity’ H.F.T. provides is long past the point of being helpful.” That is consistent with the tail risk cost of that liquidity noted above, if the liquidity gain even exists. But he goes on to make a much stronger argument, built around the proposition that liquidity for its own sake misses the point, which is to provide sufficient liquidity for investors driven by longer term concerns:
> The purpose of financial markets, remember, is not to provide a forum for split-second trading. If you want to gamble, go to Las Vegas. Markets exist to provide some minimal level of liquidity, so that long-term investors have the confidence to invest. And they exist so that companies and investors can discover how much an ownership position in, say, Apple is worth. When Apple stock goes up, it sends a signal to other firms to invest in the same or similar technologies. Thus does a capitalist society allocate resources.
>
> A well-functioning market can accommodate some hyperactive turnstile traders as long as it has enough legitimate investors — people who are thinking about the outlook for companies down the road.
>
> The reason that market squares like me harp on the long term isn’t because we’re technologically illiterate. It’s because, again, society relies on the market to allocate capital. If market signals are based on algorithms that become outmoded in a nanosecond, we end up with empty factories and useless investment. How much effort do high-speed traders devote to analyzing the future prospects of Apple? Precisely none. Their aim is only to exploit tiny price discrepancies that disappear in milliseconds.
Put another way, he is denying that these price discrepancies are connected to fundamentals even remotely or indirectly – and so, to the extent they increase liquidity, that liquidity is not actually useful, but merely noise or perhaps even counterproductive, because it does not arbitrage genuinely relevant information of a kind that actually increases the efficiency of capital allocation.
The response to that, of course, is to say that those tiny price discrepancies exist in the aggregate – not necessarily any particular price discrepancy, but in the aggregate – because they do translate longer run concerns into price swings that are so tiny only because they have been so relentlessly arbitraged. Or, in a less ambitious version, they translate longer run concerns into tiny and instantaneous price discrepancies to the extent that anyone should rationally care about long term economic fundamentals. And then comes the counterargument to that: this response involves a heroic reliance on efficient markets theory, which asks, in this case, to be taken on faith that, overall, the reason these price discrepancies exist has something to do with more fundamental concerns.
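To see the HFT firms’ claim in miniature, here is a toy simulation – entirely my own construction, with made-up noise and threshold parameters – in which an arbitrageur trades away cross-venue price discrepancies and, in doing so, keeps them tiny:

```python
# Toy sketch (my own, purely illustrative): two venues quote the same asset;
# an arbitrageur trades whenever the quotes diverge past a threshold, and
# each trade nudges the quotes back together. The discrepancies it lives on
# end up tiny precisely because it keeps closing them.
import random

def simulate(arb_active: bool, steps: int = 100_000, threshold: float = 0.02):
    rng = random.Random(7)            # same noise stream for both runs
    price_a, price_b = 100.0, 100.0
    total_gap = profit = 0.0
    for _ in range(steps):
        # Idiosyncratic order flow jostles each venue's quote.
        price_a += rng.gauss(0, 0.01)
        price_b += rng.gauss(0, 0.01)
        gap = price_a - price_b
        if arb_active and abs(gap) > threshold:
            # Buy the cheap venue, sell the dear one; the trade itself
            # pushes the two quotes back toward each other.
            profit += abs(gap) - threshold
            price_a -= gap / 2
            price_b += gap / 2
        total_gap += abs(price_a - price_b)
    return total_gap / steps, profit

gap_without, _ = simulate(arb_active=False)
gap_with, profit = simulate(arb_active=True)
print(f"avg |price gap| without the arbitrageur: {gap_without:.4f}")
print(f"avg |price gap| with the arbitrageur:    {gap_with:.4f}")
print(f"arbitrageur's cumulative take:           {profit:.2f}")
```

The discrepancies are small precisely because the arbitrageur keeps closing them – which is the firms’ story. Lowenstein’s rejoinder, in these terms, is that nothing in the loop ever consults fundamentals: the gap being closed is just noise between two quotes, not information about Apple’s prospects.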
Broadly speaking, I share Lowenstein’s concern that in fact these price movements are significantly disconnected from factors leading to better capital allocation. But even if that were not so, I would still tend to think that the prudent regulatory move – in the absence of any ability to have complete certainty about the benefits and the risks – is to treat the gains as tiny even if real, and the risks as underpriced. I also think understatement of the risks is compounded by three basic tendencies in the financial markets and firms (drawing here on terminology and concepts developed by Steven Schwarcz, though I’m deploying them here for my own ends) – complexity, conflicts, and complacency.
- First, HFT introduces problems of complexity just in understanding the nature of the algorithms themselves – what they do, and how closely they do what you think they are supposed to do – and then puts that complexity on steroids, so to speak, by moving the algorithms to speeds at which they cannot be monitored.
- Second, therefore, the complexity – which comes here in two distinct varieties, understanding whether the algorithm will perform as intended or have unintended consequences, and the further complexity of moving it to unmonitorable speeds – creates conflicts of interest and agency failures. These also come in at least three varieties.
  - The first was noted across the financial sector in the financial crisis: conflicts of interest within financial firms between the lower level managers, financial engineers, and other mid-tier agents of the firm responsible for dreaming up complex systems in the first place, on the one hand, and the top level managers, board, and ultimately shareholders who have no ability to manage such complexity, or even to understand what true risks it poses – apart from such measures as VaR – and who therefore face tendencies of the lower level managers to take gains for themselves and externalize risk onto the firm. Complexity makes this much easier to accomplish.
  - The second is closely linked to HFT: algorithms in which the machine, operating beyond human speed, can be thought of conceptually (not legally, to be sure – yet) as an agent – in theory executing a preset and predictable program, but in reality behaving for all the world like a rogue and unmonitored agent unmoored from its principal. Not one that has a conflict of interest the way a human agent might, but still maximizing something that is not within the ability of the principal to determine, predict or, in real time, control. In the real time of the machine’s world, the principal is not even able to “fire” its rogue agent. (A stylized sketch of this appears just after this list.)
  - The third is the agency problem driven by complexity and the interest of any given firm in reaping the gains of HFT while offloading the risk of its unmonitored AI agent onto the market as a whole, as already noted; this agency failure is made possible in large part by the complexity of the system. Note, however, that the principals in this case are in one sense the market – but in human terms they are the market’s regulators, who must decide how to address the fact of complexity, and how to address arguments that just because they, as government employees, can’t understand it, the greater private brains of the market do, so please don’t object. This expresses, almost by definition, an agency monitoring problem that involves the market as a whole.
- Third, faced with complexity and abetted by the many conflicts of interest internal and external to financial firms, market actors – regulators and market participants alike – essentially become complacent about the tail risk and tacitly agree to ignore it.
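Here is the stylized sketch promised above – my own toy, with assumed decision rates and loss figures – of the agent that cannot be fired in real time. The algorithm acts once per microsecond; the supervising human checks once per second; by the time the kill switch fires, a full second’s worth of errant orders is already done:

```python
# Stylized sketch (illustrative assumptions throughout) of the "rogue agent"
# problem: the algorithm decides at machine speed, the human principal can
# only intervene at human speed, so the kill switch always arrives late.
ALGO_DECISIONS_PER_SEC = 1_000_000   # assumed: one decision per microsecond
HUMAN_CHECKS_PER_SEC = 1             # assumed: a vigilant human, once a second
LOSS_PER_BAD_ORDER = 100.0           # assumed dollar loss per errant order

errant_orders = 0
losses = 0.0
halted = False
ticks_per_check = ALGO_DECISIONS_PER_SEC // HUMAN_CHECKS_PER_SEC

for tick in range(3 * ALGO_DECISIONS_PER_SEC):   # three seconds of trading
    if halted:
        break
    # Suppose a bug makes every order errant from t=0.
    errant_orders += 1
    losses += LOSS_PER_BAD_ORDER
    # The human principal notices only at the next human-speed check.
    if tick > 0 and tick % ticks_per_check == 0:
        halted = True                             # kill switch: a second late

print(f"errant orders before halt: {errant_orders:,}")
print(f"losses before halt: ${losses:,.0f}")
```

None of the particular numbers matter; the ratio does. Whatever the human monitoring interval, the machine gets that many decisions in before any principal can act – which is what makes “firing” the agent in real time a category error.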
But now a point of comparison between where we started, with weapons systems, and where we ended, with financial markets. In the case of weapons systems, the point is not to reach some “efficient” point with respect to the other guy – the other guy is the enemy, and the aim is to destroy him, not to “maximize welfare” as between what he wants and what we want. If one likes, one can redefine welfare maximization to include only what we want and reach the same result – but the better way to think of it is that efficiency and welfare maximized for both sides taken together is not the point of war (at least not if one takes welfare to be that which each side wants for itself). In that case, however, if an arms race between the two sides leads to things that are beyond human speeds, that might be perfectly okay from the standpoint of winning (it depends, of course, on important complexity issues as well, such as the unintended consequence of the targeting algorithm that causes the weapon to start shooting at you).
In the case of financial markets, however, efficiency for the whole market – social welfare in the allocation of capital – is the relevant criterion. The social gains have to be set against costs, for all players and for the system as a whole. If one accepts a certain theory of capital allocation, whether that offered by Lowenstein or that offered by the HFT firms, then it has certain consequences for how one determines net social welfare. Either way, however, an arms race among traders requires a different kind of justification than an arms race in weapons systems. An arms race for speed among traders has to justify itself in terms of net social welfare, not simply in terms of beating the enemy. Or at any rate, that’s what I’m thinking about by way of comparison of these two fields; informed comments welcomed.
(This is loosely another application, btw, of an argument I have made with respect to drones and targeted killing – that people who talk about net social welfare in war, e.g., in the use of drones and the tradeoff between the propensity to resort to force and the risk to civilians or one’s own forces, misapply to war concepts applicable within a society and a social space for which “net social welfare” has meaning because of shared interests. The difficulty is that war is a situation in which the sides do not share a common ground on which to determine “welfare.”)