[Einer Elhauge, guest-blogging, May 22, 2007 at 6:01am]
Sabermetrics and the Future of Legal Empirical Studies

The book I have recently read that I think may offer the most insight into the future of legal studies is, of all things, a book on sabermetrics, called Baseball Between the Numbers. Sabermetrics, for those of you not into baseball, is the advanced statistical analysis of baseball. With Bill James as its most famous pioneer, it has produced all sorts of probing analysis of which statistics best measure the value of a baseball player, and which strategies work and which don't.

So what does any of this have to do with law? Well, what this book does is compile, in a readable way, the major points this advanced statistical analysis has taught us over the last few decades about matters that were previously resolved by tradition, custom, and intuitive reasoning. Some of those traditional views turn out to have some basis; others have none, or only a limited one.

For example, advanced statistical analysis shows that batting average is a useful statistic, but much less important to winning than on-base percentage. RBIs are largely a distraction, and hot streaks and clutch hitting are stories we tell ourselves to describe statistical clumps that are really just random. Pitchers vary in their ability to strike out batters and avoid walks and home runs, but have little effect on the odds that balls hit in play will become outs, so their ERAs are worse predictors of their future performance than their rates of strikeouts, walks, and home runs. The bunt is hugely overused, and generally reduces the odds of victory, other than in a few instances that can be specified with precision.
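
To give a concrete flavor of this kind of analysis, here is a minimal sketch of the sort of comparison sabermetricians run, assuming a hypothetical file of team-season statistics (the file name and column names are my own illustrative assumptions, not anything from the book): correlate each hitting statistic with runs scored and see which one tracks scoring more closely.

```python
import pandas as pd

# Hypothetical team-season data; "team_seasons.csv" and the column names
# ("batting_avg", "obp", "runs") are assumptions for illustration only.
teams = pd.read_csv("team_seasons.csv")

# Compare how strongly each hitting statistic correlates with run scoring.
for stat in ["batting_avg", "obp"]:
    r = teams[stat].corr(teams["runs"])
    print(f"{stat}: correlation with runs scored = {r:.3f}")
```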

And it occurred to me that in law we now are largely where sabermetrics was in the early days when Bill James began cranking out his seminal Baseball Abstracts. The bulk of what we teach our students reflects tradition, customs, and intuitive reasoning. Little of it has been subject to rigorous statistical analysis.

In Contracts class, for example, I regularly teach that we can understand contract law largely as a set of default rules that either reflect what most parties would want or are thought most likely to trigger an explicit contract provision. Then we explore how courts and scholars have resolved such issues, which they have done largely through armchair reasoning. These issues cry out for rigorous statistical analysis, and we have little to offer.

In Antitrust, much turns on how we think firms are likely to behave. After a merger, will firms engage in Bertrand competition by pricing down to cost, Cournot competition by setting output in a way that depends on the output of others, or oligopolistic coordination on price or output? The traditional approach considers various factors that theoretically bear on this issue in particular cases, but the weighing of them generally turns on unavoidable judgment calls. It would be better to rely on the growing statistical analysis of how firms actually behave (often, it turns out, in ways that lie in between these models). It would be even better to have rigorous statistical evidence about the price effects of particular methods for deciding which mergers to approve or condemn. Right now we choose our merger law methodologies based on theory and never gather and analyze the data to see whether the theory worked.
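
For readers who want the two textbook benchmarks in front of them, here is a minimal sketch, under standard simplifying assumptions (homogeneous goods, linear demand, constant marginal cost, symmetric firms, with all parameter values my own illustrations), of the prices the two models predict. The statistical work I have in mind often finds post-merger behavior falling somewhere between these two benchmarks.

```python
def bertrand_price(c: float) -> float:
    """Bertrand competition with homogeneous goods: price is driven down to marginal cost c."""
    return c

def cournot_price(a: float, c: float, n: int) -> float:
    """Symmetric Cournot equilibrium price with linear inverse demand P = a - b*Q,
    constant marginal cost c, and n firms: P = (a + n*c) / (n + 1).
    (The demand slope b affects quantities but not this equilibrium price.)"""
    return (a + n * c) / (n + 1)

# Purely illustrative parameters: demand intercept 100, marginal cost 20.
a, c = 100.0, 20.0
for n in (2, 3, 5):
    print(f"{n} firms: Bertrand price = {bertrand_price(c):.1f}, "
          f"Cournot price = {cournot_price(a, c, n):.1f}")
```

As the sketch shows, Bertrand predicts pricing at cost regardless of the number of firms, while Cournot predicts prices above cost that fall as the number of firms grows; which prediction better fits post-merger data is exactly the kind of empirical question at issue.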

We are probably even further behind in empirical analysis of basic legal strategy. What sorts of arguments are most effective with judges? Which with juries? Which sorts of contract design are most likely to avoid disputes later? Which settlement offers are most likely to succeed? These are important things to teach our students, but all we can do is either tell them the received wisdom (which may well be wrong) or avoid discussing these issues (so as not to expose our ignorance).

In short, in law, we are currently still largely in the position of the baseball scouts lampooned so effectively in Moneyball for their reliance on traditional beliefs that had no empirical foundation. But all this is changing. At Harvard Law School, as traditional a place as you can get, we now have by my count 10 professors who have done significant statistical analysis of legal issues. We just hired our first JD with a PhD in statistics. The movement is not at all limited to Harvard, and seems to be growing at all law schools.

So we are hardly devoid of empirical analysis of law. We are, rather, just in our early Bill James era, and can expect the analysis to become more sophisticated and systematic as things progress. I expect that within a couple of decades we will have our own book distilling what we will know then that conflicts with what is now conventional legal wisdom.

None of this means this new empiricism will replace traditional legal theory, just as sabermetrics has not eliminated the need for scouting. Indeed, it is clear to me that a lot of legal empirical analysis misses the boat because it rests on a poor or thin understanding of legal theory. Many empiricists are good at providing useful input to policy analysis, but surprisingly bad at analyzing the policy implications of their own findings. There will also be some growing pains, because it is not clear that empiricists are the best-placed people to teach law students, given that the students themselves need not learn how to do statistical analysis to become excellent lawyers.

But I have no doubt that empirical analysis of law will provide the biggest contributions to our understanding of law over the next few decades. That is where the low-hanging fruit is. The decline of doctrinalism will only accentuate this trend. Because anti-doctrinalist law professors can no longer persuade lawmakers with claims about what the law inherently must mean, they will find it more promising to try to influence lawmakers with findings about what effects particular laws would have. This brings us to my next topic, the death of doctrinalism, which will be the subject of my post tomorrow.