Law Review Article Accuses Louisiana Supreme Court Justices, Is Itself Found To Have Serious Errors:

Dan Slater (Wall Street Journal's Law Blog) reports on an interesting story about errors in the academy. The Tulane Law Review recently published an article that purported to compare Louisiana Supreme Court Justices' voting records with the campaign contributions to them from litigants and lawyers; the article asserted that

some of the justices have been significantly influenced — wittingly or unwittingly — by the campaign contributions they have received from litigants and lawyers appearing before these justices.

The New York Times reported on the story before the article was published, as did WSJ Law Blog.

1. Now it turns out that many of the cases were miscoded — a rebuttal asserts that "in thirty-seven of the 186 opinions included in the study, the information about the case on which Palmer and Levendis based their conclusions is just plain wrong, such as how a Justice voted or even if the Justice was on the panel that decided the case." [UPDATE: After I posted this, the rebuttal I linked to was replaced with a corrected version, and I adjusted the quote. The original version said 40 of the 186 opinions were miscategorized; the revised version says 37.]

The authors acknowledge that there were errors; one of the authors asserted that "with all the mistakes now corrected, ... the study's conclusions, broadly speaking, are the same," but the revised study and the revised dataset have not yet been publicly distributed. (I e-mailed that author mentioning that I was going to blog about the controversy, and asking whether he could provide a response that I could link to; though he originally offered to pass along the revised dataset, he later said that because of a newly arising lawsuit threat, he had been told not to distribute the data until it could be independently reviewed. This may well be sensible, but at this point all that can be said with confidence is that the original data is wrong, as the researchers have admitted.)

Incidentally, while it's not clear from the article exactly how the data was gathered, it looks like part of the problem might have been a lack of checking by the authors or by the law review: The article asserts (p. 1298) that "Each case was thoroughly read and analyzed by a researcher. Once the cases and contribution information were gathered, we entered our observations into a standard data table." If the "a" in "a researcher" is precise, and if the researchers were research assistants rather than the authors (seemingly likely, given the thanks in footnote * and the use of the term "a researcher" rather than "one of the coauthors"), this suggests that each case was read by only one research assistant, with no further verification.
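To illustrate the kind of safeguard that appears to have been missing, here is a minimal sketch of a double-coding check: two people code each case independently, and any disagreement is flagged for a third look before the data goes into the analysis. This is only my illustration of the general technique; the case numbers, field names, and values are hypothetical, not drawn from the study.

```python
# Minimal sketch of a double-coding check (hypothetical data throughout).
# Two coders independently record each case; mismatches get flagged for
# review rather than going straight into the analysis dataset.

coder_a = {
    "2003-C-1127": {"justice_on_panel": True, "vote": "plaintiff"},
    "2004-C-0211": {"justice_on_panel": True, "vote": "defendant"},
}
coder_b = {
    "2003-C-1127": {"justice_on_panel": True, "vote": "plaintiff"},
    "2004-C-0211": {"justice_on_panel": False, "vote": "defendant"},
}

disagreements = [
    (case, field)
    for case, fields in coder_a.items()
    for field in fields
    if coder_b.get(case, {}).get(field) != fields[field]
]

print(f"{len(disagreements)} coding disagreement(s) to resolve: {disagreements}")
```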

I also asked the current Tulane Law Review editor-in-chief about this, and he reported that to his knowledge the law review cite-checking process did not check the underlying database. This is probably consistent with standard law review practice; law reviews generally check all the citations that appear in the text of the article, but I'm unaware of any practice of checking the case data that appears in databases that aren't published within the body, footnotes, or appendix of the article. Nonetheless, it's unfortunate that this happened, since cite-checking often (though not always) does uncover factual errors such as the ones that appear to have been present here.

2. But there's more than this to the situation, I think. Even if the data were correct, the article would still be drawing what strikes me as an unsupported inference from correlation to causation. The article asserts in footnote 14 that

It is worth observing that this Article does not claim that there is a cause and effect relationship between prior donations and judicial votes in favor of donors' positions. It asserts instead that there is evidence of a statistically significant correlation between the two.

But many other passages in the article seem to argue that there was indeed causation. The opening paragraph says (as I quoted above) that "This empirical and statistical study of the Louisiana Supreme Court over a fourteen-year period demonstrates that some of the justices have been significantly influenced — wittingly or unwittingly — by the campaign contributions they have received from litigants and lawyers appearing before these justices." (Emphasis added, in this quote and the later ones.) "Some justices may sincerely believe that they have not been influenced by the money they take, but sincerity makes no difference if the reality is otherwise." "Some studies have tracked the rise in contributions made to judicial candidates, but few have attempted to determine whether these increasing contributions actually influence subsequent adjudications involving contributors. This Article demonstrates that the debate is not evenly balanced." (This quote is just a sentence before footnote 14.) There are many more examples.

The trouble, as this other critique of the article points out, is that there's a perfectly plausible alternate explanation for correlation between contributions and voting patterns — that contributors contribute money to the election of those candidates whose ideologies they agree with, rather than that the elected candidates then decide based on the identities of their contributors.

Say, for instance, that we discover that a liberal state supreme court justice often votes in favor of plaintiffs in individual tort cases, employment cases, or environmental cases; and say that we find that those plaintiffs are often represented by law firms that have contributed money to the justice. It's of course possible that the justice is influenced by the identities of his contributors. But it's also possible that the justice is voting solely based on his view of the law — and the contributors contributed to him because they share his view of the law (or in any event find his view of the law to be good for them and their clients). Simple evidence of a correlation, even a very strong correlation, cannot distinguish between these two explanations, and thus can't show that contributions "influence" the justices' votes.
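This alternative is easy to see in a toy simulation. In the sketch below (my illustration; every number in it is invented, and nothing is drawn from the article's data), votes depend only on each judge's fixed ideology, and contributors donate only to judges who share their outlook -- yet contributions and favorable votes still come out strongly associated.

```python
# Toy simulation: votes depend ONLY on the judge's ideology, and contributors
# donate ONLY to judges whose ideology they share. Contributions have zero
# causal effect on votes, yet the two still correlate. All numbers invented.
import random

random.seed(0)
cases = []
for _ in range(10_000):
    judge_leans_plaintiff = random.random() < 0.5   # judge's fixed outlook
    # Plaintiff-side firms mostly back judges who share their outlook.
    plaintiff_is_net_contributor = random.random() < (
        0.8 if judge_leans_plaintiff else 0.2
    )
    # The vote is driven by ideology alone, with noise.
    votes_for_plaintiff = random.random() < (
        0.75 if judge_leans_plaintiff else 0.25
    )
    cases.append((plaintiff_is_net_contributor, votes_for_plaintiff))

def plaintiff_win_rate(plaintiff_contributed):
    subset = [vote for contrib, vote in cases if contrib == plaintiff_contributed]
    return sum(subset) / len(subset)

print(f"P(pro-plaintiff vote | plaintiff was net contributor): "
      f"{plaintiff_win_rate(True):.2f}")   # roughly 0.65
print(f"P(pro-plaintiff vote | defendant was net contributor): "
      f"{plaintiff_win_rate(False):.2f}")  # roughly 0.35
```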

The authors' attempt to control for this alternative explanation strikes me as quite weak. The authors look separately at voting patterns in favor of plaintiffs and in favor of defendants, and they conclude that

In cases where the defendant was the net contributor, Justice [A] ruled for the defendant's position 66% of the time, and Justice [B] 86% of the time. On the other hand, in cases where the plaintiff was the net contributor, Justice [A]'s vote was for the plaintiff's position 66% of the time, and Justice [B]'s vote was for the plaintiff's position 63% of the time. This is a swing of 32% for Justice [A] and 49% for Justice [B] when the net donor changes from being a defendant to a plaintiff. The marked shift favoring the net contributor, irrespective of being plaintiff or defendant, strongly indicates that it is the donation, not the underlying philosophical orientation, that accounts for the voting outcome.
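(A side note on the arithmetic, since the article doesn't spell it out: the "swing" figures are consistent with taking each justice's pro-plaintiff rate when the plaintiff was the net contributor and subtracting the pro-plaintiff rate when the defendant was the net contributor. That formula is my reconstruction from the quoted percentages, not the authors' stated method.)

```python
# Reconstructing the quoted "swing" figures. The formula is my inference
# from the quoted percentages, not one the article states explicitly:
#   swing = P(pro-plaintiff vote | plaintiff is net contributor)
#           - (1 - P(pro-defendant vote | defendant is net contributor))

def swing(pro_defendant_rate, pro_plaintiff_rate):
    pro_plaintiff_when_defendant_gives = 1 - pro_defendant_rate
    return pro_plaintiff_rate - pro_plaintiff_when_defendant_gives

print(f"Justice A: {swing(0.66, 0.66):.0%}")  # 32%
print(f"Justice B: {swing(0.86, 0.63):.0%}")  # 49%
```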

But a judge's philosophical orientation need not run along pro-plaintiff or pro-defendant lines as a broad category. For instance, some judges may support cities as defendants (for instance, voting in favor of broad municipal immunity against various claims) and cities as plaintiffs (for instance, voting in favor of broad municipal power to get injunctions against nuisances) — not because the law firms that represent small cities have donated money to the judge, but because the judge generally supports municipal government authority.

Likewise, say a law firm tends to represent employers as defendants in many employment cases, but also represents employers as plaintiffs in other employment cases, such as cases enforcing nondisclosure or noncompetition agreements. In this situation, a judge with a generally pro-employer perspective will vote in ways this law firm likes — which may mean the law firm will try to help elect the judge even if the judge never pays attention to who contributed to him. (Recall that the study considered donations from lawyers as well as donations from litigants.)

3. Of course, it's certainly possible that judges' decisions may be influenced by whether the litigants or the lawyers have contributed money — or time or an endorsement or other things — to the judges' election. I don't want to suggest that anything I say above proves this effect is absent, either on the Louisiana Supreme Court or elsewhere. But the study's claim that it has "demonstrate[d]" such a "significant[] influence[]" is not adequately supported. And before one claims that identified judges have indeed been so influenced (not just that judges generally might be influenced, a proposition that our knowledge of human nature tells us must be true at least in some instances), it seems to me that one should have significantly more evidence than the article adduces. And this would be true even if the article's underlying database were accurate, or even if the authors can replicate their correlations after the database has been corrected.

4. Finally, note that the law school's dean has written a letter of apology to the Louisiana Supreme Court Justices, and the law review has noted the error on its site. (I don't know what further corrective steps the law review might be taking.)

Thanks to How Appealing for the pointer.

Making Data Available:

The Tulane Law Review controversy brings up an important point, which commenter Frog Leg noted: Shouldn't law reviews make a practice of including the raw data supporting an article's assertions in an Appendix, at least so long as the data wouldn't take more than several pages?

That way, law reviews would be reminded of their responsibility to check the data, and readers would find it more consistently accessible. Putting the data on the Web is nice, but it involves various risks, including the risk that the law review editors won't feel it to be their responsibility to check it -- and the risk that it will get taken down. As it happens, though the law review article states that "The table will be available without charge on the Tulane Law Review's Web site ... for one year from publication," the law review has taken down the original table in the wake of the errors that have been discovered. This makes it harder for future researchers to closely follow the course of the controversy; even if the revised table is eventually posted, it will be hard for people to see what errors had originally been made.

Had all the cases been included in a short appendix, the data would have been permanently available the same way the text is permanently available. Of course, if there were a practice of putting the data online while still having it cite-checked, and still having a firm promise on the law review's part that the data would be permanently retained -- perhaps in some centralized repository from which the data couldn't vanish as a result of law review decisions, or for that matter law review technical errors -- that might be as good as or better. But for now, putting the material in the article's text remains the most traditional and most reliable way of preserving the data, and seems quite sensible for datasets that don't take more than several pages.

Law Review Editors, Take Note:

I just wanted to stress that the Tulane Law Review article incident isn't just an interesting story of academic error -- it's also a story of law review embarrassment. I'm pretty sure that no law review likes to have to post on its front page,

Erratum

The Louisiana Supreme Court in Question: An Empirical Statistical Study of the Effects of Campaign Money on the Judicial Function published in Volume 82 of the Tulane Law Review at 1291 (2008), was based on empirical data coded by the authors, but the data contained numerous coding errors. Tulane Law Review learned of the coding errors after the publication. Necessarily, these errors call into question some or all of the conclusions in the study as published. The Law Review deeply regrets the errors.

I assume the law review will also have to publish a print correction. The incident also led the law school dean to feel obligated to apologize publicly for the errors in the article, and though the apology said the law review members did nothing wrong, the matter can't have been great for relations between the dean and the journal. And I suspect the incident in some measure tarnished the law review's brand with local employers, especially those who are friendly with the judges whom the article criticized based on inaccurate information (and an unsound conflation of causation and correlation).

Of course, law reviews must accept the risk of public hostility when they publish articles that criticize much-liked people and institutions. That's part of law review editors' responsibilities as participants in the scholarly publishing process. But the hostility is likely to be considerably higher when the criticisms prove to be based on error. And it's one thing to incur unjustified hostility in the service of truth, and quite another to incur justified condemnation because one's institution has been mistaken.

So it seems to me that there are three important lessons here:

1. When the author's article rests on data that you can check, check it. Here, the data was information about who voted which way in certain cases, and who got what contributions from whom -- something cite-checkers are amply competent to check; and checking the data for fewer than 200 cases is not a crushing burden.

If the data had been in footnotes or in an appendix, as it is in many articles, the law review would have checked it. That the data never made its way into the printed article is no reason to skip checking it (as this incident illustrates). The printed article, after all, relied on the data, and errors in the data infected the information reported by the article. Had the law review done the cite-checking, it might have avoided the embarrassment to itself, its dean, and (incidentally) the authors.

2. Look closely through the article's description of what it's saying, and watch out for self-contradiction (especially when the article is controversial enough that authors might be tempted into some self-contradictory self-protection). So when a footnote says,

It is worth observing that this Article does not claim that there is a cause and effect relationship between prior donations and judicial votes in favor of donors' positions. It asserts instead that there is evidence of a statistically significant correlation between the two,

but the rest of the article repeatedly suggests causation -- for instance, saying that "This empirical and statistical study of the Louisiana Supreme Court ... demonstrates that some of the justices have been significantly influenced -- wittingly or unwittingly -- by the campaign contributions" (emphasis added) -- you should note the contradiction, and insist that the authors revise their claims to be internally consistent.

3. Finally, remember that correlation is not causation. If authors give evidence of correlation and from there make claims of causation, make sure that the evidence adequately supports the claims, for instance by controlling for possible confounding factors. If the claim is that X (here, contributions) causes Y (voting patterns), consider what things may cause both X and Y (for instance, even though ice cream sales and the rate of forcible rape are closely correlated, might something else cause both, rather than ice cream sales causing rape?). Ask also whether the causation runs the other way, which is to say whether Y or predictions of Y can cause X: For instance, might a contributor's prediction of a judge's voting patterns lead him to contribute to the judge's election campaign, even if the contribution in no way influences the judge's vote? And if there are possible other explanations, does the author deal adequately with them?
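To make the ice cream example concrete, here's a short sketch (all numbers invented for illustration) in which temperature drives both ice cream sales and crime; the two correlate strongly in the raw data, and the correlation largely vanishes once the shared cause is held roughly fixed.

```python
# Toy confounder demo: temperature drives both ice cream sales and crime,
# so the two correlate even though neither causes the other. All numbers
# are invented for illustration.
import random

random.seed(1)
temps = [random.gauss(70, 15) for _ in range(5_000)]
ice_cream = [2.0 * t + random.gauss(0, 10) for t in temps]
crime = [0.5 * t + random.gauss(0, 5) for t in temps]

def corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

print(f"raw correlation: {corr(ice_cream, crime):.2f}")  # strong

# Hold the confounder roughly fixed: look only at days near 70 degrees.
near_70 = [(i, c) for t, i, c in zip(temps, ice_cream, crime) if 68 <= t <= 72]
print(f"correlation near 70F: "
      f"{corr([i for i, _ in near_70], [c for _, c in near_70]):.2f}")  # near 0
```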

Coming up with these alternative explanations doesn't require an understanding of statistics; even law review editors with little mathematical skill can do this. And law review editors should ask such skeptical questions just as they should look for counterarguments to authors' key doctrinal or normative assertions, and make sure that the authors deal with at least the main such counterarguments. If the authors do a poor enough job of dealing with these counterarguments, you should reject the article; or if you think the article is basically sound but needs to respond to those counterarguments, you should insist that the authors deal with them.

Authors should rightly have a great deal of discretion in how they craft their arguments. But when they don't adequately respond to the obvious counterarguments to their main assertion -- for instance, when they claim causation based on correlation, but don't control for obvious confounding factors -- part of your job is to call them on this.

And if you don't, when others call the authors on the errors, the result can be embarrassment for you as well as for the authors.

Why Speculate, When You Can Look It Up?

Commenter bdog, commenting on one of the posts on the badly done study of the Louisiana Supreme Court, writes:

Why is is it that no one has bothered to remark on the policitcal affiliations of the Judges? Without reading either the article or the rebuttal, I would bet that the accused judges are probably considered 'conservative', if not actual members of the republican party.

My reasoning? Simple logic and statistics: Professors are democrat 10-1. Law students, especially those on the Law Review are probably democrat by 30-1.(And personally, I would also bet money that on most Law Reviews, republicans/conservatives aren't represented at all.) And of course, the lack of attention to detail, like fact checking, faulty statistical analysis (can anyone say global warming), are just part and parcel of what is accepted as scholarship and scientific concensus, as long as it advances the 'correct' agenda.

The editors reviewed the article and it was just too good to check. If I ever need a lawyer, and I have in the past, I'll just make sure that it isn't one from Tulane.

And of course, law professors wouldn't write an article like this about democrat judges. It. Just. Does. Not. Happen.

Of course, I could be wrong.

Well, one can certainly speculate based on generalized assumptions about students' political orientations. Or one can look things up, for instance in Judgepedia (you'd find it quickly just by searching for, for instance, Pascal Calogero Democrat Republican).

Looking it up will reveal that the three justices as to whom the study purported to show influence from contributions -- Pascal Calogero, Catherine Kimball, and John Weimer -- are Democrats. (I've also confirmed this through non-open-source sources. Please note the "purported to show"; my point in these posts is that the study is badly flawed, and does not adequately demonstrate its claims of causation.) Of the remaining four justices, for whom the study didn't purport to show such influence, two were Republicans (Chet Traylor and Jeffrey Victory) and two were Democrats (Bernette Johnson and Jeannette Theriot Knoll).

So a law professor did cowrite an article like this about Democratic judges. It. Just. Did. Happen.

A broader point: It may well be true that as a general matter, more law review editors in the country are Democrats or lean Democratic than are Republicans or lean Republican, and that may in particular be true as to Tulane -- or it might not be; that's all sheer speculation. And it surely is the case that even fair-minded Democrats are more likely to assume the worst about Republicans and cut slack to Democrats, just as even fair-minded Republicans are more likely to assume the worst about Democrats and cut slack to Republicans. That's human nature.

But it's dangerous to speculate from such general tendencies to the facts in any particular case. And it's pretty pointless to do so, when the actual facts are pretty easily available.
