In 2006, the Lancet published a controversial study finding a substantial, continuing Iraqi death toll in the years following the 2003 U.S. invasion. The study bolstered critics of the Iraq war and prompted substantial debate, online and elsewhere.
Neil Munro revisits the Lancet study in the new issue of National Journal.
In the ensuing years, numerous skeptics have identified weaknesses in the study’s methodology and conclusions. Political blogs and academic journals have registered and responded to the objections in a debate that has been simultaneously arcane and predictable. The arguments are arcane because that is the nature of statistical analysis. They are predictable because that is the nature of today’s polarized political discourse, with liberals defending the Lancet study and conservatives contesting it.
How to explain the enormous discrepancy between The Lancet’s estimate of Iraqi war deaths and the estimates from studies that used other methodologies? For starters, the authors of the Lancet study followed a model that ensured that even minor components of the data, when extrapolated over the whole population, would yield huge differences in the death toll. Skeptical commentators have highlighted questionable assumptions, implausible data, and ideological leanings among the authors, Gilbert Burnham, Riyadh Lafta, and Les Roberts.
I did not follow the debate closely enough to reach a conclusion about the merits of the study or its critics. The Munro article provides a convenient overview of the controversy for those of us without the time or patience to wade into the depths of the debate. Munro is not entirely neutral, however: he concludes that there are potential problems with the initial study.
Over the past several months, National Journal has examined the 2006 Lancet article, and another [PDF] that some of the same authors published in 2004; probed the problems of estimating wartime mortality rates; and interviewed the authors and their critics. NJ has identified potential problems with the research that fall under three broad headings: 1) possible flaws in the design and execution of the study; 2) a lack of transparency in the data, which has raised suspicions of fraud; and 3) political preferences held by the authors and the funders, which include George Soros’s Open Society Institute.
Of these critiques, I find the political preferences of the authors and their funders to be the least persuasive. Political bias of this sort could certainly explain problems with the study, such as a failure to scrutinize sources and ensure their reliability, but I do not think that the authors’ ideological predispositions (or those of the funders) should, in and of themselves, cast doubt on the study’s findings. The Lancet study’s conclusions should stand or fall on their own. In this regard, it is interesting that Munro reports the Lancet editors are less confident of the analysis than they once were.
Today, the journal’s editor, Richard Horton, tacitly concedes discomfort with the Iraqi death estimates. “Anything [the authors] can do to strengthen the credibility of the Lancet paper,” Horton told NJ, “would be very welcome.” If clear evidence of misconduct is presented to The Lancet, “we would be happy to go ask the authors and the institution for an official inquiry, and we would then abide by the conclusion of that inquiry.”