Daniel Davies is impressed with the Lancet study estimating 655,000 “excess” Iraqi deaths since the U.S.-led invasion. Based on his experience conducting surveys in Iraq, Steve E. Moore is not.
Of Moore’s criticisms, the most significant seem to be 1) the relatively small number of cluster points used for the survey (which, he claims, is more important than the number of actual interviews in the sample); and 2) the failure to obtain demographic data on those interviewed so as to verify the representativeness of the survey sample.
[T]he key to the validity of cluster sampling is to use enough cluster points. In their 2006 report, “Mortality after the 2003 invasion of Iraq: a cross-sectional sample survey,” the Johns Hopkins team says it used 47 cluster points for their sample of 1,849 interviews. This is astonishing: I wouldn’t survey a junior high school, no less an entire country, using only 47 cluster points.
Neither would anyone else. For its 2004 survey of Iraq, the United Nations Development Program (UNDP) used 2,200 cluster points of 10 interviews each for a total sample of 21,688. True, interviews are expensive and not everyone has the U.N.’s bank account. However, even for a similarly sized sample, that is an extraordinarily small number of cluster points. A 2005 survey conducted by ABC News, Time magazine, the BBC, NHK and Der Spiegel used 135 cluster points with a sample size of 1,711–almost three times that of the Johns Hopkins team for 93% of the sample size. . . .
With so few cluster points, it is highly unlikely the Johns Hopkins survey is representative of the population in Iraq. However, there is a definitive method of establishing if it is. Recording the gender, age, education and other demographic characteristics of the respondents allows a researcher to compare his survey results to a known demographic instrument, such as a census. . . .
while the gender and the age of the deceased were recorded in the 2006 Johns Hopkins study, nobody, according to Dr. Roberts, recorded demographic information for the living survey respondents. . . .
Without demographic information to assure a representative sample, there is no way anyone can prove–or disprove–that the Johns Hopkins estimate of Iraqi civilian deaths is accurate.
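Moore's complaint about cluster counts has a standard statistical form: under cluster sampling, the design effect DEFF = 1 + (m − 1)ρ, where m is the average number of interviews per cluster and ρ is the intra-cluster correlation, inflates the variance relative to a simple random sample of the same size. A minimal sketch of the arithmetic (the value of ρ below is purely hypothetical, not taken from either survey):

```python
# Sketch of the design-effect calculation for cluster sampling.
# DEFF = 1 + (m - 1) * rho, where m is the average number of
# interviews per cluster and rho is the intra-cluster correlation.
# The "effective" sample size is the equivalent simple random sample.
def effective_sample_size(n_total, n_clusters, rho):
    m = n_total / n_clusters        # average interviews per cluster
    deff = 1 + (m - 1) * rho        # design effect
    return n_total / deff           # equivalent simple-random-sample size

# Johns Hopkins design: 1,849 interviews in 47 clusters.
# rho = 0.05 is a hypothetical illustrative value.
print(round(effective_sample_size(1849, 47, 0.05)))   # prints 634
# ABC/Time/BBC/NHK/Der Spiegel design: 1,711 interviews in 135 clusters.
print(round(effective_sample_size(1711, 135, 0.05)))  # prints 1080
```

Under this (made-up) ρ, the design with far more, smaller clusters retains a much larger effective sample despite having fewer total interviews, which is the substance of Moore's first criticism.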
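The demographic check Moore describes (comparing the sample's composition against a known benchmark such as a census) can be sketched as a chi-square goodness-of-fit test. All counts and census shares below are hypothetical, since, per Moore, no such demographics were recorded for the living respondents:

```python
# Sketch of a representativeness check: compare observed sample
# counts in demographic categories against census proportions
# using a chi-square goodness-of-fit statistic. All data here
# are hypothetical, for illustration only.
def chi_square_stat(observed, census_props):
    n = sum(observed)
    expected = [p * n for p in census_props]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical age-group counts from a sample of 1,849 respondents,
# versus hypothetical census shares for the same four age groups.
observed = [510, 620, 430, 289]
census = [0.30, 0.33, 0.22, 0.15]
stat = chi_square_stat(observed, census)
# With 4 categories there are 3 degrees of freedom; the 5% critical
# value of the chi-square distribution at 3 df is about 7.81, so a
# statistic above that would flag the sample as unrepresentative.
print(round(stat, 2))
```

This is the "definitive method" the quoted passage refers to: without the demographic variables, no such comparison against a census is possible in either direction.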
These are criticisms to which I have not seen responses elsewhere (e.g. here). There is no charge of fraud here, just one of poor methodology.
I do not mean to diminish the tragedy of civilian Iraqi deaths. I agree with Moore that “there have been far too many deaths in Iraq by anyone’s measure,” and I certainly believe that the Bush Administration’s poor policy decisions and execution bear a significant portion of the blame. Nonetheless, Moore seems to mount a serious challenge to the validity of the Lancet estimates.
UPDATE: Many commentators note that the study accounts for the relatively small number of clusters by reporting a confidence interval. This is true, and it explains the huge range of estimates presented (plus-or-minus 40-some percent of the point estimate). So, while the study is evidence that mortality rates in Iraq have gone up post-invasion (a point I did not think was in dispute), it does not provide a particularly reliable estimate of excess deaths. Of course, as Frank Cross notes here, Iraq is not exactly the easiest place to perform this sort of study these days.
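For concreteness, the interval published alongside the 654,965 point estimate was, as I read the report, 392,979 to 942,636 excess deaths. The "40-some percent" figure is just the relative half-width of that interval:

```python
# Relative width of the study's published 95% confidence interval
# (figures quoted from the 2006 Lancet report as I read it:
# point estimate 654,965; interval 392,979 to 942,636).
point, low, high = 654_965, 392_979, 942_636
below = (point - low) / point    # fraction below the point estimate
above = (high - point) / point   # fraction above the point estimate
print(f"-{below:.0%} / +{above:.0%}")  # prints -40% / +44%
```

An interval whose upper bound is well over double its lower bound supports both halves of the update's conclusion: the interval excludes zero, so it is evidence mortality rose, but it pins down the magnitude only loosely.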
Tim Lambert offers additional responses to Moore’s critique here.