Be Careful Trusting Data, Even in Nature:

I found Ben Barres' Nature article, "Does Gender Matter?", to be very interesting, and one thing that quite struck me was this assertion: "[D]espite all the social forces that hold women back from an early age, one-third of the winners of the elite Putnam Math Competition last year were women." Perhaps I overestimated the importance of this assertion because I'm actually familiar with the Putnam Competition (I never participated, and I'm not nearly good enough at math to get anywhere near top scores on it, but I occasionally look at some old problems and enjoy taking a whack at them). Still, the competition seems to test creative math ability and not just rote application of rules, and to test high-end ability: The "does gender matter?" debate in science faculty hiring, after all, has to do with claimed differences between the very far right tails of the male and female math ability bell curves, not between the average man and the average woman, or even between the average male and female college students.

So, I thought, if one-third of the winners — basically, of the top 15 or so finishers — of the competition are women, despite the social pressures that I'm quite sure would drive down the number of successful women, that really is a powerful data point. (Recall that the question generally isn't whether the disproportionate representation of men and women in high-end science jobs is due entirely to biology, but only whether it's due partly to biology.)

Unfortunately, when I looked more closely at this data point, it turned out to be in error. Here's what I submitted as a letter to the editors of Nature:

Dear Editors:

I read with interest Ben Barres' "Does gender matter?" (13 July 2006), and particularly the statement that "one-third of the winners of the elite Putnam Math Competition last year were women." This struck me as a telling piece of evidence: If indeed so many women performed so well in such a respected competition, this would substantially undermine assertions of biological gender differences at the higher levels of mathematical ability.

Unfortunately, on further research, it seems that this statement is mistaken. Last year's (2005's) top 16 finishers seem to have included only one woman (UNL 2005). Prof. Barres was likely referring to 2004, but even in that year the top 15 included only four women (Hopkins 2005; UNL 2004). In 2003, two of the top 16 were women (UNL 2003; Princeton 2006). In 2002 and 2001, the number was one of 15. Perhaps I'm mistaken, despite my attempts to verify the ambiguous names; but this is the data as best I can determine it.

Prof. Barres' other claims in the article may well be accurate; the data I cite above certainly don't prove that the reason for the low numbers is even partly biological sex differences. On the other hand, I thought it might be helpful to let readers know that one particular piece of evidence mentioned in the article seems mistaken.

Eugene Volokh
UCLA School of Law

Sources: Hopkins, Nancy, 2005. "Academic Responsibility and Gender Bias," XVII MIT Faculty Newsletter No. 4, pp. 1, 24.
UNL Web site, 2005. "The William Lowell Putnam Mathematical Competition, Announcement of Winners ...."
Princeton, 2006. Telephone Conversation with Mathematics Department at Princeton University, July 19, 2006.

Unfortunately, Nature has decided not to publish the letter; here's their response:

Dear Professor Volokh

Thank you for your letter. We have checked into the figures and it seems that in 2004 four of the fifteen top ranked Putnam winners were women (one other might have been, we can't tell). Although we agree that it is unfortunate that we did not include the year in the relevant sentence in the commentary, we feel that 4 (probably, but maybe 5) out of 15 is sufficiently close to one-third not to publish a correction on this occasion.

Thank you again for writing to us.

Perhaps it's me, but the response seems to miss my point — not only was the number 5 likely wrong (as Prof. Nancy Hopkins' article agrees), and not only was the year wrong (not just omitted, but wrong, since the story unambiguously says "last year"), but the data that Prof. Barres cites are highly unrepresentative, and their unrepresentativeness is hidden by the omission of the year.

If you see "last year the results were X," that might suggest to you that the results in previous years were similar but only the last year was mentioned because it's the closest data point; or it might suggest to you that in any event the trend is towards last year's X. But if you see "in 2004, the results were X," you'd be much likelier to quickly recognize that maybe the 2005 results were different. And given how different the results are — in reverse chronological order, they seem to be 1, 4, 2, 1, 1 — is it quite right to solely cite the 4 (even setting aside the dispute about whether it's 4 or 5); to suggest that it's the most recent result; and to omit the four data points, one of them a more recent one, that would suggest a very different situation?
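For concreteness, here's a quick illustrative tally of those figures, using only the counts cited in my letter above (the top-group sizes of 15 or 16 per year are as reported there):

```python
# Women among the top Putnam finishers, per the figures cited in the letter:
# year -> (women in the top group, size of the top group)
results = {
    2005: (1, 16),
    2004: (4, 15),  # the year Barres apparently relied on
    2003: (2, 16),
    2002: (1, 15),
    2001: (1, 15),
}

women = sum(w for w, _ in results.values())   # 9
total = sum(n for _, n in results.values())   # 77

print(f"2004 alone:         {4 / 15:.0%}")          # -> 27%
print(f"2001-2005 combined: {women / total:.0%}")   # -> 12%
```

The cherry-picked year suggests roughly the "one-third" figure; the five years taken together suggest something closer to one-eighth.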

Two notes. First, I corresponded with Prof. Barres when trying to track all this down, and he was quite gracious about it. I'm sure his error was entirely innocent; I just wish the Nature editors were willing to correct it. Second, I should stress that the aggregate data do not prove that biology is the reason for the disparity; cultural factors may well account for the entire gulf even so. My point is simply that one of the reasons to believe that the biological factors are absent or slight — the claim that women's representation among top Putnam finishers is much closer to parity — appears not to be correct.

And, more broadly, as the title suggests, don't trust everything you read — even relatively easily verifiable data in a respected journal such as Nature.