A reader passed along a citation to a very interesting article, Peter C. Austin, et al., Testing Multiple Statistical Hypotheses Resulted in Spurious Associations: A Study of Astrological Signs and Health, 59 J. Clinical Epidemiology 964 (2006). The basic point of the article is that, as an accompanying editorial (59 J. Clinical Epidemiology 871) notes, "spurious P-values arise at a surprisingly high frequency if a researcher has sufficient creativity and a large database." The lesson, I think, is the importance of adhering to the Bradford Hill criteria before drawing any conclusions from an epidemiological study; also, before scientific journals publish such studies, they should ask the authors how many potential associations they investigated. Given the standard 0.05 significance threshold (that is, a 95 percent confidence level), a researcher who tests one hundred different associations can expect to find about five of them "significant" by chance alone.
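The arithmetic behind that last sentence is easy to check by simulation. The sketch below (illustrative only; the function name and parameters are my own, not from the article) relies on the fact that under a true null hypothesis a p-value is uniformly distributed, so each of one hundred tests clears the 0.05 bar by chance with probability 0.05:

```python
import random

random.seed(42)

def expected_spurious(n_tests=100, alpha=0.05, n_trials=10_000):
    """Under a true null hypothesis a p-value is uniform on [0, 1], so each
    test independently looks 'significant' with probability alpha.  Return
    the average number of spurious hits per study of n_tests tests."""
    total = 0
    for _ in range(n_trials):
        total += sum(1 for _ in range(n_tests) if random.random() < alpha)
    return total / n_trials

avg = expected_spurious()
print(avg)  # roughly n_tests * alpha = 5 spurious "associations" per study
```

Worse still, the chance that such a study turns up at least one spurious association is 1 - 0.95^100, or better than 99 percent.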
I can't say I'm any sort of expert in statistics, but from what I've seen at academic workshops, economists (including law and economics scholars) tend to be even sloppier than epidemiologists about (a) exhibiting naivete about the importance of finding a "statistically significant" result; (b) concluding that they have discovered causation when they've only discovered, at best, correlation; and (c) coming up with unconvincing post-hoc rationalizations as to why they manipulated their data set in a particular way that just happened to help them achieve a "positive" result. That's not to say that there isn't some fine empirical work out there in the law and economics literature, just that appropriate cautions and safeguards don't seem to me to be as built into the professional culture as they should be.

One safeguard I'd like to see: to prevent data dredging and other forms of manipulation, researchers should publish on the Internet their research protocol, what factors they are going to consider, and why, BEFORE they start looking for results. For (random) example, if one is going to do a paper on the effect of mandatory sentencing on crime rates, one should announce in advance which crimes one is going to consider and why, and not instead be able to first eliminate robberies, then eliminate rapes, then add back robberies and eliminate burglaries, etc., until one comes up with an "interesting" result, after which one can post hoc rationalize why one chose the crimes one did.
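That kind of specification search can be simulated too. In the sketch below (a toy model of my own construction, not anyone's actual study), a "policy" has no effect at all on any of seven crime categories, yet a researcher who is free to test every possible combination of categories and report the best one finds a "significant" effect far more often than the nominal 5 percent of the time:

```python
import math
import random
from itertools import combinations

random.seed(1)

def p_value(z):
    # Two-sided p-value for a standard-normal test statistic.
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

def one_study(n=50, k=7):
    """Simulate a null study: a 'policy' group and a control group of n
    jurisdictions each, with k crime categories that are pure noise.
    Return the p-value of the pre-specified test (all k categories) and
    the smallest p-value over every nonempty subset of categories."""
    diffs = []
    for _ in range(k):
        treat = sum(random.gauss(0, 1) for _ in range(n)) / n
        ctrl = sum(random.gauss(0, 1) for _ in range(n)) / n
        diffs.append(treat - ctrl)  # N(0, 2/n) under the null

    def subset_p(idx):
        m = len(idx)
        avg = sum(diffs[i] for i in idx) / m
        z = avg * math.sqrt(n * m / 2.0)  # standardize the mean difference
        return p_value(z)

    pre_specified = subset_p(range(k))
    best = min(subset_p(s)
               for r in range(1, k + 1)
               for s in combinations(range(k), r))
    return pre_specified, best

trials = 400
pre_hits = search_hits = 0
for _ in range(trials):
    pre, best = one_study()
    pre_hits += pre < 0.05
    search_hits += best < 0.05

pre_rate = pre_hits / trials      # stays near the nominal 5 percent
search_rate = search_hits / trials  # inflated well above 5 percent
```

The honest, pre-announced test rejects at roughly the advertised 5 percent rate; the dredged test rejects several times as often, even though nothing is going on. That asymmetry is the whole case for publishing the protocol in advance.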