This Steven Landsburg piece in Slate has prompted blogospheric commentary (Brad DeLong, Tyler Cowen, John Quiggin, others) about the core empirical/policy claim: that minimum wage increases (at least of the scale that we’ve seen in the U.S. in the past couple decades) have at most a small negative impact on employment.
I have a question, not about the economics but about one of Landsburg’s pieces of meta-evidence. Landsburg writes:
Twenty years ago, they’d have told you otherwise. Back then, dozens of published studies concluded that minimum wages had put a lot of people (especially teenagers, blacks, and women) out of work. As the studies continued to pile up, you might think we’d have grown more confident about their common conclusion. Instead, the opposite happened. Even though the studies were all in agreement, they managed to undercut each other.
Here’s how: Ordinarily, studies with large sample sizes should be more convincing than studies with small sample sizes. Following the fates of 10,000 workers should tell you more than following the fates of 1,000 workers. But with the minimum-wage studies, that wasn’t happening. According to the standard tests of statistical significance, the results of the large-scale studies were, by and large, neither more nor less significant than the results of the small-scale studies. That’s screwy. Screwy enough to suggest that the studies being published couldn’t possibly be a representative sample of the studies being conducted.
Here’s why that matters: Even if minimum wages don’t affect employment at all, about five out of every 100 studies will, for unavoidable statistical reasons, appear to show a significant effect. If you could read all 100 studies, that wouldn’t be a problem—95 conclude the minimum wage is pretty harmless as far as employment goes, five conclude it’s a big job-killer, you realize the latter five are spurious, and you draw the appropriate conclusion. But if the 95 studies that found no effect were deemed uninteresting and never got published, then all you’d see were the spurious five. And then the next year, another five, and the next year another five.
Even when the bulk of all research says one thing, the bulk of all published research can tell a very different and very misleading story.
How do we know what was in all the unpublished research about the minimum wage? Of course we don’t know for sure, but here’s what we do know: First, the big published studies were no more statistically significant than the small ones. Second, this shouldn’t happen if the published results fairly represent all the results. Third, that means there must be some important difference between the published and the unpublished work. And fourth, that means we should be very skeptical of what we see in the published papers.
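Landsburg’s first premise can be checked with a quick simulation (a sketch, not drawn from any of the actual minimum-wage data; the effect size and sample sizes here are made-up illustrations). If the true effect is nonzero, larger studies should produce systematically larger t-statistics; only under a null effect is significance unrelated to sample size:

```python
import random
import statistics

random.seed(0)

def t_stat(n, effect):
    """Simulate one 'study': n noisy observations of an effect with the
    given true mean and unit standard deviation; return the t-statistic."""
    xs = [random.gauss(effect, 1.0) for _ in range(n)]
    return statistics.fmean(xs) / (statistics.stdev(xs) / n ** 0.5)

def avg_abs_t(n, effect, reps=500):
    """Average |t| across many simulated studies of size n."""
    return statistics.fmean(abs(t_stat(n, effect)) for _ in range(reps))

# With a genuine (hypothetical) effect, big studies beat small ones:
small_true = avg_abs_t(100, 0.2)
large_true = avg_abs_t(1000, 0.2)

# With no true effect, study size makes no difference to significance:
small_null = avg_abs_t(100, 0.0)
large_null = avg_abs_t(1000, 0.0)

print(small_true, large_true)  # large_true roughly sqrt(10) times bigger
print(small_null, large_null)  # roughly equal
```

So a literature in which large and small studies are equally significant looks like the null-effect row, which is exactly what makes Landsburg suspect the published studies were selected rather than representative.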
But if it were really the case that the minimum wage was employment-neutral, and that the studies finding otherwise were just statistical noise, then shouldn’t those spurious results be equally distributed across pro-employment and anti-employment findings? That is, the distribution should be: 95 (unpublished) studies showing no effect, 2.5 (published) studies showing a pro-employment effect, and 2.5 (published) studies showing an anti-employment effect. And surely the pro-employment-effect studies would get published; after all, they have a very interesting and policy-relevant counterintuitive result.
If, as Landsburg claims, the published studies are “all in agreement” about the direction of the effect, then the underlying distribution of studies can’t be as he describes it, can it? Publication bias in favor of significant findings, superimposed on an actually-neutral relationship, ought to generate equal numbers of ostensibly-significant findings in each direction.
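The symmetry point is easy to verify by simulation (again a stylized sketch with made-up sample sizes, not the actual literature). Under a true null and a two-sided test at the 5% level, the spuriously significant studies split about evenly between the two signs:

```python
import random
import statistics

random.seed(1)

def study_t(n=200):
    """One simulated study under a true null effect: return its t-statistic."""
    xs = [random.gauss(0.0, 1.0) for _ in range(n)]
    return statistics.fmean(xs) / (statistics.stdev(xs) / n ** 0.5)

ts = [study_t() for _ in range(10_000)]

sig_pos = sum(t > 1.96 for t in ts)   # spurious "pro-employment" findings
sig_neg = sum(t < -1.96 for t in ts)  # spurious "anti-employment" findings

print(sig_pos, sig_neg)  # each near 250, i.e. about 2.5% per tail
```

If publication selected on significance alone, both tails would appear in print; a literature where only one sign ever surfaces needs some further selection mechanism, which is the question put to the econo-bloggers below.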
The econo-bloggers all seem to think Landsburg is basically right about the consensus view among economists. But is that consensus view really based on the meta-analysis position he describes? If so, what’s the explanation for the (according to Landsburg) absence of studies that randomly happen to show significant increases in employment?
John Quiggin answers, in an update to the same post linked to above.
Actually, the Card and Krueger study found weak positive impacts of minimum wages on employment using a data set where most of the obvious sources of bias had been removed. There may have been earlier studies with similar results, but they would almost certainly have been discarded, on reasonable grounds of weak statistical significance or omitted variable bias. By contrast, studies with similar weaknesses, but with the expected sign would have been published.