Science, Political Ignorance, and Deference to “Authoritative” Experts

Economist David Friedman has an insightful post on the problems inherent in deferring to the views of “authoritative” scientific bodies:

A pattern I have observed in a variety of public controversies is the attempt to establish some sort of official scientific truth, as proclaimed by a suitable authority—a committee of the National Academy of Science, the Center for Disease Control, or the equivalent. It is, in my view, a mistake, one based on a fundamental misunderstanding of how science works. Truth is not established by an authoritative committee but by a decentralized process which (sometimes) results in everyone or almost everyone in the field agreeing.

Part of the problem with that approach is that, the more often it is followed, the less well it will work….

The first time it might work, although even then there is the risk that the committee established to give judgement will end up dominated not by the most expert but by the most partisan. But the more times the process is repeated, the greater the incentive of people who want their views to get authoritative support to get themselves or their friends positions of influence within the organization, to keep those they disapprove of out of such positions, and so to divert it from its original purpose to becoming a rubber stamp for their views. The result is to subvert both the organization and the scientific enterprise, especially if support by official truth becomes an important determinant of research funding.

I. The Dangers of Deference to Biased Experts.

Friedman makes two important points here. Scientific truth cannot be established by the endorsement of an authoritative body such as the NAS or the CDC. And if people start to take the pronouncements of such expert bodies as gospel, there is an obvious potential for abuse.

Both problems are exacerbated in cases where the scientific question at issue is relevant to some hot-button political controversy. When it comes to politics, most people have strong incentives to be “rationally ignorant,” and therefore devote little time and effort to determining whether the pronouncements of “experts” are really backed by evidence or not. Given the very low chance that your vote in an election will be decisive, there is little incentive to make a serious effort to double-check the pronouncements of experts on political issues, if your only motivation for doing so is to figure out which candidate or party has the “right” position on a given issue. For similar reasons, voters tend to be highly biased in evaluating whatever information they do learn about politics, often acting as “fans” for their respective party or ideology rather than as objective truth-seekers. This often leads them to place excessive credence in real or imagined experts who support their preexisting views, while discounting those on the other side.

II. Why Deference is Often Unavoidable.

That said, I don’t believe we can simply dispense with deference to scientific experts. There are so many complex issues in the world that none of us have the time or expertise to really delve into the evidence on more than a small fraction of them. As I explained in reference to the “Climategate” controversy in 2009:

[O]ur knowledge of complex issues we don’t have personal expertise on is largely based on social validation. For example, I think that Einsteinian physics is generally more correct than Newtonian physics, even though I know very little about either. Why? Because that’s the overwhelming consensus of professional physicists, and I have no reason to believe that their conclusions should be discounted as biased or otherwise driven by considerations other than truth-seeking. My views of climate science were (and are) based on similar considerations. I thought that global warming was probably a genuine and serious problem because that is what the overwhelming majority of relevant scientists seem to believe, and I generally didn’t doubt their objectivity.

Even if you consider yourself a great skeptic, I suspect that you too defer to expertise on many issues. You probably follow your doctor’s advice on what medicine to take when you are sick, usually without first reading up on the scientific literature on that medicine’s effectiveness, and almost certainly without performing your own laboratory experiments to assess its potency first-hand.

III. Increasing Our Expertise on When to Defer to Experts.

Given the near-inevitability of deference to experts, can we avoid the pitfalls Friedman rightly emphasizes? There’s no perfect solution. But some rules of thumb can help. First, deference to expertise is more warranted in cases where there is an expert consensus that crosses ideological lines. Like the rest of us, experts are prone to ideological bias. Thus, if experts of differing ideologies converge on the same conclusion, that’s a sign that the resulting opinion is really driven by expertise rather than bias. It doesn’t prove that the experts are right, of course, but it does justify a stronger presumption in their favor.

When, on the other hand, experts do split along ideological lines, that suggests the issue is more disputable, and that bias may be influencing their judgment. It doesn’t mean that the experts are wrong or that their expertise is useless. Their views are still probably worth listening to more than those of laypeople. But it does mean that we should be more cautious about concluding that an expert pronouncement must be correct simply because the person or the institution making it has impressive credentials.

A weaker but still significant test of expert reliability is to ask whether expertise makes you more likely to support a given conclusion, after controlling for ideology and other factors that might bias judgment. For example, if experts in a given field are 50% more likely to believe X about a key controversy in their area of expertise than are otherwise comparable non-experts, that is some indication that X derives support from the evidence and from relevant expert analysis of it. Bryan Caplan’s research on the differences between economists and laypeople on economic policy issues is a good example of this kind of analysis. He identifies many issues on which expertise in economics has a major effect on policy views even after controlling for ideology, self-interest, and various relevant demographic variables. That doesn’t mean that economists are necessarily right about those economic issues where they differ from laypeople. But it does suggest that the difference really is a product of their expertise and is therefore entitled to greater deference than a supposedly expert judgment that is mostly driven by ideology or narrow self-interest.
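To make the “controlling for other factors” step concrete, here is a minimal sketch in Python of the kind of regression such an analysis might run. The data, variable names, and coefficients are entirely hypothetical, and this is not a reconstruction of Caplan’s actual methodology; the point is simply what it means for an expertise effect to survive statistical controls:

```python
# A minimal sketch (hypothetical data, not Caplan's actual method):
# does being an expert predict belief in X once we hold ideology
# and income constant?
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000

# Hypothetical survey covariates.
expert = rng.integers(0, 2, n)    # 1 = trained economist, 0 = layperson
ideology = rng.normal(0, 1, n)    # standardized left-right scale
income = rng.normal(0, 1, n)      # standardized income

# Simulate beliefs: both expertise and ideology shift the odds of believing X.
logit = -0.2 + 1.0 * expert + 0.8 * ideology + 0.1 * income
believes_x = rng.random(n) < 1 / (1 + np.exp(-logit))

# Logistic regression of belief on expertise plus controls.
X = sm.add_constant(np.column_stack([expert, ideology, income]))
model = sm.Logit(believes_x.astype(float), X).fit(disp=False)
print(model.summary(xname=["const", "expert", "ideology", "income"]))
```

In a setup like this, a large and statistically significant coefficient on expert, after ideology and income are held constant, is the kind of pattern the heuristic treats as evidence that expertise rather than bias is driving the belief.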

Finally, as in the Climategate controversy, it may be worth considering whether experts in a given field have good incentives to pursue the truth, or whether those incentives are skewed by funding sources or by the ability of one faction to “freeze out” those who dispute the received orthodoxy. However, crude analysis of funding incentives can be even more misleading than simply ignoring them entirely. Unfortunately, properly assessing the impact of incentives on the range of views expressed by experts in a given field itself often requires detailed knowledge that most of us do not have.

Such rules of thumb don’t matter much in cases where you know enough about the field in question to assess the evidence for yourself. But in the many situations where we must defer to experts, they might help reduce the dangers inherent in doing so.
