At Watts Up With That, David M. Hoffer has an odd essay on peer review:
[I]s the notion of climate science today as easily falsified by simple observation? I submit that it is. We have the climate models themselves upon which to rely.
For what are the climate models other than the embodiment of the peer reviewed science? Is there a single model cited by the IPCC that claims to not be based on peer reviewed science? Of course there isn’t. Yet simple observation shows that the models, and hence the peer reviewed literature upon which they are based, are wrong. We have none other than the IPCC themselves to thank for showing us that.
The leaked Second Order Draft of IPCC AR5 laid bare the failure of the models to predict the earth’s temperature going forward in time. In fact, if one threw out all but the best 5% of the model results…they would still be wrong, and obviously so. They all run hotter than reality. Exposed for the world to see that the models (and hence the science upon which they are based) had so utterly failed, the IPCC responded by including older models they had previously declared obsolete as now being part of the current literature….
No longer is the debate in regard to if the models are wrong. The debate is now about why the models are wrong. The models having fallen, the peer reviewed science they purport to represent falls with them.
While Hoffer is correct that we now have enough data to know that most prior climate models are wrong, his logic is faulty. His main argument is that if the models are wrong and if they are based on the peer-reviewed literature, then the peer-reviewed literature in bulk is wrong. That does not follow logically. You may be able to falsify the published models, but that doesn’t falsify most of the climate literature, just most of the models and the segment of the literature presenting them.
To illustrate my point, suppose I develop a model to predict the stock market, and it fails to predict prices in the years after I announce it. If I based the bad model on good data about patterns in stock prices (or even on the peer-reviewed literature on stock prices), its failure does not invalidate the quality of the data or the peer-reviewed literature in bulk.
One can base bad models on mostly (or entirely) good data and good research. For example, in building a model one might cherry-pick results, use a method for selecting temperature proxies that is biased in favor of finding a signal, or reverse the signs of the data by treating evidence of warm years as cold years and vice versa. In my opinion, Hoffer’s logic does not stand.
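The point can be made concrete with a small sketch. The script below is hypothetical: it fabricates a trendless series of "prices" (the good data), then fits one line to all of it and another line to a cherry-picked subset chosen to manufacture an upward signal. The cherry-picked model predicts a rise that never happens out of sample, even though every data point it used was genuine. The data, fit_line helper, and day counts are all invented for illustration, not drawn from any real market or climate series.

```python
import random

random.seed(0)  # deterministic illustration

# Hypothetical "good data": 200 daily prices fluctuating around 100
# with no real trend. The data itself is perfectly sound.
days = list(range(200))
prices = [100 + random.gauss(0, 1) for _ in days]

def fit_line(xs, ys):
    """Ordinary least-squares fit; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Honest model: fit to all of the first 100 days.
# Slope comes out near zero, as it should for trendless data.
slope_full, _ = fit_line(days[:100], prices[:100])

# Cherry-picked model: keep only early days that dipped low and late
# days that ran high, manufacturing an upward "signal" from good data.
picked = [(d, p) for d, p in zip(days[:100], prices[:100])
          if (d < 50 and p < 100) or (d >= 50 and p > 100)]
slope_picked, intercept = fit_line([d for d, _ in picked],
                                   [p for _, p in picked])

# Out of sample, the cherry-picked model forecasts rising prices,
# while the actual series keeps hovering around 100.
predicted_day_150 = slope_picked * 150 + intercept
actual_later_mean = sum(prices[100:]) / 100

print(f"honest slope:        {slope_full:+.4f}")
print(f"cherry-picked slope: {slope_picked:+.4f}")
print(f"model says ~{predicted_day_150:.1f} by day 150; "
      f"reality stays ~{actual_later_mean:.1f}")
```

The failure here falsifies only the selection procedure, not the underlying price series, which is the distinction Hoffer's argument misses.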