I promised to start by addressing some common criticisms of prediction markets. What better way to start than by attacking my friend, GW colleague, and now co-conspirator Orin Kerr? Orin has at least twice (in 2005, and earlier this month) endorsed the criticism that the election markets don't seem to do much more than track the conventional wisdom. Orin is in good if unfamiliar company; Paul Krugman recently made a similar criticism.

Unfortunately for my attack, I don't entirely disagree. On issues for which there is likely to be lots of public information but little private information, prediction markets reflect what highly informed people believe. No better, but no worse. If you want to know what the probability is that X will be President, you probably won't be surprised by the prediction markets, but on average over many independent events the market's predictions will probably be at least slightly better than the ones you would make on your own.

A stronger version of this criticism insists that the markets are worse than the highly informed conventional wisdom. Critics will say that the markets put too much weight on the pro-Obama pre-New Hampshire polls or the pro-Kerry 2004 exit polls. I'm skeptical of this criticism. At any time, the markets may be slightly off, but if they have obvious, large imperfections, people will trade back to more sensible values.

Usually, such criticisms are made after the fact, and they often reflect hindsight bias. It seems obvious now that election observers put too much weight on the pro-Kerry and pro-Obama polls. But even the most sophisticated analysts may have trouble afterward figuring out why (see here on 2004 and here on 2008).

The real problem is that our models of voter behavior aren't as good as we'd like to think. No one cries foul because Tradesports.com in early October gave the Giants a 1% chance of winning the Super Bowl. Football is full of surprises. But whenever something unexpected happens in an election, we feel that we should have expected it all along.

Some might still object that if prediction markets do no more than reflect the fallible informed conventional wisdom, they aren't worth much. And indeed, election markets may do little but save some of us the time of reading the election news. But in a world of ideology, special interests, and agency costs, institutions could do a lot worse than rely on prediction markets in their decision making. The central argument for prediction markets for me is not that they are magically accurate, but that they are fairly objective.

But there is one other final point, hearkening back to yesterday's post: Many prediction markets that would be useful to institutions are on topics on which there may be little public or private information. It is especially important to constrain opportunistic decisionmakers when they are making claims that few if any people have the information to assess. With subsidies and automated market makers, these markets will not merely reflect existing conventional wisdom, but give incentives to a few individuals to do research and make disinterested forecasts.


I'll cry foul. Clearly their chances of winning the Super Bowl are much less than 1%! Go Pats!

The a priori analysis we've seen in the posts so far is no substitute for actual empirical testing. There must be papers on this: what have they found?

"Orin is in good if unfamiliar company; Paul Krugman recently made a similar criticism."

Them's fightin' words, Michael! Seriously, though, your points here seem very fair.

trail the results I was seeing on TV. The current results had Clinton slightly ahead with some precincts reporting, but Obama still had an 80-20 lead. I wondered why the market wasn't correcting for that; I figured that the market had some special knowledge about why the polls were wrong.

It turned out that they were just slow to correct, which seemed absolutely the wrong thing to expect from a prediction market. Surely, if the money is all sitting there on the table just waiting for your mouse click, you should be able to just grab it. (You can ask why I didn't do that arbitrage myself, and that's a fair question. The answer is that I didn't have an account set up, and rushing is nearly always a bad idea. And I didn't have much experience watching InTrade, so I wasn't familiar enough with how it worked to try to throw money at it.)

Can prediction markets be manipulated by wealthy people with an agenda? Could, for example, a George Soros spend $50M on Obama futures to create a trend?

Obviously, this would only be worthwhile if people paid a lot of attention to these markets.

how often those low-probability events happen. Do they happen as (in)frequently as the market says? If so, the low-probability events (occasionally) happening look like cases where the markets were right about their (low) probability. But without that kind of analysis, saying "a low probability event just happened" is more like explaining away the data than explaining it.

I take it that Orin does not like to be lumped in with Krugman. =)

---

Well, I think that Michael has made such a reasonable argument that he could just as well be a moderate critic of prediction markets as a moderate defender.

The problem that many of us have had with prediction markets is not with the markets themselves, but with the hype surrounding them. I don't think that moderate critics were ever trying to argue that such markets do not contain any relevant information at all.

The probabilities that they produce for unique events aren't worth a whole lot. They are more akin to subjective probabilities than to empirical probabilities, and as such, aren't necessarily worth much.

Here is another vicious attack. The claim, made by advocates of prediction markets, that the probabilities produced by such markets are accurate does not seem to be easily falsifiable. These "probabilities" are constantly updating in light of new information (reflecting either new events or the revelation of old events). When evaluating prediction markets, which of the many probabilities that these markets give us over time are we to take as correct?

Like I said, I am not saying that these markets do not contain any useful information, but I don't buy the hype.

To the extent that those who think George Soros is full of it do not have enough money collectively to "correct" his irrational spending on Obama futures, the answer is yes.

Suppose we are betting on the sum of a pair of dice, rolled one by one. Before the first die is rolled, the probability of an 11 is 1/18. The probability of an 11 then changes based on the roll of the first die. If the first die is 1-4, the probability of an 11 drops to 0; if the first die is a 5 or 6, the probability of an 11 increases to 1/6.
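As a sanity check on those figures, here is a minimal Python enumeration (my own sketch; variable names are invented) that confirms the 1/18 unconditional probability and the 1/6 conditional probability by counting the 36 equally likely rolls:

```python
from fractions import Fraction
from itertools import product

# Enumerate all 36 equally likely ordered rolls of two fair dice.
rolls = list(product(range(1, 7), repeat=2))

# Unconditional probability that the pair sums to 11: only (5,6) and (6,5).
p_11 = Fraction(sum(1 for a, b in rolls if a + b == 11), len(rolls))

# Conditional probability of an 11 given the first die is a 5 or 6.
given_high = [(a, b) for a, b in rolls if a in (5, 6)]
p_11_given_high = Fraction(sum(1 for a, b in given_high if a + b == 11),
                           len(given_high))

print(p_11)             # 1/18
print(p_11_given_high)  # 1/6
```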

The way to test the accuracy of a prediction market for this sort of die roll would work like this: we'd need a large number of different die rolls with contracts for all the possible outcomes. If the market is perfectly accurate, the "dice 1 sums to 11" contract should be trading at 1/18, as should the "dice 2 sums to 11" contract, the "dice 3 sums to 11" contract, etc., up to "dice n sums to 11", for n large.

After we roll the first die for each pair, the values of the contracts should all shift. The sum-to-11 contracts for the pairs whose first die was a 5 or 6 should now be worth 1/6, and the others should all be worthless, because to get an 11 one die must be 5 and the other must be 6.

Then to empirically test the predictions (rather than doing it a priori, because we know the probabilities of different dice rolls), we could look at how many contracts paid off. Assuming our dice are fair, we would find that about 1/18 of the "sum to 11" contracts paid off, which would mean that before the first round they all *should* have been trading at 1/18 if the markets were accurate. If they weren't, that indicates there's probably a problem with the market.

After the first round, to test the markets, we would look at the different probabilities assigned to the contracts. If the markets are perfectly accurate, we'd find that some of the sum-to-11 contracts were trading at 1/6 and others were trading at 0. We'd then look at how many of the contracts trading at 1/6 actually paid off and how many of the contracts trading at 0 paid off: if our dice are fair, those numbers should be roughly accurate. If the second die *wasn't* fair, but the contracts were trading at 1/6 and 0 anyway, that indicates a problem with the market: it failed to anticipate that the second die was loaded.
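This protocol is easy to simulate. A small Monte Carlo sketch (my own; it assumes fair dice and that the market always prices at 1/18 before the first roll and at 1/6 or 0 after) checks that those prices would indeed be empirically calibrated:

```python
import random

random.seed(0)
N = 200_000  # number of independent dice-pair "markets"

paid_pre = 0          # contracts paying off, out of all N priced at 1/18
high_first = 0        # markets repriced to 1/6 after a first roll of 5 or 6
paid_given_high = 0   # of those, how many pay off

for _ in range(N):
    a, b = random.randint(1, 6), random.randint(1, 6)
    if a + b == 11:
        paid_pre += 1
    if a in (5, 6):           # market reprices this contract to 1/6
        high_first += 1
        if a + b == 11:
            paid_given_high += 1
    # contracts whose first die was 1-4 are repriced to 0 and never pay off

print(paid_pre / N)                  # close to 1/18 ≈ 0.0556
print(paid_given_high / high_first)  # close to 1/6 ≈ 0.1667
```

A loaded second die would show up exactly as described: the contracts priced at 1/6 would pay off noticeably more or less than a sixth of the time.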

There is no reason the market's estimate of the probability could not be perfectly accurate at all times even though those estimates change over time. I am not a libertarian markets-are-perfect-therefore-prediction-markets-must-be-too type, but your argument doesn't work. What we need is empirical testing.

Imagine if we had a market for "will there be a terrorist attack on the Superbowl?" Terrorists could spend a million to lower the chances to 1%, which makes security lax, which lets their attack happen.

Add into this the fact that markets don't just predict, they also allow people to hedge against adverse results.

empirically, by actually rolling a large number of dice and seeing how close the market's estimated probabilities are to the actual number of 11s rolled. We can test the accuracy of prediction markets empirically in exactly the same way even though we do not know the "objective" probability of the predicted events happening.

I am using the example of dice to illustrate the way probabilities change over time, so that the market's estimate of the probability can be correct at all times even though it changes over time. This does not mean that the market's estimate of the probability is in fact correct at all times, or at any time, for that matter: that is a matter that requires empirical testing.

1. For each of the state races, place 1 vote on the front runner 1 week before the contract expires.

2. See how the contracts end up at the end. It should be about equal to your starting value.

This isn't scientifically rigorous. The various races aren't independent. And even if they were, predicting NH.DEM.CLINTON shouldn't necessarily have the same weight as predicting IA.REP.HUCKABEE.

But it would be a start.

(You could also try the above experiment by placing the bid on different time periods before the contract expires, maybe placing them at multiple times. You could also bid on other positions besides #1, maybe bidding on multiple positions. Exercise for the reader: how should betting on all positions resolve?)
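The experiment in steps 1 and 2 can be sketched in a few lines. All prices, race names, and outcomes below are invented for illustration; the point is only the bookkeeping: each $1 staked at price p buys 1/p contracts, each paying $1 if the front-runner wins, so a calibrated market returns roughly the total stake on average:

```python
# Hypothetical front-runner contracts one week out: (race, price, won?).
races = [
    ("NH.DEM.CLINTON",  0.65, 1),
    ("IA.REP.HUCKABEE", 0.55, 1),
    ("SC.DEM.OBAMA",    0.70, 1),
    ("MI.REP.ROMNEY",   0.60, 0),
]

stake_per_race = 1.0
total_staked = stake_per_race * len(races)

# $1 at price p buys 1/p contracts; each pays $1 if the front-runner wins.
payout = sum(stake_per_race / price * won for _, price, won in races)

print(f"staked {total_staked:.2f}, got back {payout:.2f}")
```

With enough independent races, a payout persistently above or below the stake would suggest the front-runner prices are systematically mispriced.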

My problem is not that I do not understand the idea of updating subjective probabilities.

(I will note that there is a huge difference between unstable probabilities and stable probabilities. When you update a stable probability due to old information, you can correctly say that the old probability was wrong. When you update an unstable probability because some event has come along that has actually changed the probability, then both could be right.)

Here is my vicious objection, hopefully more clearly stated. It is very difficult to falsify the probabilities produced by prediction markets when (1) the probabilities are unstable, as for elections, (2) the probabilities tend to be subjective, as for elections, and (3) the probabilities are updating constantly, as opposed to less frequently.

X event happens. You say X increases the probability of A being elected by 2%. I say 3%. Given the uniqueness of X event, how will we ever know who was right, even after we know the outcome of the election?

Basically, the problem is that these unstable probabilities, which are affected by unique, never-to-be-replicated events, are just not verifiable. People who worship prediction markets are engaged in a faith-based exercise.

Again, that is not to say that prediction markets contain no useful information whatsoever. It is to say you should take them with a grain of salt.

"Imagine if we had a market for "will there be a terrorist attack on the Superbowl?" Terrorists could spend a million to lower the chances to 1%, which makes security lax, which lets their attack happen."

Actually, by setting up an account and betting on that topic in that way, the terrorists would tip off the authorities, letting the authorities identify them and capture them, saving thousands of lives. All thanks to prediction markets.

I should point out one more thing. The fact that dice have objective probabilities does weaken your example. The fact is, we can test a particular die in repeated experiments to ensure that it is fair. (That is, over a repeated number of trials, each side comes up with 1/6th probability.) We cannot do the same for unique events. If politician A makes a gaffe in 1996, and politician B makes a similar gaffe in 2004, it is likely that the two gaffes will have different effects in terms of changing the probability that the respective politician is elected. The fact is, we cannot have repeated trials. These are unique one-time events. All we have are subjective probabilities, and that makes a big difference.

You are right that there's no way to falsify an individual probability estimate, but I don't see why it matters. If events happen in direct proportion to their trading values, I have no trouble saying the markets are very good estimators.
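The "direct proportion" test can be made concrete without ever falsifying an individual estimate. Here is a minimal sketch (my own illustration, with made-up prices and outcomes) that buckets settled contracts by trading price and compares each bucket's price to its empirical payoff rate:

```python
# Hypothetical (price, paid_off) pairs for settled contracts.
contracts = [
    (0.10, 0), (0.10, 0), (0.10, 1), (0.10, 0), (0.10, 0),
    (0.50, 1), (0.50, 0), (0.50, 1), (0.50, 0),
    (0.90, 1), (0.90, 1), (0.90, 1), (0.90, 0), (0.90, 1),
]

def calibration(contracts):
    """Group contracts by price; return price -> empirical payoff frequency."""
    buckets = {}
    for price, paid in contracts:
        buckets.setdefault(price, []).append(paid)
    return {p: sum(v) / len(v) for p, v in buckets.items()}

for price, freq in sorted(calibration(contracts).items()):
    print(f"priced at {price:.2f}: paid off {freq:.2f} of the time")
```

If, over many contracts, the payoff frequencies track the prices, the market is a good estimator in exactly the aggregate sense described above, even though no single probability was ever testable on its own.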

I think this illustrates my argument.

If you have dice with known probabilities, then you know how much a particular event should affect the probability of a certain outcome.

Basically, here is the problem.

In an election, we only have one verifiable piece of data: who won. And that person won after a unique series of events.

Assume two candidates, A and B.

Assume two people, p1 and p2, who have very different views of the effects of different events.

Assume probability-affecting events e1, e2, e3, e4 ... eN before the election.

Say that at the beginning of the election p1 believes that A has a 54% chance of winning, while p2 believes there is a 52% chance A will win. Assume e1 occurs. Both p1 and p2 update their probabilities: p1 now assumes a 58% chance of winning, p2 a 54% chance. And so on. The point is that p1 and p2 start with different probabilities and update their subjective probabilities differently as events occur.

The fact is, take any point in time after a significant event other than eN (the last significant event before the election itself), say eI. At point eI it is impossible to know who had the better probability. Even if p2's probability at point eN was better, that does not mean that p2 had a better probability at point eI.

Basically, an election can be conceptualized as a series of unique events which have unknown probability effects on the outcome. And we have an actual outcome after only the last of them, eN.

That doesn't mean that election markets are not at all subject to empirical testing. In particular, if election markets are any good, we should expect all the predictions occurring at the various points e1...eN to be fairly good on average. What we would hope to find (and what we probably would find) is that, after many elections, the probabilities at points e1...eN are pretty decent, and that the closer you are to eN, the better the probability.

But comparing this data set to the conventional wisdom, to say that it is "better," is quite difficult. Especially for any particular election.
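One standard way to compare the average quality of two forecasting mechanisms is a proper scoring rule. Here is a sketch (all probabilities and outcomes invented, and the function name is my own) using the Brier score, where lower is better:

```python
def brier(forecasts, outcomes):
    """Mean squared error between predicted probabilities and 0/1 outcomes."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical probabilities for "candidate A wins" across several elections.
market_probs = [0.80, 0.30, 0.60, 0.90, 0.20]
pundit_probs = [0.70, 0.40, 0.50, 0.95, 0.35]
outcomes     = [1,    0,    1,    1,    0]

print(brier(market_probs, outcomes))  # market's score
print(brier(pundit_probs, outcomes))  # conventional wisdom's score
```

The same scoring could be repeated at each point e1...eN, which is how one would check whether the probabilities really do improve as eN approaches. Note that with only a handful of elections the difference between two scores may not be statistically meaningful, which is exactly the moderate critic's point.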

This isn't in question for a moderate critic of election markets. The only qualification I would make is that I would change "very good" to "decent."

I can tell you right now that conventional wisdom and prediction markets both predict John Edwards will lose the Democratic primary. When he does, we can say that there is a relationship between prediction markets and election outcomes.

But, no moderate critic of prediction markets would argue that there was no relationship. We just don't think that "direct proportion" is good enough to call prediction markets "very good" as opposed to "decent."

That moderate critics are against the faith-based hype surrounding prediction markets does not imply that we think they do not contain useful information. They are at least a decent reflection of the conventional wisdom. And the predictions coming from the conventional wisdom will be in "direct proportion" to the actual results as well.

Hell, I'd hate to see what would happen if I bet in favor of an attack.

There goes your liquidity. (Especially among people with private knowledge, but you've lost people with public knowledge, too.)

MDJD2B -- I'll get to manipulation soon.

Mr. Liberal -- What's so bad about subjective probability? Decision makers have to make decisions based on something, and often subjective probability is the best we have. Prediction markets are a good tool for making probabilistic forecasts when a number of different people may have their own subjective probability estimates.

Mr. McGurg -- That is a controversial point. There are papers that offer models with results different from Manski's.

And then there's this: Maybe you're just using too much shorthand in your writing, so hopefully you'll agree that all you could have really meant was "our estimate of the probability of an event changes."

I'm also curious about another thing: has anyone done any technical analysis studies of market movements in these prediction markets?

The probability is (mostly) in the error of observation, not in the randomness of the event. The prediction market aggregates the observations for each "stock" which is obvious. But by having many stocks, the market also helps to make each stock better defined. For example, having a Hillary stock in the market may affect the price of the Obama stock.

There is nothing wrong with subjective probability. Unless, of course, there is an objective probability available.

The fact is, subjective probability is inferior to objective probability. But, it is true that sometimes that is the best we can do.

My point is not that subjective probability is bad per se. My point is that it is difficult to say that a particular subjective probability is the "true" probability. And that makes it hard to rank one subjective-probability-producing mechanism (like prediction markets) above another (like conventional wisdom).

I think an empirical study might demonstrate that the subjective probabilities produced by one mechanism (say prediction markets) are, on average, better than those produced by another mechanism (say conventional wisdom). But even then, in any given election, the advantage of one mechanism over another may not be that great, and in any given election, the mechanism that is worse on average might turn out to be the superior one.

If we had a greater number of more detailed objective probabilities that we could produce ex post, we would be better able to verify prediction markets' accuracy ex ante.