[Michael Abramowicz, guest-blogging, January 29, 2008 at 11:05am] Trackbacks
Prediction Markets vs. Conventional Wisdom:

I promised to start by addressing some common criticisms of prediction markets. What better way to start than by attacking my friend, GW colleague, and now co-conspirator Orin Kerr? Orin has at least twice (in 2005, and earlier this month) endorsed the criticism that the election markets don't seem to do much more than track the conventional wisdom. Orin is in good if unfamiliar company; Paul Krugman recently made a similar criticism.

Unfortunately for my attack, I don't entirely disagree. On issues for which there is likely to be lots of public information but little private information, prediction markets reflect what highly informed people believe. No better, but no worse. If you want to know what the probability is that X will be President, you probably won't be surprised by the prediction markets, but on average over many independent events the market's predictions will probably be at least slightly better than the ones you would make on your own.

A stronger version of this criticism insists that the markets are worse than the highly informed conventional wisdom. Critics will say that the markets put too much weight on the pro-Obama pre-New Hampshire polls or the pro-Kerry 2004 exit polls. I'm skeptical of this criticism. At any time, the markets may be slightly off, but if they have obvious, large imperfections, people will trade back to more sensible values.

Usually, such criticisms are made after the fact, and they often reflect hindsight bias. It seems obvious now that election observers put too much weight on the pro-Kerry and pro-Obama polls. But even the most sophisticated analysts may have trouble afterward figuring out why (see here on 2004 and here on 2008).

The real problem is that our models of voter behavior aren't as good as we'd like to think. No one cries foul because Tradesports.com gave the Giants a 1% chance in early October of winning the Super Bowl. Football is full of surprises. But whenever something unexpected happens in an election, we feel that we should have expected it all along.

Some might still object that if prediction markets do no more than reflect the fallible informed conventional wisdom, they aren't worth much. And indeed, election markets may do little but save some of us the time of reading the election news. But in a world of ideology, special interests, and agency costs, institutions could do a lot worse than rely on prediction markets in their decision making. The central argument for prediction markets for me is not that they are magically accurate, but that they are fairly objective.

But there is one other final point, hearkening back to yesterday's post: Many prediction markets that would be useful to institutions are on topics on which there may be little public or private information. It is especially important to constrain opportunistic decisionmakers when they are making claims that few if any people have the information to assess. With subsidies and automated market makers, these markets will not merely reflect existing conventional wisdom, but give incentives to a few individuals to do research and make disinterested forecasts.
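The "automated market makers" mentioned above are, in the academic literature, often implemented as Robin Hanson's logarithmic market scoring rule, in which the sponsor's subsidy bounds the worst-case loss. A minimal sketch in Python (parameter names are mine; this is an illustration, not Abramowicz's own design):

```python
import math

def lmsr_cost(quantities, b):
    # Cost function C(q) = b * ln(sum_i exp(q_i / b)). The sponsor's
    # worst-case loss is bounded by b * ln(n) for n outcomes -- that
    # bound is the "subsidy" that pays traders for doing research.
    return b * math.log(sum(math.exp(q / b) for q in quantities))

def lmsr_price(quantities, i, b):
    # Instantaneous price of outcome i: a softmax of q / b.
    # Prices are positive and always sum to 1, like probabilities.
    denom = sum(math.exp(q / b) for q in quantities)
    return math.exp(quantities[i] / b) / denom

def cost_to_buy(quantities, i, shares, b):
    # What a trader pays the market maker to buy `shares` of outcome i.
    after = list(quantities)
    after[i] += shares
    return lmsr_cost(after, b) - lmsr_cost(quantities, b)
```

The liquidity parameter `b` controls how much a given trade moves the price: a larger `b` means a deeper (more heavily subsidized) market, which is exactly the lever an institution could use to attract research on thin topics.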

Ignorance is Bliss:

No one cries foul because Tradesports.com gave the Giants a 1% chance in early October of winning the Super Bowl.

I'll cry foul. Clearly their chances of winning the Super Bowl are much less than 1%!

Go Pats!
1.29.2008 11:36am
Elliot Reed (mail):
Here's the obvious question: how close to the true probabilities have the markets been? This wouldn't be that hard to test: if the markets are perfect, half of bets trading at $0.50 on the dollar should pay off, 90% of bets trading at $0.90 on the dollar should pay off, 5% of bets trading at $0.05 on the dollar should pay off, etc. Looking at individual bets simply isn't a very good way of testing the accuracy of the markets.

The a priori analysis we've seen in the posts so far is no substitute for actual empirical testing. There must be papers on this: what have they found?
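The test Elliot Reed describes is what forecasting researchers call a calibration check, and it is easy to script: bucket contracts by price and compare each bucket's average price with the fraction that actually paid off. A hypothetical sketch (the function name and data format are mine):

```python
from collections import defaultdict

def calibration_table(contracts, n_buckets=10):
    """Bucket (price, paid_off) pairs by price and compare the average
    price in each bucket with the fraction that actually paid off.
    `contracts` is a list of (price, paid_off) with price in [0, 1].
    For a well-calibrated market the two numbers should be close."""
    buckets = defaultdict(list)
    for price, paid_off in contracts:
        idx = min(int(price * n_buckets), n_buckets - 1)
        buckets[idx].append((price, paid_off))
    table = {}
    for idx, items in sorted(buckets.items()):
        avg_price = sum(p for p, _ in items) / len(items)
        hit_rate = sum(1 for _, won in items if won) / len(items)
        table[idx] = (avg_price, hit_rate, len(items))
    return table
```

Run over a large sample of expired contracts, this answers the question directly: did bets trading at $0.90 pay off about 90% of the time, bets at $0.05 about 5% of the time, and so on.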
1.29.2008 12:06pm
OrinKerr:
Orin is in good if unfamiliar company; Paul Krugman recently made a similar criticism.

Them's fightin' words, Michael! Seriously, though, your points here seem very fair.
1.29.2008 12:07pm
Dan Weber (www):
On the evening of the New Hampshire primary, I watched InTrade trail the results I was seeing on TV. The returns at that point had Clinton slightly ahead with some precincts reporting, but Obama still had an 80-20 lead on InTrade.

I wondered why the market wasn't correcting for that; I figured that the market had some special knowledge about why the polls were wrong.

It turned out that they were just slow to correct, which seemed like absolutely the wrong thing to expect from a prediction market. Surely, if the money is all sitting there on the table just waiting for your mouse click, you should be able to just grab it.

(You can ask why I didn't do that arbitrage myself, and that's a fair question. The answer is that I didn't have an account set up, and rushing is nearly always a bad idea. And I didn't have much experience watching InTrade, so I wasn't familiar enough with how it worked to try to throw money at it.)
1.29.2008 12:11pm
procrastinating clerk (mail):
Another thing to consider is that when a prediction market says X has a 60% chance of occurring and Y has a 40% chance, Y still has a substantial probability of occurring. Just because Y occurs doesn't mean the market has failed. Sometimes improbable things happen.
1.29.2008 12:20pm
MDJD2B (mail):
This is a query, and not a comment.

Can prediction markets be manipulated by wealthy people with an agenda? Could, for example, a George Soros spend $50M on Obama futures to create a trend?

Obviously, this would only be worthwhile if people paid a lot of attention to these markets.
1.29.2008 12:26pm
Elliot Reed (mail):
procrastinating clerk—precisely. But that's why we need empirical testing to figure out how often those low-probability events happen. Do they happen as (in)frequently as the market says? If so, the low-probability events (occasionally) happening look like cases where the markets were right about their (low) probability. But without that kind of analysis, saying "a low-probability event just happened" is more like explaining away the data than explaining it.
1.29.2008 12:27pm
OrinKerr:
Totally off topic, but I just want to say that "procrastinating clerk" is a great userid for a law blog.
1.29.2008 12:32pm
Mr. Liberal:

Them's fightin' words


I take it that Orin does not like to be lumped in with Krugman. =)

---

Well, I think that Michael has made such a reasonable argument, that he could just as well be a moderate critic of prediction markets as well as a moderate defender.

The problem that many of us have had with prediction markets is not with the markets themselves, but with the hype surrounding them. I don't think that moderate critics were ever trying to argue that such markets do not contain any relevant information at all.
1.29.2008 12:32pm
Mr. Liberal:
Just to make sure that this conversation does not get too friendly, I did want to viciously attack prediction markets on this ground:

The probabilities that they produce for unique events aren't worth a whole lot. They are more akin to subjective probabilities rather than empirical probabilities, and as such, aren't necessarily worth much.

Here is another vicious attack. The claim made by advocates for prediction markets that the probabilities produced by such markets are accurate does not seem to be easily falsifiable. These "probabilities" are constantly updating in light of new information (reflecting either new events or the revelation of old information). When evaluating prediction markets, which of the many probabilities that these markets give us over time are we to take as correct?

Like I said, I am not saying that these markets do not contain any useful information, but I don't buy the hype.
1.29.2008 12:43pm
Mr. Liberal:

Can prediction markets be manipulated by wealthy people with an agenda? Coould, for example, a George Soros spend $50M on Obama futures to create a trend.


To the extent that those who think George Soros is full of it do not have enough money collectively to "correct" his irrational spending on Obama futures, the answer is yes.
1.29.2008 12:47pm
Elliot Reed (mail):
Here is another vicious attack. The claim that advocates for prediction markets make that the probabilities produced by such markets are accurate does not seem to be easily falsifiable. These "probabilities" are constantly updating in light of new information (which reflect either new events or the revelation of old events). When evaluating prediction markets, which of the many probabilities that these markets give us over time are we to take as correct?
This one's actually pretty easy: all of them. As you get more information, the probability of an event changes.

Suppose we are betting on the sum of a pair of dice, rolled one by one. Before the first die is rolled, the probability of an 11 is 1/18. The probability of an 11 then changes based on the roll of the first die. If the first die is 1-4, the probability of an 11 drops to 0; if the first die is a 5 or 6, the probability of an 11 increases to 1/6.

The way to test the accuracy of a prediction market for this sort of die roll would work like this: we'd need a large number of different pairs of dice, with contracts for all the possible outcomes. If the market is perfectly accurate, the "dice 1 sums to 11" contract should be trading at 1/18, as should the "dice 2 sums to 11" contract, the "dice 3 sums to 11" contract, etc., up to "dice n sums to 11," for large n.

After we roll the first die for each pair, the values of the contracts should all shift. The sum-to-11 contracts for the pairs whose first die was a 5 or 6 should now be worth 1/6, and the others should all be worthless, because to get an 11 one die must be a 5 and the other a 6.

Then, to empirically test the predictions (rather than deriving them a priori, since we know the probabilities of different dice rolls), we could look at how many contracts paid off. Assuming our dice are fair, we would find that about 1/18 of the "sum to 11" contracts paid off, which would mean that before the first roll they all *should* have been trading at 1/18 if the markets were accurate. If they weren't, that indicates there's probably a problem with the market.

After the first roll, to test the markets, we would look at the different probabilities assigned to the contracts. If the markets are perfectly accurate, we'd find that some of the sum-to-11 contracts were trading at 1/6 and others at 0. We'd then look at how many of the contracts trading at 1/6 actually paid off and how many of the contracts trading at 0 paid off: if our dice are fair, those frequencies should match the prices. If the second die *wasn't* fair, but the contracts were trading at 1/6 and 0 anyway, that indicates a problem with the market: it failed to anticipate that the second die was loaded.
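This two-stage dice market is easy to simulate. A sketch, assuming fair dice (names are illustrative): the overall payoff rate of the "sum to 11" contract should come out near 1/18, and the rate conditional on a first-die 5 or 6 near 1/6.

```python
import random

def simulate_dice_market(n_pairs, seed=0):
    """Roll n_pairs of dice one die at a time and measure the rates at
    which the 'sum is 11' contract pays off: overall (should be ~1/18)
    and conditional on the first die being 5 or 6 (should be ~1/6)."""
    rng = random.Random(seed)
    paid = 0               # contracts that paid off overall
    live_after_first = 0   # contracts repriced to 1/6 after die one
    live_paid = 0          # of those, how many paid off
    for _ in range(n_pairs):
        d1 = rng.randint(1, 6)
        d2 = rng.randint(1, 6)
        if d1 >= 5:
            live_after_first += 1
            if d1 + d2 == 11:
                live_paid += 1
        paid += (d1 + d2 == 11)
    return paid / n_pairs, live_paid / live_after_first
```

If a real market's prices diverged systematically from these simulated frequencies, that divergence, not any single outcome, would be the evidence of a problem.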

There is no reason the market's estimate of the probability could not be perfectly accurate at all times even though those estimates change over time. I am not a libertarian markets-are-perfect-therefore-prediction-markets-must-be-too type, but your argument doesn't work. What we need is empirical testing.
1.29.2008 1:13pm
Dan Weber (www):
But by manipulating the market, Soros could create the feeling that an Obama win is inevitable, thus causing voters/funding/endorsements to flow to Obama, thus causing what he wanted to happen.

Imagine if we had a market for "will there be a terrorist attack on the Super Bowl?" Terrorists could spend a million to lower the apparent chances to 1%, which makes security lax, which lets their attack happen.

Add into this the fact that markets don't just predict, they also allow people to hedge against adverse results.
1.29.2008 1:18pm
Elliot Reed (mail):
Also, in case my point wasn't clear: my argument is in no way dependent on die rolls having an "objective" probability. I am discussing a way of testing the accuracy of a "die roll prediction market" empirically, by actually rolling a large number of dice and seeing how close the market's estimated probabilities are to the actual number of 11's rolled. We can test the accuracy of prediction markets empirically in exactly the same way even though we do not know the "objective" probability of the predicted events happening.

I am using the example of dice to illustrate the way probabilities change over time, so that the market's estimate of the probability can be correct at all times even though it changes over time. This does not mean that the market's estimate of the probability is in fact correct at all times, or at any time, for that matter: that is a matter that requires empirical testing.
1.29.2008 1:23pm
Dan Weber (www):
As for testing the prediction markets, I recommend a method:

1. For each of the state races, place a $1 bet on the front-runner one week before the contract expires.
2. When the contracts expire, see how your bets paid off. The total payout should be about equal to your starting stake.

This isn't scientifically rigorous. The various races aren't independent. And even if they were, predicting NH.DEM.CLINTON shouldn't necessarily have the same weight as predicting IA.REP.HUCKABEE.

But it would be a start.

(You could also try the above experiment by placing the bid on different time periods before the contract expires, maybe placing them at multiple times. You could also bid on other positions besides #1, maybe bidding on multiple positions. Exercise for the reader: how should betting on all positions resolve?)
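Steps 1 and 2 above can be sketched as code (the data format is hypothetical): stake $1 on each front-runner at its market price, and if the market is calibrated, total payouts should roughly equal total stakes over many races.

```python
def frontrunner_test(races):
    """For each race, stake $1 on the front-runner at its market price
    one week out. `races` is a list of (frontrunner_price, won) pairs.
    Returns (total_staked, total_payout); for a calibrated market the
    two should be roughly equal over many races."""
    staked = 0.0
    payout = 0.0
    for price, won in races:
        shares = 1.0 / price      # $1 buys 1/price shares
        staked += 1.0
        if won:
            payout += shares      # each winning share pays $1
    return staked, payout
```

As the comment notes, this isn't rigorous (races are correlated, and front-runners at different prices carry different weight), but a payout far from the stake would still be suggestive.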
1.29.2008 1:26pm
Mr. Liberal:
Elliot Reed,

My problem is not that I do not understand the idea of updating subjective probabilities.

(I will note that there is a huge difference between unstable probabilities and stable probabilities. When you update a stable probability due to the revelation of old information, you can correctly say that the old probability was wrong. When you update an unstable probability because some event has come along that has actually changed the probability, then both could be right.)

Here is my vicious objection, hopefully more clearly stated. It is very difficult to falsify the probabilities produced by prediction markets when (1) the probabilities are unstable, as for elections; (2) the probabilities tend to be subjective, as for elections; and (3) the probabilities update constantly, as opposed to less frequently.

Event X happens. You say X increases the probability of A being elected by 2%. I say 3%. Given the uniqueness of event X, how will we ever know who was right, even after we know the outcome of the election?

Basically, the problem is that these unstable probabilities, which are affected by unique, never-to-be-replicated events, are just not verifiable. People who worship prediction markets are engaged in a faith-based exercise.

Again, that is not to say that prediction markets contain no useful information whatsoever. It is to say you should take them with a grain of salt.
1.29.2008 1:26pm
OrinKerr:
Imagine if we had a market for "will there be a terrorist attack on the Superbowl?" Terrorists could spend a million to lower the chances to 1%, which makes security be lax, which lets their attack happen.

Actually, by setting up an account and betting on that topic in that way the terrorists would tip off the authorities, letting the authorities identify them and capture them, saving thousands of lives. All thanks to prediction markets.
1.29.2008 1:28pm
Mr. Liberal:
Elliot Reed,

I should point out one more thing. The fact that dice have objective probabilities does weaken your example. We can test a particular die in repeated experiments to ensure that it is fair. (That is, over a repeated number of trials, each side comes up with 1/6 probability.) We cannot do the same for unique events. If politician A makes a gaffe in 1996, and politician B makes a similar gaffe in 2004, it is likely that the two gaffes will have different effects on the probability that the respective politician is elected. The fact is, we cannot have repeated trials. These are unique, one-time events. All we have are subjective probabilities, and that makes a big difference.
1.29.2008 1:34pm
Elliot Reed (mail):
Mr. Liberal: I'm not really interested in having a theoretical argument about subjective vs. objective probability. There's no reason we couldn't test the markets over time. Say we look at the contracts that are trading at $0.50 on the dollar one year before the event: if the markets are perfectly accurate, 50% of those should pay off. If the markets are perfectly accurate, the same should also be true of those that are trading at $0.50 on the dollar six months before the event, one week before the event, one day before the event, or five minutes before the event. (And the same should be true of things that are trading at other values, mutatis mutandis.) All of those things could be true even though the probabilities are constantly changing.

You are right that there's no way to falsify an individual probability estimate, but I don't see why it matters. If events happen in direct proportion to their trading values, I have no trouble saying the markets are very good estimators.
1.29.2008 1:36pm
Elliot Reed (mail):
Also, regarding the "objective" probabilities of the dice: it doesn't matter that we can test the dice for fairness through repeated trials because we could test a die roll market in exactly the way I'm describing without ever testing the dice for fairness. As far as that example is concerned, we could roll 10000 different sets of dice exactly once, then destroy them all.
1.29.2008 1:40pm
Mr. Liberal:

Also, regarding the "objective" probabilities of the dice: it doesn't matter that we can test the dice for fairness through repeated trials because we could test a die roll market in exactly the way I'm describing without ever testing the dice for fairness. As far as that example is concerned, we could roll 10000 different sets of dice exactly once, then destroy them all.


I think this illustrates my argument.

If you have dice with known probabilities, then you know how much a particular event should affect the probability of a certain outcome.

Basically, here is the problem.

In an election, we only have one verifiable piece of data. Who won. And, that person won after a unique series of events.

Assume two candidates, A and B.

Assume two people, p1 and p2, who have very different views of the effects of different events.

Assume probability affecting events, e1, e2, e3, e4 ... eN before the election.

Say that at the beginning of the election p1 believes that A has a 54% chance of winning, while p2 believes there is a 52% chance A will win. Then e1 occurs. Both p1 and p2 update their probabilities: p1 now assumes a 58% chance, p2 a 54% chance. And so on. The point is that p1 and p2 start with different probabilities and update their subjective probabilities differently as events occur.

The fact is, take any point in time after a significant event other than eN (the last significant event before the election itself), say eI. At point eI it is impossible to know who had the better probability. Even if p2's probability at point eN was better, that does not mean that p2 had a better probability at point eI.

Basically, an election can be conceptualized as a series of unique events whose effects on the outcome probability are unknown. And we have an actual outcome after only one of them, eN.

That doesn't mean that election markets are not at all subject to empirical testing. In particular, if election markets are any good, we should expect the predictions made at the various points e1...eN to be fairly good on average. What we would hope to find (and what we probably would find) is that, after many elections, the probabilities at points e1...eN are pretty decent, and that the closer you get to eN, the better the probability.

But comparing this data set to the conventional wisdom, in order to say that one is "better," is quite difficult. Especially for any particular election.
1.29.2008 2:04pm
Mr. Liberal:

If events happen in direct proportion to their trading values, I have no trouble saying the markets are very good estimators.


This isn't in question for a moderate critic of election markets. The only qualification I would make is that I would change "very good" to "decent."

I can tell you right now that conventional wisdom and prediction markets both predict John Edwards will lose the Democratic primary. When he does, we can say that there is a relationship between prediction markets and election outcomes.

But, no moderate critic of prediction markets would argue that there was no relationship. We just don't think that "direct proportion" is good enough to call prediction markets "very good" as opposed to "decent."

That moderate critics are against the faith-based hype surrounding prediction markets does not imply that we think they contain no useful information. They are at least a decent reflection of the conventional wisdom. And the predictions coming from the conventional wisdom will be in "direct proportion" to the actual results as well.
1.29.2008 2:13pm
Justin (mail):
Silly question: how are initial "stocks" put onto the market? If they aren't "given" away, how is the original price they are sold at determined?
1.29.2008 2:42pm
Dan Weber (www):
Actually, by setting up an account and betting on that topic in that way the terrorists would tip off the authorities, letting the authorities identify them and capture them, saving thousands of lives. All thanks to prediction markets.
So if I bet against a terrorist attack, the Feds are gonna come knocking on my door?

Hell, I'd hate to see what would happen if I bet in favor of an attack.

There goes your liquidity. (Especially among people with private knowledge, but you've lost people with public knowledge, too.)
1.29.2008 2:44pm
mr. mcgurg:
part of the issue here, i think, is a basic misunderstanding about what the price in prediction markets represents. it is *not* a point estimate of the 'wisdom of crowds' consensus on the probability of an event occurring. put in stat-geek language, that quantity is not identified by the data we have available (i.e., the price and volume of trading). what the data do identify is a *range* of probabilities that are consistent with the data. obtaining a point estimate requires identification assumptions, which may be incorrect. i don't have the paper handy, but googling 'charles manski prediction markets' should turn it up.
1.29.2008 3:01pm
Michael Abramowicz (mail):
Dan Weber -- The typical criticism of prediction markets is that they overreact to new information. You're arguing that they underreact. It could be that traders were rationally taking into account that the first precincts might be unrepresentative, and gradually changing their views as more precinct results came in.

MDJD2B -- I'll get to manipulation soon.

Mr. Liberal -- What's so bad about subjective probability? Decision makers have to make decisions based on something, and often subjective probability is the best we have. Prediction markets are a good tool for making probabilistic forecasts when a number of different people may have their own subjective probability estimates.

Mr. McGurg -- That is a controversial point. There are papers that offer models with results different from Manski's.
1.29.2008 3:32pm
k parker (mail):
Elliot Reed, I certainly wouldn't call an outcome that happened 40% of the time a "low-probability event".

And then there's this:
As you get more information, the probability of an event changes.
Maybe you're just using too much shorthand in your writing, so hopefully you'll agree that all you could have really meant was "our estimate of the probability of an event changes."
1.29.2008 4:03pm
Elliot Reed (mail):
Justin—I think they go on the market by letting people pay $1 for a portfolio that includes one share of all the possibilities (e.g., one share of Clinton, one of Obama, one of McCain, one of Ron Paul, one share of all the other candidates, and one share of "someone else"). Since exactly one of those bets will pay off, the value of that portfolio is $1. Then the traders set the prices when they trade the individual bets among themselves.
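One consequence of this bundle mechanism is that arbitrage keeps the individual contract prices summing to roughly $1: if they drift above, you can issue a bundle for $1 and sell the legs; if below, buy the legs and redeem the bundle. A hypothetical sketch (fee handling is simplified to a single parameter):

```python
def bundle_arbitrage(prices, fee=0.0):
    """Given current prices of an exhaustive, mutually exclusive set of
    contracts, report any riskless profit per bundle and the action
    that captures it. Exactly one contract pays $1, so a full bundle
    is always worth exactly $1 at expiry."""
    total = sum(prices)
    if total > 1 + fee:
        return total - 1 - fee, "issue a bundle for $1, sell the legs"
    if total < 1 - fee:
        return 1 - total - fee, "buy the legs, redeem the bundle for $1"
    return 0.0, "no arbitrage"
```

This also answers the incentive question in part: you buy the portfolio when the legs are collectively overpriced, because you can immediately resell them for more than the $1 you paid.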
1.29.2008 4:03pm
Duffy Pratt (mail):
Has anyone created one of these prediction markets in the likelihood that, say, Obama or Hillary will get assassinated before the election? Would this sort of contract violate public policy? (I'm guessing that it might.) But when taking the probability of either of them getting the nomination, or getting elected, wouldn't one also have to weigh the chances of their not making it that far because of a successful assassination attempt?

I'm also curious about another thing: has anyone done any technical analysis studies of market movements in these prediction markets?
1.29.2008 4:05pm
Elliot Reed (mail):
k parker: I don't have a position on that issue, and don't really care about it. Whether it is a subjective or objective probability is completely irrelevant to my point.
1.29.2008 4:08pm
Justin (mail):
Elliot, what's the incentive to buy the portfolio?
1.29.2008 4:55pm
seadrive:
What does it mean to say that, e.g. "there is a 35% probability that Obama will be elected president"? Does it mean that in 1000 parallel universes, he will be elected in 350? Don't think so. Is it well defined? Don't think so.

The probability is (mostly) in the error of observation, not in the randomness of the event. The prediction market aggregates the observations for each "stock" which is obvious. But by having many stocks, the market also helps to make each stock better defined. For example, having a Hillary stock in the market may affect the price of the Obama stock.
1.29.2008 6:22pm
Mr. Liberal:

What's so bad about subjective probability?


There is nothing wrong with subjective probability. Unless, of course, there is an objective probability available.

The fact is, subjective probability is inferior to objective probability. But, it is true that sometimes that is the best we can do.

My point is not that subjective probability is bad per se. My point is that it is difficult to say that a particular subjective probability is the "true" probability. And that makes it hard to rank one subjective-probability-producing mechanism (like prediction markets) above another (like conventional wisdom).

I think an empirical study might demonstrate that the subjective probabilities produced by one mechanism (say, prediction markets) are, on average, better than those produced by another (say, conventional wisdom). But even then, in any given election the advantage of one mechanism over the other may not be that great, and in any given election one mechanism might beat the one that is better on average.

If we could produce a greater number of more detailed objective probabilities ex post, we would be better able to verify prediction markets' accuracy ex ante.
1.29.2008 9:45pm