Looks like my Ellsberg paradox post below was pretty popular — about two dozen comments just in the first hour, between 11 p.m. and midnight (Eastern)! I'll repeat the problem below, then give my explanation. If you haven't done so before, you may want to think about what you would choose before reading the explanation.
There are three balls in a box. One is red. Each of the others is either white or black. Now I give you a choice between two lotteries. Lottery A: You win a prize if we draw a red ball. Lottery B: You win a prize if we draw a white ball.
Which lottery do you choose? (Mini-update: I allow you to be indifferent, if you want.)
Now I give you another choice between two lotteries. Lottery C: You win a prize if we draw a ball that's not red. Lottery D: You win a prize if we draw a ball that's not white.
Now which lottery do you choose?
UPDATE: Just in case you're confused about this — and apparently some people were — we're talking about the SAME THREE BALLS each time. I haven't changed the balls. Nor have I drawn any balls. We haven't conducted any lotteries in the time it took you to read this post. All there is is a single box of balls, and me asking you your preferences over lotteries. (END OF UPDATE)
UPDATE 2: You ask one of these questions, and you find out all sorts of aspects that you weren't expecting people to find important. This will affect how I phrase the problem next time, but for now, let me just clear up one extraneous aspect. I'm not running the lottery. I don't own the balls. I'm not offering a prize. Someone else, who isn't connected with me, is doing all that. I'm just asking questions about which lotteries you prefer. Also, as I mentioned in the first update, we don't draw any balls between your first choice and your second choice. In fact, we're never going to draw any balls. Why? I'm not running the lottery! I'm just asking questions! If you want to draw balls, take it up with the guy actually running the lottery, who is not me.
There are two points here, one theoretical and another practical. I'll give you the theoretical point now, and save the practical one for a later post.
If you know about expected utility theory, you can skip this paragraph and the next four. Expected utility theory assumes that (to simplify) when you're faced with lotteries over, say, amounts of money, and each amount has some probability attached to it, and you have a utility-of-money function U, you choose which lottery you prefer based on the lottery's "expected utility," the probability-weighted average of the utilities of the possible outcomes.
So if I offer you $1 if a fair coin comes up heads, then the expected utility is 0.5 U($1) + 0.5 U(0). (When I say U(0), that means the utility of however much money you already have; when I say U($1), that means the utility of that amount of money plus $1.)
Usually we assume people are risk averse, meaning they prefer a certain 50 cents to the coin flip. You would express risk aversion by saying that U($0.50) > 0.5 U($1) + 0.5 U(0). A risk-neutral person doesn't care, as long as the lotteries have equal expected value, so he's got a different function U such that U($0.50) = 0.5 U($1) + 0.5 U(0).
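To make the inequality concrete, here's a minimal sketch assuming a hypothetical concave utility function (the square root of money, starting from $0 wealth for simplicity); the post doesn't commit to any particular U, and any concave one would do:

```python
import math

def utility(wealth):
    # Hypothetical concave utility function (square root), chosen only to
    # illustrate risk aversion; any concave U gives the same qualitative result.
    return math.sqrt(wealth)

# Coin flip: heads wins $1, tails wins nothing (starting wealth assumed $0).
eu_flip = 0.5 * utility(1.0) + 0.5 * utility(0.0)

# The certain 50 cents.
u_certain = utility(0.50)

print(u_certain > eu_flip)  # True: roughly 0.707 > 0.5, so the sure thing wins
```

With a linear U (risk neutrality), the two sides would come out exactly equal.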
But whether you've got risk aversion, risk neutrality, or something else, expected utility theory always assumes that only two things matter: (1) The utilities of the outcomes and (2) the probabilities. No matter how complicated a set of lotteries I give you, you always reduce it to the ultimate probabilities over the outcomes.
For instance, consider the set of nested lotteries: Lottery A = [Heads you lose, Tails you get to participate in Lottery B]; Lottery B = [Heads you lose, Tails you win $100]. Expected utility theory says you crunch the numbers and figure out that this is identical to a single lottery where you win $100 with probability 0.25. Everything else is irrelevant.
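The reduction step can be checked by brute-force enumeration of the two fair coin flips (a sketch of the example above, not anything beyond it):

```python
from itertools import product
from fractions import Fraction

# Each stage is a fair coin: heads you lose, tails you continue (or win at the end).
# Enumerate all four equally likely flip sequences.
paths = list(product(["H", "T"], repeat=2))

# You win $100 only on the path tails-then-tails.
winning_paths = [p for p in paths if p == ("T", "T")]

p_win = Fraction(len(winning_paths), len(paths))
print(p_win)  # 1/4: the nested lotteries collapse to a single 25% lottery
```

Expected utility theory says this compound structure is irrelevant; only the final 0.25 matters.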
Now consider the choice of Lottery A vs. Lottery B. Lottery A gives you the prize with probability 1/3. Lottery B gives you the prize with a probability that could be 0, 1/3, or 2/3, depending on how many of the non-red balls are white. Whatever you believe the true probability is (under some assumptions it comes out to 1/3, for instance if each non-red ball is black or white with a 50-50 probability, but nothing requires that), you'll ultimately make some choice. Suppose it's A. Under expected utility theory, that can only be because you think red has a higher probability than white. If you think the probabilities are equal, then under expected utility theory, you must be indifferent between the two lotteries. If you choose B, under expected utility theory that can only be because you think white has a higher probability.
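Here's a quick check of that 50-50 illustration (an assumption for this sketch only: each of the two non-red balls is independently white or black with probability 1/2):

```python
from itertools import product
from fractions import Fraction

half = Fraction(1, 2)

# Average P(draw a white ball) over the four equally likely compositions
# of the two non-red balls (WW, WB, BW, BB), each with probability 1/4.
p_white = Fraction(0)
for balls in product(["white", "black"], repeat=2):
    composition_prob = half * half
    p_white += composition_prob * Fraction(balls.count("white"), 3)

print(p_white)  # 1/3, matching the known probability of red
```

Under that particular assumption, an expected utility maximizer would be exactly indifferent between A and B.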
Now go on to Lottery C vs. Lottery D. If you chose A the first time around, that means you think P(R) > P(W). But then you have to have P(not R) < P(not W). That's just mathematically true, because P(not R) = 1 - P(R) and P(not W) = 1 - P(W), so subtracting both sides from 1 flips the inequality. So you can't prefer C if you preferred A.
Nonetheless, most people chose both A and C. Mostly, they did so because the probability of R is a known 1/3, and the probability of not-R is a known 2/3, while the probabilities of W and not-W are unknown. Note: This is not risk aversion, because the probabilities we're talking about aren't the probabilities of the ultimate prize. Rather, we're talking about uncertainty over what those probabilities are. This is called ambiguity aversion. Ambiguity aversion plays no role in expected utility theory, where only the ultimate probabilities (and the utilities of the outcomes, which I've held constant here) count. Therefore, in this setup, most people make choices inconsistent with expected utility theory.
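The inconsistency can be verified mechanically: with P(red) fixed at 1/3, no belief about P(white) makes both the A-over-B and C-over-D preferences consistent with expected utility theory. A sketch sweeping a grid of candidate beliefs:

```python
from fractions import Fraction

p_red = Fraction(1, 3)

# P(white) can be anything from 0 to 2/3; check a fine grid of candidate beliefs.
for i in range(201):
    p_white = Fraction(2, 3) * Fraction(i, 200)
    prefers_a = p_red > p_white                # A over B requires P(R) > P(W)
    prefers_c = (1 - p_red) > (1 - p_white)    # C over D requires P(not R) > P(not W)
    assert not (prefers_a and prefers_c)       # never both, for any belief

print("No belief about P(white) rationalizes choosing both A and C.")
```

The second comparison is always the exact negation of the first (up to ties), which is the complement rule from the previous paragraph in executable form.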
Is this good? Bad? Irrelevant? Does it illustrate the crooked timber of humanity? The uselessness of expected utility theory? Stay tuned.