Balls:
There are three balls. One is red. Each of the others is either white or black. Now I give you a choice between two lotteries.
Lottery A: You win a prize if we draw a red ball.
Lottery B: You win a prize if we draw a white ball.
Which lottery do you choose?
Now I give you another choice between two lotteries.
Lottery C: You win a prize if we draw a ball that's not red.
Lottery D: You win a prize if we draw a ball that's not white.
Which lottery do you choose?
Post your answers, plus any reasoning, in the comments. If you're already familiar with the Ellsberg paradox, you can just watch. Explanations to come later.
UPDATE: Glad this is getting so many comments. Just a few comments of my own:
(1) Many people are assuming that each of the two balls is white or black with a 50-50 probability. Maybe, maybe not. Just keep in mind that it's not part of the assumptions.
(2) Just in case you reject the problem because you don't know the probabilities of white vs. black (though you shouldn't), you can answer the question assuming there's a 50-50 probability. Then, just for fun, answer the problem again where white has a 49% chance.
(3) Also, some people are wondering about the motivations of the "house," i.e., whether it wants you to win or lose. Think what you like about the motivations of the house, but keep in mind that the colors of the balls (however determined) are the same in Part 1 and Part 2.
(4) Some of you are wondering what's the "paradox." I'll explain soon (or you can just look it up on Wikipedia). It may not be right to call it a paradox; perhaps it's just an illustration of an interesting aspect of how people make choices.
Ellsberg paradox, take 2:
Looks like my Ellsberg paradox post below was pretty popular — about two dozen comments just in the first hour, between 11 p.m. and midnight (Eastern)! I'll repeat the problem below, then give my explanation. If you haven't done so before, you may want to think about what you would choose before reading the explanation.
There are three balls. One is red. Each of the others is either white or black. Now I give you a choice between two lotteries. Lottery A: You win a prize if we draw a red ball. Lottery B: You win a prize if we draw a white ball.
Which lottery do you choose? (Mini-update: I allow you to be indifferent, if you want.)
Now I give you another choice between two lotteries. Lottery C: You win a prize if we draw a ball that's not red. Lottery D: You win a prize if we draw a ball that's not white.
Now which lottery do you choose?
UPDATE: Just in case you're confused about this — and apparently some people were — we're talking about the SAME THREE BALLS each time. I haven't changed the balls. Nor have I drawn any balls. We haven't conducted any lotteries in the time it took you to read this post. All there is is a single box of balls, and me asking you your preferences over lotteries. (END OF UPDATE)
UPDATE 2: You ask one of these questions, and you find out all sorts of aspects that you weren't expecting people to find important. This will affect how I phrase the problem next time, but for now, let me just clear up one extraneous aspect. I'm not running the lottery. I don't own the balls. I'm not offering a prize. Someone else, who isn't connected with me, is doing all that. I'm just asking questions about which lotteries you prefer. Also, as I mentioned in the first update, we don't draw any balls between your first choice and your second choice. In fact, we're never going to draw any balls. Why? I'm not running the lottery! I'm just asking questions! If you want to draw balls, take it up with the guy actually running the lottery, who is not me.
There are two points here, one theoretical and another practical. I'll give you the theoretical point now, and save the practical one for a later post.
If you know about expected utility theory, you can skip this paragraph and the next four. Expected utility theory assumes that (to simplify) when you're faced with lotteries over, say, amounts of money, and each amount has some probability attached to it, and you have a utility-of-money function U, you choose which lottery you prefer based on the lottery's "expected utility," which is a kind of weighted average of the utilities of the different possible outcomes.
So if I offer you $1 if a fair coin comes up heads, then the expected utility is 0.5 U($1) + 0.5 U(0). (When I say U(0), that means the utility of however much money you already have; when I say U($1), that means the utility of that amount of money plus $1.)
Usually we assume people are risk averse, meaning they prefer a certain 50 cents to the coin-flip gamble. You would express risk aversion by saying that U($0.50) > 0.5 U($1) + 0.5 U($0). A risk-neutral person doesn't care, as long as the lotteries have equal expected value, so he's got a different function U such that U($0.50) = 0.5 U($1) + 0.5 U($0).
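For those who like to see this in code, here's a tiny sketch. The square-root utility function is just an illustrative assumption; any concave U produces the same inequality.

```python
import math

# Illustrative assumption: a concave (square-root) utility of wealth.
def u(dollars):
    return math.sqrt(dollars)

certain = u(0.50)                      # utility of a sure 50 cents
gamble = 0.5 * u(1.00) + 0.5 * u(0.0)  # expected utility of the coin flip

# Concavity is exactly what makes the sure thing look better:
assert certain > gamble
```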
But whether you've got risk aversion, risk neutrality, or something else, expected utility theory always assumes that only two things matter: (1) The utilities of the outcomes and (2) the probabilities. No matter how complicated a set of lotteries I give you, you always reduce it to the ultimate probabilities over the outcomes.
For instance, consider the set of nested lotteries: Lottery A = [Heads you lose, Tails you get to participate in Lottery B]; Lottery B = [Heads you lose, Tails you win $100]. Expected utility theory says you crunch the numbers and figure out that this is identical to a single lottery where you win $100 with probability 0.25. Everything else is irrelevant.
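To make the reduction concrete, here's the arithmetic as a quick sketch:

```python
# Collapse the nested lotteries into a single lottery over final outcomes.
p_tails = 0.5
p_win_given_B = p_tails            # inside Lottery B: tails wins $100
p_win_A = p_tails * p_win_given_B  # in Lottery A: tails just gets you into B

# Expected utility theory says only this final number matters:
assert p_win_A == 0.25
```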
Now consider the choice of Lottery A vs. Lottery B. Lottery A is the prize with probability 1/3. Lottery B is the prize with a probability that could be 0, 1/3, or 2/3. Whatever the true probability is (you can make assumptions where the ultimate probability is 1/3, for instance if each ball is black or white with a 50-50 probability — but it doesn't need to be that), ultimately you'll make some choice. Suppose it's A. Under expected utility theory, that can only be because you think red has a higher probability. If you think the probabilities are equal, then under expected utility theory, you must be indifferent between the two lotteries. If you choose B, under expected utility theory that can only be because you think white has a higher probability.
Now go on to Lottery C vs. Lottery D. If you chose A the first time around, that means you think P(R) > P(W). But then you have to have P(not R) < P(not W). That's just mathematically true because P(not R) = 1 - P(R). So you can't prefer C if you preferred A.
Nonetheless, most people chose both A and C. Mostly, they did so because the probability of R is a known 1/3, and the probability of not-R is a known 2/3, while the probability of W and not-W are kind of unknown. Note: This is not risk aversion, because the probabilities we're talking about aren't the probabilities of the ultimate prize. Rather, we're talking about the probabilities of what the probabilities are. This is called ambiguity aversion. Ambiguity aversion plays no role in expected utility theory, where only the ultimate probabilities (and the utility of the outcomes, which I've held constant here) count. Therefore, in this setup, most people make choices inconsistent with expected utility theory.
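Here's a small sketch of the consistency requirement. Whatever probability you assign to drawing white, expected utility theory forces the A-and-D (or B-and-C) pairing, never A-and-C:

```python
# P(red) is a known 1/3; P(white) can be anything from 0 to 2/3.
def consistent(p_white):
    p_red = 1 / 3
    prefers_A_to_B = p_red > p_white              # first choice
    prefers_D_to_C = (1 - p_white) > (1 - p_red)  # second choice
    # The two strict preferences must go together:
    return prefers_A_to_B == prefers_D_to_C

# Holds for every possible belief about P(white) in [0, 2/3]:
assert all(consistent(k / 300) for k in range(201))
```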
Is this good? Bad? Irrelevant? Does it illustrate the crooked timber of humanity? The uselessness of expected utility theory? Stay tuned.
Ellsberg paradox, take 3:
Back to the Ellsberg paradox (so-called, 'cause it's not really a paradox). Based on a bunch of previous comments, let me summarize where we're at, with a simplified version of the paradox.
There are three balls. One is red. For each of the other two balls, someone flipped a fair coin to determine whether it would be white or black.
You can imagine a number of lotteries based on a draw from these balls. For example, consider the following four lotteries:
Lottery A: Win $100 if we draw a red ball.
Lottery B: Win $100 if we draw a white ball.
Lottery C: Win $100 if we draw a ball that isn't red.
Lottery D: Win $100 if we draw a ball that isn't white.
Do you prefer Lottery A or Lottery B? Do you prefer Lottery C or Lottery D?
(This differs from the previous example in the following ways: First, I've given a specific set of probabilities for white vs. black. Second, I've made it clear that I'm not offering any lotteries, just eliciting your opinion. Third, I've made the prize $100, just to be more specific.)
It turns out that most people prefer A to B, and prefer C to D. This is inconsistent with expected utility theory, which says your preferences over lotteries should only depend on what the ultimate probabilities are and the utility of the item. More below the fold, including the answer to the question: "Who cares?"
Since, in this setup, the probability of white is 1/3 — the same as red — expected utility theory says you're indifferent between A and B. Suppose you get utility X from having the prize and utility N from not having it; then Lottery A gives you utility 1/3 X + 2/3 N, and so does Lottery B. Similarly, you're indifferent between C and D. Or, if we make the coin flip slightly biased toward black, you should strictly prefer A over B, and you should strictly prefer D over C. [UPDATE: Mistake corrected here.]
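If you want to verify the indifference mechanically, here's a sketch that enumerates the four coin-flip configurations (X and N are arbitrary placeholder utilities):

```python
from itertools import product

X, N = 1.0, 0.0  # utility with / without the prize (placeholder values)

def expected_utility(wins):
    total = 0.0
    # The two non-red balls are each white or black via a fair coin flip,
    # so each of the four color configurations has probability 1/4.
    for colors in product(["white", "black"], repeat=2):
        balls = ["red", *colors]
        p_win = sum(wins(b) for b in balls) / 3  # one ball drawn at random
        total += 0.25 * (p_win * X + (1 - p_win) * N)
    return total

A = expected_utility(lambda b: b == "red")
B = expected_utility(lambda b: b == "white")
C = expected_utility(lambda b: b != "red")
D = expected_utility(lambda b: b != "white")

assert abs(A - B) < 1e-12 and abs(C - D) < 1e-12
```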
O.K., so the way people act — call it "ambiguity aversion" here — is inconsistent with expected utility theory. (Expected utility theory, by the way, is the dominant theory in economics of how people act under uncertainty.) It looks like risk aversion, but it's not, because in terms of bottom-line probabilities and outcomes, A and B are identical to a risk averse person, and so are C and D.
So what? There are those who would say you should act according to expected utility theory. Even people who wouldn't go that far would say that if you can see that the probability of white is 1/3, you should treat B as equivalent to A. You shouldn't let yourself be fooled by the two-stage process where first we flip a coin to determine the probabilities and then we run the prize lottery; you should treat the whole thing as a single unified lottery.
I won't go there — to me, expected utility theory is a way of trying to describe how people behave, and if people don't behave that way, so much the worse for expected utility theory. If you prefer A and C over B and D, more power to you.
But, note that I can make money off this (even though I'm unconnected with the lottery).
Step 1: Suppose you're holding a B lottery ticket, and I have $100. Because you prefer A to B, you'd be willing to trade B plus some amount of money to get A. So I write you a piece of paper saying "I promise to pay you $100 if red is drawn," take your "$100 if white is drawn" ticket, and pocket the extra money.
Step 2: Now I find someone with a D lottery ticket, and I write him a C lottery ticket, i.e., a piece of paper saying "I promise to pay you $100 if red isn't drawn." We make the trade and I pocket some extra cash.
Step 3: I observe that with the two papers I've given up — an A ticket and a C ticket — I've given up $100 for sure, because I'm going to have to pay $100 on exactly one of those tickets. What I've received in exchange — a B ticket and a D ticket — also gives me $100 for sure, because I'm going to get $100 on exactly one of those tickets, depending on whether the ball drawn is or isn't white. So the ticket exchange was exactly a wash. But I've pocketed some change twice.
Step 4: Find two more holders of a B and a D lottery ticket. Return to Step 1.
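Here's a sketch of the dealer's books after one pass through Steps 1 through 3. The premium `eps` is a made-up number; the point is that the tickets wash whatever ball is drawn:

```python
eps = 1.0  # assumed premium each trader pays to swap tickets

def dealer_net(drawn):
    # Gave up A and C tickets; received B and D tickets; collected eps twice.
    pay = (100 if drawn == "red" else 0) + (100 if drawn != "red" else 0)
    get = (100 if drawn == "white" else 0) + (100 if drawn != "white" else 0)
    return get - pay + 2 * eps

# Risk-free profit: the same 2 * eps no matter which ball comes out.
assert all(dealer_net(b) == 2 * eps for b in ["red", "white", "black"])
```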
There are similar cases, with lotteries, where transitivity is violated — you can find objects A, B, and C, such that someone prefers A to B, B to C, and C to A. Economists call this a failure of "rationality" — where rationality is just defined as (1) completeness and (2) transitivity. But that doesn't necessarily have any normative component. Be intransitive if you like.
However, let me invite you over while you're holding A. Since you prefer C to A, I'll offer to take A plus a little cash off your hands in exchange for C. Since you prefer B to C, I'll then take C plus cash in exchange for B. And since you prefer A to B, I'll take B plus cash in exchange for A. At the end of this whole transaction, you're still holding A, but I've collected money from you three times.
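The same money pump, sketched in a few lines (the premium is again a made-up `eps`):

```python
eps = 1.0  # assumed premium you pay per upgrade
holding, cash = "A", 0.0
# You prefer C to A, then B to C, then A to B, so each swap looks good to you.
for better in ["C", "B", "A"]:
    holding, cash = better, cash - eps

assert holding == "A" and cash == -3 * eps  # same ticket, three premiums poorer
```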
Moral: The Ellsberg "paradox" isn't a paradox at all, just an example of how people violate expected utility theory. I don't think you're a bad person for violating expected utility theory; in a conflict between my theory and your practice, your practice wins. But let me propose a little transaction....
Ellsberg paradox, take 4:
Glad you all liked the series of Ellsberg paradox posts. If you're interested in these issues, check out the following fairly accessible article: Mark J. Machina, Choice Under Uncertainty: Problems Solved and Unsolved, Journal of Economic Perspectives, Summer 1987, at 121.
A most ingenious paradox:
This is another old paradox, which I'm posting mostly because I like the author's style in presenting it:
In the Hanged-Man Paradox, a man, K, is sentenced on Sunday to be hanged, but the judge, who is evidently French or enamored of the French wit for surprising those sentenced to the guillotine in their last moments, orders that the hanging take place on one of the next five days at noon. Smiling wistfully, he says to K, "You will not know which day until they come to take you to the gallows."
K, who has evidently been condemned for logical perversions, cannot prevent his mind from nevertheless trying to figure out in advance which day will be his last. He quickly realizes it cannot be Friday, because if he has not been hanged by Thursday noon, he will know nearly a full day before they come to get him that he will be hanged on Friday. He is simultaneously pleased at his cleverness and depressed that he has pushed his date with the gallows closer to Sunday.
Soon enough, he realizes that if Friday is logically excluded, then so is Thursday, because if he has not been hanged by noon Wednesday, he will know that, Friday being excluded, his date must be Thursday. In like manner, he can exclude Wednesday, Tuesday, and Monday. As a logician, he smugly concludes that the judge's decree is false. On Thursday noon he is hanged. The paradox is that he is surprised when they come to take him to the gallows.
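K's backward induction can be sketched as a little elimination loop. Each day gets crossed off once every later day has been crossed off, since he would then see the hanging coming:

```python
days = ["Mon", "Tue", "Wed", "Thu", "Fri"]
possible = set(days)
for day in reversed(days):
    # If no later day is still possible, then at the prior noon this day
    # would be the only candidate left, hence no surprise, hence (by K's
    # logic) it is excluded too.
    later = {d for d in days[days.index(day) + 1:] if d in possible}
    if not later:
        possible.discard(day)

assert possible == set()  # the argument "rules out" every day, yet K hangs
```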
(One can easily think up less macabre relatives of the Hanged-Man Paradox, such as the Surprise Quiz, a device with which we are all familiar and by which no doubt many of us have illogically been surprised.)
Russell Hardin, Collective Action 147 (1982) (paragraph breaks added). (Of course this isn't a real paradox — just a cautionary tale.) Hardin concludes (p. 148): "His problem was that facing a hangman focused his mind a little too admirably."
P.S. On people named K, see Kozinski & Volokh, The Appeal, 103 Mich. L. Rev. 1391 (2005).
UPDATE: AnonVCfan refers, in the comments, to the "less refined, ugly cousin of this paradox," the famous dialogue from The Princess Bride. I'll reproduce here what I wrote in the comments: "I see the Princess Bride dialogue as illustrating the fact from Game Theory that the game of Matching Pennies has no Nash equilibrium in pure strategies. The Hanged Man's paradox is 'simpler' in a way, because all you need to refute it is elementary logic."
UPDATE 2: Just so no one gets confused here — this paradox is only "simpler" in a way. It's got an intuitive explanation, but in fact it's very hard, and logicians have written like a hundred articles about it. For a good overview, see this paper by Tim Chow. I can follow the gist of it, but the technical aspects are beyond my knowledge of logic.