Back to the Ellsberg paradox (so-called, 'cause it's not really a paradox). Based on a bunch of previous comments, let me summarize where we're at, with a simplified version of the paradox.
There are three balls. One is red. For each of the other two balls, someone flipped a fair coin to determine whether it would be white or black.
You can imagine a number of lotteries based on a draw from these balls. For example, consider the following four lotteries:
Lottery A: Win $100 if we draw a red ball.
Lottery B: Win $100 if we draw a white ball.
Lottery C: Win $100 if we draw a ball that isn't red.
Lottery D: Win $100 if we draw a ball that isn't white.
Do you prefer Lottery A or Lottery B? Do you prefer Lottery C or Lottery D?
(This is different than the previous example in the following ways: First, I've given a specific set of probabilities for white vs. black. Second, I've made it clear that I'm not offering any lotteries, just eliciting your opinion. Third, I've made the prize $100, just to be more specific.)
It turns out that most people prefer A to B, and prefer C to D. This is inconsistent with expected utility theory, which says your preferences over lotteries should depend only on the ultimate probability of each outcome and its utility. More below the fold, including the answer to the question: "Who cares?"
O.K., so the way people act — call it "ambiguity aversion" here — is inconsistent with expected utility theory. (Expected utility theory, by the way, is the dominant theory in economics of how people act under uncertainty.) It looks like risk aversion, but it's not, because in terms of bottom-line probabilities and outcomes, A and B are identical to a risk-averse person, and so are C and D.
So what? There are those who would say you should act according to expected utility theory. Even people who wouldn't go that far would say that if you can see that the probability of white is 1/3, you should treat B as equivalent to A. You shouldn't let yourself be fooled by the two-stage process where first we flip a coin to determine the probabilities and then we run the prize lottery; you should treat the whole thing as a single unified lottery.
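The unified-lottery point can be checked directly by enumerating the four equally likely urn compositions. A minimal sketch in Python (the function and variable names are mine, not anything from the original setup):

```python
from fractions import Fraction
from itertools import product

# The two non-red balls are each white or black with probability 1/2,
# so these four urn compositions are equally likely.
compositions = [("red",) + flips for flips in product(("white", "black"), repeat=2)]

def win_prob(wins):
    # P(prize) = average over compositions of (# winning balls) / 3.
    return sum(Fraction(sum(wins(b) for b in comp), 3) for comp in compositions) / len(compositions)

a = win_prob(lambda ball: ball == "red")        # Lottery A
b = win_prob(lambda ball: ball == "white")      # Lottery B
c = win_prob(lambda ball: ball != "red")        # Lottery C
d = win_prob(lambda ball: ball != "white")      # Lottery D
print(a, b, c, d)  # 1/3 1/3 2/3 2/3
```

Collapsing the coin flips and the draw into one lottery, A and B both pay $100 with probability 1/3, and C and D both pay with probability 2/3.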
I won't go there — to me, expected utility theory is a way of trying to describe how people behave, and if people don't behave that way, so much the worse for expected utility theory. If you prefer A and C over B and D, more power to you.
But, note that I can make money off this (even though I'm unconnected with the lottery).
Step 1: Suppose you're holding a B lottery ticket, and I have $100. Because you prefer A to B, you'd be willing to trade B plus some amount of money to get A. So I write you a piece of paper saying "I promise to pay you $100 if red is drawn," take your "$100 if white is drawn" ticket, and pocket the extra money.
Step 2: Now I find someone with a D lottery ticket, and I write him a C lottery ticket, i.e., a piece of paper saying "I promise to pay you $100 if red isn't drawn." We make the trade and I pocket some extra cash.
Step 3: I observe that with the two papers I've given up — an A ticket and a C ticket — I've given up $100 for sure, because I'm going to have to pay $100 on exactly one of those tickets. What I've received in exchange — a B ticket and a D ticket — also gives me $100 for sure, because I'm going to get $100 on exactly one of those tickets, depending on whether the ball drawn is or isn't white. So the ticket exchange was exactly a wash. But I've pocketed some change twice.
Step 4: Find two more holders of a B and a D lottery ticket. Return to Step 1.
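The steps above can be verified with a quick sketch — here in Python, with a hypothetical $1 fee standing in for "some extra money" at each trade:

```python
FEE = 1  # hypothetical premium each trader pays me per swap, in dollars

def my_net(drawn):
    # I wrote an A ticket and a C ticket, and I hold a B ticket and a D ticket.
    pay_a = 100 if drawn == "red" else 0        # I owe on the A ticket I wrote
    pay_c = 100 if drawn != "red" else 0        # I owe on the C ticket I wrote
    get_b = 100 if drawn == "white" else 0      # the B ticket I hold pays me
    get_d = 100 if drawn != "white" else 0      # the D ticket I hold pays me
    return get_b + get_d - pay_a - pay_c + 2 * FEE

for color in ("red", "white", "black"):
    print(color, my_net(color))  # +$2 no matter which ball is drawn
```

Whatever color is drawn, the ticket payments cancel exactly and I'm left holding the two fees — the classic Dutch book.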
There are similar cases, with lotteries, where transitivity is violated — you can find objects A, B, and C such that someone prefers A to B, B to C, and C to A. Economists call this a failure of "rationality" — where rationality is just defined as satisfying (1) completeness and (2) transitivity. But that doesn't necessarily have any normative component. Be intransitive if you like.
However, let me invite you over while you're holding A. I'll offer to take A off your hands for C plus cash. Then I'll offer to take C off your hands in exchange for B plus cash. Then I'll offer to take B off your hands for A plus cash. At the end of this whole transaction, you're still holding A, but I've collected money from you three times.
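The money pump can be written out the same way — a sketch, with a hypothetical $1 sweetener that you add to each swap:

```python
FEE = 1  # hypothetical cash you add to each swap, in dollars

holding, my_take = "A", 0
for offer in ("C", "B", "A"):      # I hand you C for A, then B for C, then A for B
    holding, my_take = offer, my_take + FEE

print(holding, my_take)  # A 3 -- you hold A again, and I've collected $3
```

You end the cycle exactly where you started, minus three payments — which is why intransitive preferences, harmless on paper, are exploitable in trade.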
Moral: The Ellsberg "paradox" isn't a paradox at all, just an example of how people violate expected utility theory. I don't think you're a bad person for violating expected utility theory; in a conflict between my theory and your practice, your practice wins. But let me propose a little transaction....