I’ve just been teaching my law and economics class about mixed strategies in game theory, and I thought I’d share. This will be totally old hat to those who know anything about game theory, but hey, mixed strategies are fun.

The game is the stag hunt, once described by Rousseau. We have two hunters, who could hunt either stag or hare. Stag hunting is a cooperative endeavor, so you need two people to do it; the stag is worth 20, and the two hunters share the stag for payoffs of 10. But if either of the hunters decides to hunt hare, he can do that by himself and get a hare worth 8; any hunter who’s hunting stag at that point gets 0, because he doesn’t have anyone helping him. These particular numbers are taken from Game Theory and the Law by Baird, Gertner, and Picker.

We can write this game in normal form using a 2×2 game board, where Hunter 1’s possible strategies are the rows, Hunter 2’s possible strategies are the columns, and the notation (x,y) indicates that Hunter 1 gets a payoff of x and Hunter 2 gets y.

| | Hunter 2: Stag | Hunter 2: Hare |
|---|---|---|
| **Hunter 1: Stag** | (10,10) | (0,8) |
| **Hunter 1: Hare** | (8,0) | (8,8) |

Let’s look for Nash equilibria of this game — that is, pairs of strategies where, given that one player is playing a particular strategy, the other player can’t do better by defecting.

It should be clear that (Stag, Stag) is a Nash equilibrium: Hunter 1 gets 10 (as opposed to 8 if he went to the cell below and hunted hare), and Hunter 2 likewise gets 10 (as opposed to 8 if he went to the cell to the right and hunted hare). (Intuitively, if you both hunt stag, you get half a stag, worth 10, so why would you instead pursue a hare only worth 8?) It should also be clear that (Hare, Hare) is a Nash equilibrium: Hunter 1 gets 8 (as opposed to 0 if he went to the cell above and hunted stag), and Hunter 2 likewise gets 8 (as opposed to 0 if he went to the cell to the left and hunted stag). (Intuitively, if you both hunt hare, you get a hare worth 8, so why would you unilaterally hunt stag, an endeavor that will be unsuccessful unless the other guy is helping?)
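The best-response logic above can be checked mechanically. Here's a quick Python sketch (not from the original post) that brute-forces the pure-strategy equilibria from the payoff table:

```python
# Payoff table for the stag hunt: keys are (Hunter 1's strategy,
# Hunter 2's strategy), values are (Hunter 1's payoff, Hunter 2's payoff).
payoffs = {
    ("stag", "stag"): (10, 10),
    ("stag", "hare"): (0, 8),
    ("hare", "stag"): (8, 0),
    ("hare", "hare"): (8, 8),
}

def is_nash(s1, s2):
    """A profile is Nash if neither hunter gains by unilaterally deviating."""
    u1, u2 = payoffs[(s1, s2)]
    hunter1_cant_improve = all(payoffs[(d, s2)][0] <= u1 for d in ("stag", "hare"))
    hunter2_cant_improve = all(payoffs[(s1, d)][1] <= u2 for d in ("stag", "hare"))
    return hunter1_cant_improve and hunter2_cant_improve

equilibria = [profile for profile in payoffs if is_nash(*profile)]
print(equilibria)  # the two pure-strategy equilibria: (stag, stag) and (hare, hare)
```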

These sorts of games are called “coordination games”; if the two parties haven’t coordinated ahead of time, we’re not sure which equilibrium to predict that they’ll choose. You’d think they might choose the one that gives them (10,10) because that’s Pareto superior, but apparently people don’t always choose the Pareto-superior equilibrium in experiments.

But these are only “pure strategies”, i.e. “hunt stag” or “hunt hare”. There are also “mixed strategies”, for instance “hunt stag with a 50% probability”. It’s clear that if Hunter 1 is definitely hunting stag, Hunter 2’s best choice is to hunt stag, so he’ll have no reason to randomize; and if Hunter 1 is definitely hunting hare, Hunter 2’s best choice is to hunt hare, so here, too, he’ll have no reason to randomize. (Because this is a symmetrical game, we can similarly show that Hunter 1 will never randomize if Hunter 2 is playing a pure strategy.) **So the only way we can see a mixed strategy equilibrium is if both parties are randomizing.** And the only reason anyone would randomize between stag and hare is if he’s indifferent between stag and hare.

So, what will it take to make the hunters indifferent between stag and hare? Suppose Hunter 1 plays the following strategy: “Hunt stag with a probability *p* and hunt hare with a probability *1−p*”. Then Hunter 2’s payoff from hunting stag is *10p + 0·(1−p) = 10p*, and his payoff from hunting hare is 8. Hunter 2 is indifferent between hunting stag and hunting hare if *10p = 8*; in other words, if *p = 0.8*. (Because this is a symmetrical game, we can similarly show that, if Hunter 2 is hunting stag with a probability *q*, Hunter 1 will be indifferent when *q = 0.8*.)
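The indifference calculation is easy to verify numerically. This sketch (my own illustration, not from the post) computes Hunter 2's expected payoffs against Hunter 1's mix and confirms they coincide at *p* = 0.8:

```python
def eu_stag(p):
    """Hunter 2's expected payoff from hunting stag,
    when Hunter 1 hunts stag with probability p."""
    return 10 * p + 0 * (1 - p)

def eu_hare(p):
    """Hunting hare pays 8 regardless of what Hunter 1 does."""
    return 8

# Indifference requires 10p = 8, so p* = 0.8.
p_star = 8 / 10
print(eu_stag(p_star), eu_hare(p_star))  # both equal 8 at p = 0.8
```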

So a third Nash equilibrium of the game is where each hunter hunts stag with an 80% probability and hunts hare with a 20% probability. The expected payoffs from this strategy profile are (8,8). Any unilateral deviation from this equilibrium gives the deviator the same expected payoff of 8, so he can’t do strictly better by deviating.
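One last sketch (again my own, not from the post) makes the no-profitable-deviation point concrete: with Hunter 2 mixing at *q* = 0.8, Hunter 1's expected payoff is 8 no matter what probability *p* he chooses, so every strategy (pure or mixed) is a best response:

```python
q = 0.8  # Hunter 2's equilibrium probability of hunting stag

def eu_hunter1(p):
    """Hunter 1's expected payoff when he hunts stag with probability p
    and Hunter 2 hunts stag with probability q = 0.8."""
    # Stag: pays 10 only if Hunter 2 also hunts stag. Hare: pays 8 regardless.
    return p * (q * 10 + (1 - q) * 0) + (1 - p) * 8

for p in (0.0, 0.3, 0.8, 1.0):
    print(p, eu_hunter1(p))  # expected payoff is 8.0 for every p
```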