The “Prisoner’s Dilemma” is one of game theory’s oldest, most influential and most poetic ideas. As in life, a player’s best strategy depends on the kind of game she’s in (one round? ten rounds? endless rounds?), who the other players are (strangers? familiar partners?) and how much she knows (being sure there are many turns has different logical consequences from being sure there are exactly eight). Playing it requires arithmetic, logic, memory, and a feel for psychology, perhaps even a philosophy of life. It’s as simple in principle, and as rich in practice, as chess or go. Which is why I was surprised by this paper last month, which reports that laboratory rats can play the game.
The game’s name comes from Albert Tucker’s lucid way of representing its logic: Two crooks are being interrogated in separate cells. If both refuse to talk, they each get a minor jail sentence. If both confess, they each get five years. But if one confesses while the other clams up, their fates are different: The confessor goes free, while the silent prisoner gets 10 years. Each player’s object is to maximize his “payoff” (get the shortest sentence possible). The greatest good for the greatest number would result if both stayed silent (two jail terms of six months each) but for each individual, the best strategy is to squeal: If the other guy also rats you out, you both get five years, but if he stays silent, you go free. On the other hand, if you trust in the other player, and he betrays you, it’s you who spend a decade in the slammer.
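The logic above can be tabulated. Here is a minimal sketch in Python; the sentence lengths (six months, five years, ten years, freedom) come straight from the story, and the `SENTENCES` name is just for illustration:

```python
# Prisoner's Dilemma payoffs, measured in years of jail time (lower is better).
# Each entry maps (my_move, other_move) -> (my_sentence, other_sentence).
SILENT, CONFESS = "silent", "confess"

SENTENCES = {
    (SILENT,  SILENT):  (0.5, 0.5),   # both clam up: six months each
    (SILENT,  CONFESS): (10,  0),     # I stay silent, he squeals: I get 10 years
    (CONFESS, SILENT):  (0,   10),    # I squeal, he stays silent: I go free
    (CONFESS, CONFESS): (5,   5),     # both confess: five years each
}

# Confessing "dominates": whatever the other prisoner does,
# my own sentence is shorter if I confess than if I stay silent.
for other_move in (SILENT, CONFESS):
    assert SENTENCES[(CONFESS, other_move)][0] < SENTENCES[(SILENT, other_move)][0]
```

The assertion at the end is the dilemma in miniature: each player does better by confessing no matter what the other does, yet mutual confession is worse for both than mutual silence.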
So the best strategy for individuals causes the two prisoners together to spend 10 years in prison (either five each, or 10 for the patsy while the squealer goes free). The best strategy for them together means only a year behind bars (six months each), but that strategy requires them to trust each other (not necessarily by attending each other’s weddings, but simply by having some way to be certain that neither one will betray the other).
As it turns out, repeated playing of the game will supply such a path (but only if the players don’t know how many rounds they will have). Over many turns, each “prisoner” can adjust his behavior according to the actions of the other. At a famous “tournament” of computer programs that played the game more than two hundred times, the winner was Anatol Rapoport’s program “Tit for Tat,” which cooperated unless it was betrayed, in which case it betrayed on the next go-round.
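“Tit for Tat” is simple enough to sketch in a few lines. This is a toy reconstruction, not the tournament code; the function names and the `play` loop are illustrative:

```python
# Rapoport's "Tit for Tat": cooperate on the first round,
# then copy whatever the opponent did on the previous round.
def tit_for_tat(my_history, their_history):
    return "cooperate" if not their_history else their_history[-1]

# A ruthless opponent for comparison.
def always_defect(my_history, their_history):
    return "defect"

def play(strategy_a, strategy_b, rounds):
    """Run an iterated game and return each player's move history."""
    hist_a, hist_b = [], []
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        hist_a.append(move_a)
        hist_b.append(move_b)
    return hist_a, hist_b

a, b = play(tit_for_tat, always_defect, 5)
# a == ["cooperate", "defect", "defect", "defect", "defect"]
```

Against a relentless defector, Tit for Tat is the sucker exactly once, then retaliates for the rest of the match; against a fellow cooperator, it never defects at all.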
Here is the rodent version, developed by Duarte S. Viana and coauthors: Two rats were placed in separate mazes from which they could see and smell each other. At the end of his maze, each rat had one of two compartments to enter. The other rat, then, could either enter the corresponding compartment in his maze or go the opposite way. This created the same four-box choice offered by the Prisoner’s Dilemma game: If both rats cooperated, both got food. If neither cooperated, they both got their tails pinched. But if one went into the “cooperation” box and the other into the “betrayal” box, then their fates differed: The “sucker” who cooperated got his tail pinched. The “cheater” got pinched too, but he also got food.
As Brian Mossop explains over at The Scientist, the rats understood all this quite quickly. The researchers controlled one player in each game by forcing that rat into a compartment they’d picked. When their “stooge” rat defected, the other rat usually would too. When he cooperated, the other rat cooperated more than half the time. In other words, in playing the game repeatedly, the rats learned to play “Tit for Tat” rather than going for the short-term win in each round. So the Norway rat, as the scientists write, has the self-control, memory and arithmetical skills to develop a strategy, and to calibrate it to the identity of his fellow player. Strike one more item off the list of “uniquely human” skills.