Later On

A blog written for those whose interests more or less match mine.

William Press: At lunch with Freeman Dyson


William Press writes in Inference:

IN THIS ESSAY, I would like to tell the story of a minor discovery in mathematical game theory that Freeman Dyson and I made in 2011. Dyson was a personal friend and one of the great mathematical physicists of the twentieth century. He died in 2020, at the age of ninety-six. He was famously self-effacing, which is not to say that he lacked an accurate opinion of his own abilities. Freeman would deny that he had done anything at all and then allow friends—or even strangers—to vehemently contradict him. Our discovery was not of that character. It really was very minor. The reasons for telling the story now are less about the discovery itself and more about the tendency of scientists to seek lessons in moral philosophy in the least likely of places—high-school algebra, for example.

Imagine that a group of scientists gather to play a kind of terror game. They must propose scenarios that, should they eventuate, would shake their belief in the foundations of their fields. The mathematician’s proposed terror is that a long message, in English, is found to be encoded—in excess of any plausible random probability—somewhere in the first billion digits of pi.1 The physicist’s terror is that the interaction cross-section of a fundamental particle will have significantly different values when measured in different places on earth, or in the same place at different times.2 The biologist’s terror is that some feature of the living world will be unexplainable by the principle of natural selection. Within biology’s subspecialty of evolution theory, there is a small area of study known as evolution of cooperation. That study, some would say, lies closest to the biologist’s terror. That makes it worth poking at.

Cooperation and Defection

IN BIOLOGY, a cooperator is an individual who pays a cost for another individual to receive a benefit. When cooperation is mutually beneficial to two individuals of the same or different species—a condition termed direct reciprocity—then it is favored by natural selection. There are other possibilities. In so-called kin selection, an individual’s self-sacrifice may be favored if, on average, it helps another individual in the same gene pool to survive.3 The unit of survival is understood in this case to be not the individual, but the gene that two individuals share.4 It is harder to understand why individuals cooperate when defection would be more favorable or when the reciprocity is only indirect.

Suppose that two microbe species, A and B, both need processed nutrients a and b. The cooperative state might be that A produces a, B produces b, and each secretes a portion of its nutrient for the benefit of the other. But this equilibrium is not evolutionarily stable: an A with a mutation that halts its sharing of a becomes a free rider, benefiting from b without paying the fare. Free riders, avoiding a cost, will tend to take over the population. The evolutionarily stable endpoint is noncooperation, even though cooperation would be better for both species.
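The free-rider takeover can be made concrete with a replicator-dynamics sketch. This is my own illustration, not from the essay: a single population with a fraction x of cooperators who pay cost c to secrete a nutrient, while everyone (cooperator or not) receives a benefit proportional to x; the parameter values b and c are arbitrary.

```python
# Replicator-dynamics sketch: fraction x of the population are cooperators,
# who pay cost c to secrete a nutrient. Everyone receives benefit b * x
# (proportional to the cooperator fraction), but only cooperators pay c,
# so free riders always have strictly higher fitness.

def step(x, b=2.0, c=1.0, dt=0.01):
    f_coop = b * x - c               # cooperator fitness: shared benefit minus cost
    f_free = b * x                   # free rider: same benefit, no cost
    f_avg = x * f_coop + (1 - x) * f_free
    return x + dt * x * (f_coop - f_avg)   # discrete replicator update

x = 0.99                             # start with 99% cooperators
for _ in range(100_000):
    x = step(x)
print(round(x, 3))                   # cooperators dwindle toward 0
```

Because the cost term c is the only difference between the two fitnesses, cooperators decline no matter how large their initial share: the stable endpoint is noncooperation, exactly as the paragraph above describes.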

Cooperation among humans seems hardest of all to understand. “Humans are the champions of cooperation,” Martin Nowak has remarked. “From hunter-gatherer societies to nation-states, cooperation is the decisive organizing principle of human society.”5 In much, if not most, of our cooperation, reciprocity is indirect. To be sure, some people give money to universities in the hope of getting their own children admitted—kin selection—but many more give to charities that are of no direct benefit to themselves or their kin. Many billionaires become philanthropists, but from the standpoint of evolution theory, why is this? A quirk of our culture, maybe? But cultures, too, compete for dominance with other contemporaneous cultures, and by a process akin to natural selection. Are we to understand that generosity is selectively favored? Or are the generous billionaires only transient?

Charles Darwin recognized that cooperation posed a challenge to his theory of natural selection. He described an elegant experiment to ferret out whether the aphid yields its excretion to the ant voluntarily, or involuntarily with the ant as a parasite.6 He provided a convincing argument that it was the former. Darwin, the consummate naturalist, hated overgeneralized theory. Yet the significant literature on the evolution of cooperation that has flourished in the last fifty years is almost entirely theoretical. Much of it is cast in the formalism of mathematical game theory, a subject that came into existence more than half a century after Darwin’s death in the work of John von Neumann and Oskar Morgenstern. Game theory describes how competing, sentient players, in a well-defined universe of choices and payoffs, may knowingly seek to optimize their own outcomes. Evolution is the blind watchmaker,7 optimizing only by trial and error. Exactly how the achievable outcomes of evolution correspond to the mathematical optima of game theory is not a settled question.

The Prisoner’s Dilemma

GO BACK TO MICROBES A and B, but now promote them to sentience. They become Alice and Bob, who are arrested on suspicion of committing, together, a serious crime. Each has sworn not to betray the other. They are questioned in separate rooms.

“We already have enough evidence to convict you both of a misdemeanor,” the detective says to each, “that will put you away for one year.” Each, separately, says nothing. “But if you defect, rat out your partner and turn state’s evidence,” the detective continues, “we’ll let you go, scot-free. Your partner will get a felony conviction, six years in the state penitentiary.”

“What if we both turn state’s evidence?” Alice and Bob each ask.

“Well, I can’t let you both go free,” the detective says. “You’ll each get three years.”

Alice reasons as follows: . . .
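Alice's reasoning (truncated in the excerpt above) can be sketched numerically. The sentence lengths below come straight from the detective's offer; the code and labels are my own illustration.

```python
# Years in prison for (my_move, partner_move); lower is better.
# "C" = stay silent (cooperate with my partner), "D" = turn state's evidence.
YEARS = {
    ("C", "C"): 1,   # both stay silent: misdemeanor, one year each
    ("C", "D"): 6,   # I stay silent, partner defects: six years for me
    ("D", "C"): 0,   # I defect, partner stays silent: I go free
    ("D", "D"): 3,   # both defect: three years each
}

# Whatever the partner does, defecting gives me a strictly shorter sentence...
for partner_move in ("C", "D"):
    assert YEARS[("D", partner_move)] < YEARS[("C", partner_move)]

# ...yet mutual cooperation beats mutual defection. That is the dilemma:
assert YEARS[("C", "C")] < YEARS[("D", "D")]
print("Defection dominates, but mutual silence is better for both.")
```

Defection is a dominant strategy for each player separately, so two rational players land on three years each when they could have had one.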

Continue reading. There’s much more.

I will add this: Robert Axelrod ran two “tournaments” in which computer programs competed in the iterated Prisoner’s Dilemma. Anatol Rapoport (author of Operational Philosophy, a book that had a big impact on me when I read it in high school) submitted the program that won the first tournament, using the simple strategy of tit for tat: cooperate on the first move, then do whatever the other player did on the previous move.

Axelrod published the results and the strategies used, and then, with all of that information available to the participants, held a second tournament. Rapoport won that one too, with the same strategy.
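Tit for tat is simple enough to sketch in a few lines. The payoff values below are the conventional points used in Axelrod's tournaments (3 each for mutual cooperation, 1 each for mutual defection, 5 for a lone defector, 0 for the player defected against); the strategy and match code are my own minimal illustration, not Axelrod's tournament machinery.

```python
# Iterated Prisoner's Dilemma, points version (higher is better).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_history, their_history):
    # Cooperate on the first move, then mirror the opponent's last move.
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "D"

def play(strat1, strat2, rounds=10):
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = strat1(h1, h2), strat2(h2, h1)
        p1, p2 = PAYOFF[(m1, m2)]
        score1 += p1
        score2 += p2
        h1.append(m1)
        h2.append(m2)
    return score1, score2

print(play(tit_for_tat, tit_for_tat))     # two cooperators prosper together
print(play(tit_for_tat, always_defect))   # exploited once, then it retaliates
```

Over ten rounds, two tit-for-tat players score 30 each, while against an unconditional defector tit for tat loses only the first round (9 points to 14) and never again rewards defection. That mix of niceness, retaliation, and forgiveness is what Axelrod credited for its tournament wins.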

Axelrod’s book is a lot of fun. Recommended.

Written by Leisureguy

14 May 2021 at 1:15 pm
