A very interesting article in Psychology Today by Christopher Bergland on the evolutionary advantage of altruism. It begins:
I woke up early this Christmas morning. While I was waiting for the water to boil, I noticed a book called “Essays of E.B. White” on the kitchen table and started flipping through it. I stumbled on an essay called “Unity,” which E.B. White wrote in 1960. I had been reading a lot of science articles about the evolutionary importance of community, cooperation, and empathy lately, and the words from his essay hit home:
“Most people think of peace as a state of Nothing Bad Happening, or Nothing Much Happening. Yet if peace is to overtake us and make us the gift of serenity and well being, it will have to be the state of Something Good Happening. What is this good thing? I think it is the evolution of community.”
My mom has a December 24th tradition of spending the day with her good friend and next-door-neighbor at “The Haven” which is a local food bank. They distribute food to individuals and families in the community who are in need. Last night she came home with heartwarming (and heartbreaking) stories of various people who had come to the food bank that day. My mom doesn’t consider working at The Haven “volunteering”, or a sacrifice. Not because she’s saintly, or more altruistic than most….My mom realized a long time ago that it made her feel better around the holidays to connect with other people in the community from all walks of life than to sit at home all day by the fire with family, indulging. Scientists continue to confirm that her empirical findings and intuitions can be backed up in a laboratory or clinical studies. . .
Whatever genes drive cooperative behavior would have had a strong survival advantage once humans began to live interdependently in small groups, for at that point non-cooperative individuals would be a real problem. I cannot help but believe such people would be culled from the group: hunting accidents would be the most common method, I imagine, but simply having a (cooperative) few get together and kill or chase away the non-cooperative one would also happen. Indeed, it still happens.
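A toy simulation makes the logic concrete. This is my own illustrative sketch with arbitrary parameters, not a model from the literature: start a group at half cooperators, cull non-cooperators with some probability each generation, and refill the group by copying survivors.

```python
import random

def simulate(generations=200, group_size=50, cull_prob=0.3, seed=1):
    """Toy model: each individual is a cooperator (True) or not (False).
    Each generation, every non-cooperator is culled with probability
    cull_prob; the group is refilled by copying surviving members at
    random (offspring inherit the parent's type, with a 1% mutation
    rate). Returns the final fraction of cooperators."""
    rng = random.Random(seed)
    group = [rng.random() < 0.5 for _ in range(group_size)]  # start 50/50
    for _ in range(generations):
        survivors = [c for c in group if c or rng.random() > cull_prob]
        if not survivors:
            survivors = [True]  # degenerate case: group wiped out
        # refill to group_size by copying random survivors, with mutation
        group = [s if rng.random() > 0.01 else not s
                 for s in (rng.choice(survivors) for _ in range(group_size))]
    return sum(group) / len(group)
```

With these (made-up) numbers, cooperators come to dominate within a few dozen generations; even occasional culling is a strong selective force, since mutation reintroduces non-cooperators only slowly.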
So it seems likely that a tendency toward cooperative behavior would be selected for. The emergence of community as a supported entity, though, seems more a matter of meme evolution: it grows under the right conditions, a fruit of meme evolution surrounded by meme thorns that also arise from meme evolution (wars, discord, oppression, and the like). And when community emerges, it’s great, but we don’t seem quite to know how to make it emerge. We don’t have a good grip on creating community, though we seem to have a good handle on what goes into it and what it consists of.
The problem, I think, is getting people to see community as a thing, and to see how their own actions can nourish or injure it. If people can see it and understand its benefits, then perhaps they can cooperate in creating and maintaining it, and the maintenance requires ongoing renewal. It’s like keeping a sourdough starter: you have to keep feeding it, and so it is with community, I would think. That can mean bringing in new people, incorporating them into the community, and seeing how the community grows in new directions.
The key is having people able to see how their actions affect it. And, of course, to care about it.
More on memes in The Meme Machine, by Susan Blackmore.
UPDATE: I just read this in The Week, March 24, 2017 issue:
Haitian immigrant Denis Estimon remembers how isolated he felt when he first immigrated to the U.S. Now a popular senior at Boca Raton Community High School, Estimon is making sure no other student has to sit alone at lunchtime. Estimon has started a club called “We Dine Together,” whose members seek out wallflowers in the courtyard and strike up a conversation. The club has sparked hundreds of unlikely friendships since it formed last fall, and jocks and geeks now sit side by side. “It’s not a good feeling, like you’re by yourself,” says Estimon, of his own experience. “That’s something I don’t want anybody to go through.”
Estimon is a community cultivator.
Carrie Arnold has an interesting article in Quanta:
In the 1950s, the Finnish biologist Björn Kurtén noticed something unusual in the fossilized horses he was studying. When he compared the shapes of the bones of species separated by only a few generations, he could detect lots of small but significant changes. Horse species separated by millions of years, however, showed far fewer differences in their morphology. Subsequent studies over the next half century found similar effects — organisms appeared to evolve more quickly when biologists tracked them over shorter timescales.
Then, in the mid-2000s, Simon Ho, an evolutionary biologist at the University of Sydney, encountered a similar phenomenon in the genomes he was analyzing. When he calculated how quickly DNA mutations accumulated in birds and primates over just a few thousand years, Ho found the genomes chock-full of small mutations. This indicated a briskly ticking evolutionary clock. But when he zoomed out and compared DNA sequences separated by millions of years, he found something very different. The evolutionary clock had slowed to a crawl.
Baffled by his results, Ho set to work trying to figure out what was going on. He stumbled upon Kurtén’s 1959 work and realized that the differences in rates of physical change Kurtén saw also appeared in genetic sequences.
His instincts as an evolutionary biologist told him that the mutation rates he was seeing in the short term were the correct ones. The genomes varied at only a few locations, and each change was as obvious as a splash of paint on a white wall.
But if more splashes of paint appear on a wall, they will gradually conceal some of the original color beneath new layers. Similarly, evolution and natural selection write over the initial mutations that appear over short timescales. Over millions of years, an A in the DNA may become a T, but in the intervening time it may be a C or a G for a while. Ho believes that this mutational saturation is a major cause of what he calls the time-dependent rate phenomenon.
“Think of it like the stock market,” he said. Look at the hourly or daily fluctuations of Standard & Poor’s 500 index, and it will appear wildly unstable, swinging this way and that. Zoom out, however, and the market appears much more stable as the daily shifts start to average out. In the same way, the forces of natural selection weed out the less advantageous and more deleterious mutations over time.
Ho’s discovery of the time-dependent rate phenomenon in the genome had major implications for biologists. It meant that many of the dates they used as bookmarks when reading life’s saga — everything from the first split between eukaryotes and prokaryotes billions of years ago to the re-emergence of the Ebola virus in 2014 — could be wrong. “When this work came out, everyone went ‘Oh. Oh, dear,’” said Rob Lanfear, an evolutionary biologist at the Australian National University in Canberra.
The time-dependent rate phenomenon wasn’t fully appreciated at first. For one thing, it is such a large and consequential concept that biologists needed time to wrap their heads around it. But there’s a bigger stumbling block: The concept has been all but impossible to use. Biologists have not been able to quantify exactly how much they should change their estimates of when things happened over the course of evolutionary history. Without a concrete way to calculate the shifts in evolutionary rates over time, scientists couldn’t compare dates.
Recently, Aris Katzourakis, a paleovirologist at the University of Oxford, has taken the time-dependent rate phenomenon and applied it to the evolution of viruses. In doing so, he has not only pushed back the origin of certain classes of retroviruses to around half a billion years ago — long before the first animals moved from the seas to terra firma — he has also developed a mathematical model that can be used to account for the time-dependent rate phenomenon, providing biologists with much more accurate dates for evolutionary events.
Other scientists are excited by the prospect. “It’s like Einstein’s theory of relativity, but for viruses,” said Sebastián Duchêne, a computational evolutionary biologist at the University of Melbourne. The time-dependent rate phenomenon says that the speed of an organism’s evolution will depend on the time frame over which the observer is looking at it. And as with relativity, researchers can now calculate by how much.
Viral Fossil Hunting
Katzourakis has spent his career trying to pin down the origin of HIV and other so-called “retroviruses,” which are made out of single strings of RNA.
When he looked at the mutation rates of HIV, he found that it is among the fastest-evolving viruses ever studied. The speedy mutation rate makes sense: Double-stranded molecules like DNA have molecular proofreaders that can often correct errors made during replication, but HIV and other single-strand RNA viruses don’t. Spelling errors occur on top of spelling errors.
Because of this, virologists can directly study only the recent history of viruses like this. Older samples have reached mutation saturation, with so many accumulated spelling errors that scientists can’t account for them all. Taking the history of retroviruses back thousands or millions of years would require a different way to measure mutation rates.
Katzourakis turned to another technique. . .
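The saturation effect Ho describes is easy to reproduce in a toy simulation (my own sketch, not code from any of the studies above): mutate a sequence at a constant per-generation rate, then estimate the rate the way an observer would, by counting differences from the ancestor and dividing by elapsed time.

```python
import random

BASES = "ACGT"

def apparent_rate(generations, true_rate=0.02, length=2000, seed=7):
    """Mutate each site with probability true_rate per generation to a
    random different base, then compare the result with the ancestor.
    Returns observed differences per site per generation. Repeated hits
    at the same site overwrite earlier changes (and can even revert
    them), so the apparent rate falls below the true rate over time."""
    rng = random.Random(seed)
    ancestor = [rng.choice(BASES) for _ in range(length)]
    seq = ancestor[:]
    for _ in range(generations):
        for i in range(length):
            if rng.random() < true_rate:
                seq[i] = rng.choice([b for b in BASES if b != seq[i]])
    diffs = sum(a != b for a, b in zip(ancestor, seq))
    return diffs / (length * generations)
```

Short comparisons recover nearly the true rate; long ones, in which substitutions pile on top of earlier substitutions, yield a far lower apparent rate. The evolutionary clock looks like it slows down, exactly the pattern Ho saw.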
It’s interesting to find that entropy, which we generally think of as maximizing disorder, actually can organize things in the right circumstances, including the circumstances under which life seems to have arisen. More and more the sequence from big bang to life seems inevitable, given the right conditions and enough time.
In Quanta Natalie Wolchover interviews Sharon Glotzer:
Sharon Glotzer has made a number of career-shifting discoveries, each one the kind “that completely changes the way you look at the world,” she said, “and causes you to say, ‘Wow, I need to follow this.’”
A theoretical soft condensed matter physicist by training who now heads a thriving 33-person research group spanning three departments at the University of Michigan in Ann Arbor, Glotzer uses computer simulations to study emergence — the phenomenon whereby simple objects give rise to surprising collective behaviors. “When flocks of starlings make these incredible patterns in the sky that look like they’re not even real, the way they’re changing constantly — people have been seeing those patterns since people were on the planet,” she said. “But only recently have scientists started to ask the question, how do they do that? How are the birds communicating so that it seems like they’re all following a blueprint?”
Glotzer is searching for the fundamental principles that govern how macroscopic properties emerge from microscopic interactions and arrangements. One big breakthrough came in the late 1990s, when she was a young researcher at the National Institute of Standards and Technology in Gaithersburg, Maryland. She and her team developed some of the earliest and best computer simulations of liquids approaching the transition into glass, a common yet mysterious phase of matter in which atoms are stuck in place, but not crystallized. The simulations revealed strings of fast-moving atoms that glide through the otherwise frustrated material like a conga line. Similar flow patterns were later also observed in granular systems, crowds and traffic jams. The findings demonstrated the ability of simulations to illuminate emergent phenomena.
A more recent “wow” moment occurred in 2009, when Glotzer and her group at Michigan discovered that entropy, a concept commonly conflated with disorder, can actually organize things. Their simulations showed that entropy drives simple pyramidal shapes called tetrahedra to spontaneously assemble into a quasicrystal — a spatial pattern so complex that it never exactly repeats. The discovery was the first indication of the powerful, paradoxical role that entropy plays in the emergence of complexity and order.
Lately, Glotzer and company have been engaged in what she calls “digital alchemy.” Let’s say a materials scientist wants to create a specific structure or material. Glotzer’s team can reverse-engineer the shape of the microscopic building blocks that will assemble themselves into the desired form. It’s like whipping up gold from scratch — only in modern times, the coveted substance might be a colloidal crystal or macromolecular assembly.
Glotzer ultimately seeks the rules that govern emergence in general: a single framework for describing self-assembling quasicrystals, crystallizing proteins, or living cells that spontaneously arise from simple precursors. She discussed her eureka-studded path with Quanta Magazine in February; a condensed and edited version of the interview follows.
QUANTA MAGAZINE: Tell me about your famous 2009 Nature paper that linked self-assembly with entropy.
SHARON GLOTZER: Imagine if you had baseballs in a pool of water, and imagine that they had exactly the same density as the pool, so they didn’t sink, they didn’t float, they were just suspended, jostling about. Then you try to confine them all together. Self-assembly is what happens when the baseballs spontaneously organize themselves into a recognizable pattern. And if the particles are perfectly hard and have no other interactions, they will organize themselves to have the highest entropy possible.
So we were studying these tetrahedra, and it’s the simplest Platonic solid — the simplest three-dimensional shape, right? These Dungeons & Dragons dice. I had an inkling that it would be interesting to look at how they like to arrange with one another based solely on entropy, meaning they had no direct interactions between them — they didn’t want to stick together; there’s no charges; there’s no nothing; there’s just entropy. But I had no idea how interesting. I had no inkling that they would form the kind of structures that they did.
You showed that tetrahedra organize into a quasicrystal — this really complex, ordered structure. People normally understand the law of increasing entropy as the tendency of things to get messier, but you’re saying entropy leads to order. Why is that not a paradox?
You’re absolutely right that it’s completely counterintuitive. We typically think entropy means disorder, and so a disordered structure would have more entropy than an ordered structure. That can be true under certain circumstances, but it’s not always true, and in these cases, it’s not. I prefer to think of entropy as related to options: The more options a system of particles has to arrange itself, the higher the entropy. In certain circumstances, it’s possible for a system to have more options — more possible arrangements — of its building blocks if the system is ordered.
What happens is the particles try to maximize the amount of space that they have to wiggle around in. If you can wiggle, you can rearrange your position and orientation. The more positions, the more options, and thus the more entropy. So you imagine these baseballs in water. They are moving around — translating, rotating. They’re jiggling, because of the thermal motion of the water molecules. And what these systems want to do is space out the particles enough so that it maximizes the amount of wiggle room available to all the particles. Depending on the particle shape, that can lead to extremely complicated arrangements.
So particles like tetrahedra and baseballs evolve to states that allow them to wiggle in more ways and therefore have higher entropy. Did people know before that you could get order from entropy?
It’s been known that entropy alone can cause platelets and rodlike particles and spherical particles to align, but those ordered phases were pretty simple. It wasn’t really thought of as being such an important driving force for organization. When we did this tetrahedra computer experiment and got out what is still today the most complicated entropically stabilized structure that anyone has ever seen, that really changed the way people looked at this.
So then my group started studying every shape under the sun. We just started throwing all kinds of convex shapes onto the computer, and we just kept getting one crystal structure after another after another, some of them very complicated. In 2012 we published a paper in Science where we studied 145 different shapes and showed that 101 of them self-assembled into some kind of complicated crystal. Since then, my group has done tens of thousands of different shapes. We published one paper with 50,000 shapes in it.
What are some of the things you’re figuring out?
The kinds of questions I’m after now are: There’s this whole database of all the crystal structures that are known. And all these “space groups,” meaning structures that can obey all these different symmetry operations [rotations and translations that leave the structures unchanged]. There’s a couple hundred of those. Can I get every one of them just with entropy? With colloidal particles [like what you find in gels], even without interactions we’ve already been able to get as many as 50 of the known space groups. Are there any that aren’t possible just with entropy? And if so, why? We’ve also started looking at mixtures of shapes. We haven’t even talked about complicated crazy shapes, and concave shapes. So how far can you go with just entropy? And what does it mean that I can form the same structure in a whole bunch of different ways? There’s something much more fundamental to understand about the organization of matter, and by focusing on shape and entropy, we’re getting to the core of that.
One of the things we’ve noticed is that there are some design rules. For example, when your polyhedra have big, flat facets, they want to align so that their facets are facing each other — because this gives more wiggle room, more ways of arranging the particles. But if you have lots of facets that are all differently sized, then it’s harder to predict. You might end up with a glassy system or a jammed system instead of an ordered structure.
In the past couple of years, you’ve started working backward.
We’re basically doing alchemy in the computer. The ancient alchemists wanted to transmute the elements and turn lead into gold. But imagine that you had a particular structure and wanted to know what shape is the best shape to get the structure. That’s what many materials scientists are doing now — trying to turn the problem on its head. This “inverse design” approach is different from the way you might screen for compounds, for example, or find protein crystals. In that case you do simulation after simulation after simulation, where you’re just running tons of different molecules and saying: Which one gives me what I want?
Inverse design is more strategic. We start with a target structure, and use statistical thermodynamics to find the particle that solves the design problem. What we did is, we extended the way that these kinds of simulations are typically done to include shape as a variable. We can now do a single simulation where we let the shape of the building blocks change on the fly in the simulation and let the system tell us what the best one is. So instead of running thousands of simulations, I can run one and have the system tell me: What’s the best building block for the desired structure? So I call it digital alchemy.
You’ve also thought about how entropy might have played a role in the origin of life.
Most scientists think that to have order you need chemical bonds — you need interactions. And we’ve shown that you don’t. You can just have objects that, if you just confine them enough, can self-organize. So if you go to the question of: What was the first self-organizing of stuff, and how did it happen? You could imagine that you had these tiny microscopic crevices in rocks with water, and there were molecules in there, that they could self-organize just due to entropy for exactly the reasons that I was just describing. So it’s a completely different way to think about life and increasing complexity. They’re compatible with each other, but this is just saying: I know because I’ve done this, that I can take a bunch of objects and put them in a little droplet and shrink the droplet a little, and these objects will spontaneously organize. So maybe that phenomenon is important in the origin of life, and I don’t think that’s been considered.
When did you first become fascinated with emergence? . . .
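The kind of simulation Glotzer describes can be sketched in miniature. Her group uses sophisticated hard-polyhedra codes; what follows is only a minimal hard-disk analogue of the same idea, Metropolis Monte Carlo in which the only “interaction” is the refusal to overlap, so every accepted configuration is equally likely and the sampling is driven by entropy alone.

```python
import math
import random

def overlaps(p, q, diameter, box):
    """Minimum-image distance check for two disks in a periodic box."""
    dx = (p[0] - q[0] + box / 2) % box - box / 2
    dy = (p[1] - q[1] + box / 2) % box - box / 2
    return dx * dx + dy * dy < diameter * diameter

def hard_disk_mc(n=36, density=0.5, sweeps=200, step=0.1, seed=3):
    """Metropolis Monte Carlo for hard disks: propose a small random
    displacement and accept it only if no overlap results. Returns the
    final positions and the move acceptance rate."""
    diameter = 1.0
    box = math.sqrt(n * math.pi * (diameter / 2) ** 2 / density)
    rng = random.Random(seed)
    # start on a square lattice so the initial state has no overlaps
    side = math.ceil(math.sqrt(n))
    pts = [((i % side + 0.5) * box / side, (i // side + 0.5) * box / side)
           for i in range(n)]
    accepted = 0
    for _ in range(sweeps * n):
        k = rng.randrange(n)
        x, y = pts[k]
        trial = ((x + rng.uniform(-step, step)) % box,
                 (y + rng.uniform(-step, step)) % box)
        if all(not overlaps(trial, pts[j], diameter, box)
               for j in range(n) if j != k):
            pts[k] = trial
            accepted += 1
    return pts, accepted / (sweeps * n)
```

At high enough density a simulation like this orders spontaneously, for exactly the reason Glotzer gives: ordered arrangements leave each disk more wiggle room. The parameters here are arbitrary, chosen only to keep the run fast.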
Video at the link. Emergence is fascinating because it’s in a sense unexpected. Harold Morowitz wrote an intriguing book, The Emergence of Everything: How the World Became Complex. Inexpensive secondhand copies at the link.
Interesting how a particular development can be compelling enough to drive evolution: once it arises by chance, the advantages are so great that it defines an evolutionary direction. In that sense, the move onto land was inevitable, driven by the logic of the evolutionary engine. So perhaps also the creation of the meme: being able to imitate another may have arisen naturally in group living (it’s one of the things you can do in a group that you can’t do by yourself), and imitation allows learning and teaching, and that starts meme evolution. Certainly learning and teaching are a survival advantage for a social animal, since they in effect pool intelligence, allowing the group to learn more, and more quickly, than any individual could alone.
So memes emerge as a useful capability, with their own evolutionary process moving very quickly.
I would imagine more things are determined by inexorable physical law than we realize. We know that matter is common in the universe, and there seems to be evidence that lifeforms inevitably arise (in time, perhaps in a long time) when conditions are favorable. Now it seems those lifeforms will move to land because of the difference in efficiency of light transmission, which is physical law. It’s perhaps a lot more deterministic than we realize, and thus so are our own processes: a cascade of falling dominoes can present a great display with no free will. Maybe we are the same thing, only fancier and with the illusion of free will to boot (which also inevitably arises, I would imagine).
Jennifer Ouellette reports in Quanta:
Life on Earth began in the water. So when the first animals moved onto land, they had to trade their fins for limbs, and their gills for lungs, the better to adapt to their new terrestrial environment.
A new study, out today, suggests that the shift to lungs and limbs doesn’t tell the full story of these creatures’ transformation. As they emerged from the sea, they gained something perhaps more precious than oxygenated air: information. In air, eyes can see much farther than they can under water. The increased visual range provided an “informational zip line” that alerted the ancient animals to bountiful food sources near the shore, according to Malcolm MacIver, a neuroscientist and engineer at Northwestern University.
This zip line, MacIver maintains, drove the selection of rudimentary limbs, which allowed animals to make their first brief forays onto land. Furthermore, it may have had significant implications for the emergence of more advanced cognition and complex planning. “It’s hard to look past limbs and think that maybe information, which doesn’t fossilize well, is really what brought us onto land,” MacIver said.
MacIver and Lars Schmitz, a paleontologist at the Claremont Colleges, have created mathematical models that explore how the increase in information available to air-dwelling creatures would have manifested itself, over the eons, in an increase in eye size. They describe the experimental evidence they have amassed to support what they call the “buena vista” hypothesis in the Proceedings of the National Academy of Sciences.
MacIver’s work is already earning praise from experts in the field for its innovative and thorough approach. While paleontologists have long speculated about eye size in fossils and what that can tell us about an animal’s vision, “this takes it a step further,” said John Hutchinson of the Royal Veterinary College in the U.K. “It isn’t just telling stories based on qualitative observations; it’s testing assumptions and tracking big changes quantitatively over macro-evolutionary time.”
MacIver first came up with his hypothesis in 2007 while studying the black ghost knifefish of South America — an electric fish that hunts at night by generating electrical currents in the water to sense its environment. MacIver compares the effect to a kind of radar system. Being something of a polymath, with interests and experience in robotics and mathematics in addition to biology, neuroscience and paleontology, MacIver built a robotic version of the knifefish, complete with an electrosensory system, to study its exotic sensing abilities and its unusually agile movement.
When MacIver compared the volume of space in which the knifefish can potentially detect water fleas, one of its favorite prey, with that of a fish that relies on vision to hunt the same prey, he found they were roughly the same. This was surprising. Because the knifefish must generate electricity to perceive the world — something that requires a lot of energy — he expected it would have a smaller sensory volume for prey compared to that of a vision-centric fish. At first he thought he had made a simple calculation error. But he soon discovered that the critical factor accounting for the unexpectedly small visual sensory space was the amount that water absorbs and scatters light. In fresh shallow water, for example, the “attenuation length” that light can travel before it is scattered or absorbed ranges from 10 centimeters to two meters. In air, light can travel between 25 to 100 kilometers, depending on how much moisture is in the air.
Because of this, aquatic creatures rarely gain much evolutionary benefit from an increase in eye size, and they have much to lose. Eyes are costly in evolutionary terms because they require so much energy to maintain; photoreceptor cells and neurons in the visual areas of the brain need a lot of oxygen to function. Therefore, any increase in eye size had better yield significant benefits to justify that extra energy. MacIver likens increasing eye size in the water to switching on high beams in the fog in an attempt to see farther ahead.
But once you take eyes out of the water and into air, a larger eye size leads to a proportionate increase in how far you can see.
MacIver concluded that eye size would have increased significantly during the water-to-land transition. When he mentioned his insight to the evolutionary biologist Neil Shubin — a member of the team that discovered Tiktaalik roseae, an important transitional fossil from 375 million years ago that had lungs and gills — MacIver was encouraged to learn that paleontologists had noticed an increase in eye size in the fossil record. They just hadn’t ascribed much significance to the change. MacIver decided to investigate for himself.
MacIver had an intriguing hypothesis, but he needed evidence. He teamed up with Schmitz, who had expertise in interpreting the eye sockets of four-legged “tetrapod” fossils (of which Tiktaalik was one), and the two scientists pondered how best to test MacIver’s idea.
MacIver and Schmitz first made a careful review of the fossil record to track changes in the size of eye sockets, which would indicate corresponding changes in eyes, since they are proportional to socket size. The pair collected 59 early tetrapod skulls spanning the water-to-land transition period that were sufficiently intact to allow them to measure both the eye orbit and the length of the skull. Then they fed those data into a computer model to simulate how eye socket size changed over many generations, so as to gain a sense of the evolutionary genetic drift of that trait.
They found that . . .
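MacIver and Schmitz’s argument can be caricatured in a few lines. This is a toy model I constructed for illustration, not their published computation: treat the light an eye gathers as proportional to pupil area, let the signal decay exponentially with the medium’s attenuation length, and define the detection range as the distance at which the signal falls to a fixed threshold.

```python
import math

def detection_range(eye_diameter, attenuation_length, threshold=0.01):
    """Toy visibility model (units are arbitrary). Signal gathered
    scales with pupil area and falls off as exp(-r / attenuation_length);
    the detection range is where the signal reaches the threshold."""
    signal0 = eye_diameter ** 2  # light gathered is proportional to area
    if signal0 <= threshold:
        return 0.0
    return attenuation_length * math.log(signal0 / threshold)
```

With an attenuation length of 2 meters (murky water), doubling the eye diameter buys less than 3 meters of extra range; with 50 kilometers (clear air), the same doubling buys tens of kilometers. The logarithm is identical in both cases; the prefactor, set by the physics of the medium, is what changed at the water-to-land transition, which is why bigger eyes only started paying for themselves in air.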
Elizabeth Kolbert has an intriguing article in the New Yorker:
In 1975, researchers at Stanford invited a group of undergraduates to take part in a study about suicide. They were presented with pairs of suicide notes. In each pair, one note had been composed by a random individual, the other by a person who had subsequently taken his own life. The students were then asked to distinguish between the genuine notes and the fake ones.
Some students discovered that they had a genius for the task. Out of twenty-five pairs of notes, they correctly identified the real one twenty-four times. Others discovered that they were hopeless. They identified the real note in only ten instances.
As is often the case with psychological studies, the whole setup was a put-on. Though half the notes were indeed genuine—they’d been obtained from the Los Angeles County coroner’s office—the scores were fictitious. The students who’d been told they were almost always right were, on average, no more discerning than those who had been told they were mostly wrong.
In the second phase of the study, the deception was revealed. The students were told that the real point of the experiment was to gauge their responses to thinking they were right or wrong. (This, it turned out, was also a deception.) Finally, the students were asked to estimate how many suicide notes they had actually categorized correctly, and how many they thought an average student would get right. At this point, something curious happened. The students in the high-score group said that they thought they had, in fact, done quite well—significantly better than the average student—even though, as they’d just been told, they had zero grounds for believing this. Conversely, those who’d been assigned to the low-score group said that they thought they had done significantly worse than the average student—a conclusion that was equally unfounded.
“Once formed,” the researchers observed dryly, “impressions are remarkably perseverant.”
A few years later, a new set of Stanford students was recruited for a related study. The students were handed packets of information about a pair of firefighters, Frank K. and George H. Frank’s bio noted that, among other things, he had a baby daughter and he liked to scuba dive. George had a small son and played golf. The packets also included the men’s responses on what the researchers called the Risky-Conservative Choice Test. According to one version of the packet, Frank was a successful firefighter who, on the test, almost always went with the safest option. In the other version, Frank also chose the safest option, but he was a lousy firefighter who’d been put “on report” by his supervisors several times. Once again, midway through the study, the students were informed that they’d been misled, and that the information they’d received was entirely fictitious. The students were then asked to describe their own beliefs. What sort of attitude toward risk did they think a successful firefighter would have? The students who’d received the first packet thought that he would avoid it. The students in the second group thought he’d embrace it.
Even after the evidence “for their beliefs has been totally refuted, people fail to make appropriate revisions in those beliefs,” the researchers noted. In this case, the failure was “particularly impressive,” since two data points would never have been enough information to generalize from.
The Stanford studies became famous. Coming from a group of academics in the nineteen-seventies, the contention that people can’t think straight was shocking. It isn’t any longer. Thousands of subsequent experiments have confirmed (and elaborated on) this finding. As everyone who’s followed the research—or even occasionally picked up a copy of Psychology Today—knows, any graduate student with a clipboard can demonstrate that reasonable-seeming people are often totally irrational. Rarely has this insight seemed more relevant than it does right now. Still, an essential puzzle remains: How did we come to be this way?
In a new book, “The Enigma of Reason” (Harvard), the cognitive scientists Hugo Mercier and Dan Sperber take a stab at answering this question. Mercier, who works at a French research institute in Lyon, and Sperber, now based at the Central European University, in Budapest, point out that reason is an evolved trait, like bipedalism or three-color vision. It emerged on the savannas of Africa, and has to be understood in that context.
Stripped of a lot of what might be called cognitive-science-ese, Mercier and Sperber’s argument runs, more or less, as follows: Humans’ biggest advantage over other species is our ability to coöperate. Coöperation is difficult to establish and almost as difficult to sustain. For any individual, freeloading is always the best course of action. Reason developed not to enable us to solve abstract, logical problems or even to help us draw conclusions from unfamiliar data; rather, it developed to resolve the problems posed by living in collaborative groups.
“Reason is an adaptation to the hypersocial niche humans have evolved for themselves,” Mercier and Sperber write. Habits of mind that seem weird or goofy or just plain dumb from an “intellectualist” point of view prove shrewd when seen from a social “interactionist” perspective.
Consider what’s become known as “confirmation bias,” the tendency people have to embrace information that supports their beliefs and reject information that contradicts them. Of the many forms of faulty thinking that have been identified, confirmation bias is among the best catalogued; it’s the subject of entire textbooks’ worth of experiments. One of the most famous of these was conducted, again, at Stanford. For this experiment, researchers rounded up a group of students who had opposing opinions about capital punishment. Half the students were in favor of it and thought that it deterred crime; the other half were against it and thought that it had no effect on crime.
The students were asked to respond to two studies. One provided data in support of the deterrence argument, and the other provided data that called it into question. Both studies—you guessed it—were made up, and had been designed to present what were, objectively speaking, equally compelling statistics. The students who had originally supported capital punishment rated the pro-deterrence data highly credible and the anti-deterrence data unconvincing; the students who’d originally opposed capital punishment did the reverse. At the end of the experiment, the students were asked once again about their views. Those who’d started out pro-capital punishment were now even more in favor of it; those who’d opposed it were even more hostile.
If reason is designed to generate sound judgments, then it’s hard to conceive of a more serious design flaw than confirmation bias. Imagine, Mercier and Sperber suggest, a mouse that thinks the way we do. Such a mouse, “bent on confirming its belief that there are no cats around,” would soon be dinner. To the extent that confirmation bias leads people to dismiss evidence of new or underappreciated threats—the human equivalent of the cat around the corner—it’s a trait that should have been selected against. The fact that both we and it survive, Mercier and Sperber argue, proves that it must have some adaptive function, and that function, they maintain, is related to our “hypersociability.”
Mercier and Sperber prefer the term “myside bias.” Humans, they point out, aren’t randomly credulous. . .
Do keep reading. Lots more. Later in the article:
In a study conducted in 2012, they asked people for their stance on questions like: Should there be a single-payer health-care system? Or merit-based pay for teachers? Participants were asked to rate their positions depending on how strongly they agreed or disagreed with the proposals. Next, they were instructed to explain, in as much detail as they could, the impacts of implementing each one. Most people at this point ran into trouble. Asked once again to rate their views, they ratcheted down the intensity, so that they either agreed or disagreed less vehemently.
Sloman and Fernbach see in this result a little candle for a dark world. If we—or our friends or the pundits on CNN—spent less time pontificating and more trying to work through the implications of policy proposals, we’d realize how clueless we are and moderate our views. This, they write, “may be the only form of thinking that will shatter the illusion of explanatory depth and change people’s attitudes.”
It seems clear that consciousness is an emergent phenomenon arising from chemical, biological, and electrical processes in the brain, much as language (for example) is an emergent phenomenon arising from the activities of certain lifeforms. My own view is that consciousness is a construction of, and interaction among, memes, but let's see what Daniel Dennett thinks, as discussed in Thomas Nagel's review of his recent book in the NY Review of Books:
From Bacteria to Bach and Back: The Evolution of Minds
by Daniel C. Dennett
Norton, 476 pp., $28.95
For fifty years the philosopher Daniel Dennett has been engaged in a grand project of disenchantment of the human world, using science to free us from what he deems illusions—illusions that are difficult to dislodge because they are so natural. In From Bacteria to Bach and Back, his eighteenth book (thirteenth as sole author), Dennett presents a valuable and typically lucid synthesis of his worldview. Though it is supported by reams of scientific data, he acknowledges that much of what he says is conjectural rather than proven, either empirically or philosophically.
Dennett is always good company. He has a gargantuan appetite for scientific knowledge, and is one of the best people I know at transmitting it and explaining its significance, clearly and without superficiality. He writes with wit and elegance; and in this book especially, though it is frankly partisan, he tries hard to grasp and defuse the sources of resistance to his point of view. He recognizes that some of what he asks us to believe is strongly counterintuitive. I shall explain eventually why I think the overall project cannot succeed, but first let me set out the argument, which contains much that is true and insightful.
The book has a historical structure, taking us from the prebiotic world to human minds and human civilization. It relies on different forms of evolution by natural selection, both biological and cultural, as its most important method of explanation. Dennett holds fast to the assumption that we are just physical objects and that any appearance to the contrary must be accounted for in a way that is consistent with this truth. Bach’s or Picasso’s creative genius, and our conscious experience of hearing Bach’s Fourth Brandenburg Concerto or seeing Picasso’s Girl Before a Mirror, all arose by a sequence of physical events beginning with the chemical composition of the earth’s surface before the appearance of unicellular organisms. Dennett identifies two unsolved problems along this path: the origin of life at its beginning and the origin of human culture much more recently. But that is no reason not to speculate.
The task Dennett sets himself is framed by a famous distinction drawn by the philosopher Wilfrid Sellars between the “manifest image” and the “scientific image”—two ways of seeing the world we live in. According to the manifest image, Dennett writes, the world is
full of other people, plants, and animals, furniture and houses and cars…and colors and rainbows and sunsets, and voices and haircuts, and home runs and dollars, and problems and opportunities and mistakes, among many other such things. These are the myriad “things” that are easy for us to recognize, point to, love or hate, and, in many cases, manipulate or even create…. It’s the world according to us.
According to the scientific image, on the other hand, the world
is populated with molecules, atoms, electrons, gravity, quarks, and who knows what else (dark energy, strings? branes?).
This, according to Dennett, is the world as it is in itself, not just for us, and the task is to explain scientifically how the world of molecules has come to include creatures like us, complex physical objects to whom everything, including they themselves, appears so different.
He greatly extends Sellars’s point by observing that the concept of the manifest image can be generalized to apply not only to humans but to all other living beings, all the way down to bacteria. All organisms have biological sensors and physical reactions that allow them to detect and respond appropriately only to certain features of their environment—“affordances,” Dennett calls them—that are nourishing, noxious, safe, dangerous, sources of energy or reproductive possibility, potential predators or prey.
For each type of organism, whether plant or animal, these are the things that define their world, that are salient and important for them; they can ignore the rest. Whatever the underlying physiological mechanisms, the content of the manifest image reveals itself in what the organisms do and how they react to their environment; it need not imply that the organisms are consciously aware of their surroundings. But in its earliest forms, it is the first step on the route to awareness.
The lengthy process of evolution that generates these results is first biological and then, in our case, cultural [i.e., meme evolution – LG], and only at the very end is it guided partly by intelligent design, made possible by the unique capacities of the human mind and human civilization. But as Dennett says, the biosphere is saturated with design from the beginning—everything from the genetic code embodied in DNA to the metabolism of unicellular organisms to the operation of the human visual system—design that is not the product of intention and that does not depend on understanding.
One of Dennett’s most important claims is that most of what we and our fellow organisms do to stay alive, cope with the world and one another, and reproduce is not understood by us or them. It is competence without comprehension. This is obviously true of organisms like bacteria and trees that have no comprehension at all, but it is equally true of creatures like us who comprehend a good deal. Most of what we do, and what our bodies do—digest a meal, move certain muscles to grasp a doorknob, or convert the impact of sound waves on our eardrums into meaningful sentences—is done for reasons that are not our reasons. Rather, they are what Dennett calls free-floating reasons, grounded in the pressures of natural selection that caused these behaviors and processes to become part of our repertoire. There are reasons why these patterns have emerged and survived, but we don’t know those reasons, and we don’t have to know them to display the competencies that allow us to function.
Nor do we have to understand the mechanisms that underlie those competencies. In an illuminating metaphor, Dennett asserts that the manifest image that depicts the world in which we live our everyday lives is composed of a set of user-illusions,
like the ingenious user-illusion of click-and-drag icons, little tan folders into which files may be dropped, and the rest of the ever more familiar items on your computer’s desktop. What is actually going on behind the desktop is mind-numbingly complicated, but users don’t need to know about it, so intelligent interface designers have simplified the affordances, making them particularly salient for human eyes, and adding sound effects to help direct attention. Nothing compact and salient inside the computer corresponds to that little tan file-folder on the desktop screen.
He says that the manifest image of each species is “a user-illusion brilliantly designed by evolution to fit the needs of its users.” In spite of the word “illusion” he doesn’t wish simply to deny the reality of the things that compose the manifest image; the things we see and hear and interact with are “not mere fictions but different versions of what actually exists: real patterns.” The underlying reality, however, what exists in itself and not just for us or for other creatures, is accurately represented only by the scientific image—ultimately in the language of physics, chemistry, molecular biology, and neurophysiology.
Our user-illusions were not, like the little icons on the desktop screen, created by an intelligent interface designer. Nearly all of them—such as our images of people, their faces, voices, and actions, the perception of some things as delicious or comfortable and others as disgusting or dangerous—are the products of “bottom-up” design, understandable through the theory of evolution by natural selection, rather than “top-down” design by an intelligent being. Darwin, in what Dennett calls a “strange inversion of reasoning,” showed us how to resist the intuitive tendency always to explain competence and design by intelligence, and how to replace it with explanation by natural selection, a mindless process of accidental variation, replication, and differential survival.
As for the underlying mechanisms, . . .
Melissa Healy has an interesting story in the LA Times:
In an electoral season that has blurred the line between fact and fantasy, a team of UCLA researchers is offering new evidence to support a controversial proposition: that when it comes to telling the difference between truth and fiction, not all potential voters are created equal.
When “alternative facts” allege some kind of danger, people whose political beliefs are more conservative are more likely than those who lean liberal to embrace them, says the team’s soon-to-be-published study.
Conservatives’ vulnerability to accepting untruths didn’t apply equally to all false claims: When lies suggested dangerous or apocalyptic outcomes, more conservative participants were more likely to believe them than when the lie suggested a possible benefit.
Participants whose views fell further left could be plenty credulous. But they were no more likely to buy a scary falsehood than they were to buy one with a positive outcome.
In short, conservatives are more likely to drop their guard against lies when they perceive the possible consequences as being dark. Liberals, less so.
Slated for publication in the journal Psychological Science, the new study offers insight into why many Americans embraced fabricated stories about Clinton that often made outlandish allegations of criminal behavior. And it may shed light on why so many believed a candidate’s assertions that were both grim and demonstrably false.
Finally, the results offer an explanation for why these false claims were more readily embraced by people who endorse conservative political causes than by those whose views are traditionally liberal.
“There are a lot of citizens who are especially vigilant about potential threats but not especially motivated or prepared to process information in a critical, systematic manner,” said John Jost, co-director of New York University’s Center for Social and Political Behavior. For years, Jost said, those Americans “have been presented with terrifying messages that are short on reason and openly contemptuous of scholarly and scientific standards of evidence.”
Jost, who was not involved with the latest research, said the new findings suggest that when dark claims and apocalyptic visions swirl, many of these anxious voters will cast skepticism aside and selectively embrace fearful claims, regardless of whether they’re true. The result may tilt elections toward politicians who stoke those fears.
“We may be witnessing a perfect storm,” Jost said.
The preliminary study, led by UCLA anthropologist Daniel M.T. Fessler, is the first to explore credulity as a function of ideological belief. The pool of participants was not strictly representative of the U.S. electorate, and some of the findings were weakened when the researchers removed questions pertaining to terrorism.
Moreover, some argue that it is not ideological belief but feeling beaten that makes people more credulous. When parties are thrown out of power, or have been out of office for long periods, their adherents are naturally drawn to believe awful things of the other party, says Joseph Uscinski, a political scientist at the University of Miami.
Until the new findings have been replicated under the changed circumstances of a Republican victory, said Uscinski, they should be greeted with caution.
But the new results are in line with a picture of partisan differences emerging from an upstart corner of the social sciences. In a wide range of studies, anthropologists, social psychologists and political scientists have found that self-avowed liberals and people who call themselves conservatives simply think differently.
All people range across a spectrum of personality traits and thinking styles. But when compared to liberals, conservatives show a lower tolerance for risk and have a greater need for closure and certainty, on average.
Wired up to monitors that measure physiological changes, people who are more conservative respond to threatening stimuli with more pronounced changes than do their peers on the other end of the political spectrum: On average, their hearts race more, their breathing becomes more shallow and their palms get clammier. . .