Later On

A blog written for those whose interests more or less match mine.

Archive for the ‘Evolution’ Category

The radical aristocrat who put kindness on a scientific footing

Lydia Syson has an interesting article in Psyche, which begins:

Five years had passed since Czar Alexander II promised the emancipation of the serfs. Trusting in a map drawn on bark with the point of a knife by a Tungus hunter, three Russian scientists set out to explore an area of trackless mountain wilderness stretching across eastern Siberia. Their mission was to find a direct passage between the gold mines of the river Lena and Transbaikalia. Their discoveries would transform understanding of the geography of northern Asia, opening up the route eventually followed by the Trans-Manchurian Railway. For one explorer, now better known as an anarchist than a scientist, this expedition was also the start of a long journey towards a new articulation of evolution and the strongest possible argument for a social revolution.

Prince Peter Kropotkin, the aristocratic graduate of an elite Russian military academy, travelled in 1866 with his zoologist friend Ivan Poliakov and a topographer called Maskinski. Boat and horseback took them to the Tikono-Zadonsk gold mine. From there, they continued with 10 Cossacks, 50 horses carrying three months’ supply of food, and an old Yukaghir nomad guide who’d made the journey 20 years earlier.

Kropotkin and Poliakov – enthusiastic, curious and well-read young men in their 20s – were fired by the prospect of finding evidence of that defining factor of evolution set out by Charles Darwin in On the Origin of Species (1859): competition. They were disappointed. As Kropotkin later wrote:

We saw plenty of adaptations for struggling, very often in common, against the adverse circumstances of climate, or against various enemies, and Polyakoff wrote many a good page upon the mutual dependency of carnivores, ruminants, and rodents in their geographical distribution; we witnessed numbers of facts of mutual support … [but] facts of real competition and struggle between higher animals of the same species came very seldom under my notice, though I eagerly searched for them.

Kropotkin pursued this contradiction for decades. Observation and wide reading convinced him that what he’d seen in Siberia was no exception, but a rule. In the 1860s, he watched a vast exodus of fallow deer gather in their thousands to cross the river Amur at its narrowest point to escape an early snowfall. In 1882, he was fascinated by a crab stuck on its back in a tank in Brighton Aquarium; it was painstakingly rescued by a band of comrades. Kropotkin collected descriptions from all over the world of the sociable behaviours of ants, bees, termites, falcons, swallows, horned larks, migrating birds, gazelles, buffalo, colonies of beavers, squirrels, mice, flocks of seals, herds of wild horses, tribes of dogs, wolf packs, marmots, rats, chinchillas, as well as apes and monkeys. He wrote that:

[A]s we ascend the scale of evolution, we see association growing more and more conscious. It loses its purely physical character, it ceases to be simply instinctive, it becomes reasoned.

It proved impossible for Kropotkin, a man ‘amiable to the point of saintliness’ according to George Bernard Shaw, to dedicate himself entirely to the ‘highest joys’ of scientific discovery, when all around him he saw ‘nothing but misery and struggle for a mouldy bit of bread’, as he put it in his Memoirs of a Revolutionist (1899). In 1872, in Switzerland, he became an anarchist, impressed by the egalitarian fraternity he found among the watchmakers of Jura. Back in Russia, he joined the revolutionary Circle of Tchaikovsky, disseminating underground literature and lecturing to the workers of St Petersburg disguised as Borodin the peasant agitator. His propaganda landed him in prison, but he escaped in 1876 with the help of comrades. By 1883, he was a political prisoner once again, this time in France. This second confinement gave him time to develop his arguments about evolution: he started to address systematically the conflicting interpretations of Darwin emerging in different parts of the world.

In England, the biologist, anthropologist and anatomist Thomas Huxley had quickly emerged as ‘Darwin’s bulldog’. Self-described as sharp of ‘claws and beak’, Huxley was prepared to ‘go to the Stake if requisite’ to defend evolutionary doctrine. His views on human nature and political economy were defined by Thomas Hobbes and Thomas Robert Malthus: life was an endless fight for scarce resources. The libertarian Herbert Spencer likewise applied natural selection to economics, using his infamous coinage the ‘survival of the fittest’ to justify laissez-faire capitalism. Popularly labelled ‘social Darwinism’, this view became gospel for Gilded Age industrialists such as John D Rockefeller. Although Huxley himself didn’t recommend the ‘survival of the fittest’ rule as a basis for morality – quite the reverse – he certainly believed that human beings were brutal and competitive, their sociability merely a recent veneer, rationalised by self-interest.

After Huxley published his pessimistic essay ‘The Struggle for Existence and Its Bearing Upon Man’ (1888) in The Nineteenth Century, an influential Victorian monthly review, Kropotkin was in a good position to launch an attack on Huxley’s idea of nature as a ‘gladiator’s show’. By this time, having been released from prison following an international outcry, Kropotkin was established in England, becoming quite a celebrity in the socialist and anarchist circles that blossomed through the mid-1880s. He promoted his political ideas in the international Left-wing press, and cofounded a London-based journal called Freedom, but made a living writing for scientific periodicals.

Between 1890 and 1915, in a series of interdisciplinary essays, Kropotkin drew on biology, sociology, history, (anti-racist) ethnology and anthropology to argue that species can organise and cooperate to overcome the natural environment and ensure their future survival. In 1902, the first eight essays were brought together in a book entitled Mutual Aid: A Factor of Evolution, an account of mutual support in action across the animal world (from microorganisms to mammals), ancient and modern ‘barbarian’ and ‘savage’ societies, medieval city-states and, finally, among modern humanity.

Kropotkin sought to recover an uncorrupted Darwin, whose metaphors should not be read too literally. But his call to understand compassion as ‘a powerful factor of further evolution’ cleared the way for a very particular political vision: human beings could overcome competitive struggle by voluntarily restructuring and decentralising society along principles of community and self-sufficiency.

Kropotkin became enamoured with mutual aid after reading an 1880 lecture on the subject by the celebrated zoologist Karl Kessler. Like other Russian naturalists at the time, Kessler didn’t deny the struggle for existence, but his own fieldwork in harsh and sparsely populated regions of the Russian empire strongly suggested that ‘the progressive development of the animal kingdom, and especially of mankind, is favoured much more by mutual support than by mutual struggle’. But, as Kropotkin mourned: ‘like so many good things published in the Russian tongue only, that remarkable address remains almost entirely unknown’.

Neither was Kropotkin alone politically. The historian of science Eric Johnson has recently demonstrated that . . .

Continue reading.

Written by LeisureGuy

3 April 2021 at 3:15 pm

The Awe Before There Are Words

MIT Press has an article excerpted from Shierry Weber Nicholsen’s book The Love of Nature and the End of the World, “a psychological exploration of how the love of nature can coexist in our psyches with apathy toward environmental destruction.” The article begins:

To move from speechlessness to speech requires a person — perhaps a wiser part of ourselves — who can hear and receive our experience. As we are heard, we become able to hear our experience ourselves. In the beginning, however, is speechlessness, unformed experience no doubt both beautiful and terrifying. Silence sometimes means that there are no words yet.

Awe touches us even more deeply than a felt love, yet it is deep in darkness. It is not simply unspoken; it is speechless. A friend tells me that she cannot describe her feeling for the natural world as love. It’s not love but awe, she says. She is simply struck speechless at the sight of a heron lifting its wing. Awe-struck, she is incapable of saying more.

In part, awe does not have words because it is utterly private, not “for show.” But it is more than private. It is an involuntary speechlessness. That we seldom find the sense of awe in our talk about the environment may be due in part to our diminished capacity for awe, but it is also due to the inherent speechlessness that awe brings us to. We cannot even put words to it ourselves. It is not surprising that we do not speak of it to others.

Awe is the sense of an encounter with some presence larger than ourselves, mysterious, frightening and wonderful, numinous, sacred. It is the sense of something that we are not capable of containing within our capacity for thought and speech. In awe, one’s self is felt only as something small and incapable, speechless, perhaps graced by the experience but unequal to it.

Awe makes us feel amazed, astounded, struck dumb. Joseph Campbell’s term aesthetic arrest, which denotes something similar, conveys this sense. We are stopped in our tracks. The words amazed and astounded both suggest a blow to one’s normal mental functioning, as when one is literally stunned or struck or loses one’s normal orientation (as in a maze). In his book “Dream Life,” Donald Meltzer, the influential psychoanalyst, tells the story of a little boy whose therapist, in a gesture out of the ordinary, wiped his face. The boy sat there “amazed.” How are we to understand this? Meltzer quotes from the Talmud, the Jewish book of law: “Stand close to the dying, because when the soul sees the abyss it is amazed.” For the soul of the one dying, death seems an “unbearably new” experience. When a particular emotion has never been felt before, it will not immediately yield its meaning, says Meltzer, and the psyche responds with amazement.

The notion of an experience that does not immediately yield its meaning is the key to the speechlessness of awe in the face of the natural world. While awe stops us in our tracks, this is not the end of our experiencing but rather a beginning. Somehow . . .

Continue reading.

Written by LeisureGuy

27 March 2021 at 4:29 pm

In the gut microbiome, at least, it’s nurture, not nature

Clea Simon writes in the Harvard Gazette:

We are what we eat, and so are our microbiomes. A new study shows that alterations in diet, along with other environmental factors, had a major impact on gut biomes over time as animals were domesticated. In a process that closely tracks changes in the human diet since industrialization, this shift had implications on the health of domesticated animals — and possibly on humans as well.

The question that challenged human evolutionary biologist Rachel Carmody was one of nature vs. nurture. Her study “Effects of domestication on the gut microbiota parallel those of human industrialization,” published today in eLife, answered it definitively.

“Evidence in humans and many animals to this point suggests that, surprisingly, genetics plays a small role compared to environmental influences,” said Carmody, an assistant professor in the Department of Human Evolutionary Biology and principal investigator of the department’s Nutritional and Microbial Ecology Lab.

Carmody and Aspen Reese, a junior fellow in her lab, looked at nine different pairs of wild animals and their domesticated descendants, such as wolves and dogs, wild boars and pigs, and wild European rabbits alongside the domestic variety. Though the pairs differed profoundly from one another, the tame counterparts have encountered many common environmental changes during domestication, including shifts in population density, physical activity, patterns of reproduction, medical interventions such as exposure to antibiotics, and human contact.

In addition, “We’ve changed their diets,” said Carmody. “For example, many domesticated animals are eating foods originally cultivated for human use, in processed forms that are relatively easily digestible, and that tend to be richer in fat.” While the microbiomes of wild and domesticated animal pairs resembled one another, “The process of domestication shifted the divergent microbiomes of these different species in a common direction. In other words, we were able to detect a global signature of domestication,” she said.

The fact that environment rather than genetics drove that shift became apparent as researchers switched a single environmental variable between wild and domesticated pairs — feeding wolves dog chow, for example, and raw meat to dogs. “We used diet as one example of an environmental factor that we know has changed with domestication and with industrialization in profound ways,” Carmody said.

The researchers then sampled and sequenced the microorganisms in the animals’ fecal matter. With just a short-term diet change, the wolves’ gut microbial community became dog-like, and the dogs’ wolfish. This discovery confirmed earlier work done in Carmody’s lab with mice and humans that revealed how diet not only changed the gut biome but did so relatively quickly. “Within 24 hours of seeing a new diet, the gut microbiome looks and behaves very different,” she said.

To bring the study closer to home, researchers also looked at the closest parallels in human evolution, comparing chimpanzees’ gut biomes with those of modern humans. While the evolutionary distance between chimps and humans is greater than that between, say, wolves and dogs, the same kinds of changes were seen. Notably, the shifts were clearest in humans living in industrialized societies, who have experienced the greatest changes in diet, population density, physical activity, antibiotic use, and other factors that were also involved in animal domestication.

The implications are considerable. “We know that the gut microbiome has really important effects on human health,” said Carmody. Indeed, this internal environment has been linked to “a range of human diseases,” she said, including metabolic diseases like atherosclerosis and Type 2 diabetes, cancer, neurodegenerative diseases, and autoimmune disorders.

“In some ways, it’s great news that the gut microbiome is so sensitive to environmental conditions, as this means we can manipulate it more easily to improve human health,” Carmody said. “But it’s a double-edged sword, as all the changes our recent lifestyles have had on the microbiome may create opportunities for mismatch with human biology, which changes on much slower timescales.”

This study also . . .

Continue reading.

Written by LeisureGuy

24 March 2021 at 12:10 pm

A Newfound Source of Cellular Order in the Chemistry of Life

Viviane Callier has an interesting article in Quanta, and the GIF at the link that illustrates the article is quite striking. She writes:

Imagine packing all the people in the world into the Great Salt Lake in Utah — all of us jammed shoulder to shoulder, yet also charging past one another at insanely high speeds. That gives you some idea of how densely crowded the 5 billion proteins in a typical cell are, said Anthony Hyman, a British cell biologist and a director of the Max Planck Institute of Molecular Cell Biology and Genetics in Dresden.

Somehow in that bustling cytoplasm, enzymes need to find their substrates, and signaling molecules need to find their receptors, so the cell can carry out the work of growing, dividing and surviving. If cells were sloshing bags of evenly mixed cytoplasm, that would be difficult to achieve. But they are not. Membrane-bounded organelles help to organize some of the contents, usefully compartmentalizing sets of materials and providing surfaces that enable important processes, such as the production of ATP, the biochemical fuel of cells. But, as scientists are still only beginning to appreciate, they are only one source of order.

Recent experiments reveal that some proteins spontaneously gather into transient assemblies called condensates, in response to molecular forces that precisely balance transitions between the formation and dissolution of droplets inside the cell. Condensates, sometimes referred to as membraneless organelles, can sequester specific proteins from the rest of the cytoplasm, preventing unwanted biochemical reactions and greatly increasing the efficiency of useful ones. These discoveries are changing our fundamental understanding of how cells work.

For instance, condensates may explain the speed of many cellular processes. “The key thing about a condensate — it’s not like a factory; it’s more like a flash mob. You turn on the radio, and everyone comes together, and then you turn it off and everyone disappears,” Hyman said.

As such, the mechanism is “exquisitely regulatable,” said Gary Karpen, a cell biologist at the University of California, Berkeley, and the Lawrence Berkeley National Laboratory. “You can form these things and dissolve them quite readily by just changing concentrations of molecules” or chemically modifying the proteins. This precision provides leverage for control over a host of other phenomena, including gene expression.

The first hint of this mechanism arrived in the summer of 2008, when Hyman and his then-postdoctoral fellow Cliff Brangwynne (now a Howard Hughes Medical Institute investigator at Princeton University) were teaching at the famed Marine Biological Laboratory physiology course and studying the embryonic development of C. elegans roundworms. When they and their students observed that aggregates of RNA in the fertilized worm egg formed droplets that could split away or fuse with each other, Hyman and Brangwynne hypothesized that these “P granules” formed through phase separation in the cytoplasm, just like oil droplets in a vinaigrette.

That proposal, published in 2009 in Science, didn’t get much attention at the time. But more papers on phase separation in cells trickled out around 2012, including a key experiment in Michael Rosen’s lab at the University of Texas Southwestern Medical Center in Dallas, which showed that cell signaling proteins can also exhibit this phase separation behavior. By 2015, the stream of papers had turned into a torrent, and since then there’s been a veritable flood of research on biomolecular condensates, these liquid-like cell compartments with both elastic and viscous properties. . .

Continue reading. There’s much more.

Written by LeisureGuy

18 March 2021 at 7:09 pm

Posted in Evolution, Science

Why Extraterrestrial Life May Not Seem Entirely Alien

Don Falk interviews Arik Kershenbaum in Quanta:

On the website for the department of zoology of the University of Cambridge, the page for Arik Kershenbaum lists his three main areas of research, one of which stands out from the others. Kershenbaum studies “Wolves & other canids,” “Dolphins & cetaceans” — and “Aliens.” Granted, science hasn’t yet found any aliens to study, but Kershenbaum says that there are certain things we can still say about them with reasonable certainty. Topping the list: They evolved.

“The bottom line — why animals do the things that they do, why they are the things that they are — is because of evolution,” said Kershenbaum, a lecturer and director of studies in the natural sciences at the university’s Girton College. He argues that evolution is a universal law of nature, like gravity — and that studies of plants and animals here can therefore tell us something useful about potential inhabitants of worlds far beyond Earth. He finds evidence for this in the process of evolutionary convergence, in which unrelated lineages of organisms evolve similar features as adaptations to similar environmental challenges. It’s an argument he presents in detail in his new book, The Zoologist’s Guide to the Galaxy: What Animals on Earth Reveal About Aliens — and Ourselves, which draws on comparisons of animals’ physical adaptations as well as his own research (and that of others) into animal communications.

Quanta recently spoke with Kershenbaum at his home in Cambridge via videoconference. The interview has been condensed and edited for clarity.

You’re a zoologist; you study life here on our own planet. What made you want to write a book about alien life?

When zoologists study life on Earth, we’re studying mechanisms. We’re studying how life became the way it is. And because evolution is the explanatory mechanism for life everywhere, then the principles that we uncover on Earth should be applicable in the rest of the universe. Thinking about how life on other planets evolves and behaves is just a natural extension of my work with animals on Earth. If we discovered a lost island on this planet, we’d be examining its animals from the perspective of what we know about the evolution of life in general. You can be sure that if we discovered alien life on another planet, we’d be using the same methods to ask why they look and behave the way they do, and how they evolved.

You argue that natural selection — the key mechanism behind evolution — is inevitable, and that it applies universally. What makes you so confident about that?

No planet will have a complex form of life that popped into existence all on its own. Whatever life is like on an alien planet, it must have begun simply. Now, it could be that it remained simple; that’s possible. Probable, even, on many planets. But if life is to achieve any kind of complexity, the only way that complexity can accumulate is if favorable changes and innovations are retained and unfavorable ones are lost — and that’s precisely evolution by natural selection.

One of the key ideas in your book is the notion of “convergent evolution.” What is that, and why is it important?

If you observe two animals with similar features — feathers, for instance — you might presume that they inherited them from a common ancestor: the feathered dinosaur that was the ancestor of all modern birds. That’s just regular evolution, where children have similarities because they inherit the characteristics of their parents.

But sometimes you see animals with traits that they couldn’t possibly have inherited from a common ancestor. For instance, the wings of birds work in pretty much the same way as the wings of bats. But the common ancestor of birds and bats was a small lizardlike creature that lived over 300 million years ago, long before even the dinosaurs. It certainly didn’t have wings, and the large majority of its descendants, including elephants and crocodiles, don’t have wings (thankfully). So those wings must have evolved separately in different lines of descendants.

Sometimes this “convergence” of traits is for something obviously useful, like wings. But sometimes convergence produces bizarrely similar creatures that share so many characteristics, it can be hard to believe they’re not closely related. The recently extinct thylacine [a large predatory marsupial native to Tasmania and mainland Australia], for example, could easily be mistaken for a peculiar breed of dog, but it’s much more closely related to a kangaroo! And yet living a life similar to that of modern coyotes or jackals meant that it evolved many similar characteristics convergently.

You’re arguing that wherever organisms confront similar environmental challenges, they may come up with similar adaptive solutions. And you expect to see this throughout the universe?

Consider flight, since that’s the most famous example of convergence. If you live on a planet with an atmosphere, or even with an ocean or some other fluid, if you want to get from one place to another through that fluid, there’s only a handful of ways to do it. You can . . .

Continue reading. There’s much more.

I have great difficulty understanding the reasoning of those who dispute the fact of evolution.

Written by LeisureGuy

18 March 2021 at 4:07 pm

Google Director Of Engineering: This is how fast the world will change in ten years

I’ve blogged several times about memes (cultural entities) and how they are subject to the same Darwinian conditions as lifeforms and therefore evolve, and how memetic evolution is millions of times faster than the evolution of lifeforms. In an interesting article, Michael Simmons points out that memetic evolution is accelerating as memes lead to the creation of tools that actively support memes and their propagation and thus their evolution — for example, the printing press, radio, television, the Internet. He writes in Medium:

“For the first time since his creation man will be faced with his real, his permanent problem — how to use his freedom from pressing economic cares, how to occupy the leisure, which science and compound interest will have won for him, to live wisely and agreeably and well.”

— John Maynard Keynes (1930)

“Rather than being bored to death, our actual challenge is to avoid anxiety attacks, psychotic breakdowns, heart attacks, and strokes resulting from being accelerated to death.”

— Geoffrey West

Time Is Accelerating Because Of The Red Queen Effect

If a competitor makes an improvement, you must make an equal or greater improvement just to stay neck-and-neck with them. Stay the same and you fall behind.

“Standing still is the fastest way of moving backwards in a rapidly changing world.”
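The Red Queen dynamic described above can be sketched in a few lines of code. This is a toy illustration of the general idea, not anything from Simmons’s article; the 5% improvement rate and the notion of a contested “share” are made-up numbers chosen purely for demonstration.

```python
# Toy sketch of the Red Queen effect: two competitors start equal,
# but only one keeps improving. The one standing still loses ground
# in *relative* terms even though its absolute capability is unchanged.

def relative_share(improver: float, stander: float) -> float:
    """Fraction of the contested resource held by the standing-still side."""
    return stander / (improver + stander)

improver, stander = 1.0, 1.0   # equal starting capabilities
for _ in range(10):
    improver *= 1.05           # only one side improves, 5% per round

print(f"After 10 rounds the static competitor holds "
      f"{relative_share(improver, stander):.0%} of the contested resource")
```

The static competitor never gets worse in absolute terms; it falls behind only because the frame of reference keeps moving — which is exactly the point of the Red Queen metaphor.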

Continue reading. One can observe some of the phenomena he describes.

Written by LeisureGuy

11 March 2021 at 12:51 pm

Posted in Daily life, Evolution, Memes

What Is Life? Its Vast Diversity Defies Easy Definition.

Carl Zimmer writes in Quanta:

People often feel that they can intuitively recognize whether something is alive, but nature is filled with entities that flout easy categorization as life or non-life — and the challenge may intensify as other planets and moons open up to exploration. In this excerpt from his new book, Life’s Edge: The Search for What It Means to Be Alive, published today, the science writer Carl Zimmer discusses scientists’ frustrated efforts to develop a universal definition of life.

“It is commonly said,” the scientists Frances Westall and André Brack wrote in 2018, “that there are as many definitions of life as there are people trying to define it.”

As an observer of science and of scientists, I find this behavior strange. It is as if astronomers kept coming up with new ways to define stars. I once asked Radu Popa, a microbiologist who started collecting definitions of life in the early 2000s, what he thought of this state of affairs.

“This is intolerable for any science,” he replied. “You can take a science in which there are two or three definitions for one thing. But a science in which the most important object has no definition? That’s absolutely unacceptable. How are we going to discuss it if you believe that the definition of life has something to do with DNA, and I think it has something to do with dynamic systems? We cannot make artificial life because we cannot agree on what life is. We cannot find life on Mars because we cannot agree what life represents.”

With scientists adrift in an ocean of definitions, philosophers rowed out to offer lifelines.

Some tried to soothe the debate, assuring the scientists they could learn to live with the abundance. We have no need to zero in on the One True Definition of Life, they argued, because working definitions are good enough. NASA can come up with whatever definition helps them build the best machine for searching for life on other planets and moons. Physicians can use a different one to map the blurry boundary that sets life apart from death. “Their value does not depend on consensus, but rather on their impact on research,” the philosophers Leonardo Bich and Sara Green argued.

Other philosophers found this way of thinking — known as operationalism — an intellectual cop‐out. Defining life was hard, yes, but that was no excuse not to try. “Operationalism may sometimes be unavoidable in practice,” the philosopher Kelly Smith countered, “but it simply cannot substitute for a proper definition of life.”

Smith and other foes of operationalism complain that such definitions rely on what a group of people generally agree on. But the most important research on life is at its frontier, where it will be hardest to come to an easy agreement. “Any experiment conducted without a clear idea of what it is looking for ultimately settles nothing,” Smith declared.

Smith argued that the best thing to do is to keep searching for a definition of life that everyone can get behind, one that succeeds where others have failed. But Edward Trifonov, a Russian‐born geneticist, wondered if a successful definition already exists but is lying hidden amidst all the past attempts.

In 2011, Trifonov reviewed 123 definitions of life. Each was different, but the same words showed up again and again in many of them. Trifonov analyzed the linguistic structure of the definitions and sorted them into categories. Beneath their variations, Trifonov found an underlying core. He concluded that all the definitions agreed on one thing: life is self‐reproduction with variations. What NASA’s scientists had done in eleven words (“Life is a self‐sustained chemical system capable of undergoing Darwinian evolution”), Trifonov now did with three.
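The spirit of Trifonov’s approach — looking for the words that recur across many independent definitions — can be sketched in a few lines. The three definitions below are illustrative paraphrases, not his actual corpus of 123, and the stopword list is an arbitrary choice for the example.

```python
# Toy word-frequency sketch of finding the common core across
# several definitions of life (illustrative data, not Trifonov's corpus).
from collections import Counter
import re

definitions = [
    "Life is a self-sustained chemical system capable of undergoing Darwinian evolution.",
    "Life is self-reproduction with variations.",
    "Life is a system capable of evolution by natural selection.",
]

stopwords = {"is", "a", "of", "with", "by", "the"}
words = Counter(
    w
    for d in definitions
    for w in re.findall(r"[a-z-]+", d.lower())
    if w not in stopwords
)

# The words shared by most definitions hint at the underlying core.
print(words.most_common(5))
```

Even on this tiny sample, terms like “life,” “system,” and “evolution” surface as the shared vocabulary, which is the intuition behind distilling many definitions down to a short common core.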

His efforts did not settle matters. All of us — scientists included — keep a personal list of things that we consider to be alive and not alive. If someone puts forward a definition, we check our list to see where it draws that line. A number of scientists looked at Trifonov’s distilled definition and did not like the line’s location. “A computer virus performs self‐reproduction with variations. It is not alive,” declared the biochemist Uwe Meierhenrich.

Some philosophers have suggested that we need to think more carefully about how we give a word like life its meaning. Instead of building definitions first, we should start by thinking about the things we’re trying to define. We can let them speak for themselves.

These philosophers are following in the tradition of Ludwig Wittgenstein. In the 1940s, Wittgenstein argued that everyday conversations are rife with concepts that are very hard to define. How, for example, would you answer the question, “What are games?”

If you tried to answer with a list of necessary and sufficient requirements for a game, you’d fail. Some games have winners and losers, but others are open‐ended. Some games use tokens, others cards, others bowling balls. In some games, players get paid to play. In other games, they pay to play, even going into debt in some cases.

For all this confusion, however, we never get tripped up talking about games. Toy stores are full of games for sale, and yet you never see children staring at them in bafflement. Games are not a mystery, Wittgenstein argued, because they share a kind of family resemblance. “If you look at them you will not see something that is common to all,” he said, “but similarities, relationships, and a whole series of them at that.”

A group of philosophers and scientists at Lund University in Sweden wondered if the question “What is life?” might better be answered the way Wittgenstein answered the question “What are games?” Rather than come up with a rigid list of required traits, they might be able to find family resemblances that could naturally join things together in a category we could call Life. . . .

Continue reading. There’s more.

Written by LeisureGuy

9 March 2021 at 6:15 pm

Sea Slugs That Chop Off Their Heads and Grow New Bodies

leave a comment »

The head of Elysia cf. marginata just after autotomy. Credit: Sayaka Mitoh

There’s a stunning video in a NY Times report by Annie Roth, but I don’t know how to embed it. Do take a look. The report begins:

A few years ago, Sayaka Mitoh, a Ph.D. candidate at Nara Women’s University in Japan, was perusing her lab’s vast collection of sea slugs when she stumbled upon a gruesome sight. One of the lab’s captive-raised sea slugs, an Elysia marginata, had somehow been decapitated.

When Ms. Mitoh peered into its tank to get a better look, she noticed something even more shocking: The severed head of the creature was moving around the tank, munching algae as if there was nothing unusual about being a bodiless slug.

Ms. Mitoh also saw signs that the sea slug’s wound was self-inflicted: It was as if the sea slug had dissolved the tissue around its neck and ripped its own head off. Self-amputation, known as autotomy, isn’t uncommon in the animal kingdom. Having the ability to jettison a body part, such as a tail, helps many animals avoid predation. However, no animal had ever been observed ditching its entire body.

“I was really surprised and shocked to see the head moving,” said Ms. Mitoh, who studies the life history traits of sea slugs. She added that she expected the slug “would die quickly without a heart and other important organs.” But it not only continued to live, it also regenerated the entirety of its lost body within three weeks.

This prompted Ms. Mitoh and her colleagues to conduct a series of experiments aimed at figuring out how and why some sea slugs guillotine themselves. The results of their experiments, published Monday in Current Biology, provide evidence that Elysia marginata and a closely related species, Elysia atroviridis, purposefully decapitate themselves in order to facilitate the growth of a new body. Although more research is needed, the researchers suspect these sea slugs ditch their bodies when they become infected with internal parasites.

Ms. Mitoh and her team monitored several groups of Elysia marginata and Elysia atroviridis over the course of the creatures’ lives. Not all the sea slugs they monitored decapitated themselves, but many did — one even did it twice. Bodies regenerated from the heads of both species, but the headless bodies stayed headless. Still, the discarded bodies reacted to stimuli for as long as several months before decomposing.

The head wounds the sea slugs created during autotomy took only one day to heal. Organs such as the heart took an average of one week to regenerate. For most of the sea slugs, the regeneration process took less than three weeks to complete.

“We’ve known for a long time that sea slugs have regenerative capabilities, but this really goes beyond what we had thought,” said Terry Gosliner, senior curator of invertebrate zoology at the California Academy of Sciences.

Dr. Gosliner, who has discovered over one-third of all sea slug species known to exist, suspects that the impressive regenerative capability of these sea slugs may relate to another impressive biological talent they possess.

Elysia marginata and Elysia atroviridis are often called “solar-powered sea slugs.” They are among a small number of slugs that can incorporate chloroplasts from the algae they eat into their bodies. This lets the slugs sustain themselves, at least partially, on the sugars the chloroplasts produce through photosynthesis.

Having this ability, which is known as kleptoplasty, could be what allows these sea slugs to survive long periods of time without their bodies.

In most animals and even some sea slugs, autotomy is  . . .

Continue reading.

Written by LeisureGuy

9 March 2021 at 11:58 am

What the Coronavirus Variants Mean for the End of the Pandemic

leave a comment »

Illustration by Timo Lenzen

Dhruv Khullar has a good article in the New Yorker, which begins:

Last March, during the first wave of the pandemic, Adriana Heguy set out to sequence coronavirus genomes. At the time, New York City’s hospitals were filling up, and American testing capacity was abysmal; the focus was on increasing testing, to figure out who had the virus and who didn’t. But Heguy, the director of the Genome Technology Center at N.Y.U. Langone Health, recognized that diagnostic tests weren’t enough. Tracking mutations in the virus’s genetic code would be crucial for understanding it. “No one was paying attention to the need for sequencing,” Heguy told me recently. “I thought, I can’t just sit here and not do anything.” Within weeks, her team had sequenced hundreds of samples of the virus collected in New York City and published a paper with three key findings: the virus had been circulating in the city for weeks before the lockdown; most cases had come from Europe, not China; and the variant infecting New Yorkers carried a mutation, D614G, that scientists soon confirmed made it far more contagious than the original virus isolated in Wuhan.

Heguy’s efforts were prescient. The world is now confronting a growing number of coronavirus variants that threaten to slow or undo our vaccine progress. In recent months, it’s become clear that the virus is mutating in ways that make it more transmissible and resistant to vaccines, and possibly more deadly. It’s also clear that, at least in the United States, there is no organized system for tracking the spread or emergence of variants. As Heguy sees it, the U.S. has more than enough genome-sequencing expertise and capacity; the problem is focus. “Efforts in the U.S. have been totally scattered,” she said. “There’s no mandate to do it in a timely fashion. The government is kind of like, Let us know if you find something.” Funding has also been a major constraint. “It boils down to money,” Heguy said. “With money, I could hire a technician, another scientist, get the reagents and supplies I need.” Because of their better-organized efforts, other countries have been more successful in identifying new versions of the virus: “The reason the U.K. variant was identified in the U.K. is that the U.K. has a good system for identifying variants.” The U.K. has, for months, sequenced at least ten per cent of its positive tests. “If you’re doing ten per cent, you’re not going to miss things that matter,” Heguy said. “If a variant becomes prevalent, you’ll catch it.”

Heguy’s lab sequences ninety-six samples a week—as many as will fit onto a single sample plate, which has eight rows and twelve columns. The process—receiving, preparing, sequencing, and analyzing samples, then reporting the results—takes time and resources, and diverts attention from other research. “Mostly we do this out of a sense of moral obligation,” Heguy told me. “This feeling that the country shouldn’t be left in the dark.” As we enter what seems to be the endgame of the pandemic, tracking and analyzing variants—which could fill hospitals and reduce the effectiveness of therapies and vaccines—is more important than ever.

To understand coronavirus variants, you need to understand a little about viral biology and, more specifically, about how the fragments of RNA and protein from which viruses are made go about replicating. SARS-CoV-2, the coronavirus that causes COVID-19, has about thirty thousand letters of RNA in its genome. These letters, or “bases,” are like the architectural plans for the virus’s twenty-nine proteins, including the “spike” protein that it uses to enter cells. Once inside a cell, the virus hijacks the cellular machinery, using it to make copies of itself. Because the machinery is good but not perfect, there are occasional errors. SARS-CoV-2 has a mechanism that checks the new code against the old code; still, it’s possible for the substitution, deletion, or addition of an amino acid to evade this proofreading. If the errors don’t arrest the replication process completely, they sneak into the next generation. Most mutations don’t meaningfully change a protein’s structure or function. Sometimes, however, one of these accidental experiments “works.” A variant has been created — a virus with a slightly different design.

In the time that SARS-CoV-2 has troubled humans, it’s accumulated innumerable mutations. Those that matter have one of two key features: they either help the virus latch onto and enter cells more easily, or they allow it to better evade tagging and destruction by the immune system. Today, scientists are following three variants of particular concern: B.1.1.7, originally detected in the U.K.; B.1.351, from South Africa; and P.1, from Brazil. Predictably, variants seem to have emerged more quickly in countries with rampant viral spread — places where the virus has had more chances to replicate, mutate, and hit upon changes that confer an evolutionary advantage. The U.K.’s B.1.1.7 variant has spread to more than eighty countries and has been doubling every ten days in the U.S., where it is expected to soon become the dominant variant. Its key mutation is called N501Y: the name describes the fact that the amino acid asparagine (“N”) is replaced with tyrosine (“Y”) at the five-hundred-and-first position of the spike protein. The mutation affects a part of the spike that allows the virus to bind to cells, making the variant some fifty per cent more transmissible than the original; new evidence also suggests that people infected with it have higher viral loads and remain infectious longer, which could have implications for quarantine guidelines.

Both the B.1.351 and P.1 variants carry the N501Y mutation. They also have another, more dangerous mutation, known as E484K: a substitution in which glutamate (“E”) is replaced with lysine (“K”) at the spike protein’s four-hundred-and-eighty-fourth position. This mutation diminishes the ability of antibodies — both naturally acquired and vaccine-generated — to bind to and neutralize the virus. Last month, South Africa halted use of the vaccine produced by AstraZeneca, citing evidence that it offers minimal protection against the B.1.351 variant that is now dominant in that country; a monoclonal antibody drug from Eli Lilly is also inactive against it. In the U.S., a number of homegrown variants are beginning to circulate, including some with the antibody-evading E484K mutation; in the U.K., B.1.1.7 has, in some cases, also acquired the mutation, becoming more like the South African and Brazilian variants.
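The mutation shorthand used above (D614G, N501Y, E484K) is compact but mechanical: reference amino acid, position in the protein, substituted amino acid. A minimal sketch in Python shows how the notation unpacks; the function name and the small amino-acid lookup table here are illustrative, not anything from the article:

```python
import re

# One-letter amino-acid codes mentioned in the article (illustrative subset).
AA_NAMES = {
    "N": "asparagine",
    "Y": "tyrosine",
    "E": "glutamate",
    "K": "lysine",
}

def parse_mutation(code):
    """Split a substitution code like 'N501Y' into (reference, position, substitute)."""
    m = re.fullmatch(r"([A-Z])(\d+)([A-Z])", code)
    if not m:
        raise ValueError(f"not a simple substitution code: {code!r}")
    return m.group(1), int(m.group(2)), m.group(3)

ref, pos, alt = parse_mutation("N501Y")
print(f"{AA_NAMES[ref]} -> {AA_NAMES[alt]} at position {pos}")
# prints "asparagine -> tyrosine at position 501"
```

The same three-part reading applies to E484K: glutamate replaced by lysine at position 484 of the spike protein.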

There’s growing concern that B.1.351 and P.1 can infect people who’ve already had COVID-19. The city of Manaus, in Brazil, has faced a viral surge this winter, even though some three-quarters of its population is thought to have been infected by the original virus in the fall — a level at which herd immunity is believed to set in. This suggests that the antibodies produced by the original virus have struggled to neutralize its successor. Lab tests examining blood from immunized people have shown that the Pfizer-BioNTech and Moderna vaccines — which are effective against the U.K. variant — tend to produce fewer antibodies that fight the South African and Brazilian variants. It’s not yet clear how this will affect real-world protection: the vaccines still elicit huge numbers of antibodies — probably more than enough to neutralize the virus — and they stimulate other parts of the immune system, such as T cells, that weren’t assessed in the blood tests. At least for now, a degree of uncertainty is inevitable.

How worried should we be about the variants? They pose  . . .

Continue reading. There’s much more.

Written by LeisureGuy

7 March 2021 at 12:31 pm

Beyond human-centered design: A human–silkworm collaboration shows the way to sustainable design

leave a comment »

Tomasz Hollanek, a PhD researcher at the University of Cambridge, working at the intersection of design theory, technology ethics and critical AI studies, writes in Psyche:

With advancements in artificial intelligence (AI), it’s common to hear designers preach about the need for human-centred technology – systems, devices and software that cater to our specific human needs, behaviours and foibles. As algorithms affect our lives in profound yet invisible ways, placing the human at the centre of the design process is meant to ensure that they work in our favour – and that we get technological progress right. Most often, this approach translates into products and tools that are intuitive and user-friendly, and that support human wellbeing.

But what if situating the human at the heart of design isn’t enough to steer innovation in the right direction? What if it’s precisely what we should avoid? Human-centred thinking has marked drawbacks. We can trace the desire to focus on the human – and the human alone – to an anthropocentric logic that has guided technological development for centuries and, ultimately, led to the current state of ecological crisis. Viewed in this light, the rise of AI represents a chance to forge new, less extractive but still productive relationships with the organisms and entities with which we share the planet.

‘[A]ll design is human-centred,’ argues Paola Antonelli, senior curator at the Museum of Modern Art (MoMA) in New York, ‘in that it touches all live beings … but cares only about some – humans.’ For Antonelli, human-centric inevitably translates to egocentric, because it ‘reflects an old anthropocentric and anthropogenic view of reality’. The calls to prioritise the human in design, even if well-meaning, fail to address the biggest challenge ahead of us: reconciling our distinctly human needs with environmental concerns. The potential for change lies in what Antonelli calls allocentric design – the opposite of egocentric – an approach to making that acknowledges the interdependence of species and aims to ensure the flourishing of us all.

In early 2020, Antonelli curated an exhibition at MoMA dedicated to the designer Neri Oxman. Oxman labels her work as naturecentric, but it’s of a piece with allocentric design, since nature for Oxman stands for a complex system of interactions in which the human is only a single component. Oxman’s process, which she has been working out with the Mediated Matter group at the MIT Media Lab, is a good example of what de-centring the human in design could mean in practice. It also shows how new, AI-based technologies could help us mediate between human needs, environmental constraints and the wellbeing of other entities with which we share the planet.

Silk Pavilion (2013), a dome of stainless steel and silk threads (275 x 275 x 300 cm), is one of Oxman’s most frequently exhibited projects. The Pavilion explores the potential for interspecies co-design, in which both human and nonhuman actors are co-creators and stakeholders. Searching for alternatives to synthetic materials used in product and architectural design, Oxman’s team studied the Bombyx mori silkworm’s cocoon-production patterns. To ‘reverse engineer’ the silkworm’s spinning behaviour, the group examined naturally occurring fabrication patterns to reproduce them on a new scale. They learned that the silkworm’s movements were affected by environmental conditions such as natural light and heat and that, by manipulating these factors, humans could direct the worm’s spinning.

In the subsequent phase, the collected data served to compute a scalable model for the Pavilion dome: composed of 26 polygonal stainless-steel frames, the structure’s skeleton was filled with silk threads by a computer-numerically controlled machine, generating an optimal scaffolding for living silkworms to begin their spinning. In the final phase, 6,500 silkworms were released on to the shell. Over a period of three weeks, they completed the dome, closing the gaps between the machine-spun threads with new silk fibres.

The algorithm that initially replicated the living silkworm’s behaviour also directed this process of biological growth – mediating between species so as to optimise construction and avoid excess use of resources. The benefits of the Pavilion model are manifold. Combining digital and biological fibre-based fabrication allows for . . .

Continue reading.

Written by LeisureGuy

3 March 2021 at 11:48 am

Time to Take the Penis off Its Pedestal

leave a comment »

Rachel Gross writes in Scientific American:

It can taste, smell and sing. It can be a corkscrew, a crowbar or a hypodermic needle. It can stretch up to nine times your body length (if you’re a barnacle); be a detachable tentacle covered in suckers (if you’re an argonaut octopus); or even see, using light-sensing cells that guide it smoothly to its destination (if you’re a Japanese yellow swallowtail butterfly). Or, it can be a limp, fleshy tube, hardly worth writing home about, if you’re a human.

It is the penis, as you’ll know if you’ve read Phallacy: Life Lessons from the Animal Penis, biologist and journalist Emily Willingham’s recent exploration of phallic diversity across the animal kingdom.

Meanwhile, another book argues that what we need is even more penis science. GUYnecology: The Missing Science of Men’s Reproductive Health, by Yale sociologist Rene Almeling, asks why medicine has failed to fully probe “the male gonad,” as one scientist put it, and its role in human reproduction. Almeling explains why no medical specialty exists that is devoted to male reproductive health—the guy equivalent of gynecology. When it comes to penis science, it seems, men have gotten shafted.

At first glance, these two very different books appear to point to the same enduring truth: that scientists—and readers—remain as penis-obsessed as ever. Or, as Willingham puts it, “Nothing gets clicks like a story about dicks.”

Actually, it’s the opposite. In both, the flashy focus on the male member serves as a Trojan horse (pun intended) for a very different message: that a culture of phallus-worship has slanted the science in crucial and sometimes unexpected ways. On the one hand, we’ve inflated the role of the penis in genital evolution; on the other, we’ve left the male contribution to infertility, genetic abnormalities and other reproductive consequences unexamined. The result is stunted, lopsided science that shows only one side of the story.

Consider that myriad beetle species are classified solely by their penis shape, while the true breadth of vaginal diversity has yet to be explored. This tradition has deep roots: Going back to Charles Darwin, who waxed poetic on the wonders of barnacle dongs, biologists have trained their lens on the penis while remaining largely uninterested in what vaginas were doing. Yet penises don’t evolve in a vacuum. All those traits we ooh and aah over—length, girth, bristles—are shaped by vaginal evolution, and the mutual dance between the two that plays out over generations.

Today, as more women and LGBTQ scientists enter the field, we’re finding that vaginas, far from passive tubes for ejaculate, are active organs that sort, store and reject sperm. Kangaroos have three vaginas (two for sperm reception, one for joey ejection); swallowtail butterflies see out of theirs; and duck vaginas spiral and curve in a penis-repelling labyrinth. Even for non-vagina-lovers, these facts help us understand how genitals evolve as a whole. Both are part of the same unified story—a much richer tapestry than just one body part can tell. Leaving one out, whichever one, blinds us to the fuller picture of sex and sexuality.

Similarly, leaving guys out of gynecology paints a false picture that, beyond sperm, men don’t contribute much to human reproduction. While medical messaging hits women over the head with the fact of their ticking biological clocks, men are rarely told how their health and age will affect their offspring. This incongruity gives the impression “that reproduction is women’s business—that it occurs primarily in women’s bodies and is solely women’s responsibility,” Almeling writes. In reality, sperm age and quality likely play just as large a role in rates of developmental disorders and infant survival as eggs do—to say nothing of the interactions between the two.

Both examples reflect a deeper flaw in science’s approach to sex: the assumption that . . .

Continue reading. There’s more.

Written by LeisureGuy

23 February 2021 at 9:27 am

An instance of the problem of generalizing from a small sample: Beyond the !Kung

leave a comment »

Manvir Singh, an anthropologist and postdoctoral research fellow at the Institute for Advanced Study in Toulouse who studies the origins of universal or near-universal cultural practices, including music, marriage, shamanism, and witchcraft, writes in Aeon:

When the anthropologist Irven DeVore suggested in 1962 to then-graduate student Richard Lee that they study hunter-gatherers, neither expected to transform the modern understanding of human nature. A baboon expert, DeVore mostly wanted to expand his research to human groups. Lee was searching for a dissertation project. Being interested in human evolution, they decided not to study peoples in the Americas or Australia, as was the norm in hunter-gatherer studies. Instead, they looked for a site that was, in Lee’s words, ‘close to the actual faunal and floral environment occupied by early man’. So, they headed to Africa – specifically, to the Kalahari.

Twice as big as California and, in places, three times as dry, the Kalahari is a red, scorched scar that yawns across Botswana and Namibia. It’s a brutal place. For nine months a year, the sun tortures the earth. There are no clouds and, with the exception of great scraggly baobabs, no tall trees that provide shade. When in 2013 the travel writer Andrew Evans visited Tau Pan, a settlement in the eastern Kalahari, he said it looked like ‘the deadest part of our planet’.

But appearances can be deceiving. Despite its harshness, the Kalahari still hosts a surprising clamour of life. There are cacti and acacia trees, hyenas and secretary birds, wildebeests and ostriches. There are antelopes that feed on roots and leopards that stalk them. And there are lions: hungry ones and scrawny ones and ones with black manes. According to local hunter-gatherers, some even float through the air. These, they said, were not true lions but shape-shifting sorcerers: former shamans who traded good medicine for evil poison.

The hunter-gatherers of the Kalahari are known collectively by many names – San, Bushmen, Baroa, Basarwa. The people whom DeVore and Lee studied – and who have since been visited by countless anthropologists – are known as the !Kung. (Like other Kalahari hunter-gatherers, the !Kung speak languages with clicks, represented with symbols such as ‘!’ and ‘ǂ’.)

The !Kung astounded DeVore and Lee. They had trance healing. They had an elaborate cosmology. They had a puzzling kinship system and deep ethnobotanical knowledge. ‘The Bushmen were so rich,’ DeVore later said in a video interview. ‘And we saw these poor people had not been well represented.’ Excited by their initial findings, the duo launched the Harvard Kalahari Project. An ambitiously multidisciplinary study aimed at documenting as much as possible about the !Kung, the project drew together ethnographers, demographers, archaeologists and psychologists.

‘We were able to sell it,’ DeVore said, ‘because we had this mantra almost, which is: for 100,000 years, or certainly more, we were hunter-gatherers. And we’ve only been something else in the last 10,000 years. And as late as time of Christ, 2,000 years ago, half the world was still hunter-gatherers. So, we’re really hunter-gatherers – never mind some of us wear Brooks Brothers suits – and we don’t know that life.’

The Harvard Kalahari Project propelled the !Kung into anthropological stardom. By 1976, researchers on the team had published more than 100 academic articles, on topics as varied as infant care, trance healing, and blood pressure. The research sparked more interest, which drew in more anthropologists, which produced more research. In a video for the Annual Review of Anthropology in 2012, DeVore speculated that there was no culture ‘outside the West that has as much fine-grained data on it’. In the same video, his Harvard colleague Peter Ellison said:

It’s not hard to appreciate why the !Kung San became such a paradigm within anthropology. They were hunter-gatherers for so many people and for so many generations. There was no other study that came close to that richness of detail.

Through research on the !Kung and similar hunter-gatherers, anthropologists now have a clear picture of what society looked like for most of our species’ history. We were mobile. We were egalitarian. We shared. We lived in small bands composed mostly of kin. We had few possessions and weak notions of property. Slavery was unknown. Then, 10,000 years ago: a rupture. The world warmed. Sea levels rose. We started to settle. We domesticated plants and animals. We invented inequality and slavery. Property intensified. War intensified. Societies became larger and more complex. Strangers became neighbours. We built courts. We built governments. We built monuments and bureaucracies and moralistic gods and every other instrument of power exercised in service of order and oppression. Prehistory ended. History began.

This is more than just a theory of prehistory. It’s the modern, scientific origin myth. Yes, we live in mega-societies with property and slavery and inequality but, at heart, we are mobile, egalitarian hunter-gatherers, wired for small groups and sharing. According to the evolutionary social scientist Peter Turchin, this view is ‘so standard that it is rarely formulated in explicit terms’. The archaeologist David Wengrow and the late anthropologist David Graeber described it as ‘the foundation of all contemporary debate on inequality’. This view serves as a narrative of human nature, a symbol of our capacity to establish good societies, and a reminder of just how far we have strayed in the past 10,000 years.

It’s also probably wrong.

In 1549, a vessel sailing from modern-day Colombia to Spain was shipwrecked on the south Florida coast. The survivors were taken captive by local peoples who, in turn, handed them over as tribute to the local king. One of the captives was Hernando de Escalante Fontaneda, the 13-year-old son of a conquistador. Not much is known about Fontaneda, but something about him must have appealed to his captors. Perhaps it was his youth. Perhaps he was linguistically gifted. Whatever the reason, the Indigenous peoples killed most of their captives, including Fontaneda’s older brother, yet decided to keep him alive. For 17 years, he lived, learned and travelled among the peoples of south Florida until a Spanish expedition rescued him in 1566.

No one today would know (or care) about Fontaneda were it not for his memoirs. These memoirs offer a rare glimpse into cultures whose histories were later erased by disease and colonialism. They describe people’s diets (‘in those freshwater rivers there are infinite eels, and very delicious’). They describe geography and place names (‘it is called the lake of Mayaimi because it is very large, and around it there are many small towns’). They even describe the mortuary practices of local nobility (‘they take the bones and attach one bone with another until they reassemble the man as he was, and they place him in a house that they have as a temple’). Most importantly for us, they describe the Calusa.

The Calusa ruled southern Florida. At the time of Spanish contact, they comprised 50 to 60 politically united villages along Florida’s southwest coast, although their domain extended far beyond that, from Tampa to Cape Canaveral and down to the Florida Keys: an area twice as large as modern-day Belgium. They collected tribute from client villages in the form of mats, hides, captives, feathers and breads made of roots. In return, they offered protection.

Both a kingdom and a state, the Calusa concentrated power in a hereditary sovereign who had life-and-death control over his subjects, a fact he demonstrated with regular human sacrifice. He ruled from the island of Mound Key – specifically, from a massive house perched atop a 32-foot-high mound and spacious enough to fit 2,000 people. He oversaw full-time military and priestly classes and funnelled surplus production into lavish celebrations. After one Calusa king met with the conquistador Pedro Menéndez de Avilés, an observer marvelled at the extravagance of the festivities, which included a choir of 500 adolescent Calusa girls who sang as long as Menéndez was in attendance. ‘This,’ the observer wrote, ‘was the greatest celebration, respect, and obedience that that chief, or any other in the land, could offer.’

The Calusa mobilised people for more than just singing. On Pine Island, they dug a 4 km-long canal, likely for fishing, exchange, transportation and tribute payments. Construction was a grand, collaborative project, involving the movement of 30,000 cubic metres of earth. They dug another canal through Mound Key, bisecting the island into mirrored halves. At the canal’s mouth, they built what archaeologists called ‘watercourts’ – inundated, rectangular structures, each thousands of square metres in area. A cross between enormous nets and living refrigerators, the watercourts ensured a reliable supply of live fish.

How did the Calusa build such a large, stratified society? A reasonable guess would be through agriculture. Perhaps they harvested maize, that mother of bounty and civilisation. Perhaps some lucky farmers ended up with more of it and converted that difference into power. Perhaps that power spiralled: inequities expanded, nobilities appeared and soon great farmer-kings collected maize by the boatload, storing it in granaries before paying military personnel to terrorise rural vassal-farmers into handing over more maize. Perhaps that maize supported priests and infrastructure and pubertal court singers.

Turns out, that’s not what happened. The Calusa built a state not through agriculture but through wild game – in particular, fish.

The Calusa are exceptional. They developed, as far as anthropologists know, the largest and most politically complex society of any non-agricultural people. But they’re not that exceptional. For more than a century, anthropologists have known of another set of foragers who developed sedentary and politically stratified societies: the peoples of the Pacific Northwest Coast. Inhabitants of temperate rainforests at the nexus of river and sea, these peoples harvested and stored salmon. They warred and took slaves. They lived in towns, some of which exceeded 1,000 people. Rather than suppressing inequality, they institutionalised it through potlatches – ceremonies in which hosts gained status through distributions of food, furs, slaves and houses.

If hunter-gatherers can build large, sedentary societies, why do we assume that they lived in small bands for most of our species’ history? Surely our ancestors preferred lush spots over the dead-looking Kalahari. And, once in those spots, surely they had the same political savvy to engineer semi-fixed, stratified societies. Yet many leading anthropologists still imagine the 100,000 years preceding agriculture to resemble, with slight variations, the lives of the mid-20th-century !Kung. Why?

One reason is that stratified foragers are unusual. Look at recent hunter-gatherers, and what do you see? You see dispersed clusters of Inuit families in the Arctic. You see Hadza bands sharing honey in eastern Tanzania. You see the Sirionó traipsing through the Amazon and Mbuti camps assembling for a nighttime dance in the Congo. You see, in short, small, mobile, egalitarian bands.

But looking at modern hunter-gatherers is misleading. For one, the more we dig through history, the more we encounter foragers who were sedentary and hierarchical. They covered Japan before agriculture. They dotted the South China coast before agriculture. They inhabited the Levant, tracts of the Nile, the beaches of southern Scandinavia, the central plains of Russia, the coasts of the Atacama Desert, and the grasslands of the high-altitude Andes – all before agricultural peoples dominated those regions. Even today, sedentary foragers live in riverine and coastal regions of New Guinea.

Sedentary and hierarchical hunter-gatherers are not unusual. If anything, it’s the profusion of mobile, egalitarian bands that might be the historical outlier. Rather than reflecting ancient ways, these small-scale societies are often products of modern forces. Rather than being untouched, many have been bullied, pacified, employed, enslaved and marginalised by colonial powers and agricultural neighbours.

Take the Sirionó of Bolivia, who were studied by the anthropologist Allan Holmberg in 1940-42.  . .

Continue reading. There’s much more.

Written by LeisureGuy

20 February 2021 at 4:57 pm

Brexit chickens come home to roost, while the UK encourages emergence of vaccine-resistant strains of Covid

leave a comment »

Umair Haque writes in Medium:

Continue reading for the story of how Britain’s approach to Covid has not only backfired but promises worse to come.

In the Guardian, Andrew Rawnsley’s article “The bill for Boris Johnson’s Brexit is coming in and it’s punishingly steep” is also worth reading, and you can look at their overall Brexit coverage.

Written by LeisureGuy

25 January 2021 at 9:58 am

Scientist Debunks Myths About Exercise And Sleep

leave a comment »

Terry Gross at NPR interviewed Daniel Lieberman, a professor in the department of human evolutionary biology at Harvard. The audio of the interview (36 minutes) is at the link. The article there includes excerpts from the interview transcript. The article begins:

For much of history, human beings needed to be physically active every day in order to hunt or gather food — or to avoid becoming food themselves. It was an active lifestyle, but one thing it didn’t include was any kind of formal exercise.

Daniel Lieberman is a professor in the department of human evolutionary biology at Harvard. He says that the notion of “getting exercise” — movement just for movement’s sake — is a relatively new phenomenon in human history.

“Until recently, when energy was limited and people were physically active, doing physical activity that wasn’t necessarily rewarding, just didn’t happen,” Lieberman says. “When I go to these [remote African tribal] villages, I’m the only person who gets up in the morning and goes for a run. And often they laugh at me. They think I’m just absolutely bizarre. … Why would anybody do something like that?”

Lieberman has spent a lot of time with indigenous hunter-gatherers in Africa and Latin America, cataloging how much time they spend walking, running, lifting, carrying and sitting. He writes about his findings, as well as the importance of exercise and the myths surrounding it, in his new book, Exercised.

“If you actually look at what our ancestors do, they walk about 5 miles a day, which turns out to be, for most people, about 10,000 steps,” Lieberman says.

Lieberman notes that many people are moving less than they did before the pandemic. He says if 10,000 steps feels out of reach, it’s OK to shoot for less — just so long as you’re focused on movement. Even fidgeting can keep your muscles engaged.

“The more we study physical activity, the more we realize that it doesn’t really matter what you do,” Lieberman says. “You don’t have to do incredible strength training … to get some benefits of physical activity. There’s all different kinds of physical activity, and it’s all good in different ways.”

Interview highlights

On the demonizing of sitting as “the new smoking”

When I walk into a village in a remote part of the world where people don’t have chairs or a hunter-gatherer camp, people are always sitting. … Some friends and colleagues of mine actually put some accelerometers on some hunter-gatherers and found that they sit on average about 10 hours a day, which is pretty much the same amount of time Americans like me spend sitting.

So it turns out that I think we’ve kind of demonized sitting a little falsely. It’s not unnatural or strange or weird to sit a lot, but it is problematic if, of course, that’s all you do. As I started to explore the literature more, I was fascinated because most of the data that associates sitting a lot with poor health outcomes turns out to be leisure-time sitting. So if you look at how much time people spend sitting at work, it’s not really that associated with heart disease or cancers or diabetes. But if you look at how much people sit when they’re not at work, well, then the numbers get a little bit scary.

On the importance of “interrupted sitting”

Just getting up every once in a while, every 10 minutes or so — just to go to the bathroom or pet your dog or make yourself a cup of tea — even though you’re not spending a lot of energy, you’re turning on your muscles. And your muscles, of course, are the largest organ in your body — and just turning them on turns down inflammation. It uses up fats in your bloodstream and sugars in your bloodstream, and it produces molecules that turn down inflammation. So the evidence is that interrupted sitting is really the best way to sit. In hunter-gatherer camps, people are getting up every few minutes, to take care of the fire or take care of a kid or something like that. And that kind of interrupted sitting, as well as not sitting in a chair that’s kind of nestling your body and preventing you from using any muscles, all that kind of keeps your muscles going and turns out to be a much healthier way to sit.

On how chairs with backs have contributed to our back pain

We all think that it’s normal for a chair to have a seat back. But until recently, only really rich people — the pope or the king — had a chair with a seat back. Until recently, all human beings pretty much either sat on the ground or, if they did have chairs, they were stools or benches or things like that. …

The reason it matters for our health is that a seat back essentially makes sitting even more passive than just sitting on a bench or a stool because you lean against the seat back and you’re using even fewer muscles, even less effort to stabilize your upper body. And the result is that we end up having very weak backs. So there are a lot of muscles that we use in our backs to hold up our upper body, and those muscles, if we don’t use them, just like every other muscle in your body, they atrophy. And weak muscles then make us more prone to back pain. In fact, studies show that the best predictor of whether or not somebody gets lower-back pain — and most of us do get lower-back pain — is whether or not we have weak and, importantly, fatigable backs. I think sitting a lot on chairs with backrests contributes to that.

On the idea that running is bad for your knees

There’s this kind of general idea out there that running is like driving your car too much — [that] it’s wear and tear, and that running is highly stressful and it just wears away your cartilage, just like driving your car for a long period of time wears out your springs, for example. And that turns out not to be true. Study after study has shown . . .

Continue reading.

Written by LeisureGuy

24 January 2021 at 10:56 am

How the world embraced consumerism

leave a comment »

A consumerist society is just one of many possible cultural outcomes, and the evolution of the consumerist society in which we now live and struggle was the result of choices and deliberate initiatives. Some of those I’ve blogged about in times past, referring to this article (listed on the Useful Posts page). Now Kerryn Higgs has a lengthy article in The MIT Press Reader that provides details of how our consumerist society was created, which shows that the primary goal was by no means to benefit the public. The article is an extract adapted from Higgs’ book Collision Course: Endless Growth on a Finite Planet. (You can also get a Kindle edition of the book. The BBC has a version of this article with more plentiful illustrations.)

Over the course of the 20th century, capitalism preserved its momentum by molding the ordinary person into a consumer with an unquenchable thirst for more stuff.

The notion of human beings as consumers first took shape before World War I, but became commonplace in America in the 1920s. Consumption is now frequently seen as our principal role in the world.

People, of course, have always “consumed” the necessities of life — food, shelter, clothing — and have always had to work to get them or have others work for them, but there was little economic motive for increased consumption among the mass of people before the 20th century.

Quite the reverse: Frugality and thrift were more appropriate to situations where survival rations were not guaranteed. Attempts to promote new fashions, harness the “propulsive power of envy,” and boost sales multiplied in Britain in the late 18th century. Here began the “slow unleashing of the acquisitive instincts,” write historians Neil McKendrick, John Brewer, and J.H. Plumb in their influential book on the commercialization of 18th-century England, when the pursuit of opulence and display first extended beyond the very rich.

But, while poorer people might have acquired a very few useful household items — a skillet, perhaps, or an iron pot — the sumptuous clothing, furniture, and pottery of the era were still confined to a very small population. In late 19th-century Britain a variety of foods became accessible to the average person, who would previously have lived on bread and potatoes — consumption beyond mere subsistence. This improvement in food variety did not extend durable items to the mass of people, however. The proliferating shops and department stores of that period served only a restricted population of urban middle-class people in Europe, but the display of tempting products in shops in daily public view was greatly extended — and display was a key element in the fostering of fashion and envy.

Although the period after World War II is often identified as the beginning of the immense eruption of consumption across the industrialized world, the historian William Leach locates its roots in the United States around the turn of the century.

In the United States, existing shops were rapidly extended through the 1890s, mail-order shopping surged, and the new century saw massive multistory department stores covering millions of acres of selling space. Retailing was already passing decisively from small shopkeepers to corporate giants who had access to investment bankers and drew on assembly-line production of commodities, powered by fossil fuels; the traditional objective of making products for their self-evident usefulness was displaced by the goal of profit and the need for a machinery of enticement.

“The cardinal features of this culture were acquisition and consumption as the means of achieving happiness; the cult of the new; the democratization of desire; and money value as the predominant measure of all value in society,” Leach writes in his 1993 book “Land of Desire: Merchants, Power, and the Rise of a New American Culture.” Significantly, it was individual desire that was democratized, rather than wealth or political and economic power.

The 1920s: “The New Economic Gospel of Consumption”

Release from the perils of famine and premature starvation was in place for most people in the industrialized world soon after the Great War ended. U.S. production was more than 12 times greater in 1920 than in 1860, while the population over the same period had increased by only a factor of three, suggesting just how much additional wealth was theoretically available. The labor struggles of the 19th century had, without jeopardizing the burgeoning productivity, gradually eroded the seven-day week of 14- and 16-hour days that was worked at the beginning of the Industrial Revolution in England. In the United States in particular, economic growth had succeeded in providing basic security to the great majority of an entire population.

In these circumstances, there was a social choice to be made. A steady-state economy capable of meeting the basic needs of all, foreshadowed by philosopher and political economist John Stuart Mill as the stationary state, seemed well within reach and, in Mill’s words, likely to be an improvement on “the trampling, crushing, elbowing and treading on each other’s heels … the disagreeable symptoms of one of the phases of industrial progress.” It would be feasible to reduce hours of work further and release workers for the spiritual and pleasurable activities of free time with families and communities, and creative or educational pursuits. But business did not support such a trajectory, and it was not until the Great Depression that hours were reduced, in response to overwhelming levels of unemployment.

In 1930 the U.S. cereal manufacturer Kellogg adopted a six-hour shift to help accommodate unemployed workers, and other forms of work-sharing became more widespread. Although the shorter workweek appealed to Kellogg’s workers, the company, after reverting to longer hours during World War II, was reluctant to renew the six-hour shift in 1945. Workers voted for it by three-to-one in both 1945 and 1946, suggesting that, at the time, they still found life in their communities more attractive than consumer goods. This was particularly true of women. Kellogg, however, gradually overcame the resistance of its workers and whittled away at the short shifts until the last of them were abolished in 1985.

Even if a shorter working day became an acceptable strategy during the Great Depression, the economic system’s orientation toward profit and its bias toward growth made such a trajectory unpalatable to most captains of industry and the economists who theorized their successes. If profit and growth were lagging, the system needed new impetus. The short depression of 1921–1922 led businessmen and economists in the United States to fear that the immense productive powers created over the previous century had grown sufficiently to meet the basic needs of the entire population and had probably triggered a permanent crisis of overproduction; prospects for further economic expansion were thought to look bleak.

The historian Benjamin Hunnicutt, who examined the mainstream press of the 1920s, along with the publications of corporations, business organizations, and government inquiries, found extensive evidence that such fears were widespread in business circles during the 1920s. Victor Cutter, president of the United Fruit Company, exemplified the concern when he wrote in 1927 that the greatest economic problem of the day was the lack of “consuming power” in relation to the prodigious powers of production. . .

Continue reading. There’s much, much more.

Later in the article:

Bernays’s views, like those of several other analysts of the “crowd” and the “herd instinct,” were a product of the panic created among the elite classes by the early 20th-century transition from the limited franchise of propertied men to universal suffrage. “On every side of American life, whether political, industrial, social, religious or scientific, the increasing pressure of public judgment has made itself felt,” Bernays wrote. “The great corporation which is in danger of having its profits taxed away or its sales fall off or its freedom impeded by legislative action must have recourse to the public to combat successfully these menaces.”

The opening page of “Propaganda” discloses his solution:

The conscious and intelligent manipulation of the organized habits and opinions of the masses is an important element in democratic society. Those who manipulate this unseen mechanism of society constitute an invisible government which is the true ruling power of our country.… It is they who pull the wires which control the public mind, who harness old social forces and contrive new ways to bind and guide the world.

The front-line thinkers of the emerging advertising and public relations industries turned to the key insights of Sigmund Freud, Bernays’s uncle. As Bernays noted: . . .

Written by LeisureGuy

22 January 2021 at 11:27 am

The People for Whom the Suburbs Were Built Are Gone

leave a comment »

In Vice, Shayla Love writes on how daily life — and its accompanying expectations and needs — has evolved:

Last summer, Donald Trump and Ben Carson, then Secretary of Housing and Urban Development, co-bylined an op-ed in the Wall Street Journal promising to “protect America’s suburbs,” describing how they reversed policies that would allow for the creation of denser living structures in areas zoned only for single-family homes.

“America’s suburbs are a shining example of the American Dream, where people can live in their own homes, in safe, pleasant neighborhoods,” they wrote.

But the suburbs, in the sense of the idyllic American pastoral Trump and Carson referenced, have been changing for some time—not necessarily the physical homes, stores, roads, and offices that populate them, but the people who live there, along with their needs and desires. Previous mainstays of suburban life are now myths: that the majority of people own their homes; that the suburbs are havens for the middle class; or that the bulk of people are young families who value privacy over urban amenities like communal spaces, walkability, and mixed-use properties.

This mismatch has led to a phenomenon called “suburban retrofitting,” as documented by June Williamson, an associate professor of architecture at the City College of New York, and Ellen Dunham-Jones, a professor of architecture at the Georgia Institute of Technology. They have a new book out this week: Case Studies in Retrofitting Suburbia: Urban Design Strategies for Urgent Challenges.

Since the 1990s, Williamson and Dunham-Jones have been watching the suburbs evolve. They have found that much of the suburban sprawl of the 20th century was built to serve a very different population than the one that exists now, and so preserving what the suburbs once were doesn’t make sense.

Their book describes 32 recent instances in which suburban structures have been transformed into something new. Many of the cases in Williamson and Dunham-Jones’s first book on the same topic, from 2011, focused on underused parking lots being transformed into mixed-use spaces. But in this new book, the retrofitting projects have become more ambitious, as cities and towns turn old box stores, malls, motels, or office parks into places for people to live, work, eat, play, exercise, go to the doctor, or even watch Mexican wrestling.

They have found that when the suburbs are retrofitted, they can take on an astonishing array of modern issues: car dependency, public health, supporting aging people, helping people compete for jobs, creating water and energy resilience, and helping with social equity and justice.

Motherboard talked with Williamson and Dunham-Jones about why and how we should retrofit the suburbs, and whether the COVID-19 pandemic has made the suburbs appealing again, or instead accelerated the desire to retrofit the burbs.

The conversation has been edited for length and clarity.

Motherboard: How do you define the suburbs—a slippery term with no concrete definition? You write in the book that you define something as suburban based on its “suburban form,” not necessarily on location or city lines—what do you mean by that?
June Williamson: We’re architects and urban designers, and so we are focused on the built environment. That means we’re looking at places, generally, that have been built out in the second half of the 20th century to be car dependent, not walkable, and comparatively lower density.
Ellen Dunham-Jones: Similarly, you can look at the street networks. If . . .

Continue reading.

Written by LeisureGuy

21 January 2021 at 12:50 pm

Secrets of the largest animal genome ever: The Australian lungfish

leave a comment »

The platypus genome revealed a mix of mammal, bird, and reptile characteristics, and the lungfish similarly bears signs of life-form transition. Donna Lu writes in New Scientist:

The Australian lungfish has the largest genome of any animal so far sequenced.

Siegfried Schloissnig at the Research Institute of Molecular Pathology in Austria and his colleagues have found that the lungfish’s genome is 43 billion base pairs long, which is around 14 times larger than the human genome.

Its genome is 30 per cent larger than that of the previous record holder: the axolotl, a Mexican amphibian that the team sequenced in 2018.

The researchers used high-powered computer sequencers to piece together the lungfish genome.

To account for inherent errors that the sequencers introduce, they used multiple copies of the genome, each fragmented into small pieces of DNA. After all the fragments were sequenced, the team used algorithms to reassemble the pieces into a complete genome.

The result took roughly 100,000 hours of computer processing power, Schloissnig estimates.
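The fragment-and-reassemble process the article describes can be sketched in miniature. The toy greedy overlap assembler below is purely illustrative — the function names and the greedy merge strategy are my own, and real assemblers for a 43-billion-base-pair genome use far more sophisticated graph-based methods — but it shows the core idea: find reads whose ends overlap, and stitch them together.

```python
def overlap(a, b, min_len=3):
    """Length of the longest suffix of read a that matches a prefix of read b."""
    for n in range(min(len(a), len(b)), min_len - 1, -1):
        if a.endswith(b[:n]):
            return n
    return 0

def greedy_assemble(reads):
    """Repeatedly merge the pair of reads with the largest overlap —
    a toy version of overlap-based assembly (illustrative only)."""
    reads = list(reads)
    while len(reads) > 1:
        best = (0, 0, 1)  # (overlap length, index i, index j)
        for i in range(len(reads)):
            for j in range(len(reads)):
                if i != j:
                    n = overlap(reads[i], reads[j])
                    if n > best[0]:
                        best = (n, i, j)
        n, i, j = best
        if n == 0:
            break  # no overlaps left to merge
        merged = reads[i] + reads[j][n:]
        reads = [r for k, r in enumerate(reads) if k not in (i, j)] + [merged]
    return reads

# Three overlapping "reads" from a made-up 15-base sequence:
reads = ["ATGGCGTA", "GCGTACGTT", "CGTTAGC"]
print(greedy_assemble(reads))  # reconstructs "ATGGCGTACGTTAGC"
```

On this scale the puzzle is trivial; at 43 billion base pairs, with sequencing errors and repetitive regions, it becomes the 100,000-hour computation the article mentions.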

The Australian lungfish (Neoceratodus forsteri), native to south-east Queensland, has changed little in appearance since the time when animals began transitioning from a water-based to a terrestrial-based lifestyle, says Schloissnig.

The animal’s fins are fleshy and flipper-like, and it has a single dorsal lung, which it can use to breathe air at the water’s surface.

Previously, it was unclear whether lungfish or coelacanths – a group of archaic fish found in the Indian Ocean and around Indonesia – were more closely related to land-based vertebrates such as mammals and birds.

The new genomic analysis shows unequivocally that lungfish are more closely linked to the evolutionary line that gave rise to four-legged animals. Coelacanths diverged earlier, while lungfish branched off 420 million years ago.

“In order to get out of the water, you need . . .

Continue reading.

Written by LeisureGuy

20 January 2021 at 11:55 am

Posted in Evolution, Science

What New Science Techniques Tell Us About Ancient Women Warriors

leave a comment »

The past is a foreign country; they do things differently there.
— L.P. Hartley, writer (30 Dec 1895-1972)

A NY Times article suggests how people attempt to project their own cultural and social conventions on other societies even when it is totally inappropriate. To be fair, such projection is generally done from ignorance rather than ill will, though ill will quickly arises if the conventions are questioned. I think this is because people construct their identities from memes, generally taken from their cultural/social environment, so those conventions tend to be viewed as natural law with a heavy moral overlay. To deny them can feel to some as if their identity is in danger.

Of course, culture and social convention are subject to evolution and thus change over time. As an example of a change in social/cultural conventions, take the author of the article mentioned below, Mx. Annalee Newitz. Some decades back we went through a cultural shift away from the requirement that the marital status of women must be signified in the honorific: back then one had to use “Miss” for unmarried women, “Mrs.” for married women. (Men, of course, were called “Mr.” regardless of their marital status.)

The inequality was obvious, so in a relatively short period of time, the honorific “Ms.” (pronounced “mizz”) became common, readily adopted because it solved the problem of knowing which honorific to use when you did not know the woman’s marital status. (“Miss” and “Mrs.” were outliers among honorifics in requiring knowledge of marital status, since other honorifics — Mr., Capt., Dr., Rev., Prof., etc. — required no knowledge of marital status.) Indeed, in Southern speech, “mizz” was long since commonly used for both “Miss” and “Mrs.”

“Mx.” eases the burden of knowledge one step further: “Mx.” (pronounced “mix”) is an honorific that applies to a person without regard to gender — in effect, it is the honorific equivalent of “human” or “person.”

I think it would be quite useful, and it will be quickly adopted by those whose names are ambiguous as regards gender and who thus frequently get the wrong honorific (“Mr.” when “Ms.” is right, or “Ms.” when “Mr.” is right — “Mx.” finesses the problem altogether). I’m thinking of names like Shirley (remember the sports columnist Shirley Povich?), Pat, Robin, Leslie, Sandy, Kim, Marion (John Wayne’s real name), Charlie, Evelyn, Sue, and so on.

So “Mx.” is the honorific equivalent of “human” or “person”: no comment regarding gender, but showing respect as a person.

Mx. Newitz writes:

Though it’s remarkable that the United States finally is about to have a female vice president, let’s stop calling it an unprecedented achievement. As some recent archaeological studies suggest, women have been leaders, warriors and hunters for thousands of years. This new scholarship is challenging long-held beliefs about so-called natural gender roles in ancient history, inviting us to reconsider how we think about women’s work today.

In November a group of anthropologists and other researchers published a paper in the academic journal Science Advances about the remains of a 9,000-year-old big-game hunter buried in the Andes. Like other hunters of the period, this person was buried with a specialized tool kit associated with stalking large game, including projectile points, scrapers for tanning hides and a tool that looked like a knife. There was nothing particularly unusual about the body — though the leg bones seemed a little slim for an adult male hunter. But when scientists analyzed the tooth enamel using a method borrowed from forensics that reveals whether a person carries the male or female version of a protein called amelogenin, the hunter turned out to be female.

With that information in hand, the researchers re-examined evidence from 107 other graves in the Americas from roughly the same period. They were startled to discover that out of 26 graves with hunter tools, 10 belonged to women. Bonnie Pitblado, an archaeologist at the University of Oklahoma, Norman, told Science magazine that the findings indicate that “women have always been able to hunt and have in fact hunted.” The new data calls into question an influential dogma in the field of archaeology. Nicknamed “man the hunter,” this is the notion that men and women in ancient societies had strictly defined roles: Men hunted, and women gathered. Now, this theory may be crumbling.

While the Andean finding was noteworthy, this was not the first female hunter or warrior to be found by re-examining old archaeological evidence using fresh scientific techniques. Nor was this sort of discovery confined to one group, or one part of the world.

Three years ago, scientists re-examined the remains of a 10th-century Viking warrior excavated in Sweden at the end of the 19th century by Hjalmar Stolpe, an archaeologist. The skeleton had been regally buried at the top of a hill, with a sword, two shields, arrows and two horses. For decades, beginning with the original excavation, archaeologists assumed the Viking was a man. When researchers in the 1970s conducted a new anatomical evaluation of the skeleton, they began to suspect that the Viking was in fact a woman. But it wasn’t until 2017, when a group of Swedish archaeologists and geneticists extracted DNA from the remains, that the sex of the warrior indeed proved to be female.

The finding led to controversy over whether the skeleton was really a warrior, with scholars and pundits protesting what they called revisionist history. Although the genetic sex determination was indisputable (the bones of the skeleton had two X chromosomes), these criticisms led the Swedish researchers to examine the evidence yet again, and present a second, more contextual analysis in 2019. Their conclusion again was that the person had been a warrior.

The naysayers raised fair points. In archaeology, as the researchers admitted, we can’t always know why a community buried someone with particular objects. And one female warrior does not mean that many women were leaders, just as the reign of Queen Elizabeth I was not part of a larger feminist movement.

Challenges to “man the hunter” have emerged in new examinations of the early cultures of the Americas as well. In the 1960s, an archaeological dig uncovered in the ancient city of Cahokia, in what is now southwestern Illinois, a 1,000-to-1,200-year-old burial site with two central bodies, one on top of the other, surrounded by other skeletons. The burial was full of shell beads, projectile points and other luxury items. At the time, the archaeologists concluded that this was a burial of two high-status males flanked by their servants.

But in 2016 archaeologists conducted a fresh examination of the grave. The two central figures, it turned out, were a male and a female; they were surrounded by other male-female pairs. Thomas Emerson, who conducted the study with colleagues from the Illinois State Archaeological Survey at the University of Illinois, alongside scientists from other institutions, said the Cahokia discovery demonstrated the existence of male and female nobility. “We don’t have a system in which males are these dominant figures and females are playing bit parts,” as he put it.

Armchair history buffs love to obsess about  . . .

Continue reading.

Written by LeisureGuy

2 January 2021 at 4:54 pm

Why blue tigers went extinct early in the 20th century

leave a comment »

Christie Wilcox writes in Quanta:

If you had braved the jungles of China’s Fujian province in the early 20th century, various accounts say you could have witnessed a stunningly unexpected animal: a blue tiger. These tigers were described as “marvelously beautiful” with bodies “a deep shade of Maltese, changing into almost deep blue on the under parts.” As late as the 1950s, hunters reported spotting their blue hairs alongside the traditional orange fur of other South China tigers on trails.

Then the blue tigers disappeared. The last reported sighting was in 1953, and blue tigers were soon the stuff of legends, with not so much as a preserved hide to prove they ever existed.

It is tempting to think the cats’ blueness was tied to some flaw that left them unable to compete with their bright orange kin. But it’s more likely their bizarre coats had nothing to do with their extinction; it was simply bad luck that the color arose in a small population that continued to shrink.

This kind of chance evolution is the purview of neutral theory, the historically controversial idea that “survival of the fittest” isn’t the only, or even the most common, way that species change, split or disappear. Simple as the proposition sounds, its consequences for genetics, evolution, ecology and even disciplines outside of biology have been sweeping.

Not So Neutral Theory

The random rise or fall of gene variants in a population is known as genetic drift. Today it’s accepted as a key driver of evolution and diversity, but that wasn’t always the case. Until the 1960s, biologists generally ascribed all variation to selective forces: Deleterious traits hampered an individual’s reproduction, ensuring that over time, the traits would disappear (negative or purifying selection). Conversely, helpful traits bolstered the number of offspring an individual had and raised their own prevalence (positive selection) — all as predicted by Charles Darwin and Alfred Russel Wallace’s principle of natural selection.

Then sequencing studies on proteins revealed much more genetic variation within populations than expected. The idea that selection was acting on all those genes at once, weighing their effects and picking which ones should stay or go, didn’t sit right with some scientists.

In 1968, the renowned geneticist Motoo Kimura proposed an alternative explanation, now called neutral theory. Kimura posited that most of the variation between organisms is neither advantageous nor disadvantageous. Consequently, most of the variety we see isn’t a product of the hidden hand of selection but rather of luck. “All you need is some input of variation, and random forces will do the rest,” said Armand Leroi, an evolutionary biologist at Imperial College London.

Kimura’s neutral theory of molecular evolution sparked debate because it seemed to water down the influence of selection. But the genomics revolution of the late 20th century and widespread DNA sequencing confirmed that Kimura was right; swapping out one letter for another in a gene’s code usually has little effect.

Ever since, neutral theory has been the default assumption (or null hypothesis) in genetics. “If you want to show that a given variant in a DNA sequence is under selection, you first have to really show that it can’t be just explained by neutrality,” Leroi said.

(Population) Size Matters

Some continue to fight the idea that neutral forces drive diversity at the molecular level, but Parul Johri, a population geneticist at Arizona State University, largely chalks that up to a misunderstanding of its role in evolution. “Kimura never said everything is neutral,” she said. What matters is how much neutral forces have shaped biodiversity. And that depends largely on the size of the group you’re looking at.

Imagine a population of 10 birds: one red, one green and all the rest brown. These colors aren’t harmful or helpful, so all the birds have the same chance of reproducing. Then a tornado kills six of the brown birds, purely by chance. Now half the population is brown, a quarter is red and a quarter is green. A random event caused a major shift in diversity. That’s genetic drift.

If there had been 98 brown birds and the same lone red and green ones, however, the catastrophe wouldn’t have mattered as much. Even if . . .

Continue reading.
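The bird example above is easy to sketch as a simulation. The toy model below (my own illustration, not from the article) implements pure drift: each generation, every offspring inherits its color from a parent drawn at random, with no selection acting. Running it on a small and a large population shows the article's point that drift reshapes small groups much faster.

```python
import random

def next_generation(pop):
    """One generation of pure drift: each offspring inherits its color
    from a parent drawn uniformly at random (no selection)."""
    return [random.choice(pop) for _ in pop]

def generations_to_fixation(pop, max_gen=10_000):
    """Run drift until only one color remains; return the generation count."""
    for gen in range(1, max_gen + 1):
        pop = next_generation(pop)
        if len(set(pop)) == 1:
            return gen
    return max_gen

random.seed(42)

# The article's example: 10 birds, mostly brown plus one red and one green.
small = ["brown"] * 8 + ["red", "green"]
# The same color mix in a population ten times larger.
large = ["brown"] * 80 + ["red"] * 10 + ["green"] * 10

# Fixation typically takes on the order of the population size in
# generations, so the small flock usually loses its variety far sooner.
print(generations_to_fixation(small))
print(generations_to_fixation(large))
```

Note that no color is ever favored here, yet variation still disappears; that loss by sampling accident alone is what the article calls genetic drift.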

Written by LeisureGuy

10 December 2020 at 2:21 pm

Posted in Evolution, Math, Science

“I should have loved biology”


James Somers describes in this post how schools shortchanged him on biology — particularly poignant given the previous post on Lake Tanganyika’s cichlids. He writes:

I should have loved biology but I found it to be a lifeless recitation of names: the Golgi apparatus and the Krebs cycle; mitosis, meiosis; DNA, RNA, mRNA, tRNA.

In the textbooks, astonishing facts were presented without astonishment. Someone probably told me that every cell in my body has the same DNA. But no one shook me by the shoulders, saying how crazy that was. I needed Lewis Thomas, who wrote in The Medusa and the Snail:

For the real amazement, if you wish to be amazed, is this process. You start out as a single cell derived from the coupling of a sperm and an egg; this divides in two, then four, then eight, and so on, and at a certain stage there emerges a single cell which has as all its progeny the human brain. The mere existence of such a cell should be one of the great astonishments of the earth. People ought to be walking around all day, all through their waking hours calling to each other in endless wonderment, talking of nothing except that cell.

I wish my high school biology teacher had asked the class how an embryo could possibly differentiate—and then paused to let us really think about it. The whole subject is in the answer to that question. A chemical gradient in the embryonic fluid is enough of a signal to slightly alter the gene expression program of some cells, not others; now the embryo knows “up” from “down”; cells at one end begin producing different proteins than cells at the other, and these, in turn, release more refined chemical signals; …; soon, you have brain cells and foot cells.

How come we memorized chemical formulas but didn’t talk about that? It was only in college, when I read Douglas Hofstadter’s Gödel, Escher, Bach, that I came to understand cells as recursively self-modifying programs. The language alone was evocative. It suggested that the embryo—DNA making RNA, RNA making protein, protein regulating the transcription of DNA into RNA—was like a small Lisp program, with macros begetting macros begetting macros, the source code containing within it all of the instructions required for life on Earth. Could anything more interesting be imagined?

Someone should have said this to me:

Imagine a flashy spaceship lands in your backyard. The door opens and you are invited to investigate everything to see what you can learn. The technology is clearly millions of years beyond what we can make.

This is biology.

–Bert Hubert, “Our Amazing Immune System”

In biology class, biology wasn’t presented as a quest for the secrets of life. The textbooks wrung out the questing. We were nowhere acquainted with real biologists, the real questions they had, the real experiments they did to answer them. We were just given their conclusions.

For instance I never learned that a man named Oswald Avery, in the 1940s, puzzled over two cultures of Streptococcus bacteria. One had a rough texture when grown in a dish; the other was smooth, and glistened. Avery noticed that when he mixed the smooth strain with the rough strain, every generation after was smooth, too. Heredity in a dish. What made it work? This was one of the most exciting mysteries of the time—in fact of all time.

Most experts thought that protein was somehow responsible, that traits were encoded soupily, via differing concentrations of chemicals. Avery suspected a role for nucleic acid. So, he did an experiment, one we could have replicated on our benches in school. Using just a centrifuge, water, detergent, and acid, he purified nucleic acid from his smooth strep culture. Precipitated with alcohol, it became fibrous. He added a tiny bit of it to the rough culture, and lo, that culture became smooth in the following generations. This fibrous stuff, then, was “the transforming principle”—the long-sought agent of heredity. Avery’s experiment set off a frenzy of work that, a decade later, ended in the discovery of the double helix.

In his “Mathematician’s Lament,” Paul Lockhart describes how school cheapens mathematics by robbing us of the questions. We’re not just asked, hey, how much of the triangle . . .

Continue reading.
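The differentiation mechanism Somers wishes his class had dwelt on — a chemical gradient nudging gene expression in some cells but not others — can be captured in a few lines. This is a deliberately crude toy (the function name, fate labels, and threshold are all illustrative, not from the essay): cells along one axis read a morphogen concentration that falls off with position, and a single threshold splits a uniform sheet into two cell types.

```python
def cell_fate(morphogen, threshold=0.5):
    """A cell switches its gene-expression program when the local
    morphogen signal crosses a threshold (a toy of differentiation)."""
    return "brain-like" if morphogen > threshold else "foot-like"

# Morphogen concentration falls off linearly from one end of the embryo.
positions = range(10)
gradient = [1.0 - p / 9 for p in positions]

fates = [cell_fate(c) for c in gradient]
# Cells near the high-concentration end adopt one fate, the rest the
# other: one smooth gradient has partitioned identical cells into two
# distinct types, each of which could now emit its own refined signals.
```

In a real embryo the "threshold" is itself the output of gene regulation, which is what makes Somers' recursive, macros-begetting-macros picture apt.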

Written by LeisureGuy

1 December 2020 at 3:05 pm
