Later On

A blog written for those whose interests more or less match mine.

Archive for the ‘Evolution’ Category

Fossil Discoveries Challenge Ideas About Earth’s Start


Rebecca Boyle writes in Quanta:

In the arid, sun-soaked northwest corner of Australia, along the Tropic of Capricorn, the oldest face of Earth is exposed to the sky. Drive through the northern outback for a while, south of Port Hedland on the coast, and you will come upon hills softened by time. They are part of a region called the Pilbara Craton, which formed about 3.5 billion years ago, when Earth was in its youth.

Look closer. From a seam in one of these hills, a jumble of ancient, orange-Creamsicle rock spills forth: a deposit called the Apex Chert. Within this rock, viewable only through a microscope, there are tiny tubes. Some look like petroglyphs depicting a tornado; others resemble flattened worms. They are among the most controversial rock samples ever collected on this planet, and they might represent some of the oldest forms of life ever found.

Last month, researchers lobbed another salvo in the decades-long debate about the nature of these forms. They are indeed fossil life, and they date to 3.465 billion years ago, according to John Valley, a geochemist at the University of Wisconsin. If Valley and his team are right, the fossils imply that life diversified remarkably early in the planet’s tumultuous youth.

The fossils add to a wave of discoveries that point to a new story of ancient Earth. In the past year, separate teams of researchers have dug up, pulverized and laser-blasted pieces of rock that may contain life dating to 3.7, 3.95 and maybe even 4.28 billion years ago. All of these microfossils — or the chemical evidence associated with them — are hotly debated. But they all cast doubt on the traditional tale.

As that story goes, in the half-billion years after it formed, Earth was hellish and hot. The infant world would have been rent by volcanism and bombarded by other planetary crumbs, making for an environment so horrible, and so inhospitable to life, that the geologic era is named the Hadean, for the Greek underworld. Not until a particularly violent asteroid barrage ended some 3.8 billion years ago could life have evolved.

But this story is increasingly under fire. Many geologists now think Earth may have been tepid and watery from the outset. The oldest rocks in the record suggest parts of the planet’s crust had cooled and solidified by 4.4 billion years ago. Oxygen in those ancient rocks suggests the planet had water as far back as 4.3 billion years ago. And instead of an epochal, final bombardment, meteorite strikes might have slowly tapered off as the solar system settled into its current configuration.

“Things were actually looking a lot more like the modern world, in some respects, early on. There was water, potentially some stable crust. It’s not completely out of the question that there would have been a habitable world and life of some kind,” said Elizabeth Bell, a geochemist at the University of California, Los Angeles.

Taken together, the latest evidence from the ancient Earth and from the moon is painting a picture of a very different Hadean Earth: a stoutly solid, temperate, meteorite-clear and watery world, an Eden from the very beginning.

Ancient Clues

About 4.54 billion years ago, Earth was forming out of dust and rocks left over from the sun’s birth. Smaller solar leftovers continually pelted baby Earth, heating it up and endowing it with radioactive materials, which further warmed it from within. Oceans of magma covered Earth’s surface. Back then, Earth was not so much a rocky planet as an incandescent ball of lava.

Not long after Earth coalesced, a wayward planet whacked into it with incredible force, possibly vaporizing Earth anew and forming the moon. The meteorite strikes continued, some excavating craters 1,000 kilometers across. In the standard paradigm of the Hadean eon, these strikes culminated in an assault dubbed the Late Heavy Bombardment, also known as the lunar cataclysm, in which asteroids migrated into the inner solar system and pounded the rocky planets. Throughout this early era, ending about 3.8 billion years ago, Earth was molten and couldn’t support a crust of solid rock, let alone life.

But starting around a decade ago, this story started to change, thanks largely to tiny crystals called zircons. The gems, which are often about the size of the period at the end of this sentence, told of a cooler, wetter and maybe livable world as far back as 4.3 billion years ago. In recent years, fossils in ancient rock bolstered the zircons’ story of calmer climes. The tornadic microfossils of the Pilbara Craton are the latest example.

Today, the oldest evidence for possible life — which many scientists doubt or outright reject — is at least 3.77 billion years old and may be a stunningly ancient 4.28 billion years old. . .

Continue reading.

Good charts at the link, plus much more.

Written by LeisureGuy

22 January 2018 at 12:41 pm

Evolution unleashed (and needing an overhaul?)


Kevin Laland, professor of behavioural and evolutionary biology at the University of St Andrews in Scotland, and project leader of the extended evolutionary synthesis research programme, writes in Aeon:

When researchers at Emory University in Atlanta trained mice to fear the smell of almonds (by pairing it with electric shocks), they found, to their consternation, that both the children and grandchildren of these mice were spontaneously afraid of the same smell. That is not supposed to happen. Generations of schoolchildren have been taught that the inheritance of acquired characteristics is impossible. A mouse should not be born with something its parents have learned during their lifetimes, any more than a mouse that loses its tail in an accident should give birth to tailless mice.

If you are not a biologist, you’d be forgiven for being confused about the state of evolutionary science. Modern evolutionary biology dates back to a synthesis that emerged around the 1940s-60s, which married Charles Darwin’s mechanism of natural selection with Gregor Mendel’s discoveries of how genes are inherited. The traditional, and still dominant, view is that adaptations – from the human brain to the peacock’s tail – are fully and satisfactorily explained by natural selection (and subsequent inheritance). Yet as novel ideas flood in from genomics, epigenetics and developmental biology, most evolutionists agree that their field is in flux. Much of the data implies that evolution is more complex than we once assumed.

Some evolutionary biologists, myself included, are calling for a broader characterisation of evolutionary theory, known as the extended evolutionary synthesis (EES). A central issue is whether what happens to organisms during their lifetime – their development – can play important and previously unanticipated roles in evolution. The orthodox view has been that developmental processes are largely irrelevant to evolution, but the EES views them as pivotal. Protagonists with authoritative credentials square up on both sides of this debate, with big-shot professors at Ivy League universities and members of national academies going head-to-head over the mechanisms of evolution. Some people are even starting to wonder if a revolution is on the cards.

In his book On Human Nature (1978), the evolutionary biologist Edward O Wilson claimed that human culture is held on a genetic leash. The metaphor was contentious for two reasons. First, as we’ll see, it’s no less true that culture holds genes on a leash. Second, while there must be a genetic propensity for cultural learning, few cultural differences can be explained by underlying genetic differences.

Nonetheless, the phrase has explanatory potential. Imagine a dog-walker (the genes) struggling to retain control of a brawny mastiff (human culture). The pair’s trajectory (the pathway of evolution) reflects the outcome of the struggle. Now imagine the same dog-walker struggling with multiple dogs, on leashes of varied lengths, with each dog tugging in different directions. All these tugs represent the influence of developmental factors, including epigenetics, antibodies and hormones passed on by parents, as well as the ecological legacies and culture they bequeath.

The struggling dog-walker is a good metaphor for how EES views the adaptive process. Does this require a revolution in evolution? Before we can answer this question, we need to examine how science works. The best authorities here are not biologists but philosophers and historians of science. Thomas Kuhn’s book The Structure of Scientific Revolutions (1962) popularised the idea that sciences change through revolutions in understanding. These ‘paradigm shifts’ were thought to follow a crisis of confidence in the old theory that arose through the accumulation of conflicting data.

Then there’s Karl Popper, and his conjecture that scientific theories can’t be proven but can be falsified. Consider the hypothesis: ‘All sheep are white.’ Popper maintained that no amount of positive findings consistent with this hypothesis could prove it to be correct, since one could never rule out the chance that a conflicting data-point might arise in the future; conversely, the observation of a single black sheep would decisively prove the hypothesis to be false. He maintained that scientists should strive to carry out critical experiments that could potentially falsify their theories.

While Kuhn and Popper’s ideas are well-known, they remain disputed and contentious in the eyes of philosophers and historians. Contemporary thinking in these fields is better captured by the Hungarian philosopher Imre Lakatos in The Methodology of Scientific Research Programmes (1978):

The history of science refutes both Popper and Kuhn: on close inspection both Popperian crucial experiments and Kuhnian revolutions turn out to be myths.

Popper’s arguments might make logical sense, but they don’t quite map on to how science works in the real world. Scientific observations are susceptible to errors of measurement; scientists are human beings and get attached to their theories; and scientific ideas can be fiendishly complex – all of which makes evaluating scientific hypotheses a messy business. Rather than accepting that our hypotheses might be wrong, we challenge the methodology (‘That sheep’s not black – your instruments are faulty’), dispute the interpretation (‘The sheep’s just dirty’), or come up with tweaks to our hypotheses (‘I meant domesticated breeds, not wild mouflon’). Lakatos called such fixes and fudges ‘auxiliary hypotheses’; scientists propose them to ‘protect’ their core ideas, so that they need not be rejected.

This sort of behaviour is clearly manifest in scientific debates over evolution. Take the idea that new features acquired by an organism during its life can be passed on to the next generation. This hypothesis was brought to prominence in the early 1800s by the French biologist Jean-Baptiste Lamarck, who used it to explain how species evolved. However, it has long been regarded as discredited by experiment – to the point that the term ‘Lamarckian’ has a derogatory connotation in evolutionary circles, and any researchers expressing sympathy for the idea effectively brand themselves ‘eccentric’. The received wisdom is that parental experiences can’t affect the characters of their offspring.

Except they do. The way that genes are expressed to produce an organism’s phenotype – the actual characteristics it ends up with – is affected by chemicals that attach to them. Everything from diet to air pollution to parental behaviour can influence the addition or removal of these chemical marks, which switches genes on or off. Usually these so-called ‘epigenetic’ attachments are removed during the production of sperm and egg cells, but it turns out that some escape the resetting process and are passed on to the next generation, along with the genes. This is known as ‘epigenetic inheritance’, and more and more studies are confirming that it really happens.

Let’s return to the almond-fearing mice. The inheritance of an epigenetic mark transmitted in the sperm is what led the mice’s offspring to acquire an inherited fear. In 2011, another extraordinary study reported that worms responded to exposure to a nasty virus by producing virus-silencing factors – chemicals that shut down the virus – but, remarkably, subsequent generations epigenetically inherited these chemicals via regulatory molecules (known as ‘small RNAs’). There are now hundreds of such studies, many published in the most prominent and prestigious journals. Biologists dispute whether epigenetic inheritance is truly Lamarckian or only superficially resembles it, but there is no getting away from the fact that the inheritance of acquired characteristics really does happen.

By Popper’s reasoning, a single experimental demonstration of epigenetic inheritance – like a single black sheep – should suffice to convince evolutionary biologists that it’s possible. Yet, by and large, evolutionary biologists have not rushed to change their theories. Rather, as Lakatos anticipated, we have come up with auxiliary hypotheses that allow us to retain our long-held beliefs (ie, that inheritance is pretty much explained by the transmission of genes across generations). These include the ideas that epigenetic inheritance is rare, that it does not affect functionally important traits, that it is under genetic control, and that it is too unstable to underpin the spread of traits through selection.

Unfortunately for the traditionalists, . . .

Continue reading.

Written by LeisureGuy

18 January 2018 at 10:37 am

Posted in Evolution, Science

With ‘Downsized’ DNA, Flowering Plants Took Over the World


Jordana Cepelewicz writes in Quanta:

When people consider evolutionary events related to the origin and diversification of new species and groups, they tend to emphasize novel adaptations — specific genes giving rise to new, beneficial traits. But a growing body of research suggests that in some cases, that deciding factor may be something much more fundamental: size. In a paper published today in PLOS Biology, a pair of researchers studying the angiosperms, or flowering plants, has named genome size as the limiting constraint in their evolution.

The success of flowering plants, a group that includes everything from orchids and tulips to grasses and wheat, represents a long-standing puzzle for biologists. (In an 1879 letter to the renowned botanist Joseph Dalton Hooker, Charles Darwin called it an “abominable mystery.”) Terrestrial plants first appeared nearly half a billion years ago, but flowering plants arose only in the past 100 million years, beginning in the Cretaceous period. Yet, once angiosperms emerged, their structural and functional diversity exploded — far outpacing the diversification and spread of the other major plant groups, the gymnosperms (including conifers) and ferns.

Today, the 350,000 flowering-plant species, which have flourished in the vast majority of environments on Earth, constitute 90 percent of all plants on land. Since Darwin’s time, biologists pursuing the answer to that abominable mystery have sought to explain how the flowering plants could possibly have achieved this level of dominance in such a relatively short time.

Perhaps the answer has been so elusive because those scientists have usually focused on the physiological traits that set the angiosperms apart from their relatives. In the PLOS Biology paper, however, Kevin Simonin, a plant biologist at San Francisco State University, and Adam Roddy, a postdoctoral fellow at Yale University, argue that it’s the genome sizes underlying those individual adaptations that really matter.

The Advantages of Genome Downsizing

Species diverge hugely in genome size, without respect to the organisms’ complexity; in an oft-cited example, the onion has five times as much DNA as humans do. In the new study, Simonin and Roddy demonstrated what that variability in genome size means for large-scale biodiversity. They compiled vast amounts of data on genome size, cell size, cell density and photosynthetic rate for hundreds of angiosperms, ferns and gymnosperms, then traced the correlations among those traits back through time to weave a cohesive evolutionary narrative.
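
To make the phrase "traced the correlations among those traits" a bit more concrete, here is a minimal sketch of that kind of cross-species comparison. The species names and trait values below are invented for illustration, not drawn from the Simonin and Roddy dataset, and a real analysis would span hundreds of species and account for shared ancestry.

```python
# Illustrative sketch only: the records below are made up, not the published
# dataset. It shows the kind of cross-species correlation the article
# describes (genome size vs. cell size vs. stomatal density).

from math import sqrt

# Hypothetical records: (species, group, genome size in Gb,
#                        guard-cell length in um, stomata per mm^2)
plants = [
    ("angiosperm_1", "angiosperm", 0.4, 18, 320),
    ("angiosperm_2", "angiosperm", 0.9, 22, 250),
    ("angiosperm_3", "angiosperm", 1.5, 27, 190),
    ("gymnosperm_1", "gymnosperm", 9.0, 42, 80),
    ("gymnosperm_2", "gymnosperm", 12.5, 48, 60),
    ("fern_1",       "fern",       14.0, 55, 45),
]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

genome = [p[2] for p in plants]
cell   = [p[3] for p in plants]
stoma  = [p[4] for p in plants]

print("genome size vs. cell size:       ", round(pearson(genome, cell), 2))
print("genome size vs. stomatal density:", round(pearson(genome, stoma), 2))
```

Even with made-up numbers like these, the qualitative pattern the paper reports is visible: genome size tracks cell size positively and stomatal density negatively.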

The emergence of angiosperms was marked by many events in which lineages of plants duplicated their whole genome. This process opened a door for greater diversification because the extra copies of genes could evolve and take on new functions. But because carrying so much genetic material can also be physiologically taxing, natural selection typically followed up these duplication events by aggressively pruning unneeded sequences. This “genome downsizing” often left flowering plants with less DNA than their parent species had. In fact, by following the family trees of the flowering plants back to their base, researchers have determined that the very first angiosperms had small genomes. “We now know that this not only contributed to their diversity, but may have given angiosperms the metabolic advantage to outcompete the other plant groups,” Simonin said.

He and Roddy posited that angiosperms’ small genomes set off a cascade of effects that over time flowed from their physiology into their structure and ultimately into their ecological role. Less DNA made it possible for the flowering plants to build their leaves from smaller cells, which in turn allowed them to pack more of certain cell types into the same volume. They could therefore have a higher density of stomata — the pores that facilitate the intake of carbon dioxide from the air and the release of water vapor — and a higher density of veins to provide enough water to keep those pores open. And the flowering plants did not have to sacrifice a high density of photosynthetic cells to achieve these benefits.

As a result, flowering plants could turn sunlight into sugars through photosynthesis much more efficiently. The rise of their superior capacity for hydration and gas exchange also coincided with falling levels of atmospheric carbon dioxide during the Cretaceous period, which contributed further to the angiosperms’ competitive edge over their green-plant peers. According to Simonin and Roddy, phylogenetic evidence shows that changes in genome size and the relevant physiological traits occurred together. . .

Continue reading. Much more, and a chart that shows vividly (and visually) what “outcompete” means.

Written by LeisureGuy

15 January 2018 at 2:16 pm

Posted in Evolution, Science

New Bird Species Arises From Hybrids, as Scientists Watch


Jordana Cepelewicz describes in Quanta the emergence of a new species. Creationists generally insist that evolution cannot be right because we don’t see new species actually being formed. This ends that line of argument (or would, if Creationists argued in good faith). Cepelewicz writes:

It’s not every day that scientists observe a new species emerging in real time. Charles Darwin believed that speciation probably took place over hundreds if not thousands of generations, advancing far too gradually to be detected directly. The biologists who followed him have generally defaulted to a similar understanding and have relied on indirect clues, gleaned from genomes and fossils, to infer complex organisms’ evolutionary histories.

Some of those clues suggest that interbreeding plays a larger role in the formation of new species than previously thought. But the issue remains contentious: Hybridization has been definitively shown to cause widespread speciation only in plants. When it comes to animals, it has remained a hypothesis (albeit one that’s gaining increasing support) about events that typically occurred in the distant, unseen past.

Until now. In a paper published last month in Science, researchers reported that a new animal species had evolved by hybridization — and that it had occurred before their eyes in the span of merely two generations. The breakneck pace of that speciation event turned heads both in the scientific community and in the media. The mechanism by which it occurred is just as noteworthy, however, because of what it suggests about the undervalued role of hybrids in evolution.

Eyewitnesses to Speciation

In 1981, Peter and Rosemary Grant, the famous husband-and-wife team of evolutionary biologists at Princeton University, had already been studying Darwin’s finches on the small Galápagos island Daphne Major for nearly a decade. So when they spotted a male bird that looked and sounded different from the three species residing on the island, they immediately knew he didn’t belong. Genetic analysis showed he was a large cactus finch (Geospiza conirostris) from another island, either Española or Gardner, more than 60 miles away — too great a distance for the bird to fly home.

Tracking the marooned male bird’s activity, the Grants observed him as he mated with two female medium ground finches (G. fortis) on Daphne and produced hybrid offspring. Such interbreeding by isolated animals in the wild is not uncommon, though biologists have usually dismissed it as irrelevant to evolution because the hybrids tend to be unfit. Often they cannot reproduce, or they fail to compete effectively against established species and quickly go extinct. Even when the hybrids are fertile and fit, they frequently get reabsorbed into the original species by mating with their parent populations.

But something different happened with the hybrids on Daphne: When they matured, they became a population distinct from Daphne’s other bird species by inbreeding extensively and exclusively — siblings mating with siblings, and parents mating with their offspring.

In short, an incipient hybrid species, which the researchers dubbed the Big Bird lineage, had emerged within two generations. Today, six generations have passed, and the island is home to around 30 Big Bird finches. “If you were a biologist none the wiser to what had happened,” said Leif Andersson, a geneticist at Uppsala University in Sweden and one of the study’s co-authors, “and you started studying these birds, you’d think there were four different species on the island.”

Where Hybrids Thrive

On Daphne Major, the conditions may have been just right for hybrid speciation. “It shows what is possible, given the right circumstances,” Peter Grant said, and it sends “a valuable message about the importance of rare and unpredictable events in evolution. These have probably been underestimated.”

The Big Bird lineage became reproductively isolated so quickly because those birds could not successfully attract mates among the island’s resident species, which preferred their own kind. Big Bird finches couldn’t pass muster: They had relatively large beaks for their body size, and they boasted a unique song. These differences prevented gene flow between the hybrids and the native medium ground finches from which they had descended, leading to a distinct hybrid population. (In their Science paper, the Grants and their colleagues noted that the species status of Big Bird finches is still unofficial because no one has yet tested whether the birds will breed with their ancestral finches on Española and Gardner. But they cited reasons to suspect that the Big Bird lineage is reproductively isolated from them as well.) . . .

Continue reading.

Written by LeisureGuy

13 December 2017 at 10:48 am

Posted in Evolution, Science

Aliens in our midst


Douglas Fox, a freelance science and environmental writer whose work has been published in Discover, Esquire and National Geographic magazines, writes in Aeon:

Leonid Moroz has spent two decades trying to wrap his head around a mind-boggling idea: even as scientists start to look for alien life on other planets, there might already be aliens, with surprisingly different biology and brains, right here on Earth. Those aliens have hidden in plain sight for millennia. They have plenty to teach us about the nature of evolution, and what to expect when we finally discover life on other worlds.

Moroz, a neuroscientist, saw the first hint of his discovery back in the summer of 1995, not long after arriving in the United States from his native Russia. He spent that summer at the Friday Harbor marine laboratory in Washington. The lab sat amid an archipelago of forested islands in Puget Sound – a crossroads of opposing tides and currents that carried hundreds of animal species past the rocky shore: swarms of jellyfish, amphipod crustaceans, undulating sea lilies, nudibranch slugs, flatworms, and the larvae of fish, sea stars and countless other animals. These creatures represented not just the far reaches of Puget Sound, but also the farthest branches of the animal tree of life. Moroz spent hours out on the pier behind the lab, collecting animals so he could study their nerves. He had devoted years to studying nervous systems across the animal kingdom, in hopes of understanding the evolutionary origin of brains and intelligence. But he came to Friday Harbor to find one animal in particular.

He trained his eyes to recognise its bulbous, transparent body in the sunlit water: an iridescent glint and fleeting shards of rainbow light, scattered by the rhythmic beating of thousands of hair-like cilia, propelling it through the water. This type of animal, called a ctenophore (pronounced ‘ten-o-for’ or ‘teen-o-for’), was long considered just another kind of jellyfish. But that summer at Friday Harbor, Moroz made a startling discovery: beneath this animal’s humdrum exterior was a monumental case of mistaken identity. From his very first experiments, he could see that these animals were unrelated to jellyfish. In fact, they were profoundly different from any other animal on Earth.

Moroz reached this conclusion by testing the nerve cells of ctenophores for the neurotransmitters serotonin, dopamine and nitric oxide, chemical messengers considered the universal neural language of all animals. But try as he might, he could not find these molecules. The implications were profound.

The ctenophore was already known for having a relatively advanced nervous system; but these first experiments by Moroz showed that its nerves were constructed from a different set of molecular building blocks – different from any other animal – using ‘a different chemical language’, says Moroz: these animals are ‘aliens of the sea’.

If Moroz is right, then the ctenophore represents an evolutionary experiment of stunning proportions, one that has been running for more than half a billion years. This separate pathway of evolution – a sort of Evolution 2.0 – has invented neurons, muscles and other specialised tissues, independently from the rest of the animal kingdom, using different starting materials.

This animal, the ctenophore, provides clues to how evolution might have gone if not for the advent of vertebrates, mammals and humans, who came to dominate the ecosystems of Earth. It sheds light on a profound debate that has raged for decades: when it comes to the present-day face of life on Earth, how much of it happened by pure accident, and how much was inevitable from the start?

If evolution were re-run here on Earth, would intelligence arise a second time? And if it did, might it just as easily turn up in some other, far-flung branch of the animal tree? The ctenophore offers some tantalising hints by showing just how different from one another brains can be. Brains are the crowning case of convergent evolution – the process by which unrelated species evolve similar traits to navigate the same kind of world. Humans might have evolved an unprecedented intellect, but the ctenophore suggests that we might not be alone. The tendency of complex nervous systems to evolve is probably universal – not just on Earth, but also in other worlds.

As major animal groups go, the ctenophore is poorly understood. Its body superficially resembles that of a jellyfish – gelatinous, oblong or spherical, with a circular mouth at one end. Ctenophores are abundant in the oceans, but long-neglected by scientists. Well into the 20th century, drawings in textbooks often showed the animal upside down, its mouth hanging toward the seafloor, in jellyfish fashion, whereas in real life, it drifts with its mouth pointed upward.

Unlike the jellyfish, which uses muscles to flap its body and swim, the ctenophore uses thousands of cilia to swim. And unlike the jellyfish with its stinging tentacles, the ctenophore hunts using two sticky tentacles that secrete glue, an adaptation with no parallel in the rest of the animal kingdom. The ctenophore is a voracious predator, known for its ambush tactics. It hunts by spreading its branched, sticky tentacles to form something like a spiderweb, and catches its prey meticulously, one by one.

When scientists began examining the ctenophore nervous system in the late 1800s, what they saw through their microscopes seemed ordinary. A thick tangle of neurons sat near the animal’s south pole, a diffuse network of nerves spread throughout its body, and a handful of thick nerve bundles extended to each tentacle and to each of its eight bands of cilia. Electron microscope studies in the 1960s showed what seemed to be synapses between these neurons, with bubble-like compartments poised to release neurotransmitters that would stimulate the neighbouring cell.

Scientists injected the neurons of living ctenophores with calcium – causing them to fire electric pulses, just as happens in the nerves of rats, worms, flies, snails and every other animal. By stimulating the right nerves, researchers could even prompt its cilia to rotate in different patterns – causing it to swim forward or back.

In short, the ctenophore’s nerves seemed to look and act just like those of any other animal. So biologists assumed that they were the same. This view of ctenophores played into a larger narrative on the evolution of all animals – one that would also turn out to be wrong.

By the 1990s, scientists had placed ctenophores low on the animal tree of life, on a branch next to cnidarians, the group that includes jellyfish, sea anemones and coral. Jellyfish and ctenophores both have muscles, and both have diffuse nervous systems that haven’t fully condensed into a brain. And, of course, both have bodies that are famously soft, jiggly and often transparent.

Below ctenophores and jellyfish on the evolutionary tree sat two other branches of animals that were clearly more primitive: placozoans and sea sponges, which both lacked nerve cells of any kind. The sponge in particular had seemed just barely on the cusp of animalhood: not until 1866 did the American biologist Henry James Clark demonstrate that the sponge was, indeed, an animal.

This helped to enshrine the sponge as our closest living link to an ancient, pre-animal world of single-celled protists, akin to modern-day amoeba and paramecium. Researchers reasoned that sponges had evolved when ancient protists gathered into high-rise colonies, with each cell using its flagella – threadlike structures akin to cilia – for feeding instead of swimming.

This narrative supported the convenient view that the nervous system had evolved gradually, toward greater complexity with each successive branch of the animal tree. All animals were sons and daughters of a single moment of evolutionary creation: the birth of the nerve cell. And only once, in subsequent evolution, had those neurons crossed a second momentous threshold – aggregating into a centralised brain. This view was bolstered by another line of evidence: striking similarities in the way that individual nerve cells were arranged in insects and humans, into neural circuits underlying episodic memory, spatial navigation and overall behaviour. In fact, scientists held, the first brain must have appeared quite early, before the ancestors of insects and vertebrates parted evolutionary ways. If this was true, then the 550 to 650 million years elapsing since that event would represent a single storyline, with multiple animal lineages elaborating on the same, basic brain blueprint up and down the chain.

This picture of brain evolution made sense, but observing the scene at Friday Harbor in 1995, Moroz began to suspect that it was profoundly wrong. To demonstrate his hunch, he collected several species of ctenophores. He sliced their neural tissue into thin slivers and treated them with chemical stains indicating the presence of dopamine, serotonin or nitric oxide – three neurotransmitters that were widespread across the animal kingdom. Again and again, he looked into the microscope and saw no trace of the yellow, red or green stains.

Once you repeat the experiments, says Moroz: ‘You start to realise it’s a really different animal.’ He surmised that the ctenophore was not just different from its supposed sister group, the jellyfish. It was also vastly different from any other nervous system on Earth.

The ctenophore seemed to follow an entirely different evolutionary pathway, but Moroz couldn’t be sure. And if he published his results now, after looking at just a few important molecules, people would utterly dismiss them. ‘Extraordinary claims require extraordinary evidence,’ says Moroz. And so he embarked on a long, slow road, one longer than even he suspected at the time.

He applied for funding to study ctenophores using other techniques – for example, looking at their genes – but gave up after being turned down multiple times. He was still young at that point, had left the Soviet Union only a few years before, and had only just started publishing his work in English-language journals where it would generate broader interest. So Moroz put ctenophores on a back burner and returned to his primary work, studying neural signalling in snails, clams, octopuses and other molluscs. It was only by chance, 12 years later, that he returned to his passion project.

In 2007, he briefly visited Friday Harbor for a scientific conference. One evening, he strolled out onto the same docks where he had spent so much time in 1995. There, by chance, he glimpsed the iridescent sparkles of ctenophores drifting under the light of a lantern. Scientific tools had advanced by then, making it possible to sequence an entire genome in days rather than years. And Moroz was now established, with his own lab at the University of Florida. He could finally afford to dabble in curiosities.

So he fetched a net and fished a dozen or so ctenophores, a species called Pleurobrachia bachei, from the water. He froze them and shipped them to his lab in Florida. Within three weeks, he had a partial ‘transcriptome’ of the ctenophore – some 5,000 or 6,000 gene sequences that were actively turned on in the animal’s nerve cells. The results were startling.

First, they showed that Pleurobrachia lacked the genes and enzymes required to manufacture a long list of neurotransmitters widely seen in other animals. These missing neurotransmitters included not just the ones that Moroz had noted back in 1995 – serotonin, dopamine and nitric oxide – but also acetylcholine, octopamine, noradrenaline and others. The ctenophore also lacked genes for the receptors that allow a neuron to capture these neurotransmitters and respond to them.
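
To give a flavour of what "lacking the genes" means operationally, here is a toy sketch of a presence/absence screen. Every identifier in it is a placeholder invented for illustration; the actual study worked from assembled transcriptome and genome sequences using sequence-similarity searches, not exact name matching.

```python
# Toy presence/absence screen, loosely in the spirit of the analysis described
# above. Gene symbols and the "detected" set are invented placeholders, not
# real Pleurobrachia data; a real pipeline would use sequence-similarity
# searches (e.g. BLAST) against assembled transcripts.

neurotransmitter_markers = {
    "serotonin":     ["TPH", "DDC", "HTR1"],   # synthesis enzymes + receptor (assumed labels)
    "dopamine":      ["TH", "DDC", "DRD1"],
    "acetylcholine": ["CHAT", "CHRNA"],
    "nitric oxide":  ["NOS1"],
}

# Hypothetical genes detected as expressed in the animal's nerve cells
detected_genes = {"ELAV1", "VGLUT", "PKD2", "INX1"}

for system, markers in neurotransmitter_markers.items():
    missing = [g for g in markers if g not in detected_genes]
    if len(missing) == len(markers):
        status = "pathway absent"
    elif missing:
        status = "pathway incomplete"
    else:
        status = "pathway present"
    print(f"{system:14s}: {status:18s} missing: {', '.join(missing) or 'none'}")
```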

This confirmed what Moroz had waited years to find out: that when he failed to find common neurotransmitters in ctenophore nerves back in 1995, it wasn’t that his tests weren’t working; rather, the animal simply wasn’t using those molecules at all. This, says Moroz, was ‘a big surprise’.

‘We all use neurotransmitters,’ he says. ‘From jellyfish to worms, to molluscs, to humans, to sea urchins, you will see a very consistent set of signalling molecules.’ But, somehow, the ctenophore had evolved a nervous system in which these roles were filled by a different, as-yet unknown set of molecules.

Moroz’s transcriptome and genomic DNA sequences showed that the ctenophore also lacked many other genes, known from the rest of the animal kingdom, that are crucial for building and operating nervous systems. Pleurobrachia was missing many common proteins called ion channels that generate electric signals that travel rapidly down a nerve. It was missing genes that guide embryonic cells through the complex transformation into mature nerve cells. And it was missing well-known genes that orchestrate the stepwise connection of those neurons into mature, functioning circuits. ‘It was much more than just the presence or absence of just a few genes,’ he says. ‘It was really a grand design.’

It meant that the nervous system of the ctenophore had evolved from the ground up, using a different set of molecules and genes than any other animal known on Earth. It was a classic case of convergence: the lineage of ctenophores had evolved a nervous system using whatever genetic starting materials were available. In a sense, it was an alien nervous system – evolved separately from the rest of the animal kingdom.

But the surprises didn’t stop there. The ctenophore was turning out to be unique from other animals in far more than just its nervous system. . .

Continue reading.

Written by LeisureGuy

2 December 2017 at 10:40 am

Posted in Evolution, Science

Evolution can produce very complex mechanisms: How Bacteria Help Regulate Blood Pressure


Veronique Greenwood writes in Quanta:

Some years ago, when Jennifer Pluznick was nearing the end of her training in physiology and sensory systems, she was startled to discover something in the kidneys that seemed weirdly out of place. It was a smell receptor, a protein that would have looked more at home in the nose. Given that the kidneys filter waste into urine and maintain the right salt content in the blood, it was hard to see how a smell receptor could be useful there. Yet as she delved deeper into what the smell receptor was doing, Pluznick came to a surprising conclusion: The kidney receives messages from the gut microbiome, the symbiotic bacteria that live in the intestines.

In the past few years, Pluznick, who is now an associate professor of physiology at Johns Hopkins University, and a small band of like-minded researchers have put together a picture of what the denizens of the gut are telling the kidney. They have found that these communiqués affect blood pressure, such that if the microbes are destroyed, the host suffers. The researchers have uncovered a direct, molecular-level explanation of how the microbiome conspires with the kidneys and the blood vessels to manipulate the flow of blood.

The smell receptor, called Olfr78, was an orphan at first: It had previously been noticed in the sensory tissues of the nose, but no one knew what specific scent or chemical messenger it responded to. Pluznick began by testing various chemical possibilities and eventually narrowed down the candidates to acetate and propionate. These short-chain fatty acid molecules come from the fermentation breakdown of long chains of carbohydrates — what nutritionists call dietary fiber. Humans, mice, rats and other animals cannot digest fiber, but the bacteria that live in their guts can.

As a result, more than 99 percent of the acetate and propionate that floats through the bloodstream is released by bacteria as they feed. “Any host contribution is really minimal,” Pluznick said. Bacteria are therefore the only meaningful source of what activates Olfr78 — which, further experiments showed, is involved in the regulation of blood pressure.

Our bodies must maintain a delicate balance with blood pressure, as with electricity surging through a wire, where too much means an explosion and too little means a power outage. If blood pressure is too low, an organism loses consciousness; if it’s too high, the strain on the heart and blood vessels can be deadly. Because creatures are constantly flooding their blood with nutrients and chemical signals that alter the balance, the control must be dynamic. One of the ways the body exerts this control is with a hormone called renin, which makes blood vessels narrower when the pressure needs to be kept up. Olfr78, Pluznick and her colleagues discovered, helps drive the production of renin.

How did a smell receptor inherit this job? The genes for smell receptors are present in almost every cell of the body. If in the course of evolution these chemical sensors hooked up to the machinery for manufacturing a hormone rather than to a smell neuron, and if that connection proved useful, evolution would have preserved the arrangement, even in parts of the body as far from the nose as the kidneys are.

Olfr78 wasn’t the end of the story, however. While the team was performing these experiments, they realized that another receptor called Gpr41 was getting signals from the gut microbiome as well. In a paper last year, Pluznick’s first graduate student, Niranjana Natarajan, now a postdoctoral fellow at Harvard University, revealed the role of Gpr41, which she found on the inner walls of blood vessels. Like Olfr78, Gpr41 is known to respond to acetate and propionate — but it lowers blood pressure rather than raising it. Moreover, Gpr41 starts to respond at low levels of acetate and propionate, while Olfr78 kicks in only at higher levels.

Here’s how the pieces fit together: When you — or a mouse, or any other host organism whose organs and microbes talk this way — have a meal and dietary fiber hits the gut, bacteria feed and release their fatty-acid signal. This activates Gpr41, which ratchets down the blood pressure as all the consumed nutrients flood the circulation.

If you keep eating — a slice of pie at Thanksgiving dinner, another helping of mashed potatoes — Gpr41, left to itself, might bring the pressure down to dangerous levels. “We think that is where Olfr78 comes in,” Pluznick said. That receptor, triggered as the next surge of fatty acids arrives, keeps blood pressure from bottoming out by calling for renin to constrict the blood vessels.
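
The push-and-pull between the two receptors is easier to see as a toy calculation. The sketch below is purely illustrative: the thresholds, units and step sizes are invented, real responses are graded rather than all-or-nothing, and it is not a physiological model of renin signalling.

```python
# Toy two-threshold model of the Gpr41/Olfr78 logic described above.
# All numbers are arbitrary illustration values, not measured physiology.

GPR41_THRESHOLD = 0.1    # short-chain fatty acid (SCFA) level where Gpr41 responds
OLFR78_THRESHOLD = 0.5   # higher SCFA level where Olfr78 kicks in
BASELINE_BP = 100.0      # arbitrary blood-pressure units

def blood_pressure(scfa: float) -> float:
    bp = BASELINE_BP
    if scfa > GPR41_THRESHOLD:    # Gpr41: vessels relax, pressure drops
        bp -= 15.0
    if scfa > OLFR78_THRESHOLD:   # Olfr78: renin release, pressure partially recovers
        bp += 10.0
    return bp

for level in (0.05, 0.3, 0.8):   # fasting, after a meal, after a very large meal
    print(f"SCFA level {level:.2f} -> blood pressure {blood_pressure(level):.0f}")
```

Even in this cartoon form, the second threshold does what the article describes: it stops the pressure from continuing to fall as fatty-acid levels keep rising.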

The new understanding of how symbiotic bacteria manipulate blood pressure is emblematic of wider progress in linking the microbiome to our vital statistics and health. While vague statements about the microbiome’s effect on health have become commonplace in recent years, the field has moved beyond simply making associations, said Jack Gilbert, a microbiome researcher at the University of Chicago.

“Everybody goes on about . . .

Continue reading.


Written by LeisureGuy

1 December 2017 at 2:41 pm

Who first buried the dead?


Paige Madison, a PhD candidate in the history and philosophy of science at Arizona State University, writes in Aeon:

A mysterious cache of bones, recovered from a deep chamber in a South African cave, is challenging long-held beliefs about how a group of bipedal apes developed into the abstract-thinking creatures that we call ‘human’. The fossils were discovered in 2013 and were quickly recognised as the remains of a new species unlike anything seen before. Named Homo naledi, it has an unexpected mix of modern features and primitive ones, including a fairly small brain. Arguably the most shocking aspect of Homo naledi, though, concerned not the remains themselves but rather their resting place.

The chamber where the bones were found is far from the cave entrance, accessible only through a narrow, difficult passage that is completely shrouded in darkness. Scientists believe the chamber has long been difficult to access, requiring a journey of vertical climbing, crawling and tight squeezing through spaces only 20 cm across. It would be an impossible place to live, and a highly unlikely location for many individuals to have ended up by accident. Those details pushed the research team toward a shocking hypothesis: despite its puny brain, Homo naledi purposefully interred its dead. The cave chamber was a graveyard, they concluded.

For anthropologists, mortuary rituals carry an outsize importance in tracing the emergence of human uniqueness – especially the capacity to think symbolically. Symbolic thought gives us the ability to transcend the present, remember the past, and visualise the future. It allows us to imagine, to create, and to alter our environment in ways that have significant consequences for the planet. Use of language is the quintessential embodiment of such mental abstractions, but studying its history is difficult because language doesn’t fossilise. Burials do.

Burials provide a hard, material record of a behaviour that is deeply spiritual and meaningful, and they allow scientists to trace the emergence of beliefs, values and other complex ideas that appear to be uniquely human. Homo sapiens is unquestionably unlike any other species alive today. Pinpointing what separates us from the rest of nature is surprisingly difficult, however.

The paradox is that humans are also unquestionably a part of nature, having evolved alongside all the rest of life. Anthropologists have zeroed in on one singular human feature in particular: the capacity to think in the abstract. Our ability to imagine and communicate ideas about things that are not immediately in front of us is a complex cognitive process, scientists argue, one that is remarkably different from simple, primitive communication about nearby food or imminent danger.

Humans use symbols to communicate and convey these abstract thoughts and ideas. We imbue non-practical things with meaning. Art and jewellery, for example, communicate concepts about beliefs, values and social status. Mortuary rituals, too, have been put forward as a key example of symbolic thought, with the idea that deliberate treatment of the dead represents a whole web of ideas. Mourning the dead involves remembering the past and imagining a future in which we too will die – abstractions believed to be complex enough to be contemplated only by our species.

The assumption, then, was that death rituals were practised only by modern humans, or perhaps also by their very closest relatives. The possibility that primitive, small-brained Homo naledi could have engaged in the deliberate disposal of dead bodies not only challenges the timeline about when such behaviours appeared; it disrupts the whole conventional thinking about the distinction between modern humans and earlier species and, by extension, the distinction between us and the rest of nature.

For humans, death is an enormously culturally meaningful process. Cultures around the world honour the deceased with rituals and ceremonies that communicate a variety of values and abstract ideas. Since the 19th century, anthropologists have examined these mortuary practices to learn about the religions and beliefs of other cultures. During this time, it never occurred to anyone that other creatures, even other hominins (the primate group encompassing the genus Homo, along with the genus Australopithecus and other close relatives) could have engaged in similar behaviour. Surely, the thinking went, humans alone operate in such an abstract world as to assign deep meaning to death.

Yet this behaviour must have appeared at some point in our evolutionary history. Since mortuary rituals such as song and dance are invisible in the archaeological record, scientists focused on material aspects such as burial to trace the history of the practice. The discoveries soon prompted tough questions about the conventional viewpoint, suggesting that mortuary rituals might not have been uniquely human after all.

The first debate over non-humans burying their dead arose in 1908 with the discovery of a fairly complete Neanderthal skeleton near La Chapelle-aux-Saints in France. After excavating their find, the discoverers argued that the skeleton had clearly been deliberately buried. To them, it looked as though a grave had been dug, the body purposefully laid inside in the foetal position, and safely covered up from the elements. Many contemporary scientists remained dubious of this interpretation or dismissed the evidence outright. Later skeptics suggested that early 20th-century excavation techniques were too sloppy to prove such a sweeping conclusion. Debate over the burial of the La Chapelle Neanderthal continues to this day.

It is fitting that the controversy over mortuary ritual in hominins began with the Neanderthals, now known as the species Homo neanderthalensis. Ever since the first discovery of Neanderthal fossils in 1856 in the Neandertal valley in Germany, the species has occupied an ambiguous position in relation to humans. Neanderthals are the species closest to humans, and their location on the spectrum between humans and other animals has constantly been contested.

For the first century after their discovery, they were typically imagined as highly non-human creatures, their primitive aspects emphasised to such an extent that they became known as brutes who couldn’t even stand up straight. More recently, the pendulum has swung the other way, with some scientists arguing that the creatures were so close to humans that a Neanderthal wearing a suit and a hat on a subway would go largely unnoticed. The debate over Neanderthal burials has similarly wavered back and forth. At some times, . . .

Continue reading.

Written by LeisureGuy

14 November 2017 at 2:37 pm

Posted in Evolution, Religion, Science
