Archive for the ‘Evolution’ Category
I would think that the algorithms would equally well apply to memes and their evolution. John Pavlus reports in Quanta:
To the computer scientist Leslie Valiant, “machine learning” is redundant. In his opinion, a toddler fumbling with a rubber ball and a deep-learning network classifying cat photos are both learning; calling the latter system a “machine” is a distinction without a difference.
Valiant, a computer scientist at Harvard University, is hardly the only scientist to assume a fundamental equivalence between the capabilities of brains and computers. But he was one of the first to formalize what that relationship might look like in practice: In 1984, his “probably approximately correct” (PAC) model mathematically defined the conditions under which a mechanistic system could be said to “learn” information. Valiant won the A.M. Turing Award — often called the Nobel Prize of computing — for this contribution, which helped spawn the field of computational learning theory.
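The flavor of the PAC guarantee can be illustrated with a textbook result from computational learning theory (a sketch of the standard finite-hypothesis-class bound, not Valiant’s original formulation): a learner that finds a hypothesis consistent with enough random examples is, with probability at least 1 − δ, within error ε of the true concept.

```python
import math

def pac_sample_bound(hypothesis_count, epsilon, delta):
    """Classic PAC sample-complexity bound for a finite hypothesis class:
    a consistent learner that sees at least this many random examples is
    "probably" (prob. >= 1 - delta) "approximately" (error <= epsilon)
    correct."""
    return math.ceil((math.log(hypothesis_count) + math.log(1.0 / delta)) / epsilon)

# e.g. 1,000 candidate hypotheses, 5% error tolerance, 95% confidence
m = pac_sample_bound(1000, epsilon=0.05, delta=0.05)
print(m)  # 199
```

Note how the bound grows only logarithmically in the number of hypotheses: this is what makes “approximately correct” learning from modest data statistically feasible.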
Valiant’s conceptual leaps didn’t stop there. In a 2013 book, also entitled “Probably Approximately Correct,” Valiant generalized his PAC learning framework to encompass biological evolution as well.
He broadened the concept of an algorithm into an “ecorithm,” which is a learning algorithm that “runs” on any system capable of interacting with its physical environment. Algorithms apply to computational systems, but ecorithms can apply to biological organisms or entire species. The concept draws a computational equivalence between the way that individuals learn and the way that entire ecosystems evolve. In both cases, ecorithms describe adaptive behavior in a mechanistic way.
Valiant’s self-stated goal is to find “mathematical definitions of learning and evolution which can address all ways in which information can get into systems.” If successful, the resulting “theory of everything” — a phrase Valiant himself uses, only half-jokingly — would literally fuse life science and computer science together. Furthermore, our intuitive definitions of “learning” and “intelligence” would expand to include not only non-organisms, but non-individuals as well. The “wisdom of crowds” would no longer be a mere figure of speech.
Quanta Magazine spoke with Valiant about his efforts to dissolve the distinctions between biology, computation, evolution and learning. An edited and condensed version of the interview follows.
QUANTA MAGAZINE: How did you come up with the idea of “probably approximately correct” learning?
LESLIE VALIANT: I belonged to the theoretical computer science community, specializing in computational complexity theory, but I was also interested in artificial intelligence. My first question was: Which aspect of artificial intelligence could be made into a quantitative theory? I quickly settled on the idea that it must be learning.
At the time I started working on it [in the 1980s], people were already investigating machine learning, but there was no consensus on what kind of thing “learning” was. In fact, learning was regarded with total suspicion in the theoretical computer science community as something which would never have a chance of being made a science.
On the other hand, learning is a very reproducible phenomenon — like an apple falling to the ground. Every day, children all around the world learn thousands of new words. It’s a large-scale phenomenon for which there has to be some quantitative explanation.
So I thought that learning should have some sort of theory. Since statistical inference already existed, my next question was: Why was statistics not enough to explain artificial intelligence? That was the start: Learning must be something statistical, but it’s also something computational. I needed some theory which combined both computation and statistics to explain what the phenomenon was.
So what is learning? Is it different from computing or calculating?
It is a kind of calculation, but the goal of learning is to perform well in a world that isn’t precisely modeled ahead of time. A learning algorithm takes observations of the world, and given that information, it decides what to do and is evaluated on its decision. A point made in my book is that all the knowledge an individual has must have been acquired either through learning or through the evolutionary process. And if this is so, then individual learning and evolutionary processes should have a unified theory to explain them.
And from there, you eventually arrived at the concept of an “ecorithm.” What is an ecorithm, and how is it different from an algorithm?
An ecorithm is an algorithm, but its performance is evaluated against input it gets from a rather uncontrolled and unpredictable world. And its goal is to perform well in that same complicated world. You think of an algorithm as something running on your computer, but it could just as easily run on a biological organism. But in either case an ecorithm lives in an external world and interacts with that world.
So the concept of an ecorithm is meant to dislodge this mistaken intuition many of us have that “machine learning” is fundamentally different from “non-machine learning”?
Yes, certainly. Scientifically, the point has been made for more than half a century that if our brains run computations, then if we could identify the algorithms producing those computations, we could simulate them on a machine, and “artificial intelligence” and “intelligence” would become the same. But the practical difficulty has been to determine exactly what these computations running on the brain are. Machine learning is proving to be an effective way of bypassing this difficulty.
Some of the biggest challenges that remain for machines are those computations which concern behaviors that we acquired through evolution, or that we learned as small children crawling around on the ground touching and sensing our environment. In these ways we have acquired knowledge that isn’t written down anywhere. For example, if I squeeze a paper cup full of hot coffee, we know what will happen, but that information is very hard to find on the Internet. If it were available that way, then we could have a machine learn this information more easily.
Can systems whose behavior we already understand well enough to simulate with algorithms — like solar systems or crystals — be said to “learn” too? . . .
Carl Zimmer reports in the NY Times:
The agricultural revolution was one of the most profound events in human history, leading to the rise of modern civilization. Now, in the first study of its kind, an international team of scientists has found that after agriculture arrived in Europe 8,500 years ago, people’s DNA underwent widespread changes, altering their height, digestion, immune system and skin color.
Researchers had found indirect clues of some of these alterations by studying the genomes of living Europeans. But the new study, they said, makes it possible to see the changes as they occurred over thousands of years.
“For decades we’ve been trying to figure out what happened in the past,” said Rasmus Nielsen, a geneticist at the University of California, Berkeley, who was not involved in the new study. “And now we have a time machine.”
Before the advent of studies of ancient DNA, scientists had relied mainly on bones and other physical remains to understand European history. The earliest bones of modern humans in Europe date to about 45,000 years ago, researchers have found.
Early Europeans lived as hunter-gatherers for over 35,000 years. About 8,500 years ago, farmers left their first mark in the archaeological record of the continent.
By studying living Europeans, scientists had already found evidence suggesting that their ancestors adapted to agriculture through natural selection. As tools to sequence DNA became more readily available, researchers even discovered some of the molecular underpinnings of these traits.
But these studies couldn’t help determine exactly when the changes occurred, or whether they resulted from natural selection or the migrations of people into Europe from other regions.
Scientists are now tackling these questions in a much more direct way, thanks to a rapidly growing supply of DNA from ancient skeletons. These studies have revealed that the DNA of Europeans today comes from three main sources. . .
I think it’s the cuteness of those eight stubby little legs. Victoria Turk writes in Motherboard:
I’m going to call it: tardigrades are the weirdest animal on the planet (and beyond).
Also known as water bears, the microscopic eight-legged creatures have been around for hundreds of millions of years, and are best known for being almost indestructible. They can go into a state of suspended animation and survive temperatures way below freezing and well above boiling, go without food and water for years, and have even been known to survive the vacuum and radiation of space. A new study published in PNAS adds another to the list of tardigrades’ extraordinary features: their genome contains an unprecedented proportion of foreign DNA. Lead author Thomas Boothby said the finding was “extremely surprising.”
The group of researchers based out of the University of North Carolina at Chapel Hill set out to sequence the genome of the tardigrade species Hypsibius dujardini, in the hope of gaining more insight into the unusual creature’s biology. What they found was that an unprecedented one-sixth of the tardigrade’s genome was not made of tardigrade DNA. It was composed of “foreign” DNA from a large range of completely different organisms—mainly bacteria, but also plants, fungi, and single-celled archaea. . .
Kevin Hartnett reports in Quanta:
There are few bigger — or harder — questions to tackle in science than the question of how life arose. We weren’t around when it happened, of course, and apart from the fact that life exists, there’s no evidence to suggest that life can come from anything besides prior life. Which presents a quandary.
Christoph Adami does not know how life got started, but he knows a lot of other things. His main expertise is in information theory, a branch of applied mathematics developed in the 1940s for understanding information transmissions over a wire. Since then, the field has found wide application, and few researchers have done more in that regard than Adami, who is a professor of physics and astronomy and also microbiology and molecular genetics at Michigan State University. He takes the analytical perspective provided by information theory and transplants it into a great range of disciplines, including microbiology, genetics, physics, astronomy and neuroscience. Lately, he’s been using it to pry open a statistical window onto the circumstances that might have existed at the moment life first clicked into place.
To do this, he begins with a mental leap: Life, he argues, should not be thought of as a chemical event. Instead, it should be thought of as information. The shift in perspective provides a tidy way in which to begin tackling a messy question. In the following interview, Adami defines information as “the ability to make predictions with a likelihood better than chance,” and he says we should think of the human genome — or the genome of any organism — as a repository of information about the world gathered in small bits over time through the process of evolution. The repository includes information on everything we could possibly need to know, such as how to convert sugar into energy, how to evade a predator on the savannah, and, most critically for evolution, how to reproduce or self-replicate.
This reconceptualization doesn’t by itself resolve the issue of how life got started, but it does provide a framework in which we can start to calculate the odds of life developing in the first place. Adami explains that a precondition for information is the existence of an alphabet, a set of pieces that, when assembled in the right order, expresses something meaningful. No one knows what that alphabet was at the time that inanimate molecules coupled up to produce the first bits of information. Using information theory, though, Adami tries to help chemists think about the distribution of molecules that would have had to be present at the beginning in order to make it even statistically plausible for life to arise by chance.
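The statistical point can be made concrete with a toy calculation (this is an illustrative sketch, not Adami’s own model; the sequence and the distributions are hypothetical): the chance of assembling one specific sequence from an alphabet of monomers depends sharply on how the available monomers are distributed.

```python
import math

def log10_chance(sequence, monomer_probs):
    """Base-10 log of the probability of assembling one specific sequence
    by drawing monomers independently from the given distribution.
    (Toy model for illustration only.)"""
    return sum(math.log10(monomer_probs[m]) for m in sequence)

target = "GACA" * 10  # a hypothetical 40-monomer "replicator"

# Under a uniform alphabet, every monomer is equally likely...
uniform = {m: 0.25 for m in "GACU"}
# ...but a skewed environmental supply can favor the needed pieces.
biased = {"G": 0.4, "A": 0.4, "C": 0.1, "U": 0.1}

print(log10_chance(target, uniform))  # about -24: one in 10^24
print(log10_chance(target, biased))
```

The biased distribution makes this particular sequence orders of magnitude more probable, which is the spirit of Adami’s question: what distribution of molecules makes a first self-replicator statistically plausible rather than astronomically unlikely?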
Quanta Magazine spoke with Adami about what information theory has to say about the origins of life. An edited and condensed version of the interview follows.
QUANTA MAGAZINE: How does the concept of information help us understand how life works?
CHRISTOPH ADAMI: Information is the currency of life. One definition of information is the ability to make predictions with a likelihood better than chance. That’s what any living organism needs to be able to do, because if you can do that, you’re surviving at a higher rate. [Lower organisms] make predictions that there’s carbon, water and sugar. Higher organisms make predictions about, for example, whether an organism is after you and you want to escape. Our DNA is an encyclopedia about the world we live in and how to survive in it.
Think of evolution as a process where information is flowing from the environment into the genome. The genome learns more about the environment, and with this information, the genome can make predictions about the state of the environment.
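Adami’s definition of information can be made quantitative with Shannon’s mutual information, the standard information-theoretic measure of how much one variable lets you predict another (a minimal sketch; the two-state “genome” and “environment” here are invented for illustration):

```python
import math

def mutual_information(joint):
    """Mutual information I(X;Y) in bits, from a joint distribution
    given as {(x, y): probability}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# A genome variant that perfectly tracks the environment carries
# one full bit of information about it...
adapted = {("A", "hot"): 0.5, ("B", "cold"): 0.5}
# ...while a variant independent of the environment carries none.
unadapted = {("A", "hot"): 0.25, ("A", "cold"): 0.25,
             ("B", "hot"): 0.25, ("B", "cold"): 0.25}

print(mutual_information(adapted))    # 1.0 bit
print(mutual_information(unadapted))  # 0.0 bits
```

In this picture, evolution raising the genome’s mutual information with its environment is exactly what “learning about the environment” means.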
If the genome is a reflection of the world, doesn’t that make the information context specific?
Information in a sequence needs to be interpreted in its environment. Your DNA means nothing on Mars or underwater because underwater is not where you live. A sequence is information in context. A virus’s sequence in its context — its host — has enough information to replicate because it can take advantage of its environment.
What happens when the environment changes?
The first thing that happens is that stuff that was information about the environment isn’t information anymore. Cataclysmic change means the amount of information you have about the environment may have dropped. And because information is the currency of life, suddenly you’re not so fit anymore. That’s what happened with dinosaurs.
Once you start thinking about life as information, how does it change the way you think about the conditions under which life might have arisen? . . .
How does multicellularity arise? Emily Singer describes several ways:
Until one or two billion years ago, life on Earth was limited to a soup of single-celled creatures. Then one fateful day, a lonely cell surrendered solitude for communal living. It developed a chance mutation that made its progeny stick together, eventually giving rise to the first multicellular life.
With that simple innovation, a world of possibilities burst open. These new organisms were too big to be eaten, and their mammoth size allowed them to pull in more food from the environment. Most important, individual cells within the bunch could begin to specialize, taking on new functions, such as hunting, eating and defense. The transition to multicellularity was so successful that it happened over and over again in Earth’s evolutionary history — at least 25 times, and very likely more.
Multicellularity has clear advantages — just look at the menagerie of form and function among animals, plants and fungi. But scientists have long been puzzled as to how this transformation took place. A true multicellular organism acts as a unit, meaning that each cell must surrender its will to survive as an individual and act to ensure the survival of the larger group. “The problem with all the major evolutionary transitions is how Darwinian entities relinquish their individual fitness and become part of a higher-level unit,” said Richard Michod, an evolutionary biologist at the University of Arizona in Tucson.
Scientists are gaining insight into the process by re-creating the evolution of multicellularity in the lab. Using an approach known as experimental evolution, they prod single-celled microbes, such as yeast, algae or bacteria, to develop a multicell form.
“It’s easy to think of [these major transitions] as a giant leap in evolution, and in some sense that’s true,” said Ben Kerr, a biologist at the University of Washington in Seattle and one of the researchers studying major transitions in evolution. But each transition actually involved a series of small advances — the organisms had to evolve effective ways to stick together, to cooperate, to divide and to develop specialized jobs within the greater whole. “We’re trying to do the opposite of a giant leap. We’re trying to break one giant leap for evolution into an understandable series of small steps.”
William Ratcliff, a biologist at the Georgia Institute of Technology in Atlanta, and his collaborators have discovered a surprisingly simple route to multicellularity: a single mutation in yeast that adheres the mother cell to its daughter to create a snowflake-like shape. These snowflakes grow and divide in a way that provides a clever solution to one of the major pitfalls of multicellularity: the cheater problem, in which lazy cells take advantage of cooperative ones. And while the work hasn’t produced a true multicellular organism, the snowflake yeast has shown just how easy it can be for life to take the first step toward a major biological transformation.
Ratcliff began his quest for multicellularity while still a graduate student at the University of Minnesota. Over a series of coffee-fueled conversations, Ratcliff and his collaborator Michael Travisano began brainstorming the “coolest experiment we could do,” according to Ratcliff. Tackling the biggest unsolved question in biology — how life first began — was too far out of their wheelhouse, the pair decided. So they settled on the runner-up: How did multicellular creatures evolve? To untangle that transition, the researchers would try to re-create it, converting single-celled yeast into multicellular organisms. . .
Evolution has shaped baroque solutions in creating networks of benefits and trade-offs in various lifeforms. Take, for example, wasps… Veronique Greenwood interviews Joan Strassmann in Quanta:
As an undergraduate in the 1970s, Joan Strassmann split her time between writing short stories and laying siege to the office of her mentor, the sociobiologist Dick Alexander. For two years, she insisted on meeting with him every Friday to discuss research, a schedule that she now thinks was probably quite an imposition. “He would give me so much reading,” she recalled, “it would take me forever. I would work night and day to finish it, and maybe this was his strategy — maybe he was hoping I’d cancel or something.” But that intense focus became one of the hallmarks of her pioneering research on social insects.
In graduate school at the University of Texas, Austin, Strassmann began to study wasps that live in hierarchical colonies, starting with a nest that was thriving in a tractor shed near the main campus. “I was really planning to work on something else, some other social organism, like ground squirrels,” said Strassmann, now at Washington University in St. Louis. “Everyone knew that I was actually terrified of wasps.” But on a dare, she and a friend ventured into the shed and painted each wasp with a different identifying dot of paint, the standard preparation for studying the social dynamics of an insect colony. Then she just started watching them. And watching them.
She was in good company. Insects that live in cooperative colonies — ants, termites, and some wasps and bees — have fascinated scientists for more than a century because they pose an evolutionary conundrum. Darwin himself saw their way of life as a challenge to his ideas. The theory of evolution seems to predict that each individual will fight to pass on its traits, but in a colony, only a very small number of insects actually get to reproduce: the queens and their mates. The rest give up their chance to contribute to the gene pool, caring for the offspring of others instead. How could this lifestyle, known as eusociality, have evolved? How could it make sense for the ancestors of modern worker bees or wasps to give up their autonomy? It seems biologically implausible.
Right around the time Strassmann was in college, however, biologists began to understand how social insects could fit into the framework of evolutionary theory. In a seminal paper, W.D. Hamilton proposed that cooperation might make sense in closely related individuals that share enough genes. If a bee with a maiden-aunt helper produces twice as many offspring as it might have otherwise, that arrangement makes evolutionary sense for the non-reproducing assistant, which is indirectly passing on its genes. But that benefit is reduced as relatedness declines, so eusociality would only arise among close relatives.
Strassmann found clear evidence of Hamilton’s idea in her tractor-shed wasps. When a nest is destroyed, its members will disperse to sister nests in a pattern that reflects their level of relationship to the queens. Each wasp serves only the leaders it is most closely related to. Strassmann continued to study wasps over the next 20 years with her husband and collaborator, David Queller, and the two have uncovered many other details of how relatedness shapes the behavior of social insects, including how colonies keep relatedness high when multiple queens reign, and what it takes to turn a worker into a queen.
About 17 years ago, however, the pair began to shift to a new model organism, the amoeba Dictyostelium discoideum. They suspected that this unusual creature could offer new insights into the dynamics of cooperation. In moments of starvation, these soil-dwelling amoebas crowd together and build a tower rising above the ground from which they disperse their spores to other, more hospitable places. Some 20 percent of the group will sacrifice themselves to build the tower with their bodies, while the rest take advantage of it to spread their genes.
Quanta Magazine spoke with Strassmann about the evolution of social insects, the secret lives of queen wasps, and what she’s learned about cooperation from a single-celled creature. An edited and condensed version of the interview follows.
QUANTA MAGAZINE: You began to study social wasps just as scientists were debating the origins of eusociality. What was that like?
JOAN STRASSMANN: It was a really exciting time. W.D. Hamilton’s paper came out in 1964, but it wasn’t really appreciated until the early ’70s, when I was in college. He outlined the framework of something called inclusive fitness, which is the sum of the effects of an individual’s actions on its own and others’ reproduction. Ultimately, it is the measure of an individual’s actions on the representation of its genes in the next generation.
You would help relatives, because they share genes with you. But it’s not quite as simple as that. People often act like you’ll always help a sibling more than a cousin, and so on, but the costs and benefits are really crucial. For example, you wouldn’t help sisters who share half your genes unless they would have at least double the number of progeny as a result of your help.
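The cost-benefit arithmetic Strassmann describes is Hamilton’s rule, rb > c: helping is favored when relatedness times the benefit to the recipient exceeds the cost to the helper. A minimal sketch (the numbers are just the sister example from the paragraph above):

```python
def helping_pays(relatedness, benefit, cost):
    """Hamilton's rule: altruism is favored when r * b > c,
    with benefit and cost counted in offspring equivalents."""
    return relatedness * benefit > cost

# A full sister shares half your genes (r = 0.5), so forgoing one
# offspring of your own (c = 1) only pays if she gains more than
# two extra offspring as a result (b > 2).
print(helping_pays(0.5, 3, 1))  # True:  0.5 * 3 = 1.5 > 1
print(helping_pays(0.5, 2, 1))  # False: 0.5 * 2 = 1.0 is not > 1
```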
Relatedness is easy to assess with molecular techniques. Assessing the costs and benefits, though, requires that you really understand the biology of the individuals in their natural habitat. Dave and I have a paper laying out two major benefits social insects gain from joining forces: fortress defense, where two of you can guard better than one, especially if you live in a nest with a tiny opening; and life insurance, where if one of you dies, the other ensures the survival of your babies. Fortress defense and life insurance are the two main reasons that cooperating could be beneficial in social insects.
Are there behaviors or traits that make it easier to become eusocial?
Yes — they’re actually very well known. The ancestors of social insects are parasitic wasps. As a parasite, you don’t want to make your host sick until you’re going to kill it, so these wasps don’t really poop as larvae. They just sort of hold it in. The same goes with social insects — you really don’t want to poop in the nest. How you control the poop in social organisms is a really important question.
Then there is maternal care. . .
Do read the whole thing. It becomes more and more interesting.
Emily Singer writes in Quanta:
In September 2014, Christa Schleper embarked on an unusual hunting expedition in Slovenia. Instead of seeking the standard quarry of deer or wild boar, Schleper was in search of Lokiarchaeota, or Loki, a newly discovered group of organisms first identified near deep-sea vents off the coast of Norway. The simple, single-celled creatures have captured scientists’ interest because they are unlike any other organism known to science. They belong to an ancient group of creatures known as archaea, but they seem to share some features with more complex life-forms, including us.
Though little is known about Loki, scientists hope that it will help to resolve one of biology’s biggest mysteries: how life transformed from simple single-celled organisms to the menagerie of complex life known as eukaryotes — a category that includes everything from yeast to azaleas to elephants. “Next to the origins of life, there’s probably no bigger mystery in the history of life,” said John Archibald, an evolutionary biologist at Dalhousie University in Nova Scotia.
The jump from single cells to complex creatures is so puzzling because it represents an enormous evolutionary gulf. “How do you make a eukaryote, that’s a big question,” said Schleper, a microbiologist at the University of Vienna in Austria. “It’s a huge transition.”
Though single-celled organisms blanket the Earth and are capable of impressive biochemistry — some can eat nuclear waste, for example — their structure and shape remain simple. Cells from animals, plants and fungi, which make up the eukaryotes, are much more sophisticated. They possess a suite of features lacking in their simpler brethren: a nucleus that houses DNA; an energy-producing device known as the mitochondrion; and molecular architecture, known as the cytoskeleton, that controls cell shape and movement.
Most biologists agree that at some point around two billion years ago, one featureless cell swallowed another, and the two began to work together as one. But the details of this process — whether this symbiosis jump-started an evolutionary process, or whether it happened midway along the path to eukaryotes — continue to drive huge disputes in the field. One group theorizes that eukaryotes emerged in a rapid burst, driven by the acquisition of the cellular energy factories known as mitochondria. Others propose a slower, stepwise process. They say that mitochondria couldn’t have developed in simple cells; some level of complexity must have evolved before mitochondria came onboard. The debate has grown so heated that members of each camp no longer attend the other’s conference sessions.
Since biologists [unlike other specialists? – LG] can’t travel back in time, they search surviving life-forms for clues. But no detectable intermediates between ancient, single-celled life and early eukaryotes exist, making it nearly impossible to reconstruct the order of evolutionary events. “When something only happens once, it’s hard to grapple with the problem,” Archibald said. “We’re left studying the DNA sequence of modern organisms and trying to piece it together.”
Enter Loki, which some scientists have dubbed a microbial missing link. It is descended from an ancient lineage and is a simple organism with patches of apparent complexity. Genetic analysis places Loki squarely within the single-celled archaea. But it possesses an intriguing collection of genes that look as though they would be more at home in eukaryotes, rather like modern words dotting a medieval manuscript. In fact, Loki’s genetic machinery suggests that the organism might be able to engulf other cells, the first step in the creation of mitochondria. “These genes could have provided a starter kit for eukaryogenesis, the emergence of eukaryotes,” said Thijs Ettema, a microbiologist at Uppsala University in Sweden who first described Loki in collaboration with Schleper in Nature last May.
Loki thus outlines a new potential origin story for eukaryotes, one that walks a middle path between the two extremes. Mitochondria may have been born early in the evolution of eukaryotes. But that first mitochondrial host may have already possessed some sophisticated features, most notably the ability to engulf other cells. “It hints that [the Loki] are stepping-stones to eukaryotic complexity,” Archibald said.
Schleper, Ettema and others are now searching for new varieties of Loki, hoping to find some that are even closer to eukaryotes on the evolutionary tree. Schleper’s expedition to Slovenia was part of this ongoing hunt.