Later On

A blog written for those whose interests more or less match mine.

Archive for the ‘Science’ Category

This year’s winner of the Dance Your Ph.D. contest


More information on the contest, with some of the runners-up.

Written by LeisureGuy

25 November 2015 at 8:46 am

Posted in Education, Science, Video

‘Outsiders’ Crack 50-Year-Old Math Problem


Fascinating article, even though I don’t know some of the things they’re talking about—but it’s interesting to see how the same problem occurs in many fields and in many guises. Erica Klarreich writes in Quanta:

In 2008, Daniel Spielman told his Yale University colleague Gil Kalai about a computer science problem he was working on, concerning how to “sparsify” a network so that it has fewer connections between nodes but still preserves the essential features of the original network.

Network sparsification has applications in data compression and efficient computation, but Spielman’s particular problem suggested something different to Kalai. It seemed connected to the famous Kadison-Singer problem, a question about the foundations of quantum physics that had remained unsolved for almost 50 years.

Over the decades, the Kadison-Singer problem had wormed its way into a dozen distant areas of mathematics and engineering, but no one seemed to be able to crack it. The question “defied the best efforts of some of the most talented mathematicians of the last 50 years,” wrote Peter Casazza and Janet Tremain of the University of Missouri in Columbia, in a 2014 survey article.

As a computer scientist, Spielman knew little of quantum mechanics or the Kadison-Singer problem’s allied mathematical field, called C*-algebras. But when Kalai, whose main institution is the Hebrew University of Jerusalem, described one of the problem’s many equivalent formulations, Spielman realized that he himself might be in the perfect position to solve it. “It seemed so natural, so central to the kinds of things I think about,” he said. “I thought, ‘I’ve got to be able to prove that.’” He guessed that the problem might take him a few weeks.

Instead, it took him five years. In 2013, working with his postdoc Adam Marcus, now at Princeton University, and his graduate student Nikhil Srivastava, now at the University of California, Berkeley, Spielman finally succeeded. Word spread quickly through the mathematics community that one of the paramount problems in C*-algebras and a host of other fields had been solved by three outsiders — computer scientists who had barely a nodding acquaintance with the disciplines at the heart of the problem.

Mathematicians in these disciplines greeted the news with a combination of delight and hand-wringing. The solution, which Casazza and Tremain called “a major achievement of our time,” defied expectations about how the problem would be solved and seemed bafflingly foreign. Over the past two years, the experts in the Kadison-Singer problem have had to work hard to assimilate the ideas of the proof. Spielman, Marcus and Srivastava “brought a bunch of tools into this problem that none of us had ever heard of,” Casazza said. “A lot of us loved this problem and were dying to see it solved, and we had a lot of trouble understanding how they solved it.”

“The people who have the deep intuition about why these methods work are not the people who have been working on these problems for a long time,” said Terence Tao, of the University of California, Los Angeles, who has been following these developments. Mathematicians have held several workshops to unite these disparate camps, but the proof may take several more years to digest, Tao said. “We don’t have the manual for this magic tool yet.”

Computer scientists, however, have been quick to exploit the new techniques. Last year, for instance, two researchers parlayed these tools into a major leap forward in understanding the famously difficult traveling salesman problem. There are certain to be more such advances, said Assaf Naor, a mathematician at Princeton who works in areas related to the Kadison-Singer problem. “This is too profound to not have many more applications.”

A Common Problem

The question Richard Kadison and Isadore Singer posed in 1959 asks how much it is possible to learn about a “state” of a quantum system if you have complete information about that state in a special subsystem. Inspired by an informally worded comment by the legendary physicist Paul Dirac, their question builds on Werner Heisenberg’s uncertainty principle, which says that certain pairs of attributes, like the position and the momentum of a particle, cannot simultaneously be measured to arbitrary precision.
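For reference (this formula is standard and is not quoted in the article), the position-momentum form of the uncertainty principle bounds the product of the standard deviations of position and momentum:

\[ \Delta x \, \Delta p \ge \frac{\hbar}{2}, \]

where \(\hbar\) is the reduced Planck constant.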

Kadison and Singer wondered about subsystems that contain as many different attributes (or “observables”) as can compatibly be measured at the same time. If you have complete knowledge of the state of such a subsystem, they asked, can you deduce the state of the entire system?

In the case where the system you’re measuring is a particle that can move along a continuous line, Kadison and Singer showed that the answer is no: There can be many different quantum states that all look the same from the point of view of the observables you can simultaneously measure. “It is as if many different particles have exactly the same location simultaneously — in a sense, they are in parallel universes,” Kadison wrote by email, although he cautioned that it’s not yet clear whether such states can be realized physically.

Kadison and Singer’s result didn’t say what would happen if the space in which the particle lives is not a continuous line, but is instead some choppier version of the line — if space is “granular,” as Kadison put it. This is the question that came to be known as the Kadison-Singer problem.

Based on their work in the continuous setting, Kadison and Singer guessed that in this new setting the answer would again be that there are parallel universes. But they didn’t go so far as to state their guess as a conjecture — a wise move, in hindsight, since their gut instinct turned out to be wrong. “I’m happy I’ve been careful,” Kadison said.

Kadison and Singer — now at the University of Pennsylvania and the Massachusetts Institute of Technology (emeritus), respectively — posed their question at a moment when interest in the philosophical foundations of quantum mechanics was entering a renaissance. Although some physicists were promoting a “shut up and calculate” approach to the discipline, other, more mathematically inclined physicists pounced on the Kadison-Singer problem, which they understood as a question about C*-algebras, abstract structures that capture the algebraic properties not just of quantum systems but also of the random variables used in probability theory, the blocks of numbers called matrices, and regular numbers.

C*-algebras are an esoteric subject — “the most abstract nonsense that exists in mathematics,” in Casazza’s words. “Nobody outside the area knows much about it.” For the first two decades of the Kadison-Singer problem’s existence, it remained ensconced in this impenetrable realm.

Then in 1979, Joel Anderson, now an emeritus professor at Pennsylvania State University, popularized the problem by proving that it is equivalent to an easily stated question about when matrices can be broken down into simpler chunks. Matrices are the core objects in linear algebra, which is used to study mathematical phenomena whose behavior can be captured by lines, planes and higher-dimensional spaces. So suddenly, the Kadison-Singer problem was everywhere. Over the decades that followed, it emerged as the key problem in one field after another.
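For readers who want the precise statement, one standard form of Anderson's paving conjecture (paraphrased from the literature, not from the article) runs as follows: for every \(\epsilon > 0\) there is an integer \(r\) such that every \(n \times n\) complex matrix \(T\) with zero diagonal admits diagonal projections \(Q_1, \dots, Q_r\) with

\[ \sum_{j=1}^{r} Q_j = I \quad \text{and} \quad \|Q_j T Q_j\| \le \epsilon\,\|T\| \ \text{for each } j. \]

This is the sense in which a matrix can be "broken down into simpler chunks."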

Because there tended to be scant interaction between these disparate fields, no one realized just how ubiquitous the Kadison-Singer problem had become until Casazza found that it was equivalent to the most important problem in his own area of signal processing. The problem concerned whether the processing of a signal can be broken down into smaller, simpler parts. Casazza dived into the Kadison-Singer problem, and in 2005, he, Tremain and two co-authors wrote a paper demonstrating that it was equivalent to the biggest unsolved problems in a dozen areas of math and engineering. A solution to any one of these problems, the authors showed, would solve them all.

One of the many equivalent formulations they wrote about had been devised just a few years earlier by Nik Weaver, of Washington University in St. Louis. Weaver’s version distilled the problem down to a natural-sounding question about when it is possible to divide a collection of vectors into two groups that each point in roughly the same set of directions as the original collection. “It’s a beautiful problem that brought out the core combinatorial problem” at the heart of the Kadison-Singer question, Weaver said.
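In symbols, a commonly cited version of Weaver's conjecture (again paraphrased from standard statements rather than from the article) says there are universal constants \(\eta \ge 2\) and \(\theta > 0\) such that whenever vectors \(w_1, \dots, w_n\) satisfy \(\|w_i\| \le 1\) and

\[ \sum_{i=1}^{n} |\langle w_i, u \rangle|^2 = \eta \quad \text{for every unit vector } u, \]

the indices can be split into two groups \(S_1, S_2\) with

\[ \sum_{i \in S_j} |\langle w_i, u \rangle|^2 \le \eta - \theta \quad \text{for } j = 1, 2 \text{ and every unit vector } u, \]

so each half still "points in roughly the same set of directions" as the whole collection.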

So Weaver was surprised when — apart from the mention in Casazza’s survey and one other paper that expressed skepticism about his approach — his formulation seemed to meet with radio silence. He thought no one had noticed his paper, but in fact it had attracted the attention of just the right people to solve it.

Electrical Properties

When Spielman learned about Weaver’s conjecture in 2008, he knew it was his kind of problem. There’s a natural way to switch between networks and collections of vectors, and Spielman had spent the preceding several years building up a powerful new approach to networks by viewing them as physical objects. If a network is thought of as an electrical circuit, for example, then the amount of current that runs through a given edge (instead of finding alternate routes) provides a natural way to measure that edge’s importance in the network.
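As a concrete illustration of the electrical viewpoint (a minimal sketch of my own in Python, not code from Spielman's work), one standard measure of an edge's importance is its effective resistance, computed from the pseudoinverse of the graph Laplacian:

import numpy as np

def effective_resistances(n, edges):
    # Build the Laplacian L = D - A of an undirected graph on n nodes
    # (unit edge weights are an assumption for this sketch).
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1
        L[v, v] += 1
        L[u, v] -= 1
        L[v, u] -= 1
    Lp = np.linalg.pinv(L)  # Moore-Penrose pseudoinverse
    # Effective resistance of edge (u, v): (e_u - e_v)^T L^+ (e_u - e_v).
    return {(u, v): Lp[u, u] + Lp[v, v] - 2 * Lp[u, v] for u, v in edges}

# Example: a triangle (nodes 0, 1, 2) plus a pendant edge to node 3.
# The pendant edge carries all current between its endpoints, so its
# effective resistance is 1; the triangle edges share current, so theirs is 2/3.
print(effective_resistances(4, [(0, 1), (1, 2), (2, 0), (2, 3)]))

Edges with high effective resistance carry current that has few alternate routes, which matches the intuition about edge importance in the paragraph above.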

Spielman discovered Weaver’s conjecture after Kalai introduced him to another form of the Kadison-Singer problem, and he realized that it was nearly identical to a simple question about networks: . . .

Continue reading.

Written by LeisureGuy

24 November 2015 at 12:48 pm

Posted in Math, Science

Agriculture Linked to DNA Changes in Ancient Europe


Carl Zimmer reports in the NY Times:

The agricultural revolution was one of the most profound events in human history, leading to the rise of modern civilization. Now, in the first study of its kind, an international team of scientists has found that after agriculture arrived in Europe 8,500 years ago, people’s DNA underwent widespread changes, altering their height, digestion, immune system and skin color.

Researchers had found indirect clues of some of these alterations by studying the genomes of living Europeans. But the new study, they said, makes it possible to see the changes as they occurred over thousands of years.

“For decades we’ve been trying to figure out what happened in the past,” said Rasmus Nielsen, a geneticist at the University of California, Berkeley, who was not involved in the new study. “And now we have a time machine.”

Before the advent of studies of ancient DNA, scientists had relied mainly on bones and other physical remains to understand European history. The earliest bones of modern humans in Europe date to about 45,000 years ago, researchers have found.

Early Europeans lived as hunter-gatherers for over 35,000 years. About 8,500 years ago, farmers left their first mark in the archaeological record of the continent.

By studying living Europeans, scientists had already found evidence suggesting that their ancestors adapted to agriculture through natural selection. As tools to sequence DNA became more readily available, researchers even discovered some of the molecular underpinnings of these traits.

But these studies couldn’t help determine exactly when the changes occurred, or whether they resulted from natural selection or the migrations of people into Europe from other regions.

Scientists are now tackling these questions in a much more direct way, thanks to a rapidly growing supply of DNA from ancient skeletons. These studies have revealed that the DNA of Europeans today comes from three main sources. . .

Continue reading.

Written by LeisureGuy

23 November 2015 at 9:24 pm

Everyone loves tardigrades, but they’re weirder than we thought


I think it’s the cuteness of those eight stubby little legs. Victoria Turk writes in Motherboard:

I’m going to call it: tardigrades are the weirdest animal on the planet (and beyond).

Also known as water bears, the microscopic eight-legged creatures have been around for hundreds of millions of years, and are best known for being almost indestructible. They can go into a state of suspended animation and survive temperatures way below freezing and well above boiling, go without food and water for years, and have even been known to survive the vacuum and radiation of space. A new study published in PNAS adds another to the list of tardigrades’ extraordinary features: their genome contains an unprecedented proportion of foreign DNA. Lead author Thomas Boothby said the finding was “extremely surprising.”

The group of researchers based out of the University of North Carolina at Chapel Hill set out to sequence the genome of the tardigrade species Hypsibius dujardini, in the hope of gaining more insight into the unusual creature’s biology. What they found was that an unprecedented one-sixth of the tardigrade’s genome was not made of tardigrade DNA. It was composed of “foreign” DNA from a large range of completely different organisms—mainly bacteria, but also plants, fungi, and single-celled archaea. . .

Continue reading.

Written by LeisureGuy

23 November 2015 at 2:51 pm

Posted in Evolution, Science

The Information Theory of Life


Kevin Hartnett reports in Quanta:

There are few bigger — or harder — questions to tackle in science than the question of how life arose. We weren’t around when it happened, of course, and apart from the fact that life exists, there’s no evidence to suggest that life can come from anything besides prior life. Which presents a quandary.

Christoph Adami does not know how life got started, but he knows a lot of other things. His main expertise is in information theory, a branch of applied mathematics developed in the 1940s for understanding information transmissions over a wire. Since then, the field has found wide application, and few researchers have done more in that regard than Adami, who is a professor of physics and astronomy and also microbiology and molecular genetics at Michigan State University. He takes the analytical perspective provided by information theory and transplants it into a great range of disciplines, including microbiology, genetics, physics, astronomy and neuroscience. Lately, he’s been using it to pry open a statistical window onto the circumstances that might have existed at the moment life first clicked into place.

To do this, he begins with a mental leap: Life, he argues, should not be thought of as a chemical event. Instead, it should be thought of as information. The shift in perspective provides a tidy way in which to begin tackling a messy question. In the following interview, Adami defines information as “the ability to make predictions with a likelihood better than chance,” and he says we should think of the human genome — or the genome of any organism — as a repository of information about the world gathered in small bits over time through the process of evolution. The repository includes information on everything we could possibly need to know, such as how to convert sugar into energy, how to evade a predator on the savannah, and, most critically for evolution, how to reproduce or self-replicate.

This reconceptualization doesn’t by itself resolve the issue of how life got started, but it does provide a framework in which we can start to calculate the odds of life developing in the first place. Adami explains that a precondition for information is the existence of an alphabet, a set of pieces that, when assembled in the right order, expresses something meaningful. No one knows what that alphabet was at the time that inanimate molecules coupled up to produce the first bits of information. Using information theory, though, Adami tries to help chemists think about the distribution of molecules that would have had to be present at the beginning in order to make it even statistically plausible for life to arise by chance.

Quanta Magazine spoke with Adami about what information theory has to say about the origins of life. An edited and condensed version of the interview follows.

QUANTA MAGAZINE: How does the concept of information help us understand how life works?

CHRISTOPH ADAMI: Information is the currency of life. One definition of information is the ability to make predictions with a likelihood better than chance. That’s what any living organism needs to be able to do, because if you can do that, you’re surviving at a higher rate. [Lower organisms] make predictions that there’s carbon, water and sugar. Higher organisms make predictions about, for example, whether an organism is after you and you want to escape. Our DNA is an encyclopedia about the world we live in and how to survive in it.

Think of evolution as a process where information is flowing from the environment into the genome. The genome learns more about the environment, and with this information, the genome can make predictions about the state of the environment.
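In standard information-theoretic notation (my gloss, not Adami's wording), "making predictions better than chance" corresponds to positive mutual information between the genome \(G\) and the environment \(E\):

\[ I(G; E) = H(E) - H(E \mid G) > 0, \]

i.e., knowing the genome reduces the entropy (uncertainty) about the state of the environment.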

If the genome is a reflection of the world, doesn’t that make the information context specific?

Information in a sequence needs to be interpreted in its environment. Your DNA means nothing on Mars or underwater because underwater is not where you live. A sequence is information in context. A virus’s sequence in its context — its host — has enough information to replicate because it can take advantage of its environment.

What happens when the environment changes?

The first thing that happens is that stuff that was information about the environment isn’t information anymore. Cataclysmic change means the amount of information you have about the environment may have dropped. And because information is the currency of life, suddenly you’re not so fit anymore. That’s what happened with dinosaurs.

Once you start thinking about life as information, how does it change the way you think about the conditions under which life might have arisen? . . .

Continue reading.

Written by LeisureGuy

20 November 2015 at 3:43 pm

Posted in Evolution, Math, Science

The FBI’s forensic “science” has put many innocent people in prison for decades


Donald Gates, for example, spent 27 years in prison even though he was “stone-cold innocent” of the rape and murder of which he was accused—and of course the actual perpetrator was not caught (at least not for those crimes). You can read about it in this column by Radley Balko, who points out:

The case is as good an illustration as any that most fields of forensic “science” weren’t developed to find the truth but to aid police and prosecutors in convicting the person they already believe committed the crime. They aren’t neutral methods of analysis; they’re tools for the state. That doesn’t mean some fields don’t have some evidentiary value (though hair fiber analysis has very little). It just means that those that have some value should be considered and presented to juries for what they are. Too often, they’re presented as magical guilt/innocence divining rods.

There was also an informer who, in return for $1,300, provided testimony implicating the accused. Balko quotes from a report by Spencer Hsu, who notes:

Although it was not part of this month’s trial, Gates’s innocence triggered investigations that led to exonerations of four additional men in the District who had served up to 30 years for rape or murder since the 1980s based on flawed FBI forensic testimony about hairs.

The FBI in April acknowledged that for more than 20 years before 2000, nearly every member of an elite FBI forensic unit overreached by testifying to the near-certainty of hair matches without a scientific basis. Defendants are now being notified.

Written by LeisureGuy

20 November 2015 at 2:57 pm

Depression-Fighting SAD Lamps Aren’t Just For Your Winter Blues


Seasonal Affective Disorder affects quite a few people, and it’s interesting that using a bright lamp to combat it also helps with regular (non-seasonal) depression. Jordan Pearson reports at Motherboard:

There’s a familiar ritual for people suffering from seasonal affective disorder, or SAD, a very real condition that leaves people feeling depressed in the slushy depths of winter: you wake up, the world still dark and frigid, and you flip on a little lamp that tricks your brain into thinking you’re absorbing sunlight. It’s a treatment with some serious techno-dystopian vibes, yet research has shown for decades that it works.

But according to new research, those dorky little lamps aren’t only useful for people with SAD. In a study that tested the efficacy of SAD lightboxes alongside antidepressants on people with non-seasonal depression, researchers at the University of British Columbia concluded that those little lamps can help with regular old non-winter related clinical depression, too.

“We always think of seasonal depression as a different type of depression for all kinds of good reasons, so people haven’t considered light therapy as a way to treat non-seasonal depression,” Dr. Raymond Lam, the psychiatrist at the University of British Columbia who led the study, told me over the phone. “We thought it was time for a good study, so that’s what we did.”

Although some of the earliest studies on the effectiveness of SAD lamps involved people without SAD itself, these studies were small—just seven subjects, in some cases—and unsatisfactory, Lam said.

The study, published on Wednesday in JAMA Psychiatry, split 121 subjects into four test groups. The first group got the lamp with a placebo antidepressant, the second got the lamp with a real pill, the third group used a switched-off negative ion generator—effectively a placebo, since it’s hard to fake light—and an antidepressant, and the final group got a real negative ion generator with a fake pill.

Negative ion generators are machines that fill the air with electrically-charged air molecules, which supposedly have an effect that may help with depression. But for the purposes of this study, Lam said, they just had to look fancy and make a nice noise.

After eight weeks of treatment, subjects with the real lamp and fake pills reported feeling much better, while those with the lamps and real pills reported feeling the best out of all the groups. Sorry, folks with ion generators. . .

Continue reading.

Light boxes for treating SAD have come down a lot in price over the past several years.

Written by LeisureGuy

18 November 2015 at 12:51 pm

