Archive for the ‘Science’ Category
Katherine Hobson reports at NPR:
Placebos can’t cure diseases, but research suggests that they seem to bring some people relief from subjective symptoms, such as pain, nausea, anxiety and fatigue.
But there’s a reason your doctor isn’t giving you a sugar pill and telling you it’s a new wonder drug. The thinking has been that you need to actually believe that you’re taking a real drug in order to see any benefits. And a doctor intentionally deceiving a patient is an ethical no-no.
So placebos have pretty much been tossed in the “garbage pail” of clinical practice, says Ted Kaptchuk, director of the Program for Placebo Studies and the Therapeutic Encounter at Beth Israel Deaconess Medical Center. In an attempt to make them more useful, he has been studying whether people might see a benefit from a placebo even if they knew it was a placebo, with no active ingredients. An earlier study found that so-called “open-label” or “honest” placebos improved symptoms among people with irritable bowel syndrome.
And Kaptchuk and his colleagues found the same effect among people with garden-variety lower back pain, the most common kind of pain reported by American adults.
The study included 83 people in Portugal, all of whom had back pain that wasn’t caused by cancer, fractures, infections or other serious conditions. All the participants were told that the placebo was an inactive substance containing no medication. They were told that the body can automatically respond to placebos, that a positive attitude can help but isn’t necessary and that it was important to take the pills twice a day for the full three weeks.
Then half the group was assigned to simply continue their usual treatment regimens, which included things like non-opioid painkillers, rest and alternative or complementary remedies. (They were also given the opportunity to use the placebo pills at the end of the study, if they chose.) The other half were assigned to continue with their usual treatment, but to also take the placebo.
Participants rated their pain levels and their difficulty in performing daily activities, like getting dressed or climbing stairs, at the beginning and the end of the three-week study. And researchers found that people who received the placebo saw their scores for both usual and maximum pain levels drop by 30 percent, compared to 9 percent and 16 percent declines for the control group. The placebo group also reported a 29 percent reduction in their score for difficulty in performing daily activities, while the control group saw no change. (A higher score on that test indicated more difficulties.)
That translates to a tangible reduction in pain, says Kaptchuk, who is also a professor at Harvard Medical School. “Patients would feel the difference and physicians would notice it,” he says. . .
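The percent reductions quoted above are straightforward relative changes in the pain scores. A minimal sketch of the calculation (the baseline and follow-up scores below are hypothetical, not figures from the study):

```python
def percent_reduction(baseline, followup):
    """Relative drop from a baseline score to a follow-up score, in percent."""
    return (baseline - followup) / baseline * 100

# Hypothetical scores on a 0-10 pain scale (not the study's data):
# a drop from 6.0 to 4.2 is the kind of ~30 percent reduction reported
# for the placebo group's usual-pain score.
print(round(percent_reduction(6.0, 4.2), 1))  # 30.0
```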
Emily Singer reports in Quanta:
Peel away the layers of a house — the plastered walls, the slate roof, the hardwood floors — and you’re left with a frame, the skeletal form that makes up the core of any structure. Can we do the same with life? Can scientists pare down the layers of complexity to reveal the essence of life, the foundation on which biology is built?
That’s what Craig Venter and his collaborators have attempted to do in a new study published today in the journal Science. Venter’s team painstakingly whittled down the genome of Mycoplasma mycoides, a bacterium that lives in cattle, to reveal a bare-bones set of genetic instructions capable of making life. The result is a tiny organism named syn3.0 that contains just 473 genes. (By comparison, E. coli has about 4,000 to 5,000 genes, and humans have roughly 20,000.)
Yet within those 473 genes lies a gaping hole. Scientists have little idea what roughly a third of them do. Rather than illuminating the essential components of life, syn3.0 has revealed how much we have left to learn about the very basics of biology.
“To me, the most interesting thing is what it tells us about what we don’t know,” said Jack Szostak, a biochemist at Harvard University who was not involved in the study. “So many genes of unknown function seem to be essential.”
“We were totally surprised and shocked,” said Venter, a biologist who heads the J. Craig Venter Institute in La Jolla, Calif., and Rockville, Md., and is most famous for his role in mapping the human genome. The researchers had expected some number of unknown genes in the mix, perhaps totaling five to 10 percent of the genome. “But this is truly a stunning number,” he said.
The seed for Venter’s quest was planted in 1995, when his team deciphered the genome of Mycoplasma genitalium, a microbe that lives in the human urinary tract. When Venter’s researchers started work on this new project, they chose M. genitalium — the second complete bacterial genome to be sequenced — expressly for its diminutive genome size. With 517 genes and 580,000 DNA letters, it has one of the smallest known genomes in a self-replicating organism. (Some symbiotic microbes can survive with just 100-odd genes, but they rely on resources from their host to survive.)
M. genitalium’s trim package of DNA raised the question: What is the smallest number of genes a cell could possess? “We wanted to know the basic gene components of life,” Venter said. “It seemed like a great idea 20 years ago — we had no idea it would be a 20-year process to get here.”
Venter and his collaborators originally set out to design a stripped-down genome based on what scientists knew about biology. They would start with genes involved in the most critical processes of the cell, such as copying and translating DNA, and build from there.
But before they could create this streamlined version of life, the researchers had to figure out how to design and build genomes from scratch. Rather than editing DNA in a living organism, as most researchers did, they wanted to exert greater control — to plan their genome on a computer and then synthesize the DNA in test tubes.
In 2008, Venter and his collaborator Hamilton Smith created the first synthetic bacterial genome by building a modified version of M. genitalium’s DNA. Then in 2010 they made the first self-replicating synthetic organism, manufacturing a version of M. mycoides’ genome and then transplanting it into a different Mycoplasma species. The synthetic genome took over the cell, replacing the native operating system with a human-made version. The synthetic M. mycoides genome was mostly identical to the natural version, save for a few genetic watermarks — researchers added their names and a few famous quotes, including a slightly garbled version of Richard Feynman’s assertion, “What I cannot create, I do not understand.”
With the right tools finally in hand, the researchers designed a set of genetic blueprints for their minimal cell and then tried to build them. Yet “not one design worked,” Venter said. He saw their repeated failures as a rebuke for their hubris. Does modern science have sufficient knowledge of basic biological principles to build a cell? “The answer was a resounding no,” he said.
So the team took a different and more labor-intensive tack, replacing the design approach with trial and error. They disrupted M. mycoides’ genes, determining which were essential for the bacteria to survive. They erased the extraneous genes to create syn3.0, which has a smaller genome than any independently replicating organism discovered on Earth to date.
What’s left after trimming the genetic fat? . . .
Take a look at this very interesting article in the New Yorker by Maria Konnikova. From the article:
Together with one of Ericsson’s own students, Len Hill, they decided to tackle the golf question head-on. Hambrick spent weeks tracking down data for P.G.A. tour stats and running analyses to determine how the pros reached their level of success. The work continued when he returned to Atlanta, and even went on into the first years of his professorship at Michigan State University. But the analyses weren’t turning out quite as expected—training was not explaining nearly as much as it should. So, while the work languished in an unpublished state, Hambrick began to focus more and more on the other possible components of expert accomplishment. Of course, training was important—but how important? “I started to ask, Well, wait a second, can these strong claims about the primacy of practice actually hold up—is there the evidence to back it up?” The more he researched, the more he concluded that the answer was no. No matter how much he had practiced as a teen-ager, he would never have reached the P.G.A. tour. Of course, he’d known that all along, on some level—after all, he quit golf. People do have natural ceilings to their talent in any given area, and after a certain point their success arises from things other than deliberate practice.
In one study, for instance, Hambrick looked at pianists and measured their working memory, or the ability to keep chunks of information in mind and accessible for short periods of time. In the past, working-memory capacity has been found to be heritable. In his sample, it predicted success even when you accounted for the effects of practice; pianists with better working memory were better at sight reading—and increased practice did not alter the effect. When he looked back to one of the most frequently studied groups in expertise research, chess players, he found that, in addition to working or short-term memory, three more components of cognitive ability—fluid reasoning, comprehension knowledge, and processing speed, all abilities that are, to some extent, heritable—were related to performance. This was especially true of younger and less experienced players. If you’re naturally better, you don’t have to practice quite as much to get good.
So how much did practice actually explain? In a 2014 meta-analysis that looked specifically at the relationship between deliberate practice and performance in music, games like chess, sports, education, and other professions, Hambrick and his team found a relationship that was even more complex than they had expected. For some things, like games, practice explained about a quarter of variance in expertise. For music and sports, the explanatory power accounted for about a fifth. But for education and professions like computer science, military-aircraft piloting, and sales, the effect ranged from small to tiny. For all of these professions, you obviously need to practice, but natural abilities matter more.
What’s more, the explanatory power of practice fell even further when Hambrick took exact level of expertise into account. In sports—one of the areas in which deliberate practice seems to make the most difference—it turned out that the more advanced the athlete, the less of a role practice plays. Training an average athlete for a set number of hours yields far more results than training an élite athlete, which, in turn, yields greater results than training a super-élite athlete. Put differently, someone like me is going to improve a great deal with even a few hundred hours of training. But within an Olympic team tiny differences in performance are unlikely to be the result of training: these athletes train together, with the same coach, day in and day out. Those milliseconds come from somewhere else. Some may be due to the fact that genetic differences can account for some of the response to training. At Stanford’s ELITE study, which looks at the most accomplished athletes in the world, Euan Ashley, a professor of medicine and genetics, is studying how an Olympian’s body may respond differently to a given training regimen. Some changes are due to genetic variants that may affect blood transport or oxygen uptake or fat metabolism, or any number of other factors. Some are due to sheer luck—How much sleep did you get? How are you feeling? And some, of course, are due to hours of training. But at the top of the top of the top, the power of additional training falls off sharply.
So where else, exactly, do performance differences come from? . . .
The article suggests some answers to that question.
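The "variance explained" figures quoted above are squared correlation coefficients: a correlation of r between practice and performance means practice accounts for r² of the variance. A minimal illustration (the r values below are made up, not the meta-analysis's estimates):

```python
def variance_explained(r):
    """Fraction of outcome variance explained by a predictor with correlation r."""
    return r ** 2

# A practice-performance correlation of 0.5 would explain a quarter of
# the variance (as reported for games); 0.45 would explain about a fifth.
print(variance_explained(0.5))             # 0.25
print(round(variance_explained(0.45), 2))  # 0.2
```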
First, watch this brief (less than 3 minutes) video; then read this article by Madison Margolin in Motherboard.
Veronique Greenwood writes in Quanta:
It’s a curious fact of biology: In yeast, only one in five genes is essential. If any of the approximately 1,200 critical genes are destroyed (out of 6,000), the result is death. Remove one of the others, and the yeast soldiers on.
The same is not always true, however, if a pair of nonessential genes is removed — sometimes, death comes quickly. In these cases, it’s likely that the genes have similar roles. They might both take out the cell’s garbage, for instance, or fix damaged DNA. The loss of one might not be deadly — the other could pick up the slack. But the loss of both is catastrophic.
Can we use what happens when a pair of genes is destroyed to find out their function? This is the question that Charles Boone and Brenda Andrews, biologists at the University of Toronto, began to ask themselves about 17 years ago. If you know what one gene is doing in the cell, and destroying it kills the cell only if another, more mysterious gene goes too — can that give you clues to what the mystery gene does?
To answer the question, they began to orchestrate a precise campaign to destroy, two by two, all the genes in yeast. Using a fleet of yeast-growing robots, they created approximately 23 million strains of yeast, each effectively missing a pair of genes. By watching to see whether the yeast lived, died or grew sickly, the researchers generated data about the existence of relationships between the genes.
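The strain count reflects the quadratic growth of a pairwise screen. A back-of-the-envelope check (6,000 is the approximate yeast gene count from the article; the study's roughly 23 million strains also reflect multiple mutant alleles per gene, so the raw pair count below is only a lower bound):

```python
import math

genes = 6000  # approximate number of genes in yeast
pairs = math.comb(genes, 2)  # unordered pairs of distinct genes
print(pairs)  # 17997000, on the order of the ~23 million strains reported
```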
Now Boone, Andrews and a large team of collaborators have published in Science a sprawling report on the nearly two-decade-long set of experiments. In all, they found 550,000 pairs that, when removed, result in sickness or death. This network of genetic connections reveals a previously hidden scaffolding that underlies the operation of the cell. “The complete picture,” Boone said, “clearly shows a beautiful hierarchical structure.”
Over here are the genes involved in taking out the cell’s garbage, and over there are the genes responsible for its metabolism. Zoom out from one cluster of genes, and you’ll find the ones involved in the larger process the cluster is nested in. Zoom out from those and you’ll find all the ones that function alongside them in the same compartment of the cell. There’s something vertiginous in this view of life, a feeling that all the layers of complexity that let the organism thrive are there to look through, just as they were laid down by evolution.
As beautiful as the bird’s-eye view of the cell is, this work goes beyond biological voyeurism. This information can tell us about the evolution of the cell and, potentially, about how things go wrong in disease.
Using maps of interactions between genes or proteins is a popular approach to understanding the cell these days. Many researchers, looking at organisms from yeast to worms to humans, are building networks made up of proteins that attach to each other or genes that regulate each other. But the scale of Boone and Andrews’ effort sets it apart. In addition, their method can uncover connections that can’t be made by other tests, like those that focus on proteins that physically attach to one another. “It’s really a magisterial undertaking,” said David Botstein, the chief scientific officer of Google’s anti-aging startup Calico and a pioneer of genome mapping. When Boone and Andrews’ goal of knocking out all possible pairs of genes was floated years ago, “people thought, well, it’s just insane!” recalled Marian Walhout, a systems biologist at the University of Massachusetts Medical School. Even today, with advances in technology, it’s breathtaking, she said.
With the new information and the website where it can be navigated, researchers will be able to look up the genes they study and perhaps find that they have connections that have never been noticed before. “That utility is going to be, I predict, one of the major uses of the paper,” Botstein said. Earlier this month the yeast biochemist Yoshinori Ohsumi was awarded the Nobel Prize in physiology or medicine for his work on autophagy, the programmed destruction of pieces of the cell. “If he were doing his work now, he could go look at this data and see which genes genetically interact with the autophagy genes, and make huge progress much more quickly,” Walhout said.
For those of us who are not scientists, the research also provides an interesting reminder that . . .
I thought everyone knew not to refrigerate fresh tomatoes, but maybe the message hasn’t reached everyone yet. Joanna Klein reports in the NY Times:
The tomato hitching a ride home in your grocery bag today is not the tomato it used to be. No matter if you bought plum, cherry or heirloom, if you wanted the tastiest tomato, you should have picked it yourself and eaten it immediately.
That’s because a tomato’s flavor — made up of sugars, acids and chemicals called volatiles — degrades as soon as it’s picked from the vine. There’s only one thing you can do now: Keep it out of the fridge.
Researchers at the University of Florida report, in a study published Monday in Proceedings of the National Academy of Sciences, that when tomatoes are stored at the temperatures kept in most refrigerators, irreversible genetic changes take place that erase some of their flavors forever.
Harry J. Klee, a professor of horticultural sciences who led the study, and his colleagues took two varieties of tomatoes — an heirloom and a more common modern variety — and stored them at 41 degrees Fahrenheit before letting them recover at room temperature (68 degrees Fahrenheit). When they looked at what happened inside the tomatoes in cold temperatures, Dr. Klee said the subtropical fruit went into shock, producing especially damaging changes after a week of storage. After the tomatoes were allowed to warm up, even for a day, some of the genes that create their flavor volatiles had turned off and stayed off.
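For readers who think in Celsius, the storage temperatures convert with the standard Fahrenheit-to-Celsius formula:

```python
def f_to_c(fahrenheit):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (fahrenheit - 32) * 5 / 9

print(f_to_c(41))  # 5.0 degrees C, the cold-storage temperature in the study
print(f_to_c(68))  # 20.0 degrees C, the room-temperature recovery condition
```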
It’s like a symphony: “Remove the violins and the woodwinds,” Dr. Klee wrote in an email. “You still have noise, but it’s not the same. Add back just the violins and it still isn’t right. You need that orchestra of 30 or more chemicals in the right balance to give you a good tomato.”
When you can get fresh tomatoes, Dr. Klee recommends storing them at room temperature, to preserve their flavor, and eating them within a week of bringing them home. If you see your grocer storing them at temperatures that are too cold, tell them not to, he says. . .