Later On

A blog written for those whose interests more or less match mine.

Archive for November 3rd, 2019

How a Data Detective Exposed Suspicious Medical Trials


David Adam writes in Scientific American (a reprint of an article first published on July 23, 2019, in Nature):

If John Carlisle had a cat flap, scientific fraudsters might rest easier at night. Carlisle routinely rises at 4.30 A.M. to let out Wizard, the family pet. Then, unable to sleep, he reaches for his laptop and starts typing up data from published papers on clinical trials. Before his wife’s alarm clock sounds 90 minutes later, he has usually managed to fill a spreadsheet with the ages, weights and heights of hundreds of people—some of whom, he suspects, never actually existed.

By day, Carlisle is an anesthetist working for England’s National Health Service in the seaside town of Torquay. But in his spare time, he roots around the scientific record for suspect data in clinical research. Over the past decade, his sleuthing has included trials used to investigate a wide range of health issues, from the benefits of specific diets to guidelines for hospital treatment. It has led to hundreds of papers being retracted and corrected, because of both misconduct and mistakes. And it has helped to end the careers of some large-scale fakers: of the six scientists worldwide with the most retractions, three were brought down using variants of Carlisle’s data analyses.

“His technique has been shown to be incredibly useful,” says Paul Myles, director of anesthesia and perioperative medicine at the Alfred Hospital in Melbourne, Australia, who has worked with Carlisle to examine research papers containing dodgy statistics. “He’s used it to demonstrate some major examples of fraud.”

Carlisle’s statistical sideline is not popular with everyone. Critics argue that it has sometimes led to the questioning of papers that aren’t obviously flawed, resulting in unjustified suspicion.

But Carlisle believes that he is helping to protect patients, which is why he spends his spare time poring over others’ studies. “I do it because my curiosity motivates me to do so,” he says, not because of an overwhelming zeal to uncover wrongdoing: “It’s important not to become a crusader against misconduct.”

Together with the work of other researchers who doggedly check academic papers, his efforts suggest that the gatekeepers of science—journals and institutions—could be doing much more to spot mistakes. In medical trials, the kind that Carlisle focuses on, that can be a matter of life and death.

ANESTHETISTS BEHAVING BADLY

Torquay looks like any other traditional provincial English town, with pretty floral displays on the roundabouts and just enough pastel-colored cottages to catch the eye. Carlisle has lived in the area for 18 years and works at the town’s general hospital. In an empty operating theater, after a patient has just been stitched up and wheeled away, he explains how he began to look for faked data in medical research.

More than 10 years ago, Carlisle and other anesthesiologists began chattering about results published by a Japanese researcher, Yoshitaka Fujii. In a series of randomized controlled trials (RCTs), Fujii, who then worked at Toho University in Tokyo, claimed to have examined the impact of various medicines on preventing vomiting and nausea in patients after surgery. But the data looked too clean to be true. Carlisle, one among many concerned, decided to check the figures, using statistical tests to pick up unlikely patterns in the data. He showed in 2012 that, in many cases, the likelihood of the patterns having arisen by chance was “infinitesimally small.” Prompted in part by this analysis, journal editors asked Fujii’s present and former universities to investigate; Fujii was fired from Toho University in 2012 and had 183 of his papers retracted, an all-time record. Four years later, Carlisle co-published an analysis of results from another Japanese anesthesiologist, Yuhji Saitoh—a frequent co-author of Fujii’s—and demonstrated that his data were extremely suspicious, too. Saitoh currently has 53 retractions.

Other researchers soon cited Carlisle’s work in their own analyses, which used variants of his approach. In 2016, researchers in New Zealand and the United Kingdom, for example, reported problems in papers by Yoshihiro Sato, a bone researcher at a hospital in southern Japan. That ultimately led to 27 retractions, and 66 Sato-authored papers have been retracted in total.

Anesthesia had been rocked by several fraud scandals before Fujii and Saitoh’s cases—including that of German anesthetist Joachim Boldt, who has had more than 90 papers retracted. But Carlisle began to wonder whether only his own field was at fault. So he picked eight leading journals and, working in his spare moments, checked through thousands of randomized trials they had published.

In 2017, he published an analysis in the journal Anaesthesia stating that he had found suspect data in 90 of more than 5,000 trials published over 16 years. At least 10 of these papers have since been retracted and six corrected, including a high-profile study published in the New England Journal of Medicine (NEJM) on the health benefits of the Mediterranean diet. In that case, however, there was no suggestion of fraud: the authors had made a mistake in how they randomized participants. After the authors removed erroneous data, the paper was republished with similar conclusions.

Carlisle has kept going. This year, he warned about dozens of anesthesia studies by an Italian surgeon, Mario Schietroma at the University of L’Aquila in central Italy, saying that they were not a reliable basis for clinical practice. Myles, who worked on the report with Carlisle, had raised the alarm last year after spotting suspicious similarities in the raw data for control and patient groups in five of Schietroma’s papers.

The challenges to Schietroma’s claims have had an impact in hospitals around the globe. The World Health Organization cited Schietroma’s work when, in 2016, it issued a recommendation that anesthetists should routinely boost the oxygen levels they deliver to patients during and after surgery, to help reduce infection. That was a controversial call: anesthetists know that in some procedures, too much oxygen can be associated with an increased risk of complications—and the recommendations would have meant hospitals in poorer countries spending more of their budgets on expensive bottled oxygen, Myles says.

The five papers Myles warned about were quickly retracted, and the WHO revised its recommendation from “strong” to “conditional,” meaning that clinicians have more freedom to make different choices for various patients. Schietroma says his calculations were assessed by an independent statistician and through peer review, and that he purposely selected similar groups of patients, so it’s not surprising if the data closely match. He also says he lost raw data and documents related to the trials when L’Aquila was struck by an earthquake in 2009. A spokesperson for the university says it has left enquiries to “the competent investigating bodies,” but did not explain which bodies those were or whether any investigations were under way.

SPOTTING UNNATURAL DATA

The essence of Carlisle’s approach is nothing new, he says: it’s simply that real-life data have natural patterns that artificial data struggle to replicate. Such phenomena were spotted in the 1880s, were popularized by the U.S. electrical engineer and physicist Frank Benford in 1938, and have since been used by many statistical checkers. Political scientists, for example, have long used a similar approach to analyze survey data—a technique they call Stouffer’s method after sociologist Samuel Stouffer, who popularized it in the 1950s.

In the case of RCTs, Carlisle looks at the baseline measurements that describe the characteristics of the groups of volunteers in the trial, typically the control group and the intervention group. These include height, weight and relevant physiological characteristics—usually described in the first table of a paper.

In a genuine RCT, volunteers are randomly allocated to the control or (one or more) intervention groups. As a result, the mean and the standard deviation for each characteristic should be about the same—but not identical. That would be suspiciously perfect.

Carlisle first constructs a P value for each pairing: a statistical measurement of how likely the reported baseline data points are if one assumes that volunteers were, in fact, randomly allocated to each group. He then pools all these P values to get a sense of how random the measurements are overall. A combined P value that looks too high suggests that the data are suspiciously well-balanced; too low and it could show that the patients have been randomized incorrectly.
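To make the pooling step concrete, here is a rough sketch in Python of the kind of check described above. It is not Carlisle’s published method, and the baseline table in the code is invented purely for illustration: a p-value is reconstructed for each baseline variable from the reported means, standard deviations and group sizes, and the p-values are then pooled with the Stouffer-style combination mentioned earlier, with a result very close to 0 or 1 serving as a flag for a closer look.

```python
# Rough sketch of the idea described above -- NOT Carlisle's published method,
# and all numbers below are invented for illustration. For each baseline
# variable we reconstruct a p-value from the reported summary statistics,
# then pool the p-values (Stouffer-style) and look for a combined value
# suspiciously close to 0 or 1.

import numpy as np
from scipy import stats

def baseline_p(mean1, sd1, n1, mean2, sd2, n2):
    """Two-sided p-value for the difference between two reported group means."""
    _, p = stats.ttest_ind_from_stats(mean1, sd1, n1, mean2, sd2, n2,
                                      equal_var=False)
    return p

def pooled_p(p_values):
    """Pool per-variable p-values with Stouffer's method."""
    z = stats.norm.isf(np.asarray(p_values))   # convert each p to a z-score
    return stats.norm.sf(z.sum() / np.sqrt(len(z)))

# Hypothetical baseline table: (mean, SD, n) for control vs. intervention.
table = [
    ("age (years)", (54.1, 8.2, 100), (54.2, 8.1, 100)),
    ("weight (kg)", (78.3, 11.9, 100), (78.4, 12.0, 100)),
    ("height (cm)", (171.2, 9.4, 100), (171.1, 9.5, 100)),
]

ps = [baseline_p(*ctrl, *interv) for _, ctrl, interv in table]
print("per-variable p-values:", [round(p, 3) for p in ps])
print("pooled p-value:", round(pooled_p(ps), 4))
# A pooled value very close to 1 means the groups are "too similar" for genuine
# randomization; very close to 0 suggests the randomization itself went wrong.
```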

The method isn’t foolproof. The statistical checks demand that the variables in the table are truly independent—whereas in reality, they often aren’t. (Height and weight are linked, for example.) In practice, this means that some papers that are flagged up as incorrect actually aren’t—and for that reason, some statisticians have criticized Carlisle’s work.

But Carlisle says that applying his method is a good first step, and one that can highlight studies that might deserve a closer look, such as requesting the individual patient data behind the paper.

“It can put up a red flag. Or an amber flag, or five or 10 red flags to say this is highly unlikely to be real data,” says Myles.

MISTAKES VERSUS MISCREANTS

Carlisle says that he is careful not to attribute any cause to the possible problems he identifies. In 2017, however, when Carlisle’s analysis of 5,000 trials appeared in Anaesthesia—of which he is an editor—an accompanying editorial by anesthetists John Loadsman and Tim McCulloch at the University of Sydney in Australia took a more provocative line. . .

Continue reading.

Written by LeisureGuy

3 November 2019 at 2:39 pm

Posted in Daily life, Science

Coffee and health


Written by LeisureGuy

3 November 2019 at 12:30 pm

The Surprising Benefits of Serving Prisoners Food That Meets Their Nutritional Requirements


Madison Pauly writes in Mother Jones:

Jose Villarreal remembers going to bed hungry most nights during his 10 years in solitary confinement at California’s Pelican Bay State Prison. Dinner might consist of mashed potatoes, bread, and a slice of processed meat—never with salt, and always cold. Shouting through air vents between their cells, his neighbors would count the number of vegetables on their trays: eight string beans one day, 26 peas the next. “It became almost a joke,” Villarreal recalls.

This low-nutrient fare is typical of many corrections systems, which calibrate menus to meet budget demands and minimum calorie counts. Prices per meal range from about $1.30 to as low as the 15 cents that Arizona Sheriff Joe Arpaio once bragged about spending. The high-starch meals are often served up by scandal-plagued private companies. Meats are typically processed, and fresh fruit is rare, in part because it can be turned into booze.

To supplement tasteless grub, prisoners turn to the commissary, says Kimberly Dong, a Tufts University assistant professor researching prisoner health. This behind-bars bodega stocks items like Fritos and ramen, which inmates mix together to concoct dishes such as “spread,” a San Francisco County Jail specialty often made from noodles topped with hot chips, cheese sauce, and chili beans. “It’s like a carrot and a stick,” Villarreal says of the choice between commissary and facility-provided food. “But even the carrot is dipped in poison.”

This uninspiring diet is likely taking a toll on inmates’ health. It’s not just that prisoners are 6.4 times more likely to be sickened from spoiled or contaminated food than people on the outside, as the Centers for Disease Control and Prevention determined in 2017. Prison food can damage their long-term wellness. According to the Bureau of Justice Statistics, about 44 percent of state and federal prisoners have experienced chronic disease, compared with 31 percent of the general population, even after controlling for age, sex, and race. Chronic illnesses common among prisoners—high blood pressure, diabetes, and heart problems—are linked to obesity, which is in turn associated with highly processed, high-carb jailhouse fare. And because inmates disproportionately come from lower socioeconomic backgrounds, they’re already more likely to experience chronic disease than the general public, so prison grub can exacerbate preexisting conditions.

Corrections facilities often cut corners on food in an effort to save money. But this may cost taxpayers more in the long run. According to a 2017 analysis by the Prison Policy Initiative, after staffing, health care is the public prison system’s largest expense, setting government agencies back $12.3 billion a year. Outside prisons, there’s ample evidence that improving diets can shrink health care spending: One study of food stamp recipients found that incentivizing purchases of produce while reducing soda consumption could save more than $4.3 billion in health care expenses over five years. Extrapolating from these numbers, similar changes for America’s 2.3 million prisoners could save taxpayers more than $500 million over the same time period.

That’s not counting the added savings on security, since prisoners often protest when they notice food quality deteriorating. In the last few years, dietary discontent helped spark riots in at least three states. As part of the national prison strike that started in August, prisoners in North Carolina hung a makeshift banner demanding better food. Public officials sometimes take the hint: In February, prison officials in Washington ended a nearly 1,700-person food strike at a medium-security prison by agreeing to replace sugary breakfast muffins with hard-boiled eggs. Last year, Michigan upped its prison food spending by $13.7 million to replace maggot-ridden meals provided by Trinity Services Group.

But nutritious chow doesn’t always have to cost more money. While harsh farm labor was once common at rural lockups, prison agricultural programs increasingly involve smaller-scale gardens that enable inmates to consume produce they’ve grown. Such programs have also been found to improve mental health, reduce recidivism rates, and provide job skills. One initiative, run by the Oregon-based nonprofit Growing Gardens, has graduated more than 900 inmates with gardening certificates.

A year and a half after his release, Villarreal still isn’t sure what is medically wrong with him. . .

Continue reading.

I certainly think a class-action suit to require prisons and jails to serve meals that meet nutritional requirements should succeed, whether or not it actually would. If society wants to lock people up, then society must be required to provide adequate food, shelter, and clothing. That might seem expensive, especially for mass-incarceration nations like the US (and no other nation is even close), so it would then make sense to see what steps can be taken to prevent locking people up in the first place by ensuring they have better opportunities for productive (and law-abiding) lives.

Written by LeisureGuy

3 November 2019 at 12:09 pm
