Archive for the ‘Health’ Category
Kaleigh Rogers reports at Motherboard on a problem due totally to marijuana being illegal instead of being legal, taxed, and regulated:
It goes without saying that growing weed is a little different from growing other kinds of crops. I mean, I don’t suspect vegetable farmers lose much sleep worrying about mischievous teens sneaking into their fields at night to grab fistfuls of organic kale (maybe hipster teens). But there’s one area where the difference between marijuana and other crops is particularly stark: pesticides, and it has both growers and consumers concerned.
For every other crop grown in the US, the chemicals used on them (like pesticides, herbicides, and fungicides) are carefully monitored and restricted by the Department of Agriculture and the Environmental Protection Agency. There are different limits set for what kind of pesticides can be used and what is an acceptable level of chemicals that can be left behind on a crop (crops we eat, like tomatoes, are treated differently than crops we use for other purposes, like cotton).
But because marijuana is still illegal at the federal level, there are no protocols for pesticides when it comes to growing weed. From the federal government’s point of view, you shouldn’t be using any pesticides on cannabis because you shouldn’t be growing cannabis in the first place.
This has left growers with limited resources for trying to determine the best way to keep their crops healthy and their customers safe.
“Until very recently, it was the wild west: everyone was using whatever they wanted to, whatever they heard about on the internet,” said Whitney Cranshaw, a professor of entomology at the University of Colorado who studies pest management for crops. “Some were appropriate, others were inappropriate, but there was no direction from the feds, no direction from the state, no direction from anybody. So they just did what they thought was right.”
Recently, states where it’s legal to grow and sell medical or recreational marijuana have started rolling out recommendations for growers. In May, Colorado’s Department of Agriculture released a list of pesticides and fungicides that cannabis growers can use. Washington state followed suit earlier this month. But the lists are limited—they mostly focus on natural pesticides like cinnamon oil and garlic—and don’t provide a lot of info about the potential long-term effects of synthetic pesticides on a crop that isn’t just ingested, but inhaled.
“You can consume a large amount of pesticides from the plant by smoking it,” said Jeffrey Raber, a chemistry PhD who has studied the effects of pesticides on cannabis with his lab The WercShop. In 2013, The WercShop published a peer-reviewed study on the effects of pesticides on marijuana and found that up to 70 percent of pesticide residues on pot could be ingested through smoking. Aside from the high rate, Raber pointed out that inhaling a chemical is very different from eating it.
“Usually the safety limits for a chemical on an inhalable substance are about ten times greater because they feel it’s that much more sensitive,” Raber said. “You don’t have stomach acid and your liver coming at things first. When you inhale things, it goes directly into your bloodstream. That’s a very different beast.”
The easy solution would seem to be looking to the pesticide restrictions on tobacco. People inhale tobacco the same way they inhale marijuana, so if a pesticide is safe to use on tobacco it must be safe for growing weed, right? Not quite, Raber said. Turns out the EPA has never been all that strict with tobacco regulations: research has shown the tobacco industry lobbies hard to keep its favored pesticides legal, and the list of pesticides commonly used on tobacco is fairly lengthy. Raber said at the end of the day, tobacco is getting mixed up with dozens of other nasty chemicals before it’s rolled into a cigarette. If you’re getting sick from a cigarette, it’s probably not because of a little bit of residual pesticide on the tobacco leaf.
And besides, Raber pointed out that tobacco, though also smoked, is a pretty different product than marijuana. While pot is often prescribed for people going through cancer treatments like chemotherapy to help ease pain and curb nausea, cigarettes are pretty much universally considered a bad idea when you’re going through chemo.
So if growers can’t look to the government and they can’t look to other crops as an example, what’s a modern day grow-op to do? . . .
Very interesting article in Salon by Laura Miller. The whole thing’s worth reading, but note this:
. . . This conception of addiction as a biological phenomenon seemed to be endorsed over the past 20 years as new technologies have allowed neuroscientists to measure the human brain and its activities in ever more telling detail. Sure enough, the brains of addicts are physically different — sometimes strikingly so — from the brains of average people. But neuroscience giveth and now neuroscience taketh away. The recovery movement and rehab industry (two separate things, although the latter often employs the techniques of the former) have always had their critics, but lately some of the most vocal have been the neuroscientists whose findings once lent them credibility.
One of those neuroscientists is Marc Lewis, a psychologist, former addict, and author of the new book “The Biology of Desire: Why Addiction is Not a Disease.” Lewis’s argument is actually fairly simple: The disease theory, and the science sometimes used to support it, fail to take into account the plasticity of the human brain. Of course, “the brain changes with addiction,” he writes. “But the way it changes has to do with learning and development — not disease.” All significant and repeated experiences change the brain; adaptability and habit are the brain’s secret weapons. The changes wrought by addiction are not, however, permanent, and while they are dangerous, they’re not abnormal. Through a combination of a difficult emotional history, bad luck and the ordinary operations of the brain itself, an addict is someone whose brain has been transformed, but also someone who can be pushed further along the road toward healthy development. (Lewis doesn’t like the term “recovery” because it implies a return to the addict’s state before the addiction took hold.)
“The Biology of Desire” is grouped around several case studies, each one illustrating a unique path to dependency. A striving Australian entrepreneur becomes caught up in the “clarity, power and potential” he feels after smoking meth, along with his ability to work long hours while on the drug. A social worker who behaves selflessly in her job and marriage constructs a defiant, selfish, secret life around stealing and swallowing prescription opiates. A shy Irishman who started drinking as a way to relax in social situations slowly comes to see social situations as an occasion to drink and then drinking as a reason to hole up in his apartment for days on end.
Each of these people, Lewis argues, had a particular “emotional wound” the substance helped them handle, but once they started using it, the habit itself eventually became self-perpetuating and in most cases ultimately served to deepen the wound. Each case study focuses on a different part of the brain involved in addiction and illustrates how the function of each part — desire, emotion, impulse, automatic behavior — becomes shackled to a single goal: consuming the addictive substance. The brain is built to learn and change, Lewis points out, but it’s also built to form pathways for repetitive behavior, everything from brushing your teeth to stomping on the brake pedal, so that you don’t have to think about everything you do consciously. The brain is self-organizing. Those are all good properties, but addiction shanghais them for a bad cause.
As Lewis sees it, addiction really is habit; we just don’t appreciate how deeply habit can be engraved on the brain itself. “Repeated (motivating) experiences” — i.e., the sensation of having one’s worries wafted away by the bliss of heroin — “produce brain changes that define future experiences… So getting drunk a lot will sculpt the synapses that determine future drinking patterns.” More and more experiences and activities get looped into the addiction experience and trigger cravings and expectations like the bells that made Pavlov’s dogs salivate, from the walk home past a favorite bar to the rituals of shooting up. The world becomes a host of signs all pointing you in the same direction and activating powerful unconscious urges to follow them. At a certain point, the addictive behavior becomes compulsive, seemingly as irresistibly automatic as a reflex. You may not even want the drug anymore, but you’ve forgotten how to do anything else besides seek it out and take it. . .
An interesting article from Eliza Barclay at NPR. Note that an oil-based dressing does as well as an egg in a green salad, but certainly a poached egg atop a salad—or, less ambitiously, a boiled egg or two sliced into the salad—is more interesting.
I like that she explains the why—e.g., red bell pepper with beans because the vitamin C the pepper adds helps. That lets one decide to use, say, orange sections instead in a bean salad.
What are the makings of a great salad? You need fresh greens, of course, and then a layer of colorful vegetables like tomatoes and carrots.
That’s a good start. But to help the body absorb more of the nutrients packed into this medley, you may want to add something else: a cooked egg.
A small study published in May in The American Journal of Clinical Nutrition concludes that adding eggs to salads makes it easier to absorb the carotenoids in the raw vegetables. Carotenoids are the yellowish-red pigments that give carrots and tomatoes — and lots of other fruits and vegetables — their color. Two famous ones are beta carotene and lycopene. In addition to giving us those pretty colors, they’re also beneficial phytonutrients that help fight inflammation.
For the study, the researchers gave 16 participants raw mixed-vegetable salad with no eggs, a salad with one and a half eggs and a salad with three eggs. They found that the absorption of carotenoids was 3.8-fold higher when the salad included three eggs compared to no eggs.
Now, we should point out that the study was funded by a grant from the American Egg Board’s Egg Nutrition Center, which may raise eyebrows. But the scientists at Purdue University who carried out the study say they worked independently. And the findings hold up, since the scientific mechanism behind this phenomenon is well-documented in other studies.
It’s the fat in the egg yolk that is responsible for upping the nutrient intake. And, as we’ve reported, oil-based salad dressing helps accomplish the same goal.
The dynamic duo of eggs and carrots (or any other vegetable or fruit high in carotenoids) got us wondering about other food power couples. Turns out, they’re not so hard to find.
“The impact of consuming one food with another on the absorption of nutrients is well known in nutrition science,” Wayne Campbell, professor of nutrition science at Purdue University and lead author of the egg and salad study, tells The Salt. “Sometimes the impact may be positive and at other times negative.”
A classic example: After corn is soaked in lime and water, then ground up, all kinds of nutrients in the corn are released and made available for absorption — calcium, iron, niacin and minerals. This is why corn tortillas have been one of the bedrocks of Mesoamerican cuisine for millennia.
So, what about some other foods that you might as well throw together if you’ve got them on hand?
Campbell tells us that eating something high in vitamin C, like a red pepper, helps convert the nonheme iron in plant foods and iron-fortified foods into a chemical form that promotes absorption. (The other form of iron is heme iron, which is only found in meat and seafood.) Sounds like a good excuse to go Tex-Mex and stir some peppers into your black beans. . .
Continue reading. The article discusses other good combinations.
I am right now altering the Breakfast Bites recipe to include lots of freshly ground black pepper to aid absorption of the turmeric.
From EWG, the foods with the worst pesticide residue—so buy these foods from the organic section:
- Sweet bell peppers
- Cherry tomatoes
- Snap peas – imported
- Hot peppers
- Kale / Collard greens
And the foods lowest in pesticide residue, so safe to buy from the conventional section:
- Sweet corn
- Sweet peas
- Sweet potatoes (I prefer Jewell)
Broccoli is another safe one. Check the link.
Nice touch: they offer a mobile phone app so you can get a food’s score as you shop.
Fascinating article in ProPublica by Abrahm Lustgarten:
A couple of miles outside the town of Page, three 775-foot-tall caramel-colored smokestacks tower like sentries on the edge of northern Arizona’s sprawling red sandstone wilderness. At their base, the Navajo Generating Station, the West’s largest power-generating facility, thrums ceaselessly, like a beating heart.
Football-field-length conveyors constantly feed it piles of coal, hauled 78 miles by train from where huge shovels and mining equipment scraped it out of the ground shortly before. Then, like a medieval mortar and pestle machine, wheels crush the stone against a large bowl into a smooth powder that is sprayed into tremendous furnaces — some of the largest ever built. Those furnaces are stoked to 2,000 degrees, heating tubes of steam to produce enough pressure to drive an 80-ton rod of steel to spin faster than the speed of sound, converting the heat of the fires into electricity.
The power generated enables a modern wonder. It drives a set of pumps 325 miles down the Colorado River that heave trillions of gallons of water out of the river and send it shooting over mountains and through canals. That water — lifted 3,000 vertical feet and carried 336 miles — has enabled the cities of Phoenix and Tucson to rapidly expand.
This achievement in moving water, however, is gained at an enormous cost. Every hour the Navajo’s generators spin, the plant spews more climate-warming gases into the atmosphere than almost any other facility in the United States. Alone, it accounts for 29 percent of Arizona’s emissions from energy generation. The Navajo station’s infernos gobble 15 tons of coal each minute, 24 hours each day, every day.
At sunrise, a reddish-brown snake slithers across the sky as the burned coal sends out plumes of carbon dioxide, nitrogen oxide, mercury, lead and other metals. That malignant plume — containing 16 million tons of carbon dioxide every year — contributes to causing the very overheated weather, drought and dwindling flows of water the plant’s power is intended to relieve.
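As a quick sanity check, the two figures quoted above (15 tons of coal burned per minute, 16 million tons of CO2 per year) are roughly consistent with each other. The sketch below is not from the article: the coal’s carbon fraction is an assumption (subbituminous coal is typically around 50–60 percent carbon by mass), and it assumes round-the-clock operation as the article describes.

```python
# Rough consistency check of the article's figures: 15 tons of coal per
# minute vs. 16 million tons of CO2 per year. The carbon fraction is an
# assumed value, not from the article.
COAL_TONS_PER_MINUTE = 15
CARBON_FRACTION = 0.55        # assumed; typical of subbituminous coal
CO2_PER_TON_CARBON = 44 / 12  # molar mass ratio of CO2 to carbon

# "24 hours each day, every day"
coal_per_year = COAL_TONS_PER_MINUTE * 60 * 24 * 365  # ~7.9 million tons
co2_per_year = coal_per_year * CARBON_FRACTION * CO2_PER_TON_CARBON

print(f"Coal burned: {coal_per_year:,.0f} tons/year")
print(f"CO2 emitted: {co2_per_year / 1e6:.1f} million tons/year")
```

With the assumed 55 percent carbon content, this works out to roughly 16 million tons of CO2 a year, matching the figure in the article.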
Its builders knew that the Navajo Generating Station, construction of which began in 1969, would cause enormous pollution. An early government analysis warned that burning so much coal would degrade the region’s air by “orders of magnitude,” and federal scientists suggested Navajo and other coal plants in the region could turn the local terrain into a “national sacrifice area.” But for more than a decade, the pollution went largely unchecked. Climate change wasn’t yet a threat, and the other option for getting water into central Arizona — damming the Grand Canyon — seemed worse.
At times, officials have tried to mitigate the plant’s problems, pouring $420 million into improvements to limit sulfur dioxide emissions as acid rain blanketed parts of the country, for example.
But again and again, the federal government and the other agencies responsible for the plant have dodged calls to clean up the facility and have pushed some of the most stringent environmental requirements far into the future.
In a series of reports, ProPublica has examined how . . .
Sometimes it seems that America has lost the capacity to address large social problems—infrastructure being a prime example. But look also at the continuing breakdown of the medical/healthcare system. Lara Goitein reviews a recent book in the NY Review of Books:
Let Me Heal: The Opportunity to Preserve Excellence in American Medicine
by Kenneth M. Ludmerer
Oxford University Press, 431 pp., $34.95
In the 1890s, Sir William Osler, now regarded as something of a demigod in American medicine, created at the Johns Hopkins Hospital a novel system for training physicians after graduation from medical school. It required young physicians to reside in the hospital full-time without pay, sometimes for years, to learn how to care for patients under the close supervision of senior physicians.
This was the first residency program. Despite the monastic existence, the long hours, and the rigid hierarchy, Osler’s residents apparently loved it. They felt exalted to be able to learn the practice of medicine under the tutelage of great physicians who based their teachings on science, inquiry, and argument, not tradition. And far from bridling at being at the bottom of the pyramid, they virtually worshiped their teachers, who in turn generally lavished great attention and affection on their charges. Osler’s innovation spread rapidly, and the residency system is still the essential feature of teaching hospitals throughout the country.
Residents are young doctors who have completed medical school and are learning their chosen specialty by caring for patients under the supervision of senior physicians, called attendings. Residents in their first year are called interns. As in Osler’s time, residents work long hours, although they no longer live in the hospital and are now paid a modest salary. The time this training takes varies—three years, for example, to complete a program in internal medicine. Following that, many go on to a few more years of training in subspecialties (for example cardiology, a subspecialty of internal medicine), and at this point they are called fellows.
Together residents and fellows, who now number about 120,000 across the country, are called house officers, and their training is termed graduate medical education (GME). The teaching hospitals where most of this takes place are often affiliated with medical schools, which in turn are often part of universities, and together they make up sometimes gigantic conglomerates, called academic medical centers.
Despite the fact that Osler’s idea lives on, there have been enormous changes over the years, and this is the subject of Kenneth Ludmerer’s meticulous new book, Let Me Heal. Ludmerer, a senior faculty physician and professor of the history of medicine at Washington University in St. Louis, sounds a warning. The Oslerian ideal of faculty and residents forming close relationships and thinking together about each patient is in trouble. Instead, residents, with little supervision, are struggling to keep up with staggering workloads, and have little time or energy left for learning. Attending physicians, for their part, are often too occupied with their own research and clinical practices—often in labs and offices outside of the hospital—to pay much attention to the house officers.
The implications for the public are profound. Nearly anyone admitted to a teaching hospital—and these are the most prestigious hospitals in the country—can expect to be cared for by residents and fellows. Whether house officers are well trained and, most important, whether they have the time to provide good care are crucial. Yet until Ludmerer’s book, there has been very little critical attention to these questions. It’s simply assumed that when you are admitted to a teaching hospital, you will get the best care possible. It’s odd that something this important would be regarded in such a Panglossian way.
Ludmerer refers to graduate medical education in the period between the world wars, following Osler, as the “educational era,” by which he means that the highest priority of teaching hospitals was education. Heads of departments were omnipresent on the wards, and knew the house officers intimately. A network of intense, often lifelong mentorships formed. Ludmerer gives a fascinating account of the propagation of talent; for example, William Halsted, the first chief of surgery at Johns Hopkins, had seventeen chief residents, eleven of whom subsequently established their own surgical residency programs at other institutions. Of their 166 chief residents, eighty-five became prominent faculty members at university medical schools. The influence of the giants of the era of education still reaches us through three, four, or five generations of disciples, and house officers quote Osler even today.
There was a strong moral dimension to this system. Ludmerer writes that “house officers learned that medicine is a calling, that altruism is central to being a true medical professional, and that the ideal practitioner placed the welfare of his patients above all else.” Commercialism was antithetical to teaching hospitals in the era of education. “Teaching hospitals regularly acknowledged that they served the public,” writes Ludmerer, “and they competed with each other to be the best, not the biggest or most profitable.”
Indeed, teaching hospitals deliberately limited their growth to maintain the ideal setting for teaching and research. Ludmerer offers the example of the prestigious Peter Bent Brigham Hospital in Boston (now named the Brigham and Women’s Hospital), which in its 1925 annual report declared that it had “more patients than it can satisfactorily handle…. The last thing it desires is to augment this by patients who otherwise will secure adequate professional service.” They also kept prices as low as possible, and delivered large amounts of charity care. With few exceptions, members of the faculty did not patent medical discoveries or accept gifts from industry, and regularly waived fees for poor patients.
To be sure, this golden age was not pure gold. These physicians were, on the whole, paternalistic toward patients; by today’s standards, many were elitist, sexist, and racist. But they were utterly devoted to what they were doing, and to one another, and put that commitment ahead of everything, including their own self-interest.
World War II brought great changes. In the postwar prosperity, the United States began to invest heavily in science and medicine, with rapid expansion of the National Institutes of Health (NIH), which in turn poured money into research at academic medical centers. In addition, the growth of health insurance led to more hospital admissions. In 1965, the creation of Medicare and Medicaid accelerated this growth enormously. According to Ludmerer, between 1965 and 1990, the number of full-time faculty in medical schools increased more than fourfold, NIH funding increased elevenfold, and revenues of academic medical centers from clinical treatment increased nearly two hundred–fold.
Initially, in the couple of decades following the war, the influx of money and the rapid growth simply gave momentum to the trajectory begun in the era of education. Reinforced by leaders who had trained during that era, the established traditions endured, and teaching hospitals for the most part defended their commitment to educational excellence and public service. However, the close-knit, personal character of graduate medical education began to unravel. By the late 1970s, academic medical centers began to take on the character of large businesses, both in their size and complexity, and in their focus on growth and maximizing revenue. Even if technically nonprofit, the benefits of expansion accrued to everyone who worked there, most particularly the executives and administrators. In 1980, Arnold Relman wrote a landmark article in The New England Journal of Medicine, warning of the emergence of a “medical-industrial complex.”
The growing commercialization of teaching hospitals was exacerbated by a change in the method of payment for hospital care. Health care costs were rising rapidly and unsustainably, and in the 1980s health insurers responded with what has been termed “the revolt of the payers.” Previously, most insurers had paid hospitals according to “fee-for-service,” in which payment was made for each consultation, test, treatment, or other service provided. But now Medicare and other insurers, in an effort to control costs, began to reimburse hospitals less liberally and by “prospective payment” methods, in which the hospital received a fixed payment for each patient’s admission according to the diagnosis. Whatever part of that payment was not spent was the hospital’s gain; if the hospital spent more, it was a loss. Hospitals now had a strong incentive to get patients in and out as fast as possible.
Quite suddenly, the torrent of clinical revenue that had so swollen academic medical centers slowed. Many hospitals did not survive in the new environment (the total number of US hospitals decreased by nearly 20 percent between 1980 and 2000). Those that stayed afloat did so by promoting high-revenue subspecialty and procedural care, for example heart catheterization and orthopedic and heart surgery, which were still lucratively rewarded. They also developed more extensive relationships with pharmaceutical and biotech companies and manufacturers of medical devices, which paid them for exclusive marketing rights to drugs or technologies developed by faculty, as well as access to both patients and faculty for research and marketing purposes. . . .
A bankrupt policy that costs $15 billion per year and results in increasing drug use. Jon Lee Anderson reports in the New Yorker:
In 1971, President Nixon announced the U.S. “war on drugs,” which every President since has carried forward as a battle standard. Until recently, most Latin American governments have coöperated, and in return have received intelligence, equipment, and, perhaps most importantly, financial assistance. The overall investment has been huge—the federal government now spends about fifteen billion dollars on it each year—with the net result that drug use has proliferated in the U.S. and worldwide. In the drug-producing countries, where drug consumption was negligible at the start of the American effort, the criminal narcoculture has attained ghoulishly surreal proportions.
Over the course of the past few years, a growing number of Latin American governments have begun to challenge U.S. policy and to call for a radical rethinking of the war on drugs, including widespread decriminalization. A handful of leftist governments, such as those of Venezuela, Ecuador, and Bolivia, have gone so far as to end their coöperation with the U.S. Drug Enforcement Administration, alleging that U.S. drug policy is a new form of Yankee imperialism. Uruguay, under the former President José Mujica, became the first country to legalize state-sponsored production, sale, and use of marijuana.
The latest opposition to the forty-five-year-old drug war came not from a government that is hostile to the U.S. but from its most steadfast ally in the Americas, Colombia. On May 14th, President Juan Manuel Santos announced that his government was halting its longstanding practice of spraying the country’s illicit coca crop with chemicals to kill the plants. The spraying began in the late nineties under the U.S.-sponsored Plan Colombia, which aimed to wipe out the country’s drug culture and its guerrillas, who largely depend on narcotráfico for their survival. Santos made the announcement after U.N. scientists confirmed what critics of spraying had long alleged: that glyphosate, a key ingredient in the herbicide known as Roundup, is probably carcinogenic to humans.
Colombia was the last country in the world to use chemical spraying to combat illegal drug cultivation. Citing health hazards and damage to impoverished rural economies, both Bolivia and Peru, which also grow coca, have banned aerial spraying. Afghanistan, the world’s chief supplier of opium, overrode American protests to ban spraying in 2007. The Karzai government argued that the program drove poor Afghan farmers into the hands of the Taliban by destroying their livelihoods without offering realistic economic alternatives. Similar arguments have long been made in Colombia, where millions of farmers have been driven from their land to live in urban slums.
The U.S. State and Defense Departments, which jointly oversee Plan Colombia, have always lobbied heavily in favor of spraying, which is outsourced to the giant U.S. security contractor DynCorp. DynCorp has earned hundreds of millions from its Colombian contracts, just as it previously did in Afghanistan, where it also won the government contract to implement counter-narcotics strategy. Notably, after President Santos announced the halt to spraying, the U.S. Ambassador to Colombia, Kevin Whitaker, published an Op-Ed in the leading Colombian newspaper, El Tiempo, arguing in favor of continuing the spraying campaign while saying that the U.S. would continue working closely with Colombia in spite of the recent decision. Whitaker ended his Op-Ed with the English phrase “We have your back.”
So who is to be believed about the war on drugs, and what is the right way forward? After almost twenty years, many deaths, and billions of dollars spent under Plan Colombia, has illicit coca production decreased in Colombia? Overall, yes, according to the plan’s proponents: in his piece, Whitaker asserted that the area under cultivation for illegal coca production was reduced by half between 2007 and 2013. But studies also show that that area increased by thirty-nine per cent last year—so the most recent trends aren’t good. And if one third of the initial cultivation area is still left, that means that a significant amount of cocaine is still coming out of Colombia, and will be for the foreseeable future. . .
Maybe we’re going about drugs all wrong?