Archive for the ‘Science’ Category
Salt build-up from irrigation has destroyed other societies. Brian Merchant writes in Motherboard:
Eating too much salt in your diet can beget a litany of adverse health effects—high blood pressure, stroke, heart disease, cancer. That’s well documented. It’s not as well known that consuming too much salt can have similarly dire effects on the environment, and, by extension, our food supply. Salt degradation has caused tens of billions of dollars worth of damage, mars an area of cropland the size of Manhattan every week, and has hit nearly one-fifth of the world’s farmland so far.
“Salts have damaging effects whether they are in excess amounts in the human body or in agricultural lands,” Manzoor Qadir, the lead author of an eye-opening new study on the subject, published by the United Nations’ Institute for Water, Environment and Health, told me in an email conversation.
“If salt degradation goes on unchecked, more and more land will be highly degraded leading to wasteland,” he said. “Restoring such lands will not be economically feasible at all.”
When farmers irrigate crops with water—even “good quality” freshwater—salt comes along for the ride. Without proper drainage systems, the salt can then accumulate in soil whenever water evaporates and leaves it behind, or plants suck out the ‘pure water’ and leave salt concentrated in the root zone. Once enough salt accumulates, it can cause a host of problems to the crops—not entirely unlike how a salt-heavy diet adversely impacts people.
“In terms of effects on crops, salt-induced land degradation results in reduction in plant growth rate, reduced yield, and in severe cases, total crop failure,” Qadir told me. This happens especially quickly in arid regions, which suggests the process may be accelerated by climate change.
The UN report brings some fairly astonishing findings—his team estimates that 2,000 hectares (nearly 8 square miles) of farmland are ruined daily by salt degradation. So far, nearly 20 percent of the world’s farmland has been degraded, an area approximately the size of France. . .
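A quick sanity check of the unit conversion in those figures (a sketch; the hectare-to-square-mile factor is the standard one, and the annualized total is my own extrapolation from the daily figure, not a number from the report):

```python
HECTARE_TO_SQ_MILES = 0.003861  # 1 hectare = 0.003861 square miles

daily_loss_ha = 2_000  # the report's daily figure
daily_loss_sq_mi = daily_loss_ha * HECTARE_TO_SQ_MILES
print(f"{daily_loss_sq_mi:.1f} square miles per day")  # 7.7, i.e. "nearly 8"

# Extrapolated over a year, that daily rate comes to:
annual_loss_ha = daily_loss_ha * 365
print(f"{annual_loss_ha:,} hectares per year")  # 730,000
```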
This particular threat has been known for years, if not decades, yet again no action is taken. It is very like global warming: an enormous problem that is quite foreseeable and whose cause is known, yet we find ourselves paralyzed into inaction, encouraged in that by those making money from the status quo.
From DietDoctor.com, run by an MD in Sweden:
Hi! I feel like sharing my story with you.
When I was younger, I was a healthy and happy girl of normal weight. But when I was 12 years old I was hit by epilepsy. I quickly turned into a different person. Due to the frequent seizures (often several times a week) and more medications with very difficult side effects, I became a girl who was constantly depressed, tired, and I gained a lot of weight. For eight years I lived my life in a tangible darkness, and completely without energy or joy of living.
In early 2014 I was told by my doctors that there was nothing more they could do for me, and that they had tried everything to stop the seizures. They said the only thing I could do was to hope that the seizures would disappear with time. At that time I was having seizures 3–4 times a week. I was quite overweight and because I was in such poor shape I couldn’t work either. In addition, this unsustainable situation had already brought me to a suicide attempt, and I knew that if there were no improvement soon, I’d make a second attempt.
I decided to take things into my own hands, and thought that at least I could try to lose some weight. In late February (8 months ago), I started to eat LCHF. Without a doubt, absolutely the very best decision I have ever made! Already in the first week my seizures disappeared. By 3 months on LCHF, I had phased out all drugs and was still seizure free. Today I’ve lost 121 pounds (55 kilo). I’m not taking any drugs, and I finally became that happy, healthy girl full of energy again. As long as I stick to LCHF, I don’t notice my disease at all and I don’t run out of energy.
Imagine that something as simple as reducing the amount of carbohydrates gave me my life back, and experiencing this journey has also made me grow a lot as a person. The plan now is to study to become a diet counselor, and I hope to help others in a similar situation.
I also want to thank the Diet Doctor for inspiring and great posts.
Sincerely, Emmy Frisk
Emmy Frisk, 20 years old. Please feel free to publish name and picture. [picture's at the link - LG]
A couple of other posts on LCHF and epilepsy:
An article in Business Insider by Daniel Duane:
For more than half a century, the conventional wisdom among nutritionists and public health officials was that fat is dietary enemy No. 1 — the leading cause of obesity and heart disease.
It appears the wisdom was off.
And not just off. Almost entirely backward.
According to a new study from the National Institutes of Health, a diet that reduces carbohydrates in favor of fat — including the saturated fat in meat and butter — improves nearly every health measurement, from reducing our waistlines to keeping our arteries clear, more than the low-fat diets that have been recommended for generations.
“The medical establishment got it wrong,” says cardiologist Dennis Goodman, director of Integrative Medicine at New York Medical Associates. “The belief system didn’t pan out.”
It’s not the conclusion you would expect given the NIH study’s parameters. Lead researcher Lydia Bazanno, of the Tulane University School of Public Health, pitted this high-fat, low-carb diet against a fat-restricted regimen prescribed by the National Cholesterol Education Program.
“We told both groups to get carbs from green, leafy vegetables, because those are high in nutrients and fiber to keep you sated,” Bazanno says. “We also told everyone to stay away from trans fats.” The fat-restricted group continued to eat carbs, including bread and cereals, while keeping saturated fat — common in animal products — below 7 percent of total calories.
By contrast, the high-fat group cut carbs in half and did not avoid butter, meat, and cheese. Most important, both groups ate as much as they wanted — no calorie counting, no going hungry.
One year later, the high-fat, low-carb group had lost three times as much weight — 12 pounds compared with four — and that weight loss came from body fat, while the low-fat group lost muscle. Even more persuasive were the results of blood tests meant to measure the risk of heart disease and diabetes. The high-fat group, despite eating nearly twice as much saturated fat, still saw greater improvements in LDL cholesterol, HDL cholesterol, and triglycerides. . . .
Continue reading. The article in context includes these links:
Considering fat simply as a food feels odd at first, but then it becomes a relief and a pleasure. And in the meantime the sugar industry, which has zero concerns about our health but big concerns about its bottom line, is fighting fiercely to keep the Nutrition Facts label from showing us how much sugar has been added to our foods. (Note: 1 tsp sugar = 4 grams, so the 11 grams of sugar in one serving of Clamato juice (see video below) is almost 3 teaspoons, or 1 tablespoon, of sugar. How often do you stir a tablespoon of sugar into your drink? Well, obviously, you don’t have to: lots of sugar has already been added, enough so that food manufacturers really don’t want you to know how much.)
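The arithmetic in that note can be spelled out (a sketch; the 4-grams-per-teaspoon and 11-grams-per-serving figures are the ones given in the text, and 3 teaspoons per tablespoon is the standard kitchen equivalence):

```python
GRAMS_PER_TSP = 4   # 1 teaspoon of sugar is about 4 grams (figure from the text)
TSP_PER_TBSP = 3    # standard kitchen conversion

sugar_grams = 11    # one serving of Clamato juice, per its label
teaspoons = sugar_grams / GRAMS_PER_TSP
tablespoons = teaspoons / TSP_PER_TBSP

print(f"{teaspoons:.2f} tsp")   # 2.75 tsp, i.e. "almost 3 teaspoons"
print(f"{tablespoons:.2f} tbsp")  # 0.92 tbsp, roughly a tablespoon
```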
I think she’s onto something, and I can see no reason on earth why the customer should not be able to delete apps from his or her own phone. Apple is sometimes way too controlling.
And Elon Musk warns us that with AI we unleash the demon. Kevin Kelly describes in Wired exactly how the unleashing is being accomplished:
A few months ago I made the trek to the sylvan campus of the IBM research labs in Yorktown Heights, New York, to catch an early glimpse of the fast-arriving, long-overdue future of artificial intelligence. This was the home of Watson, the electronic genius that conquered Jeopardy! in 2011. The original Watson is still here—it’s about the size of a bedroom, with 10 upright, refrigerator-shaped machines forming the four walls. The tiny interior cavity gives technicians access to the jumble of wires and cables on the machines’ backs. It is surprisingly warm inside, as if the cluster were alive.
Today’s Watson is very different. It no longer exists solely within a wall of cabinets but is spread across a cloud of open-standard servers that run several hundred “instances” of the AI at once. Like all things cloudy, Watson is served to simultaneous customers anywhere in the world, who can access it using their phones, their desktops, or their own data servers. This kind of AI can be scaled up or down on demand. Because AI improves as people use it, Watson is always getting smarter; anything it learns in one instance can be immediately transferred to the others. And instead of one single program, it’s an aggregation of diverse software engines—its logic-deduction engine and its language-parsing engine might operate on different code, on different chips, in different locations—all cleverly integrated into a unified stream of intelligence.
Consumers can tap into that always-on intelligence directly, but also through third-party apps that harness the power of this AI cloud. Like many parents of a bright mind, IBM would like Watson to pursue a medical career, so it should come as no surprise that one of the apps under development is a medical-diagnosis tool. Most of the previous attempts to make a diagnostic AI have been pathetic failures, but Watson really works. When, in plain English, I give it the symptoms of a disease I once contracted in India, it gives me a list of hunches, ranked from most to least probable. The most likely cause, it declares, is Giardia—the correct answer. This expertise isn’t yet available to patients directly; IBM provides access to Watson’s intelligence to partners, helping them develop user-friendly interfaces for subscribing doctors and hospitals. “I believe something like Watson will soon be the world’s best diagnostician—whether machine or human,” says Alan Greene, chief medical officer of Scanadu, a startup that is building a diagnostic device inspired by the Star Trek medical tricorder and powered by a cloud AI. “At the rate AI technology is improving, a kid born today will rarely need to see a doctor to get a diagnosis by the time they are an adult.”
Medicine is only the beginning. All the major cloud companies, plus dozens of startups, are in a mad rush to launch a Watson-like cognitive service. According to quantitative analysis firm Quid, AI has attracted more than $17 billion in investments since 2009. Last year alone more than $2 billion was invested in 322 companies with AI-like technology. Facebook and Google have recruited researchers to join their in-house AI research teams. Yahoo, Intel, Dropbox, LinkedIn, Pinterest, and Twitter have all purchased AI companies since last year. Private investment in the AI sector has been expanding 62 percent a year on average for the past four years, a rate that is expected to continue.
Amid all this activity, a picture of our AI future is coming into view, and it is not the HAL 9000—a discrete machine animated by a charismatic (yet potentially homicidal) humanlike consciousness—or a Singularitan rapture of superintelligence. The AI on the horizon looks more like Amazon Web Services—cheap, reliable, industrial-grade digital smartness running behind everything, and almost invisible except when it blinks off. This common utility will serve you as much IQ as you want but no more than you need. Like all utilities, AI will be supremely boring, even as it transforms the Internet, the global economy, and civilization. It will enliven inert objects, much as electricity did more than a century ago. Everything that we formerly electrified we will now cognitize. This new utilitarian AI will also augment us individually as people (deepening our memory, speeding our recognition) and collectively as a species. There is almost nothing we can think of that cannot be made new, different, or interesting by infusing it with some extra IQ. In fact, the business plans of the next 10,000 startups are easy to forecast: Take X and add AI. This is a big deal, and now it’s here.
Around 2002 I attended a small party for Google—before its IPO, when it only focused on search. I struck up a conversation with Larry Page, Google’s brilliant cofounder, who became the company’s CEO in 2011. “Larry, I still don’t get it. There are so many search companies. Web search, for free? Where does that get you?” My unimaginative blindness is solid evidence that predicting is hard, especially about the future, but in my defense this was before Google had ramped up its ad-auction scheme to generate real income, long before YouTube or any other major acquisitions. I was not the only avid user of its search site who thought it would not last long. But Page’s reply has always stuck with me: “Oh, we’re really making an AI.”
I’ve thought a lot about that conversation over the past few years as Google has bought 14 AI and robotics companies. At first glance, you might think that Google is beefing up its AI portfolio to improve its search capabilities, since search contributes 80 percent of its revenue. But I think that’s backward. Rather than use AI to make its search better, Google is using search to make its AI better. Every time you type a query, click on a search-generated link, or create a link on the web, you are training the Google AI. When you type “Easter Bunny” into the image search bar and then click on the most Easter Bunny-looking image, you are teaching the AI what an Easter bunny looks like. Each of the 12.1 billion queries that Google’s 1.2 billion searchers conduct each day tutors the deep-learning AI over and over again. With another 10 years of steady improvements to its AI algorithms, plus a thousand-fold more data and 100 times more computing resources, Google will have an unrivaled AI. My prediction: By 2024, Google’s main product will not be search but AI.
This is the point where it is entirely appropriate to be skeptical. For almost 60 years, AI researchers have predicted that AI is right around the corner, yet until a few years ago it seemed as stuck in the future as ever. There was even a term coined to describe this era of meager results and even more meager research funding: the AI winter. Has anything really changed?
Yes. Three recent breakthroughs have unleashed the long-awaited arrival of artificial intelligence: . . .
Science is in effect a filter that selects out conservatives, for reasons to some extent explored and explained in a Pacific Standard article by Tom Jacobs:
While political partisans of all stripes have been known to take issue with research findings that contradict their positions, conservatives have come across as particularly anti-science of late. Columnist and commentator George Will, who dismisses the consensus opinion of climate-change researchers, recently expressed skepticism of the medical community’s assurances that Ebola cannot be caught via airborne transmission.
Will clearly perceives scientists as untrustworthy, their conclusions skewed by self-interest and preconceived notions. While this view is obviously self-serving—he really should check out the psychological notion of projection—it raises disturbing questions about whether science has become hopelessly politicized.
So are scientists—as conservatives suspect—more likely to be liberals? Recently published research suggests they are, but—contrary to the implication left by Will and his colleagues—this is not because political progressives are more intrinsically inclined than right-wingers to choose a scientific career.
Rather, according to a research team led by Harvard University psychologist Christine Ma-Kellams, immersion in the world of science tends to shift students’ attitudes toward the left side of the political spectrum.
Specifically, they report that adopting a scientific mindset makes one less likely to endorse a hierarchy-based ideology in which one group of people is considered superior to another—an attitude that has been strongly linked to political conservatism.
In the Journal of Social and Political Psychology, Ma-Kellams and her colleagues describe four studies that support their thesis. In the first, 196 students from a New England university revealed their ideological positions by responding to 18 statements expressing political opinions.
“Across domains,” the researchers report, “those who are in scientific fields exhibited greater political liberalism compared to those in non-hard-scientific fields.”
Importantly, this was only found for students in their third or fourth year of college. This strongly suggests that, rather than political liberals being attracted to science, it was the hands-on study that made the difference.
The second study featured 100 undergraduates, who expressed their views on three hot-button political issues (same-sex marriage, affirmative action, and the Affordable Care Act). They also completed the Social Dominance Orientation Scale, in which they expressed their level of agreement or disagreement with such statements as “Sometimes other groups must be kept in their place,” and “In getting what you want, it is sometimes necessary to use force against other groups.”
Consistent with the first study, the researchers found that “for those with significant exposure to their discipline (i.e., upperclassmen), studying science is associated with more liberal political attitudes.” Furthermore, they found this was due to a lower level of support for the my-group-deserves-to-dominate positions outlined above.
Additional studies featuring Canadian students and a community sample from the Boston area came to the same conclusions.
“Relative to those studying non-sciences, students in the sciences exhibited greater political liberalism across a variety of domains (including foreign policy, health care, and the economy) and a variety of social issues (gay marriage, affirmative action), as well as in general self-reported liberalism,” Ma-Kellams and her colleagues write.
This, they conclude, is the result of “science’s emphasis on rationality, impartiality, fairness, progress, and the idea that we are to use these rational tools for the mutual benefit of all people in society.”
In one sense, these results are something of a surprise. . .
This explains (to a degree and on a general level) why conservatives seem to know and understand very little science: those who do learn science tend to stop being conservative. (It’s a general rule, with many exceptions.)
Last March, during the enormous South by Southwest music festival in Austin, Texas, the late-night talk show Jimmy Kimmel Live! sent a camera crew out into the streets to catch hipsters bluffing. “People who go to music festivals pride themselves on knowing who the next acts are,” Kimmel said to his studio audience, “even if they don’t actually know who the new acts are.” So the host had his crew ask festival-goers for their thoughts about bands that don’t exist.
“The big buzz on the street,” said one of Kimmel’s interviewers to a man wearing thick-framed glasses and a whimsical T-shirt, “is Contact Dermatitis. Do you think he has what it takes to really make it to the big time?”
“Absolutely,” came the dazed fan’s reply.
The prank was an installment of Kimmel’s recurring “Lie Witness News” feature, which involves asking pedestrians a variety of questions with false premises. In another episode, Kimmel’s crew asked people on Hollywood Boulevard whether they thought the 2014 film Godzilla was insensitive to survivors of the 1954 giant lizard attack on Tokyo; in a third, they asked whether Bill Clinton gets enough credit for ending the Korean War, and whether his appearance as a judge on America’s Got Talent would damage his legacy. “No,” said one woman to this last question. “It will make him even more popular.”
One can’t help but feel for the people who fall into Kimmel’s trap. Some appear willing to say just about anything on camera to hide their cluelessness about the subject at hand (which, of course, has the opposite effect). Others seem eager to please, not wanting to let the interviewer down by giving the most boringly appropriate response: I don’t know. But for some of these interviewees, the trap may be an even deeper one. The most confident-sounding respondents often seem to think they do have some clue—as if there is some fact, some memory, or some intuition that assures them their answer is reasonable.
At one point during South by Southwest, Kimmel’s crew approached a poised young woman with brown hair. “What have you heard about Tonya and the Hardings?” the interviewer asked. “Have you heard they’re kind of hard-hitting?” Failing to pick up on this verbal wink, the woman launched into an elaborate response about the fictitious band. “Yeah, a lot of men have been talking about them, saying they’re really impressed,” she replied. “They’re usually not fans of female groups, but they’re really making a statement.” From some mental gossamer, she was able to spin an authoritative review of Tonya and the Hardings incorporating certain detailed facts: that they’re real; that they’re female (never mind that, say, Marilyn Manson and Alice Cooper aren’t); and that they’re a tough, boundary-breaking group.
To be sure, Kimmel’s producers must cherry-pick the most laughable interviews to put on the air. But late-night TV is not the only place where one can catch people extemporizing on topics they know nothing about. In the more solemn confines of a research lab at Cornell University, the psychologists Stav Atir, Emily Rosenzweig, and I carry out ongoing research that amounts to a carefully controlled, less flamboyant version of Jimmy Kimmel’s bit. In our work, we ask survey respondents if they are familiar with certain technical concepts from physics, biology, politics, and geography. A fair number claim familiarity with genuine terms like centripetal force and photon. But interestingly, they also claim some familiarity with concepts that are entirely made up, such as the plates of parallax, ultra-lipid, and cholarine. In one study, roughly 90 percent claimed some knowledge of at least one of the nine fictitious concepts we asked them about. In fact, the more well versed respondents considered themselves in a general topic, the more familiarity they claimed with the meaningless terms associated with it in the survey.
It’s odd to see people who claim political expertise assert their knowledge of both Susan Rice (the national security adviser to President Barack Obama) and Michael Merrington (a pleasant-sounding string of syllables). But it’s not that surprising. For more than 20 years, I have researched people’s understanding of their own expertise—formally known as the study of metacognition, the processes by which human beings evaluate and regulate their knowledge, reasoning, and learning—and the results have been consistently sobering, occasionally comical, and never dull.
The American author and aphorist William Feather once wrote that being educated means “being able to differentiate between what you know and what you don’t.” As it turns out, this simple ideal is extremely hard to achieve. Although what we know is often perceptible to us, even the broad outlines of what we don’t know are all too often completely invisible. To a great degree, we fail to recognize the frequency and scope of our ignorance.
In 1999, . . .