Later On

A blog written for those whose interests more or less match mine.

Archive for the ‘Science’ Category

Malignant Narcissism diagnostic checklist

Interesting and perhaps even useful, at the personal or national level.

Written by LeisureGuy

20 August 2018 at 7:33 am

Effective treatment for depression: Transcranial magnetic stimulation

Transcranial magnetic stimulation is relatively new but has proved effective against depression when talk therapy and/or medication has not worked. The link is to an article by the Mayo Clinic, which begins:

Transcranial magnetic stimulation (TMS) is a noninvasive procedure that uses magnetic fields to stimulate nerve cells in the brain to improve symptoms of depression. TMS is typically used when other depression treatments haven’t been effective.

How it works

During a TMS session, an electromagnetic coil is placed against your scalp near your forehead. The electromagnet painlessly delivers a magnetic pulse that stimulates nerve cells in the region of your brain involved in mood control and depression. And it may activate regions of the brain that have decreased activity in people with depression.

Treatment for depression involves delivering repetitive magnetic pulses, so it’s called repetitive TMS or rTMS.

Though the biology of why rTMS works isn’t completely understood, the stimulation appears to affect how this part of the brain is working, which in turn seems to ease depression symptoms and improve mood.

Why it’s done

Depression is a treatable condition, but for some people, standard treatments aren’t effective. Repetitive transcranial magnetic stimulation (rTMS) is typically used when standard treatments such as medications and talk therapy (psychotherapy) don’t work. . .

Continue reading.

Written by LeisureGuy

18 August 2018 at 2:13 pm

The Growing-ups: A modern life stage for 18- to 29-year-olds

Jeffrey Jensen Arnett, Professor of Psychology at Clark University in Worcester, Massachusetts and an author of Getting to 30: A Parent’s Guide to the 20-Something Years, writes in Aeon:

I will readily admit, it took me a long time to grow up. I graduated from Michigan State University in 1980 at the age of 23 with a freshly printed bachelor’s degree in psychology and no idea what I really wanted to do. I’d learned to play guitar in college and, intent on avoiding the drudgery of a crummy low-paying job, I now worked up a repertoire of songs large enough to enable me to make money by playing in bars and restaurants. I made enough to live on, but only because I had moved back home with my parents and didn’t have to pay for rent or groceries.

After a couple of years, I entered graduate school in psychology, but even after I got my PhD four years later, I still didn’t really know what I wanted to do. In the romantic department, I was no further along either. I’d had lots of girlfriends by then, but never come close to marriage. Eventually, I did find my way in love and work, but it took many years more: I got my first long-term job (as a professor) at 35, married at 36, and had my kids (twins) at 42.

When my research on how young people make their way to adulthood first began, the initial inspiration was my own odyssey. I was in my early 30s and thinking about how long it was taking me – and lots of my peers – to get there. But I have maintained my research focus on these 18- to 29-year-olds because I found they were so rewarding to talk to.

I had studied adolescents for years before that, mostly high-school students, focusing on their media use and risky behaviours such as driving while intoxicated. I enjoyed this research, but found that adolescents often clammed up when I tried to interview them. They were wary of me – a potentially intrusive adult – but it seemed that they also lacked self-reflection and self-understanding. Their egocentrism prevented them from being able to take a step back and reflect thoughtfully on what they did and why they did it.

But ‘emerging adults’, in the 18-29 age group, did have that ability, and not only those who were college-educated like me. My most memorable interviews were with emerging adults whose experiences were totally different from mine – those who had been in prison, or abused as children, or raised by a drug-addicted single mom. Across the board, I found them to be insightful about what they had experienced and who they were becoming now. It was this insightfulness, expressed with humour and everyday eloquence, that led me to devote my career to understanding them and explaining them to others.

Since then, I’ve written two books about this distinct life stage, in part to help emerging adults and their parents understand the longer road to adulthood in America today. I’ve also directed two national surveys, the Clark Poll on Emerging Adults in 2012 and 2013, which have given us a picture of this age group nationwide.

I was to discover, however, that there were many others who didn’t share my warm and benevolent views of emerging adults. Quite the contrary.

In 2004, after a decade of interviewing 18- to 29-year-olds in various parts of the US, I published a book announcing the theory of emerging adulthood as a new life stage between adolescence and adulthood, and summarising what I’d found in my research, on topics ranging from relations with parents to love and sex, education, work and religious beliefs. Prior to publication, TIME magazine told my publisher, Oxford University Press, that they were planning to run a cover story inspired by the book. Naturally, I was excited. However, when the TIME piece came out, it was shockingly bad. The cover photo showed a young man clad in a dress shirt and pants, sitting in a sandbox. Readers were invited to meet today’s young people, ‘who live off their parents, bounce from job to job and hop from mate to mate… THEY JUST WON’T GROW UP’. The text was mostly a lament on their deficiencies and an invitation to ridicule them for taking longer to enter marriage, parenthood and full-time work than their parents or grandparents did.

Ten years later, I am no longer surprised by this view of emerging adults, but I remain puzzled and dismayed. I have spent a regrettable amount of my time in the past decade playing Whac-A-Mole with the derogatory descriptions that my fellow Americans reflexively apply to emerging adults: they’re lazy, selfish and they never want to grow up. Oh, and they’re worse than ever, certainly worse than the adults now criticising them were in their own youth. Is there any truth to these stereotypes, or are they just unfair?

One of the most common insults to today’s emerging adults is that they’re lazy. According to this view, young people are ‘slackers’ who avoid work whenever possible, preferring to sponge off their parents for as long as they can get away with it. One of the reasons they avoid real work is that they have an inflated sense of entitlement. They expect work to be fun, and if it’s not fun, they refuse to do it.

It’s true that emerging adults have high hopes for work, and even, yes, a sense of being entitled to enjoy their work. Ian, a 22-year-old who was interviewed for my 2004 book, chose to go into journalism, even though he knew that: ‘If I’m a journalist making $20,000 a year, my dad [a wealthy physician] makes vastly more than that.’ More important than the money was finding a job that he could love. ‘If I enjoy thoroughly doing what I’m doing in life, then I would be better off than my dad.’ Emerging adults enter the workplace seeking what I call identity-based work, meaning a job that will be a source of self-fulfillment and make the most of their talents and interests. They want a job that they will look forward to doing when they get up each morning.

You might think that this is not a realistic expectation for work, and you are right. But keep in mind it was their parents’ generation, the Baby Boomers, who invented the idea that work should be fun. No one had ever thought so before. Baby Boomers rejected the traditional assumption that work was a dreary but unavoidable part of the human condition. They declared that they didn’t want to spend their lives simply slaving away – and their children grew up in this new world, assuming that work should be meaningful and self-fulfilling. Now that those children are emerging adults, their Baby Boomer parents and employers grumble at their presumptuousness.

So, yes, emerging adults today have high and often unrealistic expectations for work, but lazy? That’s laughably false. While they look for their elusive dream job, they don’t simply sit around and play video games and update their Facebook page all day. The great majority of them spend most of their twenties in a series of unglamorous, low-paying jobs as they search for something better. The average American holds ten different jobs between the ages of 18 and 29, and most of them are the kinds of jobs that promise little respect and less money. Have you noticed who is waiting on your table at the restaurant, working the counter at the retail store, stocking the shelves at the supermarket? Most of them are emerging adults. Many of them are working and attending school at the same time, trying to make ends meet while they strive to move up the ladder. It’s unfair to tar the many hard-working emerging adults with a stereotype that is true for only a small percentage of them.

Is striving for identity-based work only for the middle class and the wealthy, who have the advantages in American society? Yes and no. The aspiration stretches across social classes: in the national Clark poll, 79 per cent of 18- to 29-year-olds agreed that: ‘It is more important for me to enjoy my job than to make a lot of money,’ and there were no differences across social class backgrounds (represented by mother’s education). However, the reality is quite different from the aspiration. Young Americans from lower social class backgrounds are far less likely than those from higher social class backgrounds to obtain a college education and, without a college degree, jobs of any kind are scarce in the modern information-based economy. The current US unemployment rate is twice as high for those with only a high-school degree or less than it is for those with a four-year college degree. In the national Clark poll, emerging adults from lower social class backgrounds were far more likely than their more advantaged peers to agree that ‘I have not been able to find enough financial support to get the education I need.’ That’s not their fault. It is the fault of their society, which short-sightedly fails to fund education and training adequately, and thereby squanders the potential and aspirations of the young.

Another widespread slur against emerging adults is that they are selfish. Some American researchers – most notoriously Jean Twenge, a professor at San Diego State University and a well-known writer and speaker – claim that young people today have grown more ‘narcissistic’ compared with their equivalents 30 or 40 years ago. This claim is based mainly on surveys of college students that show increased levels of self-esteem. Today’s students are more likely than in the past to agree with statements such as: ‘I am an important person.’

With this stereotype, too, there is a grain of truth that has been vastly overblown. It’s probably true that most emerging adults today grow up with a higher level of self-esteem than in previous generations. Their Baby Boomer parents have been telling them from the cradle onward: ‘You’re special!’ ‘You can be whatever you want to be!’ ‘Dream big dreams!’ and the like. Popular culture has reinforced these messages, in movies, television shows and songs. Well, they actually believed it. In the national Clark poll, nearly all 18- to 29-year-olds (89 per cent) agreed with the statement: ‘I am confident that eventually I will get what I want out of life.’

But – and this is the key point – that doesn’t mean they’re selfish. It certainly doesn’t mean they are a generation of narcissists. It simply means that they are highly confident in their abilities to make a good life for themselves, whatever obstacles they might face. Would we prefer that they cringed before the challenges of adulthood? I have come to see their high self-esteem and confidence as good psychological armour for entering a tough adult world. Most people get knocked down more than once in the course of their 20s, by love, by work, by any number of dream bubbles that are popped by rude reality. High self-esteem is what allows them to get up again and continue moving forward. For example, Nicole, 25, grew up in poverty as the oldest of four children in a household with a mentally disabled mother and no father. Her goals for her life have been repeatedly delayed or driven off track by her family responsibilities. Nevertheless, she is pursuing a college degree and is determined to reach her ultimate goal of getting a PhD. Her self-belief is what has enabled her to overcome a chaotic childhood full of disadvantages. ‘It’s like, the more you come at me, the stronger I’m going to be,’ she told me when I interviewed her for my 2004 book.

The ‘selfish’ slur also ignores how idealistic and generous-hearted today’s emerging adults are. In the national Clark poll, 86 per cent of 18- to 29-year-olds agreed that: ‘It is important to me to have a career that does some good in the world.’ And it is not just an idealistic aspiration: they are, in fact, more likely to volunteer their time and energy for serving others than their parents did at the same age, according to national surveys by the US Higher Education Research Institute.

As for the claim that they never want to grow up, it’s true that entering the full range of adult responsibilities comes later than it did before, in terms of completing education and entering marriage and parenthood. Many emerging adults are ambivalent about adulthood and in no hurry to get there. In the national Clark poll, 35 per cent of 18- to 29-year-olds agreed with the statement: ‘If I could have my way, I would never become an adult.’ That’s not a majority, but it’s a lot, and that 35 per cent is probably the basis of the stereotype. . .

Continue reading.

Written by LeisureGuy

18 August 2018 at 9:43 am

Posted in Books, Daily life, Science

Genetics: How to Edit a Human

Tom Whipple writes in 1843 Magazine:

This story begins nearly four billion years ago, when the Earth was just another rock in just another solar system. In a pool of sludge on that rock, something astonishing happened. A long stringy molecule found a way to copy itself. Similar molecules would later carry the code that would enable life forms to grow, digest, run, breathe, read, launch rockets to the Moon. But for now, that molecule only knew how to do a single, important thing – to reproduce. This was the moment that life emerged.

Since then, as each living organism has multiplied, the codes of life have altered by the tiniest increments generation after generation, stretching across time. Most of these mutations have had little impact. Very, very occasionally, they have been extraordinarily useful. The sum of millions of minuscule modifications over billions of generations has given some organisms the ability to survive in water, land, ice or the desert. They have helped them to beat disease, to be stronger, faster, fly.

Across the aeons of biological time, this process has led one particular organism – us – to grow large brains, develop opposable thumbs and communicate complex ideas. We’ve mastered fire, tools, technology. In the great span of evolution, this transformation happened a mere split second ago. Degree by degree we continue to change.

Six years ago a group of those highly evolved organisms worked out how to shrink evolutionary time. Scientists in laboratories on either side of the Atlantic discovered a way to manipulate the blind stumblings of random mutations. Through meticulous trial after trial and not a little fortune, they found a way to edit the code of life – to tweak the information that makes our eyes blue, muscles strong or IQs high. Humans had advanced so far that we were finally able to control our own evolution.

Jennifer Doudna, one of those scientists, was not the first to edit genes or genetically modify an organism. But the tool that her team discovered made a previously painstaking and expensive process simpler and usable by almost anyone. Entire PhDs were once spent changing a single gene to make one mutant mouse for research.

The eureka moment came in 2012. Doudna remembers the instant when she realised what she had found. She was in her office high above San Francisco bay and her postgraduate student, Martin Jinek, was at the whiteboard. “It was a beautiful California day. I was looking across and seeing the Golden Gate Bridge,” says Doudna, gesturing towards the window: “The sun was streaming in, Martin was writing at the whiteboard.” Stroke by stroke he began sketching a simplified version of a previously obscure molecular mechanism that bacteria use to fight infection.

The device had an ungainly name, CRISPR-Cas9. But realisation now dawned that its function was supremely elegant: it chopped up the DNA of invading viruses. What made that discovery important was that the tool could also be programmed to cut up DNA of any kind. Doudna’s team had worked out how to edit the genome of every living thing – even humans.

Sitting beside that same whiteboard now she finds it hard to convey the magnitude of that instant – she talks of chills going down her neck, of thinking “this is cool” – as it dawned on her that this could transform not only the lives of the scientists who uncovered it, but all of our lives.

Rarely, if ever, has a scientific tool spread as fast as the one they drew that day. It took a millennium for the mathematical concept of zero to be fully accepted in Europe. It took centuries for the first rudimentary microscopes to develop into something scientifically useful. Even the computer took decades to become a mainstay of offices and homes.

In just over five years, the new gene-editing mechanism that Doudna and her colleagues found has attained ubiquity in life-science laboratories. Other means of editing DNA already existed, but CRISPR was better and faster. Yet the speed of CRISPR’s dissemination represents a threat as well as a breakthrough. Anxious about the lack of control, Doudna convened a conference of 500 ethicists, scientists and lawyers in 2015 to consider all the apparently fantastical futures ushered in by the ability to tinker with the code of life. She wanted to set out rules and protocols before the technology was applied to humans.

Doudna was in for a shock. “One attendee [at the conference] pulled me aside and said three manuscripts had been submitted to journals involving experiments on human embryos. He said, ‘You should know this is happening.’” The labs in China had destroyed the embryos they had developed, and the modification had been only partial. Far sooner than predicted, a threshold had been crossed.

That moment in Doudna’s office came 60 years after James Watson and Francis Crick interrupted lunchtime at the Eagle pub in Cambridge with the words “We’ve discovered the meaning of life.” It was only slight hyperbole. They had revealed the structure of DNA, the alphabet in which the code of life was written.

That achievement was monumental. It was also, on its own, useless. Though they knew the letters, they didn’t know what they meant. They couldn’t read the code; they couldn’t write it.

Over subsequent decades genetics researchers have slowly built a DNA dictionary to explain what individual strings of code mean. First in bacteria, then in worms and eventually, in 2003, in humans, we have read full genomes and started piecing together the functions of DNA, the code that builds bodies, keeps them running and occasionally makes them fail. That achievement, too, has been monumental. But even when we can determine the exact mutation that led to a genetic disease, we can’t do anything about it. The search for a means to rewrite that code became the most pressing quest in genetics research.

Although no one realised it at the time, that journey was already under way. In 1987, almost unnoticed, a scientist in Japan spotted an oddity in the genome of a bacterium found in the human gut: a repeating sequence of genetic code, roughly palindromic, in the bacterium’s DNA. If you read along it, this code would appear in one section, then there would be an incomprehensible sequence of code, then it would appear again, and so on. The oddity was noted and the world moved on. The planet is not short of oddities.

Six years later a Spanish researcher spotted the same repeating structure in a microscopic organism from a different part of the living world. Since the common evolutionary ancestor of these two organisms came hundreds of millions of years earlier, it seemed significant that both had this structure. The oddity was upgraded to a curiosity.

For 20 years, that was where it remained. The sequence gained its awkward name: Clustered Regularly Interspaced Short Palindromic Repeats, or CRISPR (pronounced “crisper”). Slowly, we learned more about it. The first breakthrough in establishing the importance of CRISPR came when scientists identified the code that lies between the repeats. It turned out that these sequences appeared elsewhere too: in the genetic code of viruses that had attacked these microbes. Evolution doesn’t create such unlikely coincidences without a purpose.

There was a logical conclusion: microbes were storing the viral code to defend themselves against the viruses. CRISPR seemed to be not just a code but a tool that both held crucial intelligence on bacteria’s viral enemies and used this intelligence to defeat them. That was when the curiosity became a business proposition – bacteria can be worth a lot of money. So the funding for the next step came from one of the world’s most famous bacteria farmers: Danone, a dairy company that boasts of the good bacteria in its products, and which every year lost cash, cheese and yogurt to viruses that attacked its bacteria.

Two research scientists at Danone began investigating how to use the matching parts of code found on bacteria and viruses to protect the good bacteria they wanted to foster. They found that a small proportion of bacteria were able to capture and store the DNA of invading viruses in their own genomes so that, when they were attacked again, they were essentially immune. Even more excitingly, when those resistant bacteria reproduced, the new bacteria were also protected. So the change wasn’t just a one-off sticking plaster that helped individual bacteria to survive: the immunity they acquired was genetic, and could be passed on to their offspring too.

Suddenly people realised that CRISPR might do more than make yogurt cheaper. Separately, in different labs across the world, groups of scientists started to think about whether this tool could be used to manipulate DNA of all kinds. The race to harness its powers began.

That was when Doudna joined the story. Many other scientists interested in CRISPR were experts on DNA – the genetic code in every cell. Doudna’s specialism was RNA, which helps DNA translate that code into something usable. Seeing the structure of RNA is hard. Imagine if all you knew about the Eiffel tower came from its shadow at different times of day. Then imagine forming an image of it using just that information, with every strut and platform in place. This type of work has been Doudna’s life’s pursuit: trying to construct a model of molecules from scatterings left by x-rays or the products left behind as they break down.

Doudna would probably have continued happily on her academic path of quiet distinction were it not for a telephone call one afternoon in 2006 from across the campus at the University of California, Berkeley. Jill Banfield was unusual among CRISPR scientists because she wasn’t interested in its application to humans. She wanted to understand what microbes do with it: for over a decade she has been searching for it in organisms in extreme environments underground, in the heat of geysers, even in slurry ponds at a dairy farm. Banfield needed an expert on RNA, so she began looking on the university intranet for resident RNA experts: “I saw Jennifer’s name, and contacted her.” That phone call marked the first time Doudna had ever heard of CRISPR (later she admitted that she thought it was spelt “crisper”).

By this stage, laboratories around the world were trying to uncover how CRISPR could be used to edit a gene. In Lithuania, Virginijus Siksnys was one of the acknowledged leaders. His laboratory had treated CRISPR like an app, showing how its DNA sequence could be taken from one bacterium and “installed” in another, where it worked perfectly to protect the organism. In the race to discover how to use CRISPR to alter the genome, his laboratory and Doudna’s would eventually come near to a dead heat. “Race” is not quite the right word, though: at this point neither knew for certain where the finish line was.

The first step in determining what CRISPR could do was to break it down into its components, to see what role each part of the CRISPR code played. All DNA works by making proteins; CRISPR is no different. Understanding the function of each protein was the key to developing CRISPR into a deployable device. By generating those same proteins in bulk in the lab, scientists could start experimenting to see what they did.

CRISPR programmes a number of mechanisms, each named after the protein involved: Cas1, Cas2, Cas3, and so on. Slowly, Doudna’s laboratory went through each one to determine its function. Once again luck (and the backing of a wealthy institution) propelled Doudna to the next milestone. In 2011, at a conference in Puerto Rico, Doudna met Emmanuelle Charpentier, a French scientist who was looking at a CRISPR protein called Cas9 that Doudna hadn’t yet considered. Charpentier needed a biologist to look at its structure.

Charpentier approached two colleagues in Vienna. “One did not have enough manpower, the other not enough money,” says Charpentier. “Life is sometimes unfair.” She recently spoke to one of the scientists who turned her down. “He still thinks of this. He was very much affected.” As well he might be: he works in relative obscurity; Doudna and Charpentier are superstars.

Doudna’s lab had both people and dollars. Over the next year, separated by a continent, Doudna and Charpentier worked on cracking CRISPR’s codes. Their collaboration was professionally rather than personally close. Charpentier’s lab cultured and analysed bacteria to identify the genetic data that made up Cas9 and its function, and shipped that to America. Doudna’s team found a way to manufacture the protein in bulk, using genetically modified bacteria to pump it out and then separate it from the soup of other proteins. Their goal was to surmise how Cas9 cut up the DNA of invading viruses, and if it could be trained to do the same to any length of any DNA they chose – to edit any genome.

Some experiments to test Cas9 were disarmingly simple. “It’s a bit like cooking,” says Doudna. In one they took a few drops of a liquid containing the Cas9 protein, mixed it with a few drops of another liquid that included a protein that helped the molecular mechanism find its target, and added in some DNA to see the effect. Then they heated it up. During the heating process they would periodically take out samples to see if the DNA was changing size or being cut – Doudna moves her hands apart, like a fisherman describing a catch. More often than not, it was. The CRISPR system had gone in and snipped precisely the right section. That was the outcome they were after.

Charpentier speaks of that period in almost unworldly terms. At the time, she was based out of a university in Sweden. “There was either too much light or not enough light, which meant I was available at all times of the day.” Time differences were not a problem. “When we were writing the paper, it was March, April, May, June. You experience nights where you see only light and so you stay awake pretty much all night long. I would come from the lab at 3 in the morning and there would be full light.”

The excitement was heightened by growing certainty that other labs must be close behind, says Doudna. “Emmanuelle would work in the Californian night. I’d wake up in the morning and there would be a new draft of our article in my inbox. Then I’d work on it. We knew there was good reason to try to wrap it up and write it up as quickly as we could.”

Slowly, a picture of the Cas9 system emerged. In Doudna’s office today in Berkeley she has a model of it made out of 3D-printed plastic, the result of those final experiments that uncovered its structure. It sits on a table beside a Japanese katana sword, one of many gifts Doudna has since received on her travels. The model of the Cas9 targeting system looks like a Gordian knot of twisted molecules. She slowly pulls it apart with the patience of a teacher – she still has undergraduate students – demonstrating the section that locates the target DNA and the cleaver that chops it up.

It is that cleaver, Cas9, that is important. A katana of the genome, it carefully slices both sides of the target DNA, breaking the strand. Since DNA is good at repairing itself, the two loose ends of DNA can tie themselves back together, but without the lost code the virus is neutralised. That offered the possibility of a further leap: rather than merely remove part of the DNA, it might be possible to replace it with another strand. Then the DNA should still be able to stitch itself up again. To achieve that would be one minuscule change for a genome – and one giant step for genetics.

No wonder Doudna smiles so widely as she remembers the day when a year of experiments suddenly made sense.

Scientists sometimes complain that society still believes in the idea of the lone genius whose insight allowed him (it’s normally a man) to see further – even if modesty dictates that he says he did so by standing on the shoulders of giants. In this narrative, you forget about the other people in the laboratory, other teams developing the science or working in parallel. No one ever asks a whole laboratory to give a keynote address. TED talks are not delivered by committee.

Doudna is meticulous about attributing credit. There were six authors on the breakthrough paper. She notes that her postgraduate students did much of the lab work. Hundreds developed the field to enable the final discovery. Like the 15 members of the 1953 Everest expedition, all had parts to play. We already know, too, that Doudna’s involvement was not once but twice a matter of chance and money. Yet Charpentier and Doudna are the ones who finally got to the top, the Hillary and Tenzing of CRISPR. They are the ones who get the plaudits, the international speaking engagements, and the prizes. When the Nobel committee gets around to awarding a prize for CRISPR, as it surely will in time, it will only ever be able to honour three people at most. Two places are taken. The others will forever be at the Everest base camp of science, watching their colleagues disappear into the clouds.

What, though, if Hillary and Tenzing had arrived at the summit only to find that someone had already reached it but hadn’t got the message home yet? That type of luck is more awkward. . .

Continue reading.

Written by LeisureGuy

16 August 2018 at 3:19 pm

Why obesity has increased so drastically

I’ve added a link to this Guardian article by George Monbiot to my (rather lengthy) post giving my current diet advice. The article begins:

When I saw the photograph I could scarcely believe it was the same country. A picture of Brighton beach in 1976, featured in the Guardian a few weeks ago, appeared to show an alien race. Almost everyone was slim. I mentioned it on social media, then went on holiday. When I returned, I found that people were still debating it. The heated discussion prompted me to read more. How have we grown so fat, so fast? To my astonishment, almost every explanation proposed in the thread turned out to be untrue.

Unfortunately, there is no consistent obesity data in the United Kingdom before 1988, at which point the incidence was already rising sharply. But in the United States, the figures go back further. They show that, by chance, the inflection point was more or less 1976. Suddenly, at around the time that the photograph was taken, people started becoming fatter – and the trend has continued ever since.

The obvious explanation, many on social media insisted, is that we’re eating more. Several pointed out, not without justice, that food was generally disgusting in the 1970s. It was also more expensive. There were fewer fast food outlets and the shops shut earlier, ensuring that if you missed your tea, you went hungry.

So here’s the first big surprise: we ate more in 1976. According to government figures, we currently consume an average of 2,130 kilocalories a day, a figure that appears to include sweets and alcohol. But in 1976, we consumed 2,280 kcal excluding alcohol and sweets, or 2,590 kcal when they’re included. I have found no reason to disbelieve the figures.

Others insisted that the cause is a decline in manual labour. Again, this seems to make sense, but again the data doesn’t support it. A paper last year in the International Journal of Surgery states that “adults working in unskilled manual professions are over four times more likely to be classified as morbidly obese compared with those in professional employment”.

So how about voluntary exercise? Plenty of people argued that, as we drive rather than walk or cycle, are stuck to our screens and order our groceries online, we exercise far less than we did. It seems to make sense – so here comes the next surprise. According to a long-term study at Plymouth University, children’s physical activity is the same as it was 50 years ago. A paper in the International Journal of Epidemiology finds that, corrected for body size, there is no difference between the amount of calories burned by people in rich countries and those in poor ones, where subsistence agriculture remains the norm. It proposes that there is no relationship between physical activity and weight gain. Many other studies suggest that exercise, while crucial to other aspects of good health, is far less important than diet in regulating our weight. Some suggest it plays no role at all as the more we exercise, the hungrier we become.

Other people pointed to more obscure factors: adenovirus-36 infection, antibiotic use in childhood and endocrine-disrupting chemicals. While there is evidence suggesting they may all play a role, and while they could explain some of the variation in the weight gained by different people on similar diets, none appears powerful enough to explain the general trend.

So what has happened? The light begins to dawn when you look at the nutrition figures in more detail. Yes, we ate more in 1976, but differently. Today, we buy half as much fresh milk per person, but five times more yoghurt, three times more ice cream and – wait for it – 39 times as many dairy desserts. We buy half as many eggs as in 1976, but a third more breakfast cereals and twice the cereal snacks; half the total potatoes, but three times the crisps. While our direct purchases of sugar have sharply declined, the sugar we consume in drinks and confectionery is likely to have rocketed (there are purchase numbers only from 1992, at which point they were rising rapidly. Perhaps, as we consumed just 9kcal a day in the form of drinks in 1976, no one thought the numbers were worth collecting.) In other words, the opportunities to load our food with sugar have boomed. As some experts have long proposed, this seems to be the issue.

The shift has not happened by accident. As Jacques Peretti argued in his film The Men Who Made Us Fat, food companies have invested heavily in designing products that use sugar to bypass our natural appetite control mechanisms, and in packaging and promoting these products to break down what remains of our defences, including through the use of subliminal scents. They employ an army of food scientists and psychologists to trick us into eating more than we need, while their advertisers use the latest findings in neuroscience to overcome our resistance.

They hire biddable scientists and thinktanks to confuse us about the causes of obesity. Above all, just as the tobacco companies did with smoking, they promote the idea that weight is a question of “personal responsibility”. After spending billions on overriding our willpower, they blame us for failing to exercise it.

To judge by the debate the 1976 photograph triggered, it works. . .

Continue reading.

Written by LeisureGuy

15 August 2018 at 3:31 pm

How America Convinced the World to Demonize Drugs

J.S. Rafaeli has an interesting article in Vice, with this subhead:

Much of the world used to treat drug addiction as a health issue, not a criminal one. And then America got its way.

The article begins:

In Baltimore, a young black man is sent to prison for felony cannabis possession. In Glasgow, Scotland, an apartment door is kicked in by the drugs squad. In Afghanistan, a field of poppies is incinerated from the air. In Mexico, police corrupted by drug cartels are implicated in disappearances and massacres.

The War on Drugs is generally presented as a global phenomenon. Each country has its own drug laws and enforces them as they see fit. Despite small regional differences, the world—we are told—has always been united in addressing the dangers of illicit drug use through law enforcement.

This is a lie.

When one traces back the history of what we now call the War on Drugs, one discovers it has a very specific origin: the United States. The global development of the drug war is inseparable from the development of US imperialism, and indeed, is a direct outgrowth of that imperialism.

Prior to the 19th century, drugs now illegal were widely used across the world. Remedies derived from opium and cannabis were used for pain relief, and less widely for “recreation.” Queen Victoria herself was fond of both opium and cannabis, before being introduced to cocaine later in life.

Then came the American railroads.

Thousands of Chinese workers came to America during the mid-1800s to build the Central Pacific Railroad. Once the track was complete, however, they immediately became regarded as a threat to white American workers. In 1882, Congress passed the Chinese Exclusion Act, the only US law to ever successfully ban immigration solely on the basis of race.

One method of stirring up anti-Chinese hatred was to attack the practice of opium smoking. Although morphine and laudanum were popular as a medicine throughout the US, Chinese opium was seen as a threat to American Christian morality, and particularly to American Christian women.

By 1881, as the Exclusion Act was being debated in Congress, reports began flooding out of San Francisco of opium dens where “white women and Chinamen sit side by side under the effects of this drug—a humiliating sight to anyone with anything left of manhood.”

Newspaper editorials thundered that the Chinese opium menace must be wiped out lest it “decimate our youth, emasculate the coming generation, if not completely destroy the population of our coast,” and that for white Americans, smoking opium was “not at all consistent with their duties as Capitalists or Christians.”

Crucially, however, the first modern prohibition regime was not founded in America itself, but in its first overseas colony. In 1898, America conquered the Philippines in the Spanish–American War. Charles H. Brent, the openly racist Episcopal bishop of the Philippines, despised opium users, and appealed to President Roosevelt to ban this “evil and immoral” habit. By 1905, Brent had succeeded in installing the first American prohibition regime—not in the US itself, but in the Philippines.

Unsurprisingly, the ban failed. Bishop Brent decided that continued opium use must be the fault of the booming trade in China, and wrote again to President Roosevelt, urging that the US had a duty to “promote some movement that would gather in its embrace representatives from all countries where the traffic and use of opium is a matter of moment.” The idea of international control of the drug trade had been born.

In the American debate, drug addiction had been framed as an infection and contamination of white America by foreign influences. Now, that vision was internationalized. To protect white American moral purity, the supply of drugs from overseas had to be curtailed at their source. As the campaigner Richmond P. Hobson had it, “like the invasions and plagues of history, the scourge of narcotic drug addiction came out of Asia.”

In 1909, America succeeded in convening the first International Commission on Opium in Shanghai. Representing the US was Bishop Brent and the doctor Hamilton Wright, who was to become a major force in the American prohibitionist movement. For the next century, almost every major international conference and commission on drug control was formed through American pressure and influence.

Interestingly, despite what we are told about the “special relationship,” the country that offered the most consistent and organized resistance to the American drive toward drug prohibition was the United Kingdom. Time and again, Great Britain diplomatically frustrated American attempts to impose prohibition regimes and establish international protocols.

This was partly because the British were themselves operating lucrative opium monopolies in their own overseas colonies, but also because they resented “overtones of high-mindedness and superior virtue.” Britain had its own system of dealing with drug addiction—treating it as a medical rather than a law enforcement issue—and, for a long time, resisted the moralizing hysteria of the American approach.

But it was difficult for the US to push the prohibition of drugs on the rest of the world while not enforcing it itself. Wright began spearheading a fresh campaign for full drug prohibition within the US—once again built almost entirely on racial prejudice.

But this time, a new drug had emerged to capture America’s fevered imagination, with a fresh racial minority to persecute for using it. The drug was cocaine, and the minority was African Americans. In 1910, Wright submitted a report to the Senate stating that “this new vice, the cocaine vice… has been a potent incentive in driving the humbler negroes all over the country to abnormal crimes.”

There followed an explosion of headlines linking black people to cocaine use and criminality. The New York Times ran a typical story under the headline “NEGRO COCAINE FIENDS—NEW SOUTHERN MENACE.” The story tells of “a hitherto inoffensive negro” who had reportedly taken cocaine and been sent into a frenzy. The local police chief was forced to shoot him several times to bring him down. Cocaine, it was implied, was turning black men into superhuman brutes. As the medical officer quoted in the article put it, “the cocaine nigger sure is hard to kill.”

This hysteria resulted in the Harrison Narcotics Tax Act of 1914, instituting the prohibition of drugs across the United States. Over the next 50 years, America would aggressively seek to internationalize its form of prohibition across the world. . .

Continue reading. There’s much more, to America’s shame.

Written by LeisureGuy

15 August 2018 at 2:43 pm

Social Connection Makes a Better Brain

This article by Emily Esfahani Smith in the Atlantic is from 2013, but it caught my eye today:

Matthew Lieberman, a distinguished social psychologist and neuroscientist, basically won the lottery. This past summer, he was offered three million dollars for an academic position—one million in raw income and two to do lab research. That’s a king’s ransom for a psychology professor. On average, psychology professors make less than six figures and rely on a patchwork of modest grants to sustain their research. All Lieberman had to do was spend four months this year and next year in Moscow, a nice enough city, doing some research—which he would have done anyway at home at UCLA.

But there was a catch. He would have to be away from his wife Naomi and seven-year-old son Ian for those eight months. They could not join him in Moscow. He had a basic trade-off problem, one that kept him up for many nights: Should I take the money and give up those eight months with my family or should I stay home and give up the money and research opportunities? In one form or another, we’ve all faced this dilemma, if on a more modest scale. Do you work late tonight or join your family for dinner? Do you go to the conference or to your friend’s wedding? Do you prioritize your career or your relationships?

Lieberman’s new book Social: Why Our Brains Are Wired to Connect hits the shelves this month. It’s a book about relationships and why relationships are a central—though increasingly absent—part of a flourishing life. Lieberman draws on psychology and neuroscience research to confirm what Aristotle asserted long ago in his Politics: “Man is by nature a social animal … Anyone who either cannot lead the common life or is so self-sufficient as not to need to, and therefore does not partake of society, is either a beast or a god.”

Just as human beings have a basic need for food and shelter, we also have a basic need to belong to a group and form relationships. The desire to be in a loving relationship, to fit in at school, to join a fraternity or sorority, to avoid rejection and loss, to see your friends do well and be cared for, to share good news with your family, to cheer on your sports team, and to check in on Facebook—these things motivate an incredibly impressive array of our thoughts, actions, and feelings.

Lieberman sees the brain as the center of the social self. Its primary purpose is social thinking. One of the great mysteries of evolutionary science is how and why the human brain got to be so large. Brain size generally increases with body size across the animal kingdom. Elephants have huge brains while mice have tiny ones. But humans are the great exception to this rule. Given the size of our bodies, our brains should be much smaller—but they are by far the largest in the animal kingdom relative to our body size. The question is why.

Scientists have debated this question for a long time, but the research of anthropologist Robin Dunbar is fairly conclusive on this point. Dunbar has found that the strongest predictor of a species’ brain size—specifically, the size of its neocortex, the outermost layer—is the size of its social group. We have big brains in order to socialize. Scientists think the first hominids with brains as large as ours appeared about 600,000-700,000 years ago in Africa. Known as Homo heidelbergensis, they are believed to be the ancestors of Homo sapiens and the Neanderthals. Revealingly, they appear to be the first hominids to have had a division of labor (they worked together to hunt) and central campsites, and they may have been the first to bury their dead.

One of the most exciting findings to emerge from neuroscience in recent years underlines the brain’s inherently social nature. When neuroscientists monitor what’s going on in someone’s brain, they are typically interested in what happens in it when people are involved in an active task, like doing a math problem or reaching for a ball. But neuroscientists have looked more closely at what the brain does during non-active moments, when we’re chilling out and the brain is at rest. Every time we are not engaged in an active task—like when we take a break between two math problems—the brain falls into a neural configuration called the “default network.” When you have down time, even if it’s just for a second, this brain system comes on automatically.

What’s remarkable about the default network, according to Lieberman’s research, is that it looks almost identical to another brain configuration—the one used for social thinking or “making sense of other people and ourselves,” as he writes: “The default network directs us to think about other people’s minds—their thoughts, feelings, and goals.” Whenever it has a free moment, the human brain has an automatic reflex to go social. Why would the brain, which forms only 2 percent of our body weight but consumes 20 percent of its energy, use its limited resources on social thinking, rather than conserving its energy by relaxing?

“Evolution has made a bet,” Lieberman tells me, “that the best thing for our brain to do in any spare moment is to get ready for what comes next in social terms.”

Evolution only makes bets if there are payoffs—and when it comes to being social, there are many benefits. Having strong social bonds is as good for you as quitting smoking. Connecting with other people, even in the most basic ways, also makes you happier—especially when you know they need your help.

One study of adults found that the brain’s reward center, which turns on when people feel pleasure, was more active when people gave $10 to charity than when they received $10. In another study, comforting someone in distress activated the reward center in a powerful way. Couples were brought into the lab and the girlfriend was placed inside a brain scanner while the boyfriend sat in a chair right next to her. In some cases, the boyfriend would receive a painful electrical shock.

The girlfriend, who knew when her boyfriend was being shocked, was instructed to either hold her boyfriend’s hand or to hold onto a small ball. When the scientists looked at the girlfriend’s brain activity, they found that her reward system was active when she was holding the hand of her boyfriend both when he was being shocked and when he wasn’t in pain—but it was most active when she held his hand as he was being shocked. Holding your boyfriend’s hand feels nice, but it’s especially meaningful when you know that he needs your love and affection.

***

When economists put a price tag on our relationships, we get a concrete sense of just how valuable our social connections are—and how devastating it is when they are broken. If you volunteer at least once a week, the increase to your happiness is like moving from a yearly income of $20,000 to $75,000. If you have a friend that you see on most days, it’s like earning $100,000 more each year. Simply seeing your neighbors on a regular basis gets you $60,000 a year more. On the other hand, when you break a critical social tie—here, in the case of getting divorced—it’s like suffering a $90,000 per year decrease in your income.

You don’t have to be a social scientist to know how badly a breakup hurts. . .

Continue reading.

Written by LeisureGuy

15 August 2018 at 11:53 am
