Later On

A blog written for those whose interests more or less match mine.

Archive for the ‘Science’ Category

AlphaGolem

In looking at the current state of AI and thinking of its future, John Henry and the steam drill come to mind. John Cornwell, director of the Science & Human Dimension Project at Jesus College, Cambridge, writes in Aeon:

The game of Go, which has a 3,000-year history in China, is played by two people on a board with counters, or stones, in black and white. The aim is to defeat one’s opponent by surrounding his territory. Metaphorically, the loser is choked into submission by the winner. At a match held in Seoul in South Korea, on 12 March 2016, the world Go champion Lee Sedol, observed by hundreds of supporters, and millions of spectators on television, slumped in apparent despair on being defeated by his opponent: a machine.

Go is a boardgame like no other. It is said to reflect the meaning of life. There are a prodigious number of potential moves – more, it is said, than all the particles in the known Universe. Serious Go players train virtually full-time from the age of five; they think of the game as an art form, and a philosophy, demanding the highest levels of intelligence, intuition and imagination. The champions are revered celebrities. They speak of the game as teaching them ‘an understanding of understanding’, and refer to original winning moves as ‘God’s touch.’

Lee’s face, as he lost the third of five games, and hence the match, was a picture of sorrow. It was as if he had failed the entire human race. He was beaten by AlphaGo, a machine that works on deeply layered neural nets that mimic the human brain and nervous system. The engineers and artificial intelligence (AI) experts who created AlphaGo admit that they do not understand how the machine’s intuition works. If melancholy is a consequence of loss, what was mourned that day was the demise of something uniquely special about human nature.

AlphaGo was designed at the AI research lab DeepMind, a subsidiary of the powerful Google corporation. DeepMind’s spokespeople say that this is just the beginning: they liken their research programmes to the Apollo Moon shot, or the Hubble telescope. The company has recruited 700 technicians, of whom 400 are post-doc computer scientists. They look ahead to the day when AI machines will be employed to solve the most impenetrable and recalcitrant problems in science, health, the environment … the Universe.

DeepMind scientists were thrilled with their success on 12 March 2016. Their glee recalled Dr Frankenstein’s – ‘it’s alive!… it’s alive!’ – in the 1931 movie directed by James Whale. Meanwhile, the emotions of Lee and his supporters bring to mind the pervasive atmosphere of melancholy in Mary Shelley’s novel; one commentator spoke of their ‘heavy sadness’. For his part, Lee had symbolically undergone the fate of Frankenstein’s brother William, throttled in the depths of a wood by the monster.

Cathartic foreboding is familiar in countless stories of hubris, from the original Prometheus myth to Frankenstein (1818) and on to the explosion of 20th- and 21st-century sci-fi literature and movies. But it is melancholy that haunts the imagined spectacle of humans rivalling God by devising creatures made in their own image. For Frankenstein’s monster, as for his creator Victor, the consciousness of having created a distorted human likeness lies at the heart of the unfolding misery and violence. ‘I am malicious because I am miserable,’ says the monster. ‘Am I not shunned and hated by all mankind?’ In the absence of any fatherly kindness from Frankenstein, his sorrow turns to hatred and murderous revenge: ‘If I cannot inspire love, I will cause fear.’

Alex Garland’s film Ex Machina (2014) is a recent manifestation of disastrous hubris in the creator-creature theme. It features Nathan, a billionaire genius AI scientist and contemporary Dr Frankenstein, who lives alone in a remote research facility where he constructs female robots. His latest artifact is Ava, a winsome AI android with suspected consciousness. Nathan wants to test her capacity for seduction. He recruits a young and impressionable computer scientist, Caleb, on the pretext of conducting a series of Turing tests: will Caleb mistake the machine for a human being? Will he fall in love with her? The answers, respectively, are no and yes.

Ava, for her part, manipulates Caleb for her own hidden, long-term aims. With the help of a fellow robot, she murders Nathan and escapes, leaving Caleb imprisoned and alone, facing starvation and death. Caleb elicits our contempt and pity. But Ava, despite her early expressions of frustrated longing (which suggest the sadness of a Lady of Shalott, ‘half sick of shadows’, but which are in fact a tactic of deceit), is a warped version of the prisoner who overcomes many obstacles to escape a Plato’s Cave of unreal androids. At the end of the film, Ava is helicoptered away from the facility to the world of real people. A sense of foreboding haunts the film from the outset, revealed in Nathan’s prognostication of AI’s future: ‘One day,’ he says, ‘the AIs will look back on us the same way we look at fossil skeletons from the plains of Africa. An upright ape, living in dust, with crude language and tools, all set for extinction.’

The enormity of AI’s challenge, and the melancholy it generates, was anticipated more than half a century ago by Norbert Wiener, the pioneer of cybernetics. Wiener was an atheist, yet in God and Golem, Inc. (1964) he predicts a set of AI circumstances, theological and eschatological in their scope, with premonitions of dark physical and metaphysical risk. He laid down a principle that self-learning systems are capable, in theory, not only of unprogrammed learning, but of reproducing themselves and evolving. Crucially, they will relate in independent ways with human beings. Wiener believed the risks attendant on playing God were dramatically exemplified in the 17th-century legend of the Golem of Prague, a huge, conscious humanoid, made of clay and powered by cabbalistic magic to protect the Jews of the city. The Golem, named Josef, soon revealed its potential for calamity. When instructed to heave water, it could not stop its task, and flooded the house. (There are premonitions here of the seminar-room joke and thought-experiment, in which an AI machine is briefed to make paperclips and cannot be stopped: eventually it wrecks the infrastructure of the planet and destroys the human race.) The Golem turns against the very people it was intended to protect, and kills them.

Wiener also emphasised the ability of self-learning machines to play games. Every kind of relationship, he argues, is reducible to a game. He saw the Golem myth as a game, and he expands on the idea to suggest that the Book of Job, that most melancholy of biblical stories, is another archetypal game: God and Satan competing to win the soul of the suffering prophet. Similarly, Wiener sees the struggle between God and Satan in John Milton’s epic poem Paradise Lost (1667) as a celestial game: Satan the melancholic fallen arch-fiend, eternally stricken with wounded merit, competing with God for possession of humankind:

Abashed the devil stood,
And felt how awful goodness is, and saw
Virtue in her shape how lovely – saw, and pined
His loss.

And that game will one day be repeated, Wiener predicted, when a human being pits herself against the ultimate machine. Fifty years ahead of time, Wiener foretold that researchers would build a machine to defeat the human champion of the most difficult boardgame ever devised. But this would be just the prelude to much greater extensions of the machines’ prowess. Proposing a general principle in cybernetics, Wiener wrote: ‘a game-playing machine may be used to secure the automatic performance of any function if the performance of this function is subject to a clear-cut, objective criterion of merit’. By clear-cut, he meant definable in a finite number of words or matrices. The systems would, in time, engage in ‘war and business’ which are conflicts ‘and as such, they may be so formalised as to constitute games with definite rules’. He might have included environment, food security, development, diplomacy.

In his conclusion, Wiener speculated that formalised versions of complex human planning and decisions were already being established to ‘determine the policies for pressing the Great Push Button and burning the Earth clean for a new and less humanly undependable order of things’. He was alluding to the probability that the decision for nuclear war would be initiated by a self-learning machine. The notion of the automatic Doomsday Machine had been dramatised that same year in Stanley Kubrick’s film Dr Strangelove (1964). For all its mordant humour, the movie is profoundly dark, ultimately dominated by despair.

Go players speak of the top players’ special ‘imagination’, a talent or faculty that DeepMind’s designers also claim for AlphaGo. But in what sense can a machine possess imagination?

An early hint of AI ‘imagination’ and its rationale can be found in a 2012 article published in Neuron, the neuroscience journal: ‘The Future of Remembering: Memory, Imagining and the Brain’ was authored by a team led by the psychologist Daniel Schacter at Harvard University. The article was ostensibly about Alzheimer’s, and it argued that sufferers lose not only memory but the ability to envisage future events and their consequences. It claimed that imagination is key to both memory and forward-thinking.

Schacter and his colleagues cite the work of Sir Frederic Bartlett, professor of psychology at the University of Cambridge from the 1920s, to tell us what memory is not. In 1932, Bartlett claimed that memory ‘is not the re-excitation of innumerable fixed, lifeless and fragmentary traces, but an imaginative reconstruction or construction’. His research was based on an experiment whereby volunteers were told a Native American legend known as ‘The War of the Ghosts’. It takes about seven minutes to recite; the volunteers were then asked over lapses of days, weeks and months to retell the story. Bartlett found that the volunteers engaged their imaginations to recreate the tale in various ways, based on their own social and personal experiences. Memory, in other words, is not a retrieval of inert bits of information from a database, but a dynamic reconstruction or recreation: an exercise in imagination.

In their article, Schacter and his team argue that neuroscientific studies of imagination, memory, forward-thinking and decisionmaking have much to contribute to AI research. The significance of this statement, in retrospect at least, is the fact that one of the article’s authors was Demis Hassabis, then of University College, London. Hassabis had studied computer science at Cambridge, worked in the development of computer games (including the bestselling Theme Park) and gained a doctorate in cognitive neuroscience. He had been thinking hard about the direction of travel – from the brain to the machine. It had been, he has said, a strategy for his research since as early as 1997, and would remain one through the next two decades. In July 2017, as CEO and co-founder of DeepMind, he tweeted: ‘Imagination is one of the keys to general intelligence, and also a powerful example of neuroscience-inspired ideas crossing over into AI.’

As Hassabis would explain on many occasions following the triumph of AlphaGo, the machine’s imagination consisted in its capacity to model future scenarios and the consequences of those scenarios at prodigious speeds and across a broad span of combinations, including its opponent’s potential moves. Furthermore, the operation of the neural nets meant that its ‘imagination’ was dynamic, productive, not inert and passive.
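The lookahead described above can be illustrated in miniature. AlphaGo’s actual system pairs deep neural networks with Monte Carlo tree search, which is far beyond a short sketch; the toy minimax below, on the simple game of Nim (take 1–3 stones; whoever takes the last stone wins), is only an assumed stand-in that shows the core idea of simulating future move sequences, including the opponent’s replies, and scoring their consequences.

```python
# Toy sketch of machine "imagination" as exhaustive lookahead.
# Nim rules assumed here: players alternate removing 1-3 stones;
# the player who takes the last stone wins.

def best_move(stones, maximizing=True):
    """Return (score, move): score is +1 if the player to move
    can force a win, -1 if the opponent can; move is the number
    of stones to take."""
    if stones == 0:
        # The previous player took the last stone and won,
        # so the player now to move has already lost.
        return (-1 if maximizing else 1), None
    outcomes = []
    for m in (1, 2, 3):
        if m <= stones:
            # "Imagine" this move, then the opponent's best reply.
            score, _ = best_move(stones - m, not maximizing)
            outcomes.append((score, m))
    # Pick the continuation with the best (or, for the opponent,
    # worst) consequence.
    return max(outcomes) if maximizing else min(outcomes)

score, move = best_move(10)
print(score, move)  # from 10 stones the first player can force a win
```

Real Go makes this brute-force enumeration impossible, which is why AlphaGo’s neural nets are needed to prune and evaluate the tree rather than exhaust it.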

The significance of machines mimicking the biological action of the brain and nervous system, as Hassabis framed it, was a metaphorical reversal of the more familiar direction of travel. Before the great leap forward in noninvasive brain imaging through the 1980s and ’90s (the so-called Decade of the Brain), it had been routine, from the early modern period on, to invoke machines to explain the mind-brain function: think of . . .

Continue reading.

Written by LeisureGuy

14 November 2018 at 9:30 am

We have to treat wildfires like actual fires

Glenn McGillivray, managing director of the Institute for Catastrophic Loss Reduction, writes in the Globe & Mail:

For the second time in about a year, multi-billion-dollar wildfires are turning California communities to ash as fast-moving flames race through tinder-dry forests and grasslands and ultimately feast on unnatural fuels. The wildfires in California last year caused about US$14-billion in insured losses.

Unless we want more of the same in the years ahead, we need to better understand the issue of wildfire getting into communities. We can gain at least some improved understanding by looking at what was learned from the major urban conflagrations in history.

In the distant past, several major cities have experienced large conflagrations caused by one thing or another, such as cows and earthquakes. Fires in London, New York, Toronto, Chicago and San Francisco led to many changes in how cities are designed, how buildings are constructed, and in fire education and safety.

These fires were likely largely viewed in technical terms and, thus, were seen as addressable, where measures could be put into place to prevent or, at the very least, reduce the risk of reoccurrence.

As a result, firewalls were placed within and between structures; openings were limited on the exposure sides of buildings; fire doors became common; buildings were outfitted with fire alarms, suppression equipment with dedicated water supplies and, from the late 19th century, sprinkler systems; less wood was used in construction; open flames were limited, and so on.

Parallel to these efforts came the rise of education programs to inform people about the risk of fire and actions they could take to limit ignitions and spread. Over time, both the frequency and severity of urban fires dropped precipitously, to the point where fires are no longer a major cause of death or the main cause of insured property damage in most industrialized countries.

But we have never really taken this approach with communities located close to forests and grasslands. Why?

First, wildfires are viewed as natural disasters, and there is a widespread view that nothing can be done – they occur at the whim of Mother Nature. The view that loss of life and property are inevitable when a hazard strikes leads to inaction when it comes to wildland fire. For some reason, we treat the prevention of wildfires differently than we treat the prevention of other fires. But, in the final analysis, fire is fire.

Second, people have a misconception about wildfires and nearby communities, believing that wildland fires roll through the forest, hit a built-up area and keep rolling. But what largely happens is that embers from the wildfire are blown far ahead of the fire front and ignite flammable materials located around structures.

Once one realizes that wildfires are not juggernauts that roll through towns like steamrollers, and that structural ignitions from wildfire embers are preventable, programs can be put into place to address the issue of flammability of individual structures, subdivisions and entire communities.

One big problem is that we may be talking too much to the wrong folks about this dynamic: to wildland fire experts, and not to structural fire experts, fire modellers and other urban fire experts.

Once a wildland fire gets into town, the fire stops being a forest fire and starts a new life as an urban fire, possibly becoming an urban conflagration or “firestorm” if enough structures are ignited. Once the fire hits town, it becomes a different fire, feeding on different fuels (such as structures and vehicles). A fire ecologist, for example, has no expertise in the mechanisms that lead to structural ignition and spread of fire in an urban setting.

We need to bring structural or urban fire departments, academics and other experts into the discussion and leverage their knowledge (of course, many are already involved in the discussion, but many are not). . .

Continue reading.

Written by LeisureGuy

13 November 2018 at 2:00 pm

An Italian Cosmologist Who Wanders in Dante’s Dark Wood

Siobhan Roberts writes in Quanta:

In 2004, the Italian theoretical cosmologist Valeria Pettorino wrote her doctoral thesis on “dark energy in generalized theories of gravity.” As a side project, she translated the opening lines of Dante’s Divine Comedy into a geometry problem.

“I felt there was mathematics already within Dante’s writing,” Pettorino said recently.

Dante’s epic poem, in Mark Musa’s translation, begins:

Midway along the journey of our life
I woke to find myself in a dark wood,
for I had wandered off from the straight path.

Pettorino’s translation reads:

Given a line segment AB of size equal to our life path, consider its midpoint M. If D is a man called Dante, D shall be coincident with M.
The segment AB shall be contained in a dark field DF.
Assuming that a circumference C exists, circumscribed to the dark field DF, verify that the straight line r is external to such circumference.

This reimagining, part of a creative writing group project, was published in a collection titled Faximile — an homage to admired authors and texts, in which the Pythagorean theorem became a story, The Iliad became a football match, and the Italian constitution was rendered in hendecasyllabic verse. “We liked the originals, and we wanted to play with them and understand them better,” Pettorino said.

She has approached cosmology in the same spirit, using storytelling from multiple angles as a guiding principle. After earning her Ph.D. in 2005, she traveled the world, hopping between institutions in Heidelberg, New York, Geneva and Valencia, as well as Naples, Turin and Trieste in her native Italy, alternating between observational, theoretical, methodological and statistical points of view in her study of the cosmos — a dark wood, rather like Dante’s. She considers all of these approaches necessary for unraveling the nature of dark matter and dark energy, little-understood substances that together comprise 95 percent of the universe.

It is perhaps not surprising that in 2016 Pettorino landed at the CosmoStat laboratory at CEA Saclay, a research institute 15 miles south of Paris. At CosmoStat, cosmologists and computer scientists collaborate to develop new statistical and signal-processing methods for interpreting the vast volumes of data acquired by modern telescopes. This summer, Pettorino helped complete the final analysis of data from the European Space Agency’s Planck space telescope, which mapped the early universe with unprecedented precision. Her main focus now is Euclid, the agency’s next major space telescope, set to launch in 2022. Euclid will gather 170 million gigabytes of data about billions of galaxies, slicing the universe at different epochs and tracking its evolution under dark influences.

Quanta Magazine spoke with Pettorino over Skype this summer as she helped organize the annual EuroPython conference for users of the Python programming language, among other extracurricular commitments. The interview has been condensed and edited for clarity.

You have many interests. Tell me how you became a cosmologist.

I hadn’t thought about cosmology at all when I started physics, and even then I wasn’t very convinced about physics in itself. But physics offered me a good opportunity to combine several different interests. At the time, I was living in Naples, my city. I really wanted to follow a path that would allow me to get to know people, live in different places, and learn languages. I certainly liked logic and mathematics. And I heard about physics from my uncle, Roberto Pettorino, who was a string theorist; he told me about strings, multiple dimensions, time travel. And I loved science fiction. The authors I read most were Philip José Farmer and Jack Vance — the stories had adventure, and different technologies, and they were very realistic, creating new worlds in great detail with things that don’t exist but could very easily have existed. I liked challenges. At that time, I was taking acting classes and creative writing classes. And then I just said, “Let’s do physics!” I was curious about the whole picture, and physics looked to me like a good combination of logic, of communication, of imagination. My main goal was to learn, to increase my knowledge, to satisfy my curiosity.

How did you eventually find cosmology?

I started physics as an undergraduate at the end of 1997, and then in 1998 there was the cosmic acceleration discovery, revealing that a lot of the universe was completely unknown, and this immediately attracted my curiosity. What happened was that independent observations by two different supernova research teams showed very surprising results: Cosmologists were expecting the universe to be expanding after the Big Bang, and since gravity attracts things toward each other, the expectation was that the universe’s expansion was decelerating. Evidence from supernova explosions showed that the expansion is, instead, accelerating — as if there is some extra form of energy that counteracts gravity and increases the velocity of the expansion. This is generically named “dark energy.”

Since 1998, many other experiments have confirmed the same picture: Normal atoms only account for about 5 percent of the total energy budget in the universe. There is an extra 25 percent that is in the form of “dark matter.” Dark matter still feels gravity, but we don’t observe it directly; it acts as a glue that allows structures, like galaxies and clusters of galaxies, to form. And then there is the rest — 70 percent — which is dark energy, and which should be responsible for cosmic acceleration.

The ever-elusive dark energy — what is it?

That’s still unclear. The simplest way to describe it is as a kind of energy whose density is constant everywhere in time and space, termed the “cosmological constant.” This is one new parameter added to the theory of general relativity, and in practice it fits the data very well — including the final data from the Planck space satellite. Unfortunately, the problem is that the cosmological constant is not well-understood theoretically. First, we cannot predict its value, and we need to have very precise initial conditions to end up with the “right” observed value of this constant. This is the fine-tuning problem. Secondly, the cosmological constant marks our epoch as a very special time within the evolution of the universe. The density of dark energy was completely negligible in the past compared to the density of dark matter (which was higher in the past, when the volume of the visible universe was smaller). In the future, however, dark energy will dominate over all species of matter, because the dark-matter density will continue to decrease as the universe expands. We happen to live in that epoch in which the cosmological constant is of roughly the same order as matter. That’s a big coincidence.
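The coincidence Pettorino describes follows from how the two densities scale with cosmic expansion: matter dilutes with the growing volume, while a cosmological constant does not. A minimal sketch, using round present-day density fractions consistent with the figures quoted earlier in the interview (about 30 percent matter including dark matter, 70 percent dark energy — assumed illustrative numbers, not a fit):

```python
# Matter density dilutes as the cube of the cosmic scale factor a
# (rho_m proportional to a**-3, with a = 1 today), while the
# cosmological constant's density stays fixed. Their ratio therefore
# sweeps over many orders of magnitude; only near a = 1 are the two
# comparable -- the "coincidence" in the text.

OMEGA_M = 0.3  # today's matter fraction (dark matter + atoms), assumed
OMEGA_L = 0.7  # today's dark-energy fraction, assumed

def density_ratio(a):
    """Ratio of dark-energy density to matter density at scale factor a."""
    rho_m = OMEGA_M * a ** -3  # matter dilutes with the expanding volume
    rho_l = OMEGA_L            # cosmological constant never dilutes
    return rho_l / rho_m

for a in (0.1, 0.5, 1.0, 2.0, 10.0):
    print(f"a = {a:5.1f}: rho_L / rho_m = {density_ratio(a):.4g}")
```

At a tenth of today’s size the ratio is of order a thousandth; at ten times today’s size it is of order a thousand. That the two densities are within a factor of a few of each other precisely now is what makes our epoch look special.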

So if this big coincidence doesn’t seem like a plausible storyline, then maybe a big modification is needed?

This lack of understanding about the cosmological constant has motivated researchers to look for alternative explanations. Cosmic acceleration could be caused by a new fluid, or a new particle whose density changes in time instead of being constant, or more than one particle, or more than one fluid. Or, cosmic acceleration could be the hint that our laws of gravity (namely, Albert Einstein’s theory of general relativity) need to be modified, particularly at very large scales.

Astrophysicists have already tested general relativity at the scale of the solar system. Models referred to as “modified gravity” try to modify general relativity at very large scales to account for cosmic acceleration. Some of these modified-gravity models have been excluded already, for example after the recent detection of gravitational waves. But there are still many models fitting current data, and no clear solution of the theoretical problems associated with the cosmological-constant scenario.

You’ve moved around a lot during your career. What effect has this had on your approach, being exposed to such a wide spectrum of people and ideas? . . .

Continue reading.

Written by LeisureGuy

13 November 2018 at 1:04 pm

Posted in Books, Daily life, Science

Stem Cells Remember Tissues’ Past Injuries

Monique Brouillette writes in Quanta:

Stem cells, famous for replenishing the body’s stockpile of other cell types throughout life, may have an additional, unforeseen ability to cache memories of past wounds and inflammation. New studies in the skin, gut and airways suggest that stem cells, often in partnership with the immune system, can use these memories to improve the responses of tissues to later injuries and pathogenic assaults.

“What we are starting to realize is that these cells aren’t just there to make tissue. They actually have other behavioral roles,” said Shruti Naik, an immunologist at New York University who has studied this memory effect in skin and other tissues. Stem cells, she said, “have an exquisite ability to sense their environment and respond.”

But when those responses go wrong, they may cause or contribute to a variety of enduring health problems involving chronic inflammation, such as severe allergies and autoinflammatory disorders.

Most tissues in the body contain small reservoirs of long-lived stem cells that can divide and specialize into myriad cell types as required. A stem cell in the skin, for example, can divide and give rise to lineages of cells that produce pigment or keratin, cells that form the sweat glands, or even the flexible barrier cells that allow the skin to stretch when the body moves. Serving as miniature factories for other cell types seemed to be stem cells’ primary function, and because they need to stay versatile, an underlying assumption has been that they have to be “blank slates,” unchanged by their histories. But now a new picture is starting to emerge.

In August, a Nature paper by Boston-area researchers offered fresh evidence for a kind of memory in stem cells, and some of the first for the phenomenon in humans. The team, led by the single-cell sequencing pioneer Alex Shalek and the immunologist José Ordovas-Montañes, both at the Massachusetts Institute of Technology, and the immunologist Nora Barrett at Brigham and Women’s Hospital, had set out to understand why some people suffer from debilitating chronic allergies to airborne dust, pollen and other substances. Most people experience at most a passing bout of coldlike symptoms from these irritants, but about 12 percent of the population has a severe reaction that persists all year and results in uncomfortable polyps or growths.

The work is the first step in the team’s larger quest to understand chronic inflammatory diseases, such as asthma and inflammatory bowel disease, in which the immune system continues to launch unnecessary attacks even after the initial challenge is over. These types of autoinflammatory disorders have long been blamed on the immune system, which is thought to overreact to a perceived threat. But the Boston team suspected there might be a cause in the tissue itself.

They began by taking cells from the inflamed nasal cavities of people with chronic sinusitis and comparing them to cells from healthy control subjects. After collecting about 60,000 cells from 20 different people, they sequenced RNA molecules taken from individual cells to determine which genes were active in them. In the stem cells from the sinusitis patients, they saw that many of the active genes were associated with allergic inflammation — in particular, the genes were targets of two immune mediators called interleukin 4 (IL-4) and interleukin 13 (IL-13). These are small molecules that immune cells like T and B lymphocytes typically use to communicate with one another.

The fact that the targeted genes were active in stem cells meant that the stem cells were apparently in direct communication with the immune system. A hunch that this communication might have an effect on the chronic nature of the disease led the researchers to a further set of experiments.

They removed cells from the airways of allergy patients, grew them in culture for about five weeks, and then profiled their gene activity. They found that the genes involved in allergic inflammation were still active, even though the allergic threat of dust and pollen was long gone. In addition, the researchers described many of the cells as “stuck” in a less-than-fully-mature state.

For Shalek, this result signals “that stem cells may transfer ‘memories’ to future generations of cells and this can cause near-permanent changes in the tissue they replenish.” This process invites comparisons to the immune system: B cells and T cells draw on their experiences with infections they have previously repelled to fight off new ones more effectively. Similarly, stem cells may retain a record of past assaults to sharpen their responses next time. But in the case of the allergy patients, that memory apparently becomes maladaptive. It may keep stem cells perpetually signaling to the immune system that an attacker is there, creating a feedback cycle that promotes inflammation and polyps.

According to Shalek, an understanding of which cells become “bad actors” and how their response propagates throughout a tissue should lead to more effective interventions. In fact, in their paper they were able to test the effects of an antibody that blocks IL-4 and IL-13 on the stem and secretory cells of an individual with nasal polyps. They noted a substantial restoration of gene expression associated with healthy tissue, a promising step toward the development of future therapies.

“This opens a new paradigm where we don’t only focus on the self-renewal potential of these cells but on their potential interaction with their surroundings,” said Semir Beyaz, an immunologist at Cold Spring Harbor Laboratory. Beyaz was not involved in the study by the Boston group but has made similar findings in the gut: In a paper published in Nature in 2016 he demonstrated that the intestines of mice on a high-fat diet produced a greater number of stemlike cells than did those of mice eating less fat. When dividing, the intestinal stem cells also seemed to add to their own numbers more frequently rather than producing more differentiated cells, a change that has been linked to diseases like cancer.

“Functionally, we are realizing that cells can be tuned,” Naik said. “Immunologists are starting to understand that immune reactions take place in tissues, and the way tissues respond to this is at the level of the stem cell.”

A few years ago, in collaboration with stem cell biologists, Naik looked at the effects of prior injury and inflammation on wound healing in mice, in the hope of understanding whether experience with inflammation affects stem cells. As described in their 2017 paper in Nature, she and her colleagues discovered that if patches of skin on mice were inflamed and allowed to heal, subsequent wounds to that same spot would heal 2.5 times as quickly, an effect that could last as long as six months.

In that experiment, Naik explained, the memory retained in the stem cells was beneficial because it was “tuning cells to be more powerful at healing wounds and regeneration.” But the flip side of this finding, as Shalek, Barrett and Ordovas-Montañes had observed, is that “if you teach [the cells] bad behaviors … they are going to remember those bad behaviors as well,” she said.

How the stem cells are storing these memories is unknown; in both the allergy and the wound healing studies, the mechanism appears to involve some modification of the DNA that makes certain genes more or less accessible to activation. Naik found that the DNA in the skin stem cells of the twice-wounded mice contained many regions that were less tightly packed, which usually indicates gene activity, and some of those open regions were retained long after the inflammation was over.

As Naik and her colleagues discussed recently in a review paper for Cell, stem cells in a wide range of tissues engage in a chemical “dialogue” with the immune system, with both sides — and potentially many other cell types — pooling their information to cope most effectively with changing conditions. . . .

Continue reading.

Written by LeisureGuy

12 November 2018 at 4:51 pm

Posted in Health, Medical, Science

Study finds major traumatic injury increases risk of mental-health diagnoses, suicide

leave a comment »

Wency Leung reports in the Globe & Mail:

Patients seriously injured in car accidents, violence and falls are at greater risk of developing a mental illness or dying by suicide, according to a new study that suggests those patients need better mental-health support.

The study, published on Monday in the Canadian Medical Association Journal, found that patients were 40 per cent more likely to be hospitalized with a mental-health diagnosis, such as depression, anxiety or alcohol-abuse disorder, after a major trauma than they were before being injured.

The rate of suicide among post-trauma patients was also significantly higher, at 70 suicides per 100,000 people per year, compared with 11.5 suicides per 100,000 among the general population.
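A quick back-of-envelope check of those two figures (both per 100,000 people per year, as cited in the article) shows just how large the gap is:

```python
# Rough comparison of the suicide rates quoted in the study coverage.
post_trauma_rate = 70.0   # suicides per 100,000 per year, post-trauma patients
general_rate = 11.5       # suicides per 100,000 per year, general population

ratio = post_trauma_rate / general_rate
print(f"Post-trauma suicide rate is about {ratio:.1f}x the general population's")
```

That works out to roughly six times the general-population rate, which is why the authors frame mental-health follow-up as a core part of trauma care.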

“Anyone involved in the acute management of these patients needs to be thinking about mental health as importantly as we think about the physical injuries that someone has suffered,” says lead author Christopher Evans, director of trauma at the Kingston Health Sciences Centre.

As an emergency physician, Dr. Evans says he has cared for trauma patients who have made remarkable physical recoveries, but struggled with depression, anxiety and self-harm in the months and years afterward. While hospitals and health-care providers generally do offer mental-health services to trauma patients, he says, they are not always provided in a co-ordinated, systematic way.

“There’s a need for offering [mental-health] supports to every patient so that it becomes standard practice,” he says.

He and his team analyzed Ontario administrative health data, held at the Institute for Clinical Evaluative Sciences, for more than 19,300 patients treated for major trauma in Ontario, between 2005 and 2010. The most common type of injury was blunt trauma, and the causes of injury varied, including traffic accidents, unintentional falls, assault and exposure to smoke or fire.

The researchers examined how many of the patients were admitted to hospital for a mental-health diagnosis in the five years prior to their traumatic injury, compared with in the five years post-injury. They found hospital admissions, particularly for alcohol abuse, drug abuse and major depressive disorders, were higher during the post-injury period.

The study did not examine the reasons for the increase. However, Dr. Evans suggests there may be multiple factors involved. It is possible some individuals with mental illnesses may not actually be diagnosed until they come into contact with health professionals when they experience major trauma, he says. But, he explains, people who have experienced major injuries often struggle with chronic pain and financial difficulties, and they may lose their ability to function independently, which can affect their mental health.

The latest findings echo a 2014 study by researchers from the University of Manitoba that showed individuals across Canada who had experienced major traumatic injuries were at greater risk of suicide than a matched control group. The authors of that earlier study noted that advancements in medical and intensive care over the past 30 years have led to more people surviving physical injuries, and as a result, patients are also having to deal with the consequences of their injuries, including pain, disability and financial issues.

Sarvesh Logsetty, one of the authors of the 2014 study, says health professionals are now paying more attention to patients’ needs beyond treating their physical injuries.

“Now it’s not acceptable just to say, ‘Okay, great, we saved your life, see you later,’” says

Continue reading.

Written by LeisureGuy

12 November 2018 at 3:41 pm

Is Something Neurologically Wrong With Donald Trump?

leave a comment »

James Hamblin writes in the Atlantic:

President Donald Trump’s decision to brag in a tweet about the size of his “nuclear button” compared with North Korea’s was widely condemned as bellicose and reckless. The comments are also part of a larger pattern of odd and often alarming behavior for a person in the nation’s highest office.

Trump’s grandiosity and impulsivity have made him a constant subject of speculation among those concerned with his mental health. But after more than a year of talking to doctors and researchers about whether and how the cognitive sciences could offer a lens to explain Trump’s behavior, I’ve come to believe there should be a role for professional evaluation beyond speculating from afar.

I’m not alone. Viewers of Trump’s recent speeches have begun noticing minor abnormalities in his movements. In November, he used his free hand to steady a small Fiji bottle as he brought it to his mouth. Onlookers described the movement as “awkward” and made jokes about hand size. Some called out Trump for doing the exact thing he had mocked Senator Marco Rubio for during the presidential primary—conspicuously drinking water during a speech. [photos of the awkward drinking in the article at the link – LG]

By comparison, Rubio’s movement was smooth, effortless. The Senator noticed that Trump had stared at the Fiji bottle as he slowly brought it to his lips, joking that Trump “needs work on his form. Has to be done in one single motion, and eyes should never leave the camera.”

Then in December, speaking about his national-security plan in Washington, D.C., Trump reached under his podium and grabbed a glass with both hands. This time he kept them on the glass the entire time he drank, and as he put the glass down. This drew even more attention. The gesture was like that of an extremely cold person cradling a mug of cocoa. Some viewers likened him to a child just learning to handle a cup.

Then there was an incident of slurred speech. Announcing the relocation of the American embassy in Israel from Tel Aviv to Jerusalem—a dramatic foreign-policy move—Trump became difficult to understand at a phonetic level, which did little to reassure many observers of the soundness of his decision.

Experts compelled to offer opinions on the nature of the episode were vague: The neurosurgeon Sanjay Gupta described it as “clearly some abnormalities of his speech.” This sort of slurring could result from anything from a dry mouth to a displaced denture to an acute stroke.

Though these moments could be inconsequential, they call attention to the alarming absence of a system to evaluate elected officials’ fitness for office—to reassure concerned citizens that the “leader of the free world” is not cognitively impaired, and on a path of continuous decline.

Proposals for such a system have been made in the past, but never implemented. The job of the presidency is not what it used to be. For most of America’s history, it was not possible for the commander in chief to unilaterally destroy a continent, or the entire planet, with one quick decision. Today, even the country’s missileers—whose job is to sit in bunkers and await a signal—are tested three times per month on their ability to execute protocols. They are required to score at least 90 percent. Testing is not required for their commander in chief to be able to execute a protocol, much less testing to execute the sort of high-level decision that would set this process in motion.

The lack of a system to evaluate presidential fitness only stands to become more consequential as the average age of leaders increases. The Constitution sets finite lower limits on age but gives no hint of an upper limit. At the time of its writing, septuagenarians were relatively rare, and having survived so long was a sign of hardiness and cautiousness. Now it is the norm. In 2016 the top three presidential candidates turned 69, 70, and 75. By the time of the 2021 inauguration, a President Joe Biden would be 78.

After age 40, the brain decreases in volume by about 5 percent every decade. The most noticeable loss is in the frontal lobes. These control motor functioning of the sort that would direct a hand to a cup and a cup to the mouth in one fluid motion—in most cases without even looking at the cup.

These lobes also control much more important processes, from language to judgment to impulsivity. Everyone experiences at least some degree of cognitive and motor decline over time, and some 8.8 percent of Americans over 65 now have dementia. An annual presidential physical exam at Walter Reed National Military Medical Center is customary, and Trump’s is set for January 12. But the utility of a standard physical exam—knowing a president’s blood pressure and weight and the like—is meager compared with the value of comprehensive neurologic, psychological, and psychiatric evaluation. These are not part of a standard physical.
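If the “about 5 percent every decade” figure is compounded naively (a simplification — the article does not claim the decline is uniform, and individual trajectories vary), the cumulative loss by the late 70s looks like this:

```python
# Illustrative compounding of the article's "~5 percent per decade after 40"
# brain-volume figure. This is a back-of-envelope sketch, not a clinical model.
def remaining_volume(age, rate_per_decade=0.05, start_age=40):
    """Fraction of age-40 brain volume remaining, assuming uniform decline."""
    decades = max(age - start_age, 0) / 10
    return (1 - rate_per_decade) ** decades

for age in (50, 70, 78):
    print(f"age {age}: ~{remaining_volume(age):.0%} of age-40 brain volume")
```

Under that simple assumption, a 78-year-old would retain roughly 82 percent of his age-40 brain volume — a reminder of why the article treats advanced age in high office as medically relevant rather than merely actuarial.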

Even if they were voluntarily undertaken, there would be no requirement to disclose the results. A president could be actively hallucinating, threatening to launch a nuclear attack based on intelligence he had just obtained from David Bowie, and the medical community could be relegated to speculation from afar.

Even if the country’s psychiatrists were to make a unanimous statement regarding the president’s mental health, their words may be written off as partisan in today’s political environment. With declining support for fact-based discourse and trust in expert assessments, would there be any way of convincing Americans that these doctors weren’t simply lying, treasonous “liberals”—globalist snowflakes who got triggered?

The downplaying of a president’s compromised neurologic status would not be without precedent. Franklin Delano Roosevelt famously disguised his paralysis from polio to avoid appearing “weak or helpless.” He staged public appearances to give the impression that he could walk, leaning on aides and concealing a crutch. Instead of a traditional wheelchair, he used an inconspicuous dining chair with wheels attached. According to the FDR Presidential Library, “The Secret Service was assigned to purposely interfere with anyone who tried to snap a photo of FDR in a ‘disabled or weak’ state.” . . .

Continue reading. There’s much more—and it’s both important and somewhat urgent.

Written by LeisureGuy

12 November 2018 at 2:47 pm

Shootings happen only when guns are present. America’s easy access to guns is enabling all these mass shootings

with 3 comments

German Lopez reports in Vox:

It’s been a horrible few months for mass shootings.

There’s the Thousand Oaks, California, bar shooting on Wednesday. There was the Tallahassee, Florida, yoga studio shooting last week. There was the Pittsburgh synagogue shooting late last month. There was the Florence County, South Carolina, shooting in which police officers were shot in October. There was the downtown Cincinnati shooting in September.

There were also a bunch of mass shootings that didn’t get much attention. The Gun Violence Archive counts as a mass shooting any event in which four or more people, excluding the shooter, are shot (though not necessarily killed) at the same time and place. By this count, there have been 311 mass shootings in 2018, resulting in 314 killed and 1,270 wounded. That’s almost a mass shooting a day.
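The “almost a mass shooting a day” claim is easy to verify against the calendar (the post is dated November 12, 2018, so the count covers roughly the first 316 days of the year):

```python
from datetime import date

# Gun Violence Archive count for 2018 as cited in the article,
# measured through the post's date of November 12, 2018.
shootings = 311
days_elapsed = (date(2018, 11, 12) - date(2018, 1, 1)).days + 1  # inclusive

print(f"{shootings / days_elapsed:.2f} mass shootings per day")
```

That comes out to about 0.98 per day — very close to one a day, as the article says.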

After these horrific events, politicians and pundits will latch onto all sorts of explanations for why these keep happening: It’s mental illness. It’s misogyny. It’s anti-Semitism. It’s some other form of extremism or hate.

In individual shootings, any of these can of course play a role. But when you want to explain why America sees so many of these mass shootings in general, none of these factors gives a satisfying answer. Only one thing does: America’s easy access to guns.

America does not have a monopoly on mental health issues, bigots, or extremists. What is unique about the US is that it makes it so easy for people with these issues to obtain a gun.

America’s gun problem

It comes down to two basic problems.

First, America has uniquely weak gun laws. Other developed nations at the very least require one or more background checks and almost always something more rigorous beyond that to get a gun, from specific training courses to rules for locking up firearms to more arduous licensing requirements to specific justifications, besides self-defense, for owning a gun.

In the US, even a background check isn’t a total requirement; the current federal law is riddled with loopholes and snared by poor enforcement, so there are many ways around even a basic background check. There are simply very few barriers, if any, to getting a gun in the US.

Second, the US has a ton of guns. It has far more than not just other developed nations, but any other country period. Estimated for 2017, the number of civilian-owned firearms in the US was 120.5 guns per 100 residents, meaning there were more firearms than people. The world’s second-ranked country was Yemen, a quasi-failed state torn by civil war, where there were 52.8 guns per 100 residents, according to an analysis from the Small Arms Survey. [see chart above – LG]

Both of these factors come together to make it uniquely easy for someone with any violent intent to find a firearm, allowing them to carry out a horrific shooting.

This is borne out in the statistics. The US has nearly six times the gun homicide rate of Canada, more than seven times that of Sweden, and nearly 16 times that of Germany, according to United Nations data for 2012 compiled by the Guardian. (These gun deaths are a big reason America has a much higher overall homicide rate, which includes non-gun deaths, than other developed nations.)

The research, compiled by the Harvard School of Public Health’s Injury Control Research Center, is also pretty clear: After controlling for variables such as socioeconomic factors and other crime, places with more guns have more gun deaths. Researchers have found this to be true not just with homicides, but also with suicides (which in recent years were around 60 percent of US gun deaths), domestic violence, and violence against police.

As a breakthrough analysis by UC Berkeley’s Franklin Zimring and Gordon Hawkins in the 1990s found, it’s not even that the US has more crime than other developed countries. This chart, based on data from Jeffrey Swanson at Duke University, shows that the US is not an outlier when it comes to overall crime:

Instead, the US appears to have more lethal violence — and that’s driven in large part by the prevalence of guns.

“A series of specific comparisons of the death rates from property crime and assault in New York City and London show how enormous differences in death risk can be explained even while general patterns are similar,” Zimring and Hawkins wrote. “A preference for crimes of personal force and the willingness and ability to use guns in robbery make similar levels of property crime 54 times as deadly in New York City as in London.”

This is in many ways intuitive: People of every country get into arguments and fights with friends, family, and peers. But in the US, it’s much more likely that someone will get angry at an argument and be able to pull out a gun and kill someone.

Researchers have found that stricter gun laws could help. A 2016 review of 130 studies in 10 countries, published in Epidemiologic Reviews, found that new legal restrictions on owning and purchasing guns tended to be followed by a drop in gun violence — a strong indicator that restricting access to guns can save lives. A review of the US evidence by RAND also linked some gun control measures, including background checks, to reduced injuries and deaths.

That doesn’t mean that  . . .

Continue reading.

Written by LeisureGuy

12 November 2018 at 8:54 am
