Later On

A blog written for those whose interests more or less match mine.

Archive for the ‘Science’ Category

When Police Kill

leave a comment »

Alex Tabarrok’s article, mentioned in an earlier post, appears in Marginal Revolution, and it begins:

When Police Kill is a 2017 book by criminologist Franklin Zimring. Some insights from the book:

Official data dramatically undercount the number of people killed by the police. Both the Bureau of Justice Statistics’ Arrest-Related Deaths and the FBI’s Supplemental Homicide Reports estimated around 400-500 police kills a year, circa 2010. But the two series have shockingly low overlap–homicides counted in one series are not counted in the other and vice versa. A statistical estimate based on the lack of overlap suggests a true rate of around 1000 police killings per year.
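The lack-of-overlap estimate Zimring relies on is the classic capture-recapture (Lincoln-Petersen) method: if two incomplete lists sample the same population roughly independently, the size of their overlap reveals the true total. A minimal sketch, with hypothetical counts chosen only to illustrate the arithmetic (the actual overlap figures are not given in the excerpt):

```python
# Lincoln-Petersen capture-recapture estimate: two independent,
# incomplete lists of the same events can be combined to estimate
# the true total from how many events appear in both lists.
def lincoln_petersen(n1, n2, overlap):
    """n1, n2: counts in each list; overlap: events in both lists."""
    if overlap == 0:
        raise ValueError("estimator undefined with zero overlap")
    return n1 * n2 / overlap

# Hypothetical numbers in the ballpark of the two official series:
# ~450 deaths in each list, with only ~200 appearing in both.
estimate = lincoln_petersen(450, 450, 200)
print(round(estimate))  # ≈ 1012, close to the ~1000/year figure
```

The intuition: the smaller the overlap between two supposedly comprehensive lists, the more cases both must be missing.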

The best data come from newspaper reports, which also show around 1000-1300 police killings a year (Zimring focuses his analysis on The Guardian’s database). Fixing the data problem should be a high priority. But the FBI cannot be trusted to do the job:

Unfortunately, the FBI’s legacy of passive acceptance of incomplete statistical data on police killings, its promotion of the self-interested factual accounts from departments, and its failure to collect significant details about the nature of the provocation and the nature of the force used by police suggest that nothing short of massive change in its orientation, in its legal authority to collect data and its attitude toward auditing and research would make the FBI an agency worthy of public trust and statistical reliability in regard to the subject of this book.

The FBI’s bias is even seen in its nomenclature for police killings–“justifiable homicides”–which some of them certainly are not.

The state kills people in two ways, executions and police killings. Executions require trials, appeals, long waiting periods and great deliberation and expense. Police killings are not extensively monitored, analyzed or deliberated upon and, until very recently, even much discussed. Yet every year, police kill 25 to 50 times as many people as are executed. Why have police killings been ignored?

When an execution takes place in Texas, everybody knows that Texas is conducting the killing and is accountable for its consequences. When Officer Smith kills Citizen Jones on a city street in Dallas, it is Officer Smith rather than any larger governmental organization…[who] becomes the primary repository of credit or blame.

We used to do the same thing with airplane crashes and medical mistakes–that is, look for pilot or physician error. Safety didn’t improve much until we started to apply systems thinking. We need a systems-thinking approach to police shootings.

Police kill far more males (95%) than females, a much larger ratio than for felonies. Police kill more whites than blacks, which is often forgotten, although not surprising because whites are a larger share of the population. Based on the Guardian data shown in Zimring’s Figure 3.1, whites and Hispanics are killed approximately in proportion to population. Blacks are killed at about twice their proportion of the population. Asians are killed less than in proportion to their population.
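The comparison being made here is a disparity ratio: a group's share of police killings divided by its share of the population, with values near 1 indicating proportional representation. A small sketch, using rough 2010s US census population shares and killing shares that are hypothetical, chosen only to match the "about twice" and "approximately in proportion" claims above:

```python
# Disparity ratio: share of police killings / share of population.
# A ratio near 1.0 means a group is killed in proportion to its
# population; 2.0 means at twice its population share.
def disparity_ratio(share_of_killings, share_of_population):
    return share_of_killings / share_of_population

# Population shares are rough census figures; killing shares are
# hypothetical, for illustration only.
groups = {"white": (0.50, 0.62), "black": (0.26, 0.13)}
for name, (killed, pop) in groups.items():
    print(name, round(disparity_ratio(killed, pop), 1))
```

With these illustrative inputs, the black ratio comes out at 2.0 and the white ratio at about 0.8, matching the qualitative pattern in the Guardian data Zimring describes.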

A surprising finding:

Crime is a young man’s game in the United States but being killed by a police officer is not.

The main reason for this appears to be that a disproportionate share of police killings come from disturbance calls, domestic and non-domestic about equally represented. A majority of the killings arising from disturbance calls are of people aged forty or more.

The tendency of both police and observers to assume that attacks against police and police use of force are closely associated with violent crime and criminal justice should be modified in significant ways to account for the disturbances, domestic conflicts, and emotional disruptions that frequently become the caseload of police officers.

A slight majority (56%) of the people who are killed by the police are armed with a gun, and another 3.7% seemed to have a gun. Police have reason to fear guns: 92% of killings of police are by guns. But 40% of the people killed by police don’t have guns, and other weapons are much less dangerous to police. In many years, hundreds of people brandishing knives are killed by the police while no police are killed by people brandishing knives. The police seem to be too quick to use deadly force against people significantly less well-armed than the police. (Yes, Lucas critique. See below on policing in a democratic society.)

Police kill more people than people kill police–a ratio of about 15 to 1–and the ratio has been increasing over time. Policing has become safer over the past 40 years, with a 75% drop in police killed on the job since 1976–the fall is greater than for crime more generally and is probably due to Kevlar vests. Kevlar vests are an interesting technology because they make police safer without imposing more risk on citizens. We need more win-win technologies. Although policing has become safer over time, the number of police killings has not decreased in proportion, which is why the “kill ratio” has increased.

A major factor in the number of deaths caused by police shootings is the number of wounds the victim receives. In Chicago, 20% of victims with one wound died, 34% with two wounds, and 74% with five or more wounds. Obvious, but it suggests a reevaluation of police training that teaches officers to empty their magazines. Zimring suggests that if the first shot fired was due to reasonable fear, the tenth might not be. A single, aggregational analysis:

…simplifies the task of police investigator or district attorney, but it creates no disincentive to police use of additional deadly force that may not be necessary by the time it happens–whether with the third shot or the seventh or the tenth.

It would be hard to implement this ex-post but I agree that emptying the magazine isn’t always reasonable, especially when the police are not under fire. Is it more dangerous to fire one or two shots and reevaluate than to fire ten? Of course, but given the number of errors police make this is not an unreasonable risk to ask police to take in a democratic society.
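As a rough sanity check on the Chicago wound figures quoted above, one can ask whether they are consistent with each wound being independently lethal. The independence model below is my own illustrative assumption, not Zimring's analysis; calibrating the per-wound lethality to the one-wound figure (20%) lets us compare predictions against the reported rates:

```python
# Simple independence model: if each wound kills with probability p,
# the chance of dying from k wounds is 1 - (1 - p)**k.
p = 0.20  # calibrated to the one-wound mortality rate

def mortality(k):
    """Predicted probability of death from k independent wounds."""
    return 1 - (1 - p) ** k

# Compare model predictions with the reported Chicago rates.
for k, observed in [(1, 0.20), (2, 0.34), (5, 0.74)]:
    print(f"{k} wound(s): predicted {mortality(k):.2f}, observed {observed:.2f}")
```

The model predicts 36% for two wounds and 67% for five or more, against observed rates of 34% and 74%. The fit is close enough to make the underlying point plain: each additional shot materially raises the chance of death, which is what gives force to Zimring's argument that the later shots deserve separate scrutiny.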

The successful prosecution of even a small number of extremely excessive force police killings would reduce the predominant perception among both citizens and rank-and-file police officers that police have what amounts to immunity from criminal liability for killing citizens in the line of duty.

Prosecutors, however, rely on the police to do their job and in the long-run won’t bite the hand that feeds them. Clear and cautious rules of engagement that establish bright lines would be more helpful. One problem is that police are protected because police brutality is common (somewhat similar to my analysis of riots).

The more killings a city experiences, the less likely it will be that a particular cop and a specific killing can lead to a charge and a conviction. In the worst of such settings, wrongful killings are not deviant officer behavior.

…clear and cautious rules of engagement will …make officers who ignore or misapply departmental standards look more blameworthy to police, to prosecutors, and to juries in the criminal process.

Police kill many more people in the United States than in other developed countries, even adjusting for crime rates (where the U.S. is less of an outlier than most people imagine). The obvious reason is that . . .

Continue reading.

Written by LeisureGuy

1 July 2020 at 2:21 pm

Something else to worry about: Snakes that fly

leave a comment »

They don’t in fact fly, any more than do flying squirrels. For both animals, the glide is the best they can do (though I would think with a good updraft they could ascend). Bats, in contrast, do fly.

Shamini Bundell writes at Nature.com:

Flying snakes glide through the air, flattening their bodies to provide lift. But as they glide they seem to swim, undulating their bodies from side to side. Now a team in the United States has used motion capture technology to study snake gliding in precise detail. Their models reveal that undulation is vital for the snake’s stability as they glide from branch to branch.

Read the paper here: https://www.nature.com/articles/s41567-020-0935-4

Written by LeisureGuy

1 July 2020 at 12:34 pm

Posted in Science

Lead Poisoning and Domestic Violence

leave a comment »

At Mother Jones Kevin Drum points out a tragedy of bad technology:

Alex Tabarrok reviews Franklin Zimring’s When Police Kill and notes the following:

A surprising finding:

Crime is a young man’s game in the United States but being killed by a police officer is not.

The main reason for this appears to be that a disproportionate share of police killings come from disturbance calls, domestic and non-domestic about equally represented. A majority of the killings arising from disturbance calls are of people aged forty or more.

I can’t fool you guys. You know what I’m going to say, don’t you? A likely explanation for this is that in 2015, when this data was collected, 20-year-olds were born around 1995 and grew up lead free. This means they were far less likely to act out violently than in the past. Conversely, 40-year-olds were born around 1975, right near the peak of the lead poisoning epidemic. They are part of the most violent, explosive generation in US history.

This is the saddest part of lead poisoning: it scars your brain development as a child and there’s no cure. If you’re affected by it and are more aggressive and violent as a result, you will be that way for the rest of your life.

The biggest villain in the lead poisoning of the country was leaded gasoline. After it had been phased out, George W. Bush flirted with bringing it back, but fortunately rationality in that case prevailed.

Written by LeisureGuy

1 July 2020 at 11:48 am

The US and its new “Can’t Do” spirit, expressed in a graph

leave a comment »

From this article:

Written by LeisureGuy

30 June 2020 at 12:53 pm

The 3 Weeks That Changed Everything; or, Botched Opportunities

with one comment

James Fallows writes in the Atlantic:

Coping with a pandemic is one of the most complex challenges a society can face. To minimize death and damage, leaders and citizens must orchestrate a huge array of different resources and tools. Scientists must explore the most advanced frontiers of research while citizens attend to the least glamorous tasks of personal hygiene. Physical supplies matter—test kits, protective gear—but so do intangibles, such as “flattening the curve” and public trust in official statements. The response must be global, because the virus can spread anywhere, but an effective response also depends heavily on national policies, plus implementation at the state and community level. Businesses must work with governments, and epidemiologists with economists and educators. Saving lives demands minute-by-minute attention from health-care workers and emergency crews, but it also depends on advance preparation for threats that might not reveal themselves for many years. I have heard military and intelligence officials describe some threats as requiring a “whole of nation” response, rather than being manageable with any one element of “hard” or “soft” power or even a “whole of government” approach. Saving lives during a pandemic is a challenge of this nature and magnitude.

It is a challenge that the United States did not meet. During the past two months, I have had lengthy conversations with some 30 scientists, health experts, and past and current government officials—all of them people with firsthand knowledge of what our response to the coronavirus pandemic should have been, could have been, and actually was. The government officials had served or are still serving in the uniformed military, on the White House staff, or in other executive departments, and in various intelligence agencies. Some spoke on condition of anonymity, given their official roles. As I continued these conversations, the people I talked with had noticeably different moods. First, in March and April, they were astonished and puzzled about what had happened. Eventually, in May and June, they were enraged. “The president kept a cruise ship from landing in California, because he didn’t want ‘his numbers’ to go up,” a former senior government official told me. He was referring to Donald Trump’s comment, in early March, that he didn’t want infected passengers on the cruise ship Grand Princess to come ashore, because “I like the numbers being where they are.” Trump didn’t try to write this comment off as a “joke,” his go-to defense when his remarks cause outrage, including his June 20 comment in Tulsa that he’d told medical officials to “slow the testing down, please” in order to keep the reported-case level low. But the evidence shows that he has been deadly earnest about denying the threat of COVID-19, and delaying action against it.

“Look at what the numbers are now,” this same official said, in late April, at a moment when the U.S. death toll had just climbed above 60,000, exceeding the number of Americans killed in the Vietnam War. By late June, the total would surpass 120,000—more than all American military deaths during World War I. “If he had just been paying attention, he would have asked, ‘What do I do first?’ We wouldn’t have passed the threshold of casualties in previous wars. It is a catastrophic failure.”

As an amateur pilot, I can’t help associating the words catastrophic failure with an accident report. The fact is, confronting a pandemic has surprising parallels with the careful coordination and organization that has saved large numbers of lives in air travel. Aviation is safe in large part because it learns from its disasters. Investigators from the U.S. National Transportation Safety Board go immediately to accident sites to begin assessing evidence. After months or even years of research, their detailed reports try to lay out the “accident chain” and explain what went wrong. In deciding whether to fly if I’m tired or if the weather is marginal, I rely on a tie-breaking question: How would this look in an NTSB report?

Controlling the risks of flight may not be as complex as fighting a pandemic, but it’s in the ballpark. Aviation is fundamentally a very dangerous activity. People are moving at high altitudes, at high speed, and in high volume, with a guarantee of mass casualties if things go wrong. Managing the aviation system involves hardware—airframes, engines, flight control systems—and “software,” in the form of training, routing, and coordinated protocols. It requires recognition of hazards that are certain—bad weather, inevitable mechanical breakdowns—and those that cannot be specifically foreseen, from terrorist episodes to obscure but consequential computer bugs. It involves businesses and also governments; it is nation-specific and also worldwide; it demands second-by-second attention and also awareness of trends that will take years to develop.

The modern aviation system works. From the dawn of commercial aviation through the 1990s, 1,000 to 2,000 people would typically die each year in airline crashes. Today, the worldwide total is usually about one-10th that level. Last year, before the pandemic began, more than 25,000 commercial-airline flights took off each day from airports in the United States. Every one of them landed safely.

In these two fundamentally similar undertakings—managing the skies, containing disease outbreaks—the United States has set a global example of success in one and of failure in the other. It has among the fewest aviation-related fatalities in the world, despite having the largest number of flights. But with respect to the coronavirus pandemic, it has suffered by far the largest number of fatalities, about one-quarter of the global total, despite having less than one-20th of the world’s population.

Consider a thought experiment: What if the NTSB were brought in to look at the Trump administration’s handling of the pandemic? What would its investigation conclude? I’ll jump to the answer before laying out the background: This was a journey straight into a mountainside, with countless missed opportunities to turn away. A system was in place to save lives and contain disaster. The people in charge of the system could not be bothered to avoid the doomed course.

The organization below differs from that of a standard NTSB report, but it covers the key points. Timelines of aviation disasters typically start long before the passengers or even the flight crew knew anything was wrong, with problems in the design of the airplane, the procedures of the maintenance crew, the route, or the conditions into which the captain decided to fly. In the worst cases, those decisions doomed the flight even before it took off. My focus here is similarly on conditions and decisions that may have doomed the country even before the first COVID-19 death had been recorded on U.S. soil.

What happened once the disease began spreading in this country was a federal disaster in its own right: Katrina on a national scale, Chernobyl minus the radiation. It involved the failure to test; the failure to trace; the shortage of equipment; the dismissal of masks; the silencing or sidelining of professional scientists; the stream of conflicting, misleading, callous, and recklessly ignorant statements by those who did speak on the national government’s behalf. As late as February 26, Donald Trump notoriously said of the infection rate, “You have 15 people, and the 15 within a couple of days is going to be down close to zero.” What happened after that—when those 15 cases became 15,000, and then more than 2 million, en route to a total no one can foretell—will be a central part of the history of our times.

But what happened in the two months before Trump’s statement, when the United States still had a chance of containing the disease where it started or at least buffering its effects, is if anything worse.

1. The Flight Plan

The first thing an airplane crew needs to know is what it will be flying through. Thunderstorms? Turbulence? Dangerous or restricted airspace? The path of another airplane? And because takeoffs are optional but landings are mandatory, what can it expect at the end of the flight? Wind shear? An icy runway? The biggest single reason flying is so much safer now than it was even a quarter century ago is that flight crews, air traffic controllers, and the airline “dispatchers” who coordinate with pilots have so many precise tools with which to anticipate conditions and hazards, hours or days in advance.

And for the pandemic? . . .

Continue reading.

Written by LeisureGuy

29 June 2020 at 6:13 pm

Why does DARPA work?

leave a comment »

Ben Reinhardt has an interesting post:

How can we enable more science fiction to become reality?

If you want to do something, it usually pays to study those who have done that thing successfully in the past. Asking ‘what is this outlier’s production function?’ can provide a starting point.

DARPA is an outlier organization in the world of turning science fiction into reality. Since 1958, it has been a driving force in the creation of weather satellites, GPS, personal computers, modern robotics, the Internet, autonomous cars, and voice interfaces, to name a few. However, it is primarily limited to the domain of defense technology – there are DARPA-style ideas that are out of its scope. Which emulatable attributes contributed to DARPA’s outlier results? What does a domain-independent “ARPA Model” look like? Is it possible to build other organizations that generate equally huge results in other domains by riffing on that model?

Gallons of ink have been spilled describing how DARPA works[1], but in a nutshell here is how DARPA works. Around 100 program managers (PMs) with ~5 year appointments create and run programs to pursue high-level visions like “actualize the idea of man-computer symbiosis.” In these programs they fund researchers at universities and both big and small companies to do research projects of different sizes. Collectively, groups working on projects are called performers. Top-level authority lies with a Director who ultimately reports to the Secretary of Defense.

DARPA has an incredibly powerful model for innovation in defense research, and I believe an abstract ‘ARPA Model’ could yield similar results in other domains. In this piece I’ll explain in detail why DARPA works. I’ll use that description to feel out and describe to the best of my ability a platonic ARPA Model. I’ll also distill some of the model’s implications for potential riffs on the model. Incidentally, I’m working on just such an imitator, and in future essays, I’ll explain why this model could be incredibly powerful when executed in a privately-funded context.

How to use this document

This document acts more like a collection of atomic notes than a tight essay – a DARPA-themed tapas if you will. The order of the sections is more of a guideline than a law so feel free to skip around. Throughout you will come across internal links that look like this. These links are an attempt to illustrate the interconnectedness of the ARPA Model.

There are two stand-alone pieces to accommodate your time and interest: a distillation, and the full work. The distillation is meant to capture and compress the main points of the full work. Each section of the distillation internally links to the corresponding section one level deeper so if you want more info and nuance you can get it.

I would rather this be read by a few people motivated to take action than by a broad audience who will find it merely interesting. In that vein, if you find yourself wanting to share this on Twitter or Hacker News, consider instead sharing it with one or two friends who will take action on it. Thank you for indulging me!

Distillation

Program Managers

At the end of the day the ARPA Model depends on badass program managers. Why is this the case? PMs need to think for themselves and go up and down the ladder of abstraction in an unstructured environment. On top of that they need to be effective communicators and coordinators because so much of their job is building networks. There’s a pattern that the abstract qualities that make “great talent” in different high-variance industries boil down to the ability to successfully make things happen under a lot of uncertainty. Given that pattern, the people who would make good DARPA PMs would also make good hedge fund analysts, first employees at startups, etc., so digging into people’s motivations for becoming a PM is important. More precise details about what makes a PM good prevent you from going after the exact same people as every other high-variance industry. When ‘talent’ isn’t code for ‘specialized training’ it means the role or industry has not been systematized. Therefore, despite all the talk here and elsewhere about ‘the ARPA Model’ we must keep in mind that we may be attributing more structure to the process than actually exists.

DARPA program managers pull control and risk away from both researchers and directors. PMs pull control away from directors by having only one official checkpoint before launching programs and pull control away from performers through their ability to move money around quickly. PMs design programs to be high-risk aggregations of lower-risk projects. Only 5–10 out of every 100 programs successfully produce transformative research, while only 10% of projects are terminated early. Shifting the risk from the performers to the program managers enables DARPA to tackle systemic problems where other models cannot.

The best program managers notice systemic biases and attack them. For example, noticing that all of the finite element modeling literature assumes a locally static situation and asking ‘what if it was dynamic?’ “The best program managers can get into the trees and still see the forest.” Obviously, this quality is rather fuzzy but leads to two precise questions:

  1. How do you find people who can uncover systemic biases in a discipline?
  2. How could you systematize finding systemic biases in a discipline?

The first question suggests that you should seek out heretics and people with expertise who are not experts. The second question suggests building structured frameworks for mapping a discipline and its assumptions.

A large part of a DARPA program manager’s job is focused network building. DARPA PMs network in the literal sense of creating networks, not just plugging into them. PMs meet disparate people working on ideas adjacent to the area in which they want to have an impact and bring them together in small workshops to dig into which possibilities are not impossible and what it would take to make them possible. The PMs host performer days — small private conferences for all the people working on different pieces of the program where performers can exchange ideas on what is working, what isn’t working, and build connections that don’t depend on the PM. J.C.R. Licklider[2] is a paragon here. He brought together all the crazy people interested in human-focused computing. On top of that, he also helped create the first computer science lab groups. PMs also build networks of people in different classes of organizations – government, academia, startups, and large companies. These connections smooth the path for technologies to go from the lab to the shelf.

DARPA PMs need to think for themselves, be curious, and have low ego. Why does this matter? When you are surrounded by smart, opinionated people the easy option is to either 100% accept what they’re saying because it’s eloquent and well-thought through or reject it outright because it sounds crazy or goes against your priors. Thinking for yourself allows you to avoid these traps. PMs need to be curious because building a complete picture of a discipline requires genuine curiosity to ask questions nobody else is asking. A large ego would lead to a program manager imposing their will on every piece of the program, killing curiosity and the benefits of top down problems and bottom up solutions.

DARPA is incredibly flexible with who it hires to be program managers. There are legal provisions in place that let DARPA bypass normal government hiring rules and procedures. Hiring flexibility is important because PMs are the sort of people who are in high demand, so they may be unwilling to jump through hoops. Bureaucracies ensure consistency through rules – minimal bureaucracy means there are no safeguards against hiring a terrible program manager so the principle that ‘A players hire A players and B players hire C players’ is incredibly important.

DARPA Program managers have a tenure of four to five years. This transience is important for many reasons. Transience can inculcate PMs against the temptation to play it safe or play power games because there’s only one clear objective – make the program work. You’re out regardless of success or failure. Explicitly temporary roles can incentivize people with many options to join because they can have a huge impact, and then do something else. There’s no implicit tension between the knowledge that most people will leave eventually and the uncertainty about when that will be. Regular program manager turnover means that there is also turnover in ideas.

Why do people become DARPA Program managers? From a career and money standpoint, being a program manager seems pretty rough. There are unique benefits though. It offers an outlet for people frustrated with the conservative nature of academia. The prospect of getting to control a lot of money without a ton of oversight appeals to some people. Patriotism is definitely a factor, and hard to replicate outside of a government. Being a PM can gain you the respect of a small, elite group of peers who will know what you did. Finally, there may be a particular technological vision they want to see out in the world and DARPA gives them the agency to make it happen in unique ways.

Incentives and Structure

Opacity is important to DARPA’s outlier success. Congress and the DoD have little default oversight into how a PM is spending money and running a program. Opacity removes incentives to go for easy wins or to avoid being criticized by external forces. Of course, opacity can also be abused in too many ways to list, so it’s important to ask: How does DARPA incentivize people not to abuse opacity? . . .

Continue reading.

Written by LeisureGuy

29 June 2020 at 4:17 pm

What’s up with my blogging

with 2 comments

A reader wrote inquiring about the change in pattern of my blogging (less frequent) and about the spareribs recipe I posted (do I still follow a whole-food plant-based diet?). I thought others might be wondering about that, so here’s what’s up with me on those accounts.

Blogging and its interruptions

My decision to acquire fluency in Esperanto has required a fair amount of time — here’s my current regimen. That post includes some detail on the reasons for the regimen.

The time spent in study means fewer blog posts. However, I now have the bit in my teeth and am determined to achieve fluency.

Whole-food plant-based diet

I still follow this diet, but my family and (I suspect) many of my readers do not. I certainly hope that my family and my readers emphasize the consumption of fresh vegetables (including leafy greens), dried beans, intact whole grains, fresh fruit, berries, and nuts and seeds, minimize their consumption of meat, dairy, and eggs, and try to avoid refined and “product” foods.

Still, I like food, and when I see a recipe like the St.-Louis-style spareribs (riparaĵo laŭ la stilo “St. Louis”), a recipe that is interesting, sounds tasty, and is easy, I post it for my meat-eating readers. Indeed, I might eat a rib or two on a special occasion, but I certainly continue to follow a diet that is almost exclusively whole-food and plant-based. If I don’t, my blood glucose goes up, since I no longer take any medication for that (or for high blood pressure, since I have also cut out added salt).

I do think it’s a good idea to cut out refined food (e.g., refined sugar and foods that contain it, ultra-processed foods, fruit juice) and move toward whole foods, and to minimize one’s consumption of meat, dairy, and eggs, for the reasons explained in Dr. Michael Greger’s book How Not to Die and his more recent book How Not to Diet. But I figure you can read those and decide for yourself based on the research findings he points out.

 

Written by LeisureGuy

29 June 2020 at 10:25 am

1 in 200 Men Are Direct Descendants of Genghis Khan

leave a comment »

That Genghis! Here’s the article.

Written by LeisureGuy

27 June 2020 at 1:19 pm

Posted in Science

Tagged with

CRISPR gene editing in human embryos wreaks chromosomal mayhem

leave a comment »

CRISPR is not so precise as we’ve been led to believe. Heidi Ledford writes in Nature:

A suite of experiments that use the gene-editing tool CRISPR–Cas9 to modify human embryos have revealed how the process can make large, unwanted changes to the genome at or near the target site.

The studies were published this month on the preprint server bioRxiv, and have not yet been peer-reviewed[1,2,3]. But taken together, they give scientists a good look at what some say is an underappreciated risk of CRISPR–Cas9 editing. Previous experiments have revealed that the tool can make ‘off target’ gene mutations far from the target site, but the nearby changes identified in the latest studies can be missed by standard assessment methods.

“The on-target effects are more important and would be much more difficult to eliminate,” says Gaétan Burgio, a geneticist at the Australian National University in Canberra.

These safety concerns are likely to inform the ongoing debate over whether scientists should edit human embryos to prevent genetic diseases — a process that is controversial because it creates a permanent change to the genome that can be passed down for generations. “If human embryo editing for reproductive purposes or germline editing were space flight, the new data are the equivalent of having the rocket explode at the launch pad before take-off,” says Fyodor Urnov, who studies genome editing at the University of California, Berkeley, but was not involved in any of the latest research.

Unwanted effects

Researchers conducted the first experiments using CRISPR to edit human embryos in 2015. Since then, a handful of teams around the world have begun to explore the process, which aims to make precise edits to genes. But such studies are still rare and are generally strictly regulated.

The latest research underscores how little is known about how human embryos repair DNA cut by the genome-editing tools — a key step in CRISPR–Cas9 editing, says reproductive biologist Mary Herbert at Newcastle University, UK. “We need a basic road map of what’s going on in there before we start hitting it with DNA-cutting enzymes,” she says.

The first preprint was posted online on 5 June by developmental biologist Kathy Niakan of the Francis Crick Institute in London and her colleagues. In that study, the researchers used CRISPR–Cas9 to create mutations in the POU5F1 gene, which is important for embryonic development. Of 18 genome-edited embryos, about 22% contained unwanted changes affecting large swathes of the DNA surrounding POU5F1. They included DNA rearrangements and large deletions of several thousand DNA letters — much greater than typically intended by researchers using this approach.

Another group, led by stem-cell biologist Dieter Egli of Columbia University in New York City, studied embryos created with sperm carrying a blindness-causing mutation in a gene called EYS. The team used CRISPR–Cas9 to try to correct that mutation, but about half of the embryos tested lost large segments of the chromosome — and sometimes the entire chromosome — on which EYS is situated.

And a third group,

Continue reading.

Written by LeisureGuy

26 June 2020 at 10:49 am

Posted in Science

And the floodgates open: US Covid-19 deaths jump — UPDATE: False alarm.


Apparently it doesn’t work simply to deny that Covid-19 is a problem. Denial does allow the US to avoid addressing the problem, particularly in states that still believe President Trump and his minions like Mike Pence, but that denial has a price:

Kevin Drum updated the chart to remove the jump. He notes:

UPDATE: I originally showed a sharp uptick in deaths, but it turns out this was because New Jersey reported a whole bunch of “probable” deaths all at once on June 25, which caused the spike. I’ve now corrected for that and the chart shows roughly the same plateau that we’ve had for the past few days.

Written by LeisureGuy

26 June 2020 at 9:57 am

The Pandemic’s Worst-Case Scenario Is Unfolding in Brazil


Julia Leite, Simone Preissler Iglesias, Martha Viotti Beck, and Ethan Bronner report in Bloomberg Businessweek:

On a recent afternoon in São Luís, the capital of Maranhão state in northeastern Brazil, Hosana Lima Castro sat on a flimsy plastic chair in front of her house as stray dogs sniffed potholes in the narrow street and a few neighborhood kids launched kites. The bar across the way, where a few months ago an acquaintance of Castro’s had been shot, was closed because of the pandemic.

Her job at a convenience store had disappeared too, so Castro, who’s 43 and shares her modest home with her father, two brothers, and two of her kids, had nowhere else to be. Although the novel coronavirus is widespread across Brazil’s northeast, she wasn’t wearing a mask. Nor was anyone else in her crowded neighborhood, where basic services have been so neglected that many residents have no access to clean water.

Castro’s brother Moises, a garbage collector, was the first in her family to get sick. Then her other brother, Luciano, did too, followed by their father, Francisco, who has diabetes. He suffered badly, struggling to breathe and running a soaring fever. But no one in Castro’s household went to the hospital—a place that some in São Luís believe makes patients sicker than when they came in, or worse. “That would be a death sentence,” Castro said.

As Asia, Western Europe, and parts of the U.S. emerge from what will hopefully be the worst of the pandemic, the virus in Brazil isn’t slowing down. Between late May and mid-June the country galloped past Spain, Italy, and the U.K. in total fatalities, which now exceed 51,000, the second-highest toll after the U.S. It’s second in overall cases too, with more than 1 million confirmed infections. With local officials now lifting quarantines despite continued growth in cases, it’s conceivable that, when Covid-19 finally recedes, Brazil will have been hit harder than any other country.

The reasons Brazil has made such a perfect host for the coronavirus are diverse and not yet fully understood. Like the U.S. it never issued nationwide rules for social distancing. Even if the government had wanted to, the rules would have been impossible to enforce in a country of 210 million where some states are larger in land area than France. That left local officials to do as they saw fit, issuing orders that varied wildly and sometimes contradicted each other. Poverty is certainly also part of the picture: In the densely packed favelas threaded through Brazilian cities, social distancing isn’t feasible, and not working means not eating, especially with the cash-strapped state unable to provide enough support. So is the dysfunction of the government. Overcrowding in public hospitals is a long-standing problem, as is graft among the people who are supposed to build new ones.

And then there’s President Jair Bolsonaro, a right-wing populist who came to power with a 2018 campaign that echoed Donald Trump’s pledges to “drain the swamp.” Since the coronavirus appeared in Brazil in late February, Bolsonaro has frequently obstructed efforts to contain it, demanding local officials abandon severe tactics like shuttering businesses, firing a health minister who pushed for a more aggressive response, and at one point limiting the disclosure of epidemiological data, saying that without the numbers there would “no longer be a story” on the evening news. (The Supreme Court ordered the government to resume releasing the figures.) While in the early weeks of the outbreak Bolsonaro’s intransigence resembled what was happening in the White House, even Trump grudgingly conceded the severity of the situation once the body count started to soar. Bolsonaro, meanwhile, has doubled down, insisting that the anti-malarial drug chloroquine is an effective treatment and claiming the number of cases is being exaggerated.

The president’s office did not respond to requests for comment on this story. In a written response to questions, Brazil’s Health Ministry said it’s acted aggressively to test patients and add intensive-care beds, protective gear, and ventilators across the country, spending more than 11 billion reais ($2.1 billion) so far.

Most local and state leaders have ignored Bolsonaro’s push to end lockdowns. Brazil has a federal system, and governors have wide powers over public health. But his continued dismissal of the pandemic’s seriousness has undermined distancing measures, while mismanagement and corruption at all levels of government have prevented help from getting to where it’s needed.

The consequences are severe. In Pará, a vast and underdeveloped state that neighbors Maranhão, Covid-19 has been killing about 50 out of every 100,000 citizens, more than double the national average. “I saw people getting to the hospital with family members already dead in the passenger seat, people given CPR on the sidewalks because the hospitals are full,” says Alberto Beltrame, the state health secretary. One day in April, he visited the morgue in the capital, Belém. “There were 120 bodies, scattered everywhere. It’s something you’d see in a war.” As the virus’s spread continues, Brazil may be turning into the true worst-case scenario, a laboratory for what happens when a deadly and little-understood pathogen spreads without much restriction.

Unlike past plagues, the coronavirus has spread in substantial part from the rich to the poor, with prosperous and well-connected global cities—Milan, London, New York—among the earliest hot spots outside China. The story in Brazil was similar. The first clusters emerged in São Paulo, Brazil’s financial capital, in early March as wealthy residents returned from overseas trips.

One of the first so-called superspreader events was the wedding of a social media star, held at a beach-side resort in Bahia state on March 7. A 27-year-old São Paulo lawyer named Pedro Pacífico—an Instagram personality himself, with hundreds of thousands of followers for a feed devoted mainly to literary recommendations—was one of the guests. He felt lousy when he got home, figuring he had an exceptionally bad hangover. When he found out that another guest had been diagnosed with Covid-19, Pacífico went for a test. He had it too—as, he gradually learned, did about 15 of his friends. But at that point, Pacífico says over a video call, the disease seemed more like a nuisance than a threat. He isolated at home, suggesting quarantine reading to his followers and trading virus stories with other well-off paulistanos. “It was the novelty of it,” Pacífico says. “No one saw it coming, or thought it would be so bad.”

On the weekend of the Bahia wedding Bolsonaro was in Florida, visiting Trump at Mar-a-Lago in Palm Beach. The two leaders’ entourages took no real precautions, shaking hands and hugging as usual. The first person to test positive after returning home was Fabio Wajngarten, Bolsonaro’s communications chief. As everyone who deals with him knows, Wajngarten is what Jerry Seinfeld would call a close talker, with a habit of leaning in when he speaks. Five of the eight people who sat at his table at a Mar-a-Lago dinner tested positive, and in all 30 people on the trip got sick. One was Alexandre Fernandes, an athletic 44-year-old who’s developing a grain-export terminal in southern Brazil. After four days isolating in his apartment, Fernandes was so weak he couldn’t walk to the bathroom. He went to the hospital, where he was placed in intensive care. “I couldn’t pull the covers up in bed,” he says. At one point doctors thought he wouldn’t make it: “The nurse had to help me hold the phone so I could Facetime with my daughters to say goodbye.”  . . .

Continue reading. There’s much more.

Written by LeisureGuy

25 June 2020 at 10:34 am

Facebook is basically run by scum


Judd Legum reports at Popular Information:

Facebook is “aiding and abetting the spread of climate misinformation,” said Robert Brulle, an environmental sociologist at Drexel University. “They have become the vehicle for climate misinformation, and thus should be held partially responsible for a lack of action on climate change.”

Brulle was reacting to Facebook’s recent decision, made at the request of climate science deniers, to create a giant loophole in its fact-checking program. Last year, Facebook partnered with an organization, Science Feedback, that would bring in teams of Ph.D. climate scientists to evaluate the accuracy of viral content. It was an important expansion of the company’s third-party fact-checking program.

But now Facebook has reportedly decided to allow its staffers to overrule the climate scientists and make any climate disinformation ineligible for fact-checking by deeming it “opinion.”

The organization that requested the change, the CO2 Coalition, is celebrating, E&E news reported on Monday. The group, which has close ties to the fossil fuel industry, says its views on climate change are increasingly ignored by the mainstream media. Now it plans to use Facebook to aggressively push climate misinformation on the public — without having to worry about fact checks from climate scientists.

How it all started

A column published in the Washington Examiner in August 2019 claimed that “climate models” were a “failure” that predicted exponentially more warming of the earth than has occurred. The piece, co-authored by notorious climate science denier Pat Michaels, was quickly shared more than 2,000 times on Facebook.

There was just one issue: It wasn’t true.

This is exactly the kind of mess that Facebook’s network of independent fact-checkers is supposed to solve. In May 2019, Facebook partnered with Science Feedback, a site dedicated to explaining “why information is or is not consistent with the science.” Science Feedback’s process is extremely rigorous. Each piece has multiple . . .

Continue reading.

Written by LeisureGuy

24 June 2020 at 6:33 pm

Turns out that, contrary to claims, it wasn’t safe after all: Roundup Maker to Pay $10 Billion to Settle Cancer Suits


Patricia Cohen reports in the NY Times:

Bayer, the world’s largest seed and pesticide maker, has agreed to pay more than $10 billion to settle tens of thousands of claims in the United States that its popular weedkiller Roundup causes cancer, the company said Wednesday.

The figure includes $1.25 billion to deal with potential future claims from people who used Roundup and may develop the form of cancer known as non-Hodgkin’s lymphoma in the years to come.

“It’s rare that we see a consensual settlement with that many zeros on it,” said Nora Freeman Engstrom, a professor at Stanford University Law School.

Bayer, a German company, inherited the legal morass when it bought Roundup’s manufacturer, Monsanto, for $63 billion in June 2018. It has repeatedly maintained that Roundup is safe and will continue to sell the product without adding a warning on the label.

The settlement, which covers an estimated 95,000 cases, was extraordinarily complex because it includes separate agreements with 25 lead law firms whose clients will receive varying amounts.

Most of the lawsuits filed early on were brought by homeowners and groundskeepers, although they account for only a tiny portion of Roundup’s sales. Farmers are the biggest customers, and many agricultural associations contend glyphosate, the key ingredient in Roundup, is safe and effective.

Bayer still faces at least 25,000 claims from plaintiffs who have not agreed to be part of the settlement.

“This is nothing like the closure they’re trying to imply,” said Fletch Trammell, a Houston-based lawyer who said he represented 5,000 claimants not taking part in the settlement. “It’s like putting out part of a house fire.”

But Kenneth R. Feinberg, the Washington lawyer who oversaw the mediation process, said he expected most current plaintiffs to eventually join the settlement. . .

Continue reading. There’s more.

Later in the article:

Part of the $1.25 billion will be used to establish an independent expert panel to resolve two critical questions about glyphosate: Does it cause cancer, and if so, what is the minimum dosage or exposure level that is dangerous?

If the panel concludes that glyphosate is a carcinogen, Bayer will not be able to argue otherwise in future cases — and if the experts reach the opposite conclusion, the class action’s lawyers will be similarly bound.

Pressure on Bayer for a settlement has been building over the past year after thousands of lawsuits piled up and investors grew more vocal about their discontent with the company’s legal approach.

Just weeks after the deal to purchase Monsanto was completed in 2018, a jury in a California state court awarded $289 million to Dewayne Johnson, a school groundskeeper, after concluding that glyphosate caused his cancer. Monsanto, jurors said, had failed to warn consumers of the risk.

In March 2019, a second trial, this time in federal court in California, produced a similar outcome for Edwin Hardeman, a homeowner who used Roundup on his property, and an $80 million verdict.

Two months later, a third jury delivered a staggering award of more than $2 billion to a couple, Alva and Alberta Pilliod, who argued that decades of using Roundup caused their non-Hodgkin’s lymphoma.

“Plaintiffs have gone to the plate three times and hit it out of the park,” Ms. Engstrom at Stanford said. “When you see they’re batting a thousand, and thousands more cases are waiting in the wings, that spells a very bleak picture for Monsanto.”

All three monetary awards were later reduced by judges and Bayer appealed the verdicts, but the losses rattled investors and the stock price tumbled sharply. Those cases are unaffected by Wednesday’s settlement.

Written by LeisureGuy

24 June 2020 at 12:56 pm

Scientific Panel on New Dietary Guidelines Draws Criticism From Health Advocates


The US is to a great extent controlled by major corporations, whose interest in the public’s welfare is minimal. Thus nutrition and food guidelines tend to be crafted to support the sale of (highly profitable) junk food rather than fresh vegetables and fruit, dried beans, and whole grains, because those are commodities, and the big bucks are in candy, soda pop, and highly processed foods.

Andrew Jacobs writes in the NY Times:

Are children who consume prodigious amounts of sugary drinks at higher risk for cardiovascular disease?

Can a diet rich in fruits, vegetables and legumes reduce the risk of hip fractures in older adults?

Should sweetened yogurts be a part of a healthy diet for toddlers making their first foray into solid food?

These and other nutrition-related questions will be addressed on Wednesday when a panel of 20 nutrition scientists, meeting publicly by videoconference, discusses suggested changes to the federal government’s Dietary Guidelines for Americans, recommendations that directly impact the eating habits of millions of people through food stamp policies, school lunch menus and the product formulations embraced by food manufacturers.

The guidelines, updated every five years by the Department of Agriculture and Department of Health and Human Services, have long prompted jousting among nutrition advocates and food industry interests, like pork producers and soda companies, seeking to influence the final document. But the process this year is especially fraught, given the Trump administration’s skepticism of science and its well-established deference to corporate interests.

More than half of this year’s panel has ties to the food industry, and the scientists leading newly created subcommittees on pregnant women, lactating mothers and toddlers have ties to the baby food industry.

Some groups have criticized federal officials for omitting questions about red meat and salt consumption from the 80 diet-related questions that panel members were charged with answering. And government watchdog groups have questioned the panel’s objectivity.

“Amid a pandemic made worse by diet-related disease that’s hitting black and Indigenous communities hardest, junk food corporations should be paying for their abuses, not stacking scientific panels and official drafting committees,” said Ashka Naik, the research director at the advocacy group Corporate Accountability.

In a statement, the Department of Agriculture said panel members were nominated by the public and that those chosen were required to submit financial disclosure forms that were reviewed by agency staff members for possible conflicts of interest. The entire process, it noted, has garnered 62,000 public comments.

“Throughout the entire 2020-2025 dietary guidelines process, we have relied on the nation’s leading scientists and dietary experts to inform our development of science-based guidelines and have taken numerous steps to promote transparency, integrity, and public involvement,” Pam Miller, the agency’s Food and Nutrition Service Administrator, said in the statement.

The final guidelines, scheduled for release later this year, shape federal food programs in schools, prisons and military bases that sustain one in four Americans.

The coronavirus pandemic has fueled a greater sense of urgency over the guidelines, given emerging research suggesting that people with diet-related illnesses like Type 2 diabetes, hypertension and cardiovascular disease have a significantly higher risk of developing serious complications from Covid-19.

Such diseases, like Covid-19 itself, have struck African-American and Hispanic communities particularly hard. The members of the nutrition panel, however, are almost all white.

“People of color are already disproportionately impacted by chronic diseases but Covid-19 has really placed a magnifying glass on the health disparities that make us more vulnerable to the pandemic,” said Dr. Yolandra Hancock, a pediatrician and obesity expert at George Washington University’s Milken Institute of Public Health. “My concern is that these guidelines, heavily influenced by the food and beverage industry, will dictate what kinds of food are offered at schools and set the eating habits of children, particularly black and brown children, for the rest of their lives.” . . .

Continue reading.

Written by LeisureGuy

23 June 2020 at 10:24 am

Looking at Masks and Respiratory Health


Jane E. Brody writes in the NY Times:

Starting with the first reports of breathing difficulties among people who contracted Covid-19 and extending now to those wearing masks to limit the risk of acquiring or unwittingly transmitting the virus, the ability to breathe normally has become a common concern.

Some worry: Are we taking in enough oxygen to adequately supply our muscles, organs and especially our brains? (I’m among many who purchased a pulse oximeter to do daily checks of my blood’s oxygen level.) Are the masks we wear interfering with our breathing?

As I walk and cycle in my Brooklyn neighborhood, I see many people with masks under their chins who pull them over nose and mouth only when they’re about to pass another person.

Believe me, I understand and empathize. Walking around with half one’s face under layers of cloth, neoprene or some other protective covering is neither attractive nor comfortable, even more so now with summer heat approaching. This is especially challenging for people who must wear masks throughout their workday, as well as those with pre-existing respiratory problems and people with poor hearing who now struggle to participate in mask-muffled conversations without the added assist of lip reading.

Alas, this is a fact of life we will most likely have to endure for many more months, perhaps even years, until an effective vaccine against this deadly virus can be developed and administered widely. There are ways, though, to maintain and even improve respiratory health while following the important guidelines for wearing masks issued by the Centers for Disease Control and Prevention to curb the spread of Covid-19.

But first, we could all benefit from a better understanding of a bodily function most of us have long taken for granted and learn how to maximize its efficiency and life-sustaining benefits. Based on the research I’ve done for this column, it’s apparent that even without a mask as an impediment, many people breathe in ways that compromise their well-being.

“Doctors who study breathing say that the vast majority of Americans do it inadequately,” James Nestor, author of a new book, “Breath: The New Science of a Lost Art,” wrote recently in The Wall Street Journal. “How we breathe matters,” he said, “and our attention to it is long overdue.”

For example, Mr. Nestor noted, “nose breathing is better than mouth breathing” because it’s protective; the nose filters, heats and treats raw air. “Inhaling through the nose stimulates the release of hormones and nitric oxide, which helps to regulate vital functions like blood pressure and increase oxygenation throughout the body,” Mr. Nestor said in an email.

Given that most of us take about 25,000 breaths a day and breathing properly is critical to how well our bodies function, we should try to get the most benefit we can from this life-sustaining activity, with or without a mask.

So, in addition to Mr. Nestor’s comprehensive treatise on breathing, I consulted an unusual expert, Paul DiTuro, a former professional athlete and special forces medic in the United States military who is now a performance breathing specialist for a company called PN Medical, which makes devices to help train respiratory muscles for people with conditions like emphysema as well as professional athletes.

Breathing done properly keeps the body in acid-base balance, which enables tissues to get the amount of oxygen they need to function optimally, Mr. DiTuro explained. This balance is achieved by maintaining an ideal level of carbon dioxide (CO2) in the blood. Too little CO2, which can happen when breathing is rapid and shallow, impedes the release of oxygen to body tissues and can result in feelings of anxiety, irritability, fatigue and lack of focus, Mr. DiTuro said.

Rapid, shallow breathing keeps the body in a high state of alert and engages the sympathetic nervous system, an adaptation that is useful in times of danger but counterproductive to feeling calm and relaxed the rest of the time.

Even during normal times, many people breathe too fast and through their mouths, perhaps because of chronic stress or . . .

Continue reading.

Written by LeisureGuy

23 June 2020 at 10:04 am

Posted in Daily life, Health

The Hard Truth Of Poker — And Life: You’re Never ‘Due’ For Good Cards


FiveThirtyEight has posted an excerpt from Maria Konnikova’s book:

Maria Konnikova is a New York Times bestselling author and contributor to The New Yorker with a doctorate in psychology. She decided to learn how to play poker to better understand the role of luck in our lives, examining the game through the lens of psychology and human behavior. This excerpt is adapted from her new book, “The Biggest Bluff: How I Learned to Pay Attention, Master Myself, and Win,” which is available June 23.


For many years, my life centered around studying the biases of human decision-making: I was a graduate student in psychology at Columbia, working with that marshmallow-tinted legend, Walter Mischel, to document the foibles of the human mind as people found themselves in situations where risk abounded and uncertainty ran high. Dissertation defended, I thought to myself, that’s that. I’ve got those sorted out. And in the years that followed, I would pride myself on knowing so much about the tools of self-control that would help me distinguish myself from my poor experimental subjects. Placed in a stochastic environment, faced with stress and pressure, I knew how I’d go wrong — and I knew precisely what to do when that happened.

Fast-forward to 2016. I have embarked on my latest book project, which has taken me into foreign territory: the world of No Limit Texas Hold ’em. And here I am, at my first-ever tournament. It’s a charity event. I’ve been practicing for weeks, playing online, running through hands, learning the contours of basic tournament poker strategy.

I get off to a rocky start, almost folding pocket aces, the absolute best hand you can be dealt, because I’m so nervous about messing up and disappointing my coach, Erik Seidel — a feared crusher considered one of the best poker players in the world. He’s the one who finagled this invitation for me in the first place, and I feel certain that I’m going to let him down. But somehow, I’ve managed to survive out of the starting gate, and a few hours in, I’m surprised to find myself starting to experience a new kind of feeling. This isn’t that hard. This is fun. I’m not half-bad.

This moment, this I’m not half-bad making its fleeting way through my brain, is the first time I notice a funny thing start to happen. It’s as if I’ve been cleaved in two. The psychologist part of my brain looks dispassionately on, noting everything the poker part of me is doing wrong. And the poker player doesn’t seem to be able to listen. Here, for instance, the psychologist is screaming a single word: overconfidence. I know that the term “novice” doesn’t even begin to describe me and that my current success is due mostly to luck. But then there’s the other part of me, the part that is most certainly thinking that maybe, just maybe, I have a knack for this. Maybe I’m born to play poker and conquer the world.

The biases I know all about in theory, it turns out, are much tougher to fight in practice. Before, I was working so hard on grasping the fundamentals of basic strategy that I didn’t have the chance to notice. Now that I have some of the more basic concepts down, the shortcomings of my reasoning hit me in the face. After an incredibly lucky straight draw on a hand I had no business playing — the dealer helpfully tells me as much with a “You’ve got to be kidding me” as I turn over my hand and win the pot — I find myself thinking maybe there’s something to the hot hand, the notion that a player is “hot,” or on a roll. Originally, it was taken from professional basketball, from the popular perception that a player with a hot hand, who’d made a few shots, would continue to play better and make more baskets. But does it actually exist — and does believing it exists, even if it doesn’t, somehow make it more real? In basketball, the psychologists Thomas Gilovich, Amos Tversky, and Robert Vallone argued it was a fallacy of reasoning — when they looked at the Boston Celtics and the Philadelphia 76ers, they found no evidence that the hot hand was anything but illusion. But in other contexts, mightn’t it play out differently? I’ve had the conventional thinking drilled into me, yet now I think I’m on a roll. I should bet big. Definitely bet big.

That idea suffers a debilitating blow after a loss with a pair of jacks — a hand that’s actually halfway decent. After a flop that has an ace and a queen on it — both cards that could potentially make any of my multiple opponents a pair higher than mine — I refuse to back down. I’ve had bad cards for the last half an hour. I deserve to win here! I lose over half my chips by refusing to fold — hello, sunk cost fallacy! We’ll be seeing you again, many times. And then, instead of reevaluating, I start to chase the loss: Doesn’t this mean I’m due for a break? I can’t possibly keep losing. It simply isn’t fair. Gambler’s fallacy — the faulty idea that probability has a memory. If you are on a bad streak, you are “due” for a win. And so I continue to bet when I should sit a few hands out.

It’s fascinating how that works, isn’t it? Runs make the human mind uncomfortable. In our heads, probabilities should be normally distributed — that is, play out as described. If a coin is tossed ten times, about five of those should be heads. Of course, that’s not how probability actually works — and even though a hundred heads in a row should rightly make us wonder if we’re playing with a fair coin or stuck in a Stoppardian alternate reality, a run of ten or twenty may well happen. Our discomfort stems from the law of small numbers: We think small samples should mirror large ones, but they don’t, really. The funny thing isn’t our discomfort. That’s understandable. It’s the different flavors that discomfort takes when the runs are in our favor versus not. The hot hand and the gambler’s fallacy are actually opposite sides of the exact same coin: positive recency and negative recency. We overreact to chance events, but the exact nature of the event affects our perception in a way it rightly shouldn’t.
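Konnikova’s point about the law of small numbers is easy to check with a quick simulation (a sketch in Python; the streak length of 4 and the 3–7 heads band are arbitrary thresholds chosen for illustration). Short sequences of a fair coin routinely contain long streaks and lopsided splits, even though the long-run average is a clean 50/50:

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

def longest_run(flips):
    """Length of the longest streak of identical outcomes."""
    best = cur = 1
    for prev, nxt in zip(flips, flips[1:]):
        cur = cur + 1 if nxt == prev else 1
        best = max(best, cur)
    return best

# Small samples don't mirror large ones: 10-flip sequences
# often stray far from 50/50 and contain surprising streaks.
trials = 10_000
long_streaks = 0
extreme_splits = 0
for _ in range(trials):
    flips = [random.choice("HT") for _ in range(10)]
    if longest_run(flips) >= 4:
        long_streaks += 1
    if not 3 <= flips.count("H") <= 7:
        extreme_splits += 1

print(f"10-flip sequences with a streak of 4+: {long_streaks / trials:.0%}")
print(f"10-flip sequences outside 3-7 heads:   {extreme_splits / trials:.0%}")
```

Run it and nearly half the 10-flip sequences contain a streak of four or more identical results — the runs that make the human mind uncomfortable are exactly what honest randomness looks like in a small sample.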

We have a mental image of the silly gamblers who think they’re due to hit the magic score, and it’s comforting to think that won’t be us, that we’ll recognize runs for what they are: statistical probabilities. But when it starts happening in reality, we get a bit jittery. “All these squalls to which we have been subjected are signs the weather will soon improve and things will go well for us,” Don Quixote tells his squire, Sancho Panza, in Miguel de Cervantes’s 1605 novel, “because it is not possible for the bad or the good to endure forever, from which it follows that since the bad has lasted so long a time, the good is close at hand.” We humans have wanted chance to be equitable for quite some time. Indeed, when we play a game in which chance doesn’t look like our intuitive view of it, we balk.

Frank Lantz has spent over twenty years designing games. When we meet at his office at NYU, where he currently runs the Game Center, he lets me in on an idiosyncrasy of game design. “In video games where there are random events — things like dice rolls — they often skew the randomness so that it corresponds more closely to people’s incorrect intuition,” he says. “If you flip heads twice in a row, you’re less likely to flip heads the third time. We know this isn’t actually true, but it feels like it should be true, because we have this weird intuition about large numbers and how randomness works.” The resulting games actually accommodate that wrongness so that people don’t feel like the setup is “rigged” or “unfair.” “So they actually make it so that you’re less likely to flip heads the third time,” he says. “They jigger the probabilities.”

For a long time, Lantz was a serious poker player. And one of the reasons he loves the game is that . . .

Continue reading.

Written by LeisureGuy

22 June 2020 at 2:16 pm

Posted in Books, Daily life, Math, Science

People Don’t Trust Public-Health Experts Because Public-Health Experts Don’t Trust People

leave a comment »

David Wallace-Wells has an interesting take on the public-health issue in New York:

Almost as soon as the first marches to protest the killing of George Floyd began, in Minneapolis on May 26, conservatives and COVID contrarians seized on the rallies as a case study of liberal coronavirus hypocrisy. If the disease spread rapidly through the assembled protesters, they felt, it would show that those who’d spent the spring scolding Americans for resisting lockdowns didn’t care as much about public health as they did about advancing their own set of political values. (Liberals, of course, would put it differently: that the cause was worth the risk.) If there were relatively few new cases, the thinking went, it would demonstrate that the lockdowns themselves were unnecessary.

Three weeks later, we have the first results from the natural experiment: Across the country, from Minneapolis to California and New York City to Albany, the protests produced, at most, very few additional cases of COVID-19. The same, more or less, was observed in the aftermath of the much-derided Lake of the Ozarks Memorial Day party (where one sick partygoer may have infected as many as … one other). Does this mean we’re out of the COVID-19 woods, all clear for mass gatherings and the end of social distancing, and that the intrusive and intensely burdensome lockdowns of the spring were excessive? Well, no. The same week, a major study led by Berkeley’s Solomon Hsiang exploring the effect of lockdowns across the world found that, in the U.S., social distancing and shelter-in-place guidelines prevented as many as 60 million additional cases (since, at least in the early days of the epidemic, many more are believed to have been infected than were tested for the disease). And if those measures had been implemented sooner and more effectively, one review suggests, between 70% and 99% of American deaths could have been avoided. Instead of 120,000 deaths, we might have had fewer than 2,000.

These two findings would seem to contradict each other, but only if you are proceeding from the reductive assumption that either lockdowns were absolutely necessary or that no precautions at all were. But whatever your impression as a lay consumer of public-health guidance, scientists have known for months that “all or nothing” was a misleading way to approach the question of how to combat the spread of the disease — which could be substantially mitigated by warm weather, mask-wearing, and better hygienic practices (and whose lethality would be reduced significantly if those who were infected in environments like rallies were mostly young and healthy). Indeed, one recent analysis of more than 1,000 “super-spreader” events around the world, for instance, found that more than 97 percent of them took place indoors (most of them also during local flu seasons and in settings that put people into close contact with one another for long periods of time). As Emily Atkin, among others, has pointed out, this is what makes the president’s Tulsa rally tonight so much more dangerous than any of the protests he’s been trolling and threatening from the bunker of the White House over the last few weeks.

But all the way up through the beginning of the protests, and even after, America’s jury-rigged, Rube Goldberg health-messaging apparatus (epidemiologists, local public-health officials, civic-minded journalists, improvising and coordinating guidance in the total absence of any federal leadership) failed to communicate most of these nuances — suggesting, for instance, that Georgia’s reopening was a “death sentence,” and that its governor, Brian Kemp, had “blood on his hands,” rather than emphasizing relative risks and the precautions that might be taken to avoid them. The Atlantic ran a piece calling the state’s reopening “an experiment in human sacrifice.” Groups of scientists who would weeks later defend the marches on public-health grounds vociferously attacked Wisconsin’s in-person election. Even the same scientist who called reopening the economy “extraordinarily dangerous” in late May “wholeheartedly” defended and embraced the protests in early June.

And while there has been plenty of crowing among conservatives, recently, about what these reversals say about liberals’ true concerns, the failure on the right has been considerably larger — because while it is indeed the case that reopenings can be relatively safe if the right precautions are taken, conservative leaders in the states reopening first have done basically nothing to ensure that they are. In Arizona, where the pandemic is growing rapidly, the governor has even tried to prohibit local officials from offering mask-wearing advisories.

This erratic pattern of advisories wasn’t just about mass gatherings, in other words. And it wasn’t a sign that the underlying science had changed; it hadn’t. Instead, it reflects an unfortunate pattern from the first months of the pandemic, in which public-health messaging has had a considerably less stellar and considerably less reliable record than you might hope for — not just for those worrying about the coronavirus threat but anyone who is concerned about the status of scientific expertise and technocratic policy more generally.

In January, as the earliest scary research into the outbreak in Wuhan began arriving from China, public-health officials downplayed the threat and systematically advised coronavirus panic be channeled into vigilance about the flu, which they considered a bigger problem. In February, as initial data arrived from China showing a dramatic age skew in mortality, with the older at far greater risk than the young, and the very old at greater risk still, political leaders and public-health officials did practically nothing to protect the most vulnerable. Indeed, in New York, where now 6,000 have died in nursing homes, totaling roughly 6 percent of the nursing-home population, Governor Cuomo granted legal immunity to the executives who run those facilities; in California, where no such immunity was given, the toll was just 2,000, meaning less than one percent of its assisted-living population (though, to be fair, the total death toll outside nursing homes was much lower as well). In March, as evidence about the imperfect-but-still-significant efficacy of masks began rolling in, the WHO continued to advise against them. As recently as March 8, Anthony Fauci was advising the same thing on 60 Minutes, presumably to try and head off a possible mask run that would leave health workers undersupplied. In April, as it became clearer that outdoor transmission was significantly harder than indoor transmission, public officials across the country nevertheless continued closing parks and beaches.

All of this guidance was issued in something of the fog of war, of course, and each piece, taken on its own, might seem sensible — the science being new, and imperfect, and often contradictory, it’s reasonable to try and guide the public toward more caution rather than less. But taken together they suggest a perhaps concerning pattern, one familiar to me now from years of writing about climate change and its long-understated risks: Instead of simply presenting the facts — what they knew, how certain they were about it, and what they didn’t know — experts massaged their messaging in the hope of producing a particular response from the public (and with the faith that they can expertly enough massage it to produce that outcome).

This has been a global pattern, or at least was in the early days of the epidemic. But almost everywhere but America, the experts learned their lesson quickly. At first, . . .

Continue reading.

It should be noted that in fact wearing a mask makes an enormous difference. See this post by Kevin Drum, in particular the chart it includes.

Written by LeisureGuy

22 June 2020 at 2:10 pm

A large number of Americans are opposed to health

leave a comment »

Or at least are opposed to doing what is required to ensure health. Protecting public health is a vital and central function of government, but in the US that effort is condemned by many. Rachel Weiner and Ariana Eunjung Cha report in the Washington Post:

For Lauri Jones, the trouble began in early May. The director of a small public health department in western Washington State was working with a family under quarantine because of coronavirus exposure. When she heard one family member had been out in the community, Jones decided to check in.

“Someone posted on social media that we had violated their civil liberties [and] named me by name,” Jones recalled. “They said, ‘Let’s post her address . . . Let’s start shooting.’ ”

People from across the country began calling her personal phone with similar threats.

“We’ve been doing the same thing in public health on a daily basis forever. But we are now the villains,” said Jones, 64, who called the police and set up surveillance cameras at her home.

Public health workers, already underfunded and understaffed, are confronting waves of protest at their homes and offices in addition to pressure from politicians who favor a faster reopening. Lori Tremmel Freeman, chief executive of the National Association of County and City Health Officials, said more than 20 health officials have been fired, resigned or have retired in recent weeks “due to conditions related to having to enforce and stand up for strong public health tactics during this pandemic.”

Although shutdown measures are broadly popular, a vocal minority opposes them vociferously. There have been attacks on officials’ race, gender, sexual orientation and appearance. Freeman said some of the criticisms “seem to be harsher for women.”

Marcus Plescia, chief medical officer of the Association of State and Territorial Health Officials, said attacks on health officials have been particularly awful in California, Colorado, Georgia, Ohio and Pennsylvania.

This month in California, Nichole Quick, Orange County’s chief health officer, stepped down after she faced threats and protests at her home for requiring face coverings in many businesses as cases rose. The mandate, issued May 23, was softened to a recommendation a week later.

Andrew Noymer, a professor of public health at the University of California at Irvine who is part of a county task force, said it was not the first time Quick had been undermined.

On March 17, Quick issued a strict lockdown order; a day later it was amended to add exceptions.

“It was couched as a clarification, but it was a walk back,” Noymer said, because of pressure from business leaders.

Quick’s departure is part of an exodus of public health officials across the country who have been blamed by both citizens and politicians for the disruptions caused by the coronavirus pandemic.

“I don’t think I’ve ever seen anybody resign for the kinds of reasons we’ve seen recently,” Plescia said. “We are very concerned that if it continues to get worse it’s going to have major implications for who will be willing to have these jobs.”

Ohio’s public health director, Amy Acton, shifted to an advisory role after enduring months of anger against the state’s preventive measures, including armed protesters at her home bearing messages including anti-Semitic and sexist slurs. One Republican lawmaker linked Acton, who is Jewish, to Nazi Germany; another called her a dictator.

Georgia’s public health director said last month that she receives threats daily and now has an armed escort.

Pennsylvania’s secretary of health, who is transgender, has come under fire for the state’s handling of the pandemic, including from a county official who resigned after saying at a recent meeting that he was “tired of listening to a guy dressed up as a woman.”

Four public health officials in Colorado have left their jobs recently.

A day after telling political leaders in Weld County, Colo. that their insistence on a speedy reopening despite a high case rate and widespread transmission was giving him “serious heartburn,” Public Health Director Mark Wallace got a 7:30 p.m. email: He had until 9 a.m., it said, to weigh in on guidelines for reopening businesses — “churches, salons, restaurants, etc.” They would go public an hour later.

Wallace, who declined to comment for this article, retired soon after.

Theresa Anselmo, executive director of the Colorado Association of Local Public Health Officials, said 80 percent of members had reported being threatened and more than that were at risk of termination or lost funding.

“It’s exhausting to be contradicted and argued with and devalued and demoralized all the time, and I think that’s what you’re seeing around the country,” Anselmo said. “We’ve seen from the top down the federal government is pitting public health against freedom, and to set up that false dichotomy is really a disservice to the men and women who have dedicated their lives . . . to helping people.”

Not everyone has left willingly. In Colorado’s Rio Grande County, Emily Brown was fired, she says, after advocating a more cautious response to the virus.

“I think I just finally pushed too hard,” she said. “There was resistance to taking steps as quickly as I felt they needed to be taken or move in directions I thought we needed to.” . . .

Continue reading.

Many people with no education, training, or experience in public health are attacking public health officials for doing their job. Those who attack seem to assume that they have more expertise than those with a strong background in public health.

It really seems as though the social fabric of the US is starting to disintegrate.

Written by LeisureGuy

22 June 2020 at 10:23 am

Computation All the Way Down

leave a comment »

From Edge, and worth reading in full. In passing, I will point out that Plato already said that the universe is constructed from the basic reality of mathematics (cf. Timaeus). The talk begins:

We’re now in this situation where people just assume that science can compute everything, that if we have all the right input data and we have the right models, science will figure it out. If we learn that our universe is fundamentally computational, that throws us right into the idea that computation is a paradigm you have to care about. The big transition was from using equations to describe how everything works to using programs and computation to describe how things work. And that’s a transition that has happened after 300 years of equations. The transition time to using programs has been remarkably quick, a decade or two. One area that was a holdout, despite the transition of many fields of science into the computational models direction, was fundamental physics.

If we can firmly establish this fundamental theory of physics, we know it’s computation all the way down. Once we know it’s computation all the way down, we’re forced to think about it computationally. One of the consequences of thinking about things computationally is this phenomenon of computational irreducibility. You can’t get around it. That means we have always had the point of view that science will eventually figure out everything, but computational irreducibility says that can’t work. It says that even if we know the rules for the system, it may be the case that we can’t work out what that system will do any more efficiently than basically just running the system and seeing what happens, just doing the experiment so to speak. We can’t have a predictive theoretical science of what’s going to happen.

STEPHEN WOLFRAM is a scientist, inventor, and the founder and CEO of Wolfram Research. He is the creator of the symbolic computation program Mathematica and its programming language, Wolfram Language, as well as the knowledge engine Wolfram|Alpha. His most recent endeavor is The Wolfram Physics Project. He is also the author, most recently, of A Project to Find the Fundamental Theory of Physics. Stephen Wolfram’s Edge Bio Page

COMPUTATION ALL THE WAY DOWN

The question that I’m asking myself is how does the universe work? What is the lowest level machine code for how our universe works? The big surprise to me is that over the last six months or so, I think we’ve figured out a path to be able to answer that question.

There’s a lot of detail about how what we figured out about the path to that question relates to what’s already known in physics. Once we know this is the low-level machine code for the universe, what can we then ask ourselves about why we have this universe and not another? Can we ask questions like why does this universe exist? Why does any universe exist? Some of those are questions that people asked a couple thousand years ago.

Lots of Greek philosophers had their theories for how the universe fundamentally works. We’ve gotten many layers of physics and mathematics sophistication since then, but what I’m doing goes back to these core questions of how things fundamentally work underneath. For us, it’s this simple structure that involves elements and relations that build into hypergraphs that evolve in certain ways, and then these hypergraphs build into multiway graphs and multiway causal graphs. From pieces of the way those work, we see what relativity is, what quantum mechanics is, and so on.

One of the questions that comes about when you imagine that you might hold in your hand a rule that will generate our whole universe is how you then think about that. What's the way of understanding what's going on? One of the most obvious questions is why we got this universe and not another. In particular, if the rule that we find is a comparatively simple rule, how did we get this simple-rule universe?

The lesson since the time of Copernicus has been that our Earth isn’t the center of the universe. We’re not special in this or that way. If it turns out that the rule that we find for our universe is this rule that, at least to us, seems simple, we get to ask ourselves why we lucked out and got this universe with a simple rule. I have to say, I wasn’t expecting that there would be a good scientific answer to that question. One of the surprises from this project to try to find the fundamental theory of physics has been that we have an understanding of how that works.

There are three levels of understanding of how the universe works in this model of ours. It starts from what one can think of as atoms of space, these elements that are knitted together by connectivity to form what ends up behaving like the physical space in which we move. The first level of what's going on involves these elements and rules that describe how elements connected in a particular way should be transformed to elements connected in some other way. This connectivity of the elements is what, when we look at, say, 10^100 or 10^400 of these elements, makes up what behaves like space as we're familiar with it; and not only space but also all of the things that are in space—all the matter and particles—are all just features of this underlying structure and its detailed way of connecting these elements together.

We've got this set of transformation rules that apply to those underlying elements. In this setup, space is a very different thing from time. One of the wrong turns of 20th-century physics was this idea that space and time should always be packaged together into this four-dimensional spacetime continuum. That's wrong. Time is different from space. Time is the inexorable operation of computation in figuring out what the next state will be from previous states, whereas space corresponds to something with a more definite extent: in this particular case, the hypergraph that knits together these different elements.

From the idea of this hypergraph being rewritten through time, when you are an observer embedded within that hypergraph, the only thing you are ultimately sensitive to is the question of which events that happen inside this hypergraph affect which other ones. What are the causal relationships between different events in this process of time evolution? From that, you get what we call a causal graph of what events affect what other events. It turns out that special relativity and then general relativity emerge basically from properties of that causal graph.

In our way of thinking about fundamental physics, there are three levels of description that end up corresponding to general relativity—the theory of space and time and gravity—quantum mechanics, and then the third level, which is something different.

In the lowest level of these models that we’re constructing, the only thing we know about all of these elements is that they’re just things. We know which things are related to which other things; for example, if we say that there are relations that involve pairs of things—binary relations—then we can say we’ve got these things and there are pairs that are related. We can draw that as a mathematical graph or a network, where we’re just putting down points and joining them by a line. We happen to need a slight generalization of that, usually called a hypergraph in mathematics, where instead of just having relations between pairs of things, you can have relations between triples or quadruples of things.

You can’t represent that with just a line between two things. It’s like a bag of things that corresponds to each hyperedge. But that’s a detail not really important to the big picture. The thing that is relevant is that the underlying rules just say that some collection of elements that are related in a certain way are transformed to some other collection of elements related in some other way.
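As a concrete (and deliberately trivial) illustration of this kind of rewriting, here is a toy rule on a graph of binary relations. It is not one of the rules Wolfram actually studies, just a sketch of the mechanism: every edge {x, y} is replaced by {x, y} plus a new edge {y, z} to a freshly created element z.

```python
def rewrite_step(hypergraph, next_id):
    """Apply the toy rule {x,y} -> {x,y} + {y,z} (z fresh) to every edge once."""
    new_edges = []
    for (x, y) in hypergraph:
        new_edges.append((x, y))
        new_edges.append((y, next_id))  # fresh element, never seen before
        next_id += 1
    return new_edges, next_id

# Start from a single relation between elements 0 and 1 and rerun the rule.
state, fresh = [(0, 1)], 2
for _ in range(3):
    state, fresh = rewrite_step(state, fresh)
print(len(state))  # 8: the edge count doubles at every step
```

Rerunning even a rule this simple grows an elaborate connectivity structure, which is the sense in which "the whole operation of the universe consists of just rerunning that particular rule."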

The whole operation of the universe consists of just rerunning that particular rule a gazillion times. Maybe the gazillion is about 10^400 for our universe, I'm not sure about that—that's based on one estimate of how this might work.

The first level is to understand, as you apply these rules, what are the causal relationships between applying a rule in one place, then that rule produces certain output, and that output gets used when the rule is applied again in the same place or in a nearby place. You can draw this network, this graph, of the causal relationships of what output is needed to feed the input to another updating event. That causal graph turns out to be our representation of space and time.

That causal graph has properties that reproduce special relativity and then general relativity, the theory of gravity. That’s a feature of these models, that in the limit of a very large number of these little update rules, with certain assumptions—like the assumption that the limiting space of our universe is finite dimensional—it follows that what happens satisfies Einstein’s equations for general relativity. Then the next level of this is to apply these transformations to this hypergraph, to this collection of relations. But there might be many possible places where a particular transformation might apply, which one should I run? Which one should I do? The next piece of these models is to do all of them, and what you’ll build is what we call a multiway graph, which represents all possible updates that you can have done.

If you do one update it might allow you to do another update. If you don’t do that update, it wouldn’t allow you to do another update. It’s not saying just do everything. There’s still a lot of structural information in what could happen after what, and what can happen at the same time as what. So, this multiway graph turns out to be a representation of what in quantum mechanics people have thought about as the path integral. In classical mechanics, say you throw a ball, the ball moves in a particular definite trajectory. In quantum mechanics, the ball has many possible trajectories it follows, which are all weighted in a certain way, and what we observe corresponds to, say, some weighting or some combination of those trajectories.

In our models, that corresponds to what happens in this multiway graph, that there are these many possible paths that can be followed in the multiway graph. In quantum mechanics, we believe we measure definite things. It turns out it’s very elegant and wonderful that in relativity we’re used to this idea of reference frames, observers thinking about the universe in terms of their reference frame. Are they at rest? Are they traveling at a certain velocity? Are they accelerating? What is their state of motion? In quantum mechanics, we have this analog of reference frames, which we call quantum observation frames (QOF) that represent the way we’re choosing to experience this multiway system of possibilities.

In any case, one can reproduce the various results of quantum mechanics. We're busily going through and trying to reproduce all the different things that show up in quantum mechanics. One of the things we can do is take, for example, quantum computers and compile all that formalism into these multiway graphs. If you've got a quantum computer that's described in the standard formalism of quantum computing in this way, then you just run this program and you'll get a multiway graph that basically implements the same thing. So that's proof that these multiway graphs reproduce the physics of quantum computing.

In spacetime, a big result is Einstein's equations, which say that the curvature of space depends on the presence of matter. If you have a thing that is following a straight line, let's say you shoot a laser in some direction. Normally, you think the light from a laser just goes in a straight line. But when there's a massive object, like a star or a black hole, the path of that laser light will be turned by the presence of that mass. Einstein's equations describe how that turning works. They say that the curvature of space, the amount of turning, depends on the amount of energy momentum that exists in space.

In our multiway graph, we also think about paths through the multiway graph. We can also think about the presence of energy momentum in the multiway graph, the presence of energy momentum in the quantum system that is described by this multiway graph. Something really amazing happens, which is that Einstein's equations in the classical idea of space and time turn out to be exactly Feynman's path integral in quantum mechanics.

These various paths that are representing the possibilities in quantum mechanics are effectively being turned in this multiway space by the presence of energy momentum, or more specifically, by the presence of the Lagrangian density, which is a relativistically invariant analog of energy momentum. In other words, the core of quantum mechanics, which is the way that the phases work in the path integral, is the exact same phenomenon as the core of classical general relativity, the way that trajectories are turned by the presence of energy momentum in spacetime. That’s a pretty cool thing that I’m excited about.
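For reference, the two standard objects being analogized here (textbook formulas, not specific to the Wolfram model): Einstein's field equations, which say that spacetime curvature responds to energy-momentum, and the Feynman path integral, which weights each possible trajectory by a phase proportional to the action:

```latex
% Einstein's field equations: curvature of spacetime ~ energy-momentum
R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} = \frac{8\pi G}{c^{4}}\, T_{\mu\nu}

% Feynman path integral: the amplitude is a sum over all paths,
% each weighted by a phase given by the action S
\langle x_f, t_f \mid x_i, t_i \rangle
  = \int \mathcal{D}[x(t)]\; e^{\,i S[x]/\hbar},
\qquad S[x] = \int L(x, \dot{x})\, dt
```

The claim in the talk is that both formulas describe the same underlying phenomenon: trajectories (through spacetime in one case, through multiway space in the other) being turned by the presence of energy momentum.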

When we think about this multiway system, we’re saying that . . .

Continue reading.

Written by LeisureGuy

19 June 2020 at 5:56 pm

Posted in Math, Science

Tagged with

The Scientist Who Predicted 2020’s Political Unrest On What Comes Next

leave a comment »

Interesting interview in Vice with a link to the predictions published in 2012. Jamie Clifton interviews Peter Turchin (aka Hari Seldon?):

In 2012, VICE published an article titled, “2012 Is Bullshit; 2020 Is When We’ll Really Be in Trouble”. That headline was fairly prophetic: while 2012 saw the embers of the Arab Spring uprisings and quaint fears of a foretold apocalypse, it had nothing on what we’re experiencing now.

In 2020, the climate is on its deathbed. A global pandemic has killed almost half a million people and sent economies spiralling. The world is finally reckoning with centuries of entrenched racial inequality, with street protests met by a violent response from police and the far-right, exacerbated by an American president who intentionally stokes division among his base.

That 2012 headline was for an interview with the scientist Peter Turchin, whose field of study, “cliodynamics”, tracks “temporally varying processes and the search for causal mechanisms” throughout US history, to essentially predict the future. You can read his team’s assessment of the last ten years here. VICE News recently caught up with him over email to ask what’s coming next.

VICE: When we spoke in 2012, you explained that 2020 would see the next state of upheaval in the US. Do you feel validated? Or were you just always certain it was coming?
Peter Turchin: The theory that made this prediction was validated, rather than me. Of course, nobody could be certain it was coming – future cannot be predicted in any absolute sense.

Fair point. Was there any stage over the last few years where you began to see it coming, though, and could tell what it might be related to?
It is a cumulative thing. The structural trends driving up instability – falling living standards, increasing intra-elite competition and conflict – have actually been going in the wrong direction since roughly 1980, so by 2010, I and my colleagues saw three decades of these trends already.

Furthermore, there were no signs that our political elites were ready to take the appropriate action to reverse these trends. They still aren’t. Then there was a growing wave of suicide terrorism, AKA rampage shootings. Life expectancies of large swaths of the American population actually shrank in absolute terms – I didn’t expect that things would get so bad. The election of Donald Trump is a very good example of a political entrepreneur channeling mass discontent – there are lots of historical examples of this. So, as I said, it was a cumulative thing.

You also said revolutions start when “members of the elite try to overturn the political order to better suit themselves”. Could you expand on that, knowing what we know now?
As I said, Donald Trump is a good example of intra-elite conflict. In terms of our theory, he started as a frustrated elite aspirant who was attempting to translate his wealth into political power. He was eventually able to do it riding the wave of mass discontent with the established elites in 2016. This resulted in even more polarisation and intra-elite conflict than what we saw before 2016.

Finally, your theory says these periods work in 50-year cycles, but does it predict when this specific period of upheaval will come to an end?
You actually didn’t get this part right – the fundamental dynamic results in very long cycles. So in American history we had two broad cycles. First, there was a rising tide of prosperity and elite unity that peaked around 1820. From there, the crisis indicators rose sharply in the years leading up to the Civil War. Indicators of crisis conditions then dropped slightly from their peak but remained high until 1920 – the years of Reconstruction, Jim Crow, Gilded Age and violent labour unrest, and the anarchists. This was our first Age of Discord.

Then the tide shifted; as a result of the reforms introduced during the Progressive Era and clinched in the New Deal, wages rose and political unity grew stronger. The 1950s were . . .

Continue reading.

Written by LeisureGuy

17 June 2020 at 7:35 pm
