Later On

A blog written for those whose interests more or less match mine.

Archive for the ‘Environment’ Category

How Do You Convince People to Eat Less Meat?
Jan Dutkiewicz has an interesting article in The New Republic with the subheading:

A recent fracas in Spain shows that simply telling people to reduce meat consumption in the name of climate and personal health won’t work.

I’m not a good source on this: I dropped meat, dairy, and eggs (apart from rare exceptions) precisely for reasons of health, and of course it was difficult at first — as I write in my (lengthy and detailed) post about my diet,

It takes a few weeks to get the hang of a new approach to food when you change your diet, so I would recommend you stick with this approach for two months and then take stock, evaluating it in the light of your own experience. Changing your diet is difficult because it requires revising patterns of eating that you have learned so well you use them unconsciously. Just as you don’t have to think much to get around your own town or neighborhood, the diet you already know is easy because it’s based on established dishes and established routines.

And just as moving to a new city requires a lot of work and attention at first just to find your way around, moving to a new way of eating requires thought and attention to figure out a new repertoire of “standard” dishes and meals. But over time, both become easy once again as new patterns are figured out, learned, and become familiar, and easy routines again emerge.

You gain the new knowledge and regain the old comfort more readily if you have the mindset and attitude of a new permanent resident rather than a visitor, because as a permanent resident you’re more motivated to explore and discover what it has to offer beyond the obvious tourist attractions (or obvious recipes). See this post: “Finding pleasure in the learning of new skills.”

So my method would be to focus on the health benefits plus the interesting new foods — the new dishes, tastes, and textures — that a whole-food plant-based diet offers. But here’s the article:

In early July, Spain’s minister of consumer affairs, Alberto Garzón, posted a short video on Twitter urging Spaniards to decrease their meat consumption. From a political communication perspective, it was flawless. He listed the many ways large-scale meat production and consumption harm humans, the environment, and animals, all backed by peer-reviewed science. He focused on reducing meat intake, not eliminating it—he praised nonindustrial livestock systems and family barbecues. He acknowledged that changing diets is hard for those without access to cheap, accessible, and diverse food choices. He explained that the government would launch food education campaigns and implement regulations to incentivize more sustainable diets. He even added a hashtag: #MenosCarneMasVida (Less Meat More Life).

Spanish politics exploded. While Garzón’s nuanced, well-researched message received some support (the number of Spaniards who claim to want to reduce their meat consumption is rising), several fellow politicians turned to juvenile trolling. Prime Minister Pedro Sánchez, of Spain’s socialist party, gushed about his love of the chuletón steak at a press conference, and Teodoro García Egea of the right-wing People’s Party tweeted out a picture of a grill packed with slabs of meat with the caption, “To your health.”

The affair brilliantly displayed the fraught politics of dietary change. The average Western diet—prevalent in Spain, just as it is in the United States and the United Kingdom—is high in meat, fat, and sugar, its production and consumption an environmental and public health disaster. This has been true for decades. But in the past few years, a growing chorus of voices has begun to call for major dietary changes in the interest of human and planetary health. The EAT-Lancet report published in February 2019 called for a global shift to a primarily plant-based diet if we are to keep agricultural production within planetary limits. The problem, however, is that actually changing what people eat is extremely difficult. Who should drive this change: individuals, governments, or corporations? Can a balance be struck between consumer freedom and regulation? And how can rational policymaking be squared with food’s significant cultural, nationalist, and personal meaning?

Beef is where this kind of discussion usually starts because it’s where the scientific consensus is particularly strong. The world’s one billion cows contribute about 6 percent of all greenhouse gases through their methane-rich burps, require vast amounts of grazing land, and are often fattened for slaughter on industrial feedlots where they are fed a diet of monocrops like corn and soy, whose planting in turn contributes to widespread deforestation and pesticide use. Overconsumption of red meat has also been linked to a range of health issues.

Steaks, in other words, are the SUVs of meat: expensive, unnecessary, environmentally noxious status symbols that do far more harm than good. There’s a good case for eliminating beef consumption entirely, and drastically reducing it ought to be a no-brainer: The EAT-Lancet model diet, for instance, suggests limiting beef to 98 grams per week (and all meat to under 500 grams). That amounts to a 60 percent decrease, relative to a Spaniard’s average diet, and a massive 86 percent decrease in the USA.
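A quick back-of-envelope check of what those percentages imply about current consumption (a sketch that assumes the quoted cuts apply to beef specifically; the implied baselines are my inference, not figures from the article):

```python
# If capping beef at 98 g/week is a 60% cut for the average Spaniard
# and an 86% cut for the average American, the current averages must be
# target / (1 - cut).
TARGET_G_PER_WEEK = 98

def implied_baseline(target: float, cut: float) -> float:
    """Recover the pre-cut weekly consumption implied by a fractional decrease."""
    return target / (1 - cut)

spain = implied_baseline(TARGET_G_PER_WEEK, 0.60)  # ~245 g/week
usa = implied_baseline(TARGET_G_PER_WEEK, 0.86)    # ~700 g/week
print(f"Implied Spanish average beef intake: {spain:.0f} g/week")
print(f"Implied US average beef intake:      {usa:.0f} g/week")
```

In other words, the average American would have to cut roughly seven-eighths of current beef consumption to meet the EAT-Lancet target.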

The traditional way for NGOs, companies, and governments to approach dietary change is through information campaigns and so-called nudges that don’t impinge on individual choice or risk regulatory and legislative battles. They’re nonintrusive ways of suggesting more healthy or ethical choices to consumers—like releasing EAT-Lancet recommendations or national dietary guidelines, slapping “fair trade” labels on coffee or “humanely raised” labels on meat. It can also mean deciding not to promote a product, as the food website Epicurious did when it vowed to stop running beef recipes for many of the reasons mentioned by Garzón.

The problem with these interventions is that they are not all that effective. While consumers may claim they want to make more informed or sustainable decisions, they tend to default to their usual habits in the supermarket aisles. And information doesn’t necessarily shift behavior; it may even have the opposite effect. Psychologists argue that when consumers face the “meat paradox” of eating meat while being opposed to the harms caused by it, they will often create justificatory narratives and rationalizations that deny harm or personal responsibility rather than actually halting meat consumption.

These mild, less effective policy efforts also tend to be attacked by critics as if they were actually reducing consumer choice. EAT-Lancet was met with a coordinated online countercampaign under the hashtag #yes2meat. Epicurious was lambasted by pro-beef critics, including foodies and food writers, in the wake of its decision. When the United Nations tried to call for meat reduction to mitigate climate change, it too was brutally critiqued, including by pro-meat climate scholars.

Changing the scope and availability of choices in any given situation may be more productive. This is called changing “choice architecture,” and there’s good evidence for its efficacy. For instance, removing beef jerky from among the impulse-buy items in a checkout line disincentivizes jerky purchases just by moving them out of sight and out of mind. Major opportunities for choice-architecture manipulation exist in supermarkets and restaurants, which could commit to selling less beef, promoting more healthful options, or replacing meat with alternative proteins, as a growing number of fast-food joints are doing.

These changes can have an even bigger impact in institutional spaces like schools that have large provisioning budgets and feed large numbers of people; such changes can shift both individuals’ habits and influence the economics of food distribution. Studies have shown that simply increasing the number of vegetarian options or making plant-based meals the default instead of meat massively increases more sustainable eating. And shifting food patterns in schools can build the next generation of more sustainable eaters.

But there’s stiff opposition here, too. When schools in Lyon, France, moved to make lunches plant-forward (albeit with fish, egg, and dairy options available), farmers stormed the city in protest and the French minister of agriculture clamored against anti-meat “ideology.” In the U.S., Joni Ernst, the infamously meat-industry-friendly senator from Iowa whose campaign advertising included boasts about pig castration, has introduced an act to preemptively preclude federal institutions from engaging in nudges like “Meatless Monday.”

That brings us to state intervention. Government has tremendous power to address collective action problems through incentives, regulations, and taxation. In the world of public health, these interventions are ranked on a scale called the Nuffield Ladder, with gentle nudges at the bottom and outright bans at the top. One of the most commonly used tools is taxation. In particular, governments can implement what are known as Pigouvian taxes on things like sugary drinks, tobacco, or polluting factories—the idea is to force producers to cover the cost of the harms their products do. They can also slap so-called “sin taxes” on products to increase direct costs for consumers. These taxes work. Numerous studies show that these are very effective in decreasing consumption, leading groups like the World Health Organization to strongly support them. The academic case for such taxes on meat is robust and convincing. But taxes in general are massively politically unpopular and lead to accusations of a nanny state interfering in consumers’ free choice, as the battles over sugar taxes around the world have shown.

On July 15, the U.K. released its Food Strategy, a well-researched document urging a reshaping of the British food system in the interest of health and sustainability. It called for reductions in sugar, salt, and meat. But the authors only suggested a tax on sugar and salt, shying away from a “politically impossible” meat tax. Instead, they recommended plant-forward dietary nudges and subsidies for the development of alternative proteins.

It’s a good illustration of the way policymakers often self-edit when it comes to such a fraught topic. The problem is that, while this approach is politically pragmatic, it is naïve to expect that clinging to the lower rungs of the Nuffield Ladder can lead to even the Food Strategy’s suggested 30 percent reduction in meat consumption, let alone the EAT-Lancet standard.

But the problem isn’t only that policymakers are wary of inviting pro-meat backlash. It’s also that virtually all governments subsidize and promote meat production and consumption. The EU, despite its Green Deal commitment to carbon neutrality by 2050, has spent millions of Euros on a “Beefatarian” advertising campaign, and both Europe and the USA support animal agriculture through extensive subsidies and supports. Changing this dynamic—a status quo in which politicians pick up points by slamming vegetarians while support for the meat industry is baked into countless national budgets—will require a multifaceted approach.

Incentivizing the production of alternatives in addition to, or ideally instead of, harmful products like beef, as the U.K. Food Strategy does with its support for alternative proteins, is one good option. But such support should include not only plant-based or cell-based “meat alternatives” but also plants as alternatives to meat. A recent study published in Global Food Security, for instance, shows that humble legumes, with the right government push, could provide a far more sustainable and diverse source of protein than meat. Creating opportunities for food access is also . . .

Continue reading. There’s more.

And note this interesting map of land use in the UK from the recently published National Food Strategy, which can be downloaded at the link and which has some very interesting information (for those who make decisions in that way):

Written by Leisureguy

28 July 2021 at 3:24 pm

A Soil-Science Revolution Upends Plans to Fight Climate Change

Gabriel Popkin writes in Quanta:

The hope was that the soil might save us. With civilization continuing to pump ever-increasing amounts of carbon dioxide into the atmosphere, perhaps plants — nature’s carbon scrubbers — might be able to package up some of that excess carbon and bury it underground for centuries or longer.

That hope has fueled increasingly ambitious climate change–mitigation plans. Researchers at the Salk Institute, for example, hope to bioengineer plants whose roots will churn out huge amounts of a carbon-rich, cork-like substance called suberin. Even after the plant dies, the thinking goes, the carbon in the suberin should stay buried for centuries. This Harnessing Plants Initiative is perhaps the brightest star in a crowded firmament of climate change solutions based on the brown stuff beneath our feet.

Such plans depend critically on the existence of large, stable, carbon-rich molecules that can last hundreds or thousands of years underground. Such molecules, collectively called humus, have long been a keystone of soil science; major agricultural practices and sophisticated climate models are built on them.

But over the past 10 years or so, soil science has undergone a quiet revolution, akin to what would happen if, in physics, relativity or quantum mechanics were overthrown. Except in this case, almost nobody has heard about it — including many who hope soils can rescue the climate. “There are a lot of people who are interested in sequestration who haven’t caught up yet,” said Margaret Torn, a soil scientist at Lawrence Berkeley National Laboratory.

A new generation of soil studies powered by modern microscopes and imaging technologies has revealed that whatever humus is, it is not the long-lasting substance scientists believed it to be. Soil researchers have concluded that even the largest, most complex molecules can be quickly devoured by soil’s abundant and voracious microbes. The magic molecule you can just stick in the soil and expect to stay there may not exist.

“I have The Nature and Properties of Soils in front of me — the standard textbook,” said Gregg Sanford, a soil researcher at the University of Wisconsin, Madison. “The theory of soil organic carbon accumulation that’s in that textbook has been proven mostly false … and we’re still teaching it.”

The consequences go far beyond carbon sequestration strategies. Major climate models such as those produced by the Intergovernmental Panel on Climate Change are based on this outdated understanding of soil. Several recent studies indicate that those models are underestimating the total amount of carbon that will be released from soil in a warming climate. In addition, computer models that predict the greenhouse gas impacts of farming practices — predictions that are being used in carbon markets — are probably overly optimistic about soil’s ability to trap and hold on to carbon.

It may still be possible to store carbon underground long term.  Indeed, radioactive dating measurements suggest that some amount of carbon can stay in the soil for centuries. But until soil scientists build a new paradigm to replace the old — a process now underway — no one will fully understand why.

The Death of Humus

Soil doesn’t give up its secrets easily. Its constituents are tiny, varied and outrageously numerous. At a bare minimum, it consists of minerals, decaying organic matter, air, water, and enormously complex ecosystems of microorganisms. One teaspoon of healthy soil contains more bacteria, fungi and other microbes than there are humans on Earth.

The German biologist Franz Karl Achard was an early pioneer in making sense of the chaos. In a seminal 1786 study, he used alkalis to extract molecules made of long carbon chains from peat soils. Over the centuries, scientists came to believe that such long chains, collectively called humus, constituted a large pool of soil carbon that resists decomposition and pretty much just sits there. A smaller fraction consisting of shorter molecules was thought to feed microbes, which respired carbon dioxide to the atmosphere.

This view was occasionally challenged, but by the mid-20th century, the humus paradigm was “the only game in town,” said Johannes Lehmann, a soil scientist at Cornell University. Farmers were instructed to adopt practices that were supposed to build humus. Indeed, the existence of humus is probably one of the few soil science facts that many non-scientists could recite.

What helped break humus’s hold on soil science was physics. In the second half of the 20th century, powerful new microscopes and techniques such as nuclear magnetic resonance and X-ray spectroscopy allowed soil scientists for the first time to peer directly into soil and see what was there, rather than pull things out and then look at them.

What they found — or, more specifically, what they didn’t find — was shocking: there were few or no long “recalcitrant” carbon molecules — the kind that don’t break down. Almost everything seemed to be small and, in principle, digestible.

“We don’t see any molecules in soil that are so recalcitrant that they can’t be broken down,” said Jennifer Pett-Ridge, a soil scientist at Lawrence Livermore National Laboratory. “Microbes will learn to break anything down — even really nasty chemicals.”

Lehmann, whose studies using advanced microscopy and spectroscopy were among the first to reveal the absence of humus, has become the concept’s debunker-in-chief. A 2015 Nature paper he co-authored states that “the available evidence does not support the formation of large-molecular-size and persistent ‘humic substances’ in soils.” In 2019, he gave a talk with a slide containing a mock death announcement for “our friend, the concept of Humus.”

Over the past decade or so, most soil scientists have come to accept this view. Yes, soil is enormously varied. And it contains a lot of carbon. But there’s no carbon in soil that can’t, in principle, be broken down by microorganisms and released into the atmosphere. The latest edition of The Nature and Properties of Soils, published in 2016, cites Lehmann’s 2015 paper and acknowledges that “our understanding of the nature and genesis of soil humus has advanced greatly since the turn of the century, requiring that some long-accepted concepts be revised or abandoned.”

Old ideas, however, can be very recalcitrant. Few outside the field of soil science have heard of humus’s demise.

Buried Promises

At the same time that soil scientists were rediscovering what exactly soil is, climate researchers were revealing that  . . .

Continue reading. The payoff is further down.

Written by Leisureguy

27 July 2021 at 4:28 pm

Up in Smoke

Debra Kahn, Lorraine Woellert and Catherine Boudreau write in Politico:

Massive wildfires in Oregon and Washington are torching more than vegetation. They’re also burning through the very policies states and businesses are using to fight climate change.

The Bootleg Fire is raging through a carbon storage project in southern Oregon, where 400,000 acres of forest owned by Green Diamond Resource Co. are being preserved to compensate for greenhouse gas emissions elsewhere. Microsoft in February paid Green Diamond to offset a quarter million tons of the tech giant’s 2021 carbon emissions.

Two other wildfires are active within a Colville Indian Reservation carbon project in eastern Washington.

Here’s how it works. Carbon offsets — which are created by planting trees, preserving mangroves, buying renewable energy and other activities — are bought and sold as a tool to reduce emissions.

Landowners pledge to keep their forests healthy enough to store carbon for at least 100 years. They sell the resulting credits to refineries, factories and other big emitters, which use them as a substitute for reducing their own emissions.

But if trees are burning less than 10 years into their projected lifetimes, that’s a problem for landowners, industry and policymakers — not to mention the forests, which are estimated to store about twice as much carbon as they emit and are a crucial component of global plans to address climate change. The increasing severity and frequency of wildfires could reverse their role.

The Bootleg Fire so far has covered 23.9 percent of Green Diamond’s project area, according to data provider CarbonPlan.

The Colville Indian Reservation project is one of the largest sellers of credits under a California cap-and-trade program. The Summit Trail and Chuweah Creek fires have covered about 3.5 percent of its 453,000 acres, CarbonPlan estimates.
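As a rough sketch of what those percentages mean on the ground, using the acreage figures quoted above (treating "covered" as affected area, which is an assumption on my part):

```python
# Figures quoted in the article; "share" is the fraction of each
# project's area the fires have covered so far, per CarbonPlan.
projects = {
    "Green Diamond (Bootleg Fire)": {"acres": 400_000, "share": 0.239},
    "Colville (Summit Trail / Chuweah Creek)": {"acres": 453_000, "share": 0.035},
}

for name, p in projects.items():
    affected = p["acres"] * p["share"]
    print(f"{name}: ~{affected:,.0f} of {p['acres']:,} acres covered")
```

That works out to something like 95,600 acres of the Green Diamond project and 15,900 acres of the Colville project touched by fire so far.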

Rules governing offsets anticipate damage from fires. Landowners are required to store a certain percent of a project’s credits in a buffer pool they can tap if acreage is lost to wildfires, disease or insect outbreaks.

For now, the buffer pool, with about 28.5 million credits, is large enough to absorb the recent fires, said Dan McGraw, head of Americas at Carbon Pulse, a market analysis firm. But if acreage keeps burning, the carbon accounting system could find itself in the red.

“We’ve had two projects be affected by fires, and it’s July,” McGraw said. “It’s only going to get worse.”

Debra Kahn has the details.

While we’re here: California is moving to cut off water to farmers as the drought dries up rivers and streams in the Sacramento and San Joaquin river watersheds. The Water Resources Control Board will vote on the emergency order Aug. 3.

Former U.N. Secretary-General Ban Ki-moon led the United Nations from 2007 to 2016, an era that saw the rise of multilateral climate action culminating in the Paris Climate Agreement.

Since leaving the U.N., he has used his leadership positions at civil society groups to continue his campaign against climate change. One of his driving messages is that the U.S. and other rich nations need to do more individually and collectively through the World Bank and other international institutions. He spoke to POLITICO about skeptics, former President Trump and Pope Francis.

Is it time to ban financing and subsidies of fossil fuels, especially where the World Bank and IMF are concerned?

I think it is necessary. There are many countries who have been financing [fossil fuels] through public financial organizations. Unfortunately, my own country, Korea, was one of them. I raised this issue very strongly with the office of the president of Korea and other ministers. Now, Korea has decided not to provide any public financial support, except those which are now going on.

And you think the U.S. is not doing enough to help.

That’s right.

In the U.S. during the Trump administration, a lot of private companies made a lot of promises. Are they delivering?

In fact, the private sectors are doing better and more than government. Government has their restrictions and laws and opposition parties. But corporations, when the owners, presidents or chairmen have conviction, they can do more.

Lorraine has the full interview.

Continue reading. There’s more.

Climate change has hit and hit hard. And we’re just getting started.

Written by Leisureguy

27 July 2021 at 10:23 am

Paris Sportif: The Contagious Attraction of Parkour

I first encountered parkour in a Luc Besson movie, District 13 (from 2004, original title Banlieue 13), but it has a longer history, discussed by Macs Smith in an extract from his book Paris and the Parasite: Noise, Health, and Politics in the Media City, published in The MIT Reader:

In a city fixated on public health and order, a viral extreme sport offers a challenge to the status quo.

In 1955, Letterist International, a Paris-based group of avant-garde authors, artists, and urban theorists, published “Proposals for Rationally Improving the City of Paris.” The group, which would become better known as Situationist International, or SI, and play an important role in the May 1968 demonstrations, put forward wild suggestions for breaking the monotony of urban life. Some of these, like the call to abolish museums and distribute their masterpieces to nightclubs, were iconoclastic and anti-institutional, reflecting the group’s anarchic political leanings.

Others were less overtly political and testified to a thirst for excitement. To appeal to “spelunkers” and thrill-seekers, they called for Paris’s rooftops and metro tunnels to be opened up to exploration. The group believed that the mundaneness of urban life in the 1950s was integral to bourgeois capitalism. Boredom was part of how the government maintained order, and so a more equal city would necessarily have to be more frightening, more surprising, more fun.

SI disbanded in 1972, but its ideas about the links between emotion and urban politics have been influential. Among the best examples are the subcultures centered around urban thrill-seeking that exist today, like urban exploration (Urbex), rooftopping, and skywalking, all of which involve breaking into dangerous or forbidden zones of the city. The most famous inheritor to SI’s call to experience urban space differently is parkour, which was invented in the Paris suburb of Lisses in the 1980s. It was inspired by Hébertisme, a method of obstacle course training first introduced to the French Navy in 1910 by Georges Hébert. David Belle learned the principles of Hébertisme from his father, Raymond, who had been exposed to it at a military school in Vietnam. David, along with a friend, Sébastien Foucan, then adapted those principles, originally conceived for natural environments, to the suburban architecture of their surroundings.

Over time, parkour has incorporated techniques from tumbling, gymnastics, and capoeira, resulting in a striking blend of military power and balletic artistry. Parkour involves confronting an urban map with an embodied experience of urban space. It is often defined as moving from points A to B in the most efficient way possible, and parkour practitioners, called traceurs, often depict themselves as trailblazers identifying routes through the city that cartography does not capture. Traceurs sometimes evoke the fantasy of tracing a straight line on the map and finding a way to turn it into a path, although in practice, they more often work at a single point on the map — a park, a rooftop, an esplanade — and end a session back where they started.

Traceurs’ desire to rewrite the map is another thing they share with the Situationists, who liked to cut up maps and glue them back together to show the psychological distance between neighborhoods. But parkour distinguishes itself from SI through its use of video, which continues to be a point of debate within the practice. In the early 2000s, Sébastien Foucan reignited this debate when he broke away from Belle to pioneer his own version of the training system.

Foucan’s appearance in the 2003 documentary “Jump London” cemented “freerunning” as the name for this alternate practice, which put a greater emphasis on stylized movements. Foucan would go on to play a terrorist bomb-maker in Martin Campbell’s “Casino Royale,” leaping from cranes with Daniel Craig’s James Bond in pursuit. Some parkour purists see this as a degradation of the utilitarian roots of their training, and insist instead on a physio-spiritual discourse of communion with the environment, mastery of fear, and humility. They reject freerunning as a brash corruption of Hébert’s principles. The sociologist Jeffrey Kidder notes in his interviews with traceurs in Chicago that they dismiss participants who lack interest in serious rituals like safety, humility, and personal growth. They react negatively to media coverage that highlights parkour’s danger or assimilates it into adolescent rebellions like skateboarding, drug use, or loitering.

In my own email interview with the leaders of Parkour Paris, the official parkour organization of Paris, the same will to blame media is evident: “Parkour has been mediatized in ‘connotated’ films. The traceurs depicted in those fictions were friendly delinquents a bit like Robin Hood. Friendly, yes, but for the immense majority of people they were still delinquents from the banlieue,” they gripe. “It’s been very hard to shake that image.” . . .

Continue reading. There’s much more. And it includes this 50-minute video, Jump London:

Written by Leisureguy

27 July 2021 at 10:17 am

Why Gas Stoves Are More Hazardous Than You’ve Been Led to Believe

If you cook with a wok, a gas stove is the only good option, but then you should also have a strong (and, if possible, quiet) exhaust fan that vents to the outside and a good range hood that directs the products of combustion from the burning gas into the hood. Recirculating fans are pointless unless you like the sound. I will add: (a) electric ovens with convection capabilities are far superior to gas ovens; and (b) for most stovetop cooking an induction range is by far the best: it heats the pan (and quickly), not the kitchen.

This December 2020 Slate article by Jonathan Mingle explains why gas burners are a health risk:

As a physician and epidemiologist with the U.S. Centers for Disease Control and Prevention, T. Stephen Jones spent his career fighting major threats to public health in the United States and globally, from smallpox to HIV to viral hepatitis. But it wasn’t until Jones was well into retirement that he learned about a widespread yet widely overlooked health risk in his own home in Florence, Massachusetts, and in most U.S. households: pollution emitted by natural gas appliances.

While many Americans might think illness linked to indoor cooking and heating is a problem confined to smoke-filled kitchens in the developing world, the natural gas-burning stoves and furnaces found in millions of U.S. kitchens and basements can produce a range of health-damaging pollutants, including particulate matter (PM), nitrogen dioxide (NO2), carbon monoxide (CO), and formaldehyde. Over the past four decades, researchers have amassed a large body of scientific evidence linking the use of gas appliances, especially for cooking, with a higher risk of a range of respiratory problems and illnesses.

Since the publication of two new reports on the subject from the nonprofit research group the Rocky Mountain Institute (RMI) and the UCLA Fielding School of Public Health, this past spring, the existence of these gas-fired health hazards has garnered increasing media scrutiny. But less discussed has been how the Covid-19 pandemic has compounded the risks of this pollution, especially for low-income and vulnerable populations, and how key regulatory agencies have lagged decades behind the science in acting to protect them.

“There’s no question this has been a neglected issue,” said Jones, who has drawn on lessons from his long career in public health epidemiology and disease prevention in sounding the alarm throughout Massachusetts and with former CDC colleagues over the past few years. The first step, he said, is “letting people know what the risks are — particularly when they can be substantial, life-threatening risks that can kill kids.”

One of the clearest signals emerging in the scientific literature is the connection between cooking with gas and childhood asthma — a disease suffered by people of color and lower-income groups at much higher rates than the rest of the population. A 2013 meta-analysis of 41 studies found that children living in homes with gas stoves had a 42 percent higher risk of experiencing asthma symptoms, and, over their lifetime, a 24 percent increase in the risk of being diagnosed with asthma. That study confirmed, in turn, what a 1992 meta-analysis found: Children exposed to higher levels of indoor NO2 (at an increment “comparable to the increase resulting from exposure to a gas stove”) had an elevated risk of respiratory illness. More recently, a 2018 study from the University of Queensland found that in Australia, where 38 percent of households rely on gas stoves for cooking, more than 12 percent of the total burden of childhood asthma was attributable to their use.
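For readers curious how an "attributable burden" figure like the Queensland study's falls out of numbers like these, the standard tool is Levin's population-attributable-fraction formula. This sketch plugs in the 38 percent prevalence and the 1.42 relative risk quoted above; the study's own inputs differ, so the result only lands in the same neighborhood as its roughly 12 percent figure:

```python
def population_attributable_fraction(prevalence: float, relative_risk: float) -> float:
    """Levin's formula: the share of cases in a population attributable
    to an exposure, given exposure prevalence and relative risk."""
    excess = prevalence * (relative_risk - 1)
    return excess / (excess + 1)

# 38% of Australian households cook with gas; the 2013 meta-analysis
# reports a 1.42 relative risk of asthma symptoms with gas stoves.
paf = population_attributable_fraction(0.38, 1.42)
print(f"Population attributable fraction: {paf:.1%}")  # close to the reported ~12%
```

The formula's intuition: the numerator is the excess risk contributed by exposed households, and the denominator normalizes by total risk across the population.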

Meanwhile, troubling new findings suggest that exposure to NO2 — the primary pollutant of concern from gas appliances — could compound the dangers of the novel coronavirus in communities that are already at higher risk of infection and of dying from the disease. A recent peer-reviewed study led by researchers at Emory University examined Covid-19 mortality data in more than 3,000 U.S. counties, and found that long-term exposure to elevated NO2 was correlated with a higher risk of death from Covid-19 — and that NO2 appeared to be more dangerous than particulate matter or ozone.

The hazards now have a growing chorus of scientists and public health experts insisting that better and stricter oversight of burning gas indoors — a health threat that has been hiding in plain sight for decades, they say — can no longer be ignored. “It’s fundamental and imperative,” said Jones. “We ought to get up on the rooftops and shout about it.”

The cumulative evidence was enough for the venerable New England Journal of Medicine to publish an editorial in January recommending that “new gas appliances be removed from the market.” It was co-authored by Howard Frumkin, a former director of the CDC’s National Center for Environmental Health, which is responsible for investigating environmental drivers of illness and promulgating guidance about those risk factors.

Despite such calls — and despite compelling evidence that gas appliances can produce levels of air pollution inside homes that would be illegal outdoors in the United States — indoor air quality remains entirely unregulated in the U.S. today, and gas appliances largely maintain their industry-manufactured reputation as “clean.” The Environmental Protection Agency only monitors pollutants in outdoor air. And while building codes typically require natural gas furnaces and water heaters to be vented outside, many states lack requirements that natural gas cooking stoves be vented to the outdoors.

Still, recent signs suggest that some measure of regulatory action reflecting the current understanding of the health risks of gas cooking and heating devices might finally be forthcoming. At the end of September, . . .

Continue reading.

Written by Leisureguy

26 July 2021 at 11:03 am

MIT Predicted in 1972 That Society Will Collapse This Century. New Research Shows We’re on Schedule.

leave a comment »

Nafeez Ahmed reports in Vice Motherboard:

A remarkable new study by a director at one of the largest accounting firms in the world has found that a famous, decades-old warning from MIT about the risk of industrial civilization collapsing appears to be accurate based on new empirical data. 

As the world looks forward to a rebound in economic growth following the devastation wrought by the pandemic, the research raises urgent questions about the risks of attempting to simply return to the pre-pandemic ‘normal.’

In 1972, a team of MIT scientists got together to study the risks of civilizational collapse. Their system dynamics model published by the Club of Rome identified impending ‘limits to growth’ (LtG) that meant industrial civilization was on track to collapse sometime within the 21st century, due to overexploitation of planetary resources.

The controversial MIT analysis generated heated debate, and was widely derided at the time by pundits who misrepresented its findings and methods. But the analysis has now received stunning vindication from a study written by a senior director at professional services giant KPMG, one of the ‘Big Four’ accounting firms as measured by global revenue.

Limits to growth

The study was published in the Yale Journal of Industrial Ecology in November 2020 and is available on the KPMG website. It concludes that the current business-as-usual trajectory of global civilization is heading toward the terminal decline of economic growth within the coming decade—and at worst, could trigger societal collapse by around 2040.

The study represents the first time a top analyst working within a mainstream global corporate entity has taken the ‘limits to growth’ model seriously. Its author, Gaya Herrington, is Sustainability and Dynamic System Analysis Lead at KPMG in the United States. However, she decided to undertake the research as a personal project to understand how well the MIT model stood the test of time. 

The study itself is not affiliated or conducted on behalf of KPMG, and does not necessarily reflect the views of KPMG. Herrington performed the research as an extension of her Masters thesis at Harvard University in her capacity as an advisor to the Club of Rome. However, she is quoted explaining her project on the KPMG website as follows: 

“Given the unappealing prospect of collapse, I was curious to see which scenarios were aligning most closely with empirical data today. After all, the book that featured this world model was a bestseller in the 70s, and by now we’d have several decades of empirical data which would make a comparison meaningful. But to my surprise I could not find recent attempts for this. So I decided to do it myself.”

Titled ‘Update to limits to growth: Comparing the World3 model with empirical data’, the study attempts to assess how MIT’s ‘World3’ model stacks up against new empirical data. Previous studies that attempted to do this found that the model’s worst-case scenarios accurately reflected real-world developments. However, the last study of this nature was completed in 2014. 

The risk of collapse 

Herrington’s new analysis examines data across 10 key variables, namely population, fertility rates, mortality rates, industrial output, food production, services, non-renewable resources, persistent pollution, human welfare, and ecological footprint. She found that the latest data most closely aligns with two particular scenarios, ‘BAU2’ (business-as-usual) and ‘CT’ (comprehensive technology). 

“BAU2 and CT scenarios show a halt in growth within a decade or so from now,” the study concludes. “Both scenarios thus indicate that continuing business as usual, that is, pursuing continuous growth, is not possible. Even when paired with unprecedented technological development and adoption, business as usual as modelled by LtG would inevitably lead to declines in industrial capital, agricultural output, and welfare levels within this century.”

Study author Gaya Herrington told Motherboard that in the MIT World3 models, collapse “does not mean that humanity will cease to exist,” but rather that “economic and industrial growth will stop, and then decline, which will hurt food production and standards of living… In terms of timing, the BAU2 scenario shows a steep decline to set in around 2040.”
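The overshoot-and-decline pattern Herrington describes can be illustrated with a toy model. To be clear, this is not World3 (which tracks many coupled stocks); it is a minimal sketch of my own showing how just two stocks — industrial capital and a nonrenewable resource — can produce growth followed by decline, the qualitative shape of the BAU2 scenario:

```python
# Toy resource-constrained growth model, loosely in the spirit of the
# system-dynamics approach described above. NOT World3 itself — just an
# illustration of how coupled stocks can yield growth, a peak, then decline.

def simulate(years=200, capital=1.0, resource=100.0,
             invest_rate=0.05, depreciation=0.03, extract_per_capital=0.5):
    history = []
    for _ in range(years):
        # Extraction efficiency falls as the resource stock is depleted.
        efficiency = resource / 100.0
        output = capital * efficiency
        extraction = min(resource, extract_per_capital * capital)
        resource -= extraction
        # Capital grows from reinvested output, shrinks from depreciation.
        capital += invest_rate * output - depreciation * capital
        history.append((capital, resource, output))
    return history

run = simulate()
outputs = [step[2] for step in run]
peak_year = max(range(len(outputs)), key=lambda t: outputs[t])
print(f"Output peaks in year {peak_year}, then declines as the resource depletes.")
```

The key qualitative point: output rises for decades even as the resource base erodes, so the decline, when it arrives, looks sudden.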

The end of growth? 

In the comprehensive technology (CT) scenario, economic decline still sets in around this date with a range of possible negative consequences, but this does not lead to societal collapse.

Unfortunately, the scenario that is the least close fit to the latest empirical data happens to be the most optimistic pathway, known as ‘SW’ (stabilized world), in which civilization follows a sustainable path and experiences the smallest declines in economic growth—based on a combination of technological innovation and widespread investment in public health and education.

Although both the business-as-usual and comprehensive technology scenarios point to the coming end of economic growth in around 10 years, only the BAU2 scenario “shows a clear collapse pattern, whereas CT suggests the possibility of future declines being relatively soft landings, at least for humanity in general.” 

Both scenarios currently “seem to align quite closely not just with observed data,” Herrington concludes in her study, indicating that the future is open.   

A window of opportunity 

While focusing on the pursuit of continued economic growth for its own sake will be futile, the study finds that technological progress and increased investments in public services could not just avoid the risk of collapse, but lead to a new stable and prosperous civilization operating safely within planetary boundaries. But we really have only the next decade to change course. 

“At this point therefore, the data most aligns with the CT and BAU2 scenarios which indicate a slowdown and eventual halt in growth within the next decade or so, but World3 leaves open whether the subsequent decline will constitute a collapse,” the study concludes. Although the ‘stabilized world’ scenario “tracks least closely, a deliberate trajectory change brought about by society turning toward another goal than growth is still possible. The LtG work implies that this window of opportunity is closing fast.” 

In a presentation at the World Economic Forum in 2020 delivered in her capacity as a KPMG director, Herrington argued for ‘agrowth’—an agnostic approach to growth which focuses on other economic goals and priorities.  

“Changing our societal priorities hardly needs to be a capitulation to grim necessity,” she said. “Human activity can be regenerative and our productive capacities can be transformed. In fact, we are seeing examples of that happening right now. Expanding those efforts now creates a world full of opportunity that is also sustainable.” 

She noted how the rapid development and deployment of vaccines at unprecedented rates in response to the COVID-19 pandemic demonstrates that we are capable of responding rapidly and constructively to global challenges if we choose to act. We need exactly such a determined approach to the environmental crisis. . .

Continue reading.


Written by Leisureguy

14 July 2021 at 2:32 pm

E.P.A. Approved Toxic Chemicals for Fracking a Decade Ago

leave a comment »

The panel above is from 50 years ago, and we have learned nothing in the meantime: we continue to trash the environment despite having to live in it. Hiroko Tabuchi reports in the NY Times:

For much of the past decade, oil companies engaged in drilling and fracking have been allowed to pump into the ground chemicals that, over time, can break down into toxic substances known as PFAS — a class of long-lasting compounds known to pose a threat to people and wildlife — according to internal documents from the Environmental Protection Agency.

The E.P.A. in 2011 [under President Barack Obama – LG] approved the use of these chemicals, used to ease the flow of oil from the ground, despite the agency’s own grave concerns about their toxicity, according to the documents, which were reviewed by The New York Times. The E.P.A.’s approval of the three chemicals wasn’t previously publicly known.

The records, obtained under the Freedom of Information Act by a nonprofit group, Physicians for Social Responsibility, are among the first public indications that PFAS, long-lasting compounds also known as “forever chemicals,” may be present in the fluids used during drilling and hydraulic fracturing, or fracking.

In a consent order issued for the three chemicals on Oct. 26, 2011, E.P.A. scientists pointed to preliminary evidence that, under some conditions, the chemicals could “degrade in the environment” into substances akin to PFOA, a kind of PFAS chemical, and could “persist in the environment” and “be toxic to people, wild mammals, and birds.” The E.P.A. scientists recommended additional testing. Those tests were not mandatory and there is no indication that they were carried out.

“The E.P.A. identified serious health risks associated with chemicals proposed for use in oil and gas extraction, and yet allowed those chemicals to be used commercially with very lax regulation,” said Dusty Horwitt, researcher at Physicians for Social Responsibility.

For fracking to work, the energy industry has an appetite for chemicals that, when pumped underground at high pressure, can coax oil out of the ground most efficiently. In 2008, a scientific paper published in an oil-industry journal and led by a DuPont researcher referred to the “exceptional” water-repelling and other characteristics of types of chemicals that include PFAS, and called the chemicals an “emerging technology” that showed promise for use in oil and gas extraction.

The E.P.A. documents describing the chemicals approved in 2011 date from the Obama administration and are heavily redacted because the agency allows companies to invoke trade-secret claims to keep basic information on new chemicals from public release. Even the name of the company that applied for approval is redacted, and the records give only a generic name for the chemicals: fluorinated acrylic alkylamino copolymer.

However, an identification number for one of the chemicals issued by the E.P.A. appears in separate E.P.A. data and identifies Chemours, previously DuPont, as the submitter. A separate E.P.A. document shows that a chemical with the same E.P.A.-issued number was first imported for commercial use in November 2011. (Chemours did not exist until 2015, though it would have had the responsibility to report chemicals on behalf of its predecessor, DuPont.)

There is no public data that details where the E.P.A.-approved chemicals have been used. But the FracFocus database, which tracks chemicals used in fracking, shows that about 120 companies used PFAS — or chemicals that can break down into PFAS, the most common of which was “nonionic fluorosurfactant” and various misspellings — in more than 1,000 wells between 2012 and 2020 in Texas, Arkansas, Louisiana, Oklahoma, New Mexico, and Wyoming. Because not all states require companies to report chemicals to the database, the number of wells could be higher.

Nine of those wells were in Carter County, Okla., within the boundaries of Chickasaw Nation. “This isn’t something I was aware of,” said Tony Choate, a Chickasaw Nation spokesman. [Fun fact: I was born and raised in Carter County, and my grandfather Ham, a surveyor, made the first official map of Carter County, for which he was paid 40 acres of land which turned out to be above the largest oil pool in Oklahoma, a discovery made some years after he traded the land for a buckboard and a team of mules. – LG]

Nick Conger, an E.P.A. spokesman, said that the chemicals in question were approved a decade ago, and that amendments to laws since then now required the agency to affirm the safety of new chemicals before they are allowed into the marketplace. He said the redactions in the documents were mandated by a statute protecting confidential business information. The Biden administration had made addressing PFAS a top priority, he added, for example by proposing a rule to require all manufacturers and importers of PFAS since 2011 to disclose more information on the chemicals, including their environmental and health effects.

Chemours, which has in the past agreed to pay hundreds of millions of dollars to settle injury claims related to PFOA pollution, declined to comment. In 2005, DuPont also agreed to pay $16 million to settle allegations by the E.P.A. that it had failed to report information about the health and environmental effects of PFAS, in the largest administrative penalty the agency had ever imposed at the time. But Chemours, which was spun off from DuPont in 2015, has not spoken publicly about the use of these chemicals in drilling and fracking. . .

Continue reading. There’s much more.

Later in the report:

A class of man-made chemicals that are toxic even in minuscule concentrations, for decades PFAS were used to make products like nonstick pans, stain-resistant carpeting and firefighting foam. The substances have come under scrutiny in recent years for their tendency to persist in the environment, and to accumulate inside the human body, as well as for their links to health problems like cancer and birth defects. Both Congress and the Biden administration have moved to better regulate PFAS, which contaminate the drinking water of as many as 80 million Americans.

Industry researchers have long been aware of their toxicity. But it wasn’t until the early 2000s, when the environmental attorney Rob Bilott sued DuPont for pollution from its Teflon plant in Parkersburg, W.Va., that the dangers of PFAS started to be widely known. In settlements with the E.P.A. in the mid-2000s, DuPont acknowledged knowing of PFAS’s dangers, and it and several other chemical manufacturers subsequently committed to phase out the use of certain kinds of the chemical by 2015.

And it’s not just the effects of the poisoning of the ground (and, presumably, groundwater). As a consequence of injecting wastewater from oil production into the ground, Oklahoma has been wracked by earthquakes.

And of course we continue to pollute the ocean — and there’s the elephant farting in the room: massive dumping of CO2 into the atmosphere, causing global warming and climate change, which we see accelerating as heat builds up (and as we continue to burn fossil fuels).

Of course, there’s the money the companies have paid, but that really doesn’t do the job, does it?

Written by Leisureguy

12 July 2021 at 7:15 pm

Sixty years of climate change warnings: the signs that were missed (and ignored)

leave a comment »

Homes destroyed by a storm in New York state in 1962

The dangers and the cause of climate change have been well-known for decades. Now we are reaping what we sowed by our neglect. The Guardian has a lengthy edited extract from the book Our Biggest Experiment: An Epic History of the Climate Crisis, by Alice Bell. It begins:

In August 1974, the CIA produced a study on “climatological research as it pertains to intelligence problems”. The diagnosis was dramatic. It warned of the emergence of a new era of weird weather, leading to political unrest and mass migration (which, in turn, would cause more unrest). The new era the agency imagined wasn’t necessarily one of hotter temperatures; the CIA had heard from scientists warning of global cooling as well as warming. But the direction in which the thermometer was travelling wasn’t their immediate concern; it was the political impact. They knew that the so-called “little ice age”, a series of cold snaps between, roughly, 1350 and 1850, had brought not only drought and famine, but also war – and so could these new climatic changes.

“The climate change began in 1960,” the report’s first page informs us, “but no one, including the climatologists, recognised it.” Crop failures in the Soviet Union and India in the early 1960s had been attributed to standard unlucky weather. The US shipped grain to India and the Soviets killed off livestock to eat, “and premier Nikita Khrushchev was quietly deposed”.

But, the report argued, the world ignored this warning, as the global population continued to grow and states made massive investments in energy, technology and medicine.

Meanwhile, the weird weather rolled on, shifting to a collection of west African countries just below the Sahara. People in Mauritania, Senegal, Mali, Burkina Faso, Niger and Chad “became the first victims of the climate change”, the report argued, but their suffering was masked by other struggles – or the richer parts of the world simply weren’t paying attention. As the effects of climate change started to spread to other parts of the world, the early 1970s saw reports of droughts, crop failures and floods from Burma, Pakistan, North Korea, Costa Rica, Honduras, Japan, Manila, Ecuador, USSR, China, India and the US. But few people seemed willing to see a pattern: “The headlines from around the world told a story still not fully understood or one we don’t want to face,” the report said.

This claim that no one was paying attention was not entirely fair. Some scientists had been talking about the issue for a while. It had been in newspapers and on television, and was even mentioned in a speech by US president Lyndon Johnson in 1965. A few months before the CIA report was issued, the US secretary of state, Henry Kissinger, had addressed the UN under a banner of applying science to “the problems that science has helped to create”, including his worry that the poorest nations were now threatened with “the possibility of climatic changes in the monsoon belt and perhaps throughout the world”.

Still, the report’s authors had a point: climate change wasn’t getting the attention it could have, and there was a lack of urgency in discussions. There was no large public outcry, nor did anyone seem to be trying to generate one.

Although initially prepared as a classified working paper, the report ended up in the New York Times a few years later. By this point, February 1977, the problem of burning fossil fuels was seen more through the lens of the domestic oil crisis rather than overseas famine. The climate crisis might still feel remote, the New York Times mused, but as Americans feel the difficulties of unusual weather combined with shortages of oil, perhaps this might unlock some change? The paper reported that both energy and climate experts shared the hope “that the current crisis is severe enough and close enough to home to encourage the interest and planning required to deal with these long-range issues before the problems get too much worse”.

And yet, if anything, debate about climate change in the last third of the 20th century would be characterised as much by delay as concern, not least because of something the political analysts at the CIA seem to have missed: fightback from the fossil fuel industries.

When it came to constructing that delay, the spin doctors could find building materials readily available within the scientific community itself. In 1976, a young climate modeller named Stephen Schneider decided it was time for someone in the climate science community to make a splash. As a graduate student at Columbia University, Schneider wanted to find a research project that could make a difference. While hanging out at the Nasa Goddard Institute for Space Studies, he stumbled across a talk on climate models. He was inspired: “How exciting it was that you could actually simulate something as crazy as the Earth, and then pollute the model, and figure out what might happen – and have some influence on policy in a positive way,” he later recalled.

After years of headlines about droughts and famine, Schneider figured the time was right for a popular science book on the danger climate change could cause. The result was his 1976 book, The Genesis Strategy. Although he wanted to avoid positioning himself alongside either what he called the “prophets of doom” on one side or the “Pollyannas” on the other, he felt it was important to impart the gravity of climate change and catch people’s attention.

And attention it got, with a jacket endorsement from physicist Carl Sagan, reviews in the Washington Post and New York Times, and an invitation to appear on Johnny Carson’s Tonight Show. This rankled some of the old guard, who felt this just wasn’t the way to do science. Schneider’s book drew an especially scathing attack from Helmut Landsberg, who had been director of the Weather Bureau’s office of climatology, and was now a well-respected professor at the University of Maryland.

Landsberg reviewed the book for the American Geophysical Union, calling it a “wide-ranging potpourri of science, nature and politics”, and “multidisciplinary, as promised, but also very undisciplined”. Landsberg disliked what he saw as an activist spirit in Schneider, believing that climate scientists should stay out of the public spotlight, especially when it came to the uncertainties of climate modelling. He would only endanger the credibility of climatologists, Landsberg worried; much better to stay collecting data to iron out as many uncertainties as possible, only guardedly briefing politicians behind closed doors when absolutely needed. In an example of first-class scientific bitching, Landsberg concluded his review by noting that Schneider advocated scientists running for public office, and that perhaps he had better try that himself – but that if he did want to be a serious scientist, “one might suggest that he spend less time going to the large number of meetings and workshops that he seems to frequent” and join a scientific library.

In part, it was a generational clash. Schneider belonged to a younger, more rebellious cohort, happy to take science to the streets. In contrast, Landsberg had spent a career working carefully with government and the military, generally behind closed doors, and was scared that public involvement might disrupt the delicate balance of this relationship. What’s more, the cultural norms of scientific behaviour that expect a “good” scientist to be guarded and avoid anything that smells remotely of drama were deeply embedded – even when, like any deeply embedded cultural norm, they can skew the science. Landsberg was far from the only established meteorologist bristling at all this new attention given to climate change. Some felt uneasy about the drama, while others didn’t trust the new technologies, disciplines and approaches being used.

In the UK, the head of the Met Office, John Mason, called concern about climate change a “bandwagon” and set about trying to “debunk alarmist US views”. In 1977 he gave a public talk at the Royal Society of Arts, stressing that there were always fluctuations in climate, and that the recent droughts were not unprecedented.

He agreed that if we were to continue to burn fossil fuels at the rate we were, we might have 1C warming, which he thought was “significant”, in the next 50-100 years; but on the whole, he thought, the atmosphere was a system that would take whatever we threw at it. Plus, like many of his contemporaries, he figured we would all move over to nuclear power, anyway. Writing up the talk for Nature, John Gribbin described the overall message as “don’t panic”. He reassured readers there was no need to listen to “the prophets of doom”.

Change was coming, though, and it would be a combination of an establishment scientist and an activist that would kick it off. An obscure 1978 US Environmental Protection Agency report on coal ended up on the desk of Rafe Pomerance, a lobbyist at the DC offices of Friends of the Earth. It mentioned the “greenhouse effect”, noting that fossil fuels could have significant and damaging impacts on the atmosphere in the next few decades.

He asked around the office and someone handed him a recent newspaper article by a geophysicist called Gordon MacDonald. MacDonald was a high-ranking American scientist who had worked on weather modification in the 1960s as an advisor to Johnson. In 1968 he had written an essay called How to Wreck the Environment, imagining a future in which we had resolved threats of nuclear war but instead weaponised the weather. Since then he had watched people do this – not deliberately, as a means of war, but more carelessly, simply by continuing to burn fossil fuels.

More importantly, MacDonald was also a “Jason” – a member of a secret group of elite scientists who met regularly to give the government advice, outside of the public eye. The Jason group had met to discuss carbon dioxide and climate change in the summers of 1977 and 1978, and MacDonald had appeared on US TV to argue that the earth was warming.

You might imagine there was some culture clash between Pomerance, a Friends of the Earth lobbyist, and MacDonald, a secret military scientist, but they made a powerful team. They got a meeting with Frank Press, the president’s science advisor, who brought along the entire senior staff of the US Office of Science and Technology. After MacDonald outlined his case, Press said he would ask the former head of the meteorology department at MIT, Jule Charney, to look into it. If Charney said a climate apocalypse was coming, the president would act.

Charney summoned a team of scientists and officials, along with their families, to a large mansion at Woods Hole, on the south-western spur of Cape Cod. Charney’s brief was to assemble atmospheric scientists to check the Jasons’ report, and he invited two leading climate modellers to present the results of their more detailed, richer models: James Hansen at the Goddard Institute for Space Studies at Columbia University in New York, and Syukuro Manabe of the Geophysical Fluid Dynamics Lab in Princeton.

The scientific proceedings were held in the old carriage house of the mansion, with the scientists on a rectangle of desks in the middle and political observers around the side. They dryly reviewed principles of atmospheric science and dialled in Hansen and Manabe. The two models offered slightly different warnings about the future, and in the end, Charney’s group decided to split the difference. They felt able to say with confidence that the Earth would warm by about 3C in the next century, plus or minus 50% (that is, we would see warming of between 1.5C and 4.5C). In their report of November 1979, Science magazine declared: “Gloomsday predictions have no fault.”

By the mid-1970s, the biggest oil company in the world, Exxon, was starting to wonder if climate change might finally be about to arrive on the political agenda and start messing with its business model. Maybe it was the reference in the Kissinger speech, or Schneider’s appearance on the Tonight Show. Or maybe it was just that the year 2000 – the point after which scientists warned things were going to start to hurt – didn’t seem quite so far off.

In the summer of 1977, James Black, one of the top science advisors at Exxon, made a presentation on the greenhouse effect to the company’s most senior staff. This was a big deal: executives at that level would only want to know about science that would affect the bottom line. The same year, the company hired Edward David Jr to head up their research labs. He had learned about climate change while working as an advisor to Nixon. Under David, Exxon started to build a small research project on carbon dioxide. Small, at least, by Exxon standards – at $1m a year, it was a good chunk of cash, just not much compared with the $300m a year the company spent on research at large.

In December 1978, Henry Shaw, the scientist leading Exxon’s carbon dioxide research, wrote in a letter to David that Exxon “must develop a credible scientific team,” one that can critically evaluate science that comes in on the topic, and “be able to carry bad news, if any, to the corporation”.

Exxon fitted out one of its largest supertankers with custom-made instruments to do ocean research. Exxon wanted to be taken seriously as a credible player, so wanted leading scientists on board, and was willing to ensure they had scientific freedom. Indeed, some of the work they undertook with oceanographer Taro Takahashi would be later used in a 2009 paper concluding that the oceans absorb only 20% of carbon dioxide emitted from human activities. This work earned Takahashi a Champion of the Earth prize from the UN.

In October 1982, David told a global warming conference financed by Exxon: “Few people doubt that the world has entered an energy transition, away from dependence upon fossil fuels and toward some mix of renewable resources that will not pose problems of CO2 accumulation.”

The only question, he said, was how fast this would happen. Maybe he really saw Exxon as about to lead the way on innovation to zero-carbon fuels, with his R&D lab at the centre of it. Or maybe the enormity of the challenge hadn’t really sunk in. Either way, by the mid-1980s the carbon dioxide research had largely dried up.

When Ronald Reagan was elected in November 1980, he appointed lawyer James G Watt to run the Department of the Interior. Watt had headed a legal firm that fought to open public lands for drilling and mining, and already had a reputation for hating conservation projects, as a matter of policy and of faith. He once famously described environmentalism as “a leftwing cult dedicated to bringing down the type of government I believe in”. The head of the National Coal Association pronounced himself “deliriously happy” at the appointment, and corporate lobbyists started joking: “How much power does it take to stop a million environmentalists? One Watt.”

Watt didn’t close the EPA, as people initially feared he would, but he did . . .

Continue reading. There’s much more. It’s a lengthy extract. And Anne Gorsuch, mentioned in the next paragraph, has a famous son, Neil, who is a current Associate Justice of the Supreme Court.

Starving cattle roam a cracked landscape in Mauritania in search of water, 1978.

Written by Leisureguy

10 July 2021 at 10:23 am

How to build a small and pedestrian-friendly town

with one comment

WrathOfGnon has an interesting (and lengthy) post:

Of all the questions I get on Twitter the most common is this: “How do you build a town?” We know well how it used to be done, but these last one or two centuries we have forgotten how to do it (with only a handful of notable exceptions during the last century1). The other day I was asked again, but this time with a set of premises that made the question a little easier to approach. I have anonymized all the details but the general idea remains: four guys (friends) with money have bought a suitably large piece of land in Texas and now want to create a car-free human-scaled town2 of the kind that I am always writing about.

In this text I intend to set out the most bare-bone basic premises for how to start a good town, what is needed to build something anti-fragile3 and sustainable4 under the above mentioned scenario. I will go back to this text and edit it, add points, or discuss certain aspects deeper in future texts, especially those points that stimulate questions or controversy.

This is my first published long form. It is my general idea to write as little as possible while still getting the point across. I might delete this first attempt.

  1. Size and borders: “You can’t have a garden without fences.”

To create a human-scaled town we first establish what is a good size, and this is simply one third of a square kilometer, or 82 acres, or 0.13 square miles. 80 acres was the upper limit for a good family farm in medieval England, and it is still the size at which the most flexible and efficient farms run, both modern and more old-fashioned Amish family farms. It allows a town where any point can be reached on foot in 15 minutes, and it allows comfortable living for a population of 3000, which was considered the ideal size in medieval Europe: the upper limit of efficiency and comfort, productivity and harmony: more and you get crowded, fewer and you risk being without some important trades and activities. Even though the premise talks about a town of 600, we plan three centuries ahead for a maximum population of ca. 3000.
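The figures in that paragraph are easy to sanity-check. A quick sketch in Python (the circular-town idealization and the 5 km/h walking speed are my assumptions, not the post's):

```python
import math

KM2_PER_ACRE = 0.00404686  # 1 acre = 0.00404686 km^2

def km2_to_acres(km2: float) -> float:
    return km2 / KM2_PER_ACRE

def km2_to_sq_miles(km2: float) -> float:
    return km2 * 0.386102  # 1 km^2 = 0.386102 sq mi

def crossing_minutes(area_km2: float, speed_kmh: float = 5.0) -> float:
    """Minutes to walk the diameter of an idealized circular town."""
    diameter_km = 2 * math.sqrt(area_km2 / math.pi)
    return diameter_km / speed_kmh * 60

area = 1 / 3  # one third of a square kilometre
print(round(km2_to_acres(area), 1))      # ~82.4 acres
print(round(km2_to_sq_miles(area), 2))   # ~0.13 sq mi
print(round(crossing_minutes(area), 1))  # ~7.8 minutes edge to edge
```

The 82-acre and 0.13-square-mile figures check out, and even walking the full diameter takes well under the 15-minute limit, leaving slack for streets that follow the contours of the land rather than straight lines.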

A good town (the urban) is clearly defined and set apart from the countryside (the rural). The suburban has no place here. Hence the town needs to be as clearly marked out and defined as the individual family lots will be: to here, but no further. For this purpose we will mark out land to be used as a wall, raised embankment, hedge, fence, moat, canal, etc. Some sort of edge which is not routinely nor distractedly crossed.

As for shape, I recommend a somewhat irregularly oval shape, near round in one extreme, or rice grain shaped in the other extreme, for the simple reason that the best towns and cities seem to be oval to some degree5. As far as possible the existing topography should be kept or even enhanced. Perfectly flat land is only popular with boring developers. So: no bulldozing allowed. Existing trees should be left standing and existing paths left in place (even when slightly inconvenient). New paths and streets should follow the contours of the land. Anything historic (an old campsite, an ancient grave or remains of an old farmstead) should be kept and protected and venerated. History is in short supply in new developments, and interesting stories can be woven around something as mundane as an abandoned old cart or well.

The oval (left) and the (Japanese) grain of rice. Good basic shapes for a town.
  2. Water, energy, food and connections: the needs hierarchy of towns.

Since the premise is Texas, and undeveloped land, I am imagining land that is more or less parched, but with short and intense annual rains that risk flooding the entire area. The town will be in a perpetual state of drought and need to be prepared for flash floods6. Hence cisterns, reservoirs, and water harvesting will be vital, and whatever gets built, roofs will harvest water into private cisterns or ponds, and all streets will direct stormwater to overflow-proofed cisterns. An area the size of two or three football pitches outside the town will be devoted to flood protection and temporary storage of water. During most of the year this land will be dry and a perfect spot for sports, barbecues, festivals, playgrounds, fairs and markets.

This arrangement should make the town self-sustainable in household water at least. Pumping groundwater should not be an option; it is simply not sustainable in an arid/semi-desert environment, and Texans already know how to build and manage water-harvesting infrastructure. There is no need to reinvent the wheel or spend tons of resources on piping in distant water.
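The basic roof-harvest arithmetic is simple: one millimetre of rain falling on one square metre yields one litre. A sketch with purely illustrative numbers (the roof size, rainfall, and runoff coefficient below are my assumptions, not figures from the post):

```python
def annual_harvest_litres(roof_m2: float, rainfall_mm: float,
                          runoff_coeff: float = 0.85) -> float:
    """1 mm of rain on 1 m^2 yields 1 litre; the runoff coefficient
    discounts evaporation, splash, and first-flush losses."""
    return roof_m2 * rainfall_mm * runoff_coeff

# Illustrative: a 150 m^2 roof in a semi-arid Texas location
# receiving roughly 500 mm of rain per year.
harvest = annual_harvest_litres(150, 500)
print(round(harvest))        # 63750 litres/year
print(round(harvest / 365))  # ~175 litres/day
```

Whether that covers a household depends on habits and fixtures, but it shows why the post treats roofs, cisterns, and street drainage as one integrated system: in a climate where the rain arrives in a few intense bursts, the cistern has to be sized to hold most of a year's supply at once.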

There will be an urge to build each home optimized for air conditioning. Don’t. All buildings must be useful and livable even with the power cut. Hence, natural ventilation, strategically designed windows that open, etc. are necessary. Obviously you can add AC (air conditioning) on top of that, but in no way should the town be dependent on AC. I don’t think a town can casually produce the energy it needs by itself (for that a far more serious effort would be needed), but even if the grid is cut, it should have enough to power food storage, basic lights and communications (WiFi etc.). This can be achieved with limited private and public PV (photovoltaic solar power). For hot water, solar heaters are useful even in a Texan winter, and all homes will be equipped with fireplaces, wood stoves and chimneys.

Once you remove the need for heating, cooling and transport from a town’s energy needs, you are left with something that will easily run on limited solar (and the attached batteries) in case of a grid failure. This will also save the town and its people large amounts of money even in the near future.

For food, the town should not spare any effort to be self-sustainable. Food items are also a prime export product, especially high-end refined items (exporting raw materials/food isn’t a good use of resources). It provides jobs and income and is a sure way to draw tourists. For this purpose there will be no lawns, but plenty of gardens, orchards, street side herbs, roof top apiaries and flowers to feed the bees that inhabit them. The rural area (the “market garden zone”) surrounding the town out to a radius of one mile should be devoted more or less entirely to food production in some form, and it should be farmed primarily by the people living in town on a professional or hobby level (either one is fine: create the best allotment system in Texas!). The second belt is the farm zone. Here I would recommend, if not enough farmers could be found, offering the land at good prices to Amish families to farm. 800 acres is enough for 10 farms. They also have the expertise to run a farm in any sort of energy crisis. The rule of thumb is that only people who live directly off the land should live in the rural area (the “farm zone”).

Inside the town basic facilities for food processing should be found, from feed and dairy refinement to meat processing. People should be encouraged to plant espaliered fruit trees on every suitable south-facing wall. Poultry, pigs and rabbits should be kept, not only for meat and eggs but also to produce high quality fertilizer for the poor soil in the area. And this goes for humanure7 as well. Pesticides and chemical fertilizers should be completely banned from the start. Water should be treated organically and as low-tech as possible, on site.

A good “code-hack” for any small town was developed in Seaside, Florida: “one 14×14 feet area of a lot has no height limitation”8. This will spur people to build towers and spires, which are useful for housing bats and pigeons, which will help in pest control (pigeons are also an unbeatable supply of food). Some space in the town itself should be reserved for food production: dovecotes, commons for grazing, etc. A small town like this needs no parks, so instead institute seed gardens (small gardens used only for producing seeds) of vegetables and herbs. Encourage people to keep flowers (to help honey production): consider instituting a program where each square foot of flower pot space gives you a certain weight of honey from the public or private apiary.

Ideally you want to build a new town in a region where there are already people present, near larger cities or along a “necklace” of small towns. This makes it easier to attract citizens, and it also makes the town less isolated, more easily connected to outside markets, tourism, etc. But in this scenario the land is marginal and a bit far from towns and airports. Hence, save space for a convenient and scenic (you can’t do fast at this scale) rail or canal or river ferry connection to the nearest larger town. It will raise the value of the town land itself, and everything it produces will have better access to a market (especially perishables). It is also a great way to bring tourism into the city without having to provide parking.

It is possible to build isolated cities, but the chances of succeeding are so slim I would not recommend it. Decide from the beginning where you want a possible rail station: by the gate? Inside the town? Through the town? It is easy to prepare the ground now, rather than wait until it is all developed and built up.

  3. Materials and harmony.

All materials used, as far as possible, should be of local origin. In Texas that means the town will be built from rammed earth, adobe bricks, some fired bricks or stone. No concrete, vinyl sidings, clapboard (not ideal in an arid town environment anyway), plastic etc. Before anything gets built, a pattern book9 for the town must be developed that should have a few very basic building types that new residents can easily build and that fit in anywhere in town. A color pattern will be developed using locally accessible earth tones and pigments (if the . . .

Continue reading. There’s much more — and I hope someone takes this idea and applies it in some locale (though obviously some details, such as building materials, will vary with the locale).

This article reminded me strongly of Christopher Alexander, and specifically of The Timeless Way of Building and A Pattern Language. I was surprised not to see Alexander’s name in the footnotes.

Written by Leisureguy

8 July 2021 at 2:37 pm

Whistleblowers Expose Corruption in EPA Chemical Safety Office

leave a comment »

Sharon Lerner reports in The Intercept:

MANAGERS AND CAREER STAFF in the Environmental Protection Agency’s Office of Chemical Safety and Pollution Prevention tampered with the assessments of dozens of chemicals to make them appear safer, according to four scientists who work at the agency. The whistleblowers, whose jobs involve identifying the potential harms posed by new chemicals, provided The Intercept with detailed evidence of pressure within the agency to minimize or remove evidence of potential adverse effects of the chemicals, including neurological effects, birth defects, and cancer.

On several occasions, information about hazards was deleted from agency assessments without informing or seeking the consent of the scientists who authored them. Some of these cases led the EPA to withhold critical information from the public about potentially dangerous chemical exposures. In other cases, the removal of the hazard information or the altering of the scientists’ conclusions in reports paved the way for the use of chemicals, which otherwise would not have been allowed on the market.

This is the first of a series of articles based on the four whistleblowers’ highly detailed allegations, which were supported by dozens of internal emails with supervisors, meeting summaries, and other documents. Together, the evidence they provided shows a pattern in which the EPA failed to follow the law that oversees chemical regulation, particularly the Toxic Substances Control Act, or TSCA, and depicts a workplace in which EPA staffers regularly faced retribution for following the science.

“The Office of Chemical Safety and Pollution Prevention is broken,” the scientists wrote in a statement they provided to The Intercept and Rep. Ro Khanna, D-Calif., chair of the House Committee on Oversight and Reform. “The entire New Chemicals program operates under an atmosphere of fear — scientists are afraid of retaliation for trying to implement TSCA the way Congress intended, and they fear that their actions (or inactions) at the direction of management are resulting in harm to human health and the environment.”

The four EPA staff members, who hold doctorates in toxicology, chemistry, biochemistry, and medicinal chemistry, said that they told colleagues and supervisors within the agency about the interference with their work. Each of the scientists also filed complaints with either the EPA’s inspector general or the Office of Science Integrity, which has pledged to investigate corruption within the agency. But because most of their concerns remained unaddressed months after they disclosed them — and because, in each case, the altering of the record presented a potential risk to human health — the scientists said they felt compelled to make their complaints public.

Maybe the Hazards Will Go Away

Elyse Osterweil, one of the four scientists, said she was at first reluctant to speak up about the intense pressure she faced from her supervisors to remove references to potential toxicity from the assessments of new chemicals. The assessments, which use animal studies to gauge a chemical’s potential risk to humans, can lead the agency to place limits on its use — or to ban it entirely. In the case of one substance that Osterweil was reviewing in February of this year, the animal studies suggested serious potential for harm. Rats exposed to a single dose of the chemical had become lethargic, lost weight, and had trouble moving. Some became comatose, and others died.

“Usually with this type of acute study, there are no effects,” said Osterweil. “So this was a red flag to me that we needed further information.” But when Osterweil said in a meeting that she needed more data to complete her hazard assessment report, one of her supervisors responded with a series of questions. “She kept asking me, ‘Look at the data, look at the data, look at it again, tell me what you see,’” Osterweil said of her supervisor. “I knew she wanted me to make the hazards go away, and she even said that: ‘Why don’t you take a look at the actual study data again, and maybe the hazards will go away?’”

Although she knew she didn’t have enough information to say that the chemical didn’t pose a risk, Osterweil seriously considered giving in to the pressure to deem it safe. “There was a time when I thought, ‘Well, maybe I should let this one go and just pick my battles,’” she said. “But I just couldn’t.”

A chemist named Martin Phillips faced similar pushback when he was assessing a mixture of compounds in January of 2020. One component of the product, which was to be used in cleaning solutions, is a chemical that caused birth defects and miscarriage in experiments on rats. Phillips and another risk assessor noted the developmental effects in the chemical’s hazard assessment, which must by law then be added to the chemical’s safety data sheet, a document the Occupational Safety and Health Administration uses to communicate risk to workers. But the company that had submitted the product for approval balked at the requirement. And the day after the assessment Phillips wrote was finalized, a representative of the company who had recently worked in the same division of the EPA met with several of Phillips’s colleagues and his supervisor, whom she had known from her time at the agency. Phillips wasn’t invited to attend the meeting. The following day, another assessment of the chemical was uploaded into the EPA’s computer system without Phillips’s consent or knowledge. The new version omitted the information about the birth defects and miscarriages.

When he learned of the new assessment, Phillips asked that . . .

Continue reading. There’s much more, and there is danger to the public.

Written by Leisureguy

4 July 2021 at 9:41 am

Homeowner and Condo Associations Are In Over Their Heads

leave a comment »

Evan McKenzie, who teaches in the political science department and the law school at the University of Illinois at Chicago and is the author of two books about condominium and homeowner associations, writes in the Washington Post:

The collapse of the Champlain Towers South condominium building in Surfside, Fla., is a terrible tragedy. Besides the stories of the victims and their grieving loved ones, early attention has focused on the causes of the collapse, such as how the building was constructed, the effects of saltwater on reinforced concrete and whether the condominium association was properly maintaining the high-rise.

Those are important matters, but the disaster exemplifies a bigger problem, one that will still loom once we have answers about what went wrong in Surfside: The untrained, unpaid and unsupervised volunteer directors of the nation’s more than 350,000 condo and homeowners’ associations, armed with limited financial resources, are expected to deal with the unprecedented infrastructure challenges that climate change poses to their communities. And there is no reason to believe that they are up to that task.

More than 20 percent of the U.S. population lives in property administered by condominium and homeowners’ associations, nonprofits run by volunteers elected by the owners. These directors and officers are responsible for an estimated $7 trillion worth of private property and infrastructure, including high-rise buildings, private streets, parks, pools, sewer and water systems, lakes, garages, and many other building systems and amenities.

As condos and HOAs blossomed across the country in the last 50 years, little or no thought was given to the eventual effects of climate change, in terms of location or construction quality. The common-interest housing sector emerged in the 1960s as a way to put more people on less land, increasing developer profits and local property tax revenue. The model spread rapidly, and condos and HOAs are now the default options for new construction in many states, not just across the Sun Belt where they originated but in older metro areas as well.

Many locations are problematic from the outset. Developers often build in places that appeal to buyers but pose environmental challenges — such as on reclaimed wetlands or beaches next to rising seas, as in Surfside. Other developers place subdivisions at the top of artificial slopes that turn into mudslides in hundred-year storms, which now occur more often than they used to. Terrible disasters have struck neighborhoods built in areas that are prone to drought-induced wildfires. Local governments may approve these location decisions because they are great for sales and the property tax base, but they drop environmental issues right in the laps of condo and HOA boards.

Condo and homeowners’ associations were never designed or empowered to handle such conditions. These associations are essentially on their own, with virtually no support from any level of government. Although most of them operate well most of the time, paying for routine maintenance and repair has always been a challenge, long before climate change made things worse. For years, industry insiders have pointed out that although directors and officers are responsible for maintaining the property, most unit owners are notoriously unwilling to see their housing costs go up now to sock away funds for repairs in the future. Why, they ask, should they pay today so someone else can have a new roof long after they’ve moved out? Yet that is precisely what they are expected to do. Somehow, dozens, hundreds or even thousands of owners are supposed to overcome their self-interest and collective-action problems and commit to maintaining their private infrastructure in perpetuity.

Now the maintenance and repair responsibilities that condo boards struggle with every day, with varying degrees of success, are being amplified by the effects of global climate change. It is increasingly clear that owner resources and volunteer expertise are inadequate to meet the challenge of maintaining buildings, preventing and mitigating climate-related damage, and restoring property that is severely harmed or even destroyed.

The Surfside disaster is an instructive example of an association faced with environmental challenges beyond its means. The 12-story condo tower with 136 units was built 40 years ago on reclaimed beachfront wetlands, where the proximity of a rising ocean, saltwater and gradual land subsidence have been constant threats to structural integrity. A few years ago, engineers told the condo board that they had an expensive problem on their hands with deteriorating reinforced concrete. After much internal back-and-forth, the board recently assessed a total repair cost on the owners of $15 million. That averages out to more than $110,000 per unit for this midsize association, an eye-popping figure for any homeowner and one that would undoubtedly put many into foreclosure for failure to pay. Repairs were set to begin soon; residents were initially supposed to decide whether to pay their share of the assessment at once or in monthly installments by this past Thursday.

Some are claiming that the collapse could have been avoided if better maintenance had been done earlier. Maybe. But there are thousands of beachfront condos on the Atlantic, Pacific and Gulf coasts. We cannot expect all of them to be maintained consistently to industry standards with sea level rise, storm surge, land subsidence and a host of other coastal climate issues in mind. We know there will always be some that risk skipping maintenance, thinking the worst won’t happen there.

Sea level rise is not the only climate-related problem that places owners in harm’s way and that local governments and developers never anticipated. It is clear by now that climate change produces heavy rainfall, including hurricanes and so-called 100-year storms, that can cause major flooding, landslides and other stormwater-related disasters. Condo and homeowners’ associations have been severely affected by such events, and developers have been sued from coast to coast over their failure to anticipate them and build accordingly. Expensive litigation after the fact is no substitute for prevention, but it is unrealistic to expect condominium and homeowner associations to undertake costly anticipatory measures. They have neither the expertise nor the resources to do so. In most cases, they don’t even know where to begin.

Many other communities have been built near places prone to wildfires, which have taken on new ferocity in drought conditions fueled by global warming. The costs of safeguarding neighborhoods against these fires are daunting for owners and associations, and prevention is almost entirely out of their hands. In 2003, wildfires destroyed 331 homes in Scripps Ranch, an upscale San Diego-area neighborhood where developer-created HOA requirements for wooden “shake” shingle roofs accelerated the destruction. The San Diego City Council banned these roofs in new construction, including for people who wanted to rebuild their homes. Yet they faced intense resistance from Scripps Ranch owners over proposed building code changes intended to protect their homes against future wildfires, because implementing those changes would have been expensive. Owners in a fire-prone area might be understandably angry if their association required them to pay for new roofs, elaborate sprinkler systems, doors and windows with heat-resistant double-glazed material, and special fire-retardant house paint. If local governments encounter pushback when they require such measures, it seems unlikely that condo and homeowners’ associations would adopt them voluntarily.

In effect, condo and HOA developments are a huge experiment in privatization of local government functions, and sometimes the offloading of government responsibilities goes too far. We can expect a condo or HOA board to handle garbage collection, get the leaves and snow removed from private streets, and broadly live up to its responsibilities to residents. But when private communities took off in the 1960s, we didn’t even know what climate change was. We cannot . . .

Continue reading.

Written by Leisureguy

4 July 2021 at 9:34 am

Canada is a warning: more and more of the world will soon be too hot for humans

leave a comment »

A few decades ago I read that global warming, if unchecked, would result in the region between the Tropic of Cancer and the Tropic of Capricorn being uninhabitable. I think we are on track to see that happen in another decade, two at the most. Simon Lewis writes in the Guardian:

The climate crisis means that summer is a time of increasingly dangerous heat. This week in the Pacific north-west, temperature records are not just being broken, they are being obliterated. Temperatures reached a shocking 47.9°C [118°F] in British Columbia, Canada. [That temperature record was measured in the small town of Lytton (pop ~1000), most of whose residences and businesses burned to the ground yesterday as a result of wildfire (see photo above). – LG] Amid temperatures more typically found in the Sahara desert, dozens have died of heat stress, with “roads buckling and power cables melting”. [In BC, there were 486 sudden deaths during the heat wave, triple the normal average, and the number is expected to increase as more reports are filed. – LG]

Another heatwave earlier in June saw five Middle East countries top 50°C [122°F]. The extreme heat reached Pakistan, where 20 children in one class were reported to have fallen unconscious and needed hospital treatment for heat stress. Thankfully, they all survived.

Additional warming from greenhouse gas emissions means that such extreme heatwaves are more likely and scientists can now calculate the increase in their probability. For example, the 2019 European heatwave that killed 2,500 people was five times more likely than it would have been without global warming.

In most places, extreme heatwaves outside the usual range for a region will cause problems, from disrupting the economy to widespread mortality, particularly among the young and old. Yet in places in the Middle East and Asia something truly terrifying is emerging: the creation of unliveable heat.

While humans can survive temperatures of well over 50°C when humidity is low, when both temperature and humidity are high, neither sweating nor soaking ourselves can cool us. What matters is the “wet-bulb” temperature – given by a thermometer covered in a wet cloth – which shows the lowest temperature that evaporative cooling from sweat or water can achieve. Humans cannot survive prolonged exposure to a wet-bulb temperature beyond 35°C because there is no way to cool our bodies. Not even in the shade, and not even with unlimited water.
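For readers who want to gauge the danger from an ordinary forecast: wet-bulb temperature can be estimated from air temperature and relative humidity with Stull's 2011 empirical formula (a sketch, not the method the article's scientists used; the formula is valid roughly for 5–99% humidity and −20°C to 50°C):

```python
import math

def wet_bulb_c(temp_c: float, rh_percent: float) -> float:
    """Stull (2011) empirical wet-bulb approximation, in degrees C.
    Valid roughly for RH 5-99% and air temperatures -20C to 50C."""
    T, rh = temp_c, rh_percent
    return (T * math.atan(0.151977 * math.sqrt(rh + 8.313659))
            + math.atan(T + rh)
            - math.atan(rh - 1.676331)
            + 0.00391838 * rh ** 1.5 * math.atan(0.023101 * rh)
            - 4.686035)

print(round(wet_bulb_c(20, 50), 1))  # 13.7 -- Stull's own check value
print(round(wet_bulb_c(40, 75), 1))  # ~35.8: past the survivability limit
```

The second example shows why humid heat is so much deadlier than dry heat: 40°C at 75% humidity already exceeds the 35°C wet-bulb survivability threshold, while 40°C in a dry desert does not come close.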

A 35°C wet-bulb temperature was once thought impossible. But last year scientists reported that locations in the Persian Gulf and Pakistan’s Indus river valley had already reached this threshold, although only for an hour or two, and only over small areas. As climate change drives temperatures upwards, heatwaves and accompanying unliveable temperatures are predicted to last longer and occur over larger areas and in new locations, including parts of Africa and the US south-east, over the decades to come.

What can governments, companies and citizens do? First, cut off the supply of ever more extreme heatwaves by halving carbon dioxide emissions this decade, then reaching net zero emissions by 2050.

Second, prepare for the inevitable heatwaves of the future. Emergency public health planning is the initial priority: getting essential information to people and moving vulnerable people into air-conditioned locations. Heatwave forecasts should include wet-bulb temperatures so that people can learn to understand the dangers.

Plans should account for the fact that heatwaves intensify structural inequalities. Poorer neighbourhoods typically have fewer green spaces and so heat up more, while outdoor workers, often poorly paid, are especially vulnerable. The rich also buy up cooling equipment at high prices once a heatwave is underway and have many more options to flee, underscoring the importance of public health planning.

Beyond crisis management, governments need to invest in making countries function in the new climate we are creating, including the extremes. In climate policy terms this is known as “adaptation”.

Of paramount importance is making energy supplies resilient to heatwaves, as people will be relying on electricity for cooling from air-conditioning units, fans and freezers, which are all life-savers in a heatwave. Similarly, internet communications and data centres need to be future-proofed, as these are essential services that can struggle in the heat.

Beyond this, new regulations are needed to allow buildings to keep cool and for transport systems, from roads to trains, to be able to operate under much higher temperature extremes.

Many of these changes can meet other challenges. Retro-fitting homes to be energy-efficient is also the perfect opportunity to modify them to also keep us cool. For example, installing electric heat pumps to warm houses in the winter means that in the summer they can also be switched to run in reverse to work as a cooling system. Cities can be kept cooler with green roofs and more green spaces, which also make them better places to live.

The final task is future-proofing agriculture and the wider ecosystems we all ultimately rely on. Heat can cause havoc with crop production. In Bangladesh, just two days of hot air in April this year destroyed 68,000 hectares of rice, affecting . . .

Continue reading. There’s more.

In the meantime, the GOP and fossil fuel companies are working together to get more CO2 in the atmosphere so it will become hotter and hotter.

I fear for my children and especially for my grandchildren. The world will not be a good place in the future because humans as a species seem incapable of taking effective action, betrayed in many instances by the efforts of the wealthy to increase their wealth whatever the cost to others.

Written by Leisureguy

2 July 2021 at 10:43 am

GOP: “Let’s hold off on addressing climate change until it is too late”

leave a comment »

Heather Cox Richardson writes:

Last week, Florida governor Ron DeSantis became the latest Republican governor to sign a bill making it harder for citizens to shift away from the fossil fuels that are changing the climate. The move came after Miami, which is in danger as sea levels rise, proposed cutting carbon emissions by banning natural gas infrastructure in new buildings. The bill was written by lawyers for utility companies, based on a pattern written by the American Gas Association. Lobbyists for the Florida Petroleum Association, the Florida Natural Gas Association, the Florida Retail Federation, the Florida Home Builders Association, and the National Utility Contractors Association of Florida supported the bill.

Nine other Republican states have already passed similar legislation.

Republican-led states are defending the use of fossil fuels in other ways. News that President Biden’s climate envoy, John Kerry, was urging major U.S. banks to invest responsibly with an eye to the climate crisis, led the state treasurers of West Virginia, Alabama, Arizona, Arkansas, Idaho, Kentucky, Mississippi, Missouri, Nebraska, North Dakota, Ohio, Oklahoma, Pennsylvania, South Carolina, and South Dakota to write to him expressing their “deep concern” that he, along with other members of the Biden administration, was pressuring banks “to refuse to lend to or invest in coal, oil, and natural gas companies, as a part of a misguided strategy to eliminate the fossil fuel industry in our country.” They accused the Biden administration of “picking economic winners and losers” according to “Biden’s own radical political preferences,” and thus depriving “the people” of agency.

Coal, oil, and natural gas are crucial to their states’ economies, they said, providing “jobs, health insurance, critical tax revenue, and quality of life.” They warned that they would withhold public funds from any banks that refused to lend to fossil fuel industries.

And yet, historically, the government has picked fossil fuels as a winner that outranks any other energy source. While Republicans tend to claim any spending for alternative energies is wasteful, a recent report by the Stockholm Environment Institute, a nonprofit think tank, says that U.S. subsidies to new oil and gas projects inflate their value by up to $20 billion per year. This would seem to fly in the face of Republican complaints about “socialism” in which the government picks winners and losers.

A recent Morning Consult poll shows that 50% of voters say climate change is a critical threat to America. Another 26% think it is important, but not critical. Among Democrats, 75% say climate change is a critical threat, while another 17% say it is important. Among Republicans, 21% say climate change is a critical threat, while another 37% say it is important, but not critical.

With this support for addressing climate change, why do Republicans appear dead set against dealing with it in any meaningful way, propping up instead the fossil fuels that feed that change?

At the nomination hearing for now–Secretary of the Interior Deb Haaland, who has promised to protect our lands, Senator John Barrasso (R-WY), the top Republican on the Senate Energy and Natural Resources Committee, told Haaland that his state collects more than a billion dollars a year in royalties and taxes from the oil, gas, and coal produced on federal lands in the state, and warned that the Biden administration is “taking a sledgehammer to Western states’ economies.”

Oil produces the most revenue for Texas, which earned $16.3 billion from oil in 2019, an amount that made up 7% of the state’s revenue. Oil revenues accounted for 70% of state revenues ($1.1 billion) in Alaska in 2019, 52% of state revenues ($2.2 billion) in Wyoming in 2017, and 45% of the revenues ($1.6 billion) in North Dakota in 2017.

But production declines in the past year due to the coronavirus pandemic have hurt these fossil fuel states. Wyoming expects to have 29% less money than it expected in 2021–2022. Alaska expects an estimated 18% budget deficit in fiscal 2021. Without money coming in from fossil fuels, people will have to make up the difference either by paying taxes (an unpopular outcome, especially in Republican-dominated states) or by losing even more services.

Reducing dependence on fossil fuels will also cost current jobs, and one of the hallmarks of an economy developed around an extractive industry is that it tends to have little flexibility. The rural American West was developed around extractive economies, with a few wealthy men employing lots of workers, and its limited economy means that workers cannot transition easily into other fields.

Fossil fuel advocates also contribute mightily to Republican campaigns, adding financial interest to party members’ general dislike of regulation. In Florida, utility companies employ an average of one lobbyist for every two legislators. “It’s no secret we play an active role in public policy,” a spokesman for a Florida utility told Rolling Stone’s Tim Dickinson in 2016.

This week, in the Pacific Northwest, . . .

Continue reading.

Written by Leisureguy

30 June 2021 at 5:53 am

“Finding the Mother Tree”: Discovering forests’ hidden networks

leave a comment »

In Science News Cori Vanchieri reviews Suzanne Simard’s book:

Finding the Mother Tree
Suzanne Simard
Knopf, $28.95

Opening Suzanne Simard’s new book, Finding the Mother Tree, I expected to learn about the old growth forests of the Pacific Northwest. I had an inkling that Simard, a forest ecologist at the University of British Columbia in Vancouver, would walk through her painstaking research to convince logging companies and others that clear-cutting large parcels of land is too damaging for forests to recover. I didn’t expect to be carried along on her very relatable journey through life.

Simard was born in the Monashee Mountains of British Columbia in 1960. Her family of loggers selectively cut trees and dragged them out with horses, leaving plenty still standing. In her first stab at a career, she joined a commercial logging company that clear-cut with large machinery. Her job was to check on seedlings the firm had planted in those areas to restart the forest. The fledgling plants were often yellowed and failing. Simard’s instincts told her those trees were missing the resources that exist within a diverse community of plants, so she set out to see if her hunch was right.

She learned how to do experiments, with close calls with grizzly bears and other mishaps along the way, eventually becoming a tenured professor. She and colleagues discovered that underground networks of fungi among tree roots shuttle carbon and nutrients from tree to tree (SN: 8/9/97, p. 87). Simard seamlessly weaves details of her studies of these networks with her life’s travails: sibling relationships and loss, struggles as a woman in a male-dominated field and her own recovery from a health crisis. Like many women who work outside the home, she felt torn between being with her young daughters and pursuing her professional passions.

Readers will feel for Simard as much as they worry for the forests that are quickly disappearing. Simard presents plenty of evidence and writes enthusiastically to build her analogy of the “mother trees” — the biggest, oldest trees in a forest that nurture those nearby. In her experiments, seedlings planted near a mother tree were much more likely to survive.

“Trees and plants have agency,” she writes. “They cooperate, make decisions, learn and remember — qualities we normally ascribe to sentience, wisdom, intelligence.” Simard encourages logging companies to . . .

Continue reading.

Written by Leisureguy

29 June 2021 at 3:53 pm

The real urban jungle: how ancient societies reimagined what cities could be

leave a comment »

Patrick Roberts has in the Guardian an extract from his book Jungle: How Tropical Forests Shaped the World – and Us:

Visions of “lost cities” in the jungle have consumed western imaginations since Europeans first visited the tropics of Asia, Africa and the Americas. From the Lost City of Z to El Dorado, a thirst for finding ancient civilisations and their treasures in perilous tropical forest settings has driven innumerable ill-fated expeditions. This obsession has seeped into western societies’ popular ideas of tropical forest cities, with overgrown ruins acting as the backdrop for fear, discovery and life-threatening challenges in countless films, novels and video games.

Throughout these depictions runs the idea that all ancient cities and states in tropical forests were doomed to fail. That the most resilient occupants of tropical forests are small villages of poison dart-blowing hunter-gatherers. And that vicious vines and towering trees – or, in the case of The Jungle Book, a boisterous army of monkeys – will inevitably claw any significant human achievement back into the suffocating green whence it came. This idea has been boosted by books and films that focus on the collapse of particularly enigmatic societies such as the Classic Maya. The decaying stone walls, the empty grand structures and the deserted streets of these tropical urban leftovers act as a tragic warning that our own way of life is not as secure as we would like to assume.

For a long time, western scholars took a similar view of the potential of tropical forests to sustain ancient cities. On the one hand, intensive agriculture, seen as necessary to fuel the growth of cities and powerful social elites, has been considered impossible on the wet, acidic, nutrient-poor soils of tropical forests. On the other, where the rubble of cities cannot be denied, in the drier tropics of North and Central America, south Asia and south-east Asia, ecological catastrophe has been seen as inevitable. Deforestation to make way for massive buildings and growing populations, an expansion of agriculture across marginal soils, as well as natural disasters such as mudslides, flooding and drought, must have made tropical cities a big challenge at best, and a fool’s gambit at worst.

Overhauling these stereotypes has been difficult. For one thing, the kind of large, multiyear field explorations usually undertaken on the sites of ancient cities are especially hard in tropical forests. Dense vegetation, mosquito-borne disease, poisonous plants and animals and torrential rain have made it arduous to find and excavate past urban centres. Where organic materials, rather than stone, might have been used as a construction material, the task becomes even more taxing. As a result, research into past tropical urbanism has lagged behind similar research in Mesopotamia and Egypt and the sweeping river valleys of east Asia.

Yet many tropical forest societies found immensely successful methods of food production, in even the most challenging of circumstances, which could sustain impressively large populations and social structures. The past two decades of archaeological exploration, applying the latest science from the land and the air, have stripped away canopies to provide new, more favourable assessments.

Not only did societies such as the Classic Maya and the Khmer empire of Cambodia flourish, but pre-colonial tropical cities were actually some of the most extensive urban landscapes anywhere in the pre-industrial world – far outstripping ancient Rome, Constantinople/Istanbul and the ancient cities of China.

Ancient tropical cities could be remarkably resilient, sometimes surviving many centuries longer than colonial- and industrial-period urban networks in similar environments. Although they could face immense obstacles, and often had to reinvent themselves to beat changing climates and their own exploitation of the surrounding landscape, they also developed completely new forms of what a city could be, and perhaps should be.

Extensive, interspersed with nature and combining food production with social and political function, these ancient cities are now catching the eyes of 21st-century urban planners trying to come to grips with tropical forests as sites of some of the fastest-growing human populations around the world today.

As with “agriculture”, people in the west tend to view the concept of a “city” through a narrow lens. They are compact, densely populated areas, the home of administrative and political elites, full of bustling trade and manufacturing, and fed by vast agricultural fields and animal herds that are often located at some distance from the city boundaries.

This view seems somewhat out of place in tropical forests, where sweeping fields of uniform crops, grazing animals and dense settlements can lead to drastic deforestation, soil erosion and eventually starvation and social disintegration. As a result, where such seemingly “compact” cities have been identified in the tropics – for example the Classic Maya of south-eastern Mexico, Guatemala, Belize, and western Honduras and El Salvador – scientists have tended to assume that these societies were too much for their tropical forest landscapes, leading to degradation, popular rebellion and ultimately abandonment.

Maya urban forms began to appear about 800BC. Cities, monumental stone architecture and writing all gradually emerged at certain key political centres, led by kings and fed by the North and Central American staple crops of maize, beans and squash. It was during the Classic period that things truly took off, especially in a region that has become known as the southern lowlands (including northern Guatemala, Belize and south-eastern Mexico). Between AD250 and 900, growing populations, more cities, more monuments and more inscriptions appeared. Major cities such as Tikal and Calakmul had populations of as many as 120,000 people.

Although many of these cities were located on soils particularly suited to productive maize agriculture, there was one problem. Rainfall was not consistent throughout the year, and the geology often made the capture and storage of precious water through the dry months challenging. Many scholars have argued that major droughts brought the system to its knees in the Terminal Classic period, between AD800 and 900 in the southern lowlands. In this view, the large centres and their political classes had over-reached, cutting down trees to build their monuments and planting their corn on poor soils. With their populations precariously sustained across a heavily altered landscape, there was nothing they could do when drought came. People lost faith in elites, construction stopped, famine ensued and the Classic population dispersed itself across the landscape.

So often goes the story of the Classic Maya. Amazingly, however, far from being compact, we now know that even in the most well-known of Maya centres, like Copán and Tikal, the population was relatively dispersed. Instead of having fields outside and politics inside, fields were located throughout the urban infrastructure and residences. And instead of a small focal point, cities spread over 100 sq km. Recent studies of Tikal have shown a network of moats, dwellings, reservoirs and pyramid clusters that extend out from a single hill for up to 200 sq km into the surrounding landscape.

Innovative aerial surveys have now made similar findings across the Maya world. In almost all instances, instead of isolated urban buds, scientists have found vast landscapes of small and large centres connected by dispersed agrarian landscapes, residential areas, causeways and a complex, interlinking system of dams, reservoirs, sinkholes, channels and swamps that supported growing populations through even the driest of seasons. As leading Mayanist Prof Lisa Lucero, of the University of Illinois, puts it, “the Classic Maya knew the importance of water and of fertile agricultural soils, the latter dispersed in variously sized pockets, mirrored by a dispersed agricultural settlement. This low-density approach to cities was a logical, innovative solution.”

The Classic Maya also had far more diverse and sophisticated economies than has often been appreciated. Alongside the key crops, archaeobotanists have shown that the planting of avocados, pineapples, sunflowers, tomatoes and manioc added to a dispersed settlement and lifestyle. The Classic Maya are also known to have penned, fed and fattened wild turkeys and deer for their key protein sources.

Scientists have found evidence that diverse “forest gardens” sustained these cities. Based on ethnographic study of, and testaments from, Maya communities today, this type of cultivation, called milpa (or kol in the local Yukatek language), involves the use of multiple crops, and the movement of fields, allowing different parts of the forest to grow back and patches of soil to rest and restock before planting begins in a locality again. We also know that instead of indiscriminately planting in soils of all types, the Classic Maya actually followed rich veins of particularly productive soils, giving their field systems a winding appearance that snaked along rivers and up slopes. They even added special plants, like water lilies, to reservoirs. These plants are incredibly sensitive to water quality, only growing under clean conditions, and allowed people to monitor the buildup of stagnant water and thus guard against disease. . .

Continue reading. There’s much more, though I have noticed that it seems relatively rare for one group to learn from another group’s experience; the feeling usually seems to be “Our group is special and therefore different.” Not Invented Here seems a universal reason for rejecting ideas. Exceptions do occur, but instances of such rejection are common.

Later in the article:

What they have found at Angkor is simply mind-boggling: an urban residential area of more than 1,000 sq km has emerged alongside a modified landscape of 3,000 sq km. This makes Greater Angkor the most extensive pre-industrial settlement complex anywhere on Earth, and larger even than cities such as Paris today. It also revolutionises our understanding of how this ancient megacity operated.

Written by Leisureguy

26 June 2021 at 12:08 pm

How radical gardeners took back New York City

leave a comment »

The video above appears in an interesting Open Culture column by Ayun Halliday.

Written by Leisureguy

25 June 2021 at 4:18 pm

Climate Change Batters the West Before Summer Even Begins

leave a comment »

And still there are people who deny that it’s happening and fight against efforts to combat it.  Brad Plumer, Jack Healy, Winston Choi-Schagrin, and Henry Fountain report in the NY Times:

A heat dome is baking Arizona and Nevada, where temperatures have soared past 115 degrees this week and doctors are warning that people can get third-degree burns from the sizzling asphalt.

At Lake Mead, which supplies water for 25 million people in three southwestern states and Mexico, water levels have plunged to their lowest point since the reservoir was filled in the 1930s. In California, farmers are abandoning their thirstiest crops to save others, and communities are debating whether to ration tap water.

In Texas, electricity grids are under strain as residents crank their air-conditioners, with utilities begging customers to turn off appliances to help avert blackouts. In Arizona, Montana and Utah, wildfires are blazing.

And it’s not even summer yet.

“We’re still a long way out from the peak of the wildfire season and the peak of the dry season,” said Daniel Swain, a climate scientist at the University of California, Los Angeles. “Things are likely to get worse before they get better.”

Global warming, driven by the burning of fossil fuels, has been heating up and drying out the American West for years. Now the region is broiling under a combination of a drought that is the worst in two decades and a record-breaking heat wave.

“The Southwest is getting hammered by climate change harder than almost any other part of the country, apart from perhaps coastal cities,” said Jonathan Overpeck, a climate scientist at the University of Michigan. “And as bad as it might seem today, this is about as good as it’s going to get if we don’t get global warming under control.”

With temperatures expected to keep rising as nations struggle to rein in their planet-warming emissions, the Western United States will need to take difficult and costly measures to adapt. That includes redesigning cities to endure punishing heat, conserving water, and engineering grids that don’t fail during extreme weather.

This month has offered glimpses of whether states and cities are up to that task and has shown they still have far to go.

From Montana to Southern California, much of the West is suffering from unusually high temperatures. Some 50 million Americans face heat-related warnings. Records have been tied or broken in places like Palm Springs, Salt Lake City, and Billings, Montana.

As 115-degree temperatures cooked Phoenix’s Roosevelt Row Arts District on Tuesday, Timothy Medina, 58, was perched on a black metal platform 12 feet above the sidewalk, finishing the blue lettering of a sign for a coffee shop. “It’s brutal — that heat against the wall,” he said. “Let me take a quick swig of water.”

Construction workers, landscapers and outdoor painters like Mr. Medina have few options but to bear the heat. He wore jeans to avoid burning his skin, along with a long-sleeved fluorescent yellow shirt and a $2 woven hat. But soon the heat was winning.

“I start feeling out of breath, fatigued,” he said.

Extreme heat is the clearest signal of global warming, and the most deadly. Last year heat killed at least 323 people in Maricopa County, which includes Phoenix, a record by far. . .

Continue reading. There is a lot more, and many photographs.

And from here on, it’s going to get worse. What we’re seeing now is mild compared to what’s coming. But inaction seems attractive to most. An article by Catherine Garcia in Yahoo News, “NASA: Earth is trapping ‘unprecedented’ amount of heat, warming ‘faster than expected’,” spells it out. From the article:

Since 2005, the amount of heat trapped by the Earth has roughly doubled, according to a new study by NASA and National Oceanic and Atmospheric Administration researchers.

This is contributing to warming oceans, air, and land, the scientists write in the study, published this week in the journal Geophysical Research Letters. “The magnitude of the increase is unprecedented,” NASA scientist and lead author of the study Norman Loeb told The Washington Post. “The Earth is warming faster than expected.”

Using satellite data, the researchers measured the planet’s energy imbalance, which is the difference between how much energy the planet absorbs from the sun and how much is radiated back into space. If there is a positive imbalance, the Earth is absorbing more heat than it is losing; in 2005, there was a positive imbalance of about half a watt per square meter of energy from the sun, and in 2019, the positive imbalance was one watt per square meter, the Post reports.

“It is a massive amount of energy,” NOAA oceanographer Gregory Johnson, a co-author of the study, told the Post, adding that this energy increase is equivalent to everyone on Earth using 20 electric tea kettles at the same time. The team needs to . . .
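That tea-kettle comparison checks out as rough arithmetic. Here is a back-of-envelope sketch of it in Python; note that the Earth-radius, population, and kettle-wattage values are my own assumed round numbers, not figures from the study:

```python
import math

# Earth's surface area from its mean radius (~6,371 km)
EARTH_RADIUS_M = 6.371e6
surface_area = 4 * math.pi * EARTH_RADIUS_M**2   # ~5.1e14 square meters

# Energy imbalance figures quoted in the article (watts per square meter)
imbalance_2005 = 0.5
imbalance_2019 = 1.0

# Additional heating across the whole planet between 2005 and 2019
extra_watts = (imbalance_2019 - imbalance_2005) * surface_area

# Tea-kettle equivalence, assuming ~7.7 billion people and a ~1.5 kW kettle
population = 7.7e9
kettle_watts = 1500.0
kettles_per_person = extra_watts / (population * kettle_watts)

print(f"Added heating: {extra_watts:.2e} W")
print(f"≈ {kettles_per_person:.0f} kettles per person")
```

With those assumptions the added heating comes out to roughly 2.6 × 10¹⁴ watts, or on the order of 20 kettles per person, which is consistent with the figure Johnson gives.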

Written by Leisureguy

17 June 2021 at 5:19 pm

The Lost Prophet of California Agriculture

leave a comment »

Charlie Siler has a well-illustrated and very interesting article in Craftsmanship magazine. He writes:

  1. Lessons of The Dust Bowl
  2. The Joys of Tinkering
  3. The Search For The Perfect Machine
  4. What Could Have Been

Al Ruozi, age 97, is a high-school dropout from Bakersfield, California, who made his living selling farm machinery that he designed and welded together, using handmade machinery that he built himself, in a building that he and his brother assembled. His primary invention, created in the 1950s, was a machine that gave cotton farmers a better way to clear their land. While little-known in the U.S., Ruozi’s invention has been emulated around the world, leading the way to a new generation of farm equipment that can save water, improve soil quality, and maybe even fight climate change.

“Al Ruozi was the inspiration for much of the innovation that happened over the next 30 years,” says Jeff Mitchell, a conservation specialist in the Department of Plant Sciences at UC Davis.

Bakersfield was a harsh place in the 1930s, when Ruozi quit school to help out on the family farm. The dust storms of the U.S. prairies had sent thousands of farmers west to California in search of jobs and land. The country was still recovering from the Great Depression, with unemployment improving only after 1933, when it peaked at 25 percent.

The Okies in the shantytowns of Bakersfield had to contend with hostile locals and inadequate sanitation that sometimes led to dysentery, impetigo or hookworm. In December 1936, concern about disease led a group of Bakersfield citizens to burn down an Okie slum that housed 1,500 people.


We now know that the seeds of the Dust Bowl were sown in the 1920s, when the virgin topsoil of the Great Plains was broken by gasoline-driven plows, destroying native grasses and ruining the ability of the land to hold itself together, and thus retain moisture. In the 1930s, the ecological payback of the disaster wasn’t as clear as it is today. Yet somehow, the lessons of the prairies’ raging dust storms were not completely lost on a teenaged Ruozi. He saw that land could be ruined, and he suspected that the plow was to blame.

He nursed his concerns as he worked behind a horse-drawn plow, tilling his family’s land by slashing into the compacted dirt with the very practice that had contributed to the plight of the refugees huddled in squalid camps a few miles away. “The idea hit me,” he recalled during a recent conversation in his Bakersfield office. “I thought, why is the ground so hard?” Ruozi resolved to find a way to make it more “pliable.”

He got his chance about a decade later, when he returned to Bakersfield following some time in welding school and a stint in the Army during World War II. In 1948, Ruozi and his brother Gilbert bought and assembled a Quonset hut, one of the semi-cylindrical pre-fabricated structures that were used by the U.S. military in World War II and sold as surplus to the public afterward.


Ruozi called the new company Interstate Equipment and Manufacturing Corp. He worked there with his torch, using his welding skills to make potato-tillage equipment for a nearby manufacturer. All the while, he kept tinkering. “I’d pick up an old machine here or there, any time there was a scrap machine, and see if I could make it work,” Ruozi says. “I started out that way. Sometimes it worked, sometimes it didn’t.”

By the early 1950s, he was making his own patented machine. He called it the . . .

Continue reading. There’s much more, including some good photos.

Written by Leisureguy

4 June 2021 at 10:46 am

The empty office: what we lose when we work from home

leave a comment »

This piece suggests science is a social enterprise. (Later in the article, there’s some interesting overlap with issues I discussed in this post.) Gillian Tett has an article in the Guardian that begins:

In the summer of 2020, Daniel Beunza, a voluble Spanish social scientist who taught at Cass business school in London, organised a stream of video calls with a dozen senior bankers in the US and Europe. Beunza wanted to know how they had run a trading desk while working from home. Did finance require flesh-and-blood humans?

Beunza had studied bank trading floors for two decades, and had noticed a paradox. Digital technologies had entered finance in the late 20th century, pushing markets into cyberspace and enabling most financial work to be done outside the office – in theory. “For $1,400 a month you can have the [Bloomberg] machine at home. You can have the best information, all the data at your disposal,” Beunza was told in 2000 by the head of one Wall Street trading desk, whom he called “Bob”. But the digital revolution had not caused banks’ offices and trading rooms to disappear. “The tendency is the reverse,” Bob said. “Banks are building bigger and bigger trading rooms.”

Why? Beunza had spent years watching financiers like Bob to find the answer. Now, during lockdown, many executives and HR departments found themselves dealing with the same issue: what is gained and what is lost when everyone is working from home? But while most finance companies focused on immediate questions such as whether employees working remotely would still have access to information, feel part of a team and be able to communicate with colleagues, Beunza thought more attention should be paid to different kinds of questions: how do people act as groups? How do they use rituals and symbols to forge a common worldview? To address practical concerns about the costs and benefits of remote working, we first need to understand these deeper issues.

Office workers make decisions not just by using models and manuals or rational, sequential logic – but by pulling in information, as groups, from multiple sources. That is why the rituals, symbols and space matter. “What we do in offices is not usually what people think we do,” Beunza told me. “It is about how we navigate the world.” And these navigation practices are poorly understood by participants like financiers – especially in a digital age.

The engineers who created the internet have always recognised that people and their rituals matter. Since it was founded in 1986, the Internet Engineering Task Force (IETF) has provided a place for people to meet and collectively design the architecture of the web. Its members wanted to make design decisions using “rough consensus”, since they believed the internet should be an egalitarian community where anybody could participate, without hierarchies or coercion. “We reject: kings, presidents and voting. We believe in: rough consensus and running code” was, and still is, one of its key mantras.

To cultivate “rough consensus”, IETF members devised a distinctive ritual: humming. When they needed to make a crucial decision, the group asked everyone to hum to indicate “yay” or “nay” – and proceeded on the basis of which was loudest. The engineers considered this less divisive than voting.

Some of the biggest decisions about how the internet works have been made using this ritual. In March 2018, in a bland room of the Hilton Metropole on London’s Edgware Road, representatives from Google, Intel, Amazon, Qualcomm and others were gathered for an IETF meeting. They were debating a controversial issue: whether or not to adopt the “draft-rhrd-tls-tls13-visibility-01” protocol. To anybody outside the room, it might sound like gobbledegook, but this protocol was important. Measures were being introduced to make it harder for hackers to attack crucial infrastructure such as utility networks, healthcare systems and retail groups. This was a mounting concern at the time – a year or so earlier, hackers seemingly from Russia had shut down the Ukrainian power system. The proposed “visibility” protocol would signal to internet users whether or not anti-hacking tools had been installed.

For an hour the engineers debated the protocol. Some opposed telling users the tools had been installed; others insisted on it. “There are privacy issues,” one said. “It’s about nation states,” another argued. “We cannot do this without consensus.” So a man named Sean Turner – who looked like a garden gnome, with a long, snowy-white beard, bald head, glasses and checked lumberjack shirt – invoked the IETF ritual.

“We are going to hum,” he said. “Please hum now if you support adoption.” A moan rose up, akin to a Tibetan chant, bouncing off the walls of the Metropole. “Thanks. Please hum now if you oppose.” There was a much louder collective hum. “So at this point there is no consensus to adopt this,” Turner declared. The protocol was put on ice.

Most people do not even know that the IETF exists, much less that computer engineers design the web by humming. That is not because the IETF hides its work. On the contrary, its meetings are open to anyone and posted online. But phrases like “draft-rhrd-tls-tls1.3” mean most people instinctively look away, just as they did with derivatives before the 2008 financial crisis. And, as with finance, this lack of external scrutiny – and understanding – is alarming, particularly given the accelerating effects of innovations such as AI. Many of the engineers who build the technologies on which we rely are well-meaning. But they – like financiers – are prone to tunnel vision, and often fail to see that others may not share their mentality. “In a community of technological producers, the very process of designing, crafting, manufacturing and maintaining technology acts as a template and makes technology itself the lens through which the world is seen and defined,” observes Jan English-Lueck, an anthropologist who has studied Silicon Valley.

When the IETF members use humming, they are reflecting and reinforcing a distinctive worldview – their desperate hope that the internet should remain egalitarian and inclusive. That is their creation myth. But they are also signalling that human contact and context matter deeply, even in a world of computing. Humming enables them to collectively demonstrate the power of that idea. It also helps them navigate the currents of shifting opinion in their tribe and make decisions by reading a range of signals.

Humming does not sit easily with the way we imagine technology, but it highlights a crucial truth about how humans navigate the world of work, in offices, online or anywhere else: even if we think we are rational, logical creatures, we make decisions in social groups by absorbing a wide range of signals. And perhaps the best way to understand this is . . .

Continue reading. There’s much more. (It’s a long read.)

Written by Leisureguy

3 June 2021 at 12:01 pm

The ‘Frog-Pond Effect’ Distorts Your Self-Image

leave a comment »

Markham Heid writes in Elemental:

For a 2012 study in PLOS One, researchers invited a young woman into a laboratory at Ohio University.

The woman learned that she would be taking part in an “aesthetic judgment” experiment. The researchers took a photograph of her face and then asked her to sit at a table that held two objects: a computer monitor and a mirror.

On the monitor, the woman viewed a series of headshots of what the study termed “attractive professional models” — all of them women. Following this barrage of beautiful faces, the woman’s own photograph appeared on the screen. But it wasn’t just a single photo; the woman saw 13 pictures of herself scattered across the monitor. Looking closely, she could see that each version of her face was different from all the others.

Using a specially designed photo-editing program, the researchers had taken the woman’s photograph and created “morphs” — copies manipulated to make the woman appear either more or less attractive. Along with her original headshot, the woman was now looking at eight photographs that airbrushed and otherwise enhanced her appearance — dramatically, in some cases — and four photographs that marred her looks.

With the mirror to guide her, the woman was instructed to pick out her true image from the false ones. Even though the complimentary headshots outnumbered the adulterated ones by a two-to-one margin, the woman selected one of the unflattering photographs as the most authentic representation of what she saw in the mirror.

The researchers repeated versions of this experiment with roughly 70 other men and women. Over and over again, the people who looked at lineups of beautiful faces tended to select self-portraits that had been manipulated to look less attractive.

On the other hand, when the researchers flipped the script and showed people unattractive faces, those people tended to view their own faces more favorably.

Since the inception of social-comparison theory in the early 1950s, psychologists and sociologists have piled up evidence that human beings form opinions of themselves — their looks, aptitude, intelligence, and achievement — based in large part on the qualities they see in the people with whom they identify and associate. When those comparisons lead to inaccurate self-representations or appraisals, this distortion is sometimes referred to as the “frog-pond effect.”

The phrase stems from a 1966 paper that found college students at elite universities who had low GPAs tended to view their own academic abilities less favorably than students at lower-tier colleges who had good GPAs. “It is better to be a big frog in a small pond than a small frog in a big pond,” the author of that paper wrote.

The “frog-pond effect” continues to show up in research today.

“We use the term ‘frog-pond effect’ as shorthand for this tendency of people with a high rank in a low-rank group to evaluate themselves more favorably than people with a low rank in a high-rank group,” says Ethan Zell, PhD, author of the PLOS One study and an associate professor of psychology at the University of North Carolina, Greensboro.

In other words, we look at other people as a frame of reference when we are evaluating ourselves. While “upward” social comparisons can make us feel less attractive, less capable, or otherwise inferior, “downward” social comparisons tend to have just the opposite effect. “These effects are magnified when we’re comparing ourselves to people we view as peers, or those in a similar situation to ourselves,” Zell explains.

It’s difficult to overstate the role that social comparison plays in our lives, perhaps especially when we’re young or vulnerable.

Research has found that all of us engage in social comparisons — consciously or unconsciously — dozens and perhaps hundreds of times each day. To one extent or another, these comparisons influence every facet of our well-being and behavior, from our confidence and self-regard, to our willingness to take risks, to the likelihood that we’ll develop anxiety or depression.

Upward social comparison is not inherently harmful. “It can inspire us to take better care of ourselves, or to be ambitious in positive ways,” Zell says. But like anything else, too much of it can cause problems.

Unfortunately, modern life may overwhelm us with comparisons that distort our self-image and so threaten our well-being.

In the context of social media, the “highlight reel” effect describes people’s tendency to put only their best, most-flattering selves online. There’s mounting evidence, especially among young people, that the more time we spend looking at these glamorized depictions of others, the more the frog-pond effect and upward social comparisons do a number on our egos.

“Social media and technology have really expanded the reach of comparisons,” Zell says. Like the young woman in his study whose self-assessment took a hit after she viewed beautiful faces, many of us can’t help but feel inferior when we’re exposed to image after image or post after post of people who seem cooler, more interesting, funnier, prettier, or more stylish than we are.

“When we’re surrounded by people we view as somehow better than us — even if objectively we’re above average — that can be really demoralizing or deflating,” Zell explains.

Social media influencers may be especially damaging to our self-appraisals. We tend to view these people as peers, rather than what they really are — minor celebrities who are often paid handsomely to project a certain image or lifestyle. We may hear about their bad days or insecurities, but the overall message our brain is receiving is “this is a better version of me.”

Before social media, most of our comparisons were based on face-to-face interactions with friends, schoolmates, co-workers, and those who occupied our real-world social spheres. We saw the good and the bad — the features and the flaws — in something closer to equal measure. And this helped properly calibrate our self-assessments.

There’s also evidence that, in offline contexts, we frequently downplay our shiniest attributes. Research has found that being the target of an upward social comparison is unpleasant for us, and so we tend to shift our behavior in an effort to better mesh with our peers. “We can sense when other people feel bad because we’re better off than them and we adjust, but that doesn’t seem to be the case on social media,” Zell explains.

All of this suggests that . . .

Continue reading.

Written by Leisureguy

3 June 2021 at 11:52 am
