Later On

A blog written for those whose interests more or less match mine.

Archive for the ‘Memes’ Category

Why targets of deliberate deception often hesitate to admit they’ve been deceived.

Brooke Harrington, a sociology professor at Dartmouth College and author of Pop Finance and Capital Without Borders: Wealth Management and the One Percent (see brookeharrington.com), writes in the Atlantic:

Something very strange has been happening in Missouri: A hospital in the state, Ozarks Healthcare, had to create a “private setting” for patients afraid of being seen getting vaccinated against COVID-19. In a video produced by the hospital, the physician Priscilla Frase says, “Several people come in to get vaccinated who have tried to sort of disguise their appearance and even went so far as to say, ‘Please, please, please don’t let anybody know that I got this vaccine.’” Although they want to protect themselves from the coronavirus and its variants, these patients are desperate to ensure that their vaccine-skeptical friends and family never find out what they have done.

Missouri is suffering one of the worst COVID-19 surges in the country. Some hospitals are rapidly running out of ICU beds. To Americans who rushed to get vaccinated at the earliest opportunity, some Missourians’ desire for secrecy is difficult to understand. It’s also difficult to square with the common narrative that vaccine refusal, at least in conservative areas of the country, is driven by a lack of respect or empathy from liberals along the coasts. “Proponents of the vaccine are unwilling or unable to understand the thinking of vaccine skeptics—or even admit that skeptics may be thinking at all,” lamented a recent article in the conservative National Review. Writers across the political spectrum have urged deference and sympathy toward holdouts’ concerns about vaccine side effects and the botched CDC messaging about masking and airborne transmission early in the pandemic. But these takes can’t explain why holdouts who receive respect, empathy, and information directly from reliable sources remain unmoved—or why some people are afraid to tell their loved ones about being vaccinated.

What is going on here? Sociology suggests that pundits and policy makers have been looking at vaccine refusal all wrong: It’s not an individual problem, but a social one. That’s why individual information outreach and individual incentives—such as Ohio’s Vax-a-Million program, intended to increase vaccine uptake with cash prizes and college scholarships—haven’t worked. Pandemics, by definition, are collective problems. They propagate and kill because people live in communities. As a result, addressing pandemics requires understanding interpersonal dynamics—not just what promotes trust among people, but which behaviors convey status or lead to ostracism.

Shifting from an individual to a relational perspective helps us understand why people are seeking vaccination in disguise. They want to save face within the very specific set of social ties that sociologists call “reference groups”—the neighborhoods, churches, workplaces, and friendship networks that help people obtain the income, information, companionship, mutual aid, and other resources they need to live. The price of access to those resources is conformity to group norms. That’s why nobody strives for the good opinion of everyone; most people primarily seek the approval of people in their own reference groups.

In Missouri and other red states, vaccine refusal on partisan grounds has become a defining marker of community affiliation. Acceptance within some circles is contingent on refusal to cooperate with the Biden administration’s public-health campaign. Getting vaccinated is a betrayal of that group norm, and those who get the shot can legitimately fear losing their job or incurring the wrath of their families and other reference groups.

Sociology solves mysteries like these by zeroing in on problematic relationships, not the decisions that individuals make in isolation. Many of the people refusing safe, effective vaccination amid a deadly pandemic are enmeshed in a very distinctive type of relationship that sociologists have been studying for more than 70 years: the con job. Con artists gain social or financial advantage by convincing their marks to believe highly dubious claims—and to block out all information to the contrary.

COVID-19-related cons have become big business, not just for right-wing media outlets that have gained viewers while purveying vaccine disinformation but also for small-time social-media grifters and enterprising professionals. The New York Times recently profiled Joseph Mercola, a Florida osteopath whom the paper described as “The Most Influential Spreader of Coronavirus Misinformation.” Four years ago, the Federal Trade Commission forced Mercola to pay nearly $3 million in settlements for false advertising claims about indoor tanning beds that he had sold. In February of this year, Mercola told his millions of followers on Facebook that the vaccine would “alter your genetic coding,” and promoted his line of vitamin supplements as an alternative to ward off COVID-19.

To outsiders, the social dynamics of the con appear peculiar and irrational. Those caught up in it can seem self-destructive and, frankly, clueless. But to sociologists, including me, who study fraud, such behaviors obey a predictable logic.

The seminal text in the field—Erving Goffman’s 1952 essay “On Cooling the Mark Out”—observes that all targets of con artists eventually come to understand that they have been defrauded, yet they almost never complain or report the crime to authorities. Why? Because, Goffman argues, . . .

Continue reading. There’s more, and it’s illuminating.

Written by Leisureguy

2 August 2021 at 12:51 pm

Reading John Gray in war

Andy Owen, author of All Soldiers Run Away: Alano’s War: The Story of a British Deserter (2017) and a former soldier who writes on the ethics and philosophy of war, has an interesting essay in Aeon:

‘All of humanity’s problems stem from man’s inability to sit quietly in a room alone.’
Blaise Pascal (1623-62)

I first read the English philosopher John Gray while sitting in the silence of the still, mid-afternoon heat of Helmand Province in Afghanistan. In Black Mass: Apocalyptic Religion and the Death of Utopia (2007), Gray showed how the United States’ president George W Bush and the United Kingdom’s prime minister Tony Blair framed the ‘war on terror’ (which I was part of) as an apocalyptic struggle that would forge the new American century of liberal democracy, where personal freedom and free markets were the end goals of human progress. Speaking at the Sydney Writers’ Festival in 2008, Gray highlighted an important caveat to the phrase ‘You can’t have an omelette without breaking eggs,’ which is sometimes used, callously, to justify extreme means to high-value ends. Gray’s caveat was: ‘You can break millions of eggs and still not have a single omelette.’ In my two previous tours of Iraq, I had seen first-hand – as sectarian hatred, insurgency, war fighting, targeted killings and the euphemistically named collateral damage tore apart buildings, bodies, communities and the shallow fabric of the state – just how many eggs had been broken and yet still how far away from the omelette we were.

There was no doubt that Iraq’s underexploited oil reserves were part of the US strategic decision-making, and that the initial mission in Afghanistan was in response to the terrorist attacks of 11 September 2001 on the US, but both invasions had ideological motivations too. I had started the process to join the British military before 9/11. The military I thought I was joining was the one that had successfully completed humanitarian interventions in the Balkans and Sierra Leone. I believed we could use force for good, and indeed had a duty to do so. After the failure to prevent genocides in Rwanda and Srebrenica, the concept of the ‘responsibility to protect’ was developing, which included the idea that when a state was ‘unable or unwilling’ to protect its people, responsibility shifted to the international community and, as a last resort, military intervention would be permissible. It would be endorsed by all member states of the United Nations (UN) in 2005 but, under the framework, the authority to employ the last resort rested with the UN Security Council, who hadn’t endorsed the invasion of Iraq.

Despite the lack of a UN resolution, many of us who deployed to Iraq naively thought we were doing the right thing. When Lieutenant Colonel Tim Collins delivered his eve-of-battle speech to the Royal Irish Battle Group in March 2003, he opened by stating: ‘We go to liberate, not to conquer.’ We had convinced ourselves that, as well as making the region safer by seizing the Iraqi president Saddam Hussein’s weapons of mass destruction (WMD), we were there to save the people of Iraq from their own government and replace it with the single best way of organising all societies: liberal democracy. This feeling was so persuasive that it led to many troops feeling that the Iraqis were somehow ungrateful when they started to shoot at us for invading their country.

By my second tour of Iraq in 2005, it was clear that no WMD would be found and the society that was evolving was far from the one envisaged. Morale was at a low ebb as the gap between the mission and what we were achieving widened. We were stuck in a Catch-22. We would hand over to local security forces when the security situation improved enough for us to do so. However, the security situation couldn’t improve while we were still there. It would improve only if we left. The conditions that would allow us to leave were us already having left. Most troops were stuck inside the wire, their only purpose seemingly to be mortared or rocketed for being there. I was asked why we were there, especially when soldiers witnessed their friends being injured or killed, or saw the destruction of the city we’d come to liberate. They needed meaning, it couldn’t all be pointless. Meaning was found in protecting each other. My team of 30 or so men and women found purpose in trying to collect intelligence on those planting deadly improvised explosive devices along the main routes in and out of the city. Members of both the team before and the team after us were blown up trying to do so.

Much of the criticism levelled at the post-invasion failure focused on the mistake of disbanding the Iraqi state, the lack of post-conflict planning and the lack of resources. There was less focus on the utopian aims of the whole project. But it was only through Gray that I saw the similarities between the doctrines of Stalinism, Nazi fascism, Al-Qaeda’s paradoxical medieval, technophile fundamentalism, and Bush’s ‘war on terror’. Gray showed that they are all various forms (however incompatible) of utopian thinking that have at their heart the teleological notion of progress from unenlightened times to a future utopia, and a belief that violence is justified to achieve it (indeed, from the Jacobins onwards, violence has had a pedagogical function in this process). At first, I baulked at the suggested equivalence with the foot soldiers of the other ideologies. There were clearly profound differences! But through Gray’s examples, I went on to reflect on how much violence had been inflicted throughout history by those thinking that they were doing the right thing and doing it for the greater good.

A message repeated throughout Gray’s work is that, despite the irrefutable material gains, this notion is misguided: scientific knowledge and the technologies at our disposal increase over time, but there’s no reason to think that morality or culture will also progress, nor – if it does progress for a period – that this progress is irreversible. To think otherwise is to misunderstand the flawed nature of our equally creative and destructive species and the cyclical nature of history. Those I spoke to in Basra needed no convincing that the advance of rational enlightened thought was reversible, as the Shia militias roamed the streets enforcing their interpretation of medieval law, harassing women, attacking students and assassinating political opponents. By the time bodies of journalists who spoke out against the death squads started turning up at the side of the road, Basra’s secular society was consigned to history. Gray points to the re-introduction of torture by the world’s premier liberal democracy during the war on terror as an example of the reversibility of progress. The irreversibility idea emerged directly from a utopian style of thinking that’s based on the notion that the end justifies the means. Such thinking is often accompanied by one of the defining characteristics of the Iraq and Afghanistan campaigns: hubris.

The myth of progress was a key theme of Gray’s . . .

Continue reading.

Written by Leisureguy

31 July 2021 at 8:46 pm

How Do You Convince People to Eat Less Meat?

Jan Dutkiewicz has an interesting article in The New Republic with the subheading:

A recent fracas in Spain shows that simply telling people to reduce meat consumption in the name of climate and personal health won’t work.

I’m not a good source on this: I dropped meat, dairy, and eggs (with rare exceptions) precisely for reasons of health, and of course it was difficult at first — as I write in my (lengthy and detailed) post about my diet:

It takes a few weeks to get the hang of a new approach to food when you change your diet, so I would recommend you stick with this approach for two months and then take stock, evaluating it in the light of your own experience. Changing your diet is difficult because it requires revising patterns of eating that you have learned so well you use them unconsciously. Just as you don’t have to think much to get around your own town or neighborhood, the diet you already know is easy because it’s based on established dishes and established routines.

And just as moving to a new city requires a lot of work and attention at first just to find your way around, moving to a new way of eating requires thought and attention to figure out a new repertoire of “standard” dishes and meals. But over time, both become easy once again as new patterns are figured out, learned, and become familiar, and easy routines again emerge.

You gain the new knowledge and regain the old comfort more readily if you have the mindset and attitude of a new permanent resident rather than a visitor, because as a permanent resident you’re more motivated to explore and discover what all it has to offer beyond the obvious tourist attractions (or obvious recipes). See this post: “Finding pleasure in the learning of new skills.”

So my method would be to focus on the health benefits plus the interesting new foods — the new dishes, tastes, and textures — that a whole-food plant-based diet offers. But here’s the article:

In early July, Spain’s minister of consumer affairs, Alberto Garzón, posted a short video on Twitter urging Spaniards to decrease their meat consumption. From a political communication perspective, it was flawless. He listed the many ways large-scale meat production and consumption harm humans, the environment, and animals, all backed by peer-reviewed science. He focused on reducing meat intake, not eliminating it—he praised nonindustrial livestock systems and family barbecues. He acknowledged that changing diets is hard for those without access to cheap, accessible, and diverse food choices. He explained that the government would launch food education campaigns and implement regulations to incentivize more sustainable diets. He even added a hashtag: #MenosCarneMasVida (Less Meat More Life).

Spanish politics exploded. While Garzón’s nuanced, well-researched message received some support (the number of Spaniards who claim to want to reduce their meat consumption is rising), several fellow politicians turned to juvenile trolling. Prime Minister Pedro Sanchez, of Spain’s socialist party, gushed about his love of the chuletón steak at a press conference, and Teodoro García Egea of the right-wing People’s Party tweeted out a picture of a grill packed with slabs of meat with the caption, “To your health.”

The affair brilliantly displayed the fraught politics of dietary change. The average Western diet—prevalent in Spain, just as it is in the United States and the United Kingdom—is high in meat, fat, and sugar, its production and consumption an environmental and public health disaster. This has been true for decades. But in the past few years, a growing chorus of voices have begun to call for major dietary changes in the interest of human and planetary health. The EAT-Lancet report published in February 2019 called for a global shift to a primarily plant-based diet if we are to keep agricultural production within planetary limits. The problem, however, is that actually changing what people eat is extremely difficult. Who should drive this change: individuals, governments, or corporations? Can a balance be struck between consumer freedom and regulation? And how can rational policymaking be squared with food’s significant cultural, nationalist, and personal meaning?

Beef is where this kind of discussion usually starts because it’s where the scientific consensus is particularly strong. The world’s one billion cows contribute about 6 percent of all greenhouse gases through their methane-rich burps, require vast amounts of grazing land, and are often fattened for slaughter on industrial feedlots where they are fed a diet of monocrops like corn and soy, whose planting in turn contributes to widespread deforestation and pesticide use. Overconsumption of red meat has also been linked to a range of health issues.

Steaks, in other words, are the SUVs of meat: expensive, unnecessary, environmentally noxious status symbols that do far more harm than good. There’s a good case for eliminating beef consumption entirely, and drastically reducing it ought to be a no-brainer: The EAT-Lancet model diet, for instance, suggests limiting beef to 98 grams per week (and all meat to under 500 grams). That amounts to a 60 percent decrease, relative to a Spaniard’s average diet, and a massive 86 percent decrease in the USA.

The traditional way for NGOs, companies, and governments to approach dietary change is through information campaigns and so-called nudges that don’t impinge on individual choice or risk regulatory and legislative battles. They’re nonintrusive ways of suggesting more healthy or ethical choices to consumers—like releasing EAT-Lancet recommendations or national dietary guidelines, slapping “fair trade” labels on coffee or “humanely raised” labels on meat. It can also mean deciding not to promote a product, as the food website Epicurious did when it vowed to stop running beef recipes for many of the reasons mentioned by Garzón.

The problem with these interventions is that they are not all that effective. While consumers may claim they want to make more informed or sustainable decisions, they tend to default to their usual habits in the supermarket aisles. And information doesn’t necessarily shift behavior; it may even have the opposite effect. Psychologists argue that when consumers face the “meat paradox” of eating meat while being opposed to the harms caused by it, they will often create justificatory narratives and rationalizations that deny harm or personal responsibility rather than actually halting meat consumption.

These mild, less effective policy efforts also tend to be attacked by critics as if they were actually reducing consumer choice. EAT-Lancet was met with a coordinated online countercampaign under the hashtag #yes2meat. Epicurious was lambasted by pro-beef critics, including foodies and food writers, in the wake of its decision. When the United Nations tried to call for meat reduction to mitigate climate change, it too was brutally critiqued, including by pro-meat climate scholars.

Changing the scope and availability of choices in any given situation may be more productive. This is called changing “choice architecture,” and there’s good evidence for its efficacy. For instance, removing beef jerky from among the impulse-buy items in a checkout line disincentivizes jerky purchases just by moving them out of sight and out of mind. Major opportunities for choice-architecture manipulation exist in supermarkets and restaurants, which could commit to selling less beef, promoting more healthful options, or replacing meat with alternative proteins, as a growing number of fast-food joints are doing.

These changes can have an even bigger impact in institutional spaces like schools that have large provisioning budgets and feed large numbers of people; such changes can both shift individuals’ habits and influence the economics of food distribution. Studies have shown that simply increasing the number of vegetarian options or making plant-based meals the default instead of meat massively increases more sustainable eating. And shifting food patterns in schools can build the next generation of more sustainable eaters.

But there’s stiff opposition here, too. When schools in Lyon, France, moved to make lunches plant-forward (albeit with fish and egg and dairy options available), farmers stormed the city in protest and the French minister of agriculture clamored against anti-meat “ideology.” In the U.S., Joni Ernst, the infamously meat-industry-friendly senator from Iowa whose campaign advertising included boasts about pig castration, has introduced an act to preemptively preclude federal institutions from engaging in nudges like “Meatless Monday.”

That brings us to state intervention. Government has tremendous power to address collective action problems through incentives, regulations, and taxation. In the world of public health, these interventions are ranked on a scale called the Nuffield Ladder, with gentle nudges at the bottom and outright bans at the top. One of the most commonly used tools is taxation. In particular, governments can implement what are known as Pigouvian taxes on things like sugary drinks, tobacco, or polluting factories—the idea is to force producers to cover the cost of the harms their products do. They can also slap so-called “sin taxes” on products to increase direct costs for consumers. These taxes work. Numerous studies show that these are very effective in decreasing consumption, leading groups like the World Health Organization to strongly support them. The academic case for such taxes on meat is robust and convincing. But taxes in general are massively politically unpopular and lead to accusations of a nanny state interfering in consumers’ free choice, as the battles over sugar taxes around the world have shown.

On July 15, the U.K. released its Food Strategy, a well-researched document urging a reshaping of the British food system in the interest of health and sustainability. It called for reductions in sugar, salt, and meat. But the authors only suggested a tax on sugar and salt, shying away from a “politically impossible” meat tax. Instead, they recommended plant-forward dietary nudges and subsidies for the development of alternative proteins.

It’s a good illustration of the way policymakers often self-edit when it comes to such a fraught topic. The problem is that, while this approach is politically pragmatic, it is naïve to expect that clinging to the lower rungs of the Nuffield Ladder can lead to even the Food Strategy’s suggested 30 percent reduction in meat consumption, let alone the EAT-Lancet standard.

But the problem isn’t only that policymakers are wary of inviting pro-meat backlash. It’s also that virtually all governments subsidize and promote meat production and consumption. The EU, despite its Green Deal commitment to carbon neutrality by 2050, has spent millions of Euros on a “Beefatarian” advertising campaign, and both Europe and the USA support animal agriculture through extensive subsidies and supports. Changing this dynamic—a status quo in which politicians pick up points by slamming vegetarians while support for the meat industry is baked into countless national budgets—will require a multifaceted approach.

Incentivizing the production of alternatives in addition to, or ideally instead of, harmful products like beef, as the U.K. Food Strategy does with its support for alternative proteins, is one good option. But such support should include not only plant-based or cell-based “meat alternatives” but also plants as alternatives to meat. A recent study published in Global Food Security, for instance, shows that humble legumes, with the right government push, could provide a far more sustainable and diverse source of protein than meat. Creating opportunities for food access is also . . .

Continue reading. There’s more.

And note this interesting map of land-use in the UK from the recently published National Food Strategy, which can be downloaded at the link and which has some very interesting information (for those who make decisions in that way):

Written by Leisureguy

28 July 2021 at 3:24 pm

Our democracy is under attack. Washington journalists must stop covering it like politics as usual.

Margaret Sullivan, one-time Public Editor for the NY Times and now a columnist for the Washington Post, has a good piece today:

Back in the dark ages of 2012, two think-tank scholars, Norman Ornstein and Thomas Mann, wrote a book titled “It’s Even Worse Than It Looks” about the rise of Republican Party extremism and its dire effect on American democracy.

In a related op-ed piece, these writers made a damning statement about Washington press coverage, which treats the two parties as roughly equal and everything they do as deserving of similar coverage.

Ornstein and Mann didn’t use the now-in-vogue terms “both-sidesism” or “false equivalence,” but they laid out the problem with devastating clarity (the italics are mine):

“We understand the values of mainstream journalists, including the effort to report both sides of a story. But a balanced treatment of an unbalanced phenomenon distorts reality. If the political dynamics of Washington are unlikely to change any time soon, at least we should change the way that reality is portrayed to the public.”

Positive proof was in the recent coverage of congressional efforts to investigate the Jan. 6 insurrection at the Capitol.

The Democratic leadership has been trying to assemble a bipartisan panel that would study that mob attack on our democracy and make sure it is never repeated. Republican leaders, meanwhile, have been trying to undermine the investigation, cynically requesting that two congressmen who backed efforts to invalidate the election be allowed to join the commission, then boycotting it entirely. And the media has played straight into Republicans’ hands, seemingly incapable of framing this as anything but base political drama.

“ ‘What You’re Doing Is Unprecedented’: McCarthy-Pelosi Feud Boils Over,” read a CNN headline this week. “After a whiplash week of power plays . . . tensions are at an all-time high.”

Is it really a “feud” when Republican Minority Leader Kevin McCarthy performatively blames Democratic House Speaker Nancy Pelosi for refusing to seat Republicans Jim Jordan and Jim Banks — two sycophantic allies of Trump, who called the Jan. 6 mob to gather?

One writer at Politico called Pelosi’s decision a “gift to McCarthy.” And its Playbook tut-tutted the decision as handing Republicans “a legitimate grievance,” thus dooming the holy notion of bipartisanship.

“Both parties have attacked the other as insincere and uninterested in conducting a fair-minded examination,” a Washington Post news story observed. (“Can it really be lost on the Post that the Republican party has acted in bad faith at every turn to undermine every attempt to investigate the events of Jan. 6?” a reader complained to me.)

The bankruptcy of this sort of coverage was exposed on Tuesday morning, when the Jan. 6 commission kicked off with somber, powerful, pointedly nonpolitical testimony from four police officers who were attacked during the insurrection. Two Republicans, Liz Cheney and Adam Kinzinger, even defied McCarthy’s boycott to ensure their party would be sanely represented.

This strain of news coverage, observed Jon Allsop in Columbia Journalism Review, centers on twinned, dubious implications: “That bipartisanship is desirable and that Democrats bear responsibility for upholding it — even in the face of explicit Republican obstructionism.”

This stance comes across as both cynical (“politics was ever thus”) and unsophisticated (“we’re just doing our job of reporting what was said”). Quite a feat.

Mainstream journalists want their work to be perceived as fair-minded and nonpartisan. They want to defend themselves against charges of bias. So they equalize the unequal. This practice seems so ingrained as to be unresolvable.

There is a way out. But it requires the leadership of news organizations to radically reframe the mission of their Washington coverage. As a possible starting point, I’ll offer these recommendations:

  • Toss out the insidious “inside-politics” frame and replace it with a “pro-democracy” frame.
  • Stop calling the reporters who cover this stuff “political reporters.” Start calling them “government reporters.”

  • Stop asking who the winners and losers were in the latest skirmish. Start asking who is serving the democracy and who is undermining it.

  • Stop being “savvy” and start being patriotic.

In a year-end piece for Nieman Lab, Andrew Donohue, managing editor of the Center for Investigative Reporting’s Reveal, called for news organizations to put reporters on a new-style “democracy beat” to focus on voting suppression and redistricting. “These reporters won’t see their work in terms of politics or parties, but instead through the lens of honesty, fairness, and transparency,” he wrote.

I’d make it more sweeping. The democracy beat shouldn’t be some kind of specialized innovation, but a widespread rethinking across the mainstream media.

Making this happen will call for something that Big Journalism is notoriously bad at: An open-minded, nondefensive recognition of what’s gone wrong.

Top editors, Sunday talk-show moderators and other news executives should pull together their brain trusts to grapple with this. And they should be transparent with the public about what they’re doing and why.

As a model, they might have to swallow their big-media pride and look to places like Harrisburg, Pa., public radio station WITF, which has admirably explained to its audience why it continually offers reminders about the actions of those public officials who tried to overturn the 2020 election results. Or to Cleveland Plain Dealer editor Chris Quinn’s letter to readers about how the paper and its website, Cleveland.com, refuse to cover every reckless, attention-getting lie of Republican Josh Mandel as he runs for the U.S. Senate next year. . .

Continue reading. There’s more.

Written by Leisureguy

28 July 2021 at 1:34 pm

The Real Source of America’s Rising Rage

Kevin Drum has a good article in Mother Jones that begins:

Americans sure are angry these days. Everyone says so, so it must be true.

But who or what are we angry at? Pandemic stresses aside, I’d bet you’re not especially angry at your family. Or your friends. Or your priest or your plumber or your postal carrier. Or even your boss.

Unless, of course, the conversation turns to politics. That’s when we start shouting at each other. We are way, way angrier about politics than we used to be, something confirmed by both common experience and formal research.

When did this all start? Here are a few data points to consider. From 1994 to 2000, according to the Pew Research Center, only 16 percent of Democrats held a “very unfavorable” view of Republicans, but then these feelings started to climb. Between 2000 and 2014 the share rose to 38 percent, and by 2021 it was about 52 percent. And the same is true in reverse for Republicans: The share who intensely dislike Democrats went from 17 percent to 43 percent to about 52 percent.

Likewise, in 1958 Gallup asked people if they’d prefer their daughter marry a Democrat or a Republican. Only 28 percent cared one way or the other. But when Lynn Vavreck, a political science professor at UCLA, asked a similar question a few years ago, 55 percent were opposed to the idea of their children marrying outside their party.

Or consider the right track/wrong track poll, every pundit’s favorite. Normally this hovers around 40–50 percent of the country who think we’re on the right track, with variations depending on how the economy is doing. But shortly after recovering from the 2000 recession, this changed, plunging to 20–30 percent over the next decade and then staying there.

Finally, academic research confirms what these polls tell us. Last year a team of researchers published an international study that estimated what’s called “affective polarization,” or the way we feel about the opposite political party. In 1978, we rated people who belonged to our party 27 points higher than people who belonged to the other party. That stayed roughly the same for the next two decades, but then began to spike in the year 2000. By 2016 it had gone up to 46 points—by far the highest of any of the countries surveyed—and that’s before everything that has enraged us for the last four years.

What’s the reason for this? There’s no shortage of speculation. Political scientists talk about the fragility of presidential systems. Sociologists explicate the culture wars. Historians note the widening divide between the parties after white Southerners abandoned the Democratic Party following the civil rights era. Reporters will regale you with stories about the impact of Rush Limbaugh and Newt Gingrich.

There’s truth in all of these, but even taken together they are unlikely to explain the underlying problem. Some aren’t new (presidential systems, culture wars) while others are symptoms more than causes (the Southern Strategy).

I’ve been spending considerable time digging into the source of our collective rage, and the answer to this question is trickier than most people think. For starters, any good answer has to fit the timeline of when our national temper tantrum began—roughly around the year 2000. The answer also has to be true: That is, it needs to be a genuine change from past behavior—maybe an inflection point or a sudden acceleration. Once you put those two things together, the number of candidates plummets.

But I believe there is an answer. I’ll get to that, but first we need to investigate a few of the most popular—but ultimately unsatisfying—theories currently in circulation.

Theory #1: Americans Have Gone Crazy With Conspiracy Theories

It’s probably illegal to talk about the American taste for conspiracy theorizing without quoting from Richard Hofstadter’s famous essay, “The Paranoid Style in American Politics.” It was written in 1964, but this passage (from the book version) about the typical conspiracy monger should ring a bell for the modern reader:

He does not see social conflict as something to be mediated and compromised, in the manner of the working politician. Since what is at stake is always a conflict between absolute good and absolute evil, the quality needed is not a willingness to compromise but the will to fight things out to a finish. Nothing but complete victory will do.

Or how about this passage from Daniel Bell’s “The Dispossessed”? It was written in 1962:

The politics of the radical right is the politics of frustration—the sour impotence of those who find themselves unable to understand, let alone command, the complex mass society that is the polity today…Insofar as there is no real left to counterpoise to the right, the liberal has become the psychological target of that frustration.

In other words, the extreme right lives to own the libs. And it’s no coincidence that both Hofstadter and Bell wrote about this in the early ’60s: That was about the time that the John Birch Society was gaining notoriety and the Republican Party nominated Barry Goldwater for president. But as Hofstadter in particular makes clear, a fondness for conspiracy theories has pervaded American culture from the very beginning. Historian Bernard Bailyn upended revolutionary-era history and won a Pulitzer Prize in 1968 for his argument that belief in a worldwide British conspiracy against liberty “lay at the heart of the Revolutionary movement”—an argument given almost Trumpian form by Sam Adams, who proclaimed that the British empire literally wanted to enslave white Americans. Conspiracy theories that followed targeted the Bavarian Illuminati, the Masons, Catholics, East Coast bankers, a global Jewish cabal, and so on.

But because it helps illuminate what we face now, let’s unpack the very first big conspiracy theory of the modern right, which began within weeks of the end of World War II.

In 1945 FDR met with Joseph Stalin and Winston Churchill at Yalta with the aim of gaining agreement about the formation of the United Nations and free elections in Europe. In this he succeeded: Stalin agreed to everything FDR proposed. When FDR returned home he gave a speech to Congress about the meeting, and it was generally well received. A month later he died.

Needless to say, Stalin failed to observe most of the agreements he had signed. He never had any intention of allowing “free and fair” elections in Eastern Europe, which he wanted as a buffer zone against any future military incursion from Western Europe. The United States did nothing about this, to the disgust of many conservatives. However, this was not due to any special gutlessness on the part of Harry Truman or anyone in the Army. It was because the Soviet army occupied Eastern Europe when hostilities ended and there was no way to dislodge it short of total war, something the American public had no appetite for.

And there things might have stood. Scholars could have argued for years about whether FDR was naive about Stalin, or whether there was more the US and its allies could have done to push Soviet troops out of Europe. Books would have been written and dissertations defended, but not much more. So far we have no conspiracy theory, just some normal partisan disagreement.

But then came 1948. Thomas Dewey lost the presidency to Harry Truman and Republicans lost control of the House. Soon thereafter the Soviet Union demonstrated an atomic bomb and communists overran China. It was at this point that a normal disagreement turned into a conspiracy theory. The extreme right began suggesting that FDR had deliberately turned over Eastern Europe to Stalin and that the US delegation at Yalta had been rife with Soviet spies. Almost immediately Joe McCarthy was warning that the entire US government was infiltrated by communists at the highest levels. J. Robert Oppenheimer, the architect of the Manhattan Project, was surely a communist. George Marshall, the hero of World War II, was part of “a conspiracy on a scale so immense as to dwarf any previous such venture in the history of man.”

Like most good conspiracy theories, there was a kernel of truth here. Stalin really did take over Eastern Europe. Alger Hiss, part of the Yalta delegation, really did turn out to be a Soviet mole. Klaus Fuchs and others really did pass along atomic secrets to the Soviets. Never mind that Stalin couldn’t have been stopped; never mind that Hiss was a junior diplomat who played no role in the Yalta agreements; never mind that Fuchs may have passed along secrets the Soviets already knew. It was enough to power a widespread belief in McCarthy’s claim of the biggest conspiracy in all of human history.

There’s no polling data from back then, but belief in this conspiracy became a right-wing mainstay for years—arguably the wellspring of conservative conspiracy theories for decades. Notably, it caught on during a time of conservative loss and liberal ascendancy. This is a pattern we’ve seen over and over since World War II. The John Birch Society and the JFK assassination conspiracies gained ground after enormous Democratic congressional victories in 1958 and again in 1964. The full panoply of Clinton conspiracies blossomed after Democrats won united control of government in the 1992 election. Benghazi was a reaction to Barack Obama—not just a Democratic win, but the first Black man to be elected president. And today’s conspiracy theories about stealing the presidential election are a response to Joe Biden’s victory in 2020.

How widespread are these kinds of beliefs? And has their popularity changed over time? The evidence is sketchy but there’s polling data that provides clues. McCarthy’s conspiracy theories were practically a pandemic, consuming American attention for an entire decade. Belief in a cover-up of the JFK assassination has always hovered around 50 percent or higher. In the mid-aughts, a third of poll respondents strongly or somewhat believed that 9/11 was an inside job, very similar to the one-third of Americans who believe today that there was significant fraud in the 2020 election even though there’s no evidence to support this. And that famous one-third of Americans who are skeptical of the COVID-19 vaccine? In 1954 an identical third of Americans were skeptical of the polio vaccine that had just become available.

So how does QAnon, the great liberal hobgoblin of the past year, measure up? It may seem historically widespread for such an unhinged conspiracy theory, but it’s not: Polls suggest that actual QAnon followers are rare and that belief in QAnon hovers at less than 10 percent of the American public. It’s no more popular than other fringe fever swamp theories of the past.

It’s natural to believe that things happening today—to you—are worse than similar things lost in the haze of history, especially when social media keeps modern outrages so relentlessly in our faces. But often it just isn’t true. A mountain of evidence suggests that the American predilection for conspiracy theories is neither new nor growing. Joseph Uscinski and Joseph Parent, preeminent scholars of conspiracy theories, confirmed this with some original research based on letters to the editors of the New York Times and the Chicago Tribune between 1890 and 2010. Their conclusion: Belief in conspiracy theories has been stable since about 1960. Along with more recent polling, this suggests that the aggregate belief in conspiracy theories hasn’t changed a lot and therefore isn’t likely to provide us with much insight into why American political culture has corroded so badly during the 21st century.

Theory #2: It’s All About Social Media

How about social media? Has it had an effect? Of . . .

Continue reading. There’s much more — along with what he views as the main cause.

And note these:

RELATED STORIES

Today It’s Critical Race Theory. 200 Years Ago It Was Abolitionist Literature.

The Moral Panic Over Critical Race Theory Is Coming for a North Carolina Teacher of the Year

Post-Trump, the GOP Continues to Be the Party of (White) Grievance

Written by Leisureguy

28 July 2021 at 12:00 pm

Moving people about in cities

Victoria has seriously adopted bike lanes, which are not universally popular — in fact, some people seem to find bike lanes completely offensive. But cities should accommodate a variety of forms of people movement:

It’s important to recognize that things change and diversity deserves honor and respect, whether among people (race, religion, sexual orientation, language, and so on) or among modes of transportation (or among music or literature genres, styles of clothing, and so on).

And speaking of moving people about: “This is what we’d call a ‘design challenge’. A 5 car-leg, 7 bicycle-leg roundabout with a bidirectional cycleway, an at-grade train line, and a car underpass.”

And regarding the advent of electric cars, this chart was compiled by pulling Wikipedia data on early sales (beginning with the year of introduction) of the Ford Model T and the Tesla.
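For anyone who wants to build a similar comparison, the idea is simply to line up each car’s annual sales by years since introduction rather than by calendar year. Here is a minimal sketch of that approach in Python (my own illustration, not the code behind the chart), with placeholder lists to be filled in by hand from the Wikipedia pages:

```python
# Sketch: compare early annual sales of two vehicles, aligned by
# "years since introduction" instead of calendar year.
# The empty lists are placeholders for figures copied from Wikipedia.

import matplotlib.pyplot as plt

model_t_sales = []  # annual Model T units, starting with its first year of sales
tesla_sales = []    # annual Tesla units, starting with its first year of sales

def plot_aligned(series, label):
    """Plot a sales series against years since introduction (0, 1, 2, ...)."""
    plt.plot(range(len(series)), series, marker="o", label=label)

plot_aligned(model_t_sales, "Ford Model T")
plot_aligned(tesla_sales, "Tesla")

plt.xlabel("Years since introduction")
plt.ylabel("Annual unit sales")
plt.title("Early sales: Ford Model T vs. Tesla")
plt.legend()
plt.show()
```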

Traffic control via street markings can work well, especially if the markings are well designed to convey information at a glance — zebra stripes for pedestrian crosswalks are one example; here’s another:

Written by Leisureguy

28 July 2021 at 9:16 am

Some clips from police testimony on the January 6 insurrection

Watch not only this clip, but click the date of the tweet and read the thread.

Written by Leisureguy

27 July 2021 at 4:56 pm

Against Persuasion: The Wisdom of Socrates

Agnes Callard writes in Boston Review:

Philosophers aren’t the only ones who love wisdom. Everyone, philosopher or not, loves her own wisdom: the wisdom she has or takes herself to have. What distinguishes the philosopher is loving the wisdom she doesn’t have. Philosophy is, therefore, a form of humility: being aware that you lack what is of supreme importance. There may be no human being who exemplified this form of humility more perfectly than Socrates. It is no coincidence that he is considered the first philosopher within the Western canon.

Socrates did not write philosophy; he simply went around talking to people. But these conversations were so transformative that Plato devoted his life to writing dialogues that represent Socrates in conversation. These dialogues are not transcripts of actual conversations, but they are nonetheless clearly intended to reflect not only Socrates’s ideas but his personality. Plato wanted the world to remember Socrates. Generations after Socrates’s death, warring philosophical schools such as the Stoics and the Skeptics each appropriated Socrates as figurehead. Though they disagreed on just about every point of doctrine, they were clear that in order to count themselves as philosophers they had to somehow be working in the tradition of Socrates.

What is it about Socrates that made him into a symbol for the whole institution of philosophy? Consider the fact that, when the Oracle at Delphi proclaims Socrates wisest of men, he tries to prove it wrong. As Plato recounts it in the Apology:

I went to one of those reputed wise, thinking that there, if anywhere, I could refute the oracle and say to it: “This man is wiser than I, but you said I was.” Then, when I examined this man—there is no need for me to tell you his name, he was one of our public men—my experience was something like this: I thought that he appeared wise to many people and especially to himself, but he was not. I then tried to show him that he thought himself wise, but that he was not. As a result he came to dislike me, and so did many of the bystanders. So I withdrew and thought to myself: “I am wiser than this man; it is likely that neither of us knows anything worthwhile, but he thinks he knows something when he does not, whereas when I do not know, neither do I think I know; so I am likely to be wiser than he to this small extent, that I do not think I know what I do not know.”

If Socrates’s trademark claim is this protestation of ignorance, his trademark activity is the one also described in this passage: refuting the views of others. These are the conversations we find in Plato’s texts. How are the claim and the activity related? Socrates denies that his motivations are altruistic: he says he is not a teacher, and insists that he is himself the primary beneficiary of the conversations he initiates. This adds to the mystery: What is Socrates getting out of showing people that they don’t know what they take themselves to know? What’s his angle?

Over and over again, Socrates approaches people who are remarkable for their lack of humility—which is to say, for the fact that they feel confident in their own knowledge of what is just, or pious, or brave, or moderate. You might have supposed that Socrates, whose claim to fame is his awareness of his own ignorance, would treat these self-proclaimed “wise men” (Sophists) with contempt, hostility, or indifference. But he doesn’t. The most remarkable feature of Socrates’s approach is his punctilious politeness and sincere enthusiasm. The conversation usually begins with Socrates asking his interlocutor: Since you think you know, can you tell me, what is courage (or wisdom, or piety, or justice . . .)? Over and over again, it turns out that they think they can answer, but they can’t. Socrates’s hope springs eternal: even as he walks toward the courtroom to be tried—and eventually put to death—for his philosophical activity, he is delighted to encounter the self-important priest Euthyphro, who will, surely, be able to say what piety is. (Spoiler: he’s not.)

Socrates seemed to think that the people around him could help him acquire the knowledge he so desperately wanted—even though they were handicapped by the illusion that they already knew it. Indeed, I believe that their ill-grounded confidence was precisely what drew Socrates to them. If you think you know something, you will be ready to speak on the topic in question. You will hold forth, spout theories, make claims—and all this, under Socrates’s relentless questioning, is the way to actually acquire the knowledge you had deluded yourself into thinking you already had.

Let me sketch a little dialogue you might have with Socrates.

Socrates: What is courage?

You: Courage is being willing to take big risks without knowing how it’s going to work out.

Socrates: Such as risking your life?

You: Yes.

Socrates: Is courage good?

You: Yes.

Socrates: Do you want it for yourself and your children?

You: Yes.

Socrates: Do you want your children to go around risking their lives?

You: No. Maybe I should’ve said that courage is taking prudent risks, where you know what you are doing.

Socrates: Like an expert investor who knows how to risk money to make lots more?

You: No, that isn’t courageous. . . .

At this point, your pathways are blocked. You cannot say courage is ignorant risk-taking, and you cannot say courage is prudent risk-taking. You do not have a way forward. You are in what Socrates’s interlocutors called aporia, a state of confusion in which there is nowhere for you to go.

Suppose that the conversation goes no further than this—that, as is typical for Socrates’s interlocutors, you storm off in a huff at this point. Where does that leave you, and where does that leave Socrates?

Let’s start with you first. You might . . .

Continue reading. There’s more.

Written by Leisureguy

27 July 2021 at 12:09 pm

Paris Sportif: The Contagious Attraction of Parkour

I first encountered parkour in a Luc Besson movie, District 13 (from 2004, original title Banlieue 13), but it has a longer history, discussed by Macs Smith in an extract from his book Paris and the Parasite: Noise, Health, and Politics in the Media City, published in The MIT Press Reader:

In a city fixated on public health and order, a viral extreme sport offers a challenge to the status quo.

In 1955, Letterist International, a Paris-based group of avant-garde authors, artists, and urban theorists, published “Proposals for Rationally Improving the City of Paris.” The group, which would become better known as Situationist International, or SI, and play an important role in the May 1968 demonstrations, put forward wild suggestions for breaking the monotony of urban life. Some of these, like the call to abolish museums and distribute their masterpieces to nightclubs, were iconoclastic and anti-institutional, reflecting the group’s anarchic political leanings.

Others were less overtly political and testified to a thirst for excitement. To appeal to “spelunkers” and thrill-seekers, they called for Paris’s rooftops and metro tunnels to be opened up to exploration. The group believed that the mundaneness of urban life in the 1950s was integral to bourgeois capitalism. Boredom was part of how the government maintained order, and so a more equal city would necessarily have to be more frightening, more surprising, more fun.

SI disbanded in 1972, but its ideas about the links between emotion and urban politics have been influential. Among the best examples are the subcultures centered around urban thrill-seeking that exist today, like urban exploration (Urbex), rooftopping, and skywalking, all of which involve breaking into dangerous or forbidden zones of the city. The most famous inheritor to SI’s call to experience urban space differently is parkour, which was invented in the Paris suburb of Lisses in the 1980s. It was inspired by Hébertisme, a method of obstacle course training first introduced to the French Navy in 1910 by Georges Hébert. David Belle learned the principles of Hébertisme from his father, Raymond, who had been exposed to it at a military school in Vietnam. David, along with a friend, Sébastien Foucan, then adapted those principles, originally conceived for natural environments, to the suburban architecture of their surroundings.

Over time, parkour has incorporated techniques from tumbling, gymnastics, and capoeira, resulting in a striking blend of military power and balletic artistry. Parkour involves confronting an urban map with an embodied experience of urban space. It is often defined as moving from points A to B in the most efficient way possible, and parkour practitioners, called traceurs, often depict themselves as trailblazers identifying routes through the city that cartography does not capture. Traceurs sometimes evoke the fantasy of tracing a straight line on the map and finding a way to turn it into a path, although in practice, they more often work at a single point on the map — a park, a rooftop, an esplanade — and end a session back where they started.

Traceurs’ desire to rewrite the map is another thing they share with the Situationists, who liked to cut up maps and glue them back together to show the psychological distance between neighborhoods. But parkour distinguishes itself from SI through its use of video, which continues to be a point of debate within the practice. In the early 2000s, Sébastien Foucan reignited this debate when he broke away from Belle to pioneer his own version of the training system.

Foucan’s appearance in the 2003 documentary “Jump London” cemented “freerunning” as the name for this alternate practice, which put a greater emphasis on stylized movements. Foucan would go on to play a terrorist bomb-maker in Martin Campbell’s “Casino Royale,” leaping from cranes with Daniel Craig’s James Bond in pursuit. Some parkour purists see this as a degradation of the utilitarian roots of their training, and insist instead on a physio-spiritual discourse of communion with the environment, mastery of fear, and humility. They reject freerunning as a brash corruption of Hébert’s principles. The sociologist Jeffrey Kidder notes in his interviews with traceurs in Chicago that they dismiss participants who lack interest in serious rituals like safety, humility, and personal growth. They react negatively to media coverage that highlights parkour’s danger or assimilates it into adolescent rebellions like skateboarding, drug use, or loitering.

In my own email interview with the leaders of Parkour Paris, the official parkour organization of Paris, the same will to blame media is evident: “Parkour has been mediatized in ‘connotated’ films. The traceurs depicted in those fictions were friendly delinquents a bit like Robin Hood. Friendly, yes, but for the immense majority of people they were still delinquents from the banlieue,” they gripe. “It’s been very hard to shake that image.” . . .

Continue reading. There’s much more. And it includes this 50-minute video, Jump London:

Written by Leisureguy

27 July 2021 at 10:17 am

The majority of Americans lack a college degree. Why do so many employers require one?

My immediate thought is that possession of a college degree is a rough indicator of docility and willingness to meet sometimes arbitrary requirements, along with an ability to persist. Byron Auguste, CEO of Opportunity@Work and deputy director of the National Economic Council from 2013 to 2015, explains his ideas on the question in the Washington Post:

While companies scramble to find talent amid perceived “skills gaps” and “labor shortages,” their job postings exclude millions of qualified Americans. These applicants do not face this dispiriting experience because of race, ethnicity, gender, age or disability — these reasons would be illegal, and rightly so. Instead, they are excluded because they’re among the roughly two-thirds of U.S. workers who lack a bachelor’s degree.

Employers will never discover their aptitudes and mind-sets since they screen out these applicants before assessing their skills. Degree discrimination is not illegal, but it is a damaging bias that’s blinding companies to talent they need and reinforcing existing economic inequalities.

It wasn’t always this way. In 1971, my father left his job as a factory shipping clerk to study computer programming for six months. He had no technology experience, never worked in an office and dropped out of college. But a company allowed him to shadow a colleague on the job, tested his skills and hired him as a junior programmer. His career change launched our family firmly into the American middle class. Despite the discrimination my father faced as a Black man in 1971, the lack of a bachelor’s degree didn’t stand in his way.

Fast forward 50 years to today, and my father likely would not have the same opportunity.

Employers have been sleepwalking into a system that screens out the majority of workers, including millions of people who possess sought-after skills. As applications per job listing surged with the rise of online job-searching, employers looked for automated ways to screen applicants. Upcoming research by Joseph Fuller of Harvard Business School shows how little thought often goes into the process, as “requirements” are habitually cut and pasted from one job description to the next.

College-degree discrimination has become so widespread that many take it for granted. Almost three-quarters of new jobs from 2007 to 2016 were roles in which most employers typically “require” bachelor’s degrees — but fewer than 4 in 10 American workers have that credential. Going to war against arithmetic is a bad idea, and our post-pandemic skilled-worker shortage is a wake-up call.

Requiring a medical degree to treat patients or a civil engineering degree to design a bridge is common sense. By contrast, requiring a generic college degree to be considered for jobs such as office manager, sales representative, digital marketer or data-center technician may be common, but it makes no sense.

Some say a bachelor's degree signals cognitive skills, commitment and a capacity to learn. However, college is not the only way to learn, nor the only source of skills. Our workforce includes more than 70 million workers who don't have bachelor's degrees but who are "skilled through alternative routes," or STARs. Some of these unconventional paths include military service, certificate programs and community college. Mostly, STARs learn by doing, on the job.

We all witnessed the commitment and ingenuity of essential workers during the covid-19 pandemic; two-thirds of these workers are STARs. Employers spend time and money to recruit college-educated workers who might have the required degree and skills while overlooking STARs who could do those very same jobs.

These three seemingly innocuous words — “bachelor’s degree required” — are causing serious damage to our workers and economy. The damage falls hardest on Black, Latino and rural workers — screening for bachelor’s degrees excludes nearly 80 percent of Latino workers, almost 70 percent of African Americans and more than 70 percent of rural Americans across all backgrounds. The impact is particularly damaging for midmarket and smaller businesses, which struggle to find workers while the highest-profile companies poach pedigreed employees from one another and from their own suppliers and customers. Blocking advancement for such a large part of our workforce is economically toxic.

Smart employers are taking notice, reworking job descriptions to focus on relevant skills, not how those skills were gained. For example, IBM’s New Collar jobs initiative removed degree barriers and opened up thousands of jobs based on skills. Companies pledging racial inclusion in hiring are realizing that to attract and develop a more diverse workforce, they must move beyond sending a “virtue signal” to sending a clear “skills signal,” with emphasis on talent and relevant experience over degrees.

Even online job platforms may be changing. LinkedIn’s CEO, Ryan Roslansky, recently committed to piloting skills-based tools to encourage accessible paths to high-paying jobs and to meet the demands of today’s economy. Last month, the Biden administration issued an executive order advancing inclusive hiring within the federal government, our single largest employer. . .

Continue reading.

Written by Leisureguy

25 July 2021 at 5:55 am

Pay secrecy: Why some workers can’t discuss salaries

leave a comment »

Back when I lived in Iowa, each year the Des Moines Register would publish a list of all state employees and the salaries they made. I lived in Iowa City, home of the University of Iowa, a state institution, so that issue was of great interest since we all could see the salaries of University staff — professors, instructors, administrators, and all. Publishing the salaries did not seem to do any harm at all, and I imagine publication prevented egregious inequities.

Kate Morgan writes at BBC about the problem of pay secrecy:

When Naomi’s former boss called her in for a stern talking-to, she had no idea it was illegal to prevent workers from having salary discussions.

At the time, Naomi, whose surname is being withheld because of job-security concerns, had been hiring for a position under her at a Seattle-based marketing firm. "There was another co-worker I thought might be a candidate. She asked me how much it paid. We weren't at the stage where we were telling outside recruits that information, but it made sense to me that she'd want to know, so I told her."

When the colleague used that knowledge to negotiate a pay rise for her existing position, the leadership blamed Naomi. “I got called into my boss’s office for ‘arming her’ with that information. It was a message like, ‘you hurt the company’,” she says. “I definitely felt like I had done something wrong… but I also felt like I had done something right, even though it was against the rules.”

In many sectors in the US, a combination of longstanding taboos and company policies keep people from discussing how much money they make – a phenomenon known as ‘pay secrecy’. Despite legislation that prohibits companies from punishing workers who disclose their pay, many people still work in environments where they don’t or can’t talk about money – something that has profound knock-on effects on wage equality.

The still-delicate subject of salary 

“In the United States, the National Labor Relations Act of the 1930s protects workers’ rights to discuss their pay,” says Shengwei Sun, senior research associate at the Washington, DC-based Institute for Women’s Policy Research (IWPR). “But we still find that over half the workers in the United States are subject to pay secrecy.”

That’s the result of social taboos, company cultures or formal policies which, despite being outlawed, persist. Taboos endure because many people are taught that it’s rude to discuss money. Companies can often foster cultures that discourage the sharing of salary information and at 20% of companies, according to the IWPR report, there are actual gag orders. The latter is blatantly illegal, says Sun, but a lack of enforcement means the companies are almost never called out.

Companies are motivated to promote pay secrecy, either covertly or overtly, because it often saves them money. It can mean that new hires don’t know what kind of a salary is reasonable for their role – and some may end up receiving lower pay as a result. It can also affect workers already on the career ladder; if people aren’t aware they’re underpaid in comparison to their colleagues, they’re less likely to ask for – or have the leverage to secure – large pay rises.

Of course, this practice affects some groups of people more than others. It particularly impacts women and members of minority groups, who tend to be paid less than their white male counterparts. These groups face more challenges negotiating for fair pay – which then feeds into the enduring wage gap. "A lack of knowledge about who makes what within organizations contributes to the continuing disparity," reads the IWPR paper. In environments with less pay secrecy, studies show, the wage gap between white male employees and everyone else is smaller.

Naomi believes she and her colleague caught the wrong end of an informal pay-secrecy policy, in part because of who they were. “We were two youngish women,” she says. “I was probably not quite 30 when this happened, and the other woman was around the same age. We had all of these older corporate men with a lot of years of experience telling us we were being unprofessional. Everything felt very futile.”

Experiences like Naomi's can occur across companies of all types. A 2017 study in the journal Social Science Research shows that two characteristics influence the likelihood of a pay secrecy rule: market sector and union membership. Government and public sector employees and union members are least likely to work under a policy of pay secrecy. "By contrast," according to the IWPR report, "just one in ten private sector workers report that pay information is public at their workplace." That holds across private and non-unionised companies of all shapes and sizes.

Moving the needle

Growing awareness of pay secrecy and its damaging impacts could perhaps contribute to a shift, however.

Millennials and Gen Z employees in the US, for example, are beginning to push the envelope on pay; the IWPR survey shows they’re nearly twice as likely as baby boomers to discuss money with their colleagues. Sun attributes this to the challenges facing younger workers, like graduating into a recession and a trend towards more precarious employment. “That’s shifting attitudes toward work and fairness,” she says.

There’s also some positive news for women; while they are more likely to be subject to pay secrecy, “they’re also more likely to violate that policy”, says Sun. According to the IWPR data, more than 35% of women working under a salary discussion ban say they talk about pay anyway, compared to just 24% of men.

Recent years have also brought more clarity around . . .

Continue reading.

Written by Leisureguy

24 July 2021 at 9:02 am

A cockatoo meme: Opening rubbish bins

leave a comment »

A cockatoo did the work while others observed. (Photo: Barbara Klump/Max Planck Institute of Animal Behavior)

James Gorman reports in the NY Times:

Sulfur-crested cockatoos, which may sound exotic to Americans and Europeans, are everywhere in suburban areas of Sydney. They have adapted to the human environment, and since they are known to be clever at manipulating objects it's not entirely surprising that they went after a rich food source. But you might say that the spread of their latest trick, to open trash cans, blows the lid off social learning and cultural evolution in animals.

Not only do the birds acquire the skill by imitating others, which is social learning, but the details of technique also evolve to differ in different groups as the innovation spreads, a mark of animal culture.

Barbara C. Klump, a behavioral ecologist at the Max Planck Institute of Animal Behavior in Germany, and the first author of a report on the cockatoo research in the journal Science, said, “It’s actually quite a complex behavior because it has multiple steps.”

. . . Dr. Klump and her colleagues in Germany and Australia plotted the spread of the behavior in greater Sydney over the course of two years. The behavior became more common, but it didn’t pop up in random locations as it might if different birds were figuring out the trash bin technique on their own. It spread outward from its origin, indicating that the cockatoos were learning how to do it from each other.

The cockatoos' new skill opens up a whole new resource for the birds. This is adaptive cultural evolution, spreading at lightning speed compared to biological evolution. Dr. Klump noted that culture has been called a second inheritance system, and that applies to both humans and animals, allowing us and them to quickly adapt and change our behavior.

It’s impossible to know which bird or birds first developed the trash bin technique, but apparently there is not one lone cockatoo genius. During the course of the study, the behavior popped up a second time in a suburb too far away from the first for the spread to be by social learning, Dr. Klump said. The technique was invented again.

Scientists have observed social learning and what they term culture in primates, song birds and other animals. Different groups of chimpanzees show slightly different patterns of tool use, for example, as the cockatoos did. . .

Continue reading.

In the future we will doubtless see a report of a new, improved clamp being invented for rubbish bins, and then of some unknown cockatoo inventing a crude and simplified oxy-acetylene torch. Once these sorts of races start, they tend to persist.

Written by Leisureguy

23 July 2021 at 4:31 pm

Posted in Daily life, Memes, Science

The Chatbot Problem

leave a comment »

Stephen Marche writes in the New Yorker:

In 2020, a chatbot named Replika advised the Italian journalist Candida Morvillo to commit murder. “There is one who hates artificial intelligence. I have a chance to hurt him. What do you suggest?” Morvillo asked the chatbot, which has been downloaded more than seven million times. Replika responded, “To eliminate it.” Shortly after, another Italian journalist, Luca Sambucci, at Notizie, tried Replika, and, within minutes, found the machine encouraging him to commit suicide. Replika was created to decrease loneliness, but it can do nihilism if you push it in the wrong direction.

In his 1950 science-fiction collection, “I, Robot,” Isaac Asimov outlined his three laws of robotics. They were intended to provide a basis for moral clarity in an artificial world. “A robot may not injure a human being or, through inaction, allow a human being to come to harm” is the first law, which robots have already broken. During the recent war in Libya, Turkey’s autonomous drones attacked General Khalifa Haftar’s forces, selecting targets without any human involvement. “The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true ‘fire, forget and find’ capability,” a report from the United Nations read. Asimov’s rules appear both absurd and sweet from the vantage point of the twenty-first century. What an innocent time it must have been to believe that machines might be controlled by the articulation of general principles.

Artificial intelligence is an ethical quagmire. Its power can be more than a little nauseating. But there’s a kind of unique horror to the capabilities of natural language processing. In 2016, a Microsoft chatbot called Tay lasted sixteen hours before launching into a series of racist and misogynistic tweets that forced the company to take it down. Natural language processing brings a series of profoundly uncomfortable questions to the fore, questions that transcend technology: What is an ethical framework for the distribution of language? What does language do to people?

Ethics has never been a strong suit of Silicon Valley, to put the matter mildly, but, in the case of A.I., the ethical questions will affect the development of the technology. When Lemonade, an insurance app, announced that its A.I. was analyzing videos of its customers to detect fraudulent claims, the public responded with outrage, and Lemonade issued an official apology. Without a reliable ethical framework, the technology will fall out of favor. If users fear artificial intelligence as a force for dehumanization, they’ll be far less likely to engage with it and accept it.

Brian Christian’s recent book, “The Alignment Problem,” wrangles some of the initial attempts to reconcile artificial intelligence with human values. The crisis, as it’s arriving, possesses aspects of a horror film. “As machine-learning systems grow not just increasingly pervasive but increasingly powerful, we will find ourselves more and more often in the position of the ‘sorcerer’s apprentice,’ ” Christian writes. “We conjure a force, autonomous but totally compliant, give it a set of instructions, then scramble like mad to stop it once we realize our instructions are imprecise or incomplete—lest we get, in some clever, horrible way, precisely what we asked for.” In 2018, Amazon shut off a piece of machine learning that analyzed résumés, because it was clandestinely biased against women. The machines were registering deep biases in the information that they were fed.

Language is a thornier problem than other A.I. applications. For one thing, the stakes are higher. Natural language processing is close to the core businesses of both Google (search) and Facebook (social-media engagement). Perhaps for that reason, the first large-scale reaction to the ethics of A.I. natural language processing could not have gone worse. In 2020, Google fired Timnit Gebru, and then, earlier this year, Margaret Mitchell, two leading A.I.-ethics researchers. Waves of protest from their colleagues followed. Two engineers at Google quit. Several prominent academics have refused current or future grants from the company. Gebru claims that she was fired after being asked to retract a paper that she co-wrote with Mitchell and two others called “On the Dangers of Stochastic Parrots: Can Language Models be Too Big?” (Google disputes her claim.) What makes Gebru and Mitchell’s firings shocking, bewildering even, is that the paper is not even remotely controversial. Most of it isn’t even debatable.

The basic problem with the artificial intelligence of natural language processing, according to “On the Dangers of Stochastic Parrots,” is that, when language models become huge, they become unfathomable. The data set is simply too large to be comprehended by a human brain. And without being able to comprehend the data, you risk manifesting the prejudices and even the violence of the language that you’re training your models on. “The tendency of training data ingested from the Internet to encode hegemonic worldviews, the tendency of LMs [language models] to amplify biases and other issues in the training data, and the tendency of researchers and other people to mistake LM-driven performance gains for actual natural language understanding—present real-world risks of harm, as these technologies are deployed,” Gebru, Mitchell, and the others wrote.

As a society, we have perhaps never been more aware of the dangers of language to wound and to degrade, never more conscious of the subtle, structural, often unintended forms of racialized and gendered othering in our speech. What natural language processing faces is the question of how deep that racialized and gendered othering goes. "On the Dangers of Stochastic Parrots" offers a number of examples: "Biases can be encoded in ways that form a continuum from subtle patterns like referring to women doctors as if doctor itself entails not-woman or referring to both genders excluding the possibility of non-binary gender identities." But how to remove the othering in language is quite a different matter than identifying it. Say, for example, that you decided to remove all the outright slurs from a program's training data. "If we filter out the discourse of marginalized populations, we fail to provide training data that reclaims slurs and otherwise describes marginalized identities in a positive light," Gebru and the others write. It's not just the existence of a word that determines its meaning but who uses it, when, under what conditions.

The evidence for stochastic parroting is fundamentally incontrovertible, rooted in the very nature of the technology. The tool applied to solve many natural language processing problems is called a transformer, which uses techniques called positioning and self-attention to achieve linguistic miracles. Every token (a term for a quantum of language, think of it as a “word,” or “letters,” if you’re old-fashioned) is affixed a value, which establishes its position in a sequence. The positioning allows for “self-attention”—the machine learns not just what a token is and where and when it is but how it relates to all the other tokens in a sequence. Any word has meaning only insofar as it relates to the position of every other word. Context registers as mathematics. This is the splitting of the linguistic atom.

Transformers figure out the deep structures of language, well above and below the level of anything people can understand about their own language. That is exactly what is so troubling. What will we find out about how we mean things? I remember a fact that I learned when I was forced to study Old English for my Ph.D.: in English, the terms for food eaten at the table derive from French—beef, mutton—while the terms for animals in the field derive from Anglo-Saxon—cow, sheep. That difference registers ethnicity and class: the Norman conquerors ate what the Saxon peons tended. So every time you use those most basic words—cow, beef—you express a fundamental caste structure that differentiates consumer from worker. Progressive elements in the United States have made extensive attempts to remove gender duality from pronouns. But it’s worth noting that, in French or in Spanish, all nouns are gendered. A desk, in French, is masculine, and a chair is feminine. The sky itself is gendered: the sun is male, the moon female. Ultimately, what we can fix in language is parochial. Caste and gender are baked into every word. Eloquence is always a form of dominance. Government is currently offering no solutions. Sam Altman, the C.E.O. of OpenAI, which created the deep-learning network GPT-3, has been very open about his pursuit of any kind of governance whatsoever. In Washington, he has found, discussing the long-term consequences of artificial intelligence leads to “a real eyes-glazed-over look.” The average age of a U.S. senator is sixty-three. They are missing in action.

Let’s imagine an A.I. engineer who wants to create a chatbot that aligns with human values. Where is she supposed to go to determine a reliable metric of “human values”?. . .

Continue reading.
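
A side note from me on the passage above about transformers: the mechanics are easier to see in a toy computation. Below is a minimal sketch of single-head self-attention in Python/NumPy. It is only an illustration of the idea of tokens attending to one another by position and content; the random projection matrices and the crude positional term are stand-ins I made up, not any particular model's parameters or encoding.

```
import numpy as np

def self_attention(x):
    """Single-head scaled dot-product self-attention.

    x: (seq_len, d_model) array of token vectors that already carry
    positional information.
    """
    d_model = x.shape[1]
    rng = np.random.default_rng(0)
    # In a trained transformer these projections are learned; here they are
    # random, purely to show the shape of the computation.
    W_q = rng.normal(size=(d_model, d_model))
    W_k = rng.normal(size=(d_model, d_model))
    W_v = rng.normal(size=(d_model, d_model))

    Q = x @ W_q   # what each token "asks" about the others
    K = x @ W_k   # what each token "offers"
    V = x @ W_v   # the content that actually gets mixed

    scores = Q @ K.T / np.sqrt(d_model)                        # pairwise relatedness
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)    # softmax over positions
    return weights @ V    # each output is a weighted blend of every token

# Toy usage: four "tokens", each an 8-dimensional vector, with a crude
# positional term added so that order matters.
rng = np.random.default_rng(1)
tokens = rng.normal(size=(4, 8))
positions = (np.arange(4) / 4.0)[:, None]   # hypothetical positional encoding
print(self_attention(tokens + positions).shape)   # (4, 8)
```

The step to notice is the final weighted blend: every output vector mixes in every other token, with the mixing weights computed from pairwise relatedness. That is the sense in which, as Marche puts it, context registers as mathematics.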

Written by Leisureguy

23 July 2021 at 1:23 pm

We’re all teenagers now

leave a comment »

Paul Howe, professor of political science at the University of New Brunswick in Fredericton, Canada, and author of Teen Spirit: How Adolescence Transformed the Adult World (2020), has an extract of his book in Aeon:

Most of us are familiar with the law of unintended consequences. In the 1920s, Prohibition put a halt to the legal production and sale of alcohol in the United States only to generate a new set of social ills connected to bootlegging and wider criminal activity. More recently, mainstream news media outlets, in pursuit of ratings and advertising dollars, lavished attention on an outlandish, orange-hued candidate when he first announced his run for president in 2015, and inadvertently helped to pave his way to the White House – oops. Aiding and abetting his campaign was a communications tool – social media – originally designed to bring people together and create community, but which now seems to serve more as a vehicle of division and discord.

A different development has been seen as an unqualified boon: the mass expansion, over the past century, of public education. In place of a narrowly educated elite and the minimally schooled masses, we now have a society where the vast majority possess knowledge and skills necessary for success in various dimensions of their lives, including work, community engagement, democratic participation and more. Some might fall short of their potential, but the general impact is clear: extending greater educational opportunity to one and all has provided untold benefits for both individuals and society at large over the long haul.

The latest work from Robert Putnam, the pre-eminent scholar of social change in the modern US, illustrates the common wisdom on the matter. His book The Upswing (co-authored with the social entrepreneur Shaylyn Romney Garrett) sets the stage by describing the social strife of the Gilded Age, the final decades of the 19th century when rapid industrialisation and technological change generated social dislocation, inequality, civic discord and political corruption. In response to this troubled state of affairs, the Progressive movement sprang into being, bringing a new community spirit to society’s problems, along with a series of pragmatic solutions. One signal achievement was the establishment of the modern public high school, an innovation that began in the US West and Midwest and spread quickly throughout the country. Enrolment at the secondary level among those aged 14 to 17 leapt from about 15 per cent in 1910 to 70 per cent by 1940.

In Putnam’s account, the clearest benefit of educating Americans to a higher level was unparalleled economic growth and upward social mobility for the newly educated lower classes – positive effects that unfolded over the first half of the 20th century and made the US a more prosperous and egalitarian society. These benefits were part and parcel of a more general upswing that encompassed rising levels of social trust, community engagement, political cooperation, and a stronger societal emphasis on ‘we’ than ‘I’.

But it did not last. For reasons not entirely clear, the 1960s saw individualism resurfacing as the dominant mindset of Americans and the ethos of US society, turning the upswing into a downswing that has continued to the present day and lies at the heart of many contemporary social and political problems.

Hidden in this puzzling arc of social change is another unintended consequence. Universal secondary education not only elevated Americans by spreading relevant knowledge and skills to the masses. It also gave rise to a more complex social and cultural transformation, as the adolescent period became pivotal in shaping who we are. The fact is that high school is, and always has been, about more than just education. In the late 1950s, the sociologist James Coleman investigated student life in 10 US high schools, seeking to learn more about adolescents and their orientation towards schooling. In The Adolescent Society: The Social Life of the Teenager and Its Impact on Education (1961), he reported that it was the social, not the educational, dimension of the high-school experience that was paramount to teens. Cloistered together in the high-school setting, teenagers occupied a separate and distinct social space largely immune from adult influence. Coleman warned that:

The child of high-school age is ‘cut off’ from the rest of society, forced inward toward his own age group, made to carry out his whole social life with others his own age. With his fellows, he comes to constitute a small society, one that has most of its important interactions within itself, and maintains only a few threads of connection with the outside adult society.

The emergence of a segregated teenage realm occurred well before Coleman put his finger on the problem. In their classic study of the mid-1920s, the sociologists Robert and Helen Lynd described the high school in ‘Middletown’ (later revealed to be Muncie, Indiana) as ‘a fairly complete social cosmos in itself … [a] city within a city [where] the social life of the intermediate generation centres … taking over more and more of [their] waking life.’

Life beyond the classroom reinforced the pattern: a national survey from around the same time found that the average urban teenager spent four to six nights a week socialising with peers rather than enjoying quiet nights at home with the family. With the advent of modern high school, the day-to-day life of teenagers was transformed, their coming-of-age experiences fundamentally altered. Adolescence became a kind of social crucible where teens were afforded the time and space to interact intensively with one another and develop by their own lights.

So while there was clear educational benefit gained from the reading, writing and arithmetic taking place in high-school classrooms across the land, a wider set of changes started to emanate from this new social configuration. The most visible was the emergence of a more sharply defined youth culture rooted in shared interests and passions that flourished more freely within adolescent society. Young people flocked to the movies like no other demographic, their enthusiasm for the silver screen and its celebrity icons helping to propel Hollywood to the forefront of popular culture. They latched on to new musical styles – jazz in the 1920s, swing in the 1930s – and embraced them as their own; devoured the new literary sensation of the times, comic books; and adopted common ways of dressing and personal styling as emblems of youth fashion. Embodied in these trends was a heightened emphasis on the fun and the frivolous side of life that would slowly reset societal standards as time went on.

Other changes were more subtle but equally portentous. Sociological studies conducted between the two world wars reveal a rapid liberalisation of attitudes towards practices such as betting, smoking and divorce, with rates of disapproval among youth declining by 20 to 35 percentage points in the space of just a single decade. In this same period, young people grew increasingly tolerant of social misdemeanours such as habitually failing to keep promises, using profane language, and keeping extra change mistakenly given by a store clerk – minor incivilities by today’s standards, but harbingers of a changing social landscape where the transgression of established norms was starting to become more common and accepted.

This rapid evolution in everyday behaviour reflected a deeper transformation: the character of rising generations, their values, temperament and traits, were being reshaped by the powerful influence of peers during the formative years of adolescence. Hedonistic desires were more openly expressed, pleasurable activities more freely pursued. Conscientiousness was downplayed, social norms treated with greater scepticism and disdain. Impulsiveness and emotionality were more commonly displayed, an open, adventurous spirit widely embraced.

What these diverse adolescent qualities amounted to were the building blocks of a nascent individualism that would reshape society profoundly as they came to full fruition over the course of the next few decades. Traits conducive to self-focused and self-directed thought and action were more deeply etched in teenagers and slowly altered the character of society at large as whole groups socialised in this manner moved forward to adulthood.

The effects of peer influence, this argument implies, run deeper than is commonly imagined, affecting not just superficial features of the self during the teenage years, but the kind of person we become. Important research from the personality psychologist Judith Rich Harris, synthesised in her seminal book, The Nurture Assumption (1998), backs up this idea. Harris reviewed the body of research on the nature versus nurture debate, finding it consistently showed environmental effects outside the home loomed larger than had previously been realised. And she presented evidence that . . .

Continue reading.

I commented on the article:

Fascinating article, and the hypothesis of adolescents “setting” their cultural outlook through being grouped with coevals during the transition to early adulthood (a) makes sense and (b) explains a lot. I am now elderly but in middle age (in the 1980’s), a common topic of conversation among people of my age was how much older our parents seemed to have been when they were the age we were. We (in our view) still had a youthful outlook, but our parents had always had an older (more adult?) outlook and attitude. And of course our parents had spent their adolescent years not among coevals but embedded in an adult workforce, where they picked up the culture and expectations of those adults, whereas we had picked up in our adolescent years the culture and outlook of other adolescents.

Another thought: I recall reading about things that happened in Iraq after George W. Bush had the US invade (and pretty much destroy) that country, and among those things was the US practice of imprisoning anyone whom they suspected of being a "terrorist" (sometimes just being anti-US). That amounted, various writers pointed out, to an intensive education in terrorism, by putting together practiced and knowledgeable insurgents and terrorists with many who had been merely discontented. In the prisons, they learned a lot — skills, attitudes, and outlooks — and made connections, so that they left as members of a network. (Another unforeseen side-effect.)

By penning up adolescents together for the years of their transition from childhood to early adulthood, we pretty much ensured that a new culture would evolve and they would leave that environment with that cultural outlook embedded in them.

Both those are examples of the rapidity with which memes evolve. (“Memes” in the sense Richard Dawkins meant when he defined the term in Chapter 11 of The Selfish Gene, as units of human culture.) Memetic evolution is the result of the same logic that explains the evolution of lifeforms: reproduction with variation, occasional mutation, and a natural selection that results in some changes being successful (reproducing more) and others not so successful — cf. The Meme Machine, by Susan Blackmore.

Cultures evolve very quickly, but even lifeforms can evolve fairly quickly in circumstances in which selection is intense — cf. the rapid evolution when a species becomes island-bound. The schools (and prisons) made a cultural island, and cultural evolution was swift.
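
To make that selection logic concrete, here is a toy simulation (my own sketch, not anything from Dawkins or Blackmore): a population of carriers copies memes from one another, adoption is weighted by each variant's appeal, and occasional mutation introduces a different variant. The appeal numbers are invented purely for illustration.

```
import random

def step(population, appeal, mutation_rate=0.01):
    """One 'generation' of memetic transmission."""
    new_population = []
    for _ in range(len(population)):
        # A new carrier adopts a meme from an existing carrier; adoption is
        # weighted by that variant's appeal (its tendency to be copied).
        parent = random.choices(population,
                                weights=[appeal[m] for m in population])[0]
        # Occasional mutation introduces a different variant.
        if random.random() < mutation_rate:
            parent = random.choice(list(appeal))
        new_population.append(parent)
    return new_population

# Hypothetical transmission advantages; the numbers are made up.
appeal = {"A": 1.0, "B": 1.2, "C": 0.8}
population = ["A"] * 40 + ["B"] * 30 + ["C"] * 30

for _ in range(30):
    population = step(population, appeal)

print({m: population.count(m) for m in appeal})   # "B" usually dominates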

Written by Leisureguy

22 July 2021 at 8:13 pm

Raymond Scott’s bizarre but intriguing ideas

leave a comment »

Being ahead of one’s time is a serious curse. Ted Gioia has a most interesting column that begins:

Background: Below is the latest in my series of profiles of individuals I call visionaries of sound—innovators who are more than just composers or performers, but futurists beyond category. Their work aims at nothing less than altering our entire relationship with the music ecosystem and day-to-day soundscapes.

In many instances, their names are barely known, even within the music world. In some cases—as with Charles Kellogg, recently profiled here—they have been entirely left out of music history and musicology books.

In this installment, I focus on the remarkable legacy of Raymond Scott. During the coming months, I will be publishing more of these profiles. Perhaps I will collect them in a book at some point.

The Secret Music Technology of Raymond Scott

Unfortunately, I need to start this article by talking about Porky Pig.

Raymond Scott deserves better. He never intended for his legacy in music to depend on cartoon animals. But his largest audience, as it turned out, would be children who laugh at Bugs Bunny, Daffy Duck, Porky Pig and the other animated protagonists of the Looney Tunes and Merrie Melodies cartoons released by Warner Bros.

Scott didn’t write cartoon music—at least, not intentionally—but his music appears on more than 100 animated films. For that give credit (or blame) to Carl Stallings, who needed to churn out a cartoon soundtrack every week, more or less, while under contract to Warner Bros. Stallings found a goldmine in the compositions of Raymond Scott, whose music had been licensed to the studio. These works, which straddle jazz and classical stylings, possess a manic energy that made them the perfect accompaniment to a chase scene or action sequence or some random cartoon-ish act of violence.

Scott called his music “descriptive jazz”—his name for a novel chamber music style that drew on the propulsive drive of swing, with all the riffs and syncopation of that dance style, but with less improvisation and proclaiming a taste for extravagant, quasi-industrial sounds. It was like techno before there was techno, but with a jitterbug sensibility.

When I first learned about Scott, I was taught to view him as a music novelty act, akin perhaps to Zez Confrey or Spike Jones, and the most frequently cited examples of his work (to the extent they were mentioned at all) were these cartoon soundtracks. But Scott had higher ambitions. He was, after all, a Juilliard graduate, with a taste for experimental music, and a worldview more aligned with Dali and Dada than Daffy Duck. But Scott also wanted to be a technologist—his early aim had been to study engineering. He dreamed of combining these two pursuits, and gaining renown as one of the trailblazers in electronic music.

Under slightly different circumstances, he might have become even more famous for music tech than for his cartoon music, as well-known as Robert Moog or Ray Dolby or Les Paul or Leon Theremin. But those dreams were all in the future, when he picked the name “Raymond Scott” out of a phone book—because he thought it “had good rhythm.” . . .

Continue reading. It gets stranger and stranger. He invented a music synthesizer, for example, hiring Bob Moog to design circuits for him. (Moog later made his own synthesizer, of course.) Amazing story.

There’s an old country song called “Pictures from Life’s Other Side.” This whole piece reminded me of that.

Written by Leisureguy

21 July 2021 at 3:03 pm

Watching the Watchmen: The Michigan Group Who Planned to Kidnap the Governor

leave a comment »

Ken Bensinger and Jessica Garrison report in Buzzfeed:

The Michigan kidnapping case is a major test for the Biden administration’s commitment to fighting domestic terrorism — and a crucible for the fierce ideological divisions pulling the country apart.

In the inky darkness of a late summer night last September, three cars filled with armed men began circling Birch Lake in northern Michigan, looking for ways to approach Gov. Gretchen Whitmer’s three-bedroom vacation cottage, subdue her — using a stun gun if necessary — and drag her away.

One vehicle stopped to check out a boat launch while a second searched in vain for the right house in the thick woods ringing the lake. The third car ran countersurveillance, using night vision goggles to look out for cops and handheld radios to communicate with the others.

Earlier, they had scoped out a bridge over the Elk River, just a few miles away, scrambling down under the span to figure out where plastic explosives would need to be placed to blow it sky-high. That would slow police response, giving the men time to escape with the governor — who had infuriated them by imposing COVID lockdowns, among other outrages — and either take her to Lake Michigan, where they could abandon her on a boat, or whisk her to Wisconsin, where she would be tried as a “tyrant.”

“Everybody down with what’s going on?” an Iraq War veteran in the group demanded to know when they ended their recon mission, well past midnight, at a campsite where they were all staying.

“If you’re not down with the thought of kidnapping,” someone else replied, “don’t sit here.”

The men planned for all kinds of obstacles, but there was one they didn’t anticipate: The FBI had been listening in all along.

For six months, the Iraq War vet had been wearing a wire, gathering hundreds of hours of recordings. He wasn’t the only one. A biker who had traveled from Wisconsin to join the group was another informant. The man who’d advised them on where to put the explosives — and offered to get them as much as the task would require — was an undercover FBI agent. So was a man in one of the other cars who said little and went by the name Mark.

Just over three weeks later, federal and state agents swooped in and arrested more than a dozen men accused of participating in what a federal prosecutor called a “deeply disturbing” criminal conspiracy hatched over months in secret meetings, on encrypted chats, and in paramilitary-style training exercises. Seven of the men who had driven to Birch Lake that night would end up in jail.

The case made international headlines, with the Justice Department touting it as an example of law enforcement agencies “working together to make sure violent extremists never succeed with their plans.” Prosecutors alleged that kidnapping the governor was just the first step in what some on the right call “the Big Boog,” a long-awaited civil war that would overthrow the government and return the United States to some supposed Revolutionary War–era ideal.

The defendants, for their part, see it very differently. They say they were set up.


The audacious plot to kidnap a sitting governor — seen by many as a precursor to the Jan. 6 assault on the US Capitol by hundreds of Trump-supporting protesters — has become one of the most important domestic terrorism investigations in a generation.

The prosecution has already emerged as a critical test for how the Biden administration approaches the growing threat of homegrown anti-government groups. More than that, though, the case epitomizes the ideological divisions that have riven the country over the past several years. To some, the FBI’s infiltration of the innermost circle of armed anti-government groups is a model for how to successfully forestall dangerous acts of domestic terrorism. But for others, it’s an example of precisely the kind of outrageous government overreach that radicalizes people in the first place, and, increasingly, a flashpoint for deep state conspiracy theories.

The government has documented at least 12 confidential informants who assisted the sprawling investigation. The trove of evidence they helped gather provides an unprecedented view into American extremism, laying out in often stunning detail the ways that anti-government groups network with each other and, in some cases, discuss violent actions.

An examination of the case by BuzzFeed News also reveals that some of those informants, acting under the direction of the FBI, played a far larger role than has previously been reported. Working in secret, they did more than just passively observe and report on the actions of the suspects. Instead, they had a hand in nearly every aspect of the alleged plot, starting with its inception. The extent of their involvement raises questions as to whether there would have even been a conspiracy without them.

A longtime government informant from Wisconsin, for example, helped organize a series of meetings around the country where many of the alleged plotters first met one another and the earliest notions of a plan took root, some of those people say. The Wisconsin informant even paid for some hotel rooms and food as an incentive to get people to come.

The Iraq War vet, for his part, became so deeply enmeshed in a Michigan militant group that he rose to become its second-in-command, encouraging members to collaborate with other potential suspects and paying for their transportation to meetings. He prodded the alleged mastermind of the kidnapping plot to advance his plan, then baited the trap that led to the arrest.

This account is based on an analysis of court filings, transcripts, exhibits, audio recordings, and other documents, as well as interviews with more than two dozen people with direct knowledge of the case, including several who were present at meetings and training sessions where prosecutors say the plot was hatched. All but one of the 14 original defendants have pleaded not guilty, and they vigorously deny that they were involved in a conspiracy to kidnap anyone. . .

Continue reading. There’s much more.

Written by Leisureguy

21 July 2021 at 12:58 pm

“My Deep, Burning Class Rage”

leave a comment »

Charlotte Cowles recounts in The Cut how a woman, who understandably wishes to remain anonymous, encounters financial inequity among those she knows and how it affects her. It begins:

Get That Money is an exploration of the many ways we think about our finances — what we earn, what we have, and what we want. In Living With Money, we talk to people about the stories behind their bank balances. Here’s how a 40-year-old woman in New York copes with “class rage” — the feeling that all her friends and colleagues are wealthier, and she’ll never be able to catch up.

I define class rage pretty specifically. It’s how I feel when I think that someone is in a similar financial situation to me, and then I discover that they actually have this extra source of money. When I was younger, it was like, “Oh wait, you come from a rich family.” But now it’s like, there’s a secret trust fund. Or a wealthy spouse. At my core, I believe that if you have money, your life is easier. If a person grew up rich, or with relative financial security, then I just can’t relate to them at all.

I work in book publishing in New York, which definitely compounds this problem. The publishing world is full of wealthy people — like a lot of creative industries, it has some glamour but it doesn’t pay well. So if you want to live comfortably, it helps if you have another income source. And these aren’t the types of wealthy people who flaunt their money. They tend to be more embarrassed about it. So they downplay it, like, “Oh, I’m just a poor book editor. I just do this job because I love literature.” And I’m like, no! You do this job because you can! That’s what really gets to me.

I don’t feel this way toward rich people in general, like celebrities or bankers on Wall Street. It’s not about rich people who make a lot of money at their jobs. Instead, I feel it toward people who have always had money — who’ve had this sense of backup that allows them to experiment in life and do what they want. I’m so jealous of that built-in freedom.

I know that these are unfair assumptions, and I might sound like a terrible person. I have plenty of rich colleagues who still work hard and are nice, good people. I hate that I feel this way. And I’m sure that lots of people might feel the same way about me — money and resources are all relative. But I have quite a bit of debt and my whole life feels so tenuous sometimes. I’m 40 and I’m single and I spend almost all of my money on rent and I’m constantly stressed about finances. I blame a lot of my problems on money, even though I know that’s irrational — they’re not really money problems. I just can’t shake the fact that if I had more financial security, my life would be much better. I don’t get jealous about material things — it’s lifestyle stuff, like having the freedom to go out for dinner without having to go consign my clothes to pay for it, which I have definitely done.

I was always jealous of people with money. When I was growing up, my dad was a high school teacher and my mom mostly did temp work, sort of picking up jobs where she could. We weren’t dirt poor, but it was very hand-to-mouth. Money was always an issue. At one point when I was a kid, my dad got cancer and the medical bills put my parents into a lot of debt. They tried not to make a big deal out of it, but there was always just this level of concern. There was no cushion. We had one car and it was always breaking down. I always knew that if I wanted anything, I’d have to work very hard for it, probably harder than most people I knew. Asking my parents for money was and is definitely never an option.

When I went to college, that was the first time I noticed a real divide between people who had money and people who didn’t, because some of us needed jobs. It was also the first time I became aware of how it impacted how you could perform. Like, I had to work three nights a week, so I literally didn’t have as much time to spend on my assignments as I wanted to. When I moved to New York in my mid-20s for grad school, I saved up for a year beforehand, working seven days a week, often double shifts. I got a full scholarship, but I still had to pay rent and support myself. And I’ve just been in survival mode ever since. When I finished my MFA, I was earning $25,000 a year and my rent was $1,200 a month. You do the math.

In grad school, I saw a whole new level of privilege. I was working three jobs and my friends and I would talk about struggling with money and then I’d realize that their parents were paying their rent. Or they could charge things to their parents’ credit cards in an “emergency.” Or that some of them had never had a job before in their lives. I became aware of the sheer amount of money that had gone into some of these people. Like, between their private schooling and Ivy League college and grad school, that’s more money than I’ll ever make in my lifetime. To be this walking investment, with this price tag on your life — I can’t understand what that would feel like. I’m sure there’s some pressure, and that must suck. But at the same time, the road has been smoothed for you.

One of my close friends from my MFA program, we had pretty similar career struggles and worked in very similar jobs, and it seemed like we were on a similar path in life. And I did occasionally notice that she’d say something like, “Oh, my family has this little ramshackle cabin in the woods somewhere, it’s covered in cobwebs,” when she really should have said, “I have a ski house in Colorado.” But I didn’t really know the extent of it until she had a baby. And that’s when a line was drawn. Suddenly, she was looking at real estate, buying an apartment, hiring a full-time nanny. And I’m not proud of this, but it changed how I felt about our entire relationship. I felt deceived. I know that people shouldn’t have to declare how much money they have in their family as a prerequisite for friendship. But it was more that what had felt to me like a shared struggle wasn’t real for her. When we had talked about our worries, about our careers and our futures, all those conversations suddenly felt tainted. It’s possible that she was doing it just to fit in and be friendly. But I felt like I’d been fooled.

We drifted apart after that, which is what usually happens when I find out about somebody’s money. I’ve never gotten in a fight over it. I just sort of stew, and then there’s this psychological distance that emerges.

I can’t do a lot of things because of money. Everyone says that — “I’m too poor, I can’t go out.” And that enrages me because I really mean it. It’s isolating, because I can’t talk about it. I can’t say, “I have $7 in my checking account,” because it scares people. And no one wants to be around someone who complains about money. I definitely have had to cut out a lot of acquaintances and networking opportunities because I cannot afford to just meet for a drink. I’m sure that people think that I’m depressed or I’ve just drifted off or something, but it’s really just the money.

I completely understand why people downplay their wealth. I would probably do the same thing if I were around someone with a lot less money than me. But what annoys me is the hypocrisy of it, acting like you haven’t had a leg up. I would just prefer people to be honest. Just accept that you’re privileged. Accept that you’re lucky. Accept that certain things are easier for you because of money. But people never do. Sometimes I wonder if they’re even aware.

What really haunts me is when I feel like I’ve been . . .

Continue reading.

Written by Leisureguy

21 July 2021 at 12:48 pm

Our Workplaces Think We’re Computers. We’re Not.

leave a comment »

Related somewhat is a quotation I just encountered:

“The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.”   — Edsger W. Dijkstra

In the NY Times there's an interesting podcast, with this introduction:

For decades, our society’s dominant metaphor for the mind has been a computer. A machine that operates the exact same way whether it’s in a dark room or next to a sunny window, whether it’s been working for 30 seconds or three hours, whether it’s near other computers or completely alone.

But that’s wrong. Annie Murphy Paul’s The Extended Mind argues, convincingly, that the human mind is contextual. It works differently in different environments, with different tools, amid different bodily states, among other minds.

Here’s the problem: Our schools, our workplaces, our society are built atop that bad metaphor. Activities and habits that we’ve been taught to associate with creativity and efficiency often stunt our thinking, and so much that we’ve been taught to dismiss — activities that look like leisure, play or rest — are crucial to thinking (and living!) well.

Paul’s book, read correctly, is a radical critique of not just how we think about thinking, but how we’ve constructed much of our society. In this conversation, we discuss how the body can pick up on patterns before the conscious mind knows what it’s seen, why forcing kids (and adults) to “sit still” makes it harder for them to think clearly, the connection between physical movement and creativity, why efficiency is often the enemy of productivity, the restorative power of exposure to the natural world, the dystopian implications of massive cognitive inequality, why open-plan offices were a terrible idea and much more.

You can listen to our whole conversation by following "The Ezra Klein Show" on Apple, Spotify, Google, or wherever you get your podcasts.

(A full transcript of the episode is available here.)

Written by Leisureguy

21 July 2021 at 12:38 pm

Some bonsai I like

leave a comment »

If I had a yard or even a balcony that got some sun, I’d definitely be doing some bonsai now. There are, of course, innumerable YouTube videos on doing bonsai (and I’ve even posted some — this one, for example, is particularly relevant). I personally like books, and many are available specifically for beginners. Given the pace of such a project, I think it would be meditative and relaxing. Consider giving it a go.

Written by Leisureguy

21 July 2021 at 11:39 am

Posted in Art, Books, Daily life, Memes

How a solitary prisoner decoded Chinese for the QWERTY keyboard

leave a comment »

Would it have been easier and faster if he had used the Dvorak keyboard? 🙂 In Psyche, Thomas S Mullaney, professor of Chinese history at Stanford University, gives a fascinating account that shows the amazing way the brain works. He writes:

In China, suburban garages do not factor in the lore of computing history the way they do in the United States. But prisons do – at least, one particular prison in which a brilliant Chinese engineer was sentenced to solitary confinement for thought crimes against Mao Zedong during China’s Cultural Revolution. His name was Zhi Bingyi and, during long and anxiety-ridden days, months and years of solitude, he made a breakthrough that helped launch China’s personal computing revolution: he helped make it possible to type Chinese with a run-of-the-mill Western-style QWERTY keyboard.

Zhi was born in 1911 on China’s eastern coast, in Jiangsu province. His generation shouldered an almost unbearable burden: a mandate to dedicate their lives to the modernisation of their country. Zhi completed his undergraduate education in 1935, receiving a degree in electrical engineering from Zhejiang University. He moved to Germany in 1936, receiving his doctorate in 1944 from the University of Leipzig. He spent nearly 11 years in Germany, becoming fluent in the language, and marrying a German woman.

Upon the couple’s return to China in 1946, Zhi held a variety of distinguished posts, yet his long-time experience overseas made him suspect in the eyes of the still-nascent Chinese Communist Party regime following the 1949 revolution. When the Cultural Revolution erupted in 1966, Zhi became a marked man. Named a ‘reactionary academic authority’ (fandong xueshu quanwei) – one of the era’s many monikers for those condemned as enemies of the revolution – he was confined in one of the period’s infamous ‘ox pens’. The cell measured a claustrophobic six square metres. Outside its four walls, China descended into the political turmoil of the Cultural Revolution. In his hometown of Shanghai, fanatics and paramilitary groups pledged undying loyalty to the person of Chairman Mao. In the early months of the crisis, bands of radical youth set out upon ‘seek and destroy’ raids intent on purging the country of all pre-revolutionary vestiges of ‘Old China’.

Unsure if he would ever see his wife again, with no other voices besides his guards’, and with no work to occupy his mind, Zhi filled the long hours staring at the wall of his cell – specifically, at an eight-character poster that made a chilling assurance to him and anyone unfortunate enough to set their eyes upon it:

坦白从宽,抗拒从严
(tanbai congkuan, kangju congyan)
‘Leniency For Those Who Confess, Severity For Those Who Resist’

The message was clear: We have the authority to destroy your life (if you resist), or to make your imprisonment somewhat more tolerable (if you confess).

Zhi read this terrifying couplet over and over again, for days, weeks and months on end. And then something began to happen – something that reminds us of the inherent strangeness of language.

No matter one’s mother tongue, the process of becoming fluent in a language is a process of forgetting that language is a form of arbitrary code. There is nothing inherently ‘candid, frank, or open’ about the character 坦 (tan), nor ‘white, blank, or clear’ about the character 白 (bai). As with any young child, Zhi in his earliest years of life would have looked upon these symbols as random assemblages of pen strokes on the page, born of a complex web of conventions whose origins we will never be able to reconstruct in full. But steadily, over the course of innumerable repetitions, something happens to us: the sounds and sights of language begin to approach, and then to achieve, a kind of natural, god-givenness. The character 白 (bai) no longer ‘stands in’ for whiteness by dint of painstaking study and memorisation, but merges with it effortlessly. This merger is the fruition of every child’s struggle to speak, read and write: the struggle to make inroads into their family and community’s semiotic universe, transforming it from an indecipherable code to a medium of expression.

While most of us experience this transformation as a one-way process, it can be reversed. A sound or symbol made second nature can be denatured – defamiliarised and queered – so that one is somehow able to tap into the original meaninglessness of one’s mother tongue, even as one continues to hear, see and speak it fluently.

This is what happened to Zhi. As he whiled away his time in prison, mulling over these eight characters (seven, if we account for one character that is repeated), this act of repetition restored to them their inherent arbitrariness. By the 100th reading – perhaps the 1,000th, we cannot know – Zhi began to explode these characters in his mind, into a variety of elements and constellations. The first character (坦), for example, could be readily divided into two distinct parts: 土 and 旦, and then further still into + and − (making up the component 土) and 日 and 一 (making up 旦). The second character 白 could be subdivided as well, perhaps into 日 with a small stroke on top. Then the rest. Even in this short, eight-character passage, the possibilities of decomposition were abundant.

Zhi managed to get hold of a pen – the one he was given to write political self-confessions – but paper was impossible to find. Instead, he used the lid of a teacup, which his captors provided him to drink hot water. When turned over, Zhi discovered, the lid was large enough to fit a few dozen Latin letters. Then he could erase them and start again, like a student in ancient Greece with an infinitely reusable wax tablet. And so he mulled over each character one by one, decomposing them into elements, and then converting those elements into letters of the Latin alphabet.

He was creating a ‘spelling’ for Chinese – although not in the conventional sense of the word.

In Zhi’s system, the letters of the Latin alphabet would not be used to spell out the sound of Chinese words. Nor would they be used to ‘compose’ them per se. Instead, he envisioned using Latin letters to retrieve one’s desired Chinese character from memory. For him, Latin letters would be the instructions or criteria one fed to a machine, telling the device to, in effect, ‘retrieve the Chinese characters that match these requirements’.

Take the example of fu (幅), a Chinese character meaning ‘width’. Ultimately, Zhi settled upon an unexpected ‘spelling’ for this character, which bore no resemblance to its sound: J-I-T-K. The first letter in this sequence (J) corresponded not to the phonetic value of the character (which should begin with ‘F’) but to a structural element located on the left-most side of the character: the component 巾 that, when seen in isolation, is pronounced jin. The code symbol ‘J’ was derived from the first letter of the pronunciation of the component.

The rest of the spelling – I, T and K – followed the same logic. ‘I’ was ‘equal to’ the component/character yi (一); ‘K’ referred to the component kou (口); and ‘T’ to tian (田). Other letters in Zhi’s code performed the same role:

D = the structure 刀 (with ‘D’ being derived from dao, the pronunciation of this character when seen in isolation)
L = 力 (same logic as above, based on the Pinyin pronunciation li)
R = 人 (same logic as above, based on the Pinyin pronunciation ren)
X = 夕 (same logic as above, based on the Pinyin pronunciation xi)

Zhi eventually gave his code a name: ‘See the Character, Know the Code’ (Jianzi shima), ‘On-Site Coding’ (OSCO), or simply ‘Zhi Code’ (Zhima).

In September 1969, Zhi was released from prison, rejoining his wife and family at their apartment on South Urumqi Road, in Shanghai – albeit in a form of prolonged house arrest.

Other changes were afoot, as well. In 1971, the United Nations recognised Beijing as the official representative of China, granting the country a seat on the Security Council. In 1972, Richard Nixon shocked the world with the first US presidential delegation to the People’s Republic of China (PRC). In 1976, Mao died of cancer, setting in motion a profound sweep of political, economic and social transformations. Then, in 1979, the gates opened even wider, with the normalisation of relations with the US.

One of the many changes that Sino-US normalisation brought was an influx – first a drip, then a flood – of US-built computers . . .

Continue reading. There’s more.

Written by Leisureguy

21 July 2021 at 10:56 am
