Later On

A blog written for those whose interests more or less match mine.

Archive for the ‘Psychology’ Category

Interview: Julia Galef


Noah Smith interviews Julia Galef:

If the Rationalist movement can be said to have a leader, I would argue that it is Julia Galef. She hosts the podcast Rationally Speaking, and is the founder of the Center for Applied Rationality, which tries to train people to eliminate cognitive bias. And now she has written a book! It’s called The Scout Mindset: Why Some People See Things Clearly and Others Don’t, and you can buy it on Amazon here.

In the interview that follows, I talk to Julia about different concepts of rationality, about the purpose of the “scout mindset”, about whether rationality will win in the marketplace of ideas, and more!

N.S.: So I hear you have a new book! It’s called “The Scout Mindset: Why Some People See Things Clearly and Others Don’t”. I’m going to read it, but why don’t you give me a preview of what it’s about!

J.G.: I do! It’s about, unsurprisingly, the scout mindset — which is my term for the motivation to see things as they are, not as you wish they were. In other words, trying to be intellectually honest, objective, and curious about what’s actually true.

The central metaphor in the book is that we are often in soldier mindset, my term for the motivation to defend your own beliefs against arguments and evidence that might threaten them. Scout mindset is an alternative way of thinking. A scout’s goal is not to attack or defend, but to go out and form an accurate map of what’s really there.

So in the book, I discuss why soldier mindset is so often our default and make the case for why we’d be better off shifting towards the scout instead. And I share some tips for how to do that, which I illustrate with lots of real examples of people demonstrating scout mindset, in science, politics, sports, entrepreneurship, activism, and lots of everyday contexts as well.

N.S.: So are we always better off being scouts instead of soldiers? Just to indulge in a bit of devil’s advocacy, don’t many entrepreneurs succeed by being overconfident about their idea’s chance of success? And doesn’t irrational optimism often sustain us through trying times? Isn’t excessive realism considered a hallmark of clinical depression?

I guess this is a specific way of asking about the more general question of analytic rationality versus instrumental rationality. Are there times when, if we were a planner trying to maximize our own utility, we would choose to endow ourselves with logical fallacies and incorrect beliefs? 

J.G.: Yeah, my claim isn’t that soldier mindset has no benefits. My claim is that:

1. We overestimate those benefits, and

2. There are usually ways to get those benefits without resorting to soldier mindset

I’ll briefly sum up my case for those claims. To the first point, one reason we overestimate soldier mindset’s benefits is that they’re so immediate. When you convince yourself “I didn’t screw up” or “My company is definitely going to succeed,” you feel good right away. The harms don’t come until later, in the form of making you less likely to notice yourself making a similar mistake in the future, or a flaw in your business plan. And just in general, humans tend to over-weight immediate consequences and under-weight delayed consequences.
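[The asymmetry Galef describes here is what behavioral economists call present bias. Below is a minimal sketch of the standard quasi-hyperbolic (“beta-delta”) discounting model; all the numbers are invented for illustration. – LG]

    # Quasi-hyperbolic ("beta-delta") discounting, a standard model of the
    # present bias described above. All numbers are illustrative.

    def present_value(reward: float, delay: int,
                      beta: float = 0.7, delta: float = 0.95) -> float:
        """Felt value today of a reward arriving `delay` months from now.

        beta < 1 penalizes anything delayed at all, which is what makes
        the immediate payoff of soldier mindset loom so large.
        """
        if delay == 0:
            return reward
        return beta * (delta ** delay) * reward

    # Feeling good right now about "I didn't screw up":
    print(present_value(reward=10, delay=0))    # 10.0
    # The larger cost of repeating the mistake a year later:
    print(present_value(reward=-30, delay=12))  # about -11.3, felt as smaller than +10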

(As an aside, it’s worth noting that the research claiming things like “People who self-deceive are happier” is really not very good. I’m willing to believe that self-deception can make you happy, at least temporarily, but I wouldn’t believe it on the strength of the academic research.)

Then to the second point… even though people often claim that you “need” soldier mindset to be happy, or confident, or motivated, there are lots of counterexamples disproving that.

For example, you brought up the claim that entrepreneurs need to be overconfident in their odds of success, in order to motivate themselves. That is a common claim, but in fact, many successful entrepreneurs originally gave themselves rather low odds of success. Jeff Bezos figured he had a 30% shot at success with Amazon, and Elon Musk gave his companies (Tesla and SpaceX) each a 10% chance of success.

Yet obviously both Bezos and Musk are highly motivated despite recognizing the tough odds facing them. That’s because they were motivated not by the promise of a guaranteed win, but by the high expected value of the risk they were taking: The upside of success was huge, and the downside of failure was tolerable. (“If something is important enough, you should try,” Musk has said. “Even if the probable outcome is failure.”)

That kind of thinking about risk is a better source of motivation, I would argue — because it doesn’t require you to believe false things.
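[The expected-value arithmetic is easy to make concrete. In this minimal sketch the 10% probability is the figure quoted for Musk; the payoff numbers are invented for illustration. – LG]

    # Expected value of a long-shot venture. The 10% probability is quoted
    # above for Musk; the payoff figures are invented.

    def expected_value(p_success: float, upside: float, downside: float) -> float:
        """E[V] = p * upside + (1 - p) * downside."""
        return p_success * upside + (1 - p_success) * downside

    # A 10% chance of a huge win against a 90% chance of a tolerable loss:
    print(expected_value(p_success=0.10, upside=1_000.0, downside=-10.0))  # 91.0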

N.S.: Got it! The idea of scout mindset reminds me of my favorite Richard Feynman term: “a satisfactory philosophy of ignorance”. Was Feynman’s thinking influential to you at all?

Anyway, I have another question, about the relationship between the soldiers and the scouts. In real armies, scouts and soldiers are on the same side, doing different jobs but fighting a common enemy. What is the common enemy in the case of people who take the two mindsets? Or if not an enemy, what is the common purpose that unites them, or ought to unite them? 

J.G.: . . .

Continue reading.

Written by LeisureGuy

18 April 2021 at 10:53 am

Effective Altruism Is Not Effective


Thomas R. Wells writes in The Philosopher’s Beard:

Effective altruism is based on a very simple idea: we should do the most good we can. Obeying the usual rules about not stealing, cheating, hurting, and killing is not enough, or at least not enough for those of us who have the good fortune to live in material comfort, who can feed, house, and clothe ourselves and our families and still have money or time to spare. Living a minimally acceptable ethical life involves using a substantial part of our spare resources to make the world a better place. Living a fully ethical life involves doing the most good we can. (Peter Singer)

It is almost universally agreed that the persistence of extreme poverty in many parts of the world is a bad thing. It is less well-agreed, even among philosophers, what should be done about it and by whom. An influential movement founded by the philosopher Peter Singer argues that we should each try to do the best we can by donating our surplus income to charities that help those in greatest need. This ‘effective altruism’ movement has two components: i) encouraging individuals in the rich world to donate more; and ii) encouraging us to donate more rationally, to the organisations most efficient at translating those donations into gains in human well-being.

Unfortunately both components of effective altruism focus on what makes giving good rather than on achieving valuable goals. Effective altruism therefore does not actually aim at the elimination of global poverty as is often supposed. Indeed, its distinctive commitment to the logic of individualist consumerism makes it constitutionally incapable of achieving such a large scale project. Effective altruism is designed to fail.

I. The No-Sacrifice Principle of Giving

In his best-selling defense of effective altruism The Life You Can Save: Acting Now to End World Poverty (2009, p.15) Singer provides this outline of his argument.

First premise: Suffering and death from lack of food, shelter, and medical care are bad.

Second premise: If it is in your power to prevent something bad from happening, without sacrificing anything nearly as important, it is wrong not to do so.

Third premise: By donating to aid agencies, you can prevent suffering and death from lack of food, shelter, and medical care, without sacrificing anything nearly as important.

Conclusion: Therefore, if you do not donate to aid agencies, you are doing something wrong.

Singer famously supports his second premise by reference to his ‘shallow pond’ thought experiment, in which nearly everyone agrees that we would have an obligation to rescue a drowning child even at some personal inconvenience. He argues that since we already seem to accept that principle, the moral challenge is to integrate it better into how we live by donating some of our ‘surplus’ income to charities. Effective altruism is thereby identified as a way of living better in accordance with reason and right, the correct answer to Socrates’ challenge ‘How ought we to live?’

What I want to bring out here is that Singer’s main concern is the question of how good to be in terms of how much we should be giving, i.e. the internal moral economy of the subject. The ‘bads’ of suffering and death identified in premise 1 are peripheral to this analysis. They may motivate our interest in altruism but their remediation is not the measure of our altruistic success.

On the face of it, premise 2 is a very demanding principle because it links our subjective moral economy to the prevention of significant objective harms. However, the way that Singer uses the principle severs that relation. Singer is concerned to help us calculate our personal budget for good works: how much we each can spare from our other interests and commitments. As Singer makes clear, altruism on this conception should not feel like a sacrifice because it is merely the harmonious integration of our moral with our other preferences. This generates a rather generic analysis of how much it is reasonable to expect people of different levels of affluence to contribute to good causes without having to make any real sacrifices, i.e. calculations of how much money we could easily do without. (Singer suggests a progressive rate of voluntary self-taxation starting at 5% of income for those earning more than $100,000.)
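[Singer’s suggestion lends itself to a small calculator. In the sketch below, only the 5% rate above $100,000 comes from the text; the higher brackets are invented placeholders, and treating the rates as marginal is my assumption. – LG]

    # A sketch of Singer-style voluntary self-taxation. Only the 5% rate
    # above $100,000 is from the text; the higher brackets are invented,
    # and treating the rates as marginal is an assumption.

    BRACKETS = [           # (income threshold, rate on income above it)
        (100_000, 0.05),   # from the text
        (500_000, 0.10),   # assumed
        (1_000_000, 0.15), # assumed
    ]

    def suggested_donation(income: float) -> float:
        """Apply each rate to the slice of income above its threshold."""
        total = 0.0
        for i, (threshold, rate) in enumerate(BRACKETS):
            if income <= threshold:
                break
            upper = BRACKETS[i + 1][0] if i + 1 < len(BRACKETS) else float("inf")
            total += rate * (min(income, upper) - threshold)
        return total

    print(suggested_donation(150_000))  # 2500.0, i.e. 5% of the $50,000 above $100,000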

Such calculations are generic because they are fundamentally concerned with how to be an altruist, not with how to fix the world’s problems, and so they are unrelated to the significance of the specific problems our donations are supposed to address, or to what would be needed to successfully solve them. For consider: even if global poverty were eliminated entirely, there would still always be causes you could contribute to that would be more valuable than pursuing your own interests (such as generating benefits to future generations). This is the paradoxical overdemandingness of utilitarianism identified by various philosophers (Bernard Williams, Susan Wolf, etc.): that a world of utilitarians would be a world incapable of happiness. What I think Singer’s ‘no sacrifice’ principle actually offers is a (not especially convincing) way to reconcile our moral duty to do good with our right to live a life of our own. We are effectively asked to calculate our own voluntary moral tax rate that delineates when we have done enough for others and can turn away, morally free to pursue our private projects and commitments. How much good this amount of giving will achieve in the world is irrelevant to what that tax rate should be.

II. Efficiency is Not the Same Thing as Effectiveness

Effective altruists … know that saving a life is better than making a wish come true and that saving three lives is better than saving one. So they don’t give to whatever cause tugs strongest at their heartstrings. They give to the cause that will do the most good, given the abilities, time, and money they have. (Peter Singer)

The problem with the first component of effective altruism is that it focuses on the internal moral economy of the giver rather than on the real-world problems our giving is supposed to address. The second component of effective altruism might not seem to have that problem because it is explicitly concerned with maximising the amount of good that each unit of resources achieves. (This is also the component that has received more emphasis in the last 10 years as the movement gained traction among a younger generation of philosophers such as Toby Ord and William MacAskill.) However, this concern is better understood as efficiency than as effectiveness (the general idea of getting things done). This might seem an innocuous distinction since efficiency is about how we ought to get things done, i.e. a way of being effective. However, there are significant consequences for practical reasoning in the kind of cases effective altruism is concerned with.

If one takes the efficiency view promoted by the effective altruism movement, then one assumes a fixed set of resources, and the choice of which goal to aim for follows from a calculation of how to maximise the expected value those resources can generate; i.e. the means justifies the end. For example, in the context of global poverty, you would use evidence and careful reasoning to decide in which cause or organisation to invest your chosen amount, on the basis of which generates the most QALYs per dollar. This should ensure that your donation achieves the most good, which is to say that you have done the best possible job of giving. However, despite doing so well at the task effective altruism has set you, if you step back you will notice that very little has actually been achieved. The total amount of good we can achieve with our donations is limited to the partial alleviation of some of the symptoms of extreme poverty, symptoms that will recur so long as poverty persists. But effective altruism supplies no plan for the elimination of poverty itself, and there is no way for a feasible plan for that goal to be developed and implemented by this method of reasoning at the margin.
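[Wells’s “reasoning at the margin” can be made concrete with a few lines of code. The charity names and figures below are invented for illustration. – LG]

    # Reasoning at the margin, as described above: a fixed budget goes to
    # whichever cause yields the most QALYs per dollar. Names and numbers
    # are invented.

    qalys_per_dollar = {
        "bednets": 0.0050,
        "deworming": 0.0032,
        "cash transfers": 0.0010,
    }

    budget = 1_000.0
    best = max(qalys_per_dollar, key=qalys_per_dollar.get)
    print(best, qalys_per_dollar[best] * budget)  # bednets 5.0 (expected QALYs)

    # Note what the calculation never asks: what it would take to end
    # poverty itself -- Wells's efficiency-vs-effectiveness point.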

The underlying problem is that . . .

Continue reading. There’s much more, and in my view he annihilates Singer’s argument and position.

Written by LeisureGuy

17 April 2021 at 9:43 am

Brain control of devices


This video bears an interesting relation to the previous post.

Written by LeisureGuy

16 April 2021 at 1:17 pm

Ingenious and stimulating science-fiction story


The story, “Lena,” is by qntm. It begins:

This article is about the standard test brain image. For the original human, see Miguel Acevedo.

MMAcevedo (Mnemonic Map/Acevedo), also known as Miguel, is the earliest executable image of a human brain. It is a snapshot of the living brain of neurology graduate Miguel Álvarez Acevedo (2010–2073), taken by researchers at the Uplift Laboratory at the University of New Mexico on August 1, 2031. Though it was not the first successful snapshot taken of the living state of a human brain, it was the first to be captured with sufficient fidelity that it could be run in simulation on computer hardware without succumbing to cascading errors and rapidly crashing. The original MMAcevedo file was 974.3PiB in size and was encoded in the then-cutting-edge, high-resolution MYBB format. More modern brain compression techniques, many of them developed with direct reference to the MMAcevedo image, have compressed the image to 6.75TiB losslessly. In modern brain emulation circles, streamlined, lossily-compressed versions of MMAcevedo run to less than a tebibyte. These versions typically omit large amounts of state data which are more easily supplied by the virtualisation environment, and most if not all of Acevedo’s memories.

The successful creation of MMAcevedo was hailed as a breakthrough achievement in neuroscience, with the Uplift researchers receiving numerous accolades and Acevedo himself briefly becoming an acclaimed celebrity. Acevedo and MMAcevedo were jointly recognised as Time’s “Persons of the Year” at the end of 2031. The breakthrough was also met with severe opposition from human rights groups.

Between 2031 and 2049, MMAcevedo was duplicated more than 80 times, so that it could be distributed to other research organisations. Each duplicate was made with the express permission of Acevedo himself or, from 2043 onwards, the permission of a legal organisation he founded to manage the rights to his image. Usage of MMAcevedo diminished in the mid-2040s as more standard brain images were produced, these from other subjects who were more lenient with their distribution rights and/or who had been scanned involuntarily. In 2049 it became known that MMAcevedo was being widely shared and experimented upon without Acevedo’s permission. Acevedo’s attempts to curtail this proliferation had the opposite of the intended effect. A series of landmark U.S. court decisions found that Acevedo did not have the right to control how his brain image was used, with the result that MMAcevedo is now by far the most widely distributed, frequently copied, and closely analysed human brain image.

Acevedo died from coronary heart failure in 2073 at the age of 62. . .

Read the whole thing at the link.

Written by LeisureGuy

16 April 2021 at 12:54 pm

Justin Bieber today is stunning


Just read Zach Baron’s interview of him in GQ:

Justin Bieber and I have just met when I ask him something and he talks and talks—for 10 illuminating and uninterrupted minutes he talks. He talks about God and faith and castles in Ireland, about shame and drugs and marriage. He talks about what it is to feel empty inside, and what it is to feel full. At one point he says, “I’m going to wrap it up here,” but he doesn’t, he just keeps going, and that is what it is like to talk to Justin Bieber now. Like you’re in the confessional booth with him. Like whatever rules about “privacy” or the thick opaque wall of massive celebrity that people like Bieber are supposed to follow don’t apply.

He has lived a well-documented life—maybe among the more well-documented lives in the history of this decaying planet. But to my knowledge, there is not one example of him speaking this way—in a moving but unprompted, unselfconscious torrent of words—in public prior to this moment. I will admit to being disoriented. If I’m being honest, I had been expecting someone else entirely—someone more monosyllabic; someone more distracted, more unhappy; someone more like the guy I’m pretty sure Justin Bieber was not all that long ago—and now I am so thrown that the best I can do is stammer out some tortured version of… How did you become this person? By which I mean: seemingly guileless. Bursting with the desire to connect, to tell his own story, in case it might be of use to anyone else.

It’s a question that’s not even a question, really. But what Bieber gently says in response is: “That’s okay.”

He knows approximately what I’m asking—how he got from wherever he was to here, to becoming the man in front of me, clear-eyed on a computer screen from an undisclosed location in Los Angeles. His hair, under a Vetements hat, is long in the back; he is in no particular hurry. He is married to a woman—Hailey Baldwin Bieber—who cares for him like no one has ever cared for him, he says. He is happy. He is currently renovating the house in which he will live happily with his wife. He’s spent the past several months piecing together a new record, Justice, which is dense with love songs and ’80s-style anthems—interspersed with some well-intentioned, if not totally well-advised, interludes featuring the voice of Martin Luther King Jr.—that are bluntly honest about his bad past and equally optimistic about his future. (“Everybody saw me sick, and it felt like no one gave a shit,” he sings on the cathartic last song on the record, “Lonely.”) He’s still so overflowing with music that he puts out Freedom, a meditative postscript of an EP about faith, just a few weeks after Justice. He is, if anything, the empathetic professional in this interaction too as he goes about trying to help me understand how he’s arrived at where he’s arrived. . .

Continue reading. And do read the whole thing. He explains well both where he arrived and how he got there.

Written by LeisureGuy

14 April 2021 at 3:24 pm

Police shooting of unarmed civilians will continue, for reasons embedded in our approach and systems


Zack Cheney-Rice writes in New York:

On Sunday afternoon, two weeks into the criminal trial of Derek Chauvin, the ex-Minneapolis police officer who killed George Floyd last spring, police in nearby Brooklyn Center, Minnesota, shot and killed another Black man, 20-year-old Daunte Wright. The events’ concurrence highlighted a contradiction: For all the efforts by prosecutors and fellow officers to paint Chauvin as a rogue, officers in Brooklyn Center were doing comparatively standard police work when they encountered Wright, and the outcome was the same. Needless death is not the product of policing gone wild. It’s an inevitable outcome of policing as it was intended.

According to reports, officers stopped Wright for an alleged traffic violation and tried to detain him after discovering a warrant for his arrest. Police say they ordered Wright out of his vehicle, but the 20-year-old climbed back in when they moved to cuff him and officer Kimberly Potter, a 26-year veteran of the department, shot him. Wright’s car rolled several blocks before crashing. Police and medical workers pronounced him dead soon after. Protests erupted, continued into Monday morning, and resumed that night, with some residents hurling “rocks, bags of garbage, and water bottles” at riot-gear-clad police, who responded by firing rubber bullets and chemical agents at the demonstrators. In subsequent statements to the press, Brooklyn Center police chief Tim Gannon said Officer Potter killed Wright by mistake — she meant to discharge her Taser rather than her firearm, he said, citing publicly released body-camera footage where she seems surprised to have fired her gun. “I’m the leader of this department,” Gannon said on Monday, articulating in broad terms how the situation might be resolved amid heightened tensions. “They expect me to lead. Create a safe city. That’s what I’m trying to do.”

Police seek to create safety — a nebulous state of enforced order — by deciding who is dangerous and responding to them with violence. Even as public opinion has soured on much of their conduct in the past year, this remains, to the vast majority of Americans, a key part of what law enforcement agents are supposed to do. This is evident in broad support for reforms like establishing uniform standards for how and when police can use force — paired with more or less split opinion on whether police should keep receiving military-grade weapons and equipment — compared with less popular changes aimed at eliminating the police’s right to use violence altogether, like defunding as a precursor to abolition. Even if one agrees that a United States without armed police is a dangerously utopian proposal, it is hard to dispute that anything less will preserve this discretion to apply violence based on subjective judgment — which, in turn, will continue to spark the type of heated protests we’ve seen grow larger over the past year. As long as they exist, the police will be one split-second decision away from assaulting or killing someone because they view them as a threat. People like Daunte Wright are and will continue to be casualties. This will keep happening.

For this reason, the rough week that Derek Chauvin’s criminal defense just had should be interpreted with restraint. By all accounts, things aren’t going well for the former Minneapolis police officer, who is being tried less than ten miles away from where Wright was killed. Two weeks into his trial, where he faces charges of second-degree manslaughter and second- and third-degree murder, Chauvin has faced rare rebuke in the courtroom from his own former boss, Minneapolis police chief Medaria Arradondo. Arradondo testified for Chauvin’s prosecution last week that the former officer “absolutely” violated department policy when he knelt on George Floyd’s neck in May 2020. “Once Mr. Floyd had stopped resisting, and certainly once he was in distress and trying to verbalize that, that should have stopped,” Arradondo said. This was significant because police rarely testify against their own. The chief’s decision to do so was transparently geared toward creating distance between Chauvin’s conduct and that of officers who do things the way they’re supposed to — that is, according to normal rules and standards. Part of what generated last summer’s protest furor was the torturous cruelty that went into killing Floyd. To watch him lose his life over the course of eight and a half agonizing minutes, handcuffed on his stomach with an officer kneeling on him, marked for many viewers a striking departure from the typical video-recorded police killings, which are characterized by swift judgment and hair triggers.

But the distinction is less meaningful than it appears. Even if Officer Potter killed Wright by accident, using her firearm rather than the intended Taser, as is being claimed, police regularly shoot people unnecessarily when they resist arrest, make furtive or sudden movements, or reach abruptly into their cars. By law, these are considered reasonable grounds for lethal force, and are almost never met with administrative rebuke, let alone legal consequences. The exact circumstances around Wright’s death are still being investigated. But even if they unfolded just as they seem — a man fled arrest and was shot — it was simultaneously tragic, unnecessary, and utterly normal. And if normalcy produces outcomes that are this comparable to the supposed aberrations, like Chauvin, then it’s hard to imagine any of the reformist proposals currently on offer making more than a dent in the recurrence of such incidents. Much fanfare . . .

Continue reading.

The column concludes:

. . . If the types of reform that have enough popular support to become law continue to be items like mandating the use of body cameras; expanded data collection; and creating uniform national standards for the use of force, a ban on chokeholds, and ending qualified immunity — which would allow police to be sued for violating people’s rights — Americans can both expect a reduction in rates of police violence against civilians and be confident that killings like Wright’s will continue to happen with unconscionable frequency. That’s because none of these proposals will change the function of the police. Their trade is violence. They can be compelled to be more discreet and judicious, but they cannot stop. This is an acceptable trade-off for many people, who believe their personal welfare depends on this brand of violent intervention. But to those for whom deaths like Wright’s are truly unacceptable, it’s worth thinking harder about what changes would actually prevent them. To do otherwise is to concede that deaths like his are the cost of doing business. Most people will make that concession. Nobody has to.

Written by LeisureGuy

14 April 2021 at 10:36 am

The Tech Elite’s Favorite Pop Intellectual: Julia Galef on bringing the rationalist movement to the mainstream.


Benjamin Wallace writes in New York:

In 2012, Julia Galef, the host of a podcast called Rationally Speaking, moved from New York to Berkeley to help found a nonprofit called the Center for Applied Rationality. It was the early days of the rationalist movement: a community formed on the internet whose adherents strove to strip their minds of cognitive biases and subject all spheres of life to the glare of scientific thought and probabilistic reasoning. Galef and her CFAR co-founders — mathematician Anna Salamon, research scientist Andrew Critch, and math and science educator Michael Smith — wanted to translate these principles to everyday life. They did this through multiday workshops, where participants could learn to make better decisions using techniques like “goal factoring” (breaking a goal into smaller pieces) and “paired debugging” (in which two people help identify each other’s blind spots and distortions).

Over the next several years, as rationalism became not only the de facto brand of self-help in Silicon Valley but also an intellectual movement followed by pundits and executives alike, CFAR’s profile grew; soon, the nonprofit was running workshops across the country and teaching classes at Facebook and the Thiel Fellowship. But for CFAR’s founders, it was the empirical confirmation of their work that mattered most. Early on, they began conducting a controlled study to determine whether the workshops were demonstrably helpful. They surveyed 40 participants, assessing their before-and-after answers to questions like “How together is your life?” and “How successful do you feel in your social life?” The study found that, one year after the workshop ended, participants showed decreased neuroticism and increased self-efficacy, but to Galef, the results weren’t sufficiently rigorous. “What was it about the workshop?” she says. “Was it the classes or hanging out with like-minded people that makes the difference?” Conducting more tests would have been too expensive. “My vision was we’d come up with hypotheses about techniques, keep the ones that work, and discard the ones that don’t. It turned out to be much harder than I’d realized.”

In 2016, Galef left CFAR, unsatisfied with what she had been able to accomplish there. Instead, she began working on her first book, which, after five years, will be published by Penguin on April 13. The Scout Mindset: Why Some People See Things Clearly and Others Don’t is a fitting debut for someone who has considered herself a “populizer” of the rationalist movement. “I take these ideas I think are great and try to explain them to a wider audience,” she says.

When we speak over Zoom, Galef is in Franklin, North Carolina, her face evenly lit by the ring lamp she travels with. Since she and her fiancé left their San Francisco studio this past July, they’ve been doing the digital-nomad thing. Right now, they are near Great Smoky Mountains National Park in a golf-course Airbnb. Galef holds her laptop camera up to the window, revealing a burbling creek outside. “It suits our personalities and lifestyle,” she says. “We both work remotely” — he’s a program officer focused on artificial intelligence at the effective-altruism organization Open Philanthropy — “we’re both introverts, we’re both minimalists, and we both like novelty.”

To the extent that the rationalist movement has been written about, its eccentricities have tended to get outsize attention: Some rationalists live in group houses with names like Event Horizon and Godric’s Hollow; polyamory and a preoccupation with the existential risk posed by AI are both overrepresented. In opposition to mainstream online culture, which believes that certain arguments should be off-limits, the rationalsphere wants to be able to talk about anything. Slate Star Codex — recently renamed Astral Codex Ten — the most prominent rationalist blog, has caused controversy by countenancing free-flowing discussion of topics such as race science and female harassment of men. And because of their devotion to hyperanalysis, some members of the community can present as arrogant and lacking in EQ.

Galef, however, is an amiable ambassador for the movement, adept at distilling its concepts in an accessible and plainspoken manner. The speech of rationalists is heavy on the vernacular, often derived from programming language: “updating your priors” (keeping an open mind), “steel-manning” (arguing with the strongest version of whatever point your opponent is making), “double-cruxing” (trying to get to the root of a disagreement). But . . .

Continue reading.

Written by LeisureGuy

13 April 2021 at 3:11 pm

Amazon shows how trickle-down inequality works


Sarah Jones writes in the “Intelligencer” section of New York:

Bill Bodani liked his old job. He cleaned slag out at the Sparrows Point steel mill in Maryland, cleared the flues and the broken brick out of the blast furnace. He loved it despite the asbestosis it gave him, writes Alec MacGillis in his new book, Fulfillment. “I enjoyed the people,” Bodani told MacGillis. “They made it enjoyable. The Black, the white. It was a family thing. I don’t care if you knew them for five minutes, they took you in. No matter how bad I got hurt, or how bad things got, there was always a bright side. You had those guys with you.”

Until he didn’t. The mill closed, and Bodani needed a new job. He found one with Amazon, working in a Baltimore-area fulfillment center. He started out at $12 an hour — much less than he’d made at the mill. He’d traded his old friends for a place that would, as MacGillis put it, fire workers “by algorithm.” And Bodani had a problem. He was older, and he needed to use the bathroom more often than did his younger co-workers. When he had used up his breaks, he resorted to an undignified option. He’d piss in a corner of the warehouse, using a forklift as a privacy shield.

MacGillis completed Bodani’s story before the Retail, Wholesale, and Department Store Union announced that it would try to unionize the first Amazon warehouse in the country in Bessemer, Alabama. Workers there reported their own versions of Bodani’s problem. The company regimented their days so strictly that they often didn’t have the time they needed to use the restroom. The union still lost, an election now contested before the National Labor Relations Board. Despite the outcome, the stories stick. Workers said they couldn’t stay six feet apart from each other in the middle of a pandemic, spoke of dirty workstations that never got clean. Amazon, they insisted, was a bad place to work. Why, then, are cities so desperate to bring Amazon home?

In Fulfillment, MacGillis, a reporter for ProPublica and the author of 2014’s The Cynic: The Political Education of Mitch McConnell, offers answers. The digital economy has fattened a handful of cities while others, often old industrial hubs, fall behind. There is historical precedent for industries to cluster: “History,” he writes, “is the story of cities with the right confluence of people in close quarters to spin the world forward, whether in classical Athens or Renaissance Florence or industrial-age Glasgow.” That dynamic, however, has “trebled” in recent years, he claims, with innovation the new resource to mine. Amazon and Microsoft swelled Seattle, brought it new wealth, a new class of resident, and a new set of problems. That wealth never reached a number of Seattle’s long-term residents, who could recall an older, more livable version of a vibrant city. What dispersed out from Seattle was not wealth, either, but something else. Inequality trickled down.

MacGillis understands the bargain Amazon offers the public and explores the consequences of that bargain with a sharp, humane eye. He succeeds in telling a story about Amazon from the bottom up — the right way to scrutinize a company that projects a progressive image. Amazon wants us to believe it treats its workers well: It pays them $15 an hour now, a fact it has repeatedly tweeted to its congressional critics. Other companies, even governments, ought to follow Amazon’s stellar example, the company says. MacGillis argues that governments have already been too eager to take Amazon at its word, and that the consequences, for workers and for the places they live, have been catastrophic.

To cities in need of jobs, Amazon can look like a savior. But salvation is an exchange: a soul for a different future. MacGillis argues that this trade is good for Jeff Bezos alone; workers and cities lose out in both a psychological and material sense. Bill Bodani has nothing to offer the new economy but his body. Amazon accepts, and forces him to accept something even more nefarious than a pay cut. To take a job at the mill was to join a community. Young high-school graduates, MacGillis writes, had walked into a union and the welcoming arms of their uncles and fathers. By contrast, the warehouse is a sterile place. Workers are welcomed not with warm introductions but with “a sheet of paper scrawled with AMAZON” and representatives for an Amazon subcontractor. The job itself can be isolating, as Amazon workers themselves have reported; steep quotas and pervasive surveillance offer few opportunities to socialize. This is a useful union-avoidance strategy. It’s also a spiritual blow.

Once cities like Sparrows Point offer up their souls, Amazon gives them a cheap future. Corporations rarely make decisions out of abundant public spirit; Amazon is no exception to the rule. Instead, it eludes taxes. MacGillis calls Amazon’s approach to tax avoidance “a veritable Swiss Army knife, with an implement to wield against every possible government tab,” and the description lines up with reality. Amazon paid no federal income tax for two years before coughing up a paltry $162 million in 2019. It settles upon cities and towns like a locust, chewing up tax breaks totaling $2.7 billion by 2019, according to MacGillis. In 2018, Amazon threatened to cancel a planned expansion in Seattle, its home turf, over an employee-hours tax intended to address the city’s homelessness crisis. The city council passed it, only to reverse itself less than a month later.

In smaller cities, the costs of attracting Amazon can be especially steep. Consider . . .

Continue reading. There’s more.

Written by LeisureGuy

13 April 2021 at 2:53 pm

How spanking may affect brain development in children


I earlier blogged about studies showing that physically punishing children (by spanking, for example) adversely affects them, leading to social and emotional problems later on. Those studies demonstrated the cause-and-effect relationship but did not focus on the mechanism by which the result followed. Manisha Aggarwal-Schifellite has an article in the Harvard Gazette that describes the likely mechanism. The article begins:

Spanking may affect a child’s brain development in ways similar to more severe forms of violence, according to a new study led by Harvard researchers.

The research builds on existing studies that show heightened activity in certain regions of the brains of children who experience abuse in response to threat cues.

The group found that children who had been spanked had a greater neural response in multiple regions of the prefrontal cortex (PFC), including in regions that are part of the salience network. These areas of the brain respond to cues in the environment that tend to be consequential, such as a threat, and may affect decision-making and processing of situations.

“We know that children whose families use corporal punishment are more likely to develop anxiety, depression, behavior problems, and other mental health problems, but many people don’t think about spanking as a form of violence,” said Katie A. McLaughlin, John L. Loeb Associate Professor of the Social Sciences, director of the Stress & Development Lab in the Department of Psychology, and the senior researcher on the study, which was published Friday in the journal Child Development. “In this study, we wanted to examine whether there was an impact of spanking at a neurobiological level, in terms of how the brain is developing.”

According to the study’s authors, corporal punishment has been linked to the development of mental health issues, anxiety, depression, behavioral problems, and substance use disorders. And recent studies show that approximately half of parents in U.S. studies reported spanking their children in the past year and one-third in the past week. However, the relationship between spanking and brain activity had not previously been studied.

McLaughlin and her colleagues — including Jorge Cuartas, first author of the study and a doctoral candidate in the Harvard Graduate School of Education, and David Weissman, a postdoctoral fellow in the Stress & Development Lab — analyzed data from a large study of children between the ages of 3 and 11. They focused on 147 children around ages 10 and 11 who had been spanked, excluding children who had also experienced more severe forms of violence.

Each child lay in an MRI machine and watched a computer screen on which were displayed different images of actors making “fearful” and “neutral” faces. A scanner captured the child’s brain activity in response to each kind of face, and those images were analyzed to determine whether the faces sparked different patterns of brain activity in children who were spanked compared to those who were not.

“On average, across the entire sample, fearful faces elicited greater activation than neutral faces in many regions throughout the brain … and children who were spanked demonstrated greater activation in multiple regions of PFC to fearful relative to neutral faces than children who were never spanked,” the researchers wrote.

By contrast, “There were no regions of the brain where activation to fearful relative to neutral faces differed between children who were abused and children who were spanked.”

The findings are in line with similar research conducted on children who had experienced severe violence, suggesting that “while we might not conceptualize corporal punishment to be a form of violence, in terms of how a child’s brain responds, it’s not all that different than abuse,” said McLaughlin. “It’s more a difference of degree than of type.”

Researchers said the study is a first step toward . . .

Continue reading. There’s more (and no paywall).

And note these earlier posts on parenting in general: first, a better way to parent; second, avoiding having bossy, unhelpful kids; and third, best practices in parenting.

Written by LeisureGuy

13 April 2021 at 4:20 am

“Less is more” is intrinsically difficult for the human psyche, which seems always to want to add


Joe Dominguez, in his (invaluable) book Your Money or Your Life: Transforming Your Relationship with Money and Achieving Financial Independence, pointed out that one route to financial independence was to accumulate enough money to satisfy all your wants and needs, but an easier route was to trim your wants and needs to fit a smaller amount of money. In either case, financial independence is achieved, but one way is faster and easier. (The book describes the tactics he used.)
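To see why the trimming route is faster, consider a simple crossover calculation. The sketch below assumes a 5% return and the common 4% withdrawal rule, with invented figures; none of this is from the book.

    # Dominguez's two routes to financial independence, compared with a
    # crossover calculation. Return, withdrawal rate, and dollar figures
    # are all illustrative assumptions, not the book's.

    def years_to_independence(annual_spend: float, annual_savings: float,
                              growth: float = 0.05, withdrawal: float = 0.04) -> int:
        """Years until the portfolio sustains spending at the withdrawal rate."""
        target = annual_spend / withdrawal
        portfolio, years = 0.0, 0
        while portfolio < target:
            portfolio = portfolio * (1 + growth) + annual_savings
            years += 1
        return years

    # Same $80,000 income: trimming wants from $60k to $40k a year both
    # shrinks the target and enlarges the savings that chase it.
    print(years_to_independence(annual_spend=60_000, annual_savings=20_000))  # 32
    print(years_to_independence(annual_spend=40_000, annual_savings=40_000))  # 17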

Written by LeisureGuy

12 April 2021 at 3:53 pm

Suicide and impulse


Suicide can be a considered choice — for example, an elderly person in the grasp of a painful and incurable terminal illness might decide to end his life early rather than suffer — or it can be a passing impulse — for example, a person with clinical depression who encounters a temporary setback and impulsively makes a suicide attempt.

In the US the most common suicide method is with a firearm. Consider this chart (source):

If a person has a suicidal impulse and has easy access to a firearm (not unusual in the US), the firearm is likely to be used in an attempt at suicide, and the outcome is almost always fatal. Indeed, in looking at gun deaths in the US, suicide outnumbers homicide (source):

In discussing deaths due to gun violence, some object to including suicides in the total because (they believe) “if a person’s going to commit suicide, they’ll find a way to do it, with or without a gun.” That belief is false for impulsive suicide, and impulsive suicide is much more common than considered suicide.

If a person experiencing a suicidal impulse picks a method that requires several steps and involves time and effort, the impulse is likely to dissipate before the attempt is made; and if the method is not instantly fatal, so that recovery is possible (as in taking an overdose of medicine), the person may be saved and not reattempt suicide.

Years ago I read an account by a man who, walking across the Golden Gate bridge and feeling depressed about his current situation (as I recall, he had just lost his job), decided to kill himself by jumping off the bridge. He was, as he later wrote, fully committed, but he wanted to face the city lights when he jumped, and he was on the side of the bridge away from the city, so that he would be facing only darkness.

He could not cross immediately to the other side because of traffic and traffic barriers, so he decided to walk to the end of the bridge, cross there, and return to jump, facing the city. By the time he reached the end of the bridge, however, the impulse had dissipated, and he simply continued on his way home (which is why we know the story). The impulse never returned.

The Harvard School of Public Health has an article that speaks to this:

Nine out of ten people who attempt suicide and survive will not go on to die by suicide at a later date. This has been well-established in the suicidology literature. A literature review (Owens 2002) summarized 90 studies that have followed over time people who have made suicide attempts that resulted in medical care. Approximately 7% (range: 5-11%) of attempters eventually died by suicide, approximately 23% reattempted nonfatally, and 70% had no further attempts.

Even studies that focused on medically serious attempts–such as people who jumped in front of a train (O’Donnell 1994)–and studies that followed attempters for many decades found similarly low suicide completion rates. At least one study, published after the 90-study review, found a slightly higher completion rate. This was a 37-year follow-up of self-poisoners in Finland that found an eventual completion rate of 13% (Suominen 2004).

This relatively good long-term survival rate is consistent with the observation that suicidal crises are often short-lived, even if there may be underlying, more chronic risk factors present that give rise to these crises.

The relationship between suicide attempts and completions is a complex one.

  • Most people who die by suicide in the U.S. did not make a previous attempt. Prevention efforts that focus only on those who attempt suicide will miss the majority of completers. An international review of psychological autopsy studies found that approximately 40% of those dying by suicide had previously attempted (Cavanagh 2003). The proportion was lower (25-33%) among studies of youth suicide in the U.S. (Brent 1993, Shaffer 1996). A history of previous attempts is lower among those dying by firearm suicide and higher among those dying by overdose (NVISS data).
  • Most people who attempt suicide will not go on to complete suicide. [Though if a gun is used, the suicide attempt almost always results in death. – LG]
  • Still, history of suicide attempt is one of the strongest risk factors for suicide. 5% to 11% of hospital-treated attempters do go on to complete suicide, a far higher proportion than among the general public where annual suicide rates are about 1 in 10,000.

Footnotes and sources are found at the link. The big problem with guns is that a suicide attempt using a gun is almost always successful.
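To make “a far higher proportion” concrete, here is the arithmetic using only the figures quoted above (noting, as the source does, that an eventual completion rate is being compared with an annual rate):

    # Arithmetic from the figures quoted above; no new data.

    attempter_completion = 0.07        # ~7% of hospital-treated attempters (Owens 2002)
    general_annual_rate = 1 / 10_000   # annual suicide rate, general public

    print(attempter_completion / general_annual_rate)  # 700.0 times the annual baseline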

This came to mind this morning as I read a New Yorker article by D.T. Max, which includes this passage:

Suicide is often a response to extreme personal struggles, but the immediate catalyst can be little more than a bad grade on a test or a weekend when a student’s friends have gone out of town. A widely cited 1978 study of some five hundred people who were stopped from jumping off the Golden Gate Bridge suggests how impulsive the urge to kill oneself can be: only about five per cent of the subjects later died by suicide [that is, 95% did not later commit suicide – LG]. (Studies such as this helped lead to the now ubiquitous signs on bridges with the National Suicide Prevention Lifeline number: 1-800-273-8255.)

In the past two decades, the suicide rate in the United States has risen by some thirty-five per cent, and the problem is especially acute among the young. According to the Centers for Disease Control and Prevention, by 2018 suicide had become the second most common cause of death among Americans between the ages of ten and twenty-four, exceeded only by accidental death. Experts describe as precipitating factors everything from mounting economic pressures to the broadcasting of distress on social media. At the University of Pennsylvania, more than a dozen students have died by suicide since 2013, and in late 2019 the director of the school’s mental-health services jumped from the seventeenth floor of a building. A 2018 study by researchers affiliated with Harvard University found that one in five American college students had had suicidal thoughts the previous year. Will Newman, a professor of forensic psychiatry at Saint Louis University, told me, “The percentage of freshmen seeking mental-health services is on a steady incline, and universities have to quickly adjust to keep up.” Meanwhile, the covid-19 pandemic has deepened the isolation of many Americans. More than ten per cent of respondents to a C.D.C. survey last June said that in the previous month they had seriously considered killing themselves.

Written by LeisureGuy

12 April 2021 at 3:08 pm

Americans are still spanking their kids. A new study shows how harmful that is.


I have long maintained that parents should not strike their children, and now research backs up that idea. Caitlin Gibson reports in the Washington Post:

When developmental psychologist Liz Gershoff began studying the effects of spanking and harsh parenting discipline in the 1990s, the topic was still the subject of intense debate in scientific circles: Was physical punishment actually harmful to kids?

In the decades since, a growing body of research has offered a clear and resounding answer. Spanking and other forms of severe discipline — such as verbally berating or humiliating a child — have been repeatedly linked to behavioral, emotional, psychological and academic problems, a conclusion that prompted the American Academy of Pediatrics to issue a new policy statement in 2018, strengthening its stance that parents should not use physical punishment.

But spanking is still prevalent in American families, and legal in all states. Though it appears to be steadily falling out of favor among younger generations, the 2018 General Social Survey — a long-running biennial national survey of American adults — found that 66 percent of Americans agreed that “a good, hard spanking” is sometimes necessary to discipline a child.


And one common argument in support of spanking has lingered: How can we be sure that a child’s lack of achievement or antisocial behavior can be traced back to physical punishment specifically, versus an innate or genetic factor?

Gershoff, a professor of human development and family sciences at the University of Texas at Austin, set out to settle the question of nature vs. nurture with her newest study, published in the March volume of Psychological Science. Gershoff and her research team analyzed more than a thousand pairs of twins — including more than 400 identical twins, who share the same DNA — many of whom were disciplined differently by their parents. The researchers found that the child who was hit or yelled at more often was consistently more likely to display delinquent or antisocial behavior.

“Identical-twin studies are sort of the classic way that psychologists have of differentiating what is innate behavior from what is learned behavior, so this study follows in a long tradition,” says Robert Sege, a pediatrician at Tufts Children’s Hospital and director of the Center for Community-Engaged Medicine at Tufts Medical Center who co-authored the American Academy of Pediatrics’s 2018 statement. “This is yet another, different way of looking at this, but all the data points to the same direction. As a scientist, when you see that, no matter how you do the experiment, no matter how you ask the question, you get the same result — that’s conclusive.”

Gershoff spoke with The Washington Post about her decades of research into physical discipline in parenting, and the implications of her newest study. The conversation has been edited for length and clarity.

Q: You’ve been studying the effects of spanking for a long time. How has the scientific understanding of this issue changed over your career?

A: When I began studying physical punishment, there was a lot of resistance to the idea that it might be harmful. I went into this with an open mind, thinking, ‘Well, maybe parents are right, maybe it does work, maybe that’s what the research shows.’ By then we had several-hundred studies that had been looking at it, but no one had taken an overview to say, ‘What did we find overall?’ So that’s what I did in 2002. There have been many, many more studies since then, and they have all continued to show that the more children are physically punished, the worse their behavior, the worse their mental health. Now we also have research showing they do worse in school, they have lower achievement.

Q: It seems like those conclusions are generally accepted among scientists and pediatricians, but what have you observed about our social and cultural perceptions of spanking?

A: It used to be that we only learned how to parent from our own parents, that was it, and maybe a couple friends. I think in the last 20 years we’ve seen more parents have access to parenting information from parenting books, from the Internet. Now we can talk to people across the world and find out how they’re parenting, what works for them. There has been more public discussion about physical punishment that we hadn’t really seen before — you started seeing figures like Oprah talk openly about how she was physically punished, and how harmful she thought that was. That was a big deal.

But there are still many parents who are under the impression that you have to hit children in some circumstances. That pattern cuts across cultural groups and racial groups and different areas of the country. So yes, there are still some parents to convince.

Q: Tell me about your new study, and why you focused on twins specifically. . . .

Continue reading. There’s much more. Later in the interview:

. . . parents see twins as individuals, they’re not parenting them as a set. And that much is good, but it’s just too bad that one kid is getting singled out for the harsh parenting. We didn’t ask them about the circumstances, so we don’t know why. But what we found was that the twin who was spanked more or yelled at more within each pair, they were the ones who had more antisocial behavior. It was the same for the kids who were identical twins and for kids who were not identical twins. The amount of genetic material they shared didn’t matter. It came down to how much harsh parenting they received.

And the interview concludes:

Q: Do you think this study settled that question then, of whether a child’s innate characteristics might be to blame for their behavior, rather than harsh discipline?

A: We are pretty convinced that this is definitive. There is just absolutely no evidence for a genetic component.

Q: You mentioned earlier that there are still clearly some parents left to convince. In light of these results, what would you most want those parents to know?

A: To me, it’s just further evidence that the way we parent our children really does impact them. And if parents are on the fence about physical punishment, this is just additional evidence that it’s not doing any good for the children, and in fact it seems to be making their behavior worse. And that in turn makes our jobs harder as parents — that’s the irony. People are trying to improve their children’s behavior when they’re using physical punishment, but they’re in fact making it worse.

See also the book Hunt, Gather, Parent: What Ancient Cultures Can Teach Us About the Lost Art of Raising Happy, Helpful Little Humans.

Written by LeisureGuy

12 April 2021 at 11:17 am

“After Working at Google, I’ll Never Let Myself Love a Job Again”


Emi Nietfeld, a software engineer, learns that corporations, as persons, are sociopaths. She writes in the NY Times:

I used to be a Google engineer. That often feels like the defining fact about my life. When I joined the company after college in 2015, it was at the start of a multiyear reign atop Forbes’s list of best workplaces.

I bought into the Google dream completely. In high school, I spent time homeless and in foster care, and was often ostracized for being nerdy. I longed for the prestige of a blue-chip job, the security it would bring and a collegial environment where I would work alongside people as driven as I was.

What I found was a surrogate family. During the week, I ate all my meals at the office. I went to the Google doctor and the Google gym. My colleagues and I piled into Airbnbs on business trips, played volleyball in Maui after a big product launch and even spent weekends together, once paying $170 and driving hours to run an obstacle course in the freezing rain.

My manager felt like the father I wished I’d had. He believed in my potential and cared about my feelings. All I wanted was to keep getting promoted so that as his star rose, we could keep working together. This gave purpose to every task, no matter how grueling or tedious.

The few people who’d worked at other companies reminded us that there was nowhere better. I believed them, even when my technical lead — not my manager, but the man in charge of my day-to-day work — addressed me as “beautiful” and “gorgeous,” even after I asked him to stop. (Finally, I agreed that he could call me “my queen.”) He used many of our one-on-one meetings to ask me to set him up with friends, then said he wanted “A blonde. A tall blonde.” Someone who looked like me.

Saying anything about his behavior meant challenging the story we told ourselves about Google being so special. The company anticipated our every need — nap pods, massage chairs, Q-Tips in the bathroom, a shuttle system to compensate for the Bay Area’s dysfunctional public transportation — until the outside world began to seem hostile. Google was the Garden of Eden; I lived in fear of being cast out.

When I talked to outsiders about the harassment, they couldn’t understand: I had one of the sexiest jobs in the world. How bad could it be? I asked myself this, too. I worried that I was taking things personally and that if anyone knew I was upset, they’d think I wasn’t tough enough to hack it in our intense environment.

So I didn’t tell my manager about my tech lead’s behavior for more than a year. Playing along felt like the price of inclusion. I spoke up only when it looked like he would become an official manager — my manager — replacing the one I adored and wielding even more power over me. At least four other women said that he’d made them uncomfortable, in addition to two senior engineers who already made it clear that they wouldn’t work with him.

As soon as my complaint with H.R. was filed, Google went from being a great workplace to being any other company: It would protect itself first. I’d structured my life around my job — exactly what they wanted me to do — but that only made the fallout worse when I learned that the workplace that I cherished considered me just an employee, one of many and disposable.

The process stretched out for nearly three months. In the meantime I had to have one-on-one meetings with my harasser and sit next to him. Every time I asked for an update on the timeline and expressed my discomfort at having to continue to work in proximity to my harasser, the investigators said that I could seek counseling, work from home or go on leave. I later learned that Google had similar responses to other employees who reported racism or sexism. Claire Stapleton, one of the 2018 walkout organizers, was encouraged to take leave, and Timnit Gebru, a lead researcher on Google’s Ethical AI team, was encouraged to seek mental health care before being forced out.

I resisted. How would being alone by myself all day, apart from my colleagues, friends and support system, possibly help? And I feared that if I stepped away, the company wouldn’t continue the investigation.

Eventually, the investigators corroborated my claims and found my tech lead violated the Code of Conduct and the policy against harassment. My harasser still sat next to me. My manager told me H.R. wouldn’t even make him change his desk, let alone work from home or go on leave. He also told me that my harasser received a consequence that was severe and that I would feel better if I could know what it was, but it sure seemed like nothing happened.

The aftermath of speaking up had broken me down. It dredged up the betrayals of my past that I’d gone into tech trying to overcome. I’d made myself vulnerable to my manager and the investigators but felt I got nothing solid in return. I was constantly on edge from seeing my harasser in the hallways and at the cafes. When people came up behind my desk, I startled more and more easily, my scream echoing across the open-floor-plan office. I worried I’d get a poor performance review, ruining my upward trajectory and setting my career back even further.

I went weeks without sleeping through the night.

I decided to take three months of paid leave. I feared that going on leave would set me back for promotion in a place where almost everyone’s progress is public and seen as a measure of an engineer’s worth and expertise. Like most of my colleagues, I’d built my life around the company. It could so easily be taken away. People on leave weren’t supposed to enter the office — where I went to the gym and had my entire social life.

Fortunately, I still had a job when I got back. If anything, I was more eager than ever to excel, to make up for lost time. I was able to earn a very high performance rating — my second in a row. But it seemed clear I would not be a candidate for promotion. After my leave, the manager I loved started treating me as fragile. He tried to analyze me, suggesting that I drank too much caffeine, didn’t sleep enough or needed more cardiovascular exercise. Speaking out irreparably damaged one of my most treasured relationships. Six months after my return, when I broached the subject of promotion, he told me, “People in wood houses shouldn’t light matches.”

When I didn’t get a promotion, some of my stock grants ran out and so I effectively took a big pay cut. Nevertheless, I wanted to stay at Google. I still believed, despite everything, that Google was the best company in the world. Now I see that my judgment was clouded, but after years of idolizing my workplace, I couldn’t imagine life beyond its walls.

So I interviewed with and got offers from two other top tech companies, hoping that Google would match. In response, . . .

Continue reading. There’s much more.

Written by LeisureGuy

11 April 2021 at 12:43 pm

How to make friends as an adult

leave a comment »

I found this article by Marisa G Franco interesting (and useful):

Need to know

Friends are a treasure. In an uncertain world, they provide a comforting sense of stability and connection. We laugh together and cry together, sharing our good times and supporting each other through the bad. Yet a defining feature of friendship is that it’s voluntary. We’re not wedded together by law, or through blood, or via monthly payments into our bank accounts. It is a relationship of great freedom, one that we retain only because we want to.

But the downside of all this freedom, this lack of formal commitment, is that friendship often falls by the wayside. Our adult lives can become a monsoon of obligations, from children, to partners, to ailing parents, to work hours that trespass on our free time. A study of young adults’ social networks by researchers at the University of Oxford found that those in a romantic relationship had, on average, two fewer close social ties, including friends. Those with kids had lost out even more. Friendships crumble, not because of any deliberate decision to let them go, but because we have other priorities, ones that aren’t quite as voluntary. The title of the Oxford paper summed up things well: ‘Romance and Reproduction Are Socially Costly’.

Such is the pace and busyness of many people’s adult lives that they can lose contact with their friends at a rapid rate. For instance, a study by the Dutch sociologist Gerald Mollenhorst found that, over a period of seven years, people had lost touch with half of their closest friends, on average. What’s especially alarming is that many of us seem to be losing friends faster than we can replace them. A meta-analysis by researchers in Germany published in 2013 combined data from 177,635 participants across 277 studies, concluding that friendship networks had been shrinking for the preceding 35 years. For example, in studies conducted between 1980 and 1985, participants reportedly had four more friends on average, compared with the participants who’d taken part in studies between 2000 and 2005.

If we’re not careful, we risk living out our adulthoods friendless. This is a situation that’s worth avoiding. Friends are not only a great source of fun and meaning in life, but studies suggest that, without them, we’re also at greater risk of feeling more depressed. It’s telling that in their study ‘Very Happy People’ (2002), the American psychologists Ed Diener and Martin Seligman found that a key difference between the most unhappy and most happy people was how socially connected they were. Friends give us so much, which is why we need to invest in making them. Here’s how.

What to do

Making more friends in adulthood is going to take some deliberate effort on your part. It’s an exciting challenge in theory, but one of the first obstacles you’ll encounter is having enough confidence. Especially if you are shy by nature, putting yourself out there can seem scary, triggering fears of rejection. These fears might lead you to engage in two types of avoidance that will inhibit your ability to make friends. First, you might practise ‘overt avoidance’, by not putting yourself in situations where it’s possible to meet new people. Instead of going to your friend’s movie night, with the chance to meet others, you end up staying at home. Second, you might find yourself engaging in ‘covert avoidance’, which means that you show up but don’t engage with people when you arrive. You go to the movie night, but while everyone else is analysing the film after it’s over, you stay silent in the corner, petting someone’s pet corgi and scrolling through Instagram.

Assume that people like you

Both these forms of avoidance are caused by understandable fears of rejection. So imagine how much easier it would be if you knew that, were you to show up in a group of strangers, most of them would love you and find you interesting. This mindset actually has a self-fulfilling quality – an American study from the 1980s found that volunteers who were led to believe that an interaction partner liked them began to act in ways that made this belief more likely to come true – they shared more about themselves, disagreed less, and had a more positive attitude. This suggests that if you go into social situations with a positive mindset, assuming people like you, then it’s more likely that this will actually turn out to be the case.

Of course, you might still be reluctant to assume others like you because you don’t believe it’s true. If this is you, you might take comfort from research that found, on average, that strangers like us more than we realise. The paper, by Erica J Boothby at Cornell University and colleagues, involved having pairs of strangers chat together for five minutes, to rate how much they liked their interaction partner, and to estimate how much their partner liked them. Across a variety of settings and study durations – in the lab, in a college dorm, at a professional development workshop – the same pattern emerged. People underestimated how much they were liked, a phenomenon that Boothby and her colleagues labelled ‘the liking gap’.

What wisdom should we take from this research? It can remind us to go into new social events assuming that people will like us. It can keep us from being paralysed by fears of rejection, pushing us to question some of these fears. Try working on your internal dialogue, your inner voice that perhaps makes overly negative assumptions about how people will respond to you. Doing this will help give you the confidence to go out there and start initiating friendly contact with strangers.

Initiate

In We Should Get Together: The Secret to Cultivating Better Friendships (2020), Kat Vellos describes being inspired to write her book after a moment of feeling utterly alone. She was looking for a friend to hang out with, so she posted on Facebook: ‘Who wants to go eat French fries and talk about life with me?’ Everyone who responded lived in another state; her local San Francisco Bay Area friends were all booked up. As she put it:

I didn’t just want to eat snacks and talk about life. I was craving a different kind of life – one that would give me abundant access to friends who wanted to see me as much as I wanted to see them.

This experience made Vellos realise that she needed more friends, so she created and executed a plan to make some. Eventually, she was running two successful meetup groups, and had established friendships with people she liked and wanted to get closer to. How did she change her life? She initiated. Vellos set aside time to reach out to people regularly, to revitalise old relationships and to awaken new ones, to check in, to find time to hang out. Her story reveals how initiative can change the course of our friendships.

To embrace the importance of initiating, you must let go of the myth that friendship happens organically. You have to take responsibility rather than waiting passively. Science backs this up. Consider a study of older adults in the Canadian province of Manitoba. The participants who thought friendship was something that just happened based on luck tended to be less socially active and to feel lonelier when the researchers caught up with them five years later. By contrast, those who thought friendship took effort actually made more effort – for example, by showing up at church or at community groups – and this paid dividends, in that they felt less lonely at the five-year follow-up.

But it’s not just showing up that matters, it’s saying ‘hello’ when you get there. This means introducing yourself to other people, asking them for their phone numbers, following up and asking them to hang out. Initiating is a process, one that we must do over and over again to make new friendships.

Initiation is particularly important for people who find themselves in new social settings – such as people who have moved to a new city, started a new school or job. In a study of first-year undergraduates at the University of Denver in 1980, it was those students who rated themselves as having superior social skills who managed to develop more satisfying social relationships. Moreover, in the Fall, when everyone was new, it was specifically ‘initiation skill’ that was most important. Once friendships were more stable, it didn’t matter as much.

Although we might fear that other people will turn us down if we initiate with them, the research finds that this is a lot less likely than we might think. When the American psychologists Nicholas Epley and Juliana Schroeder asked research participants to open up conversations with their fellow train commuters, can you guess how many of them were shot down? None! Epley and Schroeder concluded: ‘Commuters appeared to think that talking to a stranger posed a meaningful risk of social rejection. As far as we can tell, it posed no risk at all.’

Keep showing up

Once you’ve initiated some new contacts, the challenge . . .

Continue reading. There’s much more. And the article ends with links to more resources.

Friendship is important for one’s mental health and sense of well-being, so developing friendships is worth doing.

Written by LeisureGuy

10 April 2021 at 1:00 pm

A Shift in American Family Values Is Fueling Estrangement

leave a comment »

Joshua Coleman, a psychologist and senior fellow at the Council on Contemporary Families and the author of the forthcoming book Rules of Estrangement: Why Adult Children Cut Ties and How to Heal the Conflict, writes in the Atlantic:

Sometimes my work feels more like ministry than therapy. As a psychologist specializing in family estrangement, my days are spent sitting with parents who are struggling with profound feelings of grief and uncertainty. “If I get sick during the pandemic, will my son break his four years of silence and contact me? Or will I just die alone?” “How am I supposed to live with this kind of pain if I never see my daughter again?” “My grandchildren and I were so close and this estrangement has nothing to do with them. Do they think I abandoned them?”

Since I wrote my book When Parents Hurt, my practice has filled with mothers and fathers who want help healing the distance with their adult children and learning how to cope with the pain of losing them. I also treat adult children who are estranged from their parents. Some of those adult children want no contact because their parents behaved in ways that were clearly abusive or rejecting. To make matters worse for their children and themselves, some parents are unable to repair or empathize with the damage they caused or continue to inflict. However, my recent research—and my clinical work over the past four decades—has shown me that you can be a conscientious parent and your kid may still want nothing to do with you when they’re older.

However they arrive at estrangement, parents and adult children seem to be looking at the past and present through very different eyes. Estranged parents often tell me that their adult child is rewriting the history of their childhood, accusing them of things they didn’t do, and/or failing to acknowledge the ways in which the parent demonstrated their love and commitment. Adult children frequently say the parent is gaslighting them by not acknowledging the harm they caused or are still causing, failing to respect their boundaries, and/or being unwilling to accept the adult child’s requirements for a healthy relationship.

Both sides often fail to recognize how profoundly the rules of family life have changed over the past half century. “Never before have family relationships been seen as so interwoven with the search for personal growth, the pursuit of happiness, and the need to confront and overcome psychological obstacles,” the historian Stephanie Coontz, the director of education and research for the Council on Contemporary Families, told me in an email. “For most of history, family relationships were based on mutual obligations rather than on mutual understanding. Parents or children might reproach the other for failing to honor/acknowledge their duty, but the idea that a relative could be faulted for failing to honor/acknowledge one’s ‘identity’ would have been incomprehensible.”

The historian Steven Mintz, the author of Huck’s Raft: A History of American Childhood, made a similar observation in an email: “Families in the past fought over tangible resources—land, inheritances, family property. They still do, but all this is aggravated and intensified by a mindset that does seem to be distinctive to our time. Our conflicts are often psychological rather than material—and therefore even harder to resolve.”

In The Marriage-Go-Round, the Johns Hopkins University sociologist Andrew Cherlin wrote that starting in the late 19th century, traditional sources of identity such as class, religion, and community slowly began to be replaced with an emphasis on personal growth and happiness. By the second half of the 20th century, American families had gone through changes that, Cherlin said, were “unlike anything that previous generations of Americans have ever seen.”

Deciding which people to keep in or out of one’s life has become an important strategy to achieve that happiness. While there’s nothing especially modern about family conflict or a desire to feel insulated from it, conceptualizing the estrangement of a family member as an expression of personal growth, as is commonly done today, is almost certainly new.

Of course, not all individuals base their ideas of family on these more individualized principles. “Most immigrant families, especially those in the first generation, still value interdependence and filial duty,” Mintz noted. “However, in recent decades the majority of American families have experienced weakening [extended] kin ties and high rates of mobility and dispersion. I would argue that these factors have made the opportunities for familial alienation greater than in the past.”

Estrangement seems to affect a small but significant portion of families in the United States, and it is happening today against a backdrop of record-high parental investment. During the past 50 years, people across the classes have been working harder than ever to be good parents. They have given up hobbies, sleep, and time with their friends in the hope of slingshotting their offspring into successful adulthood.

On the positive side, . . .

Continue reading.

Written by LeisureGuy

9 April 2021 at 1:28 pm

What it’s like living without an inner monologue

leave a comment »

Alex Soloducha reports for CBC News:

Hi there! Are you hearing this sentence in your head right now? Is your inner critic voicing its thoughts on the sentence structure? Is it saying this is an odd start to a news story?

The concept of an inner monologue — the term now commonly used to describe the voice in your head — recently sparked a flurry of discussion on social media. 

A tweet by @KylePlantEmoji and subsequent blog post by Ryan Langdon brought the topic into the forefront, informing the internet that not everyone has an inner monologue.

Some people freaked out, not believing that some don’t think in a verbal, linear way. 

Others who live without that inner voice realized they think differently than many of their friends and family members.

Olivia Rivera, 22, said she figured out she doesn’t have an internal monologue when her co-workers at a Regina salon started talking about the viral debate. 

She said that until then, she didn’t know that some people actually have a voice in their head that sounds like their own voice.

“When I hear that other people have like a constant kind of dialogue and stream in their head and that when they’re doing a task they’ll just be thinking about things the entire time they’re doing a task, it actually kind of feels a little overwhelming,” she said. “How do you deal with that and what does that feel like?”

Inner monologues and pop culture

You may have seen inner monologue portrayed in TV shows where a detective debriefs the situation via narration. Or maybe you’ve seen the movie What Women Want, where Mel Gibson’s character can read the minds of his female coworkers and romantic interests. 

Rivera said she was first confronted with the concept of inner voice as a child, watching the show Lizzie McGuire in which a small animated version of the main character shared her thoughts and commentary on what was happening. . .

Continue reading.

Written by LeisureGuy

9 April 2021 at 1:21 pm

The case of Norman Douglas: When pederasts are accepted and even lionized

leave a comment »

Rachel Hope Cleves, a historian and professor at the University of Victoria, British Columbia, has an interesting and lengthy extract from her book Unspeakable: A Life Beyond Sexual Morality (2020) in Aeon. Let me quote the conclusion:

. . . Popular toleration of pederasty, in Italy and elsewhere, took the form of wilful ignorance. As the American literary theorist Eve Kosofsky Sedgwick pointed out in Epistemology of the Closet (1990), ignorance is not a singular ‘maw of darkness’ but a multiple phenomenon. Ignorance can entail intentional not-knowing, making the closet a performance of silence as a speech act. The Australian anthropologist Michael Taussig called communal expressions of wilful ignorance ‘public secrets’ that rested on ‘active not-knowing’. The experiences of the German photographer Wilhelm von Gloeden demonstrate how such a public secret, or active not-knowing, operated. Gloeden lived in Taormina, in Sicily, from 1878 to his death in 1931. During his decades of residence, he photographed generations of boys, frequently posing them naked in Hellenic ruins, adorned with laurel crowns and other symbols of ancient Greece. Gloeden’s photographs were popular with many early gay activists, including Symonds. The people of Taormina, who benefitted from the tourist trade that Gloeden’s photography brought to their town, also liked him. Gloeden and other foreign men often paid local youths for sexual encounters, an open secret in the community. Locals silenced any journalists, priests and politicians who attempted to criticise Gloeden, since they felt that these criticisms dishonoured the community and threatened their economic wellbeing. As Mario Bolognari, a historian of Taormina, concluded in 2017: ‘having chosen not to see does not imply being blind. It only means having decided that it was preferable not to notice certain things.’

Active not-knowing happens at the intimate level as well as the communal level. Families engage in active not-knowing about sexual wrongdoing in the home. This applies not only to child sexual abuse, but to all sorts of misbehaviours, including adultery, sibling incest and domestic violence. The motivations for active not-knowing are various, ranging from love and loyalty for the offender, to fear of retribution, to a desire to shield the family from public shame. Active not-knowing applies to more than sexual misbehaviour, and extends beyond the family. Friends exercise active not-knowing on behalf of friends, not wanting to risk meaningful relationships. Fans of artists engage in active not-knowing about their idols, motivated by awe and admiration, or by a desire to protect a favourite artwork from scrutiny and rejection. And disempowered people engage in active not-knowing about the powerful, from fear of the consequences that might result from confronting the truth, or from appreciation for the benefits that accrue from maintaining ignorance. Lastly, everyone benefits from silence by avoiding being implicated themselves in the bad thing that they know about.

Many of these ways of not-knowing helped Douglas escape condemnation. Some members of his extended family disowned him because of the abusive way he treated his wife, who was his first cousin and thus their relation as well. But his sons, who witnessed firsthand his sexual encounters with children (and might even, in the case of his older son, have experienced abuse) maintained loyalty to their father and defended him from posthumous accusations. Some writer friends wrote off Douglas after his arrests, but many loved his books and maintained a deliberate ignorance about what actually happened between Douglas and the boys and girls he recounted meeting in the pages of his travel books. The children themselves knew the most about Douglas’s sexual predations, but they had the most to gain financially – and often emotionally – from keeping close to him. There’s almost no evidence of children speaking out against Douglas either during their connections or afterwards, as adults. One exception is a 16-year-old whose complaint led to Douglas’s initial arrest in London in 1916.

The lack of panic about paedophilia during Douglas’s lifetime made it easier for all these people to look the other way, even when he flaunted his predilections. Douglas went so far as to write about how he’d purchased children for sex in his memoir, Looking Back (1933). Very few reviewers took issue with the material, at least until after Douglas’s death, when, freed from the fear of a libel suit, they pointed out how unseemly it was for Douglas to have admitted to such behaviour. The author and former politician Harold Nicolson complained that he was ‘shocked by people who, when past the age of 70, openly avow indulgences which they ought to conceal’. In the eyes of reviewers who wanted to maintain the pretence of active not-knowing, Douglas’s admission might have been a worse crime than the acts themselves, since they implicated the readers by forcing them into a state of knowing.

If Douglas escaped condemnation during his lifetime, he couldn’t escape the assault on his reputation following the intensification of anti-paedophilic sentiment after his death. The shift in public mores during the 1980s towards viewing paedophiles as monsters made it impossible to defend Douglas. He disappeared from literary memory, except as an example of historical villainy – the role he plays in two novels published after the 1980s, Francis King’s The Ant Colony (1991) and Alex Preston’s In Love and War (2014). Most readers would consider that a salutary change and welcome the expulsion of paedophiles from acceptable society. However, the rise of the ‘monster’ discourse doesn’t seem to have made people much more willing to speak out against child sexual abuse in the present.

Looking at the example of Epstein, one can see the same old dynamics of active not-knowing operating among the leadership of the MIT Media Lab (who accepted donations from Epstein) and the scholars who turned a blind eye to his abuse, even after his conviction. The Media Lab didn’t want to lose Epstein’s financial patronage or be shamed by association. Individual scholars might have enjoyed his company (and the company of the girls and young women Epstein surrounded himself with), or they might have wanted funding from him, or feared the consequences to their careers if they spoke out against him. In an even more striking parallel to Douglas, Matzneff wrote and spoke openly about his paedophilia without censure, protected by fellow writers’ and publishers’ unwillingness to disturb the dense network of literary connections in which they all played a role, until one of his victims of abuse, the French publisher Vanessa Springora, broke the silence in 2019.

Is it possible that elevating the paedophile to the status of a monster has in fact, rather than making it easier to speak out against child abuse, made it more imperative for friends, family members and fans to engage in active not-knowing? Who wants to expose someone they love as a monster? More than that, people are inclined to disbelieve tales of extraordinary monstrosity. Who wants to disturb their own situation by making such explosive allegations? The stakes are too high to risk getting it wrong. Maybe it would be easier to counter the problem of child sexual abuse if we were able to acknowledge it as both bad and ordinary. In Douglas’s day, such sex was seen as questionable but mundane. Today, it’s seen as terrible but exceptional. If we could create a world where people agreed that sex between adults and children was not healthy for children, and that many ordinary adults engaged in such behaviour nonetheless, maybe more people would feel empowered to witness and speak out against everyday abuse.

This sort of wilful ignorance that accompanies acceptance is (as I fairly frequently mention) discussed in Daniel Goleman’s interesting book Vital Lies, Simple Truths.

This is also related to what is happening in France, where the acceptability of sexual harassment and rape, particularly by men in positions of power, is losing ground fairly rapidly. See Norimitsu Onishi’s NY Times article “Powerful Men Fall, One After Another, in France’s Delayed #MeToo.” (And the articles to which that report links are worth reading as well.) From the report:

. . . Since the beginning of the year, a series of powerful men from some of France’s most prominent fields — politics, sports, the news media, academia and the arts — have faced direct and public accusations of sexual abuse in a reversal from mostly years of silence. At the same time, confronted with these high-profile cases and a shift in public opinion, French lawmakers are hurrying to set 15 as the age of sexual consent — only three years after rejecting such a law.

The recent accusations have not only led to official investigations, the loss of positions for some men and outright banishment from public life for others. They have also resulted in a rethinking of French masculinity and of the archetype of Frenchmen as irresistible seducers — as part of a broader questioning of many aspects of French society and amid a conservative backlash against ideas on gender, race and postcolonialism supposedly imported from American universities.

. . . Ms. Haas said that France was going through a delayed reaction to #MeToo after a “maturation” period during which many French began to understand the social dimensions behind sexual violence and the concept of consent.

That was especially so, Ms. Haas said, after the testimony in the past year of Adèle Haenel, the first high-profile actress to speak out over abuse, and of Vanessa Springora, whose memoir, “Consent,” documented her abuse by the pedophile writer Gabriel Matzneff.

“The start of 2021 has been a sort of aftershock,” Ms. Haas said. “What’s very clear is that, today in France, we don’t at all have the same reaction that we did four, five years ago to testimonies of sexual violence against well-known people.”

Last month, Pierre Ménès, one of France’s most famous television sports journalists, was suspended indefinitely by his employer after the release of a documentary that exposed sexism in sports journalism, “I’m Not a Slut, I’m a Journalist.”

Just a few years ago, few criticized him for behavior that they now don’t dare defend in public, including forcibly kissing women on the mouth on television and, in front of a studio audience in 2016, lifting the skirt of a female journalist — Marie Portolano, the producer of the documentary.

“The world’s changed, it’s #MeToo, you can’t do anything anymore, you can’t say anything anymore,” Mr. Ménès said in a television interview after the documentary’s release. He said he didn’t remember the skirt incident, adding that he hadn’t been feeling like himself at the time because of a physical illness. . .

There’s more.

Written by LeisureGuy

9 April 2021 at 12:12 pm

How an Abstinence Pledge in the ’90s Shamed a Generation of Evangelicals

leave a comment »

Clyde Haberman reports in the NY Times:

To the uninitiated, Christianity’s evangelical movement can seem like a monolith that brooks no dissent on certain core issues: Same-sex relationships are sinful, men’s spiritual dominance over women is divinely ordained and, on the political front, Donald J. Trump was an improbable but nonetheless valued protector of the faith.

Not everything is what it appears to be. The movement is in fact rife with division, a reality reinforced last month when Beth Moore, an evangelical writer and teacher with a huge following, formally ended her long affiliation with the Southern Baptist Convention, principally because of its tight embrace of the licentious, truth-challenged Mr. Trump.

It was a rupture several years in the making. As Ms. Moore told Religion News Service, disenchantment took hold when Mr. Trump became “the banner, the poster child for the great white hope of evangelicalism, the salvation of the church in America.” But the former president’s behavior is not the only issue buffeting the evangelical movement. White supremacy, male subjugation of women, a spate of sexual abuse cases, scandals involving prominent figures like Jerry Falwell Jr. — all have combined to undermine the authority of religious leaders and prompt members like Ms. Moore to abandon the Southern Baptist Convention.

Retro Report, which examines through video how the past shapes the present, turns its attention to an artifact of the movement’s religious conservatism. This is the so-called purity pledge, taken in the main by teenagers who promised to abstain from sex until they married. Some swore to not so much as kiss another person or even go on a date, for fear of putting themselves on the road to moral failure.

Devotion to this concept took hold in the early ’90s, when fear of AIDS and other sexually transmitted diseases bolstered the evangelical movement’s gospel of teen abstinence. It was a view put forth as God-commanded and had the support of like-minded political leaders, from the White House of Ronald Reagan to that of Mr. Trump.

Many people certainly found lifelong contentment because of having waited for the right mate. But for others, as the Retro Report video shows, the dictates of the purity movement were so emotionally onerous that their adulthoods have been filled with apprehension and, in some instances, physical pain. They are people like Linda Kay Klein, who embraced the movement in her teens but left it in disenchantment at 21, two decades ago.

She described the trauma and the shame she felt this way: “I would find myself in tears and in a ball in the corner of a bed, crying, my eczema coming out, which it does when I’m stressed, and scratching myself till I bled, and having a deep shame reaction.” Ms. Klein found she was far from alone. She collected tales of enduring anxiety in a book, “Pure: Inside the Evangelical Movement That Shamed a Generation of Young Women and How I Broke Free” (Touchstone, 2018). “We went to war with ourselves, our own bodies and our own sexual natures,” she wrote, “all under the strict commandment of the church.”

It was under the aegis of the Southern Baptist Convention that the vow of virginity took distinct form, in True Love Waits, a program begun in 1993. As the movement grew in the ’90s, estimates of teenage adherents reached as high as 2.5 million worldwide. Youngsters wore purity rings, signed purity pledge cards and attended purity balls, with girls dressed in white and escorted by their fathers.

The fundamental message, inspired by a verse from Paul the Apostle’s First Epistle to the Thessalonians, was this: “I am making a commitment to myself, my family and my Creator that I will abstain from sexual activity of any kind before marriage. I will keep my body and my thoughts pure as I trust in God’s perfect plan for my life.”

Separate from religious imperatives, American teenagers in general have become warier of premarital relations — and certainly of unprotected sex. According to the federal government, there were 61.8 births in 1991 for every 1,000 young women in the 15-to-19 age group. By 2018, that figure had dwindled to 17.4, a decline that cut across racial and ethnic lines.

Among those who regarded purity in terms of spiritual enlightenment, few in the ’90s came to be more celebrated than Joshua Harris, a young man who preached that even sex-free dating was a dangerous first step on the slippery slope of a compromised life. His 1997 book “I Kissed Dating Goodbye” sold roughly a million copies. In his writings and speeches, Mr. Harris advocated courtship under the watchful eyes of a couple’s parents.

His message back then, he recalled for Retro Report, was that one should avoid conventional dating just as an alcoholic ought to steer clear of a bar. “It was, like, if you don’t want to have sex,” he said, “then don’t get into these sorts of short-term romantic relationships where there is an expectation to become intimate.”

Controlling teenage hormones, however, is easier said than done. Mr. Harris, who lives in Vancouver, eventually pulled his book from circulation, and has apologized for the role he played in causing anyone feelings of shame, fear and guilt. Today, he no longer considers himself a Christian.

Part of the problem for some critics of the movement is . . .

Continue reading.

Written by LeisureGuy

8 April 2021 at 7:49 pm

The radical aristocrat who put kindness on a scientific footing

leave a comment »

Lydia Syson has an interesting article in Psyche, which begins:

Five years had passed since Czar Alexander II promised the emancipation of the serfs. Trusting in a map drawn on bark with the point of a knife by a Tungus hunter, three Russian scientists set out to explore an area of trackless mountain wilderness stretching across eastern Siberia. Their mission was to find a direct passage between the gold mines of the river Lena and Transbaikalia. Their discoveries would transform understanding of the geography of northern Asia, opening up the route eventually followed by the Trans-Manchurian Railway. For one explorer, now better known as an anarchist than a scientist, this expedition was also the start of a long journey towards a new articulation of evolution and the strongest possible argument for a social revolution.

Prince Peter Kropotkin, the aristocratic graduate of an elite Russian military academy, travelled in 1866 with his zoologist friend Ivan Poliakov and a topographer called Maskinski. Boat and horseback took them to the Tikono-Zadonsk gold mine. From there, they continued with 10 Cossacks, 50 horses carrying three months’ supply of food, and an old Yukaghir nomad guide who’d made the journey 20 years earlier.

Kropotkin and Poliakov – enthusiastic, curious and well-read young men in their 20s – were fired by the prospect of finding evidence of that defining factor of evolution set out by Charles Darwin in On the Origin of Species (1859): competition. They were disappointed. As Kropotkin later wrote:

We saw plenty of adaptations for struggling, very often in common, against the adverse circumstances of climate, or against various enemies, and Polyakoff wrote many a good page upon the mutual dependency of carnivores, ruminants, and rodents in their geographical distribution; we witnessed numbers of facts of mutual support … [but] facts of real competition and struggle between higher animals of the same species came very seldom under my notice, though I eagerly searched for them.

Kropotkin pursued this contradiction for decades. Observation and wide reading convinced him that what he’d seen in Siberia was no exception, but a rule. In the 1860s, he watched a vast exodus of fallow deer gather in their thousands to cross the river Amur at its narrowest point to escape an early snowfall. In 1882, he was fascinated by a crab stuck on its back in a tank in Brighton Aquarium; it was painstakingly rescued by a band of comrades. Kropotkin collected descriptions from all over the world of the sociable behaviours of ants, bees, termites, falcons, swallows, horned larks, migrating birds, gazelles, buffalo, colonies of beavers, squirrels, mice, flocks of seals, herds of wild horses, tribes of dogs, wolf packs, marmots, rats, chinchillas, as well as apes and monkeys. He wrote that:

[A]s we ascend the scale of evolution, we see association growing more and more conscious. It loses its purely physical character, it ceases to be simply instinctive, it becomes reasoned.

It proved impossible for Kropotkin, a man ‘amiable to the point of saintliness’ according to George Bernard Shaw, to dedicate himself entirely to the ‘highest joys’ of scientific discovery, when all around him he saw ‘nothing but misery and struggle for a mouldy bit of bread’, as he put it in his Memoirs of a Revolutionist (1899). In 1872, in Switzerland, he became an anarchist, impressed by the egalitarian fraternity he found among the watchmakers of Jura. Back in Russia, he joined the revolutionary Circle of Tchaikovsky, disseminating underground literature and lecturing to the workers of St Petersburg disguised as Borodin the peasant agitator. His propaganda landed him in prison, but he escaped in 1876 with the help of comrades. By 1883, he was a political prisoner once again, this time in France. This second confinement gave him time to develop his arguments about evolution: he started to address systematically the conflicting interpretations of Darwin emerging in different parts of the world.

In England, the biologist, anthropologist and anatomist Thomas Huxley had quickly emerged as ‘Darwin’s bulldog’. Self-described as sharp of ‘claws and beak’, Huxley was prepared to ‘go to the Stake if requisite’ to defend evolutionary doctrine. His views on human nature and political economy were defined by Thomas Hobbes and Thomas Robert Malthus: life was an endless fight for scarce resources. The libertarian Herbert Spencer likewise applied natural selection to economics, using his infamous coinage the ‘survival of the fittest’ to justify laissez-faire capitalism. Popularly labelled ‘social Darwinism’, this view became gospel for Gilded Age industrialists such as John D Rockefeller. Although Huxley himself didn’t recommend the ‘survival of the fittest’ rule as a basis for morality – quite the reverse – he certainly believed that human beings were brutal and competitive, their sociability merely a recent veneer, rationalised by self-interest.

After Huxley published his pessimistic essay ‘The Struggle for Existence and Its Bearing Upon Man’ (1888) in The Nineteenth Century, an influential Victorian monthly review, Kropotkin was in a good position to launch an attack on Huxley’s idea of nature as a ‘gladiator’s show’. By this time, having been released from prison following an international outcry, Kropotkin was established in England, becoming quite a celebrity in the socialist and anarchist circles that blossomed through the mid-1880s. He promoted his political ideas in the international Left-wing press, and cofounded a London-based journal called Freedom, but made a living writing for scientific periodicals.

Between 1890 and 1915, in a series of interdisciplinary essays, Kropotkin drew on biology, sociology, history, (anti-racist) ethnology and anthropology to argue that species can organise and cooperate to overcome the natural environment and ensure their future survival. In 1902, the first eight essays were brought together in a book entitled Mutual Aid: A Factor of Evolution, an account of mutual support in action across the animal world (from microorganisms to mammals), ancient and modern ‘barbarian’ and ‘savage’ societies, medieval city-states and, finally, among modern humanity.

Kropotkin sought to recover an uncorrupted Darwin, whose metaphors should not be read too literally. But his call to understand compassion as ‘a powerful factor of further evolution’ cleared the way for a very particular political vision: human beings could overcome competitive struggle by voluntarily restructuring and decentralising society along principles of community and self-sufficiency.

Kropotkin became enamoured with mutual aid after reading an 1880 lecture on the subject by the celebrated zoologist Karl Kessler. Like other Russian naturalists at the time, Kessler didn’t deny the struggle for existence, but his own fieldwork in harsh and sparsely populated regions of the Russian empire strongly suggested that ‘the progressive development of the animal kingdom, and especially of mankind, is favoured much more by mutual support than by mutual struggle’. But, as Kropotkin mourned: ‘like so many good things published in the Russian tongue only, that remarkable address remains almost entirely unknown’.

Neither was Kropotkin alone politically. The historian of science Eric Johnson has recently demonstrated that . . .

Continue reading.

Written by LeisureGuy

3 April 2021 at 3:15 pm

Understanding Legal Argument (1): The Five Types of Argument

leave a comment »

John Danaher writes at Philosophical Disquisitions:

I have been teaching about legal reasoning and legal argumentation for years. When I do so, I try to impress upon students that legal argument is both simple and complex.

It is simple because in every legal case there is, in essence, one basic type of argument at the core of the dispute between the parties. This argument works from a general legal rule to a conclusion about the application of that rule to a set of facts. Philosophers and logicians would say that the basic form of legal argument is a syllogism: a simple three-step argument involving a major premise (a general principle or rule), a minor premise (a claim about a particular case or scenario) and then a conclusion (an application of the general rule to the particular case).

Here is a simple conditional syllogism:

  • (1) If roses are red, then violets are blue. (Major Premise)
  • (2) Roses are red. (Minor Premise)
  • (3) Therefore, violets are blue. (Conclusion)

My view is that legal arguments take on a similar conditional, syllogistic form. There is a legal rule that stipulates that if certain conditions are met, then certain legal consequences will follow. This is the major premise of legal argument. Then there is a set of facts to which that rule may apply. This is the minor premise of legal argument. When you apply the rule to the facts you get a conclusion.

In abstract form, all legal arguments look like this:

  • (1) If conditions A, B and C are satisfied, then legal consequences X, Y and Z follow. (Major premise: legal rule)
  • (2) Conditions A, B and C are satisfied (or not). (Minor Premise: the facts of the case)
  • (3) Therefore, legal consequences X, Y and Z do (or do not) follow. (Conclusion: legal judgment in the case).

To give a more concrete example, imagine a case involving a potential murder:

  • (1*) If one person causes another person’s death through their actions, and they performed those actions with intent to kill or cause grievous bodily harm, and they had no lawful excuse for those actions, then they are guilty of murder and may be punished accordingly.
  • (2*) Cain caused Abel’s death through his actions and in doing so he intended to kill and acted without lawful excuse.
  • (3*) Therefore, Cain is guilty of murder and may be punished accordingly.

Simple, right? Unfortunately, it is not. Although this basic argument is the core of all legal disputes, it is not the totality of those disputes. The problem is that legal rules don’t just show up and apply themselves to particular cases. There are lots of potential legal rules that could apply to a given set of facts. And there are lots of qualifications and exceptions to legal rules. You have to argue for the rules themselves and show why a particular rule (or major premise) should apply to a particular case. In addition to this, the facts of the case don’t just establish themselves. They too need to be argued for, and the law adopts a formalised procedure for establishing facts, at least when a case comes to trial.

In this two-part article, I want to examine some of the complexities of legal argument. I do so first by examining the different kinds of argument you can present in favour of, or against, particular legal rules (i.e. for and against the major premise of legal argument). Understanding these kinds of arguments is the main function of legal education. People who study law at university or in professional schools spend a lot of their time examining all the different ways in which lawyers try to prove that a certain rule should apply to a given set of facts.

Several authors have presented frameworks and taxonomies that try to bring some order to the chaos of arguments for legal rules. I quite like Wilson Huhn’s framework The Five Types of Legal Argument, which not only does a good job of reducing legal argument down to five main forms, but also identifies all the different ways of arguing for or against a legal rule within those five main forms. I’ll try to explain Huhn’s framework, in an abbreviated fashion, in the remainder of this article. I should say, however, that I have modified his framework somewhat over the years and I’m not entirely clear on which bits of it are his and which bits are my own modification. Most of it is his. Some bits are mine (and most of the examples are ones that I use in my teaching and not ones that come from Huhn’s book).

1. Argument from Text

For better or worse, law has become a text-based discipline. There are authoritative legal texts — constitutions, statutes, case judgments and so on — that set down legal rules. Consequently, one of the most common forms of legal argument is to identify the case-relevant legal texts and then use them to figure out the relevant rule. This is the first type of legal argument in Huhn’s framework and perhaps the starting point for most legal arguments.

Here’s a real example. Suppose . . .

Continue reading. There’s much more, since he describes all five types of argument.
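Danaher’s syllogistic core is simple enough to capture in a few lines of code. Here is a minimal sketch in Python (my own illustration, not something from Danaher’s article) encoding the murder rule above: the major premise becomes a function from conditions to a legal consequence, the minor premise supplies the facts, and the conclusion is just the rule applied to the facts.

```python
# A minimal sketch of the syllogistic form of legal argument
# (my illustration, not from Danaher's article).
from dataclasses import dataclass

@dataclass
class Facts:
    caused_death: bool    # Condition A: the actions caused the death
    intended_harm: bool   # Condition B: intent to kill or cause grievous bodily harm
    lawful_excuse: bool   # Condition C: a lawful excuse existed

def murder_rule(facts: Facts) -> str:
    """Major premise: if A and B hold and C does not, guilt follows."""
    if facts.caused_death and facts.intended_harm and not facts.lawful_excuse:
        return "guilty of murder"
    return "not guilty of murder (under this rule)"

# Minor premise: the facts of Cain's case.
cain = Facts(caused_death=True, intended_harm=True, lawful_excuse=False)

# Conclusion: the rule applied to the facts.
print(murder_rule(cain))  # -> guilty of murder
```

As Danaher goes on to argue, the hard part of legal reasoning is establishing the premises (which rule governs, and whether the facts are proven), not this final mechanical step of application.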

Written by LeisureGuy

31 March 2021 at 7:29 pm
