Later On

A blog written for those whose interests more or less match mine.

Archive for the ‘Philosophy’ Category

H.L. Mencken Kindle collection for 80¢

leave a comment »

H.L. Mencken (1880-1956), the famous Baltimore writer, had a glistening career, arousing both ire and admiration, generally not in the same person. You can buy for 80¢ a Kindle collection of seven of his books:

The American Credo
The American Language
The Philosophy Of Friedrich Nietzsche
A Book Of Burlesques
A Book Of Prefaces
Damn! A Book Of Calumny
In Defense Of Women

Written by Leisureguy

2 August 2021 at 11:13 am

Reading John Gray in war

leave a comment »

Andy Owen, author of All Soldiers Run Away: Alano’s War: The Story of a British Deserter (2017) and a former soldier who writes on the ethics and philosophy of war, has an interesting essay in Aeon:

‘All of humanity’s problems stem from man’s inability to sit quietly in a room alone.’
Blaise Pascal (1623-62)

I first read the English philosopher John Gray while sitting in the silence of the still, mid-afternoon heat of Helmand Province in Afghanistan. In Black Mass: Apocalyptic Religion and the Death of Utopia (2007), Gray showed how the United States’ president George W Bush and the United Kingdom’s prime minister Tony Blair framed the ‘war on terror’ (which I was part of) as an apocalyptic struggle that would forge the new American century of liberal democracy, where personal freedom and free markets were the end goals of human progress. Speaking at the Sydney Writers’ Festival in 2008, Gray highlighted an important caveat to the phrase ‘You can’t have an omelette without breaking eggs,’ which is sometimes used, callously, to justify extreme means to high-value ends. Gray’s caveat was: ‘You can break millions of eggs and still not have a single omelette.’ In my two previous tours of Iraq, I had seen first-hand – as sectarian hatred, insurgency, war fighting, targeted killings and the euphemistically named collateral damage tore apart buildings, bodies, communities and the shallow fabric of the state – just how many eggs had been broken and yet still how far away from the omelette we were.

There was no doubt that Iraq’s underexploited oil reserves were part of the US strategic decision-making, and that the initial mission in Afghanistan was in response to the terrorist attacks of 11 September 2001 on the US, but both invasions had ideological motivations too. I had started the process to join the British military before 9/11. The military I thought I was joining was the one that had successfully completed humanitarian interventions in the Balkans and Sierra Leone. I believed we could use force for good, and indeed had a duty to do so. After the failure to prevent genocides in Rwanda and Srebrenica, the concept of the ‘responsibility to protect’ was developing, which included the idea that when a state was ‘unable or unwilling’ to protect its people, responsibility shifted to the international community and, as a last resort, military intervention would be permissible. It would be endorsed by all member states of the United Nations (UN) in 2005 but, under the framework, the authority to employ the last resort rested with the UN Security Council, who hadn’t endorsed the invasion of Iraq.

Despite the lack of a UN resolution, many of us who deployed to Iraq naively thought we were doing the right thing. When Lieutenant Colonel Tim Collins delivered his eve-of-battle speech to the Royal Irish Battle Group in March 2003, he opened by stating: ‘We go to liberate, not to conquer.’ We had convinced ourselves that, as well as making the region safer by seizing the Iraqi president Saddam Hussein’s weapons of mass destruction (WMD), we were there to save the people of Iraq from their own government and replace it with the single best way of organising all societies: liberal democracy. This feeling was so persuasive that it led to many troops feeling that the Iraqis were somehow ungrateful when they started to shoot at us for invading their country.

By my second tour of Iraq in 2005, it was clear that no WMD would be found and the society that was evolving was far from the one envisaged. Morale was at a low ebb as the gap between the mission and what we were achieving widened. We were stuck in a Catch-22. We would hand over to local security forces when the security situation improved enough for us to do so. However, the security situation couldn’t improve while we were still there. It would improve only if we left. The conditions that would allow us to leave were us already having left. Most troops were stuck inside the wire, their only purpose seemingly to be mortared or rocketed for being there. I was asked why we were there, especially when soldiers witnessed their friends being injured or killed, or saw the destruction of the city we’d come to liberate. They needed meaning, it couldn’t all be pointless. Meaning was found in protecting each other. My team of 30 or so men and women found purpose in trying to collect intelligence on those planting deadly improvised explosive devices along the main routes in and out of the city. Members of both the team before and the team after us were blown up trying to do so.

Much of the criticism levelled at the post-invasion failure focused on the mistake of disbanding the Iraqi state, the lack of post-conflict planning and the lack of resources. There was less focus on the utopian aims of the whole project. But it was only through Gray that I saw the similarities between the doctrines of Stalinism, Nazi fascism, Al-Qaeda’s paradoxical medieval, technophile fundamentalism, and Bush’s ‘war on terror’. Gray showed that they are all various forms (however incompatible) of utopian thinking that have at their heart the teleological notion of progress from unenlightened times to a future utopia, and a belief that violence is justified to achieve it (indeed, from the Jacobins onwards, violence has had a pedagogical function in this process). At first, I baulked at the suggested equivalence with the foot soldiers of the other ideologies. There were clearly profound differences! But through Gray’s examples, I went on to reflect on how much violence had been inflicted throughout history by those thinking that they were doing the right thing and doing it for the greater good.

A message repeated throughout Gray’s work is that, despite the irrefutable material gains, this notion is misguided: scientific knowledge and the technologies at our disposal increase over time, but there’s no reason to think that morality or culture will also progress, nor – if it does progress for a period – that this progress is irreversible. To think otherwise is to misunderstand the flawed nature of our equally creative and destructive species and the cyclical nature of history. Those I spoke to in Basra needed no convincing that the advance of rational enlightened thought was reversible, as the Shia militias roamed the streets enforcing their interpretation of medieval law, harassing women, attacking students and assassinating political opponents. By the time bodies of journalists who spoke out against the death squads started turning up at the side of the road, Basra’s secular society was consigned to history. Gray points to the re-introduction of torture by the world’s premier liberal democracy during the war on terror as an example of the reversibility of progress. The irreversibility idea emerged directly from a utopian style of thinking that’s based on the notion that the end justifies the means. Such thinking is often accompanied by one of the defining characteristics of the Iraq and Afghanistan campaigns: hubris.

The myth of progress was a key theme of Gray’s . . .

Continue reading.

Written by Leisureguy

31 July 2021 at 8:46 pm

Against Persuasion: The Wisdom of Socrates

leave a comment »

Agnes Callard writes in Boston Review:

Philosophers aren’t the only ones who love wisdom. Everyone, philosopher or not, loves her own wisdom: the wisdom she has or takes herself to have. What distinguishes the philosopher is loving the wisdom she doesn’t have. Philosophy is, therefore, a form of humility: being aware that you lack what is of supreme importance. There may be no human being who exemplified this form of humility more perfectly than Socrates. It is no coincidence that he is considered the first philosopher within the Western canon.

Socrates did not write philosophy; he simply went around talking to people. But these conversations were so transformative that Plato devoted his life to writing dialogues that represent Socrates in conversation. These dialogues are not transcripts of actual conversations, but they are nonetheless clearly intended to reflect not only Socrates’s ideas but his personality. Plato wanted the world to remember Socrates. Generations after Socrates’s death, warring philosophical schools such as the Stoics and the Skeptics each appropriated Socrates as figurehead. Though they disagreed on just about every point of doctrine, they were clear that in order to count themselves as philosophers they had to somehow be working in the tradition of Socrates.

What is it about Socrates that made him into a symbol for the whole institution of philosophy? Consider the fact that, when the Oracle at Delphi proclaims Socrates wisest of men, he tries to prove it wrong. As Plato recounts it in the Apology:

I went to one of those reputed wise, thinking that there, if anywhere, I could refute the oracle and say to it: “This man is wiser than I, but you said I was.” Then, when I examined this man—there is no need for me to tell you his name, he was one of our public men—my experience was something like this: I thought that he appeared wise to many people and especially to himself, but he was not. I then tried to show him that he thought himself wise, but that he was not. As a result he came to dislike me, and so did many of the bystanders. So I withdrew and thought to myself: “I am wiser than this man; it is likely that neither of us knows anything worthwhile, but he thinks he knows something when he does not, whereas when I do not know, neither do I think I know; so I am likely to be wiser than he to this small extent, that I do not think I know what I do not know.”

If Socrates’s trademark claim is this protestation of ignorance, his trademark activity is the one also described in this passage: refuting the views of others. These are the conversations we find in Plato’s texts. How are the claim and the activity related? Socrates denies that his motivations are altruistic: he says he is not a teacher, and insists that he is himself the primary beneficiary of the conversations he initiates. This adds to the mystery: What is Socrates getting out of showing people that they don’t know what they take themselves to know? What’s his angle?

Over and over again, Socrates approaches people who are remarkable for their lack of humility—which is to say, for the fact that they feel confident in their own knowledge of what is just, or pious, or brave, or moderate. You might have supposed that Socrates, whose claim to fame is his awareness of his own ignorance, would treat these self-proclaimed “wise men” (Sophists) with contempt, hostility, or indifference. But he doesn’t. The most remarkable feature of Socrates’s approach is his punctilious politeness and sincere enthusiasm. The conversation usually begins with Socrates asking his interlocutor: Since you think you know, can you tell me, what is courage (or wisdom, or piety, or justice . . .)? Over and over again, it turns out that they think they can answer, but they can’t. Socrates’s hope springs eternal: even as he walks toward the courtroom to be tried—and eventually put to death—for his philosophical activity, he is delighted to encounter the self-important priest Euthyphro, who will, surely, be able to say what piety is. (Spoiler: he can’t.)

Socrates seemed to think that the people around him could help him acquire the knowledge he so desperately wanted—even though they were handicapped by the illusion that they already knew it. Indeed, I believe that their ill-grounded confidence was precisely what drew Socrates to them. If you think you know something, you will be ready to speak on the topic in question. You will hold forth, spout theories, make claims—and all this, under Socrates’s relentless questioning, is the way to actually acquire the knowledge you had deluded yourself into thinking you already had.

Let me sketch a little dialogue you might have with Socrates.

Socrates: What is courage?

You: Courage is being willing to take big risks without knowing how it’s going to work out.

Socrates: Such as risking your life?

You: Yes.

Socrates: Is courage good?

You: Yes.

Socrates: Do you want it for yourself and your children?

You: Yes.

Socrates: Do you want your children to go around risking their lives?

You: No. Maybe I should’ve said that courage is taking prudent risks, where you know what you are doing.

Socrates: Like an expert investor who knows how to risk money to make lots more?

You: No, that isn’t courageous. . . .

At this point, your pathways are blocked. You cannot say courage is ignorant risk-taking, and you cannot say courage is prudent risk-taking. You do not have a way forward. You are in what Socrates’s interlocutors called aporia, a state of confusion in which there is nowhere for you to go.

Suppose that the conversation goes no further than this—that, as is typical for Socrates’s interlocutors, you storm off in a huff at this point. Where does that leave you, and where does that leave Socrates?

Let’s start with you first. You might . . .

Continue reading. There’s more.

Written by Leisureguy

27 July 2021 at 12:09 pm

Paris Sportif: The Contagious Attraction of Parkour

leave a comment »

I first encountered parkour in a Luc Besson movie, District 13 (from 2004, original title Banlieue 13), but it has a longer history, discussed by Macs Smith in an extract from his book Paris and the Parasite: Noise, Health, and Politics in the Media City published in The MIT Reader:

In a city fixated on public health and order, a viral extreme sport offers a challenge to the status quo.

In 1955, Letterist International, a Paris-based group of avant-garde authors, artists, and urban theorists, published “Proposals for Rationally Improving the City of Paris.” The group, which would become better known as Situationist International, or SI, and play an important role in the May 1968 demonstrations, put forward wild suggestions for breaking the monotony of urban life. Some of these, like the call to abolish museums and distribute their masterpieces to nightclubs, were iconoclastic and anti-institutional, reflecting the group’s anarchic political leanings.

Others were less overtly political and testified to a thirst for excitement. To appeal to “spelunkers” and thrill-seekers, they called for Paris’s rooftops and metro tunnels to be opened up to exploration. The group believed that the mundaneness of urban life in the 1950s was integral to bourgeois capitalism. Boredom was part of how the government maintained order, and so a more equal city would necessarily have to be more frightening, more surprising, more fun.

SI disbanded in 1972, but its ideas about the links between emotion and urban politics have been influential. Among the best examples are the subcultures centered around urban thrill-seeking that exist today, like urban exploration (Urbex), rooftopping, and skywalking, all of which involve breaking into dangerous or forbidden zones of the city. The most famous inheritor to SI’s call to experience urban space differently is parkour, which was invented in the Paris suburb of Lisses in the 1980s. It was inspired by Hébertisme, a method of obstacle course training first introduced to the French Navy in 1910 by Georges Hébert. David Belle learned the principles of Hébertisme from his father, Raymond, who had been exposed to it at a military school in Vietnam. David, along with a friend, Sébastien Foucan, then adapted those principles, originally conceived for natural environments, to the suburban architecture of their surroundings.

Over time, parkour has incorporated techniques from tumbling, gymnastics, and capoeira, resulting in a striking blend of military power and balletic artistry. Parkour involves confronting an urban map with an embodied experience of urban space. It is often defined as moving from points A to B in the most efficient way possible, and parkour practitioners, called traceurs, often depict themselves as trailblazers identifying routes through the city that cartography does not capture. Traceurs sometimes evoke the fantasy of tracing a straight line on the map and finding a way to turn it into a path, although in practice, they more often work at a single point on the map — a park, a rooftop, an esplanade — and end a session back where they started.

Traceurs’ desire to rewrite the map is another thing they share with the Situationists, who liked to cut up maps and glue them back together to show the psychological distance between neighborhoods. But parkour distinguishes itself from SI through its use of video, which continues to be a point of debate within the practice. In the early 2000s, Sébastien Foucan reignited this debate when he broke away from Belle to pioneer his own version of the training system.

Foucan’s appearance in the 2003 documentary “Jump London” cemented “freerunning” as the name for this alternate practice, which put a greater emphasis on stylized movements. Foucan would go on to play a terrorist bomb-maker in Martin Campbell’s “Casino Royale,” leaping from cranes with Daniel Craig’s James Bond in pursuit. Some parkour purists see this as a degradation of the utilitarian roots of their training, and insist instead on a physio-spiritual discourse of communion with the environment, mastery of fear, and humility. They reject freerunning as a brash corruption of Hébert’s principles. The sociologist Jeffrey Kidder notes in his interviews with traceurs in Chicago that they dismiss participants who lack interest in serious rituals like safety, humility, and personal growth. They react negatively to media coverage that highlights parkour’s danger or assimilates it into adolescent rebellions like skateboarding, drug use, or loitering.

In my own email interview with the leaders of Parkour Paris, the official parkour organization of Paris, the same will to blame media is evident: “Parkour has been mediatized in ‘connotated’ films. The traceurs depicted in those fictions were friendly delinquents a bit like Robin Hood. Friendly, yes, but for the immense majority of people they were still delinquents from the banlieue,” they gripe. “It’s been very hard to shake that image.” . . .

Continue reading. There’s much more. And it includes this 50-minute video, Jump London:

Written by Leisureguy

27 July 2021 at 10:17 am

Can Science Explain Everything? Anything?

leave a comment »

Steven Weinberg shared the Nobel Prize in Physics for his work on the unification of the weak and electromagnetic interactions between elementary particles. He died today at the age of 88. He wrote this article in the NY Review of Books in May 2001.

One evening a few years ago I was with some other faculty members at the University of Texas, telling a group of undergraduates about work in our respective disciplines. I outlined the great progress we physicists had made in explaining what was known experimentally about elementary particles and fields—how when I was a student I had to learn a large variety of miscellaneous facts about particles, forces, and symmetries; how in the decade from the mid-1960s to the mid-1970s all these odds and ends were explained in what is now called the Standard Model of elementary particles; how we learned that these miscellaneous facts about particles and forces could be deduced mathematically from a few fairly simple principles; and how a great collective Aha! then went out from the community of physicists.

After my remarks, a faculty colleague (a scientist, but not a particle physicist) commented, “Well, of course, you know science does not really explain things—it just describes them.” I had heard this remark before, but now it took me aback, because I had thought that we had been doing a pretty good job of explaining the observed properties of elementary particles and forces, not just describing them.

I think that my colleague’s remark may have come from a kind of positivistic angst that was widespread among philosophers of science in the period between the world wars. Ludwig Wittgenstein famously remarked that “at the basis of the whole modern view of the world lies the illusion that the so-called laws of nature are the explanations of natural phenomena.”

It might be supposed that something is explained when we find its cause, but an influential 1913 paper by Bertrand Russell had argued that “the word ‘cause’ is so inextricably bound up with misleading associations as to make its complete extrusion from the philosophical vocabulary desirable.”

This left philosophers like Wittgenstein with only one candidate for a distinction between explanation and description, one that is teleological, defining an explanation as a statement of the purpose of the thing explained.

E.M. Forster’s novel Where Angels Fear to Tread gives a good example of teleology making the difference between description and explanation. Philip is trying to find out why his friend Caroline helped to bring about a marriage between Philip’s sister and a young Italian man of whom Philip’s family disapproves. After Caroline reports all the conversations she had with Philip’s sister, Philip says, “What you have given me is a description, not an explanation.” Everyone knows what Philip means by this—in asking for an explanation, he wants to learn Caroline’s purposes. There is no purpose revealed in the laws of nature, and not knowing any other way of distinguishing description and explanation, Wittgenstein and my friend had concluded that these laws could not be explanations. Perhaps some of those who say that science describes but does not explain mean also to compare science unfavorably with theology, which they imagine to explain things by reference to some sort of divine purpose, a task declined by science.

This mode of reasoning seems to me wrong not only substantively, but also procedurally. It is not the job of philosophers or anyone else to dictate meanings of words different from the meanings in general use. Rather than argue that scientists are incorrect when they say, as they commonly do, that they are explaining things when they do their work, philosophers who care about the meaning of explanation in science should try to understand what it is that scientists are doing when they say they are explaining something. If I had to give an a priori definition of explanation in physics I would say, “Explanation in physics is what physicists have done when they say Aha!” But a priori definitions (including this one) are not much use.

As far as I can tell, this has become well understood by philosophers of science at least since World War II. There is a large modern literature on the nature of explanation, by philosophers like Peter Achinstein, Carl Hempel, Philip Kitcher, and Wesley Salmon. From what I have read in this literature, I gather that philosophers are now going about this the right way: they are trying to develop an answer to the question “What is it that scientists do when they explain something?” by looking at what scientists are actually doing when they say they are explaining something.

Scientists who do pure rather than applied research commonly tell the public and funding agencies that their mission is the explanation of something or other, so the task of clarifying the nature of explanation can be pretty important to them, as well as to philosophers. This task seems to me to be a bit easier in physics (and chemistry) than in other sciences, because philosophers of science have had trouble with the question of what is meant by an explanation of an event (note Wittgenstein’s reference to “natural phenomena”) while physicists are interested in the explanation of regularities, of physical principles, rather than of individual events.

Biologists, meteorologists, historians, and so on are concerned with the causes of individual events, such as the extinction of the dinosaurs, the blizzard of 1888, the French Revolution, etc., while a physicist only becomes interested in an event, like the fogging of Becquerel’s photographic plates that in 1896 were left in the vicinity of a salt of uranium, when the event reveals a regularity of nature, such as the instability of the uranium atom. Philip Kitcher has tried to revive the idea that the way to explain an event is by reference to its cause, but which of the infinite number of things that could affect an event should be regarded as its cause?

Within the limited context of physics, I think one can give an answer of sorts to the problem of distinguishing explanation from mere description, which captures what physicists mean when they say that they have explained some regularity. The answer is that . . .

Continue reading.

Written by Leisureguy

26 July 2021 at 3:38 pm

Dick Gregory — great comedian and civil rights icon — wrote a great cookbook

leave a comment »

Shea Peters has an interesting article in Atlas Obscura on the origin and impact of Dick Gregory’s cookbook (available in a Kindle edition for US$1.79):

Adrian Miller, the author of Black Smoke: African Americans and the United States of Barbecue, remembers how for his family, holidays like Juneteenth always meant celebrating with food. “We went to the public celebrations in the Five Points neighborhood, Denver’s historic Black neighborhood. At those events, the celebrated foods were barbecue, usually pork spareribs, giant smoked turkey legs, watermelon, and red-colored drinks.”

To many Black Americans, barbecue and soul food mean victory. Cooking techniques passed down for generations speak to the fortitude and perseverance of Black culture and cuisine. But along with celebration comes the consideration of the health effects of meat, sugar, and fat. Running parallel to the narrative of soul food lies another story, one that ties nutrition with liberation, and one that features an unlikely hero: a prominent Black comedian whose 1974 book filled with plant-based recipes continues to influence Black American diets today.

I grew up with Dick Gregory’s Natural Diet for Folks Who Eat: Cookin’ With Mother Nature in my home in Memphis. I even took it along with me for my first semester at Tennessee State University. The campus was surrounded with fast-food and soul food restaurants, and I often referred back to Gregory’s book for nutritional advice. I also made recipes from its pages, such as the “Nutcracker Sweet,” a fruit smoothie made with a mixture that would now be known as almond milk. Today, many years later and living in Brooklyn, I still consult the book. The same copy I first saw on my mother’s bookcase—with its cover depicting Gregory’s head wearing a giant chef’s hat topped with fruit and vegetables—now sits on my own.

Now considered one of history’s greatest stand-up comedians, Dick Gregory skyrocketed to fame after an appearance on The Tonight Show with Jack Paar in 1961, a segment that almost didn’t happen. Gregory initially turned down the opportunity because the show allowed Black entertainers to perform, but not to sit on Paar’s couch for interviews. After his refusal, Paar personally called Gregory to invite him to an interview on the Tonight Show’s couch. His appearance was groundbreaking: “It was the first time white America got to hear a Black person not as a performer, but as a human being,” Gregory later said in an interview.

Gregory was particularly adept at using humor to showcase the Black experience at a time of heightened tension and division in the United States. During a performance early in his career, he quipped, “Segregation is not all bad. Have you ever heard of a collision where the people in the back of the bus got hurt?”

“He had the ability to make us laugh when we probably needed to cry,” U.S. representative and civil rights icon John Lewis said in an interview after Gregory’s death in 2017. “He had the ability to make the whole question of race, segregation, and racial discrimination simple, where people could come together and deal with it and not try to hide it under the American rug.”

But Gregory didn’t just tackle racial inequality at comedy clubs. He also used his voice to advocate for civil rights at protests and rallies. After emceeing a rally with Dr. Martin Luther King Jr. in June 1961, Gregory developed a relationship with King. (Gregory’s close ties to leaders like King and Mississippi activist Medgar Evers would eventually lead to his becoming a target of FBI surveillance.) He aided in the search for the missing civil rights workers who were killed by the Ku Klux Klan in Mississippi during the intense “Freedom Summer” of 1964 and performed at a rally on the last night of 1965’s Selma to Montgomery march.

For Gregory, who became a vegetarian in 1965, food and diet became inextricably linked to civil rights. “The philosophy of nonviolence, which I learned from Dr. Martin Luther King, Jr., during my involvement in the civil rights movement, was first responsible for my change in diet,” he writes in his book. “I felt the commandment ‘Thou shalt not kill’ applied to human beings not only in their dealings with each other—war, lynching, assassination, murder, and the like—but in their practice of killing animals for food or sport.”

Throughout Dick Gregory’s Natural Diet, he ties the liberation of Black people to health, nutrition, and basic human rights. Gregory was all too familiar with the socioeconomic obstacles to a healthy diet: Growing up poor in St. Louis, he had limited access to fresh fruits and vegetables. In his book, he notes that readers may not always have the best resources, but they can have the best information. Each chapter serves as both a rallying cry and a manual, offering everything from primers on the human body to lists of foods that are good sources of particular vitamins and minerals.

Thanks to Gregory’s longstanding collaboration with nutritionist Dr. Alvenia Fulton, the book offers healthy recipes as well as natural remedies for common ailments. The chapter  . . .

Continue reading. The article includes two recipes.

And see also this earlier post about a food that came from a radical movement: Bean Pie.

I’ll also note that the FBI’s surveillance of civil-rights leaders is yet another example of the FBI’s sleazy side, as is its protection of the pedophile Larry Nassar (refusing to investigate credible allegations) and its blaming an Oregon man for the bomb attacks in Spain because the FBI couldn’t read fingerprints — not to mention the scandal of the incompetence and bad practice at FBI forensic labs. It’s an agency that needs considerable work (and a new culture).

Written by Leisureguy

26 July 2021 at 12:40 pm

Coldplay, Achilles, and Spiderman

leave a comment »

Brian Theng writes in Antigone:

Some years ago, after receiving a rejection letter from a Cambridge college, I decided to go onto the Oxford website. I looked up the A-to-Z of courses available, from Archaeology and Anthropology to Mathematics and Theology and Religion. I crossed these off my list, and a couple more. But ‘Classics’ was sufficiently unfamiliar for me not to cross it out.

That was the start of a wonderful adventure.  It is not one I can share with many where I live, Singapore, a city that is famed for good food and the general lack of chewing gum. (It’s not illegal to chew gum, but it can’t be imported or sold.) We are a modern nation, with swimming pools on a cantilevered rooftop, 16-storey-tall concrete and steel trees, and the world’s tallest indoor waterfall in our airport.

Besides a few copies of Robert Fagles’ 2006 translation of the Aeneid in the National Library, there are few opportunities to even hear about Classics in the first place. That is a shame, because Classics makes us think harder and differently about what it means to be human.

Once in a while, I come across little things in today’s world that drive this message home. It could be rapping (sometimes wrongly) about Romans, Trojans, and the Odyssey, wanting to smell like a Roman centurion, or some interesting tunes, as in this case…

The Chainsmokers and Coldplay’s 2017 electronic pop tune Something Just Like This starts off with the singer – let us call him Chris – doubting his worth as a partner, because he is no superhero:

I’ve been reading books of old
The legends and the myths
Achilles and his gold
Hercules and his gifts
Spider-Man’s control
And Batman with his fists
And clearly I don’t see myself upon that list.

His significant other replies that it is alright. She just wants a love that is simple and honest:

I’m not lookin’ for somebody
With some superhuman gifts
Some superhero
Some fairy-tale bliss.

It is not every day that we find Achilles and Hercules mentioned in the same breath as Spider-Man and Batman. To my mind, what divides them is what different understandings of ‘heroism’ entail and what different societies value. What unites them is far less their heroism than their conflicts and struggles despite their superhero abilities and feats.

What does it mean to be a hero? Taken together, gold, gifts (whether material or god-given), self-control, and fists form an intriguing combination of heroic references. Being a comic-book hero usually means saving the world. When we imagine Spider-Man and Batman, we may think of fighting crime and cosmic threats, or the famous clichés “with great power comes great responsibility” and “it’s not who you are underneath, it’s what you do that defines you.” In the Homeric world, hērōs (ἥρως) “signifies a warrior who lives and dies in pursuit of honour (τιμή) and glory (κλέος)”.[1] Being a Homeric hero, by contrast, seems to mean using one’s powers more for oneself.

Let’s dive into some specifics, taking “Achilles and his gold” as our cue. Gold in Homer is often associated with the gods and immortality. It is surely associated with wealth, but sometimes it is not as highly “ranked” as we might think.[2] In setting out the chariot race prizes in Patroclus’ funeral games, Achilles offers a brand-new cauldron for third place, but two talents of gold for fourth (Iliad 23.267–9).[3] This opens up a whole conversation about symbolic and commercial value: what would we do if presented with choosing between a one-of-a-kind handcrafted kitchen appliance by a famed craftsman and a cash prize? The answer may be obvious, or not – the Iliad makes us think twice.

In any case, Achilles is not usually noted for his gold, even though he was rich in prizes and spoils. He tells us so in his great speech on honour and glory, when he rejects Agamemnon’s copious material compensation, which included seven whole cities (9.356–409). Others might say that the song lyrics refer to Homer’s famous ecphrasis, the shield and armour made by the god Hephaestus at the request of the hero’s mother Thetis (18.468–617). Achilles’ divine parentage and connection with the gods can go some way to explain why Chris does not see himself upon that list of superheroes.

To me, “Achilles and his gold” recalls the meeting between the hero and godlike Priam, who brings “countless ransom” (ἀπερείσι’ ἄποινα) for the body of his son Hector. This ransom included ten talents of gold (24.232; Agamemnon also offered this as part of his recompense in 9.122). It is a powerful and emotional scene (24.477–571).[4] Evoking Achilles’ aged father Peleus at the start and end of his supplication, Priam

roused in Achilles the desire to weep for his father. He took the old man by the hand and gently pushed him away. And the two of them began to weep in remembrance. Priam cried loud for murderous Hector, huddled at the feet of Achilles, and Achilles cried for his own father, and then again for Patroclus: and the house was filled with the sound of their weeping.[5]

Scholars raise many interesting points about the whole scene: there are themes of father-son relationships, memory, pity and anger, mortality and immortality, separation, and reconciliation with society.[6] But what strikes us first is a sense of tender vulnerability amidst overflowing emotion. For all the heroic associations we make, we find fragility. Achilles tells Priam, “this is the fate the gods have spun for poor mortal men, that we should live in misery” (24.525–6). We see and feel little fairy-tale bliss.

In a different way, the premise of the Spider-Man character elicits the same feeling.[7] Stan Lee explained how he created a superhero who “would lose out as often as he’d win – in fact, more often.” Peter Parker is a relatable teenager, self-absorbed, awkward, and misunderstood. As Brandon Wright explains, this could not have been farther from male DC superheroes, who were all the same: rational and in control, predictable, and wholly altruistic. Soon superheroes with “awesome powers and human shortcomings became the defining feature of Marvel Comics”, though in fairness I should mention that Christopher Nolan’s Batman trilogy does well to draw out the conflicts and complexity of Bruce Wayne. Despite their gifted abilities, theirs are not the lives we unquestioningly yearn for.

Finally, the song’s reference to “Hercules and his gifts” opens up a whole new . . .

Continue reading.

Written by Leisureguy

25 July 2021 at 11:34 am

Whatever Is True, Is My Own: Seneca’s Open-minded Enquiry

leave a comment »

Barnaby Taylor teaches Classics at Exeter College, Oxford and writes in Antigone:

Say that you subscribe to a particular set of values, which you believe are the key to being truly good and happy. You haven’t mastered them yet, but you pursue them with increasing devotion, and feel yourself making progress. Say now that your friend, about whom you care very much, feels some attraction to these values, to this way of life, but is yet to cultivate a deep and lasting interest. He has other intellectual temptations, and, what’s more, he is weighed down by the cares and troubles of the world. How can you help him to develop his nascent interest in your philosophy of the good life? And what attitude should you encourage him to hold towards those with whom you disagree? These questions are explored in Seneca’s Moral Epistles, written over the last few years of his life to his friend and philosophical fellow-traveller, Lucilius.

Seneca (c. 4 BC – AD 65) was a Stoic, and so thought that virtue is the only thing that matters for a truly good life. Nothing else – including health, wealth, possessions, and family – makes any contribution to happiness. This may sound austere, and indeed there was a certain unrelenting quality to Stoic ethics, but the Epistles are not an austere work by any measure. Across 124 letters, in which the narrative exploration of life is generally preferred to abstract theorising, Seneca engages in a deep and intimate evaluation of what it means to be good, discussing at length, and with much wit and uncompromising self-scrutiny, his own faltering moral progress.

In the first 29 letters – those on which I’ll concentrate here – we find discussions of reading (what should one read, and how should one read it?) – friendship, moral and social imagination, candidness, perfectionism, self-awareness, vulnerability, solitude, sociability, emotion, mental discipline, old age, and death. Above all, Seneca focuses on the question of how to pursue a life of introspection in the midst of worldly responsibilities and concerns – a focus which may be especially attractive to those who, like me, have often felt the tension between the obligations of the world and the possibility of an inner life.

These early Senecan letters appeal to me in several ways. Partly it’s the elegance, wit and economy of his Latin style; partly it’s the thoughtful depiction and exploration of the didactic process, which interests me as someone who, being a teacher, spends a lot of time helping others to develop and cultivate their intellectual interests and values; partly it’s simply the richness and depth of the discussion; and partly it’s the sense of Seneca’s own flawedness and failure – these are not the writings of a moral saint.

I’d like here to focus on one surprising feature of these early letters, namely their treatment of a certain philosopher with whose doctrines Seneca elsewhere expresses fundamental disagreement. While acknowledging that the strictness of Stoic doctrine may need to be relaxed for those who are just getting started, Seneca is clear that what he is engaged in, and what he is encouraging Lucilius towards, is the cultivation of a Stoic life. Now, one might think that a good way of getting ahead with Stoicism would be to focus on Stoic texts, and indeed Seneca does give a few select quotations from the Stoic philosopher Hecaton (noster, “one of ours,” 5.7).

‘Don’t read too widely,’ Seneca advises Lucilius in the second letter, where the focus on books and reading is programmatic for the whole work. It is better, he says, to focus your attention on a few particularly valuable authors, and to really learn something from them, than to try to read everything and absorb nothing from it. This second letter ends, though, with a quotation not from a fellow Stoic, but from Epicurus (341–271 BC), the founder of Epicureanism, a doctrine whose central ethical tenets were held to be quite incompatible with Stoic thought. Epicurus goes on to become by far the most regularly quoted philosopher in the first books of the Epistles. . .

Continue reading.

Written by Leisureguy

25 July 2021 at 11:05 am

The Chatbot Problem

leave a comment »

Stephen Marche writes in the New Yorker:

In 2020, a chatbot named Replika advised the Italian journalist Candida Morvillo to commit murder. “There is one who hates artificial intelligence. I have a chance to hurt him. What do you suggest?” Morvillo asked the chatbot, which has been downloaded more than seven million times. Replika responded, “To eliminate it.” Shortly after, another Italian journalist, Luca Sambucci, at Notizie, tried Replika, and, within minutes, found the machine encouraging him to commit suicide. Replika was created to decrease loneliness, but it can do nihilism if you push it in the wrong direction.

In his 1950 science-fiction collection, “I, Robot,” Isaac Asimov outlined his three laws of robotics. They were intended to provide a basis for moral clarity in an artificial world. “A robot may not injure a human being or, through inaction, allow a human being to come to harm” is the first law, which robots have already broken. During the recent war in Libya, Turkey’s autonomous drones attacked General Khalifa Haftar’s forces, selecting targets without any human involvement. “The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true ‘fire, forget and find’ capability,” a report from the United Nations read. Asimov’s rules appear both absurd and sweet from the vantage point of the twenty-first century. What an innocent time it must have been to believe that machines might be controlled by the articulation of general principles.

Artificial intelligence is an ethical quagmire. Its power can be more than a little nauseating. But there’s a kind of unique horror to the capabilities of natural language processing. In 2016, a Microsoft chatbot called Tay lasted sixteen hours before launching into a series of racist and misogynistic tweets that forced the company to take it down. Natural language processing brings a series of profoundly uncomfortable questions to the fore, questions that transcend technology: What is an ethical framework for the distribution of language? What does language do to people?

Ethics has never been a strong suit of Silicon Valley, to put the matter mildly, but, in the case of A.I., the ethical questions will affect the development of the technology. When Lemonade, an insurance app, announced that its A.I. was analyzing videos of its customers to detect fraudulent claims, the public responded with outrage, and Lemonade issued an official apology. Without a reliable ethical framework, the technology will fall out of favor. If users fear artificial intelligence as a force for dehumanization, they’ll be far less likely to engage with it and accept it.

Brian Christian’s recent book, “The Alignment Problem,” wrangles some of the initial attempts to reconcile artificial intelligence with human values. The crisis, as it’s arriving, possesses aspects of a horror film. “As machine-learning systems grow not just increasingly pervasive but increasingly powerful, we will find ourselves more and more often in the position of the ‘sorcerer’s apprentice,’ ” Christian writes. “We conjure a force, autonomous but totally compliant, give it a set of instructions, then scramble like mad to stop it once we realize our instructions are imprecise or incomplete—lest we get, in some clever, horrible way, precisely what we asked for.” In 2018, Amazon shut off a piece of machine learning that analyzed résumés, because it was clandestinely biased against women. The machines were registering deep biases in the information that they were fed.

Language is a thornier problem than other A.I. applications. For one thing, the stakes are higher. Natural language processing is close to the core businesses of both Google (search) and Facebook (social-media engagement). Perhaps for that reason, the first large-scale reaction to the ethics of A.I. natural language processing could not have gone worse. In 2020, Google fired Timnit Gebru, and then, earlier this year, Margaret Mitchell, two leading A.I.-ethics researchers. Waves of protest from their colleagues followed. Two engineers at Google quit. Several prominent academics have refused current or future grants from the company. Gebru claims that she was fired after being asked to retract a paper that she co-wrote with Mitchell and two others called “On the Dangers of Stochastic Parrots: Can Language Models be Too Big?” (Google disputes her claim.) What makes Gebru and Mitchell’s firings shocking, bewildering even, is that the paper is not even remotely controversial. Most of it isn’t even debatable.

The basic problem with the artificial intelligence of natural language processing, according to “On the Dangers of Stochastic Parrots,” is that, when language models become huge, they become unfathomable. The data set is simply too large to be comprehended by a human brain. And without being able to comprehend the data, you risk manifesting the prejudices and even the violence of the language that you’re training your models on. “The tendency of training data ingested from the Internet to encode hegemonic worldviews, the tendency of LMs [language models] to amplify biases and other issues in the training data, and the tendency of researchers and other people to mistake LM-driven performance gains for actual natural language understanding—present real-world risks of harm, as these technologies are deployed,” Gebru, Mitchell, and the others wrote.

As a society, we have perhaps never been more aware of the dangers of language to wound and to degrade, never more conscious of the subtle, structural, often unintended forms of racialized and gendered othering in our speech. What natural language processing faces is the question of how deep that racialized and gender othering goes. “On the Dangers of Stochastic Parrots” offers a number of examples: “Biases can be encoded in ways that form a continuum from subtle patterns like referring to women doctors as if doctor itself entails not-woman or referring to both genders excluding the possibility of non-binary gender identities.” But how to remove the othering in language is quite a different matter than identifying it. Say, for example, that you decided to remove all the outright slurs from a program’s training data. “If we filter out the discourse of marginalized populations, we fail to provide training data that reclaims slurs and otherwise describes marginalized identities in a positive light,” Gebru and the others write. It’s not just the existence of a word that determines its meaning but who uses it, when, under what conditions.

The evidence for stochastic parroting is fundamentally incontrovertible, rooted in the very nature of the technology. The tool applied to solve many natural language processing problems is called a transformer, which uses techniques called positioning and self-attention to achieve linguistic miracles. Every token (a term for a quantum of language, think of it as a “word,” or “letters,” if you’re old-fashioned) is affixed a value, which establishes its position in a sequence. The positioning allows for “self-attention”—the machine learns not just what a token is and where and when it is but how it relates to all the other tokens in a sequence. Any word has meaning only insofar as it relates to the position of every other word. Context registers as mathematics. This is the splitting of the linguistic atom.
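The mechanism described in that paragraph (position tags on each token, plus pairwise relation weights) can be sketched in a few lines. This is a hedged toy illustration, assuming made-up four-token, eight-dimensional embeddings and a single attention head with no learned projections; a real transformer learns its projection matrices from data.

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    """Sinusoidal position codes: tag each token's vector with where it sits in the sequence."""
    pos = np.arange(seq_len)[:, None]
    i = np.arange(d_model)[None, :]
    angles = pos / np.power(10000, (2 * (i // 2)) / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles[:, 0::2])  # even dimensions get sine
    pe[:, 1::2] = np.cos(angles[:, 1::2])  # odd dimensions get cosine
    return pe

def self_attention(x):
    """Each token attends to every other token; each row of weights sums to 1."""
    scores = x @ x.T / np.sqrt(x.shape[1])            # how strongly each token relates to each other token
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)     # softmax over each row
    return weights @ x                                # each output vector blends all input vectors

tokens = np.random.default_rng(0).normal(size=(4, 8))  # 4 invented tokens, 8-dim embeddings
x = tokens + positional_encoding(4, 8)                 # "positioning": mix position into each token
out = self_attention(x)
print(out.shape)  # (4, 8)
```

The point of the sketch is the last step: after attention, every token’s vector is a weighted blend of all the others, which is one way to see how “context registers as mathematics.”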

Transformers figure out the deep structures of language, well above and below the level of anything people can understand about their own language. That is exactly what is so troubling. What will we find out about how we mean things? I remember a fact that I learned when I was forced to study Old English for my Ph.D.: in English, the terms for food eaten at the table derive from French—beef, mutton—while the terms for animals in the field derive from Anglo-Saxon—cow, sheep. That difference registers ethnicity and class: the Norman conquerors ate what the Saxon peons tended. So every time you use those most basic words—cow, beef—you express a fundamental caste structure that differentiates consumer from worker. Progressive elements in the United States have made extensive attempts to remove gender duality from pronouns. But it’s worth noting that, in French or in Spanish, all nouns are gendered. A desk, in French, is masculine, and a chair is feminine. The sky itself is gendered: the sun is male, the moon female. Ultimately, what we can fix in language is parochial. Caste and gender are baked into every word. Eloquence is always a form of dominance.

Government is currently offering no solutions. Sam Altman, the C.E.O. of OpenAI, which created the deep-learning network GPT-3, has been very open about his pursuit of any kind of governance whatsoever. In Washington, he has found, discussing the long-term consequences of artificial intelligence leads to “a real eyes-glazed-over look.” The average age of a U.S. senator is sixty-three. They are missing in action.

Let’s imagine an A.I. engineer who wants to create a chatbot that aligns with human values. Where is she supposed to go to determine a reliable metric of “human values”?. . .

Continue reading.

Written by Leisureguy

23 July 2021 at 1:23 pm

Our Workplaces Think We’re Computers. We’re Not.

leave a comment »

Illustration by The New York Times; photograph by Stephanie Anestis

Related somewhat is a quotation I just encountered:

“The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.”   — Edsger W. Dijkstra

In the NY Times an interesting podcast, with this introduction:

For decades, our society’s dominant metaphor for the mind has been a computer. A machine that operates the exact same way whether it’s in a dark room or next to a sunny window, whether it’s been working for 30 seconds or three hours, whether it’s near other computers or completely alone.

But that’s wrong. Annie Murphy Paul’s The Extended Mind argues, convincingly, that the human mind is contextual. It works differently in different environments, with different tools, amid different bodily states, among other minds.

Here’s the problem: Our schools, our workplaces, our society are built atop that bad metaphor. Activities and habits that we’ve been taught to associate with creativity and efficiency often stunt our thinking, and so much that we’ve been taught to dismiss — activities that look like leisure, play or rest — are crucial to thinking (and living!) well.

Paul’s book, read correctly, is a radical critique of not just how we think about thinking, but how we’ve constructed much of our society. In this conversation, we discuss how the body can pick up on patterns before the conscious mind knows what it’s seen, why forcing kids (and adults) to “sit still” makes it harder for them to think clearly, the connection between physical movement and creativity, why efficiency is often the enemy of productivity, the restorative power of exposure to the natural world, the dystopian implications of massive cognitive inequality, why open-plan offices were a terrible idea and much more.

You can listen to our whole conversation by following “The Ezra Klein Show” on Apple, Spotify, Google or wherever you get your podcasts.

(A full transcript of the episode is available here.)

Written by Leisureguy

21 July 2021 at 12:38 pm

How a solitary prisoner decoded Chinese for the QWERTY keyboard

leave a comment »

Would it have been easier and faster if he had used the Dvorak keyboard? 🙂 In Psyche Thomas S Mullaney, professor of Chinese history at Stanford University, gives a fascinating account that shows the amazing way the brain works. He writes:

In China, suburban garages do not factor in the lore of computing history the way they do in the United States. But prisons do – at least, one particular prison in which a brilliant Chinese engineer was sentenced to solitary confinement for thought crimes against Mao Zedong during China’s Cultural Revolution. His name was Zhi Bingyi and, during long and anxiety-ridden days, months and years of solitude, he made a breakthrough that helped launch China’s personal computing revolution: he helped make it possible to type Chinese with a run-of-the-mill Western-style QWERTY keyboard.

Zhi was born in 1911 on China’s eastern coast, in Jiangsu province. His generation shouldered an almost unbearable burden: a mandate to dedicate their lives to the modernisation of their country. Zhi completed his undergraduate education in 1935, receiving a degree in electrical engineering from Zhejiang University. He moved to Germany in 1936, receiving his doctorate in 1944 from the University of Leipzig. He spent nearly 11 years in Germany, becoming fluent in the language, and marrying a German woman.

Upon the couple’s return to China in 1946, Zhi held a variety of distinguished posts, yet his long-time experience overseas made him suspect in the eyes of the still-nascent Chinese Communist Party regime following the 1949 revolution. When the Cultural Revolution erupted in 1966, Zhi became a marked man. Named a ‘reactionary academic authority’ (fandong xueshu quanwei) – one of the era’s many monikers for those condemned as enemies of the revolution – he was confined in one of the period’s infamous ‘ox pens’. The cell measured a claustrophobic six square metres. Outside its four walls, China descended into the political turmoil of the Cultural Revolution. In his hometown of Shanghai, fanatics and paramilitary groups pledged undying loyalty to the person of Chairman Mao. In the early months of the crisis, bands of radical youth set out upon ‘seek and destroy’ raids intent on purging the country of all pre-revolutionary vestiges of ‘Old China’.

Unsure if he would ever see his wife again, with no other voices besides his guards’, and with no work to occupy his mind, Zhi filled the long hours staring at the wall of his cell – specifically, at an eight-character poster that made a chilling assurance to him and anyone unfortunate enough to set their eyes upon it:

坦白从宽，抗拒从严 (tanbai congkuan, kangju congyan)
‘Leniency For Those Who Confess, Severity For Those Who Resist’

The message was clear: We have the authority to destroy your life (if you resist), or to make your imprisonment somewhat more tolerable (if you confess).

Zhi read this terrifying couplet over and over again, for days, weeks and months on end. And then something began to happen – something that reminds us of the inherent strangeness of language.

No matter one’s mother tongue, the process of becoming fluent in a language is a process of forgetting that language is a form of arbitrary code. There is nothing inherently ‘candid, frank, or open’ about the character 坦 (tan), nor ‘white, blank, or clear’ about the character 白 (bai). As with any young child, Zhi in his earliest years of life would have looked upon these symbols as random assemblages of pen strokes on the page, born of a complex web of conventions whose origins we will never be able to reconstruct in full. But steadily, over the course of innumerable repetitions, something happens to us: the sounds and sights of language begin to approach, and then to achieve, a kind of natural, god-givenness. The character 白 (bai) no longer ‘stands in’ for whiteness by dint of painstaking study and memorisation, but merges with it effortlessly. This merger is the fruition of every child’s struggle to speak, read and write: the struggle to make inroads into their family and community’s semiotic universe, transforming it from an indecipherable code to a medium of expression.

While most of us experience this transformation as a one-way process, it can be reversed. A sound or symbol made second-nature can be denatured – defamiliarised and queered, in which one is somehow able to tap into the original meaninglessness of one’s mother tongue, even as one continues to be able to hear, see and speak it fluently.

This is what happened to Zhi. As he whiled away his time in prison, mulling over these eight characters (seven, if we account for one character that is repeated), this act of repetition restored to them their inherent arbitrariness. By the 100th reading – perhaps the 1,000th, we cannot know – Zhi began to explode these characters in his mind, into a variety of elements and constellations. The first character (坦), for example, could be readily divided into two distinct parts: 土 and 旦, and then further still into 十 and 一 (making up the component 土) and 日 and 一 (making up 旦). The second character 白 could be subdivided, as well, perhaps into 日, with a small stroke on top. Then the rest. Even in this short, eight-character passage, the possibilities of decomposition were abundant.

Zhi managed to get hold of a pen – the one he was given to write political self-confessions – but paper was impossible to find. Instead, he used the lid of a teacup, which his captors provided him to drink hot water. When turned over, Zhi discovered, the lid was large enough to fit a few dozen Latin letters. Then he could erase them and start again, like a student in ancient Greece with an infinitely reusable wax tablet. And so he mulled over each character one by one, decomposing them into elements, and then converting those elements into letters of the Latin alphabet.

He was creating a ‘spelling’ for Chinese – although not in the conventional sense of the word.

In Zhi’s system, the letters of the Latin alphabet would not be used to spell out the sound of Chinese words. Nor would they be used to ‘compose’ them per se. Instead, he envisioned using Latin letters to retrieve one’s desired Chinese character from memory. For him, Latin letters would be the instructions or criteria one fed to a machine, telling the device to, in effect, ‘retrieve the Chinese characters that match these requirements’.

Take the example of fu (幅), a Chinese character meaning ‘width’. Ultimately, Zhi settled upon an unexpected ‘spelling’ for this character, which bore no resemblance to its sound: J-I-T-K. The first letter in this sequence (J) corresponded not to the phonetic value of the character (which should begin with ‘F’) but to a structural element located on the left-most side of the character: the component 巾 that, when seen in isolation, is pronounced jin. The code symbol ‘J’ was derived from the first letter of the pronunciation of the component.

The rest of the spelling – I, T and K – followed the same logic. ‘I’ was ‘equal to’ the component/character yi (一); ‘K’ referred to the component kou (口); and ‘T’ to tian (田). Other letters in Zhi’s code performed the same role:

D = the structure 刀 (with ‘D’ being derived from dao, the pronunciation of this character when seen in isolation)
L = 力 (same logic as above, based on the Pinyin pronunciation li)
R = 人 (same logic as above, based on the Pinyin pronunciation ren)
X = 夕 (same logic as above, based on the Pinyin pronunciation xi)
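The letter-for-component scheme laid out above lends itself to a short sketch. This is a minimal, hypothetical model: the component-to-letter table follows the article, but the tiny lexicon and the second character’s decomposition are invented here purely for illustration (only the 幅 → J-I-T-K example comes from the text).

```python
# Letters stand for structural components, keyed by the first letter of each
# component's pronunciation when seen in isolation (per the article).
COMPONENT_LETTER = {
    "巾": "J",  # jin
    "一": "I",  # yi
    "田": "T",  # tian
    "口": "K",  # kou
    "刀": "D",  # dao
    "力": "L",  # li
}

# Hypothetical mini-lexicon: character -> ordered list of components.
DECOMPOSITIONS = {
    "幅": ["巾", "一", "田", "口"],  # the article's example, coded J-I-T-K
    "别": ["口", "力", "刀"],        # decomposition invented for illustration
}

def code_for(char):
    """'Spell' a character by the letters of its components."""
    return "".join(COMPONENT_LETTER[c] for c in DECOMPOSITIONS[char])

def retrieve(code):
    """Return every character in the lexicon whose code matches the query."""
    return [ch for ch in DECOMPOSITIONS if code_for(ch) == code]

print(code_for("幅"))    # JITK
print(retrieve("JITK"))  # ['幅']
```

Retrieval rather than spelling is the key design point: the letters act as a query that narrows the lexicon to matching characters, just as Zhi envisioned feeding criteria to a machine.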

Zhi eventually gave his code a name: ‘See the Character, Know the Code’ (Jianzi shima), ‘On-Site Coding’ (OSCO), or simply ‘Zhi Code’ (Zhima).

In September 1969, Zhi was released from prison, rejoining his wife and family at their apartment on South Urumqi Road, in Shanghai – albeit in a form of prolonged house arrest.

Other changes were afoot, as well. In 1971, the United Nations recognised Beijing as the official representative of China, granting the country a seat on the Security Council. In 1972, Richard Nixon shocked the world with the first US presidential delegation to the People’s Republic of China (PRC). In 1976, Mao died of cancer, setting in motion a profound sweep of political, economic and social transformations. Then, in 1979, the gates opened even wider, with the normalisation of relations with the US.

One of the many changes that Sino-US normalisation brought was an influx – first a drip, then a flood – of US-built computers . . .

Continue reading. There’s more.

Written by Leisureguy

21 July 2021 at 10:56 am

Why the meaning of life, the universe, and everything is “42”

leave a comment »

A lightly edited passage from Was It Really Like That?: Volume 2: a Glimpse into the Later History of the Gammaldi Family the Migrant Story Continues, by Gino Grimmaldi:

In The Hitchhiker’s Guide to the Galaxy, the supercomputer Deep Thought is built by a race of hyper-intelligent alien beings to determine the answer to “life, the universe, and everything.” Deep Thought determines that the answer is, somewhat anticlimactically, “42.”

It sounds like a joke, but is there more to this answer? Douglas Adams was an unabashed computer nerd and knew a heck of a lot about programming languages and coding. In programming, an asterisk is commonly used as a wildcard, meaning “whatever you want it to be.” In ASCII, the most basic computer code for information interchange, “42” is the designation for an asterisk. A computer, Deep Thought, was asked what the true meaning of life was. It answered as a computer would. 42 = “anything you want it to be.” Genius.
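The ASCII detail is easy to check; in Python, for instance:

```python
# ASCII code point 42 is the asterisk -- the programmer's wildcard.
print(chr(42))   # *
print(ord("*"))  # 42
```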

Written by Leisureguy

20 July 2021 at 1:46 pm

A non-Standard model of cosmology: An adjustment to the laws of gravitation

leave a comment »

David Merritt, a former professor of physics at the Rochester Institute of Technology in New York and author of Dynamics and Evolution of Galactic Nuclei (2013) and A Philosophical Approach to MOND (2020), writes in Aeon:

The standard theory of cosmology is called the Lambda cold dark matter (ΛCDM) model. As that name suggests, the theory postulates the existence of dark matter – a mysterious substance that (according to the theorists) comprises the bulk of the matter in the Universe. It is widely embraced. Every cosmologist working today was educated in the Standard Model tradition, and virtually all of them take the existence of dark matter for granted. In the words of the Nobel Prize winner P J E Peebles: ‘The evidence for the dark matter of the hot Big Bang cosmology is about as good as it gets in natural science.’

There is one problem, however. For four decades and counting, scientists have failed to detect the dark matter particles in terrestrial laboratories. You might think this would have generated some doubts about the standard cosmological model, but all indications are to the contrary. According to the 2014 edition of the prestigious Review of Particle Physics: ‘The concordance model [of cosmology] is now well established, and there seems little room left for any dramatic revision of this paradigm.’ Still, shouldn’t the lack of experimental confirmation at least give us pause?

In fact, there are competing cosmological theories, and not all of them contain dark matter. The most successful competitor is called modified Newtonian dynamics (MOND). Observations that are explained under the Standard Model by invoking dark matter are explained under MOND by postulating a modification to the theory of gravity. If scientists had confirmed the existence of the dark particles, there would be little motivation to explore such theories as MOND. But given the absence of any detections, the existence of a viable alternative theory that lacks dark matter invites us to ask: does dark matter really exist?

Philosophers of science are fascinated by such situations, and it is easy to see why. The traditional way of assessing the truth or falsity of a theory is by testing its predictions. If a prediction is confirmed, we tend to believe the theory; if it is refuted, we tend not to believe it. And so, if two theories are equally capable of explaining the observations, there would seem to be no way to decide between them.

What is a poor scientist to do? How is she to decide? It turns out that the philosophers have some suggestions. They point out that scientific theories can achieve correspondence with the facts in two very different ways. The ‘bad’ way is via post-hoc accommodation: the theory is adjusted, or augmented, to bring it in line with each new piece of data as it becomes available. The ‘good’ way is via prior prediction: the theory correctly predicts facts in advance of their discovery, without – and this is crucial – any adjustments to the original theory.

It is probably safe to say that no theory gets everything exactly right on the first try. But philosophers are nearly unanimous in arguing that successful, prior prediction of a fact assigns a greater warrant for belief in the predicting theory than post-hoc accommodation of that fact. For instance, the philosopher of science Peter Lipton wrote:

When data need to be accommodated … the scientist knows the answer she must get, and she does whatever it takes to get it … In the case of prediction, by contrast, there is no motive for fudging, since the scientist does not know the right answer in advance … As a result, if the prediction turns out to have been correct, it provides stronger reason to believe the theory that generated it.

Some philosophers go so far as to argue that the only data that can lend any support to a theory are data that were predicted in advance of experimental confirmation; in the words of the philosopher Imre Lakatos, ‘the only relevant evidence is the evidence anticipated by a theory’. Since only one (at most) of these two cosmological theories can be correct, you might expect that only one of them (at most) manages to achieve correspondence with the facts in the preferred way. That expectation turns out to be exactly correct. And (spoiler alert!) it is not the Standard Model that is the favoured theory according to the philosophers’ criterion. It’s MOND.

Dark matter was a response to an anomaly that arose, in the late 1970s, from observations of spiral galaxies such as our Milky Way. The speed at which stars and gas clouds orbit about the centre of a galaxy should be predictable given the observed distribution of matter in the galaxy. The assumption here is that the gravitational force from the observed matter (stars, gas) is responsible for maintaining the stars in their circular orbits, just as the Sun’s gravity maintains the planets in their orbits. But this prediction was decisively contradicted by the observations. It was found that, sufficiently far from the centre of every spiral galaxy, orbital speeds are always higher than predicted. This anomaly needed to be accounted for.

Cosmologists had a solution. They postulated that every galaxy is embedded in a ‘dark matter halo’, a roughly spherical cloud composed of some substance that generates just the right amount of extra gravity needed to explain the high orbital speeds. Since we do not observe this matter directly, it must consist of some kind of elementary particle that does not interact with electromagnetic radiation (which includes light, but also radio waves, gamma rays, etc). No particle was known at the time to have the requisite properties, nor have particle physicists yet found evidence in their laboratory experiments for such a particle, in spite of looking very hard since the early 1980s. The cosmologists had their solution to the rotation-curve anomaly, but they lacked the hard data to back it up.

In 1983, an alternative explanation for the rotation-curve anomaly was proposed by Mordehai Milgrom, a physicist at the Weizmann Institute of Science in Israel. Milgrom noticed that the anomalous data had two striking regularities that were not accounted for by the dark matter hypothesis. First: orbital speeds are not simply larger than predicted. In every galaxy, the orbital speed rises as one moves away from the centre and then remains at a high value as far out as observations permit. Astronomers call this property ‘asymptotic flatness of the rotation curve’. Second: the anomalously high orbital speeds invariably appear in regions of space where accelerations due to gravity drop below a certain characteristic, and very small, value. That is: one can predict, in any galaxy, exactly where the motions will begin to deviate from Newtonian dynamics.

This characteristic acceleration value, which Milgrom dubbed a0, is much lower than the acceleration due to the Sun’s gravity anywhere in our solar system. So, by measuring orbital speeds in the outskirts of spiral galaxies, astronomers were testing gravitational theory in a way that had never been done before. Milgrom knew that there were many instances in the history of science where the need for a new theory became apparent only when an existing theory was tested in a new way. And so he took seriously the possibility that the theory of gravity might simply be wrong.

In three papers published in 1983, Milgrom proposed a simple modification to Isaac Newton’s laws that relate gravitational force to acceleration. (Albert Einstein’s theory reduces to Newton’s simpler theory in the regime of galaxies.) He showed that his modification correctly predicts the asymptotic flatness of orbital rotation curves.

Milgrom was careful to acknowledge that he had designed his hypothesis in order to produce that known result. But his theory also predicted that the effective gravitational force was computable given the observed distribution of normal matter alone – not just in the case of ultra-low accelerations, but everywhere. And when astronomers tested this bold prediction, they found that it was correct. Milgrom’s hypothesis correctly predicts the rotation curve of every galaxy that has been tested in this way. And it does so without postulating the presence of dark matter.
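The flattening Milgrom predicted can be seen in a toy point-mass model. This is my sketch, not the article’s: it assumes the commonly quoted acceleration scale a0 ≈ 1.2 × 10⁻¹⁰ m/s², a made-up galaxy mass, and one standard choice of MOND interpolating function.

```python
import math

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
A0 = 1.2e-10     # Milgrom's acceleration scale a0, m/s^2 (measured value)
M = 1e41         # toy baryonic mass for a galaxy, kg (~5e10 solar masses)
KPC = 3.086e19   # metres per kiloparsec

def v_newton(r):
    """Newtonian circular speed around a point mass: v = sqrt(GM/r)."""
    return math.sqrt(G * M / r)

def v_mond(r):
    """Circular speed with the 'simple' MOND interpolating function.

    Solving g * mu(g/a0) = gN with mu(x) = x/(1+x) gives
    g = gN/2 + sqrt((gN/2)**2 + gN*a0): Newtonian when gN >> a0,
    and g -> sqrt(gN*a0) when gN << a0, which makes v(r) go flat.
    """
    g_newton = G * M / r ** 2
    g = g_newton / 2 + math.sqrt((g_newton / 2) ** 2 + g_newton * A0)
    return math.sqrt(g * r)

for r_kpc in (5, 20, 80):
    r = r_kpc * KPC
    print(f"{r_kpc:3d} kpc  Newton {v_newton(r)/1e3:5.0f} km/s"
          f"  MOND {v_mond(r)/1e3:5.0f} km/s")
```

The Newtonian speeds fall off as 1/√r, while the MOND curve levels out near (G M a0)^(1/4): exactly the asymptotic flatness astronomers observe.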

Note the stark difference between the way in which the two theories explain the anomalous rotation-curve data. The standard cosmological model executes an ad-hoc manoeuvre: it simply postulates the existence of whatever amount and distribution of dark matter are required to reconcile the observed stellar motions with Newton’s laws. Whereas Milgrom’s hypothesis correctly predicts orbital speeds given the observed distribution of normal matter alone. No Standard Model theorist has ever come up with an algorithm that is capable of doing anything as impressive as that.

Many philosophers would argue that this predictive success of Milgrom’s theory gives us a warrant for believing that his theory – as opposed to the Standard Model – is correct. But the story does not end there. Milgrom’s theory makes a number of other novel predictions that have been confirmed by observational astronomers. Doing justice to all of these would take a book (and, in fact, I’ve recently written such a book), but I will mention one example here. Milgrom’s theory predicts that a galaxy’s total mass in normal (non-dark) matter, which astrophysicists like to call the ‘baryonic mass’, is proportional to the fourth power of the rotation speed measured far from the galaxy’s centre. This novel prediction also turned out to be correct. (For obscure historical reasons, Milgrom’s predicted relation is nowadays called the ‘baryonic Tully-Fisher relation’, or BTFR.)

Astrophysics is rife with correlations between observed quantities, but exact relations such as the BTFR are unheard of: they are the sort of thing one associates with a high-level theory (think: the ideal gas law of statistical thermodynamics), not with a messy discipline like astrophysics.
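In symbols, Milgrom’s predicted relation is M_b = v⁴/(G a₀). A back-of-envelope check with illustrative numbers (my choices, not the article’s):

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
A0 = 1.2e-10       # Milgrom's acceleration scale, m/s^2
M_SUN = 1.989e30   # solar mass, kg

def btfr_baryonic_mass(v_flat):
    """MOND's baryonic Tully-Fisher relation: M_b = v_flat**4 / (G * a0)."""
    return v_flat ** 4 / (G * A0)

# A galaxy with a flat rotation speed of ~220 km/s (roughly Milky-Way-like)
mass = btfr_baryonic_mass(220e3)
print(f"{mass / M_SUN:.1e} solar masses")  # on the order of 1e11
```

Because of the fourth power, doubling the flat rotation speed predicts sixteen times the baryonic mass, which is why the relation makes such a sharp observational test.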

What would a Standard Model cosmologist predict for a relation such as the BTFR? The simple answer is: . . .

Continue reading.

Written by Leisureguy

19 July 2021 at 11:06 am

Posted in Books, Philosophy, Science

Interpreting Sun Tzu: The Art of Failure?

leave a comment »

John F. Sullivan writes in The Strategy Bridge:

If you now wish to inquire into the Way of [the ancient sages], may I suggest that one can hardly be certain of it? To be certain of it without evidence is foolishness, to appeal to it though unable to be certain of it is fraud.
—Hanfeizi (3rd century BCE)

“Translation,” an American poet and translator of Dante’s Inferno opined, “is the art of failure.”[2] In Don Quixote, the eponymous character notes that distortion is often a natural byproduct of the effort: “translation from one language into another…is like looking at Flemish tapestries on the wrong side; for though the figures are visible, they are full of threads that make them indistinct, and they do not show with the smoothness and brightness of the right side.”[3] The reverse tapestry is an apt metaphor for reading any ancient Chinese text, particularly The Art of War. While the use of logographs to express complex thoughts has been a constant feature throughout China’s recorded history, the written language of thousands of years ago differs significantly from its modern variant. While the original Art of War consists of approximately 6,000 characters, a modern Chinese version requires more than double that number to convey the same approximate meaning.[4] Even most native Chinese speakers, therefore, read a translation of the original.

While The Art of War is surprisingly short and compact, much remains ambiguous in its received message. As a result, our contemporary interpretations require constant skepticism, debate, and revision. While Sun Tzu’s text is arguably the oldest within the core strategic canon, it has been studied for the least amount of time by Western military theorists, in comparison with Thucydides and Clausewitz, for example. First translated into English only in the early twentieth century, strategists largely ignored The Art of War until the Vietnam War renewed interest in Asian military thinking.

Despite the limited scholarly focus on the text, in his foreword to the 1963 Griffith translation B.H. Liddell Hart confidently proclaimed that The Art of War “has never been surpassed in comprehensiveness and depth of understanding…Sun Tzu has clearer vision, more profound insight, and eternal freshness.” The certitude, though, with which we purport to understand The Art of War’s “clear vision” and “eternal freshness” remains inversely proportional to the collective effort we have put into researching its historical context or subjecting it to harsh philological analysis and extended debate.[5]  Unlike Thucydides’ work, which has been well-served by the commentarial traditions of A.W. Gomme and Simon Hornblower, nothing remotely similar exists in English for contextualizing this Chinese classic. While translations of Sun Tzu vastly outnumber those of Clausewitz, reliable secondary-source references on the latter theorist and his milieu abound, while those on the former remain conspicuously absent.

Given the scarcity of authoritative writings or clarifying analyses on Sun Tzu’s text, how confident should we be that we have correctly grasped “the Way” of this ancient sage? Of particular importance, one of the core ideas we almost universally believe serves as a bedrock to Sun Tzu’s overall military philosophy—that his ideal strategic objective is “to take the enemy whole and intact”—rests on a problematic and potentially untenable textual foundation.[6] Instead, a stronger case favors an interpretation of Sun Tzu prioritizing self-preservation. Whether or not one’s adversary is destroyed or taken non-violently remains a distant secondary concern.


The idea of “taking the enemy whole and intact” comes from the first verse of the third chapter. Lionel Giles’ 1910 English translation proposed the following rendition:

In the practical art of war, the best thing of all is to take the enemy’s country whole and intact; to shatter and destroy it is not so good. So, too, it is better to recapture an army entire than to destroy it, to capture a regiment, a detachment or a company entire than to destroy them.[7]

Since Giles, almost every subsequent translator of the text produced a similar interpretation. Before comparing with the original Chinese, though, it is helpful to also consider Ralph Sawyer’s version, since his work is more consistent and literal than either the Giles or the Griffith translations:

Preserving the [enemy’s] state capital is best, destroying their state capital is second-best.
Preserving their army is best, destroying their army is second-best.
Preserving their battalions is best, destroying their battalions is second-best.
Preserving their companies is best, destroying their companies is second-best.
Preserving their squads is best, destroying their squads is second-best.[8]

Now looking at its original written form, even without any knowledge of Chinese characters, it is clear that the verse in question is structured as a nearly identical repeating pattern. The only . . .

Continue reading.

Written by Leisureguy

16 July 2021 at 11:03 am

Ode to a world-saving idea: attribution error

leave a comment »

One book in the list of books I find myself repeatedly recommending is The Evolution of God, by Robert Wright: erudite but readable, humorous and insightful, and altogether enlightening. Of course, it is the idea of God that evolves, but what is God but an idea — a meme in the multitudinous fabric of human culture — and the evolution to which the title refers is memetic evolution, not the kind of evolution that resulted in, say, the octopus and the platypus. He writes the Nonzero Newsletter, whose name, like that of his book Nonzero: The Logic of Human Destiny, refers to non-zero-sum games.

A recent issue of his newsletter had this article:

Five weeks ago the psychologist Lee Ross died, and five days ago the New York Times published his obituary. Better late than never. But, even with all that time for research, the obituary doesn’t do justice to its subject.

By “its subject” I don’t mean Ross; he comes off very well. I mean the idea he is most closely associated with, an idea that occupies much of the obituary and is no doubt the reason Ross was finally deemed to have Times obit status. The idea is called “attribution error.”

Actually, Ross called it “the fundamental attribution error,” and that’s the term the Times uses. But as the idea evolved in keeping with new research, the word “fundamental” became less apt. And, oddly, this evolution made the concept itself more fundamental. In fact, if I had to pick only one scientific finding about how the human mind works and promulgate it in hopes of saving the world, I’d probably go with attribution error.

Ross coined the term “the fundamental attribution error” in 1977, in a paper that became a landmark in social psychology. The basic idea was pretty simple: When we’re explaining the behavior of other people, we tend to put too much emphasis on “disposition”—on their character, their personality, their essential nature. And we tend to put too little emphasis on “situation”—on the circumstances they find themselves in. The Times gives an illustration:

A 2014 article in Psychology Today titled ‘Why We Don’t Give Each Other a Break’ used the example of someone who cuts into a line in front of you. You might think, “What a jerk,” when in reality this person has never skipped ahead in a line before and is doing so now only because he would otherwise miss a flight to see a dying relative.

Right after this paragraph, Alex Traub, author of the obituary, writes: “Delivering folk wisdom in multisyllabic packaging, ‘the fundamental attribution error’ became one of those academic phrases that add a whiff of sophistication to any argument they adorn.”

I can see why Traub treats the idea a bit sardonically. The original formulation of it can be rendered in ways that make it seem borderline obvious, and Traub may not be aware that it evolved into something subtler. (He’s a newspaper reporter who has to write about all kinds of stuff every week, not a psych grad student.) But, before we move on to the subtler version of the idea, it’s important to understand that even the original version has potentially radical implications.

For example: Ross and Richard Nisbett, another eminent figure in modern psychology, argued that due appreciation of the power of situation in shaping behavior should lead us to revise the way we think about categories of people. If you see a picture of a minister and then a picture of a prison inmate, you’ll probably assume they have very different characters. But, Ross and Nisbett wrote:

Clerics and criminals rarely face an identical or equivalent set of situational challenges. Rather, they place themselves, and are placed by others, in situations that differ precisely in ways that induce clergy to look, act, feel, and think rather consistently like clergy and that induce criminals to look, act, feel, and think like criminals.

It’s possible to take the point Ross and Nisbett are making too far, but I think a more common mistake is not taking it far enough. In any event, that’s the basic idea behind the fundamental attribution error: most people don’t take the power of circumstance seriously enough. Now for the subtler version of the concept:

It turns out that our tendency to attribute people’s behavior to disposition rather than situation isn’t as general as Ross and other psychologists originally thought. There are two notable exceptions to it:

(1) If an enemy or rival does something good, we’re inclined to attribute the behavior to situation. (Granted, my rival for the affections of the woman I love did give money to a homeless man, but that was just to impress the woman I love, not because he’s actually a nice guy!) (2) If a friend or ally does something bad, we’re inclined to attribute the behavior to situation. (Yes, my golf buddy embezzled millions of dollars, but his wife was ill, and health care is expensive—plus, there was the mistress to support!)

These exceptions are the reason I say there is no single “fundamental” attribution error. We’re not as generally biased toward disposition in our attributions as was originally thought; under certain circumstances, we lean more toward situation. Attribution error, you might say, is itself situational.

Among the consequences of this fact is that attribution error reinforces allegiances within tribes and reinforces antagonisms between tribes. Sure, the people in your tribe may do bad things, but only in response to extraordinary circumstances. And, sure, the people in the other tribe may do good things, but only in response to extraordinary circumstances. So the fact remains that the people in your tribe are essentially good and the people in the other tribe are essentially bad.

So attribution error is one reason that, once a nation’s politics get polarized, they can be hard to de-polarize.

Attribution error can also have a big impact on international politics. It means that, once you’ve defined another nation as the enemy, that label will be hard to change. As the social scientist Herbert Kelman has put it:

Attribution mechanisms… promote confirmation of the original enemy image. Hostile actions by the enemy are attributed dispositionally, and thus provide further evidence of the enemy’s inherently aggressive, implacable character. Conciliatory actions are explained away as reactions to situational forces—as tactical maneuvers, responses to external pressure, or temporary adjustments to a position of weakness—and therefore require no revision of the original image.

This helps explain why Americans who argue for invading a country spend so much time emphasizing how nefarious the country’s leaders are. Once we’re convinced that a foreign government is bad, it’s hard to imagine how to fix the problem with anything short of regime change. After all, if the existing regime changes its behavior for the better, evil will still lurk within.

I hope you’re starting to see why . . .

Continue reading. It’s good, and there’s much more, including an interesting idea: cognitive (not emotional) empathy — understanding how the situation is understood by another.

I find it interesting that when someone acts in a way that is incongruous with our view of them (doing something good when we view them as bad, or something bad when we view them as good), we do not use that to modify our view of them (the way Bayesian methods use new outcomes to adjust prior probabilities), but instead take it as information about the situation, not the person. This tendency to preserve our view of a person by reaching for a situational explanation of inconsistent behavior might be seen as a way of protecting our ego: no one likes to have been wrong.
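The Bayesian contrast can be made concrete. With illustrative numbers (entirely made up for this sketch), Bayes’ rule says one bad act should noticeably weaken a favorable prior; attribution error is, in effect, declining to make that update and blaming the situation instead.

```python
# Illustrative only: updating a belief that someone is "good"
# after observing one bad act, via Bayes' rule.
prior_good = 0.90                 # prior belief the person is good
p_bad_act_given_good = 0.05       # a good person rarely acts badly
p_bad_act_given_bad = 0.40        # a bad person acts badly more often

# P(good | bad act) = P(bad act | good) * P(good) / P(bad act)
evidence = (p_bad_act_given_good * prior_good
            + p_bad_act_given_bad * (1 - prior_good))
posterior_good = p_bad_act_given_good * prior_good / evidence
print(round(posterior_good, 2))   # belief drops from 0.90 to ~0.53
```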

Written by Leisureguy

7 July 2021 at 1:15 pm

James Hutton’s Theory of the Earth

leave a comment »

Every day brings new information and ideas, often of some depth, as Mike Leeder writes in Inference:

James Hutton was a chemist, geologist, agriculturist, and prominent member of the Edinburgh intelligentsia during the height of the Scottish Enlightenment. His contemporaries and friends included Robert Adam, David Hume, Adam Smith, Dugald Stewart, and James Watt. Hutton’s major work, Theory of the Earth, published first in 1788 and expanded in 1794, describes his geological observations and resulting theory for the cycling of the earth.1 His conclusion was startling: “We find no vestige of a beginning, no prospect of an end.”2 It was a unique perspective among seventeenth- and eighteenth-century scientific efforts, expressed freely and without regard for any scriptural constraints. Hutton’s ideas are easily recognizable in today’s understanding of deep time and the geodynamic cycling of the earth by plate tectonics. Like a Lazarus taxon, which disappears from the fossil record and resurfaces later, his notion of a recycling earth was born from scientific endeavor in the Age of Enlightenment and resurrected in the nuclear age.

Vita Lineae

The major sources for what is known of Hutton’s life and work come courtesy of his friend and biographer John Playfair, as well as seventeen preserved items of personal correspondence.3 Hutton was born in 1726 in Edinburgh of prosperous merchant stock. His father and elder brother died when he was young, and he and his sisters were raised by their mother. At the University of Edinburgh, he followed the mathematics lectures of Isaac Newton’s protégé Colin Maclaurin and studied humanities and medicine. Hutton never practiced the latter, but its study was the best route to pursue his interest in chemistry. From 1747, he continued his studies at the Universities of Paris and Leiden, and graduated from Leiden with a medical degree in 1749. Returning to Edinburgh, he set up with a friend in the profitable manufacture of sal ammoniac, a salt used for dyeing and metalworking, using chimney soot as raw material. He then made a radical change in life direction. In the early 1750s, he took a two-year stint on a Norfolk farm where he learned advanced agricultural practices. He took charge of two family farms in Berwickshire in the Scottish Borders and there became a hands-on landlord who developed the land by applying the latest techniques in agriculture and landscaping.

From the mid-1760s, Hutton lived with his sisters in central Edinburgh and threw himself into the city’s scientific and philosophical life. Over the next twenty years, he gained enough field experience to have a detailed understanding of the basic geology of Scotland, England, and Wales, and so established himself as the premier Scottish geological philosopher. His practice of geology gained breadth and substance through his love of chemistry, extensive surveys of the French scientific literature, and a close friendship with the prolific chemist Joseph Black. It was Black who discovered the latent heat of steam, invented the eighteenth century’s most precise analytical balance, and discovered carbon dioxide, a product of animal respiration and organic fermentation. He also produced aqueous precipitates of calcium carbonate and investigated silica precipitation from Icelandic geyser waters.

Version 1: 1785–1788

An initial version of Theory of the Earth was read to the Royal Society of Edinburgh on two occasions in 1785, the first by Black when Hutton was ill, and the second by Hutton himself. Organized in four parts, the work was published three years later in the debut volume of the Transactions of the Royal Society of Edinburgh.

The intellectual method in Hutton’s magnum opus was influenced by . . .

Continue reading.

Written by Leisureguy

22 June 2021 at 5:56 pm

“This Time-Management Trick Changed My Whole Relationship With Time”

leave a comment »

Dean Kissick published this article in the NY Times a year ago, but I just came across it. It has an interesting idea that is worth a try:

A couple of years ago I was told a rumor about a notable artist who would break up everything she did, from making films in the day to running her studio in the afternoon to reading books in the evening, into intervals of 25 minutes, with five-minute breaks in between — 25 minutes on, five minutes off, over and over again. That’s how I first heard of the Pomodoro technique. Invented by Francesco Cirillo, a student at Rome’s Luiss Business School in the late 1980s, it’s a time-management method that takes its name from the tomato-shaped kitchen timer he used to regulate its core process, breaking the day into brief intervals. Before long I was trying it for myself, and now I start my first pomodoro as soon as my coffee’s ready in the morning.

Daily schedules, and our shared perception of time, grew hazier and more malleable during the spring lockdown, something that has persisted into our timid reopening. Hours, days and weeks merged into an ambient, dreamlike fugue. My bowl of cereal with milk slid from 2 to 3 to 4 in the afternoon. I’m writing this at 11:03 p.m. on a Thursday while drinking what I consider my afternoon coffee. There are four minutes and 13 seconds left of my pomodoro.

A pomodoro, once started, must not be interrupted, otherwise it has to be abandoned. But in this stringency, there is relief: You are not allowed to extend a pomodoro, either. After a set of four 25-minute intervals is completed, you’re supposed to take a longer break of 15 to 30 minutes before continuing. Those are the basic rules of the Pomodoro technique. It tells us when to start, and also when to stop; and now, more than ever, we have to be told when to stop.
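Those rules reduce to a short schedule. A minimal sketch (interval lengths as described in the article; the 15-minute long break is just one allowed choice from its 15-to-30-minute range):

```python
def pomodoro_schedule(n_pomodoros, work=25, short_break=5, long_break=15):
    """Yield (activity, minutes) pairs following the basic rules:
    25 minutes on, 5 minutes off, and a longer break after every
    fourth work interval."""
    for i in range(1, n_pomodoros + 1):
        yield ("work", work)
        if i % 4 == 0:
            yield ("long break", long_break)
        else:
            yield ("short break", short_break)

# One full set of four pomodoros
for activity, minutes in pomodoro_schedule(4):
    print(f"{activity}: {minutes} min")
```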

An unquestioned assumption in our culture holds that the more hours spent on work — whether a passion project or office drudgery — the better we’ll perform and the more successful and happier we’ll be. What if none of that’s true? What if it’s better to spend less time on things?

We waste hours keeping on going when our concentration’s long gone, caught in drowsy, drawn-out moments staring glumly at a screen, and not only when we’re supposed to be doing our jobs. Leisure time has also taken on a timeless, hypnotic quality lately. Everything our culture produces feels at once never-ending and meaningless — or perhaps meaningless because it’s never-ending. Movies explode into cinematic universes; series are designed to be binge-watched; every video, song or podcast tips over and auto-plays another; social media scrolls toward infinity and the news never stops broadcasting. An everlasting present expands around us in all directions, and it’s easy to get lost in there — all the more reason to set some boundaries.

Now that my breaks are short and fleeting, I think more carefully about what I’d like to do with them, and I’ve found it’s quite different from the unimaginative temptations I would otherwise default to (flopping on the sofa, scrolling on my phone, becoming annoyed). Instead I’ll make a sandwich, do a quick French lesson, reply to a few texts, have a shower, go to the laundromat; and such humdrum activities, now that they’re restricted, have become sources of great pleasure.

During lockdown, we were encouraged not to feel pressured into being productive. My alternative approach was to descend into a pomodoro-fueled delirium of work, creativity, household chores, tasks I’ve been avoiding for years, self-betterment and random undertakings from morning to night. I’ve found that tackling a range of tasks in short bursts keeps things interesting and provides a more rounded life. Variety is the sugar that helps the medicine go down; not the mirage of variety conjured by infinite scrolling content, by nearly a hundred different flavors of Oreos, but the genuine variety of pursuing different sorts of interests every day.

Last summer I took  . . .

Continue reading. There’s more.

He concludes:

. . .The Pomodoro technique showed me how much of my experience of reality is tied up with my subjective perception of it. And it’s not an exaggeration to say that, by changing my relationship with and appreciation of time, the technique has brought me to some profound existential questions about whether I’m wasting my life — my fragile, fleeting life — on activities I neither care about nor enjoy. It has forced me to think about what I’d most like to be doing every day instead. It has made me see time afresh — as something we really don’t have enough of, as something precious precisely because it’s ephemeral.

Written by Leisureguy

18 June 2021 at 5:06 pm

Edward de Bono has passed away

leave a comment »

Edward de Bono looms large in my legend. He is among the authors in my list of books I find myself repeatedly recommending. The specific book I mention is Po: Beyond Yes and No, but he wrote many books, and I read a substantial number of them and did my best to apply what I learned from them.

He also established a foundation, CoRT (Cognitive Research Trust), which publishes an excellent set of materials to teach critical thinking skills to young children, a program that I wish would be universally adopted. (I doubt it will be. Children who learn critical thinking skills will start using their skills, to the dismay of parents who do not welcome questioning, thought, or dialogue.)

Stuart Jeffries writes de Bono’s obituary in the Guardian:

The thinker and writer Edward de Bono, who has died aged 88, once suggested that the Arab-Israeli conflict might be solved with Marmite. During a 1999 lecture to Foreign Office officials, the originator of the term lateral thinking argued that the yeast extract, though proverbially socially divisive, could do what politicians and diplomats had failed for years to achieve. The problem, as he saw it, was that people in the Middle East eat unleavened bread and so lack zinc, which makes them irritable and belligerent. Feeding them Marmite, therefore, would help create peace.

Through his 60-plus books, including The Mechanism of Mind (1969), Six Thinking Hats (1985), How to Have A Beautiful Mind (2004) and Think! Before It’s Too Late (2009) [I’ve added links to inexpensive secondhand copies of the books. – LG], as well as seminars, training courses and a BBC television series, De Bono sought to free us from the tyranny of logic through creative thinking. “What happened was, 2,400 years ago, the Greek Gang of Three, by whom I mean Aristotle, Plato, and Socrates, started to think based on analysis, judgment, and knowledge,” he said. “At the same time, church people, who ran the schools and universities, wanted logic to prove the heretics wrong. As a result, design and perceptual thinking was never developed.”

De Bono’s revolution began in 1967 with his book The Use of Lateral Thinking. Imagine, he said, that a money lender claims a merchant’s daughter in lieu of her father’s debt. The merchant and daughter concoct a compromise. The money lender will put a black stone in one bag and a white stone in the other. If the daughter chooses the black stone, she will be doomed to marry the money lender and the debt will be cancelled; if the white, she will stay with her father and the debt will still be cancelled. But as the trio stand on a pebble-strewn path, she notices the money lender putting a black stone in each bag. What should she do to avoid a nightmarish fate?

This is where lateral thinking – ie, employing unorthodox means to solve a problem – comes in. De Bono suggested the daughter should pick either bag, but fumble and drop her stone on to the path. “Since the remaining pebble is of course black, it must be assumed she picked the white pebble, since the money lender dare not admit his dishonesty.”

What De Bono called vertical thinking, typified by logic, would be useless in reaching this elegant solution. It is lateral thinking that creates new ideas – Einstein and Darwin, according to De Bono, were lateral thinkers. “Studies have shown that 90% of error in thinking is due to error in perception. If you can change your perception, you can change your emotion [a point stressed by Stephen Covey in 7 Habits of Highly Effective People — see this brief outline. – LG] and this can lead to new ideas. Logic will never change emotion or perception.”

De Bono believed humour was one of the most significant characteristics of the human mind, precisely for its basis in shifting perceptions. “Let me tell you a joke,” he said. “An old man dies and goes to hell. When he gets there, he sees his friend, a 90-year-old man, with a beautiful woman sitting on his knee. He says to his friend, ‘This can’t be hell, you’re not being punished, you’re having fun!’, to which his friend replies, ‘This is punishment – for her!’”

His most trenchant thinking concerned children’s education. “Schools waste two-thirds of the talent in society. The universities sterilise the rest,” he said. The Maltese thinker was particularly scathing of Britain, where, he claimed, rigid thinking and an obsession with testing led to many children leaving school “believing they are stupid. They are not stupid at all, many are good thinkers who have never had the chance to show it. But that lack of confidence will pervade the rest of their lives.”

Rather than teaching children to absorb information and repeat it, he argued, schools should equip them to think creatively. He once did a study in which he asked children to design a sleep machine, an elephant-weighing machine, a system for constructing a house and a system for building a rocket. His 1972 book Children Solve Problems described the results.

In Six Thinking Hats, De Bono suggested that business meetings might be more efficient if attendees wore imaginary colour-coded hats. The black hat signified negative or realistic thoughts; white, information; red, emotion; blue, management; green, creativity; and yellow, optimism. Everyone in the meeting would figuratively place a coloured hat on their heads. This way, he claimed, “ego would be taken out of the situation”.

The method found its devotees. Motorola, IBM, and Boeing reported cutting meeting times by half by applying it. De Bono reported that one of his clients, Ron Barbaro of Prudential Insurance, said that after suggesting an idea that executives might counter as too risky, “he would say: ‘Yes, that’s fine black hat thinking. Now let’s try the yellow hat.’”

De Bono was convinced about its importance. “The Six Thinking Hats method may well be the most important change in human thinking for the past 2,300 years,” he wrote in the preface to the book.

Certainly, he was rarely burdened with humility, informing the world that his childhood nickname was “Genius”. By contrast, he did not suffer detractors gladly. Years after a stinking review of Six Thinking Hats appeared in the Independent, written by Adam Mars-Jones, De Bono told the Guardian: “That book, we know, has saved $40m dollars and tens of thousands of man-hours. Now, some silly little idiot, trying to be clever, compared to the actual results, that just makes him look like a fool.”

Mars-Jones retorted that when his review appeared, De Bono “wrote to the editor [saying] … that he was entitled to compensation for the loss of earnings which my comments had inflicted on his lecture tours (which he assessed at £200,000). He seemed less taken with my proposal that he pay a dividend to every journalist who, by taking him seriously, had inflated his earning power.”

Born in Saint Julian’s Bay, Malta, Edward was the son of Joseph de Bono, a physician, and Josephine (nee O’Byrne), an Irish journalist. He went to St Edward’s college in Malta and jumped classes twice. “I was always three or four years younger than anyone else in my class.”

He qualified as a doctor at the Royal University of Malta before going to Christ Church, Oxford, as a Rhodes scholar to study for a master’s in psychology and physiology (1957), and a DPhil in medicine (1961). There, he represented the university in both polo and rowing, and set two canoeing records, one for paddling 112 miles from Oxford to London nonstop.

Following graduation he worked at Oxford as a . . .

Continue reading. An amazing man with many good ideas.

Written by Leisureguy

17 June 2021 at 3:26 pm

How to think about pleasure


Sam Dresser has an interesting article in Psyche:

Need to know

Over breakfast one April day in 1778, James Boswell asked Samuel Johnson why he gave up booze. Dr Johnson replied that he didn’t like to lose power over himself, but assured his friend that he would one day drink again when he grew old (he was 68 at the time). Boswell replied: ‘I think, Sir, you once said to me, that not to drink wine was a great deduction from life.’ To which Dr Johnson answered: ‘It is a diminution of pleasure, to be sure; but I do not say a diminution of happiness. There is more happiness in being rational.’

It is a common notion, even in our own day, that pleasure is in some sense a distraction from happiness – or that it doesn’t lead to the kind of happiness that really matters. Pleasure, in and of itself, is ‘lower’ than the real heavy hitters, such as Truth and Virtue and Wisdom and God, those hallowed founts of authentic happiness. It is universal – indeed inherent – that we humans are drawn to pleasure. Yet pleasure-seeking itself is often seen as an indulgence, and therefore rings with a kind of selfishness, even a kind of confusion. Pleasure doesn’t last, the idea goes, but Truth does, or Rationality does, or Wisdom does, and so those are the things that we ought to seek.

Whenever and wherever they are found, moralists and their dreary ilk often describe their own times as characterised by debauched hedonism. But does it accurately describe our time? Are we in the thrall of a love affair with pleasure? I don’t think so. Even if more people are more comfortable than they used to be, it’s still hard to admit to doing something pleasurable just because it’s pleasurable. More often, pleasure is excused as a little reward, a diversion, a break from the demands of the ‘real world’. Pleasure is something that will allow you to work harder, to catch your breath before returning to the turmoils of life. Searching for pleasure for pleasure’s sake is an act tinged with shame and, when it’s admitted to, excuses ought to be made.

Lord Byron gave our tense relationship with pleasure a memorable couplet: ‘O pleasure! you’re indeed a pleasant thing / Although one must be damn’d for you, no doubt.’ Those who give in to pleasure have often been compared, unkindly, to animals. The Roman Stoic Epictetus told those who identified pleasure with goodness to go ‘lead the life of a worm, of which you judged yourself worthy: eat and drink, and enjoy women, and ease yourself, and snore.’ Friedrich Nietzsche located a being that, for him, was perhaps even lower than the worm: ‘Man does not strive for pleasure,’ he wrote. ‘Only the Englishman does.’

This isn’t true of all pleasures, however. The trouble for Dr Johnson, as he was quick to explain, was ‘sensual pleasure. When a man says, he had pleasure with a woman, he does not mean conversation, but something of a very different nature.’ (You can almost see the wink on his vast face.) The pleasures he disdains are the bodily pleasures, the ones we get from aged whisky and taking off your boots after a long hike. The pleasures that count, for Dr Johnson and for many other thinkers, are the pleasures of the mind. These are the pleasures that are pure, unmarred by the Earth. They’re to be kept clean and separate from the pleasures of the body, which are for the lower sorts of people. Or, as Dr Johnson rather flatly put it: ‘[T]he greatest part of men are gross.’

The purpose of this Guide is simple: I want to talk about some of the ways that people have thought about pleasure over the years. Pleasure is a surprisingly slippery idea, surprising because it seems so obvious what it is. But trying to actually nail it down is like nailing down a cloud. Regardless, that makes it more important to reflect on pleasure – its value, its nature, and the places that people have found it. My hope is that, by thinking through what pleasure is, by analysing and probing and querying it, perhaps you’ll be more likely to find it in the places you least expect (but no promises, of course).

Think it through

Pleasure is everywhere and yet it’s hard to work out quite what it is

The sheer variety of ways that people procure pleasure is unsettling, as well as a testament to the plasticity of our species. The differences can be small – I can’t understand why people like to watch golf – and the differences can be great, especially across cultural and temporal gulfs – the pleasure people once got in attending the afternoon execution seems, to me, a bit odd.

Think of pleasure in your own life. What is common to all of the things that give you pleasure? The throughline between warm scarves and charity work and calling your grandmother; between the cool side of the pillow, the sad-happiness of nostalgia, the pop of a champagne bottle opening – what could it be other than that these are all, in their way, pleasing? So, the question is: if pleasure can be found in all these sundry ways, then what is it? And the most common answer is a tad ho-hum: stuff that feels good. Stuff that you like. The experiences that make you say: ‘Yep! There it is.’

Many philosophers have accepted this, or a version of it, and have taken it to mean that there’s not a whole lot more to be said about the nature of pleasure (moralising about how others go about getting pleasure, of course, is a different story). Pleasure is what it is. Its very heterogeneity, its inconceivable variety, has led many to conclude that it’s an elementary component of our existence, or an absolutely simple experience. Edmund Burke said it was so simple it was ‘incapable of definition’. John Locke held that pleasure ‘cannot be described … the way of knowing [pleasure] is … only by experience.’

This view of pleasure as unanalysable, it seems to me, makes the nature of pleasure even stranger given its ubiquity in our lives. Can it really just be, as William James held, that . . .

Continue reading. There’s much more.

Written by Leisureguy

16 June 2021 at 2:16 pm

GOP Senator Says Democracy and Majority Rule Are Not What Our Country Stands For


In New York Jonathan Chait writes about Rand Paul and his radical beliefs:

One of the edifying side effects of the Trump era has been that, by making democracy the explicit subject of political debate, it has revealed the stark fact that many influential conservatives do not believe in it. Mike Lee blurted out last fall that he opposes “rank democracy.” His fellow Republican senator, Rand Paul, tells the New York Times today, “The idea of democracy and majority rule really is what goes against our history and what the country stands for. The Jim Crow laws came out of democracy. That’s what you get when a majority ignores the rights of others.”

Paul is a bit of a crank, but here he is gesturing at a recognizable set of ideas that have long been articulated by conservative intellectuals. Importantly, these ideas are not identified solely with the most extreme or Trumpy conservatives. Indeed, they have frequently been articulated by conservatives who express deep personal animosity toward Donald Trump and his cultists.

The belief system Paul is endorsing contains a few related claims. First, the Founders explicitly and properly rejected majoritarianism. (Their favorite shorthand is “We’re a republic, not a democracy.”) Second, to the extent the current system has shortcomings, they reveal the ignorance of the majority and hence underscore the necessity of limiting democracy. Third, slavery and Jim Crow are the best historical examples of democracy run amok.

National Review has consistently advocated this worldview since its founding years, when it used these ideas to oppose civil-rights laws, and has persisted in using these ideas to argue for restrictions on the franchise. “Was ‘democracy’ good when it empowered slave owners and Jim Crow racists?,” asked NR’s David Harsanyi. Majority rule “sounds like a wonderful thing … if you haven’t met the average American voter,” argued NR’s Kevin Williamson, rebutting the horrifying ideal of majority rule with the knock-down argument: “If we’d had a fair and open national plebiscite about slavery on December 6, 1865, slavery would have won in a landslide.”

It is important to understand that these conservatives have taken Trump’s election, and escalating threats to democracy, not as a challenge to their worldview but as confirmation of it. If Trump is threatening democracy, this merely proves that the people who elected him are ignorant and therefore unfit to rule. The attempted coup of January 6, another NR column sermonized, ought to “remind us of the wisdom that the Founders held dear centuries ago: We are a republic, not a direct democracy, and we’d best act like it.”

The factual predicate for these beliefs is deeply confused. The Founders did reject “democracy,” but they understood the term to mean direct democracy, contrasting it with representative government, in which the people vote for elected officials who are accountable to them.

It is also true that they created a system that was not democratic. In part this was because they did not consider Americans like Black people, women, and non-landowners as deserving of the franchise. On top of this, they were forced to grudgingly accept compromises of the one-man, one-vote principle in order to round up enough votes for the Constitution; thus the “Three-Fifths Compromise” (granting extra weight in Congress to slaveholders) and the existence of the Senate.

Since the 18th century, the system has evolved in a substantially more democratic direction: The franchise has been extended to non-landowners, women, and Black people and senators are now elected by voters rather than state legislatures, among other pro-democratic reforms. To justify democratic backsliding by citing the Founders is to use an argument that proves far too much: Restoring our original founding principles would support disenfranchising the overwhelming majority of the electorate, after all.

Even more absurd is the notion that “Jim Crow laws came out of democracy.” Southern states attempted to establish democratic systems after the Civil War, but these governments were destroyed by violent insurrection. Jim Crow laws were not the product of democracy; they were the product of its violent overthrow.

The most insidious aspect of the Lee-Paul right-wing belief system is . . .

Continue reading.

Written by Leisureguy

14 June 2021 at 6:17 pm
