Later On

A blog written for those whose interests more or less match mine.

Archive for the ‘Books’ Category

Our Little Life Is Rounded with Possibility


Chiara Marletto, author of The Science of Can and Can’t: A Physicist’s Journey Through the Land of Counterfactuals and research fellow at Wolfson College, University of Oxford, has an extract from her book in Nautilus:

If you could soar high in the sky, as red kites often do in search of prey, and look down at the domain of all things known and yet to be known, you would see something very curious: a vast class of things that science has so far almost entirely neglected. These things are central to our understanding of physical reality, both at the everyday level and at the level of the most fundamental phenomena in physics—yet they have traditionally been regarded as impossible to incorporate into fundamental scientific explanations. They are facts not about what is—“the actual”—but about what could or could not be. In order to distinguish them from the actual, they are called counterfactuals.

Suppose that some future space mission visited a remote planet in another solar system, and that they left a stainless-steel box there, containing among other things the critical edition of, say, William Blake’s poems. That the poetry book is subsequently sitting somewhere on that planet is a factual property of it. That the words in it could be read is a counterfactual property, which is true regardless of whether those words will ever be read by anyone. The box may be never found; and yet that those words could be read would still be true—and laden with significance. It would signify, for instance, that a civilization visited the planet, and much about its degree of sophistication.

To further grasp the importance of counterfactual properties, and their difference from actual properties, imagine a computer programmed to produce on its display a string of zeroes. That is a factual property of the computer, to do with its actual state—with what is. The fact that it could be reprogrammed to output other strings is a counterfactual property of the computer. The computer may never be so programmed; but the fact that it could is an essential fact about it, without which it would not qualify as a computer.

The counterfactuals that matter to science and physics, and that have so far been neglected, are facts about what could or could not be made to happen to physical systems; about what is possible or impossible. They are fundamental because they express essential features of the laws of physics—the rules that govern every system in the universe. For instance, a counterfactual property imposed by the laws of physics is that it is impossible to build a perpetual motion machine. A perpetual motion machine is not simply an object that moves forever once set into motion: It must also generate some useful sort of motion. If this device could exist, it would produce energy out of no energy. It could be harnessed to make your car run forever without using fuel of any sort. Any sequence of transformations turning something without energy into something with energy, without depleting any energy supply, is impossible in our universe: It could not be made to happen, because of a fundamental law that physicists call the principle of conservation of energy.
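
[In symbols (a standard restatement of the principle, not Marletto’s notation): over any cyclic process that returns a machine to its initial state, the first law of thermodynamics requires \(W_{\text{out}} = E_{\text{in}}\); with no energy supply to deplete, \(E_{\text{in}} = 0\), and so no net useful work is possible over the cycle. That is the counterfactual content of “a perpetual motion machine is impossible.” – LG]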

Another significant counterfactual property of physical systems, central to thermodynamics, is that a steam engine is possible. A steam engine is a device that transforms energy of one sort into energy of a different sort, and it can perform useful tasks, such as moving a piston, without ever violating that principle of conservation of energy. Actual steam engines (those that have been built so far) are factual properties of our universe. The possibility of building a steam engine, which existed long before the first one was actually built, is a counterfactual.

So the fundamental types of counterfactuals that occur in physics are of two kinds: One is . . .

Continue reading.

Written by Leisureguy

14 June 2021 at 2:00 pm

“Why I Love Lucy Maud”


Carol Volkart, a retired Vancouver Sun editor and reporter in Vancouver, B.C., Canada, writes in Berfrois:

When I first met Lucy Maud Montgomery in her journals a few months ago, she was a sparkling flirt of 14 tumbling off sleds in winter snowbanks, losing her hat and laughing, laughing, laughing. When I said goodbye to her recently, she was an anguished woman of 67, full of drugs, with a depressive husband and a heartless son who may have harassed her into the grave. The last entry in the journals she kept from 1889 to 1942 described her last years as “hell, hell, hell. My mind has gone, everything I have lived for has gone – the world has gone mad. I shall be driven to end my life. Oh God, forgive me. Nobody dreams what my awful position is.”

Like many people the world over, I have known L.M. Montgomery as primarily the author of Anne of Green Gables, a book I adored as a child. I loved its lyrical descriptions of nature in an idyllic island setting, and its seemingly real characters with their foibles and wit and bravery. I suspect it was Anne that first gave me the idea of writing myself; the notion that the ordinary people and places around me held their own interest and potential for drama. Anne, first published in 1908, still holds its charm for youngsters and draws hordes of tourists to Prince Edward Island, but as an adult, I cooled. Dipping into it briefly a few years ago, I grimaced over the purple passages about nature and never got further.

That may have been why I let Montgomery’s five volumes of journals, published between 1985 and 2004, gather dust on our bookshelves until last summer, when Covid isolation opened up more time for reading. Curiously, my partner John, who had never read any of Montgomery’s fiction, was intrigued by the journals. He began buying them for me as gifts as soon as they were published, and went on to buying them for “us,” gobbling them up himself whenever a new one came out.

Now I know why. The journals are far more fascinating than Anne or any of its sequels. They’re a classic tale of a love-starved child overcoming tremendous odds to achieve great success, then toppling to a sad end. Beyond that, they’re a unique look – through the sharp eyes and articulate pen of a rural Canadian woman – at a rapidly changing society, from the late 1800s through the First World War, the Depression and into the Second World War. We learn what it was like to wear puffed sleeves so big that women had to be stuffed into their coats, to travel for hours in a horse-drawn buggy through rain and snowstorms, to wait obsessively for news from the trenches of the war, to watch a best friend die of the 1918 Spanish flu, to first encounter motorized vehicles, wireless, the telephone and even catch the first glimpse of an airplane. But what keeps us hooked are the small details of the daily life of a remarkable woman: Here’s the internationally famous author cleaning out the stables when her husband isn’t up to the job, grinding her way through boring church teas in her role as a minister’s wife, dealing with nosey neighbours, misbehaving sons and covering up for her husband’s dramatic mental breakdowns. As we accompany her step by intimate step, we’re also drawn by a sense of foreboding; clues abound that the path ahead is dark. As in a horror movie, we want, at some points, to yell: “Don’t go down to the cellar, Maud!” For those of us who like to dig into the whys and wherefores of human lives – possibly to better understand our own – Montgomery’s is a feast.

To me, the most intriguing aspect of Montgomery’s life is  . . .

Continue reading.

Written by Leisureguy

14 June 2021 at 1:53 pm

What if higher wages drive faster productivity growth?


Noah Smith has an interesting column at Noahpinion:

The other day I ordered at a restaurant on my smartphone. No waiter came by to ask me if I was ready to order. I scanned a QR code on a piece of paper taped to a wooden post; this brought up the menu on my phone, and I simply indicated what I wanted. As if by magic, a server appeared a few minutes later with the food. During my meal, no one wandered by to ask me if I “was still working on that”; when I wanted more food, I just used my phone again. I’m sure I’m one of many millions of Americans who’s learning to order food this way, as a result of the understaffing and social distancing rules imposed by the Covid pandemic.

While I was ordering this way, I kept thinking over and over that this shift is a real game-changer in terms of productivity. Let people order food on their phones, and the number of wait staff you need to deliver the same service goes way down. It’s barely more onerous for the customer, if at all, and it eliminates the need to have human beings constantly sashaying around the establishment, eyeing how much diners have eaten.

So I guess I wasn’t too surprised when I saw, via Erik Brynjolfsson and Georgios Petropoulos, that labor productivity is growing at rates not seen since the previous century:

What IS surprising is that this growth has accelerated in the first quarter of 2021, as Americans have started coming back to work en masse. If productivity gains had been purely a function of service establishments being forced to stretch and do more with fewer workers because of Covid, we might have expected to see a reversal as workers came back to work en masse; instead, productivity is growing even faster.

Now, it’s important not to put too much weight on one quarter’s worth of data. This is a noisy time series with plenty of mismeasurement, and there are bound to be revisions (in fact, looking at the non-seasonally-adjusted numbers shows a slightly more modest increase in Q1). But coupled with my observations on the ground, the change seems real; we can see the actual labor-saving technologies being implemented right in front of our eyes. Nor am I the only one who sees this. Writing in the WSJ, Greg Ip reports:

Industries accounting for a third of the job loss since the start of the pandemic have increased output, including retailing, information, finance, construction, and professional and business services, said Jason Thomas, head of global research at private-equity manager Carlyle Group.

“This recession took on a life of its own by leading to greater remote work, greater reliance on technology,” Mr. Thomas said. Executives began to ask “hard questions: Why do we have so much floor space? Are we sure our cost base makes so much sense? Why were we taking so many intra-office trips? This experience has just revealed how ignorant you were about the frontiers of technology that you could exploit.”

So if employers really are investing in labor-saving technology, the question is: Why? Part of it is surely that the pandemic simply nudged managers to reconsider how they do things; inertia in the business world is strong. But another possible explanation is that workers are becoming more powerful and demanding higher wages. In the NYT, Neil Irwin notes some evidence that workers are gaining the upper hand in the labor market:

The “reservation wage,” as economists call the minimum compensation workers would require, was 19% higher for those without a college degree in March than in November 2019, a jump of nearly $10,000 a year, according to a survey by the Federal Reserve Bank of New York…

A survey of human resources executives from large companies conducted in April by The Conference Board, a research group, found that 49% of organizations with a mostly blue-collar workforce found it hard to retain workers, up from 30% before the pandemic.

With workers demanding higher wages, there could be an incentive for companies to invest in technologies that will economize on labor — like QR code ordering at restaurants.

One problem for this narrative is . . .

Continue reading. There’s much more.

Written by Leisureguy

13 June 2021 at 7:40 am

Google AI beats humans at designing computer chips


Some science fiction uses the Singularity as a theme — the Singularity being the point at which AI becomes conscious and self-directed, iterating improvements and gains of function to the point it quickly outstrips humanity (cf. the Singularity trilogy — Singularity Sky, Iron Sunrise, and Accelerando, by Charles Stross). The key moment is when AI begins enhancing its own technology — like, say, around now. From Nature, a podcast:

Working out where to place the billions of components that a modern computer chip needs can take human designers months and, despite decades of research, has defied automation. Now, Google researchers have developed a machine-learning algorithm that does the job in a fraction of the time and is already helping to design their next generation of artificial-intelligence processors.

The podcast runs 28 minutes.

Only the first 7 minutes are about how AI is now designing the chips for the next generation of AI — that is, designing improved descendants of itself. And those descendants will be able to do an even better job of designing improvements. It strikes me as somewhat like regenerative feedback. And what could possibly go wrong with that process? What’s the worst that could happen, eh?

Written by Leisureguy

11 June 2021 at 3:14 pm

Exit the Fatherland: Germany’s work to rebuild its common culture


In the decades following WWII, Germany deliberately and slowly reformed its cultural outlook and common values, working at every level to create a society that doesn’t encourage blindly following a leader. The US seems in need of a similar effort to build a culture of common values and understanding, and looking at how Germany did it might help (though the US seems loath to learn from other countries’ experience). – See update below.

Helmut Walser Smith, the Martha Rivers Ingram Chair of History and professor of history at Vanderbilt University in Nashville, and author of The Butcher’s Tale: Murder and Antisemitism in a German Town (2002), The Continuities of German History: Nation, Religion, and Race across the Long 19th Century (2008), and Germany: A Nation in Its Time (2020), writes in Aeon:

After 12 years of fascism, six years of war, and the concentrated genocidal killing of the Holocaust, nationalism should have been thoroughly discredited. Yet it was not. For decades, nationalist frames of mind continued to hold. They prevailed on both sides of the so-called Iron Curtain and predominated in the Global North as well as in the developing world of the Global South. Even in the Federal Republic of Germany, the turn away from ‘the cage called Fatherland’ – as Keetenheuve, the main character in Wolfgang Koeppen’s novel The Hothouse (1953), called his depressingly nationalistic West Germany – didn’t commence immediately.

When the turn did begin, however, Keetenheuve’s country would set out on a remarkable journey – not one racing down the highway to cosmopolitanism, but rather a slow one that required a series of small steps leading to the gradual creation of a more pacific, diverse and historically honest nation – a better Germany.

After the collapse of the Third Reich, Germans widely blamed other countries for the Second World War. ‘Every German knows that we are not guilty of starting the war,’ asserted the Nazi journalist Hildegarde Roselius in 1946. With ‘every German’, this acquaintance of the American photographer Margaret Bourke-White certainly exaggerated. But in 1952, 68 per cent of Germans polled gave an answer other than ‘Germany’ to the question of who started the Second World War, and it was not until the 1960s that this opinion fell into the minority.

In the mid-1950s, nearly half of all Germans polled said ‘yes’ to the proposition that ‘were it not for the war, Hitler would have been one of the greatest statesmen of the 20th century.’ Until the late 1950s, nearly 90 per cent gave an answer other than ‘yes’ when asked if their country should recognise the Oder-Neisse line, the new border with Poland. Perhaps most revealing of all was their stance on Jews. On 12 June 1946, Hannah Arendt hazarded the opinion to Dolf Sternberger, one of occupied Germany’s most prominent publicists, that ‘Germany has never been more antisemitic than it is now.’ As late as 1959, 90 per cent of Germans polled thought of Jews as belonging to a different race – while only 10 per cent thought of the English in these terms.

The sum of these attitudes suggests that Keetenheuve’s cage called Fatherland remained shut for more than two decades after the fall of the Third Reich.

Like most of Europe and indeed the world, Germany lacked a powerful alternative discourse to nationalism. Until the 1970s, the United Nations Declaration of Human Rights possessed little traction in postwar Europe. Regional affiliations, such as those to Europe (or Pan-Africanism or Pan-Arabism), were more viable but as yet confined to a small number of elites. Strident defences of capitalism also did little to deplete the store of nationalist tropes. And on the western side of the Iron Curtain, anti-Communism supported rather than undermined Nazi-inspired nationalism.

The postwar world was, moreover, awash in new nation-states, especially as it shaded into the postcolonial era. In 1945, there were only 51 independent countries represented at the UN: 30 years later, there were 144. Whether in Jawaharlal Nehru’s India or Kwame Nkrumah’s Ghana, nationalism and promises of self-determination fired anti-colonial independence movements in Asia and Africa. In Europe, nationalism also continued to shape claims to group rights and territorial boundaries. In Germany, divided and not fully sovereign until 1990, it informed discussion over eventual unification, the right of the ethnic German expellees to return to their east European homelands, and the validity of Germany’s eastern borders. Indeed, it wasn’t until 1970, a quarter-century after the war, that the Federal Republic of Germany finally recognised as legitimate the German border (established at the Potsdam Conference in 1945) with Poland. And still nearly half the citizens of West Germany opposed the recognition.

The pervasiveness of exclusionary nationalism in the postwar period also reflected a new underlying reality. The Second World War had created a Europe made up of nearly homogeneous nation-states. A series of western European countries, now thought of as diverse, were at that time just the opposite. The share of West Germany’s population born in a foreign country stood at a mere 1.1 per cent, and the minuscule percentage proved paradigmatic for the tessellated continent as a whole. The Netherlands had a still smaller foreign-born population, and foreigners made up less than 5 per cent of the population in Belgium, France and Great Britain. In the interwar years, eastern European countries such as Poland and Hungary had significant ethnic minorities and large Jewish populations. In the postwar period, both were all but gone, and Poles and Hungarians were largely on their own.

Nor, in the trough of deglobalisation, did Europeans often get beyond their own borders, and Germans were no exception. In 1950, most Germans had never been abroad, except as soldiers. Some 70 per cent of the adult women had never left Germany at all. Travel, a luxury enjoyed by the few, didn’t begin to pick up until the mid-1950s, while international travel became a truly mass phenomenon only in the 1970s, when most people had cars of their own. In the first decades of tourism, Germans mainly visited German-speaking destinations, such as the castles on the Rhine or the northern slopes of the Alps. In these decades, few Germans, save for the highly educated, knew foreign languages, and most other Europeans, unless migrant workers, were no different.

The cage called Fatherland was thus reinforced. The persistence in a world of nationalism of the habits of thought of a once-Nazified nation-in-arms constituted one set of reinforcements. The relative homogeneity of postwar nations and the lack of peacetime experiences abroad constituted another. There was also a third reinforcement keeping the cage shut. This was that Germans had something to hide.

In the postwar period, Germany was full of war criminals. The European courts condemned roughly 100,000 German (and Austrian) perpetrators. The sum total of convictions by the Second World War allies, including the United States, the Soviet Union and Poland, pushes that number higher still, as does the more than 6,000 offenders that West German courts would send to prison, and the nearly 13,000 that the much harsher judicial regimen of East Germany convicted.

Nevertheless, there was still a great deal left to cover up. Lower down the Nazi chain of command, a dismaying number of perpetrators of various shades of complicity got off without penalty or consequence. Two jarring examples might suffice. Only 10 per cent of Germans who had ever worked in Auschwitz were even tried, and only 41 of some 50,000 members of the murderous German Police Battalions, responsible for killing half a million people, ever saw the inside of a prison.

Trials and sentences reveal only part of the story of complicity. Many Germans not directly involved in crimes had come into inexpensive property and wares. Detailed reports from the north German city of Hamburg suggest that, in that one city alone, some 100,000 people bought confiscated goods at auctions of Jewish wares. Throughout the Federal Republic, houses, synagogues and businesses once belonging to Jewish neighbours were now in German hands. Mutatis mutandis, what was true for the number of people involved in the murder and theft activities of the Third Reich also held true about what people knew. ‘Davon haben wir nichts gewusst’ (‘We knew nothing about that [the murder of the Jews]’), West German citizens never tired of repeating in the first decades after the war. Historians now debate whether a third or even half of the adult population in fact knew of the mass killings, even if most scholars concede that few Germans had detailed knowledge about Auschwitz.

The Germans shared a European fate here as well, even if they had the most to hide. In his trailblazing article ‘The Past Is Another Country: Myth and Memory in Postwar Europe’ (1992), the late Tony Judt pointed out the stakes that almost all of occupied Europe had in covering up collaboration with Nazi overlords. This wasn’t merely a matter of forgetting, as is sometimes assumed. Rather, it involved continuing and conscious concealment. After all, many people (especially in eastern Europe, where the preponderance of Jews had lived) had enriched themselves – waking up in ‘Jewish furs’, as the saying went, and occupying Jewish houses in what was surely one of the greatest forced real-estate transfers of modern history.

For all these reasons, the cage called Fatherland wasn’t easy to leave and, rather than imagine a secret key opening its door, it makes more sense to follow the hard work involved in loosening up its three essential dimensions: a warring nation, a homogeneous nation, and a cover-up nation. It wasn’t until West Germans could take leave of these mental templates that they could even begin to exit the cage. Fortunately, in the postwar era, Germany was blessed with prolonged prosperity, increased immigration, and the passing of time. When brought together with small, often courageous steps of individuals and institutions, these factors allowed West Germans eventually to embrace peace, diversity and the cause of historical truth: in short, to exit the cage.

The vision of ‘a living, not a deathly concept of Fatherland’, as Dolf Sternberger put it in 1947, had already been laid in the early years of occupation. Sternberger, who cut off the ‘A’ from his first name, argued for a different kind of nation, one that commanded openness and engagement but didn’t end in the glorification of killing and dying in war or in the marginalisation and persecution of others. The nation as a source of life, as a caretaker of its citizens, and not as a vehicle for power, expansion, war and death: this was Sternberger’s initial vision.

It was a conception of Germany that West Germans slowly embraced, symbolically replacing the warfare state with the welfare state, swapping barracks and panzers for department stores and high-performance cars. Enabled by . . .

Continue reading. There’s much more. I suspect that this essay is essentially an extract from his latest book.

Update

There’s still work to be done: see the report “German commando unit disbanded over suspected far-right links within ranks,” by Loveday Morris and Luisa Beck, published today (10 June 2021) in the Washington Post. It begins:

German authorities disbanded a Frankfurt police commando unit Thursday over suspected far-right links to a group of active officers, the latest in a string of extremist-related scandals to blight the country’s police and military.

Peter Beuth, interior minister for the Hesse state where Frankfurt is located, said “unacceptable misconduct” prompted the decision to close the unit. He also said superiors had turned a “blind eye.”

Hesse’s prosecutor on Wednesday said the office was investigating 20 officers from the force with the majority suspected of sending messages in far-right chat groups, including Nazi symbols and “inciting content.” Three supervising officers were accused of failing to stop or report the exchanges. All but one of the 20 was on active duty.

The chat groups were uncovered after examining the phone of an officer suspected of possessing and distributing child pornography.

One officer has been officially suspended, and the others have been “banned from conducting official business,” the public prosecutor said.

The move comes in the wake of revelations over far-right links that have embroiled Germany’s security forces, from other far-right chat groups sharing neo-Nazi content to a group of extremist doomsday preppers who hoarded ammunition ahead of “Day X.”

A court in Hesse is trying Franco Albrecht, a former soldier accused of posing as a Syrian refugee in an attempt to carry out a “false flag” attack. Hesse’s police chief was forced to resign last year after police computers were used to search for personal details of prominent figures before they were sent threatening letters and emails.

A year ago, Germany also partially disbanded its military’s elite commando force because of the extremist links of its officers.

Germany’s Federal Interior Minister Horst Seehofer has pushed back against assertions of structural racism or far-right sympathies in the country’s police forces. But he agreed last year to commission a study into the issue as pressure grew amid a slew of such cases.

A similar study by Germany’s domestic intelligence agency said there were 370 suspected cases of right-wing extremism in the country’s police and security forces.

I will point out that far-right extremists have been discovered in US law enforcement (police departments and state police) and in the US military. Indeed, some of those active in the insurrection of January 6 were active in military service and some were active police officers. The problem is not unique to Germany.

Written by Leisureguy

9 June 2021 at 11:09 am

America’s scarcity mindset


Noah Smith writes at Noahpinion:

“This land was made for you and me” — Woody Guthrie

“I’m all right Jack, keep your hands off of my stack” — Pink Floyd

I’ve been reading Rick Perlstein’s Reaganland: America’s Right Turn 1976-1980. Like all Perlstein books, it’s excellent and you should read it. Anyway, one of the things that really jumps out about the Carter years is the way scarcity and pessimism (which is just anticipation of future scarcity) made the country more selfish. The oil crises of the 70s created absolute chaos, with gunfights at gas stations and violent trucker strikes. It’s not hard to see how that era led to the every-man-for-himself attitude of the conservative 1980s.

But the crazy thing is that America seems to be falling back into this scarcity mindset. Only this time, the shortages are almost entirely of our own creation.

Stephen Covey, the self-help author who wrote The 7 Habits of Highly Effective People, coined the terms “abundance mindset” and “scarcity mindset”. Basically he means that some people go around thinking of the world as a set of positive-sum, win-win situations, while other people go around thinking of everything as a zero-sum competition where you’re either a winner or a loser.

Meanwhile, the psychologist Ronald Inglehart came up with the related idea of “self-expression values” vs. “survival values”. Survival values, which supposedly come about because of economic scarcity, include ethnocentrism, xenophobia, fear of disease, and a hunger for authoritarianism. Sounds a lot like Trumpism, but I think you can also see echoes of this in various leftist ideologies and spaces.

The World Values Survey keeps track of these values, and it’s interesting to see how the U.S. has evolved over time. Here’s the map of countries from 2008:

You can see that while we were more traditionalist than most other rich countries, we were also very high on the “self-expression” end of the scale — about the same as Australia, New Zealand, or Denmark. This is basically the classic view of the U.S. — a bit religious, but a very open and tolerant society. Now check out the map for 2020: . . .

Continue reading. There’s much more — including the second map, which shows some significant changes.

For whatever reason, the US does seem to have lost or abandoned a united effort to work for the common good.

Written by Leisureguy

7 June 2021 at 3:33 pm

Malcolm Gladwell’s Fantasy of War From the Air


In the New Republic Colin Dickey reviews a recent book by Malcolm Gladwell:

There’s a scene in the 2011 film Moneyball where Brad Pitt’s Billy Beane is mentoring young Peter Brand (Jonah Hill) on how to cut a professional baseball player from the roster: bluntly, without euphemism. “Would you rather,” he asks, “get one shot in the head or five in the chest and bleed to death?” Imagine, if you will, that this was not a rhetorical question or an analogy about firing someone but rather a serious, literal question. Now imagine 206 pages of this, and you have a sense of what it’s like to read Malcolm Gladwell’s new book.

The Bomber Mafia: A Dream, a Temptation, and the Longest Night of the Second World War is a nasty, brutish book—if it’s also short, it’s not nearly short enough. It is a breathless and narratively riveting story about the best way to kill hundreds of thousands of civilians. It is the story of two different approaches to killing hundreds of thousands of civilians, and of the heroic men who each championed their own method for mass killing. Its central question is whether one should approach the wholesale massacre of the innocents with indifference or with hypocrisy, and its conceit is that this is a relevant or fascinating distinction. It is a book detailing a series of ingenious new technologies for butchery, dressed up in the polished technophilic language of a TED talk.

The book details the rise and fall (and rise again) of the doctrine of precision air bombing, an idea that emerged from the Air Corps Tactical School (the aviation equivalent of the Army War College), nicknamed the “Bomber Mafia.” The Air Force was not yet a separate branch of the military in the 1930s, but with the advent of military aviation the men at the Air Corps Tactical School (based at Maxwell Field in Montgomery, Alabama) began to fantasize about entirely new kinds of war-making and attempted to birth a revolution in how war might be fought. Their singular obsession, according to Gladwell, was this: What if, instead of bringing the full might of one’s military on the enemy, battering them into submission, you could take out key infrastructure and manufacturing targets (“choke points,” in the Bomber Mafia’s parlance) that would incapacitate your opponent while avoiding mass death?

It’s an interesting enough idea. In the opening years of World War II, aerial bombing meant total destruction. The London Blitz was designed to overwhelm the British and demoralize them into submission. England’s answer to this was Arthur “Bomber” Harris, whom Gladwell describes as, simply, a “psychopath.” Harris was one of the chief architects of the British tactic of “area bombing” or “morale bombing”: Reduce cities to rubble and incinerate the civilians until they submit. For Harris, civilians were viable targets if for no other reason than some of them worked in the factories that made bombs and submarines. As he would say later, “They were all active soldiers, to my mind.”

The minds at the Air Corps Tactical School thought there might be a different way. “The whole argument of the Bomber Mafia, their whole reason for being, was that they didn’t want to cross that line,” Gladwell writes. “They weren’t just advancing a technological argument. They were also advancing a moral argument.” When the Americans joined forces with the British Royal Air Force in bombing Germany, the Bomber Mafia sought to prove its approach. Under the command of General Haywood Hansell, the Americans argued that if they could destroy the Germans’ capacity to make ball bearings, they could bring their manufacturing to a standstill. What if you could leave the Germans for want of a nail and lose them the whole ship?

This is the “dream” of the subtitle—what if by changing one’s perspective and focusing on something small and seemingly insignificant, one could change how wars were fought? One can see how the author of The Tipping Point, Blink, and Outliers would be taken by a group whose motto was Proficimus more irretenti—“We make progress unhindered by custom.” The Bomber Mafia is adapted from an audiobook, which means that what sounds conversational and engaging on tape can sound garrulous on the page, but it also allows Gladwell to telegraph his breathless fascination with these men. “I worry that I haven’t fully explained just how radical—how revolutionary—the Bomber Mafia thinking was,” he says at one point, before launching on a long digression about chapel architecture. Unbound by tradition, the Bomber Mafia wanted to innovate and rethink war from the ground up (or the sky down). This is a group “utterly uninterested in heritage and tradition,” Gladwell explains; rather than “studying the Peloponnesian War or the Battle of Trafalgar,” they were readying themselves for “today’s battles.”

In Gladwell’s world, the people who matter are the innovators, the disrupters. The protagonists of The Bomber Mafia are all various analogs of Steve Jobs or John Lennon—heroic icons who brought a unique perspective and, through determination and insight, pursued a dream that changed the world. But such decisions never happen in a vacuum, and by foregrounding such technological pursuits, The Bomber Mafia furthers the fiction that somehow airstrikes can be moral.


How much can you change the world from the air? In the 1920s, when aviation was in its infancy, proponents for air power imagined a utopian possibility: The airplane was so new, so unrefined, and offered so much potential. The sky was the limit, and perhaps somewhere in this technology would be a way to end war once and for all.

Though this dream would fade fast, the book strains to carry this early naïveté over to the realities of World War II. Gladwell organizes his chapters around individual men with unique, startling ideas, like Carl L. Norden, a Dutch engineer whose obsession was the aerial bomb sight, which would enable precision strikes and could entirely change how aerial warfare was conducted. The book follows first Norden and then the Air Corps Tactical School under Haywood Hansell, as it attempts to prove the efficacy of the precision bombing thesis. This group is repeatedly contrasted with men like Harris, as searching for a “moral” approach to bombing. Hansell, we’re told, “provides us with a model of what it means to be moral in our modern world.”

Gladwell repeats this line throughout; he quotes Tami Biddle, professor of national security at the U.S. Army War College, on this as well: “I think there’s a strong moral component to all this,” she tells Gladwell,

a desire to find a way to fight a war that is clean and that is not going to tarnish the American reputation as a moral nation, a nation of ideas and ideology and commitment to individual rights and respect for human beings.

Tellingly, though, Gladwell provides no direct quotes from Hansell or the Bomber Mafia suggesting that they thought their approach was moral; it’s all a retrospective appraisal from contemporary historians. After all, here is what their so-called “moral” approach looked like at the time: In a wargame that proposed a conflict between Canada and the United States, the Bomber Mafia gamed out what it would take for a hypothetical airstrike launched from Toronto to take out New York City. Bomber Mafia associate Muir Fairchild instead theorized that you could bring the city to its knees by striking 17 targets: the bridges, the aqueducts that brought fresh water to the city, and the power grid. As military historian Robert Pape explains, “They basically want to create a situation where there’s almost no potable water for the population to drink.” This would avoid “wave upon wave of costly and dangerous bombing attacks” or reducing the city to rubble, while still incapacitating the city. This, somehow, is the moral option: cutting off a city of millions to die slowly of thirst. We are back to Billy Beane’s question: Would you rather get one shot in the head or five in the chest and bleed to death? . . .

Continue reading. There’s more.

Written by Leisureguy

6 June 2021 at 8:03 am

Evolution unleashed: Revolution in the making?


Kevin Laland, professor of behavioural and evolutionary biology at the University of St Andrews in Scotland, an elected fellow of the Royal Society of Edinburgh, a fellow of the Society of Biology, and co-author (with Tobias Uller) of Evolutionary Causation: Biological and Philosophical Reflections (2019), writes in Aeon:

When researchers at Emory University in Atlanta trained mice to fear the smell of almonds (by pairing it with electric shocks), they found, to their consternation, that both the children and grandchildren of these mice were spontaneously afraid of the same smell. That is not supposed to happen. Generations of schoolchildren have been taught that the inheritance of acquired characteristics is impossible. A mouse should not be born with something its parents have learned during their lifetimes, any more than a mouse that loses its tail in an accident should give birth to tailless mice.

If you are not a biologist, you’d be forgiven for being confused about the state of evolutionary science. Modern evolutionary biology dates back to a synthesis that emerged around the 1940s-60s, which married Charles Darwin’s mechanism of natural selection with Gregor Mendel’s discoveries of how genes are inherited. The traditional, and still dominant, view is that adaptations – from the human brain to the peacock’s tail – are fully and satisfactorily explained by natural selection (and subsequent inheritance). Yet as novel ideas flood in from genomics, epigenetics and developmental biology, most evolutionists agree that their field is in flux. Much of the data implies that evolution is more complex than we once assumed.

Some evolutionary biologists, myself included, are calling for a broader characterisation of evolutionary theory, known as the extended evolutionary synthesis (EES). A central issue is whether what happens to organisms during their lifetime – their development – can play important and previously unanticipated roles in evolution. The orthodox view has been that developmental processes are largely irrelevant to evolution, but the EES views them as pivotal. Protagonists with authoritative credentials square up on both sides of this debate, with big-shot professors at Ivy League universities and members of national academies going head-to-head over the mechanisms of evolution. Some people are even starting to wonder if a revolution is on the cards.

In his book On Human Nature (1978), the evolutionary biologist Edward O Wilson claimed that human culture is held on a genetic leash. The metaphor was contentious for two reasons. First, as we’ll see, it’s no less true that culture holds genes on a leash. Second, while there must be a genetic propensity for cultural learning, few cultural differences can be explained by underlying genetic differences.

Nonetheless, the phrase has explanatory potential. Imagine a dog-walker (the genes) struggling to retain control of a brawny mastiff (human culture). The pair’s trajectory (the pathway of evolution) reflects the outcome of the struggle. Now imagine the same dog-walker struggling with multiple dogs, on leashes of varied lengths, with each dog tugging in different directions. All these tugs represent the influence of developmental factors, including epigenetics, antibodies and hormones passed on by parents, as well as the ecological legacies and culture they bequeath.

The struggling dog-walker is a good metaphor for how EES views the adaptive process. Does this require a revolution in evolution? Before we can answer this question, we need to examine how science works. The best authorities here are not biologists but philosophers and historians of science. Thomas Kuhn’s book The Structure of Scientific Revolutions (1962) popularised the idea that sciences change through revolutions in understanding. These ‘paradigm shifts’ were thought to follow a crisis of confidence in the old theory that arose through the accumulation of conflicting data.

Then there’s Karl Popper, and his conjecture that scientific theories can’t be proven but can be falsified. Consider the hypothesis: ‘All sheep are white.’ Popper maintained that no amount of positive findings consistent with this hypothesis could prove it to be correct, since one could never rule out the chance that a conflicting data-point might arise in the future; conversely, the observation of a single black sheep would decisively prove the hypothesis to be false. He maintained that scientists should strive to carry out critical experiments that could potentially falsify their theories.
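
[In first-order terms (my formalization, neither Popper’s nor Laland’s): the hypothesis is \(H: \forall x\,(\mathrm{Sheep}(x) \rightarrow \mathrm{White}(x))\). No finite collection of observations \(\mathrm{Sheep}(a_i) \wedge \mathrm{White}(a_i)\) logically entails \(H\), but a single observation \(\mathrm{Sheep}(b) \wedge \neg\mathrm{White}(b)\) entails \(\neg H\); verification and falsification are not symmetric. – LG]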

While Kuhn and Popper’s ideas are well-known, they remain disputed and contentious in the eyes of philosophers and historians. Contemporary thinking in these fields is better captured by the Hungarian philosopher Imre Lakatos in The Methodology of Scientific Research Programmes (1978):

The history of science refutes both Popper and Kuhn: on close inspection both Popperian crucial experiments and Kuhnian revolutions turn out to be myths.

Popper’s arguments might make logical sense, but they don’t quite map on to how science works in the real world. Scientific observations are susceptible to errors of measurement; scientists are human beings and get attached to their theories; and scientific ideas can be fiendishly complex – all of which makes evaluating scientific hypotheses a messy business. Rather than accepting that our hypotheses might be wrong, we challenge the methodology (‘That sheep’s not black – your instruments are faulty’), dispute the interpretation (‘The sheep’s just dirty’), or come up with tweaks to our hypotheses (‘I meant domesticated breeds, not wild mouflon’). Lakatos called such fixes and fudges ‘auxiliary hypotheses’; scientists propose them to ‘protect’ their core ideas, so that they need not be rejected.

This sort of behaviour is clearly manifest in scientific debates over evolution. Take the idea that new features acquired by an organism during its life can be passed on to the next generation. This hypothesis was brought to prominence in the early 1800s by the French biologist Jean-Baptiste Lamarck, who used it to explain how species evolved. However, it has long been regarded as discredited by experiment – to the point that the term ‘Lamarckian’ has a derogatory connotation in evolutionary circles, and any researchers expressing sympathy for the idea effectively brand themselves ‘eccentric’. The received wisdom is that parental experiences can’t affect the characters of their offspring.

Except they do. The way that genes are expressed to produce an organism’s phenotype – the actual characteristics it ends up with – is affected by chemicals that attach to them. Everything from diet to air pollution to parental behaviour can influence the addition or removal of these chemical marks, which switches genes on or off. Usually these so-called ‘epigenetic’ attachments are removed during the production of sperm and egg cells, but it turns out that some escape the resetting process and are passed on to the next generation, along with the genes. This is known as ‘epigenetic inheritance’, and more and more studies are confirming that it really happens.

Let’s return to the almond-fearing mice. The inheritance of . . .

Continue reading. There’s much more.

Written by Leisureguy

5 June 2021 at 10:22 am

(Trying To) Study Textbooks Effectively: A Year of Experimentation


An interesting post at LessWrong:

When I started studying the art of studying, I wanted to understand the role of book learning. How do we best learn from a textbook, scientific article, or nonfiction book? What can a student of average intelligence do to stay on top of their homework? Is it possible to improve your annual knowledge growth rate by one or two percent by learning how to learn? Should a motivated student take a maximizing or satisficing approach to their coursework? How many of the skills of a top scholar are strategic, collaborative, psychological, or involve merely a set of habits and technological proficiencies?

Fortunately, I started with the most esoteric of approaches, exploring visualization. I tried using a memory palace to memorize a textbook. It was vivid, fun, and creative. Exploring visualization helped me understand chemical diagrams, led me to invent a math problem, and made learning a lot more fun. But I simply couldn’t jam that much detailed technical knowledge into my head. The method didn’t help me pass my final exam, and I dropped it.

Posts from this era include Visual Babble and Prune, Using a memory palace to memorize a textbook, The point of a memory palace, and Visualizing the textbook for fun and profit.

After that, I explored speed reading. I read the theory, experimented both with physical technique and speed reading apps, and kind of broke my reading habits developing this difficult-to-correct tendency to skim. This tendency to read too quickly persisted long after I’d dropped deliberate attempts at speed reading. I finally made some intellectual progress, which preceded correcting the reading habit itself, in The Comprehension Curve.

Then I explored the world of Anki and tried to use flashcards to memorize a textbook instead (or at least a few chapters). After simulating the sheer amount of flashcard review I’d have to do to keep a strategy like that up long-term, I dropped that too. I felt that forming memories of narrow facts (like the structure of RNA polymerase or the name of the 7th enzyme in glycolysis) was the costliest way to learn. And I found the achievement of world-class memory champions irrelevant to real-world learning, which just seems like an entirely different task.
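
[To get a feel for the review-load arithmetic alluded to above, here is a minimal sketch – the card counts and the doubling-interval rule are my assumptions for illustration, not the author’s numbers or Anki’s actual scheduler. – LG]

```python
# Rough, hypothetical simulation of daily flashcard review load.
# Assumptions (mine, not the post's): 50 new cards per day, review
# intervals that simply double after each successful review
# (1, 2, 4, 8, ... days), and no lapses.

from collections import defaultdict

NEW_CARDS_PER_DAY = 50
DAYS = 365

# due[day] maps a review interval (in days) to the number of cards
# due on that day with that interval.
due = defaultdict(lambda: defaultdict(int))
daily_reviews = []

for day in range(DAYS):
    # New cards introduced today come up for their first review tomorrow.
    due[day + 1][1] += NEW_CARDS_PER_DAY

    # Review everything due today; each card's interval doubles.
    reviews_today = sum(due[day].values())
    for interval, count in list(due[day].items()):
        due[day + 2 * interval][2 * interval] += count
    daily_reviews.append(reviews_today)

print(f"reviews on day 30:  {daily_reviews[29]}")
print(f"reviews on day 365: {daily_reviews[364]}")
print(f"average per day over the year: {sum(daily_reviews) / DAYS:.0f}")
```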

Posts from this area (not all on flashcards specifically) include The Multi-Tower Study Strategy, Define Your Learning Goal: Competence Or Broad Knowledge, Progressive Highlighting: Picking What To Make Into Flashcards, Goldfish Reading, Curious Inquiry and Rigorous Training, and Using Flashcards for Deliberate Practice.

During this time, I also played around with “just reading,” without a conscious technique. Posts from this era include Check OK, babble-read, optimize (how I read textbooks) and Wild Reading.

Notes are cheap. It takes a lot less time to write down a fact than to memorize it. But I went further. I developed an elaborate and carefully-specified system of shorthand notation to represent causal, temporal, and physical structures. It used Newick notation for tree structures, variants on arrow signs to articulate causation, sequence, combination, and more, templates to rewrite the stereotyped information presented by textbooks in a uniform format, and hyperlinks in Obsidian to represent the relationships between concepts.
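
[Newick notation, mentioned above, writes a tree as nested parentheses: children separated by commas, with the parent’s label after the closing parenthesis. The short sketch below is my own illustration, not the author’s actual templates. – LG]

```python
# Minimal illustration of Newick notation for tree-structured notes.
# The example tree is made up; the post's shorthand is more elaborate.

def to_newick(node):
    """Serialize a (label, children) tuple into a Newick string."""
    label, children = node
    if not children:
        return label
    return "(" + ",".join(to_newick(child) for child in children) + ")" + label

# Hypothetical note: glycolysis split into its two phases.
tree = ("glycolysis", [
    ("investment_phase", [("glucose", []), ("2_ATP_in", [])]),
    ("payoff_phase", [("4_ATP_out", []), ("2_NADH", []), ("2_pyruvate", [])]),
])

print(to_newick(tree) + ";")
# ((glucose,2_ATP_in)investment_phase,(4_ATP_out,2_NADH,2_pyruvate)payoff_phase)glycolysis;
```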

Not only did I take notes on the textbook, I also took notes on each individual homework problem. I also developed notes for other problems. I wrote Question Notes for The Precipice. This means that for each paragraph in the book, I wrote down one question to which that paragraph was a valid answer.

I never published any posts on note-taking. Partly, note-taking itself scratched that itch. But more importantly, it was a very fast iterative cycle. My methods developed day by day, over the course of months. I was experimenting with different software apps, tweaking the templates I used, figuring out how to expand my particular method of shorthand to represent complex structures. After all the shifts I’d made on my previous experiments, I thought I would spare LessWrong the tedious minutiae of my developing thoughts on note-taking. I’m confident that crafting the perfect notes in an elaborate and precise shorthand system is not a panacea, so I don’t know if it’s worth bothering.

Exploring note-taking was as useful as visualizing was fun. The rigid structure of my note-taking approach gave me clear guidance on what it means to “read” or “study” a textbook chapter. They became a useful reference for looking things up. The idea of bringing together any data, formula, charts, or techniques I needed to solve a problem, and then making a plan of attack before setting to work, was a big upgrade for my accuracy and sense of ease.

Yet when my note-taking . . .

Continue reading.

Written by Leisureguy

4 June 2021 at 12:26 pm

A Military Drone With A Mind Of Its Own Was Used In Combat, U.N. Says


Highly recommended: Kill Decision and also Influx, by Daniel Suarez (and I would also highly recommend his other books, including the duology Daemon and Freedom™). Joe Hernandez reports for NPR:

Military-grade autonomous drones can fly themselves to a specific location, pick their own targets and kill without the assistance of a remote human operator. Such weapons are known to be in development, but until recently there were no reported cases of autonomous drones killing fighters on the battlefield.

Now, a United Nations report about a March 2020 skirmish in the military conflict in Libya says such a drone, known as a lethal autonomous weapons system — or LAWS — has made its wartime debut. But the report does not say explicitly that the LAWS killed anyone.

“If anyone was killed in an autonomous attack, it would likely represent an historic first known case of artificial intelligence-based autonomous weapons being used to kill,” Zachary Kallenborn wrote in Bulletin of the Atomic Scientists.

The assault came during fighting between the U.N.-recognized Government of National Accord and forces aligned with Gen. Khalifa Haftar, according to the report by the U.N. Panel of Experts on Libya.

“Logistics convoys and retreating [Haftar-affiliated forces] were subsequently hunted down and remotely engaged by the unmanned combat aerial vehicles or the lethal autonomous weapons systems such as the STM Kargu-2 … and other loitering munitions,” the panel wrote.

The Kargu-2 is an attack drone made by the Turkish company STM that can be operated both autonomously and manually and that purports to use “machine learning” and “real-time image processing” against its targets.

The U.N. report goes on: “The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true ‘fire, forget and find’ capability.”

“Fire, forget and find” refers to a weapon that once fired can guide itself to its target.

The idea of a “killer robot” has moved from fantasy to reality

Drone warfare itself is not new. For years, military forces and rebel groups have used remote-controlled aircraft to carry out reconnaissance, target infrastructure and attack people. The U.S. in particular has used drones extensively to kill militants and destroy physical targets.

Azerbaijan used armed drones to gain a major advantage over Armenia in recent fighting for control of the Nagorno-Karabakh region. Just last month, the Israel Defense Forces reportedly used drones to drop tear gas on protesters in the occupied West Bank, while Hamas launched loitering munitions — so-called kamikaze drones — into Israel.

What’s new about the incident in Libya, if confirmed, is . . .

Continue reading. There’s more.

Written by Leisureguy

2 June 2021 at 3:34 pm

What Renaissance?


History as taught uses boundaries and classifications that to a great extent are arbitrary. For example, the date of Rome’s fall is often given as 476 AD, but people in 477 AD didn’t comment, “Wow! Did you notice how Rome fell last year?! That was something!” And there’s the quip about the doctoral student whose thesis was on the first 30 minutes of the Protestant Reformation.

Henrik Lagerlund, professor of the history of philosophy at Stockholm University and member of the Rotman Institute of Philosophy in Canada, has an interesting article in Aeon, which notes that

He is series editor of Studies in the History of Philosophy of Mind (2002-); editor-in-chief of the Encyclopedia of Medieval Philosophy (2010); and co-editor of Causal Powers in Science: Blending Historical and Conceptual Perspectives (2021). He is also the author of Skepticism in Philosophy: A Comprehensive, Historical Introduction (2020).

Lagerlund writes:

Renaissance philosophy started in the mid-14th century and saw the flowering of humanism, the rejection of scholasticism and Aristotelianism, the renewal of interest in the ancients, and created the prerequisites for modern philosophy and science. At least, this is the conventional story. But, in fact, there was no Renaissance. It is an invention by historians, a fiction made in order to tell a story – a compelling story about the development of philosophy, but nevertheless a story. In fact, all periodisation is ‘mere’ interpretation. This view is called historiographical nihilism.

Historiography was for a long time simply the writing of histories. Sweden, for example, had a royal historiographer, which was a formal appointment at the Royal Court. For a period in the late 17th century, the position was held by the philosopher Samuel Pufendorf (1632-94). He wrote several books in Latin on the history of Gustav II Adolf’s war efforts in Europe during the Thirty Years’ War, as well as one about Queen Christina’s abdication. Recently, historiography has become more a study of how history is written. In the second sense, it is the works of the historians and their methods that are the object of study, and not history itself. A historiographer doesn’t write histories, but develops theories about how history is written.

Nihilism, of course, has been given many meanings and has been interpreted in many different ways by philosophers throughout history. In the context of historiography, it means the rejection of, or – in a slightly weaker form – the scepticism towards historiographical concepts such as periodisation, but also other concepts pertaining to the development of a ‘theory’ of history; consequently, it implies that there can’t be only one method of history but many.

Historiographical nihilism has nothing against using periodisation in history and philosophy as a heuristic tool or for pedagogic purposes, but it reminds us that, as such, they’re always false, and when we study the details of history, it will become obvious that such grand statements as the outline of a period such as the Renaissance are futile and empty. The arbitrariness of assigning the term ‘Renaissance philosophy’ to a period in time can be easily seen if we have a look at the historical development of the term itself.

Renaissance philosophy is often presented as a conflict between humanism and scholasticism, or sometimes it’s simply described as the philosophy of humanism. This is a deeply problematic characterisation, partly based on the assumption of a conflict between two philosophical traditions – a conflict that never actually existed, and was in fact constructed by the introduction of two highly controversial terms: ‘humanism’ and ‘scholasticism’. A telling example of how problematic these terms are as a characterisation of philosophy in the 16th century can be found in Michel de Montaigne (1533-92). He was critical of a lot of philosophy that came before him, but he didn’t contrast what he rejected with some kind of humanism, and his sceptical essay An Apology for Raymond Sebond (1580) wasn’t directed at scholastic philosophy. In fact, both these terms were invented much later as a means to write about or introduce Renaissance philosophy. Persisting with this simplistic dichotomy only perverts any attempt at writing the history of 14th- to 16th-century philosophy.

One of the first attempts at writing a history of philosophy in a modern way was Johann Jacob Brucker’s five-volume Historia critica philosophiae (1742-44), published in Leipzig. He didn’t use the terms ‘Renaissance’ or ‘humanism’, but the term ‘scholastic’ was important for him. The narrative we still live with in philosophy, for the most part, was already laid down by him. It’s the familiar narrative that emphasises the ancient beginning of philosophy, followed by a collapse in the Middle Ages, and an eventual recovery of ancient wisdom in what much later came to be called ‘Renaissance philosophy’.

The US philosopher Brian Copenhaver, one of the foremost scholars of our time, develops this idea in his contribution to The Routledge Companion to Sixteenth-Century Philosophy (2017). In ‘Philosophy as Descartes Found It: Humanists v Scholastics?’, he explains how Brucker’s ideal was developed from Cicero and called by him ‘humanitatis litterae’ or ‘humanitatis studia’. For Brucker, these terms signified the works of the classical authors and the study of them. The Latin he used for the teaching of the classical authors was ‘humanior disciplina’. Brucker sees himself as completing a project he claims was started by Petrarch in the mid-14th century: a cultural renewal that would save philosophy from the darkness of scholasticism.

As we’ve come to know more about the period referred to by Brucker as the Middle Age, it has become clear that it’s simply wrong to call it a decline. It is instead extraordinarily rich philosophically, and should be celebrated as hugely innovative. It’s by no means a ‘dark age’. Quite the contrary. So the view that emerges in Brucker stems from a lack of knowledge and understanding of the philosophy of that time.

The use of the term ‘humanism’ to signify a coherent movement was first introduced in the 19th century, around the same time as the advent of the term ‘Renaissance’. Crucially, neither were initially used in connection with philosophy. Rather, they were used by art historians, especially . . .

Continue reading. There’s much more.

Written by Leisureguy

31 May 2021 at 11:14 am

What The Rise Of Amazon Has To Do With The Rise Of Trump

leave a comment »

Danielle Kurtzleben reports at NPR:

Amazon was already an economic behemoth before the start of the coronavirus pandemic. But when many Americans ramped up their shopping from home, the company saw explosive growth. In short, as ProPublica journalist Alec MacGillis writes in Fulfillment, its fortunes diverged from the nation’s.

The book looks at the American economy through the lens of Amazon — the forces that made it, the trends it accelerated, and the inequality that he argues has resulted from the growth of Big Tech. The NPR Politics Podcast spoke to him about America’s “winning” and “losing” cities, what Amazon has to do with former President Donald Trump’s election, and how much it matters when consumers decide to boycott huge companies like Amazon.

Fulfillment was the latest selection in the NPR Politics Podcast Book Club. Join in the book conversations at the podcast’s Facebook group. The next discussion, in late June, will be about Elizabeth Hinton’s America on Fire: The Untold History of Police Violence and Black Rebellion Since the 1960s.

The following are excerpts from the full interview with MacGillis, with answers edited for clarity and length. [Audio of the interview here. – LG]

DANIELLE KURTZLEBEN: Your book isn’t exactly what I was expecting. I sort of went into it thinking, “this is going to be a book that’s, ‘Amazon [is] bad — it has bad labor practices and it hurts small business, etc.’ ” And while Amazon doesn’t come off as quite a hero, the book is much more about the American economy and American economic history through an Amazon lens. How would you describe what you were trying to do?

ALEC MACGILLIS: Yes, I actually came to Amazon secondarily within the book. I’d wanted for years to write a book about regional disparities in America — the growing regional inequality between a small set of what I call winner-take-all cities, cities like Seattle, San Francisco, New York, Boston, D.C., and a much larger set of cities and towns that have really been falling behind.

We’ve always had richer and poorer places, but the gap between them has gotten a lot bigger in recent years, and it’s really unhealthy for the country. I especially wanted to write about it after Trump got elected; it was so clear just what a big role these regional disparities had in Trump’s election.

I chose Amazon as the frame for two different reasons. One is that the company is so ubiquitous now in our lives, just so omnipresent, that it’s a handy thread to take you around the country and show what we’re becoming as a country, in a kind of metaphorical way. But it’s also a very handy frame for the story of regional inequality, because the company is itself helping drive these disparities. The regional concentration of wealth in our country is very closely tied to the concentration of our economy in certain companies.

DK: I’m not sure what the timeline was of you working on this book, but when you saw the big HQ2 contest happen — it’s like your book’s thesis on steroids. What was your reaction to Amazon holding essentially a Bachelor competition for where its next headquarters would be?

AM: It was quite serendipitous in a way that they embarked on this process while I was working on the book. I actually chose Washington, D.C. as one of the two “winner” cities that I was going to focus on before it got chosen by Amazon to be the second headquarters. [Amazon chose the D.C. suburb of Arlington, Va., as a new headquarters site in 2018.]

I knew that I wanted to focus on Seattle because it already was the Amazon headquarters. And I wanted to focus on Washington because it was so clear that Washington was another winner-take-all city that had been completely transformed by this kind of hyper-prosperity. And then, lo and behold, they go ahead and pick Washington as their second headquarters.

Another reason I wanted to have Washington as a second winner-take-all city is that I found the contrast between Washington and Baltimore so compelling for me.

The sort of spiritual heart of the book is the contrast between Washington [and] Baltimore, these two cities that are just 40 miles apart. I’ve moved between these cities now for the last 20 years, working and living in both places. And it’s just been so striking to watch the gap growing between them, and to me, just really upsetting and disheartening to watch that happening.

You have one city that’s become just incredibly unaffordable for so many people, where it costs, you know, seven, eight, nine hundred thousand dollars to buy a rowhouse, if not more. All these people, longtime residents, mostly longtime black residents, being displaced by the thousands. And then just up the road in Baltimore, you have such deep population decline that rowhouses, the kind that go for seven or eight hundred thousand dollars down the road in Washington, are being demolished by the hundreds.

That just is not good for people in either city, and Amazon is really at the core of that. They chose Washington as their second headquarters, and it’s only going to get richer and more expensive.

DK: There’s so much to get at here in terms of the economic forces at work — the way that city government works, NIMBYism in action, de-unionization, companies getting preferential tax treatment, that sort of thing. How did we get here? Is there an original sin that sort of led to where we are, or is it just that we went from a goods-based to a tech-based economy, and this just sort of inevitably happened? . . .

Continue reading. There’s more.

Written by Leisureguy

30 May 2021 at 1:32 pm

A Number Theorist Who Connects Math to Other Creative Pursuits

leave a comment »

In Quanta Steve Nadis interviews Jordan Ellenberg:

“There are many different pathways into mathematics,” said Jordan Ellenberg, a mathematician at the University of Wisconsin, Madison. “There is the stereotype that interest in math displays itself early. That is definitely not true in general. It’s not the universal story — but it is my story.”

That account was backed up by a biostatistician at the University of Pennsylvania — his mother, Susan Ellenberg. “Jordan recognized numbers before he could walk,” she said. “We’d be going someplace with him, and he’d start to call out numbers, and his father and I would have to figure out where he was seeing them. Each night, he’d ask me to teach him something new about math.” When he was in second grade, a local teacher began taking him through the high school math curriculum. Ever since, he’s been preoccupied with mathematics — though not exclusively so.

After graduating from Harvard University in 1993, Ellenberg completed a one-year master’s program in fiction writing at Johns Hopkins University, where he wrote a novel that was published a decade later, titled The Grasshopper King. But he always felt that he would eventually return to mathematics, and in 1994 he entered a doctoral program back at Harvard, pursuing research under the supervision of Barry Mazur, a number theorist.

“Barry was a great adviser and a very learned guy,” Ellenberg said. “One of the things he showed me is that it’s OK to be interested in things other than math. Through him I saw that being in a university isn’t just about being in the math department, but rather being part of a whole world of scholarship.”

Ellenberg has taken that view to heart, finding mathematics to explore in everything from internet fads to voting rights. He has interacted and even collaborated with colleagues from many different fields and departments, while keeping up his writing — academic papers for math journals, and popular articles for newspapers and magazines. In 2001, he started writing a column for Slate called “Do the Math.” Many entries are not typical mathematician fare, such as “Algebra for Adulterers,” “Cooking the Books on Virginity,” and “What Broadway Musicals Tell Us About Creativity.”

His latest book, Shape, is all about geometry — though, as you might expect, it departs significantly from the traditional geometry of your high school days. Proving the congruence of triangles and the like, he said, bears little resemblance to the work of modern geometry. In the book’s introduction, Ellenberg confesses that it was a curious subject for him to have taken up: “Reader, let me be straight with you about geometry: at first I didn’t care for it.”

Quanta spoke with Ellenberg earlier this month about geometry, electoral math and creativity. The interview has been condensed and edited for clarity.

When did you first realize there was something special about math?

When I was 6 years old, I was in the living room, gazing at the rectangular pattern of holes on a speaker where the sound comes through. I noticed there were 6 rows of 8 holes and, equivalently, 8 columns of 6 holes. I knew that 6 × 8 equals 8 × 6. But at that moment, I grasped that this was a fact about the world, not just a fact from the multiplication tables. Mathematical knowledge, I realized, was something that existed on its own — something you could directly apprehend — and not just something you were taught.
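(An illustration added here, not part of the interview: the fact Ellenberg describes is just the counting argument behind commutativity. Count the speaker’s grid of holes row by row and then column by column; both counts tally the same holes, so they must agree.)

6 × 8 (row by row): 8 + 8 + 8 + 8 + 8 + 8 = 48
8 × 6 (column by column): 6 + 6 + 6 + 6 + 6 + 6 + 6 + 6 = 48

The same reasoning applies to any rectangular array, which is why m × n = n × m in general.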

That, for me, offered an early glimmer of the power of mathematical thinking — and the emotional force that comes with it. As teachers, we aspire for every kid to have that kind of experience of mathematical knowledge.

Mathematics is a diverse field. How did you decide to focus on number theory?

I went to graduate school not really knowing what I would work on. It was just after Andrew Wiles proved Fermat’s Last Theorem. There was so much energy and enthusiasm about number theory at that time. It seemed to be the most exciting thing going on.

Students often ask me: “How do I figure out what area of math is right for me?” I tell them that it’s all interesting. Every field of research has deep wonderful ideas. You just have to see what you fall into. And wherever you fall, there is excitement to be found.

Of all the possible subjects in math, why did you write a book on geometry, especially when you admit to having a mental block when it comes to visualizing things?

It’s true, I didn’t really take to high school geometry. There was a certain style — the Euclidean “theorem, statement, proof” approach — that did not vibe with me. That approach is certainly a part of geometry, but it happens to be a tiny part.

It’s also true that I have difficulty with some geometric things. For example, when you have to put a credit card into a machine, I can’t follow the diagram and instead end up trying all four possibilities. If I’m on the first floor of my house and am asked about the layout of things in the room above me, I can’t really picture that. But it turns out that those skills aren’t so important when it comes to doing geometry.

Even though I steered clear of geometry when I was young, I later learned that you can’t maintain a dislike for any part of mathematics because all of its branches touch each other.

You also like to find mathematical connections even among ideas that don’t seem too mathematical, like pondering how many holes a straw has. Why bother answering that?

Well, it’s kind of an internet craze [with more than 60 million hits on Google]. It goes viral all the time, and you may wonder why people are so captivated by such a weird question. I’d say it’s actually a deep mathematical question, neither a triviality nor a cut-and-dried matter. You could say one hole or two holes — or zero holes if you think about taking a rectangular piece of paper (with no holes in it) and rolling it up. It’s a way of getting people to understand topology and homology groups, which classify objects based on the number of holes they have.

It turns out there is a mathematical answer to this: Topologists would say the straw has just one hole. But the point is not just to give people an answer, but rather to show them why it’s an interesting problem. Although this question is settled, many of the things that are now settled in mathematics may not have been settled 100 or so years ago. People have fought hard over almost every single conceptual advance.
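(A note added here, not part of the interview, for readers who want the topologists’ reasoning spelled out: a drinking straw is, topologically, a hollow cylinder, and a cylinder can be continuously squashed down onto a circle. Counting holes then amounts to computing the first homology group, which, roughly, counts the independent loops in a shape that cannot be shrunk to a point.)

H₁(straw) ≅ H₁(circle) ≅ Z (one independent loop, hence one hole)
H₁(sphere) ≅ 0 (no unshrinkable loops, hence no holes)
H₁(torus) ≅ Z × Z (two independent loops, hence two holes)

On that standard accounting, the straw has exactly one hole, as the interview says.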

In 2019, you and 10 other mathematicians signed a brief about gerrymandering that was submitted to the Supreme Court. What does math have to do with that? . . .

Continue reading. There’s more.

Written by Leisureguy

29 May 2021 at 9:30 am

Posted in Books, Daily life, Math

A dinner described

leave a comment »

The description occurs in Post Captain, the second book of Patrick O’Brian’s marvelous series of British naval novels of the Napoleonic era. (The first is Master and Commander, highly recommended.) O’Brian writes:

‘Dinner,’ said Captain Christy-Pallière, closing the file of Death Sentences, F-L. ‘I shall start with a glass of Banyuls and some anchovies, a handful of olives, black olives; then I believe I may look at Hébert’s fish soup, and follow it with a simple langouste in court-bouillon. Possibly his gigot en croûte: the lamb is exquisite now that the thyme is in flower. Then no more than cheese, strawberries, and some trifle with our coffee – a saucer of my English jam, for example. None of your architectural meals, Penhoët; my liver will not stand it in this heat, and we have a great deal of work to do if the Annibale is to be ready for sea by next week. There are all Dumanoir’s dossiers to deal with – how I wish he would come back. I should have interrogated the Maltese this morning. If we have a good dinner they risk to escape unshot . . .’

Written by Leisureguy

26 May 2021 at 9:56 am

Posted in Books, Daily life, Food, Military

Capitalism Doesn’t Have to Be This Way

leave a comment »

Zachary Karabell, author of Inside Money: Brown Brothers Harriman and the American Way of Power (from which this article seems to be taken), writes in the Atlantic:

When is enough enough? This simple, vital question—How much monetary gain does a person or a company need in order to feel satisfied?—has little place in the finance industry or in contemporary capitalism more broadly. The capitalism that has become dominant in the years since the 1980s is not about enough; it’s about more, and no amount of more is ever enough.

For many of its critics, capitalism, in all its versions, is a maximizer of more. The relentless pursuit of profit, the drive to multiply shareholder value that undergirds most large public companies, and the demand that revenue grow faster than the overall economy or the population—all of these impulses prevail on Main Street, on Wall Street, and in Silicon Valley. This is one reason why such an enormous gulf has opened up between the richest Americans and the rest, and why large banks, behemoth energy companies, multinational industrials, huge private-equity firms, and large tech companies have flourished.

But today’s paramount form of capitalism is not the only possible variant, nor was the volatile, boom-and-bust, panic-prone one that prevailed for most of two centuries through the Great Depression. An alternative form of capitalism placed a higher value on social stability than on the pursuit of more. Exemplifying that approach—one that embraced a less rapacious culture of enough—is the oldest bank still in business in the United States today: Brown Brothers Harriman. For 220 years, the company has tried to make reliable returns through a clear-eyed management of risk—not the avoidance of all potential downsides but a healthy recognition that, when it closed its ledgers each night, it needed to be prepared for the world to change the next day. Beyond limiting their own risk, the leaders of Brown Brothers believed—as I show in my new book—that domestic discord and global instability were to be avoided if possible and planned for if not, and they understood that the ebbs and flows of money could either boost the fortunes of all or beggar the nation.

The company’s story is particularly important now, as the United States tries to define post-pandemic capitalism amid widespread suspicion that the system is failing many Americans. Brown Brothers was one of a handful of banks at the apex of the system for much of the 19th and into the mid-20th century, and it is far older than the financial firms, such as Goldman Sachs, Lehman Brothers, and Morgan Stanley, that became famous (or infamous) in recent decades.

Starting with its founder, Alexander Brown, an Irish linen merchant who made his way to Baltimore in 1800 fleeing the sectarian violence of Belfast, and then run by his four sons, the firm was a creator of our system of paper money; the company issued letters of credit that were trusted more than even the U.S. dollar until well after the Civil War. To its eternal discredit, Brown Brothers was, like many northern businesses, deeply enmeshed in the antebellum cotton trade—a role for which the company now apologizes—though its partners were founding members of the antislavery Republican Party.

For much of the 19th century, the firm almost single-handedly managed the foreign-exchange system between the British pound and the U.S. dollar until the dollar became the world’s currency after World War II. It also underwrote the first railroad (the Baltimore & Ohio), created the first traveler’s checks, established one of the first modern wealth-management businesses, funded businesses as varied as The Nation and Time and Newsweek and CBS and Pan American World Airways and the first steamships, and then eventually sent a triad of partners—the future ambassador Averell Harriman, future Defense Secretary Robert Lovett, and future senator Prescott Bush (yes, from that Bush family)—into the highest levels of government. The three were exemplars of their class, a white, Anglo-Saxon Protestant elite that proceeded to shape the entire postwar global system of the United Nations, the precursor to the World Trade Organization, the dollar-denominated currency system established at Bretton Woods in 1944, the national-security establishment in Washington, the Marshall Plan aid to Europe, and American military preparations during the Cold War.

Like American history writ large, the company’s legacy was messy. In the 1850s, the renowned preacher Henry Ward Beecher repeatedly cited Brown Brothers in his sermons denouncing American materialism. Just as it was complicit in slavery, the firm was entangled in the rise of American imperialism in Latin America. In the 20th century, it spurred U.S. military intervention in Central America when it appealed to President William Howard Taft to send Marines to Nicaragua so that the bonds the firm held would be paid back. Still, in the 1940s and ’50s, Harriman and Lovett were featured on the cover of national magazines as heroes who had elevated the United States to a position of global power and as stewards of the postwar international system that they helped design.

Brown Brothers was the epitome of an elite that saw itself as bound to lead, and whose public service represented a form of noblesse oblige. Altruism wasn’t the driver. It was rather a specific sense that they and their class could not ultimately thrive unless the commons thrived as well. They had attended schools such as Groton and Yale that inculcated ideas such as “To reign is to serve.” That ethos coalesced into a more coherent governing creed of “the Establishment,” which explains in part the rules-based, American-led order that followed World War II. It was an order meant to preserve capitalism against communism, to spread the gospel of wealth globally, and to allow the United States and the dollar to thrive, which would lead to the worldwide efflorescence of the middle class and so redound to the benefit of American capital and American companies.

But mention Brown Brothers today, and most Americans will shrug. Even in the financial world, the name evokes a response of . . .

Continue reading.

The problem with modern capitalism — “hypercapitalism” — is that the imperative of “more” must fail in a finite world (such as the world in which we find ourselves). Recognizing and respecting natural limits and moral demands is an approach of measured moderation that would be longer lasting and far less prone to disaster. It does require a certain humility, which modern capitalists totally lack.

Written by Leisureguy

25 May 2021 at 1:06 pm

This Library Has New Books by Major Authors, but They Can’t Be Read Until 2114

with one comment

Merve Emre had an interesting article in the NY Times in November 2018. It begins:

In a small clearing in the forests of Nordmarka, one hour outside the city limits of Oslo, a thousand spruce trees are growing. They will grow for the next 96 years, until 2114, when they will be felled, pulped, pressed and dyed to serve as the paper supply for the Scottish artist Katie Paterson’s Future Library: an anthology of 100 previously unpublished books written by some of the 21st century’s most celebrated writers. There will be one book for every year the trees will have grown, each a donation from a writer chosen by the Future Library’s board of trustees — a gift from the literary gatekeepers of the present to the readers of the future.

This summer, nearly 100 people made the annual pilgrimage to the clearing to watch the Turkish novelist Elif Shafak present Paterson with the fourth manuscript for the library, which already houses novels by Margaret Atwood, David Mitchell and the Icelandic novelist Sjón. The Handover Ceremony, as it is called, is a modest ritual. The audience is called to order by the flutings and shouts of a Norwegian folk singer who blesses the land and its offerings. Anne Beate Hovind, Chair of the Future Library Trust, established by the Oslo city government in 2014, introduces the audience to Paterson, Shafak and the seedlings, which now stand just over a foot tall and are dressed up for the occasion. Pretty red schoolgirl bows are tied around their center stalks, and the tops of their needles are turned out a younger, brighter green than the wild grass that surrounds them.

In previous years, it has rained, which means the local foresters have brewed coffee and hot chocolate on colossal iron grills while the audience has stood shivering under their umbrellas. But this year, the entire day — the entire summer, really — has been uncommonly hot and dry, and so the audience sits scattered among the saplings, drinking water and perspiring, as they listen to Shafak, author of 10 novels and a prominent feminist and critic of Turkish nationalism, speak. She describes writing a novel for the Future Library as “a secular act of faith” in a world that seems to have gone mad, a world that violently accentuates the differences between people instead of celebrating their common humanity. “When you write a book,” she says, “you have the faith that it will reach out to someone else, to someone who is different from you and it will connect us. That you will be able to transcend the boundaries of the self, that was given to you at birth, that you will be able to touch someone else’s reality.” Yet in 96 years, when the seedlings become trees and the trees are sacrificed to the written word, it is impossible to know whose reality they will touch.

The next 96 years do not look promising for the seedlings, which are more vulnerable than their ancestors to all manner of man-made disasters: the storm surges, wildfires, heat waves, and droughts precipitated by global warming, as well as the less dramatic possibility that, amid the daily brutalities of life on earth, people will simply stop tending to them and the books that are their fate. The announcement of each new author is greeted with less media fanfare than the one before, and very few people have commented on the recent choice of Han Kang, winner of the 2016 Man Booker International Prize for her novel “The Vegetarian,” as the fifth author. Increasingly, it seems, there is something unbearably precious about writing novels that cannot be read — an act of delayed gratification that can have no real payoff because it has no real stakes, only symbolic ones. And there is something more straightforwardly unbearable about planting trees knowing that, in a time of mass deforestation and consumer waste, they will be cut down to make paper.

Yet the Future Library begins to look less twee, less inattentive, when one considers the bonfire that consumed 40,000 books in Alexandria in 48 B.C. or, this summer, the stray paper lantern that set the Museu Nacional of Brasil on fire, destroying manuscripts and artifacts collected over two hundred years. It was shocking to remember, in an age of hard drives and big data, how quickly the matter of memories could disappear. The Future Library makes the physicality of culture palpable by insisting that we confront the long, laborious process of preserving language. It refuses to take it for granted. And it reminds us that we have not always been attentive to how literature is made, distributed, preserved and celebrated.

But these are thoughts for tomorrow and not today, which is a day for celebration. Shafak’s manuscript is sealed in a handsome gray box tied with a royal purple ribbon. “Don’t open it and don’t talk about the contents,” Hovind warns Shafak as she gets up to hand the box to Paterson, who weeps softly and openly. She explains that she is especially emotional this year as she has just had a child who she has brought with her to the forest, a towheaded little creature who will be 96 years old when the Future Library’s anthology is printed.

“PERSONS OR THINGS which are regarded as taboo may be compared to objects charged with electricity,” observed the anthropologist Northcote Thomas. Taboo, a Polynesian word that Sigmund Freud translated as “holy dread,” most often referred to an action that was both sacred and forbidden, consecrated and dangerous. It is an apt description of the Future Library, which grafts an environmental taboo onto an artistic one: trees that are planted to be cut down; books that are written not to be read.

The manuscripts are electrified by these taboos. In 2020, they will be moved to the New Deichmanske Library, currently under construction in Oslo, where they will be displayed in a “Silent Room”: a womb-shaped chamber facing the forest, lined with wood from its trees. Visitors will be able to enter, one or two at a time, to gaze at the manuscripts lying under their protective glass cases, waiting for the years to pass. Paterson describes the Silent Room, more like a prayer closet than a reading room, as a “contemplative space.” Her hope is that it will prompt the visitor’s imagination to journey through “deep time” to probe the mysteries of the forest.

What Paterson’s description of the Silent Room makes clear is that the books of the Future Library were never meant to be read, certainly not in our lifetime, but not even in the future. They are meant to be  . . .

Continue reading.

Written by Leisureguy

22 May 2021 at 2:42 pm

Posted in Books, Daily life, Memes

Truth, lies, and honey

leave a comment »

François Lévêque, professor of economics at Mines ParisTech, has an interesting article in The Conversation that begins:

We hear many tales about bees and honey. Even economists may base their theories on fantasy hives. Dieticians can do the same when promoting the imaginary health benefits of honey, and then there’s the honey itself. It should be one of the purest products of nature, yet what we find on supermarket shelves can be cut with syrup, tainted by antibiotics, or sourced from China despite a label that claims otherwise.

So let’s take a worldwide tour of the honey trade, which oscillates between truth and tall tales, with a few tips for your coming purchases.

Bee theory

The social life of bees has long fired up our imagination. Pliny the Elder (23-79 CE) admired their political organisation, with its chiefs and councils. He even thought that moral principles guided their behaviour. Nearly 1,700 years later, the Anglo-Dutch author Bernard Mandeville took the opposite view, describing a vice-ridden hive inhabited by selfish bees. The Fable of the Bees, published in 1714, became a work of reference for political economists. A precursor to Adam Smith, whose invisible hand of individual self-interest fed the common good, Mandeville set out to prove that, unlike altruism, selfishness was productive. Hostile to frugality – wealth stolen from a miser will trickle down, after all – he inspired Keynes’ critique of excess saving.

In fairness to Pliny, Mandeville and many others who have fantasised about bees, the hive as we now know it, with its removable wooden frames, had yet to be invented. So it was difficult to observe the life and social mores of bees. There were no glass walls enabling us to watch their busy work or count drones, males whose only purpose in life is to mate with a virgin queen. Nor were there electronic tags to monitor the ceaseless movement of bees and discover that to produce a pound of honey they must cover a distance equivalent to flying around the world, visiting some 5 million flowers on the way.

James Meade, a British economist who was awarded the 1977 Nobel prize for his work on international economic policy, had no such excuse. In the early 1950s he cited the example of apple-growing and beekeeping in the same area to illustrate his theoretical analysis of external economies. Each serves the other: the bees gather nectar from the apple blossom to make honey and in so doing pollinate the flowers which in turn become fruit.

Meade theorised that because these reciprocal services are unpaid, both parties under-invest: Beekeepers set up fewer hives than is economically optimal, because they take no share in the marginal product apple growers obtain from a bigger harvest. Orchard owners plant fewer trees than is economically optimal, taking no share in the marginal product beekeepers derive from extra honey. This example was a big success with economics professors and their students, no doubt on account of its bucolic character and springtime atmosphere.
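(A reconstruction added here, not part of the original article: Meade’s under-investment claim can be written out in the standard textbook notation for externalities. Suppose honey revenue is h(H, T), where H is the number of hives and T the number of apple trees, each hive costs c, apple revenue is a(T, H), and each tree costs k; the symbols are illustrative, not Meade’s own.)

Beekeeper acting alone: choose H so that ∂h/∂H = c
Grower acting alone: choose T so that ∂a/∂T = k
Joint optimum: ∂h/∂H + ∂a/∂H = c and ∂a/∂T + ∂h/∂T = k

Because each party ignores the positive cross term (the extra apples from more hives, the extra honey from more trees), both H and T come out smaller than the joint optimum. As the article goes on to note, real growers and keepers contract over exactly those cross terms, which is what removes the supposed externality.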

Unfortunately, Meade was mistaken on two points. First, he did not know that apple blossom contains very little nectar, which is perhaps excusable. Apple-blossom honey, should you find some in a shop, is actually made from the flowers of other plants in the orchard. Second – and this is a real blunder – he overlooked the many arrangements between growers and keepers for their mutual benefit and reward. So there was in fact no sign of free, unpaid production factors, and hence no external economies. Steven Cheung, a specialist in property rights and transaction costs, made this point with a survey of beekeepers and tree-growers, whose deals abide by rules rooted in tradition or are actually framed in full-blown contracts.

Bees sometimes travel by truck

American beekeepers have been charging for their pollination services for years. But the almond boom has massively increased the scale of these operations. Every year millions of hives are trucked to California’s almond orchards from other parts of the country. Accounting for almost 80% of global demand, the farms play host to some 30 billion bees for a few weeks. The hives are then taken to Florida or Texas to pollinate other trees.

Bees and beehives travel around other countries, too. In France, they move from one region to another to gather pollen from flowering plants and trees at their best. At different points in the season the same hive may produce honey flavoured by Mediterranean garrigue (scrubland), acacia trees and finally lavender bushes. Unlike their US counterparts, professional beekeepers in European countries earn their living from producing honey, and not mainly from pollination services. Nor is there much trade in bees here.

Until recently, the transportation of bees in China was largely for honey production, not pollination. But in provinces such as Sichuan, the balance of earnings is shifting toward pollination. This is indirectly due to  . . .

Continue reading. There’s much more.

Written by Leisureguy

22 May 2021 at 12:27 pm

How to be a genius

leave a comment »

Craig Wright, professor emeritus of music at Yale University, a member of the American Academy of Arts and Sciences, and author of The Hidden Habits of Genius: Beyond Talent, IQ, and Grit – Unlocking the Secrets of Greatness (2020), has an interesting article in Aeon that begins:

Don’t get me wrong – yes, I’m a professor at Yale University, but I’m no genius. When I first mentioned to our four grown children that I was going to teach a new course on genius, they thought that was the funniest thing they’d ever heard. ‘You, you’re no genius! You’re a plodder.’ And they were right. So how did it come to pass that now, some dozen years later, I continue to teach a successful course on genius at Yale, and have written an Amazon Book of the Year selection, The Hidden Habits of Genius (2020)? The answer: I must have, as Nikola Tesla urged, ‘the boldness of ignorance’.

I started my professional life trying to be a concert pianist, back in the days of the Cold War. The United States was then trying to beat the Soviet Union at its own games. In 1958, Van Cliburn, a 23-year-old pianist from Texas, won the inaugural International Tchaikovsky Competition, something akin to the Olympics of classical music. And then in 1972, Brooklyn’s Bobby Fischer defeated Boris Spassky in chess. Because I had shown an interest in music, and was also tall with enormous hands, I, too, would become the next Cliburn, at least so my mother declared.

Although our family wasn’t wealthy, my parents managed to provide me with a Baldwin grand piano and find the best teachers in our hometown of Washington, DC. Soon, I was packed off to the prestigious Eastman School of Music, where, once again, every opportunity was placed before me. And I had a strong work ethic: by the age of 21, I had engaged, by my estimation, in 15,000 hours of focused practice. (Mozart had needed only 6,000 to get to the level of master-composer and performer.) Yet, within two years, I could see that I would never earn a dime as a concert pianist. I had everything going for me except one: I lacked musical talent. No special memory for music, no exceptional hand-eye coordination, no absolute pitch – all things very necessary to a professional performer.

‘If you can’t compose, you perform; and if you can’t perform, you teach’ – that’s the mantra of conservatoires such as the Eastman School of Music. But who wants to spend each day in the same studio teaching other likely soon-to-fail pianists? My intuition was to find a larger arena in a university. So off I went to Harvard to learn to become a college professor and a researcher of music history – a musicologist, as it’s called. Eventually, I found employment at Yale as a classroom instructor teaching the ‘three Bs’: Bach, Beethoven and Brahms. Yet the most captivating composer I ran into there was an M: Mozart. My interest in him accelerated with the appearance of the Academy Award-winning film Amadeus (1984). For a time, the entire world seemed obsessed with this funny, passionate and naughty character.

Thus it was a movie, of all things, that caused me to shift the focus of my academic research to Mozart. Yet the cardinal principle of scholarship I’d been taught at Harvard remained the same: if you seek the truth, consult the original primary sources; the rest is simply hearsay. Thus, over the course of 20 years, I went in search of Mozart in libraries in Berlin, Salzburg, Vienna, Krakow, Paris, New York and Washington, studying his autograph (or handwritten) music manuscripts. I found that Mozart could effortlessly conceive of great swaths of music entirely in his head, with almost no corrections. What Salieri said of Mozart in Amadeus no longer seems so fanciful: here ‘was the very voice of God’.

To hold in your hands the divine pages of a Mozart autograph – even if wearing the oft-required white gloves – is at the same time an honour and an exhilaration. The fluctuating angles of his pen, changing size of his note heads and varying tints of ink provide an insight as to how his mind is working. As if invited into Mozart’s study, you watch as this genius, empowered by his huge natural gifts, enters a creative zone, and the music just pours forth.

What other genius, I wondered, worked like Mozart? Here again, it was the autograph manuscripts that drew me in. Who among us has not been attracted to the fascinating designs of Leonardo da Vinci – his sketches of ingenious machines and instruments of war, as well as pacifist paintings? Unlike the original manuscripts of Mozart, the drawings and notes of Leonardo (some 6,000 pages survive) have mostly been published in facsimile editions, and many are now available online.

If Mozart could hear in his head how the music ought to go, Leonardo, judging from his sketches, could simply see in his mind’s eye how the machine should work or the painting should look. Here, too, Leonardo’s natural technical facility is manifest, as seen in the hand-eye coordination that results in correct proportions and the cross-hatching lines that suggest three-dimensional perception. Likewise evident is Leonardo’s relentless curiosity. We watch his mind range across an endless horizon of interconnected interests; on one page, for example, a heart becomes the branches of a tree, which then become the tentacles of a mechanical pulley. How do all these seemingly disparate things of the world hang together? Leonardo wanted to know. With good reason, the cultural historian Kenneth Clark called him ‘the most relentlessly curious man in history’.

Mozart in music, Leonardo in art; what about the everyday world of politics? Here the perfect subject of a study of genius was close at hand: Elizabeth I, queen of England. The Beinecke Rare Book and Manuscript Library at Yale owns copies of every history of her reign written by her contemporaries. The secret to her success? Elizabeth not only read books voraciously (three hours a day was her wont) but also read people. She read, she studied, she observed, and she kept her mouth shut (Video et taceo was her motto). By knowing all and saying little, Elizabeth ruled for nearly 45 years, laid the foundations of the British empire and fledgling capitalist corporations, and gave her name to an entire epoch, the Elizabethan era.

Fascinating! I was learning so much. Why not have students learn along with me – after all, that’s why we have these young people cluttering up the place! And that’s how my genius course – or ‘Exploring the Nature of Genius’ – came to be.

Perhaps it takes a non-genius to analyse how exceptional human accomplishment happens. During my years at Harvard and at Yale, I met a lot of smart people, including a half-dozen Nobel Prize winners. If you’re a prodigy with a great gift for something, you can simply do it – yet might not be aware of why and how. And you don’t ask questions. Indeed, the geniuses I met seemed too preoccupied with committing acts of genius to consider the cause of their creative output. Maybe an outsider looking in has a clearer overview of how the magic gets done.

Year after year, increasing numbers of Yale students enrolled in my course to find the answer but, from the very first, something unexpected happened, and I should have seen it coming: the appreciation of genius turns out to be gender-biased.

Although the ratio of Yale undergraduates is now 50/50 male-female, and although the genius course is a general humanities class open to all, annually the enrolment in that class skews about 60/40 male-female. Students at Yale and other liberal arts colleges vote with their feet and, despite favourable course evaluations, women at Yale don’t seem to be as interested in exploring the nature of genius as their male counterparts are.

Why, I wondered. Are women less excited by competitive comparisons that rank some people as ‘more exceptional’ than others? Are they less likely to value the traditional markers of genius in a winner-take-all world – things such as the world’s greatest painting or most revolutionary invention? Does the absence of female mentors and role models play a part? Why take a course in which the readings, once again, will be mostly about the triumphant accomplishments of ‘great [mostly white] men’? Was the very way I’d framed this course perpetuating, once again, an unconscious bias against women and the assumption of a white cultural supremacy?

Happily, I ultimately ‘capped’ the course at 120 students and, thus, could do a bit of social engineering. I was at liberty to admit whom I wished and thereby assure a representative proportion of women and minority students. The aim was not to fill quotas, but to increase diversity of opinion and inspire robust argumentation, things especially useful in a course in which there’s no answer.

‘There is no answer! There is no answer! There is no answer!’ chanted 120 eager undergraduates in the first session of the ‘genius course’, as I urged them on. Students typically want an answer to put into their pocket as they leave class, one they can later deploy on a test – but I felt that it was important to make this point immediately. To the simple question ‘What is genius?’ there’s no answer, only opinions. As to what drives it – nature or nurture – again, no one knows.

The question ‘Nature or nurture?’ always provoked debate. The quant types (mathematics and science majors) thought genius was due to natural gifts; parents and teachers had told them that they’d been born with a special talent for quantitative reasoning. The jocks (varsity athletes) thought exceptional accomplishment was all hard work: no pain, no gain. Coaches had taught them that their achievement was the result of endless hours of practice. Among novice political scientists, conservatives thought genius a God-given gift; liberals thought it was caused by a supportive environment. No answer? Call in the experts: readings from Plato, William Shakespeare and Charles Darwin to Simone de Beauvoir followed, but each had his or her own take.

The students hoped for something more concrete. Some wanted to know if they were already geniuses and what their futures might hold. Most wanted to know how they, too, might become a genius. They had heard that I’d studied geniuses from Louisa May Alcott to Émile Zola, and thought that I might have found the key to genius. So I asked: ‘How many of you think you already are or have the capacity to be a genius?’ Some timidly raised their hands; the class clowns did so emphatically. Next: ‘If you’re not one already, how many of you want to be a genius’? In some years, as many as three-quarters of the students raised their hands. Then I asked: ‘OK, but what exactly is a genius?’ Excitement turned to puzzlement, which was followed by a two-week quest to formulate a definition of genius, one that usually ended with the following sort of hypothesis:

A genius is a person of extraordinary mental powers whose original works or insights change society in some significant way for good or for ill across cultures and across time.

Only gradually, and not until I’d written my book The Hidden Habits of Genius, did I come to see that this complex verbiage might be simplified into something akin to a ‘genius equation’.

Here was a formula that students and the populace at large could immediately grasp: . . .

Continue reading. There’s more.

Written by Leisureguy

19 May 2021 at 4:25 pm

Speaking of cultural memes and their workings: Sinead O’Connor Remembers Things Differently

leave a comment »

For those over a certain age, Sinead O’Connor’s ripping apart the photograph of the Pope is an indelible memory — but from a temporal distance it looks different from how it seemed at the time. Amanda Hess writes in the NY Times:

Sinead O’Connor is alone, which is how she prefers to be. She has been riding out the pandemic in a tiny village on an Irish mountaintop, watching murder shows, buying fairy-garden trinkets online and mainlining American news on CNN. On a recent overcast afternoon, she had a navy hijab arranged over her shaved head and a cigarette permanently installed between her fingertips, and when she leaned over an iPad inside her all-glass conservatory, she looked as if she had been hermetically sealed into her own little world.

“I’m lucky,” she said, “because I enjoy my own company.”

Her cottage was appointed in bright, saturated colors that leapt out from the monotonous backdrop of the Irish sky with the surreal quality of a pop-up book. Bubble-gum roses lined the windows, and the Hindu goddess Durga stretched her eight arms across a blanket on a cozy cherry couch. When O’Connor, 54, gave me a little iPad tour during our video interview, the place seemed to fold in on itself: The flowers were fake ones she bought on Amazon.com, and her pair of handsome velvet chairs weren’t made for sitting.

“Deliberately, I bought uncomfortable chairs, because I don’t like people staying long,” she said. “I like being on my own.” But she disclosed this with such an impish giggle that it sounded almost like an invitation.

O’Connor is, no matter how hard she tries to fight it, irresistible. She exudes a tender familiarity, thanks to her cherubic smile, her loose tongue and the fact that she happens to possess one of the most iconic heads in pop culture memory. In the early ’90s, O’Connor became so famous that the very dimensions of her skull seemed inscribed in the public consciousness. If you remember two things about her, it’s that she vaulted to fame with that enduring close-up in the video for her version of “Nothing Compares 2 U” — and then, that she stared down a “Saturday Night Live” camera, tore up a photo of Pope John Paul II and killed her career.

But O’Connor doesn’t see it that way. In fact, the opposite feels true. Now she has written a memoir, “Rememberings,” that recasts the story from her perspective. “I feel that having a No. 1 record derailed my career,” she writes, “and my tearing the photo put me back on the right track.”

O’Connor saw herself as a protest-singing punk. When she ascended to the top of the pop charts, she was trapped. “The media was making me out to be crazy because I wasn’t acting like a pop star was supposed to act,” she told me. “It seems to me that being a pop star is almost like being in a type of prison. You have to be a good girl.” And that’s just not Sinead O’Connor.

“CRAZY” IS A word that does some dirty cultural work. It is a flip way of referencing mental illness, yes. But it’s also a slippery label that has little to do with how a person’s brain works and everything to do with how she is culturally received. Calling someone crazy is the ultimate silencing technique. It robs a person of her very subjectivity.

By the time O’Connor appeared on “S.N.L.,” in October 1992, she had already been branded as insane — for boycotting the Grammy Awards where she was up for record of the year (they recognized only “material gain,” she said) and refusing to play “The Star-Spangled Banner” before her concerts (because national anthems “have nothing to do with music in general”). But now her reputation felt at permanent risk.

“I’m not sorry I did it. It was brilliant,” she said of her protest against abuse in the Catholic Church. “But it was very traumatizing,” she added. “It was open season on treating me like a crazy bitch.”

Soon after the show, O’Connor appeared at a Bob Dylan tribute concert, and when the crowd booed, she was so taken aback she thought, at first, that they were making fun of her outfit. Joe Pesci threatened to smack her in an “S.N.L.” monologue, and later, on that same stage, Madonna mocked her in a gently condescending fashion, play-scowling and ripping up a photograph of the tabloid-star sex offender Joey Buttafuoco. O’Connor was condemned by the Anti-Defamation League and a group called the National Ethnic Coalition of Organizations, which hired a steamroller to crush hundreds of her albums outside of her record company’s headquarters. The Washington Times named her “the face of pure hatred” and Frank Sinatra called her “one stupid broad.”

Now O’Connor’s memoir arrives at a time when the culture seems eager to reassess these old judgments. The top comment on a YouTube rip of O’Connor’s “Behind the Music” episode is: “Can we all just say she was right!” Few cultural castaways have been more vindicated by the passage of time: child sexual abuse, and its cover-up within the Catholic Church, is no longer an open secret. John Paul II finally acknowledged the church’s role in 2001, nearly a decade after O’Connor’s act of defiance.

But the overreaction to O’Connor was not just about whether she was right or wrong; it was about the kinds of provocations we accept from women in music. “Not because I was famous or anything, but because I was a human being, I had a right to put my hand up and say what I felt,” O’Connor said. Some artists are skilled at shocking in a way designed to sell more records, and others at tempering their political rage into palatable music, but “Sinead is not the tempering type,” her friend Bob Geldof, the musician and activist, told me. “In that, she is very much an Irish woman.”

To understand why O’Connor may have seen her cultural blacklisting as liberating, you have to understand just how deeply she was misapprehended throughout her career. She was still a teenager when she started work on her fierce, ethereal first record, “The Lion and the Cobra,” when an executive — “a square unto high heaven” — called her to lunch and told her to dress more femininely and grow out her close-cropped hair. So she marched to a barber and shaved it all off. “I looked like an alien,” she writes in the book, which was a kind of escape hatch from looking like a human woman. When O’Connor became pregnant in the midst of recording, she writes that the executive called a doctor and tried to coerce her into having an abortion, which she refused. Her first son, Jake, arrived just before the album did.

Later, when “Nothing Compares 2 U” made her a star, O’Connor said the song’s writer, Prince, terrorized her. She had pledged to reveal the details “when I’m an old lady and I write my book,” and now she has: She writes that Prince summoned her to his macabre Hollywood mansion, chastised her for swearing in interviews, harangued his butler to serve her soup though she repeatedly refused it, and sweetly suggested a pillow fight, only to thump her with something hard he’d slipped into his pillowcase. When she escaped on foot in the middle of the night, she writes, he stalked her with his car, leapt out and chased her around the highway.

Prince is the type of artist who is hailed as crazy-in-a-good-way, as in, “You’ve got to be crazy to be a musician,” O’Connor said, “but there’s a difference between being crazy and being a violent abuser of women.” Still, the fact that her best-known song was written by this person does not faze her at all. “As far as I’m concerned,” she said, “it’s my song.”

O’CONNOR’S STATEMENT ON “S.N.L.” was more personal than most knew. In the book, she details how her mother physically abused her throughout her childhood. “I won the prize in kindergarten for being able to curl up into the smallest ball, but my teacher never knew why I could do it so well,” she writes. There is a reason, in the “Nothing Compares 2 U” video, she begins to cry when she hits the line about her mama’s flowers. O’Connor was 18 when her mother died, and on that day, she took down the one photograph on her mom’s bedroom wall: the image of the pope. O’Connor carefully saved the photo, waiting for the right moment to destroy it.

“Child abuse is an identity crisis and fame is an identity crisis, so I went straight from one identity crisis into another,” she said. And when she tried to call attention to child abuse through her fame, she was vilified. “People would say that she’s fragile,” Geldof said. “No, no, no. Many people would have collapsed under the weight of being Sinead O’Connor, had it not been Sinead.”

Instead, O’Connor felt freed. “I could just be me. Do what I love. Be imperfect. Be mad, even,” she writes in the book. “I’m not a pop star. I’m just a troubled soul who needs to scream into mikes now and then.” She sees the backlash as having pushed her away from the wrong life, in mainstream pop, and forced her to make a living performing live, which is where she feels most comfortable as an artist.

“Rememberings” is a document of a difficult life, but it is also deliciously funny, starting with the title. (“As I’ve said, I can’t remember many details because I was constantly stoned,” she writes.) It is loaded with charming stories from the height of her fame. She rejects the Red Hot Chili Peppers singer Anthony Kiedis’s claim that they had a thing (“Only in his mind”) but confirms a fling with Peter Gabriel (to discover the profane term she assigns to their affair, you’ll have to read it.) . . .

Continue reading. There’s a lot more.

And the report linked above is worth reading as well. It is by Jon Pareles and appeared in the November 1, 1992, issue of the NY Times. That report begins:

You think it’s easy to get booed at Madison Square Garden? Maybe it is for a visiting hockey team, but at a rock concert, drawing boos qualifies as a perverse kind of achievement. Sinead O’Connor, who was booed (as well as cheered) at the Bob Dylan tribute on Oct. 16, once again showed that she has a gift that’s increasingly rare: the ability to stir full-fledged outrage. She has stumbled onto the new 1990’s taboo: taking on an authority figure.

O’Connor was booed because, 13 days earlier, she had torn up a photograph of Pope John Paul II on NBC’s “Saturday Night Live,” saying, “Fight the real enemy.” Compounding her impropriety, she dropped her scheduled Dylan song and reprised “War,” the anti-racism song by Bob Marley and Haile Selassie. Her expression was timorous, defiant, martyred, and she made all the late-edition newspapers and television news.

Meanwhile, the tabloids happily reported, Madonna (no stranger to recontextualized Christian symbols) told The Irish Times: “I think there is a better way to present her ideas rather than ripping up an image that means a lot to other people.” She added, “If she is against the Roman Catholic Church and she has a problem with them, I think she should talk about it.”

She did: last week, O’Connor released an open letter, linking her being abused as a child to “the history of my people” and charging, “The Catholic church has controlled us by controlling education, through their teachings on sexuality, marriage, birth control and abortion, and most spectacularly through the lies they taught us with their history books.” The letter concluded, “My story is the story of countless millions of children whose families and nations were torn apart for money in the name of Jesus Christ.” Proselytizing as imperialism as child abuse — quite a leap.

Madonna’s reaction may have been professional jealousy. After Madonna had herself gowned, harnessed, strapped down and fully stripped to promote her album “Erotica” and her book “Sex,” O’Connor stole the spotlight with one photograph of a fully-clothed man. But the other vilification that descended on O’Connor showed she had struck a nerve.

Sex, which used to be a guaranteed shocker, has become a popular entertainment, with triple-X tapes on home VCR’s and lubricious innuendo in every sitcom. Visual and telephone sex, sex as commercial spectacle, may have moved in where fear of AIDS has made physical sex far less casual. Looking is safe; touching is not.

But as public standards of viewable sexual behavior have changed, a new kind of taboo is gaining force: challenging authority and its religious version, blasphemy. (Another button-pusher, sexual harassment, has more to do with power and authority than with titillation.) In an American culture that used to prize the loner, the wiseguy, the maverick, defense of authority is on the rise, whether it’s a backlash against permissiveness or fear of impending anarchy.

Anti-authority sentiments raise hackles highest when the challenge comes from insubordinate blacks (like Ice-T with “Cop Killer”) or women, like O’Connor. If a heavy-metal band took a picture of the Pope, hung it on an upside-down cross and burned it, the act would likely be greeted with yawns — that old bit again? But waifish female 25-year-olds like O’Connor don’t have the same prerogative. While bullies like Axl Rose are lionized as rock-and-roll rebels simply for lashing out at the press — like so many losing political candidates — O’Connor draws real outrage because she doesn’t know her place.

Not that O’Connor isn’t a loose cannon. She has a penchant for the impassioned but mis-targeted gesture: boycotting the Grammy Awards, refusing to perform on a “Saturday Night Live” show featuring Andrew Dice Clay, refusing to let “The Star-Spangled Banner” be played before a concert, singing a Bob Marley song at a Bob Dylan tribute. Tearing up the Pope’s photograph may have been the best way she could envision to condemn Catholicism, but she surely would have thought twice about tearing up a photograph of Louis Farrakhan or the Lubavitcher Rebbe.

She baffles the likes of Madonna by making her gestures without game plans or tie-ins. “War” doesn’t appear on her new album, “Am I Not Your Girl?” — a collection of standards accompanied by orchestra and sung in the voice of a terrified child who believes every unhappy word.

Yet for all O’Connor’s sincerity . . .

Continue reading.

Written by Leisureguy

18 May 2021 at 12:24 pm

You are a network, not just a body, mind, or your social role

leave a comment »

Kathleen Wallace, professor of philosophy at Hofstra University in Hempstead, New York, works on the ethics and metaphysics of personal identity and has an interesting piece in Aeon. I imagine, given the content, that it is an extract from her book The Network Self: Relation, Process, and Personal Identity (2019). It is fairly long, and it begins:

Who am I? We all ask ourselves this question, and many like it. Is my identity determined by my DNA or am I a product of how I’m raised? Can I change, and if so, how much? Is my identity just one thing, or can I have more than one? Since its beginning, philosophy has grappled with these questions, which are important to how we make choices and how we interact with the world around us. Socrates thought that self-understanding was essential to knowing how to live, and how to live well with oneself and with others. Self-determination depends on self-knowledge, on knowledge of others and of the world around you. Even forms of government are grounded in how we understand ourselves and human nature. So the question ‘Who am I?’ has far-reaching implications.

Many philosophers, at least in the West, have sought to identify the invariable or essential conditions of being a self. A widely taken approach is what’s known as a psychological continuity view of the self, where the self is a consciousness with self-awareness and personal memories. Sometimes these approaches frame the self as a combination of mind and body, as René Descartes did, or as primarily or solely consciousness. John Locke’s prince/pauper thought experiment, wherein a prince’s consciousness and all his memories are transferred into the body of a cobbler, is an illustration of the idea that personhood goes with consciousness. Philosophers have devised numerous subsequent thought experiments – involving personality transfers, split brains and teleporters – to explore the psychological approach. Contemporary philosophers in the ‘animalist’ camp are critical of the psychological approach, and argue that selves are essentially human biological organisms. (Aristotle might also be closer to this approach than to the purely psychological.) Both psychological and animalist approaches are ‘container’ frameworks, positing the body as a container of psychological functions or the bounded location of bodily functions.

All these approaches reflect philosophers’ concern to focus on what the distinguishing or definitional characteristic of a self is, the thing that will pick out a self and nothing else, and that will identify selves as selves, regardless of their particular differences. On the psychological view, a self is a personal consciousness. On the animalist view, a self is a human organism or animal. This has tended to lead to a somewhat one-dimensional and simplified view of what a self is, leaving out social, cultural and interpersonal traits that are also distinctive of selves and are often what people would regard as central to their self-identity. Just as selves have different personal memories and self-awareness, they can have different social and interpersonal relations, cultural backgrounds and personalities. The latter are variable in their specificity, but are just as important to being a self as biology, memory and self-awareness.

Recognising the influence of these factors, some philosophers have pushed against such reductive approaches and argued for a framework that recognises the complexity and multidimensionality of persons. The network self view emerges from this trend. It began in the later 20th century and has continued in the 21st, when philosophers started to move toward a broader understanding of selves. Some philosophers propose narrative and anthropological views of selves. Communitarian and feminist philosophers argue for relational views that recognise the social embeddedness, relatedness and intersectionality of selves. According to relational views, social relations and identities are fundamental to understanding who persons are.

Social identities are traits of selves in virtue of membership in communities (local, professional, ethnic, religious, political), or in virtue of social categories (such as race, gender, class, political affiliation) or interpersonal relations (such as being a spouse, sibling, parent, friend, neighbour). These views imply that it’s not only embodiment and not only memory or consciousness of social relations but the relations themselves that also matter to who the self is. What philosophers call ‘4E views’ of cognition – for embodied, embedded, enactive and extended cognition – are also a move in the direction of a more relational, less ‘container’, view of the self. Relational views signal a paradigm shift from a reductive approach to one that seeks to recognise the complexity of the self. The network self view further develops this line of thought and says that the self is relational through and through, consisting not only of social but also physical, genetic, psychological, emotional and biological relations that together form a network self. The self also changes over time, acquiring and losing traits in virtue of new social locations and relations, even as it continues as that one self.

How do you self-identify? You probably have many aspects to yourself and would resist being reduced to or stereotyped as any one of them. But you might still identify yourself in terms of your heritage, ethnicity, race, religion: identities that are often prominent in identity politics. You might identify yourself in terms of other social and personal relationships and characteristics – ‘I’m Mary’s sister.’ ‘I’m a music-lover.’ ‘I’m Emily’s thesis advisor.’ ‘I’m a Chicagoan.’ Or you might identify personality characteristics: ‘I’m an extrovert’; or commitments: ‘I care about the environment.’ ‘I’m honest.’ You might identify yourself comparatively: ‘I’m the tallest person in my family’; or in terms of your political beliefs or affiliations: ‘I’m an independent’; or temporally: ‘I’m the person who lived down the hall from you in college.’ Some of these are more important than others, some are fleeting. The point is that who you are is more complex than any one of your identities. Thinking of the self as a network is a way to conceptualise this complexity and fluidity.

Let’s take a concrete example. Consider Lindsey: she is spouse, mother, novelist, English speaker, Irish Catholic, feminist, professor of philosophy, automobile driver, psychobiological organism, introverted, fearful of heights, left-handed, carrier of Huntington’s disease (HD), resident of New York City. This is not an exhaustive set, just a selection of traits or identities. Traits are related to one another to form a network of traits. Lindsey is an inclusive network, a plurality of traits related to one another. The overall character – the integrity – of a self is constituted by the unique interrelatedness of its particular relational traits, psychobiological, social, political, cultural, linguistic and physical.

Figure 1 below is based on an approach to modelling ecological networks; the nodes represent traits, and the lines are relations between traits (without specifying the kind of relation). . .

Continue reading. There’s much more.

I’ll point out that many of these relationships are culturally shaped and determined — one’s teachers, for example, or one’s doctor, dentist, team members, and so on. And that a network can also achieve an identity as a memeplex, a cluster of mutually supportive memes, and act in some ways like a living organism — things such as corporations or military units or sports teams or tribal communities or political parties or churches or . . .
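Readers who like to tinker might find it helpful to see the trait network rendered as a tiny data structure: traits as nodes, unlabelled relations as edges, roughly as the article describes Figure 1. The following is a minimal Python sketch of that idea only; the particular traits and links are my own illustrative assumptions, not something taken from the article or the book.

```python
# A minimal, illustrative sketch of the "network self" idea: traits are nodes,
# and (unlabelled) relations between traits are edges, as in the article's
# description of Figure 1. The trait names and links below are made up for
# illustration, not taken from the article or the book.

class NetworkSelf:
    def __init__(self, name):
        self.name = name
        self.traits = set()      # nodes
        self.relations = set()   # edges: frozensets of two related traits

    def relate(self, trait_a, trait_b):
        """Record an (unspecified) relation between two traits."""
        self.traits.update({trait_a, trait_b})
        self.relations.add(frozenset({trait_a, trait_b}))

    def related_to(self, trait):
        """Return all traits directly related to the given trait."""
        return {other
                for pair in self.relations if trait in pair
                for other in pair if other != trait}

# A few of Lindsey's traits from the article, with made-up relations.
lindsey = NetworkSelf("Lindsey")
for a, b in [("mother", "spouse"),
             ("novelist", "English speaker"),
             ("professor of philosophy", "novelist"),
             ("Irish Catholic", "resident of New York City")]:
    lindsey.relate(a, b)

print(sorted(lindsey.related_to("novelist")))
# -> ['English speaker', 'professor of philosophy']
```

A toy graph like this obviously leaves out what the view cares about most: that traits are gained and lost over time, and that the relations themselves, not just the list of nodes, are what constitute the self.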

The whole article is worth reading, and based on it, the book also seems to be of interest.

Written by Leisureguy

18 May 2021 at 12:05 pm
