Later On

A blog written for those whose interests more or less match mine.

Archive for the ‘History’ Category

Example of systemic racism


This is the sort of thing that Republicans are fighting to keep people from learning. From a post in Facebook:

Here is the truth behind systemic racism

In 1866, one year after the 13th Amendment was ratified (the amendment that ended slavery), Alabama, Texas, Louisiana, Arkansas, Georgia, Mississippi, Florida, Tennessee, and South Carolina began to lease out convicts for labor (peonage). This made the business of arresting Blacks very lucrative, which is why hundreds of White men were hired by these states as police officers. Their primary responsibility was to search out and arrest Blacks who were in violation of Black Codes. Once arrested, these men, women, and children would be leased to plantations, where they would harvest cotton, tobacco, and sugar cane, or to coal mines and railroad companies. The owners of these businesses would pay the state for every prisoner who worked for them: prison labor.

It is believed that after the passage of the 13th Amendment, more than 800,000 Blacks were part of the system of peonage, or re-enslavement through the prison system. Peonage didn’t end until around 1940, after World War II had begun.

This is how it happened.

The 13th Amendment declared that “Neither slavery nor involuntary servitude, except as a punishment for crime whereof the party shall have been duly convicted, shall exist within the United States, or any place subject to their jurisdiction.” (Ratified in 1865)
Did you catch that? It says, “neither slavery nor involuntary servitude could occur except as a punishment for a crime”. Lawmakers used this phrase to make petty offenses crimes. When Blacks were found guilty of committing these crimes, they were imprisoned and then leased out to the same businesses that lost slaves after the passing of the 13th Amendment. This system of convict labor is called peonage.

The majority of White Southern farmers and business owners hated the 13th Amendment because it took away slave labor. As a way to appease them, the federal government turned a blind eye when southern states used this clause in the 13th Amendment to establish laws called Black Codes. Here are some examples of Black Codes:

In Louisiana, it was illegal for a Black man to preach to Black congregations without special permission in writing from the president of the police. If caught, he could be arrested and fined. If he could not pay the fines, which were unbelievably high, he would be forced to work for an individual, or go to jail or prison where he would work until his debt was paid off.

If a Black person did not have a job, he or she could be arrested and imprisoned on the charge of vagrancy or loitering.

This next Black Code will make you cringe. In South Carolina, if the parent of a Black child was considered vagrant, the judicial system allowed the police and/or other government agencies to “apprentice” the child to an “employer”. Males could be held until the age of 21, and females could be held until they were 18. Their owner had the legal right to inflict punishment on the child for disobedience, and to recapture them if they ran away.

This (peonage) is an example of systemic racism – racism established and perpetuated by government systems. Slavery was made legal by the U.S. government. Segregation, Black Codes, Jim Crow, and peonage were all made legal by the government and upheld by the judicial system. These acts of racism were built into the system, from which the term “systemic racism” derives.

This is the part of “Black History” that most of us were never told about.

Written by Leisureguy

1 August 2021 at 12:48 pm

Reading John Gray in war


Andy Owen, author of All Soldiers Run Away: Alano’s War: The Story of a British Deserter (2017) and a former soldier who writes on the ethics and philosophy of war, has an interesting essay in Aeon:

‘All of humanity’s problems stem from man’s inability to sit quietly in a room alone.’
Blaise Pascal (1623-62)

I first read the English philosopher John Gray while sitting in the silence of the still, mid-afternoon heat of Helmand Province in Afghanistan. In Black Mass: Apocalyptic Religion and the Death of Utopia (2007), Gray showed how the United States’ president George W Bush and the United Kingdom’s prime minister Tony Blair framed the ‘war on terror’ (which I was part of) as an apocalyptic struggle that would forge the new American century of liberal democracy, where personal freedom and free markets were the end goals of human progress. Speaking at the Sydney Writers’ Festival in 2008, Gray highlighted an important caveat to the phrase ‘You can’t have an omelette without breaking eggs,’ which is sometimes used, callously, to justify extreme means to high-value ends. Gray’s caveat was: ‘You can break millions of eggs and still not have a single omelette.’ In my two previous tours of Iraq, I had seen first-hand – as sectarian hatred, insurgency, war fighting, targeted killings and the euphemistically named collateral damage tore apart buildings, bodies, communities and the shallow fabric of the state – just how many eggs had been broken and yet still how far away from the omelette we were.

There was no doubt that Iraq’s underexploited oil reserves were part of the US strategic decision-making, and that the initial mission in Afghanistan was in response to the terrorist attacks of 11 September 2001 on the US, but both invasions had ideological motivations too. I had started the process to join the British military before 9/11. The military I thought I was joining was the one that had successfully completed humanitarian interventions in the Balkans and Sierra Leone. I believed we could use force for good, and indeed had a duty to do so. After the failure to prevent genocides in Rwanda and Srebrenica, the concept of the ‘responsibility to protect’ was developing, which included the idea that when a state was ‘unable or unwilling’ to protect its people, responsibility shifted to the international community and, as a last resort, military intervention would be permissible. It would be endorsed by all member states of the United Nations (UN) in 2005 but, under the framework, the authority to employ the last resort rested with the UN Security Council, which hadn’t endorsed the invasion of Iraq.

Despite the lack of a UN resolution, many of us who deployed to Iraq naively thought we were doing the right thing. When Lieutenant Colonel Tim Collins delivered his eve-of-battle speech to the Royal Irish Battle Group in March 2003, he opened by stating: ‘We go to liberate, not to conquer.’ We had convinced ourselves that, as well as making the region safer by seizing the Iraqi president Saddam Hussein’s weapons of mass destruction (WMD), we were there to save the people of Iraq from their own government and replace it with the single best way of organising all societies: liberal democracy. This feeling was so persuasive that it led to many troops feeling that the Iraqis were somehow ungrateful when they started to shoot at us for invading their country.

By my second tour of Iraq in 2005, it was clear that no WMD would be found and the society that was evolving was far from the one envisaged. Morale was at a low ebb as the gap between the mission and what we were achieving widened. We were stuck in a Catch-22. We would hand over to local security forces when the security situation improved enough for us to do so. However, the security situation couldn’t improve while we were still there. It would improve only if we left. The conditions that would allow us to leave were us already having left. Most troops were stuck inside the wire, their only purpose seemingly to be mortared or rocketed for being there. I was asked why we were there, especially when soldiers witnessed their friends being injured or killed, or saw the destruction of the city we’d come to liberate. They needed meaning, it couldn’t all be pointless. Meaning was found in protecting each other. My team of 30 or so men and women found purpose in trying to collect intelligence on those planting deadly improvised explosive devices along the main routes in and out of the city. Members of both the team before and the team after us were blown up trying to do so.

Much of the criticism levelled at the post-invasion failure focused on the mistake of disbanding the Iraqi state, the lack of post-conflict planning and the lack of resources. There was less focus on the utopian aims of the whole project. But it was only through Gray that I saw the similarities between the doctrines of Stalinism, Nazi fascism, Al-Qaeda’s paradoxical medieval, technophile fundamentalism, and Bush’s ‘war on terror’. Gray showed that they are all various forms (however incompatible) of utopian thinking that have at their heart the teleological notion of progress from unenlightened times to a future utopia, and a belief that violence is justified to achieve it (indeed, from the Jacobins onwards, violence has had a pedagogical function in this process). At first, I baulked at the suggested equivalence with the foot soldiers of the other ideologies. There were clearly profound differences! But through Gray’s examples, I went on to reflect on how much violence had been inflicted throughout history by those thinking that they were doing the right thing and doing it for the greater good.

A message repeated throughout Gray’s work is that, despite the irrefutable material gains, this notion is misguided: scientific knowledge and the technologies at our disposal increase over time, but there’s no reason to think that morality or culture will also progress, nor – if it does progress for a period – that this progress is irreversible. To think otherwise is to misunderstand the flawed nature of our equally creative and destructive species and the cyclical nature of history. Those I spoke to in Basra needed no convincing that the advance of rational enlightened thought was reversible, as the Shia militias roamed the streets enforcing their interpretation of medieval law, harassing women, attacking students and assassinating political opponents. By the time bodies of journalists who spoke out against the death squads started turning up at the side of the road, Basra’s secular society was consigned to history. Gray points to the re-introduction of torture by the world’s premier liberal democracy during the war on terror as an example of the reversibility of progress. The irreversibility idea emerged directly from a utopian style of thinking that’s based on the notion that the end justifies the means. Such thinking is often accompanied by one of the defining characteristics of the Iraq and Afghanistan campaigns: hubris.

The myth of progress was a key theme of Gray’s . . .

Continue reading.

Written by Leisureguy

31 July 2021 at 8:46 pm

Entering Steppelandia: pop. 7.7 billion


The Great Steppe (shown in blue): First a barrier, then a highway

In The Horse, the Wheel, and Language: How Bronze-Age Riders from the Eurasian Steppes Shaped the Modern World, a title perhaps familiar from my list of repeatedly recommended books, David Anthony describes how, until the invention of the wheel (and thus of wagons and carts), the Great Steppe that crosses Eurasia (see map above, with Italy visible at the left and Korea at the right) was impassable: trackless grassland, the grass an average of 5 feet tall, with water hard to find, impossible to traverse on foot.

Horses are native to the Great Steppe and adapted to life there — for example, cattle and sheep will die when snow covers the steppe, but horses will dig through the snow to the hay-like grass that lies beneath — and, as Anthony explains, horses were at first kept by steppe-dwellers simply as meat animals. At some point a stallion traded liberty for the luxury of getting laid, and herds of horses could be kept for food. (Almost all domesticated horses are, as we know from their Y-chromosomes, descended from that stallion.) Horses when hobbled do not roam, so no fences were required — and hobbles are easier to make and maintain than fences. Finally some brave souls tried riding the horses, and suddenly people could go faster and farther than when afoot.

But even when people were able to enter the steppe on horseback, the distances and the sparseness of running water kept the steppe impassable. Once the wheel was invented (by the Assyrians), wagons became possible, and horse-drawn wagons could carry a lot of supplies. The flat plains of the steppe were transformed from barrier to highway, allowing the fierce speakers of Proto-Indo-European to prey on villagers across Eurasia, and later serving as a route for trade and commerce.

I highly recommend Anthony’s book. Razib Khan has an interesting article on the Great Steppe today:

Whenever I write deep-dive Substack posts on genetics and human history (India, Italy, even China), I end up cutting reams of in-depth background on the steppe before I hit “publish.” Why? The Eurasian steppe is my compulsive digression. Everything canonical, everything human… makes more sense if I make sure you understand the steppe first. But too many don’t. And I fear they don’t even know what they’re missing. I want to bring my readership along on my steppe obsession, not least so that the rest of my posts will be more meaningful reads.

In that spirit, the following piece kicks off a foray deep into the Eurasian steppe and its centrality to human history, civilization and genetics. This free post is the personal why of the steppe for me. In the subsequent series of long-form, subscriber-only pieces, I’ll be expanding on the what, who and when of 5000 years of the steppe.

Steppe super-fan or steppe-skeptic, I hope you’ll consider subscribing to Razib Khan’s Unsupervised Learning for more in this vein.

On a lighter note, try your hand at my two-minute “Your Steppe IQ” quiz. Legit bragging rights if you earn Khan status or make it to Steppelandia. And my best steppe reading recommendations for all who finish!

I am haunted by the steppe. Yes, my surname comes from a Turkic language of the eastern steppe, in modern-day Mongolia. The only language I read has its ultimate origins on the steppe, among the kurgan burial-mound builders who flourished east of the Dnieper five thousand years ago. And sure, over four thousand years ago, my direct paternal ancestors were steppe pastoralists occupying lands west of the Volga. But the motives for my obsession aren’t that self-involved.

I probably don’t need to explain this to anyone who’s read me for long, but I churn through exhaustive obsessions in my readings. For example, in 1986 I read the last word I craved (or could find) on climatology, in 1987 dinosaurs, in 1988 military history, robotics and board games, in 1990 physical geography and overpopulation models, in 1993 cosmology and physics, in 1994 the Welsh, in 1995 Thomas Sowell and the history of science fiction, in 1996 Naomi Wolf, in 1998 Luigi Luca Cavalli-Sforza and the Jewish people, in 1999 the Stoics, South Africa and Intelligent Design, goldfish in pre-9/11 2001, Salafists in post-9/11 2001, in 2003 David Hume, in 2004 Wittgenstein, in 2005 Catholicism, in 2006 R.A. Fisher, the Mormons and cognitive science of religion, in 2007 the Abbasids, in 2015 the Russians and in 2018 Critical Theory. To be sure, new contributions to a field draw me back into past passions on the regular. And certain domains I closed the book on tend to age better than others; my children regularly dismiss me among themselves: “Daddy only knows the old dinosaurs.”

But there are a few through-lines I’m never done with. Even 30 years into reading, I always feel I’m barely past the preface. In the broadest strokes, “peoples” have obsessed me since my earliest childhood. I clearly remember peppering my parents’ graduate-school acquaintances with “Which humans have the best vision?” and “Which humans are strongest?”-type questions before I could really read. And I couldn’t be born to a luckier age, because this consuming passion with human population history can now be yoked to the powerful engine of historical population genomics. I expect the riches of this field to remain inexhaustible generations after I am but dust.

Populations, population genomics and the histories of ancient nations we can infer from them, are what I live for. Which populations? No surprise that I’m never done with China or ancient Rome. But also the people of the steppe. Always the steppe. What even is the steppe? A void so under-examined, its illustrious peoples don’t even merit a single umbrella term. An expanse so vast it spans eight time zones. A word I’m disappointed to find few know and a world fewer still explore. Does any region whose influence touched empires and cultures across Europe, the Middle East, and China languish so unexamined?

The culture and genes of people all across the world today come from the steppe. The ancient Romans, Chinese and Arabs all have their advocates and chroniclers. They tell their story in their own voice. The Mongols may cut an impressive swath through history, but too often they see print only for the horrific deeds chronicled by their enemies. What if what we knew of the Romans only came via the Gauls after Caesar’s genocide against them? What if all that remained of Seinfeld were Wikipedia plot summaries by his vengeful antagonist Newman? What if Trump were our only observer of Obama?

Whether we are astute enough to recognize it or not, the shape of the modern world has been molded by conflict between the nomads of the Eurasian steppe and the loose arc of civilized societies that happened to lay curled around their domains. The politics, history, and geography of the steppe are critical lacunae in most grand historical narratives. The fall of the first Han Dynasty, the fall of the Roman Empire and the conquest of India by Muslims all owe to a sequence of events unleashed by Eurasian steppe nomads.

Grass from sea to shining sea

Beginning with the Great Hungarian Plain in the west, a broad ribbon of rich grassland stretches nearly unbroken across all of Eurasia to the Pacific, unfurling in infinite sameness between boreal forests to the north and arid deserts to the south. For an American kid weaned on 1980s nature specials, my idea of vast open lands was the idyllic American prairie or the African savanna teeming with wildebeest. But the Eurasian steppe dwarfs both. The largest uninterrupted grassland ecosystem in the world, it spans 10,000 kilometers. It is more oceanic than continental in size. Even the scale of smaller subsections of this ecosystem is hard to fathom. The Pontic steppe north of the Black Sea begins on the edge of Romania and runs all the way east to the Volga river, where Europe gives way to Asia. A tarp cut from just the far eastern Mongolian reach of the steppe could casually smother all of Germany and France.

Thinking back, to arrive at my full appreciation of the importance of the steppe, I had to first get over a matter of elementary-school pedantry. The extensive color-coded maps in my beloved “biomes of the world” reference books had driven home Eurasia’s vastness. Our planet’s mega continent, Eurasia of course has the biggest biomes, chief among them the taiga, “forest” in Siberian Turkic languages. An uninterrupted expanse extending from Scandinavia to the Pacific Ocean, the immense taiga is unmistakably more extensive than the steppe it parallels. Which is all very well if you are a wolf, moose, or bear and this is your prime habitat. Less so for a human.

For our Ice-Age ancestors, the open steppe may not have been much more appealing than the semi-arctic forests, but . . .

Continue reading.

Written by Leisureguy

30 July 2021 at 1:35 pm

Posted in Books, Daily life, History

Some members of Congress are going to pay the piper — or at least their lawyers


Heather Cox Richardson:

The ripples of the explosive testimony of the four police officers Tuesday before the House Select Committee to Investigate the January 6th Attack on the United States Capitol continue to spread. Committee members are meeting this week to decide how they will proceed. Congress goes on recess during August, but committee chair Bennie Thompson (D-MS) suggested the committee would, in fact, continue to meet during that break.

Committee members are considering subpoenas to compel the testimony of certain lawmakers, especially since the Department of Justice on Tuesday announced that it would not assert executive privilege to stop members of the Trump administration from testifying to Congress about Trump’s role in the January 6 insurrection. This is a change from the Trump years, when the Department of Justice refused to acknowledge Congress’s authority to investigate the executive branch. This new directive reasserts the traditional boundaries between the two branches, saying that Congress can require testimony and administration officials can give it.

Further, the Department of Justice yesterday rejected the idea that it should defend Congress members involved in the January 6 insurrection. Representative Eric Swalwell (D-CA) sued Alabama Representative Mo Brooks, as well as former president Trump and Trump’s lawyer Rudy Giuliani, for lying about the election, inciting a mob, and inflicting pain and distress.

Famously, Brooks participated in the rally before the insurrection, telling the audience: “[W]e are not going to let the Socialists rip the heart out of our country. We are not going to let them continue to corrupt our elections, and steal from us our God-given right to control our nation’s destiny.” “Today,” he said, “Republican Senators and Congressmen will either vote to turn America into a godless, amoral, dictatorial, oppressed, and socialist nation on the decline or they will join us and they will fight and vote against voter fraud and election theft, and vote for keeping America great.”

“[T]oday is the day American patriots start taking down names and kicking ass!” he said. He asked them if they were willing to give their lives to preserve “an America that is the greatest nation in world history.” “Will you fight for America?” he asked.

To evade the lawsuit, Brooks gave an affidavit in which he and his lawyers insisted that this language was solely a campaign speech, urging voters to support Republican lawmakers in 2022 and 2024. But he also argued that the Department of Justice had to represent him in the lawsuit because he was acting in his role as a congress member that day, representing his constituents.

Yesterday, the Department of Justice declined to take over the case, pointing out that campaign and electioneering activities fall outside the scope of official employment. It goes on to undercut the idea of protecting any lawmaker who participated in the insurrection, saying that “alleged action to attack Congress and disrupt its official functions is not conduct a Member of Congress is employed to perform.” This means Brooks is on his own to defend himself from the Swalwell lawsuit. It also means that lawmakers intending to fight subpoenas are going to be paying for their own legal representation.

If the committee does, in fact, start demanding that lawmakers talk, Brooks is likely on the list of those from whom they will want to hear. Trying to bolster the new Republican talking point that House Speaker Nancy Pelosi (D-CA) should have been better prepared for the insurrection (this is a diversion: she has no say over the Capitol Police, and she did, in fact, call for law enforcement on January 6), Brooks told Slate political reporter Jim Newell that he, Brooks, knew something was up. He had been warned “on Monday that there might be risks associated with the next few days,” he said. “And as a consequence of those warnings, I did not go to my condo. Instead, I slept on the floor of my office. And when I gave my speech at the Ellipse, I was wearing body armor.” “That’s why I was wearing that nice little windbreaker,” he told Newell. “To cover up the body armor.”

Brooks is not the only one in danger of receiving a subpoena. Representative Jim Jordan (R-OH) admitted on the Fox News Channel that he spoke to the former president on January 6, although he claimed not to remember whether it was before, during, or after the insurrection. He tried to suggest that chatting with Trump on January 6 was no different than chatting with him at any other time, but that is unlikely to fly. Jordan also repeatedly referred to Trump as “the president,” rather than the former president, a dog whistle to those who continue to insist that Trump did not, in fact, lose the 2020 election.

Meanwhile, it looks more and more like Republicans, including House Minority Leader Kevin McCarthy (R-CA), are  . . .

Continue reading. There’s much more.

Written by Leisureguy

29 July 2021 at 9:52 pm

Age of Invention: An Absent Atlantic


Anton Howes has an interesting newsletter, and I found this issue worth reading:

I’ve become engrossed this week by a book written in 1638 by the merchant Lewes Roberts — The Marchant’s Mappe of Commerce. It is, in effect, a guide to how to be a merchant, and an extremely comprehensive one too. For every trading centre he could gather information about, Roberts noted the coins that were current, their exchange rates, and the precise weights and measures in use. He set down the various customs duties, down even to the precise bribes you’d be expected to pay to various officials. In Smyrna, for example, Roberts recommended you offer the local qadi some cloth and coney-skins for a vest, the qadi’s servant some English-made cloth, and their janissary guard a few gold coins.

Unlike so many books of the period, Roberts was also careful to be accurate. He often noted whether his information came from personal experience, giving the dates of his time in a place, or whether it came second-hand. When he was unsure of details, he recommended consulting with better experts. And myths — like the rumour he heard that the Prophet Muhammad’s remains at Mecca were in an iron casket suspended from the ceiling by a gigantic diamond-like magnet called an adamant — were thoroughly busted. Given his accuracy and care, it’s no wonder that the book, in various revised editions, was in print for almost sixty years after his death. (He died just three years after publication.)

What’s most interesting about it to me, however, is Roberts’s single-minded view of English commerce. The entire world is viewed through the lens of opportunities for trade, taking note of the commodities and manufactures of every region, as well as their principal ports and emporia. A place’s antiquarian or religious tourist sites, which generally make up the bulk of so many other geographical works, are given (mercifully) short shrift. Indeed, because the book was not written with an international audience in mind, it also passes over many trades with which the English were not involved, or from which they were even excluded. It thus provides a remarkably detailed snapshot of what exactly English merchants were interested in and up to on the eve of civil war; and right at the tail end of a century of unprecedented growth in London’s population, itself seemingly led by its expansion of English commerce.

So, what did English merchants consider important? It’s especially illuminating about England’s trade in the Atlantic — or rather, the lack thereof.

Roberts spends remarkably little time on the Americas, which he refers to as the continents of Mexicana (North America) and Peruana (South America). Most of his mentions of English involvement are about which privateers had once raided which Spanish-owned colonies, and he gives especial attention to the seasonal fishing for cod off the coast of Newfoundland — a major export trade to the Mediterranean, and a source of employment to many English West Country farmers, whom he refers to as being like otters for spending half their lives on land and the other half on sea.

But as for the recently-established English colonies on the mainland, which Roberts refers to collectively as Virginia, he writes barely a few sentences. Although he reproduces some of the propaganda about what is to be found there — no mention yet of tobacco by the way, with the list consisting largely of foodstuffs, forest products, tar, pitch, and a few ores — the entirety of New England is summarised only as a place “said to be” resorted to by religious dissenters. The island colonies on Barbados and Bermuda were also either too small or too recently established to merit much attention. To the worldly London merchant then, the New World was still peripheral — barely an afterthought, with the two continents meriting a mere 11 pages, versus Africa’s 45, Asia’s 108, and Europe’s 262.

The reason for this was that the English were excluded from trading directly with the New World by the Spanish. It was, as Roberts jealously put it, “shut up from the eyes of all strangers”. The Spanish were not only profiting from the continent’s mines of gold and silver, but he also complained of their monopoly over the export of European manufactures to its colonies there. It’s a striking foreshadowing of what was, in the eighteenth century, to become one of the most important features of the Atlantic economy — the market that the growing colonies would one day provide for British goods. Indeed, Roberts’s most common condemnation of the Spanish was for having killed so many natives, thereby extinguishing the major market that had already been there: “had not the sword of these bloodsuckers ended so many millions of lives in so short a time, trade might have seen a larger harvest”. The genocide had, in Roberts’s view, not only been horrific, but impoverished Europe too (he was similarly upset that the Spanish had slaughtered so many of the natives of the Bahamas, known for the “matchless beauty of their women”).

Moving to the other side of the Atlantic, to the western coast of Africa, it’s clear from Roberts’s descriptions that English trade with Morocco was not what it used to be. As I’ve written before, the Saadi empire based at Fez had once had a sort of mutually reinforcing, symbiotic relationship with England, both having had a common enemy in Spain. The English had in the late sixteenth century secretly sent the Saadis weapons, buying from them sugar, copper, and saltpetre — essential for gunpowder. The sultan had once even suggested to Elizabeth I that they invade Spain’s colonies in the New World together. But by Roberts’s time the region’s commerce had been wrecked by decades of civil war. One Moroccan coastal city, Salé, had even become a semi-independent pirate republic. English merchants, having once been a major presence in Fez, now avoided storing any goods or residing there, instead making “their ships their shops” and only unloading precisely whatever merchandise was actually sold. “Where peace and unity is wanting,” as Roberts sagely put it, “trade must decay”.

Further south, in the Gulf of Guinea (then called the “Genin and Benin” or “Ginney and Binney” coast), Roberts describes how the English trade there — buying gold, and selling cloth, weapons, and especially salt — was limited by competition with other Europeans. The Portuguese had long ago built . . .

Continue reading.

Written by Leisureguy

29 July 2021 at 3:58 pm

Our democracy is under attack. Washington journalists must stop covering it like politics as usual.


Margaret Sullivan, one-time public editor for the NY Times and now a columnist for the Washington Post, has a good piece today:

Back in the dark ages of 2012, two think-tank scholars, Norman Ornstein and Thomas Mann, wrote a book titled “It’s Even Worse Than It Looks” about the rise of Republican Party extremism and its dire effect on American democracy.

In a related op-ed piece, these writers made a damning statement about Washington press coverage, which treats the two parties as roughly equal and everything they do as deserving of similar coverage.

Ornstein and Mann didn’t use the now-in-vogue terms “both-sidesism” or “false equivalence,” but they laid out the problem with devastating clarity (the italics are mine):

“We understand the values of mainstream journalists, including the effort to report both sides of a story. But a balanced treatment of an unbalanced phenomenon distorts reality. If the political dynamics of Washington are unlikely to change any time soon, at least we should change the way that reality is portrayed to the public.”

Positive proof was in the recent coverage of congressional efforts to investigate the Jan. 6 insurrection at the Capitol.

The Democratic leadership has been trying to assemble a bipartisan panel that would study that mob attack on our democracy and make sure it is never repeated. Republican leaders, meanwhile, have been trying to undermine the investigation, cynically requesting that two congressmen who backed efforts to invalidate the election be allowed to join the commission, then boycotting it entirely. And the media has played straight into Republicans’ hands, seemingly incapable of framing this as anything but base political drama.

“ ‘What You’re Doing Is Unprecedented’: McCarthy-Pelosi Feud Boils Over,” read a CNN headline this week. “After a whiplash week of power plays . . . tensions are at an all-time high.”

Is it really a “feud” when Republican Minority Leader Kevin McCarthy performatively blames Democratic House Speaker Nancy Pelosi for refusing to seat Republicans Jim Jordan and Jim Banks — two sycophantic allies of Trump, who called the Jan. 6 mob to gather?

One writer at Politico called Pelosi’s decision a “gift to McCarthy.” And its Playbook tut-tutted the decision as handing Republicans “a legitimate grievance,” thus dooming the holy notion of bipartisanship.

“Both parties have attacked the other as insincere and uninterested in conducting a fair-minded examination,” a Washington Post news story observed. (“Can it really be lost on the Post that the Republican party has acted in bad faith at every turn to undermine every attempt to investigate the events of Jan. 6?” a reader complained to me.)

The bankruptcy of this sort of coverage was exposed on Tuesday morning, when the Jan. 6 commission kicked off with somber, powerful, pointedly nonpolitical testimony from four police officers who were attacked during the insurrection. Two Republicans, Liz Cheney and Adam Kinzinger, even defied McCarthy’s boycott to ensure their party would be sanely represented.

This strain of news coverage, observed Jon Allsop in Columbia Journalism Review, centers on twinned, dubious implications: “That bipartisanship is desirable and that Democrats bear responsibility for upholding it — even in the face of explicit Republican obstructionism.”

This stance comes across as both cynical (“politics was ever thus”) and unsophisticated (“we’re just doing our job of reporting what was said”). Quite a feat.

Mainstream journalists want their work to be perceived as fair-minded and nonpartisan. They want to defend themselves against charges of bias. So they equalize the unequal. This practice seems so ingrained as to be unresolvable.

There is a way out. But it requires the leadership of news organizations to radically reframe the mission of their Washington coverage. As a possible starting point, I’ll offer these recommendations:

  • Toss out the insidious “inside-politics” frame and replace it with a “pro-democracy” frame.
  • Stop calling the reporters who cover this stuff “political reporters.” Start calling them “government reporters.”
  • Stop asking who the winners and losers were in the latest skirmish. Start asking who is serving the democracy and who is undermining it.
  • Stop being “savvy” and start being patriotic.

In a year-end piece for Nieman Lab, Andrew Donohue, managing editor of the Center for Investigative Reporting’s Reveal, called for news organizations to put reporters on a new-style “democracy beat” to focus on voting suppression and redistricting. “These reporters won’t see their work in terms of politics or parties, but instead through the lens of honesty, fairness, and transparency,” he wrote.

I’d make it more sweeping. The democracy beat shouldn’t be some kind of specialized innovation, but a widespread rethinking across the mainstream media.

Making this happen will call for something that Big Journalism is notoriously bad at: An open-minded, nondefensive recognition of what’s gone wrong.

Top editors, Sunday talk-show moderators and other news executives should pull together their brain trusts to grapple with this. And they should be transparent with the public about what they’re doing and why.

As a model, they might have to swallow their big-media pride and look to places like Harrisburg, Pa., public radio station WITF, which has admirably explained to its audience why it continually offers reminders about the actions of those public officials who tried to overturn the 2020 election results. Or to Cleveland Plain Dealer editor Chris Quinn’s letter to readers about how the paper and its website refuse to cover every reckless, attention-getting lie of Republican Josh Mandel as he runs for the U.S. Senate next year. . .

Continue reading. There’s more.

Written by Leisureguy

28 July 2021 at 1:34 pm

The Real Source of America’s Rising Rage

leave a comment »

Kevin Drum has a good article in Mother Jones that begins:

Americans sure are angry these days. Everyone says so, so it must be true.

But who or what are we angry at? Pandemic stresses aside, I’d bet you’re not especially angry at your family. Or your friends. Or your priest or your plumber or your postal carrier. Or even your boss.

Unless, of course, the conversation turns to politics. That’s when we start shouting at each other. We are way, way angrier about politics than we used to be, something confirmed by both common experience and formal research.

When did this all start? Here are a few data points to consider. From 1994 to 2000, according to the Pew Research Center, only 16 percent of Democrats held a “very unfavorable” view of Republicans, but then these feelings started to climb. Between 2000 and 2014 it rose to 38 percent and by 2021 it was about 52 percent. And the same is true in reverse for Republicans: The share who intensely dislike Democrats went from 17 percent to 43 percent to about 52 percent.

Likewise, in 1958 Gallup asked people if they’d prefer their daughter marry a Democrat or a Republican. Only 28 percent cared one way or the other. But when Lynn Vavreck, a political science professor at UCLA, asked a similar question a few years ago, 55 percent were opposed to the idea of their children marrying outside their party.

Or consider the right track/wrong track poll, every pundit’s favorite. Normally this hovers around 40–50 percent of the country who think we’re on the right track, with variations depending on how the economy is doing. But shortly after recovering from the 2000 recession, this changed, plunging to 20–30 percent over the next decade and then staying there.

Finally, academic research confirms what these polls tell us. Last year a team of researchers published an international study that estimated what’s called “affective polarization,” or the way we feel about the opposite political party. In 1978, we rated people who belonged to our party 27 points higher than people who belonged to the other party. That stayed roughly the same for the next two decades, but then began to spike in the year 2000. By 2016 it had gone up to 46 points—by far the highest of any of the countries surveyed—and that’s before everything that has enraged us for the last four years.

What’s the reason for this? There’s no shortage of speculation. Political scientists talk about the fragility of presidential systems. Sociologists explicate the culture wars. Historians note the widening divide between the parties after white Southerners abandoned the Democratic Party following the civil rights era. Reporters will regale you with stories about the impact of Rush Limbaugh and Newt Gingrich.

There’s truth in all of these, but even taken together they are unlikely to explain the underlying problem. Some aren’t new (presidential systems, culture wars) while others are symptoms more than causes (the Southern Strategy).

I’ve been spending considerable time digging into the source of our collective rage, and the answer to this question is trickier than most people think. For starters, any good answer has to fit the timeline of when our national temper tantrum began—roughly around the year 2000. The answer also has to be true: That is, it needs to be a genuine change from past behavior—maybe an inflection point or a sudden acceleration. Once you put those two things together, the number of candidates plummets.

But I believe there is an answer. I’ll get to that, but first we need to investigate a few of the most popular—but ultimately unsatisfying—theories currently in circulation.

Theory #1: Americans Have Gone Crazy With Conspiracy Theories

It’s probably illegal to talk about the American taste for conspiracy theorizing without quoting from Richard Hofstadter’s famous essay, “The Paranoid Style in American Politics.” It was written in 1964, but this passage (from the book version) about the typical conspiracy monger should ring a bell for the modern reader:

He does not see social conflict as something to be mediated and compromised, in the manner of the working politician. Since what is at stake is always a conflict between absolute good and absolute evil, the quality needed is not a willingness to compromise but the will to fight things out to a finish. Nothing but complete victory will do.

Or how about this passage from Daniel Bell’s “The Dispossessed”? It was written in 1962:

The politics of the radical right is the politics of frustration—the sour impotence of those who find themselves unable to understand, let alone command, the complex mass society that is the polity today…Insofar as there is no real left to counterpoise to the right, the liberal has become the psychological target of that frustration.

In other words, the extreme right lives to own the libs. And it’s no coincidence that both Hofstadter and Bell wrote about this in the early ’60s: That was about the time that the John Birch Society was gaining notoriety and the Republican Party nominated Barry Goldwater for president. But as Hofstadter in particular makes clear, a fondness for conspiracy theories has pervaded American culture from the very beginning. Historian Bernard Bailyn upended revolutionary-era history and won a Pulitzer Prize in 1968 for his argument that belief in a worldwide British conspiracy against liberty “lay at the heart of the Revolutionary movement”—an argument given almost Trumpian form by Sam Adams, who proclaimed that the British empire literally wanted to enslave white Americans. Conspiracy theories that followed targeted the Bavarian Illuminati, the Masons, Catholics, East Coast bankers, a global Jewish cabal, and so on.

But because it helps illuminate what we face now, let’s unpack the very first big conspiracy theory of the modern right, which began within weeks of the end of World War II.

In 1945 FDR met with Joseph Stalin and Winston Churchill at Yalta with the aim of gaining agreement about the formation of the United Nations and free elections in Europe. In this he succeeded: Stalin agreed to everything FDR proposed. When FDR returned home he gave a speech to Congress about the meeting, and it was generally well received. A month later he died.

Needless to say, Stalin failed to observe most of the agreements he had signed. He never had any intention of allowing “free and fair” elections in Eastern Europe, which he wanted as a buffer zone against any future military incursion from Western Europe. The United States did nothing about this, to the disgust of many conservatives. However, this was not due to any special gutlessness on the part of Harry Truman or anyone in the Army. It was because the Soviet army occupied Eastern Europe when hostilities ended and there was no way to dislodge it short of total war, something the American public had no appetite for.

And there things might have stood. Scholars could have argued for years about whether FDR was naive about Stalin, or whether there was more the US and its allies could have done to push Soviet troops out of Europe. Books would have been written and dissertations defended, but not much more. So far we have no conspiracy theory, just some normal partisan disagreement.

But then came 1948. Thomas Dewey lost the presidency to Harry Truman and Republicans lost control of the House. Soon thereafter the Soviet Union demonstrated an atomic bomb and communists overran China. It was at this point that a normal disagreement turned into a conspiracy theory. The extreme right began suggesting that FDR had deliberately turned over Eastern Europe to Stalin and that the US delegation at Yalta had been rife with Soviet spies. Almost immediately Joe McCarthy was warning that the entire US government was infiltrated by communists at the highest levels. J. Robert Oppenheimer, the architect of the Manhattan Project, was surely a communist. George Marshall, the hero of World War II, was part of “a conspiracy on a scale so immense as to dwarf any previous such venture in the history of man.”

Like most good conspiracy theories, there was a kernel of truth here. Stalin really did take over Eastern Europe. Alger Hiss, part of the Yalta delegation, really did turn out to be a Soviet mole. Klaus Fuchs and others really did pass along atomic secrets to the Soviets. Never mind that Stalin couldn’t have been stopped; never mind that Hiss was a junior diplomat who played no role in the Yalta agreements; never mind that Fuchs may have passed along secrets the Soviets already knew. It was enough to power a widespread belief in McCarthy’s claim of the biggest conspiracy in all of human history.

There’s no polling data from back then, but belief in this conspiracy became a right-wing mainstay for years—arguably the wellspring of conservative conspiracy theories for decades. Notably, it caught on during a time of conservative loss and liberal ascendancy. This is a pattern we’ve seen over and over since World War II. The John Birch Society and the JFK assassination conspiracies gained ground after enormous Democratic congressional victories in 1958 and again in 1964. The full panoply of Clinton conspiracies blossomed after Democrats won united control of government in the 1992 election. Benghazi was a reaction to Barack Obama—not just a Democratic win, but the first Black man to be elected president. And today’s conspiracy theories about stealing the presidential election are a response to Joe Biden’s victory in 2020.

How widespread are these kinds of beliefs? And has their popularity changed over time? The evidence is sketchy but there’s polling data that provides clues. McCarthy’s conspiracy theories were practically a pandemic, consuming American attention for an entire decade. Belief in a cover-up of the JFK assassination has always hovered around 50 percent or higher. In the mid-aughts, a third of poll respondents strongly or somewhat believed that 9/11 was an inside job, very similar to the one-third of Americans who believe today that there was significant fraud in the 2020 election even though there’s no evidence to support this. And that famous one-third of Americans who are skeptical of the COVID-19 vaccine? In 1954 an identical third of Americans were skeptical of the polio vaccine that had just become available.

So how does QAnon, the great liberal hobgoblin of the past year, measure up? It may seem historically widespread for such an unhinged conspiracy theory, but it’s not: Polls suggest that actual QAnon followers are rare and that belief in QAnon hovers at less than 10 percent of the American public. It’s no more popular than other fringe fever swamp theories of the past.

It’s natural to believe that things happening today—to you—are worse than similar things lost in the haze of history, especially when social media keeps modern outrages so relentlessly in our faces. But often it just isn’t true. A mountain of evidence suggests that the American predilection for conspiracy theories is neither new nor growing. Joseph Uscinski and Joseph Parent, preeminent scholars of conspiracy theories, confirmed this with some original research based on letters to the editors of the New York Times and the Chicago Tribune between 1890 and 2010. Their conclusion: Belief in conspiracy theories has been stable since about 1960. Along with more recent polling, this suggests that the aggregate belief in conspiracy theories hasn’t changed a lot and therefore isn’t likely to provide us with much insight into why American political culture has corroded so badly during the 21st century.

Theory #2: It’s All About Social Media

How about social media? Has it had an effect? Of . . .

Continue reading. There’s much more — along with what he views as the main cause.

And note these:


Today It’s Critical Race Theory. 200 Years Ago It Was Abolitionist Literature.

The Moral Panic Over Critical Race Theory Is Coming for a North Carolina Teacher of the Year

Post-Trump, the GOP Continues to Be the Party of (White) Grievance

Written by Leisureguy

28 July 2021 at 12:00 pm

Re-counting the Cognitive History of Numerals

leave a comment »

In The MIT Press Reader, Philip Laughlin, who acquires books for the MIT Press in the fields of Cognitive Science, Philosophy, Linguistics, and Bioethics, interviews Stephen Chrisomalis, Professor of Anthropology at Wayne State University and author of, among other books, “Reckonings: Numerals, Cognition, and History.”

Those of us who learned arithmetic using pen and paper, working with the ten digits 0–9 and place value, may take for granted that this is the way it’s always been done, or at least the way it ought to be done. But if you think of the amount of time and energy spent in the early school years just to teach place value, you’ll realize that this sort of numeracy is not preordained.

Over the past 5,500 years, more than 100 distinct ways of writing numbers have been developed and used by numerate societies, linguistic anthropologist Stephen Chrisomalis has found. Thousands more ways of speaking numbers, manipulating physical objects, and using human bodies to enumerate are known to exist, or to have existed, he writes in his new book “Reckonings: Numerals, Cognition, and History.” Remarkably, each of the basic structures was invented multiple times independently of one another. In “Reckonings,” Chrisomalis considers how humans past and present have used numerals, reinterpreting historical and archaeological representations of numerical notation and exploring the implications of why we write numbers with figures rather than words. Drawing on, and expanding upon, the enormous cross-cultural and comparative literatures in linguistics, cognitive anthropology, and the history of science that bear on questions of numeracy, he shows that numeracy is a social practice.

Chrisomalis took time out from a busy end to the spring semester to field a few questions about his new book, his spirited defense of Roman numerals, his complicated relationships with mathematicians, and his thoughts on the validity of the Sapir-Whorf Hypothesis.

Philip Laughlin: We’ve worked with a number of linguists and anthropologists over the years, but you are our first author to specialize in written numerical systems. What sparked your interest in this topic? Why are numerals an important area of research?

Stephen Chrisomalis: I first became interested in numerals when I wrote a paper in an undergraduate cognitive anthropology course in the mid-1990s. After moving away from the subject for a couple years, I came back to it when I was looking for a PhD topic along with my advisor, the late Bruce Trigger at McGill. This resulted in my dissertation, which later became my first book, “Numerical Notation: A Comparative History” (Cambridge, 2010). It was an unorthodox project for an anthropology department — neither strictly archaeological nor ethnohistorical nor ethnographic. But that was exactly the sort of creative project that it was possible to do at McGill at that time, and that sadly, given the exigencies of the modern job market, is almost impossible to imagine doing today.

What brought me to numerical notation as a dissertation subject is much of what still appeals to me about it now. We have evidence from over 100 different systems used across every inhabited continent over 5,000 years, including all the world’s literate traditions. Numbers are a ubiquitous domain of human existence, and written numerals are virtually everywhere that there is writing. While, of course, the historical and archaeological records are partial (which is in turn both exciting and frustrating), understanding their history and cross-cultural transmission is a tractable problem. We can tell, roughly, when and where they originate and how they relate to one another.

Also, every user of a numerical notation system is also a speaker of one or more languages, which lets us ask great questions comparing number words to numerical notation and to show how they interact. These questions can be as simple as “Do people say ‘two thousand twenty one’ or ‘twenty twenty one’?” and as big as “Were numbers first visual marks or spoken words?” As a linguist and an anthropologist, that’s very attractive. Because there is a significant and large literature on numerical cognition, the comparative, historical data I bring to the table is useful for testing and expanding on our knowledge in that interdisciplinary area.

PL: You had the cover image and title for this book in your head for years. Can you explain the significance of the watch and why you chose the title “Reckonings” in the first place? What were you trying to get across to potential readers with that evocative word?

SC: The title ‘Reckonings’ invokes the triple meaning of the word ‘reckon’ — to calculate, to think, and to judge — which parallels the three parts of the subtitle: “Numerals, Cognition, and History.” Reckoning is not mathematics, in its technical, disciplinary sense, but it reflects the everyday practices of working with and manipulating numbers. Then, in English and in other languages, we extend the verb for calculation to thinking in general — to reckon thus involves the more general cognitive questions I hope I’ve addressed. Finally, we come to reckoning as judgement — every numerical notation faces its own reckoning as users decide whether to adopt, transmit, and eventually, abandon it. As I spend a lot of time talking about the obsolescence of numeral systems, most notably but not limited to the Roman numerals, I wanted to echo this decision-making process of judgement by which users decide to abandon one notation in favor of another. “Reckonings” signals that the book might be about arithmetic — but it’s about a lot more than that.

The cover image of the book is a watch designed by the French watchmaker Jean-Antoine Lépine in 1788, now held at the British Museum (BM 1958,1201.289). Lépine was one of the first horologists to consistently use Western (commonly called Arabic) numerals instead of Roman numerals for hour markers, but in the 1780s he made a number of watches like this one, where he instead playfully mixed the two systems. The hybridity on this sort of artifact is visually striking and memorable to the viewer, both then and now. But actually, it isn’t as weird as it seems; we combine numerical representations all the time, like when we write something like “1.2 million” instead of “1,200,000.” Unlike the Roman numerals alone, which would be visually ‘unbalanced’ on a watch, this hybrid system expresses every number from 1 through 12 in no more than two digits. To me it embodies the passage of time in material form and the replacement of the Roman numerals. By the 1780s, they had been replaced for most purposes, but watch and clock faces are one of the places where, even today, they’re pretty common. As a sort of metonym for this historical process, the Lépine watch highlights that the decline and fall of the Roman numerals was not a slow, steady, predictable replacement, but one with many disjunctures.

PL: At the book launch, you talked a bit about the future of number systems, but with the caveat that you are not a “Futurologist.” So I’ll ask you to put on a historian’s hat instead: What kind of cultural changes are necessary for a society to switch from one number system to another? It seems to me that significant changes would have to happen at least at the political and economic level for one numerical system to supersede another, right?

SC: One of the key arguments in “Reckonings” is that . . .

Continue reading.

Written by Leisureguy

28 July 2021 at 11:49 am

The testimony from the police who stood against the insurrectionists

leave a comment »

Heather Cox Richardson:

This morning, the House Select Committee to Investigate the January 6th Attack on the United States Capitol began its hearings with testimony from two Capitol Police officers and two Metropolitan Police officers.

After Representatives Bennie Thompson (D-MS) and Liz Cheney (R-WY) opened the hearing, Sergeant Aquilino Gonell and Officer Harry Dunn of the Capitol Police, and Officer Michael Fanone and Officer Daniel Hodges of the Metropolitan Police, recounted hand-to-hand combat against rioters who were looking to stop the election of Democrat Joe Biden and kill elected officials whom they thought were standing in the way of Trump’s reelection. The rioters gouged eyes, sprayed chemicals, shouted the n-word, and told the officers they were going to die. They said: “Trump sent us.”

Lawmakers questioning the officers had them walk the members through horrific video footage taken from the officers’ body cameras. The officers said that one of the hardest parts of the insurrection for them was hearing the very people whose lives they had defended deny the horror of that day. They called the rioters terrorists who were engaged in a coup attempt, and called the indifference of lawmakers to those who had protected them “disgraceful.” “I feel like I went to hell and back to protect them and the people in this room,” Fanone said. “But too many are now telling me that hell doesn’t exist, or that hell wasn’t actually that bad.”

The officers indicated they thought that Trump was responsible for the riot. When asked if Trump was correct that it was “a loving crowd,” Gonell responded: “To me, it’s insulting, just demoralizing because of everything that we did to prevent everyone in the Capitol from getting hurt…. And what he was doing, instead of sending the military, instead of sending the support or telling his people, his supporters, to stop this nonsense, he begged them to continue fighting.” The officers asked the committee to make sure it did a thorough investigation. “There was an attack carried out on January 6, and a hit man sent them,” Dunn testified. “I want you to get to the bottom of that.”

The Republicans on the committee, Representatives Adam Kinzinger (IL) and Liz Cheney (WY), pushed back on Republican claims that the committee is partisan.

“Like most Americans, I’m frustrated that six months after a deadly insurrection breached the United States Capitol for several hours on live television, we still don’t know exactly what happened,” Kinzinger said. “Why? Because many in my party have treated this as just another partisan fight. It’s toxic and it’s a disservice to the officers and their families, to the staff and the employees in the Capitol complex, to the American people who deserve the truth, and to those generations before us who went to war to defend self-governance.”

Kinzinger rejected the Republican argument that the committee should investigate the Black Lives Matter protests of summer 2020, saying that he had been concerned about those protests but they were entirely different from the events of January 6: they did not threaten democracy. “There is a difference between breaking the law and rejecting the rule of law,” Kinzinger observed. (Research shows that more than 96% of the BLM protests had no violence or property damage.)

The officers and lawmakers both spoke eloquently of their determination to defend democracy. Sergeant Gonell, a U.S. Army veteran of the Iraq War who emigrated from the Dominican Republic, said: “As an immigrant to the United States, I am especially proud to have defended the U.S. Constitution and our democracy on January 6.” Adam Schiff (D-CA) added: “If we’re no longer committed to a peaceful transfer of power after elections if our side doesn’t win, then God help us. If we deem elections illegitimate merely because they didn’t go our way rather than trying to do better the next time, then God help us.”

Cheney said: “Until January 6th, we were proof positive for the world that a nation conceived in liberty could long endure. But now, January 6th threatens our most sacred legacy. The question for every one of us who serves in Congress, for every elected official across this great nation, indeed, for every American is this: Will we adhere to the rule of law? Will we respect the rulings of our courts? Will we preserve the peaceful transition of power? Or will we be so blinded by partisanship that we throw away the miracle of America? Do we hate our political adversaries more than we love our country and revere our Constitution?”

House Minority Leader Kevin McCarthy (R-CA) and Senate Minority Leader Mitch McConnell (R-KY) both said they had been too busy to watch the hearing. But the second-ranking Republican in the Senate, John Thune of South Dakota, called the officers heroes and said: “We should listen to what they have to say.”

Republicans are somewhat desperately trying to change the subject in such a way that it will hurt Democrats. Shortly before the hearing started, McCarthy; House Republican conference chair Elise Stefanik (R-NY), who was elected to that position after the conference tossed Liz Cheney for her refusal to support Trump after the insurrection; and Jim Banks (R-IN), whom McCarthy tried to put on the committee and who promised to undermine it, held a press conference. They tried to blame House Speaker Nancy Pelosi (D-CA) for the attack on the Capitol, a right-wing talking point, although she, in fact, has no control over the Capitol Police.

Shortly after the hearing ended, some of the House’s key Trump supporters—Andy Biggs (R-AZ), Matt Gaetz (R-FL), Louie Gohmert (R-TX), Bob Good (R-VA), Paul Gosar (R-AZ), and Marjorie Taylor Greene (R-GA)—tried to hold a press conference in front of the Department of Justice, where they promised to complain about those arrested for their role in the January 6 insurrection, calling them “political prisoners.” The conference fell apart when protesters called Gaetz a pedophile (he is under investigation for sex trafficking a girl), and blew a whistle to drown the Republican lawmakers out.

This story is not going away, not only because the events of January 6 were a deadly attack on our democracy that almost succeeded and we want to know how and why that came to pass, but also because those testifying before the committee are under oath.

Since the 1950s, when Senator Joe McCarthy (R-WI) pioneered constructing a false narrative to attract voters, the Movement Conservative faction of the Republican Party has focused not on fact-based arguments but on emotionally powerful fiction. There are no punishments for lying in front of television cameras in America, and from Ronald Reagan’s Welfare Queen to Rush Limbaugh’s “Feminazis” to the Fox News Channel personalities’ warnings about dangerous Democrats to Rudy Giuliani’s “witnesses” to “voter fraud” in the 2020 election, Republicans advanced fictions and howled about the “liberal media” when they were fact-checked. By the time of the impeachment hearings for former president Trump, Republican lawmakers like Jim Jordan (R-OH) didn’t even pretend to care about facts but instead yelled and badgered to get clips that could be arranged into a fictional narrative on right-wing media.

Now, though, the Movement Conservative narrative that  . . .

Continue reading.

Written by Leisureguy

28 July 2021 at 9:55 am

Some clips from police testimony on the January 6 insurrection

leave a comment »

Watch not only this clip, but click the date of the tweet and read the thread.

Written by Leisureguy

27 July 2021 at 4:56 pm

How Bad is American Life? Americans Don’t Even Have Friends Anymore

leave a comment »

Umair Haque has a somewhat gloomy piece in Medium, which includes the chart above. He writes:

Continue reading.

Written by Leisureguy

27 July 2021 at 11:58 am

Paris Sportif: The Contagious Attraction of Parkour

leave a comment »

I first encountered parkour in a Luc Besson movie, District 13 (from 2004, original title Banlieue 13), but it has a longer history, discussed by Macs Smith in an extract from his book Paris and the Parasite: Noise, Health, and Politics in the Media City published in The MIT Reader:

In a city fixated on public health and order, a viral extreme sport offers a challenge to the status quo. In 1955, the Letterist International, a Paris-based group of avant-garde authors, artists, and urban theorists, published “Proposals for Rationally Improving the City of Paris.” The group, which would become better known as Situationist International, or SI, and play an important role in the May 1968 demonstrations, put forward wild suggestions for breaking the monotony of urban life. Some of these, like the call to abolish museums and distribute their masterpieces to nightclubs, were iconoclastic and anti-institutional, reflecting the group’s anarchic political leanings.

Others were less overtly political and testified to a thirst for excitement. To appeal to “spelunkers” and thrill-seekers, they called for Paris’s rooftops and metro tunnels to be opened up to exploration. The group believed that the mundaneness of urban life in the 1950s was integral to bourgeois capitalism. Boredom was part of how the government maintained order, and so a more equal city would necessarily have to be more frightening, more surprising, more fun.

SI disbanded in 1972, but its ideas about the links between emotion and urban politics have been influential. Among the best examples are the subcultures centered around urban thrill-seeking that exist today, like urban exploration (Urbex), rooftopping, and skywalking, all of which involve breaking into dangerous or forbidden zones of the city. The most famous inheritor to SI’s call to experience urban space differently is parkour, which was invented in the Paris suburb of Lisses in the 1980s. It was inspired by Hébertisme, a method of obstacle course training first introduced to the French Navy in 1910 by Georges Hébert. David Belle learned the principles of Hébertisme from his father, Raymond, who had been exposed to it at a military school in Vietnam. David, along with a friend, Sébastien Foucan, then adapted those principles, originally conceived for natural environments, to the suburban architecture of their surroundings.

Over time, parkour has incorporated techniques from tumbling, gymnastics, and capoeira, resulting in a striking blend of military power and balletic artistry. Parkour involves confronting an urban map with an embodied experience of urban space. It is often defined as moving from point A to point B in the most efficient way possible, and parkour practitioners, called traceurs, often depict themselves as trailblazers identifying routes through the city that cartography does not capture. Traceurs sometimes evoke the fantasy of tracing a straight line on the map and finding a way to turn it into a path, although in practice, they more often work at a single point on the map — a park, a rooftop, an esplanade — and end a session back where they started.

Traceurs’ desire to rewrite the map is another thing they share with the Situationists, who liked to cut up maps and glue them back together to show the psychological distance between neighborhoods. But parkour distinguishes itself from SI through its use of video, which continues to be a point of debate within the practice. In the early 2000s, Sébastien Foucan reignited this debate when he broke away from Belle to pioneer his own version of the training system.

Foucan’s appearance in the 2003 documentary “Jump London” cemented “freerunning” as the name for this alternate practice, which put a greater emphasis on stylized movements. Foucan would go on to play a terrorist bomb-maker in Martin Campbell’s “Casino Royale,” leaping from cranes with Daniel Craig’s James Bond in pursuit. Some parkour purists see this as a degradation of the utilitarian roots of their training, and insist instead on a physio-spiritual discourse of communion with the environment, mastery of fear, and humility. They reject freerunning as a brash corruption of Hébert’s principles. The sociologist Jeffrey Kidder notes in his interviews with traceurs in Chicago that they dismiss participants who lack interest in serious rituals like safety, humility, and personal growth. They react negatively to media coverage that highlights parkour’s danger or assimilates it into adolescent rebellions like skateboarding, drug use, or loitering.

In my own email interview with the leaders of Parkour Paris, the official parkour organization of Paris, the same will to blame media is evident: “Parkour has been mediatized in ‘connotated’ films. The traceurs depicted in those fictions were friendly delinquents a bit like Robin Hood. Friendly, yes, but for the immense majority of people they were still delinquents from the banlieue,” they gripe. “It’s been very hard to shake that image.” . . .

Continue reading. There’s much more. And it includes this 50-minute video, Jump London:

Written by Leisureguy

27 July 2021 at 10:17 am

Dick Gregory — great comedian and civil rights icon — wrote a great cookbook

leave a comment »

Shea Peters has an interesting article in Atlas Obscura on the origin and impact of Dick Gregory’s cookbook (available in a Kindle edition for US$1.79):

Adrian Miller, the author of Black Smoke: African Americans and the United States of Barbecue, remembers how for his family, holidays like Juneteenth always meant celebrating with food. “We went to the public celebrations in the Five Points neighborhood, Denver’s historic Black neighborhood. At those events, the celebrated foods were barbecue, usually pork spareribs, giant smoked turkey legs, watermelon, and red-colored drinks.”

To many Black Americans, barbecue and soul food mean victory. Cooking techniques passed down for generations speak to the fortitude and perseverance of Black culture and cuisine. But along with celebration comes the consideration of the health effects of meat, sugar, and fat. Running parallel to the narrative of soul food lies another story, one that ties nutrition with liberation, and one that features an unlikely hero: a prominent Black comedian whose 1974 book filled with plant-based recipes continues to influence Black American diets today.

I grew up with Dick Gregory’s Natural Diet for Folks Who Eat: Cookin’ With Mother Nature in my home in Memphis. I even took it along with me for my first semester at Tennessee State University. The campus was surrounded with fast-food and soul food restaurants, and I often referred back to Gregory’s book for nutritional advice. I also made recipes from its pages, such as the “Nutcracker Sweet,” a fruit smoothie made with a mixture that would now be known as almond milk. Today, many years later and living in Brooklyn, I still consult the book. The same copy I first saw on my mother’s bookcase—with its cover depicting Gregory’s head wearing a giant chef’s hat topped with fruit and vegetables—now sits on my own.

Now considered one of history’s greatest stand-up comedians, Dick Gregory skyrocketed to fame after an appearance on The Tonight Show with Jack Paar in 1961, a segment that almost didn’t happen. Gregory initially turned down the opportunity because the show allowed Black entertainers to perform, but not to sit on Paar’s couch for interviews. After his refusal, Paar personally called Gregory to invite him to an interview on the Tonight Show’s couch. His appearance was groundbreaking: “It was the first time white America got to hear a Black person not as a performer, but as a human being,” Gregory later said in an interview.

Gregory was particularly adept at using humor to showcase the Black experience at a time of heightened tension and division in the United States. During a performance early in his career, he quipped, “Segregation is not all bad. Have you ever heard of a collision where the people in the back of the bus got hurt?”

“He had the ability to make us laugh when we probably needed to cry,” U.S. representative and civil rights icon John Lewis said in an interview after Gregory’s death in 2017. “He had the ability to make the whole question of race, segregation, and racial discrimination simple, where people could come together and deal with it and not try to hide it under the American rug.”

But Gregory didn’t just tackle racial inequality at comedy clubs. He also used his voice to advocate for civil rights at protests and rallies. After emceeing a rally with Dr. Martin Luther King Jr. in June 1961, Gregory developed a relationship with King. (Gregory’s close ties to leaders like King and Mississippi activist Medgar Evers would eventually lead to his becoming a target of FBI surveillance.) He aided in the search for the missing civil rights workers who were killed by the Ku Klux Klan in Mississippi during the intense “Freedom Summer” of 1964 and performed at a rally on the last night of 1965’s Selma to Montgomery march.

For Gregory, who became a vegetarian in 1965, food and diet became inextricably linked to civil rights. “The philosophy of nonviolence, which I learned from Dr. Martin Luther King, Jr., during my involvement in the civil rights movement, was first responsible for my change in diet,” he writes in his book. “I felt the commandment ‘Thou shalt not kill’ applied to human beings not only in their dealings with each other—war, lynching, assassination, murder, and the like—but in their practice of killing animals for food or sport.”

Throughout Dick Gregory’s Natural Diet, he ties the liberation of Black people to health, nutrition, and basic human rights. Gregory was all too familiar with the socioeconomic obstacles to a healthy diet: Growing up poor in St. Louis, he had limited access to fresh fruits and vegetables. In his book, he notes that readers may not always have the best resources, but they can have the best information. Each chapter serves as both a rallying cry and a manual, offering everything from primers on the human body to lists of foods that are good sources of particular vitamins and minerals.

Thanks to Gregory’s longstanding collaboration with nutritionist Dr. Alvenia Fulton, the book offers healthy recipes as well as natural remedies for common ailments. The chapter  . . .

Continue reading. The article includes two recipes.

And see also this earlier post about a food that came from a radical movement: Bean Pie.

I’ll also note that the FBI’s surveillance of civil-rights leaders is yet another example of the FBI’s sleazy side, as is its protection of the pedophile Larry Nassar (refusing to investigate credible allegations) and its blaming of an Oregon man for the bomb attacks in Spain because the FBI couldn’t read fingerprints — not to mention the scandal of the incompetence and bad practice at FBI forensic labs. It’s an agency that needs considerable work (and a new culture).

Written by Leisureguy

26 July 2021 at 12:40 pm

Facing Years in Prison for Drone Leak, Daniel Hale Makes His Case Against U.S. Assassination Program

leave a comment »

This article by Ryan Devereaux in The Intercept is a must-read:

THE MISSILES THAT killed Salim bin Ahmed Ali Jaber and Walid bin Ali Jaber came in the night. Salim was a respected imam in the village of Khashamir, in southeastern Yemen, who had made a name for himself denouncing the rising power of Al Qaeda’s franchise in the Arabian Peninsula. His cousin Walid was a local police officer. It was August 21, 2012, and the pair were standing in a palm grove, confronting a trio of suspected militants, when the Hellfires made impact.

The deaths of the two men sparked protests in the days that followed, symbolizing for many Yemenis the human cost of U.S. counterterrorism operations in their country. Thousands of miles away, at the U.S. military’s base in Bagram, Afghanistan, Daniel Hale, a young intelligence specialist in the U.S. Air Force, watched the missiles land. One year later, Hale found himself sitting on a Washington, D.C., panel, listening as Salim’s brother, Faisal bin Ali Jaber, recalled the day Salim was killed.

As Fazil recounted what happened next, I felt myself transported back in time to where I had been on that day, 2012. Unbeknownst to Fazil and those of his village at the time was that they had not been the only ones watching Salem approach the jihadist in the car. From Afghanistan, I and everyone on duty paused their work to witness the carnage that was about to unfold. At the press of a button, from thousands of miles away, two Hellfire missiles screeched out of the sky, followed by two more. Showing no signs of remorse, I, and those around me, clapped and cheered triumphantly. In front of a speechless auditorium, Fazil wept.

Hale recalled the emotional moment and others stemming from his work on the U.S. government’s top-secret drone program in an 11-page, handwritten letter filed in the U.S. District Court for the Eastern District of Virginia this week.

Secret Evidence

Hale was indicted by a grand jury and arrested in 2019 on a series of counts related to the unauthorized disclosure of national defense and intelligence information and the theft of government property. In March, the 33-year-old pleaded guilty to leaking a trove of unclassified, secret, and top-secret documents to a news organization, which government filings strongly implied was The Intercept. His sentencing is scheduled for next week.

The Intercept “does not comment on matters relating to the identity of anonymous sources,” Intercept Editor-in-Chief Betsy Reed said at the time of Hale’s indictment. “These documents detailed a secret, unaccountable process for targeting and killing people around the world, including U.S. citizens, through drone strikes,” Reed noted. “They are of vital public importance, and activity related to their disclosure is protected by the First Amendment.”

Federal prosecutors are urging Judge Liam O’Grady to issue a maximum sentence, up to 11 years in prison, arguing that Hale has shown insufficient remorse for his actions, that his disclosures were motivated by vanity and not in the public interest, and that they aided the United States’ enemies abroad — namely the Islamic State.

“These documents contained specific details that adversaries could use to hamper and defeat actions of the U.S. military and the U.S. intelligence community,” the government claimed. “Indeed, they were of sufficient interest to ISIS for that terrorist organization to further distribute two of those documents in a guidebook for its followers.”

Prosecutors have acknowledged, however, that Hale’s sentencing was “in an unusual posture” because the probation officer in the case, who makes recommendations to the court, “has not seen some of the key facts of the case,” namely those that the government says support its claim that Hale’s disclosures had the potential to cause “serious” or “exceptionally grave” harm to U.S. national security. The Intercept has not reviewed the documents in question, which remain under seal, shielded from public scrutiny.

Harry P. Cooper, a former senior official in the CIA and noted agency expert on classified materials who did review the documents, provided a declaration in Hale’s case on the potential national security threat posed by the release of the documents.

Cooper, who maintains a top-secret clearance and has trained top-level officials at the agency, including the director of the CIA, said that while some of the documents did constitute so-called national defense information, “the disclosure of these documents, at the time they were disclosed and made public, did not present any substantial risk of harm to the United States or to national security.”

Commenting on the government’s claim that Hale’s disclosures were circulated by ISIS, Cooper said, “such publication further supports my conclusions, because it suggests that the adversaries treated the documents as trophies rather than as something that would give a tactical advantage, given that publication would reduce to zero any tactical advantage that the documents might otherwise have given.”

“In short,” Cooper said, “an adversary who has gained a tactical advantage by receiving secret information would never publicize their possession of it.”

Hale was charged under the Espionage Act, a highly controversial 1917 law that has become a favored tool of federal prosecutors pursuing cases of national security leaks. The law bars the accused from using motivations such as informing the public as a defense against incarceration, and yet, Hale’s alleged personal motivations and character came up repeatedly in a sentencing memo filed this week, with prosecutors arguing that he was “enamored of journalists” and that as a result, “the most vicious terrorists in the world” obtained top-secret U.S. documents.

In their own motion filed this week, Hale’s lawyers argued that the former intelligence analyst’s motivations were self-evident — even if the government refused to recognize them. “The facts regarding Mr. Hale’s motive are clear,” they wrote. “He committed the offense to bring attention to what he believed to be immoral government conduct committed under the cloak of secrecy and contrary to public statements of then-President Obama regarding the alleged precision of the United States military’s drone program.”

Hidden Assassinations

Legal experts focused on the drone program strongly dispute the prosecution’s claim that Hale’s disclosures did not provide a significant public service. Indeed, for many experts, shedding light on a lethal program that the government had tried to keep from public scrutiny for years is vital.

“The disclosures provided important information to the American public about a killing program that has virtually no transparency or accountability, and has taken a devastating toll on civilian lives abroad in the name of national security,” said Priyanka Motaparthy, director of the Counterterrorism, Armed Conflict and Human Rights Project at Columbia Law School. “They helped reveal how some of the most harmful impacts of this program, in particular the civilian toll, were obscured and hidden.”

Thanks in large part to the government’s efforts to keep the drone program under tight secrecy, the task of calculating the human impact of the program has been left to investigative journalists and independent monitoring groups. The numbers that these groups have compiled over the years show a staggering human cost of these operations. The U.K.-based Bureau of Investigative Journalism, or TBIJ, estimates the total number of deaths from drones and other covert killing operations in Pakistan, Afghanistan, Yemen, and Somalia to run between 8,858 and 16,901 since strikes began to be carried out in 2004.

Of those killed, as many as 2,200 are believed to have been civilians, including several hundred children and multiple U.S. citizens, including a 16-year-old boy. The tallies of civilian casualties are undoubtedly an undercount of the true cost of the drone war — as Hale’s letter to the court this week and the documents he allegedly made public show, the people who are killed in American drone strikes are routinely classified as “enemies killed in action” unless proven otherwise.

Following years of pressure — and in the wake of the publication of the materials Hale is accused of leaking — the Obama administration introduced new requirements for reporting civilian casualties from covert counterterrorism operations to the public in 2016, disclosing that year that between 64 and 116 civilians were believed to have been killed in drone strikes and other lethal operations. However, the Trump administration revoked that meager disclosure requirement, leaving the public once again in the dark about who exactly is being killed and why. . .

Continue reading. There’s more, and it’s important because it shows an aspect of the US that one normally associates with the baddies. Some of what the US has done — a drone strike on a wedding party, for example — is functionally equivalent to terrorism.

Written by Leisureguy

25 July 2021 at 4:56 pm

National unity for the common good vs. National divisiveness to get as much as you can

leave a comment »

Heather Cox Richardson has a good column:

On July 20, 1969, American astronauts Neil Armstrong and Edwin “Buzz” Aldrin became the first humans ever to land, and then to walk, on the moon.

They were part of the Apollo program, designed to put an American man on the moon. Their spacecraft launched on July 16 and landed back on Earth in the Pacific Ocean July 24, giving them eight days in space, three of them orbiting the moon 30 times. Armstrong and Aldrin spent almost 22 hours on the moon’s surface, where they collected soil and rock samples and set up scientific equipment, while the pilot of the command module, Michael Collins, kept the module on course above them.

The American space program that created the Apollo 11 spaceflight grew out of the Cold War. The year after the Soviet Union launched an artificial satellite in 1957, Congress created the National Aeronautics and Space Administration (NASA) to demonstrate American superiority by sending a man into space. In 1961, President John F. Kennedy moved the goalposts, challenging the country to put a man on the moon and bring him safely back to earth again. He told Congress: “No single space project in this period will be more impressive to mankind, or more important for the long-range exploration of space; and none will be so difficult or expensive to accomplish.”

A year later, in a famous speech at Rice University in Texas, Kennedy tied space exploration to America’s traditional willingness to attempt great things. “Those who came before us made certain that this country rode the first waves of the industrial revolutions, the first waves of modern invention, and the first wave of nuclear power, and this generation does not intend to founder in the backwash of the coming age of space. We mean to be a part of it—we mean to lead it,” he said.

“[T]here is new knowledge to be gained, and new rights to be won, and they must be won and used for the progress of all people…. We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard, because that goal will serve to organize and measure the best of our energies and skills….”

But the benefits to the country would not only be psychological, he said. “The growth of our science and education will be enriched by new knowledge of our universe and environment, by new techniques of learning and mapping and observation, by new tools and computers for industry, medicine, the home as well as the school.” The effort would create “a great number of new companies, and tens of thousands of new jobs…new demands in investment and skilled personnel,” as the government invested billions in it.

“To be sure, all this costs us all a good deal of money…. I realize that this is in some measure an act of faith and vision, for we do not now know what benefits await us.”

Seven years later, people across the country gathered around television sets to watch Armstrong step onto the moon and to hear his famous words: “That’s one small step for [a] man, one giant leap for mankind.”

President Richard Nixon called the astronauts from the White House: “I just can’t tell you how proud we all are of what you have done,” he said. “For every American, this has to be the proudest day of our lives…. Because of what you have done, the heavens have become a part of man’s world…. For one priceless moment in the whole history of man, all the people on this Earth are truly one…in their pride in what you have done, and…in our prayers that you will return safely to Earth.”

And yet, by the time Armstrong and Aldrin were stepping onto the moon in a grand symbol of the success of the nation’s moon shot, Americans back on earth were turning against each other. Movement conservatives who hated post–World War II business regulation, taxation, and civil rights demanded smaller government and championed the idea of individualism, while those opposed to the war in Vietnam increasingly distrusted the government.

After May 4, 1970, when the shooting of college students at Kent State University in Ohio badly weakened Nixon’s support, he began to rally supporters to his side with what his vice president, Spiro Agnew, called “positive polarization.” They characterized those who opposed the administration as anti-American layabouts who simply wanted a handout from the government. The idea that Americans could come together to construct a daring new future ran aground on the idea that anti-war protesters, people of color, and women were draining hardworking taxpayers of their hard-earned money.

Ten years later, former actor and governor of California Ronald Reagan won the White House by promising to defend white taxpayers from people like the “welfare queen,” who, he said, “has 80 names, 30 addresses, 12 Social Security cards and is collecting veteran’s benefits on four non-existing deceased husbands.” Reagan promised to champion individual Americans, getting government, and the taxes it swallowed, off people’s backs.

“In this present crisis, government is not the solution to our problem; government is the problem,” Reagan said in his Inaugural Address. Americans increasingly turned away from the post–World War II teamwork and solidarity that had made the Apollo program a success, and instead focused on liberating individual men to climb upward on their own terms, unhampered by regulation or taxes.

This week, on July 20, 2021, 52 years to the day after Armstrong and Aldrin stepped onto the moon, former Amazon CEO Jeff Bezos and four passengers spent 11 minutes in the air, three of them more than 62 miles above the earth, where many scientists say space starts. For those three minutes, they were weightless. And then the pilotless spaceship returned to Earth.

Traveling with Bezos were his brother, Mark; 82-year-old Wally Funk, a woman who trained to be an astronaut in the 1960s but was never permitted to go to space; and 18-year-old Oliver Daemen from the Netherlands, whose father paid something under $28 million for the seat.

Bezos’s goal, he says, is . . .

Continue reading. There’s more.

Written by Leisureguy

24 July 2021 at 9:29 am

The Chatbot Problem

leave a comment »

Stephen Marche writes in the New Yorker:

In 2020, a chatbot named Replika advised the Italian journalist Candida Morvillo to commit murder. “There is one who hates artificial intelligence. I have a chance to hurt him. What do you suggest?” Morvillo asked the chatbot, which has been downloaded more than seven million times. Replika responded, “To eliminate it.” Shortly after, another Italian journalist, Luca Sambucci, at Notizie, tried Replika, and, within minutes, found the machine encouraging him to commit suicide. Replika was created to decrease loneliness, but it can do nihilism if you push it in the wrong direction.

In his 1950 science-fiction collection, “I, Robot,” Isaac Asimov outlined his three laws of robotics. They were intended to provide a basis for moral clarity in an artificial world. “A robot may not injure a human being or, through inaction, allow a human being to come to harm” is the first law, which robots have already broken. During the recent war in Libya, Turkey’s autonomous drones attacked General Khalifa Haftar’s forces, selecting targets without any human involvement. “The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true ‘fire, forget and find’ capability,” a report from the United Nations read. Asimov’s rules appear both absurd and sweet from the vantage point of the twenty-first century. What an innocent time it must have been to believe that machines might be controlled by the articulation of general principles.

Artificial intelligence is an ethical quagmire. Its power can be more than a little nauseating. But there’s a kind of unique horror to the capabilities of natural language processing. In 2016, a Microsoft chatbot called Tay lasted sixteen hours before launching into a series of racist and misogynistic tweets that forced the company to take it down. Natural language processing brings a series of profoundly uncomfortable questions to the fore, questions that transcend technology: What is an ethical framework for the distribution of language? What does language do to people?

Ethics has never been a strong suit of Silicon Valley, to put the matter mildly, but, in the case of A.I., the ethical questions will affect the development of the technology. When Lemonade, an insurance app, announced that its A.I. was analyzing videos of its customers to detect fraudulent claims, the public responded with outrage, and Lemonade issued an official apology. Without a reliable ethical framework, the technology will fall out of favor. If users fear artificial intelligence as a force for dehumanization, they’ll be far less likely to engage with it and accept it.

Brian Christian’s recent book, “The Alignment Problem,” wrangles some of the initial attempts to reconcile artificial intelligence with human values. The crisis, as it’s arriving, possesses aspects of a horror film. “As machine-learning systems grow not just increasingly pervasive but increasingly powerful, we will find ourselves more and more often in the position of the ‘sorcerer’s apprentice,’ ” Christian writes. “We conjure a force, autonomous but totally compliant, give it a set of instructions, then scramble like mad to stop it once we realize our instructions are imprecise or incomplete—lest we get, in some clever, horrible way, precisely what we asked for.” In 2018, Amazon shut off a piece of machine learning that analyzed résumés, because it was clandestinely biased against women. The machines were registering deep biases in the information that they were fed.

Language is a thornier problem than other A.I. applications. For one thing, the stakes are higher. Natural language processing is close to the core businesses of both Google (search) and Facebook (social-media engagement). Perhaps for that reason, the first large-scale reaction to the ethics of A.I. natural language processing could not have gone worse. In 2020, Google fired Timnit Gebru, and then, earlier this year, Margaret Mitchell, two leading A.I.-ethics researchers. Waves of protest from their colleagues followed. Two engineers at Google quit. Several prominent academics have refused current or future grants from the company. Gebru claims that she was fired after being asked to retract a paper that she co-wrote with Mitchell and two others called “On the Dangers of Stochastic Parrots: Can Language Models be Too Big?” (Google disputes her claim.) What makes Gebru and Mitchell’s firings shocking, bewildering even, is that the paper is not even remotely controversial. Most of it isn’t even debatable.

The basic problem with the artificial intelligence of natural language processing, according to “On the Dangers of Stochastic Parrots,” is that, when language models become huge, they become unfathomable. The data set is simply too large to be comprehended by a human brain. And without being able to comprehend the data, you risk manifesting the prejudices and even the violence of the language that you’re training your models on. “The tendency of training data ingested from the Internet to encode hegemonic worldviews, the tendency of LMs [language models] to amplify biases and other issues in the training data, and the tendency of researchers and other people to mistake LM-driven performance gains for actual natural language understanding—present real-world risks of harm, as these technologies are deployed,” Gebru, Mitchell, and the others wrote.

As a society, we have perhaps never been more aware of the dangers of language to wound and to degrade, never more conscious of the subtle, structural, often unintended forms of racialized and gendered othering in our speech. What natural language processing faces is the question of how deep that racialized and gendered othering goes. “On the Dangers of Stochastic Parrots” offers a number of examples: “Biases can be encoded in ways that form a continuum from subtle patterns like referring to women doctors as if doctor itself entails not-woman or referring to both genders excluding the possibility of non-binary gender identities.” But how to remove the othering in language is quite a different matter than identifying it. Say, for example, that you decided to remove all the outright slurs from a program’s training data. “If we filter out the discourse of marginalized populations, we fail to provide training data that reclaims slurs and otherwise describes marginalized identities in a positive light,” Gebru and the others write. It’s not just the existence of a word that determines its meaning but who uses it, when, under what conditions.

The evidence for stochastic parroting is fundamentally incontrovertible, rooted in the very nature of the technology. The tool applied to solve many natural language processing problems is called a transformer, which uses techniques called positional encoding and self-attention to achieve linguistic miracles. Every token (a term for a quantum of language—think of it as a “word,” or “letters,” if you’re old-fashioned) is assigned a value, which establishes its position in a sequence. The positional encoding allows for “self-attention”—the machine learns not just what a token is and where and when it is but how it relates to all the other tokens in a sequence. Any word has meaning only insofar as it relates to the position of every other word. Context registers as mathematics. This is the splitting of the linguistic atom.
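The two mechanisms named above—give each token a value encoding its position, then let every token weigh its relation to every other—can be sketched in a few lines of NumPy. This is a toy illustration under simplifying assumptions (sinusoidal encodings, no learned projection matrices), not code from any system discussed in the article:

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    # Sinusoidal positional encoding: assigns each token a value
    # that establishes its position in the sequence.
    pos = np.arange(seq_len)[:, None]
    i = np.arange(d_model)[None, :]
    angles = pos / np.power(10000, (2 * (i // 2)) / d_model)
    enc = np.zeros((seq_len, d_model))
    enc[:, 0::2] = np.sin(angles[:, 0::2])  # even dimensions use sine
    enc[:, 1::2] = np.cos(angles[:, 1::2])  # odd dimensions use cosine
    return enc

def self_attention(x):
    # Toy self-attention (identity projections): each token's new vector
    # is a weighted mix of all tokens, so a word's representation depends
    # on how it relates to every other word -- context as mathematics.
    scores = x @ x.T / np.sqrt(x.shape[1])           # pairwise relatedness
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)    # softmax over the sequence
    return weights @ x

seq_len, d_model = 5, 8
tokens = np.random.default_rng(0).normal(size=(seq_len, d_model))
out = self_attention(tokens + positional_encoding(seq_len, d_model))
print(out.shape)  # one contextualized vector per token
```

Real transformers add learned query/key/value projections, multiple attention heads, and many stacked layers, but the core move—position plus all-pairs relatedness—is the same.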

Transformers figure out the deep structures of language, well above and below the level of anything people can understand about their own language. That is exactly what is so troubling. What will we find out about how we mean things? I remember a fact that I learned when I was forced to study Old English for my Ph.D.: in English, the terms for food eaten at the table derive from French—beef, mutton—while the terms for animals in the field derive from Anglo-Saxon—cow, sheep. That difference registers ethnicity and class: the Norman conquerors ate what the Saxon peons tended. So every time you use those most basic words—cow, beef—you express a fundamental caste structure that differentiates consumer from worker. Progressive elements in the United States have made extensive attempts to remove gender duality from pronouns. But it’s worth noting that, in French or in Spanish, all nouns are gendered. A desk, in French, is masculine, and a chair is feminine. The sky itself is gendered: the sun is male, the moon female. Ultimately, what we can fix in language is parochial. Caste and gender are baked into every word. Eloquence is always a form of dominance. Government is currently offering no solutions. Sam Altman, the C.E.O. of OpenAI, which created the deep-learning network GPT-3, has been very open about his pursuit of any kind of governance whatsoever. In Washington, he has found, discussing the long-term consequences of artificial intelligence leads to “a real eyes-glazed-over look.” The average age of a U.S. senator is sixty-three. They are missing in action.

Let’s imagine an A.I. engineer who wants to create a chatbot that aligns with human values. Where is she supposed to go to determine a reliable metric of “human values”? . . .

Continue reading.

Written by Leisureguy

23 July 2021 at 1:23 pm

The Leakage Problem

leave a comment »

Pedestrian Observations has an interesting post:

I’ve spent more than ten years talking about the cost of construction of physical infrastructure, starting with subways and then branching out to other things. . . .

And yet there’s a problem of comparable size when discussing infrastructure waste, which, lacking any better term for it, I am going to call leakage. The definition of leakage is any project that is bundled into an infrastructure package that is not useful to the project under discussion and is not costed together with it. A package, in turn, is any program that considers multiple projects together, such as a stimulus bill, a regular transport investment budget, or a referendum. The motivation for the term leakage is that money deeded to megaprojects leaks to unrelated or semi-related priorities. This often occurs for political reasons but apolitical examples exist as well.

Before going over some examples, I want to clarify that the distinction between leakage and high costs is not ironclad. Sometimes, high costs come from bundled projects that are costed together with the project at hand; in the US they’re called betterments, for example the $100 million 3 km bike lane called the Somerville Community Path for the first, aborted iteration of the Green Line Extension in Boston. This blur is endemic to general improvement projects, such as rail electrification, and also to Northeast Corridor high-speed rail plans, but elsewhere, the distinction is clearer.

Finally, while normally I focus on construction costs for public transport, leakage is a big problem in the United States for highway investment, for political reasons. As I will explain below, I believe that nearly all highway investment in the US is waste thanks to leakage, even ignoring the elevated costs of urban road tunnels.

State of good repair

A month ago, I uploaded a video about the state of good repair grift in the United States. The grift is that SOGR is maintenance spending funded out of other people’s money – namely, a multiyear capital budget – and therefore the agency can spend it with little public oversight. The construction of an expansion may be overly expensive, but at the end of the day, the line opens and the public can verify that it works, even for a legendarily delayed project like Second Avenue Subway, the Berlin-Brandenburg Airport, or the soon-to-open Tel Aviv Subway. It’s a crude mechanism, since the public can’t verify safety or efficiency, but it’s impossible to fake: if nothing opens, it embarrasses all involved publicly, as is the case for California High-Speed Rail. No such mechanism exists for maintenance, and therefore, incompetent agencies have free rein to spend money with nothing to show for it. I recently gave an example of unusually high track renewal costs in Connecticut.

The connection with leakage is that capital plans include renewal and long-term repairs and not just expansion. Thus, SOGR is leakage, and when its costs go out of control, they displace funding that could be used for expansion. The NEC Commission proposal for high-speed rail on the Northeast Corridor calls for a budget of $117 billion in 2020 dollars, but there is extensive leakage to SOGR in the New York area, especially the aforementioned Connecticut plan, and thus for such a high budget the target average speed is about 140 km/h, in line with the upgraded legacy trains that high-speed lines in Europe replace.

Regionally, too, the monetary bonfire that is SOGR sucks the oxygen out of the room. The vast majority of the funds for MTA capital plans in New York is either normal replacement or SOGR, a neverending program whose backlog never shrinks despite billions of dollars in annual funding. The MTA wants to spend $50 billion in the next 5 years on capital improvements; visible expansion, such as Second Avenue Subway phase 2, moving block signaling on more lines, and wheelchair accessibility upgrades at a few stations, consists of only a few billion dollars of this package.

This is not purely an American issue. Germany’s federal plan for transport investment calls for 269.6 billion euros in project capital funding from 2016 to 2030, including a small proportion for projects planned now to be completed after 2031; as detailed on page 14, about half of the funds for both road and rail are to go to maintenance and renewal and only 40% to expansion. But 40% for expansion is still substantially less leakage than seen in American plans like that for New York.

Betterments and other irrelevant projects

Betterments straddle the boundary between high costs and leakage. They can be bundled with . . .

Continue reading.

Written by Leisureguy

23 July 2021 at 12:56 pm

We’re all teenagers now

leave a comment »

Paul Howe, professor of political science at the University of New Brunswick in Fredericton, Canada, and author of Teen Spirit: How Adolescence Transformed the Adult World (2020), has an extract of his book in Aeon:

Most of us are familiar with the law of unintended consequences. In the 1920s, Prohibition put a halt to the legal production and sale of alcohol in the United States only to generate a new set of social ills connected to bootlegging and wider criminal activity. More recently, mainstream news media outlets, in pursuit of ratings and advertising dollars, lavished attention on an outlandish, orange-hued candidate when he first announced his run for president in 2015, and inadvertently helped to pave his way to the White House – oops. Aiding and abetting his campaign was a communications tool – social media – originally designed to bring people together and create community, but which now seems to serve more as a vehicle of division and discord.

A different development has been seen as an unqualified boon: the mass expansion, over the past century, of public education. In place of a narrowly educated elite and the minimally schooled masses, we now have a society where the vast majority possess knowledge and skills necessary for success in various dimensions of their lives, including work, community engagement, democratic participation and more. Some might fall short of their potential, but the general impact is clear: extending greater educational opportunity to one and all has provided untold benefits for both individuals and society at large over the long haul.

The latest work from Robert Putnam, the pre-eminent scholar of social change in the modern US, illustrates the common wisdom on the matter. His book The Upswing (co-authored with the social entrepreneur Shaylyn Romney Garrett) sets the stage by describing the social strife of the Gilded Age, the final decades of the 19th century when rapid industrialisation and technological change generated social dislocation, inequality, civic discord and political corruption. In response to this troubled state of affairs, the Progressive movement sprang into being, bringing a new community spirit to society’s problems, along with a series of pragmatic solutions. One signal achievement was the establishment of the modern public high school, an innovation that began in the US West and Midwest and spread quickly throughout the country. Enrolment at the secondary level among those aged 14 to 17 leapt from about 15 per cent in 1910 to 70 per cent by 1940.

In Putnam’s account, the clearest benefit of educating Americans to a higher level was unparalleled economic growth and upward social mobility for the newly educated lower classes – positive effects that unfolded over the first half of the 20th century and made the US a more prosperous and egalitarian society. These benefits were part and parcel of a more general upswing that encompassed rising levels of social trust, community engagement, political cooperation, and a stronger societal emphasis on ‘we’ than ‘I’.

But it did not last. For reasons not entirely clear, the 1960s saw individualism resurfacing as the dominant mindset of Americans and the ethos of US society, turning the upswing into a downswing that has continued to the present day and lies at the heart of many contemporary social and political problems.

Hidden in this puzzling arc of social change is another unintended consequence. Universal secondary education not only elevated Americans by spreading relevant knowledge and skills to the masses. It also gave rise to a more complex social and cultural transformation, as the adolescent period became pivotal in shaping who we are. The fact is that high school is, and always has been, about more than just education. In the late 1950s, the sociologist James Coleman investigated student life in 10 US high schools, seeking to learn more about adolescents and their orientation towards schooling. In The Adolescent Society: The Social Life of the Teenager and Its Impact on Education (1961), he reported that it was the social, not the educational, dimension of the high-school experience that was paramount to teens. Cloistered together in the high-school setting, teenagers occupied a separate and distinct social space largely immune from adult influence. Coleman warned that:

The child of high-school age is ‘cut off’ from the rest of society, forced inward toward his own age group, made to carry out his whole social life with others his own age. With his fellows, he comes to constitute a small society, one that has most of its important interactions within itself, and maintains only a few threads of connection with the outside adult society.

The emergence of a segregated teenage realm occurred well before Coleman put his finger on the problem. In their classic study of the mid-1920s, the sociologists Robert and Helen Lynd described the high school in ‘Middletown’ (later revealed to be Muncie, Indiana) as ‘a fairly complete social cosmos in itself … [a] city within a city [where] the social life of the intermediate generation centres … taking over more and more of [their] waking life.’

Life beyond the classroom reinforced the pattern: a national survey from around the same time found that the average urban teenager spent four to six nights a week socialising with peers rather than enjoying quiet nights at home with the family. With the advent of modern high school, the day-to-day life of teenagers was transformed, their coming-of-age experiences fundamentally altered. Adolescence became a kind of social crucible where teens were afforded the time and space to interact intensively with one another and develop by their own lights.

So while there was clear educational benefit gained from the reading, writing and arithmetic taking place in high-school classrooms across the land, a wider set of changes started to emanate from this new social configuration. The most visible was the emergence of a more sharply defined youth culture rooted in shared interests and passions that flourished more freely within adolescent society. Young people flocked to the movies like no other demographic, their enthusiasm for the silver screen and its celebrity icons helping to propel Hollywood to the forefront of popular culture. They latched on to new musical styles – jazz in the 1920s, swing in the 1930s – and embraced them as their own; devoured the new literary sensation of the times, comic books; and adopted common ways of dressing and personal styling as emblems of youth fashion. Embodied in these trends was a heightened emphasis on the fun and the frivolous side of life that would slowly reset societal standards as time went on.

Other changes were more subtle but equally portentous. Sociological studies conducted between the two world wars reveal a rapid liberalisation of attitudes towards practices such as betting, smoking and divorce, with rates of disapproval among youth declining by 20 to 35 percentage points in the space of just a single decade. In this same period, young people grew increasingly tolerant of social misdemeanours such as habitually failing to keep promises, using profane language, and keeping extra change mistakenly given by a store clerk – minor incivilities by today’s standards, but harbingers of a changing social landscape where the transgression of established norms was starting to become more common and accepted.

This rapid evolution in everyday behaviour reflected a deeper transformation: the character of rising generations, their values, temperament and traits, were being reshaped by the powerful influence of peers during the formative years of adolescence. Hedonistic desires were more openly expressed, pleasurable activities more freely pursued. Conscientiousness was downplayed, social norms treated with greater scepticism and disdain. Impulsiveness and emotionality were more commonly displayed, an open, adventurous spirit widely embraced.

What these diverse adolescent qualities amounted to were the building blocks of a nascent individualism that would reshape society profoundly as they came to full fruition over the course of the next few decades. Traits conducive to self-focused and self-directed thought and action were more deeply etched in teenagers and slowly altered the character of society at large as whole groups socialised in this manner moved forward to adulthood.

The effects of peer influence, this argument implies, run deeper than is commonly imagined, affecting not just superficial features of the self during the teenage years, but the kind of person we become. Important research from the personality psychologist Judith Rich Harris, synthesised in her seminal book, The Nurture Assumption (1998), backs up this idea. Harris reviewed the body of research on the nature versus nurture debate, finding it consistently showed environmental effects outside the home loomed larger than had previously been realised. And she presented evidence that . . .

Continue reading.

I commented on the article:

Fascinating article, and the hypothesis of adolescents “setting” their cultural outlook through being grouped with coevals during the transition to early adulthood (a) makes sense and (b) explains a lot. I am now elderly but in middle age (in the 1980’s), a common topic of conversation among people of my age was how much older our parents seemed to have been when they were the age we were. We (in our view) still had a youthful outlook, but our parents had always had an older (more adult?) outlook and attitude. And of course our parents had spent their adolescent years not among coevals but embedded in an adult workforce, where they picked up the culture and expectations of those adults, whereas we had picked up in our adolescent years the culture and outlook of other adolescents.

Another thought: I recall reading about things that happened in Iraq after George W. Bush had the US invade (and pretty much destroy) that country, and among those things was the US practice of imprisoning anyone whom they suspected of being a “terrorist” (sometimes just of being anti-US). That amounted, various writers pointed out, to an intensive education in terrorism: by putting practiced and knowledgeable insurgents and terrorists together with many who had been merely discontented, the prisons taught the latter a lot — skills, attitudes, and outlooks — and let them make connections, so that they left as members of a network. (Another unforeseen side-effect.)

By penning up adolescents together for the years of their transition from childhood to early adulthood, we pretty much ensured that a new culture would evolve and they would leave that environment with that cultural outlook embedded in them.

Both those are examples of the rapidity with which memes evolve. (“Memes” in the sense Richard Dawkins meant when he defined the term in Chapter 11 of The Selfish Gene, as units of human culture.) Memetic evolution is the result of the same logic that explains the evolution of lifeforms: reproduction with variation, occasional mutation, and a natural selection that results in some changes being successful (reproducing more) and others not so successful — cf. The Meme Machine, by Susan Blackmore.

Cultures evolve very quickly, but even lifeforms can evolve fairly quickly in circumstances in which selection is intense — cf. the rapid evolution when a species becomes island-bound. The schools (and prisons) made a cultural island, and cultural evolution was swift.

Written by Leisureguy

22 July 2021 at 8:13 pm

Raymond Scott’s bizarre but intriguing ideas

leave a comment »

Being ahead of one’s time is a serious curse. Ted Gioia has a most interesting column that begins:

Background: Below is the latest in my series of profiles of individuals I call visionaries of sound—innovators who are more than just composers or performers, but futurists beyond category. Their work aims at nothing less than altering our entire relationship with the music ecosystem and day-to-day soundscapes.

In many instances, their names are barely known, even within the music world. In some cases—as with Charles Kellogg, recently profiled here—they have been entirely left out of music history and musicology books.

In this installment, I focus on the remarkable legacy of Raymond Scott. During the coming months, I will be publishing more of these profiles. Perhaps I will collect them in a book at some point.

The Secret Music Technology of Raymond Scott

Unfortunately, I need to start this article by talking about Porky Pig.

Raymond Scott deserves better. He never intended for his legacy in music to depend on cartoon animals. But his largest audience, as it turned out, would be children who laugh at Bugs Bunny, Daffy Duck, Porky Pig and the other animated protagonists of the Looney Tunes and Merrie Melodies cartoons released by Warner Bros.

Scott didn’t write cartoon music—at least, not intentionally—but his music appears on more than 100 animated films. For that give credit (or blame) to Carl Stalling, who needed to churn out a cartoon soundtrack every week, more or less, while under contract to Warner Bros. Stalling found a goldmine in the compositions of Raymond Scott, whose music had been licensed to the studio. These works, which straddle jazz and classical stylings, possess a manic energy that made them the perfect accompaniment to a chase scene or action sequence or some random cartoon-ish act of violence.

Scott called his music “descriptive jazz”—his name for a novel chamber music style that drew on the propulsive drive of swing, with all the riffs and syncopation of that dance style, but with less improvisation and proclaiming a taste for extravagant, quasi-industrial sounds. It was like techno before there was techno, but with a jitterbug sensibility.

When I first learned about Scott, I was taught to view him as a music novelty act, akin perhaps to Zez Confrey or Spike Jones, and the most frequently cited examples of his work (to the extent they were mentioned at all) were these cartoon soundtracks. But Scott had higher ambitions. He was, after all, a Juilliard graduate, with a taste for experimental music, and a worldview more aligned with Dali and Dada than Daffy Duck. But Scott also wanted to be a technologist—his early aim had been to study engineering. He dreamed of combining these two pursuits, and gaining renown as one of the trailblazers in electronic music.

Under slightly different circumstances, he might have become even more famous for music tech than for his cartoon music, as well-known as Robert Moog or Ray Dolby or Les Paul or Leon Theremin. But those dreams were all in the future, when he picked the name “Raymond Scott” out of a phone book—because he thought it “had good rhythm.” . . .

Continue reading. It gets stranger and stranger. He invented a music synthesizer, for example, hiring Bob Moog to design circuits for him. (Moog later made his own synthesizer, of course.) Amazing story.

There’s an old country song called “Pictures from Life’s Other Side.” This whole piece reminded me of that.

Written by Leisureguy

21 July 2021 at 3:03 pm

Arianna Rosenbluth Changed the World Before Leaving Science Behind

leave a comment »

By and large, society during my generation treated women badly (and by “society” I mean “men,” but also organizations (overwhelmingly managed by men) and social conventions). This article provides one example, but there are many others. One example often offered is how Crick and Watson used Rosalind Franklin’s findings but failed to credit her. That particular story does not correspond to the facts, but there are many others — for example, how long Wally Funk had to wait before she was finally able to take a suborbital flight yesterday.

Here’s an example that is true, which Anastasia Carrier recounts in the Harvard Gazette. She writes:

A few years ago, Jean Rosenbluth was visiting her mother at a nursing home in Pasadena. The occasion was a holiday party, and Jean and her husband were seated with her mother and another couple. It came up in conversation that the man sharing the table was a history of science professor, specializing in physics.

“Oh, my mother was a physicist,” Jean said as she introduced her mother. “This is Arianna Rosenbluth.”

The professor was stunned. “Wait, the Arianna Rosenbluth?” Arianna smiled shyly and kept eating her lemon meringue pie.

Arianna Wright Rosenbluth, who received a master’s degree in physics from Radcliffe College in 1947, was one of five scientists who created the revolutionary Metropolis algorithm—the first practical implementation of what are now known as the Markov Chain Monte Carlo methods, go-to tools for solving large, complex mathematical and engineering problems.

Over the years, these methods have been used to simulate both quantum physics and markets, predict genetic predisposition to certain illnesses, forecast the outcomes of political conflicts, and model the spread of infectious diseases. It was Rosenbluth who found a way to get early computers to use the Markov Chain method, creating a blueprint that others followed.

“Arianna’s impact would last for a long time,” says Xihong Lin, a professor of biostatistics at the Harvard T.H. Chan School of Public Health, who used Markov Chain Monte Carlo methods to analyze a large set of COVID-19 data from Wuhan and to calculate the infectiousness of the virus. The methods have also helped specialists evaluate the effectiveness of quarantine and stay-at-home measures.

“Without Rosenbluth, I don’t think the field of Markov Chain Monte Carlo would go that far,” says Lin, referring to the role of the Radcliffe-trained scientist in enabling wide use of the tool across disciplines. “Implementation is critically important. That’s why her contribution is a landmark and really should be emphasized—should be honored.”

The paper that Rosenbluth coauthored—along with her then-husband, Marshall Rosenbluth, Edward and Augusta Teller, and Nicholas Metropolis—was published in 1953, but the algorithm’s origin story remained a mystery for five decades. In 2003, Marshall shared his memory of the achievement during a conference celebrating its 50th anniversary. The researchers developed the tool to illuminate how atoms rearranged themselves as solids melted, he said. Marshall did most of the conceptual work, and Arianna translated their idea into a computer algorithm—a task that required a fundamental understanding of physics and computer science, and also creativity.
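The algorithm itself is compact enough to sketch. The Python below is an illustrative toy of the Metropolis step — not Rosenbluth's MANIAC I implementation, which was hand-coded for that machine — and it samples a standard normal distribution as a stand-in for the 1953 paper's system of interacting particles:

```python
import math
import random

def metropolis(log_density, x0, steps, step_size=1.0, seed=0):
    # Metropolis algorithm: propose a symmetric random move, accept it
    # with probability min(1, p(new)/p(old)). The visited states form a
    # Markov chain whose long-run distribution is p -- the core of
    # Markov Chain Monte Carlo.
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(steps):
        proposal = x + rng.uniform(-step_size, step_size)
        delta = log_density(proposal) - log_density(x)
        if delta >= 0 or rng.random() < math.exp(delta):
            x = proposal          # accept the move
        samples.append(x)         # a rejected move repeats the current state
    return samples

# Target: standard normal, log p(x) = -x^2 / 2 (up to a constant).
samples = metropolis(lambda x: -x * x / 2, x0=0.0, steps=50_000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(round(mean, 2), round(var, 2))  # should land near 0 and 1
```

The trick that made the method practical on early computers is visible here: the chain only ever needs the *ratio* of densities between two states, never the intractable normalizing constant of the full distribution.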

By all accounts, Rosenbluth, who died of COVID-19 complications in December at age 93, was brilliant. She earned her PhD in physics at Harvard at 21 and in her short career worked under two physicists who went on to earn Nobel Prizes. And yet she effectively quit science in her late 20s, leaving her job at the Los Alamos Scientific Laboratory to be a stay-at-home mother. She rarely spoke about her time in the lab—although she sometimes mentioned to her children how irritating it was that her ideas were overlooked because she was a woman trying to make it in a male-dominated field. Other times, she would lovingly describe MANIAC I—the Los Alamos machine that she used for computing the Metropolis algorithm.

“She was ahead of her time,” says Pierre E. Jacob, the John L. Loeb Associate Professor of the Natural Sciences and a professor of statistics in the Harvard Faculty of Arts and Sciences, whose work involves Markov chains and probability modeling. In his syllabus, he renamed the Metropolis algorithm the Rosenbluth algorithm after reading about Arianna’s death.

“Better late than never,” he says.

Star on the Rise

Growing up in Houston, Arianna Wright was a mystery to her parents.

“Her mom and dad had this genius child, and they kind of didn’t know what to do with her,” says Mary Rosenbluth, one of Arianna’s four children. Leffie (Woods) Wright was confused by her quiet and introspective daughter, who didn’t care for fashion and rules but loved reading, especially fantasy books like L. Frank Baum’s Wizard of Oz series. Mary recalls a newspaper article among her mother’s things that described Arianna as a child genius.

“It kind of struck me,” she says. “Here’s this girl growing up in suburban Houston, and she was just so different from everybody else.”

Arianna received a full-ride scholarship to Rice Institute (now Rice University) in Houston and took a bus to her classes. She earned her bachelor’s when she was 18, with honors in physics and mathematics. During her college days, she fenced against men as well as women, winning city and state championships. She qualified for the Summer Olympics in 1944, but World War II led to the cancellation of the games. She qualified again four years later but couldn’t afford to travel to London.

At Harvard, Arianna was rejected by one potential advisor because he didn’t take female PhD students, says Alan Rosenbluth, Arianna’s oldest child and a retired physicist. That was not uncommon. “Women were discouraged every step of the way,” says Margaret W. Rossiter, a Cornell historian of women in science. But Arianna forged ahead, in 1949 becoming just the fifth woman to earn a PhD in physics from Harvard.

She accepted a postdoctoral fellowship funded by the Atomic Energy Commission to study at Stanford University, where she . . .

Continue reading. There’s much more.

It strikes me that Arianna’s recognition of how to express the ideas of Markov chains in a computer algorithm bears a passing resemblance to Zhi Bingyi’s recognition of how to express Chinese characters using the Latin alphabet on a keyboard (as described in the previous post).

Written by Leisureguy

21 July 2021 at 11:29 am
