## Archive for the ‘**Math**’ Category

## Proof Assistant Makes Jump to Big-League Math

Computer-assisted cognition is here. Kevin Hartnett writes in *Quanta*:

Computer proof assistants have been an intriguing subplot in mathematics for years — promising to automate core aspects of the way mathematicians work, but in practice having little effect on the field.

But a new result, completed in early June, has the feel of a rookie’s first hit in the big leagues: At last, a proof assistant has made a real contribution to the leading edge of mathematical research by verifying the correctness of a complicated, modern proof.

“It demonstrates that modern maths can be formalized in a theorem prover,” said Bhavik Mehta, a graduate student at the University of Cambridge who contributed to the work.

The proof in question is by Peter Scholze of the University of Bonn, one of the most widely respected mathematicians in the world. It is just one piece of a larger project called “condensed mathematics” that he and Dustin Clausen of the University of Copenhagen have been working on for several years.

Their goal is to create new foundations for topology, replacing the traditional notion of a topological space — whose examples include the sphere and the doughnut — with more versatile objects that the authors call condensed sets. In this new perspective, topological spaces are thought of as being assembled from infinite points of dust glued together.

That project includes a particularly important, difficult proof that Scholze worked out himself during a consuming week in July 2019. It establishes that an area of math called real functional analysis still works if you replace topological spaces with condensed sets.

Scholze began the proof on a Monday. He worked entirely in his head, barely writing anything down, let alone using a computer. By Thursday afternoon he’d nearly figured it out, save one piece that he just couldn’t get right. He was also feeling the strain of the intense concentration required to hold such a complicated argument in his active memory. So that night he unwound with some friends at a bar. He paid for it the next morning, Friday.

“I was completely hungover,” said Scholze.

But he also knew that he wouldn’t have time to work over the weekend, making Friday his best chance to finish the proof. The thought of losing touch with everything he’d built up in his mind over the past week, then having to start again fresh on Monday, was more than he wanted to consider.

“I didn’t think I’d have the mental capacity to rebuild this in my head again,” said Scholze.

So he powered through and finished the proof. But afterward, he wasn’t certain that what he had done was correct. The reason was more than the hazy circumstances in which he’d cleared the final hurdle. The proof was so complicated Scholze knew it was possible he had missed something.

“It’s some very convoluted thing with many moving parts. It’s hard to know which parts move by how much when you shift one of these parameters,” said Scholze.

Scholze didn’t find time to actually write down the proof until November 2019. A year later he contacted Kevin Buzzard, a mathematician at Imperial College London and a prominent evangelist for a proof assistant program called Lean. Scholze wanted to know whether it would be possible to type his proof into Lean — turning it into lines of code like a software program — so that the program could verify whether it was really true.

Buzzard shared Scholze’s inquiry with a handful of other members of the Lean community including Johan Commelin, a postdoctoral researcher at the University of Freiburg. Commelin had the perfect background for the job — he’d been using Lean for several years and was familiar with condensed mathematics — and he was convinced that verifying Scholze’s proof would do a lot to legitimize the proof assistant’s standing in the mathematical community.

“Being able to collaborate with Peter on such a project and having his name attached to it would be an enormous boost for Lean,” said Commelin.

But he also thought it could take a year or more to do it, which gave him pause. Commelin was worried he might spend all that time verifying the proof and, at the end, the rest of the math world would just shrug.

“I thought that if I spend two years working on this and I come out of my cave and say, ‘This is fine,’ the rest of the world is going to say, ‘Wow, we already knew this, Peter proved it,’” said Commelin. It wouldn’t matter that Scholze himself wasn’t entirely sure.

So Commelin asked Scholze if he’d be willing to make a public statement vouching for the importance of the work. Scholze agreed, and on Dec. 5, 2020, he wrote a post on Buzzard’s blog.

They called it the “Liquid Tensor Experiment,” a nod to mathematical objects involved in the proof called liquid real vector spaces, and to Liquid Tension Experiment, a progressive rock band that Scholze and Commelin enjoy. In the 4,400-word primer, Scholze explained some technical aspects of the result and then added a note testifying in plain language to what he saw as the importance of checking it with a computer.

“I think this may be my most important theorem to date. (It does not really have any applications so far, but I’m sure this will change.)” Scholze wrote. “Better be sure it’s correct…”

Assurance in place, Commelin set to work. After explaining to Lean the mathematical statement whose proof he ultimately wanted the program to check, he brought more mathematicians into the project. They identified a few lemmas — intermediate steps in the proof — that seemed most approachable. They formalized those first, coding them on top of the library of mathematical knowledge that Lean draws on to determine if a given statement is true or not.
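Formalizing a lemma “on top of the library of mathematical knowledge” means restating it in Lean’s language and supplying a proof the program can check step by step. As a minimal illustration (a toy lemma of my own, not a piece of the Liquid Tensor Experiment), a contribution might look like:

```lean
-- Toy example: state a lemma and prove it using a fact
-- already in Lean's mathlib library.
import data.nat.basic

lemma two_mul_eq_add (n : ℕ) : 2 * n = n + n :=
two_mul n
```

Once the file compiles, Lean has verified every inference, and the lemma becomes a building block that larger formalized proofs can draw on.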

Last October, *Quanta* wrote that the collective effort to write mathematics in Lean has the “air of a barn raising.” This project was no different. Commelin would identify discrete parts of the proof and post them to Zulip, a discussion board that serves as a hub for the Lean community. When mathematicians saw a part of the proof that fit their expertise, they’d volunteer to formalize it.

Mehta was one of about a dozen mathematicians who contributed to the work. In May he saw a post from Commelin asking for help formalizing the proof of a statement called Gordan’s lemma, which related to Mehta’s work in the area of combinatorial geometry. He spent a week coding the proof in terms that were consistent with the larger proof the mathematicians were building. It was emblematic, he said, of the way Lean works.

“It’s one big collaboration with a lot of people doing what they’re good at to make a singular monolith,” he said.

As the work proceeded, Scholze was a consistent presence on Zulip, answering questions and explaining points of the proof — a bit like an architect giving directions to builders on a job site. “He was always within reach,” Commelin said.

At the end of May the group finished formalizing the one part of the proof Scholze was most unsure about. Commelin entered the final keystroke at 1:10 a.m. on May 29. Lean compiled the proof, and . . .

## Re-counting the Cognitive History of Numerals

In *The MIT Press Reader*, Philip Laughlin, who acquires books for the MIT Press in the fields of Cognitive Science, Philosophy, Linguistics, and Bioethics, interviews Stephen Chrisomalis, Professor of Anthropology at Wayne State University and author of, among other books, *Reckonings: Numerals, Cognition, and History*.

Those of us who learned arithmetic using pen and paper, working with the ten digits 0–9 and place value, may take for granted that this is the way it’s always been done, or at least the way it ought to be done. But if you think of the amount of time and energy spent in the early school years just to teach place value, you’ll realize that this sort of numeracy is not preordained.

Over the past 5,500 years, more than 100 distinct ways of writing numbers have been developed and used by numerate societies, linguistic anthropologist Stephen Chrisomalis has found. Thousands more ways of speaking numbers, manipulating physical objects, and using human bodies to enumerate are known to exist, or to have existed, he writes in his new book “Reckonings: Numerals, Cognition, and History.” Remarkably, each of the basic structures was invented multiple times independently of one another. In “Reckonings,” Chrisomalis considers how humans past and present have used numerals, reinterpreting historical and archaeological representations of numerical notation and exploring the implications of why we write numbers with figures rather than words. Drawing on, and expanding upon, the enormous cross-cultural and comparative literatures in linguistics, cognitive anthropology, and the history of science that bear on questions of numeracy, he shows that numeracy is a social practice.

Chrisomalis took time out from a busy end to the spring semester to field a few questions about his new book, his spirited defense of Roman numerals, his complicated relationships with mathematicians, and his thoughts on the validity of the Sapir-Whorf Hypothesis.

Philip Laughlin: We’ve worked with a number of linguists and anthropologists over the years, but you are our first author to specialize in written numerical systems. What sparked your interest in this topic? Why are numerals an important area of research?

Stephen Chrisomalis: I first became interested in numerals when I wrote a paper in an undergraduate cognitive anthropology course in the mid-1990s. After moving away from the subject for a couple years, I came back to it when I was looking for a PhD topic along with my advisor, the late Bruce Trigger at McGill. This resulted in my dissertation, which later became my first book, “Numerical Notation: A Comparative History” (Cambridge, 2010). It was an unorthodox project for an anthropology department — neither strictly archaeological nor ethnohistorical nor ethnographic. But that was exactly the sort of creative project that it was possible to do at McGill at that time, and that sadly, given the exigencies of the modern job market, is almost impossible to imagine doing today.

What brought me to numerical notation as a dissertation subject is much of what still appeals to me about it now. We have evidence from over 100 different systems used across every inhabited continent over 5,000 years, including all the world’s literate traditions. Numbers are a ubiquitous domain of human existence, and written numerals are virtually everywhere that there is writing. While, of course, the historical and archaeological records are partial (which is in turn both exciting and frustrating), understanding their history and cross-cultural transmission is a tractable problem. We can tell, roughly, when and where they originate and how they relate to one another.

Also, every user of a numerical notation system is also a speaker of one or more languages, which lets us ask great questions comparing number words to numerical notation and to show how they interact. These questions can be as simple as “Do people say ‘two thousand twenty one’ or ‘twenty twenty one’?” and as big as “Were numbers first visual marks or spoken words?” To me, as a linguist and an anthropologist, that’s very attractive. Because there is a large literature on numerical cognition, the comparative, historical data I bring to the table is useful for testing and expanding our knowledge in that interdisciplinary area.

PL: You had the cover image and title for this book in your head for years. Can you explain the significance of the watch and why you chose the title “Reckonings” in the first place? What were you trying to get across to potential readers with that evocative word?

SC: The title ‘Reckonings’ invokes the triple meaning of the word ‘reckon’ — to calculate, to think, and to judge — which parallels the three parts of the subtitle: “Numerals, Cognition, and History.” Reckoning is not mathematics, in its technical, disciplinary sense, but it reflects the everyday practices of working with and manipulating numbers. Then, in English and in other languages, we extend the verb for calculation to thinking in general — to reckon thus involves the more general cognitive questions I hope I’ve addressed. Finally, we come to reckoning as judgement — every numerical notation faces its own reckoning as users decide whether to adopt, transmit, and eventually, abandon it. As I spend a lot of time talking about the obsolescence of numeral systems, most notably but not limited to the Roman numerals, I wanted to echo this decision-making process of judgement by which users decide to abandon one notation in favor of another. “Reckonings” signals that the book might be about arithmetic — but it’s about a lot more than that.

The cover image of the book is a watch designed by the French watchmaker Jean-Antoine Lépine in 1788, now held at the British Museum (BM 1958,1201.289). Lépine was one of the first horologists to consistently use Western (commonly called Arabic) numerals instead of Roman numerals for hour markers, but in the 1780s he made a number of watches like this one, where he instead playfully mixed the two systems. The hybridity on this sort of artifact is visually striking and memorable to the viewer, both then and now. But actually, it isn’t as weird as it seems; we combine numerical representations all the time, like when we write something like “1.2 million” instead of “1,200,000.” Unlike the Roman numerals alone, which would be visually ‘unbalanced’ on a watch, this hybrid system expresses every number from 1 through 12 in no more than two digits. To me it embodies the passage of time in material form and the replacement of the Roman numerals. By the 1780s, they had been replaced for most purposes, but watch and clock faces are one of the places where, even today, they’re pretty common. As a sort of metonym for this historical process, the Lépine watch highlights that the decline and fall of the Roman numerals was not a slow, steady, predictable replacement, but one with many disjunctures.
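The point about visual balance can be made concrete. A short sketch (my own illustration, not from the interview) converts each hour marker to Roman numerals and measures how uneven the marker widths are compared with one- or two-digit Western numerals:

```python
def to_roman(n):
    """Convert a positive integer (here, the hours 1-12) to Roman numerals."""
    numerals = [(10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]
    out = []
    for value, symbol in numerals:
        while n >= value:
            out.append(symbol)
            n -= value
    return "".join(out)

# Symbol counts for the twelve hour markers.
lengths = {n: len(to_roman(n)) for n in range(1, 13)}

# Roman hour markers range from 1 symbol (I, V, X) to 4 (VIII),
# while Western numerals for 1 through 12 never exceed 2 digits.
```

The spread from one symbol to four is the ‘unbalance’ Chrisomalis describes; the hybrid Lépine scheme caps every marker at two symbols.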

PL: At the book launch, you talked a bit about the future of number systems, but with the caveat that you are not a “Futurologist.” So I’ll ask you to put on a historian’s hat instead: What kind of cultural changes are necessary for a society to switch from one number system to another? It seems to me that significant changes would have to happen at least at the political and economic level for one numerical system to supersede another, right?

SC: One of the key arguments in “Reckonings” is that . . .

## Genetic Memory: How We Know Things We Never Learned

Darold Treffert published an interesting article in *Scientific American* in January 2015. It begins:

I met my first savant 52 years ago and have been intrigued with that remarkable condition ever since. One of the most striking and consistent things in the many savants I have seen is that they clearly know things they never learned.

Leslie Lemke is a musical virtuoso even though he has never had a music lesson in his life. Like “Blind Tom” Wiggins a century before him, his musical genius erupted so early and spontaneously as an infant that it could not possibly have been learned. It came ‘factory installed’. In both cases professional musicians witnessed and confirmed that Lemke and Wiggins somehow, even in the absence of formal training, had innate access to what can be called “the rules” or vast syntax of music.

Alonzo Clemons has never had an art lesson in his life. As an infant, after a head injury, he began to sculpt with whatever was handy — Crisco or whatever — and now is a celebrated sculptor who can mold a perfect specimen of any animal with clay in an hour or less after only a single glance at the animal itself — every muscle and tendon perfectly positioned. He has had no formal training.

To explain the savant, who has innate access to the vast syntax and rules of art, mathematics, music and even language, in the absence of any formal training and in the presence of major disability, “genetic memory,” it seems to me, must exist along with the more commonly recognized cognitive/semantic and procedural/habit memory circuits.

Genetic memory, simply put, is complex abilities and actual sophisticated knowledge inherited along with other more typical and commonly accepted physical and behavioral characteristics. In savants the music, art or mathematical “chip” comes factory installed. In addition to the examples mentioned above, I describe others in my book, *Islands of Genius: The Bountiful Mind of the Autistic, Acquired and Sudden Savant*.

Genetic memory is not an entirely new concept. In 1940, A.A. Brill quoted Dr. William Carpenter who, in comparing math prodigy Zerah Colburn’s calculating powers to Mozart’s mastery of musical composition, wrote the following:

In each of the foregoing cases, then, we have a peculiar example of the possession of an extraordinary congenital aptitude for certain mental activity, which showed itself at so early a period as to exclude the notion that it could have been acquired by the experience of the individual. To such congenital gifts we give the name of intuitions: it can scarcely be questioned that like the instincts of the lower animals, they are the expressions of constitutional tendencies embodied in the organism of the individuals who manifest them.

Carl Jung used the term “collective unconscious” to define his even broader concept of inherited traits, intuitions and collective wisdom of the past.

Wilder Penfield, in his pioneering 1978 book *Mystery of the Mind*, also referred to three types of memory. “Animals,” he wrote, “particularly show evidence of what might be called racial memory” (this would be the equivalent of genetic memory). He lists the second type of memory as that associated with “conditioned reflexes” and a third type as “experiential.” The two latter types would be consistent with the terminology commonly applied to “habit or procedural” memory and “cognitive or semantic” memory.

In his 1998 book *The Mind’s Past*, Michael Gazzaniga wrote:

The baby does not learn trigonometry, but knows it; does not learn how to distinguish figure from ground, but knows it; does not need to learn, but knows, that when one object with mass hits another, it will move the object … The vast human cerebral cortex is chock full of specialized systems ready, willing and able to be used for specific tasks. Moreover, the brain is built under tight genetic control … As soon as the brain is built, it starts to express what it knows, what it comes with from the factory. And the brain comes loaded. The number of special devices that are in place and active is staggering. Everything from perceptual phenomena to intuitive physics to social exchange rules comes with the brain. These things are not learned; they are innately structured. Each device solves a different problem … the multitude of devices we have for doing what we do are factory installed; by the time we know about an action, the devices have already performed it.

Steven Pinker’s 2003 book, *The Blank Slate: The Modern Denial of Human Nature*, refutes the “blank slate” theories of human development. Brian Butterworth, in his 1999 book *What Counts: How Every Brain Is Hardwired for Math*, points out that babies have many specialized innate abilities, including numerical ones that he attributes to a “number module” encoded in the human genome from ancestors 30,000 years ago.

Marshall Nirenberg, from the National Heart Institute, provided insight into the actual DNA/RNA mechanics of this innate knowledge in an article titled “Genetic Memory,” published in 1968 in *JAMA*.

Whether called genetic, ancestral or racial memory, or intuitions or congenital gifts, the concept of a genetic transmission of sophisticated knowledge well beyond instincts, is necessary to explain how prodigious savants can know things they never learned.

We tend to think of ourselves as being born with a magnificent and intricate piece of organic machinery (“hardware”) we call the brain, along with a massive but blank hard drive (memory). What we become, it is commonly believed, is an accumulation and culmination of our continuous learning and life experiences, which are added one by one to memory. But the prodigious savant apparently comes already programmed with a vast amount of innate skill and knowledge in his or her area of expertise — factory-installed “software,” one might say — which accounts for the extraordinary abilities over which the savant innately shows mastery in the face of often massive cognitive and other learning handicaps. It is an area of memory function worthy of much more exploration and study.

Indeed recent cases of “acquired savants” or “accidental genius” have convinced me that we all have such factory-installed software. I discussed some of those cases in detail in . . .

## Arianna Rosenbluth Changed the World Before Leaving Science Behind

By and large, society during my generation treated women badly (and by “society” I mean “men,” but also organizations, overwhelmingly managed by men, and social conventions). This article provides one example, but there are many others. One example often offered is how Crick and Watson used Rosalind Franklin’s findings but failed to credit her. That particular story does not correspond to the facts, but there are many others — for example, how long Wally Funk had to wait before she was finally able to take a suborbital flight yesterday.

Here’s an example that is true, which Anastasia Carrier recounts in the *Harvard Gazette*. She writes:

A few years ago, Jean Rosenbluth was visiting her mother at a nursing home in Pasadena. The occasion was a holiday party, and Jean and her husband were seated with her mother and another couple. It came up in conversation that the man sharing the table was a history of science professor, specializing in physics.

“Oh, my mother was a physicist,” Jean said as she introduced her mother. “This is Arianna Rosenbluth.”

The professor was stunned. “Wait, *the* Arianna Rosenbluth?” Arianna smiled shyly and kept eating her lemon meringue pie.

Arianna Wright Rosenbluth, who received a master’s degree in physics from Radcliffe College in 1947, was one of five scientists who created the revolutionary Metropolis algorithm — the first practical implementation of what are now known as the Markov Chain Monte Carlo methods, go-to tools for solving large, complex mathematical and engineering problems.

Over the years, these methods have been used to simulate both quantum physics and markets, predict genetic predisposition to certain illnesses, forecast the outcomes of political conflicts, and model the spread of infectious diseases. It was Rosenbluth who found a way to get early computers to use the Markov Chain method, creating a blueprint that others followed.
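The core of the Metropolis algorithm is short enough to sketch. Below is a minimal modern Python illustration of the idea (my own sketch, not Rosenbluth’s MANIAC I implementation): propose a random move, then accept or reject it based on the ratio of target densities, so the distribution’s normalizing constant is never needed.

```python
import math
import random

def metropolis(log_density, x0, n_steps, step_size=1.0, seed=0):
    """Draw a Markov chain whose long-run distribution matches the
    (unnormalized) target density, via the Metropolis rule."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_steps):
        # Symmetric proposal: a uniform random step around x.
        proposal = x + rng.uniform(-step_size, step_size)
        # Accept with probability min(1, p(proposal) / p(x)).
        # Only the ratio matters, so unnormalized densities suffice.
        log_alpha = log_density(proposal) - log_density(x)
        if rng.random() < math.exp(min(0.0, log_alpha)):
            x = proposal
        samples.append(x)
    return samples

# Target: a standard normal, specified only up to a constant.
samples = metropolis(lambda x: -0.5 * x * x, x0=0.0, n_steps=20000)
mean = sum(samples) / len(samples)
```

The 1953 paper applied this accept/reject rule to configurations of interacting particles; the same skeleton underlies the applications Lin describes, from epidemiology to market simulation.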

“Arianna’s impact would last for a long time,” says Xihong Lin, a professor of biostatistics at the Harvard T.H. Chan School of Public Health, who used Markov Chain Monte Carlo methods to analyze a large set of COVID-19 data from Wuhan and to calculate the infectiousness of the virus. The methods have also helped specialists evaluate the effectiveness of quarantine and stay-at-home measures.

“Without Rosenbluth, I don’t think the field of Markov Chain Monte Carlo would go that far,” says Lin, referring to the role of the Radcliffe-trained scientist in enabling wide use of the tool across disciplines. “Implementation is critically important. That’s why her contribution is a landmark and really should be emphasized—should be honored.”

The paper that Rosenbluth coauthored—along with her then-husband, Marshall Rosenbluth, Edward and Augusta Teller, and Nicholas Metropolis—was published in 1953, but the algorithm’s origin story remained a mystery for five decades. In 2003, Marshall shared his memory of the achievement during a conference celebrating its 50th anniversary. The researchers developed the tool to illuminate how atoms rearranged themselves as solids melted, he said. Marshall did most of the conceptual work, and Arianna translated their idea into a computer algorithm—a task that required a fundamental understanding of physics and computer science, and also creativity.

By all accounts, Rosenbluth, who died of COVID-19 complications in December at age 93, was brilliant. She earned her PhD in physics at Harvard at 21 and in her short career worked under two physicists who went on to earn Nobel Prizes. And yet she effectively quit science in her late 20s, leaving her job at the Los Alamos Scientific Laboratory to be a stay-at-home mother. She rarely spoke about her time in the lab—although she sometimes mentioned to her children how irritating it was that her ideas were overlooked because she was a woman trying to make it in a male-dominated field. Other times, she would lovingly describe MANIAC I—the Los Alamos machine that she used for computing the Metropolis algorithm.

“She was ahead of her time,” says Pierre E. Jacob, the John L. Loeb Associate Professor of the Natural Sciences and a professor of statistics in the Harvard Faculty of Arts and Sciences, whose work involves Markov chains and probability modeling. In his syllabus, he renamed the Metropolis algorithm the Rosenbluth algorithm after reading about Arianna’s death.

“Better late than never,” he says.

**Star on the Rise**

Growing up in Houston, Arianna Wright was a mystery to her parents.

“Her mom and dad had this genius child, and they kind of didn’t know what to do with her,” says Mary Rosenbluth, one of Arianna’s four children. Leffie (Woods) Wright was confused by her quiet and introspective daughter, who didn’t care for fashion and rules but loved reading, especially fantasy books like L. Frank Baum’s *Wizard of Oz* series. Mary recalls a newspaper article among her mother’s things that described Arianna as a child genius.

“It kind of struck me,” she says. “Here’s this girl growing up in suburban Houston, and she was just so different from everybody else.”

Arianna received a full-ride scholarship to Rice Institute (now Rice University) in Houston and took a bus to her classes. She earned her bachelor’s when she was 18, with honors in physics and mathematics. During her college days, she fenced against men as well as women, winning city and state championships. She qualified for the Summer Olympics in 1944, but World War II led to the cancellation of the games. She qualified again four years later but couldn’t afford to travel to London.

At Harvard, Arianna was rejected by one potential advisor because he didn’t take female PhD students, says Alan Rosenbluth, Arianna’s oldest child and a retired physicist. That was not uncommon. “Women were discouraged every step of the way,” says Margaret W. Rossiter, a Cornell historian of women in science. But Arianna forged ahead, in 1949 becoming just the fifth woman to earn a PhD in physics from Harvard.

She accepted a postdoctoral fellowship funded by the Atomic Energy Commission to study at Stanford University, where she . . .

Continue reading. There’s much more.

It strikes me that Arianna’s recognition of how to express the ideas of Markov chains in computer code bears a passing resemblance to Zhi Bingyi’s recognition of how to express Chinese characters using the Latin alphabet on a keyboard (described in the previous post).

## The Big Bang: What We Know and How We Know It

*4 Gravitons* has an interesting article about the Big Bang. The article begins:

When most people think of the Big Bang, they imagine a single moment: a whole universe emerging from nothing. That’s not really how it worked, though. The Big Bang refers not to one event, but to a whole scientific theory. Using Einstein’s equations and some simplifying assumptions, we physicists can lay out a timeline for the universe’s earliest history. Different parts of this timeline have different evidence: some are meticulously tested, others we even expect to be wrong! It’s worth talking through this timeline and discussing what we know about each piece, and how we know it.

**We can see** surprisingly far back in time. As we look out into the universe, we see each star as it was when the light we see left it: longer ago the further the star is from us. Looking back, we see changes in the types of stars and galaxies: stars formed without the metals that later stars produced, galaxies made of those early stars. We see the universe become denser and hotter, until eventually we reach the last thing we can see: the cosmic microwave background, a faint light that fills our view in every direction. This light represents a change in the universe, the emergence of the first atoms. Before this, there were ions: free nuclei and electrons, forming a hot plasma. That plasma constantly emitted and absorbed light. As the universe cooled, the ions merged into atoms, and light was free to travel. Because of this, we cannot *see* back beyond this point. Our model gives detailed predictions for this curtain of light: its temperature, and even the ways it varies in intensity from place to place, which in turn let us hone our model further.

In principle, we could “see” a bit further. Light isn’t the only thing that travels freely through the universe. Neutrinos are almost massless, and pass through almost everything. Like the cosmic microwave background, the universe should have a cosmic neutrino background. This would come from much earlier, from an era when the universe was so dense that neutrinos regularly interacted with other matter. We haven’t detected this neutrino background yet, but future experiments might. Gravitational waves meanwhile, can also pass through almost any obstacle. There should be gravitational wave backgrounds as well, from a variety of eras in the early universe. Once again these haven’t been detected yet, but more powerful gravitational wave telescopes may yet see them.

**We have indirect evidence** a bit further back than we can see things directly. In the heat of the early universe the first protons and neutrons were merged via nuclear fusion, becoming the first atomic nuclei: isotopes of hydrogen, helium, and lithium. Our model lets us predict the proportions of these, how much helium and lithium per hydrogen atom. We can then compare this to the oldest stars we see, and see that the proportions are right. In this way, we know something about the universe from before we can “see” it.

**We get surprised** when we look at the universe on large scales, and compare widely separated regions. We find those regions are surprisingly similar, more than we would expect from randomness and the physics we know. Physicists have proposed different explanations for this. The most popular, cosmic inflation, suggests that the universe expanded very rapidly, accelerating so that a small region of similar matter was blown up much larger than the ordinary Big Bang model would have, projecting those similarities across the sky. While many think this proposal fits the data best, we still aren’t sure it’s the right one: there are alternate proposals, and it’s even controversial whether we should be surprised by the large-scale similarity in the first place.

**We understand, in principle, how matter can come from “nothing.”** This is sometimes presented as the most mysterious part of the Big Bang, the idea that matter could spontaneously emerge from an “empty” universe. But to a physicist, this isn’t very mysterious. Matter isn’t actually conserved, mass is just energy you haven’t met yet. Deep down, the universe is just a bunch of rippling quantum fields, with different ones more or less active at different times. Space-time itself is just another field, the gravitational field. When people say that in the Big Bang matter emerged from nothing, all they mean is that energy moved from the gravitational field to fields like the electron and quark, giving rise to particles. As we wind the model back, we can pretty well understand how this could happen.

**If we extrapolate,** winding Einstein’s equations back all the way, we reach . . .

See also Kevin Drum’s post “Here’s the theory of relativity in 500 words,” in which he covers both Special Relativity (time and space) and General Relativity (gravitation).

## Re-counting the Cognitive History of Numerals

In *The MIT Press Reader* Philip Laughlin interviews Stephen Chrisomalis, author of *Reckonings: Numerals, Cognition, and History*:

Those of us who learned arithmetic using pen and paper, working with the ten digits 0–9 and place value, may take for granted that this is the way it’s always been done, or at least the way it ought to be done. But if you think of the amount of time and energy spent in the early school years just to teach place value, you’ll realize that this sort of numeracy is not preordained.

Over the past 5,500 years, more than 100 distinct ways of writing numbers have been developed and used by numerate societies, linguistic anthropologist Stephen Chrisomalis has found. Thousands more ways of speaking numbers, manipulating physical objects, and using human bodies to enumerate are known to exist, or to have existed, he writes in his new book “Reckonings: Numerals, Cognition, and History.” Remarkably, each of the basic structures was invented multiple times independently of one another. In “Reckonings,” Chrisomalis considers how humans past and present have used numerals, reinterpreting historical and archaeological representations of numerical notation and exploring the implications of why we write numbers with figures rather than words. Drawing on, and expanding upon, the enormous cross-cultural and comparative literatures in linguistics, cognitive anthropology, and the history of science that bear on questions of numeracy, he shows that numeracy is a social practice.

Chrisomalis took time out from a busy end to the spring semester to field a few questions about his new book, his spirited defense of Roman numerals, his complicated relationships with mathematicians, and his thoughts on the validity of the Sapir-Whorf Hypothesis.

Philip Laughlin: We’ve worked with a number of linguists and anthropologists over the years but you are our first author to specialize in written numerical systems. What sparked your interest in this topic? Why are numerals an important area of research?

Stephen Chrisomalis: I first became interested in numerals when I wrote a paper in an undergraduate cognitive anthropology course in the mid-1990s. After moving away from the subject for a couple of years, I came back to it when I was looking for a PhD topic along with my advisor, the late Bruce Trigger at McGill. This resulted in my dissertation, which later became my first book, “Numerical Notation: A Comparative History” (Cambridge, 2010). It was an unorthodox project for an anthropology department — neither strictly archaeological nor ethnohistorical nor ethnographic. But that was exactly the sort of creative project that it was possible to do at McGill at that time, and that sadly, given the exigencies of the modern job market, is almost impossible to imagine doing today.

What brought me to numerical notation as a dissertation subject is much of what still appeals to me about it now. We have evidence from over 100 different systems used across every inhabited continent over 5,000 years, including all the world’s literate traditions. Numbers are a ubiquitous domain of human existence, and written numerals are virtually everywhere that there is writing. While, of course, the historical and archaeological records are partial (which is in turn both exciting and frustrating), understanding their history and cross-cultural transmission is a tractable problem. We can tell, roughly, when and where they originate and how they relate to one another.

Also, every user of a numerical notation system is also a speaker of one or more languages, which lets us ask great questions comparing number words to numerical notation and to show how they interact. These questions can be as simple as “Do people say ‘two thousand twenty one’ or ‘twenty twenty one’?” and as big as “Were numbers first visual marks or spoken words?” As a linguist and an anthropologist, that’s very attractive. Because there is a significant and large literature on numerical cognition, the comparative, historical data I bring to the table is useful for testing and expanding on our knowledge in that interdisciplinary area.

PL: You had the cover image and title for this book in your head for years. Can you explain the significance of the watch and why you chose the title “Reckonings” in the first place? What were you trying to get across to potential readers with that evocative word?

SC: The title “Reckonings” invokes the triple meaning of the word “reckon” — to calculate, to think, and to judge — which parallels the three parts of the subtitle: “Numerals, Cognition, and History.” Reckoning is not mathematics in its technical, disciplinary sense, but it reflects the everyday practice of working with and manipulating numbers. Then, in English and in other languages, we extend the verb for calculation to thinking in general — to reckon thus involves the more general cognitive questions I hope I’ve addressed. Finally, we come to reckoning as judgement — every numerical notation faces its own reckoning as users decide whether to adopt, transmit, and eventually abandon it. As I spend a lot of time talking about the obsolescence of numeral systems, most notably but not limited to the Roman numerals, I wanted to echo this decision-making process of judgement by which users decide to abandon one notation in favor of another. “Reckonings” signals that the book might be about arithmetic — but it’s about a lot more than that.

The cover image of the book is a watch designed by the French watchmaker Jean-Antoine Lépine in 1788, now held at the British Museum (BM 1958,1201.289). Lépine was one of the first horologists to consistently use Western (commonly called Arabic) numerals instead of Roman numerals for hour markers, but in the 1780s he made a number of watches like this one, where he instead playfully mixed the two systems. The hybridity of this sort of artifact is visually striking and memorable to the viewer, both then and now. But actually, it isn’t as weird as it seems; we combine numerical representations all the time, as when we write “1.2 million” instead of “1,200,000.” Unlike the Roman numerals alone, which would be visually ‘unbalanced’ on a watch, this hybrid system expresses every number from 1 through 12 in no more than two digits.
To me it embodies the passage of time in material form and the replacement of the Roman numerals. By the 1780s, they had been replaced for most purposes, but watch and clock faces are one of the places where, even today, they’re pretty common. As a sort of metonym for this historical process, the Lépine watch highlights that the decline and fall of the Roman numerals was not a slow, steady, predictable replacement, but one with many disjunctures.
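The visual-balance point is easy to check with a few lines. A minimal sketch (the numeral list and the comparison are my own illustration, not from the book; it uses subtractive forms like IV, though many dials use the even wider IIII):

```python
# Roman numerals for the hours 1-12, as they might appear on a dial.
ROMAN = ["I", "II", "III", "IV", "V", "VI", "VII", "VIII",
         "IX", "X", "XI", "XII"]

# Compare the character width of each hour marker in pure Roman
# notation versus Western (Arabic) digits.
for hour, numeral in enumerate(ROMAN, start=1):
    western = str(hour)
    print(f"{hour:>2}  roman={numeral:<5} ({len(numeral)} chars)  "
          f"western={western:<2} ({len(western)} chars)")

# Roman markers range from 1 to 4 characters wide (VIII is widest),
# while Western markers never exceed 2 -- the imbalance the text mentions.
print("widest roman:", max(len(n) for n in ROMAN))
print("widest western:", max(len(str(h)) for h in range(1, 13)))
```

The uneven widths (I versus VIII) are what make an all-Roman dial look lopsided, and why a mixed system that caps every marker at two characters reads as balanced.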

PL: At the book launch, you talked a bit about the future of number systems, but with the caveat that you are not a “Futurologist.” So I’ll ask you to put on a historian’s hat instead: What kind of cultural changes are necessary for a society to switch from one number system to another? It seems to me that significant changes would have to happen at least at the political and economic level for one numerical system to supersede another, right?

SC: One of the key arguments in “Reckonings” is that . . .

Continue reading. There’s much more.

## Mathematicians Prove 2D Version of Quantum Gravity Works

Charlie Wood writes in *Quanta*:

Alexander Polyakov, a theoretical physicist now at Princeton University, caught a glimpse of the future of quantum theory in 1981. A range of mysteries, from the wiggling of strings to the binding of quarks into protons, demanded a new mathematical tool whose silhouette he could just make out.

“There are methods and formulae in science which serve as master keys to many apparently different problems,” he wrote in the introduction to a now famous four-page letter in *Physics Letters B*. “At the present time we have to develop an art of handling sums over random surfaces.”

Polyakov’s proposal proved powerful. In his paper he sketched out a formula that roughly described how to calculate averages of a wildly chaotic type of surface, the “Liouville field.” His work brought physicists into a new mathematical arena, one essential for unlocking the behavior of theoretical objects called strings and building a simplified model of quantum gravity.

Years of toil would lead Polyakov to breakthrough solutions for other theories in physics, but he never fully understood the mathematics behind the Liouville field.

Over the last seven years, however, a group of mathematicians has done what many researchers thought impossible. In a trilogy of landmark publications, they have recast Polyakov’s formula using fully rigorous mathematical language and proved that the Liouville field flawlessly models the phenomena Polyakov thought it would.

“It took us 40 years in math to make sense of four pages,” said Vincent Vargas, a mathematician at the French National Center for Scientific Research and co-author of the research with Rémi Rhodes of Aix-Marseille University, Antti Kupiainen of the University of Helsinki, François David of the French National Center for Scientific Research, and Colin Guillarmou of Paris-Saclay University.

The three papers forge a bridge between the pristine world of mathematics and the messy reality of physics — and they do so by breaking new ground in the mathematical field of probability theory. The work also touches on philosophical questions regarding the objects that take center stage in the leading theories of fundamental physics: quantum fields.

“This is a masterpiece in mathematical physics,” said Xin Sun, a mathematician at the University of Pennsylvania.

## Infinite Fields

In physics today, the main actors in the most successful theories are fields — objects that fill space, taking on different values from place to place.

In classical physics, for example, a single field tells you everything about how a force pushes objects around. Take Earth’s magnetic field: The twitches of a compass needle reveal the field’s influence (its strength and direction) at every point on the planet.

Fields are central to quantum physics, too. However, the situation here is more complicated due to the deep randomness of quantum theory. From the quantum perspective, Earth doesn’t generate one magnetic field, but rather an infinite number of different ones. Some look almost like the field we observe in classical physics, but others are wildly different.

But physicists still want to make predictions — predictions that ideally match, in this case, what a mountaineer reads on a compass. Assimilating the infinite forms of a quantum field into a single prediction is the formidable task of a “quantum field theory,” or QFT. This is a model of how one or more quantum fields, each with their infinite variations, act and interact.

Driven by immense experimental support, QFTs have become the basic language of particle physics. The Standard Model is one such QFT, depicting fundamental particles like electrons as fuzzy bumps that emerge from an infinitude of electron fields. It has passed every experimental test to date (although various groups may be on the verge of finding the first holes).

Physicists play with many different QFTs. Some, like the Standard Model, aspire to model real particles moving through the four dimensions of our universe (three spatial dimensions plus one dimension of time). Others describe exotic particles in strange universes, from two-dimensional flatlands to six-dimensional uber-worlds. Their connection to reality is remote, but physicists study them in the hopes of gaining insights they can carry back into our own world.

Polyakov’s Liouville field theory is one such example.

## Gravity’s Field

The Liouville field, which is based on an equation from complex analysis developed in the 1800s by the French mathematician Joseph Liouville, describes a completely random two-dimensional surface — that is, a surface, like Earth’s crust, but one in which the height of every point is chosen randomly. Such a planet would erupt with mountain ranges of infinitely tall peaks, each assigned by rolling a die with infinite faces.
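The notion of a surface whose height at each point is rolled at random can be made concrete in a few lines. This is only a toy (independent Gaussian heights on a small grid, my own illustration), not the Liouville field itself, whose randomness is correlated and far wilder:

```python
import random

random.seed(0)

N = 8  # grid resolution

# A toy "random surface": the height at each grid point is an
# independent standard normal draw, so every run produces a
# different jagged landscape.
surface = [[random.gauss(0.0, 1.0) for _ in range(N)] for _ in range(N)]

heights = [h for row in surface for h in row]
mean = sum(heights) / len(heights)
print(f"{N}x{N} random surface, mean height {mean:+.3f}")
```

Averaging over all such landscapes, rather than inspecting any single one, is the kind of calculation Polyakov’s formula was meant to organize.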

Such an object might not seem like an informative model for physics, but randomness is not devoid of patterns. The bell curve, for example, tells you how likely you are to randomly pass a seven-foot basketball player on the street. Similarly, bulbous clouds and crinkly coastlines follow random patterns, but it’s nevertheless possible to discern consistent relationships between their large-scale and small-scale features.

Liouville theory can be used to identify patterns in the endless landscape of all possible random, jagged surfaces. Polyakov realized this chaotic topography was essential for modeling strings, which trace out surfaces as they move. The theory has also been applied to describe quantum gravity in a two-dimensional world. Einstein defined gravity as space-time’s curvature, but translating his description into the language of quantum field theory creates an infinite number of space-times — much as the Earth produces an infinite collection of magnetic fields. Liouville theory packages all those surfaces together into one object. It gives physicists the tools to measure the curvature — and hence, gravitation — at every location on a random 2D surface.

“Quantum gravity basically means random geometry, because quantum means random and gravity means geometry,” said Sun.

Polyakov’s first step in exploring the world of random surfaces was to write down an expression defining the odds of finding a particular . . .

## A Random Walk through the English Language

Jordan Ellenberg writes in *Scientific American*:

Here’s a game Claude Shannon, the founder of information theory, invented in 1948. He was trying to model the English language as a random process. Go to your bookshelf, pick up a random book, open it and point to a random spot on the page, and mark the first two letters you see. Say they’re I and N. Write down those two letters on your page.

Now, take another random book off the shelf and look through it until you find the letters I and N in succession. Whatever the character following “IN” is—say, for instance, it’s a space—that’s the next letter of your book. And now you take down yet another book and look for an N followed by a space, and once you find one, mark down what character comes next. Repeat until you have a paragraph:

“IN NO IST LAT WHEY CRATICT FROURE BIRS GROCID PONDENOME OF DEMONSTURES OF THE REPTAGIN IS REGOACTIONA OF CRE”

That isn’t English, but it kind of *looks* like English.

Shannon was interested in the “entropy” of the English language, a measure, in his new framework, of how much information a string of English text contains. The Shannon game is a Markov chain; that is, it’s a random process where the next step you take depends only on the current state of the process. Once you’re at LA, the “IN NO IST” doesn’t matter; the chance that the next letter is, say, a B is the probability that a randomly chosen instance of “LA” in your library is followed by a B.
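Shannon’s bookshelf game is easy to mechanize. A minimal sketch, with an invented miniature “library” standing in for real books (the corpus and function names are my own, for illustration): record which characters follow each two-letter window, then walk the chain:

```python
import random
from collections import defaultdict

def build_chain(text, k=2):
    """Map each k-character window to the list of characters that follow it."""
    chain = defaultdict(list)
    for i in range(len(text) - k):
        chain[text[i:i + k]].append(text[i + k])
    return chain

def generate(chain, seed, length, rng):
    """Walk the chain: each next character depends only on the last two."""
    out = seed
    while len(out) < length:
        followers = chain.get(out[-2:])
        if not followers:   # window never seen in the corpus: stop early
            break
        out += rng.choice(followers)
    return out

# A tiny stand-in corpus; Shannon used whole bookshelves.
corpus = ("IN NO EVENT SHALL THE AUTHORS BE LIABLE IN AN ACTION "
          "IN THE NATION IN QUESTION")
chain = build_chain(corpus)
print(generate(chain, "IN", 40, random.Random(1)))
```

Every three-letter window of the output occurs somewhere in the corpus, which is exactly why the nonsense “kind of looks like” its source.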

And as the name suggests, the method wasn’t original to him; it was almost a half-century older, and it came from, of all things, a vicious mathematical/theological beef in late-czarist Russian math.

There’s almost nothing I think of as more inherently intellectually sterile than verbal warfare between true religious believers and movement atheists. And yet, this one time at least, it led to a major mathematical advance, whose echoes have been bouncing around ever since. One main player, in Moscow, was Pavel Alekseevich Nekrasov, who had originally trained as an Orthodox theologian before turning to mathematics. His opposite number, in St. Petersburg, was his contemporary Andrei Andreyevich Markov, an atheist and a bitter enemy of the church. He wrote a lot of angry letters to the newspapers on social matters and was widely known as Neistovyj Andrei, “Andrei the Furious.”

The details are a bit much to go into here, but the gist is this: Nekrasov thought he had found a mathematical proof of free will, ratifying the beliefs of the church. To Markov, this was mystical nonsense. Worse, it was mystical nonsense wearing mathematical clothes. He invented the Markov chain as an example of random behavior that could be generated purely mechanically, but which displayed the same features Nekrasov thought guaranteed free will.

A simple example of a Markov chain: a spider walking on a triangle with corners labeled 1, 2, 3. At each tick of the clock, the spider moves from its present perch to one of the other two corners it’s connected to, chosen at random. So, the spider’s path would be a string of numbers.

1, 2, 1, 3, 2, 1, 2, 3, 2, 3, 2, 1 …
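The spider’s walk can be simulated in a few lines (a sketch; corners numbered 1, 2, 3 as in the text):

```python
import random

def spider_walk(steps, rng):
    """Simulate a spider on a triangle with corners 1, 2, 3.

    At each tick of the clock it moves from its current corner to one of
    the other two, chosen at random -- a three-state Markov chain.
    """
    corner = 1
    path = [corner]
    for _ in range(steps):
        corner = rng.choice([c for c in (1, 2, 3) if c != corner])
        path.append(corner)
    return path

print(spider_walk(11, random.Random(42)))
```

No step ever repeats the previous corner, but beyond that single constraint the path is pure chance, and that constraint is all the “memory” the chain has.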

Markov started with abstract examples like this, but later (perhaps inspiring Shannon?) applied this idea to strings of text, among them Alexander Pushkin’s poem *Eugene Onegin*. Markov thought of the poem, for the sake of math, as a string of consonants and vowels, which he laboriously cataloged by hand. Letters after consonants are 66.3 percent vowels and 33.7 percent consonants, while letters following vowels are only 12.8 percent vowels and 87.2 percent consonants.

So, you can produce “fake Pushkin” just as Shannon produced fake English; if the current letter is a vowel, the next letter is a vowel with probability 12.8 percent, and if the current letter is a consonant, the next one is a vowel with probability 66.3 percent. The results are not going to be very poetic; but, Markov discovered, they can be distinguished from the Markovized output of other Russian writers. Something of their style is captured by the chain.
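Those two percentages also pin down the chain’s long-run behavior. A small sketch (my own arithmetic, using only the transition figures quoted above) solves for the stationary vowel fraction and checks it by simulation:

```python
import random

# Transition probabilities quoted in the text (Markov's Onegin counts):
P_VOWEL_AFTER_VOWEL = 0.128      # a letter following a vowel is a vowel
P_VOWEL_AFTER_CONSONANT = 0.663  # a letter following a consonant is a vowel

# The stationary vowel fraction v solves  v = 0.128*v + 0.663*(1 - v).
v = P_VOWEL_AFTER_CONSONANT / (1 - P_VOWEL_AFTER_VOWEL + P_VOWEL_AFTER_CONSONANT)
print(f"long-run vowel fraction: {v:.3f}")

# Sanity check: run the two-state chain and count vowels.
rng = random.Random(0)
state, vowels, N = "c", 0, 200_000
for _ in range(N):
    p = P_VOWEL_AFTER_VOWEL if state == "v" else P_VOWEL_AFTER_CONSONANT
    state = "v" if rng.random() < p else "c"
    vowels += state == "v"
print(f"simulated vowel fraction: {vowels / N:.3f}")
```

The balance equation forces a long-run vowel fraction of about 0.432, whatever letter the chain starts from — that fixed ratio is part of the stylistic “fingerprint” the chain captures.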

Nowadays, the Markov chain is a fundamental tool for exploring spaces of conceptual entities much more general than poems. It’s how election reformers identify which legislative maps are brutally gerrymandered, and it’s how Google figures out which Web sites are most important (the key is a Markov chain where at each step you’re at a certain Web site, and the next step is to follow a random link from that site). What a neural net like GPT-3 learns—what allows it to produce uncanny imitation of human-written text—is a gigantic Markov chain that counsels it how to pick the next word after a sequence of 500, instead of the next letter after a sequence of two. All you need is a rule that tells you what probabilities govern the next step in the chain, given what the last step was.
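The Google example can be made concrete with a toy version of the random-surfer chain (a sketch of the idea only: the four-page “web” is invented for illustration, and real PageRank adds a damping factor and other refinements):

```python
# Toy web: each page links to some others. A "random surfer" follows a
# random outgoing link at each step -- a Markov chain on pages.
links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

# Power iteration: repeatedly push probability mass along the links
# until the distribution settles. The limit ranks the pages.
rank = {page: 1 / len(links) for page in links}
for _ in range(100):
    new = {page: 0.0 for page in links}
    for page, outgoing in links.items():
        share = rank[page] / len(outgoing)
        for target in outgoing:
            new[target] += share
    rank = new

for page, r in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(f"{page}: {r:.3f}")
```

Here A and C collect the most mass, while D, which nothing links to, ends up with none; repeatedly pushing probability along the links is just running the Markov chain to its stationary distribution.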

You can train your Markov chain on your home library, or on *Eugene Onegin*, or on the huge textual corpus to which GPT-3 has access; you can train it on anything, and the chain will imitate that thing! You can train it on baby names from 1971, and get: . . .

## A Number Theorist Who Connects Math to Other Creative Pursuits

In *Quanta* Steve Nadis interviews Jordan Ellenberg:

“There are many different pathways into mathematics,” said Jordan Ellenberg, a mathematician at the University of Wisconsin, Madison. “There is the stereotype that interest in math displays itself early. That is definitely not true in general. It’s not the universal story — but it is my story.”

That account was backed up by a biostatistician at the University of Pennsylvania — his mother, Susan Ellenberg. “Jordan recognized numbers before he could walk,” she said. “We’d be going someplace with him, and he’d start to call out numbers, and his father and I would have to figure out where he was seeing them. Each night, he’d ask me to teach him something new about math.” When he was in second grade, a local teacher began taking him through the high school math curriculum. Ever since, he’s been preoccupied with mathematics — though not exclusively so.

After graduating from Harvard University in 1993, Ellenberg completed a one-year master’s program in fiction writing at Johns Hopkins University, where he wrote a novel that was published a decade later, titled *The Grasshopper King*. But he always felt that he would eventually return to mathematics, and in 1994 he entered a doctoral program back at Harvard, pursuing research under the supervision of Barry Mazur, a number theorist.

“Barry was a great adviser and a very learned guy,” Ellenberg said. “One of the things he showed me is that it’s OK to be interested in things other than math. Through him I saw that being in a university isn’t just about being in the math department, but rather being part of a whole world of scholarship.”

Ellenberg has taken that view to heart, finding mathematics to explore in everything from internet fads to voting rights. He has interacted and even collaborated with colleagues from many different fields and departments, while keeping up his writing — academic papers for math journals, and popular articles for newspapers and magazines. In 2001, he started writing a column for *Slate* called “Do the Math.” Many entries are not typical mathematician fare, such as “Algebra for Adulterers,” “Cooking the Books on Virginity,” and “What Broadway Musicals Tell Us About Creativity.”

His latest book, *Shape*, is all about geometry — though, as you might expect, it departs significantly from the traditional geometry of your high school days. Proving the congruence of triangles and the like, he said, bears little resemblance to the work of modern geometry. In the book’s introduction, Ellenberg confesses that it was a curious subject for him to have taken up: “Reader, let me be straight with you about geometry: at first I didn’t care for it.”

*Quanta* spoke with Ellenberg earlier this month about geometry, electoral math and creativity. The interview has been condensed and edited for clarity.

## When did you first realize there was something special about math?

When I was 6 years old, I was in the living room, gazing at the rectangular pattern of holes on a speaker where the sound comes through. I noticed there were 6 rows of 8 holes and, equivalently, 8 columns of 6 holes. I knew that 6 × 8 equals 8 × 6. But at that moment, I grasped that this was a fact about the world, not just a fact from the multiplication tables. Mathematical knowledge, I realized, was something that existed on its own — something you could directly apprehend — and not just something you were taught.

That, for me, offered an early glimmer of the power of mathematical thinking — and the emotional force that comes with it. As teachers, we aspire for every kid to have that kind of experience of mathematical knowledge.

## Mathematics is a diverse field. How did you decide to focus on number theory?

I went to graduate school not really knowing what I would work on. It was just after Andrew Wiles proved Fermat’s Last Theorem. There was so much energy and enthusiasm about number theory at that time. It seemed to be the most exciting thing going on.

Students often ask me: “How do I figure out what area of math is right for me?” I tell them that it’s all interesting. Every field of research has deep wonderful ideas. You just have to see what you fall into. And wherever you fall, there is excitement to be found.

## Of all the possible subjects in math, why did you write a book on geometry, especially when you admit to having a mental block when it comes to visualizing things?

It’s true, I didn’t really take to high school geometry. There was a certain style — the Euclidean “theorem, statement, proof” approach — that did not vibe with me. That approach is certainly a part of geometry, but it happens to be a tiny part.

It’s also true that I have difficulty with some geometric things. For example, when you have to put a credit card into a machine, I can’t follow the diagram and instead end up trying all four possibilities. If I’m on the first floor of my house and am asked about the layout of things in the room above me, I can’t really picture that. But it turns out that those skills aren’t so important when it comes to doing geometry.

Even though I steered clear of geometry when I was young, I later learned that you can’t maintain a dislike for any part of mathematics because all of its branches touch each other.

## You also like to find mathematical connections even among ideas that don’t seem too mathematical, like pondering how many holes a straw has. Why bother answering that?

Well, it’s kind of an internet craze [with more than 60 million hits on Google]. It goes viral all the time, and you may wonder why people are so captivated by such a weird question. I’d say it’s actually a deep mathematical question, not a triviality nor a cut-and-dried matter. You could say one hole or two holes — or zero holes if you think about taking a rectangular piece of paper (with no holes in it) and rolling it up. It’s a way of getting people to understand topology and homology groups, which involves classifying objects based on the number of holes they have.

It turns out there is a mathematical answer to this: Topologists would say the straw has just one hole. But the point is not just to give people an answer, but rather to show them why it’s an interesting problem. Although this question is settled, many of the things that are now settled in mathematics may not have been settled 100 or so years ago. People have fought hard over almost every single conceptual advance.

## In 2019, you and 10 other mathematicians signed a brief about gerrymandering that was submitted to the Supreme Court. What does math have to do with that? . . .

Continue reading. There’s more.

## There’s a hole at the bottom of math

This is an excellent presentation of fascinating ideas (some of which I struggle with — the spectral gap, for instance), and enough is clearly presented to be pleasing and satisfying. Worth watching the entire 34 minutes.

## When an algorithm taps you on the shoulder

Matt Stroud reports in the *Verge*:

ROBERT MCDANIEL’S TROUBLES began with a knock on the door. It was a weekday in mid-2013, as he made lunch in the crowded three-bedroom house where he lives with his grandmother and several of his adult siblings. When he went to answer the door, McDaniel discovered not one person, but a cohort of visitors: two police officers in uniform, a neighbor working with the police, and a muscular guy in shorts and a T-shirt sporting short, graying hair.

Police officers weren’t a new sight for McDaniel. They often drove down his tree-lined street in the Austin neighborhood of Chicago making stops and arrests. Of the 775 homicides tracked by the *Chicago Sun-Times* in 2020, 72 happened in Austin. That’s almost 10 percent of the city’s murders, in a region that takes up just 3 percent of its total area. The City of Chicago puts out a “heat map” of where gun crimes occur, with areas of moderate shooting numbers shaded in blue or green. Red splotches represent large numbers — and hottest concentrations — of shootings. On the map, Austin is the color of a fire engine.

Still, this visit from authorities caught McDaniel off guard: at that point in time, he had nothing remotely violent on his criminal record — just arrests for marijuana-related offenses and street gambling. And despite two officers showing up at his front door with the cohort, neither of them, nor anyone else in the cohort, accused McDaniel of breaking the law. They were not there to arrest him. No one was there to investigate a crime. They just wanted to talk.

“I had no idea why these cops were here,” McDaniel says, recounting it to me years later. “I didn’t do shit to bring them here.”

He invited them into his home. And when he did, they told McDaniel something he could hardly believe: an algorithm built by the Chicago Police Department predicted — based on his proximity to and relationships with known shooters and shooting casualties — that McDaniel would be involved in a shooting. That he would be a “party to violence,” but it wasn’t clear what side of the barrel he might be on. He could be the shooter, or he might get shot. They didn’t know. But the data said he was at risk either way.

McDaniel was both a potential victim and a potential perpetrator, and the visitors on his porch treated him as such. A social worker told him that help was available if he wanted it: assistance in securing a job, for example, or mental health services. And police were there, too, with a warning: from here on out, the Chicago Police Department would be watching him. The algorithm indicated Robert McDaniel was more likely than 99.9 percent of Chicago’s population to either be shot or to have a shooting connected to him. That made him dangerous, and top brass at the Chicago PD knew it. So McDaniel had better be on his best behavior.

The idea that a series of calculations could predict that he would soon shoot someone, or be shot, seemed outlandish. At the time, McDaniel didn’t know how to take the news.

But the visit set a series of gears in motion. This Kafka-esque policing nightmare — a circumstance in which police identified a man to be surveilled based on a purely theoretical danger — would seem to cause the thing it predicted, in a deranged feat of self-fulfilling prophecy. . .

Continue reading. There’s much more, and the story gets even more interesting: the “help” offered causes the very problem it was intended to prevent.

Later in the article, Stroud points out one weakness built into the system:

Forecasting isn’t magic; it’s an educated guess about what might happen based on things that have already occurred. The data feeding forecasting software for police are typically built around police stops and arrests. That might sound straightforward and unbiased, but consider that US Department of Justice data show that African Americans are more than twice as likely to be arrested as white people. And if you’re Black, your likelihood of being stopped by a police officer can be nearly four times higher than if you’re white, depending on which city you live in, according to the Stanford Open Policing Project.

Building a forecasting model around data like these can run the risk of stigmatizing entire populations based on discriminatory data; a 2017 study in the journal *Statistics and Public Policy* found that arrests doubled in a quadrant of Los Angeles where its police department tested forecasting software. Another problem — exacerbated when forecasting programs do not disclose their sources of data — is that of “dirty data” being mixed with more straightforward crime reports: a 2019 study out of New York University’s AI Now Institute identified jurisdictions where inaccurate or falsified records were directly fed into the data. Chicago’s one of them.

Which is all to say that forecasting can put entire populations at risk of over-policing — which has led to countless unnecessary police killings for relatively insignificant infractions. (Think George Floyd. And Michael Brown. Twelve-year-old Tamir Rice. Sandra Bland, Philando Castile, Walter Scott. Thirteen-year-old Adam Toledo, this year, in Chicago. Alton Sterling, Breonna Taylor, Ahmaud Arbery. The list goes on.)

Later still:

IN MCDANIEL’S VIEW, the heat list caused the harm its creators hoped to avoid: it predicted a shooting that wouldn’t have happened if it hadn’t predicted the shooting.

As the heat list continued to operate, researchers tore it to shreds. A 2016 paper published in the *Journal of Experimental Criminology* came to some troubling conclusions about the list, which had by then been rebranded as the “Strategic Subject List,” or SSL. Among them: “The individuals on the SSL were considered to be ‘persons of interest’ to the CPD,” meaning that McDaniel’s description of being routinely targeted for surveillance and searches matched what researchers discovered. “Overall,” the report goes on, “there was no practical direction about what to do with individuals on the SSL, little executive or administrative attention paid to the pilot, and little to no follow-up with district commanders.”

The heat list wasn’t particularly predictive, it turned out. It wasn’t high-tech. Cops would just use the list as a way to target people.

There was another problem, too. . .

## Labor shortage? Or wage shortage?

It’s as simple as black and white — and reflects the law of supply and demand (basic economics): as demand increases while supply remains the same, the price increases. If the demand for labor increases beyond the supply, the price of labor — the wage — goes up. Businesses certainly understand that if the demand for their widgets increases, they can charge more. That principle also applies to the labor required to make the widgets. Don’t businesses understand this basic principle?

## William Press: At lunch with Freeman Dyson

William Press writes in *Inference*:

IN THIS ESSAY, I would like to tell the story of a minor discovery in mathematical game theory that Freeman Dyson and I made in 2011. Dyson was a personal friend and one of the great mathematical physicists of the twentieth century. He died in 2020, at the age of ninety-six. He was famously self-effacing, which is not to say that he lacked an accurate opinion of his own abilities. Freeman would deny that he had done anything at all and then allow friends—or even strangers—to vehemently contradict him. Our discovery was not of that character. It really *was* very minor. The reasons for telling the story now are less about the discovery itself and more about the tendency of scientists to seek lessons in moral philosophy in the least likely of places—high-school algebra, for example.

Imagine that a group of scientists gather to play a kind of terror game. They must propose scenarios that, should they eventuate, would shake their belief in the foundations of their fields. The mathematician’s proposed terror is that a long message, in English, is found to be encoded—in excess of any plausible random probability—somewhere in the first billion digits of pi.^{1} The physicist’s terror is that the interaction cross-section of a fundamental particle will have significantly different values when measured in different places on earth, or in the same place at different times.^{2} The biologist’s terror is that some feature of the living world will be unexplainable by the principle of natural selection.

Within biology’s subspecialty of evolution theory, there is a small area of study known as *evolution of cooperation*. That study, some would say, lies closest to the biologist’s terror. That makes it worth poking at.

## Cooperation and Defection

IN BIOLOGY, a cooperator is an individual who pays a cost for another individual to receive a benefit. When cooperation is mutually beneficial to two individuals of the same or different species—a condition termed *direct reciprocity*—then it is favored by natural selection. There are other possibilities. In so-called kin selection, an individual’s self-sacrifice may be favored if, on average, it helps another individual in the same gene pool to survive.^{3} The unit of survival is understood in this case to be not the individual, but the gene that two individuals share.^{4} It is harder to understand why individuals cooperate when defection would be more favorable or when the reciprocity is only indirect.

Suppose that two microbe species, *A* and *B*, both need processed nutrients *a* and *b*. The cooperative state might be that *A* produces *a*, *B* produces *b*, and each secretes a portion of its nutrient for the benefit of the other. But this equilibrium is not evolutionarily stable: a defecting *A* with a mutation that halts its sharing of *a* becomes a free rider, benefitting from *B* without paying the fare. Free riders, avoiding a cost, will tend to take over the population. The evolutionarily stable endpoint is noncooperation, even though cooperation would be better for both species.

Cooperation among humans seems hardest of all to understand. “Humans are the champions of cooperation,” Martin Nowak has remarked. “From hunter-gatherer societies to nation-states, cooperation is the decisive organizing principle of human society.”^{5} In much, if not most, of our cooperation, reciprocity is indirect. To be sure, some people give money to universities in the hope of getting their own children admitted—kin selection—but many more give to charities that are of no direct benefit to themselves or their kin. Many billionaires become philanthropists, but from the standpoint of evolution theory, why is this? A quirk of our culture, maybe? But cultures, too, compete for dominance with other contemporaneous cultures, and by a process akin to natural selection. Are we to understand that generosity is selectively favored? Or are the generous billionaires only transient?

Charles Darwin recognized that cooperation posed a challenge to his theory of natural selection. He described an elegant experiment to ferret out whether the aphid yields its excretion to the ant voluntarily, or involuntarily with the ant as a parasite.^{6} He provided a convincing argument that it was the former. Darwin, the consummate naturalist, hated overgeneralized theory. Yet the significant literature on the evolution of cooperation that has flourished in the last fifty years is almost entirely theoretical. Much of it is cast in the formalism of mathematical game theory, a subject that came into existence more than half a century after Darwin’s death in the work of John von Neumann and Oskar Morgenstern. Game theory describes how competing, sentient players, in a well-defined universe of choices and payoffs, may knowingly seek to optimize their own outcomes. Evolution is the blind watchmaker,^{7} optimizing only by trial and error. Exactly how the achievable outcomes of evolution correspond to the mathematical optima of game theory is not a settled question.

## The Prisoner’s Dilemma

GO BACK TO MICROBES *A* and *B*, but now promote them to sentience. They become Alice and Bob, who are arrested on suspicion of committing, together, a serious crime. Each has sworn not to betray the other. They are questioned in separate rooms.

“We already have enough evidence to convict you both of a misdemeanor,” the detective says to each, “that will put you away for one year.” Each, separately, says nothing. “But if you defect, rat out your partner and turn state’s evidence,” the detective continues, “we’ll let you go, scot-free. Your partner will get a felony conviction, six years in the state penitentiary.”

“What if we *both* turn state’s evidence?” Alice and Bob each ask.

“Well, I can’t let you both go free,” the detective says. “You’ll each get three years.”

Alice reasons as follows: . . .

Continue reading. There’s much more.
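The detective’s offer defines the classic Prisoner’s Dilemma payoff matrix. A minimal sketch in Python (the sentences are taken from the excerpt; lower numbers are better) shows why defection dominates for each player no matter what the other does:

```python
# Prisoner's Dilemma from the excerpt: payoffs are years in prison (lower is better).
# Each entry: (Alice's sentence, Bob's sentence) for (Alice's move, Bob's move).
SENTENCES = {
    ("cooperate", "cooperate"): (1, 1),  # both stay silent: misdemeanor, 1 year each
    ("cooperate", "defect"):    (6, 0),  # Alice silent, Bob defects: Alice gets 6 years
    ("defect",    "cooperate"): (0, 6),  # Alice defects, Bob silent: Alice goes free
    ("defect",    "defect"):    (3, 3),  # both defect: 3 years each
}

def best_reply(bob_move):
    """Alice's best move (fewest years) given Bob's move."""
    return min(("cooperate", "defect"),
               key=lambda alice_move: SENTENCES[(alice_move, bob_move)][0])

print(best_reply("cooperate"))  # defect
print(best_reply("defect"))     # defect
```

Defection is Alice’s best reply to either of Bob’s moves (and symmetrically for Bob), yet mutual defection (3, 3) leaves both worse off than mutual cooperation (1, 1) — which is the dilemma.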

I will add this: Robert Axelrod held two “tournaments” in which computer algorithms competed in an iterated Prisoner’s Dilemma. Anatol Rapoport (author of the book *Operational Philosophy*, which had a big impact on me when I read it in high school) wrote the algorithm that won the first tournament, using the simple strategy of tit for tat: cooperate on the first round, then on each later round do whatever the other player did on the previous round.

Axelrod published the results and the strategies used, and then, with that information available to assist participants in their work, held a second tournament. Anatol Rapoport won the second tournament as well, using the same strategy.
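Tit for tat itself is only a couple of lines. Here is a sketch — not Rapoport’s original submission, just the strategy as described — played against an always-defect opponent, using the point values commonly associated with Axelrod’s tournaments (these values are an assumption, not quoted from the post):

```python
def tit_for_tat(opponent_history):
    """Cooperate first; thereafter copy the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

# Points per round (higher is better), the usual Axelrod-tournament values:
# mutual cooperation -> 3 each; mutual defection -> 1 each;
# lone defector -> 5; the betrayed cooperator ("sucker") -> 0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(strat1, strat2, rounds=10):
    seen1, seen2 = [], []          # the opponent moves each player has seen
    score1 = score2 = 0
    for _ in range(rounds):
        m1, m2 = strat1(seen1), strat2(seen2)
        p1, p2 = PAYOFF[(m1, m2)]
        score1, score2 = score1 + p1, score2 + p2
        seen1.append(m2)
        seen2.append(m1)
    return score1, score2

print(play(tit_for_tat, always_defect))  # (9, 14): exploited once, then retaliates
print(play(tit_for_tat, tit_for_tat))    # (30, 30): cooperation locks in
```

Tit for tat never beats its opponent head-to-head — it loses the first round to a defector and never more — but across a whole tournament its willingness to cooperate earns it the highest total score.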

Axelrod’s book is a lot of fun. Recommended.

## Can a prime number be illegal? Yes.

See this Wikipedia article, which begins:

An *illegal prime* is a prime number that represents information whose possession or distribution is forbidden in some legal jurisdictions. One of the first illegal primes was found in 2001. When interpreted in a particular way, it describes a computer program that bypasses the digital rights management scheme used on DVDs. Distribution of such a program in the United States is illegal under the Digital Millennium Copyright Act.^{[1]} An illegal prime is a kind of illegal number.
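The “interpreted in a particular way” part rests on a simple observation: any file is a number. A small sketch of the encoding idea (this is an illustration, not the actual illegal prime, which encoded a compressed DeCSS source file):

```python
# Any data is a number: read a program's bytes as one big base-256 integer.
program = b"print('hello')"          # stand-in for the forbidden program
n = int.from_bytes(program, "big")   # the program, viewed as a single integer

# Decoding reverses the interpretation: the number *is* the program.
decoded = n.to_bytes((n.bit_length() + 7) // 8, "big")
assert decoded == program
```

Finding an *illegal prime* is then a search problem: pad or tweak trailing bits that don’t affect the payload until the resulting integer happens to be prime. Primes attract attention (lists of large primes get published), which is what made the stunt effective.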

## One knot, five applications

Back in the day — *way* back in the day — when I was in Cub Scouts I was fascinated by knots. Of course there are the square knot and the granny knot everyone knows, but I liked the sheet bend, the sheepshank, the clove hitch, the timber hitch, the bowline (an essential knot), and the bowline on a bight (in part, I think, I liked the names). This list of knots is animated, so if you click a knot, you see it being tied.

John Donne was partial to knots himself, and they appear frequently in his poems. Knots carry with them a kind of history in the context of how they were used. The Gordian knot has entered common language. The mathematics of knots is formidable and fruitful. Knotted breads (pretzels, braided loaves, and the like) please and delight.

IMO, every young person should learn well how to tie a few dozen knots and understand their use.

## Tips from neuroscience to keep you focused on hard tasks

David Badre writes in *Nature*:

Some of scientists’ most rewarding moments come when we confront a hard problem or a difficult task. Solving a major methodological hurdle, designing an elegant experiment, making sense of a puzzling result, working on a new model or writing a paper or grant proposal are the intellectual challenges that make a career in science so exciting. But doing hard tasks is, in fact, hard. It can frustrate and weigh on us, and cause anxiety and stress. We can struggle to maintain focus on our hard tasks, including the ones we enjoy and eagerly wish to complete. We often postpone work on hard tasks, such as beginning to write a paper or do complex data analysis, in favour of quick wins from easier tasks, like fine-tuning a figure, organizing our calendars or making a dent in our e-mail correspondence.

In late 2020, I published a book, *On Task*, about the neuroscience of cognitive control: the mental function that allows us to connect our goals and plans with our actions. It is concerned with precisely this problem of how we get things done. It is ironic, therefore, that writing a book about how our brains do tasks was itself a difficult task to do. I enjoyed writing the book, and valued the goal. But there were moments when it was really difficult to find the words to convey a complex idea. And working on the book was never the most immediately urgent task in my day-to-day work, so it was challenging to carve out the time for the writing and thought it required.

You might not be writing a book, but everyone experiences the struggles of difficult tasks. They have been made all the worse with lockdowns, home-schooling and other lifestyle changes due to the pandemic. Everyone experiences bouts of procrastination or work-avoidance, and the guilt that comes with them. There is no avoiding these experiences entirely, but there are some strategies that can help us stay focused.

## Make space

To solve hard problems, the brain needs ready access to the information, plans, procedures and knowledge it will be using. Cognitive scientists refer to this collective task knowledge as a task set. However, the task set is not always immediately available: we can’t hold it all active in our limited mental workspace, or ‘working memory’, all the time.

For example, when writing a scientific paper, we must bring to mind lots of information related to the background, logic, design and results of a study. If we have just been at a meeting on a different topic, and then sit down to write the paper, the necessary information might not be in the forefront of our minds. It must be mentally retrieved and organized in our working memory before we can start writing.

In practice, returning to a hard task in this way comes with a ‘restart’ cost: we must spend time and mental effort getting back into our task set, rather than making progress. For this reason, it is important to create time and space for hard tasks.

• Set aside large blocks of time. It is all too easy for working scientists to fill our days with meetings and other small tasks that leave only small gaps for the serious work. Long gaps are needed not only because of the intense thought and work required by hard tasks, but also because we need some time to re-establish our task set. Switching frequently between tasks makes producing quality work harder.

• Be consistent. We should try to reserve a consistent time and place for our hard work and be protective of it. Ideally, we should find this time and place every day. Even if we don’t make progress one day, that time should be spent on our hard task rather than other tasks, even if it’s just reviewing our work. Consistency can aid memory: memory retrieval is context dependent, in that it helps to have the same sights and sounds available when we learn something as when we will try to remember it. Thus, working on a task in the same context repeatedly might aid retrieval and help us to re-establish our task set when we restart.

## Minimize distraction and never multitask

When we do two or more tasks at once, either at the same time or switching between them, our performance efficiency and quality will suffer. This happens partly because the tasks use shared cognitive resources, such as working memory. As a result, they will compete for that shared resource and interfere with one another. When doing a hard task, it is important to minimize this interference from multi-tasking.

• Remove cues to other tasks. It helps to put away e-mail and social media and their associated cues. Phone notifications or a badge that tells us how many unread messages we have are distractions that pull us to other tasks. These result in multitasking costs, whether we do the other tasks or not. Even cues that we simply associate with other tasks, such as seeing our phones on the table, can distract us. As much as possible, we should keep our space and time for hard work clear of other distracting tasks.

• Beware the allure of easy tasks. When we decide to perform a task, our brains do a cost–benefit analysis on the fly, weighing the value of the outcome against the projected mental investment required to be successful. As a result, we often avoid hard tasks in favour of smaller, easier tasks, particularly if we aren’t making immediate progress. That will affect our motivation. Sending some e-mails or doing administrative work or straightening up the desk might all be worthwhile tasks and feel productive, but they prevent us doing the task we need to do, while adding multitasking costs.

## Engage in good problem-solving habits

To find a solution to a hard problem or perform a hard task, we must structure the problem or task in a way that will allow us to succeed.

For example, . . .

Continue reading. There’s more.

## Is the AstraZeneca Vaccine Really Dangerous?

Sometimes it’s difficult to understand public reactions. Kevin Drum notes:

I’m puzzled about something related to the AstraZeneca vaccine. Several European countries have put it on hold due to reports of blood clots in patients who have received it. If I have the numbers right, about 40 cases of blood clots have been reported out of the 17 million people who have gotten the AZ vaccine.

But COVID-19 itself is far more dangerous on the blood clot front. “Blood clots continue to wreak havoc for patients with severe COVID-19 infection,” reported the University of Michigan Health Lab a few months ago, and that hasn’t changed.

So let’s do some arithmetic. The overall hospitalization rate for COVID-19 is about 100 per million per week, which means that of the 17 million who have received the AZ vaccine since January, something on the order of 17,000 would have been hospitalized if they hadn’t been vaccinated. The incidence of blood clots in patients hospitalized for COVID-19, according to a recent study, is about 20%. This means that roughly 3,000 people out of those 17 million would have developed blood clots without the vaccine, simply because they would have gotten a serious case of COVID-19.

So . . . 3,000 without the vaccine, 40 with the vaccine.¹ That seems like a no-brainer: keep inoculating people with the AZ vaccine until firm evidence of large-scale harm emerges.²

Unless, of course, I bollixed up my amateur risk assessment arithmetic. If I did, what am I missing?

¹Also, AstraZeneca claims that 40 cases out of 17 million is about the same as it is for other vaccines. I can’t verify that at the moment, but I wouldn’t be surprised if it’s true.

²The calculus might work out differently if you had plenty of alternate vaccine supplies, but I don’t get the impression that’s the case in Europe.
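Drum’s back-of-the-envelope numbers check out in a few lines, using his stated assumptions (17 million vaccinated, about 100 hospitalizations per million per week, a ~20% clot rate among hospitalized COVID patients, and the roughly ten weeks from January that his 17,000 figure implies):

```python
vaccinated = 17_000_000        # people given the AZ vaccine since January
hosp_rate = 100 / 1_000_000    # COVID-19 hospitalizations per person per week
weeks = 10                     # roughly January to mid-March (implied by his 17,000 figure)
clot_rate = 0.20               # blood-clot incidence among hospitalized COVID patients

expected_hospitalized = vaccinated * hosp_rate * weeks
expected_clots = expected_hospitalized * clot_rate

print(round(expected_hospitalized))  # 17000
print(round(expected_clots))         # 3400 -- "roughly 3,000" in the text
```

The exact week count barely matters: even halving it leaves the expected clots from unchecked COVID-19 well over an order of magnitude above the 40 cases reported among the vaccinated.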