Later On

A blog written for those whose interests more or less match mine.

Archive for the ‘Math’ Category

Math’s Fatal Flaw: An elegant presentation of math’s cutting edge

Written by Leisureguy

20 October 2021 at 10:24 am

Posted in Daily life, Math, Video

C.S. Peirce, American Aristotle

Charles Sanders Peirce (last name pronounced “purse”), a highly original philosopher and thinker, is the subject of an essay in Aeon by Daniel Everett, which begins:

[I intend] to make a philosophy like that of Aristotle, that is to say, to outline a theory so comprehensive that, for a long time to come, the entire work of human reason, in philosophy of every school and kind, in mathematics, in psychology, in physical science, in history, in sociology and in whatever other department there may be, shall appear as the filling up of its details.
C S Peirce, Collected Papers (1931-58)

The roll of scientists born in the 19th century is as impressive as any century in history. Names such as Albert Einstein, Nikola Tesla, George Washington Carver, Alfred North Whitehead, Louis Agassiz, Benjamin Peirce, Leo Szilard, Edwin Hubble, Katharine Blodgett, Thomas Edison, Gerty Cori, Maria Mitchell, Annie Jump Cannon and Norbert Wiener created a legacy of knowledge and scientific method that fuels our modern lives. Which of these, though, was ‘the best’?

Remarkably, in the brilliant light of these names, there was in fact a scientist who surpassed all others in sheer intellectual virtuosity. Charles Sanders Peirce (1839-1914), pronounced ‘purse’, was a solitary eccentric working in the town of Milford, Pennsylvania, isolated from any intellectual centre. Although many of his contemporaries shared the view that Peirce was a genius of historic proportions, he is little-known today. His current obscurity belies the prediction of the German mathematician Ernst Schröder, who said that Peirce’s ‘fame [will] shine like that of Leibniz or Aristotle into all the thousands of years to come’.

Some might doubt this lofty view of Peirce. Others might admire him for this or that contribution yet, overall, hold an opinion of his oeuvre similar to that expressed by the psychologist William James on one of his lectures, that it was like ‘flashes of brilliant light relieved against Cimmerian darkness’. Peirce might have good things to say, so this reasoning goes, but they are too abstruse for the nonspecialist to understand. I think that a great deal of Peirce’s reputation for obscurity is due, not to Peirce per se, but to the poor organisation and editing of his papers during their early storage at and control by Harvard University (for more on this, see André de Tienne’s insightful history of those papers).

Such skepticism, however incorrect, becomes self-reinforcing. Because relatively few people have heard of Peirce, at least relative to the names above, and because he has therefore had a negligible influence in popular culture, some assume that he merits nothing more than minor fame. But there are excellent reasons why it is worth getting to know more about him. The leading Peirce scholar ever, Max Fisch, described Peirce’s intellectual significance in this fecund paragraph from 1981:

Who is the most original and the most versatile intellect that the Americas have so far produced? The answer ‘Charles S Peirce’ is uncontested, because any second would be so far behind as not to be worth nominating. Mathematician, astronomer, chemist, geodesist, surveyor, cartographer, metrologist, spectroscopist, engineer, inventor; psychologist, philologist, lexicographer, historian of science, mathematical economist, lifelong student of medicine; book reviewer, dramatist, actor, short-story writer; phenomenologist, semiotician, logician, rhetorician [and] metaphysician … He was, for a few examples, … the first metrologist to use a wave-length of light as a unit of measure, the inventor of the quincuncial projection of the sphere, the first known conceiver of the design and theory of an electric switching-circuit computer, and the founder of ‘the economy of research’. He is the only system-building philosopher in the Americas who has been both competent and productive in logic, in mathematics, and in a wide range of sciences. If he has had any equals in that respect in the entire history of philosophy, they do not number more than two.

Peirce came from a well-to-do, prominent family of senators, businessmen and mathematicians. His father, Benjamin Peirce, was considered the greatest US mathematician of his generation, teaching mathematics and astronomy at Harvard for some 50 years. Charles’s brother, James, also taught mathematics at Harvard, eventually becoming a dean there. C S Peirce was, on the other hand, despised by the presidents of Harvard (Charles Eliot; where Peirce studied) and Johns Hopkins University (Daniel Gilman; where Peirce initially taught). Eliot and Gilman, among others, actively opposed Peirce’s employment at any US institution of higher education and thus kept him in penury for the latter years of his life. They falsely accused him of immorality and underestimated his brilliance due to input from jealous rivals, such as Simon Newcomb.

Though the story of Peirce’s life and thinking processes is inspiring and informative, this story is not told here. (I recommend Joseph Brent’s 1998 biography of Peirce as an excellent beginning. My own planned intellectual biography of Peirce intends to trace his life from his Pers family roots in Belgium in the 17th century to the history of the influence of his work on modern philosophy and science.) The objective here is rather to highlight some portions of Peirce’s thought to explain why his theories are so important and relevant to contemporary thinking across a wide range of topics.

The importance and range of Peirce’s contributions to science, mathematics and philosophy can be appreciated partially by recognising that many of the most important advances in philosophy and science over the past 150 years originated with Peirce: the development of mathematical logic (before and arguably better eventually than Gottlob Frege); the development of semiotics (before and arguably better than Ferdinand de Saussure); the philosophical school of pragmatism (before and arguably better than William James); the modern development of phenomenology (independently of and arguably superior to Edmund Husserl); and the invention of universal grammar with the property of recursion (before and arguably better than Noam Chomsky; though, for Peirce, universal grammar – a term he first used in 1865 – was the set of constraints on signs, with syntax playing a lesser role).

Beyond these philosophical contributions, Peirce also made fundamental discoveries in science and mathematics. A few of these are:  . . .

Continue reading. There’s much more.

Written by Leisureguy

8 October 2021 at 1:08 pm

Sandpile Math Is Amazing

Nautilus has a beautifully illustrated article by Jordan Ellenberg on a simulation of the sandpile that results when you stack many, many grains of sand on a single square and then allow the stack to collapse. The basis of the simulation is simple. Think of a grid of squares. Four grains in a square collapse, so that the four neighboring squares that share a side each get one of the grains. If there are more than four grains in a square, apply that rule repeatedly until the number of grains in each square is less than four.
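
The rule is easy to put into code. Here is a minimal sketch in Python; the grid size and the number of grains are illustrative choices (the video below uses a 255×255 grid), not parameters taken from the article:

```python
import numpy as np

def topple(grid):
    """Apply the sandpile rule until every square holds fewer than 4 grains.

    A square with at least 4 grains topples: it loses 4 grains, and each of
    its four side-sharing neighbors gains one. Grains that would land
    outside the grid simply fall off the edge.
    """
    grid = grid.copy()
    while (grid >= 4).any():
        t = grid // 4             # topples at each square in this pass
        grid -= 4 * t             # each topple removes 4 grains...
        grid[1:, :] += t[:-1, :]  # ...sending one to the neighbor below,
        grid[:-1, :] += t[1:, :]  # one to the neighbor above,
        grid[:, 1:] += t[:, :-1]  # one to the right,
        grid[:, :-1] += t[:, 1:]  # and one to the left.
    return grid

# Example: collapse a single stack of 50,000 grains at the center.
pile = np.zeros((255, 255), dtype=int)
pile[127, 127] = 50_000
stable = topple(pile)
print(stable.max())  # at most 3 once the pile is stable
```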

The article shows the patterns that emerge, and includes this video:

“THE LIVING AVALANCHE: A single grain of sand added to a 255×255 sandpile with a critical density triggers a complex set of avalanches. The color of a pixel records the number of topples that have taken place at that location, so each avalanche “heats up” the area it covers.”

Written by Leisureguy

8 October 2021 at 12:28 pm

Posted in Daily life, Math

New Math Book Rescues Landmark Topology Proof

Interesting that the proof was almost lost simply because it was poorly written. Kevin Hartnett writes in Quanta:

One of the most important pieces of mathematical knowledge was on the verge of being lost, maybe forever. Now, a new book hopes to save it.

“The Disc Embedding Theorem” rewrites a proof — about an infinite network of discs — that Michael Freedman completed in 1981 after years of solitary toil on the California coast. Freedman’s proof answered a question that at the time was one of the most important unsolved questions in mathematics, and the defining problem in Freedman’s field, topology.

Freedman’s proof felt miraculous. Nobody at the time believed it could possibly work — until Freedman personally persuaded some of the most respected people in the field. But while he won over his contemporaries, the written proof is so full of gaps and omissions that its logic is impossible to follow unless you have Freedman, or someone who learned the proof from him, standing over your shoulder guiding you.

“I probably didn’t treat the exposition of the written material as carefully as I should have,” said Freedman, who today leads a Microsoft research group at the University of California, Santa Barbara focused on building a quantum computer. [No “probably” about it. – LG]

Consequently, the miracle of Freedman’s proof has faded into myth.

Today, few mathematicians understand what he did, and those who do are aging out of the field. The result is that research involving his proof has withered. Almost no one gets the main result, and some mathematicians have even questioned whether it’s correct at all.

In a 2012 post on MathOverflow, one commenter referred to the proof as a “monstrosity of a paper” and said he had “never met a mathematician who could convince me that he or she understood Freedman’s proof.”

The new book is the best effort yet to fix the situation. It is a collaboration by five young researchers who were captivated by the beauty of Freedman’s proof and wanted to give it new life. Over nearly 500 pages, it spells out the steps of Freedman’s argument in complete detail, using clear, consistent terminology. The goal was to turn this important but inaccessible piece of mathematics into something that a motivated undergraduate could learn in a semester.

“There is nothing left to the imagination anymore,” said Arunima Ray of the Max Planck Institute for Mathematics in Bonn, co-editor of the book along with Stefan Behrens of Bielefeld University, Boldizsár Kalmár of the Budapest University of Technology and Economics, Min Hoon Kim of Chonnam National University in South Korea, and Mark Powell of Durham University in the U.K. “It’s all nailed down.”

Sorting Spheres

In 1974, Michael Freedman was 23 years old, and he had his eye on one of the biggest problems in topology, a field of math which studies the basic characteristics of spaces, or manifolds, as mathematicians refer to them.

It was called the Poincaré conjecture, after the French mathematician Henri Poincaré, who’d posed it in 1904. Poincaré predicted that any shape, or manifold, with certain generic characteristics must be equivalent, or homeomorphic, to the sphere. (Two manifolds are homeomorphic when you can take all the points on one and map them over to points on the other while maintaining relative distances between points, so that points that are close together on the first manifold remain close together on the second.)
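
For reference, the formal definition requires only continuity, not preserved distances; the description above is the intuitive version. A one-line statement in LaTeX:

```latex
% Two spaces M and N are homeomorphic when there is a bijection
% f : M -> N such that both f and its inverse f^{-1} are continuous.
\[
  M \cong N \iff \exists\, f\colon M \to N
  \ \text{bijective, with $f$ and $f^{-1}$ continuous.}
\]
```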

Poincaré was specifically thinking of three-dimensional manifolds, but mathematicians went on to consider manifolds of all dimensions. They also wondered if the conjecture held for two types of manifolds. The first type, known as a “smooth” manifold, doesn’t have any features like sharp corners, allowing you to perform calculus at every point. The second, known as a “topological” manifold, can have corners where calculus is impossible.

By the time Freedman started work on the problem, mathematicians had made a lot of progress on the conjecture, including solving the topological version of it in dimensions 5 and higher.

Freedman focused on the four-dimensional topological conjecture. It stated that any topological manifold that’s a four-dimensional “homotopy” sphere, which is loosely equivalent to a four-dimensional sphere, is in fact homeomorphic (strongly equivalent) to the four-dimensional sphere.

“The question we’re asking is, [for the four-sphere], is there a difference between these two notions of equivalence?” said Ray.

The four-dimensional version was arguably the hardest version of Poincaré’s problem. This is due in part to the fact that the tools mathematicians used to solve the conjecture in higher dimensions don’t work in the more constrained setting of four dimensions. (Another contender for the hardest version of the question is the three-dimensional Poincaré conjecture, which wasn’t solved until 2002, by Grigori Perelman.)

At the time Freedman set to work, no one had any fully developed idea for how to solve it — meaning that if he was going to succeed, he was going to have to invent wildly new mathematics.

Curves That Count

Before getting into how he proved the Poincaré conjecture, it’s worth digging a little more into what the question is really asking.

A four-dimensional homotopy sphere can be characterized by the way curves drawn inside it interact with each other: The interaction tells you something essential about the larger space in which they’re interacting.

In the four-dimensional case, these curves will be two-dimensional planes (and in general, the curves will be at most half the dimension of the larger space they’re drawn inside). To understand the basic setup, it’s easier to consider a simpler example involving one-dimensional curves intersecting inside two-dimensional space, like this: . . .

Continue reading.

Written by Leisureguy

10 September 2021 at 4:48 pm

Posted in Books, Math

The Statistics Behind “Breakthrough” Infection

Joe Schwarcz points out what’s missing from the report of the superspreader event in Massachusetts. He writes at the McGill University Office for Science and Society:

Recently we have seen a rash of headlines along the lines of “75% of infected people were fully vaccinated” atop stories describing a “super-spreader” event in Massachusetts. While that statistic is correct, it needs to be put into context. First, let’s note that the outbreak, in which 360 out of 469 cases (75%) were among vaccinated individuals, was linked with densely packed indoor and outdoor events on the July 4th weekend with few people wearing masks. Next, let’s examine that scary 75% number.

Consider a theoretical scenario in which 100% of a population is vaccinated. Since vaccines are not perfect, there will be some “breakthrough” infections, and in such a case, 100% of infections will be among vaccinated people! That does not mean vaccines do not work. To determine vaccine efficacy in this case, we would need to know the total number of vaccinated and unvaccinated people who gathered that weekend. If this were known, then the percent of infections in the two populations could be calculated and the efficacy of the vaccine determined. In other words, it is important to know the denominator in such a calculation!

If out of the 469 cases, 360 (75%) were vaccinated, to find the percent of infections among vaccinated people we would have to make the calculation (360/a) × 100, where “a” is the total number of vaccinated people. Similarly, for the unvaccinated, it would be (109/b) × 100, where “b” is the total of unvaccinated individuals. But we do not know “a” or “b,” so the relative effectiveness cannot be calculated. However, given what we know about rates of vaccination in the state, which is roughly 65%, and in Provincetown, the epicentre, a reported 85%, it is a good bet that “a” is much larger than “b,” meaning that the percent of infection among vaccinated people is much less than among the unvaccinated. Such data is available from other studies and formed the basis for the approval of the various vaccines.
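
To make the denominators concrete, here is a small Python sketch. The 360/109 case split comes from the quoted article; the crowd size of 20,000 and the 85% vaccination rate are assumptions made up for the illustration:

```python
# Hypothetical illustration of why denominators matter.
# Assumed: 20,000 attendees, 85% of them vaccinated.
# From the article: 360 cases among the vaccinated, 109 among the unvaccinated.
attendees = 20_000
vaccinated = int(attendees * 0.85)       # "a" = 17,000
unvaccinated = attendees - vaccinated    # "b" = 3,000

rate_vax = 360 / vaccinated * 100        # ~2.1% of vaccinated infected
rate_unvax = 109 / unvaccinated * 100    # ~3.6% of unvaccinated infected

print(f"Infected, vaccinated:   {rate_vax:.1f}%")
print(f"Infected, unvaccinated: {rate_unvax:.1f}%")
# 75% of the *cases* were vaccinated, yet the infection *rate* among
# the vaccinated is lower, because most of the crowd was vaccinated.
```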

What all this means is that given the high vaccination rate, the chilling 75% number is not surprising and is basically meaningless. The really important number is 1.07, which is the percent of infected people (5 out of 469) who ended up in the hospital! One of these was unvaccinated, and two had prior health conditions. This means that the vaccines keep people out of the hospital! We can cope with flu-like symptoms, we just do not want to end up in the ICU!

To further buttress the point that vaccines work, CDC reports about 35,000 symptomatic cases a week out of 162 million vaccinated Americans, which is 0.02%. Also, more than 90% of patients hospitalized with COVID are unvaccinated. Furthermore, vaccination is associated with a 40-50% reduction in COVID cases among an infected person’s household contacts.

The delta variant is now responsible for most infections, and it is nasty, spreading much more easily than the original virus. Studies show that vaccinated individuals carry as much of this variant in their nose as the unvaccinated, suggesting that they can spread the virus. They can indeed, but are still less likely to do so than the unvaccinated since they are less likely to be infected in the first place. Another boost for vaccines comes from just-released Israeli data demonstrating that a third dose of COVID-19 vaccine among the over 60 offers greater protection than after two vaccines by a factor of four!

What should we then take away from the Massachusetts event? That this is not the time for unnecessary gatherings or travel. And neither is it time to toss away the masks, no matter how unpleasant they are to wear. And of course, for the yet unvaccinated, it is time to get the jab. Unvaccinated people are incubators for variants.

Written by Leisureguy

28 August 2021 at 9:53 am

The Beauty of Bézier Curves

Written by Leisureguy

25 August 2021 at 12:48 pm

Posted in Games, Math, Software, Technology

The Collatz Conjecture

Written by Leisureguy

4 August 2021 at 12:21 pm

Posted in Math, Video

Proof Assistant Makes Jump to Big-League Math

Computer-assisted cognition is here. Kevin Hartnett writes in Quanta:

Computer proof assistants have been an intriguing subplot in mathematics for years — promising to automate core aspects of the way mathematicians work, but in practice having little effect on the field.

But a new result, completed in early June, has the feel of a rookie’s first hit in the big leagues: At last, a proof assistant has made a real contribution to the leading edge of mathematical research by verifying the correctness of a complicated, modern proof.

“It demonstrates that modern maths can be formalized in a theorem prover,” said Bhavik Mehta, a graduate student at the University of Cambridge who contributed to the work.

The proof in question is by Peter Scholze of the University of Bonn, one of the most widely respected mathematicians in the world. It is just one piece of a larger project called “condensed mathematics” that he and Dustin Clausen of the University of Copenhagen have been working on for several years.

Their goal is to create new foundations for topology, replacing the traditional notion of a topological space — whose examples include the sphere and the doughnut — with more versatile objects that the authors call condensed sets. In this new perspective, topological spaces are thought of as being assembled from infinite points of dust glued together.

That project includes a particularly important, difficult proof that Scholze worked out himself during a consuming week in July 2019. It establishes that an area of math called real functional analysis still works if you replace topological spaces with condensed sets.

Scholze began the proof on a Monday. He worked entirely in his head, barely writing anything down, let alone using a computer. By Thursday afternoon he’d nearly figured it out, save one piece that he just couldn’t get right. He was also feeling the strain of the intense concentration required to hold such a complicated argument in his active memory. So that night he unwound with some friends at a bar. He paid for it the next morning, Friday.

“I was completely hungover,” said Scholze.

But he also knew that he wouldn’t have time to work over the weekend, making Friday his best chance to finish the proof. The thought of losing touch with everything he’d built up in his mind over the past week, then having to start again fresh on Monday, was more than he wanted to consider.

“I didn’t think I’d have the mental capacity to rebuild this in my head again,” said Scholze.

So he powered through and finished the proof. But afterward, he wasn’t certain that what he had done was correct. The reason was more than the hazy circumstances in which he’d cleared the final hurdle. The proof was so complicated Scholze knew it was possible he had missed something.

“It’s some very convoluted thing with many moving parts. It’s hard to know which parts move by how much when you shift one of these parameters,” said Scholze.

Scholze didn’t find time to actually write down the proof until November 2019. A year later he contacted Kevin Buzzard, a mathematician at Imperial College London and a prominent evangelist for a proof assistant program called Lean. Scholze wanted to know whether it would be possible to type his proof into Lean — turning it into lines of code like a software program — so that the program could verify whether it was really true.

Buzzard shared Scholze’s inquiry with a handful of other members of the Lean community including Johan Commelin, a postdoctoral researcher at the University of Freiburg. Commelin had the perfect background for the job — he’d been using Lean for several years and was familiar with condensed mathematics — and he was convinced that verifying Scholze’s proof would do a lot to legitimize the proof assistant’s standing in the mathematical community.

“Being able to collaborate with Peter on such a project and having his name attached to it would be an enormous boost for Lean,” said Commelin.

But he also thought it could take a year or more to do it, which gave him pause. Commelin was worried he might spend all that time verifying the proof and, at the end, the rest of the math world would just shrug.

“I thought that if I spend two years working on this and I come out of my cave and say, ‘This is fine,’ the rest of the world is going to say, ‘Wow, we already knew this, Peter proved it,’” said Commelin. It wouldn’t matter that Scholze himself wasn’t entirely sure.

So Commelin asked Scholze if he’d be willing to make a public statement vouching for the importance of the work. Scholze agreed, and on Dec. 5, 2020, he wrote a post on Buzzard’s blog.

They called it the “Liquid Tensor Experiment,” a nod to mathematical objects involved in the proof called liquid real vector spaces, and to Liquid Tension Experiment, a progressive rock band that Scholze and Commelin both enjoy. In the 4,400-word primer, Scholze explained some technical aspects of the result and then added a note testifying in plain language to what he saw as the importance of checking it with a computer.

“I think this may be my most important theorem to date. (It does not really have any applications so far, but I’m sure this will change.)” Scholze wrote. “Better be sure it’s correct…”

Assurance in place, Commelin set to work. After explaining to Lean the mathematical statement whose proof he ultimately wanted the program to check, he brought more mathematicians into the project. They identified a few lemmas — intermediate steps in the proof — that seemed most approachable. They formalized those first, coding them on top of the library of mathematical knowledge that Lean draws on to determine if a given statement is true or not.
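
To give non-users a flavor of what formalizing a lemma looks like, here is a toy example in Lean (Lean 4 syntax, core library only; the actual project was carried out in Lean 3 on top of the mathlib library, with lemmas vastly more involved than these):

```lean
-- Prove a small lemma by appealing to a fact already in the library:
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- The same statement proved with a tactic, the style used throughout
-- large formalization projects:
theorem add_comm_example' (a b : Nat) : a + b = b + a := by
  rw [Nat.add_comm]
```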

Last October, Quanta wrote that the collective effort to write mathematics in Lean has the “air of a barn raising.” This project was no different. Commelin would identify discrete parts of the proof and post them to Zulip, a discussion board that serves as a hub for the Lean community. When mathematicians saw a part of the proof that fit their expertise, they’d volunteer to formalize it.

Mehta was one of about a dozen mathematicians who contributed to the work. In May he saw a post from Commelin asking for help formalizing the proof of a statement called Gordan’s lemma, which related to Mehta’s work in the area of combinatorial geometry. He spent a week coding the proof in terms that were consistent with the larger proof the mathematicians were building. It was emblematic, he said, of the way Lean works.

“It’s one big collaboration with a lot of people doing what they’re good at to make a singular monolith,” he said.

As the work proceeded, Scholze was a consistent presence on Zulip, answering questions and explaining points of the proof — a bit like an architect giving directions to builders on a job site. “He was always within reach,” Commelin said.

At the end of May the group finished formalizing the one part of the proof Scholze was most unsure about. Commelin entered the final keystroke at 1:10 a.m. on May 29. Lean compiled the proof, and . . .

Continue reading.

Written by Leisureguy

28 July 2021 at 1:50 pm

Posted in Math, Software, Technology

Re-counting the Cognitive History of Numerals

In The MIT Press Reader, Philip Laughlin, who acquires books for the MIT Press in the fields of cognitive science, philosophy, linguistics, and bioethics, interviews Stephen Chrisomalis, Professor of Anthropology at Wayne State University and author of, among other books, “Reckonings: Numerals, Cognition, and History.”

Those of us who learned arithmetic using pen and paper, working with the ten digits 0–9 and place value, may take for granted that this is the way it’s always been done, or at least the way it ought to be done. But if you think of the amount of time and energy spent in the early school years just to teach place value, you’ll realize that this sort of numeracy is not preordained.

Over the past 5,500 years, more than 100 distinct ways of writing numbers have been developed and used by numerate societies, linguistic anthropologist Stephen Chrisomalis has found. Thousands more ways of speaking numbers, manipulating physical objects, and using human bodies to enumerate are known to exist, or to have existed, he writes in his new book “Reckonings: Numerals, Cognition, and History.” Remarkably, each of the basic structures was invented multiple times independently of one another. In “Reckonings,” Chrisomalis considers how humans past and present have used numerals, reinterpreting historical and archaeological representations of numerical notation and exploring the implications of why we write numbers with figures rather than words. Drawing on, and expanding upon, the enormous cross-cultural and comparative literatures in linguistics, cognitive anthropology, and the history of science that bear on questions of numeracy, he shows that numeracy is a social practice.

Chrisomalis took time out from a busy end to the spring semester to field a few questions about his new book, his spirited defense of Roman numerals, his complicated relationships with mathematicians, and his thoughts on the validity of the Sapir-Whorf Hypothesis.


Philip Laughlin: We’ve worked with a number of linguists and anthropologists over the years but you are our first author to specialize in written numerical systems. What sparked your interest in this topic? Why are numerals an important area of research?

Stephen Chrisomalis: I first became interested in numerals when I wrote a paper in an undergraduate cognitive anthropology course in the mid-1990s. After moving away from the subject for a couple years, I came back to it when I was looking for a PhD topic along with my advisor, the late Bruce Trigger at McGill. This resulted in my dissertation, which later became my first book, “Numerical Notation: A Comparative History” (Cambridge, 2010). It was an unorthodox project for an anthropology department — neither strictly archaeological nor ethnohistorical nor ethnographic. But that was exactly the sort of creative project that it was possible to do at McGill at that time, and that sadly, given the exigencies of the modern job market, is almost impossible to imagine doing today.

What brought me to numerical notation as a dissertation subject is much of what still appeals to me about it now. We have evidence from over 100 different systems used across every inhabited continent over 5,000 years, including all the world’s literate traditions. Numbers are a ubiquitous domain of human existence, and written numerals are virtually everywhere that there is writing. While, of course, the historical and archaeological records are partial (which is in turn both exciting and frustrating), understanding their history and cross-cultural transmission is a tractable problem. We can tell, roughly, when and where they originate and how they relate to one another.

Also, every user of a numerical notation system is also a speaker of one or more languages, which lets us ask great questions comparing number words to numerical notation and to show how they interact. These questions can be as simple as “Do people say ‘two thousand twenty one’ or ‘twenty twenty one’?” and as big as “Were numbers first visual marks or spoken words?” As a linguist and an anthropologist, that’s very attractive. Because there is a significant and large literature on numerical cognition, the comparative, historical data I bring to the table is useful for testing and expanding on our knowledge in that interdisciplinary area.

PL: You had the cover image and title for this book in your head for years. Can you explain the significance of the watch and why you chose the title “Reckonings” in the first place? What were you trying to get across to potential readers with that evocative word?

SC: The title ‘Reckonings’ invokes the triple meaning of the word ‘reckon’ — to calculate, to think, and to judge — which parallels the three parts of the subtitle: “Numerals, Cognition, and History.” Reckoning is not mathematics, in its technical, disciplinary sense, but it reflects the everyday practices of working with and manipulating numbers. Then, in English and in other languages, we extend the verb for calculation to thinking in general — to reckon thus involves the more general cognitive questions I hope I’ve addressed. Finally, we come to reckoning as judgement — every numerical notation faces its own reckoning as users decide whether to adopt, transmit, and eventually, abandon it. As I spend a lot of time talking about the obsolescence of numeral systems, most notably but not limited to the Roman numerals, I wanted to echo this decision-making process of judgement by which users decide to abandon one notation in favor of another. “Reckonings” signals that the book might be about arithmetic — but it’s about a lot more than that.

The cover image of the book is a watch designed by the French watchmaker Jean-Antoine Lépine in 1788, now held at the British Museum (BM 1958,1201.289). Lépine was one of the first horologists to consistently use Western (commonly called Arabic) numerals instead of Roman numerals for hour markers, but in the 1780s he made a number of watches like this one, where he instead playfully mixed the two systems. The hybridity on this sort of artifact is visually striking and memorable to the viewer, both then and now. But actually, it isn’t as weird as it seems; we combine numerical representations all the time, like when we write something like “1.2 million” instead of “1,200,000.” Unlike the Roman numerals alone, which would be visually ‘unbalanced’ on a watch, this hybrid system expresses every number from 1 through 12 in no more than two digits. To me it embodies the passage of time in material form and the replacement of the Roman numerals. By the 1780s, they had been replaced for most purposes, but watch and clock faces are one of the places where, even today, they’re pretty common. As a sort of metonym for this historical process, the Lépine watch highlights that the decline and fall of the Roman numerals was not a slow, steady, predictable replacement, but one with many disjunctures.
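
A small sketch (mine, not the book’s) makes the ‘unbalanced’ point concrete: in Roman notation the hours 1 through 12 run from one to four characters, while the hybrid dial never needs more than two digits:

```python
# Convert the hours 1-12 to (subtractive) Roman numerals and show how
# unevenly their lengths run. Traditional clock dials often use IIII for 4,
# which is even longer than the IV printed here.
ROMAN = [(10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]

def to_roman(n: int) -> str:
    """Convert a positive integer to Roman numerals."""
    out = []
    for value, symbol in ROMAN:
        while n >= value:
            out.append(symbol)
            n -= value
    return "".join(out)

for hour in range(1, 13):
    numeral = to_roman(hour)
    print(f"{hour:2d} -> {numeral:5s} ({len(numeral)} characters)")
# Lengths range from 1 (I, V, X) up to 4 (VIII): the visual imbalance
# the passage describes.
```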

PL: At the book launch, you talked a bit about the future of number systems, but with the caveat that you are not a “Futurologist.” So I’ll ask you to put on a historian’s hat instead: What kind of cultural changes are necessary for a society to switch from one number system to another? It seems to me that significant changes would have to happen at least at the political and economic level for one numerical system to supersede another, right?

SC: One of the key arguments in “Reckonings” is that . . .

Continue reading.

Written by Leisureguy

28 July 2021 at 11:49 am

Genetic Memory: How We Know Things We Never Learned

Darold Treffert published an interesting article in Scientific American in January 2015. It begins:

I met my first savant 52 years ago and have been intrigued with that remarkable condition ever since. One of the most striking and consistent things in the many savants I have seen is that they clearly know things they never learned.

Leslie Lemke is a musical virtuoso even though he has never had a music lesson in his life. Like “Blind Tom” Wiggins a century before him, his musical genius erupted so early and spontaneously as an infant that it could not possibly have been learned. It came ‘factory installed’. In both cases professional musicians witnessed and confirmed that Lemke and Wiggins somehow, even in the absence of formal training, had innate access to what can be called “the rules” or vast syntax of music.

Alonzo Clemons has never had an art lesson in his life. As an infant, after a head injury, he began to sculpt with whatever was handy — Crisco, for example — and now is a celebrated sculptor who can mold a perfect specimen of any animal with clay in an hour or less after only a single glance at the animal itself, every muscle and tendon perfectly positioned. He has had no formal training.

To explain the savant, who has innate access to the vast syntax and rules of art, mathematics, music and even language, in the absence of any formal training and in the presence of major disability, “genetic memory,” it seems to me, must exist along with the more commonly recognized cognitive/semantic and procedural/habit memory circuits.

Genetic memory, simply put, is complex abilities and actual sophisticated knowledge inherited along with other more typical and commonly accepted physical and behavioral characteristics. In savants the music, art or mathematical “chip” comes factory installed. In addition to the examples mentioned above, I describe others in my book, Islands of Genius: The Bountiful Mind of the Autistic, Acquired and Sudden Savant.

Genetic memory is not an entirely new concept. In 1940, A.A. Brill quoted Dr. William Carpenter who, in comparing math prodigy Zerah Colburn’s calculating powers to Mozart’s mastery of musical composition, wrote the following:

In each of the foregoing cases, then, we have a peculiar example of the possession of an extraordinary congenital aptitude for certain mental activity, which showed itself at so early a period as to exclude the notion that it could have been acquired by the experience of the individual. To such congenital gifts we give the name of intuitions: it can scarcely be questioned that like the instincts of the lower animals, they are the expressions of constitutional tendencies embodied in the organism of the individuals who manifest them.

Carl Jung used the term “collective unconscious” to define his even broader concept of inherited traits, intuitions and collective wisdom of the past.

Wilder Penfield, in his pioneering 1978 book Mystery of the Mind, also referred to three types of memory. “Animals,” he wrote, “particularly show evidence of what might be called racial memory” (this would be the equivalent of genetic memory). He lists the second type of memory as that associated with “conditioned reflexes” and a third type as “experiential.” The latter two types would be consistent with the terminology commonly applied to “habit or procedural” memory and “cognitive or semantic” memory.

In his 1998 book, The Mind’s Past, Michael Gazzaniga wrote:

The baby does not learn trigonometry, but knows it; does not learn how to distinguish figure from ground, but knows it; does not need to learn, but knows, that when one object with mass hits another, it will move the object … The vast human cerebral cortex is chock full of specialized systems ready, willing and able to be used for specific tasks. Moreover, the brain is built under tight genetic control … As soon as the brain is built, it starts to express what it knows, what it comes with from the factory. And the brain comes loaded. The number of special devices that are in place and active is staggering. Everything from perceptual phenomena to intuitive physics to social exchange rules comes with the brain. These things are not learned; they are innately structured. Each device solves a different problem … the multitude of devices we have for doing what we do are factory installed; by the time we know about an action, the devices have already performed it.

Steven Pinker’s 2003 book The Blank Slate: The Modern Denial of Human Nature refutes the “blank slate” theories of human development. Brian Butterworth, in his 1999 book What Counts: How Every Brain Is Hardwired for Math, points out that babies have many specialized innate abilities, including numerical ones that he attributes to a “number module” encoded in the human genome from ancestors 30,000 years ago.

Marshall Nirenberg, of the National Heart Institute, provided insight into the actual DNA/RNA mechanics of this innate knowledge in an article titled “Genetic Memory,” published in 1968 in JAMA.

Whether called genetic, ancestral or racial memory, or intuitions or congenital gifts, the concept of a genetic transmission of sophisticated knowledge well beyond instincts is necessary to explain how prodigious savants can know things they never learned.

We tend to think of ourselves as being born with a magnificent and intricate piece of organic machinery (“hardware”) we call the brain, along with a massive but blank hard drive (memory). What we become, it is commonly believed, is an accumulation and culmination of our continuous learning and life experiences, which are added one by one to memory. But the prodigious savant apparently comes already programmed with a vast amount of innate skill and knowledge in his or her area of expertise — factory-installed “software,” one might say — which accounts for the extraordinary abilities over which the savant innately shows mastery in the face of often massive cognitive and other learning handicaps. It is an area of memory function worthy of much more exploration and study.

Indeed recent cases of “acquired savants” or “accidental genius” have convinced me that we all have such factory-installed software. I discussed some of those cases in detail in . . .

Continue reading.

Written by Leisureguy

22 July 2021 at 11:27 am

Arianna Rosenbluth Changed the World Before Leaving Science Behind

By and large, society during my generation treated women badly (and by “society” I mean “men,” but also organizations, which were overwhelmingly managed by men, and social conventions). This article provides one example, but there are many others. One example often offered is how Crick and Watson used Rosalind Franklin’s findings but failed to credit her. That particular story does not correspond to the facts, but true examples abound — for example, how long Wally Funk had to wait before she was finally able to take a suborbital flight yesterday.

Here’s an example that is true, which Anastasia Carrier recounts in the Harvard Gazette. She writes:

A few years ago, Jean Rosenbluth was visiting her mother at a nursing home in Pasadena. The occasion was a holiday party, and Jean and her husband were seated with her mother and another couple. It came up in conversation that the man sharing the table was a history of science professor, specializing in physics.

“Oh, my mother was a physicist,” Jean said as she introduced her mother. “This is Arianna Rosenbluth.”

The professor was stunned. “Wait, the Arianna Rosenbluth?” Arianna smiled shyly and kept eating her lemon meringue pie.

Arianna Wright Rosenbluth, who received a master’s degree in physics from Radcliffe College in 1947, was one of five scientists who created the revolutionary Metropolis algorithm—the first practical implementation of what are now known as the Markov Chain Monte Carlo methods, go-to tools for solving large, complex mathematical and engineering problems.

Over the years, these methods have been used to simulate both quantum physics and markets, predict genetic predisposition to certain illnesses, forecast the outcomes of political conflicts, and model the spread of infectious diseases. It was Rosenbluth who found a way to get early computers to use the Markov Chain method, creating a blueprint that others followed.
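
For readers curious about what the algorithm Rosenbluth implemented actually does, here is a minimal sketch of Metropolis sampling in Python; the target distribution and step size are illustrative choices, not details from the 1953 paper:

```python
import math
import random

def metropolis(log_density, x0, n_samples, step=1.0):
    """Minimal 1-D Metropolis sampler.

    Propose a symmetric random move from the current point; accept it with
    probability min(1, p(proposal)/p(current)), otherwise stay put. The
    chain of visited points is asymptotically distributed according to the
    target density.
    """
    x = x0
    samples = []
    for _ in range(n_samples):
        proposal = x + random.uniform(-step, step)
        log_alpha = log_density(proposal) - log_density(x)
        if log_alpha >= 0 or random.random() < math.exp(log_alpha):
            x = proposal   # accept the move
        samples.append(x)  # a rejected move repeats the current point
    return samples

# Example: sample from a standard normal (unnormalized log-density).
chain = metropolis(lambda x: -0.5 * x * x, x0=0.0, n_samples=50_000)
print(sum(chain) / len(chain))  # sample mean, close to 0
```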

“Arianna’s impact would last for a long time,” says Xihong Lin, a professor of biostatistics at the Harvard T.H. Chan School of Public Health, who used Markov Chain Monte Carlo methods to analyze a large set of COVID-19 data from Wuhan and to calculate the infectiousness of the virus. The methods have also helped specialists evaluate the effectiveness of quarantine and stay-at-home measures.

“Without Rosenbluth, I don’t think the field of Markov Chain Monte Carlo would go that far,” says Lin, referring to the role of the Radcliffe-trained scientist in enabling wide use of the tool across disciplines. “Implementation is critically important. That’s why her contribution is a landmark and really should be emphasized—should be honored.”

The paper that Rosenbluth coauthored—along with her then-husband, Marshall Rosenbluth, Edward and Augusta Teller, and Nicholas Metropolis—was published in 1953, but the algorithm’s origin story remained a mystery for five decades. In 2003, Marshall shared his memory of the achievement during a conference celebrating its 50th anniversary. The researchers developed the tool to illuminate how atoms rearranged themselves as solids melted, he said. Marshall did most of the conceptual work, and Arianna translated their idea into a computer algorithm—a task that required a fundamental understanding of physics and computer science, and also creativity.

By all accounts, Rosenbluth, who died of COVID-19 complications in December at age 93, was brilliant. She earned her PhD in physics at Harvard at 21 and in her short career worked under two physicists who went on to earn Nobel Prizes. And yet she effectively quit science in her late 20s, leaving her job at the Los Alamos Scientific Laboratory to be a stay-at-home mother. She rarely spoke about her time in the lab—although she sometimes mentioned to her children how irritating it was that her ideas were overlooked because she was a woman trying to make it in a male-dominated field. Other times, she would lovingly describe MANIAC I—the Los Alamos machine that she used for computing the Metropolis algorithm.

“She was ahead of her time,” says Pierre E. Jacob, the John L. Loeb Associate Professor of the Natural Sciences and a professor of statistics in the Harvard Faculty of Arts and Sciences, whose work involves Markov chains and probability modeling. In his syllabus, he renamed the Metropolis algorithm the Rosenbluth algorithm after reading about Arianna’s death.

“Better late than never,” he says.

Star on the Rise

Growing up in Houston, Arianna Wright was a mystery to her parents.

“Her mom and dad had this genius child, and they kind of didn’t know what to do with her,” says Mary Rosenbluth, one of Arianna’s four children. Leffie (Woods) Wright was confused by her quiet and introspective daughter, who didn’t care for fashion and rules but loved reading, especially fantasy books like L. Frank Baum’s Wizard of Oz series. Mary recalls a newspaper article among her mother’s things that described Arianna as a child genius.

“It kind of struck me,” she says. “Here’s this girl growing up in suburban Houston, and she was just so different from everybody else.”

Arianna received a full-ride scholarship to Rice Institute (now Rice University) in Houston and took a bus to her classes. She earned her bachelor’s when she was 18, with honors in physics and mathematics. During her college days, she fenced against men as well as women, winning city and state championships. She qualified for the Summer Olympics in 1944, but World War II led to the cancellation of the games. She qualified again four years later but couldn’t afford to travel to London.

At Harvard, Arianna was rejected by one potential advisor because he didn’t take female PhD students, says Alan Rosenbluth, Arianna’s oldest child and a retired physicist. That was not uncommon. “Women were discouraged every step of the way,” says Margaret W. Rossiter, a Cornell historian of women in science. But Arianna forged ahead, in 1949 becoming just the fifth woman to earn a PhD in physics from Harvard.

She accepted a postdoctoral fellowship funded by the Atomic Energy Commission to study at Stanford University, where she . . .

Continue reading. There’s much more.

It strikes me that Arianna’s recognition of how to express the ideas of Markov chains in code for a computer bears a passing resemblance to Zhi Bingyi’s recognition of how to express Chinese characters using the Latin alphabet on a keyboard (as described in the previous post).

Written by Leisureguy

21 July 2021 at 11:29 am

The Standard Model: The Most Successful Scientific Theory Ever

Written by Leisureguy

16 July 2021 at 4:51 pm

Posted in Math, Science

The Big Bang: What We Know and How We Know It

8 Gravitons has an interesting article about the Big Bang. The article begins:

When most people think of the Big Bang, they imagine a single moment: a whole universe emerging from nothing. That’s not really how it worked, though. The Big Bang refers not to one event, but to a whole scientific theory. Using Einstein’s equations and some simplifying assumptions, we physicists can lay out a timeline for the universe’s earliest history. Different parts of this timeline have different evidence: some are meticulously tested, others we even expect to be wrong! It’s worth talking through this timeline and discussing what we know about each piece, and how we know it.

We can see surprisingly far back in time. As we look out into the universe, we see each star as it was when the light we see left it: longer ago the further the star is from us. Looking back, we see changes in the types of stars and galaxies: stars formed without the metals that later stars produced, galaxies made of those early stars. We see the universe become denser and hotter, until eventually we reach the last thing we can see: the cosmic microwave background, a faint light that fills our view in every direction. This light represents a change in the universe, the emergence of the first atoms. Before this, there were ions: free nuclei and electrons, forming a hot plasma. That plasma constantly emitted and absorbed light. As the universe cooled, the ions merged into atoms, and light was free to travel. Because of this, we cannot see back beyond this point. Our model gives detailed predictions for this curtain of light: its temperature, and even the ways it varies in intensity from place to place, which in turn let us hone our model further.

In principle, we could “see” a bit further. Light isn’t the only thing that travels freely through the universe. Neutrinos are almost massless, and pass through almost everything. Like the cosmic microwave background, the universe should have a cosmic neutrino background. This would come from much earlier, from an era when the universe was so dense that neutrinos regularly interacted with other matter. We haven’t detected this neutrino background yet, but future experiments might. Gravitational waves meanwhile, can also pass through almost any obstacle. There should be gravitational wave backgrounds as well, from a variety of eras in the early universe. Once again these haven’t been detected yet, but more powerful gravitational wave telescopes may yet see them.

We have indirect evidence a bit further back than we can see things directly. In the heat of the early universe the first protons and neutrons were merged via nuclear fusion, becoming the first atomic nuclei: isotopes of hydrogen, helium, and lithium. Our model lets us predict the proportions of these, how much helium and lithium per hydrogen atom. We can then compare this to the oldest stars we see, and see that the proportions are right. In this way, we know something about the universe from before we can “see” it.

We get surprised when we look at the universe on large scales, and compare widely separated regions. We find those regions are surprisingly similar, more than we would expect from randomness and the physics we know. Physicists have proposed different explanations for this. The most popular, cosmic inflation, suggests that the universe expanded very rapidly, accelerating so that a small region of similar matter was blown up much larger than the ordinary Big Bang model would have, projecting those similarities across the sky. While many think this proposal fits the data best, we still aren’t sure it’s the right one: there are alternate proposals, and it’s even controversial whether we should be surprised by the large-scale similarity in the first place.

We understand, in principle, how matter can come from “nothing”. This is sometimes presented as the most mysterious part of the Big Bang, the idea that matter could spontaneously emerge from an “empty” universe. But to a physicist, this isn’t very mysterious. Matter isn’t actually conserved, mass is just energy you haven’t met yet. Deep down, the universe is just a bunch of rippling quantum fields, with different ones more or less active at different times. Space-time itself is just another field, the gravitational field. When people say that in the Big Bang matter emerged from nothing, all they mean is that energy moved from the gravitational field to fields like the electron and quark, giving rise to particles. As we wind the model back, we can pretty well understand how this could happen.

If we extrapolate, winding Einstein’s equations back all the way, we reach . . .

Continue reading.

See also Kevin Drum’s post “Here’s the theory of relativity in 500 words,” in which he covers both Special Relativity (time and space) and General Relativity (gravitation).

Written by Leisureguy

15 July 2021 at 1:30 pm

Posted in Math, Science

Mathematicians Prove 2D Version of Quantum Gravity Works

with one comment

Charlie Wood writes in Quanta:

Alexander Polyakov, a theoretical physicist now at Princeton University, caught a glimpse of the future of quantum theory in 1981. A range of mysteries, from the wiggling of strings to the binding of quarks into protons, demanded a new mathematical tool whose silhouette he could just make out.

“There are methods and formulae in science which serve as master keys to many apparently different problems,” he wrote in the introduction to a now famous four-page letter in Physics Letters B. “At the present time we have to develop an art of handling sums over random surfaces.”

Polyakov’s proposal proved powerful. In his paper he sketched out a formula that roughly described how to calculate averages of a wildly chaotic type of surface, the “Liouville field.” His work brought physicists into a new mathematical arena, one essential for unlocking the behavior of theoretical objects called strings and building a simplified model of quantum gravity.

Years of toil would lead Polyakov to breakthrough solutions for other theories in physics, but he never fully understood the mathematics behind the Liouville field.

Over the last seven years, however, a group of mathematicians has done what many researchers thought impossible. In a trilogy of landmark publications, they have recast Polyakov’s formula using fully rigorous mathematical language and proved that the Liouville field flawlessly models the phenomena Polyakov thought it would.

“It took us 40 years in math to make sense of four pages,” said Vincent Vargas, a mathematician at the French National Center for Scientific Research and co-author of the research with Rémi Rhodes of Aix-Marseille University, Antti Kupiainen of the University of Helsinki, François David of the French National Center for Scientific Research, and Colin Guillarmou of Paris-Saclay University.

The three papers forge a bridge between the pristine world of mathematics and the messy reality of physics — and they do so by breaking new ground in the mathematical field of probability theory. The work also touches on philosophical questions regarding the objects that take center stage in the leading theories of fundamental physics: quantum fields.

“This is a masterpiece in mathematical physics,” said Xin Sun, a mathematician at the University of Pennsylvania.

Infinite Fields

In physics today, the main actors in the most successful theories are fields — objects that fill space, taking on different values from place to place.

In classical physics, for example, a single field tells you everything about how a force pushes objects around. Take Earth’s magnetic field: The twitches of a compass needle reveal the field’s influence (its strength and direction) at every point on the planet.

Fields are central to quantum physics, too. However, the situation here is more complicated due to the deep randomness of quantum theory. From the quantum perspective, Earth doesn’t generate one magnetic field, but rather an infinite number of different ones. Some look almost like the field we observe in classical physics, but others are wildly different.

But physicists still want to make predictions — predictions that ideally match, in this case, what a mountaineer reads on a compass. Assimilating the infinite forms of a quantum field into a single prediction is the formidable task of a “quantum field theory,” or QFT. This is a model of how one or more quantum fields, each with their infinite variations, act and interact.

Driven by immense experimental support, QFTs have become the basic language of particle physics. The Standard Model is one such QFT, depicting fundamental particles like electrons as fuzzy bumps that emerge from an infinitude of electron fields. It has passed every experimental test to date (although various groups may be on the verge of finding the first holes).

Physicists play with many different QFTs. Some, like the Standard Model, aspire to model real particles moving through the four dimensions of our universe (three spatial dimensions plus one dimension of time). Others describe exotic particles in strange universes, from two-dimensional flatlands to six-dimensional uber-worlds. Their connection to reality is remote, but physicists study them in the hopes of gaining insights they can carry back into our own world.

Polyakov’s Liouville field theory is one such example.

Gravity’s Field

The Liouville field, which is based on an equation from complex analysis developed in the 1800s by the French mathematician Joseph Liouville, describes a completely random two-dimensional surface — that is, a surface, like Earth’s crust, but one in which the height of every point is chosen randomly. Such a planet would erupt with mountain ranges of infinitely tall peaks, each assigned by rolling a die with infinite faces.

Such an object might not seem like an informative model for physics, but randomness is not devoid of patterns. The bell curve, for example, tells you how likely you are to randomly pass a seven-foot basketball player on the street. Similarly, bulbous clouds and crinkly coastlines follow random patterns, but it’s nevertheless possible to discern consistent relationships between their large-scale and small-scale features.

Liouville theory can be used to identify patterns in the endless landscape of all possible random, jagged surfaces. Polyakov realized this chaotic topography was essential for modeling strings, which trace out surfaces as they move. The theory has also been applied to describe quantum gravity in a two-dimensional world. Einstein defined gravity as space-time’s curvature, but translating his description into the language of quantum field theory creates an infinite number of space-times — much as the Earth produces an infinite collection of magnetic fields. Liouville theory packages all those surfaces together into one object. It gives physicists the tools to measure the curvature — and hence, gravitation — at every location on a random 2D surface.
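For readers who want to see the object itself: in the standard presentation (my paraphrase of the usual textbook form; conventions for the constants vary from paper to paper), averages over these random surfaces are written as a path integral weighted by the Liouville action:

```latex
% Averages over random surfaces: each height configuration \phi is
% weighted by the exponential of the Liouville action S_L.
\[
  \langle F \rangle = \int F(\phi)\, e^{-S_L(\phi)}\, D\phi,
  \qquad
  S_L(\phi) = \frac{1}{4\pi} \int \Bigl( |\nabla\phi|^2
      + Q R\,\phi + 4\pi\mu\, e^{\gamma\phi} \Bigr)\, d^2x,
  \qquad
  Q = \frac{2}{\gamma} + \frac{\gamma}{2}.
\]
```

Here the parameter γ controls how rough the random surface is; giving rigorous meaning to the measure Dφ is precisely the kind of problem the mathematicians named above had to solve.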

“Quantum gravity basically means random geometry, because quantum means random and gravity means geometry,” said Sun.

Polyakov’s first step in exploring the world of random surfaces was to write down an expression defining the odds of finding a particular . . .

Continue reading.

Written by Leisureguy

17 June 2021 at 2:15 pm

Posted in Daily life, Math, Science

A Random Walk through the English Language

leave a comment »

Andrei Andreyevich Markov, the atheist (left) and Pavel Alekseevich Nekrasov, the believer (right). Credit: Alamy

Jordan Ellenberg writes in Scientific American:

Here’s a game Claude Shannon, the founder of information theory, invented in 1948. He was trying to model the English language as a random process. Go to your bookshelf, pick up a random book, open it and point to a random spot on the page, and mark the first two letters you see. Say they’re I and N. Write down those two letters on your page.

Now, take another random book off the shelf and look through it until you find the letters I and N in succession. Whatever the character following “IN” is—say, for instance, it’s a space—that’s the next letter of your book. And now you take down yet another book and look for an N followed by a space, and once you find one, mark down what character comes next. Repeat until you have a paragraph:

“IN NO IST LAT WHEY CRATICT FROURE BIRS GROCID PONDENOME OF DEMONSTURES OF THE REPTAGIN IS REGOACTIONA OF CRE”

That isn’t English, but it kind of looks like English.

Shannon was interested in the “entropy” of the English language, a measure, in his new framework, of how much information a string of English text contains. The Shannon game is a Markov chain; that is, it’s a random process where the next step you take depends only on the current state of the process. Once you’re at LA, the “IN NO IST” doesn’t matter; the chance that the next letter is, say, a B is the probability that a randomly chosen instance of “LA” in your library is followed by a B.
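For readers who want to try this without a bookshelf, here is a minimal Python sketch of Shannon’s digram game in the spirit of the passage above (the file name `corpus.txt` is a placeholder for whatever plain-text file you have on hand):

```python
import random
from collections import defaultdict

# Build a digram model: for every pair of characters in the corpus,
# record each character that was seen to follow that pair.
text = open("corpus.txt").read().upper()
followers = defaultdict(list)
for i in range(len(text) - 2):
    followers[text[i:i + 2]].append(text[i + 2])

# Start at a random two-character state and walk the chain.
state = random.choice(list(followers))
output = state
for _ in range(200):
    if state not in followers:
        break
    nxt = random.choice(followers[state])  # frequency-weighted, since duplicates are kept
    output += nxt
    state = state[1] + nxt  # the chain remembers only the last two characters
print(output)
```

The last line of the loop is the whole point: forgetting everything but the current two-letter state is exactly what makes this a Markov chain.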

And as the name suggests, the method wasn’t original to him; it was almost a half-century older, and it came from, of all things, a vicious mathematical/theological beef in late-czarist Russian math.

There’s almost nothing I think of as more inherently intellectually sterile than verbal warfare between true religious believers and movement atheists. And yet, this one time at least, it led to a major mathematical advance, whose echoes have been bouncing around ever since. One main player, in Moscow, was Pavel Alekseevich Nekrasov, who had originally trained as an Orthodox theologian before turning to mathematics. His opposite number, in St. Petersburg, was his contemporary Andrei Andreyevich Markov, an atheist and a bitter enemy of the church. He wrote a lot of angry letters to the newspapers on social matters and was widely known as Neistovyj Andrei, “Andrei the Furious.”

The details are a bit much to go into here, but the gist is this: Nekrasov thought he had found a mathematical proof of free will, ratifying the beliefs of the church. To Markov, this was mystical nonsense. Worse, it was mystical nonsense wearing mathematical clothes. He invented the Markov chain as an example of random behavior that could be generated purely mechanically, but which displayed the same features Nekrasov thought guaranteed free will.

A simple example of a Markov chain: a spider walking on a triangle with corners labeled 1, 2, 3. At each tick of the clock, the spider moves from its present perch to one of the other two corners it’s connected to, chosen at random. So, the spider’s path would be a string of numbers.

1, 2, 1, 3, 2, 1, 2, 3, 2, 3, 2, 1 …
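A minimal sketch of the spider’s walk, with the triangle’s corners hard-coded:

```python
import random

# From each corner of the triangle, the spider can move to the other two.
moves = {1: (2, 3), 2: (1, 3), 3: (1, 2)}

corner = 1
path = [corner]
for _ in range(11):
    corner = random.choice(moves[corner])  # the next step depends only on the current corner
    path.append(corner)
print(", ".join(map(str, path)))  # e.g. 1, 2, 1, 3, 2, ...
```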

Markov started with abstract examples like this, but later (perhaps inspiring Shannon?) applied this idea to strings of text, among them Alexander Pushkin’s poem Eugene Onegin. Markov thought of the poem, for the sake of math, as a string of consonants and vowels, which he laboriously cataloged by hand. Letters after consonants are 66.3 percent vowels and 33.7 percent consonants, while letters following vowels are only 12.8 percent vowels and 87.2 percent consonants.

So, you can produce “fake Pushkin” just as Shannon produced fake English; if the current letter is a vowel, the next letter is a vowel with probability 12.8 percent, and if the current letter is a consonant, the next one is a vowel with probability 66.3 percent. The results are not going to be very poetic; but, Markov discovered, they can be distinguished from the Markovized output of other Russian writers. Something of their style is captured by the chain.
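Using the two transition probabilities quoted above, a fake-Pushkin generator takes only a few lines. This is my own toy rendering, not Markov’s procedure verbatim; V stands for vowel, C for consonant:

```python
import random

# Markov's Eugene Onegin figures, as quoted above:
# after a consonant the next letter is a vowel 66.3% of the time;
# after a vowel the next letter is a vowel only 12.8% of the time.
p_vowel_after = {"C": 0.663, "V": 0.128}

state = "C"
sequence = [state]
for _ in range(59):
    state = "V" if random.random() < p_vowel_after[state] else "C"
    sequence.append(state)
print("".join(sequence))  # the vowel/consonant rhythm of ersatz Pushkin
```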

Nowadays, the Markov chain is a fundamental tool for exploring spaces of conceptual entities much more general than poems. It’s how election reformers identify which legislative maps are brutally gerrymandered, and it’s how Google figures out which Web sites are most important (the key is a Markov chain where at each step you’re at a certain Web site, and the next step is to follow a random link from that site). What a neural net like GPT-3 learns—what allows it to produce uncanny imitation of human-written text—is a gigantic Markov chain that counsels it how to pick the next word after a sequence of 500, instead of the next letter after a sequence of two. All you need is a rule that tells you what probabilities govern the next step in the chain, given what the last step was.
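The Google example can be sketched the same way. Here is a toy random-surfer chain on an invented four-page web (the link graph and damping value are illustrative, not Google’s actual configuration):

```python
# Each page links to some others; the surfer follows a random link with
# probability `damping`, otherwise jumps to a page chosen uniformly.
links = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
pages = sorted(links)
damping = 0.85

rank = {p: 1 / len(pages) for p in pages}
for _ in range(50):  # power iteration: repeatedly apply one step of the chain
    new = {p: (1 - damping) / len(pages) for p in pages}
    for page, outlinks in links.items():
        for target in outlinks:
            new[target] += damping * rank[page] / len(outlinks)
    rank = new
print(rank)  # pages that attract more random-surfer traffic score higher
```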

You can train your Markov chain on your home library, or on Eugene Onegin, or on the huge textual corpus to which GPT-3 has access; you can train it on anything, and the chain will imitate that thing! You can train it on baby names from 1971, and get: . . .

Continue reading.

Written by Leisureguy

12 June 2021 at 7:10 pm

Posted in Math

A Number Theorist Who Connects Math to Other Creative Pursuits

leave a comment »

In Quanta Steve Nadis interviews Jordan Ellenberg:

“There are many different pathways into mathematics,” said Jordan Ellenberg, a mathematician at the University of Wisconsin, Madison. “There is the stereotype that interest in math displays itself early. That is definitely not true in general. It’s not the universal story — but it is my story.”

That account was backed up by a biostatistician at the University of Pennsylvania — his mother, Susan Ellenberg. “Jordan recognized numbers before he could walk,” she said. “We’d be going someplace with him, and he’d start to call out numbers, and his father and I would have to figure out where he was seeing them. Each night, he’d ask me to teach him something new about math.” When he was in second grade, a local teacher began taking him through the high school math curriculum. Ever since, he’s been preoccupied with mathematics — though not exclusively so.

After graduating from Harvard University in 1993, Ellenberg completed a one-year master’s program in fiction writing at Johns Hopkins University, where he wrote a novel that was published a decade later, titled The Grasshopper King. But he always felt that he would eventually return to mathematics, and in 1994 he entered a doctoral program back at Harvard, pursuing research under the supervision of Barry Mazur, a number theorist.

“Barry was a great adviser and a very learned guy,” Ellenberg said. “One of the things he showed me is that it’s OK to be interested in things other than math. Through him I saw that being in a university isn’t just about being in the math department, but rather being part of a whole world of scholarship.”

Ellenberg has taken that view to heart, finding mathematics to explore in everything from internet fads to voting rights. He has interacted and even collaborated with colleagues from many different fields and departments, while keeping up his writing — academic papers for math journals, and popular articles for newspapers and magazines. In 2001, he started writing a column for Slate called “Do the Math.” Many entries are not typical mathematician fare, such as “Algebra for Adulterers,” “Cooking the Books on Virginity,” and “What Broadway Musicals Tell Us About Creativity.”

His latest book, Shape, is all about geometry — though, as you might expect, it departs significantly from the traditional geometry of your high school days. Proving the congruence of triangles and the like, he said, bears little resemblance to the work of modern geometry. In the book’s introduction, Ellenberg confesses that it was a curious subject for him to have taken up: “Reader, let me be straight with you about geometry: at first I didn’t care for it.”

Quanta spoke with Ellenberg earlier this month about geometry, electoral math and creativity. The interview has been condensed and edited for clarity.

When did you first realize there was something special about math?

When I was 6 years old, I was in the living room, gazing at the rectangular pattern of holes on a speaker where the sound comes through. I noticed there were 6 rows of 8 holes and, equivalently, 8 columns of 6 holes. I knew that 6 × 8 equals 8 × 6. But at that moment, I grasped that this was a fact about the world, not just a fact from the multiplication tables. Mathematical knowledge, I realized, was something that existed on its own — something you could directly apprehend — and not just something you were taught.

That, for me, offered an early glimmer of the power of mathematical thinking — and the emotional force that comes with it. As teachers, we aspire for every kid to have that kind of experience of mathematical knowledge.

Mathematics is a diverse field. How did you decide to focus on number theory?

I went to graduate school not really knowing what I would work on. It was just after Andrew Wiles proved Fermat’s Last Theorem. There was so much energy and enthusiasm about number theory at that time. It seemed to be the most exciting thing going on.

Students often ask me: “How do I figure out what area of math is right for me?” I tell them that it’s all interesting. Every field of research has deep wonderful ideas. You just have to see what you fall into. And wherever you fall, there is excitement to be found.

Of all the possible subjects in math, why did you write a book on geometry, especially when you admit to having a mental block when it comes to visualizing things?

It’s true, I didn’t really take to high school geometry. There was a certain style — the Euclidean “theorem, statement, proof” approach — that did not vibe with me. That approach is certainly a part of geometry, but it happens to be a tiny part.

It’s also true that I have difficulty with some geometric things. For example, when you have to put a credit card into a machine, I can’t follow the diagram and instead end up trying all four possibilities. If I’m on the first floor of my house and am asked about the layout of things in the room above me, I can’t really picture that. But it turns out that those skills aren’t so important when it comes to doing geometry.

Even though I steered clear of geometry when I was young, I later learned that you can’t maintain a dislike for any part of mathematics because all of its branches touch each other.

You also like to find mathematical connections even among ideas that don’t seem too mathematical, like pondering how many holes a straw has. Why bother answering that?

Well, it’s kind of an internet craze [with more than 60 million hits on Google]. It goes viral all the time, and you may wonder why people are so captivated by such a weird question. I’d say it’s actually a deep mathematical question, neither a triviality nor a cut-and-dried matter. You could say one hole or two holes — or zero holes if you think about taking a rectangular piece of paper (with no holes in it) and rolling it up. It’s a way of getting people to understand topology and homology groups, which classify objects based on the number of holes they have.

It turns out there is a mathematical answer to this: Topologists would say the straw has just one hole. But the point is not just to give people an answer, but rather to show them why it’s an interesting problem. Although this question is settled, many of the things that are now settled in mathematics may not have been settled 100 or so years ago. People have fought hard over almost every single conceptual advance.
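As an aside for the mathematically inclined, the topologist’s one-hole answer is a short computation (standard algebraic topology, not from the interview itself):

```latex
% A straw is a hollow cylinder, which deformation-retracts onto a circle:
\[
  \text{straw} \;\simeq\; S^1 \times [0,1] \;\simeq\; S^1,
  \qquad
  H_1(\text{straw}) \cong H_1(S^1) \cong \mathbb{Z}.
\]
% One generator of the first homology group corresponds to exactly one hole.
```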

In 2019, you and 10 other mathematicians signed a brief about gerrymandering that was submitted to the Supreme Court. What does math have to do with that? . . .

Continue reading. There’s more.

Written by Leisureguy

29 May 2021 at 9:30 am

Posted in Books, Daily life, Math

There’s a hole at the bottom of math

leave a comment »

This is an excellent presentation of fascinating ideas (some of which I struggle with — the spectral gap, for instance), and enough is clearly presented to be pleasing and satisfying. Worth watching the entire 34 minutes.

Written by Leisureguy

26 May 2021 at 7:49 pm

Posted in Math, Memes, Philosophy, Science

When an algorithm taps you on the shoulder

leave a comment »

Matt Stroud reports in the Verge:

ROBERT MCDANIEL’S TROUBLES began with a knock on the door. It was a weekday in mid-2013, as he made lunch in the crowded three-bedroom house where he lives with his grandmother and several of his adult siblings.

When he went to answer the door, McDaniel discovered not one person, but a cohort of visitors: two police officers in uniform, a neighbor working with the police, and a muscular guy in shorts and a T-shirt sporting short, graying hair.

Police officers weren’t a new sight for McDaniel. They often drove down his tree-lined street in the Austin neighborhood of Chicago making stops and arrests. Out of the 775 homicides tracked by the Chicago Sun-Times in 2020, 72 happened in Austin. That’s almost 10 percent of the city’s murders, in a neighborhood that takes up just 3 percent of its total area. The City of Chicago puts out a “heat map” of where gun crimes occur, with areas of moderate shooting numbers shaded in blue or green. Red splotches represent large numbers — and hottest concentrations — of shootings. On the map, Austin is the color of a fire engine.

Still, this visit from authorities caught McDaniel off guard: at that point in time, he had nothing remotely violent on his criminal record — just arrests for marijuana-related offenses and street gambling. And despite two officers showing up at his front door with the cohort, neither of them, nor anyone else in the cohort, accused McDaniel of breaking the law. They were not there to arrest him. No one was there to investigate a crime. They just wanted to talk.

“I had no idea why these cops were here,” McDaniel says, recounting it to me years later. “I didn’t do shit to bring them here.”

He invited them into this home. And when he did, they told McDaniel something he could hardly believe: an algorithm built by the Chicago Police Department predicted — based on his proximity to and relationships with known shooters and shooting casualties — that McDaniel would be involved in a shooting. That he would be a “party to violence,” but it wasn’t clear what side of the barrel he might be on. He could be the shooter, he might get shot. They didn’t know. But the data said he was at risk either way.

McDaniel was both a potential victim and a potential perpetrator, and the visitors on his porch treated him as such. A social worker told him that they could help him if he was interested in finding assistance to secure a job, for example, or mental health services. And police were there, too, with a warning: from here on out, the Chicago Police Department would be watching him. The algorithm indicated Robert McDaniel was more likely than 99.9 percent of Chicago’s population to either be shot or to have a shooting connected to him. That made him dangerous, and top brass at the Chicago PD knew it. So McDaniel had better be on his best behavior.

The idea that a series of calculations could predict that he would soon shoot someone, or be shot, seemed outlandish. At the time, McDaniel didn’t know how to take the news.

But the visit set a series of gears in motion. This Kafka-esque policing nightmare — a circumstance in which police identified a man to be surveilled based on a purely theoretical danger — would seem to cause the thing it predicted, in a deranged feat of self-fulfilling prophecy. . .

Continue reading. There’s much more, and the story gets even more interesting. The “help” offered causes the problem it was intended to prevent.

Later in the article, Stroud points out one weakness built into the system:

Forecasting isn’t magic; it’s an educated guess about what might happen based on things that have already occurred. The data feeding forecasting software for police are typically built around police stops and arrests. That might sound straightforward and unbiased, but consider that US Department of Justice data show that African Americans are more than twice as likely to be arrested as white people. And if you’re Black, your likelihood of being stopped by a police officer can be nearly four times higher than if you’re white, depending on which city you live in, according to the Stanford Open Policing Project.

Building a forecasting model around data like these can run the risk of stigmatizing entire populations based on discriminatory data; a 2017 study from the Journal of Statistics and Public Policy found that arrests doubled in a quadrant of Los Angeles where its police department tested forecasting software. Another problem — exacerbated when forecasting programs do not disclose their sources of data — is that of “dirty data” being mixed with more straightforward crime reports: a 2019 study out of New York University’s AI Now Institute identified jurisdictions where inaccurate or falsified records were directly fed into the data. Chicago’s one of them.
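To see how such a feedback loop compounds, here is a toy simulation in the spirit of those studies. All the numbers are invented; this illustrates the mechanism, not any department’s actual model:

```python
import random

# Two districts with the SAME underlying offense rate, but patrols are
# allocated in proportion to past arrest counts, and arrests scale with patrols.
true_rate = {"district_1": 0.10, "district_2": 0.10}
arrests = {"district_1": 20, "district_2": 10}  # a small initial bias

for year in range(10):
    total = sum(arrests.values())
    for d in arrests:
        patrols = int(100 * arrests[d] / total)  # patrols follow the data
        arrests[d] += sum(random.random() < true_rate[d] for _ in range(patrols))

print(arrests)  # the initial bias compounds, though the districts are identical
```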

Which is all to say that forecasting can put entire populations at risk of over-policing — which has led to countless unnecessary police killings for relatively insignificant infractions. (Think George Floyd. And Michael Brown. Twelve-year-old Tamir Rice. Sandra Bland, Philando Castile, Walter Scott. Thirteen-year-old Adam Toledo, this year, in Chicago. Alton Sterling, Breonna Taylor, Ahmaud Arbery. The list goes on.)

Later still:

IN MCDANIEL’S VIEW, the heat list caused the harm its creators hoped to avoid: it predicted a shooting that wouldn’t have happened if it hadn’t predicted the shooting.

As the heat list continued to operate, researchers tore it to shreds. A 2016 paper published in the Journal of Experimental Criminology came to some troubling conclusions about the list that had, by then, been rebranded as the “Strategic Subject List,” or SSL. Among them: “The individuals on the SSL were considered to be ‘persons of interest’ to the CPD,” meaning that McDaniel’s description of being routinely targeted for surveillance and searches matched what researchers discovered. “Overall,” the report goes on, “there was no practical direction about what to do with individuals on the SSL, little executive or administrative attention paid to the pilot, and little to no follow-up with district commanders.”

The heat list wasn’t particularly predictive, it turned out. It wasn’t high-tech. Cops would just use the list as a way to target people.

There was another problem, too. . .

Written by Leisureguy

24 May 2021 at 1:41 pm

Labor shortage? or wage shortage?

leave a comment »

It’s as simple as black and white — and reflects the law of supply and demand (basic economics): as demand increases and supply remains the same, the price increases. If the demand for labor increases beyond the supply, the price goes up. Businesses certainly understand that if the demand for their widgets increases, they can charge more. That principle also applies to the labor required to make the widgets. Don’t businesses understand this basic principle?
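A back-of-the-envelope version with made-up linear curves makes the point (the numbers are mine, chosen only so the arithmetic comes out clean):

```python
# Toy labor market: supply = 100 * wage workers offered;
# demand = intercept - 50 * wage workers wanted.
def equilibrium_wage(demand_intercept, demand_slope=50, supply_slope=100):
    # supply_slope * w = demand_intercept - demand_slope * w  =>  solve for w
    return demand_intercept / (supply_slope + demand_slope)

print(equilibrium_wage(1200))  # 8.0  - baseline hourly wage
print(equilibrium_wage(1500))  # 10.0 - demand rises, supply unchanged, wage rises
```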

Written by Leisureguy

23 May 2021 at 8:25 am

Posted in Business, Daily life, Math

