## Archive for the ‘Math’ Category

## With Category Theory, Mathematics Escapes From Equality

One of the things I would do over if I lived my life again would be to dive deep into category theory when I was studying math in graduate school. I was on the threshold and turned back. Kevin Hartnett writes in *Quanta*:

The equal sign is the bedrock of mathematics. It seems to make an entirely fundamental and uncontroversial statement: These things are exactly the same.

But there is a growing community of mathematicians who regard the equal sign as math’s original error. They see it as a veneer that hides important complexities in the way quantities are related — complexities that could unlock solutions to an enormous number of problems. They want to reformulate mathematics in the looser language of equivalence.

“We came up with this notion of equality,” said Jonathan Campbell of Duke University. “It should have been equivalence all along.”

The most prominent figure in this community is Jacob Lurie. In July, Lurie, 41, left his tenured post at Harvard University for a faculty position at the Institute for Advanced Study in Princeton, New Jersey, home to many of the most revered mathematicians in the world.

Lurie’s ideas are sweeping on a scale rarely seen in any field. Through his books, which span thousands of dense, technical pages, he has constructed a strikingly different way to understand some of the most essential concepts in math by moving beyond the equal sign. “I just think he felt this was the correct way to think about mathematics,” said Michael Hopkins, a mathematician at Harvard and Lurie’s graduate school adviser.

Lurie published his first book, *Higher Topos Theory*, in 2009. The 944-page volume serves as a manual for how to interpret established areas of mathematics in the new language of “infinity categories.” In the years since, Lurie’s ideas have moved into an increasingly wide range of mathematical disciplines. Many mathematicians view them as indispensable to the future of the field. “No one goes back once they’ve learned infinity categories,” said John Francis of Northwestern University.

Yet the spread of infinity categories has also revealed the growing pains that a venerable field like mathematics undergoes whenever it tries to absorb a big new idea, especially an idea that challenges the meaning of its most important concept. “There’s an appropriate level of conservativity in the mathematics community,” said Clark Barwick of the University of Edinburgh. “I just don’t think you can expect any population of mathematicians to accept any tool from anywhere very quickly without giving them convincing reasons to think about it.”

Although many mathematicians have embraced infinity categories, relatively few have read Lurie’s long, highly abstract texts in their entirety. As a result, some of the work based on his ideas is less rigorous than is typical in mathematics.

“I’ve had people say, ‘It’s in Lurie somewhere,’” said Inna Zakharevich, a mathematician at Cornell University. “And I say, ‘Really? You’re referencing 8,000 pages of text.’ That’s not a reference, it’s an appeal to authority.”

Mathematicians are still grappling with both the magnitude of Lurie’s ideas and the unique way in which they were introduced. They’re distilling and repackaging his presentation of infinity categories to make them accessible to more mathematicians. They are performing, in a sense, the essential work of governance that must follow any revolution, translating a transformative text into day-to-day law. In doing so, they are building a future for mathematics founded not on equality, but on equivalence.

## Infinite Towers of Equivalence

Mathematical equality might seem to be the least controversial possible idea. Two beads plus one bead equals three beads. What more is there to say about that? But the simplest ideas can be the most treacherous.

Since the late 19th century, the foundation of mathematics has been built from collections of objects, which are called sets. Set theory specifies rules, or axioms, for constructing and manipulating these sets. One of these axioms, for example, says that you can add a set with two elements to a set with one element to produce a new set with three elements: 2 + 1 = 3.

On a formal level, the way to show that two quantities are equal is to pair them off: Match one bead on the right side of the equal sign with one bead on the left side. Observe that after all the pairing is done, there are no beads left over.

Set theory recognizes that two sets with three objects each pair exactly, but it doesn’t easily perceive all the different ways to do the pairing. You could pair the first bead on the right with the first on the left, or the first on the right with the second on the left, and so on (there are six possible pairings in all). To say that two plus one equals three and leave it at that is to overlook all the different ways in which they’re equal. “The problem is, there are many ways to pair up,” Campbell said. “We’ve forgotten them when we say equals.”
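The “six possible pairings” Campbell mentions are just the bijections between two three-element sets. As a minimal sketch (the element names are of course arbitrary), they can be enumerated directly:

```python
from itertools import permutations

left = ["a", "b", "c"]
right = ["x", "y", "z"]

# Each ordering of `right` gives one way to pair the two sets off:
# a bijection witnessing that they have the same size.
pairings = [list(zip(left, perm)) for perm in permutations(right)]

for p in pairings:
    print(p)

print(len(pairings))  # 3! = 6 distinct pairings
```

Writing “3 = 3” records only that some pairing exists; the list above is the information that the equal sign forgets.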

This is where equivalence creeps in. While equality is a strict relationship — either two things are equal or they’re not — equivalence comes in different forms.

When you can exactly match each element of one set with an element in the other, that’s a strong form of equivalence. But in an area of mathematics called homotopy theory, for example, two shapes (or geometric spaces) are equivalent if you can stretch or compress one into the other without cutting or tearing it.

From the perspective of homotopy theory, a flat disk and a single point in space are equivalent — you can compress the disk down to the point. Yet it’s impossible to pair points in the disk with points in the point. After all, there’s an infinite number of points in the disk, while the point is just one point.

Since the mid-20th century mathematicians have tried to develop an alternative to set theory in which it would be more natural to do mathematics in terms of equivalence. In 1945 the mathematicians Samuel Eilenberg and Saunders Mac Lane introduced a new fundamental object that had equivalence baked right into it. They called it a category.

Categories can be filled with anything you want. You could have a category of mammals, which would collect all the world’s hairy, warm-blooded, lactating creatures. Or you could make categories of mathematical objects: sets, geometric spaces or number systems.

A category is a set with extra metadata: a description of all the ways that two objects are related to one another, which includes a description of all the ways two objects are equivalent. You can also think of categories as geometric objects in which each element in the category is represented by a point.

Imagine, for example, the surface of a globe. Every point on this surface could represent a different type of triangle. Paths between those points would express equivalence relationships between the objects. From the perspective of category theory, you forget about the explicit way in which any one object is described and focus instead on how an object is situated among all other objects of its type.

“There are lots of things we think of as things when they’re actually relationships between things,” Zakharevich said. “The phrase ‘my husband,’ we think of it as an object, but you can also think of it as a relationship to me. There is a certain part of him that’s defined by his relationship to me.”

Eilenberg and Mac Lane’s version of a category was well suited to keeping track of strong forms of equivalence. But in the second half of the 20th century, mathematicians increasingly began to do math in terms of weaker notions of equivalence such as homotopy. “As math gets more subtle, it’s inevitable that we have this progression towards these more subtle notions of sameness,” said Emily Riehl, a mathematician at Johns Hopkins University. In these subtler notions of equivalence, the amount of information about how two objects are related increases dramatically. Eilenberg and Mac Lane’s rudimentary categories were not designed to handle it.

To see how the amount of information increases, first remember our sphere that represents many triangles. Two triangles are homotopy equivalent if you can stretch or otherwise deform one into the other. Two points on the surface are homotopy equivalent if there’s a path linking one with the other. By studying homotopy paths between points on the surface, you’re really studying different ways in which the triangles represented by those points are related. . . .

## Social physics

Ian Stewart, an emeritus professor of mathematics at Warwick University in the UK whose latest book is *Do Dice Play God?* (2019), writes in *Aeon*:

In Isaac Asimov’s novel *Foundation* (1951), the mathematician Hari Seldon forecasts the collapse of the Galactic Empire using psychohistory: a calculus of the patterns that occur in the reaction of the mass of humanity to social and economic events. Initially put on trial for treason, on the grounds that his prediction *encourages* said collapse, Seldon is permitted to set up a research group on a secluded planet. There, he investigates how to minimise the destruction and reduce the subsequent period of anarchy from 30,000 years to a mere 1,000.

Asimov knew that predicting large-scale political events over periods of millennia is not really plausible. But we all do suspend this disbelief when reading fiction. No Jane Austen fan gets upset to be told that Elizabeth Bennet and Mr Darcy didn’t actually exist. Asimov was smart enough to know that such forecasting, however accurate it might be, is vulnerable to any large disturbance that hasn’t been anticipated, even in principle. He also understood that readers who happily swallowed psychohistory would realise the same thing. In the second volume of the series, just such a ‘black swan’ event derails Seldon’s plans. However, Seldon has a contingency plan, one that the series later reveals also brings some surprises.

Asimov’s *Foundation* series is notable for concentrating on the political machinations of the key groups, instead of churning out page upon page of space battles between vast fleets armed to the teeth. The protagonists receive regular reports of such battles, but the description is far from a Hollywood treatment. The plot, as Asimov himself stated, is modelled on Edward Gibbon’s book *The History of the Decline and Fall of the Roman Empire* (1776-89), a masterclass in planning on an epic scale for uncertainty. Every senior minister and civil servant should be obliged to read it.

Psychohistory, a fictional method for predicting humanity’s future, takes a hypothetical mathematical technique to extremes, for dramatic effect. But, for less ambitious tasks, we use the basic idea every day; for example, when a supermarket manager estimates how many bags of flour to put on the shelves, or an architect assesses the likely size of a meeting room when designing a building. The character of Seldon was to some extent inspired by Adolphe Quételet, one of the first to apply mathematics to human behaviour. Quételet was born in 1796 in Ghent in the Low Countries, now Belgium. Today’s obsessions with the promises and dangers of ‘big data’ and artificial intelligence are direct descendants of Quételet’s brainchild. He didn’t call it psychohistory, of course. He called it social physics.

The basic tools and techniques of statistics were born in the physical sciences, especially astronomy. They originated in a systematic method to extract information from observations subject to unavoidable errors. As the understanding of probability theory grew, a few pioneers extended the method beyond its original boundaries. Statistics became indispensable in biology, medicine, government, the humanities, even sometimes the arts. So it’s fitting that the person who lit the fuse was a pure mathematician turned astronomer, one who succumbed to the siren song of the social sciences.

Quételet bequeathed to posterity the realisation that, despite all the vagaries of free will and circumstance, the behaviour of humanity in bulk is far more predictable than we like to imagine. Not perfectly, by any means, but, as they say, ‘good enough for government work’. He also left us two specific ideas: *l’homme moyen*, the ‘average man’, and the ubiquity of the normal probability distribution, better known as the bell curve. Both are useful tools that opened up new ways of thinking, and that have serious flaws if taken too literally or applied too widely.

Quételet gained the first doctorate awarded by the newly founded University of Ghent. His thesis was on conic sections, a topic that also fascinated Ancient Greek geometers, who constructed important curves – ellipse, parabola, hyperbola – by slicing a cone with a plane. For a time, he taught mathematics, until his election to the Royal Academy of Brussels in 1820 propelled him into a 50-year career in the scholarly stratosphere as the central figure of Belgian science.

Around that time, Quételet joined a movement to found a new observatory. He didn’t know much astronomy, but he was a born entrepreneur and he knew his way around the labyrinths of government. His first step was to secure a promise of government funding. Then he took measures to remedy his ignorance of the subject that the observatory was to study. In 1823, at government expense, he headed for Paris to study with leading astronomers, meteorologists and mathematicians. He learned astronomy and meteorology from François Arago and Alexis Bouvard, and probability theory from Joseph Fourier.

At that time, astronomers were pioneering the use of probability theory to improve measurements of planetary orbits despite inevitable observational errors. Learning these techniques from the experts sparked a lifelong obsession with the application of probability to statistical data. By 1826, Quételet was a regional correspondent for the statistical bureau of the Kingdom of the Low Countries.

One basic number has a strong effect on everything that happens, and will happen, in a country: its population. If you don’t know how many people you’ve got, it’s difficult to plan. You can guesstimate, but you might well end up wasting a lot of money on unnecessary infrastructure, or underestimating demand and causing a crisis. This is a problem that every nation still grapples with.

The natural way to find out how many people live in your country is to count them. Taking a census isn’t as easy as it might seem, however. People move around, and they hide themselves away to avoid being convicted of crimes or to avoid paying tax. In 1829, the Belgian government was planning a new census and Quételet, who had been working on historical population figures, joined the project. ‘The data that we have at present can only be considered provisional, and are in need of correction,’ he wrote. A full census is expensive, so it makes sense to estimate population changes between censuses. However, you can’t get away with estimates for long, and a census every 10 years is common. Quételet urged the government to carry out a new census, to get an accurate baseline for future estimates. However, he’d come back from Paris with an interesting idea, one he’d got from the great French mathematician Pierre-Simon de Laplace. If it worked, it would save a lot of money.

Laplace had calculated the population of France by multiplying together two numbers. The first was the number of births in the past year, which could be found from the registers of births, which were pretty accurate. The other was the ratio of the total population to the annual number of births – the reciprocal of the birth rate. Multiplying the two together gives the total population. At first sight this looks circular: you need to know the total population to find the ratio. Laplace’s idea was to sample: you could get a reasonable estimate of the ratio using sound sampling methods. Select a few reasonably typical areas, perform a full census in those, and compare the head count with the number of births in those areas. Laplace calculated that about 30 areas would be adequate to estimate the population of the whole of France.
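Laplace’s estimator is simple enough to write down in a few lines. This is a minimal sketch, and all the figures in it are hypothetical, chosen only so the arithmetic matches his “about 30 people per birth” ratio:

```python
# Laplace-style population estimate (hypothetical numbers throughout).
# Step 1: fully count the people in a few sampled districts.
# Step 2: note the registered births in those same districts.
# Step 3: scale the national birth register by the resulting ratio.

sample_population = 120_000   # full head count in the sampled districts
sample_births = 4_000         # registered births in those districts
national_births = 1_000_000   # births recorded in the whole country

# Ratio of people to births, estimated from the sample
# (the reciprocal of the birth rate):
ratio = sample_population / sample_births

estimated_population = national_births * ratio
print(f"{estimated_population:,.0f}")  # 30,000,000
```

The expensive full count is confined to the sample; the cheap, reliable birth registers do the rest of the work.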

The Belgian government, however, eschewed sampling and carried out a full census. Quételet seems to have rejected the soundness of sampling because of an intelligent, informed but misguided methodological criticism by Baron de Keverberg, an advisor to the state. Observing that birth rates in different regions depend on a bewildering variety of factors, the Baron concluded that it would be impossible to create a representative sample. Errors would accumulate, making the results useless. But he made two mistakes. One was to seek a representative sample, rather than settling for a random one. The other was to assume the worst case (sampling errors accumulate) rather than the typical case (errors mostly cancel each other out through random variation). Notably, Laplace had also assumed that the best way to sample a population was to select, in advance, regions considered to be in some sense representative of the whole, with a similar mix of rich and poor, educated and uneducated, male and female. Today, opinion polls are often still designed this way, in an effort to get good results from small samples. However, statisticians eventually discovered that a big-enough random sample is just as effective as one specially selected to be representative, and much simpler to obtain. But all this was in the future, and Belgium duly tried to count every single person.

Baron de Keverberg’s criticism of Quételet’s plans for the 1829 Belgian census had one useful effect: . . .
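The Baron’s “worst case” assumption is easy to check by simulation. The sketch below (an invented country with 1,000 regions of wildly varying birth rates, not any real data) shows that the error of a random sample’s average shrinks as the sample grows, rather than accumulating:

```python
import random

random.seed(1)

# Hypothetical country: 1,000 regions whose birth rates vary widely,
# just as Baron de Keverberg feared.
regions = [random.uniform(0.02, 0.05) for _ in range(1000)]
true_mean = sum(regions) / len(regions)

def mean_error(sample_size, trials=2000):
    """Average absolute error of a random sample's mean rate."""
    total = 0.0
    for _ in range(trials):
        sample = random.sample(regions, sample_size)
        total += abs(sum(sample) / sample_size - true_mean)
    return total / trials

# Bigger random samples give smaller errors: the fluctuations
# mostly cancel instead of piling up.
for n in (5, 30, 200):
    print(n, round(mean_error(n), 5))
```

No care is taken here to make the samples “representative”; random selection alone does the cancelling, which is the statisticians’ later discovery that the passage describes.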

## Kevin Drum takes an honest look at worker pay (in contrast to Michael Strain, who takes a dishonest look)

The epidemic of outright lying by the Right is unstoppable. Kevin Drum points out another deliberate effort to mislead, this time by Michael Strain. Read the post and look at the graph.

I am really tired of GOP lies.

## Math Duo Maps the Infinite Terrain of Minimal Surfaces

Erica Klarreich has an amazing article in *Quanta*. Do read it. The illustration above is the caption graphic for the article.

## How a Strange Grid Reveals Hidden Connections Between Simple Numbers

This *Quanta *article is fascinating. Math is *so* weird, given the simplicity of the premises, postulates, and axioms. The richness of what results is totally unexpected. Perhaps it is a type of emergence.

At any rate, I highly recommend the article.