Later On

A blog written for those whose interests more or less match mine.

Archive for the ‘Math’ Category

Landmark Math Proof Clears Hurdle in Top Erdős Conjecture

leave a comment »

Erica Klarreich writes in Quanta:

A pair of mathematicians has solved the first chunk of one of the most famous conjectures about the additive properties of whole numbers. Proposed more than 60 years ago by the legendary Hungarian mathematician Paul Erdős, the conjecture asks when an infinite list of whole numbers will be sure to contain patterns of at least three evenly spaced numbers, such as 26, 29 and 32.

Erdős posed thousands of problems over the course of his career, but the question of which number lists contain evenly spaced numbers (what mathematicians call arithmetic progressions) was one of his all-time favorites. “I think many people regarded it as Erdős’ number-one problem,” said Timothy Gowers of the University of Cambridge. Gowers, who won the Fields Medal in 1998, has spent many hours trying to solve it. “Pretty well any additive combinatorialist who’s reasonably ambitious has tried their hand at it,” he said, referring to the branch of mathematics to which the conjecture belongs.

As a rule, a denser list of numbers has a higher chance of containing arithmetic progressions than a sparser list, so Erdős proposed a simple density test: Just add up the reciprocals of the numbers on your list. If your numbers are plentiful enough to make this sum infinite, Erdős conjectured that your list should contain infinitely many arithmetic progressions of every finite length — triples, quadruples and so forth.

Now, in a paper posted online on July 7, Thomas Bloom of Cambridge and Olof Sisask of Stockholm University have proved the conjecture when it comes to evenly spaced triples, like 5, 7 and 9. The pair has shown that whenever a number list’s sum of reciprocals is infinite, it must contain infinitely many evenly spaced triples.

“This result was kind of a landmark goal for a lot of years,” said Nets Katz of the California Institute of Technology. “It’s a big deal.”

One set whose reciprocals sum to infinity is the primes, those numbers divisible by only 1 and themselves. In the 1930s, Johannes van der Corput used the special structure of the primes to show that they do indeed contain infinitely many evenly spaced triples (such as 17, 23 and 29).
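
As a small illustration of both points (the evenly spaced triples among the primes, and their slowly diverging reciprocal sum), here is a quick Python sketch of my own, not taken from the article:

    # Sketch: 3-term arithmetic progressions among the primes, and the
    # (very slowly) growing sum of prime reciprocals. Illustration only.

    def primes_up_to(n):
        sieve = [True] * (n + 1)
        sieve[0:2] = [False, False]
        for p in range(2, int(n ** 0.5) + 1):
            if sieve[p]:
                sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
        return [i for i, is_prime in enumerate(sieve) if is_prime]

    primes = primes_up_to(10_000)
    prime_set = set(primes)

    # Triples p, p + d, p + 2d of primes. The gap d can be taken even,
    # since any progression of three odd primes has an even gap.
    triples = [(p, p + d, p + 2 * d)
               for p in primes for d in range(2, 50, 2)
               if p + d in prime_set and p + 2 * d in prime_set]
    print(triples[:5])                # starts (3, 5, 7), (3, 7, 11), ...
    print((17, 23, 29) in triples)    # True: the article's example

    # The reciprocal sum diverges, but only about as fast as log log N.
    print(round(sum(1 / p for p in primes), 2))   # roughly 2.5 for primes below 10,000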

But Bloom and Sisask’s new finding means that you don’t need a deep knowledge of the primes’ unique structure to prove that they contain infinitely many triples. All you need to know is that prime numbers are abundant enough for the sum of their reciprocals to be infinite — a fact mathematicians have known for centuries. “Thomas and Olof’s result tells us that even if the primes had a completely different structure to the one they actually have, the mere fact that there are as many primes as there are would ensure an infinitude of arithmetic progressions,” wrote Tom Sanders of the University of Oxford in an email.

The new paper is 77 pages long, and it will take time for mathematicians to check it carefully. But many feel optimistic that it is correct. “It really looks the way a proof of this result should look,” said Katz, whose earlier work laid much of the groundwork for this new result.

Bloom and Sisask’s theorem implies that as long as your number list is dense enough, certain patterns must emerge. The finding obeys what Sarah Peluse of Oxford called the fundamental slogan of this area of mathematics (originally stated by Theodore Motzkin): “Complete disorder is impossible.”

Density in Disguise

It’s easy to make an infinite list with no arithmetic progressions if you make the list sparse enough. For example, consider the sequence 1, 10, 100, 1,000, 10,000, … (whose reciprocals sum to the finite decimal 1.11111…). These numbers spread apart so rapidly that you can never find three that are evenly spaced.

You might wonder, though, if there are significantly denser number sets that still avoid arithmetic progressions. You could, for example, walk down the number line and keep every number that doesn’t complete an arithmetic progression. This creates the sequence 1, 2, 4, 5, 10, 11, 13, 14, … , which looks pretty dense at first. But it becomes incredibly sparse as you move into higher numbers — for instance, by the time you get to 20-digit numbers, only about 0.000009% of the whole numbers up to that point are on your list. In 1946, Felix Behrend came up with denser examples, but even these become sparse very quickly — a Behrend set that goes up to 20-digit numbers contains about 0.001% of the whole numbers.
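
To get a feel for how quickly that greedy list thins out, here is a short sketch of my own (not from the article) that builds the list by exactly the rule just described: keep a number unless it would complete an evenly spaced triple with two numbers already kept.

    # Sketch: the greedy progression-free list 1, 2, 4, 5, 10, 11, 13, 14, ...
    # A candidate n is rejected if some kept pair a < b satisfies b - a = n - b,
    # i.e. if 2*b - n is already on the list for some kept b.

    kept, kept_set = [], set()
    for n in range(1, 10_000):
        if any(2 * b - n in kept_set for b in kept):
            continue
        kept.append(n)
        kept_set.add(n)

    print(kept[:8])    # [1, 2, 4, 5, 10, 11, 13, 14]
    print(len(kept))   # 512: only about 5% of the numbers up to 10,000 survive,
                       # and the fraction keeps shrinking as the list goes on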

At the other extreme, if your set includes almost all the whole numbers, it will definitely contain arithmetic progressions. But between these extremes is a vast, largely uncharted middle. How sparse can you make your set, mathematicians have wondered, and still be sure that it will have arithmetic progressions?

Erdős (perhaps in collaboration with the Hungarian mathematician Pál Turán, some say) provided one possible answer. His condition about the sum of reciprocals is a statement about density in disguise: It turns out to be the same as saying that the density of your list up to any number N is at least approximately 1 over the number of digits in N. In other words, it’s OK for your list to grow sparser as you go out along the number line, but only if it does so very slowly: Up through 5-digit numbers your list should have density at least about 1/5; up through 20-digit numbers it should have density at least about 1/20; and so forth. Provided this density condition is met, Erdős conjectured, your list should contain infinitely many arithmetic progressions of every length.
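
For the curious, here is one direction of that translation spelled out; this is my own back-of-the-envelope sketch, not part of Klarreich's article. Write A(t) for how many numbers on the list are at most t. Partial (Abel) summation turns the reciprocal sum into an integral of that counting function:

    \sum_{\substack{n \in A \\ n \le N}} \frac{1}{n}
      \;=\; \frac{A(N)}{N} \;+\; \int_{1}^{N} \frac{A(t)}{t^{2}}\,dt
      \;\ge\; c \int_{2}^{N} \frac{dt}{t \log t}
      \;=\; c\,\bigl(\log\log N - \log\log 2\bigr) \;\longrightarrow\; \infty,

where the inequality uses the density condition A(t) ≥ c·t/log t, that is, density at least roughly 1 over the number of digits of t. So a list that stays at least that dense automatically has an infinite sum of reciprocals; Erdős' conjecture is that this is exactly the regime in which progressions are forced.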

In 1953, Klaus Roth started mathematicians on a path toward proving Erdős’ conjecture.  . .

Continue reading.

Written by LeisureGuy

3 August 2020 at 1:25 pm

Posted in Math

Find the fallacy in this proof that all people in Canada are the same age

leave a comment »

Steps are numbered. If you click a step, you are told whether that is the fallacious step or not. Find the fallacious step — if you can do it with just one click, you understand logic.

Here’s the proof.

Written by LeisureGuy

17 July 2020 at 10:12 am

Posted in Math

The problem of confirmation theory

leave a comment »

I knew about this paradox — I encountered it in a logic course in grad school — and I thought I had blogged it, but I can’t find it. It’s a cute paradox.

Written by LeisureGuy

6 July 2020 at 3:00 pm

Posted in Math, Video

Today is Tau Day: Celebrate!

leave a comment »

For more information, TauDay.com.

And see also:

Written by LeisureGuy

26 June 2020 at 9:34 am

Posted in Math

The inscribed-rectangle problem

leave a comment »

For more about this, see Kevin Hartnett’s article in Quanta.

Written by LeisureGuy

26 June 2020 at 8:54 am

Posted in Math

The Hard Truth Of Poker — And Life: You’re Never ‘Due’ For Good Cards

leave a comment »

FiveThirtyEight has posted an excerpt from Maria Konnikova’s book:

Maria Konnikova is a New York Times bestselling author and contributor to The New Yorker with a doctorate in psychology. She decided to learn how to play poker to better understand the role of luck in our lives, examining the game through the lens of psychology and human behavior. This excerpt is adapted from her new book, “The Biggest Bluff: How I Learned to Pay Attention, Master Myself, and Win,” which is available June 23.


For many years, my life centered around studying the biases of human decision-making: I was a graduate student in psychology at Columbia, working with that marshmallow-tinted legend, Walter Mischel, to document the foibles of the human mind as people found themselves in situations where risk abounded and uncertainty ran high. Dissertation defended, I thought to myself, that’s that. I’ve got those sorted out. And in the years that followed, I would pride myself on knowing so much about the tools of self-control that would help me distinguish myself from my poor experimental subjects. Placed in a stochastic environment, faced with stress and pressure, I knew how I’d go wrong — and I knew precisely what to do when that happened.

Fast-forward to 2016. I have embarked on my latest book project, which has taken me into foreign territory: the world of No Limit Texas Hold ’em. And here I am, at my first-ever tournament. It’s a charity event. I’ve been practicing for weeks, playing online, running through hands, learning the contours of basic tournament poker strategy.

I get off to a rocky start, almost folding pocket aces, the absolute best hand you can be dealt, because I’m so nervous about messing up and disappointing my coach, Erik Seidel — a feared crusher considered one of the best poker players in the world. He’s the one who finagled this invitation for me in the first place, and I feel certain that I’m going to let him down. But somehow, I’ve managed to survive out of the starting gate, and a few hours in, I’m surprised to find myself starting to experience a new kind of feeling. This isn’t that hard. This is fun. I’m not half-bad.

This moment, this I’m not half-bad making its fleeting way through my brain, is the first time I notice a funny thing start to happen. It’s as if I’ve been cleaved in two. The psychologist part of my brain looks dispassionately on, noting everything the poker part of me is doing wrong. And the poker player doesn’t seem to be able to listen. Here, for instance, the psychologist is screaming a single word: overconfidence. I know that the term “novice” doesn’t even begin to describe me and that my current success is due mostly to luck. But then there’s the other part of me, the part that is most certainly thinking that maybe, just maybe, I have a knack for this. Maybe I’m born to play poker and conquer the world.

The biases I know all about in theory, it turns out, are much tougher to fight in practice. Before, I was working so hard on grasping the fundamentals of basic strategy that I didn’t have the chance to notice. Now that I have some of the more basic concepts down, the shortcomings of my reasoning hit me in the face. After an incredibly lucky straight draw on a hand I had no business playing — the dealer helpfully tells me as much with a “You’ve got to be kidding me” as I turn over my hand and win the pot — I find myself thinking maybe there’s something to the hot hand, the notion that a player is “hot,” or on a roll. Originally, it was taken from professional basketball, from the popular perception that a player with a hot hand, who’d made a few shots, would continue to play better and make more baskets. But does it actually exist — and does believing it exists, even if it doesn’t, somehow make it more real? In basketball, the psychologists Thomas Gilovich, Amos Tversky, and Robert Vallone argued it was a fallacy of reasoning — when they looked at the Boston Celtics and the Philadelphia 76ers, they found no evidence that the hot hand was anything but illusion. But in other contexts, mightn’t it play out differently? I’ve had the conventional thinking drilled into me, yet now I think I’m on a roll. I should bet big. Definitely bet big.

That idea suffers a debilitating blow after a loss with a pair of jacks — a hand that’s actually halfway decent. After a flop that has an ace and a queen on it — both cards that could potentially make any of my multiple opponents a pair higher than mine — I refuse to back down. I’ve had bad cards for the last half an hour. I deserve to win here! I lose over half my chips by refusing to fold — hello, sunk cost fallacy! We’ll be seeing you again, many times. And then, instead of reevaluating, I start to chase the loss: Doesn’t this mean I’m due for a break? I can’t possibly keep losing. It simply isn’t fair. Gambler’s fallacy — the faulty idea that probability has a memory. If you are on a bad streak, you are “due” for a win. And so I continue to bet when I should sit a few hands out.

It’s fascinating how that works, isn’t it? Runs make the human mind uncomfortable. In our heads, probabilities should be normally distributed — that is, play out as described. If a coin is tossed ten times, about five of those should be heads. Of course, that’s not how probability actually works — and even though a hundred heads in a row should rightly make us wonder if we’re playing with a fair coin or stuck in a Stoppardian alternate reality, a run of ten or twenty may well happen. Our discomfort stems from the law of small numbers: We think small samples should mirror large ones, but they don’t, really. The funny thing isn’t our discomfort. That’s understandable. It’s the different flavors that discomfort takes when the runs are in our favor versus not. The hot hand and the gambler’s fallacy are actually opposite sides of the exact same coin: positive recency and negative recency. We overreact to chance events, but the exact nature of the event affects our perception in a way it rightly shouldn’t.
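
A quick simulation makes that point concrete. This sketch is mine, not anything from Konnikova's book: it estimates how long the longest run of identical flips tends to be in ten and in a hundred tosses of a fair coin.

    # Sketch: typical length of the longest run of identical results
    # when a fair coin is tossed n times.
    import random

    def longest_run(n_flips):
        flips = [random.random() < 0.5 for _ in range(n_flips)]
        best = run = 1
        for prev, cur in zip(flips, flips[1:]):
            run = run + 1 if cur == prev else 1
            best = max(best, run)
        return best

    trials = 10_000
    for n in (10, 100):
        average = sum(longest_run(n) for _ in range(trials)) / trials
        print(n, round(average, 1))
    # Typically about 3 or 4 in ten tosses and about 7 in a hundred:
    # streaks that "feel" like they cannot be chance show up all the time.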

We have a mental image of the silly gamblers who think they’re due to hit the magic score, and it’s comforting to think that won’t be us, that we’ll recognize runs for what they are: statistical probabilities. But when it starts happening in reality, we get a bit jittery. “All these squalls to which we have been subjected are signs the weather will soon improve and things will go well for us,” Don Quixote tells his squire, Sancho Panza, in Miguel de Cervantes’s 1605 novel, “because it is not possible for the bad or the good to endure forever, from which it follows that since the bad has lasted so long a time, the good is close at hand.” We humans have wanted chance to be equitable for quite some time. Indeed, when we play a game in which chance doesn’t look like our intuitive view of it, we balk.

Frank Lantz has spent over twenty years designing games. When we meet at his office at NYU, where he currently runs the Game Center, he lets me in on an idiosyncrasy of game design. “In video games where there are random events — things like dice rolls — they often skew the randomness so that it corresponds more closely to people’s incorrect intuition,” he says. “If you flip heads twice in a row, you’re less likely to flip heads the third time. We know this isn’t actually true, but it feels like it should be true, because we have this weird intuition about large numbers and how randomness works.” The resulting games actually accommodate that wrongness so that people don’t feel like the setup is “rigged” or “unfair.” “So they actually make it so that you’re less likely to flip heads the third time,” he says. “They jigger the probabilities.”
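
Lantz doesn't give implementation details, so the sketch below is my own toy version of the idea, not any actual game engine's code: every time a side repeats, the odds of it repeating again get bent downward, which is exactly the direction our faulty intuition expects.

    # Toy "jiggered" coin: after a streak, the streaky side becomes less likely.
    # Illustration of the idea only; not taken from any real game.
    import random

    def skewed_flips(n, penalty=0.15):
        results, streak, p_heads = [], 0, 0.5
        for _ in range(n):
            heads = random.random() < p_heads
            results.append("H" if heads else "T")
            if len(results) > 1 and results[-1] == results[-2]:
                streak += 1
            else:
                streak = 0
            bend = min(penalty * streak, 0.4)
            p_heads = 0.5 - bend if results[-1] == "H" else 0.5 + bend
        return results

    print("".join(skewed_flips(40)))   # noticeably fewer long runs than a fair coin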

For a long time, Lantz was a serious poker player. And one of the reasons he loves the game is that . . .

Continue reading.

Written by LeisureGuy

22 June 2020 at 2:16 pm

Posted in Books, Daily life, Math, Science

Computation All the Way Down

leave a comment »

From Edge, and worth reading in full. In passing, I will point out that Plato already said that the universe is constructed from the basic reality of mathematics (cf. Timaeus). The talk begins:

We’re now in this situation where people just assume that science can compute everything, that if we have all the right input data and we have the right models, science will figure it out. If we learn that our universe is fundamentally computational, that throws us right into the idea that computation is a paradigm you have to care about. The big transition was from using equations to describe how everything works to using programs and computation to describe how things work. And that’s a transition that has happened after 300 years of equations. The transition time to using programs has been remarkably quick, a decade or two. One area that was a holdout, despite the transition of many fields of science into the computational models direction, was fundamental physics.

If we can firmly establish this fundamental theory of physics, we know it’s computation all the way down. Once we know it’s computation all the way down, we’re forced to think about it computationally. One of the consequences of thinking about things computationally is this phenomenon of computational irreducibility. You can’t get around it. That means we have always had the point of view that science will eventually figure out everything, but computational irreducibility says that can’t work. It says that even if we know the rules for the system, it may be the case that we can’t work out what that system will do any more efficiently than basically just running the system and seeing what happens, just doing the experiment so to speak. We can’t have a predictive theoretical science of what’s going to happen.

STEPHEN WOLFRAM is a scientist, inventor, and the founder and CEO of Wolfram Research. He is the creator of the symbolic computation program Mathematica and its programming language, Wolfram Language, as well as the knowledge engine Wolfram|Alpha. His most recent endeavor is The Wolfram Physics Project. He is also the author, most recently, of A Project to Find the Fundamental Theory of Physics. Stephen Wolfram’s Edge Bio Page

COMPUTATION ALL THE WAY DOWN

The question that I’m asking myself is how does the universe work? What is the lowest level machine code for how our universe works? The big surprise to me is that over the last six months or so, I think we’ve figured out a path to be able to answer that question.

There’s a lot of detail about how what we figured out about the path to that question relates to what’s already known in physics. Once we know this is the low-level machine code for the universe, what can we then ask ourselves about why we have this universe and not another? Can we ask questions like why does this universe exist? Why does any universe exist? Some of those are questions that people asked a couple thousand years ago.

Lots of Greek philosophers had their theories for how the universe fundamentally works. We’ve gotten many layers of physics and mathematics sophistication since then, but what I’m doing goes back to these core questions of how things fundamentally work underneath. For us, it’s this simple structure that involves elements and relations that build into hypergraphs that evolve in certain ways, and then these hypergraphs build into multiway graphs and multiway causal graphs. From pieces of the way those work, we see what relativity is, what quantum mechanics is, and so on.

One of the questions that comes about when you imagine that you might hold in your hand a rule that will generate our whole universe, how do you then think about that? What’s the way of understanding what’s going on? One of the most obvious questions is why did we get this universe and not another? In particular, if the rule that we find is a comparatively simple rule, how did we get this simple-rule universe?

The lesson since the time of Copernicus has been that our Earth isn’t the center of the universe. We’re not special in this or that way. If it turns out that the rule that we find for our universe is this rule that, at least to us, seems simple, we get to ask ourselves why we lucked out and got this universe with a simple rule. I have to say, I wasn’t expecting that there would be a good scientific answer to that question. One of the surprises from this project to try to find the fundamental theory of physics has been that we have an understanding of how that works.

There are three levels of understanding of how the universe works in this model of ours. It starts from what one can think of as atoms of space, these elements that are knitted together by connectivity to form what ends up behaving like the physical space in which we move. The first level of what’s going on involves these elements and rules that describe how elements connected in a particular way should be transformed to elements connected in some other way. This connectivity of the elements, when we look at, say, 10^100 or 10^400 of them, is what behaves like space as we’re familiar with it; and not only space but also all of the things that are in space—all the matter and particles—are just features of this underlying structure and its detailed way of connecting these elements together.

We’ve got this set of transformation rules that apply to those underlying elements. In this setup, space is a very different thing from time. One of the wrong turns of 20th-century physics was this idea that space and time should always be packaged together into this four-dimensional spacetime continuum. That’s wrong. Time is different from space. Time is the inexorable operation of computation in figuring out what the next state will be from previous states, whereas space corresponds to something more specific: in this particular case, the extent of the hypergraph that knits together these different elements.

From the idea of this hypergraph being rewritten through time, when you are an observer embedded within that hypergraph, the only thing you are ultimately sensitive to is the question of which events that happen inside this hypergraph affect which other ones. What are the causal relationships between different events in this process of time evolution? From that, you get what we call a causal graph of what events affect what other events. It turns out that special relativity and then general relativity emerge basically from properties of that causal graph.

In our way of thinking about fundamental physics, there are three levels of description that end up corresponding to general relativity—the theory of space and time and gravity—quantum mechanics, and then the third level, which is something different.

In the lowest level of these models that we’re constructing, the only thing we know about all of these elements is that they’re just things. We know which things are related to which other things; for example, if we say that there are relations that involve pairs of things—binary relations—then we can say we’ve got these things and there are pairs that are related. We can draw that as a mathematical graph or a network, where we’re just putting down points and joining them by a line. We happen to need a slight generalization of that, usually called a hypergraph in mathematics, where instead of just having relations between pairs of things, you can have relations between triples or quadruples of things.

You can’t represent that with just a line between two things. It’s like a bag of things that corresponds to each hyperedge. But that’s a detail not really important to the big picture. The thing that is relevant is that the underlying rules just say that some collection of elements that are related in a certain way are transformed to some other collection of elements related in some other way.

The whole operation of the universe consists of just rerunning that particular rule a gazillion times. Maybe the gazillion is about 10^400 for our universe, I’m not sure about that—that’s based on one estimate of how this might work.
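
To make the mechanics concrete, here is a toy sketch in Python of "apply one little transformation rule over and over." The rule is made up purely for illustration and has nothing to do with the project's actual candidate rules for our universe.

    # Toy hypergraph rewriting: relations are tuples of element ids, and one
    # made-up rule replaces each relation (x, y) by the pair (x, y), (y, z),
    # where z is a brand-new element. Illustration only.

    def step(relations, next_id):
        new_relations = []
        for (x, y) in relations:
            z, next_id = next_id, next_id + 1
            new_relations += [(x, y), (y, z)]
        return new_relations, next_id

    relations, next_id = [(0, 1)], 2
    for _ in range(6):
        relations, next_id = step(relations, next_id)

    print(len(relations), next_id)   # the relations double at every step: 64 after 6 steps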

The first level is to understand, as you apply these rules, what are the causal relationships between applying a rule in one place, then that rule produces certain output, and that output gets used when the rule is applied again in the same place or in a nearby place. You can draw this network, this graph, of the causal relationships of what output is needed to feed the input to another updating event. That causal graph turns out to be our representation of space and time.

That causal graph has properties that reproduce special relativity and then general relativity, the theory of gravity. That’s a feature of these models, that in the limit of a very large number of these little update rules, with certain assumptions—like the assumption that the limiting space of our universe is finite dimensional—it follows that what happens satisfies Einstein’s equations for general relativity. Then the next level of this is to apply these transformations to this hypergraph, to this collection of relations. But there might be many possible places where a particular transformation might apply, which one should I run? Which one should I do? The next piece of these models is to do all of them, and what you’ll build is what we call a multiway graph, which represents all possible updates that you can have done.

If you do one update it might allow you to do another update. If you don’t do that update, it wouldn’t allow you to do another update. It’s not saying just do everything. There’s still a lot of structural information in what could happen after what, and what can happen at the same time as what. So, this multiway graph turns out to be a representation of what in quantum mechanics people have thought about as the path integral. In classical mechanics, say you throw a ball, the ball moves in a particular definite trajectory. In quantum mechanics, the ball has many possible trajectories it follows, which are all weighted in a certain way, and what we observe corresponds to, say, some weighting or some combination of those trajectories.

In our models, that corresponds to what happens in this multiway graph, that there are these many possible paths that can be followed in the multiway graph. In quantum mechanics, we believe we measure definite things. It turns out it’s very elegant and wonderful that in relativity we’re used to this idea of reference frames, observers thinking about the universe in terms of their reference frame. Are they at rest? Are they traveling at a certain velocity? Are they accelerating? What is their state of motion? In quantum mechanics, we have this analog of reference frames, which we call quantum observation frames (QOF) that represent the way we’re choosing to experience this multiway system of possibilities.

In any case, one can reproduce the various results of quantum mechanics. We’re busily going through and trying to reproduce all the different things that show up in quantum mechanics. One of the things we can do is take, for example, quantum computers and compile all that formalism into these multiway graphs. If you’ve got a quantum computer that’s described in the standard formalism of quantum computing in this way, then you just run this program and you’ll get a multiway graph that basically implements the same thing. So that’s proof that these multiway graphs reproduce the physics of quantum computing.

In spacetime, a big result is Einstein’s equations, which say that the curvature of space depends on the presence of matter. If you have a thing that is following a straight line, let’s say you shoot a laser in some direction. Normally, you think the light from a laser just goes in a straight line. But when there’s a massive object, like a star or a black hole, the path of that laser light will be turned by the presence of that mass. Einstein’s equations describe how that turning works. They say that the curvature of space, the amount of turning, depends on the amount of energy momentum that exists in space.

In our multiway graph, we also think about paths through the multiway graph. We can also think about the presence of energy momentum in the multiway graph, the presence of energy momentum in the quantum system that is described by this multiway graph. Something really amazing happens, which is that Einstein’s equations in the classical idea of space and time turn out to be exactly Feynman’s path integral in quantum mechanics.

These various paths that are representing the possibilities in quantum mechanics are effectively being turned in this multiway space by the presence of energy momentum, or more specifically, by the presence of the Lagrangian density, which is a relativistically invariant analog of energy momentum. In other words, the core of quantum mechanics, which is the way that the phases work in the path integral, is the exact same phenomenon as the core of classical general relativity, the way that trajectories are turned by the presence of energy momentum in spacetime. That’s a pretty cool thing that I’m excited about.

When we think about this multiway system, we’re saying that . . .

Continue reading.

Written by LeisureGuy

19 June 2020 at 5:56 pm

Posted in Math, Science

Ptolemy’s magical theorem

with one comment

Full disclosure: I studied (and taught) Ptolemy’s Almagest, though it was only as I taught it that I realized what a terrific book it is.

I found this video quite fun:

Written by LeisureGuy

18 June 2020 at 3:03 pm

Posted in Math

In a Single Measure, Invariants Capture the Essence of Math Objects

leave a comment »

Erica Klarreich writes in Quanta Magazine:

When Lisa Piccirillo solved a decades-old mystery about the “Conway knot,” she had to overcome the knot’s uncanny ability to hoodwink some of the most powerful tools mathematicians have devised. Known as invariants, these tools form the backbone not just of knot theory but of many areas of mathematics, extracting essential features of mathematical objects and detecting when two objects are fundamentally different from each other.

As the name suggests, an invariant is an attribute that doesn’t vary as you change an object’s inessential features (where “inessential” means whatever you need it to in a particular context). An invariant is a distillation of some innate quality of the object, often in the form of a single number.

To take an example from topology, imagine covering a ball with stretchy netting that partitions the surface into shapes such as triangles and rectangles. The number of shapes will, of course, depend on the netting you use, as will the numbers of edges and corners. But mathematicians figured out centuries ago that a certain combination of these three numbers always comes out the same: the number of shapes plus the number of corners minus the number of edges.

If, for example, your netting partitions the sphere into a puffed-out tetrahedron (with four triangles, four corners and six edges), this number works out to 4 + 4 − 6 = 2. If your netting instead forms the pattern of a soccer ball (with a total of 32 hexagons and pentagons, 60 corners, and 90 edges), you again get 32 + 60 − 90 = 2. In some sense, the number 2 is an intrinsic feature of sphere-ness. This number (called the sphere’s Euler characteristic) doesn’t change if you stretch or distort the sphere, so it is what mathematicians call a topological invariant.

If you wrap a netting around a doughnut surface instead, you always get an Euler characteristic of 0. On a two-holed doughnut, you get −2. The Euler characteristic for surfaces belongs to a series of invariants that allow mathematicians to explore shapes in higher dimensions as well. It can help topologists distinguish between two shapes that are hard to visualize, since if they have different Euler characteristics, they cannot be the same topological shape.
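
The bookkeeping is simple enough to verify in a couple of lines; here is a tiny sketch (mine, not from the article) that checks the counts above and the doughnut case using a square-grid netting:

    # Sketch: Euler characteristic = shapes + corners - edges (F + V - E).

    def euler_characteristic(faces, corners, edges):
        return faces + corners - edges

    print(euler_characteristic(4, 4, 6))      # puffed-out tetrahedron -> 2
    print(euler_characteristic(32, 60, 90))   # soccer-ball netting    -> 2

    # A doughnut covered by an m-by-n grid of squares has m*n shapes,
    # m*n corners and 2*m*n edges, so the characteristic is 0.
    m, n = 5, 8
    print(euler_characteristic(m * n, m * n, 2 * m * n))   # torus -> 0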

Invariants are also used to study the 15-puzzle, a classic toy consisting of square tiles numbered 1 through 15 that you slide around in a 4-by-4 grid. The goal is to put a mixed-up arrangement of tiles in numerical order from left to right, starting from the top row. If you’d like to know whether a particular arrangement is solvable, there’s an invariant that gives you the answer. It outputs either “even” or “odd” depending on the sum of two numbers: the number of slides required to carry the blank square to the bottom right corner and the number of tile pairings that are in reverse numerical order (with the blank square representing tile 16).

Whenever you slide a tile into the empty square, both these numbers switch parity (evenness or oddness). So the parity of their sum never changes, meaning that it is an invariant of the sliding process. For the solved configuration this invariant is even, since both numbers are zero. So any configuration with an odd invariant is utterly hopeless.
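
Here is that invariant written out as a short sketch of my own, following the article's recipe (the blank counted as tile 16, and the number of slides equal to the blank's distance from the bottom-right corner):

    # Sketch: the 15-puzzle parity invariant. A board is a list of 16 numbers
    # read row by row, with 16 standing for the blank square.

    def invariant(board):
        idx = board.index(16)
        row, col = divmod(idx, 4)
        blank_slides = (3 - row) + (3 - col)          # slides to reach bottom-right
        inversions = sum(1 for i in range(16) for j in range(i + 1, 16)
                         if board[i] > board[j])      # pairs in reverse order
        return "even" if (blank_slides + inversions) % 2 == 0 else "odd"

    solved = list(range(1, 17))
    print(invariant(solved))                  # even: the solved configuration

    loyd = list(range(1, 17))
    loyd[13], loyd[14] = loyd[14], loyd[13]   # swap the 14 and 15 tiles
    print(invariant(loyd))                    # odd: utterly hopeless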

When it comes to knot theory, distinguishing between knots is a tricky business, since you can make a knot unrecognizable just by moving the strands of the loop around (mathematicians think of knots as occurring in closed loops rather than open strings, so they can’t be undone). Here, invariants are indispensable, and mathematicians have come up with dozens that distill different features of knots. But these invariants tend to have blind spots.

Take, for example, an invariant called tricolorability. A knot diagram is tricolorable if  . . .

Continue reading.

Written by LeisureGuy

4 June 2020 at 3:57 pm

Posted in Math

A physics-based Covid-19 risk estimator for leaving the house

leave a comment »

This Medium article by Elena Polosova is interesting. It begins:

So we’re quarantined. We’re social distancing, avoiding groups, and staying 6 feet apart as much as possible. But this still leaves so many questions!

  • What if the sidewalk is only 4 feet wide — should I #stayHomeSaveLives?
  • How does “riskiness of the hangout” scale with “length of the hangout”?
  • How risky is going to Costco vs. going to the corner store?
  • How does this all change if we’re wearing masks?

I’m a mathematician, and I’m quarantined in a community house with nine other people. I also worked part-time for three years as a network epidemiology research assistant at MIT. Burning questions like this came up at all my house meetings, so to sync our collective understandings, I made a physics-based activity risk model and fed our questions into it.

In the rest of this article, I’ll step through the answers, and I’ll show you how to use the model to answer questions of your own!

Disclaimer: all models are wrong, but some are useful. I think this one is useful, but please bear in mind that I made it in a week. The exact percentages are definitely inaccurate. I’m sharing because I think the general, directional trend information it reveals — distinguishing between a 1% risk and a 10% risk — is much better than no information at all.

The Model

How to Use It

Here’s the link to download my Jupyter notebook on GitHub. And, here’s a link to a Google spreadsheet version, if you prefer that format. The parameters are documented in the code — change them to match your scenario!

Update May 1st, 2020: Ezekiel Sebastine made a web app version, linked here! This is the most user-friendly, but also the least flexible.

Derivation

Many specifics of coronavirus transmission are still being debated, so I kept my math at a high level. I considered three widely-agreed-upon kinds of transmission:

  1. Surface-based: you touch an object that has virus on it, then touch your face.
  2. Warm-body-based: you come near a living, breathing infected person, and viral particles from their breathing then infect you in a diffusion-based way.
  3. Wildcard: a catch-all for everything else that’s beyond the scope of this model. Fluid dynamics of airborne particles are weird enough that this term is definitely nonzero, though it’s hard to say exactly how big it is. Probably small? . . .

Read the whole thing.
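
I haven't reproduced Polosova's notebook here, but the general shape of such a model (estimate a small probability for each transmission pathway, then combine independent pathways) fits in a few lines. The sketch below is my own drastic simplification with placeholder numbers, not her model; use her notebook or spreadsheet for real estimates.

    # Much-simplified sketch of combining per-pathway infection risks.
    # Every number below is a placeholder, NOT a value from Polosova's model.

    def combined_risk(pathway_risks):
        # Probability that at least one independent pathway causes infection.
        p_clear = 1.0
        for p in pathway_risks:
            p_clear *= (1.0 - p)
        return 1.0 - p_clear

    errand = {
        "surface":   0.002,   # touched surfaces, then face (placeholder)
        "warm_body": 0.010,   # time spent near other shoppers (placeholder)
        "wildcard":  0.001,   # everything the model can't capture (placeholder)
    }
    print(round(combined_risk(errand.values()), 4))   # about 0.013 for these made-up numbers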

Written by LeisureGuy

13 May 2020 at 9:16 am

Twelve Tones, by Vi Hart

leave a comment »

So far as I’m concerned, this is a must-watch.

Some thoughts I had as I watched:

One thing about learning a new language is that it requires the acquisition and understanding of new patterns, because two languages don’t really fully match up any more than the lives of people in the same line of work in the same village match up: there are commonalities, but there are also differences that are difficult to match.

A mild example from this morning: Esperanto has the word kuri, to run, which matches the English verb quite well; replacing to run with kuri just works: “He runs” and “Li kuras” mean the same thing.

But I could not think of a simple Esperanto verb that would match to walk. There is marŝi, but that definitely includes the idea of walking in step — it matches “to march,” not “to walk.” And there’s promeni, to walk to see sights or for exercise, but it has for me overtones of “to promenade,” “to stroll,” and “to amble.” I wanted a neutral word, in the same way that “to run” is neutral.

I posted a question in the Lernu forum, asking for an Esperanto verb that means “iri per piedoj” (to go by foot) or “iri piede” (to go footwise). An immediate response: piediri. And that does seem to match, and it also illustrates how in Esperanto (as in Forth) one constructs new words to do the exact job you want, whereas in English one must dig through the drawer of words to find the closest match and perhaps be satisfied with a phrase — though it should be pointed out that the poet’s role is to take current words and, through context, stretch them to take new shapes and do new jobs. By putting a word in a new context, the poet fills it with a different color and charges it with a different energy. And not only poets: writers of fiction and drama do the same — think of some of the significant words in (say) “Death of a Salesman,” or “Macbeth,” or some stories of Raymond Carver, and how the impact of those words in that context differs from their workaday use.

A second thought was how the real numbers, being a continuum, contain many numbers and properties that we can never know — very interesting numbers and very interesting properties, if we could only know them. I suppose one of the reasons mathematicians are constantly generalizing is that moving to a more general level you can get a kind of overarching “knowing” of a class and its structure that frees you of having to know the individual elements.

At any rate, I found it a fascinating video, and as I write this I’m listening to Schoenberg (via YouTube), and his music — at least this piece — is indeed very nice.

Enjoy.

Written by LeisureGuy

6 May 2020 at 5:43 pm

John Conway Solved Mathematical Problems With His Bare Hands

leave a comment »

Kevin Hartnett writes in Quanta:

In modern mathematics, many of the biggest advances are great elaborations of theory. Mathematicians move mountains, but their strength comes from tools, highly sophisticated abstractions that can act like a robotic glove, enhancing the wearer’s strength. John Conway was a throwback, a natural problem-solver whose unassisted feats often left his colleagues stunned.

“Every top mathematician was in awe of his strength. People said he was the only mathematician who could do things with his own bare hands,” said Stephen Miller, a mathematician at Rutgers University. “Mathematically, he was the strongest there was.”

On April 11, Conway died of COVID-19. The Liverpool, England, native was 82.

Conway’s contributions to mathematics were as varied as the stories people tell about him.

“Once he shook my hand and informed me that I was four handshakes away from Napoleon, the chain being: [me] — John Conway — Bertrand Russell — Lord John Russell – Napoleon,” said his Princeton University colleague David Gabai over email. Then there was the time Conway and one of his closest friends at Princeton, the mathematician Simon Kochen, decided to memorize the world capitals on a whim. “We decided to drop the mathematics for a while,” Kochen said, “and for a few weeks we’d go home and do, like, the western bulge of Africa or the Caribbean nations.”

Conway had the tendency — perhaps unparalleled among his peers — of jumping into an area of mathematics and completely changing it.

“A lot of the objects he studied are thought of by other mathematicians the way that he thought of them,” Miller said. “It’s as if his personality has been superimposed on them.”

Conway’s first big discovery was an act of self-preservation. In the mid-1960s he was a young mathematician looking to launch his career. On the recommendation of John McKay, he decided to try to prove something about the properties of a sprawling geometric object called the Leech lattice. It comes up in the study of the most efficient way to pack as many round objects in as little space as possible — an enterprise known as sphere packing.

To get a sense of what the Leech lattice is and why it’s important, first consider a simpler scenario. Imagine you wanted to fit as many circles as possible into a region of the standard Euclidean plane. You can do this by dividing the plane into one big hexagonal grid and inscribing the largest possible circle inside each hexagon. The grid, called a hexagonal lattice, serves as an exact guide for the best way to pack circles in two-dimensional space.
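
A quick back-of-the-envelope aside (mine, not Quanta's): each circle of radius r sits inside a regular hexagon of area 2√3·r², so this hexagonal arrangement fills

    \frac{\pi r^{2}}{2\sqrt{3}\, r^{2}} \;=\; \frac{\pi}{2\sqrt{3}} \;\approx\; 0.9069

of the plane, about 90.7 percent, which is the best any circle packing in two dimensions can do.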

In the 1960s, the mathematician John Leech came up with a different kind of lattice that he predicted would serve as a guide for the most efficient packing of 24-dimensional spheres in 24-dimensional space. (It later proved true.) This application to sphere packing made the Leech lattice interesting, but there were still many unknowns. Chief among them were the lattice’s symmetries, which can be collected into an object called a “group.”

In 1966, at McKay’s urging, Conway decided that he would discover the symmetry group of the Leech lattice, no matter how long it took.

“He sort of shut himself up in this room and said goodbye to his wife, and was [planning] to work all day every day for a year,” said Richard Borcherds, a mathematician at the University of California, Berkeley, and a former student of Conway’s.

But, as it turned out, the farewell was unnecessary. “He managed to calculate it in about 24 hours,” Borcherds said.

Rapid computation was one of Conway’s signature traits. It was a form of recreation for him. He devised an algorithm for quickly determining the day of the week for any date, past or future, and enjoyed inventing and playing games. He’s perhaps best known for creating the “Game of Life,” a mesmerizing computer program in which collections of cells evolve into new configurations based on a few simple rules.
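
The day-of-the-week algorithm mentioned here is Conway's "Doomsday" rule, which is compact enough to sketch in a few lines; this is my own rendering for Gregorian dates in the 1800s through 2100s, not code from the article.

    # Sketch of Conway's Doomsday rule. Certain easy-to-remember dates
    # ("doomsdays") in any given year all fall on the same weekday.

    DAYS = ["Sunday", "Monday", "Tuesday", "Wednesday",
            "Thursday", "Friday", "Saturday"]
    CENTURY_ANCHOR = {18: 5, 19: 3, 20: 2, 21: 0}   # 1800s, 1900s, 2000s, 2100s

    def is_leap(year):
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    def day_of_week(year, month, day):
        doomsdays = [4 if is_leap(year) else 3, 29 if is_leap(year) else 28,
                     14, 4, 9, 6, 11, 8, 5, 10, 7, 12]   # one per month
        y = year % 100
        doomsday = (CENTURY_ANCHOR[year // 100] + y + y // 4) % 7
        return DAYS[(doomsday + day - doomsdays[month - 1]) % 7]

    import datetime
    print(day_of_week(2020, 4, 11))                    # Saturday (the day Conway died)
    print(datetime.date(2020, 4, 11).strftime("%A"))   # Saturday: the library agrees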

After discovering the symmetries of the Leech lattice — a collection now known as the Conway group — Conway became interested in the properties of other similar groups. One of these was the aptly named “monster” group, a collection of symmetries that appear in 196,883-dimensional space.

In a 1979 paper called “Monstrous Moonshine,” Conway and Simon Norton conjectured a deep and surprising relationship between properties of the monster group and properties of a distant object in number theory called the j-function. They predicted that the dimensions in which the monster group operates match, almost exactly, the coefficients of the j-function. A decade later, Borcherds proved Conway and Norton’s “moonshine” conjecture, helping him win a Fields Medal in 1998.

Without Conway’s facility for computation and taste for grappling with examples, he and Norton might not even have thought to conjecture the moonshine relationship.

“In doing these examples they discovered this numerology,” Miller said. “[Conway] did it from the ground up; he didn’t come in with some magic wand. When he understood something, he understood it as well as anyone else did, and usually did it in his own unique way.”

Nine years before moonshine, Conway’s style of hands-on mathematics led him to a breakthrough in an entirely different area. In the field of topology, mathematicians study the properties of knots, which are like closed loops of string. Mathematicians are interested in classifying all types of knots. For example, if you attach the ends of an unknotted shoelace you get one type of knot. If you tie an overhand knot in the shoelace and then connect the ends, you get another.

But it’s not always that simple. If you take two closed loops and jumble each of them, the way a cat might play with a piece of string, you won’t necessarily be able to tell at a glance — even a long glance — whether or not they’re the same knot.

In the 19th century, a trio of British and American scientists — Thomas Kirkman, Charles Little and Peter Tait — labored to create a kind of periodic table of knots. Over the course of six years they classified the first 54 knots.

Conway, in a 1970 paper, came up with a more efficient way of doing the same job. His description — known as Conway notation — made it much easier to diagram the tangles and overlaps in a knot.

“What Little did in six years, it took him an afternoon,” said Marc Lackenby, a mathematician at the University of Oxford who studies knot theory.

And that wasn’t all. In the same paper, Conway made another major contribution to knot theory. Mathematicians studying knots have different types of tests they apply, which typically act as invariants, meaning that if the results come out as different for two knots, then the knots are different.

One of the most venerable tests in knot theory is the Alexander polynomial — a polynomial expression that’s based on the way a given knot crosses over itself. It’s a highly effective test, but it’s also slightly ambiguous. The same knot could yield multiple different (but very closely related) Alexander polynomials.

Conway managed to refine the Alexander polynomial, ironing out the ambiguity. The result was the invention of the Conway polynomial, which is now a basic tool learned by every knot theorist. . .

Continue reading.

Written by LeisureGuy

20 April 2020 at 1:37 pm

Posted in Math

Another soft-skin shave, and I think it was La Toja

with 2 comments

First, I want to thank again Chris R who in a comment pointed out the use of the timed release to avoid camera shake. This morning was overcast, plus I had to be at the supermarket by 7:00am, so I was taking this photo around 6:15am (not much daylight), and the lighting in the apartment is not bright. But the 2-second timed release delivered a crisp focus and without the glare lighting of a flash.

I’ve been noting shaves that result in my skin feeling particularly soft and supple and trying to guess at the cause — The Dead Sea shaving soap was one, and the Declaration Grooming + Chatillon Lux was another. I recalled that La Toja boasts wonderful skin-conditioning properties (see this earlier post), so I brought out my La Toja shave stick (and aftershave, though I suspect any magic resides in the soap).

Good prep and took my time with lathering. Since it’s a two-day stubble on Mondays, the shave already is set to be pretty good, and I went with the excellent Fine slant — totally wonderful if you keep the handle away from your face.

Three passes — a little resistance, so it gets a new blade now — and the result is again a totally smooth, soft, and supple skin. La Toja aftershave may also have helped — it’s quite a nice aftershave — but I credit the soap (for reasons found at the link above).

I really enjoy starting the week on such a positive note, and the local supermarket is getting its routine polished. One thing that has greatly improved service, and something I hope they will maintain, is using a single queue for multiple servers. This drastically cuts average wait time — plus it is fairer (first come, first served). See, for example, this post. I’ve observed that most post offices seem to have adopted the single-queue/multiple-servers model, as have banks and airlines. For some reason, though, supermarkets have, until now, resisted. I hope that they continue it post-pandemic.

Queuing theory is fascinating and counter-intuitive. In this post, I quote a brief piece on queuing theory. From that:

Suppose a small bank has only one teller. Customers take an average of 10 minutes to serve and they arrive at the rate of 5.8 per hour. What will the expected waiting time be? What happens if you add another teller?

We assume customer arrivals and customer service times are random (details later). With only one teller, customers will have to wait nearly five hours on average before they are served. But if you add a second teller, the average waiting time is not just cut in half; it goes down to about 3 minutes. The waiting time is reduced by a factor of 93x.
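
Those numbers check out against the standard M/M/c queueing formulas; here is a quick sketch of my own (not from the quoted post) that recomputes both waiting times using the Erlang C formula:

    # Sketch: average wait in an M/M/c queue (random arrivals, random service),
    # for the example above: arrivals 5.8/hour, service rate 6/hour (10 minutes).
    from math import factorial

    def avg_wait_hours(arrival_rate, service_rate, servers):
        a = arrival_rate / service_rate          # offered load
        rho = a / servers                        # utilization, must be < 1
        tail = (a ** servers / factorial(servers)) / (1 - rho)
        erlang_c = tail / (sum(a ** k / factorial(k) for k in range(servers)) + tail)
        return erlang_c / (servers * service_rate - arrival_rate)

    print(round(avg_wait_hours(5.8, 6, 1) * 60))      # ~290 minutes with one teller
    print(round(avg_wait_hours(5.8, 6, 2) * 60, 1))   # ~3 minutes with two tellers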

Why was the wait so long with one teller? There’s . . .

There’s more.

Written by LeisureGuy

20 April 2020 at 8:55 am

Conway’s game of Life emulated in Life

leave a comment »

The notes to the video explain.

Written by LeisureGuy

17 April 2020 at 5:15 pm

Posted in Games, Math, Software

Finally We May Have a Path to the Fundamental Theory of Physics… and It’s Beautiful

leave a comment »

A fascinating and lengthy work in progress by Stephen Wolfram, which begins:

It’s unexpected, surprising—and for me incredibly exciting. To be fair, at some level I’ve been working towards this for nearly 50 years. But it’s just in the last few months that it’s finally come together. And it’s much more wonderful, and beautiful, than I’d ever imagined.

In many ways it’s the ultimate question in natural science: How does our universe work? Is there a fundamental theory? An incredible amount has been figured out about physics over the past few hundred years. But even with everything that’s been done—and it’s very impressive—we still, after all this time, don’t have a truly fundamental theory of physics.

Back when I used to do theoretical physics for a living, I must admit I didn’t think much about trying to find a fundamental theory; I was more concerned about what we could figure out based on the theories we had. And somehow I think I imagined that if there was a fundamental theory, it would inevitably be very complicated.

But in the early 1980s, when I started studying the computational universe of simple programs I made what was for me a very surprising and important discovery: that even when the underlying rules for a system are extremely simple, the behavior of the system as a whole can be essentially arbitrarily rich and complex.

And this got me thinking: Could the universe work this way? Could it in fact be that underneath all of this richness and complexity we see in physics there are just simple rules? I soon realized that if that was going to be the case, we’d in effect have to go underneath space and time and basically everything we know. Our rules would have to operate at some lower level, and all of physics would just have to emerge.

By the early 1990s I had a definite idea about how the rules might work, and by the end of the 1990s I had figured out quite a bit about their implications for space, time, gravity and other things in physics—and, basically as an example of what one might be able to do with science based on studying the computational universe, I devoted nearly 100 pages to this in my book A New Kind of Science.

I always wanted to mount a big project to take my ideas further. I tried to start around 2004. But pretty soon I got swept up in building Wolfram|Alpha, and the Wolfram Language and everything around it. From time to time I would see physicist friends of mine, and I’d talk about my physics project. There’d be polite interest, but basically the feeling was that finding a fundamental theory of physics was just too hard, and only kooks would attempt it.

It didn’t help that there was something that bothered me about my ideas. The particular way I’d set up my rules seemed a little too inflexible, too contrived. In my life as a computational language designer I was constantly thinking about abstract systems of rules. And every so often I’d wonder if they might be relevant for physics. But I never got anywhere. Until, suddenly, in the fall of 2018, I had a little idea.

It was in some ways simple and obvious, if very abstract. But what was most important about it to me was that it was so elegant and minimal. Finally I had something that felt right to me as a serious possibility for how physics might work. But wonderful things were happening with the Wolfram Language, and I was busy thinking about all the implications of finally having a full-scale computational language.

But then, at our annual Summer School in 2019, there were two young physicists (Jonathan Gorard and Max Piskunov) who were like, “You just have to pursue this!” Physics had been my great passion when I was young, and in August 2019 I had a big birthday and realized that, yes, after all these years I really should see if I can make something work.

So—along with the two young physicists who’d encouraged me—I began in earnest in October 2019. It helped that—after a lifetime of developing them—we now had great computational tools. And it wasn’t long before we started finding what I might call “very interesting things”. We reproduced, more elegantly, what I had done in the 1990s. And from tiny, structureless rules out were coming space, time, relativity, gravity and hints of quantum mechanics.

We were doing zillions of computer experiments, building intuition. And gradually things were becoming clearer. We started understanding how quantum mechanics works. Then we realized what energy is. We found an outline derivation of my late friend and mentor Richard Feynman’s path integral. We started seeing some deep structural connections between relativity and quantum mechanics. Everything just started falling into place. All those things I’d known about in physics for nearly 50 years—and finally we had a way to see not just what was true, but why.

I hadn’t ever imagined anything like this would happen. I expected that we’d start exploring simple rules and gradually, if we were lucky, we’d get hints here or there about connections to physics. I thought maybe we’d be able to have a possible model for the first  seconds of the universe, but we’d spend years trying to see whether it might actually connect to the physics we see today.

In the end, if we’re going to have a complete fundamental theory of physics, we’re going to have to find the specific rule for our universe. And I don’t know how hard that’s going to be. I don’t know if it’s going to take a month, a year, a decade or a century. A few months ago I would also have said that I don’t even know if we’ve got the right framework for finding it.

But I wouldn’t say that anymore. Too much has worked. Too many things have fallen into place. We don’t know if the precise details of how our rules are set up are correct, or how simple or not the final rules may be. But at this point I am certain that the basic framework we have is telling us fundamentally how physics works.

It’s always a test for scientific models to compare how much you put in with how much you get out. And I’ve never seen anything that comes close. What we put in is about as tiny as it could be. But what we’re getting out are huge chunks of the most sophisticated things that are known about physics. And what’s most amazing to me is that at least so far we’ve not run across a single thing where we’ve had to say “oh, to explain that we have to add something to our model”. Sometimes it’s not easy to see how things work, but so far it’s always just been a question of understanding what the model already says, not adding something new.

At the lowest level, the rules we’ve got are about as minimal as anything could be. (Amusingly, their basic structure can be expressed in a fraction of a line of symbolic Wolfram Language code.) And in their raw form, they don’t really engage with all the rich ideas and structure that exist, for example, in mathematics. But as soon as we start looking at the consequences of the rules when they’re applied zillions of times, it becomes clear that they’re very elegantly connected to a lot of wonderful recent mathematics.

There’s something similar with physics, too. The basic structure of our models seems alien and bizarrely different from almost everything that’s been done in physics for at least the past century or so. But as we’ve gotten further in investigating our models something amazing has happened: we’ve found that not just one, but many of the popular theoretical frameworks that have been pursued in physics in the past few decades are actually directly relevant to our models.

I was worried this was going to be one of those “you’ve got to throw out the old” advances in science. It’s not. Yes, the underlying structure of our models is different. Yes, the initial approach and methods are different. And, yes, a bunch of new ideas are needed. But to make everything work we’re going to have to build on a lot of what my physicist friends have been working so hard on for the past few decades.

And then there’ll be the physics experiments. If you’d asked me even a couple of months ago when we’d get anything experimentally testable from our models I would have said it was far away. And that it probably wouldn’t happen until we’d pretty much found the final rule. But it looks like I was wrong. And in fact we’ve already got some good hints of bizarre new things that might be out there to look for.

OK, so what do we need to do now? I’m thrilled to say that I think we’ve found a path to the fundamental theory of physics. We’ve built a paradigm and a framework (and, yes, we’ve built lots of good, practical, computational tools too). But now we need to finish the job. We need to work through a lot of complicated computation, mathematics and physics. And see if we can finally deliver the answer to how our universe fundamentally works.

It’s an exciting moment, and I want to share it. I’m looking forward to being deeply involved. But this isn’t just a project for me or our small team. This is a project for the world. It’s going to be a great achievement when it’s done. And I’d like to see it shared as widely as possible. Yes, a lot of what has to be done requires top-of-the-line physics and math knowledge. But I want to expose everything as broadly as possible, so everyone can be involved in—and I hope inspired by—what I think is going to be a great and historic intellectual adventure.

Today we’re officially launching our Physics Project. From here on, we’ll be livestreaming what we’re doing—sharing whatever we discover in real time with the world. (We’ll also soon be releasing more than 400 hours of video that we’ve already accumulated.) I’m posting all my working materials going back to the 1990s, and we’re releasing all our software tools. We’ll be putting out bulletins about progress, and there’ll be educational programs around the project.

Oh, yes, and we’re putting up a Registry of Notable Universes. It’s already populated with nearly a thousand rules. I don’t think any of the ones in there yet are our own universe—though I’m not completely sure. But sometime—I hope soon—there might just be a rule entered in the Registry that has all the right properties, and that we’ll slowly discover that, yes, this is it—our universe finally decoded. . .

Continue reading. There’s much, much more, and note the links in the chart that appears next.

Written by LeisureGuy

14 April 2020 at 2:25 pm

Posted in Math, Science

Remembering the Unstoppable Freeman Dyson

leave a comment »

Robbert Dijkgraaf writes in Quanta:

When Freeman Dyson passed away in February at the age of 96, the world lost one of its most versatile scientists and astute humanists. Trained as a mathematician, Dyson had an appetite for number theory, but his most famous achievement came early as a theoretical physicist, laying out the architecture of modern particle physics. He then moved into the design of nuclear reactors, nuclear-powered space travel, astronomy, astrobiology, climate change and futurism, all while being “a wise observer of the human scene.” He described himself as a frog, not a bird, as he enjoyed jumping from pool to pool, studying their details deeply in the mud. The bird’s-eye perspective was not for him, and he had a lifelong suspicion of grand unified theories.

No life was more entangled with the Institute for Advanced Study, Dyson’s home in Princeton, New Jersey. The English-born scientist came first to the Institute in 1948 as part of an exceptional group of young physicists and mathematicians working with the director, J. Robert Oppenheimer. His colleagues included the future Nobel laureates Hideki Yukawa and Jack Steinberger, as well as Dyson’s first wife, Verena Haefeli, the mother of his eldest children, Esther and George. (In 1958, Freeman married Imme Dyson, a master runner, with whom he had four daughters: Dorothy, Mia, Rebecca and Emily.)

When today’s younger scientists asked Dyson how it felt to be a physicist at the Institute in 1948, in those halcyon days when giants like Albert Einstein and Oppenheimer roamed the grounds, he had great pleasure telling them he wasn’t impressed at all by the famous men. Einstein rarely came to seminars, only when his friend Max von Laue visited, and Oppenheimer did little physics. No, his young colleagues inspired him the most.

It was in his own youth that Dyson had his most celebrated result: the unification of two complementary views of quantum electrodynamics, the theory describing the interaction of light with charged matter. During a postwar visit to the United States, he was fortunate to join the group of young American physicists who had returned from Los Alamos after building the atom bomb. Together, they set their minds on resolving the mysteries of quantum theory. Among them was Richard Feynman, the quirkiest and most brilliant of the bunch. Dyson described him as “half genius and half buffoon.” They made an immediate and lasting connection.

At that time, there were two different approaches toward understanding particle physics. Julian Schwinger at Harvard University had developed a complicated scheme of calculations that was comprehensive, but which few understood. Feynman at Cornell University, on the other hand, had posited a deceptively simple set of diagrams that described the interactions of particles in terms of their trajectories through space and time. In the summer of 1948, while traveling by Greyhound bus from San Francisco to Princeton, Dyson had an epiphany that united the two.

In a flash he understood how Feynman’s straightforward diagrams could perfectly reflect Schwinger’s abstract algebra. A single diagram could in fact be drawn in space and time in many separate ways, interchanging cause and effect, and so each one could capture a whole range of particle behavior. For example, it could describe the emission of a photon by an electron and the subsequent absorption by a second electron, but also the reverse process where the second electron emitted the photon and the first absorbed it. All these processes corresponded exactly with each of the separate calculations in Schwinger’s approach.

It was nothing less than the birth of modern particle physics. Nobel Prizes naturally soon followed — for Schwinger, Feynman and the Japanese physicist Sin-Itiro Tomonaga, who had independently found a third approach. Dyson just missed the boat that by tradition has room for only three passengers, but he often said it was much better when people asked why you didn’t get a Nobel Prize, rather than why you did.

After a brief stay at Cornell, Dyson took up a permanent post at the Institute in 1953, where he stayed until the very end, walking every morning to his office to think and write. A few years into the appointment, he abandoned particle physics. It was time for a jump into a different pool. Dyson threw himself into  . . .

Continue reading.


Written by LeisureGuy

13 April 2020 at 1:44 pm

Posted in Math, Science

What is it about 63 and -7/4?

leave a comment »

Written by LeisureGuy

12 April 2020 at 12:02 pm

Posted in Math, Video

Contrast of abstract vs. concrete in physics, math, and elsewhere

leave a comment »

I had a few thoughts after sleeping on this article in Quanta, which describes how abstract (and unconstructable) numbers in mathematics may support a distorted view of physical reality.

One thought concerned something that would be possible (in theory) if physical quantities could be measured precisely. Consider two metal cubes, almost identical in size. Designate the smaller cube’s width as 1, and make the larger so that its width is 1.000xyz…, where xyz begins a string of 3-digit segments, each segment being the decimal value of one byte, and the bytes together representing characters in UTF-8. The larger cube is constructed so that this decimal expansion contains the UTF-8 encoding of the Universal Library described by Jorge Luis Borges in The Library of Babel.

This library, though quite large (read the story to get an idea of its size), is certainly not infinite, so the decimal string giving the larger cube’s width in terms of the smaller’s is not an infinite decimal either. It’s just a very long one.

That would mean that two roughly 1 mm cubes could, between them, encode the complete range of human knowledge. But this is possible only in thought, not in reality: reality does not even allow for such precision.
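Here is a minimal sketch, in Python, of the encoding idea described above. The function names, the sample text, and the “1.000” prefix are purely my illustration, not a proposal for anything physically measurable.

```python
# Sketch of the thought experiment: encode a text's UTF-8 bytes as 3-digit
# decimal groups appended after "1.000", giving the larger cube's width as a
# multiple of the smaller cube's width. Purely illustrative.

def width_from_text(text: str) -> str:
    """Return the larger cube's width (as a decimal string) that encodes `text`."""
    groups = (f"{byte:03d}" for byte in text.encode("utf-8"))  # each byte -> "000".."255"
    return "1.000" + "".join(groups)

def text_from_width(width: str) -> str:
    """Recover the encoded text from a width string produced by width_from_text."""
    digits = width[len("1.000"):]
    data = bytes(int(digits[i:i + 3]) for i in range(0, len(digits), 3))
    return data.decode("utf-8")

w = width_from_text("The Library of Babel")
print(w)                   # 1.000084104101032076... (three digits per byte)
print(text_from_width(w))  # round-trips: "The Library of Babel"
```

The encoding is finite, as the point above requires: a long but terminating string of digits, far beyond any precision that measurement could resolve.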

Another thought concerned the contrast between the Platonic ideal God of some branches of Christianity — a God that is eternal, unchanging, absolute, omnipotent, omniscient, unaffected by the physical universe in general and by humans in particular — and the God of process theology, who is intimately involved with the evolving universe and with humans, responding to them and capable of change and of learning (as in fact described in the Holy Bible).

In both cases the contrast is between the abstract and theoretical on the one hand and the real and practical on the other, much like the contrast between my abstract, theoretical knowledge of the camera’s self-timer and the practical use of it that eluded me when I actually needed it. Abstract, theoretical knowledge is distinct from practical, experiential knowledge, and the abstract, theoretical model of the universe in physics is likewise distinct from the real, practical (and not so precise) universe itself.

Written by LeisureGuy

8 April 2020 at 11:52 am

Does Time Really Flow?

leave a comment »

The answer apparently is “Yes.” The idea of a block universe falls before intuitionist mathematics. Natalie Wolchover writes in Quanta:

Strangely, although we feel as if we sweep through time on the knife-edge between the fixed past and the open future, that edge — the present — appears nowhere in the existing laws of physics.

In Albert Einstein’s theory of relativity, for example, time is woven together with the three dimensions of space, forming a bendy, four-dimensional space-time continuum — a “block universe” encompassing the entire past, present and future. Einstein’s equations portray everything in the block universe as decided from the beginning; the initial conditions of the cosmos determine what comes later, and surprises do not occur — they only seem to. “For us believing physicists,” Einstein wrote in 1955, weeks before his death, “the distinction between past, present and future is only a stubbornly persistent illusion.”

The timeless, pre-determined view of reality held by Einstein remains popular today. “The majority of physicists believe in the block-universe view, because it is predicted by general relativity,” said Marina Cortês, a cosmologist at the University of Lisbon.

However, she said, “if somebody is called on to reflect a bit more deeply about what the block universe means, they start to question and waver on the implications.”

Physicists who think carefully about time point to troubles posed by quantum mechanics, the laws describing the probabilistic behavior of particles. At the quantum scale, irreversible changes occur that distinguish the past from the future: A particle maintains simultaneous quantum states until you measure it, at which point the particle adopts one of the states. Mysteriously, individual measurement outcomes are random and unpredictable, even as particle behavior collectively follows statistical patterns. This apparent inconsistency between the nature of time in quantum mechanics and the way it functions in relativity has created uncertainty and confusion.

Over the past year, the Swiss physicist Nicolas Gisin has published four papers that attempt to dispel the fog surrounding time in physics. As Gisin sees it, the problem all along has been mathematical. Gisin argues that time in general and the time we call the present are easily expressed in a century-old mathematical language called intuitionist mathematics, which rejects the existence of numbers with infinitely many digits. When intuitionist math is used to describe the evolution of physical systems, it makes clear, according to Gisin, that “time really passes and new information is created.” Moreover, with this formalism, the strict determinism implied by Einstein’s equations gives way to a quantum-like unpredictability. If numbers are finite and limited in their precision, then nature itself is inherently imprecise, and thus unpredictable.

Physicists are still digesting Gisin’s work — it’s not often that someone tries to reformulate the laws of physics in a new mathematical language — but many of those who have engaged with his arguments think they could potentially bridge the conceptual divide between the determinism of general relativity and the inherent randomness at the quantum scale.

“I found it intriguing,” said Nicole Yunger Halpern, a quantum information scientist at Harvard University, responding to Gisin’s recent article in Nature Physics. “I’m open to giving intuitionist mathematics a shot.”

Cortês called Gisin’s approach “extremely interesting” and “shocking and provocative” in its implications. “It’s really a very interesting formalism that is addressing this problem of finite precision in nature,” she said.

Gisin said it’s important to formulate laws of physics that cast the future as open and the present as very real, because that’s what we experience. “I am a physicist who has my feet on the ground,” he said. “Time passes; we all know that.”

Information and Time

Gisin, 67, is primarily an experimenter. He runs a lab at the University of Geneva that has performed groundbreaking experiments in quantum communication and quantum cryptography. But he is also the rare crossover physicist who is known for important theoretical insights, especially ones involving quantum chance and nonlocality.

On Sunday mornings, in lieu of church, Gisin makes a habit of sitting quietly in his chair at home with a mug of oolong tea and contemplating deep conceptual puzzles. It was on a Sunday about two and a half years ago that he realized that the deterministic picture of time in Einstein’s theory and the rest of “classical” physics implicitly assumes the existence of infinite information.

Consider the weather. Because it’s chaotic, or highly sensitive to small differences, we can’t predict exactly what the weather will be a week from now. But because it’s a classical system, textbooks tell us that we could, in principle, predict the weather a week on, if only we could measure every cloud, gust of wind and butterfly’s wing precisely enough. It’s our own fault we can’t gauge conditions with enough decimal digits of detail to extrapolate forward and make perfectly accurate forecasts, because the actual physics of weather unfolds like clockwork.
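A tiny illustration of that sensitivity (my own, not from the article), with the logistic map standing in for the weather: two starting values that agree to twelve decimal places part company within a few dozen steps.

```python
# Sensitive dependence on initial conditions: the logistic map x -> 4x(1-x)
# amplifies a difference in the 12th decimal place into a difference of order 1.

def trajectory(x0, steps, r=4.0):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = trajectory(0.123456789, 50)
b = trajectory(0.123456789 + 1e-12, 50)   # "measurement error" far past what we can see

for step in (0, 10, 30, 50):
    print(step, abs(a[step] - b[step]))    # the gap grows from ~1e-12 to order 1
```

Perfect prediction would need perfect digits, which is exactly the assumption Gisin questions below.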

Now expand this idea to the entire universe. In a predetermined world in which time only seems to unfold, exactly what will happen for all time actually had to be set from the start, with the initial state of every single particle encoded with infinitely many digits of precision. Otherwise there would be a time in the far future when the clockwork universe itself would break down.

But information is physical. Modern research shows it requires energy and occupies space. Any volume of space is known to have a finite information capacity (with the densest possible information storage happening inside black holes). The universe’s initial conditions would, Gisin realized, require far too much information crammed into too little space. “A real number with infinite digits can’t be physically relevant,” he said. The block universe, which implicitly assumes the existence of infinite information, must fall apart.

He sought a new way of describing time in physics that didn’t presume infinitely precise knowledge of the initial conditions.

The Logic of Time

The modern acceptance that there exists a continuum of real numbers, most with infinitely many digits after the decimal point, carries little trace of the vitriolic debate over the question in the first decades of the 20th century. David Hilbert, the great German mathematician, espoused the now-standard view that real numbers exist and can be manipulated as completed entities. Opposed to this notion were mathematical “intuitionists” led by the acclaimed Dutch topologist L.E.J. Brouwer, who saw mathematics as a construct. Brouwer insisted that numbers must be constructible, their digits calculated or chosen or randomly determined one at a time. Numbers are finite, said Brouwer, and they’re also processes: They can become ever more exact as more digits reveal themselves in what he called a choice sequence, a function for producing values with greater and greater precision.

By grounding mathematics in what can be constructed, intuitionism has far-reaching consequences for the practice of math, and for determining which statements can be deemed true. The most radical departure from standard math is that the law of excluded middle, a vaunted principle since the time of Aristotle, doesn’t hold. The law of excluded middle says that either a proposition is true, or its negation is true — a clear set of alternatives that offers a powerful mode of inference. But in Brouwer’s framework, statements about numbers might be neither true nor false at a given time, since the number’s exact value hasn’t yet revealed itself.

There’s no difference from standard math when it comes to numbers like 4, or ½, or pi, the ratio of a circle’s circumference to its diameter. Even though pi is irrational, with no finite decimal expansion, there’s an algorithm for generating its decimal expansion, making pi just as determinate as a number like ½. But consider another number x that’s in the ballpark of ½.

Say the value of x is 0.4999…, where further digits unfurl in a choice sequence. Maybe the sequence of 9s will continue forever, in which case x converges to exactly ½. (This fact, that 0.4999… = 0.5, is true in standard math as well, since x differs from ½ by less than any finite difference.)
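That parenthetical is the usual geometric-series computation; in LaTeX notation:

$$0.4999\ldots = \frac{4}{10} + \sum_{k=2}^{\infty}\frac{9}{10^{k}} = \frac{4}{10} + \frac{9/100}{1 - 1/10} = \frac{4}{10} + \frac{1}{10} = \frac{1}{2}.$$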

But if at some future point in the sequence, a digit other than 9 crops up — if, say, the value of x becomes 0.4999999999999997… — then no matter what happens after that, x is less than ½. But before that happens, when all we know is 0.4999, “we don’t know whether or not a digit other than 9 will ever show up,” explained Carl Posy, a philosopher of mathematics at the Hebrew University of Jerusalem and a leading expert on intuitionist math. “At the time we consider this x, we cannot say that x is less than ½, nor can we say that x equals ½.” The proposition “x is equal to ½” is not true, and neither is its negation. The law of the excluded middle doesn’t hold.

Moreover, the continuum can’t be cleanly divided into two parts consisting of all numbers less than ½ and all those greater than or equal to ½. “If you try to cut the continuum in half, this number x is going to stick to the knife, and it won’t be on the left or on the right,” said Posy. “The continuum is viscous; it’s sticky.”
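Here is a minimal sketch (my own toy model, not Gisin’s or Posy’s formalism; the helper name compare_to_half is hypothetical) of why the comparison stays undecided: with only finitely many digits revealed, x is pinned to an interval, and the question “is x less than ½?” gets an answer only once that interval clears ½ on one side or the other.

```python
from fractions import Fraction

def compare_to_half(revealed_digits: str) -> str:
    """Decide 'x < 1/2?' for x = 0.<revealed_digits>..., using only the digits seen so far."""
    n = len(revealed_digits)
    low = Fraction(int(revealed_digits) if revealed_digits else 0, 10 ** n)
    high = low + Fraction(1, 10 ** n)      # reached only if every later digit is a 9
    half = Fraction(1, 2)
    if high < half:
        return "x < 1/2"                   # decided, whatever digits come later
    if low >= half:
        return "x >= 1/2"                  # decided, whatever digits come later
    return "undecided so far"              # neither the proposition nor its negation holds yet

print(compare_to_half("4999"))                 # undecided so far
print(compare_to_half("4999999999999997"))     # x < 1/2
print(compare_to_half("5000"))                 # x >= 1/2
```

With “4999” revealed, x lies in [0.4999, 0.5], an interval that still allows both x < ½ and x = ½, so neither “x is less than ½” nor its negation can yet be asserted.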

Hilbert compared the removal of the law of excluded middle from math to “prohibiting the boxer the use of his fists,” since the principle underlies much mathematical deduction. Although Brouwer’s intuitionist framework compelled and fascinated the likes of Kurt Gödel and Hermann Weyl, standard math, with its real numbers, dominates because of ease of use.

The Unfolding of Time

Gisin first encountered intuitionist math at a meeting last May attended by Posy. When the two got to talking, Gisin quickly saw a connection between the unspooling decimal digits of numbers in this mathematical framework and the physical notion of time in the universe. Materializing digits seemed to naturally correspond to the sequence of moments defining the present, when the uncertain future becomes concrete reality. . .

Continue reading.

Written by LeisureGuy

7 April 2020 at 1:02 pm

Posted in Daily life, Math, Science

The parrots that understand probabilities

leave a comment »

Written by LeisureGuy

9 March 2020 at 12:15 pm
