Later On

A blog written for those whose interests more or less match mine.

Archive for the ‘Math’ Category

Can a prime number be illegal? Yes.

leave a comment »

See this Wikipedia article, which begins:

An illegal prime is a prime number that represents information whose possession or distribution is forbidden in some legal jurisdictions. One of the first illegal primes was found in 2001. When interpreted in a particular way, it describes a computer program that bypasses the digital rights management scheme used on DVDs. Distribution of such a program in the United States is illegal under the Digital Millennium Copyright Act.[1] An illegal prime is a kind of illegal number.
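The trick generalizes: any file’s bytes can be read as one large integer, and you can then search for a nearby prime that still decodes to the original data. Here’s a toy sketch of the principle in Python (my own illustration, not the actual 2001 construction; it assumes sympy is installed for the primality test):

```python
# Toy illustration: read a message's bytes as one big integer, then append
# padding bytes until the result is prime. Decoding drops the agreed-upon
# padding, so the prime still "contains" the original data.
from sympy import isprime

def message_to_prime(msg: bytes, max_pad: int = 3):
    n = int.from_bytes(msg, "big")
    for width in range(1, max_pad + 1):          # try 1, 2, ... padding bytes
        for pad in range(256 ** width):
            candidate = n * 256 ** width + pad
            if isprime(candidate):
                return candidate, width
    raise ValueError("no prime found with that much padding")

def prime_to_message(p: int, width: int) -> bytes:
    raw = p.to_bytes((p.bit_length() + 7) // 8, "big")
    return raw[:-width]                          # strip the padding bytes

p, width = message_to_prime(b"hello, world")
print(p)                           # a genuine prime...
print(prime_to_message(p, width))  # ...that decodes back to b'hello, world'
```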

Written by LeisureGuy

16 April 2021 at 9:29 am

One knot, five applications

leave a comment »

Back in the day — way back in the day — when I was in Cub Scouts I was fascinated by knots. Of course there are the square knot and the granny knot everyone knows, but I liked the sheet bend, the sheepshank, the clove hitch, the timber hitch, the bowline (an essential knot), and the bowline on a bight (in part, I think, I like the names). This list of knots is animated, so if you click a knot, you see it being tied.

John Donne was partial to knots himself, and they appear frequently in his poems. Knots carry with them a kind of history in the context of how they were used. The Gordian knot has entered common language. The mathematics of knots is formidable and fruitful. Knotted breads (pretzels, braided loaves, and the like) please and delight.

IMO, every young person should learn well how to tie a few dozen knots and understand their use.

Written by LeisureGuy

10 April 2021 at 3:07 pm

Tips from neuroscience to keep you focused on hard tasks

leave a comment »

David Badre writes in Nature:

Some of scientists’ most rewarding moments come when we confront a hard problem or a difficult task. Solving a major methodological hurdle, designing an elegant experiment, making sense of a puzzling result, working on a new model or writing a paper or grant proposal are the intellectual challenges that make a career in science so exciting. But doing hard tasks is, in fact, hard. It can frustrate and weigh on us, and cause anxiety and stress. We can struggle to maintain focus on our hard tasks, including the ones we enjoy and eagerly wish to complete. We often postpone work on hard tasks, such as beginning to write a paper or do complex data analysis, in favour of quick wins from easier tasks, like fine-tuning a figure, organizing our calendars or making a dent in our e-mail correspondence.

In late 2020, I published a book, On Task, about the neuroscience of cognitive control: the mental function that allows us to connect our goals and plans with our actions. It is concerned with precisely this problem of how we get things done. It is ironic, therefore, that writing a book about how our brains do tasks was itself a difficult task to do. I enjoyed writing the book, and valued the goal. But there were moments when it was really difficult to find the words to convey a complex idea. And working on the book was never the most immediately urgent task in my day-to-day work, so it was challenging to carve out the time for the writing and thought it required.

You might not be writing a book, but everyone experiences the struggles of difficult tasks. They have been made all the worse with lockdowns, home-schooling and other lifestyle changes due to the pandemic. Everyone experiences bouts of procrastination or work-avoidance, and the guilt that comes with them. There is no avoiding these experiences entirely, but there are some strategies that can help us stay focused.

Make space

To solve hard problems, the brain needs ready access to the information, plans, procedures and knowledge it will be using. Cognitive scientists refer to this collective task knowledge as a task set. However, the task set is not always immediately available: we can’t hold it all active in our limited mental workspace, or ‘working memory’, all the time.

For example, when writing a scientific paper, we must bring to mind lots of information related to the background, logic, design and results of a study. If we have just been at a meeting on a different topic, and then sit down to write the paper, the necessary information might not be in the forefront of our minds. It must be mentally retrieved and organized in our working memory before we can start writing.

In practice, returning to a hard task in this way comes with a ‘restart’ cost: we must spend time and mental effort getting back into our task set, rather than making progress. For this reason, it is important to create time and space for hard tasks.

• Set aside large blocks of time. It is all too easy for working scientists to fill our days with meetings and other small tasks that leave only small gaps for the serious work. Long gaps are needed not only because of the intense thought and work required by hard tasks, but also because we need some time to re-establish our task set. Switching frequently between tasks makes producing quality work harder.

• Be consistent. We should try to reserve a consistent time and place for our hard work and be protective of it. Ideally, we should find this time and place every day. Even if we don’t make progress one day, that time should be spent on our hard task rather than other tasks, even if it’s just reviewing our work. Consistency can aid memory: memory retrieval is context dependent, in that it helps to have the same sights and sounds available when we learn something as when we will try to remember it. Thus, working on a task in the same context repeatedly might aid retrieval and help us to re-establish our task set when we restart.

Minimize distraction and never multitask

When we do two or more tasks at once, either at the same time or switching between them, our performance efficiency and quality will suffer. This happens partly because the tasks use shared cognitive resources, such as working memory. As a result, they will compete for that shared resource and interfere with one another. When doing a hard task, it is important to minimize this interference from multi-tasking.

• Remove cues to other tasks. It helps to put away e-mail and social media and their associated cues. Phone notifications or a badge that tells us how many unread messages we have are distractions that pull us to other tasks. These result in multitasking costs, whether we do the other tasks or not. Even cues that we simply associate with other tasks, such as seeing our phones on the table, can distract us. As much as possible, we should keep our space and time for hard work clear of other distracting tasks.

• Beware the allure of easy tasks. When we decide to perform a task, our brains do a cost–benefit analysis on the fly, weighing the value of the outcome against the projected mental investment required to be successful. As a result, we often avoid hard tasks in favour of smaller, easier tasks, particularly if we aren’t making immediate progress. That will affect our motivation. Sending some e-mails or doing administrative work or straightening up the desk might all be worthwhile tasks and feel productive, but they prevent us doing the task we need to do, while adding multitasking costs.

Engage in good problem-solving habits

To find a solution to a hard problem or perform a hard task, we must structure the problem or task in a way that will allow us to succeed.

For example, . . .

Continue reading. There’s more.

Written by LeisureGuy

16 March 2021 at 12:13 pm

Is the AstraZeneca Vaccine Really Dangerous?

leave a comment »

Sometimes it’s difficult to understand public reactions. Kevin Drum notes:

I’m puzzled about something related to the AstraZeneca vaccine. Several European countries have put it on hold due to reports of blood clots in patients who have received it. If I have the numbers right, about 40 cases of blood clots have been reported out of the 17 million people who have gotten the AZ vaccine.

But COVID-19 itself is far more dangerous on the blood clot front. “Blood clots continue to wreak havoc for patients with severe COVID-19 infection,” reported the University of Michigan Health Lab a few months ago, and that hasn’t changed.

So let’s do some arithmetic. The overall hospitalization rate for COVID-19 is about 100 per million per week, which means that of the 17 million who have received the AZ vaccine since January, something on the order of 17,000 would have been hospitalized if they hadn’t been vaccinated. The incidence of blood clots in patients hospitalized for COVID-19, according to a recent study, is about 20%. This means that roughly 3,000 people out of those 17 million would have developed blood clots without the vaccine, simply because they would have gotten a serious case of COVID-19.
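Drum’s arithmetic is easy to check. Here it is as a few lines of Python, using the rough figures quoted above (the ten-week window for “since January” is my own assumption):

```python
vaccinated = 17_000_000                 # people given the AZ vaccine
weeks = 10                              # roughly January to mid-March (assumed)
hospitalization_rate = 100 / 1_000_000  # COVID hospitalizations per person per week
clot_rate_when_hospitalized = 0.20      # share of hospitalized patients with clots

expected_hospitalized = vaccinated * hospitalization_rate * weeks
expected_clots_unvaccinated = expected_hospitalized * clot_rate_when_hospitalized

print(round(expected_hospitalized))        # ~17,000 hospitalizations
print(round(expected_clots_unvaccinated))  # ~3,400 clots -- "roughly 3,000" vs. 40 reported
```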

So . . . 3,000 without the vaccine, 40 with the vaccine.¹ That seems like a no-brainer: keep inoculating people with the AZ vaccine until firm evidence of large-scale harm emerges.²

Unless, of course, I bollixed up my amateur risk assessment arithmetic. If I did, what am I missing?

¹Also, AstraZeneca claims that 40 cases out of 17 million is about the same as it is for other vaccines. I can’t verify that at the moment, but I wouldn’t be surprised if it’s true.

²The calculus might work out differently if you had plenty of alternate vaccine supplies, but I don’t get the impression that’s the case in Europe.

Written by LeisureGuy

15 March 2021 at 4:10 pm

A Mathematician’s Lament

with 2 comments

A 2002 essay by Paul Lockhart (PDF — the essay was later expanded into a short book):

A musician wakes from a terrible nightmare. In his dream he finds himself in a society where music education has been made mandatory. “We are helping our students become more competitive in an increasingly sound-filled world.” Educators, school systems, and the state are put in charge of this vital project. Studies are commissioned, committees are formed, and decisions are made—all without the advice or participation of a single working musician or composer.

Since musicians are known to set down their ideas in the form of sheet music, these curious black dots and lines must constitute the “language of music.” It is imperative that students become fluent in this language if they are to attain any degree of musical competence; indeed, it would be ludicrous to expect a child to sing a song or play an instrument without having a thorough grounding in music notation and theory. Playing and listening to music, let alone composing an original piece, are considered very advanced topics and are generally put off until college, and more often graduate school.

As for the primary and secondary schools, their mission is to train students to use this language—to jiggle symbols around according to a fixed set of rules: “Music class is where we take out our staff paper, our teacher puts some notes on the board, and we copy them or transpose them into a different key. We have to make sure to get the clefs and key signatures right, and our teacher is very picky about making sure we fill in our quarter-notes completely. One time we had a chromatic scale problem and I did it right, but the teacher gave me no credit because I had the stems pointing the wrong way.”

In their wisdom, educators soon realize that even very young children can be given this kind of musical instruction. In fact it is considered quite shameful if one’s third-grader hasn’t completely memorized his circle of fifths. “I’ll have to get my son a music tutor. He simply won’t apply himself to his music homework. He says it’s boring. He just sits there staring out the window, humming tunes to himself and making up silly songs.”

In the higher grades the pressure is really on. After all, the students must be prepared for the standardized tests and college admissions exams. Students must take courses in Scales and Modes, Meter, Harmony, and Counterpoint. “It’s a lot for them to learn, but later in college when they finally get to hear all this stuff, they’ll really appreciate all the work they did in high school.” Of course, not many students actually go on to concentrate in music, so only a few will ever get to hear the sounds that the black dots represent. Nevertheless, it is important that every member of society be able to recognize a modulation or a fugal passage, regardless of the fact that they will never hear one. “To tell you the truth, most students just aren’t very good at music. They are bored in class, their skills are terrible, and their homework is barely legible. Most of them couldn’t care less about how important music is in today’s world; they just want to take the minimum number of music courses and be done with it. I guess there are just music people and non-music people. I had this one kid, though, man was she sensational! Her sheets were impeccable—every note in the right place, perfect calligraphy, sharps, flats, just beautiful. She’s going to make one hell of a musician someday.”

Waking up in a cold sweat, the musician realizes, gratefully, that it was all just a crazy dream. “Of course!” he reassures himself, “No society would ever reduce such a beautiful and meaningful art form to something so mindless and trivial; no culture could be so cruel to its children as to deprive them of such a natural, satisfying means of human expression. How absurd!”

Meanwhile, on the other side of town, a painter has just awakened from a similar nightmare…

I was surprised to find myself in a regular school classroom—no easels, no tubes of paint. “Oh we don’t actually apply paint until high school,” I was told by the students. “In seventh grade we mostly study colors and applicators.” They showed me a worksheet. On one side were swatches of color with blank spaces next to them. They were told to write in the names. “I like painting,” one of them remarked, “they tell me what to do and I do it. It’s easy!”

After class I spoke with the teacher. “So your students don’t actually do any painting?” I asked. “Well, next year they take Pre-Paint-by-Numbers. That prepares them for the main Paint-by-Numbers sequence in high school. So they’ll get to use what they’ve learned here and apply it to real-life painting situations—dipping the brush into paint, wiping it off, stuff like that. Of course we track our students by ability. The really excellent painters—the ones who know their colors and brushes backwards and forwards—they get to the actual painting a little sooner, and some of them even take the Advanced Placement classes for college credit. But mostly we’re just trying to give these kids a good foundation in what painting is all about, so when they get out there in the real world and paint their kitchen they don’t make a total mess of it.”

“Um, these high school classes you mentioned…”

“You mean Paint-by-Numbers? We’re seeing much higher enrollments lately. I think it’s mostly coming from parents wanting to make sure their kid gets into a good college. Nothing looks better than Advanced Paint-by-Numbers on a high school transcript.”

“Why do colleges care if you can fill in numbered regions with the corresponding color?”

“Oh, well, you know, it shows clear-headed logical thinking. And of course if a student is planning to major in one of the visual sciences, like fashion or interior decorating, then it’s really a good idea to get your painting requirements out of the way in high school.”

“I see. And when do students get to paint freely, on a blank canvas?”

“You sound like one of my professors! They were always going on about expressing yourself and your feelings and things like that—really way-out-there abstract stuff. I’ve got a degree in Painting myself, but I’ve never really worked much with blank canvasses. I just use the Paint-by-Numbers kits supplied by the school board.”

Sadly, our present system of mathematics education is precisely this kind of nightmare. In fact, if I had to design a mechanism for the express purpose of destroying a child’s natural curiosity and love of pattern-making, I couldn’t possibly do as good a job as is currently being done—I simply wouldn’t have the imagination to come up with the kind of senseless, soul-crushing ideas that constitute contemporary mathematics education.

Everyone knows that something is wrong. The politicians say, “we need higher standards.” The schools say, “we need more money and equipment.” Educators say one thing, and teachers say another. They are all wrong. The only people who understand what is going on are the ones most often blamed and least often heard: the students. They say, “math class is stupid and boring,” and they are right.

Mathematics and Culture

The first thing to understand is that mathematics is an art. The difference between math and the other arts, such as music and painting, is that our culture does not recognize it as such. Everyone understands that poets, painters, and musicians create works of art, and are expressing themselves in word, image, and sound. In fact, our society is rather generous when it comes to creative expression; architects, chefs, and even television directors are considered to be working artists. So why not mathematicians?

Part of the problem is that nobody has . . .

Continue reading. There’s much more.

Written by LeisureGuy

13 March 2021 at 6:59 pm

One way to be fooled by statistics

with 2 comments

Written by LeisureGuy

13 March 2021 at 5:05 pm

The Bayesian Trap

leave a comment »

Written by LeisureGuy

13 March 2021 at 4:33 pm

Losing cause and effect in physics seems like losing associativity in abstract algebra — but there it is

leave a comment »

And some interesting math arises as the common properties of arithmetic operations are abandoned. The requirement that the square of any number be non-negative was set aside with the creation/discovery of i and the complex numbers, which turned out to be very useful. Losing commutativity happens pretty quickly as you develop new numbers. Although complex numbers are commutative (a*b = b*a for any two complex numbers a and b), quaternions are not, though with quaternions associativity (a*(b*c) = (a*b)*c) still holds, as it does for complex numbers, real numbers, rational numbers, and integers. But with octonions you lose associativity as well. (Real numbers are, well, real numbers. Complex numbers are represented by pairs of real numbers, quaternions by quadruples of real numbers, and octonions by octuples of real numbers.)
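A quick way to see the loss of commutativity (and the associativity that survives) is to multiply a few quaternions directly. Here’s a small Python sketch using Hamilton’s product rule:

```python
# Quaternions as 4-tuples (w, x, y, z); qmul implements Hamilton's product.
def qmul(a, b):
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

i, j = (0, 1, 0, 0), (0, 0, 1, 0)
print(qmul(i, j))   # (0, 0, 0, 1)  = k
print(qmul(j, i))   # (0, 0, 0, -1) = -k, so i*j != j*i: not commutative

# Associativity still holds for quaternions:
a, b, c = (1, 2, 3, 4), (5, 6, 7, 8), (9, 10, 11, 12)
print(qmul(a, qmul(b, c)) == qmul(qmul(a, b), c))   # True
```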

Associativity is pretty basic, as is cause and effect, but exploring reality is like moving west on the Oregon Trail: some things must be abandoned along the way. Special relativity did away with absolute simultaneity, and now we lose cause and effect. Natalie Wolchover writes in Quanta:

Alice and Bob, the stars of so many thought experiments, are cooking dinner when mishaps ensue. Alice accidentally drops a plate; the sound startles Bob, who burns himself on the stove and cries out. In another version of events, Bob burns himself and cries out, causing Alice to drop a plate.

Over the last decade, quantum physicists have been exploring the implications of a strange realization: In principle, both versions of the story can happen at once. That is, events can occur in an indefinite causal order, where both “A causes B” and “B causes A” are simultaneously true.

“It sounds outrageous,” admitted Časlav Brukner, a physicist at the University of Vienna.

The possibility follows from the quantum phenomenon known as superposition, where particles maintain all possible realities simultaneously until the moment they’re measured. In labs in Austria, China, Australia and elsewhere, physicists observe indefinite causal order by putting a particle of light (called a photon) in a superposition of two states. They then subject one branch of the superposition to process A followed by process B, and subject the other branch to B followed by A. In this procedure, known as the quantum switch, A’s outcome influences what happens in B, and vice versa; the photon experiences both causal orders simultaneously.

Over the last five years, a growing community of quantum physicists has been implementing the quantum switch in tabletop experiments and exploring the advantages that indefinite causal order offers for quantum computing and communication. It’s “really something that could be useful in everyday life,” said Giulia Rubino, a researcher at the University of Bristol who led the first experimental demonstration of the quantum switch in 2017.

But the practical uses of the phenomenon only make the deep implications more acute.

Physicists have long sensed that the usual picture of events unfolding as a sequence of causes and effects doesn’t capture the fundamental nature of things. They say this causal perspective probably has to go if we’re ever to figure out the quantum origin of gravity, space and time. But until recently, there weren’t many ideas about how post-causal physics might work. “Many people think that causality is so basic in our understanding of the world that if we weaken this notion we would not be able to make coherent, meaningful theories,” said Brukner, who is one of the leaders in the study of indefinite causality.

That’s changing as physicists contemplate the new quantum switch experiments, as well as related thought experiments in which Alice and Bob face causal indefiniteness created by the quantum nature of gravity. Accounting for these scenarios has forced researchers to develop new mathematical formalisms and ways of thinking. With the emerging frameworks, “we can make predictions without having well-defined causality,” Brukner said.

Correlation, Not Causation

Progress has grown swifter recently, but many practitioners trace the origin of this line of attack on the quantum gravity problem to work 16 years ago by Lucien Hardy, a British-Canadian theoretical physicist at the Perimeter Institute for Theoretical Physics in Waterloo, Canada. “In my case,” said Brukner, “everything started with Lucien Hardy’s paper.”

Hardy was best known at the time for taking a conceptual approach made famous by Albert Einstein and applying it to quantum mechanics.

Einstein revolutionized physics not by thinking about what exists in the world, but by considering what individuals can possibly measure. In particular, he imagined people on moving trains making measurements with rulers and clocks. By using this “operational” approach, he was able to conclude that space and time must be relative.

In 2001, Hardy applied this same approach to quantum mechanics. He reconstructed all of quantum theory starting from five operational axioms.

He then set out to apply it to an even bigger problem: the 80-year-old problem of how to reconcile quantum mechanics and general relativity, Einstein’s epic theory of gravity. “I’m driven by this idea that perhaps the operational way of thinking about quantum theory may be applied to quantum gravity,” Hardy told me over Zoom this winter.

The operational question is: In quantum gravity, what can we, in principle, observe? Hardy thought about the fact that quantum mechanics and general relativity each have a radical feature. Quantum mechanics is famously indeterministic; its superpositions allow for simultaneous possibilities. General relativity, meanwhile, suggests that space and time are malleable. In Einstein’s theory, massive objects like Earth stretch the space-time “metric” — essentially the distance between hash marks on a ruler, and the duration between ticks of clocks. The nearer you are to a massive object, for instance, the slower your clock ticks. The metric then determines the “light cone” of a nearby event — the region of space-time that the event can causally influence.

When you combine these two radical features, Hardy said, two simultaneous quantum possibilities will stretch the metric in different ways. The light cones of events become indefinite — and thus, so does causality itself.

Most work on quantum gravity elides one of these features. Some researchers, for instance, attempt to characterize the behavior of “gravitons,” quantum units of gravity. But the researchers have the gravitons interact against a fixed background time. “We’re so used to thinking about the world evolving in time,” Hardy noted. He reasons, though, that quantum gravity will surely inherit general relativity’s radical feature and lack fixed time and fixed causality. “So the idea is really to throw caution to the wind,” said the calm, serious physicist, “and really embrace this wild situation where you have no definite causal structure.”

Over Zoom, Hardy used a special projector to film a whiteboard, where he sketched out various thought experiments, starting with one that helped him see how to describe data entirely without reference to the causal order of events.

He imagined an array of probes drifting in space. They’re taking data — recording, say, the polarized light spewing out of a nearby exploding star, or supernova. Every second, each probe logs its location, the orientation of its polarizer (a device like polarized sunglasses that either lets a photon through or blocks it depending on its polarization), and whether a detector, located behind the polarizer, detects a photon or not. The probe transmits this data to a man in a room, who prints it on a card. After some time, the experimental run ends; the man in the room shuffles all the cards from all the probes and forms a stack.

The probes then rotate their polarizers and make a new series of measurements, producing a new stack of cards, and repeat the process, so that the man in the room ultimately has many shuffled stacks of out-of-order measurements. “His job is to try to make some sense of the cards,” Hardy said. The man wants to devise a theory that accounts for all the statistical correlations in the data (and, in this way, describes the supernova) without any information about the data’s causal relationships or temporal order, since those might not be fundamental aspects of reality.

How might the man do this? He could first . . .

Continue reading.

Written by LeisureGuy

11 March 2021 at 6:36 pm

Posted in Daily life, Math, Science

This is the fastest random-number generator ever built: 250 trillion bits per second — and truly random, not from an algorithm

leave a comment »

A bow-tie-shaped semiconductor (scanning electron microscope image) produces a laser beam with randomly fluctuating intensity. Credit: Kyungduk Kim

The problem with using an algorithm to generate random numbers is that the numbers that result, coming from an algorithm, are not in fact random: if you have the algorithm (and its starting seed), you can readily replicate the entire sequence. (These numbers are referred to as “pseudo-random” numbers. While they are perfectly fine to use in, say, board games, serious research requires truly random numbers.)
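For instance, two copies of Python’s built-in generator started from the same seed produce exactly the same “random” numbers:

```python
import random

rng1 = random.Random(42)   # same algorithm, same seed...
rng2 = random.Random(42)

print([rng1.randint(0, 9) for _ in range(10)])
print([rng2.randint(0, 9) for _ in range(10)])   # ...identical output both times
```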

The numbers generated by this new device will be truly random. Davide Castelvecchi reports in Nature:

Researchers have built the fastest random-number generator ever made, using a simple laser. It exploits fluctuations in the intensity of light to generate randomness — a coveted resource in applications such as data encryption and scientific simulations — and could lead to devices that are small enough to fit on a single computer chip.

True randomness is surprisingly difficult to come by. Algorithms in conventional computers can produce sequences of numbers that seem random at first, but over time these tend to display patterns. This makes them at least partially predictable, and therefore vulnerable to being decoded.

To make encryption safer, researchers have turned to quantum mechanics, where the laws of physics guarantee that the results of certain measurements — such as when a radioactive atom decays — are genuinely random.

A popular way to tap into quantum randomness is to exploit fluctuations in how photons are emitted by the materials used in lasers. Typical laser devices are designed to minimize these fluctuations to produce light of steady intensity: they make the light waves bounce around inside the material to force its atoms to emit more and more photons in sync with each other.

But for random-number generation, researchers aim for the opposite. “We want the intensity to fluctuate randomly, so we can digitize the intensity to generate random numbers,” says Hui Cao, an applied physicist at Yale University in New Haven, Connecticut.

Dapper device

Cao and her team made their laser material — a translucent semiconductor — in the shape of a bow tie. Photons bounce between the curved walls of the bow tie multiple times, before coming out as a scattered beam. The researchers can then capture the light with an ultrafast camera. They recorded the light output of 254 independent pixels, which together produced random bits at a rate of around 250 terabits per second, or 250 terahertz. That’s several orders of magnitude faster than previous such devices, which recorded only one pixel at a time. Their results were reported in Science on 25 February [1].
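The digitization step itself is conceptually simple. Here’s a toy sketch of the general idea (my own stand-in with Gaussian noise, not the team’s actual pipeline): sample each pixel’s fluctuating intensity and threshold it against that pixel’s median to get a stream of bits.

```python
import numpy as np

rng = np.random.default_rng()
n_pixels, n_samples = 254, 1000
# Stand-in for measured intensities; the real data come from the laser.
intensity = rng.normal(loc=1.0, scale=0.3, size=(n_pixels, n_samples))

# One bit per sample: is this sample above the pixel's own median intensity?
bits = (intensity > np.median(intensity, axis=1, keepdims=True)).astype(np.uint8)
print(bits.shape)      # (254, 1000) -- one bit stream per pixel
print(bits[0, :16])    # first 16 bits from the first pixel
```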

The invention “represents a major leap in performance of random-number generators”, says Krister Shalm, a physicist at the US National Institute of Standards and Technology in Boulder, Colorado.

The fastest existing computers have clock speeds measured in gigahertz, which is much too slow to fully exploit the full power of Cao’s device. The set-up could be made smaller by . . .

Continue reading.

Written by LeisureGuy

3 March 2021 at 11:13 am

Posted in Math, Science, Technology

Simulating alternate voting systems

leave a comment »

I just stumbled across a very interesting channel on YouTube: Primer. In particular, take a look at the 7 brief videos in the Evolution series.

But for an example, here’s a standalone video on voting systems.

As a point of interest, Wikipedia notes:

Ranked-choice voting is used for state primary, congressional, and presidential elections in Alaska and Maine and for local elections in more than 20 US cities including Cambridge, Massachusetts; San Francisco, California; Oakland, California; Berkeley, California; San Leandro, California; Takoma Park, Maryland; St. Paul, Minnesota; Minneapolis, Minnesota; Santa Fe, New Mexico; Portland, Maine; Las Cruces, New Mexico; and St. Louis Park, Minnesota.[1] New York City is by far the largest voting population in the US that has opted for RCV, pending implementation in 2021.[2] RCV is commonly used for student leadership and other non-governmental elections.[1] It was used by all voters in four states in the 2020 Democratic Party presidential primaries.[3]
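Ranked-choice (instant-runoff) counting is also simple to express in code. A minimal sketch (my own, ignoring ties and other edge cases):

```python
from collections import Counter

def instant_runoff(ballots):
    """Each ballot lists candidates from most to least preferred."""
    ballots = [list(b) for b in ballots]
    while True:
        tallies = Counter(b[0] for b in ballots if b)
        leader, votes = tallies.most_common(1)[0]
        if votes * 2 > sum(tallies.values()):      # outright majority of live ballots
            return leader
        loser = min(tallies, key=tallies.get)      # eliminate the last-place candidate
        ballots = [[c for c in b if c != loser] for b in ballots]

ballots = [["A", "B", "C"]] * 4 + [["B", "C", "A"]] * 3 + [["C", "B", "A"]] * 2
print(instant_runoff(ballots))   # "C" is eliminated first; its voters lift "B" to a win
```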

Written by LeisureGuy

25 February 2021 at 10:01 am

Posted in Daily life, Election, Math, Video

An infinite pattern that never repeats

leave a comment »

I did some searching to see where I could buy Penrose tiles — I like the ones with lines rather than curves — and came up empty-handed.

Written by LeisureGuy

24 February 2021 at 1:06 pm

Posted in Daily life, Math, Science, Video

There’s a Hidden Binary Message on Perseverance’s Mars Parachute

leave a comment »

This is so cool.

Written by LeisureGuy

23 February 2021 at 4:09 pm

Posted in Daily life, Math, Technology

Cute commentary on teaching

leave a comment »

Written by LeisureGuy

5 February 2021 at 4:33 pm

Posted in Education, Math, Video

Topology 101: The Hole Truth

leave a comment »

David Richeson writes in Quanta:

If you’re looking to pick a fight, simply ask your friends, “Is Pluto a planet?” Or “Is a hotdog a sandwich?” Or “How many holes does a straw have?” The first two questions will have them arguing yea or nay, while the third yields claims of two, one and even zero.

These questions all hinge on definitions. What is the precise definition of a planet? A sandwich? A hole? We will leave the first two for your friends to argue about. The third, however, can be viewed through a mathematical lens. How have mathematicians — particularly topologists, who study spatial relationships — thought about holes?

In everyday language, we use “hole” in a variety of nonequivalent ways. One is as a cavity, like a pit dug in the ground. Another is as an opening or aperture in an object, like a tunnel through a mountain or the punches in three-ring binder paper. Yet another is as a completely enclosed space, such as an air pocket in Swiss cheese. A topologist would say that all but the first example are holes. But to understand why – and why mathematicians even care about holes in the first place — we have to travel through the history of topology, starting with how it differs from its close kin, geometry.

In geometry, shapes like circles and polyhedra are rigid objects; the tools of the trade are lengths, angles and areas. But in topology, shapes are flexible things, as if made from rubber. A topologist is free to stretch and twist a shape. Even cutting and gluing are allowed, as long as the cut is precisely reglued. A sphere and a cube are distinct geometric objects, but to a topologist, they’re indistinguishable. If you want a mathematical justification that a T-shirt and a pair of pants are different, you should turn to a topologist, not a geometer. The explanation: They have different numbers of holes.

Leonhard Euler kicked off the topological investigation of shapes in the 18th century. You might think that by then mathematicians knew almost all there was to know about polyhedra. But in 1750, Euler discovered what I consider one of the all-time great theorems: If a polyhedron has F polygonal faces, E edges and V vertices, then V – E + F = 2. For example, a soccer ball has 20 white hexagonal and 12 black pentagonal patches for a total of 32 faces, as well as 90 edges and 60 vertices. And, indeed, 60 – 90 + 32 = 2. This elementary observation has deep connections to many areas of mathematics and yet is simple enough to be taught to kindergartners. But it eluded centuries of geometers like Euclid, Archimedes and Kepler because the result does not depend on geometry. It depends only on the shape itself: It is topological.
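(An aside: the formula is easy to check for the solids mentioned above.)

```python
# V - E + F for a cube and for the soccer-ball polyhedron (truncated icosahedron).
def euler_characteristic(vertices, edges, faces):
    return vertices - edges + faces

print(euler_characteristic(8, 12, 6))     # cube: 8 vertices, 12 edges, 6 faces -> 2
print(euler_characteristic(60, 90, 32))   # soccer ball -> 2
```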

Euler implicitly assumed his polyhedra were convex, meaning a line segment joining any two points stayed completely within the polyhedron. Before long, scholars found nonconvex exceptions to Euler’s formula. For instance, in 1813 the Swiss mathematician Simon Lhuilier recognized that if we punch a hole in a polyhedron to make it more donut-shaped, changing its topology, then V – E + F = 0. . .

Continue reading. There’s more, and it’s interesting.

Written by LeisureGuy

26 January 2021 at 1:17 pm

Posted in Math

New article on Medium: “Choosing Which Student Goes Next”

leave a comment »

The article will be primarily of interest to teachers. If you teach or know someone who does, take a look.

Written by LeisureGuy

4 January 2021 at 10:49 am

The Dunning-Kruger effect may be an example of the Dunning-Kruger effect

leave a comment »

Being inappropriately confident of a position because of a lack of relevant knowledge — assurance based on ignorance — is often observed, more often in others than in ourselves, but the actual result may in fact be a misinterpretation and misunderstanding of statistics. Jonathan Jarry explains for McGill University’s Office of Science & Society:

I want the Dunning-Kruger effect to be real. First described in a seminal 1999 paper by David Dunning and Justin Kruger, this effect has been the darling of journalists who want to explain why dumb people don’t know they’re dumb. There’s even video of a fantastic pastiche of Turandot’s famous aria, Nessun dorma, explaining the Dunning-Kruger effect. “They don’t know,” the opera singer belts out at the climax, “that they don’t know.”

I was planning on writing a very short article about the Dunning-Kruger effect and it felt like shooting fish in a barrel. Here’s the effect, how it was discovered, what it means. End of story.

But as I double-checked the academic literature, doubt started to creep in. While trying to understand the criticism that had been leveled at the original study, I fell down a rabbit hole, spoke to a few statistics-minded people, corresponded with Dr. Dunning himself, and tried to understand if our brain really was biased to overstate our competence in activities at which we suck… or if the celebrated effect was just a mirage brought about by the peculiar way in which we can play with numbers.

Have we been overstating our confidence in the Dunning-Kruger effect?

A misunderstood effect

The most important mistake people make about the Dunning-Kruger effect, according to Dr. Dunning, has to do with who falls victim to it. “The effect is about us, not them,” he wrote to me. “The lesson of the effect was always about how we should be humble and cautious about ourselves.” The Dunning-Kruger effect is not about dumb people. It’s mostly about all of us when it comes to things we are not very competent at.

In a nutshell, the Dunning-Kruger effect was originally defined as a bias in our thinking. If I am terrible at English grammar and am told to answer a quiz testing my knowledge of English grammar, this bias in my thinking would lead me, according to the theory, to believe I would get a higher score than I actually would. And if I excel at English grammar, the effect dictates I would be likely to slightly underestimate how well I would do. I might predict I would get a 70% score while my actual score would be 90%. But if my actual score was 15% (because I’m terrible at grammar), I might think more highly of myself and predict a score of 60%. This discrepancy is the effect, and it is thought to be due to a specific problem with our brain’s ability to assess its skills.

This is what student participants went through for Dunning and Kruger’s research project in the late 1990s. There were assessments of grammar, of humour, and of logical reasoning. Everyone was asked how well they thought they did and everyone was also graded objectively, and the two were compared.

Since then, many studies have been done that have reported this effect in other domains of knowledge. Dr. Dunning tells me he believes the effect “has more to do with being misinformed rather than uninformed.” If I am asked the boiling point of mercury, it is clear my brain does not hold the answer. But if I am asked what is the capital of Scotland, I may think I know enough to say Glasgow, but it turns out it’s Edinburgh. That’s misinformation and it’s pushing down on that confidence button in my brain.

So case closed, right? On the contrary. In 2016 and 2017, two papers were published in a mathematics journal called Numeracy. In them, the authors argued that the Dunning-Kruger effect was a mirage. And I tend to agree.

The effect is in the noise

The two papers, by Dr. Ed Nuhfer and colleagues, argued that the Dunning-Kruger effect could be replicated by using random data. “We all then believed the [1999] paper was valid,” Dr. Nuhfer told me via email. “The reasoning and argument just made so much sense. We never set out to disprove it; we were even fans of that paper.” In Dr. Nuhfer’s own papers, which used both computer-generated data and results from actual people undergoing a science literacy test, his team disproved the claim that most people that are unskilled are unaware of it (“a small number are: we saw about 5-6% that fit that in our data”) and instead showed that both experts and novices underestimate and overestimate their skills with the same frequency. “It’s just that experts do that over a narrower range,” he wrote to me.

Wrapping my brain around all this took weeks. I recruited a husband-and-wife team, Dr. Patrick E. McKnight (from the Department of Psychology at George Mason University, also on the advisory board of Sense About Science and STATS.org) and Dr. Simone C. McKnight (from Global Systems Technologies, Inc.), to help me understand what was going on. Patrick McKnight not only believed in the existence of the Dunning-Kruger effect: he was teaching it to warn his students to be mindful of what they actually knew versus what they thought they knew. But after replicating Dr. Nuhfer’s findings using a different platform (the statistical computing language R instead of Nuhfer’s Microsoft Excel), he became convinced the effect was just an artefact of how the thing that was being measured was indeed measured.

We had long conversations over this as I kept pushing back. As a skeptic, I am easily enticed by stories of the sort “everything you know about this is wrong.” That’s my bias. To overcome it, I kept playing devil’s advocate with the McKnights to make sure we were not forgetting something. Every time I felt my understanding crystallize, doubt would creep in the next day and my discussion with the McKnights would resume.

I finally reached a point where I was fairly certain the Dunning-Kruger effect had not been shown to be a bias in our thinking but was just an artefact. Here then is the simplest explanation I have for why the effect appears to be real.

For an effect of human psychology to be real, it cannot be rigorously replicated using random noise. If the human brain was predisposed to choose heads when a coin is flipped, you could compare this to random predictions (heads or tails) made by a computer and see the bias. A human would call more heads than the computer would because the computer is making random bets whereas the human is biased toward heads. With the Dunning-Kruger effect, this is not the case. Random data actually mimics the effect really well.

The effect as originally described in 1999 makes use of a very peculiar type of graph. “This graph, to my knowledge, is quite unusual for most areas of science,” Patrick McKnight told me. In the original experiment, students took a test and were asked to guess their score. Therefore, each student had two data points: the score they thought they got (self-assessment) and the score they actually got (performance). In order to visualize these results, Dunning and Kruger separated everybody into quartiles:  . . .
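(An aside: Nuhfer’s point that random data reproduce the pattern is easy to demonstrate. In the sketch below, scores and self-assessments are drawn independently at random, yet binning by performance quartile yields the familiar “unskilled and unaware” picture purely through regression to the mean.)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
score = rng.uniform(0, 100, n)            # actual score: pure chance
self_assessment = rng.uniform(0, 100, n)  # guessed score: also pure chance, unrelated

quartile = np.digitize(score, np.percentile(score, [25, 50, 75]))
for q in range(4):
    m = quartile == q
    print(f"quartile {q + 1}: mean score {score[m].mean():5.1f}, "
          f"mean self-assessment {self_assessment[m].mean():5.1f}")
# Bottom quartile: score ~12, self-assessment ~50 ("overconfident");
# top quartile: score ~88, self-assessment ~50 ("underestimating") -- all from noise.
```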

Continue reading. There’s much more, and it’s interesting.

Written by LeisureGuy

26 December 2020 at 12:46 pm

Posted in Daily life, Math, Science

How Claude Shannon Invented the Future

leave a comment »

David Tse writes in Quanta:

Science seeks the basic laws of nature. Mathematics searches for new theorems to build upon the old. Engineering builds systems to solve human needs. The three disciplines are interdependent but distinct. Very rarely does one individual simultaneously make central contributions to all three — but Claude Shannon was a rare individual.

Despite being the subject of the recent documentary The Bit Player — and someone whose work and research philosophy have inspired my own career — Shannon is not exactly a household name. He never won a Nobel Prize, and he wasn’t a celebrity like Albert Einstein or Richard Feynman, either before or after his death in 2001. But more than 70 years ago, in a single groundbreaking paper, he laid the foundation for the entire communication infrastructure underlying the modern information age.

Shannon was born in Gaylord, Michigan, in 1916, the son of a local businessman and a teacher. After graduating from the University of Michigan with degrees in electrical engineering and mathematics, he wrote a master’s thesis at the Massachusetts Institute of Technology that applied a mathematical discipline called Boolean algebra to the analysis and synthesis of switching circuits. It was a transformative work, turning circuit design from an art into a science, and is now considered to have been the starting point of digital circuit design.

Next, Shannon set his sights on an even bigger target: communication.

Communication is one of the most basic human needs. From smoke signals to carrier pigeons to the telephone to television, humans have always sought methods that would allow them to communicate farther, faster and more reliably. But the engineering of communication systems was always tied to the specific source and physical medium. Shannon instead asked, “Is there a grand unified theory for communication?” In a 1939 letter to his mentor, Vannevar Bush, Shannon outlined some of his initial ideas on “fundamental properties of general systems for the transmission of intelligence.” After working on the problem for a decade, Shannon finally published his masterpiece in 1948: “A Mathematical Theory of Communication.”

The heart of his theory is a simple but very general model of communication: A transmitter encodes information into a signal, which is corrupted by noise and then decoded by the receiver. Despite its simplicity, Shannon’s model incorporates two key insights: isolating the information and noise sources from the communication system to be designed, and modeling both of these sources probabilistically. He imagined the information source generating one of many possible messages to communicate, each of which had a certain probability. The probabilistic noise added further randomness for the receiver to disentangle.

Before Shannon, the problem of communication was primarily viewed as a deterministic signal-reconstruction problem: how to transform a received signal, distorted by the physical medium, to reconstruct the original as accurately as possible. Shannon’s genius lay in his observation that the key to communication is uncertainty. After all, if you knew ahead of time what I would say to you in this column, what would be the point of writing it?

This single observation shifted the communication problem from the physical to the abstract, allowing Shannon to model the uncertainty using probability. This came as a total shock to the communication engineers of the day.

Given that framework of uncertainty and probability, Shannon set out in his landmark paper to systematically determine the fundamental limit of communication. His answer came in three parts. Playing a central role in all three is the concept of an information “bit,” used by Shannon as the basic unit of uncertainty. A portmanteau of “binary digit,” a bit could be either a 1 or a 0, and Shannon’s paper is the first to use the word (though he said the mathematician John Tukey used it in a memo first).

First, Shannon came up with a formula for the minimum number of bits per second to represent the information, a number he called its entropy rate, H. This number quantifies the uncertainty involved in determining which message the source will generate. The lower the entropy rate, the less the uncertainty, and thus the easier it is to compress the message into something shorter. For example, texting at the rate of 100 English letters per minute means sending 26^100 possible messages every minute, each represented by a sequence of 100 letters. One could encode all these possibilities into 470 bits, since 2^470 ≈ 26^100. If the sequences were equally likely, then Shannon’s formula would say that the entropy rate is indeed 470 bits per minute. In reality, some sequences are much more likely than others, and the entropy rate is much lower, allowing for greater compression.
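(The 470-bit figure is just arithmetic: 100 letters drawn from a 26-letter alphabet give 26^100 equally likely sequences, and the base-2 logarithm of that is about 470. The sketch below checks it and also computes Shannon’s entropy for a skewed letter distribution; the English frequencies are rough illustrative values, not Shannon’s own source model.)

```python
import math

print(100 * math.log2(26))     # ~470.0, i.e. 2^470 is roughly 26^100

# Shannon entropy H = -sum(p * log2 p), in bits per symbol.
def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([1 / 26] * 26))  # ~4.70 bits/letter when all letters are equally likely

# Rough single-letter English frequencies (illustrative values only):
english = [0.082, 0.015, 0.028, 0.043, 0.127, 0.022, 0.020, 0.061, 0.070,
           0.002, 0.008, 0.040, 0.024, 0.067, 0.075, 0.019, 0.001, 0.060,
           0.063, 0.091, 0.028, 0.010, 0.024, 0.002, 0.020, 0.001]
print(entropy(english))        # a bit over 4 bits/letter: skew means compressibility
```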

Second, he provided a formula for the maximum number of bits per second that can be reliably communicated in the face of noise, which he called the system’s capacity, C. This is the maximum rate at which the receiver can resolve the message’s uncertainty, effectively making it the speed limit for communication.

Finally, he showed that reliable communication of the information from the source in the face of noise is possible if and only if  . . .

Continue reading.

Written by LeisureGuy

22 December 2020 at 4:30 pm

Why blue tigers went extinct early in the 20th century

leave a comment »

Christie Wilcox writes in Quanta:

If you had braved the jungles of China’s Fujian province in the early 20th century, various accounts say you could have witnessed a stunningly unexpected animal: a blue tiger. These tigers were described as “marvelously beautiful” with bodies “a deep shade of Maltese, changing into almost deep blue on the under parts.” As late as the 1950s, hunters reported spotting their blue hairs alongside the traditional orange fur of other South China tigers on trails.

Then the blue tigers disappeared. The last reported sighting was in 1953, and blue tigers were soon the stuff of legends, with not so much as a preserved hide to prove they ever existed.

It is tempting to think the cats’ blueness was tied to some flaw that left them unable to compete with their bright orange kin. But it’s more likely their bizarre coats had nothing to do with their extinction; it was simply bad luck that the color arose in a small population that continued to shrink.

This kind of chance evolution is the purview of neutral theory, the historically controversial idea that “survival of the fittest” isn’t the only, or even the most common, way that species change, split or disappear. Simple as the proposition sounds, its consequences for genetics, evolution, ecology and even disciplines outside of biology have been sweeping.

Not So Neutral Theory

The random rise or fall of gene variants in a population is known as genetic drift. Today it’s accepted as a key driver of evolution and diversity, but that wasn’t always the case. Until the 1960s, biologists generally ascribed all variation to selective forces: Deleterious traits hampered an individual’s reproduction, ensuring that over time, the traits would disappear (negative or purifying selection). Conversely, helpful traits bolstered the number of offspring an individual had and raised their own prevalence (positive selection) — all as predicted by Charles Darwin and Alfred Russel Wallace’s principle of natural selection.

Then sequencing studies on proteins revealed much more genetic variation within populations than expected. The idea that selection was acting on all those genes at once, weighing their effects and picking which ones should stay or go, didn’t sit right with some scientists.

In 1968, the renowned geneticist Motoo Kimura proposed an alternative explanation, now called neutral theory. Kimura posited that most of the variation between organisms is neither advantageous nor disadvantageous. Consequently, most of the variety we see isn’t a product of the hidden hand of selection but rather of luck. “All you need is some input of variation, and random forces will do the rest,” said Armand Leroi, an evolutionary biologist at Imperial College London.

Kimura’s neutral theory of molecular evolution sparked debate because it seemed to water down the influence of selection. But the genomics revolution of the late 20th century and widespread DNA sequencing confirmed that Kimura was right; swapping out one letter for another in a gene’s code usually has little effect.

Ever since, neutral theory has been the default assumption (or null hypothesis) in genetics. “If you want to show that a given variant in a DNA sequence is under selection, you first have to really show that it can’t be just explained by neutrality,” Leroi said.

(Population) Size Matters

Some continue to fight the idea that neutral forces drive diversity at the molecular level, but Parul Johri, a population geneticist at Arizona State University, largely chalks that up to a misunderstanding of its role in evolution. “Kimura never said everything is neutral,” she said. What matters is how much neutral forces have shaped biodiversity. And that depends largely on the size of the group you’re looking at.

Imagine a population of 10 birds: one red, one green and all the rest brown. These colors aren’t harmful or helpful, so all the birds have the same chance of reproducing. Then a tornado kills six of the brown birds, purely by chance. Now half the population is brown, a quarter is red and a quarter is green. A random event caused a major shift in diversity. That’s genetic drift.

If there had been 98 brown birds and the same lone red and green ones, however, the catastrophe wouldn’t have mattered as much. Even if . . .
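(A quick way to see the population-size point is to simulate it. The sketch below is a crude Wright–Fisher-style model of a single neutral variant; the exact numbers aren’t meaningful, only the contrast between a tiny flock and a large one.)

```python
import random

def drift(pop_size, start_freq=0.1, generations=200, seed=1):
    """Track a neutral variant's frequency; each generation is a fresh random draw."""
    random.seed(seed)
    freq = start_freq
    for _ in range(generations):
        carriers = sum(random.random() < freq for _ in range(pop_size))
        freq = carriers / pop_size
        if freq in (0.0, 1.0):   # variant lost or fixed purely by chance
            break
    return freq

print(drift(pop_size=10))        # tiny flock: the variant is quickly lost or fixed
print(drift(pop_size=10_000))    # large population: the frequency stays near 0.1
```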

Continue reading.

Written by LeisureGuy

10 December 2020 at 2:21 pm

Posted in Evolution, Math, Science

3-D Printed Mathematical Constructs (as art)

leave a comment »

The photo shows an early iteration of the three-dimensional analogue of the two-dimensional Hilbert Curve, a space-filling curve defined by David Hilbert. This is one of a variety of 3-D printed mathematical constructs made by Henry Segerman.
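(For the curious, the two-dimensional curve is easy to generate. The sketch below uses the standard index-to-coordinate bit manipulation; the 3-D analogue in the print follows the same recursive idea along three axes.)

```python
def d2xy(order, d):
    """Map position d along the Hilbert curve to (x, y) in a 2**order x 2**order grid."""
    x = y = 0
    s, t = 1, d
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                      # rotate the quadrant as needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x, y = x + s * rx, y + s * ry
        t //= 4
        s *= 2
    return x, y

points = [d2xy(3, d) for d in range(64)]  # order-3 curve visits all 64 cells once
print(points[:8])                         # consecutive points differ by one grid step
```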

I found this example and explanation of negatively curved surfaces interesting:

And this, too, is interesting:

Written by LeisureGuy

29 November 2020 at 7:44 am

Scientists Uncover the Universal Geometry of Geology

leave a comment »

Interesting article in Quanta by Joshua Sokol. I didn’t know about the Gömböc, but now I see you can buy copies (or 3-D print your own). The article begins:

On a mild autumn day in 2016, the Hungarian mathematician Gábor Domokos arrived on the geophysicist Douglas Jerolmack’s doorstep in Philadelphia. Domokos carried with him his suitcases, a bad cold and a burning secret.

The two men walked across a gravel lot behind the house, where Jerolmack’s wife ran a taco cart. Their feet crunched over crushed limestone. Domokos pointed down.

“How many facets do each of these gravel pieces have?” he said. Then he grinned. “What if I told you that the number was always somewhere around six?” Then he asked a bigger question, one that he hoped would worm its way into his colleague’s brain. What if the world is made of cubes?

At first, Jerolmack objected. Houses can be built out of bricks, but Earth is made of rocks. Obviously, rocks vary. Mica flakes into sheets; crystals crack on sharply defined axes. But from mathematics alone, Domokos argued, any rocks that broke randomly would crack into shapes that have, on average, six faces and eight vertices. Considered together, they would all be shadowy approximations converging on a sort of ideal cube. Domokos had proved it mathematically, he said. Now he needed Jerolmack’s help to show that this is what nature does.

“It was geometry with an exact prediction that was borne out in the natural world, with essentially no physics involved,” said Jerolmack, a professor at the University of Pennsylvania. “How in the hell does nature let this happen?”

Over the next few years, the pair chased their geometric vision from microscopic fragments to rock outcrops to planetary surfaces and even to Plato’s Timaeus, suffusing the project with an additional air of mysticism. The foundational Greek philosopher, writing around 360 BCE, had matched his five Platonic solids with five supposed elements: earth, air, fire, water and star stuff. With either foresight or luck or a little of both, Plato paired cubes, the most stackable shape, with earth. “I was like, oh, OK, now we’re getting a little bit metaphysical,” Jerolmack said.

But they kept finding cuboid averages in nature, plus a few non-cubes that could be explained with the same theories. They ended up with a new mathematical framework: a descriptive language to express how all things fall apart. When their paper was published earlier this year, it came titled like a particularly esoteric Harry Potter novel: “Plato’s Cube and the Natural Geometry of Fragmentation.”

Several geophysicists contacted by Quanta say the same mathematical framework might also help with problems like understanding erosion from cracked cliff faces, or preventing hazardous rock slides. “That is really, really exciting,” said the University of Edinburgh geomorphologist Mikaël Attal, one of two scientists who reviewed the paper before publication. The other reviewer, the Vanderbilt geophysicist David Furbish, said, “A paper like this makes me think: Can I somehow make use of these ideas?”

All Possible Breaks

Long before he came to Philadelphia, Domokos had more innocuous mathematical questions.

Suppose you fracture something into many pieces. You now have a mosaic: a collection of shapes that could tile back together with no overlaps or gaps, like the floor of an ancient Roman bath. Further suppose those shapes are all convex, with no indentations.

First Domokos wanted to see if geometry alone could predict what shapes, on average, would make up that kind of mosaic. Then he wanted to be able to describe all other possible collections of shapes you could find.

In two dimensions, you can try this out without smashing anything. Take a sheet of paper. Make a random slice that divides the page into two pieces. Then make another random slice through each of those two polygons. Repeat this random process a few more times. Then count up and average the number of vertices on all the bits of paper.

For a geometry student, predicting the answer isn’t too hard. “I bet you a box of beer that I can make you derive that formula within two hours,” Domokos said. The pieces should average four vertices and four sides, averaging to a rectangle.
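(You can also just simulate the slicing. The sketch below is my own quick simulation, not Domokos’s derivation: it starts from a triangle rather than a rectangular sheet, cuts every piece with a random line through its interior each round, and watches the average vertex count creep toward four.)

```python
import math, random

def cut(poly, p, d):
    """Split a convex polygon (list of (x, y) vertices) by the line through p with direction d."""
    nx, ny = -d[1], d[0]                              # normal to the cutting line
    side = lambda q: (q[0] - p[0]) * nx + (q[1] - p[1]) * ny
    left, right = [], []
    for i in range(len(poly)):
        a, b = poly[i], poly[(i + 1) % len(poly)]
        sa, sb = side(a), side(b)
        (left if sa >= 0 else right).append(a)
        if sa * sb < 0:                               # this edge crosses the cutting line
            t = sa / (sa - sb)
            crossing = (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))
            left.append(crossing)
            right.append(crossing)
    return left, right

random.seed(0)
pieces = [[(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]]       # start with a triangle (3 vertices)
for _ in range(10):                                   # ten rounds of random slicing
    new_pieces = []
    for poly in pieces:
        centre = tuple(sum(c) / len(poly) for c in zip(*poly))   # an interior point
        angle = random.uniform(0.0, math.pi)
        a, b = cut(poly, centre, (math.cos(angle), math.sin(angle)))
        new_pieces += [piece for piece in (a, b) if len(piece) >= 3]
    pieces = new_pieces

print(sum(len(p) for p in pieces) / len(pieces))      # ~3.999: the average heads to four
```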

You could also consider the same problem in three dimensions. About 50 years ago, the Russian nuclear physicist, dissident and Nobel Peace Prize winner Andrei Dmitrievich Sakharov posed the same problem while chopping heads of cabbage with his wife. How many vertices should the cabbage pieces have, on average? Sakharov passed the problem on to the legendary Soviet mathematician Vladimir Igorevich Arnold and a student. But their efforts to solve it were incomplete and have largely been forgotten.

Unaware of this work, Domokos wrote a proof which pointed to cubes as the answer. He wanted to double-check, though, and he suspected that if an answer to the same problem already existed, it would be locked in an inscrutable volume by the German mathematicians Wolfgang Weil and Rolf Schneider, an 80-year-old titan in the field of geometry. Domokos is a professional mathematician, but even he found the text daunting.

“I found someone who was willing to read that part of the book for me and translate it back into human language,” Domokos said. He found the theorem for any number of dimensions. That confirmed that cubes were indeed the 3D answer.

Now Domokos had the average shapes produced by splitting a flat surface or a three-dimensional block. But then a larger quest emerged. Domokos realized that he could also develop a mathematical description not just of averages, but of potentiality: Which collections of shapes are even mathematically possible when something falls apart?

Remember, the shapes produced after something falls apart are a mosaic. They fit together with no overlap or gaps. Those cut-up rectangles, for example, can easily tile together to fill in a mosaic in two dimensions. So can hexagons, in an idealized case of what mathematicians would call a Voronoi pattern. But pentagons? Octagons? They don’t tile. . .

Continue reading.

Written by LeisureGuy

19 November 2020 at 6:51 pm

Posted in Daily life, Math, Science
