Later On

A blog written for those whose interests more or less match mine.

Archive for the ‘Math’ Category

Penrose-tiling a bathroom


Cool idea. Penrose tiling should be more common. The image is from a post by Lior Pachter that describes the project. I blogged a video on Penrose tiling earlier.
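For anyone tempted to try it, the pattern can be generated by repeated subdivision ("deflation") of half-rhombus triangles. Here is a minimal sketch of that standard algorithm (my own illustration, not from Pachter's post):

```python
import cmath

GOLDEN = (1 + 5 ** 0.5) / 2

def subdivide(triangles):
    """One deflation step: each thin half-rhombus (kind 0) splits into 2
    triangles, each thick one (kind 1) into 3, using golden-ratio points."""
    out = []
    for kind, a, b, c in triangles:
        if kind == 0:                      # thin half-rhombus
            p = a + (b - a) / GOLDEN
            out += [(0, c, p, b), (1, p, c, a)]
        else:                              # thick half-rhombus
            q = b + (a - b) / GOLDEN
            r = b + (c - b) / GOLDEN
            out += [(1, r, c, a), (1, q, r, b), (0, r, q, a)]
    return out

# Seed: a wheel of ten thin half-rhombi around the origin (complex-plane coords).
tiles = []
for i in range(10):
    b = cmath.rect(1, (2 * i - 1) * cmath.pi / 10)
    c = cmath.rect(1, (2 * i + 1) * cmath.pi / 10)
    if i % 2 == 0:
        b, c = c, b                        # mirror alternate triangles
    tiles.append((0, 0j, b, c))

for _ in range(5):
    tiles = subdivide(tiles)
print(len(tiles), "half-rhombi; plot the (a, b, c) triples to see the tiling")
```

Plot the resulting triangles and the aperiodic rhombus pattern appears; a bathroom floor would want only a few subdivision rounds.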

Written by Leisureguy

27 May 2022 at 12:32 pm

Posted in Daily life, Math

The Most Powerful Computers You’ve Never Heard Of


Fascinating video. 

Written by Leisureguy

3 May 2022 at 12:00 pm

Cumulative excess deaths from COVID-19


The above chart is from Kevin Drum’s post looking at “excess death” statistics, which probably provide a better measure of Covid deaths than the count of deaths explicitly attributed to Covid (since in many localities there’s strong pressure not to list Covid as the cause of death).

Read his post for the full explanation and more charts.
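The calculation behind such charts is simple in principle: subtract a baseline of expected deaths from observed all-cause deaths and accumulate the difference. A minimal sketch with made-up numbers (not Drum's data):

```python
# Hypothetical weekly all-cause deaths vs. a pre-pandemic baseline.
observed = [61000, 59500, 67200, 72400]
expected = [58000, 58500, 59000, 58800]

cumulative = 0
for week, (obs, exp) in enumerate(zip(observed, expected), start=1):
    cumulative += obs - exp
    print(f"week {week}: excess {obs - exp:+,}, cumulative {cumulative:,}")
```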

Written by Leisureguy

19 April 2022 at 9:30 am

The Antikythera Cosmos


A very interesting video on the Antikythera Mechanism. This is via a Vice article by Becky Ferreira, which begins:

In the early 1900s, divers hunting for sponges off the coast of Antikythera, a Greek island in the Aegean Sea, discovered a Roman-era shipwreck that contained an artifact destined to dramatically alter our understanding of the ancient world.

Known as the Antikythera Mechanism, the object is a highly sophisticated astronomical calculator that dates back more than 2,000 years. Since its recovery from the shipwreck in 1901, generations of researchers have marveled over its stunning complexity and inscrutable workings, earning it a reputation as the world’s first known analog computer.

The device’s gears and displays cumulatively demonstrated the motions of the planets and the Sun, the phases of the lunar calendar, the position of Zodiac constellations, and even the timing of athletic events such as the ancient Olympic Games. The device also reflects a very ancient idea of the cosmos, with Earth at the center.

While some of the calculator’s mysteries have been solved over the past century, scientists at University College London’s Antikythera Research Team present, for the first time, “a radical new model that matches all the data and culminates in an elegant display of the ancient Greek Cosmos,” according to a study published on Friday in Scientific Reports.

Led by Tony Freeth, a mechanical engineer at UCL and a leading world expert on the mechanism, the interdisciplinary team called the artifact “an ancient Greek astronomical compendium of staggering ambition” and “a beautiful conception, translated by superb engineering into a device of genius,” in the study.

“This is such a special device,” said Adam Wojcik, a materials scientist at UCL and a co-author of the study, in a call. “It’s just so out-of-this world, given what we know, or knew, about contemporary ancient Greek technology. It’s unique and there’s nothing else that remotely approaches it for centuries, or maybe a millennia afterwards.”

“However, it exists and all the scholarship points to the fact that it is ancient Greek,” added Wojcik, who has been fascinated by the artifact since he was a child. “There’s no question about it and we just have to accept that there is so much about what they could do that we just don’t know and we can’t fathom. The mechanism is a window on that.”

Understanding the clockwork instrumentation of the Antikythera Mechanism has been a longstanding challenge for scientists because only a third of the artifact survived its multi-millennia entombment under the Mediterranean waves. The remains of the calculator include 82 fragments, some of which contain complex gears and once-hidden inscriptions, which were wedged between front and back display faces during the bygone era in which the artifact was fully intact.

As new experimental techniques emerged, research teams have been able to explain the purpose and dynamics of the Antikythera Mechanism’s back face, which includes a system of eclipse predictions. In particular, the use of surface imaging and high-resolution X-ray tomography on the artifact, described in a 2006 study also led by Freeth, revealed scores of never-before-seen inscriptions that helpfully amount to a user’s guide to the mechanism.

Now, Freeth and his colleagues believe they have tackled the missing piece of the puzzle: the complicated gearworks underlying the front “Cosmos” display of the calculator. Virtually nothing from this front section survived, and “no previous reconstruction has come close to matching the data” that does exist, according to the study.

The new paper “has synthesized other people’s work, and dealt with all the loose ends and the uncomfortable nuances that other people just simply ignored,” Wojcik said. “For example, there are certain features in the surviving bits—holes and pillars and things like that—which people have said: ‘well, we’ll just ignore that in our explanation. There must be a use for that but we don’t know what it is so we’ll just ignore it.’”

“Effectively, what we’ve done is we’ve not ignored anything,” he added. “So the enigmatic pillars and holes, all of a sudden, now make sense in our solution. It all comes together and it fits the inscriptional evidence.” . . .

Continue reading.

Written by Leisureguy

12 April 2022 at 10:33 pm

Bertrand’s Paradox


Fascinating.

But wait! There’s more!
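The paradox in brief: "choose a random chord of a circle" is ambiguous, and different sampling methods give different probabilities for the same event. A Monte Carlo sketch of my own (not from the videos) compares the three classic methods, asking how often the chord beats the side of the inscribed equilateral triangle:

```python
import math
import random

THRESHOLD = math.sqrt(3)   # side of the triangle inscribed in a unit circle

def chord_random_endpoints():
    # Method 1: fix one endpoint, pick the other uniformly on the circle.
    theta = random.uniform(0, 2 * math.pi)
    return 2 * math.sin(theta / 2)

def chord_random_radius():
    # Method 2: pick the chord's midpoint uniformly along a radius.
    r = random.uniform(0, 1)
    return 2 * math.sqrt(1 - r * r)

def chord_random_midpoint():
    # Method 3: pick the chord's midpoint uniformly over the disk.
    while True:
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        if x * x + y * y <= 1:
            return 2 * math.sqrt(1 - x * x - y * y)

for method in (chord_random_endpoints, chord_random_radius, chord_random_midpoint):
    n = 200_000
    p = sum(method() > THRESHOLD for _ in range(n)) / n
    print(f"{method.__name__}: {p:.3f}")
# Prints roughly 1/3, 1/2, and 1/4: three answers to one "random chord" question.
```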

Written by Leisureguy

17 March 2022 at 12:14 pm

Posted in Math, Video

Correlation vs. Causation


We are quite frequently told that correlation does not equal causation — perhaps a little too frequently. (Not so frequently observed is that, although correlation does not imply causation, causation does indeed result in a correlation.) 

But a simple reminder is not so good as specific examples, and that’s the benefit of the Spurious Correlations site (pointed out to me by Montreal Steve). On the site the charts are interactive — for example, hovering over a data point will display values — but this example screenshot is not interactive:
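The trap is easy to reproduce: any two series that merely trend the same way will show a high correlation coefficient. A small sketch of my own with invented numbers (not data from the site):

```python
def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Two made-up, causally unrelated series that both drift upward over a decade.
cheese_lbs_per_capita = [29.8, 30.1, 30.5, 30.6, 31.3, 31.7, 32.6, 33.1, 32.7, 33.4]
engineering_phds      = [480, 501, 540, 552, 547, 622, 655, 701, 712, 731]

print(f"r = {pearson_r(cheese_lbs_per_capita, engineering_phds):.3f}")
# r comes out close to 1 despite there being no causal link at all.
```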

Written by Leisureguy

6 March 2022 at 6:24 am

Posted in Daily life, Humor, Math, Science

Algorithms are designing better buildings


Silvio Carta, Head of Art and Design, University of Hertfordshire, writes in The Conversation:

When giant blobs began appearing on city skylines around the world in the late 1980s and 1990s, it marked not an alien invasion but the impact of computers on the practice of building design.

Thanks to computer-aided design (CAD), architects were able to experiment with new organic forms, free from the restraints of slide rules and protractors. The result was famous curvy buildings such as Frank Gehry’s Guggenheim Museum in Bilbao and Future Systems’ Selfridges Department Store in Birmingham.

Today, computers are poised to change buildings once again, this time with algorithms that can inform, refine and even create new designs. Even weirder shapes are just the start: algorithms can now work out the best ways to lay out rooms, construct the buildings and even change them over time to meet users’ needs. In this way, algorithms are giving architects a whole new toolbox with which to realise and improve their ideas.

At a basic level, algorithms can be a powerful tool for providing exhaustive information for the design, construction and use of a building. Building information modelling uses comprehensive software to standardise and share data from across architecture, engineering and construction that used to be held separately. This means everyone involved in a building’s genesis, from clients to contractors, can work together on the same 3D model seamlessly.

More recently, new tools have begun to combine this kind of information with algorithms to automate and optimise aspects of the building process. This ranges from interpreting regulations and providing calculations for structural evaluations to making procurement more precise.

Algorithmic design

But algorithms can also help with the design stage, helping architects to understand how a building will be used by revealing hidden patterns in existing and proposed constructions. These can be spatial and geometrical characteristics such as the ratio of public to private areas or the natural airflow of a building. They can be patterns of use showing which rooms are used most and least often.

Or they can be visual and physical connections that show what people can and can’t see from each point of a building and enable us to predict the flow of people around it. This is particularly relevant when designing the entrances of public buildings so we can place services and escape routes in the best position.

Algorithms can also be used to extend the capability of designers to think about and generate . . .

Continue reading. Much more, and it’s interesting.

See also this earlier post.
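As a toy version of the kind of spatial analysis the article describes (how well each part of a plan connects to an entrance), here is a breadth-first-search sketch over a grid floor plan; the plan and numbers are entirely my own illustration, not a tool from the article:

```python
from collections import deque

# '#' = wall, '.' = walkable floor; the gap in the bottom wall is the entrance.
plan = ["##########",
        "#........#",
        "#.######.#",
        "#........#",
        "####.#####"]
entrance = (4, 4)

# Breadth-first search gives walking distance from the entrance to every cell,
# a crude proxy for how people will flow through the space.
dist = {entrance: 0}
queue = deque([entrance])
while queue:
    r, c = queue.popleft()
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if (0 <= nr < len(plan) and 0 <= nc < len(plan[0])
                and plan[nr][nc] == "." and (nr, nc) not in dist):
            dist[(nr, nc)] = dist[(r, c)] + 1
            queue.append((nr, nc))

farthest = max(dist, key=dist.get)
print(f"most remote walkable cell: {farthest}, {dist[farthest]} steps from the door")
```

Cells that score poorly are candidates for extra exits or relocated services, which is the sort of judgment these tools help automate.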

Written by Leisureguy

23 February 2022 at 10:44 am

Aspiring to a Higher Plane: Going Deeper into Abbott’s “Flatland”


Ian Stewart has a very interesting article in The Public Domain Review (that’s their mix of fonts) about Abbott’s novel Flatland (at the link, available as an ebook in various formats free of charge). The article includes various illustrations from the book; I’ll not include those, but instead offer the initial text of the article:

Edwin Abbott Abbott, who became Headmaster of the City of London School at the early age of 26, was renowned as a teacher, writer, theologian, Shakespearean scholar, and classicist. He was a religious reformer, a tireless educator, and an advocate of social democracy and improved education for women. Yet his main claim to fame today is none of these: a strange little book, the first and almost the only one of its genre: mathematical fantasy. Abbott called it Flatland, and published it in 1884 under the pseudonym A. Square.

On the surface — and the setting, the imaginary world of Flatland, is a surface, an infinite Euclidean plane — the book is a straightforward narrative about geometrically shaped beings that live in a two-dimensional world. A. Square, an ordinary sort of chap, undergoes a mystical experience: a visitation by the mysterious Sphere from the Third Dimension, who carries him to new worlds and new geometries. Inspired by evangelical zeal, he strives to convince his fellow citizens that the world is not limited to the two dimensions accessible to their senses, falls foul of the religious authorities, and ends up in jail.

The story has a timeless appeal, and has never been out of print since its first publication. It has spawned several sequels and has been the subject of at least one radio programme and two animated films. Not only is the book about hidden dimensions: it has its own hidden dimensions. Its secret mathematical agenda is not the notion of two dimensions, but that of four. Its social agenda pokes fun at the rigid stratification of Victorian society, especially the low status of women, even the wives and daughters of the wealthy.

Flatland’s inhabitants are . . .

Continue reading. There’s more, the illustrations are nifty, and you can get a free copy of the ebook.

Written by Leisureguy

25 January 2022 at 1:27 pm

How A.I. Conquered Poker


In the NY Times Magazine, Keith Romer describes how poker has now been solved. (Gift link, no paywall.)

Last November in the cavernous Amazon Room of Las Vegas’s Rio casino, two dozen men dressed mostly in sweatshirts and baseball caps sat around three well-worn poker tables playing Texas Hold ’em. Occasionally a few passers-by stopped to watch the action, but otherwise the players pushed their chips back and forth in dingy obscurity. Except for the taut, electric stillness with which they held themselves during a hand, there was no outward sign that these were the greatest poker players in the world, nor that they were, as the poker saying goes, “playing for houses,” or at least hefty down payments. This was the first day of a three-day tournament whose official name was the World Series of Poker Super High Roller, though the participants simply called it “the 250K,” after the $250,000 each had put up to enter it.

At one table, a professional player named Seth Davies covertly peeled up the edges of his cards to consider the hand he had just been dealt: the six and seven of diamonds. Over several hours of play, Davies had managed to grow his starting stack of 1.5 million in tournament chips to well over two million, some of which he now slid forward as a raise. A 33-year-old former college baseball player with a trimmed light brown beard, Davies sat upright, intensely following the action as it moved around the table. Two men called his bet before Dan Smith, a fellow pro with a round face, mustache and whimsically worn cowboy hat, put in a hefty reraise. Only Davies called.

The dealer laid out a king, four and five, all clubs, giving Davies a straight draw. Smith checked (bet nothing). Davies bet. Smith called. The turn card was the deuce of diamonds, missing Davies’s draw. Again Smith checked. Again Davies bet. Again Smith called. The last card dealt was the deuce of clubs, one final blow to Davies’s hopes of improving his hand. By now the pot at the center of the faded green-felt-covered table had grown to more than a million in chips. The last deuce had put four clubs on the table, which meant that if Smith had even one club in his hand, he would make a flush.

Davies, who had been betting the whole way needing an eight or a three to turn his hand into a straight, had arrived at the end of the hand with precisely nothing. After Smith checked a third time, Davies considered his options for almost a minute before declaring himself all-in for 1.7 million in chips. If Smith called, Davies would be out of the tournament, his $250,000 entry fee incinerated in a single ill-timed bluff.

Smith studied Davies from under the brim of his cowboy hat, then twisted his face in exasperation at Davies or, perhaps, at luck itself. Finally, his features settling in an irritated scowl, Smith folded and the dealer pushed the pile of multicolored chips Davies’s way. According to Davies, what he felt when the hand was over was not so much triumph as relief.

“You’re playing a pot that’s effectively worth half a million dollars in real money,” he said afterward. “It’s just so much goddamned stress.”

Real validation wouldn’t come until around 2:30 that morning, after the first day of the tournament had come to an end and Davies had made the 15-minute drive from the Rio to his home, outside Las Vegas. There, in an office just in from the garage, he opened a computer program called PioSOLVER, one of a handful of artificial-intelligence-based tools that have, over the last several years, radically remade the way poker is played, especially at the highest levels of the game. Davies input all the details of the hand and then set the program to run. In moments, the solver generated an optimal strategy. Mostly, the program said, Davies had gotten it right. His bet on the turn, when the deuce of diamonds was dealt, should have been 80 percent of the pot instead of 50 percent, but the 1.7 million chip bluff on the river was the right play.

“That feels really good,” Davies said. “Even more than winning a huge pot. The real satisfying part is when you nail one like that.” Davies went to sleep that night knowing for certain that he played the hand within a few degrees of perfection.

The pursuit of perfect poker goes back at least as far as the 1944 publication of “Theory of Games and Economic Behavior,” by the mathematician John von Neumann and the economist Oskar Morgenstern. The two men wanted to correct what they saw as a fundamental imprecision in the field of economics. “We wish,” they wrote, “to find the mathematically complete principles which define ‘rational behavior’ for the participants in a social economy, and to derive from them the general characteristics of that behavior.” Economic life, they suggested, should be thought of as a series of maximization problems in which individual actors compete to wring as much utility as possible from their daily toil. If von Neumann and Morgenstern could quantify the way good decisions were made, the idea went, they would then be able to build a science of economics on firm ground.

It was this desire to model economic decision-making that led them to game play. Von Neumann rejected most games as unsuitable to the task, especially those like checkers or chess in which both players can see all the pieces on the board and share the same information. “Real life is not like that,” he explained to Jacob Bronowski, a fellow mathematician. “Real life consists of bluffing, of little tactics of deception, of asking yourself what is the other man going to think I mean to do. And that is what games are about in my theory.” Real life, von Neumann thought, was like poker.

Using his own simplified version of the game, in which  . . .

Continue reading. There’s much more — and no paywall.
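To make "solver" concrete: programs like PioSOLVER approximate Nash-equilibrium strategies. The textbook demonstration is counterfactual regret minimization (CFR) on Kuhn poker, a three-card toy game; the sketch below is my own illustration of that classic algorithm, not PioSOLVER's actual method:

```python
import random
from collections import defaultdict

# Kuhn poker: three cards (0 < 1 < 2), one dealt to each of two players who
# have both anted 1. Actions: 'p' (pass/fold) or 'b' (bet/call 1 more).
ACTIONS = "pb"

class Node:
    """Regret-matching state for one information set (own card + history)."""
    def __init__(self):
        self.regret_sum = [0.0, 0.0]
        self.strategy_sum = [0.0, 0.0]

    def strategy(self, reach_weight):
        s = [max(r, 0.0) for r in self.regret_sum]
        total = sum(s)
        s = [x / total for x in s] if total > 0 else [0.5, 0.5]
        for i in range(2):
            self.strategy_sum[i] += reach_weight * s[i]
        return s

    def average_strategy(self):
        total = sum(self.strategy_sum)
        return [x / total for x in self.strategy_sum] if total > 0 else [0.5, 0.5]

nodes = defaultdict(Node)

def cfr(cards, history, p0, p1):
    """Returns expected utility for the player to act; updates regrets."""
    player = len(history) % 2
    opponent = 1 - player
    if len(history) > 1:                      # terminal states
        if history[-1] == "p":
            if history == "pp":               # both checked: showdown for antes
                return 1 if cards[player] > cards[opponent] else -1
            return 1                          # a bet was folded to: bettor wins
        if history[-2:] == "bb":              # bet was called: bigger showdown
            return 2 if cards[player] > cards[opponent] else -2
    node = nodes[f"{cards[player]}{history}"]
    strat = node.strategy(p0 if player == 0 else p1)
    util, node_util = [0.0, 0.0], 0.0
    for i, a in enumerate(ACTIONS):
        if player == 0:
            util[i] = -cfr(cards, history + a, p0 * strat[i], p1)
        else:
            util[i] = -cfr(cards, history + a, p0, p1 * strat[i])
        node_util += strat[i] * util[i]
    counterfactual = p1 if player == 0 else p0
    for i in range(2):
        node.regret_sum[i] += counterfactual * (util[i] - node_util)
    return node_util

cards, value = [0, 1, 2], 0.0
for _ in range(100_000):
    random.shuffle(cards)
    value += cfr(cards, "", 1.0, 1.0)
print(f"game value for player 1: {value / 100_000:.4f}  (theory: -1/18 = -0.0556)")
for info_set in sorted(nodes):
    print(f"{info_set:4s} bet {nodes[info_set].average_strategy()[1]:.2f}")
```

Run long enough, the average strategy converges to the game's known equilibrium, in which the first player loses 1/18 of an ante per hand and bluffs with the worst card at a fixed frequency; that is the toy version of the optimal play Davies checked his bluff against.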

Written by Leisureguy

18 January 2022 at 3:44 pm

The Riemann Hypothesis, Explained


This is quite a good video. It appears in an article in Quanta, “Mathematicians Clear Hurdle in Quest to Decode Primes,” by Kevin Hartnett, interesting in its own right, but the video also works as a standalone.
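For a taste of why the zeta function concerns primes at all: Euler's product formula says the sum over all integers equals a product over primes. A quick numerical check of my own (not from the video or the article):

```python
def zeta_sum(s, terms=200_000):
    """Partial sum of the Riemann zeta function: sum of 1/n^s."""
    return sum(n ** -s for n in range(1, terms + 1))

def euler_product(s, limit=200_000):
    """Product over primes of 1/(1 - p^-s), via a simple sieve."""
    sieve = [True] * (limit + 1)
    sieve[:2] = [False, False]
    prod = 1.0
    for p in range(2, limit + 1):
        if sieve[p]:
            prod *= 1 / (1 - p ** -s)
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return prod

# Both converge to pi^2/6 = 1.6449...: the primes and zeta carry the same information.
print(zeta_sum(2), euler_product(2))
```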

Written by Leisureguy

13 January 2022 at 2:36 pm

Posted in Math, Video

Paraconsistent Logics Find Structure in Our Inconsistent World


Zach Weber, associate professor of philosophy at the University of Otago in New Zealand and author of Paradoxes and Inconsistent Mathematics (2021), has in Aeon what I suspect is an extract from that book. He writes:

Here is a dilemma you may find familiar. On the one hand, a life well lived requires security, safety and regularity. That might mean a family, a partner, a steady job. On the other hand, a life well lived requires new experiences, risk and authentic independence, in ways incompatible with a family or partner or job. Day to day, it can seem not just challenging to balance these demands, but outright impossible. That’s because, we sense, the demands of a good life are not merely difficult; sometimes, the demands of a good life actually contradict. ‘Human experience,’ wrote the novelist George Eliot in 1876, ‘is usually paradoxical.’

One aim of philosophy is to help us make sense of our lives, and one way philosophy has tried to help in this regard is through logic. Formal logic is a perhaps overly literal approach, where ‘making sense’ is cashed out in austere mathematical symbolism. But sometimes our lives don’t make sense, not even when we think very hard and carefully about them. Where is logic then? What if, sometimes, the world truly is senseless? What if there are problems that simply cannot be resolved consistently?

Formal logic as we know it today grew out of a project during the 17th-century Enlightenment: the rationalist plan to make sense of the world in mathematical terms. The foundational assumption of this plan is that the world does make sense, and can be made sense of: there are intelligible reasons for things, and our capacity to reason will reveal these to us. In his book La Géométrie (1637), René Descartes assumed that the world could be covered by a fine-mesh grid so precise as to reduce geometry to analysis; in his Ethics (1677), Baruch Spinoza proposed a view of Nature and our place in it so precise as to be rendered in proofs; and in a series of essays written around 1679, G W Leibniz envisioned a formal language capable of expressing every possible thought in structure-preserving, crystalline symbols – a characteristica universalis – that obeys precise algebraic rules, allowing us to use it to find answers – a calculus ratiocinator.

Rationalism dreams big. But dreams are cheap. The startling thing about this episode is that, by the turn of the 20th century, Leibniz’s aspirations seemed close to coming true due to galvanic advances across the sciences, so much so that the influential mathematician David Hilbert was proposing something plausible when in 1930 he made the rationalist assumption a credo: ‘We must know, we will know.’

Hilbert’s credo was based in part on the spectacular successes of logicians in the late 19th century carving down to the bones of pure mathematics (geometry, set theory, arithmetic, real analysis) to find the absolute certainty of deductive validity. If logic itself can be understood in exacting terms, then the project of devising a complete and consistent theory of the world (or at least, the mathematical basis thereof) appeared to be in reach – a way to answer every question, as Hilbert put it, ‘for the honour of human understanding itself’.

But even as Hilbert was issuing his credo and elaborating his plans for solving the Entscheidungsproblem – of building what we would now call a computer that can mechanically decide the truth or falsity of any sentence – all was not well. Indeed, all had not been well for some time.

Already in 1902, on the verge of completing his life’s work, the logician Gottlob Frege received an ominous letter from Bertrand Russell. Frege had been working to provide a foundation for mathematics out of pure logic – to reduce complex questions about arithmetic and real analysis to the basic question of formal, logical validity. If this programme, known as logicism, were successful then the apparent certainty of logical deduction, the inescapable truth of the conclusions of sound derivations, would percolate up, so to speak, into all mathematics (and any other area reducible to mathematics). In 1879, Frege had devised an original ‘concept notation’ for quantified logic exactly for this goal, and had used it for his Basic Laws of Arithmetic (two volumes of imposing symbolism, published in 1893 and 1903). Russell shared this logicist goal, and in his letter to Frege, Russell said, in essence, that he had liked Frege’s recent book very much, but had just noticed one little oddity: that one of the basic axioms upon which Frege had based all his efforts seemed to entail a contradiction.

Frege had assumed what he called ‘Basic Law V’ which says, in effect: Sets are collections of things that share a property. For example, the set of all triangles is comprised of all and only the triangles. This seemed obvious enough for Frege to assume as a self-evident logical truth. But from Basic Law V, Russell showed that Frege’s system could prove a statement of the form P and not-P as a theorem. It is called Russell’s Paradox:

Let R be the collection of all things with the property of ‘not being a self-member’. (For example, the set of triangles is not itself a triangle, so it is an R.) What about R itself? If R is in R, then it is not, by definition of R; if R is not in R, then it is, again by definition. It must be one or the other – so it is both: R is in and is not in R, self-membered and not, a contradiction.

The whole system was in fact inconsistent, and thus – in Frege and Russell’s view – absurd. Nonsense. In a few short lines, Frege’s life work had been shown to be a failure.

He would continue to work for another two decades, but his grand project was destroyed. Russell would also spend the next decades trying to come to terms with his own simple discovery, first writing the monumental but flawed Principia Mathematica (three volumes, 1910-13) with Alfred North Whitehead, then eventually pivoting away from logic without ever really solving the problem. Years would pass, with some of the best minds in the world trying mightily to overcome the contradiction Russell had found, without finding a fully satisfactory solution.

By 1931, a young logician named Kurt Gödel had leveraged a similar paradox out of Russell’s own system. Gödel found a statement that, if provable true or false – that is, decidable – would be inconsistent. Gödel’s incompleteness theorems show that there cannot be a complete, consistent and computable theory of the world – or even just of numbers! Any complete and computable theory will be inconsistent. And so, the Enlightenment rationalist project, from Leibniz to Hilbert’s programme, has been shown impossible.

Or so goes the standard story. But the lesson that we must give up on a full understanding of the world in which we live is an enormous pill to swallow. It has been nearly a century since these events, a century filled with new advances in logic, and some philosophers and logicians think it is time for a reappraisal.

If the world were a perfect place, we would not need logic. Logic tells us what follows from things we already believe, things we are already committed to. Logic helps us work around our fallible and finite limitations. In a perfect world, the infinite consequences of our beliefs would lie transparently before us. ‘God has no need of any arguments, even good ones,’ said the logician Robert Meyer in 1976: all the truths are apparent before God, and He does not need to deduce one from another. But we are not gods and our world is not perfect. We need logic because we can go wrong, because things do go wrong, and we need guidance. Logic is most important for making sense of the world when the world appears to be senseless.

The story just told ends in failure in part because the logic that Frege, Russell and Hilbert were using was classical logic. Frege assumed something obvious and got a contradiction, but classical logic makes no allowance for contradiction. Because of the classical rule of ex contradictione quodlibet (‘from a contradiction everything follows’), any single contradiction renders the entire system useless. But logic is a theory of validity: an attempt to account for what conclusions really do follow from given premises. As contemporary ‘anti-exceptionalists about logic’ have noted, theories of logic are like everything else in science and philosophy. They are developed and debated by people, and all along there have been disagreements about what the correct theory of logic is. Through that ongoing debate, many have suggested that a single contradiction leading to arbitrary nonsense seems incorrect. Perhaps, then, the rule of ex contradictione itself is wrong, and should not be part of our theory of logic. If so, then perhaps Frege didn’t fail after all.

Over the past decades, logicians have developed mathematically rigorous systems that can handle inconsistency not by eradicating or ‘solving’ it, but by accepting it. Paraconsistent logics create a new opportunity for theories that, on the one hand, seem almost inalienably true (like Frege’s Basic Law V) but, on the other, are known to contain some inconsistencies, such as blunt statements of the form P and not-P. In classical logic, there is a hard choice: give up any inconsistent theory as irrational, or else devolve into apparent mysticism. With these new advances in formal logic, there may be a middle way, whereby sometimes an inconsistency can be retained, not as some mysterious riddle, but rather as a stone-cold rational view of our contradictory world.
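To see concretely how a paraconsistent logic blocks explosion, take Priest's three-valued Logic of Paradox (LP), in which a sentence can be true, false, or both. A small brute-force validity checker, my own sketch of the standard textbook semantics:

```python
from itertools import product

# LP's three truth values, ordered F < B < T, where B means "both true and false".
F, B, T = 0, 1, 2
DESIGNATED = {B, T}          # values that count as "acceptable" for validity

def neg(a):  return 2 - a    # swaps T and F, leaves B fixed
def conj(a, b):  return min(a, b)
def disj(a, b):  return max(a, b)

def entails(premises, conclusion, num_vars):
    """Valid iff every valuation making all premises designated
    also makes the conclusion designated."""
    return all(
        conclusion(v) in DESIGNATED
        for v in product((F, B, T), repeat=num_vars)
        if all(p(v) in DESIGNATED for p in premises)
    )

A     = lambda v: v[0]
not_A = lambda v: neg(v[0])
C     = lambda v: v[1]

print(entails([A, not_A], C, 2))                    # False: explosion fails in LP
print(entails([A], lambda v: disj(v[0], v[1]), 2))  # True: A still entails A-or-C
```

The design choice doing the work is that "both" counts as acceptable, so A together with not-A no longer entails an arbitrary C: the contradiction is contained rather than detonating the whole system.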

Paraconsistent logics have been most famously promoted by Newton da Costa since the 1960s, and Graham Priest since the 1970s. Though viewed initially (and still) with some scepticism, ‘paraconsistent logics’ now have an official mathematics classification code (03B53, according to the American Mathematical Society) and there have been five World Congress of Paraconsistency meetings since 1997. These logics are now studied by researchers across the globe, and hold out the prospect of accomplishing the impossible: recasting the very laws of logic itself to make sense of our sometimes seemingly senseless situation. If it works, it could ground a new sort of Enlightenment project, a rationalism that rationally accommodates some apparent irrationality. On this sort of approach, truth is beholden to rationality; but rationality is also ultimately beholden to truth.

That might sound a little perplexing, so let’s start with a very ordinary example. Suppose . . .

Continue reading. This strikes me as an exciting concept. I personally have been stymied by the way that (classical) logic sometimes leads to a dead end or an unresolved knot. This is an interesting approach that holds the promise of offering guidance in a (classically) inconsistent world (cf. quantum mechanics).

Written by Leisureguy

13 January 2022 at 6:54 am

How Imaginary Numbers Were Invented


This video touches on the always tricky question, “What is ‘reality’?”, as does the next post.
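The historical hook the title points at is the cubic: Cardano's formula forces a detour through square roots of negative numbers even when every root is real. A tiny sketch of Bombelli's classic example, x³ = 15x + 4 (the code is my illustration, not from the video):

```python
import cmath

# Solve x^3 = 15x + 4, i.e. x^3 + p*x + q = 0 with p = -15, q = -4.
p, q = -15, -4
disc = (q / 2) ** 2 + (p / 3) ** 3           # = -121: negative, the "irreducible case"
u = (-q / 2 + cmath.sqrt(disc)) ** (1 / 3)   # cube root of 2 + 11i  ->  2 + i
v = (-q / 2 - cmath.sqrt(disc)) ** (1 / 3)   # cube root of 2 - 11i  ->  2 - i
print(u + v)   # ~(4+0j): a perfectly real root, reached only via imaginary numbers
```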

Written by Leisureguy

13 January 2022 at 6:31 am

Posted in Daily life, Math, Video

New year, plan adjusted


Iterative refinement is the way I naturally work, and that’s perhaps why I found the programming language Forth such a comfortable fit. Those who look back at previous posts may find that the posts have been updated with new information or revised thoughts (update: or, to be frank, corrections of typos, misspellings, and wrong word choice).

In particular, I found I needed a little more assistance from my budget-planning/spending-tracking workbook (described in this post). I am now focused on keeping my spending each week within the week’s budgeted amount, since if I do that, then the monthly and annual budget will work out fine.

The process of iterative refinement has been on-going. Another example: my monthly checks do not arrive on the first day of the month, and it took me a surprisingly long time to realize that the solution is simple. On the first day of the month I transfer from savings the amount I have budgeted for the month (for Groc & Misc and Discretionary) into my Visa account. Then, when the checks do arrive, I just deposit the total into savings. (They are direct deposits into checking, so I just transfer the total to savings.)

I transfer the money from savings to Visa rather than to checking because I am charged a fee if my checking transactions exceed a small number. As a result, I don’t use checking except as a way-station before the money goes into savings: I pay for things with Visa, not with checks. (A few bills require payment from checking, so when those come due, I transfer the money from savings to checking to pay the bill.)

The monthly budget amount I transfer from savings into Visa results in a positive Visa balance, which is then whittled away through the month from Groc & Misc and Discretionary spending. My weekly budget is the monthly amount I transferred divided by 4.33, the average number of weeks in a month, so staying within weekly budget will keep me within monthly budget — more details in this post.

I think the new worksheet (“Weekly”) will be useful. One nice feature: the weekly budget amount for the first week is taken from my overall budget, but the weekly budget amount for each subsequent week is adjusted by the amount over- or under-budget from the previous week. If I go over budget in a week, the budget amount for the subsequent week is reduced by that amount; if I come in under budget, the budget for the subsequent week is increased by that amount. If I stay under budget for a few weeks, I then have a reasonable amount to splurge.
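Stripped of the spreadsheet, the rolling adjustment is a one-line recurrence: next week's budget is the base weekly amount plus whatever was left over (or minus the overage). A minimal sketch with made-up figures:

```python
MONTHLY_BUDGET = 650.00                  # hypothetical Groc & Misc + Discretionary
BASE_WEEKLY = MONTHLY_BUDGET / 4.33      # average number of weeks in a month

def weekly_budgets(weekly_spending):
    """Yield each week's budget: the base amount adjusted by the
    previous week's under- or over-spend."""
    budget = BASE_WEEKLY
    for spent in weekly_spending:
        yield budget
        budget = BASE_WEEKLY + (budget - spent)   # carry the difference forward

for week, b in enumerate(weekly_budgets([120.00, 180.00, 95.00, 210.00]), start=1):
    print(f"week {week}: budget ${b:.2f}")
```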

Little by little and bit by bit.

Written by Leisureguy

8 January 2022 at 3:48 pm

Posted in Business, Daily life, Math

Disney’s FastPass: A Complicated History


I have been to almost no theme parks. In the mid-1950s I did go with my family to Knott’s Berry Farm and the first Disneyland, but even then I did not much like the walking and waiting. I’ve been to the Santa Cruz Boardwalk a few times, but that is a pale imitation (and it requires less walking and less waiting, and has other benefits: fewer people, quicker to enter and to leave, and a town interesting in itself).

That being said, I found this full-length documentary fascinating, in part because it shows me an alien world — one that I have negative desire to visit, but still find interesting in terms of its operation and the kinds of problems it must solve. 

I imagine that this documentary might be even more interesting to someone who has been subjected to the systems described.

Written by Leisureguy

1 January 2022 at 6:00 pm

How Pythagoras Broke Music (and how we kind of fixed it)

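The arithmetic at the heart of the video's story: stack twelve pure 3:2 fifths and you overshoot seven octaves by a small ratio called the Pythagorean comma; equal temperament "fixes" it by flattening every fifth slightly. A quick check of my own:

```python
from fractions import Fraction

fifth = Fraction(3, 2)                  # a pure (just) perfect fifth
twelve_fifths = fifth ** 12             # around the whole circle of fifths
seven_octaves = Fraction(2, 1) ** 7

comma = twelve_fifths / seven_octaves
print(comma, float(comma))              # 531441/524288 ~ 1.0136: the comma

# Equal temperament replaces 3/2 = 1.5 with 2**(7/12) so twelve fifths close exactly.
print(2 ** (7 / 12))                    # ~1.4983
```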

Written by Leisureguy

22 November 2021 at 3:40 pm

The genius of John von Neumann


Tom Chivers writes at UnHerd:

In 1956, shortly before his early death from bone cancer, John von Neumann received a letter from Kurt Gödel, the Austrian logician. After a paragraph of half-hearted inquiries into von Neumann’s health, Gödel finally got to the point: he had found an interesting new mathematical puzzle. And in the Fifties, if you found an interesting new mathematical puzzle, you sent it to John von Neumann.

The puzzle that Gödel was describing would come to be known as P vs NP. To oversimplify, it asks: can every mathematical question which can be checked quickly also be solved quickly? For instance: you are given a half-complete Sudoku puzzle. Is there a legal solution? If someone were to show you a solution, you could quickly verify whether it was legal. If you used a larger grid, the solution would take longer to check, but not exponentially so.

But establishing that there is a legal solution is much slower. There might be quintillions of possible ways of filling it out; the number grows exponentially with the size of the grid. Checking them all one by one might take millions of years even on a powerful computer, if the grid is large enough.

What Gödel wanted to know was: is there some algorithm that could solve the Sudoku (or similar problems) as quickly as we could check a solution? P vs NP is one of the great outstanding questions of mathematics: it has profound implications, but no one has been able to prove it, one way or the other.
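[The asymmetry is easy to see in code: verifying a filled grid takes a few quick passes, while solving from scratch may mean exploring exponentially many fillings. A minimal verifier for the “quick to check” half, my own sketch. – LG]

```python
def is_valid_solution(grid):
    """Check a completed 9x9 Sudoku (lists of ints 1-9) in polynomial time:
    every row, column, and 3x3 box must contain each digit exactly once."""
    units = list(grid)                                     # 9 rows
    units += [list(col) for col in zip(*grid)]             # 9 columns
    units += [[grid[r][c] for r in range(br, br + 3)
                          for c in range(bc, bc + 3)]      # 9 boxes
              for br in (0, 3, 6) for bc in (0, 3, 6)]
    return all(sorted(unit) == list(range(1, 10)) for unit in units)
```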

The Man from the Future, Ananyo Bhattacharya’s fascinating, fast-moving intellectual biography of von Neumann, made me think of P vs NP. Not because von Neumann solved it; but because von Neumann, in Bhattacharya’s telling, provided solutions to many other previously unsolved problems, in dozens of different fields; others simply had to check them, and expand on them. There is, I think, some discomfort about calling people “geniuses” these days, or in admitting that intelligence is a real thing or that it shapes history – but von Neumann was a genius, and his extraordinary intelligence shaped the modern world.

He was not an economist, but he developed the use of fixed-point theorems in economics in a paper which the historian Roy Weintraub calls “the single most important article in mathematical economics”, and which inspired “half a dozen” Nobel laureates.

His work on game theory – he invented the field, and coined the term “zero-sum game” – inspired at least half a dozen more. Game theory also transformed the study of evolution, inspiring the work of Bill Hamilton, John Maynard Smith, and Richard Dawkins.

He developed utility theory, the basis of modern economics. In 2011 Daniel Kahneman, another economics Nobel laureate (who won his Nobel partly for building on von Neumann’s game-theory ideas), called it “the most important theory in the social sciences”.

Some of his last work, with Stanislaw Ulam on “cellular automata” – grids of squares that turn on and off according to simple rules – shaped modern computer science in thousands of ways, notably inspiring John McCarthy, who would go on to coin the term “artificial intelligence”.

Von Neumann’s genius was apparent early. In 1915, at the age of 11, he had gone to the famous gymnasium school in his native Budapest; the “legendary” maths teacher, László Rátz, immediately realised that von Neumann was beyond his ability to teach, and sent him for extra tuition at the local university. There he was mentored by Gábor Szegö, later head of Stanford’s maths department, who was “moved to tears” by his brilliance.

At 17, still at high school, he partly rescued Cantor’s set theory, the basis of much mathematical theory, from a crippling paradox. A couple of years later, he helped reconcile Werner Heisenberg and Erwin Schrödinger’s rival models of quantum mechanics. In the early Thirties, he met the astronomer Subrahmanyan Chandrasekhar, and worked with him on general relativity and the behaviour of stellar clusters. Chandrasekhar would later tell an interviewer, “If I say, ‘He reminds me of von Neumann,’ that’s about the best compliment I can give anyone.”

Von Neumann read Alan Turing’s research imagining a hypothetical computing machine, and saw how to build a working computer. The paper he produced building on Turing’s ideas is considered “the birth certificate of modern computers”, according to the computer scientist Wolfgang Coy. With his wife Kläri and Ulam, he pioneered Monte Carlo simulations, vital now in climate modelling and a million other fields.
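[Monte Carlo’s core trick fits in a few lines: estimate a quantity by random sampling rather than by exact calculation. The classic toy example below is my own sketch, not von Neumann’s original application. – LG]

```python
import random

def estimate_pi(samples=1_000_000):
    """Fraction of random points in the unit square that fall inside
    the quarter circle, times 4, approximates pi."""
    inside = sum(
        random.random() ** 2 + random.random() ** 2 <= 1.0
        for _ in range(samples)
    )
    return 4 * inside / samples

print(estimate_pi())   # ~3.14, tightening as the sample count grows
```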

In almost every sphere of scientific inquiry – physics, biology, maths, economics, the social sciences, computing – you find von Neumann’s fingerprints. There is a Wikipedia page of “List of things named after John von Neumann.” Were it not for him, our understanding of the world would be decades behind where it is.

What created this genius? Bhattacharya  . . .

Continue reading.

Written by Leisureguy

16 November 2021 at 12:50 pm

The Problem That Broke Math (and led to quantum physics)


A very interesting video.

Written by Leisureguy

1 November 2021 at 1:57 pm

Posted in Math, Science, Video

The unknown linguistic laws that apply to all life


Jonny Thomson writes in Big Think:

Linguists have known for quite some time that certain “laws” seem to govern human speech. For instance, across languages, shorter words tend to be more frequently used than longer words. Biologists have taken notice, and many have wondered if these “linguistic laws” also apply to biological phenomena. Indeed, they do, and a new review published in Trends in Ecology & Evolution elaborates on their discoveries.

Pattern 1: being twice as big as the nearest rival

The first linguistic rule concerns the frequency of the most used words in a language. It is known as “Zipf’s rank-frequency law”, and it maintains that “the relative frequency of a word is inversely proportional to its frequency rank.” In other words, the most frequently used word will be twice as common as the second most frequent word, three times as common as the third most frequent, and so on. For instance, in English, “the” is the most common, making up seven percent of all the words we use. The next common is “of,” which is roughly 3.5 percent.

The incredible thing is that this law applies also to a whole range of non-linguistic things. It is seen in the size of proteins and DNA structures. It is seen in most of the noises animals use to communicate, as well as primate gestures. It is found in the relative abundance of plant and animal species. In your garden, the flora and fauna very likely will be distributed according to Zipf’s rank-frequency law.

Recently, it has been observed in COVID infection rates, where the largest outbreaks (if there are similar demographics across a country) will be double the size of the next largest region. The law is so reliable that it is being used to call out countries that are doctoring their COVID infection numbers. [I would think that Benford’s Law would also be useful in that connection. – LG]
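[The rank-frequency law is easy to test on any text at hand; the sketch below is mine, and “sample.txt” is a placeholder for whatever file you point it at. – LG]

```python
import re
from collections import Counter

# Count word frequencies in any plain-text file ("sample.txt" is a placeholder).
text = open("sample.txt", encoding="utf-8").read().lower()
counts = Counter(re.findall(r"[a-z']+", text)).most_common(10)

top = counts[0][1]
for rank, (word, n) in enumerate(counts, start=1):
    # Zipf's rank-frequency law predicts roughly top / rank occurrences.
    print(f"{rank:2d}. {word:12s} observed {n:6d}  Zipf predicts {top // rank:6d}")
```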

Pattern 2: smaller things are more common

The second linguistic rule we can apply to life is known as “Zipf’s law of abbreviation,” which “describes the tendency of more frequently used words to be shorter.” It is true across hundreds of diverse and unrelated languages, including sign. In English, the top seven most common words are all three letters or fewer, and in the top 100, there are only two words (“people” and “because”) that are more than five letters. The words we use most regularly are short and to the point.

It is also a law seen all over nature. Communication among birds and mammals tends to be short. Indeed, it is seen in the songs of black-capped chickadees, call duration of Formosan macaques, vocalizations of indri, gesture time of chimpanzees, and length of surface behavioral patterns in dolphins. Apparently, it is not just humans who want their language to be efficient.

The law appears in ecology, as well: the most numerous species tend to be the smallest. There are many, many more flies and rats in New York City than there are humans.

Pattern 3: the longer something is, the shorter its composite parts

Let’s take a sentence, like this one, with all its words, long and short, strung together, punctuated by commas, nestled in with each other, to reach a final (and breathless) finale. What you should notice is that although the sentence is long, it is divided into pretty small clauses. This is known as “Menzerath’s law,” in which there is “a negative relationship between the size of the whole and the size of the constituent part.” It is seen not only in sentence construction; the law applies to the short phonemes and syllables found in long words. “Hippopotamus” is divided into lots of short syllables (that is, each syllable has only a few letters), while, ironically, the word “short” constitutes one giant syllable.

As with the previous laws, it is observed in most languages but is perhaps not as widespread. There are several . . .

Continue reading. There’s more. And there’s this on Benford’s Law:
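The law itself is one line of math: a leading digit d turns up with probability log10(1 + 1/d). Here is a quick check of my own against the powers of 2, which track the law closely:

```python
import math
from collections import Counter

benford = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

# Leading digits of 2^1 .. 2^1000 vs. Benford's predicted frequencies.
observed = Counter(int(str(2 ** n)[0]) for n in range(1, 1001))
for d in range(1, 10):
    print(f"{d}: observed {observed[d] / 1000:.3f}   Benford {benford[d]:.3f}")
```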

Written by Leisureguy

28 October 2021 at 5:24 pm

Posted in Daily life, Math, Science

Cognition Without Computation: Just because a theory is old doesn’t mean it’s correct


Rodney Brooks writes in IEEE Spectrum:

I’m just going to come out and say it: Human cognition might have nothing whatsoever to do with computation.

Yes, I am well aware that the computational theory of mind is a deeply entrenched one, starting with the work in the early 1940s of Warren McCulloch and Walter Pitts in Chicago, and then later at MIT, where they were joined by Jerome Lettvin and Humberto Maturana. But over the course of human history, lots of theories have been widely but wrongly held, sometimes for decades.

Consider the phlogiston theory of combustion. For over a century, starting in 1667, most scientists believed that what made disparate things burn was a common substance, later named phlogiston, that dissipated into the air via fire. Air had only a finite capacity to absorb it, which explained why fire was extinguished if there was only a small amount of air available.

By the end of the 19th century, Newtonian physics was pretty good at explaining the behavior of physical objects in the universe that could be directly observed. But right at the beginning of the 20th century, Einstein came up with two revolutions almost at once: relativity, and the idea that energy is quantized. The fallout from the second of these is quantum mechanics. Over 100 years later, its consequences are still being discovered and often still disbelieved. Quantum entanglement, which is leading to new forms of secure communications, is still routinely described as “spooky.” Measuring a property at one place can cause something to change at another, and with a reaction time that is faster than the speed of light.

For the last 30 years it had been thought that the mechanism for Alzheimer’s disease was the accumulation of amyloid protein plaques in the brain. A person with Alzheimer’s always has clumps of plaque in the brain. However, recent trials with drugs that reduce these clumps failed to give relief from the disease. The theory has come under fire, and now it is postulated that the lumps of plaque are a side effect of the disease, not the causal mechanism. There have been accusations that alternate approaches to understanding the disease were underfunded, as the peer reviews of such proposals were viewed as out of the mainstream.

The point is that long-held theories get superseded pretty frequently, and then everyone gets to claim that they thought they were a bit kooky all along. And over time the heroes of any particular science sometimes get knocked out of the pantheon for being fundamentally wrong.

Today we all assume that intelligence, thought, cognition—whatever you want to call it—is a product of computation. Computational neuroscience is the respectable way to approach the understanding of these phenomena in all animals, including humans. And artificial intelligence, the engineering counterpart to neuro-“science,” likewise assumes that to build an intelligent system we should write computer programs. In John McCarthy’s proposal for the famous 1956 Dartmouth Workshop on AI, the field’s foundational event, he argued precisely this position on the very first page.

Computationalism is not at all central to other aspects of our technology. New  . . .

Continue reading. There’s more.

Written by Leisureguy

28 October 2021 at 3:09 pm

Math’s Fatal Flaw: An elegant presentation of math’s cutting edge


Written by Leisureguy

20 October 2021 at 10:24 am

Posted in Daily life, Math, Video
