Later On

A blog written for those whose interests more or less match mine.

Archive for the ‘Books’ Category

A look inside The War for Gaul: A New Translation


The following is an excerpt from The War for Gaul: A New Translation, by Julius Caesar, translated by James J. O’Donnell, professor of history, philosophy, and religious studies and University Librarian at Arizona State University, whose books include Pagans, The Ruin of the Roman Empire, and Augustine: A New Biography.

Caesar deserves to be compared with Alexander the Great. No one before or since comes close. Command, conquest, and a lasting legacy set them apart from the likes of mere strivers like Napoleon or Hitler. And the war in Gaul was the making of Caesar.

Isn’t that what you would expect a translator of Caesar to say? It’s all entirely true and many have said as much before. But admiring him without understanding him makes us complicit in his ill-doing as well. This translation of his account of the war in Gaul will try to restore your objectivity and freedom of judgment. Make of him what you will.

***

Cormac McCarthy should be the one to write the story of Caesar in Gaul. As insensitive and brutal as McCarthy’s Americans afoot in a land of native and Spanish peoples they wrongly took for uncivilized, Caesar’s armies had little excuse for what they did and they preferred not to remember it once done. But Caesar told their story coolly. Though people die in droves, horribly, on these pages, the Latin word for “blood” appears only twice, near the end.

The facts of the story must be made clear. A general with something to prove, a career to make, and plunder to be harvested for financial gain was handed an army and a province and a guarantee he would have both for long enough to make serious mischief. He spent nine years battering his way through his province and the rich and promising lands beyond, bullying allies and brutalizing the resistant. By the time he was through, the lands and peoples that obeyed his commands—and those of his successors for another half millennium—had been vastly increased, and he was poised to make himself master of the world, or at least the world that stretched from the English Channel to Damascus.

He had no business doing any of this. His colleagues admired his chutzpah, knowing that he went far beyond every reasonable moral or legal boundary. His excesses were possible because he was in competition with two other monsters, one of whom fell in battle at the opposite end of the world while Caesar was in Gaul, the other of whom let Caesar go too long, then fought him, then fled, and ended up hacked to death by the minions of a king who thought it prudent to curry favor with Caesar.

But the book Caesar wrote is magnificent: amoral, certainly, but clear, vivid, and dramatic, a thing to be remembered and read for the ages. Books about war often make us sympathize with the wretchedness of the victims. This one forces us to be Romans of the kind its author wanted to be. We read it nervously, cheering for a bullfight we didn’t want to attend and don’t approve of, admiring the grace of the awesome minuet that floods the sand with blood. There is no denying that this is a great work of literature, one of the greatest, and at the same time, there should be no denying that it is a bad man’s book about his own bad deeds. I think it is the best bad man’s book ever written.

But many will resist my saying the plain fact. Because his carven prose depends on a deliberately restrained vocabulary and a terse, correct style, the book has been thought suitable for schoolboys for many generations, until about the time Latin schoolmasters discovered finally that women can read too. Now the book is in disfavor, for the wrong reasons: because it is about war, and because it is too easy. But we all need to read books about war if we are to avoid dying in one, and this book is anything but easy.

The best reasons for not teaching this book to the young are . . .

Continue reading.

Written by Leisureguy

16 September 2021 at 12:49 pm

“Galaxy Quest” from a Don Quixote perspective


I have seen Galaxy Quest before — an excellent movie, particularly for Star Trek fans, which stars Alan Rickman, Sigourney Weaver, Tony Shalhoub, Sam Rockwell, and Tim Allen. It’s currently available on Netflix, and having just been thinking about Don Quixote, I saw it through that lens — and it works.

If you’ve read Don Quixote, watch Galaxy Quest with Don Quixote in mind. It enriches the movie immensely.

Update: For example, the aliens took the TV series as historical documents, but then they made the fictional real.

Written by Leisureguy

10 September 2021 at 10:14 pm

Posted in Art, Books, Movies & TV

New Math Book Rescues Landmark Topology Proof


Interesting that the proof was almost lost simply because it was poorly written. Kevin Hartnett writes in Quanta:

One of the most important pieces of mathematical knowledge was on the verge of being lost, maybe forever. Now, a new book hopes to save it.

The Disc Embedding Theorem rewrites a proof completed in 1981 by Michael Freedman — about an infinite network of discs — after years of solitary toil on the California coast. Freedman’s proof answered a question that at the time was one of the most important unsolved questions in mathematics, and the defining problem in Freedman’s field, topology.

Freedman’s proof felt miraculous. Nobody at the time believed it could possibly work — until Freedman personally persuaded some of the most respected people in the field. But while he won over his contemporaries, the written proof is so full of gaps and omissions that its logic is impossible to follow unless you have Freedman, or someone who learned the proof from him, standing over your shoulder guiding you.

“I probably didn’t treat the exposition of the written material as carefully as I should have,” said Freedman, who today leads a Microsoft research group at the University of California, Santa Barbara focused on building a quantum computer. [No “probably” about it. – LG]

Consequently, the miracle of Freedman’s proof has faded into myth.

Today, few mathematicians understand what he did, and those who do are aging out of the field. The result is that research involving his proof has withered. Almost no one gets the main result, and some mathematicians have even questioned whether it’s correct at all.

In a 2012 post on MathOverflow, one commenter referred to the proof as a “monstrosity of a paper” and said he had “never met a mathematician who could convince me that he or she understood Freedman’s proof.”

The new book is the best effort yet to fix the situation. It is a collaboration by five young researchers who were captivated by the beauty of Freedman’s proof and wanted to give it new life. Over nearly 500 pages, it spells out the steps of Freedman’s argument in complete detail, using clear, consistent terminology. The goal was to turn this important but inaccessible piece of mathematics into something that a motivated undergraduate could learn in a semester.

“There is nothing left to the imagination anymore,” said Arunima Ray of the Max Planck Institute for Mathematics in Bonn, co-editor of the book along with Stefan Behrens of Bielefeld University, Boldizsár Kalmár of the Budapest University of Technology and Economics, Min Hoon Kim of Chonnam National University in South Korea, and Mark Powell of Durham University in the U.K. “It’s all nailed down.”

Sorting Spheres

In 1974, Michael Freedman was 23 years old, and he had his eye on one of the biggest problems in topology, a field of math which studies the basic characteristics of spaces, or manifolds, as mathematicians refer to them.

It was called the Poincaré conjecture, after the French mathematician Henri Poincaré, who’d posed it in 1904. Poincaré predicted that any shape, or manifold, with certain generic characteristics must be equivalent, or homeomorphic, to the sphere. (Two manifolds are homeomorphic when you can take all the points on one and map them over to points on the other while maintaining relative distances between points, so that points that are close together on the first manifold remain close together on the second.)
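A side note on terminology (a formal gloss, not part of the Quanta excerpt): in standard usage, two manifolds M and N are homeomorphic when there is a map

\[
f \colon M \to N, \qquad f \ \text{a bijection with both } f \ \text{and } f^{-1} \ \text{continuous.}
\]

Only continuity is needed (points that are close together stay close together), not the preservation of exact distances.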

Poincaré was specifically thinking of three-dimensional manifolds, but mathematicians went on to consider manifolds of all dimensions. They also wondered if the conjecture held for two types of manifolds. The first type, known as a “smooth” manifold, doesn’t have any features like sharp corners, allowing you to perform calculus at every point. The second, known as a “topological” manifold, can have corners where calculus is impossible.

By the time Freedman started work on the problem, mathematicians had made a lot of progress on the conjecture, including solving the topological version of it in dimensions 5 and higher.

Freedman focused on the four-dimensional topological conjecture. It stated that any topological manifold that’s a four-dimensional “homotopy” sphere, which is loosely equivalent to a four-dimensional sphere, is in fact homeomorphic (strongly equivalent) to the four-dimensional sphere.
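Written compactly (my gloss, not the article's own notation): for a closed topological 4-manifold M, the claim is

\[
M \simeq S^4 \ \text{(homotopy equivalent)} \;\Longrightarrow\; M \cong S^4 \ \text{(homeomorphic)}.
\]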

“The question we’re asking is, [for the four-sphere], is there a difference between these two notions of equivalence?” said Ray.

The four-dimensional version was arguably the hardest version of Poincaré’s problem. This is due in part to the fact that the tools mathematicians used to solve the conjecture in higher dimensions don’t work in the more constrained setting of four dimensions. (Another contender for the hardest version of the question is the three-dimensional Poincaré conjecture, which wasn’t solved until 2002, by Grigori Perelman.)

At the time Freedman set to work, no one had any fully developed idea for how to solve it — meaning that if he was going to succeed, he was going to have to invent wildly new mathematics.

Curves That Count

Before getting into how he proved the Poincaré conjecture, it’s worth digging a little more into what the question is really asking.

A four-dimensional homotopy sphere can be characterized by the way curves drawn inside it interact with each other: The interaction tells you something essential about the larger space in which they’re interacting.

In the four-dimensional case, these curves will be two-dimensional planes (and in general, the curves will be at most half the dimension of the larger space they’re drawn inside). To understand the basic setup, it’s easier to consider a simpler example involving one-dimensional curves intersecting inside two-dimensional space, like this: . . .

Continue reading.

Written by Leisureguy

10 September 2021 at 4:48 pm

Posted in Books, Math

Ilan Stavans on Don Quixote


Bronze statues of Don Quixote and Sancho Panza, at the Plaza de España in Madrid, Spain. 2010. Photo by רנדום.

I am a big fan of the book Don Quixote, and I am just one fan among millions of others. (Indeed, it is probably time to read the book again.) In Octavian, Ilan Stavans writes about the book:

More than 400 years ago, an aging and obscure Spaniard named Miguel de Cervantes published a novel that would change the course of literature (and come to be regarded as perhaps the greatest of all novels by numerous critics): The Ingenious Gentleman Don Quixote of La Mancha, more commonly known as Don Quixote. Rich, strange, nearly infinite in its influence, the book offers us a profound understanding both of humans and of the stories they tell. This brilliant essay by Ilan Stavans (critic, essayist, translator, Octavian board member, and publisher of Restless Books) imagines the Quixote as a nation unto itself, one whose ambassadors have spread its magic through space and time.


It has been described as the most influential novel in the history of the form. It is also among the bulkiest, longer even than David Foster Wallace’s Infinite Jest. It is the steadiest of bestsellers, only outshined by the Bible (speaking of which, the 19th-century French critic Charles Augustin Sainte-Beuve once called it “the secular Bible of humanity”). It has been translated into English a total of twenty times, more than any other novel. The first appeared in 1613, while its author Miguel de Cervantes was still alive.

Don Quixote of La Mancha, in other words, is a book one should love without restraint. It is moody and unpredictable. It is formally idiosyncratic. It moves easily between the highest and lowest of tonal registers. It possesses an uncanny ability to weed out unwelcome readers. Its 381,104 words, 8,207 periods, 40,617 commas, 690 exclamation points, 960 question marks, and 2,046 semi-colons draw those readers it does welcome into a labyrinth not only of signs but of images and emotions. To find one’s way through this requires intellectual stamina, psychological alertness, and — paradoxically — a willing credulity. After all, the book is a collection of bizarre episodes, some comic, some pathetic, some utterly disengaged from the rest, all connected by the thread of its two wandering protagonists, a slim, laid-back hidalgo who does nothing but spend his idle hours reading tales of adventure, and his squire, Sancho Panza, an almost illiterate field laborer and family man who believes he’s a practical fellow when he isn’t. It’s hard to know which of the two is more cuckoo: the foolish señor who is convinced he can change the world by becoming a superhero, or the silly employee who wastes his time following him.

This already complex structure exists, as well, in four dimensions — it changes with time. Come to the book when you are young and you will discover in it the endless ebullience of youth; read it again in your fifties (about the age of its protagonist, Don Quixote de la Mancha, also known as the Knight of the Mournful Countenance) and you will see a subtle and empathetic portrayal of a man in the grip of a midlife crisis. Return again in your old age, and find the Quixote transformed into a book on how to deal with the end that awaits us all, a well-tempered look into the face of death.

This year marks the 400th anniversary of Cervantes completing the novel’s manuscript. If the definition of a classic is a book that passes the test of time, this one has succeeded with flying colors. But I want to propose a different definition: a classic is a book capable of building a nation around itself. This one has. The world may be divided by flags, currencies, borders, and governments, but the realest nations congregate around mythologies. Unquestionably there is a Quixote nation, made up of the millions of readers who have fallen under its spell. It includes an assortment of admirable names: Lord Byron, Gustave Flaubert, Fyodor Dostoyevsky, Franz Kafka, William Faulkner, Jorge Luis Borges, Orson Welles, Salvador Dalí, Miguel de Unamuno, and Pablo Picasso (whose 1955 ink study, also undertaken as an anniversary commemoration, of the knight and his squire still amazes the eye today). George Washington, who helped build his own republic of the imagination, read the book and loved it. But more admirable than these are the countless readers of the book whose names are lost to history — the true creators of a homeland for the knight and his servant.

The Quixote’s birth was far from certain. Prior to starting work on what would become his magnum opus, Cervantes was a soldier (he fought in the Battle of Lepanto against the Turks, a heroic yet humbling experience: he was injured and lost much of the use of his left arm), a captive at war, and a lousy tax collector who ended up in jail for mishandling funds. He was also a rather limited author, a poet and playwright (he also wrote novellas), whom, I suspect, posterity would ignore if, about a decade before his death in 1616 at 69, he hadn’t stumbled on the idea of exploring the limits of parody. Still, he was penniless in the end, never suspecting for a minute the global impact his work would have. Indeed, I often imagine the surprise on his face (none of the portraits available were done while he was alive) had he realized the whole period he belonged to would be called “the age of Cervantes.” Not the age of Lope de Vega, the most successful and prolific of all playwrights who were his contemporaries? Not Quevedo or Góngora, two astonishing sonnetists?

The majority of readers, at least American readers, first learn of Don Quixote through Man of La Mancha, a syrupy and formulaic Broadway musical that in most ways could not be more distant from the antinomian spirit of the book. The one consolation to be drawn from this fact is that, for all its flaws, Man of La Mancha does manage to communicate an essential truth about the novel — the essential truth, in fact: both are driven by the restless and infinite imagination of Don Quixote, who dreams, in the words of the song, the impossible dream. (One is tempted to quote Picasso here: “Everything you can imagine is real.”) Indeed, no book addresses with a more penetrating eye the freedom dreams grant us. (Sorry, Freud!) Consider the arch-famous episode of the windmills, which should be seen as a clash between a decrepit feudalist and the most innovative energy technology of the time. Don Quixote is convinced these magisterial structures are giants whose intent is to conquer the earth, whereas Sancho knows (and so does the narrator) that they are far more mundane than that. Or the puppet theater performing a tale of adventure and submission which the knight confuses with real events, jumping on the stage and destroying the marionettes. Or the group of prisoners in transit whom Don Quixote liberates because he believes them to be innocent. Or the Cave of Montesinos, a dark and frightening place where Don Quixote has a mystical experience. The list of such incidents is long.

True, Cervantes wasn’t a good stylist. There are bumpy parts in Don Quixote, in which the author seems asleep at the wheel. He is sometimes repetitive. He forgets crucial details, such as the name of Sancho’s wife, calling her variously Juana and Teresa. But novels, especially lasting ones, don’t need to be perfect. What they need to be, of course, is . . .

Continue reading. There’s much more.

And read — or listen to — Don Quixote. The Edith Grossman translation is serviceable.

Written by Leisureguy

9 September 2021 at 10:36 am

Could Small Still Be Beautiful?


Bryce T. Bauer has an interesting article in Craftsmanship magazine. The article’s blurb:

In the mid-1970s, an economist named E.F. Schumacher argued that our push for endless growth was doomed to fail. His book, “Small Is Beautiful,” soon became a classic, inspiring a generation of idealists. While largely forgotten since then, Schumacher’s ideas might speak to the working class’s troubles today more than ever.

The article begins:

1. “Economics as a Form of Brain Damage”
2. The Schumacher Center For a New Economics
3. The New Economics of Land Ownership
4. The New Economics of Business Financing
5. The New Economics of Currency
6. The New Economics of Entrepreneurship
7. Challenges to the New Economy

Four decades ago, just as some of the forces that have caused today’s problems with globalization and inequality began to take hold, a British economist by the name of E.F. Schumacher took America by storm with a set of contrary ideas about how an economy should work.

Schumacher aimed squarely at supporting everyday people and the communities where they lived. For a brief period in the mid-1970s, his name enjoyed headline status — and his book, “Small Is Beautiful: Economics as if People Mattered,” joined a pantheon of powerful, call-to-action works of the time. Schumacher’s book was taken so seriously that, a few years after its publication, it was listed alongside such enduring critiques as Rachel Carson’s “Silent Spring” and Paul R. Ehrlich’s “The Population Bomb.”

While “Small Is Beautiful” hasn’t endured with quite the same power those works have enjoyed, its ideas have still seeped into the thinking of some of the nation’s latter-day acolytes of social and environmental sustainability, including Wendell Berry, Jane Jacobs, and Bill McKibben. Schumacher’s work also inspired a small think-tank focused on turning the small towns and bucolic countryside of the Massachusetts Berkshires into a laboratory for further exploration of his theories.

Given how rarely Schumacher’s once-popular ideas are discussed today, one can’t help but wonder—were his perceptions all wrong? Or, as the director of the institute focused on sustaining his ideas, and as Schumacher himself also said, was their time yet to come? If the latter, might that time be now? Every day, it seems, more and more experts join the argument that the accelerating dominance of global companies — in a world struggling with income inequality, resource depletion, and the growing ravages of climate change — has put us on an unsustainable path. If that bleak outlook is correct, maybe it’s time to give Schumacher’s ideas a second look.

“ECONOMICS AS A FORM OF BRAIN DAMAGE”

When “Small Is Beautiful” came out, in 1973, Schumacher had already worked for several decades as an economist. In the years after its publication, he toured the United States speaking to crowds across the country and meeting with political leaders, including an address before 50 members of Congress and a meeting with President Jimmy Carter. At the time, America was being wrenched by many of the ills he said modern economics would cause. The 1970s was a decade marked by oil and gas shocks, labor unrest and stagflation, a growing concern over the environment, and the discord of the Vietnam War. Schumacher was attuned to what it all portended. (In fact, the first use of the term “global warming” occurred just two years after Schumacher’s book was published.) Schumacher wrote, “we do well to ask why it is that all these terms — pollution, environment, ecology, etc. — have so suddenly come into prominence… is this a sudden fad, a silly fashion, or perhaps a sudden failure of nerve?”

Born in Bonn, Germany, Schumacher had fled Nazi Germany to England in 1937. During the Second World War, when Great Britain began interning Germans, including Jewish refugees, Schumacher and his family moved to the countryside, where he worked on a farm until his writing caught the notice of John Maynard Keynes, the British economist who launched the 20th century’s activist alternative to unfettered, free-market economics.

The core of Schumacher’s argument lay in his book’s subtitle: “Economics as if People Mattered.” For far too long, economists had approached the problem of development in a way that focused too much on goods over people, emphasizing the elimination of labor instead of job creation. He accused these experts of treating consumption as the end itself, always to be maximized.

In Schumacher’s view, the economy would not benefit from the standard methods of stimulation; if anything, it should be de-intensified. If this could be managed, Schumacher believed, it would allow time “for any piece of work — enough to make a really good job of it, to enjoy oneself, to produce real equality, even to make things beautiful.”

The opportunity to work this way — which is central to any artisan or tradesman, and to his or her ability to produce top-notch, innovative work — clearly has only declined further in the years since Schumacher made this observation. And if anything, his critique might be even more timely today. In a new book, “Tightrope: Americans Reaching for Hope,” veteran New York Times journalists Nicholas Kristof and Sheryl WuDunn argue that the growing scarcity of jobs that offer such visceral satisfactions is part of what’s plunged America’s working class into unprecedented levels of despair, drug addiction, and suicide.

To be truly helpful, Schumacher argued, development funds in poor areas should be spent on “intermediate technology” — that is, technology that’s cheap, resilient, and simple enough to be used by workers in areas that lack access to education, ready capital, and sophisticated infrastructure. Technology that’s too expensive, and too complex to be readily used in developing economies, he said, destroys “the possibilities of self-reliance.” . . .

Continue reading. There’s much more.

Written by Leisureguy

6 September 2021 at 5:34 pm

The radical power of the book index


Michael Delgado writes in Prospect:

Index, A History of the
Dennis Duncan (Allen Lane)

Is Google making us stupid? This was the question posed by the American writer Nicholas Carr in a 2008 essay published in the Atlantic. “I’m not thinking the way I used to think,” he confessed. “Immersing myself in a book or a lengthy article used to be easy… now my concentration often starts to drift after two or three pages.” The internet, Carr posited, was to blame. “My mind now expects to take in information the way the Net distributes it.”

This argument has become something of a cliché, and Carr was self-aware enough to point out that this was hardly a new concern. Marshall McLuhan had said much the same thing about technology in the 1960s. Nietzsche’s prose, according to a friend of his, became “tighter, more telegraphic” after he began using a typewriter. A minor Venetian humanist lamented that the arrival of Gutenberg’s printing press in the 15th century would make people lazy, weak-minded and “less studious.”

Misoneism is the ur-fear. It’s understandable when it emerges as a response to paradigm-shifting inventions like the typewriter, the printing press or writing itself. A passage in Plato’s Phaedrus relays Socrates’ myth about the Egyptian god Theuth, who invented the act of writing. Theuth proudly presents his new creation to King Thamus, attesting that it “will make the Egyptians wiser and will improve their memories.” Thamus is unimpressed: “this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practise their memory.”

It is difficult to imagine that what most of us see as a functional piece of scholarly apparatus, the book index, could inspire the same passions as these monumental innovations did. But as Dennis Duncan shows in his masterful new book, Index, A History of the, the birth of the index was a long and painful one.

Early indexes, concordances and distinctiones had been around for a long time before the index blossomed into something like its modern form. It was the arrival of printed page numbers that helped firm things up. At the Bodleian Library, Duncan gets his hands on the first extant example of the printed page number, in a short sermon produced in Cologne in 1470, and describes it as “the most intense experience that I have had of the archival sublime.” But 1470 was far from a watershed moment. Even at the end of the century, page numbers only appeared in around 10 per cent of printed books, and the index itself continued to be treated with suspicion. The Renaissance polymath Conrad Gessner balked at those “ignorant or dishonest men” who “rely only on the indexes” to gain information. A couple of centuries later Alexander Pope put it more floridly in The Dunciad: “Index-learning turns no student pale/Yet holds the eel of science by the tail.” The index, in these conceptions, is a shortcut, a cheat code that lets you digest a book without reading it in full. We are back to Socrates.

We do not think of indexes in this way nowadays. We have other concerns about technology, such as platforms like Blinkist that claim to distil the key ideas from an array of new books into bitesize podcasts. Notwithstanding Gessner, Pope and the rest, reading indexes from A to Z has not become common practice, and certainly isn’t considered a substitute for reading the book itself. But some indexes, as Duncan shows through a host of entertaining examples, can be enlightening, satirical and pointed in and of themselves.

It is these indexes—the ones that push the boundaries of the form and play with our expectations—that comprise the most entertaining parts of Index, A History of the. In one of those typically ego-driven episodes of academic score-settling associated with the Augustans, the Hellenist Richard Bentley in 1697 published a fine-toothed critique of Charles Boyle’s new edition of the epistles of the ancient tyrant Phalaris. In response, a gaggle of Boyle’s affiliates ganged up on Bentley, writing a hot-headed reply in which they accused him of “Index-hunting” in the manner of a “Second-hand Critic.” A young wit named William King added to this “A Short Account of Dr Bentley by Way of Index,” which includes multiple-page entries for, inter alia, “His egregious dulness,” “His Pedantry,” “His Appeal to Foreigners” and, perhaps most damning of all, “His familiar acquaintance with Books that he never saw.” This is bibliography as weapon, the expectation of dull objectivity associated with the index ingeniously subverted. The index-hunter has become the index-hunted.

There is in this anecdote a very particular, pleasing blend of nitpicking and linguistic flair that feels like a precursor to the cool passive aggression we so often see in the letters pages of learned magazines like the London Review of Books. Throughout the book, Duncan is aware that the index is a specialist area of interest. But by dissecting its history and its workings, he also shows that the vast majority of people who use indexes—anyone, really, who has ever read a work of non-fiction—take them for granted. His book is both an entertaining and edifying journey through index-history and a spirited defence of the index (and indexers) in the technological age. It is precisely because indexing is a rarefied sport that it is worth saving.

Not that it really needs saving. Yes, there’s the issue of money: indexing is not a lucrative practice, and the rise of automated software has provided a cheap alternative to human indexers. But as Duncan shows, software, at least by itself, is not much of an alternative. “A specialist indexer knows that it can be helpful to tag a concept even if it is not explicitly named… they can tell the difference—even without first names—between Marx, Karl; Marx, Groucho; and Marx, Richard.” Nowhere in this book is the importance of the human indexer more apparent than in its own index, compiled by Paula Clarke Bain, which is a nest of metatextual easter eggs. The entry for “wild goose chase” tells us to “see chase, wild goose” (not dissimilar from the teenage Lewis Carroll’s playful index to his own handwritten journal, which had entries for “General, Things in, 25,” “In General, Things, 25” and “Things in General, 25”). Under “Indexers” we have the subheading . . .

Continue reading.

Written by Leisureguy

4 September 2021 at 8:41 am

Posted in Books, Daily life, History

Kayfabe Ascendent


Two very interesting videos:

Written by Leisureguy

3 September 2021 at 6:13 pm

The Table of Contents outline for the Guide now fixed


Somehow a link on the “Sources for the Guide” page was broken — the link shows in outline form the Guide’s full Table of Contents. The link has been fixed.

Written by Leisureguy

29 August 2021 at 6:24 pm

Posted in Books, Shaving

Toddlers who want to help


I’m listening to Hunt, Gather, Parent on my walks, and the author just raised an interesting point — or rather, the Mayan parent she was learning from did. A child learning to help is like a child learning to talk: at first it babbles, then it forms words and near-words, then makes attempts at sentences. Learning the language takes years.

And the same is true of learning to help. The help a 1-year-old can offer is akin to babbling: it’s not really help (just as babbling’s not really language), but it’s an essential step and one to be encouraged. It’s important to let the toddler “help” even if the parent must later clean up a mess, because becoming fluent in help will take years of practice.

Many modern parents, in effect, teach their children NOT to help, by repeatedly rebuffing those early attempts to help. After a while the child learns the lesson: Don’t help.

One interesting comment she made on helping: don’t praise the help, just accept it and continue the pattern — just as first words and sentences are not praised but simply accepted and used. Praise turns out to have negative consequences (such as creating praise addiction). The idea of praise is to encourage self-esteem, but self-esteem comes from accomplishment, not from the words of others. She observed that Mayan parents very rarely praise their children.

Moreover, when a child does a task, that effort is within their locus of control: they don’t depend on others, but control the outcome themselves. Praise, in contrast, is not under the child’s control — it comes (or fails to come) from an external source, and a child conditioned to need praise has a source of anxiety: whether the praise will be forthcoming or not. The locus of control has moved outside the child, and that fosters insecurity.

It’s a very interesting book.

Written by Leisureguy

23 August 2021 at 9:32 am

Walking and listening


I have rediscovered that listening while walking makes the walk easier. Today I walked farther, faster, and longer than yesterday, but instead of feeling tired, I felt invigorated. Of course, by walking six days a week I am naturally increasing strength and energy levels (the training effect Ken Cooper talks about), but still I think listening to the audiobook helped — plus the book (Hunt, Gather, Parent currently, on loan from library) was interesting and I learned things. (It really is a fascinating book.)

I did listen to the Edith Grossman translation of Don Quixote on walks in Monterey. Because the local library has quite a few books in downloadable audio format, I’ll look through those — I already spotted Jane Eyre, and that will be next. Once I run out of library audiobooks, there’s always Audible.com.

Written by Leisureguy

21 August 2021 at 4:42 pm

The Singularity approacheth


As I’ve mentioned previously, the Singularity in science-fiction is when an AI gets good enough to design a better AI, and then recursion leads to exponential improvement, and the story’s off and running.

Well, it’s here. Matthew Hutson mentions it in an article in the New Yorker, which begins:

Deep learning, the artificial-intelligence technology that powers voice assistants, autonomous cars, and Go champions, relies on complicated “neural network” software arranged in layers. A deep-learning system can live on a single computer, but the biggest ones are spread over thousands of machines wired together into “clusters,” which sometimes live at large data centers, like those operated by Google. In a big cluster, as many as forty-eight pizza-box-size servers slide into a rack as tall as a person; these racks stand in rows, filling buildings the size of warehouses. The neural networks in such systems can tackle daunting problems, but they also face clear challenges. A network spread across a cluster is like a brain that’s been scattered around a room and wired together. Electrons move fast, but, even so, cross-chip communication is slow, and uses extravagant amounts of energy.

Eric Vishria, a general partner at Benchmark, a venture-capital firm in San Francisco, first came to understand this problem in the spring of 2016, while listening to a presentation from a new computer-chip company called Cerebras Systems. Benchmark is known for having made early investments in companies such as Twitter, Uber, and eBay—that is, in software, not hardware. The firm looks at about two hundred startup pitches a year, and invests in maybe one. “We’re in this kissing-a-thousand-frogs kind of game,” Vishria told me. As the presentation started, he had already decided to toss the frog back. “I’m, like, Why did I agree to this? We’re not gonna do a hardware investment,” he recalled thinking. “This is so dumb.”

Andrew Feldman, Cerebras’s co-founder, began his slide deck with a cover slide, then a team slide, catching Vishria’s attention: the talent was impressive. Then Feldman compared two kinds of computer chips. First, he looked at graphics-processing units, or G.P.U.s—chips designed for creating 3-D images. For a variety of reasons, today’s machine-learning systems depend on these graphics chips. Next, he looked at central processing units, or C.P.U.s—the general-purpose chips that do most of the work on a typical computer. “Slide 3 was something along the lines of, ‘G.P.U.s actually suck for deep learning—they just happen to be a hundred times better than C.P.U.s,’ ” Vishria recalled. “And, as soon as he said it, I was, like, facepalm. Of course! Of course!” Cerebras was proposing a new kind of chip—one built not for graphics but for A.I. specifically.

Vishria had grown used to hearing pitches from companies that planned to use deep learning for cybersecurity, medical imaging, chatbots, and other applications. After the Cerebras presentation, he talked with engineers at some of the companies that Benchmark had helped fund, including Zillow, Uber, and Stitch Fix; they told him that they were struggling with A.I. because “training” the neural networks took too long. Google had begun using super-fast “tensor-processing units,” or T.P.U.s—special chips it had designed for artificial intelligence. Vishria knew that a gold rush was under way, and that someone had to build the picks and shovels.

That year, Benchmark and Foundation Capital, another venture-capital company, led a twenty-seven-million-dollar round of investment in Cerebras, which has since raised close to half a billion dollars. Other companies are also making so-called A.I. accelerators; Cerebras’s competitors—Groq, Graphcore, and SambaNova—have raised more than two billion dollars in capital combined. But Cerebras’s approach is unique. Instead of making chips in the usual way—by printing dozens of them onto a large wafer of silicon, cutting them out of the wafer, and then wiring them to one another—the company has made one giant “wafer-scale” chip. A typical computer chip is the size of a fingernail. Cerebras’s is the size of a dinner plate. It is the largest computer chip in the world.

Even competitors find this feat impressive. “It’s all new science,” Nigel Toon, the C.E.O. and co-founder of Graphcore, told me. “It’s an incredible piece of engineering—a tour de force.” At the same time, another engineer I spoke with described it, somewhat defensively, as a science project—bigness for bigness’s sake. Companies have tried to build mega-chips in the past and failed; Cerebras’s plan amounted to a bet that surmounting the engineering challenges would be possible, and worth it. “To be totally honest with you, for me, ignorance was an advantage,” Vishria said. “I don’t know that, if I’d understood how difficult it was going to be to do what they did, I would have had the guts to invest.”

Computers get faster and faster—a remarkable fact that’s easy to take for granted. It’s often explained by means of Moore’s Law: the pattern identified in 1965 by the semiconductor pioneer Gordon Moore, according to which the number of transistors on a chip doubles every year or two. Moore’s Law, of course, isn’t really a law. Engineers work tirelessly to shrink transistors—the on-off switches through which chips function—while also refining each chip’s “architecture,” creating more efficient and powerful designs. . .
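As a quick aside (not from the New Yorker piece): the doubling pattern Moore described is easy to play with numerically. Here is a minimal sketch in Python, assuming a roughly two-year doubling period and using the roughly 2,300-transistor Intel 4004 of 1971 as an illustrative starting point; the function name and the chosen years are mine, not the article's.

```python
# Toy illustration of Moore's-Law-style doubling.
# Assumptions (not from the article): ~2-year doubling period,
# starting from ~2,300 transistors in 1971 (Intel 4004 era).

def projected_transistors(start_count: float, start_year: int, year: int,
                          doubling_period_years: float = 2.0) -> float:
    """Project a transistor count forward under exponential doubling."""
    elapsed = year - start_year
    return start_count * 2 ** (elapsed / doubling_period_years)

if __name__ == "__main__":
    # Print a few decades of projected counts.
    for y in (1971, 1981, 1991, 2001, 2011, 2021):
        print(y, f"{projected_transistors(2_300, 1971, y):,.0f}")
```

Run as-is, it prints a column of projected counts; the point is only to show how quickly exponential doubling compounds, not to reproduce any particular chip's actual numbers.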

. . .  Cerebras’s first task was to address the manufacturing difficulties that bedevil bigger chips. A chip begins as a cylindrical ingot of crystallized silicon, about a foot across; the ingot gets sliced into circular wafers a fraction of a millimetre thick. Circuits are then “printed” onto the wafer, through a process called photolithography. Chemicals sensitive to ultraviolet light are carefully deposited on the surface in layers; U.V. beams are then projected through detailed stencils called reticles, and the chemicals react, forming circuits.

Typically, the light projected through the reticle covers an area that will become one chip. The wafer then moves over and the light is projected again. After dozens or hundreds of chips are printed, they’re laser-cut from the wafer. “The simplest way to think about it is, your mom rolls out a round sheet of cookie dough,” Feldman, who is an avid cook, said. “She’s got a cookie cutter, and she carefully stamps out cookies.” It’s impossible, because of the laws of physics and optics, to build a bigger cookie cutter. So, Feldman said, “We invented a technique such that you could communicate across that little bit of cookie dough between the two cookies.”

In Cerebras’s printing system—developed in partnership with T.S.M.C., the company that manufactures its chips—the cookies overlap at their edges, so that their wiring lines up. The result is a single, “wafer-scale” chip, copper-colored and square, which is twenty-one centimetres on a side. (The largest G.P.U. is a little less than three centimetres across.) Cerebras produced its first chip, the Wafer-Scale Engine 1, in 2019. The WSE-2, introduced this year, uses denser circuitry, and contains 2.6 trillion transistors collected into eight hundred and fifty thousand processing units, or “cores.” (The top G.P.U.s have a few thousand cores, and most C.P.U.s have fewer than ten.)

Aart de Geus, the chairman and co-C.E.O. of the company Synopsys, asked me, “2.6 trillion transistors is astounding, right?” Synopsys provides some of the software that Cerebras and other chipmakers use to make and verify their chip designs. In designing a chip, de Geus said, an engineer starts with two central questions: “Where does the data come in? Where is it being processed?” When chips were simpler, designers could answer these questions at drafting tables, with pencils in hand; working on today’s far more complex chips, they type code that describes the architecture they want to create, then move on to using visual and coding tools. “Think of seeing a house from the top,” de Geus said. “Is the garage close to the kitchen? Or is it close to the bedroom? You want it close to the kitchen—otherwise, you will have to carry groceries all through the house.” He explained that, having designed the floor plan, “you might describe what happens inside a room using equations.”

Chip designs are mind-bogglingly intricate. “There’s multiple layers,” de Geus said, with circuits crisscrossing and running on top of one another, like major expressway interchanges. For Cerebras’s engineers, working at wafer scale, that complexity was heightened. Synopsys’s software offered assistance in the form of artificial intelligence: pattern-matching algorithms recognized frequent problems and suggested solutions; optimization routines nudged rooms into faster, more efficient arrangements. If too many lanes of traffic try to squeeze between two blocks of buildings, the software allows engineers to play Robert Moses, shifting the blocks.

In the end, Feldman said, the mega-chip design offers several advantages. Cores communicate faster when they’re on the same chip: instead of being spread around a room, the computer’s brain is now in a single skull. Big chips handle memory better, too. Typically, a small chip that’s ready to process a file must first fetch it from a shared memory chip located elsewhere on its circuit board; only the most frequently used data might be cached closer to home. In describing the efficiencies of the wafer-scale chip, Feldman offered an analogy: he asked me to imagine groups of roommates (the cores) in a dormitory (a chip) who want to watch a football game (do computing work). To watch the game, Feldman said, the roommates need beer stored in a fridge (data stored in memory); Cerebras puts a fridge in every room, so that the roommates don’t have to venture to the dorm’s common kitchen or the Safeway. This has the added advantage of allowing each core to work more quickly on different data. “So in my dorm room I can have Bud,” Feldman said. “And in your dorm room you can have Schlitz.”

Finally, Cerebras had to surmount the problem of yield. The firm’s engineers use Trilogy’s trick: redundancy. But here they have an advantage over their predecessors. Trilogy was trying to make a general-purpose chip, with many varied components, and so wiring around a single failed element could require connecting to a distant substitute. On Cerebras’s chip, all the cores are identical. If one cookie comes out wrong, the ones surrounding it are just as good.

In June, in a paper published in Nature, Google developers reported that, for the first time, they’d fully automated a process called “chip floorplanning.” A typical chip can contain thousands of memory blocks, tens of millions of logic gates, and tens of kilometres of microscopic wiring. Using the same techniques that their DeepMind colleagues had used to teach a neural network to win at Go, they’d trained an A.I. to floorplan a tensor-processing unit, arranging these elements while preventing data congestion; when they tested the A.I.’s T.P.U. against one that a team of experts had spent several months creating, they found that the computer’s design, drawn up in a matter of hours, matched or exceeded the humans’ in efficient use of area, power, and wire length. Google is currently using the algorithm to design its next T.P.U.

People in A.I. circles speak of the singularity—a point at which technology will begin improving itself at a rate beyond human control. I asked de Geus if his software had helped design any of the chips that his software now uses to design chips. He said that it had, and showed me a slide deck from a recent keynote he’d given; it ended with M. C. Escher’s illustration of two hands drawing each other, which de Geus had labelled “Silicon” and “Smarts.” When I told Feldman that I couldn’t wait to see him use a Cerebras chip to design a Cerebras chip, he laughed. “That’s like feeding chickens chicken nuggets,” he said. “Ewww.”

Designing and manufacturing the chip turned out to be just half of the challenge. Brains use . . .

Continue reading. There’s more.

Emphasis added.

Written by Leisureguy

20 August 2021 at 6:23 pm

The puzzle of ownership and the rules about it


On a recent episode of Amicus, Dalia Lithwick spoke with law professor Michael Heller about Mine!: How the Hidden Rules of Ownership Control Our Lives, which he co-wrote with James Salzman.

Mine! seeks to do for ownership what Freakonomics did for incentives and what Nudge did for our cognitive biases. It opens up a new, counterintuitive, and fascinating way to think about the world that we all take for granted. Their conversation has been edited and condensed for clarity.

Slate has an edited and condensed version of the conversation worth reading, but the first example struck home to me: the space in back of a passenger seat on an airplane has dual ownership. It belongs to the person in the seat, who can lean back into that space, and it also belongs to the person in the seat behind, who will use that space for (say) knees and laptop.

Often, as you can imagine and perhaps have experienced, the two putative owners dispute ownership, but they overlook the fact that they both own the space because the airline sold that space twice, getting money both from the person who has the right to lean back (as shown by the button on the chair) and from the person who has a right to use that space for knees and laptop.

Although the two passengers often end up angry at each other, the proper target of their ire should be the airline. If a realtor sold a house simultaneously to two different families, they would quickly understand that the person who is at fault — the person who, not to put too fine a point on it, is dishonest — is the realtor. Both families are blameless, because both made the purchase in good faith. It is the thieving realtor who will (or at least should) go to jail, just as it is the thieving airline that should suffer consequences.

The consequence for the airlines should be a law that defines the amount of space an airline passenger has for undisputed use, and that should include the seat, a space in front for knees and laptop, and a space behind for reclining. Unfortunately, US business corporations pretty much control Congress, so Congress usually works on their behalf rather than on behalf of the public. Still, the party at fault is clear, and the solution is clear. And nothing will be done.

Written by Leisureguy

20 August 2021 at 4:09 pm

Elliot Ackerman on the Anabasis


Why the Classics? reprints an essay from their archives:

As the U.S. withdrawal from Afghanistan dominates the news, we wanted to bring you an essay by Elliot Ackerman from our archives about a work focused on the difficulty and danger of military adventures abroad: Xenophon’s Anabasis. This book offers — according to Ackerman, who is a veteran of Afghanistan, a recipient of the Silver Star and Bronze Star, and a National Book Award finalist — in its account of a doomed Greek expedition into Persia, an important lens for Americans to think about our wars both in Afghanistan and Iraq and our country’s understanding of what war is and what it means.

A western army marches to within sixty kilometers of Baghdad. Their leader, the youngest son of a great ruler, has gathered them to oust the current regime. They face a large irregular force. A pitched battle is fought. The result is inconclusive. The rudderless army suffers greatly as it attempts to extricate itself from the conflict and find its way home. Perhaps this sounds like something we’ve recently lived through, but it’s not. It’s Xenophon’s Anabasis — the title loosely translates as The March Up Country.

This book — written around 370 B.C., almost thirty years after the events it relates — chronicles an expedition of 10,000 Greek mercenaries under the command of Cyrus against his brother the Persian King Artaxerxes II between 401 and 399 B.C. Arguably the first soldier-turned-author, Xenophon was an Athenian of noble birth, but one who had little taste for the hectic and cosmopolitan life of fifth-century Athens. He had a predilection for war and was an admirer of the Spartans, then ruling the Hellespont after the humiliating Athenian defeat in the Peloponnesian Wars.

The Anabasis opens when Xenophon’s old friend Proxenus, a Theban mercenary serving as a general in the army gathering under Cyrus, invites him to participate in the campaign. Unsure whether or not to join the march, Xenophon consults with the philosopher Socrates who advises him to ask the Oracle at Delphi for guidance. “Xenophon went and put the question to Apollo, to which of the gods he must pray and do sacrifice, so that he might best accomplish his intended journey and return in safety, with good fortune.” The oracle tells Xenophon that he should pray to Apollo, but when he reports this to Socrates, the philosopher is quick to point out that Xenophon asked the wrong question. Rather than inquiring as to whether or not he should go on the campaign or stay at home in the first place, he instead only asked whom he should pray to so that he would achieve the best result once he left.

It seems asking the wrong question in the run-up to war isn’t just a 21st-century phenomenon. And Xenophon isn’t alone among the foreign commanders in asking them. Cyrus, the youngest son of the recently deceased Darius II, was by all accounts a charismatic leader. Xenophon writes of him, “that he should triumph over his friends in the great matters of well-doing is not surprising, seeing that he was much more powerful than they, but that he should go beyond them in minute attentions, and in an eager desire to give pleasure, seems to me, I must confess, more admirable.” In short, Cyrus is the type of guy you’d like to have a beer with. He leverages that personal magnetism when convincing each member of his coalition to contribute troops to the march, tailoring his reason for fighting to whatever they wish to hear as opposed to his true aim: deposing his brother as King of Persia.

It isn’t until Cyrus’s army has marched from Greece into what is now Iraq that he finally lets them know the real purpose of the expedition. Upon receiving the news, the Greek generals under Cyrus confer among themselves, concluding that having advanced this deep into enemy territory they are committed, that Cyrus, despite this one deception, has always treated them fairly, and that they should choose to fight on his behalf. The battle is joined on September 3rd, 401 B.C., just outside of Baghdad, at a place called Cunaxa.

That fateful day opens with a series of Persian feints, in which Artaxerxes’s army attempts to pull Cyrus deeper and deeper into their territory. The strategy works and when Cyrus commits to battle it’s because he believes he’s caught an adversary who is too weak to engage him. The Greek hoplites on the army’s left flank, anchored along the banks of the Euphrates River, quickly defeat the Persians opposite them. Witnessing their success from the army’s center, Cyrus’s enthusiasm becomes irrepressible. Then he sees his brother Artaxerxes across the battlefield. Xenophon captures the moment in all its glorious, simple horror: “Unable to longer contain himself, with a cry, ‘I see the man,’ he rushed at him . . . As Cyrus delivered the blow, someone struck him with a javelin under the eye severely . . . Cyrus himself fell, and eight of his bravest companions lay on top of him.”

The son of the great king, seemingly destined for greatness himself, is slain by a javelin launched from the hand of a common soldier named Mithridates. That common soldier is richly rewarded by his king Artaxerxes — until he drunkenly boasts aloud that it was he who killed Cyrus. The same king who rewarded him now tortures him to death via scaphism, a punishment synonymous for Greek peoples with the cruelty and excess of the Persians. Artaxerxes shackles Mithridates inside a box filled with flies, wasps, and larvae after coating his body with milk and honey; the soldier will slowly be eaten alive.

This story presents a hard lesson, but one absolutely central to the Anabasis: chance, not intelligence, bravery, or skill, exalts or casts down men’s fate.

Xenophon doesn’t expound on the what-ifs. Rather he shows throughout the Anabasis the fickle swing of fortune in wartime. When the Greek generals suggest that Ariaeus, Cyrus’s second-in-command, take up his claim, . . .

Continue reading. There’s more — and the Anabasis itself is interesting reading.

Written by Leisureguy

19 August 2021 at 1:51 pm

How the government can support a free press and cut disinformation


Martha Minow of the Harvard Law School is interviewed by Christina Pazzanese in The Harvard Gazette:

The mainstream news industry has been in sharp decline since the 1990s, owing to a series of financial and cultural changes brought by the rise of the internet. Amid the closing or shrinking of newspapers, magazines, and other legacy news outlets, Americans have increasingly turned to social media and heavily partisan websites and cable networks as their main sources of news and information, which has led to a proliferation of disinformation and misinformation and fueled polarization.

Given the vital role a free and responsible press plays in American democracy and the unique protections the Constitution provides for it under the First Amendment, is it time for the government to get involved? Is it government’s place to do so? And how could that happen without infringing on that freedom?

In a new book, Saving the News: Why the Constitution Calls for Government Action to Preserve Freedom of Speech (Oxford University Press, 2021), Martha Minow, 300th Anniversary University Professor at Harvard Law School, says the First Amendment not only does not preclude the federal government from protecting a free press in jeopardy, it requires that it do so. Minow spoke with the Gazette about some of the ways to potentially clean up social media and bankroll local news, and why arguing on Twitter isn’t a First Amendment right.

Q&A with Martha Minow

GAZETTE: There seems to be broad misunderstanding about what speech is protected by the First Amendment and what is not. Underlying “cancel culture” and complaints about “deplatforming” is a belief that people should not be penalized for saying things online that others find objectionable or that are inaccurate or even false because of their right to freely express themselves. Can you clarify how the First Amendment applies and doesn’t apply to social media platforms, like Twitter or Facebook, and online generally?

MINOW: I wrote a book to examine the challenges and decline of the news industry during a time of exploding misinformation and disinformation, a global pandemic, and great challenges to democracies in the United States and elsewhere. Certainly, one big dimension of this context is [what] some people are calling [an] infodemic: the flood of information that is enabled by the internet, and particularly social media. But it is not just social media. It’s conventional media, particularly cable news, but also some broadcast news.

Most of the sources of communications are private, and private communications are not governed by the First Amendment. Private companies are entitled to edit, elevate, suppress, remove [speech], whether it’s in broadcast, cable, or on a social media platform. Indeed, private companies have First Amendment freedoms against any government intervention. We in America are very fond of rights, and rights maybe are what hold us together more certainly than shared traditions, shared identities. And one of the ways that’s really evolved is how we talk about rights as if it’s a cultural phenomenon or it’s part of our identities. But that kind of informal conversation about “I have First Amendment freedom” may be a metaphor on a social media platform, but it is not a legal right. We sign terms-of-service agreements with platform companies. They’re the ones that control what is communicated and what’s not. That’s much less edited than broadcast or cable or print media. So, we’re living in an unprecedented time of lowered barriers to communicating to mass audiences — almost anybody can have access to a mass audience. But that’s all enabled by private providers and the private providers are not restricted by the First Amendment in what they remove or amplify.

GAZETTE: What are a few of the measures that could effectively hold tech firms to account for what is published and shared on their platforms?

MINOW: When it comes to holding the platform companies responsible for conveying, amplifying, even escalating hateful communications, misinformation, [and] disinformation, there are some techniques, but we have to be careful because if the government is involved, then the First Amendment is front and center. The techniques include eliminating or reducing the immunity currently granted under the [1996] Communications Decency Act, which has a section, Section 230, that treats platform companies differently from any other media and specifically immunizes them from liabilities that apply to all these other entities. They include liabilities for fraud, for defamation, for violating contract terms. [But] even Section 230 does not immunize the platforms from criminal responsibility or from violations of intellectual property rights. So, one very direct step to hold companies responsible would be to either eliminate this immunity or make it conditional. I actually prefer that alternative.

Companies adopt and should adhere to standards of moderation, content moderation rules. They can develop their own, but the idea would be they’d have to announce standards; they’d have to report on them; and they’d have to have processes to act on anyone calling them out for violating their own standards. That’s pretty direct, and it would put them on the same par as all the other media entities that exist in the country.

Another possibility would be to take intellectual property seriously and make the platforms pay when they take or steal or promote information from other news sources. They don’t put the revenues that they gain, particularly from advertising, back into investment in news. It’s not a punishment; it’s simply the idea of holding them responsible like [the] grown-up companies that they are.

You know, the fact of the matter is, the big disinformation source is as much broadcast and cable [television as it is online] and on those, there is a basis for government regulation. The FCC could take that seriously and withhold licenses, remove them, terminate them, for companies that are misleading people, that are labeling as news something that’s entirely opinion. Cable is largely a monopoly. Local communities grant franchises to cable companies; local communities could hold them more responsible. I don’t look forward to a day, I hope we never see it, that the government, at any level, is deciding the content. But when there is scarce opportunity to amplify communications given to private companies, it’s only fair that they should have standards that they then deliver on [by] providing some quality control of what they amplify. There is no right to have your message sent to everybody in the world anywhere. What there is, is a right to be free from government restrictions on your speech. So, one very specific suggestion that I have is that when we deal with digital communications, there could be a delay, and there could be speed bumps. Before people can spread messages to large numbers of people, there could be a delay, they could even use artificial intelligence to monitor it before it can be spread beyond a handful of people.

GAZETTE: The era of self-policing hasn’t worked very well so far, but you say there are things companies can and should be doing right now to act more responsibly and to help support the news. What are a few of those?

MINOW: . . .

Continue reading.

Written by Leisureguy

18 August 2021 at 5:34 pm

Losing a Language

leave a comment »

Charles Schifano writes in Desk Notes:

Toward the end of Vladimir Nabokov’s kaleidoscopic memoir, we see the young writer arrive in Cambridge, fresh from a lengthy trip across Europe—his aristocratic childhood now conclusively over, a mob of executioners having just missed him.

At a bookstall in the Market Place, I unexpectedly came upon a Russian work, a secondhand copy of Dahl’s Interpretative Dictionary of the Living Russian Language in four volumes. I bought it and resolved to read at least ten pages per day, jotting down such words and expressions as might especially please me, and I kept this up for a considerable time.

Already fluent in English, Nabokov was not shocked by the language of his new country. The sounds of his birthplace, however, were receding behind him. Now twenty years old, he would never again set foot in Russia.

My fear of losing or corrupting, through alien influence, the only thing I had salvaged from Russia—her language—became positively morbid and considerably more harassing than the fear I was to experience two decades later of my never being able to bring my English prose anywhere close to my Russian.

You might recognize these undulations if you’ve ever struggled with a new language. An unwanted symbiotic relationship comes to your lips: the structure of your native tongue infects the new language, and, startlingly, the new language infects your native tongue.

I used to sit up far into the night, surrounded by an almost Quixotic accumulation of unwieldy volumes…It would have horrified me at the time to discover what I see so clearly now, the direct influence upon my Russian structures of various contemporaneous (“Georgian”) English verse patterns that were running about my room and all over me like tame mice.

Sentences do, in fact, have this rather unfortunate tendency to echo. What’s heard in your ear is soon transferred to your mouth. You see this best with children, who acquire those first birth-cries of language through a stuttered mimicry of what’s around them. But you can also catch it with your own speech. Where did that slogan come from? Why did I just repeat that cliché? And most people have a careless nature when it comes to picking up phrases along the way—mirroring the accents of friends, regurgitating the expressions they hear at lunch. Any sufficiently focused writer will realize that it takes determination to resist the speech patterns of those around you.

For Nabokov, however, the trouble pertained to the sentences already in his mind. Could he discriminate among his languages? Or would the ooze from one seep into another? Each of his languages (Russian, English, French) has its own cadence, each has a distinct sense of timing, and each leaves its own reverberations on the page. How could he possibly keep them separate?

Languages aren’t, it’s worth remembering, a mere collection of words and a few bits of grammar. Only the superficial aspects of a language land in a dictionary. All the undertones and nuances and lyricisms are hidden somewhere else. And if that weren’t true, then acquiring a language wouldn’t require anything more than a good memory.

A curious detail about Nabokov’s languages comes from those who met him in person. Many people were surprised to discover that . . .

Continue reading.

Written by Leisureguy

17 August 2021 at 12:02 pm

The World Is All That Is the Case

leave a comment »

Ed Simon writes in The Millions:

“Well, God has arrived. I met him on the 5:15 train. He has a plan to stay in Cambridge permanently.”
John Maynard Keynes in a letter to his wife describing Ludwig Wittgenstein (1929)

Somewhere along the crooked scar of the eastern front, during those acrid summer months of the Brusilov Offensive in 1916, when the Russian Empire pierced into the lines of the Central Powers and perhaps more than one million men would be killed from June to September, a howitzer commander stationed with the Austrian 7th Army would pen gnomic observations in a notebook, having written a year before that the "facts of the world are not the end of the matter." Among the richest men in Europe, the 27-year-old had the option to defer military service, and yet an ascetic impulse compelled Ludwig Wittgenstein into the army, even though he lacked any patriotism for the Austro-Hungarian cause. It was only five years before his trench ruminations would coalesce into 1921's Tractatus Logico-Philosophicus, and the idiosyncratic contours of Wittgenstein's thinking were already obvious, scribbling away as incendiary explosions echoed across the Polish countryside and mustard gas wafted over fields of corpses. "When my conscience upsets my equilibrium, then I am not in agreement with something. But is this? Is it the world?" he writes. Wittgenstein is celebrated and detested for this aphoristic quality, with pronouncements offered as if directly from the Sibylline grove. "Philosophy," Wittgenstein argued in the posthumously published Culture and Value, "ought really to be written only as poetic composition." In keeping with its author's sentiment, I'd claim that the Tractatus is less the greatest philosophical work of the 20th century than it is one of the most immaculate volumes of modernist poetry written in the past hundred years.

The entire first chapter is only seven sentences, and can easily be arranged as a stanza, read for its prosody just as readily as analyzed by a logician for its rigor:

The world is all that is the case.

The world is the totality of facts, not of things.

The world is determined by the facts, and by their being all the facts.

For the totality of facts determines what is the case, and also whatever is not the case.

The facts in logical space are the world.

The world divides into facts.

Each item can be the case or not the case while everything else remains the same.

Its repetition unmistakably evokes poetry. The use of anaphora with "The world" at the beginning of the first three lines (and then again at the start of the fifth). The way in which each sentence builds to a crescendo of increasing length, from starting with a simple independent clause to a trio of lines that are composed of independent and dependent clauses, hitting a peak in the exact middle of the stanza, and then returning to independent clauses, though the final line is the second-longest sentence in the poem. Then there is the diction, the reiteration of certain abstract nouns in place of concrete images—"world," "facts," "things." In Wittgenstein's thought these have definite meanings, but in a general sense they're also words that are pushed to an extreme of conceptual intensity. They are as vague as is possible, while still connoting a definite something. If Wittgenstein mentioned red wheelbarrows and black petals, it might more obviously read as poetry, but what he's doing is unique; he's building verse from the constituent atoms of meaning, using the simplest possible concepts that could be deployed. Finally, the inscrutable nature of Wittgenstein's pronouncements is what gives him such an oracular aura. If the book is confusing, that's partially the point. It's not an argument; it's a meditation, a book of poetry that exists to do away with philosophy.

Published a century ago this spring, the Tractatus is certainly one of the oddest books in the history of logic, structured in an unconventional outline of unspooling pronouncements offered without argument, as well as a demonstration of philosophy’s basic emptiness, and thus the unknowability of reality. All great philosophers claim that theirs is the work that demolishes philosophy, and Wittgenstein is only different in that the Tractatus actually achieves that goal. “Most of the propositions and questions to be found in philosophical works are not false but nonsensical,” writes Wittgenstein.  “Consequently, we cannot give any answer to questions of this kind,” where “of this kind” means all of Western philosophy. What results is either poetry transubstantiated into philosophy or philosophy converted into poetry, with the Tractatus itself a paradox, a testament to language that shows the limits of language, where “anyone who understands me eventually recognizes… [my propositions] as nonsensical…He must, so to speak, throw away the ladder after he has climbed up it.” The Tractatus is a self-immolating book, a work that exists to demonstrate its own futility in existing. At its core are unanswerable questions of silence, meaninglessness, and unuttered poetry. The closest that Western philosophy has ever come to the Tao.

Of the Viennese Wittgensteins, Ludwig was raised in an atmosphere of unimaginable wealth. As a boy, the salons of the family mansions (there were 13 in the capital alone) were permeated with the music of Gustav Mahler and Johannes Brahms (performed by the composers themselves), the walls were lined with commissioned golden-shimmer paintings by Gustav Klimt, and the rocky bespoke sculptures of Auguste Rodin punctuated their courtyards. "Each of the siblings was made exceedingly rich," writes Alexander Waugh in The House of Wittgenstein (and he knows about difficult families), "but the money, to a family obsessed with social morality, brought with it many problems." Committed to utmost seriousness, dedication, and genius, the Wittgensteins were a cold family, the children forced to live up to the exacting standards of their father, Karl Otto Clemens Wittgenstein. Ludwig's father was an iron man, the Austrian Carnegie, and the son was indulged with virtually every privilege imaginable in fin de siècle Vienna. His four brothers were to be trained for industry, and to be patrons of art, music, poetry, and philosophy, with absolutely no failure in any regard to be countenanced. Only a few generations from the shtetl, the Wittgensteins had assimilated into gentile society, most of them converting to Catholicism, along with a few odd Protestants; Ludwig's grandfather even had the middle name "Christian," as if to underscore their new position. Wittgenstein had a lifelong ambivalence about his own Jewishness—even though three of his four grandparents were raised in the faith—and he had an attraction to a type of post-theological mystical Christianity, while he also claimed that his iconoclastic philosophy was "Hebraic."

Even more ironically, or perhaps uncannily, Wittgenstein was only the second most famous graduate of the secondary Realschule in Linz; the other student was Adolf Hitler. There's a class photograph from 1905 featuring both of them when they were 16. As James Klagge notes in Wittgenstein: Philosophy and Biography, "an encounter with Wittgenstein's mind would have created resentment and confusion in someone like Hitler," while to great controversy (and thin evidence) Kimberly Cornish in The Jew of Linz claims that the philosopher had a profound influence on the future dictator, inadvertently inspiring the latter's antisemitism. Strangely, as with many assimilated and converted Jews within Viennese society, a casual antisemitism prevailed among the Wittgensteins. Ludwig would even be attracted to the writings of the pseudo-philosopher Otto Weininger, who in his book Sex and Character promulgated a notoriously self-hating antisemitic and misogynistic position, deploring modernity as the "most effeminate of all ages" (the author would ultimately commit suicide, as an act of Völkisch sacrifice, in the house where Beethoven had lived). When promoting the book, Wittgenstein maintained that he didn't share in Weininger's views, but rather found the way the writer was so obviously wrong interesting. Jewishness was certainly not to be discussed in front of the Wittgenstein paterfamilias, nor was anything that to their father reeked of softness, gentleness, or effeminacy, including Ludwig's bisexuality, which he couldn't express until decades later. And so, at the risk of indulging an armchair version of that other great Viennese vocation of psychoanalysis, Wittgenstein made the impossibility of being able to say certain things the center of his philosophy. As Brahms had remembered, the family chillily acted "towards one another as if they were at court." Of his four brothers—Rudi drank a mixture of cyanide and milk while in a Berlin cabaret in 1904, distraught over his homosexuality and his father's rejection; Kurt shot himself in the dwindling days of the Great War after his troops defied him; and Hans, the oldest and a musical prodigy, presumably drowned himself in Chesapeake Bay while on an American sojourn in 1902—only Paul and Ludwig avoided suicide. There were economic benefits to being a Wittgenstein, but little else.

Austere Ludwig—a cinema-handsome man with a personality somehow both dispassionate and intense—tried to methodically shuffle off his wealth, which had hung from his neck along with the anchor of respectability. As it was, eventually the entire fortune would be commandeered by the Nazis, but before that Wittgenstein dispensed with his inheritance literally. When his father died in 1913, Wittgenstein began anonymously sending large sums of money to poets like Rainer Maria Rilke, whose observation in a 1909 lyric that "I am so afraid of people's words./They describe so distinctly everything" reads almost as a gloss on the Tractatus. With his new independence, Wittgenstein moved to a simple log cabin on a Norwegian fjord where he hoped to revolutionize logic. Attracted towards the austere, this was the same Wittgenstein who in 1923, after the Tractatus had been published, lodged above a grocer in rural Austria and worked as a schoolteacher, with the visiting philosopher Frank Ramsey describing one of the richest men in Europe as living in "one tiny room, whitewashed, containing a bed, washstand, small table and one hard chair and that is all there is room for. His evening meal which I shared last night is rather unpleasant coarse bread, butter and cocoa." Monasticism served Wittgenstein, because he'd actually accomplish that task of revolutionizing philosophy. From his trench meditations while facing down the Russians—where he ironically carried only two books, Fyodor Dostoevsky's The Brothers Karamazov and Leo Tolstoy's The Gospel in Brief—he birthed the Tractatus, holding to Zossima's commandment that one should "Above all, avoid falsehood, every kind of falsehood." The result would be a book whose conclusions were completely true without being real. Logic pushed to the extremes of prosody.

The Tractatus was the only complete book Wittgenstein published in his lifetime, and the slender volume is composed of a series of propositions arranged within one another like an onion. Its seven main propositions are . . .

Continue reading.

Written by Leisureguy

16 August 2021 at 1:52 pm

New edition of “The History of Jazz”

leave a comment »

Ted Gioia writes at The Honest Broker:

Back in the early 1990s, Sheldon Meyer of Oxford University Press asked me to write a full history of jazz, from its origins to the current day—a book that would serve as the publishing house’s flagship work on the subject.

When Oxford University Press had published Marshall Stearns’s The Story of Jazz in 1956, it had served as a milestone moment in music scholarship. For the first time, a major academic press was embracing jazz as a legitimate field of study. But by the 1990s, Stearns’s book was terribly out-of-date, and Oxford needed a new work to replace it in their offerings. My book was envisioned as that replacement.

I told Meyer that I would need at least 4-5 years to deliver a book on such an expansive topic. He accepted this timeline—he was a wise editor who took a long term view of publishing, a rarity nowadays, but that’s why so many books he edited went on to win the Pulitzer or Bancroft prizes. I was blessed to have him as my editor, and wanted to work with him on this project. I managed to complete the manuscript in the promised time frame, and in 1997 my book The History of Jazz was published, a few days after my 40th birthday.

In retrospect, I view this moment as the key turning-point in my vocation as a music historian. The History of Jazz would prove to be the bestselling jazz book of the next quarter-of-a-century, selling hundreds of thousands of copies in English and various translations. It brought me in contact with readers all over the world, and put me in an enviable position. Music tends to be a young person’s game, and that’s true for writers as well as performers. Yet I found that I had somehow reversed the trend, finding a much larger readership after the age of 40 than I’d ever enjoyed as a young man—in the aftermath everyone from the White House to the United Nations would contact me for guidance and advice on jazz-oriented projects, and I still hear daily from readers of this book who share their own jazz stories from all over the world. I never take that for granted, and have always felt gratitude to Sheldon and Oxford, but especially to these readers, who have stayed with me through so many subsequent books.

But the history of jazz is not a static subject. The music continues to morph and evolve. So I wrote an updated and expanded second edition of The History of Jazz, released in 2011. And ten years later, another upgrade was very much necessary. A few weeks ago, the new third edition of The History of Jazz was released—which has allowed me to bring this exciting story, once again, up to the current day.

Below is an extract from the new edition for my subscribers. It looks at the extraordinary conjunction of events spurring a resurgence of interest in jazz in the current moment.

For more information on the book, you may want to check out my recent interview for NPR, conducted by Natalie Weiner.

How Jazz Was Declared Dead—Then Came Roaring Back to Life

by Ted Gioia (from The History of Jazz, 2021 Edition)

I’ve heard many predictions about jazz over the years. The prognosticators typically serve up grim forecasts about the genre’s inevitable decline into irrelevancy or its survival on life support as a kind of musical museum exhibit celebrating past glories. Such prophecies aren’t much fun to consider—but they haven’t been very accurate either. None of these seers has anticipated what’s actually now happening on the jazz scene, a development as delightful as it has been unexpected. Jazz has somehow rediscovered its roots as populist music, embarking on a new and unscripted dialogue with mainstream culture. To some extent, jazz has even turned into a kind of talisman for forward-looking sounds in commercial music—with the same mass-market periodicals that published obituaries for the genre just a short while ago now proclaiming its hot new status.

Artists as different from each other as Kamasi Washington, Esperanza Spalding, Shabaka Hutchings, and Robert Glasper have shown that they can draw on the full range of current-day song styles without losing their jazz roots, and attract a young crossover audience who are energized and excited by this give-and-take. Pop culture stars, from Kendrick Lamar to Lady Gaga, have returned the favor, seeking out ways of uplifting their own artistry by incorporating jazz ingredients into their music. In the process, the whole notion of jazz as a niche genre for snobbish insiders has gotten overturned. Jazz is showing up with increasing frequency on tourist guides, suggested as the preferred evening's entertainment in New York or London or Tokyo or some other travel destination. And even for stay-at-homes watching movies from the comfort of their couch, a surprising number of Hollywood offerings—La La Land, Green Book, Whiplash, Miles Ahead, Born to Be Blue, Soul—have served up jazz stories and songs suitable for mainstream appeal.

Of course, not every jazz old-timer celebrates the music's newfound popularity. Just as complaints could be heard in the 1980s and 1990s when the music gained wider respectability and made an alliance with academic and nonprofit institutions, a whole litany of different grievances has been raised now that the genre has seemingly reversed course and returned to the people. But the lessons of jazz history are fairly clear by now: complaints and denunciations by entrenched insiders are almost always a sign that something important is underway. In this instance, the new discourse between jazz and popular music seems not just a passing trend but the sign of an emerging ethos that might prove lasting and transformative.

It’s hard to pinpoint the moment a trend reverses. And in the case of jazz, it sometimes seemed as if its alleged downturn would never end—at least judging by the pessimistic media pronouncements on the art form made during the early years of the twenty-first century. Jazz’s problem, they declared, wasn’t like a bad haircut, something you could grow out of, or an embarrassing tattoo that a laser might zap away, but more like a death sentence. I still recall my dismay when The Atlantic entitled an otherwise favorable review of one of my books with the dispiriting headline: “The End of Jazz,” and followed it up with a subhead that promised to explain “how America’s most vibrant music became a relic.” I was miffed, but I could hardly blame the author. He was simply stating the consensus view among opinion leaders.

That was back in 2012, but the notion that jazz was dead had been bouncing around for quite some time. In 2007, Esquire had published a similar article, proclaiming in its headline not only the “Death of Jazz,” but adding that the genre had been in decline since John Coltrane’s demise forty years earlier. Around that same time, critic Marc Myers published an article on his JazzWax website entitled “Who Killed Jazz and When?,” which reached a similar conclusion, but pinpointed an even earlier cause of decline— specifically, the decision by jazz bands in the late 1940s to stop playing for dancers. When CNN tackled the same matter, in an article entitled “When Jazz Stopped Being Cool,” the guilty parties were now the Beatles and rock & roll. Other pundits focused on different root causes for the music’s obsolescence, with everyone from elitist fans to narcissistic performers getting a share of the blame. But the final result was, as they saw it, hardly open to debate: jazz had been on life support for too long, and it was time to put the dear old thing out of its misery.

It’s now been several years since I’ve seen any of those anguished obituaries for jazz, and instead a different kind of news story has taken its place. Big font headlines now proclaim  . . .

Continue reading.

Written by Leisureguy

15 August 2021 at 4:07 pm

Posted in Books, History, Jazz

Our Workplaces Think We’re Computers. We’re Not.

leave a comment »

Ezra Klein interviewed Annie Murphy Paul, author of The Extended Mind: The Power of Thinking Outside the Brain. (NB: There’s an unauthorized companion workbook and journal of 86 pages with the same title, a blatant effort to ride the coattails of Paul’s book. Do not buy that workbook and journal, but it would indeed be a good idea to buy a blank notebook, title it “The Extended Mind Workbook & Journal” and in it develop your own guide based on the book — asking yourself questions, writing in your own words the guidance the author provides, and keeping a journal of your thoughts and efforts as you apply what you learn from the book.)

You can listen to the entire interview on "The Ezra Klein Show" on Apple, Spotify, Google, or wherever you get your podcasts. The interview transcript (not fully edited for grammar or spelling) begins:

EZRA KLEIN: I’m Ezra Klein, and this is “The Ezra Klein Show.”

[MUSIC PLAYING]

Something I’ve been wrestling with lately, both in my head and then, of course, on the show, is what I’ve come to think of as productivity paradoxes, these things that look and feel to us like work, like productivity, that the culture tells us are work and productivity but turn out to be the opposite. They turn out to be distractions or they turn out to miss something profound about how we work or how we think or even how we live.

If you remember, for instance, my interview with Cal Newport from earlier this year, that was about one of these: the way constant communication on platforms like Slack and Teams and to some degree even email, it codes as work, it looks like work, and it’s often a distraction not just from work but from its fundamental precursor, focus. There are also, of course, distractions from life and leisure. When we’re not able to work well in productivity in the time we’re supposed to do it, it expands outward into everything else. So this isn’t just about work but about being able to balance work and the rest of life.

Then I began reading this new book, “The Extended Mind” by Annie Murphy Paul. Paul is a science writer, and her book, the work here, began as an inquiry into how we learn, but then it became something else. It became a book about how we think. Because what came to tie her research together was this 1998 article by the philosophers Andy Clark and David Chalmers called “The Extended Mind,” which argued that there was a, quote, “active role of the environment in driving cognitive processes,” end quote. That what you should think of as our mind, and certainly the way our mind worked, was extending out beyond our head and was intimately shaped, like actually intertwined, with tools, with people, with the environment, with the visual field. And subsequent research really, really bore this out. And the implications of it, I think, are profound.

A lot of this book is about recognizing that we have the intuitive metaphor of our minds, that they’re an analytical machine, a computer of sorts. And we’ve taken this broken metaphor of the mind and then built schools and workplaces and society on top of it, built the built environment on top of it. And the result is that our work and school lives are littered with these productivity paradoxes.

What so often feels and looks like productivity and efficiency to us are often the very activities and habits that stunt our thinking. And many of the habits and activities that look like leisure, sometimes even look like play, like if you’ve taken a walk in the middle of the day or a nap, those end up unlocking our thinking. If the question is, how can we be the most creative or come up with the most profound productive insights, you need to do that stuff.

And so if you read it correctly, in my view at least, this is a pretty radical book. It has radical implications not just for how we think about ourselves but for policy, for architecture, for our social lives, for schooling, for the economy. And I’ll say that it has stuck with me quite a bit. It has changed the way I structure a bunch of my days. I’m trying to work with my mind more and against it less. As always, my email for guest suggestions, reading recommendations, whatever, is ezrakleinshow@nytimes.com.

Annie Murphy Paul, welcome to the show.

ANNIE MURPHY PAUL: Thanks, Ezra. I’m really glad to be here.

EZRA KLEIN: You have a quote in the book that encapsulates kind of the whole of it for me, and you’re talking here about the limits of the brain as a computer, this analogy that we use all the time. And you write, quote, “When fed a chunk of information, a computer processes it in the same way on each occasion, whether it’s been at work for five minutes or five hours, whether it is located in a fluorescent lit office or positioned next to a sunny window, whether it’s near other computers or is the only computer in the room. This is how computers operate.

But the same doesn’t hold for human beings. The way we’re able to think about information is dramatically affected by the state we’re in when we encounter it.” End of the quote. Why is that true? Why doesn’t our brain work the same way in all contexts?

ANNIE MURPHY PAUL: Well, it has to do with the fact that our brain is a biological organ and an evolved organ that’s very different from a computer. And the computer metaphor for the brain has been dominant since the emergence of cognitive science in the middle of the last century, and it really permeates the way we think and talk about the brain, and it places these sort of invisible limits on how we use the brain, how we regard other people’s brains, and it’s because that metaphor is so faulty it leads us to act and to make choices in ways that are not at all optimal.

And so in this book, I wanted to challenge the metaphor of the computer and point out that, no, actually the brain evolved in particular settings, mostly outside. It evolved to do things like sense and move the body to find its way through three dimensional landscapes, to engage in encounters in small groups of people. These are the things that the brain does effortlessly, naturally. The brain is not a computer. It never was, and its failures are particular to its own nature, and it has to be understood on its own terms.

EZRA KLEIN: You argue that a lot of thinking — I don’t know if you’re quite saying happens in the body, but it’s certainly picked up by the body as opposed to picked up by our mental ruminations. And the body can in some ways be even more rational than what we think of as a brain. What do you mean by that? Make that argument for me.

ANNIE MURPHY PAUL: Well, as we go through our everyday lives, there’s way more information than we can process or retain consciously. It would just completely explode our mental bandwidth. But we are taking in that information, noting regularities and patterns, and storing them in the non-conscious mind so that it can be used later when we encounter a similar situation. Then the question becomes, well, if it’s non-conscious, how do we make use of that information?

And it’s because the body lets us know. I mean, that’s what we call a gut feeling or what psychologists, what scientists call interoception, which is the perception of internal sensations that arise from within the body. And people who are more attuned to those internal signals and cues are better able to draw on that wealth of information that we know but we don’t know. We possess it, but we don’t know it explicitly or consciously. So that’s what a gut feeling is. It’s sort of your body tugging at your mental sleeve and saying, hey, you’ve been here before. You’ve had this experience before. Here’s how you responded. It worked or it didn’t work. Here’s what is the right thing to do now.

But in our world where we are so brain bound, so focused on the cerebral and the things that go on in our head, we tend to push the body aside, to quash those feelings, to override them, even, in the service of getting our mental work done, when really we should be cultivating that ability, becoming more attuned and more sensitive to it, because it has all this accumulated experience and information to share with us.

EZRA KLEIN: You cite a study that just floored me by the psychologist Antonio Damasio, which looked at the body picking up some of these unconscious processes in a pattern recognition game. Can you describe that?

ANNIE MURPHY PAUL: Yeah. So Damasio set up a kind of card game online on a computer where players were asked to turn over cards from one of four decks. And they could choose which cards they wanted to turn over. And they were given a starting purse. The object was to gain as much money as possible and lose as little money as possible, because each card came with an associated reward or took some money away. And so the experimenter said go. People started clicking on the decks and turning over cards.

And unbeknownst to them, . . .

Continue reading. It’s interesting and offers a new perspective.

See also: Embodied cognition.
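A side note on that card-game study: the setup Paul describes is generally known as the Iowa Gambling Task, and its logic is easier to see in a small simulation. The sketch below is a minimal, hypothetical Python illustration; the deck labels, payoff amounts, and probabilities are my own assumptions chosen only to reproduce the general structure (two decks that look generous but lose money on average, two that quietly win), not the actual parameters of Damasio's experiment.

import random

# Illustrative decks (assumed values, not Damasio's): A and B pay big rewards
# but carry losses that make them net-negative; C and D pay less but win overall.
DECKS = {
    "A": {"reward": 100, "penalty": 250,  "penalty_prob": 0.5},  # "bad" deck
    "B": {"reward": 100, "penalty": 1250, "penalty_prob": 0.1},  # "bad" deck
    "C": {"reward": 50,  "penalty": 50,   "penalty_prob": 0.5},  # "good" deck
    "D": {"reward": 50,  "penalty": 250,  "penalty_prob": 0.1},  # "good" deck
}

def draw(deck_name: str) -> int:
    """Net payoff of turning over one card from the named deck."""
    deck = DECKS[deck_name]
    payoff = deck["reward"]
    if random.random() < deck["penalty_prob"]:
        payoff -= deck["penalty"]
    return payoff

def play(n_trials: int = 100, purse: int = 2000) -> int:
    """Simulate a player who samples decks at random, ignoring the hidden pattern."""
    for _ in range(n_trials):
        purse += draw(random.choice(list(DECKS)))
    return purse

if __name__ == "__main__":
    random.seed(0)
    print("Random player finishes with $", play())
    # With these assumed numbers, A and B lose about $25 per card on average
    # while C and D gain about $25 -- the kind of regularity participants'
    # bodies register (sweating before the bad decks) before they can say why.

The point of the sketch is only that the "bad" decks are statistically worse in a way no single card reveals, which is why picking up the pattern has to happen below conscious awareness.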

UPDATE: Toward the end of the interview, Ezra Klein says:

EZRA KLEIN: Well, let’s talk about the distinction you made right there between expanding and contracting the mind. Because a point you make in the book is that we often think about the demand side of our attention, which is to say, we think about what we want to devote our attention to and that we need to be tougher about being rigorous about where we put our attention. We need to work harder at devoting our attention. But you talk about working on the supply side of our attention, trying to expand the amount of attention we have, trying to replenish the attention we have. So how does using that framework change what you do? If I’m tired or I’m getting ready for a podcast, what do I do to expand the supply side of my attention?

I immediately thought of Julia Cameron’s book The Artist’s Way (included, BTW, on my list of books I find myself repeatedly recommending). It’s a 12-week program of beginning each day by writing three pages in a journal (preferably by hand), without pausing — just free-associating and continuing to write. (If you’re stuck, just write “word, word, word,” until something comes to mind and continue.) She provides more information and some direction in her book, which IMO is worth getting — and the 12-week program is worth doing.

One of the things she mentions is that on one day each week, you schedule and keep an “artist’s date,” which is a period of 1-2 hours in which you get outside and walk around and take in impressions. It might be in a city, or in a neighborhood, or in a park. The important thing is just to take time to observe things happening.

She says that the morning pages are output, a flowing forth of ideas, and that it is vital also to have time for input: to absorb impressions, observe things, and allow time for ideas to form.

Written by Leisureguy

14 August 2021 at 3:46 pm

When your father dies, your accounting degree is not going to help you to process that experience. Homer will help you.

leave a comment »

An interview on Why the Classics? that’s worth reading (as are the Classics):

For our first-ever newsletter on Substack, we wanted to share this amazing interview with noted classicist, memoirist, and critic Daniel Mendelsohn on the Odyssey. Mendelsohn wrote one of the most moving and psychologically penetrating contemporary works on Homer's epic, An Odyssey, a mixture of autobiography, literary analysis, and cultural history. The whole piece is worth your time, but we wanted to draw particular attention to Mendelsohn's thoughts on the alleged "impracticality" of the classics. He argues, convincingly and correctly, that reading great literature will prepare you for the truly human experiences (love, grief, doubt, joy) in a way that a more superficially "practical" education will not.

Octavian Report: Why should we read the Odyssey?

Daniel Mendelsohn: There’s a reason the classics are classics — and it’s not because they have better agents than books that aren’t classics. The classics are classics because they pose in a way that is lively and narratively interesting and challenging the most basic questions about human experience. The Greek and Roman classics are the foundation for our way of seeing the world. And therefore we read them because they tell us something true about life. In the case of the Odyssey, aside from everything else it is, it’s one of the great family dramas. It’s about homecoming, it’s about the meaning of home, it’s about how you know and how you prove your intimacy with members of your family. It’s about the bonds that connect family members over many years despite time and distance.

Beyond that, it’s in a certain sense the first science-fiction narrative. It envisions an adventurer who’s exposed to strange new civilizations (to quote the opening of Star Trek). Odysseus is the person from Greek civilization, from Western culture, touring abroad through alternative and new civilizations. And it is through his interactions with different models of civilization, from total barbarity to hyper-cultured behavior, that he comes to reflect on his own civilization and to determine where in that spectrum it falls.

Maybe science fiction is a little strong. It’s certainly one of the first anthropological documents in the Western tradition. It’s about somebody who’s very interested to see how other kinds of cultures live. And through Odysseus, we the readers get to reflect on just what it means to be civilized.

OR: What is the civilization that Odysseus is coming from, and how do these other societies look in comparison to the one he has left?

Mendelsohn: It’s like “Goldilocks and the Three Bears.” This bed is too big, this bed is too little, this bed is just right. And I think we’re meant to feel that his own civilization, the Greek Bronze Age city-state from which he comes — Ithaca — and the civilization that it represents is, as it were, just right. On one end of the spectrum of civilization is the Cyclops, who represents a low point of barbarity. One of the great measures of civilization, certainly in Odysseus’ own culture, is how you treat guests. And the guest-host relationship is one of the strong markers of this civilized society: you treat your guests well. It’s a standard theme that’s repeated over and over in the poem. Odysseus himself comes as a stranger, anonymous, and he’s always bathed, fed, treated to dinner. And then they say, “Well, what is your name?” That is standard operating procedure. The Cyclops, by contrast, eats his guests. He’s a cannibal, which is clearly the low end of the spectrum of civilization.

On the other hand, the last adventure he has before he returns home is set on the island of Scheria, the land of the Phaeacians, who are hyper-civilized. They dance, they love music, they love poetry, they play games. They’re unbelievably refined. They are sensitive to Odysseus, who’s a guest in their household. They notice him crying. They ask him what’s the matter. So, I think between the Phaeacians and the Cyclops, you get these extremes. And the Greeks whom Odysseus represents land somewhere in the middle.

OR: How many hands do you think were actually involved with the composition of the Odyssey? Do you think it was done by the same person or persons that did the Iliad?

Mendelsohn: . . .

Continue reading. I love this stuff.

I also like Mark Twain’s remark that the Iliad was not written by Homer, but by another man of the same name.

Written by Leisureguy

13 August 2021 at 1:51 pm

Posted in Books, Daily life, Education

How to curate (just about) anything

leave a comment »

Glenn Adamson, a curator whose books include Fewer, Better Things (2019) and Craft: An American History (2021), writes in Psyche:

Need to know

Next to me, on my desk, is a pile of about 50 books. Some of them I bought. Some I was given by friends. Some were sent by people I didn’t even know, who thought I might be interested in their work. Individually, the books are interesting. Together, though, they have become downright oppressive.

Desks, in this respect, are a lot like closets. And kitchen cupboards. And two-car garages. All seem to naturally fill up with stuff, stuff we kinda sorta want, enough not to ditch it, anyway. My father is a good example. He subscribes to The Economist magazine, and at some point, long ago, decided he would never throw away an issue until he’d read it cover to cover. Last year, he brightly announced that he’d finally reached the September 2001 edition. He was fascinated, from this distance, to see the debates over the World Trade Center attacks and their implications. Just wait till you get to 2020, I said.

In his novel Homer and Langley (2009), E L Doctorow offers a more extreme example: a fictionalised account of the real-life Collyer brothers, whose hoarding instincts were so strong that they ended up living in a narrow warren of passages, squeezing through towering, ramshackle stockpiles of their own belongings, ‘a labyrinth of hazardous pathways, full of obstructions and many dead ends’. Doctorow adds drama and poignancy to the tale by telling it from the perspective of Homer, the elder brother, who is blind, though the house becomes so crowded that illuminating it is impossible anyway. Homer’s brother Langley also navigates it in near-darkness. Eventually, the brothers are grimly undone by their own packrat ways. And this part of the story, sadly, is true: in 1947, Langley was crushed by a fall of domestic detritus. Homer, unable or unwilling to escape, starved to death.

Most of us should be able to sympathise with the Collyers, because we dislike throwing out things at least a little. Yet the idea that our possessions might turn the tables and possess us holds a certain fascinating horror. Our complex relationship with our things is behind the runaway popularity of Marie Kondo, the diminutive Jedi of decluttering, who has conquered the world with her ‘six rules of tidying’ – most memorably, the injunction to ask of every single thing in your environment: ‘Does it spark joy?’ If not, she counsels, out it goes.

Kondo gives good advice. But we should go further. Her injunctions aren’t much use in helping me figure out how to reduce my book pile, for instance. I do read for joy, sometimes, and occasionally even find it. But I have books for many other reasons, too: for reference, to learn things, to transform my understanding and see the world a bit differently. Kondo’s combination of empathy and minimalism makes for good television. But it won’t get you to the kind of lived-in, peculiarly personalised space that, for most people, defines the ideal of home.

Past the tidying stage, a more capacious process awaits: curating. Before we go any further, let’s admit that overuse has worn the word a bit thin. Even before the rise of social media – which allows a user to carefully curate an avatar self, one post at a time – it was already suffering mission creep. Wedding receptions, department store windows, dinner parties, your weekend away: no longer are these things planned or arranged. They are ‘curated’. A quick incidence search of the term in Google Books shows an astonishing rise, from all-but-zero in 1960 through a slight rise to 1980, and then up a Matterhorn-steep climb to the present day. There is a book out there offering a ‘curated’ tour of America’s RV parks. Another photography book is ambitiously titled Reality, Curated (2021). A website called Curated offers advice on buying just about anything, which will be offered by a 100 per cent human expert (‘or should we say, your new friend’).

Unsurprisingly, museum curators – among whom I number myself – can be irritated by all this. It sometimes feels like a land grab of our professional territory. The meaning of real curatorial work is diluted, while the project of organising everyday things is made pretentious. But what if we accept the overuse of the word as evidence that people actually want to curate? What if we apply museum procedures and principles – the things that curators actually do for a living – to our everyday activities and things? It’s an interesting idea.

What to do

The word curate derives from the Latin curare, ‘to care for’. As anyone who has ever had a child or a pet knows, caring for something is a two-way street, and it’s perfectly OK if most of the traffic is going outbound. The curator’s creed is: ask not what your stuff can do for you, but what you can do for your stuff.

Buy with confidence

In one important respect, this attitude doesn't translate well to normal life. When a curator writes a 'case for acquisition' (as such internal documents are usually called), which is typically reviewed by a broader collections committee, they are singling out an object from the vast number of other things in the world. To acquire something for a museum collection is to designate it as worthy of permanent preservation. The curator is saying to the museum and its visitors and the world: this object is important, and you should think so too.

To apply this attitude to personal possessions would be rather arrogant – think of how awful it can be when parents try to impose their possessions on their own children. (And I’d like to thank my dad for recycling his Economists when he’s done with them.) In some ways, however, a genuine museum curatorial process is a great model to bear in mind.

For starters, a collection is always built in light of a stated mission. Curators don’t ask simply: is this thing great? They ask: is it great for my institution, and my department? This means having a deep understanding of the long-term historical trajectory not just of the object, but of the place where it will reside.

Many factors can come into play when making this determination. Is a prospective acquisition redundant with other objects already in the museum’s holdings? Or, conversely, does it fill a conspicuous hole in the collection? What is the full story of the object, its authorship, medium and history? Is it what it purports to be, and is it in good condition? A good case for acquisition goes into all these questions in depth, and if it’s really well done, might even become a key research tool for future curators.

It would be ridiculous to sit down in the department store to write up all the pros and cons of buying a jacket for the fall season. Still, curatorial questions are the right ones to be asking. Something can be wonderful – epaulettes! – without being right for everyone. Issues of redundancy, complementarity and authenticity must be taken into account. Curatorial habits of mind can prevent impulse-buying, and the regret that follows.

Active storage

Let’s be honest, though. Even the most cogently conceived and consistently applied collections policy is no match for the law of closets. Storage is an unavoidable part of life. It’s very unusual for a museum to have more than about 10 per cent of its holdings on view at any given time. Storage, however, doesn’t necessarily mean dead space. Curators work closely with registrars to ensure that collections remain active, through gallery rotations and touring exhibitions, and, most importantly in recent years, through digital access.

The Victoria and Albert Museum in London, where I used to work, has more than . . .

Continue reading. There’s much more.

Written by Leisureguy

11 August 2021 at 2:01 pm

Posted in Art, Books, Daily life
