Later On

A blog written for those whose interests more or less match mine.

Archive for the ‘Science fiction’ Category

Ursula Le Guin’s “The Ones Who Walk Away from Omelas”: Would You Walk Away?


Spencer Case writes in 1,000-Word Philosophy:

When, if ever, is it right to sacrifice someone for the greater good?

Ursula K. Le Guin’s (1929-2018) fantasy short story, “The Ones Who Walk Away from Omelas,” raises this question, among others.[1]

This essay introduces her story and explores its philosophical implications.

1. The Dark Secret

The story begins with an elaborate description of a summer festival in an exquisitely beautiful and happy city called Omelas. It’s as though we’re being shown a travel brochure for a place that seems too good to be true.

Le Guin says if you can imagine an even better city than the one she describes, then think of that instead.

Of course, there’s a twist.

Somewhere in the city is a closet where an emaciated child, referred to only as “it,” is locked up. It’s smeared with its own feces, covered with sores, and constantly afraid. Occasionally, the door opens and people will look at it, kick it, and make it stand up.

It says, “I will be good,” but the door always shuts without anyone making a reply.

Why?

Because the denizens of Omelas made a deal – with what or whom, we aren’t told, but apparently dark magic was involved.

The deal is that Omelas would be a paradise provided that a child’s happiness is sacrificed. Whether this applies to just this one child, or a succession of children, is unspecified. In any event, every adult knows that a single kind word spoken to this child would violate the terms of the deal.

We don’t know what the consequences of breaking the deal would be because we don’t know what things were like before. But certainly Omelas would be a much less happy place overall, even though this child would be happier.

2. Walking Away

When the children of Omelas reach adolescence, they’re told the dark secret, and some see the child. They react with . . .

Continue reading.

Le Guin’s story seems to owe more than a little to the Grand Inquisitor story Dostoevsky included in The Brothers Karamazov. 

Written by Leisureguy

21 May 2022 at 12:37 pm

“Ruin”: An animated short now, to be part of a feature film


“Animation” does not mean what it once did — it’s gone far beyond that.

Written by Leisureguy

13 May 2022 at 7:00 pm

The best books on Science Fiction and Philosophy


Five Books has an expert pick 5 “best books” in some field, and then interviews the expert to learn the reasons for the choices. In this case, they note:

Serious philosophy need not take the form of a journal article or monograph, argues the philosopher and U.C. Riverside professor Eric Schwitzgebel, as he selects five science fiction books that succeed both as novels and provocative thought experiments that push us to consider deep philosophical questions from every angle. Interview by Nigel Warburton.

The interview begins:

Before we get into the five books you have chosen, might you tell us: what’s your special interest in science fiction, and how does it tie in with philosophy?

Well, partly, I just love it. There’s something to be said for pursuing what you love. But I also think science fiction is a great fit for philosophy in a couple of ways. Speculative fiction, generally, I think of as fiction that explores scenarios beyond the ordinary run of human experience. Other types of fiction—for example, ‘literary fiction,’ as it’s sometimes called—don’t much depart from the ordinary run of human experience. But in philosophy we’re often fascinated with scenarios that pull apart things that ordinarily go together.

A wonderful example of this is Derek Parfit’s work on personal identity. Ordinarily, a person has one body that continues in a specific way over time, permanently connected to one mind that also continues in a specific way over time. There’s a familiar body and a consistent personality. You have memories that move from one moment to the next and experiences that overlap. All of these things go together, and we think of them as constitutive or partly constitutive or somehow involved with or correlated with being a person. In the ordinary run of human experience, these things never come apart. But in science fiction-type thought experiments you can wonder, okay, what would happen if you took someone’s mind, just to put it crudely, and put it in someone else’s body? Would it be the same person? Would the person follow the mind or would they follow the body? What if you stepped into a machine and two duplicates of you walked out? What if you lived 800 years, gradually acquiring a new personality every century and entirely forgetting your past? You can separate the pieces, considering what really matters about a person’s identity. You can decouple what ordinarily goes together, aiming to find the essence of the matter in the way that philosophers characteristically like to do.

It’s an interesting conundrum, because some science fiction seems to extrapolate from existing science to a future that’s possible and consistent with what we know about science today. That is, a hypothetical situation that is a plausible, possible future world—or maybe not so plausible, but still could happen. But there’s another kind of science fiction which doesn’t seem to be bound by anything we know about science now—it just allows what you might call magical things to happen. I wonder how the two of them relate to philosophy.

Fantasy just allows magical things to happen. And that can be very useful in thinking through philosophical issues because you might be interested in considering things that aren’t scientifically plausible at all, exploring them as conceptual possibilities. Now, within the constraints of scientific plausibility we can find a second big philosophical value in science fiction: thinking about the future. For example, I think it’s likely that in the next several decades, or maybe the next 100 or 200 years, if humanity continues to exist and continues along its current trajectory, we will eventually create artificial beings who are conscious. Maybe they’ll be robots or maybe they’ll be artificial biological organisms. Or they might be a bio-machine hybrid or the result of technology we can’t yet foresee. We might create artificial entities who are people—entities with conscious experiences, self-knowledge, values, who think of themselves as individuals. They might be very much unlike us in other ways—physiologically, physically, maybe in their values, maybe in their styles of thinking.

If that happens, that’s hugely significant. We’d have created a new species of person—people radically different from us, sharing the world with us. Humanity’s children, so to speak. Few things could be more historically momentous than that! But these matters are hard to think about well. Maybe that future is coming. But what might it even look like? What would it do to ethics? To philosophy of mind? To our sense of the purpose and value of humanity itself? Science fiction is a tool for imagining the possible shape of such a future. So that’s just one example of the way in which science fiction can help us think about future possibilities.

What I like about science fiction is there are many different voices, many different styles of experiment, as it were, with a lot of detail. It’s not just a caricature. You have a whole range of imagined worlds and imagined futures to draw on. It’s not as if there’s just one simple trolley problem that you have to play variations on. They’re radically different, densely described scenarios. I think that’s a richness that it’d be hard to generate in a one-liner thought experiment.

I want to distinguish two epistemic dimensions here. One is richness of detail. The other is emotional or imaginative engagement. In my conception, you can think philosophically anywhere along a spectrum. On one side are extremely abstract, cool-minded general statements like ‘maximise the sum of positive to negative emotional experiences’, or ‘the greatest good for the greatest number’. On the other side are richly imagined fictional worlds, worlds that really draw you in and shake you up, like a great novel or television series. In between are short stories and paragraph-long thought experiments like the famous trolley problem.

There are different, complementary epistemic virtues to the different sides of the spectrum. The abstract strips away irrelevancies, or what you think are irrelevancies. But the human mind is not so good at understanding the implications of abstract views. Even really simple things like the Wason Selection Task defeat us pretty badly. The amazing thing about the Wason Selection Task is that it’s a super simple cognitive puzzle. Really simple! And yet we are incredibly bad at it.

I used to write questions for law school admissions tests, the informal paragraph-long logical reasoning questions. I had a formula for a tricky question. I would write a paragraph in ordinary language on something that people have no intuitions about, like maybe the rise or fall of interest rates and its bearing on Party A and Party B in Bolivian politics. I’d write a paragraph about it. At the end, I’d present five possible answers combining negation and a conditional statement in ordinary language. Option A might be ‘Unless interest rates go down, Party B will not win.’ Option B might be ‘Party A will only lose if interest rates go up.’ Already your brain starts frying, just thinking about these. I can’t immediately tell the options apart, and I just wrote them! A computer could do that with just a few bits of information. It’s just a negation and a conditional.
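To see how trivial this is for a machine, here is a toy sketch (my own, not Schwitzgebel’s) that checks whether one of those tangled phrasings is logically equivalent to a plainer restatement by brute-forcing the truth table. The statements are invented stand-ins for the kind he describes.

```python
# Toy illustration: a computer settles "is this the same claim?" by
# enumerating the truth table. The statements are invented stand-ins
# for the interview's interest-rate examples.
from itertools import product

def implies(p, q):
    # material conditional: "if p then q"
    return (not p) or q

def option_a(rates_go_down, party_b_wins):
    # "Unless interest rates go down, Party B will not win."
    # i.e. if rates do NOT go down, then Party B does NOT win.
    return implies(not rates_go_down, not party_b_wins)

def restatement(rates_go_down, party_b_wins):
    # "Party B will win only if interest rates go down."
    return implies(party_b_wins, rates_go_down)

same = all(option_a(r, b) == restatement(r, b)
           for r, b in product([False, True], repeat=2))
print(same)  # True: the two phrasings make the same claim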

That’s a long digression. But the human mind is really bad at abstract reasoning. What we’re good at is social cognition. And imagination. So when you have rich stories, and you engage social cognition and imagination, you meet the human mind where it’s strong. We can consider: what would a principle like ‘maximize good consequences’ really amount to? It sounds good in the abstract, but does that mean you’re going to push someone in front of a runaway trolley, killing them to save five others? Well, I’m not sure. Once we start to think of it in that way, we’re engaging a different side of our mind. But the complementary problem is that you can get distracted by details irrelevant to your general thesis. Whether you want to push the guy shouldn’t depend on his race, for example. How much you sympathize with a character in a fiction might depend on whether they’re funny, whether they had a happy or sad childhood. You get influenced by all these details you might not want to be influenced by, and it’s not always clear which ones are relevant.

It’s a question of determining what’s noise and what isn’t. And noise might be necessary to make it plausible in some ways. Realism in writing often requires all kinds of extraneous facts. If you only put in things which play a role in the plot, you give the plot away before you mean to.

As a writer, you have these intuitions about which details it makes sense to include and which don’t work. You might not even know why they seem right or wrong.

Yes. When we’re talking about moral psychology, a rich psychological case study is so much more fascinating than a one-liner or a caricature of a person. You come to see the complexity of a dilemma for somebody who’s a real person, or seems to be a real person, rather than for person ‘P’ in a schematic account of a situation. But let’s move on to your first book choice, Ted Chiang’s Stories of Your Life and Others.

Ted Chiang is a short story and novella writer who’s not very prolific. He’s published famously few stories. But a large proportion of his stories have a huge impact and win prizes. His stories are so richly philosophical. The book I’ve chosen is his first story collection. He has another collection that came out recently, Exhalation, which is also excellent. But I have an emotional attachment to the first one because that was the one that kindled my love for Ted Chiang’s work.

Though he was certainly well known in the science fiction short story community, his public fame came with Arrival, a blockbuster movie that was based on one of the stories in this collection: ‘Story of Your Life.’ In that story, an alien species arrives at Earth. The story is from the point of view of a linguist who is trying to decipher their language. Their written language is visual and non-temporal in a fascinating way, and one of the wonderful things about the story is how Chiang thinks it through in fascinating detail, what the grammar of a non-temporal, visual language might be, how it might influence cognition, how to build up a language spatially organised in a two dimensional plane rather than linearly and temporally organised, like human languages.

Are there other notable stories in this collection? Could you talk us through one?

‘Liking What You See’ is a fascinating philosophical thought experiment story set in a near future where people can wear helmets that contain a trans-cranial magnetic stimulator that shuts down the region of their brain that is responsible for making human beauty judgments. It’s told as a documentary with lots of . . .

Read the whole thing.

Written by Leisureguy

27 January 2022 at 1:05 pm

Researchers Build AI That Builds AI


In the Singularity genre of science fiction (for example, the movie Her) the critical moment is when AI starts building AI: AI uses its greater-than-human level of understanding to build a new AI that has a deeper level of understanding than the AI that created it, and then that new AI builds an even better AI — it’s like regenerative feedback. And it happens fast (at least in science fiction).

AlphaZero learns games by being told the rules and then — swiftly — playing millions of games against itself, learning from that experience what constitutes best tactics and best strategy.

Anil Ananthaswamy describes in Quanta what is happening now:

Artificial intelligence is largely a numbers game. When deep neural networks, a form of AI that learns to discern patterns in data, began surpassing traditional algorithms 10 years ago, it was because we finally had enough data and processing power to make full use of them.

Today’s neural networks are even hungrier for data and power. Training them requires carefully tuning the values of millions or even billions of parameters that characterize these networks, representing the strengths of the connections between artificial neurons. The goal is to find nearly ideal values for them, a process known as optimization, but training the networks to reach this point isn’t easy. “Training could take days, weeks or even months,” said Petar Veličković, a staff research scientist at DeepMind in London.

That may soon change. Boris Knyazev of the University of Guelph in Ontario and his colleagues have designed and trained a “hypernetwork” — a kind of overlord of other neural networks — that could speed up the training process. Given a new, untrained deep neural network designed for some task, the hypernetwork predicts the parameters for the new network in fractions of a second, and in theory could make training unnecessary. Because the hypernetwork learns the extremely complex patterns in the designs of deep neural networks, the work may also have deeper theoretical implications.

For now, the hypernetwork performs surprisingly well in certain settings, but there’s still room for it to grow — which is only natural given the magnitude of the problem. If they can solve it, “this will be pretty impactful across the board for machine learning,” said Veličković.

Getting Hyper

Currently, the best methods for training and optimizing deep neural networks are variations of a technique called stochastic gradient descent (SGD). Training involves minimizing the errors the network makes on a given task, such as image recognition. An SGD algorithm churns through lots of labeled data to adjust the network’s parameters and reduce the errors, or loss. Gradient descent is the iterative process of climbing down from high values of the loss function to some minimum value, which represents good enough (or sometimes even the best possible) parameter values.
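For readers who haven’t seen it spelled out, here is a minimal sketch of a stochastic-gradient-descent loop on a toy one-parameter model. It is my own illustration, not code from the article, and the data, learning rate, and epoch count are invented.

```python
# Minimal stochastic gradient descent on a toy model: fit y = w * x by
# repeatedly nudging the single parameter w downhill on the squared-error loss.
import random

data = [(x, 3.0 * x) for x in range(1, 11)]  # "labeled data"; the true slope is 3
w = 0.0                                      # the parameter being trained
learning_rate = 0.005

for epoch in range(50):
    random.shuffle(data)                     # "stochastic": visit examples in random order
    for x, y in data:
        error = w * x - y                    # loss for this example is error**2
        gradient = 2 * error * x             # d(loss)/dw
        w -= learning_rate * gradient        # one step down the loss surface

print(f"learned w = {w:.3f} (true value 3.0)")
```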

But this technique only works once you have a network to optimize. To build the initial neural network, typically made up of multiple layers of artificial neurons that lead from an input to an output, engineers must rely on intuitions and rules of thumb. These architectures can vary in terms of the number of layers of neurons, the number of neurons per layer, and so on.

One can, in theory, start with lots of architectures, then optimize each one and pick the best. “But training [takes] a pretty nontrivial amount of time,” said Mengye Ren, now a visiting researcher at Google Brain. It’d be impossible to train and test every candidate network architecture. “[It doesn’t] scale very well, especially if you consider millions of possible designs.”

So in 2018, Ren, along with his former University of Toronto colleague Chris Zhang and their adviser Raquel Urtasun, tried a different approach. They designed what they called a graph hypernetwork (GHN) to find the best deep neural network architecture to solve some task, given a set of candidate architectures.

The name outlines their approach. “Graph” refers to the idea that the architecture of a deep neural network can be thought of as a mathematical graph — a collection of points, or nodes, connected by lines, or edges. Here the nodes represent computational units (usually, an entire layer of a neural network), and edges represent the way these units are interconnected.
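As a concrete illustration of that graph view (my own sketch, not code from the paper), an architecture can be written down as nothing more than nodes for layers and edges for the connections between them; the layer names and sizes below are made up.

```python
# Toy sketch of a network architecture as a graph: nodes are computational
# units (here, whole layers) and edges record which layer feeds which.
architecture = {
    "nodes": {
        "input":   {"op": "input", "units": 784},
        "hidden1": {"op": "dense", "units": 256},
        "hidden2": {"op": "dense", "units": 128},
        "output":  {"op": "dense", "units": 10},
    },
    "edges": [
        ("input", "hidden1"),
        ("hidden1", "hidden2"),
        ("hidden2", "output"),
    ],
}

# A hypernetwork (in the article's sense) would map a description like this
# to parameter values, e.g. a weight matrix sized to fit each connection.
for src, dst in architecture["edges"]:
    n_in = architecture["nodes"][src]["units"]
    n_out = architecture["nodes"][dst]["units"]
    print(f"{src} -> {dst}: needs a {n_in} x {n_out} weight matrix")
```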

Here’s how it works. A graph hypernetwork starts with  . . .

Continue reading.

Written by Leisureguy

25 January 2022 at 3:38 pm

Aspiring to a Higher Plane: Going Deeper into Abbott’s “Flatland”


Ian Stewart has a very interesting article in The Public Domain Review (that’s their mix of fonts) about Abbott’s novel Flatland (at the link, available as an ebook in various formats free of charge). The article includes various illustrations from the book, and I’ll not include those, but instead offer the initial text of the article:

Edwin Abbott Abbott, who became Headmaster of the City of London School at the early age of 26, was renowned as a teacher, writer, theologian, Shakespearean scholar, and classicist. He was a religious reformer, a tireless educator, and an advocate of social democracy and improved education for women. Yet his main claim to fame today is none of these: a strange little book, the first and almost the only one of its genre: mathematical fantasy. Abbott called it Flatland, and published it in 1884 under the pseudonym A. Square.

On the surface — and the setting, the imaginary world of Flatland, is a surface, an infinite Euclidean plane — the book is a straightforward narrative about geometrically shaped beings that live in a two-dimensional world. A. Square, an ordinary sort of chap, undergoes a mystical experience: a visitation by the mysterious Sphere from the Third Dimension, who carries him to new worlds and new geometries. Inspired by evangelical zeal, he strives to convince his fellow citizens that the world is not limited to the two dimensions accessible to their senses, falls foul of the religious authorities, and ends up in jail.

The story has a timeless appeal, and has never been out of print since its first publication. It has spawned several sequels and has been the subject of at least one radio programme and two animated films. Not only is the book about hidden dimensions: it has its own hidden dimensions. Its secret mathematical agenda is not the notion of two dimensions, but that of four. Its social agenda pokes fun at the rigid stratification of Victorian society, especially the low status of women, even the wives and daughters of the wealthy.

Flatland’s inhabitants are . . .

Continue reading. There’s more, the illustrations are nifty, and you can get a free copy of the ebook.

Written by Leisureguy

25 January 2022 at 1:27 pm

Mental itch satisfactorily scratched: Novel name found.


In a shaving post a while back I was trying to recall the specifics of a science-fiction novel I read long ago. This morning the answer has been found, and I’ve updated that post:

(It reminds me of a science-fiction short-story or novella or novel that I read decades ago, in which the protagonist, who was meditating by observing the stages water goes through as it comes to a boil, enters the mindset that the alien overlords want, whereupon he is harvested and incorporated into the network of humans the aliens have built. But although he was meditating, he is essentially a wolf, not a sheep, and he awakens from the state in which the aliens put him, and leads the usual successful rebellion. I cannot remember enough to find the title, but the story was anthologized for sure, probably in New Tales of Space and Time, or one of Groff Conklin’s anthologies.)

update: I just discovered Reddit’s r/tipofmytongue, and I used it to ask whether anyone knew the title. Three minutes later, I had the answer, thanks to user Feraffiphar.  The novel is Wolfbane, by Frederik Pohl and C. M. Kornbluth, published in 1959. Available on Kindle for $1.

What a relief.

The context was my own meditative observation of the stages of loading a brush with shaving soap.

Written by Leisureguy

19 December 2021 at 7:42 am

More lather fun, and the blissful Monday-morning shave


When one has been told how to do a repeated task, it takes some practice and experience to discover the absolutely optimal approach. The discovery experience often feels like an extraordinary leap of insight and understanding, and the learner will exclaim, in italics as it were, the very thing he was told, but with the words feeling fresh and vibrant and electric. “Oh! Light pressure!” from a shaver who has been repeatedly told to use light pressure — and who thought he was using light pressure — spoken after the first time he actually does use light pressure.

I’m experiencing some of that in my lather making. Long ago Mantic59 advised shaking the brush until it was barely damp, then starting to load the brush, adding very small amounts of water as needed to complete the loading. Once the brush has been loaded, a little more water is generally added as the brush works up the lather (on the face, in a bowl, in a cupped palm).

Only recently have I found the right approach — that is, “right” in the sense of producing the optimal lather (without it being runny in the slightest) and — equally if not more important to me — a process that is interesting to do and observe.

The trick for me was to change the injunction “Wet the knot well under the hot-water tap and then shake the water out until the brush is only slightly damp” to end with “… until the brush is as dry as you can get it by shaking.” The loading in this method works best if you have tried to shake all the water out of the brush. (You can’t — the brush will still be slightly damp — but by aiming at “dry” instead of “slightly damp,” I did get all the residual excess water out of the brush.)

Then when you brush the soap, nothing much happens. So you add just a tiny amount of water. Note that this water is enjoyed by the tip of the brush, not the inner core. As you brush those few drops of water over the soap, you’ll see a little soap transferred from puck to brush. Add another tiny amount of water and brush again. With another two or three repetitions, you will have gotten a good amount of soap loaded into the top of the knot and no extra water in the interior.

At this point I brush a good coating of lather over all my stubble (which has already been wetted and rubbed with a dot of Grooming Dept Moisturizing Pre-Shave, and then splashed a bit with more water). Once the stubble is fully coated, I add one more small amount of hot water to the tip of the brush and work that in.

The process doesn’t take so long as this detailed description might make you think, and in any case once you learn it, practice will make you efficient. What I particularly like about this approach, beyond the quality of lather I get, is observing the stages of loading the brush.

(It reminds me of a science-fiction short-story or novella or novel that I read decades ago, in which the protagonist, who was meditating by observing the stages water goes through as it comes to a boil, enters the mindset that the alien overlords want, whereupon he is harvested and incorporated into the network of humans the aliens have built. But although he was meditating, he is essentially a wolf, not a sheep, and he awakens from the state in which the aliens put him, and leads the usual successful rebellion. I cannot remember enough to find the title, but the story was anthologized for sure, probably in New Tales of Space and Time, or one of Groff Conklin’s anthologies.)

update: I just discovered Reddit’s r/tipofmytongue, and I used it to ask whether anyone knew the title. Three minutes later, I had the answer, thanks to user Feraffiphar.  The novel is Wolfbane, by Frederik Pohl and C. M. Kornbluth, published in 1959, available on Kindle for $1.

With lather in place my Eros slant did its usual superb job — and this razor enjoys and rewards light pressure (the sort of pressure that you’re told to use, and you still have trouble, and then when you finally get it right, you say, “Oh! Light pressure. Why didn’t you say so?”)

Three passes left my face perfectly smooth and undamaged, and a splash of Chatillon Lux’s aftershave toner in the matching fragrance left me ready to face the day — a dark day that may well see the fall of snow. Hope you are warm and dry.

Written by Leisureguy

6 December 2021 at 10:15 am

Astray to the spacedge


Written by Leisureguy

18 November 2021 at 12:20 pm

Posted in Science fiction, Video

Good list of influential science-fiction books


Take a look. I downloaded a dozen or so samples. (I have a Kindle, and one way to remember a book I might be interested in buying is to download the sample. It’s more effective than putting the book on a list, and surprisingly often just having the sample stifles the impulse to buy.)

One interesting point: the maker of the list (or at least the writer of the headline) believes that no science-fiction book more influential than those in the list will ever be written. That’s quite a claim, and I wonder how they know.

Written by Leisureguy

13 October 2021 at 4:09 pm

Posted in Books, Science fiction

QNTM on memes, anti-memes, and knowledge that doesn’t want to be shared


This is a fascinating interview, and I highly recommend reading it or listening to it. The introductory matter:

QNTM is a software engineer and the author of There Is No Antimemetics Division. Here, QNTM speaks to the Browser’s Uri Bram about collaborative fiction, why people with deep and very specific expertise are often great storytellers, and the surprising subjectivity of finding right answers in software development.

[Listen to this interview as a podcast or on Youtube (audio only)]

The interview proper begins:

Uri Bram: Your latest book—which is wonderful—is called There Is No Antimemetics Division. Can you tell us a little bit conceptually about the idea of antimemes?

What is an anti-meme?

QNTM: So if you’re reading this, you probably have a reasonable idea of what a meme is, but there are a couple of different colliding definitions of meme these days.

For my purposes, a meme is a contagious idea, which is much more of an older definition than today’s conception of “internet meme.” It’s an idea that catches on due to some kind of hook within the idea itself. It’s a piece of information that you have, but there’s also an aspect where you want to share this information with other people, spread this idea to other people.

The canonical example of a contagious idea would be some kind of evangelical religion, where they would say: “Hey, this is the way the universe is structured. This is how the cosmos exists, but also convert other people to this way of thinking, go out and find people and tell them this as well.”

But there’s a way simpler idea of memes: a contagious song, a catch phrase, a political slogan, or even a symbol that’s easy to draw. Wouldn’t that be a meme as well?

So looking at this I thought that some ideas are more contagious than others and some ideas aren’t contagious at all—they just kind of sit there. So what’s at the other end of the scale: what kind of ideas resist being spread? What information would you intrinsically not want anyone else to find out about? Or maybe you do want to spread it, but you can’t for whatever reason?

In real life, there’s a ton of ideas that fall into this class: random wild data is very difficult to share because it’s just nonsense and it’s not very memorable; just boring things are difficult to share; complicated equations are difficult to share because you can’t remember them properly—because we’re humans and that’s not how we remember things.

But also there’s a category of ideas that are hard to share intrinsically like passwords. I’m motivated to keep my password a secret. There are all kinds of official secrets, like government secrets that you’re motivated to keep secret.

And from there, you move into injunctions and super injunctions and gag orders. Or what kind of journalism is forbidden in the country where you happen to live? What kinds of things are you not allowed to say? What is a taboo? What are the things that are true, but we don’t talk about? Although this is orthogonal to the truth. Just because something is memetic or antimemetic doesn’t mean it’s true or false.

Playing with the idea of anti-memes in science fiction.

QNTM: The truth can be very difficult to share. As they say, a lie can circle the globe before the truth can get its boots on. So a falsehood can be very memetic, but I looked at this and thought… “anti-meme” is a novel neologism, but it’s mainly just a synonym for things we already know exist. We know what secrets are, we know what taboos are. But I started taking this into a fictional concept and there’s a large amount of science fiction that takes the idea of memes and anti-memes and plays with it.

For instance you could have a concept which exists and is plain as day and is right in front of you, but you can’t remember it and when you turn away, you’ve stopped being able to remember that it was there—even though it was clearly there. An anti-memetic thing could trip you so you fall, but you wouldn’t remember why you fell and then when you stood up again, you wouldn’t even remember that you fell over at all.

So I thought okay, there’s a bit of mileage in there, I can tell a story in this.

If you’ve read the book, chapter one of the book is that concept, but that’s just the start, then I keep going. Let’s suppose this is a real phenomenon. What kind of organization could deal with this kind of phenomenon? How would that organization have to operate? What kind of person would work there? And as I just kept digging into those questions, more and more story just showed up and I started writing.

Uri Bram: I was recommended this book with no context. I was told there’s this book, you should just read it and go in knowing as little as you can, which I think in itself is kind of interesting on your terms. Not anti-memetic, but there was hidden knowledge or knowledge that they didn’t want to convey.

QNTM: Oh, absolutely. There’s two aspects of this kind of thing. There’s ideas that you want to know, but you can’t hang onto them, they get away from you and what do you do about that? What kind of systems do you have to develop to handle that?

And then on the flip side of it, the second half of the book is about . . .

Continue reading. There’s much more that’s interesting.

Written by Leisureguy

6 October 2021 at 9:07 pm

The Most Important Device in the Universe Is Powered by a 555 Timer


I love the kind of technical shop talk exchanged among people familiar with some line of country remote from my knowledge. Such talk is studded with things I don’t know, though I can follow the trend of the conversation. It’s like a stream: I follow the overall flow, but there are occasional boulders sticking up out of the water.

Certain kinds of science fiction have some of the same appeal: the writer begins in medias res and casually uses words whose referents the reader is expected to figure out as the story progresses. This is a common technique (cf. William Gibson, Charlie Stross, et al.), and for me it works well, keeping me alert for clues that will explain the terms, which may refer to culture, dress, devices, or whatever.

A recent post at Hackaday.com is full of that, but also provides an entertaining look at prop construction and usage in science-fiction movies and TV — the short clip at the end is a must see, and the comments also are worth reading.

Written by Leisureguy

24 September 2021 at 11:08 am

The Singularity approacheth


As I’ve mentioned previously, the Singularity in science-fiction is when an AI gets good enough to design a better AI, and then recursion leads to exponential improvement, and the story’s off and running.

Well, it’s here. Matthew Hutson mentions it in an article in the New Yorker, which begins:

Deep learning, the artificial-intelligence technology that powers voice assistants, autonomous cars, and Go champions, relies on complicated “neural network” software arranged in layers. A deep-learning system can live on a single computer, but the biggest ones are spread over thousands of machines wired together into “clusters,” which sometimes live at large data centers, like those operated by Google. In a big cluster, as many as forty-eight pizza-box-size servers slide into a rack as tall as a person; these racks stand in rows, filling buildings the size of warehouses. The neural networks in such systems can tackle daunting problems, but they also face clear challenges. A network spread across a cluster is like a brain that’s been scattered around a room and wired together. Electrons move fast, but, even so, cross-chip communication is slow, and uses extravagant amounts of energy.

Eric Vishria, a general partner at Benchmark, a venture-capital firm in San Francisco, first came to understand this problem in the spring of 2016, while listening to a presentation from a new computer-chip company called Cerebras Systems. Benchmark is known for having made early investments in companies such as Twitter, Uber, and eBay—that is, in software, not hardware. The firm looks at about two hundred startup pitches a year, and invests in maybe one. “We’re in this kissing-a-thousand-frogs kind of game,” Vishria told me. As the presentation started, he had already decided to toss the frog back. “I’m, like, Why did I agree to this? We’re not gonna do a hardware investment,” he recalled thinking. “This is so dumb.”

Andrew Feldman, Cerebras’s co-founder, began his slide deck with a cover slide, then a team slide, catching Vishria’s attention: the talent was impressive. Then Feldman compared two kinds of computer chips. First, he looked at graphics-processing units, or G.P.U.s—chips designed for creating 3-D images. For a variety of reasons, today’s machine-learning systems depend on these graphics chips. Next, he looked at central processing units, or C.P.U.s—the general-purpose chips that do most of the work on a typical computer. “Slide 3 was something along the lines of, ‘G.P.U.s actually suck for deep learning—they just happen to be a hundred times better than C.P.U.s,’ ” Vishria recalled. “And, as soon as he said it, I was, like, facepalm. Of course! Of course!” Cerebras was proposing a new kind of chip—one built not for graphics but for A.I. specifically.

Vishria had grown used to hearing pitches from companies that planned to use deep learning for cybersecurity, medical imaging, chatbots, and other applications. After the Cerebras presentation, he talked with engineers at some of the companies that Benchmark had helped fund, including Zillow, Uber, and Stitch Fix; they told him that they were struggling with A.I. because “training” the neural networks took too long. Google had begun using super-fast “tensor-processing units,” or T.P.U.s—special chips it had designed for artificial intelligence. Vishria knew that a gold rush was under way, and that someone had to build the picks and shovels.

That year, Benchmark and Foundation Capital, another venture-capital company, led a twenty-seven-million-dollar round of investment in Cerebras, which has since raised close to half a billion dollars. Other companies are also making so-called A.I. accelerators; Cerebras’s competitors—Groq, Graphcore, and SambaNova—have raised more than two billion dollars in capital combined. But Cerebras’s approach is unique. Instead of making chips in the usual way—by printing dozens of them onto a large wafer of silicon, cutting them out of the wafer, and then wiring them to one another—the company has made one giant “wafer-scale” chip. A typical computer chip is the size of a fingernail. Cerebras’s is the size of a dinner plate. It is the largest computer chip in the world.

Even competitors find this feat impressive. “It’s all new science,” Nigel Toon, the C.E.O. and co-founder of Graphcore, told me. “It’s an incredible piece of engineering—a tour de force.” At the same time, another engineer I spoke with described it, somewhat defensively, as a science project—bigness for bigness’s sake. Companies have tried to build mega-chips in the past and failed; Cerebras’s plan amounted to a bet that surmounting the engineering challenges would be possible, and worth it. “To be totally honest with you, for me, ignorance was an advantage,” Vishria said. “I don’t know that, if I’d understood how difficult it was going to be to do what they did, I would have had the guts to invest.”

Computers get faster and faster—a remarkable fact that’s easy to take for granted. It’s often explained by means of Moore’s Law: the pattern identified in 1965 by the semiconductor pioneer Gordon Moore, according to which the number of transistors on a chip doubles every year or two. Moore’s Law, of course, isn’t really a law. Engineers work tirelessly to shrink transistors—the on-off switches through which chips function—while also refining each chip’s “architecture,” creating more efficient and powerful designs. . .

. . .  Cerebras’s first task was to address the manufacturing difficulties that bedevil bigger chips. A chip begins as a cylindrical ingot of crystallized silicon, about a foot across; the ingot gets sliced into circular wafers a fraction of a millimetre thick. Circuits are then “printed” onto the wafer, through a process called photolithography. Chemicals sensitive to ultraviolet light are carefully deposited on the surface in layers; U.V. beams are then projected through detailed stencils called reticles, and the chemicals react, forming circuits.

Typically, the light projected through the reticle covers an area that will become one chip. The wafer then moves over and the light is projected again. After dozens or hundreds of chips are printed, they’re laser-cut from the wafer. “The simplest way to think about it is, your mom rolls out a round sheet of cookie dough,” Feldman, who is an avid cook, said. “She’s got a cookie cutter, and she carefully stamps out cookies.” It’s impossible, because of the laws of physics and optics, to build a bigger cookie cutter. So, Feldman said, “We invented a technique such that you could communicate across that little bit of cookie dough between the two cookies.”

In Cerebras’s printing system—developed in partnership with T.S.M.C., the company that manufactures its chips—the cookies overlap at their edges, so that their wiring lines up. The result is a single, “wafer-scale” chip, copper-colored and square, which is twenty-one centimetres on a side. (The largest G.P.U. is a little less than three centimetres across.) Cerebras produced its first chip, the Wafer-Scale Engine 1, in 2019. The WSE-2, introduced this year, uses denser circuitry, and contains 2.6 trillion transistors collected into eight hundred and fifty thousand processing units, or “cores.” (The top G.P.U.s have a few thousand cores, and most C.P.U.s have fewer than ten.)

Aart de Geus, the chairman and co-C.E.O. of the company Synopsys, asked me, “2.6 trillion transistors is astounding, right?” Synopsys provides some of the software that Cerebras and other chipmakers use to make and verify their chip designs. In designing a chip, de Geus said, an engineer starts with two central questions: “Where does the data come in? Where is it being processed?” When chips were simpler, designers could answer these questions at drafting tables, with pencils in hand; working on today’s far more complex chips, they type code that describes the architecture they want to create, then move on to using visual and coding tools. “Think of seeing a house from the top,” de Geus said. “Is the garage close to the kitchen? Or is it close to the bedroom? You want it close to the kitchen—otherwise, you will have to carry groceries all through the house.” He explained that, having designed the floor plan, “you might describe what happens inside a room using equations.”

Chip designs are mind-bogglingly intricate. “There’s multiple layers,” de Geus said, with circuits crisscrossing and running on top of one another, like major expressway interchanges. For Cerebras’s engineers, working at wafer scale, that complexity was heightened. Synopsys’s software offered assistance in the form of artificial intelligence: pattern-matching algorithms recognized frequent problems and suggested solutions; optimization routines nudged rooms into faster, more efficient arrangements. If too many lanes of traffic try to squeeze between two blocks of buildings, the software allows engineers to play Robert Moses, shifting the blocks.

In the end, Feldman said, the mega-chip design offers several advantages. Cores communicate faster when they’re on the same chip: instead of being spread around a room, the computer’s brain is now in a single skull. Big chips handle memory better, too. Typically, a small chip that’s ready to process a file must first fetch it from a shared memory chip located elsewhere on its circuit board; only the most frequently used data might be cached closer to home. In describing the efficiencies of the wafer-scale chip, Feldman offered an analogy: he asked me to imagine groups of roommates (the cores) in a dormitory (a chip) who want to watch a football game (do computing work). To watch the game, Feldman said, the roommates need beer stored in a fridge (data stored in memory); Cerebras puts a fridge in every room, so that the roommates don’t have to venture to the dorm’s common kitchen or the Safeway. This has the added advantage of allowing each core to work more quickly on different data. “So in my dorm room I can have Bud,” Feldman said. “And in your dorm room you can have Schlitz.”

Finally, Cerebras had to surmount the problem of yield. The firm’s engineers use Trilogy’s trick: redundancy. But here they have an advantage over their predecessors. Trilogy was trying to make a general-purpose chip, with many varied components, and so wiring around a single failed element could require connecting to a distant substitute. On Cerebras’s chip, all the cores are identical. If one cookie comes out wrong, the ones surrounding it are just as good.

In June, in a paper published in Nature, Google developers reported that, for the first time, they’d fully automated a process called “chip floorplanning.” A typical chip can contain thousands of memory blocks, tens of millions of logic gates, and tens of kilometres of microscopic wiring. Using the same techniques that their DeepMind colleagues had used to teach a neural network to win at Go, they’d trained an A.I. to floorplan a tensor-processing unit, arranging these elements while preventing data congestion; when they tested the A.I.’s T.P.U. against one that a team of experts had spent several months creating, they found that the computer’s design, drawn up in a matter of hours, matched or exceeded the humans’ in efficient use of area, power, and wire length. Google is currently using the algorithm to design its next T.P.U.

People in A.I. circles speak of the singularity—a point at which technology will begin improving itself at a rate beyond human control. I asked de Geus if his software had helped design any of the chips that his software now uses to design chips. He said that it had, and showed me a slide deck from a recent keynote he’d given; it ended with M. C. Escher’s illustration of two hands drawing each other, which de Geus had labelled “Silicon” and “Smarts.” When I told Feldman that I couldn’t wait to see him use a Cerebras chip to design a Cerebras chip, he laughed. “That’s like feeding chickens chicken nuggets,” he said. “Ewww.”

Designing and manufacturing the chip turned out to be just half of the challenge. Brains use . . .

Continue reading. There’s more.

Emphasis added.

Written by Leisureguy

20 August 2021 at 6:23 pm

Google AI beats humans at designing computer chips


Some science fiction uses the Singularity as a theme — the Singularity being the point at which AI becomes conscious and self-directed, iterating improvements and gains of function to the point it quickly outstrips humanity (cf. the Singularity trilogy — Singularity Sky, Iron Sunrise, and Accelerando, by Charles Stross). The key moment is when AI begins enhancing its own technology — like, say, around now. From Nature, a podcast:

Working out where to place the billions of components that a modern computer chip needs can take human designers months and, despite decades of research, has defied automation. Now, Google researchers have developed a machine-learning algorithm that does the job in a fraction of the time and is already helping to design their next generation of artificial-intelligence processors.

Podcast 28 minutes

Only the first 7 minutes are about how AI is now designing the chips for the next generation of AI — that is, designing improved descendants of itself. And those descendants will be able to do an even better job of designing improvements. It strikes me as somewhat like regenerative feedback. And what could possibly go wrong with that process? What’s the worst that could happen, eh?

Written by Leisureguy

11 June 2021 at 3:14 pm

A Military Drone With A Mind Of Its Own Was Used In Combat, U.N. Says


Highly recommended: Kill Decision and also Influx, by Daniel Suarez (and I would also highly recommend his other books, including the duology Daemon and Freedom™). Joe Hernandez reports for NPR:

Military-grade autonomous drones can fly themselves to a specific location, pick their own targets and kill without the assistance of a remote human operator. Such weapons are known to be in development, but until recently there were no reported cases of autonomous drones killing fighters on the battlefield.

Now, a United Nations report about a March 2020 skirmish in the military conflict in Libya says such a drone, known as a lethal autonomous weapons system — or LAWS — has made its wartime debut. But the report does not say explicitly that the LAWS killed anyone.

“If anyone was killed in an autonomous attack, it would likely represent an historic first known case of artificial intelligence-based autonomous weapons being used to kill,” Zachary Kallenborn wrote in Bulletin of the Atomic Scientists.

The assault came during fighting between the U.N.-recognized Government of National Accord and forces aligned with Gen. Khalifa Haftar, according to the report by the U.N. Panel of Experts on Libya.

“Logistics convoys and retreating [Haftar-affiliated forces] were subsequently hunted down and remotely engaged by the unmanned combat aerial vehicles or the lethal autonomous weapons systems such as the STM Kargu-2 … and other loitering munitions,” the panel wrote.

The Kargu-2 is an attack drone made by the Turkish company STM that can be operated both autonomously and manually and that purports to use “machine learning” and “real-time image processing” against its targets.

The U.N. report goes on: “The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true ‘fire, forget and find’ capability.”

“Fire, forget and find” refers to a weapon that once fired can guide itself to its target.

The idea of a “killer robot” has moved from fantasy to reality

Drone warfare itself is not new. For years, military forces and rebel groups have used remote-controlled aircraft to carry out reconnaissance, target infrastructure and attack people. The U.S. in particular has used drones extensively to kill militants and destroy physical targets.

Azerbaijan used armed drones to gain a major advantage over Armenia in recent fighting for control of the Nagorno-Karabakh region. Just last month, the Israel Defense Forces reportedly used drones to drop tear gas on protesters in the occupied West Bank, while Hamas launched loitering munitions — so-called kamikaze drones — into Israel.

What’s new about the incident in Libya, if confirmed, is . . .

Continue reading. There’s more.

Written by Leisureguy

2 June 2021 at 3:34 pm

Ingenious and stimulating science-fiction story


The story, “Lena,” is by qntm, translated from the Russian by Boris Ostanin. It begins:

This article is about the standard test brain image. For the original human, see Miguel Acevedo.

MMAcevedo (Mnemonic Map/Acevedo), also known as Miguel, is the earliest executable image of a human brain. It is a snapshot of the living brain of neurology graduate Miguel Álvarez Acevedo (2010–2073), taken by researchers at the Uplift Laboratory at the University of New Mexico on August 1, 2031. Though it was not the first successful snapshot taken of the living state of a human brain, it was the first to be captured with sufficient fidelity that it could be run in simulation on computer hardware without succumbing to cascading errors and rapidly crashing. The original MMAcevedo file was 974.3PiB in size and was encoded in the then-cutting-edge, high-resolution MYBB format. More modern brain compression techniques, many of them developed with direct reference to the MMAcevedo image, have compressed the image to 6.75TiB losslessly. In modern brain emulation circles, streamlined, lossily-compressed versions of MMAcevedo run to less than a tebibyte. These versions typically omit large amounts of state data which are more easily supplied by the virtualisation environment, and most if not all of Acevedo’s memories.

The successful creation of MMAcevedo was hailed as a breakthrough achievement in neuroscience, with the Uplift researchers receiving numerous accolades and Acevedo himself briefly becoming an acclaimed celebrity. Acevedo and MMAcevedo were jointly recognised as Time’s “Persons of the Year” at the end of 2031. The breakthrough was also met with severe opposition from human rights groups.

Between 2031 and 2049, MMAcevedo was duplicated more than 80 times, so that it could be distributed to other research organisations. Each duplicate was made with the express permission of Acevedo himself or, from 2043 onwards, the permission of a legal organisation he founded to manage the rights to his image. Usage of MMAcevedo diminished in the mid-2040s as more standard brain images were produced, these from other subjects who were more lenient with their distribution rights and/or who had been scanned involuntarily. In 2049 it became known that MMAcevedo was being widely shared and experimented upon without Acevedo’s permission. Acevedo’s attempts to curtail this proliferation had the opposite of the intended effect. A series of landmark U.S. court decisions found that Acevedo did not have the right to control how his brain image was used, with the result that MMAcevedo is now by far the most widely distributed, frequently copied, and closely analysed human brain image.

Acevedo died from coronary heart failure in 2073 at the age of 62. . .

Read the whole thing at the link.

Written by Leisureguy

16 April 2021 at 12:54 pm

Fantasy and the Buffered Self


Alan Jacobs writes in The New Atlantis:

When asked by the editors of the website The Immanent Frame to summarize the key concerns of his vastly ambitious book A Secular Age (2007), Charles Taylor wrote,

Almost everyone can agree that one of the big differences between us and our ancestors of five hundred years ago is that they lived in an “enchanted” world, and we do not; at the very least, we live in a much less “enchanted” world. We might think of this as our having “lost” a number of beliefs and the practices which they made possible. But more, the enchanted world was one in which these forces could cross a porous boundary and shape our lives, psychic and physical. One of the big differences between us and them is that we live with a much firmer sense of the boundary between self and other. We are “buffered” selves. We have changed.

As Taylor makes clear, the shift from a porous to a buffered self involves a complex series of exchanges. But to put that shift in simple terms, a person accepts a buffered condition as a means of being protected from the demonic or otherwise ominous forces that in pre-modern times generated a quavering network of terrors. To be a pre-modern person, in Taylor’s account, is to be constantly in danger of being invaded or overcome by demons or fairies or nameless terrors of the dark — of being possessed and transformed, or spirited away and never returned to home and family. Keith Thomas’s magisterial Religion and the Decline of Magic (1971) specifies many of these dangers, along with the whole panoply of prayers, rites, amulets, potions, chants, spells, and the like, by which a person might seek protection from the otherwise irresistible. It is easy, then, to imagine why a person — or a whole culture — might, if it could, exchange this model of a self with highly permeable boundaries for one in which the self feels better protected, defended — impermeable, or nearly so.

The problem with this apparently straightforward transaction is that the porous self is open to the divine as well as to the demonic, while the buffered self is closed to both alike. Those who must guard against capture by fairies are necessarily and by the same token receptive to mystical experiences. The “showings” manifested to Julian of Norwich depend upon exceptional sensitivity, which is to say porosity — vulnerability to incursions of the supernatural. The portals of the self cannot be closed on one side only. But the achievement of a safely buffered personhood — closed off from both the divine and the demonic — is soon enough accompanied by a deeply felt change in the very cosmos. As C. S. Lewis notes in The Discarded Image (1964), the medieval person who found himself “looking up at a world lighted, warmed, and resonant with music” gives way to the modern person who perceives only emptiness and silence. Safety is purchased at the high price of isolation, as we see as early as Pascal, who famously wrote of the night sky, “Le silence éternel de ces espaces infinis m’effraie” (“The eternal silence of these infinite spaces frightens me”).

In these circumstances, one might expect people to ask whether so difficult and costly an exchange is in fact necessary. Might it not be possible to experience the benefits, while avoiding the costs, of both the porous and the buffered self? I want to argue here that it is precisely this desire that accounts for the rise to cultural prominence, in late modernity, of the artistic genre of fantasy. Fantasy — in books, films, television shows, and indeed in all imaginable media — is an instrument by which the late modern self strives to avail itself of the unpredictable excitements of the porous self while retaining its protective buffers. Fantasy, in most of its recent forms, may best be understood as a technologically enabled, and therefore safe, simulacrum of the pre-modern porous self.

Before pursuing my argument, I must make two clarifications. First, fantasy itself is not a recent development but rather an ancient form (though not under its current name). What we now call “fantasy” is something closer to “realism” in the pagan world, which is populated by many powers capable of acting upon “porous” human selves. In the pagan world, success in life is largely a matter of navigating safely among those powers, which are unpredictable, beyond good and evil, and often indifferent to human needs. (Such indifference means that they can help as well as hurt, but also that their assistance can never be relied upon.) In this environment, fantastic creatures are at the very least personifications or embodiments of powers genuinely believed to exist. The realism is not strict, in that the writers and readers of earlier times did not necessarily believe in the existence of precisely such creatures as were described in their stories — perhaps not Apollo or Artemis any more than Dante’s Geryon or Spenser’s Blatant Beast, though such questions are necessarily and notoriously vexed. But at the very least the pre-modern world is one in which powers like those hold sway and cannot be safely neglected; a world in which what we would call the fantastic is an intrinsic element of the real.

Second, some of the most celebrated practitioners of modern fantasy share with their pre-modern predecessors this belief that the fictional apparatus of fantasy is a relatively close approximation to the way things really are for human beings. J. R. R. Tolkien may not have believed in Sauron, but he surely believed that there are in human history people who sell themselves to the Enemy and find themselves as a result of that decision first empowered and then destroyed. And when, at the beginning of Lewis’s Perelandra (1944), the protagonist Ransom’s progress toward a friend’s house is impeded by invisible forces who fill him with fear, Lewis was describing the work of spirits whom he truly believed to exist, though under a slightly different description, just as he probably believed that some forms of scientistic rationalism are the product of demonic influence. In short, these writers sought to present their readers with an image of an enchanted world, of selves fully porous to supernatural forces. But because they did so in genres (fantasy, science fiction) known for the imaginative portrayal of the wholly nonexistent, readers confident in their buffered condition can be delighted by those stories without ever for a moment considering the possibility that the forces portrayed therein might correspond to something real. Indeed, the delight of the stories for such readers consists primarily in their perceived unreality.

Concentrating Spiritual Power

The Judeo-Christian world is alien to the pagan one primarily in its concentration — in most of its versions — of all power in the hands of an omnipotent God, from whom everything else has only derivative strength, virtue, and indeed existence. People who do not accept this account of things commonly perceive it as comforting, though a reading of the first chapter of the book of Job — with its bland explanation that the Satanic torments of a righteous man occur at the explicit permission of the Almighty — should be enough to complicate that view. On the other hand, people fully shaped by this account of the world, with its emphasis on explaining why there is something rather than nothing, will necessarily find paganism insufficiently curious about where the powers that afflict human lives come from. After all, many pagan mythologies have no creation stories, or thin, minor ones. The powers of the pagan world just are: to reckon with them — to appease or evade them, to thwart them with some greater power, to swear fidelity to them — is a full-time job; there can be little energy left over to speculate about their origins.

So radical monotheism, though it does not alter the condition of porosity, and does not disenchant the world, forcefully concentrates charisma. . .

Continue reading.

Written by Leisureguy

19 March 2021 at 12:13 pm

A loophole in Asimov’s Three Laws of Robotics

leave a comment »

A reminder: The Three Laws, in strict priority order (a rough sketch of that ordering in code follows the list), are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
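
Since the post's title points to a loophole, here is a minimal sketch, in Python, of how the Laws' strict priority ordering might be encoded. Everything in it (the Action fields, the permitted() check, and the toy example) is my own illustration, not Asimov's formulation and not necessarily the loophole the post has in mind; it mainly shows how much weight the "through inaction" clause has to carry.

```python
from dataclasses import dataclass

# A toy encoding of the Three Laws as a strict priority ordering.
# The fields below are hypothetical stand-ins for judgments a real
# robot would somehow have to make; Asimov never specifies how.
@dataclass
class Action:
    harms_human: bool = False           # would acting injure a human?
    inaction_harms_human: bool = False  # would *not* acting allow a human to come to harm?
    obeys_order: bool = True            # is the action consistent with current human orders?
    endangers_robot: bool = False       # does the action risk the robot's own existence?

def permitted(a: Action) -> bool:
    # First Law (highest priority): never injure a human ...
    if a.harms_human:
        return False
    # ... and never allow harm through inaction. Note that this clause
    # overrides everything below it, including explicit orders.
    if a.inaction_harms_human:
        return True
    # Second Law: obey orders, unless doing so conflicts with the First Law.
    if not a.obeys_order:
        return False
    # Third Law: self-preservation, unless it conflicts with the First or Second Laws.
    if a.endangers_robot:
        return False
    return True

# Example: a robot ordered to stand by while a human is in danger.
# The First Law's inaction clause wins, so the action is permitted
# (indeed required) despite violating the order.
print(permitted(Action(obeys_order=False, inaction_harms_human=True)))  # True
```

The ordering gets slippery exactly where this sketch takes a shortcut: single booleans cannot capture competing harms, conflicting orders, or what counts as "inaction," and that is the kind of gap loophole arguments tend to exploit.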

Written by Leisureguy

9 March 2021 at 4:52 pm

The Future Encyclopedia of Luddism: An alternative economic and industrial history and future

leave a comment »

Miriam A. Cherry writes in The MIT Press Reader:

In common parlance, the term “Luddite” means someone who is anti-technology, or maybe, just not adept at using technology. Historically, however, the Luddite movement was a reaction born of industrial accidents and dangerous machines, poor working conditions, and the fact that there were no unions to represent worker interests during England’s initial period of industrialization. The Luddites did not hate technology; they only channeled their anger toward machine-breaking because it had nowhere else to go.

What you are about to read is an alternate history (an encyclopedia entry from circa 2500) that depends on the critical assumption that the Luddites succeeded in their industrial campaign in the 1810s. Instead of techno-determinism (the idea that the development of technology is inevitable, and that society will alter and adjust to it), the Encyclopedia entry notes that the Luddites, in their success, formulated a different, yet productive, relationship between society and the development of technology.


Originating in Great Britain during the Industrial Revolution in the first two decades of the 19th century, Luddism was a movement arising as a response to poor working conditions in nascent textile manufacturing businesses. The Luddite movement was a precursor to the development of the economic philosophy known as Sustainomics, which promotes technological development that adheres to principles of Utilitarianism and Human Flourishing Doctrines. Sustainomics began its rise in the early part of the 20th century and has remained the dominant economic system of the Hemispheric Union for the past 600 years.

Beginning in the early 19th century, foreign wars coupled with high unemployment and food shortages caused widespread desperation among the populace. Many seeking “earned wages” went to work in rudimentary industrial factories. With no safety standards and shoddy medical care, industrial accidents were quite common.

As corn became increasingly scarce in the winter of 1810 to 1811, groups of workers who could not pay for food and shelter became even more desperate. Under the Combination Act of 1799, Parliament had outlawed unions. It was amidst these stark conditions that the Luddites began to organize in secret. The Luddite Movement was open to both women workers and child laborers. Indeed, women and children comprised roughly 40 percent of the Luddite membership.

Leadership of General Ned Ludd and Origin of the Term ‘Luddite’

Many stories and legends have grown up around the esteemed figure of General Ned Ludd, named by “Passage Zine” as one of the “Top 10 Most Influential People of the Last Thousand Years.” Hailed as a visionary even in his own time, he is the namesake of the Luddite Councils. The complete story of Ludd’s life and times is told in “The Epic Saga of General Ludd.” While stylized, the Saga has largely been corroborated by the archaeological record.

As an orphan, young Ned grew up in the late 1790s in a “workhouse,” a facility that existed to make people “earn their keep,” to use the antiquated terminology and backward thinking of the time. Ned was trained in the textile trade as a boy. Contemporary sources recount 15-year-old Ned being beaten when he refused to work at a machine that had, only moments beforehand, severed one of his co-worker’s arms. After several days of docked wages, Ned, still nursing bruises from his beating, was told to go back to work on that same dangerous device. As every schoolchild learns in reading “The Luddite Primer,” young Ned seized a hammer and smashed the hazardous machine. Within a fortnight Ned had fled the factory and joined the British army.

Although he had only a brief stint in the military, young Ned was a quick student of battlefield strategy. Returning to Huddersfield just a few years later, his supporters styled him “General Ludd.” As the Movement increased in popularity over the summer of 1811, a large crowd gathered at Huddersfield. By the time the Movement began in earnest, Ned Ludd’s supporters numbered over 100,000. Luddite supporters were characterized by their sense of utmost loyalty and solidarity to their brothers and sisters in the Movement. Despite the large number of supporters and the completely rudimentary communication available at the time, the Movement, its leaders and its political and social aims remained a well-guarded secret to factory owners and the government alike.

Takeover of Factories

Beginning in November 1811, General Ludd and his right-hand man, Lt. George Mellor, surrounded, took, and held factories throughout the textile district of Nottinghamshire. Their first victory, at Cartwrights Mill at Rawfolds, is now the site of the Mellor Memorial Museum, which contains many of the original documents so central to the Luddite Movement. The success of the early campaigns was largely due to the fact that the Luddites were chiefly a peaceful movement. Contemporaneous newspaper accounts described individual events as the “occupation” of factories. This characterization has since been disputed by researchers, and definitive archaeological studies have completely repudiated these polemical accounts as wholly fabricated. . . .

Continue reading.

Written by Leisureguy

21 January 2021 at 4:58 pm

The Mystery of “The Jet-Propelled Couch”

leave a comment »

An absolutely fascinating column by Mark Frauenfelder:

When I was 15 I read “The Jet-Propelled Couch,” the true story of a psychiatric patient who believed he could teleport to a faraway planet. I’ve been fascinated ever since.

I learned about it from the Vaughn Bodē Index (1976). Bodē (1941-1978) was an underground cartoonist best known for Cheech Wizard. In an interview in the Index, Bodē lamented the fact that the patient in “The Jet-Propelled Couch” had been “cured” of his delusion. I was intrigued and wanted to learn more about this patient, so I scoured used bookstores in Boulder, Colorado until I found a copy of The Fifty-Minute Hour and Other True Psychoanalytic Tales (1955), by psychiatrist Robert M. Lindner (best known for his 1944 book Rebel Without A Cause: The Hypnoanalysis Of A Criminal Psychopath, which was the inspiration for the James Dean movie of the same name). The Fifty-Minute Hour contained five fascinating case stories of Lindner’s patients.

The most famous of the five cases was that of “Kirk Allen,” whom Lindner described in the book’s final chapter, “The Jet-Propelled Couch.” According to Lindner, Allen (a pseudonym) was one of Lindner’s patients, born in 1918, who was a physicist at “X Reservation,” a “government installation in the southwest” (probably Los Alamos National Laboratory). Allen had made important contributions during World War II (probably as part of the Manhattan Project). After Allen’s superiors observed him chronically spacing out on the job while muttering about his travels to other planets, they sent him to Lindner’s Baltimore office for long-term treatment.

Lindner described Allen as friendly and polite, and seemingly free of mental illness. But as Lindner got to know Allen, he learned that his patient had a traumatic childhood that affected him profoundly. Allen had grown up on an island in the tropical Pacific where he felt isolated from other children. His mother and father (a high-ranking member of the U.S. military) paid little attention to him. “Throughout childhood and early adolescence,” wrote Lindner, “he was haunted by the difference between himself and his companions, a difference not solely of skin color but of social heritage and the innumerable subtleties of life.” To make matters worse, Allen’s governess sexually abused him for many months when he was eleven years old, which added further trauma.

While living on the island, Allen came across a series of science fiction/fantasy novels in the library that starred a protagonist who shared his name. The books provided an escape from his unhappy life. Allen read and re-read the novels.

“As I read about the adventures of Kirk Allen in these books,” Allen told Lindner, “the conviction began to grow on me that the stories were not only true to the very last detail, but that they were about me.”

He began fantasizing about additional adventures starring his namesake. His reveries were so rich in sensory detail that Allen came to the conclusion that his imagined escapades weren’t fiction — they were actually taking place in the future and he was somehow tapping into them. The fantasies grew and continued for years. He eventually discovered that he could leave his earthly body and travel forward in time to live as the heroic Kirk Allen on a faraway planet. He also learned he could spend a year or more as the spacefaring Allen and return to Earth, where only a few minutes had passed.

Here’s how he described the experience to Lindner: . . .

Continue reading. There’s much more.

I perhaps should note that Cordwainer Smith (the pen name of Paul Linebarger, whom some readers have long speculated was the real “Kirk Allen”) was a very fine science-fiction writer — one of my favorite authors.

Written by Leisureguy

20 September 2020 at 7:06 pm

“The End of History” — a brief time-travel movie

leave a comment »

Written by Leisureguy

28 February 2020 at 4:59 pm
