Later On

A blog written for those whose interests more or less match mine.

Archive for the ‘Science fiction’ Category

Mental itch satisfactorily scratched: Novel name found.

leave a comment »

In a shaving post a while back I was trying to recall the specifics of a science-fiction novel I read long ago. This morning the answer has been found, and I’ve updated that post:

(It reminds me of a science-fiction short story or novella or novel that I read decades ago, in which the protagonist, who was meditating by observing the stages water goes through as it comes to a boil, enters the mindset that the alien overlords want, whereupon he is harvested and incorporated into the network of humans the aliens have built. But although he was meditating, he is essentially a wolf, not a sheep, and he awakens from the state in which the aliens put him, and leads the usual successful rebellion. I cannot remember enough to find the title, but the story was anthologized for sure, probably in New Tales of Space and Time, or one of Groff Conklin’s anthologies.)

Update: I just discovered Reddit’s r/tipofmytongue, and I used it to ask whether anyone knew the title. Three minutes later, I had the answer, thanks to user Feraffiphar. The novel is Wolfbane, by Frederik Pohl and C. M. Kornbluth, published in 1959. Available on Kindle for $1.

What a relief.

The context was my own meditative observation of the stages of loading a brush with shaving soap.

Written by Leisureguy

19 December 2021 at 7:42 am

More lather fun, and the blissful Monday-morning shave

with 2 comments

When one has been told how to do a repeated task, it takes some practice and experience to discover the absolutely optimal approach. The discovery experience often feels like an extraordinary leap of insight and understanding, and the learner will exclaim, in italics as it were, the very thing he was told, but with the words feeling fresh and vibrant and electric. “Oh! Light pressure!” from a shaver who has been repeatedly told to use light pressure — and who thought he was using light pressure — spoken after the first time he actually does use light pressure.

I’m experiencing some of that in my lather making. Long ago Mantic59 advised shaking the brush until it was barely damp, then starting to load the brush, adding very small amounts of water as needed to complete the loading. Once the brush has been loaded, a little more water is generally added as the brush works up the lather (on the face, in a bowl, in a cupped palm).

Only recently have I found the right approach — that is, “right” in the sense of producing the optimal lather (without it being runny in the slightest) and — equally if not more important to me — a process that is interesting to do and observe.

The trick for me was to change the injunction “Wet the knot well under the hot-water tap and then shake the water out until the brush is only slightly damp” to end instead with “… until the brush is as dry as you can get it by shaking.” The loading in this method works best if you have tried to shake all the water out of the brush. (You can’t — the brush will still be slightly damp — but by aiming at “dry” instead of “slightly damp,” I did get all the residual excess water out of the brush.)

Then when you brush the soap, nothing much happens. So you add just a tiny amount of water. Note that this water is enjoyed by the tip of the brush, not the inner core. As you brush those few drops of water over the soap, you’ll see a little soap transferred from puck to brush. Add another tiny amount of water and brush again. With another two or three repetitions, you will have gotten a good amount of soap loaded into the top of the knot and no extra water in the interior.

At this point I brush a good coating of lather over all my stubble (which has already been wetted and rubbed with a dot of Grooming Dept Moisturizing Pre-Shave, and then splashed a bit with more water). Once the stubble is fully coated, I add one more small amount of hot water to the tip of the brush and work that in.

The process doesn’t take so long as this detailed description might make you think, and in any case once you learn it, practice will make you efficient. What I particularly like about this approach, beyond the quality of lather I get, is observing the stages of loading the brush.

(It reminds me of a science-fiction short story or novella or novel that I read decades ago, in which the protagonist, who was meditating by observing the stages water goes through as it comes to a boil, enters the mindset that the alien overlords want, whereupon he is harvested and incorporated into the network of humans the aliens have built. But although he was meditating, he is essentially a wolf, not a sheep, and he awakens from the state in which the aliens put him, and leads the usual successful rebellion. I cannot remember enough to find the title, but the story was anthologized for sure, probably in New Tales of Space and Time, or one of Groff Conklin’s anthologies.)

Update: I just discovered Reddit’s r/tipofmytongue, and I used it to ask whether anyone knew the title. Three minutes later, I had the answer, thanks to user Feraffiphar. The novel is Wolfbane, by Frederik Pohl and C. M. Kornbluth, published in 1959, available on Kindle for $1.

With lather in place, my Eros slant did its usual superb job — and this razor enjoys and rewards light pressure (the sort of pressure you’re told to use, and you still have trouble with it, and then when you finally get it right, you say, “Oh! Light pressure. Why didn’t you say so?”).

Three passes left my face perfectly smooth and undamaged, and a splash of Chatillon Lux’s aftershave toner in the matching fragrance left me ready to face the day — a dark day that may well see the fall of snow. Hope you are warm and dry.

Written by Leisureguy

6 December 2021 at 10:15 am

Astray to the spacedge

leave a comment »

Written by Leisureguy

18 November 2021 at 12:20 pm

Posted in Science fiction, Video

Good list of influential science-fiction books

leave a comment »

Take a look. I downloaded a dozen or so samples. (I have a Kindle, and one way to remember a book I might be interested in buying is to download the sample. It’s more effective than putting the book on a list, and surprisingly often just having the sample stifles the impulse to buy.)

One interesting point: the maker of the list (or at least the writer of the headline) believes that no science-fiction book more influential than those in the list will ever be written. That’s quite a claim, and I wonder how they know.

Written by Leisureguy

13 October 2021 at 4:09 pm

Posted in Books, Science fiction

QNTM on memes, anti-memes, and knowledge that doesn’t want to be shared

leave a comment »

This is a fascinating interview, and I highly recommend reading it or listening to it. The introductory matter:

QNTM is a software engineer and the author of There Is No Antimemetics Division. Here, QNTM speaks to the Browser’s Uri Bram about collaborative fiction, why people with deep and very specific expertise are often great storytellers, and the surprising subjectivity of finding right answers in software development.

[Listen to this interview as a podcast or on Youtube (audio only)]

The interview proper begins:

Uri Bram: Your latest book—which is wonderful—is called There Is No Antimemetics Division. Can you tell us a little bit conceptually about the idea of antimemes?

What is an anti-meme?

QNTM: So if you’re reading this, you probably have a reasonable idea of what a meme is, but there are a couple of different colliding definitions of meme these days.

For my purposes, a meme is a contagious idea, which is much more of an older definition than today’s conception of “internet meme.” It’s an idea that catches on due to some kind of hook within the idea itself. It’s a piece of information that you have, but there’s also an aspect where you want to share this information with other people, spread this idea to other people.

The canonical example of a contagious idea would be some kind of evangelical religion, where they would say: “Hey, this is the way the universe is structured. This is how the cosmos exists, but also convert other people to this way of thinking, go out and find people and tell them this as well.”

But there’s a way simpler idea of memes: a contagious song, a catch phrase, a political slogan, or even a symbol that’s easy to draw. Wouldn’t that be a meme as well?

So looking at this I thought that some ideas are more contagious than others and some ideas aren’t contagious at all—they just kind of sit there. So what’s at the other end of the scale: what kind of ideas resist being spread? What information would you intrinsically not want anyone else to find out about? Or maybe you do want to spread it, but you can’t for whatever reason?

In real life, there’s a ton of ideas that fall into this class: random wild data is very difficult to share because it’s just nonsense and it’s not very memorable; just boring things are difficult to share; complicated equations are difficult to share because you can’t remember them properly—because we’re humans and that’s not how we remember things.

But also there’s a category of ideas that are hard to share intrinsically like passwords. I’m motivated to keep my password a secret. There are all kinds of official secrets, like government secrets that you’re motivated to keep secret.

And from there, you move into injunctions and super injunctions and gag orders. Or what kind of journalism is forbidden in the country where you happen to live? What kind of things are you not allowed to say? What is a taboo? What are the things that are true, but we don’t talk about? Although this is orthogonal to the truth. Just because something is memetic or anti-memetic doesn’t mean it’s true or false.

Playing with the idea of anti-memes in science fiction.

QNTM: The truth can be very difficult to share. As they say, a lie can circle the globe before the truth can get its boots on. So a falsehood can be very memetic, but I looked at this and thought… “anti-meme” is a novel neologism, but it’s mainly just a synonym for things we already know exist. We know what secrets are, we know what taboos are. But I started taking this into a fictional concept and there’s a large amount of science fiction that takes the idea of memes and anti-memes and plays with it.

For instance you could have a concept which exists and is plain as day and is right in front of you, but you can’t remember it and when you turn away, you’ve stopped being able to remember that it was there—even though it was clearly there. An anti-memetic thing could trip you so you fall, but you wouldn’t remember why you fell and then when you stood up again, you wouldn’t even remember that you fell over at all.

So I thought okay, there’s a bit of mileage in there, I can tell a story in this.

If you’ve read the book, chapter one of the book is that concept, but that’s just the start, and then I keep going. Let’s suppose this is a real phenomenon. What kind of organization could deal with this kind of phenomenon? How would that organization have to operate? What kind of person would work there? And as I just kept digging into those questions, more and more story just showed up and I started writing.

Uri Bram: I was recommended this book with no context. I was told there’s this book, you should just read it and go in knowing as little as you can, which I think in itself is kind of interesting on your terms. Not anti-memetic, but there was hidden knowledge or knowledge that they didn’t want to convey.

QNTM: Oh, absolutely. There’s two aspects of this kind of thing. There’s ideas that you want to know, but you can’t hang onto them, they get away from you and what do you do about that? What kind of systems do you have to develop to handle that?

And then on the flip side of it, the second half of the book is about . . .

Continue reading. There’s much more that’s interesting.

Written by Leisureguy

6 October 2021 at 9:07 pm

The Most Important Device in the Universe Is Powered by a 555 Timer

leave a comment »

I love the kind of technical shop talk exchanged among people familiar with some line of country remote from my knowledge. Such talk is studded with things I don’t know, though I can follow the trend of the conversation. It’s like a stream: I follow the overall flow, but there are occasional boulders sticking up out of the water.

Certain kinds of science fiction have some of the same appeal: the writer begins in medias res and casually uses words whose referents the reader is expected to figure out as the story progresses. This is a common technique (cf. William Gibson, Charlie Stross, et al.), and for me it works well, keeping me alert for clues that will explain the terms, which may refer to culture, dress, devices, or whatever.

A recent post at Hackaday.com is full of that, but also provides an entertaining look at prop construction and usage in science-fiction movies and TV — the short clip at the end is a must see, and the comments also are worth reading.

Written by Leisureguy

24 September 2021 at 11:08 am

The Singularity approacheth

leave a comment »

As I’ve mentioned previously, the Singularity in science-fiction is when an AI gets good enough to design a better AI, and then recursion leads to exponential improvement, and the story’s off and running.

Well, it’s here. Matthew Hutson mentions it in an article in the New Yorker, which begins:

Deep learning, the artificial-intelligence technology that powers voice assistants, autonomous cars, and Go champions, relies on complicated “neural network” software arranged in layers. A deep-learning system can live on a single computer, but the biggest ones are spread over thousands of machines wired together into “clusters,” which sometimes live at large data centers, like those operated by Google. In a big cluster, as many as forty-eight pizza-box-size servers slide into a rack as tall as a person; these racks stand in rows, filling buildings the size of warehouses. The neural networks in such systems can tackle daunting problems, but they also face clear challenges. A network spread across a cluster is like a brain that’s been scattered around a room and wired together. Electrons move fast, but, even so, cross-chip communication is slow, and uses extravagant amounts of energy.

Eric Vishria, a general partner at Benchmark, a venture-capital firm in San Francisco, first came to understand this problem in the spring of 2016, while listening to a presentation from a new computer-chip company called Cerebras Systems. Benchmark is known for having made early investments in companies such as Twitter, Uber, and eBay—that is, in software, not hardware. The firm looks at about two hundred startup pitches a year, and invests in maybe one. “We’re in this kissing-a-thousand-frogs kind of game,” Vishria told me. As the presentation started, he had already decided to toss the frog back. “I’m, like, Why did I agree to this? We’re not gonna do a hardware investment,” he recalled thinking. “This is so dumb.”

Andrew Feldman, Cerebras’s co-founder, began his slide deck with a cover slide, then a team slide, catching Vishria’s attention: the talent was impressive. Then Feldman compared two kinds of computer chips. First, he looked at graphics-processing units, or G.P.U.s—chips designed for creating 3-D images. For a variety of reasons, today’s machine-learning systems depend on these graphics chips. Next, he looked at central processing units, or C.P.U.s—the general-purpose chips that do most of the work on a typical computer. “Slide 3 was something along the lines of, ‘G.P.U.s actually suck for deep learning—they just happen to be a hundred times better than C.P.U.s,’ ” Vishria recalled. “And, as soon as he said it, I was, like, facepalm. Of course! Of course!” Cerebras was proposing a new kind of chip—one built not for graphics but for A.I. specifically.

Vishria had grown used to hearing pitches from companies that planned to use deep learning for cybersecurity, medical imaging, chatbots, and other applications. After the Cerebras presentation, he talked with engineers at some of the companies that Benchmark had helped fund, including Zillow, Uber, and Stitch Fix; they told him that they were struggling with A.I. because “training” the neural networks took too long. Google had begun using super-fast “tensor-processing units,” or T.P.U.s—special chips it had designed for artificial intelligence. Vishria knew that a gold rush was under way, and that someone had to build the picks and shovels.

That year, Benchmark and Foundation Capital, another venture-capital company, led a twenty-seven-million-dollar round of investment in Cerebras, which has since raised close to half a billion dollars. Other companies are also making so-called A.I. accelerators; Cerebras’s competitors—Groq, Graphcore, and SambaNova—have raised more than two billion dollars in capital combined. But Cerebras’s approach is unique. Instead of making chips in the usual way—by printing dozens of them onto a large wafer of silicon, cutting them out of the wafer, and then wiring them to one another—the company has made one giant “wafer-scale” chip. A typical computer chip is the size of a fingernail. Cerebras’s is the size of a dinner plate. It is the largest computer chip in the world.

Even competitors find this feat impressive. “It’s all new science,” Nigel Toon, the C.E.O. and co-founder of Graphcore, told me. “It’s an incredible piece of engineering—a tour de force.” At the same time, another engineer I spoke with described it, somewhat defensively, as a science project—bigness for bigness’s sake. Companies have tried to build mega-chips in the past and failed; Cerebras’s plan amounted to a bet that surmounting the engineering challenges would be possible, and worth it. “To be totally honest with you, for me, ignorance was an advantage,” Vishria said. “I don’t know that, if I’d understood how difficult it was going to be to do what they did, I would have had the guts to invest.”

Computers get faster and faster—a remarkable fact that’s easy to take for granted. It’s often explained by means of Moore’s Law: the pattern identified in 1965 by the semiconductor pioneer Gordon Moore, according to which the number of transistors on a chip doubles every year or two. Moore’s Law, of course, isn’t really a law. Engineers work tirelessly to shrink transistors—the on-off switches through which chips function—while also refining each chip’s “architecture,” creating more efficient and powerful designs. . .

. . .  Cerebras’s first task was to address the manufacturing difficulties that bedevil bigger chips. A chip begins as a cylindrical ingot of crystallized silicon, about a foot across; the ingot gets sliced into circular wafers a fraction of a millimetre thick. Circuits are then “printed” onto the wafer, through a process called photolithography. Chemicals sensitive to ultraviolet light are carefully deposited on the surface in layers; U.V. beams are then projected through detailed stencils called reticles, and the chemicals react, forming circuits.

Typically, the light projected through the reticle covers an area that will become one chip. The wafer then moves over and the light is projected again. After dozens or hundreds of chips are printed, they’re laser-cut from the wafer. “The simplest way to think about it is, your mom rolls out a round sheet of cookie dough,” Feldman, who is an avid cook, said. “She’s got a cookie cutter, and she carefully stamps out cookies.” It’s impossible, because of the laws of physics and optics, to build a bigger cookie cutter. So, Feldman said, “We invented a technique such that you could communicate across that little bit of cookie dough between the two cookies.”

In Cerebras’s printing system—developed in partnership with T.S.M.C., the company that manufactures its chips—the cookies overlap at their edges, so that their wiring lines up. The result is a single, “wafer-scale” chip, copper-colored and square, which is twenty-one centimetres on a side. (The largest G.P.U. is a little less than three centimetres across.) Cerebras produced its first chip, the Wafer-Scale Engine 1, in 2019. The WSE-2, introduced this year, uses denser circuitry, and contains 2.6 trillion transistors collected into eight hundred and fifty thousand processing units, or “cores.” (The top G.P.U.s have a few thousand cores, and most C.P.U.s have fewer than ten.)

Aart de Geus, the chairman and co-C.E.O. of the company Synopsys, asked me, “2.6 trillion transistors is astounding, right?” Synopsys provides some of the software that Cerebras and other chipmakers use to make and verify their chip designs. In designing a chip, de Geus said, an engineer starts with two central questions: “Where does the data come in? Where is it being processed?” When chips were simpler, designers could answer these questions at drafting tables, with pencils in hand; working on today’s far more complex chips, they type code that describes the architecture they want to create, then move on to using visual and coding tools. “Think of seeing a house from the top,” de Geus said. “Is the garage close to the kitchen? Or is it close to the bedroom? You want it close to the kitchen—otherwise, you will have to carry groceries all through the house.” He explained that, having designed the floor plan, “you might describe what happens inside a room using equations.”

Chip designs are mind-bogglingly intricate. “There’s multiple layers,” de Geus said, with circuits crisscrossing and running on top of one another, like major expressway interchanges. For Cerebras’s engineers, working at wafer scale, that complexity was heightened. Synopsys’s software offered assistance in the form of artificial intelligence: pattern-matching algorithms recognized frequent problems and suggested solutions; optimization routines nudged rooms into faster, more efficient arrangements. If too many lanes of traffic try to squeeze between two blocks of buildings, the software allows engineers to play Robert Moses, shifting the blocks.

In the end, Feldman said, the mega-chip design offers several advantages. Cores communicate faster when they’re on the same chip: instead of being spread around a room, the computer’s brain is now in a single skull. Big chips handle memory better, too. Typically, a small chip that’s ready to process a file must first fetch it from a shared memory chip located elsewhere on its circuit board; only the most frequently used data might be cached closer to home. In describing the efficiencies of the wafer-scale chip, Feldman offered an analogy: he asked me to imagine groups of roommates (the cores) in a dormitory (a chip) who want to watch a football game (do computing work). To watch the game, Feldman said, the roommates need beer stored in a fridge (data stored in memory); Cerebras puts a fridge in every room, so that the roommates don’t have to venture to the dorm’s common kitchen or the Safeway. This has the added advantage of allowing each core to work more quickly on different data. “So in my dorm room I can have Bud,” Feldman said. “And in your dorm room you can have Schlitz.”

Finally, Cerebras had to surmount the problem of yield. The firm’s engineers use Trilogy’s trick: redundancy. But here they have an advantage over their predecessors. Trilogy was trying to make a general-purpose chip, with many varied components, and so wiring around a single failed element could require connecting to a distant substitute. On Cerebras’s chip, all the cores are identical. If one cookie comes out wrong, the ones surrounding it are just as good.

In June, in a paper published in Nature, Google developers reported that, for the first time, they’d fully automated a process called “chip floorplanning.” A typical chip can contain thousands of memory blocks, tens of millions of logic gates, and tens of kilometres of microscopic wiring. Using the same techniques that their DeepMind colleagues had used to teach a neural network to win at Go, they’d trained an A.I. to floorplan a tensor-processing unit, arranging these elements while preventing data congestion; when they tested the A.I.’s T.P.U. against one that a team of experts had spent several months creating, they found that the computer’s design, drawn up in a matter of hours, matched or exceeded the humans’ in efficient use of area, power, and wire length. Google is currently using the algorithm to design its next T.P.U.

People in A.I. circles speak of the singularity—a point at which technology will begin improving itself at a rate beyond human control. I asked de Geus if his software had helped design any of the chips that his software now uses to design chips. He said that it had, and showed me a slide deck from a recent keynote he’d given; it ended with M. C. Escher’s illustration of two hands drawing each other, which de Geus had labelled “Silicon” and “Smarts.” When I told Feldman that I couldn’t wait to see him use a Cerebras chip to design a Cerebras chip, he laughed. “That’s like feeding chickens chicken nuggets,” he said. “Ewww.”

Designing and manufacturing the chip turned out to be just half of the challenge. Brains use . . .

Continue reading. There’s more.

Emphasis added.

Written by Leisureguy

20 August 2021 at 6:23 pm

Google AI beats humans at designing computer chips

leave a comment »

Some science fiction uses the Singularity as a theme — the Singularity being the point at which AI becomes conscious and self-directed, iterating improvements and gains of function to the point that it quickly outstrips humanity (cf. Charles Stross’s Singularity Sky, Iron Sunrise, and Accelerando). The key moment is when AI begins enhancing its own technology — like, say, around now. From Nature, a podcast:

Working out where to place the billions of components that a modern computer chip needs can take human designers months and, despite decades of research, has defied automation. Now, Google researchers have developed a machine-learning algorithm that does the job in a fraction of the time and is already helping to design their next generation of artificial-intelligence processors.

Podcast 28 minutes

Only the first 7 minutes are about how AI is now designing the chips for the next generation of AI — that is, designing improved descendants of itself. And those descendants will be able to do an even better job of designing improvements. It strikes me as somewhat like regenerative feedback. And what could possibly go wrong with that process? What’s the worst that could happen, eh?
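A toy way to see why that feedback loop compounds so quickly (the function and the 15%-per-generation figure below are invented purely for illustration; nothing here comes from the podcast or from Google): if each generation of AI-designed chips makes the next design pass even modestly better, the gains multiply rather than add.

```python
# Toy sketch of the "regenerative feedback" analogy above.
# The 15% per-generation gain and the ten generations are invented numbers,
# chosen only to show how compounding produces exponential growth.

def capability_curve(generations: int, gain_per_generation: float = 0.15) -> list[float]:
    """Design capability relative to today's baseline, compounding each generation."""
    capability = 1.0
    history = [capability]
    for _ in range(generations):
        # The improved tool is used to design its own successor,
        # so each step multiplies the previous capability.
        capability *= 1.0 + gain_per_generation
        history.append(capability)
    return history

if __name__ == "__main__":
    for generation, capability in enumerate(capability_curve(10)):
        print(f"generation {generation:2d}: {capability:5.2f}x baseline")
```

Even that modest assumed gain roughly quadruples capability in ten generations — which is the compounding that makes “what could possibly go wrong?” more than a rhetorical question.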

Written by Leisureguy

11 June 2021 at 3:14 pm

A Military Drone With A Mind Of Its Own Was Used In Combat, U.N. Says

leave a comment »

Highly recommended: Kill Decision and also Influx, by Daniel Suarez (and I would also highly recommend his other books, including the duology Daemon and Freedom™). Joe Hernandez reports for NPR:

Military-grade autonomous drones can fly themselves to a specific location, pick their own targets and kill without the assistance of a remote human operator. Such weapons are known to be in development, but until recently there were no reported cases of autonomous drones killing fighters on the battlefield.

Now, a United Nations report about a March 2020 skirmish in the military conflict in Libya says such a drone, known as a lethal autonomous weapons system — or LAWS — has made its wartime debut. But the report does not say explicitly that the LAWS killed anyone.

“If anyone was killed in an autonomous attack, it would likely represent an historic first known case of artificial intelligence-based autonomous weapons being used to kill,” Zachary Kallenborn wrote in Bulletin of the Atomic Scientists.

The assault came during fighting between the U.N.-recognized Government of National Accord and forces aligned with Gen. Khalifa Haftar, according to the report by the U.N. Panel of Experts on Libya.

“Logistics convoys and retreating [Haftar-affiliated forces] were subsequently hunted down and remotely engaged by the unmanned combat aerial vehicles or the lethal autonomous weapons systems such as the STM Kargu-2 … and other loitering munitions,” the panel wrote.

The Kargu-2 is an attack drone made by the Turkish company STM that can be operated both autonomously and manually and that purports to use “machine learning” and “real-time image processing” against its targets.

The U.N. report goes on: “The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true ‘fire, forget and find’ capability.”

“Fire, forget and find” refers to a weapon that once fired can guide itself to its target.

The idea of a “killer robot” has moved from fantasy to reality

Drone warfare itself is not new. For years, military forces and rebel groups have used remote-controlled aircraft to carry out reconnaissance, target infrastructure and attack people. The U.S. in particular has used drones extensively to kill militants and destroy physical targets.

Azerbaijan used armed drones to gain a major advantage over Armenia in recent fighting for control of the Nagorno-Karabakh region. Just last month, the Israel Defense Forces reportedly used drones to drop tear gas on protesters in the occupied West Bank, while Hamas launched loitering munitions — so-called kamikaze drones — into Israel.

What’s new about the incident in Libya, if confirmed, is . . .

Continue reading. There’s more.

Written by Leisureguy

2 June 2021 at 3:34 pm

Ingenious and stimulating science-fiction story

leave a comment »

The story, “Lena,” is by qntm, translated from the Russian by Boris Ostanin. It begins:

This article is about the standard test brain image. For the original human, see Miguel Acevedo.

MMAcevedo (Mnemonic Map/Acevedo), also known as Miguel, is the earliest executable image of a human brain. It is a snapshot of the living brain of neurology graduate Miguel Álvarez Acevedo (2010–2073), taken by researchers at the Uplift Laboratory at the University of New Mexico on August 1, 2031. Though it was not the first successful snapshot taken of the living state of a human brain, it was the first to be captured with sufficient fidelity that it could be run in simulation on computer hardware without succumbing to cascading errors and rapidly crashing. The original MMAcevedo file was 974.3PiB in size and was encoded in the then-cutting-edge, high-resolution MYBB format. More modern brain compression techniques, many of them developed with direct reference to the MMAcevedo image, have compressed the image to 6.75TiB losslessly. In modern brain emulation circles, streamlined, lossily-compressed versions of MMAcevedo run to less than a tebibyte. These versions typically omit large amounts of state data which are more easily supplied by the virtualisation environment, and most if not all of Acevedo’s memories.

The successful creation of MMAcevedo was hailed as a breakthrough achievement in neuroscience, with the Uplift researchers receiving numerous accolades and Acevedo himself briefly becoming an acclaimed celebrity. Acevedo and MMAcevedo were jointly recognised as Time’s “Persons of the Year” at the end of 2031. The breakthrough was also met with severe opposition from human rights groups.

Between 2031 and 2049, MMAcevedo was duplicated more than 80 times, so that it could be distributed to other research organisations. Each duplicate was made with the express permission of Acevedo himself or, from 2043 onwards, the permission of a legal organisation he founded to manage the rights to his image. Usage of MMAcevedo diminished in the mid-2040s as more standard brain images were produced, these from other subjects who were more lenient with their distribution rights and/or who had been scanned involuntarily. In 2049 it became known that MMAcevedo was being widely shared and experimented upon without Acevedo’s permission. Acevedo’s attempts to curtail this proliferation had the opposite of the intended effect. A series of landmark U.S. court decisions found that Acevedo did not have the right to control how his brain image was used, with the result that MMAcevedo is now by far the most widely distributed, frequently copied, and closely analysed human brain image.

Acevedo died from coronary heart failure in 2073 at the age of 62. . .

Read the whole thing at the link.

Written by Leisureguy

16 April 2021 at 12:54 pm

Fantasy and the Buffered Self

leave a comment »

Alan Jacobs writes in The New Atlantis:

When asked by the editors of the website The Immanent Frame to summarize the key concerns of his vastly ambitious book A Secular Age (2007), Charles Taylor wrote,

Almost everyone can agree that one of the big differences between us and our ancestors of five hundred years ago is that they lived in an “enchanted” world, and we do not; at the very least, we live in a much less “enchanted” world. We might think of this as our having “lost” a number of beliefs and the practices which they made possible. But more, the enchanted world was one in which these forces could cross a porous boundary and shape our lives, psychic and physical. One of the big differences between us and them is that we live with a much firmer sense of the boundary between self and other. We are “buffered” selves. We have changed.

As Taylor makes clear, the shift from a porous to a buffered self involves a complex series of exchanges. But to put that shift in simple terms, a person accepts a buffered condition as a means of being protected from the demonic or otherwise ominous forces that in pre-modern times generated a quavering network of terrors. To be a pre-modern person, in Taylor’s account, is to be constantly in danger of being invaded or overcome by demons or fairies or nameless terrors of the dark — of being possessed and transformed, or spirited away and never returned to home and family. Keith Thomas’s magisterial Religion and the Decline of Magic (1971) specifies many of these dangers, along with the whole panoply of prayers, rites, amulets, potions, chants, spells, and the like, by which a person might seek protection from the otherwise irresistible. It is easy, then, to imagine why a person — or a whole culture — might, if it could, exchange this model of a self with highly permeable boundaries for one in which the self feels better protected, defended — impermeable, or nearly so.

The problem with this apparently straightforward transaction is that the porous self is open to the divine as well as to the demonic, while the buffered self is closed to both alike. Those who must guard against capture by fairies are necessarily and by the same token receptive to mystical experiences. The “showings” manifested to Julian of Norwich depend upon exceptional sensitivity, which is to say porosity — vulnerability to incursions of the supernatural. The portals of the self cannot be closed on one side only. But the achievement of a safely buffered personhood — closed off from both the divine and the demonic — is soon enough accompanied by a deeply felt change in the very cosmos. As C. S. Lewis notes in The Discarded Image (1964), the medieval person who found himself “looking up at a world lighted, warmed, and resonant with music” gives way to the modern person who perceives only emptiness and silence. Safety is purchased at the high price of isolation, as we see as early as Pascal, who famously wrote of the night sky, “Le silence éternel de ces espaces infinis m’effraie” (“The eternal silence of these infinite spaces frightens me”).

In these circumstances, one might expect people to ask whether so difficult and costly an exchange is in fact necessary. Might it not be possible to experience the benefits, while avoiding the costs, of both the porous and the buffered self? I want to argue here that it is precisely this desire that accounts for the rise to cultural prominence, in late modernity, of the artistic genre of fantasy. Fantasy — in books, films, television shows, and indeed in all imaginable media — is an instrument by which the late modern self strives to avail itself of the unpredictable excitements of the porous self while retaining its protective buffers. Fantasy, in most of its recent forms, may best be understood as a technologically enabled, and therefore safe, simulacrum of the pre-modern porous self.

Before pursuing my argument, I must make two clarifications. First, fantasy itself is not a recent development but rather an ancient form (though not under its current name). What we now call “fantasy” is something closer to “realism” in the pagan world, which is populated by many powers capable of acting upon “porous” human selves. In the pagan world, success in life is largely a matter of navigating safely among those powers, which are unpredictable, beyond good and evil, and often indifferent to human needs. (Such indifference means that they can help as well as hurt, but also that their assistance can never be relied upon.) In this environment, fantastic creatures are at the very least personifications or embodiments of powers genuinely believed to exist. The realism is not strict, in that the writers and readers of earlier times did not necessarily believe in the existence of precisely such creatures as were described in their stories — perhaps not Apollo or Artemis any more than Dante’s Geryon or Spenser’s Blatant Beast, though such questions are necessarily and notoriously vexed. But at the very least the pre-modern world is one in which powers like those hold sway and cannot be safely neglected; a world in which what we would call the fantastic is an intrinsic element of the real.

Second, some of the most celebrated practitioners of modern fantasy share with their pre-modern predecessors this belief that the fictional apparatus of fantasy is a relatively close approximation to the way things really are for human beings. J. R. R. Tolkien may not have believed in Sauron, but he surely believed that there are in human history people who sell themselves to the Enemy and find themselves as a result of that decision first empowered and then destroyed. And when, at the beginning of Lewis’s Perelandra (1944), the protagonist Ransom’s progress toward a friend’s house is impeded by invisible forces who fill him with fear, Lewis was describing the work of spirits whom he truly believed to exist, though under a slightly different description, just as he probably believed that some forms of scientistic rationalism are the product of demonic influence. In short, these writers sought to present their readers with an image of an enchanted world, of selves fully porous to supernatural forces. But because they did so in genres (fantasy, science fiction) known for the imaginative portrayal of the wholly nonexistent, readers confident in their buffered condition can be delighted by those stories without ever for a moment considering the possibility that the forces portrayed therein might correspond to something real. Indeed, the delight of the stories for such readers consists primarily in their perceived unreality.

Concentrating Spiritual Power

The Judeo-Christian world is alien to the pagan one primarily in its concentration — in most of its versions — of all power in the hands of an omnipotent God, from whom everything else has only derivative strength, virtue, and indeed existence. People who do not accept this account of things commonly perceive it as comforting, though a reading of the first chapter of the book of Job — with its bland explanation that the Satanic torments of a righteous man occur at the explicit permission of the Almighty — should be enough to complicate that view. On the other hand, people fully shaped by this account of the world, with its emphasis on explaining why there is something rather than nothing, will necessarily find paganism insufficiently curious about where the powers that afflict human lives come from. After all, many pagan mythologies have no creation stories, or thin, minor ones. The powers of the pagan world just are: to reckon with them — to appease or evade them, to thwart them with some greater power, to swear fidelity to them — is a full-time job; there can be little energy left over to speculate about their origins.

So radical monotheism, though it does not alter the condition of porosity, and does not disenchant the world, forcefully concentrates charisma. . .

Continue reading.

Written by Leisureguy

19 March 2021 at 12:13 pm

A loophole in Asimov’s Three Laws of Robotics

leave a comment »

A reminder: The Three Laws are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
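As a purely illustrative aside — not anything from Asimov’s text — the precedence among the Laws can be read as an ordered constraint check: judge a candidate action by the most binding law it breaks, and prefer actions that break only lower-priority laws. The field names and example below are invented; the actual loopholes live in how slippery predicates like “harm” and “human being” get decided, which this sketch deliberately leaves out.

```python
# A minimal, invented encoding of the Three Laws' priority ordering.
# Every field and function name here is made up for illustration.

from dataclasses import dataclass


@dataclass
class Candidate:
    description: str
    harms_human: bool      # First Law: injury, or harm allowed through inaction
    disobeys_order: bool   # Second Law: conflicts with an order from a human
    destroys_robot: bool   # Third Law: fails to protect the robot's own existence


def first_law_broken(c: Candidate) -> int:
    """Number of the most binding law the action breaks (1-3), or 4 if it breaks none."""
    if c.harms_human:
        return 1
    if c.disobeys_order:
        return 2
    if c.destroys_robot:
        return 3
    return 4


def choose(candidates: list[Candidate]) -> Candidate:
    """Prefer the candidate whose worst violation is of the least important law."""
    return max(candidates, key=first_law_broken)


if __name__ == "__main__":
    options = [
        Candidate("obey the order, even though it wrecks the robot",
                  harms_human=False, disobeys_order=False, destroys_robot=True),
        Candidate("refuse the order and stay intact",
                  harms_human=False, disobeys_order=True, destroys_robot=False),
    ]
    # The Second Law outranks the Third, so the robot obeys at its own expense.
    print(choose(options).description)
```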

Written by Leisureguy

9 March 2021 at 4:52 pm

The Future Encyclopedia of Luddism: An alternative economic and industrial history and future

leave a comment »

Miriam A. Cherry writes in The MIT Press Reader:

In common parlance, the term “Luddite” means someone who is anti-technology, or maybe, just not adept at using technology. Historically, however, the Luddite movement was a reaction born of industrial accidents and dangerous machines, poor working conditions, and the fact that there were no unions to represent worker interests during England’s initial period of industrialization. The Luddites did not hate technology; they only channeled their anger toward machine-breaking because it had nowhere else to go.

What you are about to read is an alternate history (an encyclopedia entry from circa 2500) that depends on the critical assumption that the Luddites succeeded in their industrial campaign in the 1810s. Instead of techno-determinism (that the development of technology is inevitable, and that society will alter and adjust to it) the Encyclopedia entry notes that the Luddites, in their success, formulated a different, yet productive, relationship between society and the development of technology.


Originating in Great Britain during the Industrial Revolution in the first two decades of the 19th century, Luddism was a movement arising as a response to poor working conditions in nascent textile manufacturing businesses. The Luddite movement was a precursor to the development of the economic philosophy known as Sustainomics, which promotes technological development that adheres to principles of Utilitarianism and Human Flourishing Doctrines. Sustainomics began its rise in the early part of the 20th century and has remained the dominant economic system of the Hemispheric Union for the past 600 years.

Beginning in the early 19th century, foreign wars coupled with high unemployment and food shortages caused widespread desperation among the populace. Many seeking “earned wages” went to work in rudimentary industrial factories. With no safety standards and shoddy medical care, industrial accidents were quite common.

As corn became increasingly scarce in the winter of 1810 to 1811, groups of workers who could not pay for food and shelter became even more desperate. Under the Combination Act of 1799, Parliament had outlawed unions. It was amidst these stark conditions that the Luddites began to organize in secret. The Luddite Movement was open to both women workers and child laborers. Indeed, women and children comprised roughly 40 percent of the Luddite membership.

Leadership of General Ned Ludd and Origin of the Term ‘Luddite’

Many stories and legends have grown up around the esteemed figure of General Ned Ludd, named by “Passage Zine” as one of the “Top 10 Most Influential People of the Last Thousand Years.” Hailed as a visionary even in his own time, the Luddite Councils are named in his honor. The complete story of Ludd’s life and times is told in “The Epic Saga of General Ludd.” While stylized, the Saga has largely been corroborated with the archaeological records.

As an orphan, young Ned grew up in the late 1790s in a “workhouse,” a facility that existed to make people “earn their keep,” to use the antiquated terminology and backward thinking of the time. Ned was trained in the textile trade as a boy. Contemporary sources recount 15-year-old Ned being beaten when he refused to work at a machine that had, only moments beforehand, severed one of his co-worker’s arms. After several days of docked wages, Ned, still nursing bruises from his beating, was told to go back to work on that same dangerous device. As every schoolchild learns in reading “The Luddite Primer,” young Ned seized a hammer and smashed the hazardous machine. Within a fortnight Ned had fled the factory and joined the British army.

Although he had only a brief stint in the military, young Ned was a quick student of battlefield strategy. Returning to Huddersfield just a few years later, his supporters styled him “General Ludd.” As the Movement increased in popularity over the summer of 1811, a large crowd gathered at Huddersfield. By the time the Movement began in earnest, Ned Ludd’s supporters numbered over 100,000. Luddite supporters were characterized by their sense of utmost loyalty and solidarity to their brothers and sisters in the Movement. Despite the large number of supporters and the completely rudimentary communication available at the time, the Movement, its leaders and its political and social aims remained a well-guarded secret to factory owners and the government alike.

Takeover of Factories

Beginning in November 1811, General Ludd and his right-hand man, Lt George Mellor, surrounded, took and held factories throughout the textile district of Nottinghamshire. Their first victory, at Cartwrights Mill at Rawfolds, is now the site of the Mellor Memorial Museum, which contains many of the original documents so central to the Luddite Movement. Much of the success of the early campaigns was largely due to the fact that the Luddites were chiefly a peaceful movement. Contemporaneous newspaper accounts described individual events as the “occupation” of factories. This characterization has since been disputed by researchers, and definitive archaeological studies have completely repudiated these polemic accounts as wholly fabricated. . . .

Continue reading.

Written by Leisureguy

21 January 2021 at 4:58 pm

The Mystery of “The Jet-Propelled Couch”

leave a comment »

An absolutely fascinating column by Mark Frauenfelder:

When I was 15 I read “The Jet-Propelled Couch,” the true story of a psychiatric patient who believed he could teleport to a faraway planet. I’ve been fascinated ever since.

I learned about it from the Vaughn Bodē Index (1976). Bodē (1941-1978) was an underground cartoonist best known for Cheech Wizard. In an interview in the Index, Bodē lamented the fact that the patient in “The Jet-Propelled Couch” had been “cured” of his delusion. I was intrigued and wanted to learn more about this patient, so I scoured used bookstores in Boulder, Colorado until I found a copy of The Fifty-Minute Hour and Other True Psychoanalytic Tales (1955), by psychiatrist Robert M. Lindner (best known for his 1944 book Rebel Without A Cause: The Hypnoanalysis Of A Criminal Psychopath, which was the inspiration for the James Dean movie of the same name). The Fifty-Minute Hour contained five fascinating case stories of Lindner’s patients.

The most famous of the five cases was that of “Kirk Allen,” who Lindner described in the book’s final chapter, “The Jet-Propelled Couch.” According to Linder, Allen (a pseudonym) was one of Lindner’s patients, born in 1918, who was a physicist at “X Reservation,” a “government installation in the southwest” (probably Los Alamos National Laboratory). Allen had made important contributions during World War II (probably as part of the Manhattan Project). After Allen’s superiors observed him chronically spacing out on the job while muttering about his travels to other planets, they sent him to Lindner’s Baltimore office for long-term treatment.

Lindner described Allen as friendly and polite, and seemingly free of mental illness. But as Lindner got to know Allen, he learned that his patient had a traumatic childhood that affected him profoundly. Allen had grown up on an island in the tropical Pacific where he felt isolated from other children. His mother and father (a high-ranking member in the U.S. Military) paid little attention to him. “Throughout childhood and early adolescence,” wrote Lindner, “he was haunted by the difference between himself and his companions, a difference not solely of skin color but of social heritage and the innumerable subtleties of life.” To make matters worse, Allen’s governess sexually abused him for many months when he was eleven years old, which added further trauma.

While living on the island, Allen came across a series of science fiction/fantasy novels in the library that starred a protagonist who shared his name. The books provided an escape for his unhappy life. Allen read and re-read the novels.

“As I read about the adventures of Kirk Allen in these books,” Allen told Lindner, “the conviction began to grow on me that the stories were not only true to the very last detail, but that they were about me.”

He began fantasizing about additional adventures starring his namesake. His reveries were so rich in sensory detail that Allen came to the conclusion that his imagined escapades weren’t fiction — they were actually taking place in the future and he was somehow tapping into them. The fantasies grew and continued for years. He eventually discovered that he could leave his earthly body and travel forward in time to live as the heroic Kirk Allen on a faraway planet. He also learned he could spend a year or more as the spacefaring Allen and return to Earth, where only a few minutes had passed.

Here’s how he described the experience to Lindner: . . .

Continue reading. There’s much more.

I perhaps should note that Cordwainer Smith was a very fine science-fiction writer — one of my favorite authors.

Written by Leisureguy

20 September 2020 at 7:06 pm

“The End of History” — a brief time-travel movie

leave a comment »

Written by Leisureguy

28 February 2020 at 4:59 pm

Time-travel types

leave a comment »

Written by Leisureguy

24 January 2020 at 3:18 pm

Posted in Science fiction, Video

Quite an amazing science-fiction movie: “Alita: Battle Angel”

leave a comment »

James Cameron has outdone Avatar with Alita: Battle Angel, though in this case the director is Robert Rodriguez. Avatar had a lot of texture and fine detail, but it doesn’t hold a candle to the even more outré Alita, based upon (what else?) a graphic novel.

Stunning—and gripping.

Written by Leisureguy

9 January 2020 at 9:41 pm

Salman Rushdie on Kurt Vonnegut and “Slaughterhouse-Five”

leave a comment »

From the New Yorker:

I first read “Slaughterhouse-Five” in 1972, three years after it was published and three years before I published my own first novel. I was twenty-five years old. 1972 was the year of inching slowly toward the Paris Peace Accords, which were supposed to end the war in Vietnam, though the final, ignominious American withdrawal—the helicopters airlifting people from the roof of the American Embassy in Saigon—would not take place until three years later, at which point, by way of a small footnote to history, I had become a published writer.

I mention Vietnam because, although “Slaughterhouse-Five” is a book about the Second World War, Vietnam is also a presence in its pages, and people’s feelings about Vietnam have a good deal to do with the novel’s huge success. Eight years earlier, in 1961, Joseph Heller had published “Catch-22” and President John F. Kennedy began the escalation of the United States’ involvement in the conflict in Vietnam. “Catch-22,” like “Slaughterhouse-Five,” was a novel about the Second World War that caught the imagination of readers who were thinking a lot about another war. In those days, I was living in Britain, which did not send soldiers to fight in Indochina but whose government did support the American war effort, and so, when I was at university, and afterward, I, too, was involved with thinking about and protesting against that war. I did not read “Catch-22” in 1961, because I was only fourteen years old. As a matter of fact, I read both “Slaughterhouse-Five” and “Catch-22” in the same year, a decade later, and the two books together had a great effect on my young mind.

It hadn’t occurred to me until I read them that antiwar novels could be funny as well as serious. “Catch-22” is crazy funny, slapstick funny. It sees war as insane and the desire to escape combat as the only sane position. Its tone of voice is deadpan farce. “Slaughterhouse-Five” is different. There is much comedy in it, as there was in everything Kurt Vonnegut wrote, but it does not see war as farcical. It sees war as a tragedy so great that perhaps only the mask of comedy allows one to look it in the eye. Vonnegut is a sad-faced comedian. If Heller was Charlie Chaplin, then Vonnegut was Buster Keaton. His predominant tone of voice is melancholy, the tone of voice of a man who has been present for a great horror and lived to tell the tale. The two books do, however, have this in common: they are both portraits of a world that has lost its mind, in which children are sent out to do men’s work and die.

As a prisoner of war, age twenty-two, which is to say three years younger than I was when I read his story, Vonnegut was in the famously beautiful city of Dresden, locked up with other Americans in Schlachthof-Fünf, where pigs had been slaughtered before the war, and was therefore an accidental witness to one of the greatest slaughters of human beings in history, the firebombing of Dresden, in February of 1945, which flattened the whole city and killed almost everyone in it.

So it goes.

I had not remembered, until I reread “Slaughterhouse-Five,” that that famous phrase “So it goes” is used only and always as a comment on death. Sometimes a phrase from a novel or a play or a film can catch the imagination so powerfully—even when misquoted—that it lifts off from the page and acquires an independent life of its own. “Come up and see me sometime” and “Play it again, Sam” are misquotations of this type. Something of this sort has also happened to the phrase “So it goes.” The trouble is that when this kind of liftoff happens to a phrase its original context is lost. I suspect that many people who have not read Vonnegut are familiar with the phrase, but they, and also, I suspect, many people who have read Vonnegut, think of it as a kind of resigned commentary on life. Life rarely turns out in the way the living hope for, and “So it goes” has become one of the ways in which we verbally shrug our shoulders and accept what life gives us. But that is not its purpose in “Slaughterhouse-Five.” “So it goes” is not a way of accepting life but, rather, of facing death. It occurs in the text almost every single time someone dies, and only when death is evoked.

It is also deeply ironic. Beneath the apparent resignation is a sadness for which there are no words. This is the manner of the entire novel, and it has led to the novel being, in many cases, misunderstood. I am not suggesting that “Slaughterhouse-Five” has been poorly treated. Its reception was largely positive, it has sold an enormous number of copies, the Modern Library ranked it eighteenth on its list of the hundred best English-language novels of the twentieth century, and it is also on a similar list issued by Time magazine. However, there are those who have accused it of the sin of “quietism,” of a resigned acceptance, even, according to Anthony Burgess, an “evasion” of the worst things in the world. One of the reasons for this is the phrase “So it goes,” and it is clear to me from these critiques that the British novelist Julian Barnes was right when he wrote in his book “A History of the World in 10 ½ Chapters” that “Irony may be defined as what people miss.”

Kurt Vonnegut is a deeply ironic writer who has sometimes been read as if he were not. The misreading goes beyond “So it goes,” and has a good deal to do with the inhabitants of the planet of Tralfamadore. As it happens, I am a great fan of Tralfamadorians, who look like toilet plungers, beginning with their mechanical emissary Salo, who, in an earlier Vonnegut novel, “The Sirens of Titan,” was marooned on Titan, a moon of the planet Saturn, needing a replacement part for his spaceship. And now comes the classic Vonnegut subject of free will, expressed as a comic science-fiction device. We learn in “The Sirens of Titan” that human history has been manipulated by Tralfamadorians to persuade the human race to build large messages to Salo, and to get our primitive ancestors to develop a civilization capable of doing so. Stonehenge and the Great Wall of China were some of the messages from Tralfamadore. Stonehenge read, “Replacement part being rushed with all possible speed.” The Great Wall of China said, “Be patient. We haven’t forgotten about you.” The Kremlin meant, “You will be on your way before you know it.” And the Palace of the League of Nations, in Geneva, meant, “Pack up your things and be ready to leave on short notice.”

Tralfamadorians, we learn in “Slaughterhouse-Five,” perceive time differently. They see that the past, present, and future all exist simultaneously and forever and are simply there, fixed, eternally. When the main character of the novel, Billy Pilgrim, who is kidnapped and taken to Tralfamadore, “comes unstuck in time” and begins to experience chronology the way Tralfamadorians do, he understands why his captors find comical the notion of free will.

It seems obvious, at least to this reader, that there is a mischievous ironic intelligence at work here, that there is no reason for us to assume that the rejection of free will by aliens resembling toilet plungers is a rejection also made by their creator. It is perfectly possible, perhaps even sensible, to read Billy Pilgrim’s entire Tralfamadorian experience as a fantastic, traumatic disorder brought about by his wartime experiences—as “not real.” Vonnegut leaves that question open, as a good writer should. That openness is the space in which the reader is allowed to make up his or her own mind.

To read Vonnegut is to know that he was repeatedly drawn to the investigation of free will, of what it might be and how it might or might not function, and that he came at the subject from many different angles. Many of his ruminations were presented in the form of works by his fictional alter ego, Kilgore Trout.

I love Kilgore Trout as deeply as I love the inhabitants of the planet Tralfamadore. I even own a copy of the novel “Venus on the Half-Shell,” in which the writer Philip José Farmer took a Trout story written by Vonnegut and expanded it to novel length. “Venus on the Half-Shell” is about the accidental destruction of the earth by incompetent universal bureaucrats, and the attempt by the sole surviving human being to seek answers to the so-called Ultimate Question. In this way, Kilgore Trout inspired Douglas Adams’s celebrated book “The Hitchhiker’s Guide to the Galaxy,” in which, you may recall, the earth was demolished by Vogons to make room for an interstellar bypass, and the sole surviving man, Arthur Dent, went in search of answers. Finally, the supercomputer Deep Thought revealed that the answer to life, the universe, and everything was, and is, “42.” The problem remains: What is the question?

In Vonnegut’s novel “Breakfast of Champions,” we learn about another Kilgore Trout story, “Now It Can Be Told,” written in the form of a letter from the Creator of the Universe addressed to the reader of the story. The Creator explains that the whole of life itself has been a long experiment. The nature of the experiment was this: to introduce into an otherwise wholly deterministic universe one single person who is granted free will, to see what use he makes of it, in a reality in which every other living thing was, is, and always will be a programmed machine. Everyone in the whole of history has been a robot, and the single individual with free will’s mother and father and everyone he knows are also robots, and so, by the way, is Sammy Davis, Jr. The individual with free will, God explains, is you, the reader of the story, and so God would like to offer you an apology for any discomfort you have endured. The end.

It’s worth adding one further detail. Throughout the many works by Kurt Vonnegut in which Kilgore Trout appears, he is consistently described as the worst writer in the world, whose books are utter failures, and who is completely and even contemptuously ignored. We are asked to see him simultaneously as a genius and a fool. This is not accidental. His creator, Kurt Vonnegut, was at once the most intellectual of playful fantasists and the most playfully fantastic of intellectuals. He had a horror of people who took things too seriously and was simultaneously obsessed with the consideration of the most serious things, things both philosophical (like free will) and lethal (like the firebombing of Dresden). This is the paradox out of which his dark ironies grow. Nobody who futzed around so often and in so many ways with the idea of free will, or who cared so profoundly about the dead, could be described as a fatalist, or a quietist, or resigned. His books argue about ideas of freedom and mourn the dead, from their first pages to their last.

Around the same time that I first read “Slaughterhouse-Five” and “Catch-22,” I also read another novel about a similar subject. That novel was “War and Peace,” which is longer than Heller’s book and Vonnegut’s book combined and isn’t funny at all. On that first reading of Tolstoy’s masterpiece, my twenty-five-year-old self thought, in summary: Loved peace, hated war. I was absorbed by the stories of Natasha Rostov, Prince Andrei, and Pierre Bezukhov, and found the extremely long descriptions of fighting, especially of the Battle of Borodino, pretty boring, to be frank. When I reread “War and Peace” perhaps thirty years later, I discovered that I felt exactly the opposite. The description of men at war, I thought, had never been bettered, and the greatness of the novel was to be found in those descriptions, and not in the somewhat more conventional stories of the leading characters. Loved war, hated peace.

Rereading “Slaughterhouse-Five,” I also found my valuation of the text changing. That younger self was strongly drawn to fantasy and science fiction, and sought out magazines called things like Galaxy and Astounding and Amazing, and was drawn to the work not only of the crossover giants, like Kurt Vonnegut and Ray Bradbury and Isaac Asimov and Ursula K. Le Guin and Arthur C. Clarke, but also to Mary Shelley and Virginia Woolf, whose “Frankenstein” and “Orlando,” respectively, are honorary members of the canon, as well as to the hardcore genre masters, such as James Blish, Frederik Pohl, C. M. Kornbluth, Clifford D. Simak, Katherine MacLean, Zenna Henderson, and L. Sprague de Camp. That young man, faced with Vonnegut’s masterpiece, responded most strongly to the sci-fi aspects of the book. To read it again has been to discover the humane beauty of the non-sci-fi parts, which make up most of the book.

The truth is that “Slaughterhouse-Five” is a great realist novel. Its first sentence is . . .

Continue reading.

Written by Leisureguy

13 June 2019 at 6:23 pm

Posted in Art, Books, Science fiction

The Chinese threat that an aircraft carrier can’t stop

leave a comment »

UPDATE below.

If you like techno-science-fiction, you surely must read Daniel Suarez’s novel Kill Decision, which pretty much lays out the issues underlying David Ignatius’s report in the Washington Post:

Will the Pentagon, with its 30-year planning cycle for building ships, still be launching aircraft carriers in 2048 — even though they’re highly vulnerable to attack today?

That’s an example of the military-modernization questions that kept nagging participants at last weekend’s gathering of the Aspen Strategy Group, which annually brings together top-level current and former national security officials, along with a few journalists, to discuss defense and foreign policy. This year’s focus was on “Maintaining America’s Edge” in the dawning era of high-tech combat, and the big takeaway was this: The future of warfare is now, and China is poised to dominate it.

Speakers at the conference described a new generation of combat systems, powered by artificial intelligence, cyberweapons and robots that can operate on land, sea and air. But America is still largely wedded to legacy weapons of the past — superbly engineered (but super-expensive) aircraft carriers, bombers, fighter jets and submarines.

“We have a small number of exquisite, expensive, manned, hard-to-replace systems that would have been familiar to Dwight D. Eisenhower. They are being overtaken by advanced technology,” argued Christian Brose, staff director of the Senate Armed Services Committee. Instead, he said, the Pentagon needs a large number of inexpensive, unmanned, expendable, autonomous systems that can survive in the new electronic battlespace and overwhelm any potential adversary.

“It is not that we lack money. It is that we are playing a losing game,” Brose contended in a paper presented to the group. “Our competitors are now using advanced technologies to erode our military edge. This situation is becoming increasingly dire.”

Future needs are being drowned out by past practices, because of what Brose’s boss, Armed Services Committee Chairman John McCain (R-Ariz.), has called the “military-industrial-congressional complex.” Brose calculates that in the Pentagon’s initial request for $74 billion in new defense spending in fiscal 2019, only 0.006 percent was targeted for science and technology. The National Science Foundation estimates that in fiscal 2015, only 18 percent of the Pentagon’s research and development budget went to basic, applied and advanced research. Major systems claimed 81 percent.

Even when the Pentagon tries to push innovation, it often stumbles. When Ashton B. Carter was defense secretary under President Barack Obama, he created the Defense Innovation Unit Experimental, or DIUx, with offices in Silicon Valley, Boston and Austin. That operation thrived initially, negotiating 60 defense contracts with start-ups. The program has slowed under the Trump administration, despite support from Defense Secretary Jim Mattis, because it lacks funds and bureaucratic support, warned Christopher Kirchhoff, a former DIUx partner. If Mattis can appoint a strong new DIUx leader soon, maybe he can revive this innovation hub.

The biggest technological challenge discussed here was artificial intelligence. In a few years, these systems have taught themselves to play complex games such as chess and Go better than humans, and to recognize voices and objects better, too. And soon, they will be driving the weapons of combat.

China appears determined to seize this AI “high ground” of future conflict. For the past two years, Chinese companies have won an AI competition for detecting objects. The Chinese are happy for the United States to keep building carriers and bombers, so long as China deploys the more advanced technologies that can disable those systems.

Richard Danzig, a former Navy secretary, published a brilliant paper discussed at the conference warning that as AI systems dominate warfare, they will “introduce important new risks of loss of control.” Humans must be “maximally thoughtful and creative” during design (and plan for failure), because these AI-driven weapons will have accidents and unintended consequences. Wise policymakers must avoid a “Dr. Strangelove” world of unsafe killer robots and doomsday machines.

America’s vulnerability to information warfare was a special topic of concern. One participant recalled a conversation several years ago with a Russian general who taunted him: “You have a cybercommand but no information operations. Don’t you know that information operations are how you take countries down?” . . .

Continue reading.

UPDATE: From Rob May’s InsideAI newsletter:

— Commentary —

This week’s commentary is brought to you by Evanna Hu, CEO of Omelas. (Disclosure: I’m an investor.) After I wrote last week about China’s AI policy and the frameworks for evaluating it, Evanna responded with her point of view. As an expert in both AI and international affairs, she has a good perspective, so I asked her if she would be willing to share it.

At the Future of War Conference in Washington, DC, this past April, Undersecretary of Defense for Research and Engineering Michael Griffin bluntly acknowledged that China is already winning the AI war. The data backs up the claim: China has filed 8,000 AI-related patents in the same period in which the US filed fewer than 1,000. In 2013, the two countries were comparable. Furthermore, unlike adversarial actors such as China and Russia, and allies including France, the UK, and the UAE, the US still does not have a comprehensive national AI strategy. Though the Pentagon has established a Joint AI Center and allocated more money toward the adoption of AI in the National Defense Authorization Act for FY2019, the Center and the DoD AI strategy are still in their early stages.

Simultaneously, the Committee on Foreign Investment in the United States (CFIUS), a government task force that earlier this year blocked the entrance of two Chinese conglomerates, Huawei and ZTE, into the US market, has newly approved expanded powers. Under the new measures passed by Congress a week ago, the government will regulate funding from foreign origins, with China specifically targeted, in US companies ranging from large corporations all the way down to seed-stage tech startups. This means that, in theory, CFIUS can stop startup X, which is working on cutting-edge AI technology, from receiving funding from Tencent, Alibaba, or any other source of the $2.4 billion that Chinese firms poured into Silicon Valley from January to May of this year. On the flip side, if a company does decide to take foreign funding above a certain percentage, it will not be able to receive grants or contracts from the US government, including DARPA, SBIR, In-Q-Tel, etc. These measures not only narrow the scope of available funding but also radically shrink the addressable markets for US emerging-technology companies.

If we see the new CFIUS measures as the “stick” in the “carrot-and-stick” approach, it is critical that the “carrot” be developed to help domestic AI companies thrive. While there are already conversations at the Pentagon and the White House about the best approach, the conversation is neither synchronized nor entrepreneur/company-centric. Domestic AI companies want two things: a) increased revenue from sales and b) access to funding. Whatever form the carrot takes, it needs to address these two main concerns. That is the only way the US can regain its competitive edge in AI and maintain its number one position in emerging technology.

Evanna Hu is the CEO of Omelas, which uses ML/AI to quantify and assess the online security threat environment. She is also an International Security Fellow at New America, a Washington, DC think tank.

Written by Leisureguy

12 August 2018 at 9:11 am

Charlie Stross also looks at what Brexit, once accomplished, might be like

leave a comment »

Charlie Stross writes in his blog, Charlie’s Diary:

We are now 25 months on from the Brexit referendum. Theresa May filed notice of departure from the EU under Article 50 of the Lisbon Treaty on 29 March, 2017: on 29 March, 2019 (in 8 months’ time—approximately 240 days) the UK, assuming nothing changes, will be out of the EU.

In the intervening time, the UK has undergone a disastrously divisive general election—disastrous because, in the middle of an unprecedented (and wholly avoidable and artificial) national crisis, it returned to power a government so weakened that it depends on an extreme right-wing sectarian religious party to maintain its majority. The DUP (Democratic Unionist Party) stands for Union with the United Kingdom, and hostility towards Ireland (in the form of the Irish Republic); they will veto any Brexit settlement that imposes a customs border between Northern Ireland and Great Britain. However, this implies that a customs border must exist between Northern Ireland and Ireland, and the two economies are so entangled that this is impractical. (The border between north and south cuts across roads, railways … and also through farms, living rooms, and business premises.) Creating a hard border in Ireland is anathema to the government of Ireland, which will therefore veto any Brexit agreement with the UK that posits one. (It would also violate the Good Friday Agreement, but hey, nobody in Westminster today cares about that.)

The Electoral Commission has uncovered evidence of electoral spending irregularities in the Leave.EU and Vote Leave campaigns serious enough to justify criminal investigation and possible prosecution; involvement by Cambridge Analytica is pretty much proven, and meddling by Steve Bannon and Robert Mercer has also been alleged in testimony before the US Senate judiciary committee. There’s also an alleged Russian connection, with Arron Banks (the main financial backer of Brexit) having been offered too-good-to-be-true investment opportunities in a Russian gold mine (according to The Observer newspaper).

But not to worry, the will of the people has spoken! (Although it’s actually the will of these people—a mixed bunch of right-wing Atlanticists, hedge fund managers, warmed-over neo-Nazis, and disaster capitalists. Never mind, I’m certain they have only our best interests at heart.)

For added fun and optimism, back in the summer of 2016 it looked reasonably likely that over the next few years we would see business continue as usual, on a global scale. This was before the election of Donald J. Trump as president of the USA. Trump doesn’t understand macroeconomics—he’s convinced that trade is a zero-sum game, that for every winner there must be a loser, and that trade tariffs and punitive sanctions are good. He’s launched attacks on the World Trade Organization (as well as NATO) and seems intent on rolling back the past 75 years of post-WW2, post-New Deal global free trade. The prospects for a favourable post-Brexit trade deal with the United States went out the window on January 20th, 2017; Trump perceives isolation as weakness, and weakness in a negotiating partner as an opportunity to screw them. (So much for the Conservative Atlanticists and the Special Relationship.)

The EU is the UK’s largest trading partner, with roughly 44% of all our foreign trade going through our EU siblings. This includes food—the cramped, densely populated UK hasn’t been self-sufficient in food since the 19th century, and we import more than 50% of what we eat.

A customs union with the EU has been ruled out unless the UK agrees to cooperate with certain EU “red line” requirements—essentially the basis for continuing free trade: for reasons too preposterous and stupid to go into this is unacceptable to the Conservative party even when national food security is in jeopardy. In event of a no-deal Brexit, Operation Stack will become permanent, causing gridlock on motorway routes approaching Channel ports. Perishable goods and foodstuffs will be caught up in unpredictable protracted delays, resulting in dairy produce (including infant formula) becoming ‘very scarce’. Large manufacturing concerns with cross-border supply chains such as BMW, Airbus, and Toyota are threatening to shut down production in the UK in event of a hard Brexit; Amazon’s UK manager warns of civil unrest in event of a no-deal Brexit, and in event of a no-deal that doesn’t include services (as well as goods) it’s hard to see how the Amazon supply chain can continue to function in the UK.

(Note: Online sales account for 18% of all UK retail, and Amazon is the proverbial 500lb gorilla in this sector. UK customers who purchase from Amazon.co.uk are, however, doing business with Amazon SarL in Luxembourg, which then subcontracts fulfillment/delivery to a different Amazon company in the UK—Amazon SarL takes advantage of one of the lowest corporate tax regimes in the EU. This is obviously not a sustainable model in event of a hard Brexit, and with shipping delays likely as well as contractual headaches, I think there’s a very good chance of Brexit shutting down Amazon.co.uk and, thereby, close to 20% of the British retail distribution system.)

Current warnings are that a no-deal Brexit would see trade at the port of Dover collapse on day one, cutting the UK off from the continent; supermarkets in Scotland will run out of food within a couple of days, and hospitals will run out of medicines within a couple of weeks. After two weeks we’d be running out of fuel as well.

Note that this warning comes from the civil service, not anti-Brexit campaigners, and is a medium-bad scenario—the existence of an “Armageddon scenario” has been mooted but its contents not disclosed.

In the past month, the Health Secretary has admitted that the government is making plans to stockpile vital blood products and medicines in case of a no-deal Brexit, and the Brexit secretary is allegedly making plans to ensure there are “adequate food supplies” to cover a no-deal exit.

But before you say “well, then it’s going to be all right, we’ll just go back to 1939-54 era food ration books and make do and mend”, we need to factor in not only Donald Trump’s latest bloviations, but Global Climate Change! Europe is facing one of the most intense regional droughts in living memory this summer, with an ongoing crisis-level heat wave. Parts of the UK have had the least rainfall in July since 1969, with a severe heat wave in progress; Greece is on fire; Sweden is having a wildfire problem inside the Arctic circle this summer.

A Hard Brexit, on its own, would be a very dubious but probably long-term survivable scenario, with the UK economy taking a hit not much worse than the 10% downsizing Margaret Thatcher inflicted on it in 1979-80. But a hard Brexit, coinciding with the worst harvest failures in decades, ongoing climate destabilization, a fisheries collapse, and a global trade war being started by the Tangerine Shitgibbon in the White House is … well, I’m not optimistic.

Right now, the British cabinet seems to be locked in a suicide pact with itself. Theresa May is too weak to beat back the cabal of unscrupulous opportunists within her own party who want the worst to happen—the disaster capitalists, crooked market short-sellers, and swivel-eyed imperialist revenants of the European Research Group. Any replacement Conservative PM would face exactly the same impedance mismatch between reality and his or her back bench MPs. On the other side of the house, Jeremy Corbyn’s dislike for the EU as a capitalist entity has combined with his fear of alienating the minority of “legitimate concerns” racist voters in Labour’s base so that he’s unwilling or unable to adopt an anti-Brexit stance. Brexit cuts across traditional party lines; it’s a political Outside Context Problem that has effectively paralysed the British government in a time of crisis.

So I’m not optimistic that a no-deal Brexit will be avoided.

What happens next?

On a micro scale: I’m stockpiling enough essential medicines to keep me alive for six months, and will in due course try and stockpile enough food for a couple of weeks. I’m also going to try and move as much of my savings into other currencies as possible, preferably in financial institutions accessible from but outside the UK. (I expect a Sterling crisis to follow promptly in event of NDB. We saw Sterling drop 10% the day after the referendum—and certain people made a fuck-ton of money by shorting the stock market; I expect it to go into free fall if our trade with the EU is suddenly guillotined.)

On a macro scale:

Airports and the main container freight ports for goods entering the UK will shut down on day 1. There will be panic buying. I expect widespread rioting throughout the UK and sectarian violence in Northern Ireland (contra public received wisdom, NI is never quiet and this summer has been bad.)

A currency crisis means that goods (notably food) entering the UK will spike in price, even without punitive trade tariffs.

There will be mass lay-offs at manufacturing plants that have cross border supply chains, which means most of them.

You might think that as an author I’d be immune, but you’d be wrong: although paper editions of my UK books are printed in the UK, you can bet that some elements of the wood pulp and the ink that goes on it and the glue that binds them are imported. About 90% of my UK ebook sales are made as (contractually speaking) services via Amazon.co.uk (see above), the fuel that powers the trucks that ship the product to the bookstores is imported, my publishers (Orbit and Tor) are subsidiaries of EU parent companies (Hachette and Holtzbrinck), and anyway, people are going to be spending money on vital necessities during the aftermath, not luxuries.

(Luckily for me, many of my sales come from other EU territories—in translation—and from the USA. Unfortunately, getting paid in foreign currency may become … problematic, for a while, as Brexit jeopardizes both currency exchange and the UK retail banking sector’s ability to exchange funds overseas.)

After week 1 I expect the UK to revert to its state during the worst of the 1970s. I just about remember the Three Day Week, rolling power blackouts, and, more clearly, the mass redundancies of 1979, when unemployment tripled in roughly 6 months. Yes, it’s going to get that bad. But then the situation will continue to deteriorate. With roughly 20% of the retail sector shut down (Amazon) and probably another 50% of the retail sector suffering severe supply chain difficulties (shop buyers having difficulty sourcing imported products that are held up in the queues), food availability will rapidly become patchy. Local crops, with no prospect of reaching EU markets, will be left to rot in the fields as the agricultural sector collapses (see concluding remarks, section 5.6).

Note that during her time as Home Secretary, Theresa May presided over 30% cuts in police numbers. During the recent state visit by Donald Trump, virtually every police force in the UK had to cancel all leave just to maintain cover for those officers temporarily assigned to POTUS’ security detail (the policing operation was on a scale comparable to the 2011 summer riots … when there were many, many more officers available). Also, police and emergency service workers will be trying to source food, medicines, and the necessities of life for themselves and their own families: there may be significant absenteeism from critical posts just as everything comes to a head.

I expect the government will collapse within 1-4 weeks. There will be a state of emergency, managed under the Civil Contingencies Act (2004) (which replaced earlier civil defense emergency legislation). Emergency airlifts of medicines, food, and fuel may take place—but it’s hard to see the current US administration lending a hand.

Most likely the crisis will end with the UK crashing back into the EU, or at least into Customs Union and statutory convergence—but on EU maximalist terms with none of the opt-outs negotiated by previous British governments from Thatcher onwards. The negotiating position will most likely resemble that of Greece in 2011-2015, i.e. a vastly weaker supplicant in a state of crisis and near-collapse, and the British economy will take a generation to recover—if it ever manages to.

(This is, by the way, not the worst scenario I can envisage. The worst case is that the catastrophic collapse of the world’s sixth largest trading economy, combined with a POTUS whose understanding of economics is approximately as deep as that of Louis XVI, will lead to a global financial crisis on the scale of 2007-08—but without leadership as credible as, say, George W. Bush and/or Gordon Brown to pull our collective nuts out of the fire. In which case we’re looking at a global banking collapse, widespread famine due to those crop shortages, and a wave of revolutions the like of which the planet hasn’t seen since 1917-18. But hopefully that won’t happen, right? Because only a maniac would want to burn everything down in order to provide elbow room for a new white supremacist ethnostate world order. Oops, that would be Steve Bannon.)

Anyway: the most likely historical legacy of a no-deal Brexit will be . . .

Continue reading.

Written by Leisureguy

29 July 2018 at 9:07 am
