Later On

A blog written for those whose interests more or less match mine.

Archive for the ‘Science fiction’ Category

The seven stages of denial (that a robot will take your job)


Good post by Jason Kottke, which begins:

From an excerpt of Kevin Kelly’s recent book, The Inevitable, a list of the Seven Stages of Robot Replacement:

1. A robot/computer cannot possibly do the tasks I do.

2. [Later.] OK, it can do a lot of those tasks, but it can’t do everything I do.

3. [Later.] OK, it can do everything I do, except it needs me when it breaks down, which is often.

4. [Later.] OK, it operates flawlessly on routine stuff, but I need to train it for new tasks. . . .

Continue reading.

Written by LeisureGuy

4 January 2017 at 8:16 pm

2017 will be a big year for AI


As readers of science fiction know, once AI really takes hold, the advances come quite rapidly, which has led to many good novels about the Singularity. Wired has an interesting article on the state of AI, unfortunately behind a paywall (at least it was for me; I read it after subscribing). One interesting point from the article:

This summer, after building an AI that cracked the game of Go, Demis Hassabis and his Google DeepMind lab revealed they had also built an AI that helps operate Google’s worldwide network of computer data centers. Using a technique called deep reinforcement learning, which underpins both their Go-playing machine and earlier DeepMind services that learned to master old Atari games, this AI decides when to turn on cooling fans inside the thousands of computer servers that fill these data centers, when to open the data center windows for additional cooling, and when to fall back on expensive air conditioners. All told, it controls over 120 functions inside each data center.

As Bloomberg reported, this AI is so effective that it saves Google hundreds of millions of dollars. In other words, it pays for the cost of acquiring DeepMind, which Google bought for about $650 million in 2014. Now, DeepMind plans to install additional sensors in these computing facilities, so it can collect additional data and train this AI to even higher levels.
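The article names the technique but doesn’t show it, so here is a toy sketch of the trial-and-error idea behind reinforcement learning, applied to a made-up cooling problem. To be clear, this is only an illustration: DeepMind’s actual controller uses deep neural networks over thousands of sensor readings and some 120 control functions, while every temperature, action, and cost below is invented for the example.

```python
import random

# Toy sketch of reinforcement learning for data-center cooling.
# NOT DeepMind's system: the real controller uses deep neural networks
# over thousands of sensors; a tiny Q-table over a discretized hall
# temperature is enough to show the trial-and-error idea.

ACTIONS = ["fans", "windows", "aircon"]           # hypothetical control choices
ENERGY_COST = {"fans": 1.0, "windows": 0.5, "aircon": 3.0}
COOLING = {"fans": 1, "windows": 2, "aircon": 4}  # degrees removed per step
HEAT_LOAD = 2                                     # degrees the servers add per step

def step(temp, action):
    """Advance the toy thermal model one step; return (new_temp, reward)."""
    new_temp = max(18, min(40, temp + HEAT_LOAD - COOLING[action]))
    penalty = 10.0 if new_temp > 30 else 0.0      # overheating is expensive
    reward = -(ENERGY_COST[action] + penalty)     # RL maximizes reward = -cost
    return new_temp, reward

Q = {(t, a): 0.0 for t in range(18, 41) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.1

for episode in range(2000):
    temp = random.randint(18, 40)
    for _ in range(50):
        # Epsilon-greedy: mostly exploit what has been learned, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(temp, a)])
        new_temp, reward = step(temp, action)
        best_next = max(Q[(new_temp, a)] for a in ACTIONS)
        Q[(temp, action)] += alpha * (reward + gamma * best_next - Q[(temp, action)])
        temp = new_temp

# Inspect the learned policy (typically: cheap cooling when cool, air conditioning when hot).
for t in (22, 28, 34, 40):
    print(t, max(ACTIONS, key=lambda a: Q[(t, a)]))
```

After a few thousand simulated episodes the table ends up preferring the cheap options while the hall is cool and the expensive air conditioning only when it overheats, which is the same kind of trade-off, at vastly larger scale, that the article describes.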

Written by LeisureGuy

26 December 2016 at 3:30 pm

The Great A.I. Awakening; or, The Threshold of the Singularity


Gideon Lewis-Kraus writes in the NY Times Magazine:

Prologue: You Are What You Have Read

Late one Friday night in early November, Jun Rekimoto, a distinguished professor of human-computer interaction at the University of Tokyo, was online preparing for a lecture when he began to notice some peculiar posts rolling in on social media. Apparently Google Translate, the company’s popular machine-translation service, had suddenly and almost immeasurably improved. Rekimoto visited Translate himself and began to experiment with it. He was astonished. He had to go to sleep, but Translate refused to relax its grip on his imagination.

Rekimoto wrote up his initial findings in a blog post. First, he compared a few sentences from two published versions of “The Great Gatsby,” Takashi Nozaki’s 1957 translation and Haruki Murakami’s more recent iteration, with what this new Google Translate was able to produce. Murakami’s translation is written “in very polished Japanese,” Rekimoto explained to me later via email, but the prose is distinctively “Murakami-style.” By contrast, Google’s translation — despite some “small unnaturalness” — reads to him as “more transparent.”

The second half of Rekimoto’s post examined the service in the other direction, from Japanese to English. He dashed off his own Japanese interpretation of the opening to Hemingway’s “The Snows of Kilimanjaro,” then ran that passage back through Google into English. He published this version alongside Hemingway’s original, and proceeded to invite his readers to guess which was the work of a machine.

NO. 1:

Kilimanjaro is a snow-covered mountain 19,710 feet high, and is said to be the highest mountain in Africa. Its western summit is called the Masai “Ngaje Ngai,” the House of God. Close to the western summit there is the dried and frozen carcass of a leopard. No one has explained what the leopard was seeking at that altitude.

NO. 2:

Kilimanjaro is a mountain of 19,710 feet covered with snow and is said to be the highest mountain in Africa. The summit of the west is called “Ngaje Ngai” in Masai, the house of God. Near the top of the west there is a dry and frozen dead body of leopard. No one has ever explained what leopard wanted at that altitude.

Even to a native English speaker, the missing article on the leopard is the only real giveaway that No. 2 was the output of an automaton. Their closeness was a source of wonder to Rekimoto, who was well acquainted with the capabilities of the previous service. Only 24 hours earlier, Google would have translated the same Japanese passage as follows:

Kilimanjaro is 19,710 feet of the mountain covered with snow, and it is said that the highest mountain in Africa. Top of the west, “Ngaje Ngai” in the Maasai language, has been referred to as the house of God. The top close to the west, there is a dry, frozen carcass of a leopard. Whether the leopard had what the demand at that altitude, there is no that nobody explained.

Rekimoto promoted his discovery to his hundred thousand or so followers on Twitter, and over the next few hours thousands of people broadcast their own experiments with the machine-translation service. Some were successful, others meant mostly for comic effect. As dawn broke over Tokyo, Google Translate was the No. 1 trend on Japanese Twitter, just above some cult anime series and the long-awaited new single from a girl-idol supergroup. Everybody wondered: How had Google Translate become so uncannily artful?

Four days later, a couple of hundred journalists, entrepreneurs and advertisers from all over the world gathered in Google’s London engineering office for a special announcement. Guests were greeted with Translate-branded fortune cookies. Their paper slips had a foreign phrase on one side — mine was in Norwegian — and on the other, an invitation to download the Translate app. Tables were set with trays of doughnuts and smoothies, each labeled with a placard that advertised its flavor in German (zitrone), Portuguese (baunilha) or Spanish (manzana). After a while, everyone was ushered into a plush, dark theater.

Sadiq Khan, the mayor of London, stood to make a few opening remarks. A friend, he began, had recently told him he reminded him of Google. “Why, because I know all the answers?” the mayor asked. “No,” the friend replied, “because you’re always trying to finish my sentences.” The crowd tittered politely. Khan concluded by introducing Google’s chief executive, Sundar Pichai, who took the stage.

Pichai was in London in part to inaugurate Google’s new building there, the cornerstone of a new “knowledge quarter” under construction at King’s Cross, and in part to unveil the completion of the initial phase of a company transformation he announced last year. The Google of the future, Pichai had said on several occasions, was going to be “A.I. first.” What that meant in theory was complicated and had welcomed much speculation. What it meant in practice, with any luck, was that soon the company’s products would no longer represent the fruits of traditional computer programming, exactly, but “machine learning.”

A rarefied department within the company, Google Brain, was founded five years ago on this very principle: that artificial “neural networks” that acquaint themselves with the world via trial and error, as toddlers do, might in turn develop something like human flexibility. This notion is not new — a version of it dates to the earliest stages of modern computing, in the 1940s — but for much of its history most computer scientists saw it as vaguely disreputable, even mystical. Since 2011, though, Google Brain has demonstrated that this approach to artificial intelligence could solve many problems that confounded decades of conventional efforts. Speech recognition didn’t work very well until Brain undertook an effort to revamp it; the application of machine learning made its performance on Google’s mobile platform, Android, almost as good as human transcription. The same was true of image recognition. Less than a year ago, Brain for the first time commenced with the gut renovation of an entire consumer product, and its momentous results were being celebrated tonight.

Translate made its debut in 2006 and since then has become one of Google’s most reliable and popular assets; it serves more than 500 million monthly users in need of 140 billion words per day in a different language. It exists not only as its own stand-alone app but also as an integrated feature within Gmail, Chrome and many other Google offerings, where we take it as a push-button given — a frictionless, natural part of our digital commerce. It was only with the refugee crisis, Pichai explained from the lectern, that the company came to reckon with Translate’s geopolitical importance: On the screen behind him appeared a graph whose steep curve indicated a recent fivefold increase in translations between Arabic and German. (It was also close to Pichai’s own heart. He grew up in India, a land divided by dozens of languages.) The team had been steadily adding new languages and features, but gains in quality over the last four years had slowed considerably.

Until today. As of the previous weekend, Translate had been converted to an A.I.-based system for much of its traffic, not just in the United States but in Europe and Asia as well: The rollout included translations between English and Spanish, French, Portuguese, German, Chinese, Japanese, Korean and Turkish. The rest of Translate’s hundred-odd languages were to come, with the aim of eight per month, by the end of next year. The new incarnation, to the pleasant surprise of Google’s own engineers, had been completed in only nine months. The A.I. system had demonstrated overnight improvements roughly equal to the total gains the old one had accrued over its entire lifetime.

Pichai has an affection for the obscure literary reference; he told me a month earlier, in his office in Mountain View, Calif., that Translate in part exists because not everyone can be like the physicist Robert Oppenheimer, who learned Sanskrit to read the Bhagavad Gita in the original. In London, the slide on the monitors behind him flicked to a Borges quote: “Uno no es lo que es por lo que escribe, sino por lo que ha leído.”

Grinning, Pichai read aloud an awkward English version of the sentence that had been rendered by the old Translate system: “One is not what is for what he writes, but for what he has read.”

To the right of that was a new A.I.-rendered version: “You are not what you write, but what you have read.”

It was a fitting remark: The new Google Translate was run on the first machines that had, in a sense, ever learned to read anything at all.

Google’s decision to reorganize itself around A.I. was the first major manifestation of what has become an industrywide machine-learning delirium. Over the past four years, six companies in particular — Google, Facebook, Apple, Amazon, Microsoft and the Chinese firm Baidu — have touched off an arms race for A.I. talent, particularly within universities. Corporate promises of resources and freedom have thinned out top academic departments. It has become widely known in Silicon Valley that Mark Zuckerberg, chief executive of Facebook, personally oversees, with phone calls and video-chat blandishments, his company’s overtures to the most desirable graduate students. Starting salaries of seven figures are not unheard-of. Attendance at the field’s most important academic conference has nearly quadrupled. What is at stake is not just one more piecemeal innovation but control over what very well could represent an entirely new computational platform: pervasive, ambient artificial intelligence. . .

Continue reading.
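A note from me: if the article’s distinction between hand-written rules and “machine learning” sounds abstract, the toy sketch below may help. It is nothing like Google’s translation system, which uses vastly larger deep networks trained on enormous quantities of translated text; it simply shows a tiny neural network learning the XOR function from four examples by repeated guessing and correction (gradient descent) rather than being given the rule by a programmer. All sizes and settings in the sketch are arbitrary choices for the illustration.

```python
import numpy as np

# Minimal sketch of "learning from examples" instead of hand-coding a rule.
# A two-layer network learns XOR from four input/output pairs by adjusting
# its weights to reduce its error on each pass. Purely illustrative.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # desired outputs

W1 = rng.normal(scale=1.0, size=(2, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(scale=1.0, size=(8, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10000):
    # Forward pass: the network's current guesses.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: nudge every weight to reduce the squared error.
    err = out - y
    grad_out = err * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ grad_out
    b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * X.T @ grad_h
    b1 -= lr * grad_h.sum(axis=0)

# After training, the outputs should be close to [[0], [1], [1], [0]],
# even though no one ever wrote down the XOR rule itself.
print(np.round(out, 2))
```

The point is only that the mapping is learned from examples rather than coded by hand; that shift, scaled up to reading and producing whole sentences, is what “A.I. first” refers to in the article.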

Written by LeisureGuy

14 December 2016 at 4:21 pm

The template of a story—any story: same template.


Interesting to read at this particular time in history, when it looks as though nations are going through the story cycle, nations being one of the stories we tell ourselves. (Cf. memes, which survive if they are replicated by others—that is, only if they are “catchy” in some way, so that more people are drawn into the meme. Darwinian evolution takes care of the rest.)

Written by LeisureGuy

29 November 2016 at 2:42 pm

You’ve thought about Kurt Vonnegut’s Cat’s Cradle. Think again.


Josh Jones posts at Open Culture:

Kurt Vonnegut’s 1963 novel Cat’s Cradle resembles its title, a web of overlapping and entangled stories, all of which have huge holes in the middle. And the book—as were many of his slim, surrealist pop masterpieces—was read by many critics as lightweight—whimsical and sentimental. One reviewer in The New York Review of Books, for example, called Vonnegut a “compiler of easy to read truisms about society who allows everyone’s heart to be in the right place.”

Not so, argues University of Puerto Rico scholar Mark Wekander Voigt. For all its silliness—such as its Calypso-heavy “parody of a modern invented religion that will make everyone happy”—Cat’s Cradle, writes Voigt, “is essentially about the moral issues involved in a democratic government using the atom bomb.” Vonnegut’s novel suggests that “to be really ethical, to think about right and wrong, means that we must dispense with the authorities who tell us what is right and wrong.”

John, the hero of Cat’s Cradle, begins his absurdist hero’s quest by intending to write a “factual” accounting of what “important Americans had done on the day when the first atomic bomb was dropped on Hiroshima, Japan.” The references would not have been lost on Vonnegut’s contemporary readers, who would all have been familiar with John Hersey’s harrowing 1946 Hiroshima, the most popular book ever written about the dropping of the bomb, with six survivors’ stories told in a thrilling, engaging style and “all the entertainment of a well-written novel.”

Vonnegut, however, writes an alienating anti-novel, in part to demonstrate his point that “to discuss the ethical implications of dropping the bomb on Hiroshima, one should not look at the victims, but at those who were involved in developing such a bomb and their government.” Increasingly, however, it becomes harder and harder to look at anything directly. In the novel’s parody religion, Bokononism, all lies are potentially truths, all truths potentially lies. Language in the military-industrial-complex world of the bomb, Vonnegut suggests, had become as changeable and potentially deadly as the substance called “Ice-9,” a polymorph of water that can instantly turn rivers, lakes, and even whole oceans into ice.

Evoking the novel’s high-wire balancing act of goofy songs and rituals and metaphors for the global annihilation of the earth by nuclear weapons, the 2001 album above [in post at the link – LG], Ice-9 Ballads, pairs Vonnegut with composer Dave Soldier and the Manhattan Chamber Orchestra for an adaptation, of sorts, of Cat’s Cradle. . . .

Continue reading.

I would not be surprised to see an uptick in Vonnegut’s popularity over the next few years.

Written by LeisureGuy

18 November 2016 at 1:21 pm

The trailer for the live-action movie “Ghost in the Shell”


More info in this Motherboard article:

[The trailer video is embedded in the original post.]

Written by LeisureGuy

13 November 2016 at 10:13 am

A fascinating look at H.G. Wells


When I read H.G. Wells in junior high, I was enthralled. Roger Luckhurst was interviewed by Five Books and has some interesting things to say:

Three of the books that you have selected to talk about today were written early in H G Wells’ career, in the 1890s. Do you think that his biggest contribution to literature has been those early scientific romances?

Yes, that’s probably right. There is a common critical line about Wells which is that he started off young and enthusiastic, writing lively, ambivalent and ambiguous works. He seemed to be delightedly thinking up new ways of destroying humanity over and over again, or perversely pointing out how we were all going to degenerate down the evolutionary scale, back into sea squids. His work of the 1890s is satirical and provocative.

Then, quite early in his career, he became a famous writer and personality very fast, and in a sense started to treat himself too seriously. He did a famous lecture in 1902 at the Royal Institution called The Discovery of the Future, where he essentially said: fiction is boring! We can scientifically predict the future! And that is what I am going to do in this lecture. Some critics of Wells say that that is what he did for the next 44 years of his life. He got increasingly didactic and uninteresting; literary people in the liberal establishment like Virginia Woolf and E. M. Forster hated him; he became estranged from early fans like Henry James. After his early work, he only wrote didactic utopias and non-fiction about ‘world government’. He hectored.

I think this picture is way too simple, but it is the case that if you are interested in science fiction and popular culture, then those early works—The War of the Worlds, The Time Machine, The Island of Doctor Moreau—are crucial. After that, he did write some interesting works in a bewilderingly diverse array of genres, including social realism and utopias. He even had a religious phase briefly during the First World War, but also wrote histories, textbooks, novels and endless utopias. It was a very diverse career, not easy to encapsulate.

What is the enduring appeal of his early scientific romances? Why do people still read them?

He is not the ‘father of science fiction’, as people sometimes say, but he was part of a generation that was beginning to think about science as a resource for literature. I think he was very lucky to be born when he was, in 1866. He got the benefit of the Education Act of 1870, which meant he could go to college and be among the first generation to be taught scientific subjects. Consequently when he became a journalist in the 1890s his frame of reference was no longer Greek and Roman and Classical literature, as it would have been if he had been to public school and university.

So he was a young upstart from the petit bourgeoisie, of a generation that was relying on new kinds of knowledge. His cleverness was in being able to put them into a new kind of the old romance form and produce the ‘scientific romance’. They are a deliberate mix of new and old: romances rather than novels. They can be utopian or dystopian, but not in the style of literary utopia, established by Thomas More, which is often quite static, descriptive and boring. Instead, Wells’ fictions are dynamic and melodramatic, full of wild, disordered emotions, sensations of sublime awe, and total horror. You’re never quite sure where the sympathies lie, and that ambiguity is why I think they endure.

Writing about The Time Machine, you have identified the ‘Further Vision’—which occurs towards the end of the book—as one of Wells’ most impressive literary passages. Could you talk us through this scene? . . .

Continue reading.

Written by LeisureGuy

31 October 2016 at 4:01 pm

Posted in Books, Science fiction
