Later On

A blog written for those whose interests more or less match mine.

AlphaGolem

Looking at the current state of AI and thinking about its future, I'm reminded of John Henry and the steam drill. John Cornwell, director of the Science & Human Dimension Project at Jesus College, Cambridge, writes in Aeon:

The game of Go, which has a 3,000-year history in China, is played by two people on a board with counters, or stones, in black and white. The aim is to defeat one’s opponent by surrounding his territory. Metaphorically, the loser is choked into submission by the winner. At a match held in Seoul in South Korea, on 12 March 2016, the world Go champion Lee Sedol, observed by hundreds of supporters, and millions of spectators on television, slumped in apparent despair on being defeated by his opponent: a machine.

Go is a boardgame like no other. It is said to reflect the meaning of life. There are a prodigious number of potential moves – more, it is said, than all the particles in the known Universe. Serious Go players train virtually full-time from the age of five; they think of the game as an art form, and a philosophy, demanding the highest levels of intelligence, intuition and imagination. The champions are revered celebrities. They speak of the game as teaching them ‘an understanding of understanding’, and refer to original winning moves as ‘God’s touch.’

Lee’s face, as he lost the third of five games, and hence the match, was a picture of sorrow. It was as if he had failed the entire human race. He was beaten by AlphaGo, a machine that works on deeply layered neural nets that mimic the human brain and nervous system. The engineers and artificial intelligence (AI) experts who created AlphaGo admit that they do not understand how the machine’s intuition works. If melancholy is a consequence of loss, what was mourned that day was the demise of something uniquely special about human nature.

AlphaGo was designed at the AI research lab DeepMind, a subsidiary of the powerful Google corporation. DeepMind’s spokespeople say that this is just the beginning: they liken their research programmes to the Apollo Moon shot, or the Hubble telescope. The company has recruited 700 technicians, of whom 400 are post-doc computer scientists. They look ahead to the day when AI machines will be employed to solve the most impenetrable and recalcitrant problems in science, health, the environment … the Universe.

DeepMind scientists were thrilled with their success on 12 March 2016. Their glee recalled Dr Frankenstein’s – ‘it’s alive!… it’s alive!’ – in the 1931 movie directed by James Whale. Meanwhile, the emotions of Lee and his supporters bring to mind the pervasive atmosphere of melancholy in Mary Shelley’s novel; one commentator spoke of their ‘heavy sadness’. For his part, Lee had symbolically undergone the fate of Frankenstein’s brother William, throttled in the depths of a wood by the monster.

Cathartic foreboding is familiar in countless stories of hubris, from the original Prometheus myth to Frankenstein (1818) and on to the explosion of 20th- and 21st-century sci-fi literature and movies. But it is melancholy that haunts the imagined spectacle of humans rivalling God by devising creatures made in their own image. For Frankenstein’s monster, as for his creator Victor, the consciousness of having created a distorted human likeness lies at the heart of the unfolding misery and violence. ‘I am malicious because I am miserable,’ says the monster. ‘Am I not shunned and hated by all mankind?’ In the absence of any fatherly kindness from Frankenstein, his sorrow turns to hatred and murderous revenge: ‘If I cannot inspire love, I will cause fear.’

Alex Garland’s film Ex Machina (2014) is a recent manifestation of disastrous hubris in the creator-creature theme. It features Nathan, a billionaire genius AI scientist and contemporary Dr Frankenstein, who lives alone in a remote research facility where he constructs female robots. His latest artifact is Ava, a winsome AI android with suspected consciousness. Nathan wants to test her capacity for seduction. He recruits a young and impressionable computer scientist, Caleb, on the pretext of conducting a series of Turing tests: will Caleb mistake the machine for a human being? Will he fall in love with her? The answers, respectively, are no and yes.

Ava, for her part, manipulates Caleb for her own hidden, long-term aims. With the help of a fellow robot, she murders Nathan and escapes, leaving Caleb imprisoned and alone, facing starvation and death. Caleb elicits our contempt and pity. But Ava, despite her early expressions of frustrated longing (which suggest the sadness of a Lady of Shalott, ‘half sick of shadows’, but which are in fact a tactic of deceit), is a warped version of the prisoner who overcomes many obstacles to escape a Plato’s Cave of unreal androids. At the end of the film, Ava is helicoptered away from the facility to the world of real people. A sense of foreboding haunts the film from the outset, revealed in Nathan’s prognostication of AI’s future: ‘One day,’ he says, ‘the AIs will look back on us the same way we look at fossil skeletons from the plains of Africa. An upright ape, living in dust, with crude language and tools, all set for extinction.’

The enormity of AI’s challenge, and the melancholy it generates, was anticipated more than half a century ago by Norbert Wiener, the pioneer of cybernetics. Wiener was an atheist, yet in God and Golem, Inc. (1964) he predicts a set of AI circumstances, theological and eschatological in their scope, with premonitions of dark physical and metaphysical risk. He laid down a principle that self-learning systems are capable, in theory, not only of unprogrammed learning, but of reproducing themselves and evolving. Crucially, they will relate in independent ways with human beings. Wiener believed the risks attendant on playing God were dramatically exemplified in the 17th-century legend of the Golem of Prague, a huge, conscious humanoid, made of clay and powered by cabbalistic magic to protect the Jews of the city. The Golem, named Josef, soon revealed its potential for calamity. When instructed to heave water, it could not stop its task, and flooded the house. (There are premonitions here of the seminar-room joke and thought-experiment, in which an AI machine is briefed to make paperclips and cannot be stopped: eventually it wrecks the infrastructure of the planet and destroys the human race.) The Golem turns against the very people it was intended to protect, and kills them.

Wiener also emphasised the ability of self-learning machines to play games. Every kind of relationship, he argues, is reducible to a game. He saw the Golem myth as a game, and he expands on the idea to suggest that the Book of Job, that most melancholy of biblical stories, is another archetypal game: God and Satan competing to win the soul of the suffering prophet. Similarly, Wiener sees the struggle between God and Satan in John Milton’s epic poem Paradise Lost (1667) as a celestial game: Satan the melancholic fallen arch-fiend, eternally stricken with wounded merit, competing with God for possession of humankind:

Abashed the devil stood,
And felt how awful goodness is, and saw
Virtue in her shape how lovely – saw, and pined
His loss.

And that game will one day be repeated, Wiener predicted, when a human being pits herself against the ultimate machine. Fifty years ahead of time, Wiener foretold that researchers would build a machine to defeat the human champion of the most difficult boardgame ever devised. But this would be just the prelude to much greater extensions of the machines’ prowess. Proposing a general principle in cybernetics, Wiener wrote: ‘a game-playing machine may be used to secure the automatic performance of any function if the performance of this function is subject to a clear-cut, objective criterion of merit’. By clear-cut, he meant definable in a finite number of words or matrices. The systems would, in time, engage in ‘war and business’ which are conflicts ‘and as such, they may be so formalised as to constitute games with definite rules’. He might have included environment, food security, development, diplomacy.

In his conclusion, Wiener speculated that formalised versions of complex human planning and decisions were already being established to ‘determine the policies for pressing the Great Push Button and burning the Earth clean for a new and less humanly undependable order of things’. He was alluding to the probability that the decision for nuclear war would be initiated by a self-learning machine. The notion of the automatic Doomsday Machine had been dramatised that same year in Stanley Kubrick’s film Dr Strangelove (1964). For all its mordant humour, the movie is profoundly dark, ultimately dominated by despair.

Go players speak of the top players’ special ‘imagination’, a talent or faculty that DeepMind’s designers also claim for AlphaGo. But in what sense can a machine possess imagination?

An early hint of AI ‘imagination’ and its rationale can be found in a 2012 article published in the neuroscience journal Neuron: ‘The Future of Memory: Remembering, Imagining, and the Brain’, authored by a team led by the psychologist Daniel Schacter at Harvard University. The article was ostensibly about Alzheimer’s, and it argued that sufferers lose not only memory but the ability to envisage future events and their consequences. It claimed that imagination is key to both memory and forward-thinking.

Schacter and his colleagues cite the work of Sir Frederic Bartlett, professor of psychology at the University of Cambridge from the 1920s, to tell us what memory is not. In 1932, Bartlett claimed that memory ‘is not the re-excitation of innumerable fixed, lifeless and fragmentary traces, but an imaginative reconstruction or construction’. His research was based on an experiment whereby volunteers were told a Native American legend known as ‘The War of the Ghosts’. It takes about seven minutes to recite; the volunteers were then asked over lapses of days, weeks and months to retell the story. Bartlett found that the volunteers engaged their imaginations to recreate the tale in various ways, based on their own social and personal experiences. Memory, in other words, is not a retrieval of inert bits of information from a database, but a dynamic reconstruction or recreation: an exercise in imagination.

In their article, Schacter and his team argue that neuroscientific studies of imagination, memory, forward-thinking and decision-making have much to contribute to AI research. The significance of this statement, in retrospect at least, is that one of the article’s authors was Demis Hassabis, then of University College London. Hassabis had studied computer science at Cambridge, worked in the development of computer games (including the bestselling Theme Park) and gained a doctorate in cognitive neuroscience. He had been thinking hard about the direction of travel, from the brain to the machine, since as early as 1997, he has said, and it would be a strategy for his research through the following two decades. In July 2017, as CEO and co-founder of DeepMind, he tweeted: ‘Imagination is one of the keys to general intelligence, and also a powerful example of neuroscience-inspired ideas crossing over into AI.’

As Hassabis would explain on many occasions following the triumph of AlphaGo, the machine’s imagination consisted in its capacity to model future scenarios and the consequences of those scenarios at prodigious speeds and across a broad span of combinations, including its opponent’s potential moves. Furthermore, the operation of the neural nets meant that its ‘imagination’ was dynamic, productive, not inert and passive.
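To make that idea of machine "imagination" a little more concrete, here is a minimal sketch of lookahead search: a program that simulates possible futures, including the imagined opponent's best replies, and chooses the move with the best projected outcome. This is not DeepMind's code or AlphaGo's algorithm (AlphaGo combines Monte Carlo tree search with learned policy and value networks); it is a toy negamax search over an invented Nim-style game, named and structured purely for illustration.

```python
# Toy illustration of "imagining" future scenarios via lookahead search.
# Game (invented for this sketch): two players alternately take 1-3 stones
# from a pile; whoever takes the last stone wins.

def legal_moves(stones):
    """Moves available to the player whose turn it is."""
    return [n for n in (1, 2, 3) if n <= stones]

def imagine(stones, depth):
    """Depth-limited negamax: simulate futures and return (score, best_move).

    Score is from the viewpoint of the player to move:
    +1 a win can be forced, -1 the opponent can force a win, 0 unknown (horizon).
    """
    if stones == 0:
        return -1, None   # opponent took the last stone, so the player to move has lost
    if depth == 0:
        return 0, None    # search horizon reached; treat the position as neutral

    best_score, best_move = float("-inf"), None
    for move in legal_moves(stones):
        # Simulate the move, then let the imagined opponent respond optimally;
        # their best outcome is our worst, hence the sign flip.
        opponent_score, _ = imagine(stones - move, depth - 1)
        score = -opponent_score
        if score > best_score:
            best_score, best_move = score, move
    return best_score, best_move

if __name__ == "__main__":
    score, move = imagine(stones=10, depth=10)
    print(f"From 10 stones, take {move} (projected outcome: {score})")
```

The gap between this sketch and AlphaGo is, of course, the interesting part: rather than exhaustively enumerating every continuation, AlphaGo's learned policy network narrows which moves are worth "imagining" and its value network estimates how promising a simulated position is without playing it out to the end, which is what makes lookahead feasible in a game with Go's astronomical branching.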

The significance of machines mimicking the biological action of the brain and nervous system, as Hassabis framed it, was a metaphorical reversal of the more familiar direction of travel. Before the great leap forward in noninvasive brain imaging through the 1980s and ’90s (the so-called Decade of the Brain), it had been routine, from the early modern period on, to invoke machines to explain the mind-brain function: think of . . .

Continue reading.

Written by LeisureGuy

14 November 2018 at 9:30 am
