Later On

A blog written for those whose interests more or less match mine.

Archive for the ‘Go/Baduk’ Category

Afterthought on an earlier post: Territory vs. Influence

leave a comment »

I had a thought about an earlier post that described how an enlightened capitalist approach — giving workers a cut of the gains their productivity produces — actually benefits everyone, in contrast to the common hypercapitalist approach of squeezing as much as possible from the workers so that the gains flow solely to the capitalist.

The thought was prompted by an earlier post on Go (aka Baduk) that included an explanatory video I believe is understandable even by those who do not play Go. The idea is that there are two styles of play — for territory (immediate gain) and for influence (long-term gain) — and those two styles mirror, respectively, the typical capitalist approach (immediate gain) and the KKR approach in the door factory (long-term gain).

The KKR post goes well with the Go post. Take a look and see whether you agree.

Written by Leisureguy

22 May 2023 at 2:17 pm

Humans have improved at Go since AIs became best in the world

leave a comment »

In an earlier post, I argued that AI can clarify contentious propositions through an impartial debate, with an impartial Moderator declaring a winner. This is an example of using AI as a tool to explore a conceptual space.

In New Scientist Andrew Rosebaum describes the outcome of using another AI to explore a different conceptual space: the game of Go/Baduk. He writes:

AIs can beat the world’s best players at the board game Go, but humans are starting to improve too. An analysis of millions of Go moves has found that professional players have been making better and more original game choices since Go-playing AIs overtook humans.

Before 2016, AIs couldn’t beat the world’s best Go players. But this changed with an AI called AlphaGo developed by London-based research firm DeepMind. AlphaGo defeated multiple Go champions, including the then number one ranked human player.

Since then, other AIs have also been developed that are considered “superhuman”. Though they can be used simply as opposition players, they can also help analyse the quality of any given move and so act as a Go coach too.

Minkyu Shin at the City University of Hong Kong and his colleagues decided to investigate whether the introduction of these superhuman Go-playing AIs has led to a marked improvement in human play.

The researchers gathered a data set consisting of 5.8 million move decisions by professional players between 1950 and 2021. They then used a Go-playing AI to help calculate a measure called a “decision quality index”, or DQI, which assesses the quality of a move. They deemed a move “novel” if it had not been previously attempted in combination with the preceding moves.
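The researchers' "novel move" criterion can be sketched in code. This is a hedged illustration of the idea as the article describes it — the data structure and names below are my assumptions, not the researchers' actual method: a move counts as novel if the sequence of preceding moves plus that move has never appeared in the corpus.

```python
# Illustrative sketch of the "novel move" test described above: a move is
# novel if (preceding moves + this move) never appeared in earlier games.
# Move names and the prefix-set representation are assumptions for clarity.
seen_prefixes = set()

def record_game(moves):
    """Add every prefix of a game's move sequence to the corpus."""
    for i in range(1, len(moves) + 1):
        seen_prefixes.add(tuple(moves[:i]))

def is_novel(preceding, move):
    """True if this move has never followed this exact sequence before."""
    return tuple(preceding) + (move,) not in seen_prefixes

record_game(["d4", "q16", "c3"])
print(is_novel(["d4", "q16"], "c3"))   # False: seen in the recorded game
print(is_novel(["d4", "q16"], "r17"))  # True: a new continuation
```

The real study of course worked over 5.8 million professional move decisions; the point here is only the shape of the novelty test.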

The analysis found that human players made significantly better and more novel moves following the 2016 advent of superhuman AI. Between 1950 and 2015, the improvement in quality of play was comparatively small, with the median annual DQI oscillating between roughly -0.2 and 0.2. After superhuman AI arrived, however, the DQI leapt upward, with median values above 0.7 from 2018 to 2021. In 2015, 63 per cent of games showed novel strategies; by 2018, that figure had risen to 88 per cent.

Stuart Russell at the University of California, Berkeley, says that . . .

Continue reading.

Written by Leisureguy

15 March 2023 at 3:16 pm

The Achilles’ Heel of AI Go Programs

leave a comment »

A fascinating article in the Financial Times shows that AI programs don't generalize as humans do. From the article:

The tactics used by Pelrine involved slowly stringing together a large “loop” of stones to encircle one of his opponent’s own groups, while distracting the AI with moves in other corners of the board. The Go-playing bot did not notice its vulnerability, even when the encirclement was nearly complete, Pelrine said.

“As a human it would be quite easy to spot,” he added.

The discovery of a weakness in some of the most advanced Go-playing machines points to a fundamental flaw in the deep learning systems that underpin today’s most advanced AI, said Stuart Russell, a computer science professor at the University of California, Berkeley.

The systems can “understand” only specific situations they have been exposed to in the past and are unable to generalise in a way that humans find easy, he added.

“It shows once again we’ve been far too hasty to ascribe superhuman levels of intelligence to machines,” Russell said.

The player, apparently an amateur 1-dan, won 14 of the 15 games played against the AI.

Written by Leisureguy

19 February 2023 at 9:50 am

Territory and Influence: Yin and Yang of Go Game

with 2 comments

I realize — or at least suspect — that many of my readers do not yet play Go, but I like to offer encouragement to learn. This video, by Go Magic, explains some aspects of the play of Go clearly, in a way that I think nonplayers can appreciate:

Written by Leisureguy

10 February 2023 at 12:52 pm

Posted in Games, Go/Baduk, Video

On dry spells and unconscious work

leave a comment »

Timothy Wilson’s book Strangers to Ourselves: Discovering the Adaptive Unconscious helped me better understand some of the processes that make up “me,” and how great a role unconscious processes — of which, of course, we have no awareness (though occasional glimpses) — play in that.

It came up recently in an exchange on Mastodon, when someone wrote about how a protracted dry spell is often a secretly fertile time when unconscious processes are working out what will drive us forward.

During the dry spell, we are not aware of the unconscious processes at work, so the time feels fallow even though our unconscious is busily at work, constructing new connections and new channels. 

During these dry spells, we don't feel calm and relaxed but almost tense with an effort to make something happen, to break through. It occurred to me that perhaps what we feel is spillover from the unconscious activity — we feel the effort, but we don't see what is causing it or what it is accomplishing. We're bystanders on the conscious side of a semi-permeable wall, and what leaks through is not the work itself but its emotional component.

Because we don't see what is happening — all that must be done to connect things and make new channels — we grow impatient. We feel as though we (our conscious selves) are making the effort and nothing is happening. I think that we are, rather, feeling the effort, and we don't know enough to be patient and wait for the result to be achieved.

The prototypical example is the adult starting to learn to play the piano. He wants to play easily right away, and he feels the effort as the unconscious works to sort out the new skill, but he thinks the effort lies in his conscious mind driving his fingers over the keys. He must do that, but that's not the real effort (which is giving the unconscious the practice it needs to learn), and the sense of effort is, I think, what leaks through from the unconscious's struggle to integrate this skill.

This sort of spillover from our unconscious work is more evident when someone first starts learning the game of Go/Baduk — something I recommend (see this site). Go depends heavily on pattern recognition, and that unconscious facility — the pattern-recognition subroutine, as it were — is employed in many spheres, such as learning a language, learning to play music, learning dance or sports — and learning Go.

In Go, a stone or a group of stones is captured (and removed from the board) when it no longer touches any vacant intersection — when it is smothered, as it were. A single stone in the middle of the board — not on an edge — touches four vacant intersections (its liberties): one above, one below, and one on either side. When three of these are occupied by the opponent, the stone can be taken on the next move.
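The capture rule just described can be shown in a few lines of code. This is a minimal sketch under my own assumptions (board representation, names, and board size are illustrative): count the vacant intersections next to a single stone; when none remain, the stone is captured.

```python
# Minimal sketch of the Go capture rule for a single stone.
# Board: dict mapping (row, col) -> "B" or "W"; absent keys are vacant.
# (The representation and 9x9 size are illustrative assumptions.)

BOARD_SIZE = 9

def liberties(board, point):
    """Return the vacant intersections adjacent to `point`."""
    r, c = point
    neighbors = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
    return [
        (nr, nc)
        for nr, nc in neighbors
        if 0 <= nr < BOARD_SIZE and 0 <= nc < BOARD_SIZE and (nr, nc) not in board
    ]

# A lone black stone in the center starts with four liberties...
board = {(4, 4): "B"}
print(len(liberties(board, (4, 4))))  # 4

# ...and with three of them occupied by white, one liberty remains,
# so the stone can be captured on the next move:
board.update({(3, 4): "W", (5, 4): "W", (4, 3): "W"})
print(len(liberties(board, (4, 4))))  # 1
```

Real Go counts the liberties of whole connected groups; this sketch covers only the single-stone case described in the paragraph above.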

Early in the process of learning Go, people suddenly start feeling those patterns in real-life contexts. If they are in a group with someone on either side and they're talking to someone in front of them, they will suddenly feel the danger that if someone comes up behind them, they will be captured. It's a feeling, not a conscious thought, and it's distinct.

Or in driving on a multilane highway, if they have a car on either side and they come closer to a car in front, they will feel that a car behind them will capture them.

I think these feelings are spillovers from the unconscious at work — and specifically that unconscious pattern-recognition subroutine. It’s working so hard to integrate these new patterns into its library that they spill over into the conscious mind. If you have ever learned a foreign language, you will have noticed the same feelings of effort and the occasional lifting of a veil when a string of gibberish switches into a clear thought.

Freud thought that the unconscious would slip through when the conscious mind was distracted — the famous Freudian slip, when one blurts out something one may not consciously have intended to say. The usual explanation is that the error happened because one was tired or distracted, but as Freud pointed out, that is like attributing a robbery on a dark and isolated street to the darkness and isolation. Those are not what robbed the person; they just provided the conditions for a robbery. The robber took advantage of the darkness and isolation to strike, just as the unconscious takes advantage of the conscious self's being tired or distracted to come into the open.

Nowadays, I don't consider this view as valid as I once did. It strikes me as giving the conscious self more power and autonomy than it actually possesses. The conscious self seems more like the passenger in a howdah on an elephant. The elephant — the unconscious — goes where it wants and does what it wants, and the passenger makes up reasons why he wanted to go there and do that. (This is particularly evident in stage acts in which hypnotized people are given a post-hypnotic suggestion to, say, squawk like a chicken when they hear the word "book." When "book" is said and the person squawks, if you ask them why they did that, they will come up with various reasons — the conscious mind is a rationalizing engine. This is familiar to people who attempt to rely on willpower to diet: their conscious mind can come up with lots of reasons to eat what they want.)

Wilson’s book, mentioned above, is in the list of books I find myself repeatedly recommending, and another book in that list is relevant to this topic: Daniel Goleman’s Vital Lies, Simple Truths: The Psychology of Self-Deception. That book discusses how the unconscious pulls the wool over the eyes of the conscious — and why it does that. It’s a book well worth reading since it can help you spot some instances when your own unconscious spills over into the open. Initially, that can be hard to recognize, because we have somehow trained ourselves not to see it, not to be aware of it. But with practice, you can see it at work.

Another book very much on this topic is Marion Milner's A Life of One's Own, in which she describes her own journey of discovery to see what her unconscious mind was up to. The encounters are interesting and in some cases have quite practical application. This book, too, is found on that booklist.

Written by Leisureguy

7 February 2023 at 11:46 am

A nice explanation of a Go game

leave a comment »

For reasons that are not entirely clear to me, many people do not yet play Go. But I think even those who haven’t yet learned Go might enjoy this brief video. (And see also: AlphaGo.)

Written by Leisureguy

31 January 2023 at 12:14 pm

Posted in Games, Go/Baduk, Video

Making Go boards and Shogi boards

leave a comment »

Written by Leisureguy

9 January 2023 at 5:58 pm

Who is Honinbo Jowa? The Story of One of the Strongest Meijin in Japanese Go History

leave a comment »

I don’t post much Go stuff, but this video seems exceptionally clear — especially if you’ve seen (say) The Surrounding Game, an excellent documentary that explains Go very well. I believe the movie is available in the US on Prime Video. The first half gives a history of the game to provide a context for the second half. It’s well worth watching, and if you do watch it, this brief video will be more accessible even if you are not yet a Go player.

Written by Leisureguy

23 December 2022 at 12:51 pm

Posted in Games, Go/Baduk, Video

“AlphaGo”: The full documentary on YouTube

leave a comment »

This is well worth watching. AlphaGo has now been well outclassed by AlphaZero, the next generation of game-playing AI.

Written by Leisureguy

14 February 2022 at 6:57 pm

The peril of pursuing perfection

with 4 comments

I have written about the difficulty faced by adult beginners in playing piano: they are hyperconscious of the mistakes they make, and they don’t want to play until they can play without making such mistakes. But studying our mistakes is how we learn.

I just came across this story from Art and Fear, by David Bayles and Ted Orland:

The ceramics teacher announced on opening day that he was dividing the class into two groups. All those on the left side of the studio, he said, would be graded solely on the quantity of work they produced, all those on the right solely on its quality. His procedure was simple: on the final day of class he would bring in his bathroom scales and weigh the work of the “quantity” group: fifty pounds of pots rated an “A”, forty pounds a “B”, and so on. Those being graded on “quality”, however, needed to produce only one pot – albeit a perfect one – to get an “A”. Well, came grading time and a curious fact emerged: the works of highest quality were all produced by the group being graded for quantity. It seems that while the “quantity” group was busily churning out piles of work – and learning from their mistakes – the “quality” group had sat theorizing about perfection, and in the end had little more to show for their efforts than grandiose theories and a pile of dead clay.

Update: Cf. Linus Pauling: “The best way to have a good idea is to have a lot of ideas.”

Written by Leisureguy

16 November 2020 at 3:42 pm

Excellent full-length documentary on AlphaGo and the match against the world champion

leave a comment »

I highly recommend this documentary even to those who do not play Go. I have no knowledge of (or interest in) football, but I loved the series “Friday Night Lights,” as so many do, not because of the football but because of the human drama. Football is really just the MacGuffin. The story is about the people, and it is absorbing because of that. So it is with this documentary.

Written by Leisureguy

13 June 2020 at 9:24 pm

AlphaGo: The movie

leave a comment »

The full documentary:

Written by Leisureguy

22 March 2020 at 10:02 am

Posted in Games, Go/Baduk, Movies & TV

The Shaw Alphabet and Other Quixotic Solutions I Love

leave a comment »

I just published on Medium a brief article that lists a number of interests regular blog readers will recognize.

Written by Leisureguy

11 January 2020 at 2:29 pm

Free Go book as PDF: “Go Studies: A History of Adventure”

leave a comment »

I just learned of this site, with Baduk being the Korean name for Go. And the first thing I saw was a free book on Go: Go Studies: A History of Adventure. Scroll down that page to get to the download buttons.

It's really a terrific site, with a lot of very good content. Take a look. And remember your New Year's resolution to learn Go. 🙂

Written by Leisureguy

13 January 2019 at 1:43 pm

Posted in Books, Games, Go/Baduk

How the Artificial-Intelligence Program AlphaZero Mastered Its Games

leave a comment »

James Somers writes in the New Yorker:

A few weeks ago, a group of researchers from Google’s artificial-intelligence subsidiary, DeepMind, published a paper in the journal Science that described an A.I. for playing games. While their system is general-purpose enough to work for many two-person games, the researchers had adapted it specifically for Go, chess, and shogi (“Japanese chess”); it was given no knowledge beyond the rules of each game. At first it made random moves. Then it started learning through self-play. Over the course of nine hours, the chess version of the program played forty-four million games against itself on a massive cluster of specialized Google hardware. After two hours, it began performing better than human players; after four, it was beating the best chess engine in the world.

The program, called AlphaZero, descends from AlphaGo, an A.I. that became known for defeating Lee Sedol, the world’s best Go player, in March of 2016. Sedol’s defeat was a stunning upset. In “AlphaGo,” a documentary released earlier this year on Netflix, the filmmakers follow both the team that developed the A.I. and its human opponents, who have devoted their lives to the game. We watch as these humans experience the stages of a new kind of grief. At first, they don’t see how they can lose to a machine: “I believe that human intuition is still too advanced for A.I. to have caught up,” Sedol says, the day before his five-game match with AlphaGo. Then, when the machine starts winning, a kind of panic sets in. In one particularly poignant moment, Sedol, under pressure after having lost his first game, gets up from the table and, leaving his clock running, walks outside for a cigarette. He looks out over the rooftops of Seoul. (On the Internet, more than fifty million people were watching the match.) Meanwhile, the A.I., unaware that its opponent has gone anywhere, plays a move that commentators called creative, surprising, and beautiful. In the end, Sedol lost, 1-4. Before there could be acceptance, there was depression. “I want to apologize for being so powerless,” he said in a press conference. Eventually, Sedol, along with the rest of the Go community, came to appreciate the machine. “I think this will bring a new paradigm to Go,” he said. Fan Hui, the European champion, agreed. “Maybe it can show humans something we’ve never discovered. Maybe it’s beautiful.”

AlphaGo was a triumph for its creators, but still unsatisfying, because it depended so much on human Go expertise. The A.I. learned which moves it should make, in part, by trying to mimic world-class players. It also used a set of hand-coded heuristics to avoid the worst blunders when looking ahead in games. To the researchers building AlphaGo, this knowledge felt like a crutch. They set out to build a new version of the A.I. that learned on its own, as a “tabula rasa.”

The result, AlphaGo Zero, detailed in a paper published in October, 2017, was so called because it had zero knowledge of Go beyond the rules. This new program was much less well-known; perhaps you can ask for the world’s attention only so many times. But in a way it was the more remarkable achievement, one that no longer had much to do with Go at all. In fact, less than two months later, DeepMind published a preprint of a third paper, showing that the algorithm behind AlphaGo Zero could be generalized to any two-person, zero-sum game of perfect information (that is, a game in which there are no hidden elements, such as face-down cards in poker). DeepMind dropped the “Go” from the name and christened its new system AlphaZero. At its core was an algorithm so powerful that you could give it the rules of humanity’s richest and most studied games and, later that day, it would become the best player there has ever been. Perhaps more surprising, this iteration of the system was also by far the simplest.

A typical chess engine is a hodgepodge of tweaks and shims made over decades of trial and error. The best engine in the world, Stockfish, is open source, and it gets better by a kind of Darwinian selection: someone suggests an idea; tens of thousands of games are played between the version with the idea and the version without it; the best version wins. As a result, it is not a particularly elegant program, and it can be hard for coders to understand. Many of the changes programmers make to Stockfish are best formulated in terms of chess, not computer science, and concern how to evaluate a given situation on the board: Should a knight be worth 2.1 points or 2.2? What if it’s on the third rank, and the opponent has an opposite-colored bishop? To illustrate this point, David Silver, the head of research at DeepMind, once listed the moving parts in Stockfish. There are more than fifty of them, each requiring a significant amount of code, each a bit of hard-won chess arcana: the Counter Move Heuristic; databases of known endgames; evaluation modules for Doubled Pawns, Trapped Pieces, Rooks on (Semi) Open Files, and so on; strategies for searching the tree of possible moves, like “aspiration windows” and “iterative deepening.”

AlphaZero, by contrast, has only two parts: a neural network and an algorithm called Monte Carlo Tree Search. (In a nod to the gaming mecca, mathematicians refer to approaches that involve some randomness as “Monte Carlo methods.”) The idea behind M.C.T.S., as it’s often known, is that a game like chess is really a tree of possibilities. If I move my rook to d8, you could capture it or let it be, at which point I could push a pawn or move my bishop or protect my queen. . . . The trouble is that this tree gets incredibly large incredibly quickly. No amount of computing power would be enough to search it exhaustively. An expert human player is an expert precisely because her mind automatically identifies the essential parts of the tree and focusses its attention there. Computers, if they are to compete, must somehow do the same.

This is where the neural network comes in. AlphaZero’s neural network receives, as input, the layout of the board for the last few moves of the game. As output, it estimates how likely the current player is to win and predicts which of the currently available moves are likely to work best. The M.C.T.S. algorithm uses these predictions to decide where to focus in the tree. If the network guesses that ‘knight-takes-bishop’ is likely to be a good move, for example, then the M.C.T.S. will devote more of its time to exploring the consequences of that move. But it balances this “exploitation” of promising moves with a little “exploration”: it sometimes picks moves it thinks are unlikely to bear fruit, just in case they do.
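The exploitation/exploration balance described above can be sketched with a PUCT-style move-selection score of the kind AlphaZero is reported to use. Everything here — the constant, the names, the toy statistics — is an illustrative assumption of mine, not DeepMind's code:

```python
import math

# Sketch: pick which move MCTS should explore next, balancing the
# network's prior probability (exploration) against the average value
# observed so far (exploitation). C_PUCT is an illustrative constant.
C_PUCT = 1.5

def select_move(stats, priors):
    """stats: move -> (visit_count, total_value); priors: move -> prior prob."""
    total_visits = sum(n for n, _ in stats.values())
    def score(move):
        n, w = stats[move]
        q = w / n if n else 0.0                                   # value so far
        u = C_PUCT * priors[move] * math.sqrt(total_visits + 1) / (1 + n)
        return q + u                                              # q + exploration bonus
    return max(stats, key=score)

# A move the network likes (high prior) but has rarely visited gets a
# large exploration bonus, so the search spends more time on it:
stats = {"knight-takes-bishop": (2, 1.2), "pawn-push": (40, 22.0)}
priors = {"knight-takes-bishop": 0.6, "pawn-push": 0.3}
print(select_move(stats, priors))  # knight-takes-bishop
```

Note how the bonus term shrinks as a move's visit count grows, which is exactly the "it sometimes picks moves it thinks are unlikely to bear fruit, just in case" behavior the article describes.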

At first, the neural network guiding this search is fairly stupid: it makes its predictions more or less at random. As a result, the Monte Carlo Tree Search starts out doing a pretty bad job of focussing on the important parts of the tree. But the genius of AlphaZero is in how it learns. It takes these two half-working parts and has them hone each other. Even when a dumb neural network does a bad job of predicting which moves will work, it’s still useful to look ahead in the game tree: toward the end of the game, for instance, the M.C.T.S. can still learn which positions actually lead to victory, at least some of the time. This knowledge can then be used to improve the neural network. When a game is done, and you know the outcome, you look at what the neural network predicted for each position (say, that there’s an 80.2 per cent chance that castling is the best move) and compare that to what actually happened (say, that the percentage is more like 60.5); you can then “correct” your neural network by tuning its synaptic connections until it prefers winning moves. In essence, all of the M.C.T.S.’s searching is distilled into new weights for the neural network.

With a slightly better network, of course, the search gets slightly less misguided—and this allows it to search better, thereby extracting better information for training the network. On and on it goes, in a feedback loop that ratchets up, very quickly, toward the plateau of known ability.
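The correction step in that feedback loop — compare what the network predicted for each position with what actually happened, then nudge the network toward the outcome — can be sketched with a toy stand-in. A real system does gradient descent on a deep network; here the "network" is just a table of per-position win estimates, and the learning rate is my own illustrative choice:

```python
# Toy sketch of the value-correction step described above: after a game,
# move each visited position's win estimate toward the observed result.
LEARNING_RATE = 0.2

def update(predictions, game_positions, outcome):
    """Nudge each visited position's win estimate toward outcome (0 or 1)."""
    for pos in game_positions:
        p = predictions.get(pos, 0.5)                 # start from 50/50
        predictions[pos] = p + LEARNING_RATE * (outcome - p)

predictions = {"castled": 0.802}       # network: castling lines win 80.2%
update(predictions, ["castled"], outcome=0)   # ...but this game was lost
print(round(predictions["castled"], 3))       # -> 0.642, pulled toward the result
```

Repeat over millions of self-play games and the table (in the real system, the network's weights) converges toward estimates that match actual outcomes — the "ratchet" the article describes.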

When the AlphaGo Zero and AlphaZero papers were published, a small army of enthusiasts began describing the systems in blog posts and YouTube videos and building their own copycat versions. Most of this work was explanatory—it flowed from the amateur urge to learn and share that gave rise to the Web in the first place. But a couple of efforts also sprung up to replicate the work at a large scale. The DeepMind papers, after all, had merely described the greatest Go- and chess-playing programs in the world—they hadn’t contained the source code, and the company hadn’t made the programs themselves available to players. Having declared victory, its engineers had departed the field.

Gian-Carlo Pascutto, a computer programmer who works at the Mozilla Corporation, had a track record of building competitive game engines, first in chess, then in Go. He followed the latest research. As the combination of Monte Carlo Tree Search and a neural network became the state of the art in Go A.I.s, Pascutto built the world’s most successful open-source Go engines—first Leela, then LeelaZero—which mirrored the advances made by DeepMind. The trouble was that DeepMind had access to Google’s vast cloud and Pascutto didn’t. To train its Go engine, DeepMind used five thousand of Google’s “Tensor Processing Units”—chips specifically designed for neural-network calculations—for thirteen days. To do the same work on his desktop system, Pascutto would have to run it for seventeen hundred years.
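As a rough sanity check on those figures (my arithmetic, not the article's): five thousand TPUs for thirteen days comes to 65,000 TPU-days, and seventeen hundred years of desktop time implies that, for this workload, each TPU did the work of roughly ten desktops.

```python
# Rough sanity check of the article's compute figures.
tpu_days = 5_000 * 13              # 5,000 TPUs running for 13 days
desktop_days = 1_700 * 365.25      # ~1,700 years of desktop time
print(tpu_days)                            # 65000 TPU-days
print(round(desktop_days / tpu_days, 1))   # ~9.6 desktop-days per TPU-day
```

So the gap is less about per-chip speed than about having thousands of chips at once — which is why Pascutto's answer was to distribute the work, as the next paragraph describes.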

To compensate for his lack of computing power, Pascutto distributed the effort. LeelaZero is a federated system: anyone who wants to participate can download the latest version, donate whatever computing power he has to it, and upload the data he generates so that the system can be slightly improved. The distributed LeelaZero community has had their system play more than ten million games against itself—a little more than AlphaGo Zero. It is now one of the strongest existing Go engines.

It wasn’t long before the idea was extended to chess. In December of last year, when the AlphaZero preprint was published, “it was like a bomb hit the community,” Gary Linscott said. Linscott, a computer scientist who had worked on Stockfish, used the existing LeelaZero code base, and the new ideas in the AlphaZero paper, to create Leela Chess Zero. (For Stockfish, he had developed a testing framework so that new ideas for the engine could be distributed to a fleet of volunteers, and thus vetted more quickly; distributing the training for a neural network was a natural next step.) There were kinks to sort out, and educated guesses to make about details that the DeepMind team had left out of their papers, but within a few months the neural network began improving. The chess world was already obsessed with AlphaZero: posts celebrated the engine; commentators and grandmasters pored over the handful of AlphaZero games that DeepMind had released with their paper, declaring that this was “how chess ought to be played,” that the engine “plays like a human on fire.” Quickly, Lc0, as Leela Chess Zero became known, attracted hundreds of volunteers. As they contributed their computer power and improvements to the source code, the engine got even better. Today, one core contributor suspects that it is just a few months away from overtaking Stockfish. Not long after, it may become better than AlphaZero itself.

When we spoke over the phone, Linscott marvelled that a project like his, which would once have taken a talented doctoral student several years, could now be done by an interested amateur in a couple of months. Software libraries for neural networks allow for the replication of a world-beating design using only a few dozen lines of code; the tools already exist for distributing computation among a set of volunteers, and chipmakers such as Nvidia have put cheap and powerful G.P.U.s—graphics-processing chips, which are perfect for training neural networks—into the hands of millions of ordinary computer users. An algorithm like M.C.T.S. is simple enough to be implemented in an afternoon or two. You don’t even need to be an expert in the game for which you’re building an engine. When he built LeelaZero, Pascutto hadn’t played Go for about twenty years.

David Silver, the head of research at DeepMind, has pointed out a seeming paradox at the heart of his company’s recent work with games:  . . .

Continue reading.

Written by Leisureguy

28 December 2018 at 11:10 am


leave a comment »

When I look at the current state of AI and think about its future, John Henry and the steam drill come to mind. John Cornwell, director of the Science & Human Dimension Project at Jesus College, Cambridge, writes in Aeon:

The game of Go, which has a 3,000-year history in China, is played by two people on a board with counters, or stones, in black and white. The aim is to defeat one’s opponent by surrounding his territory. Metaphorically, the loser is choked into submission by the winner. At a match held in Seoul in South Korea, on 12 March 2016, the world Go champion Lee Sedol, observed by hundreds of supporters, and millions of spectators on television, slumped in apparent despair on being defeated by his opponent: a machine.

Go is a boardgame like no other. It is said to reflect the meaning of life. There are a prodigious number of potential moves – more, it is said, than all the particles in the known Universe. Serious Go players train virtually full-time from the age of five; they think of the game as an art form, and a philosophy, demanding the highest levels of intelligence, intuition and imagination. The champions are revered celebrities. They speak of the game as teaching them ‘an understanding of understanding’, and refer to original winning moves as ‘God’s touch.’

Lee’s face, as he lost the third of five games, and hence the match, was a picture of sorrow. It was as if he had failed the entire human race. He was beaten by AlphaGo, a machine that works on deeply layered neural nets that mimic the human brain and nervous system. The engineers and artificial intelligence (AI) experts who created AlphaGo admit that they do not understand how the machine’s intuition works. If melancholy is a consequence of loss, what was mourned that day was the demise of something uniquely special about human nature.

AlphaGo was designed at the AI research lab DeepMind, a subsidiary of the powerful Google corporation. DeepMind’s spokespeople say that this is just the beginning: they liken their research programmes to the Apollo Moon shot, or the Hubble telescope. The company has recruited 700 technicians, of whom 400 are post-doc computer scientists. They look ahead to the day when AI machines will be employed to solve the most impenetrable and recalcitrant problems in science, health, the environment … the Universe.

DeepMind scientists were thrilled with their success on 12 March 2016. Their glee recalled Dr Frankenstein’s – ‘it’s alive!… it’s alive!’ – in the 1931 movie directed by James Whale. Meanwhile, the emotions of Lee and his supporters bring to mind the pervasive atmosphere of melancholy in Mary Shelley’s novel; one commentator spoke of their ‘heavy sadness’. For his part, Lee had symbolically undergone the fate of Frankenstein’s brother William, throttled in the depths of a wood by the monster.

Cathartic foreboding is familiar in countless stories of hubris, from the original Prometheus myth to Frankenstein (1818) and on to the explosion of 20th- and 21st-century sci-fi literature and movies. But it is melancholy that haunts the imagined spectacle of humans rivalling God by devising creatures made in their own image. For Frankenstein’s monster, as for his creator Victor, the consciousness of having created a distorted human likeness lies at the heart of the unfolding misery and violence. ‘I am malicious because I am miserable,’ says the monster. ‘Am I not shunned and hated by all mankind?’ In the absence of any fatherly kindness from Frankenstein, his sorrow turns to hatred and murderous revenge: ‘If I cannot inspire love, I will cause fear.’

Alex Garland’s film Ex Machina (2014) is a recent manifestation of disastrous hubris in the creator-creature theme. It features Nathan, a billionaire genius AI scientist and contemporary Dr Frankenstein, who lives alone in a remote research facility where he constructs female robots. His latest artifact is Ava, a winsome AI android with suspected consciousness. Nathan wants to test her capacity for seduction. He recruits a young and impressionable computer scientist, Caleb, on the pretext of conducting a series of Turing tests: will Caleb mistake the machine for a human being? Will he fall in love with her? The answers, respectively, are no and yes.

Ava, for her part, manipulates Caleb for her own hidden, long-term aims. With the help of a fellow robot, she murders Nathan and escapes, leaving Caleb imprisoned and alone, facing starvation and death. Caleb elicits our contempt and pity. But Ava, despite her early expressions of frustrated longing (that suggest the sadness of a Lady of Shalott, ‘half sick of shadows’, but which are in fact a tactic of deceit) is a warped version of the prisoner who overcomes many obstacles to escape a Plato’s Cave of unreal androids. At the end of the film, Ava is helicoptered away from the facility to the world of real people. A sense of foreboding haunts the film from the outset, revealed in Nathan’s prognostication of AI’s future: ‘One day,’ he says, ‘the AIs will look back on us the same way we look at fossil skeletons from the plains of Africa. An upright ape, living in dust, with crude language and tools, all set for extinction.’

The enormity of AI’s challenge, and the melancholy it generates, was anticipated more than half a century ago by Norbert Wiener, the pioneer of cybernetics. Wiener was an atheist, yet in God and Golem, Inc. (1964) he predicts a set of AI circumstances, theological and eschatological in their scope, with premonitions of dark physical and metaphysical risk. He laid down a principle that self-learning systems are capable, in theory, not only of unprogrammed learning, but of reproducing themselves and evolving. Crucially, they will relate in independent ways with human beings. Wiener believed the risks attendant on playing God were dramatically exemplified in the 17th-century legend of the Golem of Prague, a huge, conscious humanoid, made of clay and powered by cabbalistic magic to protect the Jews of the city. The Golem, named Josef, soon revealed its potential for calamity. When instructed to heave water, it could not stop its task, and flooded the house. (There are premonitions here of the seminar-room joke and thought-experiment, in which an AI machine is briefed to make paperclips and cannot be stopped: eventually it wrecks the infrastructure of the planet and destroys the human race.) The Golem turns against the very people it was intended to protect, and kills them.

Wiener also emphasised the ability of self-learning machines to play games. Every kind of relationship, he argues, is reducible to a game. He saw the Golem myth as a game, and he expands on the idea to suggest that the Book of Job, that most melancholy of biblical stories, is another archetypal game: God and Satan competing to win the soul of the suffering prophet. Similarly, Wiener sees the struggle between God and Satan in John Milton’s epic poem Paradise Lost (1667) as a celestial game: Satan the melancholic fallen arch-fiend, eternally stricken with wounded merit, competing with God for possession of humankind:

Abashed the devil stood,
And felt how awful goodness is, and saw
Virtue in her shape how lovely – saw, and pined
His loss.

And that game will one day be repeated, Wiener predicted, when a human being pits herself against the ultimate machine. Fifty years ahead of time, Wiener foretold that researchers would build a machine to defeat the human champion of the most difficult board game ever devised. But this would be just the prelude to much greater extensions of the machines’ prowess. Proposing a general principle in cybernetics, Wiener wrote: ‘a game-playing machine may be used to secure the automatic performance of any function if the performance of this function is subject to a clear-cut, objective criterion of merit’. By clear-cut, he meant definable in a finite number of words or matrices. The systems would, in time, engage in ‘war and business’ which are conflicts ‘and as such, they may be so formalised as to constitute games with definite rules’. He might have included environment, food security, development, diplomacy.
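Wiener’s principle can be made concrete with a toy sketch. The program below is not anything Wiener wrote; it is a minimal Python illustration, with an invented example task, of his claim that once a task has a clear-cut, objective criterion of merit, a self-improving loop can automate it — here, simple hill-climbing that keeps whichever random variant scores better:

```python
import random

def automate(candidate, mutate, merit, steps=10_000, seed=0):
    """Wiener's principle as a toy: given a clear-cut, objective
    criterion of merit, keep whichever random variant scores better."""
    rng = random.Random(seed)
    best, best_score = candidate, merit(candidate)
    for _ in range(steps):
        trial = mutate(best, rng)
        score = merit(trial)
        if score > best_score:  # the merit criterion alone decides
            best, best_score = trial, score
    return best

# Invented example task: recover a target phrase.
# Merit = number of characters already in the right place.
TARGET = "deep mind"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def merit(s):
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rng):
    # Replace one randomly chosen character with a random one.
    i = rng.randrange(len(s))
    return s[:i] + rng.choice(ALPHABET) + s[i + 1:]

start = "".join(random.Random(1).choice(ALPHABET) for _ in TARGET)
print(automate(start, mutate, merit))
```

The machine never “understands” the phrase; it only maximises the merit score — which is exactly the point Wiener was making about formalised games.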

In his conclusion, Wiener speculated that formalised versions of complex human planning and decisions were already being established to ‘determine the policies for pressing the Great Push Button and burning the Earth clean for a new and less humanly undependable order of things’. He was alluding to the probability that the decision for nuclear war would be initiated by a self-learning machine. The notion of the automatic Doomsday Machine had been dramatised that same year in Stanley Kubrick’s film Dr Strangelove (1964). For all its mordant humour, the movie is profoundly dark, ultimately dominated by despair.

Go players speak of the top players’ special ‘imagination’, a talent or faculty that DeepMind’s designers also claim for AlphaGo. But in what sense can a machine possess imagination?

An early hint of AI ‘imagination’ and its rationale can be found in a 2012 article published in the neuroscience journal Neuron: ‘The Future of Memory: Remembering, Imagining, and the Brain’, authored by a team led by the psychologist Daniel Schacter at Harvard University. The article was ostensibly about Alzheimer’s, and it argued that sufferers lose not only memory but the ability to envisage future events and their consequences. It claimed that imagination is key to both memory and forward-thinking.

Schacter and his colleagues cite the work of Sir Frederic Bartlett, professor of psychology at the University of Cambridge from the 1920s, to tell us what memory is not. In 1932, Bartlett claimed that memory ‘is not the re-excitation of innumerable fixed, lifeless and fragmentary traces, but an imaginative reconstruction or construction’. His research was based on an experiment whereby volunteers were told a Native American legend known as ‘The War of the Ghosts’. It takes about seven minutes to recite; the volunteers were then asked at intervals of days, weeks and months to retell the story. Bartlett found that the volunteers engaged their imaginations to recreate the tale in various ways, based on their own social and personal experiences. Memory, in other words, is not a retrieval of inert bits of information from a database, but a dynamic reconstruction or recreation: an exercise in imagination.

In their article, Schacter and his team argue that neuroscientific studies of imagination, memory, forward-thinking and decision-making have much to contribute to AI research. The significance of this statement, in retrospect at least, is that one of the article’s authors was Demis Hassabis, then of University College London. Hassabis had studied computer science at Cambridge, worked in the development of computer games (including the bestselling Theme Park) and gained a doctorate in cognitive neuroscience. He had been thinking hard about the direction of travel – from the brain to the machine – and has said that, since as early as 1997, this had been a strategy for his research through the next two decades. In July 2017, as CEO and co-founder of DeepMind, he tweeted: ‘Imagination is one of the keys to general intelligence, and also a powerful example of neuroscience-inspired ideas crossing over into AI.’

As Hassabis would explain on many occasions following the triumph of AlphaGo, the machine’s imagination consisted in its capacity to model future scenarios and the consequences of those scenarios at prodigious speeds and across a broad span of combinations, including its opponent’s potential moves. Furthermore, the operation of the neural nets meant that its ‘imagination’ was dynamic, productive, not inert and passive.
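What ‘modeling future scenarios and the consequences of those scenarios’ means can be shown in miniature. AlphaGo’s real machinery pairs Monte Carlo tree search with deep neural networks and cannot be reproduced in a few lines; the Python sketch below is only a toy stand-in — an exhaustive negamax lookahead for tic-tac-toe — that illustrates the same idea of imagining every future line of play, including the opponent’s best replies:

```python
def moves(board):
    # Board is a 9-character string; blanks are playable squares.
    return [i for i, c in enumerate(board) if c == " "]

def winner(board):
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for a, b, c in lines:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def negamax(board, player):
    """Score a position for `player`, who is about to move, by
    simulating every continuation: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w:                      # the previous mover has already won
        return 1 if w == player else -1
    if not moves(board):
        return 0               # full board, no winner: draw
    other = "O" if player == "X" else "X"
    # The opponent's best reply is our worst case, hence the negation.
    return max(-negamax(board[:i] + player + board[i+1:], other)
               for i in moves(board))

def best_move(board, player):
    other = "O" if player == "X" else "X"
    return max(moves(board),
               key=lambda i: -negamax(board[:i] + player + board[i+1:], other))

# X to move: square 2 completes the top row, and lookahead finds it.
print(best_move("XX OO    ", "X"))
```

Tic-tac-toe is small enough to search completely; Go’s branching factor makes that impossible, which is why AlphaGo must instead steer its ‘imagination’ toward promising continuations — the dynamic, productive quality Hassabis describes.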

The significance of machines mimicking the biological action of the brain and nervous system, as Hassabis framed it, was a metaphorical reversal of the more familiar direction of travel. Before the great leap forward in noninvasive brain imaging through the 1980s and ’90s (the so-called Decade of the Brain), it had been routine, from the early modern period on, to invoke machines to explain the mind-brain function: think of . . .

Continue reading.

Written by Leisureguy

14 November 2018 at 9:30 am

Two excellent movies on Netflix

leave a comment »

First, The Surrounding Game, which explores the appeal of Go (aka Baduk in Korea and Weichi in China).

Second, The Accountant, a good action movie.

Written by Leisureguy

30 August 2018 at 4:24 pm

Posted in Games, Go/Baduk, Movies & TV

I found this commentary on AlphaGo Zero v. AlphaGo Master fascinating

leave a comment »

I am fascinated by the analysis and discoveries.

Written by Leisureguy

7 July 2018 at 1:48 pm

Twitch played Go

leave a comment »

Pretty cool. Take a look.

Written by Leisureguy

1 May 2018 at 4:01 pm

Posted in Go/Baduk

“The Surrounding Game” now available on iTunes, Amazon Video, and YouTube

leave a comment »

I highly recommend this documentary, now live on iTunes, Amazon Video, and YouTube!

It’s a terrific film. The first third covers the history and high points of Go, which provides context for viewers who don’t play the game, and then the documentary really hits its stride and gets better and better.

Here’s the trailer:


Written by Leisureguy

15 March 2018 at 10:48 am

Posted in Games, Go/Baduk, Movies & TV
