Later On

A blog written for those whose interests more or less match mine.

Archive for the ‘Games’ Category

The Uncanny Resurrection of Dungeons & Dragons


Neima Jahromi writes in the New Yorker:

The clinical psychologist Jon Freeman was feeling burnt out. He spent his days at a corporate office in Manhattan, managing dozens of research assistants as they tested pharmaceuticals on people with anxiety, depression, and insomnia. Looking for an escape hatch, he noticed that his daughter often had nothing to do after school. She would pick up her Nintendo Wii controller and drift “into this world of digital isolation,” Freeman recalled. From time to time, he enticed her back into social existence with board games. “Then I had this idea: Couldn’t we do this on a larger scale? Could we expand this to our neighborhood?”
Freeman quit his job, and, shortly thereafter, in 2011, the first customers—initially, his daughter’s friends—arrived at his pop-up board-game club and café, Brooklyn Strategist, a place where children and their parents could sit down and play games, both classic and obscure, over veggie platters and homemade ginger ale. Looking back at his work in the research lab, he paired cognitive-ability tests with the board games that he had on hand, and divided these amusements by brain function—kids worked their way around their frontal lobes a die roll at a time.
One day, a child who had grown tired of a sports-statistics game asked if Freeman had heard of the role-playing game Dungeons & Dragons, and if they could play it. The game has no board and no cards. Occasionally, players make use of maps. At its best, it’s a story told between the players, who control characters (elves, dwarves, gnomes, humans), and the Dungeon Master, who describes the world and uses dice to determine outcomes in the second person (“You come across a band of orcs, travelling down the road. What do you do?”). Freeman refused for a week or two—the game was too open-ended, and didn’t have a straightforward cognitive benefit—but the customer persisted, so he went up into his parents’ attic, dug out all his old D. & D. manuals, and wrote an adventure. “I tried to give them a little flavor of everything,” he told me, “A little dungeon crawl, a little fighting monsters. They ate it up.” Word got out. A few months later, a parent stopped him on the street with tears in her eyes. “What are you guys doing?” she asked him. Her son was dyslexic and had been role-playing at Brooklyn Strategist for a couple of weeks. Before D. & D., he couldn’t focus on writing for more than a few seconds. Now he was staying up all night to draft stories about his character. “Whatever it is, bottle it and sell it to me,” the mother said.
Freeman got a permanent space in 2012 and added French-press coffee. A few months later, Gygax, a once defunct magazine named for the Dungeons & Dragons co-creator Gary Gygax, chose Brooklyn Strategist to host its relaunch party. A reporter for Wired, covering the event, asked the magazine’s founders why they wanted to waste their energy on such a publication (not to mention such a store) when “it’s video games, not Dungeons and Dragons and other RPGs, that are getting all the attention?” This attention, it seems, has shifted. Two popular role-playing shows, “The Adventure Zone” and “Critical Role,” sent Freeman’s older patrons to their knees, begging for more D. & D. time in the store. Soon, Freeman had to hire half a dozen paid Dungeon Masters for the kids and has now begun training volunteer Dungeon Masters to guide adventures for the adults who drop in on Thursdays to fight goblins, trick castle guards, and drink wine.
Dungeons & Dragons nights have spread into classrooms and game stores across the country. Forty dollars in Portland, Oregon, gets you into Orcs! Orcs! Orcs!, a “Tavern-inspired” pop-up restaurant with D. & D. games and artisanal delicacies. (One night, it boasted “tankards of beer” and “a whole roast pig.”) In Massachusetts, snow or shine, a series of role-playing camps called Guard Up offers children the chance to chase each other through the fields of Burlington with foam swords and Nerf blasters, while somehow also learning. (Each summer, in one camp, novels like “Animal Farm” or “Twenty Thousand Leagues Under the Sea” are adapted into a mock zombie apocalypse that is then played out by the campers. In another, at a moment of detente, Gandalf might appear on the edge of a running track to give physics lessons.) “I’ve had parents get very upset with me,” said Freeman, who recently opened another store near Columbia University. “Because they sign their kids up for role playing and my staff is trying to expand their horizons beyond D. & D. and into other independent games. But the parents are, like, ‘If they can’t play D. & D., then I don’t know if this is going to work.’ ”
This turn of events might shock a time traveller from the twentieth century. In the seventies and eighties, Dungeons & Dragons, with its supernatural themes, became the fixation of an overheated news media in the midst of a culture war. Role players were seen as closet cases, the least productive kind of geek, retreating to basements to open maps, spill out bags of dice, and light candles by which to see their medieval figurines. They squared with no one. Unlike their hippie peers, they had dropped out without bothering to tune in. On the other side of politics, Christian moralists’ cries of the occult and anxiety about witchcraft followed D. & D. players everywhere. Worse still, parents feared how this enveloping set of lies about druids in dark cloaks and paladins on horseback could tip already vulnerable minds off the cliff of reality. At the end of the 1982 TV movie “Mazes and Monsters,” a troubled gamer, played by a pre-fame Tom Hanks, loses touch and starts to believe that he really does live beside an evil wood in need of heroes. “He saw the monsters. We did not,” his ex-girlfriend says in a voice-over. “We saw nothing but the death of hope, and the loss of our friend.”
Decades passed, D. & D. movies and cartoons came and went, and the game remade itself over and over. But interest fell like an orc beneath a bastard sword. The game’s designers, surrounded by copycats and perplexed about how to bring D. & D. online, made flat-footed attempts at developing new rule books to mimic the video games that D. & D. had inspired. Gygax died, in 2008, occasioning a wealth of tributes but little enthusiasm. Then, a fifth edition of D. & D. rules came out, in 2014, and, somehow, the culture was receptive again to bags of holding and silver-haired drow. People started buying up these volumes in droves. “More people are interested in D&D than we thought,” the game’s lead developer, Mike Mearls, said, as print runs repeatedly sold out. “Who are these people? What do they want?”
In 2017, gathering your friends in a room, setting your devices aside, and taking turns to contrive a story that exists largely in your head gives off a radical whiff for a completely different reason than it did in 1987. And the fear that a role-playing game might wound the psychologically fragile seems to have flipped on its head. Therapists use D. & D. to get troubled kids to talk about experiences that might otherwise embarrass them, and children with autism use the game to improve their social skills. Last year, researchers found that a group of a hundred and twenty-seven role players exhibited above-average levels of empathy, and a Brazilian study from 2013 showed that role-playing classes were an extremely effective way to teach cellular biology to medical undergraduates.
Adult D. & D. acolytes are everywhere now, too. The likes of Drew Barrymore and Vin Diesel regularly take up the twenty-sided die (or at least profess to do so). Tech workers from Silicon Valley to Brooklyn have long-running campaigns, and the showrunners and the novelist behind “Game of Thrones” have all been Dungeon Masters. (It’s also big with comedy improvisers in Los Angeles, but it’s no surprise that theatre kids have nerdy hobbies.) Nevertheless, the image of the recluse persists even among fans. “We’re going to alienate ninety-nine per cent of the people out there right now,” Stephen Colbert told Anderson Cooper last year, on “The Late Show,” as they fondly recalled their respective turns as an elven thief and a witch. “The shut-in at home is really excited,” Cooper replied. “Neckbeards,” Colbert added.
The “neckbeards” may be more numerous now than he and Cooper realize. “The Big Bang Theory” is a sitcom about young scientists at CalTech who spend most of their time shuttling between their laboratories and the comic-book store. The show’s protagonists also play a lot of D. & D. In one episode, a theoretical physicist takes on the guise of the Dungeon Master to relieve a microbiologist of her distress over the restraints of her pregnancy. She pretends, for an evening, to live in a world where only men are with child (“Your husband is home trying not to pee when he laughs”), to drink ale out of the skull of a goblin, and to eat sushi made from the meat of a monster that she has butchered herself. Fourteen million people tuned in.
Dungeons & Dragons seems to have been waiting for us somewhere under the particular psyche of this generation, a psyche that may have been coaxed into fantasy mania by the media that surrounded it. . .

Continue reading.

Written by LeisureGuy

24 October 2017 at 12:56 pm

AlphaGo Zero: Learning from scratch


A very good (and very interesting) post at DeepMind.com:

Artificial intelligence research has made rapid progress in a wide variety of domains from speech recognition and image classification to genomics and drug discovery. In many cases, these are specialist systems that leverage enormous amounts of human expertise and data.

However, for some problems this human knowledge may be too expensive, too unreliable or simply unavailable. As a result, a long-standing ambition of AI research is to bypass this step, creating algorithms that achieve superhuman performance in the most challenging domains with no human input. In our most recent paper, published in the journal Nature, we demonstrate a significant step towards this goal.

The paper introduces AlphaGo Zero, the latest evolution of AlphaGo, the first computer program to defeat a world champion at the ancient Chinese game of Go. Zero is even more powerful and is arguably the strongest Go player in history.

Previous versions of AlphaGo initially trained on thousands of human amateur and professional games to learn how to play Go. AlphaGo Zero skips this step and learns to play simply by playing games against itself, starting from completely random play. In doing so, it quickly surpassed human level of play and defeated the previously published champion-defeating version of AlphaGo by 100 games to 0.

It is able to do this by using a novel form of reinforcement learning, in which AlphaGo Zero becomes its own teacher. The system starts off with a neural network that knows nothing about the game of Go. It then plays games against itself, by combining this neural network with a powerful search algorithm. As it plays, the neural network is tuned and updated to predict moves, as well as the eventual winner of the games.

This updated neural network is then recombined with the search algorithm to create a new, stronger version of AlphaGo Zero, and the process begins again. In each iteration, the performance of the system improves by a small amount, and the quality of the self-play games increases, leading to more and more accurate neural networks and ever stronger versions of AlphaGo Zero.
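The cycle described in the two paragraphs above — self-play with the current network, retrain on the outcomes, repeat — can be sketched as a toy loop. Everything here (the two-state "game," the table-based "network," the learning rate) is an illustrative stand-in of mine, not DeepMind's implementation:

```python
import random

def self_play(network, n_games=100):
    """Generate (state, outcome) records using the current system."""
    records = []
    for _ in range(n_games):
        state = random.choice(["A", "B"])
        # In the real system a network-guided tree search chooses moves;
        # here the outcome simply follows a fixed hidden rule of the toy game.
        outcome = 1.0 if state == "A" else -1.0
        records.append((state, outcome))
    return records

def update(network, records, lr=0.1):
    """Nudge each state's value estimate toward the observed outcomes."""
    for state, outcome in records:
        network[state] += lr * (outcome - network[state])

network = {"A": 0.0, "B": 0.0}   # starts off knowing nothing about the game
for _ in range(50):              # each cycle yields slightly better estimates
    records = self_play(network)
    update(network, records)
```

After the loop, the estimates for "A" and "B" approach the true outcomes (1.0 and −1.0): the self-play data trains the very estimates that would guide the next round of play, which is the feedback the excerpt describes.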

This technique is more powerful than previous versions of AlphaGo because it is no longer constrained by the limits of human knowledge. Instead, it is able to learn tabula rasa from the strongest player in the world: AlphaGo itself.

It also differs from previous versions in other notable ways.

  • AlphaGo Zero only uses the black and white stones from the Go board as its input, whereas previous versions of AlphaGo included a small number of hand-engineered features.
  • It uses one neural network rather than two. Earlier versions of AlphaGo used a “policy network” to select the next move to play and a “value network” to predict the winner of the game from each position. These are combined in AlphaGo Zero, allowing it to be trained and evaluated more efficiently.
  • AlphaGo Zero does not use “rollouts” – fast, random games used by other Go programs to predict which player will win from the current board position. Instead, it relies on its high quality neural networks to evaluate positions.
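The "one network, two heads" design in the list above can be sketched minimally: a single raw-stones input feeds both a policy head (a distribution over moves) and a value head (a win estimate), so no random rollouts are needed to judge a position. The linear weights below are random stand-ins, not a trained model:

```python
import math
import random

def evaluate(stones, w_policy, w_value):
    """stones: board encoded as +1 (black), -1 (white), 0 (empty) -- raw input only."""
    # Policy head: one logit per board point, softmaxed into a move distribution.
    logits = [sum(w * s for w, s in zip(row, stones)) for row in w_policy]
    mx = max(logits)
    exps = [math.exp(l - mx) for l in logits]
    total = sum(exps)
    policy = [e / total for e in exps]
    # Value head: a single tanh-squashed score in (-1, 1), replacing rollouts.
    value = math.tanh(sum(w * s for w, s in zip(w_value, stones)))
    return policy, value

random.seed(0)
n = 9  # a toy 3x3 board, flattened
board = [1, -1, 0, 0, 1, 0, -1, 0, 0]
w_policy = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
w_value = [random.uniform(-1, 1) for _ in range(n)]
policy, value = evaluate(board, w_policy, w_value)
```

A single forward pass yields both outputs — a probability distribution over the nine points and one position evaluation — which is why the combined network can be trained and queried more efficiently than two separate ones.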

All of these differences help improve the performance of the system and make it more general. But it is the algorithmic change that makes the system much more powerful and efficient.

After just three days of self-play training, AlphaGo Zero emphatically defeated the previously published version of AlphaGo – which had itself defeated 18-time world champion Lee Sedol – by 100 games to 0. After 40 days of self training, AlphaGo Zero became even stronger, outperforming the version of AlphaGo known as “Master”, which has defeated the world’s best players and world number one Ke Jie. . .

Continue reading.

Written by LeisureGuy

22 October 2017 at 2:14 pm

Posted in Go, Software, Technology

One more step toward the Singularity: Artificial Intelligence Learns to Learn Entirely on Its Own


In science-fiction about the Singularity, the point at which AI could instruct itself was a big step, leading to a rapid increase in capabilities as a result of positive feedback. Reality is not, of course, science-fiction, but the report of AI instructing itself does catch the eye.

Kevin Hartnett writes in Quanta:

A mere 19 months after dethroning the world’s top human Go player, the computer program AlphaGo has smashed an even more momentous barrier: It can now achieve unprecedented levels of mastery purely by teaching itself. Starting with zero knowledge of Go strategy and no training by humans, the new iteration of the program, called AlphaGo Zero, needed just three days to invent advanced strategies undiscovered by human players in the multi-millennia history of the game. By freeing artificial intelligence from a dependence on human knowledge, the breakthrough removes a primary limit on how smart machines can become.

Earlier versions of AlphaGo were taught to play the game using two methods. In the first, called supervised learning, researchers fed the program 100,000 top amateur Go games and taught it to imitate what it saw. In the second, called reinforcement learning, they had the program play itself and learn from the results.

AlphaGo Zero skipped the first step. The program began as a blank slate, knowing only the rules of Go, and played games against itself. At first, it placed stones randomly on the board. Over time it got better at evaluating board positions and identifying advantageous moves. It also learned many of the canonical elements of Go strategy and discovered new strategies all its own. “When you learn to imitate humans the best you can do is learn to imitate humans,” said Satinder Singh, a computer scientist at the University of Michigan who was not involved with the research. “In many complex situations there are new insights you’ll never discover.”

After three days of training and 4.9 million training games, the researchers matched AlphaGo Zero against the earlier champion-beating version of the program. AlphaGo Zero won 100 games to zero.

To expert observers, the rout was stunning. Pure reinforcement learning would seem to be no match for the overwhelming number of possibilities in Go, which is vastly more complex than chess: You’d have expected AlphaGo Zero to spend forever searching blindly for a decent strategy. Instead, it rapidly found its way to superhuman abilities.

The efficiency of the learning process owes to a feedback loop. Like its predecessor, AlphaGo Zero determines what move to play through a process called a “tree search.” The program starts with the current board and considers the possible moves. It then considers what moves its opponent could play in each of the resulting boards, and then the moves it could play in response and so on, creating a branching tree diagram that simulates different combinations of play resulting in different board setups.

AlphaGo Zero can’t follow every branch of the tree all the way through, since that would require inordinate computing power. Instead, it selectively prunes branches by deciding which paths seem most promising. It makes that calculation — of which paths to prune — based on what it has learned in earlier play about the moves and overall board setups that lead to wins.

Earlier versions of AlphaGo did all this, too. What’s novel about AlphaGo Zero is that instead of just running the tree search and making a move, it remembers the outcome of the tree search — and eventually of the game. It then uses that information to update its estimates of promising moves and the probability of winning from different positions. As a result, the next time it runs the tree search it can use its improved estimates, trained with the results of previous tree searches, to generate even better estimates of the best possible move.
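The mechanism in the last three paragraphs — prune the tree toward promising branches, then remember each search's outcome so the next search starts from better estimates — can be sketched with simplified node statistics and a simplified selection rule (both are stand-ins, not DeepMind's actual formulas):

```python
import math

class Node:
    def __init__(self, prior):
        self.prior = prior       # the network's initial estimate for this move
        self.visits = 0
        self.total_value = 0.0

    def mean_value(self):
        return self.total_value / self.visits if self.visits else 0.0

def select(children, c=1.5):
    """Pick the most promising branch: learned value plus an exploration
    bonus that favors moves rated highly but tried rarely."""
    total = sum(ch.visits for ch in children.values()) or 1
    def score(ch):
        return ch.mean_value() + c * ch.prior * math.sqrt(total) / (1 + ch.visits)
    return max(children, key=lambda move: score(children[move]))

def backup(node, outcome):
    """Remember the search result so later selections use improved estimates."""
    node.visits += 1
    node.total_value += outcome

# Toy demo: two candidate moves with equal priors; move "a" keeps winning.
children = {"a": Node(0.5), "b": Node(0.5)}
for _ in range(100):
    move = select(children)
    backup(children[move], 1.0 if move == "a" else -1.0)
```

Because every `backup` feeds the next `select`, the search rapidly concentrates its visits on the winning branch rather than exploring all branches blindly — a small-scale version of the feedback loop the article credits for AlphaGo Zero's efficiency.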

The computational strategy that underlies AlphaGo Zero is effective primarily in situations in which you have an extremely large number of possibilities and want to find the optimal one. In the Nature paper describing the research, the authors of AlphaGo Zero suggest that their system could be useful in materials exploration — where you want to identify atomic combinations that yield materials with different properties — and protein folding, where you want to understand how a protein’s precise three-dimensional structure determines its function.

As for Go, . . .

Continue reading.

Written by LeisureGuy

18 October 2017 at 4:12 pm

Posted in Go, Software, Technology

Jennifer Rubin’s Distinguished Persons of the Week: The NFL players


Jennifer Rubin writes in the Washington Post:

Some stood, some knelt. Regardless of their posture (literally and figuratively), many NFL players in the wake of the first Sunday of widespread protests have sounded eloquent on the subject of race and the president.

Aaron Rodgers, Green Bay Packers quarterback, said:

Beauty is, it’s a free country so they can choose to do it or not. The messaging towards this unfortunately needs to continue to be redirected, I think. It’s never been about the national anthem. It’s never been about the military. We’re all patriotic in the locker room. We love our troops. This is about something bigger than that — an invitation to show unity …

This is about unity and love and growing together as a society, and starting a conversation around something that may be a little bit uncomfortable for people. But we’ve got to come together and talk about these things and grow as a community, as a connected group of individuals in our society, and we’re going to continue to show love and unity.

Doug Baldwin of the Seattle Seahawks explained in a written statement:

I’m not surprised by Trump’s comments. He has shown, since the beginning, his dehumanized nature. To think he would be anything different is to not know the reality of his presidency. He has surrounded himself with like minded people and has removed anyone who challenges him. He acts like a child craving attention and any attention will do.

Although these recent comments are not the worst things he’s said or done, I do believe that this will be a unifying moment for the sports world. And with as much influence as athletes have on the younger generation, this can be an opportunity for us to change the narrative of society and point to the president as a poor example of what you can become if you remain close minded, ignorant and uneducated. . .


Continue reading. And there’s a video at the link.

I cannot help but note that the NFL players are more eloquent, educated, and sympathetic than President Trump.

Written by LeisureGuy

1 October 2017 at 11:18 am

I enjoyed this one


I am playing White, “Claire” on SparkChess is Black:

  1. e4 d5
  2. e5 f6
  3. f4 fxe5
  4. fxe5 Nc6
  5. d4 Rb8
  6. h3 Bd7
  7. Nf3 b6
  8. Bb5 Nxe5
  9. Bxd7+ Nxd7
  10. Ne5 Nxe5
  11. dxe5 Rb7
  12. e6 Qd6
  13. Qh5+ g6
  14. Qe2 Qg3+
  15. Kd1 a5
  16. Rf1 Ra7
  17. Rxf8+ Kxf8
  18. Qf1+ Ke8
  19. Qf7+ Kd8
  20. Qf8#


Written by LeisureGuy

9 September 2017 at 2:56 pm

Posted in Chess

In the Age of Google DeepMind, Do the Young Go Prodigies of Asia Have a Future?


Dawn Chan in the New Yorker:

Choong-am Dojang is far from a typical Korean school. Its best pupils will never study history or math, nor will they receive traditional high-school diplomas. The academy, which operates above a bowling alley on a narrow street in northwestern Seoul, teaches only one subject: the game of Go, known in Korean as baduk and in Chinese as wei qi. Each day, Choong-am’s students arrive at nine in the morning, find places at desks in a fluorescent-lit room, and play, study, memorize, and review games—with breaks for cafeteria meals or an occasional soccer match—until nine at night.
Choong-am, which is the product of a merger between four top Go academies, is currently the biggest of a handful of dojangs in South Korea. Many of the students enrolled in these schools have been training since they were four or five, perhaps playing informally at first but later growing obsessed with the game’s beauty and the competitiveness and camaraderie that surround it. (Indeed, the word “dojang” more commonly refers to a martial-arts academy.) Lee Hajin, the secretary-general of the International Go Federation, told me that she left home when she was nine. With only her clothes and a stuffed-toy poodle backpack that her parents gave her for Christmas, she moved across the country, into the home of a Go master and his wife.
The aim of all serious Go pupils is ultimately to be designated a professional. This makes them eligible to compete in Asia’s pro tournaments, which are broadcast on TV and sponsored by companies such as Samsung, LG, and the instant-noodle maker Nongshim. At the highest-level tournaments, first-place winners can win as much as three hundred thousand dollars. But the competition is fierce. It is estimated that, of South Korea’s three hundred and twenty pros, only around fifty are able to earn a living on tournament winnings. Sometimes, after losing an especially important match, players joke about drowning themselves in the Han River. Lee Hajin recalls having such bad insomnia before important games that her teacher’s wife would bring her a shot of whiskey, diluted in a cup of water, to help her fall asleep.
Go itself is simple in design but complex in its possible outcomes: two players, one using white stones and the other black, take turns placing their pieces on a square board, capturing territory and boxing each other out. If a child dedicating her life to such a game seems unfathomable elsewhere in the world, it makes more sense in East Asia, where Go has a twenty-five-hundred-year cultural history. Through the centuries, princes, generals, monks, and farmers have played the game, not only to win but to build character and develop mental acumen. “It’s also psychology, philosophy—it’s art,” Fan Hui, the reigning European Go champion, told me. In Tang-dynasty China, the game was considered one of the four arts that a cultivated gentleman ought to master, along with calligraphy, painting, and playing the lute. So many East Asian leaders have studied it that political scientists are wont to identify traces of Go strategy in the continent’s real-world conflicts. Henry Kissinger, for instance, argued that during the Taiwan Strait crisis of the nineteen-fifties, “both sides were playing by wei qi rules.” Today, Seoul’s Myongji University even offers degrees in Go studies. According to Daniela Trinks, a professor in the department, one in four Koreans knows how to play the game.
But recent events could pose a threat to Go’s cultural supremacy. Earlier this week, one of the world’s top players, Lee Sedol, lost two high-profile matches—the first of a planned five—to AlphaGo, an artificial-intelligence program created by Google DeepMind. The same program beat Fan Hui, 5–0, back in October. Until then, Go had been considered the only popular two-player board game that humans would continue to dominate for the foreseeable future, its array of outcomes still too dizzyingly vast for even increasingly smart machines to pick out the best moves. That, of course, has now changed. Even if Lee miraculously comes back to win his remaining three games, the first of which takes place on Saturday, in Seoul, AlphaGo promises to grow even more formidable. (“If there’s a limit to improvement, we haven’t hit it yet,” Demis Hassabis, DeepMind’s founder and C.E.O., told me.) What’s notable, too, is how quickly AlphaGo improves compared with humans. The program lost two quick, unofficial matches with Fan Hui that were scheduled between longer, official ones, which the computer won. Five months later, it is capable of defeating Lee, who is ranked far higher than Fan. According to Ben Lockhart, one of the best amateur Go players born outside East Asia, Fan “could have trained his whole life and would never have gotten close to where Lee Sedol is.”
Lockhart, as it happens, is the lone American pupil currently enrolled at Choong-am. He is an anomaly at the dojang, not just because he is a foreigner but also because he has memories of a life without intensive Go. When he was in high school, in Brooklyn, playing the game but also “smoking a lot of weed and listening to Noam Chomsky in Prospect Park,” his peers in Seoul were already deep into their training regimens. Now, however, Lockhart is more disciplined. Last Friday, he began his morning by trying to make progress through a book of six hundred Go problems. These exercise books are a common component of Go pedagogy, as are actual matches and occasional lectures by professionals. Students sometimes memorize parts of games, or even whole games, from the canon. They also practice specific skills, such as “reading,” or peering into the future at branching paths of possibility—an activity that’s not dissimilar to the so-called tree-search components of AlphaGo and many other game-playing A.I.s.
In the long course of their training, students may play upwards of ten thousand games, developing intuitions about which moves tend to work out well and which don’t. AlphaGo, analogously, improves by playing itself, with the added advantage that it can flit through games quickly, while humans take time to think and place stones on a board. (In January, the DeepMind team published a paper in Nature noting that one of AlphaGo’s neural networks had played more than a million games in a single day.) But there is one particularly interesting difference between a dojang’s pedagogical program and AlphaGo’s: human students receive active guidance from teachers, who can draw attention to specific mistakes or suggest generalized patterns to seek out or avoid. According to DeepMind’s most recent account, although AlphaGo’s learning is shaped by observations of expert human games, it doesn’t receive targeted advice from any outsiders.
Although some Go players are eager to see whether computers will unlock undiscovered moves and strategies, others seem despondent. . .

Continue reading.

Written by LeisureGuy

9 August 2017 at 3:26 pm

“Chess is like looking out over a vast open ocean; checkers is like looking into a bottomless well.”


The quotation is by Marion Tinsley, the greatest checker player who ever lived, who is the subject of Alexis Madrigal’s article in the Atlantic. From the article, this anecdote:

. . . When Tinsley came to Edmonton in 1991 to play the friendly matches against Chinook, Schaeffer was also blown away that the world champion would agree to play this computer for fun.

The two men sat in his office and began the matches, Schaeffer moving for Chinook and entering changes in the game into the system. The first nine games were all draws. In the tenth game, Chinook was cruising along, searching 16 to 17 moves deep into the future. And it made a move where it thought it had a small advantage. “Tinsley immediately said, ‘You’re gonna regret that.’” Schaeffer said. “And at the time, I was thinking, what the heck does he know, what could possibly go wrong?” But, in fact, from that point forward, Tinsley began to pull ahead.

“In his notes to the game, he later wrote that he had seen all the way to the end of the game and he knew he was going to win,” Schaeffer said.

The computer scientist became fixated on that moment. After the match, he ran simulations to examine what had gone wrong. And he discovered that, in fact, from that move to the end of the game, if both sides played perfectly, he would lose every time. But what he discovered next blew his mind. To see that, a computer or a human would have to look 64 moves ahead.

“I was absolutely stunned,” Schaeffer told me. “How do you compete with somebody whose understanding of the game is so deep that he immediately knows through experience or knowledge or doing some amazing search that he was gonna win that position?” . . .

Written by LeisureGuy

19 July 2017 at 2:14 pm

Posted in Games, Technology
