Later On

A blog written for those whose interests more or less match mine.

Archive for the ‘Software’ Category

AI is transforming Google search. It won’t stop there


Cade Metz reports in Wired:

Yesterday, the 46-year-old Google veteran who oversees its search engine, Amit Singhal, announced his retirement. And in short order, Google revealed that Singhal’s rather enormous shoes would be filled by a man named John Giannandrea. On one level, these are just two guys doing something new with their lives. But you can also view the pair as the ideal metaphor for a momentous shift in the way things work inside Google—and across the tech world as a whole.

Giannandrea, you see, oversees Google’s work in artificial intelligence. This includes deep neural networks, networks of hardware and software that approximate the web of neurons in the human brain. By analyzing vast amounts of digital data, these neural nets can learn all sorts of useful tasks, like identifying photos, recognizing commands spoken into a smartphone, and, as it turns out, responding to Internet search queries. In some cases, they can learn a task so well that they outperform humans. They can do it better. They can do it faster. And they can do it at a much larger scale.

This approach, called deep learning, is rapidly reinventing so many of the Internet’s most popular services, from Facebook to Twitter to Skype. Over the past year, it has also reinvented Google Search, where the company generates most of its revenue. Early in 2015, as Bloomberg recently reported, Google began rolling out a deep learning system called RankBrain that helps generate responses to search queries. As of October, RankBrain played a role in “a very large fraction” of the millions of queries that go through the search engine with each passing second.

As Bloomberg says, it was Singhal who approved the roll-out of RankBrain. And before that, he and his team may have explored other, simpler forms of machine learning. But for a time, some say, he represented a steadfast resistance to the use of machine learning inside Google Search. In the past, Google relied mostly on algorithms that followed a strict set of rules set by humans. The concern—as described by some former Google employees—was that it was more difficult to understand why neural nets behaved the way they did, and more difficult to tweak their behavior.

These concerns still hover over the world of machine learning. The truth is that even the experts don’t completely understand how neural nets work. But they do work. If you feed enough photos of a platypus into a neural net, it can learn to identify a platypus. If you show it enough computer malware code, it can learn to recognize a virus. If you give it enough raw language—words or phrases that people might type into a search engine—it can learn to understand search queries and help respond to them. In some cases, it can handle queries better than algorithmic rules hand-coded by human engineers. Artificial intelligence is the future of Google Search, and if it’s the future of Google Search, it’s the future of so much more.
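The pattern Metz keeps returning to, learning a classifier from labeled examples rather than from hand-written rules, can be sketched with a single artificial neuron trained by gradient descent. This is only a toy stand-in for the deep, many-layered networks the article describes, and the data is invented:

```python
import math

def train_neuron(examples, epochs=2000, lr=0.5):
    """Learn weights from labeled (features, label) pairs -- no hand-written rules."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in examples:
            z = w[0] * x[0] + w[1] * x[1] + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid activation
            err = p - y                       # gradient of the log-loss
            w[0] -= lr * err * x[0]
            w[1] -= lr * err * x[1]
            b -= lr * err
    return w, b

def predict(w, b, x):
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1 if z > 0 else 0

# Toy "platypus detector": label 1 when both made-up features are high.
data = [([0.9, 0.8], 1), ([0.8, 0.9], 1), ([0.1, 0.2], 0), ([0.2, 0.1], 0)]
w, b = train_neuron(data)
```

Nothing in `train_neuron` encodes what a platypus looks like; the weights come entirely from the examples, which is exactly the property that makes the resulting numbers hard to interpret.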

Sticking to the Rules

This past fall, I sat down with a former Googler who asked that I withhold his name because he wasn’t authorized to talk about the company’s inner workings, and we discussed the role of neural networks inside the company’s search engine. At one point, he said, the Google ads team had adopted neural nets to help target ads, but the “organic search” team was reluctant to use this technology. Indeed, over the years, discussions of this dynamic have popped up every now and again on Quora, the popular question-and-answer site.

Edmond Lau, who worked on Google’s search team and is the author of the book The Effective Engineer, wrote in a Quora post that Singhal carried a philosophical bias against machine learning. With machine learning, he wrote, the trouble was that “it’s hard to explain and ascertain why a particular search result ranks more highly than another result for a given query.” And, he added: “It’s difficult to directly tweak a machine learning-based system to boost the importance of certain signals over others.” Other ex-Googlers agreed with this characterization.

Yes, Google’s search engine was always driven by algorithms that automatically generate a response to each query. But these algorithms amounted to a set of definite rules. Google engineers could readily change and refine these rules. And unlike neural nets, these algorithms didn’t learn on their own. As Lau put it: “Rule-based scoring metrics, while still complex, provide a greater opportunity for engineers to directly tweak weights in specific situations.”
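Lau's distinction is easy to see in miniature: in a rule-based scorer, the weight on every signal is explicit, so boosting one signal over another is a one-line change. The signal names below are invented for illustration; Google's actual ranking signals are far more numerous and not public:

```python
# Hypothetical signal weights an engineer can tweak directly.
WEIGHTS = {"title_match": 3.0, "link_authority": 2.0, "freshness": 0.5}

def rule_based_score(signals):
    """Each signal's influence is explicit and individually adjustable."""
    return sum(WEIGHTS[name] * value for name, value in signals.items())

doc = {"title_match": 1.0, "link_authority": 0.4, "freshness": 0.2}
score = rule_based_score(doc)  # 3.0*1.0 + 2.0*0.4 + 0.5*0.2 = 3.9
```

In a trained neural net there is no such table of named weights to edit, which is the loss of control the ex-Googlers describe.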

But now, Google has incorporated deep learning into its search engine. And with its head of AI taking over search, the company seems to believe this is the way forward.

Losing Control

It’s true that with neural nets, you lose some control. But you don’t lose all of it, says Chris Nicholson, the founder of the deep learning startup Skymind. Neural networks are really just math—linear algebra—and engineers can certainly trace how the numbers behave inside these multi-layered creations. The trouble is that it’s hard to understand why a neural net classifies a photo or spoken word or snippet of natural language in a certain way.

“People understand the linear algebra behind deep learning. But the models it produces are less human-readable. They’re machine-readable,” Nicholson says. “They can retrieve very accurate results, but we can’t always explain, on an individual basis, what led them to those accurate results.”

What this means is that, in order to tweak the behavior of these neural nets, . . .

Continue reading.

Written by LeisureGuy

4 February 2016 at 12:40 pm

Wow! Computer running AlphaGo program defeats a professional Go player!


The video is from the post “AlphaGo: Mastering the ancient game of Go with Machine Learning” on the Google Research blog.

David Silver and Demis Hassabis, of Google’s DeepMind project, write:

Games are a great testing ground for developing smarter, more flexible algorithms that have the ability to tackle problems in ways similar to humans. Creating programs that are able to play games better than the best humans has a long history – the first classic game mastered by a computer was noughts and crosses (also known as tic-tac-toe) in 1952 as a PhD candidate’s project. Then fell checkers in 1994. Chess was tackled by Deep Blue in 1997. The success isn’t limited to board games, either – IBM’s Watson won first place on Jeopardy in 2011, and in 2014 our own algorithms learned to play dozens of Atari games just from the raw pixel inputs.

But one game has thwarted A.I. research thus far: the ancient game of Go. Invented in China over 2500 years ago, Go is played by more than 40 million people worldwide. The rules are simple: players take turns to place black or white stones on a board, trying to capture the opponent’s stones or surround empty space to make points of territory. Confucius wrote about the game, and its aesthetic beauty elevated it to one of the four essential arts required of any true Chinese scholar. The game is played primarily through intuition and feel, and because of its subtlety and intellectual depth it has captured the human imagination for centuries.

But as simple as the rules are, Go is a game of profound complexity. The search space in Go is vast — more than a googol times larger than chess (a number greater than there are atoms in the universe!). As a result, traditional “brute force” AI methods — which construct a search tree over all possible sequences of moves — don’t have a chance in Go. To date, computers have played Go only as well as amateurs. Experts predicted it would be at least another 10 years until a computer could beat one of the world’s elite group of Go professionals.
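The “googol times larger” claim checks out on a napkin. Using the rough figures commonly cited alongside this comparison (branching factor about 35 and game length about 80 moves for chess; about 250 and 150 for Go):

```python
import math

# log10 of (branching factor) ** (game length): a rough game-tree size.
chess_digits = 80 * math.log10(35)     # about 124 digits
go_digits    = 150 * math.log10(250)   # about 360 digits
excess = go_digits - chess_digits      # Go's tree is roughly 10**236 times larger

print(round(chess_digits), round(go_digits), round(excess))
```

A googol is 10 to the 100th, so a factor of roughly 10 to the 236th comfortably clears “more than a googol times larger.”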

We saw this as an irresistible challenge! We started building a system, AlphaGo, described in a paper in Nature this week, that would overcome these barriers. The key to AlphaGo is reducing the enormous search space to something more manageable. To do this, it combines a state-of-the-art tree search with two deep neural networks, each of which contains many layers with millions of neuron-like connections. One neural network, the “policy network”, predicts the next move, and is used to narrow the search to consider only the moves most likely to lead to a win. The other neural network, the “value network”, is then used to reduce the depth of the search tree — estimating the winner in each position in place of searching all the way to the end of the game.

AlphaGo’s search algorithm is much more human-like than previous approaches. For example, when Deep Blue played chess, it searched by brute force over thousands of times more positions than AlphaGo. Instead, AlphaGo looks ahead by playing out the remainder of the game in its imagination, many times over – a technique known as Monte-Carlo tree search. But unlike previous Monte-Carlo programs, AlphaGo uses deep neural networks to guide its search. During each simulated game, the policy network suggests intelligent moves to play, while the value network astutely evaluates the position that is reached. Finally, AlphaGo chooses the move that is most successful in simulation.
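The division of labor the authors describe, a policy network to narrow the tree’s breadth and a value network to cut its depth, can be sketched as a small negamax-style search. This is only a schematic of the idea, not AlphaGo’s actual algorithm, which uses Monte-Carlo tree search with randomized simulations and visit statistics:

```python
def guided_search(state, policy, value, apply_move, depth, top_k=3):
    """Schematic policy/value-guided search.
    policy(state) -> moves ranked best-first (narrows the tree's breadth)
    value(state)  -> estimated outcome for the side to move (cuts the depth)
    """
    if depth == 0:
        return value(state)            # estimate instead of playing to the end
    moves = policy(state)[:top_k]      # consider only the most promising moves
    if not moves:
        return value(state)
    # Negamax: the best move is the one that minimizes the opponent's best reply.
    return max(-guided_search(apply_move(state, m), policy, value,
                              apply_move, depth - 1, top_k)
               for m in moves)
```

With a strong value function, even a shallow search of this shape plays well; AlphaGo’s further ingredient, per the passage above, is playing out many imagined games and averaging over them rather than doing a single fixed-depth sweep.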

We first trained the policy network on 30 million moves from games played by human experts, until it could predict the human move 57% of the time (the previous record before AlphaGo was 44%). But our goal is to beat the best human players, not just mimic them. To do this, AlphaGo learned to discover new strategies for itself, by playing thousands of games between its neural networks, and gradually improving them using a trial-and-error process known as reinforcement learning. This approach led to much better policy networks, so strong in fact that the raw neural network (immediately, without any tree search at all) can defeat state-of-the-art Go programs that build enormous search trees.

These policy networks were in turn used to train the value networks, again by reinforcement learning from games of self-play. These value networks can evaluate any Go position and estimate the eventual winner – a problem so hard it was believed to be impossible.

Of course, all of this requires a huge amount of compute power, so we made extensive use of Google Cloud Platform, which enables researchers working on AI and Machine Learning to access elastic compute, storage and networking capacity on demand. In addition, new open source libraries for numerical computation using data flow graphs, such as TensorFlow, allow researchers to efficiently deploy the computation needed for deep learning algorithms across multiple CPUs or GPUs.

So how strong is AlphaGo? To answer this question, we played a tournament between AlphaGo and the best of the rest – the top Go programs at the forefront of A.I. research. Using a single machine, AlphaGo won all but one of its 500 games against these programs. In fact, AlphaGo even beat those programs after giving them 4 free moves headstart [i.e., a four-stone handicap, roughly equivalent to Queen odds in chess – LG] at the beginning of each game. A high-performance version of AlphaGo, distributed across many machines, was even stronger. . .

Continue reading.

Here’s the paper (“Mastering the game of Go with deep neural networks and tree search”) in Nature. The abstract:

The game of Go has long been viewed as the most challenging of classic games for artificial intelligence owing to its enormous search space and the difficulty of evaluating board positions and moves. Here we introduce a new approach to computer Go that uses ‘value networks’ to evaluate board positions and ‘policy networks’ to select moves. These deep neural networks are trained by a novel combination of supervised learning from human expert games, and reinforcement learning from games of self-play. Without any lookahead search, the neural networks play Go at the level of state-of-the-art Monte Carlo tree search programs that simulate thousands of random games of self-play. We also introduce a new search algorithm that combines Monte Carlo simulation with value and policy networks. Using this search algorithm, our program AlphaGo achieved a 99.8% winning rate against other Go programs, and defeated the human European Go champion by 5 games to 0. This is the first time that a computer program has defeated a human professional player in the full-sized game of Go, a feat previously thought to be at least a decade away.

Written by LeisureGuy

27 January 2016 at 12:26 pm

Posted in Games, Go, Software, Technology

What is the real legacy of Steve Jobs?


Steve Jobs had a repellent personality, although in material terms he was quite successful. Sue Halpern in the NY Review of Books reviews two movies and a book about Steve Jobs:

Steve Jobs: The Man in the Machine
a documentary film directed by Alex Gibney

Steve Jobs
a film directed by Danny Boyle

Becoming Steve Jobs: The Evolution of a Reckless Upstart into a Visionary Leader
by Brent Schlender and Rick Tetzeli
Crown Business, 447 pp., $30.00

Partway through Alex Gibney’s earnest documentary Steve Jobs: The Man in the Machine, an early Apple Computer collaborator named Daniel Kottke asks the question that appears to animate Danny Boyle’s recent film about Jobs: “How much of an asshole do you have to be to be successful?” Boyle’s Steve Jobs is a fractious, melodramatic fugue that cycles through the themes and variations of Jobs’s life in three acts—the theatrical, stage-managed product launches of the Macintosh computer (1984), the NeXT computer (1988), and the iMac computer (1998). For Boyle (and his screenwriter Aaron Sorkin) the answer appears to be “a really, really big one.”

Gibney, for his part, has assembled a chorus of former friends, lovers, and employees who back up that assessment, and he is perplexed about it. By the time Jobs died in 2011, his cruelty, arrogance, mercurial temper, bullying, and other childish behavior were well known. So, too, were the inhumane conditions in Apple’s production facilities in China—where there had been dozens of suicides—as well as Jobs’s halfhearted response to them. Apple’s various tax avoidance schemes were also widely known. So why, Gibney wonders as his film opens—with thousands of people all over the world leaving flowers and notes “to Steve” outside Apple Stores the day he died, and fans recording weepy, impassioned webcam eulogies, and mourners holding up images of flickering candles on their iPads as they congregate around makeshift shrines—did Jobs’s death engender such planetary regret?

The simple answer is voiced by one of the bereaved, a young boy who looks to be nine or ten, swiveling back and forth in a desk chair in front of his computer: “The thing I’m using now, an iMac, he made,” the boy says. “He made the iMac. He made the Macbook. He made the Macbook Pro. He made the Macbook Air. He made the iPhone. He made the iPod. He’s made the iPod Touch. He’s made everything.”

Yet if the making of popular consumer goods was driving this outpouring of grief, then why hadn’t it happened before? Why didn’t people sob in the streets when George Eastman or Thomas Edison or Alexander Graham Bell died—especially since these men, unlike Steve Jobs, actually invented the cameras, electric lights, and telephones that became the ubiquitous and essential artifacts of modern life?* The difference, suggests the MIT sociologist Sherry Turkle, is that people’s feelings about Steve Jobs had less to do with the man, and less to do with the products themselves, and everything to do with the relationship between those products and their owners, a relationship so immediate and elemental that it elided the boundaries between them. “Jobs was making the computer an extension of yourself,” Turkle tells Gibney. “It wasn’t just for you, it was you.”

In Gibney’s film, Andy Grignon, the iPhone senior manager from 2005 to 2007, observes that

Apple is a business. And we’ve somehow attached this emotion [of love, devotion, and a sense of higher purpose] to a business which is just there to make money for its shareholders. That’s all it is, nothing more. Creating that association is probably one of Steve’s greatest accomplishments.

Jobs was a consummate showman. It’s no accident that Sorkin tells his story of Jobs through product launches. These were theatrical events—performances—where Jobs made sure to put himself on display as much as he did whatever new thing he was touting. “Steve was P.T. Barnum incarnate,” says Lee Clow, the advertising executive with whom he collaborated closely. “He loved the ta-da! He was always like, ‘I want you to see the Smallest Man in the World!’ He loved pulling the black velvet cloth off a new product, everything about the showbiz, the marketing, the communications.”

People are drawn to magic. Steve Jobs knew this, and it was one reason why he insisted on secrecy until the moment of unveiling. But Jobs’s obsession with secrecy went beyond his desire to preserve the “a-ha!” moment. Is Steve Jobs “the most successful paranoid in business history?,” The Economist asked in 2005, a year that saw Apple sue, among others, a Harvard freshman running a site on the Internet that traded in gossip about Apple and other products that might be in the pipeline. Gibney tells the story of Jason Chen, a Silicon Valley journalist whose home was raided in 2010 by the California Rapid Enforcement Allied Computer Team (REACT), a multi-agency SWAT force, after he published details of an iPhone model then in development. A prototype of the phone had been left in a bar by an Apple employee and then sold to Chen’s employer, the website Gizmodo, for $5,000. Chen had returned the phone to Apple four days before REACT broke down his door and seized computers and other property. Though REACT is a public entity, Apple sits on its steering committee, leaving many wondering if law enforcement was doing Apple’s bidding.

Whether to protect trade secrets, or sustain the magic, or both, Jobs was adamant that Apple products be closed systems that discouraged or prevented tinkering. This was the rationale behind Apple’s lawsuit against people who “jail-broke” their devices in order to use non-Apple, third-party apps—a lawsuit Apple eventually lost. And it can be seen in Jobs’s insistence, from the beginning, on making computers that integrated both software and hardware—unlike, for example, Microsoft, whose software can be found on any number of different kinds of PCs; this has kept Apple computer prices high and clones at bay. An early exchange in Boyle’s movie has Steve Wozniak arguing for a personal computer that could be altered by its owner, against Steve Jobs, who believed passionately in end-to-end control. “Computers aren’t paintings,” Wozniak says, but that is exactly what Jobs considered them to be. The inside of the original Macintosh bears the signatures of its creators.

The magic Jobs was selling went beyond the products his company made: . . .

Continue reading.

Later in the review:

. . . As Gibney puts it, “More than a CEO, he positioned himself as an oracle. A man who could tell the future.”

And he could—some of the time. It’s important to remember, though, that when Jobs was forced out of Apple in 1985, the two computer projects into which he had been pouring company resources, the Apple 3 and another computer called the Lisa, were abject failures that nearly shut the place down. A recurring scene in Boyle’s fable is Jobs’s unhappy former partner, the actual inventor of the original Apple computer, Steve Wozniak, begging him to publicly recognize the team that made the Apple 2, the machine that kept the company afloat while Jobs pursued these misadventures, and Jobs scornfully blowing him off.

Jobs’s subsequent venture after he left Apple, a workstation computer aimed at researchers and academics, appropriately called the NeXT, was even more disastrous. The computer was so overpriced and underpowered that few were sold. Boyle shows Jobs obsessing over the precise dimensions of the black plastic cube that housed the NeXT, rather than on the computer’s actual deficiencies, just as Jobs had obsessed over the minute gradations of beige for the Apple 1. Neither story is apocryphal, and both have been used over the years to illustrate, for better and for worse, Jobs’s preternatural attention to detail. (Jobs also spent $100,000 for the NeXT logo.)

Sorkin’s screenplay claims that the failure of the NeXT computer was calculated—that it was designed to catapult Jobs back into the Apple orbit. Fiction allows such inventions, but as the business journalists Brent Schlender and Rick Tetzeli point out in their semipersonal recounting, Becoming Steve Jobs, “There was no hiding NeXT’s failure, and there was no hiding the fact that NeXT’s failure was primarily Steve’s doing.”

Still, Jobs did use the NeXT’s surviving asset, its software division, as the wedge in the door that enabled him to get back inside his old company a decade after he’d been pushed out. NeXT software, which was developed by Avie Tevanian, a loyal stalwart until Jobs tossed him aside in 2006, became the basis for the intuitive, stable, multitasking operating system used by Mac computers to this day. At the time, though, Apple was in free fall, losing $1 billion a year and on the cusp of bankruptcy. The graphical, icon-based operating system undergirding the Macintosh was no longer powerful or flexible enough to keep up with the demands of its users. Apple needed a new operating system, and Steve Jobs just happened to have one. Or, perhaps more accurately, he had a software engineer—Tevanian—who could rejigger NeXT’s operating system and use it for the Mac, which may have been Jobs’s goal all along. Less than a year after Jobs sold the software to Apple for $429 million and a fuzzily defined advisory position at the company, the Apple CEO was gone, and the board of directors was gone, and Jobs was back in charge. . .

Written by LeisureGuy

24 January 2016 at 2:00 pm

Delicate story on FBI surveillance of innocent civilians


This interesting report by Joseph Cox in Motherboard seems to be walking on eggshells a bit in pointing out the misdeeds of the FBI. As though fearful of enraging the beast. Are we already at that stage?

Written by LeisureGuy

21 January 2016 at 8:28 pm

WOW!! Check out the Omnity search engine


You have to sign up to use it, but it’s free and also amazing. Read more about it in the Motherboard article by Nicholas DeLeon:

One new search engine is so confident in its technology that it thinks it could buy Google, and not the other way around.

In development for four years and only now, at CES 2016, out of so-called “stealth mode” in which the company said not a peep to the press, Omnity is a new kind of search engine that asks the question: What if, instead of searching for keywords like “baseball scores” or “best-rated Nintendo 64 games,” a search engine let users search across disparate documents, from Wikipedia pages and news articles to patent filings and PDFs, in order to find shared interconnectedness?

“What this lets users do is avoid the tyranny of taxonomy,” said Omnity CEO Brian Sager at a Tuesday evening CES event called Digital Experience. “We probably should trademark that,” he then joked.

Sager explained that when Omnity searches across documents, it throws out “grammatical glue but semantic noise”—commonly used words like “the,” “he,” “she,” or “it.” Stripped of this “noise,” Omnity is then able to analyze the remaining “rare words” to find common threads that link together different documents.

In one example, Sager searched for documents related to the news article headline “CEO Exits After Mutual Fund Implodes,” at which point Omnity pulled up related Wikipedia and NPR articles and a patent application that the company had filed. In another example, Omnity was asked to find more information about Ford becoming an information technology company. “If you wanted to find more information on this topic what would you look for?” asked Sager. “Keywords like Ford? Technology? Car? What would you get? Here, Omnity is able to analyze the words in that article to pull up further articles on topics like Ford’s progress with self-driving cars, its Palo Alto research lab, and a patent application.”
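Stripped of the marketing, the recipe Sager sketches is stop-word filtering plus rarity-weighted word overlap. A minimal sketch follows; the stop-word list, scoring function, and sample corpus are all invented for illustration, since Omnity’s actual method is not public:

```python
import math
from collections import Counter

# A tiny stand-in for the "grammatical glue" Sager describes.
STOP_WORDS = {"the", "a", "an", "he", "she", "it", "is",
              "of", "to", "and", "in", "for", "after"}

def content_words(text):
    """Throw out the grammatical glue, keep candidate signal words."""
    return {w.strip(".,").lower() for w in text.split()} - STOP_WORDS

def relatedness(doc_a, doc_b, corpus):
    """Weight each shared word by its rarity across the corpus (IDF-style),
    so words appearing in every document contribute nothing."""
    df = Counter()                      # document frequency of each word
    for doc in corpus:
        df.update(content_words(doc))
    shared = content_words(doc_a) & content_words(doc_b)
    return sum(math.log(len(corpus) / df[w]) for w in shared)

corpus = [
    "CEO exits after mutual fund implodes",
    "The mutual fund filed a patent for risk scoring",
    "Ford is becoming a technology company",
]
```

Here the first two documents share the rare words “mutual” and “fund,” so they score as related even though neither mentions the other, which is the “shared interconnectedness” idea in miniature.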

Continue reading.

You can sign up here.

Written by LeisureGuy

6 January 2016 at 10:53 am

Posted in Software, Technology

Techno-thriller science-fiction, only not: A real, live cybersecurity issue


Joseph Cox writes at Motherboard:

In December, everyone was starkly reminded of the dangers posed by backdoors in security products: Juniper Networks, a massive company that creates popular networking equipment, found “unauthorized” code in its ScreenOS software which would allow an attacker to take total control of Juniper NetScreen firewalls, or even, with enough resources, passively decrypt VPN traffic.

In response, Juniper released a patch, and advised customers to immediately update their systems. According to a “Cyber Alert” document obtained by Motherboard, the US Department of Defense (DoD) urged a slew of contractors to do the same.

A few days after the backdoor news broke, the Defense Security Service (DSS) sent out the alert detailing the problems found in Juniper’s products. The DSS is a part of the DoD, and, amongst other things, supervises industrial security and provides security education.

“DSS provides this report to cleared contractor security professionals to facilitate the awareness of cyber threats to their classified and unclassified networks and to aid in the identification and development of appropriate actions, priorities, and follow-on measures,” reads the unclassified document, dated December 22, 2015.

The Cyber Alert then briefly describes the two issues at hand—a hard-coded password that would grant an attacker remote access, and a separate problem that may allow an attacker to decrypt intercepted traffic—and provides signatures for detecting some unauthorized entry attempts.
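Of the two issues, the hard-coded password is the easier to picture. Here is a hypothetical sketch of the vulnerability class; this is Python rather than ScreenOS code, and the password string is invented, not the real leaked constant:

```python
# Invented string for illustration -- NOT the actual Juniper backdoor value.
BACKDOOR = "<<< hard-coded master password >>>"

def authenticate(username, password, user_db):
    """One constant string authenticates as ANY user. Whoever extracts it
    from the shipped firmware binary gets the same remote access."""
    if password == BACKDOOR:                      # the backdoor path
        return True
    return user_db.get(username) == password      # (real systems compare hashes)
```

Because the constant ships inside every copy of the software, patching is the only remedy once it is discovered, which is why the DoD urged contractors to update immediately.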

The document adds that “It is up to the recipients of the alerts to decide how to use the information contained in this document.”

Shortly after Juniper released their patch in mid-December, researchers from Dutch cybersecurity firm Fox-IT were able to dig up the baked-in password that would have given attackers full access to Juniper’s affected products. Rapid 7, another security company, also found the password, and subsequently published it. (The DSS Cyber Alert also includes the hard-coded password).

Days later, Ralf-Philipp Weinmann, founder and CEO of German consultancy Comsecuris, found evidence that the vulnerability facilitating decryption of VPN traffic had either been inserted by the US National Security Agency (NSA), or relied upon cryptographic weaknesses that the agency had deliberately created. Weinmann pointed out that Juniper’s patch doesn’t entirely solve the problem, either.

The Intercept, pulling up documents from the Snowden archive, reported that the NSA had helped the UK’s Government Communications Headquarters (GCHQ) find vulnerabilities in Juniper products back in February 2011. . .

Continue reading.

The key graf:

. . . it is impossible to create a backdoor for a friendly entity without making an avenue of attack for an adversary. There is no magical formula for giving the good guys access, while keeping the bad ones out. This is the point being repeated by companies like Apple, which has introduced robust hard-disk encryption for its devices and message services, while law enforcement agencies demand backdoors to be inserted into consumer products. . .

Written by LeisureGuy

2 January 2016 at 1:25 pm

El Capitan (OS X) kills a capability: Easy volume control bites the dust

with 2 comments

I could not adjust my computer speakers’ volume after installing El Capitan, and the hard-wired keys did not work. Also, the mute button is no longer a toggle (why?), so turning the sound back on takes a separate key: the former toggle is now simply “Mute,” and no matter how many times you press it, mute is what you get. To get sound back, you press either volume key, which turns mute off if it’s on and otherwise has no effect: in effect, an “on” button.

A change this big is deliberate, and the only reason I can think of is that Apple plans to sell, through the App Store, a little app that will restore the previous functionality: a forced add-on.

We’ll see.

In the meantime, if you want to send feedback, be my guest. Apple set up its feedback form so that it can learn how customers respond to its products. Let them know, if you’re inclined. It would be interesting to see a major corporation have to back down due to customer protest.

Written by LeisureGuy

26 December 2015 at 1:12 pm

