Later On

A blog written for those whose interests more or less match mine.

Archive for the ‘Software’ Category

Artificial Neural Nets Grow Brainlike Navigation Cells

Evolution seems to find efficient solutions. John Rennie writes in Quanta:

Having the sense to take a shortcut, the most direct route from point A to point B, doesn’t sound like a very impressive test of intelligence. Yet according to a new report appearing today in Nature, in which researchers describe the performance of their new navigational artificial intelligence, the system’s ability to explore complex simulated environments and find the shortest route to a goal put it in a class previously reserved for humans and other living things.

The surprising key to the system’s performance was that while learning how to navigate, the neural net spontaneously developed the equivalent of “grid cells,” sets of brain cells that enable at least some mammals to track their location in space.

For neuroscientists, the new work seems to offer important clues about how grid cells in living brains make us better navigators. It also shows how neural nets could contribute greatly to future neuroscience studies: Neil Burgess, a cognitive neuroscientist at University College London who was not involved with the study, suggested that the systems should “provide fertile ground for understanding how and why the human brain works as it does.”

Meanwhile, for AI researchers, the work’s relevance to improving automated navigation systems is obvious. But its greater importance might eventually lie in suggesting a more general way to enhance the intelligence of machines.

According to the researchers Andrea Banino at the British AI company DeepMind and Caswell Barry at University College London, who were lead authors on the new Nature paper, the project evolved out of questions they had about the function of the brain’s grid cells. Grid cells are often called “the brain’s GPS” because of their importance to navigation in many animal species. (Edvard Moser and May-Britt Moser shared a 2014 Nobel Prize for their discovery of grid cells nine years earlier.) These clusters of neurons, which are arranged in roughly hexagonal arrays, collectively work like the inertial guidance systems on ships, aircraft and missiles: They keep track of how the body is moving through space, even in total darkness. “It’s basically updating your belief about where you are based on how you’re moving,” Barry said.

Neuroscientists have therefore credited grid cells with performing the function called “path integration” — the dead-reckoning form of navigation that doesn’t involve external cues: for example, “Take five steps forward, turn 90 degrees to the left, then walk straight ahead for another 15 steps.…” But various experiments have also hinted that grid cells perform other functions, including some that might go beyond navigation. For example, some studies implicate grid cells in measuring time and distance during travel. As Barry noted, if grid cells provide a spatial reference for objects and locations, “then in principle you can use them to calculate the direct route between those places” — that is, what’s called “vector-based navigation.”
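
To make those two ideas concrete, here is a minimal Python sketch (illustrative only, not code from the paper): path integration accumulates a position estimate from self-motion alone, and vector-based navigation then reads the direct shortcut straight off that estimate.

```python
import math

def path_integrate(moves, start=(0.0, 0.0), heading=0.0):
    """Dead reckoning: update an (x, y) position estimate from a sequence of
    (turn_in_degrees, distance) moves, with no external landmarks."""
    x, y = start
    for turn_deg, dist in moves:
        heading += math.radians(turn_deg)   # positive turn = turn to the left
        x += dist * math.cos(heading)
        y += dist * math.sin(heading)
    return (x, y), heading

# "Take five steps forward, turn 90 degrees to the left, then walk 15 steps."
(pos, _) = path_integrate([(0, 5), (90, 15)])

# Vector-based navigation: the direct shortcut back to the start is simply the
# displacement vector from the current estimate, regardless of the path taken.
shortcut = (-pos[0], -pos[1])
distance_home = math.hypot(*shortcut)       # about 15.8 steps, cutting the corner
```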

The researchers decided to use deep-learning neural networks to investigate the role of grid cells in these navigational functions. As a first step, they set up a neural net to learn how to do path integration for a simulated agent moving through a small space. “We wanted to see whether we could set up an artificial network with an appropriate task so that it would actually develop grid cells,” Barry said.

The neural net obliged, and, according to Barry, “what was surprising was how well it worked.” The “grid units” that spontaneously emerged in the network were remarkably similar to what’s seen in animals’ brains, right down to the hexagonal grid.
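
For a rough sense of what such a setup involves, here is a toy Python/PyTorch sketch: a recurrent network is fed simulated velocity sequences and trained, by supervision, to report the agent's position. This is an assumption-laden simplification; the published model's inputs, training targets, and regularization differ, and those details matter for whether grid-like units actually emerge.

```python
import torch
import torch.nn as nn

class PathIntegrator(nn.Module):
    """Toy recurrent path integrator: velocities in, position estimates out."""
    def __init__(self, hidden_size=128):
        super().__init__()
        self.rnn = nn.LSTM(input_size=2, hidden_size=hidden_size, batch_first=True)
        self.readout = nn.Linear(hidden_size, 2)  # predict (x, y) at every step

    def forward(self, velocities):
        hidden, _ = self.rnn(velocities)
        return self.readout(hidden)

# Simulated agent: random 2-D velocities; true positions are their running sum.
batch, steps = 32, 100
velocities = 0.1 * torch.randn(batch, steps, 2)
positions = torch.cumsum(velocities, dim=1)

model = PathIntegrator()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for _ in range(200):                      # train the network to path-integrate
    optimizer.zero_grad()
    loss = loss_fn(model(velocities), positions)
    loss.backward()
    optimizer.step()
```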


[Image caption: The grid units that organized spontaneously in the neural network given a navigation task are surprisingly similar to the analogous grid cells in the brain, right down to their approximately hexagonal arrangement. These scans show firing activity in the living and artificial neurons.]

The research team then joined the neural network’s abilities to systems that helped simulated agents find their way through mazelike virtual environments to goals. The system with grid units was far superior to systems without: For example, the system could tell if a previously closed door offered a shortcut to the goal, and it would take that route preferentially. According to Banino, this knack demonstrated that the grid units in the neural net were performing vector-based navigation because they were identifying a shorter, more direct route based on knowledge of the goal’s position.

“I think with this work, we were able to give a proof of principle that grid cells are used for taking shortcuts,” Banino said. The results therefore supported theories that grid cells in the brain are capable of both path integration and vector-based navigation. Comparable experimental proof with studies on living animals, he added, would be much more difficult to obtain.

“The interesting implication is that this same approach could be used for different sorts of neuroscience questions,” Barry said. Researchers interested in limb control, for example, could train a neural network to . . .

Continue reading.

Later in the article:

Francesco Savelli, a neuroscientist at Johns Hopkins University who co-authored a commentary accompanying the new Nature paper, offered a similar view. He considers it very interesting that “you somehow get these [grid] cells without programming them. … And still they come out, as emergent properties.”

Written by LeisureGuy

9 May 2018 at 7:39 pm

WhatsApp founder plans to leave after broad clashes with parent Facebook

The story is now well-known, but I think it’s quite revealing. From Elizabeth Dwoskin’s report in the Washington Post:

The billionaire chief executive of WhatsApp, Jan Koum, is planning to leave the company after clashing with its parent, Facebook, over the popular messaging service’s strategy and Facebook’s attempts to use its personal data and weaken its encryption, according to people familiar with internal discussions.

Koum, who sold WhatsApp to Facebook for more than $19 billion in 2014, also plans to step down from Facebook’s board of directors, according to these people. The date of his departure isn’t known.

It “is time for me to move on,” Koum wrote in a Facebook post after The Washington Post reported his plans to depart. He has been informing senior executives at Facebook and WhatsApp of his decision, and in recent months has been showing up less frequently to WhatsApp’s offices on Facebook’s campus in Silicon Valley, according to the people.

The independence and protection of its users’ data is a core tenet of WhatsApp that Koum and his co-founder, Brian Acton, promised to preserve when they sold their tiny start-up to Facebook. It doubled down on its pledge by adding encryption in 2016. The clash over data took on additional significance in the wake of revelations in March that Facebook had allowed third parties to mishandle its users’ personal information.

Facebook chief executive Mark Zuckerberg replied to Koum’s post by crediting Koum with teaching him “about encryption and its ability to take power from centralized systems and put it back in people’s hands. Those values will always be at the heart of WhatsApp.” [I do not trust Mark Zuckerberg in the slightest. – LG]

Facebook, though, needs to prove that its investment in WhatsApp — its largest acquisition ever — was worth it.

“Part of Facebook’s success has been to digest acquisitions, successfully monetize them, and integrate them into their advertising machine,” said Daniel Ives, chief strategy officer and head of technology research for research firm GBH Insights. But WhatsApp has been more challenging because of resistance from the founders, he said. “This was a massive culture clash.”

Koum’s exit is highly unusual at Facebook. . .

Read the whole thing, but the emphasized section reveals what’s wrong with Facebook, and it will require laws and regulations to fix: Facebook will not fix this unless forced to.

Written by LeisureGuy

1 May 2018 at 10:46 am

Game over: The impact of a computer program’s “solving” a game

Tom Whipple writes in 1843 Magazine:

IN THE YEARS following the publication of J. Sturge’s canonical “Guide to the Game of Draughts” in 1800, the world of serious players was wracked by argument. Number 105 of Sturge’s “140 Striking Situations” had asked readers to demonstrate that with the pieces in a given position white could always win. An expert claimed in a rival publication that Sturge was wrong: the position was a draw. Another countered that it was a win for white – but not for the reasons Sturge suggested. Arguments would continue for the entire reign of Queen Victoria until a consensus was finally reached: it was a win for white.

This most controversial conundrum in the history of draughts became known as the Hundred Years Problem. The name was a little premature.

In 1997, a grandmaster showed the Hundred Years Problem to Jonathan Schaeffer, professor of computer science at the University of Alberta, who is by his own admission a rather mediocre player. Minutes later, Schaeffer announced that the result was a draw. This time, there was no controversy, because although Schaeffer is a mediocre draughts player he is an excellent computer scientist who had spent the previous decade working on a program, Chinook, designed to “solve” draughts.

It so happened that in the same year the computer Deep Blue defeated Garry Kasparov at chess – an event that Newsweek dubbed “The Brain’s Last Stand”. In the battle of man versus machine, machine had won a great victory.

In draughts, though, the brain’s defeat has been much more comprehensive. By 1994 Chinook was already good enough to draw a championship match 6-6 with the world number one draughts player, Marion Tinsley. Three years later, it conquered the Hundred Years Problem. Ten years after that, it defeated the game itself: a paper published in the journal Science showed that Schaeffer’s program could play a “perfect” game of draughts. Whatever its opponent did, it would respond with the strongest possible play. While in theory a better computer program than Deep Blue could come along, the best any future human or computer could hope to do against Chinook was to draw. The game contained no further mysteries.

To many players, Chinook’s victory was the game’s loss. Before a game is solved a skilled player can be considered an artist, driven by inspiration and creativity as much as by cold logic. Afterwards the player is a fallible human – imperfectly striving to do what a computer has already done. The success of Chinook was as if overnight portrait painters had to cope with the invention of photography, calligraphers with the printing press.

Tinsley was untroubled. He said the program made him feel young again: for decades he had been unbeatable and at last he had a worthy opponent. But some draughts players took it badly. “They said I was going to destroy the game, to ruin it – that no one was going to play,” says Schaeffer. He received hate mail. Some players argued that the computer’s ability to draw on a database of moves, rather than computing best play each time, was cheating. Others considered Schaeffer had besmirched the name of Tinsley, who over a 40-year career lost fewer than ten games. In particular, they objected to the fact that the program won against him by default, because he discovered he had terminal cancer halfway through. “Chinook couldn’t hold a candle to Tinsley,” complained one angry player in a letter to Schaeffer. Another accused him of “trumpeting an unjustified victory against a sick old man”, a third of “engaging in intellectual dishonesty”. A fourth just called him “despicable”.

This upset Schaeffer, who in the course of developing the program had formed a friendship with Tinsley. “He was as close to perfection as you could imagine a human being. What some human players were upset about is we now were better than him.” Schaeffer uses the collective noun a couple of times when referring to him and Chinook. “He was truly outstanding, but he wasn’t quite perfect. He would make a mistake. It may have been only once every 10-15 years, but he would make a mistake.”

Schaeffer is one of a small group around the world trying to solve the world’s games. Last year, a colleague of his published a paper in which he solved a simplified version of poker. It ended by quoting Alan Turing: “It would be disingenuous of us to disguise the fact that the principal motive which prompted the work was the sheer fun of the thing.”

THERE ARE AROUND 26,830 days in the average life. If you walked 26,830 miles you would cover the entire circumference of the equator, and still have enough distance left to go from Paris to Moscow. There are also 26,830 possible permutations in the first major game to be solved by a computer. That sounds like a lot, but this is a game so simple that in America there is a family of animal trainers that raises chickens to play it against humans as a casino attraction: noughts and crosses.

Most humans have solved noughts and crosses, and the solution is a draw. Writing a program to play a perfect game of noughts and crosses is now a basic undergraduate assignment.
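
For the curious, here is a minimal exhaustive-search (minimax) solver in Python, the kind of thing that assignment asks for; running it confirms the solved value of noughts and crosses, a draw with best play on both sides.

```python
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """board is a 9-character string of 'X', 'O' and ' '."""
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def solve(board=' ' * 9, player='X'):
    """Game-theoretic value for X with perfect play: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w is not None:
        return 1 if w == 'X' else -1
    if ' ' not in board:
        return 0
    nxt = 'O' if player == 'X' else 'X'
    values = [solve(board[:i] + player + board[i + 1:], nxt)
              for i, cell in enumerate(board) if cell == ' ']
    return max(values) if player == 'X' else min(values)

print(solve())  # prints 0: perfect play from both sides is a draw
```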

A bigger number is 4,531,985,219,092. There have been roughly 4.5 trillion seconds since humans evolved. When Victor Allis, a computer-science student, contemplated the number in 1988, he realised it was too big for any computer to handle. It is, however, the number of possible permutations in Connect 4. “Computers then were pretty small,” says Allis.

Allis now runs Quintiq, a large Dutch software company. But he still enjoys talking about Connect 4. Unlike Schaeffer and draughts,  . . .

Continue reading.

Written by LeisureGuy

27 April 2018 at 11:01 am

Where Countries Are Tinderboxes and Facebook Is a Match

Amanda Taub and Max Fisher report in the NY Times:

MEDAMAHANUWARA, Sri Lanka — Past the end of a remote mountain road, down a rutted dirt track, in a concrete house that lacked running water but bristled with smartphones, 13 members of an extended family were glued to Facebook. And they were furious.

A family member, a truck driver, had died after a beating the month before. It was a traffic dispute that had turned violent, the authorities said. But on Facebook, rumors swirled that his assailants were part of a Muslim plot to wipe out the country’s Buddhist majority.

“We don’t want to look at it because it’s so painful,” H.M. Lal, a cousin of the victim, said as family members nodded. “But in our hearts there is a desire for revenge that has built.”

The rumors, they believed, were true. Still, the family, which is Buddhist, did not join in when Sinhalese-language Facebook groups, goaded on by extremists with wide followings on the platform, planned attacks on Muslims, burning a man to death.

But they had shared and could recite the viral Facebook memes constructing an alternate reality of nefarious Muslim plots. Mr. Lal called them “the embers beneath the ashes” of Sinhalese anger.

We came to this house to try to understand the forces of social disruption that have followed Facebook’s rapid expansion in the developing world, whose markets represent the company’s financial future. For months, we had been tracking riots and lynchings around the world linked to misinformation and hate speech on Facebook, which pushes whatever content keeps users on the site longest — a potentially damaging practice in countries with weak institutions.

Time and again, communal hatreds overrun the newsfeed — the primary portal for news and information for many users — unchecked as local media are displaced by Facebook and governments find themselves with little leverage over the company. Some users, energized by hate speech and misinformation, plot real-world attacks.

A reconstruction of Sri Lanka’s descent into violence, based on interviews with officials, victims and ordinary users caught up in online anger, found that Facebook’s newsfeed played a central role in nearly every step from rumor to killing. Facebook officials, they say, ignored repeated warnings of the potential for violence, resisting pressure to hire moderators or establish emergency points of contact.

Facebook declined to respond in detail to questions about its role in Sri Lanka’s violence, but a spokeswoman said in an email that “we remove such content as soon as we’re made aware of it.” She said the company was “building up teams that deal with reported content” and investing in “technology and local language expertise to help us swiftly remove hate content.”

Sri Lankans say they see little evidence of change. And in other countries, as Facebook expands, analysts and activists worry they, too, may see violence.

One Town, Two Versions

Five hours east of Medamahanuwara lies the real Ampara, a small town of concrete buildings surrounded by open green fields. The road there passes over verdant mountains before coasting through tropical flatlands, contested territory during the civil war that ended in 2009, now distinguished mostly by quiet teahouses.

But the imagined Ampara, which exists in rumors and memes on Sinhalese-speaking Facebook, is the shadowy epicenter of a Muslim plot to sterilize and destroy Sri Lanka’s Sinhalese majority.

As Tamil-speaking Muslims, the Atham-Lebbe brothers knew nothing of that version of Ampara when, using money they saved laboring in Saudi Arabia, they opened a one-room restaurant there. They had no way to anticipate that, on a warm evening in late February, the real and imagined Amparas would collide, upending their lives and provoking a brief national breakdown.

It began with a customer yelling in Sinhalese about something he had found in his dinner. Unable to understand Sinhalese, Farsith, the 28-year-old brother running the register, ignored him.

He did not know that the day before, a viral Facebook rumor claimed, falsely, that the police had seized 23,000 sterilization pills from a Muslim pharmacist in Ampara.

The irate customer drew a crowd, which gathered around Farsith, shouting: “You put in sterilization medicine, didn’t you?”

He grasped only that they were asking about a lump of flour in the customer’s meal, and worried that saying the wrong thing might turn the crowd violent.

“I don’t know,” Farsith said in broken Sinhalese. “Yes, we put?”

The mob, hearing confirmation, beat him, destroyed the shop and set fire to the local mosque.

In an earlier time, this might have ended in Ampara. But Farsith’s “admission” had been recorded on a cellphone. Within hours, a popular Facebook group, the Buddhist Information Center, pushed out the shaky, 18-second video, presenting it as proof of long-rumored Muslim plots. Then it spread.

Vigilante Justice

As Facebook pushes into developing countries, it tends to be initially received as a force for good.

In Sri Lanka, it keeps families in touch even as many work abroad. It provides for unprecedented open expression and access to information. Government officials say it was essential for the democratic transition that swept them into office in 2015.

But where institutions are weak or undeveloped, Facebook’s newsfeed can inadvertently amplify dangerous tendencies. Designed to maximize user time on site, it promotes whatever wins the most attention. Posts that tap into negative, primal emotions like anger or fear, studies have found, produce the highest engagement, and so proliferate.
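
Purely as an illustration of that mechanism (Facebook's actual ranking model is proprietary, and every field and weight below is invented for the example), an engagement-maximizing feed amounts to sorting posts by a predicted-engagement score, which mechanically favors whatever provokes the strongest reactions:

```python
# Hypothetical posts with hypothetical predicted-engagement signals.
posts = [
    {"id": "local-news",   "p_click": 0.02, "p_comment": 0.002, "p_share": 0.004},
    {"id": "outrage-bait", "p_click": 0.08, "p_comment": 0.020, "p_share": 0.030},
]

def engagement_score(post):
    # Heavier forms of engagement (comments, shares) get larger weights, so posts
    # that provoke reactions rise to the top of the feed.
    return 1.0 * post["p_click"] + 5.0 * post["p_comment"] + 10.0 * post["p_share"]

feed = sorted(posts, key=engagement_score, reverse=True)  # "outrage-bait" ranks first
```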

In the Western countries for which Facebook was designed, this leads to online arguments, angry identity politics and polarization. But in developing countries, Facebook is often perceived as synonymous with the internet and reputable sources are scarce, allowing emotionally charged rumors to run rampant. Shared among trusted friends and family members, they can become conventional wisdom.

And where people do not feel they can rely on the police or courts to keep them safe, research shows, panic over a perceived threat can lead some to take matters into their own hands — to lynch.

Last year, in rural Indonesia, rumors spread on Facebook and WhatsApp, a Facebook-owned messaging tool, that gangs were kidnapping local children and selling their organs. Some messages included photos of dismembered bodies or fake police fliers. Almost immediately, locals in nine villages lynched outsiders they suspected of coming for their children.

Near-identical social media rumors have also led to attacks in India and Mexico. Lynchings are increasingly filmed and posted back to Facebook, where they go viral as grisly tutorials. . .

Continue reading.

And Mark Zuckerberg seems to be doing very very little to address this problem. The same goes for Twitter.

Written by LeisureGuy

23 April 2018 at 4:14 pm

Your Online Purchase Might Get Declined For No Reason Anyone Is Willing to Tell You

Dystopia alert. Kevin Drum writes at Mother Jones:

The Wall Street Journal tells us today about Riskified, a company that scores consumer behavior and then decides whether to approve or deny purchases. It’s used mostly for online purchases, by big and small sellers alike:

Michael Green, 50 years old, found out about one of the firms by accident. He ordered headphones online for his son’s 18th birthday. Days later, when he hadn’t heard anything about the order, he contacted the headphone brand, Audeze, which told him the purchase had been canceled because a third-party firm had determined he was a fraud risk. Mr. Green noticed that the status of his order said “Riskified Rejected.” When he emailed Riskified to ask why he had come up as a fraud risk, a customer service agent told him the company had no further information. “There was no explanation, no appeal,” said Mr. Green, a financial professional in Austin, Texas.

But wait!

After The Wall Street Journal contacted Riskified, CEO Eido Gal said Mr. Green’s order was incorrectly declined. “Riskified tends to be far more accurate and efficient than traditional fraud-prevention methods, but no solution is perfect, and we’re still improving,” he said.

There are two big problems here. First, there’s the usual “fuck you” attitude that all these companies have unless a reporter calls them up. Mr. Green was up the creek for no reason he could determine, and Riskified refused to bother looking into it until the Wall Street Journal contacted them. Suddenly, it turned out Green was “incorrectly declined.” How about that?

The second big problem is that no one knows what goes into these algorithms that check consumer behavior. For example, did you know that a third of all people named Green are black? Did that have any effect on things? There’s no telling, since Riskified’s algorithms are proprietary and they won’t tell us. What’s more, it’s even possible that this little factoid had an effect without Riskified even knowing it. . .

Continue reading.

Written by LeisureGuy

19 April 2018 at 4:24 pm

Why Zuckerberg’s 14-Year Apology Tour Hasn’t Fixed Facebook

Mark Zuckerberg is so good at apologizing for harm to his customers because he’s had so much practice making such apologies—and the problems are never fixed because they stem from his business model.

Zuckerberg is like an abuser: he apologizes profusely after each incident, yet the incidents keep recurring. In this case, the abuse victims are Facebook users.

Zeynep Tufekci writes in Wired:

IN 2003, ONE year before Facebook was founded, a website called Facemash began nonconsensually scraping pictures of students at Harvard from the school’s intranet and asking users to rate their hotness. Obviously, it caused an outcry. The website’s developer quickly proffered an apology. “I hope you understand, this is not how I meant for things to go, and I apologize for any harm done as a result of my neglect to consider how quickly the site would spread and its consequences thereafter,” wrote a young Mark Zuckerberg. “I definitely see how my intentions could be seen in the wrong light.”

In 2004 Zuckerberg cofounded Facebook, which rapidly spread from Harvard to other universities. And in 2006 the young company blindsided its users with the launch of News Feed, which collated and presented in one place information that people had previously had to search for piecemeal. Many users were shocked and alarmed that there was no warning and that there were no privacy controls. Zuckerberg apologized. “This was a big mistake on our part, and I’m sorry for it,” he wrote on Facebook’s blog. “We really messed this one up,” he said. “We did a bad job of explaining what the new features were and an even worse job of giving you control of them.”

Then in 2007, Facebook’s Beacon advertising system, which was launched without proper controls or consent, ended up compromising user privacy by making people’s purchases public. Fifty thousand Facebook users signed an e-petition titled “Facebook: Stop invading my privacy.” Zuckerberg responded with an apology: “We simply did a bad job with this release and I apologize for it.” He promised to improve. “I’m not proud of the way we’ve handled this situation and I know we can do better,” he wrote.

By 2008, Zuckerberg had written only four posts on Facebook’s blog: Every single one of them was an apology or an attempt to explain a decision that had upset users.

In 2010, after Facebook violated users’ privacy by making key types of information public without proper consent or warning, Zuckerberg again responded with an apology—this time published in an op-ed in The Washington Post. “We just missed the mark,” he said. “We heard the feedback,” he added. “There needs to be a simpler way to control your information.” “In the coming weeks, we will add privacy controls that are much simpler to use,” he promised.

I’m going to run out of space here, so let’s jump to 2018 and skip over all the other mishaps and apologies and promises to do better—oh yeah, and the consent decree that the Federal Trade Commission made Facebook sign in 2011, charging that the company had deceptively promised privacy to its users and then repeatedly broken that promise—in the intervening years.

Last month, Facebook once again garnered widespread attention with a privacy-related backlash when it became widely known that, between 2008 and 2015, it had allowed hundreds, maybe thousands, of apps to scrape voluminous data from Facebook users—not just from the users who had downloaded the apps, but detailed information from all their friends as well. One such app was run by a Cambridge University academic named Aleksandr Kogan, who apparently siphoned up detailed data on up to 87 million users in the United States and then surreptitiously forwarded the loot to the political data firm Cambridge Analytica. The incident caused a lot of turmoil because it connects to the rolling story of distortions in the 2016 US presidential election. But in reality, Kogan’s app was just one among many, many apps that amassed a huge amount of information in a way most Facebook users were completely unaware of.

At first Facebook indignantly defended itself, claiming that people had consented to these terms; after all, the disclosures were buried somewhere in the dense language surrounding obscure user privacy controls. People were asking for it, in other words.

But the backlash wouldn’t die down. Attempting to respond to the growing outrage, Facebook announced changes. “It’s Time to Make Our Privacy Tools Easier to Find”, the company announced without a hint of irony—or any other kind of hint—that Zuckerberg had promised to do just that in the “coming few weeks” eight full years ago. On the company blog, Facebook’s chief privacy officer wrote that instead of being “spread across nearly 20 different screens” (why were they ever spread all over the place?), the controls would now finally be in one place.

Zuckerberg again went on an apology tour, giving interviews to The New York Times, CNN, Recode, WIRED, and Vox (but not to the Guardian and Observer reporters who broke the story). In each interview he apologized. “I’m really sorry that this happened,” he told CNN. “This was certainly a breach of trust.”

But Zuckerberg didn’t stop at an apology this time. He also defended Facebook as an “idealistic company” that cares about its users and spoke disparagingly about rival companies that charge users money for their products while maintaining a strong record in protecting user privacy. In his interview with Vox’s Ezra Klein, Zuckerberg said that anyone who believes Apple cares more about users than Facebook does has “Stockholm syndrome”—the phenomenon whereby hostages start sympathizing and identifying with their captors.

This is an interesting argument coming from the CEO of Facebook, a company that essentially holds its users’ data hostage. Yes, Apple charges handsomely for its products, but it also includes advanced encryption hardware on all its phones, delivers timely security updates to its whole user base, and has largely locked itself out of user data—to the chagrin of many governments, including that of the United States, and of Facebook itself.

Most Android phones, by contrast, gravely lag behind in receiving security updates, have no specialized encryption hardware, and often handle privacy controls in a way that is detrimental to user interests. Few governments or companies complain about Android phones. After the Cambridge Analytica scandal, it came to light that Facebook had been downloading and keeping all the text messages of its users on the Android platform—their content as well as their metadata. “The users consented!” Facebook again cried out. But people were soon posting screenshots that showed how difficult it was for a mere mortal to discern that’s what was going on, let alone figure out how to opt out, on the vague permission screen that flashed before users.

On Apple phones, however, Facebook couldn’t harvest people’s text messages because the permissions wouldn’t allow it.

In the same interview, Zuckerberg took wide aim at the oft-repeated notion that, if an online service is free, you—the user—are the product. He said that he found the argument that “if you’re not paying that somehow we can’t care about you, to be extremely glib and not at all aligned with the truth.” His rebuttal to that accusation, however, was itself glib; and as for whether it was aligned with the truth—well, we just have to take his word for it. “To the dissatisfaction of our sales team here,” he said, “I make all of our decisions based on what’s going to matter to our community and focus much less on the advertising side of the business.”

As far as I can tell, not once in his apology tour was Zuckerberg asked what on earth he means when he refers to . . .

Continue reading.

I notice a pattern here: apologize, but continue the same behavior. Zuckerberg should be removed.

In addition, the US should adopt the same privacy controls as the EU enforces. (I know: the US cannot, because the corporations that control the country will not allow it.)

Written by LeisureGuy

9 April 2018 at 1:40 pm

Whistleblower Sandy Parakilas says Facebook knew risks but failed to act

Mark Bridge reports in the Times:

The Facebook leak to Cambridge Analytica was “worse than a data breach” because the company failed to safeguard users even after it understood the risks, a whistleblower said.

Sandy Parakilas, 38, worked at Facebook in 2011 and 2012, two years before the data of 50 million users was obtained by a Cambridge University researcher and shared with the British firm in violation of Facebook’s terms.

Parakilas told MPs that he had warned senior executives that poor safeguards could enable “foreign powers” or data brokers to harvest users’ data without their consent. Speaking by video link, he told members of the digital, culture, media and sport committee that Facebook had not acted on his concerns before the leak.

He said that the company’s previous practices that permitted developers to access data of users and their friends were “far outside the boundaries of what should have been allowed”.

He added that the company had failed to properly investigate a number of other reports of data misuse, suggesting that it had turned a blind eye due to the fear of incurring legal liability.

Mr Parakilas has told The Observer that there was no control of data once it left the company’s servers.

He told the select committee that he did not remember “a single physical audit of a developer’s storage” after reports of data misuse. When Facebook stopped allowing developers to access data on the “friends” of users it had acted to prevent rival social networks from obtaining their details, he said.

“Facebook didn’t want its data to go to data brokers, but their primary motive was to motivate the huge ecosystem of apps in the fastest way,” he said. “There were people like me saying it could go to data brokers; I think it was a risk they were willing to take.

“I think it was well understood both internally and externally that there was risk with the way [it] was handling data.”

Mr Parakilas also criticised Facebook for failing to notify users, sue Cambridge Analytica or call in law enforcement after reports that the British firm used the data to target voters. Instead, it continued to accept Cambridge Analytica’s assurances that it had destroyed the records obtained without consent.

Facebook has denied reports that users were victims of a data “breach”, saying that the data was obtained by the researcher Aleksandr Kogan in line with its own terms and conditions. Mr Parakilas said: “Users had no idea that this had happened. Their data was compromised in the same way as it would have been during a technical breach.”

Massachusetts Institute of Technology researchers warned yesterday that while Facebook has made it harder to gather users’ data since 2014, software developers could harvest large quantities of data without users’ permission or knowledge. In a blog post, they added that “such activity can be made difficult to distinguish from ordinary” browsing.

Robert Thomson, chief executive of News Corp, said that Facebook and YouTube were guilty of publishing “pernicious” and “reprehensible” content because they refused to spend the money required to keep their sites safe.

He warned that social media users struggled to differentiate between fact and fiction because they had been “socialised to accept” dubious content online. He also expressed fears that the tech titans could manipulate their algorithms for political and commercial gain — to penalise competitors or censor stories they did not like.

Continue reading.

Written by LeisureGuy

24 March 2018 at 10:50 am
