Later On

A blog written for those whose interests more or less match mine.

Archive for the ‘Technology’ Category

Beware thought leaders and the wealthy purveying answers to our social ills


Bethany McLean, a contributing editor at Vanity Fair and author of Saudi America: The Truth About Fracking and How It’s Changing the World, had a very interesting book review in the Washington Post last September:

More than a century ago, Oscar Wilde outlined the danger posed by those trying hard to improve society. “Just as the worst slave-owners were those who were kind to their slaves, and so prevented the horror of the system being realized by those who suffered from it, and understood by those who contemplated it,” Wilde wrote, “so, in the present state of things in England, the people who do most harm are the people who try to do most good.”

In his impassioned new book, Winners Take All: The Elite Charade of Changing the World, journalist Anand Giridharadas argues that the equivalent of today’s slaveholders are the elite citizens of the world, who are philanthropic more often than not — but in ways that ultimately serve only to protect and further their interests and cement the status quo. “For when elites assume leadership of social change, they are able to reshape what social change is — above all, to present it as something that should never threaten winners,” he writes.

Giridharadas knows this firsthand. In the summer of 2011, he was named a Henry Crown fellow of the Aspen Institute. That’s a coveted title, and the anointed are supposed to serve as a “new breed of leaders” who can help solve the “world’s most intractable problems.” But when Giridharadas, who has a gift for phrases that resonate, saw peers with day jobs at firms like Facebook and Goldman Sachs, he began to question the system that allows people to make money in predatory ways and compensate for that through philanthropy. “Instead of asking them to make their firms less monopolistic, greedy or harmful to children, it urged them to create side hustles to ‘change the world,’ ” he writes. “I began to feel like a casual participant in — and timid accomplice to, as well as a cowardly beneficiary of — a giant, sweet-lipped lie.”

In his telling, the roots of the problem go all the way back to Andrew Carnegie, the famed American industrialist, who advocated that people be as aggressive as possible in their pursuit of wealth and then give it back through private philanthropy. It was “an extreme idea of the right to make money in any which way, and an extreme idea of the obligation to give back,” writes Giridharadas, who accuses Carnegie of “dripping with paternalism” for never considering that “the poor might not need so much help had they been better paid.”

Carnegie-ism has grown into what Giridharadas calls “MarketWorld.” In essence, this is the cultlike belief that intractable social problems can be solved in market-friendly ways that result in “win-wins” for everyone involved, and that those who have succeeded under the status quo are also those best equipped to fix the world’s problems. It is “defined by the concurrent drives to do well and do good, to change the world while also profiting from the status quo,” he writes. This notion extends its tentacles in all sorts of ways, such as the idea that big, powerful firms like McKinsey and Goldman Sachs, because of their business success, can also teach some “elusive way of thinking that was vital to helping people.” It’s an idea so powerful that even President Barack Obama decided to seek help from McKinsey as he sought to figure out where to employ his skills post-presidency.

Among the denizens of MarketWorld are so-called “thought leaders,” the speakers who populate the conference circuit, like TED, PopTech and, of course, the Clinton Global Initiative. (When you pause to think about it, “thought leader” is appallingly Orwellian.) Giridharadas argues that the rise of thought leaders, whose views are sanctioned and sanitized by their patrons — the big corporations that support conferences — has come at the expense of public intellectuals, who are willing to voice controversial arguments that shake up the system and don’t have easy solutions. Thought leaders, on the other hand, always offer a small but actionable “tweak,” one that makes conference-goers feel like they’ve learned something but that doesn’t actually threaten anyone.

“Winners Take All” is so readable because it is told through characters, among them Amy Cuddy, a social psychologist at Harvard Business School who, Giridharadas writes, “had spent more than a decade publishing papers on the workings of prejudice, discrimination, and systems of power.” Her work is substantive and deeply challenging — but when she was given the opportunity to speak at PopTech, a stop on the MarketWorld circuit, and then at TED, she spoke not about the embedded systems that marginalize women but rather how women can strike “power poses” to feel more confident. “Without necessarily intending to, she was giving MarketWorld what it craved in a thinker: a way of framing a problem that made it about giving bits of power to those who lack it without taking power away from those who hold it,” Giridharadas writes.

In his view, there is not much moral difference between the Sacklers, the enormously wealthy, extremely philanthropic family who made their money getting people addicted to OxyContin; Bill Clinton, who adopted MarketWorld’s beliefs during his presidency and in his work afterward; and others who have done nothing so obviously huge or harmful, but who aren’t willing to go radical, either. Another compelling character is Andrew Kassoy, who after a long career in finance decided he could help improve the world by creating a legal safe harbor for companies to be dedicated to something more than the bottom line. So was born the B Corporation, a new corporate form in which companies can commit themselves to using “business as a force for good.”

That doesn’t make Kassoy a hero, in Giridharadas’s view, but rather a villain — because although Kassoy wrestles with other options, he ultimately chooses a relatively safe, small “solution.” “Had Kassoy pursued his thought of making it harder for companies to do bad things, involving himself with politics and the law and the system itself, success might have meant the loss of opportunity for the Kassoys of the future, and could even have come at a cost to his own earnings from his old life,” Giridharadas writes. In a nod to Wilde, he argues that the person who “seeks to ‘change the world’ by doing what can be done within a bad system, but who is relatively silent about that system” is “putting himself in the difficult moral position of the kindhearted slave master.”

That Giridharadas questions an idea that has become part of the air we breathe is alone worth the price of the book, and his delicious skewering of the many who exalt their own goodness while making money from dubious business practices makes for entertaining reading. (“Sharing is caring,” one particularly hypocritical venture capitalist actually says in a speech aboard Summit at Sea, a premier MarketWorld event for those who view themselves as leaders of the world.)

But Giridharadas isn’t just raising questions. He’s come to big conclusions: that MarketWorld, along with its philosophical antecedents, like Carnegie-ism and neoliberalism (which anthropologist David Harvey defines as the idea that “human well being can best be advanced by liberating individual entrepreneurial freedoms and skills within an institutional framework characterized by strong property rights, free markets and free trade”), has been an abject failure. To prove his point, he doesn’t engage in any specific analysis. In fact, he doesn’t even mention, let alone examine, what is arguably MarketWorld’s most powerful and influential actor: the Bill and Melinda Gates Foundation.

As big-picture proof of the failure of the status quo, Giridharadas cites his indebtedness to Thomas Piketty’s “masterpiece” on the growth of inequality, “Capital in the Twenty-First Century.” He argues that any thesis that the world has actually improved over the course of human history is simply a form of brainwashing, a “socially acceptable way to tell people seething over the inequities of the age to drop their complaining.” So whether you accept his complete condemnation depends on whether you think there’s even a debate left to be had.

Giridharadas is able to indict the Kassoys of the world so wholeheartedly because he believes it is obvious that there is a better way, that those who purport to want to do good could actually do good if they could only put their self-interest aside. His key idea is . . .

Continue reading.

And this Washington Post report by Greg Jaffe is very much to the point: “Capitalism in crisis: U.S. billionaires worry about the survival of the system that made them rich.”

Written by LeisureGuy

21 April 2019 at 1:12 pm

And in addition: Claims of Shoddy Production Draw Scrutiny to a Second Boeing Jet


I’m not so sure that cutting taxes, and thus cutting the government’s ability to deliver services, is a good idea. An underfunded FAA has had to let Boeing do what it wants. But what Boeing wants is increased profit, even at the expense of safety. This is not an unusual attitude for corporations: cf. coal mines.

Natalie Kitroeff and David Gelles report in the NY Times:

When Boeing broke ground on its new factory near Charleston in 2009, the plant was trumpeted as a state-of-the-art manufacturing hub, building one of the most advanced aircraft in the world. But in the decade since, the factory, which makes the 787 Dreamliner, has been plagued by shoddy production and weak oversight that have threatened to compromise safety.

A New York Times review of hundreds of pages of internal emails, corporate documents and federal records, as well as interviews with more than a dozen current and former employees, reveals a culture that often valued production speed over quality. Facing long manufacturing delays, Boeing pushed its work force to quickly turn out Dreamliners, at times ignoring issues raised by employees.

Complaints about the frenzied pace echo broader concerns about the company in the wake of two deadly crashes involving another jet, the 737 Max. Boeing is now facing questions about whether the race to get the Max done, and catch up to its rival Airbus, led it to miss safety risks in the design, like an anti-stall system that played a role in both crashes.

Safety lapses at the North Charleston plant have drawn the scrutiny of airlines and regulators. Qatar Airways stopped accepting planes from the factory after manufacturing mishaps damaged jets and delayed deliveries. Workers have filed nearly a dozen whistle-blower claims and safety complaints with federal regulators, describing issues like defective manufacturing, debris left on planes and pressure to not report violations. Others have sued Boeing, saying they were retaliated against for flagging manufacturing mistakes.

Joseph Clayton, a technician at the North Charleston plant, one of two facilities where the Dreamliner is built, said he routinely found debris dangerously close to wiring beneath cockpits.

“I’ve told my wife that I never plan to fly on it,” he said. “It’s just a safety issue.”

In an industry where safety is paramount, the collective concerns involving two crucial Boeing planes — the company’s workhorse, the 737 Max, and another crown jewel, the 787 Dreamliner — point to potentially systemic problems. Regulators and lawmakers are taking a deeper look at Boeing’s priorities, and whether profits sometimes trumped safety. The leadership of Boeing, one of the country’s largest exporters, now finds itself in the unfamiliar position of having to defend its practices and motivations.

“Boeing South Carolina teammates are producing the highest levels of quality in our history,” Kevin McAllister, Boeing’s head of commercial airplanes, said in a statement. “I am proud of our teams’ exceptional commitment to quality and stand behind the work they do each and every day.”

All factories deal with manufacturing errors, and there is no evidence that the problems in South Carolina have led to any major safety incidents. The Dreamliner has never crashed, although the fleet was briefly grounded after a battery fire. Airlines, too, have confidence in the Dreamliner.

But workers sometimes made dangerous mistakes, according to the current and former Boeing employees, some of whom spoke on the condition of anonymity because they feared retaliation.

Faulty parts have been installed in planes. Tools and metal shavings have routinely been left inside jets, often near electrical systems. Aircraft have taken test flights with debris in an engine and a tail, risking failure.

On several planes, John Barnett, a former quality manager who worked at Boeing for nearly three decades and retired in 2017, discovered clusters of metal slivers hanging over the wiring that commands the flight controls. If the sharp metal pieces — produced when fasteners were fitted into nuts — penetrate the wires, he said, it could be “catastrophic.”

Mr. Barnett, who filed a whistle-blower complaint with regulators, said he had repeatedly urged his bosses to remove the shavings. But they refused and moved him to another part of the plant.

A spokesman for the Federal Aviation Administration, Lynn Lunsford, said the agency had inspected several planes certified by Boeing as free of such debris and found those same metal slivers. In certain circumstances, he said, the problem can lead to electrical shorts and cause fires.

Officials believe the shavings may have damaged an in-service airplane on one occasion in 2012, according to two people with knowledge of the matter.

The F.A.A. issued a directive in 2017 requiring that Dreamliners be cleared of shavings before they are delivered. Boeing said it was complying and was working with the supplier to improve the design of the nut. But it has determined that the issue does not present a flight safety issue.

“As a quality manager at Boeing, you’re the last line of defense before a defect makes it out to the flying public,” Mr. Barnett said. “And I haven’t seen a plane out of Charleston yet that I’d put my name on saying it’s safe and airworthy.” . . .

Continue reading. There’s much more, and if you travel by plane, you should be uneasy.

Written by LeisureGuy

20 April 2019 at 9:29 am

Facebook Stored Millions of Instagram Passwords Insecurely and Secretly Scraped User Email Accounts


Facebook is continually guilty of technological malpractice. Are they incompetent? Do they just not give a damn? Are they straight-out dishonest? It really doesn’t make much difference: their users suffer harm, they apologize and make promises, then repeat—over and over.

Brian Feldman reports in New York:

A little over a year ago, a penitent Mark Zuckerberg embarked on another one of his lengthy apology tours for one of the many privacy screwups that his company seems to commit on a daily basis. “We have a responsibility to protect your data, and if we can’t, then we don’t deserve to serve you,” he said. He’s said this numerous times. It’s a cute line.

Anyway, Mark Zuckerberg doesn’t deserve to serve you. Yesterday, while much of the news media was speed-reading the Mueller report, Facebook made some more disclosures about how bad they are about protecting your privacy.

Remember last month, when Facebook disclosed that it had stored millions of user passwords in plaintext accessible to company employees for years? At the time, the company also said tens of thousands of Instagram users were also swept up in it. Yesterday, Facebook quietly updated that month-old announcement to include this addendum:

Since this post was published, we discovered additional logs of Instagram passwords being stored in a readable format. We now estimate that this issue impacted millions of Instagram users. We will be notifying these users as we did the others.

I am but a humble blogger and not a mathematician, but there’s a pretty big difference between tens of thousands and “millions.” One of the numbers is longer, and according to my studies, a longer number often means a bigger number. “Our investigation has determined that these stored passwords were not internally abused or improperly accessed,” Facebook said. Granted, there’s plenty of evidence that Facebook is just bad at investigating.

But wait! There’s more! Last month, security researcher e-sushi discovered that Facebook was asking for users’ email-account passwords in order to verify their identity. This is a practice that is universally disapproved of — never, ever, ever give a third party your password. Responsible services aren’t supposed to ask for your log-in credentials to any site; it’s why help hotlines usually remind callers not to give out passwords. (Just to be clear: When you log in to a website through Facebook or Google, for example, you are actually just logging in to Facebook or Google, and then they securely hand you back to the original website. That’s different.)

Facebook quickly admitted that this method of verifying someone’s identity was stupid as hell (my words) and said it would stop the practice … but not before harvesting the email contacts of 1.5 million people without warning. Since May 2016, Facebook had been scraping user contacts from their email accounts, according to Business Insider, which wrote:

At the time, it wasn’t clear what was happening — but on Wednesday, Facebook disclosed to Business Insider that 1.5 million people’s contacts were collected this way and fed into Facebook’s systems, where they were used to improve Facebook’s ad targeting, build Facebook’s web of social connections, and recommend friends to add.

Facebook says it is now deleting the data, a claim that is effectively worthless. For instance, . . .

Continue reading.

Written by LeisureGuy

19 April 2019 at 9:13 am

15 Months of Fresh Hell Inside Facebook


Nicholas Thompson and Fred Vogelstein write in Wired:

THE STREETS OF Davos, Switzerland, were iced over on the night of January 25, 2018, which added a slight element of danger to the prospect of trekking to the Hotel Seehof for George Soros’ annual banquet. The aged financier has a tradition of hosting a dinner at the World Economic Forum, where he regales tycoons, ministers, and journalists with his thoughts about the state of the world. That night he began by warning in his quiet, shaking Hungarian accent about nuclear war and climate change. Then he shifted to his next idea of a global menace: Google and Facebook. “Mining and oil companies exploit the physical environment; social media companies exploit the social environment,” he said. “The owners of the platform giants consider themselves the masters of the universe, but in fact they are slaves to preserving their dominant position … Davos is a good place to announce that their days are numbered.”

ACROSS TOWN, A group of senior Facebook executives, including COO Sheryl Sandberg and vice president of global communications Elliot Schrage, had set up a temporary headquarters near the base of the mountain where Thomas Mann put his fictional sanatorium. The world’s biggest companies often establish receiving rooms at the world’s biggest elite confab, but this year Facebook’s pavilion wasn’t the usual scene of airy bonhomie. It was more like a bunker—one that saw a succession of tense meetings with the same tycoons, ministers, and journalists who had nodded along to Soros’ broadside.

Over the previous year Facebook’s stock had gone up as usual, but its reputation was rapidly sinking toward junk bond status. The world had learned how Russian intelligence operatives used the platform to manipulate US voters. Genocidal monks in Myanmar and a despot in the Philippines had taken a liking to the platform. Mid-level employees at the company were getting both crankier and more empowered, and critics everywhere were arguing that Facebook’s tools fostered tribalism and outrage. That argument gained credence with every utterance of Donald Trump, who had arrived in Davos that morning, the outrageous tribalist skunk at the globalists’ garden party.

CEO Mark Zuckerberg had recently pledged to spend 2018 trying to fix Facebook. But even the company’s nascent attempts to reform itself were being scrutinized as a possible declaration of war on the institutions of democracy. Earlier that month Facebook had unveiled a major change to its News Feed rankings to favor what the company called “meaningful social interactions.” News Feed is the core of Facebook—the central stream through which flow baby pictures, press reports, New Age koans, and Russian-made memes showing Satan endorsing Hillary Clinton. The changes would favor interactions between friends, which meant, among other things, that they would disfavor stories published by media companies. The company promised, though, that the blow would be softened somewhat for local news and publications that scored high on a user-driven metric of “trustworthiness.”

Davos provided a first chance for many media executives to confront Facebook’s leaders about these changes. And so, one by one, testy publishers and editors trudged down Davos Platz to Facebook’s headquarters throughout the week, ice cleats attached to their boots, seeking clarity. Facebook had become a capricious, godlike force in the lives of news organizations; it fed them about a third of their referral traffic while devouring a greater and greater share of the advertising revenue the media industry relies on. And now this. Why? Why would a company beset by fake news stick a knife into real news? And what would Facebook’s algorithm deem trustworthy? Would the media executives even get to see their own scores?

Facebook didn’t have ready answers to all of these questions; certainly not ones it wanted to give. The last one in particular—about trustworthiness scores—quickly inspired a heated debate among the company’s executives at Davos and their colleagues in Menlo Park. Some leaders, including Schrage, wanted to tell publishers their scores. It was only fair. Also in agreement was Campbell Brown, the company’s chief liaison with news publishers, whose job description includes absorbing some of the impact when Facebook and the news industry crash into one another.

But the engineers and product managers back at home in California said it was folly. Adam Mosseri, then head of News Feed, argued in emails that publishers would game the system if they knew their scores. Plus, they were too unsophisticated to understand the methodology, and the scores would constantly change anyway. To make matters worse, the company didn’t yet have a reliable measure of trustworthiness at hand.

Heated emails flew back and forth between Switzerland and Menlo Park. Solutions were proposed and shot down. It was a classic Facebook dilemma. The company’s algorithms embraid choices so complex and interdependent that it’s hard for any human to get a handle on it all. If you explain some of what is happening, people get confused. They also tend to obsess over tiny factors in huge equations. So in this case, as in so many others over the years, Facebook chose opacity. Nothing would be revealed in Davos, and nothing would be revealed afterward. The media execs would walk away unsatisfied.

After Soros’ speech that Thursday night, those same editors and publishers headed back to their hotels, many to write, edit, or at least read all the news pouring out about the billionaire’s tirade. The words “their days are numbered” appeared in article after article. The next day, Sandberg sent an email to Schrage asking if he knew whether Soros had shorted Facebook’s stock.

Far from Davos, meanwhile, Facebook’s product engineers got down to the precise, algorithmic business of implementing Zuckerberg’s vision. If you want to promote trustworthy news for billions of people, you first have to specify what is trustworthy and what is news. Facebook was having a hard time with both. To define trustworthiness, the company was testing how people responded to surveys about their impressions of different publishers. To define news, the engineers pulled a classification system left over from a previous project—one that pegged the category as stories involving “politics, crime, or tragedy.”

That particular choice, which meant the algorithm would be less kind to all kinds of other news—from health and science to technology and sports—wasn’t something Facebook execs discussed with media leaders in Davos. And though it went through reviews with senior managers, not everyone at the company knew about it either. When one Facebook executive learned about it recently in a briefing with a lower-level engineer, they say they “nearly fell on the fucking floor.”

The confusing rollout of meaningful social interactions—marked by internal dissent, blistering external criticism, genuine efforts at reform, and foolish mistakes—set the stage for Facebook’s 2018. This is the story of that annus horribilis, based on interviews with 65 current and former employees. It’s ultimately a story about the biggest shifts ever to take place inside the world’s biggest social network. But it’s also about a company trapped by its own pathologies and, perversely, by the inexorable logic of its own recipe for success.

Facebook’s powerful network effects have kept advertisers from fleeing, and overall user numbers remain healthy if you include people on Instagram, which Facebook owns. But the company’s original culture and mission kept creating a set of brutal debts that came due with regularity over the past 16 months. The company floundered, dissembled, and apologized. Even when it told the truth, people didn’t believe it. Critics appeared on all sides, demanding changes that ranged from the essential to the contradictory to the impossible. As crises multiplied and diverged, even the company’s own solutions began to cannibalize each other. And the most crucial episode in this story—the crisis that cut the deepest—began not long after Davos, when some reporters from The New York Times, The Guardian, and Britain’s Channel 4 News came calling. They’d learned some troubling things about a shady British company called Cambridge Analytica, and they had some questions.

II.

IT WAS, IN some ways, an old story. Back in 2014, a young academic at Cambridge University named Aleksandr Kogan built a personality questionnaire app called thisisyourdigitallife. A few hundred thousand people signed up, giving Kogan access not only to their Facebook data but also—because of Facebook’s loose privacy policies at the time—to that of up to 87 million people in their combined friend networks. Rather than simply use all of that data for research purposes, which he had permission to do, Kogan passed the trove on to Cambridge Analytica, a strategic consulting firm that talked a big game about its ability to model and manipulate human behavior for political clients. In December 2015, The Guardian reported that Cambridge Analytica had used this data to help Ted Cruz’s presidential campaign, at which point Facebook demanded the data be deleted. . .

Continue reading. There’s much more.

And FWIW, Facebook’s algorithm reported yesterday that the Notre Dame fire was arson, based on nothing more than some resemblance it found to 9/11.

Facebook is a hot mess.

Written by LeisureGuy

16 April 2019 at 10:48 am

Mark Zuckerberg leveraged Facebook user data to fight rivals and help friends, leaked documents show


Mark Zuckerberg certainly seems to be a despicable person. Olivia Solon and Cyrus Farivar report at NBC News:

Facebook CEO Mark Zuckerberg oversaw plans to consolidate the social network’s power and control competitors by treating its users’ data as a bargaining chip, while publicly proclaiming to be protecting that data, according to about 4,000 pages of leaked company documents largely spanning 2011 to 2015 and obtained by NBC News.

The documents, which include emails, webchats, presentations, spreadsheets and meeting summaries, show how Zuckerberg, along with his board and management team, found ways to tap Facebook’s trove of user data — including information about friends, relationships and photos — as leverage over companies it partnered with.

In some cases, Facebook would reward favored companies by giving them access to the data of its users. In other cases, it would deny user-data access to rival companies or apps.

For example, Facebook gave Amazon extended access to user data because it was spending money on Facebook advertising and partnering with the social network on the launch of its Fire smartphone. In another case, Facebook discussed cutting off access to user data for a messaging app that had grown too popular and was viewed as a competitor, according to the documents.

All the while, Facebook was formulating a strategy to publicly frame these moves as a way of protecting user privacy.

Private communication between users is “increasingly important,” Zuckerberg said in a 2014 New York Times interview. “Anything we can do that makes people feel more comfortable is really good.”

But the documents show that behind the scenes, in contrast with Facebook’s public statements, the company came up with several ways to require third-party applications to compensate Facebook for access to its users’ data, including direct payment, advertising spending and data-sharing arrangements. While it’s not unusual for businesses that are working together to share information about their customers, Facebook has access to sensitive data that many other companies don’t possess.

Facebook ultimately decided not to sell the data directly but rather to dole it out to app developers who were considered personal “friends” of Zuckerberg or who spent money on Facebook and shared their own valuable data, the documents show.

Facebook denied that it gave preferential treatment to developers or partners because of their ad spending or relationship with executives. The company has not been accused of breaking the law.

About 400 of the 4,000 pages of documents have previously been reported by other media outlets, and by a member of the British Parliament who has been investigating Facebook’s data privacy practices in the wake of the Cambridge Analytica scandal. However, this cache represents the clearest and most comprehensive picture of Facebook’s activities during a critical period as the company struggled to adapt to the rise of smartphones following its rocky debut as a public company.

The thousands of newly shared documents were anonymously leaked to the British investigative journalist Duncan Campbell, who shared them with a handful of media organizations: NBC News, Computer Weekly and Süddeutsche Zeitung. . .

The documents stem from a California court case between the social network and the little-known startup Six4Three, which sued Facebook in 2015 after the company announced plans to cut off access to some types of user data. Six4Three’s app, Pikinis, which soft-launched in 2013, relied on that data to allow users to easily find photos of their friends in bathing suits.

Facebook has acknowledged that it considered charging for access to user data. But Facebook has challenged the significance of those discussions, telling the Wall Street Journal last year and NBC News this month that the company was merely mulling various business models.

Facebook has also repeatedly said that the documents had been “cherry-picked” and were misleading. Facebook reiterated this stance when NBC News contacted the social media company for comment on the newly leaked documents.

“As we’ve said many times, Six4Three — creators of the Pikinis app — cherry picked these documents from years ago as part of a lawsuit to force Facebook to share information on friends of the app’s users,” Paul Grewal, vice president and deputy general counsel at Facebook, said in a statement released by the company.

“The set of documents, by design, tells only one side of the story and omits important context. We still stand by the platform changes we made in 2014/2015 to prevent people from sharing their friends’ information with developers like the creators of Pikinis. The documents were selectively leaked as part of what the court found was evidence of a crime or fraud to publish some, but not all, of the internal discussions at Facebook at the time of our platform changes. But the facts are clear: we’ve never sold people’s data.”

The finding of “evidence of a crime or fraud” came from a preliminary decision by the judge in the Six4Three case about an earlier round of leaked documents.

NBC News has not been able to determine whether the documents represent a complete picture. Facebook declined to provide additional evidence to support the claim of cherry-picking.

Still, these freshly leaked documents show that the plans to sell access to user data were discussed for years and received support from Facebook’s most senior executives, including Zuckerberg, chief operating officer Sheryl Sandberg, chief product officer Chris Cox and VP of growth Javier Olivan. Facebook declined to make them available for comment. . .

Continue reading. There’s more.

Written by LeisureGuy

16 April 2019 at 10:19 am

An electrical meltdown looms: how can we avert disaster?

leave a comment »

Keith Harary has a sobering article in Aeon. Here’s his background:

Keith Harary serves as a strategic and behavioural consultant working on the development of a coordinated national response to the imminent threat of EMP, pandemic and related extreme disaster scenarios. He is an emergency response incident commander with CDS, in Houston, Texas, serves on the board of advisors of Deasil Cognitive and is a member of the EMP Special Interest Group in Infragard, a partnership between the FBI and the private sector focused on critical infrastructure protection.

He writes:

You might find your car dying on the freeway while other vehicles around you lose control and crash. You might see the lights going out in your city, or glimpse an airplane falling out of the sky. You’ve been in a blackout before but this one is different.

In critical facilities across the country, experts predict that it is only a matter of time before the electrical infrastructure holding society together undergoes catastrophic failure. According to the most recent report of the United States Congressional Commission appointed to assess the risk, published July 2017, we face the threat of ‘long-lasting disruption and damage’ to everything from power and clean water to electronic banking, first-responder services and functioning hospitals. Until now, such a dire prediction has typically been associated with only the most extreme doomsday true believers, but William Graham, the former chairman of the Congressional Electromagnetic Pulse (EMP) Commission, says that in this case they could be right.

In the broadest sense, an EMP is a sudden burst of extreme electromagnetic interference that causes systems using electricity – especially devices controlled by chips or computers – to fail when the load gets too high. EMPs come in three basic varieties, including a ground-level or high-altitude EMP (HEMP) released by a nuclear burst that could potentially impact power lines, transformers and other critical devices; drive-by EMPs created by high-powered microwave weapons that could silently incapacitate equipment from hundreds of yards away; and coronal mass ejections (CMEs) resulting from solar storms that could interfere with the magnetic sphere surrounding the Earth, bringing down the grid that powers the electronic devices defining our contemporary way of life. According to the 2017 report, Russia, China and North Korea could already have these weapons under wraps. And CMEs from solar storms are like metaphorical magnetic earthquakes: they vary in intensity from relatively harmless ripples all the way to a potential Big One that could take down a nation’s grid within minutes, creating widespread destruction that would take years to repair.

In the most widespread and catastrophic EMP scenario, even motorised vehicles that aren’t damaged will be impacted by the lack of functioning fuel stations as gasoline stops flowing to and from the pumps. With regular deliveries interrupted by the lack of fuel and power, major urban populations will confront empty grocery shelves and a complete breakdown in essential services – from firefighting to garbage collection – in a matter of days.

Like an earthquake, the extent and nature of the damage from an EMP will depend on its intensity and location, as well as its altitude and range. The more intense and higher-altitude the pulse, the more widespread the likely damage, with some bursts more likely to damage long electrical and telephone lines and transformers, and others extending their impact to localised electronic devices and systems. But EMPs do not necessarily behave in ways that are entirely predictable. For example, a local, ground-level EMP burst could enter the underground grid and travel, potentially damaging a crucial transformer and other equipment kilometres away.

The technology that creates the interwoven fabric of our modern lives also makes us vulnerable to the reverberating impact of any potential interruption. The 1965 Northeast Blackout that left 30 million people without power occurred because a transmission line relay was improperly set. The 2003 Northeast Blackout, affecting 50 million people, was reportedly tripped by a single point of failure when a tree branch touched a powerline. The 1977 New York City blackout was caused by two circuit breakers tripping when lightning struck a substation. There are many similar examples, but suffice to say a nuclear-triggered EMP would cause millions of critical failure points across the system. As the EMP Commission concluded in 2004, even low-yield nuclear weapons detonated at an altitude of 30 kilometres could create extensive damage, while a detonation at 300 kilometres could affect the entire continental US and have a catastrophic impact on the nation.

In behind-the-scenes emergency meetings and discussions taking place among informed government and private industry experts in infrastructure-related engineering, information technology, pandemic outbreaks, law-enforcement, logistics, psychology and other relevant disciplines, our conversations are less about whether such a scenario will ever happen than about when – and how bad it will be. If we’re lucky, the coming meltdown might be less than apocalyptic. It might be far off enough that we will still have time to mitigate the worst possible impact. But more likely than not, the disaster will be on us before we can fully prepare. The more we can do in each community right now, the more able we will be to help ourselves and our neighbours. It could happen as quickly as switching off the lights.

It is one thing to confront a major disaster such as a wildfire, hurricane, earthquake or flood, and be able to mobilise critical regional resources to converge on the crisis. Such disasters benefit from a phenomenon referred to as the ‘edge effect’, in which neighbouring localities and states are able to provide recovery assistance. It is another to be hit by simultaneous disasters nation- or even worldwide. Imagine what will happen if the kinds of extreme Western wildfires the US has recently encountered are left to explode because there is no way to put them out. Imagine hurricanes Katrina, Sandy or Michael hitting landfall with no way to prepare, escape or respond.

If dehydration, starvation, disease and environmental threats don’t kill most of us relatively quickly, we are likely to turn on each other. Some recent disasters, such as the hurricanes hitting Texas and Florida, have shown people rallying together to help each other prevail. And yet we still encounter all-too-frequent examples of road rage and similar primitive behaviour, even when life is proceeding as usual.

In behind-the-scenes projections, some experts predict that if such a critical meltdown of infrastructure expands beyond regional containment, anywhere from 60 to 90 per cent of the US population will be dead within roughly the first 11 months. While that prediction might appear extreme, it could conceivably be optimistic. In his 2008 testimony to the US House of Representatives on the threat posed by an EMP attack, Graham summarised the calculation, stating:

We don’t have experience with losing the infrastructure in a country with 300 million people, most of whom don’t live in a way that provides for their own food and other needs. We can go back to an era when people did live like that…10 per cent would be 30 million people, and that is probably the range where we could survive as a basically rural economy.

Fortunately, a handful of experts are now working on practical solutions to mitigate the risk. At Instant Access Networks (IAN) near Annapolis, Maryland, the focus is on hardening critical systems against EMPs, and developing local microgrids that will function independently even in worst-case scenarios. In a point raised by my colleague and leading EMP systems integrator, Charles Manto of IAN, safeguarding even 10 per cent of the country’s critical infrastructure could potentially secure as much as 85 per cent of the economy. . .

Continue reading. There’s more.

Written by LeisureGuy

16 April 2019 at 8:43 am

A.I. Is Changing Insurance—for the worse

leave a comment »

Sarah Jeong writes in the NY Times:

A smartphone app that measures when you brake and accelerate in your car. The algorithm that analyzes your social media accounts for risky behavior. The program that calculates your life expectancy using your Fitbit.

This isn’t speculative fiction — these are real technologies being deployed by insurance companies right now. Last year, the life insurance company John Hancock began to offer its customers the option to wear a fitness tracker — a wearable device that can collect information about how active you are, how many calories you burn, and how much you sleep. The idea is that your Fitbit or Apple Watch can tell whether or not you’re living the good, healthy life — and if you are, your insurance premium will go down.

This is the cutting edge of the insurance industry, adjusting premiums and policies based on new forms of surveillance. It will affect your life insurance, your car insurance and your homeowner’s insurance — if it hasn’t already. If the Affordable Care Act’s protections for people with pre-existing conditions should vanish, it will no doubt penetrate the health insurance industry as well.

Consumers buy insurance from companies to protect against possible losses. But this contractual relationship is increasingly asymmetrical. The insurance companies once relied on a mix of self-reported information, public records and credit scores to calculate risk and assess how much to charge. But thanks to advances in technology, the capacity to collect, store and analyze information is greater than ever before.

A 2018 report from the consulting firm McKinsey notes that “smart” devices — fitness trackers, home assistants like Alexa, connected cars and smart refrigerators — are proliferating in homes. The “avalanche of new data” they can provide will change the face of insurance.

In 2014, the insurance company State Farm filed a patent application for a system that “aggregates and correlates” data for “life management purposes.” The application lists a wide range of information, such as “home data, vehicle data and personal health data associated with the individual.”

Some of the changes heralded by these new technologies will be better for everyone, like faster claims processing. But the use of data collection and artificial intelligence also raises serious questions about what McKinsey calls “personalized pricing” and what the State Farm patent application calls “personalized recommendations” and “insurance discounts.”

Before the A.C.A., data brokers bought data from pharmacies and sold it to insurance companies, which would then deny coverage based on prescription histories. Future uses of data in insurance will not be so straightforward.

As machine learning works its way into more and more decisions about who gets coverage and what it costs, discrimination becomes harder to spot.

Part of the problem is the automatic deference that society has so often given to technology, as though artificial intelligence is unerring. But the other problem is that artificial intelligence is known to reproduce biases that aren’t explicitly coded into it. In the field of insurance, this turns into “proxy discrimination.” For example, an algorithm might (correctly) conclude that joining a Facebook group for a BRCA1 mutation is an indicator of high risk for a health insurance company. Even though actual genetic information — which is illegal to use — is never put into the system, the algorithmic black box ends up reproducing genetic discrimination.

A ZIP code might become a proxy for race; a choice of wording in a résumé might become a proxy for gender; a credit card purchase history can become a proxy for pregnancy status. Legal oversight of insurance companies, which are typically regulated by states, mostly looks at discrimination deemed to be irrational: bias based on race, sex, poverty or genetics. It’s not so clear what can be done about rational indicators that are little but proxies for factors that would be illegal to consider.

Placing those biases inside a secret algorithm can prevent critical examination of inequality. ProPublica found that people in minority neighborhoods paid higher car insurance premiums than residents of majority-white neighborhoods with similar risk, but its reporters could not determine exactly why, since the insurance companies would not disclose their proprietary algorithms or data sets.

A handful of lawsuits in other arenas have challenged this practice. After Idaho’s Medicaid program started using an automated system to calculate benefits, recipients suddenly saw their benefits cut by as much as 30 percent. When the state refused to disclose its algorithm, claiming it was a trade secret, the A.C.L.U. of Idaho sued to gain access to the code, and ultimately discovered that the formula was riddled with flaws. . .

Continue reading.

Written by LeisureGuy

14 April 2019 at 5:42 pm
