Archive for the ‘Technology’ Category
Corporate spying on your every online activity is making great progress, though corporations don’t talk about it much at all. Julia Angwin reports in ProPublica:
After we published this story, Google reached out to say that it doesn’t currently use Gmail keywords to target web ads. We’ve updated the story to reflect that.
When Google bought the advertising network DoubleClick in 2007, Google founder Sergey Brin said that privacy would be the company’s “number one priority when we contemplate new kinds of advertising products.”
And, for nearly a decade, Google did in fact keep DoubleClick’s massive database of web-browsing records separate by default from the names and other personally identifiable information Google has collected from Gmail and its other login accounts.
But Google has now changed course: merging the two data sets is enabled by default for new Google accounts, and existing users were prompted to opt in to the change this summer.
The practical result of the change is that the DoubleClick ads that follow people around on the web may now be customized to them based on their names and other information Google knows about them. It also means that Google could now, if it wished to, build a complete portrait of a user by name, based on everything they write in email, every website they visit and the searches they conduct.
The move is a sea change for Google and a further blow to the online ad industry’s longstanding contention that web tracking is mostly anonymous. In recent years, Facebook, offline data brokers and others have increasingly sought to combine their troves of web tracking data with people’s real names. But until this summer, Google held the line.
“The fact that DoubleClick data wasn’t being regularly connected to personally identifiable information was a really significant last stand,” said Paul Ohm, faculty director of the Center on Privacy and Technology at Georgetown Law.
“It was a border wall between being watched everywhere and maintaining a tiny semblance of privacy,” he said. “That wall has just fallen.”
“We updated our ads system, and the associated user controls, to match the way people use Google today: across many different devices,” Google spokeswoman Andrea Faville wrote. She added that the change “is 100% optional — if users do not opt-in to these changes, their Google experience will remain unchanged.” (Read Google’s entire statement.)
Existing Google users were prompted to opt in to the new tracking this summer through a request with titles such as “Some new features for your Google account.”
The “new features” received little scrutiny at the time. Wired wrote that it “gives you more granular control over how ads work across devices.” In a personal tech column, the New York Times also described the change as “new controls for the types of advertisements you see around the web.”
Connecting web browsing habits to personally identifiable information has long been controversial. . . .
NOTE: Later in the article it tells you how to turn off at least some Facebook ad-tracking/targeting.
Take a look inside an Amazon warehouse (sound not required: it’s just warehouse noise):
That’s from this Ars Technica article, which has other examples, including Domino’s pizza.
In the Washington Post Jonathan Capehart has a very thought-provoking audio interview, accompanied by a report that includes an interesting quotation from the interview. That report begins:
“We’re in a period analogous, truly analogous, to the time in Europe just after Gutenberg mechanized the Chinese invention of the printing press,” Alberto Ibargüen, president of the John S. and James L. Knight Foundation, told me. “After Gutenberg, any Tom, Dick or Martin Luther could print whatever they want, and it took a hundred years to figure out, to sort it all out.”
The 11th episode of “Cape Up” is all about the state of journalism in the age of social media. “Confused,” is how Ibargüen describes it. The former publisher of the Miami Herald and El Nuevo Herald pointed out that while two Supreme Court decisions have shaped our present-day understanding of the First Amendment as it pertains to newspapers and broadcast television, “The law of First Amendment as to Internet … simply isn’t settled.”
In an effort to “help shape First Amendment law” in the digital age, the Knight Foundation and Columbia University announced in May the creation of the Knight First Amendment Institute at the Ivy League school in New York. Ibargüen told me that when such cases come before the court, “I want somebody at the table, somebody at the courthouse that is saying, ‘Let’s err on the side of transparency. Let’s err on the side of free speech.’ ” But what he said next highlighted the unanswered legal questions facing all of us: Congress, companies, courts and consumers.
That’s not to say that everything is black-and-white. We know it isn’t. The First Amendment itself isn’t. Although, the First Amendment is fairly clear. It says, “Congress shall make no law abridging the freedoms of speech, press, religion, assembly, a redress of grievance.” Five phenomenal rights. But they also don’t say, well, what happens if it is not Congress? What happens if it’s Google? … Think about it. Google, Apple, Amazon, Microsoft and Facebook have more ability to control what we know or think we know than anything in history. Than anyone in history. Than any government has ever had.
We talked about the power of search-engine programmers to mold what we accept as facts. “Even algorithms have parents,” Ibargüen said, “and the parents, the programmers, imbue the algorithm, consciously or not consciously, with some kind of values.” He went on to talk about what happened when you typed “thug” into Google. The ensuing controversy forced changes, so now when you type the word in you get an array of “thug” choices to search. “Somehow that algorithm knows what it is supposed to present,” Ibargüen told me, “and that affects what we think and what we think we know.”
And read this as well: how corporations are programming you—that sounds like corny science-fiction, but it’s behavioral design: it’s here now and it works. On you, and me.
Joseph Cox writes at Motherboard:
In what might be one of the more delicious cases of irony to ever grace a presidential election, a researcher has found that a number of email servers linked to Donald Trump’s hotel and other businesses are running horribly out-of-date software that receives no security patches, and are lacking other precautions for keeping hackers out.
The findings come at a time when cybersecurity is a crucial topic in the presidential election, with hackers dumping documents from Hillary Clinton’s campaign online, and Trump and his supporters continuing to criticise Clinton’s use of a private email server.
“Running outdated software and operating systems for your publicly facing email infrastructure is problematic, especially when you’re a high profile organisation,” security architect Kevin Beaumont, who highlighted the issues with Trump’s servers, told Motherboard in an email. “During an election where cybersecurity is such a big issue, I was a little amazed at what I saw.”
A number of mail servers for TrumpOrg.com, a domain registered to The Trump Organization, are using end-of-life software, according to Beaumont. Those include the operating system Windows Server 2003 and IIS 6.0, which comes shipped with it.
“IIS is a webserver, and it’s particularly dangerous to run unpatched,” Beaumont told Motherboard. . . .
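The kind of fingerprinting Beaumont describes needs no break-in: most mail servers announce their software in the greeting banner they send the moment you connect. A minimal sketch of such a check is below — the hostname argument is a placeholder and the end-of-life marker list is illustrative (not Beaumont’s actual tooling); the Windows Server 2003 SMTP service really does advertise a “Version: 6.0” banner of this shape.

```python
import smtplib

def greeting_banner(host, port=25, timeout=10):
    """Connect to an SMTP server and return (reply code, greeting banner)."""
    server = smtplib.SMTP(timeout=timeout)
    code, banner = server.connect(host, port)  # reads the 220 greeting
    server.quit()
    return code, banner.decode(errors="replace")

def looks_end_of_life(banner):
    """Rough check: does the banner advertise software past end-of-life?
    The marker list is illustrative, not exhaustive."""
    eol_markers = [
        # Banner emitted by the SMTP service shipped with Windows Server 2003
        "Microsoft ESMTP MAIL Service, Version: 6.0",
    ]
    return any(marker in banner for marker in eol_markers)

# Example (hypothetical host):
#   code, banner = greeting_banner("mail.example.com")
#   print(banner, looks_end_of_life(banner))
```

A banner alone is not proof — administrators can rewrite it — but as Beaumont notes, a fleet of servers all announcing 2003-era software is a strong signal.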
The guy can’t manage his own businesses, his own campaign. How does he think he can be president?
Big corporations act on the belief that they should be able to do whatever they want, and they use their power in support of that goal. Take a look at this legal battle that Facebook initiated:
There is a man who is a thorn in the side of Facebook — a problem that just won’t go away. In 2008, Facebook sued him, saying he was a hacker and a spammer who was putting users at risk. But in a truly bizarre plot twist, he stood up to the Internet giant — and he has become the unlikely protagonist in a battle for your rights online.
Steven Vachani and I are sitting at a Starbucks because, he doesn’t want to say it, but: He doesn’t have an office or a home in Silicon Valley anymore.
“I’m passionate about what I believe in and I’m also confident that even when the path is not clear, you know, you’ll find a path,” he says. “I’m also willing [to] suffer. I think a lot of people have a — they’re not adaptable.”
Vachani is based in Brazil, where life is more affordable. He flies into San Francisco a lot, though, in an ongoing effort to stay in the tech biz.
He’s got a daily routine here: Go on Priceline and look for the cheapest room in town. And when that’s too expensive, the 41-year-old with graying hair checks into a youth hostel. “I spent years of my life living in a youth hostel, so it’s a place where I feel very comfortable,” he says.
Vachani has had very high highs, and very low lows. In 2008, he was the CEO of a hot startup called Power Ventures. Some of the biggest investors in the Valley — the same people who back billionaire Elon Musk — backed him and his vision.
Closed versus open Internet
The Internet was different back then. You could say a war was brewing between two camps: open borders and closed borders. Facebook, he says, wanted closed borders. “Facebook would be this walled garden where everything would be safely stored there, by their definition of safely. And they would decide how you access it — and if and when you could take it out.”
Current and former Facebook employees say there’s a reason you can’t, with the click of a button, grab all your selfies, rants and newborn baby pics. While your data is not Facebook property — it’s yours, under the terms of service — the company deliberately makes it hard for you to take it and go.
Power wanted a world where users, not a company, were at the center — where they controlled their own data and could freely take it with them wherever they wanted.
So that’s what Power built: a dashboard for all your social networks. Say you had a LinkedIn, a Twitter, Facebook and MySpace (remember MySpace!). You’d put your usernames and passwords for each into the Power portal. And from there, you could post to any account, grab contacts and pictures, and so on.
Vachani claims his site had about 20 million users at its peak (so people liked it); and the social networks let him be — except for Facebook.
“Their response to us was — was a lawsuit,” he stutters as he recounts.
Facebook and Power had heated exchanges in 2008. And on Dec. 30 (call it a late Christmas present), Facebook went into court and filed a complaint.
It included two big charges: one, that Vachani is a hacker, breaking into Facebook’s computers without the company’s permission. Never mind that Facebook users gave him their names and passwords freely.
Point two — and this caught Vachani off guard — was that he was a spammer. “The word ‘spam’ is kind of like calling someone a rapist. It has — in [the] digital world — calling someone a spammer is the worst thing you could possibly call them.”
It’s actually quite clever. The complaint made it sound like Power was sending unsolicited emails to users, possibly from a fake Facebook account. It wasn’t true. But that didn’t matter. The allegation stuck.
Facebook could have taken a different tack and told users: “Hey, you’re not allowed to use this competitor.” They set other rules, like telling users not to post nude photos. But that directive would reveal a well-kept secret in Silicon Valley.
“Nobody up until that point had ever had, I think, the courage to publicly say to users that they don’t control their data,” he says, “and honestly Facebook didn’t have the courage to say that.”
A window into Facebook
Antonio García Martínez agrees. “That’s probably true, yeah. They like the fact that users don’t actually read the terms of service. They absolutely love that.”
García Martínez used to work at Facebook, as a leader in its ad business, and recently wrote a best-selling book about it called Chaos Monkeys.
He doesn’t think much of Steven Vachani’s startup, or get how scraping data from someone else’s social network is fair or even lucrative. But he does think this little-known standoff is an important window into Facebook.
In dealings with users as well as with advertisers, he says, the company is run by control freaks. “They see Facebook as the end all, be all of your social life. And they expect you to accept that, as a worldview. And this guy’s whole product flies in the face of that,” García Martínez says.
Hustling for a lawyer
Power Ventures attorney Amy Sommer Anderson says the very first judge in the case didn’t get the technicalities. “That’s how it seemed. And so he just sort of ate up what Facebook said. Yeah, this sounds complicated!” But, she says, “It’s not.”
Another judge ruled that Vachani was personally responsible for paying Facebook about $3 million in damages. His life was pretty much blown to smithereens. Vachani tried to declare bankruptcy and Facebook blocked him. His company imploded. Prior counsel walked out on him.
It’s in that context he met Anderson, “and he sold it as, you know, it’s just an appearance. It’s one time. Just a fixed fee, just to keep this from getting thrown out,” Anderson says.
Adaptable and resourceful Vachani went on Elance, the site where you find freelancers, and posted about his case — without naming the parties. If he had mentioned Facebook, he says, no one would have been crazy enough to bite.
Anderson had just become a lawyer — she went to law school at night and had never represented anyone before. Her “initial appearance” lasted about a year — and after that, she was not that great at negotiating salary. She says she agreed to a “stupid low rate.”
Asked how low, she hesitates to admit: $55 an hour. Facebook-level lawyers get paid more like $1,000 an hour.
But the more Anderson pored over the briefs and decisions, the more she felt an injustice was being done. “I was hooked on the case. It was like crack. I mean, as soon as I was in, I was in,” she says.
A little vindication
Vachani and Power kept on losing, badly. Until they got to the 9th Circuit Court of Appeals. . .
Tim Harford has an interesting long read in the Guardian that shows how relying on computers has a downside. The article begins by describing the final moments in the cockpit of Air France Flight 447 before it crashed into the ocean. It then continues:
. . . And then an alarm sounded. The autopilot had disconnected. An airspeed sensor on the plane had iced over and stopped functioning – not a major problem, but one that required the pilots to take control. But something else happened at the same time and for the same reason: the fly-by-wire system downgraded itself to a mode that gave the pilot less help and more latitude to control the plane. Lacking an airspeed sensor, the plane was unable to babysit [co-pilot Pierre-Cédric] Bonin.
The first consequence was almost immediate: the plane began rocking right and left, and Bonin overcorrected with sharp jerks on the stick. And then Bonin made a simple mistake: he pulled back on his control stick and the plane started to climb steeply.
As the nose of the aircraft rose and it started to lose speed, the automated voice barked out in English: “STALL STALL STALL.” Despite the warning, Bonin kept pulling back on the stick, and in the black skies above the Atlantic the plane climbed at an astonishing rate of 7,000 feet a minute. But the plane’s air speed was evaporating; it would soon begin to slide down through the storm and towards the water, 37,500 feet below. Had either Bonin or [co-pilot David] Robert realised what was happening, they could have fixed the problem, at least in its early stages. But they did not. Why?
The source of the problem was the system that had done so much to keep A330s safe for 15 years, across millions of miles of flying: the fly-by-wire. Or more precisely, the problem was not fly-by-wire, but the fact that the pilots had grown to rely on it. Bonin was suffering from a problem called mode confusion. Perhaps he did not realise that the plane had switched to the alternate mode that would provide him with far less assistance. Perhaps he knew the plane had switched modes, but did not fully understand the implication: that his plane would now let him stall. That is the most plausible reason Bonin and Robert ignored the alarm – they assumed this was the plane’s way of telling them that it was intervening to prevent a stall. In short, Bonin stalled the aircraft because in his gut he felt it was impossible to stall the aircraft.
Aggravating this confusion was Bonin’s lack of experience in flying a plane without computer assistance. While he had spent many hours in the cockpit of the A330, most of those hours had been spent monitoring and adjusting the plane’s computers rather than directly flying the aircraft. And of the tiny number of hours spent manually flying the plane, almost all would have been spent taking off or landing. No wonder he felt so helpless at the controls.
The Air France pilots “were hideously incompetent”, wrote William Langewiesche, in his Vanity Fair article. And he thinks he knows why. Langewiesche argued that the pilots simply were not used to flying their own aeroplane at altitude without the help of the computer. Even the experienced Captain Dubois was rusty: of the 346 hours he had been at the controls of a plane during the past six months, only four were in manual control, and even then he had had the help of the full fly-by-wire system. All three pilots had been denied the ability to practise their skills, because the plane was usually the one doing the flying.
This problem has a name: the paradox of automation. It applies in a wide variety of contexts, from the operators of nuclear power stations to the crew of cruise ships, from the simple fact that we can no longer remember phone numbers because we have them all stored in our mobile phones, to the way we now struggle with mental arithmetic because we are surrounded by electronic calculators. The better the automatic systems, the more out-of-practice human operators will be, and the more extreme the situations they will have to face. The psychologist James Reason, author of Human Error, wrote: “Manual control is a highly skilled activity, and skills need to be practised continuously in order to maintain them. Yet an automatic control system that fails only rarely denies operators the opportunity for practising these basic control skills … when manual takeover is necessary something has usually gone wrong; this means that operators need to be more rather than less skilled in order to cope with these atypical conditions.”
The paradox of automation, then, has three strands to it. First, . . .