Bruce Schneier writes:
In December, Google’s Executive Chairman Eric Schmidt was interviewed at the Cato Institute Surveillance Conference. One of the things he said, after talking about some of the security measures his company has put in place post-Snowden, was: “If you have important information, the safest place to keep it is in Google. And I can assure you that the safest place to not keep it is anywhere else.”
That surprised me, because Google collects all of your information to show you more targeted advertising. Surveillance is the business model of the Internet, and Google is one of the most successful companies at that. To claim that Google protects your privacy better than anyone else is to profoundly misunderstand why Google stores your data for free in the first place.
I was reminded of this last week when I appeared on Glenn Beck’s show along with cryptography pioneer Whitfield Diffie. Diffie said:
You can’t have privacy without security, and I think we have glaring failures in computer security in problems that we’ve been working on for 40 years. You really should not live in fear of opening an attachment to a message. It ought to be confined; your computer ought to be able to handle it. And the fact that we have persisted for decades without solving these problems is partly because they’re very difficult, but partly because there are lots of people who want you to be secure against everyone but them. And that includes all of the major computer manufacturers who, roughly speaking, want to manage your computer for you. The trouble is, I’m not sure of any practical alternative.
That neatly explains Google. Eric Schmidt does want your data to be secure. He wants Google to be the safest place for your data as long as you don’t mind the fact that Google has access to your data. Facebook wants the same thing: to protect your data from everyone except Facebook. Hardware companies are no different. Last week, we learned that Lenovo computers shipped with a piece of adware called Superfish that broke users’ security to spy on them for advertising purposes.
Governments are no different. The FBI wants people to have strong encryption, but it wants backdoor access so it can get at your data. UK Prime Minister David Cameron wants you to have good security, just as long as it’s not so strong as to keep the UK government out. And, of course, the NSA spends a lot of money ensuring that there’s no security it can’t break.
Corporations want access to your data for profit; governments want it for security purposes, be they benevolent or malevolent. But Diffie makes an even stronger point: we give lots of companies access to our data because it makes our lives easier.
I wrote about this in my latest book, Data and Goliath: . . .
The idea that law enforcement (and “official” law-breakers like NSA) can have a backdoor to your encrypted data and criminals won’t be able to use it is as realistic as the idea that law enforcement can have guns and criminals cannot.
One point of interest: “official” backdoors to allow decryption of data is being pushed by the wealthy (e.g., Hillary Clinton) and the powerful (e.g., NSA), and those are the entities with the most to lose once criminals and other (hostile) governments find the backdoors. This will be interesting to watch from a distance.
Pretty clearly that fails the test of reciprocity. Lorenzo Franceschi-Bicchierai reports at Motherboard:
When the US demands technology companies install backdoors for law enforcement, it’s okay. But when China demands the same, it’s a whole different story.
The Chinese government is about to pass a new counterterrorism law that would require tech companies operating in the country to turn over encryption keys and include specially crafted code in their software and hardware so that Chinese authorities can defeat security measures at will.
Technologists and cryptographers have long warned that you can’t design a secure system that will enable law enforcement—and only law enforcement—to bypass the encryption. The nature of a backdoor is that it is also a vulnerability, and if discovered, hackers or foreign governments might be able to exploit it, too.
Yet, over the past few months, several US government officials, including FBI Director James Comey, outgoing US Attorney General Eric Holder, and NSA Director Mike Rogers, have all suggested that companies such as Apple and Google should give law enforcement agencies special access to their users’ encrypted data—while somehow offering strong encryption for their users at the same time.
Their fear is that cops and feds will “go dark,” an FBI term for a potential scenario where encryption makes it impossible to intercept criminals’ communications.
But in light of China’s new proposals, some think the US’ own position is a little ironic.
“You can’t have it both ways,” Trevor Timm, the co-founder and the executive director of the Freedom of the Press Foundation, told Motherboard. “If the US forces tech companies to install backdoors in encryption, then tech companies will have no choice but to go along with China when they demand the same power.”
He’s not the only one to think the US government might end up regretting its stance.
Someday US officials will look back and realize how much global damage they’ve enabled with their silly requests for key escrow.
— Matthew Green (@matthew_d_green) February 27, 2015
Matthew Green, a cryptography professor at Johns Hopkins University, tweeted that someday US officials will “realize how much damage they’ve enabled” with their “silly requests” for backdoors.
Ironically, the US government sent a letter to China expressing concern about its new law. “The Administration is aggressively working to have China walk back from these troubling regulations,” US Trade Representative Michael Froman said in a statement.
A White House spokesperson did not respond to a request for comment from Motherboard.
“It’s stunningly shortsighted for the FBI and NSA not to realize this,” Timm added. “By demanding backdoors, these US government agencies are putting everyone’s cybersecurity at risk.” . . .
It’s like businesses in an environment without any government regulation: pure free enterprise and unfettered competition, which quickly leads to monopolies that crush competitors, rake in profits, and ruin the environment. Henry Farrell writes at Aeon:
The Hidden Wiki holds the keys to a secret internet. To reach it, you need a special browser that can access ‘Tor Hidden Services’ – websites that have chosen to obscure their physical location. But even this browser isn’t enough. Like the Isla de Muerta in the film Pirates of the Caribbean, the landmarks of this hidden internet can be discovered only by those who already know where they are.
Sites such as the Hidden Wiki provide unreliable treasure maps. They publish lists of the special addresses for sites where you can use Bitcoin to buy drugs or stolen credit card numbers, play strange games, or simply talk, perhaps on subjects too delicate for the open web. The lists are often untrustworthy. Sometimes the addresses are out-of-date. Sometimes they are actively deceptive. One link might lead to a thriving marketplace for buying and selling stolen data; another, to a wrecker’s display of false lights, a cloned site designed to relieve you of your coin and give you nothing in return.
This hidden internet is a product of debates among technology-obsessed libertarians in the 1990s. These radicals hoped to combine cryptography and the internet into a universal solvent that would corrupt the bonds of government tyranny. New currencies, based on recent cryptographic advances, would undermine traditional fiat money, seizing the cash nexus from the grasp of the state. ‘Mix networks’, where everyone’s identity was hidden by multiple layers of encryption, would allow people to talk and engage in economic exchange without the government being able to see.
Plans for cryptographic currencies led to the invention of Bitcoin, while mix networks culminated in Tor. The two technologies manifest different aspects of a common dream – the utopian aspiration to a world where one could talk and do business without worrying about state intervention – and indeed they grew up together. For a long time, the easiest way to spend Bitcoin was at Tor’s archipelago of obfuscated websites.
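The layering idea behind mix networks is simple to sketch. Here is a minimal toy illustration of my own (the hash-based XOR keystream is NOT real cryptography, and Tor’s actual protocol is far more involved): the sender wraps a message in one layer of encryption per relay, and each relay can strip only the layer keyed to it, so no single relay sees both the sender and the plaintext.

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Toy keystream: repeated SHA-256 of key || counter. Illustrative only,
    # not a secure cipher.
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def xor_layer(data: bytes, key: bytes) -> bytes:
    # XOR with the keystream both adds and removes a layer (XOR is its own inverse).
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

def wrap(message: bytes, hop_keys: list[bytes]) -> bytes:
    # The sender applies one layer per relay on the circuit.
    for key in reversed(hop_keys):
        message = xor_layer(message, key)
    return message

def peel(onion: bytes, hop_keys: list[bytes]) -> bytes:
    # Each relay in turn removes exactly the layer keyed to it.
    for key in hop_keys:
        onion = xor_layer(onion, key)
    return onion

hops = [b"entry-key", b"middle-key", b"exit-key"]  # hypothetical relay keys
onion = wrap(b"hello from the hidden internet", hops)
assert peel(onion, hops) == b"hello from the hidden internet"
```

The point of the structure is that the entry relay learns who you are but not what you said, and the exit relay learns what you said but not who you are.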
Like the pirate republics of the 18th century, this virtual underworld mingles liberty and vice. Law enforcement and copyright-protection groups such as the Digital Citizens’ Alliance in Washington, DC, prefer to emphasise the most sordid aspects of Tor’s hidden services – the sellers of drugs, weapons and child pornography. And yet the effort to create a hidden internet was driven by ideology as much as avarice. The network is used by dissidents as well as dope-peddlers. If you live under an authoritarian regime, Tor provides you with a ready-made technology for evading government controls on the internet. Even some of the seedier services trade on a certain idealism. Many libertarians believe that people should be able to buy and sell drugs without government interference, and hoped to build marketplaces to do just that, without violence and gang warfare.
Tor’s anonymity helps criminals by making it harder for the state to identify and detain them. Yet this has an ironic side-effect: it also makes it harder for them to trust each other, because they typically can’t be sure who their interlocutors are. To make money in hidden markets, you need people to trust you, so that they will buy from you and sell to you. Having accomplished this first manoeuvre, the truly successful entrepreneurs go one step further. They become middlemen of trust, guaranteeing relations between others and taking a cut from the proceeds.
To this end, entrepreneurs have found it necessary to create and maintain communities, making rules, enforcing them, punishing rule-breakers, and turning towards violence when all else fails. They have, in effect, built petty versions of the very governments they are fleeing. As the US sociologist Charles Tilly argued, the modern state began as a protection racket, offering its subjects protection against outsiders and each other. The same logic is playing out today on the hidden internet, as would-be petty barons and pirate kings fight to tax and police their subjects while defending themselves against hostile incursions.
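The “middleman of trust” role Farrell describes is, mechanically, escrow: hold the buyer’s payment until delivery is confirmed, then release it minus a commission. A rough sketch of that logic, with names and the fee rate invented for illustration (not any real market’s design):

```python
from dataclasses import dataclass, field

@dataclass
class Escrow:
    # Toy model of a marketplace middleman: payments are held until delivery
    # is confirmed, and the operator keeps a cut of each completed sale.
    fee_rate: float = 0.05
    held: dict = field(default_factory=dict)      # order_id -> (buyer, seller, amount)
    balances: dict = field(default_factory=dict)  # party -> funds owed

    def deposit(self, order_id: str, buyer: str, seller: str, amount: float) -> None:
        self.held[order_id] = (buyer, seller, amount)

    def confirm_delivery(self, order_id: str) -> None:
        # Buyer confirms receipt: release funds to the seller, minus the fee.
        _buyer, seller, amount = self.held.pop(order_id)
        fee = amount * self.fee_rate
        self.balances[seller] = self.balances.get(seller, 0.0) + amount - fee
        self.balances["operator"] = self.balances.get("operator", 0.0) + fee

    def refund(self, order_id: str) -> None:
        # Dispute resolved for the buyer: return the held payment.
        buyer, _seller, amount = self.held.pop(order_id)
        self.balances[buyer] = self.balances.get(buyer, 0.0) + amount
```

The commission is what makes the operator a petty sovereign: the same party that profits from each trade also adjudicates disputes and punishes rule-breakers.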
No entrepreneur of trust was more successful than the Texan Ross Ulbricht, who, under his ‘Dread Pirate Roberts’ pseudonym, founded and ran the notorious Silk Road marketplace for drugs and other contraband. And no-one better exemplifies how the libertarian dream of freedom from the state turned sour.
Ulbricht built the Silk Road marketplace from nothing, pursuing both a political dream and his own self-interest. However, in making a market he found himself building a micro-state, with increasing levels of bureaucracy and rule‑enforcement and, eventually, the threat of violence against the most dangerous rule‑breakers. Trying to build Galt’s Gulch, he ended up reconstructing Hobbes’s Leviathan; he became the very thing he was trying to escape. But this should not have been a surprise. . .
Continue reading. Later in the article:
The libertarian hope that markets could sustain themselves through free association and choice is a chimera with a toxic sting in its tail. Without state enforcement, the secret drug markets of Tor hidden services are coming to resemble an anarchic state of nature in which self-help dominates.
Libertarianism is a fantasy that does poorly in the real world.
A perhaps realistic take on the Net Neutrality victory: big corporations will keep fighting it for years. Leticia Miranda reports in ProPublica:
The Federal Communications Commission is scheduled to vote on a proposal today that effectively bars Internet companies from prioritizing some Internet traffic over others. As John Oliver famously explained, “ending net neutrality would allow big companies to buy their way into the fast lane, leaving everyone else in the slow lane.”
The FCC’s proposal faces plenty of opposition from telecom companies and others, but it’s just the latest round in a long fight. Here is a brief history of attempts to enact net neutrality and the often successful push against it.
The FCC votes to deregulate cable Internet services.
March 2002: The FCC, under the Bush administration and Republican Chairman Michael Powell, declares that cable modem services are “not subject to common carrier regulation,” meaning they aren’t bound by standards for nondiscrimination in service. Instead, cable Internet services fall under a separate light regulatory regime that gives the commission limited enforcement power.
Tim Wu coins the phrase “net neutrality.”
Fall 2003: Tim Wu, then an associate professor at the University of Virginia Law School, first coins the term “net neutrality” in a paper for the Journal of Telecommunications and High Technology Law. He defines net neutrality to mean an Internet “that does not favor one application…over others.”
The FCC adopts a toothless net neutrality-like policy statement.
August 2005: The FCC adopts a policy statement to “preserve and promote the open and interconnected nature of public Internet,” which focuses on protecting consumer access to content online and competition among Internet service companies. The statement has no power of enforcement.
The first net neutrality bill is introduced in Congress. It dies.
May 2006: Sen. Ed Markey, D-Mass., introduces a net neutrality bill that would keep Internet service companies from blocking, degrading or interfering with users’ access to their services. But the bill stalls in the House Committee on Energy and Commerce and never comes to a vote.
The FCC tells Comcast to stop slowing down access to BitTorrent.
August 2008: The FCC, under Republican Chairman Kevin Martin, orders Comcast to stop slowing down user access to BitTorrent, a peer-to-peer sharing network often used to share music and videos.
Comcast sues the FCC, and wins.
September 2008 — April 2010: Comcast voluntarily agrees to stop slowing down BitTorrent traffic. But it takes the FCC to court anyway, arguing that the agency is operating outside its authority. Specifically, the company points out that the FCC’s 2005 policy statement on neutrality doesn’t have the force of law.
The FCC writes real rules on net neutrality.
December 2010: Democratic FCC Chairman Julius Genachowski writes an order to impose net neutrality rules. Unlike the FCC’s 2005 policy statement, this new order is a real rule, not just a policy statement.
Except Verizon sues the FCC, saying it has no authority to enforce the rules, and wins.
September 2011 — January 2014: The District of Columbia Circuit Court of Appeals rules the Federal Communications Commission can’t enforce net neutrality rules because broadband Internet services don’t fall under its regulatory authority.
A senator introduces a net neutrality bill that would ban the FCC from enforcing net neutrality rules. . . .
Jeff Wise writes in New York magazine:
The unsettling oddness was there from the first moment, on March 8, when Malaysia Airlines announced that a plane from Kuala Lumpur bound for Beijing, Flight 370, had disappeared over the South China Sea in the middle of the night. There had been no bad weather, no distress call, no wreckage, no eyewitness accounts of a fireball in the sky—just a plane that said good-bye to one air-traffic controller and, two minutes later, failed to say hello to the next. And the crash, if it was a crash, got stranger from there.
My yearlong detour to Planet MH370 began two days later, when I got an email from an editor at Slate asking if I’d write about the incident. I’m a private pilot and science writer, and I wrote about the last big mysterious crash, of Air France 447 in 2009. My story ran on the 12th. The following morning, I was invited to go on CNN. Soon, I was on-air up to six times a day as part of its nonstop MH370 coverage.
There was no intro course on how to be a cable-news expert. The Town Car would show up to take me to the studio, I’d sign in with reception, a guest-greeter would take me to makeup, I’d hang out in the greenroom, the sound guy would rig me with a mike and an earpiece, a producer would lead me onto the set, I’d plug in and sit in the seat, a producer would tell me what camera to look at during the introduction, we’d come back from break, the anchor would read the introduction to the story and then ask me a question or maybe two, I’d answer, then we’d go to break, I would unplug, wipe off my makeup, and take the car 43 blocks back uptown. Then a couple of hours later, I’d do it again. I was spending 18 hours a day doing six minutes of talking.
As time went by, CNN winnowed its expert pool down to a dozen or so regulars who earned the on-air title “CNN aviation analysts”: airline pilots, ex-government honchos, aviation lawyers, and me. We were paid by the week, with the length of our contracts dependent on how long the story seemed likely to play out. The first couple were seven-day, the next few were 14-day, and the last one was a month. We’d appear solo, or in pairs, or in larger groups for panel discussions—whatever it took to vary the rhythm of perpetual chatter.1
I soon realized the germ of every TV-news segment is: “Officials say X.” The validity of the story derives from the authority of the source. The expert, such as myself, is on hand to add dimension or clarity. Truth flowed one way: from the official source, through the anchor, past the expert, and onward into the great sea of viewerdom.
What made MH370 challenging to cover was, first, that the event was unprecedented and technically complex and, second, that the officials were remarkably untrustworthy. For instance, the search started over the South China Sea, naturally enough, but soon after, Malaysia opened up a new search area in the Andaman Sea, 400 miles away. Why? Rumors swirled that military radar had seen the plane pull a 180. The Malaysian government explicitly denied it, but after a week of letting other countries search the South China Sea, the officials admitted that they’d known about the U-turn from day one.
Of course, nothing turned up in the Andaman Sea, either. But in London, scientists for a British company called Inmarsat that provides telecommunications between ships and aircraft realized its database contained records of transmissions between MH370 and one of its satellites for the seven hours after the plane’s main communication system shut down. Seven hours! Maybe it wasn’t a crash after all—if it were, it would have been the slowest in history.
These electronic “handshakes” or “pings” contained no actual information, but by analyzing the delay between the transmission and reception of the signal—called the burst timing offset, or BTO—Inmarsat could tell how far the plane had been from the satellite and thereby plot an arc along which the plane must have been at the moment of the final ping. That arc stretched some 6,000 miles, but if the plane was traveling at normal airliner speeds, it would most likely have wound up around the ends of the arc—either in Kazakhstan and China in the north or the Indian Ocean in the south. My money was on Central Asia. But CNN quoted unnamed U.S.-government sources saying that the plane had probably gone south, so that became the dominant view. Other views were circulating, too, however. A Canadian pilot named Chris Goodfellow went viral with his theory that MH370 suffered a fire that knocked out its communications gear and diverted from its planned route in order to attempt an emergency landing. Keith Ledgerwood, another pilot, proposed that hijackers had taken the plane and avoided detection by ducking into the radar shadow of another airliner. Amateur investigators pored over satellite images, insisting that wisps of cloud or patches of shrubbery were the lost plane.
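The arithmetic behind those ping arcs is straightforward range-finding. A rough sketch, treating the BTO as a round-trip delay and the satellite as an ideal geostationary point directly overhead (all numbers and the bias parameter are illustrative; Inmarsat’s real analysis had to calibrate equipment delays and use spherical geometry):

```python
import math

C = 299_792.458  # speed of light, km/s

def slant_range_km(bto_microseconds: float, bias_us: float = 0.0) -> float:
    # One-way distance to the satellite: the BTO is roughly a round-trip
    # delay plus a fixed equipment bias, so distance = c * (BTO - bias) / 2.
    return C * (bto_microseconds - bias_us) * 1e-6 / 2

def ground_ring_radius_km(slant_km: float, sat_altitude_km: float = 35_786.0) -> float:
    # Simplified flat-Earth geometry under the satellite: the slant range,
    # the satellite's altitude, and the ring radius form a right triangle,
    # so slant^2 = altitude^2 + radius^2.
    return math.sqrt(max(slant_km**2 - sat_altitude_km**2, 0.0))
```

Each ping thus pins the plane to a circle on the ground centered under the satellite; intersecting that circle with fuel range and speed assumptions is what produced the northern and southern arc endpoints.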
Then: breaking news! . . .
Read the whole thing. Good photos. Plus he really does have an interesting argument. Later in the article:
It’s not possible to spoof the BFO data on just any plane. The plane must be of a certain make and model, equipped with a certain make and model of satellite-communications equipment, and flying a certain kind of route in a region covered by a certain kind of Inmarsat satellite. If you put all the conditions together, it seemed unlikely that any aircraft would satisfy them. Yet MH370 did.
(He provides the specifics for each criterion.)
Kevin Drum has a good post on the benefits of the FCC decision beyond net neutrality:
The FCC voted today in favor of strong net neutrality rules, but this is something that’s been expected for weeks—and something I’ve written about before at length. So instead of commenting on that yet again, I want to highlight something else that might be nearly as important: . . .
UPDATE: Net neutrality succeeded (for now) through an effective guerrilla activism campaign.
Thanks to GOP commissioners, we won’t see the full net neutrality rules today, by Sam Gustin, in Motherboard.
Why everyone was wrong about Net Neutrality, by Tim Wu, in the New Yorker.
Very interesting Motherboard article by Bruce Schneier, a well-known (and highly qualified) cybersecurity consultant:
The thing about infrastructure is that everyone uses it. If it’s secure, it’s secure for everyone. And if it’s insecure, it’s insecure for everyone. This forces some hard policy choices.
When I was working with the Guardian on the Snowden documents, the one top-secret program the NSA desperately did not want us to expose was QUANTUM. This is the NSA’s program for what is called packet injection—basically, a technology that allows the agency to hack into computers.
Turns out, though, that the NSA was not alone in its use of this technology. The Chinese government uses packet injection to attack computers. The cyberweapons manufacturer Hacking Team sells packet injection technology to any government willing to pay for it. Criminals use it. And there are hacker tools that give the capability to individuals as well.
All of these existed before I wrote about QUANTUM. By using its knowledge to attack others rather than to build up the internet’s defenses, the NSA has worked to ensure that anyone can use packet injection to hack into computers.
Technological capabilities cannot distinguish based on morality, nationality, or legality
This isn’t the only example of once-top-secret US government attack capabilities being used against US government interests. StingRay is a particular brand of IMSI catcher, and is used to intercept cell phone calls and metadata. This technology was once the FBI’s secret, but not anymore. There are dozens of these devices scattered around Washington, DC, as well as the rest of the country, run by who-knows-what government or organization. By accepting the vulnerabilities in these devices so the FBI can use them to solve crimes, we necessarily allow foreign governments and criminals to use them against us.
Similarly, vulnerabilities in phone switches—SS7 switches, for those who like jargon—have been long used by the NSA to locate cell phones. This same technology is sold by the US company Verint and the UK company Cobham to third-world governments, and hackers have demonstrated the same capabilities at conferences. An eavesdropping capability that was built into phone switches to enable lawful intercepts was used by still-unidentified unlawful intercepters in Greece between 2004 and 2005.
These are the stories you need to keep in mind when thinking about proposals to ensure that all communications systems can be eavesdropped on by government. Both the FBI’s James Comey and UK Prime Minister David Cameron recently proposed limiting secure cryptography in favor of cryptography they can have access to.
But here’s the problem: technological capabilities cannot distinguish based on morality, nationality, or legality; if the US government is able to use a backdoor in a communications system to spy on its enemies, the Chinese government can use the same backdoor to spy on its dissidents.
Even worse, modern computer technology is inherently democratizing. Today’s NSA secrets become tomorrow’s PhD theses and the next day’s hacker tools. As long as we’re all using the same computers, phones, social networking platforms, and computer networks, a vulnerability that allows us to spy also allows us to be spied upon.
We can’t choose a world where the US gets to spy but China doesn’t, or even a world where governments get to spy and criminals don’t. We need to choose, as a matter of policy, communications systems that are secure for all users, or ones that are vulnerable to all attackers. It’s security or surveillance. . .