Archive for the ‘Technology’ Category
A good article on encrypting your computer’s hard drive—a sensible step if you ever take your computer across international borders, since all your data can be copied any time you cross a border, no reasonable suspicion required. Micah Lee writes in The Intercept:
Time and again, people are told there is one obvious way to mitigate privacy threats of all sorts, from mass government surveillance to pervasive online tracking to cybercriminals: Encryption. As President Obama put it earlier this year, speaking in between his administration’s attacks on encryption, “There’s no scenario in which we don’t want really strong encryption.” Even after helping expose all the ways the government can get its hands on your data, NSA whistleblower Edward Snowden still maintained, “Encryption works. Properly implemented strong crypto systems are one of the few things that you can rely on.”
But how can ordinary people get started using encryption? Encryption comes in many forms and is used at many different stages in the handling of digital information (you’re using it right now, perhaps without even realizing it, because your connection to this website is encrypted). When you’re trying to protect your privacy, it’s totally unclear how, exactly, to start using encryption. One obvious place to start, where the privacy benefits are high and the technical learning curve is low, is something called full disk encryption. Full disk encryption not only provides the type of strong encryption Snowden and Obama reference, but it’s built into all major operating systems, it’s the only way to protect your data in case your laptop gets lost or stolen, and it takes minimal effort to get started and use.
If you want to encrypt your hard disk and have it truly help protect your data, you shouldn’t just flip it on; you should know the basics of what disk encryption protects, what it doesn’t protect, and how to avoid common mistakes that could let an attacker easily bypass your encryption.
If you’re in a hurry, go ahead and skip to the bottom, where I explain, step-by-step, how to encrypt your disk for Windows, Mac OS X, and Linux. Then, when you have time, come back and read the important caveats preceding those instructions.
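Before following those instructions, it’s worth checking whether your disk is already encrypted. As a quick aid, here is a minimal sketch (the function and dictionary names are my own invention; the commands themselves are the stock status checks that ship with each platform, though your setup may differ) that prints the right command to run for your operating system:

```python
import platform

# Built-in full-disk-encryption status checks per platform.
FDE_STATUS_COMMANDS = {
    "Darwin": "fdesetup status",           # FileVault on macOS
    "Windows": "manage-bde -status",       # BitLocker (run in an admin prompt)
    "Linux": "lsblk -o NAME,TYPE,FSTYPE",  # look for crypto_LUKS entries
}

def fde_status_command(system=None):
    """Return the command that reports disk-encryption status."""
    system = system or platform.system()
    try:
        return FDE_STATUS_COMMANDS[system]
    except KeyError:
        raise ValueError(f"no known check for {system!r}")

print(fde_status_command())
```

On macOS, for example, `fdesetup status` simply reports whether FileVault is on; on Linux, a `crypto_LUKS` filesystem type in the `lsblk` output indicates an encrypted volume.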
What disk encryption guards against
If someone gets physical access to your computer and you aren’t using disk encryption, they can very easily steal all of your files.
It doesn’t matter if you have a good password because the attacker can simply boot to a new operating system off of a USB stick, bypassing your password, to look at your files. Or they can remove your hard disk and put it in a different computer to gain access. All they need is a screwdriver, a second computer, and a $10 USB enclosure.
Computers have become an extension of our lives and private information continually piles up on our hard disks. Your computer probably contains work documents, photos and videos, password databases, web browser histories, and other scattered bits of information that don’t belong to anyone but you. Everyone should be running full-disk encryption on their laptops.
Encrypting your disk will protect you and your data in case your laptop falls into the wrong hands, whether because you accidentally left it somewhere, because your home or office was burglarized, or because it was seized by government agents at home or abroad.
It’s worth noting that no one has privacy rights when crossing borders. Even if you’re a U.S. citizen entering the United States, your Constitutional rights do not apply at the border, and border agents reserve the right to copy all of the files off of your computer or phone if they choose to. This is also true in Canada, and in other countries around the world. If you plan on traveling with electronic devices, disk encryption is the only way you have a chance at protecting your data if border agents insist on searching you. In some situations it might be in your best interest to cooperate and unlock your device, but in others it might not. Without disk encryption, the choice is made for you: the border agents get all your data.
What disk encryption is useless against
There’s a common misconception that encrypting your hard disk makes your computer secure, but this isn’t entirely true. In fact, disk encryption is only useful against attackers that have physical access to your computer. It doesn’t make your computer any harder to attack over a network. . .
I have to admit that I was quite skeptical of Tom Wheeler for a long time—partly because of his background and partly because I distrust Obama’s appointments (cf. John Brennan). But this one has turned out well. Sam Gustin writes at Motherboard:
Earlier this week, Comcast CEO Brian Roberts reached out to Federal Communications Commission Chairman Tom Wheeler in a last-ditch effort to lobby for the cable giant’s $45 billion merger with Time Warner Cable. Roberts argued that the deal would benefit consumers and advance the public interest, adding that the company was “eager” to complete the transaction.
Comcast’s argument failed. The cable giant has announced that it’s abandoning the merger with Time Warner Cable, now that regulators have made clear that the deal would face nearly insurmountable obstacles.
“Today, we move on,” Comcast’s Roberts said in a statement.
The deal’s death blow came when the FCC decided to send the merger to an administrative law judge, which could have resulted in a year-long, trial-like public spectacle. Such hearings are so burdensome for companies that over the last 30 years, no big telecom merger has ever been completed once it was designated for this process, according to policy experts.
“This is the FCC’s nuclear option,” said a person close to the Comcast merger review. “It’s the way to kill a deal.”
By the time Comcast met with federal regulators on Wednesday in a final effort to salvage the merger, FCC officials had become convinced that the deal would not benefit consumers and needed to be blocked. At that point, there was little Comcast could do other than walk away.
“Today, an online video market is emerging that offers new business models and greater consumer choice,” Wheeler said in a statement. “The proposed merger would have posed an unacceptable risk to competition and innovation especially given the growing importance of high-speed broadband to online video and innovative new services.”
The FCC’s strong opposition to the merger is just the latest surprising example of how the agency has sided with public interest advocates against corporate giants under Wheeler’s leadership. From the FCC’s tough new net neutrality rules to the agency’s support for community broadband networks and now its resistance to the Comcast deal, Wheeler has emerged as an unlikely public interest champion.
“I’ve been working in the public interest community for 40 years and I can’t recall a similar period where the FCC moved so aggressively to promote broad public interest objectives and stand up to large corporate interests,” said Andrew Jay Schwartzman, senior counselor at the Georgetown University Law Center’s Institute for Public Representation in Washington. . .
Why Obama won’t use the word “genocide” to describe the mass killing of Armenians by Turkey a century ago, despite his campaign promise
In the Washington Monthly Ed Kilgore looks at the likely reasons Obama has broken yet another pledge that he voluntarily made:
The one thing that is clear about the furor now arising over the Obama administration’s decision to commemorate the centennial of the slaughter of one-and-a-half million Armenians by the Ottoman Empire during World War I without calling it a “genocide” is that the president is indeed breaking a campaign pledge. He’s doing so for much the same reason his predecessors refused to use the g-word: Turkey is a strategically important NATO ally, while Armenia is not. At the moment, Turkey is deeply engaged in the fight against Islamic State.
Without question, if Obama refers to “genocide” on Friday, there will be real-life repercussions. Although the Turks have gradually accepted that something terrible was done by the Ottomans in 1915 to members of a despised minority whose population center lay athwart the border with the war enemy and ancient rival Russia, they deny it meets the intent standards for “genocide.”
To be clear, Americans can say all sorts of graphic things about 1915 so long as they avoid the “g-word.” And there will be a high-level American presence at the centennial commemoration—just not high enough or clear enough, as this terse description from AP indicates:
Tuesday’s announcement, accompanied by word that the treasury secretary, Jacob Lew, will attend a ceremony in Armenia on Friday to mark the anniversary, was made shortly after the secretary of state, John Kerry, met with Turkey’s foreign minister, Mevlut Cavusoglu, in Washington.
In brief comments to reporters at the State Department, neither Kerry nor Cavusoglu mentioned Armenia or the upcoming 24 April anniversary.
It will be interesting to see if Obama’s action—or inaction—here joins . . .
Dan Froomkin reports at The Intercept:
A whole new and very dangerous field of warfare has been developed by the Obama administration, in secret, using untested legal justifications, and without even the faintest whiff of oversight.
So kudos to Patrick Tucker, technology editor for Defense One, who took advantage of a recent moment with National Security Agency chief Michael Rogers to ask him: Is there a way to discuss publicly what the future of cyberwar operations will look like?
Rogers said, dismissively, that the public should trust that the U.S. will follow the international laws of conflict and that its use of cyberwarfare would “be proportional” and “in line with the broader set of norms that we’ve created over time.”
But he also acknowledged the need, at some point, for the public to have some sort of a say.
Rogers likened cyberattacks to the development of mass firepower in the 1800s. “Cyber represents change, a different technical application to attempt to achieve some of the exact same effects, just do it in a different way,” he said.
“Like those other effects, I think, over time, we’ll have a broad discussion in terms of our sense of awareness, both in terms of capabilities as well as limitations.”
That discussion is long overdue.
The almost always-wrong Washington Post editorial board had it exactly right when it wrote “now that the United States is going beyond defense, expanding forces for offensive attack, there’s a crying need for more openness. So far, forces exist almost entirely in the shadows.”
The editorial continued:
What concerns us is not the growth of forces but the way it is happening behind the scenes. The U.S. Cyber Command is a military unit, but its chief, Gen. Keith Alexander, is also director of the National Security Agency, which is part of the intelligence community. So far, operations and deployments are being handled almost entirely in secret.
Aside from a line in a speech last fall by Defense Secretary Leon Panetta, and some vague language in a 2011 strategy paper, the missions, purpose and scope of conflict have yet to be satisfactorily revealed. One large missing piece is a declaratory policy similar to that used for nuclear weapons in the Cold War, when nuclear policy was openly debated without divulging important secrets. There’s also little public information about rules of engagement for forces or about chain of command and authority to use them. The nature of the threat should also be exposed to a generous dose of sunlight. If conflict in cyberspace is underway, then it is important to sustain support for the resources and decisions to fight it, and that will require more candor.
You may have gathered by the reference to Alexander and Panetta that this was not a recent editorial. In fact, it came out two years ago. The response: *crickets*.
David Sanger’s 2012 book Confront and Conceal: Obama’s Secret Wars and Surprising Use of American Power described the Obama administration’s previously secret cyberwar campaign against Iran and raised an excellent question: “What is the difference between attacking a country’s weapon-making machinery through a laptop computer or through bunker-busters?”
No answer was forthcoming.
As Chase Madar, an attorney and the author of The Passion of Bradley Manning: The Story Behind the WikiLeaks Whistleblower, wrote in 2013: . . .
Look at the earlier posts today on the FBI’s forensic “science” mess. Look at how the FBI pays criminals to encourage feeble-minded mopes to attempt a terrorist attack, then provides the plans and materials (so that the FBI can get credit for stopping its own plot). And now consider how the FBI seems unable to grasp the elementary facts of encryption. How do you explain all those? Occam’s Razor suggests the simplest answer: the FBI, as an organization, is stupid. I am not happy about that. We would all be better off if the FBI, as an organization, were intelligent. But it is a highly authoritarian organization, and such organizations generally drift toward stupidity because they choke off constructive feedback: feedback often points out organizational error, and authoritarian organizations will not admit error.
Jason Koebler reports for Motherboard:
It has now been six months since FBI Director James Comey said that “encryption threatens to lead all of us to a very dark place.” Since then, the FBI, Department of Justice, President Obama, and NSA have all taken potshots at encryption, each suggesting that the risk of criminals using the technology to hide from law enforcement outweighs the benefit to ordinary people of keeping their data and communications private.
The frustrating thing about all of this is how little the conversation has changed in the last six months. Comey and his counterparts at the NSA keep saying that they want lawful, technologically sound ways to access encrypted data if they are given permission to do so by a judge. People who understand the technology keep telling them that such a system is not possible.
Let’s just reiterate that for a moment. To create an alternate way of accessing encrypted data, you must necessarily create a security hole, a backdoor, into that data. When you purposefully create security holes, those holes can be exploited by others (i.e., not just the FBI or the NSA). At that point, is it really still encryption?
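To make that hole concrete, here is a toy sketch (XOR stands in for a real cipher, and every name here is invented purely for illustration, this is not real cryptography). In any design where each user’s key is also escrowed under one “lawful access” master key, whoever obtains that single master key, agency or attacker, can read every user’s data at once:

```python
import secrets

def toy_encrypt(data: bytes, key: bytes) -> bytes:
    # XOR keystream: a stand-in for a real cipher, NOT actual crypto.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

toy_decrypt = toy_encrypt  # XOR is its own inverse

master_key = secrets.token_bytes(32)  # the "lawful access" key
ciphertexts, escrowed_keys = {}, {}

for user in ("alice", "bob"):
    user_key = secrets.token_bytes(32)
    ciphertexts[user] = toy_encrypt(f"{user}'s private data".encode(), user_key)
    # The backdoor: each user key is also stored under the master key.
    escrowed_keys[user] = toy_encrypt(user_key, master_key)

# One stolen master key recovers every user's key, hence all their data:
for user in ciphertexts:
    recovered_key = toy_decrypt(escrowed_keys[user], master_key)
    assert toy_decrypt(ciphertexts[user], recovered_key).decode().startswith(user)
```

The point of the sketch is the last loop: the master key is a single point of failure for everyone, which is exactly why a purposeful hole stops being “just for the FBI.”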
“The notion that electronic devices and communications could never be unlocked or unencrypted—even when a judge has decided that the public interest requires accessing this data to find evidence—is troubling,” FBI Executive Assistant Director Amy Hess wrote in a Wall Street Journal editorial. “It may be time to ask: Is that a cost we, as a society, are prepared to pay?”
She said the move to “ubiquitous encryption” will usher in an era in which criminals will run free after hiding incriminating evidence “without fear of discovery by the police.”
It’s time for the FBI and NSA to tell us what they really want. Because for the last six months, both agencies have been repeatedly asking for something that is simply technologically impossible. When confronted with that fact, the agencies resort to the sort of rhetoric that shows up in Hess’s editorial and in Comey’s speeches. They favor “robust encryption as a key tool to strengthen cybersecurity,” but what does that mean? Who can have encryption, and what kind of encryption can they have?
It’s worth noting that, until recently, the FBI recommended that you encrypt your phone. It’s also worth noting that the man who wrote the Patriot Act thinks you should be allowed to use encryption.
NSA Director Michael Rogers has proposed what is known as a “split key” system: the phone manufacturer would create an extra encryption key and then distribute its “parts” to different entities. It’s kind of like escrow—someone holds the key to unlock your phone or your email or whatever. If the NSA or FBI gets a warrant to decrypt the data, it goes to the escrow holders, retrieves the key, and then has access to all of your data.
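Mechanically, a two-of-two split is simple. Here is a minimal sketch (XOR-based secret sharing with invented names, standing in for whatever scheme Rogers actually has in mind): neither share alone reveals anything about the key, and both escrow holders must cooperate to rebuild it. Notice that everything hard about the proposal, securing the repositories, validating warrant requests, distributing keys, lives entirely outside this code:

```python
import secrets

def split_key(key: bytes):
    # Two-of-two XOR secret sharing: share_a is uniformly random,
    # so each share by itself is statistically independent of the key.
    share_a = secrets.token_bytes(len(key))
    share_b = bytes(k ^ a for k, a in zip(key, share_a))
    return share_a, share_b

def recombine(share_a: bytes, share_b: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(share_a, share_b))

device_key = secrets.token_bytes(32)  # e.g. a phone's storage key
share_for_vendor, share_for_agency = split_key(device_key)

# Only with BOTH shares (say, after a warrant) is the key rebuilt:
assert recombine(share_for_vendor, share_for_agency) == device_key
```

The math is the easy part; as the objections below make clear, the operational and geopolitical questions are where the proposal falls apart.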
The problems with this suggestion are numerous. Who holds the key? Who can you trust with the key? Joseph Lorenzo Hall, chief technologist with the Center for Democracy and Technology, calls it “not a serious proposal,” for lots of reasons. The FBI and NSA act as though the United States is the only country in the world that wants access to encrypted data. It’s not. American companies are overwhelmingly dominant globally, and companies like Apple, Google, Facebook, and Twitter make heaps of cash overseas. Facebook and Twitter both have a history of caving to the demands of autocratic governments when faced with the possibility of being shut down. So, if the US gets its “golden key” for WhatsApp users, does Turkey get one too? Does Pakistan? Does China? Does Russia? How are you going to make all these keys and keep them separate? What happens if someone gets ahold of them?
And what happens to those American companies who are shipping products globally with built-in backdoors allowing US law enforcement to access user data? Such a provision doesn’t seem likely to go over well in, say, Germany.
This conundrum is the exact same one that the US ran into back in 1997, Hall wrote:
We demonstrated [in 1997] that there would be no provably secure way to communicate using split-key key escrow systems, so certain types of sensitive transactions involving health information, financial information, and intimate information would be more vulnerable to interception in the case of a flaw, compromise, or abuse of the system. Also, securing repositories of keying material, validating requests for keys, and distributing keys would be exceedingly complex, and likely much more complex than the underlying encryption itself.
This is costly to say the least, but it can also be dangerous in that adding complexity to a system will inevitably lead to additional methods to undermine it and find vulnerabilities that can be used to attack it.
This is also the exact same conclusion reached by Matthew D. Green, a security researcher at Johns Hopkins University. Here is the premise of a recent blog post he wrote: “Let’s pretend that encryption backdoors are a great idea. From a purely technical point of view, what do we need to do to implement them, and how achievable is it?”
Green’s entire blog post is worth reading, because he outlines several ways in which such a system could be implemented. Each of those backdoors essentially amounts to an attack on encryption that would (a) not work, (b) be ridiculously expensive and difficult to implement, or (c) create unnecessary and exploitable vulnerabilities. Green is widely seen as one of the best in the business when it comes to this stuff. Basically, he knows his shit.
And here is the conclusion he reaches:
If this post has been more questions than answers, that’s because there really are no answers right now. A serious debate is happening in an environment that’s almost devoid of technical input, at least from technical people who aren’t part of the intelligence establishment.
The Washington Post notes, in an article outlining the split-key idea, that neither the NSA nor the FBI will (or can) name a single instance in which they were unable to thwart a terrorist or punish a criminal because they couldn’t break encryption. Instead, the NSA and the FBI are plugging their ears and screaming about “bad guys” and “darkness” when it comes to encryption. They are not offering technical solutions, they are not offering alternatives; they are fear-mongering.
So, what does the intelligence community want? . . .
Kevin Drum points out another instance of Congress and the Obama administration failing to do their basic job of protecting the public interest:
Today is the fifth anniversary of the Deepwater Horizon oil rig explosion in the Gulf of Mexico, an event that triggered the nation’s worst-ever oil spill. The well leaked for three months and dumped over 200 million gallons of oil into the sea. The explosion itself killed eleven men; the resulting pollution killed a stupefying amount of wildlife, including some 800,000 birds. And despite the billions BP has paid out in fines and restoration costs, the economic impact of the disaster remains wide-reaching and ongoing.
But possibly even more outrageous than the spill itself is how little has been done by government to prevent a similar disaster. The oil and gas industry has stayed active in Washington, and managed to fend off serious efforts to curb drilling: Congress has passed zero new laws—not one—to restrict offshore drilling or force it to be safer. The Obama administration has approved over 1,500 offshore drilling permits since the spill. And back in January the administration announced a plan to open new areas in the Atlantic and Arctic for offshore drilling. As my colleague Tim Murphy noted today, Louisiana’s oversight of the oil industry is rife with ludicrous conflicts of interest that raise serious doubts about the state’s ability to make drilling safer.
In other words, the wounds from BP are scarcely healed, but we’re pushing deeper and deeper into offshore drilling.
In fact, well construction in the Gulf is literally pushing into deeper water, where the risks of a spill are even greater. From an AP investigation pegged to the anniversary: . . .
Capitalism succeeds well in certain areas, and in others it fails, either partially or utterly. A partial failure is for-profit hospitals, which focus on cutting costs and raising prices because capitalism seeks greater profits (to keep investors happy). An utter failure is the refusal to develop and provide medications for diseases that afflict only a few: not enough profit to warrant the research. Government—the shared expenses of the public—generally picks up at the point where capitalism fails: providing roads, airports, air-traffic control, education, and the like.
In MIT’s Technology Review Brian Bergstein looks at one point where capitalism has failed:
One night in 1982, John Mumford was working on an avalanche patrol on an icy Colorado mountain pass when the van carrying him and two other men slid off the road and plunged over a cliff. The other guys were able to walk away, but Mumford had broken his neck. The lower half of his body was paralyzed, and though he could bend his arms at the elbows, he could no longer grasp things in his hands.
Fifteen years later, however, he received a technological wonder that reactivated his left hand. It was known as the Freehand System. A surgeon placed a sensor on Mumford’s right shoulder, implanted a pacemaker-size device known as a stimulator just below the skin on his upper chest, and threaded wires into the muscles of his left arm. On the outside of Mumford’s body, a wire ran from the shoulder sensor to an external control unit; another wire ran from that control unit to a transmitting coil over the stimulator in his chest. Out of this kludge came something incredible: by maneuvering his right shoulder in certain ways, Mumford could send signals through the stimulator and down his left arm into the muscles of his hand. The device fell short of perfection—he wished he could throw darts with his buddies. But he could hold a key or a fork or a spoon or a glass. He could open the refrigerator, take out a sandwich, and eat it on his own. Mumford was so enthusiastic that he went to work for the manufacturer, a Cleveland-area company called NeuroControl, traveling the country to demonstrate the Freehand at assistive-technology trade shows.
Mumford was in Cleveland for a marketing meeting in 2001 when he got news that still baffles him: NeuroControl was getting out of the Freehand business. It would focus instead on a bigger potential market with a device that helped stroke victims. Before long, NeuroControl went out of business entirely, wiping out at least $26 million in investment. At first, Mumford remained an enthusiastic user of the Freehand, though one thing worried him: the wires running outside his body would sometimes fray or break after catching on clothing. Each time, he found someone who could reach into his supply of replacements and reconnect the system. But by 2010, the last wire was gone, and without the prospect of tech support from NeuroControl, the electrical equipment implanted in Mumford’s body went dormant. He lost the independence that had come from having regained extensive use of one hand. “To all of a sudden have that taken away—it’s incredibly frustrating,” he says. “There’s not a day where I don’t miss it.”
Mumford’s voice rises in astonishment as he tells the tale. “I have a device implanted in my body that was considered to be one of the best innovations or inventions of that century,” he says. “The last thing you think is that the company is going to go out of business, and not only is it going to go out of business, but you’re not even going to be able to buy parts for that. That seems insane!”
Around 250 people are believed to have gotten the Freehand from NeuroControl, and Mumford was far from the only one heartbroken by the company’s failure. Their experience is a cautionary tale now for any implantable medical device that might serve “orphan markets”—relatively small groups of people. Although advances in brain-machine interfaces and electrical-stimulation devices are generating marvelous research results in people with paralysis—some are using their thoughts to control robotic arms, and others are taking tentative steps—it’s possible those breakthroughs won’t last long on the market, assuming they can be commercialized at all. Limp limbs can be reanimated by technology, but they can be quieted again by basic market economics.
The initial flourish
The technology in Mumford’s body began to be developed in the 1970s. The lead inventor, P. Hunter Peckham, a biomedical engineer at Case Western Reserve University in Cleveland, wanted to see whether electrical stimulation would reverse atrophy and ultimately restore function to paralyzed muscles. . .
If the US government can waste literally trillions of dollars fighting pointless wars, developing airplanes that do not work (the F-35), and fighting a failed War on Drugs™, then surely it could trim a billion or so per year to spend on helping people.