Archive for the ‘Technology’ Category
Big telecom companies are not investing in improved infrastructure and fiercely fight any effort to force them to improve service. Perhaps breaking them up (as AT&T was broken up) might create more competition (most markets are in effect local monopolies, with only one telecom active).
In the meantime, telecoms will fight. Jason Koebler reports for Motherboard:
Think this whole net neutrality thing is almost over? It’s not. The president of the National Cable & Telecommunications Association said Tuesday that the industry is prepared and ready to sue the hell out of the Federal Communications Commission in a process that could last up to five more years.
Public sentiment, federal and local governments, and small internet service providers have already begun responding to market forces pushing them to provide faster, indiscriminate service. But the giant telecoms, unsurprisingly, are vowing to litigate the FCC’s new rules, which will take a very long time.
“There’ll likely be an appeal, and litigation with FCC appeals is a pretty long, drawn-out process,” Michael Powell, head of the NCTA, told CNBC. “I would predict it’s at least two and up to five years before the rules are fully and finally settled.”
The NCTA represents Comcast, Verizon, AT&T, Time Warner, and other major telecom companies. Powell pointed out that we have been talking about net neutrality for quite some time now—it hasn’t just been the last year or so since Verizon won a landmark court case that put the FCC’s existing rules against “fast lanes” and other paid prioritization for certain types of traffic at risk. Before that, there was another debate.
“The current set of net neutrality rules is from 2010. It’s 2015, and we still don’t have a new set, and this debate has gone on for a decade,” he said. . .
Kari Paul writes at Motherboard:
Autonomous weapons could be hacked and turned against us, said Peter W. Singer, a strategist and senior fellow at the think tank New America Foundation, at the first annual Future of War Conference on Tuesday. We are entering “a whole new realm” when it comes to autonomous weapons, he argued as part of a panel discussing the future of smart weapons.
“Cyberwarfare offers you the ability to persuade the target to do something it wouldn’t do otherwise,” he said. “I gain access to its software, if I [hack] into it, I can then make it do things other than the operator wants it to do.”
He said this includes hacks as simple as changing the GPS location of a drone, which he said has been done with relative ease by the Department of Homeland Security and by college students, or what he called “ultimate co-option,” which would be, for example, “recoding all American systems as Chinese systems.”
“A human would think, ‘That makes no sense, I’m questioning that order,’ but a computer, if you have that access, will follow that instruction,” he said. “It’s a whole new realm where you’ve never been able to convince a bullet or an arrow to change direction in mid-flight, you can with this kind of system. That points to both possibilities and perils.”
However, it’s unclear how long it will be until we see technology like this come to fruition. Missy Cummings, director of the Humans and Automation Laboratory at MIT, said “we’re not even close” to developing autonomous killing machines in the sense that many people think of them. . .
Continue reading. Of course, automobile systems can also be hacked, and if the hacker locks the doors and the steering, disengages the brakes, and pumps up the accelerator, bad things will certainly happen.
Corporations are not going to fix their weak security until they are financially penalized, at very hefty amounts, when customer files are hacked. Given their complete focus on profits, it must become more expensive to lack good security than to implement and maintain it, because corporations will always choose the least costly course (cf. Ford Motor Company’s calculation that it cost less to let some 180 customers burn to death than to spend $10 per car to prevent those deaths).
I can already predict that corporations will, as they always do, push for “voluntary guidelines” rather than laws, since the former can simply be ignored.
Vindu Goel has the report in the NY Times:
LinkedIn, the largest business-oriented social network, has agreed to compensate up to 800,000 people who paid for its premium services to settle a class-action lawsuit alleging that it falsely assured them it was using strong security measures to protect their personal information.
In June 2012, a file containing 6.5 million encoded LinkedIn user passwords was posted on a Russian hacker site. The passwords were protected with a weak form of security, allowing hackers to easily unravel the code and find the actual passwords. The company, which had about 160 million users at that time, quickly put in place improved security measures, but everyone was advised to change their passwords immediately.
While there was no indication that the breach had any disproportionate effect on the small number of LinkedIn users who were paying the company a subscription fee for extra services, those customers argued in court that the company had deceived them about the level of its Internet security when they had signed up. . .
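The “weak form of security” at issue was widely reported to be unsalted SHA-1 password hashes. A minimal sketch of why that matters (the password list and values here are made up for illustration): an attacker can precompute hashes of common passwords once and then crack every unsalted hash by simple table lookup, while a random per-user salt makes that precomputed table useless.

```python
import hashlib
import os

def unsalted_sha1(pw: str) -> str:
    """Hash a password with no salt -- what a weak database stores."""
    return hashlib.sha1(pw.encode()).hexdigest()

def salted_sha1(pw: str, salt: bytes) -> str:
    """Hash a password with a random per-user salt prepended."""
    return hashlib.sha1(salt + pw.encode()).hexdigest()

# The attacker precomputes hashes of common passwords once...
common_passwords = ["123456", "password", "linkedin"]
lookup = {unsalted_sha1(p): p for p in common_passwords}

# ...and cracks any unsalted leak instantly by lookup.
leaked = unsalted_sha1("linkedin")
print(lookup.get(leaked))            # recovered: "linkedin"

# With a per-user salt, the same table finds nothing.
salt = os.urandom(16)
leaked_salted = salted_sha1("linkedin", salt)
print(lookup.get(leaked_salted))     # None
```

(Modern practice goes further still, using deliberately slow functions such as bcrypt or scrypt rather than salted SHA-1.)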
An interesting note: LinkedIn itself thinks the profiles on its site are essentially worthless. They figure a profile is worth about $1.
Andrea Peterson reports on remarks from Admiral Mike Rogers, head of the NSA, who doesn’t want to be bothered with technicalities; he just wants to be able to read all communications in the world: everything open to the NSA, because … you can imagine.
In an unusual public exchange, the director of the National Security Agency and a senior Yahoo executive clashed over cyber-spying Monday, illustrating the growing chasm between Washington and Silicon Valley over whether intelligence officials should have broad access to the products being developed by the nation’s top technology firms.
For a normally staid Washington cyber-security summit, the tense back-and-forth had the packed audience of executives, senior policy makers, bureaucrats and journalists buzzing.
Speaking at the signature event of the conference, NSA Director Adm. Mike Rogers called for a “legal framework” that would enable law enforcement and anti-terrorism officials to tap into encrypted data flowing between ordinary consumers — echoing a stance laid out by other administration officials, including FBI Director James Comey and Attorney General Eric J. Holder. But technology executives as well as many cybersecurity experts argue there is no way to build in such “backdoors” without fundamentally undermining the security that protects online communications around the world. In response to recent revelations about government snooping, firms such as Apple and Google have designed their latest mobile software to make it impossible for the companies to turn over data from smartphones and tablet computers to police — even when authorities have a search warrant.
Rogers’s remarks were later challenged by Alex Stamos, Yahoo’s chief information security officer, during a question-and-answer session.
“So it sounds like you agree with Director Comey that we should be building defects into the encryption in our products so that the US government can decrypt…” Stamos began. (These remarks were verified by a transcript provided by the Web site Just Security.)
“That would be your characterization,” Rogers said, interrupting him.
“No, I think… all of the best public cryptographers in the world would agree that you can’t really build backdoors in crypto,” Stamos replied. “That it’s like drilling a hole in the windshield.”
“I’ve got a lot of world-class cryptographers at the National Security Agency,” Rogers said.
“I’ve talked to some of those folks and some of them agree too, but…” Stamos said.
“Oh, we agree that we don’t accept each others’ premise,” Rogers replied, interrupting again, as laughter erupted across the audience.
A little bit later in the exchange, Stamos tried to bring up a different point.
“If we’re going to build defects/backdoors or golden master keys for the U.S. government, do you believe we should do so — we have about 1.3 billion users around the world — should we do for the Chinese government, the Russian government, the Saudi Arabian government, the Israeli government, the French government?” Stamos asked.
“So, I’m not gonna… I mean, the way you framed the question isn’t designed to elicit a response,” Rogers replied.
“Well, do you believe we should build backdoors for other countries?” Stamos asked again.
“My position is — hey look, I think that we’re lying that this isn’t technically feasible. Now, it needs to be done within a framework. I’m the first to acknowledge that. You don’t want the FBI and you don’t want the NSA unilaterally deciding, so, what are we going to access and what are we not going to access? That shouldn’t be for us. I just believe that this is achievable. We’ll have to work our way through it. And I’m the first to acknowledge there are international implications. I think we can work our way through this,” Rogers answered.
“So you do believe then, that we should build those for other countries if they pass laws?” Stamos asked a third time.
“I think we can work our way through this,” Rogers replied.
“I’m sure the Chinese and Russians are going to have the same opinion,” Stamos said.
“I said I think we can work through this,” Rogers said. . .
I do not believe that Rogers is an idiot, though he comes across that way. He simply wants what he wants and waves aside any questions or issues that stand in the way, neither answering nor considering them. In his position, that’s very dangerous.
Coming soon to your own locale. Orwell’s Big Brother was a piker compared to the surveillance we’re subjected to these days. Jordan Pearson writes at Motherboard:
An ad company’s drones have been quietly collecting location information from Los Angeles residents’ cell phones for nearly a month, and there’s likely not much anybody can do about it without regulations in place that cover what kinds of data drones can hoover up.
Adnear, a global marketing company that specializes in collecting location data from people for companies looking to create targeted ad campaigns, has been flying a modified version of the DJI Phantom II drone over the San Fernando Valley in LA since February 4th, according to a company blog post. A sensor on the drone tracks devices by collecting data from WiFi connections and cell tower signals and uses that information to obtain their unique device IDs.
“The usage of drones for location data collection would tremendously reduce human intervention and ease the process of collating data in inaccessible regions,” the company wrote in a blog post. “Drones will also enable quick assimilation of a large-scale location data, which would mean faster new market entry for us, since it does take much higher effort at present. We are talking a new level of scale all together.”
Adnear wants to use the data to serve you hyperlocal ads based on what you’re near at the moment. The company did not immediately respond to Motherboard’s request for comment, and we will update this post if we hear from them.
If this all still sounds just a little ominous, that’s because it is. The sensors on Adnear’s drone are likely the same kind they’ve used on “bikes, cars, trains, and even walking up the stairs,” except now they’re flying over you. Using cell tower signals to uncover a device’s ID sounds pretty close to how a StingRay, which police use to track suspects by mimicking cell towers, works. That technology scoops up location information from every cell phone in the area, including those of innocent people.
While Adnear’s approach may not be exactly the same as a StingRay, it’s close enough to cause discomfort—although, of course, Adnear is using their technology for commercial gain, not to catch criminals. The company claims that it doesn’t collect any personally identifiable information, but it’s hard to see how your phone’s ID is anything but.
Right now, Adnear is operating quasi-legally. But any illegality would be because it ran afoul of Federal Aviation Administration commercial drone rules, not privacy ones. The FAA’s proposed regulations for domestic drone use—arrived at after five years of legal limbo for drone owners—haven’t gone into effect yet, and the FAA says it can still limit commercial drone operations. But even the new regulations do not cover privacy at all, instead focusing on safety measures. While regulations for airborne data collection may one day coalesce, they’re not on the immediate horizon.
So, what the company is doing is creepy, but not strictly illegal.
Until then, possible solutions to avoid drone detection include leaving your phone at home and not much else. . .
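The core of the tracking described above is simple: each sighting of a device identifier at a known sensor location, grouped by identifier, yields a movement trail per phone. A minimal sketch (the log entries and identifiers are invented; real sensors harvest WiFi MAC addresses or cell-signal device IDs):

```python
from collections import defaultdict

# Hypothetical probe log: (timestamp, device identifier, sensor location).
probes = [
    (1, "aa:bb:cc:dd:ee:01", "Ventura Blvd"),
    (2, "aa:bb:cc:dd:ee:02", "Ventura Blvd"),
    (3, "aa:bb:cc:dd:ee:01", "Sherman Oaks"),
]

# Group sightings by device ID: each ID yields a movement trail,
# which is why a "non-personal" identifier still tracks a person.
trails = defaultdict(list)
for ts, device_id, place in probes:
    trails[device_id].append((ts, place))

print(trails["aa:bb:cc:dd:ee:01"])   # [(1, 'Ventura Blvd'), (3, 'Sherman Oaks')]
```

This is why Adnear’s “no personally identifiable information” claim rings hollow: a stable device ID plus a location history is, for practical purposes, a person.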
Brian Merchant writes in Motherboard:
It’s 2015—when we feel sick, fear disease, or have questions about our health, we turn first to the internet. According to the Pew Internet Project, 72 percent of US internet users look up health-related information online. But an astonishing number of the pages we visit to learn about private health concerns—confidentially, we assume—are tracking our queries, sending the sensitive data to third party corporations, even shipping the information directly to the same brokers who monitor our credit scores. It’s happening for profit, for an “improved user experience,” and because developers have flocked to “free” plugins and tools provided by data-vacuuming companies.
In April 2014, Tim Libert, a researcher at the University of Pennsylvania, custom-built software called webXray to analyze the top 50 search results for nearly 2,000 common diseases (over 80,000 pages total). He found the results startling: a full 91 percent of the pages made what are known as third-party requests to outside companies. That means when you search for “cold sores,” for instance, and click the highly ranked “Cold Sores Topic Overview WebMD” link, the website is passing your request for information about the disease along to one or more (and often many, many more) other corporations.
According to Libert’s research, which is published in the Communications of the ACM, about 70 percent of the time, the data transmitted “contained information exposing specific conditions, treatments, and diseases.” That, he says, is “potentially putting user privacy at risk.” And it means you’ll probably want to think twice before looking up medical information on the internet.
Here’s what’s happening in a bit greater detail: Let’s say you make a search for “herpes.” Plugging that query into a search engine will return a list of results. Chances are, whatever site you choose to click on next will send information not just to the server of the intended site—say, the Centers for Disease Control, which maintains the top search result from Google—but to companies that own the elements installed on the page. Here’s why.
When you click that CDC link, you’re making a so-called “first party request.” That request goes to the CDC’s servers, and it returns the HTML file with the page you’re looking for. In this case, it’s “Genital Herpes – CDC Factsheet,” which is perhaps the page on the internet you’d least want anyone to know you’re looking at. But because the CDC has installed Google Analytics to measure its traffic stats, and has, for some reason, included AddThis code which allows Facebook and Twitter sharing (begging the question of who socializes disease pages), the CDC also sends a third party request to each of those companies. That request looks something like this—http://www.cdc.gov/std/herpes/STDFact-Herpes.htm—and makes explicit to those third party corporations in its HTTP referrer string that your search was about herpes.
Thus, Libert has discovered that the vast majority of health sites, from the for-profit WebMD.com to the government-run CDC.gov, are loaded with tracking elements that are sending records of your health inquiries to the likes of web giants like Google, Facebook, and Pinterest, and data brokers like Experian and Acxiom.
From there, it becomes relatively easy for the companies receiving the requests, many of which are collecting other kinds of data (in cookies, say) about your browsing as well, to identify you and your illness. That URL, or URI, which very clearly contains the disease being searched for, is broadcast to Google, Twitter, and Facebook, along with your computer’s IP address and other identifying information. . .
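The leak mechanism is easy to simulate: when a browser loads a resource embedded in a page, it sends the full page URL to that resource’s host in the `Referer` header. A minimal sketch (the embedded-resource URLs are illustrative, though the CDC page URL is the one quoted above):

```python
from urllib.parse import urlparse

def third_party_leaks(page_url, embedded_resources):
    """Simulate what a browser sends while loading a page: every
    embedded element hosted off-site receives the page URL as Referer."""
    first_party = urlparse(page_url).netloc
    leaks = []
    for resource in embedded_resources:
        host = urlparse(resource).netloc
        if host != first_party:
            leaks.append({"to": host, "referer": page_url})
    return leaks

page = "http://www.cdc.gov/std/herpes/STDFact-Herpes.htm"
resources = [
    "http://www.cdc.gov/style.css",                   # first-party, no leak
    "https://www.google-analytics.com/analytics.js",  # third-party
    "https://s7.addthis.com/js/addthis_widget.js",    # third-party
]
leaks = third_party_leaks(page, resources)
for leak in leaks:
    print(leak["to"], "learns you visited", leak["referer"])
```

Note that the page URL itself carries the sensitive term (“herpes”), so every third-party host on the page learns it without any extra effort.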
Remember privacy? I do, and I miss it.
Jason Koebler reports at Motherboard:
News broke earlier this week about the NSA’s “most sophisticated” malware yet: An undetectable backdoor that can filter information to and from a hard drive, using the underlying framework of the drive itself. It surprised a lot of people, sure, but maybe it shouldn’t have. A group of ordinary security researchers warned this was possible, and in fact installed hard drive backdoors themselves, nearly a year ago.
The paper “Implementation and Implications of a Stealth Hard-Drive Backdoor,” published in March 2014 by a team of eight researchers from Eurecom in France, IBM Research in Zurich, and UCSD and Northeastern University in the US, reads almost exactly like security firm Kaspersky’s expose on the NSA malware. The full paper is absolutely worth your read if you’ve been fascinated by Kaspersky’s revelations.
The malware, developed by Travis Goodspeed and his colleagues (Goodspeed has spoken the most publicly about the exploit), can be installed remotely by people who have no physical access to the drive. In fact, the paper asserts that such an attack “is not limited to the area of government cyber warfare; rather, it is well within the reach of moderately funded criminals, botnet herders, and academic researchers.”
To install it remotely, a hacker would need to infect the operating system of the user’s computer with run-of-the-mill malware, alter the hard drive’s firmware, and then delete the original, operating system-side virus. From then on, the hacker would have complete access to everything on the person’s hard disk, the exploit would be almost completely undetectable, and it would persist until the hard drive was physically destroyed.
The exploit could also be installed by someone who had physical access to the drive.
“Once you have firmware control of a disk, you can also have it commit suicide or overwrite itself,” he explained at the 0x07 Sec-T Conference last year. “You can also have it act as a backdoor.”
That, apparently, is what the NSA was doing with its exploit. Though we only just discovered that the NSA was doing this, it seems likely that the program had been going on for a while, perhaps a decade or more.
The team explains in its paper that a “catastrophic loss of security occurs when hard disks are not trustworthy.” Information can be funneled remotely from the disk and new information can be written to the disk, using remote commands sent to the exploit. An infected hard drive loses less than 1 percent of its read and write speed, so it’s essentially undetectable from a performance perspective. . .