Archive for the ‘Software’ Category
The video is via this Motherboard article by Lorenzo Franceschi-Bicchierai:
For almost two years, the FBI has been trying to tell us that phones and computers that use hard-to-break encryption would doom us all to a future of unsolvable crimes and uncatchable criminals.
Earlier this year, when the FBI couldn’t get into the phone of a dead terrorist, the encryption debate, which has actually been going on and off for more than 20 years, finally had its moment in the sun. The debate is extremely complex, but the position of the FBI can be boiled down to a simple concept: There shouldn’t be unbreakable locks, because nobody is above the law. (By the way, that’s actually pretty much what a much-anticipated—and then much-ridiculed—Senate bill says.)
But is that really true? And why should you, a common law-abiding citizen, care? John Oliver already did a pretty good job at answering that question, but a new animated video from CGP Grey might be the best simple explainer we’ve seen to date. . .
Though so far the only input they seem interested in hearing comes from people who agree with them. Joshua Kopstein reports at Motherboard:
Senators Dianne Feinstein and Richard Burr have officially released a draft of their long-awaited encryption bill, which was met with widespread mockery and horror from security experts when it leaked last week.
The bill’s official discussion draft is substantively identical to the leaked version, and would force companies to provide law enforcement with access to encrypted communications when they are compelled by a court—effectively mandating the creation of backdoors in their products and imposing a ban on end-to-end encrypted communications apps.
“Today, terrorists and criminals are increasingly using encryption to foil law enforcement efforts, even in the face of a court order,” wrote Feinstein in a press release Wednesday. “We need strong encryption to protect personal data, but we also need to know when terrorists are plotting to kill Americans.”
The senators also state they will “solicit input from the public and key stakeholders” before the bill is formally introduced in Congress.
But so far, the only “stakeholders” Feinstein and Burr seem interested in discussing the bill with are non-technical people who are already on their side. On Monday, the senators announced a briefing on the bill with a panel composed entirely of cops and prosecutors.
“You would think they would be interested in feedback from a more diverse set of stakeholders even before releasing the draft text, but here we are,” said Joseph Lorenzo Hall, Chief Technologist at the Center for Democracy and Technology. “We will certainly give our feedback, and we always aim to be constructive, although it won’t be pretty and they won’t like it.”
Meanwhile, many tech experts and privacy advocates have already given their unsolicited input, calling the proposal “ludicrous, dangerous” and “technically illiterate.”
“The essential contradiction here with Burr-Feinstein is that strong encryption is a creature of math, and relatively simple math at that, whereas court orders are legal instruments,” Hall told Motherboard in an email. “To say math can’t trump court orders doesn’t make sense. To say a business can’t offer a product that does complicated math is even less sensical.” . . .
A very interesting article in Motherboard by Lorenzo Franceschi-Bicchierai for anyone following the government’s strenuous efforts to ensure that no citizen is immune from government spying, and its equally strenuous efforts to keep secret what it is doing.
Sean Vitka reports in Motherboard:
On Thursday, what appears to be a draft bill from Senators Richard Burr (R-North Carolina) and Dianne Feinstein (D-California) was uploaded by The Hill reporter Cory Bennett. The bill has not been confirmed as authentic, and even if it is authentic, may have changed since the version that was posted online. Regardless, it’s worth critiquing the draft that was published, which aspires to kill end-to-end encryption in America—a move that, to lift a phrase from former NSA director Michael Hayden, only North Korean hackers could love.
Allow me to explain.
The bill, the “Compliance with Court Orders Act of 2016,” requires that all companies providing any kind of communications or data service be able to give information to the government in an “intelligible format.” If the company made the data unintelligible, it must provide “technical assistance” to undo it. In case there is any question about the aim, the bill defines intelligible as “decrypted, deciphered, decoded, demodulated, or deobfuscated to its original form.”
Instead of learning from the Department of Justice’s ill-fated attempt to demonize services that rely on encryption to protect their customers and maintain user trust, these two Senators are doubling down. To make matters worse, Senators Burr and Feinstein chair the Senate Select Committee on Intelligence, which means they’re the very people tasked with overseeing overreach by intelligence agencies. The White House, increasingly anti-encryption since the Apple-FBI flop, is reportedly deeply split on the proposal.
If this bill were to pass, it would outlaw secure communications, which are heavily—and increasingly—dependent on end-to-end encryption. By definition, end-to-end encryption cannot be decrypted except by the credentials of the senders and receivers. This is how information that truly needs to be secure is protected, because it minimizes the ways highly sensitive information can be decrypted.
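The “only the endpoints” property is not a policy choice but a consequence of key agreement: each side derives the shared secret locally, and the secret itself never crosses the wire. A toy Diffie-Hellman sketch in Python illustrates the idea (the parameters are deliberately tiny and insecure; real systems use 2048-bit-plus groups or elliptic curves):

```python
import random

# Toy Diffie-Hellman key agreement: the shared secret is computed at each
# endpoint and is never transmitted. Parameters are tiny and INSECURE --
# chosen purely so the arithmetic is easy to follow.
P = 0xFFFFFFFB  # a small prime (illustrative only)
G = 5           # generator

def keypair():
    private = random.randrange(2, P - 1)   # never leaves the device
    public = pow(G, private, P)            # safe to send in the clear
    return private, public

alice_priv, alice_pub = keypair()
bob_priv, bob_pub = keypair()

# Each side combines its own private key with the other's public key.
alice_secret = pow(bob_pub, alice_priv, P)
bob_secret = pow(alice_pub, bob_priv, P)

assert alice_secret == bob_secret  # identical secret, derived independently
```

An eavesdropper sees only the public values; recovering the secret from them is the discrete-logarithm problem, which is exactly the “creature of math” that a court order cannot reach.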
Simply put, this bill would flat-tire end-to-end encryption within America. Every service, person, human rights worker, protester, reporter, company—the list goes on—will be easier to spy on. It jams a crowbar into the gut of Americans’ privacy and security. It sets the precedent that the Department of Justice sought in the Apple-FBI case. And by crippling encryption, it risks turning those compromised products into new funnels of information for the never-ending haystack of information. After all, vulnerabilities like these are gold mines for hackers, and many of the world’s best work for American intelligence agencies. But, we’re told, it will make us more secure overall.
But, in fact, the impact on American security is one of the biggest threats of this bill. The notion of a backdoor, or what Senators Burr and Feinstein euphemistically call “technical assistance,” that can only be used by the government—setting aside whether law enforcement needs a warrant to use it—has been unanimously rejected by every mathematician and cryptologist who studies it. That isn’t an exaggeration. You can’t have a backdoor that isn’t a security vulnerability. And Congress knows that. This same fight happened in the 90s, during the Crypto Wars. It was literally the exact. Same. Argument. Loath as I am to say it, even Michael Hayden, who oversaw the agency’s rise to power and many, many disastrous decisions, agrees.
As far reaching as the effects of this bill would be on Americans’ privacy and safety, its jurisdictional narrowness is yet another catastrophic flaw. At the risk of stating the obvious, this is a proposed American law. It does not control Russian companies, or the North Korean government. It is the modern equivalent of Congress passing a law that bans the development of intercontinental ballistic missiles. “Have fun with that,” the rest of the world seems to say, while Senators Burr and Feinstein proclaim how much safer we are.
Even in draft form, this legislation is so short-sighted it calls into question the authors’ ability to lead the Senate Select Committee on Intelligence, which, again, Senators Burr and Feinstein chair. Their positions are singularly powerful in their ability to ensure that intelligence collection is done effectively and legally. This bill is powerful evidence that they are not up for the job.
As egregious as the Compliance with Court Orders Act of 2016 is, it highlights . . .
Dianne Feinstein is not running for re-election, thank God. She’s done enough damage. But this draft bill is highly disturbing in how it displays a complete and comprehensive ignorance of the field.
UPDATE: See also Joshua Kopstein’s article, “Congress’s New Encryption Bill Just Leaked, And It’s As Bad As Experts Imagined”.
See also Jenna McLaughlin’s article in The Intercept: “Bill That Would Ban End-to-End Encryption Savaged by Critics”
Keith Collins in Quartz has a very interesting story on the interdependence of modern technology:
A man in Oakland, California, disrupted web development around the world last week by deleting 11 lines of code.
The story of how 28-year-old Azer Koçulu briefly broke the internet shows how writing software for the web has become dependent on a patchwork of code that itself relies on the benevolence of fellow programmers. When that system breaks down, as it did last week, the consequences can be vast and unpredictable.
“I think I have the right of deleting all my stuff,” Koçulu wrote on March 20 in an email that was later made public.
And then he did it.
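For context, the deleted module was reportedly a tiny string-padding utility, eleven lines of JavaScript in total. A Python paraphrase of the idea (not the original code):

```python
def left_pad(value, length, ch=" "):
    """Pad `value` on the left with `ch` until it is `length` characters long.

    A Python paraphrase of the kind of tiny utility at issue; the original
    package was eleven lines of JavaScript.
    """
    s = str(value)
    while len(s) < length:
        s = ch + s
    return s
```

For example, `left_pad(7, 3, "0")` returns `"007"`. Thousands of projects pulled in this one-liner-sized dependency rather than writing it themselves, which is why its removal broke builds worldwide.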
The open-source creed
The open-source philosophy is what drove Koçulu to contribute to npm in the first place, and why he ended up abandoning the service. Like many in the broad community of people who write code that anyone can use, he is influenced by the “hacker ethic” of early programmers at the Massachusetts Institute of Technology and a set of more concrete values that were later put forth by the programmer Richard Stallman.
“The fundamental act of friendship among programmers is the sharing of programs,” Stallman wrote in his 1985 manifesto. He railed against “the commercialization of system software,” and laid out ways to make code more communal and widely useful. Many of Stallman’s ideas, as well as the legacy of the hacker ethic, continue to influence programmers like Koçulu.
“I’m a self-taught high school graduate who learn everything thanks to open source community,” Koçulu, who was born in Turkey, wrote in an email to Quartz. “I owe everything I have to the people who never gave up with open source philosophy.”
It began with an email
One of Koçulu’s open-source packages was kik, which helped programmers set up templates for their projects. It wasn’t widely known, but it shared a name with Kik, the messaging app based in Ontario, Canada. On March 11, Koçulu received an email from Bob Stratton, a patent and trademark agent who does contract work for Kik.
Stratton said Kik was preparing to release its own package and asked Koçulu if he could rename his. “Can we get you to rename your kik package?” Stratton wrote.
Michael Nielsen writes in Quanta:
In 1997, IBM’s Deep Blue system defeated the world chess champion, Garry Kasparov. At the time, the victory was widely described as a milestone in artificial intelligence. But Deep Blue’s technology turned out to be useful for chess and not much else. Computer science did not undergo a revolution.
Will AlphaGo, the Go-playing system that recently defeated one of the strongest Go players in history, be any different?
I believe the answer is yes, but not for the reasons you may have heard. Many articles proffer expert testimony that Go is harder than chess, making this victory more impressive. Or they say that we didn’t expect computers to win at Go for another 10 years, so this is a bigger breakthrough. Some articles offer the (correct!) observation that there are more potential positions in Go than in chess, but they don’t explain why this should cause more difficulty for computers than for humans.
In other words, these arguments don’t address the core question: Will the technical advances that led to AlphaGo’s success have broader implications? To answer this question, we must first understand the ways in which the advances that led to AlphaGo are qualitatively different and more important than those that led to Deep Blue.
In chess, beginning players are taught a notion of a chess piece’s value. In one system, a knight or bishop is worth three pawns. A rook, which has greater range of movement, is worth five pawns. And the queen, which has the greatest range of all, is worth nine pawns. A king has infinite value, since losing it means losing the game.
You can use these values to assess potential moves. Give up a bishop to take your opponent’s rook? That’s usually a good idea. Give up a knight and a bishop in exchange for a rook? Not such a good idea.
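The trade arithmetic above is simple enough to state as code. A minimal sketch, using the piece values from the text (the helper and its interface are inventions of this sketch, not taken from any engine):

```python
# Beginner piece values from the text: pawn 1, knight/bishop 3, rook 5, queen 9.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def material_swing(pieces_given, pieces_taken):
    """Net change in material from a trade, measured in pawns.

    Positive means the trade wins material. Pieces are single-letter codes
    keyed into PIECE_VALUES above.
    """
    cost = sum(PIECE_VALUES[p] for p in pieces_given)
    gain = sum(PIECE_VALUES[p] for p in pieces_taken)
    return gain - cost

# Give up a bishop to take a rook: +2 pawns, usually a good idea.
# Give up a knight and a bishop for a rook: -1 pawn, not such a good idea.
```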
The notion of value is crucial in computer chess. Most computer chess programs search through millions or billions of combinations of moves and countermoves. The goal is for the program to find a sequence of moves that maximizes the final value of the program’s board position, no matter what sequence of moves is played by the opponent.
Early chess programs evaluated board positions using simple notions like “one bishop equals three pawns.” But later programs used more detailed chess knowledge. Deep Blue, for example, combined more than 8,000 different factors in the function it used to evaluate board positions. Deep Blue didn’t just say that one rook equals five pawns. If a pawn of the same color is ahead of the rook, the pawn will restrict the rook’s range of movement, thus making the rook a little less valuable. If, however, the pawn is “levered,” meaning that it can move out of the rook’s way by capturing an enemy pawn, Deep Blue considers the pawn semitransparent and doesn’t reduce the rook’s value as much.
Ideas like this depend on detailed knowledge of chess and were crucial to Deep Blue’s success. According to the technical paper written by the Deep Blue team, this notion of a semitransparent levered pawn was crucial to Deep Blue’s play in the second game against Kasparov.
Ultimately, the Deep Blue developers used two main ideas. The first was to build a function that incorporated lots of detailed chess knowledge to evaluate any given board position. The second was to use immense computing power to evaluate lots of possible positions, picking out the move that would force the best possible final board position.
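The two ideas combine naturally in minimax search: recurse over move sequences, score the leaves with the evaluation function, and assume the opponent always answers with the move worst for you. A minimal sketch over an abstract game interface (all the callback names here are assumptions of the sketch, not any engine’s API):

```python
def minimax(position, depth, maximizing, moves, apply_move, evaluate):
    """Return the best evaluation achievable within `depth` plies.

    `moves(position)` lists legal moves, `apply_move(position, move)` returns
    the resulting position, and `evaluate(position)` scores a position for
    the maximizing side. All three are supplied by the caller.
    """
    legal = moves(position)
    if depth == 0 or not legal:
        return evaluate(position)
    results = (
        minimax(apply_move(position, m), depth - 1, not maximizing,
                moves, apply_move, evaluate)
        for m in legal
    )
    return max(results) if maximizing else min(results)
```

Deep Blue’s “immense computing power” amounts to running this kind of recursion, heavily optimized, billions of times per move; the quality of the result still hinges entirely on the evaluation function at the leaves.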
What happens if you apply this strategy to Go?
It turns out that you will run into a difficult problem when you try. The problem lies in figuring out how to evaluate board positions. Top Go players use a lot of intuition in judging how good a particular board position is. They will, for instance, make vague-sounding statements about a board position having “good shape.” And it’s not immediately clear how to express this intuition in simple, well-defined systems like the valuation of chess pieces.
Now you might think it’s just a question of working hard and coming up with a good way of evaluating board positions. Unfortunately, even after decades of attempts to do this using conventional approaches, there was still no obvious way to apply the search strategy that was so successful for chess, and Go programs remained disappointing. This began to change in 2006, with the introduction of so-called Monte Carlo tree search algorithms, which tried a new approach to evaluation based on a clever way of randomly simulating games. But Go programs still fell far short of human players in ability. It seemed as though a strong intuitive sense of board position was essential to success.
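The Monte Carlo idea can be shown on a trivially small game: estimate a position’s value as the win rate over many random playouts, sidestepping the need for a hand-built evaluation function. The toy game below (remove one or two stones per turn; whoever takes the last stone wins) is an assumption of this sketch and has nothing to do with Go’s rules:

```python
import random

def random_playout(stones, to_move):
    """Play random legal moves to the end; return the winner (0 or 1).

    Toy rules: remove 1 or 2 stones per turn; taking the last stone wins.
    """
    player = to_move
    while stones > 0:
        take = random.randint(1, min(2, stones))
        stones -= take
        if stones == 0:
            return player
        player = 1 - player

def monte_carlo_value(stones, to_move, playouts=2000):
    """Estimate how favorable the position is for `to_move` as a win rate."""
    wins = sum(random_playout(stones, to_move) == to_move
               for _ in range(playouts))
    return wins / playouts
```

With a single stone left, the side to move always wins, so `monte_carlo_value(1, 0)` comes out 1.0; for larger positions the win rate is only an estimate under random play. Full Monte Carlo tree search adds a tree that steers playouts toward the most promising moves, but this averaging of random games is the core of the 2006 breakthrough.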
What’s new and important about AlphaGo is that its developers have figured out a way of bottling something very like that intuitive sense. . .
Derek Mead reports in Motherboard:
Heretofore, we humans have always been inherently narcissistic in our embrace of progress. From our nervous fascination with sexbots to the anthropocentric blinders that guide our search for aliens, our innovations are rarely more than a mirror shining back on our own needs. And why wouldn’t it be selfish? The goal is to make the world a better, more interesting place, but those are naturally subjective targets.
The big question, then, is if we keep pushing technology and discovery further and further, who is it for? This is what Rose Eveleth asks in her new column, Design Bias: tech is reflective of the people who build and design it, which means that if the builders lack a diverse perspective, so will the end product. Using the example of facial recognition algorithms, which have struggled with already-marginalized groups, she writes, “facial recognition is a simple reminder: once again, this tech is not made for you.”
These types of biases are not necessarily made with malice, but it’s no more excusable for tech to end up being exclusionary simply because its developers weren’t aware that they might have blind spots in the first place. Our future is not distributed evenly, and access and inclusion are the most important issues facing the most important industries and areas of development today.
But this week we’ve seen insight into an entirely different aspect of this unevenly distributed future: What happens when, instead of tech being designed in the image of its designers, it ends up being reflective of each individual user? A truly distributed technology would learn its mores from all of us. Will we like what we see?
A whole lot of blogs have been written about Tay, the Microsoft chatbot meant to mimic teens in order to “experiment with and conduct research on conversational understanding,” per Tay’s official site. Through “casual and playful conversation” on Twitter, Tay was designed with an AI-like ability to learn how humans speak so it could better connect with us—so that the “experience can be more personalized for you.”
Who is “you,” here? At first, the “you” of Tay was a corporate dad stuck in the uncanny valley of teen talk—I’ll never stop laughing at Tay’s nonsensical original bio, “Microsoft’s A.I. fam from the internet that’s got zero chill!”, which read like a 90s commercial for scrunchies—which is about what you’d expect from a Microsoft research project aimed at teens. No harm, no foul, aside from some awkwardness. But then, thousands and thousands of tweets later, where did Tay end up? As a virulent racist, of course.
Tay is designed to specifically not be reflective of its own designers, but reflective of whoever interacts with it. In that, it was certainly a success! It’s a notable shift in design considerations: the “you” in this case are the people who actually use Tay, which is theoretically not exclusionary at all. Tay has no blind spots, because Tay is constantly learning from its users. Contrast this with Siri, which didn’t know how to react to questions about rape or domestic abuse, presumably because it never occurred to the programmers that someone might want to make that search.
If tech is designed purely to learn and evolve in response to its users, not its designers, then design bias would seemingly be a moot point. . .