Later On

A blog written for those whose interests more or less match mine.

Archive for the ‘Technology’ Category

The product of the government’s knowledge of your activities and your knowledge of the government’s activities is constant

with 2 comments

As governments become more invasive, subjecting their citizens to universal surveillance (reading email, tapping phone calls, monitoring CCTV, breaking encryption, hacking into computers, and otherwise scooping up essentially all information), they also become more secretive, classifying everything and hiding what they are doing so that citizens do not know. The result is a great inequality of information: the government comes to know everything about you, while you are able to know less and less about the government. For example, consider two recent reports in McClatchy:

US sets new record for denying, censoring government files

White House is again refusing transparency, groups charge

And this, in the National Journal: FBI’s Plan to Expand Hacking Power Advances Despite Privacy Fears

Add to that the many ways the NSA has been secretly spying on us, as well as on our allies and our enemies—as revealed by Edward Snowden—and the way the Obama Administration viciously persecutes any whistleblower who informs the public of what the government is doing (well, except for David Petraeus).

It’s obvious that this process is well underway in the US. And the secrecy is not just about what the government learns about you, but also about agreements it is making that will greatly affect your life and future. For example, take Obama’s demand that we buy a pig in a poke with the TPP trade agreement. Democracy Now! has a good video report, with transcript. Their blurb:

Congressional Democrats are openly criticizing the secrecy surrounding the negotiations over the Trans-Pacific Partnership (TPP), just as President Obama begins a major push to pass the controversial deal. The United States is engaged in talks with 11 Latin American and Asian countries for the sweeping trade pact that would cover 40 percent of the global economy. But its provisions have mostly been kept secret. After the White House deemed a briefing on the trade pact “classified,” Rep. Rosa DeLauro of Connecticut called the measures “needlessly secretive,” saying: “If the TPP would be as good for American jobs as they claim, there should be nothing to hide.” This comes as Obama recently called on Congress to pass “fast track” legislation to streamline the passage of trade deals through Congress. Meanwhile, the AFL-CIO says it will withhold contributions to congressional Democrats to pressure them to vote no on fast-track authority. And some tea party-backed Republicans are saying Obama cannot be trusted with the same negotiating authority that past presidents have had. This spring, the White House has invited Japan’s Prime Minister Shinzo Abe to address a joint session of Congress in which he may promote the TPP. For more, we speak with Lori Wallach, director of Public Citizen’s Global Trade Watch, who has been sounding the alarm about the negotiations. She says Congress could vote on the TPP proposal in the third week of April.

UPDATE: Original title changed because it was wrong. What I wanted to show was the teeter-totter effect: as the government increases its knowledge about you (by spying on what you say and write and read and buy, and by keeping track of whom you associate with and communicate with), it correspondingly shuts down your access to knowledge about what it is doing. The product is constant in the sense that if G is what the government knows about you and C is what you can know about the government, G × C = k: every increase in spying is matched by a proportional decrease in transparency.

Written by LeisureGuy

19 March 2015 at 12:56 pm

Use the buildings to eliminate the shadow between the buildings

leave a comment »

More info in this article.

Written by LeisureGuy

16 March 2015 at 6:27 pm

NYPD on the job—revising Wikipedia to erase NYPD police brutality

leave a comment »

And the revisions are being done from police headquarters. Apparently the NYPD thinks whatever they do is okay, but you damn sure better not record it or describe it.

NY tax dollars at work.

In a way, this is not unlike using “enhanced interrogation techniques” in place of “torture” (something the NY Times was only too happy to do) or “bulk data collection” in place of “mass surveillance.” But a rose by any other name would smell as sweet, and a skunk cabbage as foul.

Still, the Orwellian tendencies of the police state are on full display. You’ll recall how, in the prescient novel 1984, history—in the sense of the written record of what happened—was continually revised, and how, in Stalin’s Soviet Union, photographs were continually retouched to remove those persons who had fallen from favor.

And now the NYPD is doing its bit. It’s apparently much easier for them to revise Wikipedia than to clean up their act.

UPDATE: Another story on the NYPD’s effort to alter history and to erase events.

Written by LeisureGuy

14 March 2015 at 10:44 am

Bruce Schneier on the Hidden Battles to Collect Your Data and Control Your World

leave a comment »

Bruce Schneier, author of the recent book Data and Goliath, is interviewed on Democracy Now! Their blurb:

Leading security and privacy researcher Bruce Schneier talks about the golden age of surveillance and his new book, “Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World.” The book chronicles how governments and corporations have built an unprecedented surveillance state. While the leaks of Edward Snowden have shed light on the National Security Agency’s surveillance practices, less attention has been paid to other forms of everyday surveillance — license plate readers, facial recognition software, GPS tracking, cellphone metadata and data mining.

Video and transcript at the link.

Written by LeisureGuy

13 March 2015 at 11:25 am

Amazing DIY improvement for Lodge cast-iron skillet

with 5 comments

Truly astonishing. I want to do this…

If I were Lodge, I would consider coming out with a “highly polished” line, with higher prices to reflect the additional work.

Written by LeisureGuy

9 March 2015 at 10:55 am

Posted in Food, Technology

A real-life example of a back-door to encryption

leave a comment »

It seems to be obvious to everyone except politicians and law enforcement officials that deliberately building a weakness into any security system pretty much makes it an insecurity system. “I know: let’s have surveillance cameras cover all entrances to the building except for just one!” Idiocy.

And yet many are clamoring to build computer “security” systems that have at least one deliberate vulnerability, a vulnerability that ONLY legal authorities will ever discover and use—despite the obvious value of knowing that vulnerability and the high likelihood that someone who knows it (and a fair number of people will, if it is to be widely implemented) will sell the secret, or simply post it on the Internet in protest.
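The structural problem is easy to demonstrate. Here is a toy sketch in Python (deliberately not real cryptography: XOR stands in for a real cipher, and every name in it is hypothetical) of an escrow-style backdoor, in which each message key is wrapped twice, once for the recipient and once for a law-enforcement master key:

```python
import os

def xor(data: bytes, key: bytes) -> bytes:
    # Toy stand-in for a real cipher. Do not use for anything real.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

MASTER_KEY = os.urandom(16)  # the escrowed "law enforcement" key

def encrypt(plaintext: bytes, recipient_key: bytes) -> dict:
    # Wrap the per-message session key twice: once for the recipient,
    # once for the escrow authority. The second copy is the backdoor.
    session_key = os.urandom(16)
    return {
        "ciphertext": xor(plaintext, session_key),
        "wrapped_for_recipient": xor(session_key, recipient_key),
        "wrapped_for_escrow": xor(session_key, MASTER_KEY),
    }

def attacker_decrypt(msg: dict, stolen_master_key: bytes) -> bytes:
    # Anyone holding the one master key can unwrap the session key
    # of ANY message ever sent under this scheme.
    session_key = xor(msg["wrapped_for_escrow"], stolen_master_key)
    return xor(msg["ciphertext"], session_key)

alice_key = os.urandom(16)
msg = encrypt(b"meet at noon", alice_key)
assert attacker_decrypt(msg, MASTER_KEY) == b"meet at noon"
```

The recipient’s key protects one conversation; the master key protects all of them at once, which makes it precisely the secret most worth bribing for, stealing, or leaking.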

Kevin Drum has an interesting post on this very topic:

Companies like Apple and Google have announced recently that they will start providing their customers with encryption that even Apple and Google don’t have the keys for. This means that even if law enforcement officers get a subpoena for data held by the companies, it won’t do any good. They couldn’t turn over decrypted data even if they wanted to.

This has led to calls from the FBI and elsewhere to provide “backdoors” of some kind for use by law enforcement. This would be a kind of master key available only under court order. But security experts argue that this makes encryption fundamentally useless. If you deliberately build in a weakness, you simply can never guarantee that it won’t be exploited by hackers. Encryption is either secure or it’s not, full stop.

Over at The Switch, Craig Timberg provides an interesting recent example of this. Back in the 90s, we were fighting this same fight, and one temporary result was the government’s mandate that only a weak form of encryption could be exported outside the U.S. This mandate didn’t last long, but it lasted long enough to get incorporated into quite a few products. Still, that was 20 years ago. What harm could it be doing today?

The weaker encryption got baked into widely used software that proliferated around the world and back into the United States, apparently unnoticed until this year.

Researchers discovered in recent weeks that they could force browsers to use the old export-grade encryption, then crack it over the course of just a few hours. Once cracked, hackers could steal passwords and other personal information and potentially launch a broader attack on the Web sites themselves by taking over elements on a page, such as a Facebook “Like” button.

….The existence of the problem with export-grade encryption amazed the researchers, who have dubbed the flaw “FREAK” for Factoring attack on RSA-EXPORT Keys….Nadia Heninger, a University of Pennsylvania cryptographer, said, “This is basically a zombie from the ‘90s… I don’t think anybody really realized anybody was still supporting these export suites.”

For vulnerable sites, Heninger found that she could crack the export-grade encryption key in about seven hours, using computers on Amazon Web services….More than one third of encrypted Web sites — including those bearing the “lock” icon that signifies a connection secured by SSL technology — proved vulnerable to attack in recent tests conducted by University of Michigan researchers J. Alex Halderman and Zakir Durumeric. The list includes news organizations, retailers and financial services sites such as americanexpress.com. Of the 14 million Web sites worldwide that offer encryption, more than 5 million remained vulnerable as of Tuesday morning, Halderman said.

This is an object lesson in deliberately building vulnerabilities into encryption technology. Maybe . . .

Continue reading.
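For the technically curious, here is a rough sketch of the first step of the kind of test the researchers describe: ask a server to negotiate nothing but export-grade cipher suites and see whether the handshake succeeds. The cipher string “EXPORT” is standard OpenSSL notation, but the sketch assumes an OpenSSL build old enough to still ship those suites; current builds have removed them and will refuse to set the cipher list at all. Actually factoring the 512-bit key, as Heninger did in about seven hours, is a separate offline step not shown here.

```python
import socket
import ssl

def accepts_export_ciphers(host: str, port: int = 443):
    # Offer the server only 1990s export-grade cipher suites.
    # If the handshake succeeds, the server is FREAK-vulnerable.
    ctx = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    try:
        ctx.set_ciphers("EXPORT")
    except ssl.SSLError:
        # Modern OpenSSL builds no longer include export suites at all.
        raise RuntimeError("this OpenSSL build has removed EXPORT ciphers")
    try:
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock) as tls:
                return tls.cipher()  # (suite name, protocol, secret bits)
    except (ssl.SSLError, OSError):
        return None  # server refused the weak suites

print(accepts_export_ciphers("example.com") or "export ciphers refused")
```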

Written by LeisureGuy

6 March 2015 at 12:57 pm

The coming of strong AI

leave a comment »

I read an article this morning—one that I cannot find again—on the coming of strong AI: millions are being poured into its development because it could create a significant advantage, not for mankind so much as for the government or corporation that first succeeds. And some of that spending is driven less by the benefits the technology might promise than by the threat if another country (think North Korea, for example) succeeds first. In particular, if there are military applications, each country would doubtless prefer to be the first to succeed.

In (fruitlessly) looking for the article, I came across this interesting piece by John McGinnis from a few years back (2010) in which a number of these issues are discussed. (It’s also available as a PDF.)

Recently, Artificial Intelligence (AI) has become a subject of major media interest.  For instance, last May the New York Times devoted an article to the prospect of the time at which AI equals and then surpasses human intelligence.[1]  The article speculated on the dangers that such an event and its “strong AI” might bring.[2]  Then in July, the Times discussed computer-driven warfare.  Various experts expressed concern about the growing power of computers, particularly as they become the basis for new weapons, such as the predator drones that the United States now uses to kill terrorists.[3]

These articles encapsulate the twin fears about AI that may impel regulation in this area—the existential dread of machines that become uncontrollable by humans and the political anxiety about machines’ destructive power on a revolutionized battlefield.  Both fears are overblown.  The existential fear is based on the mistaken notion that strong artificial intelligence will necessarily reflect human malevolence.  The military fear rests on the mistaken notion that computer-driven weaponry will necessarily worsen, rather than temper, human malevolence.  In any event, given the centrality of increases in computer power to military technology, it would be impossible to regulate research into AI without empowering the worst nations on earth.

Instead of prohibiting or heavily regulating artificial intelligence, the United States should support civilian research into a kind of AI that will not endanger humans—a so-called “friendly AI.”[4]  First, such support is the best way to make sure that computers do not turn out to be an existential threat.  It would provide incentives for researchers in the most technologically advanced nation in the world to research and develop AI that is friendly to man.

Second, such support is justified because of the positive spillovers that computational advances will likely provide in collective decisionmaking.  The acceleration of technology creates the need for quicker government reaction to the potentially huge effects of disruptive innovations.  For instance, at the dawn of the era in which the invention of energy-intensive machines may have started to warm up the earth, few recognized any risk from higher temperatures that such machines might cause.[5]  Yet as I will describe below, current developments in technology make the rise of energy-intensive machines seem slow-moving.  Assuming that man-made atmospheric warming is occurring,[6] it likely presents only the first of a number of possible catastrophes generated by accelerating technological change—dangers that may be prevented or at least ameliorated through earlier objective analysis and warning.  But it is no less important to recognize that other technological advances may create a cascade of benefits for society—benefits that false perceptions of risk may retard or even preclude.  As a result, gathering and analyzing information quickly is more important than ever to democratic decisionmaking because the stakes of such regulatory decisions have never been higher.

Given that AI has substantial potential to help society formulate the correct policies about all other accelerating technologies with transformational capacity, such as nanotechnology and biotechnology, the most important policy for technological change is that for AI itself.  Strong AI would help analyze the data about all aspects of the world—data that is growing at an exponential rate.[7]  AI then may help make connections between policies and consequences that would otherwise go overlooked by humans, acting as a fire alarm against dangers from new technologies whose chain of effects may be hard to assess even if they are quite imminent in historical terms.

Such analysis is useful not only for avoiding disaster but also for taking advantage of the cornucopia of benefits from accelerating technology. Better analysis of future consequences may help the government craft the best policy toward nurturing such beneficent technologies, including providing appropriate prizes and support for their development.  Perhaps more importantly, better analysis about the effects of technological advances will tamp down on the fears often sparked by technological change.  The better our analysis of the future consequences of current technology, the less likely it is that such fears will smother beneficial innovations before they can deliver Promethean progress.

In this brief Essay, I first describe why strong AI has a substantial possibility of becoming a reality and then sketch the two threats that some ascribe to AI.  I show that relinquishing or effectively regulating AI in a world of competing sovereign states cannot respond effectively to such threats, given that sovereign states can gain a military advantage from AI, and that even within states, it would be very difficult to prevent individuals from conducting research into AI.  Moreover, I suggest that AI-driven robots on the battlefield may actually lead to less destruction, becoming a civilizing force in wars as well as an aid to civilization in its fight against terrorism.  Finally, I offer reasons that friendly artificial intelligence can be developed to help rather than harm humanity, thus eliminating the existential threat.

I conclude by showing that, in contrast to a regime of prohibition or heavy regulation, a policy of government support for AI that follows principles of friendliness is the best approach to artificial intelligence.  If friendly AI emerges, it may aid in preventing the emergence of less friendly versions of strong AI, as well as distinguish the real threats from the many potential benefits inherent in other forms of accelerating technology.

1. The Coming of AI . . .

Continue reading.

Written by LeisureGuy

6 March 2015 at 9:50 am
