Later On

A blog written for those whose interests more or less match mine.

Archive for March 6th, 2015

Amazing flutes!

with one comment

In my high-school band I mostly played flute, though I eventually switched to saxophone (alto and tenor), which I liked a lot better. But I am bowled over by these flutes. Take a look at the gallery.

Written by LeisureGuy

6 March 2015 at 6:29 pm

Posted in Music

How Thatcher’s Government Covered Up a VIP Pedophile Ring

with 2 comments

Given the profound psychological harm pedophiles wreak on children, there really should be some serious punishment handed out. Nico Hines reports at The Daily Beast:

A newspaper editor was handed startling evidence that Britain’s top law enforcement official knew there was a VIP pedophile network in Westminster, at the heart of the British government. What happened next in the summer of 1984 helps to explain how shocking allegations of rape and murder against some of the country’s most powerful men went unchecked for decades.

Less than 24 hours after starting to inquire about the dossier presented to him by a senior Labour Party politician, the editor was confronted in his office by a furious member of parliament who threatened him and demanded the documents. “He was frothing at the mouth and really shouting and spitting in my face,” Don Hale told The Daily Beast. “He was straight at me like a raging lion; he was ready to knock me through the wall.”

Despite the MP’s explosive intervention, Hale refused to hand over the papers which appeared to show that Leon Brittan, Margaret Thatcher’s Home Secretary, was fully aware of a pedophile network that included top politicians.

The editor’s resistance was futile; the following morning, police officers from the counter-terror and intelligence unit known as Special Branch burst into the newspaper office, seized the material and threatened to have Hale arrested if he ever reported what had been found.

More than 30 years later, an inquiry into allegations of child sex abuse rings, murder, and cover-ups has been launched by the British government after Scotland Yard detectives said they believed statements by victims who claimed they were systematically abused as young boys at sex abuse parties attended by judges, politicians, intelligence officers, and staff at the royal palaces.

In 1983, a controversial MP, Geoffrey Dickens, had made a series of incendiary claims about active pedophiles in the corridors of power. He handed a file containing the names of alleged perpetrators to Leon Brittan; publicly the authorities shrugged off the claims and no trial or prosecution would follow. The dossier mysteriously disappeared.

Decades later, Brittan claimed he had simply handed the papers to his subordinates to investigate and heard no more about it. Last year, he was forced to clarify his statement when it emerged that he had later written to Dickens to say the initial investigation had been deemed “worth pursuing” by investigators.

It is now claimed that confidential Home Office papers collated by Baroness Castle of Blackburn and passed to Don Hale, editor of her local newspaper, the Bury Messenger, showed that Brittan had played an active role in overseeing the investigation into the pedophile network. “Leon Brittan was mentioned in everything you picked up, his fingerprints were over everything, he was the instigator,” Hale said. “He really had his finger on the pulse, he wanted to know everything about it; all the documents were cc’d back to Leon Brittan or it was an instruction directly from Leon Brittan.”

Brittan, a protégé of Prime Minister Margaret Thatcher, had been promoted to Home Secretary at the age of 43, making him the youngest person to preside over Britain’s domestic law enforcement and national security apparatus since Winston Churchill before the First World War.

Brittan, who died in January, has been accused of raping a woman and sexually abusing boys. He denied the allegations and was never charged, although police investigations have continued after his death. . .

Continue reading.

Written by LeisureGuy

6 March 2015 at 3:06 pm

How US police work these days

with one comment

Radley Balko documents another drug war death: a man who came to the door wearing only basketball shorts and was instantly shot in the face by a police officer. It’s a good column, and he concludes by writing:

It seems likely that Cruice [the victim] was dealing pot. The police say they found a ledger book, a scale, about a half-pound of marijuana and some cash. It also seems likely that if the police had simply knocked on the door and waited, or apprehended Cruice as he was coming or going, Cruice would still be alive. This insistence on serving drug warrants by barreling into homes creates needless violence, confusion and confrontation. They’re designed to do this. I doubt that Cruice knowingly decided to take on a raiding police team armed only with his basketball shorts. It seems far more likely that he thought they were criminal intruders and was either trying to confront them, or was trying to escape. But there is no room for errors in judgment for the people on the receiving end of these raids — even though sowing confusion and disorientation is the stated aim. It is only the suspects, the targets of the raids, who are expected to do everything right. When the police screw up and kill someone, they’re generally forgiven, owing again to the volatility of the situation.

So judging from the many, many prior incidents similar to this one, it’s probably safe to say that this officer will be cleared of any wrongdoing. It’s also probably safe to say that any investigation will determine that there’s nothing wrong with the police department’s warrant service policies. At least that’s how these investigations usually go. And if it is determined that the cops in these cases are following policy, and that there’s nothing wrong with the policies themselves, then the only conclusion we can draw is that the police agencies believe unarmed men getting shot in the face is an acceptable consequence of the effort to stop people from getting high on marijuana.

Of course, even that is an illusion. If there’s one thing we can say with near-absolute certainty, it’s that it is no more difficult to buy pot in Volusia County, Fla., today than it was before Derek Cruice was gunned down in his own home. And so we add another body to the pile.

And here are some of the links from Balko’s post of links:

Written by LeisureGuy

6 March 2015 at 2:44 pm

A real-life example of a back-door to encryption

leave a comment »

It seems to be obvious to everyone except politicians and law enforcement officials that deliberately building a weakness into any security system pretty much makes it an insecurity system. “I know: let’s have surveillance cameras cover all entrances to the building except for just one!” Idiocy.

And yet many are clamoring to build computer “security” systems with at least one deliberate vulnerability, one that supposedly ONLY legal authorities will ever discover and use. This ignores the obvious black-market value of knowing that vulnerability, and the high likelihood that someone who knows it (and a fair number of people will, if the system is to be widely implemented) will sell the secret, or simply post it on the Internet in protest.
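To make the point concrete, here is a toy sketch in Python (standard library only, and emphatically not real cryptography: the “cipher” is a throwaway SHA-256 keystream, and every name in it is hypothetical) of what a mandated key-escrow backdoor amounts to. Each message gets a fresh session key, but a copy of that session key is always wrapped for one government master key, so whoever obtains that single key, by court order, bribery, or a leak, can read all traffic ever sent:

```python
# Toy sketch (NOT real cryptography): a "key escrow" backdoor scheme.
# Each message is encrypted with a fresh session key, but a copy of the
# session key is always wrapped for a single "master" key. Whoever learns
# that one master key -- lawfully or otherwise -- reads every message.
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy stream cipher: XOR data against a SHA-256-derived keystream.
    # Applying it twice with the same key recovers the original data.
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

MASTER_KEY = secrets.token_bytes(32)  # the mandated backdoor key

def encrypt_with_escrow(recipient_key: bytes, plaintext: bytes) -> dict:
    session = secrets.token_bytes(32)  # fresh key per message
    return {
        "ciphertext": keystream_xor(session, plaintext),
        "wrapped_for_recipient": keystream_xor(recipient_key, session),
        "wrapped_for_escrow": keystream_xor(MASTER_KEY, session),  # the weakness
    }

def escrow_decrypt(msg: dict) -> bytes:
    # Anyone holding MASTER_KEY recovers ANY session key, hence any message.
    session = keystream_xor(MASTER_KEY, msg["wrapped_for_escrow"])
    return keystream_xor(session, msg["ciphertext"])

alice_key = secrets.token_bytes(32)
msg = encrypt_with_escrow(alice_key, b"meet at noon")
print(escrow_decrypt(msg))  # -> b'meet at noon'
```

The scheme fails exactly as described above: the security of every message in the system collapses to the secrecy of one key that, by design, many people must be able to use.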

Kevin Drum has an interesting post on this very topic:

Companies like Apple and Google have announced recently that they will start providing their customers with encryption that even Apple and Google don’t have the keys for. This means that even if law enforcement officers get a subpoena for data held by the companies, it won’t do any good. They couldn’t turn over decrypted data even if they wanted to.

This has led to calls from the FBI and elsewhere to provide “backdoors” of some kind for use by law enforcement. This would be a kind of master key available only under court order. But security experts argue that this makes encryption fundamentally useless. If you deliberately build in a weakness, you simply can never guarantee that it won’t be exploited by hackers. Encryption is either secure or it’s not, full stop.

Over at The Switch, Craig Timberg provides an interesting recent example of this. Back in the 90s, we were fighting this same fight, and one temporary result was the government’s mandate that only a weak form of encryption could be exported outside the U.S. This mandate didn’t last long, but it lasted long enough to get incorporated into quite a few products. Still, that was 20 years ago. What harm could it be doing today?

The weaker encryption got baked into widely used software that proliferated around the world and back into the United States, apparently unnoticed until this year.

Researchers discovered in recent weeks that they could force browsers to use the old export-grade encryption, then crack it over the course of just a few hours. Once cracked, hackers could steal passwords and other personal information and potentially launch a broader attack on the Web sites themselves by taking over elements on a page, such as a Facebook “Like” button.

….The existence of the problem with export-grade encryption amazed the researchers, who have dubbed the flaw “FREAK” for Factoring attack on RSA-EXPORT Keys….Nadia Heninger, a University of Pennsylvania cryptographer, said, “This is basically a zombie from the ‘90s… I don’t think anybody really realized anybody was still supporting these export suites.”

For vulnerable sites, Heninger found that she could crack the export-grade encryption key in about seven hours, using computers on Amazon Web Services….More than one third of encrypted Web sites — including those bearing the “lock” icon that signifies a connection secured by SSL technology — proved vulnerable to attack in recent tests conducted by University of Michigan researchers J. Alex Halderman and Zakir Durumeric. The list includes news organizations, retailers and financial services sites such as americanexpress.com. Of the 14 million Web sites worldwide that offer encryption, more than 5 million remained vulnerable as of Tuesday morning, Halderman said.

This is an object lesson in deliberately building vulnerabilities into encryption technology. Maybe . . .

Continue reading.
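For a feel of why those export-grade keys fell so fast, here is a toy sketch in Python (the primes are absurdly small, and Pollard’s rho merely stands in for the heavyweight factoring methods used against real 512-bit moduli): RSA’s security is nothing more than the difficulty of factoring the public modulus n = p × q, and once n is factored the private key falls out immediately.

```python
# Toy illustration: breaking a weak RSA key by factoring its modulus.
# Real export-grade keys were 512 bits; these primes are tiny, so the
# attack completes instantly rather than in hours.
import math

def pollard_rho(n: int) -> int:
    # Pollard's rho: returns a nontrivial factor of a composite n.
    if n % 2 == 0:
        return 2
    x, y, d, c = 2, 2, 1, 1
    while d == 1:
        x = (x * x + c) % n          # tortoise: one step
        y = (y * y + c) % n          # hare: two steps
        y = (y * y + c) % n
        d = math.gcd(abs(x - y), n)
        if d == n:                   # cycle found no factor; retry with new c
            x, y, d, c = 2, 2, 1, c + 1
    return d

# A toy "export-grade" RSA key.
p, q, e = 1000003, 1000033, 65537
n = p * q
ciphertext = pow(42, e, n)  # someone encrypts the message 42

# The attacker factors n, rebuilds the private exponent, and decrypts.
f = pollard_rho(n)
g = n // f
d = pow(e, -1, (f - 1) * (g - 1))  # modular inverse (Python 3.8+)
print(pow(ciphertext, d, n))  # -> 42
```

Scale the same arithmetic up to a 512-bit n and it takes roughly the seven hours on rented cloud machines that Heninger describes; scale it to the 2048-bit keys now standard and it becomes infeasible, which is the whole argument against mandating weak keys.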

Written by LeisureGuy

6 March 2015 at 12:57 pm

The coming of strong AI

leave a comment »

I read an article this morning—one that I cannot find again—on the coming of strong AI: millions are being poured into its development because it could create a significant advantage, not so much for mankind as for the government or corporation that first succeeds. And some of that spending is driven less by the benefits it might promise than by the threat of another country (think North Korea) succeeding first. In particular, if there are military applications, each country would doubtless prefer to be the first to succeed.

In (fruitlessly) looking for the article, I came across this interesting piece by John McGinnis from a few years back (2010) in which a number of these issues are discussed. (It’s also available as a PDF.)

Recently, Artificial Intelligence (AI) has become a subject of major media interest.  For instance, last May the New York Times devoted an article to the prospect of the time at which AI equals and then surpasses human intelligence.[1]  The article speculated on the dangers that such an event and its “strong AI” might bring.[2]  Then in July, the Times discussed computer-driven warfare.  Various experts expressed concern about the growing power of computers, particularly as they become the basis for new weapons, such as the predator drones that the United States now uses to kill terrorists.[3]

These articles encapsulate the twin fears about AI that may impel regulation in this area—the existential dread of machines that become uncontrollable by humans and the political anxiety about machines’ destructive power on a revolutionized battlefield.  Both fears are overblown.  The existential fear is based on the mistaken notion that strong artificial intelligence will necessarily reflect human malevolence.  The military fear rests on the mistaken notion that computer-driven weaponry will necessarily worsen, rather than temper, human malevolence.  In any event, given the centrality of increases in computer power to military technology, it would be impossible to regulate research into AI without empowering the worst nations on earth.

Instead of prohibiting or heavily regulating artificial intelligence, the United States should support civilian research into a kind of AI that will not endanger humans—a so-called “friendly AI.”[4]  First, such support is the best way to make sure that computers do not turn out to be an existential threat.  It would provide incentives for researchers in the most technologically advanced nation in the world to research and develop AI that is friendly to man.

Second, such support is justified because of the positive spillovers that computational advances will likely provide in collective decisionmaking.  The acceleration of technology creates the need for quicker government reaction to the potentially huge effects of disruptive innovations.  For instance, at the dawn of the era in which the invention of energy-intensive machines may have started to warm up the earth, few recognized any risk from higher temperatures that such machines might cause.[5]  Yet as I will describe below, current developments in technology make the rise of energy-intensive machines seem slow-moving.  Assuming that man-made atmospheric warming is occurring,[6] it likely presents only the first of a number of possible catastrophes generated by accelerating technological change—dangers that may be prevented or at least ameliorated through earlier objective analysis and warning.  But it is no less important to recognize that other technological advances may create a cascade of benefits for society—benefits that false perceptions of risk may retard or even preclude.  As a result, gathering and analyzing information quickly is more important than ever to democratic decisionmaking because the stakes of such regulatory decisions have never been higher.

Given that AI has substantial potential to help society formulate the correct policies about all other accelerating technologies with transformational capacity, such as nanotechnology and biotechnology, the most important policy for technological change is that for AI itself.  Strong AI would help analyze the data about all aspects of the world—data that is growing at an exponential rate.[7]  AI then may help make connections between policies and consequences that would otherwise go overlooked by humans, acting as a fire alarm against dangers from new technologies whose chain of effects may be hard to assess even if they are quite imminent in historical terms.

Such analysis is useful not only for avoiding disaster but also for taking advantage of the cornucopia of benefits from accelerating technology. Better analysis of future consequences may help the government craft the best policy toward nurturing such beneficent technologies, including providing appropriate prizes and support for their development.  Perhaps more importantly, better analysis about the effects of technological advances will tamp down on the fears often sparked by technological change.  The better our analysis of the future consequences of current technology, the less likely it is that such fears will smother beneficial innovations before they can deliver Promethean progress.

In this brief Essay, I first describe why strong AI has a substantial possibility of becoming a reality and then sketch the two threats that some ascribe to AI.  I show that relinquishing or effectively regulating AI in a world of competing sovereign states cannot respond effectively to such threats, given that sovereign states can gain a military advantage from AI, and that even within states, it would be very difficult to prevent individuals from conducting research into AI.  Moreover, I suggest that AI-driven robots on the battlefield may actually lead to less destruction, becoming a civilizing force in wars as well as an aid to civilization in its fight against terrorism.  Finally, I offer reasons that friendly artificial intelligence can be developed to help rather than harm humanity, thus eliminating the existential threat.

I conclude by showing that, in contrast to a regime of prohibition or heavy regulation, a policy of government support for AI that follows principles of friendliness is the best approach to artificial intelligence.  If friendly AI emerges, it may aid in preventing the emergence of less friendly versions of strong AI, as well as distinguish the real threats from the many potential benefits inherent in other forms of accelerating technology.

1. The Coming of AI . . .

Continue reading.

Written by LeisureGuy

6 March 2015 at 9:50 am

Tomorrowland: How Silicon Valley Shapes Our Future

leave a comment »

Thomas Schulz reports in Der Spiegel:

The word on the street is that Travis Kalanick, founder and CEO of Uber, can be an asshole. He publicly insults the competition, mocks his own customers on Twitter and believes that politicians are incompetent. A top company executive even went so far as to suggest that journalists be spied on and Kalanick himself has said that it is as easy for him to seduce women as it is for others to call a taxi. In response to unhappy Uber drivers protesting poor pay, Kalanick predicted that they would soon be replaced by computers anyway.

Since December, Uber has been valued at $41 billion, not much less than Germany’s largest financial institution, Deutsche Bank. It only took the company five years to spread from San Francisco to more than 260 cities in over 50 countries around the world. Every month, the company adds another couple of countries and a handful of cities to its portfolio.

Uber is a good — no, a great — product. Essentially carpooling at the push of a button, it is an extremely simple service and one whose implementation is technically brilliant and easy to use. In most parts of the world, Uber is not only cheaper than any taxi service on offer, but also better. The company says that 50,000 new drivers join Uber each month.

The fact that the boss isn’t particularly nice shouldn’t really matter that much, but things aren’t quite that easy in this case. The company, after all, is a mirror image of its founder: aggressive, ruthless and overly ambitious.

After Portland, Oregon, banned the ride-sharing company from operating in the city late last year, Kalanick launched the service there anyway. The head of the local bureau of transportation was furious. “They think they can just come in here and flagrantly violate the law?” he asked. “This is really amazing. Apparently they believe they’re gods.”

There has been similar resistance in many other cities around the world, including in Germany, where Uber simply ignored court orders. For Kalanick, though, such skirmishes are small frays in a much larger war for supremacy. His “vision,” as he calls it, sees Uber becoming a kind of global transportation service that will ultimately allow city dwellers to eschew owning a car. He sees it transforming into a mobility giant that doesn’t just take people from place to place, but also goods — at the click of a button and at the lowest price available. Ideally with a driverless vehicle.

But Uber isn’t the only company with ambitions of taking over the world. That’s how they all think: Google and Facebook, Apple and Airbnb — all the digital giants along with the myriad smaller companies in their wake.

Their goal is never a niche market; it’s always the entire world. But far from being driven by delusional fantasies, their objectives are often realistic, made possible by a potent cocktail unique in economic history: globalization combined with digitalization.

The technological advances made in the last decade have been breathtaking, but it is likely still just the beginning. The growth of new technologies, after all, has been exponential rather than linear, with ever larger advances coming at an increasingly rapid rate. It is like a gigantic avalanche that begins as a tiny snowball at the top of the mountain.

San Jose, CA, Silicon Valley (photo: Corbis)

The iPhone only made its appearance seven years ago, but most of us no longer remember what the world was like before. Driverless cars were considered to be a crazy fantasy not long ago, but today nobody is particularly amazed by them. All the world’s knowledge condensed into a digital map and easily accessible? Normal. The fact that algorithms in the US control some 70 percent of all trading on the stock market? Crazy, to be sure. But normal craziness.

Dozens of companies are trying to figure out how to put drones to commercial use, be it for deliveries, data collection or other purposes. Huge armies of engineers are chasing after the holy grail of artificial intelligence. And the advances keep coming. Machines that can learn, intelligent robots: We have begun overtaking science fiction.

The phenomenon is still misunderstood, first and foremost by policymakers. It appears they have not yet decided whether to dive in and create a usable policy framework for the future or to stand aside as others create a global revolution. After all, what we are witnessing is not just the triumph of a particular technology. And it is not just an economic phenomenon. It isn’t about “the Internet” or “the social networks,” nor is it about intelligence services and Edward Snowden or the question as to what Google is doing with our data. It isn’t about the huge numbers of newspapers that are going broke nor is it about jobs being replaced by software. It’s not about a messaging service being worth €19 billion ($21.1 billion) or the fact that 20-year-olds are launching entire new industries.

We are witnessing nothing less than a societal transformation that ultimately nobody will be able to avoid. It is the kind of sea change that can only be compared with 19th century industrialization, but it is happening much faster this time. Just as the change from hand work to mass production dramatically changed our society over 100 years ago, the digital revolution isn’t just altering specific sectors of the economy, it is changing the way we think and live.

This time, though, the transformation is different. This time, it is being driven by just a few hundred people. . .

Continue reading.

Written by LeisureGuy

6 March 2015 at 9:32 am

BBS with a Schick Injector, and solution to a high-school-era problem

with 4 comments

SOTD 6 Mar 2015

Do we ever really get over high school?

This morning I brought out the same model of Schick Injector that my step-father switched to (and thus so did I: I used his razor) when I was a junior in high school some years back (mid-1950s). I chronically got nicks with it, and when I shaved this morning I immediately saw why: I was using a terrible blade angle, scraping the blade across my face. As with the GEMs, the top of the razor should be pretty well flat to the skin, but I had it at a relatively steep angle, which would be expected to produce a lot of nicks—and it did, Lordy, it did.

And my prep back then wasn’t all that good, either: very hard water. This morning’s prep was excellent. I should have given the brush one more shake—it’s a big knot, that Plisson Chinese grey, and it holds a lot of water—but I persisted, letting some water fall away, and got quite a nice lather from the (pre-reformulation) Floris No. 89 shaving soap.

Three passes with the Schick holding a Ted Pella Teflon-coated blade—I have some actual Schick Injector blades I’ll have to dig out—and the result was unmistakably BBS. (I know my step-father got great shaves from it, but he never told me about the correct angle, and I imagine he didn’t even think about it—we just shaved and whatever happened, happened.)

A good splash of Floris No. 89 aftershave, and I got a good hit of the extremely nice fragrance, and things are moving along.

The only complaint about the razor is the common problem of single-edged razors: limited lather capacity. Since the Injector head is not very deep, it holds even less lather than the GEM, so frequent rinsing is required. Still, it turns out to be a good razor if you get the angle right. I suppose that’s true of many razors.

Edit: This strikes me as one of those “I wish I could go back in time” things: “Bill [the name I used in high school], I’m from the far future, the year 2015, and I’ve traveled back in time to show you the correct blade angle for the Schick Injector.” That sort of thing.

Edit 2: Corrected typo in title. URL still has the typo.

Written by LeisureGuy

6 March 2015 at 8:48 am

Posted in Shaving
