Later On

A blog written for those whose interests more or less match mine.

Archive for the ‘Technology’ Category

Trump uses social media as a weapon

leave a comment »

George Lakoff has an excellent analysis of the tactical value of Trump’s tweeting. Well worth reading. The analysis is presented as a series of tweets.

Written by LeisureGuy

4 January 2018 at 11:35 am

Stop using Facebook and start using your browser

leave a comment »

Jason Kottke makes a good point:

In “an open memo, to all marginally-smart people/consumers of internet ‘content’”, Foster Kamer has a small suggestion to those who care about the health and diversity of online media: stop reading what Facebook tells you to read and use your browser bar (or bookmarks) instead.

Literally, all you need to do: Type in web addresses. Use autofill! Or even: Google the website you want to go to, and go to it. Then bookmark it. Then go back every now and again.

Instead of reading stories that get to you because they’re popular, or just happen to be in your feed at that moment, you’ll read stories that get to you because you chose to go to them. Sounds simple, and insignificant, and almost too easy, right?

It’s easy and simple to do. As for why you should do it: it’s definitely neither simple nor insignificant. By choosing to be a reader of websites whose voices and ideas you’re fundamentally interested in and care about, you’re taking control.

And by doing that, you’ll chip away at the incentive publishers have to create headlines and stories weaponized for the purpose of sharing on social media. You’ll be stripping away at the motivation for websites everywhere (including this one) to make dumb hollow mindgarbage. At the same time, you’ll increase the incentive for these websites to be (if nothing else) more consistent and less desperate for your attention.

*head nodding vigorously* I mean, it’s a complicated situation. Facebook and Twitter are easily the best news/blog reading platforms ever invented, better than any RSS reader for most people. By putting most of the web’s information all in one place, they offer incredible speed and convenience, which is hard for people to ignore. I made this point in a footnote this morning: using Facebook instead of just bookmarks is compelling in the same way that shopping at Walmart instead of small-town shops was in the 80s. We blame Walmart for decimating small businesses, but ultimately, . . .

Continue reading.

Written by LeisureGuy

2 January 2018 at 3:07 pm

Encryption Lava Lamps

leave a comment »

Algorithms alone cannot produce truly random numbers. The best an algorithm can do is pseudo-random numbers: a deterministic sequence that eventually repeats, albeit in a very long cycle. For true randomness, one must look to the physical world: radioactive decay, for example, or… lava lamps.
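To see why an algorithm alone can’t escape repetition, here’s a deliberately tiny pseudo-random generator (a toy illustration only; real generators use far larger state, but the principle is the same):

```python
# A toy linear congruential generator (LCG): x -> (a*x + c) mod m.
# It is fully deterministic, so its output must eventually repeat.
# With an 8-bit modulus the cycle can be at most 256 values long.
def lcg(seed: int, a: int = 5, c: int = 3, m: int = 2**8):
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

gen = lcg(seed=42)
first = next(gen)
period = 1
for value in gen:
    if value == first:
        break
    period += 1

print(period)  # 256: the full cycle, however "random" the output looks
```

Scaling the modulus up to 2^64 or beyond makes the cycle astronomically long, but it never removes it — which is why cryptographic systems still need a seed drawn from physical unpredictability.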

Atlas Obscura reports:

What’s encrypting your web traffic as you surf the internet? An advanced algorithm created by a supercomputer? Actually, if the site you’re visiting is encrypted by the cybersecurity firm Cloudflare, your activity may be protected by nothing other than a wall of lava lamps. There couldn’t possibly be a groovier way to keep the internet secure.

Cloudflare covers about 10 percent of international web traffic, including the websites for Uber, OKCupid, and Fitbit. And the colorful wall of lava lamps in the company’s San Francisco headquarters might be what’s generating the random code. The wall features over 100 lava lamps, spanning a variety of colors, and its random patterns deter hackers from accessing data.

As the lava lamps bubble and swirl, a video camera on the ceiling monitors their unpredictable changes and connects the footage to a computer, which converts the randomness into a virtually unhackable code.

Why use lava lamps for encryption instead of computer-generated code? Since computer codes are created by machines with relatively predictable patterns, it is entirely possible for hackers to guess their algorithms, posing a security risk. Lava lamps, on the other hand, add to the equation the sheer randomness of the physical world, making it nearly impossible for hackers to break through.

While you might think that such an important place would be kept secret and locked off from the public, it’s actually possible for visitors to witness these lava lamps in person. Simply enter the lobby of Cloudflare’s San Francisco headquarters and ask to see the lava lamp display. . .

Continue reading.
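The harvesting step the article describes (camera footage condensed into key material) can be sketched roughly as follows. This is a hypothetical illustration, not Cloudflare’s actual pipeline:

```python
import hashlib
import secrets

def entropy_from_frame(frame_bytes: bytes) -> bytes:
    """Condense a raw camera frame into 32 bytes of seed material.

    Hashing "whitens" the input: even if only a few pixels changed
    between frames, the digest is unpredictable so long as those
    changes are.
    """
    return hashlib.sha256(frame_bytes).digest()

# Stand-in for a frame captured from the lava-lamp wall.
frame = secrets.token_bytes(1024)
seed = entropy_from_frame(frame)
print(len(seed))  # 32 bytes, suitable as seed material for a CSPRNG
```

In practice, entropy from a physical source like this would be mixed with other sources before feeding a cryptographically secure generator, so that no single source has to be perfect.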

Written by LeisureGuy

29 December 2017 at 9:06 am

Posted in Technology

Yuval Harari, author of “Sapiens: A Brief History of Humankind,” has a new book

leave a comment »

Here are the opening paragraphs of Homo Deus: A Brief History of Tomorrow:

At the dawn of the third millennium, humanity wakes up, stretching its limbs and rubbing its eyes. Remnants of some awful nightmare are still drifting across its mind. ‘There was something with barbed wire, and huge mushroom clouds. Oh well, it was just a bad dream.’ Going to the bathroom, humanity washes its face, examines its wrinkles in the mirror, makes a cup of coffee and opens the diary. ‘Let’s see what’s on the agenda today.’

For thousands of years the answer to this question remained unchanged. The same three problems preoccupied the people of twentieth-century China, of medieval India and of ancient Egypt. Famine, plague and war were always at the top of the list. For generation after generation humans have prayed to every god, angel and saint, and have invented countless tools, institutions and social systems – but they continued to die in their millions from starvation, epidemics and violence. Many thinkers and prophets concluded that famine, plague and war must be an integral part of God’s cosmic plan or of our imperfect nature, and nothing short of the end of time would free us from them.

Yet at the dawn of the third millennium, humanity wakes up to an amazing realisation. Most people rarely think about it, but in the last few decades we have managed to rein in famine, plague and war. Of course, these problems have not been completely solved, but they have been transformed from incomprehensible and uncontrollable forces of nature into manageable challenges. We don’t need to pray to any god or saint to rescue us from them. We know quite well what needs to be done in order to prevent famine, plague and war – and we usually succeed in doing it.

True, there are still notable failures; but when faced with such failures we no longer shrug our shoulders and say, ‘Well, that’s the way things work in our imperfect world’ or ‘God’s will be done’. Rather, when famine, plague or war break out of our control, we feel that somebody must have screwed up, we set up a commission of inquiry, and promise ourselves that next time we’ll do better. And it actually works. Such calamities indeed happen less and less often. For the first time in history, more people die today from eating too much than from eating too little; more people die from old age than from infectious diseases; and more people commit suicide than are killed by soldiers, terrorists and criminals combined. In the early twenty-first century, the average human is far more likely to die from bingeing at McDonald’s than from drought, Ebola or an al-Qaeda attack.

Hence even though presidents, CEOs and generals still have their daily schedules full of economic crises and military conflicts, on the cosmic scale of history humankind can lift its eyes up and start looking towards new horizons. If we are indeed bringing famine, plague and war under control, what will replace them at the top of the human agenda? Like firefighters in a world without fire, so humankind in the twenty-first century needs to ask itself an unprecedented question: what are we going to do with ourselves? In a healthy, prosperous and harmonious world, what will demand our attention and ingenuity? This question becomes doubly urgent given the immense new powers that biotechnology and information technology are providing us with. What will we do with all that power?

Before answering this question, we need to say a few more words about famine, plague and war. The claim that we are bringing them under control may strike many as outrageous, extremely naïve, or perhaps callous. What about the billions of people scraping a living on less than $2 a day? What about the ongoing AIDS crisis in Africa, or the wars raging in Syria and Iraq? To address these concerns, let us take a closer look at the world of the early twenty-first century, before exploring the human agenda for the coming decades. . .

I bought it and am reading it now.

Written by LeisureGuy

28 December 2017 at 1:15 pm

Autonomous killing machines are already here: We call them “corporations”

leave a comment »

Ted Chiang has an article worth reading in Buzzfeed. Here’s who he is:

Ted Chiang is an award-winning writer of science fiction. Over the course of 25 years and 15 stories, he has won numerous awards including four Nebulas, four Hugos, four Locuses, and the John W. Campbell Award for Best New Writer. The title story from his collection, Stories of Your Life and Others, was adapted into the movie Arrival, starring Amy Adams and directed by Denis Villeneuve. He freelances as a technical writer, is a graduate of the Clarion Writers Workshop, and currently resides in Bellevue, Washington.

The article begins:

This summer, Elon Musk spoke to the National Governors Association and told them that “AI is a fundamental risk to the existence of human civilization.” Doomsayers have been issuing similar warnings for some time, but never before have they commanded so much visibility. Musk isn’t necessarily worried about the rise of a malicious computer like Skynet from The Terminator. Speaking to Maureen Dowd for a Vanity Fair article published in April, Musk gave an example of an artificial intelligence that’s given the task of picking strawberries. It seems harmless enough, but as the AI redesigns itself to be more effective, it might decide that the best way to maximize its output would be to destroy civilization and convert the entire surface of the Earth into strawberry fields. Thus, in its pursuit of a seemingly innocuous goal, an AI could bring about the extinction of humanity purely as an unintended side effect.

This scenario sounds absurd to most people, yet there are a surprising number of technologists who think it illustrates a real danger. Why? Perhaps it’s because they’re already accustomed to entities that operate this way: Silicon Valley tech companies.

Consider: Who pursues their goals with monomaniacal focus, oblivious to the possibility of negative consequences? Who adopts a scorched-earth approach to increasing market share? This hypothetical strawberry-picking AI does what every tech startup wishes it could do — grows at an exponential rate and destroys its competitors until it’s achieved an absolute monopoly. The idea of superintelligence is such a poorly defined notion that one could envision it taking almost any form with equal justification: a benevolent genie that solves all the world’s problems, or a mathematician that spends all its time proving theorems so abstract that humans can’t even understand them. But when Silicon Valley tries to imagine superintelligence, what it comes up with is no-holds-barred capitalism.


In psychology, the term “insight” is used to describe a recognition of one’s own condition, such as when a person with mental illness is aware of their illness. More broadly, it describes the ability to recognize patterns in one’s own behavior. It’s an example of metacognition, or thinking about one’s own thinking, and it’s something most humans are capable of but animals are not. And I believe the best test of whether an AI is really engaging in human-level cognition would be for it to demonstrate insight of this kind.

Insight is precisely what Musk’s strawberry-picking AI lacks, as do all the other AIs that destroy humanity in similar doomsday scenarios. I used to find it odd that these hypothetical AIs were supposed to be smart enough to solve problems that no human could, yet they were incapable of doing something most every adult has done: taking a step back and asking whether their current course of action is really a good idea. Then I realized that we are already surrounded by machines that demonstrate a complete lack of insight; we just call them corporations. Corporations don’t operate autonomously, of course, and the humans in charge of them are presumably capable of insight, but capitalism doesn’t reward them for using it. On the contrary, capitalism actively erodes this capacity in people by demanding that they replace their own judgment of what “good” means with “whatever the market decides.”

Because corporations lack insight, we expect the government to provide oversight in the form of regulation, but the internet is almost entirely unregulated. Back in 1996, John Perry Barlow published a manifesto saying that the government had no jurisdiction over cyberspace, and in the intervening two decades that notion has served as an axiom to people working in technology. Which leads to another similarity between these civilization-destroying AIs and Silicon Valley tech companies: the lack of external controls. If you suggest to an AI prognosticator that humans would never grant an AI so much autonomy, the response will be that you fundamentally misunderstand the situation, that the idea of an ‘off’ button doesn’t even apply. It’s assumed that the AI’s approach will be “the question isn’t who is going to let me, it’s who is going to stop me,” i.e., the mantra of Ayn Randian libertarianism that is so popular in Silicon Valley.

The ethos of startup culture could serve as a blueprint for civilization-destroying AIs. “Move fast and break things” was once Facebook’s motto; they later changed it to “Move fast with stable infrastructure,” but they were talking about preserving what they had built, not what anyone else had. This attitude of treating the rest of the world as eggs to be broken for one’s own omelet could be the prime directive for an AI bringing about the apocalypse. When Uber wanted more drivers with new cars, its solution was to persuade people with bad credit to take out car loans and then deduct payments directly from their earnings. They positioned this as disrupting the auto loan industry, but everyone else recognized it as predatory lending. The whole idea that disruption is something positive instead of negative is a conceit of tech entrepreneurs. If a superintelligent AI were making a funding pitch to an angel investor, converting the surface of the Earth into strawberry fields would be nothing more than a long overdue disruption of global land use policy.

There are industry observers talking about the need for AIs to have a sense of ethics, and some have proposed that we ensure that any superintelligent AIs we create be “friendly,” meaning that their goals are aligned with human goals. I find these suggestions ironic given that we as a society have failed to teach corporations a sense of ethics, that we did nothing to ensure that Facebook’s and Amazon’s goals were aligned with the public good. But I shouldn’t be surprised; the question of how to create friendly AI is simply more fun to think about than the problem of industry regulation, just as imagining what you’d do during the zombie apocalypse is more fun than thinking about how to mitigate global warming.

There have been some impressive advances in AI recently, like  . . .

Continue reading.

Written by LeisureGuy

28 December 2017 at 10:52 am

The near future: Autonomous killerbots from unknown sources

leave a comment »

This was all discussed in Daniel Suarez’s excellent tech-sci-fi novel Kill Decision: swarms of small (and inexpensive) autonomous killerbots. Here’s another view, via Jason Kottke (from a post worth reading):

Written by LeisureGuy

28 December 2017 at 10:01 am

What Jeff Bezos wants for Christmas is. . .world peace (with the help of some spies and special forces types)

leave a comment »

Very intriguing article by Danny Fortson in the Times:

Buried deep beneath the white marble foyer of the US Institute of Peace (Usip), with its sweeping views across the National Mall to the Lincoln Memorial, is a cavernous, black-walled basement. Workers call it the “bat cave”. For good reason. The windowless lair is the site of a bold experiment: Amazon’s plunge into the peace-making business.

Having almost wiped out the high street and stolen a march on rivals in the race to develop artificial intelligence, the $570bn (£426bn) retail giant is trying its hand at solving problems far trickier than next-day delivery. Challenges such as terrorism, corruption, resource scarcity and violent extremism.

Amazon is a founding partner of the PeaceTech Accelerator, an obscure joint-venture of military men, former spies, peaceniks and industry that it launched last year with C5 Capital, the London investment firm run by Andre Pienaar, a politically connected corporate intelligence veteran, and the PeaceTech Lab, a spin-off of Usip.

Sheldon Himelfarb, PeaceTech Lab’s chief executive, called it “the future of peace-building”. He added: “Governments simply do not have the resources to move upstream and do conflict prevention in the world. Partnerships with the private sector like this are good business and good for the future of the planet.”

The Seattle giant’s foray into war prevention is perhaps the most striking example yet of its boundless ambitions. It also points to its increasingly close relationship with the US government. Amazon Web Services (AWS) last month announced the launch of Secret Region, a cloud platform specially designed for the CIA and other government intelligence agencies. Jeff Bezos, owner of The Washington Post, has laid down roots in the capital. The world’s richest man ($90bn) was revealed in January as the anonymous buyer of a $23m, 27,000 sq ft mansion in the same neighbourhood as the Obamas and several cabinet officials.

The PeaceTech accelerator scours the globe for start-ups in conflict zones, brings them to the American capital for an intensive eight-week course of mentorship and meetings, and then sends them back to their home countries, often with a new government contract in hand.

AWS uses its unique vantage point as the world’s biggest provider of cloud services to funnel interesting start-ups to the “bat cave”. Once there, it provides them with up to $50,000 in free “cloud credits” as well as mentors and corporate coaches.

The accelerator also relies on social media advertising, as well as partners like SAP’s national security division, which recently joined the programme, to lure talent. At the end of the programme, the accelerator has the option of taking a small equity stake in its graduates.

Himelfarb, a veteran of peace campaigns from Burundi to Bosnia, went to great pains to make clear that the accelerator was an independent NGO with no direct government affiliation, despite its connections in Washington — and location in the bowels of the Usip.

The latter was created by an act of Congress in 1984 as a body “devoted to the non-violent prevention and mitigation of deadly conflict abroad”. It sits across the street from the State Department. The board of directors includes Secretary of State Rex Tillerson and General Jim Mattis, secretary of defence, as well as congressmen from both sides of the aisle.

The PeaceTech Lab, which oversees the accelerator, was spun out in 2014 as a stand-alone entity to apply technology to conflict resolution. It receives funding from some government contracts, but is not directly financed by Washington. Instead it relies on donors and corporate partners. Himelfarb said: “We couldn’t have private sector funding under the charter for Usip. They spun us out so we could be more entrepreneurial.”

Himelfarb said that when C5 and Amazon suggested creating an accelerator, it was “a dream come true”. He added: “As my friends in the military tell me all the time, ‘You can have a vision, Sheldon, but vision without resources is a mirage’.”

He now has resources. Perhaps more importantly, he has acquired the connections. Ron Moultrie spent nearly four decades in American intelligence. A Russian linguist who served in the CIA before moving to the National Security Agency, he retired in 2015 after five years as head of operations, the third in command of the world’s biggest cyber-espionage organisation.

He helped to get the accelerator off the ground via his role as chairman of C5’s American division. The seasoned spy, who still travels under aliases and has top-secret security clearance, chuckled at the notion that the government had turned to Amazon to scout for on-the-ground intelligence help. “I’d hope that if the intelligence community wanted or needed to be able to put that net out there, they’d have another way of doing it,” he said. “This is just a collaboration of riches you have here, where everything has fallen together at the right time.”

What is clear is that US intelligence has come under fire for failings for years, going back to the first Iraq War when it relied on bogus information from an Iraqi informant called “Curveball”, all the way up to last year’s election, when authorities were caught flat-footed by Russia’s misinformation campaign.

The 3bn-plus smartphones in the world, packed with cameras, sensors and GPS trackers, have opened a new front. “It doesn’t matter if you’re in Somalia or South Sudan or downtown London, almost everybody has a smartphone,” said Nancy Payne, PeaceTech’s vice-president. “Traditional peacebuilding is not going away, but why not actually try to figure out how you can tackle complex social problems using technology in the same way that you’re using it to figure out how you disrupt the taxi industry?”

RedCrow, a programme alumnus based in Ramallah, in the Palestinian territories, is but one example. . .

Continue reading.

Written by LeisureGuy

24 December 2017 at 7:07 am
