Later On

A blog written for those whose interests more or less match mine.

Archive for the ‘Software’ Category

Outlook for college grads

Very interesting column by Danielle Paquette in the Washington Post on the impact of automation and AI on highly compensated jobs (banking, finance, software engineering). The column opens with:

The job title “Wall Street trader” once evoked sleek suits, martini-soaked lunches and chaotic offices — a gateway to prosperity at a relatively young age. But earlier this year, Marty Chavez, the chief financial officer of Goldman Sachs, revealed that some of the investment bank’s well-paid humans were being replaced by unpaid robots.

Over the last seventeen years, the number of stock traders at the firm has shrunk from 600 to two, he told a Harvard computer science symposium in January. . .

And it concludes with:

. . .“Investment banking is next on the chopping block,” Webb said. And engineering isn’t off the hook, either. “The next iteration of artificial intelligence,” she said, “is artificial intelligence creating software for itself.”

In one Google Brain experiment, for example, software became better at teaching itself tasks — such as navigating a maze, for example — than the engineers who were charged with making it smarter.

“That obviates the need,” she said, “for a human engineer.”

Written by LeisureGuy

16 May 2017 at 8:15 pm

Jason Koebler switched from Chrome to Opera, and so did I

Jason Koebler writes at Motherboard:

It’s time to break up with Chrome and all the RAM it eats up.

If the purpose of a web browser is to load, view, and interact with the largest percentage of websites on the internet, then the best web browser in the world is Google Chrome, which can handle just about anything you throw at it. But if you like opening more than a couple tabs at once, Google Chrome is not the browser for you.

Over the last few years, I have grown endlessly frustrated with Chrome’s resource management, especially on MacOS. Admittedly, I open too many tabs, but I’d wager that a lot of you do, too. With Chrome, my computer crawls to complete unusability multiple times a day. After one too many times of having to go into Activity Monitor to find that one single Chrome tab is using several gigs of RAM, I decided enough was enough.

I switched to Opera, a browser I had previously thought was only for contrarians.

This, after previous dalliances with Safari and Firefox left me frustrated. Chrome has a built-in advantage, because web developers optimize their pages for the most popular browser (Chrome!). And so as Chrome gets more popular, its compatibility continues to get better while Safari’s and Firefox’s would suffer (at least in theory). Safari uses an engine known as WebKit and Firefox uses Gecko, while Chrome is built on an engine called Blink, which is used in all Chromium-based browsers (Chromium is a fully featured, open source browser that served as the backbone for Chrome).

Safari manages resources well but didn’t work great with a lot of streaming video. More importantly, Safari doesn’t use favicons (the tiny icons on the tab that tell you what site you’re on), which, can I just say, is a WILD design decision and a complete deal breaker for anyone who opens a lot of tabs. I found Firefox to be slow and ran into compatibility issues as well—this experiment was over a year ago so I don’t remember the specifics, but I didn’t love it. I spent only a couple hours with the upstart Vivaldi browser before getting frustrated with its non-Chrome-ness.

After several months of using Opera, most of my web browsing problems have been solved. Wednesday, Opera released a new version of its browser, called “Reborn,” which adds in-browser WhatsApp, Facebook, and Telegram messaging. For now, this feature is just a gimmick to me: The real appeal of Opera is that it is essentially Chrome but with a better, less disastrous and less time consuming mechanism of failure.

Because Opera is also based on Blink, I almost never run into a website, plugin, script, or video that doesn’t work flawlessly on it. In fact, Opera works almost exactly like Chrome, except without the resource hogging that makes me want to throw my computer against a brick wall.

This is exactly the point, according to Opera spokesperson Jan Standal: “What we’re doing is an optimized version of Chrome,” he said. “Web developers optimize most for the browser with the biggest market share, which happens to be Chrome. We benefit from the work of that optimization.”

Why I can’t use Chrome anymore

One of the original draws of Chrome was that it handles each tab as a separate process. This means that if one tab crashes, it doesn’t crash the whole browser. This innovation—once the selling point of the browser—is one of the reasons why Chrome is a nightmare to use today. As we started running powerful applications within tabs and as websites became bloated with autoplaying videos, tracking scripts, and ads, each individual tab we open has the potential to be a resource hog. That’s how you end up with a couple tabs using multiple gigs of RAM. Though I’ve tried extensions like the Great Suspender and OneTab, these never felt like full solutions and neither did much to help my problem.

Google has tried to rein in resource-hogging tabs, but in my experience on MacOS, new versions of Chrome haven’t solved the problem. . .

Continue reading.
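Chrome’s one-process-per-tab design makes the damage easy to see from the command line. Here is a minimal sketch of tallying renderer-process memory the way one might from `ps -axo uid,pid,rss,comm` output on MacOS; the sample lines below are invented stand-ins for real measurements, and the helper-process names follow Chrome’s usual labeling.

```python
# Sum the resident memory of Chrome renderer processes from ps-style
# output: one line per process with uid, pid, rss (in KB), and command.
# SAMPLE_PS is invented data standing in for real `ps` output.
SAMPLE_PS = """\
  501 11203  245120 Google Chrome Helper (Renderer)
  501 11207 1948672 Google Chrome Helper (Renderer)
  501 11211  812344 Google Chrome Helper (Renderer)
  501 11299   60416 Google Chrome Helper (GPU)
"""

def renderer_memory_kb(ps_output):
    """Return the RSS (in KB) of each renderer process."""
    sizes = []
    for line in ps_output.splitlines():
        parts = line.split(None, 3)  # uid, pid, rss, command
        if len(parts) == 4 and "(Renderer)" in parts[3]:
            sizes.append(int(parts[2]))
    return sizes

sizes = renderer_memory_kb(SAMPLE_PS)
print(f"{len(sizes)} renderer processes, {sum(sizes) / 1024 / 1024:.2f} GB resident")
```

A single tab’s renderer topping a couple of gigabytes, as in the Activity Monitor anecdote above, shows up immediately in a tally like this.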

Written by LeisureGuy

15 May 2017 at 4:20 pm

Posted in Software, Technology

Why your next Echo command should be: ‘Disconnect me from the internet’

Tim Johnson writes at McClatchy:

Dr. Herbert Lin, one of the nation’s pre-eminent thinkers on cybersecurity policy, shuns the internet-connected devices that fill some American homes.

He’ll have nothing to do with “smart” refrigerators, hands-free home speakers he can call by name, intelligent thermostats and the like.

“People say to me, ‘How can you have a doctorate in physics from MIT and not trust in technology?’ And I look at them and say, ‘How can I have a doctorate in physics from MIT and trust technology?’ ” Lin said.

Part of what he distrusts is the “internet of things,” and the ease with which hackers can penetrate “smart” devices with digital worms and shanghai them into massive robotic networks to launch crippling digital attacks or generate ever greater quantities of spam.

It is a mistrust based on mathematics. Internet-enabled devices are exploding in number. Gartner, a research giant in technology, says the devices will climb from 6.4 billion at the end of last year to 25 billion by 2020. Such growth sharply augments the power of hidden robotic networks, or botnets.

Now, an unseen battle unfolds. Weaponized digital worms are entering the scene and infecting masses of devices that obediently await instructions from a remote master to spring to action, possibly a new botnet attack.

The threat from botnets is so serious that FBI Director James Comey brought them up at a Senate hearing last week, saying the “zombie armies” created from internet devices can do tremendous harm.

“Last month, the FBI – working with our partners, with the Spanish national police – took down a botnet called the Kelihos botnet and locked up the Russian hacker behind that botnet,” Comey said. “He’s now in jail in Spain, and the good people’s computers who had been lashed to that zombie army have now been freed from it.”

Further botnet attacks are inevitable. . .

Continue reading.

Do read the entire article. One feels a sense of urgency. Later in the article:

Now a new worm, dubbed Hajime – Japanese for “beginning” – is spreading.

The Moscow-based Kaspersky Lab estimated in late April that the Hajime worm had already penetrated 300,000 devices worldwide and could rally them into a botnet army at a moment’s notice.
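The Gartner figures quoted earlier imply startling growth. A quick back-of-the-envelope check, assuming the 6.4 billion figure is for the end of 2016 and the 25 billion projection is for 2020:

```python
# Implied compound annual growth rate of internet-connected devices,
# from the Gartner figures quoted above: 6.4 billion to 25 billion.
start_devices = 6.4e9   # end of 2016
end_devices = 25e9      # projected for 2020
years = 4

cagr = (end_devices / start_devices) ** (1 / years) - 1
print(f"Implied growth: {cagr:.1%} per year")  # roughly 41% per year
```

Each year of growth at that rate multiplies the pool of potential botnet recruits by roughly 1.4, which is why the mistrust is, as the article puts it, based on mathematics.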

Written by LeisureGuy

8 May 2017 at 7:09 pm

The dystopian aspect of AI emerges in reality

AI can be a force for good or not. Adam Liptak in the NY Times describes one problematic aspect—and if you think that’s bad, how about when an algorithm decides that it’s not worthwhile to treat your life-threatening illness?

When Chief Justice John G. Roberts Jr. visited Rensselaer Polytechnic Institute last month, he was asked a startling question, one with overtones of science fiction.

“Can you foresee a day,” asked Shirley Ann Jackson, president of the college in upstate New York, “when smart machines, driven with artificial intelligences, will assist with courtroom fact-finding or, more controversially even, judicial decision-making?”

The chief justice’s answer was more surprising than the question. “It’s a day that’s here,” he said, “and it’s putting a significant strain on how the judiciary goes about doing things.”

He may have been thinking about the case of a Wisconsin man, Eric L. Loomis, who was sentenced to six years in prison based in part on a private company’s proprietary software. Mr. Loomis says his right to due process was violated by a judge’s consideration of a report generated by the software’s secret algorithm, one Mr. Loomis was unable to inspect or challenge.

In March, in a signal that the justices were intrigued by Mr. Loomis’s case, they asked the federal government to file a friend-of-the-court brief offering its views on whether the court should hear his appeal.

The report in Mr. Loomis’s case was produced by a product called Compas, sold by Northpointe Inc. It included a series of bar charts that assessed the risk that Mr. Loomis would commit more crimes.

The Compas report, a prosecutor told the trial judge, showed “a high risk of violence, high risk of recidivism, high pretrial risk.” The judge agreed, telling Mr. Loomis that “you’re identified, through the Compas assessment, as an individual who is a high risk to the community.”

The Wisconsin Supreme Court ruled against Mr. Loomis. The report added valuable information, it said, and Mr. Loomis would have gotten the same sentence based solely on the usual factors, including his crime — fleeing the police in a car — and his criminal history.

At the same time, the court seemed uneasy with using a secret algorithm to send a man to prison. Justice Ann Walsh Bradley, writing for the court, discussed, for instance, a report from ProPublica about Compas that concluded that black defendants in Broward County, Fla., “were far more likely than white defendants to be incorrectly judged to be at a higher rate of recidivism.”

Justice Bradley noted that Northpointe had disputed the analysis. Still, she wrote, “this study and others raise concerns regarding how a Compas assessment’s risk factors correlate with race.”

In the end, though, Justice Bradley allowed sentencing judges to use Compas. They must take account of the algorithm’s limitations and the secrecy surrounding it, she wrote, but said the software could be helpful “in providing the sentencing court with as much information as possible in order to arrive at an individualized sentence.” . . .

Continue reading.
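The ProPublica finding cited in the excerpt is about error rates differing by group. As a concrete (and entirely invented) illustration, here is how one would compute a false positive rate per group: the share of people who did not reoffend but were nonetheless labeled high-risk. This sketches the metric only; it is not Northpointe’s algorithm or ProPublica’s data.

```python
# False positive rate per group: among people who did NOT reoffend,
# what fraction were labeled high-risk? The records are invented.
records = [
    # (group, labeled_high_risk, reoffended)
    ("A", True,  False), ("A", True,  False), ("A", True,  True),
    ("A", False, False), ("B", True,  False), ("B", False, False),
    ("B", False, False), ("B", True,  True),
]

def false_positive_rate(records, group):
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

for group in ("A", "B"):
    print(group, round(false_positive_rate(records, group), 2))
```

In this toy data, members of the two groups who never reoffend face very different odds of being wrongly flagged, which is exactly the kind of disparity the study described.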

See also this book.

Written by LeisureGuy

2 May 2017 at 1:53 pm

Posted in Law, Software, Technology

The dark secret at the heart of AI

Will Knight writes in MIT’s Technology Review:

Last year, a strange self-driving car was released onto the quiet roads of Monmouth County, New Jersey. The experimental vehicle, developed by researchers at the chip maker Nvidia, didn’t look different from other autonomous cars, but it was unlike anything demonstrated by Google, Tesla, or General Motors, and it showed the rising power of artificial intelligence. The car didn’t follow a single instruction provided by an engineer or programmer. Instead, it relied entirely on an algorithm that had taught itself to drive by watching a human do it.

Getting a car to drive this way was an impressive feat. But it’s also a bit unsettling, since it isn’t completely clear how the car makes its decisions. Information from the vehicle’s sensors goes straight into a huge network of artificial neurons that process the data and then deliver the commands required to operate the steering wheel, the brakes, and other systems. The result seems to match the responses you’d expect from a human driver. But what if one day it did something unexpected—crashed into a tree, or sat at a green light? As things stand now, it might be difficult to find out why. The system is so complicated that even the engineers who designed it may struggle to isolate the reason for any single action. And you can’t ask it: there is no obvious way to design such a system so that it could always explain why it did what it did.
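The “taught itself to drive by watching a human” approach is imitation learning: fit a model to pairs of (sensor input, human control output). A toy sketch with a linear model and invented data follows; real systems like Nvidia’s use deep networks over camera frames, so this only illustrates the shape of the training loop, not the actual technique.

```python
# Minimal imitation-learning loop: fit a steering model to
# (sensor reading, human steering angle) pairs by gradient descent.
# Invented data: lane-offset reading -> steering angle (roughly y = -0.5x).
data = [(-1.0, 0.52), (-0.5, 0.27), (0.0, 0.0), (0.5, -0.24), (1.0, -0.49)]

w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    for x, y in data:
        pred = w * x + b
        err = pred - y
        w -= lr * err * x   # gradient of squared error w.r.t. w
        b -= lr * err       # gradient w.r.t. b

print(f"steer(x) ≈ {w:.2f}*x + {b:.2f}")
```

The learned model mimics the demonstrations without anyone writing a rule for steering, which is also why nothing in it explains its choices: all you get back is fitted numbers.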

The mysterious mind of this vehicle points to a looming issue with artificial intelligence. The car’s underlying AI technology, known as deep learning, has proved very powerful at solving problems in recent years, and it has been widely deployed for tasks like image captioning, voice recognition, and language translation. There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries.

But this won’t happen—or shouldn’t happen—unless we find ways of making techniques like deep learning more understandable to their creators and accountable to their users. Otherwise it will be hard to predict when failures might occur—and it’s inevitable they will. That’s one reason Nvidia’s car is still experimental.

Already, mathematical models are being used to help determine who makes parole, who’s approved for a loan, and who gets hired for a job. If you could get access to these mathematical models, it would be possible to understand their reasoning. But banks, the military, employers, and others are now turning their attention to more complex machine-learning approaches that could make automated decision-making altogether inscrutable. Deep learning, the most common of these approaches, represents a fundamentally different way to program computers. “It is a problem that is already relevant, and it’s going to be much more relevant in the future,” says Tommi Jaakkola, a professor at MIT who works on applications of machine learning. “Whether it’s an investment decision, a medical decision, or maybe a military decision, you don’t want to just rely on a ‘black box’ method.”

There’s already an argument that being able to interrogate an AI system about how it reached its conclusions is a fundamental legal right. Starting in the summer of 2018, the European Union may require that companies be able to give users an explanation for decisions that automated systems reach. This might be impossible, even for systems that seem relatively simple on the surface, such as the apps and websites that use deep learning to serve ads or recommend songs. The computers that run those services have programmed themselves, and they have done it in ways we cannot understand. Even the engineers who build these apps cannot fully explain their behavior.

This raises mind-boggling questions. As the technology advances, we might soon cross some threshold beyond which using AI requires a leap of faith. Sure, we humans can’t always truly explain our thought processes either—but we find ways to intuitively trust and gauge people. Will that also be possible with machines that think and make decisions differently from the way a human would? . . .

Continue reading.
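One partial response to the black-box problem the article describes is to probe a model from the outside. A common trick is permutation importance: shuffle one input at a time and see how much the model’s error grows. The sketch below uses a stand-in function as the “model,” not any particular learned system.

```python
import random

# Probe a black-box model with permutation importance: shuffle one input
# column at a time; the more the error grows, the more the model relied
# on that input. The "model" here is a stand-in that secretly ignores x2.
def black_box(x1, x2):
    return 3.0 * x1

random.seed(0)
rows = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]
targets = [black_box(x1, x2) for x1, x2 in rows]

def mean_sq_error(sample):
    errs = [(black_box(x1, x2) - t) ** 2 for (x1, x2), t in zip(sample, targets)]
    return sum(errs) / len(errs)

def permutation_importance(col):
    values = [row[col] for row in rows]
    random.shuffle(values)
    shuffled = []
    for (x1, x2), v in zip(rows, values):
        shuffled.append((v, x2) if col == 0 else (x1, v))
    return mean_sq_error(shuffled)

print("x1 importance:", round(permutation_importance(0), 2))  # large
print("x2 importance:", round(permutation_importance(1), 2))  # 0.0: unused input
```

Techniques like this only approximate an explanation: they reveal what a model reacts to, not why, which is part of what makes the article’s question so hard.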

Written by LeisureGuy

16 April 2017 at 10:00 am

Surely it’s easier to store stuff digitally…. isn’t it?

But what about an effort to preserve old software as software has changed, mutated, and evolved? Where can one find the grand early firsts? In the physical world, we can review how, say, houses of worship evolved over the millennia through the Darwinian process of meme evolution, producing structures of incredible diversity: Mayan pyramids, Greek temples (somehow I can’t picture an Egyptian house of worship), Muslim mosques, Russian Orthodox onion domes, the great cathedrals of Europe, the plain simplicity of the Shakers, the steeples of New England… and that’s not even scratching the surface of that one meme.

But suppose you wanted to see the same sort of historical display of software, which is certainly as real as houses of worship: both are memes. But one is hard tissue that fossilizes readily, and the other is soft tissue that disintegrates quickly and thus is lost to fossilization. That’s why software cannot be seen in the same way.

Jordan Pearson reports in Motherboard:

Code is heritage.

Software has shaped our lives and culture for decades, and now the United Nations will make a push to get world governments to work toward preserving it.

On Monday, UNESCO, one of the founding agencies of the UN, announced a partnership with the French Institute for Research in Computer Science and Automation (INRIA) to preserve every piece of software under the sun. Last year, INRIA kicked off its Software Heritage project with the mission of collecting software source code and ensuring it’s never forgotten. The project has so far logged 58 million projects and billions of source files.

“We live in a digital world, but we don’t know how it works—what’s behind the machine? It’s software,” said Davide Storti, a UNESCO spokesperson, over the phone. “If you know how software works, you might better understand the world you live in. So, it’s important for education, and that kids have access to this notion.”

“It’s something that governments should participate in,” he said.

Read More: Programmers Are Racing to Save Apple II Software Before It Goes Extinct

Software preservation is a massive undertaking, especially outside of sought-after programs like video games, and one that requires a lot of collaboration and time. That’s why, with the current partnership, UNESCO will strike up a conversation among its 195 member states about how they can work to preserve code-as-cultural-heritage.

“Member states might agree that they want to do this, but nothing will happen,” Storti said. “So, we’re partnering with a public institution that is doing this.”

“[INRIA’s] software is real, it’s there, it contains millions of software projects, so it’s thanks to our partner that we can bring value to the conversation,” he continued.

There are a few software preservation projects out there right now, many of which are run by hobbyists, obsessives, and enthusiasts. For example, a group of hackers led by Jason Scott of the Internet Archive are currently on a mission to crack and preserve every piece of Apple II software before they’re lost forever.

“It’s important to not only discuss the preservation itself, but also how these small initiatives can talk to larger initiatives,” Storti said. “As we see it, we look forward to organizing debates on exactly this kind of subject.”

There’s clearly a lot of firming-up to be done, but UNESCO is ready to start the conversations that may lead to government-led software preservation. And with . . .

Continue reading.
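Software Heritage stores source code in a giant content-addressed archive: every file is keyed by a hash of its bytes, so identical files deduplicate and any change yields a new identifier. A minimal sketch of the idea follows; the `blob <size>\0` header shown is git’s convention, and Software Heritage’s intrinsic identifiers are built on the same principle.

```python
import hashlib

# A content-addressed store in miniature: each file is stored under the
# SHA-1 of a header plus its bytes (git's "blob" convention), so the key
# is derived entirely from the content itself.
store = {}

def put(content: bytes) -> str:
    key = hashlib.sha1(b"blob %d\x00" % len(content) + content).hexdigest()
    store[key] = content
    return key

def get(key: str) -> bytes:
    return store[key]

key = put(b"")
print(key)  # e69de29bb2d1d6434b8b29ae775ad8c2e48c5391, same as git's empty blob
```

That deduplicating property is part of what lets a project like this log “58 million projects and billions of source files” without storing the same file twice.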

Memes stand out quite clearly in this example, and the fact that they evolve is clear.

It’s also clear that this sort of preservation is specifically a job for the government. Businesses come and go and cannot (any longer) be expected to focus on anything other than maximizing shareholder value: profits über alles is the current mindset/meme. So business won’t do it. But, as the article explains, it is worth doing. The only funded entity in sight is the government, and governments in general take on the responsibility of preserving the nation’s cultural heritage (which quite specifically consists of memes: that’s what culture is).

In the future it seems likely that people will want to know, “How did we end up in this situation?” and at those times having a record of how the software family of memes evolved would be very helpful. Example: shortly after the moment our new AI overlords take over completely, we might well be curious as to how that happened, since no one was working toward that.

But that’s the nature of evolution: no one was working toward bringing about those weird courtship rituals in the previous post. Those arose purely through the process of evolution, with no purpose in mind. Just as a random combination of letters will in time produce a masterwork, so the random interactions of meme-creators will in time produce the Great AI. Its emergence will be due not to our specific intentions (we are not working toward that purpose) but will be an outcome of meme evolution, much as courtship rituals emerge from lifeform evolution. Our minds, after all, are rooted in the evolution of lifeforms. Mind is an emergence, a phenomenon seen in various contexts (life itself being an emergence). And with minds we got memes, which evolve extremely rapidly: look at meme evolution since 1700, for example. Ultimately one would expect another emergence, which might well be the Great AI.

It will just emerge, as a result of our own activity: building faster computers with more memory, figuring out machine learning and how to apply it, and hooking things together. As with lifeform evolution, not all mutations are beneficial, and many meme mutations are in effect pruned by natural selection. But the ones that work continue to evolve in various directions, so culturally we develop things as odd as those courtship rituals. All those working on various aspects of the Great AI are, in effect, parts of it: the cells, as it were, whose intelligence is directed toward doing things from which the Great AI will emerge: not designed, but evolved.

That does offer one possible way to increase the odds that the Great AI will be benign. (It’s clear already that memes themselves may not be benign: cf. North Korea.) Note that the environment determines which variants are adaptations that survive and flourish and which are maladaptations that weaken and go extinct. That’s true for lifeforms and for memes. As the oceans become more acidic with climate change (by absorbing CO2), that affects how well different lifeforms fare: some will go extinct, others will flourish and give rise to many species. When the environment changes, natural selection picks new winners and losers. The change may be very slow or quite sudden (e.g., an asteroid strike), and the effect is a change in the direction of evolution. Evolution is like a snowflake: the snowflake’s pattern shows the humidity and temperature of the air as it formed, so the snowflake is a record of those conditions. Similarly, the evolution of lifeforms is a record of environmental change (with the environment in this case including other lifeforms).

For memes, we are the environment. That is, we are the hosts for cultural knowledge (memes), so we determine the survival value of various memes. Indeed, in history you can trace out how certain memes arose, flourished (or not), branched, and so on. Take a man’s tie: it originated as a scarf, divided into bow ties and long ties, developed small variants in knots, and branched off into various subspecies (the Western string tie, the lanyard, and so on). So if we want the Great AI to evolve in a benign direction, we need to provide an environment that encourages that, and we are the environment.
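The selection argument here can be made concrete with toy replicator dynamics: each generation, a variant’s share grows in proportion to its fitness, and fitness is set by the environment. Flip the environment and the winner flips. (The numbers below are invented; this illustrates the dynamic, nothing more.)

```python
# Toy replicator dynamics: a variant's share grows in proportion to its
# fitness each generation, and fitness depends on the environment.
def evolve(shares, fitness, generations):
    for _ in range(generations):
        weighted = {v: shares[v] * fitness[v] for v in shares}
        total = sum(weighted.values())
        shares = {v: w / total for v, w in weighted.items()}
    return shares

shares = {"benign": 0.5, "harmful": 0.5}

# An environment that favors the benign variant...
shares = evolve(shares, {"benign": 1.2, "harmful": 1.0}, 30)
print({v: round(s, 3) for v, s in shares.items()})

# ...then the environment changes and favors the harmful one.
shares = evolve(shares, {"benign": 1.0, "harmful": 1.3}, 30)
print({v: round(s, 3) for v, s in shares.items()})
```

Nothing about the variants changed between the two runs; only the environment did. That is the point about memes: we are the environment that picks the winners.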

And it’s clear that we can change the meme environment. Donald Trump has had a very strong impact on which memes flourish and which struggle. I fear, however, that the meme environment now being developed reduces the chance that the Great AI will be benign for humans. (Again: North Korea is a bunch of memes that turn out not to be good for their hosts.)

The control we have is very indirect, but it does seem important to head in a direction different from the one President Trump is taking.

I hope you’re a fan of singularity sci-fi. You may be living it pretty soon.

Written by LeisureGuy

5 April 2017 at 2:30 pm

VPNs and I: Venturing into a new territory

I first came across “How to set up a VPN in 10 minutes for free (and why you urgently need one),” by Quincy Larson (republished by Quartz with a different title). As a result, I’m now using Opera as my default browser, since you can just turn on VPN in the browser.

That caused me to have to convince various sites (Amazon, Netflix, etc.) that I am really who I say I am, generally with an email verification step (all to the good). It also resulted in my Google searches being in Dutch, since my VPN site was in the Netherlands. Opera picks the most efficient site, but you can specify the country you want, and once I specified the U.S., Google switched to English for its own messages.
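The Dutch-Google surprise is easy to check for yourself: ask a geolocation service where your traffic appears to originate. The sketch below only parses a canned response; the field names mimic ipinfo.io-style output (an assumption on my part), and a real check would fetch something like `https://ipinfo.io/json` over the VPN.

```python
import json

# Decide whether the VPN exit looks like the country you asked for.
# SAMPLE is a canned stand-in for a geolocation API response; a real
# check would fetch it over the VPN connection.
SAMPLE = '{"ip": "203.0.113.7", "country": "NL", "city": "Amsterdam"}'

def exit_matches(geo_json, wanted="US"):
    info = json.loads(geo_json)
    return info["country"], info["country"] == wanted

country, ok = exit_matches(SAMPLE)
print(f"Exit appears to be {country}; matches US: {ok}")
```

Seeing “NL” in a check like this would explain the Dutch search results before I pinned the exit country to the U.S.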

Opera is as far as I’ve gone, but in time I will explore further. I’ve already learned that Netflix doesn’t like to connect to VPNs, but since I watch through my Amazon Fire Stick, that’s not been a problem.

Two other articles to note:

“The Motherboard Guide to VPNs,” by Lorenzo Franceschi-Bicchierai in Motherboard.

“Using a VPN Sucks,” by Jason Koebler in Motherboard, with the subtitle “The problem isn’t just that VPNs aren’t great, it’s that Republicans have broken the damn internet.”


Written by LeisureGuy

1 April 2017 at 9:49 am
