Later On

A blog written for those whose interests more or less match mine.

Archive for the ‘Technology’ Category

Without net neutrality in Portugal, mobile internet is bundled like a cable package

Michael J. Coren reports in Quartz on what happens when net neutrality is thrown overboard, which seems to be the goal of the GOP. Well worth reading. It begins:

The principle of net neutrality is simple: companies that connect you to the internet must treat all content equally. In policy terms, that means the government ensures internet service providers do not block, slow, or otherwise discriminate against certain content or applications.

In the US, this policy was enshrined in the Open Internet Order in 2015, when the Federal Communications Commission (FCC) passed its strongest net neutrality policies to date. The current FCC chairman, Ajit Pai, is now preparing to roll those back. His main objection, Pai told PBS, is that the rules hinder investment in expanding broadband. “My concern is that, by imposing those heavy-handed economic regulations on internet service providers big and small, we could end up disincentivizing companies from wanting to build out internet access to a lot of parts of the country, in low-income, urban and rural areas,” he said.

The proposal has sparked a backlash by critics who say it will result in a rich and poor internet. Companies willing to pay ISPs such as Verizon and Comcast will get faster, favored service. Companies unable or unwilling to fork out the cash will find it hard to compete, while customers may see their internet service offered in tiered “bundles,” similar to the way television channels are grouped by cable providers.

Consider Spain and Portugal. Lisbon-based telecommunications firm MEO has been rolling out mobile packages (link in Portuguese) that provide users with data plans limited to specific apps. Customers will pay more for using data for apps outside the package relative to those in the preferred packages. It was not clear if companies paid to be included in the packages.

“[That’s] a huge advantage for entrenched companies, but it totally ices out startups trying to get in front of people which stifles innovation,” wrote Silicon Valley congressional representative Ro Khanna on Twitter. “This is what’s at stake and that’s why we have to save net neutrality.”

Read the whole thing.

Written by LeisureGuy

5 November 2017 at 6:49 am

AI is coming along at just the right time: Science has outgrown the human mind and its limited capacities

Ahmed Alkhateeb, a molecular cancer biologist at Harvard Medical School whose work focuses on the development of analytical platforms to improve research efficiency in biomedicine, writes in Aeon:

The duty of man who investigates the writings of scientists, if learning the truth is his goal, is to make himself an enemy of all that he reads and … attack it from every side. He should also suspect himself as he performs his critical examination of it, so that he may avoid falling into either prejudice or leniency. 

– Ibn al-Haytham (965-1040 CE)

Science is in the midst of a data crisis. Last year, there were more than 1.2 million new papers published in the biomedical sciences alone, bringing the total number of peer-reviewed biomedical papers to over 26 million. However, the average scientist reads only about 250 papers a year. Meanwhile, the quality of the scientific literature has been in decline. Some recent studies found that the majority of biomedical papers were irreproducible.

The twin challenges of too much quantity and too little quality are rooted in the finite neurological capacity of the human mind. Scientists are deriving hypotheses from a smaller and smaller fraction of our collective knowledge and consequently, more and more, asking the wrong questions, or asking ones that have already been answered. Also, human creativity seems to depend increasingly on the stochasticity of previous experiences – particular life events that allow a researcher to notice something others do not. Although chance has always been a factor in scientific discovery, it is currently playing a much larger role than it should.

One promising strategy to overcome the current crisis is to integrate machines and artificial intelligence in the scientific process. Machines have greater memory and higher computational capacity than the human brain. Automation of the scientific process could greatly increase the rate of discovery. It could even begin another scientific revolution. That huge possibility hinges on an equally huge question: can scientific discovery really be automated?

I believe it can, using an approach that we have known about for centuries. The answer to this question can be found in the work of Sir Francis Bacon, the 17th-century English philosopher and a key progenitor of modern science.

The first iterations of the scientific method can be traced back many centuries earlier to Muslim thinkers such as Ibn al-Haytham, who emphasised both empiricism and experimentation. However, it was Bacon who first formalised the scientific method and made it a subject of study. In his book Novum Organum (1620), he proposed a model for discovery that is still known as the Baconian method. He argued against syllogistic logic for scientific synthesis, which he considered to be unreliable. Instead, he proposed an approach in which relevant observations about a specific phenomenon are systematically collected, tabulated and objectively analysed using inductive logic to generate generalisable ideas. In his view, truth could be uncovered only when the mind is free from incomplete (and hence false) axioms.

The Baconian method attempted to remove logical bias from the process of observation and conceptualisation, by delineating the steps of scientific synthesis and optimising each one separately. Bacon’s vision was to leverage a community of observers to collect vast amounts of information about nature and tabulate it into a central record accessible to inductive analysis. In Novum Organum, he wrote: ‘Empiricists are like ants; they accumulate and use. Rationalists spin webs like spiders. The best method is that of the bee; it is somewhere in between, taking existing material and using it.’

The Baconian method is rarely used today. It proved too laborious and extravagantly expensive; its technological applications were unclear. However, at the time the formalisation of a scientific method marked a revolutionary advance. Before it, science was metaphysical, accessible only to a few learned men, mostly of noble birth. By rejecting the authority of the ancient Greeks and delineating the steps of discovery, Bacon created a blueprint that would allow anyone, regardless of background, to become a scientist.

Bacon’s insights also revealed an important hidden truth: the discovery process is inherently algorithmic. It is the outcome of a finite number of steps that are repeated until a meaningful result is uncovered. Bacon explicitly used the word ‘machine’ in describing his method. His scientific algorithm has three essential components: first, observations have to be collected and integrated into the total corpus of knowledge. Second, the new observations are used to generate new hypotheses. Third, the hypotheses are tested through carefully designed experiments.
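Read that way, the three components form an actual loop. The sketch below is only an illustration of that structure; the toy "hidden law" and the helper functions are invented for the example, not anything Bacon or the author specifies:

```python
import random

# Bacon's three-step cycle as code: (1) fold new observations into the
# corpus, (2) induce a hypothesis from it, (3) test it by experiment;
# repeat until the result is meaningful.
def discovery_loop(observe, hypothesize, experiment, is_meaningful):
    corpus = []
    while True:
        corpus.append(observe())            # 1. collect and integrate observations
        hypothesis = hypothesize(corpus)    # 2. generate a hypothesis inductively
        result = experiment(hypothesis)     # 3. test it experimentally
        if is_meaningful(result):
            return hypothesis

# Toy instance: recover the slope of a hidden law y = 2x from samples.
def hidden_law(x):
    return 2 * x

def observe():
    x = random.uniform(1, 10)
    return (x, hidden_law(x))

slope = discovery_loop(
    observe=observe,
    hypothesize=lambda data: sum(y / x for x, y in data) / len(data),
    experiment=lambda k: abs(hidden_law(3.0) - k * 3.0),  # prediction error
    is_meaningful=lambda err: err < 1e-9,
)
print(round(slope, 6))  # 2.0
```

The point of the skeleton is only that each step is a separable, repeatable operation, which is what makes the process a candidate for automation.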

If science is algorithmic, then it must have the potential for automation. This futuristic dream has eluded information and computer scientists for decades, in large part because the three main steps of scientific discovery occupy different planes. Observation is sensual; hypothesis-generation is mental; and experimentation is mechanical. Automating the scientific process will require the effective incorporation of machines in each step, and in all three feeding into each other without friction. Nobody has yet figured out how to do that.

Experimentation has seen the most substantial recent progress. For example, the pharmaceutical industry commonly uses automated high-throughput platforms for drug design. Startups such as Transcriptic and Emerald Cloud Lab, both in California, are building systems to automate almost every physical task that biomedical scientists do. Scientists can submit their experiments online, where they are converted to code and fed into robotic platforms that carry out a battery of biological experiments. These solutions are most relevant to disciplines that require intensive experimentation, such as molecular biology and chemical engineering, but analogous methods can be applied in other data-intensive fields, and even extended to theoretical disciplines.

Automated hypothesis-generation is less advanced, but the work of Don Swanson in the 1980s provided an important step forward. He demonstrated the existence of hidden links between unrelated ideas in the scientific literature; using a simple deductive logical framework, he could connect papers from various fields with no citation overlap. In this way, Swanson was able to hypothesise a novel link between dietary fish oil and Raynaud’s syndrome without conducting any experiments or being an expert in either field. Other, more recent approaches, such as those of Andrey Rzhetsky at the University of Chicago and Albert-László Barabási at Northeastern University, rely on mathematical modelling and graph theory. They incorporate large datasets, in which knowledge is projected as a network, where nodes are concepts and links are relationships between them. Novel hypotheses would show up as undiscovered links between nodes.
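Swanson's procedure, often called the ABC model of literature-based discovery, can be sketched directly in the graph terms the paragraph describes: concepts are nodes, co-occurrence in the literature gives edges, and a candidate hypothesis is any unlinked pair of nodes sharing intermediate neighbours. The miniature concept graph below is an illustrative assumption, with a simplified version of Swanson's fish-oil example as the edge set:

```python
from itertools import combinations

# Each edge means two concepts co-occur somewhere in the literature.
edges = {
    ("fish oil", "blood viscosity"),
    ("fish oil", "platelet aggregation"),
    ("blood viscosity", "raynauds syndrome"),
    ("platelet aggregation", "raynauds syndrome"),
    ("aspirin", "platelet aggregation"),
}

def neighbors(graph, node):
    return ({b for a, b in graph if a == node}
            | {a for a, b in graph if b == node})

def candidate_links(graph):
    """Unlinked concept pairs that share at least one intermediate node."""
    nodes = {n for edge in graph for n in edge}
    linked = {frozenset(edge) for edge in graph}
    out = []
    for a, c in combinations(sorted(nodes), 2):
        if frozenset((a, c)) in linked:
            continue
        shared = neighbors(graph, a) & neighbors(graph, c)
        if shared:
            out.append((a, c, shared))
    return out

for a, c, via in candidate_links(edges):
    print(f"{a} -- {c} via {via}")
```

Among the candidates this surfaces is the pair ("fish oil", "raynauds syndrome"), linked through intermediate concepts even though no edge connects them directly — the shape of Swanson's original inference.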

The most challenging step in the automation process is . . .

Continue reading.

Written by LeisureGuy

4 November 2017 at 9:34 am

Posted in Science, Technology

Learning my new carbon-steel skillet

I hadn’t really thought about carbon-steel skillets, but when I ran across a review I was open to the idea, partly because I love my carbon-steel chef’s knife so much. Carbon steel and cast iron conduct and retain heat similarly. Carbon-steel skillets are, in general, lighter than cast-iron skillets of the same size, and carbon-steel skillets are much smoother than cast iron, particularly cast iron made by Lodge. (Field cast iron is considerably smoother than Lodge, but not so smooth as carbon steel.)

Like cast-iron skillets, carbon-steel skillets require seasoning, and must be dried well (I dry either type by heating it on a burner or in the oven) or they rust. The first seasoning of a new carbon-steel skillet requires first that you remove the protective varnish (or whatever it is) by vigorous scrubbing with a very stiff-bristled brush, using hot water and a good detergent.

Once the protective layer is gone, dry the skillet immediately and do the first seasoning. One method is shown in this video:

Written by LeisureGuy

3 November 2017 at 12:29 pm

Not good news: White Supremacists Share Bomb-Making Materials in Online Chats

A.C. Thompson and Ali Winston report in ProPublica:

Right-wing extremists communicating in confidential online chats in recent months have shared scores of documents detailing the manufacture and use of bombs, grenades, mines and other incendiary devices.

The documents, which range from instructions on detonating dynamite to U.S. military manuals for constructing improvised explosives and booby traps, were passed around during online conversations among members of Anticom, a secretive and militant group that has emerged during the past year.

Records of the online chats were made available to ProPublica by Unicorn Riot, a leftist media collective that has reported critically on racist marches and right-wing political rallies in cities around the U.S.

Anticom, or Anti-Communist Action, views itself as a guerrilla army fighting against what it has called the radical elements of the country’s political left. On its social media channels, Anticom openly embraces fascist ideology and imagery, and the group’s members have engaged in hate-filled talk involving Jews, Muslims, immigrants and African Americans. In recent weeks Anticom has stepped out of the shadows as its members have provided security to so-called alt-right champion Richard Spencer at a speaking event in Florida. Anticom also helped to organize a “White Lives Matter” protest in Shelbyville, Tennessee, last weekend.

It is unclear how seriously the documents shared in the chats were explored by any of Anticom’s members or followers, much less whether the documents were actually used to craft incendiary devices. But the transcripts of the chats include racist talk and open mentions of mass killings.

The user who posted the bomb-making documents, for instance, said he or she wanted to overthrow the U.S. government. “Death to all non whites,” the user wrote in a chat forum post on April 26. Another Anticom member encouraged recruits to construct a bomb and use it to carry out an attack in the style of the 2013 Boston Marathon bombing.

The chatroom logs shared with ProPublica show that Anticom members were in communication with another extremist group, several members of which have surfaced in federal investigations.

In May, federal agents searching the Tampa home of 21-year-old Brandon Russell discovered an array of explosives and bomb ingredients: fuses made from rifle shells, a white cake-like explosive substance called HMTD, more than one pound of ammonium nitrate and other explosive precursors, and two different kinds of radioactive material. The agents promptly arrested Russell, who was both a member of the Florida National Guard and a leader of Atomwaffen, a small fascist group calling for a “white revolution in the 21st century.”

Federal authorities only uncovered Russell’s bomb-making materials after his roommate and fellow Atomwaffen member Devon Arthurs killed two of their comrades. Arthurs later told law enforcement that he acted in order to prevent acts of domestic terrorism, and that Atomwaffen intended “to build a Fourth Reich.” Russell participated in “neo-Nazi internet chat rooms where he threatened to kill people and bomb infrastructure,” and was plotting to blow up a nuclear power plant near Miami, according to Arthurs.

After . . .

Continue reading.

Written by LeisureGuy

2 November 2017 at 3:42 pm

How to Build a Robot That Wants to Change the World

John Pavlus writes in Quanta:

Isaac Asimov’s famous Three Laws of Robotics — constraints on the behavior of androids and automatons meant to ensure the safety of humans — were also famously incomplete. The laws, which first appeared in his 1942 short story “Runaround” and again in classic works like I, Robot, sound airtight at first:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Of course, hidden conflicts and loopholes abound (which was Asimov’s point). In our current age of advanced machine-learning software and autonomous robotics, defining and implementing an airtight set of ethics for artificial intelligence has become a pressing concern for organizations like the Machine Intelligence Research Institute and OpenAI.

Christoph Salge, a computer scientist currently at New York University, is taking a different approach. Instead of pursuing top-down philosophical definitions of how artificial agents should or shouldn’t behave, Salge and his colleague Daniel Polani are investigating a bottom-up path, or “what a robot should do in the first place,” as they write in their recent paper, “Empowerment as Replacement for the Three Laws of Robotics.” Empowerment, a concept inspired in part by cybernetics and psychology, describes an agent’s intrinsic motivation to both persist within and operate upon its environment. “Like an organism, it wants to survive. It wants to be able to affect the world,” Salge explained. A Roomba programmed to seek its charging station when its batteries are getting low could be said to have an extremely rudimentary form of empowerment: To continue acting on the world, it must take action to preserve its own survival by maintaining a charge.

Empowerment might sound like a recipe for producing the very outcome that safe-AI thinkers like Nick Bostrom fear: powerful autonomous systems concerned only with maximizing their own interests and running amok as a result. But Salge, who has studied human-machine social interactions, wondered what might happen if an empowered agent “also looked out for the empowerment of another. You don’t just want your robot to stay operational — you also want it to maintain that for the human partner.”

Salge and Polani realized that information theory offers a way to translate this mutual empowerment into a mathematical framework that a non-philosophizing artificial agent could put into action. “One of the shortcomings of the Three Laws of Robotics is that they are language-based, and language has a high degree of ambiguity,” Salge said. “We’re trying to find something that is actually operationalizable.”
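The information-theoretic framework has a concrete form: n-step empowerment is usually defined as the channel capacity between an agent's n-step action sequences and the states they produce, which in a deterministic environment reduces to the log of the number of distinct reachable states. A minimal sketch, under the assumption of a deterministic one-dimensional corridor (the environment and step count are illustrative, not from the article):

```python
import math
from itertools import product

# Deterministic corridor of 5 cells; actions move left, right, or stay.
# For deterministic dynamics, n-step empowerment = log2 of the number of
# distinct states some n-step action sequence can reach.
ACTIONS = {"L": -1, "R": +1, "S": 0}

def step(state, action, size=5):
    return min(max(state + ACTIONS[action], 0), size - 1)

def empowerment(state, n, size=5):
    reachable = set()
    for seq in product(ACTIONS, repeat=n):
        s = state
        for a in seq:
            s = step(s, a, size)
        reachable.add(s)
    return math.log2(len(reachable))

# An agent in the middle of the corridor can reach more states than one
# pinned in a corner, so it is more "empowered" in exactly Salge's sense.
print(empowerment(2, 2))  # center: {0,1,2,3,4} reachable -> log2(5)
print(empowerment(0, 2))  # corner: {0,1,2} reachable   -> log2(3)
```

The Roomba example fits the same picture: a dead battery collapses the set of reachable states to one, driving empowerment to zero, which is why seeking the charger preserves it.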

Quanta spoke with Salge about information theory, nihilist AI and the canine model of human-robot interaction. An edited and condensed version of the conversation follows.

Some technologists believe that AI is a major, even existential threat. Does the prospect of runaway AI worry you?

I’m a bit on the fence. I mean, I do think there are currently genuine concerns with robots and the growing influence of AI. But I think in the short term we’re probably more concerned about maybe job replacement, decision making, possibly a loss of democracy, a loss of privacy. I’m unsure how likely it is that this kind of runaway AI will happen anytime soon. But even an AI controlling your health care system or what treatment options you’re getting — we should start to be concerned about the kind of ethical questions that arise from this.

How does the concept of empowerment help us deal with these issues? . . .

Continue reading.

Written by LeisureGuy

1 November 2017 at 8:13 pm

Posted in Evolution, Law, Memes, Technology

Easy video intro to neural networks

Part 1:

Part 2:

And also:
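The videos are embedded on the original post and don't survive in text. As a stand-in, here is the smallest version of the training loop such introductions walk through — a single sigmoid neuron learning logical AND by gradient descent (the example itself is mine, not taken from the videos):

```python
import numpy as np

# One sigmoid neuron trained by gradient descent to compute logical AND --
# the core forward/backward loop that intro videos build up to.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)

w = np.zeros(2)
b = 0.0
lr = 1.0

def forward(X):
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))

for _ in range(2000):
    p = forward(X)
    err = p - y                    # gradient of cross-entropy w.r.t. pre-activation
    w -= lr * X.T @ err / len(X)   # update weights
    b -= lr * err.mean()           # update bias

print(np.round(forward(X)))  # [0. 0. 0. 1.]
```

AND is linearly separable, so a single neuron suffices; the XOR function covered later in such series is the classic case that forces a hidden layer.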

Written by LeisureGuy

31 October 2017 at 9:30 am

A Different Take: Our Robot Hellscape Awaits Us

Kevin Drum has a somewhat depressing view of where AI will take us:

As you all know, I think that intelligent robots will eventually take over all human work. The standard take on this—which I repeat in my recent article—is that even if this produces mass unemployment in the medium term, it will be great in the long term. No more work! We can all live in comfort, pondering philosophy and engaging in uplifting conversation. We will paint and read and admire nature. We will explore the planets and send generation ships to distant stars. It will be a golden age for humanity.

Maybe, but it so happens that I don’t believe this. So just in case you’re not depressed enough by all things Trump, here are a few scenarios I actually consider more likely. Trigger warning: I’m not joking! I don’t have any special knowledge, of course, but I really believe that some of these things are pretty plausible. Conversely, I don’t believe the golden age stuff for a second. Without the pressure of needing to survive, the vast majority of humanity has very little ambition. We’re a lot more likely to watch dumb TV and play video games than we are to read Plato or study cures for cancer. In fact, it’s way worse than that.

Here are a few possibilities. Note that for the purposes of this thought experiment, I’m assuming that we succeed in building strong AI that’s better and smarter than the smartest human being. That may or may not happen, but those are the rules of the game: . . .

Continue reading.

Written by LeisureGuy

30 October 2017 at 11:43 am
