Later On

A blog written for those whose interests more or less match mine.

The coming of strong AI


I read an article this morning (one that I cannot find again) on the coming of strong AI: millions are being poured into its development because it could create a significant advantage, not for mankind so much as for the government or corporation that first succeeds. Some of that spending is driven less by the benefits strong AI might promise than by the threat of another country (North Korea, say) succeeding first. In particular, if there are military applications, each country would doubtless prefer to be the first to succeed.

In (fruitlessly) looking for the article, I came across this interesting piece by John McGinnis from a few years back (2010) in which a number of these issues are discussed. (It’s also available as a PDF.)

Recently, Artificial Intelligence (AI) has become a subject of major media interest.  For instance, last May the New York Times devoted an article to the prospect of the time at which AI equals and then surpasses human intelligence.[1]  The article speculated on the dangers that such an event and its “strong AI” might bring.[2]  Then in July, the Times discussed computer-driven warfare.  Various experts expressed concern about the growing power of computers, particularly as they become the basis for new weapons, such as the predator drones that the United States now uses to kill terrorists.[3]

These articles encapsulate the twin fears about AI that may impel regulation in this area—the existential dread of machines that become uncontrollable by humans and the political anxiety about machines’ destructive power on a revolutionized battlefield.  Both fears are overblown.  The existential fear is based on the mistaken notion that strong artificial intelligence will necessarily reflect human malevolence.  The military fear rests on the mistaken notion that computer-driven weaponry will necessarily worsen, rather than temper, human malevolence.  In any event, given the centrality of increases in computer power to military technology, it would be impossible to regulate research into AI without empowering the worst nations on earth.

Instead of prohibiting or heavily regulating artificial intelligence, the United States should support civilian research into a kind of AI that will not endanger humans—a so-called “friendly AI.”[4]  First, such support is the best way to make sure that computers do not turn out to be an existential threat.  It would provide incentives for researchers in the most technologically advanced nation in the world to research and develop AI that is friendly to man.

Second, such support is justified because of the positive spillovers that computational advances will likely provide in collective decisionmaking.  The acceleration of technology creates the need for quicker government reaction to the potentially huge effects of disruptive innovations.  For instance, at the dawn of the era in which the invention of energy-intensive machines may have started to warm up the earth, few recognized any risk from higher temperatures that such machines might cause.[5]  Yet as I will describe below, current developments in technology make the rise of energy-intensive machines seem slow-moving.  Assuming that man-made atmospheric warming is occurring,[6] it likely presents only the first of a number of possible catastrophes generated by accelerating technological change—dangers that may be prevented or at least ameliorated through earlier objective analysis and warning.  But it is no less important to recognize that other technological advances may create a cascade of benefits for society—benefits that false perceptions of risk may retard or even preclude.  As a result, gathering and analyzing information quickly is more important than ever to democratic decisionmaking because the stakes of such regulatory decisions have never been higher.

Given that AI has substantial potential to help society formulate the correct policies about all other accelerating technologies with transformational capacity, such as nanotechnology and biotechnology, the most important policy for technological change is that for AI itself.  Strong AI would help analyze the data about all aspects of the world—data that is growing at an exponential rate.[7]  AI then may help make connections between policies and consequences that would otherwise go overlooked by humans, acting as a fire alarm against dangers from new technologies whose chain of effects may be hard to assess even if they are quite imminent in historical terms.

Such analysis is useful not only for avoiding disaster but also for taking advantage of the cornucopia of benefits from accelerating technology. Better analysis of future consequences may help the government craft the best policy toward nurturing such beneficent technologies, including providing appropriate prizes and support for their development.  Perhaps more importantly, better analysis of the effects of technological advances will tamp down the fears often sparked by technological change.  The better our analysis of the future consequences of current technology, the less likely it is that such fears will smother beneficial innovations before they can deliver Promethean progress.

In this brief Essay, I first describe why strong AI has a substantial possibility of becoming a reality and then sketch the two threats that some ascribe to AI.  I show that relinquishing or effectively regulating AI in a world of competing sovereign states cannot respond effectively to such threats, given that sovereign states can gain a military advantage from AI, and that even within states, it would be very difficult to prevent individuals from conducting research into AI.  Moreover, I suggest that AI-driven robots on the battlefield may actually lead to less destruction, becoming a civilizing force in wars as well as an aid to civilization in its fight against terrorism.  Finally, I offer reasons that friendly artificial intelligence can be developed to help rather than harm humanity, thus eliminating the existential threat.

I conclude by showing that, in contrast to a regime of prohibition or heavy regulation, a policy of government support for AI that follows principles of friendliness is the best approach to artificial intelligence.  If friendly AI emerges, it may aid in preventing the emergence of less friendly versions of strong AI, as well as distinguish the real threats from the many potential benefits inherent in other forms of accelerating technology.

1. The Coming of AI . . .


Written by LeisureGuy

6 March 2015 at 9:50 am
