Later On

A blog written for those whose interests more or less match mine.

Is a conscious AI even necessary?

with 2 comments

Good (lengthy) article by David Gelernter:

 Artificial intelligence has been obsessed with several questions from the start: Can we build a mind out of software? If not, why not? If so, what kind of mind are we talking about? A conscious mind? Or an unconscious intelligence that seems to think but experiences nothing and has no inner mental life? These questions are central to our view of computers and how far they can go, of computation and its ultimate meaning–and of the mind and how it works.

They are deep questions with practical implications. AI researchers have long maintained that the mind provides good guidance as we approach subtle, tricky, or deep computing problems. Software today can cope with only a smattering of the information-processing problems that our minds handle routinely–when we recognize faces or pick elements out of large groups based on visual cues, use common sense, understand the nuances of natural language, or recognize what makes a musical cadence final or a joke funny or one movie better than another. AI offers to figure out how thought works and to make that knowledge available to software designers.

It even offers to deepen our understanding of the mind itself. Questions about software and the mind are central to cognitive science and philosophy. Few problems are more far-reaching or have more implications for our fundamental view of ourselves.

The current debate centers on what I’ll call a “simulated conscious mind” versus a “simulated unconscious intelligence.” We hope to learn whether computers make it possible to achieve one, both, or neither.

I believe it is hugely unlikely, though not impossible, that a conscious mind will ever be built out of software. Even if it could be, the result (I will argue) would be fairly useless in itself. But an unconscious simulated intelligence certainly could be built out of software–and might be useful. Unfortunately, AI, cognitive science, and philosophy of mind are nowhere near knowing how to build one. They are missing the most important fact about thought: the “cognitive continuum” that connects the seemingly unconnected puzzle pieces of thinking (for example analytical thought, common sense, analogical thought, free association, creativity, hallucination). The cognitive continuum explains how all these reflect different values of one quantity or parameter that I will call “mental focus” or “concentration”–which changes over the course of a day and a lifetime.

Without this cognitive continuum, AI has no comprehensive view of thought: it tends to ignore some thought modes (such as free association and dreaming), is uncertain how to integrate emotion and thought, and has made strikingly little progress in understanding analogies–which seem to underlie creativity.
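Gelernter's "cognitive continuum" treats the different thought modes as bands along a single sliding quantity, mental focus. A toy sketch of that idea, purely illustrative: the function name, the bands, and their boundaries are my invention, not anything Gelernter specifies.

```python
# Hypothetical illustration of the "cognitive continuum":
# thought modes as bands along one "mental focus" parameter.
# The band boundaries below are invented for illustration only.

def thought_mode(focus: float) -> str:
    """Map a focus level in [0.0, 1.0] to a (hypothetical) thought mode."""
    if not 0.0 <= focus <= 1.0:
        raise ValueError("focus must be between 0.0 and 1.0")
    if focus >= 0.8:
        return "analytical thought"   # high focus: deliberate reasoning
    if focus >= 0.6:
        return "common sense"
    if focus >= 0.4:
        return "analogical thought"
    if focus >= 0.2:
        return "free association"
    return "dreaming"                 # low focus: loose, drifting imagery

# Focus falls over the course of a day; the mode shifts with it.
for f in (0.9, 0.5, 0.1):
    print(f, "->", thought_mode(f))
```

The point of the sketch is only the structure: one parameter, many modes — the claim Gelernter says AI has been missing.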

My case for the near-impossibility of conscious software minds resembles what others have said. But these are minority views. Most AI researchers and philosophers believe that conscious software minds are just around the corner. To use the standard term, most are “cognitivists.” Only a few are “anticognitivists.” I am one. In fact, I believe that the cognitivists are even wronger than their opponents usually say.

But my goal is not to suggest that AI is a failure. It has merely developed a temporary blind spot. My fellow anticognitivists have knocked down cognitivism but have done little to replace it with new ideas. They’ve shown us what we can’t achieve (conscious software intelligence) but not how we can create something less dramatic but nonetheless highly valuable: unconscious software intelligence. Once AI has refocused its efforts on the mechanisms (or algorithms) of thought, it is bound to move forward again.

Until then, AI is lost in the woods.

Continue reading.

Written by LeisureGuy

30 June 2007 at 7:23 am

Posted in Science, Software

2 Responses


  1. I think conscious software is a long way off. (50-100 years?) I also think, although I cannot prove it, that it’s impossible to have very intelligent software that is not conscious.

    JewishAtheist

    2 July 2007 at 10:25 am

  2. The question then arises whether the higher animals—say, cats, dogs, deer—are conscious. They are undoubtedly more intelligent than computers today and, it sometimes seems, quite intelligent. But would you say they are conscious?

    LeisureGuy

    2 July 2007 at 5:48 pm

