
Strong AI vs. Weak AI

Started by October 05, 2006 04:47 PM
26 comments, last by johdex 18 years, 1 month ago
Hello. I'm wondering, how many of you out there are working on Strong AI projects? (See below for links on Strong AI vs. Weak AI.) I ask because I seem to have run the gamut in terms of developing simple AI agents. I'm looking for something a little more complex and in the field of strong AI, rather than weak or narrow AI.

Dr. Ben Goertzel's venture with Novamente seems interesting. However, I think he and his team are biting off more than they can chew. I think a simpler approach with a simpler goal would be a better start. If you are interested, you can take a look at some of his talks:

http://video.google.com/videoplay?docid=569132223226741332&q=agi
http://video.google.com/videoplay?docid=574581743856443641&q=agi

Strong AI: http://en.wikipedia.org/wiki/Strong_AI
Weak AI: http://en.wikipedia.org/wiki/Weak_AI#Weak_AI
Several things:

First, Strong AI isn't necessarily something one programs towards, because it's more of a philosophical stance than a practical one. Anything a Strong AI can do, a Weak AI can do too. The distinction isn't in terms of what they can do; it's about what's under the hood. Is the AI actually sapient, or just making a perfect mimicry?

The Novamente program, from what I can tell, isn't trying to head towards Strong AI. It's heading towards something that its creators claim might be able to reason intelligently, but that isn't the same as Strong AI.

From what I can tell from the lectures, the Novamente team is building a reasonable attempt at a general-purpose reasoning engine. That project might be interesting -- I didn't get too much from the lectures, and there weren't any real demonstrations. Then they hope that the Singularity will arrive and magically solve all their problems in turning their engine into something that might be labeled "Strong AI".

The first lecture, by the way, was fairly hokey: it was just a catalogue of all the wishes people have been hoping will come about after the supposed Singularity. The guy also seemed very proud of having worked out that, given infinite computing power, you could program anything in only a few lines. I assume he hadn't heard of the Universal Turing Machine, which was worked out 70 years ago.

Really, most people working in AI don't particularly care about Strong AI -- that's left to the philosophers. Since no one has yet come up with a situation in which a Strong AI would behave any differently than a Weak AI, it's a little pointless.
Quote: Original post by Asbestos
Since no one has yet come up with a situation in which a Strong AI would behave any differently than a Weak AI, it's a little pointless.


I don't think I understand this statement.

When I think of Weak AI, I think of individual problems that an AI is able to solve. For example, theorem proving, advanced 3d path finding, medical diagnosis, etc.

When I think of Strong AI, I think of a well-rounded "human-like" AI that is able to perform all of these tasks on some level - the level at which they can be performed being dependent on the AI's level of intelligence.

So in other words, there is no specific situation for Strong AI, because by definition Strong AI is for _all_ situations.




Oops. The post above is mine, I just didn't sign in.

Quote: Original post by Asbestos
The distinction isn't in terms of what they can do; it's about what's under the hood. Is the AI actually sapient, or just making a perfect mimicry?


Philosophically, I believe that no distinction can be made between sapience and perfect mimicry of sapience. Sapience is sapience. The only way to make a distinction between two sapient AIs would be to compare source code and hardware (as you said).
Quote: Original post by Anonymous Poster
When I think of Weak AI, I think of individual problems that an AI is able to solve. For example, theorem proving, advanced 3d path finding, medical diagnosis, etc.

When I think of Strong AI, I think of a well-rounded "human-like" AI that is able to perform all of these tasks on some level - the level at which they can be performed being dependent on the AI's level of intelligence.


That's not quite the distinction between Strong and Weak AI.

Searle, who created the terms "Strong AI" and "Weak AI", explains it best in the Chinese Room argument, which I'm sure you've read, but I'll just clarify it here:

Searle envisions a room in which a person is stationed with a very large rule book. In this case, it happens to be a rule book for speaking Chinese. Stimuli come into the room in the form of written Chinese texts. The person inside, who speaks no Chinese, is able to use the rule book to send out replies in Chinese, creating a perfect conversation. The person still does not understand Chinese, however, and so is an example of Weak AI: there's nothing but rules under the hood; there is no actual understanding.

The point of the argument isn't about the limits of what the person in the room can do. There's no reason why, if he had an even bigger book, the guy inside couldn't conduct perfect conversations, do math and logic puzzles, wage war, write a book, and reason about politics, human affairs, love, art, and all the rest. Searle's point isn't that the guy wouldn't be able to do all this; it's that he would still have no actual understanding: he'd just be following rules.
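To make the "just rules" point concrete, here is a toy sketch in Python. The rule book entries are invented placeholders, nothing like what a real conversation would need, but they show how output can be produced by pure symbol lookup:

# The room as a lookup table: replies come from the rules, not from understanding.
rule_book = {
    "ni hao": "ni hao, ni hao ma?",
    "ni hao ma?": "wo hen hao, xiexie.",
}

def room_reply(incoming_text):
    # Return whatever the rule book dictates; no meaning is involved anywhere.
    return rule_book.get(incoming_text, "qing zai shuo yi bian")

print(room_reply("ni hao"))

A real rule book would obviously need context and far more than literal lookups, but the person consulting it would still just be shuffling symbols.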

So in terms of AI, the outward behaviors make no difference. What matters is whether the AI is actually sapient, or just "pretending" to be.

Now one may certainly not agree with Searle's point. Philosophers debate whether or not the person in the room actually understands Chinese; Turing proponents say that "if it looks like intelligence, then it is intelligence"; people programming just shrug their shoulders and keep programming. After all, to someone programming, there's no difference as to whether it is actually sapient, or just pretending to be.

This is why it's a philosophical point, not a programming point.
This may sound silly, but wouldn't any programmed AI be a Weak AI by definition? I mean, even neural network simulations produce answers by following blind rules, yet they're the best model of the human brain we've conceived so far. Even a reasoning engine is still following what are ultimately blind rules to create the reasoning it's simulating.
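For instance, a single simulated neuron is nothing more than a weighted sum pushed through a squashing function. A minimal Python sketch, with made-up numbers:

import math

def neuron(inputs, weights, bias):
    # Weighted sum plus bias, squashed by a sigmoid: pure arithmetic, no "understanding".
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

print(neuron([0.5, 0.2], [1.5, -2.0], 0.1))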

Just as a philosophical point, where would the line be drawn?
Quote: Original post by IndigoDarkwolf
This may sound silly, but wouldn't any programmed AI be a Weak AI by definition? I mean, even neural network simulations produce answers by following blind rules, yet they're the best model of the human brain we've conceived so far. Even a reasoning engine is still following what are ultimately blind rules to create the reasoning it's simulating.

Just as a philosophical point, where would the line be drawn?


The human brain is just a big neural net too, though. The only difference is how we've been programmed, not whether we are or not.

We're programmed by past experience (learning - something neural nets are good at) and by hardcoded genetic coding. Both of these are things we can do in software too.
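To show what I mean by learning in software, here is a tiny Python sketch of a perceptron picking up logical AND from examples. The numbers are arbitrary, and every "lesson" is just a blind arithmetic update:

# Learn logical AND from labelled examples with the perceptron update rule.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b, rate = [0.0, 0.0], 0.0, 0.1

for _ in range(20):                      # a few passes over the "experience"
    for (x1, x2), target in examples:
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - out
        w[0] += rate * err * x1          # nudge the weights toward the target
        w[1] += rate * err * x2
        b += rate * err

# After training it classifies all four cases correctly.
print([(x, 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0) for x, _ in examples])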

I think it's an interesting discussion, personally. As an AI programmer, I don't think I'm necessarily quite so uninterested in the philosophy underlying AI; maybe it's because my first introduction to AI was from a psychology curriculum, as opposed to a machine intelligence one.

As far as Strong AI goes, I'm certainly not working on a project to develop it, though! Don't want to build Skynet just yet... :)
---PS3dev
Quote: Original post by IndigoDarkwolf
This may sound silly, but wouldn't any programmed AI be a Weak AI by definition? I mean, even neural network simulations produce answers by following blind rules, yet they're the best model of the human brain we've conceived so far. Even a reasoning engine is still following what are ultimately blind rules to create the reasoning it's simulating.

Just as a philosophical point, where would the line be drawn?


Well, since I believe that consciousness is an evolved process, and that there is no single spark or neuron or anything that flips a switch from "unconscious" to "conscious" (and that humans are more conscious than ants, which are more conscious than mollusks), I don't think there is a line that can be drawn. However, we've passed the point at which the discussion could ever be settled in an AI forum, or where we wouldn't all just be repeating things said many times before. For some interesting reading, straight from EndNote:

Nagel, Thomas. 1974. "What Is It Like to Be a Bat?"
Weiskrantz, L. 1986. Blindsight: A Case Study and Its Implications.
Dennett, Daniel. 1991. Consciousness Explained.
Chalmers, David. 1995. "Facing Up to the Problem of Consciousness."
Chalmers, David. 1996. The Conscious Mind: In Search of a Fundamental Theory.
Dennett, Daniel. 1999. "The Zombic Hunch: Extinction of an Intuition?"
Harnad, S. 2005. "What Is Consciousness?"
Quote: Original post by IndigoDarkwolf
This may sound silly, but wouldn't any programmed AI be a Weak AI by definition? I mean, even neural network simulations produce answers by following blind rules, yet they're the best model of the human brain we've conceived so far. Even a reasoning engine is still following what are ultimately blind rules to create the reasoning it's simulating.

Just as a philosophical point, where would the line be drawn?


For the xth time, the kind of neural networks generally used in AI is not even a model of the brain at all. The brain is much, much more than just a neural net. As far as recent biology research shows, it doesn't even function like a net.

The terms "strong AI" and "weak AI" were probably first used by some incompetent AI researchers who were trying to give some credibility to their work.

At any rate, there is nothing in that "field" that I would consider of any use for practical game AI.
Quote: Original post by Asbestos
That's not quite the distinction between Strong and Weak AI.

Searle, who created the terms "Strong AI" and "Weak AI", explains it best in the Chinese Room argument, which I'm sure you've read, but I'll just clarify it here:

Searle envisions a room in which a person is stationed with a very large rule book. In this case, it happens to be a rule book for speaking Chinese. Stimuli come into the room in the form of written Chinese texts. The person inside, who speaks no Chinese, is able to use the rule book to send out replies in Chinese, creating a perfect conversation. The person still does not understand Chinese, however, and so is an example of Weak AI: there's nothing but rules under the hood; there is no actual understanding.

The point of the argument isn't about the limits of what the person in the room can do. There's no reason why, if he had an even bigger book, the guy inside couldn't conduct perfect conversations, do math and logic puzzles, wage war, write a book, and reason about politics, human affairs, love, art, and all the rest. Searle's point isn't that the guy wouldn't be able to do all this; it's that he would still have no actual understanding: he'd just be following rules.

So in terms of AI, the outward behaviors make no difference. What matters is whether the AI is actually sapient, or just "pretending" to be.

Now one may certainly not agree with Searle's point. Philosophers debate whether or not the person in the room actually understands Chinese; Turing proponents say that "if it looks like intelligence, then it is intelligence"; people programming just shrug their shoulders and keep programming. After all, to someone programming, there's no difference as to whether it is actually sapient, or just pretending to be.

This is why it's a philosophical point, not a programming point.


I agree. I think I may have just misunderstood you earlier.

The Weak vs. Strong AI debate is certainly a philosophical matter. Even if we do one day have AIs that are indistinguishable from humans, there will no doubt be a debate on whether those AIs really "understand" what they're doing.

Like you said, this gets back to the Chinese Room idea. I have my own position on that debate, but I don't want to delve into it here.

Here is what I am interested in (This kind of goes back to my original post about Novamente):
Given today's technology, would it be possible to program an AI agent in a simulated 3D environment with the ability to learn via "human capabilities" (i.e., natural language processing (text), vision, touch, smell, and taste)? I know that it's possible to develop each of these abilities independently of the others. However, it would be interesting to incorporate them all into one AI in a useful way, so that knowledge from one sense would be useful to another.
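Just to make the idea concrete, here is a very rough Python sketch of what I mean by pooling senses into one agent. The class and field names are mine and purely illustrative, nothing like Novamente's actual design:

from dataclasses import dataclass, field

@dataclass
class Percept:
    sense: str   # e.g. "vision", "touch", "text"
    label: str   # what that sense reports, e.g. "red", "smooth", "apple"

@dataclass
class Agent:
    # One shared memory: object name -> set of (sense, property) pairs.
    memory: dict = field(default_factory=dict)

    def observe(self, obj, percept):
        # Knowledge from any sense lands in the same shared store.
        self.memory.setdefault(obj, set()).add((percept.sense, percept.label))

    def recall(self, obj):
        return self.memory.get(obj, set())

agent = Agent()
agent.observe("apple", Percept("vision", "red"))
agent.observe("apple", Percept("taste", "sweet"))
print(agent.recall("apple"))   # both senses contribute to one representation

The hard part, of course, is the learning itself rather than the bookkeeping, but that is the kind of shared representation I have in mind.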

As I said above, Novamente seems to be overkill for the moment. While it may take a project 10 or 100 times that size to produce an actual human-like AI, for the moment I think much, much smaller steps need to be taken with regard to incorporating several senses into one AI.

