
Strong AI vs. Weak AI

Started by October 05, 2006 04:47 PM
26 comments, last by johdex 18 years, 1 month ago
I'm wondering why exactly anybody would even bother making a "strong AI"? What do we get out of it? A machine that mimics humans, i.e. a temperamental machine, prone to laziness and disobedience, unable to reason logically? No thanks.

Quote:
The Weak vs. Strong AI debate is certainly a philosophical matter. Even if we one day do have AIs that are indistinguishable from humans, there will no doubt be a debate on whether those AIs really "understand" what they're doing.

Like you said, this gets back to the Chinese Room idea.


Well, yes, it does. The Chinese Room Argument, IMO, isn't compelling in the least. There's no reason why we cannot say that the whole system is intelligent. Focussing on the guy inside the box is a canard. After all, something intelligent had to set the system up, produce the book, etc.
Quote:
The Weak vs. Strong AI debate is certainly a philosophical matter. Even if we one day do have AIs that are indistinguishable from humans, there will no doubt be a debate on whether those AIs really "understand" what they're doing.

Like you said, this gets back to the Chinese Room idea.

Quote: Original post by MDI
Well, yes, it does. The Chinese Room Argument, IMO, isn't compelling in the least. There's no reason why we cannot say that the whole system is intelligent. Focussing on the guy inside the box is a canard. After all, something intelligent had to set the system up, produce the book, etc.


My opinion on the Chinese room is: yes, the human inside the room does not "really" understand Chinese. However, I also believe that the brain inside the human doesn't "really" understand English either, because in reality there is nothing to understand.

This is all philosophy though. It doesn't really reach any conclusion as to whether Strong AI is technically possible.

Quote: Why would someone want to build Strong AI?
Quote: Original post by aphydx
Right now I am convinced that Gödel's incompleteness theorem proves that we need a radically different computational model in order to achieve strong AI.

How would it prove that?
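For reference, here is what the first incompleteness theorem actually says (my informal paraphrase in LaTeX notation, so we're at least arguing about the same statement):

% Gödel's first incompleteness theorem, informally: for any consistent,
% effectively axiomatized theory F that interprets basic arithmetic,
% there is a sentence G_F that F can neither prove nor refute:
F \nvdash G_F \quad\text{and}\quad F \nvdash \lnot G_F

The theorem is about formal theories, not minds or machines. Penrose's move needs the extra premise that a human can "see" that G_F is true, and that premise is exactly what's in dispute.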
Quote: Original post by Optimus Prime
Quote:
The Weak vs. Strong AI debate is certainly a philosophical matter. Even if we one day do have AIs that are indistinguishable from humans, there will no doubt be a debate on whether those AIs really "understand" what they're doing.

Like you said, this gets back to the Chinese Room idea.

Quote: Original post by MDI
Well, yes, it does. The Chinese Room Argument, IMO, isn't compelling in the least. There's no reason why we cannot say that the whole system is intelligent. Focussing on the guy inside the box is a canard. After all, something intelligent had to set the system up, produce the book, etc.


My opinion on the Chinese room is: yes, the human inside the room does not "really" understand Chinese. However, I also believe that the brain inside the human doesn't "really" understand English either, because in reality there is nothing to understand.

This is all philosophy though. It doesn't really reach any conclusion as to whether Strong AI is technically possible.

Quote: Why would someone want to build Strong AI?


Going further, I would say that human "understanding" relies on rules we are not aware of.
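To make that concrete, here is a toy sketch in Python (the rulebook and phrases are invented for illustration): a "speaker" that produces passable replies by blind pattern matching, with nothing inside it that understands the symbols it shuffles.

import unicodedata  # not needed here, but a real rulebook would normalize input

# Toy "Chinese Room": replies come from literal rule-following.
# The rulebook is made up; nothing here "understands" the symbols.
RULEBOOK = {
    "ni hao": "ni hao ma?",    # greeting -> greeting back
    "xie xie": "bu ke qi",     # thanks -> you're welcome
    "zai jian": "zai jian",    # goodbye -> goodbye
}

def room_reply(message: str) -> str:
    """Follow the rulebook literally; fall back to a stock apology."""
    return RULEBOOK.get(message.strip().lower(), "dui bu qi, wo bu dong")

for msg in ["ni hao", "xie xie", "gibberish"]:
    print(msg, "->", room_reply(msg))

Swap the three-entry dict for a rulebook the size of a brain and the behaviour scales up, but at no point does any line of this program start to "understand" anything. That's the whole dispute in a dozen lines.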
[size="2"]I like the Walrus best.
Regarding human consciousness/sapience/whatever you want to believe in:

Check into people with 'split brain' and 'alien hand' disorders, where significant sections (even half) of the brain have been disconnected from the rest and continue to act upon the portions of the body that they control.

The 'conscious' half of the brain (or at least the part that talks) claims that there is an 'alien' presence living with them that sometimes acts uncontrollably, contrary to what they want...
Makes you wonder whether there really is a part of the brain you could point at and say 'consciousness is right there!'...
Strong AI could potentially make the production of Weaker AI easier, so while the initial investment is the difficult part, the results could produce "good" AI far more quickly and easily than programming in every eventuality, as with Weaker AI. Given an infinite amount of time, Weaker AI can always produce the same outcomes as Strong AI, but in a finite world I really can't see Weaker AI consistently beating Strong AI over time.
Regarding Strong AI and programming consciousness:

You can't program something that you don't understand.

And I don't believe for a second that tossing a bunch of GAs (genetic algorithms) or NNs (neural networks) into a blender and shaking them up will accidentally build one either.
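For anyone who hasn't seen one, this is the kind of "blender" being dismissed: a minimal genetic algorithm in Python, evolving bit strings toward a fixed target. The target, population size, and mutation rate are all invented for the demo.

import random

# Minimal GA: keep the fitter half, breed and mutate to refill.
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]

def fitness(genome):
    # Count bits matching the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    # Flip each bit independently with probability `rate`.
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    # One-point crossover at a random cut.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    parents = population[:10]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    population = parents + children

print("generation", generation, "best", population[0], "fitness", fitness(population[0]))

It reliably finds a ten-bit target in a few dozen generations, which is exactly the point of contention: blind search works when you can write the fitness function down, and nobody can write one down for consciousness.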
Quote: Original post by Graphain
Strong AI could potentially make the production of Weaker AI easier, so while the initial investment is the difficult part, the results could produce "good" AI far more quickly and easily than programming in every eventuality, as with Weaker AI. Given an infinite amount of time, Weaker AI can always produce the same outcomes as Strong AI, but in a finite world I really can't see Weaker AI consistently beating Strong AI over time.


Uh, no.
Pit the Strong AI known as you against the Weak AI of Chessmaster 4000: you will consistently lose.
In the context of a system, intuition (random) is no match for the expert system, by definition of "expert".
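A quick sketch of that point in Python (the game and numbers are mine, just for illustration): a perfect-play minimax "expert" against a random mover in tic-tac-toe. The random player never beats perfect play; at best it scrapes a draw.

import random

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    # Return "X" or "O" for three in a row, else None.
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def moves(board):
    return [i for i, cell in enumerate(board) if cell == " "]

CACHE = {}

def best_move(board, player):
    # Memoized negamax: score is +1 win, 0 draw, -1 loss for `player`.
    key = (tuple(board), player)
    if key in CACHE:
        return CACHE[key]
    w = winner(board)
    if w is not None:
        result = ((1 if w == player else -1), None)
    elif not moves(board):
        result = (0, None)
    else:
        other = "O" if player == "X" else "X"
        best_score, best = -2, None
        for m in moves(board):
            board[m] = player
            score, _ = best_move(board, other)
            board[m] = " "
            if -score > best_score:  # opponent's result, negated
                best_score, best = -score, m
        result = (best_score, best)
    CACHE[key] = result
    return result

def play_one():
    board, turn = [" "] * 9, "X"  # expert is X, random is O
    while winner(board) is None and moves(board):
        m = best_move(board, "X")[1] if turn == "X" else random.choice(moves(board))
        board[m] = turn
        turn = "O" if turn == "X" else "X"
    return winner(board)  # None means a draw

results = [play_one() for _ in range(100)]
print("expert:", results.count("X"), "random:", results.count("O"),
      "draws:", results.count(None))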
Quote: Original post by aphydx
Penrose's argument for the higher abilities of the human mind (compared to an algorithmic axiomatic system) is applicable here, and argues that strong AI is not technically feasible, at least not with a Turing machine.

Right now I am convinced that Gödel's incompleteness theorem proves that we need a radically different computational model in order to achieve strong AI. However, it is unclear to me whether the incompleteness theorem also proves that we, as potential machines, cannot prove the consistency of, or even explain, our own cognitive faculties.

So I will settle for the temporary argument that strong AI will never be possible, unless we stumble upon it by accident.

But I could be convinced otherwise (edit: either by argument or maybe physical force).


Does anybody in the AI community actually take Penrose seriously?
Quote: Original post by Asbestos
The guy also seemed very proud of having worked out that, given infinite computing power, you could program anything in only a few lines. I assume he hadn't heard of a Universal Turing Machine before, which was worked out 70 years ago.


Is that actually the case? I don't see how it possibly can be: the Kolmogorov complexity argument rules it out. I don't see how you could be a serious AI researcher and not be aware of that.

http://en.wikipedia.org/wiki/Kolmogorov_complexity
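The counting argument takes one line. With K_U(x) defined as the length of the shortest program producing x on a universal machine U (standard notation, nothing exotic):

K_U(x) = \min\{\, |p| : U(p) = x \,\}, \qquad
\#\{\, p : |p| < n \,\} = 2^n - 1 < 2^n = \#\{0,1\}^n

There are 2^n strings of n bits but fewer than 2^n programs shorter than n bits, so for every n some n-bit string has no program shorter than itself. Infinite computing power doesn't change the count; "anything in a few lines" fails on exactly those strings.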

Shedletsky's Bits: A Blog | ROBLOX | Twitter
Time held me green and dying
Though I sang in my chains like the sea...

This topic is closed to new replies.
