
Strong AI vs. Weak AI

Started by October 05, 2006 04:47 PM
26 comments, last by johdex 18 years, 4 months ago
Quote:

The Weak vs. Strong AI debate is certainly a philosophical matter. Even if we one day do have AIs that are indistinguishable from humans, there will no doubt be a debate on whether those AIs really "understand" what they're doing.

Like you said, this gets back to the Chinese Room idea.

Quote:
Original post by MDI
Well, yes, it does. The Chinese Room Argument, IMO, isn't compelling in the least. There's no reason why we cannot say that the whole system is intelligent. Focussing on the guy inside the box is a canard. After all, something intelligent had to set the system up, produce the book etc.


My opinion on the Chinese room is: yes, the human inside the room does not "really" understand Chinese. However, I also believe that the brain inside the human doesn't "really" understand English, because in reality there is nothing to understand.

This is all philosophy though. It doesn't really reach any conclusion as to whether Strong AI is technically possible.

Quote:
Why would someone want to build Strong AI?
Quote:
Original post by aphydx
Right now I am convinced that Gödel's incompleteness theorem proves that we need a radically different computational model in order to achieve Strong AI.

How would it prove that?
Quote:
Original post by Optimus Prime
Quote:

The Weak vs. Strong AI debate is certainly a philosophical matter. Even if we one day do have AIs that are indistinguishable from humans, there will no doubt be a debate on whether those AIs really "understand" what they're doing.

Like you said, this gets back to the Chinese Room idea.

Quote:
Original post by MDI
Well, yes, it does. The Chinese Room Argument, IMO, isn't compelling in the least. There's no reason why we cannot say that the whole system is intelligent. Focussing on the guy inside the box is a canard. After all, something intelligent had to set the system up, produce the book etc.


My opinion on the Chinese room is: yes, the human inside the room does not "really" understand Chinese. However, I also believe that the brain inside the human doesn't "really" understand English, because in reality there is nothing to understand.

This is all philosophy though. It doesn't really reach any conclusion as to whether Strong AI is technically possible.

Quote:
Why would someone want to build Strong AI?


Going further, I would say that human "understanding" relies on rules we are not aware of.
[size="2"]I like the Walrus best.
Strong AI could potentially make the production of Weaker AI easier. While the initial investment is the difficult part, the results could produce "good" AI far more quickly and easily than programming in every eventuality, as Weaker AI requires. Given an infinite amount of time, Weaker AI can always produce the same outcomes as Strong AI, but in a finite world I really can't see Weaker AI consistently beating Strong AI over time.
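As a rough illustration of "programming in every eventuality" versus a more general approach, here is a toy sketch in Python. The state names and the simulate/score callables are hypothetical, invented purely for illustration; nothing here is meant as a real architecture for either kind of AI.

# Toy contrast, illustrative only: a "weak" agent needs a hand-authored rule
# for every situation its designers anticipated, while a more general agent
# applies one rule (maximize a predicted score) to situations nobody listed.

def weak_agent(state):
    # Every eventuality is spelled out by hand; unanticipated states fall through.
    rules = {
        ("enemy_visible", "low_health"): "flee",
        ("enemy_visible", "high_health"): "attack",
        ("no_enemy", "low_health"): "heal",
    }
    return rules.get(state, "idle")

def general_agent(state, actions, simulate, score):
    # One generic rule: pick the action whose simulated outcome scores best.
    # 'simulate' and 'score' are hypothetical stand-ins for a world model
    # and an evaluation function.
    return max(actions, key=lambda action: score(simulate(state, action)))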
Quote:
Original post by aphydx
Penrose's argument for the higher abilities of the human mind (compared to an algorithmic axiomatic system) is applicable here, and argues that Strong AI is not technically feasible, at least not with a Turing machine.

Right now I am convinced that Gödel's incompleteness theorem proves that we need a radically different computational model in order to achieve Strong AI. However, it is unclear to me whether the incompleteness theorem also proves that we, as potential machines, cannot prove the consistency of, or even explain, our own cognitive faculties.

So I will settle for the temporary argument that Strong AI will never be possible, unless we stumble upon it by accident.

But I could be convinced otherwise (edit: either by argument or maybe physical force).


Does anybody in the AI community actually take Penrose seriously?
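For reference, the formal machinery Penrose leans on is the same diagonal argument that underlies both Gödel's theorem and the halting problem. A minimal sketch in Python, with a hypothetical halts oracle that is assumed only so the argument can show it cannot exist:

# Illustrative only: the diagonalization behind Godel/Turing-style limits.

def halts(program_source: str, input_data: str) -> bool:
    """Hypothetical perfect oracle: True iff running program_source on input_data halts."""
    raise NotImplementedError  # assumed for the sake of argument; shown impossible below

def trouble(program_source: str):
    # Do the opposite of whatever the oracle predicts about the program run on itself.
    if halts(program_source, program_source):
        while True:          # predicted to halt -> loop forever
            pass
    return None              # predicted to loop -> halt immediately

# Applying trouble to its own source code makes halts() wrong either way, so no
# total, correct halting decider exists. Penrose's extra step is the claim that
# human mathematicians can somehow see past this kind of limit; that step, not
# the theorem itself, is what most of the AI community disputes.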
Quote:
Original post by Asbestos
The guy also seemed very proud of having worked out that, given infinite computing power, you could program anything in only a few lines. I assume he hadn't heard of a Universal Turing Machine before, which was worked out 70 years ago.


Is that actually the case? I don't see how it possibly can be, by a Kolmogorov complexity argument: most strings have no description much shorter than themselves, no matter how much computing power you have. I don't see how you could be a serious AI researcher and not be aware of that.

http://en.wikipedia.org/wiki/Kolmogorov_complexity
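To spell out the counting argument: there are only 2^n - 1 binary programs shorter than n bits, but 2^n binary strings of length n, so for every n at least one string has no description shorter than itself, infinite computing power or not. A quick sanity check of the counting, as a toy Python sketch:

# Pigeonhole counting behind Kolmogorov complexity: descriptions shorter than
# n bits can never cover all strings of length n.

def descriptions_shorter_than(n: int) -> int:
    # Number of binary strings of length 0 through n-1 (candidate "short programs").
    return 2 ** n - 1

def strings_of_length(n: int) -> int:
    return 2 ** n

for n in range(1, 21):
    # At least one n-bit string has no description shorter than n bits.
    assert descriptions_shorter_than(n) < strings_of_length(n)
print("Short descriptions are always outnumbered; 'anything in a few lines' fails.")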

Shedletsky's Bits: A Blog | ROBLOX | Twitter
Time held me green and dying
Though I sang in my chains like the sea...

Quote:
Original post by Optimus Prime
The Weak vs. Strong AI debate is certainly a philosophical matter. Even if we one day do have AIs that are indistinguishable from humans, there will no doubt be a debate on whether those AIs really "understand" what they're doing.


Why don't we just make the smartest AI possible, and ASK it? :P
Quote:
Original post by Optimus Prime
The Weak vs. Strong AI debate is certainly a philosophical matter. Even if we one day do have AIs that are indistinguishable from humans, there will no doubt be a debate on whether those AIs really "understand" what they're doing.


Why don't we just make the smartest AI possible, and ASK it? :P
Sorry, Internet spaz...
Quote:
Original post by CzarKirk
Why don't we just make the smartest AI possible, and ASK it? :P


Won't it just say "42"?
Crystal Space 3D: http://www.crystalspace3d.org | Blender: http://www.blender3d.org | Blender2Crystal: http://b2cs.delcorp.org/index.php/Main_Page

This topic is closed to new replies.
