"Natural artificial Intelligence"?? That's an oxymoron if I ever heard one. "Artificial Intelligence" is merely the _illusion_ of intelligence in an inanimate object. "Artificial" means "not real". Just look it up in a dictionary. For a young kid, a good puppet show is an example of AI; and, I must say, that isn't much different from an Asimo demonstration: an elaborate puppet show for adults.
"AI" and "excellent programming" can be the same thing; however, excellent programming might also produce true "Machine Intelligence", which would not be artificial, but real intelligence that exists in a machine. I don't think anyone knows how to do that yet.
What is the difference between AI and Good Programming...?
Quote: Original post by Timkin
[...] researchers in AI (and those on the periphery or in associated endeavours) cannot agree on a definition of AI. However, we can and do talk about the difference between strong AI and weak AI. The former is the recreation of (and ultimately hopefully therefore the understanding of) human intelligence via artificial means (and some proponents of strong AI insist that we must do this by mimicking the cognitive abilities of humans). Thus, the goals of strong AI are to create artificial systems that have the robustness, adaptability and creativity of humans and that could operate in the same domains as we do with the same effectiveness.
Weak AI on the other hand is about the creation of artificial systems that can perform tasks that humans can do, without the necessity that the system be of human intelligence.
That's not exactly what the distinction is. Strong AI says that computers can become self-aware. Weak AI says they will just be simulating self-awareness, but we won't be able to tell the difference. It's a philosophical and semantic difference, and is not useful to game programmers.
See
http://en.wikipedia.org/wiki/Strong_AI
Quote: Original post by Cowboy Coder
That's not exactly what the distinction is. Strong AI says that computers can become self-aware. Weak AI says they will just be simulating self-awareness, but we won't be able to tell the difference. It's a philosophical and semantic difference, and is not useful to game programmers.
See
http://en.wikipedia.org/wiki/Strong_AI
Wikipedia holds no credibility as a reliable source of information on any topic, so you most definitely should not rely on it as the basis of an argument for your perspective. Having said that, the page you linked doesn't support your statement above.
I'm not going to be drawn into an argument about the definition of strong versus weak AI, simply because it differs depending on who you talk to... but my statements above are not at odds with what is written in that wiki page... and they are based on extensive study of this aspect of AI and philosophy of mind.
As for the relevance to game programmers... the OP raised the question of 'AI' versus 'good programming', NOT game AI versus programming... and game programmers should get out and read more... it makes them better programmers and less likely to talk about World Of Warcraft at the next party they attend. ;)
Well, unfortunately Wikipedia agrees with every authority on this particular subject.
[Edited by - Cowboy Coder on February 26, 2007 11:38:48 PM]
Quote: Original post by Cowboy Coder
Well, unfortunately Wikipedia agrees with every authority on this particular subject.
...and your evidence for this is what? A citation list of a dozen articles? This still doesn't address the fact that you made a claim about the definition of strong versus weak AI that is NOT supported by that Wikipedia page. One section of that page deals with artificial consciousness, not the whole page and certainly not the parts making claims about the definition of strong AI and weak AI. If you want to justify your claim, you're going to need more than that page, since it clearly doesn't support your supposition.
Like I said, it's a semantic difference. You said that strong AI was mimicking human thought processes, and weak AI was just performing certain tasks like chess which did not really involve human thought processes.
I said most AI literature says Strong AI is where the AI is self-aware (and seems human), and Weak AI is where the AI is not self-aware (and still seems human). Personally I feel the distinction is meaningless, but it matters to many people. I was thinking more along the lines of people like Dennett, Searle, Penrose and Hofstadter. You perhaps were thinking of more hands-on practitioners of the craft.
It seems that some people actually do use the term as you described, but really that's a corruption, since the term "strong AI" was coined by Searle in 1980, and he was very much on the philosophical side of the argument - even arguing such nonsense that computers could not be self-aware since they were deterministic. See the "Chinese Room" argument.
For references, try Google Books:
http://books.google.com/books?q=%22strong+ai%22
or Scholar:
http://scholar.google.com/scholar?hl=en&lr=&safe=off&q=%22strong+ai%22
Quote: Original post by Cowboy Coder
I said most AI literature says Strong AI is where the AI is self-aware (and seems human), and Weak AI is where the AI is not self-aware (and still seems human).
Actually, I'd say that's a perspective carried by the philosophy community when discussing philosophy of mind and the issue of its synthesis in an artificial environment, rather than a perspective carried by the artificial intelligence community, which, in its modern incarnation, is populated principally by practitioners. This philosophical view was very much centered around understanding mind and psychology and the possibility of an artificial variety, rather than creating 'artificial intelligence'. I believe Searle's view of the classification is outdated within the modern perspective (and certainly his opinion is ;) ). Harnad, for example, gives an excellent exposition of the modern view of strong versus weak AI. He also notes that it is not a bipartite classification, but rather that there is a continuous scale of possibilities... and my use of the terms is certainly more in line with his perspective.
Quote: It seems that some people actually do use the term as you described, but really that's a corruption, since the term "strong AI" was coined by Searle in 1980
I don't agree that it's a corruption of the term. As with many categorisations, a better understanding of the things that may or may not be in a given class leads us to redefine the class boundaries (restate its definition). We can certainly talk about Searle's 'strong AI' perspective (or more distinctly, his weak AI perspective, since that's actually more relevant to modern AI and programming), but a large majority of researchers in artificial intelligence would not relate to this philosophical view, let alone be able to discuss the issue from that perspective. It is unfortunate and perhaps lamentable that most modern AI practitioners are computer scientists with little or no formal training in philosophy, but that is the reality. Hence, the terms "strong AI" and "weak AI" have a meaning relevant to their modern context, one lacking the philosophical underpinnings under which the terms were coined.
Perhaps, though, if Searle had been given the benefit of the subsequent 30 years of research in functional neurology and computational intelligence, he might not have created such a bipartite and necessarily divisive classification! ;)
Cheers,
Timkin
If you can prove that it works, it's not AI.
If you don't even know what it's doing or how it's doing it, that's strong AI
---visit my game site http://www.boardspace.net - free online strategy games
March 12, 2007 06:08 PM
Quote: Original post by ddyer
If you can prove that it works, it's not AI.
If you don't even know what it's doing or how it's doing it, that's strong AI
That definition seems inherently contrary to understanding; hence I reject it under the lameness clause.