
Anything like a "real" AI.

Started by November 03, 2001 01:36 PM
92 comments, last by kenwi 23 years, 3 months ago
Just to clear up a point about the Turing test...

quote:
Original post by Prefect
Turing once devised the Turing test. He basically said that if a human can't tell whether an entity he's talking to is human or not, then that entity must be intelligent.


The above is a common misinterpretation of the Turing Test. Alan Turing's original test was one of computability and 'understanding', rather than assessing 'intelligence'.

To quote Stevan Harnad (probably the foremost expert on the Turing Test):

quote:

'According to [the Turing] test, we should stop denying that a machine is "really" doing the same thing a person is doing if we can no longer tell their performances apart.'



So, if a machine passes the Turing test it doesn't mean it is intelligent, just that it is doing something like what we humans are doing.

It may be that a machine *requires* intelligence to do what we humans do, but that is not necessarily the case!

Cheers,

Timkin
quote:
Original post by Anonymous Poster
Computers can be intelligent... but CPU speeds won't allow it in real-time yet. Things like pattern recognition could/would take a long time on a computer.


The latest CPU can perform 4,000 MIPS. The latest Human Brain can perform 1,000,000x more Calculations Per Second than the latest CPU (NBC News Report)... How far are we away from 4,000,000,000 MIPS?
If the AMD K6-2 500 came out in Q2 1997 at 1,000 MIPS and the latest AMD Athlon 4 1.33GHz came out in Q2 2001 at 4,000 MIPS, we're progressing at roughly 1,000 MIPS per year, so in about 4,000,000 years we'll reach 'brain speeds'... I hope we're not still using the x86 arch by then.
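The linear extrapolation above is easy to check with a few lines of arithmetic. A quick sketch, using only the rough figures quoted in the post:

```python
# Back-of-envelope check of the linear extrapolation above.
# All figures are the rough ones quoted in the post.
cpu_mips = 4_000                       # "latest CPU"
brain_mips = cpu_mips * 1_000_000      # brain said to be a million times faster
mips_per_year = 1_000                  # roughly (4,000 - 1,000) MIPS over ~4 years

years = (brain_mips - cpu_mips) / mips_per_year
print(f"Linear growth: ~{years:,.0f} years to reach 'brain speed'")
```

At a constant 1,000 MIPS per year the gap of roughly four billion MIPS does indeed take about four million years to close, which is what makes the linear assumption the interesting part of the joke.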



"1-2GB of virtual memory, that's way more than I'll ever need!" - Bill Gates
"The Arcade: Quite possibly the game of the century!" - The Gaming Community
may or may not represent anybody in particular
---------------------------------------------------------------------------------------
"Advances are made by answering questions. Discoveries are made by questioning answers." - Bernhard Haisch
ReactOS
Actually, it's not as simple as that. Trends (Moore's law) show that processor speeds double every 18 months, which would mean we reach 4,000,000,000 MIPS in approximately 30 years (if my quick calculation is correct), providing Moore's law continues to hold. Also, parallel processing means that even now we could probably reach 4,000,000,000 MIPS (it would just take a very large number of processors; by the way, I'm no expert on this, and I don't know if that is feasible, but surely parallel processing will give some significant speed increase).
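The 30-year figure follows directly from the doubling assumption. A small sketch of the calculation, using the MIPS numbers from the thread:

```python
import math

# Doublings needed to go from 4,000 MIPS to 4,000,000,000 MIPS under
# Moore's law (one doubling every 18 months). Figures are the thread's own.
start_mips = 4_000
target_mips = 4_000_000_000

doublings = math.log2(target_mips / start_mips)  # log2(1,000,000) is just under 20
years = doublings * 1.5                          # 18 months per doubling
print(f"~{doublings:.1f} doublings, ~{years:.0f} years")
```

A factor of one million is just under twenty doublings, and twenty doublings at 18 months each is about 30 years, matching the poster's estimate.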
Intelligence has nothing to do with processing speed. We're not powered by a CPU; we're a distributed, massively parallel processing device. Our brain has on the order of 100 billion neurons, but that is not the important part. The important part is the interconnection of those neurons: some connect to only a handful of other neurons (possibly only to one other), while many connect to tens of thousands. There is no real comparison between our 'processing speed' and that of a CPU. Apart from being distributed, our brains are also asynchronous. There is no clock, although decision making is often marked as ending when the state of our brain falls from one stable state into another (which is the reason for the clock in the CPU; read this article if you're interested: http://www.techreview.com/magazine/oct01/tristram.asp).
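The idea of a clockless network "falling into a stable state" can be illustrated with a toy Hopfield-style network, where units update one at a time in random order until no unit wants to change. The weights below are hand-picked for illustration only; this is a sketch of asynchronous settling, not a model of the brain.

```python
import random

# A stored pattern and the symmetric Hebbian-style weights that encode it.
pattern = [1, 1, -1, -1]
n = len(pattern)
weights = [[pattern[i] * pattern[j] if i != j else 0 for j in range(n)]
           for i in range(n)]

def settle(state):
    """Asynchronously update units in random order until the state is stable."""
    state = list(state)
    changed = True
    while changed:
        changed = False
        for i in random.sample(range(n), n):   # no global clock: one unit at a time
            activation = sum(weights[i][j] * state[j] for j in range(n))
            new = 1 if activation >= 0 else -1
            if new != state[i]:
                state[i] = new
                changed = True
    return state

print(settle([1, -1, -1, -1]))  # settles into the stored stable state
```

Starting one bit away from the stored pattern, the network falls into the stable state regardless of the (random) order in which units update, which is the flavour of asynchronous, clockless computation the post is describing.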

Intelligence is about how you process, not about speed. You need a definition of intelligence if you want a true comparison between a computer and a human being. You can use the Turing test, but this often becomes a test of the gullibility of human beings rather than the intelligence of the machine: if you take a contrived and limited enough domain, you can teach a computer to perform 'intelligently' enough to fool humans into thinking they are talking to another individual.

The problem is that the domain we exist in is so large and complex that no computer can deal with it in terms of the higher logic used for standard AI. The world we live in does not contain many absolutes (in fact, it could be argued, none at all). We deal with it mostly through generalised reaction using instincts (for walking, avoiding moving objects, etc.) and only abstract from the real world for higher-order decision making and abstracted thinking using internal models and the like (language, pathfinding, etc.). There is so much complexity that you would need a system that existed in the world, not one that could be programmed in the abstract. It would have to adapt based on experience to be able to 'understand' the world around it, whether by some complex internal lifetime adaptation or by a cross-generational evolutionary approach. The computer would have to be structurally coupled to the world; by this I mean it would have to be changed by its interactions with the world, with any internal understanding based on an external experience of the thing it comes to understand. This depends neither on clock speed nor even on the complexity of the system. The problem with creating computers that act like this is that the way we came to act like this is by evolution, both adapting across generations and in the ways we learn to become structurally coupled to the world during our lifetimes.
Of course you can do this in simulation, but you would need a very complex world simulation to become anything like as intelligent as we have become ourselves. It's a question of granularity. If a computer system changes one bit internally due to a feature of the external world, you can say it has learnt something, but few people would say it was intelligent until it had learnt something to rival us or creatures we think of as intelligent. To do that would require, say, a computer that could model our entire world and perform a few hundred million years of evolution. It's not something you could program. IMO.

Mike
I've read queries on here 'bout where we assimilate notions of memory and 'pictures of words', so to say, in our brains....
I'm looking into the study of Wernicke's area of the brain to answer this perplexing thing.
no one knows life.
jds
Jasons888, no-one may know life, but Maturana and Varela had a very good definition of life as an autopoietic (self-generating) unity.

"An autopoietic machine is a machine organised (defined as a unity) as a network of processes of production (transformation and destruction) of components that produces the components which: (i) through their interactions and transformations continuously regenerate and realise the network of processes (relations) that produced them; and (ii) constitute it (the machine) as a concrete unity in the space in which they (the components) exist by specifying the topological domain of its realisation as such a network."

Basically, an autopoietic unity is one that is made up of a network of processes that regenerate themselves and define the organisation of that unity in physical space.

The theory runs much deeper than this and has very bold ramifications, but it's interesting nonetheless. Not really very relevant to computer games, though.
From an intrigued individual: what are those bold ramifications?
Don't keep us in the dark here!
Well, there's plenty of directions autopoiesis takes and many areas of biology and ethology (and other specialities) that can be explored using the paradigm, but I'll give you an example of one of the things I've studied.

If you look into the definition of communication from the perspective of autopoietic theory (see, for example, Maturana's "Biology of language: The epistemology of reality", 1978 I believe), it's defined as a mutual ontogenetic structural coupling across a consensual domain. Basically, communication occurs where one unity interacts with another, altering some aspect of that unity, resulting in a reciprocal interaction altering the first unity. Through interaction the unities have changed each other's behaviour. This is called behavioural cohesion, an example of which could be something really simple. Say you accidentally leave a bowl of dogfood in your back garden where you normally feed your dog (a perturbation to the environment). A fox comes in the night and eats some of the food (the perturbation affects the behaviour of the fox), which you happen to notice (the behaviour of the fox perturbs the environment) and causes you to leave more food in the garden (the fox has now altered your behaviour). You and the fox have undergone mutual structural coupling. This would be behavioural cohesion, as there is a direct causal link between your actions and the actions of the fox. For behavioural coordination to occur there has to be some level of abstraction in the process of behavioural cohesion, for instance some dynamic where you use a bell to tell the fox that the food is there.
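The fox example can be caricatured in a few lines of code: two "unities" whose behaviour parameters are each nudged by the other's last action until they settle into a shared pattern. All names and dynamics here are invented for illustration; this is a toy of mutual coupling, not autopoietic theory proper.

```python
# Toy illustration of mutual structural coupling: each agent's behaviour
# drifts toward the other's, so repeated interaction changes both.
def simulate(steps=20):
    food_left = 1.0    # how much food the person leaves out
    fox_visits = 0.0   # how often the fox visits
    history = []
    for _ in range(steps):
        # The fox's behaviour is perturbed by the food available...
        fox_visits += 0.5 * (food_left - fox_visits)
        # ...and the person's behaviour is perturbed by the fox's visits.
        food_left += 0.5 * (fox_visits - food_left)
        history.append((food_left, fox_visits))
    return history

final_food, final_visits = simulate()[-1]
print(final_food, final_visits)  # the two behaviours converge on each other
```

After a handful of interactions the two behaviours are effectively locked together: neither agent "decided" anything, but each has been changed by its history of interactions with the other, which is the flavour of structural coupling the post describes.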

Anyway, that's just one place where autopoietic theory can be used to explain a biological phenomenon. You can try to come up with a definition of communication that isn't self-referential and doesn't rely on the observable consequences of the communication rather than the communication itself, but you'll probably find it quite hard.

Another thing I was thinking (but not for very long), along the lines of autopoiesis being a definition for life, is that it seems to incorporate fire into the sphere of things definable. A previous definition that I liked was life as a "persistent chemical reaction", which also incorporates fire. But this could be wrong, as I haven't thought about it too deeply.

To be honest I'm not an expert on these things, and being asked to explain something I said makes me feel like I'm back at uni.

Anyway, if you can fit these ideas into game AI I'll buy you a beer ;-)

Mike

Geez, I've been missing out on a great thread. I haven't had time to read them all yet, but these are my thoughts on the subject.

We are trying to make a truly intelligent AI. Maybe we should take a different approach? It's like trying to build a computer when the transistor hasn't been invented yet. We don't yet have all the parts and knowledge to accomplish such a task, so we have to take a different route.

So I will put forth that we accomplish this task through an understanding of the evolution of intelligence. We must try to simulate this evolution. At some point in our past, we learned to reason; this is key in the making of a true AI. So how did this come to be, and can we simulate that evolutionary leap?

If we can simulate the evolution of intelligence, then maybe we can make a true AI. But who knows, maybe this approach will take longer than the original.
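The "simulate evolution" idea above is usually sketched as a genetic algorithm: a population of genomes evolved by selection and mutation. The toy below evolves bit-strings toward a fixed target, which is the simplest possible stand-in; a real simulation of the evolution of intelligence would need an open-ended environment, not a fixed fitness target. All the parameters here are arbitrary illustrative choices.

```python
import random

TARGET = [1] * 20  # a stand-in "ideal" genome; real evolution has no fixed target

def fitness(genome):
    # Count positions where the genome matches the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def evolve(pop_size=50, generations=100, mutation_rate=0.02):
    # Random initial population of bit-string genomes.
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]          # keep the fitter half
        population = [
            [1 - g if random.random() < mutation_rate else g   # per-bit mutation
             for g in random.choice(parents)]
            for _ in range(pop_size)
        ]
    return max(population, key=fitness)

best = evolve()
print(fitness(best))  # usually at or near len(TARGET) after 100 generations
```

Even this crude loop reliably climbs to a near-perfect genome, which shows why the approach is tempting; the hard part, as the post says, is replacing the fixed fitness function with a world rich enough that reasoning is what gets selected for.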

Guy
Adulthood is the vehicle for making your childhood dreams come true.
Yeah, this has been an interesting topic indeed, but it's not too late for you to catch up on it :D

Well, my opinion is that we cannot make an artificial intelligence as "good" as a human's, with all its features and all that, before we do our processing in another way. I don't think a human brain's thinking consists of 1's and 0's..
I'm not going to write more right now; I'm just tired and everything would just be a total mess. Anyway, you know my main argument against Real AI =)

Kenneth Wilhelmsen
Try my little 3D Engine project, and mail me the FPS. WEngine
--------------------------
He who joyfully marches to music in rank and file has already earned my contempt. He has been given a large brain by mistake, since for him the spinal cord would fully suffice. This disgrace to civilization should be done away with at once. Heroism at command, senseless brutality, deplorable love-of-country stance, how violently I hate all this, how despicable and ignoble war is; I would rather be torn to shreds than be a part of so base an action! It is my conviction that killing under the cloak of war is nothing but an act of murder.

This topic is closed to new replies.
