
Anything like a "real" AI.

Started by November 03, 2001 01:36 PM
92 comments, last by kenwi 23 years, 3 months ago
What does this have to do with "Game AI", and why would I want my agent to think about "nothing" when it's about to get flanked by a group?
Timkin said: If you throw the balance out by just a small amount, the system collapses either into chaos (leading to various mental disorders of varying degrees) or to a shutdown of the control processes of the brain and ultimately death.

Just to add to this point, there's another way of viewing it. Of course the brain can only survive in a small, constrained set of physical environments (i.e. that created by our own skull), but the brain is a largely homeostatic, ultrastable piece of wetware. You can be born into a huge range of physical and social environments that affect the development of the brain. The brain rearranges its functioning to deal with its input and produce behaviour while trying to keep certain bodily (including brain) parameters within boundaries: things like happiness, depression, pain, pleasure etc.

Should you be thrown from one social environment (say New York City) into another (such as the African plains), you can adapt to the social constraints formed by the society you join, as well as to the flora and fauna relevant to your survival. All these internal adaptations break down when the environment doesn't allow you to keep your internal parameters within boundaries (situations of physical or mental abuse, or being trapped alone on a desert island where social interaction is prevented). At these points the brain, attempting to adapt and finding no place of equilibrium within which the parameters are conserved, breaks down and mental illness kicks in. Mental illness is merely a state your brain enters to deal with a given environment in the best possible way.

What I'm trying to say is that the brain is very robust and can deal with the world and a plethora of environments, even those that take it to its limit. The brain may be carefully balanced, but that balance is maintained by the brain itself, even under significant perturbations from the environment.

Mike
MikeD, what are your credentials for discussing the brain? Nothing personal, but it sounds like you are making this up. If you have a degree in brainology or something, I'll let it go; but some of the things you said aren't true.
Computers can be intelligent... but CPU speeds won't allow it in real-time yet. Things like pattern recognition could/would take a long time on a computer. Think about everyday life... you see a dog, and you instantly recognize it as a dog based on other dogs you've seen. A computer could do the same thing, but try writing a program that checks images against other images for similarities... now realize that the dog can be in any position in a single frame, and of any size. It makes recognition much harder and much more intensive. You can make a program which sorts through images of its past and finds patterns, but it'd take a LONG time to work. Now add things like sounds, smells, touch, etc., and you see why computers of today can't do "real" AI. Transmeta is the closest I've seen to good AI so far: it is constantly figuring out ways to optimize the instructions it's given so they run more efficiently, and the coders didn't have to program everything into it; it "learns" the programs you run and optimizes itself for them.
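To see why position alone blows up the cost, here's a minimal sketch (not a real vision system) of brute-force template matching: slide a small template over an image and score every offset by sum of absolute differences. Handling scale as well would add yet another loop over resized templates. The image data here is made up for illustration.

```python
def sad(image, template, top, left):
    """Sum of absolute differences between the template and an image patch."""
    th, tw = len(template), len(template[0])
    return sum(
        abs(image[top + r][left + c] - template[r][c])
        for r in range(th) for c in range(tw)
    )

def best_match(image, template):
    """Try every offset; return (top, left, score) of the best one."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best = None
    for top in range(ih - th + 1):          # every vertical position
        for left in range(iw - tw + 1):     # every horizontal position
            score = sad(image, template, top, left)
            if best is None or score < best[2]:
                best = (top, left, score)
    return best

# Toy 4x4 "image" with the 2x2 "dog" hiding at row 1, column 1.
image = [
    [0, 0, 0, 0],
    [0, 9, 8, 0],
    [0, 7, 9, 0],
    [0, 0, 0, 0],
]
template = [[9, 8], [7, 9]]
print(best_match(image, template))  # -> (1, 1, 0): exact match, zero difference
```

Even this tiny case does template-sized work at every offset; on a full-resolution frame, across all scales, the cost the poster describes adds up fast.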

Billy
I don't believe that it is possible to create real "AI" with the programming languages of today. I know that this is going to upset a lot of AI fanatics who think that carbon-based life forms are just another step in evolution, but let's face it: we don't know even a scrap of how our own brain works, so how could we then design a real AI? If anyone can give me a real idea of how to implement some of these examples, I'd be most... impressed =)

Can you make an AI see a word in a foreign language and understand its meaning without translating it to its own native language? (As I do with English and Swedish most of the time.)

You can tell a computer that it is a Computer, that its name is AZ12124we, and that it's located in Göteborg, Sweden. You can tell the computer where Göteborg is on a map, but can you make a computer understand why or where Göteborg is, and not simply know it as a fact?

Can you make a computer experience real feelings, not just express them but experience them? Not program it to show affection or love, but make a computer truly love someone?

And in the end, can you make a computer that from the beginning is a single MB of information contained inside a chipset, and have it grow and evolve into a full-grown K7 1,533? (OK, really really badly illustrated, but you get the idea... don't you? =)

I could go on forever, but I think that I've pretty much made my position clear. I know that a lot of people won't agree with me on this and that some probably will. And that's the beauty of life and nature: we're all different.

// The Shadows
Fredrik Nilsson - Rogue Software
Sorry 'bout my bad English, I'm not of this world
Well, Anonymous poster #1, I won't hesitate to question your credentials in return. If you 'know' what I'm saying isn't true, please cite some references.

My personal references are ideas first put forward in AI terms by the 1950s cyberneticist W. Ross Ashby. Look up Ashby and his homeostat on the web; there are places that will give you plenty of information on both. For a more modern homeostatic experiment, showing internal homeostasis maintaining behavioural homeostasis, look at Ezequiel Di Paolo's "Homeostatic adaptation to inversion of the visual field and other sensorimotor disruptions." You can find it on his homepage at www.cogs.susx.ac.uk/users/ezequiel/ under publications. It's a fantastic piece of work. This theory, of internal stability ontogenetically adapting an individual to maintain internal variables within parameters and thereby producing behavioural ultrastability, is about the only good generic theory of lifetime learning I've read about that cannot be quickly pulled apart. But read the papers before we discuss it.

As to my qualifications: I've got a BSc in Computing with AI and a Masters in evolutionary and adaptive systems (basically the study of theoretical biology, artificial life and adaptive behaviour). No PhD, I'm afraid, but not a complete fool either. If you were interested in my knowledge of neurophysiology (or brainology, if you prefer) you could read my Masters dissertation, entitled "Homeostatic mechanisms in spiking neurons and spike-time dependent plasticity". I may not have particularly advanced knowledge of neural anatomy, but I know enough to gain a distinction at Masters level.

If you do know more than me on any small part of what I was saying and care to put an argument against it, please be my guest. I'm always willing to learn or be proved wrong, and to move on to a theory superior to my previous 'best guess'.

Until then,

Mike
I think it is possible to create "true" AI. Though you might need to understand the goals and limits correctly.

We evolve by "survival of the fittest". We learn by experiencing the world, and interacting with people. The better we live (more food, more comfort), the more "effective" we are.

But what would count for my PDA-AI? Its only goal in life would be to assist me.

Emotions can be added pretty swiftly. When I'm happy with the way she works, she is happy. When I show that I'm angry for some reason, the AI would logically become sad/curious.

Understanding what I say: well then, we would need a way for her to store information on an understanding, rather than a fact, basis. Try to analyze things by breaking them down: desk => wood => tree => plant, plant => organic, organic => alive, alive => ???; day => time => ?. My point? There are a few words we can't really explain. They are abstract. Take "alive": something is either alive, or it isn't. Something can be hard or soft. Time just exists, period. So does space.

How to translate this into a way for the AI to understand? Well, I think that might be pretty hard, because most of these concepts are irrelevant to the AI. Alive or dead is pretty much useless in a PC. Perhaps we should just hard-code these concepts: they just exist. Then everything that the AI learns/encounters could be stored in a node, with links to its basic concept. Example: hour would lead to Time (abstract) and Day and Second. Day would lead to Time (abstract) and Earth and Sun.
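The node-and-link idea can be sketched in a few lines. This is only an illustration of the poster's scheme, with the primitives and links made up from the examples above (not any established ontology): a handful of concepts are hard-coded, and every learned concept links down until it bottoms out at a primitive.

```python
# Hard-coded "abstract" primitives, per the suggestion above.
PRIMITIVES = {"Time", "Space", "Alive"}

# Each learned concept is a node linked to other nodes or primitives.
links = {
    "Hour":   {"Time", "Day", "Second"},
    "Day":    {"Time", "Earth", "Sun"},
    "Second": {"Time"},
    "Earth":  {"Space"},
    "Sun":    {"Space"},
}

def grounds_of(concept, seen=None):
    """Follow links from a concept until only primitives remain."""
    seen = set() if seen is None else seen
    if concept in seen:            # guard against circular links
        return set()
    seen.add(concept)
    if concept in PRIMITIVES:
        return {concept}
    found = set()
    for linked in links.get(concept, ()):
        found |= grounds_of(linked, seen)
    return found

print(sorted(grounds_of("Hour")))  # -> ['Space', 'Time']
```

So "Hour" ultimately grounds out in Time (directly, and via Day and Second) and Space (via Earth and Sun): every concept the AI stores is anchored to something hard-coded.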

Oookaaaay.... This sounds pretty fuzzy me thinks.......
TIME and ALIVE and DEAD are tantamount to common-sense reasoning and basic philosophical understanding. They can be broken down. It just takes work.

TIME, for example:

Break it down into relationships such as during, before, after, duration, dynamicism, processes, etc. If I asked you at what moment Christopher Columbus set foot on the shores of the New World, you likely wouldn't know. But you can still reason effectively about this moment. Here's why: you know it happened before George Washington was born. You know it happened after Cleopatra died. You know it happened in the late 15th century. You know the 15th century came before the 16th century. You know anything you did didn't happen in the 15th century. This is one way that we organize time in our brains. We don't use explicit numbers or sequenced arrays. We use relationships.
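The relational view above can be made concrete with a toy example: store only "X happened before Y" facts and derive new orderings by chaining them transitively; no dates ever appear. The event names and facts are just the ones from the paragraph above.

```python
# Known "before" facts as (earlier, later) pairs -- no dates anywhere.
before = {
    ("Cleopatra dies", "Columbus lands"),
    ("Columbus lands", "Washington is born"),
}

def happened_before(a, b, facts):
    """True if a chain of 'before' facts leads from event a to event b."""
    if (a, b) in facts:
        return True
    # Transitivity: a before x, and x (eventually) before b.
    return any(x == a and happened_before(y, b, facts) for x, y in facts)

print(happened_before("Cleopatra dies", "Washington is born", before))  # -> True
print(happened_before("Washington is born", "Cleopatra dies", before))  # -> False
```

The system "knows" Cleopatra died before Washington was born even though no such fact was stored, which is exactly the kind of reasoning the poster describes doing without ever knowing the actual moment.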

_______________________________
"To understand the horse you'll find that you're going to be working on yourself. The horse will give you the answers and he will question you to see if you are sure or not."
- Ray Hunt, in Think Harmony With Horses
ALU - SHRDLU - WORDNET - CYC - SWALE - AM - CD - J.M. - K.S. | CAA - BCHA - AQHA - APHA - R.H. - T.D. | 395 - SPS - GORDIE - SCMA - R.M. - G.R. - V.C. - C.F.
quote:
Computers can be intelligent... but CPU speeds won't allow it in real-time yet. Things like pattern recognition could/would take a long time on a computer.
I read somewhere (maybe in 'DOS For Dummies' some years back?) that the human brain putters along somewhere between 20 and 40 MHz (to the best I can remember), compared to computers that were an incredible :p 166 MHz at the time I read this. What's the difference that I see as to why we can think at 20 MHz and a computer cannot at 1.5 GHz? Well (ignoring the fact that GHz is just the clock speed and not the calculations per second, since it won't matter much because of the design), most computers process only a single thing at a time, whereas a neural net like our brain can process many different things at once.

I doubt computers will ever achieve practical-speed human-emulating AI on silicon alone. However... with a 30-qubit quantum computer, I'd guess it would be a possibility.

quote:
You can tell a computer that it is a Computer, that its name is AZ12124we, and that it's located in Göteborg, Sweden. You can tell the computer where Göteborg is on a map, but can you make a computer understand why or where Göteborg is, and not simply know it as a fact?
Understand why of what? I live in the USA because that's where I was born. The computer could easily know that it lives in Sweden because that is where it was installed. How is this any more than a fact? What's in a name? Nothing. It's just a word we learn to associate with a place or an object (in Göteborg's case, the area of Göteborg). What is there to understand except that you are there?

quote:
Can you make a computer experience real feelings, not expressing but experiencing them? Not programming it to show affection or love, but making a computer truly love someone?
Can you tell a difference between what it shows and what it feels? If you can't, does it matter? Would it be correct to say that its outward expressions are different from ours? I can't be 100% sure that anybody else besides me has the feelings that I do, but I assume they do because of their expressions.

quote:
But what would count for my PDA-AI? Its only goal in life would be to assist me.
I don't consider that a conscious being. Too much like David from AI. No depth, just an infinite loop.

while (!master_served) {
    serve_master();
}


Invader X
Invader''s Realm
Invader: You don't seem to understand what I'm trying to say. The difference between experiencing an emotion and simply showing it off makes all the difference. I mean, if you've ever loved someone, you know that you're totally irrational in all your decisions and reasoning. You can, I guess, make a computer irrational as a response to the triggered function Love(), but I can't see any way to program a computer to actually feel irrational. Let's face it, you can't even make a computer generate a totally random number. It's an impossibility, at least that's what I've heard. The randomize function takes a number from the system timer, and that's hardly random? Or is it?
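The point about timer-seeded "randomness" is easy to demonstrate. Below is a linear congruential generator, the scheme behind many rand() implementations (these particular constants are the well-known Numerical Recipes ones): seed it with the same number twice and you get the identical "random" sequence, which is why implementations seed from the system timer in the first place.

```python
from itertools import islice

def lcg(seed):
    """Yield an endless stream of pseudo-random 32-bit values."""
    state = seed
    while True:
        # Classic LCG step: state = (a * state + c) mod 2^32
        state = (1664525 * state + 1013904223) % 2**32
        yield state

run_a = list(islice(lcg(42), 5))
run_b = list(islice(lcg(42), 5))
print(run_a == run_b)  # -> True: same seed, same "random" sequence
```

The numbers are entirely determined by the seed, so the only unpredictability comes from whatever seeds it (the timer), not from the generator itself.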

// Shadows
Rogue Software

This topic is closed to new replies.
