
True AI

Started by March 27, 2002 04:02 PM
80 comments, last by zzzomed 22 years, 6 months ago
quote: Original post by Anaton
Original post by zzzomed
If we take away how the human body physically works, then we will better understand how the AI will come to the same result in forming its words, or anything else that it has to do.

Please satisfy my curiosity regarding this: Are you stating that if we remove the physical aspects of how the human body acts then we can better understand how the AI will come to the same results? I'm a bit confused by your statement here.



Ok, I am saying that if you mentally remove exactly how the human body works (i.e., chemical reactions, etc.), then you will understand how the program will interpret our actions and what it will do in its attempt to mimic them. I am not saying that how our conversations work is random, I am just saying that the AI program will interpret it as having some random aspect to it.

My email is ruai@comcast.net

[edited by - zzzomed on April 28, 2002 7:25:50 PM]
quote: Original post by zzzomed
I am not saying that how our conversations work is random, I am just saying that the AI program will interpret it as having some random aspect to it.


Okay, I think I see where you're coming from, zzzomed... unfortunately I disagree with you... and again I believe you should be using the word unpredictable, rather than random.

What am I going to say next (after this sentence is over)?

For that matter, what are you going to say in response to that last sentence?

The answer to the former might have been a short treatise on how we believe conversations are planned, but it wasn't. It was a question for you!

Certainly, there is a field within AI research that concerns itself with planning for conversation. It's a very interesting field. One of the beliefs held by many of these researchers is that conversations are, at least in part, predictable. This doesn't mean that every word can be predicted, but rather that given the current topic, the speakers and the previously spoken dialogue, one can predict the direction of the conversation and choose words/sentences accordingly so as to direct the conversation. If one knew how each other's thought processes were directed, then one could certainly predict what each was going to say.
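As a rough illustration of that kind of partial predictability (a toy of my own, not anything from the conversation-planning literature), here is a bigram model that guesses the next word from the previous one; the tiny corpus is invented:

```python
# Toy illustration: a bigram model that predicts the most likely next word
# given the previous one, showing how prior dialogue makes the continuation
# partly predictable rather than random.
from collections import defaultdict, Counter

def train_bigrams(sentences):
    """Count which word tends to follow which."""
    counts = defaultdict(Counter)
    for sentence in sentences:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = counts.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

corpus = [
    "what is for supper tonight",
    "what is the plan for tomorrow",
    "the plan for supper is pasta",
]
model = train_bigrams(corpus)
print(predict_next(model, "what"))   # -> "is"
print(predict_next(model, "for"))    # -> "supper" (seen twice vs once)
```

The model never knows exactly what will be said, but given the prior word it can rank continuations, which is all "partly predictable" requires.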

I believe, zzzomed, that in this conversation situation, you would say that the words used by someone in the conversation appear random to someone else in the conversation (or to someone outside the conversation listening in). Look back at the definitions I provided in my previous post. Wouldn't you agree that it is more likely that there IS some (set of) causal rule(s) that governs the words we choose, based on our life experiences, the language we speak, the house we grew up in, our friends and their patterns of speech? Certainly, we could not hope to know all of this information so that we could predict what was going to be said... thus I would suggest that words spoken in conversation are often unpredictable (to varying degrees), rather than random.

So let's assume that we agree words are unpredictable. Does this mean we need to build an AI that chooses its words at random, so that they appear unpredictable? No, certainly not. We simply need to build an AI that embodies the complexity of thought processes that we embody. Then you should expect to hear an apparently 'intelligent' (non-mechanistic) conversation.

While random certainly (by definition) means unpredictable, unpredictable does not (and should not) mean random. Clearly though, this is the scapegoat used by many games programmers when they want unpredictability... they use (pseudo-)random numbers. (In actuality, because the numbers are not truly random, they are merely unpredictable: the algorithm used to generate them and the prior information (the seed) are not known by the game player!)
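A minimal sketch of that last point, using Python's standard random module: the stream looks arbitrary to the player, but anyone who knows the generator and the seed can reproduce it exactly.

```python
# Pseudo-random numbers are unpredictable only to someone who does not know
# the algorithm and the seed; with both, the "random" sequence is fixed.
import random

def roll_dice(seed, n=5):
    rng = random.Random(seed)                # same algorithm, same seed...
    return [rng.randint(1, 6) for _ in range(n)]

print(roll_dice(seed=42))                        # looks arbitrary to the player
assert roll_dice(seed=42) == roll_dice(seed=42)  # ...but is exactly reproducible
```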

Anyway, feel free to challenge my assertion that you would say it is random, rather than unpredictable... just please offer some supporting argument for randomness over unpredictability so that I have something to dissect and consider.

Cheers,

Timkin

[edited by - Timkin on April 28, 2002 8:12:41 PM]
Why can't human-like AI just be so complex that it is also unpredictable? When this is able to happen, I think, that's when human-like AI will come into existence...

just my 6.283185307 cents...

Tazzel3d ~ Dwiel
Very simply, because we don't have the hardware capable of doing it... yet.
quote: Original post by Timkin

I believe you should be using the word unpredictable, rather than random.

What am I going to say next (after this sentence is over)?

For that matter, what are you going to say in response to that last sentence?

The answer to the former might have been a short treatise on how we believe conversations are planned, but it wasn't. It was a question for you!


Ok. It is true that what you are going to say next is not really random but just unpredictable.

As for what you are going to say next, that is usually just a response to the previous sentence. What I am talking about is when you finish a topic and bring up a new one (e.g., after talking about what happened at work, what is the chance of you starting to tell jokes compared to asking what is for supper?). This is generally unpredictable, and would probably be best reproduced by a computer as somewhat random (if anything, it is more easily produced as random). On the other hand, it cannot be just random, there has to be some logic behind it. People do not just have random thoughts, somehow they put some kind of logic to back up the rationality behind the thought. If the True AI program does not at least have a random aspect to open up another conversation, you get nowhere, and the human gets bored of the program just responding to him/her. On the other hand, the program can at least make an attempt and start up a conversation with the human, even if the topic is just randomly picked.
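A hypothetical sketch of what "random, but with some logic behind it" might look like; the topics, weights and context keys are all invented for illustration, not anything zzzomed specified:

```python
# The program opens a new topic "somewhat at random", but the draw is
# weighted by simple contextual logic rather than being uniformly random.
import random

def pick_next_topic(context, rng=random):
    topics = {"tell a joke": 1.0, "ask about supper": 1.0, "talk about work": 1.0}
    # Crude "logic": bias the draw using what just happened in the conversation.
    if context.get("just_finished") == "work":
        topics["talk about work"] = 0.1       # don't immediately reopen it
    if context.get("time_of_day") == "evening":
        topics["ask about supper"] = 3.0      # supper is more plausible now
    names = list(topics)
    weights = [topics[t] for t in names]
    return rng.choices(names, weights=weights, k=1)[0]

print(pick_next_topic({"just_finished": "work", "time_of_day": "evening"}))
```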

quote: Original post by Timkin

So let's assume that we agree words are unpredictable. Does this mean we need to build an AI that chooses its words at random, so that they appear unpredictable? No, certainly not. We simply need to build an AI that embodies the complexity of thought processes that we embody. Then you should expect to hear an apparently 'intelligent' (non-mechanistic) conversation.


I agree with you (but only in part). The True AI program should not just choose its words at random, nor should it always choose the most proper and defining words. You do not go around calling a political dispute "antidisasstablishmentinternism" (a real word only in the sense that it is a bunch of prefixes and suffixes, so it will not be in just any dictionary); instead you just say "political dispute". If you do use that word, everyone else will not know what you are talking about and will just change the subject. On the other hand, if you let the program write its own programming for itself, then it will know what words to choose, where to use them, and who to use them around (e.g., in a job interview you will use somewhat larger and more dignified words than you would with a child or a friend). This is one of the questions that I do not think we can answer until we let the True AI figure it out for itself and then download the program to look at it. The True AI program will figure out what to do and how to do it in its own way, but I don't think we can know how it will do that until we see what it has made of itself (I know that is somewhat redundant).

My email is ruai@comcast.net
This also might help some people to understand what I am saying:

You think with the mind, not the brain.

The mind is defined as (I found this at webster.com): the element or complex of elements in an individual that feels, perceives, thinks, wills, and especially reasons.

The brain is defined as (I also found this at webster.com): the portion of the vertebrate central nervous system that constitutes the organ of thought and neural coordination, includes all the higher nervous centers receiving stimuli from the sense organs and interpreting and correlating them to formulate the motor impulses, is made up of neurons and supporting and nutritive structures, is enclosed within the skull, and is continuous with the spinal cord through the foramen magnum.

I think we should be comparing the True AI program to our "mind" not our "brain".

My email is ruai@comcast.net
Obviously I have got the completely wrong idea here. By talking about 'True AI' you don't mean true intelligence, you want truly artificial intelligence, with emphasis on the artificial.
If this isn't actually what you mean, then why do you insist on saying
"On the other hand, it cannot be just random, there has to be some logic behind it. People do not just have random thoughts, somehow they put some kind of logic to back up the rationality behind the thought."
and then in the next sentence you go and completely contradict yourself by saying
"If the True AI program does not at least have a random aspect to open up another conversation, you get nowhere, and the human gets board of the program just responding to him/her."
Maybe it's just that my definition of True AI conflicts with yours, where my definition of True AI would be an AI based on the workings of the brain (either animal or human). Your definition must be somewhat different, as you directly say that True AI should use some sort of random generator, rather than the internal thought processes that the human brain uses to get things done.
quote: I did exactly this in my youth and it worked well, although it never worked well enough. Since then we have been trying to do everything from retro-fractal thinking to preminission-algorithms. The best solution to date (that I know of) is the Isil-split method.


This may be from a while back (first topic page), but what exactly is the Isil-split method?
quote: Original post by zzzomed
You think with the mind, not the brain.

...

I think we should be comparing the True AI program to our "mind" not our "brain".


The brain is the organ responsible for thought. The mind is an abstract concept. If we want to make a model of something which can be observed and occurs in the physical world, we must model the brain. From a complete model of the brain one may assume, rather optimistically, that a mind will arise.

If you mean to say that we should model AI at a higher level, as object relationships rather than as neurons (semantic nets vs. ANNs), then you have a valid point. It is in these types of deterministic algorithms that most practical progress is made.
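As a rough illustration of that higher-level, deterministic approach (a toy of my own with invented facts, not any particular system): a tiny semantic network of (subject, relation, object) triples with a simple "is-a" lookup.

```python
# A toy semantic network: facts are triples, and abilities are inherited
# up the is-a chain. Entirely deterministic; no neurons involved.
facts = [
    ("canary", "is-a", "bird"),
    ("bird",   "is-a", "animal"),
    ("bird",   "can",  "fly"),
    ("canary", "can",  "sing"),
]

def isa_chain(thing):
    """Follow is-a links upward from `thing`."""
    chain, current = [], thing
    while True:
        parents = [o for s, r, o in facts if s == current and r == "is-a"]
        if not parents:
            return chain
        current = parents[0]
        chain.append(current)

def can(thing):
    """Abilities of `thing`, inherited from everything it is-a."""
    nodes = [thing] + isa_chain(thing)
    return {o for s, r, o in facts if r == "can" and s in nodes}

print(isa_chain("canary"))  # ['bird', 'animal']
print(can("canary"))        # {'sing', 'fly'}
```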

However, if you truly want to create a complete model of a human mind, it seems one would need a complete model of the human brain. Considering how far neuroscientists still are from achieving this understanding, I think we've got some waiting to do!

In the meantime, use either approach: write new deterministic algorithms, or create new neural network architectures, or, better yet, mix the two. For games, you're more likely to create an opponent which is reliably good using primarily deterministic algorithms.

If you're hell-bent on building a brain, though, go with ANNs, and try to solve the single biggest problem plaguing them: scalability. ANNs, as they stand, don't scale well. If you can do something to help solve this problem, we will be one step closer to reaching AI's Holy Grail.
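A rough back-of-the-envelope sketch (mine, not the poster's) of why fully connected ANNs scale poorly: the weight count grows with the product of adjacent layer sizes, so "just make it bigger" quickly becomes intractable.

```python
# Weight count of a fully connected feed-forward net (biases ignored):
# each pair of adjacent layers contributes (size_a * size_b) weights.
def weight_count(layer_sizes):
    return sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

print(weight_count([100, 100, 10]))      # 11,000 weights
print(weight_count([10000, 10000, 10]))  # 100,100,000 weights
```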
Ahh... this conversation isn't about randomness, it's about whether AI is a function or a relation.

I had to think about this when I was working with NNs.

What the original poster is asserting is that _any_ AI that is a function at its core cannot simulate a person.

What that's saying is that a function has the property that it produces one output per input, while a relation can have more than one output per input.
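A minimal sketch of that distinction (my illustration, not anything from the thread):

```python
# A function maps each input to exactly one output;
# a relation may map the same input to several.
def f(x):                      # function: one output per input
    return x * x

relation = {                   # relation: possibly many outputs per input
    4: {2, -2},                # e.g. "square roots of"
    9: {3, -3},
}

print(f(3))          # always 9, every time
print(relation[4])   # {2, -2} -- more than one admissible answer
```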

But it's a grey area with people. The thing is that people change as they go through time. As the mind exists it changes itself, so you literally can't answer the question of whether a mind is a function or a relation.

I had to explore this topic when I asked myself: now that I have explored the math, understand it, and can code a back-propagating neural network in a few minutes, what in theory could I apply it to?

The answer is that a BP NN in use is a function, so it is limited to one output per input; further, you can't readily tell it to give you multiple. The BP NN in training is a gradient search through a space that converges on a local minimum, and that convergence takes lots of training.
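A tiny sketch (mine; the cost function is invented, and the network is reduced to a 1-D cost for clarity) of that "gradient search that converges on a local min": which minimum you land in depends entirely on where you start.

```python
# Gradient descent on an invented 1-D cost with two minima (x = -1 and x = +1).
def cost(x):
    return (x * x - 1.0) ** 2

def grad(x, h=1e-6):
    return (cost(x + h) - cost(x - h)) / (2 * h)   # numeric derivative

def descend(x, lr=0.05, steps=200):
    for _ in range(steps):
        x -= lr * grad(x)
    return x

print(round(descend(0.3), 3))    # -> about  1.0
print(round(descend(-0.3), 3))   # -> about -1.0 (a different local min)
```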

But other algorithms can potentially be turned into relations even though they are really functions. At the heart of most AI algorithms is a search for a local extremum; if the algorithm is extended so that you tell it you don't just want one local min/max but all the local extrema in a range, then you get a relation.
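A sketch of that extension (again an invented cost, my own toy): sample a range and return every local extremum found, which gives many outputs for one query, i.e. a relation rather than a function.

```python
# Return *all* local extrema of f on [lo, hi], detected on a sampled grid.
def local_extrema(f, lo, hi, steps=1000):
    xs = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    ys = [f(x) for x in xs]
    out = []
    for i in range(1, steps):
        if ys[i] < ys[i - 1] and ys[i] < ys[i + 1]:
            out.append(("min", round(xs[i], 2)))
        elif ys[i] > ys[i - 1] and ys[i] > ys[i + 1]:
            out.append(("max", round(xs[i], 2)))
    return out

print(local_extrema(lambda x: (x * x - 1.0) ** 2, -2.0, 2.0))
# -> [('min', -1.0), ('max', 0.0), ('min', 1.0)]
```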

But what to do with your multiple outputs? You have to reduce them to a single choice, by some function or at random, to apply them to one thing. There really aren't strong uses for AI relations as opposed to AI functions; I suppose the most interesting use might be to apply the results to individuals in a colony, each working on a different result.

