Artificial intelligence: what it is and whether we can ever attain it
Here are a few thoughts I've been having about AI: what is it we're really trying to achieve and whether it's possible. Posting here to gather my thoughts and have them torn apart.

I'll start with two definitions, one for intelligence:

intelligence (n.) The capacity to acquire and apply knowledge. The faculty of thought and reason. Superior powers of mind. See Synonyms at mind.

And one for knowledge:

knowledge (n.) The state or fact of knowing. Familiarity, awareness, or understanding gained through experience or study. The sum or range of what has been perceived, discovered, or learned.

So, nice, I've copied and pasted some information from www.dictionary.com; I'm sure you're very proud of me. Why do these definitions matter? Intelligence is derived from the state of knowledge and understanding and the process of reasoning over that knowledge. Knowledge is acquired through the process of experience or learning. All experience or learning occurs through the act of interaction and the changes caused by that interaction. Interaction leads to change; we call that change experience; that experience leads to knowledge, which can be measured as a change in behaviour. (Abstract reasoning is covered by this chain of events as the interaction of an entity's brain with itself.)

To reduce further, you could say: interaction leads to experience, which leads to a change in behaviour. Or: interaction leads to internal change, which leads to external change.

Where does this leave intelligence? It leaves it in the position of being a metric we use to measure the complexity of behaviour and the effects of interaction on behaviour, i.e. how experience changes us.

So how can you create an artificial intelligence? Well, what do you mean? "How can you create an artificial entity with a domain of interactions with an environment, whose behaviour is of measurable complexity and is altered by its interactions with its environment"? If that's what you mean, then we already have. If you want some references I'll dig up some papers but, as a thought experiment, imagine a minimal simulation or experiment which contains an entity that matches the above criteria. It's not difficult (there's a rough sketch at the end of this post).

Or did you mean "How can you create an artificial human being, with the same complexity of behaviour, capacity for adaptation and understanding of its world as a human being"? If that's what you meant, then we can't. Why not? Well, we can clearly make a machine of arbitrary complexity; that's no issue. I also have no doubt that we can create a machine with as much adaptability as a human being. The only reason that should not be possible is if you believe we are more than just machines, or if you think there is something special about the chemistry we are constructed from: something so special that you couldn't model it in a computer of "suitable power". If you do, then I'd like to ask why (beyond spouting "Quantum processes, it's the nano-tubules guv'nor and we can't simulate them, honest", which always feels like an argument of the form "but we must be special, else... else... I don't feel special and I like feeling special, can't we just pretend we have souls?").

The problem I have is that of "understanding". Understanding is acquired from experience, and experience is formed from interactions with an environment and the changes those interactions cause. By that definition your understanding is regulated by the structure of your being: of your inputs, internal processes and outputs (as much as you like to arbitrarily delineate such processes from each other or from the act of "being"). Change the structure and you change the domain of interactions and the domain of perturbations (how you interact and how you are changed by such interactions). So your understanding is literally that: it is "your understanding". My understanding of "trees" is caused by my every interaction with entities I choose to label "tree". Your understanding of trees is qualitatively different from mine because of the differences in our structures and the necessary differences in our experiences: being at different points in space-time for an otherwise similar experience, for instance. A dog's understanding of a tree is, again, qualitatively different from our understanding, as it is from every other dog's understanding.

There are invariants, which can be measured purely as invariants in behaviour, since no other metric can tell us anything with any certainty about the similarities or differences in our understanding; which is why anyone who thinks about other beings for any amount of time comes to the conclusion that they cannot be certain of the realness of anyone but themselves.

So, for a computer to have understanding of trees, it must have experience of trees, which is mediated by its being and defined by its structure and its domains of interactions and perturbations. To have human understanding you need human inputs, human thought processes, human outputs, made from the same chemistry as a human being, else it is computer inputs, computer thought processes, computer outputs forming computer understanding (and only for that specific computer). So human understanding is impossible for a computer, without that computer being of form and function as a human being, which would get us absolutely nowhere. In the end, we both have artificial intelligence and we will also never achieve it.

To apply this paradigm to problems in AI, such as John Searle's Chinese Room: the man in the room understands Chinese to the point of understanding his environment, the inside of a room filled with Chinese-symbol input, a book of thought processes and Chinese-symbol output. He does not understand Chinese as a Chinese speaker does, but he understands it through his interactions with its symbols, in a way no Chinese speaker would. That a person talking to the room might measure the intelligence of the room by their interaction with it, and come to the conclusion that the room speaks Chinese in the same way that he speaks Chinese, is unimportant. I no more know that someone replying to this post understands the ideas in the same way I do; in fact, I have guaranteed that they do not (to some degree of understanding). All I can guarantee is that their behavioural response shows some kind of understanding that is different to mine but has some invariants, based on the similarity of the behavioural invariants. And that's all any of us can guarantee, ever.

And that's my thinking over for today.

Mike
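For the thought experiment above, here is a rough sketch of the kind of minimal entity meant (purely illustrative; the actions, numbers and names are made up, not taken from any paper): something with a tiny domain of interactions whose internal state, and therefore later behaviour, is changed by the effects of those interactions.

```cpp
// A minimal, purely illustrative "entity": two possible actions, an internal
// state changed only by interaction, and behaviour that measurably changes
// as a result of that experience.
#include <cstdio>

int main() {
    double preference[2] = {0.5, 0.5};   // internal state: how much each action is favoured
    const double learningRate = 0.1;

    for (int step = 0; step < 50; ++step) {
        // Behaviour: act according to the current internal state.
        int action = (preference[0] >= preference[1]) ? 0 : 1;

        // Environment: in this toy world, only action 1 produces a useful outcome.
        double outcome = (action == 1) ? 1.0 : 0.0;

        // Experience: the interaction changes the internal state...
        preference[action] += learningRate * (outcome - preference[action]);

        // ...and the internal change shows up later as a change in behaviour.
        if (step % 10 == 0)
            std::printf("step %2d: chose %d, preferences now %.2f %.2f\n",
                        step, action, preference[0], preference[1]);
    }
    return 0;
}
```

The entity starts out preferring action 0, its interactions change its preferences, and its observable behaviour shifts accordingly: interaction, internal change, external change.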
Quote:
Original post by MikeD
...
...
...
So, for a computer to have understanding of trees, it must have experience of trees, which is mediated by its being and defined by its structure and its domains of interactions and perturbations. To have human understanding you need human inputs, human thought processes, human outputs, made from the same chemistry as a human being, else it is computer inputs, computer thought processes, computer outputs forming computer understanding (and only for that specific computer).
So human understanding is impossible for a computer, without that computer being of form and function as a human being, which would get us absolutely nowhere.
In the end, we both have artificial intelligence and we will also never achieve it.
...
...
...
Mike
Interesting thoughts on AI...I think you have the basics down pretty well. I know even less about it than you do...but I am an interested layman nonetheless.
This article may interest you... and further your understanding of what we are trying to accomplish... and whether we SHOULD even try:
http://www.cs.usu.edu/~degaris/artilectwar2.html
--------------------------------------------------
I see the pursuit of AI not as trying to make an AI being that is "like us"... I think that is impossible.
I do think that it is possible for a machine to SIMULATE a human response, and be similar to us in many ways. I want to create AI to serve humanity. To use it as a tool... not a replacement for humans.
For example... imagine how much better our synthesis of information would be if we had better recall of the original facts.
To use an example: the character "Data" from Star Trek TNG. He was able to record and recall each moment of his life with perfect clarity... down to the minutest detail.
Not being able to build a computer to reproduce a human is a good thing (but I bet it could actually be done), because I don't want computers to be human; I would like them to do things for me (happy slaves vs. members of society).
Very insightful and thought-provoking, MikeD. You get the Cheerio of the Day Award. (Well, a ++ rating from me anyway.)
Homo sapiens are simply a failed experiment of a superior race to see how they would resolve differences caused by spatial separations and genetic makeup.
Sorry, I would have a more insightful reply, but I am getting very tired. Maybe tomorrow.
-0100110101100011010000110110111101111001
Quote:
(and only for that specific computer).
Not at all. It would only take one computer / system to achieve "conscious intelligence" and then we could copy that over to other systems giving them the exact same intelligence.
You made a good point with the example of a "tree." The word "tree" in and of itself has no meaning beyond what we give it. A "tree" in any language is just a label of the concept of "tree." For AI to understand what a "tree" is, the system would have to "experience a tree."
But how does one experience a tree? The dictionary defines a tree as:
Quote:
1. A perennial woody plant having a main trunk and usually a distinct crown.
2. A plant or shrub resembling a tree in form or size.
3. Something, such as a clothes tree, that resembles a tree in form.
4. A wooden beam, post, stake, or bar used as part of a framework or structure.
5. A diagram that has branches in descending lines showing relationships, as of hierarchy or lineage: a family tree; a telephone tree.
6. Computer Science. A structure for organizing or classifying data in which every item can be traced to a single origin through a unique path.
Awesome. This means absolutely nothing to someone who doesn't understand (or "know") the concept of a tree. How do we learn what a tree is? What is the underlying concept of a tree? From the definition above, the word "tree" can have many different meanings depending on what concept we're talking about.
So, how in the world do we ever really "know" what we're talking about? I have no idea. I have ideas, like everyone else, but I don't know how to go from "my concept of how we learn and understand" to a theory that can be coded in a language such as C++.
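As a toy illustration of that gap (a made-up sketch, not anyone's real system): the computer-science sense of "tree" from the definition above is trivial to code in C++, yet nothing in the code holds the concept of an actual tree beyond the label we choose to attach.

```cpp
// A toy "tree" in the computer-science sense: every node can be traced back
// to the single root through a unique path. The label "tree" on a node is
// just a string we attached; it carries none of the botanical concept.
#include <iostream>
#include <memory>
#include <string>
#include <vector>

struct Node {
    std::string label;                            // a symbol, nothing more
    std::vector<std::unique_ptr<Node>> children;  // each child has exactly one parent
};

int main() {
    Node root;
    root.label = "plant";

    auto tree = std::make_unique<Node>();
    tree->label = "tree";
    root.children.push_back(std::move(tree));

    auto shrub = std::make_unique<Node>();
    shrub->label = "shrub";
    root.children.push_back(std::move(shrub));

    // The program can store and print the labels, but storing the string
    // "tree" is not experiencing, knowing or understanding a tree.
    for (const auto& child : root.children)
        std::cout << root.label << " -> " << child->label << '\n';
    return 0;
}
```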
And then, even if I could create a system that is capable of "learning," how will I ever know that the system is aware of itself? In fact, is being able to learn and understand concepts the same thing as being aware of one's own existence? How can we even be sure that other people are aware of their own existence?
A professor once told me to show him my mind. After thinking about it for a few moments, I realized what he was asking was quite impossible. Someone could take my brain out of my head, set it on a table, and say "look, here is Stephen's mind." But is that my mind? No, that's just a big mess on somebody's table. To get to my point, even if AI can show the characteristics of "self-awareness," how can we prove that it is indeed aware of itself?
And is being aware of oneself the same thing as intelligence? I had a dog once who would stare at himself in the mirror for hours on end. What was going on in his mind? I'd often spend hours watching him watch himself, just wondering what it was that he was thinking about. Did he understand that the "dog in the mirror" was, in fact, him? If he realized this, did he think about it after looking away from the mirror? What about when he saw me looking at my own reflection in the mirror? Did he put the two together and become aware of himself?
I love AI discussions! I need to get a few books on the subject and really start learning more about it. Perhaps the questions I have have already been asked, and answered.
Mike! It's great to see you around here again! It's been quite some time! Merry Christmas and a Happy New Year to you!
Quote:
Original post by Tom Knowlton
Interesting thoughts on AI...I think you have the basics down pretty well. I know even less about it than you do...
ROFLMAO: Heheh... no offence intended, but one could not accuse Mike of not knowing much about AI. He has a postgraduate education in AI from one of Britain's leading schools in that area; he's a member of the AI Interface Standards Committee, the body undertaking the task of developing a common interface standard for AI for the computer games industry; and he works for Lionhead Studios and has worked on leading titles like Fable... so all in all, I'd say Mike knows exactly what he's talking about!
Personally, I think that's a very good post Mike. It's somewhat aligned with my own thoughts on the matter, but there are some fundamental issues (differences of opinion) I have with it. I've started to put some down on paper, but my wife is hassling me to get our daughter off to sleep for an afternoon nap. I'll try and put something up tomorrow.
Cheers,
Timkin
I think we will know what an AI is once we explore the worlds beyond our universe. It could turn out that an AI might be an abstract entity oblivious to its environment. If we want to create human-like AI then I think it makes sense to develop it along the same lines as we developed: put it inside an environment and teach it to survive in it, to be shaped by it and ultimately conquer it. Has anyone thought about how we will change in the future? We are not going to look like we do today, I don't think. Who knows? We might become immobile, lose legs and hands, and instead develop connections with other brains to pursue greater things in search of more knowledge. Most of us are already glued to our TVs or computers.
What you would need is an external representation, then an internal representation.
The external representation is the world. For example, if you want to show it what a tree is, you modify its external representation to include one, then you say "Yep, this is a tree".
The internal representation is the unit's perception of its environment.
It is the entirety of its existence; it knows nothing that is not inside its internal representation, and that is just a function of the external environment.
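A rough sketch of that split, with names invented purely for illustration: the world holds the external representation, and the unit's internal representation is whatever its perception function produces from it, so it really is just a function of the external environment.

```cpp
// Illustrative split between an external representation (the world) and an
// internal representation that is only ever a function of it.
#include <iostream>
#include <map>
#include <string>

// External representation: the world as it "really" is.
struct World {
    std::map<std::string, double> objects;  // e.g. "tree" -> distance from the unit
};

// Internal representation: whatever the unit's perception produces.
// The unit knows nothing that is not in here.
struct Percept {
    bool treeVisible;
};

Percept perceive(const World& world) {
    auto it = world.objects.find("tree");
    // The unit only "sees" a tree if one exists and is close enough.
    return Percept{ it != world.objects.end() && it->second < 10.0 };
}

int main() {
    World world;
    world.objects["tree"] = 25.0;                       // a tree exists, but far away
    std::cout << perceive(world).treeVisible << '\n';   // 0: not part of the unit's world

    world.objects["tree"] = 3.0;                        // modify the external representation...
    std::cout << perceive(world).treeVisible << '\n';   // 1: "Yep, this is a tree"
    return 0;
}
```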
What is understanding?
If I were to say that I understand CDs, what would I have to have done to say that truthfully?
From my point of view, when you understand something, you understand how it interacts with other things and how its constituent parts interact to do the things that it does; you probably know how to control it and use it as well.
Or else when you say you understand something, you know what it does, and how to change that, but not how it does it.
Just something to think about.
From,
Nice coder
Quote:
Original post by Timkin
Mike! It's great to see you around here again! It's been quite some time! Merry Christmas and a Happy New Year to you!
Thanks Timkin, somehow I drifted away from the boards and kinda forgot they existed for a while. I hope your Christmas and New Years were good :)
Quote:
Original post by Timkin
ROFLMAO: Heheh... no offence intended, but one could not accuse Mike of not knowing much about AI. He has a postgraduate education in AI from one of Britain's leading schools in that area; he's a member of the AI Interface Standards Committee, the body undertaking the task of developing a common interface standard for AI for the computer games industry; and he works for Lionhead Studios and has worked on leading titles like Fable... so all in all, I'd say Mike knows exactly what he's talking about!
I thought about saying "this is what I've done and who I am" but, in the end, that makes no odds to the discussion. If he thinks I don't know much about AI then he's entitled to his opinion ;)
I'm glad you remembered who I am though :)
Quote:
Original post by Timkin
Personally, I think that's a very good post Mike. It's somewhat aligned with my own thoughts on the matter, but there are some fundamental issues (differences of opinion) I have with it. I've started to put some down on paper, but my wife is hassling me to get our daughter off to sleep for an afternoon nap. I'll try and put something up tomorrow.
And what were your thoughts in the end? I'd be interested to know; the ideas above were only a first stab and full of inconsistencies and half-baked ideas, I'm sure.
Mike
Okay... here are some of my surface thoughts on understanding. Feel free to pick them apart and expose the flaws. I'd certainly enjoy refining my ideas. ;)
I'm trying to convince myself that one can understand something without having experienced it, which would mean that understanding and experience are only correlated, rather than causally related (and computers could understand trees). We have the ability to learn by analogy, so in principle we should be able to gain understanding by analogy. Experience then simply helps to reduce the uncertainty in our beliefs enabling us to make better predictions about events. Obviously we also have the ability to formulate models by observation and confirm these through repeated experience. Of course, I'm a confessed Bayesian, so I'm probably clouded in my beliefs about this! ;)
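As a toy illustration of that Bayesian stance (the numbers are invented): repeated experience doesn't create the model, it only sharpens the belief held in it. A conjugate Beta-Bernoulli update shows the uncertainty shrinking with each observation:

```cpp
// A toy illustration of the Bayesian point above: repeated experience does not
// create the model, it only reduces the uncertainty in it. Numbers are made up.
#include <cstdio>

int main() {
    // Prior belief that "this kind of plant is a tree": Beta(1,1), i.e. total ignorance.
    double a = 1.0, b = 1.0;

    // Ten observations; 1 = the observed instance looked like a tree.
    const int observations[10] = {1, 1, 0, 1, 1, 1, 0, 1, 1, 1};

    for (int i = 0; i < 10; ++i) {
        if (observations[i]) a += 1.0; else b += 1.0;   // conjugate Beta update
        double mean = a / (a + b);                       // current belief
        double variance = (a * b) / ((a + b) * (a + b) * (a + b + 1.0));
        std::printf("after %2d observations: belief = %.2f, uncertainty = %.4f\n",
                    i + 1, mean, variance);
    }
    return 0;
}
```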
For those who would say that without the experience there is no understanding: by deduction, I would say you believe that without qualia there is no understanding. An interesting notion which fits with Mike's thought that understanding is subjective. So is qualia the key to understanding, or is it just a fancy Latin word for observation and the internal processes generated by observation?
Let's take a crack at this from the perspective of language and Searle's argument. It is widely known that the way in which humans first learn language during their early development is not the way in which humans learn language in their teen years and beyond. Therefore, since our experience of language is necessarily different and the methods we use to encode it are different, then isn't our understanding of it different (according to the 'experience' argument of understanding)? Put aside for the moment the issue of fluency and assume that both a toddler and an adult learning Chinese know the same limited set of characters and have the same vocabulary and understanding of grammar in Chinese. Does the adult understand Chinese any less than the child, or vice versa? I don't think so, since both could presumably use their limited vocabulary to interact with each other and other Chinese speakers. This would suggest that the difference in experiences of the child and adult does not result in a different understanding of Chinese. Unless, of course, one believes that somehow the toddler and adult encode Chinese differently and the adult is only using pattern-matching algorithms to associate outputs to inputs. Neuropsychology doesn't bear this out. Which areas of the cortex encode language doesn't depend on the way in which you learn that language, so the only possibility is that we encode information within the same area differently when we learn as a child or as an adult. There isn't, to my knowledge (having just spent two years working in a neuro team that does research in this and related areas), any evidence that this is the case. That doesn't mean it isn't the case and we may yet learn this... but I doubt it. So, if the toddler and the adult both understand Chinese, then uniqueness of experience does not define unique understanding... and therefore computers could learn Chinese and presumably understand it if they were given the opportunity to learn Chinese as an adult or toddler does. Of course, this means that the computer must be able to ground the symbols of Chinese, and this requires certain sensory abilities.
On a side track for a moment...
I think that understanding is the ability to take a model of something and link it to one's other models so as to preserve the consistency of all internal models, along with the ability to make predictions not only as to the behaviour of the new thing being modelled, but also its effect on the rest of the things that are understood. Thus, understanding is about building a set of models forming a self-consistent representation of things in the world and how they behave and interact. This would mean that understanding is contextual to the individual's models but not necessarily contingent on their experiences. Because two individuals' models are grounded in the same world, albeit through different interactions with that world, there are sufficient commonalities due to grounding upon which they can share understanding and communicate effectively.
If understanding is then about the consistency of models grounded in the world, then one can understand something without having experienced it, so long as one can ground enough of the larger context of models so as to make accurate predictions with the new model.
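One very crude way to sketch that notion in code (my own toy formulation, not an established algorithm): accept a new model as "understood" only if its predictions stay consistent with the existing models on the situations they share.

```cpp
// A crude sketch of "understanding as a self-consistent set of models":
// a new model is only accepted if its predictions agree with the existing
// models on the situations they share. Entirely illustrative.
#include <cmath>
#include <functional>
#include <iostream>
#include <vector>

using Model = std::function<double(double)>;  // situation -> predicted outcome

bool consistentWith(const std::vector<Model>& accepted, const Model& candidate,
                    const std::vector<double>& sharedSituations, double tolerance) {
    for (double s : sharedSituations)
        for (const Model& m : accepted)
            if (std::fabs(m(s) - candidate(s)) > tolerance)
                return false;  // predictions clash: the candidate isn't consistent yet
    return true;
}

int main() {
    // Existing "understanding": dropped things fall, roughly 9.8 m/s^2.
    std::vector<Model> accepted = {
        [](double t) { return 0.5 * 9.8 * t * t; }   // drop distance after t seconds
    };
    std::vector<double> shared = {0.5, 1.0, 2.0};

    Model refined = [](double t) { return 0.5 * 9.81 * t * t; };  // agrees closely
    Model wild    = [](double t) { return 100.0 * t; };           // clashes badly

    std::cout << consistentWith(accepted, refined, shared, 0.1) << '\n';  // 1
    std::cout << consistentWith(accepted, wild, shared, 0.1) << '\n';     // 0
    return 0;
}
```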
So, if you want to teach a computer to understand Chinese, you're going to have to teach it to understand a lot of things in addition to Chinese. Of course, this leaves us with an interesting conundrum: how does one understand the first model? My brief statement on this is that for many animal lifeforms on Earth, it is evident that some understanding is hard-wired into the brain (presumably through evolution). Of course, for human babies, many things are also not understood, and a very interesting day indeed can be spent watching one's child and seeing how they try to build consistent models of the world around them without any starting points!
Cheers,
Timkin