The perfect human AI
OK, humans have intelligence, but I think that an AI system that could learn your moves, speak realistically, ask for help, and not want to die would be an "intelligent AI." What do you think?
Your post is quite vague, but if you are after clarification, I will tell you this: an AI that could "learn your moves", which I interpret to mean could learn how to act like you, would simply be copying a living organism. Speaking realistically is not a function of AI, for the most part. Asking for help and avoiding death are not qualifiers for intelligence, either.
When I think of intelligence, for living things, I think of an organism that can learn about, react to, and adapt to its environment. It must be able to learn for itself, to an extent. An "intelligent AI" is not a human mimic - I think it would be more like Jane the computer entity from Orson Scott Card's Ender saga.
Since you did not give any parameters as to what kind of AI you were talking about, I gave you the best answer I could. Although the anonymity afforded by the Internet is probably a factor in what produces this next statement, please grow up, think before you post, and make sure your post is legible before you hit the "submit" button. Thanks.
If it cannot be distinguished from a human in any way, then it is intelligent, since that is how we define the word. If you mean something else, you should define "intelligence".
"An "intelligent AI" is not a human mimic"
Meh, I dunno. That's what humans are, for the most part. I think people overrate human intelligence. What we experience as cognition is just a thin oily film floating on top of a deep pool of instinct and inherited knowledge. As soon as someone builds a computer capable of containing all that (and as soon as it's quantified), I'm sure they'll be able to make a computer as smart as a person.
But that's a fairly low bar.
Firstly, you should consider that your post is quite ambiguous. Given that there is no universal, all-encompassing definition of intelligence, and that you did not give your personal opinion on the matter, I am left, essentially, to guess.
Nevertheless, here is my opinion. Have you ever heard of something called the "theory of mind"? Put simply, to develop a theory of mind regarding some other entity (whether it is truly intelligent or not is irrelevant in this case) is to, by some means, come to the conclusion that this other entity possesses thoughts, desires, motivations, and a knowledge base completely independent from your own. One definition, then: any AI entity about which the majority of humans develop a theory of mind is "intelligent."
A classic test of whether young kids (usually under 4 years) are capable of developing a theory of mind regarding other humans goes like this: the child is shown a scene of a woman, let's call her Sally, placing her money into a jar on the kitchen table. Sally then leaves, and during her absence Bob comes in, surreptitiously takes the money from the jar, and places it in the cupboard. The child is then shown the final scene of Sally returning in search of her money. Before Sally looks for it, the play stops and the child is asked where Sally will look for the money: the jar on the kitchen table, or the cupboard? This tests whether the child realizes that Sally does not know what the child knows -- that Bob moved the money from the jar to the cupboard.
If the child answers the jar, then he/she has developed a theory of mind regarding Sally. Otherwise, the child has not.
The goal with game AI, in my opinion, is to design AI entities such that humans are unknowingly coerced into developing theories of mind regarding the agents.
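The false-belief test described above can be sketched as a toy belief-tracking model. This is just an illustrative sketch (the `Agent` class and function names are made up for this example): the point is that passing the test means answering from Sally's stale beliefs rather than from the true world state.

```python
# Toy sketch of the Sally/Bob false-belief test: an observer "passes"
# by answering from Sally's belief state, not from the true world state.

class Agent:
    def __init__(self, name):
        self.name = name
        self.beliefs = {}           # what this agent believes about the world

    def observe(self, world):
        self.beliefs.update(dict(world))  # agent sees the current world state

world = {"money": "jar"}            # ground truth: money starts in the jar

sally = Agent("Sally")
sally.observe(world)                # Sally sees the money placed in the jar

# Sally leaves; Bob moves the money. Sally does NOT observe this.
world["money"] = "cupboard"

def where_will_sally_look(child_has_theory_of_mind):
    if child_has_theory_of_mind:
        return sally.beliefs["money"]   # answers from Sally's (stale) belief
    return world["money"]               # answers from what the child itself knows

print(where_will_sally_look(True))   # "jar"      -> child passes the test
print(where_will_sally_look(False))  # "cupboard" -> child fails the test
```

A game agent built this way keeps its own belief dictionary separate from the game's ground truth, which is exactly the kind of independent knowledge base that invites players to develop a theory of mind about it.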