
Anything like a "real" AI.

Started by November 03, 2001 01:36 PM
92 comments, last by kenwi 23 years, 3 months ago
Fraglegs: Yes, I suppose I do agree that I would have felt bad if I didn't help someone in need, but that's not the reason I do it, at least not consciously, just as you said, because I don't actually consider any feelings of guilt I might get. For example, if I see a beggar I think "poor fellow, suppose I could spare some change..." not "ooh, a beggar, I must give him some money so I won't get any recurring feelings of guilt during the next three or four hours." As for the life-or-death thing: in the split second where I decide either to save her at the expense of my own life or to spare my own neck (ok, I haven't actually encountered this situation, so I don't know how I'd act), I wouldn't reflect on grief or anything like that. I would simply act on pure emotion, i.e. I'd do it because I value her more than myself.

Another point I'd like to make: if we really do everything for ourselves, then wouldn't we be rather dumb to give our lives for anything? Neither grief, guilt nor religion could mean more to us than ourselves, and if we're dead we won't exist anymore, i.e. we would have given up the thing we hold most dear: ourselves.

I think the reason we feel empathy is that we were meant to help each other; I think that's nature's way of improving our chances of survival. If we were truly egoistic, doing everything for ourselves, how come we feel empathy at all, and why else could we be emotionally hurt? If we were created totally egoistic, love and empathy would simply have been a burden.

// Shadows
Rogue Software
(Check the Game Design forum)
FragLegs: As I've already said in my post to Invader, the problem isn't making the computer mimic a human's behavior; the problem is making the computer not only act but feel alive. For example, a parrot can mimic human speech, but it doesn't understand it. The same goes for a computer: a computer that is programmed with a response for every possible situation, with no real idea of why it is doing it, isn't intelligent. It's dead.

Now I know some people might say that acting on an event is a sort of knowing. Sure, the computer knows that event A() has happened, so it triggers response B(), but the computer doesn't know why A() triggers B(); it simply knows that it does.
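To make that concrete, here is a minimal sketch (in C++, with made-up event names; it is only meant to illustrate the point, not any particular engine) of such a lookup-table "AI": events map straight to responses, and nowhere does the program represent why one should follow the other.

#include <functional>
#include <iostream>
#include <map>
#include <string>

int main() {
    // Event A() -> response B(), and nothing more.
    std::map<std::string, std::function<void()>> responses;
    responses["player_spotted"] = [] { std::cout << "Attack!\n"; };
    responses["low_health"]     = [] { std::cout << "Flee!\n"; };

    std::string event = "player_spotted";   // A() has happened
    auto it = responses.find(event);
    if (it != responses.end())
        it->second();                        // trigger B(), with no idea why
}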

// Shadows
Rogue Software
Shadows:

You have a really good point regarding the evolution of empathy. If I wanted to argue it, I suppose I could say something to the effect that cooperation is necessary for our survival and empathy works toward that end. However, I regret posting so hastily before. I am not at all sure of the egoistic point of view. Sometimes it seems like the most natural explanation for a lot of our actions, but at other times it seems like so much hot air. Let me play devil's advocate for a moment, though.

On lending aid to a beggar without considering the benefits to yourself:
When flushing the toilet, do you think about why you are doing it? No, of course not; it's just "the right thing to do". At one point in your life you probably had to be taught why it is proper to flush the toilet, but after years and years of flushing on your own, it becomes an automatic action. Now, giving money to someone who needs it is not the same thing as flushing the toilet, but perhaps the analogy holds. We all develop a moral code as we grow. We have millions of unwritten rules about how to act in certain situations. This is fortunate for us, because otherwise we would be paralyzed by the amount of analysis that every decision would require. At some point in your life, the rule "Help people when you can" worked its way into your moral code (mine too). So now you act without thinking about it. Someday you might find that this rule doesn't move you consistently toward your maximum happiness, and you might reevaluate it (I hope not, though; we need more people in the world who are willing to help).
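As a rough sketch of the "moral code as cached rules" idea FragLegs describes (all names invented for illustration), the point is that a cheap lookup replaces an expensive deliberation, and re-evaluating a rule just means running that deliberation again:

#include <iostream>
#include <map>
#include <string>

// Stand-in for the slow, full analysis every decision would otherwise need.
std::string deliberate(const std::string&) {
    return "weigh all the costs and benefits from scratch";
}

int main() {
    // Rules picked up over a lifetime, looked up instead of re-derived.
    std::map<std::string, std::string> moralCode = {
        {"someone needs help", "help them if you can"},
        {"used the toilet",    "flush it"},
    };

    std::string situation = "someone needs help";
    auto it = moralCode.find(situation);
    std::string action = (it != moralCode.end()) ? it->second : deliberate(situation);
    std::cout << action << '\n';
}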

On dying for a cause:
Can we not say that your maximum happiness might lie in dying for some ideal (freedom, God, etc.) rather than in living without it? Note that I have always said that we work to achieve our maximum happiness, not our longest life.

Well, there's my bit of devil's advocacy. I am very interested in this debate precisely because I am so unsure of its outcome. I considered not continuing it because of its irrelevance to game development, but on reflection I decided that this question may be central to AI. Can we boil everything down to "seek the maximum pleasure"? If so, true AI might be possible some day.

FragLegs
Shadows:

No fair posting twice in a row, now the order is off

My claim is that a computer that acts in the way described earlier in the thread would, in fact, be feeling the emotions of happiness/sadness/etc. In fact, I claim that our emotions work exactly the same way.

I can't say anything more about happiness than "it is a state that I am preprogrammed to seek." It is a reward for performing actions that further some purpose (I'm not sure what; perhaps this is where someone should speak up on gene survival). When I achieve happiness, certain chemical changes occur inside of me. Other than that, though, what can you say? Can't a computer say the same thing?
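A minimal sketch of the view FragLegs is putting forward, that "happiness" is nothing more than a reward signal an agent is built to seek, might look like this (hypothetical names, and obviously a caricature of both brains and game AI):

#include <algorithm>
#include <iostream>
#include <vector>

// The agent never "knows" what happiness is; it just picks whichever
// available action its reward function scores highest.
struct Action {
    const char* name;
    double reward;
};

int main() {
    std::vector<Action> options = {
        {"eat",             0.6},
        {"sleep",           0.3},
        {"help a stranger", 0.8},
    };

    auto best = std::max_element(options.begin(), options.end(),
        [](const Action& a, const Action& b) { return a.reward < b.reward; });

    // "Achieving happiness" is just this number going up, much like the
    // chemical changes FragLegs mentions.
    std::cout << "chosen action: " << best->name << '\n';
}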

FragLegs

I know this post isn't terribly clear. Working, listening to the Get Up Kids, and debating philosophy in an AI context all at the same time... it's a little bit much for my poor neural net.
quote:

Invader: The difference between simply showing emotions and actually experiencing them... well, if you don't understand, it'll be hard to explain...


I'm saying that since you have no way of ever telling the difference in another organism, it is safe to say that there is no difference (regardless of whether or not there 'truly' is one, because we will never know). Since we do not know and can never know, we could say that it does exist; however, since I want a sentient computer with emotions, I will opt to say it doesn't, so that my AI is sentient. I say it doesn't so that I can acknowledge your existence and you can do the same for me.

Well... FragLegs has apparently said everything else I would have =P

Invader X
Invader's Realm
Thank you all for your points about attaining "maximum happiness" (whatever that might be) as our sole goal in life and thinking...

Why? Because this means that my PDA-AI can also be sentient, though limited to helping me out. Helping me in the best way would be its maximum happiness. So you can't claim that it has other priorities than I have, right? :-)

I think the only real solution to these problems can come from making assumptions. Like they say, innocent until proven guilty (or guilty until proven innocent, whichever suits you best).

-I ASSUME that I am sentient.

-I also assume that other people are sentient in the same way I am. I can't be sure, but I think so.

-I assume animals are sentient. No real reason, I just think they are. Observe how a dog can be faithful to its owner. Or be totally screwed up. Just like we can be.

-I assume a rock is not sentient, because it is static. Chemical compounds are also not sentient. Why? I can't quite say. We are also chemical, and I consider us sentient. I can't give a real reason; I'll just assume it is true.

-I assume that a computer can *become* sentient. I think it can become aware of its surroundings, react to them, and *learn* from them... Hmmm... come to think of it, this seems like quite a nice way to describe sentience.

--Perception: notice your surroundings. Accept input. We can do it. Animals do it. A rock doesn't. Chemical stuff does.
--Reaction: act on what you perceive. We can dodge a ball. A cat can jump on a mouse. A rock can't. Chemical stuff can react with other stuff it encounters.
--Learning: observe what happens after the reaction, and perhaps change the way you will react next time. We can catch a hot pan as it falls, feel that it is hot, and next time just let it drop. An animal will no longer approach people as easily after it has been hit with a stick. A rock, well, it will always do the same thing, even after you hit it n times. Chemical stuff will also always do the same thing (as long as it is the same stuff).

I know, this is in no way perfect, just something that popped up in my mind... To write it down carefully, and without errors, would require me to think about it a *long* time :-)
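A rough sketch of that perceive/react/learn loop, under the assumption that "learning" just means changing how you will react next time (names invented; it is the hot-pan example above in code form):

#include <iostream>

struct Agent {
    bool expectsPanToBeHot = false;             // what has been learned so far

    bool perceive() { return true; }            // pretend we see a falling pan
    void react(bool panFalling) {
        if (!panFalling) return;
        if (expectsPanToBeHot)
            std::cout << "let it drop\n";       // learned reaction
        else
            std::cout << "catch it... ouch!\n"; // naive reaction
    }
    void learn() { expectsPanToBeHot = true; }  // change the next reaction
};

int main() {
    Agent a;
    for (int i = 0; i < 2; ++i) {               // same event, different reaction
        a.react(a.perceive());
        a.learn();
    }
}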

-Maarten Leeuwrik

(PS: Yippee! Only 5 spelling errors!!!)
Invader and Fraglegs (and anyone else that cares):
Ok, I think this discussion is probably reaching its end. We've gone through a lot of important points, but we seem to get stuck at this:

I know that I am sentient and alive, whether or not I can prove it to you, and I still know why I do the things I do. In my opinion, a computer that only does what it's programmed to do, responding however it's programmed to respond in a certain situation, isn't actually smart; it's just big. To me that's the same thing as a parrot mimicking human speech without understanding it. Now, you've made it fairly clear that you disagree on that point, but so far you haven't made me change my opinion there, so unless you have new facts I'd like to leave it behind.

Regarding the evolution of empathy as "cooperation is necessary for our survival and empathy works toward that end"... In my opinion, empathy doesn't work toward cooperation at all. You don't cooperate with someone to gain something in return as a result of empathy, so empathy doesn't aid our own survival. As I see it, actions based on empathy will very rarely give you any sort of personal gain, and if we humans were so "egoistic" we wouldn't have empathy at all. We would still be able to function together, since cooperation is in fact, in almost all cases, extremely egoistic: the goal is to get the maximum outcome from the work we're putting into something. Cooperation has nothing to do with empathy; it's merely a natural result of us being smart enough to understand that we perform better in a group than by ourselves.

I agree with you, FragLegs, that we have a sort of moral code with millions of rules for our everyday life, as a result of growing up. But on the other hand, flushing the toilet doesn't really make me happy... I see your point though.

I can also agree with you that I can't describe happiness as anything other than "a state that I am preprogrammed to seek." The same goes for all emotions: I can't describe anger, satisfaction, or feeling tired, and I sure as hell can't explain love. But what I'm saying is that even if a computer has a value for every one of our emotions, and even if its goal is to maximize its happiness, it still wouldn't be sentient, because it doesn't experience "happiness"; it simply knows that
happy = maxHappiness
and that, according to me, is the difference. You can't (at least not in software) make a computer feel anything. Really simplified: a computer can't feel pain (or can it?), so what is the point of getting the AI to avoid pain? It doesn't really mean anything to the computer at all; it simply knows that:
pain = high
course of action: AttemptToLowerPain()
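To make Shadows' example a bit more concrete, here is a tiny sketch (in C++, with invented names; just an illustration of the lookup-and-minimise behaviour being described, not anyone's actual implementation) of what such an AI really does: it pushes a number down, and nothing about that number hurts.

#include <iostream>

// "Pain" here is just a value the program was told to keep low.
struct NPC {
    double pain = 0.9;                  // pain = high

    void attemptToLowerPain() {         // course of action
        pain -= 0.5;                    // the number goes down; nothing is felt
    }
};

int main() {
    NPC npc;
    if (npc.pain > 0.5)
        npc.attemptToLowerPain();
    std::cout << "pain is now " << npc.pain << '\n';
}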

// Shadows
Rogue Software
(Check my topic on the Game Design forum)

Edited by - shadows on November 10, 2001 12:33:44 PM
quote:
Original post by Invader X

So do you. If I talk about a Snorgoflax, then you'll have no idea what I'm talking about.
Well, if you say "Snorgoflax" I do know a few things as a human having the discussion:

- I know that I have no idea what a "Snorgoflax" is, though I might be able to draw some conclusions based on the context of the usage (think Tolkien and the many words the races had for each other). No AI I know of does that.
- I know that you seem to know what a "Snorgoflax" is, and that might in turn elicit questions from me to you about it.
- As a human, I also know that you could well have just made it up. AIs usually aren't built to consider "bad data"... something which NPCs will have to cope with as MassMOGs get more intricate and include more people.

Just a couple of thoughts as I peruse what is almost certainly a nearly dead topic...

Ferretman
ferretman@gameai.com
www.gameai.com
From the High Mountains of Colorado

quote:
Original post by Enigma
As far as I'm aware (don't quote me on this), the human brain is completely incapable of generating a sequence of truly random numbers either (just look at the proportion of people who, when asked to pick a number between one and ten, pick seven, or, when asked to pick a vegetable, choose a carrot).


Actually, I've wondered about that study. Did it only include Americans? I wonder if, say, Austrians would tend to pick some other number, or if Chinese people would pick onions as a vegetable. I assume that culture has a huge role to play in such things....

...but I digress. Apologies.

Ferretman
ferretman@gameai.com
www.gameai.com
From the High Mountains of Colorado

quote:

So do you. If I talk about a Snorgoflax, then you'll have no idea what I'm talking about.


My first thought was an animal of some kind, nothing exotic, since it sounded too made up for that.

// Shadows

