
Anything like a "real" AI.

Started by November 03, 2001 01:36 PM
92 comments, last by kenwi 23 years, 3 months ago
quote:

Invader: You don't seem to understand what I'm trying to say. The difference between experiencing an emotion and simply showing it off makes all the difference.



Oh, I understand you perfectly. My point is that there is no difference that truly matters, since you and I cannot know what the computer is feeling.

quote:

You can, I guess, make a computer irrational as a response to the triggered function Love(), but I can't see any way to program a computer to actually feel irrational. Let's face it, you can't even make a computer generate a totally random number. It's an impossibility, at least that is what I've heard.



I disagree that we act irrationally. I believe we only seem irrational because we are acting as rationally as we can, except that the data we are rationalizing with is either misinterpreted or wrong. (I don't have a degree in any of this, but if you can point me in the direction of a source that disagrees with me, please do.) In the case of love, we are only being 'irrational' because we are trying to do everything in our power to get this person either to know us, or like us, or whatever. (See Ronin's post)

As for random numbers, I don't believe that a computer (or a human) will ever be able to generate a completely random number.

quote:

And, my PDA-AI with the ultimate goal of assisting me, well, that would be just the same as me with the ultimate goal of creating true AI. Does that put me in an infinite loop as well?



Of course not. If you knew that by attempting to create an AI you would die a horrible, painful death before completing it, and that if you didn't you would live a long and happy life, I'm pretty sure that you would choose the latter. However, since your PDA-AI can only serve you, it can't even have a concept of that choice and, in my opinion, is therefore not a true human-emulating AI. However, if you do make this PDA-AI that only wants to serve me, I'd be glad to get one.

quote:

All computers do is execute logical operations.
Executing logical operations can't create art/feelings/consciousness.
If the operations don't do this, the system as a whole can't do it.



Beautiful response to this ^ GameCat

quote:

Of course you can program a computer to express love or make it believe it loves someone. But to the AI that's only a variable with a value (ok, lots of 'em variables =) but still it could come down to a bit saying
LovePerson02232 = 1
You can't, at least not today, program the computer to actually love the person.



What's the difference between making a computer believe it is in love with someone (doesn't your own brain do that to you by messing with your hormones?) and actually loving them? It seems to me they are one and the same.

Invader X
Invader's Realm
quote:
Original post by Invader X
Oh, I understand you perfectly. My point is that there is no difference that truly matters, since you and I cannot know what the computer is feeling.


to believe that there is no difference shows that you have never dated a gold-digger
seriously, though... the computer wouldn't be feeling anything. it is simply processing data (even if it is a lot of data, and a lot of very complex processing). it matters because we are curious if we can make a computer really, actually, have emotions, not whether we can program it to trick us into thinking so.
quote:

I disagree that we act irrationally. I believe we only seem irrational because we are acting as rationally as we can, except that the data we are rationalizing with is either misinterpreted or wrong. (I don't have a degree in any of this, but if you can point me in the direction of a source that disagrees with me, please do.) In the case of love, we are only being 'irrational' because we are trying to do everything in our power to get this person either to know us, or like us, or whatever. (See Ronin's post)

you keep saying this, but love is more than trying to get someone to notice you. what about when you have been married for years, or when it is love for your own children? you would not be trying to get THEM to know you, or like you, when you push them out of traffic to save their life, thereby killing yourself. even in less extreme cases this is true; when you love someone else (not just a crush or whatever you are talking about) you value them over yourself, and will act irrationally to help them.
quote:
As for random numbers, I don't believe that a computer (or a human) will ever be able to generate a completely random number.

you should read my previous post.
quote:
Beautiful response to this ^ GameCat

(1) so i guess you have somehow proved that there is no such thing as a soul?
(2) this argument is nothing more than science fiction until you actually DO replace an entire brain with synthetic neurons. only then will it be a "beautiful" response. although, actually, i think replacing it would not be enough; you would have to build the synthetic brain from scratch to have a valid experiment.
quote:

What's the difference between making a computer believe it is in love with someone (doesn't your own brain do that to you by messing with your hormones?) and actually loving them? It seems to me they are one and the same.

setting variables would not be making the computer believe it is in love. computers don't "believe" anything. they are not sentient beings. programming in a statement to the effect that the computer "loves x" does not make it love "x", it just stores that information; there is no belief involved at all.

--- krez (krezisback@aol.com)
Firstly, about the randomness issue, I'd like you to take a look at: http://www.dilbert.com/comics/dilbert/archive/dilbert-20011025.html

Then about the AI thing - the only real way that we'll get an AI to feel is to actually create it in HARDWARE - there is no possible way in software alone that it could feel. Take this analogy: feelings, the chemicals in our brain, are the software component, but the neural net, the solid physical part, is the hardware. Thus the way to create a 'feeling/living' AI or computer is to actually build a good adapting neural net processor and then 'feed' and nurture it with the correct software, code and inputs.
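To show what I mean by 'feeding' and nurturing such a processor, here is a made-up toy example (nothing like a real hardware neurochip, just the idea in code): the weights stand in for the 'hardware' connections, and the training inputs are the 'nurture'.

#include <cstdio>

int main() {
    // Made-up toy unit: two weights and a bias stand in for the
    // "hardware" connections in the analogy above.
    double w0 = 0.1, w1 = -0.2, bias = 0.0;
    const double rate = 0.1; // learning rate

    // The "nurture": repeatedly show the unit examples of a simple
    // association (logical OR) and let it adapt on its mistakes.
    double in[4][2]  = {{0,0},{0,1},{1,0},{1,1}};
    double target[4] = { 0,    1,    1,    1  };

    for (int epoch = 0; epoch < 50; ++epoch)
        for (int i = 0; i < 4; ++i) {
            double out = (in[i][0]*w0 + in[i][1]*w1 + bias) > 0.5 ? 1.0 : 0.0;
            double err = target[i] - out;  // classic perceptron rule
            w0   += rate * err * in[i][0];
            w1   += rate * err * in[i][1];
            bias += rate * err;
        }
    std::printf("learned: w0=%.2f w1=%.2f bias=%.2f\n", w0, w1, bias);
    return 0;
}

The point of the toy: nothing in the code lists the right answers; the connections themselves are reshaped by what the unit is fed.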

(Just my take on things + with influences from Alpha Centauri)
Daemin(Dominik Grabiec)
I've been away for a few days and I come back to another page and a half in this discussion! hehe...

Some comments and thoughts...

To those arguing about whether computers can or cannot truly 'feel' something, be it an emotion or whatever... let me tell you a brief (true) story.

Back in 1998 I was in the US for several conferences, all in the AI and cognitive science fields. I had the pleasure of attending the 1998 AAAI (American Association for Artificial Intelligence) conference, and on one particular evening I was in the exhibition hall visiting many of the booths. Late in the evening I came across a small table nestled among some larger displays. On the table was a small furry robotic cat. From a few meters away I was actually momentarily fooled into thinking it was a real cat. There was a middle-aged Japanese professor sitting behind the table, and behind him were a few technical posters on his research. It didn't look like an overly exciting display, but I thought I'd stop for a look and see what this cat was about. The professor explained to me that he had built this small robotic cat to investigate the relationship between tactile contact and emotions. He had designed a 'simple' model of emotion (by comparison to other more common models) and had fitted dozens of tactile sensors under the fur of the cat. The point to realise is that no specific action responses were programmed into the cat; rather, the physical response of the cat was linked to its emotional state. He explained that the emotional model was calibrated for responding to stroking the fur. So, I stroked the fur and the cat sat up and purred at me, actually nuzzling a bit to rub the tactile sensor against my hand. I was very impressed. It made the Sony dog (which at that stage had only just won its first international Robo-Cup soccer tournament and wasn't yet on the market) look downright catatonic! I played with the cat for a few minutes and was able to elicit several other interesting responses. Basically, I found that it liked to be rubbed in different places in different ways, and it would respond accordingly to either increase its pleasure or try to get me to rub it somewhere that would feel better. With each interaction it made various purring or meowing sounds to indicate its happiness.

This gave me an idea. Without discussing it with the professor, I gave the cat a short, sharp smack on the top of the head. The robotic cat pulled its head back sharply, and I have to say that the sound it made was a very realistic hissing sound. Any of you cat owners out there know the sound... it comes from very high in the mouth cavity. I was quite surprised, and the professor nearly jumped 3 feet in the air. Not because I had smacked the cat, but because he was totally astonished at the reaction of the cat. He quickly explained that he had not put anything specific into the hardware or software for dealing with responses to harmful/'painful' stimuli. Somehow, within the basic physical and emotional model he had designed there was the capacity for dealing with this opposite emotional state, and the reaction that the cat came up with was totally realistic in our sense of what a cat would do, even though the model was never calibrated to do this!

So, here's a question for you. What did the cat feel, if anything? It certainly wasn't a programmed response to the stimuli, so what generated the response to an apparently 'painful' stimulus? If you feel that the response was merely an emergent property of the programming that already existed and the cat didn't really feel anything, then what does this say about organic cats? Should I now not worry about going and smacking organic cats on the head?
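For the programmers following along, here is a toy sketch of how an 'unprogrammed' response like that can emerge. I stress this is entirely my own invention for illustration, vastly simpler than the professor's actual model: a single emotional dimension driven by one tactile update rule, with no explicit 'pain' branch anywhere.

#include <cstdio>

// One emotional dimension; <0 is distress, >0 is pleasure.
struct EmotionalState {
    double valence = 0.0;

    // A single update rule for ALL tactile input, tuned only around a
    // comfortable pressure. Gentle contact pushes valence up; contact
    // far beyond the comfort point drives it sharply down. There is
    // no explicit "pain" case anywhere.
    void feel(double pressure) {
        const double comfort = 0.5;                  // calibration point
        valence += (comfort - pressure) * pressure;  // gentle: +, harsh: -
        valence *= 0.9;                              // decay toward neutral
    }

    const char* response() const {
        if (valence >  0.3) return "purr";
        if (valence < -0.3) return "hiss";
        return "idle";
    }
};

int main() {
    EmotionalState cat;
    for (int i = 0; i < 10; ++i) cat.feel(0.3);      // stroking
    std::printf("after stroking: %s\n", cat.response());
    cat.feel(3.0);                                   // smack on the head
    std::printf("after smack:    %s\n", cat.response());
    return 0;
}

Stroke it ten times and it purrs; smack it once and it hisses. The 'hiss' was never calibrated in; it falls out of the same rule that rewards gentle contact.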


Moving along...

I'm going to avoid the use of the word 'soul', because, quite frankly, its existence cannot be proven and invocation of the idea of a soul is just a quick way to say we have no idea how thoughts are generated.

quote:
Original post by krez

seriously, though... the computer wouldn't be feeling anything. it is simply processing data (even if it is a lot of data, and a lot of very complex processing). it matters because we are curious if we can make a computer really, actually, have emotions, not whether we can program it to trick us into thinking so.



What is your brain processing? Given this, what is its output?

quote:
Original post by Invader X
I disagree that we act irrationally. I believe we only seem irrational because we are acting as rationally as we can, except that the data we are rationalizing with is either misinterpreted or wrong. (I don't have a degree in any of this, but if you can point me in the direction of a source that disagrees with me, please do.) In the case of love, we are only being 'irrational' because we are trying to do everything in our power to get this person either to know us, or like us, or whatever. (See Ronin's post)


We should be careful with the use of the term 'rational'. It has a specific meaning when applied to people and to AI. In the example of someone acting irrationally with regards to love, that irrationality is only an external perspective. Rationality can only be evaluated from the perspective of the agent that is acting. Hence, while it may seem that someone is acting irrationally when they are in love, to them they are acting perfectly rationally according to THEIR personal measure of utility (which in this case is a function of their happiness).

quote:
Original post by krez
when you love someone else (not just a crush or whatever you are talking about) you value them over yourself, and will act irrationally to help them.


That's not acting irrationally. In fact, it is acting quite rationally according to a utility function that puts the physical well-being of someone else above your own. It would certainly look like an irrational action to someone who puts their own personal well-being above that of anyone else, but that doesn't mean it IS irrational. As I said, rationality must be evaluated with reference to the acting agent's utility measure.
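To make that concrete, here is a trivial made-up example (the weights and numbers are purely illustrative): the same act, trading your life for someone else's, scored under two different utility functions.

#include <cstdio>

// Outcome of an action, in terms of who survives.
struct Outcome { double mySurvival, theirSurvival; };

// Two different personal measures of utility. The weights are
// invented purely for illustration.
double selfish(const Outcome& o) { return o.mySurvival; }
double devoted(const Outcome& o) { return 0.2 * o.mySurvival + 0.8 * o.theirSurvival; }

int main() {
    Outcome save  = {0.0, 1.0};  // push them clear of the traffic, die yourself
    Outcome watch = {1.0, 0.0};  // stay safe, they die

    std::printf("selfish agent: save=%.1f  watch=%.1f\n", selfish(save), selfish(watch));
    std::printf("devoted agent: save=%.1f  watch=%.1f\n", devoted(save), devoted(watch));
    return 0;
}

The selfish measure says standing by is rational; the devoted measure says the sacrifice is. Same action, opposite verdicts, purely because the utility function differs.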

quote:
Original post by krez
setting variables would not be making the computer believe it is in love. computers don't "believe" anything.



How does your brain record and utilise the belief that the sun will rise tomorrow? Given your answer, if I now asked you to consider that you lived your entire life in a room and had only books to read to obtain information (but assume ANY information could be written into the books), then how would your brain record and utilise the belief that the sun will rise tomorrow?

Is there a difference between the two beliefs in your brain and is there a difference between the two beliefs in your conscious thoughts?

I'm not suggesting I have the answers to these questions (although I have some fairly well informed opinions). I am simply curious to hear different answers to these questions from different people.

Finally, regarding Shadows' attempt at generating a random number. A random number is not one selected without thought. A random number is one sampled from a set of numbers without bias or preference for any particular number in that set. You (apparently) chose the set of integers. Based on a very quick analysis of your number, one of two conclusions can be drawn.

1) IF you used the digits running horizontally above your keyboard, then you are probably left handed, as your brain favoured your left hand significantly when producing that sequence of numbers (60% with the left hand and 40% with the right hand), assuming you did something 'close' to touch typing.

2) If however you used the keypad to the right of the keyboard, and used a single hand (probably the right), then your brain favoured your index (first) finger more. Your middle finger was not much liked by your brain, probably because it was squeezed between your index and ring fingers on the small keypad. It was only used to produce 25% of the digits (given the use of the keypad and some assumptions about how a hand types on the keypad). Your ring finger produced 35% of the digits given the above assumptions.

Basically, the probability distribution of the digits was extremely skewed. There is significant evidence to suggest that your brain exhibited a bias when it selected the particular number to represent on the screen! Hence, your number is not random.
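If anyone wants to try this at home, here is a quick sketch of the sort of frequency analysis I am describing (the digit string below is made up; it is not Shadows' actual number): count how often each digit appears and compare against the uniform expectation using Pearson's chi-squared statistic.

#include <cstdio>
#include <cstring>

int main() {
    // Made-up stand-in for a human-typed "random" number (I do not
    // have Shadows' actual digits in front of me).
    const char* digits = "132312313212331231";

    int count[10] = {0};
    int n = (int)std::strlen(digits);
    for (int i = 0; i < n; ++i)
        ++count[digits[i] - '0'];

    // Pearson's chi-squared statistic against a uniform distribution:
    // every digit should appear in about n/10 positions.
    double expected = n / 10.0, chi2 = 0.0;
    for (int d = 0; d < 10; ++d) {
        double diff = count[d] - expected;
        chi2 += diff * diff / expected;
    }

    // With 9 degrees of freedom, chi2 above ~16.9 rejects uniformity
    // at the 5% significance level.
    std::printf("chi-squared = %.2f\n", chi2);
    return 0;
}

For that skewed sample the statistic comes out around 43, far above the rejection threshold; a genuinely unbiased source would hover near the degrees of freedom, about 9.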

Well, enough from me. It's late on a Friday and time to go home to the one I love! ;-)

Cheers all,

Timkin
quote:

seriously, though... the computer wouldn't be feeling anything. it is simply processing data (even if it is a lot of data, and a lot of very complex processing). it matters because we are curious if we can make a computer really, actually, have emotions, not whether we can program it to trick us into thinking so.



How do you know that it wouldn't? Can you prove to me that it doesn't? Isn't your brain just processing data from your senses, with your neural net using your past experiences to formulate your next move?

Can you prove that I am a sentient being and not just a figment of your imagination?

If you cannot prove that I have emotions, why should you be so eager to try and prove whether a machine that looks like it has emotions truly feels them?

quote:

you keep saying this, but love is more than trying to get someone to notice you. what about when you have been married for years, or when it is love for your own children? you would not be trying to get THEM to know you, or like you, when you push them out of traffic to save their life, thereby killing yourself. even in less extreme cases this is true; when you love someone else (not just a crush or whatever you are talking about) you value them over yourself, and will act irrationally to help them.



How is it irrational to die for your offspring or your wife? So you die, but they live on. If you look at things from an evolutionary point of view, you only married your wife for her genes, so it would be rational to help preserve her genes and to allow her to nurture your offspring (which also carry your genes). But what of adoption? Many people find pleasure in having families, and if they cannot have children, why not accept another? There are no selfless acts. Everything you do is for yourself or the survival of your genes and ideals.

quote:

(1) so i guess you have somehow proved that there is no such thing as a soul?
(2) this argument is nothing more than science fiction until you actually DO replace an entire brain with synthetic neurons. only then will it be a "beautiful" response. although, actually, i think replacing it would not be enough; you would have to build the synthetic brain from scratch to have a valid experiment.



1) I do not accept dualism in consciousness. Since I doubt you can present any proof of a soul (if you can, by all means present it), you cannot counter this argument very well. I've yet to hear of any evidence that we are anything more than biological entities.
2) Currently, sentient computers are also science fiction. Although, explain why replacing it would not be enough. I'm interested.

quote:

setting variables would not be making the computer believe it is in love. computers don't "believe" anything. they are not sentient beings. programming in a statement to the effect that the computer "loves x" does not make it love "x", it just stores that information; there is no belief involved at all.



We may not set variables in our brain like bool Love, but we store our feelings about the person somewhere, and from my own thoughts, my guess is that we love people because they have achieved whatever standards we have mentally set (consciously or unconsciously). So, maybe for us it could be interpreted programmatically as:

if (Person.smart && Person.attractive && Person.interested_in_x /* ... */)
    Love(Person);

Invader X
Invader's Realm
eh...
i do not have enough background in AI to argue this anymore, and telling you guys to stuff it would be bad form, so...
you can theorize whatever you like. i, however, will think it is fluff and science fiction until you can produce an actual result (not some example that implies something else that implies something else that might imply that you are correct).
quote:
Can you prove that I am a sentient being and not just a figment of your imagination?
If you cannot prove that I have emotions, why should you be so eager to try and prove whether a machine that looks like it has emotions truly feels them?

i'm sorry, i always thought this was a scientific concept, not a philosophical one. if you expect me to prove your existence before i can even make a comment in this discussion, i want no part of it.
why would i have to prove anything? wouldn't the people who are arguing that they can produce a sentient computer have the burden of proof? it is already accepted by humanity in general (except by argumentative AI-mongers, and other assorted philosophers) that we all exist, and have emotions, et cetera. while i do not generally feel that the majority is always right, this one is a dead giveaway...

--- krez (krezisback@aol.com)
Invader: The difference between simply showing off emotions and actually experiencing them... well, if you don't understand, it'll be hard to explain...

You see, if the computer only shows the emotions without actually feeling them, then the computer won't be intelligent; it would only do what it was programmed to do, i.e. only do what its programmer(s) told it to do in that situation. Making a computer actually feel these emotions, that would be true AI and not only mimicking...

Krez: From my point of view this is far from a scientific debate (since we're not discussing the actual code) And I for one think that you should keep on posting

// Shadows
Rogue - The EH-CRPG Soon to be... - Software
Invader: You said that it isn't irrational to die for your offspring... considering evolution, genes and blablabla, but what about dying for love or friendship? I mean, you could love someone, not have children, but still be willing to die for that person simply because you love her/him so much; that has nothing to do with evolution. Also, I think that saying that all we do is for ourselves is rather pathetic, actually. If that is what you do, then fine, but I for one don't work like that. If I see someone that needs help and I am reasonably able to give them help, then I do... before you even bring it up: NO! not to make myself feel pleased with myself, but because I think that's what people should do, help each other (no, I'm not a commie )
I think it's possible to make a computer remember stuff and make it talk, BUT it should be a program with over 3 billion lines, and a damn Cray to run it...

But mebbe in the future you'll have a talking li'l robot in school and stuff
Shadows:

Would you feel bad (maybe a little guilty) if you refused to help someone who needed it when you were easily able to offer your assistance? If so, how can you say that you help someone with no thought of your own gain? Even if it is not a conscious thought, the drive to please yourself is always there. This does not take away from the "goodness" of your actions in any way. It is still very commendable that helping others gives you some amount of pleasure (as opposed to achieving pleasure by harming others).
As for dying for someone... I love my girlfriend more than anything in this world. I would gladly lay down my life for her, whether it furthers my genes or not (i.e. whether or not we have children). I like to think that my willingness to trade my life for hers is completely selfless, but when I am being totally honest with myself, I have to admit that I am willing to make this trade because the grief that I would feel at her death would be far worse than death itself (compounded with the thought that I could have saved her... it would be too much).

Okay, tying it all back in with AI. We do not act irrationally (as has already been said). In fact, we ALWAYS act in the way that we believe will maximize our own happiness (whether we admit it or not). Now, is it so hard to imagine programming a computer to act similarly? If it were robust enough, the computer could develop its own notions of what maximizes its happiness and come up with new solutions for how to achieve those goals (I believe this is called "emergence"). In fact, this seems like one of the least complicated areas of AI (in contrast to getting information into and out of the computer in a way that would allow the computer to generate goals and plans of action).
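Just to show how uncomplicated the core decision loop would be, here is a toy sketch (the actions, weights and utility function are all invented by me): score each available action and take the best one. The interesting part, the "emergence", would come from letting the program adjust the utility weights from its own experience rather than hard-coding them as I do here.

#include <cstdio>

// An action the agent could take, with made-up scores for the
// pleasure it brings and the effort it costs.
struct Action { const char* name; double pleasure, effort; };

// The agent's personal measure of happiness. Change the weights and
// you change the "personality" without touching any action code.
double utility(const Action& a) {
    return a.pleasure - 0.5 * a.effort;
}

int main() {
    Action options[] = {
        {"help a stranger", 0.8, 0.6},
        {"do nothing",      0.1, 0.0},
        {"harm someone",   -0.9, 0.4},
    };

    // The whole decision procedure: pick whatever maximizes utility.
    const Action* best = &options[0];
    for (const Action& a : options)
        if (utility(a) > utility(*best))
            best = &a;

    std::printf("chosen action: %s\n", best->name);
    return 0;
}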

Of course, you are free to disagree with any or all of this (and I expect you will).

FragLegs

This topic is closed to new replies.
