I've been away for a few days and I come back to another page and a half in this discussion! hehe...
Some comments and thoughts...
To those arguing about whether computers can or cannot truly 'feel' something, be it an emotion or whatever... let me tell you a brief (true) story.
Back in 1998 I was in the US for several conferences, all in the AI and cognitive science fields. I had the pleasure of attending the 1998 AAAI (American Association for Artificial Intelligence) conference and on one particular evening I was in the exhibition hall visiting many of the booths. Late in the evening I came across a small table nestled among some larger displays. On the table was a small furry robotic cat. From a few meters away I was actually momentarily fooled into thinking it was a real cat. There was a middle-aged Japanese professor sitting behind the table and behind him were a few technical posters on his research. It didn't look like an overly exciting display but I thought I'd stop for a look and see what this cat was about.
The professor explained to me that he had built this small robotic cat to investigate the relationship between tactile contact and emotions. He had designed a 'simple' model of emotion (by comparison to other more common models) and had fitted dozens of tactile sensors under the fur of the cat. The point to realise is that no specific action responses were programmed into the cat; rather, the cat's physical responses were driven by its emotional state. He explained that the emotional model was calibrated for responding to stroking the fur. So, I stroked the fur and the cat sat up and purred at me, actually nuzzling a bit to rub the tactile sensor against my hand. I was very impressed. It made the Sony dog (which at that stage had only just won its first international RoboCup soccer tournament and wasn't yet on the market) look downright catatonic! I played with the cat for a few minutes and was able to elicit several other interesting responses. Basically I found that it liked to be rubbed in different places in different ways and it would respond accordingly to either increase its pleasure or try to get me to rub it somewhere that would feel better.
With each interaction it made various purring or meowing sounds to indicate its happiness. This gave me an idea. Without discussing it with the professor I gave the cat a short, sharp smack on the top of the head. The robotic cat pulled its head back sharply and I have to say that the sound it made was a very realistic hissing sound. Any of you cat owners out there know the sound... it comes from very high in the mouth cavity. I was quite surprised and the professor nearly jumped 3 feet in the air. Not because I had smacked the cat but because he was totally astonished at the reaction of the cat. He quickly explained that he had not put anything specific into the hardware or software for dealing with responses to harmful/'painful' stimuli. Somehow, within the basic physical and emotional model he had designed there was the capacity for dealing with this opposite emotional state, and the reaction that the cat had come up with was totally realistic in our sense of what a cat would do, even though the model was never calibrated to do this!
So, here's a question for you. What did the cat feel, if anything? It certainly wasn't a programmed response to the stimuli, so what generated the response to an apparently 'painful' stimulus? If you feel that the response was merely an emergent property of the programming that already existed and the cat didn't really feel anything, then what does this say about organic cats? Should I now not worry about going and smacking organic cats on the head?
Moving along...
I'm going to avoid the use of the word 'soul', because, quite frankly, its existence cannot be proven and invocation of the idea of a soul is just a quick way to say we have no idea how thoughts are generated.
quote:
Original post by krez
seriously, though... the computer wouldn''t be feeling anything. it is simply processing data (even if it is a lot of data, and a lot of very complex processing). it matters because we are curious if we can make a computer really, actually, have emotions, not whether we can program it to trick us into thinking so.
What is your brain processing? Given this, what is its output?
quote:
Original post by krez
I disagree that we act irrationally. I believe we only seem irrational because we are acting as rational as we can except the data that we are rationalizing with is either misinterpreted or wrong. (I don't have a degree in any of this but if you can point me in the direction of a source that disagrees with me, please do) In the case of love, we are only being 'irrational' because we are trying to do everything in our power to get this person either to know us, or like us, or whatever. (See Ronin's post)
We should be careful with the use of the term 'rational'. It has a specific meaning when applied to people and to AI. In the example of someone acting irrationally with regards to love, that irrationality is only an external perspective. Rationality can only be evaluated from the perspective of the agent that is acting. Hence, while it may seem that someone is acting irrationally when they are in love, to them they are acting perfectly rationally according to THEIR personal measure of utility (which in this case is a function of their happiness).
quote:
Original post by krez
when you love someone else (not just a crush or whatever you are talking about) you value them over yourself, and will act irrationally to help them.
That's not acting irrationally. In fact, it is acting quite rationally according to a utility function that puts the physical well-being of someone else above your own. It would certainly look like an irrational action to someone who puts their own personal well-being above that of anyone else, but that doesn't mean it IS irrational. As I said, rationality must be evaluated with reference to the acting agent's utility measure.
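To make the point concrete, here's a minimal sketch (my own toy example, not anything from krez's or Ronin's posts) of rationality as maximising the agent's OWN utility function. The action names and utility weights are invented for illustration:

```python
# Rationality = picking the action that maximises the agent's own utility.
# Two agents face the same choices but value them differently.

def rational_choice(actions, utility):
    """Return the action that maximises this agent's utility function."""
    return max(actions, key=utility)

# Each action: (benefit to the loved one, cost to self) -- made-up numbers
actions = {
    "ignore":             (0.0, 0.0),
    "help_a_little":      (0.3, 0.2),
    "sacrifice_for_them": (1.0, 0.9),
}

# An egoist's utility counts only their own cost...
egoist = lambda a: -actions[a][1]

# ...while someone in love weights the other's well-being above their own.
lover = lambda a: 2.0 * actions[a][0] - actions[a][1]

print(rational_choice(actions, egoist))  # -> ignore
print(rational_choice(actions, lover))   # -> sacrifice_for_them
```

Both agents are perfectly rational; they simply maximise different utility functions. The lover's choice only *looks* irrational from the egoist's perspective.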
quote:
Original post by krez
setting variables would not be making the computer believe it is in love. computers don''t "believe" anything.
How does your brain record and utilise the belief that the sun will rise tomorrow? Given your answer, if I now asked you to consider that you lived your entire life in a room and had only books to read to obtain information (but assume ANY information could be written into the book) then how would your brain record and utilise the belief that the sun will rise tomorrow?
Is there a difference between the two beliefs in your brain and is there a difference between the two beliefs in your conscious thoughts?
I'm not suggesting I have the answers to these questions (although I have some fairly well informed opinions). I am simply curious to hear different answers to these questions from different people.
Finally, regarding Shadows' attempt at generating a random number. A random number is not one selected without thought. A random number is one sampled from a set of numbers without bias or preference to any particular number in that set. You (apparently) chose the set of integers. Based on a very quick analysis of your number, one of two conclusions can be drawn. 1) IF you used the digits running horizontally above your keyboard, then you are probably left handed, as your brain favoured your left hand significantly when producing that sequence of numbers (60% with the left hand and 40% with the right hand), assuming you did something 'close' to touch typing. 2) If however you used the keypad to the right of the keyboard, and used a single hand (probably the right), then your brain favoured your index (first) finger more. Your middle finger was not very liked by your brain, probably because it was squeezed between your index and ring fingers on the small keypad. It was only used to produce 25% of the digits (given the use of the keypad and some assumptions about how a hand types on the keypad). Your ring finger produced 35% of the digits given the above assumptions. Basically, the probability distribution of the digits was extremely skewed. There is significant evidence to suggest that your brain exhibited a bias when it selected the particular number to represent on the screen! Hence, your number is not random.
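For anyone who wants to try this on their own "random" number, here's a rough sketch of the hand-split analysis. It assumes touch typing on the QWERTY number row, where 1-5 are usually struck by the left hand and 6-0 by the right; the sample digit string below is made up, NOT Shadows' actual number:

```python
# Crude bias check for a hand-typed digit string: which hand typed each digit?
# Assumption: QWERTY number row, touch typing (left hand covers 1-5).

def hand_split(digits: str):
    """Return (left-hand count, right-hand count) for a digit string."""
    left_keys = set("12345")
    n_left = sum(1 for d in digits if d in left_keys)
    return n_left, len(digits) - n_left

digits = "3141592653"  # hypothetical sample for illustration
n_left, n_right = hand_split(digits)
print(f"left hand: {n_left}, right hand: {n_right}")  # 8 vs 2: heavily skewed
```

A genuinely uniform sampler would split roughly 50/50 over a long run; a strong skew like this hints the "selection" was biased by motor habits rather than anything like uniform sampling.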
Well, enough from me. It's late on a Friday and time to go home to the one I love! ;-)
Cheers all,
Timkin