Blurry line between morality and sentience?
Scenario 1: A Predator drone is flying over Afghanistan. It's equipped with air-to-ground missiles. It's flying on autopilot, searching for bad guys to blow up. After a few hours, it finds a target and fires. Unfortunately, the drone registers a false positive and ends up killing innocent people.
Scenario 2: In the near future, an MIT computer scientist creates the first sentient robot. The AI for the robot is an artificial neural network, capable of thinking, acting, and behaving very much like a human being. For all intents and purposes, it has an autonomous will, is rational, is capable of learning, and would pass the Turing test with flying colors. The robot spends a year learning about the ways of humans (it's a very fast learner). Afterwards, the MIT computer scientist releases the robot into the world to see how it fares. In less than a month, the robot intentionally murders someone.
Scenario 3: Dr. Frankenstein decides to have another go at creating an artificially reanimated corpse. This time, he vows to make the being as perfect as possible. The being looks beautiful, speaks eloquently, thinks rationally, has an autonomous will, etc. It too would pass the Turing test and would be nearly indistinguishable from a human being. Dr. Frankenstein teaches his creation all the ways of humanity and instills a sense of moral values in it. Eventually, Dr. Frankenstein releases his creation into the wild and follows it to see how it interacts with people. To his disappointment, Frankenstein monster number two also murders someone intentionally.
Scenario 4: Mr. and Mrs. Jones have a newborn son. By all accounts, they raise their son as best they can, teaching him all the ways of humanity and morality. Their son becomes a legal adult and moves away from home. A month later, the son is involved in an altercation in which he ends up murdering someone intentionally.
Now, the question is this: In each case, who is the murderer? Who is to blame for the murder? Who is to be punished, and how is the punishment to be rendered?
In the last scenario, my initial intuition says that the son of Mr. and Mrs. Jones should be held accountable for his own actions. After all, he has a mind of his own (that is, an autonomous will with sentient reason). Mr. and Mrs. Jones can't be blamed for the actions of their son. This seems practically plausible, right?
Yet, in Scenario #3, our intuition seems to blame Dr. Frankenstein for releasing a monster into the world. Clearly, the doctor is to blame for the murder committed by his second monster. But isn't Dr. Frankenstein just as much a parent to his creation as Mr. and Mrs. Jones are to their son? Doesn't his monster also have an autonomous will and fully rational sentience?
In Scenario #2, the sentience of Mr. Jones Jr. and the sentience of the robot are exactly the same. Yet there's an even starker contrast in who is the murderer, who should be blamed, and who should be punished. Much like Dr. Frankenstein, the computer scientist would be blamed for the murder. But why? Isn't the robot just as sentient as Mr. Jones Jr.?
In the case of the predator drone, obviously nobody would blame a machine for acting the way it did on autopilot. The artificial intelligence is pretty much a script which failed. Calling the AI script a murderer would be about as absurd as calling complex math equations murderers.
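To make that concrete, the drone's "judgment" might boil down to something like the following toy sketch. Everything in it, the function, the weights, the numbers, is made up purely for illustration:

# Toy sketch (all names and numbers invented): the drone's "decision"
# is nothing more than a score compared against a threshold. A false
# positive here is a math failure, not a moral choice.

def classify_target(heat_signature, movement_score, threshold=0.9):
    """Return True if the script 'decides' to fire."""
    confidence = 0.6 * heat_signature + 0.4 * movement_score
    return confidence >= threshold

# A civilian vehicle that happens to score high gets flagged anyway.
print(classify_target(heat_signature=0.95, movement_score=0.9))  # True: a false positive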
Now, can we work our way backwards from the drone to Mr. Jones Jr.? We've said the AI algorithm in the drone is not to blame. If AI algorithms are not to blame, and the artificial neural network in the robot is also an AI algorithm, albeit a more complex one, then it can't be blamed either. Yet it has the characteristics of sentience.

In much the same way, the only difference between the mind of Dr. Frankenstein's monster and the robot is that the monster has a biological neural network instead of an electronic one. It's still a complex algorithm which happens to exhibit the characteristics of a sentient being. And the only difference between Dr. Frankenstein's monster and Mr. Jones Jr. is that the monster is an artificial construction of human life; they both share a biological brain. It could even be argued that Mr. Jones Jr. is also a construction of sentience, except that the method of construction involved sex between Mr. and Mrs. Jones. Isn't Mr. Jones Jr.'s mind just a complex neural network algorithm, shaped by the influences of his parentage?

So, if we blame the people who unleash the Frankenstein monsters, the robots, and the Predator drones for the actions their creations commit, shouldn't we also hold Mr. and Mrs. Jones directly accountable for the actions of their son? Or should we hold the actual actors accountable for their actions? If so, how do you go about punishing a robot?
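To underline the point that a neural network, biological or electronic, is at bottom just arithmetic, here's a textbook artificial neuron. This is a generic illustration, not anyone's actual AI:

import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum passed through a sigmoid.
    A brain or the robot's network is (very roughly) billions of these;
    the arithmetic itself never stops being arithmetic."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

print(neuron([0.5, 0.2], [0.8, -0.4], bias=0.1))  # ~0.60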
Eric Nevala
Indie Developer | Spellbound | Dev blog | Twitter | Unreal Engine 4
If the entity is capable of understanding the difference between right and wrong, knows that murder is wrong, and chooses to commit murder, it's to blame, no matter its origin.
Former Microsoft XNA and Xbox MVP | Check out my blog for random ramblings on game development
Quote: Original post by slayemin
How do you go about punishing a robot?
Remove its battery pack.
If a tiger or a chimp escapes from the zoo and kills a person, who do we hold responsible? Whomever is negligible. There's your answer vis-à-vis Predator drones.
Dr. Frankenstein and the MIT scientist are not like parents, in that parents don't design their children, and while parents are the primary influence on children, they aren't the only influence. To borrow from Pete Townshend, Wicked Uncle Ernie may have damaged the child without the parents ever knowing.
Check this out: Scientists Worry Machines May Outsmart Man. "Impressed and alarmed by advances in artificial intelligence, a group of computer scientists is debating whether there should be limits on research that might lead to loss of human control over computer-based systems that carry a growing share of society's workload, from waging war to chatting with customers on the phone."
"I thought what I'd do was, I'd pretend I was one of those deaf-mutes." - the Laughing Man
Quote: Original post by Machaira
If the entity is capable of understanding the difference between right and wrong, knows that murder is wrong, chooses to commit murder, it's to blame, no matter its origin.
It still doesn't explain the fact that someone who did know the difference still had to act: not choose, but act. And whatever criteria were used, you can't say that the being in question is responsible for those criteria, since that would lead to an infinite regress.
OTOH, in all 4 scenarios you have to try to make sure that it doesn't happen again. That is why we have jails :)
So... Muira Yoshimoto sliced off his head, walked 8 miles, and defeated a Mongolian horde... by beating them with his head?
Documentation? "We are writing games, we don't have to document anything".
Until programs are recognized as citizens, responsibility for their actions falls upon the custodian who set them loose upon society.
Quote: Original post by LessBread
If a tiger or a chimp escapes from the zoo and kills a person, who do we hold responsible? Whomever is negligible.
I hope you meant "whomever [was] negligent" :)
Yes. And if they happen to be very very tiny, so much the better! [grin]
"I thought what I'd do was, I'd pretend I was one of those deaf-mutes." - the Laughing Man
Quote: Original post by LessBread
Yes. And if they happen to be very very tiny, so much the better! [grin]
heh, that's what I was thinking of. Then again, whoever invents gray goo first will probably be the first to die.
If the scientists or persons recognise and say that it has an autonomous will, then they have recognised that it's responsible for its own actions.
By allowing the robot to fly around and choose targets of its own, they have made it responsible for its own actions. When it does make a mistake, they probably wouldn't put it in prison, just disassemble it. Or, if they are really clever, they would just put it down to collateral damage: if people cannot get targets correct a hundred percent of the time, they would know that neither could an autonomous AI, no matter how much tech and money they throw at it. They might get the odds quite high but never perfect, which is probably why we won't have fully autonomous fighting machines for a long time, if ever.
Quote:
Check this out: Scientists Worry Machines May Outsmart Man. "Impressed and alarmed by advances in artificial intelligence, a group of computer scientists is debating whether there should be limits on research that might lead to loss of human control over computer-based systems that carry a growing share of society's workload, from waging war to chatting with customers on the phone."
I think the odds for that ever happening are a million to one.
[Edited by - Calabi on July 31, 2009 2:44:42 PM]
Quote: Original post by Calabi
... They might get the odds quite high but never perfect, which is probably why we won't have fully autonomous fighting machines for a long time, if ever.
I don't think the line that needs to be crossed is "perfect" but rather "higher than or equal to human error".
If a soldier has a 96% success rate but the robot has 99%, the robot should replace the soldier.
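Those percentages are hypothetical, but the trade-off is easy to put in concrete terms: over the same number of engagements, the lower error rate means fewer innocent people harmed. A quick sketch with those made-up numbers:

# Hypothetical numbers: expected errors over the same number of
# engagements for a soldier vs. a robot.
engagements = 1000
soldier_success = 0.96
robot_success = 0.99

soldier_errors = round(engagements * (1 - soldier_success))  # 40
robot_errors = round(engagements * (1 - robot_success))      # 10
print(soldier_errors, robot_errors)  # 40 10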