
Blurry line between morality and sentience?

Started by July 30, 2009 03:09 PM
18 comments, last by Dim_Yimma_H 15 years, 3 months ago
You can also look at it like this: the longer something has spent functioning well, the more serious its eventual failure becomes, because the failure doesn't make (common) sense.

Everything that's new and has no experience is probably held less responsible than that which should know better. Frankenstein's monster didn't get much time to learn anything, while its creator should have been able to learn before it. The short time of functioning is probably also why the monster will be demolished quite quickly: it hasn't existed long enough for anyone to care about its further existence.

Parents will always have the opportunity to know better than their children, but once the children have existed long enough it's no longer possible for their parents to know for them; the influences on the children come from so many places.

Influences, I think that makes an important distinction here:
1. The drone was only "influenced" by its manufacturers.
2. The robot was influenced by the scientist, and a little by whatever information it "learned"... Who decided what it should learn? - The scientist, probably.
3. Frankenstein’s monster was only influenced by Dr. Frankenstein, so he's the main cause.
4. The Jones' child is influenced by the parents, but more so by society - it's simply not possible to tell the main point of influence, which places the blame on the central point: the child.

So I think I agree with LessBread about this one.

Quote: Original post by Toni Petrina
OTOH, in all 4 scenarios you have to try to make sure that it doesn't happen again. That is why we have jails :)

That may be the only thing that, kinda objectively seen, makes sense in this dilemma.
Quote: I don't think the line that needs to be crossed is "perfect" but rather "higher than or equal to human error".

If a soldier has a 96% success rate but the robot has a 99%-- the robot should replace the soldier.


I can imagine that being the case if only the bean counters were involved, but general public uproar and old-fashioned tradition might get in the way. People, generally, are less forgiving of machines and demand more from them.
Possibly because they don't care for the machines personally. I wonder if that could change if someone grew up with an ever-present artificially intelligent machine - though that's not really realistic yet.
Now there is an idea. Instead of unleashing an AI with claws on the world straight out of the box, what if it was packaged in a cell phone (or similar small housing) and then given to a child to teach tamagotchi style or if not for interaction, then for observation. In other words, piggyback the development of such an AI on the emotional development of a child. Establish a regimen of development for AI sans claws that culminates in a series of tests that must be passed before that AI is given claws. That's what we do with ourselves. We don't just hand out guns and badges to people and say they're police.
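To make the "claws only after the tests" idea concrete, here's a minimal sketch (hypothetical class and test names, not any real framework) of gating an AI's actuators behind a development regimen:

class DevelopingAI:
    def __init__(self, name):
        self.name = name
        self.claws_enabled = False   # starts out "sans claws"
        self.passed = set()

    def take_test(self, test_name, score, passing_score):
        # Record the test only if the score meets the bar.
        if score >= passing_score:
            self.passed.add(test_name)

    def grant_claws(self, required_tests):
        # Actuators are enabled only once the whole regimen is passed.
        if all(t in self.passed for t in required_tests):
            self.claws_enabled = True
        return self.claws_enabled

REGIMEN = ["empathy_observation", "rules_of_engagement", "supervised_field_trial"]

ai = DevelopingAI("cellphone_companion")
ai.take_test("empathy_observation", score=0.92, passing_score=0.9)
ai.take_test("rules_of_engagement", score=0.97, passing_score=0.95)
print(ai.grant_claws(REGIMEN))  # False - the field trial hasn't been passed yet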

The obvious rebuttal to this idea (outside of technical issues of machine learning), is that the inspiration for these kinds of AI is that they can be manufactured, that we don't have to wait 20 or 30 years for them to "ripen". That brings up questions about deploying manufactured inorganic autonomous devices to kill human beings and thus destroy the decades of development that goes into each person.

Dim_Yimma_H also seems to be hinting at the possibility of using AI as a kind of "machine assisted ethics". So as above, the AI sans claws in a cell phone accompanies a child, observing but also advising the child in regard to the legal, ethical and moral implications of the situation the child finds him/herself in, a kind of AI-Nanny (or AI-Nag).
"I thought what I'd do was, I'd pretend I was one of those deaf-mutes." - the Laughing Man
Here's another recent news story regarding robots, ethics and the sentience of readers: Robot developers learn perils of new media

Quote:
Somewhere in the blogosphere the biomass-fueled robot being developed for the military turned into a battlefield corpse-eater. Their lesson was that information, once released, can't be controlled.
...
EATR, for Energetically Autonomous Tactical Robot, is a robotic ground vehicle that Finkelstein's small company is designing with U.S. Defense Department funding; it can sustain itself on long missions by foraging for twigs, leaves and other kinds of vegetation.

But wild speculation on the Internet this month was that Finkelstein's Robotic Technology Inc. and a partner were building flesh-eating robots for the Pentagon.

Scores of blogs and news sites, including FoxNews.com, ran with the unfounded report. The online furor caught the companies off guard and turned into a major distraction.
...
The speculation that launched EATR into the popular consciousness as a carnivore can be traced to a straightforward news release from Cyclone on July 7. The word "biomass" in the release was misinterpreted to mean EATR could feed on corpses on a battlefield.

It reached a fever pitch in mid-July, when FoxNews, Fast Company and CNET published online reports repeating the speculation, without first checking with Robotic Technologies or Cyclone.

The companies' websites were swamped. The project's main sponsor, the Defense Department's Defense Advanced Research Projects Agency, wanted the public record corrected.

Cyclone issued a second news release July 16, calling EATR a "vegetarian" -- leading to even more news coverage. A story about EATR being a "vegetarian" ran in Britain's Guardian newspaper.
...
"I thought what I'd do was, I'd pretend I was one of those deaf-mutes." - the Laughing Man
Quote: Original post by Toni Petrina
It still doesn't explain the fact that someone who did know the difference still had to act, not choose, but act.

What's to explain? If they commit the act of murder, knowing it's wrong, they're to blame.

Quote: Original post by Toni Petrina
And whatever criteria was used, you can't say that the being in question is responsible for that criteria since that would lead to infinite regress.

I never said they were responsible for the criteria. You follow the laws of the land though or you pay the penalty.

Former Microsoft XNA and Xbox MVP | Check out my blog for random ramblings on game development

Quote: Original post by necreia
Quote: Original post by Calabi
... They might get the odds quite high but never perfect, which is probably why we won't have fully autonomous fighting machines for a long time, if ever.


I don't think the line that needs to be crossed is "perfect" but rather "higher than or equal to human error".

If a soldier has a 96% success rate but the robot has a 99%-- the robot should replace the soldier.


This is extremely frightening to me. Imagine if we managed to build robots with nanotechnology and imprinted an AI which had a "seek and destroy bad guys" algorithm with 99% efficiency. The robots would fly like hummingbirds and be the high-speed projectiles (instead of bullets). They could be dropped onto a city (via missile payload) and they'd scour the city like a massive swarm of locusts, killing every hostile person. Air forces, navies, armies, marines, etc. would all be rendered obsolete.

The scariest part is that these little buggers could be mass produced like Twinkies and they wouldn't cost more than $5. Suddenly, politicians of technologically advanced nations can wage a "war" (more like a slaughter) without loss or risk of life to their citizens, and maybe at a cost of a few million dollars. The only restraint would be a moral restraint, and based on what we see with corrupt politicians, we'd all be doomed.

Suppose that you do replace all your infantry with robots and fight a war against a technologically inferior nation. Is it even fair that death on one side results in scrap metal and death on the other side results in severe loss of life and families torn apart? I mean, you'd have a bunch of bots on one side that go "boom, head shot!" with mathematical precision and "spray and pray" on the other side, hoping they get lucky.

I think the future is too scary if war is still being waged. People need to STFU about their petty beefs with each other and find better ways to get along... before it's too late and we drive ourselves extinct.

---------------

Quote: Influences, I think that makes an important distinction here:
1. The drone was only "influenced" by its manufacturers.
2. The robot was influenced by the scientist, and a little by whatever information it "learned"... Who decided what it should learn? - The scientist, probably.
3. Frankenstein’s monster was only influenced by Dr. Frankenstein, so he's the main cause.
4. The Jones' child is influenced by the parents, but more so by society - it's simply not possible to tell the main point of influence, which places the blame on the central point: the child.


In the first scenario, the predator drone is just a set of instructions. Lots of if-then statements. There's not really an "influence", other than the programmer who wrote the instructions.
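Just to illustrate what I mean by "a set of instructions", here's a toy sketch (the sensor fields and threshold are made up, nothing like real drone software) - the "decision" is nothing more than rules a programmer wrote ahead of time:

def should_fire(contact):
    # Fixed rules written by the programmer; no learning, no judgment.
    if not contact["weapon_signature"]:
        return False
    if contact["civilians_nearby"]:
        return False
    if contact["confidence"] < 0.95:
        return False
    return True

# A false positive is simply a case these rules misclassify:
contact = {"weapon_signature": True, "civilians_nearby": False, "confidence": 0.97}
print(should_fire(contact))  # True, even if the "weapon" was really a farm tool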

The other three situations are a thought exercise in exploring our humanity. You seem to believe that our sense of humanity is bestowed upon us by our nurturing. If you give each being the exact same nurturing in every scenario so that nurture isn't a factor, is there anything about our nature as human beings which distinguishes us from the Frankenstein monster and the sentient robot?

Quote: The obvious rebuttal to this idea (outside of technical issues of machine learning), is that the inspiration for these kinds of AI is that they can be manufactured, that we don't have to wait 20 or 30 years for them to "ripen". That brings up questions about deploying manufactured inorganic autonomous devices to kill human beings and thus destroy the decades of development that goes into each person.


Let's say that the sentient robot spends 20 years learning the ways and customs of people by being a part of a family. Or maybe you even have fifty robots learning our ways and customs. Then you pick the one robot you think exhibits the best character traits, copy their brain, and upload it to every subsequent robot produced. Each manufactured robot would then have 20 years of life experience, yet be one day old.
A) Would it be immoral to "delete" the other 49 electronic brains? (probably an unoriginal question that's been asked a thousand times) What's the difference between deleting an electronic mind with 20 years of life experience and murdering a human being?
Quote: Original post by slayemin
Quote: The obvious rebuttal to this idea (outside of technical issues of machine learning), is that the inspiration for these kinds of AI is that they can be manufactured, that we don't have to wait 20 or 30 years for them to "ripen". That brings up questions about deploying manufactured inorganic autonomous devices to kill human beings and thus destroy the decades of development that goes into each person.


Let's say that the sentient robot spends 20 years learning the ways and customs of people by being a part of a family. Or maybe you even have fifty robots learning our ways and customs. Then you pick the one robot you think exhibits the best character traits, copy their brain, and upload it to every subsequent robot produced. Each manufactured robot would then have 20 years of life experience, yet be one day old.


That's probably how it would be done, just like with a hard drive and a new PC.
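A rough sketch of that hard-drive analogy, assuming the learned "brain" is ordinary serializable data (the structure and file name are made up for illustration):

import copy
import pickle

# The best-behaved robot's learned state is just data...
best_brain = {
    "years_of_experience": 20,
    "learned_values": {"avoid_harm": 0.98, "honesty": 0.95},
}

# ...so it can be archived to disk (nothing is "deleted")...
with open("best_brain_snapshot.pkl", "wb") as f:
    pickle.dump(best_brain, f)

# ...and loaded into every freshly manufactured unit.
def manufacture_robot(snapshot_path):
    with open(snapshot_path, "rb") as f:
        brain = pickle.load(f)
    return {"age_in_days": 1, "brain": copy.deepcopy(brain)}

new_unit = manufacture_robot("best_brain_snapshot.pkl")
print(new_unit["brain"]["years_of_experience"])  # 20 years of experience, one day old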

Quote: Original post by slayemin
A) Would it be immoral to "delete" the other 49 electronic brains? (probably an unoriginal question that's been asked a thousand times) What's the difference between deleting an electronic mind with 20 years of life experience and murdering a human being?


The 20 years of electronic mind experience could be stored on disk, so it wouldn't be deleting it. And the machine could be shut off and reprogrammed, per the above scenario. The Sarah Connor Chronicles began to explore this idea right before Fox canceled the show. Murdering a human being on the other hand... to cite Bill Munny, "It's a hell of a thing, killing a man. Take away all he's got and all he's ever gonna have."


"I thought what I'd do was, I'd pretend I was one of those deaf-mutes." - the Laughing Man
Quote: Original post by slayemin
Scenario 1: A predator drone is flying over Afghanistan. It's equipped with air-to-ground missiles. It's flying on autopilot and searching for bad guys to blow up. After a few hours, it finds a target and shoots. Unfortunately, the predator drone had a false-positive and ended up killing innocent people.
...
Now, the question is this: In all the cases, who is the murderer? Who is to blame for the murder? who is to be punished and how is the punishment to be rendered?

The people who ordered the attack. Not the drone, because it has no mind and no free will. And not the drone's creator, because next time they can protect innocent people from terrorists. If you're not sure, don't shoot - otherwise one day civilians will buy something to protect themselves, maybe go after the people who give the orders, and become the "bad guys" themselves [smile]
Quote: Original post by slayemin
You seem to believe that our sense of humanity is bestowed upon us by our nurturing.

Ah, but that's not quite what I mean. I believe the source of influence is relevant to what responsibility can be expected from a being - the source of "nurture", in other words. Genes also play a part in the weight a being puts on different moral choices, even after the moral logic has been nurtured.

Quote: Original post by slayemin
If you give each being the exact same nurturing in every scenario so that nurture isn't a factor, is there anything about our nature as human beings which distinguishes us from the Frankenstein monster and the sentient robot?

A. Biological genes
B. Personal effort of learning
- Is there something else you're thinking of?

This topic is closed to new replies.
