Quote:
Original post by necreia
Quote:
Original post by Calabi
... They might get the odds quite high, but never perfect, which is probably why we won't have fully autonomous fighting machines for a long time, if ever.
I don't think the line that needs to be crossed is "perfect" but rather "at or below human error".
If a soldier has a 96% success rate but the robot has 99%, the robot should replace the soldier.
This is extremely frightening to me. Imagine if we managed to build robots with nanotechnology and imprinted an AI with a "seek and destroy bad guys" algorithm that was 99% effective. The robots would fly like hummingbirds and act as the high-speed projectiles themselves (instead of firing bullets). They could be dropped onto a city (via missile payload) and they'd scour it like a massive swarm of locusts, killing every hostile person. Air forces, navies, armies, marines, etc. would all be rendered obsolete.
The scariest part is that these little buggers could be mass-produced like Twinkies and wouldn't cost more than $5 apiece. Suddenly, politicians of technologically advanced nations could wage a "war" (more like a slaughter) without loss or risk of life to their own citizens, and maybe at a cost of a few million dollars. The only restraint would be a moral one, and based on what we see with corrupt politicians, we'd all be doomed.
Suppose that you do replace all your infantry with robots and fight a war against a technologically inferior nation. Is it even fair that death on one side results in scrap metal while death on the other results in severe loss of life and families torn apart? I mean, you'd have a bunch of bots on one side going "boom, headshot!" with mathematical precision, and "spray and pray" on the other, hoping they get lucky.
I think the future is too scary if war is still being waged. People need to STFU about their petty beefs with each other and find better ways to get along... before it's too late and we drive ourselves extinct.
---------------
Quote:
Influences, I think, are what make the important distinction here:
1. The drone was only "influenced" by its manufacturers.
2. The robot was influenced by the scientist, and a little by whatever information it "learned"... Who decided what it should learn? - The scientist, probably.
3. Frankenstein’s monster was only influenced by Dr. Frankenstein, so he's the main cause.
4. Jones's child is influenced by the parents, but more so by society. It's simply not possible to pin down the main point of influence, which places the blame on the central point: the child.
In the first scenario, the Predator drone is just a set of instructions. Lots of if-then statements. There's not really an "influence", other than the programmer who wrote the instructions.
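To make that concrete, here's a toy sketch of what I mean by "a set of instructions". All the names (Drone, target_locked, cleared_to_fire, and so on) are made up for illustration; a real flight controller is obviously far more complex, but the principle is the same:

# Toy illustration of rule-based control: every "decision" is a
# condition a programmer wrote ahead of time. Hypothetical names,
# not any real system.

class Drone:
    def __init__(self):
        self.fuel = 100.0             # percent remaining
        self.target_locked = False
        self.cleared_to_fire = False  # set by a human operator

    def step(self):
        # One pass of the control loop: nothing but if-then rules.
        if self.fuel < 20.0:
            return "return_to_base"
        if not self.target_locked:
            return "search_pattern"
        if self.cleared_to_fire:
            return "engage"
        return "hold_and_track"

drone = Drone()
print(drone.step())  # -> "search_pattern"
drone.target_locked = True
print(drone.step())  # -> "hold_and_track": still waiting on the human

Every branch the machine can take was written by a person in advance; nothing in there resembles a decision the drone made on its own. That's why I'd put all the "influence" on the programmer.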
The other three situations are a thought exercise in exploring our humanity. You seem to believe that our sense of humanity is bestowed upon us by our nurturing. If you give each being the exact same nurturing in every scenario, so that nurture isn't a factor, is there anything about our nature as human beings which distinguishes us from Frankenstein's monster and the sentient robot?
Quote:
The obvious rebuttal to this idea (outside of the technical issues of machine learning) is that the whole appeal of these kinds of AI is that they can be manufactured, that we don't have to wait 20 or 30 years for them to "ripen". That brings up questions about deploying manufactured, inorganic, autonomous devices to kill human beings and thus destroy the decades of development that go into each person.
Let's say that the sentient robot spends 20 years learning the ways and customs of people by being part of a family. Or maybe you even have fifty robots learning our ways and customs. Then you pick the one robot you think exhibits the best character traits, copy its brain, and upload it to every subsequent robot produced. Each manufactured robot would then have 20 years of life experience, yet be one day old.
A) Would it be immoral to "delete" the other 49 electronic brains? (probably an unoriginal question that's been asked a thousand times) What's the difference between deleting an electronic mind with 20 years of life experience and murdering a human being?