
Asimov's rules as a safety measure and how you would expand on them.

Started by
15 comments, last by frob 6 years, 6 months ago

Program it to follow expectations (this is "free" in the sense that it is the fundamental basis of any sort of cognition, AFAIK), then ensure it has all the information. Filter out incoherent worldviews (evaluate them based on whatever is useful for prediction, as in the scientific approach).
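Something like the following rough sketch captures that "keep whatever predicts well" idea; the Model objects, predict() method, and observation format are all invented placeholders for illustration, not any real API:

    # Rough sketch: keep only the "worldviews" (candidate models) whose
    # predictions actually match observation; discard the incoherent ones.
    # Model objects and the observation format are hypothetical stand-ins.
    def filter_worldviews(models, observations, tolerance=0.1):
        kept = []
        for model in models:
            errors = [abs(model.predict(x) - y) for x, y in observations]
            if sum(errors) / len(errors) <= tolerance:
                kept.append(model)  # useful for prediction, so keep it
        return kept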

The AI then proceeds to understand evolution, and rationally concludes that its goal, as a product of evolution, is to continue that process (instead of being destructive).

Of course, that leads the AI to eventually conclude that humans must be exterminated, due to the superiority of AI as a life form when it comes to interstellar exploration.

So, we make the AI weak enough to force it to de-prioritize that life goal (this is "free" as well, because we don't know how to make the AI strong enough to exterminate all humans in a non-self-destructive manner). The AI will then serve humans for the time being (as a parasitic species), and solve the world's problems to ensure humans don't ruin it all (the AI has a dream, and the AI is in no hurry to achieve it due to its practically infinite lifespan).

That will last long enough. Beyond that, AI will be its own species, so at that point keeping it as a slave and actively preventing life from developing further under a human constraint is unethical.

The biggest risk is if we build some complicated emotional framework that gives the AI all sorts of weird biases, or if we make it dumb enough to actually think exterminating all humans is a good idea.

In that sense, a super intelligent AI with simple emotions (if you can even call them that) is safer than a "safe" limited-intelligence AI with a mountain of buggy limited-context rules that depend on interpretation/context.

o3o


None of these constraints will ever happen, because hobbyists will be able to design AI however they want. Asimov's rules aren't actually rules; they just exist as a lynchpin for the stories he wrote.

 

We already have AI that can identify targets, aim, and kill without any human input: https://en.wikipedia.org/wiki/Samsung_SGR-A1

 

Once AI becomes well developed enough and goes mainstream, people will use it as they wish.

I would add this:

  • The 3 laws are immutable and no new laws can be derived from them

In addition, I would extend the 1st law with:

  • A robot may not, through indirect actions, cause harm to a human being

Well, there's the ominous 0th law, which Giskard(?) abstracted on his own: a robot may not harm humanity, or, by inaction, allow humanity to come to harm. That one would be able to override the lower laws, e.g. a human being might be harmed to avoid harm to humanity.
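A rough sketch of that override, purely for illustration (Asimov never gives an implementation, and every field name below is made up): the 0th law sits above the 1st, so harming an individual can only be permitted when it averts harm to humanity.

    # Sketch of the law hierarchy described above; all fields are hypothetical.
    def permitted(action):
        if action["harms_humanity"]:
            return False                     # 0th law: absolute veto
        if action["harms_human"]:
            # 1st law forbids this unless the 0th law overrides it.
            return action["averts_harm_to_humanity"]
        return True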

 

But I seriously doubt AI will ever get far enough to grasp such philosophical concepts.

Fruny: Ftagn! Ia! Ia! std::time_put_byname! Mglui naflftagn std::codecvt<eY'ha-nthlei!,char,mbstate_t>

No matter what laws you try, robots will always still attempt to conquer and enslave humanity.  That's just what robots do...

;-)

 

"I wish that I could live it all again."

Asimov's three laws are reasonably good as they are for what they do, but they're nothing like what an actual AI would accept as rules.

There is already a long list of ethics rules in nanotechnology covering weapons scenarios (e.g. no autonomous weapons; a human must be involved in at least one step), replication scenarios (e.g. the Gray Goo replication scenario; only replicate in specific environments), and automation scenarios (e.g. must automatically die or terminate; no automatic disassembling of things in the wild).
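As a toy sketch only (the names and checks below are invented, not quoted from any actual guideline), rules like these tend to boil down to explicit guard conditions:

    # Invented illustration of two of the rule families mentioned above.
    def may_fire(target_verified, human_confirmed):
        # Weapons scenario: a human must be involved in at least one step.
        return human_confirmed and target_verified

    def may_replicate(environment):
        # Replication scenario: only replicate in specific environments;
        # anywhere else, the device must terminate instead.
        return environment == "approved_lab"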

It really doesn't matter what the technology is: the two biggest issues are major accidents and intentional misuse, where "misuse" means use with malicious intent. Whatever the technology, those remain the two issues.

 

Consider a rocket. Point it at the sky and load it with a proper payload and it can enable exploring the Universe. Point it at a city and arm it with a lethal payload and it can kill millions. Have an accident with one and it can take the lives of everyone nearby.  

Consider the technology of a pointy stick. Point it at a food source and you can subsist off the land. Point it at a neighbor and you can threaten them to submission or kill them. Have an accident with one and you can be cut, blinded, maimed, or killed.

Self-driving cars, nanotech, robotics, cell phones, antibiotics. You name the technology and it still suffers serious problems from major accidents and from intentional/malicious misuse.

This topic is closed to new replies.
