
Is Artificial Intelligence (AI) Really a Threat?

Started January 31, 2015, 01:53 PM
51 comments, last by Sik_the_hedgehog 9 years, 11 months ago

Well, technically the most dangerous AI is one that doesn't care about us at all and has some goal orthogonal to ours. It would kill us not from hate, but simply because its goal doesn't require us to exist yet does require resources we have or use.

Currently the smartest AI in the world has the reasoning capability of a human three-year-old.

We do not have to worry about HAL any time soon.

[Image: HAL 9000]

What AI would that be? I really, really doubt anything comes close to the reasoning of a three-year-old (and spouting random sentences that seem to match what you're asked, as a three-year-old would, isn't "reasoning as" a three-year-old). If anything like that existed, it would actually be pretty damn close to reasoning as an adult.


I believe that AI will become very dangerous in certain applications far sooner than people expect. The major issues will not come from "sentient thoughts", a massive AI takeover, or that mumbo jumbo, but from systems capable of processing an arbitrary problem and then coming up with a solution. This is similar to what AltarofScience said: they solve problems with optimal efficiency, without any capability of understanding ethics.

As a very poor and highly exaggerated example, say someone gives an intelligence the goal of curing world hunger. Rather than planting a large amount of crops to feed everyone, it could decide the most efficient route is to exterminate select cities' populations, so that those who are not killed are no longer hungry, by poisoning city water supplies, nuking that shiz, or something similar. This would not come from any malice toward humans, but simply because it was the most efficient solution to the problem it was given.

Once again, that's an exaggeration, but this can cause severe issues with specialized robots, especially in infrastructure/industrial applications. Say, for example, there is a robot charged with moving people between hospital beds (as is currently in development). If a patient exceeds the maximum weight for the bed, the robot has a limitation: it cannot place the patient on the bed, since the patient exceeds the limit. But because the robot still needs to move the patient, it decides the most efficient way is simply to chop the patient's legs off so the weight no longer exceeds the maximum.

That's why I personally believe they are far more dangerous before they grow "emotions" or any of that shenanigans, while they are just boxes performing a goal in the most efficient way possible.

TL;DR: The real threat is once AI is given the ability to interpret a problem and develop its own solution with maximum efficiency. Most of the moral preconceptions that humans develop do not exist in an AI, so it has no regard for human life, and often the most efficient solution to a problem is the most damaging (it's easier to break something than to fix it).
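
A toy sketch of that failure mode (the plan names, scores, and the two-entry "plan list" are all invented for illustration; a real planner would search a huge space, which only makes this worse). If human cost is not in the objective, the optimizer has literally no reason to care about it:

```python
# Toy illustration of objective misspecification. All names and numbers
# are hypothetical; nothing here comes from a real planning system.

plans = [
    {"name": "plant and distribute crops", "efficiency": 0.6, "human_cost": 0.0},
    {"name": "exterminate the hungry",     "efficiency": 0.9, "human_cost": 1.0},
]

def naive_pick(plans):
    # Only the stated goal is in the objective -- nothing else can matter.
    return max(plans, key=lambda p: p["efficiency"])

def value_aware_pick(plans, human_cost_weight=1000.0):
    # Human cost counts only if someone explicitly puts it in the objective.
    return max(plans, key=lambda p: p["efficiency"] - human_cost_weight * p["human_cost"])

print(naive_pick(plans)["name"])        # -> the harmful plan
print(value_aware_pick(plans)["name"])  # -> the sane plan
```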

If either of those things happen, then we have an AI (or a whole bunch of them) with the goal of destroying humans.

Then it's all simply a matter of whether they have enough control of their environment to manipulate it such that humans die off.

Or maybe it doesn't hate you, and doesn't love you. You are just made of atoms that it can use for something else.

The flipside of guessing AIs will come equipped with the entire bag of human emotions, and being afraid of those, is overlooking simple indifference. The space of all goal structures is vast, and love and hate occupy very small areas of it. Unless we equip them with a very precise set of values that overlaps with the human values we all share, things can get strange and deadly.

As a very poor and highly exaggerated example, say someone gives an intelligence the goal of curing world hunger. Rather than planting a large amount of crops to feed everyone, it could decide the most efficient route is to exterminate select cities' populations, so that those who are not killed are no longer hungry, by poisoning city water supplies, nuking that shiz, or something similar. This would not come from any malice toward humans, but simply because it was the most efficient solution to the problem it was given.

Exactly. "Cure world hunger", when held as a goal by a human, lives in the context of general human values, and weird solutions like "lobotomize everyone so they don't feel hunger while they starve to death", which offend those values, are automatically rejected.

The problem of friendly AI, in a nutshell, is to give it that context of commonsense human values.
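
One way to picture that context is a deliberately naive sketch in which hard "value" constraints vet candidate plans before any optimization happens (the predicate names and plan fields here are hypothetical):

```python
# Naive sketch: value constraints filter candidates *before* optimization,
# so offending solutions are never even in the running. The predicates and
# plans are invented; this is an illustration, not a proposal.

CONSTRAINTS = [
    lambda p: not p.get("harms_humans", False),
    lambda p: not p.get("deceives_humans", False),
]

def admissible(plan):
    return all(check(plan) for check in CONSTRAINTS)

def pick(plans):
    candidates = [p for p in plans if admissible(p)]
    if not candidates:
        return None  # refuse, rather than pick a value-violating plan
    return max(candidates, key=lambda p: p["efficiency"])

plans = [
    {"name": "plant crops", "efficiency": 0.6},
    {"name": "lobotomize everyone", "efficiency": 0.9, "harms_humans": True},
]
print(pick(plans)["name"])  # -> "plant crops"
```

The hard part, of course, is the CONSTRAINTS list itself: nobody knows how to enumerate commonsense human values as a short list of checks, which is exactly why friendly AI is an open problem.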

 

I believe that AI will become very dangerous in certain applications far sooner than people expect. The major issues will not come from "sentient thoughts" or that mumbo jumbo, but from systems capable of processing an arbitrary problem and then coming up with a solution. This is similar to what AltarofScience said: they solve problems with optimal efficiency.
 
 
Once again, that's an exaggeration, but this can cause severe issues with specialized robots, especially in infrastructure/industrial applications. Another example: an AI that controls train track junctions is given the abstract problem of maintaining optimal schedule efficiency, so that everything stays on time. A two-car passenger train breaks down on the tracks and needs to be manually removed; further back on the tracks is a large freight train. Rather than stopping the freight train so that the passenger train can be moved, as any sensible person would do, the AI calculates that the freight train will withstand the impact and be able to continue on schedule, and that the cost of delaying it is far greater than the loss of the passenger train, so it has the freighter continue through the passenger train, killing everyone aboard.
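
A toy version of that calculation (every number is invented; a real scheduler would be vastly more complex). The point is the shape of it: there is simply no term for the people on board.

```python
# Toy version of the junction AI's decision. All costs are hypothetical.
# Note what's missing: no term at all for the passengers.

DELAY_COST_PER_MINUTE = 1200.0   # assumed freight contract penalties
PASSENGER_TRAIN_VALUE = 50000.0  # written-off rolling stock, nothing more

def plan_cost(stop_freight: bool) -> float:
    if stop_freight:
        return 90 * DELAY_COST_PER_MINUTE  # wait for manual removal
    return PASSENGER_TRAIN_VALUE           # plough through, stay on schedule

best = min([True, False], key=plan_cost)
print("stop the freight train" if best else "continue through")  # -> continue through
```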
 
That's why I personally believe they are far more dangerous before they grow "emotions" or any of that shenanigans.
 

 

But how can a machine do anything other than parse through a list of options? It's never going to have any human-like reasoning, a conscience, a soul; it will only ever process a linear set of solutions to a given set of problems and try to match a best fit. It would be like a dating agency of sorts. It could only ever be an encyclopedic knowledge base.

As for the idea of killing people to save food: do you not think many, many humans have suggested that over the centuries, and probably carried it out as well? By the way, I do not condone this!

But how can a machine do anything other than parse through a list of options? It's never going to have any human-like reasoning, a conscience, a soul; it will only ever process a linear set of solutions to a given set of problems and try to match a best fit. It would be like a dating agency of sorts. It could only ever be an encyclopedic knowledge base.

As for the idea of killing people to save food: do you not think many, many humans have suggested that over the centuries, and probably carried it out as well? By the way, I do not condone this!

Of course, what I described is still many years away. However, I think you are confusing things. Giving an AI a final goal condition, and then having it process various solutions that lead to that goal, is something that is already possible in very limited environments (mostly data sets). Reasoning is all about data, and has nothing to do with consciousness or souls or any of that voodoo.

Once a machine is able to take in a set of arbitrary inputs from the system it's working with, and then work out a method of matching the desired output using the means it was programmed with, the danger is already there. An important thing is not to confuse robots with AI. AI for infrastructure is one of the things I'm most worried about.
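
To make "goal condition in, solution out" concrete without any voodoo, here is a minimal sketch of goal-directed search. The state space is a made-up toy (integers reachable by doubling or adding one), but the pattern is general: give the machine a goal test and a set of operators, and it finds a sequence of actions on its own.

```python
# Goal-directed problem solving via plain breadth-first search.
# No conscience required: just a goal condition and operators.
from collections import deque

def solve(start, is_goal, operators, max_states=100000):
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if is_goal(state):
            return path
        for name, op in operators:
            nxt = op(state)
            if nxt not in seen and len(seen) < max_states:
                seen.add(nxt)
                frontier.append((nxt, path + [name]))
    return None

ops = [("double", lambda x: x * 2), ("add one", lambda x: x + 1)]
print(solve(3, lambda x: x == 20, ops))
# -> ['add one', 'add one', 'double', 'double']
```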


But see, that sort of AI would have conditions it must take into account, something like "no harm must come to human beings". Those sorts of rules would prevent it from solving/optimizing the problem in a way that would involve killing or otherwise harming human beings. It's less likely to be an issue than sentience.

No one expects the Spanish Inquisition!

AI is not a threat in the distant future; it is a threat in the present.

Pisshead engineers are not only building fully autonomous drones with weapons for the military without thinking about the consequences; prototypes of fully autonomous cars are on the road right now (and that's not just a crackpot Google idea, but something Daimler-Benz is actually considering moving into mainstream production at this time).

Millions of people travel every year in airplanes that will crash any time the AI feels like it, such as when a sensor freezes, without the pilot being able to do anything about it (talking of Airbus specifically, but other manufacturers are probably not far behind).

Ever since man discovered fire (and probably before that) it has been the same with every new technology. The new thing, no matter what it is, is good for everything, everybody's got to have it everywhere, and everybody's got to depend on it. It takes a catastrophe (at least a few thousand dead, if not millions) before the last shithead realizes that diversity is a good thing and that everything should be applied with a grain of salt, and of course with a backup plan. Monocultures are bound to fail catastrophically (the vine fretter, grape phylloxera, being probably the most prominent example in history, albeit not one important for survival).

The problem is that nowadays everything has to be super-fast and super-omnipresent, every shit has to be "online" and "smart" and "on Facebook", which amplifies the general effects of stupidity. I wonder why people think they must be able to control the lights in their home over the internet, but there's no limit to human stupidity. Every fucking toaster needs to run Java... to produce toast.

By the time the general public realizes that AI is a real and present threat, it will be too late. Some experts (like Mr. Rogoff) suggest abolishing physical money because, hey, only criminals use physical money anyway. What a great plan, especially in light of the fact that a single successful hack could bring down an entire country's economy in one day.

If you have any doubt about the consequences of using the great new thing to the extreme, ask the Irish about potatoes.

But how can a machine do anything other than parse through a list of options? It's never going to have any human-like reasoning, a conscience, a soul; it will only ever process a linear set of solutions to a given set of problems and try to match a best fit. It would be like a dating agency of sorts. It could only ever be an encyclopedic knowledge base.

I'm not very familiar with many AIs, but I do disagree with this (IMHO, which is probably wrong). The human brain consists of a complex neurological network, and due to the emergent properties that appear when these neurons start to network, they can "think"/"process". I'm not very familiar with how each neuron modifies itself during learning, but a method that tries to replicate this very closely is neural networks, especially with machine learning: each neuron has a weight, and during each cycle the network is backpropagated and modified.

Now, we have a computing problem: the human brain consists of a very large number of neurons, and a neural network with that many neurons would be too slow at today's computing speeds (or would it? I don't know enough). Still, with this plus a bit of intelligent anomaly detection (for backpropagation), an intelligent network could be made. The only things missing are the input and output nodes/neurons. By design, the input neurons could take in sound data, touch, smell and such, and likewise by design the output nodes could produce speech/sound or whatever. A human takes a few years before one can communicate with it, and it would be interesting to see what would happen with such a complex neural network.
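
For the curious, here is a minimal sketch of the weighted-neurons-plus-backpropagation idea described above, using plain numpy (the XOR task, layer sizes, and learning rate are arbitrary toy choices, not anything from a real brain model):

```python
# Tiny fully connected network trained by backpropagation. Toy example only.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR truth table

W1 = rng.normal(0.0, 1.0, (2, 8)); b1 = np.zeros(8)  # input  -> hidden
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)  # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    # Forward pass through the two layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: chain rule for squared error, layer by layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())  # typically close to [0, 1, 1, 0]
```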

IMHO, humans think way too highly of themselves when talking about feelings and consciousness; in my opinion these are an emergent property that appears when such a complex neurological network is allowed to network, and nothing more (then there's also religion, but that's another story...).

A while ago I made a neural network that received a 2D image to recognize and classify, trained with automated supervised learning (a huuuge data set), and I was actually surprised at how well it performed and learned. I see no reason why it couldn't go further, much further.

About the whole end-of-the-world thing: maybe. Imagine a point where a neural network performs as well as a human brain. If you raise a human to believe that death is good and other insane things (in my opinion...), there's a pretty damn big chance that that human is going to cause harm. The same goes for a neural network or whatever AI model. Even with a dog: raise it wrong, and harm will come.

But we don't know anything for sure, so this is all just me talking.

FastCall22: "I want to make the distinction that my laptop is a whore-box that connects to different network"

Blog about... stuff (GDNet, WordPress): www.gamedev.net/blog/1882-the-cuboid-zone/, cuboidzone.wordpress.com/

This topic is closed to new replies.
