
Is Artificial Intelligence (AI) Really a Threat?

Started January 31, 2015 01:53 PM
51 comments, last by Sik_the_hedgehog 9 years, 9 months ago

Well technically the most dangerous AI is one that doesn't care about us at all and has some goal orthogonal to ours. It would kill us not out of hate, but simply because its goal doesn't require us to exist, yet does require resources we have or use.


Exactly. Emotions and feelings are human constructs and most likely wouldn't be part of an AI's programming.

If we struggle to define love, as philosophers have for millennia, how could we program it into a computer?

But how can a machine do anything other than parse through a list of options? It's never going to have any human-like reasoning, a conscience, or a soul. It will only ever process a linear set of solutions to a given set of problems and try to match a best fit, like a dating agency of sorts. It could only ever be an encyclopedic knowledge base.
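
To be concrete, that kind of "scan the options and pick the best fit" behaviour is trivial to sketch in code. A toy example follows; the options, features, and scoring function are all made up purely for illustration:

# Minimal sketch of "parse a list of options and match a best fit".
# Everything here (options, features, scoring) is hypothetical.

def score(option, problem):
    """Count how many required features an option satisfies."""
    return len(option["features"] & problem["needs"])

options = [
    {"name": "solution A", "features": {"fast", "cheap"}},
    {"name": "solution B", "features": {"fast", "reliable", "cheap"}},
    {"name": "solution C", "features": {"reliable"}},
]

problem = {"needs": {"fast", "reliable"}}

# A linear scan over every option, keeping the highest-scoring one --
# exactly the "dating agency" matching described above.
best = max(options, key=lambda opt: score(opt, problem))
print(best["name"])  # -> "solution B"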

As for the idea of killing people to save food: do you not think many, many humans have suggested that over the centuries, and probably carried it out as well? By the way, I do not condone this!

Of course, what I described is still many years away. However, I think you are confusing things. Giving an AI a final goal condition, and then having it process various solutions that lead to that goal, is already possible in very limited environments (mostly data sets). Reasoning is all about data, and has nothing to do with consciousness, souls, or any of that voodoo.

Once a machine is able to take in a set of arbitrary inputs from the system it's working with, it can then derive a method to produce the desired output using the means it was programmed with. An important thing is not to confuse robots with AI. AI for infrastructure is one of the biggest things I'm worried about.
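
To illustrate what I mean by handing a machine a final goal condition and letting it search for a solution: here's a minimal sketch. The state space and moves are toy values, purely hypothetical.

from collections import deque

# Minimal sketch of goal-directed search: the machine is given a goal
# *condition*, not a procedure, and searches for an action sequence that
# satisfies it. The states and moves here are hypothetical toy values.

def successors(state):
    """All states reachable in one step (toy moves: +1 or *2)."""
    return [state + 1, state * 2]

def search(start, goal_reached):
    """Breadth-first search until the goal condition holds."""
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if goal_reached(state):
            return path
        for nxt in successors(state):
            if nxt not in seen and nxt <= 1000:  # bound the toy space
                seen.add(nxt)
                queue.append((nxt, path + [nxt]))
    return None

# The "final goal condition": reach any state divisible by 24.
print(search(1, lambda s: s % 24 == 0))  # e.g. [1, 2, 3, 6, 12, 24]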

Your problem is that you are clearly responding to a Christian here. A soul? Humans don't have souls either. That's the appropriate response to his comment.


We don't struggle to define love. It's pretty simple; many people simply refuse to accept the scientific explanation: evolved releases of chemicals that stimulate procreation and, later, cooperation between parents. Unless you're referring to something other than romantic love.


Easy now, I agree with you, but we should still respect it.



Oh boy. I have a feeling this line of discussion might start a flame war...

On that note though, I am sure that the first time someone creates an AI there will be religious uproar. It would disprove the idea that only a god can create life, and throw religious belief into question much like first contact with alien life would. There would be those who instantly denounce it and refuse to believe it is real because it doesn't fit their ideology. That in itself might be extremely dangerous.
I don't think intelligence (on any scale) is what we should be worried about. We already live in a world with billions of highly intelligent beings, some of which already have extremely destructive goals.

Intelligence is limited by its ability to act. I think what we should be worried about is giving any intelligence (human, artificial, or otherwise) too much power to influence the physical world.

I agree that it's the impact that matters, and that we already have many intelligent monsters out in the world. However, we are living in an increasingly digitized age. The impact an AI can have is large and will only get larger as time goes by.

Perhaps sentient AIs will be like humans: some will be good, some will be bad (criminals, terrorists, and so on), if they take any interest in humans at all. Remember that each sentient AI will have its own goals, which will not necessarily be the same, so they don't all have to desire the extermination of the human race.

No one expects the Spanish Inquisition!

On a complete side note: this is exactly what the TV series Person of Interest explores (with lots of action, obviously). I really recommend it.


I have two points on this. Let's assume for one second that a fully self-aware, highly intelligent AI is born tomorrow with marvelous capabilities (e.g. able to decrypt any communication or control critical infrastructure):

1. We are indeed very dependent on technology. A lot of people assume the technology is just there. I'll never forget a major blackout in my city three years ago: no traffic lights, no street lights, not even the cell towers worked. For two hours. It was evening, so it was hard to see.

It was chaos.

People were driving as fast as they could through the avenues; crossing them was suicidal. People in the street were walking at an accelerated pace. I was on a bus, and most of the crowd was gripped by hysteria and paranoia because they couldn't use their cell phones to tell their loved ones they were OK (I don't know how they managed to survive 10 years ago, when only a few people had cell phones).

Everyone just wanted to get home.

In the end, though, there were only minor accidents.

All of this was just a series of unfortunate events that led to near-complete technology failure for two hours, and it revealed how much people depend on it. Like an addiction.

I don't want to imagine what could happen if this were done... on purpose. But in the end, it's not like everyone died and the city disappeared off the face of the earth. Two hours later everything went back to normal.

2. Computers may become very powerful, as in the movies, but they're not invulnerable. They need energy and maintenance to function, and they are vulnerable to electric shocks, overvoltage, magnetism, strong interference (e.g. radio waves), and ultimately EMPs.

SkyNet's approach of nuking everything would not work, because that would cover the skies and stop the majority of existing power plants from functioning, which would cripple the AI's power supply. Not to mention the radiation would interfere with the machines' wireless communication. Also, nobody would be left to extract raw materials to manufacture more machines; factories have a lot of automation, but they also require a lot of human workers.

So, bottom line, and assuming all machines turn against us (and none side with humanity): a lot of people could suffer and die, but I doubt humankind as a species would be overthrown or replaced by machines. Worst case, a truce would be reached and we would live together, or we'd stay locked in a battle that never ends.


It's not about respect, and a flame war isn't about to start. But there is zero purpose in monotheists discussing AI. What's the point of trying to have a conversation where one side is mindkilled towards over 50% of the premises of said discussion? It's impossible to answer their arguments because they have no basis in reality, so speaking to them about philosophy is a total waste of time. Compare this to pagans, for instance: in Greek myth there are thinking machines and sentient animals. Of course, all the pagans are dead now; wonder where they went...

This topic is closed to new replies.
