Yes it is... claiming otherwise is simply closing your mind to the possibilities that might arise out of AI... some of them can be very positive. A LOT are EXTREMELY negative for the WHOLE HUMAN RACE. Yes, caps are needed, because a lot of people do not seem to get it.
You build a machine with its own motives, an advanced enough intelligence, and now you arm and armour it so it can be used in aggressive military acts... but also so it can prevent its own destruction (because, you know, this thing will cost billions initially).
Yes, there will be many intelligent people involved in its creation who might devise failsafes to make sure they cannot lose control... on the other hand, lots of people are involved whose job it is to prevent others from gaining control over it.
So many systems built in that could be extremely dangerous in the wrong hands... and then you turn over control to an alien intelligence that most probably is still only rudimentarily understood by its creators. You see where I am going?
I say AI is much too dangerous to be overly optimistic about. You always have to apply Murphy's law. And when you apply that, you have to say: best NOT to arm your autonomous drones, no matter how convenient it might be from a military point of view; best NOT to merely build in failsafes that prevent the machine from being controlled by a third party, but to make the machine self-destruct in such an incident; and many other things that might lessen the value of the military drone, but on the other hand will make it much harder for an AI to do much harm if it goes out of control.
And the day the first AI goes out of control will come for sure...
This all comes from a person who is of the viewpoint that new technology usually brings at least as much good as it does harm. Yes, I am not that fond of atomic reactors and all, but I think they are a big improvement over coal energy... usually much less pollution, with a very minor risk of a BIG pollution incident, which can be minimized nicely if run by competent people.
I do see that AI IS the future of mankind. It is, to some extent, inevitable; it's the next evolution of computers and also kind of the next step in the evolution of mankind. Besides getting rid of sickness and age, heightening human intelligence at an increased rate is the next step, and as long as there is not some big breakthrough in neuroscience in the next few decades, AI with a well-designed human-machine interface is the only way to really achieve that.
BUT:
Just like atomic reactors, atomic bombs, military equipment, and other such things, AI is nothing that should be left to amateurs or, even worse, stock-market-driven private companies. If there is one group that tends to fuck up even more than the military, it's private companies. AI is nothing where you should cut corners, or try to circumvent laws and, even worse, common sense. Private companies do that all the time.
There needs to be a vocal group of pessimists now who make the general masses aware of the dangers of this new wonder weapon the private companies want to build and release upon the world. There need to be laws, failsafes and regulations in place BEFORE the first sentient machine walks the earth. A free-for-all like what happened with the internet is NOT a good idea when it comes to AI. It worked out fine with the internet, which wouldn't have flourished as much within tighter rules. But just think how much more harm a script kiddie will do when they hack a whole army of military drones and lose control over it.
So yes, I totally support Bill Gates and Elon Musk in their stance. Even though I think they should go into more detail about how to deal with AI, their position is a very valid one from my point of view.
If I were to venture even farther into the future and into speculation land, I'd say that even if AI does not go totally out of control and humanity survives the first wave of sentient machines, we could face even harder-to-solve ethical problems.
If machines are sentient and at least as intelligent as humans, even if you can control them or at least make sure they are friendly... can you still treat them as we do today? Will there be laws defending a machine's right to live? To learn? To be paid a wage, or to open its own shop, or to *gasp* reproduce?
How will Earth react if, besides the 10+ billion humans living on it, there are at least as many sentient machines that are now also consuming resources (let's hope they all just run on solar energy)... that need their own space to live?
How will humanity react if people are not only made redundant in their jobs by machines that are better optimized for them... they cannot even protest against it without being "robophobic" and breaking laws (think of the way black people and women had to fight to get the same rights)?
We might have at least a legal and political fight between classes (machines and humans in this case) on our hands in the not-so-distant future, one that will dwarf the gender or ethnic conflicts of the past... we can only hope it does not lead to uprisings like in tsarist Russia in the First World War, or in monarchic Europe from the 15th to the 19th century.
In the end, it might not be the AI going out of control or the AI being too alien to humans that leads to a conflict with humanity. It might be that the conflict arises because machines become too similar to humans on a psychological and emotional level, and because humans are traditionally EXTREMELY SLOW to react to social changes and to the need to break with the common state of things to keep peace between different groups.