Quote: Original post by Tom Knowlton
Moravec's book was titled???
Mind Children, Harvard University Press, 1988.
Quote: Original post by C-Junkie
It's a simple technique. Same with the word "singularity." All it is is a nifty way of observing the influence of your work. If you see people talking about "gigadeath" you know you're the one that started this discussion, rather than some other source.
Quote: Original post by C-Junkie
I'm going to have to figure out who Moravec is.
Quote: Original post by frankgoertzen
We currently have something like 8.9 billion people on Earth, and we're trashing it. So before we try to make artificial intelligence, maybe we should work on some human intelligence.
Perhaps our cars should be smart enough to know why and where we plan to drive them, and then choose whether to let us drive. Our printers should understand what we're printing before letting us print.
Quote: Original post by capn_midnight
It's an interesting topic, the ethics of AI. However, I have to put it in these terms (in regard to the "doctor AI"): what does it matter *who* dies, so long as fewer people die than when patients are treated by humans?
Quote:
Actually, current AI diagnosis systems are 90% accurate, versus human rates of 75% accuracy, and yet people would be shocked and appalled to be diagnosed by a "robot". So what? What does emotion have to do with your health and well-being? Doctoring is just another profession.
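The accuracy figures quoted above lend themselves to a quick back-of-the-envelope check of the "fewer people die" argument. A minimal sketch, where the 90%/75% rates come from the post but the 1,000-patient cohort size is an assumption chosen purely for illustration:

```python
def misdiagnoses(patients, accuracy):
    """Expected number of misdiagnosed patients for a given accuracy rate."""
    return patients * (1 - accuracy)

patients = 1000
human_errors = misdiagnoses(patients, 0.75)  # 250 expected misdiagnoses
ai_errors = misdiagnoses(patients, 0.90)     # ~100 expected misdiagnoses

# Difference per 1,000 patients under these assumed figures:
print(round(human_errors - ai_errors))  # prints 150
```

Under these (hypothetical) numbers, the AI system misdiagnoses roughly 150 fewer patients per thousand, which is the substance of the "what does it matter *who* dies" argument: the aggregate harm is lower even though the individual errors fall on different people.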
Quote: Original post by slayemin
I think the author is way off base in his thinking.
1. He is reading too much into theoretical sciences and assuming they will work.
2. Artificial intelligence systems are built to be used as TOOLS.
3. Reproducing systems need to consume resources/materials to reproduce.
4. We humans have something very few computing systems have: the ability to change the environment around us (via hands). How would an asteroid-shaped computing system go out and, say, change something? It's only capable of thought. Unless it is able to communicate with or control mindless robots, I don't see it happening. In that case, we humans are the ultimate designers of these systems, and it's unlikely that we'd even program in concepts other than the job we require the tool to perform. A robot that only knows how to mine precious metals won't suddenly figure out how the universe works, pick up a gun, and shoot people.
5. Machines deteriorate over time. Since they are such complicated beasts, requiring 100% correctness to work, they have a finite lifespan. My car is on its last legs and it's got 280,000 miles on it. Hardware/software glitches usually bring a whole system down. Perhaps the time required to build a huge asteroid-sized machine would equal the lifespan of the initial components used, which would make construction and maintenance a never-ending process.
I think the more alarming trend is happening in the defense departments, though. I foresee battlefields 20-50 years from now being taken to levels of automation that only rich, high-tech nations can afford. Tanks will drive themselves and sense, engage, and assess enemy targets. War isn't fair, but what sorts of backup and precautionary measures will we take to ensure there aren't friendly-fire incidents, or that our machines don't turn on us? Today's modern armies will definitely be annihilated by these future unmanned machines. Controlling them will be like playing a computer game. (Whoa, scary: deciding the fates of other people's lives in computer-game style. "Hey, look, it's super-realistic GTA! Let's go run people over with my tank!" "You imbecile! That's real!")
Anyway, it's already too late to try to revert battlefield automation, since DARPA, Raytheon, McDonnell Douglas, Boeing, and other defense contractors are undoubtedly already secretly developing and competing to be the first to field automated, AI-driven precision-guided weapons. The USA is already using armed drones to watch the battlefield and seek out targets. All it takes is a message popup and someone to confirm engagement.
Someday there will be sniper bots out there with many ways to sense a person: body heat, electromagnetic, optical, IR, etc. If they're roaming around a city on the battlefield, hiding behind a wall won't keep a person safe. It'll be like playing Counter-Strike against a bunch of networked bots hunting you with wall hacks and headshot scripts.
I think it's seriously possible. We have all the resources necessary to come up with this stuff.
In short, I think we need to ensure both that the people using the tools we create hold the highest levels of ethical reasoning, and that such reasoning is programmatically hard-coded into the machines themselves.
Einstein gave the world nukes... which were inevitable anyway. Maybe someday, when war becomes dangerous and devastating enough, people will choose peaceful resolutions instead of mutually assured destruction on a grand scale. I'm more optimistic than the author who wrote that long article/book.