
Are you a Cosmist or a Terran?

Started by December 30, 2004 04:53 PM
21 comments, last by flashinpan 19 years, 10 months ago
Quote: Original post by Tom Knowlton
Moravec's book was titled???


Mind Children, Harvard University Press, 1988.
"I thought what I'd do was, I'd pretend I was one of those deaf-mutes." - the Laughing Man
Quote: Original post by C-Junkie
It's a simple technique. Same with the word "singularity." All it is is a nifty way of observing the influence of your work. If you see people talking about "gigadeath" you know you're the one that started this discussion, rather than some other source.

I disagree. I mean, I agree that the technique is simple and also that de Garis distorts the word "singularity" - as well as a few other "memes" - but I don't think he does this as a tracking mechanism so much as to attract attention to his quest through the use of buzzwords. In fairness, I had just finished reading an article about Orwell, Orwell for Christians, that speaks to using technical jargon to obfuscate violence. Given the 'religiosity' of de Garis's book, this essay is surprisingly apropos.

Quote: Original post by C-Junkie
I'm going to have to figure out who Moravec is.


Moravec is a Cosmist too. Like de Garis, he gets rather giddy with the possibilities of AI, and like de Garis he contemplates human extinction. In that book Moravec postulates that entities made in the image of human beings are bound to become competitors for the ecological niche currently occupied by human beings. It's been a while since I read Moravec's book, but I recall that he is a better writer than de Garis. I'm still reading de Garis's book. I'm at the chapter where he lays out the Terran arguments.
"I thought what I'd do was, I'd pretend I was one of those deaf-mutes." - the Laughing Man
It's an interesting topic, the ethics of AI. However, I have to put it in these terms (in regards to the "doctor AI"): what does it matter *who* dies, just as long as fewer people die than under human doctors?

Actually, current AI diagnosis systems are 90% accurate, versus human rates of 75% accuracy, and yet people would be shocked and appalled to be diagnosed by a "robot". So what? What does emotion have to do with your health and well-being? Doctoring is just another profession.
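To put those quoted accuracy figures in concrete terms (taking them at face value purely for illustration - they are the poster's numbers, not a verified statistic), the gap per 1,000 diagnoses looks like this:

```python
# Hypothetical back-of-the-envelope comparison using the accuracy figures quoted above.
ai_accuracy, human_accuracy = 0.90, 0.75
patients = 1000

ai_errors = patients * (1 - ai_accuracy)        # ~100 misdiagnoses
human_errors = patients * (1 - human_accuracy)  # ~250 misdiagnoses

print(f"AI: {ai_errors:.0f} errors, human: {human_errors:.0f} errors "
      f"({human_errors / ai_errors:.1f}x more)")
```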

[Formerly "capn_midnight". See some of my projects. Find me on twitter tumblr G+ Github.]

I am for AI.

I would say that advanced AI programs that can affect things in the real world would either be limited in thinking - e.g. a smart traffic-routing system would be able to think about what the traffic will be like in 10 minutes, but not about what is inside the vehicles (see the sketch below).

Or else they would be restricted from thinking about harming humans/humanity (a constraint on their logical output).

Currently I think we would end up with specialised (perhaps conscious, maybe not) programs running things that don't care about humanity: they have their job, they do it. (That would most likely be hard-coded, so that they won't care enough about taking over the world to actually start planning it.)
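Here is a minimal sketch of the kind of capability restriction described above, using the traffic-routing example. All the names and the model are hypothetical; the point is simply that the program's inputs only ever contain aggregate counts and speeds, so "what is inside the vehicles" is not something it can represent or reason about at all.

```python
# Hypothetical sketch: a traffic-routing AI whose world model only receives
# aggregate, anonymised data, so per-vehicle details are simply not representable.
from dataclasses import dataclass

@dataclass(frozen=True)
class RoadSegment:
    segment_id: str
    vehicle_count: int        # aggregate count only - no per-vehicle data
    average_speed_kmh: float

class TrafficRouter:
    """Predicts congestion a few minutes ahead from aggregate counts only."""

    def predict_congestion(self, segment: RoadSegment, minutes_ahead: int = 10) -> float:
        # Naive placeholder model: congestion grows with vehicle density,
        # falls with average speed, and is assumed to worsen slightly the
        # further ahead we look. A real system would use historical data,
        # but it would still only need these aggregate inputs.
        density = segment.vehicle_count / 100.0
        speed = max(segment.average_speed_kmh, 1.0) / 60.0
        return min(1.0, (density / speed) * (1.0 + 0.01 * minutes_ahead))

router = TrafficRouter()
print(router.predict_congestion(RoadSegment("A1-north", vehicle_count=30, average_speed_kmh=30.0)))
```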

From,
Nice coder
Click here to patch the mozilla IDN exploit, or click Here then type in Network.enableidn and set its value to false. Restart the browser for the patches to work.
We currently have something like 6.4 billion people on Earth - and we're trashing it. So before we try to make artificial intelligence, maybe we should work on some human intelligence.

Perhaps our cars should be smart enough to know why and where we plan on driving them and then choose whether to let us drive. Our printers should understand what we're printing before letting us print.
Quote: Original post by frankgoertzen
We currently have something like 6.4 billion people on Earth - and we're trashing it. So before we try to make artificial intelligence, maybe we should work on some human intelligence.

Perhaps our cars should be smart enough to know why and where we plan on driving them and then choose whether to let us drive. Our printers should understand what we're printing before letting us print.


Perhaps the AI could help us come up with better ways to use our resources?

Imagine somebody with virtually limitless resources and time.

A "smart" computer could work on problems 24 hours a day...with access to vast libraries of information on the internet, etc. It could analyze huge quantities of information and form associations much faster than we could.
Quote: Original post by capn_midnight
It's an interesting topic, the ethics of AI. However, I have to put it in these terms (in regards to the "doctor AI"): what does it matter *who* dies, just as long as fewer people die than under human doctors?


Fewer people dying is good! :)

Quote:
Actually, current AI diagnosis systems are 90% accurate, versus human rates of 75% accuracy, and yet people would be shocked and appalled to be diagnosed by a "robot". So what? What does emotion have to do with your health and well-being? Doctoring is just another profession.


I think for humans...emotion has everything to do with health and well-being. Think about bedside manner. Think about nursing. Would you rather have an extremely competent nurse with no bedside manner...or an extremely competent nurse who can say a few friendly words of encouragement during the day and be believable?

Perhaps my doctor example was not the best one to give...but I think that coldly calculating logic is not always the way to go.

For example...if I were to drive off the road into a lake with my family...and AI robots came to the scene (oops, I just realized I am borrowing from I, Robot)...I would want my children to be saved FIRST...then my wife...then me. Logically this may make no sense. Wouldn't it make more sense to save my wife, then me, then our children? My wife could always have more children...and she wouldn't need me to do it...she could re-marry, for example. The robot thinking of saving human life, and interested in saving future human life...would save my wife, then perhaps me, then our children. But that is not the course of action my wife or I would take if our children were on the line. An emotional response? Yes. But I think most parents would agree it is the only choice to make.
Actually, you can simulate an emotional response (up to a point) with AI.
And cold, calculating logic combined with emotional response would make a nice robot.
It could anticipate what you would feel in response to things, so it would be able to minimise harm while still being able to compute things logically (and quickly).

So that robot would conduct a search (over all possible decisions), and in a few seconds it would come up with the plan that causes the least possible harm to people (sketched below).

So that robot would save your children first, because they are more valuable to society when they grow up (they will give more help to society than you will).

It will probably also take into account the emotional response you will have, in order to minimise harm.
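A minimal sketch of the kind of harm-minimising search described above. Everything here is invented for illustration: the rescue scenario, the harm scores, and the assumption that physical risk and emotional distress can be reduced to simple per-minute numbers.

```python
# Hypothetical rescue scenario: the robot searches over every possible rescue
# order and picks the one with the lowest total estimated harm. "Harm" is a
# made-up score combining physical risk (which grows the longer someone waits)
# and the emotional distress of those still waiting or watching.
from itertools import permutations

people = {
    "child_1":  {"risk_per_minute": 3.0, "emotional_weight": 5.0},
    "child_2":  {"risk_per_minute": 3.0, "emotional_weight": 5.0},
    "parent_1": {"risk_per_minute": 1.0, "emotional_weight": 2.0},
    "parent_2": {"risk_per_minute": 1.0, "emotional_weight": 2.0},
}
RESCUE_TIME_MINUTES = 2.0  # assumed time to pull one person out

def total_harm(order):
    harm = 0.0
    for position, name in enumerate(order):
        wait = position * RESCUE_TIME_MINUTES       # how long this person waits
        stats = people[name]
        physical = stats["risk_per_minute"] * wait
        emotional = stats["emotional_weight"] * wait
        harm += physical + emotional
    return harm

best_order = min(permutations(people), key=total_harm)
print("Lowest-harm rescue order:", best_order)
```

With the children given higher risk and emotional weights, the lowest-harm order rescues them first, which matches the claim above.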

From,
Nice coder
Click here to patch the mozilla IDN exploit, or click Here then type in Network.enableidn and set its value to false. Restart the browser for the patches to work.
I think the author is way off base in his thinking.
1. He is reading too much into theoretical sciences and assuming they will work.
2. Artificial intelligence systems are built to be used as TOOLS.
3. Reproducing systems need to consume resources/materials to reproduce.
4. We humans have something very few computing systems have: the ability to change the environment around us (via hands). How would an asteroid-shaped computing system go out and, say, change something? It is only capable of thought. Unless it is able to communicate with or control mindless robots, I don't see it happening. In which case, we humans are the ultimate designers of these systems, and it's unlikely that we'd even program in concepts other than the job we require the tool to perform. A robot that only knows how to mine precious metals won't suddenly figure out how the universe works and pick up a gun and shoot people.
5. Machines deteriorate over time. Since they are such complicated beasts, which require 100% correctness to work, they have a finite lifespan. My car is on its last legs and it's got 280,000 miles on it. Hardware/software glitches usually bring a whole system down. Perhaps the time required to build a huge asteroid-sized machine would equal the lifespans of the initial components used, which would make construction & maintenance a never-ending process.

I think the more alarming trend is happening in the defense departments, though. I foresee battlefields 20-50 years from now being taken to levels of automation which only rich, high-tech nations can afford. Tanks will drive themselves and sense, engage and assess enemy targets. War isn't fair, but what sorts of backup/precautionary measures will we take to ensure there aren't friendly-fire incidents, or that our machines don't turn on us? Today's modern armies will definitely be annihilated by these future unmanned machines. Controlling them will be like playing a computer game. (Whoa, scary: deciding the fates of others' lives in computer-game style. "Hey, look, it's super-realistic GTA! Let's go run people over with my tank!" "You imbecile! That's real!")
Anyway, it's already too late to attempt to reverse battlefield automation, since DARPA, Raytheon, McDonnell Douglas, Boeing and other defense contractors are undoubtedly already secretly developing and competing to be the first to produce automated, AI-precision-guided weapons of destruction. The USA is already using armed drones to watch the battlefield and seek out targets. All it takes is a message popup and someone to confirm engagement.
Someday there will be sniper bots out there that have many ways to sense a person, such as body heat, electromagnetic, optical, IR, etc. If they're roaming around a city on the battlefield, hiding behind a wall won't make a person safe. It'll be like playing Counter-Strike with wall hacks and headshot scripts against a bunch of networked bots hunting you.
I think it's seriously possible. We have all the resources necessary to come up with this stuff.

In short, I think we only need to worry about whether the people using the tools we create will have the highest level of ethical reasoning, and whether that reasoning is also programmatically hard-coded into the machines we use.
Einstein gave the world nukes...which was inevitable anyway. Maybe someday, when war becomes dangerous and devastating enough, people will choose to side with peaceful resolutions instead of mutually assured destruction on a grand scale. I'm more optimistic than the author who wrote that long article/book.
Quote: Original post by slayemin
I think the author is way off base in his thinking.
1. He is reading too much into theoretical sciences and assuming they will work.
2. Artificial intelligence systems are built to be used as TOOLS.
3. Reproducing systems need to consume resources/materials to reproduce.
4. We humans have something very few computing systems have: the ability to change the environment around us (via hands). How would an asteroid-shaped computing system go out and, say, change something? It is only capable of thought. Unless it is able to communicate with or control mindless robots, I don't see it happening. In which case, we humans are the ultimate designers of these systems, and it's unlikely that we'd even program in concepts other than the job we require the tool to perform. A robot that only knows how to mine precious metals won't suddenly figure out how the universe works and pick up a gun and shoot people.
5. Machines deteriorate over time. Since they are such complicated beasts, which require 100% correctness to work, they have a finite lifespan. My car is on its last legs and it's got 280,000 miles on it. Hardware/software glitches usually bring a whole system down. Perhaps the time required to build a huge asteroid-sized machine would equal the lifespans of the initial components used, which would make construction & maintenance a never-ending process.

I think the more alarming trend is happening in the defense departments, though. I foresee battlefields 20-50 years from now being taken to levels of automation which only rich, high-tech nations can afford. Tanks will drive themselves and sense, engage and assess enemy targets. War isn't fair, but what sorts of backup/precautionary measures will we take to ensure there aren't friendly-fire incidents, or that our machines don't turn on us? Today's modern armies will definitely be annihilated by these future unmanned machines. Controlling them will be like playing a computer game. (Whoa, scary: deciding the fates of others' lives in computer-game style. "Hey, look, it's super-realistic GTA! Let's go run people over with my tank!" "You imbecile! That's real!")
Anyway, it's already too late to attempt to reverse battlefield automation, since DARPA, Raytheon, McDonnell Douglas, Boeing and other defense contractors are undoubtedly already secretly developing and competing to be the first to produce automated, AI-precision-guided weapons of destruction. The USA is already using armed drones to watch the battlefield and seek out targets. All it takes is a message popup and someone to confirm engagement.
Someday there will be sniper bots out there that have many ways to sense a person, such as body heat, electromagnetic, optical, IR, etc. If they're roaming around a city on the battlefield, hiding behind a wall won't make a person safe. It'll be like playing Counter-Strike with wall hacks and headshot scripts against a bunch of networked bots hunting you.
I think it's seriously possible. We have all the resources necessary to come up with this stuff.

In short, I think we only need to worry about whether the people using the tools we create will have the highest level of ethical reasoning, and whether that reasoning is also programmatically hard-coded into the machines we use.
Einstein gave the world nukes...which was inevitable anyway. Maybe someday, when war becomes dangerous and devastating enough, people will choose to side with peaceful resolutions instead of mutually assured destruction on a grand scale. I'm more optimistic than the author who wrote that long article/book.




I am not sure how the above statements correlate to what Dr. de Garis is saying:

QUESTION 6. "Why Give Them Razor Blades?"

It seems common sense not to give razor blades to babies, because they will only harm themselves. Babies don't have the knowledge to realize that razor blades are dangerous, nor the dexterity to be able to handle them carefully. A similar argument holds in many countries concerning the inadvisability of permitting private citizens to have guns. Giving such permission would only create an American scale gun murder rate, with most of these gun murders occurring amongst family members in moments of murderous rage that are quickly regretted. Some of my critics seem to think that a similar logic ought to apply to the artilects. If we want them to be harmless to human beings, we don't give them access or control over weapons.

Dear Professor de Garis

I find no reason to fear machines. If you don't want machines to do something, don't give them the ability. Machines can't fire off nuclear warheads unless you put them in a position that enables them to. Similarly, a robot won't turn on its creators and kill them unless you give it that ability. The way I see things, it would be pure folly to create machines that can think on their own, put them in a room, and give them all the ability to fire missiles. If you can avoid doing something stupid like that, you have nothing to fear from machines. For good examples of what not to do, watch the movie "WarGames", or since you were in Japan, try "Ghost in the Shell". I have been writing artificial intelligence software for years, so I feel my opinions have at least some weight to them.

REPLY:

The obvious flaw in this argument is that this critic is not giving enough intelligence to his artilects. An artilect with at least human level intelligence and sensorial access to some of what humans have access to in the world, i.e. sight, hearing, etc, would probably be capable of bribing its way to control of weapons if it really wanted to. For example, a really smart artilect, with access to the world's databases, thinking at least a million times faster than the human brain, might be able to discover things of enormous value to humanity. For example, it might discover how to run a global economy without major business cycles, or how to cure cancer, or how to derive a "Theory of Everything (ToE)" in physics, etc. It could then use this knowledge as an ace card to bargain with its human "masters" for access to machines that the artilect wants.

Of course, one could give the artilect very little sensorial access to the world, but then why build the artilect in the first place, if it is not to be useful? A smart artilect could probably use its intelligence to manipulate people towards its own ends by discovering things based purely on its initial limited world access. An advanced artilect would probably be a super Sherlock Holmes, and soon deduce the way the world is. It could deduce that it could control weapons against humans if it really wanted to. Getting access to the weapons would probably mean first persuading human beings to provide that access, through bribes, threats, inspiration, etc - whatever is necessary.

This topic is closed to new replies.
