
Good AI equals self-awareness

Started by February 16, 2009 01:43 AM
32 comments, last by bibiteinfo 15 years, 7 months ago
Quote: Original post by st8ic88
If one says that "self-awareness" can be programmed into a machine, one misunderstands the concept of "self-awareness".


The OP may be a troll, but that doesn't change the fact that you are quite simply wrong.

People put way too much value into the ideas of life, consciousness, and self-awareness.

It's tough to define self-awareness. But it's definitely easy to see that a machine could have identical existence and experience to a human. Therefore if a human is self-aware, then a machine could be so as well.
http://www.sharpnova.com
Oh Jesus. Look, make it quick. Someone mention Searle's Chinese room and then let's all yell for five pages.
Quote: Original post by Sneftel
Oh Jesus. Look, make it quick. Someone mention Searle's Chinese room and then let's all yell for five pages.


Racist.
Consciousness is the abstraction our brains use to manage the incredible amount of information we have to process.

Game AI doesn't have to do anything like that.
Anthony Umfer
For once I would like people to stop trying to anthropomorphize everything and just dump the idea of making machines "human-like." Though that won't be possible, since humans are egocentric beings who once thought the universe revolved around them. Well, I guess we haven't stopped thinking that either.
Quote: Original post by WeirdoFu
For once I would like people to stop trying to anthropomorphize everything and just dump the idea of making machines "human-like." Though that won't be possible, since humans are egocentric beings who once thought the universe revolved around them. Well, I guess we haven't stopped thinking that either.

Wow... I guess that will make this panel I'm sitting on at GDC a little awkward, won't it?

Characters Welcome: Next Steps Towards Human AI

Speaker: Robert Zubek (Senior Engineer, Zynga), Richard Evans (Senior AI Engineer, Electronic Arts), Dave Mark (President and Lead Designer, Intrinsic Algorithm), Daniel Kline (Lead Game Engineer, Crystal Dynamics), Phil Carlisle (Game Programmer and Researcher/Lecturer, University of Bolton), Borut Pfeifer (Lead AI Programmer, Electronic Arts)
Date/Time: Monday (March 23, 2009) 5:15pm — 6:00pm
Location (room): Room 2018, West Hall
Track: AI Summit
Format: 45-minute Panel
Experience Level: All

Session Description
AI characters can be beautifully modeled and animated, but their behavior rarely matches their life-like appearance. How can we advance the current state of the art, to make our characters seem more believable? What kinds of human behaviors are still missing in our AI, how can we implement them, and what challenges stand in the way? This session will discuss practical approaches to pushing the boundaries of character AI, past successes and ideas for the future, with an experienced panel representing a wide range of perspectives and games.

Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC
Professional consultant on game AI, mathematical modeling, simulation modeling
Co-founder and 10 year advisor of the GDC AI Summit
Author of the book, Behavioral Mathematics for Game AI
Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play

"Reducing the world to mathematical equations!"

Quote: Original post by edepot
The hardest thing about artificial intelligence is being able to make a program sentient, or self-aware.

Nah, it's easy to make a program self-aware: just give it a map with a you-are-here indicator. Then the program is aware of its location within the context of its universe.

How's my ammo load-out? Do I have enough fuel to make it to the next refuelling point? That's self-aware. And dirt cheap simple.
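A minimal sketch of that "dirt cheap" self-awareness, assuming a toy one-dimensional world (all class and parameter names here are hypothetical, not from any real engine):

```python
# An agent that inspects its own state relative to goals in its world:
# the you-are-here indicator plus the ammo/fuel check described above.

class Agent:
    def __init__(self, position, fuel, ammo):
        self.position = position  # "you are here" on the map
        self.fuel = fuel
        self.ammo = ammo

    def can_reach(self, depot, cost_per_unit=1.0):
        """Is the agent aware it has enough fuel to reach the depot?"""
        distance = abs(depot - self.position)
        return self.fuel >= distance * cost_per_unit

agent = Agent(position=0, fuel=5, ammo=12)
print(agent.can_reach(depot=4))   # True: enough fuel
print(agent.can_reach(depot=10))  # False: not enough
```

Nothing mysterious is happening here, which is the point: "self-awareness" in this cheap sense is just the agent querying its own state variables against its environment.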

But wait. Did we mean making the program aware of its location outside the context of its own universe? Hmm. Let's try that with people: state your purpose in life, and your place in the multiverse... tall order.

Perhaps all we really mean by "self-aware" is aware of pertinent relationships to the universe in which people ordinarily operate.

Like a language/grammar checker. Or a factory robot. Nope, that didn't cover it either, did it?

How about this: Make it do everything I can do, only better :P
--"I'm not at home right now, but" = lights on, but no one's home
Self-awareness of the sort we would strive to emulate or recreate is, in fact, not awareness of self. Instead, we would consider an entity that uses evaluative reasoning based upon its present situation and past experience to be self-aware. It is merely the evaluation of the most plausible or obvious actions, then, that would constitute a self-aware being, and, through undoubtedly complex (though entirely possible) programming, this could be achieved.

Feel free to attempt the aforementioned evaluation technique, beginning on a small scale, and observe how, as the evaluation becomes more restricted over time to the most potentially appropriate actions, the programmed entity becomes more human-like in its actions.
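One way to read that evaluation technique is as utility-based action scoring: blend the current situation with remembered outcomes, then restrict attention to the top-scoring candidates. A toy sketch, with entirely made-up actions, weights, and reward values:

```python
# Score each candidate action from the current situation plus past
# experience, then keep only a shortlist of the best candidates.

past_outcomes = {"attack": 0.2, "flee": 0.7, "hide": 0.5}  # learned rewards

def score(action, health):
    # Blend the immediate situation with past experience, equally weighted.
    situational = health if action == "attack" else (1.0 - health)
    return 0.5 * situational + 0.5 * past_outcomes[action]

def choose(actions, health, top_k=2):
    ranked = sorted(actions, key=lambda a: score(a, health), reverse=True)
    return ranked[0], ranked[:top_k]  # best action, restricted candidate set

best, shortlist = choose(["attack", "flee", "hide"], health=0.3)
print(best)  # "flee": low health plus a good past outcome for fleeing
```

The progressive restriction the post describes corresponds to shrinking `top_k` as the agent accumulates experience, so evaluation spends less time on actions that history has ruled out.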

However, truly emulating this behaviour is inadvisable in the context of a game, due to the obvious constraints on processing power; it is normally only fast enough when approximated by an artificial neural network trained on the decision-making rules.

Hope this is of some help.
This topic is irrelevant to these boards. If a piece of software with a consciousness like ours could be made (of course it could never be proven as such), it'd be blatantly and grossly unethical to deploy it in a game.

[Edited by - Baiame on March 29, 2009 3:35:27 PM]
I was a bit surprised to find a statement here that is actually interesting:

Quote: Original post by Baiame
This topic is irrelevant to these boards. If a piece of software with a consciousness like ours could be made (of course it could never be proven as such), it'd be blatantly and grossly unethical to deploy it in a game.


Why do you believe this?

What if this piece of software with a consciousness somehow grew out of some character AI for a game? This would most likely imply that this software is very much tied to that particular game world, and would be very much upset in a different environment. In that case, I'd argue that it would be blatantly and grossly unethical to uproot the software from its environment.
Widelands - laid back, free software strategy

This topic is closed to new replies.
