
Ideas for AI and games

Started by January 17, 2006 05:17 AM
9 comments, last by glSmurf 19 years, 1 month ago
Just thought I would start a thread for a change so we could try and compile some ideas for games using AI techniques. I'll start off:

1. Use a recurrent neural network to try and predict player movements in an FPS.
2. Use a genetic algorithm to generate attributes for different monsters while using a neural network to learn the fitness map.
3. Use a neural network to model state changes due to events in a game and then use it in the game. This way you just load new parameters if you want different responses to events.
4. Use fuzzy logic to perform mesh deformations to improve the look of 3D models based on an expert rulebase.

Well, that's a few. Anyone else got some ideas?

[Edited by - NickGeorgia on January 17, 2006 5:48:29 AM]
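As a hedged sketch of idea 2's genetic-algorithm half: evolve monster attribute tuples (strength, speed, armor) with selection and mutation. The fitness function here is a toy stand-in invented for illustration; in the idea above it would be a neural network learned from play data.

```python
import random

def fitness(attrs):
    strength, speed, armor = attrs
    # toy objective: well-balanced monsters score highest (max is 0)
    return -abs(strength - speed) - abs(speed - armor)

def mutate(attrs, rate=0.3):
    return tuple(a + random.uniform(-1.0, 1.0) if random.random() < rate else a
                 for a in attrs)

def evolve(pop, generations=50):
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:len(pop) // 2]      # elitism: best half survives
        pop = survivors + [mutate(s) for s in survivors]
    return max(pop, key=fitness)

random.seed(1)
population = [tuple(random.uniform(0.0, 10.0) for _ in range(3))
              for _ in range(20)]
initial_best = max(population, key=fitness)
best = evolve(population)
```

Because the best half always survives, the best individual found so far is never lost between generations.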
This could become interesting =)

Quote:
Original post by NickGeorgia
3. Use a neural network to model state changes due to events in a game and then use it in the game. This way you just load new parameters if you want different responses to events.

Using neural networks to model state changes sounds a little bit like overkill to me, but I also think it would lead towards more unpredictable behaviour.

I think there should be more online learning in game AI. I know this is often a hard task, as it could cause the agent to unlearn vital parts of the game logic. The more an agent can learn and adapt to its environment, the more alive it will appear. But how do we prevent the unlearning part?
Thanks for contributing glSmurf. I would think that this would be an important topic for this forum LOL. Or everyone has secret ideas hehe.

Yes, that idea might be overkill. Actually, what I was thinking is having a way of holding certain fundamental behaviors (say, for a rat: walk, turn, run, stop, stand, sit, etc.) as scripts. Then the sequence in which these scripts are run depends on the game environment (and maybe some hidden internal logic as well). You could go a little further and work on modifying parameters of a script (how fast, at what angle, etc.). This is kind of a hybrid system where you have event-driven scripts and time-driven physics parameters. Anyway, it was just an idea.
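A minimal sketch of the scripts-plus-selector idea above, with invented behaviours and rules: each fundamental behaviour is a small function, and a selector picks which script to run based on the environment. The selector's rule table is the part a learned model could later replace without touching the scripts themselves.

```python
# behaviour scripts: each one mutates the rat's state in a simple way
def walk(rat):
    rat["pos"] += rat["speed"]

def run(rat):
    rat["pos"] += rat["speed"] * 3

def stand(rat):
    pass

def select_behaviour(env):
    # event-driven logic choosing a script; these rules are invented
    # for illustration and could be swapped for a learned policy
    if env["predator_near"]:
        return run
    if env["food_ahead"]:
        return walk
    return stand

rat = {"pos": 0.0, "speed": 1.0}
for env in [{"predator_near": False, "food_ahead": True},
            {"predator_near": True,  "food_ahead": False}]:
    select_behaviour(env)(rat)
# walk moved the rat 1.0, then run moved it another 3.0
```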

As for unlearning, you could always hold a database full of fundamental behaviors and possibly newly generated ones. You then try to abstract away the "fluff" so to speak through some unsupervised classification mechanism. Anyway, I'm pretty interested in what you have to say about this and any ideas you may come up with. This is pretty much a brainstorming thread.
Actually, I haven't given it that much thought as of now. I've just recently found an interest in AI (last month), and now I've spent as much time as possible reading books/articles/tutorials to learn more =)

Currently I am reading about reinforcement learning and other learning algorithms that might fit my needs.

About my ideas.

I've been thinking along similar paths as you concerning the usage of neural networks (NN). The fundamental behaviours of our rat (to follow your example) should be data-driven/scripted. A NN will serve as a "brain" reacting to the environment, choosing one or more of these fundamental behaviours with the NN's output as a gradient (e.g. if the walk output is 0.5 then the rat will be walking at half speed).
The rat will keep its training patterns (maybe combined with the test set), and it will also have a prototype set of experimental patterns that represent its experience with the environment. Prototypes that are learnable together with the original training patterns will be kept, and those that fail will either be discarded or combined with another similar prototype pattern, forming a new prototype. This way I can still be sure that the original patterns won't be forgotten.
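A toy sketch of the "learnable together" gate described above, under a big simplifying assumption: instead of retraining a network, a prototype is accepted only if it does not contradict the stored training patterns (here, "contradict" means lying within an invented EPS of a pattern carrying a different label).

```python
import math

EPS = 0.2   # invented consistency threshold
original = [((0.0, 0.0), "stand"), ((1.0, 0.0), "walk"), ((1.0, 1.0), "run")]

def try_prototype(memory, proto):
    x_new, y_new = proto
    for x_old, y_old in memory:
        if y_old != y_new and math.dist(x_old, x_new) < EPS:
            return memory            # discard: would unlearn old knowledge
    return memory + [proto]          # consistent with the originals: keep it

memory = list(original)
memory = try_prototype(memory, ((0.9, 0.9), "run"))   # near "run": kept
memory = try_prototype(memory, ((0.0, 0.1), "run"))   # near "stand": rejected
```

A real version would replace the distance test with "can the NN still fit originals + prototype", but the shape of the decision is the same.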

I've been looking into adaptive resonance theory a bit to find a method for making clusters of the prototype patterns, looking for patterns that might be combined or altered in some way.

Other thoughts on the unlearning problem: I created a pretty cool fuzzy logic system (FLS) (thanks for sharing your journals btw =D) that is able to add/remove rules at runtime. This FLS could be used to model the target behaviour (possibly other logic as well) as closely as possible.
Every now and then the NN's output is compared with the FLS's output, and if the error is large the NN will be retrained or maybe a backup will be loaded. This might keep the rat from going "crazy".
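The watchdog idea above can be sketched like this, with all names and numbers invented: a hand-written rule base stands in for the FLS, a one-parameter function stands in for the NN, and a backup is restored whenever the two disagree too much over some probe inputs.

```python
THRESHOLD = 0.5   # invented tolerance for NN-vs-FLS disagreement

def reference_speed(danger):
    # stand-in for the fuzzy rule base: flee fast when danger is high
    return 1.0 if danger > 0.7 else 0.2

def model_speed(params, danger):
    # stand-in for the neural network's output
    return params["gain"] * danger

def check_and_restore(model_params, backup_params, samples):
    error = max(abs(model_speed(model_params, d) - reference_speed(d))
                for d in samples)
    if error > THRESHOLD:
        return dict(backup_params)   # the rat went "crazy": roll back
    return model_params

backup = {"gain": 1.2}
drifted = {"gain": -2.0}             # e.g. after bad online updates
params = check_and_restore(drifted, backup, [0.1, 0.5, 0.9])
```

Here the drifted model flees *away* from danger at negative speed, the worst-case error over the probe points exceeds the threshold, and the backup parameters are restored.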

These are just thoughts that are far from realized. Does this make any sense at all or am I too optimistic that this will work?

[Edited by - glSmurf on January 17, 2006 12:43:26 PM]
I think I get what you are saying. Is it something along the lines of "when do we declare a new pattern and determine it is a good pattern so we can keep it in our knowledge base?" If so, then yes, this is a difficult situation. It will probably require some definition of distance from pattern to pattern to show that it is "different enough" to be classified as a new pattern. Also once we determine it is a pattern, we could then classify it as good or bad through another "fitness" measure. But I think it's even more complex than that since we even have to deal with sequences of patterns and declare which are good and which are bad. But I like the idea. It would be of great benefit to have such a system.
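The "different enough" test above can be made concrete with a distance threshold, all values invented: an observation is declared a new pattern only if its distance to every stored pattern exceeds the threshold.

```python
import math

NEW_PATTERN_DIST = 0.5   # invented novelty threshold
known = [(0.0, 0.0), (1.0, 1.0)]

def is_new(pattern):
    # "different enough": farther than the threshold from all known patterns
    return all(math.dist(pattern, k) > NEW_PATTERN_DIST for k in known)

near_miss = is_new((0.1, 0.1))      # too close to (0, 0): not new
genuinely_new = is_new((2.0, 0.0))  # far from everything: new
```

A second, separate "fitness" measure would then decide whether a declared-new pattern is worth keeping, as the post says.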

Another problem with AI I've been thinking about is the ability to create not-so-perfect competition against the player. How do we control the simulation so that behaviors seem intelligent but not perfect? If we consider ourselves, we are limited to what we can "see" through our own sensors (eyes, nose, etc.). These sensors are not always perfect; in fact, there is a great deal of sensor and a priori knowledge fusion taking place to aid in deciphering a situation. Our sensors do not tell the whole story, yet there is something going on that allows us to model a situation with a fair amount of accuracy. Anyway, enough rambling. I'll get back to this after I take a break.
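One common way to get this effect, sketched here with invented numbers in one dimension: instead of letting the bot read the player's exact position, its "eyes" return a range-limited, noisy reading, with noise growing with distance, so aiming is intelligent but fallible.

```python
import random

SIGHT_RANGE = 50.0   # invented: beyond this the bot sees nothing
NOISE = 3.0          # invented: noise scale at maximum range

def sense_player(bot_pos, player_pos, rng):
    distance = abs(player_pos - bot_pos)
    if distance > SIGHT_RANGE:
        return None                  # out of sight entirely
    # noise grows with distance: distant targets are harder to read
    error = rng.gauss(0.0, NOISE * distance / SIGHT_RANGE)
    return player_pos + error

rng = random.Random(7)
reading = sense_player(0.0, 40.0, rng)   # within range, but noisy
far = sense_player(0.0, 80.0, rng)       # None: beyond sight range
```

The rest of the AI then reasons from `reading` instead of ground truth, which naturally produces near-misses without any scripted "miss on purpose" hacks.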
just some more brainstorming =)

[learning agents]
Making the new patterns is not really a problem... even a random pattern method might work, as it will change the behaviour in some way. The random pattern could represent the agent exploring the environment in a new way.
The optimal way would be through observations (rat A sees rat B eat cheese and figures that cheese might be edible).

I think the hardest part of finding new patterns is how to determine if it's an improvement or not. Some problems might be measured by how long it takes to solve them, but for the behaviours of an agent in a game it will be a lot harder.

I guess you have to try each new pattern separately, to avoid testing a perfect pattern and a worthless one simultaneously and then discarding both.
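The point about testing patterns one at a time can be sketched as follows, with the scoring function a stand-in (invented values) for actually running in-game trials with each candidate behaviour enabled:

```python
def score(pattern):
    # stand-in for averaged results from in-game trials
    return {"baseline": 1.0, "good": 1.4, "worthless": 0.3}[pattern]

def evaluate(candidates, baseline="baseline"):
    kept = []
    for c in candidates:
        # each candidate is scored in isolation against the baseline,
        # so one bad pattern can never drag down a good one
        if score(c) > score(baseline):
            kept.append(c)
    return kept

kept = evaluate(["good", "worthless"])
```

Had both been enabled in one trial, the combined score might have fallen below the baseline and both patterns would have been discarded together.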


[perfect competition]
I think the best way to deal with this is to mimic the player as much as possible. ...can't write more now ...to be continued

[Edited by - glSmurf on January 17, 2006 5:46:12 PM]
Hey glSmurf, did you see the AI Flocking thread and that link yet? It's pretty cool on the types of dynamics presented. It tries to make movement look intelligent while trying to follow a path for instance.
glSmurf, I think I will just continue this in my journal. If you have any inputs, let me know there or PM me. We could collaborate on some ideas and try them out by actually programming them if you want to. Maybe a GDNet article or two in the works if we find something clever.
That sounds like a good idea =)

Actually, I've already started programming some of these things. It is my way of working with new ideas. I almost always try to implement my ideas as I come up with them, so I can figure out if they are worth spending more time on doing further research to make improvements.

Let me know when you've written down your thoughts about this in your journal and maybe we could work something out from there.
I'm afraid I don't know much about AI programming, but this is an interesting idea (IMO), so I'll just post it and let you heavyweights weigh in :-D

Is there a way to instill a "will to live" in an AI? Something whereby you program into the AI that if it reaches 0 units/buildings (talking RTS here) it dies, so it has to strive not to reach 0 units/buildings. Does that make sense?
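One common way to get this effect (not the only one, and all numbers and action names below are invented): put a survival term in the agent's utility function that grows sharply as its unit/building count approaches zero, so asset-protecting actions dominate when the agent is near death.

```python
import math

def utility(assets_after_action, aggression_payoff):
    # log(x + 1) falls off steeply near zero assets, encoding the
    # "will to live": losing your last few assets is catastrophic
    survival = math.log(assets_after_action + 1)
    return aggression_payoff + 5.0 * survival

def choose(actions, assets):
    # each action: (name, expected asset change, aggression payoff)
    return max(actions, key=lambda a: utility(assets + a[1], a[2]))[0]

actions = [("all_out_attack", -4, 3.0), ("defend_base", 0, 0.5)]
desperate = choose(actions, assets=5)    # defending dominates near death
confident = choose(actions, assets=50)   # attacking is affordable
```

With 5 assets, losing 4 to an attack costs far more survival utility than the attack's payoff is worth; with 50, the same loss barely matters and the aggressive action wins.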

