
Algorithmic Ecology: Machine Learning AI Engine

Started by August 29, 2014 10:21 PM
6 comments, last by Algorithmic Ecology 10 years, 2 months ago

Mods, my official announcement for this is in the Announcements forum: http://www.gamedev.net/topic/660368-algorithmic-ecology-machine-learning-ai-engine/ so please remove if any information is considered redundant.

Anyway, I haven't really posted here before; I come from more of a controls and formal AI background, so I'm not a gamedev expert in any sense, but I wanted to run something by you guys and hopefully get some feedback.

I'm working on a machine learning model for the domain of games/simulation that produces behaviors that typically are not produced with conventional search-based AI. The idea is that a developer should be able to use the architecture as an API or external library and be able to produce useful and unique agent control functions without having to deal with the underlying training algorithms. The architecture should be able to produce a function using (ideally) only inputs and a desired end behavior. I have a working prototype ready that uses the current architecture to produce an ecosystem of agents that trains in response to environmental changes. The short demo video is at
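To make the "inputs plus a desired end behavior" workflow concrete, here is a rough sketch of what such a library call might look like. Everything here is hypothetical and invented for illustration (the real architecture isn't public); the "training algorithm" is just a toy hill-climbing loop standing in for whatever the engine actually does.

```python
import random

random.seed(0)  # deterministic for the example

# Hypothetical API sketch: the developer supplies only the input count and a
# fitness function describing the desired end behavior; training is hidden.
def train_agent(num_inputs, fitness, generations=50, pop_size=20):
    """Evolve a simple linear policy (a weight vector) that maximizes `fitness`."""
    population = [[random.uniform(-1, 1) for _ in range(num_inputs)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]
        # Refill the population with mutated copies of the survivors.
        population = survivors + [
            [w + random.gauss(0, 0.1) for w in random.choice(survivors)]
            for _ in range(pop_size - len(survivors))
        ]
    return max(population, key=fitness)

# Toy "desired end behavior": the weights should sum to roughly 1.0
# (a stand-in for any environment-driven score).
best = train_agent(3, fitness=lambda w: -abs(sum(w) - 1.0))
```

The point of the sketch is the division of labor: the caller never touches the training loop, only the inputs and the behavioral goal.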

The project is still in its infancy but I'm planning on using a lot of exciting current academic research in machine learning to incrementally allow for more complex behaviors. At the moment I'm trying to propagate the idea out to developers to get some feedback on the project before charging forward with development, but so far I am happy with the progress and the outlook for the future.

Since you guys are more experienced in the domain I was hoping to get some comments on the project. Any constructive discussion or insight is appreciated. Thanks!

I wouldn't rule out the usefulness of this kind of thing at some point. Here are a few potential obstacles to this being used in games:

  • Lack of predictability - any AI that's too "strong" risks acting in a way that breaks level design or other gameplay requirements because the designer can't predict how it will act for the player.
  • Timeslice - games normally have a lot going on all at once. Graphics are of course often a big slice. If AI takes too long per frame, it can jeopardise the performance of the whole game.
  • Player impression - an AI can be cooler than Siberia in winter, but if the player doesn't see enough cues for why the AI does what it does, they will think it's being random/stupid/cheating. Many of these techniques are impenetrable when it comes to reasons. The player wants to see the AI's fear or greed or hunting instinct. Appropriate animations or behaviours can't be shown if you don't know the reasons yourself.

None of these are definite killers, but they take some overcoming. For example, an alibi behaviour drawn from a known set could be part of the output and the fitness criteria. How it arrives at that result is up to the AI.
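One way to read that suggestion in code (purely illustrative; the behaviour set and consistency rules are invented here): make the agent declare a label from a known behaviour set alongside its action, and have the fitness function penalize actions that contradict the declared alibi.

```python
# Hypothetical "alibi" fitness: the agent must declare a behaviour from a
# known set, and its action must be consistent with that declaration.
KNOWN_BEHAVIOURS = {"flee", "hunt", "graze"}

def alibi_fitness(declared, action, base_score):
    """Score an agent, heavily penalizing undeclared or inconsistent behaviour."""
    if declared not in KNOWN_BEHAVIOURS:
        return 0.0  # no alibi at all: no reward
    consistent = {
        "flee": action == "move_away",
        "hunt": action == "move_toward",
        "graze": action == "stay",
    }[declared]
    return base_score if consistent else base_score * 0.1

score = alibi_fitness("hunt", "move_toward", 10.0)
```

The learner is still free to discover *how* to hunt, but the declared label gives the game a hook for animations and readable intent.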


Thanks jefferytitan, I appreciate the feedback!

For predictability, I think this only applies when a developer means to train the AI in real time. For example, I've been drafting a couple of ideas for simple games with AI that automatically scales with the player. Offline, though, machine learning agents generally become only just strong enough to overcome their training scenarios, because no reward is defined for becoming any better than that, so it's actually harder to develop "too strong" AI than it may sound.
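A tiny sketch of why "too strong" is unlikely offline (the threshold value is invented for illustration): if the fitness function saturates once the training scenario is solved, there is nothing rewarding further strength, so selection pressure stops there.

```python
# Hypothetical capped fitness: reward plateaus once the scenario is beaten,
# so extra skill beyond the threshold earns the agent nothing.
WIN_THRESHOLD = 100.0

def capped_fitness(raw_performance):
    """Return fitness that stops growing once the scenario is solved."""
    return min(raw_performance, WIN_THRESHOLD)

# An agent twice as strong as needed scores the same as one that just wins.
overkill = capped_fitness(250.0)
just_enough = capped_fitness(100.0)
```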

Training AI in real time as someone is playing may be a challenge, but I think there is a common misconception that neural nets always operate slowly. For some perspective, my demo video includes a segment with 160 agents rendered on screen. Each agent contained a net with 374 neurons, meaning around 6x10^4 neurons were operating in the same thread without any stutter at 60fps, so at this point I'm confident that the limitation will be more in rendering than computation. At the assembly level, a trained neural net is basically just a math equation with the inputs as variables, so its instructions can actually be simpler, with fewer wasted cycles, than a chain of "if-then" branches, and at the moment performance isn't an issue (knock on wood).
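That "just a math equation" point is easy to see in code: evaluating a trained feed-forward net is nothing but multiply-adds and an activation per neuron, with no branching at all. Here is a minimal 2-input, 3-hidden, 1-output net with made-up weights (nothing here comes from the actual engine):

```python
import math

# Forward pass of a tiny trained feed-forward net: pure fixed arithmetic.
def forward(inputs, w_hidden, w_out):
    """Evaluate a 2-layer tanh network on `inputs`."""
    hidden = [math.tanh(sum(w * x for w, x in zip(row, inputs)))
              for row in w_hidden]
    return math.tanh(sum(w * h for w, h in zip(w_out, hidden)))

# Arbitrary illustrative weights: 3 hidden neurons, each seeing 2 inputs.
w_hidden = [[0.5, -0.2], [0.1, 0.8], [-0.3, 0.4]]
w_out = [0.7, -0.5, 0.9]

y = forward([1.0, 0.5], w_hidden, w_out)
```

Per-frame cost is simply one multiply-add per weight, which is why a few hundred neurons per agent is cheap next to rendering.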

Your third point is one that will take some research, though. Agents like this are capable of finding and exploiting bugs in the environment and finding absurd solutions to problems, which is both a blessing and a curse. There will probably be some pain in figuring this one out. I do have several ideas for solving the problem of triggering the appropriate animation for a behavior, and it actually might be a good idea to demonstrate that in my next release.
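One simple idea for the animation-triggering problem (a sketch of one possible approach, not the project's actual design): dedicate one output neuron per animation and play whichever fires strongest, so the net's internal reasons never need to be decoded.

```python
# Hypothetical animation selection: one output neuron per animation,
# play the one with the highest activation.
ANIMATIONS = ["idle", "flee", "attack", "eat"]

def pick_animation(outputs):
    """Return the animation whose output neuron is most active."""
    return ANIMATIONS[max(range(len(outputs)), key=lambda i: outputs[i])]

anim = pick_animation([0.1, 0.7, 0.2, 0.05])
```

This also dovetails with the "alibi" suggestion above: the chosen output doubles as the visible cue the player needs.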

Thanks for the feedback! This will definitely help me.

Why machine learning?

Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC
Professional consultant on game AI, mathematical modeling, simulation modeling
Co-founder and 10 year advisor of the GDC AI Summit
Author of the book, Behavioral Mathematics for Game AI
Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play

"Reducing the world to mathematical equations!"

Many reasons, but mostly because it doesn't seem to be very common and there are some newer methods in machine learning research that I think could have some potential.

There's also a personal interest factor and the practice is useful to me for my field.

At least you admit that it is for personal reasons... because it is the biggest pariah in game development.

I haven't looked in much detail, but it seems to me that in order to get something specific to the game design you want it for, you would have to put in a lot of time just putting together the training set. And even then, you have no idea what you are going to get out of it. Seems like a real blindfolded dart-throwing approach.



I never understood the issue with machine learning, tbh. If used properly on only a few components of your system, it can really help.

I haven't looked in much detail, but it seems to me that in order to get something specific to the game design you want it for, you would have to put in a lot of time just putting together the training set. And even then, you have no idea what you are going to get out of it. Seems like a real blindfolded dart-throwing approach.

I haven't had this experience yet, but I'll keep it in mind.


WireZapp: Yeah, that's the idea. Machine learning algorithms can reliably produce functions that would be prohibitively difficult for a human to write by hand, but they wouldn't normally replace something like tree- or graph-based heuristics in terms of provability and whatnot.

