
Learning Based on Stochastics

Started by NuFAN, June 26, 2002 05:26 AM
5 comments, last by NuFAN 22 years, 5 months ago
Hi, I finally found the time to think about simple ways of implementing machine learning in games. The approach I have in mind uses a stochastic estimate (Bayes' theorem or whatever you prefer) for decision-making. The estimate is based on default values that you determine before the AI does anything, and these parameters can then be trimmed easily during the game by indirect adaptation. Has anybody already evaluated this approach and can tell me something about its advantages and disadvantages? I know it will not work for every genre, but it could be useful for RTS games and more.
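To make the idea concrete, here is a minimal C++ sketch of that kind of stochastic estimate (all strategy names, observations and numbers are made up for illustration): the AI starts from hand-set default priors over what the player is doing and trims them with Bayes' rule as observations come in.

#include <cstdio>
#include <map>
#include <string>

// Minimal sketch: the AI keeps a prior P(strategy) set by hand before release,
// plus hand-tuned likelihoods P(observation | strategy).  Each observation is
// folded in with Bayes' rule, so the default values are "trimmed" indirectly
// as the player acts.
struct StrategyEstimator {
    std::map<std::string, double> prior;                                   // P(S)
    std::map<std::string, std::map<std::string, double>> likelihood;       // P(O|S)

    // Returns the posterior P(S | O = obs) and stores it back as the new prior.
    std::map<std::string, double> update(const std::string& obs) {
        std::map<std::string, double> post;
        double norm = 0.0;
        for (const auto& [s, p] : prior) {
            double l = likelihood[s].count(obs) ? likelihood[s][obs] : 0.01; // small floor
            post[s] = p * l;
            norm += post[s];
        }
        for (auto& [s, p] : post) p /= norm;   // normalise so probabilities sum to 1
        prior = post;                          // sequential (recursive) Bayesian update
        return post;
    }
};

int main() {
    StrategyEstimator est;
    est.prior = { {"rush", 0.5}, {"turtle", 0.5} };              // default values
    est.likelihood["rush"]   = { {"early_scout", 0.7}, {"expand", 0.2} };
    est.likelihood["turtle"] = { {"early_scout", 0.2}, {"expand", 0.8} };

    for (const auto& [s, p] : est.update("early_scout"))
        std::printf("P(%s | early_scout) = %.2f\n", s.c_str(), p);
}

The point is that the designer only supplies the default tables; everything else is adjusted indirectly by what the player actually does.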
Graphix Coding @ Skullpture Entertainment - http://www.skullpture.de
If you're referring to the use of Bayesian networks as a state space model for a game, then I don't believe any have been implemented in a published game (yet)... at least not that I've heard of.

Perhaps you could elaborate a little more on how you were thinking of applying state estimation to a decision theory problem. It's certainly a valid idea, but coming up with an applicable and tractable application is another thing!

Cheers,

Timkin
I think Black & White uses Bayesian networks. Not sure, though.
Hi,
this example comes from Paul Tozour's article on Bayesian networks in "AI Game Programming Wisdom". The game is an RTS. For example, the player might often attack with a small group of land units while, in the background, he builds a large air force or lots of submarines to attack the non-player enemy.

Bayesian networks make it possible to compute the probability of a combination of actions given an observed result. For this, a general lookup matrix is first generated from empirical values: you watch players and build the matrix from their behavior. This decision matrix can then be modified at runtime through indirect adaptation to the player. For instance, the AI notices that the player often attacks with small groups of tanks and then launches a large offensive with submarines. Because it recognizes this tactic, the probability matrix is updated while the game is running: the more often the player tries a tactic, the more likely it is that the AI will have the correct counter ready.
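As a rough illustration of that runtime-adapted lookup matrix (again a sketch with invented tactic names, not code from the article), the AI can simply keep counts of which follow-up attack the player launches after each opening and update them during the match:

#include <cstdio>
#include <map>
#include <string>
#include <utility>

// Rows are the opening the player shows (e.g. "small_tank_raid"), columns are
// the follow-up actually observed later (e.g. "submarine_offensive").  Counts
// start from empirical defaults gathered by watching testers, then keep
// growing during the match, so P(follow_up | opening) drifts toward the
// individual player's habits.
class TacticMatrix {
    std::map<std::string, std::map<std::string, double>> counts;
public:
    void observe(const std::string& opening, const std::string& followUp, double w = 1.0) {
        counts[opening][followUp] += w;             // runtime adaptation
    }
    // Most likely follow-up given the opening seen so far, plus its probability.
    std::pair<std::string, double> predict(const std::string& opening) {
        double total = 0.0, best = 0.0;
        std::string bestName = "unknown";
        for (const auto& [f, c] : counts[opening]) total += c;
        for (const auto& [f, c] : counts[opening])
            if (c > best) { best = c; bestName = f; }
        return { bestName, total > 0.0 ? best / total : 0.0 };
    }
};

int main() {
    TacticMatrix m;
    m.observe("small_tank_raid", "submarine_offensive", 3.0);   // empirical defaults
    m.observe("small_tank_raid", "air_strike", 1.0);
    m.observe("small_tank_raid", "submarine_offensive");        // seen again in-game
    auto [tactic, p] = m.predict("small_tank_raid");
    std::printf("Expected follow-up: %s (p = %.2f) -> build anti-sub units\n",
                tactic.c_str(), p);
}

Starting the counts from empirical defaults means the AI has a sensible reply from the first game, while the in-match observations pull the conditional probabilities toward the individual player's habits.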

This would make the AI seem more intelligent without cheating. Apart from that, it would improve replayability. This is just one idea I had; I'm sure there are lots of applications better suited than this little example.
Graphix Coding @ Skullpture Entertainment - http://www.skullpture.de
Okay, so you're actually talking about learning the parameters of a decision network (a BN that contains decision (action) and utility nodes) and then using it to test opposing strategies (plans).

Yep, this sort of decision-theoretic planning would be very useful in games. Unfortunately, many gross simplifications need to be made to the state-space representation so that tractable inference and learning can be performed in the time frame of gameplay. Of course, if the model were built up over a number of hours of gameplay, then the computational load would be far lower and perhaps practical for some game genres.
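For what it's worth, the decision-theoretic step itself can be tiny once the hard part (the belief over the player's strategy) is done. A toy sketch, with made-up strategies, actions and utilities standing in for the utility node of a decision network:

#include <cstdio>
#include <map>
#include <string>

// Once the network gives a belief P(player strategy), the AI picks the action
// with the highest expected utility; the utility table here plays the role of
// the utility node in a decision network.
int main() {
    std::map<std::string, double> belief = { {"rush", 0.7}, {"turtle", 0.3} };

    // U(action, strategy): how well each of our actions fares against each strategy.
    std::map<std::string, std::map<std::string, double>> utility = {
        { "build_defenses", { {"rush", 0.9}, {"turtle", 0.3} } },
        { "expand_economy", { {"rush", 0.2}, {"turtle", 0.8} } },
    };

    std::string bestAction;
    double bestEU = -1.0;
    for (const auto& [action, row] : utility) {
        double eu = 0.0;
        for (const auto& [strategy, p] : belief) eu += p * row.at(strategy);
        std::printf("EU(%s) = %.2f\n", action.c_str(), eu);
        if (eu > bestEU) { bestEU = eu; bestAction = action; }
    }
    std::printf("Chosen action: %s\n", bestAction.c_str());
}

The expensive part is the inference that produces the belief in the first place, which is exactly where the simplifications to the state space come in.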

I'm particularly interested in seeing BNs make it into games, not least because it will mean more demand for people like me!

If you want any particular advice on implementing a Bayesian Network, Dynamic Bayesian Network, or Decision Network, just holler. I've had a lot of experience with them during my PhD research.

Cheers,

Timkin
Timkin:

I have already implemented Bayesian networks for pattern anticipation in my spare time and am fascinated by how quickly the computer learns to anticipate the player's behavior. In such small cases things work really well, and I will probably write a bot in the near future to test a fully-blown system. I will certainly contact you if I run into problems.
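To sketch the kind of thing I mean in a stripped-down form (this is purely illustrative, not the real implementation), pattern anticipation can be as simple as predicting the player's next action from the last two observed actions, i.e. a second-order Markov model learned online:

#include <cstdio>
#include <deque>
#include <map>
#include <string>
#include <utility>

// The "pattern" is the recent action history: after each observed action we
// bump a counter for (previous two actions) -> (this action), and anticipate
// by returning the most frequent continuation of the current history.
class PatternAnticipator {
    using Context = std::pair<std::string, std::string>;
    std::map<Context, std::map<std::string, int>> counts;
    std::deque<std::string> history;
public:
    void record(const std::string& action) {
        if (history.size() == 2)
            counts[{history[0], history[1]}][action]++;
        history.push_back(action);
        if (history.size() > 2) history.pop_front();
    }
    std::string anticipate() const {
        if (history.size() < 2) return "unknown";
        auto it = counts.find({history[0], history[1]});
        if (it == counts.end()) return "unknown";
        std::string best = "unknown"; int bestCount = 0;
        for (const auto& [action, c] : it->second)
            if (c > bestCount) { bestCount = c; best = action; }
        return best;
    }
};

int main() {
    PatternAnticipator pa;
    const char* trace[] = { "scout", "harass", "expand", "scout", "harass", "expand",
                            "scout", "harass" };
    for (const char* a : trace) pa.record(a);
    std::printf("Anticipated next action: %s\n", pa.anticipate().c_str());
}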

Bye
Graphix Coding @ Skullpture Entertainment - http://www.skullpture.de
quote: Original post by NuFAN
I have already implemented Bayesian networks for pattern anticipation in my spare time... (ed:snip)

Excellent! What was the specific problem domain you looked at and what behaviours were you trying to model and anticipate?

quote: Original post by NuFAN
In such small cases things work really well, and I will probably write a bot in the near future to test a fully-blown system.


I'm sure that many members of this forum would like to hear of your experiences with applying a BN to this task. The only way we're going to get more people playing with more toys (techniques) is to share the experiences - both good and bad - of implementing them.

Some questions (to give us an idea of the problem you tackled):
1) How many state variables did you model?
2) How big was the state space you considered?
3) How many decision (action) variables did your network have?
4) How many utility variables?
5) What inference algorithm did you implement for your network?

Thanks,

Timkin

