
AI-assisted game balancing

Started by July 21, 2007 03:42 AM
15 comments, last by LorenzoGatti 17 years, 4 months ago
Quote: Original post by Steadtler
Quote: Original post by ToohrVyk
The problem is that I want to be able to change the rules a lot in order to find the best balance, and I'd rather not have to generate a new heuristic every single time.


Implementation detail... that's what software engineering is for!


But my degree is in Computer Science, I'm not an Engineer!

If you just want to check, but don't care about how to always play the best game, the problem space sounds small enough that you might be able to use Monte Carlo: run 10,000 games with random counters, then pick the best 10 and eyeball them for a pattern, or pick the best 100 and see which counters aren't used.
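A rough sketch of that sampling approach, assuming a hypothetical counter pool and a placeholder `play_game` scoring function (both stand-ins for the actual game simulation):

```python
import random

# All names here are hypothetical stand-ins for the real game.
COUNTER_TYPES = ["wall", "turret", "trap", "decoy"]
WEIGHTS = {"wall": 3, "turret": 7, "trap": 5, "decoy": 1}  # made-up scores

def random_loadout(size=8):
    return [random.choice(COUNTER_TYPES) for _ in range(size)]

def play_game(loadout):
    # Placeholder: replace with a run of the real game simulation.
    return sum(WEIGHTS[c] for c in loadout)

def monte_carlo(n_games=10000, top_k=100):
    results = sorted(((play_game(lo), lo) for lo in
                      (random_loadout() for _ in range(n_games))),
                     key=lambda r: r[0], reverse=True)
    best = results[:top_k]
    # Counters that never appear in any top loadout are balance suspects.
    unused = set(COUNTER_TYPES) - {c for _, lo in best for c in lo}
    return best, unused
```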

Ralph
Quote: Original post by ToohrVyk
Quote: Original post by Steadtler
The size of the search space is not as important as the size that you will explore. A* can be applied to even infinite search spaces. As for the heuristic, well, I didn't see your rules, but I'm not convinced it would be so difficult to find a good one, at least if you can determine what a perfect score would be (even if that score cannot be reached).


The problem is that I want to be able to change the rules a lot in order to find the best balance, and I'd rather not have to generate a new heuristic every single time.


I would think that the fitness function for your genetic algorithm would be quite similar to the heuristic for the A* search, so I don't think that would be a good reason to eschew using A* for this.

However personally, I like using GAs for such tasks. Careful you don't waste too much state space on symmetrical solutions here though, if you can see an easy way to avoid that. GAs are also quite amenable to arbitrary tests like the ones you want - seed your initial population with a few genomes that only use one structure, and perhaps consider having a gene that locks them into that strategy. You'll quickly see whether those pure solutions beat the hybrids.
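To make the seeding idea concrete, here is a minimal GA sketch in which the initial population contains one "pure" genome per structure, so you can watch whether hybrids overtake them. The structures, genome encoding, and `fitness` function are all hypothetical placeholders for the real simulation:

```python
import random

# Hypothetical encoding: a genome is a fixed-length list of structure choices.
STRUCTURES = ["A", "B", "C"]
GENOME_LEN = 10

def fitness(genome):
    # Placeholder: reward mixing structures (replace with a real game score).
    return len(set(genome)) * 10 + genome.count("B")

def random_genome():
    return [random.choice(STRUCTURES) for _ in range(GENOME_LEN)]

def seeded_population(size=20):
    # Seed with one pure genome per structure, as suggested above.
    pure = [[s] * GENOME_LEN for s in STRUCTURES]
    return pure + [random_genome() for _ in range(size - len(pure))]

def evolve(pop, generations=30, mutation_rate=0.1):
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: len(pop) // 2]          # truncation selection
        children = []
        while len(children) < len(pop) - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, GENOME_LEN)
            child = a[:cut] + b[cut:]            # one-point crossover
            if random.random() < mutation_rate:  # point mutation
                child[random.randrange(GENOME_LEN)] = random.choice(STRUCTURES)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)
```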
Quote: Original post by ToohrVyk
1) So, I'm looking for a means of quickly and automatically testing what structures would appear in an optimal solution to the game—so, this means solving the game.
2) The game simulation is indeed a cellular automaton. What I'm looking for is handling the "player control" side.


Stephen Wolfram (designer of Mathematica) wrote a book called "A New Kind of Science". He proposes several interesting ideas on complexity and computational equivalence. Amazingly, he demonstrates a universal machine using just a very simple cellular automaton, and does so with other simple structures as well.

My understanding of it is that complex actions do not stem from complex designs, but are a natural feature of some of even the simplest systems. The book also says that all processes exhibit only two kinds of action: Simple and Complex, and that all complex processes are computationally equivalent.

Also of significant note, the idea of computational irreducibility: for a complex system, even one with a simple representation, there is no shortcut to the end states except actually running the system. (I might have misunderstood this though).
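For a concrete taste of the result mentioned above, elementary cellular automaton Rule 110 (one of the simple one-dimensional automata Wolfram discusses, later proven Turing-complete) fits in a few lines:

```python
# Elementary cellular automaton Rule 110: each cell's next value depends
# only on its (left, center, right) neighborhood, per a fixed lookup table.
def rule110_step(cells):
    table = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
             (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}
    n = len(cells)
    # Wrap-around boundary: the row is treated as circular.
    return [table[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]
```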

[Edited by - AngleWyrm on July 22, 2007 10:09:23 PM]
--"I'm not at home right now, but" = lights on, but no ones home
Quote: Original post by ToohrVyk
The problem is that I want to be able to change the rules a lot in order to find the best balance, and I'd rather not have to generate a new heuristic every single time.


So automate the heuristic generation. Vote A*! ;-)
You could always just brute force it by branching all the possible moves each turn!
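As a sketch of that brute force, here is an exhaustive search that branches over every move to a fixed depth and returns the best line. `legal_moves`, `apply_move`, and `score` are hypothetical placeholders for the real game:

```python
# Hypothetical toy game: the state is just a running score.
def legal_moves(state):
    return ["build", "wait", "upgrade"]

def apply_move(state, move):
    # Placeholder transition; replace with the real simulation step.
    bonus = {"build": 2, "wait": 0, "upgrade": 3}[move]
    return state + bonus

def score(state):
    return state

def best_line(state, depth):
    # Exhaustively branch over all moves, depth levels deep.
    if depth == 0:
        return score(state), []
    best = (float("-inf"), [])
    for m in legal_moves(state):
        s, line = best_line(apply_move(state, m), depth - 1)
        if s > best[0]:
            best = (s, [m] + line)
    return best
```

Of course this blows up exponentially in the branching factor, which is exactly why the heuristic-guided alternatives in this thread exist.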

Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC
Professional consultant on game AI, mathematical modeling, simulation modeling
Co-founder and 10 year advisor of the GDC AI Summit
Author of the book, Behavioral Mathematics for Game AI
Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play

"Reducing the world to mathematical equations!"

I don't know if you are familiar with an algorithm called UCT (Upper Confidence bounds applied to Trees). It's a refinement of Monte Carlo methods, and for the last couple of years it's been the most popular algorithm in computer Go. It's general enough that it can be applied to minimax situations and to "maximax" situations as well.

I don't know enough details about your problem to know if this is a reasonable algorithm to use, but reading about it can't hurt. If you don't find any good links by searching the web, I can provide a few later.
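Here is a minimal single-player UCT sketch, in case it helps. The game interface (`legal_moves`, `apply_move`, `rollout`) is passed in as placeholder functions; only the selection/expansion/simulation/backpropagation skeleton is the algorithm itself:

```python
import math
import random

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children = {}            # move -> Node
        self.visits, self.total = 0, 0.0

def uct_select(node, c=1.4):
    # Pick the child maximizing the UCB1 score (exploitation + exploration).
    return max(node.children.values(),
               key=lambda ch: ch.total / ch.visits
               + c * math.sqrt(math.log(node.visits) / ch.visits))

def uct_search(root_state, iterations, legal_moves, apply_move, rollout):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend while the node is fully expanded.
        while node.children and len(node.children) == len(legal_moves(node.state)):
            node = uct_select(node)
        # 2. Expansion: add one untried move, if any remain.
        untried = [m for m in legal_moves(node.state) if m not in node.children]
        if untried:
            m = random.choice(untried)
            child = Node(apply_move(node.state, m), parent=node)
            node.children[m] = child
            node = child
        # 3. Simulation: estimate the value of this state.
        reward = rollout(node.state)
        # 4. Backpropagation: update statistics up to the root.
        while node is not None:
            node.visits += 1
            node.total += reward
            node = node.parent
    # Return the most-visited root move.
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]
```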

The true objective is optimizing the game rules themselves (not playing strategies, as is usual) to ensure that all boring simple strategies are vastly inferior to interesting ones.
This requires a more precise specification of the nature of the rules and of their possible variations; for example, negative feedback loops and/or nonlinear interactions are necessary, but none are obvious.
This kind of game rule model, presumably expressible as a large number of coefficients of several kinds (how much buildings, actions, resource amounts etc. affect this and that), can be searched, e.g. with genetic algorithms, to find good rules.
Good rules can be recognized because some rather different complex strategies are significantly better than the best simple strategies; the evaluation needs to search for the best strategies in the two categories, maybe with a genetic algorithm as already mentioned.
If nested genetic algorithms are employed, the starting population of strategies for a set of rule weights can be the already suggested set of very simple strategies plus a combination of the best simple and complex strategies for the parent rule weights, which in most cases should be good.
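A very rough sketch of that nested search: an outer loop scores a rule set by the gap between the best complex strategy (found by an inner search, here a simple bounded hill-climb standing in for the inner GA) and the best fixed simple strategy. Every name and number here is a hypothetical placeholder:

```python
import random

def play(rules, strategy):
    # Placeholder payoff of a strategy under a rule-coefficient vector.
    return sum(r * s for r, s in zip(rules, strategy))

def best_strategy(rules, candidates, steps=50):
    # Inner search: hill-climb from the seeded candidates, keeping
    # strategy components bounded in [0, 3].
    pool = [list(c) for c in candidates]
    for _ in range(steps):
        s = random.choice(pool)
        mutant = list(s)
        i = random.randrange(len(mutant))
        mutant[i] = max(0, min(3, mutant[i] + random.choice([-1, 1])))
        if play(rules, mutant) > play(rules, s):
            pool.append(mutant)
    return max(pool, key=lambda s: play(rules, s))

def rule_fitness(rules):
    # "Pure" simple strategies are fixed; complex ones get searched.
    simple_seeds = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
    complex_seeds = [[1, 1, 1]]
    best_simple = max(play(rules, s) for s in simple_seeds)
    best_complex = play(rules, best_strategy(rules, complex_seeds))
    # Good rules: the best complex strategy clearly beats the best simple one.
    return best_complex - best_simple
```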

Omae Wa Mou Shindeiru

