AI-assisted game balancing
I am considering a simple "city-building" optimization game. The player has access to a small map of tiles (9x9) and can build one structure per turn. Structures deterministically generate or transform resources every turn, and/or may alter the state or effects of nearby structures. The objective is to gather enough resources every 10 turns to move on to the next batch of 10 turns.
Since the player has access to a lot of different structures (and the number increases every 10 turns), I would like to make sure that there are no useless structures: either the path to victory makes use of all available structures, or several nearly equivalent paths to victory, taken together, use all available structures. So I'm looking for a means of quickly and automatically testing which structures would appear in an optimal solution to the game; in other words, this means solving the game.
I've thought about using a genetic algorithm (since the turn sequence lends itself pretty well to being a genome, and the sum of resources at the end plus a 10-turn bonus makes a nice fitness score), but I'm afraid this would not detect the degenerate "use only one type of structure" variants that players would naturally try out. What are your thoughts on this?
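For concreteness, here is a rough sketch of what I have in mind, with a stand-in simulation (fixed per-building yields, no adjacency effects, made-up targets) in place of the real rules; every constant and function here is a placeholder, not the actual game:

```python
import random

MAP, TURNS, TYPES = 9, 60, 10
# Stand-in yields: building type t adds 1 unit of resource (t % 3) per turn.
YIELD = [[1 if r == t % 3 else 0 for r in range(3)] for t in range(TYPES)]
TARGET_PER_CHECKPOINT = 40   # made-up resource requirement per 10-turn block

def random_gene():
    return (random.randrange(TYPES), random.randrange(MAP), random.randrange(MAP))

def random_genome():
    # One (building type, x, y) gene per turn.
    return [random_gene() for _ in range(TURNS)]

def fitness(genome):
    built, resources, bonus = {}, [0, 0, 0], 0
    for turn, (b, x, y) in enumerate(genome, start=1):
        built.setdefault((x, y), b)      # one structure per tile, first build wins
        for t in built.values():         # every structure produces every turn
            for r in range(3):
                resources[r] += YIELD[t][r]
        if turn % 10 == 0 and sum(resources) >= TARGET_PER_CHECKPOINT * (turn // 10):
            bonus += 100                 # 10-turn checkpoint bonus
    return sum(resources) + bonus

def mutate(genome, rate=0.05):
    return [gene if random.random() > rate else random_gene() for gene in genome]

def crossover(a, b):
    cut = random.randrange(1, TURNS)
    return a[:cut] + b[cut:]

def evolve(pop_size=30, generations=50):
    pop = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]
        pop = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                         for _ in range(pop_size - len(parents))]
    return max(pop, key=fitness)

print(fitness(evolve()))
```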
If you have an array of resources, and buildings that change those totals, the buildings could be thought of as vectors in resource-space. The minimum-amount goals would be hyperplanes in that space. Except the addition of neighbors changes things.
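A throwaway illustration of that view, leaving out the neighbor effects and with invented numbers:

```python
wood_mill = (3, 0, 0)     # per-turn delta in (wood, stone, gold)
quarry    = (0, 2, 0)
market    = (-1, -1, 4)   # transforms wood and stone into gold

def turn_delta(buildings):
    # The total per-turn change is just the vector sum of the placed buildings.
    return tuple(sum(b[i] for b in buildings) for i in range(3))

def meets_goal(totals, goal=(0, 0, 20)):
    # "At least this much of each resource" is an intersection of half-spaces,
    # one hyperplane per resource, in that same space.
    return all(t >= g for t, g in zip(totals, goal))

delta = turn_delta([wood_mill, wood_mill, quarry, market])
totals = tuple(10 * d for d in delta)   # ten turns of this fixed layout
print(delta, totals, meets_goal(totals))
```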
Cellular automata? Turing machines?
--"I'm not at home right now, but" = lights on, but no ones home
Perhaps a graph search? If you can calculate a fitness at any given state, then you've got yourself a heuristic. With such a large state space you're probably not guaranteed the absolute best solution, or all of them, but with a termination time and a little randomness involved you could probably find a lot of acceptable solutions. If that doesn't work, you could always resort to testers. It sounds like an interesting game; I wouldn't mind testing it.
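Something like a beam search with a time budget and a bit of noise, say; the successors() and score() below are toy stand-ins for the real game rules:

```python
import random, time

def beam_search(start, successors, score, width=20, budget_s=1.0):
    beam, best = [start], start
    deadline = time.time() + budget_s
    while beam and time.time() < deadline:
        candidates = []
        for state in beam:
            for nxt in successors(state):
                # Tiny random tie-breaker so repeated runs wander into
                # different corners of the (huge) state space.
                candidates.append((score(nxt) + random.random() * 1e-3, nxt))
        if not candidates:
            break
        candidates.sort(key=lambda c: c[0], reverse=True)
        beam = [s for _, s in candidates[:width]]
        best = max(beam + [best], key=score)
    return best

# Toy usage: a "state" is just the tuple of building types chosen so far.
TYPES, TURNS = 10, 60
successors = lambda s: [s + (t,) for t in range(TYPES)] if len(s) < TURNS else []
score = lambda s: len(set(s))          # toy objective: reward variety
print(score(beam_search((), successors, score)))
```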
I think a genetic algorithm would be very good at discovering which structures are most useful and which are least useful. You could always explicitly add degenerate strategies to the gene pool during the process to check that they don't survive against more complex strategies. Simpler strategies could receive a bonus (or complex strategies a penalty) so that the complex strategies will only survive when they are significantly better.
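Roughly like this, assuming you already have a simulation-based raw_fitness() and a random_genome() in your GA (both names are placeholders), and with an arbitrary penalty value:

```python
import random

TYPES, TURNS, MAP = 10, 60, 9

def degenerate_genome(building_type):
    # A "build only one type of structure" strategy on random tiles.
    return [(building_type, random.randrange(MAP), random.randrange(MAP))
            for _ in range(TURNS)]

def seeded_population(random_genome, size=50):
    # Seed every single-type strategy into the initial gene pool.
    seeds = [degenerate_genome(t) for t in range(TYPES)]
    return seeds + [random_genome() for _ in range(size - len(seeds))]

def penalized_fitness(raw_fitness, genome, penalty_per_type=5.0):
    # Complexity penalty: each distinct structure type used costs a bit,
    # so mixed strategies only survive when they are clearly better.
    distinct_types = len({b for b, _, _ in genome})
    return raw_fitness(genome) - penalty_per_type * distinct_types

# Toy demo with a stand-in raw fitness (number of distinct tiles built on).
toy_raw = lambda g: len({(x, y) for _, x, y in g})
pop = seeded_population(lambda: degenerate_genome(random.randrange(TYPES)))
print(max(penalized_fitness(toy_raw, g) for g in pop))
```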
Why, why use genetic algorithms for something that is completely deterministic?
Your problem is finding the optimal path in a non-cyclic, positively weighted graph.
This simply calls for A*.
AngleWyrm: the game simulation is indeed a cellular automaton. What I'm looking for is handling the "player control" side.
Vorpy: the bonus/penalty for simplicity/complexity is an excellent idea [smile]
Alrecenk, Steadtler: Because the search space is too big. I intend the game to have 60 steps, around 10 different buildings, and each of them can be built in many different places on the map. I'll have at least 10^60 different nodes in my graph... besides, I don't have a decent heuristic (I cannot approximate the final score based on the score at any point on my path), so A* would in fact be Dijkstra anyway.
Quote: Original post by ToohrVyk
Alrecenk, Steadtler: Because the search space is too big. I intend the game to have 60 steps, around 10 different buildings, and each of them can be built in many different places on the map. I'll have at least 10^60 different nodes in my graph... besides, I don't have a decent heuristic (I cannot approximate the final score based on the score at any point on my path), so A* would in fact be Dijkstra anyway.
The size of the search space is not as important as the size that you will explore. A* can be applied to even infinite search spaces. As for the heuristic, well, I didn't see your rules, but I'm not convinced it would be so difficult to find a good one, at least if you can determine what a perfect score would be (even if that score cannot be reached).
Since GA and other random searches are very unlikely to give you the best solution possible anyway, you might consider using a non-admissible heuristic that will give you very good solutions in less time than an admissible one.
edit: I almost forgot. Whatever method you use, don't forget to prune your solutions for symmetry.
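For example, something like canonicalizing each board under the eight rotations/reflections of the square map before deciding whether to expand it; this sketch assumes the map itself has no asymmetric terrain:

```python
N = 9

def rotate(board):
    # board is a dict {(x, y): building_type}; rotate the map by 90 degrees.
    return {(y, N - 1 - x): b for (x, y), b in board.items()}

def reflect(board):
    # Mirror the map horizontally.
    return {(N - 1 - x, y): b for (x, y), b in board.items()}

def canonical(board):
    # Enumerate all 8 symmetric variants and pick a deterministic representative.
    variants, b = [], board
    for _ in range(4):
        b = rotate(b)
        variants.append(b)
        variants.append(reflect(b))
    return min(tuple(sorted(v.items())) for v in variants)

seen = set()
def is_new(board):
    key = canonical(board)
    if key in seen:
        return False
    seen.add(key)
    return True

# Two single-building boards that are mirror images collapse to one state.
print(is_new({(0, 0): 3}), is_new({(8, 0): 3}))
```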
Quote: Original post by Steadtler
The size of the search space is not as important as the size that you will explore. A* can be applied to even infinite search spaces. As for the heuristic, well, I didn't see your rules, but I'm not convinced it would be so difficult to find a good one, at least if you can determine what a perfect score would be (even if that score cannot be reached).
The problem is that I want to be able to change the rules a lot in order to find the best balance, and I'd rather not have to generate a new heuristic every single time.
An artificial neural network might be able to learn a decent evaluation function. It needs to be carefully designed, since a good problem representation requires hundreds of nodes and an uninformed design would be too complex to efficiently learn anything useful.
The design can take advantage of the localized effects of buildings. If a building only interacts with things within 2 tiles of it, then a network designed to estimate the value of one particular spot only needs inputs from a 5 by 5 section. Only the things in this area that interact in some way with the building need to be considered. If the relative location of other buildings doesn't matter, then the number of inputs only needs to be approximately three times the number of building types. Binary inputs indicate the type of the center building and the presence or absence of types of buildings in the surrounding area. For each type of building, another input indicates how much of that building is in the area, or if there are at least two. There would also be inputs to indicate the type of terrain around the tile.
This is now a reasonable input size to apply the function to every tile and then add up the results to get the total estimate. There are still 1,000 moves to look at, but you only need to reevaluate tiles that changed and the surrounding tiles, and overall only about 25,000 evaluations of the artificial neural network will be needed per turn. Playing a whole game would require evaluating the network about 60*25,000 times. I think playing a whole game like this would take a few minutes at most, unless I'm underestimating something here.
There are now under 100 ANN edge weights to optimize. If my estimates are correct, a genetic algorithm could play hundreds or thousands of games in a few days, which should be enough to get some results. Of course I may have oversimplified the game to the point where this approach is missing some important stuff that would greatly increase the number of inputs required...
Of course the advantage of this approach, if it works, is that the result can be applied to any map. It could also generate a rough solution to be refined through the other genetic algorithm.
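A bare-bones version of the per-tile evaluator described above, with the terrain inputs left out and a single tanh unit standing in for the network; the feature layout and every constant are just my sketch, not a finished design:

```python
import math, random

TYPES, N, RADIUS = 10, 9, 2
N_INPUTS = 3 * TYPES   # per type: center marker, "present nearby", "two or more nearby"

def tile_features(board, x, y):
    # board is a dict {(x, y): building_type}; empty tiles contribute nothing.
    f = [0.0] * N_INPUTS
    center = board.get((x, y))
    if center is None:
        return f
    f[center] = 1.0                                   # type of the center building
    counts = [0] * TYPES
    for dx in range(-RADIUS, RADIUS + 1):
        for dy in range(-RADIUS, RADIUS + 1):
            if (dx or dy) and (x + dx, y + dy) in board:
                counts[board[(x + dx, y + dy)]] += 1
    for t, c in enumerate(counts):
        f[TYPES + t] = 1.0 if c >= 1 else 0.0         # that type present in the 5x5 area?
        f[2 * TYPES + t] = 1.0 if c >= 2 else 0.0     # at least two of it?
    return f

def tile_value(weights, features):
    # A single tanh unit; the weights are the small set of numbers to evolve.
    return math.tanh(sum(w * x for w, x in zip(weights, features)))

def board_value(weights, board):
    # Sum the per-tile estimates over the whole map to get the total estimate.
    return sum(tile_value(weights, tile_features(board, x, y))
               for x in range(N) for y in range(N))

weights = [random.uniform(-1, 1) for _ in range(N_INPUTS)]
board = {(4, 4): 0, (4, 5): 1, (5, 5): 0}
print(board_value(weights, board))
```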