
new to AI, question....

Started by August 24, 2010 09:12 AM
3 comments, last by rhm3769 14 years, 5 months ago
Designing a board game, mixing elements of Risk and Monopoly with a few other ideas... There are quite a lot of decisions I would have to program for the AI here, and from playing enough games against the AI in the games I'm borrowing from, after a while the AI gets sort of predictable. Given the potential number of options any one player has during any one turn, the different strategies available, and the switching between strategies, I am seriously thinking about trying to develop the AI in a way that allows it to "learn" how to play based on how real players play.

I'm thinking this would involve recording everything that happens for every player on every turn. Store all of this data, compile it into patterns before an action is taken, then link each pattern to the action taken, along with counts of how many times that exact situation and that action have been repeated. When it comes time for an AI player's turn, it would check this list for its exact situation and find the most-used action. I'll have to program something in so it doesn't always take the most-used action; maybe link the next few game events to the action taken and find patterns from that. So in a sense, the AI is "thinking": the board looks like this, so I should do this, but this, this, or this can happen. I should do this, but this will most likely happen; or, I don't like the possible outcomes of doing this, but the possible outcomes of doing that look better to me, so I'm going to take the second-most-used action.
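A minimal sketch of this record-and-rank idea, assuming a hashable "state key" can be built from the board situation (the state keys and action names below are made-up placeholders, not anything from the actual game):

```python
from collections import defaultdict

class MoveRecorder:
    """Counts which action was taken in each observed game state, then
    ranks actions by how often they were used in that state."""

    def __init__(self):
        # state_key -> {action_name: times_seen}
        self.counts = defaultdict(lambda: defaultdict(int))

    def record(self, state_key, action):
        self.counts[state_key][action] += 1

    def ranked_actions(self, state_key):
        """Actions seen in this state, most-used first; [] if unseen."""
        actions = self.counts.get(state_key, {})
        return sorted(actions, key=actions.get, reverse=True)

recorder = MoveRecorder()
recorder.record("3-armies-on-border", "attack")
recorder.record("3-armies-on-border", "attack")
recorder.record("3-armies-on-border", "fortify")
print(recorder.ranked_actions("3-armies-on-border"))  # ['attack', 'fortify']
```

Because the whole ranking comes back, the "don't always do the most-used action" idea is just picking index 1 (or sampling) instead of index 0.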

How exactly would this be accomplished, and is it even possible (if it hasn't been done already)? And at what point would the data collection, storage, sorting, searching, and deciding on what move to make become too much?

I know there's a possibility of the AI being made "dumb," since it is making moves based on what actual players did during the first games. But I don't plan on it being played by a huge number of people until I get this system worked out and implemented, if possible, and the people I plan on having play it helped come up with the basic idea, rules, and gameplay; I'm just the one who stepped up and volunteered to program it. There are too many things to keep track of if you were to play it on an actual board, and since we plan on using the program to run test plays for setting the actual values and everything, having to remember what was used and how it worked out adds to the difficulty of playing on a physical board.

Right now, I don't even have the game programmed yet. If the method I mentioned is possible and wouldn't be "impossible" to implement, I'd prefer to plan out exactly how it needs to be done, so I can design it into the game from the start rather than finish the game, look at it, and find I can't add the AI without changing a lot of code.
Before we get to the rest, I'm going to question your premise that doing a simple deliberative AI is harder than doing a learning AI. Why do you think that? A deliberative AI that uses the facts available and reasons on them for your example games of Risk and Monopoly is actually not that hard. At least not compared to constructing a reasonable learning algorithm.

Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC
Professional consultant on game AI, mathematical modeling, simulation modeling
Co-founder and 10 year advisor of the GDC AI Summit
Author of the book, Behavioral Mathematics for Game AI
Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play

"Reducing the world to mathematical equations!"

I'm sorry... That's not exactly what I was implying. I'm looking for a system that takes more into account than just the current game state for a situation. Some parts of the AI will be that simple: do this or that; if this, then do this, else do that. But there's more involved in the game than just buying and then attacking.

I have a system similar to the Chance and Community Chest cards in Monopoly, except it's not as random as landing on a spot and drawing from a randomly shuffled deck. It's based on points that accumulate from actions. At x points you get a specific bonus, while at y points a specific penalty is assigned. During the phase of the turn that checks for this, if you rate a bonus you can accept it or pass it on; accepting resets your points, while passing it on keeps your points so you can take a chance at a higher bonus. If you rate a penalty, it's applied but your points stay the same. While the order and values are set, the actual debt or payout is somewhat randomized, so one player might get x from a bonus while another player might get y from the same bonus.
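The accept/pass mechanic described above could be sketched roughly like this (the thresholds and payout ranges are made-up placeholders, since the actual values aren't stated):

```python
import random

class EventTrack:
    """One player's accumulating event points: hitting the bonus threshold
    lets you accept (points reset) or pass (points kept); hitting the
    penalty threshold applies the penalty and leaves points unchanged.
    Thresholds and payout ranges are placeholder values."""

    BONUS_AT = 10    # placeholder bonus threshold
    PENALTY_AT = 7   # placeholder penalty threshold

    def __init__(self):
        self.points = 0

    def add_points(self, n):
        self.points += n

    def check_phase(self, accept_bonus):
        """Run the per-turn check. Returns ('bonus'|'penalty', amount) or None."""
        if self.points >= self.BONUS_AT:
            if accept_bonus:
                self.points = 0                         # accepting resets points
                return ("bonus", random.randint(50, 150))  # randomized payout
            return None                                 # passing keeps points
        if self.points >= self.PENALTY_AT:
            # penalty is applied, but points stay the same
            return ("penalty", random.randint(20, 80))
        return None
```

The randomized `randint` ranges stand in for the "one player might get x, another might get y" behavior.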

With that out of the way, players affect not only their own points but every other player's as well. For example, attacking and defending successfully earn points. If you notice a player is close to a bonus or penalty and you can effectively change their chances of getting or passing it, throwing a battle to give them points is acceptable. Or, you would normally attack a player, but doing so would put them in a position to possibly rate a bonus on their next turn, so you probably wouldn't. So it's not as simple as looking at the layout of the board on any given turn. That seems like a lot of checks to program into the AI when they could be skipped by having the AI learn: if the players as a whole don't take those factors into account for the majority of the game, then the AI shouldn't either. Even with a learning AI, there will have to be checks and a random factor.

I guess what I'm trying to get at is I don't want a predictable AI and I'm looking more towards an AI system that adapts based on collective gameplay as well as gameplay from a single machine.

Ack.

OK... the combination of "new to AI" and the problem space you outlined above is not terribly pleasant.

You might want to ponder this: "learning AI" can't really just learn "stuff". Even when you learn something, all you are doing is turning knobs. You have to define what the knobs are and what they do in the first place. Therefore, you need to write a skeleton decision-making AI for this game and only THEN actually try to tune it with in-game data.
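To illustrate what "knobs" might look like here, consider a skeleton decision-maker that scores candidate moves as a weighted sum of features; learning then means adjusting the weights, not inventing behavior. The feature names, weights, and candidate moves below are purely illustrative:

```python
def score_action(features, weights):
    """Score one candidate action as a weighted sum of its features."""
    return sum(weights[name] * value for name, value in features.items())

def choose_action(candidates, weights):
    """Pick the highest-scoring candidate.
    Each candidate is a (action_name, feature_dict) pair."""
    return max(candidates, key=lambda c: score_action(c[1], weights))[0]

# The weights are the tunable knobs; a learning pass would adjust these.
weights = {"territory_gain": 1.0, "risk_of_loss": -2.0, "income": 0.5}
candidates = [
    ("attack",  {"territory_gain": 1, "risk_of_loss": 0.6, "income": 0}),
    ("fortify", {"territory_gain": 0, "risk_of_loss": 0.1, "income": 0}),
    ("buy",     {"territory_gain": 0, "risk_of_loss": 0.0, "income": 2}),
]
print(choose_action(candidates, weights))  # buy
```

The point is that the feature set itself has to be hand-designed first; only the weights are learnable.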

That said, even a skeleton AI for the ruleset you have laid out is going to be largely seeded by a huge-ass Monte Carlo search over what might happen in the future. Not something that is really tunable in the first place. The state space is just too large.
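A bare-bones sketch of what such a Monte Carlo evaluation looks like; the hooks `legal_moves`, `apply_move`, and `random_playout` are placeholders for the actual game engine:

```python
import random

def monte_carlo_value(state, legal_moves, apply_move, random_playout, n=200):
    """Estimate each legal move's value by applying it, then finishing the
    game with random play n times and averaging the win/loss outcomes."""
    values = {}
    for move in legal_moves(state):
        next_state = apply_move(state, move)
        total = sum(random_playout(next_state) for _ in range(n))
        values[move] = total / n
    return values

# Toy demo: a "game" where the chosen move alone decides the outcome.
demo = monte_carlo_value(
    state=None,
    legal_moves=lambda s: ["safe", "risky"],
    apply_move=lambda s, m: m,
    random_playout=lambda s: 1.0 if s == "safe" else random.choice([0.0, 1.0]),
    n=50,
)
print(max(demo, key=demo.get))
```

With branching factors like a Risk/Monopoly hybrid, `n` rollouts per move per turn is where the cost blows up.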


It makes sense, thanks. As far as storing the data it would search through: would it be logical to store all global information (all player positions, ownerships, troop counts, points, money, etc.) linked with the moves made and the changes to the local area over the next few player turns, and then the local-area data (surrounding ownerships, troop counts, relevant player data for those in the area, etc.) linked to the moves made and linked back to the changes over the next few player turns? How exactly would this be different from logging everything turn by turn as a means of testing for game balance, apart from the AI searching it and deciding what move to make based on a programmed set of conditions it should be looking for?
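As a sketch, one possible shape for such a per-turn record (all field names and values here are hypothetical; the game's real state would replace them). The same records could serve both purposes: balance analysis reads them in bulk, while the AI queries them keyed on the state snapshot:

```python
# One log entry per move. "global_state" is the pre-move snapshot,
# "local_state" is the acting player's surroundings, and "outcome_window"
# is appended to over the next few player turns.
turn_record = {
    "turn": 12,
    "player": "P1",
    "global_state": {
        "ownerships": {"A": "P1", "B": "P2"},
        "troops": {"A": 5, "B": 3},
        "points": {"P1": 6, "P2": 9},
        "money": {"P1": 120, "P2": 80},
    },
    "local_state": {
        "adjacent": {"B": {"owner": "P2", "troops": 3}},
    },
    "move": {"type": "attack", "from": "A", "to": "B"},
    "outcome_window": [],  # filled in over the next few turns
}
print(turn_record["move"]["type"])
```

Under this view, the balance log and the AI's training data really are the same table; the difference is only the index (by turn number for balance review, by state snapshot for move lookup).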

