What AI for a card game?
Hi all, I'm working on a little Magic-like card game (although I don't really know Magic). But I'm not sure how to handle the AI. Is there any known algorithm for this kind of game? I tried an alpha-beta search, but it forces my design to have absolutely no randomness: the attack / defense / damage values have to be fixed, and I can't have a card deck (but that's less important for me, I didn't want one anyway). The second issue I have with alpha-beta is that performance depends on the number of cards I can play at one time, and the number of combinations can get really high really quickly, so performance degrades fast... What other kinds of techniques apply to this kind of game? Should I make a long-term high-level strategy layer, and a low-level layer following the strategy and deciding which card to use? Any help is welcome, Thanks, Emmanuel [Edited by - Emmanuel77 on August 10, 2006 5:58:55 PM]
Thanks APs for your answers, but I'm looking for algorithms that could help find a decent card to play for the AI.
Once I have the base algorithm, I think it would be easy to tweak it to make different kinds of opponents.
Actually that's already what I have with my alpha-beta algorithm.
I can have different evaluation functions for different AIs, so some players favor certain ways of playing.
But my alpha-beta is a little too slow, and I can't put any randomness in it.
I thought about dealing with the random part by using average values in place of real values, but it just won't work (tm):
if I have a creature card with a 10D6 attack but a huge amount of hit points, against a creature with an 11D6 defense but only a few hit points, an algorithm using average values will always conclude that the attack fails, although it should hit from time to time...
So I was wondering if there was an academic way of dealing with this.
thanks for your help anyway,
Emmanuel
Quote:
Original post by Emmanuel77
Thanks APs for your answers, but I'm looking for algorithms that could help find a decent card to play for the AI.
Once I have the base algorithm, I think it would be easy to tweak it to make different kinds of opponents.
Actually that's already what I have with my alpha-beta algorithm.
I can have different evaluation functions for different AIs, so some players favor certain ways of playing.
But my alpha-beta is a little too slow, and I can't put any randomness in it.
I thought about dealing with the random part by using average values in place of real values, but it just won't work (tm):
if I have a creature card with a 10D6 attack but a huge amount of hit points, against a creature with an 11D6 defense but only a few hit points, an algorithm using average values will always conclude that the attack fails, although it should hit from time to time...
So I was wondering if there was an academic way of dealing with this.
thanks for your help anyway,
Emmanuel
I've seen approaches to these kinds of problems using neural nets, genetic algorithms, or both!
Quote:
Original post by Emmanuel77
I thought about dealing with the random part by using average values in place of real values, but it just won't work (tm):
if I have a creature card with a 10D6 attack but a huge amount of hit points, against a creature with an 11D6 defense but only a few hit points, an algorithm using average values will always conclude that the attack fails, although it should hit from time to time...
That's just a statistics or probability problem. Each range has a probability curve, and from that you can work out the percentage chance of the attack succeeding. That percentage can then modify the perceived value of the attack. E.g. a 25% chance of a win = 1/4 as much benefit from this move as if it were a guaranteed win.
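The percentage for a dice-vs-dice matchup can be computed exactly by convolving the die distributions. A minimal sketch in Python (function names are mine, not from the game): for the 10D6-attack vs 11D6-defense example above, the attack actually wins roughly 30% of the time, so the average-value comparison (35 vs 38.5) is badly misleading.

```python
def dice_sum_dist(n_dice, sides=6):
    """Exact distribution of the sum of n_dice dice, by repeated convolution."""
    dist = {0: 1.0}
    for _ in range(n_dice):
        nxt = {}
        for total, p in dist.items():
            for face in range(1, sides + 1):
                nxt[total + face] = nxt.get(total + face, 0.0) + p / sides
        dist = nxt
    return dist

def p_attack_beats_defense(atk_dice, def_dice):
    """P(attack roll strictly beats defense roll)."""
    atk, dfn = dice_sum_dist(atk_dice), dice_sum_dist(def_dice)
    return sum(pa * pd for a, pa in atk.items()
                       for d, pd in dfn.items() if a > d)

p_hit = p_attack_beats_defense(10, 11)  # ~0.3, far from "always fails"
```

The resulting probability can then scale the evaluation score of the attack move, exactly as described above.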
I think I understand your problem - you *can* have randomness in an alpha-beta search.
For each possible outcome you need to multiply the result of your evaluation function by the probability of that event happening. So events that have tiny chances of happening are taken least into account.
Also, I thought alpha-beta is a good choice for a card game if you are doing pruning.
What is the depth of your tree and how many cards do you have?
[Edited by - stevenmarky on August 11, 2006 8:37:49 AM]
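The weighting scheme described above (multiply each outcome's evaluation by its probability) is usually called expectiminimax: between the max and min levels you insert chance nodes that average their children by probability instead of taking an extreme. A toy sketch, with the game tree pre-expanded into nested tuples; a real implementation would generate children from the game state instead.

```python
# Node shapes: ('max', [children]), ('min', [children]),
# ('chance', [(prob, child), ...]), ('leaf', value).
def expectiminimax(node):
    kind = node[0]
    if kind == 'leaf':
        return node[1]
    if kind == 'chance':  # average the outcomes, weighted by probability
        return sum(p * expectiminimax(child) for p, child in node[1])
    values = [expectiminimax(child) for child in node[1]]
    return max(values) if kind == 'max' else min(values)

# Hypothetical position: attacking has a 30% chance of killing the
# enemy creature (+10) and a 70% chance of a weak hit (-2);
# passing is worth 0.
tree = ('max', [('chance', [(0.3, ('leaf', 10)), (0.7, ('leaf', -2))]),
                ('leaf', 0)])
expectiminimax(tree)  # 0.3*10 + 0.7*(-2) = 1.6, so attacking is chosen
```

Note that alpha-beta cutoffs become weaker below chance nodes, which is part of why this search is more expensive than plain minimax.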
How about a finite state machine type AI?
I believe it would work great for a card game. You could create different agents that resemble different personalities or strategies.
I've thought about this one myself quite a bit. The biggest hurdle I face is that I don't know the rules of the game well enough, but I may have an idea that would be useful.
The finite state machine suggested above is a possibility, but I think you need a probabilistic state machine such that transitions between states are not deterministic. You need to introduce the random component, as you mentioned above, but at the same time, you want to bias the result such that you maximize your probability of winning.
The question is how to determine the transition probabilities. The way I thought to approach it is to set up a random simulation where you have various scenarios of which card is played. You then follow the standard rules of play, which include the random dice throw, and record the results. After running this numerous times (probably millions of iterations), you'll start to collect sets of probabilities, and those can then be related to which cards were played. These could then be used to construct your state machine.
You're really monitoring a stochastic process and trying to come up with proper rules.
I hope that makes sense.
-Kirk
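The simulation idea above can be prototyped in a few lines. A hedged sketch, where `simulate_game` is a hypothetical callback that plays one randomized game after forcing a given card as the first play and reports whether the AI won:

```python
def estimate_win_rates(cards, simulate_game, n_trials=10_000):
    """Estimate P(win | card played first) by repeated random playouts."""
    rates = {}
    for card in cards:
        # booleans sum as 0/1, so this counts the wins
        wins = sum(simulate_game(card) for _ in range(n_trials))
        rates[card] = wins / n_trials
    return rates
```

The estimated rates (or a table of them keyed by coarse game-state features) could then supply the transition probabilities for the probabilistic state machine.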
Gorogoro:
Have you got links to some places where these kinds of techniques are used?
I thought about using a genetic algorithm to tune the weights in my evaluation function, but I don't know how to go any further.
DarkZoulz:
I am confused... How can a state machine tell me which card to play? I have a given state for the game and the board (which cards are played, their location, and their current state: HP and some modifiers). Sure, once I have a robust algorithm to choose the card the AI should play, I can tweak some parameters based on the game state to change the strategy (like getting more aggressive when the AI is winning, or whatever), but what I'm really after is the core card-chooser algorithm.
stevenmarky:
My current minimax algorithm already uses alpha-beta pruning (what other kinds of pruning are there? Pruning branches that are "obviously" bad?). It searches to a depth of 4, with a choice between 12 cards that can be played on 5 locations. Not all the locations are free at any one time, but with these numbers I can quickly reach 200,000 state evaluations...
One more thing: my implementation is in Python (because all my game state transitions and card implementations are in Python), so 200,000 evaluations take quite some time.
stevenmarky and Kylotan:
I'm not sure how to use minimax with randomness:
if I am at one game state, and a card attacks with a damage of 10-15, does that create 6 different states (one for each possible damage)?
Minimax needs actual states in its branches. I understand I can use the probabilities to evaluate a given state: e.g. card A has a 25% chance to attack, and 10-15 damage gives a 12.5 / 4 global attack value.
But I can't create a new game state and a new branch from that... the results are really different depending on whether the opponent's card is dead or not...
kirkd:
I'm not sure I follow you...
You mean that for 'some' game state, I try all the possibilities a large number of times, and record which one is better?
But I have two big issues with such a technique: first, the number of possible game states is really huge. Even by trying to find some kinds of patterns, it stays really big.
Second, I don't know how to evaluate whether a result is good, as long as the result isn't 'Player One wins' or 'Player Two wins'. If I had such a magic perfect evaluation function, I would just choose the better card. As in a chess game, how good a move was may only show some time after it was played...
Everyone:
Thanks all for your advice...
For the moment, I will try to build a better evaluation function; it seems to be the most important part...
Emmanuel
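On the branching question above, one standard answer: a chance node does not need one branch per die face, only one branch per *distinct resulting game state*. A sketch (the state key here is just dead/alive plus remaining HP; real states would carry more): damage rolls of 10-15 against a 12 HP defender collapse into three branches instead of six, because every lethal roll produces the same successor state.

```python
from fractions import Fraction

def damage_branches(damage_range, defender_hp):
    """Group equiprobable damage rolls by the game state they produce."""
    branches = {}  # ('dead'|'alive', remaining_hp) -> probability
    n = len(damage_range)
    for dmg in damage_range:
        hp_left = max(0, defender_hp - dmg)
        key = ('dead' if hp_left == 0 else 'alive', hp_left)
        branches[key] = branches.get(key, Fraction(0)) + Fraction(1, n)
    return branches

damage_branches(range(10, 16), 12)
# three branches: alive with 2 HP (1/6), alive with 1 HP (1/6), dead (2/3)
```

Each branch becomes a real child state in the search tree, with its evaluation weighted by the branch probability on the way back up.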
Quote:
Original post by Emmanuel77
DarkZoulz:
I am confused... How can a state machine tell me which card to play? I have a given state for the game and the board (which cards are played, their location, and their current state: HP and some modifiers). Sure, once I have a robust algorithm to choose the card the AI should play, I can tweak some parameters based on the game state to change the strategy (like getting more aggressive when the AI is winning, or whatever), but what I'm really after is the core card-chooser algorithm.
Emmanuel
Well, I was thinking that the "core card chooser" should probably choose a card based on a strategy or personality. Which card it plays exactly is really not that relevant. If the AI is aggressive, it plays cards that are marked as aggressive. If you want it to make even more specific card choices, you could just have more states and divide your cards into more specific categories.
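A minimal sketch of that idea (the card tags and the `power` tiebreaker are invented for illustration, not part of the original game): the active personality state filters the hand, with a fallback to the whole hand when nothing matches.

```python
CARDS = [
    {'name': 'Ogre',   'tags': {'aggressive'},              'power': 7},
    {'name': 'Wall',   'tags': {'defensive'},               'power': 2},
    {'name': 'Knight', 'tags': {'aggressive', 'defensive'}, 'power': 5},
]

def choose_card(hand, personality):
    """Prefer cards tagged for the current personality/state."""
    preferred = [c for c in hand if personality in c['tags']]
    pool = preferred or hand  # fall back to any playable card
    return max(pool, key=lambda c: c['power'])

choose_card(CARDS, 'defensive')  # Knight: the strongest defensive card
```

Swapping the personality string as the state machine transitions is then enough to make the same chooser play differently.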
Genetic algorithms with a greedy-probabilistic selection/deselection of cards during mutation sound like a method that would produce good combinations, personalized decks, and opponents ordered by skill. Just keep subpopulations for diversity, in which the best combinations are stored and the mutation factors vary greatly. This way, it can explore for "more global" maxima while fine-tuning local maxima. The latter kind would seem more natural to the player. If you have many different sorts of cards, you could also rank them according to a heuristic evaluation and the general level of the decks they appear in. After a card shows up in one or two poor decks, you could, for example, reduce the probability of it being selected for another deck.
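A toy sketch of that loop, with a deliberately fake fitness (summed per-card strength; a real fitness would come from simulated matches between decks) and a mutation biased toward throwing out the weakest card, as suggested above:

```python
import random

CARD_POOL = 30                                   # hypothetical pool size
CARD_STRENGTH = [random.random() for _ in range(CARD_POOL)]
DECK_SIZE = 10

def fitness(deck):
    return sum(CARD_STRENGTH[c] for c in deck)   # stand-in for match results

def mutate(deck):
    """Greedy-probabilistic: usually swap out the weakest card."""
    deck = list(deck)
    weakest = min(range(DECK_SIZE), key=lambda i: CARD_STRENGTH[deck[i]])
    out = weakest if random.random() < 0.8 else random.randrange(DECK_SIZE)
    deck[out] = random.choice([c for c in range(CARD_POOL) if c not in deck])
    return deck

def evolve(generations=200, pop_size=20):
    pop = [random.sample(range(CARD_POOL), DECK_SIZE) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]          # selection: keep the best half
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=fitness)
```

The subpopulations suggested above would just be several of these populations evolved side by side with different mutation rates, occasionally exchanging their best decks.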