
Minimax woes!

Started by cryo75, August 04, 2009 03:17 AM
6 comments, last by cryo75 15 years, 3 months ago
Hi all,

I've implemented a Minimax with Alpha-Beta for a nine men morris game. Even with a depth of 7 the AI is plain dumb and the moves seem at random. With forward pruning, the AI just follows me around blocking my moves. I win just the same. The same minimax has been used in Connect4 and works fine.

My scoring function:

public int GetScore()
{
    int score = 0;

    score -= ((Opponent.ClosedMills.Count * 6) +
              (Opponent.OpenMills.Count * 10) +
              (Opponent.OwnNodes.Count * 2));
    score += ((currentPlayer.ClosedMills.Count * 4) +
              (currentPlayer.OpenMills.Count * 6) +
              (currentPlayer.OwnNodes.Count * 2));

    return score;
}

Is it too simple??

Thanks,
Ivan
Quote: Original post by cryo75
I've implemented a Minimax with Alpha-Beta for a nine men morris game. Even with a depth of 7 the AI is plain dumb and the moves seem at random.

There could be many reasons. Off the top of my head:
* Maybe a depth of 7 is not enough to play the game well
* Maybe your evaluation function is not good enough
* Maybe there are bugs in your code
* Maybe this game needs a quiescence search for the scores to be meaningful (I've never played the game, so I wouldn't know)

Quote: With forward pruning, the AI just follows me around blocking my moves. I win just the same.

Perhaps you should wait before you implement potentially dangerous optimizations, of which forward pruning is definitely an example.

Quote: My scoring function:

public int GetScore()
{
int score = 0;

score -= ((Opponent.ClosedMills.Count * 6) +
(Opponent.OpenMills.Count * 10) +
(Opponent.OwnNodes.Count * 2));
score += ((currentPlayer.ClosedMills.Count * 4) +
(currentPlayer.OpenMills.Count * 6) +
(currentPlayer.OwnNodes.Count * 2));

return score;
}

Is it too simple??

Probably. It's also not symmetric, which is a bad sign.
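
For what it's worth, a symmetric version would apply the same weights to both sides and just subtract; a minimal sketch reusing the property names from your snippet (the weights here are placeholders, not a recommendation):

// Hypothetical symmetric rewrite of GetScore(): the same weights are applied
// to both players, so evaluating the position for one side is exactly the
// negative of evaluating it for the other.
public int GetScore()
{
    int myScore = (currentPlayer.ClosedMills.Count * 6) +
                  (currentPlayer.OpenMills.Count * 10) +
                  (currentPlayer.OwnNodes.Count * 2);

    int theirScore = (Opponent.ClosedMills.Count * 6) +
                     (Opponent.OpenMills.Count * 10) +
                     (Opponent.OwnNodes.Count * 2);

    return myScore - theirScore;
}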

Quote: Original post by alvaro
Quote: Original post by cryo75
I've implemented a Minimax with Alpha-Beta for a nine men morris game. Even with a depth of 7 the AI is plain dumb and the moves seem at random.

There could be many reasons. Off the top of my head:
* Maybe a depth of 7 is not enough to play the game well
* Maybe your evaluation function is not good enough
* Maybe there are bugs in your code
* Maybe this game needs a quiescence search for the scores to be meaningful (I've never played the game, so I wouldn't know)


* I've tried a depth of 10 and am currently trying 15, but it's too slow, even though it's only a 7x7 board with just 24 possible positions and 9 pieces per player
* An evaluation function that's not good enough could be one reason
* Bugs can't be ruled out. The rules are implemented and work well, as they have been tested in 2-player mode
* Quiescence search for a simple game like this?

Quote: [...]
Probably. It's also not symmetric, which is a bad sign.

I've placed more weight on the opponent so that the current player considers potentially bad positions much more than his own.
Quote: Original post by cryo75
* I've tried a depth of 10 and am currently trying 15, but it's too slow, even though it's only a 7x7 board with just 24 possible positions and 9 pieces per player

Perhaps your search could be made much faster. Do you have good move-ordering heuristics? Transposition tables? Iterative deepening?
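
To illustrate the last of those, iterative deepening is just a loop around the root search; a rough sketch, assuming a root routine with a signature like GetBestMove(IBoard board, int depth) and an arbitrary time budget:

// Minimal iterative-deepening wrapper (illustrative only).
// Assumes a GetBestMove(board, depth) root search and a time budget; the best
// move from the previous iteration can also be reused to seed move ordering.
public Move GetBestMoveTimed(IBoard board, int maxDepth, TimeSpan budget)
{
    var timer = System.Diagnostics.Stopwatch.StartNew();
    Move best = null;

    for (int depth = 1; depth <= maxDepth; depth++)
    {
        if (timer.Elapsed > budget)
            break;                        // out of time: keep the last completed result

        best = GetBestMove(board, depth); // re-search one ply deeper
    }

    return best;
}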
Quote: [...]
* Quiescence search for a simple game like this?

Quiescence is needed in games where the static evaluation might change a lot temporarily, like in a capture-recapture sequence in chess. It probably doesn't apply to this game, but it's not a function of the complexity of the game.
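
For reference, a quiescence search keeps expanding only the "noisy" moves once the nominal depth runs out. A rough negamax-style sketch, where GetNoisyMoves() is a hypothetical generator for moves that can swing the evaluation (captures in chess, perhaps mill-closing moves here):

// Illustrative quiescence search (negamax convention). GetNoisyMoves() is a
// hypothetical method returning only the moves that can change the score a lot.
int Quiesce(IBoard board, int alpha, int beta)
{
    int standPat = board.GetScore();   // score if we stop searching here
    if (standPat >= beta)
        return beta;
    if (standPat > alpha)
        alpha = standPat;

    foreach (Move move in board.GetNoisyMoves())
    {
        IBoard next = board.Copy();
        next.MakeMove(move);
        int val = -Quiesce(next, -beta, -alpha);

        if (val >= beta)
            return beta;
        if (val > alpha)
            alpha = val;
    }

    return alpha;
}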
Quote: I've placed more weight on the opponent so that the current player considers potentially bad positions much more than his own.

This is very tempting to do, and it generally has bad results. Just make your evaluation function your best shot at estimating who is ahead, and the search will take care of having the right balance between attack and defense. If you want to have personality adjustments later on, this would be a fine way to implement them, but I wouldn't mess with it before you are happy with the quality of play.
Quote: Original post by alvaro
[...]
Perhaps your search could be made much faster. Do you have good move-ordering heuristics? Transposition tables? Iterative deepening?

I do have a simple move ordering when getting the list of possible moves. For example, first I get the opponent's pieces that can lead to a mill (i.e. 3 pieces in a line), then those that can lead to an open mill (i.e. 2 pieces), then I consider all other pieces.
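
One way to express that ordering more directly is to give each candidate a priority and sort once; a sketch using hypothetical helpers ClosesMill() and CreatesOpenMill() standing in for the checks described above:

// Illustrative move ordering: examine likely-best moves first so alpha-beta
// can cut off sooner. ClosesMill/CreatesOpenMill are hypothetical helpers.
// (Requires: using System.Collections.Generic; using System.Linq;)
List<Move> OrderMoves(IBoard board, List<Move> moves)
{
    return moves.OrderBy(m => MovePriority(board, m)).ToList();
}

int MovePriority(IBoard board, Move move)
{
    if (ClosesMill(board, move))      return 0; // best: completes a mill
    if (CreatesOpenMill(board, move)) return 1; // next: sets up an open mill
    return 2;                                   // everything else
}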
This is my Minimax:

public class Minimax
{
    IBoard board;

    Random rand = new Random();

    public Move GetBestMove(IBoard board, int depth)
    {
        this.board = board.Copy();
        List<Move> moves = board.GetMoves();
        List<Move> bestMoves = new List<Move>();
        bestMoves.Clear();
        bestMoves.Add(moves[0]);

        foreach (Move move in moves)
        {
            this.board.MakeMove(move);
            move.Score = -AlphaBeta(this.board, move, depth, -999999, 999999);
            this.board = board.Copy();

            if (move.Score > bestMoves[0].Score)
            {
                bestMoves.Clear();
                bestMoves.Add(move);
                Console.WriteLine("Add & Clear " + move.NodeFrom.Name + "->" + move.NodeTo.Name + ": " + move.Score);
            }
            else if (move.Score == bestMoves[0].Score)
            {
                bestMoves.Add(move);
                Console.WriteLine("Add " + move.NodeFrom.Name + "->" + move.NodeTo.Name + ": " + move.Score);
            }
            else
                Console.WriteLine("Ignore " + move.NodeFrom.Name + "->" + move.NodeTo.Name + ": " + move.Score + ", Best " + bestMoves[0].NodeFrom.Name + "->" + bestMoves[0].NodeTo.Name + ": " + bestMoves[0].Score);
        }

        int rnd = (rand.Next(bestMoves.Count));
        Move newMove = bestMoves[rnd];
        Console.WriteLine("Chose " + newMove.NodeFrom.Name + "->" + newMove.NodeTo.Name + ": " + newMove.Score);

        return bestMoves[rnd];
    }

    int AlphaBeta(IBoard board, Move origMove, int depth, int alpha, int beta)
    {
        if (depth <= 0)
            return this.board.GetScore();

        if (this.board.Won() != 0)
            return this.board.GetScore();

        List<Move> moves = this.board.GetMoves();
        int val;

        foreach (Move move in moves)
        {
            this.board.MakeMove(move);
            val = -AlphaBeta(this.board, move, depth - 1, -beta, -alpha);
            this.board = board.Copy();

            if (val >= beta)
                return beta;
            if (val > alpha)
                alpha = val;
        }

        return alpha;
    }
}


I don't see any problems with it. Do you?

Ivan
Quote: Original post by cryo75
This is my Minimax:

*** Source Snippet Removed ***

I don't see any problems with it. Do you?

No, not right away. Well, I don't know the details of the programming language you are using, so I am not sure if parameters are passed by value or by reference, and that kind of thing.

C#
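
For what it's worth: in C#, a reference-type argument is itself passed by value, but that value is a reference, so anything the callee does through the parameter is visible to the caller, while reassigning the parameter is not. A tiny sketch using the IBoard and Move types from the code above:

// Illustration only: 'b' is a copy of the caller's reference, but both still
// point at the same object, so MakeMove changes the caller's board.
// Reassigning 'b' only changes where the local variable points.
void Example(IBoard b, Move m)
{
    b.MakeMove(m);   // visible to the caller: same underlying object
    b = b.Copy();    // not visible to the caller: only the local reference changes
}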

I also changed my scoring function to:

public int GetScore()
{
    int score = 0;

    //Draw
    if (!currentPlayer.CanMove && !Opponent.CanMove)
        return 0;

    //Current player victory condition
    if (((Opponent.NewPieces == 0) && (Opponent.OwnNodes.Count < rules.MinPiecesToFly)) || (!Opponent.CanMove))
        return 65536;

    //Opponent victory condition
    if (((currentPlayer.NewPieces == 0) && (currentPlayer.OwnNodes.Count < rules.MinPiecesToFly)) || (!currentPlayer.CanMove))
        return -65536;

    //Positive score
    score += (currentPlayer.OwnNodes.Count) +
             ((currentPlayer.ClosedMills.Count + currentPlayer.OpenMills.Count) * 3) +
             (currentPlayer.PossibleMoves / rules.MaxNodes);

    //Negative score
    score -= (Opponent.OwnNodes.Count) +
             ((Opponent.ClosedMills.Count + Opponent.OpenMills.Count) * 3) +
             (Opponent.PossibleMoves / rules.MaxNodes);

    return score;
}


Still dumb AI with a depth of 7!!

