Hi, I have been working on a Connect-4 program for a few months now. It is a strong player, but it is not perfect. I am now focusing on tweaking my search algorithm so that the search speed increases.
I have a pretty decent evaluation, but for some positions I just don't understand how to write a favourable routine, and that is where I want the machine's power to do the work.
My search code is a negamax implementation with alpha-beta pruning. It also has iterative deepening, killer moves and hash tables. I tried to add NegaScout, but it doesn't give my search any benefit.
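For context, here is a bare-bones sketch of the negamax + alpha-beta shape being discussed; the names Position, Evaluate, OrderedMoves, Play and WinScore are illustrative, not taken from the poster's actual code:

    int Negamax(Position pos, int depth, int alpha, int beta)
    {
        if (pos.LastMoveWon())
            return -WinScore;                   // the previous move made four in a row: side to move has lost
        if (pos.IsFull())
            return 0;                           // draw
        if (depth == 0)
            return Evaluate(pos);               // static evaluation at the horizon

        foreach (int col in pos.OrderedMoves()) // hash move, killers, centre columns first
        {
            int score = -Negamax(pos.Play(col), depth - 1, -beta, -alpha);
            if (score >= beta)
                return beta;                    // beta cutoff
            if (score > alpha)
                alpha = score;                  // new best score at this node
        }
        return alpha;
    }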
All of my evaluation is coded with bitboards. Currently I can search up to 17 plies from the start position with the full evaluation, and up to 25 plies when checking only winning conditions.
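For reference, the usual bitboard alignment test looks like the sketch below, assuming the common layout of 7 bits per column (6 playable rows plus one sentinel bit); the function name and layout are assumptions and may differ from the poster's encoding:

    static bool HasFourInARow(ulong bb)
    {
        ulong m = bb & (bb >> 1);              // vertical: neighbours differ by 1
        if ((m & (m >> 2)) != 0) return true;
        m = bb & (bb >> 7);                    // horizontal: neighbours differ by 7
        if ((m & (m >> 14)) != 0) return true;
        m = bb & (bb >> 8);                    // diagonal /: neighbours differ by 8
        if ((m & (m >> 16)) != 0) return true;
        m = bb & (bb >> 6);                    // diagonal \: neighbours differ by 6
        return (m & (m >> 12)) != 0;
    }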
I am not adding an opening book for now, as I want to learn how to code and optimize the search algorithm itself.
A snippet of my entire code can be found here.
Kindly help.
Thank you.
Help in optimizing the negamax+alphabeta
I have some experience with chess programming, but less with Connect-4.
For chess, extensions and reductions help a lot. Maybe you could try Late Move Reductions (LMR): after n searched moves at a node, reduce the remaining moves by one ply. I have no idea whether null-move reductions would work for Connect-4.
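A minimal sketch of what LMR could look like inside the negamax move loop; movesSearched, child and createsThreat are assumed names, and the thresholds are just placeholders to be tuned:

    int score;
    if (movesSearched >= 3 && depth >= 3 && !createsThreat)
    {
        // Late move: search one ply shallower with a null window first.
        score = -Negamax(child, depth - 2, -alpha - 1, -alpha);
        if (score > alpha)
            // It unexpectedly looks good, so re-search at full depth.
            score = -Negamax(child, depth - 1, -beta, -alpha);
    }
    else
    {
        score = -Negamax(child, depth - 1, -beta, -alpha);
    }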
Quiescence search: between the normal search and the evaluation, add another search that only considers tactical moves.
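In Connect-4 the "tactical" moves would essentially be immediate wins and forced blocks, so a quiescence layer might look roughly like this; Position, HasImmediateWin, ForcedReplies, Play, Evaluate and WinScore are assumed helpers:

    int Quiesce(Position pos, int alpha, int beta)
    {
        if (pos.HasImmediateWin())               // side to move can win on the spot
            return WinScore;

        int standPat = Evaluate(pos);            // static score if nothing tactical is played
        if (standPat >= beta) return beta;
        if (standPat > alpha) alpha = standPat;

        foreach (int col in pos.ForcedReplies()) // only blocks of the opponent's immediate wins
        {
            int score = -Quiesce(pos.Play(col), -beta, -alpha);
            if (score >= beta) return beta;
            if (score > alpha) alpha = score;
        }
        return alpha;
    }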
Futility pruning / razoring: if you are one ply away from the leaf nodes and have already searched every move with the potential to push the score above alpha, prune the remaining ones.
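As a rough sketch of that condition inside the move loop, one ply from the horizon; staticEval, FutilityMargin and createsThreat are assumed names and the margin would need tuning:

    if (depth == 1 && !createsThreat)
    {
        // If even an optimistic margin on top of the static evaluation
        // cannot lift this quiet move above alpha, skip it.
        if (staticEval + FutilityMargin <= alpha)
            continue;
    }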
Improve move ordering at PV nodes through a separate PV hash table. This hash table could hold information about the quality (tree size, beta cuts per tree size, or similar) of each individual move.
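One possible shape for such a table, keeping per-move statistics keyed by position hash; the names and the quality formula are assumptions along the lines suggested above:

    struct MoveStats
    {
        public long TreeSize;     // nodes searched under this move
        public long BetaCuts;     // how often the move produced a cutoff
        public double Quality()   // sort moves by descending quality at PV nodes
        {
            return TreeSize == 0 ? 0.0 : (double)BetaCuts / TreeSize;
        }
    }

    Dictionary<ulong, MoveStats[]> pvStats = new Dictionary<ulong, MoveStats[]>();  // one entry per column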
My friend John Tromp wrote a program that solves the game in under 5 minutes on a modern computer, simply by using alpha-beta from the root of the game. Check out his website, especially Fhourstones.
I haven't looked at the code myself, but he claims a key optimization is a move-ordering heuristic that keeps an 8-bit integer per square on the board, updated whenever a move that wasn't searched first produces a beta cut: the values of the moves tried before it are reduced by one, and the value of the beta-cut move is increased by the number of moves tried before it. I am not sure how he handles saturation, but you can look at the code yourself.
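My reading of that update rule, as a sketch rather than Tromp's actual code; history, triedSquares, the array size and the clamping are all assumptions:

    byte[] history = new byte[64];   // one counter per square, sized generously for any indexing scheme

    void OnBetaCut(int[] triedSquares, int cutIndex)
    {
        // Demote the moves that were tried before the cutoff move...
        for (int i = 0; i < cutIndex; i++)
            if (history[triedSquares[i]] > 0)
                history[triedSquares[i]]--;

        // ...and promote the cutoff move by the number of moves tried before it,
        // clamping so the 8-bit counter does not overflow.
        int sq = triedSquares[cutIndex];
        history[sq] = (byte)Math.Min(255, history[sq] + cutIndex);
    }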
On the evaluation side of things, "The Complete Book of Connect 4" by James D. Allen has a lot of great insights into the game.
I hope that helps.
@Edmund: That is good info, thank you. However, a null move wouldn't work for Connect-4, as the game itself depends on parity and zugzwang, which effectively decide the fate of the game.
@Alvaro: It was exactly his website that got me hooked on coding a perfect Connect-4. I took a look at that code quite a while ago and wasn't able to digest it, though. But since you recommend it, I think I should revisit it and apply that strategy.
I have started to jot down the results of positions and then upload them into a persistent hash table. Can I call this simple logic machine learning?
Sure you can: This is called "rote learning" and it can be a powerful tool to solve games.
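A minimal sketch of persisting those solved positions between runs, assuming a 64-bit position hash and a small signed result; the file format and names are made up for illustration:

    using System.Collections.Generic;
    using System.IO;

    static void SaveSolved(string path, Dictionary<ulong, sbyte> solved)
    {
        using (var w = new BinaryWriter(File.Open(path, FileMode.Create)))
            foreach (var kv in solved)
            {
                w.Write(kv.Key);     // position hash
                w.Write(kv.Value);   // exact result: -1 loss, 0 draw, +1 win
            }
    }

    static Dictionary<ulong, sbyte> LoadSolved(string path)
    {
        var solved = new Dictionary<ulong, sbyte>();
        if (!File.Exists(path)) return solved;
        using (var r = new BinaryReader(File.OpenRead(path)))
            while (r.BaseStream.Position < r.BaseStream.Length)
                solved[r.ReadUInt64()] = r.ReadSByte();
        return solved;
    }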
I did something similar a few years back; to the best of my memory it looked like this:
using System.Collections.Generic;

// size, AIPlayer, Move and Board are assumed to come from the surrounding game code.
int currentDepth = 0;
int bestMove = -1;
int bestScore = 0;
Dictionary<int, int> hashedResults = new Dictionary<int, int>();   // board hash -> score

// Searches columns spreading outward from the last move, alternating left
// and right, and sums the scores of the resulting subtrees.
int SpreadSearch(Move lastMove)
{
    int score = 0;
    int spread = 0;
    bool left = true;
    while (true)
    {
        if (!left)
            spread++;                    // widen the spread after each left/right pair
        if (spread > size)
            break;

        bool goLeft = left;
        left = !left;                    // alternate sides on the next pass

        // Skip candidate columns that fall off the board.
        if (goLeft && lastMove.Position - spread < 0)
            continue;
        if (!goLeft && lastMove.Position + spread >= size)
            continue;

        Move nextMove = new Move();
        nextMove.Player = (lastMove.Player + 1) % 2;
        nextMove.Position = goLeft ? lastMove.Position - spread
                                   : lastMove.Position + spread;

        if (!lastMove.Board.IsValidMove(nextMove.Position))
            continue;                    // column is full

        nextMove.Board = lastMove.Board.PerformMove(nextMove.Position, nextMove.Player);
        if (nextMove.Board.WinningMove)
            return nextMove.Player == AIPlayer ? 1 : -1;

        int delta;
        if (hashedResults.TryGetValue(nextMove.Board.HashCode, out delta))
        {
            score += delta;              // this board was already evaluated
            continue;
        }

        currentDepth++;
        delta = SpreadSearch(nextMove);
        currentDepth--;
        hashedResults.Add(nextMove.Board.HashCode, delta);

        if (currentDepth == 0 && bestScore < delta)
        {
            bestScore = delta;           // remember the best root move
            bestMove = nextMove.Position;
        }
        score += delta;
    }
    return score / 2;
}
But after reading the last post, I can't help feeling the best solution would be to precompute the score for all possible boards and then use that to build a look-up table from the hash of the board to the best next move to make. That way, in game, all that is done is a lookup call.
@TechnoGoth: Thanks for replying. But pre-computing the values is not always the best solution; it is a trade-off. Moreover, the number of positions that would need to be pre-computed decreases significantly as the search depth increases. For example, if my search is able to solve the position after 8 plies, I only have to record 4 moves; in this way I record less and search more. Which game did you program, by the way, and how many plies deep did it search, if you can remember?
I haven't done the math, but storing pre-computed best moves for board states up to a depth of 20 shouldn't take very much space. You'd only need to store a key representing the hash code of the board plus the position to place the next piece. This would all be done beforehand and the data included as part of the game. After all, only the current state of the board matters, since the sequence of moves is irrelevant: A1, B1, C1 is the same as C1, B1, A1, given that a cell can only have 3 different values.
The search was my solution to a Connect-4 AI.
The highest difficulty setting I had went, I think, to a depth of 10, which I found was sufficient to win 99% of games. It could go deeper if you wanted, but the deeper you go the more duplication you get, so you need more memory to ensure you don't recompute identical paths.
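The in-game side of that idea would reduce to something like the sketch below; Board, HashCode, LoadPrecomputedTable and SearchBestMove are assumed names:

    Dictionary<int, byte> bookMoves = LoadPrecomputedTable("connect4_book.bin");  // built offline

    int ChooseMove(Board board)
    {
        byte column;
        if (bookMoves.TryGetValue(board.HashCode, out column))
            return column;               // precomputed best move: a single lookup
        return SearchBestMove(board);    // fall back to the normal search
    }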