The past few days I worked a bit more on my Connect Four game (I have now implemented the option to let AI players play each other a certain number of games in a row, so testing is easier). The next thing I want to dive into is iterative deepening.
Right now I just set my maximum search depth to 6, but that is an arbitrary number (it takes about 300 milliseconds per move on my machine). Towards the end of the game, when there are fewer choices, the thinking time decreases. In that case I would like the AI to try to think further ahead.
Of course it would be easy to hard-code this (for example, increase the search depth once a certain percentage of the board is filled), but I don't think that is the right way. I don't know how to implement it in the search itself, though. Since negamax is depth-first, how can you determine how deep the AI has to search? If it were a breadth-first search, it would be easier (check the first level, and if there is time left, check the second level).
I would be happy to hear any ideas on this problem!
Edit: I read online that the best way to implement it is to do a search with depth 1 first, then depth 2, and so on, but this seems rather inefficient to me, because in the early stages of the game you have to search a lot more to get to the same result:
Search with iterative deepening:
1. start at the top, search with depth 1
2. start at the top, search with depth 2
...
6. start at the top, search with depth 6
return

Search without iterative deepening:
1. start at the top, search with depth 6
return
Or did I misinterpret something?
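To make it concrete, here is a rough sketch of what I think that loop would look like, wrapped around my fixed-depth negamax and cut off by a time budget. All the names here (`Board`, `getMoves`, `makeMove`, `undoMove`, `pickMove`, the exact `negamax` signature) are made-up placeholders for illustration, not my actual code:

```cpp
#include <algorithm>
#include <chrono>
#include <limits>

// Placeholders standing in for my actual game code.
struct Board;                                                // the Connect Four position
int  negamax(Board& board, int depth, int alpha, int beta);  // fixed-depth negamax search
int  getMoves(const Board& board, int moves[7]);             // fills legal columns, returns count
void makeMove(Board& board, int column);
void undoMove(Board& board, int column);

// Iterative deepening: run a complete search at depth 1, then 2, then 3, ...
// and keep the best move from the last depth that finished in time.
int pickMove(Board& board, int timeBudgetMs)
{
    using Clock = std::chrono::steady_clock;
    const auto start = Clock::now();

    int bestMove = -1;

    for (int depth = 1; depth <= 42; ++depth)   // a Connect Four game lasts at most 42 moves
    {
        int alpha = std::numeric_limits<int>::min() + 1;
        const int beta = std::numeric_limits<int>::max();

        int moves[7];
        const int moveCount = getMoves(board, moves);
        int bestMoveThisDepth = (moveCount > 0) ? moves[0] : -1;

        for (int i = 0; i < moveCount; ++i)
        {
            makeMove(board, moves[i]);
            const int score = -negamax(board, depth - 1, -beta, -alpha);
            undoMove(board, moves[i]);

            if (score > alpha)
            {
                alpha = score;
                bestMoveThisDepth = moves[i];
            }
        }

        bestMove = bestMoveThisDepth;

        // Stop deepening once the budget is used up; the last fully searched
        // depth decides the move. (The check only happens between depths, so a
        // depth that has already started is allowed to finish.)
        const auto elapsedMs = std::chrono::duration_cast<std::chrono::milliseconds>(
                                   Clock::now() - start).count();
        if (elapsedMs >= timeBudgetMs)
            break;
    }

    return bestMove;
}
```

If I understand it correctly, something like this would make the AI search as deep as the time budget allows each turn, so near the end of the game it would automatically look further ahead.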
PS: I already implemented a simple version of alpha-beta pruning; that wasn't very hard in my opinion (unless I did it completely wrong :) ).
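For reference, the general shape I have in mind for negamax with alpha-beta pruning looks roughly like this (again, `Board`, `evaluate`, and the move/win-check functions are placeholders, not my real code), so it should be easy to tell if I got the idea wrong:

```cpp
#include <algorithm>
#include <limits>

// Placeholders for my actual game code.
struct Board;
bool isWinForPreviousPlayer(const Board& board);  // did the last move win the game?
bool isBoardFull(const Board& board);
int  evaluate(const Board& board);                // heuristic score, current player's view
int  getMoves(const Board& board, int moves[7]);
void makeMove(Board& board, int column);
void undoMove(Board& board, int column);

// Plain negamax with alpha-beta pruning; scores are from the current player's point of view.
int negamax(Board& board, int depth, int alpha, int beta)
{
    if (isWinForPreviousPlayer(board))
        return -1000000;             // the previous move won: very bad for the player to move
    if (isBoardFull(board))
        return 0;                    // draw
    if (depth == 0)
        return evaluate(board);      // depth limit reached: fall back to the heuristic

    int best = std::numeric_limits<int>::min() + 1;

    int moves[7];
    const int moveCount = getMoves(board, moves);

    for (int i = 0; i < moveCount; ++i)
    {
        makeMove(board, moves[i]);
        const int score = -negamax(board, depth - 1, -beta, -alpha);
        undoMove(board, moves[i]);

        best  = std::max(best, score);
        alpha = std::max(alpha, score);
        if (alpha >= beta)
            break;                   // beta cutoff: the opponent won't allow this line anyway
    }

    return best;
}
```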