
How to weaken AI?

Started by March 25, 2013 01:54 PM
7 comments, last by Luckless 11 years, 7 months ago

What is a good way to weaken an AI that uses Negascout?

1. Decrease search depth

2. Apply noise to the evaluation score

3. Reduce the number of generated moves

I like the first two. Some chess programs play most moves well and introduce an occasional blunder on purpose. I guess occasionally reducing the number of generated moves would be one way to implement this. But you should make sure that the program doesn't miss obvious moves that no human would miss (like recapturing a queen in chess).
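Just to sketch option 2 (noise on the evaluation score) - a rough Python example, where evaluate() stands in for your real static evaluation and the noise scale is just something you would tune per difficulty level:

import random

# Rough sketch: wrap the normal static evaluation and add noise.
# NOISE_CENTIPAWNS controls strength: 0 = full strength, larger = weaker.
NOISE_CENTIPAWNS = 50

def noisy_evaluate(position):
    score = evaluate(position)   # stand-in for your real evaluation function
    noise = random.uniform(-NOISE_CENTIPAWNS, NOISE_CENTIPAWNS)
    return score + noise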
Decreasing the search depth would be a great way to weaken the AI.

Alvaro is also correct - decreasing the number of generated moves would work, but you want to make sure that you don't miss obvious moves. You might also try forcing the AI to make a stupid move once every few turns, giving the appearance that the AI is weaker.
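A rough sketch of what the "occasional stupid move" idea could look like at the root, combined with a per-difficulty depth cap (all the names and numbers here are made up for illustration, not engine API):

import random

DEPTH_BY_DIFFICULTY = {"easy": 2, "medium": 4, "hard": 6}
BLUNDER_CHANCE = 0.15   # how often to consider a deliberate mistake
BLUNDER_MARGIN = 200    # never give up more than ~2 pawns of score

def pick_root_move(scored_moves):
    """scored_moves: (move, score) pairs from the root search, best first."""
    best_move, best_score = scored_moves[0]
    if len(scored_moves) > 1 and random.random() < BLUNDER_CHANCE:
        # Only blunder among moves that are not obviously losing, so the
        # AI never misses something a human would spot instantly.
        candidates = [(m, s) for m, s in scored_moves[1:]
                      if best_score - s <= BLUNDER_MARGIN]
        if candidates:
            return random.choice(candidates)[0]
    return best_move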

"The code you write when you learn a new language is shit.
You either already know that and you are wise, or you don’t realize it for many years and you are an idiot. Either way, your learning code is objectively shit." - L. Spiro

"This is called programming. The art of typing shit into an editor/IDE is not programming, it's basically data entry. The part that makes a programmer a programmer is their problem solving skills." - Serapth

"The 'friend' relationship in c++ is the tightest coupling you can give two objects. Friends can reach out and touch your privates." - frob

Another possibility is to reduce the features in your evaluation function.

E.g. in chess, a new player probably wouldn't understand concepts like passed pawns and outposts, whereas a more advanced player would.


A similar idea is to use bad parameters in the evaluation function, so the relative strength of different features is not correctly taken into account. What makes this particularly interesting is that you can create odd personalities for your program: One that really likes to push pawns forward, one that is obsessed with having its own king protected, one that is vicious in attacking the opponent's king... while they disregard other considerations.
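One way to sketch both of the last two ideas is to expose the evaluation as a weighted sum of feature terms, so a skill level or a personality is just a table of weights. The feature functions below (material_balance(), king_safety(), and so on) are stand-ins for your real evaluation terms:

FEATURES = {
    "material":         material_balance,
    "king_safety":      king_safety,
    "passed_pawns":     passed_pawn_bonus,
    "pawn_advancement": pawn_advancement,
}

PROFILES = {
    # A beginner "doesn't know about" passed pawns: that weight is zero.
    "beginner":    {"material": 1.0, "king_safety": 0.3,
                    "passed_pawns": 0.0, "pawn_advancement": 0.2},
    # A pawn-pusher personality with deliberately skewed weights.
    "pawn_pusher": {"material": 1.0, "king_safety": 0.5,
                    "passed_pawns": 1.5, "pawn_advancement": 2.0},
}

def evaluate(position, profile="beginner"):
    weights = PROFILES[profile]
    return sum(weights[name] * feature(position)
               for name, feature in FEATURES.items())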

The most important thing for keeping the game interesting is randomised behaviour. Decreasing the search depth is just a way to speed up the calculations.


The best of both worlds is weighted randomness. If you rank your potential actions based on some sort of score, scale each option's chance of selection by its relative score, then throw a random number at it, you get a variety of reasonable-looking behaviors with a diminishing chance for the very stupid ones. e.g.

  • Best Choice 50%
  • Good Choice 25%
  • OK Choice 15%
  • Dumb Choice 8%
  • Ridiculous Choice 2%

Assume the numbers above were generated by comparing their favorability. If you throw a random number at that, most of the time you are going to get one of the top 2 selections. However, sometimes you will get the less than preferable ones and that will mix things up a bit.

You could actually do this with minimax trees by simply not picking the highest scoring branch and, instead, using the scores to seed your weighted randoms.
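A minimal Python sketch of that weighted-random pick, assuming you already have comparable scores for the root moves (the temperature constant is just a tuning knob - higher means more randomness and weaker play):

import math
import random

TEMPERATURE = 100.0

def weighted_pick(scored_moves):
    """scored_moves: (move, score) pairs; returns one move at random,
    with better-scoring moves getting a proportionally bigger share."""
    best = max(score for _, score in scored_moves)
    # Softmax-style weights: the best move gets the biggest chance,
    # clearly worse moves get a small but non-zero one.
    weights = [math.exp((score - best) / TEMPERATURE)
               for _, score in scored_moves]
    moves = [move for move, _ in scored_moves]
    return random.choices(moves, weights=weights, k=1)[0]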

Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC
Professional consultant on game AI, mathematical modeling, simulation modeling
Co-founder and 10 year advisor of the GDC AI Summit
Author of the book, Behavioral Mathematics for Game AI
Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play

"Reducing the world to mathematical equations!"


Whilst this works with just straight minimax, it fails once you start implementing even simple search enhancements: it would be of doubtful use with alpha-beta and most likely useless with PVS, because most root moves only come back with a bound rather than an exact score, so there is nothing meaningful to weight.
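One possible workaround, if you still want comparable root scores: search every root move with a full window, accept the extra cost (you lose the zero-window savings at the root), and only randomise the final pick. Rough sketch, with negascout(), generate_moves(), make_move() and undo_move() standing in for your engine's real routines:

INF = 10**9

def score_root_moves(position, depth):
    scored = []
    for move in generate_moves(position):
        make_move(position, move)
        # Full window: no aspiration or zero-window re-searches here,
        # so each root move gets a real score instead of a bound.
        score = -negascout(position, depth - 1, -INF, +INF)
        undo_move(position, move)
        scored.append((move, score))
    scored.sort(key=lambda ms: ms[1], reverse=True)
    return scored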

Depending on exact usage, you can also consider the impact of a goal/target-based AI approach. An advanced AI would favour higher-level goals in a game, such as attacking economies, feints and lures, and other similar options, whereas a lower-level AI is going to focus more on simple attacks and reactions.

How you modify your AI for challenge level depends on the exact style of gameplay, after all.
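As a rough illustration of level-gated goals (everything here - goal names, levels, weights - is made up):

import random

GOALS = [
    # (goal name, minimum AI level, selection weight)
    ("attack_nearest_enemy", 1, 1.0),
    ("defend_base",          1, 1.2),
    ("raid_enemy_economy",   3, 1.5),
    ("feint_and_flank",      4, 1.6),
]

def choose_goal(ai_level):
    # Lower-level AIs simply never consider the higher-level goals.
    available = [(name, weight) for name, min_level, weight in GOALS
                 if ai_level >= min_level]
    names = [name for name, _ in available]
    weights = [weight for _, weight in available]
    return random.choices(names, weights=weights, k=1)[0]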

Old Username: Talroth
If your signature on a web forum takes up more space than your average post, then you are doing things wrong.

