
AI action priorities

Started by July 02, 2009 05:19 AM
14 comments, last by IADaveMark 15 years, 4 months ago
I'm making a single-player 2D RPG. I need a system for the AI to choose the right action. For example, when an NPC is in the player's party (following the player) and is attacked by a monster, it should stop following and defend itself instead. Defending itself has a higher priority than following.

My question is: is there a general way to implement such a priority system without having to write a giant if-else statement covering all possible actions?

One possible solution would be to give every action type a priority integer. But even that can be problematic. For example, I can assign "defend yourself" the highest priority (1). But what if I later introduce another action, "follow the leader whatever happens"? I'd have to modify the whole priority system and all the numbers, as the highest priority already belongs to something else.

I could avoid this problem by defining priorities pairwise, for example ("follow the leader whatever happens" > "defend yourself") and ("defend yourself" > "follow the leader"). But this leads to a large definition table as well. Is there some ingenious system that could avoid this?
Using a tree structure gives you a way to prioritize things globally without having to assign arbitrary numbers. All you need to do is prioritize the behaviors relative to each other by reordering the branches in the tree.

See decision trees or behavior trees.
BTs for Next-Gen Game AI (free registration required)
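
To make that concrete, here's a toy sketch of a priority selector in Python - just the flavor of the idea, not any particular BT library, and the action names are made up:

# A priority selector: children are tried in list order, so reordering
# the list is all it takes to reorder the priorities.
class Selector:
    def __init__(self, children):
        self.children = children

    def tick(self, npc):
        # any() short-circuits, so only the first runnable child executes.
        return any(child.tick(npc) for child in self.children)

class Action:
    def __init__(self, name, condition):
        self.name = name
        self.condition = condition

    def tick(self, npc):
        if not self.condition(npc):
            return False
        print(npc["name"], "->", self.name)  # stand-in for the real behavior
        return True

root = Selector([
    Action("defend_self", lambda npc: npc["under_attack"]),  # earlier in the list = higher priority
    Action("follow_leader", lambda npc: True),               # fallback
])

root.tick({"name": "guard", "under_attack": True})   # guard -> defend_self
root.tick({"name": "guard", "under_attack": False})  # guard -> follow_leader

Adding a new top-priority behavior is just inserting a child at the front of the list; nothing else needs renumbering.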

Alex

Join us in Vienna for the nucl.ai Conference 2015, on July 20-22... Don't miss it!

Quote: Original post by alexjc
Using a tree structure gives you a way to prioritize things globally without having to assign arbitrary numbers. All you need to do is prioritize the behaviors relative to each other by reordering the branches in the tree.

See decision trees or behavior trees.
BTs for Next-Gen Game AI (free registration required)

Alex


I guess I can rephrase my question in another way: if A=1, ..., Z=26, and the only things I know are that (C < Z) and (B < C), then I want to avoid having to define obvious things such as (B < Z). Adding the items to a binary search tree will accomplish this, as will any complete logical inference algorithm. But how can I ensure that every possible ordering will be inferred, while minimizing the number of pairwise orderings that I have to explicitly define?
Your problem can be viewed as a graph problem. Every action is a node, and an edge from A to B means priority(A) < priority(B). If your graph doesn't contain any cycles (e.g. A<B, B<C, C<A, which would be an error in your case), you can apply a topological sort (http://en.wikipedia.org/wiki/Topological_sort). This puts your graph into the desired linear order, meaning you can assign priority integer values to your actions in the right order.
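
Here's a minimal sketch of that in Python (3.9+, which ships graphlib); the action names and constraints are just placeholders:

from graphlib import TopologicalSorter

# Each pair (a, b) states priority(a) < priority(b). Only the
# non-obvious constraints are listed; the rest is inferred.
constraints = [
    ("follow_leader", "defend_self"),
    ("defend_self", "follow_no_matter_what"),
]

# TopologicalSorter wants a mapping of node -> set of predecessors.
graph = {}
for lower, higher in constraints:
    graph.setdefault(higher, set()).add(lower)
    graph.setdefault(lower, set())

# static_order() raises graphlib.CycleError if the constraints contradict.
order = list(TopologicalSorter(graph).static_order())
priorities = {action: rank for rank, action in enumerate(order)}
print(priorities)
# e.g. {'follow_leader': 0, 'defend_self': 1, 'follow_no_matter_what': 2}

Note this also answers your (B < Z) worry: the transitive orderings fall out of the sort for free.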

--
Ashaman
The easy answer to your ordering problem is to not hard-code the numbers. You don't say at design time "defend yourself = 100". Instead, you create these numbers on the fly based on your needs at the time. That way, you leave yourself flexibility for later (like you mentioned).

Additionally, you don't put everything into one big formula. By calculating each component separately and even normalizing each component on a scale (e.g. 0-1), you can then have a separate comparison of the components by using coefficients. That is, by simply changing the coefficient between "choice A" and "choice B", you are tilting the balance between them.

For example, if "get ammo" is currently set to 0.7 (moderately important but not critical) and "get health" is currently set to 0.5 (we are only moderately wounded), a straight-up comparison gives us the decision to "get ammo". However, if we needed to boost the decision to "get health" compared to all other potential actions based on some sort of situational trigger, we can boost it with a coefficient outside the black box that gave us the 0.5 in the first place. So, if there was a situational trigger that we defined as "increase health priority by 50%", we could compare:

Ammo: 0.7 * 1 = 0.7
Health: 0.5 * 1.5 = 0.75

In this case, we now get the health rather than the ammo.

However, if we were either not wounded or only barely scratched (let's say 0.2 priority), then our situational health boost would net:

Ammo: 0.7 * 1 = 0.7
Health: 0.2 * 1.5 = 0.3

In that case, we would get the ammo instead. We haven't explicitly said "get the bloody health in this case" - we have said "temporarily make your health awareness more important, but it's still up to you to compare it to other things."
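
In code, the whole comparison might look something like this sketch (Python; the scores, names, and the 1.5 multiplier are placeholders, not a fixed API):

# Each action's normalized score comes from its own "black box";
# situational coefficients are applied outside those boxes.
def best_action(scores, coefficients=None):
    coefficients = coefficients or {}
    return max(scores, key=lambda a: scores[a] * coefficients.get(a, 1.0))

scores = {"get_ammo": 0.7, "get_health": 0.5}
print(best_action(scores))                       # get_ammo   (0.7 vs 0.5)
print(best_action(scores, {"get_health": 1.5}))  # get_health (0.7 vs 0.75)

scores["get_health"] = 0.2                       # barely scratched
print(best_action(scores, {"get_health": 1.5}))  # get_ammo   (0.7 vs 0.3)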

To use your example, let's say that our "defend ourself from a threat" action has a natural coefficient of 3:1 over "follow the leader". "Defend" is based on things such as the proximity of a threat, the agent's health, etc. - a big black box that spits out a number showing the urgency of defending ourselves. The "follow" component is based on the proximity to the player, with the value increasing (perhaps exponentially) with the distance to the player. If the player is nearby, the urgency is low... as the distance increases, the value climbs. Both of those components are normalized from 0-1.

Consider the following:

Defend from threat = 0.2
Follow player = 0.2

Remember the 3:1 ratio...

(3 * D) = 0.6 <--
(1 * F) = 0.2

If the distance to the player was greater, however, giving us a "follow" rate of 0.7:

Defend = 0.2
Follow = 0.7

(3 * D) = 0.6
(1 * F) = 0.7 <--

The only reason we are following, however, is because the priority on defending was so low (for whatever reason). Even if the player was far away (F = 1.0), if the threat level was such that D was greater than 0.33, the agent would select defend instead.

Defend = 0.35
Follow = 1.0

(3 * D) = 1.05 <--
(1 * F) = 1.00

Now, if there was a situation where you wanted to temporarily boost the priority on follow some, you simply change the coefficient to something different... such as 2:1 or even 1:1. You don't have to mess with what is in the black boxes that came up with your individual normalized values... you only need to adjust how they are prioritized relative to each other.
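
Those three comparisons, as a sketch (Python again; same caveat that the numbers are illustrative):

# The 3:1 defend:follow ratio from above, applied outside the black boxes.
def weighted_best(scores, weights):
    return max(scores, key=lambda a: scores[a] * weights[a])

weights = {"defend": 3.0, "follow": 1.0}
print(weighted_best({"defend": 0.20, "follow": 0.20}, weights))  # defend (0.60 vs 0.20)
print(weighted_best({"defend": 0.20, "follow": 0.70}, weights))  # follow (0.60 vs 0.70)
print(weighted_best({"defend": 0.35, "follow": 1.00}, weights))  # defend (1.05 vs 1.00)

# Temporarily boosting "follow" is just a different weights dict,
# e.g. {"defend": 2.0, "follow": 1.0} for the 2:1 case - the black
# boxes that produced the 0.20/0.70 scores are untouched.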

Note that all of this can be combined with the BTs that Alex mentioned above or with the transitions in FSMs or as weights for a planner-based system. It doesn't matter.

For all of this and more, I would point you to an excellent book on the subject of the math of behavioral decision making for game AI... but I just can't seem to find the link to it. Maybe it is around here somewhere? ;-)

Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC
Professional consultant on game AI, mathematical modeling, simulation modeling
Co-founder and 10 year advisor of the GDC AI Summit
Author of the book, Behavioral Mathematics for Game AI
Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play

"Reducing the world to mathematical equations!"

Quote: But how can I ensure that every possible ordering will be inferred, while minimizing the amount of pairwise orderings that I have to explicitly define?


I believe a tree does that for you (not necessarily a binary one). Keep in mind that the branches you choose to insert into the tree are abstractions that are problem-specific. So in an action game, your branches could be combat, suspicious, idle... etc.

It's these abstractions you choose that will minimize your work, along with the choice of using the tree.
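
Reusing the Selector and Action classes from my earlier sketch, that grouping might look like this (all names made up):

root = Selector([
    Selector([  # combat branch: these only need ordering among themselves
        Action("defend_self", lambda npc: npc.get("under_attack", False)),
        Action("attack", lambda npc: npc.get("enemy_visible", False)),
    ]),
    Selector([  # suspicious branch
        Action("investigate", lambda npc: npc.get("heard_noise", False)),
    ]),
    Selector([  # idle branch
        Action("follow_leader", lambda npc: True),
    ]),
])

root.tick({"name": "guard", "heard_noise": True})  # guard -> investigate

You order the three branches once, then order the actions within each branch independently.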

Alex

Join us in Vienna for the nucl.ai Conference 2015, on July 20-22... Don't miss it!

I'm with Ashaman here. Keywords: Partial order (vs. total order), directed acyclic graph (DAG). Check Wikipedia. I also like InnocuousFox's idea, though it kind of makes an end run around the question.

Alexjc: How would you encode this as a tree?
A connected acyclic graph is a tree.
I think InnocuousFox's system is exactly what I need. However, I can imagine that finding the correct values would still take a lot of work. I'd still have to manually define the initial individual priorities and the coefficients between the pairs for different situations. But I'm not sure if there is a way around that (maybe dividing the actions into different groups could help, because then I'd only have to define things for the groups and not for individual actions), unless the agent can learn the correct values from experience or something.

Do you remember the name of the book you were referring to? I'm not sure if this subject is covered in the book that I have (Russell & Norvig), because I haven't read everything yet. It would be very useful if someone could tell me what exactly a game AI programmer should know in addition to what is presented in that standard book (for example, it doesn't really tell you how to implement a complete AI system). Are there any other good book suggestions?
It's within 1" of screen real estate of the "X" below.

X

Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC
Professional consultant on game AI, mathematical modeling, simulation modeling
Co-founder and 10 year advisor of the GDC AI Summit
Author of the book, Behavioral Mathematics for Game AI
Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play

"Reducing the world to mathematical equations!"
