
AI action priorities

Started by July 02, 2009 05:19 AM
14 comments, last by IADaveMark 15 years, 4 months ago
formalproof, he's talking about his own book!

You'll find Dave's suggested approach doesn't "ensure that every possible ordering will be inferred, while minimizing the amount of pairwise orderings". It's much less explicit and may even take a while to tweak (it's an art), but potentially it could end up being much more emergent if that's the kind of game you're building.


Emergent, see the link I pasted above. Decision trees or behavior trees are rather straightforward and there's loads of documentation out there.

Join us in Vienna for the nucl.ai Conference 2015, on July 20-22... Don't miss it!


Finding a prioritization metric usually becomes a problem for simulated behavior above a certain complexity.

You can break the design down into 'Needs' which must be met, 'Solutions' to achieve those needs, and 'Goals' which are active (picked/decided) for the current situation. The object carries out the actions/subtasks specified by the Goal (parameters) + Solution (script) to try to satisfy the current highest-priority Need. The situation is then reevaluated, and if priorities change, another goal is picked.
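In rough Python, the core of that loop might look something like this (all class and function names here are hypothetical, just to illustrate the structure):

class Need:
    def __init__(self, name, priority_fn, solution):
        self.name = name
        self.priority_fn = priority_fn  # situation dict -> priority on the common scale
        self.solution = solution        # callable that advances the goal one step

def pick_need(needs, situation):
    # Reevaluate the situation and return the highest-priority Need.
    return max(needs, key=lambda n: n.priority_fn(situation))

# Example: hunger vs. ammo maintenance
needs = [
    Need("eat",        lambda s: 100 - s["food"], lambda s: print("seeking food")),
    Need("stock ammo", lambda s: 50 - s["ammo"],  lambda s: print("gathering ammo")),
]
situation = {"food": 20, "ammo": 45}
active = pick_need(needs, situation)
active.solution(situation)  # -> "seeking food" (priority 80 beats 5)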

The priority for each Need would be calculated depending on various factors analyzed for the (current) situation. Those factors are analyzed/summed (via a weight system), and an escalation graph is used to allow a non-linear representation of real-life priorities -- where things suddenly become much more important than even a combination of other needs. (Think of the reactions given to a wound: 'bruised', 'dripping blood', 'gushing blood out in torrents' -- very different levels of response to each...)

The output priority (from the graph function/histogram) will be on a common priority scale (used by all goals so the highest can be picked). Each Need could have a very different analysis method and numeric scaling, but its priority graph would then be tuned to generate the common scale used to pick the focus of an object's activities.
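As a sketch, an escalation curve that maps a raw, need-specific reading onto a shared 0-100 scale might look like this (the cubic shape and the blood-loss example are placeholders you would tune per Need):

# Hypothetical escalation curve: maps a raw, need-specific input (here,
# blood loss as a 0..1 fraction) onto the shared 0..100 priority scale.
# The cubic shape makes severe wounds dominate non-linearly.
def wound_priority(blood_loss):
    return 100.0 * blood_loss ** 3

for loss in (0.1, 0.5, 0.9):
    print(f"blood loss {loss:.0%} -> priority {wound_priority(loss):.1f}")
# 10% -> 0.1, 50% -> 12.5, 90% -> 72.9: a bruise barely registers,
# heavy bleeding overwhelms nearly everything else.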


Some thresholds may be established to separate priorities (or even preset for certain goals) into a 'Quantum Level' of priority. All other Needs registering a priority lower than the set of Needs in the current highest Quantum need not be considered (skipping more detailed (and costly) processing). Survival versus Convenience would be two obvious Quantum levels. (i.e., the threshold for 'Starving' would elevate the food Need over some maintenance task of keeping an adequate (above minimal) supply of ammunition.)
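One possible way to sketch that two-pass pruning (the bands and the sample Needs below are invented for illustration):

SURVIVAL, CONVENIENCE = 1, 0  # coarse priority bands

def select(needs, situation):
    # Cheap banding pass first...
    top = max(n["band"](situation) for n in needs)
    survivors = [n for n in needs if n["band"](situation) == top]
    # ...then the full (costly) scoring runs only inside the winning band.
    return max(survivors, key=lambda n: n["score"](situation))

needs = [
    {"name": "find food",
     "band":  lambda s: SURVIVAL if s["food"] < 10 else CONVENIENCE,
     "score": lambda s: 100 - s["food"]},
    {"name": "restock ammo",
     "band":  lambda s: CONVENIENCE,
     "score": lambda s: 50 - s["ammo"]},
]

print(select(needs, {"food": 5, "ammo": 0})["name"])   # "find food": starving
print(select(needs, {"food": 80, "ammo": 0})["name"])  # both Convenience; ammo wins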


An example Need for an NPC (pet) would be to keep within a certain maximum distance of its leader; if the range (simple distance or a pathfinding cost) exceeded that max, then closing that distance would become a higher priority.
The greater the distance, the more effort/risk might be spent to move closer.

If immediate survival (another Need) was not generating a higher priority, then the 'stay close to boss' Need would win (for a while) and advance that goal's Solution to being active. The Solution would be the way to move to a closer point, and the Goal would hold the specific parameters in the current situational context to make use of that Solution.
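A toy version of that distance-driven priority (the comfort radius and slope are arbitrary tuning values):

def stay_close_priority(distance, max_range=10.0):
    if distance <= max_range:
        return 0.0                       # inside the comfort radius: no urgency
    overshoot = distance - max_range
    return min(100.0, overshoot * 10.0)  # farther -> more effort/risk justified

for d in (5, 12, 25):
    print(f"distance {d} -> priority {stay_close_priority(d):.0f}")
# 5 -> 0, 12 -> 20, 25 -> 100 (capped at the top of the common scale)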



One thing often needed is a buffer for a Solution task (if it's more than an impulse-type action) to give that solution some time to make progress before some 'almost the same' priority Need suddenly gets picked instead (due to minor situational factor fluctuations). The Need/Goal/Solution already in action gets an added priority bonus to keep a competitor Need from rapidly seizing control (or even causing a flip-flopping pattern).
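A simple sketch of that commitment bonus (the 15% figure is an arbitrary tuning value, not a recommendation):

COMMIT_BONUS = 1.15  # the active goal's priority is inflated by 15%

def pick_with_hysteresis(needs, situation, active):
    def effective(n):
        p = n["score"](situation)
        return p * COMMIT_BONUS if n is active else p
    return max(needs, key=effective)

needs = [
    {"name": "patrol", "score": lambda s: 50},
    {"name": "chase",  "score": lambda s: 53},  # only marginally better
]
winner = pick_with_hysteresis(needs, {}, active=needs[0])
print(winner["name"])  # "patrol": 50 * 1.15 = 57.5 holds off 53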


To hold down computing loads, some of the situational analysis may happen on a schedule at regular intervals, while other parts react/wake up to significant events (classification of which is a whole other problem...). Reevaluating the whole situation can be costly and needs to be avoided, so sometimes a good default action can be executed as a reaction, with a real solution found on the regularly scheduled AI cycle.
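A rough sketch of mixing the two update paths (the event name and the half-second interval are made up):

import time

AI_INTERVAL = 0.5  # full situational reevaluation every half second

class Agent:
    def __init__(self):
        self.next_think = 0.0

    def on_event(self, event):
        # Cheap reflex now; leave the expensive planning for the next tick.
        if event == "took_damage":
            print("default reaction: dive for cover")

    def update(self, now):
        if now >= self.next_think:
            print("full reevaluation: re-scoring all Needs")
            self.next_think = now + AI_INTERVAL

agent = Agent()
agent.on_event("took_damage")   # immediate canned response
agent.update(time.monotonic())  # real solution found on the scheduled cycle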






--------------------------------------------
Ratings are Opinion, not Fact
Quote: Original post by ibebrett
an acyclic graph is a tree.


A tree is an acyclic graph, but not the other way around. Unless your definition of a tree is such that branches are allowed to merge with other branches.
Quote: Original post by ibebrett
an acyclic graph is a tree.


Please look up "lattice".

Hm. I've implemented decision trees before, but a decision lattice could be interesting...

I think there are situations where it is helpful to have the different action types grouped into explicit categories (especially if there are many action types) and ranked by priority, as was suggested in the original post.

As Dave showed, computing the utility of different actions by weighting them is the general way of prioritizing/selecting between them.
But I think that when you are designing these weights, having categories could make it easier because you only have to keep in mind the other actions in the category, and you can still feel safe that the action will be weighted in a meaningful way in relation to all other actions.

There is another case where categories could be helpful:
While the agent is running its currently selected action, you will also want it to periodically run the (costly) action selection (decision) code again to see if some action has come up with higher utility than the one being performed, so the agent can switch actions.
In this case, if action types have been categorized by priority this code can be optimized so that action types of lower priority than the one currently running are disregarded out of hand (at the loss of some precision) without computing their actual utility.
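A quick sketch of that cutoff (the category names are from this thread; everything else is invented):

CATEGORY_RANK = {"Security": 2, "Work": 1, "Leisure": 0}

def candidate_actions(all_actions, current):
    # Skip the full utility computation for any action whose category
    # ranks below the currently running one (trading precision for speed).
    floor = CATEGORY_RANK[current["category"]]
    return [a for a in all_actions if CATEGORY_RANK[a["category"]] >= floor]

actions = [
    {"name": "flee",       "category": "Security"},
    {"name": "mine ore",   "category": "Work"},
    {"name": "play cards", "category": "Leisure"},
]
print([a["name"] for a in candidate_actions(actions, current=actions[1])])
# ['flee', 'mine ore']: Leisure actions are never even scored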

For general categories, I suggest something like 'Security', 'Work', 'Leisure', but higher granularity could also be helpful.
Quote: Original post by captain_crunch
As Dave showed, computing the utility of different actions by weighting them is the general way of prioritizing/selecting between them.
But I think that when you are designing these weights, having categories could make it easier because you only have to keep in mind the other actions in the category, and you can still feel safe that the action will be weighted in a meaningful way in relation to all other actions.

That is very correct. By building your decision weights in stages, you can achieve "compartmentalized confidence". That is, if each "black box" only deals with related things, you don't have to worry about what is going on outside the box. If you are comfortable with how things are compared and contrasted inside the box, then you can seal that one up and use ONLY its output in conjunction with the results of other black boxes. You basically are building things in a tree style until you arrive at one final decision process.
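To illustrate the idea (with completely made-up weights and factors), staged black boxes might look like:

def threat_box(enemies_near, health):
    # everything combat-related stays sealed inside this box
    return 0.7 * enemies_near + 0.3 * (1.0 - health)

def resource_box(food, ammo):
    # everything supply-related stays sealed inside this box
    return 0.5 * (1.0 - food) + 0.5 * (1.0 - ammo)

def final_decision(threat, resources):
    # the top of the tree sees only the sealed boxes' outputs
    return "fight or flee" if threat > resources else "restock"

t = threat_box(enemies_near=0.8, health=0.4)
r = resource_box(food=0.9, ammo=0.7)
print(final_decision(t, r))  # "fight or flee": threat 0.74 vs resources 0.20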

More details and examples of this in my book. (thanks, Alex, for giving away the answer to my hunt. You're no fun!)

Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC
Professional consultant on game AI, mathematical modeling, simulation modeling
Co-founder and 10 year advisor of the GDC AI Summit
Author of the book, Behavioral Mathematics for Game AI
Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play

"Reducing the world to mathematical equations!"

