
help me understand utility AI

Started by February 28, 2017 06:25 AM
34 comments, last by Alturis 7 years, 8 months ago

Those are not typically done by the game AI.

Instead, those choices are usually given to the player in order to give them a better experience. The computer AI generally doesn't care about that sort of thing. The AI usually isn't the one playing the story; it's the one being acted upon by the player.

If there is a need for something specific like that in the game design, the specific requirements should be identified and communicated. Then each individual task can be given a scoring function that reflects how it fits within the story's rules.

I mean, something I've frequently hacked together (and frequently heard suggested) is that randomness can sometimes appear intelligent. If the NPC occasionally picks the second-best choice in a computational sense, it might actually stumble onto the actual best choice in a strategic sense, and appear to outwit the player (not to mention other AIs).

Still, I'm curious whether there is a computational way to notice when there's more utility in doing the second-best thing because it actually helps advance you towards another goal.


The "two birds one stone" example is formed strangely. For some reason you have split it up by Captain or Brother but that is not a property of the action, but a property of the motivation. Motivations are meant to be captured by the utility function, not the selection of actions. The real actions on offer here are:

  • Kill Cameron: 70 + 0 = 70
  • Kill Donald: 60 + 58 = 118
  • Kill Ernest: 30 + 0 = 30
  • Kill Adam: 0 + 65 = 65
  • Kill Garth: 0 + 20 = 20

The first value in the sum is the status gain, the second value is the family rank gain. Applying a simple sum of the different motivations in this case (where killing non-family members gives you zero family rank, and killing family members gives you zero status by default) shows that Donald is clearly the best target.

(You don't have to use a simple sum here, but you do need to consider all the motivations for performing a given action and combine them in some way.)

Select between actions to perform; don't select between reasons to perform them.
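
Here's a minimal sketch of that selection in Python; the dictionary layout and the plain sum are just one way to encode it, not a prescribed format:

```python
targets = {
    # name: (status_gain, family_rank_gain)
    "Cameron": (70, 0),
    "Donald":  (60, 58),
    "Ernest":  (30, 0),
    "Adam":    (0, 65),
    "Garth":   (0, 20),
}

def score(gains):
    # Simple sum of all motivations; swap in a weighted sum or any other
    # combiner if the motives shouldn't count equally.
    return sum(gains)

best = max(targets, key=lambda name: score(targets[name]))
print(best, score(targets[best]))  # Donald 118
```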

How do you handle "two birds, one stone" types of problem solving using utility AI?

That's really a question for AIDaveMark; he's the local expert here.

I would think that you might have three functions: one for "two birds, one stone", one for military advancement, and one for succession advancement. The two-birds function would of course score highest of the three, so the AI would go for a two-birds target first, or fall back to military or succession targets if appropriate.

Looks like Kylotan has the 2 birds part for you already.

if I hunted a wild boar, it might take me a little longer to find food, but I'd also get my combat training in.

figure both "chance to find food" and "combat training gained" into a scoring function. and weight that vs just eating or just training. chance of success will be the decision point.

Another example: there's a high utility in recruiting an ally.

The recruit action gets a higher score when it turns an enemy to your side, high enough that it beats attacking. Then you weigh recruiting against attacking: you will always recruit every enemy you can before attacking the remaining ones.
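
Something like the following, where the threat values and the "can be turned" flag are purely illustrative assumptions:

```python
def score_attack(threat):
    # Attacking is worth removing the threat.
    return threat

def score_recruit(threat, can_be_turned):
    # Turning an enemy both removes the threat and gains an ally, so it is
    # scored high enough to beat attacking whenever it is possible at all.
    return threat * 2.0 if can_be_turned else 0.0

enemies = [("bandit", 0.4, True), ("warlord", 0.9, False)]
for name, threat, turnable in enemies:
    choice = "recruit" if score_recruit(threat, turnable) > score_attack(threat) else "attack"
    print(name, "->", choice)  # bandit -> recruit, warlord -> attack
```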

As you can see, the how-to is usually pretty straightforward; getting the numbers to all work out is perhaps a bit more work. I personally find translating rules to decision trees to be a bit more straightforward, but the two are very similar. Tweaking the conditions of a decision tree is akin to tweaking the functions of a utility system; in some cases the condition checks and utility functions might even be more or less identical in their effects. The only real difference in behavior is that utility systems score all moves every time, which means you don't have to worry about ordering the checks by behavior priority. Instead, you have to worry about the ranges of the scores produced, which determine the priorities of the behaviors. Two different ways to achieve the same basic thing.
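
To illustrate the comparison, here is the same toy behavior expressed both ways; the options, values, and thresholds are invented for illustration:

```python
def decision_tree(hp, enemy_near):
    # Checks run in priority order; the first condition that passes wins.
    if hp < 0.3:
        return "flee"
    if enemy_near:
        return "fight"
    return "patrol"

def utility(hp, enemy_near):
    # Every option is scored every time; the score ranges encode priority.
    scores = {
        "flee":   (1.0 - hp) * 2.0,            # dominates when hp is low
        "fight":  1.0 if enemy_near else 0.0,
        "patrol": 0.2,                         # low constant fallback
    }
    return max(scores, key=scores.get)

print(decision_tree(0.2, True), utility(0.2, True))  # both say "flee"
```

In the tree the priority lives in the order of the checks; in the utility version it lives in the ranges of the scores.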

Norm Barrows

Rockland Software Productions

"Building PC games since 1989"

rocklandsoftware.net

PLAY CAVEMAN NOW!

http://rocklandsoftware.net/beta.php

This is incredible guys. Open to other solutions and ideas, obviously. But I can work with this.

The idea that I might have special utility functions that look for synergies seems like a good enough solution. I'm getting used to the idea that I might have hundreds of utility functions. It's going to be a daunting thing to tweak and balance... but in some ways, fairly straightforward to program.

Usually only a few are needed, but they should be driven by data so they can be shared.

Generally each interaction has a collection of data for the motives that drive it, so only a single scoring function is needed. The function takes a collection of motivators and the actor being compared, runs through the list, and computes a single number indicating the motivation for the action.

You don't need hundreds of utility functions; instead, you need data values you can pass to a single function.
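
Something along these lines, as a sketch; the field names and the actor's drive levels are assumptions for illustration, with gains echoing the earlier kill-target example:

```python
from dataclasses import dataclass

@dataclass
class Motivator:
    drive: str    # which motive this touches, e.g. "status" or "succession"
    gain: float   # how much the action would advance that motive

def score_action(motivators, actor_drives):
    # One generic scorer: combine every motivator against how much the
    # actor currently cares about that drive.
    return sum(m.gain * actor_drives.get(m.drive, 0.0) for m in motivators)

# How much this particular actor cares about each drive right now.
actor_drives = {"status": 0.8, "succession": 0.6}

# Per-action data, no per-action function.
kill_donald = [Motivator("status", 60), Motivator("succession", 58)]
kill_cameron = [Motivator("status", 70)]

print(score_action(kill_donald, actor_drives))   # 82.8
print(score_action(kill_cameron, actor_drives))  # 56.0
```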


That's an important distinction, yeah. Maybe 20 or so utility functions, but perhaps hundreds of "motivators".

A thought occurred to me because I really like the architecture that Kylotan suggested:

There are many ways to approach this.

One way would be to have a concept of a high level activity ("Retreat") which contains a sequence of low level actions ("Call for help", "Take cover", "Heal", "Leave the area"). If you have a lot of systems that tend to have these sequences or something similar, there is a good argument for using utility systems to pick basic high level activities and for behaviour trees to handle the individual actions.
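
Sketching that split out for myself, with made-up activity names, scores, and an ordered action list standing in for the behaviour tree:

```python
# Utility scores choose the high-level activity; each activity owns a simple
# ordered list of low-level actions, standing in for a behaviour tree.
activities = {
    "Retreat": ["Call for help", "Take cover", "Heal", "Leave the area"],
    "Assault": ["Pick target", "Close distance", "Attack"],
}

def score_activity(name, state):
    # Made-up scoring: retreat rises as health falls, assault as it rises.
    if name == "Retreat":
        return 1.0 - state["health"]
    if name == "Assault":
        return state["health"]
    return 0.0

state = {"health": 0.25}
chosen = max(activities, key=lambda a: score_activity(a, state))
print(chosen, "->", activities[chosen])  # Retreat -> ['Call for help', ...]
```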

Having separate utility functions for every action is going to be highly redundant. This might be a moment where I could bring in some of those synergies.

Let's say I have a utility function called "challenge someone for rank". It's calculated as the largest expected value of victory against someone my rank or higher. If it reaches a certain threshold, I trigger a behavior tree that selects between several actions.

The obvious action is "challenge the best option for victory".

But the system would also consider actions that challenge the second best option under a few circumstances, effectively aggregating the motivations. Maybe the next best challenge would also kill someone I hate, or also kill a rival on the line of succession.
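
Roughly something like this, where the candidates, bonuses, and threshold are all made-up numbers just to show the aggregation:

```python
candidates = [
    # (name, chance_of_victory, hatred_bonus, succession_bonus)
    ("Donald",  0.55, 0.0, 0.3),
    ("Cameron", 0.60, 0.0, 0.0),
]

def challenge_utility(cands):
    # The utility of "challenge someone for rank": the best expected
    # chance of victory over anyone of equal or higher rank.
    return max(chance for _, chance, _, _ in cands)

def pick_target(cands):
    # Once we've decided to challenge, fold the extra motives back in,
    # so a slightly worse fight can still be the better pick.
    return max(cands, key=lambda c: c[1] + c[2] + c[3])

CHALLENGE_THRESHOLD = 0.5  # hypothetical trigger point

if challenge_utility(candidates) > CHALLENGE_THRESHOLD:
    print(pick_target(candidates)[0])  # Donald, despite the lower win chance
```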

Similar idea for the food vs combat example. If I'm starving, trigger a behavior tree that selects from several food-finding options. But in addition to "go run around looking for food", the system would also consider "if you're in a combat quest to kill something that is made of delicious meat, just carry on".

I'm reluctant to invent my own solutions because I'm not arrogant enough to think I've figured out something no one else has. But this seems in line with what a few other people are suggesting, and so probably has some basis in good design?

You're over-complicating this.

Did you watch the videos I linked above?

Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC
Professional consultant on game AI, mathematical modeling, simulation modeling
Co-founder and 10 year advisor of the GDC AI Summit
Author of the book, Behavioral Mathematics for Game AI
Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play

"Reducing the world to mathematical equations!"

I did, actually, and it's much appreciated. It seems like common practice is to come up with a utility function for every action... but that seems like a LOT of actions. Let alone if I start creating utility functions for "two-in-one" actions.

So how would you incorporate a potential decision into the rest in order to pick the one with the best score if you don't score it in the first place? You have to score them somehow, and the most relevant thing to do is to score them on whatever matters to whether you would make that decision or not. That's pretty much the entire point of utility-based AI.

Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC
Professional consultant on game AI, mathematical modeling, simulation modeling
Co-founder and 10 year advisor of the GDC AI Summit
Author of the book, Behavioral Mathematics for Game AI
Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play

"Reducing the world to mathematical equations!"

This topic is closed to new replies.
