
AI goals

Started by February 25, 2007 04:26 AM
16 comments, last by Timkin 17 years, 8 months ago
You want to solve a planning problem, possibly even one solvable by STRIPS. A planning problem is defined by several elements:

* an initial state
* a set of goals
* a list of possible actions, each consisting of a list of preconditions and consequences


In your problem, you would state some initial facts when your AI spawns, like IHaveAMediumWeapon, PlayerHasBetterWeapon, PlayerIsFarAway, IHaveFullHealth, PlayerIsWounded, etc.

A goal would be something like "engage the player in the deadliest way possible"; let's call this goal EngageDeadly.

Then a list of actions is made available:
Attack1
Precond: SameArea(Player, Me), IHaveBigGun, not(PlayerHasBetterWeapon), ShootAtPlayer
Consequence: EngageDeadly

WalkToThePlayer
Precond: PlayerLocation(X), WalkTo(X)
Consequence: SameArea(Player, Me)

GetABigGun
Precond: SameArea(BigGun, Me), PickUp(BigGun)
Consequence: IHaveBigGun

etc...

It may sound a bit like scripting, but it actually allows for a very flexible range of actions; a minimal planner sketch follows below.
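In code, a very small version of such a planner could look like the sketch below. It does a breadth-first search over sets of facts; the fact and action names mirror the ones above (minus the negative precondition, which is left out for brevity), and WalkToTheBigGun is an extra action made up here so the example can actually reach the goal.

```cpp
// A minimal sketch of a forward state-space planner over symbolic facts.
// Fact/action names mirror the post above; WalkToTheBigGun is made up so
// that the example can actually reach EngageDeadly.
#include <iostream>
#include <queue>
#include <set>
#include <string>
#include <utility>
#include <vector>

using Facts = std::set<std::string>;

struct Action {
    std::string name;
    Facts preconditions;  // facts that must already hold
    Facts effects;        // facts that hold after executing the action
};

// Breadth-first search from the initial facts until the goal fact is true.
std::vector<std::string> Plan(const Facts& initial, const std::string& goal,
                              const std::vector<Action>& actions) {
    std::queue<std::pair<Facts, std::vector<std::string>>> open;
    std::set<Facts> visited{initial};
    open.push({initial, {}});
    while (!open.empty()) {
        auto [state, plan] = open.front();
        open.pop();
        if (state.count(goal)) return plan;   // goal reached
        for (const Action& a : actions) {
            bool ok = true;
            for (const std::string& p : a.preconditions)
                if (!state.count(p)) { ok = false; break; }
            if (!ok) continue;
            Facts next = state;
            next.insert(a.effects.begin(), a.effects.end());
            if (visited.insert(next).second) {
                std::vector<std::string> nextPlan = plan;
                nextPlan.push_back(a.name);
                open.push({next, nextPlan});
            }
        }
    }
    return {};  // no plan found
}

int main() {
    Facts initial = {"IHaveAMediumWeapon", "PlayerIsFarAway"};
    std::vector<Action> actions = {
        {"Attack1", {"SameArea(Player, Me)", "IHaveBigGun"}, {"EngageDeadly"}},
        {"WalkToThePlayer", {}, {"SameArea(Player, Me)"}},
        {"WalkToTheBigGun", {}, {"SameArea(BigGun, Me)"}},
        {"GetABigGun", {"SameArea(BigGun, Me)"}, {"IHaveBigGun"}},
    };
    for (const std::string& step : Plan(initial, "EngageDeadly", actions))
        std::cout << step << "\n";  // prints a 4-step plan ending in Attack1
}
```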
Quote: Original post by sirGustav

Quote: A* might be overkill and its use still doesn't solve the most difficult part
F.E.A.R. uses A*, and the decision tree is partially built by the programmer by specifying the requirements of each action. Basically, to kill someone you need to fire a gun. To fire a gun you need to have bullets. When you reload a gun you get bullets. The pathfinding takes care of the rest.
Check out "3 states and a plan" by jeff orkin and it becomes pretty clear.

Haven't come to the implementation part though as my engine is still in the design phase :)


If anybody's curious, I implemented it a while ago, although in a very simple demo:

">Click to see a video demo on YouTube


I was amazed at how easy it was to get the behaviors to run once the system was in place. That demo is running in debug, by the way, and I mean full debug in the IDE.

PS: Sorry it looks so dorky. I'm not an artist nor a marketing expert :)
If you're programming your own AI, I would just use states.
It's an easy way of doing it, and it's effective.

To make your AI more complex, you just make more states,

e.g.

attackaggressive
attackdefense
spymode
campmode
campgunspawn (lol)

etc

The other 50% is how you know what state to be in. A lot of times you can select one randomly and stay in that state for x seconds.

For certain states you need to do some if or switch statements to pick out which state might be best.

For example, you could track a kill/death ratio for each state and have the AI use the highest one. This would make it seem like the AI is adapting, as it will naturally find the attack state that works best against you.

Or you can pick the attack state based on what weapon the player is currently using, or has used the most to kill you, etc. A rough sketch of the kill/death idea is below.
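A rough sketch of the kill/death-ratio selection, assuming a made-up BotState struct and state names (nothing engine-specific):

```cpp
// Rough sketch: pick the next state by per-state kill/death ratio.
// BotState and the state names are illustrative, not from any engine.
#include <iostream>
#include <string>
#include <vector>

struct BotState {
    std::string name;
    int kills = 0;
    int deaths = 0;
    float Ratio() const { return kills / float(deaths + 1); }  // +1 avoids div-by-zero
};

// Return the state that has worked best against this player so far.
// Assumes the list is non-empty.
const BotState& PickState(const std::vector<BotState>& states) {
    const BotState* best = &states[0];
    for (const BotState& s : states)
        if (s.Ratio() > best->Ratio()) best = &s;
    return *best;
}

int main() {
    std::vector<BotState> states = {
        {"attackaggressive"}, {"attackdefense"}, {"spymode"}, {"campmode"}};
    states[1].kills = 3;  // pretend the defensive state has been doing well
    std::cout << "next state: " << PickState(states).name << "\n";  // attackdefense
}
```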

And last but not least is how well it aims at you, how often it fires, and its response to both. The best practice is to make it human-like, or you can time the player's response times, fire habits, and accuracy and make the AI match them so it's always balanced.

In terms of "What should my AI do next?", that's the easy part: pretend it's you playing the game instead. What would you do next?

Maybe find a gun first? And some ammo? That might be more important.

To make it figure out how to get to the gun, there are a few ways to do it.
You can have your AI add to the waypoint system, adding a waypoint where the gun is, so if it ever wants to get that gun, it knows where it is.

To see if the AI should know about the gun, you can do a line-of-sight check, a distance check, or a distance-from-waypoint check; a small sketch of this is below.
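Something like the sketch below, where Vec3, the line-of-sight stub, and the waypoint list are all placeholders for whatever your engine already has:

```cpp
// Sketch: let the bot remember a gun's location as a waypoint once it has
// noticed it. Vec3, HasLineOfSight and gGunWaypoints are placeholders for
// whatever your engine provides.
#include <cmath>
#include <iostream>
#include <vector>

struct Vec3 { float x, y, z; };

float Distance(const Vec3& a, const Vec3& b) {
    const float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Stand-in for a real raycast against the level geometry.
bool HasLineOfSight(const Vec3& /*from*/, const Vec3& /*to*/) { return true; }

std::vector<Vec3> gGunWaypoints;  // waypoints the bot has added for itself

// Call when the bot is near a gun pickup: if it is close enough and can see
// it, remember the spot so a later "get a gun" goal can route through it.
void MaybeRememberGun(const Vec3& botPos, const Vec3& gunPos) {
    const float kNoticeRange = 20.0f;  // tune per game
    if (Distance(botPos, gunPos) < kNoticeRange && HasLineOfSight(botPos, gunPos))
        gGunWaypoints.push_back(gunPos);
}

int main() {
    MaybeRememberGun({0, 0, 0}, {5, 0, 0});
    std::cout << "known gun waypoints: " << gGunWaypoints.size() << "\n";  // 1
}
```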
Quote: Original post by ViperG
If you're programming your own AI, I would just use states.
It's an easy way of doing it, and it's effective.


And throw aside 15 years of trial and error in game AI...

The problem with state machines is that as the number of behaviors grows, the complexity of the machine becomes unmanageable, up to the point where adding a new behavior becomes a chore that nobody wants to do. When adding a new state to a state machine, you will often need to add or alter a lot of transitions. But when adding a new action in a state-space search, you seldom have to change the existing ones. That's because in a state machine the deliberation is done by the programmer, while in a planner system the deliberation is done by the AI itself.

State machines work well for many AI problems, but in a complex world with lots of possible actions and dependencies...
The STRIPS/A*-style approach sounds interesting, plus I've always loved the NOLF2 and F.E.A.R. AI :)

Still trying to work out how goals and actions all fit together. They make sense in that to achieve a goal a number of actions have to be performed; it's the implementation that's got me now.

Are there any books that cover the STRIPS/A*-style approach? Google seems really short on this sort of info :-/

[Edited by - supagu on March 6, 2007 4:40:47 AM]
Quote: Original post by Steadtler
Quote: Original post by ViperG
If you're programming your own AI, I would just use states.
It's an easy way of doing it, and it's effective.


And throw aside 15 years of trial and error in game AI...

The problem with state machines is that as the number of behaviors grows, the complexity of the machine becomes unmanageable, up to the point where adding a new behavior becomes a chore that nobody wants to do. When adding a new state to a state machine, you will often need to add or alter a lot of transitions. But when adding a new action in a state-space search, you seldom have to change the existing ones. That's because in a state machine the deliberation is done by the programmer, while in a planner system the deliberation is done by the AI itself.

State machines work well for many AI problems, but in a complex world with lots of possible actions and dependencies...




And state machines can be extended recursively and generalized to make them more flexible. They also very nicely handle tasks (subtasks) that are sequential in their execution. The most basic 'sequential' aspect is to first do a validation and estimation step before then proceeding with the actions that fulfil the task.

A Planner can call a State Machine and a State Machine can call a Planner (maybe better called a 'Solver').
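As a trivial illustration of that mixing, here is a sketch (all names made up) of a state machine where one state is purely reactive and another defers to a solver/planner on each update:

```cpp
// Sketch: a state machine whose Combat state defers to a solver each update,
// while the Idle state stays purely reactive. All names are made up.
#include <iostream>
#include <string>
#include <vector>

struct AIState {
    virtual ~AIState() {}
    virtual void Update() = 0;
};

struct IdleState : AIState {
    void Update() override { std::cout << "patrolling\n"; }  // no planning needed
};

struct CombatState : AIState {
    void Update() override {
        // Stand-in for a call into a planner/solver (e.g. something like the
        // Plan() sketch earlier in the thread) that returns the next actions.
        std::vector<std::string> plan = SolveEngagement();
        if (!plan.empty()) std::cout << "executing: " << plan.front() << "\n";
    }
    std::vector<std::string> SolveEngagement() { return {"WalkToThePlayer"}; }
};

int main() {
    IdleState idle;
    CombatState combat;
    AIState* current = &idle;
    current->Update();   // patrolling
    current = &combat;   // some transition fired, e.g. "player spotted"
    current->Update();   // executing: WalkToThePlayer
}
```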

Many planner systems are too dependent on 'facts' and can be wasteful in environments in which the situation changes quickly or only partial information is available (uncertainty). Estimation/evaluation functions (and data) and judgement metrics are needed as a substitute for certainty and godlike knowledge. The (re)evaluation mechanism is actually a major part of the system.

------

“A behavior is a list of rules for achieving a goal, where each rule consists of a condition and then one or more actions to be executed if the condition is satisfied. The rules are evaluated in order from top to bottom during each game loop, and the first rule that matches is executed. The ordering of the rules should be such that the first step in the behavior is at the bottom, the second step is just above that, and so on, with the topmost rule checking whether the goal has been successfully achieved. When read from bottom to top, the rules should thus form a complete plan. It should be clear that this structure is essentially a function containing only an if-then statement. (…) A behavior encoded in this way gives game characters important flexibility: if an action fails, the character will simply try again. If the character can no longer perform the action, it will drop down to an earlier step in the plan, and if a step can be skipped, the character will skip it automatically.”


Constant reevaluation is one reason that behavior AI can have such a heavy CPU load. Priorities shift with the situation, and just about everything has to be rechecked, both for the validity of the current tasks and for better choices of goals to pursue. The system described above is OK for 'simplex' behaviors of relatively dumb objects (few/simple rules), but would be wasteful for more complicated ones.
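For what it's worth, the rule list described in that quote is tiny in code. A minimal sketch, reusing the "get a big gun" example from earlier in the thread (the world queries and actions are stubs):

```cpp
// Minimal sketch of the rule-list behaviour quoted above, for a "get a big
// gun" goal. The world queries and actions below are stubs.
#include <iostream>

bool IHaveBigGun()        { return false; }
bool SameAreaWithBigGun() { return false; }
void PickUpBigGun()       { std::cout << "picking up the gun\n"; }
void WalkToBigGun()       { std::cout << "walking to the gun\n"; }

// Evaluated top to bottom once per game loop; the first matching rule runs.
// Read bottom to top, the rules form the plan: walk to the gun, pick it up,
// with the topmost rule checking whether the goal is already achieved.
void GetABigGunBehaviour() {
    if (IHaveBigGun())        return;                       // goal achieved
    if (SameAreaWithBigGun()) { PickUpBigGun(); return; }   // last step
    WalkToBigGun();                                         // first step
}

int main() {
    GetABigGunBehaviour();  // with the stubs above: "walking to the gun"
}
```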



The NOLF2 AI was based on goals, using a hierarchical state machine to sequence behaviors.

The F.E.A.R. AI was based on goals, using a planner to sequence behaviors.

Be sure to look at the Halo 2 'behavior tree' writeup as well. I think it's on Gamasutra. It has some interesting ideas and nice performance behavior.
Quote: Original post by wodinoneeye
A Planner can call a State Machine and a State Machine can call a Planner (maybe better called a 'Solver').


A state machine would more correctly be the output of a planner capable of producing conditional plans (policies). One could feed a planner a graph that represents the set of feasible state machines that could be produced from a given behaviour set and state space, and obtain from it the machine(s) (policy) that achieve a given goal optimally (with respect to a given metric). Indeed, that's what the current system I'm working on does (automated generation of state machines from graph search).


Quote: Original post by wodinoneeye
Constant reevaluation is one reason that behavior AI can have such a heavy CPU load. Priorities shift with the situation and just about everything has to be rechecked for validity of the current tasks and better choices of goals to pursue.


This is where the classic dichotomy of deliberative vs. reactive planning arises. In dynamic environments, deliberative planning for optimal results is wasteful, since the plans typically become suboptimal quickly given changes in the environment. Reactive plans/behaviours, though, have severe limitations; most notably, local optimality in no way guarantees global optimality, so you cannot be assured of achieving a global goal with only local action selection.

The tradeoff is a hybrid deliberative-reactive approach. An example of this is the receding horizon control methods used in engineering (particularly in robotics). You solve a deliberative problem within a finite horizon, with an estimate of its value at the horizon, commit to a small portion of this solution, and solve it again during activity. Reactive architectures are equivalent to a horizon of one plan step.

Unfortunately, these methods are naive replanners, in that they only reiterate a limited-horizon planning problem, so they too can be inefficient (basically because they don't actually check whether they need to replan before doing so). I've previously published a solution to this in the form of meta-reasoning about plan quality as a trigger for replanning. Bringing this into a game environment is a current area of my research.
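A schematic sketch of that receding-horizon loop (everything here is a toy placeholder; a real implementation would plug in an actual planner, sensing, and a horizon-value estimate):

```cpp
// Schematic sketch of receding-horizon replanning: plan to a finite horizon,
// commit to only the first step, then re-sense and replan. Toy placeholders.
#include <algorithm>
#include <iostream>
#include <vector>

struct World { int distanceToGoal = 5; };  // toy state: steps left to the goal
struct Step  {};                           // one primitive action

bool GoalReached(const World& w) { return w.distanceToGoal == 0; }

// Stand-in for a real limited-horizon planner; a heuristic estimate of the
// value at the horizon would account for everything beyond it.
std::vector<Step> PlanToHorizon(const World& w, int horizonSteps) {
    return std::vector<Step>(std::min(horizonSteps, w.distanceToGoal));
}

void Execute(const Step&, World& w) { --w.distanceToGoal; }  // toy "actuator"

void RecedingHorizonLoop(World w, int horizonSteps) {
    while (!GoalReached(w)) {
        std::vector<Step> plan = PlanToHorizon(w, horizonSteps);
        if (plan.empty()) break;       // no feasible plan within the horizon
        Execute(plan.front(), w);      // commit to a small prefix only,
        std::cout << "replanned, " << w.distanceToGoal << " steps left\n";
    }                                  // then observe the world and replan
}

int main() {
    RecedingHorizonLoop(World{}, 3);   // a horizon of 1 would be purely reactive
}
```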

This topic is closed to new replies.
