AI goals
I've got some rudimentary AI in my game; in past games I've done very specific, short-sighted AI that doesn't require much code.
Now the problem I'm faced with is getting AI to play a multiplayer, team-based FPS.
So, getting them to follow waypoints isn't a problem, but telling them what to do is my current dilemma.
For example:
The AI spawns; he finds the closest waypoint list and moves to the closest waypoint. Now what should he do? Ultimately he should check for nearby enemies; if the enemies are rather far away, he should find a vehicle so he can get to them.
Once within range of the enemy, he should engage.
Now that's the high-level concept; how do I work this into a generic, flexible solution?
Some sort of state-based solution?
Would I have states like these? (A rough sketch follows the list.)
-FindVehicle
-DriveVehicle
-EngageEnemyOnFootRanged
-EngageEnemyOnFootMelee
-EnageEnemyAsDriver
-EngageEnemyAsGunner
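Something along those lines can work. For concreteness, here is a minimal sketch of those states wired up as a plain switch-based state machine; the Agent type, its fields, and ENGAGE_RANGE are invented placeholders, not parts of any real engine:

```cpp
// Minimal state-machine sketch over the states listed above.
// Agent, enemyDistance, and ENGAGE_RANGE are hypothetical placeholders.
enum AIState {
    STATE_FIND_VEHICLE,
    STATE_DRIVE_VEHICLE,
    STATE_ENGAGE_ON_FOOT_RANGED,
    STATE_ENGAGE_ON_FOOT_MELEE,
    STATE_ENGAGE_AS_DRIVER,
    STATE_ENGAGE_AS_GUNNER
};

struct Agent {
    AIState state;
    float   enemyDistance;  // filled in each frame by the perception code
    bool    inVehicle;
};

const float ENGAGE_RANGE = 50.0f;   // tune per weapon/game

void UpdateAI(Agent& a)
{
    switch (a.state) {
    case STATE_FIND_VEHICLE:
        if (a.enemyDistance < ENGAGE_RANGE)
            a.state = STATE_ENGAGE_ON_FOOT_RANGED;  // enemy got close first
        else if (a.inVehicle)
            a.state = STATE_DRIVE_VEHICLE;
        break;
    case STATE_DRIVE_VEHICLE:
        if (a.enemyDistance < ENGAGE_RANGE)
            a.state = STATE_ENGAGE_AS_DRIVER;       // fight from the vehicle
        break;
    default:
        break;  // combat states omitted to keep the sketch short
    }
}
```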
Now, down to an even lower level: once I have a waypoint list, how do I find a vehicle or other points of interest?
Should I just mark up my waypoints with some extra info?
Also, what do I do if I'm following a path and come across a waypoint where I need to do something special, e.g. jump, use something, or wait for a vehicle to spawn? Is that just more specific info I need to attach to waypoints?
Edit:
I'm just looking into neural networks (NNs); maybe this is something I could use for the high-level decision making, feeding it distances to enemies, friendlies, and vehicles, and evaluating its teamwork and killing effectiveness?
[Edited by - supagu on February 25, 2007 6:34:28 AM]
1) Don't touch NNs. You were already on the right track.
2) Check out Jeff Orkin's presentations and papers on blackboard/planner architectures:
http://web.media.mit.edu/~jorkin/
Quote:
Now, down to an even lower level: once I have a waypoint list, how do I find a vehicle or other points of interest?
Should I just mark up my waypoints with some extra info?
You need to keep track of what the agent perceives/knows/thinks. That's what a blackboard is there for. Also, remember waypoints are there to help you navigate, not to restrict you!
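For illustration, one possible shape for such a per-agent blackboard: a plain data bag that perception writes into and behaviors read from. Every field here is an assumption about what this particular game might track:

```cpp
// Sketch of a per-agent blackboard. All fields are assumptions for
// illustration, not a prescribed layout.
#include <vector>

struct Vector3 { float x, y, z; };

struct Blackboard {
    // what the agent currently perceives
    std::vector<int> visibleEnemyIds;
    Vector3          lastKnownEnemyPos;
    float            lastKnownEnemyAge;   // seconds since last sighting

    // what the agent knows about the world
    int              nearestVehicleId;    // -1 if none known
    Vector3          currentObjectivePos;

    // what the agent is currently thinking/doing
    int              currentGoal;         // e.g. an enum or goal id
    std::vector<Vector3> currentPath;     // waypoints; free to deviate
};
```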
Quote: Also, what do I do if I'm following a path and come across a waypoint where I need to do something special, e.g. jump, use something, or wait for a vehicle to spawn? Is that just more specific info I need to attach to waypoints?
A classic way is to attach a trigger to your navigation link that will add a new goal to your agent, or more simply tell him to perform some animation right away, when he attempts to go through this link.
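A rough sketch of what annotating a navigation link with a traversal action could look like; the enum values and the OnTraverseLink hook are illustrative, not from any particular engine:

```cpp
// Sketch of annotating nav links with traversal actions, as described above.
enum LinkAction { LINK_WALK, LINK_JUMP, LINK_USE_DOOR, LINK_WAIT_FOR_VEHICLE };

struct NavLink {
    int        fromWaypoint;
    int        toWaypoint;
    LinkAction action;    // what traversing this link requires
    float      costBias;  // planner can penalize slow links like waiting
};

// Called by the path follower when the agent reaches the start of a link.
void OnTraverseLink(const NavLink& link /*, Agent& agent */)
{
    switch (link.action) {
    case LINK_JUMP:             /* play jump animation / apply impulse  */ break;
    case LINK_USE_DOOR:         /* push a "use object" goal on the agent */ break;
    case LINK_WAIT_FOR_VEHICLE: /* idle until the vehicle spawner fires  */ break;
    default:                    break;
    }
}
```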
Good luck!
Your states correspond to high level behaviors. There are a variety of methods you can use to choose between behaviors. You can hand code an evaluation function to make the decisions, or you can use a form of reinforcement learning with a function approximator (possibly a neural network, but there are other methods that may be simpler and possibly more reliable), or you can hand code some things and use the computer to fine tune other things.
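As a concrete example of the hand-coded evaluation function option, a small sketch that scores each behavior from a couple of blackboard values and picks the highest; the behaviors and scoring formulas are made up, not tuned values:

```cpp
// Hand-coded evaluation function: score each behavior, pick the best.
// Scores and thresholds below are illustrative only.
#include <cfloat>

enum Behavior { FIND_VEHICLE, ENGAGE_RANGED, ENGAGE_MELEE, NUM_BEHAVIORS };

float ScoreBehavior(Behavior b, float enemyDist, bool vehicleKnown)
{
    switch (b) {
    case FIND_VEHICLE:  return (enemyDist > 200.0f && vehicleKnown) ? 0.8f : 0.1f;
    case ENGAGE_RANGED: return (enemyDist < 100.0f) ? 0.9f : 0.2f;
    case ENGAGE_MELEE:  return (enemyDist < 5.0f)   ? 1.0f : 0.0f;
    default:            return 0.0f;
    }
}

Behavior ChooseBehavior(float enemyDist, bool vehicleKnown)
{
    Behavior best = FIND_VEHICLE;
    float bestScore = -FLT_MAX;
    for (int b = 0; b < NUM_BEHAVIORS; ++b) {
        float s = ScoreBehavior((Behavior)b, enemyDist, vehicleKnown);
        if (s > bestScore) { bestScore = s; best = (Behavior)b; }
    }
    return best;
}
```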
I believe one valid option is to use A* to "pathfind" through a decision tree. If you think about it, pathfinding and decision making are very similar: you have your current position (state) and where you want to end up (goal state).
I imagine all of the tricks that work with A* apply here too, like breaking the search area down into larger discrete chunks and only doing micro-level pathing when you reach that larger node.
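To make the analogy concrete, here is a toy sketch of searching through decisions the way you would search through space: world state as a bitmask of facts, actions as edges that require some facts and add others. Plain breadth-first search is used for brevity; swapping in A* just means adding costs and a heuristic. All facts and actions are invented for illustration:

```cpp
// Toy "decision pathfinding": states are fact bitmasks, actions are edges.
#include <cstdio>
#include <queue>
#include <set>
#include <utility>
#include <vector>

typedef unsigned int WorldState;           // one bit per world fact
const WorldState HAS_VEHICLE = 1u << 0;
const WorldState AT_ENEMY    = 1u << 1;
const WorldState ENEMY_DEAD  = 1u << 2;

struct Action {
    const char* name;
    WorldState  pre;   // facts that must already hold
    WorldState  add;   // facts this action makes true
};

int main()
{
    const Action actions[] = {
        { "FindVehicle",  0,           HAS_VEHICLE },
        { "DriveToEnemy", HAS_VEHICLE, AT_ENEMY    },
        { "EngageEnemy",  AT_ENEMY,    ENEMY_DEAD  },
    };
    const int numActions = sizeof(actions) / sizeof(actions[0]);

    WorldState start = 0, goal = ENEMY_DEAD;
    typedef std::pair<WorldState, std::vector<const char*> > Node;
    std::queue<Node> open;
    std::set<WorldState> seen;
    open.push(Node(start, std::vector<const char*>()));

    while (!open.empty()) {
        Node n = open.front();
        open.pop();
        if ((n.first & goal) == goal) {        // goal facts satisfied: print plan
            for (std::size_t i = 0; i < n.second.size(); ++i)
                printf("%s\n", n.second[i]);
            return 0;
        }
        for (int i = 0; i < numActions; ++i) {
            if ((n.first & actions[i].pre) != actions[i].pre)
                continue;                      // preconditions not met
            WorldState next = n.first | actions[i].add;
            if (!seen.insert(next).second)
                continue;                      // already expanded this state
            std::vector<const char*> plan = n.second;
            plan.push_back(actions[i].name);
            open.push(Node(next, plan));
        }
    }
    return 1;  // no plan reaches the goal
}
```

Running this prints the plan FindVehicle, DriveToEnemy, EngageEnemy, which matches the high-level behavior the OP described.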
Quote: Original post by Steadtler
1) Don't touch NNs. You were already on the right track.
Thank God someone said it!
Quote: Original post by Vorpy
Your states correspond to high level behaviors. There are a variety of methods you can use to choose between behaviors. You can hand code an evaluation function to make the decisions, or you can use a form of reinforcement learning with a function approximator (possibly a neural network, but there are other methods that may be simpler and possibly more reliable), or you can hand code some things and use the computer to fine tune other things.
These medium-level behaviours are too heterogeneous to compare directly when making decisions. I think it would be better to make a (hopefully acyclic) graph of what actions are useful for what objectives, and use it to plan hierarchically and compute which mutually exclusive lowest-level behaviours (e.g. turn left/right, aim left/right, stop/walk/run, shoot or not, change weapon or not) are more useful and why. (A toy sketch follows the lists below.)
Above the OP's list of behaviours there are basic (often conflicting) goals:
- accomplishing the mission
- not dying
- helping comrades
Then there are more specific goals:
- avoiding enemy fire
- attacking or disturbing enemies attacking you
- running from explosions, to a safe distance or behind hard cover
- not walking into hazards and traps
- not hurting comrades
- attacking or disturbing enemies attacking comrades
At lower levels than the OP's list there are generic movement plans (e.g. take cover from direction X, run away from location X, approach location X) and specific movement plans (probably a list of waypoints) spanning many turns and subject to sudden replanning.
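One toy way to encode "what actions are useful for what objectives" is an explicit contribution table from low-level behaviours to top-level goals, with the current goal weights deciding the winner; all names and numbers below are illustrative:

```cpp
// Goal-weighted behaviour selection: contribution[b][g] says how useful
// behaviour b is for goal g. All values are hand-authored examples.
#include <cstdio>

enum Goal      { MISSION, SURVIVE, HELP_COMRADES, NUM_GOALS };
enum Behaviour { ADVANCE, TAKE_COVER, SUPPRESS_FIRE, NUM_BEHAVIOURS };

const float contribution[NUM_BEHAVIOURS][NUM_GOALS] = {
    /* ADVANCE       */ { 0.9f, -0.3f, 0.1f },
    /* TAKE_COVER    */ { 0.0f,  0.8f, 0.0f },
    /* SUPPRESS_FIRE */ { 0.3f,  0.2f, 0.7f },
};

Behaviour Pick(const float goalWeight[NUM_GOALS])
{
    Behaviour best = ADVANCE;
    float bestScore = -1e9f;
    for (int b = 0; b < NUM_BEHAVIOURS; ++b) {
        float score = 0.0f;
        for (int g = 0; g < NUM_GOALS; ++g)
            score += contribution[b][g] * goalWeight[g];
        if (score > bestScore) { bestScore = score; best = (Behaviour)b; }
    }
    return best;
}

int main()
{
    // Under heavy fire, survival dominates, so TAKE_COVER should win.
    const float underFire[NUM_GOALS] = { 0.2f, 1.0f, 0.3f };
    printf("%d\n", Pick(underFire));  // prints 1 (TAKE_COVER)
}
```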
Quote: Original post by Ultimape
I believe one valid option is to use A* to "pathfind" through a decision tree. If you think about it, pathfinding and decision making are very similar: you have your current position (state) and where you want to end up (goal state).
I imagine all of the tricks that work with A* apply here too, like breaking the search area down into larger discrete chunks and only doing micro-level pathing when you reach that larger node.
A* might be overkill, and its use still doesn't solve the most difficult part -- that of building the decision tree. Either the network/tree has to be prebuilt (before A* is invoked) with valid options matching the current situation, OR the A* has to invoke similar evaluation code to determine what vertices (associations) between decision 'nodes' exist and are valid for the current situation.
Unfortunately the decision tree itself is the product of digesting a usually irregular and fluid goal/problem/solution space that varies significantly depending on the objects present (not just static terrain, which A* pathfinding is commonly used for).
Quote: Original post by Timkin
... hence the value of plan space search methods! ;)
The simulation engine I'm working on has a soft-class (real polymorphism) based system where 'solutions' are inherited by objects for various tasks initiated by goals. It's more of a linear search, then, to find a match for the current task and situational factors. Solutions then invoke subtasks, which are likewise 'solved' recursively. That mechanism isn't too bad, but the difficulty comes from evaluating uncertainty and trying to pick the 'best' of several viable solutions (cost/risk measurement and prediction). Best of all, as it's a behavioral control system, the current plan is constantly reevaluated to allow opportunistic actions and then resume partially completed tasks.
Not a very clean or simple set of data to just 'plug in' to A*...
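For readers following along, a toy reading of that recursive task/solution mechanism (not the actual system described above): each viable solution may spawn subtasks, which are solved the same way, and the cheapest fully solvable candidate wins:

```cpp
// Toy recursive task/solution search. This is an illustrative reading of
// the idea above, not the system the poster describes.
#include <cstddef>
#include <vector>

struct Task;

struct Solution {
    float cost;                    // estimated cost/risk of this option
    bool  viable;                  // does it match the current situation?
    std::vector<Task*> subtasks;   // what this solution requires in turn
};

struct Task {
    std::vector<Solution*> candidates;  // solutions available to the object
};

// Returns total cost of the best plan for the task, or -1 if unsolvable.
float Solve(const Task& task)
{
    float best = -1.0f;
    for (std::size_t i = 0; i < task.candidates.size(); ++i) {
        const Solution& s = *task.candidates[i];
        if (!s.viable) continue;        // linear search for a match
        float total = s.cost;
        bool ok = true;
        for (std::size_t j = 0; j < s.subtasks.size(); ++j) {
            float sub = Solve(*s.subtasks[j]);  // recurse into subtasks
            if (sub < 0.0f) { ok = false; break; }
            total += sub;
        }
        if (ok && (best < 0.0f || total < best)) best = total;
    }
    return best;
}
```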
I got this from some page (don't remember which one) a few days ago; it might help:
Quote: And Elizabeth Gordon’s description of behaviors as ordered lists of rules in “A Goal-Based, Multitasking Agent Architecture” is very inspiring.
“A behavior is a list of rules for achieving a goal, where each rule consists of a condition and then one or more actions to be executed if the condition is satisfied. The rules are evaluated in order from top to bottom during each game loop, and the first rule that matches is executed. The ordering of the rules should be such that the first step in the behavior is at the bottom, the second step is just above that, and so on, with the topmost rule checking whether the goal has been successfully achieved. When read from bottom to top, the rules should thus form a complete plan. It should be clear that this structure is essentially a function containing only an if-then statement. (…) A behavior encoded in this way gives game characters important flexibility: if an action fails, the character will simply try again. If the character can no longer perform the action, it will drop down to an earlier step in the plan, and if a step can be skipped, the character will skip it automatically.”
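That quoted structure translates almost line-for-line into code. Here is a sketch, with hypothetical conditions and actions for a "get a vehicle" goal:

```cpp
// Ordered rule-list behaviour, as in the quoted passage: goal test at the
// top, first step of the plan at the bottom, first matching rule fires.
// The world queries and actions are made-up stubs so the example runs.
#include <cstdio>

static bool inVehicle = false, nearVehicle = false, knowsVehicle = true;
bool InVehicle()        { return inVehicle; }
bool NearVehicle()      { return nearVehicle; }
bool KnowsVehicle()     { return knowsVehicle; }
void Idle()             { printf("idle\n"); }
void EnterVehicle()     { printf("enter vehicle\n"); }
void MoveToVehicle()    { printf("move toward vehicle\n"); }
void SearchForVehicle() { printf("search for a vehicle\n"); }

struct Rule {
    bool (*condition)();   // is this rule applicable right now?
    void (*action)();      // what to do if it is
};

const Rule getVehicleBehaviour[] = {
    { InVehicle,    Idle             },  // goal achieved
    { NearVehicle,  EnterVehicle     },  // last step of the plan
    { KnowsVehicle, MoveToVehicle    },  // middle step
    { 0,            SearchForVehicle },  // first step: null condition = always
};

void RunBehaviour(const Rule* rules, int count)
{
    for (int i = 0; i < count; ++i) {
        if (rules[i].condition == 0 || rules[i].condition()) {
            rules[i].action();  // first matching rule fires...
            return;             // ...once per game loop
        }
    }
}

int main()
{
    RunBehaviour(getVehicleBehaviour, 4);  // prints "move toward vehicle"
}
```

Note how failure handling falls out for free: if EnterVehicle fails, NearVehicle stays false next frame and the agent simply drops back to an earlier step, exactly as the quote describes.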
Quote: Check out Jeff Orkin's presentations and papers on blackboard/planner architectures:
http://web.media.mit.edu/~jorkin/
qfe, clicky: http://web.media.mit.edu/~jorkin/ Read especially "3 States & a Plan: The AI of F.E.A.R."; it has seriously reduced my "FSM writing" ;)
Quote: A* might be overkill and its use still doesn't solve the most difficult part
F.E.A.R. uses A*, and the decision tree is partially built by the programmer by specifying the requirements of each action. Basically, to kill someone you need to fire a gun. To fire a gun you need to have bullets. When you reload a gun you get bullets. The pathfinding takes care of the rest.
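That action chain can be written down as precondition/effect pairs in the same style as the search sketch earlier in the thread; the fact names are illustrative, and delete effects (e.g. firing consuming bullets) are omitted for brevity:

```cpp
// The F.E.A.R.-style chain described above as precondition/effect data.
// Fact and action names are invented for illustration.
typedef unsigned int WorldState;
const WorldState HAS_BULLETS  = 1u << 0;
const WorldState WEAPON_FIRED = 1u << 1;
const WorldState TARGET_DEAD  = 1u << 2;

struct Action { const char* name; WorldState pre, add; };

const Action fearStyleActions[] = {
    { "Reload",    0,            HAS_BULLETS  },  // reloading yields bullets
    { "FireGun",   HAS_BULLETS,  WEAPON_FIRED },  // firing requires bullets
    { "KillEnemy", WEAPON_FIRED, TARGET_DEAD  },  // killing requires a fired gun
};
// Given the goal TARGET_DEAD, the planner's search discovers
// Reload -> FireGun -> KillEnemy on its own; no hand-written FSM needed.
```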
Check out "3 states and a plan" by jeff orkin and it becomes pretty clear.
Haven't come to the implementation part though as my engine is still in the design phase :)