Multiagent pathfinding in current games
Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC
Professional consultant on game AI, mathematical modeling, simulation modeling
Co-founder and 10 year advisor of the GDC AI Summit
Author of the book, Behavioral Mathematics for Game AI
Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play
"Reducing the world to mathematical equations!"
Cooperative agent solutions have agents negotiate the solution to their problem among themselves, generally with each agent having an equal say in the final solution. No single agent typically knows what all the other agents are doing (though it might be given information about neighbouring agents). An example of this is the flocking behaviour we hear mentioned often in this forum.
Coordinated agent solutions generally rely on an oracle (which may be one of the agents) that coordinates the operation of all agents. The oracle is responsible for observation, decision making and dissemination of orders to agents. This is commonly seen in squad-based AI, where the squad leader acts as the oracle.
The most common approach in games to pathfinding for multiple-agent scenarios would be the latter. Most of the review/post-mortem articles I have read over the years have focused on pathfinding once for the squad leader and then having all units follow the same path (possibly while maintaining a rigid formation), or having each unit use the leader's path as a basis for its activity. There are of course exceptions and variations on this theme in many shipped games, but the core idea is the same: solve the problem once and massage the solution for individual agents.
As for cooperative agents in games, you'll see a lot less of that, mostly because it becomes very difficult to design high-level group behaviours using only rules for local interaction between individuals. Having said that, steering behaviours (as InnocuousFox pointed out) have become popular in recent years, particularly with regard to enforcing local movement behaviours on groups of agents with common goals (such as squads moving in formation).
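To make the "rules for local interaction" idea a little more concrete, here is a minimal sketch of a Reynolds-style separation rule as a toy C++ example. Everything in it (the Vec2 and Agent types, the separationRadius parameter) is illustrative rather than taken from any particular engine; the point is simply that each agent only looks at its own neighbours, with no oracle involved.

#include <cmath>
#include <vector>

struct Vec2 { float x = 0.f, y = 0.f; };
static Vec2 operator+(Vec2 a, Vec2 b) { return {a.x + b.x, a.y + b.y}; }
static Vec2 operator-(Vec2 a, Vec2 b) { return {a.x - b.x, a.y - b.y}; }
static Vec2 operator*(Vec2 a, float s) { return {a.x * s, a.y * s}; }
static float length(Vec2 v) { return std::sqrt(v.x * v.x + v.y * v.y); }

struct Agent { Vec2 pos; Vec2 goalDir; };   // goalDir: direction toward this agent's own goal

// Purely local rule: head toward your own goal, but push away from any
// neighbour closer than separationRadius. No agent knows the global plan.
Vec2 cooperativeSteer(const Agent& self, const std::vector<Agent>& neighbours,
                      float separationRadius)
{
    Vec2 push;
    for (const Agent& other : neighbours) {
        Vec2 away = self.pos - other.pos;
        float d = length(away);
        if (d > 0.f && d < separationRadius)
            push = push + away * (1.f / (d * d));   // stronger push the closer they are
    }
    return self.goalDir + push;   // blend goal seeking with separation
}

Add the usual alignment and cohesion terms and you get the classic flocking behaviour mentioned above.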
Gamasutra is a good resource for post-mortem articles and design articles. The Game Programming Gems and Game AI Wisdom series of books also have a good selection of articles on techniques used in commercial titles.
I hope this helps.
Cheers,
Timkin
Quote: Original post by Timkin
The most common approach in games to pathfinding for multiple-agent scenarios would be the latter. Most of the review/post-mortem articles I have read over the years have focused on pathfinding once for the squad leader and then having all units follow the same path (possibly while maintaining a rigid formation), or having each unit use the leader's path as a basis for its activity. There are of course exceptions and variations on this theme in many shipped games, but the core idea is the same: solve the problem once and massage the solution for individual agents.
This is more or less what Company of Heroes does. We have soldiers grouped into squads, and only the leader actually paths to the goal... the rest are busily making short-distance move orders to stay in formation. Other soldiers are treated as static obstacles in these searches. This works out OK for avoiding each other because we constantly repath (every ~0.5-1 second for followers, every 1-3 seconds for leaders). We keep the cost of the repathing down through lots of different optimizations, but primarily the use of hierarchical pathfinding keeps it in check.
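A rough illustration of that scheme might look like the sketch below; the types, repath intervals and pathfinder calls are placeholders for the sake of the example, not Relic's actual code:

#include <cstddef>
#include <vector>

struct Vec2 { float x = 0.f, y = 0.f; };
static Vec2 operator+(Vec2 a, Vec2 b) { return {a.x + b.x, a.y + b.y}; }

struct Soldier {
    Vec2 pos;
    std::vector<Vec2> path;
    float lastRepath = -1000.f;
};

struct Squad {
    Soldier leader;
    std::vector<Soldier> followers;
    std::vector<Vec2> formationOffsets;   // one slot offset per follower, relative to the leader
};

// Stand-ins for the engine's pathfinders (declared only, not defined here):
// a hierarchical long-range search for the leader and a cheap short-range
// search for followers that treats other soldiers as static obstacles.
std::vector<Vec2> findPathHierarchical(Vec2 from, Vec2 to);
std::vector<Vec2> findShortPath(Vec2 from, Vec2 to, const std::vector<Vec2>& staticObstacles);

void updateSquad(Squad& squad, Vec2 squadGoal, float now)
{
    // The leader repaths infrequently (every 1-3 seconds, as above) toward the real goal.
    if (now - squad.leader.lastRepath > 2.0f) {
        squad.leader.path = findPathHierarchical(squad.leader.pos, squadGoal);
        squad.leader.lastRepath = now;
    }
    // Followers repath more often (~0.5-1 s), but only to a nearby formation slot.
    for (std::size_t i = 0; i < squad.followers.size(); ++i) {
        Soldier& s = squad.followers[i];
        if (now - s.lastRepath > 0.75f) {
            Vec2 slot = squad.leader.pos + squad.formationOffsets[i];
            std::vector<Vec2> obstacles;              // everyone else is a static obstacle
            obstacles.push_back(squad.leader.pos);
            for (std::size_t j = 0; j < squad.followers.size(); ++j)
                if (j != i) obstacles.push_back(squad.followers[j].pos);
            s.path = findShortPath(s.pos, slot, obstacles);
            s.lastRepath = now;
        }
    }
}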
Chris
Quote: Original post by SKalliola
I just read about Cooperative Pathfinding (Silver, 2005), but that's quite a new thing and highly unlikely to have been used in Warcraft 3, Command & Conquer, Baldur's Gate, etc. :)
The pathfinding in Baldur's Gate is so poor that I doubt any cooperation took place at all. One character would quite often go the long way around if another character happened to stand between it and the destination, even if that character was moving away from the choke point at the time.
Can you generate a time estimate of when a choke point will be cleared and convert that to a terrain weight? That way, in a situation such as that, the agent could decide whether to wait (a la Reynolds' queuing at a door) or find a new path. Let's face it, it would be just as silly for us to watch a huge queue form at a door when a slightly longer, yet more time-effective, path was available.
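As a sketch of what I mean (the names are hypothetical, and expectedClearTime would come from however the game chooses to predict when the node frees up), the planner's node cost could simply fold in the predicted wait:

#include <algorithm>

struct NodeCost {
    float traverseTime;       // time to cross this node at normal speed
    float expectedClearTime;  // absolute game time at which the node is expected to be free
};

// Planner cost: if the node will still be occupied when we arrive, the extra
// cost is the time spent queuing. A slightly longer but unobstructed route then
// wins whenever it really is faster than waiting at the door.
float nodeCost(const NodeCost& node, float estimatedArrivalTime)
{
    float wait = std::max(0.f, node.expectedClearTime - estimatedArrivalTime);
    return node.traverseTime + wait;
}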
Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC
Obviously there are some omniscience issues here, but a timestamp per node is small enough that you could store a separate copy of this data for each player, I assume.
1) pathfind agent A
2) if any idle agents are along the path of A, move them out of the way as A draws near
3) completely ignore collisions between agent A and agent B if they are both moving and their paths intersect
From an RTS point of view, it's more likely that players will read in-motion pass-through collision as simple occlusion (especially in the older games, where you couldn't bring the camera that close to the ground). Nowadays RTS games do a little more look-ahead and avoidance, but there's still a _lot_ of in-motion collision. Generally you get a lot of "dynamic avoidance" just by making sure that units come to rest in unique positions.
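A minimal sketch of the three-step scheme listed above might look like this (the types, thresholds and free-spot helper are all placeholders, not any particular engine's API):

#include <cmath>
#include <vector>

struct Vec2 { float x = 0.f, y = 0.f; };
static float dist(Vec2 a, Vec2 b)
{
    return std::sqrt((a.x - b.x) * (a.x - b.x) + (a.y - b.y) * (a.y - b.y));
}

struct Unit {
    Vec2 pos;
    bool moving = false;
};

// Stand-in for whatever the game uses to find a free spot to sidestep to.
Vec2 pickNearbyFreeSpot(Vec2 around);

// Step 3: only resolve a collision if at least one unit is idle; two units
// that are both in motion are simply allowed to pass through each other.
bool shouldResolveCollision(const Unit& a, const Unit& b)
{
    return !(a.moving && b.moving);
}

// Step 2: as a moving unit draws near an idle one, shove the idle unit aside.
// (Step 1 - pathing the moving unit in the first place - is just the normal search.)
void pushIdleUnitsAside(const Unit& mover, std::vector<Unit>& others, float clearance)
{
    if (!mover.moving) return;
    for (Unit& u : others) {
        if (!u.moving && dist(u.pos, mover.pos) < clearance) {
            // In a real game this would issue a short move order rather than teleporting.
            u.pos = pickNearbyFreeSpot(u.pos);
        }
    }
}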
-me