
ai chasing player through maze

Started by September 28, 2010 07:16 PM
4 comments, last by LorenzoGatti 14 years, 1 month ago
If you have an enemy AI (or many enemy AIs) chasing a player through a maze using A*, how often should the A* path be recalculated? As the player moves through the maze, the end point of an AI's current A* path will be where the player used to be, not where he currently is. I don't believe it's a smart idea to recalculate the path every frame, or even 10 times per second, especially when the number of AIs grows to 50-100 or more.

Recalculate only when, and if, necessary. Tip: don't always recalculate; reuse old data. :)

For example, when you calculate the path to the target (x, y) and later wonder whether you should recalculate it, take the distance from the target's current position to the old (x, y); if it exceeds some number, say N, then recalculate. Then consider whether you should recalculate the whole path from your current position to the new target position, or instead compute a path from the old target position (x, y) to the new target position and append that to the existing path. :)

Now you have a system which recalculates only when the data is becoming irrelevant. The next question is what N you should use. ;) Why not make N relative to the time since the last update to the path?
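A minimal sketch of this staleness check, assuming one reading of "N relative to time": the tolerated drift shrinks the longer the path has gone without an update. The function name, the tile-distance units, and the `base_threshold` constant are all hypothetical tuning choices, not anything prescribed above.

```python
import math

def should_recalculate(path_goal, player_pos, time_since_update, base_threshold=3.0):
    """Decide whether a cached A* path is stale.

    path_goal: (x, y) the cached path ends at (where the player *was*)
    player_pos: (x, y) where the player is now
    time_since_update: seconds since the path was last recalculated
    base_threshold: hypothetical tuning constant N, in tiles
    """
    dx = player_pos[0] - path_goal[0]
    dy = player_pos[1] - path_goal[1]
    drift = math.hypot(dx, dy)
    # Shrink the tolerated drift as the path ages: a fresh path is trusted
    # even if the player wandered a bit, an old one much less so.
    n = base_threshold / (1.0 + time_since_update)
    return drift > n
```

Staggering these checks across frames (e.g. a few agents per frame) keeps the cost flat even with 50-100 agents.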

Instead of having each AI plot its own path, it might make more sense in your game to do a "flood fill" outward from the player until it has hit all your enemies.

When the fill hits an enemy, you can work backwards up the flood fill to get that enemy's path to the player.

It's more expensive than a single A* path find, but if you have 50 enemies it's going to take far fewer resources than doing 50 path finds.
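A sketch of that idea, assuming a simple boolean tile grid and 4-way movement (both illustrative choices): one breadth-first fill from the player records parent links, and each enemy's path is recovered by walking those links back.

```python
from collections import deque

def flood_fill_from_player(grid, player, enemies):
    """One BFS from the player reaches every enemy; each enemy's path is
    recovered by walking parent links back toward the player.

    grid: 2D list of bools, True = walkable (illustrative format)
    player, enemies: (row, col) tuples
    Returns {enemy: path from enemy back to the player}.
    """
    rows, cols = len(grid), len(grid[0])
    parent = {player: None}
    frontier = deque([player])
    remaining = set(enemies)
    paths = {}
    while frontier and remaining:
        r, c = frontier.popleft()
        if (r, c) in remaining:
            remaining.discard((r, c))
            # Work backwards up the fill to build this enemy's path.
            path, node = [], (r, c)
            while node is not None:
                path.append(node)
                node = parent[node]
            paths[(r, c)] = path
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] and (nr, nc) not in parent:
                parent[(nr, nc)] = (r, c)
                frontier.append((nr, nc))
    return paths
```

The early exit once `remaining` is empty keeps the fill from exploring parts of the maze with no enemies in them.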

Another idea: since there is only one player but many enemies, instead of having each enemy keep track of its own path to the player, have the map keep track of how to get to the player from each location.

For example, on a tile-based map one tile might be "pointing left," meaning you go left from there to reach the player, and the tile to its left might be "pointing up," meaning you go up from there (etc.), so that from any point on the map you can reach the player by following the directions.

Having the map keep this data lets your N enemies all benefit from it, while it only has to be calculated once whenever it needs recalculating, instead of once per enemy.
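This per-tile direction map (often called a "flow field") can be built with a single BFS from the player. Again the boolean grid and 4-way moves are illustrative assumptions, not anything the post prescribes:

```python
from collections import deque

def build_flow_field(grid, player):
    """For every reachable tile, store the (dr, dc) step that moves one
    tile closer to the player -- one BFS shared by all enemies.

    grid: 2D list of bools, True = walkable (illustrative format)
    Returns {tile: (dr, dc)}; the player's own tile points nowhere.
    """
    rows, cols = len(grid), len(grid[0])
    direction = {player: (0, 0)}
    frontier = deque([player])
    while frontier:
        r, c = frontier.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] and (nr, nc) not in direction:
                # From (nr, nc), stepping back toward (r, c) moves
                # toward the player, so store the reversed offset.
                direction[(nr, nc)] = (-dr, -dc)
                frontier.append((nr, nc))
    return direction
```

Each enemy then just looks up its tile and takes the stored step; no per-enemy path storage at all.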

Random thoughts for you to mull over (:
Another random idea: Use something like heat propagation in a grid to have a function that grows as you get closer to the player. You start by giving the tile where the player is a value of 1.0 and all the other tiles a value of 0.0. Now every frame do one pass through the map assigning to each tile the average of the values of the four neighbors multiplied by 0.99. This will converge to a function that grows as you get closer to the player. When the player moves, you update the function once (or a few times) per frame, so it might take a little bit for the information to propagate (very much like a wave), but the function will be more and more correct. Then each agent can simply move following the direction in which the function grows.

The updating of the function parallelizes well, so it can even be computed on the GPU.

Perhaps not a very practical solution, but I think it's an interesting way to think about the problem.
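One relaxation pass of the scheme described above might look like the following sketch (the function name and list-of-lists grid format are my assumptions; a real implementation would likely use flat arrays or a GPU kernel, as noted):

```python
def diffusion_pass(heat, walkable, player, decay=0.99):
    """One pass of the heat-propagation idea: each walkable tile becomes
    the decayed average of its four neighbours, and the player's tile is
    pinned at 1.0. Call once (or a few times) per frame.
    """
    rows, cols = len(heat), len(heat[0])
    new = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if not walkable[r][c]:
                continue  # walls stay at 0.0 and block propagation
            total = 0.0
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if 0 <= nr < rows and 0 <= nc < cols:
                    total += heat[nr][nc]
            new[r][c] = decay * total / 4.0
    new[player[0]][player[1]] = 1.0
    return new
```

Each agent then moves to whichever neighbouring tile has the highest value; the information spreads outward from the player over successive frames, as described.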
Quote: Original post by alvaro
have a function that grows as you get closer to the player.


If you flip this around* to be "a function that decreases as you get close to the player," then the optimal policy can be described in these terms, and the scalar function that expresses it is called the Bellman Value Function (the optimal policy is to just always go locally downhill this function). On a discrete state space (like your grid), it is the solution to the Bellman Equation and can be found most generally via Value Iteration, and more specifically in the case of shortest paths by doing a flood-fill as Atrix256 suggests (in which case the Value Function is known as the Distance Transform). In a continuous state space, you instead get a PDE called the Hamilton-Jacobi-Bellman Equation, which in the shortest-path case becomes the Eikonal Equation.

The diffusion scheme alvaro suggested (whose steady state satisfies Laplace's Equation) will also have most of the nice properties you want; in particular, the global maximum will be at the player's location. In fact, a few papers in the robotics literature have proposed using it for precisely this purpose. It has some disadvantages, however: there tend to be large regions that are mostly flat, and convergence isn't particularly fast -- unless, maybe, you resort to multigrid methods (by contrast, you can compute the Distance Transform in a finite number of iterations).

The aspect of your problem that I personally find most interesting is that opportunities arise for some agents to encircle the player and perform collective actions that are more intelligent than each individually making a beeline for him. This comes under the heading of "multiagent pursuit-evasion games on graphs." We've had some conversations here on Gamedev about this problem in the context of a game in which a player tries to escape zombies; you can probably find those threads by searching. That said, you should probably view this last paragraph as "stuff to do eventually;" for your immediate problem I think the answer is summed up in two words: "Distance Transform."

(* Actually this sign change isn't really necessary; I just prefer it...)
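For concreteness, here is a sketch of Value Iteration for the shortest-path case, where the Bellman update V(s) = 1 + min over neighbours of V(s') converges to the Distance Transform in a finite number of sweeps. The grid format and unit step cost are assumptions for illustration:

```python
def value_iteration_distances(walkable, player):
    """Value Iteration specialised to shortest paths: sweep the grid,
    applying V(s) = 1 + min_neighbour V(s'), until nothing changes.
    The result is the Distance Transform; the optimal policy is to
    always step to the lowest-valued neighbour (downhill).
    """
    INF = float("inf")
    rows, cols = len(walkable), len(walkable[0])
    v = [[INF] * cols for _ in range(rows)]
    v[player[0]][player[1]] = 0.0
    for _ in range(rows * cols):  # enough sweeps to guarantee convergence
        changed = False
        for r in range(rows):
            for c in range(cols):
                if not walkable[r][c] or (r, c) == player:
                    continue
                best = INF
                for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                    if 0 <= nr < rows and 0 <= nc < cols:
                        best = min(best, v[nr][nc])
                if 1.0 + best < v[r][c]:
                    v[r][c] = 1.0 + best
                    changed = True
        if not changed:
            break
    return v
```

In practice the single BFS/flood-fill described earlier computes the same function more cheaply; the sweep form is shown only to make the connection to the Bellman Equation explicit.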
You might also cast the problem as pursuit-evasion on a graph and send the multiple agents to surround the player and close in without leaving escape routes, eventually reaching the player.

This would be bad in Pacman (guaranteed defeat) but good to force the player to act if he can fight (e.g. Cavelon), dig effectively (e.g. Mr Do!) or both (Bomberman).

Omae Wa Mou Shindeiru

This topic is closed to new replies.
