Speed up A* over long distances
Hi all,
my buddy has implemented A*, so I'm able to use it for my agents. As we tested it, we learned that we'd very much underestimated the search time. The problem is that we use very large geo-data maps, probably a lot larger than in most computer games.
Now we're assessing different ideas how to deal with that situation:
- The algorithm that calculates the search grid for our geographic maps can be adjusted to create different resolutions. Maybe we can pre-calculate a number of search grids in different resolutions and then use them depending on the requested start-goal distance. A request would then work in several levels, for example (a rough sketch of this lookup follows the list):
1. ask start->goal (very far) -> get very coarse-grained waypoints A from the most coarse-grained grid
2. ask start->A1 (not so far any more) -> get finer-grained waypoints B
3. ask start->B1 (local route) -> get fine-grained waypoints C
4. move start->C1 ... Cend
5. get new C from B1->B2
6. move from B1->C1...Cend
...and so on.
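A minimal sketch of what I mean by the layered lookup, assuming a hypothetical find_path(grid, start, goal) that runs plain A* on a single grid and returns a list of waypoints (all names here are placeholders, not real code from our project):

# grids is assumed to be ordered from coarsest to finest resolution;
# find_path(grid, start, goal) is a placeholder for a plain A* search
# on one grid that returns a list of waypoints (or None).
def layered_path(grids, start, goal):
    target = goal
    path = None
    for grid in grids:                 # coarsest ... finest
        path = find_path(grid, start, target)
        if path is None:
            return None                # no route at this resolution
        target = path[0]               # the next level only refines the first leg
    return path                        # fine-grained waypoints for the first leg

The agent would walk the returned fine-grained leg, then ask again from its new position; since the coarse levels are cheap, re-planning per leg should stay affordable.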
Would you do it that way, or are there better methods out there?
Thank you in advance!
You are right, A* is really slow for large search spaces. I recommend checking out HPA* (Hierarchical Pathfinding A*).
This is IMO quite a nice presentation to start with:
http://www.cs.ualberta.ca/~bulitko/F06/presentations/2006-09-29-ML.pdf
Otherwise use Google, there are tons of links.
MaR
You could create the illusion of a fast pathfinding method in three easy steps:
1. Create a quick path, which is just a straight line from the start point to the finish point. Have your agents stroll down this path.
2. Pick an arbitrary point along this quick path, and calculate your A* path from that point to the finish point.
3. When the A* calculation is complete, have your agent move off the quick path and intercept the A* path.
I believe this method was used in the Empire Earth games.
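A rough sketch of the trick, just to illustrate it; agent.follow()/agent.position, a_star() and the background thread are all assumptions for the example, not how Empire Earth actually implemented it:

import threading

def lerp(a, b, t):
    # Linear interpolation between two 2D points.
    return (a[0] + (b[0] - a[0]) * t, a[1] + (b[1] - a[1]) * t)

def quick_then_real_path(agent, start, goal, a_star):
    # 1. Stroll down the straight "quick path" immediately.
    agent.follow([start, goal])

    # 2. Pick a point a little way along the quick path and search from there.
    intercept = lerp(start, goal, 0.25)

    def compute():
        real = a_star(intercept, goal)        # the slow search, off the main loop
        if real:
            # 3. Leave the quick path and intercept the real one.
            agent.follow([agent.position, intercept] + real)

    # A thread stands in for whatever time-sliced search the engine really uses.
    threading.Thread(target=compute, daemon=True).start()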
"Many that live deserve death. And many that die deserve life. Can you give it to them?Then don't be too eager to deal out death in judgment."-J.R.R. Tolkien
Thank you. That HPA* seems to be the generalized approach of the thing I had in mind.
The approach of moving along a straight path until A* has finished has three problems, as far as I can see:
1. It's possible to run into all kinds of obstacles: driving my tank under water, or moving my scout right in front of a group of enemy tanks.
2. The CPU is, in contrast to the HPA* approach, under heavy load.
3. It's hard to predict how far away that "interception point" should be.
Do you know how these problems were solved in the games you mentioned?
Hi there,
What is usually done when dealing with a very large area is to make the search hierarchical, as was previously pointed out. The debate is about how to build the higher level. A good approach is to work at the high level with a topological map and then to run A* between topological areas. This gives a good tradeoff between precision and workload.
Another consideration: for games, pathfinding is usually suboptimal anyway, i.e. you can precompute good portions of your paths. Then, using steering behaviors, you can avoid sudden obstacles.
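To make the two-level idea concrete, here is a small sketch with a hand-made topological map; the area names, the adjacency table and the helpers detailed_a_star()/exit_point() are all made up for illustration:

import heapq

# Toy topological map: nodes are named areas, edges carry a rough traversal cost.
# In a real system this graph would be extracted from the geo data.
AREA_GRAPH = {
    "harbour":   {"old_town": 4, "ring_road": 9},
    "old_town":  {"harbour": 4, "ring_road": 3, "airport": 12},
    "ring_road": {"harbour": 9, "old_town": 3, "airport": 6},
    "airport":   {"old_town": 12, "ring_road": 6},
}

def area_route(graph, start, goal):
    # Uniform-cost search over the area graph (A* with h = 0); returns a list of areas.
    frontier = [(0, start, [start])]
    visited = set()
    while frontier:
        cost, area, path = heapq.heappop(frontier)
        if area == goal:
            return path
        if area in visited:
            continue
        visited.add(area)
        for nxt, step in graph[area].items():
            if nxt not in visited:
                heapq.heappush(frontier, (cost + step, nxt, path + [nxt]))
    return None

# Usage: plan the coarse route once, then run the detailed A* only between
# consecutive areas, one leg at a time:
#   route = area_route(AREA_GRAPH, "harbour", "airport")
#   for a, b in zip(route, route[1:]):
#       leg = detailed_a_star(grid, exit_point(a, b), exit_point(b, a))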
Regards,
Thanks JohnRuin.
The agents I'm building are not fully autonomous, so a partly precomputed path doesn't really seem feasible.
Rather, users assign mission scripts to them (go to A and wait for enemies; at time x go to B; if you see enemies of class 'low', attack them, else flee; if you run out of supply, go to C; and so on), and they have to fulfil those missions. So it's pretty much impossible to predict where they can/should go at runtime.
That's fine.
Nonetheless, I think you could still plan the high-level trajectories (i.e., go from area A to B, from B to C, ..., from M to N), and then compute the actual trajectories at run time with A*, one basic step at a time (i.e., from A to B).
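For example, something like this (just a sketch; area_route(), detailed_a_star(), entry_point() and the agent interface are placeholders, the point is only that the expensive grid search runs one leg at a time):

# Consume a precomputed high-level route lazily: the expensive grid-level
# A* runs only when the agent actually needs the next leg.
def legs(grid, areas, start_pos):
    pos = start_pos
    for area in areas[1:]:                 # areas[0] is where we start
        entry = entry_point(area, pos)     # placeholder: where we enter the next area
        leg = detailed_a_star(grid, pos, entry)
        if leg is None:
            return                         # bail out and replan at the high level
        yield leg
        pos = leg[-1]

# for leg in legs(grid, area_route(area_graph, "A", "N"), agent.position):
#     agent.follow(leg)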
Yes, HPA* is definitely the way to go. (Ironically, I wrote about it a few days ago. :-)
http://aigamedev.com/theory/near-optimal-hierarchical-pathfinding
As for your questions:
1. Try to avoid going into detailed pathfinding with A*. This is the biggest cost, so if you can avoid it, the search will be multiple orders of magnitude faster.
2. To do that, use procedural steering behaviors rather than detailed planning. Most obstacles can be approximated as convex polygons, so reactive behaviors are fine.
3. Compute the path lazily, not entirely upfront. In dynamic worlds, the path is rarely used to the end anyway.
I hope that helps!
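For point 2, a very small steering sketch; obstacles are approximated as circles here just to keep it short (a convex poly works the same once you have a closest-point test), and none of this is from the linked article:

import math

# Minimal seek + obstacle-avoidance steering, standing in for "reactive
# behaviors instead of detailed planning". Obstacles are (center, radius) pairs.
def steer(pos, target, obstacles, max_speed=1.0, avoid_radius=3.0):
    # Desired velocity: head straight for the current waypoint.
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist = math.hypot(dx, dy) or 1e-9
    vx, vy = dx / dist * max_speed, dy / dist * max_speed

    # Add a repulsive push away from any obstacle we are about to clip.
    for (ox, oy), radius in obstacles:
        ax, ay = pos[0] - ox, pos[1] - oy
        d = math.hypot(ax, ay) or 1e-9
        if d < radius + avoid_radius:
            push = (radius + avoid_radius - d) / (radius + avoid_radius)
            vx += ax / d * push * max_speed
            vy += ay / d * push * max_speed

    # Clamp the result back to max_speed.
    speed = math.hypot(vx, vy) or 1e-9
    scale = min(1.0, max_speed / speed)
    return vx * scale, vy * scale

# Each frame: vel = steer(agent.pos, next_waypoint, nearby_obstacles), then move by vel * dt.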
Join us in Vienna for the nucl.ai Conference 2015, on July 20-22... Don't miss it!
Give IDA* a try. Essentially, you perform a depth-first search up to a certain cost threshold. If you don't find your goal, you start over with a slightly increased threshold. Here's some pseudo-code:
root = start node
threshold = root's f()   (= h(start), since g(start) = 0)
repeat:
    perform a depth-first search starting at root
    if the goal was found, stop
    otherwise set threshold = the minimum f() seen that is higher than the current threshold, and search again

depth-first-search(node):
    if node = goal: return goal found
    if node's f() > threshold: return goal not found
    else: for each child of node, while goal not found, depth-first-search(child)

I found that despite the fact that you're repeating the search during each iteration, I saw a massive speed up. For the domain I'm working in it increased the search speed by about 1000x. Seriously.
-Kirk
EDIT: By the way, use the same heuristics and scoring that you're using with A*. f() = g() + h() works the same.
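For what it's worth, here is a small self-contained Python version of the same idea on a 4-connected grid; the grid format, the Manhattan heuristic and the example at the bottom are my own illustrative choices, not part of the original post:

import math

# IDA* on a grid of 0 = free, 1 = blocked cells, using the same
# f() = g() + h() scoring as A*; h is the Manhattan distance to the goal.
def ida_star(grid, start, goal):
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    def neighbours(p):
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = p[0] + dr, p[1] + dc
            if 0 <= r < len(grid) and 0 <= c < len(grid[0]) and grid[r][c] == 0:
                yield (r, c)

    def search(path, g, threshold):
        node = path[-1]
        f = g + h(node)
        if f > threshold:
            return f                       # report the overshoot for the next iteration
        if node == goal:
            return "FOUND"
        minimum = math.inf
        for nxt in neighbours(node):
            if nxt in path:                # avoid cycles on the current branch
                continue
            path.append(nxt)
            result = search(path, g + 1, threshold)
            if result == "FOUND":
                return result
            minimum = min(minimum, result)
            path.pop()
        return minimum

    threshold = h(start)
    path = [start]
    while True:
        result = search(path, 0, threshold)
        if result == "FOUND":
            return path
        if result == math.inf:
            return None                    # no path exists
        threshold = result                 # smallest f() that exceeded the old threshold

# Example:
#   grid = [[0, 0, 0, 0],
#           [1, 1, 0, 1],
#           [0, 0, 0, 0]]
#   ida_star(grid, (0, 0), (2, 0)) -> list of cells from start to goal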