Time/Space tradeoffs in Pathfinding
So you can generate a perfect pathfinding solution using n^2 space (an easy way to do this is to have each node contain a map {(dest node) |-> (next node)}).
This takes a fair amount of time to set up, but during execution it takes O(|path| lg(n)) effort to reconstruct a path.
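For concreteness, here's a minimal sketch of that precomputed next-hop table (my own illustration, built with Floyd-Warshall; the names and layout are made up). Setup is O(n^3) time and O(n^2) space, and reconstruction just follows next-hop pointers:

```cpp
#include <limits>
#include <vector>

const double INF = std::numeric_limits<double>::infinity();

struct AllPairs {
    std::vector<std::vector<double>> dist; // dist[u][v] = best known path cost
    std::vector<std::vector<int>>    next; // next[u][v] = first hop on u->v, -1 if none

    // w[u][v] = edge cost (INF if no edge); assumes w[u][u] == 0.
    explicit AllPairs(std::vector<std::vector<double>> w)
        : dist(std::move(w)) {
        int n = (int)dist.size();
        next.assign(n, std::vector<int>(n, -1));
        for (int u = 0; u < n; ++u)
            for (int v = 0; v < n; ++v)
                if (dist[u][v] < INF) next[u][v] = v;
        for (int k = 0; k < n; ++k)              // O(n^3) one-time setup
            for (int u = 0; u < n; ++u)
                for (int v = 0; v < n; ++v)
                    if (dist[u][k] + dist[k][v] < dist[u][v]) {
                        dist[u][v] = dist[u][k] + dist[k][v];
                        next[u][v] = next[u][k];
                    }
    }

    // O(|path|) reconstruction, one table lookup per node on the path.
    std::vector<int> path(int u, int v) const {
        if (next[u][v] == -1) return {};         // unreachable
        std::vector<int> p{u};
        while (u != v) { u = next[u][v]; p.push_back(u); }
        return p;
    }
};
```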
There are various algorithms that let you find a single path in guaranteed O(n^2) time. With a good hint function (which is not guaranteed to be possible), A* can do better.
I'm wondering about time-space tradeoff options between these two. I.e., do you know any algorithms that don't require a "magic" hint function that use less than n^2 space and less than n^2 time?
If your map doesn't change, and you have unlimited space, you can obviously precompute every best path. This gives O(1) performance (if my nomenclature is correct). With normal maps, this is completely impractical.
If you want minimum storage, A* is the best you can do. And the worst case for that is O(n^2) as you note.
So the best performance/space trade-off is likely to be one where you cache paths, if that is possible.
Are we still talking about finding optimum paths, or is any path ok? With just O(n) storage you can put together a constant time path finder, but at least the naive solution I'm thinking of could produce not so hot results.
Another possibility would be applying LOD to pathfinding. It is a fairly common approach, pathfinding with simplified graphs and then refining level by level. If you combined this with your caching idea, the storage requirements would be less than O(n^2) since each node would only store caches for nodes that were in the same set.
And the path reconstruction from your cache is O(|path|), with no lg(n). If you were thinking that is the time to search through the cache each time, you can store the (dest, next) pairs fairly easily to give constant-time lookup.
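A minimal sketch of that constant-time (dest, next) lookup, assuming nodes are numbered 0..n-1 (the names are mine): each node keeps a flat array indexed by destination instead of a tree-based map.

```cpp
#include <vector>

struct Node {
    // nextHop[dest] = neighbour to step toward dest, -1 if unreachable.
    std::vector<int> nextHop;
};

// One O(1) array access per lookup -- no lg(n) tree walk.
inline int nextToward(const Node& from, int dest) {
    return from.nextHop[dest];
}
```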
Turing Machines are better than C++ any day ^_~
Quote: Are we still talking about finding optimum paths, or is any path ok? With just O(n) storage you can put together a constant time path finder, but at least the naive solution I'm thinking of could produce not so hot results.
Ayep, I'm talking about finding optimal paths. :) Although an algorithm with a hard cap on how bad its paths can be (say, no more than 1.5 times as far) that had better than O(n^2) time and O(n^2) pre-computed space requirements would be interesting!
Quote: And the path reconstruction from your cache is O(|path|), with no lg(n). If you were thinking that is the time to search through the cache each time, you can store the (dest, next) pairs fairly easily to give constant-time lookup.
Lookup in a map takes time logarithmic in the number of possible unique entries. Yes, if you have a limit of (say) 2^32 unique entries, you can make that O(lg(2^32)) = O(32) = O(1). :) (That is basically how array lookup works: it is constant, so long as you have no more than 2^32 entries.)
But sure, that is a tangent to the problem at hand. Call it O(1) -- O(lg(n)) grows so slowly that it might as well be.
Quote: Another possibility would be applying LOD to pathfinding. It is a fairly common approach, pathfinding with simplified graphs and then refining level by level. If you combined this with your caching idea, the storage requirements would be less than O(n^2) since each node would only store caches for nodes that were in the same set.
Heuristic improvements -- like LOD type stuff -- that don't guarantee the best path is found are sometimes practical, but not what I'm looking for.
If there were a way to build LOD pathfinding that guaranteed a best path, or provided a strict upper bound on how bad the found path would be, then I'd like to find out about it. :)
Quote: If your map doesn't change, and you have unlimited space, you can obviously precompute every best path. This gives O(1) performance (if my nomenclature is correct). With normal maps, this is completely impractical.
Yes, because pre-computing all paths costs O(n^2) space. On even a million node graph (say, 1000 by 1000), n^2 is 10^12 entries -- on the order of a trillion bytes of data.
Quote: If you want minimum storage, A* is the best you can do. And the worst case for that is O(n^2) as you note.
Naw, I want better than O(n^2) pre-computed storage. A* is basically zip overhead storage-wise (i.e., O(n) data required -- aka, the local pathing costs!).
An exponential LOD-style A* would probably be O(n lg n)-esque storage: but as far as I can tell, they don't provide guarantees about how crappy their paths will be. If there were a LOD A* with a guarantee about how crappy its paths would be, and a cost guarantee that was better than O(n^2), I'd love it. :)
I.e., I've seen a claim that with a "good enough" hint function, you can get A* down to an O(n lg n) time bound. Are there any guaranteed ways to generate good enough hint functions on a decent class of graphs?
Under some circumstances you can use LOD and still be certain of getting optimal paths - that is when you can split the graph in two by breaking one connection between nodes. Each node needs to know which half it's in, so storage cost is one bit per node, per split.
Let's say we have a graph where that's possible and we split it and call those two parts A and B. In that case if your start and destination are both in A then you can ignore B when searching (because if you entered B then you could only come back out the same way you got in). In addition if the start is in A and the end is in B then you must go through that one connection, so you can run two searches in parallel. Each split also reduces the memory requirements for storing all paths somewhat, if you want to do that.
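A hedged sketch of what those per-node split bits might look like (the layout is my own assumption, packing up to 64 splits into one word per node):

```cpp
#include <cstdint>
#include <utility>
#include <vector>

struct SplitInfo {
    std::vector<uint64_t> sideBits;          // sideBits[node]: bit i = side of split i
    std::vector<std::pair<int,int>> bridge;  // bridge[i] = the lone edge crossing split i

    // Splits whose bits differ between start and goal: the path must
    // cross each of their bridge edges, so the search can be decomposed
    // into independent searches between consecutive bridges.
    std::vector<int> mandatoryCrossings(int s, int t) const {
        uint64_t diff = sideBits[s] ^ sideBits[t];
        std::vector<int> splits;
        for (int i = 0; diff != 0; ++i, diff >>= 1)
            if (diff & 1) splits.push_back(i);
        return splits;
    }
};
```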
For a more generic solution that doesn't rely on graph topology I'd pick Iterative Deepening A* (IDA*). With an admissible heuristic you can control the maximum error in your path length easily by just adjusting how much you increase the maximum search depth on each iteration.
I don't know if this comes even close to helping you, but you might try IDA* or Fringe Search. Check this thread (and others) for more details: http://www.gamedev.net/community/forums/topic.asp?topic_id=471712&PageSize=25&WhichPage=1
IDA* essentially does a depth-first search using the same heuristic as A*. It searches to a maximum depth and then repeats the depth-first search to a new depth. (Yes, it repeats the search.) You don't need any storage beyond retracing your path once the end is found. You can do that with zero storage if you structure your nodes properly.
Fringe Search is basically the same as IDA*, but instead of repeating the entire search it stores only the fringe, or frontier, and restarts there at the new depth.
The problem that I've had in the past with A* is that for really large maps it requires a lot of storage space and managing/searching the open/closed lists takes a lot of time. IDA* and Fringe both eliminate the need for open/closed lists and I've seen huge improvements in search times.
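Here's a compact IDA* sketch along those lines (a minimal illustration, not production code; the zero heuristic is a stand-in for a real admissible estimate):

```cpp
#include <algorithm>
#include <limits>
#include <utility>
#include <vector>

using Graph = std::vector<std::vector<std::pair<int,double>>>; // adj list: (neighbour, cost)

const double FOUND = -1.0;
const double INF   = std::numeric_limits<double>::infinity();

// Placeholder heuristic: h = 0 is always admissible but gives plain
// iterative deepening; plug in a real estimate for IDA* proper.
double h(int /*node*/, int /*goal*/) { return 0.0; }

// Depth-first search bounded by f = g + h. Returns FOUND on success,
// otherwise the smallest f that exceeded the bound (the next bound).
double search(const Graph& g, std::vector<int>& path,
              double gCost, double bound, int goal) {
    int node = path.back();
    double f = gCost + h(node, goal);
    if (f > bound) return f;
    if (node == goal) return FOUND;
    double minOver = INF;
    for (auto [nbr, cost] : g[node]) {
        if (std::find(path.begin(), path.end(), nbr) != path.end())
            continue;                       // don't revisit the current path
        path.push_back(nbr);
        double t = search(g, path, gCost + cost, bound, goal);
        if (t == FOUND) return FOUND;
        minOver = std::min(minOver, t);
        path.pop_back();
    }
    return minOver;
}

std::vector<int> idaStar(const Graph& g, int start, int goal) {
    std::vector<int> path{start};
    double bound = h(start, goal);
    while (true) {                          // repeat the DFS with a deeper bound
        double t = search(g, path, 0.0, bound, goal);
        if (t == FOUND) return path;        // path now holds start..goal
        if (t == INF)   return {};          // goal unreachable
        bound = t;
    }
}
```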
-kirk
A simple pathfinding data structure that might be used for small static maps is storing a set for each directed edge A->B, containing every node N for which the optimal path from A to N starts with B.
With 10000 nodes and a square grid, a brute-force implementation would take about 4*10000^2 bits, a rather affordable 50 MB of memory.
But this bunch of sets can be made smaller in several ways (including separating sparse and dense portions, sharing identical portions between sets, run-length encoding and other generic compression, maybe Bloom filters); does anyone know of sophisticated implementations and performance data?
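For the brute-force version, here's a sketch of how those per-edge sets might be built from a precomputed next-hop table (my own illustration; the compression tricks above are left out):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

struct EdgeSets {
    std::vector<std::vector<int>> adj;  // adj[a] = outgoing neighbours of a
    // goesVia[a][e]: bit vector over destinations n whose optimal path
    // from a starts with the e-th outgoing edge of a (packed 64 per word).
    std::vector<std::vector<std::vector<uint64_t>>> goesVia;

    // next[a][dest] = first hop on the optimal a->dest path, -1 if none.
    EdgeSets(std::vector<std::vector<int>> adjacency,
             const std::vector<std::vector<int>>& next)
        : adj(std::move(adjacency)) {
        int n = (int)adj.size(), words = (n + 63) / 64;
        goesVia.resize(n);
        for (int a = 0; a < n; ++a) {
            goesVia[a].assign(adj[a].size(), std::vector<uint64_t>(words, 0));
            for (int dest = 0; dest < n; ++dest)
                for (std::size_t e = 0; e < adj[a].size(); ++e)
                    if (next[a][dest] == adj[a][e])
                        goesVia[a][e][dest / 64] |= uint64_t(1) << (dest % 64);
        }
    }

    // "Which way do I leave a for dest?" -- O(out-degree) bit tests.
    int firstHop(int a, int dest) const {
        for (std::size_t e = 0; e < adj[a].size(); ++e)
            if (goesVia[a][e][dest / 64] >> (dest % 64) & 1)
                return adj[a][e];
        return -1;
    }
};
```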
Omae Wa Mou Shindeiru