
Caching paths

Started by December 11, 2015 06:26 PM
2 comments, last by Alberth 8 years, 11 months ago

Hi there, I am developing a cache system for my A* algorithm and I would like to find out a little more about the state of the art at the moment.

The problem is that I couldn't really find much on caching in academic articles. Most results either say "you may cache it" or describe an ad-hoc implementation that is either an ordered list or a hash table.

I am wondering if anyone knows of an article, or can point me to one, about path caching.

Thanks in advance.

Currently working on a scene editor for ORX (http://orx-project.org), using kivy (http://kivy.org).

I don't know any articles about caching A* path finder results.

(I assume you are working in the case of optimal paths.)

Pondering it, the biggest issue is probably how to decide when your path-finding results have become invalid. I don't see an easy answer to that question.

Knowing when your result is still valid is probably easier. I would define "valid" as "if you run the computation again, you will get the same answer" (where "same" is still complicated).

In free open space, I would expect that a computed path stays valid if the reachability of all nodes that you explored in the search is the same (ie it stays free open space), and all traversal costs from one node to the next in the set of explored nodes are the same (ie it does not suddenly become more or less attractive to reach a node).

The reasoning here is that in your path search, you explore a number of nodes while looking for the shortest path. If you do that computation again, and reachability and traversal costs of all nodes that you explore have not changed, you should end up with the exact same set of nodes, and the same path. "Same" is a bit tricky here, you can have several distinct routes of equal length, and due to the order of the nodes in the data structures, and order of considering traversals, I would expect you could end up with a different route of equal length. (Shorter cannot happen I think, since the path you got the first time was already the shortest given the costs, and longer cannot happen since the path you derived the first time still exists, and would then be better.)

So in free open space, if you store the set of explored nodes (ie all nodes in the open and closed lists) along with the path, and the costs of reaching each of these nodes do not change, I think your path is still valid (modulo alternative routes of equal length).
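As a rough sketch of the idea above: a cache entry stores the path together with a snapshot of the costs of every explored node, and the path stays trusted while those costs and reachability are unchanged. All names here (`cost`, `blocked`, the entry layout) are illustrative assumptions, not from any particular library.

```python
# Hypothetical path-cache entry for the free-open-space case.
# cost(node) returns the current cost of entering a node;
# blocked(node) reports whether it is currently unreachable.

def cache_path(path, explored, cost):
    """Store the path plus a cost snapshot of all open/closed-list nodes."""
    return {"path": path, "costs": {node: cost(node) for node in explored}}

def still_valid(entry, cost, blocked):
    """The cached path stays valid while every explored node keeps the same
    reachability and traversal cost (modulo equal-length alternatives)."""
    return all(not blocked(node) and cost(node) == snap_cost
               for node, snap_cost in entry["costs"].items())
```

A change to any explored node's cost or reachability then simply evicts the entry and triggers a fresh search.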

If you add blockages to the equation (ie some nodes cannot be entered), it gets more complicated. A new blockage on the current path obviously makes the path invalid.

A new blockage in the set of explored nodes but not on the path seems harmless at first sight. The reasoning is that the node with the new blockage is no longer reachable, but that doesn't affect the path, since the blockage was not on the path. In addition, nodes reached from the now-blocked node cannot be reached from that node any more. Maybe they can be reached through some other node, but that path is not going to be shorter (or the original path would not have gone through the now-blocked node). Thus if anything happens to the explored nodes off the path, their distances get bigger rather than smaller, and your set of explored nodes for the path may shrink. This makes sense: adding new blockages generally means you're blocking off some options.

Adding new blockages outside the set explored nodes does not give new (shorter) paths.

Removing blockages is quite complicated. Note that blocked nodes are never in the open or closed lists, since you cannot reach them at all. If the removed blockage is a neighbour of a closed node, the blockage prevented all paths through that now-unblocked node. Some of those newly available paths to the destination may be shorter than the path you currently have. You could check that, I think, by extending the path from the closed neighbours to the now-unblocked node. If the sum of the real distance to the now-unblocked node and its estimate is less than the current length of the path to the destination, a path through the now-unblocked node may indeed be shorter (ie your current path is now invalid). If the sum is equal, a path may exist, but it is not shorter (since your estimate is never above the real distance). It may have equal length though, which leads you back to the "same path" discussion above.
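That check can be sketched directly: for each closed neighbour of the newly unblocked node, compare the real distance to the neighbour plus the step cost plus the heuristic estimate against the cached path length. `g_cost`, `step_cost` and `heuristic` are assumed to be available from the original search; all names are illustrative.

```python
# Sketch: can a path through the newly unblocked node beat the cached path?
# With an admissible heuristic (never above the real distance), a bound that
# is >= the cached cost can at best tie, so the cached path stays optimal.

def may_be_shorter(closed_neighbours, g_cost, step_cost, heuristic,
                   unblocked, cached_path_cost):
    for n in closed_neighbours:
        # real distance to neighbour + one step + optimistic remainder
        bound = g_cost[n] + step_cost(n, unblocked) + heuristic(unblocked)
        if bound < cached_path_cost:
            return True  # cached path may no longer be optimal
    return False
```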

I think removing a blockage from a node whose only explored neighbours are in the open list is harmless. Nodes in the open list have costs equal to or higher than the real path; you add some travel cost from the neighbour to the now-unblocked node, so its total cost is never going to be lower than the path you have now.

If you have blockages, I think a reasonable approximation could be:

- New blockage on the shortest path breaks the path

- New blockage elsewhere should not cause any change to the shortest path

- Removing a blockage from a node neighbouring the explored nodes invalidates the path. (The discussion above was more precise, but it requires knowing the real distances from the source to all explored nodes, which is probably too costly.)
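The three rules above can be condensed into one small invalidation check. The cache is assumed to store the path, the explored set, and the set of nodes neighbouring the explored set; everything here is a hypothetical sketch, not a definitive implementation.

```python
# Approximate invalidation rules for a cached A* path.
# change is ("blocked", node) or ("unblocked", node).

def path_invalidated(change, path, neighbours_of_explored):
    kind, node = change
    if kind == "blocked":
        # Rule 1: a new blockage on the path breaks it.
        # Rule 2: a new blockage elsewhere leaves the shortest path alone.
        return node in path
    if kind == "unblocked":
        # Rule 3: conservatively invalidate if the freed node borders the
        # explored set (real distances would be needed for a tighter test).
        return node in neighbours_of_explored
    return False
```

This errs on the side of recomputing: rule 3 may discard paths that are still optimal, but it never keeps a stale one.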

I didn't consider changes in costs for traveling to a neighbour, but it seems to me you could perhaps apply similar reasoning as with blockages.


Thank you for your answer, Alberth.

I actually read an article that uses a strategy very similar to what you describe (reusing already-computed parts of a path), but my focus is on reusing the paths themselves. Material on this is hard to find because, as you stated, caches are very limited: they need to be cleared on changes, and most algorithms work under a very limited set of rules (there seems to be no answer that fits all constraints).


The set of nodes in the open and closed lists represents the set of relevant nodes that can affect the optimal path. I don't think you can get away with not storing them in some way and still have a cache you can trust. (Unless you give up on optimal paths; then any connection is good.)

Note that "store them" can be as simple as a bounding rectangle around the set. It's an over-approximation (ie you may throw away paths too early) but it will work.
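The bounding-rectangle idea is cheap to sketch: instead of keeping the full explored set, keep only its bounding box on a 2D grid, and evict the cached path whenever a change falls inside the box. Coordinates and names are illustrative assumptions.

```python
# Over-approximate the explored set by its bounding rectangle.

def bounding_box(explored):
    """Smallest axis-aligned rectangle covering the explored nodes."""
    xs = [x for x, _ in explored]
    ys = [y for _, y in explored]
    return (min(xs), min(ys), max(xs), max(ys))

def change_touches_cache(box, node):
    """Any change inside the box evicts the path: this may throw away paths
    that were still valid, but it never keeps a stale one."""
    x0, y0, x1, y1 = box
    x, y = node
    return x0 <= x <= x1 and y0 <= y <= y1
```

Storing four numbers per cached path instead of the whole node set trades some unnecessary recomputation for a much smaller cache.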

