
Closest A* Pathfinding?

Started by June 03, 2006 07:55 AM
Quote: Original post by Christer Ericson
Your comment assumes the A* evaluation function has been set up to return a shortest path.


I made no such assumption, and my comment assumes only what the OP posted: that he wanted the closest path if there was no path returned. The choice of cost function is his and will obviously dictate the particular nodes investigated by the algorithm. The principle I discussed was non-specific to the cost function. I did not say that the best node in the open list when the search is halted will be the closest node to the goal. I said 'treat this as the new destination'. Please take some time to think about what this statement means in terms of cost function contours and 'closeness' to the goal.


Quote: A* is more general than that, and it is perfectly plausible to create an evaluation function that includes a "path smoothly curving" (or whatever) criterion in the evaluation function. Easier said than done to come up with, yes, but perfectly doable nevertheless.


Actually it's fairly trivial to come up with such path constraints: use a spline tree approach coupled with A*. Computationally more expensive, but it gets the job done.
One way would be to make non-passable squares on your map have a finite (but large) cost. So instead of attempting to navigate this graph (# = impassable):
B1111111111111111####111#11E

You instead navigate this graph:
B11111111111111119999111911E

Then, when moving the unit, simply have it stop when it reaches the non-passable square. This also has the side effect of occasionally producing invalid paths if your route is convoluted enough. Frankly, though, this can be a desirable side effect, since 1) it more closely mimics the thinking of an RTS unit without a bird's-eye view, and 2) it places a more reasonable upper bound on the computation time for pathfinding.
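
A minimal sketch of that scheme, assuming a simple grid with a hypothetical isBlocked() lookup; instead of rejecting blocked tiles outright, the step-cost function just returns a large finite value:

#include <vector>

// Sketch only: Grid, isBlocked and BLOCKED_COST are made-up names for
// illustration, not from any particular engine.
const int BLOCKED_COST = 9999;  // large but finite, like the 9s above

struct Grid {
    int width, height;
    std::vector<bool> blocked;
    bool isBlocked(int x, int y) const { return blocked[y * width + x]; }
};

// Cost of stepping onto tile (x, y). A* never sees an impassable tile,
// only a very expensive one, so the search always returns some path.
int stepCost(const Grid& g, int x, int y)
{
    return g.isBlocked(x, y) ? BLOCKED_COST : 1;
}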
Quote: Original post by Timkin
I made no such assumption and my comment assumes only what the OP posted; that he wanted the closest path if there was no path returned.

Sorry, you're right, I just glanced at the OP's post and completely misread what was asked for!
With the impassable cost set too high, the map will be exhausted anyway. But the lower the impassable cost, the less the unit will try to navigate around the obstacle. It seems like a nice way to trade off quality for speed... if that is a trade one is willing to make.

An odd effect is that more effort will be spent navigating a thick obstacle than a thin one.

An interesting case. Take 9 to be shorthand for something larger (> 13):
s = start, g = goal, f = final location
f 9 1 1 1 1 1
1 9 9 9 9 9 1
1 1 9 9 9 9 1
1 1 1 s 9 9 g

With 9 > 13, going around the wall (fourteen 1s plus the single 9 at the top) is cheaper than punching straight through it (two 9s plus a 1), so the unit walks all the way around and halts at f, ending up farther from g than where it started.


Quote: it places a more reasonable upper bound on the computation time for pathfinding.

I think you can come up with examples where it costs more to do it that way, since the blocked tiles are now actually generated and expanded. Perhaps the expected behavior will be better, but not the upper bound. Unless you would also use an upper bound on f? Then it's fuzzy...
"That''s to do with the day I finally realized the world had gone totally mad and built the Asylum to put it in, poor thing, and hoped it would get better."-Wonko the SaneSo Long, and Thanks for All the Fish
Well, if the A* open list is empty, that means it can't find the path, so would it not be OK to just go to the last node in your closed list and use that as your finishing point? Hm... (I'm pretty new at pathfinding.)
After a failure, search your closed list for the node with the best heuristic. Use that as your new destination. You don't even need to run the pathfinder twice.
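
A minimal sketch of that idea, assuming each node caches its heuristic value h when it is generated (Node and closestClosedNode are hypothetical names):

#include <vector>

struct Node {
    float g, h;    // cost-so-far and heuristic-to-goal
    Node* parent;  // set when the node was generated
};

// After a failed search, pick the closed node with the lowest cached
// heuristic and treat it as the new destination.
Node* closestClosedNode(const std::vector<Node*>& closedList)
{
    Node* best = 0;
    for (std::size_t i = 0; i < closedList.size(); ++i)
        if (!best || closedList[i]->h < best->h)
            best = closedList[i];
    return best;  // follow its parent pointers to recover the path
}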
Just a quick note to the previous two posters: if you use a node out of the CLOSED list, you're losing a small amount of information, namely what you generated when you found that node's successors, which are all sitting in the OPEN list at the end of the search (not having been expanded and subsequently placed into the CLOSED list). Hence, choosing the best unexplored node (the best node in the OPEN list), as I suggested above, is a slightly better option, because we know it is closer to the goal in terms of cost (but not necessarily distance).

If you require the closest node in distance to the goal, then you would need both a heuristic that is a one-to-one function of distance and a check of every node searched so far for the one with the lowest heuristic cost.

Cheers,

Timkin
If your A* failed to find the goal, your open list would be empty. That is generally how you would detect a failure. The downside is that you would have visited every traversable node in the entire state space, which isn't always the most efficient way to fail. So, like Timkin suggested, there needs to be an alternative method of failure (there are loads of ways to do this). Regardless, the check for the alternative method of failure should be done before the node is added to the open list. Hence, the failure is still detected by an empty open list.

Timkin, I am still not understanding how you can fail the search and still have an open list. Can you explain?

In most implementations, each visited node would have a pointer to the node prior to it in the best path. So you could essentially find the best path back to your starting point from ANY node in the final closed list.
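
For reference, that walk back along the parent pointers might look like this (reusing a hypothetical Node with a parent field):

#include <algorithm>
#include <vector>

struct Node { Node* parent; /* position, g, h, ... */ };

// Rebuild the path from the start to any visited node by walking the
// parent pointers backwards, then reversing.
std::vector<Node*> pathTo(Node* node)
{
    std::vector<Node*> path;
    for (; node != 0; node = node->parent)   // the start node's parent is null
        path.push_back(node);
    std::reverse(path.begin(), path.end());  // now ordered start -> node
    return path;
}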
Quote: Original post by dgeuss
If your A* failed to find the goal, your open list would be empty.
... That is generally how you would detect a failure.


That is the only failure circumstance built into the basic algorithm. As I said in my earlier post, allowing the search to exhaust the set of possible nodes is extremely wasteful. There is a difference between A* the algorithm and a practical implementation of that algorithm. If you're only ever implementing the basic algorithm, you're robbing yourself of clock cycles that could be spent elsewhere by ignoring optimisations and tweaks tuned to your domain.

Quote: Regardless, the check for the alternative method of failure should be done before the node is added to the open list. Hence, the failure is still detected by an empty open list.


Again, you're still assuming that failure exists when there are no nodes in the open list. If you use a scheme similar to the one I suggested earlier, you can detect a soft failure well before you have exhausted all possible paths. Furthermore, this test should be done any time you're about to start a new iteration of the algorithm (about to pull the 'best' node from the open list). If the cost of this 'best' node is greater than your threshold, return a failure. The 'best' node in the open list is now your best candidate (by cost) as a destination node that is close to the goal. Why trace back to a branch node up the tree? There is, as far as I can see, no benefit in doing this.
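
A minimal sketch of that test at the top of the main loop, assuming an open list ordered by f = g + h and a domain-tuned threshold (all names hypothetical, successor bookkeeping omitted):

#include <queue>
#include <vector>

struct Node { float g, h; bool goal; std::vector<Node*> succ; };

struct ByF {  // lowest f = g + h comes out of the queue first
    bool operator()(const Node* a, const Node* b) const
    { return a->g + a->h > b->g + b->h; }
};

typedef std::priority_queue<Node*, std::vector<Node*>, ByF> OpenList;

// If even the best open node already exceeds the threshold, stop and
// return it as the candidate destination closest (by cost) to the goal.
// Returning null means the whole space was exhausted (hard failure).
Node* search(OpenList& open, float failThreshold)
{
    while (!open.empty()) {
        Node* best = open.top();
        if (best->g + best->h > failThreshold)
            return best;              // soft failure, as described above
        open.pop();
        if (best->goal)
            return best;              // normal success
        for (std::size_t i = 0; i < best->succ.size(); ++i)
            open.push(best->succ[i]); // expand (g/h updates omitted here)
    }
    return 0;                         // hard failure: open list exhausted
}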

Obviously this scheme (and all others I can think of) fails under certain circumstances (like a finite-length wall existing in front of the goal for which the cost to go around is greater than your failure threshold... and your heuristic does not account for this)... but then, if you're smart about your implementation, you'll do some analysis of the domain first to find a good set of parameters (e.g., compute the path cost distribution and choose your threshold from this).

Cheers,

Timkin
I think I understand what you are suggesting better now. You are saying to check for the fail condition as the node is popped off the open list. What I usually do is make this sort of check before the node is added to the open list. It has the same result, so either method is fine I am sure, except the method you are suggesting may have a higher memory requirement as the open list grows.

The only problem is that the "best" node may not even be on the open list anymore. It may have been closed earlier in the search. We are looking for the node with the best (lowest) heuristic. This is different from the heuristic + travel-cost used to order the open list. We may have gotten extremely close to the destination earlier in the search and closed the node, then failed later in the search as the algorithm is checking for a path around the barrier (which could be very far away from the destination).

To prevent having to check all the closed nodes for the one with the best heuristic, it would be nice to keep a "best-node-so-far" pointer based on the heuristic result. It could be checked and updated when the heuristic is calculated, before the node is added to the open list. In a successful search this would of course become the destination, but in a failed search it would become the closest node to the destination that was found before the search failed.
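A minimal sketch of that bookkeeping, updated at the moment the heuristic is computed, just before the node goes onto the open list (names hypothetical):

struct Node { float g, h; };

// Keep a running pointer to the node with the lowest heuristic seen so
// far. On a successful search this ends at the goal; on a failed one it
// is the generated node closest (by heuristic) to the destination.
struct BestSoFar {
    Node* node;
    BestSoFar() : node(0) {}
    void consider(Node* n)  // call right after computing n->h
    {
        if (node == 0 || n->h < node->h)
            node = n;
    }
};

One comparison per generated node replaces a full scan of the closed list after a failure.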

Having an alternative method of failure is actually a completely different subject altogether.

This topic is closed to new replies.
