Polygonal Pathfinding with Incremental Funnel - how!?
Hey there,
This is a follow-up to the post here.
I've recently integrated navigation meshes into my code base, and have written my pathfinding.
It uses midpoint edge-to-edge costs, then passes the resulting channels/tunnels to the funnel algorithm to generate the straightened paths.
All of this works, except that sometimes, due to the size of the polygons, the pathfinder doesn't make the correct choice.
The post I linked to talks about using the funnel algorithm during the path finding itself to generate better choices. While I conceptually understand this, I don't really know how to implement it.
Right now, when the best node is taken off the open list, all of its neighbours are added to the open list, with their total cost being the sum of the current node's cost and the distance from the previous edge's midpoint to this edge's midpoint.
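For reference, the expansion scheme described in that paragraph can be sketched as follows - a minimal prototype, where the names and the edge/adjacency representation are hypothetical, not taken from the original code:

```python
import heapq
import math

def astar_midpoint(edges, neighbours, start_edge, goal_edge):
    """A* over navmesh portal edges.

    edges:      edge id -> midpoint (x, y)   (hypothetical representation)
    neighbours: edge id -> list of adjacent edge ids
    Cost of entering a neighbour is the parent's cost plus the
    midpoint-to-midpoint distance, as described in the post.
    """
    def h(e):
        # Straight-line heuristic between midpoints.
        return math.dist(edges[e], edges[goal_edge])

    open_list = [(h(start_edge), start_edge)]
    g = {start_edge: 0.0}
    parent = {start_edge: None}
    while open_list:
        _, cur = heapq.heappop(open_list)
        if cur == goal_edge:
            # Reconstruct the channel by walking parent pointers.
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        for nxt in neighbours[cur]:
            cost = g[cur] + math.dist(edges[cur], edges[nxt])
            if nxt not in g or cost < g[nxt]:
                # Cheaper route found: record it and (re)push the node.
                g[nxt] = cost
                parent[nxt] = cur
                heapq.heappush(open_list, (cost + h(nxt), nxt))
    return None
```

The resulting list of edges is the channel that would then be handed to the funnel pass.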
Where do I put this funnel operation in here? My assumption has always been that the current best node's children can never have a better 'total cost' than their parent, but it's entirely possible that this breaks down when the cost is calculated as a funnel operation from the start of the path to the node being evaluated.
Can anyone shed some light on this?
S>
I've not implemented the kind of solution you describe, i.e. using the funnel algorithm within A*. The more "traditional" approach would be to try to generate a better navmesh with polygon sizes that suit your pathfinder. This will not only be much faster, but you won't have to solve uncommon problems like this one...
Join us in Vienna for the nucl.ai Conference 2015, on July 20-22... Don't miss it!
Quote: Original post by alexjc
I've not implemented the kind of solution you describe, i.e. using the funnel algorithm within A*. The more "traditional" approach would be to try to generate a better navmesh with polygon sizes that suit your pathfinder. This will not only be much faster, but you won't have to solve uncommon problems like this one...
Certainly trying to find better polygon sizes would be one solution, but I'm not sure it would ever ultimately solve the problem.
At the moment the navigation mesh generation is somewhat out of my hands, so I was hoping to find a solution on the pathfinding side, instead.
OK. There are two quick ways to solve this:
1. Recalculate your funnel from scratch every time you need to calculate the heuristic. This is going to be expensive, but your A* data structure will remain lightweight.
2. Store the information about the current funnel in each A* node on the stack, and add to it incrementally as you expand new nodes. This will be more efficient but take up loads more memory.
The ideal solution (once it works) is to find a compromise: what's the minimal amount of information you need to store per A* node that still allows an efficient representation and a fast heuristic calculation?
I'd advise trying the super slow version first, to see if it gives you good results.
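For anyone trying option 1, here is a minimal sketch of a from-scratch funnel pass used as a cost function. It follows the common "simple stupid funnel" formulation; the portal list format (left/right point pairs, with the start and goal duplicated at the ends) is an assumption, not the poster's actual data layout:

```python
import math

def triarea2(a, b, c):
    # Twice the signed triangle area; with this sign convention the value
    # is negative when c lies to the left of the line a -> b.
    return (c[0] - a[0]) * (b[1] - a[1]) - (b[0] - a[0]) * (c[1] - a[1])

def string_pull(portals):
    """Simple stupid funnel over a channel of portals.

    portals: list of (left, right) point pairs; the first entry must be
    (start, start) and the last (goal, goal).
    Returns the string-pulled point path.
    """
    pts = [portals[0][0]]
    apex = left = right = portals[0][0]
    apex_i = left_i = right_i = 0
    i = 1
    while i < len(portals):
        pl, pr = portals[i]
        # Tighten the right side of the funnel.
        if triarea2(apex, right, pr) <= 0.0:
            if apex == right or triarea2(apex, left, pr) > 0.0:
                right, right_i = pr, i
            else:
                # Right crossed over left: the left point becomes the apex.
                pts.append(left)
                apex, apex_i = left, left_i
                left = right = apex
                left_i = right_i = apex_i
                i = apex_i + 1
                continue
        # Tighten the left side of the funnel.
        if triarea2(apex, left, pl) >= 0.0:
            if apex == left or triarea2(apex, right, pl) < 0.0:
                left, left_i = pl, i
            else:
                pts.append(right)
                apex, apex_i = right, right_i
                left = right = apex
                left_i = right_i = apex_i
                i = apex_i + 1
                continue
        i += 1
    goal = portals[-1][0]
    if pts[-1] != goal:
        pts.append(goal)
    return pts

def funnel_cost(portals):
    # The 'recompute from scratch' cost: length of the string-pulled path.
    pts = string_pull(portals)
    return sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))
```

Calling funnel_cost on the channel from the start to the node being expanded gives the exact cost-so-far, at the price of an O(channel length) pass per expansion.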
Quote: Original post by alexjc
1. Recalculate your funnel from scratch every time you need to calculate the heuristic. This is going to be expensive, but your A* data structure will remain lightweight.
2. Store the information about the current funnel in each A* node on the stack, and add to it incrementally as you expand new nodes. This will be more efficient but take up loads more memory.
I'm confused - in both of these cases, don't I cause my cost-per-node to change during the calculation of the cost (I assume you meant cost, not heuristic)? And doesn't this change cause problems in the cost of the other child nodes?
Is the node cost calculated by the funnel path's intersection with each node edge?
Yes, I meant cost and not heuristic. Your cost is going to change based on the way you arrive at a node, but that shouldn't be a problem. Some A* implementations can check whether a node is already in the open list and update its value. I wouldn't worry too much about the child nodes also changing; A* will expand your nodes in a sensible order, so the result should look OK even if it is theoretically not accurate. You'll have to try it for your problem.
Alternatively, you can treat the whole search space as one large tree (instead of a graph) and have each position+path as a unique node within A* -- but it's much slower. We do this for animation/motion planning.
Anyway, just try it out! If you have everything in place it won't take you long to experiment.
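A minimal sketch of that tree-shaped variant, where every (position, path-so-far) pair is its own search node so costs never need to be revisited. The cost_fn interface, which scores a whole path (e.g. by its funnel length), is an assumption for illustration:

```python
import heapq

def tree_search(neighbours, cost_fn, start, is_goal, max_expansions=100000):
    """Best-first search over a tree of paths rather than a graph of nodes.

    neighbours: node -> list of adjacent nodes
    cost_fn:    scores an entire path (hypothetical; e.g. funnel length)
    There is no closed list: each path on the heap is a unique search node,
    which is exactly why this is much slower than plain A*.
    """
    open_list = [(0.0, [start])]
    expansions = 0
    while open_list and expansions < max_expansions:
        _, path = heapq.heappop(open_list)
        expansions += 1
        if is_goal(path[-1]):
            return path
        for nxt in neighbours[path[-1]]:
            if nxt in path:
                # Skip trivial cycles within this one branch.
                continue
            new_path = path + [nxt]
            heapq.heappush(open_list, (cost_fn(new_path), new_path))
    return None
```

Because the cost is attached to the whole path, a funnel-based cost never has to be patched up when a node is reached a second way; the second way is simply a different tree node.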
Quote: Original post by alexjc
Yes, I meant cost and not heuristic. Your cost is going to change based on the way you arrive at a node, but that shouldn't be a problem. Some A* implementations can check whether a node is already in the open list and update its value. I wouldn't worry too much about the child nodes also changing; A* will expand your nodes in a sensible order, so the result should look OK even if it is theoretically not accurate. You'll have to try it for your problem.
Alternatively, you can treat the whole search space as one large tree (instead of a graph) and have each position+path as a unique node within A* -- but it's much slower. We do this for animation/motion planning.
Anyway, just try it out! If you have everything in place it won't take you long to experiment.
Thanks for the push. I realized that some optimizations I had made to my A* were causing me the grief:
I assumed that, because circular loops always yield more expensive paths, I never had to check whether a node being added to the open list was already on my path - visiting it twice would always be more expensive. But using the string-pulled distance as my cost, instead of the sum of my parent's cost and my local 'parent-to-me' cost, sometimes made costs artificially smaller, producing circular loops and infinite recursion.
Now, every time I consider a new node for addition to the open list, I confirm it isn't among that node's ancestors before proceeding. I'm not sure this is an appropriate addition, but it was necessary to handle the cases where a child node's 'total cost' was lower than its parent's, because its orientation relative to a vertex produced a lower string-pulled distance.
Hope that makes sense.
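For reference, the ancestor check described above is only a few lines when each search node carries a parent pointer; the dict-based representation here is a hypothetical stand-in for whatever the real node structure is:

```python
def on_ancestor_path(parent, node, candidate):
    """Walk node's parent chain and report whether candidate appears on it.

    parent: node -> parent node, with None at the root of the search.
    """
    while node is not None:
        if node == candidate:
            return True
        node = parent[node]
    return False
```

The check is O(path length) per candidate, so it adds cost to every expansion, but it guards against the looping described above.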
update: actually, not so fast ... it seems this solution doesn't give me the best path in all cases - in fact, where it solves one set of problems, it introduces another. As a test, my strategy was to recalculate a string-pulled path for every new node added to the open list, and set the 'cost' to the sum of the Euclidean distances between successive points of that path. For some cases this works, but for others it seems to never investigate the best-path case. ... more puzzling to be done.
double-actually: it looks like some bugs crept into my code along with this change, so things might be working by the end of the work week.
I'll do some profiling and see how it performs on the consoles.
[Edited by - Sphet on May 7, 2010 8:27:45 PM]
I thought I would update this in case anyone searches for the topic.
While I did end up getting the funnel algorithm to run on incomplete costs, I achieved almost the same results by subdividing long edges into three nodes: both end points, and a midpoint. It increased the number of nodes potentially searched but was ultimately faster than performing the funnel at every step.
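The subdivision might look something like this sketch; the portal representation and the length threshold are assumptions, not the poster's actual values:

```python
import math

def portal_nodes(a, b, max_len=4.0):
    """Return the A* node positions generated for one portal edge.

    Long edges (longer than max_len) contribute three nodes: both end
    points plus the midpoint; short edges keep just the midpoint.
    """
    mid = ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)
    if math.dist(a, b) > max_len:
        return [a, mid, b]
    return [mid]
```

More candidate crossing points per wide edge means the plain midpoint-cost search can already route close to where the funnel would have pulled the string, without running a funnel inside the search.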