A* for multiple targets
Hello all,
I'm interested in using an A* planner to choose between different targets in a 2D grid world, with a distance-based heuristic:
1) My first hypothesis is that A* works well for 2 or more targets if
h(u,G) = min( distance(u,gi) ) for gi in G
I haven't run a lot of tests, but it seems to work...
Does that sound right to you?
2) My second hypothesis: for the 2 goals I defined a preference (a float), and I tried to put it in the heuristic in order to drive the state exploration toward the most preferred goal...
h(u,G) = min( distance(u,gi) / gi.preference ) for gi in G
but it doesn't work as I expected :(
It seems the preference weighting shrinks my heuristic too much: the higher the preference, the closer h gets to 0, and the more A* degenerates into Dijkstra...
Does anyone have an idea what to do? Is the only way to select between my two weighted targets an a-priori selection before running A*, or can I use A* itself to make this selection?
Quote: Original post by mp3butcher
1) My first hypothesis is that A* works well for 2 or more targets if
h(u,G) = min( distance(u,gi) ) for gi in G
I haven't run a lot of tests, but it seems to work...
Does that sound right to you?
Yes. Mathematically, the distance to a set of points is precisely min( distance(u,gi) ) = distance(u,G).
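For what it's worth, a minimal sketch of that multi-goal heuristic on a 4-connected grid, assuming Manhattan distance as the per-goal estimate (the function names are mine, not from this thread):

# Minimal sketch of hypothesis 1 on a 4-connected grid: Manhattan distance
# is the usual admissible per-goal estimate there.

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def multi_goal_h(u, goals):
    # Taking the minimum over all goals keeps the heuristic admissible:
    # it never overestimates the cost of reaching the nearest goal.
    return min(manhattan(u, g) for g in goals)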
Quote: Original post by mp3butcher
2) My second hypothesis: for the 2 goals I defined a preference (a float), and I tried to put it in the heuristic in order to drive the state exploration toward the most preferred goal...
h(u,G) = min( distance(u,gi) / gi.preference ) for gi in G
You can't just divide like that. The heuristic has to stay admissible, i.e. it must never exceed the true remaining distance, and dividing by an arbitrary preference does not guarantee that; otherwise A* breaks. For example, with gi.preference = 2 the heuristic halves, which merely weakens A* toward Dijkstra; with gi.preference = 0.5 it doubles the distance and overestimates, so A* can return a suboptimal path.
You could use a barycentric weighting instead:
h(u,G) = sum(distance(u,gi) * pref(gi)) / sum(pref(gi))
However, this doesn't solve your problem, since A* will still find the shortest path (which is independent of the heuristic) and so there will be no actual selection. What you can do is instead alter your graph so that you have only one new target, but every old target is connected to the new target with an edge; make the weight of that edge larger for low-preference nodes and smaller for high-preference nodes.
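In case it helps, here is a minimal sketch of that graph alteration, assuming the graph is a dict mapping each node to a list of (neighbor, edge_cost) pairs; VIRTUAL_GOAL and the alpha scale factor are illustrative names, not anything from this thread:

# Sketch of the graph alteration described above: connect every real target
# to one new virtual goal, with cheaper edges for more-preferred targets.
# The graph is a dict: node -> list of (neighbor, edge_cost) pairs.

VIRTUAL_GOAL = "virtual-goal"  # sentinel node; the name is illustrative

def add_virtual_goal(graph, targets, preferences, alpha=10.0):
    # targets: list of nodes; preferences: matching list of floats > 0.
    # alpha scales how strongly preference biases the selection.
    graph.setdefault(VIRTUAL_GOAL, [])  # new target node, no outgoing edges
    for target, pref in zip(targets, preferences):
        # Low preference -> expensive edge, high preference -> cheap edge,
        # so the shortest path to VIRTUAL_GOAL favours the preferred target.
        graph.setdefault(target, []).append((VIRTUAL_GOAL, alpha / pref))
    return graph

Run A* from the start toward VIRTUAL_GOAL; the target actually chosen is the node just before VIRTUAL_GOAL on the returned path, and the min-distance heuristic from hypothesis 1 stays admissible as long as the virtual edge costs are never negative.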
Thanks for this fast answer. It seems to work properly.
Do you have a literature reference for this type of problem?
I haven't found any by myself :(
I don't know of any. Try to see the A* algorithm in terms of searching through a graph; that could help you a bit, I believe.
Quote: Original post by mp3butcher
2) My second hypothesis: for the 2 goals I defined a preference (a float), and I tried to put it in the heuristic in order to drive the state exploration toward the most preferred goal...
h(u,G) = min( distance(u,gi) / gi.preference ) for gi in G
but it doesn't work as I expected :(
It seems the preference weighting shrinks my heuristic too much: the higher the preference, the closer h gets to 0, and the more A* degenerates into Dijkstra...
About this 2nd question..
As mentioned before, changing the heuristic cost function won't be a good approach. I think the best solution would be to modify the algorithm's stop condition. You can stop the search after finding the most preferred node, or when you have found a lesser solution but the overall cost of the nodes currently being searched is too high (so even if a path to the best node were found, it would be too costly). The limit value for stopping the search should depend on the overall cost of reaching the already-found, less-preferred node and on that node's priority.
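A rough sketch of that modified stop condition, under the assumption that each goal carries a priority >= 1 and that the give-up threshold is cost(found) * priority(found); the names and the exact threshold are illustrative guesses, not a definitive rule:

import heapq

def a_star_preferred(start, goals, priority, neighbors, h):
    # goals: set of goal nodes; priority: dict goal -> float >= 1.
    best_priority = max(priority[g] for g in goals)
    open_heap = [(h(start), 0.0, start)]
    g_cost = {start: 0.0}
    found = None  # (goal, cost) of the lesser goal reached so far
    while open_heap:
        f, g, u = heapq.heappop(open_heap)
        if g > g_cost.get(u, float("inf")):
            continue  # stale heap entry
        if u in goals:
            if priority[u] == best_priority:
                return u, g  # most preferred goal reached: stop at once
            if found is None:
                found = (u, g)
        # Give-up test: the heap is ordered by f, so if even the cheapest
        # open node exceeds the found goal's cost scaled by its priority,
        # a better goal would be too costly to reach; settle for it.
        if found is not None and f > found[1] * priority[found[0]]:
            return found
        for v, w in neighbors(u):
            ng = g + w
            if ng < g_cost.get(v, float("inf")):
                g_cost[v] = ng
                heapq.heappush(open_heap, (ng + h(v), ng, v))
    return found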
If you already have a single-target A* implementation, it might be easier to implement the search backwards: Start the algorithm by putting all the targets in the open list, and use the agent's position as the goal. In order to make a target less desirable, pretend that it already has some cost when you put it in the open list.
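A minimal sketch of that backward, multi-source variant, assuming an undirected graph (edge costs identical in both directions) and a penalty dict holding each target's pretend initial cost; the bookkeeping names are mine:

import heapq

# Seed the open list with every target, giving less-desirable targets a
# head-start cost penalty, and search toward the agent's position.

def backward_multi_target(agent_pos, targets, penalty, neighbors, h):
    # penalty: dict node -> pretend initial cost; h(u) estimates the
    # distance from u to agent_pos.
    open_heap = []
    g_cost = {}
    seeded_by = {}  # which target each explored node was reached from
    for t in targets:
        g_cost[t] = penalty.get(t, 0.0)  # pretend cost biases the choice
        seeded_by[t] = t
        heapq.heappush(open_heap, (g_cost[t] + h(t), g_cost[t], t))
    while open_heap:
        f, g, u = heapq.heappop(open_heap)
        if g > g_cost.get(u, float("inf")):
            continue  # stale heap entry
        if u == agent_pos:
            return seeded_by[u], g  # the target the agent should pick
        for v, w in neighbors(u):
            ng = g + w
            if ng < g_cost.get(v, float("inf")):
                g_cost[v] = ng
                seeded_by[v] = seeded_by[u]
                heapq.heappush(open_heap, (ng + h(v), ng, v))
    return None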
Thanks for your post, Alvaro; it is a good idea.
But the problem with backward search is that it works by induction rather than deduction... it rules out integrating a state variable into the cost function:
for example, I have a confidence index that is initialized at 0; when I find signage in the environment I set it to its maximum, and it then decreases over time.
I can integrate this confidence index into my cost function, but only in a forward search.
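Something like this is what I mean; the linear decay law and the 2.0 - confidence scaling here are just illustrative, not my actual numbers:

# Sketch of a forward edge-cost function that reads agent state as the
# path is unrolled from the start.

class AgentState:
    def __init__(self, max_confidence=1.0, decay=0.1):
        self.confidence = 0.0
        self.max_confidence = max_confidence
        self.decay = decay

    def step(self, saw_signage):
        # Signage resets confidence to max; otherwise it decays each step.
        if saw_signage:
            self.confidence = self.max_confidence
        else:
            self.confidence = max(0.0, self.confidence - self.decay)

def edge_cost(base_cost, state):
    # Low confidence inflates the cost of moving on. This only works when
    # states are expanded forward, start to goal, because the confidence
    # at a node depends on the path taken to reach it.
    return base_cost * (2.0 - state.confidence)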
This topic is closed to new replies.