Quote:Original post by headfonez
Quote:Original post by MrRowl
1. The agent starts walking in a straight line from A to B, negotiating obstacles using some local steering.
2. At the same time you simulate the agent's movement (distributing this over multiple updates... but still running much faster than real-time) in order to estimate his position in the future.
3. If you detect that the agent will eventually get stuck, then start the RRT pathfinding from that (predicted) location. With luck, you'll have calculated the result by the time the agent actually gets there (may be 10-20 seconds into the future). With even more luck, your agent will actually end up in the same place that you started your pathfinding from!
4. Assuming everything worked, when your agent reaches the stuck location, have him play a "puzzled" animation (!) then follow your RRT path to B, assuming one exists. |
That is golden |
...and is essentially a very simplified version of the technique I developed during my PhD (gratuitous self-promotion incoming ;) ). I worked on a tougher problem, which deals with uncertainty about whether you will get stuck or not (and hence fail to complete the plan). One can view the true problem as being based on the probability of success of a plan, and then use information-theoretic means to decide when it is appropriate to look for a new plan, based on one's uncertainty and how it changes. In the above example, the forward prediction of the straight-line plan equates to a reduction in that uncertainty, with only two possible outcomes, one of which is determined by the test (i.e., no residual uncertainty). What to do in this case is clear cut: replan if the current straight-line plan will fail. A more interesting problem (and the one I solved) is what to do when you have residual uncertainty after doing that test (perhaps because you are not sure how the world will evolve).
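To make the idea above concrete, here's a minimal sketch of the two pieces: fast-forwarding the agent along its straight-line plan to predict whether it will get stuck, and a simple replan trigger for the residual-uncertainty case, where a threshold on the estimated probability of success stands in for a proper information-theoretic criterion. All names here (simulate_ahead, should_replan, the circle obstacles) are my own illustrative choices, not MrRowl's or Timkin's actual code.

```python
import math

def simulate_ahead(pos, goal, obstacles, step=0.5, max_steps=1000):
    """Fast-forward the agent in a straight line toward the goal,
    much faster than real time. Obstacles are (x, y, radius) circles.
    Returns (stuck, predicted_position): if stuck is True,
    predicted_position is where RRT planning should start from."""
    x, y = pos
    gx, gy = goal
    for _ in range(max_steps):
        dx, dy = gx - x, gy - y
        dist = math.hypot(dx, dy)
        if dist < step:
            return False, (gx, gy)  # plan succeeds: goal reached
        nx = x + step * dx / dist
        ny = y + step * dy / dist
        if any(math.hypot(nx - ox, ny - oy) < r for ox, oy, r in obstacles):
            return True, (x, y)     # blocked: predicted stuck location
        x, y = nx, ny
    return True, (x, y)             # ran out of simulation budget

def should_replan(p_success, threshold=0.9):
    """Crude stand-in for the residual-uncertainty case: replan when the
    estimated probability that the current plan succeeds drops below a
    confidence threshold (the binary test above is the special case
    where p_success is exactly 0 or 1)."""
    return p_success < threshold
```

In practice you would spread the `simulate_ahead` loop over several frames (as step 2 in the quote suggests) rather than running it in one go, and feed the RRT planner the returned stuck position.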
Cheers,
Timkin