
3D Pathing in Highly Dynamic Environments

Started July 24, 2006 12:54 PM
28 comments, last by Timkin 18 years, 5 months ago
Quote:
Original post by MrRowl
Maybe this


1. The agent first starts walking in a straight line from A to B, negotiating obstacles using some local steering.

2. At the same time you simulate the agent's movement (distributing this over multiple updates... but still running much faster than real-time) in order to estimate his position in the future.

3. If you detect that the agent will eventually get stuck, then start the RRT pathfinding from that (predicted) location. With luck, you'll have calculated the result by the time the agent actually gets there (maybe 10-20 seconds into the future). With even more luck, your agent will actually end up in the same place that you started your pathfinding from!

4. Assuming everything worked, when your agent reaches the stuck location, have him play a "puzzled" animation (!) then follow your RRT path to B, assuming one exists.


That is golden
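MrRowl's four steps can be sketched in miniature. Everything below (the toy world, the circular obstacle, the function and constant names) is my own construction for illustration, not code from this thread: local steering walks straight at the goal, a faster-than-real-time simulation predicts where the agent jams, and a basic RRT then plans onward from that predicted position.

```python
import math, random

random.seed(1)

OBSTACLE = ((5.0, 0.0), 2.0)   # circular obstacle: (centre, radius)
GOAL = (10.0, 0.0)
STEP = 0.5                     # agent step length per update

def blocked(p):
    (cx, cy), r = OBSTACLE
    return math.hypot(p[0] - cx, p[1] - cy) < r

def steer_toward(p, target):
    """Take one fixed-length step from p toward target."""
    dx, dy = target[0] - p[0], target[1] - p[1]
    d = math.hypot(dx, dy)
    if d < 1e-9:
        return p
    return (p[0] + STEP * dx / d, p[1] + STEP * dy / d)

def local_steering(p):
    """Step 1: naive local steering that refuses a blocked step (= stuck)."""
    q = steer_toward(p, GOAL)
    return p if blocked(q) else q

def predict_stuck(start, horizon=200):
    """Step 2: simulate the agent faster than real time; return the
    position where it jams, or None if it will reach the goal."""
    p = start
    for _ in range(horizon):
        q = local_steering(p)
        if q == p:
            return p                           # predicted stuck position
        p = q
        if math.hypot(p[0] - GOAL[0], p[1] - GOAL[1]) < STEP:
            return None                        # no planning needed
    return None

def rrt(start, goal, iters=2000):
    """Step 3: a basic RRT grown from the predicted stuck position.
    Collision checks are point-based, which is fine for a sketch."""
    nodes, parent = [start], {0: None}
    for _ in range(iters):
        sample = goal if random.random() < 0.1 else (
            random.uniform(-2, 12), random.uniform(-6, 6))
        i = min(range(len(nodes)),
                key=lambda k: math.hypot(nodes[k][0] - sample[0],
                                         nodes[k][1] - sample[1]))
        new = steer_toward(nodes[i], sample)
        if blocked(new):
            continue
        parent[len(nodes)] = i
        nodes.append(new)
        if math.hypot(new[0] - goal[0], new[1] - goal[1]) < STEP:
            path, j = [], len(nodes) - 1       # walk parents back to start
            while j is not None:
                path.append(nodes[j])
                j = parent[j]
            return path[::-1]
    return None

stuck = predict_stuck((0.0, 0.0))
path = rrt(stuck, GOAL) if stuck else None
```

In a real game the `predict_stuck` loop would be amortised over several frames, and step 4 (the "puzzled" animation while the agent waits at the stuck spot) buys the planner extra wall-clock time.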
Reading this thread and seeing the mention of robot navigation reminded me of the vector field navigation used by robots. The general idea is that you create a field of vectors that point toward the target but away from obstacles; summing the vectors gives the direction to move. Here's an example, with some nice graphics, of an optical navigation system:
http://people.csail.mit.edu/lpk/mars/temizer_2001/Optical_Flow/
There are other papers out there too if you search for "robot vector field navigation".

I'd think you'd be able to create a vector field from the current view of your spider to do this. But I'm no AI expert. Maybe it's too computationally intensive or maybe it's technically the same as some of the other pathfinding methods mentioned.
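A minimal sketch of the summed-vector idea, assuming a point agent and point obstacles (the constants, names, and the 1/r repulsion weighting are illustrative choices of mine, not taken from the linked MIT page):

```python
import math

def field_direction(pos, goal, obstacles, repel_radius=3.0):
    """Sum an attractive unit vector toward the goal with repulsive
    vectors pointing away from each obstacle within repel_radius."""
    gx, gy = goal[0] - pos[0], goal[1] - pos[1]
    d = math.hypot(gx, gy) or 1e-9
    vx, vy = gx / d, gy / d                      # attraction (unit length)
    for ox, oy in obstacles:
        rx, ry = pos[0] - ox, pos[1] - oy
        r = math.hypot(rx, ry) or 1e-9
        if r < repel_radius:
            w = (repel_radius - r) / (repel_radius * r)  # grows as r shrinks
            vx += w * rx
            vy += w * ry
    m = math.hypot(vx, vy) or 1e-9
    return vx / m, vy / m                        # normalised steering direction

# Walk a point agent through the field, one small step per update.
pos, goal = (0.0, 0.0), (10.0, 0.0)
obstacles = [(5.0, 0.5)]                         # slightly off the direct line
for _ in range(200):
    dx, dy = field_direction(pos, goal, obstacles)
    pos = (pos[0] + 0.2 * dx, pos[1] + 0.2 * dy)
    if math.hypot(pos[0] - goal[0], pos[1] - goal[1]) < 0.3:
        break
```

The well-known weakness of pure potential/vector fields is local minima (the agent can stall where attraction and repulsion cancel, e.g. in a concave obstacle), which is exactly where a global planner like the RRT discussed above complements them.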

Looks like a fun project!
Tadd- WarbleWare
Quote:
Original post by headfonez
Quote:
Original post by MrRowl
1. The agent first starts walking in a straight line from A to B, negotiating obstacles using some local steering.

2. At the same time you simulate the agent's movement (distributing this over multiple updates... but still running much faster than real-time) in order to estimate his position in the future.

3. If you detect that the agent will eventually get stuck, then start the RRT pathfinding from that (predicted) location. With luck, you'll have calculated the result by the time the agent actually gets there (maybe 10-20 seconds into the future). With even more luck, your agent will actually end up in the same place that you started your pathfinding from!

4. Assuming everything worked, when your agent reaches the stuck location, have him play a "puzzled" animation (!) then follow your RRT path to B, assuming one exists.


That is golden


...and is essentially a very simplified version of the technique I developed during my PhD (gratuitous self-promotion incoming ;) ). I worked on a tougher problem, which deals with uncertainty in whether you will get stuck or not (and hence fail to complete the plan). One can view the true problem as being based on the probability of success of a plan, and then find information-theoretic means to decide when it is appropriate to look for a new plan, based on one's uncertainty and how it changes.

In the above example, the prediction of the straight-line plan equates to a reduction in the uncertainty, with only two possible outcomes, one of which is determined by the test (i.e., no residual uncertainty). What to do in this case is clear cut: replan if the current straight-line plan will fail. A more interesting problem (and the one I solved) is what to do when you have residual uncertainty after that test (perhaps because you are not sure how the world will evolve).
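A toy illustration of the distinction being drawn here (purely my own construction, not Timkin's actual method): model "this plan will succeed" as a Bernoulli belief. MrRowl's predictive test collapses that belief to 0 or 1, while a dynamic world leaves residual uncertainty in between, where an expected-cost test can decide when replanning is worth it. The cost numbers are arbitrary placeholders.

```python
import math

def entropy(p):
    """Shannon entropy (bits) of a Bernoulli 'plan will succeed' belief.
    Zero means no residual uncertainty; one bit is maximal uncertainty."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def should_replan(p_success, cost_fail=100.0, cost_replan=10.0):
    """Replan when the expected cost of persisting with the current
    plan exceeds the cost of computing a new one."""
    return (1 - p_success) * cost_fail > cost_replan

# MrRowl's straight-line prediction is the special case where the test
# collapses the belief to 0 or 1 (entropy 0, decision clear cut):
keep = should_replan(1.0)      # certain success -> keep the plan
drop = should_replan(0.0)      # certain failure -> replan
# Residual uncertainty (the harder case) sits in between:
unsure = should_replan(0.7)
```

In the full problem, the belief itself would be updated as the world evolves, and the value of running the predictive test again could be weighed against its cost, which is where the information-theoretic machinery comes in.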

Cheers,

Timkin

This topic is closed to new replies.
