
3D Pathing in Highly Dynamic Environments

Started by July 24, 2006 12:54 PM
28 comments, last by Timkin 18 years, 3 months ago
I had originally thought that the answer to your spider controllers would be to define walking and climbing controllers for them and then switch between the two depending on the existence of discontinuities in the local gradient surface around the spider. However, the climbing controller complexity is quite high because of the number of degrees of freedom in the spider (each leg would need two hinge points... times 8 legs... cut that in half by using some symmetry arguments and constraints for imposed balance). The walking controller is much easier since you can use a state machine for each leg and a cockroach walking model: the lead legs define the motion and each other leg's action depends on the state of a set of coupled legs.
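
For the walking controller, a minimal sketch of that per-leg state machine with coupled legs might look like the following (the coupling sets, timing and state names here are illustrative assumptions, not a measured spider gait):

#include <array>

enum class LegState { Stance, Swing };

struct Leg {
    LegState state = LegState::Stance;
    float    phase = 0.0f;   // progress through the current state, 0..1
};

// For 8 legs, a leg may only lift while the legs it is coupled to are planted.
const std::array<std::array<int, 3>, 8> kCoupled = {{
    {{1, 2, 7}}, {{0, 3, 6}}, {{0, 3, 5}}, {{1, 2, 4}},
    {{3, 5, 6}}, {{2, 4, 7}}, {{1, 4, 7}}, {{0, 5, 6}},
}};

void UpdateGait(std::array<Leg, 8>& legs, float dt, float stepRate) {
    for (int i = 0; i < 8; ++i) {
        Leg& leg = legs[i];
        leg.phase += dt * stepRate;
        if (leg.phase < 1.0f) continue;          // current state not finished yet
        leg.phase = 0.0f;

        if (leg.state == LegState::Swing) {
            leg.state = LegState::Stance;        // foot comes back down
        } else {
            // Only lift this leg if all of its coupled legs are planted.
            bool coupledLegsPlanted = true;
            for (int j : kCoupled[i])
                if (legs[j].state == LegState::Swing) coupledLegsPlanted = false;
            if (coupledLegsPlanted) leg.state = LegState::Swing;
        }
    }
}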

I think that a better solution might be to observe the local height map around the spider and compute a local tiling based on the projection of surfaces into the floor plane. Then, for each tile, determine a cost as a linear function of gradient, with gradients that are too steep to climb up having infinite cost. This reflects the change in horizontal velocity as slope increases. Connectivity between tiles would be determined by the height discontinuity in the original 3D surface (also reflected as a slope discontinuity between tiles). If the height discontinuity exceeds an upper threshold, the step up is too high for the spider to climb. If it is negatively signed, the spider could jump down. Then, perform local pathfinding on a horizon determined by the size of the local heightmap. I would think that using something like MILP (Mixed Integer Linear Programming) would be a good, fast approach, since fast solvers already exist. Alternatively, write your own local planner. You'll need to determine a cost heuristic for the horizon edge, but a good idea might be to favour higher positions rather than lower ones, since it's easier (and cheaper) to climb down to get to the goal than up. Thus, the spider would tend to take paths up and over obstacles that are not straight lines but rather shallow climbs, much like how roads are built over mountains.
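
A small sketch of how the tile costs and connectivity might be derived from the local height map (the thresholds and the linear cost scale below are made-up tuning values, not anything canonical):

#include <limits>

const float kMaxClimbableSlope = 1.0f;   // rise/run of ~45 degrees; steeper is unclimbable
const float kMaxStepUp         = 0.5f;   // largest height discontinuity the spider can step up
const float kSlopeCostScale    = 2.0f;   // cost grows linearly with slope

// Cost of crossing a tile, as a linear function of its gradient.
float TileCost(float slope) {
    if (slope > kMaxClimbableSlope)
        return std::numeric_limits<float>::infinity();  // too steep to climb up
    return 1.0f + kSlopeCostScale * slope;              // steeper means slower horizontal speed
}

// Whether the spider can move from tile A to tile B, based on the height
// discontinuity between them.
bool TilesConnected(float heightA, float heightB) {
    float step = heightB - heightA;
    if (step > kMaxStepUp) return false;  // the step up is too high to climb
    return true;                          // flat, a small step up, or a jump down
}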

As for animating the movement, you can either define a single walking animation and rotate the base of this walking frame to the local surface gradient, or define a coarse set of animations based on the slope. You would need to work out how to smoothly transition between animations as slope changes, especially at discontinuities. This might be tricky, but ultimately, should prove computationally cheaper than trying to compute the movement of each leg of the spider and its body and doing collision detection with its feet and the surface.

I hope this helps. If you want me to elaborate on any of these ideas, please holler.

Cheers,

Timkin
What you need is some sort of 'learning AI'. I saw an example of this under another topic, something about a neural net or something...

Basically, you code in 'expected' results, and if those results aren't met, the system traces backwards from the expected result to see where the deviation occurred, then alters the expected result based on where something went wrong.

You could do something like this in your code. I'm SO new to programming, it's not even funny, but I'm good at logic, and I'm currently learning C# and using DX Studio to create stuff.

What I would do is set up a pathfinding grid and then create a variable 'finalTargetNode', which would be the AI's target, such as a player building. Then create some sort of time check, like:


if (currentAction=="travelling" && npcMesh.velocity

OK, so what this does is let you use a basic pathfinding system and then adjust the path if the NPC object's velocity is lower than what it is expected to be. It then disables the current pathfinding node for future calculations (once the finalTargetNode has been reached, all the nodes can be reset).

Hope this helps. I have no idea how to implement this, but the logic is pretty sound.
^^
Sorry, my code didn't copy over; I wasn't aware of the limits.

--pseudo--
if (currentAction=="travelling" && npcMesh.velocity <
Man, I'm sorry guys, I'll just break it down in English.

if npcVelocity isn't what it should be and npcAction is travelling

disable the currentNode for future pathfinding calculations
find a new path

when the target has been achieved, re-enable all the nodes.
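
A minimal sketch of that check, assuming a hypothetical node grid, an expectedVelocity field and a FindPath routine that skips disabled nodes (roughly what the cut-off pseudocode above was getting at):

#include <vector>

struct Node { bool enabled = true; };

struct Npc {
    bool  travelling       = false;
    float velocity         = 0.0f;
    float expectedVelocity = 1.0f;   // hypothetical: how fast we expect to be moving
    Node* currentNode      = nullptr;
    std::vector<Node*> path;
};

// Stub: recompute a path that skips disabled nodes (the real search would go here).
std::vector<Node*> FindPath(Node* /*start*/, Node* goal) {
    return { goal };
}

void CheckProgress(Npc& npc, Node* finalTargetNode) {
    if (npc.travelling && npc.velocity < npc.expectedVelocity) {
        npc.currentNode->enabled = false;                       // skip this node from now on
        npc.path = FindPath(npc.currentNode, finalTargetNode);  // find a new path
    }
}

void OnTargetReached(std::vector<Node*>& allNodes) {
    for (Node* n : allNodes) n->enabled = true;                 // re-enable every node
}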
Hey all,

Depending on the resources required, you could give each 'bot' a memory of everything it 'saw' in the last t seconds (say 5, depending on how fast they move) and use a method similar to ray casting.

You pick an angle, say 10 degrees, and create lines at, say, 1 degree intervals, starting 10 degrees to the left of the target and ending 10 degrees to the right. Next, you intersect the lines with everything in its memory (treat area it hasn't seen as empty) and find the distance from the 'bot'. Next, check if there are any lines that don't intersect; if so, follow the one closest to the target. Next, check if any of the intersections are further than a set distance away; if so, walk in that direction.

Finally, if they are all too close, turn in the direction of whichever edge line was furthest away.

There are a few problems with the above, but these all have simple solutions. For example, to avoid it getting stuck in corners you can make the cut-off distance fairly large, or you can give it a tendency to turn in the same direction it last turned.
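
Roughly, the sweep might look like the sketch below, where Raycast() is a placeholder for the intersection test against the bot's memory, and the fan width, step and cut-off distance are just example values:

struct Vec2 { float x, y; };

// Placeholder: distance along a ray until it hits remembered geometry.
// Replace with a real intersection query against the bot's memory.
float Raycast(Vec2 /*origin*/, float /*angleRad*/) {
    return 1e9f;   // stub: pretend nothing is hit
}

// Returns the heading (in radians) the bot should walk in this update.
float SteerTowards(Vec2 botPos, float angleToTarget) {
    const float kPi        = 3.14159265f;
    const float kHalfFan   = 10.0f * kPi / 180.0f;  // 10 degrees either side of the target
    const float kStep      =  1.0f * kPi / 180.0f;  // 1 degree between rays
    const float kFarEnough = 5.0f;                  // hits beyond this count as clear

    // Sweep outwards from the target direction and take the first ray that is
    // clear (or far enough), i.e. the clear ray closest to the target.
    for (float off = 0.0f; off <= kHalfFan; off += kStep) {
        if (Raycast(botPos, angleToTarget - off) > kFarEnough) return angleToTarget - off;
        if (Raycast(botPos, angleToTarget + off) > kFarEnough) return angleToTarget + off;
    }

    // Everything is too close: turn towards whichever edge ray reaches furthest.
    float leftEdge  = Raycast(botPos, angleToTarget - kHalfFan);
    float rightEdge = Raycast(botPos, angleToTarget + kHalfFan);
    return (leftEdge > rightEdge) ? angleToTarget - kHalfFan
                                  : angleToTarget + kHalfFan;
}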

Repeat some time later

Note: the memory is not essential, but it speeds up the computations and prevents stupid behaviours such as walking in circles. You could also try to 'bounce' the lines to the target, but because the world changes so much I wouldn't recommend this.

If anyone wants me to code a quick version of this, I'm happy to do it tomorrow, but someone please tell me how you do the little scrolly code boxes.
You could do an 'on collision' command: use A*, and set up an 'on collision' path re-detection where the node the NPC was heading to is added to the 'closed' list.
I've had a similar problem (I'm making an RTS game). Now I'm dealing with group behaviours, not with pathfinding. Try this, which worked for me:

1) Find the path from the point you're at to the destination. I assume that you're using a queue of positions once you've obtained a path.
2) Each frame, compute the possibility of crashing if you make a new step.
3) If you'll crash, use A* to connect the point where you are to the point from the queue (and not the final position).

Step 2 may sound very expensive, but it is not; at least the code I've written is pretty fast (I don't do magic, though).
But it works.
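
A rough sketch of those three steps, with WillCollide() standing in for the step 2 check and AStar() for whatever local search is used (both are placeholder stubs here):

#include <deque>

struct Vec2 { float x, y; };

// Stub for step 2: a cheap look-ahead collision test against nearby units/obstacles.
bool WillCollide(Vec2 /*from*/, Vec2 /*to*/) { return false; }

// Stub: local A* search between two points (the real search would go here).
std::deque<Vec2> AStar(Vec2 /*from*/, Vec2 to) { return { to }; }

// Called each frame; 'path' is the queue of positions obtained from the planner.
void FollowPath(Vec2& unitPos, std::deque<Vec2>& path) {
    if (path.empty()) return;
    Vec2 next = path.front();

    if (WillCollide(unitPos, next)) {
        // Step 3: repair only up to the next queued point, not the final goal,
        // and splice the repair in front of the remaining queue.
        std::deque<Vec2> repair = AStar(unitPos, next);
        path.pop_front();
        path.insert(path.begin(), repair.begin(), repair.end());
        return;   // take the repaired route starting next frame
    }

    unitPos = next;   // safe to step
    path.pop_front();
}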
Quote: Original post by MrRowl
Maybe this


1. First, the agent starts walking in a straight line from A to B, negotiating obstacles using some local steering.

2. At the same time you simulate the agent's movement (distributing this over multiple updates... but still running much faster than real-time) in order to estimate his position in the future.

3. If you detect that the agent will eventually get stuck, then start the RRT pathfinding from that (predicted) location. With luck, you'll have calculated the result by the time the agent actually gets there (may be 10-20 seconds into the future). With even more luck, your agent will actually end up in the same place that you started your pathfinding from!

4. Assuming everything worked, when your agent reaches the stuck location, have him play a "puzzled" animation (!) then follow your RRT path to B, assuming one exists.


That is golden
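
A rough sketch of how steps 2 and 3 of the quoted approach might be scheduled, with Simulate, IsStuck and StartRrtSearch as hypothetical placeholders for the game's own routines:

struct Vec3 { float x, y, z; };
struct AgentState { Vec3 pos; Vec3 vel; };

// Placeholders: one step of the faster-than-real-time look-ahead simulation,
// a heuristic "no further progress" test, and an asynchronous RRT search.
AgentState Simulate(AgentState s, float /*dt*/) { return s; }
bool       IsStuck(const AgentState& /*s*/)     { return false; }
void       StartRrtSearch(Vec3 /*from*/, Vec3 /*goal*/) {}

// Called once per frame; spreads the look-ahead over frames so it stays cheap
// while still running well ahead of the agent's real position.
void UpdateLookahead(AgentState& predicted, Vec3 goal, bool& rrtStarted,
                     int stepsPerFrame = 10, float dt = 0.1f) {
    if (rrtStarted) return;
    for (int i = 0; i < stepsPerFrame; ++i) {
        predicted = Simulate(predicted, dt);
        if (IsStuck(predicted)) {
            StartRrtSearch(predicted.pos, goal);  // plan from where the agent will be
            rrtStarted = true;
            break;
        }
    }
}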
Reading this thread and seeing mention of robot navigation reminded me of vector field navigation, which can be used by robots. The general idea is that you create a field of vectors that point in the direction of the target but away from obstacles. Summing the vectors provides the direction to move. Here's an example with some nice graphics for an optical navigation system:
http://people.csail.mit.edu/lpk/mars/temizer_2001/Optical_Flow/
There are other papers out there too if you search for "robot vector field navigation".

I'd think you'd be able to create a vector field from the current view of your spider to do this. But I'm no AI expert. Maybe it's too computationally intensive or maybe it's technically the same as some of the other pathfinding methods mentioned.
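
A minimal sketch of that summed-vector idea (the gains and falloff radius below are arbitrary example values):

#include <cmath>
#include <vector>

struct Vec2 { float x, y; };

static Vec2 Normalized(Vec2 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y);
    if (len < 1e-6f) return {0.0f, 0.0f};
    return {v.x / len, v.y / len};
}

// One attractive vector towards the target plus a repulsive vector from each
// nearby obstacle; the normalised sum is the direction to move this frame.
Vec2 FieldDirection(Vec2 pos, Vec2 target, const std::vector<Vec2>& obstacles) {
    const float kAttract = 1.0f;
    const float kRepel   = 2.0f;
    const float kRadius  = 4.0f;   // obstacles beyond this have no influence

    Vec2 sum = Normalized({target.x - pos.x, target.y - pos.y});
    sum.x *= kAttract;
    sum.y *= kAttract;

    for (const Vec2& obs : obstacles) {
        Vec2 away = {pos.x - obs.x, pos.y - obs.y};
        float d = std::sqrt(away.x * away.x + away.y * away.y);
        if (d > kRadius || d < 1e-6f) continue;
        float strength = kRepel * (kRadius - d) / kRadius;  // stronger when closer
        Vec2 n = Normalized(away);
        sum.x += n.x * strength;
        sum.y += n.y * strength;
    }
    return Normalized(sum);
}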

Looks like a fun project!
Tadd- WarbleWare
Quote: Original post by headfonez
Quote: Original post by MrRowl
1. First, the agent starts walking in a straight line from A to B, negotiating obstacles using some local steering.

2. At the same time you simulate the agent's movement (distributing this over multiple updates... but still running much faster than real-time) in order to estimate his position in the future.

3. If you detect that the agent will eventually get stuck, then start the RRT pathfinding from that (predicted) location. With luck, you'll have calculated the result by the time the agent actually gets there (may be 10-20 seconds into the future). With even more luck, your agent will actually end up in the same place that you started your pathfinding from!

4. Assuming everything worked, when your agent reaches the stuck location, have him play a "puzzled" animation (!) then follow your RRT path to B, assuming one exists.


That is golden


...and is essentially a very simplified version of the technique I developed during my PhD (gratuitous self-promotion incoming ;) ). I worked on a tougher problem, which deals with uncertainty in whether you would get stuck or not (and hence fail to successfully complete the plan). One can view the true problem as being based on the probability of success of a plan, and then find information-theoretic means to decide when it is appropriate to look for a new plan, based on one's uncertainty and how it changes. In the above example, the prediction of the straight-line plan equates to a reduction in the uncertainty, with only two possible outcomes, one or the other of which is determined by the test (i.e., no residual uncertainty). What to do in this case is clear cut (replan if the current straight-line plan will fail). A more interesting problem (and the one I solved) is what to do when you have residual uncertainty after doing that test (perhaps because you are not sure how the world will evolve).

Cheers,

Timkin

