3D Pathing in Highly Dynamic Environments
I'm not sure what the state of the art is for this problem, it seems pretty hard.
I have a 3D game that I am making that is basically built of Lego bricks. I want to add some AI characters to the game that can negotiate the environment and basically get from point A to point B (at the very least). The problem is that the world is multi-user and every aspect of the map is completely dynamic, since any user can move any set of bricks at any time. I've been thinking that some kind of continually updated navigation mesh might still be my best hope. It's complicated, though, since it's hard and expensive to determine, a priori, which piles of bricks a character could climb over and which are impassable.
For a first cut at this problem, I don't care about a globally optimal solution, I just want to generate some interesting/reasonable behavior.
Does anyone have experience solving a problem like this? I'm reading Game Programming Gems 5 and it looks like I might be able to adapt D* to my needs, but it's not obvious to me how to cope with automatically generating navigation meshes.
Shedletsky's Bits: A Blog | ROBLOX | Twitter
Time held me green and dying
Though I sang in my chains like the sea...
It's going to be very hard to come up with a pathfinding solution that works well here, assuming that there are no constraints on brick placement. The biggest problem is that if it really is that dynamic, you're going to have many people who will spend a good deal of their time just trying to box other people/bots in so they get stuck. You might want to consider having prohibited building zones (roads) as your primary nav mesh, or create combinatorial rule sets that prevent bricks from completely enclosing areas.
Probably what will be most informative is to research real-world robotic navigation systems. For instance, the hospital walking robots likely have interesting algorithms by which they solve the hospital floor. I know they have panic states where they just call out for someone to move stuff out of their way =)
The bottom line is that any look-ahead pathing system is only a guess in your world. What was a good path can be swiftly invalidated by the actions of other agents. So if you opt out of a build-free road system, the best you can do is probably a relatively easily broken version of bump and steer. If your rules prevent truly 3D construction (like n-storied buildings), you could just have a nav mesh that has nodes turned on/off by the placement of bricks.
-me
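A minimal sketch of that node-toggling idea, assuming a coarse 2D cell grid and bricks that report an axis-aligned footprint (all class and method names here are made up for illustration):

```python
class NavGrid:
    """Walkability grid whose cells are toggled as bricks come and go.

    Cell size and the brick-to-cell mapping are illustrative assumptions.
    """

    def __init__(self, cell_size=4):
        self.cell_size = cell_size
        # blockers[cell] counts how many bricks currently overlap that cell
        self.blockers = {}

    def _cells_covered(self, brick_min, brick_max):
        """Yield every (x, y) cell that the brick's footprint overlaps."""
        x0, y0 = (int(c // self.cell_size) for c in brick_min)
        x1, y1 = (int(c // self.cell_size) for c in brick_max)
        for x in range(x0, x1 + 1):
            for y in range(y0, y1 + 1):
                yield (x, y)

    def brick_placed(self, brick_min, brick_max):
        for cell in self._cells_covered(brick_min, brick_max):
            self.blockers[cell] = self.blockers.get(cell, 0) + 1

    def brick_removed(self, brick_min, brick_max):
        for cell in self._cells_covered(brick_min, brick_max):
            self.blockers[cell] -= 1
            if self.blockers[cell] == 0:
                del self.blockers[cell]

    def walkable(self, cell):
        return cell not in self.blockers
```

Reference counting per cell means overlapping bricks don't accidentally re-open a cell when only one of them is removed.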
It's not clear to me how the game world is defined and how it evolves dynamically through gameplay. Are you saying that you have a 3D environment with dynamic boundaries (that are grown by players) and you want to move lots of objects around (without collision) inside this changing world? Or, are you saying that the environment is static and you just need to avoid other agents? I'd like to get a clear picture of the problem before I suggest a solution.
Thanks,
Timkin
From reading the OP's post, I think his world is a set size, but all objects in it (the Lego blocks, people, etc.) can be moved at any time, almost anywhere, so he cannot really have predefined waypoints like you do in worlds with lots of static geometry.
Basically, he needs real-time dynamic path finding. Sounds fun (and the future).
"Those who would give up essential liberty to purchase a little temporary safety deserve neither liberty nor safety." --Benjamin Franklin
Depending on how dynamic the world is, "path finding" in the sense of creating a complete path will be impossible: any form of planning, without being able to predict how the environment will change, will create solutions that quickly become infeasible and are thus a waste of time.
Given tight time constraints, you may never actually have an optimal solution anyway. So, navigation-wise, it's probably best to look ahead only a few steps at most. Rather than spending time planning too far ahead, it might be best to give the AI character a set of navigational heuristics that it applies to situations that may occur along the way towards the target.
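To make that concrete, here is a deliberately tiny one-step greedy heuristic on a grid: no global plan at all, just score the immediately reachable cells by straight-line distance to the target. The grid abstraction and every name are assumptions for the sketch:

```python
import math

def greedy_step(pos, target, blocked, world_w, world_h):
    """One-step greedy move: instead of planning a full path, score the
    immediately reachable cells by straight-line distance to the target
    and step to the best unblocked one. `blocked` is a set of (x, y)
    cells that currently contain bricks or other agents."""
    best, best_score = pos, math.dist(pos, target)
    for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
        nxt = (pos[0] + dx, pos[1] + dy)
        if nxt in blocked:
            continue
        if not (0 <= nxt[0] < world_w and 0 <= nxt[1] < world_h):
            continue
        score = math.dist(nxt, target)
        if score < best_score:
            best, best_score = nxt, score
    return best  # unchanged position means "stuck"
```

When it returns the current position unchanged, the agent is stuck, which is exactly where a panic or bump-and-steer fallback would kick in.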
It seems to me that there really isn't such a thing as 'pathfinding' in a world like you describe, Telamon. I think your best bet might be to work on something more akin to steering behaviors, because any path you calculate can be invalidated at any time, and constantly recalculating paths would use a significant amount of CPU. It depends on how large your world is, though; you might be able to get a hierarchical A* to be fast enough if you optimize the routine that determines whether two nodes should be linked.
"Walk not the trodden path, for it has borne it's burden." -John, Flying Monk
While I'm still waiting to hear from the OP as to the specifics of his world, I thought it might be meaningful to respond to the last two posters. We can certainly start another thread if there is a risk of hijacking this one, should this discussion lead on...
Pathfinding in dynamic environments (in the sense of constructing global plans) is certainly possible. You just need to put aside the preconceptions about planned paths and optimality that underlie planning in static domains. The best way to look at the problem is this: traditional views of pathfinding posit a domain model that is a true representation of the domain. A path that is optimal in the model is then optimal in the domain, and since the model exactly matches the domain, the plan guarantees that the goal will be achieved.
When there is uncertainty in the model, such that the execution domain of the plan lies somewhere within the uncertainty bounds of the model, then the 'optimal' plan within the model is still the best one to execute in the domain until you gather more information about the domain (and hence alter your uncertainty). However, no plan will give a guarantee (probability of 1) of achieving the goal. One must take into account both the probability of achieving the goal with a given plan and the cost to execute that plan. This is Decision Theoretic Planning.
So, when planning paths in a highly dynamic environment, one need only construct a model of the domain (in which to deliberate about plans) that incorporates one's uncertainty about the likelihood of collisions at any time.
Since most people working in games don't want to do any form of planning under uncertainty, they typically rely on another 'fix'. That is, insert a dynamic action into the plan whenever it fails, or is going to fail within a small horizon, so as to bypass the failure. Typically, this means detecting the collision using a small predictive look-ahead of the domain state and then bypassing the collision point, returning to the plan further along the track.
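A minimal version of that fix (follow the plan, look a few waypoints ahead, and replan from the current position when an upcoming waypoint has become blocked) might look like the sketch below. BFS stands in for whatever planner the game actually uses, and every name is illustrative:

```python
from collections import deque

def bfs_path(start, goal, blocked, size):
    """Minimal grid planner (BFS), standing in for the real pathfinder."""
    frontier, came_from = deque([start]), {start: None}
    while frontier:
        cur = frontier.popleft()
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
            nxt = (cur[0] + dx, cur[1] + dy)
            if (nxt in came_from or nxt in blocked
                    or not (0 <= nxt[0] < size and 0 <= nxt[1] < size)):
                continue
            came_from[nxt] = cur
            frontier.append(nxt)
    return None

def follow_with_repair(start, goal, blocked, size, horizon=3):
    """Walk the plan, but before each step look `horizon` waypoints
    ahead; if any has become blocked, replan from the current cell."""
    pos, path = start, bfs_path(start, goal, blocked, size)
    visited = [pos]
    while path and pos != goal:
        if any(p in blocked for p in path[1:1 + horizon]):
            path = bfs_path(pos, goal, blocked, size)
            if path is None:
                break  # boxed in: fall back to a panic/steering behavior
        path.pop(0)  # drop the cell we are standing on
        if path:
            pos = path[0]
            visited.append(pos)
    return visited
```

In a live game, `blocked` would be updated by brick placement events between steps, so the look-ahead check would actually fire; here it simply degenerates to following the initial plan.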
There are, of course, other answers, but whether these are appropriate or not will depend on the use of the system and the computational power available (although the latter is a function of the complexity of the domain and of the agent).
Cheers,
Timkin
July 25, 2006 08:29 PM
How big (complex) is the environment, and how far a path through it might the AI object need to navigate? Hierarchical A* could be done, but only with a probability of success subject to revisions as the map changes (and the hierarchy model mutates) and the fine-level pathfind determines that the coarse level is inaccurate.
How significant is the need for an optimal path? How much backtracking after a predicted path closes is too much? (Do you need to constantly recheck the planned path and recalculate detours in segments of it? This affects the amount of working data you might have to retain....)
How frequently will modifications be made? Maybe a local test could determine whether a modification changes the coarse-level representation (does the placement create or destroy a chokepoint?). How often should the hierarchy map be reevaluated (not an insignificant task...)?
Are there truly dynamic objects (like things that roll), besides the other players, which might be only temporary obstructions?
All these things affect what kind of shortcuts (or clever solutions) might be employed, balanced against the CPU/memory resources you have available.
Your "lego" though makes me thing you are trying to describe a fixed grid that appears to be 3d.
If the blocks are bound to be placed on a fixed grid (like real Legos), then you could probably resolve this to an "A*" + "memorized path" + "valid move" sort of thinking in the bots.
A* on the grid, using a simple heuristic, where:
each 2D gridpoint is a node
each block has a node on top of it that takes the place of the node in the block's position
each node that is more than X gridpoints higher than the connecting point is an impassable node from that direction
Then memorize the path, and check against the heuristics each time the agent moves along the path. If there is a problem with the movement, go back to A* and find a new path.
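That grid scheme could be sketched roughly as follows, assuming heights are stored per cell, a climb of more than `max_step` units is impassable while drops are always allowed, and all names and the unit-cost model are assumptions:

```python
import heapq

def astar_bricks(start, goal, height, size, max_step=1):
    """A* over a 2D grid of brick-top nodes: each cell's node sits at
    height[cell] (0 if no brick there), and a neighbor more than
    `max_step` units higher is impassable from that direction."""
    def h(cell):  # Manhattan-distance heuristic
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_set = [(h(start), 0, start)]
    came_from, g = {start: None}, {start: 0}
    while open_set:
        _, cost, cur = heapq.heappop(open_set)
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
            nxt = (cur[0] + dx, cur[1] + dy)
            if not (0 <= nxt[0] < size and 0 <= nxt[1] < size):
                continue
            # climbing more than max_step is impassable; dropping is fine
            if height.get(nxt, 0) - height.get(cur, 0) > max_step:
                continue
            new_cost = cost + 1
            if nxt not in g or new_cost < g[nxt]:
                g[nxt] = new_cost
                came_from[nxt] = cur
                heapq.heappush(open_set, (new_cost + h(nxt), new_cost, nxt))
    return None  # no route: walls too high all the way around
```

A solid wall of tall bricks yields no path, while lowering one brick to within `max_step` of the ground opens a "ramp" the search can use.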
Thanks for the replies so far - I will look into steering behaviors. I know "path-finding" is a bit of a misnomer for what I am trying to accomplish. A path-finding approach would work if I could recompute the path often enough, but it may be solving a harder problem than I need to solve in order to get good behavior.
Since I was perhaps unclear before, let me explain the problem a bit more. I have an online 3D world built out of bricks. Right now, we are working on a deathmatch FPS-style game, so we have built an arena. It looks like this:
Players each control a minifigure with an armament of weapons, including a rocket launcher that can blow buildings apart (everything is made of bricks). Bricks can also be snapped together, so players can build things back up if they want (there is even a "wall" tool that makes it easy to build defensive positions in-game). There is no imposed "3D grid" - the worldspace is continuous, which makes the problem harder. The world is highly malleable. Here is the same arena a few minutes later:
Right now we are trying to make the arena more interesting when there is only one player in it, so we are looking at spawning computer-controlled mechanical spiders or tanks that seek the player out and hopefully keep him entertained long enough for another human player to join the game.
Our arena map is pretty open, but I am interested in solving the pathfinding problem for denser maps (we have an 11,000-brick city map that we want to put online someday, so it would be great if the AI controller could navigate through multi-story office buildings).
I'm aware this is a hard problem with no quick fix. Our game is full of problems like that. If you can suggest relevant research papers, I will seek them out.