
Planning Methods

Started by March 20, 2010 06:12 AM
20 comments, last by wodinoneeye 14 years, 7 months ago
Steadtler, I wouldn't use the term reactive planner here. It generally means a plan that's triggered reactively but carried out to the end without making new decisions (like most behavior trees). It doesn't seem to be the case here.

Alvaro's suggestion is a full planner, but he seems to suggest executing it every frame. I'm not sure there's a word for that, it almost becomes a reactive policy but informed by runtime sampling.


Quote: Original post by alexjc
Steadtler, I wouldn't use the term reactive planner here. It generally means a plan that's triggered reactively but carried out to the end without making new decisions (like most behavior trees). It doesn't seem to be the case here.

Alvaro's suggestion is a full planner, but he seems to suggest executing it every frame. I'm not sure there's a word for that, it almost becomes a reactive policy but informed by runtime sampling.


Terminology is a bitch in AI... I'm talking about the definition of reactive planner from Rich & Knight: a system that has a notion of the future but continuously retakes a decision every frame, which seems to fit Alvaro's suggestion.
Quote: Original post by Steadtler
Terminology is a bitch in AI... I'm talking about the definition of reactive planner from Rich & Knight: a system that has a notion of the future but continuously retakes a decision every frame.


Ah, I see! It's not a great definition IMHO. Most reactive systems seem to fit that definition... After all, the whole point of an intelligent decision is to understand what's going to happen, whether by design or by some runtime/offline technique.

I prefer the definition of "picking a plan reactively without deliberating into the future." How you execute that plan is a different piece of terminology altogether...

I wonder what Norvig has to say about this!


Funny thing is, as I remember, Rich & Knight dispute the people who came up with the term "reactive planner", since it doesn't have to involve planning. Sorry for getting sidetracked here. I like systems where I can control when and how to replan, which is something often overlooked.
Rich & Knight should be punished for trying to redefine a term... :-) It's bad enough already without people trying to overload common terms!


I agree with your premise though. In the case of Alvaro's system, you could get the same "control" by taking into account the current plan as part of the search process. Even if it's done every frame you get a way to persist with the same plan... KILLZONE 2 does this.
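
Roughly, that amounts to scoring the candidate plans each frame and giving the one you're already running a small bonus. This is only a minimal sketch of the idea; the Plan struct, its utility field and the bonus value are placeholders of mine, not how KILLZONE 2 actually implements it:

#include <string>
#include <vector>

// One candidate course of action coming out of the per-frame search.
struct Plan {
    std::vector<std::string> actions;
    float utility;                          // value estimated by the search
    bool operator==(const Plan& rhs) const { return actions == rhs.actions; }
};

// Pick this frame's plan, giving the plan we are already running a small
// bonus so we only switch when something genuinely better turns up.
Plan choosePlan(const std::vector<Plan>& candidates, const Plan& current,
                float persistenceBonus = 0.15f)
{
    Plan best = current;
    float bestScore = -1e30f;
    for (const Plan& p : candidates) {
        float score = p.utility + (p == current ? persistenceBonus : 0.0f);
        if (score > bestScore) { bestScore = score; best = p; }
    }
    return best;
}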


Yes, I am suggesting reconsidering often; I don't know if it has to be on every frame. Picking a sequence of actions and executing them without reconsideration doesn't seem appropriate unless the environment is extremely simplistic (deterministic results of actions, no other unpredictable agents in the scene...).

I am not sure the algorithm I propose is a "planner" in a traditional sense of the word. Some actions will be taken only because the agent is expecting to perform some other actions in the future. In this sense it is planning things, and we would probably think of what the agent is doing as following some plan, even if the plan doesn't exist as a concrete data structure anywhere in the code.

If the algorithm is set up right, the agent shouldn't abandon a good [perceived] plan without a good reason. The last action should have brought the agent closer to reaching some goal, so the agent would be more likely than before to see what it's supposed to do to achieve it.
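
In code, the idea looks roughly like this. It's only a toy sketch: the one-dimensional "distance to goal" state, the two-action set and the evaluation function are stand-ins for whatever the real game provides, but it shows how only the first action of the best look-ahead sequence gets executed each frame, while no plan is ever stored:

#include <algorithm>

// Toy "world": the agent's state is just its distance to a goal, an action
// either steps toward the goal or waits, and states closer to the goal are
// worth more.  All of this stands in for the real game state and simulation.
struct State { int distanceToGoal; };
enum class Action { Step, Wait };

static const Action kActions[] = { Action::Step, Action::Wait };

static State simulate(const State& s, Action a)
{
    State next = s;
    if (a == Action::Step && next.distanceToGoal > 0) --next.distanceToGoal;
    return next;
}

static float evaluate(const State& s) { return -static_cast<float>(s.distanceToGoal); }

// Depth-limited look-ahead: value of the best action sequence starting at s.
static float search(const State& s, int depth)
{
    if (depth == 0) return evaluate(s);
    float best = evaluate(s);
    for (Action a : kActions)
        best = std::max(best, search(simulate(s, a), depth - 1));
    return best;
}

// Called every frame (or every few frames): redo the search from the current
// state and execute only the first action of the best sequence.
Action decide(const State& s, int depth = 3)
{
    Action bestAction = Action::Wait;
    float bestValue = -1e30f;
    for (Action a : kActions) {
        float v = search(simulate(s, a), depth - 1);
        if (v > bestValue) { bestValue = v; bestAction = a; }
    }
    return bestAction;
}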

Re. this confusion over terminology:

I think that the words used in the decision and control community avoid confusion nicely: A controller is open-loop if it is a function from the current time to an action (i.e., a sequence of actions), and closed-loop if it is a mapping from the current state to an action.

I think this is a better distinction, because "planning" (looking into the future, usually using dynamic programming) can be used to create closed-loop controllers; such a mapping from state to action is also known as a policy. An example of this is the policy computed by value iteration, or the brushfire algorithm for single-source pathfinding; I think both of these things can properly be called planning.
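
To make the distinction concrete, here is roughly what a value-iteration policy looks like on a made-up toy MDP (a five-cell corridor I'm inventing purely for illustration). The planning happens over the whole state space, and what comes out is a closed-loop controller: a table you index by the current state every frame.

#include <algorithm>
#include <vector>

// Toy 1-D corridor: 5 cells, actions are "left"/"right", and stepping onto
// the rightmost cell pays off.  Purely illustrative.
const int kStates  = 5;
const int kActions = 2;                          // 0 = left, 1 = right

int nextState(int s, int a)
{
    int t = (a == 0) ? s - 1 : s + 1;
    return std::max(0, std::min(kStates - 1, t));
}

float reward(int s, int a) { return nextState(s, a) == kStates - 1 ? 1.0f : 0.0f; }

// Value iteration: planning over the whole state space yields a policy,
// i.e. a mapping state -> action, not a fixed sequence of actions.
std::vector<int> valueIteration(float gamma = 0.9f, int iterations = 100)
{
    std::vector<float> V(kStates, 0.0f);
    std::vector<int>   policy(kStates, 0);
    for (int it = 0; it < iterations; ++it) {
        std::vector<float> Vnew(kStates);
        for (int s = 0; s < kStates; ++s) {
            float best = -1e30f;
            for (int a = 0; a < kActions; ++a) {
                float q = reward(s, a) + gamma * V[nextState(s, a)];
                if (q > best) { best = q; policy[s] = a; }
            }
            Vnew[s] = best;
        }
        V = Vnew;
    }
    return policy;   // closed-loop: each frame just look up policy[currentState]
}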
Alvaro, in static worlds you shouldn't get oscillation for the reasons you mention. But in dynamic worlds (and game environments) things can change to induce oscillation. If a target moves just out of range into the next high-level area, or another object is in the way making the option a bit more expensive. These things need some kind of "momentum" factor to overcome. It's not hard to do but you need to do it.

Emergent, I like the open-loop closed-loop concept. Thanks for the insights!


Quote: Original post by alexjc
Alvaro, in static worlds you shouldn't get oscillation for the reasons you mention. But in dynamic worlds (and game environments) things can change to induce oscillation. If a target moves just out of range into the next high-level area, or another object is in the way making the option a bit more expensive. These things need some kind of "momentum" factor to overcome. It's not hard to do but you need to do it.


Oh, that's what you were talking about! Yes, you probably need to introduce hysteresis by adding some sort of penalty for changing your mind.
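
Something as simple as this usually does it. The Option struct and the margin value below are made-up placeholders, just to show the shape of the hysteresis:

#include <cstddef>
#include <vector>

struct Option { int targetId; float cost; };     // lower cost is better

// Keep the current choice unless a rival beats it by a clear margin, so small
// cost fluctuations in a dynamic world don't cause oscillation.
std::size_t chooseWithHysteresis(const std::vector<Option>& options,
                                 std::size_t currentIndex,
                                 float switchMargin = 0.2f)
{
    // Find the cheapest option this frame.
    std::size_t cheapest = 0;
    for (std::size_t i = 1; i < options.size(); ++i)
        if (options[i].cost < options[cheapest].cost)
            cheapest = i;

    // Only abandon the current option if the newcomer is clearly better.
    if (options[cheapest].cost + switchMargin < options[currentIndex].cost)
        return cheapest;
    return currentIndex;
}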

Ok all that is too advanced for me right now :(

