
AI planning

Started by May 14, 2006 07:03 AM
21 comments, last by hymerman 18 years, 6 months ago
I recently read the various articles on Excalibur and found them very interesting. I've had similar ideas for a system of my own earlier, although not quite as sophisticated as Excalibur. Has anyone here had some experience with this kind of planning system? I just want to see where the break between theory and practice runs with this kind of system. Also, I'm thinking of trying to design and implement my similar system for real now, building from the Excalibur example, but probably on a smaller scale. Any advice, relevant links or other words of wisdom greatly appreciated.
One of my best friends did his master's thesis on planners. The problem with them is that planning problems are essentially NP-complete, which means no efficient solutions are known. Also, it is often hard to adapt a planning problem to a continuous world. Solving a simple travelling problem can take huge amounts of time and memory.

I'm currently building a goal-oriented AI engine, but with a much more forward approach, without "real" planning.
But how much does it help if you do not require the plans to be optimal? I thought that was the main point in Excalibur that could make the system work in real time. The agents create a plan pretty quickly and then proceed to refine that plan as they get the processing power to do so. Might not be enough though, but it seems like a way to alleviate part of the problem. And in most cases less-than-optimal plans actually make agents seem more realistic, since that's what humans would come up with from time to time...
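The "plan fast, refine later" idea can be sketched with a toy route planner: produce any valid visiting order immediately, then spend whatever budget is left trying pairwise swaps that shorten it. The domain and numbers here are made up for illustration; Excalibur itself works differently.

```python
import itertools

def tour_length(points, order):
    """Total Manhattan length of visiting points in the given order."""
    return sum(
        abs(points[a][0] - points[b][0]) + abs(points[a][1] - points[b][1])
        for a, b in zip(order, order[1:])
    )

def anytime_plan(points, budget):
    """Return a cheap first plan immediately, then spend the remaining
    budget on swap improvements -- the anytime refinement idea."""
    order = list(range(len(points)))     # instant (probably bad) plan
    for _ in range(budget):              # each pass = one slice of spare CPU
        improved = False
        for i, j in itertools.combinations(range(1, len(order)), 2):
            candidate = order[:]
            candidate[i], candidate[j] = candidate[j], candidate[i]
            if tour_length(points, candidate) < tour_length(points, order):
                order = candidate
                improved = True
        if not improved:
            break                        # locally optimal; stop early
    return order
```

With a budget of zero the agent still has a usable (if wasteful) plan; extra budget only ever makes it shorter.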

As for your own system, could you explain how you're taking a more forward approach?
Damn, I wrote a long reply, then hit the wrong key and lost everything. sigh.

For some problems, it will help a lot; for others, not at all. Simply put, planning is finding a sequence of actions that will move from the current state to a final one, subject to constraints. Having constraints often means all of your heuristics stop working.

Say the agent wants to open a door. A planner will consider all of the possible sequences of actions in search of one that will open the door.

In my engine, the agent will consider only the actions that he knows are a way to open a door. But correctly implemented, it can look as if it planned ahead. If the agent knows the door is locked, he could get the key first, for example.
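A minimal sketch of that idea, with hypothetical actions and world flags (this is my reading of the description, not Steadtler's actual engine): the agent only runs through the actions it already associates with opening a door, picking up the key when the lock would otherwise block it.

```python
# Each action returns the new state, or None if its precondition fails.
def act_open_door(state):
    if state.get("door_locked"):
        return None                          # can't open a locked door
    return dict(state, door_open=True)

def act_get_key(state):
    return dict(state, has_key=True)

def act_unlock_door(state):
    if not state.get("has_key"):
        return None                          # need the key first
    return dict(state, door_locked=False)

# The agent's knowledge: which actions are relevant to which goal.
# No search happens -- only these actions are ever considered.
KNOWN_WAYS = {"door_open": [act_get_key, act_unlock_door, act_open_door]}

def pursue(goal, state):
    """Run only the actions the agent associates with the goal,
    skipping any whose precondition fails."""
    for action in KNOWN_WAYS[goal]:
        result = action(state)
        if result is not None:
            state = result
    return state
```

Because the relevant actions are listed up front, the agent fetches the key and unlocks the door without ever searching the full action space, which is what makes it "look as if it planned ahead".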
Quote: Original post by Steadtler
In my engine, the agent will consider only the actions that he knows are a way to open a door. But correctly implemented, it can look as if it planned ahead. If the agent knows the door is locked, he could get the key first, for example.


This sounds like partial-order planning in action space, although with such a brief description, I could be way off. ;)

As Steadtler points out, most planning problems in the real world are computationally difficult. The most successful systems use a lot of heuristic information to refine the search space to a region in which a locally optimal solution most likely occurs and then concentrate on finding that solution. In my PhD I took a different approach, which was far more dynamic and more realistic for the real world (IMHO). I ignored the problem of finding a plan and instead concentrated on how an agent could recognise when its current plan should be changed due to changes in its information about the world. Thus, planning just became an issue of finding a solution within a model of the world and the real problem became recognising when the model of the world didn't fit the real world and what the implication was for your plan. In this sense, the model is an instantiation of the world + heuristics + constraints. Of course, the system that I built would not be appropriate for a small game world, since the time frames on which it operates are hours to days. However, the principles could be applied for game planning systems (for pathfinding and for plans to achieve agent goals). Indeed, this is something I'm trying to find time to look at in the next year or two. 8)
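The monitoring idea could look something like this in miniature (the dict shapes and names here are invented for the sketch, not taken from the actual system): execute the current plan while the assumptions it was built on still match observations, and only pay the cost of replanning when they stop matching.

```python
def plan_is_valid(plan, world):
    """A plan stays valid while every assumption it was built on
    still matches the observed world."""
    return all(world.get(fact) == value
               for fact, value in plan["assumes"].items())

def step(agent, world, replan):
    """Execute the next action unless the world has drifted from the
    model the plan assumed; only then invoke the (expensive) planner."""
    if not plan_is_valid(agent["plan"], world):
        agent["plan"] = replan(world)        # model no longer fits: replan
    actions = agent["plan"]["actions"]
    return actions.pop(0) if actions else None
```

The point is that the planner itself is a black box here; the interesting work is in `plan_is_valid`, i.e. deciding which world changes actually invalidate the plan.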

Cheers,

Timkin
Quote: Original post by Timkin
Quote: Original post by Steadtler
In my engine, the agent will consider only the actions that he knows are a way to open a door. But correctly implemented, it can look as if it planned ahead. If the agent knows the door is locked, he could get the key first, for example.


This sounds like partial-order planning in action space, although with such a brief description, I could be way off. ;)


I looked up what they mean by partial-order (planners ain't my specialty), and it's not exactly that, though there are strong similarities. But that kind of planner could be a nice addition to my engine later on...

What I am doing is essentially my own twisted attempt at a goal-driven engine. It works somewhat like a state machine, but the state is a collection of goals. I think it will be easier to apply to games than a planner, even if it is somewhat less powerful.

Well, when it's working to my taste, you will all get a peek :)
Check out the planner Jeff Orkin designed for Monolith (used in FEAR and upcoming games)

http://www.jorkin.com/gdc2006_orkin_jeff_fear.doc

http://www.jorkin.com/aiide05OrkinJ.pdf

http://www.jorkin.com/WS404OrkinJ.pdf
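Those papers describe planning as search over symbolic world states with actions given as preconditions and effects. A toy breadth-first version of that idea, with a made-up action set (Orkin's planner uses A* with per-action costs, which this sketch omits):

```python
from collections import deque

# STRIPS-style actions: name -> (preconditions, effects).
# The action set is hypothetical, just to give the search something to chew on.
ACTIONS = {
    "draw_weapon": ({"armed": False}, {"armed": True}),
    "approach":    ({},               {"in_range": True}),
    "attack":      ({"armed": True, "in_range": True}, {"target_down": True}),
}

def satisfied(conds, state):
    """True if every condition holds in the state (absent facts count as False)."""
    return all(state.get(k, False) == v for k, v in conds.items())

def plan(start, goal):
    """Breadth-first search from the start state to any state meeting the goal;
    returns the shortest action sequence, or None if the goal is unreachable."""
    frontier = deque([(dict(start), [])])
    seen = set()
    while frontier:
        state, steps = frontier.popleft()
        if satisfied(goal, state):
            return steps
        key = frozenset(state.items())
        if key in seen:
            continue
        seen.add(key)
        for name, (pre, eff) in ACTIONS.items():
            if satisfied(pre, state):
                frontier.append((dict(state, **eff), steps + [name]))
    return None
```

Swapping the queue for a priority queue keyed on path cost plus a heuristic turns this into the A* formulation the papers discuss.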
I will have no chance of finding the reference, for it was several years ago, but I first heard about planners as an algorithm NASA was using in their space probes. They would give a probe commands like "put yourself in an economy state" and it would shut down circuits in the correct order, close valves, and find a way to work around a non-responding motor or a defective sensor. I would say that planners can be used today in real-life applications, and I have always wondered why games didn't use them more.

I am currently working on a small project of my own that looks a little like Excalibur (but far less advanced, I would say). I hadn't heard about it before, thanks for the link! If you are interested in this field, do not hesitate to drop a mail at quemener dot yves at free dot fr.
Thanks for the links, fup. Yeah, that kind of planner is great for games where the situation does not critically change too often. I wonder what kind of heuristics they used in their space search; that's really the key to making it real-time, and the most difficult part IMO. (That's why they don't speak about it too much, hehehe.)

That's funny, Yvanhoe, my planner-specialist friend worked for the Canadian Space Agency. The thing about space probes is that they operate in a really controlled environment, and they usually have plenty of time. Nobody's shooting at them (yet!)

You guys convinced me to look deeper into planners... I think a decision-making engine like mine and a planner could complement each other nicely.

Quote: Original post by Steadtler
Damn, I wrote a long reply, then hit the wrong key and lost everything. sigh.

After this happened to me twice, I started writing out my few longer posts in notepad and pasting them in :)

Quote: Original post by Steadtler
Say the agent wants to open a door. A planner will consider all of the possible sequences of actions in search of one that will open the door.

In my engine, the agent will consider only the actions that he knows are a way to open a door. But correctly implemented, it can look as if it planned ahead. If the agent knows the door is locked, he could get the key first, for example.

I was thinking about a backwards planner, such that instead of adding actions (which are in general just state transitions) to the agent's current state, you try to 'backengineer' the goal state. If the agent wants to open a door X that is locked, you search for an action that results in that state ('door X open'), and find the 'open door X' action. Now this action in turn has a start state saying 'door X is unlocked', meaning we now have to find an action that results in the door being unlocked. Looking through our available actions we find 'unlock door X', which in turn has a start state 'has key for door X', etc...
Now, this could mean that the agent tries to backengineer a state that she cannot reach with any available actions, but in this field (game AI) that is not necessarily a catastrophe. If we just add some 'timeout threshold' after which an agent considers something impossible for now, she would just choose another goal to work towards.
The reason I think this system would be computationally easier than a general planner is that the action 'eat apple' will not be considered until such a point as it actually impacts the agent's goal to open the door, i.e. when an action in the plan has a start state 'has eaten apple'.
This system doesn't seem very robust when the world state changes on its own, though.
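That backwards search can be sketched directly. The door/key encoding below is my own guess at the description above, and it assumes each fact is produced by exactly one known action; the depth limit stands in for the suggested 'timeout threshold'.

```python
# Actions indexed by the fact they produce: fact -> (action name, facts needed first).
RULES = {
    "door_open":     ("open door",   ["door_unlocked"]),
    "door_unlocked": ("unlock door", ["has_key"]),
    "has_key":       ("get key",     []),
}

def backchain(goal, state, depth=8):
    """Work backwards from the goal fact, prepending whatever actions
    establish each missing precondition. Returns None past the depth
    limit or for facts no action produces -- the cue to pick another goal."""
    if state.get(goal):
        return []                      # fact already holds, nothing to do
    if depth == 0 or goal not in RULES:
        return None                    # unreachable (for now): give up
    action, needs = RULES[goal]
    steps = []
    for fact in needs:
        sub = backchain(fact, state, depth - 1)
        if sub is None:
            return None
        steps += sub
    return steps + [action]
```

Note the 'eat apple' property: actions only enter the search when some precondition chain actually demands them. The sketch never applies effects to the state as it goes, so it works only while preconditions don't interact; a real regression planner has to track that.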

Geez. I really need to learn the terminology here. These kinds of specific research areas are impossible to discuss meaningfully without knowing the words. The 'planner' I tried to describe above is almost certainly a 'short-term yada yada planner with reversed yada yada heuristics' or somesuch.

@ Timkin
I would be most interested in reading your PhD... thesis?... if it's available.

@ fup
Thanks for the links. Will read those as soon as I get a chance.

@ Yvanhoe
I'm certainly interested, but for now I'm just trying to get my knowledge level up to par with my interest. Don't think I could contribute much as is. AI is by far the most interesting field of programming, but it is not the one I know the most about...
