Quote:
Original post by BrianL
The basic idea is that you generate neighbors by finding actions which can solve something in the current state. The simplest implementation would test every action when looking for neighbors. Usually, you can do better by precomputing actions which can potentially solve different properties.
I see. It just occurred to me that you probably don't have to create the graph per se. You could just store the actions in a map, indexed by which properties of the state they affect, and then at each step look up only the actions for the properties that are still missing from the goal state? That way, you avoid the 'problem' of creating the graph, yet you still wouldn't have to fall back to the base case of checking all actions each step...
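Roughly what I'm picturing, as an untested sketch (Property, Action, ActionIndex and all the other names here are placeholders I made up, not anything from BrianL's SDK):

```cpp
#include <cstddef>
#include <string>
#include <unordered_map>
#include <vector>

// A state property an action can make true, e.g. "HasWeapon".
using Property = std::string;

struct Action {
    std::string name;
    std::vector<Property> effects;        // properties this action satisfies
    std::vector<Property> preconditions;  // properties required to run it
};

// Index from property -> actions whose effects include that property.
// Built once up front, so neighbor generation never scans every action.
class ActionIndex {
public:
    void add(const Action& action) {
        actions_.push_back(action);
        for (const Property& p : action.effects)
            byEffect_[p].push_back(actions_.size() - 1);
    }

    // Neighbors for one search step: only the actions that could satisfy
    // at least one property still missing from the goal. (An action that
    // satisfies several missing properties shows up once per property;
    // dedupe if that matters for your search.)
    std::vector<const Action*> candidates(const std::vector<Property>& missing) const {
        std::vector<const Action*> result;
        for (const Property& p : missing) {
            auto it = byEffect_.find(p);
            if (it == byEffect_.end()) continue;
            for (std::size_t idx : it->second)
                result.push_back(&actions_[idx]);
        }
        return result;
    }

private:
    std::vector<Action> actions_;
    std::unordered_map<Property, std::vector<std::size_t>> byEffect_;
};
```

Each step of the search would then only touch the buckets for the unsatisfied goal properties, instead of testing the whole action list.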
Quote:
Original post by BrianL
I believe we put out an SDK containing all of the planning code. It's a pretty massive download, but you might want to check out the 1.04 release. Look at the module named 'aiplanner.cpp'.
If you are serious about planning, you might want to look at the book 'Automated Planning: Theory and Practice'. It covers the field fairly well.
I'm certainly serious enough about it to check out your SDK and that book. Thanks for the tips.
Quote:
Original post by hymerman
What most of you seem to be talking about is actually a problem solver, not a planner. Usually, if you're using A*, you're making a problem solver, searching the space of states for a solution. A planner searches the space of partial plans for a plan from start state to goal state.
Could you explain the difference to me? The only references I could find seemed to say a planner is a particular form of problem solver.
I've read a lot of references to STRIPS, but always in the context of how the proposed planner in the article is better in some way or another, so I came to see STRIPS as the outdated base case that was either hopelessly inefficient or too static for problems such as agents in a dynamic world. It is some thirty-odd years old now, after all. I might want to revise that opinion and take a look. Also, I just learned about
Markov Decision Processes, which also seem like something I should look into. Wikipedia is your friend.
I'm beginning to get a grasp of where I'm aiming with this. Thanks for all your input.
One problem (of many) I can foresee is how to enforce something like C4's sensory honesty. From the little I've found about that actual system, they seem to actually record sounds and images and then perform speech recognition, etc. Way overkill for my needs in any case. Right now I'm thinking that the agent 'sees' state variables just like the ones used to depict states internally. These are then stored in 'working memory', and each update they degrade if they are not still visible to the agent. Thus, something seen a while ago will have a lower 'probability' or 'certainty'. But that raises the question of what certainty to assign to things that have never been sensed?
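In code, something like this is what I'm imagining (a sketch only; the names, the per-tick decay, and the 'zero confidence for never-sensed facts' choice are all just my own assumptions for now):

```cpp
#include <string>
#include <unordered_map>

// One remembered fact about the world, e.g. "EnemyVisible".
struct MemoryFact {
    bool value = false;
    float confidence = 0.0f;  // 1.0 = just sensed, decays toward 0
};

class WorkingMemory {
public:
    // Called when the agent currently senses a property: full confidence.
    void sense(const std::string& key, bool value) {
        facts_[key] = {value, 1.0f};
    }

    // Called once per update tick; anything not re-sensed fades.
    void decay(float rate) {  // e.g. 0.95f per tick
        for (auto& entry : facts_)
            entry.second.confidence *= rate;
    }

    // Never-sensed properties come back with confidence 0 ("no idea") --
    // just one possible answer to the open question above; a per-property
    // prior might work better.
    MemoryFact query(const std::string& key) const {
        auto it = facts_.find(key);
        return it != facts_.end() ? it->second : MemoryFact{};
    }

private:
    std::unordered_map<std::string, MemoryFact> facts_;
};
```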
Probably way too early to get into those specifics though. I'll assume that way works for now. To be continued...
@AP: That system sounds a lot like where I'm aiming. Is that something you've implemented, or is it a design you've made?