path planning
Alright, so for my research project I'm going to do path planning, and I'm wondering if you guys have any advice or cool articles. I know so far that I want to use a fuzzy system for the low-level reactive behaviors, and some type of NN for higher-level planning. I'm going to use the FEAR framework, which mods Q2. The part I need the most guidance on at the moment is how you put the two, I suppose they might be called "layers", together. Also, any fuzzy pathfinding tutorials that are out there would be useful, but creating a system on my own shouldn't prove terribly difficult. I've already found a number of resources; I'm just looking for any more input you guys have.
Paul.
You are going to use a NN for path planning? Perhaps a better description of the context you are doing this planning in would be of use here.
Dave Mark - President and Lead Designer
Intrinsic Algorithm - "Reducing the world to mathematical equations!"
Ok, I suppose it would. I'm going to use a fuzzy system for the actual pathfinding. The neural network would be used to reinforce desired behaviors, like picking up powerups, or going via a route that doesn't get the bot killed. At least that's my understanding of a way it could work. I'm still reading through Alex C.'s dissertation concerning a similar system; however, I'd like mine to be much simpler, since I'm going into a science fair, not a master's degree. Does what I am saying make sense?
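To make the fuzzy part concrete, here's roughly the kind of thing I'm picturing for the reactive layer (the membership ranges, rule outputs, and function names are all made up for illustration, not from FEAR or Alex's work):

```python
# Toy fuzzy reactive rule: map an obstacle distance to a turn angle.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def steer(obstacle_dist):
    """Blend two rules: IF near THEN turn 90 deg; IF far THEN go straight."""
    near = tri(obstacle_dist, -1.0, 0.0, 5.0)   # degree to which the obstacle is "near"
    far = tri(obstacle_dist, 0.0, 5.0, 11.0)    # degree to which it is "far"
    total = near + far
    # Weighted-average defuzzification of the two rule outputs (90 deg and 0 deg)
    return (near * 90.0 + far * 0.0) / total if total > 0 else 0.0
```

So a reading halfway between "near" and "far" (2.5 units here) comes out as a 45-degree turn, which is the smooth blending that makes fuzzy control nice for low-level steering.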
It sounds like what you're attempting to do is use a fuzzy neural model to make decisions about branches to take in a tree structure (defined by action choices), based on inputs that rate the quality of each branch (local powerups, number of enemies in the vicinity, etc.). Is this correct?
If it is, then my personal opinion is that there are far more reliable, far easier to implement, more well established methods for performing such planning that offer provable and measurable results.
Regards,
Timkin
Alright, well I guess I'm not heading in a wonderfully productive direction... so what kind of stuff in the area of subjective path planning would make some good research? You're talking about lots of different methods, but I've found only a few papers dealing with the topic. Could you give me some insight?
EDIT: perhaps some more information is in order now that I think about it... I want to do a project in the area of finding a path through a dynamic world, when the robot has no preconceived map of the area.
[edited by - dovyman on September 5, 2003 3:54:34 PM]
Alex (FEAR) has written a paper about this:
http://www.base-sixteen.com/Navigation/
My Website: ai-junkie.com | My Book: AI Techniques for Game Programming
Yeah, I'm in the process of reading it.
I was throwing around some ideas with a CS guy tonight, and he mentioned a project they had done which took readings on chemicals and returned a confidence level for its prediction of which chemical it was.
So we started talking about applying something like that to a project, because it seems like there's almost a missing link: most autonomous agents that navigate through unknown environments do so very reflexively, for example a subsumption architecture. And on the other side of things there are obviously algorithms like A* that deal with pathfinding when you have an internal representation. Now, applying the confidence-level idea, it would seem interesting if algorithms could be found that could "mix" these two areas.
The human brain must maintain internal representations of some sort, because we can find our way easily around familiar places, like our houses, yet it is not limited to this: obviously if someone moved your chair, you're unlikely to then be unable to navigate the room (the ability to reason about your path, and yet implement reactive responses). Now of course this brings up the issue that if you have an internal representation, you have to check it for accuracy. I think you might work around this problem by blurring the precision of the representation. For example, you could tell me the basic layout of your house, but you are unlikely to be able to say, "I have two rooms, x ft by y ft, connected by a hallway z ft long."
In short, if you could quickly calculate the confidence of the bot's knowledge of an environment to a degree (fuzzy), for example knowing just the fundamentals of a room's layout like its boundaries, then you could do this "mixing" with reactive behaviors to plan a path through the rooms to your destination while keeping an eye out for obstacles. The amount of confidence would determine the "mixing" of the methods: if you know nothing, then you must rely on purely reflexive behaviour, but if you know the boundaries of the room, you are in considerably better shape.
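A sketch of what that mixing might look like (the linear blend and the 0-to-1 confidence scale are assumptions I'm inventing, not an established method):

```python
def blend_heading(planned, reactive, confidence):
    """Blend a map-based heading with a sensor-based heading.

    confidence in [0, 1]: 0 means trust only the sensors (pure reactive),
    1 means trust only the internal representation (pure planning).
    """
    c = max(0.0, min(1.0, confidence))  # clamp out-of-range confidence
    return c * planned + (1.0 - c) * reactive
```

With full confidence you get the planned heading back unchanged; with none you fall through to purely reflexive behaviour, which matches the "if you know nothing" case.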
I hope someone actually reads this extremely long post. I'd like some feedback on what I've said; don't be too harsh, I'm just kinda spewing out some ideas that I've been mulling over.
Hello Paul. I'd like to hear from you sometime.
Cheers, comrade
Kyle Evans,
Artificial entertainment [Movie/Game Reviews]
Editor In Chief - IGWorld.com
Contact: kyle@igworld.com
quote:
Original post by fup
Alex (FEAR) has written a paper about this:
I wrote a whole PhD thesis on this!
quote:
Original post by Dovyman
So we started talking about applying something like that to a project, because it seems like there's almost a missing link: most autonomous agents that navigate through unknown environments do so very reflexively, for example a subsumption architecture.
If you have no knowledge of your environment beyond simple sonar-style (or similar vision-based) sensor readings, then you can only react to what you detect in your sensors, implementing a reactive strategy like 'identify objects, then avoid objects while moving north'. Reactive plans offer no guarantees of global optimality. Agre & Chapman give a really good coverage of this problem (reactive planning) in their 1987 paper.
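A toy version of that "avoid objects while moving north" strategy, just to show how little state a purely reactive agent carries (the sensor model and direction names are invented for illustration):

```python
def reactive_step(blocked):
    """Pick a move from the current sensor reading alone: no map, no memory.

    blocked: dict of booleans for 'north', 'east' and 'west' sensor hits.
    """
    for direction in ("north", "east", "west"):  # prefer north, else sidestep
        if not blocked[direction]:
            return direction
    return "south"  # boxed in: back off and try again next step
```

Because each step looks only at the instantaneous reading, the agent can get stuck oscillating in a concave obstacle, which is exactly the lack of global optimality mentioned above.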
quote:
Original post by Dovyman
And on the other side of things there are obviously algorithms like A* that deal with pathfinding when you have an internal representation. Now, applying the confidence level idea, it would seem interesting if algorithms could be found that could "mix" these two areas.
They already exist. They involve learning an internal representation of the environment - for example, a map of the environment - and utilising this for path planning. In my PhD thesis, I extended this idea to learning in extremely dynamic environments (fully autonomous robotic aircraft flying in a cyclone) and to dealing with uncertainty in the internal model when planning and when deciding to throw the current plan away and find a new one. This uncertainty covered two aspects: 1) mismatch between the model (and model dynamics) and the real world; and 2) the uncertainty inherent in the evolution of the beliefs (because of initial uncertainty in the state of the domain).
In particular, I developed a robust algorithm for triggering replanning in dynamic environments that were subject to uncertainty; it's called Probabilistic Prediction-based Replanning (PPR). The general idea is that you start with a model of the environment (which could be complete ignorance) and come up with a plan. While you're executing that plan, you're learning a better model of the evolving environment and using this to re-evaluate your beliefs about the quality of your current plan. Given the agent's preferences and new beliefs about the evolving environment, there will be a point at which the agent would prefer to spend some time formulating a new plan rather than continue to execute its current plan to completion. This new plan can be computed while still executing the current steps of the current plan, and the method then guarantees a gradual decline in the perceived value of the current plan prior to plan failure. Plan failure is actually avoided, so the agent is always working to complete its goals as best it can. The algorithm also offers guarantees about the optimality of the plan it ultimately executes (composed of the parts of each of its successive plans that it executes), given the dynamic nature of the environment. That is, given the parameter choices, the algorithm guarantees that the agent follows the lowest-cost plan at all times, given its beliefs, which change throughout time.
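The general shape of that replanning trigger can be sketched as follows. Since PPR itself is unpublished, this is only an illustrative placeholder for the idea of comparing expected saving against planning cost, not the actual algorithm:

```python
def should_replan(current_plan_cost, replanned_cost_estimate, planning_cost):
    """Trigger replanning when the expected saving from a new plan,
    under the latest beliefs about the environment, outweighs the
    cost of spending time computing that plan."""
    return (current_plan_cost - replanned_cost_estimate) > planning_cost
```

In a real system the three quantities would be expectations over the agent's beliefs, updated as the environment model is learned, rather than fixed numbers.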
Unfortunately you won't find any publications on PPR at this time, as I've been too swamped in my current job to publish the couple of half-written papers on the subject. As for my PhD thesis, it's still being examined (yes, STILL... apparently there are issues with the appropriateness of the examiners), so I can't give you a copy yet. However, I'm happy to discuss general ideas with you and to help with analysing others' ideas.
Cheers,
Timkin
I know that, Timkin, but your thesis is based in the real world, whereas the OP mentioned he is going to use FEAR... exactly the same environment Alex used for testing his design.

My Website: ai-junkie.com | My Books: 'Programming Game AI by Example' & 'AI Techniques for Game Programming'