
Navigation AI

Started by October 03, 2003 10:34 PM
4 comments, last by Dovyman 21 years, 1 month ago
Alright, I'm doing navigation with fuzzy subsumption to find paths in environments with multiple levels (other planes, cliffs, ramps, etc.). I was trying to think of ways to minimize the time it takes to find a path to the goal when the environment is unknown. One obvious but lengthy method would be to simply backtrack whenever the robot finds its path is bad or a dead end. However, that could cause the robot to go around the entire map before finding the goal. I realize attempting to plan with a reactive system may be a losing battle, but what other ways can I do this? One idea was some type of tree that partitions the environment as the robot encounters it and lets an algorithm determine the best behavior. Not really sure, any ideas?
The tree system sounds good. Every time you get to a decision point, you make a node. For each direction available to you at the decision point, you create a leaf off the node. If you get to a dead end (i.e. no more leaf nodes), then you simply return to the next highest node that has unexplored leaves. Let's face it, that is how we explore unknown maps anyway, isn't it? If we get stuck, we go back to a place we haven't explored and go down a new hall.
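In rough Python, something like this (the class, the names, and the `directions_at` sensing callback are all just illustrative, not anything from an actual implementation):

```
class ExploreNode:
    def __init__(self, position, parent=None):
        self.position = position      # where this decision point is
        self.parent = parent          # node we backtrack to from here
        self.seen = set()             # directions already recorded here
        self.untried = []             # directions not yet explored

def next_move(current, directions_at):
    """Return (node, direction) to try next, backtracking if necessary."""
    # Record any newly visible choices at this decision point.
    for d in directions_at(current.position):
        if d not in current.seen:
            current.seen.add(d)
            current.untried.append(d)

    # Walk from the current node up toward the root and take the first
    # node that still has an untried direction (depth-first with backtracking).
    node = current
    while node is not None:
        if node.untried:
            return node, node.untried.pop()
        node = node.parent
    return None, None  # the whole known tree is exhausted
```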

Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC
Professional consultant on game AI, mathematical modeling, simulation modeling
Co-founder and 10-year advisor of the GDC AI Summit
Author of the book Behavioral Mathematics for Game AI
Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play

"Reducing the world to mathematical equations!"

Hm, I guess I'll give that a shot then. Is this a new idea I'm having, or is it something that's been tried already? If so, where?
quote: Original post by InnocuousFox
If you get to a dead end (i.e. no more leaf nodes), then you simply return to the next highest node that has unexplored leaves.


That's simply backtracking, Dave, which Dovyman already suggested and rejected as too lengthy.

quote: Original post by InnocuousFox
Let's face it, that is how we explore unknown maps anyway, isn't it? If we get stuck, we go back to a place we haven't explored and go down a new hall.


Not really, actually. Anecdotal evidence suggests that people actually plan what they think is the best path based on what they see and what they infer as being unseen (based on other environments). Rats do this as well. You can put a rat in a maze and allow it to learn it. Then put it in a different maze that has the same beginning. It will try and execute its initial plan based on the earlier maze until it realises it's in a different maze.

So this forms the basis of an interesting approach to planning in partially observable domains (check out the POMDP literature). Generate a probabilistic model of the sorts of things you might find in the domain: doors, walls, rooms with multiple exits, obstacles, etc. Train your agent on a series of domains so that it can learn the correlations between the likely existence of one object and another and thus form higher-order conditional models; e.g., p(door|corridor). You can extend this to an arbitrary-order conditional distribution, although the amount of data required to support such classifications scales with the order.

Then classify the current domain based on what has been seen and infer what things may be unseen just over the horizon. Make your decision based on maximising some utility measure at this horizon.
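As a very rough sketch of what I mean (the `FeatureModel` class, the feature names, and the utility numbers below are all made up purely for illustration), you could learn the conditional frequencies from training maps and use them to score what probably lies just past the horizon:

```
from collections import defaultdict

class FeatureModel:
    """Toy conditional model: p(feature | context) estimated from counts."""
    def __init__(self):
        self.pair_counts = defaultdict(int)     # (context, feature) -> count
        self.context_counts = defaultdict(int)  # context -> count

    def train(self, observations):
        # observations: (context, feature) pairs gathered from training maps,
        # e.g. ('corridor', 'door'), ('corridor', 'dead_end'), ...
        for context, feature in observations:
            self.pair_counts[(context, feature)] += 1
            self.context_counts[context] += 1

    def p(self, feature, context):
        # Estimate p(feature | context), e.g. p('door' | 'corridor').
        total = self.context_counts[context]
        return self.pair_counts[(context, feature)] / total if total else 0.0

def expected_payoff(model, context, utilities):
    # Expected utility of pushing past the horizon given the local context.
    # utilities: dict mapping possible hidden features to payoff estimates.
    return sum(model.p(f, context) * u for f, u in utilities.items())

# Rough usage:
# model = FeatureModel()
# model.train([('corridor', 'door'), ('corridor', 'door'), ('corridor', 'dead_end')])
# utilities = {'door': 1.0, 'dead_end': -1.0}
# best = max(frontiers, key=lambda f: expected_payoff(model, f.context, utilities))
```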

There's plenty of literature on limited horizon problems for partially observable Markov decision processes (POMDPs). You might want to start with the work done by Tom Dean, Leslie Kaelbling or Ann Nicholson.

Here's an alternative I've ripped off the top of my head as I was typing this. Measure how 'open' the domain is... that is, the ratio of open space to total floor space in the domain seen so far. There will be a correlation between this value and how restricted paths between any two points are, which is again correlated with the cost to travel between those two points. For points in the domain that are close together, an open domain should mean that a straight line can be pursued. A cluttered domain would mean that more effort needs to be spent on local obstacle avoidance.

Over longer distances, the more open the domain, the higher the likelihood of being able to take a straight line (and hence a higher preference for starting out aiming at the target point). In a closed, loosely connected domain, more effort will need to be spent on testing possible routes and building a map. In the worst case scenario, you'd have to rely on brute force search and backtracking.
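As a quick illustration of the openness measure (the grid encoding and the threshold are just assumptions I've picked for the example):

```
def openness(grid):
    # grid: 2D list over the explored area; 0 = open floor, 1 = wall/obstacle,
    # None = not yet observed. Returns open cells / all observed cells.
    open_cells = seen_cells = 0
    for row in grid:
        for cell in row:
            if cell is None:
                continue          # ignore unexplored cells
            seen_cells += 1
            if cell == 0:
                open_cells += 1
    return open_cells / seen_cells if seen_cells else 0.0

def pick_strategy(grid, open_threshold=0.7):
    # High openness: head roughly straight for the goal and rely on local
    # obstacle avoidance. Low openness: spend more effort mapping and
    # testing candidate routes (falling back on search and backtracking).
    return ("beeline_with_local_avoidance"
            if openness(grid) >= open_threshold
            else "map_and_search")
```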

However, if you build a map of the domain (which can be used to do your 'openness' computation), then you could possibly infer likely successful routes to the target, based on the context of the horizon (the unexplored bits right on the edge of the observed domain). A corridor heading toward another corridor is probably going to join up. However, if there is enough space to put a room in between the two, the corridor might end in a room. But then, the room might have a door on the other side of it. This gets back to the model I mentioned above, where you want to know how likely things are in the domain.

Let us know how you get on with this problem. BTW, did you say previously that you were doing this as part of a research project for college, or is this just out of interest for yourself?

Cheers,

Timkin
Well, I find AI interesting, but not enough to pursue research of this kind. I actually attend a specialized high school for math and the sciences, and part of our curriculum requires us to complete a major research project to enter into regional/state/international science fairs.

Your approach sounds interesting. From all my investigations thus far, it seems that I can't avoid finding some situation where the environment is designed counter to the algorithm, but your plan seems to minimize the number of situations where that might be the case. I'll also have to take a peek at the other research out there.

Thanks,
Paul
quote: Original post by Dovyman
I'll also have to take a peek at the other research out there.


Some of it can be a bit brutal mathematically, although that's partly because most computer scientists don't know how to communicate maths effectively or efficiently. I'm sure you'll be able to pick it up. If you have difficulties, just holler at this list. There are quite a few very intelligent people here who can help you out.

Cheers,

Timkin
