Hi guys,
I've got a difficult problem that I don't know how to solve, seeing as I haven't studied anything regarding pathfinding except for the A* algorithm.
I need some help for the case where the object that is supposed to find a goal doesn't know anything about its surroundings except for the info it gathers. For example, a unit needs to move to a position. It knows that this position is at a distance of 100 feet and at a 45 degree angle to its current heading. So technically it knows which direction to go in and when to stop. I could mark this point in a grid and use A* to get the fastest path. Of course there might be walls, houses or trees in the way. So these obstacles would be noted in the grid once they are found, and A* can be used again to calculate the new path needed to get to the goal.
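Here's roughly what I have in mind, as a Python sketch (the grid layout, the sense() callback and all the names are just things I made up to illustrate the idea, not from any library):

```python
# Sketch: the unit keeps a grid of what it has seen so far, marks obstacles as it
# discovers them, and reruns A* whenever the planned path turns out to be blocked.
import heapq

def a_star(start, goal, blocked, size):
    """Plain A* on a 4-connected grid; 'blocked' is the set of cells known so far."""
    def h(c):  # Manhattan distance heuristic
        return abs(c[0] - goal[0]) + abs(c[1] - goal[1])
    open_set = [(h(start), 0, start)]
    came_from, g = {}, {start: 0}
    while open_set:
        _, cost, cur = heapq.heappop(open_set)
        if cur == goal:
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dx, cur[1] + dy)
            if not (0 <= nxt[0] < size and 0 <= nxt[1] < size) or nxt in blocked:
                continue
            if cost + 1 < g.get(nxt, float("inf")):
                g[nxt] = cost + 1
                came_from[nxt] = cur
                heapq.heappush(open_set, (cost + 1 + h(nxt), cost + 1, nxt))
    return None  # no path through the known obstacles

def walk_to_goal(start, goal, size, sense):
    """Step one cell at a time; 'sense(pos)' returns obstacle cells the unit can see."""
    known_blocked, pos = set(), start
    path = a_star(pos, goal, known_blocked, size)
    while path and pos != goal:
        known_blocked |= sense(pos)                        # gather info about surroundings
        if any(c in known_blocked for c in path):          # planned route now blocked?
            path = a_star(pos, goal, known_blocked, size)  # replan with what we know
            if path is None:
                break
        pos = path[path.index(pos) + 1]                    # take the next step
    return pos
```

Here sense(pos) would stand in for whatever sensors the unit has and simply return the obstacle cells within range of pos; everything not yet sensed is treated as walkable.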
How the unit collects data doesn't really matter to me right now, be it via camera, lasers, ultrasound etc. I really would like to know how some of the simple robots looking for a goal do it. Even if I divided everything up into a grid, I'd need a fairly small grid size, wouldn't I? Otherwise an obstacle found in the cell next to the unit automatically marks that whole cell as "don't go through here", even though the obstacle might only take up half of the cell. So passages that would be big enough to walk through end up being avoided. I could imagine solving that with a very small grid size, but I imagine that would cost an enormous amount of memory, and I don't think real-time systems take too kindly to that approach.
I'm not sure if I've illustrated this problem clear enough, but I hope to get some info on how this kind of situation is usually taken care of.
Cheers!
Pathfinding in unknown terrain
The simplest approach would be to have the bot wander towards its goal, just taking whichever next node is closest to its destination. If it knows the direction to go, this is a reasonably reliable (and very human) way to get to its destination.
Think of walking through the woods to get to someone's house for the first time: you don't know the terrain or even where the fallen branches and trees are, but if you know the direction, you can get there just by going through the woods and keeping, as much as you can, to the direction of that house.
Check the nodes around you (the ones that would be visible) and take the one closest to your destination. There is a pitfall if the geometry forms a pocket (a concave obstacle) and the AI gets stuck, but that just requires backtracking a bit (to the point before you entered the pocket) and taking the next best node.
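Something like this, roughly (a Python sketch of the idea; the passable() callback and all the names are just invented for illustration):

```python
# Sketch: step to whichever visible neighbour is closest to the goal, remember
# where you've been, and back out again if you walk into a dead end.
import math

def greedy_walk(start, goal, passable, max_steps=10000):
    """'passable(cell)' says whether a neighbouring cell can be entered, i.e. what
    the unit can see from where it stands. Returns the route it settled on."""
    def dist(c):
        return math.hypot(c[0] - goal[0], c[1] - goal[1])

    pos, trail, visited = start, [start], {start}
    for _ in range(max_steps):
        if pos == goal:
            break
        neighbours = [(pos[0] + dx, pos[1] + dy)
                      for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                      if (dx, dy) != (0, 0)]
        options = [n for n in neighbours if passable(n) and n not in visited]
        if options:
            pos = min(options, key=dist)   # step to the neighbour nearest the goal
            visited.add(pos)
            trail.append(pos)
        elif len(trail) > 1:
            trail.pop()                    # dead end: back out the way we came
            pos = trail[-1]
        else:
            break                          # nowhere left to try
    return trail
```

The visited set is what lets it back out of a pocket instead of bouncing straight back into it, at the cost of sometimes taking a longer route than a full planner would.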
If you are worried about grid resolution, use a quad-tree and only subdivide the grid cells as you come across them, and only until each cell is the size of the robot. This will allow you a very workable grid cell resolution with minimal memory usage, but will still be very fast (O(log n) for a quad tree instead of O(1) for the fixed size grid solution).
Quote: Original post by kuroioranda
If you are worried about grid resolution, use a quad-tree and only subdivide the grid cells as you come across them, and only until each cell is the size of the robot. This will allow you a very workable grid cell resolution with minimal memory usage, but will still be very fast (O(log n) for a quad tree instead of O(1) for the fixed size grid solution).
Sounds very interesting. Any article or paper you know that explains this concept, or a similar implementation? I've never used a quad tree, I'd have to look into that first, but thanks for the info so far!
Fate fell short this time your smile fades in the summer.
Quote: Original post by BrickInTheWall
Quote: Original post by kuroioranda
If you are worried about grid resolution, use a quad-tree and only subdivide the grid cells as you come across them, and only until each cell is the size of the robot. This will allow you a very workable grid cell resolution with minimal memory usage, but will still be very fast (O(log n) for a quad tree instead of O(1) for the fixed size grid solution).
Sounds very interesting. Any article or paper you know that explains this concept, or a similar implementation? I've never used a quad tree, I'd have to look into that first, but thanks for the info so far!
There was a pretty good article on gamedev a while back. The problem they wanted to solve with it was a little different, but the same basic idea applies. You have a very large space that is mostly empty of data, but the parts that do have data need to be fairly granular.
Quad trees
A common solution for real world robots is SLAM (Simultaneous Localization and Mapping). Just google for that and you'll find what you're looking for.
-me
Quote: Original post by kuroioranda
If you are worried about grid resolution, use a quad-tree and only subdivide the grid cells as you come across them, and only until each cell is the size of the robot. This will allow you a very workable grid cell resolution with minimal memory usage, but will still be very fast (O(log n) for a quad tree instead of O(1) for the fixed size grid solution).
This.
You don't need articles if you get the idea. Take your world, divide it into 4 spaces and put them into a list structure. If you run into an obstacle, divide the space you are in into 4 smaller spaces hanging off the one you are currently in via a tree arrangement (e.g. this space points to the 4 smaller spaces and vice versa). That way, you are only creating resolution where you need it. As you discover more about the world, you can mark obstacles in the appropriate sections. In the meantime, you assume everything you can't see is passable.
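In rough Python, the idea looks something like this (just a sketch; the class and method names are mine, not from anything existing):

```python
# Sketch: a node covers a square region and starts out assumed passable; when an
# obstacle is found somewhere inside it, split it into four children and only
# mark the quarter that actually contains the obstacle, repeating down to some
# minimum cell size (roughly the size of the robot).

class QuadNode:
    def __init__(self, x, y, size):
        self.x, self.y, self.size = x, y, size   # lower-left corner and edge length
        self.children = None                     # None = leaf
        self.blocked = False                     # unknown space is assumed passable

    def contains(self, px, py):
        return self.x <= px < self.x + self.size and self.y <= py < self.y + self.size

    def subdivide(self):
        half = self.size / 2
        self.children = [QuadNode(self.x,        self.y,        half),
                         QuadNode(self.x + half, self.y,        half),
                         QuadNode(self.x,        self.y + half, half),
                         QuadNode(self.x + half, self.y + half, half)]

    def mark_obstacle(self, px, py, min_size):
        """Add resolution only where an obstacle was actually seen."""
        if not self.contains(px, py):
            return
        if self.size <= min_size:
            self.blocked = True        # cell is already robot-sized: just flag it
            return
        if self.children is None:
            self.subdivide()           # split on demand, never up front
        for child in self.children:
            child.mark_obstacle(px, py, min_size)

    def is_blocked(self, px, py):
        """Walk down the tree; a lookup is a few steps instead of a huge flat grid."""
        if self.children is None:
            return self.blocked
        for child in self.children:
            if child.contains(px, py):
                return child.is_blocked(px, py)
        return False
```

You start with a single root node covering the whole world (e.g. QuadNode(0, 0, 1024)) and call mark_obstacle(x, y, robot_size) whenever the sensors report something; the tree only grows around the obstacles the unit has actually found.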
Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC
Professional consultant on game AI, mathematical modeling, simulation modeling
Co-founder and 10 year advisor of the GDC AI Summit
Author of the book, Behavioral Mathematics for Game AI
Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play
"Reducing the world to mathematical equations!"
Quote: Original post by InnocuousFox
You don't need articles if you get the idea. Take your world, divide it into 4 spaces and put them into a list structure. If you run into an obstacle, divide the space you are in into 4 smaller spaces hanging off the one you are currently in via a tree arrangement (e.g. this space points to the 4 smaller spaces and vice versa). That way, you are only creating resolution where you need it. As you discover more about the world, you can mark obstacles in the appropriate sections. In the meantime, you assume everything you can't see is passable.
Thanks, I've got it now. The article about using quadtrees to determine what is in the camera's field of view gave me what I needed. I simply didn't know what a quadtree was.
Thanks guys!
Fate fell short this time your smile fades in the summer.
I hate to bump my own thread, but I've recently read something about the D*, D* Lite, Focussed D* and LPA* algorithms, which are apparently used for dynamic environments, i.e. ones that can be subject to change. No info/map is needed at the beginning, except of course where the goal is (though this could also just be an approximation). Sadly, the papers are very hard for me to understand, as I have problems when it comes to maths and English, since all the math I learned was in German. These algorithms do look interesting though. I've tried looking for some more info on D*, but I really can't find a lot. Has anyone ever had any experience using these? Sure, I could just use A* every time I need to recalculate my path, but these algorithms look very interesting, and I'm quite curious.
Fate fell short this time your smile fades in the summer.
Quote: Original post by BrickInTheWall
Sadly, the documents are very hard for me to understand
Yeah... part of academic life, I'm afraid. I haven't implemented D*-Lite myself, but I'm aware of its importance; some labmates have done work with it.
Does the list of papers you've looked at include "the" D*-Lite paper by Koenig? Reading it thoroughly would take some time, but it doesn't look intimidatingly dense...