'Image Matching' (Robot Exploration-Generated Maps)
(Not quite game AI, but pretty close)
For 'Senior Project' (a project required to graduate, done together by all students in the class), I'm working on 'robot exploration': over the course of two semesters, we have to build a simulation and an AI to run robots in it. In an attempt to make the project actually interesting, we're trying to make things as realistic as possible (in an abstract sense), so robots don't have universal coordinate systems, don't always know where they are relative to each other, won't always be able to communicate, etc.
One thing we'll need to tackle (eventually) is how to match two maps presuming two robots are using different coordinate systems so that their maps have different origins and different orientations.
What algorithms can handle both differences in a semi-efficient manner? So far, it seems that image correlation algorithms generally only find an object that has been translated ('different origin', basically). I'm fairly certain I've read about a similar algorithm that also dealt with arbitrary orientation and even different scale, but I'm having a hard time remembering any keywords I could use to find a relevant paper or book.
I'd appreciate any info relevant to the problem of robot exploration, and specifically anything that would help figure out how to synchronize arbitrarily rotated/translated maps that likely have only a small part in common.
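To make the problem concrete, here's roughly the brute-force version I'd like to improve on: a sketch in Python/numpy (all names are mine, and it assumes both maps are same-size binary grids, which is already a simplification). For each trial rotation of one map, an FFT cross-correlation finds the best translation:

```python
import numpy as np

def rotate_points(pts, angle, center):
    """Rotate an Nx2 array of (row, col) points about center by angle (radians)."""
    c, s = np.cos(angle), np.sin(angle)
    d = pts - center
    return np.stack([c * d[:, 0] - s * d[:, 1],
                     s * d[:, 0] + c * d[:, 1]], axis=1) + center

def best_alignment(map_a, map_b, n_angles=72):
    """Brute-force search: for each trial rotation of map_b, use FFT
    cross-correlation to find the translation overlapping map_a best.
    Returns (angle, shift, score)."""
    h, w = map_a.shape
    center = np.array([h / 2.0, w / 2.0])
    pts_b = np.argwhere(map_b > 0).astype(float)
    fa = np.fft.fft2(map_a)
    best = (0.0, (0, 0), -1.0)
    for angle in np.linspace(0, 2 * np.pi, n_angles, endpoint=False):
        # Rasterize the rotated obstacle points back onto a grid.
        rb = np.zeros_like(map_a)
        rp = np.round(rotate_points(pts_b, angle, center)).astype(int)
        ok = (rp[:, 0] >= 0) & (rp[:, 0] < h) & (rp[:, 1] >= 0) & (rp[:, 1] < w)
        rb[rp[ok, 0], rp[ok, 1]] = 1
        # The peak of the circular cross-correlation gives the best shift.
        corr = np.real(np.fft.ifft2(fa * np.conj(np.fft.fft2(rb))))
        idx = np.unravel_index(np.argmax(corr), corr.shape)
        if corr[idx] > best[2]:
            best = (angle, idx, corr[idx])
    return best
```

That's O(angles) FFTs, which already hurts for big maps, hence my question about something smarter.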
"Walk not the trodden path, for it has borne it's burden." -John, Flying Monk
There was a guest speaker at the 2005 Canadian Conference on Computer and Robot Vision (CRV 05) who gave an amazing talk on this exact subject. Unfortunately, his talk is not in the conference proceedings, but I will try to find out who he was (bad memory for names, heard he is pretty famous :P).
It was an army contract, very up-to-date stuff. When the robots became too unsure of their position, they would meet each other and co-update their relative internal data. The resulting robot-built map of a whole building floor was actually *more* accurate than hand tape measurements by army boys.
My thoughts would be to break each map down into a manageably small dataset (say marking only points at which walls change direction), and then rotate one dataset repeatedly looking for common nodes. You could break this down into smallish sectors for performance; if your robots KNOW that an area is the same for both of them (because they met there, for example) that should give you a starting sector - and would let you quickly calculate orientation for the others. As long as your robots agree on the basic geometry of the sector, knowing that they are in the same place should instantly solve the problem. If that isn't an issue, you're probably going to be stuck with "rotate, check for matches - repeat, and keep the rotation that matches best".
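Something like this rotate-and-score loop is what I'm picturing (a rough Python/numpy sketch with made-up names; note I'm cheating by aligning centroids, which only really works when the two point sets cover roughly the same area):

```python
import numpy as np

def match_score(pts_a, pts_b, tol=1.0):
    """Count points in pts_b that land within tol of some point in pts_a."""
    # Naive O(n*m) distance check; fine for small corner sets.
    d = np.linalg.norm(pts_a[None, :, :] - pts_b[:, None, :], axis=2)
    return int(np.sum(d.min(axis=1) < tol))

def best_rotation(pts_a, pts_b, n_angles=360):
    """Rotate pts_b about its centroid, align centroids, keep the best score."""
    ca, cb = pts_a.mean(axis=0), pts_b.mean(axis=0)
    best_angle, best_score = 0.0, -1
    for angle in np.linspace(0, 2 * np.pi, n_angles, endpoint=False):
        c, s = np.cos(angle), np.sin(angle)
        rot = np.array([[c, -s], [s, c]])
        cand = (pts_b - cb) @ rot.T + ca  # rotate, then move onto pts_a's centroid
        score = match_score(pts_a, cand)
        if score > best_score:
            best_angle, best_score = angle, score
    return best_angle, best_score
```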
I don't know much about the field, but that's how I'd do it. Bored waiting for a database update at work, figured I'd pitch in my 2 cents!
Bracket: Well, the representation I have in mind for the map is basically a grid where each cell is a smaller grid that actually stores the obstacle info (and the large grid would be able to expand in any direction - it's there as an implementation detail, basically). At the very least, the 'small cells' will store 'probability of an obstacle being here' (used to represent different sensor types and lower accuracy at farther ranges). Inside a certain radius, the probabilities will be set to 0.0 or 1.0, but outside they might be any value in between.
It wouldn't be too big a deal to entirely ignore cells not set to 1 or 0, so the algorithm could essentially work as if it were matching bitmaps.
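Roughly, each 'small grid' block would look something like this (just an illustrative Python sketch, nothing final; all names are mine):

```python
import numpy as np

class ProbGrid:
    """One 'small grid' block: each entry is P(obstacle here)."""
    def __init__(self, size=32):
        self.p = np.full((size, size), 0.5)  # 0.5 = completely unknown

    def certain_masks(self, eps=1e-6):
        """Split into the two bitmaps the matcher would use:
        cells known empty (p ~ 0) and known blocked (p ~ 1).
        Everything in between gets ignored."""
        empty = self.p < eps
        blocked = self.p > 1.0 - eps
        return empty, blocked
```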
Of course, once the robots meet, the problem becomes much simpler (match the area around them, then have one reset its orientation and origin to the other's and adjust its map appropriately). But it's possible two robots can't come into contact yet can observe similar areas (such as a small waterway between them that they can sense across but can't move over), and it's likely they'll often start out outside each other's sensor ranges but still be in communication range. (They could triangulate positions with directional comm devices, but it'd be good to have a backup, since triangulation can be very slow and isn't too precise without specialized equipment.)
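For reference, the triangulation I have in mind is the bearing-only kind: one robot takes bearings to the other from two different points along its own path (both expressed in its own map frame) and intersects the rays. A toy Python sketch, all names mine:

```python
import numpy as np

def triangulate(p1, b1, p2, b2):
    """Locate a target from two bearings taken at known points p1, p2.
    b1, b2 are bearings (radians) from each point to the target, in the
    observer's own frame. Solves p1 + t1*d1 = p2 + t2*d2 for the intersection."""
    d1 = np.array([np.cos(b1), np.sin(b1)])
    d2 = np.array([np.cos(b2), np.sin(b2)])
    A = np.column_stack([d1, -d2])
    try:
        t = np.linalg.solve(A, np.asarray(p2, float) - np.asarray(p1, float))
    except np.linalg.LinAlgError:
        return None  # rays are parallel: no fix
    return np.asarray(p1, float) + t[0] * d1
```

This is why it's slow: the observer has to move between readings, and small bearing errors blow up when the rays are nearly parallel.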
Steadtler: I think I read a paper about that, but IIRC it involved a lot of things very specific to their situation (indoor environment, everything very angular, etc). We'll be handling mostly outdoor environments (thus the simulation - we don't care about mechanics etc so much as making the software work despite other problems {which we can easily simulate once we make the simulation}), and while we technically have complete control over everything because of the simulation, we'd like to make it handle as much as possible. We're thinking of it as the kind of thing that would be used to control robots mapping Mars (no GPS, semi-random starting position, etc) as a 'worst case' situation.
"Walk not the trodden path, for it has borne it's burden." -John, Flying Monk
Quote: Original post by Extrarius
One thing we'll need to tackle (eventually) is how to match two maps presuming two robots are using different coordinate systems so that their maps have different origins and different orientations.
When attempting to match maps, are you in a process of inter-robot communication? Since each robot knows its location within its own map couldn't you also pass that information along to the other and thereby get a transform matrix to apply to the other's map?
Basically, during communication presumably you could have a mechanism whereby the robot is able to measure the relative offset/orientation of the other. (range finder + some mechanism for determining orientation).
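i.e. once you've measured that relative pose, the merge is just one homogeneous transform. A minimal Python/numpy sketch (names made up, and it assumes you already have the (x, y, heading) measurement):

```python
import numpy as np

def frame_b_to_a(pose_b_in_a):
    """Given robot B's pose measured in A's frame as (x, y, heading),
    return a 3x3 homogeneous matrix taking B-frame map points into A's frame.
    (Assumes the range finder gives (x, y) and heading comes from somewhere.)"""
    x, y, th = pose_b_in_a
    c, s = np.cos(th), np.sin(th)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

# e.g. B is 3m ahead and 1m left of A, facing 90 degrees relative to A:
T = frame_b_to_a((3.0, 1.0, np.pi / 2))
wall_in_b = np.array([2.0, 0.0, 1.0])   # a wall point from B's map (homogeneous)
wall_in_a = T @ wall_in_b               # the same point in A's map
```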
Or is this an offline task where you just have 2 robot maps and the two are not currently in communication?
Or you could be like bees and use the angle of the sun & the location of a common "home" to get an absolute frame of reference. =)
-me
Once robots know their location relative to each other, the process is trivial. The problem comes when they can't sense each other but are inside communication range.
"Walk not the trodden path, for it has borne it's burden." -John, Flying Monk
Presumably you've read the wealth of literature (or at least a good sampling) on the topic of Simultaneous Localization and Mapping (SLAM)? Some questions...
What sort of uncertainties are you incorporating into the simulation? Presumably some sensor error. Are you also simulating movement uncertainty in your robots? (That would mean that when a robot comes back to somewhere it has already mapped, it cannot be certain it's the same area, because motion uncertainty will cause misalignment.)
As for aligning maps, you will probably want to use techniques developed for image mosaicing, which are often applied for aerial and satellite mapping, particularly if you're mapping outdoor areas with presumably non-flat terrain. The alternative would be to do landmark mapping and alignment based on correlation of labelling and orientation. Holler if you want more details.
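For the landmark route: once you've decided which landmarks correspond, the alignment itself is a small least-squares problem (the 2D Kabsch/Procrustes fit). A sketch in Python/numpy; finding the correspondences is the hard part and is not shown:

```python
import numpy as np

def fit_rigid_2d(pts_a, pts_b):
    """Least-squares rotation + translation taking pts_b onto pts_a,
    given N matched landmark pairs (the Kabsch/Procrustes method)."""
    ca, cb = pts_a.mean(axis=0), pts_b.mean(axis=0)
    H = (pts_b - cb).T @ (pts_a - ca)   # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = ca - R @ cb
    return R, t                         # pts_a ~ (R @ pts_b.T).T + t
```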
Cheers,
Timkin
Quote: Original post by Timkin
Presumably you've read the wealth of literature (or at least a good sampling) on the topic of Simultaneous Localization and Mapping (SLAM)? Some questions...[...]
We (the class working on it) are going into this problem without knowing anything about the area. I've read a few papers, but so far we've mostly been focusing on the code design, since we've barely started and are supposed to have something (a simulation with robots) soon. It'll be a while before we get far enough to start worrying about various uncertainties, but eventually we'd like to allow for as many as possible (including things like a broken robot sending out nonsense maps and claiming its map should take priority based on whatever metrics we use). So far, I don't think we ever considered the difference true 3D terrain would make, and I don't think we'll be implementing that (besides having different types of obstacles that some robots can pass and others cannot, and possibly weights for movement cost over various terrain).
I'd love to hear more, and thanks for the keywords =-)
"Walk not the trodden path, for it has borne it's burden." -John, Flying Monk
Quote: Original post by Extrarius
Steadtler: I think I read a paper about that, but IIRC it involved a lot of things very specific to their situation (indoor environment, everything very angular, etc). We'll be handling mostly outdoor environments (thus the simulation - we don't care about mechanics etc so much as making the software work despite other problems {which we can easily simulate once we make the simulation}), and while we technically have complete control over everything because of the simulation, we'd like to make it handle as much as possible. We're thinking of it as the kind of thing that would be used to control robots mapping Mars (no GPS, semi-random starting position, etc) as a 'worst case' situation.
Yes, it's aimed at indoor environments. Oh well, the guy's name is Dieter Fox, and from what I've gathered he is one of the experts on the subject. Here is his homepage:
http://www.cs.washington.edu/homes/fox/
Check out his excellent paper "Distributed Multi-robot Exploration and Mapping"
You can gather good ideas, keywords, and references from that.
As for keywords: since I worked on this very issue at a robotics company, I have done a ton of research and remember the pain that can be involved, as there are several methods. Some are computationally very expensive and others are more progressive. Traditionally these algorithms are used for correcting drift or error in a robot's encoders/odometry, but they should still apply for comparing separate maps from different robots.
Look at the website of the company Evolution Robotics (http://www.evolution.com), as they used to have some decent papers on their site (not going into detail, of course). But there are several algorithms out there.
The tough part, you might find, is building the simulator application and making it flexible enough to extend later with this localization routine and behaviors too.
The speaker the previous poster might have heard was Paola Penjamin (sp?); I believe he is one of the lead architects at Evolution Robotics. The keywords listed below should help you with the problem you describe.
Keywords:
SLAM
VSLAM
Localization
Self Localization