Image Matching
Hi, I'm trying to do some image matching for a school project. I found some code that uses the difference between the descriptors of each pixel to do the comparison. I have some questions about the code and hope someone can help me out.
1. why is 0.6 used?
2. which algo is this based on?
for (int k = 0; k < Keypoints.Count; k++)
{
    int min = 0;
    int dsq, distsq1 = 100000000, distsq2 = 100000000;

    /* Find the two closest matches, and put their squared distances
       in distsq1 and distsq2. */
    for (int k1 = 0; k1 < Keypoints2.Count; k1++)
    {
        dsq = DistSquared(Keypoints[k], Keypoints2[k1]);

        if (dsq < distsq1)
        {
            distsq2 = distsq1;  // previous best becomes second best
            distsq1 = dsq;
            min = k1;
        }
        else if (dsq < distsq2)
        {
            distsq2 = dsq;
        }
    }

    /* Check whether the closest distance is less than 0.6 of the second
       closest. In integer arithmetic on squared distances this reads
       100 * distsq1 < 36 * distsq2, i.e. (dist1 / dist2)^2 < 0.6^2. */
    if (10 * 10 * distsq1 < 6 * 6 * distsq2)
    {
        key1.Add(k);
        key2.Add(min);
    }
}
/* Return the squared Euclidean distance between two keypoint descriptors. */
int DistSquared(KeypointN k1, KeypointN k2)
{
    int i, dif, distsq = 0;
    int[] pk1 = k1.Descriptor;
    int[] pk2 = k2.Descriptor;

    for (i = 0; i < 128; i++)   // descriptors are 128-element vectors
    {
        dif = pk1[i] - pk2[i];
        distsq += dif * dif;
    }
    return distsq;
}
1 - Use "source" UBB tags to display code like this.
2 - [EDIT: At first glance, I thought the following...] If your images have N (where N = width*height) pixels, then this algorithm simply treats each image as a vector in N-dimensional space and computes the usual Euclidean distance. The constants like 0.6 look to be arbitrary numbers that the author of the code chose by manual "tuning" (I would guess). [EDIT: That said, on closer inspection I think inferno82 is correct and it is not pixel values but features of some sort that are being compared.]
[Edited by - Emergent on March 19, 2010 7:01:39 AM]
There are many ways to do image matching. It can be really easy or really complicated, depending on what you need.
The easiest is like this:
// Sum-of-squared-differences between two equal-sized images.
int Compare(Image img1, Image img2)
{
    int score = 0;
    for (int x = 0; x < img1.width; x++)
    {
        for (int y = 0; y < img1.height; y++)
        {
            int d = img1[x, y] - img2[x, y];
            score += d * d; // error squared
        }
    }
    return score;
}
Just call 'Compare' to test against each image and use the one that has the lowest score.
If you want something a bit better, Google 'normalized cross correlation'.
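Here's a rough sketch of what normalized cross-correlation computes for two whole images; the double[,] grayscale representation is just an assumption for illustration:

// Normalized cross-correlation between two equal-sized grayscale images.
// Returns a value in [-1, 1]; 1 means a perfect (linear) match.
double NormalizedCrossCorrelation(double[,] a, double[,] b)
{
    int w = a.GetLength(0), h = a.GetLength(1);

    // Compute the mean intensity of each image.
    double meanA = 0, meanB = 0;
    for (int x = 0; x < w; x++)
        for (int y = 0; y < h; y++)
        {
            meanA += a[x, y];
            meanB += b[x, y];
        }
    meanA /= w * h;
    meanB /= w * h;

    // Correlate the mean-subtracted images and normalize by their energies.
    double num = 0, energyA = 0, energyB = 0;
    for (int x = 0; x < w; x++)
        for (int y = 0; y < h; y++)
        {
            double da = a[x, y] - meanA;
            double db = b[x, y] - meanB;
            num     += da * db;
            energyA += da * da;
            energyB += db * db;
        }
    return num / Math.Sqrt(energyA * energyB);
}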
If you need something that is more robust to scaling and rotation then you'll have to do feature matching. For that you might want to look at something like SIFT. YouTube has some great videos of SIFT in action.
If you're talking about object detection then you may want to look into using eigenvectors, SVMs, PCA, or a cascade of weak classifiers.
Quote: Original post by willh
If you're talking about object detection then you may want to look into using eigenvectors, SVMs, PCA, or a cascade of weak classifiers.
And, even more appropriately than PCA, ICA. (PCA gives you the components that are good for building your images, which tend to be the things your images have in common; ICA gives you the components that are good for distinguishing between them.)
There is always the Euclidean distance algorithm; it works well for black-and-white pictures.
For a control image a(width, height):
create a (width*height)-dimensional vector, 'control', where each value is the colour (grey) value of the control image;
do the same for the test images 'test1'...'testN' and compare the Euclidean distance between each test image vector and the control vector (a rough sketch follows below).
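A rough sketch of that procedure; the byte[] pixel vectors and the function names here are made up for illustration:

// Euclidean distance between two flattened grayscale images.
double DistEuclid(byte[] control, byte[] test)
{
    double sum = 0;
    for (int i = 0; i < control.Length; i++)
    {
        double d = control[i] - test[i];
        sum += d * d;
    }
    return Math.Sqrt(sum);
}

// Return the index of the test image closest to the control image.
int BestMatch(byte[] control, byte[][] tests)
{
    int best = 0;
    double bestDist = double.MaxValue;
    for (int i = 0; i < tests.Length; i++)
    {
        double dist = DistEuclid(control, tests[i]);
        if (dist < bestDist)
        {
            bestDist = dist;
            best = i;
        }
    }
    return best;
}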
I am assuming that "key points" refers to a type of feature gathered from each image. These could be anything from SIFT, FAST, Shi & Tomasi, etc. Judging by the DistSquared function, it looks like a 128-value descriptor, so I'm leaning towards a SIFT descriptor. How are the key points generated?
Quote: Original post by TeSsl
1. why is 0.6 used?
The 0.6 value, I would guess, comes from empirical testing: the original programmer found that, for his/her data set, 0.6 was the allowable distance ratio for deciding whether two features were potential matches. If that's the case, you might have to determine this value on the data set you are using. Just grab a couple of features from different images that you know are the same and compute the Euclidean distance between the descriptors to get an average separation.
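As a sketch of that calibration step (this helper and its inputs are hypothetical, not part of the posted code):

// Given descriptor pairs known to describe the same feature in two images,
// report the average Euclidean separation. This can guide the choice of
// the matching threshold (0.6 in the posted code).
double AverageSeparation(int[][] descriptorsA, int[][] descriptorsB)
{
    double total = 0;
    for (int i = 0; i < descriptorsA.Length; i++)
    {
        int distsq = 0;
        for (int j = 0; j < descriptorsA[i].Length; j++)
        {
            int dif = descriptorsA[i][j] - descriptorsB[i][j];
            distsq += dif * dif;
        }
        total += Math.Sqrt(distsq);   // Euclidean distance for this pair
    }
    return total / descriptorsA.Length;
}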
Quote: Original post by TeSsl
2. which algo is this based on?
The algorithm is performing simple feature matching. After the feature matching, I would assume you then have to determine, based on the number of features matched and their distances, whether the two scenes are the same. Again, you would have to determine what number of matched features constitutes a matched image.
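A minimal sketch of that decision step (the thresholds are made-up knobs you would have to tune on your own data):

// key1 holds the indices of matched keypoints from the loop posted above.
bool ImagesMatch(List<int> key1, int totalKeypoints, int minMatches, double minRatio)
{
    // Require some absolute number of matches...
    if (key1.Count < minMatches)
        return false;
    // ...and that a reasonable fraction of all keypoints found a match.
    return (double)key1.Count / totalKeypoints >= minRatio;
}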
- me