
Using a camera as a mouse (Image recognition)

Started by February 01, 2008 09:26 AM
11 comments, last by UnshavenBastard 16 years, 9 months ago
Quote: Original post by Timkin
Quote: Original post by Steadtler
To extract the ego-motion from the optical flow, I liked a paper (it might even be one of Srinivasan's, that guy's done everything) that suggested first finding the rotation that makes the residual flow parallel. But personally, I found it easier (when I worked on the 3D case) to first find the translation that makes the residual flow null or concentric.


Did you investigate this beyond 'getting it to work'? Did you find (or do you believe) that it was an artifact of your experimental setup, or do you have reason to believe it's a more fundamental result?


The reason I found was that the second option removes the need to evaluate the direction of the parallel flow before each step of the optimization process. It eliminates a source of error and simplifies the cost function. Assuming, of course, that you know the optical center of your camera; otherwise there is no gain.

Of course, the original paper included panning and tilting with a large aperture, so it couldn't do that. Since here the only rotation is around the optical axis (I'm assuming), it could apply.
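
A rough sketch of that second option (assuming roll-only rotation, a known optical center, and a translational flow component that is approximately uniform across the image, which only holds for a distant scene): for pure roll, the residual flow at every sample point is tangential, so requiring its radial component to vanish is linear in the unknown translation and reduces to ordinary least squares.

```python
import numpy as np

def fit_translation(points, flow, center):
    """Find the uniform image-space translation whose removal leaves
    the residual flow concentric around the optical center.

    points : (N, 2) pixel positions where the flow was measured
    flow   : (N, 2) measured optical-flow vectors
    center : (2,)   optical center of the camera

    For pure roll about the optical axis, the residual flow at each
    point is tangential, i.e. its radial component is zero, so
    (flow_i - t) . u_i = 0 for every radial unit vector u_i. That is
    linear in t and solvable by ordinary least squares.
    """
    radial = points - center
    # Samples at the optical center itself should be excluded beforehand.
    u = radial / np.linalg.norm(radial, axis=1, keepdims=True)
    b = np.einsum('ij,ij->i', u, flow)  # radial component of each flow vector
    t, *_ = np.linalg.lstsq(u, b, rcond=None)
    return t
```

Note there is no per-step estimate of the parallel flow direction anywhere in that cost, which is the simplification described above.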

(My case was different too, I used additional visual cues for rotation)
Thanks for the extra info...

My situation is different as well, as I have three cameras in a fixed, known orientation, each with a wide field of view... so I get a hemispheric image, from which it is fairly easy to deduce rotations around any axis in 3D space. I have the added benefit of several accelerometers fixed in the frame of reference of the cameras... so this gives me sufficient information for visually mediated attitude control of the camera system. ;) Most of my work in this area is based around biological models for visuo-motor control of flying robots, and my flying camera system is one example, being based on the primate vestibulo-ocular system.

Ah, it's all good fun and I could chat about this stuff all day... but I'd better actually do some work... 8(

Cheers,

Timkin
hrm, I'm not sure whether this is feasible at all... but it just came to my mind while reading the thread... two ideas; I guess the second is more realistic:

1)
you have a not-too-powerful, relatively high-frequency, stationary radio transmitter sending some signal, and on the robot a direction-sensitive, rotatable antenna (radar-like); you could determine the direction of the stationary transmitter and thus know your orientation (in 2D).
I have no idea how precise this could be.
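
The geometry of that is simple, but note that a single transmitter only gives you heading if the robot's own position is already known. A sketch (function and parameter names are hypothetical):

```python
import math

def heading_from_radio(robot_pos, tx_pos, body_bearing):
    """Recover the robot's 2D heading from one antenna reading.

    robot_pos    : (x, y) of the robot, which must already be known
    tx_pos       : (x, y) of the stationary transmitter
    body_bearing : transmitter direction (radians) measured by the
                   rotating antenna, relative to the robot's forward axis

    The heading is the world-frame bearing to the transmitter minus
    the body-frame bearing at which the antenna found it.
    """
    world = math.atan2(tx_pos[1] - robot_pos[1], tx_pos[0] - robot_pos[0])
    return world - body_bearing
```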


2)
another thing would be to use your webcam and place it on a rotatable part of your robot's head.
then you have a light source that the robot can switch on and off via radio, to be sure it's the right light source.
preferably use infrared, so it isn't disturbed by other light sources in your room. My cheap Chinese webcam can pick up IR, so you just need to stick an IR filter in front of such a cam, and voilà, you have a pure IR cam.
You can use as many IR LEDs in your room as you want, at different positions the robot knows, and have it switch on, via radio, whichever LED it wants to see, if that makes orientation finding easier than with only one.
So you rotate your camera around and have the robot adjust it so that the center pixels of the camera are illuminated by the beacon; if you also know the orientation of the camera relative to the robot (rotation measurement with maybe parts from an old, non-optical mouse?), you have your robot's orientation...
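
The centering step could be a simple proportional servo loop, something like this sketch (grab_frame and rotate_camera are hypothetical stand-ins for your camera-grab and head-rotation code, and the gain and its sign would need tuning on real hardware):

```python
import numpy as np

def center_on_beacon(grab_frame, rotate_camera, gain=0.05, tol=2):
    """Rotate the camera until the IR beacon sits on the image center.

    grab_frame    : returns a 2D grayscale image as a numpy array
    rotate_camera : turns the camera head by the given angle step
    Assumes the beacon is the brightest blob in the IR-filtered image.
    """
    while True:
        img = grab_frame()
        y, x = np.unravel_index(np.argmax(img), img.shape)  # brightest pixel
        error = x - img.shape[1] / 2  # horizontal offset from image center
        if abs(error) < tol:
            return
        rotate_camera(gain * error)  # proportional correction step
```

Once the loop settles, the camera-to-body angle plus the known bearing to the beacon gives the robot's orientation, as described above.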

if you use more than one light source, you can also calculate position by triangulating from the known beacon positions and the measured bearings
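
With two or more beacons at known positions this becomes textbook triangulation: each world-frame bearing (heading plus camera angle) constrains the robot to a line through the beacon, and the constraints are linear in the unknown position. A sketch:

```python
import numpy as np

def triangulate(beacons, bearings):
    """Estimate the robot's 2D position from bearings to known beacons.

    beacons  : (N, 2) known beacon positions, N >= 2
    bearings : (N,)   world-frame bearing (radians) from robot to beacon

    Each bearing b puts the robot on the line through the beacon with
    direction (cos b, sin b); the cross-product form of that constraint
    is linear in the position, so least squares solves it directly.
    """
    s, c = np.sin(bearings), np.cos(bearings)
    A = np.column_stack([s, -c])
    rhs = s * beacons[:, 0] - c * beacons[:, 1]
    pos, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return pos
```

(The fix degrades badly when the beacons are nearly collinear with the robot, so spread the LEDs around the room.)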


EDIT:

if the rotation thing is problematic, you could use 4 webcams (if they have a wide enough FOV, that is), mount them at 90 degrees to each other on the robot's head, and infer the angles to the light sources from which pixels they light up. You'd have to know the FOV angle of the camera to do this; you could calculate it by capturing an image of your wallpaper with marks on it and measuring the camera-to-wall distance... or something (your gf might not like the marks on the wall, though).
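
The pixel-to-angle conversion that needs is just the pinhole model: given the horizontal FOV, the focal length in pixels is f = (w/2) / tan(FOV/2). A sketch:

```python
import math

def pixel_to_bearing(px, image_width, fov_deg, mount_deg):
    """Convert a lit pixel column to a bearing relative to the robot.

    px          : column of the lit pixel
    image_width : image width in pixels
    fov_deg     : horizontal field of view of the camera, in degrees
    mount_deg   : mounting angle of this camera on the head (0/90/180/270)
    """
    f = (image_width / 2) / math.tan(math.radians(fov_deg) / 2)
    offset = math.atan((px - image_width / 2) / f)
    return mount_deg + math.degrees(offset)
```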

EDIT #2:
hrm, of course this only works as long as nobody is standing in the robot's line of sight :-D

