
An idea

Started by January 01, 2002 10:06 PM
5 comments, last by llvllatrix 23 years, 1 month ago
I was wondering if it's possible to get the z coordinates from a 2D picture of an object. It's a real pain trying to create objects in the Milkshape editor or hard-coding them. The basis of my idea is that if a light source is directly in front of your object, darker colors would be further away and lighter colors would be closer (I'd think you would have to take the average of the RGB intensities of a color to determine the "darkness or lightness" of a spot). After that you could create a 3D mesh of the object you want to make. If anyone has tried something like this or has any other ideas, thanks for your feedback. It would make my life (and I'm sure the lives of many others) a lot easier if I just had to take a picture of the model I wanted.

Note - I haven't read the height-mapping tutorial yet. As far as I can tell, a process like this would mean creating a height map from a photograph.
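The idea in the post can be sketched in a few lines of Python: average the RGB channels to get brightness, then map brighter pixels to a z value closer to the camera. This is only a toy illustration of the brightness-as-depth assumption; `max_depth` is a made-up scale factor, since real depth would need calibration.

```python
def brightness(rgb):
    """Average the R, G, B intensities (each 0-255), as the post suggests."""
    r, g, b = rgb
    return (r + g + b) / 3.0

def heightmap(image, max_depth=1.0):
    """Turn a 2D grid of (r, g, b) pixels into (x, y, z) vertices.

    Brighter pixels get a larger z (closer to the camera, per the idea).
    max_depth is a hypothetical scale factor, not a calibrated distance.
    """
    mesh = []
    for y, row in enumerate(image):
        for x, px in enumerate(row):
            z = (brightness(px) / 255.0) * max_depth
            mesh.append((x, y, z))
    return mesh

# A 1x2 "photo": one black pixel (treated as far), one white pixel (near).
verts = heightmap([[(0, 0, 0), (255, 255, 255)]])
```

As the later replies point out, this really recovers shading, not distance, so treat it as a heightmap generator rather than a scanner.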
Not sure if this is what you're looking for exactly, but I figure it could be applied to modeling.

http://www.cs.unc.edu/~ibr/projects/HQWarping/HighQualityWarping.html

They use distance data from a laser range finder.

I wanna' ride on the pope mobile.

Edited by - Hikeeba on January 1, 2002 12:30:29 AM
I'm a 3D artist and I use 3D Studio MAX... there's a function that uses a 2D texture to model a mesh. It's called Displace. I think darker colors push the mesh up.

just giving my 2 cents

Kiroke
www.geocities.com/kiroke2
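For anyone without MAX, the displace idea is easy to sketch: push each vertex of a flat grid along its normal (here just +z) by the texture value at that point. The `scale` and `invert` parameters are assumptions for illustration; the post above suggests darker texels push up, and the flag covers either convention.

```python
def displace(grid_verts, texture, scale=1.0, invert=True):
    """Displace flat-grid vertices (x, y, z) along +z by a texture value.

    texture[y][x] is a grayscale value 0-255. With invert=True, darker
    texels displace further, matching the convention described above.
    """
    out = []
    for (x, y, z) in grid_verts:
        t = texture[y][x] / 255.0          # normalize to 0..1
        h = (1.0 - t) if invert else t     # pick which end pushes up
        out.append((x, y, z + h * scale))
    return out
```

This is essentially the same math as a heightmap; a real displace modifier would also sample the texture with interpolation and displace along true vertex normals.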
There are some techniques to take stereoscopic images and build simple 3D shapes from them. The problem is a matter of precision in your photos (knowing exact view coordinates) and making a system to identify common features in each photo. There are some high-priced systems that will do this, but I've no hands-on experience with them. They also, obviously, can't create what is not in the photo (like the backside of things, for example).
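The geometry behind the stereo approach is simple; as the post says, the hard part is matching features between photos precisely. Assuming a rectified camera pair with known focal length (in pixels) and baseline, depth follows from the standard relation Z = f·B/d:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Standard rectified-stereo relation: Z = f * B / d.

    focal_px: focal length in pixels; baseline_m: distance between the
    two camera centres in metres; disparity_px: horizontal shift of a
    matched feature between the two images, in pixels.
    """
    if disparity_px <= 0:
        raise ValueError("matched feature must have positive disparity")
    return focal_px * baseline_m / disparity_px
```

Because depth is inversely proportional to disparity, a one-pixel matching error on a distant feature causes a much larger depth error than the same error on a near one, which is exactly the precision problem described above.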

Using a single photo with a light near the camera will really only get you the curvature of the objects from the view of the camera. And that's really only if the objects are all of the same material.

A photo becoming a full 3D object is about as realistic as infinite-zoom video tapes becoming reality.
Or one of those cameras they had in Enemy of the State. They were hilarious: "Rotate that image 90 degrees around the y-axis"... "Aha, the vital clue"... Then also in the movie: "We can't work out who he is, he never looks up, so the camera never gets him"... hmmmmm
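The "curvature only" point above can be made concrete with a toy Lambertian model: with the light at the camera, brightness is I = I_max·cos(θ), so a pixel only tells you θ, the angle between the surface normal and the view direction, never the distance. `i_max` here is an assumed calibration value for a single uniform material.

```python
import math

def surface_slope_deg(intensity, i_max=255.0):
    """Invert a toy Lambertian model with the light at the camera.

    I = I_max * cos(theta), so theta = acos(I / I_max). This recovers
    how steeply the surface tilts away from the viewer -- not how far
    away it is, which is why one photo gives curvature, not depth.
    i_max is an assumed calibration constant for one material.
    """
    return math.degrees(math.acos(intensity / i_max))
```

A pixel at full brightness faces the camera head-on (0 degrees); half brightness means the surface is tilted 60 degrees away. Two surfaces at different distances but the same tilt look identical, which is the ambiguity the replies keep circling around.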

A possible problem with this method is that the light source needs to be as close as it can be to the camera; ideally, it should be the camera. But in that case, you will get a 3D model of the light source. And as soon as you move it away from the camera, you get shadows, which will stuff up the z values unless you apply some tricky algorithms.

You might even find that a neural net could do it, regardless of the position of the light source... it just might take a bit of training.

Trying is the first step towards failure.
Trying is the first step towards failure.
Failure is the first step towards success - I'm gonna fool around with some black and white images and get back to you guys, probably with more failures.
In professional 3D scanners, an extremely precise laser system is used to get the depth information.

Have a look at www.cyberware.com for some really nice stuff. Don't look at the prices though, you'll get a heart attack...
