i² = -1;
Gaze into the monitor...
When I read the data from the depth buffer, I get a number between 0 and 1. How can I convert that into the distance (in world units) from the camera to whatever is on screen at that pixel?
Chess is played by three people. Two people play the game; the third provides moral support for the pawns. The object of the game is to kill your opponent by flinging captured pieces at his head. Since the only piece that can be killed is a pawn, the two armies agree to meet in a pawn-infested area (or even a pawn shop) and kill as many pawns as possible in the crossfire. If the game goes on for an hour, one player may legally attempt to gouge out the other player's eyes with his King.
January 23, 2001 06:35 PM
My guess would be something like this.
real_dist=near+z_read*(far-near)
near and far are the range you have set for the z buffer, and z_read is the value you have read.
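That linear formula can be sketched in C like this. Note (not stated in the post) that it is only valid when depth is linear in eye-space z, i.e. with an orthographic projection; `near_plane` and `far_plane` stand for whatever you set as your clipping planes:

```c
#include <assert.h>

/* Linear depth-to-distance mapping: maps a depth-buffer read in [0,1]
 * back to an eye-space distance, assuming depth is linear in eye-space z
 * (true for an orthographic projection, not a perspective one). */
double linear_depth_to_dist(double z_read, double near_plane, double far_plane)
{
    return near_plane + z_read * (far_plane - near_plane);
}
```

With near = 1 and far = 100, a read of 0.0 maps to 1 and a read of 1.0 maps to 100, as expected.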
That's what I thought too, but I got a value of about 98.2 when it should have been about 10. I think it's non-linear, so there is a lot of detail up close and not so much far away.
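That non-linearity comes from the perspective divide: with a glFrustum/gluPerspective projection, the depth buffer effectively stores a function of 1/z, so precision is concentrated near the near plane. The following inversion is not from this thread; it is the algebraic inverse of the standard perspective depth mapping, assuming the default glDepthRange(0, 1):

```c
#include <assert.h>
#include <math.h>

/* Invert the perspective depth mapping: for a glFrustum/gluPerspective
 * projection with the default glDepthRange(0, 1), a window-space depth d
 * in [0,1] corresponds to eye-space distance
 *     near * far / (far - d * (far - near)).
 * Check: d = 0 gives near, d = 1 gives far. */
double perspective_depth_to_dist(double d, double near_plane, double far_plane)
{
    return (near_plane * far_plane) /
           (far_plane - d * (far_plane - near_plane));
}
```

For near = 1 and far = 100, an object at distance 10 produces a stored depth of about 0.909, which the linear formula would wrongly map to roughly 91; this inverse recovers 10.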
Hi,
The depth value is, by default, in the range 0.0 to 1.0, but that can be changed with glDepthRange() (red book, page 131). It is the value between the near and far clipping planes, scaled to fit the depth range. The problem is that the value returned is in a fixed-point format, which doesn't seem to be described anywhere, so you need to convert it to a standard format (float, etc.).
I've seen it done somewhere but I can't remember where, I'll have a quick search around for it tonight.
Anon's post from above should work for values between 0 and 1 though, as the fixed-point format for 0..1 _is_ defined to be just the fractional part (all 1s for 1.0). Make sure that the depth buffer value is read into an unsigned short (for a 16-bit depth buffer) and divided by 65535.0 to make it a float or double, since the largest representable value maps to 1.0.
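A minimal sketch of that conversion. The raw value would typically come from glReadPixels() with GL_DEPTH_COMPONENT and GL_UNSIGNED_SHORT (shown in a comment, since it needs a live GL context); the arithmetic itself follows the spec's rule that the largest integer value maps to 1.0:

```c
#include <assert.h>

/* Convert a raw 16-bit depth-buffer value to a double in [0,1].
 * The raw value would typically be obtained with something like:
 *   GLushort raw;
 *   glReadPixels(x, y, 1, 1, GL_DEPTH_COMPONENT, GL_UNSIGNED_SHORT, &raw);
 * Per the spec, the largest representable value (65535) maps to 1.0
 * and zero maps to 0.0. */
double depth16_to_double(unsigned short raw)
{
    return raw / 65535.0;
}
```

Alternatively, asking glReadPixels() for GL_FLOAT lets the driver do this conversion for you.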
Dan
Update - I've found the info I was looking for in the online version of the blue book. Firstly, from the glReadPixels() definition:
Depth values are read from the depth buffer. Each component is
converted to floating point such that the minimum depth value
maps to 0.0 and the maximum value maps to 1.0. Each component is
then multiplied by GL_DEPTH_SCALE, added to GL_DEPTH_BIAS, and
finally clamped to the range [0,1].
Not a lot of help, but maybe subtracting GL_DEPTH_BIAS and then dividing by GL_DEPTH_SCALE will help?
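Undoing that pixel-transfer step is just the inverse affine map. The current values can be queried with glGetFloatv(GL_DEPTH_SCALE, ...) and glGetFloatv(GL_DEPTH_BIAS, ...); they default to 1 and 0, in which case this is a no-op:

```c
#include <assert.h>

/* Undo the pixel-transfer step applied by glReadPixels:
 *   stored = clamp(raw * GL_DEPTH_SCALE + GL_DEPTH_BIAS, 0, 1)
 * so (assuming clamping didn't discard information) the original
 * value is (stored - bias) / scale.  With the default scale = 1
 * and bias = 0 this is the identity. */
double undo_depth_transfer(double stored, double scale, double bias)
{
    return (stored - bias) / scale;
}
```

This only recovers the pre-transfer value when the scale/bias didn't push it outside [0,1], since the clamp is not invertible.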
Secondly, from the glDrawPixels() definition:
Each pixel is a single-depth component. Floating-point data is
converted directly to an internal floating-point format with
unspecified precision. Signed integer data is mapped linearly to the
internal floating-point format such that the most positive
representable integer value maps to 1.0, and the most negative
representable value maps to -1.0. Unsigned integer data is mapped
similarly: the largest integer value maps to 1.0, and zero maps to
0.0. The resulting floating-point depth value is then multiplied by
GL_DEPTH_SCALE and added to GL_DEPTH_BIAS. The result is clamped to
the range [0,1].
This says that it is an unspecified floating-point value, which seems to contradict the red book. Strange.
Sorry, that wasn't as useful as I remembered. Oh well, it might still help. It's a better situation than in DirectX, though, where drivers can store depth buffer data any way they see fit, making it nearly impossible to get accurate depth info. Some 3D cards don't even use a depth buffer! (PowerVR?) They use some other method, but tell D3D that they use one anyway.
Dan
Edited by - danbrown on January 24, 2001 1:25:10 PM
This topic is closed to new replies.