
How to hide a lens flare behind mountains?

Started by February 11, 2002 03:32 AM
41 comments, last by Bestel 23 years ago
Forget about all this raytracing stuff.

Use glReadPixels. It will be 100 times faster than doing intersections on triangles. You are reading *a single* pixel! Even on my old Voodoo2 the time it takes isn't even measurable.
Yes, I'm realizing that it's not very easy to test intersections, and this method won't be efficient once I add models to the terrain.

I think that I will come back to glReadPixels()...

But now, I would like to ask you a question (that you will certainly find stupid, but...): how can I use the projection matrix to find the 2D coordinates of a pixel from a 3D position?

And what happens when the projection of the point that I want to test is off the screen?


Edited by - bestel on February 11, 2002 2:38:41 PM
The point projects onto the screen via a simple mathematical formula.
Say W is the position, C the clip coordinates, and P the projection matrix; then:
C = P * W

After dividing C by its w component you get normalized device coordinates, which range from -1 to +1 in X, Y and Z; coordinates below -1 or above +1 lie outside the viewing frustum.
To get screen coordinates, scale the values so that X goes from 0 to window_width instead of -1 to +1, and repeat the same operation for Y.

You have to make sure that W has already been transformed by the modelview matrix (i.e. it is in eye coordinates); otherwise apply the modelview matrix first.

If you want an easy method, take a look at gluProject and gluUnProject. You need the GLU library, though.


If the point is outside those limits, the result of glReadPixels is simply undefined.
That's why you have to make sure the pixel lies inside the screen before reading the framebuffer.

Edited by - vincoof on February 11, 2002 2:51:49 PM
thanks



I will try to use gluProject() because it seems to be very easy.

But is this function slow compared to projecting the point with my own implementation?
I've done something like this:

    double x = 0;          // x result of projection
    double y = 0;          // y result of projection
    double z = 0;          // z result of projection
    float depth = 0;       // depth of the point (x, y)
    double modelview[16];  // a buffer to store my Modelview matrix
    double projection[16]; // a buffer to store my Projection matrix
    int viewport[4];       // a buffer to store my Viewport

    glGetDoublev(GL_MODELVIEW_MATRIX, modelview);
    glGetDoublev(GL_PROJECTION_MATRIX, projection);
    glGetIntegerv(GL_VIEWPORT, viewport);

    gluProject(-400, 1600, -400, // 3D coordinates of my lens flare
               modelview, projection, viewport,
               &x, &y, &z);

    glReadPixels((int)x, (int)y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &depth);
    if (depth > 0.95)
    {
        // draw lens flare
    }


I think there is a problem with this code, because my depth is always 0 (when the point is off the screen) or something near 1.
If there is a mountain in front of the lens flare, the depth is something near 1 too.

Do you have an idea where the problem comes from ?


Edited by - bestel on February 11, 2002 4:55:32 PM
there's no need for gluProject here
assuming this is a sun on the skybox, the depth value will always be the same (whatever you cleared it to, probably 1), so just do a glReadPixels and check if the value is less than 1.

personally i cast a ray into the scene from the eyepoint

http://uk.geocities.com/sloppyturds/gotterdammerung.html
This is strange, because I've tried (just as a test) to use GL_RGB instead of GL_DEPTH_COMPONENT and checked whether the RGB value was near white.

And it works.

So I don't understand why, when I do it with GL_DEPTH_COMPONENT, my depth value is always near 1...

quote:

there's no need for gluProject here



I don't understand why I don't need to use gluProject(). If I want to know where to use glReadPixels(), I must project my 3D point, no?
One thing you need to be careful of is to draw everything that should not occlude the flare without z writing, i.e. with glDepthMask(0).

It doesn't sound like this is being done.

I still suggest that solid line-of-sight code will always beat a depth read. But, obviously, writing a solid line-of-sight algorithm ain't easy, so a depth read is probably the better choice here.
> I still suggest that a solid line of sight code will always beat a depth read

Never. Do some timing. Reading a depth pixel transfers 4 bytes of data over the AGP bus. Now imagine ray/model intersection code with 10, 20 or more thousand faces.

A depth read will already beat ray intersection code on a single triangle. Even with Plücker coordinates, you need at least one fdiv and 2 fmuls per triangle. A ReadPixels is faster than that.
Yes. But you can only do a depth read on screen. And lens flares DO occur from off screen.

And on very old video cards depth reading is VERY slow.
The AGP bus is designed for sending data, not receiving it.

If you are testing 20,000 triangles, your line-of-sight code is insanely badly written.

And you will almost always find that the CPU is not the bottleneck these days. Asking the video card to access the depth buffer and return data will get very slow if done multiple times.

And don't forget,
line of sight isn't just used for lens flares.

And a complex algorithm will also let you slowly fade the flare out as it's slowly, or quickly, obscured. Depth reading has problems doing this (and is slow when reading an area of depth values).

Have a look at the eXterminate terrain engine for a good example of using line of sight to test a lens flare (although the way it saves its normal data and edge normal data isn't very efficient).

This topic is closed to new replies.
