
Depth Test Phenomenon

Started by Guy Perfect, November 27, 2005 10:15 PM
5 comments, last by Guy Perfect 19 years ago
This is a problem I've been dealing with for years, but I never knew the cause or how to fix it. The problem is that polygons, or edges of polygons, that are really close to each other start bleeding through in spots despite the fact that depth testing is on. I'm sure it's a simple fix, but I'm in the dark as to what it is. Here's a picture of this oddity:

That's a picture of a boring ol' 6-sided cube drawn with triangle strips, with depth testing set up for LEQUAL. The edges do not overlap whatsoever except for the lines connecting the vertices. The spots in the image that are showing through shouldn't be visible at all. It's an easy workaround to use backface culling, but if I can fix it without "making do," I'd like to. Anyone have any information?
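For reference, the depth-test state I'm talking about amounts to roughly this (a minimal sketch, not my actual code, assuming a GL context is already current):

#include <GL/gl.h>

/* Sketch of the depth-test state described above. */
static void enable_depth_testing(void)
{
    glEnable(GL_DEPTH_TEST);   /* turn depth testing on                  */
    glDepthFunc(GL_LEQUAL);    /* pass fragments at less-or-equal depth  */
    glClearDepth(1.0);         /* depth buffer clears to the far plane   */
}

Plus the usual glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT) each frame.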
I would say that either your near clipping plane is too close, or your far clipping plane is too far away. Try adjusting the clipping planes to cover the minimum area required for your cube and see if that fixes it.
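For example, if you're building the projection with gluPerspective, it's just the last two arguments (the values here are only placeholders):

#include <GL/gl.h>
#include <GL/glu.h>

/* Illustrative projection setup; the zNear:zFar ratio is what matters. */
static void setup_projection(int width, int height)
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    /* gluPerspective(fovY, aspect, zNear, zFar) */
    gluPerspective(45.0, (double)width / (double)height, 1.0, 100.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}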
Yeah, that fixed it. Thanks. The near plane was too near. I guess I'll just have to draw things bigger. So much for my beloved 10×10×10 realm.

Why do adjustments of the clipping planes make a difference? As far as I can tell, they should have no bearing on the bleed-through-ness of polygons. What reason is there for this peculiarity?
It's not so much the scale at which you put everything, but the ratio between the near clipping plane and the far clipping plane, that determines how 'spread out' the depth buffer will be.

For example, you would get just the same level of depth buffer quality from near=0.1, far=1000 as you would if all the objects were 10x as large and you used near=1, far=10000.
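You can check that with the standard perspective depth mapping (a rough sketch, not any particular driver's code; the numbers are only for illustration):

#include <stdio.h>

/* Window-space depth in [0,1] for a point at eye-space distance d,
 * given near plane n and far plane f. */
static double window_depth(double d, double n, double f)
{
    double z_ndc = (f + n) / (f - n) - (2.0 * f * n) / ((f - n) * d);
    return 0.5 * (z_ndc + 1.0);
}

int main(void)
{
    /* Same scene at two uniform scales: an object 5 units away with
     * near=0.1/far=1000 gets the same depth value as one 50 units
     * away with near=1/far=10000. Only the ratio matters. */
    printf("%.9f\n", window_depth(5.0,  0.1, 1000.0));
    printf("%.9f\n", window_depth(50.0, 1.0, 10000.0));
    return 0;
}

Both lines print the same value.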
It's an issue of precision. The z buffer is however many bits deep, and so it can only hold a finite number of depths. If you ask it to hold very near depths and very far depths, it will comply, but it's going to have to make rather big jumps to get all the way from one to the other. When you have two pieces of geometry that both round to the same depth in the z buffer, it can't tell which one is really farther away, and may draw the wrong one.

Now, you might be thinking, "well, [0.0001, 10] is less than 1 unit bigger than [0.1, 10], wtf?" This happens (I believe) because the algorithm essentially scales whatever range you give it into a range of [1, 2^bitdepth] by dividing, and dividing by 0.0001 is very much different from dividing by 0.1, whereas dividing by 1.1 is not so different from dividing by 0.1, and 4.1 is even less different from 3.1, and so on. The closer the near plane gets to zero, the more of those values get bunched up right next to it.
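A quick way to see it (a rough sketch assuming the standard perspective depth mapping and a 24-bit depth buffer; the distances are made up):

#include <stdio.h>

/* Quantize the standard perspective depth mapping to a 24-bit integer
 * depth buffer (a sketch, not anyone's actual driver). */
static unsigned long depth24(double d, double n, double f)
{
    double z_ndc = (f + n) / (f - n) - (2.0 * f * n) / ((f - n) * d);
    double win   = 0.5 * (z_ndc + 1.0);        /* window depth in [0,1] */
    return (unsigned long)(win * 16777215.0);  /* 2^24 - 1 steps        */
}

int main(void)
{
    /* Two surfaces 0.001 units apart, about 9 units from the camera. */
    printf("near=0.0001: %lu vs %lu\n",
           depth24(9.000, 0.0001, 10.0), depth24(9.001, 0.0001, 10.0));
    printf("near=0.1:    %lu vs %lu\n",
           depth24(9.000, 0.1, 10.0),    depth24(9.001, 0.1, 10.0));
    return 0;
}

With near=0.0001 both surfaces land in the same stored value, so the depth test can't tell them apart; with near=0.1 they end up roughly 20 steps apart.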
--Riley
Quote: Original post by Guy Perfect
It's an easy workaround to use backface culling, but if I can fix it without "making do," I'd like to. Anyone have any information?


Backface culling is not something you do as an easy fix; it should be used regardless.
And even though you might have fixed it by using better z-near/z-far values, it is not perfect.
The problem is still there, only on a much smaller scale: if you rotate the cube, you might still see some flickering at the edges at certain angles, sort of like the bottom edge of the red polygon in the picture, but only some single stray pixels.
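For reference, turning culling on is just a couple of state calls (a sketch, assuming the default counter-clockwise winding for front faces):

#include <GL/gl.h>

/* Sketch of enabling backface culling. */
static void enable_backface_culling(void)
{
    glEnable(GL_CULL_FACE);   /* discard faces pointing away from the viewer */
    glCullFace(GL_BACK);      /* cull back faces (also the default)          */
    glFrontFace(GL_CCW);      /* counter-clockwise winding = front-facing    */
}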

I always use backface culling, even if the back of polygons will be visible. I usually render two separate polygons for the purposes of lighting and texturing, even if they overlap exactly.

But if I have one small front-facing polygon in front of, and very close to, a large front-facing polygon, I'd rather it not perform the depth testing incorrectly. That's why I needed a solution to the problem.

The cube illustration is merely an example of what the problem is.

This topic is closed to new replies.
