I have a complex 3D scene; the values in my depth buffer range from a few centimeters (close-up shots) to several kilometers.
For various effects (SSAO, shadows) I use a depth bias/offset to work around artifacts. Issues can also occur during depth peeling, when comparing the depth of the current peel against the previous one sampled from the previous screen texture.
I have fixed those issues for close-up shots, but when the fragment is far enough away the bias becomes ineffective.
I am wondering how to tackle the bias for such scenes. Should the bias depend on the current world-space depth of the pixel, or should the effect simply be disabled beyond a given depth?
Are there good practices regarding these issues, and how can I address them?
Wait, why are you depth peeling? That's crazily expensive.
Anyway, it sounds like a floating-point precision error, yes? Well, the only trick I know is reversed-Z, which is pretty standard today. You flip your depth axis (near maps to 1, far to 0), which gives a much more even precision distribution across the depth range and can solve some precision issues nearly for free. Here's a longer tutorial: https://nlguillemot.wordpress.com/2016/12/07/reversed-z-in-opengl/
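To get a feel for what reversed-Z buys you, here's a small numpy sketch (not renderer code; the near/far values and the sampled distance range are made up for illustration). It counts how many distinct float32 depth values the standard and reversed mappings produce over a distant slice of the scene:

```python
import numpy as np

# Assumed setup: [0,1] depth range, float32 depth buffer.
near, far = 0.1, 10000.0
z = np.linspace(1000.0, 10000.0, 100_000)  # view-space distances, far from camera

# Standard depth: near -> 0, far -> 1. Distant values bunch up near 1.0,
# exactly where this mapping wastes float32 precision.
d_standard = (far * (z - near)) / (z * (far - near))

# Reversed depth: near -> 1, far -> 0. Small values get dense float32
# spacing, roughly cancelling the hyperbolic 1/z distribution.
d_reversed = (near * (far - z)) / (z * (far - near))

unique_standard = np.unique(d_standard.astype(np.float32)).size
unique_reversed = np.unique(d_reversed.astype(np.float32)).size
print(unique_standard, unique_reversed)  # reversed keeps far more distinct values
```

Over this range the standard mapping collapses thousands of distinct distances onto the same float32 depth value, while reversed-Z keeps nearly every sample distinguishable, which is why a fixed bias stops working far away.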
I'm assuming the bias is small enough that precision issues start creeping in? Also, the effect is actually a neat kind of line-drawing art style.
I didn't know the reversed-Z trick; I might give it a go sometime, thanks.
I am working on simulation software for buildings, and for those specific needs I had to implement depth peeling. Yes, it's crazily expensive, but I found a way to compute the most appropriate number of peels: the last peels don't bring much information to the final image.
Depth bias and normal offset values are specified in shadow-map texels. For example, depth bias = 3 means that the pixel is moved the length of 3 shadow-map texels closer to the light.
By keeping the bias proportional to the projected shadow map texels, the same settings work at all distances.
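A texel-proportional bias like the one described above can be sketched in a few lines (assuming a directional light with an orthographic shadow frustum; the function names and parameters are illustrative, not from the original post):

```python
# Sketch: bias expressed in shadow-map texels, converted to world units.
# Assumes an orthographic light frustum of known world-space width.

def texel_world_size(frustum_width, shadow_map_resolution):
    # World-space extent covered by one shadow-map texel.
    return frustum_width / shadow_map_resolution

def shadow_bias(bias_in_texels, frustum_width, shadow_map_resolution):
    # e.g. bias_in_texels = 3 moves the point 3 texels toward the light.
    return bias_in_texels * texel_world_size(frustum_width, shadow_map_resolution)
```

Because the texel size already accounts for the frustum's coverage, the same `bias_in_texels` setting behaves consistently across cascades and distances.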
I use the world-space difference between the current point and a neighboring pixel with the same depth component. The bias becomes roughly "the average distance between two neighboring pixels": the further away the pixel is, the larger the bias (from a few millimeters near the near plane to meters at the far plane).
So for each of my sampling points, I offset its position by a few pixels in the x direction (3 pixels gives me good results on various scenes).
I compute the world-space difference between the current point and this offset point.
I use this difference as the bias for all my depth testing.
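The steps above can be sketched in numpy rather than shader code. `perspective` and `unproject` here are illustrative helpers I'm assuming (OpenGL-style clip space, NDC z in [-1, 1]); in a fragment shader you would use `gl_FragCoord` and the inverse projection matrix instead:

```python
import numpy as np

def perspective(fovy, aspect, near, far):
    # Standard OpenGL perspective projection matrix.
    f = 1.0 / np.tan(fovy / 2.0)
    m = np.zeros((4, 4))
    m[0, 0] = f / aspect
    m[1, 1] = f
    m[2, 2] = (far + near) / (near - far)
    m[2, 3] = 2.0 * far * near / (near - far)
    m[3, 2] = -1.0
    return m

def unproject(px, py, ndc_z, width, height, inv_proj):
    # Pixel centre -> NDC -> view space (perspective divide by w).
    ndc = np.array([2.0 * (px + 0.5) / width - 1.0,
                    1.0 - 2.0 * (py + 0.5) / height,
                    ndc_z, 1.0])
    v = inv_proj @ ndc
    return v[:3] / v[3]

def adaptive_bias(px, py, ndc_z, width, height, inv_proj, offset_px=3):
    # View-space distance to a pixel offset_px away at the same depth.
    p = unproject(px, py, ndc_z, width, height, inv_proj)
    q = unproject(px + offset_px, py, ndc_z, width, height, inv_proj)
    return float(np.linalg.norm(q - p))
```

The bias then grows with distance automatically: the same 3-pixel offset spans millimeters up close and meters at the far plane, matching the behaviour described above.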
No correction for shadow cascade variance and the tricky interpolation stuff (it's always more complicated than I think). I was wondering why it showed up in such an odd pattern; it seems blindingly obvious in retrospect. Glad you found the answer.