I'm working on screen space shadows in an OpenGL/C++ renderer, and I've been having this weird erroneous shadowing artifact. Here's a video of the artifact in action. Notice how the screen space shadows that I want to see are rendering correctly, just with a lot of artifacts on top. This means there isn't anything blatantly wrong, like forgetting to bind a uniform or a texture. And here's the relevant part of my GLSL code.
The artifact seems to show up when the view direction is perpendicular to the light direction and the fragment being shaded has a high(ish) depth value; the artifact never seems to show up for fragments that are right by the near plane. I did some debugging, and I found that the majority of the false shadowing occurs in the first 3 iterations of the march loop, which, if I'm understanding things correctly, means that the surface being shaded is somehow shadowing itself.
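For context on what I mean by "first iterations": one common way to rule out this kind of self-shadowing is to start the march half a step (plus an optional per-pixel dither) away from the surface, so iteration 0 never samples the fragment's own depth. A minimal CPU-side sketch of that offset schedule (plain C++, hypothetical helper name, not the tutorial's code):

```cpp
#include <vector>

// Hypothetical helper: sample parameters t in (0, 1) for an N-step march,
// offset by half a step plus an optional per-pixel dither so the first
// sample never lands at t = 0, i.e. on the fragment being shaded.
std::vector<float> march_offsets(int steps, float dither /* in [0, 1) */) {
    std::vector<float> t(steps);
    for (int i = 0; i < steps; ++i)
        t[i] = (static_cast<float>(i) + 0.5f + dither) / static_cast<float>(steps);
    return t;
}
```

In the shader this would just mean sampling at `ray_origin + t[i] * ray_vec` instead of letting the first iteration test the surface against itself.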
The first thing I thought was that this is simply caused by a lack of depth buffer precision, and possibly floating point error accumulating from calculations in the shader, but the more debugging I do, the more I think that isn't the issue, or at least not all of it. For one, pushing out the camera's near plane and pulling in its far plane doesn't seem to improve the artifacts, nor does switching from GL_DEPTH24_STENCIL8 to GL_DEPTH32F_STENCIL8. The whole artifact also just seems too extreme to be caused by numeric precision issues alone.
The only thing that seems to help with this artifact is using a large minimum value (epsilon) when checking the depth delta between the ray and the depth buffer. But I have to use a really big epsilon, like 0.01, to (mostly) get rid of the artifacts (even at 0.01 they still show up at extreme angles), and using an epsilon any bigger than that causes all screen space shadows to disappear. Actually, there's one thing that works better than a big epsilon, and that is comparing the nonlinear (hyperbolic) depths of the ray and the depth buffer. That is, instead of linearizing the depth buffer's value and comparing it to the ray's z coord, I delinearize the ray's z coord and compare it to the raw depth buffer value. When I compare nonlinear depth values, I can get away with a smaller epsilon value, which makes sense because precision decreases as z increases because of all the hyperbolic z jazz. But even when doing this, shadows still disappear when the fragment being shaded has a sufficiently high depth value.
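To be concrete about the conversion I mean, here's the math as a standalone sketch (plain C++, assuming the default OpenGL [0,1] depth range, [-1,1] NDC z, and my near/far of 0.1 and 37.1; not my actual shader code):

```cpp
#include <cmath>

// Assumed camera planes from this post.
constexpr float kNear = 0.1f;
constexpr float kFar  = 37.1f;

// Convert a [0,1] depth-buffer sample to a positive view-space distance.
float linearize_depth(float d) {
    float ndc = 2.0f * d - 1.0f; // [0,1] -> [-1,1]
    return 2.0f * kNear * kFar / (kFar + kNear - ndc * (kFar - kNear));
}

// Inverse of the above: convert a view-space distance back to a [0,1]
// depth value, so the ray's z can be compared against the raw buffer sample.
float delinearize_depth(float z) {
    float ndc = (kFar + kNear - 2.0f * kNear * kFar / z) / (kFar - kNear);
    return 0.5f * ndc + 0.5f;
}
```

Comparing in the nonlinear domain (second function) is what lets the smaller epsilon work, since equal-sized steps in buffer depth correspond to much larger view-space distances far from the camera.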
But the overarching issue here is that I've never heard of people experiencing depth buffer precision issues, having to delinearize depth values, or needing crazy high depth epsilons when doing screen space ray marching. The guy who wrote the tutorial I'm following didn't even use an epsilon value; he just did `(depth_delta > 0.0f)` (with `depth_delta` being linear), and it seemed to work for him. I've also looked at several other screen space shadows implementations, and none of them describe the issues I'm having. All of this leads me to believe that I'm making a higher level mistake somewhere.
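For reference, one middle ground I've seen suggested between a bare `depth_delta > 0.0f` and a huge constant epsilon is scaling the tolerance with distance, since the view-space precision of a hyperbolic depth buffer degrades roughly quadratically with linear depth. A sketch of that comparison (plain C++, with a made-up tuning constant, not from the tutorial):

```cpp
// Hypothetical occlusion test: treat the depth-buffer surface as a blocker
// only if it is in front of the ray by more than a tolerance that grows
// with the square of the linearized depth (kBase is a made-up constant).
bool occludes(float buffer_linear_z, float ray_linear_z) {
    const float kBase   = 0.0005f; // assumed tolerance at z = 1
    float       epsilon = kBase * buffer_linear_z * buffer_linear_z;
    return (ray_linear_z - buffer_linear_z) > epsilon;
}
```

This keeps shadows tight near the camera while giving distant fragments the slack a single large epsilon would otherwise have to provide everywhere.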
Do any of you know how to fix these artifacts? Have any of you experienced similar issues when implementing screen space ray marching effects, and, if so, how did you fix them? Any responses would be super appreciated, thanks in advance!
Misc info:
- The shadows are from a single directional source. Its direction is (1, -10, 1), with Y being up
- The filtering mode for the screen depth texture is GL_NEAREST for both mag and min, and the texture has no mipmaps
- The screen depth texture's format is GL_DEPTH24_STENCIL8
- My camera's near plane is 0.1 and its far plane is 37.1
- I compute `inverse_camera_projection_matrix` on the CPU by casting my camera's view matrix from `glm::mat4` to `glm::mat<4, 4, double>`, doing `glm::inverse`, and then casting it back to single precision before sending it off to my shader (to avoid precision issues)
- The bug shows up on both my AMD Linux machine and my NVIDIA Windows machine
- The SSS pass is being run in a fullscreen quad fragment shader
- I'm using OpenGL 4.6
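On the double-precision inverse point: for a standard perspective matrix you can sidestep a general `glm::inverse` entirely, since the inverse has a closed form. A glm-free sketch of that idea (plain C++, row-major 4x4, glm::perspectiveRH_NO-style [-1,1] clip convention, fov and aspect made up, near/far from this post):

```cpp
#include <cmath>

struct Mat4 { double m[4][4]; };

// Standard OpenGL perspective matrix (right-handed, NDC z in [-1, 1]).
Mat4 perspective(double fovy, double aspect, double zn, double zf) {
    double f = 1.0 / std::tan(fovy / 2.0);
    Mat4 p{};
    p.m[0][0] = f / aspect;
    p.m[1][1] = f;
    p.m[2][2] = -(zf + zn) / (zf - zn);
    p.m[2][3] = -2.0 * zf * zn / (zf - zn);
    p.m[3][2] = -1.0;
    return p;
}

// Closed-form inverse of the matrix above; no general 4x4 inversion
// (and none of its accumulated rounding) is needed.
Mat4 perspective_inverse(double fovy, double aspect, double zn, double zf) {
    double f = 1.0 / std::tan(fovy / 2.0);
    double C = -(zf + zn) / (zf - zn);
    double D = -2.0 * zf * zn / (zf - zn);
    Mat4 inv{};
    inv.m[0][0] = aspect / f;
    inv.m[1][1] = 1.0 / f;
    inv.m[2][3] = -1.0;
    inv.m[3][2] = 1.0 / D;
    inv.m[3][3] = C / D;
    return inv;
}

// 4x4 product, handy for checking perspective * inverse == identity.
Mat4 mul(const Mat4& a, const Mat4& b) {
    Mat4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                r.m[i][j] += a.m[i][k] * b.m[k][j];
    return r;
}
```

Doing the whole thing in double and casting the final matrix to float, as I already do, should then be exact to within a single rounding per element.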