
Screen space reflections: Artifacts problem

Started by Swimon, June 02, 2019 10:17 AM
5 comments, last by JohnnyCode 5 years, 8 months ago

Hello! I'm trying to implement a simple (so far) screen space reflections shader. Below is my code:   


const int numBinarySearchSteps = 100;
const int maxSteps = 400;
const float rayStep = 0.01;
const float minRayStep = rayStep / 2.5;

vec3 BinarySearch(vec3 dir, inout vec3 hitCoord, out float dDepth, out int hit)
{
    float depth;

    for(int i = 0; i < numBinarySearchSteps; i++)
    {
        // Project the current ray position into [0,1] texture space.
        vec4 projectedCoord = projectionMatrix * vec4(hitCoord, 1.0);
        projectedCoord.xyz /= projectedCoord.w;
        projectedCoord.xyz = projectedCoord.xyz * 0.5 + 0.5;

        depth = texture2D(depthMapTex, projectedCoord.xy).x;

        // Reconstruct the view-space position of the sampled depth.
        // Note: xy has to be back in NDC ([-1,1]) before unprojecting.
        vec4 viewSpaceDepth = inverse(projectionMatrix) * vec4(projectedCoord.xy * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0);
        viewSpaceDepth.xyz /= viewSpaceDepth.w;

        dDepth = viewSpaceDepth.z - hitCoord.z;

        // Halve the step each iteration: advance while the ray is still
        // in front of the surface, otherwise back up.
        if(dDepth < 0.0)
            hitCoord += dir;

        dir *= 0.5;
        hitCoord -= dir;
    }

    // Project the refined position one last time for the color lookup.
    vec4 projectedCoord = projectionMatrix * vec4(hitCoord, 1.0);
    projectedCoord.xyz /= projectedCoord.w;
    projectedCoord.xyz = projectedCoord.xyz * 0.5 + 0.5;

    hit = 1;

    return vec3(projectedCoord.xy, depth);
}


vec4 RayCast(vec3 dir, inout vec3 hitCoord, out float dDepth, out int hit)
{
    dir *= rayStep;

    float depth;

    for(int i = 0; i < maxSteps; i++)
    {
        hitCoord += dir;

        // Project the current ray position into [0,1] texture space.
        vec4 projectedCoord = projectionMatrix * vec4(hitCoord, 1.0);
        projectedCoord.xyz /= projectedCoord.w;
        projectedCoord.xyz = projectedCoord.xyz * 0.5 + 0.5;

        depth = texture2D(depthMapTex, projectedCoord.xy).x;

        // Linearize the non-linear depth-buffer value to view-space depth.
        float zNear = 5.0;
        float zFar = 2000.0;
        float z_n = 2.0 * depth - 1.0;
        float z_e = 2.0 * zNear * zFar / (zFar + zNear - z_n * (zFar - zNear));

        // Positive once the ray has passed behind the sampled surface.
        dDepth = -hitCoord.z - z_e;

        if(dDepth > 0.0){
            if (dDepth >= length(dir)) {
                // The ray ended up further behind the surface than a single
                // step can explain; treat the hit as inconclusive.
                hit = 2;
                return vec4(0.0);
            }
            return vec4(BinarySearch(dir, hitCoord, dDepth, hit), 1.0);
        }
    }
    hit = 0;
    return vec4(0.0);
}

vec3 ssr()
{
    // Reflect the view ray about the surface normal, in view space.
    vec3 reflected = normalize(reflect(normalize(viewSpacePosition), normalize(viewSpaceNormal)));

    // March the ray from the surface position.
    vec3 hitPos = viewSpacePosition;
    float dDepth;

    int hit; // 0 = no hit, 1 = good hit, 2 = inconclusive hit
    // (Optionally scale the step with distance: reflected * max(minRayStep, -viewSpacePosition.z).)
    vec4 coords = RayCast(reflected, hitPos, dDepth, hit);

    // Only shade confirmed hits; everything else stays black.
    vec3 color;
    if (hit == 1) {
        color = texture2D(colorMapTex, coords.xy).rgb;
    }
    else {
        color = vec3(0.0);
    }
    return color;
}

So far, all it does is return the color of each pixel's purely specular reflection, or black if there is no reflection on screen or if the reflection ray passes behind an object.

My issue is that I get these bizarre artifacts where a pixel still receives a reflection color even though dDepth should be way larger than length(dir) there, so the (dDepth >= length(dir)) check should have marked it inconclusive. I've been stuck debugging this for ages now; can anyone spot the mistake?
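To make those pixels easier to inspect, here is a minimal debug variant of ssr() that just color-codes RayCast's hit classification (a debugging sketch only, not part of the effect itself):

    // Debug sketch: color-code RayCast's hit classification so pixels
    // taking the inconclusive (dDepth >= length(dir)) branch stand out.
    vec3 ssrDebug()
    {
        vec3 reflected = normalize(reflect(normalize(viewSpacePosition), normalize(viewSpaceNormal)));
        vec3 hitPos = viewSpacePosition;
        float dDepth;
        int hit;
        RayCast(reflected, hitPos, dDepth, hit);

        if (hit == 2) return vec3(1.0, 0.0, 0.0); // inconclusive hit
        if (hit == 1) return vec3(0.0, 1.0, 0.0); // confirmed hit
        return vec3(0.0);                         // miss
    }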

This is the scene using just a simple shader (on the right), and the scene running only the SSR shader (on the left).

Extremely thankful for any help!!!
Simon

[Attachment: 1.PNG — side-by-side screenshot of the simple shader and the SSR-only pass]

It might be that the z-buffer distribution is non-linear and that this isn't accounted for in the intersection logic, which seems to just sample it directly. Have you considered that?

7 minutes ago, JohnnyCode said:

It might be that the z-buffer distribution is non-linear and that this isn't accounted for in the intersection logic, which seems to just sample it directly. Have you considered that?

Thanks for your reply!! Would you be able to elaborate a little? Do you mean the z-buffer used when drawing the depth map? I'm aware it's not linear, which is why I linearize it and convert back to view space.

Maybe I don't fully understand your answer. Thanks again!
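For reference, the linearization is the standard perspective depth inversion with the same zNear/zFar constants as in RayCast; pulled out as a helper it looks like this:

    // Convert a [0,1] depth-buffer value to positive view-space depth,
    // using the zNear/zFar constants from RayCast above.
    float linearizeDepth(float depth, float zNear, float zFar)
    {
        float z_n = 2.0 * depth - 1.0; // [0,1] -> NDC [-1,1]
        return 2.0 * zNear * zFar / (zFar + zNear - z_n * (zFar - zNear));
    }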

5 hours ago, Swimon said:

Do you mean the z-buffer used when drawing the depth map? I'm aware it's not linear, which is why I linearize it and convert back to view space.

Just to clarify: do you do so when constructing the G-buffer you sample from in this line? (By the way, this looks like a world-space reflection using the depth buffer of the rendered projection, which would result in what you see; maybe I am wrong.)


depth = texture2D(depthMapTex, projectedCoord.xy).x;
16 minutes ago, JohnnyCode said:

Just to clarify: do you do so when constructing the G-buffer you sample from in this line? (By the way, this looks like a world-space reflection using the depth buffer of the rendered projection, which would result in what you see; maybe I am wrong.)



depth = texture2D(depthMapTex, projectedCoord.xy).x;

So "depthMapTex" is just the depthbuffer of the camera rendered to a texture. I use it to check the depthbuffer value at the projected coordinate of the current ray position. This value is then linearized and converted to view space to be compared with the view space depth of the ray position. This comparison is use to determine if the ray position is now behind and object.

Thanks!!!

On the reflection pass/no-pass edges you can see your step precision; it looks about an inch wide on that booze bottle. On the right side you can see the artifact of that step size acting in all three dimensions, registering hits where it should miss. I wonder if screen space reflection can be done in a better way, for example by finding the closest facing normal at pixel precision from a view-space normals buffer.
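As a rough sketch of that idea, assuming a G-buffer of view-space normals (normalMapTex here is hypothetical; it is not in the code above), a candidate hit could be rejected whenever the stored normal faces along the ray:

    // Sketch: reject a candidate hit when the surface at the sampled pixel
    // faces away from the incoming ray (the ray would pass behind it).
    // normalMapTex is an assumed G-buffer of view-space normals.
    bool surfaceFacesRay(vec2 uv, vec3 rayDir)
    {
        vec3 n = normalize(texture2D(normalMapTex, uv).xyz * 2.0 - 1.0); // unpack [0,1] -> [-1,1]
        return dot(n, rayDir) < 0.0; // valid hit only if the surface faces the ray
    }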

This topic is closed to new replies.
