I followed NVIDIA's tutorial (the CSM and PSSM PDFs) and sample project, but I'm confused about how to compare the fragment depth with the split distances.
My fragment shader looks like this:
in vec4 vPos;                        // interpolated position used below (in whatever space uShadMVP expects)

uniform sampler2DArrayShadow uShadMap;
uniform mat4  uShadMVP[SHAD_SPLIT];
uniform float uShadDist[SHAD_SPLIT];

float shadowCoef() {
    vec4 shadCoord;
    int i;

    // Find the first split whose far bound is beyond this fragment's depth.
    for (i = 0; i < SHAD_SPLIT; i++) {
        if (gl_FragCoord.z < uShadDist[i])
            break;
    }
    if (i >= SHAD_SPLIT)
        return 1.0;                  // past the last split: treat as fully lit

    // Transform into the chosen split's shadow map; for sampler2DArrayShadow,
    // .xy are the texture coords, .z selects the layer, .w is the compare depth.
    shadCoord   = uShadMVP[i] * vPos;
    shadCoord.w = shadCoord.z;
    shadCoord.z = float(i);
    return texture(uShadMap, shadCoord);
}
uShadDist contains the split distances in view space, e.g. [12.818679, 26.606153, 46.403923, 100.000061], but I guess I need to convert them to the same [0, 1] depth range as gl_FragCoord.z before comparing, right?
Here is NVIDIA's code:
// f[i].fard is originally in eye space - tells us how far we can see.
// Here we compute it in camera homogeneous coordinates. Basically, we calculate
// cam_proj * (0, 0, f[i].fard, 1)^t and then normalize to [0; 1]
far_bound[i] = 0.5f*(-f[i].fard*cam_proj[10]+cam_proj[14])/f[i].fard + 0.5f;
I don't understand what it does or how. I tried multiplying the split distance by the camera's perspective projection matrix, but I always get values >= 1. Can someone explain what value I should be comparing gl_FragCoord.z against?
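Written out step by step, this is what I think that one-liner is doing (just my reading of it, assuming a standard column-major OpenGL perspective projection where cam_proj[11] == -1, and the default glDepthRange(0, 1); the function and variable names are mine):

// Convert a positive eye-space distance (e.g. 12.818679) to the window-space
// depth value that gl_FragCoord.z would hold at that distance.
float eyeDistToWindowDepth(const float proj[16], float dist)
{
    float eyeZ  = -dist;                      // the camera looks down -Z in eye space
    float clipZ = proj[10] * eyeZ + proj[14]; // third row of proj * (0, 0, eyeZ, 1)
    float clipW = -eyeZ;                      // fourth row: proj[11] == -1, so w = -eyeZ = dist
    float ndcZ  = clipZ / clipW;              // perspective divide -> [-1, 1]
    return 0.5f * ndcZ + 0.5f;                // depth-range mapping -> [0, 1]
}

If that reading is correct, I suppose uShadDist should hold these converted values rather than the raw eye-space distances, so the comparison against gl_FragCoord.z is apples to apples?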
One more question: I'm using the practical split scheme, but I don't know what lambda value to use. I went with 0.5, i.e. C = (Clog + Cuni) * 0.5f, but I'm not sure whether that's the best value.
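For reference, here's roughly how I compute the splits (a sketch of the practical split scheme with my own names; nearPlane/farPlane are the camera's near and far clip distances):

#include <cmath>
#include <vector>

// Practical split scheme: blend logarithmic and uniform split positions.
// C_i = lambda * n * (f/n)^(i/N) + (1 - lambda) * (n + (f - n) * i/N)
std::vector<float> computeSplits(float nearPlane, float farPlane,
                                 int numSplits, float lambda)
{
    std::vector<float> splits(numSplits + 1);
    for (int i = 0; i <= numSplits; ++i) {
        float s    = float(i) / float(numSplits);
        float cLog = nearPlane * std::pow(farPlane / nearPlane, s);
        float cUni = nearPlane + (farPlane - nearPlane) * s;
        splits[i]  = lambda * cLog + (1.0f - lambda) * cUni;
    }
    return splits; // splits[0] == nearPlane, splits[numSplits] == farPlane
}

Is 0.5 a sensible default for lambda here, or does the right value just depend on the scene?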