Quote
I am now trying to figure out how to get the D24 part of the DSV
In the shader, in case your code does not already contain these declarations:
Texture2D renderedShadowMap : register (t0);
Texture2D depthShadowMap : register (t1);
SamplerState linearSampler : register (s0);
The correct filter for the sampler is D3D11_FILTER_MIN_MAG_LINEAR_MIP_POINT and you can otherwise use the defaults in the docs for D3D11_SAMPLER_DESC.
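In case it helps, here is a minimal C++ sketch of creating and binding such a sampler (device and context are assumed to be your ID3D11Device and ID3D11DeviceContext; everything other than the filter just restates the documented defaults):
//sampler for reading the shadow map; only the filter differs from the documented defaults
D3D11_SAMPLER_DESC samplerDesc = {};
samplerDesc.Filter = D3D11_FILTER_MIN_MAG_LINEAR_MIP_POINT;
samplerDesc.AddressU = D3D11_TEXTURE_ADDRESS_CLAMP;
samplerDesc.AddressV = D3D11_TEXTURE_ADDRESS_CLAMP;
samplerDesc.AddressW = D3D11_TEXTURE_ADDRESS_CLAMP;
samplerDesc.ComparisonFunc = D3D11_COMPARISON_NEVER;
samplerDesc.MaxLOD = D3D11_FLOAT32_MAX;

ID3D11SamplerState *linearSampler = nullptr;
device->CreateSamplerState(&samplerDesc, &linearSampler);
context->PSSetSamplers(0, 1, &linearSampler); //s0 in the shader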
To access the map:
shadowMapZ = depthShadowMap.Sample(linearSampler, shadowLocality).r;
where shadowMapZ is a scalar float and shadowLocality is a float2. I will explain how to compute shadowLocality in a moment.
Quote
so I can compare it against the z position of the scene vertices.
While it is theoretically possible to do shadow mapping with such a comparison, I do not recommend that approach. The main problem is the homogeneous divide the hardware performs before the depth-stencil comparison. (The homogeneous divide is what lets the non-linear perspective projection be expressed with only linear matrices.) As a result, the Z values are distributed non-linearly, with most of the precision concentrated near the camera and values bunching together in the distance. There is no way to turn this divide off in D3D11, and since the viewer-Z and shadow-Z axes point in different directions, the math gets complicated.
Here is what I do to avoid this problem; it also makes computing shadowLocality easy:
Computing the shadow map
This is for a directional light. Spot lights are more complicated.
The shadow map needs to use an orthographic projection. This can be done with XMMatrixOrthographicLH() when computing the map. The input values describe a box along the light vector: the width and height should be large enough to cover the shadowed area (larger values will introduce blockiness), nearZ can be 0, and farZ should be large enough to cover the maximum distance from the light source to anything in the scene.
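As a rough C++ sketch with DirectXMath (the light direction, box size, and distances are placeholder numbers you would tune for your own scene):
using namespace DirectX;

//view matrix looking along the light vector toward the shadowed area
XMVECTOR lightDir = XMVector3Normalize(XMVectorSet(-0.5f, -1.0f, 0.3f, 0.0f));
XMVECTOR focus = XMVectorSet(0.0f, 0.0f, 0.0f, 1.0f); //center of the shadowed area
XMVECTOR lightPos = XMVectorSubtract(focus, XMVectorScale(lightDir, 100.0f));
XMVECTOR up = XMVectorSet(0.0f, 1.0f, 0.0f, 0.0f);
XMMATRIX lightView = XMMatrixLookAtLH(lightPos, focus, up);

//orthographic "box" along the light vector
XMMATRIX lightProj = XMMatrixOrthographicLH(
    200.0f,   //width of the box
    200.0f,   //height of the box
    0.0f,     //nearZ
    400.0f);  //farZ, large enough to reach everything in the scene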
The pixel shader needs to output a "pure" Z; a normally rendered scene from the light's point of view is not needed. The output can simply be the .z value from the SV_POSITION member of the input struct:
float4 shadowMapPixelShader(shadowPSInput_s input) : SV_TARGET
{
    float z = input.position.z;
    return float4(z, z, z, 1);
}
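For completeness, shadowPSInput_s and a vertex shader that feeds it might look something like this (the constant buffer name and worldLightViewProj are placeholders; worldLightViewProj stands for the combined world, light-view, and orthographic projection matrix from above):
struct shadowPSInput_s
{
    float4 position : SV_POSITION;
};

cbuffer shadowVSConstants : register (b0)
{
    matrix worldLightViewProj; //world * light view * orthographic projection
};

shadowPSInput_s shadowMapVertexShader(float4 position : POSITION)
{
    shadowPSInput_s output;
    output.position = mul(position, worldLightViewProj);
    return output;
}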
The DSV is still needed so the shadow map is computed correctly. If you inspect the rendered map, objects in the distance should be white and those close up should be black.
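If it helps, binding and clearing for the shadow pass might look like this (shadowRTV, shadowDSV, and shadowViewport are assumed to be the views/viewport you created for the shadow map texture):
//bind the shadow map RTV together with its DSV so depth testing still resolves occlusion
context->OMSetRenderTargets(1, &shadowRTV, shadowDSV);
context->RSSetViewports(1, &shadowViewport);

//clear to white / max depth so unrendered texels read as "far away"
const float white[4] = { 1.0f, 1.0f, 1.0f, 1.0f };
context->ClearRenderTargetView(shadowRTV, white);
context->ClearDepthStencilView(shadowDSV, D3D11_CLEAR_DEPTH, 1.0f, 0);

//...draw the scene with the shadow matrices and the pixel shader above...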
Final pass
The constant buffer for the vertex shader will need to hold both the matrices for the shadow map and those for rendering the scene normally. In addition to multiplying each vertex by the "regular" matrices, also multiply it by the shadow matrices (just as was done when the map was created). This recreates the shadow Zs that were used when creating the map, so the later comparison is simple, and it almost computes shadowLocality in the process. Pass the resulting float4 to the pixel shader in a struct like the one below (a sketch of a matching vertex shader follows it):
struct finalPSInput_s
{
    float4 position : SV_POSITION;
    float4 shadowPosition : TEXCOORD0;
    //anything else needed
};
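A minimal sketch of a matching vertex shader (the constant buffer layout and matrix names are assumptions; use whatever your code already has):
cbuffer finalVSConstants : register (b0)
{
    matrix worldViewProj;       //regular camera matrices
    matrix shadowWorldViewProj; //the same matrices used when rendering the map
};

finalPSInput_s finalVertexShader(float4 position : POSITION)
{
    finalPSInput_s output;
    output.position = mul(position, worldViewProj);
    //recreate the shadow-map position for this vertex
    output.shadowPosition = mul(position, shadowWorldViewProj);
    return output;
}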
To compute shadowLocality: The Xs and Ys in shadowPosition range from -1 to 1. As you know, Xs and Ys in a texture range from 0 to 1. Also, Ys are inverted in the rasterization process. The mapping equations are therefore:
localityX = shadowX * 0.5 + 0.5;
localityY = 0.5 - shadowY * 0.5;
After sampling the shadow map, if (shadowPosition.z + bias > shadowMapZ) the pixel is in shadow; otherwise it is not. The bias is necessary to prevent artifacts, and it will depend on the size of the shadow box (you'll need to experiment to find the best value).
So here is the pixel shader (and we don't need the shadow DSV anymore):
//at this point, the pixel color should be ready to output except with only ambient light
float2 shadowLocality;
shadowLocality.x = shadowPosition.x * 0.5 + 0.5;
shadowLocality.y = 0.5 - shadowPosition.y * 0.5;
//use the RTV (renderedShadowMap), not the DSV
float shadowMapZ = renderedShadowMap.Sample(linearSampler, shadowLocality).r;
if (shadowPosition.z + bias < shadowMapZ)
{
    //pixel is not in shadow and the light color should be added
}
//output the pixel
-- blicili