[DX11] Deferred rendering and alpha-to-coverage
I would like to use our new DX11 deferred renderer to draw all of our ground cover foliage, since that would let the foliage be lit in the standard deferred lighting pass. However, I'm not having any success.
Here's how I'm setting up the DX11 blend state:
D3D11_BLEND_DESC desc = {};
desc.AlphaToCoverageEnable = TRUE;
desc.IndependentBlendEnable = FALSE;
desc.RenderTarget[0].BlendEnable = FALSE;
desc.RenderTarget[0].SrcBlend = D3D11_BLEND_ONE;
desc.RenderTarget[0].DestBlend = D3D11_BLEND_ZERO;
desc.RenderTarget[0].BlendOp = D3D11_BLEND_OP_ADD;
desc.RenderTarget[0].SrcBlendAlpha = D3D11_BLEND_ONE;
desc.RenderTarget[0].DestBlendAlpha = D3D11_BLEND_ZERO;
desc.RenderTarget[0].BlendOpAlpha = D3D11_BLEND_OP_ADD;

ID3D11BlendState* blendState = nullptr;
device->CreateBlendState(&desc, &blendState);
The deferred renderer has 3 render targets:
RT0: R8G8B8A8_UNORM: RGB=diffuse, A=alpha-to-coverage output
RT1: R16G16B16A16_FLOAT: RGB=emissive, A=unused
RT2: R8G8B8A8_UNORM: RG=normal, B=spec power, A=spec intensity
I assume that to use alpha-to-coverage in a deferred renderer, I simply need to set AlphaToCoverageEnable to true in the blend state and then output an opacity value through RT0's alpha component. But when I try this, any alpha value <= 0.5 output to RT0 results in nothing at all being written to RT0's RGBA components, while any alpha value > 0.5 results in RT0's RGBA components being written just fine.
Any help appreciated!
Yeah, that's what I'd expect to see, but I'm just getting 2 states: unwritten (a <= 0.5) and written (a > 0.5). MSAA is disabled.
Shouldn't you also set the render target write mask in the blending descriptor?

desc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;
Alpha to coverage is an MSAA-only technique; how could it work without it? Forget about the supersampled depth part, that is the edge AA and is unrelated here. Alpha to coverage just relies on masking a variable number of a pixel's destination samples based on opacity; the transparency is produced by blending the samples together at the resolve step.
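Conceptually, the hardware turns the output alpha into a per-sample coverage mask, something like this (a sketch only; real hardware chooses which samples to cover and usually dithers the pattern):

#include <cstdint>

// Conceptual sketch of alpha-to-coverage, not the exact hardware rule:
// the output alpha selects how many of the pixel's MSAA samples survive.
uint32_t CoverageFromAlpha(float alpha, uint32_t sampleCount) // sampleCount is 1..8 in practice
{
    uint32_t covered = (uint32_t)(alpha * sampleCount + 0.5f); // round to nearest
    if (covered > sampleCount) covered = sampleCount;
    return (covered == 0) ? 0u : ((1u << covered) - 1u); // set the lowest 'covered' bits
}

With sampleCount == 1 (MSAA disabled) the only possible masks are 0 and 1, which is exactly the all-or-nothing cutoff at alpha = 0.5 you are seeing.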
Now, for a deferred renderer to work with alpha to coverage, you have no choice but to use MSAA surfaces for your G-buffer. That is the first requirement, and a big one for bandwidth and memory usage.
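For example, a minimal sketch of creating RT0 as an MSAA surface (assuming a 'device', 'width', and 'height' you already have; the other targets and the depth buffer must use the same SampleDesc):

// Sketch: RT0 of the G-buffer as a 4x MSAA surface. RT1, RT2 and the
// depth-stencil buffer must be created with the same SampleDesc.
D3D11_TEXTURE2D_DESC texDesc = {};
texDesc.Width = width;
texDesc.Height = height;
texDesc.MipLevels = 1;
texDesc.ArraySize = 1;
texDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
texDesc.SampleDesc.Count = 4;   // 4x MSAA
texDesc.SampleDesc.Quality = 0;
texDesc.Usage = D3D11_USAGE_DEFAULT;
texDesc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;

ID3D11Texture2D* rt0 = nullptr;
device->CreateTexture2D(&texDesc, nullptr, &rt0);

The lighting shader then has to declare it as a Texture2DMS<float4, 4> and Load() individual samples, rather than sampling a plain Texture2D.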
But your problem only starts there. If you do that, then at the lighting stage you will have pixels that are "uniform", where all the samples are identical and can be lit once, and pixels where the samples differ, either because of the edge/depth boundaries of triangles or because of alpha to coverage. You have no choice but to light each sample before blending them! You have to detect such pixels, and it is not always trivial to do cheaply.
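One common way to do it (a sketch under assumed names such as 'device' and 'context', not the only option): a fullscreen classification pass Load()s all of a pixel's G-buffer samples and writes stencil = 1 where they differ, then the lighting is drawn twice against that mark, once at pixel frequency and once at sample frequency:

// Sketch: "classify then light". Assumes a prior fullscreen pass has written
// stencil = 1 for pixels whose G-buffer samples differ (triangle edges,
// alpha-to-coverage) and left stencil = 0 elsewhere.
D3D11_DEPTH_STENCIL_DESC ds = {};
ds.DepthEnable = FALSE;
ds.StencilEnable = TRUE;
ds.StencilReadMask = 0xFF;
ds.StencilWriteMask = 0;  // stencil is read-only during lighting
ds.FrontFace.StencilFunc = D3D11_COMPARISON_EQUAL;
ds.FrontFace.StencilPassOp = D3D11_STENCIL_OP_KEEP;
ds.FrontFace.StencilFailOp = D3D11_STENCIL_OP_KEEP;
ds.FrontFace.StencilDepthFailOp = D3D11_STENCIL_OP_KEEP;
ds.BackFace = ds.FrontFace;

ID3D11DepthStencilState* stencilEqual = nullptr;
device->CreateDepthStencilState(&ds, &stencilEqual);

// Per-pixel lighting pass: only where all samples matched (stencil ref 0).
context->OMSetDepthStencilState(stencilEqual, 0);
// ... draw the fullscreen lighting pass with an ordinary pixel shader ...

// Per-sample lighting pass: only where samples differed (stencil ref 1).
// Declaring an SV_SampleIndex input in the pixel shader makes it run once
// per sample instead of once per pixel.
context->OMSetDepthStencilState(stencilEqual, 1);
// ... draw the fullscreen lighting pass with the per-sample shader ...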
On 8/28/2017 at 8:12 PM, galop1n said: You have no choice but to light each sample before blending them! You have to detect such pixels, and it is not always trivial to do cheaply.
I presume this is done heuristically? It also seems you need dynamic branching?
Additionally, the G-buffer will consume a lot of memory (4x or 8x MSAA).
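For a sense of scale: the three targets listed above total 16 bytes per pixel (4 + 8 + 4), so at 1920x1080 that is roughly 32 MB without MSAA, roughly 127 MB at 4x, and roughly 253 MB at 8x, before counting the matching MSAA depth buffer.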