I decided to draw all transparent geometry twice. In the first pass, opaque fragments are processed by the pixel shader while transparent fragments are clipped inside the pixel shader. In the second pass, opaque fragments are discarded by the early-z test of the rasterizer while transparent fragments are processed by the pixel shader (and blended afterwards).
1. Bind a depth-stencil state with a Less-Equal comparison function.
2. Bind no blend state.
3. Render all geometry and clip transparent fragments.
4. Bind a depth-stencil state with a Less comparison function.
5. Bind an alpha-to-coverage or alpha/transparency blend state.
6. Render the transparent geometry.
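For concreteness, here is a minimal D3D11 sketch of the state objects these two passes could use. The function name `CreateTwoPassStates` and the choice to disable depth writes in the second pass are my own assumptions, and whether you prefer alpha-to-coverage or the plain alpha blend shown here is up to you:

```cpp
#include <d3d11.h>

// Sketch: depth-stencil and blend states for the two passes described above.
// Pass 1 (steps 1-3): depth func LESS_EQUAL, depth writes on, no blending,
//                     clip() in the pixel shader kills transparent fragments.
// Pass 2 (steps 4-6): depth func LESS so already-written opaque fragments fail
//                     the depth test, alpha blending for the rest.
void CreateTwoPassStates(ID3D11Device* device,
                         ID3D11DepthStencilState** dssPass1,
                         ID3D11DepthStencilState** dssPass2,
                         ID3D11BlendState** bsPass2)
{
    D3D11_DEPTH_STENCIL_DESC ds = {};
    ds.DepthEnable    = TRUE;
    ds.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
    ds.DepthFunc      = D3D11_COMPARISON_LESS_EQUAL;   // pass 1
    device->CreateDepthStencilState(&ds, dssPass1);

    ds.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ZERO;   // assumption: no depth writes for blended geometry
    ds.DepthFunc      = D3D11_COMPARISON_LESS;         // pass 2
    device->CreateDepthStencilState(&ds, dssPass2);

    // Pass 2 blend state: standard alpha blending. Alternatively, set
    // AlphaToCoverageEnable = TRUE and leave blending off.
    D3D11_BLEND_DESC bd = {};
    bd.AlphaToCoverageEnable = FALSE;
    bd.RenderTarget[0].BlendEnable           = TRUE;
    bd.RenderTarget[0].SrcBlend              = D3D11_BLEND_SRC_ALPHA;
    bd.RenderTarget[0].DestBlend             = D3D11_BLEND_INV_SRC_ALPHA;
    bd.RenderTarget[0].BlendOp               = D3D11_BLEND_OP_ADD;
    bd.RenderTarget[0].SrcBlendAlpha         = D3D11_BLEND_ONE;
    bd.RenderTarget[0].DestBlendAlpha        = D3D11_BLEND_ZERO;
    bd.RenderTarget[0].BlendOpAlpha          = D3D11_BLEND_OP_ADD;
    bd.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;
    device->CreateBlendState(&bd, bsPass2);

    // Pass 1 simply keeps the default blend state,
    // e.g. OMSetBlendState(nullptr, nullptr, 0xFFFFFFFF).
}
```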
I assume that one should use the most restrictive threshold to check for transparency: alpha less than 1?
In a deferred renderer, the normals of the opaque geometry are stored in the GBuffer, so step 3 corresponds to GBuffer packing and unpacking (i.e. two passes). Transparency is handled separately in a forward pass (step 6). But what if I want to use the normal buffer of the GBuffer in post-processing? Should I take the normals of transparent fragments into account (and thus update the normal buffer in step 6 as well), with or without blending? Or should I neglect these normals (but then what about fragments with high alpha values)?
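To make the "neglect these normals" option concrete, here is a hedged D3D11 sketch. It assumes the scene color is bound at render-target slot 0 and the GBuffer normal buffer at slot 1 during the forward pass (the slot assignment and the function name are my assumptions); color is alpha blended as usual while all writes to the normal target are masked out:

```cpp
#include <d3d11.h>

// Sketch: blend state for the forward transparency pass (step 6) that keeps
// the GBuffer normal buffer bound but untouched, so it retains only the
// normals of opaque geometry for post-processing.
ID3D11BlendState* CreateForwardTransparencyBlendState(ID3D11Device* device)
{
    D3D11_BLEND_DESC bd = {};
    bd.AlphaToCoverageEnable  = FALSE;
    bd.IndependentBlendEnable = TRUE;   // allow per-render-target settings

    // RT0: scene color, standard alpha blending.
    bd.RenderTarget[0].BlendEnable           = TRUE;
    bd.RenderTarget[0].SrcBlend              = D3D11_BLEND_SRC_ALPHA;
    bd.RenderTarget[0].DestBlend             = D3D11_BLEND_INV_SRC_ALPHA;
    bd.RenderTarget[0].BlendOp               = D3D11_BLEND_OP_ADD;
    bd.RenderTarget[0].SrcBlendAlpha         = D3D11_BLEND_ONE;
    bd.RenderTarget[0].DestBlendAlpha        = D3D11_BLEND_ZERO;
    bd.RenderTarget[0].BlendOpAlpha          = D3D11_BLEND_OP_ADD;
    bd.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;

    // RT1: GBuffer normals, no blending and no writes at all.
    bd.RenderTarget[1].BlendEnable           = FALSE;
    bd.RenderTarget[1].RenderTargetWriteMask = 0;   // mask out normal writes

    ID3D11BlendState* state = nullptr;
    device->CreateBlendState(&bd, &state);
    return state;
}
```

The other option from the question (updating the normal buffer in step 6) would amount to enabling the write mask, and possibly blending, on `RenderTarget[1]` instead.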