What's the current state of the art when it comes to rendering shadowmaps for point-lights?
Rendering six times into a depth cubemap (plus culling)? Or is there something more modern?
Not looking for something exotic. Looking for something practical.
Thanks!
Point-Lights Shadowmaps
(respectfully) you're asking semi-contradictory questions:
you're saying "state of the art" but then you say "not exotic"?
anyway:
- if you want state of the art, then raytraced shadows it is
- also, and not far off, are volumetrically lit and shadowed scenes using voxelized volumes and the Henyey-Greenstein scattering function, as used in Rise of the Tomb Raider
- if you want practical, like in Doom 3, then you can start here (one of these chapters shows how it was done in that game): https://developer.download.nvidia.com/books/HTML/gpugems/gpugems_part02.html
- if you just want shadows, then anything goes; googled tutorials will do
have fun!
You're right. Let me clarify: state of the art == how do modern games render point light shadows?
The main approach nowadays is still shadow mapping. For point light shadows, you can render each face separately into the faces of a cubemap texture or a texture atlas, like you said.
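As a concrete reference for the per-face rendering, here is a sketch (names are mine, not from any particular engine) of the six view orientations used when rendering each cubemap face, in the usual +X, -X, +Y, -Y, +Z, -Z face order. This follows the common left-handed D3D-style convention; verify the up vectors against your API's cubemap face layout:

```cpp
#include <array>

// Hypothetical helper: forward and up vectors for the six cubemap face
// view matrices. Each face then gets a 90-degree FOV perspective projection
// with near/far matching the light's range.
struct Vec3 { float x, y, z; };

struct CubeFace { Vec3 forward; Vec3 up; };

inline std::array<CubeFace, 6> cubemapFaceOrientations() {
    return {{
        {{ 1, 0, 0}, {0, 1, 0}},  // +X
        {{-1, 0, 0}, {0, 1, 0}},  // -X
        {{ 0, 1, 0}, {0, 0,-1}},  // +Y  (up flips to stay orthogonal)
        {{ 0,-1, 0}, {0, 0, 1}},  // -Y
        {{ 0, 0, 1}, {0, 1, 0}},  // +Z
        {{ 0, 0,-1}, {0, 1, 0}},  // -Z
    }};
}
```

In the naive multi-pass version you loop over these six orientations, build a look-at view matrix from the light position for each, and render the shadow casters once per face.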
On hardware with geometry shader support, the geometry shader can export the SV_RenderTargetArrayIndex value, which selects inside the shader which cubemap face you render into (or SV_ViewportArrayIndex, to select which viewport to render into). In the most naive approach, you expand each primitive six times inside the geometry shader, once per face.
The better approach is instancing: render the mesh with only as many instances as there are faces it will actually be visible in. The newest GPUs can export SV_RenderTargetArrayIndex and SV_ViewportArrayIndex from the vertex shader stage, so you don't even need a geometry shader for this.
You will also want to determine which cubemap faces (frustums) are visible from the main camera, so you can skip rendering the faces the camera can't see.
So to recap, a state-of-the-art cube shadow map implementation will:
- render a mesh in a single pass into all visible faces
- use instancing to determine which faces the mesh will be visible in (the instance buffer contains a view-projection matrix index per face)
- use a CPU-side frustum-frustum intersection test to determine which cubemap faces are visible from the main camera
- skip the geometry shader when the vertex shader stage supports writing SV_RenderTargetArrayIndex or SV_ViewportArrayIndex
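The CPU-side frustum-frustum check from the recap doesn't need an exact intersection test; a common conservative shortcut (all names below are hypothetical) approximates a cubemap face frustum by its five corner points and culls the face only when all five lie outside the same camera frustum plane. This can keep a face that is actually invisible, but never culls a visible one:

```cpp
#include <array>

struct Vec3  { float x, y, z; };
struct Plane { Vec3 n; float d; };  // point is inside when dot(n, p) + d >= 0

static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// True if a face frustum, given by its 5 corners (light position plus the
// four far-plane corners), might be visible in the camera frustum described
// by inward-facing planes. A face is culled only when one camera plane has
// all 5 corners on its outside.
bool faceFrustumVisible(const std::array<Vec3, 5>& corners,
                        const Plane* camPlanes, int planeCount) {
    for (int i = 0; i < planeCount; ++i) {
        bool allOutside = true;
        for (const Vec3& p : corners) {
            if (dot(camPlanes[i].n, p) + camPlanes[i].d >= 0.0f) {
                allOutside = false;
                break;
            }
        }
        if (allOutside) return false;  // separating plane found: cull this face
    }
    return true;
}

// Example corner set for one face: the +Z face of a light at lightPos with
// range r spans lightPos + r * (+/-1, +/-1, 1).
std::array<Vec3, 5> plusZFaceCorners(Vec3 l, float r) {
    return {{ l,
              {l.x - r, l.y - r, l.z + r}, {l.x + r, l.y - r, l.z + r},
              {l.x - r, l.y + r, l.z + r}, {l.x + r, l.y + r, l.z + r} }};
}
```

Run this once per light per face each frame; faces that fail the test are skipped entirely, which pays off for lights near the edge of the screen where most faces point away from the camera.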
You can also have a look at mapping spheres to 2D textures for single-pass shadow map rendering. Last time I checked, it was also quite a bit faster than cubemap shadows.
What you get is basically a UV unwrap of the sphere onto a texture, which you then write your depth info to.
When you need to sample it later, you reverse the spherical mapping.
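As an illustration of such an invertible sphere-to-texture unwrap (not necessarily the exact parameterization the paper below uses), octahedral mapping is a popular choice because both directions of the mapping are cheap:

```cpp
#include <cmath>

// Octahedral mapping sketch: encode() folds a unit direction onto a square
// in [-1,1]^2 (the shadow map's UV space); decode() inverts it exactly, so
// depth written through encode() can be fetched later from the direction
// toward the light.
struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

static float signNotZero(float v) { return v >= 0.0f ? 1.0f : -1.0f; }

Vec2 octEncode(Vec3 d) {  // d must be normalized
    const float invL1 = 1.0f / (std::fabs(d.x) + std::fabs(d.y) + std::fabs(d.z));
    Vec2 p = { d.x * invL1, d.y * invL1 };
    if (d.z < 0.0f) {  // fold the lower hemisphere over the diagonals
        p = { (1.0f - std::fabs(p.y)) * signNotZero(p.x),
              (1.0f - std::fabs(p.x)) * signNotZero(p.y) };
    }
    return p;
}

Vec3 octDecode(Vec2 e) {
    Vec3 n = { e.x, e.y, 1.0f - std::fabs(e.x) - std::fabs(e.y) };
    if (n.z < 0.0f) {  // unfold the lower hemisphere
        const float nx = (1.0f - std::fabs(n.y)) * signNotZero(n.x);
        const float ny = (1.0f - std::fabs(n.x)) * signNotZero(n.y);
        n.x = nx; n.y = ny;
    }
    const float len = std::sqrt(n.x*n.x + n.y*n.y + n.z*n.z);
    return { n.x / len, n.y / len, n.z / len };
}
```

The round trip decode(encode(d)) reproduces the original direction, which is exactly the "reverse the spherical mapping" step needed at sampling time.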
Take a look at this paper for more reference:
https://www.cimat.mx/~alberto/Paper.pdf