
Direct3D9: Can you share a depth buffer between a MSAA rendertarget and a non-MSAA RT?


Hi all,

In our old Direct3D 9 application we are drawing buildings onto a multisampled rendertarget. Then we draw soft particles onto a different rendertarget.

Both render passes need to use the same depth buffer, so the particles correctly disappear behind the buildings.

Here we have a problem: When the “buildings” rendertarget is multisampled, the “soft particle” rendertarget needs to be multisampled too. Otherwise they couldn't share the same depth buffer.

But this is bad for performance - the soft particles do not need multisampling.

Is there any way I could re-use the “buildings” depth buffer for the particle rendering? Here are some ideas I had:

  1. First render the buildings, then use StretchRect to copy the “buildings depth buffer” (which is multisampled) to a “particles depth buffer” (not multisampled). However, there is a limitation in DirectX 9 that makes this impossible: StretchRect must be called outside of a BeginScene / EndScene pair when you operate on depth surfaces.
  2. Call StretchRect after the frame has finished rendering, so we would be re-using the depth buffer from the previous frame (see the sketch below this list). When the camera moves slowly this might be acceptable, but on quick camera movements it would look bad.
  3. Before drawing the particles, re-render the buildings (depth only) onto the “particles depth buffer”. This is bad for performance.

Are there any solutions I have overlooked?

Are you using the FFP or shaders? I couldn't really tell from your post. At least with shaders, you can simply simulate StretchRect with a manual render pass, which can be done between BeginScene/EndScene, allowing you to use option 1 without issues. The performance should be about the same - I'm not absolutely certain how StretchRect is implemented, but since on newer GPUs DX9 is usually just emulated on top of modern APIs under the hood, it will be very similar (the main overhead is the copy bandwidth, not the render setup, anyway).
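
Roughly, such a manual copy pass could look like this (just a sketch with placeholder names; it copies a color texture via a full-screen quad and uses fixed-function texturing for brevity - with shaders you'd bind a trivial copy pixel shader instead):

```cpp
// "Manual StretchRect": draw a full-screen quad that samples the source
// texture into the bound render target. Can run inside BeginScene/EndScene.
// All names (device, srcTexture, dstSurface, width, height) are placeholders.
#include <d3d9.h>

struct QuadVertex { float x, y, z, rhw; float u, v; };

void CopyViaFullscreenQuad(IDirect3DDevice9*  device,
                           IDirect3DTexture9* srcTexture,
                           IDirect3DSurface9* dstSurface,
                           float width, float height)
{
    device->SetRenderTarget(0, dstSurface);
    device->SetTexture(0, srcTexture);
    device->SetRenderState(D3DRS_ZENABLE, FALSE);
    device->SetRenderState(D3DRS_CULLMODE, D3DCULL_NONE);

    // Output the texture color unmodified.
    device->SetTextureStageState(0, D3DTSS_COLOROP, D3DTOP_SELECTARG1);
    device->SetTextureStageState(0, D3DTSS_COLORARG1, D3DTA_TEXTURE);

    // Pre-transformed quad; the -0.5 offset aligns texels to pixels in D3D9.
    const QuadVertex quad[4] = {
        { -0.5f,         -0.5f,          0.0f, 1.0f, 0.0f, 0.0f },
        { width - 0.5f,  -0.5f,          0.0f, 1.0f, 1.0f, 0.0f },
        { -0.5f,         height - 0.5f,  0.0f, 1.0f, 0.0f, 1.0f },
        { width - 0.5f,  height - 0.5f,  0.0f, 1.0f, 1.0f, 1.0f },
    };

    device->SetFVF(D3DFVF_XYZRHW | D3DFVF_TEX1);
    device->DrawPrimitiveUP(D3DPT_TRIANGLESTRIP, 2, quad, sizeof(QuadVertex));
}
```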


I am using shaders. Unfortunately, I cannot replace StretchRect with custom rendering. The key thing I need to do is copy from a multisampled depth buffer to a non-multisampled one, and in DirectX 9 it is not possible to read from a depth buffer in a pixel shader. In DirectX 10 and higher this is possible with a special surface format [1].

A workaround would be to write the depth redundantly to an additional floating-point rendertarget; the “StretchRect replacement” shader could then read from that instead of the depth buffer. However, if the buildings have a lot of overdraw, this would increase bandwidth requirements and could hurt performance.

[1] https://www.gamedev.net/forums/topic/487477-reading-from-a-depth-texture-in-a-pixel-shader-in-dx9/
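
A rough sketch of what that MRT setup could look like (all names are hypothetical placeholders; it also glosses over the MSAA details - the extra target typically has to share render target 0's sample count, so it might need its own resolve step):

```cpp
// Workaround sketch: bind an extra R32F target during the buildings pass and
// have the building pixel shader write its depth to COLOR1.
// All names are hypothetical; error handling omitted.
#include <d3d9.h>

IDirect3DTexture9* CreateDepthAsColorTexture(IDirect3DDevice9* device,
                                             UINT width, UINT height)
{
    IDirect3DTexture9* tex = NULL;
    // R32F render target support is hardware dependent; check with
    // IDirect3D9::CheckDeviceFormat before relying on it.
    device->CreateTexture(width, height, 1, D3DUSAGE_RENDERTARGET,
                          D3DFMT_R32F, D3DPOOL_DEFAULT, &tex, NULL);
    return tex;
}

void BindBuildingsPassTargets(IDirect3DDevice9*  device,
                              IDirect3DSurface9* buildingsColorRT,   // MSAA color target
                              IDirect3DSurface9* depthAsColorRT,     // extra R32F target
                              IDirect3DSurface9* msaaDepthStencil)   // shared depth buffer
{
    device->SetRenderTarget(0, buildingsColorRT);
    device->SetRenderTarget(1, depthAsColorRT);   // extra write bandwidth on overdraw
    device->SetDepthStencilSurface(msaaDepthStencil);
    // The buildings pixel shader then outputs an additional value, e.g. its
    // view-space depth, to the COLOR1 semantic, which the "StretchRect
    // replacement" pass can later sample as an ordinary texture.
}
```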

