Hello,
I am a student building my own real-time path tracer. It already includes distributed global illumination with fuzzy (glossy) reflections, soft shadows, etc.
The problem is that I use only a single sample per pixel for all effects, so the result is very noisy. I am using spatial denoising, but that alone is not enough to get rid of the aliasing, so I also apply temporal denoising, reprojecting previous frames onto the current one.
When I move the camera, the previous frame does not align with the current one, so if I don't do anything I get ugly ghosting artifacts in motion. I solved this by down-weighting the previous-frame accumulation buffer, but then I get no temporal denoising while moving, and every time I stop the camera the accumulation buffer has to build up from scratch.
My solution was to create a velocity buffer: in the ray-gen shader I take the current pixel's world-space position, project it with the previous frame's view-projection matrix, and subtract that from the current frame's screen-space position, like this:
```hlsl
float4 currentFramePosition = mul(float4(payload.prevHitPosition, 1.0f), inverse(g_sceneCB.projectionToWorld));
float4 previousFramePosition = mul(float4(payload.prevHitPosition, 1.0f), g_sceneCB.prevFrameViewProj);
g_rtTextureSpaceMotionVector[dtid] = (currentFramePosition.xy / currentFramePosition.w - previousFramePosition.xy / previousFramePosition.w) * float2(0.5f, -0.5f) + float2(0.5f, 0.5f);
```
I know calculating the current frame's screen-space position is redundant, since I can easily get it from DispatchRaysIndex(), but I did it for consistency; optimization will come later. The scale and bias map the result into (0, 1) space, as I had some issues with negative values in these buffers.
Then, in the composition pass in a compute shader, I use this value to offset the sampling position in the texture, like this:
```hlsl
float2 m = (motionBuffer[DTid.xy] - 0.5f) * 2.0f * cb.textureDim;
if (m.x < 8 && m.x > -8) m.x = 0.0f;
g_renderTarget[DTid.xy] = (g_renderTarget[DTid.xy] + 7 * g_prevFrame[DTid.xy - float2(m.x / 2, 0)]) / 8;
```
The if statement is there to work around floating-point precision issues that made the image drift while the camera is stationary; the y axis is disabled temporarily.
The problem is that I get blocky artifacts in motion:
![](https://uploads.gamedev.net/forums/monthly_2021_09/4d9357ec44164dfdb10a2bfd43841cb8.image.png)
Are they caused by the floating point precision issues or something else? How do I get rid of them?
Full code: Microsista/Pathtracer at TemporalReprojection (github.com)
EDIT: I've switched to double precision and decided to pass the offsets as ints to the composition shader:
```hlsl
double4 currentFramePosition = mul(double4(payload.prevHitPosition, 1.0f), inverse(g_sceneCB.projectionToWorld));
double4 previousFramePosition = mul(double4(payload.prevHitPosition, 1.0f), g_sceneCB.prevFrameViewProj);
g_rtTextureSpaceMotionVector[dtid] = (currentFramePosition.xy / currentFramePosition.w - previousFramePosition.xy / previousFramePosition.w) * int2(960, -540);
```
and in composition it's just:
```hlsl
g_renderTarget[DTid.xy] = (g_renderTarget[DTid.xy] + 7 * g_prevFrame[DTid.xy - motionBuffer[DTid.xy]]) / 8;
```
This is how the velocity buffer looks in PIX in motion:
![](https://uploads.gamedev.net/forums/monthly_2021_09/7a34729df74c497a9d7e916f04afa4ae.image.png)
Is this correct? The image still has blocky artifacts in motion:
![](https://uploads.gamedev.net/forums/monthly_2021_09/83c8b30381e248d3b9745ee0558f2b6f.image.png)
EDIT2: When moving the camera top to bottom it actually works fine: no artifacts, no ghosting, perfect. But when moving the camera bottom to top there is ghosting, and when moving it horizontally there are blocky artifacts. Weird…