hmmm… why does every paper claim to have solved light leaking, only to be topped by the next paper claiming to have solved it as well, but for real this time? : )
My thoughts after reading:
They approximate the scene manually using analytic primitives (box, sphere, capsule…), and they also have LODs for this scene representation. This has two problems: it is hard to automate*, and with increasing scene size and complexity their ‘clustering’ turns into classical ray tracing, requiring a proper acceleration structure and increasing traversal costs.
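For context, here is roughly what such an analytic scene SDF looks like (my own sketch using the standard distance functions, not code from the paper). The brute-force min() union at the end is exactly where the traversal cost problem lives:

```cpp
// Minimal analytic SDF scene in the spirit of the paper's representation
// (my sketch, not their code). Standard distance functions for a sphere,
// box and capsule, combined with a min() union.
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };
static Vec3 operator-(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static float length(Vec3 v) { return std::sqrt(dot(v, v)); }

float sdSphere(Vec3 p, Vec3 c, float r) { return length(p - c) - r; }

float sdBox(Vec3 p, Vec3 c, Vec3 halfExtent)
{
    Vec3 q = p - c;
    q = { std::fabs(q.x) - halfExtent.x, std::fabs(q.y) - halfExtent.y, std::fabs(q.z) - halfExtent.z };
    Vec3 qc = { std::max(q.x, 0.f), std::max(q.y, 0.f), std::max(q.z, 0.f) };
    return length(qc) + std::min(std::max(q.x, std::max(q.y, q.z)), 0.f);
}

float sdCapsule(Vec3 p, Vec3 a, Vec3 b, float r)
{
    Vec3 pa = p - a, ba = b - a;
    float h = std::clamp(dot(pa, ba) / dot(ba, ba), 0.f, 1.f);
    Vec3 d = { pa.x - ba.x*h, pa.y - ba.y*h, pa.z - ba.z*h };
    return length(d) - r;
}

// Brute-force union over all primitives: fine for a handful, but with
// thousands of primitives this needs a proper acceleration structure,
// which is the point made above. The per-primitive-type dispatch is also
// where GPU divergence creeps in.
float sdScene(Vec3 p)
{
    float d = sdSphere(p, {0, 1, 0}, 1.0f);
    d = std::min(d, sdBox(p, {3, 0.5f, 0}, {1, 0.5f, 1}));
    d = std::min(d, sdCapsule(p, {-2, 0, 0}, {-2, 2, 0}, 0.3f));
    return d;
}
```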
I think it's more attractive to use SDF volume bricks than analytic primitives. Automation is much easier (but still a big effort), and there is no GPU divergence from having to support different kinds of primitives. Accuracy and support across art styles and detail levels should be better too, though memory requirements are much higher, and the Sponza curtains would leak again.
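For comparison, sampling a brick is one uniform code path no matter what geometry it encodes. A rough sketch, with brick resolution and layout being my own assumptions:

```cpp
// Rough sketch of evaluating distance from an SDF volume brick instead of
// analytic primitives: one trilinear fetch, no per-primitive-type
// branching. Brick layout and dimensions here are hypothetical.
#include <cmath>

constexpr int kBrickRes = 8;  // 8^3 voxels per brick (assumption)

// The memory downside mentioned above: kBrickRes^3 floats per brick.
struct Brick { float dist[kBrickRes][kBrickRes][kBrickRes]; };

// p is given in normalized brick-local coordinates [0,1)^3.
float SampleBrick(const Brick& b, float px, float py, float pz)
{
    float fx = px * (kBrickRes - 1), fy = py * (kBrickRes - 1), fz = pz * (kBrickRes - 1);
    int x = (int)fx, y = (int)fy, z = (int)fz;
    float tx = fx - x, ty = fy - y, tz = fz - z;

    auto v = [&](int i, int j, int k) { return b.dist[z + k][y + j][x + i]; };

    // Trilinear interpolation of the 8 surrounding voxels.
    float c00 = v(0,0,0) * (1 - tx) + v(1,0,0) * tx;
    float c10 = v(0,1,0) * (1 - tx) + v(1,1,0) * tx;
    float c01 = v(0,0,1) * (1 - tx) + v(1,0,1) * tx;
    float c11 = v(0,1,1) * (1 - tx) + v(1,1,1) * tx;
    float c0 = c00 * (1 - ty) + c10 * ty;
    float c1 = c01 * (1 - ty) + c11 * ty;
    return c0 * (1 - tz) + c1 * tz;
}
```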
*) I remember some papers on mesh segmentation which did this locally, clustering meshes into flat / cylindrical / spherical regions.
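As a hypothetical illustration of that local classification (not taken from any specific paper), one can look at the Gauss map of a cluster's normals: flat regions have (nearly) identical normals, cylindrical regions have normals on a great circle perpendicular to the axis, spherical regions have normals spread over an area.

```cpp
// Hypothetical sketch of classifying a cluster of face normals via its
// Gauss map (my illustration, not from the cited papers).
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 cross(Vec3 a, Vec3 b)
{ return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }
static Vec3 normalize(Vec3 v)
{ float l = std::sqrt(dot(v, v)); return { v.x/l, v.y/l, v.z/l }; }

enum class Region { Flat, Cylindrical, Spherical };

Region Classify(const std::vector<Vec3>& n, float eps = 0.05f)
{
    // Flat: all normals (nearly) parallel to the first one.
    bool flat = true;
    for (const Vec3& m : n)
        if (dot(m, n[0]) < 1.f - eps) { flat = false; break; }
    if (flat) return Region::Flat;

    // Candidate cylinder axis: cross product of the least-aligned pair.
    size_t best = 1; float bestAbs = 2.f;
    for (size_t i = 1; i < n.size(); ++i)
    {
        float a = std::fabs(dot(n[0], n[i]));
        if (a < bestAbs) { bestAbs = a; best = i; }
    }
    Vec3 axis = normalize(cross(n[0], n[best]));

    // Cylindrical if every normal is perpendicular to that axis.
    for (const Vec3& m : n)
        if (std::fabs(dot(m, axis)) > eps) return Region::Spherical;
    return Region::Cylindrical;
}
```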
Unfortunately I do not understand their method of probe placement. It sounds like they place probes on a static regular grid, and if a probe collides with the SDF scene, it moves out of solid space by following the distance gradient. But it seems they do this at runtime, maybe to react to dynamic objects. Imagine a moving wall: probes would then be pushed along with the wall until they snap to the back side behind it. They detect this by checking whether the distance traveled is too large, and then enforce a full update of that probe, rejecting its history (and thus also its integrated multiple bounces?). Interesting; I wonder which artifacts this causes, but with some TAA I guess it rarely shows up.
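Put as a sketch, my reading of the relocation loop would be something like this (all names, step counts and thresholds are my guesses, not the paper's):

```cpp
// Sketch of how I read their probe relocation (my interpretation, not the
// paper's code). Each frame: if a probe sits inside or too close to the
// SDF surface, push it out along the distance gradient; if it traveled
// too far this frame (e.g. it snapped behind a moving wall), reject its
// temporal history.
#include <cmath>

struct Vec3 { float x, y, z; };
// Assumed to exist elsewhere: scene distance and its normalized gradient.
float sdScene(Vec3 p);
Vec3 sdSceneGradient(Vec3 p);

struct Probe
{
    Vec3 pos;          // persists across frames, starts on the regular grid
    bool historyValid;
};

void RelocateProbe(Probe& probe, float minWallDist, float maxTravel)
{
    Vec3 p = probe.pos;
    // Walk out of solid space along the gradient (a few fixed steps).
    for (int i = 0; i < 4; ++i)
    {
        float d = sdScene(p);
        if (d >= minWallDist) break;
        Vec3 g = sdSceneGradient(p);
        float step = minWallDist - d;
        p = { p.x + g.x * step, p.y + g.y * step, p.z + g.z * step };
    }
    // If the probe moved too far in one frame, its cached radiance is
    // likely wrong for the new location: force a full re-trace, which
    // also discards the integrated multiple bounces.
    Vec3 off = { p.x - probe.pos.x, p.y - probe.pos.y, p.z - probe.pos.z };
    if (std::sqrt(dot := off.x*off.x + off.y*off.y + off.z*off.z, dot) > maxTravel)
        probe.historyValid = false;
    probe.pos = p;
}
```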
I also fail to get their multi-bounce thing. I do not understand why they need to spend extra tracing work on it, and why there is a parameter for its contribution. It seems not to be the typical radiance caching method, where multiple bounces are free and correct with no extra effort.
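For reference, the ‘free’ multi-bounce feedback I mean looks like this in DDGI-style radiance caching (a sketch of the general idea, not this paper's method; all names are mine): when a probe ray hits a surface, its shading reads irradiance from last frame's probes, so every frame adds one more bounce at no extra tracing cost.

```cpp
// Sketch of the radiance caching feedback loop (general DDGI-style idea,
// not this paper's method; all names are my assumptions).
struct Vec3 { float x, y, z; };
struct Hit { Vec3 pos; Vec3 normal; Vec3 albedo; bool valid; };

// Assumed to exist elsewhere:
Hit  TraceSceneRay(Vec3 origin, Vec3 dir);
Vec3 DirectLight(const Hit& h);
Vec3 SampleProbeGridIrradiance(Vec3 pos, Vec3 normal);  // last frame's probes
Vec3 SkyRadiance(Vec3 dir);                             // hypothetical sky term

Vec3 ShadeProbeRay(Vec3 origin, Vec3 dir)
{
    Hit h = TraceSceneRay(origin, dir);
    if (!h.valid) return SkyRadiance(dir);
    Vec3 direct   = DirectLight(h);
    Vec3 indirect = SampleProbeGridIrradiance(h.pos, h.normal);  // bounce N-1
    // direct + albedo * indirect: the second and further bounces come from
    // the cache, so no extra rays are traced for them.
    return { direct.x + h.albedo.x * indirect.x,
             direct.y + h.albedo.y * indirect.y,
             direct.z + h.albedo.z * indirect.z };
}
```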
I also failed to understand their probe visibility test. Do they trace a ray in screen space from the pixel to the probe? Or do they use a probe depth buffer like RTXGI does? My guess is a spatial trick: trace only towards one of the 8 affecting probes, dithering the selected probe in screen space, then get visibility to the 7 non-traced probes from neighboring pixels. Something like that.
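Sketching my guess (pure speculation on my part, not the paper's method):

```cpp
// Speculative sketch of the screen-space dither trick guessed above: per
// pixel, pick just one of the 8 surrounding cell-corner probes via a
// 2x2x2 dither pattern, trace visibility only to that one, and let
// neighboring pixels (plus TAA over two frames) supply the other 7.
#include <cstdint>

// Which of the 8 cell-corner probes this pixel tests this frame: a 2x2
// pixel tile covers 4 corners, the frame parity covers the other 4.
int SelectProbeIndex(uint32_t px, uint32_t py, uint32_t frame)
{
    return (px & 1) | ((py & 1) << 1) | ((frame & 1) << 2);  // 0..7
}
```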
Finally, they mention large-scene support but show no examples. The paper feels promising but lacks some pages of better explanation and illustration. Also, it is just another implementation of the idea of updating sparse probes for realtime GI; as with DDGI (now RTXGI), the only news here is in certain implementation details, or which hack works best for whom. I wonder which SDF primitive they would use if the curtains followed a curve.