Hi everyone,
Hope you had a great day writing something that shines at 60 FPS :)
I've found a great talk about a GI solution, Global Illumination in Tom Clancy's The Division, a GDC talk given by Ubisoft's Nikolay Stefanov. Everything looks nice, but I have a question I can't resolve: what is the "surfel" he talks about, and how is a "surfel" represented?
As far as I've searched, there are only some academic papers, and they don't look close to my problem domain: the "surfels" those papers discuss use points as the topology primitive rather than triangle meshes. Are these "surfels" the same terminology and concept?
At 10:55 he says that they "store an explicit surfel list each probe 'sees'", which sounds literally the same as storing the list of surfels hit by the first ray casts from the probe in certain directions (which he mentions a few minutes later). I have a similar probe-capturing stage in my engine's GI baking process: at each probe's position I render a G-buffer cubemap facing the 6 coordinate axes. But what I store in the cubemap is rasterized texel data (world position, normal, albedo, and so on), which is bounded by the cubemap's resolution. Even if I tagged some kind of surface ID during asset creation to mimic "surfels", they still wouldn't transfer accurately into an "explicit surfel list each probe 'sees'" if I keep doing the traditional cubemap work. Do I need to ray cast on the CPU to get an accurate result?
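To make the question concrete, here is a minimal sketch of what I imagine the cubemap-based approach could produce: merge all captured texels that share a tagged surface ID into one averaged surfel, giving a per-probe "surfel list". All type and function names here (`Surfel`, `GBufferTexel`, `buildProbeSurfelList`) are my own invention, not anything from the talk, and the result is still bounded by cubemap resolution, which is exactly my worry:

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

// Hypothetical surfel: averaged attributes of one captured surface patch.
struct Surfel {
    float pos[3]    = {0, 0, 0}; // averaged world position
    float nrm[3]    = {0, 0, 0}; // averaged normal (not renormalized, for brevity)
    float albedo[3] = {0, 0, 0}; // averaged albedo
    int   texels    = 0;         // how many cubemap texels were merged in
};

// One rasterized G-buffer texel from the probe's cubemap capture.
struct GBufferTexel {
    uint32_t surfaceID; // tagged during asset creation (my current idea)
    float pos[3], nrm[3], albedo[3];
};

// Merge all texels sharing a surface ID into one surfel per surface,
// producing this probe's surfel list.
std::vector<Surfel> buildProbeSurfelList(const std::vector<GBufferTexel>& texels) {
    std::unordered_map<uint32_t, Surfel> bySurface;
    for (const auto& t : texels) {
        Surfel& s = bySurface[t.surfaceID];
        for (int i = 0; i < 3; ++i) {
            s.pos[i]    += t.pos[i];
            s.nrm[i]    += t.nrm[i];
            s.albedo[i] += t.albedo[i];
        }
        ++s.texels;
    }
    std::vector<Surfel> list;
    list.reserve(bySurface.size());
    for (auto& kv : bySurface) {
        Surfel& s = kv.second;
        for (int i = 0; i < 3; ++i) { // turn the sums into averages
            s.pos[i]    /= s.texels;
            s.nrm[i]    /= s.texels;
            s.albedo[i] /= s.texels;
        }
        list.push_back(s);
    }
    return list;
}
```

The obvious weakness is that a surface only partially visible in the cubemap still collapses into a single averaged surfel, which is why I suspect the "explicit" list in the talk is built differently.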
Thanks for any kind of help.