Josh Klint said:
I have seen some info on “surfels” but I don't understand how you would find the nearest surfels and interpolate between them.
Ohhh, yeah… that's still my final open problem.
I wanted to do it like lightmaps: each texel links to a surfel, and interpolation is easy too, at basically no cost (rough sketch below).
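Roughly like this, as a minimal CPU-side illustration. The structures and names are made up for the example, not taken from my actual implementation:

```cpp
// Minimal sketch of the lightmap-style lookup. An indirection texture stores
// one surfel index per texel, and shading bilinearly blends the four surfels
// around the sample point.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };

struct Surfel { Vec3 irradiance; /* plus position, normal, radius, ... */ };

struct SurfelAtlas {
    int width, height;
    std::vector<uint32_t> indexTexture; // per-texel surfel index
    std::vector<Surfel>   surfels;

    // Bilinear interpolation between the four surfels linked by the
    // surrounding texels. This only works seamlessly if texels across a
    // chart boundary reference matching surfels, hence the need for
    // seamless UVs.
    Vec3 sampleIrradiance(float u, float v) const {
        float fx = u * width - 0.5f, fy = v * height - 0.5f;
        int x0 = (int)std::floor(fx), y0 = (int)std::floor(fy);
        float tx = fx - x0, ty = fy - y0;
        Vec3 r{0.0f, 0.0f, 0.0f};
        for (int dy = 0; dy <= 1; ++dy)
        for (int dx = 0; dx <= 1; ++dx) {
            int x = std::clamp(x0 + dx, 0, width - 1);
            int y = std::clamp(y0 + dy, 0, height - 1);
            const Vec3& e = surfels[indexTexture[y * width + x]].irradiance;
            float w = (dx ? tx : 1.0f - tx) * (dy ? ty : 1.0f - ty);
            r.x += w * e.x; r.y += w * e.y; r.z += w * e.z;
        }
        return r;
    }
};
```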
One problem is texture seams: because surfels (or any other kind of surface probe) are likely lower resolution than baked lightmaps, seams become very visible.
So I worked on tools to generate seamless UV maps, which is possible using quadrangulation, so texels always line up with texels across a seam. Quadrangulation turned out to be the hardest problem I've ever worked on.
LOD is involved too. We need a seamless, global parameterization on all LODs, and we likely even need a hierarchical mapping between the LODs.
Personally, this brought me to the conclusion that lighting and LOD have to be solved at once, ideally using the same data structures for both.
I have this now, mostly, but it still doesn't feel practical for general game models. The catch is that I have to resample the geometry, which can't preserve the fine details of human-made, technical objects.
Or I would need to add extra edges to the original meshes so the seamless UVs can be applied. But this amplifies geometry for no visual win, and worse: then LOD isn't solved, again.
So that's why, more than five years after having the world's fastest GI algorithm working, I still can't demonstrate or offer it for general application. It sucks.
Besides, even if we had such a lightmapping approach working, there is still another problem on top: with streaming open worlds and dynamic memory management for our surfels / probes, we need to keep the indirection stored in the probe index textures up to date.
As this changes constantly, the updates will have some cost, so maybe it's better to have no direct surface-to-probe mapping at all and to resolve it at runtime instead. Seems that's where I'll have to go.
I see these options (all of them bad):
1) Make tiles from the GBuffer; for each tile, traverse the surfel hierarchy and apply the found probes to the pixels of the tile.
It's not that bad, because we don't need a traversal for every single pixel.
But if we do ray tracing, we need a full traversal per hit point to shade it (if it's offscreen).
2) Build a fast lookup acceleration structure for the surfels, e.g. a froxel grid. Then each pixel finds its affecting surfels quickly.
We could extend this beyond the screen (to support RT) using a ‘cubemap’ of froxel grids, with an additional regular grid in the middle. I think EA's surfel approach did just that, proving it's ‘fast enough’. (There's a rough sketch of the lookup after the edit note below.)
3) Splat the surfels to a screen-sized buffer stochastically, so for each pixel we iterate a small region of NxN texels to find all affecting surfels.
Maybe that's faster than building acceleration structures, but again it lacks RT support, of course.
Edit: Those are the ideas for mapping ‘screen space to surfels’.
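For illustration, here's a rough CPU-side sketch of the froxel-grid lookup from option 2. All the names are hypothetical; a real version would live on the GPU and use flat per-cell index ranges instead of nested vectors:

```cpp
// Sketch of a froxel grid: a view-frustum-aligned grid where each cell
// stores the indices of the surfels overlapping it, so a pixel finds its
// affecting surfels without any hierarchy traversal.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

struct FroxelGrid {
    int nx, ny, nz;     // grid resolution in x, y and depth
    float zNear, zFar;  // view-space depth range the grid covers
    std::vector<std::vector<uint32_t>> cells; // surfel indices per froxel

    // Map a screen UV plus view-space depth to a froxel. Depth slices are
    // distributed logarithmically, as is common for froxel grids.
    int froxelIndex(float u, float v, float viewZ) const {
        int x = std::clamp((int)(u * nx), 0, nx - 1);
        int y = std::clamp((int)(v * ny), 0, ny - 1);
        float s = std::log(viewZ / zNear) / std::log(zFar / zNear);
        int z = std::clamp((int)(s * nz), 0, nz - 1);
        return (z * ny + y) * nx + x;
    }

    // Each pixel just iterates a short per-cell list.
    const std::vector<uint32_t>& affectingSurfels(float u, float v,
                                                  float viewZ) const {
        return cells[froxelIndex(u, v, viewZ)];
    }
};
```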
I forgot to mention the alternatives that still aim to avoid the need for a search:
High-res indirection textures. Because of the high resolution, seams would be no problem. But updating the texel pointers during streaming is an ugly brute-force cost.
A surfel pointer per triangle. Tessellate the geometry if needed, so that geometry resolution at least matches surfel resolution. The adjacent surfels can be found because I have adjacency pointers per surfel anyway, so no search is needed (see the sketch below).
This seems efficient if the surfel resolution is low enough (I was aiming for 10 cm on PS4). But it requires locking the geometry resolution, and thus its LOD, to the current LOD cut of the surfel hierarchy, or vice versa.
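Roughly like this; the structures are illustrative only and the adjacency layout is simplified:

```cpp
// Sketch of the per-triangle pointer idea. Each triangle stores the index
// of the surfel covering it, and each surfel stores pointers to its
// neighbors, so interpolation candidates are reached directly instead of
// searched for.
#include <cstdint>
#include <vector>

struct Surfel {
    float    irradiance[3];
    uint32_t adjacent[8];   // neighbor surfel indices (count is a guess)
    uint32_t adjacentCount;
};

struct Triangle {
    uint32_t surfel;        // the surfel this triangle links to
};

// Gather the probe set for a shading point on a triangle: the linked
// surfel plus its neighbors. Blending weights (e.g. by distance to each
// surfel's center) would be applied afterwards; omitted here.
inline void gatherProbes(const std::vector<Surfel>& surfels,
                         const Triangle& tri,
                         std::vector<uint32_t>& out) {
    out.clear();
    out.push_back(tri.surfel);
    const Surfel& s = surfels[tri.surfel];
    for (uint32_t i = 0; i < s.adjacentCount; ++i)
        out.push_back(s.adjacent[i]);
}
```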
Not sure which evil to pick.
There are exceptions where certain ideas become impractical, like foliage.
So I could try to be efficient, which probably requires multiple techniques for different kinds of geometry,
or I could try to minimize complexity, using just one approach for all cases but accepting the search.
Josh Klint said:
What do you think?
It really, really sucks. <:/
But let me know if you have any other ideas…