There are lots of cool things you can do with signed distance functions (SDFs) to render implicit surfaces. Most of the examples I've seen are “all in the shader”, where the entire distance function is encoded in the shader source code.
Examples:
- https://www.iquilezles.org/www/articles/distfunctions2d/distfunctions2d.htm
- https://www.shadertoy.com/view/3lsSzf
The problem is that all of those examples assume the shader has total knowledge of the entire world.
I want to make a large world (far too large to fit in a shader), and I'm looking for ways to render it with SDFs.
Does anyone have experience with this? I'm working in 2D only, if that helps.
The stumbling block seems to be this: since the rendering happens in the fragment shader, I somehow have to get the “game world” data into that shader, and there don't seem to be good ways to send bulk data to a fragment shader.
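The usual workaround I've seen suggested is packing per-object parameters into a texture and reading them back with texelFetch. A minimal sketch, assuming a made-up layout (an N×1 RGBA32F texture, one circle per texel):

```glsl
#version 330 core
// Hypothetical layout: texel i of uSceneData (RGBA32F, N x 1)
// holds one circle as (centerX, centerY, radius, unused).
uniform sampler2D uSceneData;
uniform int uObjectCount;

out vec4 fragColor;

float sceneSDF(vec2 p) {
    float d = 1e20; // "infinitely far" until we union in objects
    for (int i = 0; i < uObjectCount; ++i) {
        vec4 c = texelFetch(uSceneData, ivec2(i, 0), 0);
        d = min(d, length(p - c.xy) - c.z); // union of circles
    }
    return d;
}

void main() {
    float d = sceneSDF(gl_FragCoord.xy); // pretend world space == pixels
    fragColor = vec4(vec3(d < 0.0 ? 1.0 : 0.0), 1.0);
}
```

That gets data into the shader, but looping over every object per fragment clearly won't scale to a whole world, which is what the ideas below try to address.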
Some ideas I have had:
- Write one generic shader that can draw, say, a combination of 500 SDFs.
  - The input to the shader (maybe a UBO?) would contain an encoded version of a piece of the world, with commands like “put a circle at (x, y, radius), union it with the next object, …” that build up the total SDF for that piece of the world (see the first sketch below this list).
  - On the CPU side, I'd have to split the world into chunks small enough for that shader to handle, and populate the input data appropriately for each draw call.
  - So for 2D, I might do a tile-based render of the screen, where each tile carries a “small” amount of data, little enough to fit in the shader's input.
- Generate shaders on the fly, depending on the part of the world I want to render.
  - Here, the shader code would look a lot like the “everything baked in” shaders; I'd just be generating that code on the CPU (see the second sketch below this list).
  - This approach seems bad, though, since compiling shaders is a pretty heavy process in my experience. I tested an SDF that was the union of a few thousand circles, and it took 30+ seconds to compile.
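Roughly, the fragment shader for the first idea might look like the sketch below. Everything here is invented for illustration (the command encoding, the fixed union, the 500-command cap), but it shows the shape of the interpreter loop:

```glsl
#version 330 core
// Sketch of the "command buffer" idea; the layout is made up.
// Each command packs into one vec4: xy = position, z = size,
// w = opcode. A real version would also encode the combining
// op (union/subtract/intersect) per command; here it's always union.
const int OP_CIRCLE = 0;
const int OP_BOX    = 1;

struct Command {
    vec4 data;
};

layout(std140) uniform SceneBlock {
    Command commands[500];
    int commandCount;
};

out vec4 fragColor;

float shapeSDF(vec2 p, Command c) {
    int op = int(c.data.w);
    if (op == OP_CIRCLE) {
        return length(p - c.data.xy) - c.data.z;
    } else { // OP_BOX: square with half-extent c.data.z
        vec2 q = abs(p - c.data.xy) - vec2(c.data.z);
        return length(max(q, 0.0)) + min(max(q.x, q.y), 0.0);
    }
}

float sceneSDF(vec2 p) {
    float d = 1e20;
    for (int i = 0; i < commandCount; ++i)
        d = min(d, shapeSDF(p, commands[i])); // union
    return d;
}

void main() {
    float d = sceneSDF(gl_FragCoord.xy);
    float aa = 1.0 - smoothstep(-1.0, 1.0, d); // ~2px soft edge
    fragColor = vec4(vec3(aa), 1.0);
}
```

One thing in this idea's favor: 500 vec4-sized commands is about 8 KB, comfortably within the 16 KB of uniform block space that OpenGL guarantees.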
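And for the second idea, the generated code for one chunk might come out like this (coordinates invented), i.e. one long unrolled chain of min()s:

```glsl
// Emitted on the CPU: one line per object in the chunk.
float sceneSDF(vec2 p) {
    float d = 1e20;
    d = min(d, length(p - vec2(120.0, 340.0)) - 25.0);
    d = min(d, length(p - vec2(200.0,  80.0)) - 40.0);
    d = min(d, length(p - vec2(455.0, 210.0)) - 12.0);
    // ... and so on, one min() per object ...
    return d;
}
```

My guess is that it's exactly these huge unrolled chains that made the compiler take 30+ seconds at a few thousand circles.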
Any other ideas?