
Distance Fields

Started July 19, 2018 03:28 PM
6 comments, last by JoeJ 6 years, 6 months ago

http://advances.realtimerendering.com/s2015/DynamicOcclusionWithSignedDistanceFields.pdf

I get the idea of what they are using the distance fields for: they are essentially a 3D texture/volume with varying resolution per object. But how are they used/rendered? Are the cubes rendered? How/when/where does a shader evaluate them? Is it a fullscreen shader?


4 hours ago, dpadam450 said:

but how are they used/rendered? Are the cubes rendered?

They are not rendered directly for the given use case; they are only used to calculate shadows and ambient occlusion.

Edit: Epic uses multiple volumes - I do not know if they render each as a box to launch pixel shaders, or if they use a fullscreen quad and loop over all volumes intersecting a given pixel or tile.

 

4 hours ago, dpadam450 said:

How/when/where does a shader evaluate them.

There are some images of the ray marching ('sphere tracing') process in the slides; take a look at them.

While tracing, at any point in the volume you know the closest distance to the surface (but not the direction in which that surface lies).

You can then safely advance the ray by that distance (illustrated as a circle/sphere at the current ray position), and then read the distance again at the new ray position.

That's the whole idea. It's basically ray tracing, but instead of using a tree for acceleration it uses the distance field.
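
To make the loop concrete, here is a minimal C++ sketch of sphere tracing (names are illustrative only; the engine would read sampleSDF() from the 3D distance volume, here an analytic sphere stands in for it):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 add(Vec3 a, Vec3 b)    { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3 scale(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }

// Stand-in distance function: a unit sphere at the origin. In the engine this
// would be a trilinear read from the object's 3D distance volume instead.
static float sampleSDF(Vec3 p)
{
    return std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z) - 1.0f;
}

// Sphere tracing: returns the distance travelled to the first hit, or -1 on a miss.
static float sphereTrace(Vec3 origin, Vec3 dir, float maxDist)
{
    float t = 0.0f;
    for (int i = 0; i < 128 && t < maxDist; ++i)
    {
        Vec3 p = add(origin, scale(dir, t));
        float d = sampleSDF(p);   // closest distance to any surface from p
        if (d < 0.001f)           // close enough to the surface: treat as a hit
            return t;
        t += d;                   // safe step: nothing can be closer than d
    }
    return -1.0f;                 // no hit within the iteration / distance budget
}
```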

 

4 hours ago, dpadam450 said:

Is it a fullscreen shader?

Yes, usually. Look at Shadertoy for many examples, where distance fields are mostly used for rendering directly; the examples also show how to accumulate AO and how to calculate normals. Demo coder 'iq' is famous for his work on this and has tutorials/introductions as well. Edit: http://iquilezles.org/www/articles/raymarchingdf/raymarchingdf.htm

Edit2: ...and here it is with basics like AO explained: http://iquilezles.org/www/material/nvscene2008/rwwtt.pdf
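
For reference, a sketch of those two pieces, reusing Vec3, add, scale and sampleSDF from the snippet above: the normal comes from central differences of the field, and the AO is a simple few-tap estimate of the kind those notes describe (constants are illustrative, not from the slides):

```cpp
// Surface normal = normalized gradient of the distance field (central differences).
static Vec3 sdfNormal(Vec3 p)
{
    const float e = 0.001f;
    float dx = sampleSDF({ p.x + e, p.y, p.z }) - sampleSDF({ p.x - e, p.y, p.z });
    float dy = sampleSDF({ p.x, p.y + e, p.z }) - sampleSDF({ p.x, p.y - e, p.z });
    float dz = sampleSDF({ p.x, p.y, p.z + e }) - sampleSDF({ p.x, p.y, p.z - e });
    float len = std::sqrt(dx * dx + dy * dy + dz * dz);
    return { dx / len, dy / len, dz / len };
}

// AO: step away from the surface along the normal; wherever the stored distance
// is smaller than the distance stepped, nearby geometry is occluding the point.
static float sdfAO(Vec3 p, Vec3 n)
{
    float occlusion = 0.0f;
    float weight = 1.0f;
    for (int i = 1; i <= 5; ++i)
    {
        float h = 0.05f * float(i);                    // sample height above the surface
        float d = sampleSDF(add(p, scale(n, h)));
        occlusion += weight * (h - d);                 // positive when something is near
        weight *= 0.5f;                                // nearer samples count more
    }
    float ao = 1.0f - occlusion;
    return ao < 0.0f ? 0.0f : (ao > 1.0f ? 1.0f : ao); // clamp to [0, 1]
}
```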


From memory of my interpretation of their work (take with a grain of salt): originally they did a full-screen shader that had an array of bounding boxes. They'd ray-trace against the boxes and, on a hit, sphere-trace through the box by reading from its distance-field volume texture. Later, they moved to a single, global "cascaded" volume texture for the whole world (one volume for nearby, another for mid-range, another for everything; all the same resolution but with differing world-space sizes, meaning the voxel size differs). They can then do a single full-screen sphere-tracing pass. Before that, though, each object (which has an individual volume texture for just that object) is copied into the global volume at the appropriate location.
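
Purely as an illustration of that cascade idea (my own made-up structures, reusing Vec3 from the earlier sketch, not the engine's actual code), sampling could look roughly like this: pick the finest cascade whose box contains the query point, then map the world position to UVW coordinates for that cascade's 3D texture.

```cpp
#include <cmath>

struct Cascade
{
    Vec3  center;      // world-space center, e.g. snapped to the camera position
    float halfExtent;  // half the world-space size of the cube this cascade covers
    // Texture3D distanceVolume;  // the cascade's 3D texture in the engine
};

// Pick the smallest (finest) cascade whose box contains the query point.
static int selectCascade(const Cascade* cascades, int count, Vec3 p)
{
    for (int i = 0; i < count; ++i)
    {
        Vec3  c = cascades[i].center;
        float h = cascades[i].halfExtent;
        if (std::fabs(p.x - c.x) < h && std::fabs(p.y - c.y) < h && std::fabs(p.z - c.z) < h)
            return i;
    }
    return count - 1;  // fall back to the coarsest cascade
}

// Map a world position to normalized [0,1] coordinates inside a cascade's box,
// i.e. the UVW used to read its 3D texture.
static Vec3 cascadeUVW(const Cascade& c, Vec3 p)
{
    float s = 1.0f / (2.0f * c.halfExtent);
    return { (p.x - c.center.x + c.halfExtent) * s,
             (p.y - c.center.y + c.halfExtent) * s,
             (p.z - c.center.z + c.halfExtent) * s };
}
```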

I briefly looked at some of the material; what I'm wondering is, for a screen-space pixel:

How does a pixel determine which distance field volumes overlap it and need to be ray-cast against for doing AO? Does it have a reference to all DistanceField volumes and run a compute shader with a BSP tree or something?


I guess if you generate them per instance and pre-compute the distances offline, it would make sense. Each instance would have a unique 3D volume.
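
If so, evaluating one of those baked volumes probably looks something like this (an assumption-heavy sketch, again reusing Vec3 from above): the object stores the world-space box its baked texture covers, and a world-space query point is mapped into that box to get the texture UVW.

```cpp
struct ObjectSDF
{
    Vec3 boxMin;   // world-space bounds covered by this object's baked volume
    Vec3 boxMax;
    // Texture3D distanceVolume;  // pre-computed offline, one per object/instance
};

// Map a world-space point into the object's volume and read the stored distance.
static float sampleObjectSDF(const ObjectSDF& obj, Vec3 p)
{
    Vec3 uvw = { (p.x - obj.boxMin.x) / (obj.boxMax.x - obj.boxMin.x),
                 (p.y - obj.boxMin.y) / (obj.boxMax.y - obj.boxMin.y),
                 (p.z - obj.boxMin.z) / (obj.boxMax.z - obj.boxMin.z) };
    // In the engine: return trilinearLookup(obj.distanceVolume, uvw);
    (void)uvw;         // placeholder so this sketch is self-contained
    return 1.0e9f;
}
```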


Yeah, in order to save a lot of computation Epic currently generates the distance fields offline. There's a runtime-generated global distance field as well, though at lower resolution.

I don't know how many mip levels they use, though it has to be bounded. Nor am I sure how they loop through overlapping pre-computed volumes - which one do you test first? Anyway, here's a great talk on using all this distance field stuff for both geometry and lighting, done in real time in a shipping game:

 

13 hours ago, dpadam450 said:

How does a pixel determine what distance field volumes are overlapping it that need to be raycasted against for doing AO?

https://docs.unrealengine.com/en-us/Engine/Rendering/LightingAndShadows/DistanceFieldAmbientOcclusion

They mention calculating a list of tiles for each individual object (close objects only; distant stuff uses the global SDF).

Likely they precompute min/max depth per tile to determine how much to extrude the bounds of objects, to avoid missing objects that are near but do not intersect the tile.

Actually, no: it should be enough to extrude the bounds by the distance to the camera to guarantee including all affected tiles. So you could calculate the tile list by rasterizing the extruded bounding boxes.

I assume they then run an AO compute shader per object tile (no pixel shader), and the results are accumulated to screen using atomics.

Edit: Hodgman only describes the global SDF, which they mention is implemented with scrolling. But the detailed individual object volumes are still used near the camera, and they cause most of the performance cost. I assume they mix both approaches based on distance to the camera.
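
A sketch of that tile-list idea (my own assumption of how it could be organized, not UE4's actual data layout): project each object's bounds, extruded by some influence radius, to the screen, and append the object's index to every tile the projected rectangle covers; the AO pass for a tile then only traces the distance fields in its own list.

```cpp
#include <algorithm>
#include <vector>

struct ScreenRect { int x0, y0, x1, y1; };   // covered tile range, inclusive

struct TileGrid
{
    int tilesX, tilesY;
    std::vector<std::vector<int>> objectLists;   // per-tile list of object indices

    TileGrid(int tx, int ty) : tilesX(tx), tilesY(ty), objectLists(size_t(tx) * size_t(ty)) {}

    void addObject(int objectIndex, ScreenRect r)
    {
        // Clamp to the grid and append the object to every covered tile.
        for (int y = std::max(0, r.y0); y <= std::min(tilesY - 1, r.y1); ++y)
            for (int x = std::max(0, r.x0); x <= std::min(tilesX - 1, r.x1); ++x)
                objectLists[size_t(y) * tilesX + x].push_back(objectIndex);
    }
};

// Assumed helper: extrude the object's world-space bounds by the chosen radius
// (whatever guarantees covering every tile the object can influence), project
// the result, and return the covered tile rectangle.
ScreenRect projectExtrudedBounds(/* object bounds, camera, influence radius */);
```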

 

Not sure of anything, but you could look at the source.

 

This topic is closed to new replies.
