9 hours ago, Vilem Otte said:
Honestly I'd like to try it - what I'm skeptical about are dynamic scenes (in my thesis I was doing interactive bidirectional path tracing in CUDA - and it worked well... for static scenes; re-building acceleration structures was simply too big a performance hit).
Did you try the idea of using static trees per model, just transforming them (refitting for skinned meshes), and rebuilding only a small tree on top to link it all together?
I do this on the GPU and the performance cost is negligible, though my quality requirements are lower.
I recently read a paper about Brigade, and they do the same using a scene graph, asynchronously on the CPU.
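Roughly, the scheme looks like this (a minimal sketch, all names hypothetical; the top-level build is only outlined): the static per-model trees are refit bottom-up for skinned meshes, and a tiny top tree over the instance bounds is rebuilt from scratch every frame.

```cpp
// Minimal sketch of the two-level scheme (all names hypothetical):
// per-model BVHs are built once; per frame we only refit the skinned
// models and rebuild a tiny top-level tree over the instance bounds.
#include <cuda_runtime.h>
#include <vector>
#include <algorithm>

struct AABB { float3 lo, hi; };

struct BvhNode          // node of a static per-model tree
{
    int  left, right;   // child indices (internal nodes only)
    AABB box;
};

// Refit one level of a model's BVH, one thread per internal node.
// Launched level by level, deepest first, so children are already
// up to date when their parent merges them.
__global__ void refitLevel(BvhNode* nodes, const int* levelNodes, int count)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= count) return;

    BvhNode& n = nodes[levelNodes[i]];
    const AABB& a = nodes[n.left].box;
    const AABB& b = nodes[n.right].box;
    n.box.lo = make_float3(fminf(a.lo.x, b.lo.x),
                           fminf(a.lo.y, b.lo.y),
                           fminf(a.lo.z, b.lo.z));
    n.box.hi = make_float3(fmaxf(a.hi.x, b.hi.x),
                           fmaxf(a.hi.y, b.hi.y),
                           fmaxf(a.hi.z, b.hi.z));
}

// Host side: one leaf per instance, so even a naive median-split
// rebuild each frame is negligible next to refitting triangles.
struct Instance { AABB worldBox; int modelBvh; };

void rebuildTopLevel(std::vector<Instance>& instances)
{
    // Sort along one axis and split recursively at the median
    // (recursion omitted -- the point is N is small, ~hundreds).
    std::sort(instances.begin(), instances.end(),
              [](const Instance& a, const Instance& b)
              { return a.worldBox.lo.x < b.worldBox.lo.x; });
}
```

Refitting keeps the static tree topology, so quality only degrades when a model deforms a lot - the full-quality per-model trees never need rebuilding.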
4 hours ago, Hodgman said:
- Dispatch calls that can dynamically produce more dispatch calls, recursively.
- Dispatches that can pre-empt themselves with calls to a different shader, before resuming.
- Being able to bind an array of compute shaders to a dispatch, and have them refer to each other via pointers.
- Being able to bind many root-parameter sets to a dispatch.
- A very smart scheduler that can queue and coalesce these different shader invocations into large thread groups before executing them.
Yummy! I hope this becomes available for general compute APIs as well - I'll take a look...
EDIT: Do you know if one can somehow keep data in LDS while 'switching' shaders?
(There is no description, just the SDK, and I don't want to install it just to read up right now...)
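For what it's worth, the closest existing thing on the general-compute side to the first bullet above is probably CUDA's dynamic parallelism, where a kernel launches child kernels from the device. Notably, shared memory (LDS) does not carry over into a child launch there - each grid gets its own - so keeping data in LDS across a shader switch would indeed be something new. A minimal sketch (hypothetical names; compile with nvcc -rdc=true):

```cpp
// Minimal sketch of device-side launches (CUDA dynamic parallelism):
// the kernel itself decides that more work exists and launches it,
// recursively, without a CPU round trip. All names hypothetical.
#include <cuda_runtime.h>

__global__ void traceBounce(int depth, int maxDepth)
{
    // ...trace rays for this bounce, shade the hits...

    if (threadIdx.x == 0 && depth + 1 < maxDepth)
    {
        // recursive device-side launch of the next bounce;
        // note: the child grid gets its own shared memory,
        // nothing in LDS survives into it
        traceBounce<<<1, 64>>>(depth + 1, maxDepth);
    }
}

int main()
{
    traceBounce<<<1, 64>>>(0, 3);   // host kicks off the first bounce
    cudaDeviceSynchronize();
    return 0;
}
```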
The video seems not super impressive for the effort, though. The need to denoise even just specular reflections is somewhat disappointing - looking at alternatives like voxel cone tracing, I still feel the need to work on better methods / algorithms / data structures. Of course this will work with faster hardware, also for path tracing, but I still see path tracing as a simple but slow solution.
Edit: It's more impressive after reading the PowerPoint. They do a lot more than just reflections - nothing else is baked.