6 hours ago, Lightness1024 said:
For the simple reason that it seems feasible now. It's often not just about having some tech demo, like Quake 3, but about making it useful and accessible in a real-world scenario. You have been able to do RT with NVidia OptiX for quite a while, and it works reasonably well, but until now you could not just add it to everyday rendering code.
OpenRL is no different from OptiX: you write your dedicated camera/shading/etc. shaders and watch the system give you results after a while, but you can't invoke it ad hoc from your HLSL/GLSL code.
That's the big difference. Of course, it's hard to predict whether this is going to be a failure like geometry shaders or the several incarnations of tessellation (which mostly got removed). Yet, from a graphics programmer's point of view, it's a far more valuable feature than adding 16K, HDR16 @ 300 Hz rendering, where 99% of users don't even really know what to look at even if you show them the results side by side.
Now, the question is: how fast will it run? Path tracing usually converges at the standard Monte Carlo rate, where the noise falls off as 1/sqrt(N), so halving the noise takes roughly 4x the sample count.
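As a minimal sketch of that 4:1 relationship (not tied to any RT API, just plain Monte Carlo integration with a made-up test integrand), each 4x increase in sample count roughly halves the RMS error:

```cpp
// Illustrates the 1/sqrt(N) noise law: quadrupling the sample count
// roughly halves the RMS error of a Monte Carlo estimate.
#include <cmath>
#include <cstdio>
#include <random>

// Estimate the integral of f(x) = x^2 over [0,1] (exact value 1/3)
// with N uniform samples and return the absolute error.
double mc_error(int N, std::mt19937& rng)
{
    std::uniform_real_distribution<double> u(0.0, 1.0);
    double sum = 0.0;
    for (int i = 0; i < N; ++i) {
        double x = u(rng);
        sum += x * x;
    }
    return std::abs(sum / N - 1.0 / 3.0);
}

int main()
{
    std::mt19937 rng(42);
    const int trials = 1000;

    // Average the squared error over many trials to approximate the RMS noise.
    for (int N : {256, 1024, 4096}) {
        double acc = 0.0;
        for (int t = 0; t < trials; ++t) {
            double e = mc_error(N, rng);
            acc += e * e;
        }
        std::printf("N = %5d  RMS error ~ %.5f\n", N, std::sqrt(acc / trials));
    }
    return 0;
}
```

Running it, the printed RMS error roughly halves at each step from 256 to 1024 to 4096 samples, which is exactly why "just throw more rays at it" gets expensive so quickly.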