
DirectX12 adds a Ray Tracing API

Started March 19, 2018 11:22 PM
40 comments, last by NikiTo
7 hours ago, Lightness1024 said:

on $80k hardware.

To put the hardware cost in a different perspective, Amazon will lease you that configuration for $12/hour.

Sure, we won't see consumer-priced hardware able to do this for another 3-5 years. But for games starting development this year, that's about the right launch window. And the current cost isn't out of the realm of possibility for equipping a dev team (keep in mind that console devkits used to cost tens of thousands of dollars a pop).

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

10 hours ago, Lightness1024 said:

Why, all of a sudden, when MS and NV speak about it, does it become "finally", with all the hype, when this has been a subject ever since the Quake 3 demo, 12 years ago: https://www.youtube.com/watch?v=bpNZt3yDXno
And I'm not even talking about Heaven Seven (http://www.pouet.net/prod.php?which=5), 18 years ago.
DXRT has mercilessly copied all of the OpenRL SDK, available here: https://community.imgtec.com/developers/powervr/openrl-sdk/
And it was already hyped too: https://www.extremetech.com/extreme/161074-the-future-of-ray-tracing-reviewed-caustics-r2500-accelerator-finally-moves-us-towards-real-time-ray-tracing
See any similarity in the vocabulary at the time?

So, is this a fanboy effect or history revisionism?
Even Embree has been doing RTRT for years on CPU only, as long as you keep it to the first bounce. And if you check what Epic has to say about it (here: GDC on YouTube), you'll see they use a cluster of 4 Tesla V100s with NVLink, and they were not able to include global illumination. They can only afford 2 rays per effect on $80k hardware.

I would put the horses back in the stable, but well, hype is contagious...

 

Ray tracing has indeed existed since the '70s. The difference is that there is finally an effort from the vendors to make this area more viable: we now have a platform for it, and we know that the hardware will evolve with ray tracing in mind, possibly with highly optimized hardware circuits for it. Now we have an API dedicated to this, and we can develop on it knowing that the vendors will improve it over time and we won't have to rewrite everything from scratch. We are no longer on our own in this. That's what the hype is about.

43 minutes ago, ChuckNovice said:

we now have a platform for it, and we know that the hardware will evolve with ray tracing in mind, possibly with highly optimized hardware circuits for it.

What circuits do you have in mind? Ray vs. triangle or box tests are already faster than loading the data from memory. A fixed-function acceleration structure may be too inflexible. And slow random memory access can't be fixed by any circuit at all.
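
To put a number on "already faster than loading the data": a Möller-Trumbore style triangle test is only a couple of cross products and a handful of dot products. A rough HLSL sketch (the function name and epsilon are mine, not from any spec):

bool RayTriangle(float3 orig, float3 dir,
                 float3 v0, float3 v1, float3 v2,
                 out float t)
{
    // Möller-Trumbore: solve orig + t*dir = v0 + u*e1 + v*e2.
    const float eps = 1e-7f;
    t = 0.0f;

    float3 e1 = v1 - v0;
    float3 e2 = v2 - v0;
    float3 p  = cross(dir, e2);
    float det = dot(e1, p);
    if (abs(det) < eps) return false;   // ray parallel to triangle plane

    float  invDet = 1.0f / det;
    float3 s = orig - v0;
    float  u = dot(s, p) * invDet;
    if (u < 0.0f || u > 1.0f) return false;

    float3 q = cross(s, e1);
    float  v = dot(dir, q) * invDet;
    if (v < 0.0f || u + v > 1.0f) return false;

    t = dot(e2, q) * invDet;
    return t > eps;                     // hit must be in front of the origin
}

The three vertex fetches feeding that function are the expensive part on current hardware, not the math.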

Actually, using tensor cores for denoising seems to be the only real news here, but personally I'm not yet convinced by it... it feels more like 'if you can't make it work or don't have the time, let a neural network do it', similar to today's '...then do it in screen space' :)

4 hours ago, swiftcoder said:

Sure, we won't see consumer-priced hardware able to do this for another 3-5 years.

4x Titan V for consumers in 3-5 years? Sounds too optimistic to me - I'd guess 6-10?

But I agree we'll see such graphics soon if we improve on the software side as well. So I prefer new compute possibilities over fixed-function circuits.

37 minutes ago, JoeJ said:

4x Titan V for consumers in 3-5 years? Sounds too optimistic to me - I'd guess 6-10?

Well... "consumer" sounded better than "high-end gaming rigs" :)

38 minutes ago, JoeJ said:

So I prefer new compute possibilities over fixed-function circuits.

I don't see anything particularly fixed-function about RTX. It's mostly just a bunch of dispatch logic for the shaders.

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

29 minutes ago, JoeJ said:

What circuits do you have in mind?

None yet, other than the denoising stuff they've been talking about, as you said. However, standardizing all this under an API is the first step to allow that to eventually happen. It would be hard for me to believe that in the coming years video cards won't have dedicated hardware to accelerate this in some way, or at least improve the existing hardware with regard to ray tracing. Whatever happens, anything that pours effort and money into this area of 3D graphics is good news to me.

3 hours ago, JoeJ said:

What circuits do you have in mind? Ray vs. triangle or box tests are already faster than loading the data from memory. A fixed-function acceleration structure may be too inflexible. And slow random memory access can't be fixed by any circuit at all.

Actually, using tensor cores for denoising seems to be the only real news here,

Denoising isn't part of DXR. Those denoising algorithms have been published previously too - AI denoising has been around for a few years, and NV #hyped their version of it last year :)

Read the DXR manual to see the new HW features they're pushing for, not the #hype/#fail articles/videos written by other people too lazy to read the manual. I posted a summary on the first page:

There are a lot of HW changes coming to compute-core scheduling, bindless shader programs, compute coroutines and very fine-grained indirect dispatch -- all things that renderers built around a recursive ray generation system will need in order to perform well (and without CPU intervention). Offline/film renderers haven't needed these advancements yet because the millisecond-latency issues of CPU-based workload scheduling don't affect them. For realtime, though, we're going to need the GPU to be able to feed itself.
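
To make "the GPU feeding itself" concrete, here's a stripped-down ray generation shader sketch. The [shader("raygeneration")] attribute and the TraceRay/DispatchRaysIndex intrinsics are from the DXR spec; the resource names, payload layout and the orthographic camera are made up for the example:

struct RayPayload
{
    float3 color;
    uint   depth; // recursion depth, so hit shaders can cap further TraceRay calls
};

RaytracingAccelerationStructure SceneBVH : register(t0);
RWTexture2D<float4>             Output   : register(u0);

[shader("raygeneration")]
void RayGen()
{
    uint2 pixel = DispatchRaysIndex().xy;
    uint2 dims  = DispatchRaysDimensions().xy;

    // Map the pixel to [-1,1] and fire a simple orthographic ray.
    float2 uv = (pixel + 0.5f) / dims * 2.0f - 1.0f;

    RayDesc ray;
    ray.Origin    = float3(uv, -1.0f);
    ray.Direction = float3(0.0f, 0.0f, 1.0f);
    ray.TMin      = 0.001f;
    ray.TMax      = 1000.0f;

    RayPayload payload = { float3(0.0f, 0.0f, 0.0f), 0 };

    // The hit/miss shaders this spawns are scheduled entirely on the GPU.
    TraceRay(SceneBVH, RAY_FLAG_NONE, 0xFF, 0, 1, 0, ray, payload);

    Output[pixel] = float4(payload.color, 1.0f);
}

The recursive part is that a closest-hit shader can call TraceRay again with payload.depth + 1 - exactly the kind of self-scheduling the CPU can't be in the loop for at millisecond budgets.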

Ray-geometry intersections can be defined with HLSL "intersection shaders", so no specific HW is required for that, but they also define a fixed-function ray-vs-triangle mode, which allows GPU vendors to bake that into silicon if they like. I don't know if that's worthwhile, but it's worth noting that even after all the advancements in general-purpose GPU compute hardware, every GPU still has fixed-function HW for texture filtering, rasterization, depth-testing and ROPs - so ray intersection may have a place there too.
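
As a reference point, an intersection shader really is just HLSL handing candidate hits to the traversal loop. A sketch for a procedural unit sphere (ReportHit, ObjectRayOrigin and RayTCurrent are DXR intrinsics; the attribute struct is invented for the example):

struct SphereAttribs
{
    float3 normal; // passed along to whichever hit shaders consume this hit
};

[shader("intersection")]
void SphereIntersect()
{
    // Unit sphere at the origin of this AABB's object space.
    float3 o = ObjectRayOrigin();
    float3 d = ObjectRayDirection();

    // Solve |o + t*d|^2 = 1.
    float a = dot(d, d);
    float b = dot(o, d);
    float c = dot(o, o) - 1.0f;
    float disc = b * b - a * c;
    if (disc < 0.0f)
        return; // ray misses the sphere

    float t = (-b - sqrt(disc)) / a; // nearest root
    if (t >= RayTMin() && t <= RayTCurrent())
    {
        SphereAttribs attr;
        attr.normal = normalize(o + t * d);
        ReportHit(t, /*HitKind*/ 0, attr); // traversal decides whether to accept it
    }
}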


Ha, ok - so I downloaded the SDK, and the docs with the details are included. Thanks for pushing me :)

I see it's much more fixed-function and abstract than I expected, and there is no application other than ray tracing. (But I still hope we'll see things like device-side enqueue or persistent threads coming to the low-level APIs as well...)

Maybe it's good that they take care of hard stuff like bundling similar rays for cache efficiency under the hood - it's really easy to use that way, and dedicated circuits have many options to evolve. Worthwhile or not - we'll never know and will use it as is ;)

 

A little OT: an April Fools' article from 10 years ago: https://archive.techarp.com/showarticleb2bb.html?artno=526&pgno=0

"Recursion is the first step towards madness." - "Skegg?ld, Skálm?ld, Skildir ro Klofnir!"
Direct3D 12 quick reference: https://github.com/alessiot89/D3D12QuickRef/

Did anybody else notice the lag in the reflections of the robot inside the mirrors? (exact time included in the link)

Since they are only using raytracing for the reflections, I wouldn't be surprised if there are sync issues this early in development.

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

