
Old School VS Ray Tracing accelerated by GPU

Started by August 21, 2018 11:53 PM
32 comments, last by JoeJ 6 years ago
2 hours ago, JoeJ said:

The separation of general purpose and RT cores makes it unlikely we can utilize improved scheduling for anything other than raytracing. And even if we could, the black box prevents it.

RT cores were added in Turing; improved thread scheduling (which DXR benefits from) was added in Volta (a year ago). They're separate concepts.

AMD also has some neat scheduling abilities that aren't exposed by current APIs - such as compute shader wavefronts spawning vertex shader wavefronts... I'm certain that the trend is heading towards more flexibility in GPU thread scheduling. 

If this were still the OpenGL days, this would be the phase where vendors are publishing wacky vendor-specific extensions, and in a few years we'd get a standardised/stable ARB extension, and then even later, it becomes a core API feature. 

20 minutes ago, Hodgman said:

RT cores were added in Turing; improved thread scheduling (which DXR benefits from) was added in Volta (a year ago). They're separate concepts.

This implies thread scheduling is available on the shader cores, but not yet exposed to compute shaders (only to CUDA / CL 2.0), correct?

So you think the shader cores are still in use while raytracing to calculate material interaction, and RT cores do BVH traversal and triangle tests... makes sense :)

1 hour ago, JoeJ said:

So you think the shader cores are still in use while raytracing to calculate material interaction, and RT cores do BVH traversal and triangle tests... makes sense

Yeah it's the same as rasterizing. The shader cores run VS code that feeds triangles into the raster HW, which launches pixel threads back on the shader core in response. 

Now, shaders can generate intersection queries - which are run as software in shader cores and/or on dedicated RT HW - which then generate threads to handle the results. 
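
To make that flow concrete, here is a toy C++ sketch of the model described above. All names are illustrative only, not the actual DXR or driver API: shader threads push ray queries into a queue, a traversal unit resolves them, and each result launches hit-shader work back on the shader cores.

```cpp
// Toy C++ model of the dispatch flow: shader cores emit ray queries,
// a traversal unit (RT core or software fallback) resolves them,
// and the results spawn new "hit shader" work on the shader cores.
#include <cstdio>
#include <functional>
#include <queue>
#include <vector>

struct Ray { float origin[3]; float dir[3]; };
struct Hit { bool valid; float t; int triangleId; };

// Stand-in for the BVH traversal / triangle test done by dedicated RT hardware.
Hit TraverseAccelerationStructure(const Ray& ray) {
    // Pretend everything hits triangle 0 at t = 1.
    return Hit{true, 1.0f, 0};
}

int main() {
    std::queue<Ray> rayQueue;  // queries generated by shader threads
    std::vector<std::function<void(const Hit&)>> hitShaders;

    // "Ray generation shader": one query per pixel of a tiny 2x2 image.
    for (int pixel = 0; pixel < 4; ++pixel) {
        rayQueue.push(Ray{{0, 0, 0}, {0, 0, 1}});
        hitShaders.emplace_back([pixel](const Hit& h) {
            std::printf("pixel %d: hit tri %d at t=%.1f\n", pixel, h.triangleId, h.t);
        });
    }

    // "RT core": consumes queries, then launches hit-shader work back on the shader cores.
    int i = 0;
    while (!rayQueue.empty()) {
        Hit h = TraverseAccelerationStructure(rayQueue.front());
        rayQueue.pop();
        hitShaders[i++](h);  // closest-hit "thread" handling the result
    }
    return 0;
}
```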

I see. So the initial optimism about increased compute flexibility still holds even with dedicated RT cores :D

Maybe I'm taking the long way around, as I have been playing with FPGAs recently.

As far as I've understood, Tensor Cores are low-precision (16-bit, 8-bit, maybe even 4-bit) matrix ALUs. RT cores are dedicated resources.

If this isn't a sign we're hitting a manufacturing wall, I don't know what is. Texture samplers have always been dedicated hardware and work great. Reflections have traditionally been difficult - in the last few years we got the screen-space-reflection trend, a trick I expected to never see again. Don't even get me started on shadows.

With the NV 2k series starting at 500, I might believe this sounds 'reasonable', but there is zero chance it will have an installed user base for the next couple of years.

Still, a shift is near. Unless you have already invested considerably in a lighting infrastructure that is coherent, competitive and works, I would suggest not investing in most traditional schemes at this point. Should I get back into graphics, I will focus on driving the hardware efficiently. Ray-tracing cores are here to stay, in one form or another.

 

Previously "Krohm"

So far, the RTX demos look pretty shitty to me compared to other games:

[screenshot attachment]

[screenshot attachment]

And finally....

[screenshot attachment]

SERIOUSLY?!?!?!!? I captured the source too, so you can go and check it yourself if it looks unbelievably ugly to you...

I am not worried anymore about the capabilities of the technology (they are not big), but I am worried about the marketing storm happening now.
Many people are commenting that game creators will put reflections everywhere just to get promotion from NVidia for their games, and the games will change to look glassy. In an interview, the developers of Metro Exodus commented that a big part of the game creation process will change. Before, games needed a lot of lights to make them look nice; now they need only one light. The handling of polygon counts will change too.

Only the drop shadows look much better in the RTX demos - nothing I cannot achieve with a big shadow map texture.
If I had a company with the infrastructure to make games like Uncharted 4, I would not invest in RTX, as Uncharted 4 already looks awesome.

The prices of the GPUs are pretty normal, I believe. For a gamer who also has a job, it is not a big deal.
It is the impact this will have on game development that I don't like.
 

56 minutes ago, Krohm said:

I would suggest not investing in most traditional schemes at this point.

I argue that the classical raytracing NV offers is too traditional as well.

For example, what about alternate representations of geometry - voxels, points, surfels, SDFs... (notice how those merge acceleration structure and geometry into one thing)? DXR does not support them, so should we stop working on all of this?

One could argue that it's still supported through the loophole of custom intersection shaders, where you can implement this yourself, even with your own mini BVH. But at that point you no longer benefit from dedicated RT cores, at least not enough to justify their silicon area. Conclusion: use classical triangles and raytracing, because that's what is hardware accelerated. Result: any innovation is blocked.
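
To illustrate what that loophole amounts to: a custom intersection routine is essentially "given a ray, decide yourself where it hits your representation". Here is a minimal C++ sketch of sphere tracing a signed distance field - purely illustrative, all names made up; in DXR this kind of logic would live in an intersection shader.

```cpp
// Sphere tracing an SDF: the geometry is just a distance function,
// no triangles and no triangle-BVH needed.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };
static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 mul(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float length(Vec3 v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }

// Signed distance to a unit sphere at the origin.
float SdfSphere(Vec3 p) { return length(p) - 1.0f; }

// Returns the hit distance t, or a negative value on miss.
float IntersectSdf(Vec3 origin, Vec3 dir, float tMax) {
    float t = 0.0f;
    for (int i = 0; i < 128 && t < tMax; ++i) {
        float d = SdfSphere(add(origin, mul(dir, t)));
        if (d < 1e-4f) return t;   // close enough: report a hit at distance t
        t += d;                    // safe step: the SDF guarantees no surface is closer
    }
    return -1.0f;
}

int main() {
    float t = IntersectSdf({0, 0, -3}, {0, 0, 1}, 100.0f);
    std::printf("hit at t = %.3f (expected ~2)\n", t);
    return 0;
}
```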

I would prefer a more programmable, less black-boxed approach. Even if improved thread scheduling raises hope of more flexible compute becoming exposed, the whole concept still looks too fixed-function to me.

 

With tensor cores it's even worse. I really think we would get the same AA / denoising quality from more compute cores in that silicon area, and so far no other application of tensor cores suitable for games has been shown.

That may well change, but at what price - which other innovations do we miss by capping general-purpose performance?

For now, it really seems NV is throwing out everything they have (while the competition has not), in the hope that developers utilize it the way they intend.

 

An Nvidia marketing agent disliked my screenshots XD

""" Now I often see ray tracing touted as a magic fix for rendering (usually in discussions on realtime rendering for games) in online discussions, as if ray tracing somehow provides physically accurate results. Well it doesn’t. It comes closer than triangle rasterization (the technology employed in almost all games, and what graphics cards are optimized for) but it’s no simulation of reality. It gives us reflections and refractions virtually for free and it gives very nice hard shadows (unfortunately in the real world shadows are rarely if ever perfectly sharp). So just like rasterization engines have to cheat to achieve reflections and refractions (pay close attention to reflective surfaces in games, they either reflect only a static scene, or are very blurry or reflect only objects that are on screen), a ray tracer has to cheat to get soft shadows, caustics, and global illumination to name a few effects required to achieve photo realism. """

source

9 minutes ago, NikiTo said:

It gives us reflections and refractions virtually for free and it gives very nice hard shadows (unfortunately in the real world shadows are rarely if ever perfectly sharp).

Nothing is for free there: reflections and refractions require splitting a single ray into multiple secondary rays (or accumulating the results of fewer, randomly jittered rays over time with denoising, or using cone tracing, which is not supported). Only for rare surfaces like glass and mirrors do you get away with keeping it at a single ray.
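
As a rough sketch of that split (illustrative C++ only, using Schlick's Fresnel approximation; the helper name is invented): one incoming ray at a dielectric surface yields a reflection weight and a refraction weight, so either two secondary rays are traced or one of them is picked at random.

```cpp
#include <cmath>
#include <cstdio>

// Schlick's approximation of the Fresnel reflectance for a dielectric.
float FresnelSchlick(float cosTheta, float ior) {
    float r0 = (1.0f - ior) / (1.0f + ior);
    r0 *= r0;
    return r0 + (1.0f - r0) * std::pow(1.0f - cosTheta, 5.0f);
}

int main() {
    float cosTheta = 0.7f;                      // angle between ray and surface normal
    float kr = FresnelSchlick(cosTheta, 1.5f);  // glass-like index of refraction
    // One primary ray becomes two secondary rays with these weights:
    std::printf("reflection weight %.3f, refraction weight %.3f\n", kr, 1.0f - kr);
    // A path tracer would instead pick ONE of the two at random with probability kr,
    // keeping the cost at one ray per bounce at the price of noise to denoise later.
    return 0;
}
```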

For soft shadows you miss the point that they are caused by area lights. We cannot do this well with rasterization, because rasterization is only fast from a single viewpoint. With rays you can use a random starting point on the light's surface, accumulate multiple results and converge to a realistic result. A middle ground would be splatting to many small frame buffers, as shown by Imperfect Shadow Maps / Many LoDs (which does not even require rasterization hardware, but has non-obvious limitations, e.g. representing light sources accurately in tiny frame buffers for full GI).

So I really think ray tracing is necessary, but it's not the right tool for everything.
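
A minimal sketch of the area-light idea above (toy C++; the occluder and all names are invented): jitter the shadow ray's target across the light's surface and average the visibility over many samples.

```cpp
#include <cstdio>
#include <random>

// Pretend occluder: blocks shadow rays that pass near the centre of the light,
// so only part of the light is visible from the shaded point.
bool Occluded(float lightX, float lightY) {
    return (lightX * lightX + lightY * lightY) < 0.3f * 0.3f;
}

int main() {
    std::mt19937 rng(42);
    std::uniform_real_distribution<float> uniform(-1.0f, 1.0f);  // 2x2 area light

    int samples = 256, visible = 0;
    for (int i = 0; i < samples; ++i) {
        // Random starting point on the light's surface, one shadow ray per sample.
        float lx = uniform(rng), ly = uniform(rng);
        if (!Occluded(lx, ly)) ++visible;
    }
    // A fraction between 0 and 1: a penumbra. A point light would give a hard 0 or 1.
    std::printf("soft shadow factor = %.2f\n", float(visible) / samples);
    return 0;
}
```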

 

Here is a nice video where you can spot more bugs :) http://www.pcgameshardware.de/Battlefield-5-Spiel-61688/News/Tweaks-sollen-Raytracing-Leistung-stark-steigern-1264065/

It's nice how the scene is clipped for reflections without any falloff yet. It shows that actual game geometry without any LOD cannot handle open-world-sized scenes. We still need to use tricks and fakes...

 

This one is maybe the most impressive: 

Real GI indoors. A bit blurry, like my own stuff, but not bad! :)

Sorry, but that article is pretty senseless.

First of all, ray tracing is a general term for algorithms based on, well, tracing rays. "Normal" ray tracing, which only handles direct light, certainly falls into that category. So does path tracing, which adds an approximation for indirect lighting. So does ray marching, which also considers non-solid matter.

20 minutes ago, NikiTo said:

Now I often see ray tracing touted as a magic fix for rendering

It is a fix. Not to make an awesome game out of a shit game, but for exactly the features that he named. Reflections, refractions, hard shadows. All in realtime. That's the reason why we will see hybrid renderers for the next couple of years or decades, not pure ray tracers.

That being said, it's a first step towards more sophisticated algorithms such as path tracing, which relies on the same principles: sending out huge amounts of rays and running a lot of intersection routines. Hardware needs to be adjusted for that, which is starting to happen now.
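
As a toy illustration of that principle (plain C++; the "scene" is a made-up function): many random rays are averaged per pixel, and too few samples gives exactly the grainy estimate the quoted article complains about.

```cpp
#include <cstdio>
#include <random>

// Toy "intersection routine": the ray escapes to a bright sky unless it points
// into the lower hemisphere, where a dark ground plane absorbs most of it.
float TraceRandomBounce(std::mt19937& rng) {
    std::uniform_real_distribution<float> u(-1.0f, 1.0f);
    float dirY = u(rng);                // random bounce direction (vertical part)
    return dirY > 0.0f ? 1.0f : 0.1f;   // sky radiance vs. ground radiance
}

float EstimatePixel(int samples, unsigned seed) {
    std::mt19937 rng(seed);
    float sum = 0.0f;
    for (int i = 0; i < samples; ++i) sum += TraceRandomBounce(rng);
    return sum / samples;               // Monte Carlo average over all rays
}

int main() {
    // The true answer here is 0.55; the low-sample estimate wobbles around it
    // (noise), the high-sample one converges. Denoisers let games live with the former.
    std::printf("  16 spp: %.3f\n", EstimatePixel(16, 1));
    std::printf("4096 spp: %.3f\n", EstimatePixel(4096, 1));
    return 0;
}
```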

29 minutes ago, NikiTo said:

but it’s no simulation of reality

That's just a stupid point. Of course we won't simulate individual photons; that would be nothing but a waste of resources.

Quote

The crux of the problem is that with a path tracer you are locked into an all or nothing approach. If you turn down quality too much you get a grainy image, which you can use to preview but which is wholly unsuitable for production use

That's also not true. GI in games is often based on some form of path tracing - for example, Unity uses path tracing for its light mapper. That's why we have seen all those new denoising algorithms come up over the last couple of years: to make some form of path tracing feasible. Of course we're cheating, and more or less of the quality of a path-traced image is lost through denoising, but it's still some form of it.

This topic is closed to new replies.
