
Old School VS Ray Tracing accelerated by GPU

Started by August 21, 2018 11:53 PM
32 comments, last by JoeJ 6 years ago

What rendering techniques currently used in 3D games will become deprecated with the new GPU-accelerated ray tracing?

In case the RT engine uses the same silicon as the other engines (the same way compute and graphics engines use the same silicon), wouldn't it sometimes be better to use that silicon for some old-school techniques than to choke the pipeline with ray tracing?

This is actually a good question. I have had a GPU ray tracing library for quite a while, and while DXR brings in something new (and its features are quite welcome for me), it has some huge flaws: usability being the first, it's still not officially out, and I noticed it crashes a lot - not to mention that the MiniEngine sample doesn't even start on most GPUs that are able to run the basic examples, etc.

So, what is going to change? I don't think there will be too much. Full path tracing that could eliminate most of the effects (like shadows or reflections, GI hacks, etc.) is still quite far off. While the MiniEngine samples give you tens of MRays/s, or even over a hundred, that is still far from doing path tracing even at FullHD at 60 fps with enough samples to eliminate noise - even for bright scenes.
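Rough back-of-the-envelope numbers for why tens of MRays/s is far off - the sample and bounce counts below are my own assumptions for illustration, not measurements:

```cpp
#include <cstdint>
#include <cstdio>

// How many rays/second would noise-free path tracing at FullHD/60 need?
// samplesPerPixel and bouncesPerPath are guesses, not benchmark numbers.
int main() {
    const uint64_t pixels          = 1920ull * 1080ull; // ~2.07 M
    const uint64_t fps             = 60;
    const uint64_t samplesPerPixel = 100; // assumed, to get noise acceptable
    const uint64_t bouncesPerPath  = 4;   // assumed rays per path

    const uint64_t raysPerSec = pixels * fps * samplesPerPixel * bouncesPerPath;
    printf("required: %.1f GRays/s\n", raysPerSec / 1e9);    // ~49.8
    printf("vs ~100 MRays/s: ~%llux short\n",
           (unsigned long long)(raysPerSec / 100000000ull)); // ~497x
    return 0;
}
```

Even with generous rounding, that is two to three orders of magnitude above what the samples deliver today.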

It could help a bit with reflections (on arbitrary surfaces - which makes rays divergent, and therefore slower), yet again the performance is not even nearly close to what one expects from reflections - which in most cases are quite a subtle effect, except on large bodies like water. Having them pixel-perfect is nice, but the performance hit may be too high to bear as of now.

With shadows … nope. Hard shadows are bad, even soft shadows are bad - we need plausible shadows, and there is, again, too much noise. I know there is post-filtering to remove the noise, but it never looked good enough in motion - it always introduced some kind of acne, artifacts or noise.

...

So it won't change much for now. Some games will allow features like perfect reflections (which you will turn on on a 2080 … and the reflection buffer will be rendered at half resolution … and therefore in some scenarios it will look even worse than a cubemap).

In a decade maybe, when we're able to do full path tracing at 60 fps … but there is a problem: the standard is shifting to 4K, which means four times the number of paths you need to calculate.

My current blog on programming, linux and stuff - http://gameprogrammerdiary.blogspot.com


" Nvidia’s GeForce RTX 2080 Ti will allegedly include 11GB of GDDR6 memory with 4352 CUDA cores "

"10GB of rays per second"

Just a few days ago, I found a lot of videos on YouTube talking about how BV was a complete failure... Yesterday I watched BV showing RTX and people were like "can't wait to play that awesome game". I can smell marketing here, because the previous demos of BV looked visually nothing special (no more special than others).
"Tomb Raider confirmed to use RTX"....
I can recall that a previous Tomb Raider used an NVidia feature for the hair. I haven't heard of that feature since, and it was a big thing back then.

Not to mention, I can find 10-year-old supposedly real-time demos on YouTube that look like the Star Wars RTX demo, supposedly using only an Intel CPU.

Apart from the marketing tricks and the new religion around RTX, I would still prefer to use all those cores and that fast memory for the old-school tricks used nowadays.

During gameplay, I don't think a gamer will have the time and attention to spot the difference between real traced reflections and other tricky techniques. Actually, I have to force myself to watch the RTX demos carefully in order to check if the reflections are correct. During gameplay, nobody would have the time to do that.
 

17 hours ago, NikiTo said:

In case the RT engine uses the same silicon as the other engines (the same way compute and graphics engines use the same silicon), wouldn't it sometimes be better to use that silicon for some old-school techniques than to choke the pipeline with ray tracing?

I'm surprised myself that RTX actually has dedicated RT cores. I assumed it would be implemented on (improved) compute.

Whether it is worth having dedicated cores actually depends on performance. The question of what would happen if you removed the RT and tensor cores and instead doubled the count of general-purpose cores, leaving it to the devs to implement RT themselves, can only be answered by comparing existing tracers against RTX. I'm curious about benchmarks after release. If they show a 10x speedup, then they might be right.

It also depends on how AMD and Intel react to this. They have not said anything yet, which makes me think NV has kept their RT plans a secret. But until there is broad support it's not of much use in practice. There's also a surprising number of upcoming games which support some RTX utilization. For now this seems like a big NV marketing move, and probably NV staff has implemented most of it. Just guessing... But I dislike the vision of development sliding more and more through our fingers. The main problem with DXR is that useful things like the traversal structure are hidden - or should I say 'implementation dependent, so each vendor can implement it optimally'? I think it's the former. Things like the RT acceleration structure, AI denoising, AI anti-aliasing, GameWorks... all those things are somehow NV IP. In my opinion they do too much of our job.
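As an illustration of what gets hidden: below is a minimal sketch of the kind of stack-based traversal loop that sits behind a call like TraceRay(). The node layout and helpers here are my own toy assumptions, not DXR's actual (undocumented) format.

```cpp
#include <vector>
#include <algorithm>
#include <cstdio>

// Toy BVH: interior nodes reference two children, leaves store a primitive.
struct Ray  { float org[3], dir[3]; float tMax; };
struct AABB { float bmin[3], bmax[3]; };
struct Node { AABB box; int left = -1, right = -1, prim = -1; }; // prim >= 0 => leaf

// Standard slab test (assumes a non-degenerate ray direction).
static bool hitBox(const Ray& r, const AABB& b) {
    float t0 = 0.0f, t1 = r.tMax;
    for (int a = 0; a < 3; ++a) {
        float inv = 1.0f / r.dir[a];
        float tn = (b.bmin[a] - r.org[a]) * inv;
        float tf = (b.bmax[a] - r.org[a]) * inv;
        if (tn > tf) std::swap(tn, tf);
        t0 = std::max(t0, tn); t1 = std::min(t1, tf);
        if (t0 > t1) return false;
    }
    return true;
}

// Iterative traversal with an explicit stack - the part DXR keeps opaque.
// Returns the first leaf primitive hit (closest-hit sorting omitted).
static int traverse(const std::vector<Node>& nodes, const Ray& r) {
    int stack[64], sp = 0;
    stack[sp++] = 0; // root at index 0
    while (sp > 0) {
        const Node& n = nodes[stack[--sp]];
        if (!hitBox(r, n.box)) continue;
        if (n.prim >= 0) return n.prim;
        stack[sp++] = n.right;
        stack[sp++] = n.left;
    }
    return -1;
}

int main() {
    std::vector<Node> nodes(3);
    nodes[0].box = {{-1,-1,-1},{1,1,1}}; nodes[0].left = 1; nodes[0].right = 2;
    nodes[1].box = {{-1,-1,-1},{0,1,1}}; nodes[1].prim = 0;
    nodes[2].box = {{ 0,-1,-1},{1,1,1}}; nodes[2].prim = 1;
    Ray r{{-2, 0, 0}, {1, 0, 0}, 100.0f};
    printf("hit primitive: %d\n", traverse(nodes, r)); // expect 0
}
```

With the structure exposed like this, devs could tune node layout and scheduling per scene; DXR's black box trades that control away for vendor freedom.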

Am I alone with this kind of opinion?

 

17 hours ago, NikiTo said:

What rendering techniques currently used in 3D games will become deprecated with the new GPU-accelerated ray tracing?

Reflections,

Shadows,

AO,

finally full GI, but not yet.

Currently it replaces mostly the things we cannot do well, like screen-space fakery - which is welcome.

Replacing rasterization entirely (which is not yet practical) would then enable other things like proper DOF, motion blur, and foveated rendering.

 

Edit: Good talk, they show some clever stuff: https://www.gdcvault.com/play/1024814/


17 hours ago, NikiTo said:

In case the RT engine uses the same silicon as the other engines (the same way compute and graphics engines use the same silicon), wouldn't it sometimes be better to use that silicon for some old-school techniques than to choke the pipeline with ray tracing?

Ray tracing can be completely parallelized, because each ray can be traced completely independently of the other rays. But using the most efficient acceleration algorithms, like clustering etc., requires a completely different GPU architecture: each thread has to work completely separately, as on CPU cores, rather than with GPU vectorization, where "threads" have to work synchronized. So, obviously, multicore general-purpose CPU accelerators like Xeon Phi fit RT needs better than GPUs.
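To make the independence claim concrete, here is a minimal sketch (my own toy example, not from the thread) where each thread traces its own slice of rays with no locks or synchronization between rays; traceRay() is just a placeholder, not a real tracer:

```cpp
#include <thread>
#include <vector>
#include <cstdio>

// Placeholder "tracer": any pure function of the pixel works here.
static float traceRay(int x, int y) { return float((x ^ y) & 0xFF); }

int main() {
    const int W = 1920, H = 1080;
    std::vector<float> image(W * H);
    unsigned T = std::thread::hardware_concurrency();
    if (T == 0) T = 4; // fallback if the count is unknown

    std::vector<std::thread> pool;
    for (unsigned t = 0; t < T; ++t)
        pool.emplace_back([&, t] {
            // Disjoint rows per thread: no ray ever reads another ray's
            // result, so no locks are needed - rays are fully independent.
            for (int y = int(t); y < H; y += int(T))
                for (int x = 0; x < W; ++x)
                    image[size_t(y) * W + x] = traceRay(x, y);
        });
    for (auto& th : pool) th.join();
    printf("traced %d primary rays on %u threads\n", W * H, T);
}
```

The catch the post points at is not the parallelism itself but divergence: neighboring rays may take very different paths through the scene, which wide SIMD lanes handle poorly.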

 

40 minutes ago, NikiTo said:

Actually, I have to force myself to watch the RTX demos carefully in order to check if the reflections are correct

Of course. But RT means global specular lighting, which makes OIT and many other rendering techniques much easier. Unfortunately, RT cannot do anything for global diffuse lighting and has difficulties with area lights, yet solving those two problems would increase visual realism much more than RT reflections/refractions. Also, classical Z-buffer rendering is mathematically the same as the first step of RT, but much faster to compute. So to get the advantages of both techniques you would need Xeon Phi-like cores and CUDA-like cores on a single device, which I guess will not happen in the near future.
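A toy sketch of that "mathematically the same" claim - a flattened depth table stands in for real triangles here (my own illustration, not from the thread): both pipelines run the same nearest-surface test per pixel, just with the loop order swapped.

```cpp
#include <cstdio>

const int W = 4, NTRIS = 3;
// depth[t][x]: depth of triangle t at pixel x, or -1 if t doesn't cover x.
const float depth[NTRIS][W] = {{5,5,-1,-1},{2,2,2,-1},{-1,9,9,9}};
const float FAR = 1e30f;

int main() {
    // Rasterizer order: outer loop over triangles, Z-buffer per pixel.
    float zbuf[W]; int idRaster[W];
    for (int x = 0; x < W; ++x) { zbuf[x] = FAR; idRaster[x] = -1; }
    for (int t = 0; t < NTRIS; ++t)
        for (int x = 0; x < W; ++x)
            if (depth[t][x] >= 0 && depth[t][x] < zbuf[x]) {
                zbuf[x] = depth[t][x]; idRaster[x] = t;
            }

    // Primary-ray order: outer loop over pixels, nearest triangle per ray.
    for (int x = 0; x < W; ++x) {
        float tHit = FAR; int idRay = -1;
        for (int t = 0; t < NTRIS; ++t)
            if (depth[t][x] >= 0 && depth[t][x] < tHit) {
                tHit = depth[t][x]; idRay = t;
            }
        printf("pixel %d: raster=%d ray=%d\n", x, idRaster[x], idRay); // equal
    }
}
```

Same visibility answer either way; the rasterizer just amortizes the work per triangle, which is why it is so much faster for that first hit.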

#define if(a) if((a) && rand()%100)

https://youtu.be/mlKGctu7OC8?t=54s

time offset is part of the link

No AO at all while climbing the wall.
The very next scene, where the wall is strongly lit, looks very awkward and unreal too.

Definitely no AO here either while climbing the wall (time in the link):

https://youtu.be/mlKGctu7OC8?t=1m36s

I cannot perceive AO on the body under the arms either. Plus there is a loading symbol that appears from time to time. It says "RT" and is loading something.

 
 

3 minutes ago, Fulcrum.013 said:

So to get the advantages of both techniques you would need Xeon Phi-like cores and CUDA-like cores on a single device, which I guess will not happen in the near future.

It may have already happened. We do not know how the dedicated RT cores on RTX work. I guess it's about clustering and scheduling to compromise between work and memory divergence, but there must be more, because you can do that with compute as well.

4 minutes ago, NikiTo said:

No AO at all while climbing the wall.

AFAIK Tomb Raider only uses it for shadows, but I'm not sure. It's an early title; we will see more impressive stuff after some time...

From my experience with digital painting, AO is what gives an image most of its realism.
Drop shadows help a lot, but at least for me, I cannot check or tell whether a drop shadow is correct. I don't know about other people.
AO, at least for me, is easy to judge for realism. And people accuse me of drawing too 3D-like, just because I put AO in my drawings.
I mean, in the case of a game like Tomb Raider, I would pay special attention to AO.

33 minutes ago, NikiTo said:

I don't know about other people.
AO, at least for me, is easy to judge for realism.

Then you're quite wrong, because (even raytraced) AO is not realistic at all, just an attempt to fake GI.

AO works by 'darkening' the scene where the probability of light reaching the surface is small. GI works by 'brightening' the scene with indirect light, we could say.

Because less indirect light is likely to reach corners, the AO trick often looks realistic, but better AO is not a move towards realism.
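For reference, here is the darkening trick described above as a minimal Monte Carlo sketch - my own toy scene (a single occluder plane), not how any engine implements it: estimate the fraction of hemisphere rays that escape within some radius, and darken by the rest.

```cpp
#include <cmath>
#include <cstdlib>
#include <cstdio>

// Toy occluder: an infinite ceiling plane at y = 1.
static bool occluded(const float p[3], const float d[3], float maxDist) {
    if (d[1] <= 0.0f) return false;     // ray goes sideways/down: escapes
    float t = (1.0f - p[1]) / d[1];     // distance to the ceiling
    return t > 0.0f && t < maxDist;
}

// AO estimate at point p: probability that a random hemisphere ray escapes.
static float ambientOcclusion(const float p[3], int samples, float radius) {
    int open = 0;
    for (int i = 0; i < samples; ++i) {
        // Uniform hemisphere sample around +Y (cos(theta) uniform in [0,1]).
        float u = rand() / (float)RAND_MAX, v = rand() / (float)RAND_MAX;
        float phi = 6.2831853f * u, cosT = v, sinT = sqrtf(1.0f - cosT * cosT);
        float d[3] = { sinT * cosf(phi), cosT, sinT * sinf(phi) };
        if (!occluded(p, d, radius)) ++open;
    }
    return open / (float)samples; // 1 = fully open, 0 = fully dark
}

int main() {
    float p[3] = {0, 0, 0};
    // The ceiling blocks rays with cos(theta) > 1/radius, so expect ~0.25.
    printf("AO under the ceiling: %.2f\n", ambientOcclusion(p, 4096, 4.0f));
}
```

Note this only estimates how much ambient light gets blocked; as the post says, it never adds the bounced light that real GI would, so it can only darken.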

 

@JoeJ  I understand that even the best examples of AO in games (in my opinion, Uncharted 4) are not real. They are often only dark stains. But I think if rays of light don't reach an area, that creates ambient occlusion, and it should be pretty accurate with ray tracing. It should look much more real than AO tricks, but for some reason Tomb Raider has no AO at all... It looks like something with good drop shadows to me.

Even a dark area without a defined form under Tomb Raider (the character) would help, in my opinion, rather than leaving her without AO at all.
I cannot say what algorithm I use for my drawings; I just imagine the AO.

My point is: we are talking about the omnipotent RTX here... but no AO...

(Real-life ambient occlusion on a clean wall here (time in the link):
https://youtu.be/b_zHQ6kFuQ0?t=10s
)

 

How the light reaches there:

[image attachment]

The parts that should be the color of the flames but are not:

[image attachment]

 

