
Is Deferred Rendering still good to use in 2020 and almost 2021?

Started by October 09, 2020 02:39 PM
20 comments, last by vladislavbelov 4 years, 3 months ago

Hi everybody,
Deferred Rendering was a massive new thing and has been used a lot over the last couple of decades.
I'm beginning to think deferred rendering is no longer the right way to go in 2020, almost 2021.
You can see, for example, Doom Eternal, which is 100% forward rendering.
What is your opinion on it?
Thanks!

The original approach to deferred rendering is certainly not the best way to handle things nowadays anymore, but neither is the standard forward approach of rendering each object × each light. I suspect Doom Eternal is doing exactly that. There are much smarter ways, like Tiled/Clustered rendering, which are easily compatible with both forward and deferred approaches.

In the renderer I built using the Avalanche clustered approach, forward with a Z-prepass also turned out to be almost always faster, so I can see why you would want to go that route. The main reason is that with clustered forward you get the same time-complexity: each object/pixel only has to be rendered once, and you only iterate over the lights that potentially affect that pixel, while saving the bandwidth overhead of a full deferred approach. (In that scenario, deferred's remaining benefits are skipping the light calculations on potentially occluded pixels, hence the Z-prepass, and having a G-buffer you can use for easier post-processing.)
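To make the "only iterate over the lights that potentially affect that pixel" part concrete, here is a minimal C++-style sketch of the per-pixel work in a clustered forward shader. The grid dimensions, data layout and depth-slicing formula are illustrative assumptions, not the exact layout used by the Avalanche or id Tech renderers.

```cpp
// Sketch of clustered forward shading: per-cluster light lists are built once
// per frame, and each pixel only loops over the lights in its own cluster.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };
struct Light { Vec3 position; Vec3 color; float radius; };

struct ClusterGrid {
    static constexpr int kTilesX = 16, kTilesY = 9, kSlicesZ = 24;  // assumed dimensions
    std::vector<uint32_t> offsets;      // per cluster: start index into lightIndices
    std::vector<uint32_t> counts;       // per cluster: number of lights
    std::vector<uint32_t> lightIndices; // flat list built by the light-assignment pass
};

// Exponential depth slicing, as commonly used in clustered renderers.
int DepthToSlice(float viewZ, float zNear, float zFar) {
    float t = std::log(viewZ / zNear) / std::log(zFar / zNear);
    int s = static_cast<int>(t * ClusterGrid::kSlicesZ);
    return std::max(0, std::min(s, ClusterGrid::kSlicesZ - 1));
}

// Per-pixel shading: only the lights assigned to this pixel's cluster are visited.
Vec3 ShadePixel(const ClusterGrid& grid, const std::vector<Light>& lights,
                int tileX, int tileY, int slice, const Vec3& albedo) {
    int cluster = (slice * ClusterGrid::kTilesY + tileY) * ClusterGrid::kTilesX + tileX;
    Vec3 result{0.0f, 0.0f, 0.0f};
    for (uint32_t i = 0; i < grid.counts[cluster]; ++i) {
        const Light& l = lights[grid.lightIndices[grid.offsets[cluster] + i]];
        // A real shader would evaluate BRDF * attenuation * shadowing here;
        // this placeholder just accumulates the light color against the albedo.
        result.x += albedo.x * l.color.x;
        result.y += albedo.y * l.color.y;
        result.z += albedo.z * l.color.z;
    }
    return result;
}
```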


In my opinion, the deferred-only method originally came about to demonstrate what was achievable with the hardware and newer pipelines of the time in terms of light calculations (and more), but it was never meant to be the only method for rendering objects (e.g., transparency suffered).

Therefore a mix of deferred, clustered shading, forward tech, raytracing (and more) is, in my humble opinion, the way of the future in terms of rendering. So if your game engine's pipeline can support mixing these techniques, then you have a foothold in that future.

To name a few… Just Cause and especially DOOM were very advanced when they were released, in that their engines stored light calculations, reflection probes (and more) in clustered grids and then applied them in a forward pass. That means the engine could, for example, switch between generating high-quality decal splatters in clusters and applying transparency on splattered objects with forward tech.

And yet, forward tech predates deferred and clustering, but it's the mix of these that makes up advanced rendering today.

These days, I wouldn't think about how old a tech is and whether or not I should stop using it; rather, I think about how to mix any tech into my pipeline.

  • Please note that when I say old, I mean a feature that is not deprecated in the SDK and is still supported by the video card.
    So if a tech is old but still usable as a needed feature, it will make its way into my pipeline.
  • Also, when I say any tech, I mean any tech the game needs. I don't add techs just for the sake of it.

That's it…

Until then…

It's… whatever. Almost everyone does hybrid now, except for Doom Eternal which is the weirdo that does clustered forward only with a bunch of odd workarounds.

The Last of Us II, on the other hand, is mostly deferred, and looks pretty damned good. They use a neat series of optimizations to support a large number of material slots in deferred efficiently; you can find it under Deferred Shading in Uncharted 4 here: https://advances.realtimerendering.com/s2016/index.html

I suppose the real question is what the aims of your project are: what do you need out of it?

It was to get opinions after seeing that Doom Eternal uses forward rendering exclusively, and because using 16 bits in deferred means we have to deal with precision issues.

@Alundra 16 bits for what exactly? The Doom Eternal rendering stuff is here: https://advances.realtimerendering.com/s2020/index.htm

And it's a good read. But to sum it up, along with other things, there are a number of workarounds to actually get the art to work right. To put it in a list:

  • Forward is highly efficient; can do translucency; can do a lot of BRDFs/etc.

  • Deferred has overhead; is better for raytracing; is much better for material layering/blending.

So it really depends on what type of game it is.


16 bits on the MRTs, generally, and that gives precision issues on the normals, causing banding.

Alundra said:
16 bits on the MRTs, generally, and that gives precision issues on the normals, causing banding.

Do you know about the techniques for compact normal storage? https://aras-p.info/texts/CompactNormalStorage.html

I use #4, and it gives me little to no banding while still fitting the normal into the G-buffer. In general, I use 8 bits/channel for the G-buffer, since e.g. position is reconstructed from the depth buffer and doesn't need to be stored explicitly. Other engines use 10 bits/channel for normals, for example https://seblagarde.files.wordpress.com/2015/07/course_notes_moving_frostbite_to_pbr_v32.pdf (page 15).
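For reference, one common encoding from that family is octahedral mapping, which packs a unit normal into two channels that can then be quantized to 8 or 10 bits each. Here is a minimal C++ sketch of the standard encode/decode; note this illustrates compact normal storage in general and is not necessarily the exact method "#4" from the linked article.

```cpp
// Octahedral normal encoding: project the unit normal onto an octahedron,
// fold the lower hemisphere over, and store the result as two values in [0,1].
#include <cmath>

struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

Vec2 OctEncode(Vec3 n) {
    float l1 = std::abs(n.x) + std::abs(n.y) + std::abs(n.z);
    n.x /= l1; n.y /= l1; n.z /= l1;                 // project onto the octahedron
    Vec2 e{n.x, n.y};
    if (n.z < 0.0f) {                                // fold the lower hemisphere over
        float ex = (1.0f - std::abs(n.y)) * (n.x >= 0.0f ? 1.0f : -1.0f);
        float ey = (1.0f - std::abs(n.x)) * (n.y >= 0.0f ? 1.0f : -1.0f);
        e = {ex, ey};
    }
    return {e.x * 0.5f + 0.5f, e.y * 0.5f + 0.5f};   // map [-1,1] to [0,1] for storage
}

Vec3 OctDecode(Vec2 e) {
    float x = e.x * 2.0f - 1.0f, y = e.y * 2.0f - 1.0f;
    float z = 1.0f - std::abs(x) - std::abs(y);
    if (z < 0.0f) {                                  // unfold the lower hemisphere
        float ox = (1.0f - std::abs(y)) * (x >= 0.0f ? 1.0f : -1.0f);
        float oy = (1.0f - std::abs(x)) * (y >= 0.0f ? 1.0f : -1.0f);
        x = ox; y = oy;
    }
    float len = std::sqrt(x * x + y * y + z * z);
    return {x / len, y / len, z / len};              // renormalize after quantization
}
```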

The tech is still perfectly good. There are techniques from past eras which used to work but no longer do, such as sprite systems, which worked great on earlier hardware (including relatively recent consoles and devices like the Nintendo DS) but are terrible on modern PC graphics cards. That's not the case here; the tech is still fully relevant and fast.

If you're implementing something yourself, use whatever works best for you. If it's built into the engine or tools you're using, use it or not as you want. But for implementing large systems, it's often a mistake to go for the biggest and brightest new systems, UNLESS you're working on a major AAA game, where it's pretty much expected.

Alundra said:
16 bits on the MRTs, generally, and that gives precision issues on the normals, causing banding.

You can turn banding into noise by adding randomness. Often the resulting noise is not visible to the human eye, while the banding is resolved.
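A minimal sketch of that idea, assuming a simple per-pixel hash as the noise source (engines often use blue noise or an ordered dither pattern instead): add an offset of up to half a quantization step before rounding to 8 bits, so the quantization error decorrelates into noise rather than visible bands.

```cpp
// Dithered quantization: break banding into noise by perturbing the value
// with a small pseudo-random offset before rounding to 8 bits.
#include <cmath>
#include <cstdint>

// Cheap per-pixel hash in [0,1); purely illustrative, not a production choice.
float Hash(uint32_t x, uint32_t y) {
    uint32_t h = x * 374761393u + y * 668265263u;
    h = (h ^ (h >> 13)) * 1274126177u;
    return (h & 0xFFFFFFu) / 16777216.0f;
}

uint8_t QuantizeDithered(float value01, uint32_t px, uint32_t py) {
    float dither = Hash(px, py) - 0.5f;          // +/- half a quantization step
    float v = value01 * 255.0f + dither;
    v = std::fmin(std::fmax(v, 0.0f), 255.0f);   // clamp to the 8-bit range
    return static_cast<uint8_t>(std::lround(v));
}
```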

This topic is closed to new replies.
