
Depth-of-field

Started by matt77hias, November 06, 2017 08:10 PM
16 comments, last by matt77hias 7 years, 3 months ago

Looks nice :) 

But if my suggestion removes all blur, this implies that blur_factor_i is too small to have an effect in either case?

The artefact I'd want to avoid does not show up in your shots (it was very noticeable in old games that did not read the depth buffer). Maybe in the middle screenshot the sky should receive less darkness from the roof edge, and avoiding sampling from near pixels would fix that.

Another idea is to utilize blurred mips from bloom: fewer samples and less noise at the flag poles.
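A minimal sketch of the near-pixel rejection idea (the first suggestion above), assuming a gather pass with access to depth and per-pixel CoC textures; the resource names (g_depth, g_color, g_coc) and the 8-tap disk are illustrative, not from any posted shader:

Texture2D<float>  g_depth : register(t0); // linear view-space depth
Texture2D<float3> g_color : register(t1);
Texture2D<float>  g_coc   : register(t2); // per-pixel CoC radius in pixels

static const uint   NB_TAPS = 8u;
static const float2 g_disk[NB_TAPS] = {   // unit disk (octagon)
    float2( 1.0f,  0.0f), float2( 0.707f,  0.707f),
    float2( 0.0f,  1.0f), float2(-0.707f,  0.707f),
    float2(-1.0f,  0.0f), float2(-0.707f, -0.707f),
    float2( 0.0f, -1.0f), float2( 0.707f, -0.707f)
};

float3 GatherWithRejection(int2 location, float radius) {
    const float center_depth = g_depth[location];
    float3 color_sum  = g_color[location];
    float  weight_sum = 1.0f;
    for (uint i = 0u; i < NB_TAPS; ++i) {
        const float2 offset = g_disk[i] * radius;
        const int2   p      = location + (int2)round(offset);
        // Accept a sample if it lies behind the center pixel, or if its
        // own blur disk is wide enough to reach the center pixel. This
        // keeps sharp foreground edges (e.g. a roof edge) from darkening
        // the distant background behind them.
        if (g_depth[p] >= center_depth || g_coc[p] >= length(offset)) {
            color_sum  += g_color[p];
            weight_sum += 1.0f;
        }
    }
    return color_sum / weight_sum;
}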

An easier side-by-side comparison:

SetLensRadius(7.0f);
SetFocalLength(10.0f);
SetMaximumCoCRadius(1.0f);
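For context, one thin-lens-flavored way such parameters could drive a per-pixel blur factor is sketched below; this is an assumption about how the three setters might be consumed, not the engine's actual code:

cbuffer Camera : register(b0) {
    float g_lens_radius;    // SetLensRadius
    float g_focal_length;   // SetFocalLength (distance to the plane in focus)
    float g_max_coc_radius; // SetMaximumCoCRadius
};

// Maps a pixel's linear view-space depth to [0, 1]: the blur disk grows
// with the distance from the plane in focus and is clamped to the
// maximum CoC radius.
float BlurFactor(float view_z) {
    const float coc = g_lens_radius * abs(view_z - g_focal_length)
                                    / max(view_z, 0.0001f);
    return saturate(coc / g_max_coc_radius); // 0 = sharp, 1 = maximal blur
}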

[screenshot, 2017-11-07: side-by-side comparison]

PS: I have no idea why my screenshots (I use DirectXTex's ScreenGrab) sometimes look so over-saturated on some websites.

🧙

1 hour ago, JoeJ said:

Maybe in the middle screenshot the sky should receive less darkness from the roof edge, and avoiding sampling from near pixels would fix that.

I rescaled my disk radius to 1 instead of 0.5, which was more difficult to reason about.

Anyway, I created a separate release in case someone wants to play around with it (sponza.cpp contains the parameters for the first scene).

🧙

The standard approach is definitely downsample + blur in a separate pass and blend by depth in another, because it is the fastest and still looks good enough for casual viewers (a minimal sketch follows below). The problems are that the blur radius is not variable, the blurred parts leak into the sharp parts of the image, and a Gaussian blur is not lifelike at all and doesn't produce a nice bokeh effect. I want to implement something like in the new Doom, as described in this excellent graphics breakdown article: basically, a pass that separates the scene into foreground and background, clever blurring (an HDR image is mandatory), then a combine pass.
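A minimal sketch of that classic blend pass, with illustrative resource names and a simple depth ramp standing in for a real CoC model:

Texture2D    g_sharp   : register(t0); // full-resolution scene
Texture2D    g_blurred : register(t1); // downsampled + Gaussian-blurred scene
Texture2D    g_depth   : register(t2); // linear view-space depth
SamplerState g_sampler : register(s0);

cbuffer DOF : register(b0) {
    float g_focus_depth; // depth of the plane in focus
    float g_focus_range; // depth range over which the blur ramps up
};

float4 PS(float4 pos : SV_Position, float2 uv : TEXCOORD0) : SV_Target {
    const float3 sharp   = g_sharp.Sample(g_sampler, uv).rgb;
    const float3 blurred = g_blurred.Sample(g_sampler, uv).rgb;
    const float  depth   = g_depth.Sample(g_sampler, uv).r;
    // Single fixed-radius blur, blended in by depth: fast, but the blurred
    // foreground leaks into sharp regions at silhouettes, as noted above.
    const float  t = saturate(abs(depth - g_focus_depth) / g_focus_range);
    return float4(lerp(sharp, blurred, t), 1.0f);
}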

As others have mentioned, in recent years it's become popular to use bokeh-shaped gather kernels whose size varies per pixel based on the CoC size. This naturally gives you an approximation of bokeh effects, and looks better in transition areas than the old approach of blending between blurred and non-blurred images.
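A sketch of such a variable-size gather, assuming a precomputed per-pixel CoC radius (in pixels) and a bokeh-shaped set of unit offsets; all names are illustrative, and the sample weighting that softens the sharp/blurred transition is the interesting part:

Texture2D<float3> g_color   : register(t0);
Texture2D<float>  g_coc     : register(t1); // per-pixel CoC radius in pixels
SamplerState      g_sampler : register(s0);

static const uint   NB_TAPS = 8u;
static const float2 g_bokeh[NB_TAPS] = {   // aperture-shaped unit offsets
    float2( 1.0f,  0.0f), float2( 0.707f,  0.707f),
    float2( 0.0f,  1.0f), float2(-0.707f,  0.707f),
    float2(-1.0f,  0.0f), float2(-0.707f, -0.707f),
    float2( 0.0f, -1.0f), float2( 0.707f, -0.707f)
};

float3 GatherDOF(float2 uv, float2 texel_size) {
    // Scale the kernel by this pixel's own CoC radius.
    const float coc_center = g_coc.SampleLevel(g_sampler, uv, 0.0f);
    float3 color_sum  = g_color.SampleLevel(g_sampler, uv, 0.0f);
    float  weight_sum = 1.0f;
    for (uint i = 0u; i < NB_TAPS; ++i) {
        const float2 offset     = g_bokeh[i] * coc_center;
        const float2 sample_uv  = uv + offset * texel_size;
        const float  coc_sample = g_coc.SampleLevel(g_sampler, sample_uv, 0.0f);
        // Weight by how far the sample's own blur disk reaches toward this
        // pixel, which smooths the transition between sharp and blurred.
        const float  weight = saturate(1.0f + coc_sample - length(offset));
        color_sum  += weight * g_color.SampleLevel(g_sampler, sample_uv, 0.0f);
        weight_sum += weight;
    }
    return color_sum / weight_sum;
}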

If you want a working example along these lines, you can check out my BakingLab project. It has a gather-based DOF implementation that's driven by physical camera parameters, and you can even switch over to the CPU-based path tracer for ray-traced DOF. The relevant shader code can be found here: https://github.com/TheRealMJP/BakingLab/blob/master/BakingLab/PostProcessing.hlsl#L160

The sky looks horrible :(

Based on my output (or my inability to tweak three parameters), I wonder why I would ever want DoF, as it seems to ruin the whole graphical experience. I am a bit biased as well, knowing what a ground-truth path-traced DoF would look like, but this does not look very close.

[screenshot, 2017-11-08]

12 hours ago, MJP said:

If you want a working example along these lines, you can check out my BakingLab project. It has a gather-based DOF implementation that's driven by physical camera parameters, and you can even switch over to the CPU-based path tracer for ray-traced DOF. The relevant shader code can be found here: https://github.com/TheRealMJP/BakingLab/blob/master/BakingLab/PostProcessing.hlsl#L160

I will take a look. Can I get away with it in a single pass?

🧙

On 7-11-2017 at 6:57 PM, matt77hias said:

if (blur_factor <= 0.0f) {
    return;
}

@JoeJ I realized last night that I made a very crude error :o
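The post does not spell the error out, but one plausible reading of the quoted snippet, assuming a compute shader that must write every pixel of an output texture, is that the early return leaves in-focus pixels unwritten. A hedged sketch of that fix (g_input, g_output and location are illustrative names):

// Pixels that need no blur must still be written: pass the sharp
// input through instead of returning without writing anything.
if (blur_factor <= 0.0f) {
    g_output[location] = g_input[location];
    return;
}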

🧙

This topic is closed to new replies.
