
HDR Implementation

Started by February 26, 2023 10:56 PM
3 comments, last by SBD 1 year, 8 months ago

I'm cranking away on upgrading to HDR rendering on my project, and have a few questions that I'm hoping folks more experienced in this area can illuminate (more lighting puns to follow).

Focusing on auto-exposure adjustment, I'm presently working on the typical mip-generated scheme (average luminance histogram or other smoothing methods are a future path).

Early examples use raw average, which is obviously pretty twitchy, and no one goes that route these days.

Later examples store log(lum), do the mip-chain, then extract and apply exp().

More recent methods seem to favor log2/exp2 (these are mostly compute shader variants).

I understand that using average log helps to smooth out outlier values (although it still seems pretty twitchy in my experience).

I'm curious what the reasoning behind moving to log2/exp2 is? It has some speed benefits, and algorithmically it helps for things like histogram calculations. Are there any other basic benefits there for smoothing that I'm overlooking? Is log2/exp2 the current favored luminance averaging scheme? Also, are there other things people typically do to try and filter out outlier values to prevent over-corrections (thinking specifically of the mip-chain method still...all sorts of things are possible with compute)?

As it relates to auto-exposure adjustment, I've been "de-saturate()ing" my lighting in my pixel shaders; literally removing extraneous saturate() calls I had around various color calculations so that we can allow color to go into HDR ranges. This has exposed (pun intended) some interesting glitches where at certain angles the screen will go all white or black, and this is definitely a result of the auto-exposure adjustment. I already do clamping of RGB to 60 before color grading, and add a delta to my log calculation to avoid log(0), so there's something "interesting" going on. I have more hunting to do on this (thought it could be over-blown specular, but it wasn't), but I thought I'd mention it in case folks recognize this as a common issue.

Thirdly, and speaking of pixel luminance, it seems the two common sets of weights for determining a pixel's luminance are:

ITU.BT709: 0.2126, 0.7152, 0.0722

ITU.BT601: 0.299, 0.587, 0.114

I'm still a bit hazy on which one is the most appropriate to use in which situation. They need to be applied to linear colors, but from what I gather ITU.BT709 is for the RGB colorspace, and ITU.BT601 is for YCbCr? I've also seen the latter referred to as "perceived brightness". While the former seems the most common, I have seen various samples use either, with no appreciable/explained reason, and the internet is a mess of jumbled and conflicting explanations.

SBD said:
I'm curious what the reasoning behind moving to log2/exp2 is? It has some speed benefits, and algorithmically it helps for things like histogram calculations. Are there any other basic benefits there for smoothing that I'm overlooking? Is log2/exp2 the current favored luminance averaging scheme? Also, are there other things people typically do to try and filter out outlier values to prevent over-corrections (thinking specifically of the mip-chain method still...all sorts of things are possible with compute)?

log2()/exp2() produces a geometric mean of the values when using a downsampling approach, rather than a simple arithmetic average. This tends to work better if the scene has high contrast. I think log2/exp2 are also implemented as a single instruction on GPUs, which makes them fast. There's no real difference between log2()/exp2(), ln()/exp(), or log10()/exp10() beyond a scale factor.
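That geometric-mean behaviour is easy to check numerically. A quick sketch (Python here purely for illustration; the shader equivalent is the usual average-of-log2 mip chain followed by exp2):

```python
import math

def geometric_mean_log2(values):
    # Average in log2 space, then convert back with exp2:
    # this yields the geometric mean, not the arithmetic mean.
    avg_log = sum(math.log2(v) for v in values) / len(values)
    return 2.0 ** avg_log

# A toy "scene" with one bright outlier (e.g. a specular highlight).
lums = [0.1, 0.1, 0.1, 100.0]
arith = sum(lums) / len(lums)        # dominated by the single outlier
geo = geometric_mean_log2(lums)      # stays much closer to the dark majority
```

The geometric mean is pulled far less by the single bright pixel than the arithmetic mean, which is exactly the smoothing benefit being described.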

My understanding is that the most modern approach for eye adaptation is to use a histogram of log2(luminance). This requires a compute shader that goes through all image pixels and splats them into histogram bins. Then, you need another shader that scans the histogram to determine the target luminance. I think the usual approach is similar to the “Levels” tool in Photoshop. You want to exclude some percent of the very high and low values in the histogram, e.g. 5%, to avoid outliers.
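A CPU-side sketch of that histogram-plus-percentile idea (Python for illustration; the bin count, log range, and 5% tails are assumptions, not fixed values):

```python
import math

def target_log_luminance(lum_values, num_bins=64, lo_frac=0.05, hi_frac=0.05,
                         min_log=-10.0, max_log=10.0):
    # Pass 1 (the compute-shader "splat"): bin each pixel by log2(luminance).
    hist = [0] * num_bins
    scale = num_bins / (max_log - min_log)
    for lum in lum_values:
        b = int((math.log2(max(lum, 1e-4)) - min_log) * scale)
        hist[min(max(b, 0), num_bins - 1)] += 1

    # Pass 2 (the scan): drop the darkest/brightest tails, average the rest,
    # much like dragging the endpoints of Photoshop's Levels tool inward.
    total = len(lum_values)
    lo_cut, hi_cut = total * lo_frac, total * (1.0 - hi_frac)
    seen = 0
    weighted = 0.0
    counted = 0.0
    for b, count in enumerate(hist):
        start = seen
        seen += count
        # Portion of this bin that falls inside the kept [lo_cut, hi_cut] window.
        kept = max(0.0, min(seen, hi_cut) - max(start, lo_cut))
        weighted += kept * (min_log + (b + 0.5) / scale)  # bin-center log2 value
        counted += kept
    return weighted / counted if counted else 0.0
```

With 5% tails, a handful of extremely bright pixels (sun, specular hits) fall entirely inside the discarded upper tail and no longer drag the exposure target around.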

I have used the ITU.BT709 formula for computing luminance from linear RGB.
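Concretely, both weightings are just dot products against linear RGB; a small sketch (Python for illustration, coefficients from the two ITU recommendations):

```python
def luminance_bt709(r, g, b):
    # Rec. 709 coefficients: derived for Rec. 709 / sRGB primaries,
    # i.e. the primaries a linear HDR renderer is usually working in.
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def luminance_bt601(r, g, b):
    # Rec. 601 coefficients: defined for SD video (the luma in YCbCr);
    # these show up in older samples as "perceived brightness".
    return 0.299 * r + 0.587 * g + 0.114 * b
```

Both sets of weights sum to 1, so white maps to luminance 1.0 either way; they differ only in how much each channel contributes.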


Thanks, that's about what I figured (log2/exp2 faster and algorithmically convenient, no other real benefits as far as outlier filtering).

I'm presently working on a CPU-side luminance histogram of sorts (just copying the lowest generated average-luminance mip into a 1x1 staging texture). Obviously not the most optimal/performant thing to do, but thus far it hasn't shown up as any kind of egregious performance killer, and it gives me some flexibility to leverage other CPU-side data for my luminance filtering.

Wanted to follow up with some updates that others may find of interest.

On the subject of “de-saturating” my pixel shader color/lighting calculations/accumulations, I did encounter one nasty little bug that presented itself in non-obvious ways, and it dealt specifically with alpha. The short of it is that one must be careful to still clamp alpha where appropriate (and also not allow negative RGB, for that matter). What this boiled down to for me was replacing my old saturate calls with my own custom implementation, which is:

float4 saturate_color_rgba( float4 c )
{
	return float4( max( 0.0, c.rgb ), saturate( c.a ) );
}

This still doesn't necessarily prevent the actual alpha blending from outputting an alpha value back to the render target that is outside the 0-1 range, but so long as you're able to apply this saturate before sending your value out of the pixel shader you're OK. While there is the D3D11_BLEND_SRC_ALPHA_SAT blend factor available to try to deal with this, it's the only alpha-saturate blend factor D3D11 provides, and it seems an incomplete thought without at least a comparable INV_SRC variant alongside.

The other artifact I mentioned was an issue with my pixel luminance calculation as it approached zero. My code naively did this:

output.Luminance = log2( output.Luminance + 0.001 ); // Bad

whereas to really catch all the edge cases (adding an epsilon doesn't help if the luminance has somehow gone negative, which leaves log2() undefined), it needed to be this:

output.Luminance = log2( max( output.Luminance, 0.001 ) ); // Good

My scheme for CPU-side scene luminance histogram seems to be working pretty decently. It's just a very simple running average of the scene luminance over X frames, which smooths out small blips and also gives a more gradual/natural exposure adjustment; in my case, with a 120 frame buffer, that means it will take 2 seconds (at 60 FPS) to adjust. It's not as fancy as doing a proper (compute) histogram that allows for discarding percentages of outlier pixel luminances, but it seems to do the trick (at least, for now).
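A minimal version of that running average (Python for illustration; the 120-frame default matches the window described above, at 60 FPS roughly 2 seconds of adaptation):

```python
from collections import deque

class LuminanceSmoother:
    """Running average of the per-frame average scene luminance over the
    last `window` frames - a simple CPU-side stand-in for eye adaptation."""

    def __init__(self, window=120):
        # deque with maxlen automatically evicts the oldest sample.
        self.samples = deque(maxlen=window)

    def push(self, avg_lum):
        # Add this frame's average luminance; return the smoothed value
        # to feed into the exposure calculation.
        self.samples.append(avg_lum)
        return sum(self.samples) / len(self.samples)
```

A sudden one-frame spike (a muzzle flash, say) only shifts the smoothed value by spike/window, which is what removes the "twitchiness" of using the raw per-frame average directly.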

Lastly, a very obvious adjustment I made to my tonemapper was to allow for clamping to minimum and maximum exposure values, which can prevent over-brightening or darkening. It's an extremely obvious “feature”, but one you never really see in the samples on the internet:

float3 AutoAdjustExposure( float3 hdrColor, float avgLuminance )
{
	const float kMiddleGray = 0.72;

	float3 c = hdrColor * clamp( ( kMiddleGray / avgLuminance ), HDRAutoExposureMinBrightness, HDRAutoExposureMaxBrightness );

	return c;
}

In the case of my engine/assets, this allowed me to set the max brightness to 1.0, which prevents “dark” scenes (which is any existing SDR scene) from “over-brightening”.

Now that this is sorted, I'd be curious to hear how others are handling their HDR light properties. On the authoring side it seems common and sensible to separate out color and intensity, and I'll be doing the same. I'm curious as to what units people have been using for intensity, and how they're translating that to their in-game linear calculations? Unreal allows for using lumens and other units depending on light type. I've seen others propose using EV stops as an intensity value (sort of weird to use exposure value as a light intensity, but artists are familiar with it). What's your experience? One could probably get away with just pre-multiplying the color with intensity at load-time and carrying on as normal, but there are probably cases where having that separated intensity may be useful (depending on your lighting scheme/calculations). Right now I'm focused on more-or-less standard Blinn-Phong, but physically-based lighting/shading is definitely coming up next.

