I'm cranking away on upgrading my project to HDR rendering, and have a few questions that I'm hoping folks more experienced in this area can illuminate (more lighting puns to follow).
Focusing on auto-exposure adjustment, I'm presently working with the typical mip-chain averaging scheme (a luminance histogram or other smoothing methods are a future path).
Early examples use raw average, which is obviously pretty twitchy, and no one goes that route these days.
Later examples store log(lum), do the mip-chain, then extract and apply exp().
More recent methods seem to favor log2/exp2 (these are mostly compute shader variants).
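For clarity, here's roughly what I mean by the log/exp mip-chain version, as a minimal sketch (the epsilon, function names, and weights are placeholders for illustration, not my actual code):

    // Pass 1: write log-luminance into a render target, then generate its mip chain.
    float CalcLogLuminance(float3 linearColor)
    {
        float lum = dot(linearColor, float3(0.2126, 0.7152, 0.0722));
        return log(lum + 0.0001);   // small delta avoids log(0) on black pixels
    }

    // Pass 2: the 1x1 mip now holds the average of the logs, so exp() of that
    // value is the geometric mean of scene luminance rather than the raw mean.
    float ExtractAverageLuminance(Texture2D logLumTex, SamplerState samp, float lowestMip)
    {
        float avgLog = logLumTex.SampleLevel(samp, float2(0.5, 0.5), lowestMip).r;
        return exp(avgLog);
    }

That geometric mean is what keeps a handful of very bright pixels from dragging the average around as hard as a straight arithmetic mean would.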
I understand that averaging in log space helps to smooth out the influence of outlier values (although it still seems pretty twitchy in my experience).
I'm curious what the reasoning is behind the move to log2/exp2. It has some speed benefits, and algorithmically it helps for things like histogram calculations. Are there any other smoothing benefits I'm overlooking? Is log2/exp2 the currently favored luminance-averaging scheme? Also, are there other things people typically do to filter out outlier values and prevent over-corrections (thinking specifically of the mip-chain method still...all sorts of things are possible with compute)?
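To make concrete what I mean by log2 helping the histogram side: binning in log2 means the bins line up with EV stops, so the mapping is just a scale/bias of log2(lum). A sketch (the range parameters and names are made up for illustration):

    // Map a pixel's luminance to a histogram bin, working in log2 (i.e. EV stops).
    // minLogLum and logLumRange define the exposure range the histogram covers.
    uint LuminanceToHistogramBin(float lum, float minLogLum, float logLumRange, uint numBins)
    {
        if (lum < 0.0001)
            return 0;                                  // bin 0 catches near-black pixels
        float t = saturate((log2(lum) - minLogLum) / logLumRange);
        return (uint)(t * (numBins - 2) + 1);          // remaining bins cover the range
    }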
As it relates to auto-exposure adjustment, I've been "de-saturate()ing" my lighting in my pixel shaders; literally removing extraneous saturate() calls I had around various color calculations so that color can go into HDR ranges. This has exposed (pun intended) some interesting glitches where, at certain angles, the screen will go all white or all black, and this is definitely a result of the auto-exposure adjustment. I already clamp RGB to 60 before color grading, and add a delta to my log calculation to avoid log(0), so there's something "interesting" going on. I have more hunting to do on this (I thought it could be overblown specular, but it wasn't), but I thought I'd mention it in case folks recognize this as a common issue.
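For reference, the guards I mentioned boil down to something like the below; the clamp on the adapted average luminance itself is not something I have yet, just an assumption on my part about one way to at least bound how far the correction can swing (the bounds are placeholder values):

    // The pre-grading RGB clamp mentioned above.
    float3 PreGradeClamp(float3 hdrColor)
    {
        return min(hdrColor, 60.0);
    }

    // Simple key-value exposure, with a (hypothetical) clamp on the adapted
    // average luminance so a pathological average can't drive the whole frame
    // to pure white or pure black.
    float ComputeExposure(float avgLuminance, float keyValue)
    {
        float safeAvg = clamp(avgLuminance, 0.03, 8.0);   // placeholder min/max
        return keyValue / safeAvg;
    }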
Thirdly, and speaking of pixel luminance, it seems the two common sets of per-channel weights for determining a pixel's luminance are:
ITU.BT709: 0.2126, 0.7152, 0.0722
ITU.BT601: 0.299, 0.587, 0.114
I'm still a bit hazy on which one is appropriate in which situation. They presumably need to be applied to linear colors, but from what I gather ITU.BT709 is for the RGB colorspace, and ITU.BT601 is for YCbCr? I've also seen the latter referred to as "perceived brightness". While the former seems the most common, I've seen various samples use either, with no appreciable/explained reason, and the internet is a mess of jumbled and conflicting explanations.
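For concreteness, the two variants look like this in shader terms. My (possibly wrong) understanding is that the BT.709 weights give relative luminance when applied to linear RGB with Rec.709/sRGB primaries, while the BT.601 weights come from SD-video luma, where they're normally applied to gamma-encoded R'G'B' on the way to YCbCr:

    // Relative luminance from linear RGB with Rec.709/sRGB primaries.
    float LuminanceBT709(float3 linearColor)
    {
        return dot(linearColor, float3(0.2126, 0.7152, 0.0722));
    }

    // BT.601 luma weights, historically applied to gamma-encoded R'G'B'
    // to form Y' for YCbCr (SD video).
    float LumaBT601(float3 gammaColor)
    {
        return dot(gammaColor, float3(0.299, 0.587, 0.114));
    }

If anyone can confirm or correct that framing, I'd appreciate it.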