
Abusing LDR colour grading LUT in HDR

Started by June 19, 2018 04:28 PM
2 comments, last by pcmaster 6 years, 7 months ago

Hi community!

Do you know of any articles that deal with abusing existing 8-bit LDR colour grading LUTs when you want to use them for HDR10 (HW HDR) output? The case is when you mustn't touch the existing LUTs but still want the HDR output to preserve the artistic feel (as much as possible).

I've found this one http://www.glowybits.com/blog/2016/12/21/ifl_iss_hdr_1/ (section Color Grading). I'm currently testing it.

With the approach I devised, there's a problem when the LUT blows "okay" colours to "fully blown" near 1.0:

ldr = hdr / (hdr + 0.2)  // Reinhard-like tonemapper from the existing game
graded = lut(ldr)  // until now, this is exactly what the existing game does in LDR
reconstructedHDR = -0.2 * graded / (graded - 1)  // singularity at graded = 1

My approach works fine as long as the LUT isn't too aggressive. When it is, I'm toast :) I tried many ways to clamp the reconstructed value, or dial it back when it flies too high, but nothing was satisfactory.
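To make the blow-up concrete, here's a minimal numerical sketch of the naive pipeline (Python; `aggressive_lut` is a hypothetical stand-in for a grading LUT that pushes bright values towards 1):

```python
def tonemap(hdr):
    # Reinhard-like curve from the existing game
    return hdr / (hdr + 0.2)

def inverse_tonemap(ldr):
    # Exact inverse of the curve above; singular as ldr -> 1
    return 0.2 * ldr / (1.0 - ldr)

def aggressive_lut(ldr):
    # Hypothetical aggressive grading: brightens, then clamps to 1
    return min(1.0, ldr * 1.25)

for hdr in [0.5, 2.0, 10.0]:
    graded = aggressive_lut(tonemap(hdr))
    if graded >= 1.0:
        print(f"hdr={hdr}: graded hit 1.0, inverse blows up")
    else:
        print(f"hdr={hdr}: reconstructed {inverse_tonemap(graded):.3f}")
```

Already at hdr = 2.0 the graded value saturates at 1.0 and the inverse is undefined, which is exactly the "fully blown" problem above.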

The original game, as most games, applies colour grading after doing the tonemapping. If it was before, I wouldn't really have a problem, would I?

.P

Ideas:

- tone map to LDR, color correct, then inverse tonemap back to HDR, then tonemap to HDR10. I guess this is what you're doing now though... Maybe it wouldn't fail as badly with a better tonemapper?

- HDR10 has a fixed range of 0-10000 nits, where ~200 is traditional white (and the actually displayable range is more like 0-1000 on current TVs), so you could simply tonemap to the HDR10 range, divide by 1000, color correct, then multiply by 1000.

- use scRGB as your post processing color space, port color correction to this new workflow. Tonemap from full range HDR to 0-125 scRGB range (corresponds to 0-10k nits). Then color correct. Then if not doing HDR10, tonemap a second time down to 0-1 sRGB range. But this violates your request to not change the color correction workflow :(
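The second idea could be sketched roughly like this (Python; `tonemap_to_nits` is a placeholder curve targeting a 1000-nit peak, and `lut` stands for the existing 0-1 grading LUT):

```python
def tonemap_to_nits(hdr_scene, peak_nits=1000.0):
    # Hypothetical tonemap from scene-referred HDR to display nits;
    # any curve targeting the display's peak would do here
    return peak_nits * hdr_scene / (hdr_scene + 1.0)

def grade_in_hdr10_range(nits, lut, peak_nits=1000.0):
    # Normalise display nits into the LUT's 0-1 domain,
    # apply the existing LDR grading LUT, then re-expand
    normalised = nits / peak_nits
    return lut(normalised) * peak_nits
```

Note the caveat: the LUT now sees LDR values produced by a different curve than the Reinhard-like one it was tuned against, so the grade won't match the original look exactly.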


Yeah, I can't tamper with the existing colour correction tuned on LDR after Reinhard-like tonemapping. Not on this project.

I like the second idea though. I'll give it a shot, but I'm too tired after looking at this for long days.

After all my experimentation, the approach from Infamous Second Son that I linked inspired me, and it works with a modification. They devised a ColorGradeSmoothClamp function that behaves like y = x near 0 and flattens to y = 1 near x = 1.5. I used our original Reinhard-like HDR/(HDR+0.2) instead of this one and it works surprisingly well:

  1. select maximum component of HDR
  2. tonemap the maximum component using the same tone-mapper as the original pipeline (but not RGB, only the max component)
  3. scale = tonemapped max component / max component
  4. colour grade (original hdr * scale)
  5. re-expand: graded / scale

This works precisely for identity colour grading (which is a must) as well.
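The five steps above can be sketched as follows (Python; `lut` stands for the existing per-channel grading LUT, and `grade_hdr` is my name for the combined operation):

```python
def tonemap(x):
    # original Reinhard-like curve
    return x / (x + 0.2)

def grade_hdr(hdr_rgb, lut):
    # Max-component trick: tonemap only the maximum channel,
    # grade the uniformly scaled colour, then re-expand
    m = max(hdr_rgb)
    if m <= 0.0:
        return hdr_rgb
    scale = tonemap(m) / m
    ldr_rgb = tuple(c * scale for c in hdr_rgb)  # max channel lands on tonemap(m)
    graded = tuple(lut(c) for c in ldr_rgb)      # existing LDR grading, per channel
    return tuple(c / scale for c in graded)      # re-expand back to HDR
```

Because the same `scale` is divided back out in step 5, an identity LUT reproduces the input HDR colour exactly, which is the property claimed above.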

Of course, the grading is done on a slightly different LDR value because the scale comes only from tonemapping the maximum component, but the resulting colours don't shoot too far when the tonemapper clamps them to 1, it looks pleasant and similar. I wonder if artists will agree :)

Simple inverse tonemapping of LDR back to HDR just didn't work for me because of the singularity near 1.0. Nor can I use a completely different tonemapper from the original, because I'd be feeding the colour grader completely different LDR values.

Is there a tonemapper without a singularity in its inverse form, which utilises close to the full LDR range?

