
Tone Mapping

Started by October 12, 2017 12:57 PM
52 comments, last by MJP 7 years, 3 months ago
On 10/21/2017 at 2:55 PM, FreneticPonE said:

That's nice in theory; unfortunately, the final onscreen brightness of your textures is only somewhat correlated with the albedo in the stored texture. I.e. a bright, near-white texture can end up darker than a fairly dark texture if the white one is in darkness and the dark one is brightly lit in the actual environment (assuming a large HDR range). Thus you'd end up with banding on the near-white texture while the brightly lit dark texture gets no benefit from the perceptual compression.

I'd just stick with sRGB for the final tone mapping and final perception, instead of for the source data.

It's certainly true that there's no direct relationship between diffuse albedo color and the final monitor output intensity, due to the things you mentioned as well as a few other factors (grading, specular lighting, the lighting environment, BRDF, TV response and processing, etc.). But in many cases you're going to have radiance = albedo * irradiance / Pi, and so the albedo is going to *roughly* have a linear effect on the output intensity, assuming that the output radiance ends up getting mapped to the visible range after exposure. And so you're still going to benefit from distributing your precision non-linearly to roughly match the logarithmic response of the human eye.
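To make the precision argument concrete, here's a quick sketch (not from the original post; `linear_to_srgb`/`srgb_to_linear` are my own implementations of the standard IEC 61966-2-1 curves) counting how many of the 256 8-bit codes each format spends on the darkest tenth of the linear range:

```python
import numpy as np

# Standard sRGB transfer functions (IEC 61966-2-1).
def linear_to_srgb(x):
    x = np.asarray(x, dtype=np.float64)
    return np.where(x <= 0.0031308, 12.92 * x, 1.055 * x ** (1.0 / 2.4) - 0.055)

def srgb_to_linear(x):
    x = np.asarray(x, dtype=np.float64)
    return np.where(x <= 0.04045, x / 12.92, ((x + 0.055) / 1.055) ** 2.4)

codes = np.arange(256) / 255.0
srgb_linear_values = srgb_to_linear(codes)   # linear values an sRGB8 texture can represent
unorm_linear_values = codes                  # linear values a UNORM8 texture can represent

# Codes devoted to the darkest 10% of the linear range:
print((srgb_linear_values <= 0.1).sum())     # 90 of 256 codes
print((unorm_linear_values <= 0.1).sum())    # 26 of 256 codes
```

So roughly a third of the sRGB codes sit below linear 0.1, versus about a tenth for UNORM: that's the non-linear distribution of precision described above.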

As an example, I created a simple gradient in Photoshop that was authored in sRGB space:

[Image: Gradient.png]

I then used this as the albedo map for all meshes in Sponza, and applied some standard diffuse lighting + exposure + filmic tone mapping. Here's what it looks like when it's stored with the sRGB transfer function applied, using an 8-bit-per-channel SRGB format that enables automatic conversion to linear:

[Image: Floor_Gradient_SRGB.png]

And now here's what it looks like if I apply the inverse sRGB transfer function before creating the texture (converting it to linear space) and then store the result in an 8-bit-per-channel UNORM texture:

[Image: Floor_Gradient_UNORM.png]

The increased banding is very obvious. It gets even worse if you use BC formats that have even less effective precision:

BC1_UNORM_SRGB:

[Image: Floor_Gradient_BC1_SRGB.png]

BC1_UNORM:

[Image: Floor_Gradient_BC1.png]

For completeness, here's the "high exposure" case where only the high end of the albedo texture ends up being in the visible range:

UNORM_SRGB:

[Image: Floor_Gradient_HighExposure_SRGB.png]

UNORM:

[Image: Floor_Gradient_HighExposure_UNORM.png]
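As a rough numerical counterpart to these screenshots (a sketch of the storage step only, not the full render path; the transfer-function helpers are my own implementations of the standard curves), you can round-trip a dark linear-space ramp through both 8-bit encodings and count the distinct levels that survive:

```python
import numpy as np

# Standard sRGB transfer functions (IEC 61966-2-1).
def linear_to_srgb(x):
    x = np.asarray(x, dtype=np.float64)
    return np.where(x <= 0.0031308, 12.92 * x, 1.055 * x ** (1.0 / 2.4) - 0.055)

def srgb_to_linear(x):
    x = np.asarray(x, dtype=np.float64)
    return np.where(x <= 0.04045, x / 12.92, ((x + 0.055) / 1.055) ** 2.4)

# A dark albedo ramp in linear space (the kind of region that bands first).
gradient = np.linspace(0.0, 0.02, 1000)

# Store as linear UNORM8: quantize the linear values directly.
unorm8 = np.round(gradient * 255) / 255

# Store as sRGB8: encode, quantize, then decode back to linear (as the sampler does).
srgb8 = srgb_to_linear(np.round(linear_to_srgb(gradient) * 255) / 255)

print(len(np.unique(unorm8)))   # 6 distinct levels
print(len(np.unique(srgb8)))    # 40 distinct levels
```

The UNORM path collapses the ramp to a handful of levels — the bands in the screenshots — while the sRGB path keeps far more, and BC formats with less effective precision only widen the gap.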

On 10/22/2017 at 6:24 PM, MJP said:

It's certainly true that there's no direct relationship between diffuse albedo color and the final monitor output intensity, due to the things you mentioned as well as a few other factors [...]

I'm not saying it's not useful now, but you'll get the exact same banding in white areas of textures that are darkly lit, so under the right circumstances it's still bad. I.e. a better format that gives more precision over the whole range is probably needed already. Not to mention something like HDR VR, which should be able to display a massive output range and would show up banding very badly.


I wonder (just a thought) if moving towards a more perceptually uniform color space could help? Something like HSLuv:

http://www.hsluv.org/comparison/

https://programmingdesignsystems.com/color/perceptually-uniform-color-spaces/

http://badfoolprototype.blogspot.com/
5 hours ago, FreneticPonE said:

I'm not saying it's not useful now, but you'll get the exact same banding in white areas of textures that are darkly lit [...]

In the case you're referring to, the upper portion of the texture is effectively compressed to a small portion of the visible range, which makes it unlikely that you'll see banding caused by the texture itself. Here's another comparison:

UNORM_SRGB:

[Image: FloorGradient_LowExposure_SRGB.png]

UNORM:

[Image: FloorGradient_LowExposure_UNORM.png]

If you look at the full-resolution image you can see banding (or at least I can on my monitor), but if you inspect the pixel values you'll see that the values between adjacent bands differ by exactly 1. So we're getting banding from quantizing to 8 bits in the final render target, and not from the texture. To me the two images look identical, which makes sense, since both texture formats should have sufficient precision for this case.
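That claim can be sketched numerically (a simplification of the actual pipeline: plain exposure scaling plus sRGB output encoding stand in for the filmic tone map, and the transfer-function helpers are my own):

```python
import numpy as np

# Standard sRGB transfer functions (IEC 61966-2-1).
def linear_to_srgb(x):
    x = np.asarray(x, dtype=np.float64)
    return np.where(x <= 0.0031308, 12.92 * x, 1.055 * x ** (1.0 / 2.4) - 0.055)

def srgb_to_linear(x):
    x = np.asarray(x, dtype=np.float64)
    return np.where(x <= 0.04045, x / 12.92, ((x + 0.055) / 1.055) ** 2.4)

# Decode every 8-bit sRGB texture code to linear, apply a low exposure so the
# whole texture is squeezed into a slice of the visible range, then quantize
# the result into an 8-bit output render target.
exposure = 0.1
albedo = srgb_to_linear(np.arange(256) / 255.0)
out_codes = np.round(linear_to_srgb(albedo * exposure) * 255).astype(int)

# The output only uses a fraction of its codes, and adjacent bands differ by
# exactly one code: the banding comes from the render target, not the texture.
steps = np.diff(np.unique(out_codes))
print(out_codes.max())   # 89 (only ~90 of 256 output codes used)
print(steps.max())       # 1
```

Every step between bands is exactly one output code, so no 8-bit albedo format could do better here; the texture isn't the bottleneck.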

 

This topic is closed to new replies.
