
Tone Mapping

Started October 12, 2017 12:57 PM
52 comments, last by MJP 7 years, 3 months ago
8 minutes ago, turanszkij said:

For example, how does it know which gamma space was the image authored in (probably it is 2.2 but anyway)

I guess the gamma of the associated "IDXGIOutput" is used (not the gamma used during content creation, which only the content creator knows, since an sRGB image doesn't store it).

🧙

8 hours ago, turanszkij said:

For example, how does it know which gamma space was the image authored in

If you're using this format, you're declaring that it was authored in sRGB, which actually isn't a gamma curve at all ;)
Gamma 2.2 is approximately equal to sRGB, but they're actually quite different down close to black (sRGB is superior). sRGB is the standard for PC and web graphics / displays and the "default" assumption when you open any image file that doesn't contain gamma/colour-space info, so if you're trying to standardise all your artists' displays, sRGB is the obvious choice. 
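To make that concrete, here's a small sketch (function names are mine) of the piecewise sRGB transfer function from IEC 61966-2-1 next to a pure 2.2 gamma:

```python
def srgb_to_linear(c):
    # Piecewise sRGB decode: a linear segment near black, then a 2.4 power.
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(l):
    # Exact inverse of the above.
    return 12.92 * l if l <= 0.0031308 else 1.055 * l ** (1 / 2.4) - 0.055

# Near black, sRGB and a pure 2.2 gamma disagree badly:
print(srgb_to_linear(0.02))   # ~0.00155
print(0.02 ** 2.2)            # ~0.00018 -- nearly an order of magnitude apart

# In the midtones they roughly agree:
print(srgb_to_linear(0.5), 0.5 ** 2.2)   # ~0.214 vs ~0.218
```

The linear toe is exactly what keeps sRGB well-behaved near black, where a pure power curve crushes precision.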

I would probably only recommend storing colours in a different space if you know that you're not going to use the full black-to-white range and have more than 8-bit precision source data. e.g. Crytek have a workflow that stores the per-channel minimum and range in a constant, and they do a MAD after fetching to retrieve the original data. I can't remember, but they might've used a gamma curve too. 
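That kind of range-remapping can be sketched roughly like this (all names and numbers are mine for illustration, not Crytek's actual pipeline):

```python
def encode_range(values):
    # Offline: find the per-channel minimum and range, remap to the full
    # [0,1] range before quantizing to 8 bits, and remember (lo, rng)
    # in a shader constant.
    lo, hi = min(values), max(values)
    rng = hi - lo
    encoded = [round((v - lo) / rng * 255) for v in values]
    return encoded, (lo, rng)

def decode_texel(texel8, lo, rng):
    # At runtime: a single MAD after the fetch restores the original range.
    return texel8 / 255.0 * rng + lo

data = [0.25, 0.3, 0.42]          # source data that never spans full 0..1
encoded, (lo, rng) = encode_range(data)
restored = [decode_texel(t, lo, rng) for t in encoded]
```

The win is that all 256 codes are spent on the range the data actually occupies, instead of wasting most of them on values that never occur.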

 

On a modern PC you probably won't notice the perf difference. The per-pixel pow is probably under a dozen MADs, times the resolution, times overdraw. Maybe 20 or 30 million FLOPs, which a modern GPU can eat up. 
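A quick back-of-envelope version of that estimate, with illustrative numbers of my own choosing (720p, 2x overdraw, a dozen MAD-class ops per pow):

```python
# Rough per-frame cost of a per-pixel pow at the end of the shader.
# All numbers below are illustrative assumptions, not measurements.
width, height = 1280, 720      # render target size
overdraw = 2                   # average times each pixel is shaded
ops_per_pow = 12               # "under a dozen MADs" for a pow approximation

flops = width * height * overdraw * ops_per_pow
print(f"{flops / 1e6:.1f} MFLOPs per frame")   # 22.1 MFLOPs per frame
```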

On the PS3's terrible GPU we had this on our "must do before we can ship" list, and really did notice the change. On some older games we actually used Gamma 2.0 instead of Gamma 2.2 because the math is way cheaper! :D

However, the difference between current low end and current high end is still something like 10x in FLOPs, so if you're optimizing for low end, you'll still be trying to claw back every clock cycle you can, and saving 20 million FLOPs would be very welcome ;)

7 minutes ago, Hodgman said:

If you're using this format, you're declaring that it was authored in sRGB, which actually isn't a gamma curve at all. Gamma 2.2 is approximately equal to sRGB, but they're actually quite different down close to black (sRGB is superior). sRGB is the standard for PC and web graphics / displays, so if you're trying to standardise all your artists' displays to a common standard, sRGB is the obvious choice. 

I would probably only recommend storing colours in a different space if you know that you're not going to use the full black to white range and have more than 8bit precision source data. e.g. Crytek have a workflow that stores the per channel minimum and range in a constant, and they do a MAD after fetching to retrieve the original data. I can't remember, but they might've used a gamma curve too. 

 

On a modern PC you probably won't notice. The per pixel pow is probably under a dozen MADs times the resolution times overdraw. Maybe 20 or 30 million flops, which a modern GPU can eat up. 

On the PS3's terrible GPU we had this on our "must do before we can ship" list, and really did notice the change. On some older games we actually used Gamma 2.0 instead of Gamma 2.2 because the math is way cheaper!

However the difference between current low end and current high end is still like 10x different in FLOPs, so if you're optimizing for low end, you still will be trying to claw back every clock cycle you can, and saving 20 million FLOPs would be very welcome

Oh, that's new to me. In my mind sRGB just meant "some gamma space", so that clears it up. As for pow performance on lower-end specs like last-gen consoles, I unfortunately have no experience with that, nor do I really care for my purposes. :D

sRGB is a standard, not just a random-ish user-controlled pow value: it defines the conversion curve (mostly a pow, plus a linear toe part near zero with a value threshold) as well as the color primaries and white point! For TVs, which are Rec. 709, the primaries are the same but the curve is not, so you need a different conversion at the end again before presentation. Finally, you can add limited-versus-full-range madness on top, and you are lucky if you are outputting the proper signal.
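For illustration, the two standard encoding curves side by side (formulas from IEC 61966-2-1 and ITU-R BT.709 respectively; function names are mine):

```python
def srgb_oetf(l):
    # sRGB encode: linear toe, then a 1/2.4 power (IEC 61966-2-1).
    return 12.92 * l if l <= 0.0031308 else 1.055 * l ** (1 / 2.4) - 0.055

def bt709_oetf(l):
    # BT.709 reference encode: linear toe, then a 0.45 power (ITU-R BT.709).
    return 4.5 * l if l < 0.018 else 1.099 * l ** 0.45 - 0.099

# Same primaries, different curve: encoding 18% grey differs noticeably.
print(bt709_oetf(0.18), srgb_oetf(0.18))   # ~0.409 vs ~0.461
```

Which is exactly why a signal encoded for one curve but displayed through the other comes out with shifted tones, even though the primaries match.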


You should also keep in mind that the sRGB transfer function is non-linear. This means that applying the inverse transfer function to a filtered texture sample will give you a different result than applying that same transfer function to the individual texels before filtering. Using the SRGB formats for sampling will do the former (convert before filtering), while converting in the shader will do the latter unless you forego hardware filtering. Filtering before conversion can give you funky results, such as incorrect intensities after downsampling, or strange colors when interpolating between primaries.
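A tiny sketch of that filtering-order difference, using the standard sRGB decode (function names are mine): averaging a black and a white texel before decoding gives a much darker result than decoding each texel first.

```python
def srgb_to_linear(c):
    # Standard sRGB decode (IEC 61966-2-1).
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

black, white = 0.0, 1.0  # two neighbouring sRGB-encoded texels

# Hardware sRGB sampling: decode each texel, then filter (correct).
correct = (srgb_to_linear(black) + srgb_to_linear(white)) / 2   # 0.5

# Manual decode after a bilinear fetch: filter first, then decode.
wrong = srgb_to_linear((black + white) / 2)                     # ~0.214
```

The "wrong" path is what you get when you sample a UNORM view and call a decode function in the shader: the 50/50 blend happens in the non-linear encoded space, so the result is less than half the intensity it should be.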

There's also a related issue with render targets that use SRGB formats: when using these, the hardware will convert to linear intensities before applying the blend equation, and then apply the transfer function again before storing the blended result in the render target texture.
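A sketch of what an _SRGB render target does versus blending encoded values directly (function names are mine, using the standard sRGB curve):

```python
def srgb_to_linear(c):
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(l):
    return 12.92 * l if l <= 0.0031308 else 1.055 * l ** (1 / 2.4) - 0.055

def blend_srgb_target(src_linear, alpha, dst_stored):
    # What an _SRGB render target does: decode the destination to linear,
    # apply the blend equation there, re-encode the result for storage.
    dst_linear = srgb_to_linear(dst_stored)
    out = src_linear * alpha + dst_linear * (1 - alpha)
    return linear_to_srgb(out)

def blend_nonlinear(src_encoded, alpha, dst_stored):
    # Blending encoded values directly: what you get if you apply the
    # transfer function manually at the end of the pixel shader.
    return src_encoded * alpha + dst_stored * (1 - alpha)

# 50% white over black: the linear blend stores ~0.735, whose linear
# intensity really is 0.5; the non-linear blend stores 0.5, which decodes
# to only ~0.214 -- far too dark.
print(blend_srgb_target(1.0, 0.5, 0.0), blend_nonlinear(1.0, 0.5, 0.0))
```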

10 hours ago, MJP said:

There's also a related issue with render targets that use SRGB formats: when using these, the hardware will convert to linear intensities before applying the blend equation, and then apply the transfer function again before storing the blended result in the render target texture.

This is not an issue! It is the solution for proper blending behavior :)

On 20/10/2017 at 6:54 AM, MJP said:

You should also keep in mind that the sRGB transfer function is non-linear. This means that applying the inverse transfer function to a filtered texture sample will give you different result than applying that same transfer function to the individual texels before filtering. Using the SRGB formats for sampling will do the former (convert before filtering), while converting in the shader will do the latter unless you forego hardware filtering. Filtering before conversion can give you funky results, such as incorrect intensities after downsampling, or strange colors when interpolating between primaries.

There's also a related issue with render targets that use SRGB formats: when using these, the hardware will convert to linear intensities before applying the blend equation, and then apply the transfer function again before storing the blended result in the render target texture.

Oh, that is actually useful then. I haven't cared much about the performance implications of gamma conversion yet, but this is a really useful feature. My other concern with the sRGB texture format was this (from MSDN):

If the driver type is set to D3D_DRIVER_TYPE_HARDWARE, the feature level is set to less than or equal to D3D_FEATURE_LEVEL_9_3, and the pixel format of the render target is set to DXGI_FORMAT_R8G8B8A8_UNORM_SRGB, DXGI_FORMAT_B8G8R8A8_UNORM_SRGB, or DXGI_FORMAT_B8G8R8X8_UNORM_SRGB, the display device performs the blend in standard RGB (sRGB) space and not in linear space. However, if the feature level is set to greater than D3D_FEATURE_LEVEL_9_3, the display device performs the blend in linear space, which is ideal.

I don't intend to support the DirectX 9 feature levels, but nevertheless this suggests to me that proper support for this format is not 100% guaranteed.

On 10/20/2017 at 7:54 AM, MJP said:

There's also a related issue with render targets that use SRGB formats: when using these, the hardware will convert to linear intensities before applying the blend equation, and then apply the transfer function again before storing the blended result in the render target texture.

This is the behavior you want (conversion after blending). Correct me if I am wrong: after lighting, you stick to float-formatted RTVs for HDR, so the only two RTVs that can benefit from an sRGB format are the back buffer and the base color buffer of the GBuffer.

On 10/20/2017 at 7:54 AM, MJP said:

You should also keep in mind that the sRGB transfer function is non-linear. This means that applying the inverse transfer function to a filtered texture sample will give you different result than applying that same transfer function to the individual texels before filtering. Using the SRGB formats for sampling will do the former (convert before filtering), while converting in the shader will do the latter unless you forego hardware filtering. Filtering before conversion can give you funky results, such as incorrect intensities after downsampling, or strange colors when interpolating between primaries.

This makes sRGB formats useless for resources that must be read, unless you use a point sampler (so basically one texel access and no filtering). So the base color buffer of the GBuffer can benefit from an sRGB format.

🧙

On 10/20/2017 at 9:08 AM, galop1n said:

This is not an issue ! It is the solution to a proper blending behavior :)

Indeed, thank you for clarifying that. I was trying to say that the "issue" is with the behavior that you get from manually applying the transfer function at the end of the pixel shader, because in that case you get the same problem that you do with texture filtering (blending/interpolating in a non-linear color space).

This topic is closed to new replies.
