Gamma correction Shader sampling confusion

Started by September 15, 2021 08:25 AM
4 comments, last by Key_C0de 3 years, 2 months ago

Hello. I want to ask a specific question concerning gamma-corrected textures.

When I sample a color/diffuse texture, do I need to apply gamma correction (^gamma) in order to bring it into linear space? Here is where I read about this: LearnOpenGL - Gamma Correction.

(I typically work with DirectX, not OpenGL, but I don't think this differs between the two.)

My confusion lies in what I thought prior to reading this article:

I thought that color textures are stored in linear space on disk, so when we sampled them they were ready for our math calculations. Then, after operating on them, we applied the inverse of the gamma correction that the monitor applies, i.e. (^1/gamma), to account for it.

What is right and what is wrong here? Please clear up my confusion. What should I do with color textures to account for gamma correction? I hope my question is understandable; if not, let me know and I'll explain further.


Color images may be stored in any format on disk, as the color space is up to whatever writes the file. However, the more common case is that color images are saved in a non-linear color space, since this allows tricks such as allocating more dynamic range to certain parts of the image. That said, texture samplers generally assume the data they sample is linear (as-is). This means that if you load a texture containing non-linear data, your texture sample will also be non-linear. You have to tell the sampler that the source data is non-linear, so that when the texture sampling unit fetches a value from the texture, it applies the inverse of the non-linear function to convert the value to linear. I haven't touched DirectX in a while, but the premise is much the same as in OpenGL: you have to flag the texture/sampler so that the associated texture data is treated as non-linear and converted.
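To make the conversion concrete, here is a minimal sketch (not from the thread) of the exact piecewise sRGB transfer functions — the decode step is what the hardware applies on fetch when a texture is flagged as sRGB, and the encode step is its inverse:

```python
def srgb_to_linear(c):
    """Inverse of the sRGB encoding (what an sRGB-flagged sampler applies on fetch)."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """The sRGB encoding (what gets applied before the value reaches the display)."""
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

mid = srgb_to_linear(0.5)
print(round(mid, 4))                   # ~0.214: sRGB 0.5 is much darker in linear terms
print(round(linear_to_srgb(mid), 4))   # round-trips back to 0.5
```

Note that sRGB 0.5 maps to roughly 0.214 in linear space — which is why treating stored sRGB values as if they were linear skews all subsequent math.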


There are several links at the bottom of that page. I read the top one and it was a great introduction as well.

What these tutorials/articles say is that in an image-editing program or 3D modelling software, the artist applies gamma correction to make the art look right on a screen, so the texture/model/image that comes into your application is sRGB. The problem is that the algorithms used for different effects operate in linear space, so if you apply them to the data directly without linearizing it first, the result looks quite bad. You can see several examples of this in the top link.
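A tiny sketch (my own illustration, not from the articles) of why operating on non-linear data goes wrong — averaging black and white directly on sRGB values gives a different, darker result than doing the math in linear space:

```python
def srgb_to_linear(c):
    """Exact piecewise sRGB decode."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Exact piecewise sRGB encode."""
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

black, white = 0.0, 1.0
wrong = (black + white) / 2   # averaged directly on the stored sRGB values
right = linear_to_srgb((srgb_to_linear(black) + srgb_to_linear(white)) / 2)
print(wrong, round(right, 3))  # 0.5 vs ~0.735: the linear-correct blend is brighter
```

The same discrepancy shows up in any blend, mip filter, or lighting computation done on un-linearized values.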

So, as cgrant said, you should instruct your graphics library that the incoming data is in sRGB space (or do it manually in a shader) so that it can translate it to linear space. Then at the end of your pipeline, in a post-process fragment shader for example, you apply gamma correction again so the image displays correctly. If you didn't take this into account when importing, you would effectively perform gamma correction twice.
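The pipeline above can be sketched numerically (my own example, using the gamma-2.2 approximation from the LearnOpenGL article rather than the exact piecewise sRGB curve):

```python
GAMMA = 2.2  # common approximation of the sRGB curve

def decode(c):
    """sRGB-ish -> linear: what the sampler does on fetch when the texture is flagged sRGB."""
    return c ** GAMMA

def encode(c):
    """Linear -> sRGB-ish: applied once at the end of the frame, e.g. in a post-process pass."""
    return c ** (1 / GAMMA)

texel = 0.5              # value stored in the color texture (already sRGB-encoded)
linear = decode(texel)   # linearized value you do lighting math on
out = encode(linear)     # correct pipeline: round-trips to 0.5
double = encode(texel)   # texel never decoded -> gamma correction applied twice
print(round(out, 3), round(double, 3))  # 0.5 vs ~0.73: double-corrected output is washed out
```

The `double` value shows the failure mode described above: skipping the import-time linearization while still gamma-correcting at the end brightens everything incorrectly.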

@cgrant You are right.

So instruct the sampler to linearize the values by itself; according to Converting data for the color space - Win32 apps | Microsoft Docs, we should specify a DXGI_FORMAT_*_SRGB format.

Thanks.


The cure to all ailments: Chapter 24. The Importance of Being Linear | NVIDIA Developer


This topic is closed to new replies.
