
Gamma correction confusion

Started by June 08, 2018 05:44 PM
8 comments, last by MJP 6 years, 8 months ago

So, I stumbled upon the topic of gamma correction.

https://learnopengl.com/Advanced-Lighting/Gamma-Correction

So from what I've been able to gather: (Please correct me if I'm wrong)

  • Old CRT monitors couldn't display color linearly, which is why gamma correction was necessary.
  • Modern LCD/LED monitors don't have this issue anymore but apply gamma correction anyway. (For compatibility reasons? Can this be disabled?)
  • All games have to apply gamma correction? (unsure about that)
  • All textures stored in file formats (.png for example) are essentially stored in sRGB color space (as what we see on the monitor is skewed due to gamma correction. So the pixel information is the same, the perceived colors are just wrong.)
  • This makes textures loaded into the GL_RGB format non-linear, thus all lighting calculations are wrong
  • You have to always use the GL_SRGB format to gamma correct/linearise textures which are in sRGB format

 

Now, I'm kinda confused about how to proceed with applying gamma correction in OpenGL.

First off, how can I check if my monitor is applying gamma correction? I noticed in my monitor settings that my color format is set to "RGB" (can't modify it though). I'm connected to my PC via an HDMI cable. I'm also using the full RGB range (0-255, not the 16 to ~240 range).

 

What I tried to do is apply the gamma correction shader shown in the tutorial above, which essentially looks like this (it's a post-process shader applied at the end of the render pipeline):


vec3 gammaCorrection(vec3 color){
	// linear -> display: apply inverse gamma (approximate sRGB encoding)
	return pow(color, vec3(1.0 / 2.2));
}

void main()
{
	vec3 tex = texture2D(texture_diffuse, vTexcoord).rgb;
	outputF = vec4(gammaCorrection(tex), 1.0);
}

The results look like this:

No gamma correction:

[image: no gamma correction.png]

With gamma correction:

[image: Gammacorrection.png]

 

The colors in the gamma-corrected image look really washed out. (To the point that it's damn ugly. As if someone overlaid a white half-transparent texture. I want the colors to pop.)

Do I have to change the textures from GL_RGB to GL_SRGB in order to gamma correct them, in addition to applying the post-process gamma correction shader? Do I have to do the same thing with all FBOs? Or is this washed-out look the intended behaviour?

The main point of gamma correction is to increase the precision around the blacks. Basically, the actual color intensity that a given value from 0.0 to 1.0 corresponds to is color^2.2, meaning that half the precision of your 8-bit colors is dedicated to the lower 21% of the color intensities your monitor can display, while the other 79% has much lower precision. This is why it was still kept around even after CRTs.

Indeed, all normal image files are stored in sRGB space already. This means that the image can be displayed correctly on the screen without any further processing. However, if you want to do any kind of processing of the image, for example scale the color intensities, do blending, do lighting, etc, you need to convert the colors to linear space, process the image, then convert it back to sRGB again for displaying correctly. For example, a consequence of this is that 255/2 = ~187 in sRGB.

For your game, any texture you read from a file you should load as a GL_SRGB texture. This allows you to keep the full precision of the image data, but automatically convert it to linear space on demand.

You need to take care with the render target texture format as well. If you were to render linear color values to a GL_RGB8 render target, you would completely ruin the precision of the blacks, giving you a huge amount of banding in dark areas. A simple solution is to use GL_RGB16F or even GL_R11F_G11F_B10F, which has a similar precision to sRGB. Another option is to use a GL_SRGB8 texture as your render target together with glEnable(GL_FRAMEBUFFER_SRGB). GL_FRAMEBUFFER_SRGB causes OpenGL to automatically convert any values you write to an sRGB render target to sRGB space. With this enabled, your GL_SRGB8 render target functions exactly like a GL_RGB8 render target, just with much more precision around the blacks (and less towards the whites), meaning that you won't lose precision in between.

You also need to convert the entire image back to sRGB space before you display it in a final postprocessing step, which you seem to be doing. Note that correct sRGB is not the same as pow(x, 1.0 / 2.2), but it is quite close.

 

Regarding the washed out look: This is a natural consequence of the gamma correction at the end. Many people argue that not having gamma correction gives a "deeper" look, but it also makes it impossible to get accurate results. For example, if you add two lights together the result should be twice as bright, but if you try to do lighting without gamma correction the result will look closer to 4x as bright, making tweaking scene colors and special effects very difficult. You should rework your colors and assets to look good with gamma correction on. The earlier you make the switch, the less painful it will be. If you are using HDR, the tone mapping function has a much bigger impact on how washed out the image looks than the gamma correction has.

Also, you can test your monitor's gamma using this website: http://www.lagom.nl/lcd-test/gamma_calibration.php


So textures need to be loaded as GL_SRGB in order to gamma correct them for calculations, meaning we convert them from sRGB to linear space.

Now, what I don't get is why the framebuffer also has to be set to sRGB. The texture values which are read/processed are converted to linear space and stored linearly in the framebuffer, so it should be fine? (As an example, if I read the framebuffer values in a shader for additional post-processing effects, then I already have them in linear space and don't need to convert anything with GL_SRGB.) The only thing we have to do is convert back from linear space to sRGB with (as an example) a post-processing shader at the end of the render stage.

Am i missing something with the framebuffer?

There are 2 purposes to using sRGB as an image/framebuffer format:

1) It's what monitors expect.  So, it can be pumped straight to the monitor without extra work.

2) 8 bits is not enough precision to avoid banding artifacts in dark regions when the color is stored linearly.  If you use a higher bit-depth framebuffer then you can get away with storing linear-space colors.  And sRGB888 is OK as an intermediate format.  But RGB888 filled with linear-space colors will result in a lot of banding.

29 minutes ago, corysama said:

1) It's what monitors expect.  So, it can be pumped straight to the monitor without extra work.

Well, yes, if I output the framebuffer directly to the screen then the sRGB framebuffer will do the conversion from linear to sRGB space for me.

But more often than not (deferred rendering) we will do additional post-processing steps (reading from the albedo buffer, etc...), thus we need the linear space. From my understanding, setting the sRGB flag for the framebuffer would convert the linear colors to sRGB if I access the framebuffer in a post-processing shader, which would then lead to wrong results again (as I would add/multiply sRGB colors).

I found this post here:

https://stackoverflow.com/questions/11386199/when-to-call-glenablegl-framebuffer-srgb

The first answer tells us that we should remain in linear space until the very end, thus not setting sRGB for post-processing purposes.

However, as you said, the precision of the framebuffer needs to be increased in order to avoid losing precision due to the conversion.

So the solution would be:

  1. Set textures to sRGB
  2. Framebuffers should remain in linear space (RGB, not sRGB) but with increased precision (RGB10, FP16, etc...) in order to preserve precision
  3. At the end of the render pipeline, do gamma correction with a shader or a separate sRGB framebuffer to output to the screen in sRGB

Is this correct?

 

In your postprocessing you'll need to set the framebuffer as an sRGB source texture to convert it back to linear when you sample it.  It's the same as your loaded textures.  The 8-bit render target is being used as intermediate storage between fp32 calculations.  The linear->sRGB->linear round-trip is designed to minimize loss during that 8-bit intermediate step.

So, it goes: load 8-bit sRGB texture file, sample 8-bit sRGB texture converting it to a linear fp32 color, do lighting math in fp32, convert to sRGB to store in an 8-bit framebuffer, set the framebuffer as an sRGB8888 source texture, sample the 8-bit texture converting from sRGB to linear fp32, do post-processing math, store to another 8-bit sRGB framebuffer.

You can avoid the linear->sRGB->linear process if you can afford a higher-precision intermediate format.
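That pipeline (sRGB texture in, linear high-precision intermediate, sRGB out) might be set up with GL calls roughly like the sketch below. This is only a sketch, not a complete program: it assumes an existing GL 3.x context, and `width`, `height` and `pixels` are placeholders.

```c
/* 1. Color textures from disk: tell GL the data is sRGB so sampling
      returns linear values. (Normal maps, roughness, etc. stay GL_RGB8,
      since they are not color data.) */
glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8_ALPHA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);

/* 2. Intermediate render targets: keep them linear, but at higher
      precision to avoid banding in the darks. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0,
             GL_RGBA, GL_HALF_FLOAT, NULL);

/* 3. Final output: either apply pow(color, vec3(1.0/2.2)) in the last
      shader, or render to an sRGB target / default framebuffer with: */
glEnable(GL_FRAMEBUFFER_SRGB);
```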


I think I understand now.

26 minutes ago, corysama said:

In your postprocessing you'll need to set the framebuffer as an sRGB source texture to convert it back to linear when you sample it.  It's the same as your loaded textures.  The 8-bit render target is being used as intermediate storage between fp32 calculations.  The linear->sRGB->linear round-trip is designed to minimize loss during that 8-bit intermediate step.

So, it goes: load 8-bit sRGB texture file, sample 8-bit sRGB texture converting it to a linear fp32 color, do lighting math in fp32, convert to sRGB to store in an 8-bit framebuffer, set the framebuffer as an sRGB8888 source texture, sample the 8-bit texture converting from sRGB to linear fp32, do post-processing math, store to another 8-bit sRGB framebuffer.

You can avoid the linear->sRGB->linear process if you can afford a higher-precision intermediate format.

So essentially we store the FP32 (linear) values in an sRGB (non-linear) buffer in order to preserve precision between steps.

Does writing into an sRGB texture convert linear data to sRGB data? The only way this can work is if:

  • writing to an sRGB framebuffer converts linear (written) data to non-linear (sRGB) data
  • reading/sampling the sRGB framebuffer converts the sampled sRGB data to linear data. (That's how the textures also work.)

Is this how sRGB framebuffers/textures behave?

Sorry for all those questions. I've never worked in the sRGB color space and have absolutely no idea how reading/writing from/to sRGB textures actually behaves.

Yep.  You understand it now.  When sampling a framebuffer, it's just another texture.

Having sRGB framebuffers and textures is not just a convenience.  Blending and texture filtering need to be done in linear space to work properly (linearly).  So, under the hood, the blendop has to do newSrgbPixelColor = toSRGB(blend(linearShaderOutputColor, toLinear(currentSrgbPixelColor)))  You can't do that in a pixel shader without "programmable blending" extensions.  Similarly, the texture sampler has to convert all samples to linear before performing filtering.  In theory you could do a bunch of point samples and convert+filter yourself in a pixel shader.  But, you really do not want to.  Especially not for complicated sampling like anisotropic.

I just wanted to drop a link to a great presentation that I read the other day, which I think might be relevant: https://research.activision.com/t5/Publications/HDR-in-Call-of-Duty/ba-p/10744846

It's about implementing support for HDR displays, but it starts out with a good intro to colorimetry/photometry and how it applies to displays. So it might help you understand the concepts behind sRGB a bit better, and also understand how the new HDR standards differ.

This topic is closed to new replies.
