
Gamma correction - sanity check


A swapchain in 8888, whether UNORM or sRGB, still expects sRGB content for display. So you will likely keep it sRGB unless you copy from a separate 8888 UNORM view of your offscreen surface.

Your offscreen surface needs enough precision, so use 8888 sRGB or an HDR format. You will soon find that you need tonemapping, so don't assume LDR all the way.
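
For reference, here is a minimal sketch of picking an sRGB swapchain format in Vulkan. It assumes the standard vkGetPhysicalDeviceSurfaceFormatsKHR query, and the fallback policy is purely illustrative:

```cpp
// Minimal sketch: prefer an sRGB swapchain format so the hardware applies the
// linear -> sRGB encoding when you write to the back buffer.
#include <vector>
#include <vulkan/vulkan.h>

VkSurfaceFormatKHR PickSwapchainFormat(VkPhysicalDevice gpu, VkSurfaceKHR surface)
{
    uint32_t count = 0;
    vkGetPhysicalDeviceSurfaceFormatsKHR(gpu, surface, &count, nullptr);
    std::vector<VkSurfaceFormatKHR> formats(count);
    vkGetPhysicalDeviceSurfaceFormatsKHR(gpu, surface, &count, formats.data());

    for (const VkSurfaceFormatKHR& f : formats)
    {
        if (f.format == VK_FORMAT_B8G8R8A8_SRGB &&
            f.colorSpace == VK_COLOR_SPACE_SRGB_NONLINEAR_KHR)
        {
            return f; // writes through this view are encoded for display automatically
        }
    }
    // With a UNORM fallback you are responsible for writing already-encoded
    // sRGB values yourself (e.g. via a copy from a separate UNORM view).
    return formats[0];
}
```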

 

 

14 minutes ago, GuyWithBeard said:

the only two formats supported for the actual back buffer are VK_FORMAT_B8G8R8A8_UNORM and VK_FORMAT_B8G8R8A8_SRGB.

I don't know Vulkan, but do these map to the similarly named DXGI_FORMATs? It seems strange that you only have BGRA and no RGBA? I needed the latter to use my back buffer as a UAV (though I no longer reuse the back buffer for intermediate calculations).

4 minutes ago, galop1n said:

Your offscreen surface needs enough precision, so use 8888 sRGB or an HDR format. You will soon find that you need tonemapping, so don't assume LDR all the way.

But I don't get the combination of LDR and Forward+. Do you use Forward+ just because you have lots of lights per view but few per tile? If you had lots of lights per pixel, the light contributions would accumulate a lot, making LDR insufficient?

🧙


To be honest I don't quite have a use for Forward+ at the moment. That's why I put it in parentheses. I just happened to stumble upon this fine article and figured it looked interesting: https://www.3dgep.com/forward-plus/

Anyway, I might get to HDR at some point but rendering is only a small part of the development I am doing so I simply haven't gotten to it yet. I am by no means a graphics programmer per se.

Speaking of which, I have now moved over to using sRGB textures and an sRGB back buffer on both Vulkan and DX12, and I would like to do some more sanity checking if you don't mind.

The image output is a lot brighter, as you would expect. However, simply clearing the window to a solid color (i.e. not doing any real drawing) is also a lot brighter, and I am unsure if that is correct. I made my clear color [128, 128, 128] and it seems to come out as [188, 188, 188] (checked by taking a screenshot and looking up the color in GIMP). Since I should now be working in linear space, which is then converted to gamma space before being presented to the screen, I would expect the color to come out the same. Black comes out as black and white comes out as white, so it seems to be all right. However, I thought the whole point of working in linear was that halfway between black and white would come out as exactly that.

Where did I go wrong? (and thanks in advance for explaining these fundamental things)

EDIT: Textures also seem to come out a lot brighter, so there's clearly something wrong. I will have to investigate.

47 minutes ago, GuyWithBeard said:

Black comes out as black and white comes out as white, so it seems to be all right.

That would always be the case. The linear and sRGB-corrected curves have the same value at the lowest (0) and highest (1, or 255) values, but the curves differ in between.

 

I think the easiest thing to verify first is a simple sprite. Create a sprite with an sRGB format, load the sprite as a resource with an sRGB format, and render that sprite (with no other operations) to a back buffer with an sRGB format. You should see the same result as in your (sRGB) image viewer.
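
On the D3D12 side, that test boils down to giving both views an _SRGB format; a rough sketch, assuming the sprite texture, back buffer and descriptor handles already exist elsewhere:

```cpp
#include <d3d12.h>

// Sketch only: the resources and descriptor handles are assumed to be created
// elsewhere; the point is just the matching _SRGB view formats.
void CreateSrgbViews(ID3D12Device* device,
                     ID3D12Resource* spriteTexture, D3D12_CPU_DESCRIPTOR_HANDLE srvHandle,
                     ID3D12Resource* backBuffer,    D3D12_CPU_DESCRIPTOR_HANDLE rtvHandle)
{
    D3D12_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
    srvDesc.Format                  = DXGI_FORMAT_R8G8B8A8_UNORM_SRGB;  // sRGB -> linear on sample
    srvDesc.ViewDimension           = D3D12_SRV_DIMENSION_TEXTURE2D;
    srvDesc.Shader4ComponentMapping = D3D12_DEFAULT_SHADER_4_COMPONENT_MAPPING;
    srvDesc.Texture2D.MipLevels     = 1;
    device->CreateShaderResourceView(spriteTexture, &srvDesc, srvHandle);

    D3D12_RENDER_TARGET_VIEW_DESC rtvDesc = {};
    rtvDesc.Format        = DXGI_FORMAT_B8G8R8A8_UNORM_SRGB;            // linear -> sRGB on write
    rtvDesc.ViewDimension = D3D12_RTV_DIMENSION_TEXTURE2D;
    device->CreateRenderTargetView(backBuffer, &rtvDesc, rtvHandle);
}
```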

🧙

1 minute ago, matt77hias said:

That would always be the case. The linear and sRGB-corrected curves have the same value at the lowest (0) and highest (1, or 255) values, but the curves differ in between.

Yes, I know that. But if the pipeline was configured correctly, shouldn't a clear color of 128 come out as 128?

12 minutes ago, GuyWithBeard said:

Yes, I know that. But if the pipeline was configured correctly, shouldn't a clear color of 128 come out as 128?

Assume you have sRGB textures and an sRGB back buffer:

You write: 128 (linear color space).

The hardware performs the sRGB conversion: (128/255)^(1/2.2) * 255 ≈ 186 (sRGB color space).

If you sample that texel, the hardware converts it back for you: (186/255)^(2.2) * 255 ≈ 128 (linear color space). But if you just look at your screen, you will see the value 186 (sRGB color space).

 

Stated differently: you want to see the middle grey intensity between black and white, but the response curve of your display is non-linear, so you adapt your linear intensity value of 128 to 186, which will appear halfway between black and white on your display.
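
In case it helps to see the numbers, here is that round trip written out with the gamma-2.2 approximation (just standalone math on 8-bit values, not how the hardware literally implements it):

```cpp
#include <cmath>
#include <cstdio>

// Gamma-2.2 approximation of the sRGB transfer functions on 8-bit values.
// The real sRGB curve is piecewise, which is why measured values can differ slightly.
double EncodeSrgbApprox(double linear8) { return std::pow(linear8 / 255.0, 1.0 / 2.2) * 255.0; }
double DecodeSrgbApprox(double srgb8)   { return std::pow(srgb8   / 255.0, 2.2)       * 255.0; }

int main()
{
    const double shown   = EncodeSrgbApprox(128.0); // what the display/screenshot sees: ~186
    const double sampled = DecodeSrgbApprox(shown); // what a shader reading it back gets: ~128
    std::printf("displayed ~ %.0f, sampled back ~ %.0f\n", shown, sampled);
}
```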

🧙


Yes, but in this case I render the color instead of sampling it. Would rendering the color to the screen and taking a screenshot of that equate to your third sampling step? I.e. should I expect 128 or 186? (actually it came out as 188).

3 minutes ago, GuyWithBeard said:

actually it came out as 188

I do not know the exact linear-to-sRGB function. A gamma of 2.2 is a (cheap) approximation.

3 minutes ago, GuyWithBeard said:

I.e. should I expect 128 or 186?

You will see 186 on your display; you will see 128 in your shader calculations (see the edit to my previous post).
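
For reference, the exact encode defined by the sRGB standard (IEC 61966-2-1) is piecewise rather than a pure power, and it accounts for the 188: the sketch below encodes 128/255 to roughly 187.5, which a screenshot rounds to 188, versus the ~186 from the plain 2.2 power.

```cpp
#include <cmath>
#include <cstdio>

// Exact sRGB encode (IEC 61966-2-1): a short linear segment near black,
// then an offset 1/2.4 power for the rest.
double EncodeSrgbExact(double linear) // input in [0, 1]
{
    return (linear <= 0.0031308)
        ? 12.92 * linear
        : 1.055 * std::pow(linear, 1.0 / 2.4) - 0.055;
}

int main()
{
    // Prints ~187.5, i.e. 188 after rounding, rather than the ~186 you get
    // from the gamma-2.2 approximation.
    std::printf("128 encodes to %.1f\n", EncodeSrgbExact(128.0 / 255.0) * 255.0);
}
```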

🧙

I feel like you are dodging my actual question :) Anyway, thanks so far. I will have to read up on this a bit, I feel...

EDIT: For the record, matt77hias edited his previous posts to provide more info. Thanks dude!

OK, I'll try to rephrase:

Let's assume we have a sprite with a UNORM_SRGB format and a single stored color of [0.73, 0.73, 0.73, 1].

We now want to sample from that texture in a pixel shader which writes to the back buffer, which also has a UNORM_SRGB format. If we sample from the texture, the hardware knows that we are dealing with an sRGB texture and performs the conversion from sRGB to linear color space: [0.73, 0.73, 0.73, 1] → [0.5, 0.5, 0.5, 1], so the sampled value we get in the shader is [0.5, 0.5, 0.5, 1]. Next, we write that color to the back buffer. The hardware knows that we are dealing with an sRGB back buffer and performs the conversion from linear to sRGB color space (after blending): [0.5, 0.5, 0.5, 1] → [0.73, 0.73, 0.73, 1].

The display then applies its response curve to that "signal" and we perceive it as halfway between pure black and pure white.

If you write the image (e.g. a screenshot) to disk and open it in some image viewer, the viewer will report that each texel has a raw value of [0.73, 0.73, 0.73, 1].
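
To put concrete numbers on that, here is a small standalone sketch of the same round trip using the exact sRGB decode/encode pair (the hardware does these conversions for you; the code is only there to check the values):

```cpp
#include <cmath>
#include <cstdio>

// Exact sRGB transfer functions (IEC 61966-2-1) on normalized [0, 1] values.
double SrgbToLinear(double s)
{
    return (s <= 0.04045) ? s / 12.92 : std::pow((s + 0.055) / 1.055, 2.4);
}
double LinearToSrgb(double l)
{
    return (l <= 0.0031308) ? 12.92 * l : 1.055 * std::pow(l, 1.0 / 2.4) - 0.055;
}

int main()
{
    const double stored  = 0.73;                  // texel value in the sRGB sprite
    const double sampled = SrgbToLinear(stored);  // ~0.5: what the shader works with
    const double written = LinearToSrgb(sampled); // ~0.73: what lands in the sRGB back buffer
    std::printf("stored %.2f -> sampled %.3f -> written %.2f\n", stored, sampled, written);
}
```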

🧙

