Is it possible to let the hardware perform (temporal) dithering in D3D11 upon presenting the back buffer of the swap chain?
If not, what are good but cheap algorithms for applying dithering (and temporal dithering)?
float Noise(float2 uv) { return frac(sin((uv.x + uv.y) * 199.f) * 123.f); } // very cheap per-pixel hash in [0, 1)

float Dither(float v, float colorCount, float2 pixelPos)
{
    float c = v * colorCount;
    c += frac(c) > Noise(pixelPos) ? 1.f : 0.f; // round up when the fractional part exceeds a random threshold
    c -= frac(c);                               // then truncate to the quantization level
    c /= colorCount;
    return c;
}
...
// pixelPos = pixel coordinate, e.g. input.position.xy from SV_Position
color.r = Dither(color.r, 32.f, pixelPos); // 32 levels -> 5 bit
color.g = Dither(color.g, 64.f, pixelPos); // 64 levels -> 6 bit
color.b = Dither(color.b, 32.f, pixelPos); // 32 levels -> 5 bit
Ah, sorry, an edit of the code wiped out the rest of my post.
I wrote something like:
There are many ways to add dithering; it depends on what your goal is. If you want filmic grain, you can simply add a random value to each pixel with a very low weight (e.g. 4.f/255.f); a small sketch of that follows below. If you want to simulate old-school dithering, which was meant to give the impression of more colors while the actual bit depth was low (e.g. r5g6b5), code like the snippet above could do that.
If you need something special, you'd have to tell us the goal of your dithering.
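For the filmic-grain case, a minimal sketch along those lines (the FilmGrain name and the centering of the noise around zero are my additions; it reuses the Noise helper from the snippet above):
// Film grain: add low-amplitude random noise to the final color.
// pixelPos is assumed to come from SV_Position; 4/255 is the example weight mentioned above.
float3 FilmGrain(float3 color, float2 pixelPos)
{
    float n = Noise(pixelPos) * 2.f - 1.f; // remap [0, 1) to roughly [-1, 1)
    return color + n * (4.f / 255.f);
}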
3 hours ago, Krypt0n said: If you want to simulate old-school dithering, which was meant to give the impression of more colors while the actual bit depth was low (e.g. r5g6b5), code like the snippet above could do that.
If you need something special, you'd have to tell us the goal of your dithering.
I want to reduce/eliminate the banding "patterns" due to quantization (8- or 10-bit unsigned integer channels) that appear when using flat base colors (i.e. no textures). To do so, I want to add some noise so that a floating-point value has a 50/50 chance of being rounded down to the next lower integer or up to the next higher integer. Ideally, I want some temporal variability as well, to smooth the noise over 30 or 60 FPS (which would have the nice effect that our eyes would not perceive any noise at all). The latter also implies that the rendered image should always look slightly different from the previous image, independent of the camera (so even if the camera is fixed).
So for now, it doesn't have to be artistic, just more accurate and resistant to quantization errors. Furthermore, I use 8- or 10-bit unsigned integers for my back buffer channels, so nothing as exotic as 5 or 6 bit.
You can adjust my pseudo code easily, e.g.:
float Noise(float2 uv, float t) { return frac(sin((uv.x + uv.y) * 199.f + t) * 123.f); } // time-dependent noise for temporal dithering

float Dither(float v, float colorCount, float2 pixelPos, float t)
{
    float c = v * colorCount;
    c += frac(c) > Noise(pixelPos, t) ? 1.f : 0.f; // rounds up when the color's fractional part exceeds a per-pixel, per-frame random threshold
    c -= frac(c);
    c /= colorCount;
    return c;
}
...
const float range = 256.f;    // 8 bit -> 256 levels
//const float range = 1024.f; // 10 bit -> 1024 levels
color.r = Dither(color.r, range, pixelPos, frameTime * 123.f);
color.g = Dither(color.g, range, pixelPos, frameTime * 123.f);
color.b = Dither(color.b, range, pixelPos, frameTime * 123.f);
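In case it helps, a rough sketch of how the missing pieces could be wired up (the constant buffer layout, semantic names, and variable names are my assumptions, not something from the post): frameTime comes from a per-frame constant buffer and the pixel position from SV_Position.
// Hypothetical per-frame constants, updated once per frame on the CPU side.
cbuffer PerFrame : register(b0)
{
    float g_frameTime; // e.g. accumulated time in seconds (assumption)
};

float4 PS(float4 svPos : SV_Position, float4 color : COLOR0) : SV_Target
{
    const float range = 256.f;  // 8-bit back buffer -> 256 levels
    float2 pixelPos = svPos.xy; // pixel coordinates for the noise
    float  t = g_frameTime * 123.f;
    color.r = Dither(color.r, range, pixelPos, t);
    color.g = Dither(color.g, range, pixelPos, t);
    color.b = Dither(color.b, range, pixelPos, t);
    return color;
}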
7 minutes ago, Krypt0n said: You can adjust my pseudo code easily, e.g. …
I will give it a try in the next few days. (P.S.: It can be worthwhile to calculate Noise only once per fragment.)
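For reference, a rough sketch of that suggestion (the DitherShared name is my own; it is the Dither variant from above, just taking the already-computed noise value):
// Variant that takes a precomputed noise value so it is evaluated only once per fragment.
float DitherShared(float v, float colorCount, float noise)
{
    float c = v * colorCount;
    c += frac(c) > noise ? 1.f : 0.f; // round up when the fractional part exceeds the threshold
    c -= frac(c);
    return c / colorCount;
}
...
float n = Noise(pixelPos, frameTime * 123.f); // one evaluation, reused for all channels
color.r = DitherShared(color.r, range, n);
color.g = DitherShared(color.g, range, n);
color.b = DitherShared(color.b, range, n);
Note that sharing one value correlates the rounding of the three channels; whether that is visually acceptable depends on the content.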
2 minutes ago, matt77hias said: (P.S.: It can be worthwhile to calculate Noise only once per fragment.)
It was pseudo code to make the trivial idea behind it easy to understand; the whole thing can be summarized as
color.rgb += frac(sin(pixelid*199.f+frametime*123.f)*123.f)*(1.f/range);
resulting in essentially the same output once the back buffer quantizes the value.
On 31/3/2018 at 4:30 PM, Krypt0n said: color.rgb += frac(sin(pixelid*199.f)*123.f)*(1.f/range);
I tried the non-temporal one, but it doesn't look like uniform sampling, since one can clearly observe some patterns.
I tried applying some stranger-looking functions to the u and v values of each pixel, which gave better results, though it feels more like trial and error. Maybe I can use a linear congruential generator with a different seed for each pixel, based on the uv coordinates and time? (A rough sketch of that idea follows below.)
Apparently, HLSL has a noise intrinsic, but it only produces Perlin noise, so not the white noise I am looking for. It is a pity that I cannot tap the GPU's thermal noise the way one can on the CPU.
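For what it's worth, a rough sketch of that LCG idea (the seeding scheme and helper names are my own choices, not from the thread; the LCG constants are the common Numerical Recipes ones):
// One LCG step; applied twice below to mix the per-pixel seed a bit.
uint LCG(uint s) { return 1664525u * s + 1013904223u; }

// White-noise value in [0, 1) from the integer pixel coordinate and a frame counter.
float WhiteNoise(uint2 p, uint frame)
{
    uint seed = p.x ^ (p.y << 16) ^ (frame * 2654435761u); // hypothetical seeding scheme
    seed = LCG(LCG(seed));
    return (seed >> 8) / 16777216.f; // keep the upper 24 bits, map to [0, 1)
}
Since the low-order bits of an LCG are fairly weak, the sketch keeps only the upper bits; a stronger integer hash (e.g. a Wang hash) on the same seed may look even cleaner.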
Another alternative: https://stackoverflow.com/a/10625698/1731200 (incl. shadertoy demos).