
Depth Of Field artifacts

Started by congard, March 31, 2019 06:50 PM
4 comments, last by congard 5 years, 7 months ago

Hello, while testing my DOF implementation I noticed artifacts, as shown in the picture below:

[attached screenshot of the artifacts: fig23-10.jpg]

Shader code:


#version 330

#algdef

in vec2 texCoord;

layout (location = 1) out vec3 dofFragColor;

uniform sampler2D scene; // dof
uniform sampler2D dofBuffer;

float dof_kernel[DOF_KERNEL_SIZE];

const int DOF_LCR_SIZE = DOF_KERNEL_SIZE * 2 - 1; // left-center-right (lllcrrr)
const int DOF_MEAN = DOF_LCR_SIZE / 2;

void makeDofKernel(float sigma) {
	float sum = 0; // For accumulating the kernel values
	for (int x = DOF_MEAN; x < DOF_LCR_SIZE; x++)  {
		dof_kernel[x - DOF_MEAN] = exp(-0.5 * pow((x - DOF_MEAN) / sigma, 2.0));
	    // Accumulate the kernel values
	    sum += dof_kernel[x - DOF_MEAN];
	}

	sum += sum - dof_kernel[0]; // account for the mirrored left half of the kernel; the centre weight is counted only once

	// Normalize the kernel
	for (int x = 0; x < DOF_KERNEL_SIZE; x++) dof_kernel[x] /= sum;
}

void main() {
	makeDofKernel(texture(dofBuffer, texCoord).r);
	dofFragColor = texture(scene, texCoord).rgb * dof_kernel[0];
	
	#ifdef HORIZONTAL
	for(int i = 1; i < DOF_KERNEL_SIZE; i++) {
		dofFragColor +=
			dof_kernel[i] * (
				texture(scene, texCoord + vec2(texOffset.x * i, 0.0)).rgb +
				texture(scene, texCoord - vec2(texOffset.x * i, 0.0)).rgb
			);
	}
	#else
	for(int i = 1; i < DOF_KERNEL_SIZE; i++) {
		dofFragColor +=
			dof_kernel[i] * (
				texture(scene, texCoord + vec2(0.0, texOffset.y * i)).rgb +
				texture(scene, texCoord - vec2(0.0, texOffset.y * i)).rgb
			);
	}
	#endif
}

How can I fix this problem? I would be grateful for help.

Out of curiosity, what DOF_KERNEL_SIZE are you using?

Are you perhaps forgetting to specify the size of a texel? This post seems to use an algorithm similar to yours:

https://stackoverflow.com/questions/52130594/depth-of-field-artefacts


uniform float max_sigma = 12.0;
uniform float min_sigma = 0.0001;

...

    vec2 texOffset = 1.0 / textureSize(scene, 0); // size of a single texel in UV space
    vec4 tmp = texture(dofBuffer, texCoord);
    float fdof = tmp.a;
    makeDofKernel(max_sigma * fdof + min_sigma);



Yes, the code was incomplete. I missed a couple of lines:


#define DOF_KERNEL_SIZE 4
...
vec2 texOffset = 1.0 / textureSize(scene, 0);

Coincidentally, that Stack Overflow question is mine; back then the problem was a bit different. Now the blur is smooth, but at sharp depth discontinuities you can still notice artifacts like these.

This is what I'm trying to achieve:

[attached reference photos: IMG_20190401_084443, IMG_20190401_084518]

Nvidia GPU Gems chapter 23 says:

The most objectionable artifact is caused by depth discontinuities in the depth image. Because only a single z value is read to determine how much to blur the image (that is, which mip level to choose), a foreground (blurry) object won't blur out over midground (sharp) objects. One solution is to read multiple depth values, although unless you can read a variable number of depth values, you will have a halo effect of decreasing blur of fixed radius, which is nearly as objectionable as the depth discontinuity itself.

http://developer.download.nvidia.com/books/HTML/gpugems/gpugems_ch23.html

But I have no idea how to implement this correctly.
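
If I understand the chapter correctly, the suggestion boils down to dilating the per-pixel blur amount before building the kernel, so that a blurry foreground object is allowed to spread over the sharp pixels behind it. Maybe something along these lines (dilatedSigma is just a name I made up, the 3-texel window is an arbitrary choice, and dofBuffer.r is assumed to hold the sigma, as in my shader above):


// Illustration only: take the largest blur amount found in a small window
// around the current pixel, so foreground blur can bleed across the silhouette.
float dilatedSigma(vec2 uv, vec2 texelSize) {
	float sigma = texture(dofBuffer, uv).r;
	for (int x = -3; x <= 3; x++) {
		for (int y = -3; y <= 3; y++) {
			sigma = max(sigma, texture(dofBuffer, uv + vec2(x, y) * texelSize).r);
		}
	}
	return sigma;
}

// in main(), after the texOffset line:
//     makeDofKernel(dilatedSigma(texCoord, texOffset));

But I'm not sure this is what the chapter means, and an indiscriminate max like this would also let a blurred background spread over a sharp foreground, which is probably wrong; it seems a proper version should only take blur from samples that lie in front of the plane in focus, and for that I'd need per-pixel depth rather than just a blur amount.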

This topic is still relevant: I still haven't been able to get rid of these artifacts. The code has been updated a little, but the essence is the same (the focus is on the sky):


#version 330

#algdef

in vec2 texCoord;

layout (location = 0) out vec3 fragColor;

uniform sampler2D image;
uniform sampler2D positionMap;

uniform struct CinematicDOF {
	float p; // plane in focus
	float a; // aperture
	float i; // image distance
} cinematicDOF;

float dof_kernel[KERNEL_RADIUS];

const int DOF_LCR_SIZE = KERNEL_RADIUS * 2 - 1; // left-center-right (lllcrrr)
const int DOF_MEAN = DOF_LCR_SIZE / 2;

void makeDofKernel(float sigma) {
	float sum = 0; // For accumulating the kernel values
	for (int x = DOF_MEAN; x < DOF_LCR_SIZE; x++)  {
		dof_kernel[x - DOF_MEAN] = exp(-0.5 * pow((x - DOF_MEAN) / sigma, 2.0));
	    // Accumulate the kernel values
	    sum += dof_kernel[x - DOF_MEAN];
	}

	sum += sum - dof_kernel[0]; // account for the mirrored left half of the kernel; the centre weight is counted only once

	// Normalize the kernel
	for (int x = 0; x < KERNEL_RADIUS; x++) dof_kernel[x] /= sum;
}

void main() {
	vec2 texOffset = 1.0 / textureSize(image, 0); // gets size of single texel
	float p = -cinematicDOF.p;
	float f = (p + cinematicDOF.i) / (p * cinematicDOF.i);
	float d = -texture(positionMap, texCoord).z; // view-space distance to this fragment
	float sigma = abs((cinematicDOF.a * f * (p - d)) / (d * (p - f))); // per-fragment blur amount (sigma of the Gaussian)
	makeDofKernel(sigma);
  
	fragColor = texture(image, texCoord).rgb * dof_kernel[0];
	
	#ifdef HORIZONTAL
		for(int i = 1; i < KERNEL_RADIUS; i++) {
			fragColor +=
				dof_kernel[i] * (
					texture(image, texCoord + vec2(texOffset.x * i, 0.0)).rgb +
					texture(image, texCoord - vec2(texOffset.x * i, 0.0)).rgb
				);
		}
	#else
		for(int i = 1; i < KERNEL_RADIUS; i++) {
			fragColor +=
				dof_kernel[i] * (
					texture(image, texCoord + vec2(0.0, texOffset.y * i)).rgb +
					texture(image, texCoord - vec2(0.0, texOffset.y * i)).rgb
				);
		}
	#endif
}

And here is the result:

[attached screenshots of the result]

It's really ugly... and I don't know how to deal with it.
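
Since this version of the shader has positionMap, the dilation idea from above could probably be restricted to near-field samples: only let a sample enlarge the blur if it lies in front of the plane in focus. A rough, untested sketch (sigmaAt and nearFieldSigma are made-up names, the 3-texel window is arbitrary, and the functions would go above main()):


// Rough sketch only: reuse the sigma formula from main(), scan a small window,
// and let a sample that lies in front of the plane in focus enlarge the blur,
// so the blurry foreground silhouette can spread outward.
float sigmaAt(float d, float p, float f, float a) {
	return abs((a * f * (p - d)) / (d * (p - f)));
}

float nearFieldSigma(vec2 uv, vec2 texelSize, float p, float f, float a) {
	float sigma = sigmaAt(-texture(positionMap, uv).z, p, f, a);
	for (int x = -3; x <= 3; x++) {
		for (int y = -3; y <= 3; y++) {
			float ds = -texture(positionMap, uv + vec2(x, y) * texelSize).z;
			if (ds < p) // the sample is nearer than the plane in focus
				sigma = max(sigma, sigmaAt(ds, p, f, a));
		}
	}
	return sigma;
}

// in main(), instead of makeDofKernel(sigma):
//     makeDofKernel(nearFieldSigma(texCoord, texOffset, p, f, cinematicDOF.a));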

EDIT: here you can see the artifact even more clearly:

[attached screenshot showing the artifact more clearly]

This topic is closed to new replies.
