
Bloom

Started by November 06, 2017 08:20 PM
13 comments, last by maxest 7 years, 2 months ago

How is Bloom implemented? Is it just like:

  1. Down sample the HDR image (1/16, 1/8?)
  2. Create a "bloom texture" where each texel := hdr * saturate(luminance - threshold * average_luminance)
  3. Apply Gaussian to the bloom texture in horizontal and vertical direction (separable)
  4. Add to the (non-down sampled) HDR color the sample of the bloom texture (scaled by some other parameter?)
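The four steps above might look roughly like this NumPy sketch (function names, constants, and the nearest-neighbour resampling are illustrative, not from any particular engine):

```python
import numpy as np

def luminance(rgb):
    # Rec. 709 luma weights
    return rgb @ np.array([0.2126, 0.7152, 0.0722])

def threshold_bloom(hdr, threshold=1.0, strength=0.5, sigma=2.0):
    # 1. Downsample by 4 on each axis (1/16 of the pixels)
    small = hdr[::4, ::4]
    # 2. Bloom texture: hdr * saturate(luminance - threshold * average_luminance)
    lum = luminance(small)
    weight = np.clip(lum - threshold * lum.mean(), 0.0, 1.0)  # saturate
    bloom = small * weight[..., None]
    # 3. Separable Gaussian: blur horizontally, then vertically
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    for axis in (0, 1):
        bloom = np.apply_along_axis(
            lambda row: np.convolve(row, kernel, mode="same"), axis, bloom)
    # 4. Upsample (nearest-neighbour, for brevity) and add, scaled by `strength`
    full = np.kron(bloom, np.ones((4, 4, 1)))
    return hdr + strength * full
```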

🧙

That's more or less how the simplest implementation works - just downsample, blur a few times with different weights, and then add all of them to the final image. But it's not at all accurate and the threshold stage leads to some weird problems. I don't find the results particularly appealing, especially since doing it this way leads to resolution dependence, making bloom at higher resolutions like 4k pretty much unnoticeable.

Most of the more recent engines don't perform the threshold stage and instead perform a more energy conserving blend between the regular HDR image and the bloom image, something like lerp(HDR, Bloom, Weight). Blending it this way works because very bright pixels in the bloom image will still be quite visible with low blend weights when they bleed over dark pixels. I usually go for something like 0.05-0.15. Too low and bloom only shows up for very, very bright sources, too high and the image becomes muddy and/or blurry. You can probably do some magic effects by controlling that value over ranges of the image (lens dirt?).
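As a sketch, the blend is just a standard lerp, and with HDR ranges a very bright bloom value punches through even a small weight:

```python
def bloom_blend(hdr, bloom, weight=0.1):
    # lerp(HDR, Bloom, Weight)
    return hdr * (1.0 - weight) + bloom * weight

# A very bright bloom value still shows through a low weight:
print(bloom_blend(2.0, 64.0, 0.05))  # ~5.1
# Bloom close to the scene value barely changes the pixel:
print(bloom_blend(2.0, 3.0, 0.05))   # ~2.05
```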

Anyway, there are various different ways to do the actual blooming effect. Some implementations simply perform a 2 pass blur with special weights and then composite that. Others just do downsamples then blur those and add them together at the end.

My favorite way to do the blooming/blurring is similar to the approach that Bungie used for Halo 3. It provides really nice natural bloom sizes compared to the others since it essentially makes use of multi-pass blurs. The general idea is to perform a gaussian blur between the upsampling/downsampling, instead of doing all of the blurs first and adding later. It goes something like:

  1. Downsample the HDR image to a fixed size. (Using fixed size to avoid resolution dependent bloom radii. I usually use 512 for width, and 512 * aspectRatio for height).
  2. Downsample the result a few more times using a wide gaussian filter. In my usual implementation, I downsample 2 more times: one to 128, and the other at 32 (again both of those have heights multiplied by the aspect ratio).
  3. For each of those results (after all of them are downsampled), upsample and add to the level above it using a 5x5 gaussian blur filter (Ex: blur 32 image, upsample to 128 image, blur the 128 image, upsample to 512 image, blur 512 image).
  4. Lerp the 512 texture with the HDR image using some small weight of your choice to decide how much bloom should be in the image.
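A rough NumPy sketch of that chain, assuming nearest-neighbour resizes and a tiny 5-tap separable blur as stand-ins for the proper filtered samples (names and sizes are illustrative):

```python
import numpy as np

def blur(img, kernel=np.array([1., 4., 6., 4., 1.]) / 16.0):
    # Separable 5-tap Gaussian: horizontal pass, then vertical pass
    for axis in (0, 1):
        img = np.apply_along_axis(
            lambda r: np.convolve(r, kernel, mode="same"), axis, img)
    return img

def resize_nearest(img, h, w):
    # Nearest-neighbour resize; a real implementation would use bilinear taps
    ys = np.arange(h) * img.shape[0] // h
    xs = np.arange(w) * img.shape[1] // w
    return img[ys][:, xs]

def styves_bloom(hdr, weight=0.1, sizes=(512, 128, 32)):
    aspect = hdr.shape[0] / hdr.shape[1]
    # 1-2. Fixed-size downsample chain (heights scaled by aspect, as in the post)
    levels = [resize_nearest(hdr, int(s * aspect), s) for s in sizes]
    # 3. From the smallest level up: blur, upsample, add into the level above
    acc = levels[-1]
    for lvl in reversed(levels[:-1]):
        acc = lvl + resize_nearest(blur(acc), *lvl.shape[:2])
    bloom = blur(acc)
    # 4. Lerp the widest bloom level with the full-res HDR image
    bloom_full = resize_nearest(bloom, *hdr.shape[:2])
    return hdr * (1.0 - weight) + bloom_full * weight
```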

The trick to this is to make sure your downsample/upsample shaders are optimized. My initial blur pass uses 9 texture samples with the help of some linear filtering tricks. The other blur passes use 5 texture samples, with similar linear filtering tricks. Hardly anything crazy, since most downsample passes + bloom blurs require many samples anyway.
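The linear filtering trick referred to here is the usual one: place each sample between two adjacent texels so the bilinear hardware averages them in a single fetch, collapsing an N-tap Gaussian into roughly N/2 + 1 fetches. A sketch of how the merged offsets and weights fall out, assuming a normalized odd-length 1D kernel:

```python
def linear_taps(weights):
    # Collapse a normalized odd-length 1D kernel into bilinear taps:
    # each pair of adjacent off-centre texels becomes a single fetch
    # placed between them, weighted so the hardware lerp reproduces both.
    c = len(weights) // 2
    taps = [(0.0, weights[c])]                      # centre texel
    for i in range(c + 1, len(weights), 2):
        w1 = weights[i]
        w2 = weights[i + 1] if i + 1 < len(weights) else 0.0
        w = w1 + w2
        off = (w1 * (i - c) + w2 * (i + 1 - c)) / w  # weighted midpoint
        taps.append((off, w))                        # one side...
        taps.append((-off, w))                       # ...mirrored
    return taps

# 9-tap binomial kernel -> 5 fetches
kernel = [x / 256.0 for x in (1, 8, 28, 56, 70, 56, 28, 8, 1)]
print(linear_taps(kernel))
```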

12 hours ago, Styves said:
  • Downsample the HDR image to a fixed size. (Using fixed size to avoid resolution dependent bloom radii. I usually use 512 for width, and 512 * aspectRatio for height).
  • Downsample the result a few more times using a wide gaussian filter. In my usual implementation, I downsample 2 more times: one to 128, and the other at 32 (again both of those have heights multiplied by the aspect ratio).
  • For each of those results (after all of them are downsampled), upsample and add to the level above it using a 5x5 gaussian blur filter (Ex: blur 32 image, upsample to 128 image, blur the 128 image, upsample to 512 image, blur 512 image).
  • Lerp the 512 texture with the HDR image using some small weight of your choice to decide how much bloom should be in the image.

Doesn't this neglect the luminance (my HDR image contains the non-perceptual radiance values)?

And what about AA? Doesn't blurring the whole image in a similar fashion ruin the sharpness of dark edges?

🧙

I would recommend looking here:

http://www.iryoku.com/next-generation-post-processing-in-call-of-duty-advanced-warfare

The bloom proposed here is very simple to implement and works great. I implemented it and checked. Keep in mind though that there is a mistake in the slides which I pointed out in the comments.

Basically, to avoid getting bloom done badly you can't undersample or you will end up with nasty aliasing/ringing. So you take the original image, downsample it once (from 1920x1080 to 960x540) to get the second layer, then again, and again, up to n layers. After you have generated, say, 6 layers, you combine them by upscaling the n'th layer to the size of the (n-1)'th layer and summing the two. Then you do the same with the new (n-1)'th layer and the (n-2)'th layer, and so on up to the full resolution. This is quite fast, as the downsample and upsample filters need very small kernels, and since you're going down to a very small layer you eventually get a very broad and stable bloom.
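Stripped of the actual filters, the layered combine might be sketched like this (NumPy; 2x2 averaging and nearest-neighbour upsampling stand in for the talk's wider downsample and tent-filter upsample, and the raw sum is left unnormalized for clarity):

```python
import numpy as np

def downsample(img):
    # Average 2x2 blocks: a crude stand-in for the talk's filtered downsample
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    i = img[:h, :w]
    return 0.25 * (i[0::2, 0::2] + i[1::2, 0::2] + i[0::2, 1::2] + i[1::2, 1::2])

def upsample_to(img, h, w):
    # Nearest-neighbour upsample; the talk uses a small tent filter instead
    ys = np.arange(h) * img.shape[0] // h
    xs = np.arange(w) * img.shape[1] // w
    return img[ys][:, xs]

def progressive_bloom(hdr, n_layers=6):
    # Build the chain: full res, 1/2, 1/4, ... down to layer n
    chain = [hdr]
    for _ in range(n_layers - 1):
        chain.append(downsample(chain[-1]))
    # Combine from the smallest layer back up, summing at each step
    acc = chain[-1]
    for layer in reversed(chain[:-1]):
        acc = layer + upsample_to(acc, *layer.shape[:2])
    return acc
```

A real implementation would weight or normalize the layers when summing; this only shows the structure of the down-then-up chain.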

7 hours ago, matt77hias said:

Doesn't this neglect the luminance (my HDR image contains the non-perceptual radiance values)?

And what about AA? Doesn't blurring the whole image in a similar fashion ruin the sharpness of dark edges?

I'm not sure I understand the questions. :/

13 minutes ago, Styves said:

I'm not sure I understand the questions. :/

I think my confusion is due to the missing "brightness" filtering step:


19 hours ago, Styves said:
  • Downsample the HDR image to a fixed size. (Using fixed size to avoid resolution dependent bloom radii. I usually use 512 for width, and 512 * aspectRatio for height).
  • Downsample the result a few more times using a wide gaussian filter. In my usual implementation, I downsample 2 more times: one to 128, and the other at 32 (again both of those have heights multiplied by the aspect ratio).
  • For each of those results (after all of them are downsampled), upsample and add to the level above it using a 5x5 gaussian blur filter (Ex: blur 32 image, upsample to 128 image, blur the 128 image, upsample to 512 image, blur 512 image).
  • Lerp the 512 texture with the HDR image using some small weight of your choice to decide how much bloom should be in the image.

Shouldn't the first step above downsample only the bright spots of the HDR image instead of the complete HDR image, including the low-luminance areas?

🧙


It's old practice to do that, but it's a legacy pass from before proper HDR rendering. It isn't necessary when you work with proper HDR ranges. If you have natural HDR brightness ranges, then only bright areas will bloom when blending it in at really low values. It's energy conserving, since light is never being "added", so it's more correct/realistic (for an image effect, anyway).

For example, if your HDR pixel has a value of 2.0, and your bloom has a value of 64.0, and you blend at a weight of 0.05, the pixel will be 5.1.

Another example: if your HDR pixel has a value of 2.0, and bloom has a value of 3.0, with the same weight (0.05), then your pixel will have a value of 2.05, which in practice is hardly noticeable.

Final example: If your HDR pixel has a value of 2.0, and bloom has a value of 2.0, with the same weight (0.05), then your pixel will have a value of 2.0. Exactly the same as it was before.

Keep in mind that a bloom pixel that is darker than an HDR pixel generally doesn't happen, so cases like HDR being 2.0 and bloom being 1.0 don't really happen. The reason is that the gaussian blurs you apply to blur the image will favor bright pixels when weighing the samples, so bright pixels will usually prevail (since we're using HDR ranges).

There's no need to isolate bright pixels when you can leverage the contrast/ratio between light and dark. ;)

Be sure to combine them before applying exposure compensation.

PS: CryEngine and a few other game engines perform bloom this way.

2 minutes ago, Styves said:

There's no need to isolate bright pixels when you can leverage the contrast/ratio between light and dark. ;)

But if you do not isolate, don't you blur sharp edges everywhere?

🧙

If you want to be technical, yes. But you'll never see it because of the contrast between bright/dark pixels. How do you think bloom happens on real lenses?

I think Styves is right. In Call of Duty they take 4% of the HDR scene's colors and use that to apply bloom to. No need for thresholding, but it's not a problem to apply it either.

This topic is closed to new replies.
