In my project I need to render a very wide dynamic range of light intensities, covering the light of both distant and nearby stars. For example, the intensity ratio between the Sun at 1 AU and the dimmest star visible to the naked eye is about 2.15e13 (apparent magnitude -26.832 vs. 6.5). This far exceeds the range and precision of 16-bit float.
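For concreteness, here is the arithmetic behind that ratio and how it compares with half-float limits (a quick Python check; the magnitudes are the ones quoted above and the float16 limits are the standard IEEE 754 values):

```python
# Apparent magnitudes quoted above: the Sun at 1 AU and the dimmest naked-eye star.
m_sun, m_dim = -26.832, 6.5

# Pogson's relation: every 5 magnitudes is a factor of 100 in intensity.
ratio = 10 ** ((m_dim - m_sun) / 2.5)
print(f"intensity ratio:      {ratio:.3e}")                 # ~2.152e+13

# IEEE 754 binary16 (half-float) limits.
f16_max  = 65504.0        # largest finite value
f16_tiny = 2.0 ** -14     # smallest normal,   ~6.10e-05
f16_sub  = 2.0 ** -24     # smallest denormal, ~5.96e-08

print(f"normal dynamic range: {f16_max / f16_tiny:.3e}")    # ~1.07e+09
print(f"counting denormals:   {f16_max / f16_sub:.3e}")     # ~1.10e+12
# Even counting denormals (which only keep a few mantissa bits), half-float
# falls one to four orders of magnitude short of the ~2e13 needed here.
```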
There are a few options that I could use to "make it work":
- Increase the brightness of faraway stars to reduce the dynamic range. The problem with this is achieving a smooth, monotonic transition from overbright to normal brightness when moving toward or away from a star (see the first sketch after this list). It's also less realistic in various ways (e.g. ambient starlight at night would be too bright).
- Scale the overall light intensity of the scene so it looks good in dim or bright conditions (but not both at once). This scales the light power before any pixels are rendered, so it isn't sensitive to framebuffer precision. However, it's hard to do automatically other than with traditional auto-exposure, and that requires the dynamic range to fit in the texture format, so it can't work here (dim stars are pure black in 16 bits). I would need a separate system for controlling the exposure range, e.g. based on distance to the nearest star (see the second sketch after this list).
- Or just use 32-bit float RGB textures for the framebuffer and all post-processing. With this, everything just works without any funny business (including traditional auto-exposure).
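For the first option, the kind of remap I have in mind is a monotonic compression that passes bright values through unchanged and only boosts the dim end, so nothing pops as a star crosses the threshold. A rough sketch; the pivot and exponent are made-up illustrative values, not something I've tuned:

```python
def remap_star_intensity(intensity, pivot=1e-4, gamma=0.5):
    """Boost only the dim end of the range; leave bright values untouched.

    Illustrative values: below `pivot` the curve follows a power law with
    exponent `gamma` < 1, which lifts faint stars into float16 range; above
    `pivot` it is the identity.  The two pieces meet at `pivot` and both are
    strictly increasing, so the result stays continuous and monotonic as you
    fly toward or away from a star -- but the dim end is now physically too
    bright, which is the realism cost mentioned above.
    """
    if intensity >= pivot:
        return intensity
    return pivot * (intensity / pivot) ** gamma
```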
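For the second option, the "separate system" could be as simple as computing a single exposure multiplier on the CPU each frame from the distance to the nearest star and folding it into the light intensities before anything is shaded. A minimal sketch, assuming exposure is driven purely by the nearest star's irradiance (all names and constants are placeholders):

```python
import math

def pre_exposure(star_luminosity_w, distance_m, target=1.0):
    """Exposure multiplier applied to all lights before shading.

    The inverse-square irradiance from the nearest star is mapped to `target`
    in framebuffer units; everything else scales along with it.  This is an
    illustrative assumption, not an established technique -- in practice the
    value would need smoothing over time and a sensible blend when two stars
    are at comparable distances.
    """
    irradiance = star_luminosity_w / (4.0 * math.pi * distance_m ** 2)
    return target / irradiance

# Example: the Sun (3.828e26 W) seen from 1 AU and from 10 light-years away.
AU, LY = 1.496e11, 9.461e15
print(pre_exposure(3.828e26, AU))       # ~7e-4  (near the star: scale down)
print(pre_exposure(3.828e26, 10 * LY))  # ~3e8   (deep space: scale up)
```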
So, I want to get a sense of whether 32-bit float is practical for a game in the near future. I understand the trade-offs (more memory and bandwidth), but I don't have a good idea of whether that price is acceptable to pay these days. I think I could afford the main framebuffer at 32 bits, but I'm not sure about the extensive post-processing for bloom, auto-exposure, and tone mapping (and their intermediate textures). I am targeting mid-range to high-end PC hardware (minimum 4 GB of GPU memory) made in the 5 years prior to the release date, which would not be for at least 2 years.
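To put the memory part of that trade-off in rough numbers, here is a back-of-envelope comparison at 4K; the assumption that the post chain roughly amounts to one full-resolution mip chain is mine and will vary a lot between engines:

```python
def mib(width, height, bytes_per_pixel):
    """Texture size in MiB."""
    return width * height * bytes_per_pixel / 2**20

w, h = 3840, 2160
for name, bpp in [("RGBA16F", 8), ("RGBA32F", 16)]:
    fb = mib(w, h, bpp)
    # A full downsample chain (1/4 + 1/16 + ...) converges to ~1/3 of the base,
    # used here as a crude stand-in for bloom / auto-exposure intermediates.
    chain = fb / 3.0
    print(f"{name}: framebuffer {fb:6.1f} MiB + mip chain ~{chain:5.1f} MiB "
          f"= ~{fb + chain:6.1f} MiB")
```

That works out to roughly 85 MiB for RGBA16F vs. 170 MiB for RGBA32F, a few percent of a 4 GB card either way; the bandwidth spent reading and writing those textures every frame during the post chain is probably the more important number.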