So when using floating point to represent luminosity, one would presumably do so to push the notion of maximum brightness beyond the "1" that is the ceiling in fixed point math. Since there is no real theoretical limit (only the technical limit imposed by the number of bits used), the question emerges of how those values should correlate with real-world numbers. In the context of an HDR framebuffer, and small float formats in particular, one would ideally want a distribution that leverages the characteristics of display technology and of human vision. Intuitively, the "1" point should represent the absolute white point of a display, but these of course vary to a high degree, and I doubt that this would offer anything close to an efficient precision distribution over the luminosity values that humans are able to discern. I guess the question boils down to: "How bright should the 1 value be in an R11G11B10 framebuffer?"
Floating point luminosity
Why not tweak it a bit and see what value looks nicer in your specific case?
Self-taught game development
When doing HDR rendering, the capability of the display doesn't matter until after the tonemapping step, so you only care about the range of brightness values that can exist in your virtual scene.
10/11 bit floats have a range of 0 to about 65k, and 16bit floats have a range of about -65k to +65k (and much improved precision).
10bit float is bad for linear lighting values though. Really bad. Worse than 8bit sRGB kind of bad. You need to use a gamma curve of around 2.0 to 3.0 just to get similar precision to 8bit sRGB; this in turn reduces your storable range to about 0 to 255 for gamma 2, or about 0 to 40 for gamma 3, instead of the lovely 65k mentioned above.
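Something along these lines (plain C++ rather than shader code, with made-up helper names) shows where the gamma 2 encode/decode would sit; note that sqrt(65504) is roughly 255, which is where the "about 0 to 255" usable range comes from:

```cpp
// Rough sketch of the gamma-2.0 trick described above (helper names are invented).
// Encoding with sqrt() before the write spreads precision toward dark values in the
// small float format; squaring on read restores linear light.
#include <cmath>

struct float3 { float r, g, b; };

// End of the lighting shader, just before writing to the R11G11B10 target.
float3 encodeGamma2(float3 linearColor)
{
    return { std::sqrt(linearColor.r),
             std::sqrt(linearColor.g),
             std::sqrt(linearColor.b) };
}

// When the buffer is read again (e.g. at the start of the tonemapper).
float3 decodeGamma2(float3 storedColor)
{
    return { storedColor.r * storedColor.r,
             storedColor.g * storedColor.g,
             storedColor.b * storedColor.b };
}
```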
If you use an auto-exposure system, you should apply that exposure value as part of your lighting shaders -- before writing to the 11/11/10 buffer (and well before tonemapping).
So if you go with gamma 2 encoding and your scene has a 1000x ratio between a bright surface and a dim surface, then you should use a pre-exposure multiplier so that your brightest objects (regardless of actual luminance value) come out around 255 before gamma encoding, and your dim ones around 0.25.
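As a rough worked example of that (the luminance values here are invented, purely to show the arithmetic):

```cpp
// Sketch: choosing a pre-exposure multiplier for the gamma-2 encoded 11/11/10 path.
// Example scene (invented numbers): brightest surface 50,000 units, dimmest of
// interest 50 units -- the 1000x ratio mentioned above.
#include <cstdio>

int main()
{
    const float brightest = 50000.0f;
    const float dimmest   = 50.0f;

    // Pick exposure so the brightest surface lands near 255 before gamma encoding,
    // i.e. near the top of the usable gamma-2 range.
    const float exposure = 255.0f / brightest;

    std::printf("exposure: %g\n", exposure);                           // 0.0051
    std::printf("brightest, pre-exposed: %g\n", brightest * exposure); // 255
    std::printf("dimmest, pre-exposed:   %g\n", dimmest * exposure);   // 0.255
    return 0;
}
```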
After tonemapping, you'll probably be going to 8bit sRGB for PC, 8bit rec709 for HDTV (pretty much the same thing), or 10bit rec2020 for UHDTV. For the first two, white/1.0 probably maps to a display brightness somewhere around 80 to 300 cd/m2. You shouldn't really scale this though, because 8 bits is barely enough as it is to avoid banding... For the latter (UHD), the 10bit format directly encodes absolute cd/m2 values, giving you direct control over display brightness.
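For reference, the absolute encoding used for 10-bit rec2020/HDR output is the SMPTE ST 2084 "PQ" curve; a sketch of the encode looks like this (the function name is just for illustration, the constants are the published PQ values):

```cpp
// Sketch of the SMPTE ST 2084 (PQ) encode used for 10-bit HDR output: maps absolute
// luminance in cd/m2 to a [0,1] signal, which is then quantised to 10 bits.
#include <algorithm>
#include <cmath>

float encodePQ(float luminance_cd_m2)
{
    const float m1 = 2610.0f / 16384.0f;          // 0.1593017578125
    const float m2 = 2523.0f / 4096.0f * 128.0f;  // 78.84375
    const float c1 = 3424.0f / 4096.0f;           // 0.8359375
    const float c2 = 2413.0f / 4096.0f * 32.0f;   // 18.8515625
    const float c3 = 2392.0f / 4096.0f * 32.0f;   // 18.6875

    float y  = std::clamp(luminance_cd_m2 / 10000.0f, 0.0f, 1.0f); // PQ peaks at 10,000 cd/m2
    float ym = std::pow(y, m1);
    return std::pow((c1 + c2 * ym) / (1.0f + c3 * ym), m2);
}
```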
. 22 Racing Series .
1 hour ago, Hodgman said:
10/11 bit floats have a range of 0 to about 65k, and 16bit floats have a range of about -65k to +65k (and much improved precision).
10bit float is bad for linear lighting values though. Really bad. Worse than 8bit sRGB kind of bad. You need to use a gamma curve of around 2.0 to 3.0 just to get similar precision to 8bit sRGB; this in turn reduces your storable range to about 0 to 255 for gamma 2, or about 0 to 40 for gamma 3, instead of the lovely 65k mentioned above.
This perhaps explains why I have seen good results from examples of 10-bit float buffers in action. I assumed they just magically looked good.
Since 16-bit has the same exponent range, I guess my original question applies to it as well. Thinking about it for a while, using a display brightness of 300 cd/m2 (which you and Wikipedia mention) as a reference for a linear, unmodified buffer does seem to be way too low. If the sun were equivalent to the highest exponent in a buffer (leaving some mantissa overhead), our 300 cd/m2 display brightness would be represented as 2^15 / (1.6 * 10^9 / 300) = 0.006144f, and 1f would be about 49k cd/m2. If the display brightness were instead represented as the intuitive 1f, it's clear that sunlight would face some severe clipping, but is it too severe?
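Spelled out as a quick calculation (standalone C++ just to double-check the arithmetic):

```cpp
// Quick check of the numbers above: pin the top FP16 exponent bucket (2^15 = 32768)
// to the sun at ~1.6e9 cd/m2 and see where 300 cd/m2 and 1.0 end up.
#include <cstdio>

int main()
{
    const double sun     = 1.6e9;   // cd/m2
    const double topBin  = 32768.0; // 2^15
    const double display = 300.0;   // cd/m2

    std::printf("300 cd/m2 stores as %f\n", topBin * display / sun); // ~0.006144
    std::printf("1.0 in the buffer is %.0f cd/m2\n", sun / topBin);  // ~48828 (~49k)
    return 0;
}
```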
The sun is going to clip in almost every small format; 1.6*10^9 is a lot of candles! This probably doesn't matter as long as it's clipped to some value that's still "extremely bright".
Alternatively, if you go over the 65k limit of a float16 buffer, it actually overflows to positive infinity. During post processing, you could check for pixels that are infinity and replace them with 1.6*10^9 cd/m2.
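Something like this in the post-process pass (written as standalone C++ rather than shader code, with invented names):

```cpp
// Sketch of the overflow trick above: FP16 values past ~65k read back as +infinity,
// so a post-process step can detect that and substitute a known "sun" luminance.
#include <cmath>

struct float3 { float r, g, b; };

float3 recoverOverflow(float3 hdrSample)
{
    const float sunLuminance = 1.6e9f; // cd/m2, the figure quoted above
    if (std::isinf(hdrSample.r)) hdrSample.r = sunLuminance;
    if (std::isinf(hdrSample.g)) hdrSample.g = sunLuminance;
    if (std::isinf(hdrSample.b)) hdrSample.b = sunLuminance;
    return hdrSample;
}
```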
Again though, the brightness of a display (e.g. 300 cd/m2) is completely irrelevant during rendering. The goal is not to map real world intensities 1:1 onto the monitor.
A typical JPEG photograph taken at midday on a snowfield, or a photograph taken in a glow-worm cave, both encode their images to sRGB, using the full 0 - 1 range. The camera's exposure / shutter time, aperture and ISO sensitivity settings will determine how much real world light is collected by the sensor, and a mostly linear (but possibly curved right at the top to "soft clip") tonemapping function will be applied, and then the sRGB curve will be applied for storage to 8bpp.
Your renderer has the exact same task - pick an 'exposure' value that will scale the scene's brightness into a workable range, then tonemap it, then encode it for display (e.g. sRGB).
If you're photographing or rendering the midday sun, then you could pick an exposure value that maps a billion candles to white. However, if you're photographing or rendering a cave, that same exposure value would result in a black image (so you'd pick something else!)
With FP16 you can use some fixed units in the FP16 buffer and apply exposure as the first step in your tonemapper, OR you can apply exposure as the last step in lighting and write pre-exposed values into the buffer. The former is easier, but the latter will get better quality. With FP11/10 you pretty much have to do the latter.
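The "easier" FP16 path looks roughly like this; the Reinhard curve and sRGB encode below are just placeholder choices to make it concrete, not necessarily what you'd ship:

```cpp
// Sketch of the first option above: fixed scene units in the FP16 buffer, exposure
// applied as the first step of the tonemapper.
#include <cmath>

float reinhard(float x) { return x / (1.0f + x); }   // placeholder tonemap, compresses to [0, 1)

float linearToSrgb(float c)                          // standard sRGB encode
{
    return (c <= 0.0031308f) ? 12.92f * c
                             : 1.055f * std::pow(c, 1.0f / 2.4f) - 0.055f;
}

float tonemapChannel(float sceneValue, float exposure)
{
    float exposed = sceneValue * exposure; // scale scene units into a workable range
    float mapped  = reinhard(exposed);     // tonemap
    return linearToSrgb(mapped);           // encode for an 8-bit sRGB target
}
```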
. 22 Racing Series .
Floating point values actually work pretty nicely for storing HDR (scene-referred) intensities that ultimately end up getting mapped to a visible range. This is because floating point is inherently exponential in how it represents numbers (due to the exponent and mantissa), which effectively means that as you get further from zero you end up with larger and larger "buckets": the smallest difference that can be represented between two adjacent values without rounding up or down. So if you're at a higher intensity like 10,000, you're not going to be able to represent 10,000.000001 in a 16-bit float. However that doesn't really matter, since the visual system works more on a logarithmic scale. What that means is that going from 0.00001 to 0.00002 may be perceived as a big difference to a human (because relatively speaking you've doubled the intensity), but going from 10,000.00001 to 10,000.00002 is imperceptible. So floating point tends to naturally work out for that by "discarding" very small differences as you get into the higher ranges (and, conversely, preserving them in the lower ranges closer to 0).
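To make the "bucket" size concrete, here's the step size of a 16-bit float at a few magnitudes, computed analytically from its 10 mantissa bits (valid for the normalized range only):

```cpp
// Illustration of the growing "buckets": the smallest step (ULP) a 16-bit float can
// represent near a given magnitude, derived from its 10 explicit mantissa bits.
#include <cmath>
#include <cstdio>

double halfUlp(double value)
{
    int exponent = static_cast<int>(std::floor(std::log2(std::fabs(value))));
    return std::ldexp(1.0, exponent - 10); // 10 explicit mantissa bits in FP16
}

int main()
{
    std::printf("step near 0.001 : %g\n", halfUlp(0.001));   // ~0.000001
    std::printf("step near 1.0   : %g\n", halfUlp(1.0));     // ~0.001
    std::printf("step near 10000 : %g\n", halfUlp(10000.0)); // ~8 -- a difference of
    return 0;                                                // 0.00001 simply vanishes
}
```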