Let's say I use a GGX distribution as the NDF term of the Cook-Torrance BRDF, as described in the UE4 shading notes here: https://de45xmedrsdbp.cloudfront.net/Resources/files/2013SiggraphPresentationsNotes-26915738.pdf
When the roughness is low, the value of the NDF can go well above 1, into the hundreds of thousands, so the whole BRDF evaluates to a very large number.
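To make the numbers concrete, here's a minimal sketch of just the NDF (Python; I'm assuming the alpha = roughness^2 remapping from the linked notes):

```python
import math

def ggx_ndf(n_dot_h: float, alpha: float) -> float:
    # GGX / Trowbridge-Reitz: D(h) = a^2 / (pi * ((n.h)^2 * (a^2 - 1) + 1)^2)
    a2 = alpha * alpha
    denom = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    return a2 / (math.pi * denom * denom)

# At the specular peak (h == n, so n.h == 1) this collapses to 1 / (pi * alpha^2):
for roughness in (0.5, 0.1, 0.02):
    alpha = roughness * roughness  # assumed UE4-style remapping
    print(f"roughness = {roughness}: D = {ggx_ndf(1.0, alpha):,.0f}")
# roughness = 0.02 already gives D around 2,000,000.
```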
Now, say we have a single directional light source, a smooth surface (low roughness), and a camera positioned so that the half vector equals the surface normal. We evaluate the radiance reflected from the surface toward the camera:

Lo(x, v) = brdf(x, l, v) * Li(x, l) * dot(n, l), where h = normalize(l + v) = n

Then, apparently, the outgoing radiance is stronger than the incoming radiance at the surface by a factor of hundreds of thousands?
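To check that I'm not mis-evaluating anything, here's the full specular term in that setup (reusing ggx_ndf from above; the exact numbers depend on my assumed choices of a separable Smith GGX geometry term and Schlick Fresnel):

```python
def smith_g1_ggx(n_dot_x: float, alpha: float) -> float:
    # Smith GGX masking/shadowing for one direction; equals 1 when n.x == 1.
    a2 = alpha * alpha
    return 2.0 * n_dot_x / (n_dot_x + math.sqrt(a2 + (1.0 - a2) * n_dot_x * n_dot_x))

def cook_torrance_specular(n_dot_l, n_dot_v, n_dot_h, v_dot_h, alpha, f0):
    d = ggx_ndf(n_dot_h, alpha)                 # defined in the snippet above
    g = smith_g1_ggx(n_dot_l, alpha) * smith_g1_ggx(n_dot_v, alpha)
    f = f0 + (1.0 - f0) * (1.0 - v_dot_h) ** 5  # Schlick Fresnel
    return d * g * f / (4.0 * n_dot_l * n_dot_v)

# Light, view and normal all aligned, so h == n and every dot product is 1.
brdf = cook_torrance_specular(1.0, 1.0, 1.0, 1.0, alpha=0.02 ** 2, f0=0.04)
print(brdf)  # ~20000, so Lo = brdf * Li * dot(n, l) is ~20000x the incoming radiance
```

Even with a plain dielectric f0 of 0.04, the ratio Lo / Li comes out around 2 * 10^4.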
That's my first question: if the radiance arriving at the camera is converted more or less directly into the final pixel value, how does it make sense that the reflected radiance can be stronger than the light source?
Now for my second question. I understand that radiance has units of watts per square meter per steradian. To evaluate the light actually landing on the imaging sensor, I have to compute the rendering equation, i.e. integrate brdf * Li * dot(n, l) over all incoming directions. After this integral, I'm sure the result will be no stronger than the original light source, since energy conservation requires the BRDF (times the cosine term) to integrate to at most 1 over the hemisphere. However, doesn't this contradict the approximation we use to compute pixel values in real-time graphics, where the radiance along a single direction is used directly as the pixel value?
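To double-check that intuition, here's a brute-force Monte Carlo estimate of the directional albedo (reusing ggx_ndf and smith_g1_ggx from above; uniform hemisphere sampling, so this only converges for moderate roughness, and a sharp lobe would need GGX importance sampling):

```python
import random

def directional_albedo(alpha: float, f0: float, n_samples: int = 400_000) -> float:
    # Monte Carlo estimate of the integral of f_r(l, v) * dot(n, v) over the
    # hemisphere, for l == n, with v sampled uniformly (pdf = 1 / 2pi).
    # The integrand is independent of phi when l == n, so only z needs sampling.
    total = 0.0
    for _ in range(n_samples):
        n_dot_v = random.random()                   # uniform hemisphere: z ~ U[0, 1]
        n_dot_h = math.sqrt((1.0 + n_dot_v) * 0.5)  # h = normalize(l + v), l == n
        v_dot_h = n_dot_h                           # symmetric because l == n
        d = ggx_ndf(n_dot_h, alpha)
        g = smith_g1_ggx(1.0, alpha) * smith_g1_ggx(n_dot_v, alpha)
        f = f0 + (1.0 - f0) * (1.0 - v_dot_h) ** 5
        # f_r * dot(n, v) / pdf: the dot(n, v) in the BRDF denominator cancels
        # the cosine, leaving (d * g * f / 4) * 2pi per sample.
        total += d * g * f / 4.0 * 2.0 * math.pi
    return total / n_samples

print(directional_albedo(alpha=0.5 ** 2, f0=1.0))  # lands a bit below 1
```

With f0 = 1 this prints a value a bit below 1, despite the huge pointwise peak at low roughness, which seems to confirm the integral side of my reasoning.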
Any help is appreciated, thanks.