RobMaddison said:
it never seemed to look quite right to me.
Assuming a roughness of zero (perfect mirror) and a point light, the light would never show up in the reflection at all. Its area is zero, so its solid angle is zero too, and the probability of a ray hitting the light is zero as well.
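To make the zero-solid-angle point concrete, here's a small sketch (toy numbers, hypothetical helper name) using the spherical-cap formula for the solid angle a spherical light subtends from a shading point:

```python
import math

def sphere_solid_angle(radius, distance):
    """Solid angle (steradians) subtended by a sphere of the given
    radius, seen from a point at the given distance (distance > radius)."""
    # Half-angle of the cone that exactly contains the sphere.
    sin_theta = radius / distance
    cos_theta = math.sqrt(1.0 - sin_theta * sin_theta)
    # Spherical-cap solid angle: 2*pi*(1 - cos(theta)).
    return 2.0 * math.pi * (1.0 - cos_theta)

# Shrinking the light toward a point shrinks its solid angle toward zero,
# and with it the chance that any mirror ray ever hits it.
for r in (1.0, 0.1, 0.001, 0.0):
    print(r, sphere_solid_angle(r, 10.0))
```

At radius exactly zero the solid angle is exactly zero, which is the whole problem: a perfect mirror integrates over a set of measure zero.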
A point light cannot exist in reality, so we can only integrate it analytically. For games that's the norm and has a long tradition, but it's kind of fake: it causes ugly lighting, hard shadows at any distance, and the typical round highlight we know from Phong shading.
Another typical game limitation is the finite falloff of lights, while in reality falloff is always infinite. We are so used to those things that we start to miss them when moving towards more accurate and realistic lighting models.
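As a sketch of that difference (function names and the particular smooth window are my own toy choices, not from any specific engine): physical inverse-square attenuation never reaches zero, while a typical game falloff is windowed so the light's influence ends exactly at a chosen radius:

```python
def inverse_square(distance):
    """Physical attenuation: proportional to 1/d^2. Small far away,
    but never exactly zero at any finite distance."""
    return 1.0 / (distance * distance)

def windowed_falloff(distance, radius):
    """Typical game-style attenuation: inverse-square scaled by a smooth
    window so the contribution hits exactly zero at 'radius'."""
    if distance >= radius:
        return 0.0
    x = distance / radius
    window = (1.0 - x * x) ** 2   # fades smoothly to 0 at the radius
    return window / (distance * distance)

# The physical light still contributes (a little) at any distance;
# the windowed one can be culled outside its finite range.
print(inverse_square(100.0))          # small but nonzero
print(windowed_falloff(100.0, 50.0))  # exactly 0.0
```

The finite radius exists purely so the engine can cull lights; it has no physical counterpart.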
BTW, all this is why I personally like IBL (image-based lighting) much more as a mental model for understanding lighting. It has none of the above problems:
No point lights, no highlights, no finite falloff, or any other stuff that is not real. It puts those things into the correct perspective, and we get proper conclusions:
We don't see ‘highlights’ - we only see reflections of bright emissive surfaces like the sun, and those are always area lights. If the sun is round, the reflection is round too. If it's a neon sign, we see just that. Phong would have a hard time with that neon sign.
Distance does not matter, so we don't need to care about falloffs. If the sun is far away, it just becomes a smaller region in the environment image, so it contributes less to the integrated result. We learn that it's solid angle that matters, not distance.
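A nice consequence is that the inverse-square law falls out of solid angle automatically. A quick check with toy numbers (spherical-cap formula for a sphere light): doubling the distance quarters the solid angle, which is exactly the 1/d^2 falloff, without ever modeling distance in the shading itself:

```python
import math

def sphere_solid_angle(radius, distance):
    """Solid angle subtended by a sphere (spherical-cap formula)."""
    cos_theta = math.sqrt(1.0 - (radius / distance) ** 2)
    return 2.0 * math.pi * (1.0 - cos_theta)

near = sphere_solid_angle(1.0, 100.0)
far  = sphere_solid_angle(1.0, 200.0)
# For a small or distant source, omega ~ pi*r^2/d^2, so doubling the
# distance divides the solid angle (and the received light) by ~4.
print(near / far)  # close to 4
```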
And finally, we do not even need to think about shadows. They do not exist either - it's just that the light source is not visible in the environment image. So we get proper (soft) shadows by integrating the image.
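To illustrate how soft shadows fall out of visibility alone, here's a deliberately tiny 2D toy (all geometry made up: a strip light overhead, a thin wall as the occluder). The "shadow" is just the fraction of the light's angular extent that the wall hides, and sweeping the receiver along the ground gives a smooth penumbra with no explicit shadow term anywhere:

```python
def light_visibility(px, n_samples=1000):
    """Fraction of an overhead area light visible from the ground point
    (px, 0). The light spans x in [-0.5, 0.5] at height 10; a thin wall
    stands at x = 0, reaching from the ground up to height 5."""
    visible = 0
    for i in range(n_samples):
        lx = -0.5 + (i + 0.5) / n_samples   # sample across the light
        if lx == px:                        # vertical sight line: for
            visible += 1                    # px != 0 it misses the wall
            continue
        t = (0.0 - px) / (lx - px)          # where the sight line hits x=0
        if not (0.0 < t < 1.0) or 10.0 * t >= 5.0:
            visible += 1                    # wall not between them, or the
                                            # line passes above the wall top
    return visible / n_samples

# Visibility dips smoothly toward the wall base and rises again:
# a penumbra, purely from integrating which part of the light is visible.
for px in (-1.0, -0.3, -0.1, 0.1, 0.3, 1.0):
    print(px, light_visibility(px))
```

Multiply this visibility into the integral over the light's solid angle and you have a soft shadow; a point light would only ever give 0 or 1 here, i.e. a hard edge.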
Now the only problem is that we cannot generate a high-res environment image for every pixel in real time, ofc. And even if we could, it would be a waste, because a perfect mirror would sample only a single texel from that image, for example.
But the model lets us look at the problem from a different angle. It helps to get rid of limited legacy models like point lights plus constant ambient. It helps to understand RT, e.g. how importance sampling tries to compute only the few pixels of the environment image that matter in order to get good results. It also helps to distinguish between generating the environment information (GI, light transport) and shading, which combines this environment with the material (PBS, BRDF, etc.).
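Here's a minimal 1D caricature of that importance-sampling idea (everything in it is a toy: the "environment" is a function with one bright sliver, the "lobe" a normalized glossy weight). Sampling directions from the lobe's own distribution means each sample reduces to a single environment lookup exactly where reflected energy actually goes:

```python
import random

# Toy 1D "shading integral": integral over [0,1] of env(x) * lobe(x) dx.

def env(x):
    """Mostly-dark sky with one bright sliver (the 'sun')."""
    return 10.0 if 0.8 < x < 0.9 else 0.1

def lobe(x):
    """Normalized glossy weight: a pdf on [0,1], peaked toward x = 1."""
    return 3.0 * x * x

def estimate_uniform(n, rng):
    """Uniform sampling: many samples land where the lobe is ~0."""
    total = 0.0
    for _ in range(n):
        x = rng.random()
        total += env(x) * lobe(x)
    return total / n

def estimate_importance(n, rng):
    """Sample x from the lobe's pdf; the weight f/pdf collapses to env(x)."""
    total = 0.0
    for _ in range(n):
        x = rng.random() ** (1.0 / 3.0)  # inverse CDF of pdf 3x^2
        total += env(x)
    return total / n

# Both converge to the same integral (about 2.25 for this toy setup);
# the importance-sampled one reads the environment only where it matters.
rng = random.Random(1)
print(estimate_uniform(100000, rng), estimate_importance(100000, rng))
```

Real renderers do the same thing per-pixel with a GGX lobe over actual directions, but the principle is identical: let the BRDF decide which "pixels of the environment" are worth evaluating.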
All that said just to give some context. PBR in games is still very limited, and people should still be creative and do non-physical stuff to make things look ‘better’ wherever needed.