RobMaddison said:
My question is, isn't image based lighting all you need for PBR?
Yes, if that image contained all the lighting arriving at the given pixel to shade.
But because we cannot compute one environment map per pixel, using an image based environment is just a compromise: it assumes a static surrounding at infinite distance, with some analytic dynamic lights added on top.
I have implemented basic PBR in a simple offline path tracer, and there is no need for either environment maps or analytical lights when using just emissive surfaces as light sources (and not caring about next event estimation to sample lighting faster).
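As a sketch of why neither env maps nor analytic lights are needed: below is a toy one-bounce path-tracing estimate (the scene and all names are mine, purely for illustration) where the only light is an emissive sky dome, and shading falls out of random sampling alone.

```python
import math
import random

SKY_RADIANCE = 1.0   # peak radiance of the emissive sky dome (the only light)
FLOOR_ALBEDO = 0.5   # Lambertian reflectance of a horizontal floor

def cosine_sample_hemisphere():
    # Cosine-weighted direction around the +y normal; pdf = cos(theta)/pi,
    # which exactly cancels the Lambertian BRDF (albedo/pi) and the cosine term.
    r1, r2 = random.random(), random.random()
    phi = 2.0 * math.pi * r1
    sin_t = math.sqrt(r2)
    return (math.cos(phi) * sin_t, math.sqrt(1.0 - r2), math.sin(phi) * sin_t)

def sky(d):
    # Emissive environment: brightest at the zenith, fading to the horizon.
    return SKY_RADIANCE * max(0.0, d[1])

def floor_radiance():
    # One path-traced bounce: pick a direction, return whatever emits along it.
    # With cosine sampling the Monte Carlo weight reduces to the albedo alone.
    return FLOOR_ALBEDO * sky(cosine_sample_hemisphere())

random.seed(1)
n = 200_000
estimate = sum(floor_radiance() for _ in range(n)) / n
print(estimate)  # converges to albedo * E[cos(theta)] = 0.5 * 2/3 = 1/3
```

No env map lookup and no light list: the emission is just geometry the rays happen to hit.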
That's probably the easiest way to look at realistic rendering, without any need for special cases or custom data structures to represent lighting.
Even if you can't use that approach directly, it lets you at least imagine things correctly and avoid confusion.
The lighting at any point is a weighted sum of its visible environment interacting with the material, and image based lighting comes very close to representing this environment.
So if we rendered this env map as seen from the shading point, and that render also contained all light sources, IBL would be perfect.
But in practice some lights are missing, so we need to add them to the result we get from IBL. That's why we do ‘both’ IBL + analytical lights. Nothing wrong with that.
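In code, "IBL + analytical lights" really is just a sum. A minimal diffuse-only sketch (the function names and the scalar stand-in for an irradiance map are mine, not from any particular engine):

```python
import math

def shade(albedo, normal, point, lights, env_irradiance):
    # IBL term: irradiance for this normal, fetched from a prefiltered env map
    # (faked here as a single number), times the Lambertian albedo.
    color = albedo * env_irradiance
    # Analytic lights added on top: n.l and inverse-square falloff, no cutoff.
    for light_pos, intensity in lights:
        to_light = tuple(l - p for l, p in zip(light_pos, point))
        dist2 = sum(c * c for c in to_light)
        dist = math.sqrt(dist2)
        n_dot_l = max(0.0, sum(n * c for n, c in zip(normal, to_light)) / dist)
        color += (albedo / math.pi) * intensity * n_dot_l / dist2
    return color

# A light of intensity pi directly above a white surface, at distance 1,
# plus a dim environment term of 0.1:
c = shade(1.0, (0.0, 1.0, 0.0), (0.0, 0.0, 0.0),
          [((0.0, 1.0, 0.0), math.pi)], env_irradiance=0.1)
```

The two terms don't know about each other, which is exactly why missing lights in the env map can simply be added analytically afterwards.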
BTW, even if we could render an env map per pixel (it's not impossible, see e.g. the Many LODs paper), we would still have a problem with the point lights we use in games.
Because point lights have zero size and render resolution is finite, we could not capture them well in the rendered env map.
This is because point lights do not exist in reality; only area lights do. So in the end it's the concept of point lights that is most wrong here.
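A quick way to see the resolution problem: all of a point light's energy has to land in a single env-map texel, so the radiance that texel must store diverges as resolution grows. (The solid-angle estimate below is a rough average over a cube map, my own simplification.)

```python
import math

def texel_radiance_for_point_light(intensity, n):
    # A cube env map has 6 * n * n texels covering the full sphere (4*pi sr).
    # A zero-size light must dump all its intensity into one texel, so the
    # required texel radiance is intensity / texel_solid_angle, which blows
    # up as n grows: no finite resolution represents the light correctly.
    texel_solid_angle = (4.0 * math.pi) / (6 * n * n)  # rough per-texel average
    return intensity / texel_solid_angle

lo = texel_radiance_for_point_light(1.0, 64)
hi = texel_radiance_for_point_light(1.0, 1024)
```

Quadrupling the resolution means sixteen times the required texel radiance; a true delta light would need an infinitely bright texel.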
Also, giving lights a finite radius of contribution is wrong and not really realistic: physical intensity follows the inverse square law, which fades forever but never reaches exactly zero.
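To illustrate, compare physical inverse-square falloff with a hard radius cutoff, and with the smooth windowed falloff several engines use instead (the exponents follow a commonly published windowing function, but treat this as an example, not a spec):

```python
def falloff_physical(d):
    # Inverse-square law: fades forever but never reaches exactly zero.
    return 1.0 / (d * d)

def falloff_hard_radius(d, radius):
    # Naive game falloff: hard cutoff at `radius`. Just inside the cutoff the
    # intensity is still 1/radius^2 > 0, so there is a visible discontinuity.
    return 1.0 / (d * d) if d < radius else 0.0

def falloff_windowed(d, radius):
    # Common compromise: keep inverse-square, but multiply by a smooth window
    # that reaches exactly zero at `radius` (still unphysical, but artifact-free).
    w = max(0.0, 1.0 - (d / radius) ** 4)
    return (w * w) / (d * d)
```

The radius exists purely for performance (so lights can be culled), which is exactly why it has no physical justification.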