
New method for specular highlights

Started by June 02, 2020 04:19 PM
4 comments, last by MJP 4 years, 8 months ago

I had an idea about non-IBL specular highlights a while back and, although I haven't tried it out yet, I thought it was worth putting up here to see if it's been tried before. I was thinking you could have a simple greyscale lookup texture (only one channel needed), around 512x512 in size. In the pixel shader, if you dot the surface normal with the normal of the plane containing the eye position, pixel position and sun (i.e. the cross product of the view and light directions), you get a value between -1 and 1. You can then discard the sign, saturate the result and use it as the u coordinate of the LUT, with roughness as the v coordinate.
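A minimal sketch of that coordinate computation, in CPU-side C++ with a hand-rolled Vec3 (all names here are illustrative; it assumes viewDir and lightDir are unit vectors pointing from the pixel towards the eye and the sun):

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}
static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}
static Vec3 normalize(const Vec3& v) {
    float len = std::sqrt(dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// normal, viewDir, lightDir: unit vectors at the shaded pixel.
// Writes the (u, v) coordinates for the 512x512 greyscale LUT.
void specularLutUV(Vec3 normal, Vec3 viewDir, Vec3 lightDir,
                   float roughness, float& u, float& v) {
    // Normal of the plane containing eye, pixel and sun:
    // the cross product of the view and light directions.
    Vec3 planeNormal = normalize(cross(viewDir, lightDir));

    // The dot with the surface normal is in [-1, 1]; drop the
    // sign and saturate, as described above.
    u = std::clamp(std::fabs(dot(normal, planeNormal)), 0.0f, 1.0f);
    v = roughness;   // one LUT row per roughness level
}
```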

Each LUT line would be a one-pixel pie-slice of the specular highlight, with roughness increasing in the v direction and an intensity-preserving spread of the highlight along u. Line 0 would be a very tight, glossy specular highlight with a sharp fall-off; line 511 would be the same light intensity spread across the whole u range; and every line in between would be a linear (or non-linear) interpolation of the two.
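For what it's worth, here is a hedged sketch of how such a table might be filled so that every row carries the same total energy. The Gaussian lobe, the width mapping, and the assumption that the highlight peak sits at u = 0 (surface normal lying in the eye/pixel/sun plane) are my guesses, not part of the original idea:

```cpp
#include <cmath>
#include <vector>

// Builds a size x size greyscale LUT: row 0 is a tight glossy lobe,
// the last row spreads the same energy across the whole row.
std::vector<float> buildSpecularLut(int size = 512) {
    std::vector<float> lut(size * size);
    for (int row = 0; row < size; ++row) {
        float roughness = (row + 0.5f) / size;   // v coordinate
        float width = 0.01f + roughness;         // lobe width (assumed mapping)
        float sum = 0.0f;
        for (int col = 0; col < size; ++col) {
            float u = (col + 0.5f) / size;       // u = 0 -> highlight centre
            float t = u / width;
            float value = std::exp(-t * t);      // Gaussian-style fall-off
            lut[row * size + col] = value;
            sum += value;
        }
        // Rescale the row so every roughness level sums to the same
        // total intensity - the "intensity-preserving" property.
        for (int col = 0; col < size; ++col)
            lut[row * size + col] /= sum;
    }
    return lut;
}
```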

Has anyone thought of doing this before? It seems like a straightforward way of accurately preserving intensity across all roughness levels, and also a way of tailoring the highlight and fall-off to your taste.

RobMaddison said:
and also a way of tailoring the highlight and fall-off to your taste.

… which is exactly what PBR workflow tries to avoid : )

That aside, is there a need for a LUT at all? What's wrong with the way PBS handles analytical lights / what problem would you solve / what do you aim to improve?

EDIT:

I assume you want to see more specular. I know this problem: often I want it but only get it at grazing angles. Damn physical correctness.
The 'correct' way of getting the desired effect would be either making the light more intense, or making it an area light (e.g. a sphere) so the highlight becomes larger.
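One common way to do the sphere-light version is the "representative point" trick from Karis's UE4 course notes (my paraphrase, not something from this thread): pick the point on the sphere closest to the reflection ray and shade with that, so a larger radius naturally gives a larger highlight. A sketch reusing the Vec3 helpers from the post above:

```cpp
// reflectDir: reflected view direction, unit length.
// toLight:    vector from the shaded point to the light's centre.
// Returns a unit direction towards the representative point.
Vec3 sphereLightDirection(Vec3 reflectDir, Vec3 toLight, float lightRadius) {
    // Vector from the light centre to the closest point on the reflection ray.
    float t = dot(toLight, reflectDir);
    Vec3 centreToRay = { reflectDir.x * t - toLight.x,
                         reflectDir.y * t - toLight.y,
                         reflectDir.z * t - toLight.z };
    float distToRay = std::max(std::sqrt(dot(centreToRay, centreToRay)), 1e-6f);

    // Clamp that offset to the sphere's surface: the bigger the radius,
    // the further the representative point moves towards the ray, and
    // the larger the resulting highlight.
    float k = std::min(1.0f, lightRadius / distToRay);
    Vec3 rep = { toLight.x + centreToRay.x * k,
                 toLight.y + centreToRay.y * k,
                 toLight.z + centreToRay.z * k };
    return normalize(rep);
}
```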

Layered materials can also help, e.g. clearcoat.


I haven't played with specular highlights for a few years but I seem to remember it being really difficult to get a nice highlight using different roughness levels - it never seemed to look quite right to me.

RobMaddison said:
it never seemed to look quite right to me.

Assuming a roughness of zero (perfect mirror) and a point light, the light would never show up in the reflection at all. Its area is zero, so its solid angle is zero too, and the probability of a ray hitting the light is zero as well.
A point light cannot exist in reality, so we can only integrate it analytically. For games that's the norm and has tradition, but it's kind of fake: it causes ugly lighting, hard shadows at any distance, and the typical round highlight we know from Phong shading.
Another typical game limitation is finite light falloff, while in reality falloff is infinite. We are so used to those things that we start to miss them when moving towards more accurate and realistic lighting models.
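To make the falloff point concrete, the usual compromise (this is the windowed inverse-square falloff from Karis's "Real Shading in Unreal Engine 4" notes, quoted here for context) keeps the physically based 1/d² term but multiplies in a smooth window so the light reaches exactly zero at a chosen radius:

```cpp
#include <algorithm>
#include <cmath>

// Inverse-square attenuation, windowed to hit zero at lightRadius.
float windowedFalloff(float distance, float lightRadius) {
    float ratio4 = std::pow(distance / lightRadius, 4.0f);
    float window = std::clamp(1.0f - ratio4, 0.0f, 1.0f);
    return (window * window) / (distance * distance + 1.0f); // +1 avoids the d = 0 singularity
}
```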

BTW, all this is why I personally like IBL much more as a mental model for understanding lighting. It has none of the above problems:
No point lights, no highlights, no falloff or any other stuff that is not real. It puts those things into the correct perspective and we get proper conclusions:
We don't see 'highlights' - we only see reflections of bright emissive surfaces like the sun, and those are always area lights. If the sun is round, the reflection is round too. If it's a neon sign, we see just that. Phong would have a hard time with that neon sign.
Distance does not matter. We don't need to care about falloffs. If the sun is far away, it just becomes a smaller region in the environment image, so it contributes less to the integrated result. We learn that it's solid angle that matters, not distance.
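To put a number on that: the solid angle of a spherical cap of angular radius θ is Ω = 2π(1 − cos θ), and with the sun's angular radius of roughly 0.266° (a standard astronomical value, not something from this thread):

```latex
\Omega = 2\pi\,(1 - \cos\theta) \approx 2\pi\,(1 - \cos 0.266^{\circ}) \approx 6.8 \times 10^{-5}\ \mathrm{sr}
```

Moving the sun closer or further only changes θ; the familiar inverse-square falloff is already baked into that shrinking solid angle.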
And finally, we do not even need to think about shadows. They do not exist either - it's just that the light source is not visible in the environment image. So we get proper (soft) shadows by integrating the image.

Now the only problem is that we cannot generate a high-res environment image for every pixel in real time, of course. And even if we could, it would be wasteful: a perfect mirror would sample only a single texel from that image, for example.
But the model lets us look at the problem from a different angle. It helps to get rid of limited legacy models like point lights plus constant ambient. It helps to understand ray tracing, e.g. how importance sampling tries to calculate only a few pixels of the environment image and still get good results. It also helps to distinguish between generating the environment information (GI, light transport) and shading, which combines that environment with the material (PBS, BRDF, etc.).
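As a tiny illustration of the importance-sampling idea, here is a textbook cosine-weighted hemisphere sample (nothing specific to this thread), which concentrates rays where the cosine term makes the environment contribute most; it reuses the Vec3 type from earlier:

```cpp
#include <algorithm>
#include <cmath>

// Maps two uniform random numbers in [0, 1) to a direction on the
// hemisphere around +Z, with probability proportional to cos(theta).
Vec3 cosineSampleHemisphere(float u1, float u2) {
    float r   = std::sqrt(u1);                 // radius on the unit disc
    float phi = 2.0f * 3.14159265f * u2;       // angle on the disc
    float x = r * std::cos(phi);
    float y = r * std::sin(phi);
    float z = std::sqrt(std::max(0.0f, 1.0f - u1)); // lift onto the hemisphere
    return { x, y, z };
}
```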

All that said just to give some context. PBR in games is still very limited, and people should still get creative with non-physical stuff to make things look 'better' where needed.

S.T.A.L.K.E.R. did something like this a very long time ago in order to allow for different lighting responses in a deferred renderer: https://developer.nvidia.com/gpugems/gpugems2/part-ii-shading-lighting-and-shadows/chapter-9-deferred-shading-stalker

A presentation for Destiny also mentioned something along these lines, however I believe they may have dropped it and adopted more standard shading models prior to release: https://advances.realtimerendering.com/s2013/Tatarchuk-Destiny-SIGGRAPH2013.pptx

Personally I prefer to stick with the mathematical models, because they are "well-behaved" in ways that are hard to guarantee with other methods (for instance, they're properly normalized, and they can be importance sampled).
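For reference, "properly normalized" includes (among other things) the standard microfacet constraint that the distribution of normals integrates to one over projected solid angle; a hand-authored LUT lobe would need to be checked against something like:

```latex
\int_{\Omega} D(\mathbf{h})\,(\mathbf{n} \cdot \mathbf{h})\, d\omega_{\mathbf{h}} = 1
```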

This topic is closed to new replies.
