I read this wonderful two-part article about shader permutations:
https://therealmjp.github.io/posts/shader-permutations-part1/
https://therealmjp.github.io/posts/shader-permutations-part2/
The article's definition of shader permutation:
“what I’m referring to is taking a pile of shader code and compiling it N times with different options. In most cases these permutations are tied directly to features that are supported by the shader, often by writing the code in an “uber-shader” style with many different features that can be turned on and off independently.”
My understanding is: features are enabled or disabled based on options passed in at compile time. Since the developer never knows which combination of features will be needed at runtime, it's safer to compile every permutation ahead of time, and that is where the problem comes from.
Please correct me if my understanding is wrong.
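To check my understanding, here's a rough sketch of what I think the "compile the same code N times with different options" pattern looks like, written as plain C++ with the preprocessor standing in for the shader compiler's defines (the feature names and math are made up by me, not taken from the article):

```cpp
// Build this once per combination of defines, e.g.:
//   (no defines)                    -> "shadows off, fog off" permutation
//   -DENABLE_SHADOWS                -> "shadows on,  fog off" permutation
//   -DENABLE_SHADOWS -DENABLE_FOG   -> "shadows on,  fog on"  permutation
// Real shader permutations work the same way, just through the HLSL/GLSL
// preprocessor and the shader compiler instead of the C++ compiler.

float shade(float albedo, float lightIntensity,
            float shadowTerm, float fogAmount)
{
    float color = albedo * lightIntensity;

#if defined(ENABLE_SHADOWS)
    // This code only exists in permutations built with -DENABLE_SHADOWS.
    color *= shadowTerm;
#endif

#if defined(ENABLE_FOG)
    // Likewise, this only exists in permutations built with -DENABLE_FOG.
    color = color * (1.0f - fogAmount) + 0.5f * fogAmount; // blend toward grey fog
#endif

    return color;
}
```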
What I don't understand is why we can't put every piece of related code into one giant shader with many branches (an uber-shader),
like Doom Eternal does (https://advances.realtimerendering.com/s2020/RenderingDoomEternal.pdf)?
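This is roughly what I mean by the "one giant shader" alternative: compile the code once and select features with runtime branches on flags the engine uploads (again just a C++ sketch with made-up names, not real shader code):

```cpp
// Compiled exactly once. Which features run is decided at runtime by
// flags the engine provides (in a real shader, e.g. via a constant buffer).
struct MaterialFlags
{
    bool shadowsEnabled;
    bool fogEnabled;
};

float shadeUber(const MaterialFlags& flags,
                float albedo, float lightIntensity,
                float shadowTerm, float fogAmount)
{
    float color = albedo * lightIntensity;

    if (flags.shadowsEnabled)   // dynamic branch instead of a compile-time #ifdef
        color *= shadowTerm;

    if (flags.fogEnabled)       // ditto
        color = color * (1.0f - fogAmount) + 0.5f * fogAmount;

    return color;
}
```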
A few possible reasons I can think of:
1. Putting everything into one file creates a shader that's too large to maintain.
2. A node-based material system is hard to convert into an uber-shader (maybe?).
3. It's hard to optimize, given how large the shader gets?
4. Too many instructions to fit into the instruction cache.
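For context on the trade-off between the two approaches above, here's the back-of-the-envelope math that makes me think the permutation route blows up in the first place, assuming each feature is an independent on/off toggle (hypothetical numbers, not from the article):

```cpp
#include <cstdint>
#include <cstdio>

int main()
{
    // Hypothetical feature count: every independent boolean feature doubles
    // the number of shader variants that have to be compiled ahead of time.
    const int boolean_features = 12;   // e.g. shadows, fog, normal mapping, ...
    const std::uint64_t permutations = std::uint64_t(1) << boolean_features;

    std::printf("%d independent on/off features -> %llu permutations to compile\n",
                boolean_features,
                static_cast<unsigned long long>(permutations));

    // The branchy uber-shader approach compiles the code once instead,
    // which is why I'm asking what the catch is.
    return 0;
}
```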