This is a question I've had for ages, but I haven't been able to find a proper answer. These days, a lot of engines have node-based material editors that can produce shaders of incredible complexity. While not trivial, it seems pretty straightforward how those node graphs can be turned into compiled pixel shaders. However, pixel shaders are also often used for the lighting and other effects that the engine applies after the material is calculated, and I have absolutely no idea how the two are combined.
I could, for instance, pre-render the material shader onto an offscreen surface and then use that as a texture input for a polygon drawn with the engine's lighting shader. But doing this for every material seems excessive, and if a material is animated it would have to be re-rendered offscreen every frame.
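To make that concrete, here's a rough sketch of what the second step might look like (all the names here are invented, and the lighting is just a placeholder Lambert term):

```hlsl
// Pass 2: the engine's lighting shader treats the offscreen
// render target from pass 1 as an ordinary diffuse texture.
Texture2D    gMaterialResult;   // hypothetical: output of the pre-rendered material pass
SamplerState gLinearSampler;

float4 LightingPS(float2 uv       : TEXCOORD0,
                  float3 normalWS : TEXCOORD1) : SV_Target
{
    float3 albedo   = gMaterialResult.Sample(gLinearSampler, uv).rgb;
    float3 lightDir = normalize(float3(0.5, 1.0, 0.25)); // placeholder directional light
    float  ndotl    = saturate(dot(normalize(normalWS), lightDir));
    return float4(albedo * ndotl, 1.0);
}
```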
I've also considered using multipass effect files: rendering the material in one pass, then doing the lighting calculations in a second pass. As I understand it, though, this can create artifacts with anti-aliasing.
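For reference, this is the kind of .fx structure I mean (a minimal sketch; CommonVS, MaterialPS, and LightingPS are hypothetical functions defined elsewhere in the file):

```hlsl
technique MaterialThenLighting
{
    pass MaterialPass   // writes the evaluated material colour
    {
        VertexShader = compile vs_3_0 CommonVS();
        PixelShader  = compile ps_3_0 MaterialPS();
    }
    pass LightingPass   // modulates the lighting over the first pass's result
    {
        AlphaBlendEnable = true;
        SrcBlend         = DestColor;  // framebuffer colour * lighting result
        DestBlend        = Zero;
        VertexShader = compile vs_3_0 CommonVS();
        PixelShader  = compile ps_3_0 LightingPS();
    }
}
```

It's the blending along polygon edges in that second pass that I'd expect to interact badly with anti-aliasing.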
Finally, I could compile the material graph into HLSL and simply append my lighting code to the end of the generated material code.
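If I went that route, I imagine the generated code and the hand-written lighting would end up stitched into a single pixel shader, roughly like this (a minimal sketch; EvaluateMaterial stands in for whatever the node graph compiles to, and every name is made up):

```hlsl
Texture2D    gBaseColorMap;    // bound by the engine for this material
SamplerState gLinearSampler;

// --- Generated from the material node graph ---
struct MaterialOutput
{
    float3 albedo;
    float3 normalWS;
};

MaterialOutput EvaluateMaterial(float2 uv, float3 normalWS)
{
    MaterialOutput o;
    o.albedo   = gBaseColorMap.Sample(gLinearSampler, uv).rgb; // from a "Texture Sample" node
    o.normalWS = normalize(normalWS);                          // no normal-map node in this sketch
    return o;
}

// --- Hand-written lighting appended after the generated code ---
float4 MainPS(float2 uv       : TEXCOORD0,
              float3 normalWS : TEXCOORD1) : SV_Target
{
    MaterialOutput m = EvaluateMaterial(uv, normalWS);
    float3 lightDir  = normalize(float3(0.5, 1.0, 0.25));      // placeholder directional light
    float  ndotl     = saturate(dot(m.normalWS, lightDir));
    return float4(m.albedo * ndotl, 1.0);
}
```

The appeal here is that everything compiles into one shader per material, but I don't know whether that's how real engines actually do it.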
I've tried to research this, but I can't find an answer on how it's commonly implemented. I've even downloaded a few open-source material editors to browse through their rendering code, but I struggled to find one in C++, which is really the only language I know.
Hope that makes sense.