Konjointed said:
Sorry, I don't know the best way to phrase it, but say I wanted to make a tree that has a shadow and wind effects, or a character that has a shadow and animation. To my understanding this would require multiple shaders, though I suppose you could make one monolithic one.
The shadow part is confusing.
The standard approach is to render the character twice.
Once to the shadow map, which needs its own projection from the light source, using the same vertex shader for skinning but a simple pass-through pixel shader.
And a second time to the frame buffer, using the camera projection and the same vertex shader, but a more complex pixel shader for the materials.
So you must render multiple times: once for each light in which the character casts a shadow, plus once for the frame buffer. There is no way around that.
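Roughly, the per-frame structure looks like this. All names here are hypothetical and only illustrate the flow, not any particular API or engine:

```cpp
// Hypothetical sketch of the two kinds of passes described above.
void RenderFrame(const Scene& scene, const Camera& camera)
{
    // One depth-only pass per shadow-casting light, projected from that light.
    for (const Light& light : scene.shadowCastingLights)
    {
        BindShadowMapTarget(light);                 // depth-only render target
        SetProjection(light.ShadowProjection());    // projection from the light's point of view
        for (const Character& c : scene.characters)
            Draw(c, skinningVertexShader, passThroughPixelShader);
    }

    // Final pass to the frame buffer with the camera projection; the material
    // pixel shader samples the shadow maps produced above.
    BindFrameBuffer();
    SetProjection(camera.Projection());
    for (const Character& c : scene.characters)
        Draw(c, skinningVertexShader, materialPixelShader);
}
```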
In theory it is possible to skin the vertices just once and then render to multiple viewports, e.g. using a geometry shader. But this is not very practical: because the character might not be visible from some viewports, the approach still generates overhead and redundant processing, afaik.
But it is pretty common to skin vertices just once with a compute shader, store the transformed vertices in VRAM, and then render to multiple viewports without having to transform them again.
This approach is usually worth it if we need the transformed vertices for other things as well, e.g. fine-grained occlusion culling of small clusters over any mesh, as typically used in GPU-driven rendering approaches. Or, if we use ray tracing, we must pre-transform this way so the BVH can be updated from known vertex positions.
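For completeness, a minimal sketch of such a compute-skinning pass, assuming OpenGL 4.3+ with the buffers already created and bound to the bindings below; names like skinProgram and vertexCount are made up:

```cpp
#include <glad/glad.h> // any loader exposing GL 4.3 functions

// GLSL compute shader: linear blend skinning once per vertex, writing the result
// to a second buffer that all later passes read from. Compile it into skinProgram
// elsewhere in your setup code.
static const char* kSkinningCS = R"(#version 430
layout(local_size_x = 64) in;

struct Vertex { vec4 position; vec4 normal; };

layout(std430, binding = 0) readonly  buffer SrcVerts { Vertex src[]; };
layout(std430, binding = 1) writeonly buffer DstVerts { Vertex dst[]; };
layout(std430, binding = 2) readonly  buffer Indices  { uvec4 boneIndex[];  }; // 4 bones per vertex
layout(std430, binding = 3) readonly  buffer Weights  { vec4  boneWeight[]; };
layout(std430, binding = 4) readonly  buffer Bones    { mat4  bone[]; };

void main()
{
    uint i = gl_GlobalInvocationID.x;
    if (i >= src.length()) return;

    // Weighted sum of up to four bone matrices.
    mat4 skin = bone[boneIndex[i].x] * boneWeight[i].x
              + bone[boneIndex[i].y] * boneWeight[i].y
              + bone[boneIndex[i].z] * boneWeight[i].z
              + bone[boneIndex[i].w] * boneWeight[i].w;

    dst[i].position = skin * vec4(src[i].position.xyz, 1.0);
    dst[i].normal   = vec4(normalize(mat3(skin) * src[i].normal.xyz), 0.0);
}
)";

// Host side: skin once per frame; every pass afterwards (shadow maps, main view, ...)
// just reads the pre-transformed vertices from binding 1.
void SkinCharacter(GLuint skinProgram, GLuint vertexCount)
{
    glUseProgram(skinProgram);
    glDispatchCompute((vertexCount + 63) / 64, 1, 1);
    // Make the writes visible before the draw calls read them (use
    // GL_VERTEX_ATTRIB_ARRAY_BARRIER_BIT instead if they are read as vertex attributes).
    glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);
}
```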
But that's probably not what you mean, i guess.
Konjointed said:
I can't have multiple shaders applied at once (which is what I originally thought), and switching between shaders would cause the previous one to be discarded, so what do I do?
No, you can't have multiple shaders from the same shader stage applied to the same draw call.
If you need this, you have to manually combine both functionalities into a single new shader.
There is also no kind of shader stack that remembers previously active shaders and automatically reactivates the last one when you disable the current shader. If you need such functionality, you have to implement and manage it yourself. (Idk which API you use, but this applies to any API afaict.)
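If you really wanted such a stack, a minimal hand-rolled version could look like this. A sketch assuming OpenGL; the ShaderStack class is made up for illustration:

```cpp
#include <vector>
#include <glad/glad.h> // any loader exposing glUseProgram

// Hypothetical helper that remembers which program was active before a Push,
// and restores it on Pop.
class ShaderStack
{
public:
    void Push(GLuint program)
    {
        stack_.push_back(program);
        glUseProgram(program);
    }

    void Pop()
    {
        if (!stack_.empty())
            stack_.pop_back();
        // Reactivate whatever was bound before, or no program at all.
        glUseProgram(stack_.empty() ? 0 : stack_.back());
    }

private:
    std::vector<GLuint> stack_;
};
```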
But why would you want to have this at all? What is the problem you are trying to solve?
Konjointed said:
What I could think of is that shaders would require information from previous shaders or would just have duplicate code, but I don't think either is a very good solution.
There are no cross-shader function calls in any gfx API, so one shader can't simply call code that lives in another. CUDA or OpenCL can do this, but sadly game devs are left behind.
So until they change this and give us an update, duplicating code is our only option.
Though, this is usually only a problem with general-purpose compute programs, not for rendering. So i wonder what you're trying to do.
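For what it's worth, the "duplication" usually happens at the source-string level rather than by hand: you can keep shared GLSL in one string and splice it into every shader that needs it at compile time, e.g. via the multi-string form of glShaderSource. A minimal sketch, assuming OpenGL; all names and string contents here are made up:

```cpp
#include <glad/glad.h> // any loader exposing the GL functions below

// A shared chunk of GLSL that several pixel shaders need.
static const char* kSharedCode = R"(
vec3 ApplyFog(vec3 color, float depth)
{
    return mix(color, vec3(0.5), clamp(depth * 0.01, 0.0, 1.0));
}
)";

// glShaderSource concatenates all strings it is given, so the shared chunk can be
// spliced in between the #version header and each shader's own body.
GLuint CompilePixelShader(const char* body)
{
    const char* sources[] = { "#version 430\n", kSharedCode, body };
    GLuint shader = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(shader, 3, sources, nullptr);
    glCompileShader(shader); // check GL_COMPILE_STATUS in real code
    return shader;
}

// Usage: treePixelShaderBody and characterPixelShaderBody are hypothetical strings
// holding each shader's own main() and material code.
//   GLuint treePS      = CompilePixelShader(treePixelShaderBody);
//   GLuint characterPS = CompilePixelShader(characterPixelShaderBody);
```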