I'm back with an oddly specific question.
I'm working on implementing a render graph abstraction, largely drawing on the advice from the now-legendary FrameGraph talk about the Frostbite engine:
https://www.gdcvault.com/play/1024612/FrameGraph-Extensible-Rendering-Architecture-in
I have most of a simple render graph abstraction implemented (e.g. virtualized passes and resources, pass culling, resource lifetime detection for transients, external resource registration, simple and clean API).
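To give a concrete picture of where I'm at, here's roughly the shape of the pass/resource model (heavily simplified, all names made up), showing virtualized handles plus the backward-sweep pass culling:

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

// Rough sketch of the pass/resource model (simplified, names invented).
using ResourceHandle = uint32_t;

struct Pass {
    std::string name;
    std::vector<ResourceHandle> reads;
    std::vector<ResourceHandle> writes;
    bool hasSideEffects = false;  // e.g. the pass that writes the backbuffer
    bool culled = false;
};

struct RenderGraph {
    std::vector<Pass> passes;
    uint32_t nextHandle = 0;

    // Virtualized resource: just a handle at declaration time.
    ResourceHandle createResource() { return nextHandle++; }

    void addPass(std::string name,
                 std::vector<ResourceHandle> reads,
                 std::vector<ResourceHandle> writes,
                 bool hasSideEffects = false) {
        passes.push_back({std::move(name), std::move(reads),
                          std::move(writes), hasSideEffects});
    }

    // Cull passes whose outputs nobody consumes: walk backwards, keep a
    // pass if it has side effects or writes a resource a kept pass reads.
    void compile() {
        std::vector<bool> consumed(nextHandle, false);
        for (auto it = passes.rbegin(); it != passes.rend(); ++it) {
            bool needed = it->hasSideEffects;
            for (ResourceHandle w : it->writes)
                if (consumed[w]) needed = true;
            it->culled = !needed;
            if (needed)
                for (ResourceHandle r : it->reads) consumed[r] = true;
        }
    }
};
```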
I'm currently working on the compile / execute phase of the render graph, looking mostly at resource inputs. Is there a decent way to tie descriptor sets and their resources into a render graph?
Would I have to recreate the descriptor sets per frame / every time the graph changes in order to catch changes in the number or order of bindings? Or is a render graph only meant to handle coarser-grained resources like depth/color outputs, full-screen texture inputs, or global buffers? Maybe I'm thinking about this wrong, but I'm looking for some footing on the best way to tie descriptor sets into a render graph.
As an example: if I have a mesh pipeline object for rendering a bunch of meshes, with a to-be-bound descriptor set containing four to-be-bound textures (albedo, roughness, metallic, emissive), is that best handled outside of a render graph pass description? Or is it something I can hook into the render graph in some decent way?
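In case it helps to see what I mean by "handled outside the pass description", here's a hypothetical sketch (names invented) of that split: the graph only tracks coarse per-pass inputs like a shadow map, while the per-material set of four textures lives entirely inside the execute callback, invisible to the graph:

```cpp
#include <cassert>
#include <cstdint>
#include <functional>
#include <string>
#include <vector>

using ResourceHandle = uint32_t;

// Bound via a per-material descriptor set; opaque to the render graph.
struct MaterialTextures {
    std::string albedo, roughness, metallic, emissive;
};

struct MeshPass {
    std::vector<ResourceHandle> graphReads;  // graph-tracked: e.g. shadow map
    std::function<void()> execute;           // binds material sets internally
};

MeshPass makeMeshPass(ResourceHandle shadowMap,
                      std::vector<MaterialTextures> materials,
                      int* drawCount /* stand-in for recording draw calls */) {
    MeshPass pass;
    pass.graphReads = {shadowMap};  // only coarse inputs are graph-visible
    pass.execute = [materials = std::move(materials), drawCount] {
        for (const MaterialTextures& m : materials) {
            (void)m;       // here we'd bind m's descriptor set...
            ++*drawCount;  // ...and issue the draw for that material
        }
    };
    return pass;
}
```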
And if a render graph should care about descriptor sets in any such way, is it better to pass descriptor sets directly into a render pass description as some sort of input, or to have descriptor sets allocated dynamically based on render pass inputs? By "better" I mean: is there any sort of industry standard for this?
Much thanks!