Currently, I populate a buffer containing all the information from the current scene that I need to render a single frame:
vector< const CameraNode * >           m_cameras;              // cameras to render from
vector< const ModelNode * >            m_opaque_models;        // models without transparency
vector< const ModelNode * >            m_transparent_models;   // models requiring blending
vector< const DirectionalLightNode * > m_directional_lights;
vector< const OmniLightNode * >        m_omni_lights;
vector< const SpotLightNode * >        m_spot_lights;
vector< const SpriteNode * >           m_sprites;              // drawn in a final sprite-only pass
RGBSpectrum                            m_ambient_light;        // ambient lighting term
const SceneFog *                       m_fog;                  // scene fog settings
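As a minimal sketch of how such a per-frame buffer can be filled without virtual dispatch (the node types and the `Add` overloads here are hypothetical stand-ins, not the actual engine classes):

```cpp
#include <cassert>
#include <vector>

// Hypothetical minimal node types standing in for the engine's scene nodes.
struct CameraNode {};
struct ModelNode { bool transparent = false; };
struct OmniLightNode {};

// Per-frame buffer: one typed list per node kind, so each render pass can
// iterate plain arrays instead of dispatching through a virtual interface.
struct FrameBuffer {
    std::vector< const CameraNode * >    m_cameras;
    std::vector< const ModelNode * >     m_opaque_models;
    std::vector< const ModelNode * >     m_transparent_models;
    std::vector< const OmniLightNode * > m_omni_lights;

    void Clear() {
        m_cameras.clear();
        m_opaque_models.clear();
        m_transparent_models.clear();
        m_omni_lights.clear();
    }

    // Overloaded Add resolves the node kind at compile time: no virtual call.
    void Add(const CameraNode &node)    { m_cameras.push_back(&node); }
    void Add(const OmniLightNode &node) { m_omni_lights.push_back(&node); }
    void Add(const ModelNode &node) {
        (node.transparent ? m_transparent_models : m_opaque_models)
            .push_back(&node);
    }
};
```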
At the moment I only have a forward pass (opaque -> transparent), plus a final pass for the sprites only. This pass takes the above buffer as input and iterates over its contents directly, which lets me avoid a virtual method call per node. The downside of every pass having to iterate the buffer itself is a giant blob of code that looks roughly the same for each pass.
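One common way to avoid duplicating the iteration skeleton in every pass is to factor the opaque -> transparent walk into a templated helper and let each pass supply only its per-model action. This is a sketch under assumed minimal types (`FrameBuffer`, `ModelNode`, and `ForEachModel` are illustrative names, not the engine's):

```cpp
#include <cassert>
#include <vector>

// Hypothetical minimal model node.
struct ModelNode { int id = 0; };

// Hypothetical slice of the per-frame buffer that a pass iterates.
struct FrameBuffer {
    std::vector< const ModelNode * > m_opaque_models;
    std::vector< const ModelNode * > m_transparent_models;
};

// Shared iteration skeleton: the opaque -> transparent walk is written once;
// each pass passes in a lambda instead of re-implementing the loop.
template < typename PerModelAction >
void ForEachModel(const FrameBuffer &buffer, PerModelAction &&action) {
    for (const ModelNode *model : buffer.m_opaque_models)      action(*model);
    for (const ModelNode *model : buffer.m_transparent_models) action(*model);
}
```

A pass then becomes `ForEachModel(buffer, [&](const ModelNode &m) { /* bind shaders, draw */ });`, keeping the no-virtual-call property since the lambda is inlined at compile time.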
My camera node contains a settings structure that selects the render mode, the BRDF, and some layers. The render modes include: visualize shading normals / normal-map shading normals / diffuse color / diffuse texture / reference texture, etc. The layers include wireframe, bounding boxes, etc. My current implementation tries to avoid having a separate pass class for each render mode and layer (BRDFs only require a separate pixel shader) by hacking the behavior of these extras into the forward pass (setting a different PS, iterating lights or not, using a fixed material, etc.). This approach limits extensibility: ideally, I could use shaders with their own constant buffer layouts instead of reusing and hacking the existing constant buffer layout.
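For comparison, the alternative being weighed could look like a small registry that resolves the camera's render mode to a dedicated pass object, each owning its own shaders and constant buffer layout. Everything here (`RenderMode`, `Pass`, `ResolvePass`) is a hypothetical sketch of that design, not existing code:

```cpp
#include <cassert>
#include <string>

// Hypothetical render modes mirroring the camera's settings structure.
enum class RenderMode { Shading, ShadingNormals, DiffuseColor };

// One pass per render mode: each pass owns its PS and constant buffer layout
// instead of the single forward pass being patched per mode.
struct Pass {
    virtual ~Pass() = default;
    virtual const char *Name() const = 0;
    // virtual void Render(const FrameBuffer &buffer) = 0;  // elided
};

struct ForwardPass : Pass {
    const char *Name() const override { return "forward"; }
};
struct NormalVisualizationPass : Pass {
    const char *Name() const override { return "normals"; }
};

// Registry resolving the camera's render mode to its dedicated pass.
Pass &ResolvePass(RenderMode mode) {
    static ForwardPass             forward;
    static NormalVisualizationPass normals;
    switch (mode) {
        case RenderMode::ShadingNormals: return normals;
        default:                         return forward;
    }
}
```

The cost of this one-to-one design is more classes; the benefit is that a new render mode only adds a pass instead of another branch inside the forward pass.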
So to sum up: do you eventually need a one-to-one mapping between passes and render modes?