I am currently developing a general-purpose 3D renderer and I am facing an architecture/design dilemma.
The renderer I built collects "render commands" into one queue, to be executed later, and "texture downloads" into another queue for submitting texture pixels into a big texture atlas (these operations are later performed by the selected graphics API [OpenGL, DirectX, ….]).
When submitting render commands the graphics API is not involved; commands just get buffered into the queue.
void Render()
{
    // Client fills the commands
    RenderCommandQueue command_queue;
    PushMesh(&command_queue, ........);
    PushCube(&command_queue, ........);

    TextureDownloadQueue texture_queue;
    AddDownload(&texture_queue, position_in_atlas, "test.png", .....);

    // Submit commands to internal renderer
    SubmitCommandsToGraphicsApi(&command_queue, &texture_queue);
}
For drawing sprites and basic shapes I decided to compose a vertex array on the CPU, to be submitted to the graphics API later. But for meshes, since there could be many more vertices, it doesn't feel like a good idea to submit the vertices of every mesh to the GPU, EVERY FRAME.
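For context, my sprite path looks roughly like this (a simplified sketch, all names made up, not my real API): every push appends four vertices to a plain CPU array that gets handed to the graphics API once per frame.

```cpp
#include <cassert>
#include <vector>

// One textured vertex; UVs would normally come from the atlas.
struct Vertex { float x, y, u, v; };

// CPU-side vertex array, rebuilt from scratch every frame.
struct SpriteBatch {
    std::vector<Vertex> vertices;
};

// Append one quad (four vertices) for a sprite at (x, y) with size (w, h).
static void PushSprite(SpriteBatch *batch,
                       float x, float y, float w, float h)
{
    batch->vertices.push_back({x,     y,     0.0f, 0.0f});
    batch->vertices.push_back({x + w, y,     1.0f, 0.0f});
    batch->vertices.push_back({x + w, y + h, 1.0f, 1.0f});
    batch->vertices.push_back({x,     y + h, 0.0f, 1.0f});
}
```

This is cheap for sprites because the whole frame's geometry fits in one small upload; my worry is that doing the same thing for full meshes re-uploads the same vertices over and over.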
I am posting this here to see if any of you have ideas for conveniently submitting meshes to the graphics API. (I have thought of creating a separate queue for models, but I am not sure how I would store them, since I cannot allocate/deallocate GPU buffers while submitting render commands.)
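To make the "separate queue for models" idea concrete, here is a rough sketch of what I was imagining (all names hypothetical): the client registers a mesh once and gets back an opaque handle; the CPU copy sits in an upload queue until the graphics-API side drains it and creates the real GPU buffer, so draw commands only ever carry the handle, never the vertices.

```cpp
#include <cassert>
#include <cstdint>
#include <utility>
#include <vector>

// Opaque mesh identifier; draw commands reference meshes through this.
typedef uint32_t MeshHandle;

// CPU copy of a mesh, waiting to be turned into a GPU buffer
// when the graphics API drains the queue at submit time.
struct MeshUpload {
    MeshHandle handle;
    std::vector<float> vertices;
};

struct MeshUploadQueue {
    std::vector<MeshUpload> uploads;
    MeshHandle next_handle = 1; // 0 reserved as "invalid"
};

// Called by the client while building commands; no GPU work happens here,
// the handle is valid immediately and can go into draw commands.
static MeshHandle RegisterMesh(MeshUploadQueue *queue,
                               std::vector<float> vertices)
{
    MeshHandle handle = queue->next_handle++;
    queue->uploads.push_back({handle, std::move(vertices)});
    return handle;
}
```

The part I am unsure about is the other end: who owns the handle-to-GPU-buffer table, and when freed buffers get recycled, given that allocation is only legal inside the graphics-API submit phase.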
NOTE: While designing the renderer architecture I used Handmade Hero's renderer as a reference.