
Mesh submission to hardware renderer

Started April 17, 2022 04:42 PM
3 comments, last by hplus0603 2 years, 7 months ago

I am currently developing a general-purpose 3D renderer and I am facing an architecture/design dilemma.

The renderer I built collects “render commands” into one queue, to be executed later, and “texture downloads” into another queue, for submitting texture pixels into a big texture atlas (these operations are later performed by the selected graphics API: OpenGL, DirectX, …).

When submitting render commands, the graphics API does not get involved; commands just get buffered into the queue.

void Render()
{
	// Client fills the commands
	RenderCommandQueue command_queue;
	PushMesh(&command_queue, ........);
	PushCube(&command_queue, ........);

	TextureDownloadQueue texture_queue;
	AddDownload(&texture_queue, position_in_atlas, "test.png", .....);

	// Submit commands to internal renderer
	SubmitCommandsToGraphicsApi(&command_queue, &texture_queue);
}

For drawing sprites and basic shapes, I decided to compose a vertex array on the CPU to be submitted to the graphics API later. But for meshes, since there could be many more vertices, I feel like it's not a very good idea to submit the vertices of every mesh to the GPU, EVERY FRAME.

I am posting this here to see if any of you have ideas for conveniently submitting meshes to the graphics API. (I have thought of creating a separate queue for models, but I am not sure how I would store them, since I cannot allocate/deallocate GPU buffers while submitting render commands.)

NOTE: While designing the renderer architecture, I used Handmade Hero's renderer as a reference.

First: What is your goal? If it's “great performance on modern GPUs,” then you should probably look at what the Direct3D 12, Vulkan, and Metal APIs actually need, and then structure your renderer architecture on top of that.

If your goal is just “learn about drawing graphics,” then you can probably choose whatever mechanism you want, chase it down, and analyze how it works out.

For example, if you have a mix of “static meshes” (pre-loaded objects) and “streaming vertex arrays” (runtime generated sprites) then you probably need to make that distinction in your renderer API. Have the user of your API call a function like “make mesh object,” and pass in both vertex and index arrays. The API then uploads this to the rendering API in appropriate buffers, and returns some identifier to the caller. You can then have a rendering command to “draw mesh 123 with material 456.”
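A minimal sketch of what that split could look like, in the same C-style pseudocode as the post above. All names here (CreateMesh, MeshHandle, GpuMesh, PushMeshInstance) are illustrative, not an existing API, and the actual GPU upload is stubbed out:

#include <stdint.h>

#define MAX_MESHES 1024

typedef struct RenderCommandQueue RenderCommandQueue; // from the original post

typedef struct GpuMesh
{
	uint32_t vertex_buffer; // e.g. an OpenGL buffer name or a D3D buffer slot
	uint32_t index_buffer;
	uint32_t index_count;
} GpuMesh;

typedef struct MeshHandle { uint32_t index; } MeshHandle;

typedef struct Renderer
{
	GpuMesh  meshes[MAX_MESHES];
	uint32_t mesh_count;
} Renderer;

// Load time, not per frame: upload the data once, hand back a stable id.
MeshHandle CreateMesh(Renderer *renderer,
                      const float *vertices, uint32_t vertex_count,
                      const uint32_t *indices, uint32_t index_count)
{
	MeshHandle handle = { renderer->mesh_count++ };
	GpuMesh *mesh = &renderer->meshes[handle.index];
	mesh->index_count = index_count;
	// The graphics-API-specific upload (glBufferData, D3D buffer
	// creation, ...) happens here, once.
	(void)vertices; (void)vertex_count; (void)indices;
	return handle;
}

// Per frame: the command queue records only the handle and per-draw
// state; no vertex data crosses into the queue.
void PushMeshInstance(RenderCommandQueue *queue,
                      MeshHandle mesh, uint32_t material_id)
{
	// Append a small fixed-size command, e.g.
	// { CMD_DRAW_MESH, mesh.index, material_id }.
	(void)queue; (void)mesh; (void)material_id;
}

The expensive work happens once in CreateMesh; the per-frame command stays a few bytes no matter how large the mesh is.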


Thanks for your reply.

My goal is to create a "clean", simple API that also allows me to do optimizations in the internal, graphics-API-specific code.

The approach you suggested is nice, but generating handles/IDs might be tricky, since the render command structure itself is transient (it gets reset every frame).

Yes, the pre-allocated mesh structure must live past a single frame.

Just like textures and shaders: you don't want to re-upload textures or re-compile shaders every frame.
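If you can't touch the graphics API while building commands, here is a rough sketch of one way around that (same illustrative names as the sketch above): reserve the handle immediately on the CPU, and queue the actual buffer creation for submit time, exactly like your texture download queue.

#include <stdint.h>

#define MAX_PENDING_UPLOADS 256

typedef struct MeshHandle { uint32_t index; } MeshHandle;

typedef struct MeshUpload
{
	MeshHandle      handle;       // slot reserved in the persistent mesh table
	const float    *vertices;     // caller keeps this memory alive until submit
	uint32_t        vertex_count;
	const uint32_t *indices;
	uint32_t        index_count;
} MeshUpload;

typedef struct MeshUploadQueue
{
	MeshUpload items[MAX_PENDING_UPLOADS];
	uint32_t   count;
} MeshUploadQueue;

static uint32_t g_next_mesh_index; // persists across frames, unlike the queues

// Safe to call while filling the command queue: no graphics API calls here.
MeshHandle QueueMeshUpload(MeshUploadQueue *queue,
                           const float *vertices, uint32_t vertex_count,
                           const uint32_t *indices, uint32_t index_count)
{
	MeshHandle handle = { g_next_mesh_index++ };
	MeshUpload *upload = &queue->items[queue->count++];
	upload->handle       = handle;
	upload->vertices     = vertices;
	upload->vertex_count = vertex_count;
	upload->indices      = indices;
	upload->index_count  = index_count;
	return handle; // valid in draw commands once the upload is processed
}

SubmitCommandsToGraphicsApi then walks the upload queue first, creates the real GPU buffers, and only afterwards executes any draw commands that reference the new handles.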
