martin.gee said:
@undefined thanks for your input. I'm curious how you handle object/mesh creation in terms of data.
Do you have some sort of object manager class that makes sure that mesh data, material data and such get linked? And how are buffer structures allocated and linked?
Well my system is quite specific to my needs. A lot of my meshes are generated by voxels through a variant of marching cubes. Those are in double precision since I'm generating large worlds. What I have is a bunch of different CPU mesh formats with the ability to add more. Then I have GPU mesh formats, which can have different data depending on the object I'm rendering. I can also add more of those.
For instance, my CPU voxel meshes are a list of voxels. This way I don't have to copy the data out of the voxels and then do a second copy to the GPU. Also, in some cases the data inside a voxel can change, but I still want to keep the same voxels in a given mesh; this is for supporting LOD. A more normal CPU mesh will simply be an array of vertices and one of indices. In any case, I have a bunch of templates that can be used for easy conversion from a CPU mesh to a GPU mesh. Note that mesh generation and download to the GPU is done in its own thread so it doesn't slow down rendering. I think this much is pretty standard from what I've seen.
I have the concept of a pipe, which lower down is really a command list. There are “view” pipes for rendering and “copy” pipes. With copy pipes I open a pipe, add a bunch of meshes, and close it. I can either tell it to block at close time until it's done downloading, or I can tell it to continue and do the block later, right before I actually need the mesh. This way I avoid blocking at all much of the time.
Objects can store meshes in any form that makes sense. I'm not really going for ECS right now since it doesn't really seem to fit what I'm doing. I do have the concept of a chunk which is generally used for terrain. But I've generalized it to mean a set of meshes that is part of an object. An object can have several chunks. Most will have only one, but planets may have hundreds, and chunks can be added or deleted at any time (usually for LOD).
Chunks also have their own transformation matrix. These are sent down every frame for any chunk that's rendered. I have to do this to support large worlds. Unlike a lot of systems, I can never go to “world” coordinates on the GPU, because I would lose precision since GPUs only support float (at least at a decent level of performance) and my planets would look like garbage. So I always go directly to view coordinates, which keeps anything close to the camera in high precision.
Next I assign materials, textures, and transformation matrices into a structured buffer. This is manual, and it's where I wonder how others structure their systems.
In my case materials are kept with each mesh. However, “mesh data” can be shared by more than one mesh so I can apply different materials to the same model.
My objects themselves are in a tree, so objects can have sub-objects. From what I gather, what I call an object is what other engines call an actor. In between objects and sub-objects, I have object references. There are different kinds of references. A simple one is a static reference. But for instance, I also have an astral mechanics reference which does orbits and so forth. For character control I have a walk reference that connects to keyboard and mouse inputs. And for the camera (also an object), I have follow references, which can track any object under the same parent as themselves.
Physics is a whole different thing. Most of the stuff is procedurally generated at run time. This is so I can have Earth-sized planets without actually storing most of the data. The problem is, since I don't have everything built at once, there is nothing to collide with. So each object has the option of having a sister physics object. The sister object builds meshes just around the player as you move, so it's kind of like “just in time” collision. The problem with using a graphics mesh for physics is there is no guarantee that a mesh will be built by the time a player arrives at a given location. If something lags, a player could fall through the geometry, since LOD is done in its own time. The physics geometry, however, I build in the main loop. If something lags, it lags, but at least the player won't ever fall through the earth, since this gets rid of race conditions. Since the physics mesh is very small, I haven't found lag to be a problem anyway.
But again, this is all very specific to what I'm doing so don't take this as any kind of template.
Edit: BTW, if you are just trying to make a game, DirectX 12 is far from the simplest way to do it. I mean, there are game engines that are mostly or fully free, which will save you thousands of hours. Most people starting out would never jump right into DX12. The fact that you have something working at all is pretty impressive for someone new. I'm not saying you definitely shouldn't work in DX12, just pointing out there are easier ways. If you want to learn lower-level development, or are doing something really special, it might be worth it.