(I'm new to this site, please let me know if I've posted to the wrong place)
I've been creating games for four years, and 3D games for two. I've only used Unreal Engine 4 and Unity for a few weeks, so deciding on a design and the patterns for my engine was mostly up to me and my own experience. I've come up with what is, in my opinion, a solid and expandable design, but I need your help to improve it, because of course I can't know every little detail.
The engine is written in C++, and uses the OpenGL and Vulkan APIs to interact with the GPU.
Here's an overview of the system:
When the engine starts, it loads every asset. This won't stay that way; eventually it will load assets in the background, but that's a task for later, since it means dealing with threads and thread safety. It loads the whole scene from a so-called 'world file', which basically tells the engine where the models, textures, and sounds are, which ones to load, how to compose materials from them, and then how to build objects from those materials.
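To make the idea concrete, here is a purely hypothetical sketch of what such a world file might contain; the actual format isn't shown in this post, so every name and field here is an assumption:

```
# Hypothetical world-file sketch (not the engine's real format)
model    crate         models/crate.obj
texture  crate_diffuse textures/crate.png
material crate_mat     shader=basic texture=crate_diffuse
object   Crate01       model=crate material=crate_mat pos=0,0,0
```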
The engine uses an Entity-Component-System architecture. Everything in the game is a GameObject. It has a transform, a parent, a list of children, and a list of components. Every component adds some data to the object. The systems in the engine are called engines, and there are multiple of them: RenderingEngine, AudioEngine, PhysicsEngine, etc. The RenderingEngine has a list of Renderer components and Light components. When a GameObject with a Renderer component gets updated, it adds itself to the list of Renderer components in the RenderingEngine. So basically, the components carry some data and register themselves with the corresponding engine, so they can be updated or used during rendering.
What this means is that if you want to add new functionality, you create a component for that purpose, and if none of the existing engines do what you need, you write a new one. This is not a very complicated process, and all of it is API-independent, so you don't need to write everything twice because of the two graphics APIs.
The rendering process is like this:
The smallest elements of the rendering are VBO, Shader, and Material. A VBO and a Shader make up a GraphicsPipeline (there are more pipeline types, ComputePipeline and RayTracingPipeline, but they're not relevant right now). A GraphicsPipeline and a Material make a Renderer component. What's important here is that we create only one VBO per model, one Shader per program, and one Material per distinct type. So if we render the same model twice, we still have only one VBO. These are raw pointers, so I took special care with ownership: I collect them into a static list and delete them once, during the cleanup process. So we do not duplicate anything; we assemble every single Renderer object from the same GraphicsPipelines and Materials. This is useful because: 1) we don't duplicate anything, so there's less memory consumption, and 2) I can batch them together easily.
I use a batch renderer, so I bind the GraphicsPipeline (in Vulkan, that's the VkGraphicsPipeline object and the VBO; in OpenGL, the VBO and the Shader) only once, and render every object that uses that VBO and shader pipeline. Remember how I mentioned that the components add themselves to lists? The RenderingEngine has a models map, which maps Pipelines to lists of Renderers, so we know which Renderers use which pipeline. This is also why we create only one instance of each pipeline: it lets us compare pipelines by pointer, which is a cheap test.
Every other engine works in a similar way. I hope my design makes sense; please let me know if you spot anything that could be optimised much better because I overlooked something, or if any part of my explanation is unclear.