18 hours ago, SephireX said:
Should smart pointers be used at a low level in the engine such as in the renderer or do they cause a significant decrease in performance?
This depends on the level of memory management your engine is working at. Big engines tend to manage their objects without smart/ref-counted pointers, or at least implement their own, because anything touching memory is runtime critical. The CPU is good at executing hundreds of millions of instructions per second, but memory and file access depend on systems in your computer that sit apart from the CPU; roughly speaking, anything that is physically far away from the CPU is slow by definition.
The impact is even bigger for file access because of disk access times, cache misses, and everything else that happens along the way.
I mostly work with plain pointers at the API level because they're easy to manipulate; I can, for example, reinterpret an integer as a 4-byte array without any overhead. My memory model is based on allocators that keep track of the number and size of allocations requested and raise an assertion for the programmer when leaks occur. Behind such an allocator may sit the OS-level malloc or a statically/dynamically allocated chunk of memory that was reserved during initialization.
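To make that concrete, here is a minimal sketch of what such a leak-tracking allocator could look like. The class and method names are made up for illustration, and it simply wraps malloc/free:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdlib>

// Minimal sketch of a leak-tracking allocator: it forwards to malloc/free
// and keeps a running count of live allocations and bytes. On shutdown an
// assertion fires if anything was never released.
class TrackingAllocator {
public:
    ~TrackingAllocator() {
        // A non-zero count here means some allocation was never freed.
        assert(liveAllocations_ == 0 && "memory leak detected");
    }

    void* allocate(std::size_t size) {
        void* p = std::malloc(size);
        if (p) {
            ++liveAllocations_;
            liveBytes_ += size;
        }
        return p;
    }

    void deallocate(void* p, std::size_t size) {
        if (!p) return;
        std::free(p);
        --liveAllocations_;
        liveBytes_ -= size;
    }

    std::size_t liveBytes() const { return liveBytes_; }

private:
    std::size_t liveAllocations_ = 0;
    std::size_t liveBytes_ = 0;
};
```

A real engine allocator would also record file/line per allocation so the assertion can tell you where the leak came from.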
Memory leaks are just one side of the problem; fragmentation is the other. Imagine allocating 2 integers and 1 long, which ends up in a memory layout spanning 16 bytes. Release the long and allocate 2 additional integers and everything is still fine. But if you instead release the long and allocate a short (2 bytes), you're left with a 6-byte gap. If that gap sits between two live allocations, you can never fit a long into it again. So the longer your engine/game runs, the more fragmented its memory becomes.
This is why engines tend to manage their memory themselves, using different approaches. Some have a garbage collector; others place objects of different sizes into different regions of a huge block of memory they allocated up front.
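The second approach can be as simple as a fixed-size pool per size class: every slot in a pool has the same size, so a freed slot is always reusable and the pool itself never fragments. A rough sketch, with invented names and no alignment handling:

```cpp
#include <cstddef>
#include <vector>

// Sketch of a fixed-size pool: one big block carved into equally sized slots.
// Because every slot has the same size, a freed slot can always be reused by
// the next allocation of that size class -- no fragmentation inside the pool.
class FixedPool {
public:
    FixedPool(std::size_t slotSize, std::size_t slotCount)
        : storage_(slotSize * slotCount), slotSize_(slotSize) {
        // Initially every slot is free.
        for (std::size_t i = 0; i < slotCount; ++i)
            freeSlots_.push_back(storage_.data() + i * slotSize_);
    }

    void* allocate() {
        if (freeSlots_.empty()) return nullptr;   // pool exhausted
        void* p = freeSlots_.back();
        freeSlots_.pop_back();
        return p;
    }

    void deallocate(void* p) {
        // Returning a slot makes it immediately reusable.
        freeSlots_.push_back(static_cast<unsigned char*>(p));
    }

private:
    std::vector<unsigned char> storage_;     // the pre-allocated block
    std::vector<unsigned char*> freeSlots_;
    std::size_t slotSize_;
};
```

A real engine would keep one pool per size class, respect alignment, and fall back to a general-purpose allocator for odd sizes.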
What I want to say here is that using smart pointers (the standard C++ ones, for example) can contribute to memory fragmentation, because by default those wrapper classes don't do anything cleverer than malloc/new under the hood.
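One caveat worth adding: std::shared_ptr does allocate its control block with operator new by default, but the standard library lets you route that allocation through your own allocator via std::allocate_shared, so smart pointers and a custom memory model aren't mutually exclusive. A minimal example, where EngineAllocator is just an illustrative stand-in for an engine's own allocator:

```cpp
#include <cstdlib>
#include <memory>
#include <new>

// Minimal Allocator-conforming wrapper; a real engine would forward to its
// own pools instead of malloc/free. All names here are illustrative.
template <class T>
struct EngineAllocator {
    using value_type = T;

    EngineAllocator() = default;
    template <class U>
    EngineAllocator(const EngineAllocator<U>&) noexcept {}

    T* allocate(std::size_t n) {
        if (void* p = std::malloc(n * sizeof(T)))
            return static_cast<T*>(p);
        throw std::bad_alloc();
    }
    void deallocate(T* p, std::size_t) noexcept { std::free(p); }
};

template <class T, class U>
bool operator==(const EngineAllocator<T>&, const EngineAllocator<U>&) { return true; }
template <class T, class U>
bool operator!=(const EngineAllocator<T>&, const EngineAllocator<U>&) { return false; }

struct Mesh { int vertexCount = 0; };

int main() {
    // Object and control block are allocated together through EngineAllocator
    // instead of a bare operator new.
    std::shared_ptr<Mesh> mesh = std::allocate_shared<Mesh>(EngineAllocator<Mesh>{});
    mesh->vertexCount = 3;
}
```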
18 hours ago, SephireX said:
Should the physics engine be wrapped in an abstraction layer to allow for other physics engines? For example this would allow a change of physics engine later. Although the wrapper would likely have to change to facilitate the new one
This is a question you have to ask when developing for multiple platforms and/or closed source. Normally, a non-plugin approach has less management overhead and may be slightly faster (it saves a few instructions per call, so nothing dramatic). I prefer the modular approach: one project per topic (the physics engine being one such topic), linked together in C++ as static libraries. Compile-time plugins, if you want to call it that.
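To make the "compile-time plugin" idea concrete, the wrapper from the question can be nothing more than an abstract interface in the core project, with each backend compiled as its own static library. The names below (IPhysics, BulletPhysics) are purely illustrative:

```cpp
// physics/IPhysics.h -- engine-facing interface, lives in the core project.
class IPhysics {
public:
    virtual ~IPhysics() = default;
    virtual void step(float deltaSeconds) = 0;
};

// physics_bullet/BulletPhysics.h -- one backend, built as its own static
// library and linked at compile time. Swapping the backend means linking a
// different library, not loading anything at runtime.
class BulletPhysics final : public IPhysics {
public:
    void step(float deltaSeconds) override {
        // ...forward to the third-party physics library here...
        (void)deltaSeconds;
    }
};
```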
The classic plugin system loads a library dynamically at runtime and lets the OS hook it into your running application. That can have a performance impact too.
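For comparison, the runtime flavor on a POSIX system usually boils down to dlopen/dlsym plus an exported factory function (Windows would use LoadLibrary/GetProcAddress instead). The symbol name CreatePhysics, the library path, and the IPhysics interface are assumptions carried over from the sketch above:

```cpp
#include <dlfcn.h>   // POSIX dynamic loading; link with -ldl on some platforms
#include <cstdio>

class IPhysics;  // the abstract interface from the previous sketch

// The plugin is expected to export a C-linkage factory function.
// "CreatePhysics" is a made-up symbol name for the example.
using CreatePhysicsFn = IPhysics* (*)();

IPhysics* loadPhysicsPlugin(const char* path) {
    void* handle = dlopen(path, RTLD_NOW);      // OS loads and relocates the .so
    if (!handle) {
        std::fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return nullptr;
    }
    auto create = reinterpret_cast<CreatePhysicsFn>(dlsym(handle, "CreatePhysics"));
    return create ? create() : nullptr;          // hand back the implementation
}
```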
18 hours ago, SephireX said:
The Banshee3D engine is an example of an engine that defines a common interface for physics, sound, renderer, rendering api and creates the implementations as plugins. This seems like a nice flexible approach instead of having the implementations as part of the main codebase. Are there any downsides to doing this?
OGRE is another example: it aims to be multiplatform, so it loads a DirectX, GL, or Vulkan renderer at startup and then passes everything through to that renderer.
The main question is: is the management overhead worth it if you use the same renderer/sound/physics engine 99% of the time? What are the benefits of a pluggable system?
Apart from extending functionality, the only reasons I see for an engine to support this are either in the editor, to support the editing/production-code workflow a little better, or when your engine is closed source but you want users to adapt it to different projects, replacing the 3D render pipeline with a 2D one, for example.
A multiplatform engine doesn't benefit much from plugins over conditional compilation, and plugins make debugging your code very hard, especially when they affect each other.
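For contrast, the conditional-compilation route can be as trivial as a preprocessor switch that picks one backend per build; the renderer structs below are stubs standing in for real backends:

```cpp
#include <cstdio>

// Build-time backend selection: each platform compiles exactly one branch.
// The renderer classes here are trivial stubs for illustration only.
#if defined(_WIN32)
struct D3D11Renderer { void draw() { std::puts("D3D11 frame"); } };
using PlatformRenderer = D3D11Renderer;
#elif defined(__APPLE__)
struct MetalRenderer { void draw() { std::puts("Metal frame"); } };
using PlatformRenderer = MetalRenderer;
#else
struct VulkanRenderer { void draw() { std::puts("Vulkan frame"); } };
using PlatformRenderer = VulkanRenderer;
#endif

int main() {
    PlatformRenderer renderer;   // statically known type, no plugin loading
    renderer.draw();
}
```

Each build contains exactly one code path, which is what keeps debugging simple.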
Engines like Unreal, Lumberyard, Urho (and mine) are built from source nowadays, so you have custom tooling or a make-system-driven environment that selects the best solution for whatever platform you build the engine for. They compile different modules into the components of the engine, and customization is a lot easier than struggling with plugins.
But don't get me wrong: they still define how you're expected to work with their APIs, so you'll end up with an interface anyway.