Edit: Just realized this might be under the wrong channel as it's related to engines/graphics rather than game design (sorry)
I'm trying to implement a camera for my game engine. Looking at other engines as a reference, there is an editor camera which you use to freely move around the scene, but you also have cameras that you can add to the scene for use in the actual game, and depending on the context (i.e. being in edit or play mode) the engine switches between them. What confuses me is whether this editor camera is still considered a scene object that is just controlled differently depending on the context, or whether it is considered part of the editor's tooling and separate from the scene objects. I imagine there isn't a single correct answer to this, but any advice or tips to point me in the general direction, or just to better understand how to design this, would be helpful.
Game engine architecture. Are cameras considered a tool or scene object?
Either works, realistically. Having everything, including editor-related objects like cameras, as scene objects is generally easier to implement, since you don't need any special handling just to support the feature. However, it's also much more limiting and less clean, and might require additional workarounds (such as Unity's hideFlags).
I personally use explicit handling for all editor objects, which means I need additional abstractions for editor/rendering, but it's a lot cleaner to have separate editor/in-game implementations.
But again, if you feel like scene objects are easier for your case, you can do that too (and potentially refactor later if you ever find it limiting).
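To make the trade-off concrete, here is a minimal sketch of the scene-object approach with a hideFlags-style workaround. All names here (`NodeFlags`, `FindGameplayNodes`) are my own invention, not from Unity or any particular engine:

```cpp
#include <memory>
#include <string>
#include <vector>

// Hypothetical flag marking objects that belong to editor tooling,
// so serialization and gameplay queries can skip them.
enum class NodeFlags { None, EditorOnly };

struct SceneNode
{
    std::string name;
    NodeFlags flags = NodeFlags::None;
};

struct Scene
{
    std::vector<std::unique_ptr<SceneNode>> nodes;

    // Gameplay-facing query: editor-only objects are filtered out,
    // mimicking what Unity's hideFlags achieves.
    std::vector<SceneNode*> FindGameplayNodes() const
    {
        std::vector<SceneNode*> result;
        for (const auto& node : nodes)
            if (node->flags != NodeFlags::EditorOnly)
                result.push_back(node.get());
        return result;
    }
};
```

The cost is exactly what Juliean describes below: every system that iterates the scene (saving, searching, the hierarchy view) has to remember to check the flag.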
Typically in engines they're implemented as a component that can be manipulated in the world. They get attached to game objects, actors, or similar through composition, the same as other components like textures, meshes, collision objects, sound emitters, effects emitters, and so on.
The rendering system is decoupled from the simulation. There could be any number of camera components, any of them could be active or inactive at any time, and the logic in the components could direct the rendering system to do many different things. For example, a camera could provide a view for a picture-in-picture, or a minimap, or even a feed displayed on another world object, like a security monitor. There could be multiple cameras setting up eye positions for VR, which often uses 3 active cameras: right eye, left eye, and the display on the computer.
As it is logically yet another component attached to a game object, you could mount it on a shoulder or in a fixed position; more typically there is a floating camera object tethered to the player's character, orbiting, moving, and sliding around to give a third-person view without being blocked by items in the world. As a component you can programmatically adjust it the same as any other component. You could script several different cameras, like 4 different cameras watching a cut scene, and then control which camera is live at any time.
Go crazy and even set up a virtual TV studio with camera components attached to video camera models, and people standing behind them animated as though they were running the broadcast, switching between cameras and having a little red light on the one that is currently active.
@Juliean if I'm interpreting your response correctly, the editor's camera would just be a scene object that is controlled while in some sort of edit mode (not actual code, but a rough draft of what I was thinking):
class SceneNode {};
class CameraNode : public SceneNode {};

class Scene {
private:
    std::vector<std::unique_ptr<SceneNode>> nodes; // pointers avoid slicing derived nodes
    std::shared_ptr<CameraNode> activeCamera;
public:
    void setActiveCamera(std::shared_ptr<CameraNode> camera);
    std::shared_ptr<CameraNode> getActiveCamera() const;
};

class Editor {
    CameraNode* editorCamera;
    void onUpdate() {
        // update the camera
    }
};
Regarding the second part of your message about explicit handling for editor objects, I'm a little curious what you mean by that, because (and I doubt this is remotely close to what you described) I tried making the editor camera its own thing by not having Camera inherit from SceneNode and instead making it the base class for cameras. But then if I wanted a CameraNode I would need to derive from both Camera and SceneNode, and I don't think using multiple inheritance is ideal, so trying to make the editor camera its own thing didn't go so well. Sorry if this is a lot to ask 😅
steamdog said:
@Juliean if I'm interpreting your response correctly the editors camera would just be a scene object that is controlled while in some sort of edit mode (not actual code, but a rough draft of what I was thinking)
In the simplified format, yes, it would look something like that.
steamdog said:
Regarding the second part of your message about explicit handling for editor objects, I'm a little curious what you mean by that, because (and I doubt this is remotely close to what you described) I tried making the editor camera its own thing by not having Camera inherit from SceneNode and instead making it the base class for cameras. But then if I wanted a CameraNode I would need to derive from both Camera and SceneNode, and I don't think using multiple inheritance is ideal, so trying to make the editor camera its own thing didn't go so well. Sorry if this is a lot to ask 😅
Well, the explicit handling is more complicated, so no wonder it is harder to understand. I also didn't go into detail, so let me try to explain a bit more:
In my own system, game cameras are handled via a Camera3D component (I'm using ECS). Those contain a Camera object, which is the actual implementation of the camera logic, containing the code for calculating view matrices, etc. It's only a lightweight non-virtual class, without any code for deriving or scripting the view; that is done via implementors like the Camera3D component, which simply use it via composition. The core renderer also does not depend on the Camera3D component, but on the Camera objects.
Thus the editor system itself does not instantiate a Camera3D component, but simply has a Camera object, which it applies to the RenderView that it manages. This object is, as described above, not really part of the world. It's only telling the editor view to render a specific, editor-tool-controlled viewport. Some pseudo-code:
// base-camera object; represents a viewport into the world without any business logic
struct Camera
{
private:
Matrix m_mViewProjection;
};
// ECS representation of a camera that is part of the scene world
struct Camera3D: Component<Camera3D>
{
Camera camera;
};
// implementation of a render-view => those are used to have a window/texture to render into
class IRenderView
{
public:
// receives a base-camera object to use as the viewport; allowing any implementor to supply the camera
virtual void SetCamera(const Camera& camera) = 0;
};
// class handling the game-view in an editor-window
class EditorView
{
public:
void Init()
{
m_pView = renderer.CreateView();
// assign camera to view; camera can then be editor-controlled & will update the view
m_pView->SetCamera(m_camera);
}
private:
std::unique_ptr<IRenderView> m_pView;
Camera m_camera;
};
Hope this makes a bit more sense? Though I had a bit of trouble trying to find the right words, so let me know if it still doesn't fully make sense. The advantage of a system like that is that the editor's controls are entirely decoupled from the game world. If you do not do that, but instead use a system like what you originally described, then you'll end up with a game world that partially contains objects that do not truly belong to it. And then you need logic to exclude them in systems that do not need to process them. For example, the editor camera should obviously not be saved into the scene file. It should also not be part of any save games. It should not be shown in the scene-object view. A gameplay script searching globally for all scene cameras should not find it. So you need some way to make this camera not be like all the other scene cameras that you might use, as per Frob's description.
Unity does that via a hideFlags property, which is doable, but a lot less clean.
Here is my take on how to design the camera and viewport and their relation to objects in the scene. Like Juliean, I favor an ECS-like design where the camera is a component type. A key point is to make the rendering of the scene controlled at the highest level by the GUI system (which is the opposite of how Unity works). The scene viewport is rendered to the screen within a GUI widget. The game and editor both use this system.
When the game starts, it creates the “main menu” GUI. The main menu can then change the GUI to the “new-game-loading-view” if the user selects “new game”. The new-game-loading-view handles loading the assets and creating the scene, and when finished it changes the current GUI to the “game view”, which manages the game state and interface. The “game view” has game-specific logic and a scene viewport GUI which it displays on the screen. The game view keeps track of what camera and scene should be rendered in the scene viewport.
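The view-switching flow just described could be sketched roughly like this; the class names (`GuiRoot`, `MainMenuView`, etc.) are my own shorthand, not from the poster's engine:

```cpp
#include <memory>

// Hypothetical sketch of the GUI-driven flow: the root owns exactly one
// current view, and each view decides when to hand control to the next.
struct View
{
    virtual ~View() = default;
};

struct GameView : View {}; // would own the scene viewport widget

struct LoadingView : View
{
    // When asset loading finishes, the loading view produces the game view.
    std::unique_ptr<View> Finish() { return std::make_unique<GameView>(); }
};

struct MainMenuView : View
{
    // "New game" selected: hand off to the loading view.
    std::unique_ptr<View> NewGame() { return std::make_unique<LoadingView>(); }
};

struct GuiRoot
{
    std::unique_ptr<View> current = std::make_unique<MainMenuView>();
};
```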
struct Camera // component type
{
// viewport parameters: aspect ratio, field of view, projection type, etc. (no position)
// Has functions for calculating projection matrices.
};
struct OpaqueReference
{
void* data;
const ResourceType* type; // Needed for downcasting, essentially just a UUID.
};
struct SceneObject // entity type
{
std::vector<OpaqueReference> components; // can contain a camera component
std::vector<SceneObject*> children;
SceneObject* parent;
Matrix4f localToWorldTransform; // recomputed every frame based on hierarchy
TRSTransform localToParentTransform; // separate position, rotation, scale
};
struct Scene
{
std::vector<OpaqueReference> resources; // Can contain SceneObjects or other things without position
};
class SceneRenderer // Base class, could have ForwardRenderer or DeferredRenderer subclasses
{
public:
    virtual ~SceneRenderer() = default;
    virtual bool renderScene( const Scene* scene, const Camera* camera, const Matrix4f& cameraToWorld, RenderTarget* target ) = 0;
};
struct SceneViewport : public GUIWidget // GUIWidget is base class for my GUI library
{
// Render the camera and scene using the scene renderer to the widget bounding box.
virtual bool renderWidget( GUIRenderer& guiRenderer, const Range3f& boundingBox ) override;
const Scene* scene;
const Camera* camera;
SceneRenderer* renderer;
};
It should be noted that this is not quite the whole implementation. This omits how, when a Scene or SceneObject is added to the engine, a counterpart of that scene or scene object is created for each modality (graphics, physics, acoustics). This avoids doing any downcasting of components to concrete types except when the scene object is first added. For example, when a SceneObject with a Camera component is added to the engine, the GraphicsSceneContext (the graphics counterpart to a Scene) maintains a pointer to the cameras and their parent scene objects (for their transformations), so that if we want to render the scene with a particular camera, we know where the camera is located.
Similarly, every SceneObject that has a graphical representation has a GraphicsObject component, which has pointers to the mesh(es) to render for that object and material overrides. When a GraphicsObject component is added to the engine, the GraphicsSceneContext remembers this association with its parent SceneObject so that it can maintain more efficient culling data structures internally. Before rendering a frame, the current local-to-world transform of the SceneObject is copied into the GraphicsObject and the culling data structures are updated.
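The per-frame transform sync described above might look like the following. This is a heavily simplified sketch; the `Matrix4f` stand-in and member names are invented for illustration, not the poster's actual types:

```cpp
#include <vector>

// Stand-in for a real 4x4 matrix type; one field is enough for the sketch.
struct Matrix4f { float translationX = 0.0f; };

struct SceneObject { Matrix4f localToWorldTransform; };

struct GraphicsObject
{
    const SceneObject* source = nullptr; // association made when added to the engine
    Matrix4f worldTransform;             // snapshot used for culling/rendering
};

struct GraphicsSceneContext
{
    std::vector<GraphicsObject> objects;

    // Called before rendering a frame: copy each scene object's current
    // transform into its graphics counterpart, then culling structures
    // would be updated from these snapshots.
    void SyncTransforms()
    {
        for (auto& obj : objects)
            obj.worldTransform = obj.source->localToWorldTransform;
    }
};
```

The payoff of the snapshot is that rendering never reads simulation state directly, which keeps the downcasting and dependency on concrete component types confined to the moment an object is added.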
This design is working very well for me in my engine and the game I am working on. The key insights in my engine design are:
- All high-level logic and systems should be driven by GUIs, both in the editor and game. This avoids Unity-like nonsense of having to create a 3D scene just to display a 2D GUI.
- Low level engine logic (e.g. graphics rendering, physics, etc.) should all be done in centralized engine systems which have a well-defined update order. Components should NOT contain anything but data and small self-contained functions that have no external dependencies. Every other operation that has dependencies should be moved to an engine system. System-to-system dependencies are established when the engine is created.
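The second point (components are data, systems own the logic and run in a well-defined order) could be sketched like this; the system names and the logging mechanism are illustrative only:

```cpp
#include <string>
#include <vector>

// Components hold only data; all work happens in centralized systems
// that the engine ticks in a fixed order established at creation time.
struct System
{
    virtual ~System() = default;
    virtual void Update(std::vector<std::string>& log) = 0;
};

struct PhysicsSystem : System
{
    void Update(std::vector<std::string>& log) override { log.push_back("physics"); }
};

struct GraphicsSystem : System
{
    void Update(std::vector<std::string>& log) override { log.push_back("graphics"); }
};

struct Engine
{
    // Order established once when the engine is created; never reshuffled.
    std::vector<System*> systems;

    void Tick(std::vector<std::string>& log)
    {
        for (System* s : systems)
            s->Update(log);
    }
};
```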
Another thing to keep in mind is how to deal with saving and loading the game state. You will need some way to know what camera in the scene is currently active when the game is loaded, as well as what scene object is being controlled by the player. I have a relatively complex mechanism to handle this but it's outside the scope of this post.