I'm working on a system to support the in-game UI for my game engine. Below is a list of the scope and key features I intend to provide:
- Rendered on a 2D canvas on top of all gameplay graphics
- In other words, I want to restrict this to absolute basics. I wouldn't (yet) go into the weeds with “diegetic UI” that needs to be projected into 3D space (e.g. what Doom 3 or Dead Space has)
- Uses basic primitives (triangles, quads, etc.) and textures
- I already have a rendering system to which I can submit draw commands for primitives, so I just need the higher-level logic to turn UI elements into batches of primitives (roughly the kind of interface sketched right after this list)
- Supports basic user interface widgets for things like menus: frames, buttons, labels, etc.
- This includes both rendering and event handling, potentially with state changes based on events (e.g. a button having a “hovered” and “clicked” state)
- Parent-child logic to allow for composite UI elements (e.g window with buttons) that can be modified in bulk (show/hide, etc.)
- No need for a fully fleshed out window management system! I aim to restrict the UI to “one active window/frame at any one time” (menu pages, etc.). Said windows may have internal child elements (buttons, labels, lists, etc.) but I wouldn't need anything more complex, like having to manage Z ordering and focus for multiple windows.
- The one exception to the above: modal dialog (i.e. popup/notification) logic, which is drawn on top and consumes all inputs until the user acknowledges it
- In-game UI elements: HUD, inventory, etc.
- Besides barebones functionality, I'm hoping to add some features that fit the aesthetic and add to the game experience (animations, sound FX, etc.)
- Convenient client-side API
- The actual projects using this system have a convenient API to create and manage UI elements. They have interfaces to get/set widget data (e.g. getting the text from an input field), and they can subscribe and respond to UI events and modify the UI state accordingly
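To give an idea of the rendering interface I'm assuming above, here's a minimal sketch; DrawCommand, Vertex2D, Renderer2D and submit()/flush() are made-up illustrative names, not the actual API:

```cpp
#include <cstdint>
#include <utility>
#include <vector>

// Hypothetical shape of the existing renderer's interface.
struct Vertex2D {
    float x, y;          // position in screen space
    float u, v;          // texture coordinates
    std::uint32_t rgba;  // packed color
};

struct DrawCommand {
    std::vector<Vertex2D> vertices;      // e.g. two triangles per quad
    std::vector<std::uint32_t> indices;
    std::uint32_t textureId = 0;         // 0 = untextured / flat color
};

class Renderer2D {
public:
    // The UI layer's only job is to fill DrawCommands and hand them over.
    void submit(DrawCommand cmd) { queue.push_back(std::move(cmd)); }
    // Issues the batched geometry on top of the 3D scene, then clears the queue.
    void flush() { queue.clear(); }
private:
    std::vector<DrawCommand> queue;
};
```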
What are some good case studies for an architecture to use to accomplish this, ideally using modern programming paradigms?
Some of the main dilemmas I currently have:
- What does “UI implemented through composition” even look like? (See the first sketch after this list.)
- If I go down an ECS-like route of widgets being “bundles of components”, the systems that handle events, input, rendering, etc. become much less straightforward.
- How to handle relationships (parent-child, one-to-many, etc.)?
- This is trivial (if inefficient) in OOP, but less so when it's implemented via IDs and component pools
- What do the systems look like?
- To pick rendering as an example: I no longer have a virtual render function to call, and my logic may in fact depend on what components are present. I also need to deal with Z order and other considerations.
- What does the user-facing API look like?
- It's convenient for users if things are encapsulated in a base class pointer, and this convenience should not be ignored. If they have to start looking up individual components by ID, it could make the API much more of a slog to deal with (see the second sketch after this list).
- Generally speaking, assuming OOP is not the way to go, how should one break out of that mindset w.r.t UI?
- UI is the rare case where it seems to work decently enough, and many existing APIs (Qt, wxWidgets) still use this approach, though they are used for building entire GUI applications, not UI within an existing one
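To make the composition question concrete, here's a rough sketch of what I imagine an ECS-ish UI could look like; everything here (UiWorld, the component structs, renderUi) is an illustrative guess using plain maps instead of real component pools:

```cpp
#include <algorithm>
#include <cstdint>
#include <functional>
#include <optional>
#include <string>
#include <unordered_map>
#include <utility>
#include <vector>

using WidgetId = std::uint32_t;

// A widget is just an ID; what it "is" depends on which components it has.
struct Hierarchy { std::optional<WidgetId> parent; std::vector<WidgetId> children; };
struct Rect      { float x = 0, y = 0, w = 0, h = 0; int z = 0; };
struct Text      { std::string value; };
struct Clickable { std::function<void()> onClick; bool hovered = false; };
struct Visible   { bool value = true; };

struct UiWorld {
    WidgetId nextId = 1;
    std::unordered_map<WidgetId, Hierarchy> hierarchy;
    std::unordered_map<WidgetId, Rect>      rects;
    std::unordered_map<WidgetId, Text>      texts;
    std::unordered_map<WidgetId, Clickable> clickables;
    std::unordered_map<WidgetId, Visible>   visibility;

    WidgetId create(std::optional<WidgetId> parent = std::nullopt) {
        WidgetId id = nextId++;
        hierarchy[id].parent = parent;
        if (parent) hierarchy[*parent].children.push_back(id);
        return id;
    }
};

// The "system" that replaces a virtual render(): it only acts on widgets that
// have the relevant components, and sorts by Z before emitting geometry.
// (A real version would also propagate visibility down the hierarchy.)
void renderUi(const UiWorld& world /*, Renderer2D& renderer */) {
    std::vector<std::pair<int, WidgetId>> drawOrder;
    for (const auto& [id, rect] : world.rects) {
        auto vis = world.visibility.find(id);
        if (vis != world.visibility.end() && !vis->second.value) continue;
        drawOrder.emplace_back(rect.z, id);
    }
    std::sort(drawOrder.begin(), drawOrder.end());
    for (auto [z, id] : drawOrder) {
        (void)z;
        // Emit a quad for the Rect; if a Text component also exists for `id`,
        // emit glyph quads on top of it.
        if (world.texts.count(id)) { /* ... */ }
    }
}
```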
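And for the user-facing API dilemma, one possible compromise is a thin handle type wrapping an ID, so client code never touches the pools directly. Again just a sketch, built on the hypothetical UiWorld above:

```cpp
// A thin value-type handle over a widget ID, reusing the UiWorld sketch above.
class Widget {
public:
    Widget(UiWorld& w, WidgetId widgetId) : world(&w), id(widgetId) {}

    Widget createChild() { return Widget(*world, world->create(id)); }

    Widget& setRect(float x, float y, float width, float height) {
        world->rects[id] = Rect{x, y, width, height, 0};
        return *this;
    }
    Widget& setText(std::string value) {
        world->texts[id].value = std::move(value);
        return *this;
    }
    Widget& onClick(std::function<void()> fn) {
        world->clickables[id].onClick = std::move(fn);
        return *this;
    }
    void setVisible(bool visible) { world->visibility[id].value = visible; }

    std::string text() const {
        auto it = world->texts.find(id);
        return it != world->texts.end() ? it->second.value : std::string{};
    }

private:
    UiWorld* world;  // non-owning; the UI "world" outlives the handle
    WidgetId id;
};

// Usage reads much like retained-mode OOP, even though storage stays ECS-like:
//   Widget menu(world, world.create());
//   menu.createChild().setRect(10, 10, 200, 40).setText("Start")
//       .onClick([]{ /* start the game */ });
```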
I listed below all the approaches I've already considered:
OOP
The most straightforward approach. Have something like a Widget base class that all widget types inherit from, allowing for parent-child relationships and an interface for all the necessary logic. Client code can subclass the relevant widgets if needed, and it can instantiate and combine widgets to create the UI.
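For reference, this is roughly the shape I mean; a heavily simplified sketch, not the actual implementation:

```cpp
#include <memory>
#include <vector>

// The classic retained-mode skeleton: one base class, virtual hooks, children
// owned by their parent.
class Widget {
public:
    virtual ~Widget() = default;

    Widget* addChild(std::unique_ptr<Widget> child) {
        child->parent = this;
        children.push_back(std::move(child));
        return children.back().get();
    }
    void setVisible(bool v) { visible = v; }

    virtual void render(/* Renderer2D& renderer */) {
        if (!visible) return;
        for (auto& c : children) c->render();
    }
    virtual void onEvent(/* const UiEvent& event */) {}

protected:
    Widget* parent = nullptr;
    std::vector<std::unique_ptr<Widget>> children;
    bool visible = true;
};

class Button : public Widget { /* label, hovered/pressed state, click callback */ };
```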
I used this for my initial attempt, where I created a very simplified copy of the widgets in Qt. This does get the job done, but it feels clunky, and I figured I could do a lot better. A big advantage of OOP is how it can hide the “messy bits” from the client, meaning they just get a friendly interface to interact with (show/hide, subscribe to events, etc.); lifetime for them starts and ends with the Widget object, and the backend is left to deal with all the hassle of things like resources and rendering logic.
The downside, however, is that it gets very complicated to achieve modularity through interfaces and inheritance. To give one example, it would be nice to be able to separate the logical aspects of buttons (a thing the user can activate to trigger some functionality) from the visuals (since it can be anything from a colored rectangle to an animated sprite). This gets very messy with OOP, and composition would obviously be better, but I haven't found any good case studies.
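To illustrate the kind of split I'd want: something like the following, where the button only owns its activation logic and the visuals are injected. This is just a sketch (IButtonVisual and friends are made-up names), not a pattern taken from an existing library:

```cpp
#include <functional>
#include <memory>

// Behaviour/visual split: the button owns activation logic only and delegates
// its appearance to whatever visual it was constructed with.
struct IButtonVisual {
    virtual ~IButtonVisual() = default;
    virtual void draw(bool hovered, bool pressed /*, Renderer2D& renderer */) = 0;
};

class Button {
public:
    Button(std::unique_ptr<IButtonVisual> vis, std::function<void()> onActivate)
        : visual(std::move(vis)), activate(std::move(onActivate)) {}

    void setHovered(bool h) { hovered = h; }
    void press() { pressed = true; }
    void release() {
        if (pressed && hovered && activate) activate();
        pressed = false;
    }
    void render() { visual->draw(hovered, pressed); }

private:
    std::unique_ptr<IButtonVisual> visual;  // colored quad, animated sprite, ...
    std::function<void()> activate;
    bool hovered = false;
    bool pressed = false;
};
```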
Dear ImGui
I already use this for developer/debug UI, and the draw lists would solve the rendering part. The logic for generating the UI is also relatively simple and straightforward.
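For comparison, an immediate-mode pause menu is roughly this much code (standard Dear ImGui calls; the window flags and layout are picked just for illustration):

```cpp
#include "imgui.h"

// Immediate mode: the whole menu is rebuilt from game state every frame.
void drawPauseMenu(bool& resumeRequested, bool& quitRequested) {
    ImGui::SetNextWindowPos(ImVec2(200.0f, 150.0f));
    ImGui::Begin("Pause", nullptr,
                 ImGuiWindowFlags_NoResize | ImGuiWindowFlags_NoCollapse);
    if (ImGui::Button("Resume")) resumeRequested = true;
    if (ImGui::Button("Quit to menu")) quitRequested = true;
    ImGui::End();
}
```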
However, I'm not sure immediate mode is good for in-game UI overall. To get things like fancy layouts and whatnot, it seems like it would add a lot of unwanted complexity, both to the API and the backend. It seems more suited for when the priority is functionality, and is less ideal for when the UI is “part of the game experience”.
Web-UI
Would allow using HTML; the contents of webpages cover pretty much all the requirements I have, and there is plenty of existing material to learn from.
However, it would also introduce a huge load on memory and processing, practically “running an engine within the engine”. Feels like overkill.