
Component based architecture for models in OpenGL

Started by October 07, 2018 07:33 PM
4 comments, last by lGuy 6 years, 3 months ago

Hello, I was just wondering whether it was a thing to use a component-based architecture to handle models in OpenGL, much like in an entity component system. This structure has especially helped me in cases where I have models that need different resources. By that I mean, some models need a texture and texture coordinate data to go with it, others just need some color data (vec3). Some have a normals buffer whereas others just get their normals calculated on the fly (geometry shader). Some have an index buffer (rendered with glDrawElements) whereas others don't and get rendered using glDrawArrays etc...

Instead of branching off into complicated hierarchies to create models that have only certain resources, or coming up with strange ways to resolve certain problems concerning checking which resources some models have, I just attach components to each model such as a vertex buffer or texture coordinate buffer or index buffer etc...
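To make the idea concrete, here's a minimal sketch of that composition style in C++ (the names and the use of std::optional are my own illustration, not a claim about the poster's actual code):

```cpp
#include <cstdint>
#include <optional>

// Hypothetical GPU buffer handle; stands in for a GLuint buffer object.
struct Buffer { std::uint32_t id = 0; };

// A model owns only the components it actually needs; absent ones stay nullopt.
struct Model {
    Buffer vertices;                    // every model has positions
    std::optional<Buffer> texCoords;    // only textured models
    std::optional<Buffer> colors;       // only vertex-colored models
    std::optional<Buffer> normals;      // absent => computed in a geometry shader
    std::optional<Buffer> indices;      // present => glDrawElements, else glDrawArrays
};

// The renderer can branch on component presence instead of on model subclasses.
bool usesIndexedDraw(const Model& m) { return m.indices.has_value(); }
```

This avoids a class hierarchy entirely: adding a new optional resource is one new member, not a new subclass.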

So, I was just wondering whether I was using the traditional approach to model handling wrong, or whether this style of programming is a viable option, and whether there are flaws that I'm unable to foresee?

Firstly your choices aren't just inheritance hierarchies or ECS. Most software is built around composition! It's actually a core rule of OOP to prefer composition over inheritance when possible - so a hierarchy would probably be wrong here under traditional OOP too... 

The part of ECS where you can query whether a component type is present in a particular entity is formally known as the Service locator pattern. IMHO, this pattern fulfils your description of:

5 hours ago, lGuy said:

coming up with strange ways to resolve certain problems concerning checking which resources some models have

i.e. With an ECS style solution you'll have lots of "if a normal buffer component is present then do...", which I feel will be a complicated mess. 

So:

6 hours ago, lGuy said:

some models need a texture and texture coordinate data to go with it, others just need some color data (vec3). Some have a normals buffer whereas others just get their normals calculated on the fly (geometry shader). Some have an index buffer (rendered with glDrawElements) whereas others don't and get rendered using glDrawArrays etc...

I solve this in my engine by having the shader file be the master decision maker -- The shader has a list of all the vertex attributes that it expects as inputs. 

I then also add some meta data to my shaders for how the vertex data should be arranged in memory -- e.g. A single buffer with position, normal, UV interleaved, or three buffers with one attribute each. Also whether the attributes should be floating point, 16bit signed ints, 8bit fractions, etc... 
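That per-attribute metadata could be represented as plain data along these lines (a sketch; the type and field names are my assumptions, not the actual format described above):

```cpp
#include <string>
#include <vector>

// Hypothetical storage formats for a vertex attribute.
enum class AttribType { Float32, Int16Norm, UInt8Norm };

// One entry per vertex attribute the shader expects as input.
struct AttribDesc {
    std::string semantic;   // e.g. "position", "normal", "uv"
    AttribType  type;       // how the data is stored in the buffer
    int         components; // vec3 => 3, vec2 => 2, ...
    int         bufferSlot; // which buffer this attribute lives in
};

// All attributes in buffer slot 0 => a single interleaved buffer;
// distinct slots per attribute would mean one buffer each.
std::vector<AttribDesc> interleavedLayout() {
    return { {"position", AttribType::Float32,   3, 0},
             {"normal",   AttribType::Int16Norm, 3, 0},
             {"uv",       AttribType::Float32,   2, 0} };
}
```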

My model importer can then look at the shader file that's been specified for a particular mesh and import the required attributes and store them in the appropriate structure (one buffer vs multiple, type conversions, etc). The importer can then also generate a structure that tells the runtime how to bind this data to the pipeline - VAO args for GL or an Input Layout in D3D. The importer can also generate a draw structure per mesh, specifying which "draw" function should be used by the runtime (indexed vs non-indexed) and the parameters (primitive type, number of primitives, buffer offsets, etc). 

At runtime I don't have a hierarchy of model types, nor dynamic composition and service location... Just buffer descriptions, draw item descriptions, VAO descriptions, binding command descriptions, pipeline state command descriptions. 
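Those plain-data descriptions might look something like the following (my sketch of the idea; field names and the enum are assumptions, and the GL submit call is shown only as a comment since it needs a live GL context):

```cpp
#include <cstdint>

// Which draw entry point the runtime should use for this mesh.
enum class DrawKind { Arrays, Elements };   // glDrawArrays vs glDrawElements

// A draw item the importer emits and the runtime replays verbatim.
struct DrawItem {
    DrawKind      kind;
    std::uint32_t vao;            // pre-built vertex array object handle
    std::uint32_t primitiveType;  // e.g. the value of GL_TRIANGLES
    std::int32_t  count;          // number of vertices or indices to draw
    std::int32_t  firstOrOffset;  // first vertex, or byte offset into the index buffer
};

bool isIndexed(const DrawItem& d) { return d.kind == DrawKind::Elements; }

// The runtime branches once on the kind; no model-type hierarchy needed:
// void submit(const DrawItem& d) {
//     glBindVertexArray(d.vao);
//     if (isIndexed(d))
//         glDrawElements(d.primitiveType, d.count, GL_UNSIGNED_INT,
//                        (const void*)(intptr_t)d.firstOrOffset);
//     else
//         glDrawArrays(d.primitiveType, d.firstOrOffset, d.count);
// }
```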

15 hours ago, Hodgman said:

I solve this in my engine by having the shader file be the master decision maker -- The shader has a list of all the vertex attributes that it expects as inputs. 

Thanks for your reply :). I have a question: how does your shader know whether it should compute the color of the fragment from a texture or from a color given through attributes?

By 'decision maker' I guess I meant dictator :D

I'll write a shader that does one specific thing - such as fetching color from a texture. When an artist is creating a model, they can choose which shaders to use on each part. If they choose a shader that requires a texture, then they'll have to specify one (or the model importer will complain until they do). Likewise if they choose a shader that requires vertex colors and they haven't painted any, the model importer will fail with an error telling them what they've done wrong. 

If that's a bit too inflexible, it's common to build an "uber shader" system on top of this. You can group together those two shaders with a simple decision tree that does some checks - if the user has plugged in a texture, use the first shader. If they've created per-vertex colors, use the second shader. If they've provided both, use a third shader that multiplies both colors together. If they've provided neither, use a fourth shader with a hard-coded grey color. 

You can automate a lot of this by writing your shaders with #if preprocessor commands, and some meta-data specifying what range of values is valid for a list of #defines. You can then iterate through every permutation of valid #defines and spit out a huge list of different shaders. 

e.g. If you had HAS_COLOR_MAP 0/1, HAS_COLOR_ATTRIB 0/1, HAS_NORMAL_MAP 0/1, that's 8 different unique shaders that can be generated. With these kinds of systems you don't want to add too many options or you'll quickly have thousands of shaders... Alternatively you can scan your content files to find out which permutations are actually required, and only generate those ones. 
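A sketch of generating those permutations in C++ (the function and option names are illustrative; a real system would also compile and cache each variant):

```cpp
#include <cstddef>
#include <string>
#include <vector>

// For n binary options, produce 2^n shader source strings, each with a
// "#define OPTION 0/1" line per option prepended to the shared body.
std::vector<std::string> permuteShaderSources(
        const std::vector<std::string>& options, const std::string& body) {
    std::vector<std::string> variants;
    const std::size_t n = options.size();
    for (std::size_t mask = 0; mask < (std::size_t{1} << n); ++mask) {
        std::string src;
        for (std::size_t i = 0; i < n; ++i)
            src += "#define " + options[i] +
                   ((mask & (std::size_t{1} << i)) ? " 1\n" : " 0\n");
        variants.push_back(src + body);
    }
    return variants;
}
```

With the three options above this yields 2^3 = 8 source strings, matching the count in the post; scanning content for the permutations actually in use would prune that list before compiling.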

Wow, that's actually really cool haha. How would you store the shaders, though? Some models will probably use the same shader, right? So there would need to be some way to map the shaders to certain models.

- Sorry for all these questions...

This topic is closed to new replies.
