
Mesh data and flexible data structure

Started by cozzie, October 28, 2017 12:18 PM
6 comments, last by cozzie 7 years, 3 months ago

Hi all,

I'm struggling with the following workflow I have in my engine:

1. Prepare the mesh in my asset tool, resulting in a nicely organized/stripped chunk of mesh data (in a binary file)

2. Load the binary file with the mesh data (deserialization)

3. Create a mesh object with its GPU buffers etc.

Currently I've split the deserialization from the mesh class, because I think it's a good idea to keep all binary I/O/serialization code separate from the logic that uses the data. When something has to change on the I/O side, I can simply change that piece of code.

The downside of this is that steps 2 and 3 need some intermediate data structure where the mesh data is stored, which is then used by the mesh class for creating buffers, setting properties etc.

Consider this struct:


struct MESHDATA
{
	std::vector<Vertex::PosNormTanTex>		VerticesA;
	std::vector<uint>						Indices;
	uint									NrChunks;
	PHYSMATH::CR_BV							BoundingVolume;

	std::vector<SUBMESH>					Chunks;

	MESHDATA() : NrChunks(0) { }
};

This all works fine as long as the mesh's vertices have a Position, Normal, Tangent and texcoord.
One solution would be to give the mesh data struct a vector of the 'maxed out' vertex type (with all possible elements), and then only store the elements that are available (leaving the rest empty). This would work, but then right before I create the GPU vertex buffer, I would need to copy all the data over to a vector of the right struct.

My goal is to prevent this additional copy of all vertices on the CPU side.

The only solution I've come up with so far, is this:


struct MESHDATA
{
	std::vector<Vertex::PosNormTanTex>		VerticesA;
	std::vector<Vertex::PosNormCol>			VerticesB;
	// ... one std::vector for each remaining vertex type

	Vertex::Type							VertexType;		// used to select the applicable std::vector
	std::vector<uint>						Indices;
	uint									NrChunks;
	PHYSMATH::CR_BV							BoundingVolume;

	std::vector<SUBMESH>					Chunks;

	MESHDATA() : VertexType(Vertex::Type::UNKNOWN), NrChunks(0) { }
};
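To make the intent concrete, the mesh class could then pick the active vector at buffer-creation time without any copy. A minimal sketch of that selection, assuming hypothetical `Vertex` types and a `BufferView` helper that are not the actual engine types:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical minimal vertex types, mirroring the names in the struct above.
namespace Vertex {
    enum class Type { UNKNOWN, POS_NORM_TAN_TEX, POS_NORM_COL };
    struct PosNormTanTex { float pos[3], norm[3], tan[3], tex[2]; };
    struct PosNormCol    { float pos[3], norm[3], col[4]; };
}

struct MeshData {
    std::vector<Vertex::PosNormTanTex> verticesA;
    std::vector<Vertex::PosNormCol>    verticesB;
    Vertex::Type type = Vertex::Type::UNKNOWN;
};

// Pointer/size/stride triple that can be handed straight to the
// vertex-buffer creation call, without copying vertex data.
struct BufferView { const void* data; size_t bytes; size_t stride; };

BufferView SelectVertexData(const MeshData& md)
{
    switch (md.type) {
    case Vertex::Type::POS_NORM_TAN_TEX:
        return { md.verticesA.data(),
                 md.verticesA.size() * sizeof(Vertex::PosNormTanTex),
                 sizeof(Vertex::PosNormTanTex) };
    case Vertex::Type::POS_NORM_COL:
        return { md.verticesB.data(),
                 md.verticesB.size() * sizeof(Vertex::PosNormCol),
                 sizeof(Vertex::PosNormCol) };
    default:
        return { nullptr, 0, 0 };
    }
}
```

Only the vector matching `VertexType` is ever touched; the others stay empty, so the cost of the unused vectors is just their (small) object size.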

I know this works, but I was wondering if someone has better thoughts on how to tackle this.
The only downside here, I think, is having a bunch of unused std::vectors, but that doesn't really have to be an issue, since the struct is only temporary/on the stack when I load up meshes.
Note: I understand how to determine the input layout/vertex layout; I'm just looking for a way to store the data in the intermediate data structure so that I can send it directly to the GPU (in the right format/struct).

Any input is appreciated.

Crealysm game & engine development: http://www.crealysm.com

Looking for a passionate, disciplined and structured producer? PM me

I look at this a little differently. I tend to break this into two separate items: what you have, and then another class which represents what the graphics API expects. The general idea is that from Max/Maya I spit out the intermediate structure, which contains all the data available. This is a 'slow' item, since it is bloated and not formatted in a manner usable by the graphics APIs. Then I create the low-level immutable graphics representation from the intermediate data, which has done all the copies and interleaving you mention. This does mean that when I ask to render a mesh, I load the big bloated data and perform the conversion step. This probably sounds like what you are already doing, but there is a trick here.

When I make a request, I generate a unique hash that represents the intermediate mesh, the target graphics API, and any custom settings involved in the intermediate-to-graphics conversion process. I then look in a cache for that key: if I have already performed the conversion, I just grab the immutable cached version and use it; if I have not, I perform the one-time conversion and store it in the cache, potentially even saving it to disk for the next run.
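The cache lookup described here can be sketched roughly as follows; `GpuMesh`, the string key scheme and the conversion callback are placeholders for illustration, not actual engine types:

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <unordered_map>

// Placeholder for the immutable GPU-side mesh produced by the conversion.
struct GpuMesh { int id = 0; };

// The key combines intermediate-mesh identity, target API and conversion
// settings; a real engine would hash the actual bytes involved.
using CacheKey = std::string;

class MeshCache {
public:
    // Returns the cached conversion if present; otherwise runs the
    // one-time conversion and stores the result for reuse.
    template <typename ConvertFn>
    std::shared_ptr<GpuMesh> GetOrConvert(const CacheKey& key, ConvertFn convert)
    {
        auto it = cache_.find(key);
        if (it != cache_.end())
            return it->second;              // already converted once
        auto mesh = std::make_shared<GpuMesh>(convert());
        cache_.emplace(key, mesh);          // could also be written to disk here
        return mesh;
    }

private:
    std::unordered_map<CacheKey, std::shared_ptr<GpuMesh>> cache_;
};
```

With this shape, the expensive intermediate-to-GPU conversion runs at most once per unique (mesh, API, settings) combination per run, and persisting the cache to disk extends that across runs.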

Later in development, or whenever it becomes appropriate, you can generate all these variations offline, remove the intermediates and 'only' use the final graphics data. This split eventually becomes your asset pipeline, and if you leave the intermediate handling in the engine, you can still use it as a fallback for older API capabilities as needed. A one-time startup and processing overhead is not too much to ask of the end user, as long as it is not hours of processing, of course.


Thanks. I've actually already split that off into the asset pipeline tool. The mesh data I'm loading is already "made ready" to be loaded at runtime. The only problem is that I want to store the mesh data in my intermediate data structure in a way that lets me send it directly to the GPU (in the mesh class). So far I've only achieved that by having a std::vector for every possible vertex format in the struct (and using the one that's applicable for the specific mesh). If I use one big struct with all possible vertex elements and just use the ones I need, then I need to copy all the data to the correct struct right before creating the buffer (within the mesh class).


I guess I didn't explain it well enough, or our terminology is getting mixed up. What you are describing is a multi-step process merged into a single step, which is inherently going to have redundant data and cause this sort of problem. Break the pipeline down into several steps; this is what I usually end up with:

Intermediate (My 'all' format or Collada, Obj whatever)
-> Basic prepared data (All data but in a GPU usable form, just not optimized and data which will be ignored by various materials.)
-> Material bound data (Unused data removed and optionally full vertex cache optimization.)
-> GPU ready


What I was describing is that the offline asset processing only does the first two steps, the material binding that reduces the vertex elements to only what is needed is what I was describing as the runtime cache portion.

There are a number of reasons to leave the material binding until runtime; primary among these is that the runtime is the only thing which actually knows what makes sense. For instance, if I use a mesh with a pipeline that expects UVs and another pipeline that is the same except that it doesn't use UVs, it is generally best to just reuse the same mesh and ignore the UVs in the second case.
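The material-binding step described above, reducing the vertex elements to only what a pipeline reads, might look like this one-time conversion; the vertex layouts here are illustrative assumptions:

```cpp
#include <cassert>
#include <vector>

// Hypothetical 'all data' vertex from the basic-prepared step.
struct FullVertex { float pos[3], norm[3], tan[3], uv[2]; };

// Material-bound vertex for a pipeline that ignores tangents and UVs.
struct PosNormVertex { float pos[3], norm[3]; };

// One-time conversion at material bind time: keep only the elements
// the material's pipeline actually reads, dropping the rest.
std::vector<PosNormVertex> BindToMaterial(const std::vector<FullVertex>& in)
{
    std::vector<PosNormVertex> out;
    out.reserve(in.size());
    for (const FullVertex& v : in) {
        PosNormVertex pv;
        for (int i = 0; i < 3; ++i) {
            pv.pos[i]  = v.pos[i];
            pv.norm[i] = v.norm[i];
        }
        out.push_back(pv);
    }
    return out;
}
```

Because this runs once per (mesh, material) pair and the result is cached, the copy cost is paid only on first use rather than every frame.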

All said and done, this is a case of premature optimization until you actually understand what the game needs and which combinations make the most sense. So, a little runtime cost is well worth it until later.

Thanks, I think I understand what you mean. For now I've split the pre-processing of the asset to a tool. The resulting mesh file is then read into the runtime engine.


9 hours ago, cozzie said:

Thanks, I think I understand what you mean. For now I've split the pre-processing of the asset to a tool. The resulting mesh file is then read into the runtime engine.

^^which is absolutely okay, because pre-processed data allows for the lowest load times.

On 28.10.2017 at 2:18 PM, cozzie said:

...
My goal is to prevent this additional copy of all vertices on the CPU side.
...

A solution is to think of the array of vertices as a blob instead of structured data. You do need some metadata, namely a specification of the vertex structure and the size (in bytes) of the vertex data. The specification can be as simple as an enum or as complex as a full-featured structural description. You'll then use the specification both for the buffer allocation and, if needed, as an overlay to access the vertices on the CPU side.
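A rough sketch of that blob-plus-metadata idea, with an assumed format enum and element counts (pos 3 + norm 3 + tan 3 + uv 2 floats, and pos 3 + norm 3 + col 4 floats) that are illustrative, not prescribed:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical format enum; the spec could equally be a full
// structural description of each vertex element.
enum class VertexFormat { PosNormTanTex, PosNormCol };

// Bytes per vertex for each format (11 and 10 floats respectively).
constexpr size_t StrideOf(VertexFormat f)
{
    switch (f) {
    case VertexFormat::PosNormTanTex: return 11 * sizeof(float);
    case VertexFormat::PosNormCol:    return 10 * sizeof(float);
    }
    return 0;
}

// Intermediate structure: vertices as an untyped blob plus metadata.
// The same bytes can be handed to the GPU buffer allocation directly,
// and the format tells the CPU side how to overlay them if needed.
struct VertexBlob {
    VertexFormat         format;
    std::vector<uint8_t> bytes;   // raw, tightly packed vertex data

    size_t VertexCount() const { return bytes.size() / StrideOf(format); }
};
```

The deserializer fills `bytes` straight from the file, and no per-format vector or extra CPU-side copy is needed.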

That's exactly what I do for resource management. Resources are stored with metadata used just for loading, metadata that describes the resource (this is actually a serialization), and usually one or more blobs of data. The resource loader interprets its own metadata to understand what to load, uses a deserializer to load the resource metadata, and loads the blob as, well, just a blob.


Hey. I went for a std::vector of floats, which can hold all vertex data independent of the elements. Alongside it I have an enum that tells which elements there are per vertex. I can now also use this same enum for my input layouts. I also added a simple helper to retrieve the number of floats for one of the enum values.
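The approach described here might look roughly like the sketch below; the layout names and float counts are assumptions for illustration, not the poster's actual definitions:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical element-combination enum, shared with the input layouts.
enum class VertexLayout { PosNormTanTex, PosNormCol };

// Helper: floats per vertex for a layout
// (pos 3 + norm 3 + tan 3 + uv 2, and pos 3 + norm 3 + col 4).
constexpr size_t FloatsPerVertex(VertexLayout layout)
{
    switch (layout) {
    case VertexLayout::PosNormTanTex: return 11;
    case VertexLayout::PosNormCol:    return 10;
    }
    return 0;
}

// One float vector holds any vertex format; the enum tells both the
// buffer-creation code and the input-layout code how to interpret it.
struct MeshVertexData {
    VertexLayout       layout;
    std::vector<float> floats;   // all vertices, element order per layout

    size_t VertexCount() const { return floats.size() / FloatsPerVertex(layout); }
};
```

Since the float vector's bytes are already in GPU order, `floats.data()` can be passed to buffer creation directly, with `FloatsPerVertex * sizeof(float)` as the stride.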


This topic is closed to new replies.
