
Question about OpenGL Instancing and spritesheets. Very confused.

Started by September 16, 2020 05:39 AM
3 comments, last by EBZ 4 years, 2 months ago

I was able to get instancing working with OpenGL and created a grid of sprites at different positions, but all with the same art (sprite frame). I'm only passing the sprite coordinate matrix once as a uniform, which is fine for non-instanced sprites.

How would I go about passing a different sprite coordinate matrix for each instance? Can or should I pass the matrices as an attribute, the same way I did for the vec3 positions of the instances? I'm really confused about this :( Does anybody have suggestions they're willing to share?

---------------------------------------------------------------
Extra Info on how I render my spritesheet frames:

In order to get the right frame from my sprite sheet I do the following from the CPU:

matrix = mat4(1.0f);
matrix = translate(matrix, vec3(frame.x, frame.y, layer));
matrix = scale(matrix, vec3(frame.width, frame.height, 1.0f));

Then on the GPU I do the following:
uv = (sprite_matrix * vec4(uv_coord, 0, 1)).xy;

The sprite matrix is a uniform and the uv_coord is of course an attribute. The resulting uv vec2 is then passed to the fragment shader. This works very nicely so far.
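For completeness, the vertex shader side amounts to something like this (a minimal sketch; mvp and a_position are placeholder names used only for illustration, the rest follows the snippet above):

#version 330 core

layout(location = 0) in vec3 a_position;  // quad vertex position (placeholder name)
layout(location = 1) in vec2 uv_coord;    // quad UV in [0, 1]

uniform mat4 mvp;            // placeholder: projection * view * model
uniform mat4 sprite_matrix;  // frame translate/scale built on the CPU

out vec2 uv;

void main()
{
    // Map the unit UV square into the sprite sheet frame
    uv = (sprite_matrix * vec4(uv_coord, 0.0, 1.0)).xy;
    gl_Position = mvp * vec4(a_position, 1.0);
}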

Keeping batching out of scope for the moment, everything you render in whatever API you use is considered to be its own “object”. This means that some properties of an “object” are shared with other objects and some are different.

A material, i.e. a combination of a shader, one or more textures, and the other settings made in that specific shader, may be reused and referenced by multiple “objects”. The position, rotation and scale are usually different for each of them.

A typical render pipeline is essentially an ordered list of “objects” used in a render pass. Switching materials (shader and/or texture) is an expensive operation because our GPUs are most efficient when they can parallelize within a render pass. This means that one large buffer with thousands of vertices renders faster than a lot of individual “objects” with a low poly count each. That's why, in a good render pipeline, calls to the GPU are sorted to limit the number of separate render calls, in favour of higher performance.

When I use the term “object”, don't think of OOP but of a collection of properties that stay together for a single instance, and that is what you need here. While an “object” can share a material with other “objects”, it can't share the matrix, which means you have to pass an individual matrix to the GPU for each instance you want to render.
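With instancing, one common way to do that is to put one glm::mat4 per instance into a buffer and bind it as an instanced vertex attribute. A mat4 attribute occupies four consecutive vec4 locations, each with a divisor of 1. A rough sketch, not a drop-in solution: buildSpriteMatrices(), the attribute locations 3..6 and the existing VAO are assumptions here.

// Assumes a bound VAO, GLM headers, and a hypothetical helper that returns
// one matrix per sprite instance.
#include <vector>
#include <glm/glm.hpp>

std::vector<glm::mat4> instanceMatrices = buildSpriteMatrices();

GLuint instanceVBO;
glGenBuffers(1, &instanceVBO);
glBindBuffer(GL_ARRAY_BUFFER, instanceVBO);
glBufferData(GL_ARRAY_BUFFER,
             instanceMatrices.size() * sizeof(glm::mat4),
             instanceMatrices.data(),
             GL_DYNAMIC_DRAW);

// A mat4 attribute occupies four consecutive vec4 locations (3, 4, 5, 6 here).
for (int i = 0; i < 4; ++i)
{
    glEnableVertexAttribArray(3 + i);
    glVertexAttribPointer(3 + i, 4, GL_FLOAT, GL_FALSE, sizeof(glm::mat4),
                          (const void*)(sizeof(glm::vec4) * i));
    glVertexAttribDivisor(3 + i, 1);  // advance once per instance, not per vertex
}

In the vertex shader the uniform then becomes an instanced attribute, e.g. layout(location = 3) in mat4 sprite_matrix; and the existing uv = (sprite_matrix * vec4(uv_coord, 0.0, 1.0)).xy; line keeps working unchanged.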

EBZ said:
matrix = mat4(1.0f);
matrix = translate(matrix, vec3(frame.x, frame.y, layer));
matrix = scale(matrix, vec3(frame.width, frame.height, 1.0f));

You have to perform this for each instance on every frame: every time you want to render the sprite, set the translation component to where you want the object to appear on screen, then push that matrix to your shader before you tell the GPU to render that instance.
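In code, that per-instance loop could look something like this (a sketch only; Sprite, sprites, spriteMatrixLocation and the draw call are placeholders for whatever your renderer actually uses):

// Assumes <glm/gtc/matrix_transform.hpp> and <glm/gtc/type_ptr.hpp> are included
// and the shader with the sprite_matrix uniform is already bound.
for (const Sprite& s : sprites)   // 'Sprite' / 'sprites' are hypothetical types/data
{
    glm::mat4 matrix(1.0f);
    matrix = glm::translate(matrix, glm::vec3(s.frame.x, s.frame.y, s.layer));
    matrix = glm::scale(matrix, glm::vec3(s.frame.width, s.frame.height, 1.0f));

    glUniformMatrix4fv(spriteMatrixLocation, 1, GL_FALSE, glm::value_ptr(matrix));
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, nullptr);  // one quad per sprite
}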


You can pass the per-sprite matrices as a vec4: three floats for position and one float for uniform scale (which is what it looks like you have been using). Then set the transformation matrix as

[scale,  0,      0,      posX]

[0,      scale,  0,      posY]

[0,      0,      scale,  posZ]

[0,      0,      0,      1   ]

And use that for transforming the sprite attributes.
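In the vertex shader that matrix can be built from the per-instance vec4 on the fly, roughly like this (a sketch; a_posScale is a made-up name for the instanced attribute, and note that the GLSL mat4 constructor takes columns, so the translation goes into the last column):

layout(location = 3) in vec4 a_posScale;  // xyz = position, w = uniform scale

mat4 spriteTransform(vec4 ps)
{
    return mat4(vec4(ps.w, 0.0, 0.0, 0.0),   // column 0
                vec4(0.0, ps.w, 0.0, 0.0),   // column 1
                vec4(0.0, 0.0, ps.w, 0.0),   // column 2
                vec4(ps.xyz, 1.0));          // column 3: translation
}

// in main(): gl_Position = mvp * spriteTransform(a_posScale) * vec4(local_position, 1.0);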

What limits you there is the index attribute on each vertex indicating which uniform to pick and transform by. One byte gets you 256 unique transformation matrices. What is left is a 256-plane mesh, where the vertices of each individual plane carry a different index than those of the other planes, while the planes themselves are positioned the same (or close enough).
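That variant could look roughly like this in the vertex shader (a sketch; a_index and transforms are illustrative names, and 256 mat4 uniforms may exceed the uniform limit on some hardware, so check GL_MAX_VERTEX_UNIFORM_COMPONENTS first):

uniform mat4 transforms[256];           // one matrix per plane

layout(location = 0) in vec3 a_position;
layout(location = 2) in float a_index;  // per-vertex index, same value across one plane

void main()
{
    mat4 m = transforms[int(a_index)];
    gl_Position = m * vec4(a_position, 1.0);
}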

Das crazy. Thank you very much guys! Granted, excellent answers. You guys have been lots of help.

