Hello everyone!
I would like to reduce the memory used by my vertex buffers.
For instance, if my model does not have per-vertex colors, I would like to remove the "color" component from the vertex buffer (making it smaller) and pass the color of the whole model to my vertex shader as a uniform vec4 (or vec3). Likewise, if my model does not have per-triangle normals, I would like to remove the "normal" component from the vertex buffer (again making it smaller) and pass the normal of the whole model (in that case the "model" is really just a sub-part of the model, such as one face or one side) to my fragment or vertex shader as a uniform vec3.
Is such a design correct in general?
If anything is unclear, please ask me additional questions.
Thank you!
For instance, here is what I have now (the per-vertex-color case; a trivial example just to show the main idea):
// Vertex shader
// vertex buffer should contain XYZ coordinates (3*4 bytes) + RGB color (3*1 bytes)
// So, at least 15 bytes per vertex
varying vec3 v_color;

void main()
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    v_color = gl_Color.rgb;
}

// Fragment shader
varying vec3 v_color;

void main()
{
    /* The resulting fragment color comes from the interpolated varying */
    gl_FragColor = vec4(v_color, 1.0);
}
and I would like to do something like this instead (again trivial, just to show the main idea):
// Vertex shader
// vertex buffer should contain XYZ coordinates only (3*4 bytes)
// So, at least 12 bytes per vertex
void main()
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

// Fragment shader
uniform vec3 u_color;

void main()
{
    /* The resulting fragment color comes from the uniform */
    gl_FragColor = vec4(u_color, 1.0);
}