Hi,
A matrix can be stored in column major or row major order.
OpenGL uses column major order and DirectX uses row major order.
Which one is best to use when writing a matrix class?
Thanks
Column or row major matrices is only half the question; column or row vectors is the other half. While the two APIs are written with different matrix storage modes, they are also written with different vector types in mind. The net result is that both APIs have exactly the same memory layout for their matrices. In other words, you can use the same matrices, as stored in memory, in both APIs.
So whether you go with column major matrices and column vectors, or row major matrices and row vectors, does not change anything as far as OpenGL or Direct3D are concerned. The only difference is whether you multiply your vectors on the left or the right hand side of the matrix, as determined by whether you use row or column vectors.
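To make this concrete, here is a minimal sketch in plain C++ (the function names are mine, not from either API) showing that the same 16 floats give the same answer under both conventions:

#include <cstdio>

// Column major storage, column vector on the right: out = M * v.
// Element (r,c) lives at m[ c * 4 + r ].
void mulColMajorColVec( const float m[16], const float v[4], float out[4] )
{
    for( int r = 0; r < 4; ++r )
    {
        out[ r ] = 0.0f;
        for( int c = 0; c < 4; ++c )
            out[ r ] += m[ c * 4 + r ] * v[ c ];
    }
}

// Row major storage, row vector on the left: out = v * M.
// Element (r,c) lives at m[ r * 4 + c ].
void mulRowMajorRowVec( const float m[16], const float v[4], float out[4] )
{
    for( int c = 0; c < 4; ++c )
    {
        out[ c ] = 0.0f;
        for( int r = 0; r < 4; ++r )
            out[ c ] += v[ r ] * m[ r * 4 + c ];
    }
}

int main()
{
    float m[16] = { 1,0,0,0,  0,1,0,0,  0,0,1,0,  2,3,4,1 }; // translation by (2,3,4)
    float p[4]  = { 1.0f, 1.0f, 1.0f, 1.0f };                // a point
    float a[4], b[4];
    mulColMajorColVec( m, p, a );
    mulRowMajorRowVec( m, p, b );
    printf( "%g %g %g | %g %g %g\n", a[0], a[1], a[2], b[0], b[1], b[2] ); // 3 4 5 | 3 4 5
    return 0;
}

Both functions read the exact same memory and both print (3, 4, 5); only the notation you write on paper differs.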
I have posted this in the past: neither OpenGL nor Direct3D cares whether you use column major or row major order, as long as all your matrix multiplications and transformations follow the same rule. The only thing OpenGL cares about is that elements 12, 13 and 14 of the matrix always represent the position. That means your x, y, z basis axes can be represented either as rows or as columns.
I store my matrix like this, with the position in elements 12, 13 and 14:
m16[ 0 ] = 1.0f; m16[ 4 ] = 0.0f; m16[ 8 ] = 0.0f; m16[ 12 ] = x;
m16[ 1 ] = 0.0f; m16[ 5 ] = 1.0f; m16[ 9 ] = 0.0f; m16[ 13 ] = y;
m16[ 2 ] = 0.0f; m16[ 6 ] = 0.0f; m16[ 10 ] = 1.0f; m16[ 14 ] = z;
m16[ 3 ] = 0.0f; m16[ 7 ] = 0.0f; m16[ 11 ] = 0.0f; m16[ 15 ] = 1.0f;
I have renderers for Direct3D11 and OpenGL, and I have to transpose the matrix to make it work on D3D11.
I don't understand what you said about one matrix that works for both. Do you mean an #ifdef that changes the indices?
The OpenGL specification is written with column vectors in mind, and this is how a translation matrix for column vectors looks:
1 0 0 x
0 1 0 y
0 0 1 z
0 0 0 1
Now store this matrix in column major order, and you get the following memory layout:
1 0 0 0 0 1 0 0 0 0 1 0 x y z 1
Direct3D, on the other hand, uses row vector notation, and this is how a translation matrix for row vectors looks:
1 0 0 0
0 1 0 0
0 0 1 0
x y z 1
Now store this matrix in row major order, and you get the following memory layout:
1 0 0 0 0 1 0 0 0 0 1 0 x y z 1
See how the effect of changing both matrix storage mode and vector mode negates each other and the final memory layout is exactly the same?
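If you want to check this in code rather than on paper, here is a small sketch (the helper names are mine) that builds the column-vector translation in column major storage and the row-vector translation in row major storage, then compares the raw bytes:

#include <cstdio>
#include <cstring>

// Element (r,c) in column major storage.
void setColMajor( float m[16], int r, int c, float v ) { m[ c * 4 + r ] = v; }
// Element (r,c) in row major storage.
void setRowMajor( float m[16], int r, int c, float v ) { m[ r * 4 + c ] = v; }

int main()
{
    float a[16] = { 0 }, b[16] = { 0 };
    const float t[3] = { 2.0f, 3.0f, 4.0f };
    for( int i = 0; i < 4; ++i )
    {
        setColMajor( a, i, i, 1.0f ); // identity diagonal
        setRowMajor( b, i, i, 1.0f );
    }
    for( int i = 0; i < 3; ++i )
    {
        setColMajor( a, i, 3, t[ i ] ); // translation in the fourth column
        setRowMajor( b, 3, i, t[ i ] ); // translation in the fourth row
    }
    // Both arrays now hold 1 0 0 0  0 1 0 0  0 0 1 0  x y z 1.
    printf( "%s\n", memcmp( a, b, sizeof a ) == 0 ? "identical" : "different" );
    return 0;
}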
There are two questions that you seem to be mixing: column vs. row major storage, and column vs. row vectors. Column vs. row major storage dictates how a two-dimensional matrix is stored in one-dimensional memory, while column vs. row vectors dictates whether you multiply your vector on the left or right hand side of the matrix. The two are completely independent choices, but together they determine the physical layout in linear memory. Column major storage and column vectors have exactly the same physical storage as row major storage and row vectors. That is why you can use the same data for both APIs.
But as BornToCode said, it is not actually correct to say that OpenGL uses column vectors and column major storage. That is why I wrote that the specification is written with that notation: you can use any storage mode and vector mode as long as the final memory layout is consistent with what OpenGL assumes. The same applies to Direct3D.
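OpenGL even lets you pick at upload time: glUniformMatrix4fv takes a transpose flag, so genuinely row major data can be passed as-is. A sketch, assuming a program object and a uniform named u_mvp (both placeholders):

GLint loc = glGetUniformLocation( program, "u_mvp" );
glUniformMatrix4fv( loc, 1, GL_FALSE, colMajorData ); // data already in the layout GL expects
glUniformMatrix4fv( loc, 1, GL_TRUE,  rowMajorData ); // GL transposes row major data on upload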
The reason you need to transpose your matrix for D3D11 is that, by default, the shader's matrix memory layout is the transpose of the layout you have stored on the CPU side. So instead of elements 12, 13 and 14, the elements at 3, 7 and 11 represent the position. In OpenGL's GLSL shaders, however, the matrix layout is the same as the one you have on the CPU side, which is why there is no need to transpose for GLSL.
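The fix-up itself is just a plain 4x4 transpose before upload; a minimal sketch (the function name is mine):

// Swap element (r,c) with (c,r); the position moves between
// indices 12/13/14 and 3/7/11.
void transpose4x4( const float in[16], float out[16] )
{
    for( int r = 0; r < 4; ++r )
        for( int c = 0; c < 4; ++c )
            out[ c * 4 + r ] = in[ r * 4 + c ];
}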
Is there no way to avoid this transpose?