
How to ensure that a vector in homogeneous coordinates is still a vector after transformation

Started November 14, 2021 07:51 AM
6 comments, last by Rider_ 3 years ago

I apply an MVP transformation to the model's vertices. In theory, I must then apply the inverse transpose of the MVP matrix to the normals.

For a vector such as (x0, y0, z0), the homogeneous form is (x0, y0, z0, 0). After a transformation it should still be a vector, i.e. (x1, y1, z1, 0). This requires the last row of the 4x4 transformation matrix to be all zeros except for the element in the last column; otherwise the result becomes (x1, y1, z1, n) with nonzero n.

In fact, my MVP matrix no longer satisfies this once I take its inverse transpose.
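For reference, here is the usual argument for the inverse transpose, as a sketch in terms of a surface tangent $t$ and normal $n$:

$$n^\top t = 0,\qquad t' = Mt,\qquad n' = (M^{-1})^\top n \;\Longrightarrow\; n'^\top t' = n^\top M^{-1} M\, t = n^\top t = 0.$$

So the inverse transpose is exactly the matrix that keeps the transformed normal perpendicular to the transformed surface.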

Code:

Mat<4, 4> View(const Vec3& pos){
   // Translate the camera position to the origin.
   Mat<4, 4> pan{1, 0, 0, -pos.x,
                 0, 1, 0, -pos.y,
                 0, 0, 1, -pos.z,
                 0, 0, 0, 1};
   // Rotate the camera basis (right, up, -lookAt) onto the world axes.
   Vec3 v = Cross(camera.lookAt, camera.upDirection).Normalize();
   Mat<4, 4> rotate{v.x, v.y, v.z, 0,
                    camera.upDirection.x, camera.upDirection.y, camera.upDirection.z, 0,
                    -camera.lookAt.x, -camera.lookAt.y, -camera.lookAt.z, 0,
                    0, 0, 0, 1};
   return rotate * pan;
}



Mat<4, 4> Projection(double near, double far, double fov, double aspectRatio){
   double angle = fov * PI / 180;

   // Frustum extents on the near plane (near is negative here).
   double t = -near * tan(angle / 2);
   double b = -t;
   double r = t * aspectRatio;
   double l = -r;

   // Orthographic part: scale the centered box to [-1, 1]^3...
   Mat<4, 4> zoom{2 / (r - l), 0, 0, 0,
                  0, 2 / (t - b), 0, 0,
                  0, 0, 2 / (near - far), 0,
                  0, 0, 0, 1};
   // ...after translating its center to the origin.
   Mat<4, 4> pan{1, 0, 0, -(l + r) / 2,
                 0, 1, 0, -(t + b) / 2,
                 0, 0, 1, -(near + far) / 2,
                 0, 0, 0, 1};
   // Perspective part: squash the frustum into a box.
   // Note the last row (0, 0, 1, 0): it writes z into w.
   Mat<4, 4> extrusion{near, 0, 0, 0,
                       0, near, 0, 0,
                       0, 0, near + far, -near * far,
                       0, 0, 1, 0};

   Mat<4, 4> ret = zoom * pan * extrusion;
   return ret;
}

Mat<4, 4> modelMatrix = Mat<4, 4>::identity();
Mat<4, 4> viewMatrix = View(camera.position);
Mat<4, 4> projectionMatrix = Projection(-0.1, -50, camera.fov, camera.aspectRatio);
Mat<4, 4> mvp = projectionMatrix * viewMatrix * modelMatrix;
Mat<4, 4> mvpInverseTranspose = mvp.Inverse().Transpose();

mvp:
-2.29032     0         0.763441   -2.68032e-16
 0          -2.41421   0           0
-0.317495    0        -0.952486    2.97455
 0.316228    0         0.948683   -3.16228

mvpInverseTranspose:
-0.392957    0         0.130986    0
 0          -0.414214  0           0
-4.99        0        -14.97      -4.99
-4.69377     0        -14.0813    -5.01


The normal matrix is the upper-left 3x3 of transpose(invert(V * M)); the projection matrix is not included.

// SIMDTransform is 3x3 + position, SIMDBasis is 3x3.
const SIMDTransform modelView = state.cameraWorldToLocal * mesh.transform;
const SIMDBasis normalMatrix = modelView.basis.invert().transpose();
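
Translated to the types in the question, that might look roughly like this (a sketch: it assumes Mat<4, 4> supports m[row][col] element access in addition to the Inverse()/Transpose() used above; zeroing the translation column leaves a block-diagonal matrix whose inverse transpose has the normal matrix as its upper-left 3x3):

// Hypothetical helper, not part of the original code.
Mat<4, 4> NormalMatrix(const Mat<4, 4>& modelView){
    Mat<4, 4> m = modelView;
    // Drop the translation so only the rotation/scale block remains;
    // the last row of a model-view matrix is already (0, 0, 0, 1).
    m[0][3] = 0;
    m[1][3] = 0;
    m[2][3] = 0;
    return m.Inverse().Transpose();
}

// Usage: note that the projection matrix is not involved.
Mat<4, 4> normalMatrix = NormalMatrix(viewMatrix * modelMatrix);
// Normals go through as (x, y, z, 0), so the w component stays 0.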

@Aressera Thanks, so for the model-view transformation I should ignore its translation part.

But projection can be orthographic or perspective. An orthographic projection has no effect on a vector, but I can't understand why the perspective part can be ignored as well.


Typically a normal is used for lighting calculations. Those can be done in world space; I do them in view space for various reasons. However, I'm not sure they really make sense after projection. I mean, you've projected and compressed your coordinates. Even after doing the same to the light positions, I would think the angles would come out different.

@Gnollrunner I tried ignoring the projection transformation, and the final lighting was off.

Do you mean that lighting calculations should be performed after the model transformation? In that case I would only need to apply the inverse transpose of the model matrix to the normals, instead of model * view.


First, most graphics books I've used have the rows and columns flipped from what's in your code, which is messing with me a bit. Also, I generally multiply things in the order M (sometimes called W, for world) * V * P, but I'm not sure how things work in your library.

In any case, I think you should multiply your points by M and your normals by the inverse transpose of M for your lighting calculations. Alternatively you can go to view space and do your lighting calculations there, by multiplying points by (M*V) and your normals by the inverse transpose of (M*V). You only really need P for the final screen positions and the Z depths (I gather for historical reasons).

I don't even use the inverse transpose myself, since I don't have non-uniform scaling. I keep track of just the rotations in a separate matrix and use that for the normals. That's another option, but I might rethink it later.

I also don't use a projection matrix, because it seems to cause trouble with large floating-point numbers and it messes up the Z coordinates for the Z-buffer calculation (again because of the large number range). I simply do the projection in a post-processing step, which is yet another option. But whatever you do, I don't think you want to include the projection in your inverse transpose.
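
As a concrete sketch of that view-space option, using the types from the question plus the hypothetical NormalMatrix helper above (Vec4, Dot3, and lightDirView are also assumed here, not part of the original code):

Mat<4, 4> mv        = viewMatrix * modelMatrix;
Mat<4, 4> normalMat = NormalMatrix(mv);                 // inverse transpose of M*V, no P

Vec4 viewPos = mv * Vec4{p.x, p.y, p.z, 1};             // point: w = 1, translation applies
Vec4 viewNrm = normalMat * Vec4{n.x, n.y, n.z, 0};      // vector: w = 0, stays 0

// Lighting happens here, in view space, before projection.
double diffuse = Dot3(viewNrm, lightDirView);
if (diffuse < 0) diffuse = 0;

// P is only used afterwards, for the screen position and depth.
Vec4 clipPos = projectionMatrix * viewPos;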


@Gnollrunner Thanks! I get it.


This topic is closed to new replies.
