
Question about normals, tangents and matrices

Started by Ivica Kolic, May 20, 2024 11:19 AM
1 comment, last by JoeJ 7 months ago

We usually have a vertex format that contains position, texture coordinates, normal and tangent (sometimes even a bitangent). And we also have an object matrix plus a normal matrix (the inverse transpose).
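For reference, a minimal HLSL sketch of that conventional setup (layout and names are illustrative, not taken from any particular sample):

struct Vertex
{
    float3 position : POSITION;
    float2 uv       : TEXCOORD0;
    float3 normal   : NORMAL;
    float4 tangent  : TANGENT;   // w often carries the handedness used to rebuild the bitangent
};

cbuffer PerObject
{
    float4x4 _mWorld;             // object (world) matrix
    float4x4 _mWorldInvTranspose; // separate normal matrix: inverse transpose of _mWorld
};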

This seems very excessive.

Wouldn't it be better to just store the tangent and bitangent and use only the regular position matrix? The normal can then be calculated as the cross product of the transformed tangent and bitangent. There is no need to store a vertex normal or to keep that extra inverse-transpose matrix per object.
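A minimal vertex-shader sketch of that idea (assuming the vertex stores float3 tangent and bitangent, a row-vector convention, and an object matrix named _mWorld; all names are illustrative):

// Transform tangent and bitangent with the regular object matrix, then rebuild
// the normal as their cross product: no stored normal, no inverse-transpose matrix.
float3 t = mul(tangentOS,   (float3x3)_mWorld); // object-space tangent from the vertex
float3 b = mul(bitangentOS, (float3x3)_mWorld); // object-space bitangent from the vertex
float3 n = normalize(cross(t, b));              // flip the sign if the tangent frame is left-handed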

Or even better: if we don't use skew in our matrix transformations (but we do use non-uniform scale), we can use just the regular matrix and store the squared inverse scale vector in the fourth column. The normal is multiplied by that vector before it is transformed with the regular matrix, which double-compensates for the otherwise wrong non-uniform scaling. This can be improved further by dividing the scale vector by one of its components (like Z), so we only have to store two scale factors, X/Z and Y/Z, in the fourth column. That leaves two spare slots in the 4x4 matrix (the 34 and 44 elements) which can be used for storing some color or some custom object information.
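A hedged sketch of that compensation in a vertex shader (assuming _mWorld is rotation times non-uniform scale with no skew, that the two packed factors are the squared-inverse-scale vector divided by its Z component, and that all names are illustrative):

// The correct normal transform for M = R * S is R * S^-1, which falls out of the
// regular matrix if the normal is pre-scaled by the inverse squared scale:
//   (R * S) * (S^-2 * n) = R * S^-1 * n
// The result is normalized anyway, so only the ratios matter; two factors packed
// into the fourth column are enough, with the _34 and _44 elements left spare.
float3 scaleComp = float3(_mWorld._14, _mWorld._24, 1.0);           // packed compensation ratios
float3 n = normalize(mul(normalOS * scaleComp, (float3x3)_mWorld)); // normalOS: object-space normal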

One other thing: why are we even using 4x4 matrices? We only need them for projection, and that can be done with a single MAD instruction if the projection data is stored in a float4(tanX, tanY, zn, 1) vector: float4 Project(float3 p) { return mad(p.xyzz, float4(_vProj.xy, 0, 1), float4(0, 0, _vProj.z, 0)); }
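Spelled out with comments, the same function and what the single MAD produces (taking _vProj = (tanX, tanY, zn, 1) as described in the post):

float4 Project(float3 p)
{
    // clip = (p.x * _vProj.x, p.y * _vProj.y, _vProj.z, p.z)
    // After the divide by w = p.z the stored depth is zn / p.z: 1 at the near plane,
    // falling toward 0 with distance, i.e. a reversed-depth projection with no
    // explicit far plane.
    return mad(p.xyzz, float4(_vProj.xy, 0, 1), float4(0, 0, _vProj.z, 0));
}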

So my question is: why are all the samples all over the internet using inefficient vertex formats and normal matrices? Or did I get something wrong?

Ivica Kolic said:
Then normal can be calculated as cross product of transformed tangent and bitangent.

If adjacent vertices have the same UV coordinates in one direction (which sometimes makes sense, e.g. if we use a simple planar projection to generate UVs), the cross product of the tangent and bitangent would give a zero normal.
That's probably the reason why some people store all 3 vectors: robustness in any case.
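A concrete illustration of that failure case (made-up numbers; the exact behavior depends on how the tangent frame is generated):

// A triangle whose vertices all share the same V coordinate, e.g. from a planar
// projection seen edge-on:
//   uv0 = (0.0, 0.5);  uv1 = (1.0, 0.5);  uv2 = (0.3, 0.5);
// V does not vary across the triangle, so the derived bitangent is zero (or the
// tangent and bitangent come out parallel), and cross(tangent, bitangent) is zero:
// the reconstructed normal is lost, while a stored vertex normal would still be valid.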

Ivica Kolic said:
We only need them from projection and that can be done via a single MAD command if projection data is stored in

I assume that's a legacy convention, since early T&L HW did have fixed-function matrix multiply. Early vertex shaders used different HW units than early pixel shaders, so the multiply likely was still there.
Then GPUs got unified shaders for everything and became scalar; no more matrix multiply at that point.
Currently it's coming back with tensor cores, although only for low precision afaik.
So yes, currently a standalone projection matrix does seem like a waste. But not so much if you can combine it with the model-view matrix: then you have just one matrix multiply per vertex, which is the minimum you need anyway, and it should be a win. (My gfx experience is too outdated to tell what's common here.)
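A minimal sketch of that combined path (illustrative names, row-vector convention, matrix concatenated once per object on the CPU):

cbuffer PerObject
{
    float4x4 _mWorldViewProj; // world * view * projection, built on the CPU
};

float4 TransformPosition(float3 p)
{
    // One matrix multiply per vertex covers model, view and projection at once.
    return mul(float4(p, 1.0), _mWorldViewProj);
}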

Ivica Kolic said:
and then we only have to store two scale factor X/Z and Y/Z in the fourth column and we are left with two extra spare places in the 4x4 matrix ( 34 and 44 element)

You can do that, but you can also use 3x4 or 4x3 matrices as needed if you don't need a column.
Often it also makes sense to use a quaternion and a vec3 instead.
It's not that people use 4x4 matrices everywhere without a second thought.
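For example, a hedged sketch of the quaternion-plus-translation variant (assuming a unit quaternion; 7 floats per object instead of 16, plus one more if uniform scale is needed):

// Per-object transform as a unit rotation quaternion plus a translation.
float3 RotateByQuat(float4 q, float3 v)
{
    // Standard identity for rotating v by the unit quaternion q = (xyz, w).
    return v + 2.0 * cross(q.xyz, cross(q.xyz, v) + q.w * v);
}

float3 TransformPoint(float4 rotation, float3 translation, float3 p)
{
    return RotateByQuat(rotation, p) + translation;
}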

But I also see that many do. Some might even think matrices have HW acceleration and should therefore be used.
Often that ends with a 4x4 matrix sitting in vector registers, which takes a lot of them, and then they wonder why occupancy goes down.

Ivica Kolic said:
So my question is: why are all the samples all over the internet using inefficient vertex formats and matrix packing?

If it's samples, then probably to keep things simple, so everybody can read and understand the code easily? Nothing wrong with that in times when most shaders are generated automatically from some node-graph UI in U-engines anyway.

(All that said from an imaginary defensive position, just to come up with some of the potential answers you asked for.
Personally I totally agree with your doubts, and yes, we should optimize harder if we see an opportunity for our specific application.)

This topic is closed to new replies.
