
Why do we use matrices, and only multiplication as the operator, in transforms?

Started by September 17, 2018 12:11 PM
13 comments, last by Steven Ford 6 years, 4 months ago
2 hours ago, Steven Ford said:

Further to all of the very valid points up above, even if you had a use case which could be expressed in a different way, by always using matrices for these operations you have a single code path, and hence your code base will be simpler to maintain.

The fact that you can then combine 55 operations together to form a single matrix and then apply that matrix to many thousands of objects is then the icing on the cake!

Well, if you have a special case where you really only need to make translations, then you can just add the translation vector to the position vector of your objects. A pure translation matrix has a lot of zeros (and ones) in it, and transforming by such a matrix leads to lots of unnecessary zero*something (one*something) multiplications. If this is what you were asking.

I can imagine a class responsible for managing some specific objects (that can only be translated) that doesn't deal with matrices but just vectors. Such a class would not have to be able to handle layered transformations of different kinds. And layering translations is always just adding the vectors together.
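As an illustration of this translation-only special case, here is a minimal C++ sketch (all type and function names are mine, not from the thread): an object that composes and applies translations by plain vector addition, next to the general 4x4 matrix path it would replace. For a pure translation matrix, most of the multiplications in the general path are by 0 or 1 and do no useful work.

```cpp
#include <array>

struct Vec3 { float x, y, z; };

// Hypothetical translation-only object: "layering" translations is
// just component-wise addition of vectors, no matrix involved.
struct TranslatedObject
{
    Vec3 position{0.0f, 0.0f, 0.0f};

    void Translate(const Vec3& t)
    {
        position.x += t.x;   // 3 adds in total
        position.y += t.y;
        position.z += t.z;
    }
};

// The general path for comparison: a row-major 4x4 matrix times a point
// (w assumed to be 1, translation stored in the last column).
using Mat4 = std::array<std::array<float, 4>, 4>;

Vec3 TransformPoint(const Mat4& m, const Vec3& p)
{
    return {
        m[0][0] * p.x + m[0][1] * p.y + m[0][2] * p.z + m[0][3],  // 3 muls, 3 adds
        m[1][0] * p.x + m[1][1] * p.y + m[1][2] * p.z + m[1][3],  // per component
        m[2][0] * p.x + m[2][1] * p.y + m[2][2] * p.z + m[2][3]
    };
}
```

Per point, the matrix path here costs 9 multiplications and 9 additions, where 3 additions would do.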

17 minutes ago, TomKQT said:

Well, if you have a special case where you really only need to make translations, then you can just add the translation vector to the position vector of your objects. A pure translation matrix has a lot of zeros (and ones) in it, and transforming by such a matrix leads to lots of unnecessary zero*something (one*something) multiplications. If this is what you were asking.

I can imagine a class responsible for managing some specific objects (that can only be translated) that doesn't deal with matrices but just vectors. Such a class would not have to be able to handle layered transformations of different kinds. And layering translations is always just adding the vectors together.

True, it's up to the maintainer of the codebase to decide whether or not it's worth it. For me personally, the choice would be to go for the conceptually simple case (i.e. one way of consuming any transformation, with helper methods to create appropriate representations of the simpler operations), combined with an optimised matrix multiplier. GPUs are already set up to do so, and one can write a CPU version using intrinsics (to get 4 floats processed at a time) so that the excess calculations are performed at a fixed cost, but in parallel.

Only if this didn't deliver the necessary performance, or it complicated other code, would I allow for multiple code paths.
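For what it's worth, here is a rough sketch of the "CPU version using intrinsics" idea using SSE (the layout, the Mat4MulVec4 name, and the column-major convention are my assumptions, not code from this thread): a 4x4 matrix times a 4-component vector, with all four lanes processed together so the extra lane costs no additional time.

```cpp
#include <xmmintrin.h>   // SSE intrinsics

// Hypothetical layout: 'cols' holds the four columns of a 4x4 matrix,
// each already in an SSE register; 'v' is (x, y, z, w).
// The product is x*col0 + y*col1 + z*col2 + w*col3, computed four
// floats at a time.
static inline __m128 Mat4MulVec4(const __m128 cols[4], __m128 v)
{
    __m128 x = _mm_shuffle_ps(v, v, _MM_SHUFFLE(0, 0, 0, 0));  // broadcast v.x
    __m128 y = _mm_shuffle_ps(v, v, _MM_SHUFFLE(1, 1, 1, 1));  // broadcast v.y
    __m128 z = _mm_shuffle_ps(v, v, _MM_SHUFFLE(2, 2, 2, 2));  // broadcast v.z
    __m128 w = _mm_shuffle_ps(v, v, _MM_SHUFFLE(3, 3, 3, 3));  // broadcast v.w

    __m128 r = _mm_mul_ps(x, cols[0]);
    r = _mm_add_ps(r, _mm_mul_ps(y, cols[1]));
    r = _mm_add_ps(r, _mm_mul_ps(z, cols[2]));
    r = _mm_add_ps(r, _mm_mul_ps(w, cols[3]));
    return r;
}
```

With SSE4.1 or FMA one could swap in dot-product or fused multiply-add instructions, but the broadcast-and-accumulate form above is the simplest to read.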

25 minutes ago, Steven Ford said:

GPUs are already set up to do so, and one can write a CPU version using intrinsics (to get 4 floats processed at a time)

But no, GPUs are not set up this way.

a.xyz + b.xyz is fewer instructions than a.xyzw + b.xyzw. The former is 3 adds, the latter is 4 adds.

GPU SIMD only means that 32 or 64 of those adds execute in parallel across threads (but still 3 or 4 adds in sequence within each thread). There is no native float3 or float4 type anymore, and no instructions for those types either.

 

I don't think the opening question is about GPUs, but this seems to be a persistent misconception, so I point it out again. It is also one more argument for multiple specialized data structures vs. a mat4x4 for everything.
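To make that concrete, here is a toy C++ model of how a scalar SIMT GPU executes "a.xyz + b.xyz" (the names and the wave size of 32 are illustrative assumptions, not real GPU code): the parallelism is across the lanes of a wave/warp, while the three component adds still run one after another within each thread.

```cpp
// Toy model of wave-wide execution: one loop iteration per lane.
constexpr int kWaveSize = 32;

struct Float3 { float x, y, z; };

void WaveAddFloat3(const Float3* a, const Float3* b, Float3* out)
{
    // Conceptually, all 32 iterations run in lockstep (one per lane).
    for (int lane = 0; lane < kWaveSize; ++lane)
    {
        out[lane].x = a[lane].x + b[lane].x;  // add #1 within this thread
        out[lane].y = a[lane].y + b[lane].y;  // add #2
        out[lane].z = a[lane].z + b[lane].z;  // add #3
        // A float4 add would simply be a 4th scalar add here; there is
        // no native float3/float4 instruction that does them all at once.
    }
}
```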

@JoeJ agreed - SIMD (or even SIMT on the GPU) is not a panacea for performing excess work; it can merely reduce the elapsed time of doing the same volume of work compared to standard sequential code.

I guess my point is more that, depending on the use case, it's possible to reduce the runtime overhead of using matrices (as you say, there'll be more calculations to be done), so for the majority of at least my use cases I prefer to maintain a single code path, and it doesn't cause me any actual problems.

This topic is closed to new replies.
