I hate to rain on your parade, but...
It appears that you are using Cramer's rule to do your inverse, with the method of cofactors used to find the determinants. Unfortunately, you've stumbled onto the slowest method possible for finding the inverse of a general N x N matrix. It *is* damned elegant, but it ain't fast at all. I mean, it's not too horribly slow for the 4 x 4 matrix your routine deals with, but it is way slower than Gaussian Elimination and other methods. In general, you should *NEVER* *EVER* use this method! It's only really a decent choice if you're dealing with 3 x 3 or smaller matrices. Really.
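For reference, here's roughly what a generic determinant-by-cofactors routine looks like (my own sketch, not your code, and the names are made up); the factorial blow-up is baked right into the recursion:

```cpp
#include <cstdio>

const int MAXN = 8;

// Determinant by cofactor (Laplace) expansion along the first row.
// Elegant, but the recursion visits on the order of N! terms, which is
// exactly why this approach explodes for larger matrices.
double cofactorDet(double a[MAXN][MAXN], int n)
{
    if (n == 1)
        return a[0][0];

    double det = 0.0;
    double sign = 1.0;
    for (int col = 0; col < n; ++col)
    {
        // Build the (n-1)x(n-1) minor that skips row 0 and column 'col'.
        double minor[MAXN][MAXN];
        for (int i = 1; i < n; ++i)
        {
            int mj = 0;
            for (int j = 0; j < n; ++j)
            {
                if (j == col) continue;
                minor[i - 1][mj++] = a[i][j];
            }
        }
        det += sign * a[0][col] * cofactorDet(minor, n - 1);
        sign = -sign;
    }
    return det;
}

int main()
{
    double a[MAXN][MAXN] = { { 2, 1, 0, 0 },
                             { 1, 2, 1, 0 },
                             { 0, 1, 2, 1 },
                             { 0, 0, 1, 2 } };
    printf("det = %g\n", cofactorDet(a, 4));   // prints 5 for this matrix
    return 0;
}
```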
Finding the inverse is similar to solving Ax = b, so let's look at Cramer's rule for solving Ax = b, with x and b being vectors, just to see how bad it is.
This method requires that you ultimately evaluate (N+1) determinants of order N. Using the method of cofactors as you are doing, each determinant requires (N! - 1) additions plus the sum from k = 1 to N-1 of N!/k! multiplications. For your 4 x 4 system, the operation count is then:
operation count = (4+1)*doc,
where doc is the count of operations to determine one determinant, or:
doc = (4*3*2*1 - 1) + (4*3*2*1)/(1) + (4*3*2*1)/(2*1) + (4*3*2*1)/(3*2*1)

or

doc = 63 operations *per* determinant
And the total operation count is 5*63 = 315 using your method for a 4 x 4 matrix.
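If you want to see where those numbers come from, here's a little tally (my own sketch, nobody's library code, names made up) that evaluates the counts above for a general N:

```cpp
#include <cstdio>

// Cost of one NxN determinant by cofactor expansion:
// (N! - 1) additions plus the sum over k = 1..N-1 of N!/k! multiplications.
double cofactorDeterminantOps(int n)
{
    double fact = 1.0;
    for (int i = 2; i <= n; ++i)
        fact *= i;                       // fact = N!

    double ops = fact - 1.0;             // the additions
    double kfact = 1.0;
    for (int k = 1; k <= n - 1; ++k)
    {
        kfact *= k;                      // kfact = k!
        ops += fact / kfact;             // the multiplications
    }
    return ops;
}

// Cramer's rule evaluates (N+1) such determinants to solve Ax = b.
double cramersRuleOps(int n)
{
    return (n + 1) * cofactorDeterminantOps(n);
}

int main()
{
    printf("4x4:   %.0f ops per determinant, %.0f total\n",
           cofactorDeterminantOps(4), cramersRuleOps(4));   // 63 and 315
    printf("20x20: %.3g ops per determinant\n",
           cofactorDeterminantOps(20));                     // astronomically large
    return 0;
}
```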
Compare with simple Gaussian elimination, which requires pow(N,3)/3 + pow(N,2) - N/3 multiplies/divides and pow(N,3)/3 + pow(N,2)/2 - 5*N/6 adds/subtracts. The operation count to solve a 4 x 4 system using simple Gaussian Elimination is then:
(4*4*4/3 + 4*4 - 4/3) + (4*4*4/3 + 4*4/2 - 5*4/6) = 62 operations.
So simple Gaussian elimination is 5 times faster than your code.
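And here's the corresponding tally for the Gaussian elimination counts (again just an illustrative sketch with made-up names):

```cpp
#include <cstdio>

// Operations to solve one NxN system with simple Gaussian elimination:
// N^3/3 + N^2 - N/3 multiplies/divides plus N^3/3 + N^2/2 - 5N/6 adds/subtracts.
double gaussianEliminationOps(int n)
{
    double N = n;
    double mulsDivs = N*N*N/3.0 + N*N     - N/3.0;
    double addsSubs = N*N*N/3.0 + N*N/2.0 - 5.0*N/6.0;
    return mulsDivs + addsSubs;
}

int main()
{
    printf("4x4:   %.0f ops\n", gaussianEliminationOps(4));    // 62
    printf("20x20: %.0f ops\n", gaussianEliminationOps(20));   // 5910
    return 0;
}
```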
Let's look further at the method you use. If the matrix were a 20 x 20 matrix instead, you'd end up calculating 21 determinants of 20 x 20 matrices. The cost of *each* determinant is about 2.4E+18 operations. That would take 73 *YEARS* for *EACH* determinant on a computer that could do 1000 MFLOPS, for a total of 21 * 73 = 1533 years to solve the system once. You might note that a 1 GHz Pentium III can do on the order of 1000 MFLOPS, so your Pentium III would take more than one and a half millennia to solve the system using Cramer's rule. Gaussian elimination, on the other hand, could solve the same system in 5910 operations, or about six millionths of a second on the same machine.
So, Gaussian Elimination is quite significantly faster than your method, even for a 4 x 4 matrix. GE with full or partial pivoting is more robust than standard GE, though standard GE is a bit faster.
There are other good ways to organize the work too, such as LU decomposition (essentially Gaussian Elimination packaged so the factorization can be reused for many right-hand sides) or QR factorization (a bit more expensive, but very stable).
I hope you're not upset that I point this out to you. It's only in the interest of education. :)
Question. Why invert the matrix when you don't have to? And my answer is, *don't* invert if you don't have to.
If you're solving a system Ax = b, don't invert. You don't have to, plus it's slow and potentially unstable due to floating-point roundoff. One faster way is to do an LU decomposition on A and apply it to b, as sketched below.
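To make "decompose and apply" concrete, here's a rough sketch of LU factorization with partial pivoting followed by forward/back substitution. It's just Gaussian elimination organized so the factors can be reused; none of the names below come from any particular library:

```cpp
#include <cmath>
#include <cstdio>
#include <cstdlib>

const int N = 4;

// Factor A in place into unit-lower-triangular L and upper-triangular U,
// recording row swaps in perm. Partial pivoting keeps it stable.
void luDecompose(double A[N][N], int perm[N])
{
    for (int i = 0; i < N; ++i)
        perm[i] = i;

    for (int k = 0; k < N; ++k)
    {
        // Pick the pivot row: largest magnitude entry in column k, at or below row k.
        int pivot = k;
        for (int i = k + 1; i < N; ++i)
            if (std::fabs(A[i][k]) > std::fabs(A[pivot][k]))
                pivot = i;
        if (A[pivot][k] == 0.0)
        {
            printf("matrix is singular\n");
            exit(1);
        }
        if (pivot != k)
        {
            for (int j = 0; j < N; ++j)
            {
                double t = A[k][j]; A[k][j] = A[pivot][j]; A[pivot][j] = t;
            }
            int t = perm[k]; perm[k] = perm[pivot]; perm[pivot] = t;
        }

        // Eliminate below the pivot, keeping the multipliers in the lower triangle.
        for (int i = k + 1; i < N; ++i)
        {
            A[i][k] /= A[k][k];
            for (int j = k + 1; j < N; ++j)
                A[i][j] -= A[i][k] * A[k][j];
        }
    }
}

// Solve Ax = b using the stored factors: forward substitution with L (Ly = Pb),
// then back substitution with U (Ux = y). Cheap to repeat for many b's.
void luSolve(double A[N][N], const int perm[N], const double b[N], double x[N])
{
    for (int i = 0; i < N; ++i)
    {
        x[i] = b[perm[i]];
        for (int j = 0; j < i; ++j)
            x[i] -= A[i][j] * x[j];
    }
    for (int i = N - 1; i >= 0; --i)
    {
        for (int j = i + 1; j < N; ++j)
            x[i] -= A[i][j] * x[j];
        x[i] /= A[i][i];
    }
}

int main()
{
    double A[N][N] = { { 4, 1, 0, 0 },
                       { 1, 4, 1, 0 },
                       { 0, 1, 4, 1 },
                       { 0, 0, 1, 4 } };
    double b[N] = { 1, 2, 3, 4 };
    int perm[N];
    double x[N];

    luDecompose(A, perm);
    luSolve(A, perm, b, x);
    printf("x = %g %g %g %g\n", x[0], x[1], x[2], x[3]);
    return 0;
}
```

Factor A once, and every additional right-hand side costs only the two substitution passes (roughly N*N operations each), which is also how you'd build the inverse column by column if you genuinely needed it.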
If you are dealing with transformation matrices, the inverse can be found directly after you break the matrix into rotation, scale, and translate components. I haven't counted operations for this approach, but it's more intuitive than using a generic algorithm to find the inverse, and although possibly slower than GE, it should certainly still be way cheaper than Cramer's rule. The inverse of a pure rotation matrix (a 3 x 3 matrix), for example, is simply the transpose of the original 3 x 3 rotation matrix, so essentially no computation is required at all. (More work is required for a full 4 x 4 matrix with rotation, scaling, and translation.)
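As a sketch of that special case (my own illustration, assuming column vectors, the translation in the fourth column, and no scaling or shear), the inverse of a rigid 4 x 4 transform is just a transposed rotation block plus a re-transformed translation:

```cpp
#include <cstdio>

// Invert a rigid-body 4x4 transform M = [ R t ; 0 0 0 1 ], where R is a pure
// rotation (orthonormal 3x3) and t a translation. Then the inverse is
// [ R^T  -R^T*t ; 0 0 0 1 ]: just a transpose and one matrix-vector multiply.
void invertRigidTransform(const double M[4][4], double Minv[4][4])
{
    // Transpose the rotation block.
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            Minv[i][j] = M[j][i];

    // New translation is -R^T * t.
    for (int i = 0; i < 3; ++i)
        Minv[i][3] = -(Minv[i][0] * M[0][3] +
                       Minv[i][1] * M[1][3] +
                       Minv[i][2] * M[2][3]);

    // Bottom row stays (0, 0, 0, 1).
    Minv[3][0] = Minv[3][1] = Minv[3][2] = 0.0;
    Minv[3][3] = 1.0;
}

int main()
{
    // 90-degree rotation about z plus a translation of (1, 2, 3).
    const double M[4][4] = { { 0, -1, 0, 1 },
                             { 1,  0, 0, 2 },
                             { 0,  0, 1, 3 },
                             { 0,  0, 0, 1 } };
    double Minv[4][4];
    invertRigidTransform(M, Minv);
    printf("inverse translation: %g %g %g\n", Minv[0][3], Minv[1][3], Minv[2][3]);
    return 0;
}
```

A uniform scale s baked into the upper-left block can be handled by dividing the transposed block by s squared; non-uniform scale or shear is where you'd fall back to a general method like GE.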
Some references for you:
Gene Golub and Charles van Loan: "Matrix Computations"
Steven Leon: "Linear Algebra with Applications"
For transformation matrices specifically, the OpenGL red book is pretty good (see the appendices), and David Eberly's "3D Game Engine Design" has information on inverting transformation matrices, although that book is difficult to read.
The Steven Leon reference is more elementary, and has some straightforward discussion of operation counts and computational expense.
Graham Rhodes
Senior Scientist
Applied Research Associates, Inc.
Moderator, Math & Physics forum @ gamedev.net