
Faster vector Normalization

Started by December 09, 2002 04:35 AM
3 comments, last by Drazgal 22 years, 2 months ago
I'm currently writing my own maths routines for my 3D engine and have noticed that although my maths routines for vectors are by and large marginally faster than those supplied with DirectX, their vector normalization routine beats mine by about 2 to 1. Currently I divide 1 by the vector length and then multiply the X, Y and Z of the vector by this. The strange thing is that DirectX (at least in my tests) actually normalizes a vector faster than it finds the length. I was wondering if there is a technique for normalizing vectors that I am not aware of, one that doesn't require finding the length or something (to avoid the slow sqrt). Thanks for any help clearing this up.

Ballistic Programs
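For reference, a minimal sketch of the approach described above, assuming a simple float vector type (the Vec3 struct and function name here are illustrative, not the poster's actual code):

```cpp
#include <cmath>

// Illustrative vector type; not the poster's actual class.
struct Vec3 { float x, y, z; };

// Straightforward normalization as described in the post:
// one sqrt, one divide, then three multiplies.
inline void NormalizeNaive(Vec3& v)
{
    float len    = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    float invLen = 1.0f / len;   // reciprocal of the length
    v.x *= invLen;
    v.y *= invLen;
    v.z *= invLen;
}
```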
Perhaps they don't actually use sqrt, but a custom, faster method?

Death of one is a tragedy, death of a million is just a statistic.
If at first you don't succeed, redefine success.
Chances are they use a reciprocal square root function, rsq(x) = 1/sqrt(x).

It is much faster to approximate most of the time.
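One well-known way to approximate 1/sqrt(x) without calling sqrt is the integer bit trick followed by a Newton-Raphson step (the classic "fast inverse square root"). A rough sketch of the idea, and not necessarily what D3DX actually does:

```cpp
#include <cstdint>
#include <cstring>

// Approximate 1/sqrt(x): an initial guess from integer bit manipulation,
// refined by one Newton-Raphson iteration. Relative error is on the
// order of 0.2% after one iteration; add a second step for more accuracy.
inline float FastRsqrt(float x)
{
    float half = 0.5f * x;
    std::uint32_t i;
    std::memcpy(&i, &x, sizeof(i));      // reinterpret the float bits
    i = 0x5f3759df - (i >> 1);           // magic initial guess
    float y;
    std::memcpy(&y, &i, sizeof(y));
    y = y * (1.5f - half * y * y);       // one Newton-Raphson step
    return y;
}
```

Normalization then becomes three multiplies by FastRsqrt(x*x + y*y + z*z), with no divide and no sqrt call.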

"take a look around" - limp bizkit
www.google.com
If that's not the help you're after then you're going to have to explain the problem better than what you have. - joanusdmentia


Assuming you're talking about the D3DX functions here...


1) Before comparing performance, make sure you've set compiler optimisations to maximum.


2) A comparison of one normalise against another single normalise isn't going to give you meaningful results - compare over 100000 vectors.


3) The D3DX and D3D PSGP take full advantage of SSE and 3DNow!, and that code was written for MS by engineers at Intel/AMD. IIRC both also have standalone optimised matrix/vector libraries available for download from their websites. No doubt their code also declares the vectors as SSE-friendly types (with the newer compiler/processor pack) and uses intrinsics (see the SSE sketch after this list).


4) Zooming in to the level of comparing normalisation speed is the wrong way to go about optimisation. Look at the wider picture - is the normalisation actually required? Could the whole algorithm be replaced?

For example, if you need it in a dot product which is finding an angle, you _could_ adjust the other side to compensate in some cases, since a.b = |a|*|b|*cos(theta) (see the sketch after this list).


5) Also look at the precision you actually require for the normals (i.e. would the user notice a 0.1% randomness in lighting?) and approximate, e.g. use a lookup table for part of the function or a series expansion.
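To illustrate point 3, here is a hedged sketch of a normalize built on the SSE reciprocal-square-root estimate (rsqrtss) plus one Newton-Raphson refinement; this is only an illustration of the idea, not the actual D3DX code:

```cpp
#include <xmmintrin.h>   // SSE intrinsics

// Illustrative SSE normalize: estimate 1/sqrt(lenSq) with rsqrtss
// (about 12 bits of precision), then refine it with one
// Newton-Raphson step before scaling the components.
inline void NormalizeSSE(float& x, float& y, float& z)
{
    float lenSq = x * x + y * y + z * z;

    __m128 v = _mm_set_ss(lenSq);
    __m128 r = _mm_rsqrt_ss(v);                  // rough 1/sqrt estimate

    // Newton-Raphson: r = r * (1.5 - 0.5 * lenSq * r * r)
    __m128 halfLenSq   = _mm_set_ss(0.5f * lenSq);
    __m128 threeHalves = _mm_set_ss(1.5f);
    r = _mm_mul_ss(r, _mm_sub_ss(threeHalves,
                                 _mm_mul_ss(halfLenSq, _mm_mul_ss(r, r))));

    float invLen;
    _mm_store_ss(&invLen, r);
    x *= invLen;
    y *= invLen;
    z *= invLen;
}
```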
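And to illustrate point 4, one way to skip the normalize entirely when all you need is an angle test; this reuses the Vec3 struct from the sketch under the first post and is again only an illustrative example:

```cpp
#include <cmath>

// Test whether the angle between a and b is smaller than maxAngleRadians
// without normalizing either vector. From a.b = |a|*|b|*cos(theta) we
// compare (a.b)^2 against |a|^2 * |b|^2 * cos^2(maxAngle), so no sqrt
// is needed. Valid as written for a maxAngle below 90 degrees (pi/2).
inline bool AngleSmallerThan(const Vec3& a, const Vec3& b, float maxAngleRadians)
{
    float dot = a.x * b.x + a.y * b.y + a.z * b.z;
    if (dot < 0.0f)
        return false;                        // angle is already >= 90 degrees

    float lenSqProduct = (a.x * a.x + a.y * a.y + a.z * a.z)
                       * (b.x * b.x + b.y * b.y + b.z * b.z);
    float c = std::cos(maxAngleRadians);
    return dot * dot > lenSqProduct * c * c;
}
```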



--
Simon O'Connor
Creative Asylum Ltd
www.creative-asylum.com


Thanks davepermen, I'll give that a go.

S1CA: I did 10000000 calculations to be sure

This topic is closed to new replies.
