
calculating square roots

Started by October 10, 2002 08:38 PM
16 comments, last by Samith 22 years, 4 months ago
h0-0t, in floating-point math, multiplication is faster than addition.

and on today's PCs it doesn't matter

"take a look around" - limp bizkit
www.google.com
If that's not the help you're after then you're going to have to explain the problem better than what you have. - joanusdmentia

My Page davepermen.net | My Music on Bandcamp and on Soundcloud

I think the optimizer knows more than most of us here and will do the right thing...
quote:
Original post by h0-0t
you can replace (2*y) with y + y ... multiplications take quite a bit more clock cycles than additions.




I have experience coding at assembly level for newer CPUs such as the Athlon XP, P3, and P4. And no, multiplications are not slower than additions. It's a misconception. In fact, the clock cycles are almost the same for additions and multiplications, and under some L1 caching conditions multiplications will be just a tiny bit faster. Generally, asm programmers take the cost of add and mul to be the same.

Unless you are talking about the 486 and below....


______________________________________________
(MSG_POST_DONE) ? InsertWittySignature() : ContinuePosting()

http://www.geocities.com/codeman_net/
- Hun Yen Kwoon
I'm also pretty sure, to be honest, that 2*y is the exact same as a bit shift to the left by 1. Since this only requires the register that y is in, I think it allows for better optimization. I'm not an assembly optimization guy, though, so I'm not sure. I am sure, however, that bit shifting y to the left by 1 is the same as multiplying it by 2.
Tibre - that only works properly on integers. If you are using floating point, bit shifting destroys the number, due to the way floating point is stored.


God was my co-pilot but we crashed in the mountains and I had to eat him...
Landsknecht
My sig used to be, "God was my co-pilot but we crashed in the mountains and I had to eat him..."
But folks whined and I had to change it.
quote:
Original post by Jan Wassenberg
Newton and Taylor are the usual suspects.
One other way is Heron's algorithm:
x(i+1) = ( x(i) + x/x(i) ) / 2



As a matter of fact, Heron's algorithm IS the Newton-Raphson algorithm in its special case for sqrt:

f(x) = x^2 - c
f'(x) = 2x

x(n+1) = x(n) - f(x(n))/f'(x(n))
       = x(n) - (x(n)^2 - c)/(2*x(n))
       = x(n) + (c - x(n)^2)/(2*x(n))
       = (2*x(n)^2 + c - x(n)^2)/(2*x(n))
       = (x(n)^2 + c)/(2*x(n))
       = (1/2)*(x(n) + c/x(n))

[edited by - z9u2K on October 19, 2002 5:01:56 PM]
Depending on the precision you want, Taylor would be faster, because you wouldn't have to test "x(n) - x(n-1) < epsilon" every loop as in Newton's algorithm; you'd just have to pick which degree to truncate the expansion at... actually that's what happens with the sin and cos Taylor approximations; I don't know if that symmetry applies to the square root.
In terms of programming, comparing floats and taking loop branches is slower than just sums and multiplications, right? (I might be very wrong...)


[edited by - guitarplayer on October 18, 2002 4:54:22 PM]
----------------------------I would rather burn dollars than USA flags... but they are too expensive!
> As a matter of fact, Heron's algorithm IS the Newton-Raphson algorithm in its special case for sqrt <
Hehe yeah but I meant F(y) = y^-2 - x; you get 1/sqrt(x), and then multiply by x.
E8 17 00 42 CE DC D2 DC E4 EA C4 40 CA DA C2 D8 CC 40 CA D0 E8 40E0 CA CA 96 5B B0 16 50 D7 D4 02 B2 02 86 E2 CD 21 58 48 79 F2 C3

