Hello everyone. My name is Aaron V, and I've been teaching myself graphics programming, audio programming, and how to write my own math library - you know, just for the knowledge.
All three turned out to involve this evil number called "pi" that has no end and meets the criteria for a PRNG. ;)
Anyway, I was studying up on sine, and it turns out it can be calculated to 1 degree but not further; however, it can be approximated using a Taylor series or a Maclaurin series.
I want to calculate sine accurately everywhere between 0 and 1/8 radians. I DON'T need sine from -1/8 radians to 1/8 radians, which is the more common request.
Is it possible to get 100% accuracy within this range? Obviously the computer can only do approximations, but just how accurate is the algorithm by default?
A specific question: how accurate is the Taylor series for sine? Isn't there always an error margin for every range and every order?
What other ways can I approximate sine? Remember, I don't need the negative range at all!
I have never studied calculus except in my spare time, through cryptic Wikipedia articles and way-too-specific YouTube videos.
Even a link to something beginner friendly would be nice, but I don't want to spend a great deal of time studying this.
Thanks everybody!
Approximating Sine?
[quote]Anyway, I was studying up on sine, and it turns out it can be calculated to 1 degree but not further; however, it can be approximated using a Taylor series or a Maclaurin series.[/quote]
Huh? I'm not sure I understand what you mean by "the sine function can only be calculated to 1 degree"...
[quote]I want to calculate sine accurately everywhere between 0 and 1/8 radians. I DON'T need sine from -1/8 radians to 1/8 radians, which is the more common request.
Is it possible to get 100% accuracy within this range? Obviously the computer can only do approximations, but just how accurate is the algorithm by default?[/quote]
As accurate as you want it to be. The more terms you have, the more accurate it'll be. Observe:

sin(x) ≈ x - x^3/3! + x^5/5! - x^7/7! + ... + (-1)^n * x^(2n+1) / (2n+1)!

The larger n is, the more terms you have and the more accurate your result will be. If your largest x = 1/8 and you want your error to be less than 0.0005, just make n big enough so that your error is less than 0.0005. In terms of computers, you can figure out how accurate your data type (float, double, etc.) is, and just make sure that the error is small enough to give you an accurate answer for your needs. In the limit as n goes to infinity, you're calculating the exact value and the error is 0.
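As a concrete illustration (a hypothetical Python sketch I've added, not from the thread), here is the partial sum computed directly and compared against the library sine:

```python
import math

def taylor_sin(x, n):
    """Maclaurin partial sum for sine: x - x^3/3! + ... + (-1)^n x^(2n+1)/(2n+1)!"""
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(n + 1))

x = 1.0 / 8.0
for n in range(4):
    # Print the approximation and its error against math.sin
    print(n, taylor_sin(x, n), abs(taylor_sin(x, n) - math.sin(x)))
```

On this range the error collapses very quickly as n grows; n = 1 is already below 3e-7.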
[quote]A specific question: how accurate is the Taylor series for sine? Isn't there always an error margin for every range and every order?[/quote]
The largest error is just the next term. That is, if you have n from the above equation, your error is less than:

x^(2n+3) / (2n+3)!
[size=2][ I was ninja'd 71 times before I stopped counting a long time ago ] [ f.k.a. MikeTacular ] [ My Blog ] [ SWFer: Gaplessly looped MP3s in your Flash games ]
Cornstalks covered it pretty well, and using mathematical notation that I'm always too lazy to figure out how to do online.
Essentially if you know your numeric datatype, you can calculate to the precision limit for that type, and that's the answer. There's no point going any further. By expressing the error term as above with the same exponent as your current series sum, you can know whether it will affect the mantissa at all. In practice you wouldn't do that, you'd just pick an arbitrary error tolerance with the knowledge that your output will always have a magnitude less than 1. Although you could squeeze out extra precision around the zero value, there's very little point.
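To make the mantissa point concrete, here's a tiny Python sketch (the variable names and values are mine, chosen for illustration): once a series term falls far enough below the running sum's precision, adding it changes nothing.

```python
import sys

eps = sys.float_info.epsilon  # ~2.2e-16: smallest eps with 1.0 + eps != 1.0

partial_sum = 0.12467447916666667  # a two-term sum near sin(1/8)
next_term = 1e-20                  # a term far below double precision at this magnitude

print(partial_sum + next_term == partial_sum)  # True: the term cannot affect the mantissa
```

So any term smaller than roughly the sum times machine epsilon is wasted work.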
One thing to keep in mind is that many applications require sin, cos, etc to be as fast as possible, at a potential loss of quality. Due to precision limits errors will accumulate anyway if you don't take steps to counteract this, so minor errors in sin could be irrelevant. For example, rotating a vector by 1 degree 180 times is quite unlikely to give the same result as rotating once by 180 degrees. So don't get too caught up in your quest for accuracy unless the application warrants it.
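The rotation example can be checked directly; this is an illustrative Python sketch (the function names are mine):

```python
import math

def rotate(x, y, degrees):
    """Rotate the vector (x, y) counterclockwise by the given angle."""
    r = math.radians(degrees)
    c, s = math.cos(r), math.sin(r)
    return x * c - y * s, x * s + y * c

# 180 incremental 1-degree rotations of (1, 0)...
vx, vy = 1.0, 0.0
for _ in range(180):
    vx, vy = rotate(vx, vy, 1.0)

# ...versus one 180-degree rotation.
ux, uy = rotate(1.0, 0.0, 180.0)

print((vx, vy), (ux, uy))  # both near (-1, 0), but usually not bit-identical
```

Each incremental step rounds its result, and those rounding errors accumulate over the 180 steps.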
[quote]Huh? I'm not sure I understand what you mean by "the sine function can only be calculated to 1 degree"...[/quote]
It's what I got out of this article.
Also, a different article made it seem like higher orders were only necessary to calculate sine at more extreme values of x, hence the thrust of my question.
Thanks, Cornstalks.
[quote]Although you could squeeze out extra precision around the zero value, there's very little point.[/quote]
So Jeffery, is it true that less extreme values of x (I mean values closer to zero) are by default more accurate?
Any way to know just how accurate that would be? Don't tell me unless it's simple enough for me to grasp lol.
Thanks to you as well, Jeffery.
[quote name='Cornstalks' timestamp='1344989798' post='4969651']
Huh? I'm not sure I understand what you mean by "the sine function can only be calculated to 1 degree"...
It's what I got out of this article.
[/quote]
From what I understand from that article, it's just talking about how to calculate sin(1°). I don't see it suggesting that you cannot calculate other values, or suggesting any other limitations of sine.
[quote]Also, a different article made it seem like higher orders were only necessary to calculate sine at more extreme values of x.
Is it true that less extreme values of x (I mean values closer to zero) are by default more accurate?[/quote]
Yes, that is true. If you look at the error term I posted, you'll see that the error depends on x: the higher x is, the higher the error bound is. To find out how many terms you need, take your largest possible x (1/8 in your case) that you accept, and the maximum error you will accept, and plug them into that error equation I posted to find out what n should be (that is, how many terms you need to have to make sure that even your largest x is within your acceptable error range).
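That search for n is easy to automate; here's a Python sketch I've added using the next-term bound described above (the function name is mine):

```python
import math

def terms_needed(x, tol):
    """Smallest n for which the first omitted Maclaurin term,
    x^(2n+3)/(2n+3)!, drops below the error tolerance tol."""
    n = 0
    while x ** (2 * n + 3) / math.factorial(2 * n + 3) >= tol:
        n += 1
    return n

print(terms_needed(1 / 8, 0.0005))       # 0: sin(x) ~ x alone is within 0.0005 on [0, 1/8]
print(terms_needed(2 * math.pi, 0.003))  # far more terms over a full period
```

Note this bounds the mathematical truncation error only; floating-point rounding is a separate (much smaller) concern.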
[quote]find out what n should be (that is, how many terms you need to have to make sure that even your largest x is within your acceptable error range).[/quote]
What should my acceptable error range be for graphics programming, audio programming, and a math library?
I should probably have two, right? One accurate one and one that's as fast as it can be?
Is a margin of error of 0.003 good enough for a full period of sine that uses four mults and three adds?
That's a bit faster than the Taylor series up to the seventh order, I think.
EDIT: Thanks guys, I think I've got it from here.
[quote name='Cornstalks' timestamp='1344991895' post='4969662']
find out what n should be (that is, how many terms you need to have to make sure that even your largest x is within your acceptable error range).
What should my acceptable error range be for graphics programming, audio programming, and a math library?
I should probably have two, right? One accurate one and one that's as fast as it can be?
[/quote]
I dunno, these things are often determined experimentally. Being able to set a compiler flag for low/high precision might be nice.
[quote]Is a margin of error of 0.003 good enough for a full period of sine that uses four mults and three adds?
That's a bit faster than the Taylor series up to the seventh order, I think.[/quote]
A margin of error of 0.003 is pretty good for a full period (that is, [0, 2*pi)). The Taylor series doesn't do too well for larger values of x (for example, to cover a full period (again, [0, 2*pi)), it takes about 9 terms to get a maximal error of ~0.003), so yeah, I'd say four mults and three adds is pretty good for 0.003. But I don't know a lot of ways to compute sine, so someone might be able to comment on this better.
But if your range is [0, 1/8] then the Taylor series is pretty dang good. It only takes two terms (one subtraction and three multiplies) to get a maximal error of 0.000000254313151.
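That two-term figure is easy to verify with a quick scan (a Python sketch I added, not from the post):

```python
import math

def sin2(x):
    # Two-term Maclaurin polynomial: one subtraction, three multiplies.
    return x - (x * x * x) * (1.0 / 6.0)

# Worst-case error over [0, 1/8]; the maximum lands at the endpoint x = 1/8.
xs = [0.125 * i / 1000 for i in range(1001)]
worst = max(abs(sin2(x) - math.sin(x)) for x in xs)
print(worst)  # roughly 2.54e-7, in line with the x^5/5! bound at x = 1/8
```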
The article that computes sin(1°) using roots is completely irrelevant in this situation. Computing a square root requires the same type of numerical approximations that computing the sine requires.
Besides using the Maclaurin series for sine (which in the range 0 to 1/8 only takes something like 6 terms to give you all the precision you can hold in a double), you should also look into the CORDIC algorithm.
Taylor polynomials provide local approximations to a function. Better is to use global approximations that minimize some norm. My standard is to use minimax approximations (minimize the L-infinity norm for the difference between polynomial and function). The math for generating the polynomial coefficients is heavy, but the results are pleasing. DirectX Math used to use Taylor polynomials for sine and cosine, but the version shipping with Windows 8 (and DX 11.1) now uses minimax approximations.
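For a feel of the difference, here's an illustrative Python comparison I've added. It uses interpolation at Chebyshev nodes, which is close to (but not exactly) the true minimax polynomial a Remez exchange would produce:

```python
import math

A, B = 0.0, math.pi / 2
N = 6  # six Chebyshev nodes -> a degree-5 interpolant

# Chebyshev nodes mapped from [-1, 1] to [A, B]
nodes = [0.5 * (A + B) + 0.5 * (B - A) * math.cos((2 * k + 1) * math.pi / (2 * N))
         for k in range(N)]

def cheb_sin(x):
    # Lagrange interpolation of sin through the Chebyshev nodes.
    total = 0.0
    for i, xi in enumerate(nodes):
        w = math.sin(xi)
        for j, xj in enumerate(nodes):
            if j != i:
                w *= (x - xj) / (xi - xj)
        total += w
    return total

def taylor_sin5(x):
    # Degree-5 Maclaurin polynomial, for comparison at the same cost.
    return x - x ** 3 / 6 + x ** 5 / 120

xs = [A + (B - A) * i / 1000 for i in range(1001)]
err_cheb = max(abs(cheb_sin(x) - math.sin(x)) for x in xs)
err_taylor = max(abs(taylor_sin5(x) - math.sin(x)) for x in xs)
print(err_cheb, err_taylor)  # the global fit wins by a wide margin near pi/2
```

Same polynomial degree, but the Taylor error balloons toward pi/2 while the global fit spreads its error evenly across the interval.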
This topic is closed to new replies.