Posts: 1,368
Threads: 212
Joined: Dec 2006
In learning about the sine and cosine integrals, I learned that for really big x the cosine integral is very near to sin(x)/x.
This led to my discovery that for really big x, sin(x) is wrong on all of my calculators.
x = 1e50 is a good example. On the 48G, 49G+, 33S, and 28S, in radians mode sin(1e50) returns -8.68392787928e-2.
On the 10-digit calcs (HP-41CV and CX, 15C, 11C, 33C, 34C), I get -0.132816215.
The venerable 45 returns 0.653432214.
The actual 12-digit value, according to Mathematica and Free42 Decimal (with 25 digits of BCD precision), is -7.89672493429e-1.
I find this fascinating, since the "wrongness" is consistent depending on whether the machine has 10 or 12 digits.
Any ideas? I am sure it is a rounding thing and the limitations of reducing such a big angle to the desired range.
Les
Posts: 1,755
Threads: 112
Joined: Jan 2005
Hi, Les:
Les posted:
"On the 10 digit calcs (HP41CV and CX, 15C, 11C, 33C, 34C), I get 0.132816215 [...] I am sure it is a rounding thing and the limitations of reducing such a big angle to the desired range."
That's correct. I'll discuss the 10-digit case (HP-15C, etc.).
Big enough angles are internally reduced mod "Pi" to bring them to proper (small) size, in the proper quadrant. The internal value of "Pi" used to perform this reduction determines the result's accuracy and thus that of the final sine value.
In the 10-digit case, the internal value of "Pi" used for the reduction is 3.141592653590, which is the correctly rounded 13-digit value of Pi, i.e., three additional "guard" digits.
When this 13digit internal value of Pi is used to reduce 10^50 to the proper quadrant, you get:
10^50 - INT(10^50/3.141592653590 + 1)*3.141592653590 = -0.13320983018, exactly
and its sine is then:
sin(-0.13320983018) = -0.132816215
which is the value the HP-15C is returning. You can check these computations in Mathematica using multiprecision arithmetic to see that they do hold.
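As a sketch, this reduction can also be replayed with exact rational arithmetic (Python's fractions module here; the 13-digit Pi value is the one quoted above):

```python
from fractions import Fraction
import math

# The 13-digit internal value of Pi described above, held exactly
PI13 = Fraction(3141592653590, 10**12)  # 3.141592653590

x = 10**50
q = x // PI13              # exact integer quotient
r = x - (q + 1) * PI13     # reduced argument: lands in the negative quadrant
print(float(r))            # -0.13320983018 (per the post)
print(math.sin(float(r)))  # ~ -0.132816215, the HP-15C's answer
```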
Best regards from V.
Posts: 1,792
Threads: 62
Joined: Jan 2005
Hi, Valentin 
Thank you for the insights. I've wondered about the internals of this calculation.
Quote:
Big enough angles are internally reduced mod "Pi" to bring them to proper (small) size, in the proper quadrant. The internal value of "Pi" used to perform this reduction determines the result's accuracy and thus that of the final sine value.
This statement is consistent with the methodology described in the June 1977 Hewlett-Packard Journal article, "Personal Calculator Algorithms II: Trigonometric Functions", and certainly explains the difference in the radian-mode results using 13-digit, 15-digit and higher-precision values of pi.
The 1977 article also explains that each input argument x is first converted to radians and then MOD'd. It would seem smarter to do MOD(x, 360) in degrees mode and MOD(x, 400) in grads mode, to help avoid or at least minimize these rounding errors.
The pre-Pioneer (e.g., HP-15C) and Pioneer-based (e.g., HP-48) models give consistent answers of sin(x) = -0.9848077530 for x = 1e50 in degrees, which matches the answer given by Windows XP Calculator. The reduced angle is 280 degrees. These results suggest that the integer MOD is being performed where appropriate.
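The exact integer reduction in degrees mode is easy to sketch (Python integers here, standing in for the calculator's internal MOD):

```python
import math

x = 10**50                        # the exact integer argument, in degrees
r = x % 360                       # exact reduction: no value of pi involved
print(r)                          # 280
print(math.sin(math.radians(r)))  # -0.98480775301...
```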
It wasn't always done as well, due in part to limited ROM space. The HP-35 returns answers that degrade with each order of magnitude after x exceeds 280 degrees, due to its use of MOD(x*pi/180, 2*pi), and probably a coarser value of pi. There was no radians mode, even though all calculations were done internally in radians.
The 1981 Casio fx-3600P accepts trigonometric arguments only less than "4 circles" in magnitude (e.g., -1440 deg < x < 1440 deg).
Posts: 2,309
Threads: 116
Joined: Jun 2005
A good case can be made that any value between -1.0 and 1.0 is the correct result for sin(1e50).
Now if you asked for sin(1.00000000000000000000000000000000000000000000000000e50), that should have a specific result. However, your calculator doesn't have 50-digit precision, so you can't actually ask it for that.
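A sketch of that point in IEEE 754 double precision (the same argument applies to a 12-digit calculator): near 1e50 the gap between adjacent representable numbers dwarfs the period of the sine.

```python
import math

x = 1e50
gap = math.ulp(x)           # spacing between adjacent doubles near 1e50
print(gap)                  # 2**114, roughly 2.1e34
print(gap / (2 * math.pi))  # ~3e33 full periods fit inside one ulp
```

Any of those ~3e33 periods is equally consistent with the 15 or so digits the input actually carries, so every value in [-1, 1] is a defensible answer.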
Posts: 1,193
Threads: 43
Joined: Jul 2005
Eric:
You found an excellent and graphic way to present this idea. I was just starting to wonder about the appropriateness of talking about a 1e50 angle, but your explanation closes the issue.
Congratulations!
Posts: 1,368
Threads: 212
Joined: Dec 2006
I agree wholeheartedly.
I take note that in the Numerical Recipes treatment of the sine, cosine, and related Fresnel integrals, the accuracy of the routines for these special functions is limited by the accuracy of the trigonometric library routines when the argument is really big.
I also note that Stephen Moshier's double-precision routine sin.c in the Cephes library returns an ERROR when the argument is greater than 2^30 (about 1e9), at which point the relative error jumps suddenly from around 1e-17 to 1e-7. The code also warns that when the argument reaches about 2^49 (about 5.6e14) the results returned are "meaningless".
Since even the 12-digit calculators operate internally at a little less than 64-bit double precision, asking for a correct value of sin(1e50) is a little unfair.
That said, I am impressed that Free42, with only 25 digits of internal precision, gets it right - at least the 12 digits that are returned in the display.
Thanks for the insight.
Les
Posts: 727
Threads: 43
Joined: Jul 2005
That said, I am impressed that Free42, with only 25 digits of internal precision, gets it right - at least the 12 digits that are returned in the display.
Free42, or more precisely the BCD20 library, uses a 1060-digit approximation of pi to do the mod 2*pi argument reduction. So you can get incorrect results even out of Free42, by using arguments > 10^1060.
Getting the RAD mode trigs right for *all* arguments would require using a 10,000-digit approximation of pi. A lot of effort with very little practical payoff - much like the related case of making MOD return exact results for huge arguments...
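As a back-of-the-envelope sketch of that trade-off (a rule of thumb for illustration, not the BCD20 source): to reduce an argument near 10^e mod 2*pi while keeping p significant digits in the residue, roughly e + p digits of the constant are needed.

```python
# Rule of thumb (an assumption for illustration, not BCD20's actual code):
# digits of 1/(2*pi) needed ~ argument exponent + working precision.
def pi_digits_needed(exponent: int, precision: int) -> int:
    return exponent + precision

# With 25 working digits, a 1060-digit constant covers arguments
# up to roughly 10**1035.
print(pi_digits_needed(1035, 25))  # 1060
```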
- Thomas
Posts: 1,368
Threads: 212
Joined: Dec 2006
Quote:
Free42, or more precisely the BCD20 library, uses a 1060-digit approximation of pi to do the mod 2*pi argument reduction. So you can get incorrect results even out of Free42, by using arguments > 10^1060
Actually, you use a 1060-digit approximation to 1/(2*pi), but I see the logic of this in any case.
Frankly, I thought the argument reduction was done with a 25-digit approximation to pi, so I am impressed it is so much more precise.
Les
Posts: 1,368
Threads: 212
Joined: Dec 2006
Quote:
So, you can get incorrect results even out of Free42, by using arguments > 10^1060.
Actually, Thomas, once x > 9.9e1027, sin(x) returns <Not a Number>.
Up to that point, compared to Mathematica, it seems to give pretty good results, i.e., sin(9.9e1027) = 0.936195810146 in both settings. It seems you have set it up so that sin(x) returns either the right answer or none at all. Very wise!
Les
Edited: 25 Feb 2007, 8:45 p.m.
Posts: 727
Threads: 43
Joined: Jul 2005
Quote:
Quote: So, you can get incorrect results even out of Free42, by using arguments > 10^1060.
Actually, Thomas, once x > 9.9e1027, sin(x) returns <Not a Number>.
Up to that point, compared to Mathematica, it seems to give pretty good results, i.e., sin(9.9e1027) = 0.936195810146 in both settings. It seems you have set it up so that sin(x) returns either the right answer or none at all. Very wise!
Les
The credit should go to Hugh Steers, who wrote the BCD code used in Free42. And of course any criticism that the modtwopi() function doesn't work all the way up to 1e10000 should also be directed at him. ;)
Has anyone tested IEEE754 implementations to see if their trigs are accurate when given ridiculously large arguments? Just curious...
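Out of the same curiosity, here is one way to probe a platform's double-precision libm from Python (a sketch; it assumes math.sin is the platform's sin, and uses a hard-coded 100-digit value of pi for an exact rational reduction):

```python
from fractions import Fraction
import math

# pi to 100 decimal places (plenty of guard digits for a ~1e50 argument)
PI = Fraction(int(
    "3"
    "14159265358979323846264338327950288419716939937510"
    "58209749445923078164062862089986280348253421170679"), 10**100)

x = int(1e50)                  # exact integer value of the double 1e50
r = Fraction(x) % (2 * PI)     # exact reduction mod 2*pi
if r > PI:
    r -= 2 * PI                # center the residue in (-pi, pi]
reference = math.sin(float(r))    # sine of the small, well-reduced residue
print(reference, math.sin(1e50))  # should agree on fdlibm-style libms
```

Note that int(1e50) is the exact value of the nearest double, not 10^50 itself, so the reference here checks the libm's reduction of that double rather than reproducing the -0.789672... value quoted earlier.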
- Thomas
Posts: 536
Threads: 56
Joined: Jul 2005
ha! i knew that 1060-digit pi constant would be useful one day :)
when the argument is too big for the constant to give a sensible result, bcd20 returns nan, as you've noticed. this is to prevent the result from being completely wrong. i'd rather people got nan as the answer than think a wrong one was correct.
for those interested in precision answers, you might like to run my "exact" engine. download exact.exe from http://www.voidware.com (downloads section). exact is a multiprecision engine that verifies its answers and is never wrong (claim!!)
also bcd20 is now deployed as the calculation engine underneath hplua 0.2 (http://www.sf.net/projects/hplua)
regards,
Posts: 32
Threads: 2
Joined: Dec 2006
Quote: Has anyone tested IEEE754 implementations to see if their trigs are accurate when given ridiculously large arguments? Just curious...
James Gosling discusses why correct argument reduction makes trigonometric functions slower in Java than in C++ here. He says:
Quote: As far as I know, the K5 is the only x86 family CPU that did sin/cos accurately. AMD went back to being bit-for-bit compatible with the old x87 behavior, assumably because too many applications broke. Oddly enough, this is fixed in Itanium.
Handling this is often a matter of giving correct results to meaningless input, but it is nice to leave it to the user/programmer to decide if a computation makes sense.
How to do this correctly was first described by Payne and Hanek in 1983, after the 8087 was developed, and after HP lost interest in making their calculators more perfect (or, after they got perfect enough for all practical purposes).
Posts: 305
Threads: 17
Joined: Jun 2007
On James Gosling's blog page, he quotes extensively from Joe Darcy, his local floating point God.
Darcy says:
"The implementation challenge is that sin/cos are implemented using argument reduction whereby any input is mapped into a corresponding input in the [pi/4, pi/4] range. Since the period of sin/cos is pi and pi is transcendental, this amounts to having to compute a remainder from the division by a transcendental number, which is nonobvious. A few years after the x87 was designed, people figured out how to do this division as if by an exact value of pi."
He also says:
"The fix for this problem was figured out quite a long time ago. In the excellent paper 'The K5 transcendental functions' by T. Lynch, A. Ahmed, M. Schulte, T. Callaway, and R. Tisdale a technique is described for doing argument reduction as if you had an infinitely precise value for pi."
It sounds to me like Darcy is suggesting that a new technique has been discovered since the x87 was designed. I was eager to learn of this new technique.
I downloaded this paper from IEEEXplore and the only place where the authors mention range reduction for the sin function is on page 264, first column, where they say:
"The trigonometric functions reduce the domain of theta in [2^63,2^63] to a range of (pi/4,pi/4) by subtracting integer multiples of pi/2 from the input operand. Pi/2 is represented with up to about 256 bits depending on how much precision is required. The multiple precision arithmethc made handling such a precise value of pi straight forward and efficient."
I don't see anything new here. What they have done is to use an approximation to pi/2 with a large number of significant bits up to a maximum of 256 bits.
The Saturn-processor-based HP machines use a 31-decimal-digit range reduction constant. Since these return 12-digit results, the largest input argument to the SIN function for which we ought to expect correct 12-digit results is about 10^(31-12). On my HP-50, SIN(1E20) (in radian mode, of course) returns a result that is correct to 12 digits.
The K5 SIN function presumably can return a double-precision result (53-bit significand) to the user. If a 256-bit range reduction constant is used, then the SIN function with input arguments as large as 2^(256-53) = 2^203 (about 1.286E61) should return full accuracy on the K5. It would be interesting to see how the K5 SIN function performs when given a substantially larger argument, say 1E90.
This paper was published in 1995, so when Joe Darcy says "The fix for this problem was figured out quite a long time ago," apparently he doesn't realize that the "fix" (as described in this paper, anyway) was figured out well before 1995. Maybe he meant to refer to some other paper describing a truly new technique.
He said "Since the period of sin/cos is pi and pi is transcendental, this amounts to having to compute a remainder from the division by a transcendental number, which is nonobvious. A few years after the x87 was designed, people figured out how to do this division as if by an exact value of pi."
Non-obvious to whom? Certain other people figured it out no later than the early 80s (actually, I'm sure it was done well before that). Maybe he should have used HP calculators. The HP-71 did this sort of thing (an extended-precision range reduction constant) in 1983.
I haven't obtained the ACM SIGNUM paper, but the abstract doesn't suggest that anything other than "reduction by division" (with a large range reduction constant) is being done. Multiple precision division is required, of course, and maybe the mechanics of doing that is the substance of this paper.
Were Payne and Hanek really the first to describe how to do it correctly? I wonder.
Posts: 32
Threads: 2
Joined: Dec 2006
Yes, the K5 paper disappointed me too. The method described by Payne and Hanek is much more interesting, and I believe this is what Hugh Steers has implemented in the BCD20 library.
The idea is that instead of taking the remainder of division by 2*PI, you multiply by 1/(2*PI), take the fractional part, and multiply that by 2*PI. Since only the fractional part is used, any digits of the constant 1/(2*PI) that contribute only to the integer part can be ignored. So, to compute the remainder of 1E500 divided by 2*PI, you can ignore the first 500 decimals of 1/(2*PI), and the calculation needs only a little more precision than the required result, instead of 500+ digit precision.
So the calculation can be done in a reasonable time for even the largest values, costing only the storage space for all the needed digits of 1/2PI.
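That digit-skipping idea can be sketched in exact rational arithmetic (Python; the 100-digit pi string here is an assumption of this illustration, not BCD20's actual constant):

```python
from fractions import Fraction

# pi to 100 decimal places, held as an exact rational
PI = Fraction(int(
    "3"
    "14159265358979323846264338327950288419716939937510"
    "58209749445923078164062862089986280348253421170679"), 10**100)

# 1/(2*pi) truncated to 60 decimal digits, kept as a scaled integer
D = int(1 / (2 * PI) * 10**60)
inv_2pi = Fraction(D, 10**60)

x = 10**20
# Payne-Hanek insight: digits of 1/(2*pi) that can only feed the
# integer part of x * 1/(2*pi) may be discarded before multiplying.
trimmed = Fraction(D % 10**40, 10**60)         # first 20 digits zeroed
print((x * inv_2pi) % 1 == (x * trimmed) % 1)  # True
```

Because x = 10^20 shifts the constant exactly 20 places, the first 20 digits of 1/(2*pi) land entirely in the integer part and drop out of the fractional part, which is all the reduction needs.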
Posts: 2,309
Threads: 116
Joined: Jun 2005
Elementary Functions: Algorithms and Implementation by Jean-Michel Muller, 2nd edition (October 2005), gives multiple algorithms for correct argument reduction for the trigonometric functions, as well as methods for producing correctly rounded results of those functions. I've found it to be a very useful book.
Posts: 1,792
Threads: 62
Joined: Jan 2005
Hi, Eric 
Quote:
A good case can be made that any value between -1.0 and 1.0 is the correct result for sin(1e50).
(NOTE: Edited for better phrasing, based on comments from Eric Smith)
"Correct" using an argument based upon significant digits, but then the result returned would be absolutely meaningless.
However, calculators treat input values as exact, and then generally return the best possible result. With a modern calculator, the result returned for sin(1E+50) degrees or grads will be exactly correct. (Radians is subject to roundoff error in the internal MOD, as pointed out previously.) Calculations for cases like these are not very practical, but precise results are possible when an input value can be represented exactly within 12 significant digits.
In a similar vein to what Les posted, it would not be unreasonable for a modern high-end calculator to restrict valid real-valued arguments for trig functions to +/- 999,999,999,999. This is more than adequate for practicality, and would help to ensure accurate, meaningful results in any angular mode.
If the user really wants to know what the sine of 2.456 x 10^23 degrees is, the user can do MOD(2.456 x 10^23, 360) first, then take the sine.
In fact, there is already a long-standing precedent for such a restriction on HP calculators. Since both input arguments to combination or permutation must be non-negative integers, these functions restrict the two inputs to values verifiable as such - not exceeding 9,999,999,999 (pre-Saturn) or 999,999,999,999 (Saturn). Trigonometric arguments, of course, need not be integers, but the same principle of exactitude perhaps ought to apply.
- KS
Edited: 26 Feb 2007, 12:14 a.m. after one or more responses were posted
Posts: 2,309
Threads: 116
Joined: Jun 2005
All calculations performed by the calculator use a finite (and small) number of significant digits. The calculator doesn't "assume" that when you enter 1E50, that it is exact. It just performs operations based on the first ten (or twelve, or whatever) digits being a one followed by zeros. The only assumption it makes is that the first truncated digit doesn't cause the last digit present to be rounded to a different value.
Anyhow, unless you can cite an example of a physical value on the order of 1E50 known to have an exact value to 51 places, I'll stand by my assertion that any value between -1.0 and 1.0 is the correct result.
Quote:
but then the result returned would be absolutely meaningless.
Yes. That's the point. That's why many IEEE floating point implementations will return a NaN rather than a numeric value.
Posts: 1,792
Threads: 62
Joined: Jan 2005
Eric 
You're right; there was some poor phrasing in my previous post. Calculators don't really "assume" anything. I edited my previous post and added some new material.
- KS
Posts: 882
Threads: 23
Joined: Jan 2005
Hi Les, try with LongFloat!
Greetings, Massimo