Cosine redux
By jag on Jan 20, 2006
It's been quite a while since my last blog entry: I had a bit of a technology meltdown that I'm not quite done with yet :(
I was doing some recreational hacking over the holidays that involved evaluating cosines. I ended up doing (once again!) an implementation of cosine (don't ask why). Given the confused flamage that my previous posts on cosine generated, I figure that showing some code would be useful. I would never have thought that cosine would generate so much email traffic. Yes, I know about Taylor series. No, I wasn't trying to insult centuries of standard mathematical practice. Performance is often the art of cheating carefully.
So, here's my implementation of cosine, with its argument in turns (1 turn == 360 degrees):
    public static float cost(float f) {
        int bits = Float.floatToRawIntBits(f);
        int mantissa = (bits&0x7FFFFF)|(1<<23);
        int shift = (bits<<1>>>24)-127+9; // exponent, unbiased, with shift
        if(shift>=32 || shift<=-32) return 1;
        int fractionBits = shift>=0 ? mantissa<<shift : mantissa>>-shift;
        int tableIndex = (fractionBits>>>(30-resultPrecision))&tableSizeMask;
        switch(fractionBits>>>30) { // Quadrant is top two bits
            case 0: return cosTable[tableIndex];
            case 1: return -cosTable[tableSizeMask-tableIndex];
            case 2: return -cosTable[tableIndex];
            default /*case 3*/: return cosTable[tableSizeMask-tableIndex];
        }
    }

Let's go through this slowly:

    int bits = Float.floatToRawIntBits(f);

Get the IEEE 754 bits.

    int mantissa = (bits&0x7FFFFF)|(1<<23);

The mantissa is the bottom 23 bits, to which the hidden bit must be prepended.

    int shift = (bits<<1>>>24)-127+9;

Extract the exponent, correct for the exponent bias, then add a bias to move the binary point to the top of the word.

    if(shift>=32 || shift<=-32) return 1;

If the shift is too large, the fraction bits would be zero, therefore the result is 1.

    int fractionBits = shift>=0 ? mantissa<<shift : mantissa>>-shift;

Shift the mantissa so that it's a fixed-point number with the binary point at the top of the 32-bit int. The magic is in what's not here: because the argument is in turns, I get to ignore all of the integer bits (range reduction made trivial); and because cosine is an even function (cos(-x) == cos(x)), I get to ignore the sign.

    int tableIndex = (fractionBits>>>(30-resultPrecision))&tableSizeMask;

The top two bits are the quadrant; extract the bits below that to derive a table index.

    switch(fractionBits>>>30) {

One case for each quadrant. This switch could be eliminated by making the table 4 times larger.

    case 0: return cosTable[tableIndex];

Yes! It's just a table lookup! Truly trivial.

    case 1: return -cosTable[tableSizeMask-tableIndex];

...
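The post never shows how cosTable, resultPrecision, and tableSizeMask are set up. Here is a self-contained sketch with one plausible set of definitions (the table size and the table-construction loop are my assumptions, not from the post):

```java
// Sketch: the cost() function above plus an assumed table setup.
// resultPrecision, tableSizeMask and cosTable are named in the post;
// these particular definitions are guesses for illustration.
public class CosTurns {
    static final int resultPrecision = 12;             // assumed: 12-bit table index
    static final int tableSize = 1 << resultPrecision;
    static final int tableSizeMask = tableSize - 1;
    static final float[] cosTable = new float[tableSize];

    static {
        // The table covers the first quadrant: 0 .. 0.25 turns.
        // Entry i holds cos of i/(4*tableSize) of a turn.
        for (int i = 0; i < tableSize; i++)
            cosTable[i] = (float) Math.cos(2 * Math.PI * i / (4.0 * tableSize));
    }

    public static float cost(float f) {                 // argument in turns
        int bits = Float.floatToRawIntBits(f);
        int mantissa = (bits & 0x7FFFFF) | (1 << 23);   // restore hidden bit
        int shift = (bits << 1 >>> 24) - 127 + 9;       // unbiased exponent + bias
        if (shift >= 32 || shift <= -32) return 1;      // fraction bits all zero
        int fractionBits = shift >= 0 ? mantissa << shift : mantissa >> -shift;
        int tableIndex = (fractionBits >>> (30 - resultPrecision)) & tableSizeMask;
        switch (fractionBits >>> 30) {                  // quadrant is top two bits
            case 0:  return  cosTable[tableIndex];
            case 1:  return -cosTable[tableSizeMask - tableIndex];
            case 2:  return -cosTable[tableIndex];
            default: return  cosTable[tableSizeMask - tableIndex];
        }
    }
}
```

With a 4096-entry table the truncating lookup is good to roughly half a table step, a few parts in ten thousand; e.g. cost(0.125f) lands on cos(pi/4) almost exactly, and cost(-0.125f) == cost(0.125f) because the sign bit is shifted away.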
Table lookups are a very common way to implement mathematical functions, particularly the periodic ones like cosine. There are all kinds of elaborations. One of the most common for improving accuracy is to do some sort of interpolation between table elements (linear or cubic, usually).
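As a sketch of the linear-interpolation elaboration, here is a first-quadrant-only lookup that blends the two nearest table entries instead of truncating to one. None of this is from the post; the class, the deliberately coarse table, and the helper name are mine:

```java
// Hypothetical linear-interpolation variant, first quadrant only.
// Even a coarse 256-entry table gets close to full float precision
// once neighboring entries are blended.
public class CosInterp {
    static final int PRECISION = 8;                    // deliberately small table
    static final int SIZE = 1 << PRECISION;
    static final float[] table = new float[SIZE + 1];  // extra entry so i+1 is safe

    static {
        for (int i = 0; i <= SIZE; i++)
            table[i] = (float) Math.cos(2 * Math.PI * i / (4.0 * SIZE));
    }

    // turns must lie in [0, 0.25)
    static float cosFirstQuadrant(float turns) {
        float scaled = turns * 4 * SIZE;               // position in table units
        int i = (int) scaled;                          // lower table index
        float frac = scaled - i;                       // distance between entries
        return table[i] + frac * (table[i + 1] - table[i]); // linear blend
    }
}
```

Linear interpolation drops the worst-case error from about half a table step to roughly the square of the step size, which is why it lets you shrink the table so dramatically.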
I'm searching for a good arbitrary-precision arithmetic package for numerical computations with support for trigonometric functions.
Can you suggest one?
Posted by Axel Kramer on January 22, 2006 at 07:59 PM PST #
What's so grotesque about that? If every function requires information about the error bounds, then enforcing the desired error limits becomes quite complicated. This is doubly true as long as there is no standard way to track and describe error bounds. Perhaps JSR 275 will address that. http://www.jcp.org/en/jsr/detail?id=275
I would think that "The Jackpot Way" to solve this problem would be to provide an editor for complex functions and equations. Then a function evaluation engine can evaluate the entire function, tracking the error along the way. Radically different evaluation methods could be chosen, simply by changing the acceptable error.
Total error is a pain to track, but it is almost always the only important thing. The error involved in any single stage of a calculation is only important to the extent that it impacts total error.
If you think this approach requires too much intelligence from the function evaluation engine to ever compete in terms of performance, would you say the same thing about Java itself?
Posted by Curt Cox on January 23, 2006 at 12:04 AM PST #
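The error-tracking idea in the comment above can be sketched with a toy interval type: each value carries [lo, hi] bounds, so the total error of a compound expression falls out of the evaluation itself. This is purely illustrative and unrelated to any actual JSR 275 API:

```java
// Toy interval arithmetic for tracking total error through a calculation.
// Entirely hypothetical; names and design are illustrative only.
class Interval {
    final double lo, hi;

    Interval(double lo, double hi) { this.lo = lo; this.hi = hi; }

    Interval add(Interval o) {
        return new Interval(lo + o.lo, hi + o.hi);
    }

    Interval mul(Interval o) {
        // With possibly-negative bounds, the extremes can come from
        // any corner of the rectangle, so check all four products.
        double a = lo * o.lo, b = lo * o.hi, c = hi * o.lo, d = hi * o.hi;
        return new Interval(Math.min(Math.min(a, b), Math.min(c, d)),
                            Math.max(Math.max(a, b), Math.max(c, d)));
    }

    double width() { return hi - lo; }  // total error bound of the result
}
```

An evaluation engine built on something like this could pick a cheap table lookup or an expensive series expansion per subexpression, simply by checking whether the running width still fits the caller's acceptable error.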