Numerics | Joseph D. Darcy's Oracle Blog
https://blogs.oracle.com/darcy/numerics/rss
Sat, 24 Oct 2020 06:58:40 +0000 | FeedCreator 1.7.3
Notions of Floating-Point Equality
https://blogs.oracle.com/darcy/notions-of-floating-point-equality
<p>Moving on from <a href="http://blogs.sun.com/darcy/entry/api_design_identity_and_equality" title="API Design: Identity and Equality">identity and equality of objects</a>, different notions of equality are also surprisingly subtle in some numerical realms.</p><p>As <a href="http://mail.openjdk.java.net/pipermail/nio-dev/2009-November/000792.html" title="Nov. 2009 nio-dev thread on DoubleBuffer.compareTo is not anti-symmetric">comes up from time to time</a> and is often surprising, the "==" operator defined by IEEE 754 and used by Java for comparing floating-point values (<a href="http://java.sun.com/docs/books/jls/third_edition/html/expressions.html#15.21.1" title="Numerical Equality Operators == and !=">JLSv3 &sect;15.21.1</a>) is <em>not</em> an <i><a href="http://en.wikipedia.org/wiki/Equivalence_relation">equivalence relation</a></i>. Equivalence relations satisfy three properties: reflexivity (something is equivalent to itself), symmetry (if <i>a</i> is equivalent to <i>b</i>, <i>b</i> is equivalent to <i>a</i>), and transitivity (if <i>a</i> is equivalent to <i>b</i> and <i>b</i> is equivalent to <i>c</i>, then <i>a</i> is equivalent to <i>c</i>).</p><p>The IEEE 754 standard defines four possible mutually exclusive ordering relations between floating-point values:</p><ul><li><p>equal</p><li><p>greater than</p><li><p>less than</p><li><p>unordered</p></ul><p>A NaN (Not a Number) is <em>unordered</em> with respect to every floating-point value, including itself. This was done so that NaNs would not quietly slip by without due notice. Since (NaN == NaN) is false, the IEEE 754 "==" relation is <em>not</em> an equivalence relation since it is not reflexive.</p><p>An equivalence relation partitions a set into equivalence classes; each member of an equivalence class is "the same" as the other members of the class for the purposes of that equivalence relation. 
In terms of numerics, one would expect equivalent values to result in equivalent numerical results in all cases. Therefore, the size of the equivalence classes over floating-point values would be expected to be one; a number would only be equivalent to itself. However, in IEEE 754 there are two zeros, -0.0 and +0.0, and they compare as equal under ==. For IEEE 754 addition and subtraction, the sign of a zero argument can at most affect the sign of a zero result. That is, if the sum or difference is not zero, a zero of either sign doesn't change the result. If the sum or difference is zero and one of the arguments is zero, the other argument must be zero too:</p><ul><li><p>-0.0 + -0.0 &rArr; -0.0</p><li><p>-0.0 + +0.0 &rArr; +0.0</p><li><p>+0.0 + -0.0 &rArr; +0.0</p><li><p>+0.0 + +0.0 &rArr; +0.0</p></ul><p>Therefore, under addition and subtraction, both signed zeros are equivalent. However, they are <em>not</em> equivalent under division since 1.0/<b>-</b>0.0 &rArr; -&infin; but 1.0/<b>+</b>0.0 &rArr; +&infin;, and -&infin; and +&infin; are <em>not</em> equivalent.<a href="#affine">1</a></p><p>Despite the rationales for the IEEE 754 specification to not define == as an equivalence relation, there are legitimate cases where one needs a true equivalence relation over floating-point values, such as when writing test programs, and cases where one needs a total ordering, such as when sorting. In my numerical tests I use a method that returns true for two floating-point values <i>x</i> and <i>y</i> if:<br><br/>((<i>x</i> == <i>y</i>) &&<br><br/> (if <i>x</i> and <i>y</i> are both zero they have the same sign)) || <br><br/>(<i>x</i> and <i>y</i> are both NaN)<br><br/>Conveniently, this is just computed by using (Double.compare(x, y) == 0). For sorting or a total order, the semantics of <a href="http://java.sun.com/javase/6/docs/api/java/lang/Double.html#compare(double,%20double)">Double.compare</a> are fine; NaN is treated as the largest floating-point value, greater than positive infinity, and -0.0 < +0.0. 
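</p><p>As a small sketch of these notions, using nothing beyond the standard library, the non-reflexivity of == and the equivalence relation induced by Double.compare can be seen directly:</p>

```java
public class FpEquality {
    public static void main(String[] args) {
        double nan = 0.0/0.0;                              // one way to generate a NaN
        System.out.println(nan == nan);                    // false: == is not reflexive
        System.out.println(Double.compare(nan, nan) == 0); // true: a true equivalence relation
        System.out.println(-0.0 == 0.0);                   // true: == merges the two zeros
        System.out.println(Double.compare(-0.0, 0.0) < 0); // true: -0.0 sorts below +0.0
    }
}
```

<p>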
That ordering is the total order used by <a href="http://java.sun.com/javase/6/docs/api/java/util/Arrays.html#sort(double[])">java.util.Arrays.sort(double[])</a>. In terms of semantics, it doesn't really matter where the NaNs are ordered with respect to the other values as long as they are consistently ordered that way.<a href="#bitwise">2</a></p><p>These subtleties of floating-point comparison were also germane on the Project Coin mailing list last year; the <a href="http://mail.openjdk.java.net/pipermail/coin-dev/2009-March/000566.html">definition of floating-point equality was discussed in relation to adding support for relational operations based on a type implementing the Comparable interface</a>. That thread also broached the complexities involved in comparing <a href="http://java.sun.com/javase/6/docs/api/java/math/BigDecimal.html">BigDecimal</a> values.</p><p>The BigDecimal class has a natural ordering that is <em>inconsistent with equals</em>; that is, for at least some inputs bd1 and bd2, <br><br/>bd1.compareTo(bd2) == 0<br><br/>has a different boolean value than<br><br/>bd1.equals(bd2).<a href="#consistent">3</a><br><br/>In BigDecimal, the same numerical value can have multiple representations, such as (100 &times; 10<sup>0</sup>) versus (10 &times; 10<sup>1</sup>) versus (1 &times; 10<sup>2</sup>). These are all "the same" numerically (compareTo == 0) but are <em>not</em> equals with each other. Such values are not equivalent under the operations supported by BigDecimal; for example (100 &times; 10<sup>0</sup>) has a <i><a href="http://java.sun.com/javase/6/docs/api/java/math/BigDecimal.html#scale()">scale</a></i> of 0 while (1 &times; 10<sup>2</sup>) has a scale of -2.<a href="#cohort">4</a></p><p>While subtle, the different notions of numerical equality each serve a useful purpose and knowing which notion is appropriate for a given task is an important factor in writing correct programs.</p><blockquote><p><a name="affine">1</a> There are two zeros in IEEE 754 because there are two infinities. 
Another way to extend the real numbers to include infinity is to have a single (unsigned) projective infinity. In <br/>such a system, there is only one conceptual zero. Early x87 chips before IEEE 754 <br/>was standardized had support for both signed (affine) and projective <br/>infinities. Each style of infinity is more convenient for some kinds of computations.</p><p><a name="bitwise">2</a><br/>Besides the equivalence relation offered by Double.compare(x, y), another equivalence relation can be induced by either of the bitwise conversion routines, <a href="http://java.sun.com/javase/6/docs/api/java/lang/Double.html#doubleToLongBits(double)">Double.doubleToLongBits</a> or <a href="http://java.sun.com/javase/6/docs/api/java/lang/Double.html#doubleToRawLongBits(double)">Double.doubleToRawLongBits</a>. The former collapses all bit patterns that encode a NaN value into a single canonical NaN bit pattern, while the latter can let through a platform-specific NaN value. Implementation freedoms allowed by the original IEEE 754 standard have allowed different processor families to define different conventions for NaN bit patterns. </p><p><a name="consistent">3</a> I've at times considered whether it would be worthwhile to include an "@NaturalOrderingInconsistentWithEquals" annotation in the platform to flag the classes that have this quirk. Such an annotation could be used by various checkers to find potentially problematic uses of such classes in sets and maps.</p><p><a name="cohort">4</a> Building on wording developed for the BigDecimal specification under <a href="http://jcp.org/en/jsr/detail?id=13">JSR 13</a>, when I was editor of the <a href="http://en.wikipedia.org/wiki/IEEE_754-2008">IEEE 754 revision</a>, I introduced several pieces of decimal-related terminology into the draft. 
Those terms include <i>preferred exponent</i>, analogous to the preferred scale from BigDecimal, and <i>cohort</i>, "The set of all floating-point representations that represent a given floating-point number in a<br/>given floating-point format." Put in terms of BigDecimal, the members of a cohort would be all the BigDecimal numbers with the same numerical value, but distinct pairs of scale (negation of the exponent) and unscaled value.</p></blockquote>NumericsFri, 26 Feb 2010 01:00:00 +0000https://blogs.oracle.com/darcy/notions-of-floating-point-equalityJoe DarcyEverything Older is Newer Once Again
https://blogs.oracle.com/darcy/everything-older-is-newer-once-again
<p>Catching up on writing about more numerical work from years past, the <a href="http://www.ibm.com/developerworks/java/library/j-math2.html" title="Java's new math, Part 2: Floating-point numbers">second article</a> in a two-part series finished last year discusses some low-level floating-point manipulation methods I added to the platform over the course of JDKs 5 and 6. Previously, I published a <a href="http://blogs.sun.com/darcy/entry/everything_old_is_new_again" title="Everything Old is New Again">blog entry reacting to</a> the <a href="http://www.ibm.com/developerworks/java/library/j-math1/index.html" title="Java's new math, Part 1: Real numbers">first part</a> of the series.</p><p>JDK 6 enjoyed several numerics-related library changes. Constants for MIN_NORMAL, MIN_EXPONENT, and MAX_EXPONENT were added to the Float and Double classes. I also added to the Math and StrictMath classes the following methods for low-level manipulation of floating-point values:</p><ul><li><a href="http://java.sun.com/javase/6/docs/api/java/lang/Math.html#copySign(double,%20double)">public static double copySign(double magnitude, double sign)</a><li><a href="http://java.sun.com/javase/6/docs/api/java/lang/Math.html#getExponent(double)">public static int getExponent(double d)</a><li><a href="http://java.sun.com/javase/6/docs/api/java/lang/Math.html#nextAfter(double,%20double)">public static double nextAfter(double start, double direction)</a><li><a href="http://java.sun.com/javase/6/docs/api/java/lang/Math.html#nextUp(double)">public static double nextUp(double d)</a><li><a href="http://java.sun.com/javase/6/docs/api/java/lang/Math.html#scalb(double,%20int)">public static double scalb(double d, int scaleFactor)</a></ul><p>There are also overloaded methods for float arguments. 
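</p><p>A quick tour of the methods listed above; the values in the comments follow from the method specifications:</p>

```java
public class FpManipulation {
    public static void main(String[] args) {
        System.out.println(Math.copySign(3.0, -0.0)); // -3.0: magnitude of 3.0, sign of -0.0
        System.out.println(Math.getExponent(8.0));    // 3: unbiased exponent field, 8.0 = 1.0 x 2^3
        System.out.println(Math.nextUp(1.0));         // 1.0000000000000002: adjacent double toward +infinity
        System.out.println(Math.nextAfter(1.0, 0.0)); // 0.9999999999999999: adjacent double toward 0.0
        System.out.println(Math.scalb(1.5, 4));       // 24.0: 1.5 x 2^4, scaling by a power of 2 is exact
    }
}
```

<p>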
In terms of the <a href="http://en.wikipedia.org/wiki/IEEE_754-1985">IEEE 754 standard from 1985</a>, the methods above provide the core functionality of the <i>recommended functions</i>. In terms of the <a href="http://en.wikipedia.org/wiki/IEEE_754-2008">2008 revision to IEEE 754</a>, analogous functions are integrated throughout different sections of the document.</p><p>While a student at Berkeley, I wrote a <a href="http://www.jddarcy.org/Research/ieeerecd.pdf">tech report</a> on algorithms I developed for an earlier implementation of these methods, an implementation written many years ago when I was a summer intern at Sun. The <a href="http://hg.openjdk.java.net/jdk7/tl/jdk/file/84792500750c/src/share/classes/sun/misc/FpUtils.java" title="sun.misc.FpUtils as of Feb. 20, 2010">implementation of the recommended functions in the JDK</a> is a refinement of the earlier work, a refinement that simplified code, added <a href="http://hg.openjdk.java.net/jdk7/tl/jdk/file/84792500750c/test/java/lang/Math/IeeeRecommendedTests.java" title="IeeeRecommendedTests.java regression test as of Feb. 20, 2010">extensive</a> and <a href="http://blogs.sun.com/darcy/entry/test_where_the_failures_are" title="Test where the failures are likely to be">effective</a> unit tests, and sported better performance in some cases. In part the simplifications came from <em>not</em> attempting to accommodate IEEE 754 features not natively supported in the Java platform, in particular rounding modes and sticky flags.</p><p>The primary purpose of these methods is to assist in the development of math libraries in Java, such as the recent <a href="http://hg.openjdk.java.net/jdk7/jdk7/jdk/rev/ad1e30930c6c">pure Java implementation of floor and ceil</a> (<a href="http://bugs.sun.com/view_bug.do?bug_id=6908131" title="Pure Java implementations of StrictMath.floor(double) & StrictMath.ceil(double)">6908131</a>). 
This expected use-case drove certain API differences with the functions sketched by IEEE 754. For example, the getExponent method simply returns the unbiased value stored in the exponent field of a floating-point value rather than doing additional processing, such as computing the exponent needed to normalize a subnormal number, additional processing called for in some flavors of the 754 logb operation. Such additional functionality can actually slow down math libraries since libraries may not benefit from the additional filtering and may actually have to undo it.</p><p>The Math and StrictMath specifications of copySign have a small difference: the <a href="http://java.sun.com/javase/6/docs/api/java/lang/StrictMath.html#copySign(double,%20double)" title="java.lang.StrictMath.copySign">StrictMath version</a> always treats NaNs as having a positive sign (a sign bit of zero) while the <a href="http://java.sun.com/javase/6/docs/api/java/lang/Math.html#copySign(double,%20double)" title="java.lang.Math.copySign">Math version</a> does not impose this requirement. The IEEE standard does not ascribe a meaning to the sign bit of a NaN, and different processors have different conventions for NaN representations and how they propagate. 
However, if the source argument is not a NaN, the two copySign methods will produce equivalent results.<br/>Therefore, even if being used in a library where the results need to be completely predictable, the faster Math version of copySign can be used as long as the source argument is known to be numerical.</p><p>The recommended functions can also be used to solve a little floating-point puzzle: generating the interesting limit values of a floating-point format just starting with constants for 0.0 and 1.0 in that format:</p><ul><li><p>NaN is 0.0/0.0.</p><li><p>POSITIVE_INFINITY is 1.0/0.0.</p><li><p>MAX_VALUE is nextAfter(POSITIVE_INFINITY, 0.0).</p><li><p>MIN_VALUE is nextUp(0.0).</p><li><p>MIN_NORMAL is MIN_VALUE/(nextUp(1.0)-1.0).</p></ul>NumericsSat, 20 Feb 2010 21:41:48 +0000https://blogs.oracle.com/darcy/everything-older-is-newer-once-againJoe DarcyFinding a bug in FDLIBM pow
https://blogs.oracle.com/darcy/finding-a-bug-in-fdlibm-pow
<p>Writing up a piece of old work for some more <a href="http://blogs.sun.com/darcy/entry/regex_for_integral_strings" title="Recognizing all valid integral strings with regular expressions">Friday fun</a>, an example of <a href="http://blogs.sun.com/darcy/entry/test_where_the_failures_are" title="Test where the failures are likely to be">testing where the failures are likely to be</a> led to my independent discovery of a bug in the FDLIBM pow function, one of only two bugs fixed in <a href="http://www.netlib.org/fdlibm/readme">FDLIBM 5.3</a>. Even back when this bug was fixed for Java some time ago (<a href="http://bugs.sun.com/view_bug.do?bug_id=5033578" title="Java should require use of latest fdlibm 5.3">5033578</a>), the FDLIBM library was well-established, widely used in the Java platform and elsewhere, and already thoroughly tested, so I was quite proud my tests found a new problem. The next most recent change to the pow implementation was eleven years prior to the fix in 5.3.</p><p>The <a href="http://java.sun.com/javase/6/docs/api/java/lang/Math.html#pow(double,%20double)">specification for Math.pow</a> is involved, with over two dozen special cases listed. When setting out to write tests for this method, I re-expressed the specification in a tabular form to understand what was going on. 
After a few iterations reminiscent of tweaking a <a href="http://en.wikipedia.org/wiki/Karnaugh_map">Karnaugh map</a>, the table below was the result.</p><p><b>Special Cases for FDLIBM pow and {Math, StrictMath}.pow</b></p><table border="1"><tr><th><i>x</i> \ <i>y</i></th><th>&ndash;&infin;</th><th>finite <i>y</i> < 0</th><th>&plusmn;0.0</th><th>finite <i>y</i> > 0</th><th>+&infin;</th><th>NaN</th></tr><tr><th>&ndash;&infin;</th><td>+0.0</td><td>f2(<i>y</i>)</td><td>1.0</td><td>f1(<i>y</i>)</td><td>+&infin;</td><td>NaN</td></tr><tr><th>&ndash;&infin; < <i>x</i> < &ndash;1</th><td>+0.0</td><td>f3(<i>x</i>, <i>y</i>)</td><td>1.0</td><td>f3(<i>x</i>, <i>y</i>)</td><td>+&infin;</td><td>NaN</td></tr><tr><th>&ndash;1</th><td>NaN<a href="#c99_diff">&dagger;</a></td><td>f3(<i>x</i>, <i>y</i>)</td><td>1.0</td><td>f3(<i>x</i>, <i>y</i>)</td><td>NaN<a href="#c99_diff">&dagger;</a></td><td>NaN</td></tr><tr><th>&ndash;1 < <i>x</i> < 0</th><td>+&infin;</td><td>f3(<i>x</i>, <i>y</i>)</td><td>1.0</td><td>f3(<i>x</i>, <i>y</i>)</td><td>+0.0</td><td>NaN</td></tr><tr><th>&ndash;0.0</th><td>+&infin;</td><td>f1(<i>y</i>)</td><td>1.0</td><td>f2(<i>y</i>)</td><td>+0.0</td><td>NaN</td></tr><tr><th>+0.0</th><td>+&infin;</td><td>+&infin;</td><td>1.0</td><td>+0.0</td><td>+0.0</td><td>NaN</td></tr><tr><th>0 < <i>x</i> < 1</th><td>+&infin;</td><td><i>x</i><sup><i>y</i></sup></td><td>1.0</td><td><i>x</i><sup><i>y</i></sup></td><td>+0.0</td><td>NaN</td></tr><tr><th>1</th><td>NaN<a href="#c99_diff">&dagger;</a></td><td>1.0</td><td>1.0</td><td>1.0</td><td>NaN<a href="#c99_diff">&dagger;</a></td><td>NaN</td></tr><tr><th>1 < <i>x</i> < +&infin;</th><td>+0.0</td><td><i>x</i><sup><i>y</i></sup></td><td>1.0</td><td><i>x</i><sup><i>y</i></sup></td><td>+&infin;</td><td>NaN</td></tr><tr><th>+&infin;</th><td>+0.0</td><td>+0.0</td><td>1.0</td><td>+&infin;</td><td>+&infin;</td><td>NaN</td></tr><tr><th>NaN</th><td>NaN</td><td>NaN</td><td>1.0</td><td>NaN</td><td>NaN</td><td>NaN</td></tr></table><blockquote><p>f1(<i>y</i>) = isOddInt(<i>y</i>) ? &ndash;&infin; : +&infin;;<br><br/>f2(<i>y</i>) = isOddInt(<i>y</i>) ? &ndash;0.0 : +0.0;<br><br/>f3(<i>x</i>, <i>y</i>) = isEvenInt(<i>y</i>) ? |<i>x</i>|<sup><i>y</i></sup> : (isOddInt(<i>y</i>) ? &ndash;|<i>x</i>|<sup><i>y</i></sup> : NaN);<br><br/><a name="c99_diff">&dagger; Defined to be +1.0 in C99, see &sect;F.9.4.4 of the C99 specification</a>. Large magnitude finite floating-point numbers are all even integers (since the precision of a typical floating-point format is much less than its exponent range, a large number will be an integer times the base raised to a power). Therefore, by the reasoning of the C99 committee, pow(-1.0, &infin;) was like pow(-1.0, <i>unknown large even integer</i>) so the result was defined to be 1.0 instead of NaN.</p></blockquote><p>The range of arguments in each row and column is partitioned into eleven categories, ten categories of finite values together with NaN (Not a Number); in the table above, adjacent categories whose results are given by the same formula have been merged. Some combinations of <i>x</i> and <i>y</i> arguments are covered by multiple clauses of the specification. A few helper functions are defined to simplify the presentation. 
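</p><p>A few rows of the table can be spot-checked directly against Math.pow; each printed value follows from the Math.pow specification:</p>

```java
public class PowSpecialCases {
    public static void main(String[] args) {
        System.out.println(Math.pow(-1.0, Double.POSITIVE_INFINITY)); // NaN in Java; C99 defines this as 1.0
        System.out.println(Math.pow(-0.0, -3.0)); // -Infinity: f1(y) with odd integer y
        System.out.println(Math.pow(-0.0,  3.0)); // -0.0: f2(y) with odd integer y
        System.out.println(Math.pow(-2.0,  3.0)); // -8.0: f3(x, y), odd integer y
        System.out.println(Math.pow(-2.0,  2.0)); // 4.0: f3(x, y), even integer y
        System.out.println(Math.pow(-2.0,  2.5)); // NaN: f3(x, y), non-integer y with a negative base
    }
}
```

<p>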
As noted in the table, a cross-platform wrinkle is that the C99 specification, which came out after Java was first released, defined certain special cases differently than in FDLIBM and Java's Math.pow.</p><p>A regression test based on this tabular representation of pow special cases is <a href="http://hg.openjdk.java.net/jdk7/jdk7/jdk/file/9027c6b9d7e2/test/java/lang/Math/PowTests.java" title="Current version as of February 12, 2010">jdk/test/java/lang/Math/PowTests.java</a>. The test makes sure each interesting combination in the table is probed at least once. Some combinations receive multiple probes. When an entry represents a range, the exact endpoints of the range are tested; in addition, other interesting interior points are tested too. For example, for the range 1 < <i>x</i> < +&infin; the individual points tested are:</p><blockquote>+1.0000000000000002, // nextAfter(+1.0, +oo)<br/>+1.0000000000000004,<br/>+2.0,<br/>+Math.E,<br/>+3.0,<br/>+Math.PI,<br/>-(double)Integer.MIN_VALUE - 1.0,<br/>-(double)Integer.MIN_VALUE,<br/>-(double)Integer.MIN_VALUE + 1.0,<br/> (double)Integer.MAX_VALUE + 4.0,<br/> (double) ((1L<<53)-1L),<br/> (double) ((1L<<53)),<br/> (double) ((1L<<53)+2L),<br/>-(double)Long.MIN_VALUE,<br/> Double.MAX_VALUE,</blockquote><p>Besides the endpoints, the interesting interior points include points worth checking because of transitions either in the IEEE 754 double format or in the 2's complement integer format.</p><p>Inputs that used to fail under this testing span a range of severities: from the almost always numerically benign error of returning a wrongly signed zero, to returning a zero when the result should be a finite nonzero result, to returning infinity for a finite result, to even returning a wrongly signed infinity!</p><blockquote>Selected Failing Inputs<br/><br/>Failure for StrictMath.pow(double, double):<br/> For inputs -0.5 (-0x1.0p-1) and <br/> 9.007199254740991E15 (0x1.fffffffffffffp52)<br/> expected -0.0 (-0x0.0p0)<br/> got 0.0 
(0x0.0p0).<br/>Failure for StrictMath.pow(double, double):<br/> For inputs -0.9999999999999999 (-0x1.fffffffffffffp-1) and <br/> 9.007199254740991E15 (0x1.fffffffffffffp52)<br/> expected -0.36787944117144233 (-0x1.78b56362cef38p-2)<br/> got -0.0 (-0x0.0p0).<br/>Failure for StrictMath.pow(double, double):<br/> For inputs -1.0000000000000004 (-0x1.0000000000002p0) and <br/> 9.007199254740994E15 (0x1.0000000000001p53)<br/> expected 54.598150033144236 (0x1.b4c902e273a58p5)<br/> got 0.0 (0x0.0p0).<br/>Failure for StrictMath.pow(double, double):<br/> For inputs -0.9999999999999998 (-0x1.ffffffffffffep-1) and <br/> 9.007199254740992E15 (0x1.0p53)<br/> expected 0.13533528323661267 (0x1.152aaa3bf81cbp-3)<br/> got 0.0 (0x0.0p0).<br/>Failure for StrictMath.pow(double, double):<br/> For inputs -0.9999999999999998 (-0x1.ffffffffffffep-1) and <br/> -9.007199254740991E15 (-0x1.fffffffffffffp52)<br/> expected -7.38905609893065 (-0x1.d8e64b8d4ddaep2)<br/> got -Infinity (-Infinity).<br/>Failure for StrictMath.pow(double, double):<br/> For inputs -3.0 (-0x1.8p1) and <br/> 9.007199254740991E15 (0x1.fffffffffffffp52)<br/> expected -Infinity (-Infinity)<br/> got Infinity (Infinity).</blockquote><p>The <a href="http://blogs.sun.com/darcy/resource/FdlibmPowPatch.txt">code changes</a> to address the bug were fairly simple: corrections were made to the extraction of components of the floating-point inputs, and sign information was propagated properly.</p><p>Even expertly written software can have errors, and even long-used software can have unexpected problems. Estimating how often this bug in FDLIBM caused an issue is difficult; while the errors could be egregious, the needed inputs to elicit the problem were arguably unusual (even though perfectly valid mathematically). 
Thorough testing is a key aspect of assuring the quality of numerical software; it is also helpful for end-users to be able to <a href="http://www.cs.berkeley.edu/~wkahan/7Oct09.pdf">examine the output of their programs</a> to help notice problems.</p>NumericsFri, 12 Feb 2010 09:25:00 +0000https://blogs.oracle.com/darcy/finding-a-bug-in-fdlibm-powJoe DarcyHexadecimal Floating-Point Literals
https://blogs.oracle.com/darcy/hexadecimal-floating-point-literals
<p>One of the more obscure language changes included back in JDK 5 was the addition of <i>hexadecimal floating-point literals</i> to the platform. As the name implies, hexadecimal floating-point literals allow literals of the float and double types to be written primarily in base 16 rather than base 10. The underlying primitive types use binary floating-point, so a base 16 literal avoids various decimal &harr; binary rounding issues when there is a need to specify a floating-point value with a particular representation.</p><p>The conversion rule for decimal strings into binary floating-point values is that the binary floating-point value nearest the exact decimal value must be returned. When converting from binary to decimal, the rule is more subtle: the shortest string that allows recovery of the same binary value in the same format is to be used. While these rules are sensible, surprises are possible from the differing bases used for storage and display. For example, the numerical value 1/10 is <em>not</em> exactly representable in binary; it is a binary repeating fraction just as 1/3 is a repeating fraction in decimal. Consequently, the numerical values of 0.1f and 0.1d are <em>not</em> the same; the exact numerical value of the comparatively low precision float literal 0.1f is <br><br/>0.100000001490116119384765625<br><br/>and the shortest string that will convert to this value as a double is <br><br/>0.10000000149011612.<br><br/>This in turn differs from the exact numerical value of the higher precision double literal 0.1d,<br><br/>0.1000000000000000055511151231257827021181583404541015625. Therefore, based on decimal input, it is not always clear what particular binary numerical value will result.</p><p>Since floating-point arithmetic is almost always approximate, dealing with some rounding error on input and output is usually benign. However, in some cases it is important to exactly specify a particular floating-point value. 
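</p><p>The rounding behavior just described can be observed directly; the exact decimal expansion of a double is available through the BigDecimal(double) constructor:</p>

```java
import java.math.BigDecimal;

public class DecimalBinary {
    public static void main(String[] args) {
        System.out.println(0.1f == 0.1d);         // false: the two literals denote different binary values
        System.out.println((double) 0.1f);        // 0.10000000149011612: shortest string recovering the float's value
        System.out.println(new BigDecimal(0.1d)); // exact decimal value of the double nearest 1/10
    }
}
```

<p>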
For example, the Java libraries include constants for the <a href="http://java.sun.com/javase/6/docs/api/java/lang/Double.html#MAX_VALUE">largest finite</a> double value, numerically equal to (2-2<sup>-52</sup>)&middot;2<sup>1023</sup>, and the <a href="http://java.sun.com/javase/6/docs/api/java/lang/Double.html#MIN_VALUE">smallest nonzero value</a>, numerically equal to 2<sup>-1074</sup>. In such cases there is only one right answer and these particular limits are derived from the binary representation details of the corresponding IEEE 754 double format. Just based on those binary limits, it is not immediately obvious how to construct a minimal length decimal string literal that will convert to the desired values.</p><p>Another way to create floating-point values is to use a bitwise conversion method, such as <a href="http://java.sun.com/javase/6/docs/api/java/lang/Double.html#doubleToLongBits(double)">doubleToLongBits</a> and <a href="http://java.sun.com/javase/6/docs/api/java/lang/Double.html#longBitsToDouble(double)">longBitsToDouble</a>. However, even for numerical experts this interface is inhumane since all the gory bit-level encoding details of IEEE 754 are exposed, and values created in this fashion are not regarded as <a href="http://java.sun.com/docs/books/jls/third_edition/html/expressions.html#15.28">constants</a>. Therefore, for some use cases it is helpful to have a textual representation of floating-point values that is simultaneously human readable, clearly unambiguous, and tied to the binary representation in the floating-point format. Hexadecimal floating-point literals are intended to have these three properties, even if the readability is only in comparison to the alternatives!</p><p>Hexadecimal floating-point literals originated in C99 and were later included in the recent <a href="http://en.wikipedia.org/wiki/IEEE_754-2008">revision of the IEEE 754 floating-point standard</a>. 
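</p><p>Those limit values can be written exactly and readably as hexadecimal literals and checked against the platform constants:</p>

```java
public class HexLiterals {
    public static void main(String[] args) {
        System.out.println(0x1.fffffffffffffp1023 == Double.MAX_VALUE);  // true: (2-2^-52)*2^1023
        System.out.println(0x1.0p-1074 == Double.MIN_VALUE);             // true: 2^-1074
        System.out.println(0x0.0000000000001p-1022 == Double.MIN_VALUE); // true: same value, subnormal form
        System.out.println(Double.toHexString(Double.MIN_NORMAL));       // 0x1.0p-1022
    }
}
```

<p>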
The grammar for these literals in Java is given in <a href="http://java.sun.com/docs/books/jls/third_edition/html/lexical.html#3.10.2">JLSv3 &sect;3.10.2</a>:</p><blockquote><dl><dt><p><i>HexFloatingPointLiteral</i>:</p><dd><p><i>HexSignificand BinaryExponent FloatTypeSuffix<sub>opt</sub></i></p></dl></blockquote><p>This readily maps to the sign, significand, and exponent fields defining a finite floating-point value: <i>sign</i><b>0x</b><i>significand</i><b>p</b><i>exponent</i>. This syntax allows the literal</p><blockquote><p>0x1.8p1</p></blockquote><p>to be used to represent the value 3; 1.8<sub>hex</sub> &times; 2<sup>1</sup> = 1.5<sub>decimal</sub> &times; 2 = 3. More usefully, the maximum value of (2-2<sup>-52</sup>)&middot;2<sup>1023</sup> can be written as<br><br/>0x1.fffffffffffffp1023<br><br/>and the minimum value of 2<sup>-1074</sup> can be written as<br><br/>0x1.0P-1074 or 0x0.0000000000001P-1022, which are clearly mappable to the various fields of the floating-point representation while being much more scrutable than a raw bit encoding.</p><p>Retroactively reviewing the possible <a href="http://blogs.sun.com/darcy/entry/so_you_want_to_change">steps</a> needed to add hexadecimal floating-point literals to the language:</p><ol><li><p><b>Update the Java Language Specification</b>: As a purely syntactic change, only a single section of the JLS had to be updated to accommodate hexadecimal floating-point literals.</p><li><p><b>Implement the language change in a compiler</b>: Just the lexer in javac had to be modified to recognize the new syntax; javac used new platform library methods to do the actual numeric conversion.</p><li><p><b>Add any essential library support</b>: While not strictly necessary, the usefulness of the literal syntax is increased by also recognizing the syntax in <a href="http://java.sun.com/javase/6/docs/api/java/lang/Double.html#parseDouble(java.lang.String)">Double.parseDouble</a> and similar methods and outputting the syntax with <a href="http://java.sun.com/javase/6/docs/api/java/lang/Double.html#toHexString(double)">Double.toHexString</a>; analogous support was added in corresponding Float methods. In addition, the new-in-JDK 5 Formatter "printf" facility included the <a href="http://java.sun.com/javase/6/docs/api/java/util/Formatter.html#dndec">%a format</a> for hexadecimal floating-point.</p><li><p><b>Write tests</b>: Regression tests (under test/java/lang/Double in the JDK workspace/repository) were included as part of the library support (<a href="http://bugs.sun.com/view_bug.do?bug_id=4826774" title="Add library support for hexadecimal floating-point strings">4826774</a>).</p><li><p><b>Update the Java Virtual Machine Specification</b>: No JVMS changes were needed for this feature.</p><li><p><b>Update the JVM and other tools that consume classfiles</b>: As a Java source language change, classfile-consuming tools were not affected.</p><li><p><b>Update the Java Native Interface (JNI)</b>: Likewise, the new literal syntax was orthogonal to calling native methods.</p><li><p><b>Update the reflective APIs</b>: Some of the reflective APIs in the platform came after hexadecimal floating-point literals were added; however, only an API modeling the syntax of the language, such as the <a href="http://java.sun.com/javase/6/docs/technotes/guides/javac/index.html">tree API</a>, might need to be updated for this kind of change.</p><li><p><b>Update serialization support</b>: New literal syntax has no impact on serialization.</p><li><p><b>Update the javadoc output</b>: One possible change to javadoc output would have been supplementing the existing entries for floating-point fields in the <a href="http://java.sun.com/javase/6/docs/api/constant-values.html">constant fields values page</a> with hexadecimal output; however, that change was not done.</p></ol><p>In terms of language changes, adding hexadecimal floating-point literals is about as simple as a language change can be; only straightforward 
and localized changes were needed to the JLS and compiler, and the library support was clearly separated. Hexadecimal floating-point literals aren't applicable to that many programs, but when they can be used, they have extremely high utility in allowing the source code to clearly reflect the precise numerical intentions of the author.</p>NumericsThu, 04 Dec 2008 00:00:02 +0000https://blogs.oracle.com/darcy/hexadecimal-floating-point-literalsJoe DarcyEverything Old is New Again
https://blogs.oracle.com/darcy/everything-old-is-new-again
<p>I was heartened to recently come across the article <i><a href="http://www.ibm.com/developerworks/java/library/j-math1/index.html?ca=drs-">Java's new math, Part 1: Real numbers</a></i>, which detailed some of the additions I made to Java's math libraries over the years in JDK 5 and 6, including <a href="http://bugs.sun.com/view_bug.do?bug_id=4851625" title="Add hyperbolic transcendental functions (sinh, cosh, tanh) to Java math library">hyperbolic trigonometric functions</a> (sinh, cosh, tanh), <a href="http://bugs.sun.com/view_bug.do?bug_id=4347132" title="Want Math.cbrt() function for cube root">cube root</a>, and <a href="http://bugs.sun.com/view_bug.do?bug_id=4074599" title="Math package: implement log10 (base 10 logarithm)">base-10 log</a>.</p><p>A few comments on the article itself: I would describe java.lang.StrictMath as java.lang.Math's fussy twin rather than its evil twin. The availability of the StrictMath class allows developers who need cross-platform reproducible results from the math library to get them. Just because floating-point arithmetic is an approximation to real arithmetic doesn't mean it shouldn't be predictable! There are non-contrived circumstances where numerical programs are helped by having such strong reproducibility available. For example, to avoid unwanted communication overhead, certain parallel decomposition algorithms rely on different nodes being able to independently compute consistent numerical answers.</p><p>While the java.lang.Math class is not constrained to use the particular FDLIBM algorithms required by StrictMath, any valid Math class implementation still must meet the stated quality of implementation criteria for the methods. 
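</p><p>The methods mentioned above are each one call away; the log10, sinh, and cosh values in the comments are fixed by the special cases in their specifications:</p>

```java
public class NewMathMethods {
    public static void main(String[] args) {
        System.out.println(Math.log10(1000.0)); // 3.0: specified to be exact for powers of 10
        System.out.println(Math.sinh(0.0));     // 0.0: sinh preserves signed zeros
        System.out.println(Math.cosh(0.0));     // 1.0: specified for a zero argument
        System.out.println(Math.cbrt(-8.0));    // cube root is defined for negative arguments
        System.out.println(StrictMath.log10(1000.0)); // 3.0: same FDLIBM-derived answer on every platform
    }
}
```

<p>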
The criteria usually include a low worst-case relative error, as measured in <br/><a href="http://java.sun.com/javase/6/docs/api/java/lang/Math.html#ulp(double)">ulps</a><br/> (units in the last place), and <i>semi-monotonicity</i>: whenever the mathematical function is non-decreasing, so is the floating-point approximation; likewise, whenever the mathematical function is non-increasing, so is the floating-point approximation.</p><p>Simply adding more FDLIBM methods to the platform was quite easy to do; much of the effort for the math library additions went toward developing new tests, both to verify that the general quality-of-implementation criteria were being met and to verify that the particular algorithms were being used to implement the StrictMath methods. I'll discuss the techniques I used to develop those tests in a future blog entry.</p>NumericsWed, 29 Oct 2008 13:54:42 +0000https://blogs.oracle.com/darcy/everything-old-is-new-againJoe DarcyNorms: How to Measure Size
https://blogs.oracle.com/darcy/norms%3A-how-to-measure-size
<p>At times it is useful to summarize a set of values, say a vector of real numbers, as a single number representing the set's size. <br/>For example, distilling benchmark subcomponent scores into an overall score. One way to do this is to use a <i><a href="http://en.wikipedia.org/wiki/Vector_norm" title="Wikipedia on vector norms">norm</a></i>. <br/>Mathematically, a norm maps a vector <i>V</i> with a given number of elements to a real-number length such that the following properties hold:</p><ul><li> norm(<i>V</i>) ≥ 0 for all <i>V</i> and norm(<i>V</i>) = 0 if and only if <i>V</i> = 0 (positive definiteness)<li> norm(<i>c</i> · <i>V</i>) = abs(<i>c</i>) · norm(<i>V</i>) for real constant <i>c</i> (homogeneity)<li> norm(<i>U</i> + <i>V</i>) ≤ norm(<i>U</i>) + norm(<i>V</i>) (the triangle inequality)</ul><p>There are a few commonly used norms:</p><ul><li> 1-norm: sum of the absolute values (Manhattan length)<li> 2-norm: square root of the sum of the squares (Euclidean length)<li> ∞-norm: largest absolute value</ul><p>The first two norms are instances of <i>p</i>-norms. A <i>p</i>-norm adds up the result of raising the absolute value of each vector component to the <i>p</i>th power (squaring, or cubing, etc.) and then takes the <i>p</i>th root of the sum. The ∞-norm is the limit as <i>p</i> goes to infinity.</p><p>Given multiple possible norms, which one should be used? The 2-norm is often easier to work with since it is a differentiable function of the vector components, unlike the 1-norm and ∞-norm. On the other hand, the ∞-norm captures the worst-case behavior. Sometimes one norm is easier to compute than the others. <br/>Another norm might <a href="http://www.cs.berkeley.edu/~wkahan/MxMulEps.pdf" title="Kahan on why Matlab's Loss is Nobody's Gain">make an error analysis more tractable</a>.
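The three common norms are straightforward to compute; here is a minimal sketch in Java (my own illustration, not from the post):

```java
public class Norms {
    // 1-norm: sum of the absolute values (Manhattan length)
    static double norm1(double[] v) {
        double sum = 0.0;
        for (double x : v) sum += Math.abs(x);
        return sum;
    }

    // 2-norm: square root of the sum of the squares (Euclidean length)
    static double norm2(double[] v) {
        double sum = 0.0;
        for (double x : v) sum += x * x;
        return Math.sqrt(sum);
    }

    // ∞-norm: largest absolute value
    static double normInf(double[] v) {
        double max = 0.0;
        for (double x : v) max = Math.max(max, Math.abs(x));
        return max;
    }

    public static void main(String[] args) {
        double[] b = {1.0, 3.0, 4.0};
        System.out.println(norm1(b));    // 8.0
        System.out.println(norm2(b));    // sqrt(26), about 5.099
        System.out.println(normInf(b));  // 4.0
    }
}
```

(A production 2-norm would guard against overflow in the intermediate sum of squares, as C's hypot does; that refinement is omitted here for clarity.)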
<br/>For vectors, in some sense it doesn't matter which norm is used, because any two norms, norm<sub>a</sub> and norm<sub>b</sub>, are equivalent in the following sense: there are constants <i>c</i><sub>1</sub> and <i>c</i><sub>2</sub> such that<br></p><blockquote><i>c</i><sub>1</sub> · norm<sub>a</sub>(<i>V</i>) ≤ norm<sub>b</sub>(<i>V</i>) ≤ <i>c</i><sub>2</sub> · norm<sub>a</sub>(<i>V</i>)<br></blockquote><p>This means that if one norm is tending toward zero, all other norms are tending toward zero too. For example, numerical linear algebra commonly uses iterative processes that terminate once the norm of the error is small enough. Concretely, for vectors of size <i>n</i>, the common norms are related as follows:</p><blockquote>norm2(<i>V</i>) ≤ norm1(<i>V</i>) ≤ sqrt(<i>n</i>) · norm2(<i>V</i>)<br><br/>norm∞(<i>V</i>) ≤ norm2(<i>V</i>) ≤ sqrt(<i>n</i>) · norm∞(<i>V</i>)<br><br/>norm∞(<i>V</i>) ≤ norm1(<i>V</i>) ≤ <i>n</i> · norm∞(<i>V</i>)<br></blockquote><p>So to guarantee that the 1-norm is less than epsilon, it is enough to show that the 2-norm is less than epsilon/sqrt(<i>n</i>).</p><p>However, in other ways the different norms are <em>not</em> equivalent; the norms can give different answers on the relative sizes of different vectors. Consider the three vectors <i>A</i>, <i>B</i>, and <i>C</i>:</p><blockquote><i>A</i> = [5, 0, 0]<br/><i>B</i> = [1, 3, 4]<br/><i>C</i> = [8/3, 8/3, 3]</blockquote><table><tr><th>Vector</th><th>1-norm</th><th>2-norm</th><th>∞-norm</th></tr><tr><td><i>A</i></td><td>5</td><td>5</td><td><b>5</b></td></tr><tr><td><i>B</i></td><td>8</td><td><b>≈5.1</b></td><td>4</td></tr><tr><td><i>C</i></td><td><b>≈8.3</b></td><td>≈4.8</td><td>3</td></tr><tr><td>Biggest vector</td><td><i>C</i></td><td><i>B</i></td><td><i>A</i></td></tr></table><p>Each vector is considered the largest under one of the norms.</p><p>I've found the notion of norms to be useful in many different contexts. The performance differences between quicksort and mergesort can be described as quicksort having a better 1-norm but mergesort having a better ∞-norm. Buying more insurance coverage raises the 1-norm of your costs, but lowers your ∞-norm.
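The equivalence bounds for size-<i>n</i> vectors can be spot-checked numerically; the sketch below (my own illustration, using the post's example vector B = [1, 3, 4]) verifies all three pairs of inequalities:

```java
public class NormBounds {
    public static void main(String[] args) {
        double[] v = {1.0, 3.0, 4.0};  // vector B from the example
        int n = v.length;

        // Compute the three norms in one pass.
        double n1 = 0.0, n2 = 0.0, nInf = 0.0;
        for (double x : v) {
            n1 += Math.abs(x);
            n2 += x * x;
            nInf = Math.max(nInf, Math.abs(x));
        }
        n2 = Math.sqrt(n2);

        // norm2(V) <= norm1(V) <= sqrt(n) * norm2(V)
        System.out.println(n2 <= n1 && n1 <= Math.sqrt(n) * n2);    // true
        // norm∞(V) <= norm2(V) <= sqrt(n) * norm∞(V)
        System.out.println(nInf <= n2 && n2 <= Math.sqrt(n) * nInf); // true
        // norm∞(V) <= norm1(V) <= n * norm∞(V)
        System.out.println(nInf <= n1 && n1 <= n * nInf);            // true
    }
}
```

For B the norms are 8, sqrt(26) ≈ 5.1, and 4, and with n = 3 each value is sandwiched as the inequalities require.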
A more conservative evaluation tends to focus on the worst-case outcome and thus favors something like the ∞-norm. For example, in the <br/><a href="http://java.sun.com/javase/6/docs/api/java/lang/Math.html" title="Javadoc for java.lang.Math">math library</a> <br/>the relative size of the error at any location must be less than the stated number of <a href="http://java.sun.com/javase/6/docs/api/java/lang/Math.html#ulp(double)">ulps</a> (units in the last place). It is not good enough to have a low average error if a few locations, or even one location, return a very inaccurate result. During software development, risk assessments evolve with the release life cycle. A change that is welcome early in the release may be rejected as too risky a few weeks before shipping; one way to view this phenomenon is that a larger value of <i>p</i> is being used to compute risk assessments later in the release.</p><br/><b>References</b><br><br/><i><a href="http://www.ec-securehost.com/SIAM/ot56.html">Applied Numerical Linear Algebra</a></i>, <br/>James W. Demmel<br><br/><i><a href="http://portal.acm.org/citation.cfm?id=248979">Matrix Computations</a></i>, <br/>Gene H. Golub and Charles F. Van Loan<br><br/><i><a href="http://www.ec-securehost.com/SIAM/ot50.html">Numerical Linear Algebra</a></i>, <br/>Lloyd N. Trefethen and David Bau, III<br>NumericsThu, 01 Mar 2007 15:20:32 +0000https://blogs.oracle.com/darcy/norms%3A-how-to-measure-sizeJoe DarcyWhat Every Computer Programmer Should Know About Floating-Point Arithmetic, Redux
https://blogs.oracle.com/darcy/what-every-computer-programmer-should-know-about-floating-point-arithmetic%2C-redux
<p>Next week on Wednesday, October 11, at the Silicon Valley <a href="http://www.accu-usa.org/">ACCU</a> meeting in San Jose, I'll be giving a version of my talk on <i>What Every Computer Programmer Should Know About Floating-Point Arithmetic</i>, previously seen at <a href="http://blogs.sun.com/darcy/entry/what_every_computer_programmer_should">Stanford</a> and <a href="http://blogs.sun.com/darcy/resource/J1_2003-TS-2281.pdf">JavaOne</a>. The meeting is open to the public and free of charge, so if you've ever wondered why adding up ten copies of 0.1d doesn't equal 1.0 or doubted the need for a floating-point value that is <a href="http://java.sun.com/j2se/1.5.0/docs/api/java/lang/Double.html#NaN">not a number</a>, come on by.</p><p>After the talk, I'll post a copy of the slides.</p><p><b>Update:</b> <a href="http://blogs.sun.com/darcy/resource/Wecpskafpa-ACCU.pdf">The slides.</a></p>NumericsWed, 04 Oct 2006 00:00:01 +0000https://blogs.oracle.com/darcy/what-every-computer-programmer-should-know-about-floating-point-arithmetic%2C-reduxJoe DarcyIEEE 754R Ballot
https://blogs.oracle.com/darcy/ieee-754r-ballot
<p>For a number of years, the venerable <a href="http://shop.ieee.org/ieeestore/Product.aspx?product_no=SS10116">IEEE 754</a> standard for binary floating-point arithmetic has been undergoing <a href="http://grouper.ieee.org/groups/754/">revision</a> and the committee's <a href="http://math.berkeley.edu/~scanon/754/">results</a> will soon be up for ballot. Back in 2003, I was editor of the draft for a few months and helped incorporate the decimal material.</p><p>The balloting process provides the opportunity for interested parties, such as consumers of the standard, to weigh in with comments; instructions for joining the ballot <a href="http://754r.ucbtest.org/balloting.txt">are available</a>. The deadline for signing up has been extended to October 21, 2006.</p><p>Major changes from 754 include:<ul><li> Support for decimal formats and arithmetic<li> Fused multiply add operation<li> More explicit conceptual model of levels of specification<li> Hexadecimal strings for binary floating-point values<li> Annexes giving recommendations on expression evaluation, alternate exception handling, and transcendental functions</ul></p>NumericsTue, 03 Oct 2006 15:11:45 +0000https://blogs.oracle.com/darcy/ieee-754r-ballotJoe DarcyWhat Every Computer Programmer Should Know About Floating-Point Arithmetic
https://blogs.oracle.com/darcy/what-every-computer-programmer-should-know-about-floating-point-arithmetic
I'm a part-time master's student in Stanford's <a href="http://icme.stanford.edu/">ICME</a> program and at the <br/><a href="http://icme.stanford.edu/Events/seminar.html">departmental seminar</a> <br/>I recently gave a talk, <br/><a href="http://blogs.sun.com/roller/resources/darcy/Wecpskafpa-StanfordIcme500.pdf"><i>What Every Computer Programmer Should Know About Floating-Point Arithmetic</i></a>. <br/>This is a refinement and update of <br/><a href="http://blogs.sun.com/roller/resources/darcy/JavaOneArchive.html">JavaOne talks</a> I've given with a similar title.NumericsFri, 23 Jun 2006 15:07:23 +0000https://blogs.oracle.com/darcy/what-every-computer-programmer-should-know-about-floating-point-arithmeticJoe DarcyBigDecimal Performance Enhancements
https://blogs.oracle.com/darcy/bigdecimal-performance-enhancements
Work with Xiobin<br/>Still more performance enhancements soon...<br/>Hacker's Delight<br/>norm of behavior, before, after<br/>graph of speed of multiply, add from<br/>1, 10, 100, 1000, 10000 digit numbers<br/>http://jroller.com/scolebourne/entry/java_7_what_to_do<br/>Do again with Xiobin's workNumericsTue, 16 Aug 2005 09:54:24 +0000https://blogs.oracle.com/darcy/bigdecimal-performance-enhancementsJoe Darcy