One of the more obscure language changes included back in JDK 5 was the addition of hexadecimal floating-point literals to the platform. As the name implies, hexadecimal floating-point literals allow literals of the float and double types to be written primarily in base 16 rather than base 10. The underlying primitive types use binary floating-point so a base 16 literal avoids various decimal ↔ binary rounding issues when there is a need to specify a floating-point value with a particular representation.
The conversion rule for decimal strings into binary floating-point values is that the binary floating-point value nearest the exact decimal value must be returned. When converting from binary to decimal, the rule is more subtle: the shortest string that allows recovery of the same binary value in the same format is to be used. While these rules are sensible, surprises are possible from the differing bases used for storage and display. For example, the numerical value 1/10 is not exactly representable in binary; it is a binary repeating fraction just as 1/3 is a repeating fraction in decimal. Consequently, the numerical values of 0.1f and 0.1d are not the same; the exact numerical value of the comparatively low precision float literal 0.1f is
0.100000001490116119384765625
and the shortest string that will convert to this value as a double is
0.10000000149011612.
This in turn differs from the exact numerical value of the higher precision double literal 0.1d,
0.1000000000000000055511151231257827021181583404541015625. Therefore, based on decimal input, it is not always clear what particular binary numerical value will result.
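These differences are easy to observe directly. The sketch below, assuming a recent JDK, prints the exact and shortest-string forms; the BigDecimal(double) constructor captures the exact binary value of its argument:

```java
import java.math.BigDecimal;

public class DecimalRounding {
    public static void main(String[] args) {
        // Exact binary value of the float literal 0.1f
        System.out.println(new BigDecimal(0.1f));
        // → 0.100000001490116119384765625

        // Shortest string recovering that value as a double
        System.out.println((double) 0.1f);
        // → 0.10000000149011612

        // Exact binary value of the double literal 0.1d
        System.out.println(new BigDecimal(0.1d));
        // → 0.1000000000000000055511151231257827021181583404541015625

        // The two literals denote different numerical values
        System.out.println((double) 0.1f == 0.1d); // false
    }
}
```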
Since floating-point arithmetic is almost always approximate, dealing with some rounding error on input and output is usually benign. However, in some cases it is important to exactly specify a particular floating-point value. For example, the Java libraries include constants for the
largest finite
double value, numerically equal to (2-2^{-52})·2^{1023}, and the
smallest nonzero value, numerically equal to 2^{-1074}. In such cases there is only one right answer and these particular limits are derived from the binary representation details of the corresponding IEEE 754 double format. Just based on those binary limits, it is not immediately obvious how to construct a minimal length decimal string literal that will convert to the desired values.
Another way to create floating-point values is to use a bitwise conversion method, such as
doubleToLongBits
and
longBitsToDouble.
However, even for numerical experts this interface is inhumane since all the gory bit-level encoding details of IEEE 754 are exposed and values created in this fashion are not regarded as
constants.
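A brief sketch of what working through the bit-level interface looks like; note how the raw encoding, here for Double.MAX_VALUE, carries no visible relation to the numerical value:

```java
public class BitwiseConversion {
    public static void main(String[] args) {
        // The raw IEEE 754 encoding of the largest finite double
        long bits = Double.doubleToLongBits(Double.MAX_VALUE);
        System.out.println(Long.toHexString(bits)); // 7fefffffffffffff

        // Reconstructing the value requires knowing the sign/exponent/
        // significand bit layout; the result is not a compile-time constant
        double d = Double.longBitsToDouble(0x7fefffffffffffffL);
        System.out.println(d == Double.MAX_VALUE); // true
    }
}
```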
Therefore, for some use cases it is helpful to have a textual representation of floating-point values that is simultaneously human readable, clearly unambiguous, and tied to the binary representation in the floating-point format. Hexadecimal floating-point literals are intended to have these three properties, even if the readability is only in comparison to the alternatives!
Hexadecimal floating-point literals originated in C99 and were later included in the recent revision of the IEEE 754 floating-point standard.
The grammar for these literals in Java is given in
JLSv3 §3.10.2:
HexFloatingPointLiteral:
HexSignificand BinaryExponent FloatTypeSuffix_{opt}
This readily maps to the sign, significand, and exponent fields defining a finite floating-point value: sign 0x significand p exponent.
This syntax allows the literal
0x1.8p1
to be used to represent the value 3; 1.8_{hex} × 2^{1} = 1.5_{decimal} × 2 = 3.
More usefully, the maximum value of
(2-2^{-52})·2^{1023} can be written as
0x1.fffffffffffffp1023
and the minimum value of
2^{-1074} can be written as
0x1.0P-1074 or 0x0.0000000000001P-1022, which are clearly mappable to the various fields of the floating-point representation while being much more scrutable than a raw bit encoding.
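The literals above can be checked against the corresponding library constants; a short sketch, assuming a recent JDK:

```java
public class HexLiterals {
    public static void main(String[] args) {
        // 1.8_hex × 2^1 = 1.5 × 2 = 3
        System.out.println(0x1.8p1); // 3.0

        // The largest finite double, (2-2^-52)·2^1023
        System.out.println(0x1.fffffffffffffp1023 == Double.MAX_VALUE); // true

        // The smallest nonzero double, 2^-1074, written two ways
        System.out.println(0x1.0P-1074 == Double.MIN_VALUE);             // true
        System.out.println(0x0.0000000000001P-1022 == Double.MIN_VALUE); // true
    }
}
```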
Retroactively reviewing the possible steps needed to add hexadecimal floating-point literals to the language:
Update the Java Language Specification: As a purely syntactic change, only a single section of the JLS had to be updated to accommodate hexadecimal floating-point literals.
Implement the language change in a compiler: Just the lexer in javac had to be modified to recognize the new syntax; javac used new platform library methods to do the actual numeric conversion.
Add any essential library support: While not strictly necessary, the usefulness of the literal syntax is increased by also recognizing the syntax in
Double.parseDouble and similar methods and outputting the syntax with Double.toHexString; analogous support was added in corresponding Float methods. In addition the new-in-JDK 5 Formatter "printf" facility included the %a format for hexadecimal floating-point.
Write tests: Regression tests (under test/java/lang/Double in the JDK workspace/repository) were included as part of the library support (4826774, "Add library support for hexadecimal floating-point strings").
Update the Java Virtual Machine Specification: No JVMS changes were needed for this feature.
Update the JVM and other tools that consume classfiles: As a Java source language change, classfile-consuming tools were not affected.
Update the Java Native Interface (JNI): Likewise, new literal syntax was orthogonal to calling native methods.
Update the reflective APIs: Some of the reflective APIs in the platform came after hexadecimal floating-point literals were added; however, only an API modeling the syntax of the language, such as the tree API, would need to be updated for this kind of change.
Update serialization support: New literal syntax has no impact on serialization.
Update the javadoc output: One possible change to javadoc output would have been supplementing the existing entries for floating-point fields in the constant fields values page with hexadecimal output; however, that change was not done.
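The library support noted in the checklist above can be exercised directly; a small sketch, assuming JDK 5 or later:

```java
public class HexStringSupport {
    public static void main(String[] args) {
        // Double.toHexString emits the hexadecimal literal syntax
        System.out.println(Double.toHexString(Double.MAX_VALUE));
        // → 0x1.fffffffffffffp1023

        // Double.parseDouble accepts the same syntax as input
        System.out.println(Double.parseDouble("0x1.8p1")); // 3.0

        // The Formatter "printf" facility supports %a for hex floating-point
        System.out.println(String.format("%a", 3.0)); // 0x1.8p1
    }
}
```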
In terms of language changes, adding hexadecimal floating-point literals is about as simple as a language change can be: only straightforward and localized changes were needed to the JLS and compiler, and the library support was clearly separated. Hexadecimal floating-point literals aren't applicable to that many programs, but when they can be used, they have extremely high utility in allowing the source code to clearly reflect the precise numerical intentions of the author.
Hi Joseph,
I was hoping to get your opinion on a debate we've been having about how to best transfer 32-bit Java floats to a Javascript application (which only has a 64-bit floating-point type).
This is for the Google Web Toolkit, which compiles Java code into Javascript.
Here is the URL:
http://code.google.com/p/google-web-toolkit/issues/detail?id=2897
If you could comment directly on that thread, that would be great!
Alex
@Alex,
I haven't read through the thread in great detail, but I'll offer a few comments.
Java semantics require distinct float and double types at some level. It is possible to emulate the result of the float operations add, subtract, multiply, divide, and square root by:
1) Converting the numerical float value to its double representation
2) Performing the operation to double precision
3) Rounding the double result down to float precision
(IIRC, a generalized outline of a proof of this property is in the numerical appendix of the 2nd edition of Hennessy and Patterson.)
For point 1), a representation that preserves the \*numerical value\* is needed. One such representation would be the string of the float value converted to double (e.g. from Java, Double.toString((double)f)). Another would be the hexadecimal representation; the toHexString output is exact, so there aren't the same rounding considerations for preserving the original binary value as when going through an intermediate decimal conversion. Using Float.toString(f) and then converting that string to double will in general \*not\* preserve the float value.
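The distinction between the value-preserving and non-preserving conversions can be sketched like so, assuming a recent JDK:

```java
public class FloatTransfer {
    public static void main(String[] args) {
        float f = 0.1f;

        // Preserves the value: string of the float widened to double
        double viaDouble = Double.parseDouble(Double.toString((double) f));
        System.out.println(viaDouble == (double) f); // true

        // Also exact: hexadecimal output has no decimal rounding step
        double viaHex = Double.parseDouble(Float.toHexString(f));
        System.out.println(viaHex == (double) f); // true

        // Does NOT preserve the value: Float.toString gives the shortest
        // string recovering the *float*; parsed as a double it differs
        double viaFloat = Double.parseDouble(Float.toString(f));
        System.out.println(viaFloat == (double) f); // false
    }
}
```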
For point 3), this is straightforward to do with a cast or if one has access to the bit-level floating-point representation. A tech report I wrote while a student at Berkeley includes an outline of how to extract the bit-level representation using normal floating-point operations ("Writing robust IEEE recommended functions in ``100 % Pure Java''(TM)", http://www.sonic.net/~jddarcy/Research/ieeerecd.pdf). However, there are somewhat more direct and natural ways to compute this rounding to reduced precision; using Dekker's tricks one can split the floating-point number at a given bit position and test the low-order bits against zero, etc. I'll leave working out the details as an "exercise for the reader," as some of my college textbooks like to say :-)
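The three-step emulation can be sketched for addition, assuming the widen/operate/round scheme described above; a cast does the rounding in step 3:

```java
public class RoundToFloat {
    public static void main(String[] args) {
        float a = 0.1f, b = 0.2f;

        // 1) Widen the float values to double (exact, value-preserving)
        // 2) Perform the operation in double precision
        double wide = (double) a + (double) b;

        // 3) Round the double result back down to float precision
        float emulated = (float) wide;

        // Matches the native float operation
        System.out.println(emulated == a + b); // true
    }
}
```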