64 bit Java RTS
By Roland Westrelin on Jun 29, 2009
We are progressing on our new release of the Sun Java Real-Time System (Java RTS) and are making the first early access builds available for download. One of the big changes in this release is the availability of a 64 bit Java RTS (on our supported platforms: Solaris sparc and x86, real-time Linux x86), which should, in practice, remove all limits on heap size (a 32 bit Java RTS process is typically limited to a 2 to 3 GByte heap).
For Java RTS, going to 64 bits was challenging because we only support hotspot's client compiler and our code base is built from a JDK5 hotspot VM.
Why does RTS only support the client compiler?
A standard Java SE implementation (such as the hotspot VM) first runs methods interpreted and compiles a method only once it is identified as hot. The VM runtime takes advantage of the interpreted execution to gather profile data and uses this data to perform more aggressive optimizations. The runtime further optimizes for throughput by relying on optimistic optimizations: at compile time, it uses the current state of the VM to perform aggressive optimizations that might later be invalidated. When that happens, the method is deoptimized (execution resumes in the interpreter) and recompilation may only happen later.
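The interplay between profiling, optimistic compilation, and deoptimization can be sketched in plain Java SE. This is an illustrative toy (the class names and the `run` method are mine, not Java RTS or hotspot APIs): while only one subclass of `Base` has been loaded, the compiler may optimistically devirtualize and inline `b.work()`; the first call with a different receiver type invalidates that assumption.

```java
// Illustrative sketch (plain Java SE, hypothetical names): a throughput VM
// interprets run() first, profiles it, then compiles it once it is hot.
// While One is the only loaded subclass of Base, the compiler may
// optimistically devirtualize and inline b.work(); the first use of Two
// breaks that assumption and run() is deoptimized back to the interpreter
// until it is recompiled.
public class DeoptSketch {
    public static abstract class Base { public abstract int work(); }
    public static final class One extends Base { public int work() { return 1; } }
    public static final class Two extends Base { public int work() { return 2; } }

    // Hot method: the loop makes it a compilation candidate.
    public static int run(Base b, int iters) {
        int sum = 0;
        for (int i = 0; i < iters; i++) sum += b.work();
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(run(new One(), 100_000)); // compiled, work() likely inlined
        System.out.println(run(new Two(), 10));      // new receiver type: may trigger deoptimization
    }
}
```

Running with `-XX:+PrintCompilation` on a hotspot VM makes the compile and deoptimization events visible; the program's result is the same either way, only the timing differs.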
A real-time Java implementation must offer guarantees on maximum execution time. The lower the maximum execution time, the better, even when it results in a higher average execution time (lower throughput). This means that from the time the application enters its steady state, all code in the critical path must be compiled. A real-time VM has to compile code early and cannot benefit from extended interpreted execution and profile-driven optimizations. It cannot limit compilation to hot methods but has to compile all critical methods. Finally, to keep the bound on execution time low, compiled methods cannot fall back to interpreted execution, so no optimistic optimization can be used in the critical path.
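The average-versus-worst-case distinction can be made concrete with a small measurement harness (again a hypothetical sketch, not a Java RTS tool): on a throughput-oriented VM, the early iterations of a critical method, executed interpreted or interrupted by compilation and deoptimization, inflate the maximum latency even when the average looks excellent. A real-time VM aims to bound that maximum.

```java
// Illustrative sketch (hypothetical names): record the average and maximum
// latency of one call to work(). A throughput VM minimizes the average;
// a real-time VM bounds the maximum, even at the cost of a higher average.
public class JitterSketch {
    static volatile long sink; // defeat dead-code elimination

    static void work() {
        long s = 0;
        for (int i = 0; i < 1_000; i++) s += i;
        sink = s;
    }

    // Returns { average ns, maximum ns } over iters calls.
    public static long[] measure(int iters) {
        long total = 0, max = 0;
        for (int i = 0; i < iters; i++) {
            long t0 = System.nanoTime();
            work();
            long dt = System.nanoTime() - t0;
            total += dt;
            if (dt > max) max = dt;
        }
        return new long[] { total / iters, max };
    }

    public static void main(String[] args) {
        long[] r = measure(100_000);
        System.out.println("avg=" + r[0] + "ns max=" + r[1] + "ns");
    }
}
```

On a standard VM the maximum is typically many times the average; shrinking that gap, not the average itself, is what the real-time design choices above are about.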
Not all of the server compiler's performance benefit comes from profile-driven and optimistic optimizations, but it is built with the assumption that these mechanisms are available, and it is hard to isolate and disable them. The client compiler, on the other hand, is a simpler compiler, and it is relatively straightforward to have it produce very steady code. It is an example of trading throughput for higher determinism and lower latency.
Why is RTS still based on a JDK5 hotspot VM?
While the RTS VM is based on the hotspot VM, it really is a different VM. The hotspot VM is optimized for throughput; the RTS VM is optimized for hard real-time and low latency. Implementing the RTSJ APIs and guaranteeing determinism in all VM subsystems requires substantial changes to the hotspot VM code, so updating the RTS code to follow hotspot changes and auditing new code for determinism is a time-consuming task. So far, we have found that sticking with the older code base and focusing on improving the real-time features of RTS is the best way for us to improve the product.
Why is it tricky to support 64 bit in RTS?
The hotspot VM runtime is largely written in C++ but its interpreter and compiled code are generated by an embedded assembler. On sparc, 64 bit native code is largely similar to 32 bit code (same registers, very similar instructions and ABI). On x86, 64 bit native code (so-called amd64) is significantly different from 32 bit native code: more registers, extra instructions and an extended instruction encoding, and a different ABI. So for hotspot, going from a 32 bit VM to a 64 bit VM on x86 requires a large amount of changes to the compilers and interpreter. Until recently, this work had only been done in hotspot on 64 bit x86 for the server compiler: the JDK7 hotspot VM now has support for a 64 bit client compiler on both sparc and x86, and that work is still in progress.
So when we started the work to make Java RTS 64 bit, we had the choice of converting our JDK5 client compiler to generate 64 bit code or bringing the client compiler from the JDK7 hotspot VM into Java RTS. Both options represent significant code changes and a lot of work. On one hand, as described above, going to 64 bit x86 is a complex task. On the other hand, a JDK7 VM has accumulated several years of code changes and is substantially different from a JDK5 VM. We chose the second option because, starting with JDK6, hotspot has an improved client compiler with better code generation. So bringing in the new client compiler largely solves the 64 bit problem and has the added benefit of improved performance.
Backporting the JDK7 client compiler into Java RTS, reconciling code from the newer VM with the older one, enhancing it with RTSJ features, and making sure that it generates deterministic code and correct 64 bit code is what has kept me busy over the 2.2 release cycle.