
Buck's Blog

November 21, 2014

Where did all of these ConstPoolWrapper objects come from?!

David Buck
Principal Member of Technical Staff
Introduction

A very small number of applications using JRockit R28.2.2 and later may suffer performance issues when compared to earlier versions of R28. In this post, I will describe the issue in detail, and explain how to resolve or work around it.

What Changed in R28.2.2

In JRockit, object finalization requires that each instance of a finalizable class be registered with the JVM at object creation time so that the JVM knows to treat the instance differently (add it to the finalizer queue) when the garbage collector determines that the object is no longer live.
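As a generic illustration (not JRockit code), a class is finalizable simply by overriding java.lang.Object.finalize(); the JVM notes this when each instance is allocated so that, once the instance becomes unreachable, it can be routed to the finalizer queue instead of being reclaimed immediately:

===
class FinalizableResource {
    @Override
    protected void finalize() throws Throwable {
        try {
            // release some non-heap resource here (hypothetical)
        } finally {
            super.finalize(); // good practice when overriding finalize()
        }
    }
}
===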

While developing R28, we unfortunately introduced a bug where finalizable objects instantiated from native code (think JNI's NewObject function) would never get registered. In other words, objects created from JNI, or from other native code internal to the JVM, would simply be discarded by the garbage collector as soon as they were determined to be dead. Thankfully, objects created from pure Java code via the new keyword would still be finalized without issue.

Never executing finalizers that are expected to be called is bad enough, but this bug indirectly had a much bigger impact. In JRockit, we depend on finalizers to help us prune our shared constant pool (a pool of constants from loaded classes and other long-lived strings).

Each time Java code asks for a copy of an individual class's constant pool, we make a copy of the constant pool data for that class and store it on the object heap. This was necessary because, unlike HotSpot, JRockit never had a PermGen to keep class metadata available for direct access from Java. Every copy of the constant pool on the object heap is wrapped in a ConstPoolWrapper object. Each time a new copy is made, we also increment a reference counter for each string in the shared pool so that we can keep track of which strings are still live. When the ConstPoolWrapper is finalized, we decrement the applicable reference counters.
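Purely as an illustration of the pattern described above (this is not JRockit source; the real pool and wrapper live in native code, and all names below are hypothetical), a heap-side constant pool copy pins the shared strings it references by bumping a counter and relies on finalize() to release them again:

===
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical stand-in for the shared constant pool's reference counting.
class SharedStringPool {
    private static final ConcurrentHashMap<String, AtomicInteger> REFS =
            new ConcurrentHashMap<String, AtomicInteger>();

    static void retain(String s) {
        AtomicInteger c = REFS.get(s);
        if (c == null) {
            AtomicInteger fresh = new AtomicInteger();
            AtomicInteger prev = REFS.putIfAbsent(s, fresh);
            c = (prev != null) ? prev : fresh;
        }
        c.incrementAndGet();
    }

    static void release(String s) {
        AtomicInteger c = REFS.get(s);
        if (c != null && c.decrementAndGet() == 0) {
            REFS.remove(s); // prune the entry once nothing references it
        }
    }
}

// Hypothetical stand-in for a heap-side constant pool copy.
class ConstPoolCopy {
    private final String[] entries;

    ConstPoolCopy(String[] entries) {
        this.entries = entries.clone();
        for (String s : this.entries) {
            SharedStringPool.retain(s); // counted up when the copy is created
        }
    }

    @Override
    protected void finalize() {
        for (String s : entries) {
            SharedStringPool.release(s); // never runs if finalization is broken,
        }                                // so the counters only ever go up
    }
}
===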

So what happens if the finalize method for each ConstPoolWrapper is never called? All sorts of bad things. The worst case scenario is that we eventually overflow the 32-bit reference counter, and when the count wraps back around to zero, the string is removed from the pool despite still being live! This can cause crashes and other completely unpredictable behavior.
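As a toy illustration of the arithmetic involved (nothing JRockit-specific), a 32-bit counter that is only ever incremented eventually wraps around and passes through zero again:

===
public class RefCountOverflow {
    public static void main(String[] args) {
        int refCount = Integer.MAX_VALUE;
        refCount++;                    // overflows to Integer.MIN_VALUE
        System.out.println(refCount);  // -2147483648
        // Roughly 2^31 further increments later, the counter reaches 0 again,
        // at which point a string that is still referenced looks unreferenced.
        refCount = -1;
        refCount++;
        System.out.println(refCount);  // 0
    }
}
===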

Fortunately, we were able to fix this issue in R28.2.2, and finalizers for natively instantiated objects now work exactly as expected.

Performance Impact of this Fix

The negative performance impact of finalizable objects has been well known since almost the very beginning of the Java platform. Finalization adds a significant amount of overhead to garbage collection. At best, every finalizable object we discard needs to be "collected" at least twice: once before finalization and once after. Also, depending on the JVM implementation and the nature of the finalizer's work, finalization often has to run single-threaded, resulting in scalability issues on multi-core systems.
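Below is a minimal, JVM-agnostic sketch of that two-pass cost (not JRockit-specific): after the first collection discovers the instance is unreachable, it is merely queued for the finalizer thread, and its memory can only be reclaimed by a later cycle. Finalization timing is not guaranteed, so the loop may spin for a while:

===
public class FinalizeCostDemo {

    private static volatile boolean finalized = false;

    @Override
    protected void finalize() {
        // Runs later, on the JVM's finalizer thread, not during the GC pause.
        finalized = true;
    }

    public static void main(String[] args) throws InterruptedException {
        Object o = new FinalizeCostDemo();
        o = null;                          // instance is now unreachable
        while (!finalized) {
            System.gc();                   // first pass: queued for finalization, not freed
            System.runFinalization();      // nudge the finalizer thread
            Thread.sleep(50);
        }
        System.gc();                       // only now can the memory actually be reclaimed
        System.out.println("finalized; memory reclaimable on the next cycle");
    }
}
===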

Now that we have fixed the "forgotten" finalizers bug in R28.2.2, every copy of a class's constant pool is suddenly a finalizable object again. For the vast majority of applications, use of reflection or other activities that retrieve copies of constant pools is infrequent enough that this has no noticeable performance impact. But there are a few applications out there that can indirectly generate very large numbers of ConstPoolWrapper objects. For such applications, the finalizer thread(s) may have trouble keeping up with the massive number of ConstPoolWrapper objects being added to the finalizer queue. This can result in a significant increase in memory pressure. In a worst case scenario, this additional memory pressure can even lead to an OutOfMemoryError.

Identifying this Issue

If you suspect that you are hitting this issue, the quickest way to confirm is to use the heap_diagnostics command and examine the state of the finalizer queue. If you see tens of thousands of ConstPoolWrapper objects awaiting finalization, you are almost certainly impacted by this issue.

===
Finalizers:
    1271096  37979  77 1233040      0 Total for all Finalizers
    1125384      1  66 1125317      0 => jrockit/reflect/ConstPoolWrapper
===
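For reference, heap_diagnostics is typically run through the jrcmd tool that ships with JRockit, where <pid> below is a placeholder for your JVM's process id:

===
jrcmd <pid> heap_diagnostics
===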

Another clue is a finalizer thread that is busy finalizing ConstPoolWrapper objects. For example, you may see something like the following in a thread dump:

===
"Finalizer" id=7 idx=0x50 tid=17238 prio=8 alive, native_blocked, daemon
    at jrockit/reflect/ConstPoolWrapper.finalize()V(Native Method)
    at jrockit/memory/Finalizer.doFinalize(Finalizer.java:29)
    at jrockit/memory/Finalizer.access$300(Finalizer.java:12)
    at jrockit/memory/Finalizer$4.run(Finalizer.java:186)
    at java/lang/Thread.run(Thread.java:662)
    at jrockit/vm/RNI.c2java(JJJJJ)V(Native Method)
    -- end of trace
===

Basically, seeing "ConstPoolWrapper" just about anywhere should be considered a red flag that you may be facing this issue.

One final hint is a severe discrepancy between the size of an hprof heap dump and the live-set size. Our hprof code currently may dump only a subset of the finalization queue, so if your heap is filled with ConstPoolWrappers awaiting finalization, you may see a big difference between the amount of heap in use and the size of any hprof dump files generated.

Resolving the Issue at the Application Level

The majority of applications that we have seen impacted by this issue dynamically create a large number of new classes at runtime. The most common example of this is applications that (mis)use JAXB to instantiate a set of completely new XML parser classes for every single request they process.

The standard troubleshooting steps to follow in this case are to run your application with the -Xverbose:class flag enabled and to examine the output to see what kinds of classes are being loaded continuously, even after the application should have reached its steady state. Once you know the names of the classes being generated at runtime, you can hopefully determine why they are being created and possibly alter the application so that it does not use classes in such a "disposable" manner.
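Assuming a simple launcher, with MyApp.jar standing in for your own application, the flag is just added to the java command line:

===
java -Xverbose:class -jar MyApp.jar
===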

For example, we have seen several customers change their use of JAXB to create a single XML parser (or one per thread, to avoid scalability issues) and reuse it, as opposed to dynamically recreating the exact same classes again and again.
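A minimal sketch of that "create once, reuse" pattern is shown below; Order and OrderParser are hypothetical application classes, and the key point is that the expensive, class-generating JAXBContext is built once rather than per request:

===
import java.io.StringReader;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.JAXBException;
import javax.xml.bind.Unmarshaller;
import javax.xml.bind.annotation.XmlRootElement;

// Hypothetical payload class for the example.
@XmlRootElement
class Order {
    public String id;
}

public class OrderParser {

    // JAXBContext is thread-safe and expensive to create: build it once.
    private static final JAXBContext CONTEXT;
    static {
        try {
            CONTEXT = JAXBContext.newInstance(Order.class);
        } catch (JAXBException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    // Unmarshaller instances are cheap but not thread-safe, so create one
    // per call (or keep one per thread in a ThreadLocal).
    public static Order parse(String xml) throws JAXBException {
        Unmarshaller u = CONTEXT.createUnmarshaller();
        return (Order) u.unmarshal(new StringReader(xml));
    }
}
===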

I should also point out that modifying your application to limit redundant class creation is a good idea on any JVM. Class loading is expensive and your application is very likely to see a performance benefit from the elimination of unneeded class loading regardless of which JVM you use.

JVM-Level Workaround

But of course, it is not always possible to modify your application. So I have made some changes in R28.3.2 to try to help users who run into this issue and cannot resolve it at the application level:

1. I have added a new flag, -XX:-UseCPoolGC, that causes ConstPoolWrapper objects to no longer be finalizable.

2. I have built overflow protection into the JVM's shared constant pool reference counter so that we do not experience the crashes and other stability issues that R28 versions before R28.2.2 faced.

So by adding this flag to your java command line, you should get the same performance that you did before R28.2.2, without facing the crashes caused by overflowing the reference counter.
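For example, with MyApp.jar again standing in for your own application:

===
java -XX:-UseCPoolGC -jar MyApp.jar
===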

The one downside is that this could lead to a native memory leak, as an application that continuously creates new classes with unique constant pool entries may never be able to prune some entries from the shared constant pool. While we have never seen an application where this memory consumption would even be noticeable, it is a risk, at least in theory. As a result, I only recommend using this new flag if you have confirmed that you are hitting the ConstPoolWrapper issue and are unable to resolve it by modifying your application.

Obviously, the "ideal fix" would be a design-level change to JRockit so that it does not depend on finalization like this at all. If we were going to do another major release of JRockit (R29), that would be worth serious consideration. But given JRockit's legacy status and the focus on stability for the remaining planned maintenance releases, I believe the workaround flag is the best choice for helping our users while not risking further unexpected changes in behavior.

Conclusion

If you are using JRockit R28.2.2 or later and you notice unexplained memory pressure, collect heap_diagnostics output and look for a large number of ConstPoolWrappers in the finalizer queue. If you are hitting this issue, consider the application-level changes recommended above, or use the new -XX:-UseCPoolGC flag (added in R28.3.2) to work around the issue at the JVM level.

And finally (pun intended, sadly), you may wish to keep this in mind as another example of why any design that depends on the timely execution of finalizers is flawed. Finalizers are almost always bad news for performance, and you simply cannot rely on them being called quickly. In short: Java finalizer != C++ destructor.
