JRockit: Artificial GC: Cleaning up Finished Threads


Sometimes we see JRockit users who experience frequent full GCs as the JVM tries to clean up after application threads that have finished executing. In this post I'll write about why this happens, how to identify it in your own environment, and what the possible solutions might be.

What exactly is happening?

If there are many threads that have finished execution, but their corresponding java.lang.Thread objects have not yet been garbage collected from the Java heap, the JVM will automatically trigger a full GC event to try to clean up the leftover Thread objects.

Why does JRockit do this?

Due to JRockit's internal design, the JVM is unable to deallocate all of the internal resources associated with an execution thread until after the corresponding Thread object has been removed from the Java heap. While this is normally not a problem, if we go too long between garbage collections and the application creates (and then abandons) a lot of threads, the memory and other native resources consumed by these dead threads can grow very large. For example, if I run a test program that constantly "leaks" dead threads, but keeps strong references to each of the corresponding Thread objects, the memory footprint of the JVM can quickly grow to consume many gigabytes of off-heap (native) memory, just to retain data about our (almost) dead threads.
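A test program of that kind can be sketched roughly as follows (the class and method names are mine; the original test program is not shown in this post). Each thread exits immediately, but the strong references keep the Thread objects reachable, which is exactly the pattern that pins JRockit's native per-thread resources:

```java
import java.util.ArrayList;
import java.util.List;

public class DeadThreadLeak {
    // Start n threads that exit immediately, but keep strong references to
    // their Thread objects so the GC cannot reclaim them. On JRockit, the
    // native resources of each dead thread stay allocated until the Thread
    // object itself is collected from the heap.
    static List<Thread> leakDeadThreads(int n) throws InterruptedException {
        List<Thread> keepAlive = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            Thread t = new Thread(() -> { /* finish immediately */ });
            t.start();
            t.join();            // the thread is now dead...
            keepAlive.add(t);    // ...but its Thread object is still reachable
        }
        return keepAlive;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("Retained dead threads: " + leakDeadThreads(1_000).size());
    }
}
```

Run something like this with a large enough count and you can watch the process's native footprint climb even though the heap itself stays small.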

What does this actually look like in practice?

If you suspect that this is happening with your application, the easiest way to confirm is to enable JRockit's memdbg verbose logging module (*1). If a full collection is triggered by threads waiting for cleanup of their Thread objects, you will see something like the following at the start of the collection event (*2):

[DEBUG][memory ] [OC#7] GC reason: Artificial, description: Cleaning up Finished Threads.

Another verbose logging module that will give you information about these thread-related full GC events would be the thread module:

[INFO ][thread ] Trigging GC due to 5772 dead threads and 251 threads since last collect.

The above message indicates that there are currently 5772 threads that have finished executing, but the JVM is still waiting for their Thread objects to be collected by the GC subsystem before it can completely deallocate the resources associated with each thread. We also see that 251 new threads have been created since the last full GC triggered by the thread management code. In other words, "last collect" really means "last collection triggered by the thread cleanup code". A "natural" GC event will not reset this counter back to zero; only a thread-cleanup GC will.

What can I do to avoid these collections?

The best solution is of course to not create so many threads! Threads are expensive. If an application is continuously generating (and then abandoning) so many new threads that a full GC for every 250 threads becomes a performance concern, perhaps the application can be redesigned to not use threads in such a disposable manner. For example, thread pooling may be a reasonable solution. Even if you run your application on HotSpot, creating so many temporary threads is very likely a performance issue. Changing your application to use threads in a more efficient manner is worth considering, regardless of which JVM you use.
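As a concrete illustration of the pooling approach, the standard java.util.concurrent API lets a small fixed set of long-lived threads service any number of short tasks (this is a generic sketch, not code from the post):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class PooledTasks {
    // Run n short tasks on a small fixed pool instead of creating
    // (and abandoning) one new Thread per task.
    static int runTasks(int n) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < n; i++) {
            pool.submit(done::incrementAndGet); // stand-in for real work
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        return done.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("Completed " + runTasks(1_000) + " tasks on 4 pooled threads");
    }
}
```

With a design like this, only four Thread objects ever exist, so the dead-thread cleanup path in the JVM never comes into play.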

But of course sometimes redesigning your application is not a viable option. Perhaps the code that is generating all of these threads is from some third party and you can't make changes. Perhaps you do plan on changing the application, but in the meantime, you need to do something about all of these full GC events. There is a possible JVM-side "solution", but it should be considered a temporary workaround.

JRockit has two diagnostic options that allow you to tune this behavior: MaxDeadThreadsBeforeGC and MinDeadThreadsSinceGC. There is a subtle difference between them, but basically they both allow you to modify the thresholds used to decide when the thread management code will trigger a full GC.

MaxDeadThreadsBeforeGC (default: 250)

This option specifies the number of dead threads (threads that have finished executing, but cannot be cleaned up because the JVM is still waiting for the corresponding Thread objects to be removed from the Java heap) that will trigger an artificial full GC.

MinDeadThreadsSinceGC (default: 250)

This option specifies the minimum number of new threads that must be created between thread-cleaning GC events. The idea is that even after a full GC, some subset of the dead threads may remain because there are still strong references to their Thread objects. If the number of dead threads that survive a full GC is higher than MaxDeadThreadsBeforeGC, we could end up triggering a full GC for every single new thread that is created. The MinDeadThreadsSinceGC option guarantees that at least a certain number of new threads have been created since the last thread-cleaning GC, even if the number of dead threads remains above MaxDeadThreadsBeforeGC.

Note that these two options correspond to the two numbers in the -Xverbose:thread output:

[INFO ][thread ] Trigging GC due to 5772 dead threads and 251 threads since last collect.

The first number, 5772, is the number of dead threads waiting to be cleaned up and is compared to the value of MaxDeadThreadsBeforeGC. The second number, 251, is the number of new threads that have been spawned since the last thread-cleaning full GC. This second number is compared to MinDeadThreadsSinceGC. Only if both of these numbers are above the set thresholds is a thread-cleaning GC triggered.
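Putting the two thresholds together, the decision logic can be sketched like this (my own illustration, not JRockit's actual source; the strict comparisons are inferred from the statement that both numbers must be above the thresholds, and from the 251-vs-250 log line):

```java
public class ThreadGcPolicy {
    // Default values as described in the post.
    static final int MAX_DEAD_THREADS_BEFORE_GC = 250;
    static final int MIN_DEAD_THREADS_SINCE_GC  = 250;

    // A thread-cleaning GC fires only when BOTH thresholds are exceeded:
    // enough dead threads are pending cleanup, AND enough new threads have
    // been created since the last thread-cleaning GC.
    static boolean shouldTriggerCleanupGc(int deadThreads, int threadsSinceLastCleanupGc) {
        return deadThreads > MAX_DEAD_THREADS_BEFORE_GC
            && threadsSinceLastCleanupGc > MIN_DEAD_THREADS_SINCE_GC;
    }

    public static void main(String[] args) {
        // Matches the numbers in the example log line: 5772 dead, 251 since last collect.
        System.out.println(shouldTriggerCleanupGc(5772, 251));
        // Many dead threads, but too few new ones since the last cleanup GC.
        System.out.println(shouldTriggerCleanupGc(5772, 100));
    }
}
```

This also shows why raising MinDeadThreadsSinceGC is the natural knob for reducing GC frequency: the dead-thread count may stay high for a long time, but the new-thread count only grows as fast as the application spawns threads.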

If you need to reduce the frequency of thread-cleaning full collections and are not able to modify the application, I would recommend trying a larger value for MinDeadThreadsSinceGC. The trade-off is that you may see an increase in native memory consumption, as the JVM will not be able to proactively clean up after finished threads as often. Obviously, you should do a full round of performance and load testing before using either of these options in a production environment.

One more important point: these are both undocumented diagnostic options. On R28, in order to use either of these options, you must add -XX:+UnlockDiagnosticVMOptions before any diagnostic options on the command line. For example, to set MinDeadThreadsSinceGC to 1000, you would use a command line like this:

$ java -XX:+UnlockDiagnosticVMOptions -XX:MinDeadThreadsSinceGC=1000 MyJavaApp

If you have read this entire post, understand the risks (increased native memory consumption), and are willing to thoroughly load test your application before rolling out to production, either of these options should be safe to use.

(*1) For details on using the -Xverbose option, please have a look at the command line reference.

(*2) Please note that this example output is from R28. The corresponding verbose output from R27 will refer to the GC cause as "Floating Dead Threads".


I am a member of the Java SE Sustaining Engineering team. I work on both JVMs (HotSpot and JRockit) and the various client technologies (AWT, Swing, Java2D, etc.). I will post mostly about the work I do and any interesting programming, troubleshooting, tuning tips or other random stuff I think somebody out there might want to read about.

