Wednesday Mar 19, 2014

About G1 Garbage Collector, Permanent Generation and Metaspace

We received some questions around the G1 garbage collector and its use of Permanent Generation. There seems to be some confusion, with some believing that the HotSpot JVM does not use a permanent generation when G1 is used as the garbage collector. Here's some clarification:

JDK 7: PermGen

Permanent Generation still exists in JDK 7 and its updates, and is used by all the garbage collectors. The effort to remove the permanent generation started in JDK 7, and some of the data residing in it was moved to either the Java Heap or the native heap, but the permanent generation was not completely removed. Here's the list of things that were moved out of the permanent generation in JDK 7:

  • Symbols were moved to the native heap

  • Interned strings were moved to the Java Heap

  • Class statics were moved to the Java Heap
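The semantics of string interning did not change with this move, only where the pooled instances live: as of JDK 7 they are ordinary Java heap objects, sized and collected with the heap. A minimal illustration of the pool behavior (the class name is just for this sketch):

```java
public class InternDemo {
    public static void main(String[] args) {
        // A string constructed at run time is a distinct heap object...
        String copy = new String("hello");
        System.out.println(copy == "hello");          // false: different objects

        // ...but intern() returns the canonical instance from the string pool,
        // which as of JDK 7 lives in the Java heap rather than in PermGen.
        System.out.println(copy.intern() == "hello"); // true: same pooled instance
    }
}
```

A practical consequence is that heavily interned applications now put pressure on the Java heap (tunable with -Xmx) instead of on PermGen.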

JDK7: G1 and PermGen

With the G1 collector, PermGen is collected only at a Full GC, which is a stop-the-world (STW) GC. If G1 is running optimally, it does not do Full GCs. G1 invokes a Full GC only when PermGen is full or when the application does allocations faster than G1 can concurrently collect garbage.

With the CMS garbage collector, we can use the option -XX:+CMSClassUnloadingEnabled to collect PermGen space in the CMS concurrent cycle. There is no equivalent option for G1; G1 collects PermGen only during the Full stop-the-world GCs.

We can use the options -XX:PermSize and -XX:MaxPermSize to tune the PermGen space size according to the application's needs.

JDK8: PermGen

The permanent generation has been completely removed in JDK 8. This work was done under the bug https://bugs.openjdk.java.net/browse/JDK-6964458. The options PermSize and MaxPermSize have also been removed in JDK 8.

Email to openjdk alias regarding the PermGen elimination project: http://mail.openjdk.java.net/pipermail/hotspot-dev/2012-September/006679.html

JDK8: Metaspace

In JDK 8, class metadata is stored in the native heap, and this space is called Metaspace. Some new flags have been added for Metaspace in JDK 8:

  • -XX:MetaspaceSize=<NNN> where <NNN> is the initial amount of space (the initial high-water-mark), in bytes, allocated for class metadata that may induce a garbage collection to unload classes. The amount is approximate. After the high-water-mark is first reached, the next high-water-mark is managed by the garbage collector
  • -XX:MaxMetaspaceSize=<NNN> where <NNN> is the maximum amount of space to be allocated for class metadata (in bytes). This flag can be used to limit the amount of space allocated for class metadata. This value is approximate. By default there is no limit set.
  • -XX:MinMetaspaceFreeRatio=<NNN> where <NNN> is the minimum percentage of class metadata capacity free after a GC to avoid an increase in the amount of space (high-water-mark) allocated for class metadata that will induce a garbage collection.
  • -XX:MaxMetaspaceFreeRatio=<NNN> where <NNN> is the maximum percentage of class metadata capacity free after a GC to avoid a reduction in the amount of space (high-water-mark) allocated for class metadata that will induce a garbage collection.

By default, class metadata allocation is limited only by the amount of available native memory. We can use the new option MaxMetaspaceSize to limit the amount of native memory used for class metadata. It is analogous to MaxPermSize. A garbage collection is induced to collect dead classloaders and classes when the class metadata usage reaches MetaspaceSize (12 MB on the 32-bit client VM and 16 MB on the 32-bit server VM, with larger sizes on the 64-bit VMs). Set MetaspaceSize to a higher value to delay the induced garbage collections. After an induced garbage collection, the class metadata usage needed to induce the next garbage collection may be increased.
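Metaspace usage can be observed at run time through the standard java.lang.management API: on JDK 8 and later, the class-metadata space shows up as a memory pool named "Metaspace" (and, with compressed class pointers, "Compressed Class Space"). A small sketch, assuming it runs on a JDK 8+ VM:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

public class MetaspaceUsage {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            // Pool names on JDK 8+: "Metaspace" and, when compressed class
            // pointers are enabled, "Compressed Class Space".
            if (pool.getName().contains("Metaspace")) {
                MemoryUsage u = pool.getUsage();
                // getMax() is -1 when no MaxMetaspaceSize limit has been set.
                System.out.println(pool.getName() + ": used=" + u.getUsed()
                        + " committed=" + u.getCommitted() + " max=" + u.getMax());
            }
        }
    }
}
```

Watching the "used" value grow can help decide whether MetaspaceSize should be raised to delay the induced collections described above.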


Saturday Jan 04, 2014

UseLargePages on Linux

There is a JVM option UseLargePages (introduced in JDK 5.0u5) that can be used to request large memory pages from the system if large pages are supported by the system. The goal of large page support is to optimize the processor's Translation-Lookaside Buffer and hence increase performance.

Recently we saw a few instances of HotSpot crashes with JDK 7 on the Linux platform when using large memory pages.

8013057: assert(_needs_gc || SafepointSynchronize::is_at_safepoint()) failed: only read at safepoint
https://bugs.openjdk.java.net/browse/JDK-8013057

8007074: SIGSEGV at ParMarkBitMap::verify_clear()
https://bugs.openjdk.java.net/browse/JDK-8007074

Cause: The cause of these crashes is the way mmap works on the Linux platform. If large page support is enabled on the system, the commit_memory() implementation of HotSpot on Linux tries to commit the previously reserved memory with an 'mmap' call using large pages. If there are not enough large pages available, the mmap call fails, releasing the reserved memory and allowing the same memory region to be used for other allocations. This causes the same memory region to be used for different purposes and leads to unexpected behavior.

Symptoms: With the above-mentioned issue, we may see crashes with stack traces like this:
 V  [libjvm.so+0x759a1a]  ParMarkBitMap::mark_obj(HeapWord*, unsigned long)+0x7a
 V  [libjvm.so+0x7a116e]  PSParallelCompact::MarkAndPushClosure::do_oop(oopDesc**)+0xce
 V  [libjvm.so+0x485197]  frame::oops_interpreted_do(OopClosure*, RegisterMap const*, bool)+0xe7
 V  [libjvm.so+0x863a4a]  JavaThread::oops_do(OopClosure*, CodeBlobClosure*)+0x15a
 V  [libjvm.so+0x77c97e]  ThreadRootsMarkingTask::do_it(GCTaskManager*, unsigned int)+0xae
 V  [libjvm.so+0x4b7ec0]  GCTaskThread::run()+0x130
 V  [libjvm.so+0x748f90]  java_start(Thread*)+0x100

Here the crash happens while writing to the address 0x00007f2cf656eef0 in the mapped region of ParMarkBitMap. And that memory belongs to rt.jar (from the hs_err log file):
7f2cf6419000-7f2cf65d7000 r--s 039dd000 00:31 106601532                  /jdk/jdk1.7.0_21/jre/lib/rt.jar

Due to this bug, the same memory region got mapped for two different allocations and caused this crash.

Fixes:

8013057 strengthened the error handling of mmap failures on Linux platform and also added some diagnostic information for these failures. It is fixed in 7u40.

8007074 fixes the reserved memory mapping loss issue when using the large pages on the Linux platform. Details on this fix: http://mail.openjdk.java.net/pipermail/hotspot-dev/2013-July/010117.html. It is fixed in JDK 8 and will also be included into 7u60, scheduled to be released in May 2014.

Workarounds:

1. Disable the use of large pages with JVM option -XX:-UseLargePages.

2. Increase the number of large pages available on the system. By having a sufficient number of large pages on the system, we can reduce the risk of memory commit failures and thus reduce the chances of hitting the large pages issue. Please see the details on how to configure the number of large pages here:
http://www.oracle.com/technetwork/java/javase/tech/largememory-jsp-137182.html

Other related fixes:

8026887: Make issues due to failed large pages allocations easier to debug
https://bugs.openjdk.java.net/browse/JDK-8026887

With the fix of 8013057, diagnostic information for the memory commit failures was added. It printed error messages like this:
os::commit_memory(0x00000006b1600000, 352321536, 2097152, 0) failed;
error='Cannot allocate memory' (errno=12)

With the fix of 8026887, this error message has been modified to suggest that the memory commit failed due to the lack of large pages, and it now looks like the following:
os::commit_memory(0x00000006b1600000, 352321536, 2097152, 0) failed;
error='Cannot allocate large pages, falling back to small pages' (errno=12)

This change has been integrated into 7u51.

The fix for 8007074 will be available in 7u60 and could not be included in 7u51, so this change (JDK-8026887) makes the error messages printed for the large-pages-related commit memory failures more informative. If we see these messages in the JVM logs, it indicates that we are at risk of hitting the unexpected behavior caused by bug 8007074.

8024838: Significant slowdown due to transparent huge pages
https://bugs.openjdk.java.net/browse/JDK-8024838

With the fix of 8007074, significant performance degradation was detected. This regression has been fixed with JDK-8024838 in JDK 8 and will also be included in JDK 7u60.

Wednesday Jul 31, 2013

Troubleshooting Long GC Pauses

Low pause times are the most important goal for many enterprise applications, especially for transaction-based systems where long latencies can result in transaction time-outs. For systems running on Java Virtual Machines, garbage collections can sometimes be the cause of long pauses.

In this post I am going to describe different scenarios where we can encounter long GC pauses and how we can diagnose and troubleshoot these GC pauses.

Causes

Following are the different situations that can cause long GC pauses during the application run.

1. Fragmentation in the heap

Fragmentation in the Java Heap can cause GCs to occur more frequently and can also sometimes cause long GC pauses. This is more likely with the Concurrent Mark Sweep collector, also known as CMS, where the tenured generation space is not compacted by the concurrent collections.

In the case of CMS, fragmentation in the tenured generation space can cause young generation collections to fail with promotion failures, triggering 'Concurrent Mode Failure' collections, which are stop-the-world Full GCs. Full GCs take a long time to finish compared to the concurrent collection pauses.

Due to fragmentation, direct allocations in the tenured generation may fail even when there is a lot of free space available, causing Full GCs. Fragmentation can also cause frequent allocation failures, triggering frequent Full GCs that increase the overall time the application is paused.

The following logs collected with the CMS collector show that fragmentation in the CMS generation space is very high, which leads to a promotion failure during a young generation ParNew collection and then a 'concurrent mode failure'. A Full GC is done in the event of the 'concurrent mode failure', and it takes a very long time, 17.1365396 seconds, to finish.

{Heap before GC invocations=7430 (full 24):
par new generation total 134400K, used 121348K [0x53000000, 0x5c600000, 0x5c600000)
eden space 115200K, 99% used [0x53000000, 0x5a07e738, 0x5a080000)
from space 19200K, 32% used [0x5a080000, 0x5a682cc0, 0x5b340000)
to space 19200K, 0% used [0x5b340000, 0x5b340000, 0x5c600000)
concurrent mark-sweep generation total 2099200K, used 1694466K [0x5c600000, 0xdc800000, 0xdc800000)
concurrent-mark-sweep perm gen total 409600K, used 186942K [0xdc800000, 0xf5800000, 0xfbc00000)
10628.167: [GC Before GC:
Statistics for BinaryTreeDictionary:
------------------------------------
Total Free Space: 103224160
Max Chunk Size: 5486
Number of Blocks: 57345
Av. Block Size: 1800
Tree Height: 36
Statistics for IndexedFreeLists:
--------------------------------
Total Free Space: 371324
Max Chunk Size: 254
Number of Blocks: 8591 <---- High fragmentation
Av. Block Size: 43
free=103595484 frag=1.0000 <---- High fragmentation
Before GC:
Statistics for BinaryTreeDictionary:
------------------------------------
Total Free Space: 0
Max Chunk Size: 0
Number of Blocks: 0
Tree Height: 0
Statistics for IndexedFreeLists:
--------------------------------
Total Free Space: 0
Max Chunk Size: 0
Number of Blocks: 0
free=0 frag=0.0000
10628.168: [ParNew (promotion failed)
Desired survivor size 9830400 bytes, new threshold 1 (max 1)
- age 1: 4770440 bytes, 4770440 total
: 121348K->122157K(134400K), 0.4263254 secs]10628,594: [CMS10630.887: [CMS-concurrent-mark: 7.286/8.682 secs] [Times: user=14.81 sys=0.34, real=8.68 secs] (concurrent mode failure): 1698044K->625427K(2099200K), 17.1365396 secs] 1815815K->625427K(2233600K), [CMS Perm : 186942K->180711K(409600K)]After GC:
Statistics for BinaryTreeDictionary:
------------------------------------
Total Free Space: 377269492
Max Chunk Size: 377269492
Number of Blocks: 1
Av. Block Size: 377269492
Tree Height: 1
Statistics for IndexedFreeLists:
--------------------------------
Total Free Space: 0
Max Chunk Size: 0
Number of Blocks: 0
free=377269492 frag=0.0000
After GC:
Statistics for BinaryTreeDictionary:
------------------------------------
Total Free Space: 0
Max Chunk Size: 0
Number of Blocks: 0
Tree Height: 0
Statistics for IndexedFreeLists:
--------------------------------
Total Free Space: 0
Max Chunk Size: 0
Number of Blocks: 0
free=0 frag=0.0000
, 17.5645589 secs] [Times: user=17.82 sys=0.06, real=17.57 secs]
Heap after GC invocations=7431 (full 25):
par new generation total 134400K, used 0K [0x53000000, 0x5c600000, 0x5c600000)
eden space 115200K, 0% used [0x53000000, 0x53000000, 0x5a080000)
from space 19200K, 0% used [0x5b340000, 0x5b340000, 0x5c600000)
to space 19200K, 0% used [0x5a080000, 0x5a080000, 0x5b340000)
concurrent mark-sweep generation total 2099200K, used 625427K [0x5c600000, 0xdc800000, 0xdc800000)
concurrent-mark-sweep perm gen total 409600K, used 180711K [0xdc800000, 0xf5800000, 0xfbc00000)
}
Total time for which application threads were stopped: 17.5730653 seconds

2. Other OS activities happening at the time of GC

Sometimes OS activities such as swapping or network activity happening at the time a GC is taking place can make the GC pauses last much longer. These pauses can be on the order of a few seconds to several minutes.

If your system is configured to use swap space, the operating system may move inactive pages of memory of the JVM process to the swap space to free up memory for the currently active process, which may be the same process or a different process on the system. Swapping is very expensive, as it requires disk accesses that are much slower than physical memory accesses. So, if the system needs to perform swapping during a garbage collection, the GC will seem to run for a very long time.

Following is the log of a young generation collection that lasts for 29.47 seconds.

{Heap before GC invocations=132 (full 0):
par new generation total 2696384K, used 2696384K [0xfffffffc20010000, 0xfffffffce0010000, 0xfffffffce0010000)
eden space 2247040K, 100% used [0xfffffffc20010000, 0xfffffffca9270000, 0xfffffffca9270000)
from space 449344K, 100% used [0xfffffffca9270000, 0xfffffffcc4940000, 0xfffffffcc4940000)
to space 449344K, 0% used [0xfffffffcc4940000, 0xfffffffcc4940000, 0xfffffffce0010000)
concurrent mark-sweep generation total 9437184K, used 1860619K [0xfffffffce0010000, 0xffffffff20010000, 0xffffffff20010000)
concurrent-mark-sweep perm gen total 1310720K, used 511451K [0xffffffff20010000, 0xffffffff70010000, 0xffffffff70010000)
2013-07-17T03:58:06.601-0700: 51522.120: [GC Before GC: : 2696384K->449344K(2696384K), 29.4779282 secs] 4557003K->2326821K(12133568K) ,29.4795222 secs] [Times: user=915.56 sys=6.35, real=29.48 secs]

Corresponding 'vmstat' output at 03:58:

kthr memory page disk faults cpu
r b w swap free re mf pi po fr de sr s0 s1 s2 s3 in sy cs us sy id
20130717_035806 0 0 0 77611960 94847600 55 266 0 0 0 0 0 0 0 0 0 3041 2644 2431 44 8 48
20130717_035815 0 0 0 76968296 94828816 79 324 0 18 18 0 0 0 0 1 0 3009 3642 2519 59 13 28
20130717_035831 1 0 0 77316456 94816000 389 2848 0 7 7 0 0 0 0 2 0 40062 78231 61451 42 6 53
20130717_035841 2 0 0 77577552 94798520 115 591 0 13 13 0 0 13 12 1 0 4991 8104 5413 2 0 98

This minor GC takes around 29 seconds to complete. The corresponding vmstat output shows that the available swap space drops by ~600 MB during this period. That means that during this garbage collection some pages in RAM were moved out to the swap space, not necessarily pages belonging to this same process.

From the above, it is clear that the physical memory available on the system is not enough for all the processes running on it. The way to resolve this is to run fewer processes or, if possible, add more RAM to increase the physical memory of the system. In the case above, the specified maximum tenured generation size is 9G, of which only 1.8G is occupied. So it makes sense to reduce the heap size to lower the pressure on physical memory and thus avoid or minimize the swapping activity.

Apart from swapping, we should monitor whether there is any I/O or network activity happening during the long GC pauses. These can be monitored using the iostat and netstat tools. It is also helpful to look at the CPU statistics with the mpstat tool to figure out whether enough CPU resources were available during the GC pauses.

3. Insufficient heap size

If the application footprint is larger than the maximum heap space that we have specified for the JVM, it results in frequent collections. Due to the insufficient heap space, allocation requests fail and the JVM needs to invoke garbage collections in an attempt to reclaim space for the allocations. But since it cannot reclaim much space with each collection, subsequent allocation failures result in more GC invocations.

These frequent Full GCs cause long pauses in the application run. For example, in the following case, the permanent generation is almost full, and allocation attempts into it are failing, triggering Full GCs.

166687.013: [Full GC [PSYoungGen: 126501K->0K(922048K)] [PSOldGen: 2063794K->1598637K(2097152K)] 2190295K->1598637K(3019200K) [PSPermGen: 165840K->164249K(166016K)], 6.8204928 secs] [Times: user=6.80 sys=0.02, real=6.81 secs]
166699.015: [Full GC [PSYoungGen: 125518K->0K(922048K)] [PSOldGen: 1763798K->1583621K(2097152K)] 1889316K->1583621K(3019200K) [PSPermGen: 165868K->164849K(166016K)], 4.8204928 secs] [Times: user=4.80 sys=0.02, real=4.81 secs]

Similarly, the frequent Full GCs can occur if there is insufficient space in the tenured generation for the allocations or promotions.

The solution for these long pauses is to identify the average footprint of the application and then specify the heap size accordingly.
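One rough way to estimate the live footprint from inside the application is to sample heap usage right after a collection, when mostly live data remains, using the standard MemoryMXBean API. A minimal sketch (the class name is illustrative, and System.gc() is only a hint to the VM):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class Footprint {
    public static void main(String[] args) {
        // Request a collection first so getUsed() approximates live data.
        // Note: System.gc() is a hint only, and a no-op under -XX:+DisableExplicitGC.
        System.gc();
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        System.out.println("live ~" + heap.getUsed() / (1024 * 1024) + " MB of "
                + heap.getMax() / (1024 * 1024) + " MB max");
    }
}
```

The post-GC occupancy figures in the GC logs (e.g. the `1598637K(2097152K)` old-generation numbers above) give the same information without any code.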

4. Bug in the JVM

Sometimes these long pauses could be due to a bug in the JVM. For example, due to the following bugs in the JVM, Java applications may face long GC pauses.

  • 6459113: CMS+ParNew: wildly different ParNew pause times depending on heap shape caused by allocation spread
    • fixed in JDK 6u1 and 7
  • 6572569: CMS: consistently skewed work distribution indicated in (long) re-mark pauses
    • fixed in JDK 6u4 and 7
  • 6631166: CMS: better heuristics when combatting fragmentation
    • fixed in JDK 6u21 and 7
  • 6999988: CMS: Increased fragmentation leading to promotion failure after CR#6631166 got implemented
    • fixed in JDK 6u25 and 7
  • 6683623: G1: use logarithmic BOT code such as used by other collectors
    • fixed in JDK 6u14 and 7
  • 6976350: G1: deal with fragmentation while copying objects during GC
    • fixed in JDK 8

If you are running with a JVM version affected by these bugs, please upgrade to a version where they are fixed.

5. Explicit System GCs

Check if there are any explicit System GCs happening. Requests to invoke these System GCs, which are stop-the-world Full GCs, could be coming from System.gc() calls in some class of the application or from some third-party module. These explicit System GCs can also cause very long pauses.

164638.058: [Full GC (System) [PSYoungGen: 22789K->0K(992448K)] [PSOldGen: 1645508K->1666990K(2097152K)] 1668298K->1666990K(3089600K) [PSPermGen: 164914K->164914K(166720K)], 5.7499132 secs] [Times: user=5.69 sys=0.06, real=5.75 secs]

If you are using RMI and are observing explicit Full GCs at a regular interval, these are coming from the RMI implementation, which triggers System.gc() at a regular interval. This interval can be configured using the following system properties:

-Dsun.rmi.dgc.server.gcInterval=n
-Dsun.rmi.dgc.client.gcInterval=n

The default value for these properties in JDK 1.4.2 and 5.0 is 60000 milliseconds, and 3600000 milliseconds in JDK 6 and later releases.

If you want to disable the explicit Full GCs invoked using System.gc(), run the application with -XX:+DisableExplicitGC JVM option.
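The two RMI properties above can also be set programmatically, as long as this happens before the RMI runtime is initialized. A small sketch; the value 3600000 (one hour) is just an example interval:

```java
public class RmiGcInterval {
    public static void main(String[] args) {
        // Equivalent to -Dsun.rmi.dgc.server.gcInterval=3600000 and
        // -Dsun.rmi.dgc.client.gcInterval=3600000 on the command line.
        // Must be set before any RMI classes are initialized.
        System.setProperty("sun.rmi.dgc.server.gcInterval", "3600000");
        System.setProperty("sun.rmi.dgc.client.gcInterval", "3600000");
        System.out.println(System.getProperty("sun.rmi.dgc.server.gcInterval"));
    }
}
```

In most deployments the command-line -D form is preferable, since it cannot race with RMI initialization.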

How to approach the problem

1. Collect GC logs with -XX:+PrintGCDetails -XX:+PrintHeapAtGC -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps and -XX:+PrintGCApplicationStoppedTime. In the case of the CMS collector, add the option -XX:PrintFLSStatistics=2 as well.

The GC logs can give us details on the nature and the frequency of the GC pauses, i.e. they can answer questions like: are the long GC pauses occurring during young collections or old collections, and how frequently are those collections encountering long pauses?

2. Monitor the overall health of the system using OS tools like vmstat, iostat, netstat and mpstat etc. on Solaris and Linux platforms, and tools like Process Monitor and Task Manager on the Windows operating system.

3. Use the GCHisto tool to visually analyze the GC logs and figure out which GCs are taking a long time and whether there is a pattern in the occurrence of these collections.

4. Try to see from the GC logs if there are any signs of fragmentation in the Java Heap space.

5. Monitor if the specified Heap size is enough to contain the footprint of the application.

6. Check if you are running with a JVM that has a known bug related to the long GC pauses and then upgrade if that bug is fixed in a later version.

Monday Jun 18, 2012

Understanding G1 GC Logs

The purpose of this post is to explain the meaning of the GC logs generated with some tracing and diagnostic options for G1 GC. We will take a look at the output generated with PrintGCDetails, which is a product flag and provides the most detailed level of information. Along with that, we will also look at the output of two diagnostic flags that get enabled with the -XX:+UnlockDiagnosticVMOptions option: G1PrintRegionLivenessInfo, which prints the occupancy and the amount of space used by live objects in each region at the end of the marking cycle, and G1PrintHeapRegions, which provides detailed information on the heap regions being allocated and reclaimed.

We will be looking at the logs generated with JDK 1.7.0_04 using these options.

Option -XX:+PrintGCDetails

Here's a sample log of G1 collection generated with PrintGCDetails.

 
0.522: [GC pause (young), 0.15877971 secs]
   [Parallel Time: 157.1 ms]
      [GC Worker Start (ms):  522.1  522.2  522.2  522.2
       Avg: 522.2, Min: 522.1, Max: 522.2, Diff:   0.1]
      [Ext Root Scanning (ms):  1.6  1.5  1.6  1.9
       Avg:   1.7, Min:   1.5, Max:   1.9, Diff:   0.4]
      [Update RS (ms):  38.7  38.8  50.6  37.3
       Avg:  41.3, Min:  37.3, Max:  50.6, Diff:  13.3]
         [Processed Buffers : 2 2 3 2
          Sum: 9, Avg: 2, Min: 2, Max: 3, Diff: 1]
      [Scan RS (ms):  9.9  9.7  0.0  9.7
       Avg:   7.3, Min:   0.0, Max:   9.9, Diff:   9.9]
      [Object Copy (ms):  106.7  106.8  104.6  107.9
       Avg: 106.5, Min: 104.6, Max: 107.9, Diff:   3.3]
      [Termination (ms):  0.0  0.0  0.0  0.0
       Avg:   0.0, Min:   0.0, Max:   0.0, Diff:   0.0]
         [Termination Attempts : 1 4 4 6
          Sum: 15, Avg: 3, Min: 1, Max: 6, Diff: 5]
      [GC Worker End (ms):  679.1  679.1  679.1  679.1
       Avg: 679.1, Min: 679.1, Max: 679.1, Diff:   0.1]
      [GC Worker (ms):  156.9  157.0  156.9  156.9
       Avg: 156.9, Min: 156.9, Max: 157.0, Diff:   0.1]
      [GC Worker Other (ms):  0.3  0.3  0.3  0.3
       Avg:   0.3, Min:   0.3, Max:   0.3, Diff:   0.0]
   [Clear CT:   0.1 ms]
   [Other:   1.5 ms]
      [Choose CSet:   0.0 ms]
      [Ref Proc:   0.3 ms]
      [Ref Enq:   0.0 ms]
      [Free CSet:   0.3 ms]
   [Eden: 12M(12M)->0B(13M) Survivors: 0B->2048K Heap: 14M(64M)->9739K(64M)]
 [Times: user=0.59 sys=0.02, real=0.16 secs]

This is the typical log of an Evacuation Pause (G1 collection) in which live objects are copied from one set of regions (young OR young+old) to another set. It is a stop-the-world activity and all the application threads are stopped at a safepoint during this time.

This pause is made up of several sub-tasks, indicated by the indentation in the log entries. Here is the topmost line that gets printed for the Evacuation Pause.

0.522: [GC pause (young), 0.15877971 secs]

This is the highest level information telling us that it is an Evacuation Pause that started at 0.522 secs from the start of the process, in which all the regions being evacuated are Young i.e. Eden and Survivor regions. This collection took 0.15877971 secs to finish.

Evacuation Pauses can be mixed as well, in which case the set of regions selected includes all of the young regions as well as some old regions.

1.730: [GC pause (mixed), 0.32714353 secs]

Let's take a look at all the sub-tasks performed in this Evacuation Pause.

[Parallel Time: 157.1 ms]

Parallel Time is the total elapsed time spent by all the parallel GC worker threads. The following lines correspond to the parallel tasks performed by these worker threads in this total parallel time, which in this case is 157.1 ms.

[GC Worker Start (ms): 522.1 522.2 522.2 522.2
Avg: 522.2, Min: 522.1, Max: 522.2, Diff: 0.1]

The first line tells us the start time of each of the worker threads in milliseconds. The start times are ordered by worker thread id: thread 0 started at 522.1 ms and thread 1 started at 522.2 ms from the start of the process. The second line gives the Avg, Min, Max and Diff of the start times of all of the worker threads.
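The summary line is a set of simple aggregates over the per-thread values, with Diff being Max minus Min. A quick sketch reproducing the summary for the worker start times above (values taken from the sample log):

```java
import java.util.Arrays;
import java.util.Locale;

public class WorkerStats {
    public static void main(String[] args) {
        double[] t = {522.1, 522.2, 522.2, 522.2}; // GC Worker Start times (ms)
        double min = Arrays.stream(t).min().getAsDouble();
        double max = Arrays.stream(t).max().getAsDouble();
        double avg = Arrays.stream(t).average().getAsDouble();
        // Diff = Max - Min: the spread between the first and last worker to
        // start, a quick indicator of imbalance among the GC worker threads.
        System.out.printf(Locale.ROOT, "Avg: %.1f, Min: %.1f, Max: %.1f, Diff: %.1f%n",
                avg, min, max, max - min);
    }
}
```

A large Diff on any of the parallel sub-tasks usually means one worker did noticeably more work than the others.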

[Ext Root Scanning (ms): 1.6 1.5 1.6 1.9
Avg: 1.7, Min: 1.5, Max: 1.9, Diff: 0.4]

This gives us the time spent by each worker thread scanning the roots (globals, registers, thread stacks and VM data structures). Here, thread 0 took 1.6ms to perform the root scanning task and thread 1 took 1.5 ms. The second line clearly shows the Avg, Min, Max and Diff of the times spent by all the worker threads.

[Update RS (ms): 38.7 38.8 50.6 37.3
Avg: 41.3, Min: 37.3, Max: 50.6, Diff: 13.3]

Update RS gives us the time each thread spent updating the Remembered Sets. Remembered Sets are the data structures that keep track of the references that point into a heap region. Mutator threads keep changing the object graph and thus the references that point into a particular region. We keep track of these changes in buffers called Update Buffers. The Update RS sub-task processes the update buffers that could not be processed concurrently, and updates the corresponding remembered sets of all regions.

[Processed Buffers : 2 2 3 2
Sum: 9, Avg: 2, Min: 2, Max: 3, Diff: 1]

This tells us the number of Update Buffers (mentioned above) processed by each worker thread.

[Scan RS (ms): 9.9 9.7 0.0 9.7
Avg: 7.3, Min: 0.0, Max: 9.9, Diff: 9.9]

These are the times each worker thread spent scanning the Remembered Sets. The Remembered Set of a region contains cards that correspond to the references pointing into that region. This phase scans those cards looking for the references pointing into all the regions of the collection set.

[Object Copy (ms): 106.7 106.8 104.6 107.9
Avg: 106.5, Min: 104.6, Max: 107.9, Diff: 3.3]

These are the times spent by each worker thread copying live objects from the regions in the Collection Set to the other regions.

[Termination (ms): 0.0 0.0 0.0 0.0
Avg: 0.0, Min: 0.0, Max: 0.0, Diff: 0.0]

Termination time is the time each worker thread spends offering to terminate. Before terminating, a thread checks the work queues of the other threads; if there are still object references in them, it tries to steal some, and if it succeeds in stealing a reference, it processes that reference and offers to terminate again.

[Termination Attempts : 1 4 4 6
Sum: 15, Avg: 3, Min: 1, Max: 6, Diff: 5]

This gives the number of times each thread has offered to terminate.
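The steal-then-offer-to-terminate loop can be sketched in plain Java. This is a deliberately simplified model (two workers, a fixed batch of tasks that spawn no new work, hypothetical queue sizes); G1's real termination protocol is considerably more careful:

```java
import java.util.concurrent.ConcurrentLinkedDeque;
import java.util.concurrent.atomic.AtomicInteger;

public class StealTerminationSketch {
    static final int WORKERS = 2;
    @SuppressWarnings("unchecked")
    static final ConcurrentLinkedDeque<Integer>[] queues =
            new ConcurrentLinkedDeque[WORKERS];
    static final AtomicInteger processed = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < WORKERS; i++) queues[i] = new ConcurrentLinkedDeque<>();
        for (int i = 0; i < 1000; i++) queues[0].add(i); // all work on worker 0
        Thread[] threads = new Thread[WORKERS];
        for (int i = 0; i < WORKERS; i++) {
            final int id = i;
            threads[i] = new Thread(() -> worker(id));
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        System.out.println(processed.get());
    }

    static void worker(int id) {
        while (true) {
            Integer task = queues[id].pollFirst();            // own queue first
            if (task == null)                                  // otherwise steal
                task = queues[(id + 1) % WORKERS].pollLast();  // from another tail
            if (task != null) { processed.incrementAndGet(); continue; }
            // "Offer to terminate": since tasks here never spawn new tasks,
            // it is safe to stop once every queue is observed to be empty.
            boolean allEmpty = true;
            for (ConcurrentLinkedDeque<Integer> q : queues)
                if (!q.isEmpty()) allEmpty = false;
            if (allEmpty) return;
        }
    }
}
```

Worker 1 starts with nothing and must steal everything, mirroring how an idle GC worker keeps itself busy until all queues drain and termination succeeds.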

[GC Worker End (ms): 679.1 679.1 679.1 679.1
Avg: 679.1, Min: 679.1, Max: 679.1, Diff: 0.1]

These are the times in milliseconds at which each worker thread stopped.

[GC Worker (ms): 156.9 157.0 156.9 156.9
Avg: 156.9, Min: 156.9, Max: 157.0, Diff: 0.1]

These are the total lifetimes of each worker thread.

[GC Worker Other (ms): 0.3 0.3 0.3 0.3
Avg: 0.3, Min: 0.3, Max: 0.3, Diff: 0.0]

These are the times that each worker thread spent performing other tasks, not accounted for above, within the total Parallel Time.

[Clear CT: 0.1 ms]

This is the time spent in clearing the Card Table. This task is performed in serial mode.

[Other: 1.5 ms]

Time spent in some other tasks, listed below. The following sub-tasks (which individually may be parallelized) are performed serially.

[Choose CSet: 0.0 ms]

Time spent in selecting the regions for the Collection Set.

[Ref Proc: 0.3 ms]

Total time spent in processing Reference objects.

[Ref Enq: 0.0 ms]

Time spent in enqueuing references to the ReferenceQueues.

[Free CSet: 0.3 ms]

Time spent in freeing the collection set data structure.

[Eden: 12M(12M)->0B(13M) Survivors: 0B->2048K Heap: 14M(64M)->9739K(64M)]

This line gives the details of the heap size changes caused by the Evacuation Pause. It shows that Eden had an occupancy of 12M and its capacity was also 12M before the collection. After the collection, its occupancy dropped to 0, since everything is evacuated/promoted out of Eden during a collection, and its target size grew to 13M. The new Eden capacity of 13M is not reserved at this point; this value is the target size of Eden. Regions are added to Eden on demand, and when the added regions reach the target size, the next collection is started.

Similarly, Survivors had an occupancy of 0 bytes, which grew to 2048K after the collection. The total heap occupancy and capacity were 14M and 64M respectively before the collection, and they became 9739K and 64M after the collection.

Apart from the evacuation pauses, G1 also performs concurrent-marking to build the live data information of regions.

1.416: [GC pause (young) (initial-mark), 0.62417980 secs]
......
2.042: [GC concurrent-root-region-scan-start]
2.067: [GC concurrent-root-region-scan-end, 0.0251507]
2.068: [GC concurrent-mark-start]
3.198: [GC concurrent-mark-reset-for-overflow]
4.053: [GC concurrent-mark-end, 1.9849672 sec]
4.055: [GC remark 4.055: [GC ref-proc, 0.0000254 secs], 0.0030184 secs]
 [Times: user=0.00 sys=0.00, real=0.00 secs]
4.088: [GC cleanup 117M->106M(138M), 0.0015198 secs]
 [Times: user=0.00 sys=0.00, real=0.00 secs]
4.090: [GC concurrent-cleanup-start]
4.091: [GC concurrent-cleanup-end, 0.0002721] 

The first phase of a marking cycle is Initial Marking where all the objects directly reachable from the roots are marked and this phase is piggy-backed on a fully young Evacuation Pause.

2.042: [GC concurrent-root-region-scan-start]

This marks the start of a concurrent phase that scans the set of root-regions which are directly reachable from the survivors of the initial marking phase.

2.067: [GC concurrent-root-region-scan-end, 0.0251507]

End of the concurrent root region scan phase and it lasted for 0.0251507 seconds.

2.068: [GC concurrent-mark-start]

Start of the concurrent marking at 2.068 secs from the start of the process.

3.198: [GC concurrent-mark-reset-for-overflow]

This indicates that the global marking stack had become full and overflowed. Concurrent marking detected the overflow and had to reset the data structures and start the marking again.

4.053: [GC concurrent-mark-end, 1.9849672 sec]

End of the concurrent marking phase and it lasted for 1.9849672 seconds.

4.055: [GC remark 4.055: [GC ref-proc, 0.0000254 secs], 0.0030184 secs]

This corresponds to the remark phase, which is a stop-the-world phase. It completes the leftover marking work (SATB buffer processing) from the previous phase. In this case, this phase took 0.0030184 secs, out of which 0.0000254 secs were spent on Reference processing.

4.088: [GC cleanup 117M->106M(138M), 0.0015198 secs]

Cleanup phase, which is again a stop-the-world phase. It goes through the marking information of all the regions, computes the live data information of each region, resets the marking data structures and sorts the regions according to their gc-efficiency. In this example, the total heap size is 138M, and after the live data counting the heap occupancy was found to have dropped from 117M to 106M.
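The sorting by gc-efficiency can be pictured with a toy model: regions with little live data are cheap to collect and reclaim a lot of space, so they come first. The Region class, the field names, and the cost formula below are all illustrative inventions, not HotSpot's actual code:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Toy model of the cleanup-phase sorting: regions that reclaim the most
// bytes per unit of (estimated) collection cost are preferred.
public class RegionSorter {
    static final int REGION_SIZE = 1024 * 1024; // 1 MB, as in the log above

    static class Region {
        final String name;
        final int liveBytes;
        Region(String name, int liveBytes) { this.name = name; this.liveBytes = liveBytes; }
        // Crude efficiency proxy: reclaimable bytes per unit of copying cost.
        double gcEfficiency() {
            int reclaimable = REGION_SIZE - liveBytes;
            double cost = 1.0 + liveBytes / 1024.0;  // made-up cost model
            return reclaimable / cost;
        }
    }

    // Returns region names in the order they would be picked for collection.
    static List<String> collectionOrder(List<Region> regions) {
        List<Region> sorted = new ArrayList<>(regions);
        sorted.sort(Comparator.comparingDouble(Region::gcEfficiency).reversed());
        List<String> names = new ArrayList<>();
        for (Region r : sorted) names.add(r.name);
        return names;
    }

    public static void main(String[] args) {
        List<Region> regions = List.of(
                new Region("mostly-garbage", 10 * 1024),
                new Region("half-live", 512 * 1024),
                new Region("mostly-live", 1000 * 1024));
        // Mostly-garbage regions sort first: cheap to collect, lots reclaimed.
        System.out.println(collectionOrder(regions));
    }
}
```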

4.090: [GC concurrent-cleanup-start]

This concurrent cleanup phase frees up the regions that were found to be empty (didn't contain any live data) during the previous stop-the-world phase.

4.091: [GC concurrent-cleanup-end, 0.0002721]

Concurrent cleanup phase took 0.0002721 secs to free up the empty regions.


Option -XX:G1PrintRegionLivenessInfo

Now, let's look at the output generated with the flag G1PrintRegionLivenessInfo. This is a diagnostic option, so it must first be unlocked with -XX:+UnlockDiagnosticVMOptions. G1PrintRegionLivenessInfo prints the live data information of each region during the Cleanup phase of the concurrent-marking cycle.

26.896: [GC cleanup
### PHASE Post-Marking @ 26.896
### HEAP committed: 0x02e00000-0x0fe00000 reserved: 0x02e00000-0x12e00000 region-size: 1048576

Cleanup phase of the concurrent-marking cycle started at 26.896 secs from the start of the process and this live data information is being printed after the marking phase. Committed G1 heap ranges from 0x02e00000 to 0x0fe00000 and the total G1 heap reserved by JVM is from 0x02e00000 to 0x12e00000. Each region in the G1 heap is of size 1048576 bytes.

### type address-range used prev-live next-live gc-eff
### (bytes) (bytes) (bytes) (bytes/ms)

This is the header of the output that tells us about the type of the region, address-range of the region, used space in the region, live bytes in the region with respect to the previous marking cycle, live bytes in the region with respect to the current marking cycle and the GC efficiency of that region.

### FREE 0x02e00000-0x02f00000 0 0 0 0.0

This is a Free region.

### OLD 0x02f00000-0x03000000 1048576 1038592 1038592 0.0

Old region with address-range from 0x02f00000 to 0x03000000. Total used space in the region is 1048576 bytes, live bytes as per the previous marking cycle are 1038592 and live bytes with respect to the current marking cycle are also 1038592. The GC efficiency has been computed as 0.

### EDEN 0x03400000-0x03500000 20992 20992 20992 0.0

This is an Eden region.

### HUMS 0x0ae00000-0x0af00000 1048576 1048576 1048576 0.0
### HUMC 0x0af00000-0x0b000000 1048576 1048576 1048576 0.0
### HUMC 0x0b000000-0x0b100000 1048576 1048576 1048576 0.0
### HUMC 0x0b100000-0x0b200000 1048576 1048576 1048576 0.0
### HUMC 0x0b200000-0x0b300000 1048576 1048576 1048576 0.0
### HUMC 0x0b300000-0x0b400000 1048576 1048576 1048576 0.0
### HUMC 0x0b400000-0x0b500000 1001480 1001480 1001480 0.0

This is a contiguous set of regions, called humongous regions, used for storing a single large object. HUMS (humongous starts) marks the first region of the set and HUMC (humongous continues) tags the subsequent regions.
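An object is treated as humongous when it is at least half a region in size. With the 1 MB regions shown above, the HUMS/HUMC sequence (six full regions plus one region used to 1001480 bytes) corresponds to one object of roughly 6 * 1048576 + 1001480 bytes. A small sketch of that arithmetic; the region size is hard-coded here only for illustration (in practice it depends on the heap size unless set with -XX:G1HeapRegionSize):

```java
// With 1 MB regions, an allocation of at least half a region (512 KB) is
// humongous and gets its own dedicated region(s).
public class HumongousDemo {
    static final int REGION_SIZE = 1024 * 1024;

    static boolean isHumongous(long bytes) {
        return bytes >= REGION_SIZE / 2;
    }

    // Number of regions a humongous allocation of 'bytes' occupies
    // (ceiling division).
    static int regionsNeeded(long bytes) {
        return (int) ((bytes + REGION_SIZE - 1) / REGION_SIZE);
    }

    public static void main(String[] args) {
        // Matches the HUMS + 6x HUMC regions in the log above.
        long objectBytes = 6L * REGION_SIZE + 1001480;
        System.out.println("humongous: " + isHumongous(objectBytes)
                + ", regions: " + regionsNeeded(objectBytes));
    }
}
```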

### SURV 0x09300000-0x09400000 16384 16384 16384 0.0

This is a Survivor region.

### SUMMARY capacity: 208.00 MB used: 150.16 MB / 72.19 % prev-live: 149.78 MB / 72.01 % next-live: 142.82 MB / 68.66 %

At the end, a summary is printed listing the capacity, the used space and the change in the liveness after the completion of concurrent marking. In this case, the G1 heap capacity is 208MB; the total used space is 150.16MB, which is 72.19% of the heap; live data as per the previous marking was 149.78MB (72.01% of the heap); and live data as per the current marking is 142.82MB (68.66% of the heap).
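The percentages in the summary are simply each quantity divided by the capacity. A quick check of the numbers above (assuming the log rounds to two decimal places):

```java
// Reproduces the percentages in the SUMMARY line from the values it reports.
public class SummaryCheck {
    // Percentage of 'mb' relative to 'capacityMb', rounded to two decimals.
    static double percentOf(double mb, double capacityMb) {
        return Math.round(mb / capacityMb * 10000.0) / 100.0;
    }

    public static void main(String[] args) {
        double capacity = 208.00;
        System.out.println("used:      " + percentOf(150.16, capacity) + " %"); // 72.19
        System.out.println("prev-live: " + percentOf(149.78, capacity) + " %"); // 72.01
        System.out.println("next-live: " + percentOf(142.82, capacity) + " %"); // 68.66
    }
}
```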


Option -XX:+G1PrintHeapRegions

The G1PrintHeapRegions option logs region-related events as regions are committed, allocated into, or reclaimed.

COMMIT/UNCOMMIT events

G1HR COMMIT [0x6e900000,0x6ea00000]
G1HR COMMIT [0x6ea00000,0x6eb00000]

Here, the heap is being initialized or expanded, and the regions shown (for example the one with bottom 0x6e900000 and end 0x6ea00000) are being freshly committed. COMMIT events are always generated in order, i.e. the next COMMIT event will always be for the uncommitted region with the lowest address.

G1HR UNCOMMIT [0x72700000,0x72800000]
G1HR UNCOMMIT [0x72600000,0x72700000]

Opposite of COMMIT. The heap shrank at the end of a Full GC and the regions are being uncommitted. Like COMMIT, UNCOMMIT events are also generated in order, i.e. the next UNCOMMIT event will always be for the committed region with the highest address.

GC Cycle events

G1HR #StartGC 7
G1HR CSET 0x6e900000
G1HR REUSE 0x70500000
G1HR ALLOC(Old) 0x6f800000
G1HR RETIRE 0x6f800000 0x6f821b20
G1HR #EndGC 7

These events mark the start and end of an evacuation pause. The #StartGC and #EndGC tags carry a GC counter that tracks both evacuation pauses and Full GCs; here, this is the 7th GC since the start of the process.

G1HR #StartFullGC 17
G1HR UNCOMMIT [0x6ed00000,0x6ee00000]
G1HR POST-COMPACTION(Old) 0x6e800000 0x6e854f58
G1HR #EndFullGC 17

These mark the start and end of a Full GC, with the same GC counter as above. This is the 17th GC since the start of the process.

ALLOC events

G1HR ALLOC(Eden) 0x6e800000

The region with bottom 0x6e800000 just started being used for allocation. In this case it is an Eden region, allocated into by a mutator thread.

G1HR ALLOC(StartsH) 0x6ec00000 0x6ed00000
G1HR ALLOC(ContinuesH) 0x6ed00000 0x6e000000

Regions being used for the allocation of a humongous object. The object spans two regions.

G1HR ALLOC(SingleH) 0x6f900000 0x6f9eb010

A single region being used for the allocation of a humongous object that fits within one region.

G1HR COMMIT [0x6ee00000,0x6ef00000]
G1HR COMMIT [0x6ef00000,0x6f000000]
G1HR COMMIT [0x6f000000,0x6f100000]
G1HR COMMIT [0x6f100000,0x6f200000]
G1HR ALLOC(StartsH) 0x6ee00000 0x6ef00000
G1HR ALLOC(ContinuesH) 0x6ef00000 0x6f000000
G1HR ALLOC(ContinuesH) 0x6f000000 0x6f100000
G1HR ALLOC(ContinuesH) 0x6f100000 0x6f102010

Here, a humongous object allocation request could not be satisfied by the free committed regions that existed in the heap, so the heap needed to be expanded. New regions are committed and then allocated into for the humongous object.

G1HR ALLOC(Old) 0x6f800000

An Old region started being used for allocation during a GC.

G1HR ALLOC(Survivor) 0x6fa00000

A Survivor region being used for copying surviving objects into during a GC.

Note that Eden and Humongous ALLOC events are generated outside the GC boundaries and Old and Survivor ALLOC events are generated inside the GC boundaries.

Other Events

G1HR RETIRE 0x6e800000 0x6e87bd98

Retire and stop using the region with bottom 0x6e800000 and top 0x6e87bd98 for allocation.

Note that most regions are full when they are retired, and we omit those events to reduce the output volume. A region is retired when another region of the same type is allocated, or when we reach the start or end of a GC (depending on the region type). For example, for Eden regions:

1. ALLOC(Eden) Foo
2. ALLOC(Eden) Bar
3. StartGC

At point 2, Foo has just been retired and it was full. At point 3, Bar was retired and it was full. If they had not been full when they were retired, we would see explicit RETIRE events:

1. ALLOC(Eden) Foo
2. RETIRE Foo top
3. ALLOC(Eden) Bar
4. StartGC

G1HR CSET 0x6e900000

Region (bottom: 0x6e900000) is selected for the Collection Set. The region might have been selected for the collection set earlier (i.e. when it was allocated). However, we generate the CSET events for all regions in the CSet at the start of a GC to make sure there's no confusion about which regions are part of the CSet.

G1HR POST-COMPACTION(Old) 0x6e800000 0x6e839858

A POST-COMPACTION event is generated for each non-empty region in the heap after a full compaction. A full compaction moves objects around, so we don't know what the resulting shape of the heap is (which regions were written to, which were emptied, etc.). To deal with this, we generate a POST-COMPACTION event for each non-empty region with its type (Old/Humongous) and its bottom and top boundaries. At this point we should only have Old and Humongous regions: the young generation has been collapsed, so there should be no Eden or Survivor regions.

POST-COMPACTION events are generated within the Full GC boundary.

G1HR CLEANUP 0x6f400000
G1HR CLEANUP 0x6f300000
G1HR CLEANUP 0x6f200000

These regions were found empty after the remark phase of concurrent marking and are reclaimed shortly afterwards.

G1HR #StartGC 5
G1HR CSET 0x6f400000
G1HR CSET 0x6e900000
G1HR REUSE 0x6f800000

At the end of a GC we retire the Old region we are allocating into. Since it is not full, we will carry on allocating into it during the next GC; this is what REUSE means. In the above case, 0x6f800000 should have been the last region with an ALLOC(Old) event during the previous GC and should have been retired before the end of that GC.

G1HR ALLOC-FORCE(Eden) 0x6f800000

A specialization of ALLOC which indicates that we have reached the maximum desired number of regions of the particular type (in this case, Eden) but decided to allocate one more anyway. Currently it is only used for Eden regions, when we extend the young generation because we cannot do a GC while the GC-Locker is active.

G1HR EVAC-FAILURE 0x6f800000

During a GC, we have failed to evacuate an object from the given region as the heap is full and there is no space left to copy the object. This event is generated within GC boundaries and exactly once for each region from which we failed to evacuate objects.


When are Heap Regions reclaimed?

It is also worth mentioning when the heap regions in the G1 heap are reclaimed.

  • All regions that are in the CSet (the ones that appear in CSET events) are reclaimed at the end of a GC. The exceptions are regions with EVAC-FAILURE events.

  • All regions with CLEANUP events are reclaimed.

  • After a Full GC, some regions get reclaimed (the ones from which the objects were moved out). This is not shown explicitly; instead, the non-empty regions left in the heap are printed with POST-COMPACTION events.

Saturday Apr 07, 2012

Object Query Language Help

We can use a query language called Object Query Language (OQL) with JHat and the Serviceability Agent to query information from the Java heap. Help on this language is available with the JHat tool. I have put this handy help document on OQL here for quick reference.




JRockit hang fix

The fix for the JRockit hang I talked about in my previous blog will be available in R28.2.4.

Friday Feb 03, 2012

Diagnosis of a JRockit Deadlock

Recently I came across an interesting JRockit JVM deadlock. I am sharing the details of that deadlock, its diagnosis, and a workaround, which might be useful until the fix is available in a future JRockit release.

This deadlock occurred between an application Java thread and the JVM's Code Generation thread. The call traces for both the threads looked like this:

"[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'" id=15 idx=0x54 tid=27249 prio=5 alive, native_waiting, daemon
at <unknown>(???.c)@0xffffe410
at eventTimedWaitNoTransitionImpl+79(event.c:90)@0xf7c910a0
at syncWaitForSignalNoTransition+81(synchronization.c:28)@0xf7e0d8a2
at innerNativeDoWait+894(nativelock.c:614)@0xf7d9e8cf
at nativeWait+71(nativelock.c:721)@0xf7d9ec28
at cbrCompileInCodeGenThread+451(compilerqueue.c:339)@0xf7c71b94
at dispatch_compile_request+78(compilerbroker.c:511)@0xf7c64eaf
at cbrGetRunnableCodeInfo+154(compilerbroker.c:580)@0xf7c6516b
at stubsCallJava+77(stubcall.c:112)@0xf7e061ce
at stubsCallJavaV+344(stubcall.c:276)@0xf7e065c9
at javaInvokeStaticVoidMethod+39(javacalls.c:178)@0xf7cd6098
at clsEnsureInitialized+485(class.c:256)@0xf7c4c446
at check_flags_and_clinit+28(compilerbroker.c:302)@0xf7c641fd
at cbrGetRunnableCodeInfo+37(compilerbroker.c:564)@0xf7c650f6
at stubsCallJava+77(stubcall.c:112)@0xf7e061ce
at javaInvokeMethod+280(javacalls.c:1128)@0xf7cd62a9

"(Code Generation Thread 1)" id=4 idx=0x30 tid=27235 prio=5 alive, native_waiting, daemon
at <unknown>(???.c)@0xffffe410
at eventTimedWaitNoTransitionImpl+79(event.c:90)@0xf7c910a0
at syncWaitForSignalNoTransition+81(synchronization.c:28)@0xf7e0d8a2
at innerNativeDoWait+894(nativelock.c:614)@0xf7d9e8cf
at nativeWait+71(nativelock.c:721)@0xf7d9ec28
at clsEnsureInitialized+334(class.c:219)@0xf7c4c3af
at check_flags_and_clinit+28(compilerbroker.c:302)@0xf7c641fd
at compileit+273(compilerbroker.c:317)@0xf7c64992
at cbrCompileRequest+16(compilerbroker.c:537)@0xf7c651b1
at cg_thread+876(compilerqueue.c:223)@0xf7c7168d
at thread_stub+318(lifecycle.c:808)@0xf7d5205f
at start_thread+225(:0)@0x64a832
at __clone+93(:0)@0x5cee0e

The above stack traces and the examination of the core-dump file revealed that the Java thread (tid=27249) was initializing a class and was in the process of invoking the 'clinit' method of that class. Method 'clinit' was not compiled, so the thread invoked dispatch_compile_request() to compile it, which sent the compilation request to the Code Generation thread. The Code Generation thread accepted that request and got stuck waiting for the initialization of the class of 'clinit' to complete. Now, dispatch_compile_request() checks whether the thread has enough (>64K) stack space available for code generation. If yes, it compiles the method on the same thread; otherwise it passes the compilation request to the Code Generation (Compiler) thread.

if (!enough_stack) {
    ...
    res = cbrCompileInCodeGenThread(req, TRUE);
} else {
    res = compileit(req, req->method);
}

In this case, the stack space available to the Java thread (tid=27249) was around 63K, and that was the cause of the problem. Here's what was happening:

(i) While compiling a method, the Code Generation thread wants the class of that method to be initialized; if it is not, it waits for the class to get initialized.
(ii) A Java thread that is in the middle of initializing a class requests the Code Generation thread to compile a method of that class, because it does not have enough stack space to do the compilation itself, and waits for the Code Generation thread to finish the compilation.

The above scenario causes a deadlock.

A simple way to avoid this situation is to increase the stack size of the application's Java threads, so that there is always enough space on the stack for the compilation to be done on the same thread, and the need to pass compilation requests to the Code Generation thread does not arise, at least for methods of uninitialized classes.
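The stack size can be raised JVM-wide with the -Xss flag (e.g. java -Xss512k ...), or per thread via the Thread constructor that accepts a stack size. A sketch of the latter; the 512 KB figure is only an example, and the stackSize argument is a hint that some platforms may ignore:

```java
// Creating a thread with an explicit stack size request, so deep work on
// this thread is less likely to run out of stack space.
public class BigStackThread {
    // Run 'task' on a thread created with the given stack size hint;
    // returns true once the worker has completed.
    static boolean runWithStack(Runnable task, long stackBytes) throws InterruptedException {
        Thread t = new Thread(null, task, "worker-with-big-stack", stackBytes);
        t.start();
        t.join();              // wait for the worker to finish
        return !t.isAlive();
    }

    public static void main(String[] args) throws InterruptedException {
        boolean done = runWithStack(
                () -> System.out.println("working on a roomy stack"),
                512 * 1024);   // request 512 KB of stack
        System.out.println("worker finished: " + done);
    }
}
```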

The fix for this bug is in progress and will be available in a future JRockit patchset release.

About

poonam
