Sharing Troubleshooting and Debugging Experiences with Java and the JVM

Understanding CMS GC Logs

Poonam Parhar
Consulting Member of Technical Staff
CMS GC with -XX:+PrintGCDetails and -XX:+PrintGCTimeStamps prints a lot of information. Understanding this information can help in fine-tuning various parameters of the application and CMS to achieve the best performance.
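For reference, a launch command along these lines would produce such logs (the class name and log file are placeholders; -Xloggc redirects the GC log to a file instead of stdout):

```shell
# Illustrative command line; MyApp and gc.log are placeholders.
java -XX:+UseConcMarkSweepGC \
     -XX:+PrintGCDetails \
     -XX:+PrintGCTimeStamps \
     -Xloggc:gc.log \
     MyApp
```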
Let's have a look at some of the CMS logs generated with 1.4.2_10:
39.910: [GC 39.910: [ParNew: 261760K->0K(261952K), 0.2314667 secs] 262017K->26386K(1048384K), 0.2318679 secs]
A young generation (ParNew) collection. The young generation capacity is 261952K, and after the collection its occupancy drops from 261760K to 0K. This collection took 0.2318679 secs.
40.146: [GC [1 CMS-initial-mark: 26386K(786432K)] 26404K(1048384K), 0.0074495 secs]
The beginning of a tenured generation collection with the CMS collector. This is the initial marking phase of CMS, where all the objects directly reachable from the roots are marked; this is done with all the mutator threads stopped.
The capacity of the tenured generation space is 786432K, and CMS was triggered at an occupancy of 26386K.
40.154: [CMS-concurrent-mark-start]
Start of the concurrent marking phase.
In the concurrent marking phase, the threads stopped in the first phase are started again, and all the objects transitively reachable from the objects marked in the first phase are marked here.
40.683: [CMS-concurrent-mark: 0.521/0.529 secs]
Concurrent marking took 0.521 seconds of CPU time and 0.529 seconds of wall time, which also includes the time yielded to other threads.
40.683: [CMS-concurrent-preclean-start]
Start of precleaning.
Precleaning is also a concurrent phase. In this phase, we look at the objects in the CMS heap that got updated by promotions from the young generation or by new allocations, or that were updated by mutators while we were doing the concurrent marking in the previous phase. By rescanning those objects concurrently, the precleaning phase helps reduce the work in the next stop-the-world “remark” phase.
40.701: [CMS-concurrent-preclean: 0.017/0.018 secs]
Concurrent precleaning took 0.017 secs of CPU time and 0.018 secs of wall time.
40.704: [GC40.704: [Rescan (parallel) , 0.1790103 secs]40.883: [weak refs processing, 0.0100966 secs] [1 CMS-remark: 26386K(786432K)] 52644K(1048384K), 0.1897792 secs]
A stop-the-world phase. This phase rescans any residual updated objects in the CMS heap, retraces from the roots, and also processes Reference objects. Here the rescanning work took 0.1790103 secs and the processing of weak reference objects took 0.0100966 secs. This phase took 0.1897792 secs in total to complete.
40.894: [CMS-concurrent-sweep-start]
Start of the sweeping of dead/unmarked objects. Sweeping is a concurrent phase, performed with all the other threads running.
41.020: [CMS-concurrent-sweep: 0.126/0.126 secs]
Sweeping took 0.126 secs.
41.020: [CMS-concurrent-reset-start]
Start of reset.
41.147: [CMS-concurrent-reset: 0.127/0.127 secs]
In this phase, the CMS data structures are reinitialized so that a new cycle may begin at a later time. In this case, it took 0.127 secs.
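As an aside, the timestamps and durations in the concurrent-phase lines above follow a regular pattern and can be extracted mechanically. Here is a minimal sketch (the regular expression covers only the simple `[CMS-<phase>: cpu/wall secs]` form shown above, and the class name is illustrative):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class CmsPhaseParser {
    // Matches the simple phase lines, e.g. "40.683: [CMS-concurrent-mark: 0.521/0.529 secs]"
    private static final Pattern PHASE = Pattern.compile(
            "(\\d+\\.\\d+): \\[CMS-([\\w-]+): (\\d+\\.\\d+)/(\\d+\\.\\d+) secs\\]");

    /** Returns {timestamp, phase, cpuSecs, wallSecs}, or null if the line does not match. */
    public static String[] parse(String line) {
        Matcher m = PHASE.matcher(line);
        if (!m.matches()) return null;
        return new String[] { m.group(1), m.group(2), m.group(3), m.group(4) };
    }

    public static void main(String[] args) {
        String[] f = parse("40.683: [CMS-concurrent-mark: 0.521/0.529 secs]");
        System.out.println("phase=" + f[1] + " cpu=" + f[2] + "s wall=" + f[3] + "s");
    }
}
```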
That is how a normal CMS cycle runs. Now let us look at some other CMS log entries:
197.976: [GC 197.976: [ParNew: 260872K->260872K(261952K), 0.0000688 secs]197.976: [CMS197.981: [CMS-concurrent-sweep: 0.516/0.531 secs]
(concurrent mode failure): 402978K->248977K(786432K), 2.3728734 secs] 663850K->248977K(1048384K), 2.3733725 secs]

This shows that a ParNew collection was requested but not attempted, because it was estimated that there was not enough space in the CMS generation to promote the worst-case set of surviving young generation objects. This failure is called a “full promotion guarantee failure”.
Because of this, the concurrent mode of CMS is interrupted and a Full GC is invoked at 197.981. This stop-the-world mark-sweep-compact Full GC took 2.3733725 secs, and the occupancy of the CMS generation space dropped from 402978K to 248977K.
Concurrent mode failure can be avoided either by increasing the tenured generation size or by initiating the CMS collection at a lower heap occupancy, which is done by setting CMSInitiatingOccupancyFraction to a lower value and setting UseCMSInitiatingOccupancyOnly to true. The value of CMSInitiatingOccupancyFraction should be chosen carefully, because setting it too low will result in too-frequent CMS collections.
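For example, a command line of this shape starts CMS cycles at 60% old generation occupancy (the value 60 is purely illustrative; the right value depends on the application's allocation and promotion rates):

```shell
# Illustrative tuning; MyApp is a placeholder class name.
java -XX:+UseConcMarkSweepGC \
     -XX:CMSInitiatingOccupancyFraction=60 \
     -XX:+UseCMSInitiatingOccupancyOnly \
     MyApp
```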
Sometimes we see these promotion failures even when the logs show that there is enough free space in the tenured generation. The reason is 'fragmentation': the free space available in the tenured generation is not contiguous, and promotions from the young generation require a contiguous free block to be available in the tenured generation. The CMS collector is a non-compacting collector, so it can cause fragmentation of space for some types of applications. In his blog, Jon talks in detail about how to deal with this fragmentation problem:
Starting with 1.5, the promotion guarantee check for the CMS collector is done differently. Instead of assuming the worst case, i.e. that all of the surviving young generation objects would get promoted into the old generation, the expected promotion is estimated based on the recent history of promotions. This estimate is usually much smaller than the worst-case promotion and hence requires less free space to be available in the old generation. If the promotion attempted during a scavenge fails, the young generation is left in a consistent state and a stop-the-world mark-compact collection is invoked. To get the same functionality with UseSerialGC, you need to explicitly specify the switch -XX:+HandlePromotionFailure.
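The difference between the worst-case check and the history-based estimate can be sketched roughly as follows (a simplification with illustrative names and numbers, not the actual HotSpot code):

```java
public class PromotionGuarantee {
    // Pre-1.5 style worst-case check: assume every live young gen byte gets promoted.
    public static boolean worstCaseCheck(long youngUsedBytes, long oldFreeBytes) {
        return oldFreeBytes >= youngUsedBytes;
    }

    // 1.5-style check: require only the average of recently promoted bytes.
    public static boolean averageCheck(long[] recentPromotedBytes, long oldFreeBytes) {
        long sum = 0;
        for (long p : recentPromotedBytes) sum += p;
        return oldFreeBytes >= sum / recentPromotedBytes.length;
    }

    public static void main(String[] args) {
        long youngUsed = 256L << 20;   // 256 MB live in young gen (illustrative)
        long oldFree   = 100L << 20;   // 100 MB free in old gen (illustrative)
        long[] history = { 10L << 20, 12L << 20, 8L << 20 };  // recent promotions
        // Worst case refuses the scavenge; the history-based estimate allows it.
        System.out.println(worstCaseCheck(youngUsed, oldFree));  // false
        System.out.println(averageCheck(history, oldFree));      // true
    }
}
```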
283.736: [Full GC 283.736: [ParNew: 261599K->261599K(261952K), 0.0000615 secs] 826554K->826554K(1048384K), 0.0003259 secs]
GC locker: Trying a full collection because scavenge failed
283.736: [Full GC 283.736: [ParNew: 261599K->261599K(261952K), 0.0000288 secs]

This is a stop-the-world GC happening when a JNI critical section is released. Here again the young generation collection failed due to a “full promotion guarantee failure”, and then the Full GC was invoked.
CMS can also be run in incremental mode (i-cms), enabled with -XX:+CMSIncrementalMode. In this mode, the CMS collector does not hold the processor for the entire duration of the long concurrent phases; instead, it periodically stops them and yields the processor back to the other threads in the application. It divides the work to be done in the concurrent phases into small chunks (called duty cycles) and schedules them between minor collections. This is very useful for applications that need low pause times and run on machines with a small number of processors.
Here are some logs showing incremental CMS:
2803.125: [GC 2803.125: [ParNew: 408832K->0K(409216K), 0.5371950 secs] 611130K->206985K(1048192K) icms_dc=4 , 0.5373720 secs]
2824.209: [GC 2824.209: [ParNew: 408832K->0K(409216K), 0.6755540 secs] 615806K->211897K(1048192K) icms_dc=4 , 0.6757740 secs]

Here, the scavenges took 537 ms and 675 ms respectively. In between these two scavenges, iCMS ran for a brief period, as indicated by the icms_dc value, which reports the duty cycle; in this case it was 4%. A simple calculation shows that the iCMS incremental step lasted for 4/100 * (2824.209 - 2803.125 - 0.537) ≈ 822 ms, i.e. 4% of the time between the two scavenges.
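That calculation can be written out directly (the timestamps are taken from the two log lines above; the class and method names are illustrative):

```java
public class IcmsDutyCycle {
    /**
     * Milliseconds the iCMS incremental step ran between two scavenges:
     * dutyCycle% of the interval from the end of the first scavenge
     * to the start of the second.
     */
    public static double icmsMillis(double prevScavengeStart, double prevScavengeSecs,
                                    double nextScavengeStart, int dutyCyclePercent) {
        double interval = nextScavengeStart - prevScavengeStart - prevScavengeSecs;
        return dutyCyclePercent / 100.0 * interval * 1000.0;
    }

    public static void main(String[] args) {
        // icms_dc=4 from the log; scavenges at 2803.125 (0.537 s) and 2824.209.
        double ms = icmsMillis(2803.125, 0.537, 2824.209, 4);
        System.out.printf("iCMS ran for ~%.0f ms%n", ms);  // ~822 ms
    }
}
```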
Starting with 1.5, CMS has one more phase – concurrent abortable preclean. Abortable preclean is run between a 'concurrent preclean' and 'remark' until we have the desired occupancy in eden. This phase is added to help schedule the 'remark' phase so as to avoid back-to-back pauses for a scavenge closely followed by a CMS remark pause. In order to maximally separate a scavenge from a CMS remark pause, we attempt to schedule the CMS remark pause roughly mid-way between scavenges.
There is a second reason why we do this. Immediately following a scavenge there are likely a large number of grey objects that need rescanning. The abortable preclean phase tries to deal with such newly grey objects thus reducing a subsequent CMS remark pause.
The scheduling of the 'remark' phase can be controlled by two JVM options, CMSScheduleRemarkEdenSizeThreshold and CMSScheduleRemarkEdenPenetration. Their defaults are 2m and 50% respectively. The first parameter determines the Eden size below which no attempt is made to schedule the CMS remark pause, because the payoff is expected to be minuscule. The second parameter indicates the Eden occupancy at which a CMS remark is attempted.
After the 'concurrent preclean', if the Eden occupancy is above CMSScheduleRemarkEdenSizeThreshold, we start 'concurrent abortable preclean' and continue precleaning until Eden reaches CMSScheduleRemarkEdenPenetration percent occupancy; otherwise, we schedule the 'remark' phase immediately.
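This scheduling decision can be sketched roughly as follows (a simplification with illustrative names, not the actual HotSpot code; the constants match the defaults of CMSScheduleRemarkEdenSizeThreshold and CMSScheduleRemarkEdenPenetration):

```java
public class RemarkScheduling {
    static final long EDEN_SIZE_THRESHOLD = 2L * 1024 * 1024; // default 2m
    static final int  EDEN_PENETRATION    = 50;               // default 50%

    /** True if Eden is large enough that abortable preclean should run first. */
    public static boolean shouldRunAbortablePreclean(long edenUsedBytes) {
        return edenUsedBytes > EDEN_SIZE_THRESHOLD;
    }

    /** While abortable preclean runs, stop and schedule remark once Eden is 50% full. */
    public static boolean shouldScheduleRemark(long edenUsedBytes, long edenCapacityBytes) {
        return edenUsedBytes * 100 >= edenCapacityBytes * EDEN_PENETRATION;
    }

    public static void main(String[] args) {
        long edenCapacity = 1044544L * 1024; // 1044544K, as in the log below
        long edenUsed     = 522484L * 1024;  // 522484K, occupancy at which remark fired
        System.out.println(shouldRunAbortablePreclean(edenUsed));        // true
        System.out.println(shouldScheduleRemark(edenUsed, edenCapacity)); // true
    }
}
```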
7688.150: [CMS-concurrent-preclean-start]
7688.186: [CMS-concurrent-preclean: 0.034/0.035 secs]
7688.186: [CMS-concurrent-abortable-preclean-start]
7688.465: [GC 7688.465: [ParNew: 1040940K->1464K(1044544K), 0.0165840 secs] 1343593K->304365K(2093120K), 0.0167509 secs]
7690.093: [CMS-concurrent-abortable-preclean: 1.012/1.907 secs]
7690.095: [GC[YG occupancy: 522484 K (1044544 K)]7690.095: [Rescan (parallel) , 0.3665541 secs]7690.462: [weak refs processing, 0.0003850 secs] [1 CMS-remark: 302901K(1048576K)] 825385K(2093120K), 0.3670690 secs]

In the above log, 'abortable preclean' starts after the preclean. After the young generation collection, the young generation occupancy drops from 1040940K to 1464K. When the young generation occupancy reaches 522484K, which is 50% of its total capacity, precleaning is aborted and the 'remark' phase is started.
Note that in 1.5, the young generation occupancy also gets printed in the final remark phase.
For more detailed information and tips on GC tuning, please refer to the following documents:


Comments (25)
  • Aviran Harom Wednesday, November 19, 2008

Is there any tool that reads CMS GC output? (similar to HPJtune or HPJmeter)

  • Poonam Bajaj Wednesday, November 19, 2008

    Try GCHisto tool here:


  • Frank Brüseke Wednesday, December 3, 2008

    I'm using incremental CMS. According to your blog entry I can compute the CPU time a duty cycle took by applying your formula.

As far as I have understood, the duty cycles take over the work that is normally performed by the "concurrent mark" phase. So, does the "ordinary line" saying how long the "concurrent-mark" phase took deliver any valid information when using incremental mode?

    By "ordinary line" I'm referring to:

    40.683: [CMS-concurrent-mark: 0.521/0.529 secs]

  • guest Tuesday, March 27, 2012

    Hi Poonam

Thanks for this information, it's very useful.

    In the traces

    [CMS-concurrent-mark: 0.080/0.080 secs] [Times: user=0.45 sys=0.02, real=0.08 secs]

If the CPU and real time is 80 milliseconds, why is the user time 450 milliseconds? What are user, sys, and real time?


  • guest Tuesday, April 3, 2012


    I have a GC log which looks like:

    513037.621: [Full GC 513037.621: [CMS2012-03-20T06:02:52.976+0000: 513037.831: [CMS-concurrent-sweep: 0.627/1.042 secs] [Times: user=1.24 sys=0.02, real=1.04 secs]

    (concurrent mode failure): 2137896K->229746K(5072512K), 1.5890340 secs] 2275787K->229746K(5225856K), [CMS Perm : 128762K->127494K(220340K)] icms_dc=30 , 1.5892430 secs] [Times: user=1.57 sys=0.00, real=1.59 secs]

    My questions are:

    1. What could be the reason for concurrent mode failure in this case?

    2. What generation does 2137896K->229746K(5072512K) mean? Young Gen or Old Gen?

    3. Does 2275787K->229746K(5225856K) mean Heap Size Before GC, After GC and JVM Heap size?

    Any help will be appreciated.

  • Poonam Bajaj Thursday, April 5, 2012

    Received the following question which I am trying to answer here:



    I have a GC log which looks like:

    513037.621: [Full GC 513037.621: [CMS2012-03-20T06:02:52.976+0000: 513037.831: [CMS-concurrent-sweep: 0.627/1.042 secs] [Times: user=1.24 sys=0.02, real=1.04 secs]

    (concurrent mode failure): 2137896K->229746K(5072512K), 1.5890340 secs] 2275787K->229746K(5225856K), [CMS Perm : 128762K->127494K(220340K)] icms_dc=30 , 1.5892430 secs] [Times: user=1.57 sys=0.00, real=1.59 secs]

    My questions are:

    1. What could be the reason for concurrent mode failure in this case?

    2. What generation does 2137896K->229746K(5072512K) mean? Young Gen or Old Gen?

    3. Does 2275787K->229746K(5225856K) mean Heap Size Before GC, After GC and JVM Heap size?


    The logs indicate that there was a Full GC invoked at time stamp 513037.621 which interrupted the CMS cycle leading to 'concurrent mode failure'.

    2137896K->229746K(5072512K) tells us the heap occupancy of the tenured generation. 2137896K is the occupancy before collection, 229746K is the occupancy after collection and 5072512K is the CMS generation capacity.

    Yes, 2275787K->229746K(5225856K) means the total heap occupancy before GC, after GC and the total heap size.

  • guest Sunday, April 14, 2013


Could you please explain the Times field in the CMS output? i.e. user:, sys:, and real:


  • guest Monday, April 29, 2013

    Broken link

    In his blog, Jon talks in detail -


  • P Shaw Thursday, May 30, 2013
  • HP Wednesday, August 28, 2013


    my gc log as below:

    2013-08-27T16:59:53.952-0500: 201564.214: [GC 201564.214: [ParNew202182.528: [SoftReference, 0 refs, 0.0000110 secs]202182.528: [WeakReference, 1015 refs, 0.0001720 secs]202182.528: [FinalReference, 1625 refs, 0.0025210 secs]202182.531: [PhantomReference, 1 refs, 0.0000050 secs]202182.531: [JNI Weak Reference, 0.0000100 secs] (promotion failed): 3774912K->3774912K(3774912K), 619.3859950 secs]202183.600: [CMS CMS: abort preclean due to time 2013-08-27T17:10:14.937-0500: 202185.199: [CMS-concurrent-abortable-preclean: 7.271/626.671 secs] [Times: user=699.20 sys=98.95, real=626.55 secs]

    (concurrent mode failure)202189.179: [SoftReference, 614 refs, 0.0000710 secs]202189.179: [WeakReference, 5743 refs, 0.0007600 secs]202189.180: [FinalReference, 3380 refs, 0.0004430 secs]202189.180: [PhantomReference, 212 refs, 0.0000260 secs]202189.180: [JNI Weak Reference, 0.0000180 secs]: 8328471K->4813636K(8388608K), 18.5940310 secs] 11323577K->4813636K(12163520K), [CMS Perm : 208169K->208091K(346920K)], 637.9802050 secs] [Times: user=705.47 sys=98.79, real=637.85 secs]

    Total time for which application threads were stopped: 637.9820120 seconds

    Could you explain the log for me? I encountered the application unavailable quite often. Thanks

  • Poonam Bajaj Wednesday, August 28, 2013

    The log shows that there is a promotion failure:

    (promotion failed): 3774912K->3774912K(3774912K)

which means that the attempt to promote objects from the young generation to the tenured generation failed, and that happened because the old generation was completely full.

    (concurrent mode failure)202189.179: [SoftReference, 614 refs, 0.0000710 secs]202189.179: [WeakReference, 5743 refs, 0.0007600 secs]202189.180: [FinalReference, 3380 refs, 0.0004430 secs]202189.180: [PhantomReference, 212 refs, 0.0000260 secs]202189.180: [JNI Weak Reference, 0.0000180 secs]: 8328471K->4813636K(8388608K)

    The promotion failure caused the 'concurrent mode failure' which invokes the Full GC that collects the heap in stop-the-world mode.

    Starting the CMS collection cycle earlier so that CMS can keep up with the allocation rate in the old gen would help. This can be done by setting the option CMSInitiatingOccupancyFraction to a lower value.

  • HP Wednesday, August 28, 2013

    Hi Sir,

    Could you advise the value for the CMSInitiatingOccupancyFraction ?

And here is my JVM setting:
    I have 24 CPU Intel(R) Xeon(R) CPU X5650 @ 2.67GHz in my machine

    RAM: 24675112 kB

    JDK version: java version "1.6.0_32"

  • HP Wednesday, August 28, 2013

Can I know the default value for CMSInitiatingOccupancyFraction in java "1.6.0_32"? Thanks

  • Poonam Bajaj Wednesday, August 28, 2013

    By default the CMS collection cycle starts at 80% of the heap occupancy. You can start with setting CMSInitiatingOccupancyFraction=50 and then decrease it further if CMS is still not able to keep up with the allocation rate.



  • guest Tuesday, November 19, 2013

    Hi,I have GC log like this:

    250169.767: [GC 250169.767: [ParNew (promotion failed): 571584K->571584K(571584K), 0.6487910 secs]250170.416: [CMS250173.050: [CMS-concurrent-mark: 2.887/3.777 secs] [Times: user=10.86 sys=0.56, real=3.78 secs]

    (concurrent mode failure): 2268975K->2111899K(2516992K), 8.3732150 secs] 2766660K->2111899K(3088576K), [CMS Perm : 562899K->562896K(1048576K)], 9.0223120 secs] [Times: user=9.78 sys=0.28, real=9.02 secs]

    My question is how long is stop-the-world time? Is it 9.02 secs or 9.02-3.78=5.24 secs?


  • guest Wednesday, November 20, 2013

    The pause time for the Full GC that got invoked due to the concurrent-mode-failure is 9.02 secs. 3.78 secs is the time taken by the CMS-concurrent-mark phase which is a concurrent phase and is performed concurrently with the application threads running.



  • guest Monday, March 2, 2015


    I had a long pause in my Java application.

    I really hope you can guide me what can be the root cause -

    1. What is the meanning of GC printout below ?

    2. What can cause the huge freeze of 135 seconds?

    3. Does my JVM settings should be tuned ?

    2015-02-02T19:23:26.071+0900: [GC [ParNew: 1685037K->262144K(1835008K), 0.6401530 secs] 3392135K->2072928K(3932160K), 0.6409210 secs] [Times: user=2.41 sys=0.00, real=0.64 secs]

    2015-02-02T19:23:26.995+0900: [CMS-concurrent-abortable-preclean: 0.726/1.513 secs] [Times: user=6.35 sys=0.05, real=1.51 secs]

    2015-02-02T19:23:27.000+0900: [GC[YG occupancy: 1487917 K (1835008 K)][Rescan (parallel) , 0.1505430 secs][weak refs processing, 0.0019300 secs] [1 CMS-remark: 1810784K(2097152K)] 3298701K(3932160K), 0.1528590 secs] [Times: user=0.60 sys=0.00, real=0.15 secs]

    2015-02-02T19:23:27.263+0900: [GC [ParNew-*-*- stop_dispatcher.sh Mon Feb 2 19:25:13 JST 2015 -*-*-\n

    (promotion failed): 1835008K->1769242K(1835008K), 133.0546410 secs][CMS2015-02-02T19:25:43.162+0900: [CMS-concurrent-sweep: 2.919/136.008 secs] [Times: user=157.28 sys=24.75, real=135.99 secs]

Here are my JVM settings -

  • Poonam Monday, March 2, 2015

    The log entry of interest is:

    2015-02-02T19:23:27.263+0900: [GC [ParNew-*-*- stop_dispatcher.sh Mon Feb 2 19:25:13 JST 2015 -*-*-\n

    (promotion failed): 1835008K->1769242K(1835008K), 133.0546410 secs][CMS2015-02-02T19:25:43.162+0900: [CMS-concurrent-sweep: 2.919/136.008 secs] [Times: user=157.28 sys=24.75, real=135.99 secs]

This shows that a promotion failure occurred during this ParNew collection, i.e. the promotion of the surviving objects from the young generation to the tenured generation failed because it could not find enough space in the tenured generation to promote the objects. This interrupted the concurrent mode of the CMS GC, and a Full GC was invoked. Here, the ParNew collection is reported to have taken around 133 secs, but I would guess that it is the Full GC that took this much time (though even for a Full GC it is a very long time).

    You can try to avoid the promotion failures and thereby these Full GCs by either increasing the size of the tenured generation or by initiating the CMS cycle early enough to keep up with the allocation rate by setting UseCMSInitiatingOccupancyOnly to true and CMSInitiatingOccupancyFraction to a lower value.



  • Mahesh Monday, May 11, 2015

    Hello Poonam,

    I see this in my gc logs:

    2015-05-11T05:34:14.041-0400: 19812.665: [Full GC (System) 19812.665: [CMS CMS: abort preclean due to time 2015-05-11T05:34:14.042-0400: 19812.665: [CMS-concurrent-abortable-preclean: 0.258/221.664 secs] [Times: user=4.13 sys=1.59, real=221.63 secs]

The JVM parameters are:
    Specifically, what does "concurrent-abortable-preclean" refer to in this case?



  • Ashwini KM Tuesday, October 6, 2015


We have one customer who reported 100% CPU usage. We asked them to give us a thread dump and 'top -H' results.

As I go through the thread dump, I find the following; please help me understand it. What might be the root cause?


    "Concurrent Mark-Sweep GC Thread" prio=10 tid=0x0000000045cce000 nid=0x10d1 runnable

    "Gang worker#0 (Parallel CMS Threads)" prio=10 tid=0x0000000045cc6000 nid=0x10cd runnable

    "Gang worker#1 (Parallel CMS Threads)" prio=10 tid=0x0000000045cc8000 nid=0x10ce runnable

    "Gang worker#2 (Parallel CMS Threads)" prio=10 tid=0x0000000045cc9800 nid=0x10cf runnable

    "Gang worker#3 (Parallel CMS Threads)" prio=10 tid=0x0000000045ccb800 nid=0x10d0 runnable

    "VM Periodic Task Thread" prio=10 tid=0x00002b8c48012000 nid=0x10da waiting on condition

    JNI global references: 1124


    par new generation total 436928K, used 95707K [0x00000006b0000000, 0x00000006d0000000, 0x00000006d0000000)

    eden space 349568K, 22% used [0x00000006b0000000, 0x00000006b4b9f6b8, 0x00000006c5560000)

    from space 87360K, 20% used [0x00000006c5560000, 0x00000006c6737900, 0x00000006caab0000)

    to space 87360K, 0% used [0x00000006caab0000, 0x00000006caab0000, 0x00000006d0000000)

    concurrent mark-sweep generation total 4718592K, used 3590078K [0x00000006d0000000, 0x00000007f0000000, 0x00000007f0000000)

    concurrent-mark-sweep perm gen total 262144K, used 217303K [0x00000007f0000000, 0x0000000800000000, 0x0000000800000000)

    2015-09-24T14:14:06.256+0200: 4505737.412: [CMS-concurrent-mark: 4.632/4.671 secs] [Times: user=17.95 sys=0.30, real=4.67 secs]

    2015-09-24T14:14:06.256+0200: 4505737.412: [CMS-concurrent-preclean-start]

    2015-09-24T14:14:07.135+0200: 4505738.291: [CMS-concurrent-preclean: 0.879/0.879 secs] [Times: user=0.91 sys=0.00, real=0.88 secs]

    2015-09-24T14:14:07.135+0200: 4505738.291: [CMS-concurrent-abortable-preclean-start]

    CMS: abort preclean due to time 2015-09-24T14:14:12.443+0200: 4505743.600: [CMS-concurrent-abortable-preclean: 4.120/5.309 secs] [Times: user=4.40 sys=0.06, real=5.31 secs]

    2015-09-24T14:14:12.448+0200: 4505743.604: [GC[YG occupancy: 256739 K (436928 K)]4505743.604: [Rescan (parallel) , 0.2066950 secs]4505743.811: [weak refs processing, 0.0000150 secs] [1 CMS-remark: 3590078K(4718592K)] 3846818K(5155520K), 0.2068540 secs] [Times: user=2.17 sys=0.00, real=0.21 secs]

    2015-09-24T14:14:12.656+0200: 4505743.812: [CMS-concurrent-sweep-start]

    2015-09-24T14:14:15.924+0200: 4505747.080: [CMS-concurrent-sweep: 3.257/3.268 secs] [Times: user=3.40 sys=0.04, real=3.27 secs]


JVM parameters:

  • Poonam Monday, October 12, 2015

    Hello Ashwini,

I suggest collecting the output of 'prstat -L' and seeing which thread is consuming the most CPU cycles. Then you can look at the stack trace of that thread, which can be collected using 'pstack'.

    Hope this helps!



  • guest Tuesday, September 27, 2016

    Can you suggest on below one:

    1819643.511: [CMS-concurrent-preclean: 36.718/44.593 secs] [Times: user=27.75 sys=6.65, real=44.58 secs]

    1819643.512: [CMS-concurrent-abortable-preclean-start]

    1819644.056: [CMS-concurrent-abortable-preclean: 0.543/0.544 secs] [Times: user=0.72 sys=0.05, real=0.55 secs]

    Application time: 6.4744330 seconds

    1819644.086: [GC[YG occupancy: 747997 K (943744 K)]1819644.087: [Rescan (parallel) , 0.2755160 secs]1819644.363: [weak refs processing, 113.7004240 secs] [1 CMS-remark: 2512363K(5095424K)] 3260360K(6039168K), 113.9797320 secs] [Times: user=3.13 sys=1.07, real=113.96 secs]

    Total time for which application threads were stopped: 114.0110910 seconds

    1819758.068: [CMS-concurrent-sweep-start]

    Application time: 13.5514190 seconds

  • Poonam Tuesday, October 11, 2016

    Did you try enabling Parallel Reference processing with -XX:+ParallelRefProcEnabled option? This would definitely help in reducing the reference processing times.



  • guest Monday, February 6, 2017

    Dear Poonam, looking for your suggestion please?

    2017-02-06T02:03:20.113-0800: 4.573: [GC2017-02-06T02:03:20.113-0800: 4.574: [ParNew: 559232K->69582K(629120K), 0.0599530 secs] 559232K->69582K(6221568K), 0.0601250 secs] [Times: user=0.23 sys=0.19, real=0.06 secs]

    2017-02-06T02:03:30.756-0800: 15.216: [GC [1 CMS-initial-mark: 0K(5592448K)] 323035K(6221568K), 0.1478760 secs] [Times: user=0.22 sys=0.00, real=0.15 secs]

    2017-02-06T02:03:30.904-0800: 15.364: [CMS-concurrent-mark-start]

    2017-02-06T02:03:30.925-0800: 15.386: [CMS-concurrent-mark: 0.016/0.021 secs] [Times: user=0.11 sys=0.01, real=0.02 secs]

    2017-02-06T02:03:30.925-0800: 15.386: [CMS-concurrent-preclean-start]

    2017-02-06T02:03:30.937-0800: 15.397: [CMS-concurrent-preclean: 0.011/0.011 secs] [Times: user=0.06 sys=0.00, real=0.01 secs]

    2017-02-06T02:03:30.937-0800: 15.397: [CMS-concurrent-abortable-preclean-start]

    2017-02-06T02:04:21.490-0800: 65.950: [GC2017-02-06T02:04:21.490-0800: 65.950: [ParNew: 628814K->42366K(629120K), 0.2279900 secs] 628814K->82713K(6221568K), 0.2281370 secs] [Times: user=0.64 sys=0.04, real=0.23 secs]

    CMS: abort preclean due to time 2017-02-06T02:04:22.892-0800: 67.352: [CMS-concurrent-abortable-preclean: 1.061/51.955 secs] [Times: user=16.35 sys=0.40, real=51.96 secs]

    2017-02-06T02:04:22.892-0800: 67.352: [GC[YG occupancy: 305684 K (629120 K)]2017-02-06T02:04:22.892-0800: 67.352: [Rescan (parallel) , 0.2439590 secs]2017-02-06T02:04:23.136-0800: 67.596: [weak refs processing, 0.0000300 secs]2017-02-06T02:04:23.136-0800: 67.596: [scrub string table, 0.0006370 secs] [1 CMS-remark: 40346K(5592448K)] 346030K(6221568K), 0.2447620 secs] [Times: user=1.92 sys=0.01, real=0.24 secs]

    2017-02-06T02:04:23.137-0800: 67.597: [CMS-concurrent-sweep-start]

    2017-02-06T02:04:23.150-0800: 67.610: [CMS-concurrent-sweep: 0.013/0.013 secs] [Times: user=0.04 sys=0.00, real=0.02 secs]

    2017-02-06T02:04:23.150-0800: 67.610: [CMS-concurrent-reset-start]

    2017-02-06T02:04:23.180-0800: 67.640: [CMS-concurrent-reset: 0.030/0.030 secs] [Times: user=0.07 sys=0.02, real=0.03 secs]

    2017-02-06T02:04:24.301-0800: 68.761: [GC2017-02-06T02:04:24.301-0800: 68.761: [ParNew: 601598K->17889K(629120K), 0.0150810 secs] 641945K->58235K(6221568K) icms_dc=5 , 0.0152070 secs] [Times: user=0.11 sys=0.00, real=0.01 secs]

    2017-02-06T02:05:24.749-0800: 129.210: [GC [1 CMS-initial-mark: 40346K(5592448K)] 326235K(6221568K), 0.1834180 secs] [Times: user=0.18 sys=0.00, real=0.18 secs]

    2017-02-06T02:05:24.933-0800: 129.393: [CMS-concurrent-mark-start]

    2017-02-06T02:05:24.975-0800: 129.436: [CMS-concurrent-mark: 0.040/0.042 secs] [Times: user=0.14 sys=0.00, real=0.04 secs]

    2017-02-06T02:05:24.976-0800: 129.436: [CMS-concurrent-preclean-start]

    2017-02-06T02:05:24.985-0800: 129.445: [CMS-concurrent-preclean: 0.009/0.009 secs] [Times: user=0.02 sys=0.00, real=0.01 secs]

    2017-02-06T02:05:24.985-0800: 129.445: [CMS-concurrent-abortable-preclean-start]

    2017-02-06T02:05:25.569-0800: 130.029: [GC2017-02-06T02:05:25.569-0800: 130.029: [ParNew: 577121K->17400K(629120K), 0.0165040 secs] 617467K->57746K(6221568K) icms_dc=5 , 0.0166400 secs] [Times: user=0.12 sys=0.00, real=0.01 secs]

    2017-02-06T02:06:26.697-0800: 191.157: [GC2017-02-06T02:06:26.697-0800: 191.157: [ParNew: 576632K->19444K(629120K), 0.0148200 secs] 616978K->59791K(6221568K) icms_dc=5 , 0.0149790 secs] [Times: user=0.10 sys=0.00, real=0.02 secs]

    CMS: abort preclean due to time 2017-02-06T02:06:27.125-0800: 191.585: [CMS-concurrent-abortable-preclean: 0.076/62.140 secs] [Times: user=3.18 sys=0.11, real=62.14 secs]

    2017-02-06T02:06:27.126-0800: 191.586: [GC[YG occupancy: 292760 K (629120 K)]2017-02-06T02:06:27.126-0800: 191.586: [Rescan (parallel) , 0.2623390 secs]2017-02-06T02:06:27.388-0800: 191.848: [weak refs processing, 0.0016400 secs]2017-02-06T02:06:27.390-0800: 191.850: [scrub string table, 0.0006790 secs] [1 CMS-remark: 40346K(5592448K)] 333107K(6221568K), 0.2647960 secs] [Times: user=2.04 sys=0.01, real=0.27 secs]

    2017-02-06T02:06:27.391-0800: 191.851: [CMS-concurrent-sweep-start]

    2017-02-06T02:06:27.402-0800: 191.862: [CMS-concurrent-sweep: 0.011/0.011 secs] [Times: user=0.02 sys=0.00, real=0.01 secs]

    2017-02-06T02:06:27.402-0800: 191.862: [CMS-concurrent-reset-start]

    2017-02-06T02:06:27.419-0800: 191.880: [CMS-concurrent-reset: 0.017/0.017 secs] [Times: user=0.04 sys=0.00, real=0.02 secs]

    2017-02-06T02:06:27.843-0800: 192.304: [GC2017-02-06T02:06:27.844-0800: 192.304: [ParNew: 578676K->20307K(629120K), 0.0130150 secs] 618700K->60331K(6221568K) icms_dc=0 , 0.0131610 secs] [Times: user=0.09 sys=0.00, real=0.01 secs]

  • Parvez Sunday, April 9, 2017

We are having high response times at the OSB layer, possibly due to heavy XQuery processing. Do you see any problem with our JVM settings / GC logs (below)?

We don't see any Full GC, but AppDynamics shows heavy collection in the old generation. Could you please suggest?

JVM settings: