
Poonam Bajaj's Blog

  • March 19, 2014

About G1 Garbage Collector, Permanent Generation and Metaspace

Poonam Parhar
Consulting Member of Technical Staff


We received some questions about the G1 garbage collector
and its use of the Permanent Generation. There seems to be some confusion
about whether the HotSpot JVM uses a permanent generation when G1 is the garbage
collector. Here's some clarification:

JDK 7: PermGen

The permanent generation still exists in JDK 7 and its updates,
and it is used by all of the garbage collectors. The effort to remove the
permanent generation started in JDK 7, and some of the data that resided in it
was moved to either the Java heap or the native heap; the permanent generation
itself was not completely removed. Here's the list of things that were moved
out of the permanent generation in JDK 7:

  • Symbols were moved to the native heap

  • Interned strings were moved to the Java Heap

  • Class statics were moved to the Java Heap

JDK7: G1 and PermGen

With the G1 collector, PermGen is collected only at a Full GC,
which is a stop-the-world (STW) collection. If G1 is running optimally,
it does not do Full GCs at all; it falls back to a Full GC only when PermGen
is full or when the application allocates faster than G1 can concurrently
collect garbage.

With the CMS garbage collector, we can use the option
-XX:+CMSClassUnloadingEnabled to collect PermGen space as part of the CMS
collection cycle. There is no equivalent option for G1; G1 collects PermGen only during the Full stop-the-world GCs.
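
For illustration, a CMS command line with class unloading enabled might look like the following (the jar name is just a placeholder):

  java -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -jar application.jar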

We can use the options -XX:PermSize and -XX:MaxPermSize to tune the
PermGen size according to the application's needs.
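
For example, a JDK 7 command line running G1 with an explicitly sized PermGen might look like this (the 128m/256m values are purely illustrative, not a recommendation):

  java -XX:+UseG1GC -XX:PermSize=128m -XX:MaxPermSize=256m -jar application.jar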

JDK8: PermGen

The permanent generation has been completely removed in JDK 8.
This work was done under the bug
https://bugs.openjdk.java.net/browse/JDK-6964458. The options PermSize and
MaxPermSize have also been removed in JDK 8.

Email to openjdk alias regarding the PermGen elimination
project: http://mail.openjdk.java.net/pipermail/hotspot-dev/2012-September/006679.html

JDK8: Metaspace

In JDK 8, class metadata is stored in the native heap,
and this space is called Metaspace. Some new flags have been added for
Metaspace in JDK 8:

  • -XX:MetaspaceSize=<NNN>
    where <NNN> is the initial amount of space (the initial
    high-water-mark), in bytes, allocated for class metadata that may induce a
    garbage collection to unload classes. The amount is approximate. After the
    high-water-mark is first reached, the next high-water-mark is managed by
    the garbage collector.
  • -XX:MaxMetaspaceSize=<NNN>
    where <NNN> is the maximum amount of space to be allocated for class
    metadata (in bytes). This flag can be used to limit the amount of space
    allocated for class metadata. This value is approximate. By default there
    is no limit set.
  • -XX:MinMetaspaceFreeRatio=<NNN>
    where <NNN> is the minimum percentage of class metadata capacity
    free after a GC to avoid an increase in the amount of space
    (high-water-mark) allocated for class metadata that will induce a garbage
    collection.
  • -XX:MaxMetaspaceFreeRatio=<NNN>
    where <NNN> is the maximum percentage of class metadata capacity
    free after a GC to avoid a reduction in the amount of space
    (high-water-mark) allocated for class metadata that will induce a garbage
    collection.

By default, class metadata allocation is limited only by the amount of
available native memory. We can use the new option MaxMetaspaceSize to limit
the amount of native memory used for class metadata; it is analogous to
MaxPermSize. A garbage collection is induced to collect dead class loaders and
classes when class metadata usage reaches MetaspaceSize (12 MB on the 32-bit
client VM and 16 MB on the 32-bit server VM, with larger sizes on the 64-bit
VMs). Set MetaspaceSize to a higher value to delay these induced garbage
collections. After an induced garbage collection, the class metadata usage
needed to induce the next garbage collection may be increased.
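
As a rough sketch, a JDK 8 command line that raises the initial high-water-mark, caps the metaspace, and logs GC activity might look like this (the sizes are illustrative and should be derived from the application's actual class-metadata footprint):

  java -XX:MetaspaceSize=256m -XX:MaxMetaspaceSize=512m -verbose:gc -XX:+PrintGCDetails -Xloggc:gc.log -jar application.jar

Note that MetaspaceSize only sets the first collection threshold; if MaxMetaspaceSize is set below the application's actual footprint, the VM will repeatedly run "Metadata GC Threshold" collections and eventually throw java.lang.OutOfMemoryError: Metaspace, as several of the comments below illustrate.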


Comments
  • guest Tuesday, March 25, 2014

    Hi,

    Is the PermGen collection in CMS really concurrent when the -XX:+CMSClassUnloadingEnabled option is present? I thought it was piggybacking on the remark phase, which is stop-the-world.


  • guest Tuesday, April 1, 2014

    Having the G1 collector not perform the equivalent of CMS's perm gen sweeping and class unloading is bad for many RMI servers and app servers.

    Many people in the comms space want to use G1 in place of CMS as CMS causes stop-the-world collections due to heap fragmentation, whereas G1 is a compacting collector (so theoretically should not have the same frequency of stop the world collections).

    RMI servers often take connections from RMI clients, and the Java RMI runtime (server side) allocates classloaders per connection. One of the major benefits of concurrent, low-pause collectors is a major reduction in long-pause (several-second) GCs. Yet that's not the case for RMI servers ... which are very common.

    App servers often have Application Deploy + Undeploy cycles whilst the JVM process is live. These cause classes and class loaders to be eligible for garbage collection.

    It's great that there is sufficient transparency here that this information is being disclosed, but this policy for treatment of the permanent generation appears to be an oversight in the G1 collector and may hinder its adoption.


  • Poonam Wednesday, April 2, 2014

    With CMS, when -XX:+CMSClassUnloadingEnabled is used, the classes are unloaded as part of the CMS cycle, though not concurrently: they are unloaded in the remark phase of the CMS cycle, which is a stop-the-world phase.


  • Pawel Wednesday, April 2, 2014

    What would be the recommended setting for MaxPermSize for use with the G1 GC? Would it still be similar to what we used with ParallelGC, or should this setting be different? How do we determine what it should be set to?


  • Poonam Thursday, April 3, 2014

    Pawel,

    You should not see much difference in the PermGen space requirements when you migrate from ParallelGC to the G1 collector. To begin with, I would recommend keeping the same setting for MaxPermSize and analyzing the G1 GC logs. If you see Full GCs happening because PermGen is getting full, then increase the value set for MaxPermSize.


  • Kannan Friday, April 4, 2014

    Good article Poonam. Two questions:

    1. What is it that still resides in Permanent Generation in JDK 7? Is it the class metadata?

    2. Is this applicable to all the various GCs in JDK 7, or only to the G1 GC?

    Thanks,

    Kannan


  • guest Saturday, April 5, 2014

    Kannan,

    Yes, the class metadata resides in the PermGen in JDK 7, and this is true for all the collectors.

    Thanks,

    Poonam


  • guest Tuesday, April 15, 2014

    I recently tried switching to G1 GC and ran into a situation where Tomcat appeared to crash due to a PermGen error on a webapp redeploy. The G1 GC logs indicate several Full GCs, which is consistent with how it has been described, but there is no indication that I was running out of PermGen:

    2014-04-15T16:35:32.015+0000: 364584.229: Total time for which application threads were stopped: 0.1979430 seconds

    2014-04-15T16:35:32.865+0000: 364585.079: Application time: 0.8498930 seconds

    2014-04-15T16:35:32.865+0000: 364585.079: [Full GC 546M->282M(1280M), 2.3108810 secs]

    [Eden: 43.0M(56.0M)->0.0B(94.0M) Survivors: 8192.0K->0.0B Heap: 546.5M(1280.0M)->282.7M(1280.0M)]

    [Times: user=2.38 sys=0.00, real=2.31 secs]

    2014-04-15T16:35:35.176+0000: 364587.390: Total time for which application threads were stopped: 2.3114740 seconds

    2014-04-15T16:35:35.177+0000: 364587.390: Application time: 0.0001960 seconds

    2014-04-15T16:35:35.177+0000: 364587.391: [Full GC 282M->282M(1280M), 1.8853960 secs]

    [Eden: 0.0B(94.0M)->0.0B(94.0M) Survivors: 0.0B->0.0B Heap: 282.7M(1280.0M)->282.6M(1280.0M)]

    [Times: user=1.92 sys=0.00, real=1.89 secs]

    2014-04-15T16:35:37.062+0000: 364589.276: Total time for which application threads were stopped: 1.8858660 seconds

    2014-04-15T16:35:37.074+0000: 364589.288: Application time: 0.0117270 seconds

    2014-04-15T16:35:37.075+0000: 364589.288: [Full GC 282M->277M(1280M), 2.0369900 secs]

    [Eden: 1024.0K(94.0M)->0.0B(94.0M) Survivors: 0.0B->0.0B Heap: 282.9M(1280.0M)->277.4M(1280.0M)]

    [Times: user=2.07 sys=0.00, real=2.03 secs]


  • Poonam Wednesday, April 16, 2014

    Yes, there is a bug in G1 where it does not print the PermGen size-change details in the Full GC logs in JDK 7. JDK-8037275 has been filed for this, and we will be fixing it soon.

    Thanks,

    Poonam


  • Poonam Tuesday, May 6, 2014

    The fix for JDK-8037275 has been integrated into jdk7u-dev and will be available in 7u80.

    Thanks,

    Poonam


  • guest Friday, May 22, 2015

    Hi,

    Would like to know if there are any guidelines for setting MetaspaceSize and MaxMetaspaceSize?

    From the documentation available, there is no handy information on this topic.


  • Alexis Friday, February 12, 2016

    Hi there,

    I've recently been testing the metaspace with an application that uses Derby DB, loads a number of classes, and then unloads them. In JDK 7 the PermGen is able to unload the classes when performing GCs.

    In JDK 8u60 to u74 I've been testing the behaviour with different MaxMetaspaceSize and MetaspaceSize settings. The behaviour I'm seeing is that the JVM becomes unresponsive (like a zombie), doing a lot of Full GCs due to the metaspace threshold. The following settings were used:

    -XX:MaxMetaspaceSize=256m

    Initial and Max heap = 4096

    The behaviour can be seen in the logs like:

    2016/02/12 16:09:16 | [Full GC (Metadata GC Threshold) [GC concurrent-root-region-scan-start]

    2016/02/12 16:13:07 | [Full GC (Metadata GC Threshold) [GC concurrent-root-region-scan-end, 0.0004524 secs]

    2016/02/12 16:16:55 | [Full GC (Metadata GC Threshold) [GC concurrent-root-region-scan-start]

    2016/02/12 16:18:21 | [Full GC (Metadata GC Threshold) [GC concurrent-root-region-scan-start]

    2016/02/12 16:18:22 | [Full GC (Metadata GC Threshold) 148M->148M(4096M), 0.3452918 secs]

    2016/02/12 16:18:53 | [Full GC (Metadata GC Threshold) [GC concurrent-root-region-scan-start]

    2016/02/12 16:18:54 | [Full GC (Metadata GC Threshold) [GC concurrent-root-region-scan-start]

    2016/02/12 16:18:54 | [Full GC (Metadata GC Threshold) [GC concurrent-root-region-scan-start]

    2016/02/12 16:18:55 | [Full GC (Metadata GC Threshold) [GC concurrent-root-region-scan-start]

    2016/02/12 16:18:56 | [Full GC (Metadata GC Threshold) [GC concurrent-root-region-scan-start]

    2016/02/12 16:18:57 | [Full GC (Metadata GC Threshold) [GC concurrent-root-region-scan-start]

    2016/02/12 16:18:57 | [Full GC (Metadata GC Threshold) [GC concurrent-root-region-scan-start]

    2016/02/12 16:18:58 | [Full GC (Metadata GC Threshold) [GC concurrent-root-region-scan-start]

    2016/02/12 16:18:59 | [Full GC (Metadata GC Threshold) [GC concurrent-root-region-scan-start]

    2016/02/12 16:18:59 | [Full GC (Metadata GC Threshold) [GC concurrent-root-region-scan-start]

    2016/02/12 16:19:00 | [Full GC (Metadata GC Threshold) [GC concurrent-root-region-scan-end, 0.0000925 secs]

    2016/02/12 16:19:01 | [Full GC (Metadata GC Threshold) [GC concurrent-root-region-scan-end, 0.0001064 secs]

    2016/02/12 16:19:02 | [Full GC (Metadata GC Threshold) [GC concurrent-root-region-scan-start]

    2016/02/12 16:19:02 | [Full GC (Metadata GC Threshold) [GC concurrent-root-region-scan-start]

    2016/02/12 16:19:03 | [Full GC (Metadata GC Threshold) 138M->138M(4096M), 0.3650801 secs]

    2016/02/12 16:19:04 | [Full GC (Metadata GC Threshold) [GC concurrent-root-region-scan-start]

    [Full GC (Metadata GC Threshold) 138M->138M(4096M), 0.3275574 secs]

    [Full GC (Metadata GC Threshold) [GC concurrent-root-region-scan-start]

    [Full GC (Metadata GC Threshold) 139M->139M(4096M), 0.3663767 secs]

    [Full GC (Metadata GC Threshold) [GC concurrent-root-region-scan-end, 0.0001882 secs]

    [Full GC (Metadata GC Threshold) 138M->138M(4096M), 0.3513902 secs]

    [Full GC (Metadata GC Threshold) [GC concurrent-root-region-scan-start]

    [Full GC (Metadata GC Threshold) 138M->138M(4096M), 0.3450775 secs]

    [Full GC (Metadata GC Threshold) [GC concurrent-root-region-scan-start]

    [Full GC (Metadata GC Threshold) [GC concurrent-root-region-scan-start]

    [Full GC (Metadata GC Threshold) [GC concurrent-root-region-scan-start]

    [Full GC (Metadata GC Threshold) [GC concurrent-root-region-scan-start]

    [Full GC (Metadata GC Threshold) [GC concurrent-root-region-scan-end, 0.0001070 secs]

    ...

    [Full GC (Metadata GC Threshold) [GC concurrent-root-region-scan-start]

    [GC concurrent-root-region-scan-end, 0.0000440 secs]

    [GC concurrent-mark-start]

    146M->146M(4096M), 0.3809647 secs]

    Shutdown failed: Timed out waiting for the JVM to terminate.

    Dumping JVM state.

    Heap

    garbage-first heap total 4194304K, used 149826K [0x00000006c0000000, 0x00000006c0204000, 0x00000007c0000000)

    region size 2048K, 1 young (2048K), 0 survivors (0K)

    Metaspace used 125296K, capacity 139898K, committed 262144K, reserved 1189888K

    class space used 14994K, capacity 21204K, committed 121216K, reserved 1048576K

    If I use an initial high-water-mark, for example MetaspaceSize=128m, I see the same behaviour.

    Therefore, the question is: is this a potential bug, since the metaspace used has not reached the maximum set, yet the JVM becomes unresponsive?

    Thank you and cheers,

    /V


  • Poonam Wednesday, February 17, 2016

    Hello,

    The value of MetaspaceSize sets the initial high-water-mark at which the metaspace induces garbage collections. Its default value is platform dependent and ranges from 12 MB to 20 MB. As the application starts and loads classes into the metaspace, the metaspace quickly reaches the high-water-mark, inducing GCs and further raising this threshold. Until the application reaches a stable state and all of its startup classes are loaded, the metaspace keeps growing, invoking GCs with every step of its growth.

    These garbage collections occurring during the application startup can be avoided by setting a higher value for MetaspaceSize. Maybe you can try setting it to 256M and see if it avoids these initial GCs.

    Thanks,

    Poonam


  • Alexis Sunday, March 13, 2016

    Hi Poonam,

    Thanks for your response. Yes, I've tested this behaviour with 1 GB of Metaspace (along with other high-water-mark settings) and I still got the same error/behaviour; finally we set it to unlimited and it works fine. But I was curious how this works, because the same piece of code has been tested with JDK 1.7.x: I see a slight impact on performance, but I don't see the JVM become unresponsive. In any case, I've been testing this further and I believe it is expected (by design), because when I lowered the size of the PermGen on JDK 1.7.x it showed the same unresponsive behaviour, but it is quicker to report an OOM: PermGen, whereas JDK 1.8.x becomes unresponsive and I had to wait some hours to see the OOM: Metaspace error.

    So maybe there are some differences between JDK 1.7 and 1.8 in how the error is reported, and maybe there is an opportunity to improve this situation so anyone experiencing the error can be aware of it promptly. Is it a good idea to open an RFE to be checked at Oracle's end?

    Cheers,

    /V


  • Jeff K Monday, May 2, 2016

    We've also been chasing metaspace growth since moving to JRE 1.8...

    JAVA_OPTS="-server -Djdk.tls.allowUnsafeServerCertChange=true -Dinstance=0 -Denv=prod-tm -Xms3g -Xmx3g -Xmn1g -XX:MetaspaceSize=256m -Xss256k -XX:+UseParallelGC -XX:+UseParallelOldGC -XX:ParallelGCThreads=30 -XX:+AggressiveOpts -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintHeapAtGC -XX:+PrintGCApplicationConcurrentTime -XX:+PrintGCApplicationStoppedTime -XX:+PrintTenuringDistribution -Xloggc:../logs/gc.log"

    GC.LOG contents after running 1 week:

    CommandLine flags: -XX:+AggressiveOpts -XX:InitialHeapSize=3221225472 -XX:+ManagementServer -XX:MaxHeapSize=3221225472 -XX:MaxNewSize=1073741824 -XX:MetaspaceSize=268435456 -XX:NewSize=1073741824 -XX:ParallelGCThreads=30 -XX:+PrintGC -XX:+PrintGCApplicationConcurrentTime -XX:+PrintGCApplicationStoppedTime -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintHeapAtGC -XX:+PrintTenuringDistribution -XX:ThreadStackSize=256 -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseParallelGC -XX:+UseParallelOldGC

    Metaspace used 29874K, capacity 30512K, committed 30848K, reserved 1077248K

    Metaspace used 29874K, capacity 30512K, committed 30848K, reserved 1077248K

    Metaspace used 41300K, capacity 42752K, committed 42880K, reserved 1087488K

    Metaspace used 41300K, capacity 42752K, committed 42880K, reserved 1087488K







    Metaspace used 2074102K, capacity 2076740K, committed 2077740K, reserved 2351104K

    Metaspace used 2088217K, capacity 2090884K, committed 2091820K, reserved 2359296K

    Metaspace used 2088217K, capacity 2090884K, committed 2091820K, reserved 2359296K

    Metaspace used 2102453K, capacity 2105060K, committed 2105900K, reserved 2369536K

    Metaspace used 2102453K, capacity 2105060K, committed 2105900K, reserved 2369536K

    JConsole shows ~2.1 GB of Metaspace usage, but we thought it should only reach about 0.25 GB if it honors the 256m in the JAVA_OPTS at startup. How can we find and/or fix this metaspace growth/leak?


  • guest Monday, May 2, 2016

    Hi, I recently switched to JDK 8 and the metaspace keeps increasing and never goes down and eventually the system becomes unresponsive. How do I troubleshoot this issue? It works fine in JDK 7. How can I inspect what is there in the metaspace?


  • guest Tuesday, May 17, 2016

    we got "java.lang.OutOfMemoryError: Metaspace" error with following settings. What's best suggestion

    -XX:MetaspaceSize=512m

    -XX:MaxMetaspaceSize=512m

    -XX:+UseConcMarkSweepGC

    -XX:+CMSParallelRemarkEnabled

    -XX:+UseCMSInitiatingOccupancyOnly

    -XX:+ScavengeBeforeFullGC

    -XX:+CMSScavengeBeforeRemark

    -XX:+PrintGCDetails


  • Manuraj Wednesday, May 18, 2016

    I have the following JVM settings, but the application is running out of memory in metaspace. How do I resolve this issue? It seems like class loading is causing this.

    -XX:MetaspaceSize=512m

    -XX:MaxMetaspaceSize=512m

    -XX:+UseConcMarkSweepGC

    -XX:+CMSParallelRemarkEnabled

    -XX:+UseCMSInitiatingOccupancyOnly

    -XX:+ScavengeBeforeFullGC

    -XX:+CMSScavengeBeforeRemark

    -XX:+PrintGCDetails


  • guest Monday, May 23, 2016

    Hi All,

    We were upgrading one of our applications to JDK 1.8, setting the metaspace to 512 MB. The application loads a huge number of classes and then unloads them. In JDK 7 the PermGen is able to unload the classes during Full GC.

    In JDK 8u77, I've been testing the behaviour with different MaxMetaspaceSize and MetaspaceSize settings. The behaviour I'm noticing is that the JVM becomes unresponsive (like a zombie), doing a lot of Full GCs due to the metaspace threshold. The following settings were used:

    JAVA_OPTS="-server -Xms5120m -Xmx6144m -XX:+UseG1GC -XX:+PrintGCDetails -XX:+UnlockCommercialFeatures -XX:+FlightRecorder -Dsun.security.ssl.allowUnsafeRenegotiation=true"

    JAVA_OPTS="$JAVA_OPTS -XX:MetaspaceSize=512M -XX:MaxMetaspaceSize=512m"

    The behavior can be seen in the logs like:

    7569.373: [Full GC (Metadata GC Threshold) 7569.373: [GC concurrent-root-region-scan-start]

    7569.373: [GC concurrent-root-region-scan-end, 0.0003442 secs]

    7569.373: [GC concurrent-mark-start]

    246M->246M(5120M), 1.9860338 secs]

    [Eden: 0.0B(3192.0M)->0.0B(3072.0M) Survivors: 2048.0K->0.0B Heap: 246.7M(5326.0M)->246.1M(5120.0M)], [Metaspace: 511845K->511845K(1415168K)]

    [Times: user=2.60 sys=0.01, real=1.99 secs]

    7571.359: [Full GC (Last ditch collection)

    java.lang.OutOfMemoryError: Metaspace

    Dumping heap to /tmp/heapdump_17_05_2016_21_20_18.hprof ...

    Heap dump file created [379947926 bytes in 5.253 secs]

    Is this a bug in JDK 1.8?


  • Sathya Prakash Wednesday, April 5, 2017

    Thanks for the wonderful information.

    A couple of clarifications are needed below.

    1. By default the metaspace is unlimited and GC happens after 16 MB, but in our case GC happened at 160 MB of Metaspace. What happens if the metaspace keeps growing past 1 GB?

    2. We tried -XX:MaxMetaspaceFreeRatio but had no luck; it ran into an OOM error.

    Is there any guarantee for MaxMetaspaceFreeRatio? If I set it to 80, then whenever a GC happens in the metaspace, should 80% be freed?

    When our Metaspace crashed at 310 MB, we increased it to 500 MB, so I wanted to know: if it crashed at 310 MB, there is a chance it will crash at 500 MB too, right?

    I want to propose the removal of the metaspace size setting; what would be strong points for making that case?


  • Poonam Monday, April 10, 2017

    Yes, the metaspace is unlimited unless its limit is specified with MaxMetaspaceSize. The GC internally maintains what we call the high-water-mark for the metaspace, at which collections of the Metaspace are invoked; it is initialized with the value specified by MetaspaceSize. The HWM value is lowered or raised depending upon the free space after a GC.

    The value of MaxMetaspaceFreeRatio determines the maximum amount of free space that should be available after a GC. If the committed space available after a GC is more than the MaxMetaspaceFreeRatio value, then the HWM is lowered.

    I would suggest monitoring your Metaspace to find out your classes and metadata footprint and then configure your metaspace accordingly.
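
    For example, one generic way to watch the class and metadata footprint at runtime (a sketch added for illustration, not part of the original reply; <pid> is the JVM's process id) is:

    jstat -gc <pid> 10s      (the MC/MU columns show metaspace capacity/used in KB)
    jmap -clstats <pid>      (per-class-loader metadata statistics in JDK 8)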

    Thanks,

    Poonam

