
Poonam Bajaj's Blog

  • September 10, 2015

Why do I get the message "CodeCache is full. Compiler has been disabled"?

Poonam Parhar
Consulting Member of Technical Staff

The JVM's JIT compiler generates compiled code and stores it in a memory area called the CodeCache. The default maximum size of the CodeCache on most platforms is 48M. If an application needs to compile a large number of methods, resulting in a large amount of compiled code, the CodeCache may become full. When it becomes full, the compiler is disabled to stop any further compilation of methods, and a message like the following gets logged:

Java HotSpot(TM) 64-Bit Server VM warning: CodeCache is full. Compiler has been disabled.
Java HotSpot(TM) 64-Bit Server VM warning: Try increasing the code cache size using -XX:ReservedCodeCacheSize=
Code Cache  [0xffffffff77400000, 0xffffffff7a390000, 0xffffffff7a400000) total_blobs=11659 nmethods=10690 adapters=882 free_code_cache=909Kb largest_free_block=502656
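
As the warning suggests, the limit can be raised with -XX:ReservedCodeCacheSize; adding -XX:+PrintCodeCache (a standard HotSpot product flag) makes the JVM print the CodeCache occupancy at exit, which helps in choosing a value. As a sketch, with 256m as an illustrative size and MyApplication standing in for the real main class:

java -XX:ReservedCodeCacheSize=256m -XX:+PrintCodeCache MyApplication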

When this situation occurs, the JVM may invoke sweeping and flushing of this space to make room available in the CodeCache. There is a JVM option, UseCodeCacheFlushing, that can be used to control the flushing of the CodeCache. With this option enabled, the JVM invokes an emergency flushing that discards the older half of the compiled code (nmethods) to make space available in the CodeCache. In addition, it disables the compiler until the available free space becomes more than the configured CodeCacheMinimumFreeSpace. The default value of the CodeCacheMinimumFreeSpace option is 500K.
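
As a minimal sketch of tuning these two settings on JDK 7/8 from the command line (the 1m value and the MyApplication class name are illustrative only):

java -XX:+UseCodeCacheFlushing -XX:CodeCacheMinimumFreeSpace=1m MyApplication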

UseCodeCacheFlushing is set to false by default in JDK6, and has been enabled by default since JDK7u4. This essentially means that in JDK6, when the CodeCache becomes full, it is not swept or flushed and further compilation is simply disabled, whereas in JDK7u4+ an emergency flushing is invoked when the CodeCache becomes full. Enabling this option by default made some issues related to CodeCache flushing visible in JDK7u4+ releases. The following are two known problems in JDK7u4+ with respect to CodeCache flushing:

1. The compiler may not get restarted even after the CodeCache occupancy drops down to almost half after the emergency flushing.
2. The emergency flushing may cause high CPU usage by the compiler threads leading to overall performance degradation.

The performance issue and the problem of the compiler not getting re-enabled have both been addressed in JDK8. To work around them in JDK7u4+, we can increase the code cache size with the ReservedCodeCacheSize option, setting it to a value larger than the compiled-code footprint so that the CodeCache never becomes full. Another solution is to disable CodeCache flushing with the -XX:-UseCodeCacheFlushing JVM option.
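
For example, either of the following invocations applies one of these workarounds (the 128m value and the MyApplication class name are placeholders to be adapted to your application):

java -XX:ReservedCodeCacheSize=128m MyApplication
java -XX:-UseCodeCacheFlushing MyApplication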

These issues have been fixed in JDK8 and its update releases. Here is the list of bugs fixed in JDK8 and 8u20 that address these problems:
    * JDK-8006952: CodeCacheFlushing degenerates VM with excessive codecache freelist iteration (fixed in 8)
    * JDK-8012547: Code cache flushing can get stuck without completing reclamation of memory (fixed in 8)
    * JDK-8020151: PSR:PERF Large performance regressions when code cache is filled (fixed in 8)
    * JDK-8029091: Bug in calculation of code cache sweeping interval (fixed in 8u20)
    * JDK-8027593: performance drop with constrained codecache starting with hs25 b111 (fixed in 8)



Comments (10)
  • guest Friday, October 2, 2015

    I have a couple of Java8 code-cache questions, specifically related to TieredCompilation (for a large Java server application):

    1. Can someone explain, or provide a reference to where it is explained, the differences between TieredCompilation and non-TieredCompilation?

    2. What performance improvements should be expected with TieredCompilation vs non-TieredCompilation? Startup only, or better long-term performance?

    Should we expect better performance?

    3. Since fundamentally TieredCompilation uses more CodeCache memory, does using more memory necessarily mean worse/better performance? In testing we found more memory is used with TieredCompilation (e.g. ~4x the amount of CodeCache memory when using TieredCompilation).

    4. Given all the previous problems and complexity around CodeCacheFlushing etc., I would think the best approach would be to test/measure our application's CodeCache utilization - then configure ReservedCodeCacheSize to be large enough that no flushing is ever done. Any comments related to this FatPants approach?

    For Java8 - as you noted, a number of issues have been solved - in addition, TieredCompilation is now ON by default. TieredCompilation was OFF by default in Java7.

    I'm currently using 1.8.0_51 - the Java-related defaults have changed; on the Red Hat Linux platform I'm using, these are now:

    java -XX:+PrintFlagsFinal | grep -i codecache

        uintx CodeCacheExpansionSize       = 65536      {pd product}
        uintx CodeCacheMinimumFreeSpace    = 512000     {product}
        uintx InitialCodeCacheSize         = 2555904    {pd product}
         bool PrintCodeCache               = false      {product}
         bool PrintCodeCacheOnCompilation  = false      {product}
        uintx ReservedCodeCacheSize        = 251658240  {pd product}
         bool UseCodeCacheFlushing         = true       {product}

    java -XX:+PrintFlagsFinal | grep -i TieredCompilation

         bool TieredCompilation            = true       {pd product}

    Note the default ReservedCodeCacheSize changed from ~50MB in Java7 to ~251MB in Java8 ...


  • Poonam Tuesday, October 13, 2015

    Let me try to answer the above questions:

    1. You can refer to these slides on Tiered Compilation by Igor Veresov:

    http://www.slideshare.net/maddocig/tiered

    2. I think the performance should definitely be better with tiered compilation.

    3. I don't think using more code-cache should have any negative impact on the performance. However, if you have any numbers showing otherwise, we'd be interested in that data.

    4. If you can spare memory, then having a big enough code-cache that can fit your compiled-code footprint is always a good idea. It avoids the cycle of sweeping and refilling of codecache.
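
    As an illustration of measuring that footprint, here is a minimal sketch using the standard java.lang.management API; it assumes the memory pool is named "Code Cache", which is the case for the non-segmented code cache in JDK 7/8 (later segmented code caches expose separate "CodeHeap ..." pools instead):

        import java.lang.management.ManagementFactory;
        import java.lang.management.MemoryPoolMXBean;

        public class CodeCacheUsage {
            public static void main(String[] args) {
                for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                    // "Code Cache" on JDK 7/8; "CodeHeap ..." pools with segmented code caches
                    if (pool.getName().contains("Code")) {
                        System.out.printf("%s: used=%dK peak=%dK max=%dK%n",
                                pool.getName(),
                                pool.getUsage().getUsed() / 1024,
                                pool.getPeakUsage().getUsed() / 1024,
                                pool.getUsage().getMax() / 1024);
                    }
                }
            }
        }

    Running this (or polling the same beans over JMX) near the end of a representative workload gives a peak figure that ReservedCodeCacheSize should comfortably exceed.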


  • guest Tuesday, December 8, 2015

    The maximum value for the code cache is 2G?


  • Poonam Wednesday, February 17, 2016

    The default value of ReservedCodeCacheSize is 48M on most platforms. There is no maximum limit for the CodeCache size; it is limited by the memory available on the system, and it can be increased with -XX:ReservedCodeCacheSize=n.


  • guest Friday, September 23, 2016

    How to identify a leak in codecache?


  • guest Wednesday, January 11, 2017

    We are running with the default 48MB using JDK 1.7.

    We used to see a few such warnings when using JDK 1.7.0_79.

    But when we switched to 1.7.0_111, they increased by several orders of magnitude. I couldn't find any documentation that explained why there was such a huge increase with 1.7.0_111. Other than changing the ReservedCodeCacheSize, is there anything else that has to be done with respect to this specific version of the JDK?


  • Poonam Wednesday, January 11, 2017

    7u111 has the bug fix for JDK-8006952: CodeCacheFlushing degenerates VM with excessive codecache freelist iteration, and that may be contributing towards the observed increase in the number of these messages.

    To avoid these messages and the performance degradation caused by codecache flushing when it becomes full, please increase the codecache size using ReservedCodeCacheSize so that the compiled-code footprint fits comfortably within it.


  • guest Wednesday, January 11, 2017

    Thanks for the prompt response. When we increased the ReservedCodeCacheSize to 128m, we saw choppiness in the CPU graphs. Is that expected?

    Also, another thing we didn't understand very well:

    We are under the impression that if UseCodeCacheFlushing is ON, then the cache will be flushed before reaching the max limit, so the warnings should not be reported?

    Or could it be because MinCodeCacheFlushingInterval is set to 30, and the cache gets full before the next flushing cycle?


  • guest Wednesday, January 11, 2017

    Thanks for the prompt reply.

    Can you explain why we get the 'CodeCache is full' warning even when UseCodeCacheFlushing is turned on. If UseCodeCacheFlushing is on, doesn't it mean that the cache is routinely flushed, never gets full, and we should never see the warning?

    Or could it be because the flushing interval is set to 30 seconds, and the cache is filling up before the flushing cycle kicks in?


  • Poonam Friday, January 13, 2017

    For your question regarding UseCodeCacheFlushing - the CodeCache is regularly cleaned by the codecache sweeper to remove dead compiled code even when it is not full.

    This option determines what to do when the codecache becomes full. When the codecache becomes full and UseCodeCacheFlushing is enabled, the compiler is disabled and codecache flushing is initiated. After flushing makes some space available in the codecache, the compiler is enabled again. When this option is disabled, the compiler gets disabled forever, and no attempt is made to clean the codecache. In that case, you won't see the compiler compiling any more methods, and you won't see the 'compiler has been disabled' message again.

    The choppiness in CPU usage could be due to the CPU cycles being spent on cleaning the codecache and recompiling the discarded compiled code. The application may also run slowly for a little while because methods whose compiled code was discarded run in interpreted mode. Probably 128M is still not sufficient to hold your compiled code; you can try increasing it some more to avoid the cache cleaning and recompilation cycle.

    MinCodeCacheFlushingInterval is used to decide whether or not to initiate another cache flushing cycle if the codecache has reached the 'Full' state again and the last flushing had happened within this interval.

    You could use options -XX:+PrintCompilation and -XX:+LogCompilation to get detailed information on the compiler activities and the codecache usage.
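
    For example (LogCompilation is a diagnostic flag, so it has to be unlocked first; MyApplication is a placeholder for the real main class):

        java -XX:+PrintCompilation -XX:+UnlockDiagnosticVMOptions -XX:+LogCompilation MyApplication

    PrintCompilation reports each compiled method on stdout, while LogCompilation writes a detailed log of the compiler's activity to a hotspot log file.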

    Thanks,

    Poonam

