  • Java
    September 12, 2019

Java on Container Like A Pro

Vivek Thakur
Principal Applications Engineer

JVM in containers

Modern-day software systems are moving toward containers. But there are a few important factors to understand before we move our Java/JVM-based applications to containers. These factors raise questions about Java's suitability for containers.

Imagine an environment in which 10 instances of an application have been deployed into containers on a 64-core node. Suddenly the application starts throttling and falls short of its normal performance. What has happened? To allow multiple containers to run isolated side by side, we limited each container to one CPU (or the equivalent ratio in CPU shares). Unfortunately, the JVM sees the overall number of cores on that node (64) and uses that value to initialize the default thread counts we have seen earlier. Having started 10 instances, we end up with:

  • 10 * 64 JIT compiler threads
  • 10 * 64 garbage collection threads
  • 10 * 64 … and so on

Moreover, our application, being limited in the number of CPU cycles it can use, spends most of its time switching between different threads and cannot get any actual work done.

Suddenly the “package once, run anywhere” promise of containers seems violated.

Before JDK 8u131, the available core count was determined via sysconf. That means that whenever we run in a container, we get the total number of processors available on the system, or, in the case of virtual machines, on the virtual system. The same is true for the default memory limits: the JVM looks at the host's overall memory and uses that to set its defaults. In other words, the JVM ignores cgroups, which causes the previously mentioned problems. Unfortunately, there is no CPU or memory namespace (and namespaces usually have a slightly different goal anyway), so a simple /proc/meminfo from inside the container will still show us the overall memory of the host.
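To see what values the JVM actually picks up, a small probe like the following can be run inside the container (a minimal sketch; the class name is my own). On an unpatched JDK, it reports the host's core count and memory rather than the container limits:

```java
// Prints the CPU and memory values the JVM has detected at startup.
// Inside a container on a pre-8u131 JDK, these reflect the host,
// not the cgroup limits imposed on the container.
public class JvmResourceProbe {
    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();
        long maxHeapBytes = Runtime.getRuntime().maxMemory();
        System.out.println("Detected processors: " + cores);
        System.out.println("Max heap (MB): " + maxHeapBytes / (1024 * 1024));
    }
}
```

Running this in a container limited to one CPU on a 64-core host would still print 64 on the affected JDK versions.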

Java 8u131 onwards on containers!

Java now supports Docker CPU and memory limits. Let us look at what “support” actually means.

Please look at the JIRA issues below, which list the improvements to Java for containers.

https://bugs.openjdk.java.net/browse/JDK-8146115: Improve Docker container detection and resource configuration usage

https://bugs.openjdk.java.net/browse/JDK-8196595


These changes are available in 8u192. [Released Oct 2018]

The JVM can now recognize the memory and CPU configuration of the container in which it is running. For instance, if the Docker container is configured to run with 1024m of memory, the JVM can detect that and configure its Java heap and the sizes of its other memory pools accordingly. Therefore, to give your Docker instance a smaller footprint, all you have to do is control the size of that container instance. The same applies to the CPU configuration: configure the number of CPUs that you would like the container instance to use, and the JVM running inside it will detect that configuration and limit its CPU use accordingly.

Memory

The JVM will now consider cgroup memory limits if the following flags are specified (note that -XX:+UnlockExperimentalVMOptions must come before the experimental flag it unlocks):

     -XX:+UnlockExperimentalVMOptions

     -XX:+UseCGroupMemoryLimitForHeap

In that case, the max heap size (if not overridden) will automatically be set to the limit specified by the cgroup. As we discussed earlier, the JVM uses memory besides the heap, so this will not prevent the OOM killer from removing a user's containers. However, especially given that the garbage collector becomes more aggressive as the heap fills up, this is already a great improvement.
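It is worth knowing the arithmetic behind the automatic sizing. On the JDK 8 builds that carry these flags, the default heap is 1/MaxRAMFraction of the detected memory limit, with MaxRAMFraction defaulting to 4 (treat that default as build-dependent). A minimal sketch of the calculation:

```java
// Illustrates the default heap-sizing arithmetic: the JVM takes
// 1/MaxRAMFraction of the detected memory limit. MaxRAMFraction
// defaults to 4 on the JDK 8 builds discussed here (an assumption
// about your specific build).
public class DefaultHeapMath {
    static long defaultMaxHeapMb(long containerLimitMb, long maxRamFraction) {
        return containerLimitMb / maxRamFraction;
    }

    public static void main(String[] args) {
        // A 1024 MB container yields a 256 MB default max heap...
        System.out.println(defaultMaxHeapMb(1024, 4));
        // ...unless -XX:MaxRAMFraction=2 is passed, giving 512 MB.
        System.out.println(defaultMaxHeapMb(1024, 2));
    }
}
```

This is why a small container can end up with a surprisingly small heap unless MaxRAMFraction (or -Xmx) is set explicitly.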

CPU

The JVM will automatically detect cpusets and use the specified number of CPUs to initialize the default values discussed earlier. Unfortunately, most users (and especially container orchestrators, such as DC/OS) use CPU shares as the default CPU isolation, and with CPU shares you will still end up with incorrect values for the default parameters.

So what can we do?

We should consider manually overriding the default parameters (e.g., at least -Xmx for memory, and -XX:ParallelGCThreads and -XX:ConcGCThreads for CPU) according to the specific cgroup limits.
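As a reference point when choosing override values, HotSpot's default sizing rule for parallel GC threads can be sketched as follows (the formula mirrors the one used in HotSpot's source; treat the exact constants as build-dependent):

```java
// Mirrors HotSpot's default sizing rule for -XX:ParallelGCThreads:
// equal to the CPU count for up to 8 CPUs, then 8 plus 5/8 of the
// remaining CPUs. The constants are build-dependent; this is a sketch.
public class ParallelGcThreadDefaults {
    static int defaultParallelGcThreads(int cpus) {
        return cpus <= 8 ? cpus : 8 + (cpus - 8) * 5 / 8;
    }

    public static void main(String[] args) {
        System.out.println(defaultParallelGcThreads(1));  // a 1-CPU cgroup
        System.out.println(defaultParallelGcThreads(64)); // a 64-core host
    }
}
```

A container pinned to one CPU should therefore run with something like -XX:ParallelGCThreads=1, not the value derived from the 64-core host.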

Considering Non-Heap Memory in Java

The JVM's memory consists of the following segments:

  • Heap Memory
  • Non-Heap Memory, which is used by Java to store loaded classes and other meta-data
  • JVM code itself, JVM internal structures, loaded profiler agent code and data, etc.

The JVM has memory other than the heap, referred to as non-heap memory. It is created at JVM startup and stores per-class structures such as the runtime constant pool, field and method data, and the code for methods and constructors, as well as interned strings.

In versions of Java before 1.8, the JVM specifies a default size of 64 MB for the PermGen space, which could be raised when requirements exceeded 64 MB.

In Java 8, PermGen has been renamed to Metaspace, with some subtle differences. It is important to note that Metaspace has an unlimited maximum size by default (configurable with -XX:MaxMetaspaceSize=?MB). By contrast, PermGen in Java 7 and earlier has a default maximum size of 64 MB on the 32-bit JVM and 82 MB on the 64-bit version. Of course, these are not the same as the initial sizes: Java 7 and earlier start with around 12-21 MB of initial PermGen space.
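You can inspect the Metaspace pool (and its effective limit) at runtime through the standard management API; this is a minimal sketch, and the class name is my own:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

// Lists the JVM's memory pools. On Java 8+ one of them is "Metaspace";
// its max is reported as -1 (unlimited) unless -XX:MaxMetaspaceSize
// has been set on the command line.
public class MetaspaceInspector {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            System.out.printf("%s: used=%d bytes, max=%d bytes%n",
                    pool.getName(),
                    pool.getUsage().getUsed(),
                    pool.getUsage().getMax());
        }
    }
}
```

Seeing max=-1 for Metaspace in a memory-limited container is a hint that an explicit -XX:MaxMetaspaceSize is worth setting.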

However, MaxMetaspaceSize can be set, but there was a bug in Java in which the specified MetaspaceSize was not being committed. Ref: https://bugs.openjdk.java.net/browse/JDK-8067100

After setting the heap and Metaspace sizes, what options are available?

There is no way to limit the native memory usage of an application. After the Java heap and Metaspace are allocated, a Java application is free to use whatever is left of the system memory for other native allocations. We can limit the total memory available to the container itself by configuring the container's total memory. Additionally, if our application makes extensive use of direct memory buffers (native allocations), we can control their maximum size with the JVM option -XX:MaxDirectMemorySize, which in turn controls the size of those native allocations.
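Direct buffers are the most common source of such off-heap allocations. The following sketch allocates one; its capacity counts against the -XX:MaxDirectMemorySize budget rather than the Java heap (the class name is my own):

```java
import java.nio.ByteBuffer;

// Direct buffers are allocated outside the Java heap; their total size
// is capped by -XX:MaxDirectMemorySize (which, if unset, defaults to
// roughly the maximum heap size).
public class DirectBufferDemo {
    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocateDirect(1024 * 1024); // 1 MB off-heap
        System.out.println("Direct buffer capacity: " + buf.capacity());
        System.out.println("Is direct: " + buf.isDirect());
    }
}
```

If the budget is exhausted, allocateDirect fails with an OutOfMemoryError ("Direct buffer memory") instead of being killed by the container's OOM killer, which makes the failure much easier to diagnose.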

