Jon Masamitsu's Weblog

  • Java
    April 23, 2007

Get What You Need

Guest Author
As we all know by now,
GC ergonomics exists to automatically adjust the sizes of the generations
to achieve a pause time goal and/or a throughput goal while using a minimum
heap size.
But sometimes that's not what you really want. I was talking to a user recently
who was using java.lang.Runtime.totalMemory() and java.lang.Runtime.freeMemory()
to monitor how full the heap was. The intent was that the user's application be
able to anticipate when a GC was coming and to throttle back on the number of
requests being serviced so as to avoid dropping requests due to the reduced throughput
when the GC pause did occur. The problem was that the capacity of the
heap (the amount returned by totalMemory()) was varying so much that anticipating
when a GC was coming was not very reliable. First the pretty picture.

[Figure: memory used and total heap size plotted against time]
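The monitoring approach described above can be sketched roughly as follows; the class and method names here are illustrative, not taken from the user's application:

```java
// Sketch of the heap-occupancy monitoring idea: poll Runtime to estimate
// how full the heap is, so the application can throttle requests when a
// GC looks imminent. This is exactly the scheme that becomes unreliable
// when totalMemory() itself keeps changing.
public class HeapMonitor {
    // Fraction of the current heap capacity that is in use.
    static double heapOccupancy() {
        Runtime rt = Runtime.getRuntime();
        long total = rt.totalMemory();   // current capacity of the heap
        long free  = rt.freeMemory();    // unused portion of that capacity
        return (double) (total - free) / total;
    }

    public static void main(String[] args) {
        double occupancy = heapOccupancy();
        // An application might throttle when occupancy approaches 1.0.
        System.out.printf("heap occupancy: %.2f%n", occupancy);
    }
}
```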

Looking at the memory used plot first, I inferred from the graph that there
was some processing during initialization that required a lesser amount
of memory. After that there was activity that used and released memory until
about 6k seconds. At that point the activity died down until about 7.5k seconds
when there was an up tick in activity and then a quiet period again until about
11k seconds. After that, allocations were being done pretty regularly.

Looking at the total memory I see what I'm expecting to see. The
initial total heap size at zero time is larger than is needed during the first phase
and the total memory drops. Then as allocations picked up the size of the
heap grew and, on average, stayed at a higher level until the first quiet period.
During the first quiet period the total heap size decayed to a lower value
that corresponds to the lesser demand for allocations. Then there is the short
burst of activity with a corresponding growth in the heap. The total heap again drops
during the second quiet period and then grows again as the activity picks up.

I like the behavior I see, but it wasn't what the user wanted.

The amount of variation in the total heap size during the active
periods is high. This is varying too quickly for this user.
The reason for the variations is that GC ergonomics is trying to react to variations
in the behavior of the application, and so makes changes to the heap as soon as a change in the application's behavior is recognized.
There are two ways to look
at this:

  • GC ergonomics was designed for the case where a steady state
    behavior has been reached, and as exhibited by the variations in the amount of
    memory used, this application doesn't reach a steady state (on the time
    scale of interest) and GC ergonomics is doing the best that it can
    under the circumstances, and
  • the user's application is what it is, so what should the user do?

The first bullet is the excuse I use for almost everything. Let's consider the second.

First a word about what GC ergonomics does by default that provides some
damping of variations.

GC ergonomics calculates the desired size of a generation (in terms of meeting the
goals specified by the user) and then steps
toward that size. There is a minimum size for a step, and if the
distance to the desired size is less than that minimum step, then
the step is not made. The minimum step size is the page size
on the platform. This eliminates changes as we get close to the
goal and removes some of the jitter.

When deciding whether a goal is being met, GC ergonomics uses
weighted averages for quantities. For example, a weighted average
for each pause time is used. The weighted averages typically
change more slowly than the instantaneous values of the quantities
so their use also tends to damp out variations.
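The damping effect of a weighted average can be seen in a small sketch. This is only an illustration of the idea; the weight of 0.25 is made up for the example and is not the value HotSpot uses internally:

```java
// Minimal sketch of an exponentially weighted average: each new sample
// moves the average only a fraction (the weight) of the way toward the
// sample, so the average changes more slowly than the raw values.
public class WeightedAverage {
    private final double weight;
    private double average;
    private boolean seeded;

    WeightedAverage(double weight) { this.weight = weight; }

    double sample(double value) {
        if (!seeded) {
            average = value;           // first sample seeds the average
            seeded = true;
        } else {
            average += weight * (value - average);
        }
        return average;
    }

    public static void main(String[] args) {
        WeightedAverage avg = new WeightedAverage(0.25);
        // A one-off spike in pause time moves the average only a quarter
        // of the way toward the spike, so decisions based on the average
        // change more gradually than the pauses themselves.
        for (double pauseMs : new double[] {10, 10, 40, 10, 10}) {
            System.out.printf("pause=%.0f avg=%.2f%n", pauseMs, avg.sample(pauseMs));
        }
    }
}
```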

But if you really want a more stable heap size, here's what you
can try. These suggestions limit the range of changes that can
be made by GC ergonomics. The more minor limitations are given
first followed by the more serious ones, finally ending with turning
off GC ergonomics. You can just try the suggestions (in bold)
and skip over the explanations as you wish.

Reduce the jitter caused by the different sizes of the survivor spaces
(and the flip-flopping of the roles of the survivor spaces) by reducing
the size of the survivor spaces with -XX:MinSurvivorRatio=NNN. Try NNN = 8 as a starter.



The total size of
the young generation is the size of eden plus the size of from-space.
Only from-space is counted because the spaces in the young generation
that contain objects allocated by the application are only eden and
from-space. The other survivor space (to-space) can be thought of as
scratch space for the copying collection (the type of collection
that is used for the young generation). Each survivor space alternately
plays the role of from-space (i.e. during a collection survivor space A
is from-space and in the next collection survivor space B is from-space).
Since the two survivor spaces can be of different sizes, just the swapping
can change the total size of the young generation. In a steady state
situation the survivor spaces tend to be the same size but in situations
where the application is changing behavior and GC ergonomics is trying
to adjust the sizes of the survivor spaces to get better performance,
the sizes of the survivor spaces are often different temporarily.

The default value for MinSurvivorRatio is 3 and the default value for
InitialSurvivorRatio is 8. Pick something in between. A smaller
value puts more space into the survivor spaces. That space might
go unused. A larger value limits the size of the survivor
spaces and could result in objects being promoted to the tenured
generation prematurely. Note that the survivor space sizes are still
adjusted by GC ergonomics. This change only puts a limit on how large
they can get.
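As a concrete illustration (the application class name is hypothetical), bounding the survivor spaces might look like this:

```shell
# Hypothetical invocation (MyApp is illustrative): cap survivor-space size.
# MinSurvivorRatio is a ratio of young generation size to survivor space
# size, so a larger value means smaller survivor spaces.
java -XX:MinSurvivorRatio=8 MyApp
```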

Reduce the variation in the young generation size by setting
a minimum and a maximum with -XX:NewSize=NNN and -XX:MaxNewSize=NNN.
GC ergonomics will continue to adjust the size of the young generation
within the range specified. If you want to make the minimum and maximum
limits of the young generation the same, you can use -XX:NewSize=NNN and
-XX:MaxNewSize=NNN with the same value, or the single flag -XmnNNN will do that.

Reduce the range over which the generations can change by specifying
the minimum and maximum heap size with -Xms and -Xmx, respectively.

If you've already explicitly specified the size of the young generation, then
specifying the limit on the entire heap will in effect specify the limits
of the tenured generation.
You don't have to make the minimum and maximum the same but making
them the same will have the maximum effect in terms of reducing
the variations.
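Putting the last two suggestions together might look like this (all of the sizes here are illustrative, and MyApp is a hypothetical application):

```shell
# Hypothetical invocation: pin the young generation at 64 MB and bound the
# whole heap between 256 MB and 512 MB. With -Xms equal to -Xmx the total
# heap size would not vary at all; here a range is left for the tenured
# generation to adjust within.
java -Xmn64m -Xms256m -Xmx512m MyApp
```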

Turn off GC ergonomics with the flag -XX:-UseAdaptiveSizePolicy.

The sizes of the generations, the sizes of the survivor spaces in the
young generation, and the tenuring threshold stay at their starting
values throughout the execution of the VM. You can exercise this level of
control, but then you're back to tuning the
GC yourself. But sometimes that really is the best solution. The user I
talked to liked the more predictable heap size better than the automatic tuning and
so turned off ergonomics.
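A full hand-tuned invocation with ergonomics off might look like the following; every size and value here is illustrative rather than a recommendation, and MyApp is hypothetical:

```shell
# Hypothetical invocation: disable adaptive sizing and fix everything by
# hand. With -XX:-UseAdaptiveSizePolicy the generation sizes, survivor
# ratio, and tenuring threshold stay wherever they start.
java -XX:-UseAdaptiveSizePolicy \
     -Xms512m -Xmx512m -Xmn128m \
     -XX:SurvivorRatio=8 -XX:MaxTenuringThreshold=15 \
     MyApp
```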
