Minimize Garbage Generation: GC is your Friend, not your Servant

Throughput oriented garbage-collectors, in particular generational, copying-based collectors, are very efficient at dealing with large quantities of garbage as they never have to visit garbage objects and so the cost of a GC pass is not dependent on the amount of garbage to be found. As these kinds of collectors exist in the mainstream Java SE implementations, there has been a tendency for developers to become very unconcerned with the amount of garbage they may generate, because they expect the collector will deal with it simply and efficiently. For deterministic, non-generational, garbage collectors, such as the Java RTS Real-Time Garbage Collector, where latency and pause-times are the main concerns, this is not the case: The cost of a GC cycle is dependent on both the number of live objects and the number of garbage objects! Consequently, algorithms that generate excessive amounts of garbage - which is an inefficiency even under generational collectors - can encounter much higher GC costs when run under Java RTS.

For example, consider this method that is used to build a string describing a set of attributes:

  String attributeString(Attribute[] attrs) {
    String str = "Attribute list: ";
    for (Attribute a : attrs) {
      str += a.toString();
      str += ", ";
    }
    return str;
  }

Because of the use of the string concatenation operator, this will be rewritten by the javac compiler to act the same as:

  String attributeString(Attribute[] attrs) {
    String str = "Attribute list: ";
    for (Attribute a : attrs) {
      StringBuilder sb = new StringBuilder();
      sb.append(str);
      sb.append(a.toString());
      str = sb.toString();
      sb = new StringBuilder();
      sb.append(str);
      sb.append(", ");
      str = sb.toString();
    }
    return str;
  }

Now consider how many objects are created each time through the loop. There are three obvious places where objects are created:

  • There is a new StringBuilder instance for each concatenation.
  • There is a new String representing each attribute.
  • There is a new String created from each StringBuilder (and assigned back to the str variable).

But less obviously, if the append operations exceed the size of the internal character buffer in the StringBuilder, then we will create a new, bigger buffer (and this could happen more than once per iteration!). Note that as soon as str contains more characters than the default capacity of a StringBuilder, this expansion will occur when we do the initial sb.append(str) in each iteration!
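This buffer behavior can be observed directly (a minimal sketch; the default capacity of 16 characters is documented, while the exact grown capacity is implementation-dependent):

```java
public class CapacityDemo {
    public static void main(String[] args) {
        StringBuilder sb = new StringBuilder();
        System.out.println(sb.capacity());  // 16: the documented default capacity

        // Appending more than 16 characters forces the internal buffer to be
        // replaced by a new, larger one - the old buffer becomes garbage.
        sb.append("a string of more than sixteen characters");
        System.out.println(sb.capacity());  // implementation-dependent, but > 16
    }
}
```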

Out of all these objects, only the final String created from the final StringBuilder remains live — everything else is garbage! This is grossly inefficient in terms of memory use and computational overhead for object creation and GC effort.
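There is a second-order cost too. Each concatenation starts by re-copying the entire accumulated string into a fresh StringBuilder, so the total character-copying work grows quadratically in the final string length. A back-of-the-envelope sketch (assuming, purely for illustration, 6 characters per attribute including the separator, and counting only the first re-copy in each iteration, so this is a lower bound):

```java
public class CopyCost {
    public static void main(String[] args) {
        int len = "Attribute list: ".length();  // 16 characters to start
        int perAttr = 6;      // assumed chars per attribute plus separator
        long copied = 0;
        for (int i = 0; i < 1000; i++) {
            copied += len;    // each concatenation first re-copies str so far
            len += perAttr;
        }
        // Over 3 million characters copied to build a string of only 6,016.
        System.out.println(copied);  // 3013000
        System.out.println(len);     // 6016
    }
}
```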

There are a few basic rules for minimizing object creation in code such as the above:

  1. Don't append to String objects. Strings are immutable so each append creates a new String and potentially throws the old one away.
  2. Convert StringBuilder objects (or StringBuffer objects) to String objects only when you need the actual String.
  3. Size StringBuilder (or StringBuffer) objects appropriately so that dynamic expansion of their internal character buffers is not needed.

With that in mind we can rewrite the above method much more efficiently (assuming we know how big attribute strings are expected to be):

  static final int ATTR_STRING_SIZE = 10;
  static final String ATTR_LIST = "Attribute list: ";
  static final String SEPARATOR = ", ";

  String attributeString(Attribute[] attrs) {
    int size = ATTR_LIST.length() +
               attrs.length * (ATTR_STRING_SIZE + SEPARATOR.length());

    StringBuilder str = new StringBuilder(size);
    str.append(ATTR_LIST);
    for (Attribute a : attrs) {
      str.append(a.toString());
      str.append(SEPARATOR);
    }
    return str.toString();
  }

Now all we create on each iteration is the actual String for the current attribute. There is one StringBuilder created, which should never need to dynamically grow, and one String created from that StringBuilder.
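The rewritten method can be exercised end-to-end (a self-contained sketch; the Attribute class here is a minimal stand-in, since the real one isn't shown in the post):

```java
public class AttributeDemo {
    // Minimal stand-in for the real Attribute class.
    static class Attribute {
        private final String name;
        Attribute(String name) { this.name = name; }
        @Override public String toString() { return name; }
    }

    static final int ATTR_STRING_SIZE = 10;
    static final String ATTR_LIST = "Attribute list: ";
    static final String SEPARATOR = ", ";

    static String attributeString(Attribute[] attrs) {
        // Pre-size the builder so its buffer never needs to grow.
        int size = ATTR_LIST.length() +
                   attrs.length * (ATTR_STRING_SIZE + SEPARATOR.length());
        StringBuilder str = new StringBuilder(size);
        str.append(ATTR_LIST);
        for (Attribute a : attrs) {
            str.append(a.toString());
            str.append(SEPARATOR);
        }
        return str.toString();
    }

    public static void main(String[] args) {
        Attribute[] attrs = { new Attribute("colour"), new Attribute("size") };
        // Prints "Attribute list: colour, size, " - like the original code,
        // this leaves a trailing separator.
        System.out.println(attributeString(attrs));
    }
}
```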


The title is misleading. Better to name this: Don't waste your resources creating excess strings.

Interesting side note: where is it documented that the compiler replaces the String += operations with StringBuilder appends?

Regards from Germany
Thomas Nagel

Posted by Thomas Nagel on January 13, 2010 at 03:18 PM EST #

Actually, I'm not sure this is a great example. Not because you're wrong, but because the cost of the example above grows quadratically in the number of characters in the final string.

Meaning that not only do you overload the GC, you can also overload the CPU and slow down the entire app (if there's enough text).

Posted by am on January 13, 2010 at 04:33 PM EST #

This raises the question: could javac be enhanced to do this rewrite, rather than forcing the programmer to? It seems like it could, with control-flow analysis, pull the "str = sb.toString();" statement out of the loop. Then it could pull the StringBuilder constructor call up out of the loop, too. Maybe it could make some smart guesses as to how big to size the StringBuilder... if it's going to loop through 10000 elements and it's already gone through 10 of them and the StringBuilder now contains 25 characters, 25000 would be a good guess.

Posted by Andy Tripp on January 13, 2010 at 07:35 PM EST #

Thanks for the comments.

The use of StringBuffer is mentioned in the JLS under "Optimization of String Concatenation".

StringBuilder effectively replaced StringBuffer for this purpose when it was introduced.

javac is a pretty simple non-optimizing compiler so I don't think we'll see any improvements in this area. The JIT could do something more clever, but given the javac-generated code it has to start with, I think it would have to do some very clever analysis to be able to deduce that it can hoist the creation of the StringBuilder and use only one instance across the entire loop. I don't think any compiler would waste time trying to figure out a reasonable initial size though - that requires knowing what toString will do. Even estimating based on initial iterations (effectively loop unrolling) seems way too much effort for the return. :)

The real moral here is that String concatenation should only be used for very simple operations. Because we have a simple operator for this we tend to overuse it - in my view. I think if people had to write the code using String methods then they'd see that what they were writing was not good code.

The other key point I'm making is that not all GCs are the same. In particular the real-time GC area, as I'm involved in the real-time Java domain.

Posted by David Holmes on January 14, 2010 at 01:27 AM EST #

As programmers we have a responsibility to understand the cost of what we instruct the machine to do. It's easy to forget that cost if the language does its best not to punish us too harshly for it, but ignoring that responsibility is akin to sloppiness (all too common a disease, I'm afraid).

So, thank you, David, for your article.

Posted by Udo Schuermann on January 14, 2010 at 11:40 PM EST #

My understanding is that the Sun 1.5+ compilers still transform String + operations using StringBuffer, not StringBuilder, being cautious and taking the thread-safe approach. I have no idea of the GC impact of one versus the other.

Posted by Joshua Tuberville on February 04, 2010 at 01:09 PM EST #

Joshua: Since JDK 5.0 was first released javac will use StringBuilder for String concatenation, provided the target classfile version is 1.5 or above (which is the default). The GC properties of StringBuilder and StringBuffer are effectively identical.

Posted by David Holmes on February 08, 2010 at 06:27 AM EST #

Hi David,

On a side note, does it make sense in general to try to minimize the number of classes created for the "domain objects" and try to re-use existing jdk classes such as String as much as possible, so as to reduce the number of classes that need to be loaded into memory and therefore the memory footprint and the resultant load on GC ?

For example, when designing an application, one can go to one extreme and create a type/class for every domain object, e.g. AttributeA, AttributeB, etc., which are just simple String wrappers, vs the alternative of using String directly. The former allows the programming model to be strongly typed and minimizes the chance of mixing up positional values (when passed as constructor arguments, for example), along with other benefits such as subtle differences in type-specific behavior, but at the cost of many more classes created.


Posted by guest on November 24, 2011 at 10:29 PM EST #


Many local variables need not go through GC, as they could be collected when they go out of scope. Is it possible to introduce this feature?


Posted by Suminda Dharmasena on April 24, 2013 at 02:32 AM EST #


Stack-allocation of objects does arise from time to time. There are two sides to that:

- JIT optimizations using escape-analysis
- changes to the programming model

The problem with making this part of the programming model is that it greatly complicates the programming model. Now you have objects whose lifetime is limited to the execution scope in which they were allocated. This is in effect the same problem that ScopedMemory introduced in the Real-Time Specification for Java. You now need special rules to prevent longer-lived objects from holding references to shorter-lived ones - which has serious implications as ScopedMemory exposed. Plus you have additional GC issues because the "local" object still has to be scanned for heap references. So really this doesn't give you a practical programming model for a safe language like Java.
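The escape-analysis side can be illustrated with a hedged sketch (whether the JIT actually scalar-replaces an allocation depends on the JVM and inlining decisions, and is not observable from the code itself):

```java
public class EscapeDemo {
    static class Point {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
    }

    static Point cached;  // a heap-reachable field

    // 'p' never escapes this method: a JIT using escape analysis may
    // scalar-replace it, so no heap allocation happens at all.
    static int noEscape(int x, int y) {
        Point p = new Point(x, y);
        return p.x + p.y;
    }

    // 'p' escapes via the static field, so it must be heap allocated and
    // its lifetime is no longer bounded by the method's execution scope -
    // exactly the case a scope-based programming model must forbid or track.
    static int escapes(int x, int y) {
        Point p = new Point(x, y);
        cached = p;
        return p.x + p.y;
    }

    public static void main(String[] args) {
        System.out.println(noEscape(1, 2));  // 3
        System.out.println(escapes(3, 4));   // 7
    }
}
```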

Posted by David Holmes on April 24, 2013 at 11:06 AM EST #


The objects can be heap allocated, not breaking the model, but they will be collected at known points in the code, as opposed to by another thread at an unknown time. An object will be GCed this way only if possible; the scoping cannot be applied if the object escapes the current context. If they hold references to longer-lived objects, those need not be collected along with the aggregating object; they will be GCed as usual. Also, if there are other references to these objects, they need not be collected.

I believe a large number of references are local and many objects can be collected in a scoped manner. If the compiler determines an object can be collected, it will issue a delete at the required point. Objects for which this cannot be done will remain part of normal GC. Objects contained through composition will also still exist if they live longer than the deleted object. This will not be without issues in implementation - it will not be a walk in the park - but it would reduce a lot of the GC overhead and issues like pauses.

Also I believe you can introduce annotations to deduce scope, like:
GCAtBlockExit - for loops, the GC happens when the block exits
GCOnGCOfContainingObject - GC on GC of the aggregating / composing object
GCOnStatementCompletion - GC the parameters and return value (if not assigned) of a call when the function returns; needs support for statement-level annotations
GCOnLastResort - only GC as a last resort before an OutOfMemoryError - class level
ExcludeFinalising - prevent finalizers running on the object - class level
GCUsingNewThread(Priority = n)
GCUsing(GCSystem) - all objects created in the specified scope will be GCed using the specified GC - annotations need to support static references
GCSuspend - a synchronization object can be used so that the GC threads do not run while the code block is executing
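For concreteness, a couple of the proposed markers could at least be spelled as ordinary Java annotation types (a purely hypothetical sketch: none of these annotations exist in any JDK, and they would need VM support to have any effect):

```java
import java.lang.annotation.*;

public class ProposedGcAnnotations {
    // Hypothetical: collect the annotated local when its block exits.
    @Retention(RetentionPolicy.CLASS)
    @Target(ElementType.LOCAL_VARIABLE)
    @interface GCAtBlockExit {}

    // Hypothetical: only collect instances of this class as a last resort.
    @Retention(RetentionPolicy.CLASS)
    @Target(ElementType.TYPE)
    @interface GCOnLastResort {}

    public static void main(String[] args) {
        // Today this is only metadata; a hypothetical VM would act on it.
        @GCAtBlockExit StringBuilder sb = new StringBuilder("scratch");
        System.out.println(sb.length());  // 7
    }
}
```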


Posted by Suminda Dharmasena on April 24, 2013 at 04:40 PM EST #

I believe 90 percent or more of the deletes can be decided by compile-time analysis, since Java does not have pointer arithmetic.

Why leave garbage to pile up if you can throw it out beforehand?

Posted by Suminda Dharmasena on April 24, 2013 at 04:52 PM EST #

What you describe requires additional analysis at every putfield to detect escape, and additional GC activity at the end of each "scope" to do the reclaiming. This falls under the "JIT optimizations using escape-analysis" category.

It could certainly be done; the question is whether it can be done in an effective and general way. The "barriers" needed to detect escape have an impact on performance.

Posted by David Holmes on April 24, 2013 at 08:44 PM EST #


This can be done at compile time in a large number of cases. If it escapes, handle that at the JIT level; pushing all of this to the JIT would just add unnecessary runtime overhead.

What needs to be done is:
- a simple delete if it can be determined at compile time
- a runtime delete only if not determinable at compile time
- if the object escapes, then pass it to the GC system


Posted by Suminda Dharmasena on April 24, 2013 at 10:12 PM EST #

It cannot be done at source-compile time. A Java source compiler can only produce Java bytecodes, and there are no bytecodes to express this. So it has to be done by a native compiler, whether JIT, AOT or whatever.

Posted by David Holmes on April 24, 2013 at 10:16 PM EST #

A new instruction can be added to do this.

Posted by Suminda Dharmasena on April 24, 2013 at 10:25 PM EST #

Perhaps two instructions could make Java much faster:

1) mark whether an object is locally contained / shared / not sure - this would help the JIT optimiser a lot
2) delete an object (fast, when sure) / delete if not shared (slow, but usable when not sure)

Posted by Suminda Dharmasena on April 24, 2013 at 10:37 PM EST #

An alternative is to embed in the bytecode metadata for all the analysis that can be done statically at compile time.

This would eliminate the overhead of dynamic analysis.

Anyway, the starting point can be to eliminate 90 to 99 percent of GC overhead, which is low-hanging fruit to start with.

An object being GCed does not mean that, if it contains longer-lived objects, all of those need to be GCed also. In those cases the normal GC mechanism can take over.

Also, ideally the application should be able to specify the GC it wants to use, and an API framework could be provided for writing application-specific GCs, with the ability for code to communicate with the installed GC. This would only be for items which still need GC, which can be made few through the other schemes. Also the JDK could provide a Zing-like GC.

Posted by Suminda Dharmasena on April 24, 2013 at 10:59 PM EST #


I am looking to see how we can get some level of user controlled memory management without using Unsafe.

How do I take it forward?


Posted by Suminda Dharmasena on May 11, 2013 at 07:12 PM EST #

I suggest you start a discussion on an appropriate OpenJDK mailing list - which one depends on exactly what you are proposing.

Posted by guest on May 13, 2013 at 11:38 AM EST #

I am looking for more control over GC, and over other functions like Closeable, through annotations. At the trigger point the GC system will *try* to collect the object and any contained / aggregated objects, if they can also be collected.

This would be a safe *delete*.

The mailing list may not be the best place to start. I think it is best to get a few supporters first and do a JSR.

Posted by Suminda Dharmasena on May 18, 2013 at 04:38 AM EST #

You need to start with the mailing list(s) and get some supporters within the OpenJDK Community, then propose a Java Enhancement Proposal (JEP).

A JSR, if needed, comes much further down the track.

But let's not discuss this via blog comments - thanks.

Posted by guest on May 20, 2013 at 04:45 PM EST #


The views expressed on this blog are my own and do not necessarily reflect the views of Oracle.
