If the Java heap is barely large enough to hold all the live data, the JVM could be doing
almost continual garbage collections. For example, if 98% of the data in the heap
is live, only 2% is available for new objects. If the application
is using that 2% for temporary objects, it can seem to be humming along quite nicely
while not getting much work done. How can that be? Well, the application runs until it has
allocated that 2%, and then a garbage collection happens and recovers that 2%.
The application runs along happily allocating and the garbage collector runs along
respectfully collecting, over and over and over. The application will be making forward
progress, but maybe oh so slowly. Are you out of memory?
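The pattern is easy to sketch. The program below is a hypothetical illustration, not the customer's actual application: the class name, array sizes, and iteration count are all made up. With the default heap it runs comfortably; to actually see the thrashing described above, you would run it with a maximum heap (for example, -Xmx18m here) only slightly larger than the live data.

```java
// Hypothetical sketch of an application whose live data nearly fills
// the heap, leaving only a sliver for short-lived temporary objects.
public class NearlyFullHeap {
    // Long-lived data that survives every collection (16 x 1 MB).
    static byte[][] liveData = new byte[16][];

    public static void main(String[] args) {
        // Fill most of the heap with live objects.
        for (int i = 0; i < liveData.length; i++) {
            liveData[i] = new byte[1024 * 1024];
        }
        // Allocate short-lived temporaries in the remaining sliver.
        // With a barely adequate heap, each pass forces the collector
        // to reclaim roughly the same small amount of space it just
        // handed out: the application allocates, the collector collects.
        long allocated = 0;
        for (int i = 0; i < 100_000; i++) {
            byte[] temp = new byte[1024]; // dies immediately
            allocated += temp.length;
        }
        System.out.println("done, allocated " + allocated
                + " temporary bytes");
    }
}
```

Run normally it simply prints its total; run with a heap barely larger than the 16 MB of live data, it spends most of its time collecting that last sliver.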
Back in the 1.4.1 days a customer noticed this type of behavior and asked for help
in detecting that bad situation. In 1.4.2 the throughput collector started throwing an
out-of-memory exception if the VM was spending the vast majority of its time
doing garbage collection and not recovering very much space in the Java heap.
In 5.0 the implementation changed somewhat, but the idea was the same: if you are
spending way too much time doing garbage collection, you're going to get an out-of-memory error.
Interestingly enough, this identified at least one case in our own usage of Java
applications where we were spending most of our time doing garbage collection.
We were happy to find it.
Why do I bring this up? Well, mostly because it was brought up in our GC meeting
this morning. If you're in this situation of spending most of your time in
garbage collection, I think you are out of memory and you need a bigger heap.
If you don't think that, you can turn off this behavior with the
command-line flag -XX:-UseGCTimeLimit. May you never need it.