HeapDumpOnOutOfMemoryError option in 5.0u7 and 1.4.2_12.

5.0 Update 7 was released this week. Among the changes is a backport of the HeapDumpOnOutOfMemoryError option from Mustang. This VM option tells the HotSpot VM to generate a heap dump when an OutOfMemoryError is thrown because the Java heap or the permanent generation is full. A heap dump is useful in production systems where you need to diagnose an unexpected failure. Many developers sent mail asking for this option in the shipping releases. For 1.4.2 it will be available shortly in 1.4.2_12.
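Enabling the option is just a matter of adding the flag to the launch command. A minimal sketch (the jar name and dump path here are hypothetical examples, not from a real deployment):

```shell
# Sketch: enable dump-on-OOM for a hypothetical app.jar.
# -XX:HeapDumpPath (optional) names the output file or directory;
# by default the dump is written as java_pid<pid>.hprof in the working directory.
java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/app.hprof -jar app.jar
```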

So how do you analyze these dumps? The legacy Heap Analysis Tool (HAT) is one option, but it's old, provides only limited queries, and can't import heap dumps generated by 64-bit VMs. A better choice is jhat, which fixes many HAT issues, provides new queries, and, most importantly, provides a query interface called Object Query Language (OQL) for writing your own queries. Sundar has some great blog entries that introduce OQL and will get you started.
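For a flavor of what OQL looks like, here is a simple query of the kind Sundar's entries cover. Treat it as a sketch: the exact field names depend on the JDK version of the dumped VM.

```
/* find all Strings whose backing char array holds more than 100 characters */
select s from java.lang.String s where s.count > 100
```

Queries like this are entered in jhat's OQL page and evaluated against the loaded snapshot, so you can narrow in on suspicious object populations without writing any tool code.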

The jhat utility is included in Mustang (which you can download from the binaries snapshot site). It will happily munch on dumps produced by 5.0u7 and 1.4.2_12.
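Loading a dump into jhat is a one-liner; the file name below is hypothetical. Large dumps need a bigger analysis heap, which you can give to the tool's own JVM with -J:

```shell
# Sketch: analyze heap.bin with jhat. The -J-mx flag sizes jhat's own heap;
# once the snapshot is resolved, jhat serves its queries (including OQL)
# over HTTP, on port 7000 by default.
jhat -J-mx512m heap.bin
```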

Another tool to analyze the heap dump is the YourKit Java Profiler. Anton Katilin and his colleagues at YourKit recently released version 5.5 of the product, which includes an "Import HPROF Snapshot" option to import the heap dump and convert it to the format used by the profiler.

Two other useful things to know are: (i) the heap dumps are platform independent, so you don't need to analyze the dump on the same system that produced it, and (ii) running with -XX:+HeapDumpOnOutOfMemoryError does not impact performance; it is simply a flag to indicate that a heap dump should be generated when the first thread throws OutOfMemoryError.

[Update 08/04/06: In the original entry I mentioned a potential issue with the low-pause collector. That turns out not to be a concern.]


It looks to me like the heap dump generated is in binary format. Is there any way to generate an ASCII heap dump this way? I have tools that read ASCII heap dumps only.

Posted by Bill Au on June 08, 2006 at 01:49 AM PDT #

Hi Bill: No. Both jmap and the JVM's built-in dumper (triggered by -XX:+HeapDumpOnOutOfMemoryError) dump in binary format only. Only the hprof profiler can dump the heap in text format. Text-format dumps would be very large. In theory, it should be possible to write an offline converter from the binary format to the text format. The binary format is documented here: https://heap-snapshot.dev.java.net/files/documents/4282/31543/hprof-binary-format.html
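As a tiny illustration of working with the binary format, here is a sketch (the class and method names are my own, not from any tool) that reads the NUL-terminated format-name string at the start of an hprof file, which is how an offline converter would begin:

```java
import java.io.DataInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;

public class HprofHeader {
    /**
     * Reads the format-name string that opens a binary hprof file.
     * The header is a NUL-terminated ASCII string such as "JAVA PROFILE 1.0.2",
     * followed by the identifier size and a timestamp (not read here).
     */
    public static String readFormatName(File f) throws IOException {
        try (DataInputStream in = new DataInputStream(new FileInputStream(f))) {
            StringBuilder sb = new StringBuilder();
            int b;
            while ((b = in.read()) > 0) { // stop at the NUL terminator (or EOF)
                sb.append((char) b);
            }
            return sb.toString();
        }
    }
}
```

A converter would continue from there by reading the tagged records (strings, classes, heap dump segments) described in the format document above.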

Posted by A. Sundararajan on June 11, 2006 at 07:32 PM PDT #

It seems I can't access 6379636. Any specific reason?

Posted by Taras Tielkes on July 01, 2006 at 08:10 PM PDT #

This is the link to the bug that is mentioned: http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6379636

Posted by guest on July 13, 2006 at 10:34 PM PDT #

I'm sorry to say this, but memory profiling for Java was and still is a nightmare.

I have a Java application I started developing with JDK 1.3, and we are now at JDK 1.6, and the memory profiling tools still suck big time.

Basically, as time passed, I realised that any attempt to memory-profile some Java applications is futile.

The only way to fix memory issues for such applications is to buy more RAM. Then 64-bit systems, to be able to add even more RAM.

Consider an application that is used in production, uses about 2.5 GB of RAM most of the time, and runs on a server with 6 GB of RAM.

About once a week, without any apparent reason, the application hangs because of excessive garbage collection.

First, I tried using JProfiler. Huge overhead. I couldn't use it for a production system.

Then I watched as the Netbeans Profiler was developed, and I even spent about a month beta testing the pre-release versions, and reporting any bugs I found.

Unfortunately, while the Netbeans Profiler adds less overhead compared to JProfiler, it still cannot be used in a production environment.

Today, I found out about the new and wonderful tools: jmap, jhat, -XX:+HeapDumpOnOutOfMemoryError and the HotSpotDiagnostic MBean.

Unfortunately, the tools don't work.
I performed a heap dump on JDK 1.5.0_10 with jmap, and I tried to open the 2.9 GB heap.bin file with jhat version 2.0 (from Java 1.6.0).

First, the jmap tool generated a storm of warnings:

Unknown oop at 0x00002aaac49e7298
Oop's klass is null
Unknown oop at 0x00002aaac4b95c68
Oop's klass is null

Then the jhat tool refused to open the file:

Reading from /usr/java/jdk1.5.0_10/bin/heap.bin...
Dump file created Thu Mar 01 19:18:11 EET 2007
java.io.IOException: Bad record length of -1395602087 at byte 0x540c33 of file.
        at com.sun.tools.hat.internal.parser.HprofReader.read(HprofReader.java:192)
        at com.sun.tools.hat.internal.parser.Reader.readFile(Reader.java:79)
        at com.sun.tools.hat.Main.main(Main.java:143)

Then, I tried the same operation on a smaller dump file, 193 MB, and I got a different error:

Reading from heap.bin...
Dump file created Thu Mar 01 20:52:07 EET 2007
Snapshot read, resolving...
Resolving 1487926 objects...
Exception in thread "main" java.lang.ClassCastException: java.lang.Long cannot be cast to com.sun.tools.hat.internal.model.JavaClass
        at com.sun.tools.hat.internal.model.JavaObjectArray.resolve(JavaObjectArray.java:68)
        at com.sun.tools.hat.internal.model.Snapshot.resolve(Snapshot.java:237)
        at com.sun.tools.hat.Main.main(Main.java:152)

I'm using Linux x86-64 and java for amd64.

(According to a Google search, there is a bug on this issue at http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6467192)

Then I tried to report the issues on bugs.sun.com. Surprise: the site doesn't work; it gives a timeout if you attempt to access any page:

Your request has timed out.

Please use the back button on your browser and resubmit. 

And as Sarah Jessica Parker / Carrie Bradshaw used to say in Sex and the City:
I can't help but wonder: are the problems with bugs.sun.com in any way related to log files full of this stuff:

2704.825: [Full GC [PSYoungGen: 11813K->0K(1577984K)] [PSOldGen: 3276172K->1676296K(3320512K)] 3287985K->1676296K(4898496K) [PSPermGen: 53547K->53547K(107136K)], 26.7389050 secs]
5583.355: [Full GC [PSYoungGen: 1170K->0K(1620160K)] [PSOldGen: 3286016K->1505662K(3320512K)] 3287186K->1505662K(4940672K) [PSPermGen: 54310K->54310K(121600K)], 6.4911200 secs]



Posted by Vladimir Nicolici on March 01, 2007 at 04:17 AM PST #

This blog entry was to explain the -XX:+HeapDumpOnOutOfMemoryError option. I'm not aware of any bug reports saying that the resulting heap dump cannot be parsed by tools that read this format. The problem you observed seems to be with the jmap -heap option in jdk5. That option cannot guarantee a good heap dump because it may observe the heap in an inconsistent state. jdk6 adds a new jmap option, -dump, which takes a snapshot of the heap of the target process. It is reliable and uses the same mechanism as HeapDumpOnOutOfMemoryError.
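For reference, the jdk6 invocation looks like this (the pid 1234 is a hypothetical example; you can find the real one with jps):

```shell
# jdk6 sketch: take a reliable snapshot of a running JVM's heap in binary
# format, using the same mechanism as HeapDumpOnOutOfMemoryError.
jmap -dump:format=b,file=heap.bin 1234
```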

Posted by Alan on March 01, 2007 at 04:50 AM PST #

If we end up with a REALLY large hprof file (say 900 MB), is there any way to split the file, or anything of that nature, so that we could run it? I am trying to run locally with as much memory as I can give my JVM (only a 2 GB machine) and obviously I am running out of memory. Just trying to figure out what I can do to get the hprof analyzed so that we can identify the problem in our app.

Posted by Mike on March 15, 2007 at 06:52 AM PDT #

Sorry, to clarify my comment: I am trying to run HAT with my HPROF file. The file was created by JVM version 1.4.2_13.

Posted by guest on March 15, 2007 at 06:53 AM PDT #

So if the JVM is out of memory, how can it generate a heap dump file?

Posted by John on March 29, 2007 at 05:14 AM PDT #


I ended up having to parse mine into a text format to find my issue, but in doing so I learned a whole lot about this file :) If anyone needs the text parser, email me at michael.bain@mckesson.com. I don't promise that it is pretty; it just works.

Posted by Mike on April 05, 2007 at 01:07 AM PDT #

I am trying the HeapDumpOnOutOfMemoryError option with 1.5.0_09 but could not get the dump generated when the error occurs. Please help!

Posted by DK on April 09, 2007 at 09:52 AM PDT #

Huge heap dumps can be easily parsed and analyzed with the SAP Memory Analyzer, available for free here: https://www.sdn.sap.com/irj/sdn/wiki?path=/display/Java/Java+Memory+Analysis There is also a description of how to get heap dumps, and of the cases where they are not written even though you configured HeapDumpOnOutOfMemoryError.

Posted by guest on July 16, 2007 at 11:59 PM PDT #

When is the heap dump generated: at the time the exception is thrown, or when the exception is caught by the JVM's top-level exception handler in the thread?

Most Java apps I've seen catch Throwable, even though a reasonable Java app should not do that (as stated in the Javadoc).

Posted by Zoltan Farkas on October 30, 2007 at 08:54 AM PDT #

