By user12820862 on Sep 19, 2005
HPROF fell on hard times in J2SE 1.3 and 1.4. One consequence of this was that the heap dumps weren't always
readable by HAT. The background to this is that HPROF used an experimental profiling interface called
JVMPI, which was designed for the original classic VM, where it worked well. An implementation of JVMPI was created for
its replacement, the HotSpot™ VM, but it was problematic. The root of the
issue is that JVMPI wasn't really designed with modern virtual machines in mind: it required events from places
that are highly optimized in modern virtual machines. HPROF required one of these
events when started with the
heap=dump option. It was one of the more troublesome events as it
inhibited many optimizations and didn't work with all garbage collection implementations.
HPROF returned to its glory days in J2SE 5.0 thanks to a complete makeover by Kelly O'Hair. The catalyst for the makeover (actually a complete re-write) was JSR-163, which defined a new tool interface called the JVM™ Tool Interface (JVM TI). JVM TI broke from the past and didn't provide some events that one might expect in a profiling interface. In particular it doesn't provide an object allocation event - in its place the interface provides support for tools to do bytecode instrumentation. With HPROF re-implemented, heap dumps were working again. HAT was back in business!
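To make the bytecode-instrumentation point concrete, here is a minimal sketch of the approach at the Java level using the java.lang.instrument API (also a product of JSR-163). The class, method, and jar names are my own invention for illustration; a real profiler would rewrite allocation sites in the transform method rather than simply returning null.

```java
import java.lang.instrument.ClassFileTransformer;
import java.lang.instrument.Instrumentation;
import java.security.ProtectionDomain;

public class AllocationAgent {

    // A transformer sees every class as it is loaded and may rewrite its
    // bytecode. A real allocation profiler would insert a callback after
    // each "new" here; returning null means "leave the class unchanged".
    static class AllocationRecorder implements ClassFileTransformer {
        public byte[] transform(ClassLoader loader, String className,
                                Class<?> classBeingRedefined,
                                ProtectionDomain protectionDomain,
                                byte[] classfileBuffer) {
            return null;
        }
    }

    // Called by the VM before main() when the application is started with
    // -javaagent:allocagent.jar (the jar name is hypothetical).
    public static void premain(String agentArgs, Instrumentation inst) {
        inst.addTransformer(new AllocationRecorder());
    }
}
```

The key design point is that the VM no longer has to report every allocation itself; the tool rewrites the application's classes to report allocations, so untraced code pays no cost.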
In Mustang (Java SE 6.0), HPROF gets two new siblings which bring new ways to generate heap dumps.
The first sibling is the built-in heap dumper. In production environments you probably don't want to run
your application with an agent like HPROF as it adds significant overhead to the running application. But, if
your application fails with java.lang.OutOfMemoryError
then it might be useful to have a heap dump created automatically. This is where the built-in heap dumper comes
in. Once you have a heap dump you can browse it offline and hopefully figure out what is consuming all the memory.
The built-in heap dumper can also be used to take a snapshot of the heap at other times. This is done using the jmap command line utility or the jconsole monitoring and management console. This is also very useful - if you are monitoring an application with jconsole and you see the heap usage growing over time then you might want to take a snapshot of the heap to see what is going on.
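A snapshot can also be requested from inside the application itself through the HotSpot-specific com.sun.management.HotSpotDiagnosticMXBean, which exposes a dumpHeap operation. A hedged sketch (the class and file names here are just for illustration, and the bean is only present on the HotSpot VM):

```java
import java.lang.management.ManagementFactory;

import com.sun.management.HotSpotDiagnosticMXBean;

public class HeapSnapshot {

    // Look up the HotSpot diagnostic bean on the platform MBean server.
    static HotSpotDiagnosticMXBean diagnosticBean() {
        try {
            return ManagementFactory.newPlatformMXBeanProxy(
                    ManagementFactory.getPlatformMBeanServer(),
                    "com.sun.management:type=HotSpotDiagnostic",
                    HotSpotDiagnosticMXBean.class);
        } catch (java.io.IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) throws Exception {
        // true => dump only live objects (a GC runs first); the output
        // file name is arbitrary.
        diagnosticBean().dumpHeap("snapshot.hprof", true);
        System.out.println("dump written to snapshot.hprof");
    }
}
```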
The second new sibling in the family specializes in pathology. If you are unlucky enough to get a crash then it might be useful to look at what was in the heap at the time of the crash. This is where the second heap dumper comes in, as it can generate a heap dump from a core file. It can also be used to obtain a heap dump from an application that is completely hung. A word of warning here - with a crash it is possible that there is some heap corruption, so there is no guarantee that a heap dump can be obtained.
Before we meet the new heap dumpers it is important to mention that they generate simple heap dumps. That is, the dump files contain information about all the objects and classes in the heap but they do not contain information about where the objects are allocated. If you need this information then it is best to run with a JVM TI agent that records this. The NetBeans Profiler is particularly good at this.
Now let us meet the new heap dumpers ...
First, here is an example where we run an application, called ConsumeHeap, with a flag that tells the HotSpot VM to generate a heap dump when we run out of memory:
$ java -XX:+HeapDumpOnOutOfMemoryError -mn256m -mx512m ConsumeHeap
java.lang.OutOfMemoryError: Java heap space
Dumping heap to java_pid2262.hprof ...
Heap dump file created [531535128 bytes in 14.691 secs]
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
        at ConsumeHeap$BigObject.<init>(ConsumeHeap.java:22)
        at ConsumeHeap.main(ConsumeHeap.java:32)
$
ConsumeHeap, as expected, fills up the Java heap and runs out of memory. When java.lang.OutOfMemoryError is thrown, a heap dump file is created. In this case the file is 507MB and is created as java_pid2262.hprof in the current directory. If you don't want big dump files in the application working directory then the HeapDumpPath option can be used to specify an alternative location - for example, -XX:HeapDumpPath=/disk2/dumps will cause the heap dump to be generated in the /disk2/dumps directory.
[As a complete aside, notice that the java.lang.OutOfMemoryError has a stack trace - this is also new in Mustang; in J2SE 5.0 the OutOfMemoryError would have been thrown without any stack trace.]
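The source of ConsumeHeap isn't shown in the post; a minimal sketch of a program with this behavior (the class names mirror the stack trace above, but the body is my own guess) might look like:

```java
import java.util.ArrayList;
import java.util.List;

public class ConsumeHeap {

    // Each BigObject pins roughly a megabyte of memory.
    static class BigObject {
        final byte[] payload = new byte[1024 * 1024];
    }

    // Allocate up to max BigObjects, keeping every one reachable so
    // none of the memory can ever be reclaimed by the collector.
    static List<BigObject> fill(long max) {
        List<BigObject> retained = new ArrayList<BigObject>();
        for (long i = 0; i < max; i++) {
            retained.add(new BigObject());
        }
        return retained;
    }

    public static void main(String[] args) {
        // Eventually throws java.lang.OutOfMemoryError: Java heap space.
        fill(Long.MAX_VALUE);
    }
}
```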
Now let's look at a second example:
C:\> jmap -dump:file=app.bin 345
Dumping heap to C:\temp\app.bin ...
Heap dump file created
C:\>
This example uses the
jmap utility to generate
a heap dump of the Java application running as process 345. Astute readers will observe that
this example was done on Microsoft Windows, but the jmap utility was only included
with the J2SE 5.0 releases for Solaris and Linux. That is still partly true for Mustang, but
jmap.exe is now included and supports a subset of the options available on the other
platforms. In particular the
-dump option is there on all platforms.
Now let us look at an example that generates a heap dump from a core file. As crashes are rare I've cheated a bit
by getting a core file with the Solaris gcore utility:
$ gcore 5831
gcore: core.5831 dumped
$ jmap -dump:file=app.bin `which java` core.5831
Attaching to core core.5831 from executable /opt/java/bin/java, please wait...
Debugger attached successfully.
Server compiler detected.
JVM version is 1.6.0-ea-b52
Dumping heap to app.bin ...
Unknown oop at 0xf14b8650
Oop's klass is 0xf14b54d0
Unknown oop at 0xf14e67a8
Oop's klass is null
Heap dump file created
$
The arguments after the
-dump option are the executable and the core image file. I
used `which java`, which gives me the pathname of
java. Needless to say, these need to
match. Less obvious is that you can only use jmap from the same JDK build as the executable too. So, for
example, if you have a core file from 6.0-ea-b52 then you need to use jmap from 6.0-ea-b52 to generate a heap
dump from the core file.
You might notice a few warnings in the output. The
Unknown oop at ... messages might be a bit
off-putting, but remember that the core image is taken at an arbitrary time. There is no guarantee that the heap
and other data structures are in a consistent state when the crash dump is obtained.
At this point your disks are probably full of heap dumps and you are wondering how to examine them. In the introduction I mentioned the Heap Analysis Tool (HAT). HAT is now included in Mustang (since b51) as a command line utility called "jhat".
Getting started with jhat is easy - just give it the name of the heap dump file:
$ jhat java_pid2278.hprof
Started HTTP server on port 7000
Reading from java_pid2278.hprof...
Dump file created Sun Sep 18 17:18:38 BST 2005
Snapshot read, resolving...
Resolving 6162194 objects...
Chasing references, expect 12324 dots.........................................................
Eliminating duplicate references..............................................................
Snapshot resolved.
Server is ready.
At this point HAT has started an HTTP server on port 7000. Point your browser to
http://localhost:7000 to connect to the HAT server.
[HAT requires a lot of memory so if you try it and it fails with OutOfMemoryError then you might need to give it
a larger heap size (for example:
jhat -J-mx512m java_pid2278.hprof). The memory consumption of HAT
is currently being worked on - expect to see improvements in b54 or b55.]
When you connect to the HAT server you should see a list of all the classes in the heap dump. This is the
All Classes query. Scroll to the bottom of the page to see the other queries.
A full description of all the queries is more than I have time for today, but you can read more in the HAT documentation.
Those that are familiar with HAT will not see many differences between HAT 1.1 and jhat in b52. However, you can expect improvements in b53 (should be available on 9/23). For starters there are a number of robustness improvements - like being able to parse incomplete and truncated heap dumps. There are also various fixes, and finally HAT will be able to read heap dumps generated on 64-bit systems.
Small improvements aside, a useful (and very interesting) addition in b53 is that Sundar Athijegannathan has added a scripting interface to HAT. This makes use of the new scripting API. This addition allows developers to enter arbitrary queries into the browser, so they aren't tied to the canned queries that HAT provides. Keep an eye on Sundar's blog for updates on this topic.
To sum up: Mustang (Java SE 6.0) allows heap dumps to be obtained at out-of-memory time, at any time during the lifetime of the application, or even after an application dies with a crash. Also, with jhat included in the JDK, there is a simple out-of-the-box utility to examine the heap dumps and do rudimentary memory analysis. So heap dumps are back, and back with a vengeance!