Heap dumps are back with a vengeance!

The HPROF agent has been around since 1998 and the early betas of J2SE 1.2. One of its more useful features, at the time, was the ability to generate a heap dump: a dump of all the live objects and classes. The HPROF agent also records where objects are allocated, and this information is written to the dump file too. Heap dumps aren't very useful without a tool to read them, but a novel tool called the Heap Analysis Tool (HAT), courtesy of Bill Foote, was released at around the same time. HAT was very useful as it allowed developers to browse the heap dump and run rudimentary queries to debug memory leak problems.

HPROF fell on hard times in J2SE 1.3 and 1.4. One consequence was that its heap dumps weren't always readable by HAT. The background is that HPROF used an experimental profiling interface called JVMPI. JVMPI was designed for the original classic VM, where it worked well. An implementation of JVMPI was created for its replacement, the HotSpot VM, but it was problematic. The root of the issue is that JVMPI wasn't really designed with modern virtual machines in mind: it required events from places that are highly optimized in modern virtual machines. When started with the heap=dump option, HPROF required the OBJECT_ALLOC event. This was one of the more troublesome events as it inhibited many optimizations and didn't work with all garbage collection implementations.

HPROF returned to its glory days in J2SE 5.0 thanks to a complete makeover by Kelly O'Hair. The catalyst for the makeover (actually a complete re-write) was JSR-163, which defined a new tool interface called the JVM Tool Interface (JVM TI). JVM TI broke from the past and didn't provide some events that one might expect in a profiling interface. In particular, it doesn't provide an object allocation event; in its place the interface provides support for tools to do bytecode instrumentation. With HPROF re-implemented, heap dumps were working again. HAT was back in business!

In Mustang (Java SE 6.0), HPROF gets two new siblings that bring new ways to generate heap dumps.

  • The first sibling is the built-in heap dumper. In production environments you probably don't want to run your application with an agent such as HPROF, as it adds significant overhead to the running application. But if your application fails with java.lang.OutOfMemoryError then it might be useful to have a heap dump created automatically. This is where the built-in heap dumper comes in. Once you have a heap dump you can browse it offline and hopefully figure out what is consuming all the memory.

    The built-in heap dumper can also be used to take a snapshot of the heap at other times. This is done using the jmap command line utility or the jconsole monitoring and management console. This is also very useful: if you are monitoring an application with jconsole and you see the heap usage growing over time, then you might want to take a snapshot of the heap to see what is going on.

  • The second new sibling in the family specializes in pathology. If you are unlucky enough to get a crash then it might be useful to look at what was in the heap at the time of the crash. This is where the second heap dumper comes in, as it can generate a heap dump from a core file. It can also be used to obtain a heap dump from an application that is completely hung. A word of warning here: with a crash it is possible that there is some heap corruption, so there is no guarantee that a heap dump can be obtained.
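
Snapshots don't have to come from an external tool, either: on Sun's JDK the built-in dumper is also exposed through the HotSpotDiagnostic MXBean, so an application can dump its own heap. A minimal sketch (com.sun.management is a Sun-specific API; the class name DumpNow and the file name are mine):

```java
import java.io.File;
import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

public class DumpNow {
    public static void main(String[] args) throws Exception {
        // The HotSpotDiagnostic MXBean is registered under this name on Sun's JDK.
        HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);

        File dump = new File("snapshot.hprof");
        if (dump.exists()) {
            dump.delete();                   // dumpHeap refuses to overwrite an existing file
        }
        bean.dumpHeap(dump.getPath(), true); // true = dump only live (reachable) objects
        System.out.println("Wrote " + dump.length() + " bytes to " + dump.getPath());
    }
}
```

This is the same mechanism jconsole uses when you ask it for a heap snapshot.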

Before we meet the new heap dumpers it is important to mention that they generate simple heap dumps. That is, the dump files contain information about all the objects and classes in the heap but they do not contain information about where the objects are allocated. If you need this information then it is best to run with a JVM TI agent that records this. The NetBeans Profiler is particularly good at this.

Now let us meet the new heap dumpers ...

First, here is an example where we run an application, called ConsumeHeap, with a flag that tells the HotSpot VM to generate a heap dump when we run out of memory:

$ java -XX:+HeapDumpOnOutOfMemoryError -mn256m -mx512m ConsumeHeap
java.lang.OutOfMemoryError: Java heap space
Dumping heap to java_pid2262.hprof ...
Heap dump file created [531535128 bytes in 14.691 secs]
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
        at ConsumeHeap$BigObject.<init>(ConsumeHeap.java:22)
        at ConsumeHeap.main(ConsumeHeap.java:32)
$

ConsumeHeap, as expected, fills up the Java heap and runs out of memory. When java.lang.OutOfMemoryError is thrown a heap dump file is created. In this case the file is 507MB and is created as java_pid2262.hprof in the current directory. If you don't want big dump files in the application working directory then the HeapDumpPath option can be used to specify an alternative location - for example -XX:HeapDumpPath=/disk2/dumps will cause the heap dump to be generated in the /disk2/dumps directory.
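
The source of ConsumeHeap isn't shown above, but a program like it needs nothing exotic. Here is a guess at what it might look like - the BigObject nested class matches the stack trace, though the field name and object size are invented:

```java
import java.util.ArrayList;
import java.util.List;

public class ConsumeHeap {
    // Roughly 1MB per instance, so the heap fills quickly.
    static class BigObject {
        final byte[] payload = new byte[1024 * 1024];
    }

    public static void main(String[] args) {
        List<BigObject> hoard = new ArrayList<BigObject>();
        while (true) {
            hoard.add(new BigObject()); // never released, so OutOfMemoryError is inevitable
        }
    }
}
```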

[As a complete aside, notice that the java.lang.OutOfMemoryError has a stack trace. This is also new in Mustang; in J2SE 5.0 the OutOfMemoryError would have been thrown without any stack trace.]

Now let's look at a second example:

C:\> jmap -dump:file=app.bin 345
Dumping heap to C:\temp\app.bin ...
Heap dump file created
C:\>

This example uses the jmap utility to generate a heap dump of the Java application running as process 345. Astute readers will observe that this example was done on Microsoft Windows, but the jmap utility was only included with the J2SE 5.0 releases for Solaris and Linux. This is semi-true for Mustang too, but jmap.exe is now included and supports a subset of the options available on the other platforms. In particular, the -dump option is there on all platforms.

Now let us look at an example that generates a heap dump from a core file. As crashes are rare I've cheated a bit by getting a core file with the Solaris gcore command:

$ gcore 5831
gcore: core.5831 dumped
$ jmap -dump:file=app.bin `which java` core.5831            
Attaching to core core.5831 from executable /opt/java/bin/java, please wait...
Debugger attached successfully.
Server compiler detected.
JVM version is 1.6.0-ea-b52
Dumping heap to app.bin ...
Unknown oop at 0xf14b8650
Oop's klass is 0xf14b54d0
Unknown oop at 0xf14e67a8
Oop's klass is null
Heap dump file created
$

The arguments after the -dump option are the executable and the core image file. I used `which java`, which gives me the pathname of java. Needless to say, these need to match. Less obvious is that you can only use jmap from the same JDK build as the executable too. So, for example, if you have a core file from 6.0-ea-b52 then you need to use jmap from 6.0-ea-b52 to generate a heap dump from the core file.

You might notice a few warnings in the output. The messages Unknown oop at ... might be a bit off-putting but remember that the core image is taken at an arbitrary time. There is no guarantee that the heap and other data structures are in a consistent state when the crash dump is obtained.

At this point your disks are probably full of heap dumps and you are wondering how to examine them. In the introduction I mentioned the Heap Analysis Tool (HAT). HAT is now included in Mustang (since b51) as a command line utility called "jhat".

Getting started with jhat is easy - just give it the name of the heap dump file:

$ jhat java_pid2278.hprof
Started HTTP server on port 7000
Reading from java_pid2278.hprof...
Dump file created Sun Sep 18 17:18:38 BST 2005
Snapshot read, resolving...
Resolving 6162194 objects...
Chasing references, expect 12324 dots.........................................................
Eliminating duplicate references..............................................................
Snapshot resolved.
Server is ready.

At this point HAT has started an HTTP server on port 7000. Point your browser to http://localhost:7000 to connect to the HAT server.

[HAT requires a lot of memory so if you try it and it fails with OutOfMemoryError then you might need to give it a larger heap size (for example: jhat -J-mx512m java_pid2278.hprof). The memory consumption of HAT is currently being worked on - expect to see improvements in b54 or b55.]

When you connect to the HAT server you should see a list of all the classes in the heap dump. This is the All Classes query. Scroll to the bottom of the page to see the other queries. A full description of all the queries is more than I have time for today but you can read more in the HAT README.

Those who are familiar with HAT will not see many differences between HAT 1.1 and jhat in b52. However, you can expect improvements in b53 (should be available on 9/23). For starters there are a number of robustness improvements - like being able to parse incomplete and truncated heap dumps. There are also various fixes, and finally HAT will be able to read heap dumps generated on 64-bit systems.

Small improvements aside, a useful (and very interesting) addition in b53 is that Sundar Athijegannathan has added a scripting interface to HAT. This makes use of the new scripting API. This addition allows developers to enter arbitrary queries into the browser, so they aren't tied to the canned queries that HAT provides. Keep an eye on Sundar's blog for updates on this topic.
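
To give a flavor of what such queries look like, here is the kind of OQL statement the scripting interface accepts (based on jhat's OQL syntax; note that s.count relies on java.lang.String having a count field, which it does in the JDKs of this era):

```
select s from java.lang.String s where s.count >= 100
```

This finds every string of 100 characters or more in the dump.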

To sum up: Mustang (Java SE 6.0) allows heap dumps to be obtained when the application runs out of memory, at any time during the lifetime of the application, or even after an application dies with a crash. And with jhat included in the JDK, there is a simple out-of-the-box utility to examine the heap dumps and do rudimentary memory analysis. So heap dumps are back, and back with a vengeance!

Comments:

(sorry I added this comment to the wrong blog earlier - I have punished myself...) Alan - you noted that the message "Unknown oop at 0xblah" is "ok" when looking at heaps in core files, as things might be inconsistent at that point. I see the same messages when using jmap -histo pid etc. in JDK 1.5.0 and was wondering if the same thing applies there. My guess is that it is the same, as the tools just attach to the pid at an arbitrary point in time, so that would be much like the core file effect. Is this correct?

Posted by Chrsi Markle on October 17, 2005 at 02:43 PM PDT #

Yes, it is the same thing. The jmap, jinfo, and jstack utilities in JDK5.0 attach to the target VM using a non-cooperative mechanism and thus observe a snapshot of the process. In Mustang we have added a cooperative mechanism so that we can safepoint the VM and ensure that the tools print consistent output. So far only a small set of options uses this mechanism and of course we need to retain the non-cooperative approach for the difficult cases where the process is completely hung (also core dumps). In this blog entry I used jmap -dump:format=b <pid> and that uses the cooperative mechanism to ensure we get a consistent dump. If you're not getting a reasonable histogram from jmap with JDK5.0 then an alternative solution is the heap viewer demo. This is a tool agent that runs in the VM. You'll find the source and pre-built binaries in the demo/jvmti/heapViewer directory of the JDK. The output is similar to jmap -histo in that it prints a class-wise histogram of the objects in the heap.

Posted by Alan Bateman on October 17, 2005 at 06:50 PM PDT #

Will jmap -dump:format=a dump the HPROF dump in the ASCII format? And why is the dump option not documented in the jmap documentation?

Posted by Indrajit Poddar on February 15, 2006 at 05:12 AM PST #

The heap dump is in binary format (format=b). If you want an ascii file then it would be easy to write something to process the dump and emit an ascii version.

jmap -help will give you the full list of options. Here's the output on Solaris or Linux:

$ jmap -help
Usage:
    jmap [option] <pid>
        (to connect to running process)
    jmap [option] <executable <core>
        (to connect to a core file)
    jmap [option] [server_id@]<remote server IP or hostname>
        (to connect to remote debug server)

where <option> is one of:
    <none>               to print same info as Solaris pmap
    -heap                to print java heap summary
    -histo               to print histogram of java object heap
    -permstat            to print permanent generation statistics
    -finalizerinfo       to print information on objects awaiting finalization
    -dump:<dump-options> to dump java heap in hprof binary format
                         dump-options:
                           format=b     binary format
                           file=<file>  dump heap to <file>
                         Example: jmap -dump:format=b,file=heap.bin <pid>
    -F                   force. Use with -dump:<dump-options> <pid> or -histo
                         to force a heap dump or histogram when <pid> does not
                         respond.
    -h | -help           to print this help message
    -J<flag>             to pass <flag> directly to the runtime system

and on Windows:

C:\> jmap -help
Usage:
    jmap -histo <pid>
      (to connect to running process and print histogram of java object heap
    jmap -dump:<dump-options> <pid>
      (to connect to running process and dump java heap)

    dump-options:
      format=b     binary default
      file=<file>  dump heap to <file>

    Example:       jmap -dump:format=b,file=heap.bin <pid>

The man page on java.sun.com will be updated post beta.

Posted by Alan Bateman on February 15, 2006 at 05:42 PM PST #

Hi, are there specifications for the binary format of HPROF? What is the format of the heap dump created by the built-in heap dumper? I'm working on a Java debugger and would like to create a built-in UI to show HPROF data. Thanks

Posted by Sameer on April 10, 2006 at 11:05 PM PDT #

Is "jmap -histo" supposed to trigger a garbage collection? If it does not, and you are trying to figure out which objects are causing a memory leak, it will be hard to tell which objects are leaking unless a GC occurs before the histogram is created. Any ideas on how a GC can be forced non-programmatically before doing a "jmap -histo"?

Posted by Veerendra Chirala on May 19, 2006 at 06:17 AM PDT #

Do you know of an existing tool for converting heap dump from binary to ascii format? I wouldn't want to reinvent the wheel if it already exists.

Posted by Bill Au on June 08, 2006 at 02:10 AM PDT #

Using 1.5.0_07, I tried to trigger a dump using jmap with the following command:
jmap -heap:format=b 2231 > fa22_hprof_dump
as there is no -dump parameter on 1.5.0_07.
However, after successfully connecting to the VM process and working for a while, all I got is:
Exception in thread "main" sun.jvm.hotspot.debugger.UnalignedAddressException: Trying to read at address: 0x000000000000000a with alignment: 4
        at sun.jvm.hotspot.debugger.linux.LinuxDebuggerLocal$1.checkAlignment(LinuxDebuggerLocal.java:163)
        at sun.jvm.hotspot.debugger.linux.LinuxDebuggerLocal.readCInteger(LinuxDebuggerLocal.java:449)
        at sun.jvm.hotspot.debugger.DebuggerBase.readAddressValue(DebuggerBase.java:425)
        at sun.jvm.hotspot.debugger.linux.LinuxDebuggerLocal.readOopHandle(LinuxDebuggerLocal.java:413)
        at sun.jvm.hotspot.debugger.linux.LinuxAddress.getOopHandleAt(LinuxAddress.java:98)
        at sun.jvm.hotspot.oops.Oop.getKlassForOopHandle(Oop.java:175)
        at sun.jvm.hotspot.oops.ObjectHeap.newOop(ObjectHeap.java:346)
        at sun.jvm.hotspot.runtime.JavaThread.getThreadObj(JavaThread.java:333)
        at sun.jvm.hotspot.utilities.AbstractHeapGraphWriter.writeJavaThreads(AbstractHeapGraphWriter.java:113)
        at sun.jvm.hotspot.utilities.AbstractHeapGraphWriter.write(AbstractHeapGraphWriter.java:98)
        at sun.jvm.hotspot.utilities.HeapHprofBinWriter.write(HeapHprofBinWriter.java:357)
        at sun.jvm.hotspot.tools.JMap.writeHeapHprofBin(JMap.java:134)
        at sun.jvm.hotspot.tools.JMap.run(JMap.java:71)
        at sun.jvm.hotspot.tools.Tool.start(Tool.java:204)
        at sun.jvm.hotspot.tools.JMap.main(JMap.java:126)
Do you per chance know anything I could do about it? Thanks, Jörg.

Posted by Jörg von Frantzius on July 19, 2006 at 09:54 PM PDT #

The heap dumping capabilities discussed in this blog are new in Mustang (Java SE 6), which is why there isn't a jmap -dump option. The jmap -heap:format=b option that you found is a way to recover a heap dump from a core file or process image. It can observe the heap in an inconsistent state, and that leads to exceptions such as this one. There have been some improvements recently so that it recovers from some errors, but there still isn't a guarantee that the heap dump will be complete and parsable - sorry. The built-in heap dumper in Mustang does not suffer from these issues.

Posted by Alan on July 20, 2006 at 12:30 AM PDT #

Hi Alan, I'm unable to read the heap dump (heap.bin) of java 1.5.0_05 using the HAT tool or NetBeans Profiler 6.0 (HeapWalker); it results in "java.io.IOException: Stack trace not found for serial #0". I think there might be a problem with heap.bin. Have you tried a heap dump of java 1.5.0_05 with the HAT tool or HeapWalker? Suggest me the way to read the heap.bin file. A heap.bin file of java 1.6 + HeapWalker works fine. Thanks, - Chethan

Posted by chethan on October 19, 2006 at 07:20 PM PDT #

Chethan - with HAT I think you need to use the -stack false option so that it reads dumps that don't have allocation stack traces. You shouldn't need this option if you are using jhat. In any case, as you are using 5.0u5, the heap dump is generated from the process image and there is no guarantee that it will be parseable by HAT, jhat, or the NB heap walker.

Posted by Alan on October 19, 2006 at 07:41 PM PDT #

Hi Alan, Our product is java 1.4.x based and we have a memory leak. I tried newer versions of java 1.4 and even went to java 1.5, but was not able to get hat to work properly. After reading your blog, I installed jdk1.6.0. I was now able to successfully get a binary dump for the first time. However, when trying to read it w/ jdk1.6 jhat, I get the following error. Can you please advise as I've already spent 3 days struggling to get hat/jhat to work. Thanks, Dave

Exception in thread "main" java.lang.ClassCastException: java.lang.Integer cannot be cast to com.sun.tools.hat.internal.model.JavaClass
        at com.sun.tools.hat.internal.model.JavaObjectArray.resolve(JavaObjectArray.java:68)
        at com.sun.tools.hat.internal.model.Snapshot.resolve(Snapshot.java:237)
        at com.sun.tools.hat.Main.main(Main.java:152)

Posted by Dave on November 19, 2006 at 09:56 PM PST #

I captured the dump file with 2 control-breaks during appropriate spots in the run and a last one at the kill of the jvm. -Xrunhprof:heap=dump,format=b I tried just a plain read of the file w/ jhat java.hprof thanks.

Posted by Dave on November 19, 2006 at 10:01 PM PST #

Dave - I believe that exception arises when the dump file is incomplete. It sounds like you are using the HPROF agent so this could be a HPROF agent bug. Can you submit a bug? In any case, if you are using 1.4.2, 5.0, or jdk6 then you can run with -XX:+HeapDumpOnOutOfMemoryError which will automatically generate a heap dump when OOME is thrown. The HPROF agent is not required to do that. Or, in jdk6 just do a "jmap -dump:file=<file> <pid>" to take a snapshot of the heap from a running VM. For either case the heap dump should always be readable by jhat, HAT, or other profilers that can import this format.

Posted by Alan on November 19, 2006 at 10:13 PM PST #

Hi alan i tried to use jhat, ie:
jmap -dump:format=b,file=heap.bin <ProcessID>
jhat heap.bin
It gives the following error message:

Reading from heap.bin...
Dump file created Thu Mar 08 14:25:14 IST 2007
Snapshot read, resolving...
Resolving 52286 objects...
Chasing references, expect 10 dots..........
Eliminating duplicate references..........
Snapshot resolved.
Exception in thread "main" java.lang.RuntimeException: java.lang.NullPointerException
        at com.sun.tools.hat.internal.oql.OQLEngine.init(OQLEngine.java:277)
        at com.sun.tools.hat.internal.oql.OQLEngine.<init>(OQLEngine.java:51)
        at com.sun.tools.hat.internal.server.QueryListener.setModel(QueryListener.java:59)
        at com.sun.tools.hat.Main.main(Main.java:189)
Caused by: java.lang.NullPointerException
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:591)
        at com.sun.tools.hat.internal.oql.OQLEngine.init(OQLEngine.java:256)
        ... 3 more

I want to know how far it is dependent on the VM. I mean to say, if I use a VM other than the Sun HotSpot VM, will jhat work fine?

Posted by varsha on March 26, 2007 at 04:45 PM PDT #

Hi Varsha: jhat's OQL engine depends on the JavaScript engine bundled in Sun's JDK 6. You may want to check whether JavaScript works properly - you can use the "jrunscript" tool to do so.

Posted by A. Sundararajan on March 26, 2007 at 05:54 PM PDT #

js> JavaScript, is this the right way to check to see whether javascript is working properly or not? How can i produce proof if there is vm dependency while using jhat. If there is no dependency on vm please let me know .

Posted by varsha on March 26, 2007 at 07:03 PM PDT #

@varsha - please tell us which JDK release and build you are using. If you are using an early build of jdk7 then the NPE might be explained by a build issue with some of the JavaScript classes that was recently fixed. As regards VM dependencies - I'm not aware of any VM-specific dependencies in jhat or OQL. There's also nothing VM-specific in the heap dump, and there are various tools available to read that format.

Posted by Alan on March 26, 2007 at 07:06 PM PDT #

I am using IBM jdk6 and the build is 20070306_01.

Posted by varsha on March 26, 2007 at 07:14 PM PDT #

@varsha - it's possible that IBM's jdk6 release includes the interfaces but not the JavaScript engine that OQL needs. jhat does have code to disable OQL when run with pre-jdk6 releases, but it doesn't seem to handle the case where the APIs exist but the engine does not. I would suggest just downloading Sun's jdk6 release and jhat/OQL should work fine.

Posted by Alan on March 26, 2007 at 07:25 PM PDT #

I am using the 64-bit JVM - 1.5.0_11-b03. I have successfully used HAT in the past on a 32-bit JVM. Our production system recently had an OutOfMemoryError and produced an .hprof file, as we were using the -XX:+HeapDumpOnOutOfMemoryError option. I tried running HAT to analyze this, but got the message "java.io.IOException: I'm sorry, but I can't deal with an identifier size of 8. I can only deal with 4." I tried using the "-d64" option when the JVM was started for HAT, but I got the same error. CAN I use HAT to analyze a file created with a 64-bit JVM? Are there any other Java tools that can be used with an .hprof file (I can see it's binary)?

Posted by Lynn on April 02, 2007 at 04:22 AM PDT #

@Lynn - the update to HAT is called jhat and it is bundled in jdk6. It can read dumps from 64-bit VMs.

Posted by Alan on April 04, 2007 at 06:04 PM PDT #

I'm confused :) Is there a way to get a heapdump from a running Sun JVM 1.5.0_05-b05, running on Linux 32Bit?

Posted by Rene A. on June 28, 2007 at 01:21 AM PDT #

Rene - in the upcoming 5.0u17 (and also 1.4.2u16) there will be a new option -XX:+HeapDumpOnCtrlBreak that will allow heap dumps to be generated on demand. The jmap -dump option that this blog was about is something that was new in jdk6.

Posted by A. on June 28, 2007 at 03:04 AM PDT #

Thanks!

Just to get this right. I'm running on 1.5.0_b5, and to get the -XX:+HeapDumpOnOutOfMemoryError option, I know I have to update at least to 1.5.0_07.

Since I'm searching for a way to get a heap dump from a production server (where I can't easily upgrade the JDK), I hoped I could get away with the following:
1) jps
2) jmap -dump:format=b,file=prog.bin pid

But it seems that it would simply be best to "just upgrade" to a newer 1.5.0_xx JVM and have -XX:+HeapDumpOnOutOfMemoryError available :)

Posted by Rene A. on June 28, 2007 at 05:48 PM PDT #

As I said, the jmap -dump option is new in jdk6. If you are on Solaris or Linux there is a jmap -heap option that is used to recover a heap dump from a core dump or hung process. That is available in 5.0. It's very much for post-mortem analysis and there is no guarantee that it can recover a usable dump file. If you are upgrading then maybe go for the latest (6.0 update 1 or 2 is best).

Posted by A. on June 28, 2007 at 06:02 PM PDT #

Can I send the following commands to HPROF via the socket connection to ask it to do some jobs before the JVM agent (hprof) terminates?

-------------------------

DUMP HEAP     0x02
ALLOC SITES   0x03
HEAP SUMMARY  0x04
EXIT THE VM   0x05
DUMP TRACES   0x06
CPU SAMPLES   0x07
CONTROL       0x08
EOF           0xFF

----------------------

I got the info from the hprof manual: ..demo\jvmti\hprof\src\manual.html

e.g. can I use the code below to ask hprof to do a heap dump?

---------------------

outputStream = new DataOutputStream(_socket.getOutputStream());
byte[] cmd = new byte[1];
cmd[0] = 0x02; // DUMP HEAP
outputStream.write(cmd);

----------------------

Posted by hobby on July 04, 2007 at 02:25 AM PDT #

And can we get the timestamp of when the GC occurred from HPROF?

Posted by hobby on July 04, 2007 at 06:35 PM PDT #

Alan, I am seeing that with JDK 1.6.0 and JDK 1.6.0_01, jmap -histo is not triggering a full GC before creating a snapshot. This seemed to work correctly with mustang-b83. Do you know if the fix to do a full GC with jmap -histo has for any reason been dropped from JDK 1.6.0?

Posted by Veerendra Chirala on July 24, 2007 at 05:51 AM PDT #

Veerendra - you need the "live" sub-option to trigger a GC, ie: jmap -histo:live <pid> should do it.

Posted by A. on July 24, 2007 at 05:57 AM PDT #

Alan, my java app in production is based on 1.5.0_11. To troubleshoot the hangs and OOME, are these options correct:
=> ctrl+break on-demand dumps to be analyzed with jhat cannot be taken in 1.5.0_11; I have to upgrade to 1.5.0_17 (5.0u17) with -XX:+HeapDumpOnCtrlBreak =OR= upgrade to 1.6 update 2 and use jmap -dump and jhat.

Q1. Is it safe to use -XX:+HeapDumpOnCtrlBreak and -XX:+HeapDumpOnOutOfMemoryError in production? Would they affect performance?

Q2. Can I use the Windows 1.6 jstack on an application running on a 1.5 JVM? Doing this gives me a pre-6.0 jvm dll error.
What's the way to get a stack trace of a hung 1.5.0_11 process?

Nilesh

Posted by Nilesh on August 29, 2007 at 02:09 AM PDT #

I see at the Sun website that the latest jdk5 is u14; looks like u17 will take a long time to be available.

I see -XX:+HeapDumpOnCtrlBreak is already available in u14, but JDK6 doesn't support it.

Posted by Claudio Miranda on November 13, 2007 at 08:39 PM PST #

You don't need HeapDumpOnCtrlBreak in jdk6. It is much easier to use jmap with the -dump option.

Posted by guest on November 13, 2007 at 08:41 PM PST #

I can get heap dump when running JDK 1.5 or 1.6 but I need to try and get heap dump when running on a live system running jre 1.5.11. We don't really want to install full JDK on live system - can you tell me which jars are required? Thanks.

Posted by John J Smith on November 25, 2007 at 10:19 PM PST #

Hi,

I am taking heap dump using javac
javac -J-agentlib:hprof=heap=all

The heap dump file generated doesn't have objects related to my product classes. It has only core objects related to built-in java classes.

Please help me with this to take heap dump of entire heap.

Also, looks like jmap utility is not supported with jdk 1.5

-Thanks
Aditya

Posted by Aditya on February 25, 2008 at 02:07 PM PST #
