Investigating java.lang.OutOfMemoryError with Apps 11i Middle Tier JVMs

If you had to guess what the error "java.lang.OutOfMemoryError" indicates when seen in your Apache log files, you would probably be right...

In most cases this message means the Java process has exhausted its available memory and cannot process some requests successfully.  Additional symptoms often include:

  • Poor web page performance
  • User requests being timed out
  • Java processes taking 100% CPU on your server
The Java Virtual Machine (JVM) will sometimes be restarted by oprocmgr because performance is so poor that it presumes the JVM has died.

There can be other causes.  For example, this message can also appear when some other resource is exhausted, such as threads per process (kernel parameter "max_thread_proc" on HP-UX), but that case should be easily identifiable from the exact message written out.

Where has all the memory gone?

Every object created in Java takes some memory.  Once no live Java code references an object, the Garbage Collector (GC) automatically removes the object and reclaims the memory.

You can run out of memory due to:

  • Insufficient memory allocated to the JVM to cope with the amount of work
  • Suboptimal GC processing
  • A memory leak -- Java code is not releasing an object when it should
  • A Java code or operating system defect
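The memory leak case is worth illustrating. A classic pattern is a static collection that is only ever added to, so the GC can never reclaim the entries; the class and method names below are purely hypothetical, a minimal sketch of the pattern rather than code from any Apps component:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical leak: the static list holds a strong reference to every
// request's data, so the GC can never reclaim any of it.
public class LeakyCache {
    private static final List<byte[]> CACHE = new ArrayList<byte[]>();

    // Called once per request; entries are added but never removed.
    public static void remember(byte[] requestData) {
        CACHE.add(requestData);
    }

    public static int size() {
        return CACHE.size();
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            remember(new byte[1024]); // ~1 KB retained per call, forever
        }
        // With enough traffic this steady growth ends in OutOfMemoryError.
        System.out.println("Entries pinned in memory: " + size());
    }
}
```

The fix in such cases is to remove entries when they are no longer needed, or to hold them via a bounded or weakly-referenced cache.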
How are JVMs configured with Apps 11i?

Two important configuration files for Java in an Apps environment are jserv.properties and your AutoConfig context file (CONTEXT.xml).

jserv.properties controls the JVM parameters in its "wrapper.bin.parameters=" entries.  When you install Apps originally or update the JDK version, the default settings are updated in your CONTEXT.xml file ("s_jvm_options" for the OACoreGroup JVM) and controlled by AutoConfig thereafter.

What can I do about OutOfMemoryError?

  • Are you using Oracle Configurator?
Configurator can sometimes require huge chunks of memory for a relatively long period of time, so Oracle now recommends setting up separate JVMs specifically for Configurator.  For more details see "Setting Up a Dedicated JServ for Running Only Oracle Configurator" (Metalink Note 343677.1).

  • Increase available memory
An obvious choice, you may think, and this will often at least delay the onset of the issue, but it may not always solve it.  You can achieve this either by increasing the number of JVMs or by raising the memory settings ("-Xmx512M", for example).  In general, I would not recommend going much over 640MB per JVM, in order to keep Garbage Collection times to a reasonable level.
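In an 11i environment these heap flags live in the "wrapper.bin.parameters=" entries of jserv.properties (managed via the "s_jvm_options" context variable under AutoConfig). A fragment might look like the following; the values are illustrative only and should be tuned for your own workload and physical RAM:

```properties
# Illustrative only -- tune for your own workload and available RAM.
wrapper.bin.parameters=-verbose:gc
wrapper.bin.parameters=-Xms128M
wrapper.bin.parameters=-Xmx512M
```

Remember that under AutoConfig any hand edit to jserv.properties will be overwritten, so changes should be made through the context file.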

  • Tuning GC

You can influence how Java performs Garbage Collection.  I would normally recommend reviewing and possibly backing out any non-default JVM parameters first, as these are sometimes just carried forward because they were used with previous JDK versions.  As always, any tuning should be undertaken in a TEST environment to ensure the changes have a positive impact.

  • Using Concurrent Mark Sweep collector

With multi-CPU servers using JDK 1.4 and higher, the following parameters have been seen to improve performance and GC throughput on some environments:

-XX:+UseConcMarkSweepGC -XX:+UseParNewGC  

  • Identify the components that are using memory

Not always as easy as it sounds.  Generating class histograms or Java profiling may be the only options.

What data should I gather to investigate further ?

The initial investigation should focus on identifying the pattern of memory usage building up and how frequently GC is occurring.  The tools and information described below may help to achieve this.

  • Review stdout files

Basic Garbage Collection information is normally written to the $IAS_ORACLE_HOME/Apache/Jserv/logs/jvm/*.stdout file for each JVM.  Additional Garbage Collection information can be logged, for example:

-XX:+PrintGCTimeStamps -XX:+PrintGCDetails

  • Add more verbose GC logging


This allows you to see how much of each memory pool is being used.  For example, you may find it is the "Permanent Generation" space that is running out of memory, rather than the "Tenured Generation".  If this turns out to be the case, the "-XX:MaxPermSize" parameter could be increased to provide more memory.
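For example, permanent generation headroom could be raised by adding an entry such as the following to jserv.properties; the 256M value is illustrative, not a recommendation:

```properties
# Illustrative only -- size the permanent generation to your class load.
wrapper.bin.parameters=-XX:MaxPermSize=256M
```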

  • Collect class histograms


Implemented by some vendors in JDK 1.4 and higher, and can be very useful where available.  Beware of using this on HP-UX platforms running JDK 1.4, as it seems to crash the JVM rather than output the data!
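Where the HotSpot option is supported, histograms are typically produced by enabling the flag and then sending the JVM a SIGQUIT; the pid below is a placeholder for your actual JServ process id:

```sh
# Enable in jserv.properties (HotSpot option, JDK 1.4.2 and higher):
#   wrapper.bin.parameters=-XX:+PrintClassHistogram
# Then trigger a histogram dump into the JVM's stdout log:
kill -3 <jvm_pid>
```

The histogram appears in the corresponding *.stdout file, listing instance counts and bytes per class.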

  • Use JConsole

Discussed in a previous article: Using JConsole to monitor Apps 11i JVMs

  • Java Profiling


This should tell you exactly where the memory is being used, but profiling has a huge impact on performance, so gathering this data is usually only practical on a lightly used test environment.

  • Other clues

Does the DMS output for the JVM show increasing active connections or threads?

Do JDBC connections to the database keep increasing? What module names do they relate to?

Does temporarily disabling "AM Pool" reduce memory consumption (Profile option "FND: Application Module Pool Enabled")?

Are you seeing PL/SQL "cursor leaks"?  See, "Understanding and Tuning the Shared Pool" (Metalink Note 62143.1) in section "Checking hash chain lengths" for SQL to check this.

Are there any Java thread deadlocks or database deadlocks?

Are Java processes spinning to 100% CPU or is CPU use very low when the error occurs?

  • Look for platform references

Review your operating system vendor's web site to look for known issues or required platform patches for your Java version.

My Production system is failing every few hours, what can I do right now?

A possible quick fix is to increase memory and/or the number of JVMs.  Be sure that this won't cause the operating system to start swapping!

You could also consider a scheduled bounce of Apache every 12 or 24 hours if possible. 
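A scheduled bounce could be driven from cron using the standard 11i Apache control script, run as the applications owner; the time and path below are illustrative, and the script location depends on your own $COMMON_TOP and context name:

```sh
# Illustrative crontab entry: restart Apache/JServ at 02:00 daily.
0 2 * * * /u01/app/common/admin/scripts/PROD_host/adapcctl.sh stop && /u01/app/common/admin/scripts/PROD_host/adapcctl.sh start
```

Schedule any such bounce for a quiet period agreed with the business, as in-flight user sessions will be affected.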

These steps may allow you to minimise the immediate impact on the business while you investigate and implement changes in a methodical, safe, controlled and tested manner.

Where can I read more about Java memory and tuning?

There are many resources on the Internet describing how Java uses memory and what steps to take for performance tuning.  The best place to start is your hardware vendor's web site, as the options available depend on their Java implementation and also the specific Java version you are using.


Dealing with Java processes running out of memory can sometimes be as simple as increasing the available memory, or it may require detailed investigation to identify the root cause.

Although each case will have its own unique considerations, I hope that the information in this article has given you some ideas as to the general approach to take should the need arise.

One simple thing to consider is making sure you are on the latest release of JDK 1.5.  Java 5 has much better GC handling than earlier versions.

Posted by Chris Balodis on October 19, 2006 at 08:00 PM PDT #

Thanks for pointing that out Chris. I entirely agree with you and have had some feedback that eBiz benefits from JDK 1.5 particularly well in some cases.

I often advocate going to JDK 1.5 just for the improved diagnostics, but certainly there are performance and stability benefits that may be gained as well.

Posted by Mike Shaw on October 22, 2006 at 06:50 PM PDT #

Interestingly, per Metalink Note 300482.1, only J2SE 1.4.2 is still certified with 11.5.9. I would have thought this would have been rectified by now so that 11.5.9 could benefit from the latest JSE?
Maybe I'm just wishing too much? :-)

Posted by Chris Balodis on October 31, 2006 at 03:02 AM PST #

Chris, your interpretation of the Note is correct: 11.5.9 is only certified with J2SE 1.4.2.  If you wish to use J2SE 1.5 (5.0), you'll need to upgrade to 11.5.10 CU1.  I know it's not the news that you were hoping for, but our product development and certification teams are heavily focussed on both R12 and Fusion Apps at this point, so J2SE certifications with the older 11.5.x releases are necessarily taking a lower priority.  Regards, Steven

Posted by Steven Chan on October 31, 2006 at 01:53 AM PST #
