If you had to guess what the error "java.lang.OutOfMemoryError" indicates when seen in your Apache log files, you would probably be right...
In most cases this message is telling you that the Java process has exhausted its available memory and cannot process some requests successfully. Additional symptoms often include:
- Poor web page performance
- User requests being timed out
- Java processes taking 100% CPU on your server

The Java Virtual Machine (JVM) will sometimes be restarted by oprocmgr, as performance is so poor that it presumes the JVM has died.
There can be other causes. For example, this message can sometimes be raised when some other resource is exhausted, such as threads per process (kernel parameter "max_thread_proc" on HP-UX), but this case should be easily identifiable from the exact message written out.
Where has all the memory gone?
Every object created in Java takes some memory. Once no running Java code references an object any longer, the Garbage Collector (GC) automatically removes it and reclaims the memory.
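As a minimal sketch (the class and collection names are invented for illustration), an object can only be reclaimed once nothing references it; a static collection that is never cleared is the classic way to defeat the GC:

```java
import java.util.ArrayList;
import java.util.List;

public class LeakSketch {
    // Anything added here stays reachable for the life of the JVM,
    // so the Garbage Collector can never reclaim it.
    static final List CACHE = new ArrayList();

    static void handleRequest() {
        byte[] scratch = new byte[1024]; // eligible for GC when the method returns
        CACHE.add(new byte[1024]);       // "leaked": still referenced afterwards
    }

    static int leakedCount() {
        return CACHE.size();
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            handleRequest();
        }
        System.out.println("leaked buffers: " + leakedCount());
    }
}
```

Run this under a small heap (for example "-Xmx2M" with larger buffers) and it will eventually fail with java.lang.OutOfMemoryError, because the cached buffers can never be collected.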
You can run out of memory due to:
- Insufficient memory allocated to the JVM to cope with the amount of work
- Suboptimal GC processing
- A memory leak -- Java code is not releasing an object when it should
- A Java code or operating system defect

How are JVMs configured with Apps 11i?

Two important configuration files for Java in an Apps environment are:
The jserv.properties file controls the JVM parameters through its "wrapper.bin.parameters=" entries. When you install Apps originally or update the JDK version, the default settings are updated in your CONTEXT.xml file ("s_jvm_options" for the OACoreGroup JVM) and controlled by AutoConfig thereafter.
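For illustration, the entries typically look like this (the values shown are examples only, not recommendations):

```
# jserv.properties -- each wrapper.bin.parameters line passes options to the JVM
wrapper.bin.parameters=-verbose:gc
wrapper.bin.parameters=-Xms128M
wrapper.bin.parameters=-Xmx512M
```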
What can I do about OutOfMemoryError?
- Are you using Oracle Configurator? Configurator is particularly memory hungry, as it caches model data in the JVM, so it may warrant special attention or its own dedicated JVMs.
- Increase available memory
An obvious choice, you may think, and this will often at least delay the effect of the issue, but may not always solve it. You can achieve this by either increasing the number of JVMs or the memory settings ("-Xmx512M" for example). In general, I would not recommend going much over 640MB per JVM, in order to keep the Garbage Collection time to a reasonable level.
- Tune Garbage Collection
You can influence how Java performs Garbage Collection. I would normally recommend reviewing and possibly backing out any non-default JVM parameters first, as these are sometimes just carried forward because they were used with previous JDK versions. As always, any tuning should be undertaken in a TEST environment to ensure the changes have a positive impact.
- Using Concurrent Mark Sweep collector
With multi-CPU servers using JDK 1.4 and higher, enabling the Concurrent Mark Sweep collector has been seen to improve performance and GC throughput in some environments.
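As a sketch, the relevant flags on Sun JVMs are the standard HotSpot options below; availability and behaviour vary by vendor and JDK version, so treat these as an illustration to be tested rather than a recommendation:

```
# Enable the Concurrent Mark Sweep collector for the tenured generation
wrapper.bin.parameters=-XX:+UseConcMarkSweepGC
# Collect the young generation in parallel (multi-CPU servers)
wrapper.bin.parameters=-XX:+UseParNewGC
```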
- Identify the components that are using memory
Not always as easy as it sounds; generating class histograms or Java profiling may be the only options.
What data should I gather to investigate further?
The initial investigation should focus on identifying the pattern of memory usage building up and how frequently GC is occurring. The tools and information described below may help to achieve this:
Basic Garbage Collection information is normally written to the $IAS_ORACLE_HOME/Apache/Jserv/logs/jvm/*.stdout file for each JVM. Additional Garbage Collection information can be logged, for example:
- Add more verbose GC logging
This allows you to see how much of each memory pool is being used. For example, you may find it is the "Permanent Generation" space that is running out of memory, rather than the "Tenured Generation". If this turns out to be the case, the "-XX:MaxPermSize" parameter could be increased to provide more memory.
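On Sun JVMs, for example, the following standard flags add per-pool detail and timestamps to the GC log (flag names differ on other vendors' JVMs, so check your vendor's documentation):

```
wrapper.bin.parameters=-verbose:gc
wrapper.bin.parameters=-XX:+PrintGCDetails
wrapper.bin.parameters=-XX:+PrintGCTimeStamps
```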
- Generate class histograms
Implemented in JDK 1.4 and higher by some vendors, class histograms can be very useful if available. Beware of using this on HP-UX platforms running JDK 1.4, as it seems to crash the JVM rather than output the data!
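On Sun JVMs this is typically enabled with the standard "-XX:+PrintClassHistogram" flag, after which sending the process a SIGQUIT signal dumps a histogram to the stdout log; the process id below is illustrative:

```
# Enable at JVM startup (jserv.properties)
wrapper.bin.parameters=-XX:+PrintClassHistogram

# Then trigger a histogram (and thread dump) for JVM process 12345
kill -3 12345
```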
- Use JConsole
Discussed in a previous article: Using JConsole to monitor Apps 11i JVMs
- Java profiling
This should tell you exactly where the memory is being used, but it has a huge impact on performance, so gathering this data is only likely to be feasible in a lightly used test environment.
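As an example, the hprof agent bundled with Sun JDKs can record allocation sites; "YourServletRunner" below is a placeholder for whatever class actually launches the JVM, and the option syntax varies by JDK version:

```
# Record the top allocation sites with 8-frame stack traces,
# writing java.hprof.txt on JVM exit (heavy overhead -- test systems only)
java -Xrunhprof:heap=sites,depth=8 YourServletRunner
```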
- Does the DMS output for the JVM show increasing active connections or threads?
- Do JDBC connections to the database keep increasing? What module names do they relate to?
- Does temporarily disabling the "AM Pool" reduce memory consumption (profile option "FND: Application Module Pool Enabled")?
- Are you seeing PL/SQL "cursor leaks"? See "Understanding and Tuning the Shared Pool" (Metalink Note 62143.1), section "Checking hash chain lengths", for SQL to check this.
- Any Java thread deadlocks or database deadlocks?
- Are Java processes spinning at 100% CPU, or is CPU use very low when the error occurs?
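A thread dump usually answers the deadlock question: on JDK 1.4 and higher, sending SIGQUIT to the JVM writes every thread's stack trace, including any Java-level deadlocks the JVM detects, to the corresponding *.stdout file. The process id below is illustrative:

```
# Dump all thread stacks (and detected deadlocks) for JVM process 12345
kill -3 12345
```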
- Look for platform references
Review your operating system vendor's web site to look for known issues or required platform patches for your Java version.
My production system is failing every few hours. What can I do right now?
A possible quick fix would be to increase memory and/or the number of JVMs. Make sure that this won't cause the operating system to start swapping!
You could also consider a scheduled bounce of Apache every 12 or 24 hours if possible.
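If you do resort to a scheduled bounce, a crontab entry along these lines could drive it via the standard 11i Apache control script; the path, SID, and times below are illustrative assumptions for a typical install:

```
# Illustrative crontab: bounce Apache at 02:00 daily (path is an example)
SCRIPTS=/u01/app/prodcomn/admin/scripts/PROD_myhost
0 2 * * * $SCRIPTS/adapcctl.sh stop; sleep 60; $SCRIPTS/adapcctl.sh start
```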
These steps may allow you to minimise the immediate impact on the business so you can investigate and implement changes in a methodical, safe, controlled and tested manner.
Where can I read more about Java memory and tuning?
There are many resources on the Internet describing how Java uses memory and what steps to take for performance tuning. The best place to start is your hardware vendor's web site, as the options available depend on their Java implementation and the specific Java version you are using.
Dealing with Java processes running out of memory can sometimes be as simple as increasing the memory available, or may require detailed investigation to identify the root cause.
Although each case will have its own unique considerations, I hope that the information in this article has given you some ideas as to the general approach to take should the need arise.