WebLogic Server: "TOO MANY OPEN FILES" Exception
By Gokhan Gungor on Apr 26, 2012
On Solaris and Linux, the file descriptor limit specifies the number of open files permitted per process. If this value is too low, if your application needs to open many files, or if there is a large number of concurrent users, WebLogic Server may throw the "Too many open files" exception. Note that on UNIX systems, each socket connection to WebLogic Server consumes a file descriptor.
Caused by: java.io.IOException: java.io.IOException: error=24, Too many open files
... 40 more
If you encounter this error, you must raise the limit for the WebLogic Server process beyond its current value, and also ensure that the operating system can handle the increased number of open files.
At the OS level, the following commands check or increase the file descriptor limit:
$ ps -ef | grep java (Get the WebLogic process id)
$ pfiles processId | grep rlimit (Display the current file descriptor limit; pfiles is Solaris-specific)
$ ulimit -H -n (Display the hard limit, which cannot be exceeded)
$ ulimit -S -n (Display the soft limit, which may be raised up to the hard limit)
$ ulimit -S -n 4096 (Raise the soft limit to 4096)
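To see how close a process is to its limit, you can count the descriptors it currently holds. A minimal sketch, assuming Linux (where per-process descriptors are listed under /proc); in practice you would substitute the WebLogic process id found with ps, but here the script inspects its own shell so it is self-contained:

```shell
#!/bin/sh
# Count the descriptors a process currently holds via /proc and show the
# soft limit next to it. $$ (this shell) stands in for the WebLogic pid.
pid=$$
open_fds=$(ls "/proc/$pid/fd" | wc -l)
soft_limit=$(ulimit -S -n)
echo "process $pid holds $open_fds descriptors (soft limit: $soft_limit)"
```

If the count is near the soft limit, the next socket or file open is likely to fail with error=24.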
By default, WebLogic Server configures 1024 file descriptors; its start scripts check the OS limit in a shell script and adjust the limit accordingly. You can set a WebLogic-specific value in the WEBLOGIC_HOME/common/bin/commEnv.sh script (note that this script is shared by every domain you create; for domain-specific limits, use DOMAIN_HOME/bin/setDomainEnv.sh instead).
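The kind of check such a start script performs can be sketched as follows. This is a hedged illustration, not the actual contents of commEnv.sh; MAX_FD_WANTED is a hypothetical name for the desired descriptor count:

```shell
#!/bin/sh
# Raise the soft file-descriptor limit toward the hard limit before
# launching the server. MAX_FD_WANTED is a hypothetical example value.
MAX_FD_WANTED=4096
hard=$(ulimit -H -n)
soft=$(ulimit -S -n)
if [ "$hard" = "unlimited" ] || [ "$hard" -ge "$MAX_FD_WANTED" ]; then
    target=$MAX_FD_WANTED
else
    target=$hard      # cannot go past the hard limit without root
fi
if [ "$soft" != "unlimited" ] && [ "$soft" -lt "$target" ]; then
    ulimit -S -n "$target"
fi
echo "soft file-descriptor limit: $(ulimit -S -n)"
```

An unprivileged process may raise its soft limit only up to the hard limit, which is why the script clamps the target rather than failing outright.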
Note that when you tune the number of file descriptors for WebLogic Server, your changes should be balanced against any changes to the complete message timeout parameter. A higher complete message timeout means a socket does not close until the timeout expires, so its file descriptor is held longer. If the complete message timeout is set high, the file descriptor limit should be set high as well.
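To check whether a domain sets this timeout explicitly, you can search its config.xml. A hedged sketch: the element name <complete-message-timeout> is assumed here, and a mock config.xml is created so the example is self-contained; against a real domain you would grep DOMAIN_HOME/config/config.xml instead:

```shell
#!/bin/sh
# Create a mock config fragment (assumed element name) and extract the
# complete message timeout value from it. An absent element would mean
# the server default is in effect.
cat > /tmp/sample-config.xml <<'EOF'
<server>
  <name>AdminServer</name>
  <complete-message-timeout>60</complete-message-timeout>
</server>
EOF
grep -o '<complete-message-timeout>[0-9]*' /tmp/sample-config.xml
```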