WebSphere Application Server and T5440

Today we are announcing the next-generation server based on the UltraSPARC T2 Plus processor, and this server is a monster in terms of performance. It can consolidate many servers into a single box while providing a similar combined throughput. For server details and other blogs related to this server, see the T5440 blog index (http://blogs.sun.com/allanp/entry/sun_s_4_chip_cmt).
In this blog, I summarize the performance of IBM WebSphere Application Server (WAS) on the T5440. To test and benchmark the performance, I used the most recent release, WAS v7, which is based on the Java EE 5 spec and JDK 6 and brings numerous features and performance enhancements. As I said earlier, this new Sun server is a monster in terms of performance, so the WAS software setup and configuration require some planning and appropriate allocation of system resources. Running the psrinfo(1M) command on this server reports that the system has 256 "processors". This means that the processing power of the system far exceeds the software scalability of a single Application Server instance, so you will need multiple instances of WAS to drive the system to full utilization. Solaris Containers provide the most efficient way to accomplish such a configuration: they give each WAS instance its own isolated process space and a means of allocating the proper system resources.
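A quick way to confirm the thread count is psrinfo(1M) itself, which prints one line per virtual processor. This is a sketch that assumes a Solaris host; it is guarded so it degrades gracefully elsewhere:

```shell
# Count the virtual processors Solaris exposes; on a T5440 this is 256
# (4 sockets x 8 cores x 8 threads per core). psrinfo is Solaris-specific,
# so the command is guarded for non-Solaris hosts.
nproc=$(psrinfo 2>/dev/null | wc -l)
echo "virtual processors: ${nproc}"
```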
To maximize utilization of the system, I configured the environment as follows.

  • I created 7 Solaris Containers, allocated 32 processor threads to each of 6 Containers, and allocated the remaining processor threads to the seventh container and the global zone. Then I used the following WAS configuration:

  • initialHeapSize="2500" maximumHeapSize="2500"

  • -server -Xmn2048m -XX:+AggressiveOpts -XX:+UseParallelGC -XX:ParallelGCThreads=16 -XX:+UseParallelOldGC -Dcom.ibm.CORBA.TransportMode=Pluggable -Dcom.ibm.ws.pm.batch=true -Dcom.ibm.ws.pm.deferredcreate=true -Dcom.ibm.CORBA.FragmentSize=3000 -Dcom.ibm.ws.pm.useLegacyCache=false -Dcom.ibm.ws.pm.grouppartialupdate=true -Djavax.xml.transform.TransformerFactory=org.apache.xalan.processor.TransformerFactoryImpl -Djavax.xml.parsers.SAXParserFactory=org.apache.xerces.jaxp.SAXParserFactoryImpl -Djavax.xml.parsers.DocumentBuilderFactory=org.apache.xerces.jaxp.DocumentBuilderFactoryImpl -Dorg.apache.xerces.xni.parser.XMLParserConfiguration=org.apache.xerces.parsers.XML11Configuration

  • Disable the PMI feature.

  • Disable System.out logging from the admin console; if you prefer, you can do the same by editing the trace service in the config file, e.g. startupTraceSpecification="\*=info:SystemOut=off"

  • DynaCache - DynaCache was used for this benchmark. The default cache size of 2000 was not enough for this test; it can be adjusted based on the application's needs, and I set it to 3000.

  • For thread pool tuning, the only pool that mattered in this benchmark was the WebContainer thread pool. I set it to 80, which was more than enough; I think it can be scaled down a bit, but I haven't tried.

  • The database connection pool was set to the same value as the WebContainer pool, with min and max set to the same value.
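A sketch of how one of the 32-thread containers above can be created; the zone name and zonepath are hypothetical, and this assumes Solaris 10 8/07 or later (which supports the dedicated-cpu resource), run as root:

```shell
# Create a zone with 32 dedicated hardware threads for one WAS instance.
# Repeat for was1..was6 to match the layout above, leaving the remaining
# threads to the seventh container and the global zone.
zonecfg -z was1 <<'EOF'
create
set zonepath=/zones/was1
add dedicated-cpu
set ncpus=32
end
commit
EOF
zoneadm -z was1 install    # then: zoneadm -z was1 boot
```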

For the database, I did not want to run into network/disk contention, so I created the DB in /tmp; I had to use two databases for this purpose. This was just to eliminate database configuration headaches and measure only the App Server machine's scalability. These databases were connected point-to-point with the App Server box, and the App Server instances used those point-to-point connections for the DB.
This resulted in 1.8x the throughput of the predecessor system (T5140); throughput for this benchmark is measured in requests served per second (req/sec).
So, in a nutshell: if you are looking for a system to consolidate your WebSphere deployment with 1.8x the throughput capacity of the earlier UltraSPARC T2 Plus system, the T5440 may be just the right system for you.


WebSphere is complex to configure

Posted by Bill on October 17, 2008 at 05:38 PM PDT #

Just put WAS 6.1 on a T5440. Went like a dream; single-threaded performance is much better, and the install and patch process is less painful because it all happens much quicker.

I set


As there are 16 cores, is this OK?

Two other things whilst I think about them

Why does IBM always comment out the #ulimit 10000 line in setupCmdLine.sh? (It core dumps if you don't uncomment this, as it runs out of file descriptors.)
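For reference, the same limit can be raised in the shell that launches WebSphere; a sketch, using the 10000 value from the commented-out line:

```shell
# Raise the per-process file-descriptor limit before starting the server.
# Raising it above the hard limit fails unless you are root, hence the guard.
ulimit -n 10000 2>/dev/null || echo "could not raise limit (need root or a higher hard limit)"
ulimit -n    # print the limit now in effect
```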

And it appears that many people (I was at the WebSphere User Group last week) think you have to wait for the Solaris version of WebSphere to keel over before you can get a stack trace.

kill -3 websphere_server_pid will write out a stack trace to native_stdout

In many ways, Solaris is _the_ platform to run WebSphere on, as the JVM used is the Sun JVM, which has loads more eyeballs on it (IMHO). And the T1 processor family is really good at it.

And to Bill... all application servers are complicated to configure and set up; that's the nature of the beast.

Posted by Martyn Ayshford on March 11, 2009 at 08:13 PM PDT #

Yes, that is correct, and your settings are right. I wouldn't bother touching the ulimit; my assumption is that it is a legacy line no one wants to clean up. We should be fine without this setting. Grab a copy of the Redbook in which we have discussed this.
For network-related stuff, I suggest you bookmark this:

And finally I agree Solaris is the platform for WebSphere.

Posted by Dileep Kumar on March 23, 2009 at 02:25 PM PDT #

I am configuring WAS 6.1 ND on Solaris 10, and the box config is below. It has 64 GB of physical memory, but I am experiencing issues bringing up more than 5 JVMs. If I try to bring up a 6th JVM, it throws an OutOfMemory error in the logs or just doesn't start up.

Please suggest where I can look to resolve this issue. Could it be a Solaris 10 tuning issue for WebSphere 6.1? If yes, please suggest the necessary changes I need to make. Only 3.5 GB of physical memory is utilized, and CPU utilization is negligible.

SunOS apsz8231 5.10 Generic_138888-08 sun4v sparc SUNW,SPARC-Enterprise-T5220

Posted by Jasbani on June 29, 2009 at 12:37 AM PDT #

We have installed 2 instances of the server (WAS ND 6.1) on a T5220, using JVM argument settings similar to those given in this blog.
Still, the base timings of the Java method I am running are better on my laptop than on the server, and the difference is huge: on my laptop execution finishes in 500 ms, while on the server it takes 800 ms.
Please guide me on how to analyse this problem.

Thank you

Posted by Praveen Jindal on October 03, 2009 at 12:00 AM PDT #


I have run into the following problem. On a T5220 with Solaris 10, I wrote a program with 10 threads and schedule them in one loop. At first they work well: each thread is scheduled about 3 ms after the other. After about 12 days, one loop takes 800 ms. How does this happen? I look forward to your answer.

Posted by guoguoping on December 18, 2010 at 12:07 AM PST #

