Tuesday Jun 09, 2009

Speed up your SSL operations for IBM HTTP Server on UltraSPARC T2 systems

Some time back we published a Sun Blueprint (Accelerating IBM HTTP Server Cryptographic Operations Using Sun Servers with CoolThreads Technology) detailing the steps needed to get IBM HTTP Server to use the on-chip crypto processor of UltraSPARC T2 based systems for SSL operations. This gives you a free boost for SSL operations without buying additional hardware.
The document lists all the steps needed to get IBM HTTP Server and GSKit working with the on-chip crypto module of the UltraSPARC T2 processor. In addition to the configuration steps, it also includes results from performance testing done to measure the gain. Your mileage may vary depending on your workload, but if you are handling a lot of new client connections and serving HTTPS traffic, then this is something available to you for free that is worth considering: it takes care of your SSL handshake operations.
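The Blueprint has the exact steps; just to give a feel for it, here is a rough sketch of what the httpd.conf side of such a setup tends to look like. The directive names are the usual IBM HTTP Server SSL directives and /usr/lib/libpkcs11.so is the Solaris Cryptographic Framework PKCS#11 library, but treat the paths and file names below as placeholders and follow the Blueprint for your release:

    LoadModule ibm_ssl_module modules/mod_ibm_ssl.so
    Listen 443
    # Key database created with ikeyman/GSKit; path is a placeholder
    KeyFile /opt/IBM/HTTPServer/ssl/key.kdb
    # Point GSKit at the Solaris PKCS#11 library so SSL handshakes can be
    # offloaded to the UltraSPARC T2 on-chip crypto units
    SSLPKCSDriver /usr/lib/libpkcs11.so
    <VirtualHost *:443>
        SSLEnable
    </VirtualHost>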
Another important aspect is that GSKit is a common library used by IBM in a lot of products. As is evident from the name, Global Security Kit, it is a security implementation shared across different products, including its PKCS#11 support. Some more details can be found in my prior blog about GSKit. This implies that if you want to hook up other products with a PKCS#11 provider and take advantage of on-chip cryptography, that can be done too. Note that this integration happened at a certain level of IBM HTTP Server, so it requires a certain version of GSKit embedded with the product you are trying to accelerate.
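Whichever product sits on top, you can check from Solaris itself that the on-chip providers are visible to the Cryptographic Framework and actually being exercised. A small sketch, assuming the UltraSPARC T2 driver module names ncp (asymmetric/MAU) and n2cp (bulk ciphers); the counters should move while HTTPS load is running:

    # list the providers known to the Solaris Cryptographic Framework
    cryptoadm list -p
    # watch the on-chip asymmetric (ncp) and bulk cipher (n2cp) units,
    # refreshing every 5 seconds
    kstat -m ncp 5
    kstat -m n2cp 5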


Monday Mar 23, 2009

Free cryptography for WebSphere on UltraSPARC T2 and T2 Plus systems

If you have been hit by the processing overhead of SSL and have had to buy special purpose hardware to speed things up, then maybe it is time to leave all this processing to an UltraSPARC T2 or T2 Plus based system, which has built-in cryptographic support on the chip, and WebSphere supports it:

Here are the details from the IBM website:
http://www.ibm.com/developerworks/java/jdk/security/50/secguides/pkcs11implDocs/IBMPKCS11SupportList.html
http://www.ibm.com/developerworks/java/jdk/security/60/secguides/pkcs11implDocs/IBMPKCS11SupportList.html

For setup details you can follow the WebSphere InfoCenter.
Please note that this feature is available only after a certain fix pack release, so it will not work unless you upgrade your JDK to the required level suggested in the doc.
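To give an idea of what the setup boils down to, here is a minimal sketch of wiring the IBM PKCS#11 provider to the Solaris Cryptographic Framework. The provider class is IBM's IBMPKCS11Impl; the provider position, config file path, slot index and exact attribute names are assumptions on my part, so follow the InfoCenter and the support list above for your JDK level:

    # Entry added to java.security (provider position is a placeholder)
    security.provider.1=com.ibm.crypto.pkcs11impl.provider.IBMPKCS11Impl /opt/pkcs11_t2.cfg

    # /opt/pkcs11_t2.cfg -- points the provider at the Solaris PKCS#11 library
    name = SolarisT2
    library = /usr/lib/libpkcs11.so
    description = Solaris Cryptographic Framework (UltraSPARC T2 on-chip crypto)
    slotListIndex = 0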

If you want to learn more about the cryptographic acceleration offered by these processors, I would encourage you to read the Sun Blueprint titled "Using the Cryptographic Accelerators in the UltraSPARC® T1 and T2 Processors".

Monday Oct 13, 2008

WebSphere Application Server and T5440

Today we are announcing the next generation server based on the UltraSPARC T2 Plus processor, and this server happens to be a monster in terms of performance. It can consolidate a lot of servers into a single box while providing similar combined throughput. For the server details and other blogs related to this server, I suggest you visit the T5440 blog index (http://blogs.sun.com/allanp/entry/sun_s_4_chip_cmt).
In this blog, I summarize the performance of IBM WebSphere Application Server (WAS) on the T5440. To test and benchmark the performance, I used the most recent release, WAS v7, which is based on the Java EE 5 spec and JDK 6 and brings numerous features and performance enhancements. As I said earlier, this new Sun server is a monster in terms of performance, so the WAS software setup and configuration require some planning and appropriate allocation of system resources. When you run the psrinfo(1M) command on this server, it reports that the system has 256 "processors". This means that the processing power of this system far exceeds the software scalability of a single Application Server instance, so you will need multiple instances of WAS to drive the system to full utilization. Solaris Containers provide the most efficient way to accomplish such a configuration, since they give process space isolation between the different WAS instances as well as a way to allocate the proper system resources to each.
To maximize the utilization of the system, I configured the environment as follows.


  • I created 7 Solaris Containers, allocated 32 processor threads to each of 6 Containers, and left the remaining processor threads for the seventh Container and the global zone (see the zonecfg sketch after this list). Then I used the following WAS configuration:

  • initialHeapSize="2500" maximumHeapSize="2500"

  • -server -Xmn2048m -XX:+AggressiveOpts -XX:+UseParallelGC -XX:ParallelGCThreads=16 -XX:+UseParallelOldGC -Dcom.ibm.CORBA.TransportMode=Pluggable -Dcom.ibm.ws.pm.batch=true -Dcom.ibm.ws.pm.deferredcreate=true -Dcom.ibm.CORBA.FragmentSize=3000 -Dcom.ibm.ws.pm.useLegacyCache=false -Dcom.ibm.ws.pm.grouppartialupdate=true -Djavax.xml.transform.TransformerFactory=org.apache.xalan.processor.TransformerFactoryImpl -Djavax.xml.parsers.SAXParserFactory=org.apache.xerces.jaxp.SAXParserFactoryImpl -Djavax.xml.parsers.DocumentBuilderFactory=org.apache.xerces.jaxp.DocumentBuilderFactoryImpl -Dorg.apache.xerces.xni.parser.XMLParserConfiguration=org.apache.xerces.parsers.XML11Configuration

  • Disable the PMI feature.

  • Disable System.out logging from the admin console; alternatively, you can do the same by editing the config file and changing the trace service, for example startupTraceSpecification="*=info:SystemOut=off"

  • DynaCache - DynaCache was used for this benchmark. The default cache size of 2000 entries wasn't enough for this test, so I set it to 3000; adjust it based on your application's needs.

  • For thread pool tuning, the only pool that mattered in this benchmark was the WebContainer thread pool. I set it to 80, which was more than enough; I think it could be scaled down a bit, but I haven't tried.

  • The database connection pool was set to the same value as the WebContainer pool, with min and max set to the same value.
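For reference, here is a sketch of how one of the 32-thread Containers can be carved out with zonecfg's dedicated-cpu resource. The zone name and zonepath are placeholders, and this is just one way to pin CPU resources to a zone; the actual benchmark setup may have differed:

    # zonecfg -z wasnode1
    zonecfg:wasnode1> create
    zonecfg:wasnode1> set zonepath=/zones/wasnode1
    zonecfg:wasnode1> add dedicated-cpu
    zonecfg:wasnode1:dedicated-cpu> set ncpus=32
    zonecfg:wasnode1:dedicated-cpu> end
    zonecfg:wasnode1> commit
    zonecfg:wasnode1> exit
    # zoneadm -z wasnode1 install
    # zoneadm -z wasnode1 boot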



For the database, I did not want to run into network/disk contention, so I created the databases in /tmp; I had to use two databases for this purpose. This was just to eliminate database configuration headaches and measure only the App Server machine's scalability. These databases were connected point-to-point to the App Server box, and the App Server instances used those point-to-point connections for the DB.
This resulted in 1.8X the throughput of its predecessor system (T5140), where throughput for this benchmark is measured in requests served per second (req/sec).
So in a nutshell, if you are looking for a system to consolidate your WebSphere deployment with 1.8X the throughput capacity of the earlier UltraSPARC T2 Plus system, then the T5440 may be just the right system for you.
