Sunday Aug 20, 2006

WebSphere AS on Sun Fire T2000 and DB2 on Sun Fire X4200 Benchmark

Here is another benchmark from Sun which is quite interesting. A few highlights that come to mind:

  • First official WebSphere Application Server benchmark on Solaris that I am aware of
  • First WebSphere AS benchmark running on a 32-thread/8-core UltraSPARC T1 based Sun Fire T2000
  • First WebSphere AS benchmark with DB2 running on Solaris 10 x64 on the backend
  • First benchmark with WAS and DB2 both on Solaris, running in a mixed (SPARC and AMD64) environment
  • First single-socket machine to even reach that number for an appserver

With the availability of WebSphere Application Server and DB2 on the Sun Fire T2000, and now also on Solaris 10 x64 (Sun Fire X4200), the horizons have suddenly changed in terms of performance and pricing. (Don't forget that WebSphere/DB2 are priced at a little less than 3-processor pricing on the Sun Fire T2000, and on x64-class machines they are priced per socket, which means 2-processor pricing on the Sun Fire X4200.) The perception that WebSphere/DB2 on Solaris is expensive has suddenly changed drastically.

Welcome to the new and better world of Solaris.

Disclosure: One Sun Fire X4200 (4 cores, 2 chips) and one Sun Fire T2000 (8 cores, 1 chip): 616.22 SPECjAppServer2004 JOPS@Standard. SPEC, SPECjAppServer reg tm of Standard Performance Evaluation Corporation. Results from www.spec.org as of 9/13/06.

 

Wednesday Aug 16, 2006

Benchmark Winning DB2 V8 on Solaris 10 x64

A few months ago, a team working near Winchester, UK delivered a new product for Solaris 10 x64: DB2 V8.2.4 for Solaris 10 x64. Considering it was the first ever binary release of DB2 for Solaris x64, we decided to give it a try, and it worked great. Then we decided to try it in a Solaris 10 Zone, and once again it delivered. (And we did not have to apply any special patches/FixPaks; we used the GA release binaries of DB2 on Solaris x64.) And, as they say, the rest is history... The first ever benchmark using DB2 on Solaris x64 as the backend (in a Solaris 10 Container on a Sun Fire X4600) has finally been published.

Disclosure: One Sun Fire X4600 (16 cores, 8 chips): 1000.86 SPECjAppServer2004 JOPS@Standard. SPEC, SPECjAppServer reg tm of Standard Performance Evaluation Corporation. Results from www.spec.org as of 9/13/06.

 

Monday Aug 14, 2006

WebSphere MQ on Sun Fire T2000 using Solaris 10

Looking for performance evaluation numbers for WebSphere MQ on the Sun Fire T2000? Check this link at ibm.com, or if the link target looks confusing, try a direct link to the WebSphere MQ on Sun Fire T2000 report.

 

Tuesday Aug 08, 2006

Optimizing IBM DB2 for Solaris 10 1/06 OS on Sun Fire T2000 Server

Optimizing IBM DB2 for Solaris 10 1/06 OS on Sun Fire T2000 Server: This article discusses various deployment tips for optimizing IBM DB2 on the Sun Fire T2000 server running the Solaris 10 1/06 Operating System (OS).

 

Saturday Jul 29, 2006

DB2 9 available on Solaris 10 (SPARC). x64 release to follow soon

The new DB2 9 is now available for Solaris 10 (SPARC). The release for Solaris 10 x64 will follow soon.

A 90 day trial of DB2 9 Enterprise for Solaris can be downloaded from the DB2 9 download website.

Friday Jul 28, 2006

Sun Cluster HA Agent for WebSphere MQ on Solaris x64

I just got news from Neil that the Sun Cluster 3.1 8/05 HA agent for WebSphere MQ 6.0.1 on Solaris x64 is now released. You can download it free from the website.

 

Monday Jul 17, 2006

10 Sun Fire X4500 bundle => 20+ TB DSS or BIDW Config?

I noticed today that there is a 24TB, 10-server bundle of the Sun Fire X4500 (previously known as Project Thumper) which discounts the servers by a whopping 32%. This could lead enterprise customers to buy that configuration rather than others just based on the price advantage.

So if an enterprise customer buys that 10-X4500 bundle, how will they use it?

Again, I am narrowing my focus based on the little I understand about the field (which I agree is not enough) and making relevant assumptions:

  • Targeted at enterprise customers
  • Who have an enterprise license for a database (e.g. DB2)
  • Who are interested in setting up a performant, big data warehouse

These assumptions change many of the implementation details mentioned in the previous blog entry. For example, in this scenario I would probably select RAID-10 instead of RAID-Z, and I would take advantage of DB2's distributed capability, the Database Partitioning Feature (DPF), since the setup involves multiple physical servers connected via a network or InfiniBand.

How did I come to the 20TB+ mentioned in the header? Read on, and it should become clear how I arrived at the figure.

Now, I know the bundle comes as 10 servers, but my design is based on a single X4500 configuration, which can then be replicated over the other 9 servers. (After all, that's the horizontal scaling philosophy.)

Each X4500 has 2 sockets and 4 cores of AMD Opteron 285, with 16GB RAM and 4 10/100/1000 BaseT Ethernet ports per server. Now, I am of the opinion that one should always keep one unit of virtual CPU free for quick access to the system and for management tasks. With this philosophy, I would use only 3 cores for DB2. Since I have already decided to use distributed DB2, and keeping in mind the NUMA-ness of Opteron memory, I would use 3 nodes or partitions of DB2 for the three cores that I plan to dedicate to the database. (Thanks to the integration of DB2 and Solaris Resource Pools in DB2 V8.1 FixPak 4, this is easy to do: just create projects and enter the project name in db2nodes.cfg, and all partitions will start in their respective projects, each containing a resource pool of 1 core.)
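The partition-to-project wiring above can be sketched as follows. This is a minimal sketch, not something from the original setup: the hostname x4500-01 and the project names db2node0–db2node2 are my own hypothetical choices, and the projadd/pool steps (which need root) are shown only as comments.

```shell
# Hypothetical sketch: three DB2 partitions on one X4500 ("x4500-01"),
# each bound to its own Solaris project / resource pool.
# Creating the projects requires root privileges, roughly:
#   projadd -U db2inst1 db2node0
#   projadd -U db2inst1 db2node1
#   projadd -U db2inst1 db2node2
# (each project would then be associated with a 1-core resource pool
#  via pooladm/poolcfg)

# db2nodes.cfg columns: nodenum hostname logical-port netname resourcesetname
cat > db2nodes.cfg <<'EOF'
0 x4500-01 0 x4500-01 db2node0
1 x4500-01 1 x4500-01 db2node1
2 x4500-01 2 x4500-01 db2node2
EOF
wc -l < db2nodes.cfg
```

With the last column set, each partition starts inside its named resource set, so the one-core-per-partition binding happens at db2start without further scripting.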

Now comes the hard part: disk configuration. There are numerous design patterns at your disposal with the immense number of spindles available. However, remember that the same design will have to be replicated on the other 9 servers too; if it is too cumbersome to manage one server, it will be way too cumbersome to manage 10. So I am going to try some simple configurations, with some reasoning behind them, rather than a very fine-grained strategy of separating out spindles for the various tablespaces.

That said, I would still use two mirrored disks for the root file system, which leaves 46 disks. Since I plan three partitions with mirrored (RAID-10) storage, I need the largest multiple of 3 that is no more than 46 and gives an even number of spindles per partition. 45 disks would give 15 spindles per partition, which to me doesn't seem optimal for a RAID-10 configuration (you can't mirror an odd number), but 14 does, and it also leaves a "spare" disk for each of the mirrored partitions. That sounds interesting, so I end up with the disk layout as follows:

DISKS    0   1   2   3   4   5   6   7
c0:     P0  P0  P0  P0  P2  P2  P2  P2
c1:     P1  P1  P1  P1  P0  P0  P0  P0
c6:     P2  P2  P2  P2  P1  P1  P1  P1
c7:     S0  P0  P0  P0  P2  P2  P2  S2
c5:     B0  B1  P2  P2  P2  P1  P1  P1
c4:     S1  P1  P1  P1  P0  P0  P0   X

(P0-P2 = disks for DB2 partitions 0-2, 14 each; S0-S2 = spares, one per partition; B0/B1 = mirrored boot disks; X = unused.)

So now each DB2 partition has a pool of 7 x 250GB = 1750GB of effective (mirrored) space. I would divide that space as Tables: 750 GB, Index: 250 GB, Temp: 500 GB, LOBs: 200 GB, Logs: 25 GB.

Then I have a setup that handles about 3 x 750GB = 2250 GB of table data per X4500, or 22,500 GB, roughly 22TB, for the 10-server bundle of X4500s.
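A quick back-of-the-envelope check of those figures (the 250GB disk size and the 750GB table allocation are the assumptions stated above):

```shell
# Back-of-the-envelope check of the capacity figures above.
DISK_GB=250
SPINDLES_PER_NODE=14                                  # mirrored, so 7 effective
EFFECTIVE_GB=$(( (SPINDLES_PER_NODE / 2) * DISK_GB )) # per DB2 partition
echo "Per-partition pool:     ${EFFECTIVE_GB} GB"     # 1750 GB

TABLES_GB=750                                         # table allocation per partition
PER_SERVER_GB=$(( 3 * TABLES_GB ))                    # 3 partitions per X4500
BUNDLE_GB=$(( 10 * PER_SERVER_GB ))                   # 10-server bundle
echo "Table space per server: ${PER_SERVER_GB} GB"    # 2250 GB
echo "Table space per bundle: ${BUNDLE_GB} GB"        # 22500 GB, ~22 TB
```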

Let's estimate the feeds and speeds of such a setup.

Each DB2 node then has access to about 350MB/sec of read bandwidth (7 spindles at an assumed 50MB/sec per platter), which probably won't be saturated by a single Opteron core, since that core is also manipulating the data as it reads it. So the setup probably won't be I/O bound, which is important in a data warehousing environment. Fortunately, we also have one free core in the server, so all interrupts and administrative operations can be diverted to it, and the cores assigned to DB2 can work at full capacity processing DB2 data. Since the two cores of a socket share the on-chip memory controller, the best strategy will be to place DB2 node 0, the coordinating node, on the CPU with the free core, and let the other processor handle DB2 node 1 and DB2 node 2 on the Sun Fire X4500.

Now, since the DB2 nodes will be communicating with each other, an ideal setup for each server would be to use 1 network adapter for the external interface and aggregate the other 3 adapters onto a private network among the 10 servers, allowing about 3Gbps of inter-server communication bandwidth.
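On Solaris 10 that aggregation can be sketched with dladm. This is a configuration sketch only (it requires root on the actual box), and the interface names e1000g1-e1000g3 and the 192.168.10.x addressing are my assumptions; the real names depend on the NICs in the server:

```shell
# Sketch (root, Solaris 10): bond three GbE ports into one link
# aggregation with key 1 for the private interconnect.
# Interface names and addresses are hypothetical.
dladm create-aggr -d e1000g1 -d e1000g2 -d e1000g3 1

# Plumb and address the aggregation on the private network:
ifconfig aggr1 plumb 192.168.10.1 netmask 255.255.255.0 up

# Verify the aggregation and its member ports:
dladm show-aggr 1
```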

Most of what's needed (except perhaps the additional network switch and Ethernet cables) is already part of the 10-server bundle, which really makes it easier to manage. And apart from the network cables for inter-server communication and the power cables, there are no messy storage cables in this setup, which makes it elegant to look at in the data center.

Now I need someone to read this entry and try it out. :-)

 

Tuesday Jul 11, 2006

Sun Fire X4500 with DB2 as a database appliance

The new Sun Fire X4500 (codename: Project Thumper), with its huge storage capacity, automatically makes me imagine how to use it as a database appliance.

Now, I know that each database has different ways of utilizing system capacity, and in this scenario I want to capture how I would try setting up one particular database before it is really ready for an appliance-like deployment.

Note: the following is not something that I have tried out on a Sun Fire X4500, but I would like to.

Now, the Sun Fire X4500 has 48 drives available, out of which c5t0d0 and c5t4d0 are designated as bootable drives, so I would rather leave them alone (or actually mirror the root file system over these two drives for HA purposes).

That leaves 46 SATA drives. There are 6 SATA controllers, each controlling 8 SATA drives. I also want to use ZFS, the free, bundled advanced filesystem in Solaris 10 update 2, which is also included with the Sun Fire X4500.

Now, ZFS has concepts of pools and filesystems, so I will have to create pools of disks with the desired availability characteristics (RAID-10, RAID-Z, RAID-0). To decide on the pools, I have to decide whether I want maximum storage or maximum availability.

I am assuming that I want maximum storage for a hypothetical business database which is primarily being used for generating reports, etc. In that case I would rule out RAID-10, which has a penalty of 50% of disk space (though it has much better availability than RAID-Z). I also want to design the pools with the fewest logical single points of failure (hardware, etc.). So the best way to get maximum capacity with minimum single points of failure is to create 8 pools, each containing 1 disk from each controller (of course, two pools will be only 5 disks big, since two disks are used for the root file system). In the event of a failed controller, recovery is possible since it takes out only one disk per pool.

So based on that logic (which may not be correct), I end up with 8 RAID-Z pools, of which 6 pools have an effective storage of 5 x disk capacity and 2 pools of 4 x disk capacity (assuming 1 disk of overhead for RAID-Z, which is typical for parity-based availability).
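As a quick check, the effective capacity of that layout with the 250GB disks of the 12TB model works out to about 9.5TB:

```shell
# Effective RAID-Z capacity of the 8-pool layout, assuming 250GB disks
# and 1 disk of parity overhead per pool.
DISK_GB=250
TOTAL_GB=$(( 6 * 5 * DISK_GB + 2 * 4 * DISK_GB ))  # six 6-disk pools + two 5-disk pools
echo "Usable capacity: ${TOTAL_GB} GB"             # 9500 GB, ~9.5TB
```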

Of course, ZFS makes it easy to create these 8 pools in 8 lines and also mount them so they are readily available:

# zpool create pool0 raidz c0t0d0 c1t0d0 c6t0d0 c7t0d0 c4t0d0 
# zpool create pool1 raidz c0t1d0 c1t1d0 c6t1d0 c7t1d0 c4t1d0 c5t1d0
# zpool create pool2 raidz c0t2d0 c1t2d0 c6t2d0 c7t2d0 c4t2d0 c5t2d0
# zpool create pool3 raidz c0t3d0 c1t3d0 c6t3d0 c7t3d0 c4t3d0 c5t3d0
# zpool create pool4 raidz c0t4d0 c1t4d0 c6t4d0 c7t4d0 c4t4d0 
# zpool create pool5 raidz c0t5d0 c1t5d0 c6t5d0 c7t5d0 c4t5d0 c5t5d0
# zpool create pool6 raidz c0t6d0 c1t6d0 c6t6d0 c7t6d0 c4t6d0 c5t6d0
# zpool create pool7 raidz c0t7d0 c1t7d0 c6t7d0 c7t7d0 c4t7d0 c5t7d0

This will automatically create the mountpoints /pool0, /pool1, ... /pool7.

Now this is where it gets more database specific.

In this first entry, let's take the case of a hypothetical DB2 BIDW environment being set up on the Sun Fire X4500. Since DB2 V8.2.4 is already available for Solaris 10 x64, it is feasible to try out. Let's simplify and say we need 4 tablespaces for DB2: one for all the tables (tab_data), one for all the indices (tab_index), one for all temporary tablespaces (tab_temp), and one for LOBs (tab_lob), which should cover everything. Suddenly it sounds like we will need many scripts and a lot more thinking to lay it all out.

I think I would just take the easy way out with DB2 Automated Storage.

I would create a directory in each pool, so as not to clutter the pool's root directory, and give appropriate file permissions to the DB2 instance user:

# mkdir -p /pool0/mydb2db  /pool1/mydb2db /pool2/mydb2db /pool3/mydb2db
# mkdir -p /pool4/mydb2db  /pool5/mydb2db /pool6/mydb2db /pool7/mydb2db
# chown db2inst1:db2grp1 /pool0/mydb2db  /pool1/mydb2db /pool2/mydb2db /pool3/mydb2db
# chown db2inst1:db2grp1 /pool4/mydb2db  /pool5/mydb2db /pool6/mydb2db /pool7/mydb2db

After creating the db2inst1 DB2 instance, I would just create the database as the db2inst1 user, as shown in the following SQL script:

CREATE DATABASE mydb2db AUTOMATIC STORAGE YES
   ON /pool1/mydb2db, /pool2/mydb2db, /pool3/mydb2db, /pool5/mydb2db,
      /pool6/mydb2db, /pool7/mydb2db, /pool4/mydb2db;
UPDATE DB CFG FOR mydb2db USING NEWLOGPATH /pool0/mydb2db;
CREATE USER TEMPORARY TABLESPACE tab_temp;
CREATE REGULAR TABLESPACE tab_data;
CREATE REGULAR TABLESPACE tab_index;
CREATE LONG TABLESPACE tab_lob;

And I think my basic DB2 database setup is ready for a workload. I would expect that if you are using the 250GB SATA disks (the 12TB version), the above setup can very comfortably host a database of roughly 5TB, with almost equal scratch area for number-crunching analysis, in a 4 rack-unit package. That's a super WOW!!!

Guess what!! Since the Sun Fire X4500 is a 2-socket, dual-core AMD64 based system, it would qualify for DB2 Workgroup Edition (16GB RAM limit). Also, if you cap the RAM available to DB2 at 4GB via Zones and resource capping, it could even potentially qualify for DB2 Express Edition, with its limit of 2 processors (I believe IBM counts sockets as processor units on the x86 architecture). This can bring the cost of the database down to a very reasonable level too... that's a second WOW!!!

Who would have thought: an enterprise-class combination of DB2 with Solaris and ZFS which can handle over 5TB (raw) of database at commodity pricing for the database and commodity pricing for the storage server.

Now the question is: are 4 cores enough? For the SMB space, which typically has 1-4 data-cruncher analysts, maybe they are, considering the cost benefits that the system provides. Also remember that anything more than 2 sockets pushes the database price up to the Enterprise editions. So yes, it was well designed with the impact of license fees in mind for anything bigger than two processors.

Again, it is a perfect BIDW platform for cost-sensitive customers who still want an enterprise-class system.

Of course, I believe you will have to turn DB2's INTRA_PARALLEL flag ON, with a default degree of 4 (for the four cores), to spawn more subagents and utilize the I/O bandwidth available for the execution of a query:

db2 update dbm cfg using INTRA_PARALLEL YES
db2 update db cfg for mydb2db using DFT_DEGREE 4

I can't wait to try it out. If somebody else is trying this combination out please send me your feedback too.

 

Wednesday May 10, 2006

DB2 for Solaris 10 x64 now available

DB2 V8.2.4 for Solaris 10 01/06 x64 is finally available. While it is still hard to find on the ibm.com/db2 website, if you have access to the IBM Software Catalog (via Passport Advantage, PartnerWorld, etc.) then you can request/search for it via the part number C909HML.

Following are quick tips for improving your experience with DB2 on Solaris x64:

  • Needs Solaris 10 1/06 or greater (no support for Solaris 9 or Solaris 8 on the x64 platform)
  • Only 64-bit DB2 instances are supported (no support for 32-bit DB2 instances on Solaris x64, which currently restricts you to 64-bit capable AMD64/EM64T platforms like the Sun Fire X4200). You can confirm the system is running the 64-bit kernel with:

    # isainfo
    amd64 i386

  • You will need a 64-bit JVM installed on your system. Solaris 10 1/06 includes a 64-bit JVM for Java 1.5 but not for 1.4.2; hence DB2 V8.2.4 on Solaris x64 needs JDK_PATH in the dbm cfg pointing to /usr/java instead of the /usr/j2se default (used by DB2 on Solaris SPARC). If the path is pointing to something else, use the following command to set it right for Solaris 10 x64:

    db2 update dbm cfg using JDK_PATH /usr/java

Also, the valuable db2osconf utility is not yet available in this version for Solaris x64, so the following will help you get started with a decent-sized database. (The example assumes the instance owner name is db2inst1; replace it with your own instance owner name.)

projadd -U db2inst1 user.db2inst1
projmod -a -K "project.max-shm-ids=(priv,4k,deny)" user.db2inst1
projmod -a -K "project.max-sem-ids=(priv,4k,deny)" user.db2inst1
projmod -a -K "project.max-shm-memory=(priv,4G,deny)" user.db2inst1
projmod -a -K "project.max-msg-ids=(priv,4k,deny)" user.db2inst1
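To confirm the new limits took effect, one could inspect the project afterwards. This is a sketch for a Solaris 10 box where the project exists (prctl reports live values once the instance owner has a process running in the project):

```shell
# Sketch: inspect the resource controls of the user.db2inst1 project.
prctl -n project.max-shm-memory -i project user.db2inst1

# Or read the project definition directly from /etc/project:
grep user.db2inst1 /etc/project
```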

The above increases the limits on shared memory segment IDs, semaphore IDs, and the total memory that can be used for shared memory. Of course, I still used the traditional /etc/system way to set the following values on a Sun Fire X4200 with Solaris 10 1/06 for DB2 V8.2.4 in a recent test (I still need to verify whether this is the right way to set these values):

set msgsys:msginfo_msgmni = 3584
set semsys:seminfo_semmni = 4096
set shmsys:shminfo_shmmax = 15392386252
set shmsys:shminfo_shmmni = 4096

Other quick tips: if you install it on 32-bit Solaris 10 01/06, the software binaries will install successfully, but the instance creation step will fail. So please make sure that you are using 64-bit Solaris 10 01/06; otherwise it will result in wasted effort. Please share your experiences with us.

 

Tuesday Apr 11, 2006

WebSphere MQ for Solaris 10 x64 Download

Quick link to download WebSphere MQ for Solaris 10 x64 .

 

About

Jignesh Shah is Principal Software Engineer in Application Integration Engineering, Oracle Corporation. AIE enables integration of ISV products including Oracle with Unified Storage Systems. You can also follow me on my blog http://jkshah.blogspot.com
