Sun Fire X4500 with DB2 as a database appliance

The new Sun Fire X4500 (codename: Project Thumper), with its huge storage capacity, immediately got me thinking about how to use it as a database appliance.

Now, I know every database has its own way of utilizing system capacity, so in this entry I want to capture how I would try setting up one particular database before the system is really ready for an appliance-like deployment.

Note that what follows is not something I have actually tried on a Sun Fire X4500 yet, but something I would like to try out.

The Sun Fire X4500 has 48 drives available, of which c5t0d0 and c5t4d0 are designated as bootable drives. I would rather leave those two alone (or, better, mirror the root file system across them for HA purposes).

That leaves 46 SATA drives. There are 6 SATA controllers, each controlling 8 drives. I also want to use ZFS, the advanced file system bundled free with Solaris 10 update 2, which is included with the Sun Fire X4500.

Now, ZFS has the concepts of pools and file systems. So I will have to create pools of disks, each with its own availability characteristics (RAID-10, RAID-Z, RAID-0). To decide on the pool layout, I first have to decide whether I want maximum storage or maximum availability.

I am assuming I want maximum storage, for a hypothetical business database that is primarily used for generating reports. That rules out RAID-10, which costs 50% of the disk space (though it offers much better availability than RAID-Z). I also want to lay out the pools so that they have the fewest possible single points of failure (a controller, for example). The best way to get maximum capacity with minimum single points of failure is to create 8 pools, each containing one disk from every controller (two of the pools will be only 5 disks wide, since two disks are used for the root file system). In the event of a failed controller, each pool loses only one disk and can therefore still recover.

So based on that logic (which may not be correct), I end up with 8 RAID-Z pools: 6 pools with an effective capacity of 5x the disk size and 2 pools with 4x the disk size (assuming a one-disk overhead per pool for RAID-Z, which is typical for parity-based protection).
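That capacity is easy to sanity-check with back-of-the-envelope arithmetic, assuming the 250GB disks of the 12TB model mentioned later (one disk per pool lost to RAID-Z parity):

```shell
# 6 pools of 6 disks keep 5 data disks each; 2 pools of 5 disks keep 4.
# The 250 GB disk size is an assumption based on the 12 TB configuration.
usable_gb=$(( (6 * 5 + 2 * 4) * 250 ))
echo "${usable_gb} GB usable"    # prints "9500 GB usable"
```

Roughly 9.5TB of usable space before file system overhead.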

Of course, ZFS makes it easy to create these 8 pools in 8 lines and have them mounted and readily available:

# zpool create pool0 raidz c0t0d0 c1t0d0 c6t0d0 c7t0d0 c4t0d0 
# zpool create pool1 raidz c0t1d0 c1t1d0 c6t1d0 c7t1d0 c4t1d0 c5t1d0
# zpool create pool2 raidz c0t2d0 c1t2d0 c6t2d0 c7t2d0 c4t2d0 c5t2d0
# zpool create pool3 raidz c0t3d0 c1t3d0 c6t3d0 c7t3d0 c4t3d0 c5t3d0
# zpool create pool4 raidz c0t4d0 c1t4d0 c6t4d0 c7t4d0 c4t4d0 
# zpool create pool5 raidz c0t5d0 c1t5d0 c6t5d0 c7t5d0 c4t5d0 c5t5d0
# zpool create pool6 raidz c0t6d0 c1t6d0 c6t6d0 c7t6d0 c4t6d0 c5t6d0
# zpool create pool7 raidz c0t7d0 c1t7d0 c6t7d0 c7t7d0 c4t7d0 c5t7d0

This automatically creates the mountpoints /pool0, /pool1, ... /pool7.
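Once the pools exist, a quick sanity check would look something like this (a sketch of standard ZFS administration commands, not output captured from a real X4500):

```shell
# Report only unhealthy pools; prints "all pools are healthy" when all is well.
zpool status -x
# Show capacity and usage per pool.
zpool list
# Confirm the /pool0 ... /pool7 file systems are mounted.
zfs list
```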

Now this is where it gets more database specific.

For this first entry, let's take the case of a hypothetical DB2 BIDW environment being set up on the Sun Fire X4500. Since DB2 V8.2.4 is already available for Solaris 10 x64, it is feasible to try it out. Let's simplify and say we need 4 tablespaces in DB2: one for all the tables (tab_data), one for all the indices (tab_index), one for temporary tables (tab_temp), and one for LOBs (tab_lob), which should cover it. Suddenly it sounds as if we will need a lot of scripts, and a lot more thinking, to lay this out.

I think I would just take the easy way out with DB2 Automatic Storage.

I would create a directory in each pool (so as not to clutter the pool root) and give appropriate permissions to the DB2 instance user:

# mkdir -p /pool0/mydb2db  /pool1/mydb2db /pool2/mydb2db /pool3/mydb2db
# mkdir -p /pool4/mydb2db  /pool5/mydb2db /pool6/mydb2db /pool7/mydb2db
# chown db2inst1:db2grp1 /pool0/mydb2db  /pool1/mydb2db /pool2/mydb2db /pool3/mydb2db
# chown db2inst1:db2grp1 /pool4/mydb2db  /pool5/mydb2db /pool6/mydb2db /pool7/mydb2db

After creating the db2inst1 DB2 instance, I would create the database as the db2inst1 user with the following script:

CREATE DATABASE mydb2db AUTOMATIC STORAGE YES
  ON /pool1/mydb2db, /pool2/mydb2db, /pool3/mydb2db, /pool5/mydb2db,
     /pool6/mydb2db, /pool7/mydb2db, /pool4/mydb2db;
UPDATE DB CFG FOR mydb2db USING NEWLOGPATH /pool0/mydb2db;
CREATE USER TEMPORARY TABLESPACE tab_temp;
CREATE REGULAR TABLESPACE tab_data;
CREATE REGULAR TABLESPACE tab_index;
CREATE LONG TABLESPACE tab_lob; 
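To confirm that the automatic-storage tablespaces really did spread their containers across the pools, something like the following could be run as db2inst1 (a sketch; the tablespace ID 3 is just an example, so look up the real IDs from the listing first):

```shell
# Connect and list all tablespaces with their IDs.
db2 CONNECT TO mydb2db
db2 LIST TABLESPACES
# Show the containers backing one tablespace (substitute the ID from above).
db2 "LIST TABLESPACE CONTAINERS FOR 3"
```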

And with that, I think my basic DB2 database setup is ready for a workload. I would expect that with the 250GB SATA disks (the 12TB version), this setup could very comfortably handle a database of roughly 5TB, with an almost equal scratch area for number-crunching analysis, all in a 4 rack-unit box. That's a super WOW!!!

Guess what!! Since the Sun Fire X4500 is a 2-socket dual-core AMD64 system, it would qualify for DB2 Workgroup Edition (16GB RAM limit). And if you cap the RAM available to DB2 at 4GB via Zones and resource capping, it could potentially even qualify for DB2 Express Edition with its 2-processor limit (I believe IBM counts sockets as processor units on x86 architectures). This can bring the database cost down to a very reasonable level too... that's a second WOW!!!
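A rough sketch of what that capping could look like with Solaris resource pools, a zone, and the resource capping daemon. The names db2-pset, db2-pool, and db2zone are hypothetical, and the exact syntax and feature availability vary by Solaris 10 update, so treat this as an assumption to verify, not a recipe:

```shell
# Save the current pool configuration, then define a 2-CPU processor set
# and a pool bound to it (hypothetical names).
pooladm -e
pooladm -s
poolcfg -c 'create pset db2-pset (uint pset.min = 2; uint pset.max = 2)'
poolcfg -c 'create pool db2-pool'
poolcfg -c 'associate pool db2-pool (pset db2-pset)'
pooladm -c

# Bind the zone that will run DB2 to that pool.
zonecfg -z db2zone 'set pool=db2-pool'

# Enable the resource capping daemon; the actual memory cap (e.g. 4GB)
# is then set per project via the rcap.max-rss attribute.
rcapadm -E
```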

Who would have thought that an enterprise-class combination of DB2 with Solaris and ZFS could handle over 5TB (raw) of database at commodity pricing for both the database and the storage server?

Now the question is: are 4 cores enough? For the SMB space, which typically has 1-4 data-cruncher analysts, maybe they are, considering the cost benefits the system provides. Also remember that anything more than 2 sockets means an increase in database price to the Enterprise editions. So yes, the system seems well designed with the license-fee impact of anything bigger than two processors in mind.

Again, it is a great BIDW platform for cost-sensitive customers who still want an enterprise-class system.

Of course, I believe you will have to turn DB2's INTRA_PARALLEL flag ON with a default degree of 4 (for the four cores), so that queries spawn enough subagents to utilize the I/O bandwidth available to them:

db2 update dbm cfg using INTRA_PARALLEL YES
db2 update db cfg for mydb2db using DFT_DEGREE 4

I can't wait to try it out. If somebody else is trying this combination out please send me your feedback too.

 

Comments:

What about Sun's Postgres initiative? I would be more interested if Sun could do something creative there to make warehousing easier.

Posted by Idaho on July 17, 2006 at 07:18 AM EDT #

The savings mentioned by using DB2 Workgroup Edition and/or DB2 Express Edition are not possible. Currently, IBM only offers DB2 Enterprise Server Edition for Solaris x86-64 and that's if you can get them to admit it's available! While DB2 V9.1 supposedly will include Solaris x86-64 versions of Workgroup and Express Edition, I seriously doubt IBM's commitment to the Solaris platform - especially x86-64. I haven't been able to get an IBM rep to return an email acknowledging the existence of Solaris x86-64 DB2! The only mention on their website about DB2 for Solaris x86-64 is as an option for other products (e.g. Web Broker) and a couple of lines in the DB2 Solaris requirements doc. Not very encouraging!

Posted by Paul Smith on July 19, 2006 at 11:38 AM EDT #

Let me see if I can resolve that problem for you. Can you contact me offline via an email?

Posted by Jignesh Shah on July 19, 2006 at 10:33 PM EDT #

Update:

Check out DB2 V8 requirements which now says Solaris UltraSPARC or x86-64 CPU

http://www-306.ibm.com/software/data/db2/udb/sysreqs_v8.html

Also check out the updated Solaris Prerequisite for DB2 at http://www-1.ibm.com/support/docview.wss?rs=71&uid=swg21176756

Another update regarding Workgroup Edition and Express Edition coming soon.

Posted by Jignesh Shah on July 24, 2006 at 12:54 AM EDT #

About

Jignesh Shah is Principal Software Engineer in Application Integration Engineering, Oracle Corporation. AIE enables integration of ISV products including Oracle with Unified Storage Systems. You can also follow me on my blog http://jkshah.blogspot.com
