Friday Oct 12, 2007

World Record Price/Performance benchmark with DB2/Solaris 10 on Sun Fire X4500

We just published a world record price/performance result on the industry standard TPC-H data warehousing benchmark at the 3 terabyte scale factor, using DB2 9.1 on Solaris 10 8/07 with 10 Sun Fire X4500 (aka "Thumper") servers.

The benchmark report is available at the tpc.org website. You can download either the Executive Summary report or the Full Disclosure report.

A few highlights of the benchmark:

  • Best price/performance at the 3TB scale factor
  • First database industry standard benchmark publication for the Sun Fire X4500
  • First DB2/Solaris TPC-H benchmark after a gap of 6 years
  • First-ever DB2 on Solaris x64 result for a data warehouse type of workload
  • First DB2 9 result on Solaris with an x64 based system
  • Shows why the Sun Fire X4500 is a highly cost-effective system in the role of a data warehouse appliance

The benchmark configuration used 20 dual-core AMD Opteron processors overall, keeping the database license fees incurred low. The overall system delivered about 15 GB/sec of simultaneous I/O, which is impressive, if not surprising, for the Sun Fire X4500. Though the total disk capacity was very high, not all of it was used for the benchmark, which means plenty of data marts can be stored on the same systems too.

My related blog entry on the subject gives some additional insight.

Required Disclosure:

  Sun Fire X4500 cluster 38,672.4 QphH@3000GB, $29.39 USD $/QphH@3000GB, available 10/12/07. TPC-H, QphH, $/QphH tm of Transaction Processing Performance Council (TPC). More info www.tpc.org.

 

Friday May 18, 2007

DBonSolaris.info

I just bought the domain name DBonSolaris.info from GoDaddy.com. They even threw in ad-supported free hosting for it. Using the hosting tools, I found that they provide a wiki application from pmwiki.org. It is not exactly like the other wikis I have used to date.

Anyhow, I started putting some information on it. My idea is to use it to organize all my database blog entries, articles, etc. in a way that is easily consumed by someone looking for that information. Check it out, and see if you can add information to it; if it doesn't save your changes, do let me know. Since the hosting is not entirely "custom", there are many limitations on what can be done with this wiki application. But after all, it is the information that counts.

Try DBonSolaris.info and let me know what you think about it.

Monday Apr 09, 2007

How to use DB2 V8 and V9 in Solaris 10 resource pools

In network-heavy OLTP benchmarks conducted with DB2 on Solaris 10, we found that it often helps to put the DB2 processes on their own pool of processors, separating them from the processors servicing the network, and to run DB2 under the FX scheduler without making FX the default scheduler for the system.

Here is a quick guide to running DB2 in Solaris 10 resource pools, assuming you have followed a setup very similar to my previous blog entry. Below are the additional steps to configure pools for DB2. (This entry assumes that you have not enabled pools on your system before.)

Enable pools on your system using the following command:

# pooladm -e

Now initialize the pools based on your existing environment as a starting point:

# pooladm -s 

Now view the existing pool configuration using the following command:

# pooladm

You will see that it has a default pool called pool_default and a default processor set called pset_default. We now carve a processor set and pool for DB2 out of it. Assume that the current pset_default has 16 processors in it and that the system has two active devices on it (hence the two CPUs left in pset_default below). Then we can allocate 14 processors to DB2 as follows:

Create a text file called pool.host with the following contents:

modify pset pset_default (uint pset.min=2; uint pset.max=2)
create pset db2pset  (uint pset.min=14; uint pset.max=14)
create pool db2pool (string pool.schedule="FX")
associate pool db2pool (pset db2pset)

Then modify the existing pool configuration as follows:

# poolcfg -f pool.host

Activate the modified pool configuration:

# pooladm -c

Verify the changes using the following command:

# pooladm

We also get more bang for the buck by turning off interrupt handling on the CPUs in the dedicated DB2 pool. This can be done by noting the CPU IDs assigned to db2pset and disabling interrupts on them as follows:

# psradm -i 0-13

Where CPU IDs 0 to 13 are assigned to db2pset. (Your CPU IDs may vary.)
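If you do not want to read the CPU IDs off by hand, they can be scraped from the pool configuration. Here is a minimal sketch; the poolcfg output below is an abbreviated, hypothetical sample, and on a live system you would feed the pipeline from `poolcfg -dc 'info pset db2pset'` instead:

```shell
# Abbreviated, hypothetical sample of `poolcfg -dc 'info pset db2pset'`
# output; a real system prints one cpu block per processor in the pset.
sample='
pset db2pset
        int     pset.sys_id 1
        cpu
                int     cpu.sys_id 0
        cpu
                int     cpu.sys_id 1
        cpu
                int     cpu.sys_id 2
'

# Pull out the CPU IDs and build the psradm command line.
cpus=$(printf '%s\n' "$sample" | awk '/cpu.sys_id/ { printf "%s%s", sep, $3; sep = " " }')
echo "psradm -i $cpus"
```

On the real system you would run the printed command as root; psradm accepts individual CPU IDs as well as ranges.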

Now you have pools set up for DB2. To make DB2 use its own pool, we again use the projects database to set the default pool for the DB2 instance owner to the new pool (assuming you have already defined the project user.db2inst1):

# projmod -a -K "project.pool=db2pool" user.db2inst1

For a single-partition database instance, when you log in with a fresh shell and issue db2start, the database engine will start in db2pool. If you are using DPF, you instead set the pool name as the 5th column of each logical node entry in db2nodes.cfg.
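For illustration, a hypothetical DPF db2nodes.cfg with the pool bound per logical node might look like this (the hostnames are made up; the columns are node number, hostname, logical port, netname, and resource set name, with db2pool in the 5th column):

```
0 dbhost1 0 dbhost1 db2pool
1 dbhost1 1 dbhost1 db2pool
2 dbhost2 0 dbhost2 db2pool
```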

If for some reason you want to revert to the default setup, it can be done as follows:

# pooladm -x

And if you want to disable the pools facility altogether, use:

# pooladm -d
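Putting the steps above together, here is a minimal sketch of the whole setup as one script. The sizes and names (db2pset, db2pool, user.db2inst1) follow the example above; the privileged commands are only echoed here, since they require root on a live Solaris 10 system:

```shell
#!/bin/sh
# Generate the poolcfg command file described above.
cat > pool.host <<'EOF'
modify pset pset_default (uint pset.min=2; uint pset.max=2)
create pset db2pset (uint pset.min=14; uint pset.max=14)
create pool db2pool (string pool.schedule="FX")
associate pool db2pool (pset db2pset)
EOF

# The actual pool commands need root, so this sketch just prints them.
echo "pooladm -e            # enable the pools facility"
echo "pooladm -s            # save the current config as a starting point"
echo "poolcfg -f pool.host  # apply the edits above"
echo "pooladm -c            # activate the modified configuration"
echo "projmod -a -K 'project.pool=db2pool' user.db2inst1  # bind the DB2 project"
```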

The real customer benefit of separating the CPUs doing network and system work from the CPUs running DB2 is license fees: with sub-capacity licensing you pay DB2 license fees only for the size of the pool used by DB2, and not for the CPUs that are only doing network, disk, and other system activities. This maximizes the savings and the return on your investment in DB2 on Solaris 10.

Monday Mar 19, 2007

Sun Studio tools and DB2 V8.2 on Solaris 10 Zones

Every DBA, system administrator, or software developer at some point needs to "drill down" into the call flow of how an application interacts with the operating system beneath it.

Various tools and features, including DTrace, exist to solve the problem of finding out "what's going on" in a particular setup. While DTrace is more dynamic, capturing all events with it can be overkill, followed by a huge post-processing job to understand what all those events really mean and how they interact with each other.

This is where the Sun Studio Performance Analyzer tools still have an edge over DTrace (a personal view; many may not agree), especially when the engineer trying to figure out the problem does not have access to the system exhibiting it.

We now look at how the Sun Studio Performance Analyzer tools can be used to collect such information about DB2 V8.2 running on Solaris 10 (in Zones or otherwise).

Typically, information is collected with the "collect" tool and analyzed with the GUI "analyzer" tool. There is also a text-mode analyzer called "er_print". The text analyzer is primarily useful when the information (referred to as "experiments" in this context) sits on a remote server somewhere on the west coast while the engineer is located on the east coast. (Java remote display event handling still has a long way to go.) Generally an experiment is collected by starting the application as "collect $applicationname", or by invoking dbx on the process, setting "set collector enable true", and continuing ("cont") the process.

However, things are not that simple with DB2 V8.2. DB2 V8.2 is a multi-process database engine, which means that if you want to collect on every process, you have to enable collection on each process manually. Also, DB2 V8.2 (and likewise DB2 V9.1 and all prior versions) uses setuid flags to gain root privileges while starting the DB2 database engine. The collect tool works by preloading certain libraries, and by default the Solaris security mechanism does not allow unsecured (untrusted) libraries to be preloaded across "userid" boundaries (user -> root -> user). The other problem is default file permissions: if root creates a file, the user will not have permission to write to it.

The good news is that there is a workaround for all of the above problems, and that is the real content of this blog entry.

For the sake of this blog entry, let's assume that "db2inst1" is the DB2 instance owner that hosts the database and /export/home/db2inst1 is the home directory of the user "db2inst1". You will need "root" access (or at least get the system administrator to do it for you) in order to set up all the workarounds. (This is where Solaris 10 Zones are ideal: sysadmins can create a zone and give developers root access to it without really compromising the entire system, and that is how the groundwork for this blog entry was done.) In this case, the DB2 instance directory is /export/home/db2inst1/sqllib. For a single-partition DB2 instance (where the content of sqllib/db2nodes.cfg is exactly one line), if you try to run collect on db2start, you will see the following message:

-bash-3.00$ /opt/SUNWspro/bin/collect  -F all -A copy -d /export/home/db2inst1/experiments $HOME/sqllib/adm/db2start
Creating experiment database /export/home/db2inst1/experiments/test.1.er ...
ld.so.1: db2start: warning: 
/opt/SUNWspro/bin/../prod/bin/../lib/v9/dbxruntime/libcollector.so
: open failed: illegal insecure pathname
03/14/2007 14:30:58     0   0   SQL1063N  DB2START processing was successful.
SQL1063N  DB2START processing was successful.

DB2 has started and an experiment was created; however, the experiment will be empty because the libcollector.so library is not allowed across the setuid boundary, since it lives on an insecure (untrusted) path. Also, when you cross such setuid boundaries, some files created for profile information while the process has an effective userid of root will not be writable once its effective userid is "db2inst1" again. Hence the umask also has to be modified.

To work around this problem, one has to use the crle command with root privileges as follows.

First find out the default settings:

# crle -64

Default configuration file (/var/ld/64/ld.config) not found
  Default Library Path (ELF):   /lib/64:/usr/lib/64  (system default)
  Trusted Directories (ELF):    /lib/secure/64:/usr/lib/secure/64  (system default)

Modify the trusted directories path by taking the existing list and appending the directory that contains the dbxruntime libraries:

# crle -64 -s  /lib/secure/64:/usr/lib/secure/64:/opt/SUNWspro/bin/../prod/bin/../lib/v9/dbxruntime

NOTE: There should be no trailing forward slash after the directory name; otherwise the change will not take effect for that directory.

Verify the setting again. Note that we are changing this only for 64-bit binaries:

# crle -64

Configuration file [version 4]: /var/ld/64/ld.config
  Default Library Path (ELF):   /lib/64:/usr/lib/64  (system default)
  Trusted Directories (ELF):    
/lib/secure/64:/usr/lib/secure/64:/opt/SUNWspro/bin/../prod/bin/../lib/v9/dbxruntime

Command line:
  crle -64 -c /var/ld/64/ld.config -s \\
/lib/secure/64:/usr/lib/secure/64:/opt/SUNWspro/bin/../prod/bin/../lib/v9/dbxruntime


Now stop the DB2 engine if it is still running, then modify the umask and restart the DB2 engine as follows:

-bash-3.00$ umask 000
-bash-3.00$ /opt/SUNWspro/bin/collect   -F all -A copy -d \\
/export/home/db2inst1/experiments $HOME/sqllib/adm/db2start
Creating experiment database /export/home/db2inst1/experiments/test.1.er ...
03/19/2007 10:46:53     0   0   SQL1063N  DB2START processing was successful.
SQL1063N  DB2START processing was successful.
-bash-3.00$ db2sampl
-bash-3.00$ db2 connect to sample

   Database Connection Information

 Database server        = DB2/SUN64 8.2.6
 SQL authorization ID   = DB2INST1
 Local database alias   = SAMPLE

-bash-3.00$ db2 "select count(*) from employee"

1
-----------
         32

  1 record(s) selected.

-bash-3.00$ db2 connect reset
DB20000I  The SQL command completed successfully.
-bash-3.00$ db2stop
03/19/2007 10:52:37     0   0   SQL1064N  DB2STOP processing was successful.
SQL1064N  DB2STOP processing was successful.
-bash-3.00$

When db2stop is executed, it also marks the end of collection for this DB2 engine instance. Now you can use "er_print" or the "analyzer" GUI as follows:

-bash-3.00$ cd /export/home/db2inst1/experiments
-bash-3.00$ /opt/SUNWspro/bin/er_print test.1.er

In this case, since we know the engine has many sub-processes, we can load them all at once as follows:

-bash-3.00$ /opt/SUNWspro/bin/er_print test.1.er/_*.er
(/opt/SUNWspro/bin/er_print) limit 10
(/opt/SUNWspro/bin/er_print) functions
Functions sorted by metric: Exclusive User CPU Time

Excl.     Incl.      Name
User CPU  User CPU
  sec.      sec.
30.111    30.111     
 1.651     1.651     take_deferred_signal
 1.231     1.231     memset
 0.741     1.081     sqlno_qnc_bound_by_func(sqlno_globals*,sqlno_set,sqlno_set,sqlnq_qnc*,sqlno_qtb*,int*,
SQLNO_BND*,int*,sqlnq_qnc**)
 0.700     6.545     sqlnp_parser(sqlnp_cb*)
 0.550     0.550     memcpy
 0.550     1.281     sqlogmblkEx
 0.500     0.771     sqlno_bit2intD(sqlno_globals*,sqlno_sdb*,sqlno_apcb*,sqlno_set,sqlno_iset**)
 0.470     2.051     sqlno_prop_ff(sqlno_globals*,sqlno_apcb*,sqlno_qtb*,sqlno_join_context*,sqlno_set,sqlno_set,
sqlno_set,float*,sqlno_part*,SQLNN_BOOLEAN,float*,float*,sqlno_set*,sqlno_table*,sqlno_table*)
 0.350     0.921     sqlno_fd_normalize_order(sqlno_globals*,sqlno_order_class*,sqlno_order_class*,
sqlno_plan_properties*,sqlno_apcb*,sqlno_order_class**,sqlno_set*)

(/opt/SUNWspro/bin/er_print)

If you use the analyzer GUI, you have to select all the processes before you start looking at the data.

Also, if this is associated with a technical problem, you can tar up the directory /export/home/db2inst1/experiments and send it to the technical support team, which can look at the data using the same tools.

In another entry we will see how to collect data using the Sun Studio tools when the DB2 engine is used with DPF (Database Partitioning Feature). This case is easily identified when there is more than one line in the $HOME/sqllib/db2nodes.cfg file.
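The detection itself is a one-liner. A small sketch, using a made-up two-partition db2nodes.cfg written to /tmp; on a real instance you would point at $HOME/sqllib/db2nodes.cfg:

```shell
# Hypothetical two-partition db2nodes.cfg, for illustration only.
cat > /tmp/db2nodes.cfg <<'EOF'
0 dbhost1 0
1 dbhost2 0
EOF

# More than one line in db2nodes.cfg means the instance is partitioned (DPF).
if [ "$(wc -l < /tmp/db2nodes.cfg)" -gt 1 ]; then
    result="DPF (multi-partition) instance"
else
    result="single-partition instance"
fi
echo "$result"
```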

Thursday Jan 18, 2007

World Record SPECjAppServer2004 result with DB2/Solaris as backend

The deadly combination of WebLogic/DB2/Solaris 10 has struck again, getting back into the lead of the world record numbers for the new year with 7,174.56 JOPS@Standard.

This time the configuration used DB2 V8.2.6 on a Sun Fire E6900 with 24 UltraSPARC IV+ 1800 MHz CPUs and 4 Sun StorEdge 3510 Fibre Channel arrays.

Again, this proves that DB2 deployed on Solaris 10 on UltraSPARC IV+ based systems with Sun StorEdge arrays performs very well for application server/OLTP workloads.

Disclosure Statement:
SPECjAppServer2004 10 Sun Fire X8420 (80 cores, 40 chips) and 1 Sun Fire E6900 (48 cores, 24 chips) 7174.56 SPECjAppServer2004 JOPS@Standard
SPEC, SPECjAppServer reg tm of Standard Performance Evaluation Corporation. All results from www.spec.org as of 01/18/07.

Sunday Jan 14, 2007

DB2 on UltraSPARC T1 (aka Niagara I) based system Benchmark Publication

On Jan 9, 2006, Sun published a SPECjAppServer2004 benchmark with WebLogic/DB2/Solaris 10 using Sun Fire T2000 servers with UltraSPARC T1 processors and a Sun StorEdge 3320 storage array. The result is 801.70 SPECjAppServer2004 JOPS@Standard.

This was the first public benchmark ever to use DB2 V8.2 on a Sun Fire T2000. The published benchmark runs DB2 on a Sun Fire T2000 with a 6-core 1 GHz UltraSPARC T1. The DB2 license required for this config is 6 x 30 PVU = 180 PVUs (or, in the old terminology, about 1.8 CPU licenses). This shows that DB2 on the Sun Fire T2000 is an attractive platform considering metrics like database license price, power rating, space used by the server, etc.

Disclosure Statement:
SPECjAppServer2004 Sun Fire T2000 (8 cores, 1 chip) 801.70 JOPS@Standard.
SPEC, SPECjAppServer reg tm of Standard Performance Evaluation Corporation. All results from www.spec.org as of 01/15/07.

About

Jignesh Shah is a Principal Software Engineer in Application Integration Engineering, Oracle Corporation. AIE enables integration of ISV products, including Oracle, with Unified Storage Systems. You can also follow me on my blog at http://jkshah.blogspot.com
