Friday Oct 12, 2007

World Record Price/Performance benchmark with DB2/Solaris 10 on Sun Fire X4500

We just published a world-record price/performance result on the industry-standard TPC-H data warehousing benchmark at the 3 TB scale factor, using DB2 9.1 on Solaris 10 8/07 with 10 Sun Fire X4500 (aka Thumper) servers.

The benchmark report is available at the tpc.org website; you can download either the Executive Summary or the Full Disclosure Report.

A few highlights of the benchmark:

  • Best Price/Performance in 3TB scale factor
  • First database industry standard benchmark publication for Sun Fire X4500
  • First DB2/Solaris TPC-H benchmark after a gap of 6 years
  • First-ever DB2 on Solaris x64 result for a data warehouse type of workload
  • First DB2 9 result on Solaris on an x64 based system
  • Shows why the Sun Fire X4500 is a highly cost-effective system in the role of a data warehouse appliance

The benchmark configuration used 20 dual-core AMD Opteron processors in total, making it very cost effective in terms of database license fees. The overall system delivered about 15GB/sec of simultaneous IO, which is impressive even by Sun Fire X4500 standards. Although the total disk capacity was very high, not all of it was used for the benchmark, which means plenty of data marts could be stored on the same system too.
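As a back-of-envelope check of the throughput figure above (numbers from the benchmark summary; this is just the division spelled out):

```shell
# ~15 GB/s of aggregate IO across 10 Sun Fire X4500 servers works out
# to roughly 1.5 GB/s sustained per server.
AGGREGATE_MBS=15000   # ~15 GB/s expressed in MB/s
SERVERS=10
echo "per-server: $((AGGREGATE_MBS / SERVERS)) MB/s"
```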

My related blog entry on the subject is also available for more insight.

Required Disclosure:

  Sun Fire X4500 cluster 38,672.4 QphH@3000GB, $29.39 USD $/QphH@3000GB,   available 10/12/07. TPC-H, QphH, $/QphH tm of Transaction Processing Performance  Council (TPC). More info www.tpc.org.

 

Monday Apr 09, 2007

How to use DB2 V8 and V9 in Solaris 10 resource pools

In OLTP benchmarks with heavy network traffic conducted with DB2 on Solaris 10, we found that it often helps to put DB2 on its own pool of processors, separating it from the processors that also service the network, and to use the FX scheduler for DB2 without making FX the default for the whole system.

Here is a quick guide on how to use DB2 in Solaris 10 resource pools, assuming you have followed a setup very similar to my previous blog entry. Here are the additional steps to configure pools for DB2. (This entry assumes that you have not enabled pools on your system before.)

Enable pools on your system using the following command

# pooladm -e

Now initialize the pools based on your existing environment as a starting point

# pooladm -s 

Now view the existing pool configuration using the following command

# pooladm

You will see that it has a default pool called pool_default and a default processor set called pset_default. We now carve out a processor set and pool for DB2 from it. Assume that pset_default currently has 16 processors in it and the system has two active devices on it. Then we can allocate about 14 processors to DB2 as follows:

Create a text file called pool.host with the following contents

modify pset pset_default (uint pset.min=2; uint pset.max=2)
create pset db2pset  (uint pset.min=14; uint pset.max=14)
create pool db2pool (string pool.schedule="FX")
associate pool db2pool (pset db2pset)

Then modify the existing pool configuration as follows

# poolcfg -f pool.host

Activate the modified pool configuration

# pooladm -c

Verify the changes using the following command:

# pooladm

Also, we get more bang for the buck by turning off interrupts on the dedicated processor set for DB2. This can be achieved by observing the CPU IDs assigned to db2pset and disabling interrupts on them as follows:

# psradm -i 0-13

Where CPU IDs 0 to 13 are assigned to db2pset. (Your CPU IDs may vary.)

Now you have pools set up for DB2. To make DB2 use its own pool, we again use the projects database to set the default pool for the DB2 instance owner (assuming you have already defined the project user.db2inst1):

# projmod -a -K "project.pool=db2pool" user.db2inst1

Now, for a single-partition database instance, when you log in with a fresh shell and issue db2start, it will start the database in db2pool. If you are using DPF, then you have to set the pool name in the fifth column of the logical node entries in db2nodes.cfg.
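For the DPF case, a sketch of what such a db2nodes.cfg might look like, with the pool name as the fifth (resource set) column; the node numbers, host names, and netnames here are made-up examples:

```
0 host1 0 host1-net db2pool
1 host1 1 host1-net db2pool
2 host2 0 host2-net db2pool
```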

If for some reason you want to revert to the default setup, it can be done as follows

# pooladm -x

And if you want to disable the pools facility altogether, then you use

# pooladm -d

Now, the real benefit for customers of separating the CPUs for network and system resources from the CPUs for DB2 is license fees. You can use sub-capacity licensing for just the size of the pool used for DB2, and not pay DB2 license fees for the CPUs that are only doing network, disk, and other system activities. This way you get the maximum savings and return on your investment in DB2 on Solaris 10.
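The saving can be sketched with the example numbers from the pool setup above (16 CPUs total, 14 in db2pool); the actual sub-capacity terms are defined by IBM, so treat this purely as an illustration:

```shell
# Illustration only: with 14 of the 16 CPUs fenced into db2pool, the
# DB2 license needs to cover 14/16 of the machine rather than all of it.
TOTAL_CPUS=16
DB2_POOL_CPUS=14
PCT=$((DB2_POOL_CPUS * 100 / TOTAL_CPUS))
echo "licensed: ${DB2_POOL_CPUS}/${TOTAL_CPUS} CPUs (~${PCT}%)"
```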

Monday Mar 19, 2007

Sun Studio tools and DB2 V8.2 on Solaris 10 Zones

Every DBA, system administrator, or software developer at some point needs to "drill down" into the call flow of how an application interacts with the operating system beneath it.

Various tools and features, including DTrace, exist to answer the question of "what's going on" in a particular setup. While DTrace is more dynamic, using it to capture all events can be overkill, leaving a huge post-processing job to understand what all those events really mean and how they interact with each other.

This is where the Sun Studio Performance Analyzer tools still have an edge over DTrace (a personal view; many may not agree), especially when the engineer trying to figure out the problem does not have access to the system that has it.

We now look at how the Sun Studio Performance Analyzer tools can be used to collect such information about DB2 V8.2 running on Solaris 10 (in Zones or otherwise).

Typically, information is collected using the "collect" tool and analyzed using the GUI "analyzer" tool. There is also a text-based analyzer called "er_print". The text analyzer is primarily useful when the information (referred to as "experiments" in this context) is on a remote server on the west coast and the engineer is located on the east coast. (Java remote display event handling still has a long way to go.) Generally an experiment is collected by starting the application as "collect $applicationname", or by invoking dbx on the process, issuing "set collector enable true", and then "cont"-ing the process.

However, things are not that simple with DB2 V8.2. DB2 V8.2 is a multi-process database engine, which means that if you want to collect on every process, you have to enable collection on each one manually. Also, DB2 V8.2 (as do DB2 V9.1 and all prior versions) uses setuid flags to gain root privileges when starting the DB2 database engine. The collect tool works by preloading certain libraries, and by default the Solaris security mechanism does not allow insecure (untrusted) libraries to be preloaded across "userid" boundaries (user -> root -> user). Another problem is default file permissions: if root creates a file, the user will not have permission to write to it.

The good news is that there is a workaround for all of the above problems, and that's the real content of this blog entry.

For the sake of this blog entry, let's assume that "db2inst1" is the DB2 instance owner that hosts the database and /export/home/db2inst1 is the home directory of the user "db2inst1". You will need "root" access (or at least get the system administrator to do it for you) in order to set up all the workarounds. (This is where Solaris 10 Zones are ideal, since sysadmins can create a zone and give developers root access to it without really compromising the entire system; that is how the groundwork for this blog entry was done.) In this case, the DB2 instance directory will be /export/home/db2inst1/sqllib. For a single-partition DB2 instance (where sqllib/db2nodes.cfg contains exactly one line), if you try to run collect on db2start, you will see the following message:

-bash-3.00$ /opt/SUNWspro/bin/collect  -F all -A copy -d /export/home/db2inst1/experiments $HOME/sqllib/adm/db2start
Creating experiment database /export/home/db2inst1/experiments/test.1.er ...
ld.so.1: db2start: warning: 
/opt/SUNWspro/bin/../prod/bin/../lib/v9/dbxruntime/libcollector.so
: open failed: illegal insecure pathname
03/14/2007 14:30:58     0   0   SQL1063N  DB2START processing was successful.
SQL1063N  DB2START processing was successful.

DB2 has started and an experiment has been created; however, the experiment will be empty because libcollector.so is not allowed to cross the setuid boundary, since it has an insecure path name. Also, when you cross such setuid environments, some files created for profile information while the process has an effective userid of root will not be writable by the process once its effective userid is "db2inst1" again. Hence the umask also has to be modified.
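The umask half of the problem can be demonstrated with any ordinary files; a small portable sketch (the file names under /tmp are arbitrary):

```shell
# With a restrictive umask, newly created files are not group/world
# writable; umask 000 removes that restriction, which is why the
# profile files stay writable after the euid switches back to db2inst1.
umask 022
rm -f /tmp/umask_demo_a /tmp/umask_demo_b
touch /tmp/umask_demo_a        # created -rw-r--r--
umask 000
touch /tmp/umask_demo_b        # created -rw-rw-rw-
ls -l /tmp/umask_demo_a /tmp/umask_demo_b
rm -f /tmp/umask_demo_a /tmp/umask_demo_b
```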

To work around this problem, use the crle command with root privileges as follows:

First find out the default settings:

# crle -64

Default configuration file (/var/ld/64/ld.config) not found
  Default Library Path (ELF):   /lib/64:/usr/lib/64  (system default)
  Trusted Directories (ELF):    /lib/secure/64:/usr/lib/secure/64  (system default)

Modify the trusted directories path by taking the old list and appending the directory that contains the dbxruntime libraries:

# crle -64 -s  /lib/secure/64:/usr/lib/secure/64:/opt/SUNWspro/bin/../prod/bin/../lib/v9/dbxruntime

NOTE: There should be no trailing forward slash after the directory name; otherwise it will not work for that directory.

Verify the setting again. Note that we are changing this only for 64-bit binaries.

# crle -64

Configuration file [version 4]: /var/ld/64/ld.config
  Default Library Path (ELF):   /lib/64:/usr/lib/64  (system default)
  Trusted Directories (ELF):    
/lib/secure/64:/usr/lib/secure/64:/opt/SUNWspro/bin/../prod/bin/../lib/v9/dbxruntime

Command line:
  crle -64 -c /var/ld/64/ld.config -s \\
/lib/secure/64:/usr/lib/secure/64:/opt/SUNWspro/bin/../prod/bin/../lib/v9/dbxruntime


Now stop the DB2 engine if it is still running, then modify the umask and restart the DB2 engine as follows:

-bash-3.00$ umask 000
-bash-3.00$ /opt/SUNWspro/bin/collect   -F all -A copy -d \\
/export/home/db2inst1/experiments $HOME/sqllib/adm/db2start
Creating experiment database /export/home/db2inst1/experiments/test.1.er ...
03/19/2007 10:46:53     0   0   SQL1063N  DB2START processing was successful.
SQL1063N  DB2START processing was successful.
-bash-3.00$ db2sampl
-bash-3.00$ db2 connect to sample

   Database Connection Information

 Database server        = DB2/SUN64 8.2.6
 SQL authorization ID   = DB2INST1
 Local database alias   = SAMPLE

-bash-3.00$ db2 "select count(\*) from employee"

1
-----------
         32

  1 record(s) selected.

-bash-3.00$ db2 connect reset
DB20000I  The SQL command completed successfully.
-bash-3.00$ db2stop
03/19/2007 10:52:37     0   0   SQL1064N  DB2STOP processing was successful.
SQL1064N  DB2STOP processing was successful.
-bash-3.00$

When db2stop is executed, it also marks the end of the collection for the DB2 engine instance. Now you can use "er_print" or the "analyzer" GUI as follows:

-bash-3.00$ cd /export/home/db2inst1/experiments
-bash-3.00$ /opt/SUNWspro/bin/er_print test.1.er

In this case, since we know the engine has lots of sub-processes, we load them all easily as follows:

-bash-3.00$ /opt/SUNWspro/bin/er_print test.1.er/_\*.er
(/opt/SUNWspro/bin/er_print) limit 10
(/opt/SUNWspro/bin/er_print) functions
Functions sorted by metric: Exclusive User CPU Time

Excl.     Incl.      Name
User CPU  User CPU
  sec.      sec.
30.111    30.111     
 1.651     1.651     take_deferred_signal
 1.231     1.231     memset
 0.741     1.081     sqlno_qnc_bound_by_func(sqlno_globals\*,sqlno_set,sqlno_set,sqlnq_qnc\*,sqlno_qtb\*,int\*,
SQLNO_BND\*,int\*,sqlnq_qnc\*\*)
 0.700     6.545     sqlnp_parser(sqlnp_cb\*)
 0.550     0.550     memcpy
 0.550     1.281     sqlogmblkEx
 0.500     0.771     sqlno_bit2intD(sqlno_globals\*,sqlno_sdb\*,sqlno_apcb\*,sqlno_set,sqlno_iset\*\*)
 0.470     2.051     sqlno_prop_ff(sqlno_globals\*,sqlno_apcb\*,sqlno_qtb\*,sqlno_join_context\*,sqlno_set,sqlno_set,
sqlno_set,float\*,sqlno_part\*,SQLNN_BOOLEAN,float\*,float\*,sqlno_set\*,sqlno_table\*,sqlno_table\*)
 0.350     0.921     sqlno_fd_normalize_order(sqlno_globals\*,sqlno_order_class\*,sqlno_order_class\*,
sqlno_plan_properties\*,sqlno_apcb\*,sqlno_order_class\*\*,sqlno_set\*)

(/opt/SUNWspro/bin/er_print)

If you use the analyzer GUI, you have to select all processes before you start looking at the data.

Also, if this is associated with a technical problem, you can tar up the directory /export/home/db2inst1/experiments and send it to the technical support team, which can look at the data using the same tools.
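One way to package the experiments directory for shipping to support (a sketch using standard tar and gzip; the path comes from the example above):

```shell
# Bundle the experiments directory into a single compressed archive
# that can be sent to the support team.
cd /export/home/db2inst1
tar cf - experiments | gzip > experiments.tar.gz
```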

In another entry we will see how to collect data with the Sun Studio tools when the DB2 engine is used with DPF (Database Partitioning Feature). That case is easily identified by there being more than one line in the $HOME/sqllib/db2nodes.cfg file.
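That check can be scripted; a minimal sketch (the path assumes the example instance owner's layout from above):

```shell
# db2nodes.cfg has one line per database partition: exactly one line
# means a single-partition instance, more than one means DPF.
NODES=$(wc -l < "$HOME/sqllib/db2nodes.cfg")
if [ "$NODES" -eq 1 ]; then
    echo "single-partition instance"
else
    echo "DPF instance with $NODES partitions"
fi
```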

Tuesday Mar 13, 2007

How to use DB2 V8.2 in Solaris 10 Zones


This is more of a hands-on how-to on using DB2 V8.2 in Solaris 10 Zones. I have a Solaris 10 system that already has DB2 V8.2 fixpack 13 installed in /opt/IBM/DB2/V8.2. Currently there are no zones defined on the system, so we start by creating a local zone:

# mkdir -p /export/zone/testbed
# chmod 700 /export/zone/testbed

# zonecfg -z testbed
testbed: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:testbed> create
zonecfg:testbed> set zonepath=/export/zone/testbed
zonecfg:testbed> add net
zonecfg:testbed:net> set address=10.0.0.1
zonecfg:testbed:net> set physical=eri0
zonecfg:testbed:net> end
zonecfg:testbed> verify
zonecfg:testbed> commit
zonecfg:testbed> exit


# zoneadm -z testbed install
Preparing to install zone .
Creating list of files to copy from the global zone.
Copying <14661> files to the zone.
Initializing zone product registry.
Determining zone package initialization order.
Preparing to initialize <1018> packages on the zone.
Initialized <1018> packages on zone.
Zone  is initialized.
Installation of <110> packages was skipped.
The file  contains a log of the zone installation.
#

Please review the install_log file very carefully, as it highlights all packages that could not be added to the local zone, clearly indicating whether each package was installed without problems. For example, in my install_log file I see

\*\*\* package <mqm> was not installed:

The package <mqm> is either not fully installed in the
global zone, or the package contains a request script. Only packages
that are fully installed in the global zone and that do not contain a
request script can be installed when a new zone is created. If this
package does not contain a request script, you can correct this
problem by removing and reinstalling the package in the global zone.

This indicates that because the WebSphere MQ package installed in the global zone uses a request script, it was not installed into the local zone by default. In this particular installation, I verified that all 42 packages whose names start with "db2" were installed into the local zone without any problems.

NOTE: While this "inheritance" works, from my talks with the IBM DB2 support team, their recommendation is not to install DB2 in the global zone at all, but to install it directly in the local zone once the local zone is available.

# zoneadm -z testbed boot
# zlogin -C testbed
[Connected to zone 'testbed' console]

You did not enter a selection.
What type of terminal are you using?
 1) ANSI Standard CRT
 2) DEC VT52

..

Follow the prompts to complete the zone initialization; otherwise the zone will not be fully available. When the system identification step is completed and you log out, use ~. to exit from the console mode.

# zlogin testbed
[Connected to zone 'testbed' pts/4]
Last login: Tue Mar 13 07:09:25 on pts/4
Sun Microsystems Inc.   SunOS 5.10      Generic January 2005
# uname -a
SunOS testbed 5.10 Generic_118833-36 sun4u sparc SUNW,Sun-Blade-1000
# cat /etc/release
                       Solaris 10 6/06 s10s_u2wos_09a SPARC
           Copyright 2006 Sun Microsystems, Inc.  All Rights Reserved.
                        Use is subject to license terms.
                             Assembled 09 June 2006

# cd /opt/IBM/db2/V8.1
# pwd
/opt/IBM/db2/V8.1

It inherited my DB2 V8.2 installation from the global zone, where it was installed before I created the new local zone.

NOTE AGAIN: While this works, from my talks with the IBM DB2 support team, their recommendation is not to install DB2 in the global zone at all, but to install it directly in the local zone once the local zone is available.

# mkdir /export/home
# groupadd db2iadm1
# useradd -g db2iadm1 -s /usr/bin/bash -d /export/home/db2inst1 -m db2inst1
64 blocks
# projadd -U db2inst1 user.db2inst1
# projmod -a -K "project.max-shm-ids=(priv,4k,deny)" user.db2inst1
# projmod -a -K "project.max-sem-ids=(priv,4k,deny)" user.db2inst1
# projmod -a -K "project.max-shm-memory=(priv,4G,deny)" user.db2inst1
# projmod -a -K "project.max-msg-ids=(priv,4k,deny)" user.db2inst1
# useradd -g other -s /usr/bin/bash -d /export/home/db2fenc1 -m db2fenc1
64 blocks
#

The recommended way to figure out ballpark values for the above project parameters is to run the db2osconf utility in the global zone and use the values it recommends.

# cd /opt/IBM/db2/V8.1/instance
# ./db2icrt -w 64 -p 50000 -u db2fenc1 db2inst1
Sun Microsystems Inc.   SunOS 5.10      Generic January 2005
Sun Microsystems Inc.   SunOS 5.10      Generic January 2005
DBI1070I Program db2icrt completed successfully.


# su - db2inst1
Sun Microsystems Inc.   SunOS 5.10      Generic January 2005
-bash-3.00$ db2licm -a db2license.txt
DBI1402I License added successfully.

-bash-3.00$ db2start
03/13/2007 16:09:43     0   0   SQL1063N  DB2START processing was successful.
SQL1063N  DB2START processing was successful.
-bash-3.00$
-bash-3.00$ db2level
DB21085I  Instance "db2inst1" uses "64" bits and DB2 code release "SQL08026"
with level identifier "03070106".
Informational tokens are "DB2 v8.1.0.120", "s060801", "U808889", and FixPak
"13".
Product is installed at "/opt/IBM/db2/V8.1".
    
    
Now verify the newly created instance:
      
      
-bash-3.00$ db2sampl
-bash-3.00$ db2 connect to sample

   Database Connection Information

 Database server        = DB2/SUN64 8.2.6
 SQL authorization ID   = DB2INST1
 Local database alias   = SAMPLE

-bash-3.00$ db2 "select count(\*) from employee"

1
-----------
         32

  1 record(s) selected.

-bash-3.00$ db2 connect reset
DB20000I  The SQL command completed successfully.
-bash-3.00$

Hopefully this helps in getting over the fear of using Zones with DB2. Again, refer to the previous blog entry for the white paper on DB2 in Containers.
With this, my more-than-two-year-old original entry on DB2 V8.1 (prior to the release of DB2 V8.2) in Solaris 10 zones, written before Solaris 10 was released, is now obsolete.


 

Thursday Jan 18, 2007

World Record SPECjAppServer2004 result with DB2/Solaris as backend

The deadly combination of WebLogic/DB2/Solaris 10 has struck again, taking back the lead in the new year with a world record of 7,174.56 JOPS@Standard.

This time the configuration used DB2 V8.2.6 on a Sun Fire E6900 with 24 UltraSPARC IV+ 1800MHz CPUs and 4 Sun StorEdge 3510 Fibre Channel arrays.

Again, this proves that DB2, when deployed on Solaris 10 on an UltraSPARC IV+ based system with Sun StorEdge arrays, delivers great performance for application server/OLTP workloads.

Disclosure Statement:
SPECjAppServer2004 10 Sun Fire X8420 (80 cores, 40 chips) and 1 Sun Fire E6900 (48 cores, 24 chips) 7174.56 SPECjAppServer2004 JOPS@Standard
SPEC, SPECjAppServer reg tm of Standard Performance Evaluation Corporation. All results from www.spec.org as of 01/18/07.

Sunday Jan 14, 2007

DB2 on UltraSPARC T1 (aka Niagara I) based system Benchmark Publication

On Jan 9, 2007, Sun published a SPECjAppServer2004 benchmark with WebLogic/DB2/Solaris 10 using Sun Fire T2000 servers with UltraSPARC T1 processors and a Sun StorEdge 3320 storage array. The result is 801.70 SPECjAppServer2004 JOPS@Standard.

This was the first public benchmark ever to use DB2 V8.2 on a Sun Fire T2000. The published benchmark runs DB2 on a Sun Fire T2000 with a 6-core 1GHz UltraSPARC T1. The DB2 license required for this config is 6 x 30 PVU = 180 PVUs (or, in the old terminology, about 1.8 CPU licenses). This shows that DB2 on the Sun Fire T2000 is an attractive platform by various metrics such as database license prices, power ratings, physical volume taken by the server, etc.

Disclosure Statement:
SPECjAppServer2004 Sun Fire T2000 (8 cores, 1 chip) 801.70 JOPS@Standard.
SPEC, SPECjAppServer reg tm of Standard Performance Evaluation Corporation. All results from www.spec.org as of 01/15/07.

Monday Jan 01, 2007

DB2 9 (V9.1 fixpack 1) now available on Solaris 10 x64

DB2 V9.1 for Solaris 10 x64 (64-bit instance) is now available for download from IBM. (Functionally it is equivalent to DB2 V9.1 fixpack 1 for Solaris SPARC)

The Part Numbers for DB2 products on Solaris x64 are as follows:

  • DB2 Enterprise Server Edition V9.1 for Solaris x86-64 Multilingual (C94Z2ML)
  • DB2 Client V9.1 for Solaris x86-64 Multilingual (C94Z5ML)
  • DB2 Runtime Client V9.1 Solaris x86-64 Multilingual (C94Z6ML)
  • DB2 Workgroup Server Edition V9.1 Solaris X86-64 Multilingual (C94Z3ML)
  • DB2 Express Edition V9.1 Solaris X86-64 Multilingual (C94Z4ML)

Note: DB2 9 fixpack 1 supports ZFS zvols as raw devices. It supports Solaris Zones by way of "inheritance", which means DB2 9 has to be installed before creating the zones (unlike DB2 V8.2). It supports only 64-bit Solaris platforms (SPARC, AMD64, EM64T) and only 64-bit DB2 instances.

Try it out!!!

Wednesday Oct 25, 2006

DB2 V8.2 in Solaris 10 Zones/Containers

Cherry Shu has published Deploying IBM DB2 UDB V8.2.x in Containers in the Solaris 10 OS. It's a must-read if you want to use DB2 in Solaris 10 Zones. It highlights which order of zone creation and DB2 installation is supported and what happens if the order is changed, and it covers setting IPC tunables within zones. Great article!

 

Wednesday Sep 20, 2006

DB2 Fixpack Downloads for Solaris

This entry will be updated regularly to provide quick links to DB2 fixpack downloads for Solaris (handy to bookmark). As I get links for DB2 9 I will add them here.

 

Monday Sep 18, 2006

World Record Benchmark using DB2 on Solaris / UltraSPARC IV+

Sun delivered a world record SPECjAppServer2004 benchmark that used DB2 on Solaris as the database backend. The earlier benchmark used the UltraSPARC T1 based Sun Fire T2000 for the application server and the x64 based Sun Fire X4200 for the database tier. In the recent world record benchmark, the application server runs on the Sun Blade 8000 series using Opteron based X8400 blades, and the database is on an UltraSPARC IV+ based Sun Fire E6900.

Why is this benchmark special for DB2 on Solaris customers?

  • First published DB2 on Solaris/SPARC benchmark in about 5 years
  • First published benchmark from Sun using DB2 on UltraSPARC IV+ based server

With the release of this benchmark, Sun now has multiple SPECjAppServer2004 benchmarks in the public domain using DB2 on Solaris (x64 and UltraSPARC) with multiple application servers (such as BEA WebLogic and IBM WebSphere) on various Sun Fire and Sun Blade systems.

To list them again:

  1. BEA WebLogic Application Server and DB2 on Solaris x64 running in separate Zones in Sun Fire X4600
  2. IBM WebSphere Application Server on Sun Fire T2000 and IBM DB2 on Sun Fire X4200
  3. BEA WebLogic Application Server on Sun Blade 8000 Modular Series and DB2 on Sun Fire E6900

Solaris proves itself to be the best platform for (various) databases.

Disclosure: SPECjAppServer2004 Sun Blade 8000 (10 Sun Blade X8400, 80 cores, 40 chips) and 1 Sun Fire E6900 (48 cores, 24 chips) 6662.98 JOPS@Standard, One Sun Fire X4600 (16 cores, 8 chips) 1000.86 SPECjAppServer2004 JOPS@Standard, One Sun Fire X4200 (4 cores, 2 chips) and one Sun Fire T2000 (8 cores, 1 chip) 616.22 SPECjAppServer2004 JOPS@Standard. SPEC, SPECjAppServer reg tm of Standard Performance Evaluation Corporation. Results from www.spec.org as of 9/13/06.

 

About

Jignesh Shah is a Principal Software Engineer in Application Integration Engineering, Oracle Corporation. AIE enables integration of ISV products, including Oracle products, with Unified Storage Systems. You can also follow me on my blog at http://jkshah.blogspot.com
