Tuesday Sep 22, 2009

Confusing coredump when running db2icrt on Solaris

Recently, while installing DB2 V9.5 on a brand new Sun system running Solaris x64, I got a coredump when creating a DB2 instance with the 'db2icrt' command. The error message was:

"Segmentation Fault - core dumped"

This looks like a memory access error. But it was also very confusing, because I had the same version of DB2 running on another server and had never had a problem creating an instance there.

After spending some time poking around on both systems, I finally understood what was going wrong on my new system - I had a typo in the server name in the /etc/hosts file! Because the name returned by 'hostname' did not match any entry in /etc/hosts, DB2 got an unknown-host error when pinging it, which triggered the segmentation fault. Problem solved, though the error message is a bit misleading.
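
If you run into the same coredump, a quick sanity check is to confirm that the name returned by 'hostname' actually resolves (a sketch, using a hypothetical server name mydbserver; per the above, db2icrt performs a similar lookup internally):

# hostname
mydbserver
# grep mydbserver /etc/hosts     # there should be an entry matching the name above
# ping `hostname`                # db2icrt fails if this lookup does not succeed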

Friday Oct 10, 2008

Running IBM DB2 UDB on Sun SPARC Enterprise T5440

The newly announced Sun SPARC Enterprise T5440 is based on the UltraSPARC T2 Plus processor. The quad-socket server packs 32 cores/256 threads into a 4RU enclosure and supports up to 1/2 TB of memory. How do you scale a DB2 UDB database server on this powerful system to make full use of its capacity? The following suggests some best practices and tuning guidelines for DB2 deployments on the Sun SPARC Enterprise T5440.

First of all, before doing anything else on the system, ensure that the system's firmware is up to date. This is especially important for running DB2 on CMT systems, which have a hypervisor that sits between the hardware and the operating system. The recommended System Firmware version for the T5440 is 7.2.0 or above. Note that DB2 can crash on T51x0 and T52x0 systems running early firmware versions; see IBM's tech notes for more information. The latest versions of Sun System Firmware can be downloaded from http://www.sun.com/bigadmin/patches/firmware.
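
To check what is currently installed (a sketch; output formats vary slightly across firmware releases):

-> version                       # from the ILOM service processor; reports the installed System Firmware version
# prtdiag -v | grep -i OBP       # from Solaris; shows the OBP revision that ships with the firmware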

The T5440 ships with the Solaris 10 5/08 release. LDoms on the T5440 support Solaris 10 11/06 and above. The required kernel patch is 137111-06 or above, plus the nxge patch 138048-05 for software Large Segment Offload (LSO) support. It is always recommended to upgrade to the latest update release of Solaris 10.
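
A quick way to confirm the OS level and the required patches:

# cat /etc/release               # confirm the Solaris 10 update level
# showrev -p | grep 137111       # verify the kernel patch is at revision -06 or above
# showrev -p | grep 138048       # verify the nxge LSO patch is installed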

Using large memory pages with DB2 on Solaris is always a good practice. Large pages improve the performance of the kernel and database applications by improving utilization of the processor's TLB. DB2 mainly uses large pages for database shared memory, which includes the bufferpools, database heap, package cache, lock heap, catalog cache and utility heap. DB2 database shared memory is allocated as Intimate Shared Memory (ISM) segments. Recent Solaris 10 updates have improved the default behavior for allocating large pages: by default, Solaris 10 5/08 will use at most 4M pages for DB2 shared memory segments. To enable 256M pages, add the following tunable to /etc/system and reboot.

      * enable 256M pages
      set max_uheap_lpsize=0x10000000
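
To verify that large pages are in use after the reboot, inspect the page sizes in the DB2 engine's address space (db2sysc is the DB2 engine process; the pgrep usage below is a sketch):

# pagesize -a                                  # list the page sizes this platform supports
# pmap -xs `pgrep -n db2sysc` | grep 256M      # ISM segments should report a 256M page size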

For TCP/IP tunables, increase the TCP transmit and receive buffer high-water marks by running the following commands as the root user:
ndd -set /dev/tcp tcp_xmit_hiwat 262144
ndd -set /dev/tcp tcp_recv_hiwat 262144
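
Note that ndd settings do not persist across a reboot; to make them permanent, rerun the commands from a boot-time init script (e.g. an /etc/rc2.d script or an SMF service).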

DB2 OLTP applications may benefit from the Fixed Priority scheduling class on the T5440. DB2 processes in the fixed priority class stay at a fixed priority level with a fixed time quantum, so they avoid the frequent ping-ponging of priorities and time quanta seen in the Time Sharing scheduling class (the default class for all processes). To use the Fixed Priority (FX) scheduling class, run the priocntl(1) command from the command line as the root user:
# priocntl -s -c FX -i projid <projid for a db2 instance>

"ps -ecf <DB2-PID>" can print out the scheduling class and its priority assigned for a db2 process. The default priority "0" usually works well for DB2.

As the T5440 is a NUMA architecture, memory latency should be considered when running database applications that are very sensitive to it. Fortunately, the threaded design of the UltraSPARC T2 Plus chip acts to minimize the impact of memory latency. DB2 also supports the optimal placement of a database partition's processes and memory: a database partition's processes and memory can be restricted to a Locality Group (lgroup), which lowers memory latencies and increases performance. The following shows how to achieve this.

In my experiment, I created 3 Solaris Containers on a T5440 and deployed one DB2 instance in each container, plus one DB2 instance in the global zone, for a total of 4 DB2 instances on the system. A dedicated resource pool was created for each non-global zone (the global zone's instance uses the default pool, making four pools in total):

create pset db1_pset (uint pset.min = 64; uint pset.max=64)
create pset db2_pset (uint pset.min = 64; uint pset.max=64)
create pset db3_pset (uint pset.min = 64; uint pset.max=64)
create pool db1_pool (string pool.scheduler="FX")
create pool db2_pool (string pool.scheduler="FX")
create pool db3_pool (string pool.scheduler="FX")
associate pool db1_pool (pset db1_pset)
associate pool db2_pool (pset db2_pset)
associate pool db3_pool (pset db3_pset)
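
These are poolcfg(1M) subcommands; a minimal way to apply them, shown for the first pool (assuming the pools facility is not yet enabled):

# pooladm -e     # enable the resource pools facility
# poolcfg -c 'create pset db1_pset (uint pset.min = 64; uint pset.max = 64)'
# poolcfg -c 'create pool db1_pool (string pool.scheduler="FX")'
# poolcfg -c 'associate pool db1_pool (pset db1_pset)'
# pooladm -c     # commit the configuration to the running system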

Here are the commands to create one of the non-global zones:
# zonecfg -z db2svr1
db2svr1: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:db2svr1> create
zonecfg:db2svr1> set zonepath=/export/home/zones/db2svr1
zonecfg:db2svr1> set autoboot=true
zonecfg:db2svr1> add net
zonecfg:db2svr1:net> set address=129.148.63.235
zonecfg:db2svr1:net> set physical=nxge0
zonecfg:db2svr1:net> end
zonecfg:db2svr1> add fs
zonecfg:db2svr1:fs> set dir=/db2data
zonecfg:db2svr1:fs> set special=/dev/dsk/c4t600A0B8000388C940000039048BE7480d0s4
zonecfg:db2svr1:fs> set raw=/dev/rdsk/c4t600A0B8000388C940000039048BE7480d0s4
zonecfg:db2svr1:fs> set type=ufs
zonecfg:db2svr1:fs> end
zonecfg:db2svr1> add fs
zonecfg:db2svr1:fs> set dir=/db2log
zonecfg:db2svr1:fs> set special=/dev/dsk/c4t600A0B8000388C940000038E48BE72A4d0s4
zonecfg:db2svr1:fs> set raw=/dev/rdsk/c4t600A0B8000388C940000038E48BE72A4d0s4
zonecfg:db2svr1:fs> set type=ufs
zonecfg:db2svr1:fs> end
zonecfg:db2svr1> add inherit-pkg-dir
zonecfg:db2svr1:inherit-pkg-dir> set dir=/opt/ibm
zonecfg:db2svr1:inherit-pkg-dir> end
zonecfg:db2svr1> add fs
zonecfg:db2svr1:fs> set dir=/opt/IBM
zonecfg:db2svr1:fs> set special=/opt/IBM
zonecfg:db2svr1:fs> set type=lofs
zonecfg:db2svr1:fs> set options=[rw,nodevices]
zonecfg:db2svr1:fs> end
zonecfg:db2svr1> set pool=db1_pool
zonecfg:db2svr1> verify
zonecfg:db2svr1> commit
zonecfg:db2svr1> exit
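
The zone is then installed and booted as usual:

# zoneadm -z db2svr1 install
# zoneadm -z db2svr1 boot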

Each non-global zone has its own local file systems mounted, and it can access the DB2 executables in the global zone.

# zoneadm list -vi
  ID NAME             STATUS     PATH                           BRAND    IP
   0 global           running    /                              native   shared
   1 db2svr1          running    /export/home/zones/db2svr1     native   shared
   2 db2svr2          running    /export/home/zones/db2svr2     native   shared
   3 db2svr3          running    /export/home/zones/db2svr3     native   shared

By binding each zone to a resource pool, the percentage of memory accesses that are satisfied locally should increase dramatically for the DB2 processes running inside the zone. This approach generally yielded a ~10% performance improvement for DB2.

Use the poolstat(1M) command to monitor resource pool CPU utilization:
# poolstat 5
                              pset
id pool                 size used load
  0 pool_default           64 60.3  253
  1 db1_pool               64 55.5  240
  2 db2_pool               64 51.7  249
  3 db3_pool               64 53.2  252


This way of monitoring makes it straightforward to see which zones or applications are saturating their CPUs. You can then adjust the resources accordingly to fit their needs.
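
For example, CPUs can be shifted between processor sets on the fly (the count of 8 below is illustrative):

# poolcfg -dc 'transfer 8 from pset db1_pset to db2_pset'    # -d applies the change directly to the running configuration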

With Solaris Container technology, we have the flexibility to maximize DB2 performance by running multiple DB2 instances on the T5440. You can also take advantage of other zero-cost virtualization technologies, such as LDoms, to scale or consolidate solutions on a single system.
