Thursday Jun 03, 2010

DB2 Support Statements for Solaris 10 Containers, LDoms, and ZFS

From time to time customers ask me whether some of the newer Solaris 10 features are supported by DB2. Here are the support statements I found in the Installation Requirements section of the DB2 Information Center:

DB2 database products support the following Solaris concepts:

  • Solaris Logical Domains (LDoms)
  • Solaris Zones
  • ZFS filesystems
I have published an article on how to deploy DB2 in Solaris Containers/Zones. I'll post new blog entries soon about how to deploy DB2 in LDoms and tips for using ZFS.

Monday Oct 12, 2009

Using Sun Storage F5100 Flash Array together with Sun Storage J4400 to Improve Query Performance for Data Warehousing Workload

The new Sun Storage F5100 Flash Array provides a simple, high-performance, low-power solid-state storage solution to help accelerate database operations. The flash array is a serial attached SCSI (SAS) device based on serial ATA (SATA) flash devices called Sun Flash Modules. It provides low latency at an affordable price. The value proposition of this product is to deliver fast access to critical data and enhance the responsiveness of database applications.

Data warehousing applications are generally I/O bound in nature. And because they perform more reads than writes, this kind of workload should benefit even more from the F5100 Flash Array, as flash devices generally favor reads over writes.

This write-up presents the results of a database study I did recently to compare the performance of a storage architecture that takes advantage of the Sun Storage F5100 Flash Array with a storage architecture based only on traditional hard disks.

The testing environment was built using a Sun Fire X4275 server with two quad-core Intel Xeon X5570 processors and 72 GB of RAM to host the DB2 database server. The Sun Fire X4275 server runs the Solaris 10 OS (Update 8) and DB2 UDB V9.5 Fixpak 4. Four SAS HBA cards were installed in the database server to provide access to four array domains of the F5100 Flash Array, and up to 960 GB of raw storage (40 Sun Flash Modules). Host mirroring across HBAs and domains can be used to increase data availability, which reduces the usable capacity to 480 GB. The second SAS channel of two of the HBAs is connected to a Sun Storage J4400, which contains 24 x 146 GB disk drives. Two Storage I/O Modules (SIM 0 and SIM 1) are used with the J4400 to support dual paths. A Sun Fire X4200 with two dual-core AMD Opteron processors and 16 GB of RAM is used as the workload driver.

The deployment for the testing is shown in the following diagram.

Due to time limitations, the testing was performed against a small data warehouse. The total size of the table spaces is around 200 GB. Access to the index table spaces and the user temporary table space on the Sun Storage F5100 Flash Array is through the four SAS HBAs. Access to the production data table spaces and log containers is through a 24-disk RAID 0 volume on the J4400. The page size is 32 KB. A UFS file system is created for each volume and mounted with the "forcedirectio" option.
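For reference, creating and mounting a UFS file system with direct I/O on one of the volumes looks roughly like the following (the device name and mount point here are hypothetical placeholders, not the ones used in the test):

# newfs /dev/rdsk/c2t0d0s0
# mkdir -p /db2idx
# mount -F ufs -o forcedirectio /dev/dsk/c2t0d0s0 /db2idx

An equivalent entry with the forcedirectio option can also be added to /etc/vfstab so the file system is mounted the same way after a reboot.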

Both atomic queries such as sorts and joins, and advanced queries containing complex business logic, were tested in order to compare the performance gains of using the F5100 Flash Array in both cases. I/Os per second (IOPS) and I/O bandwidth are the main performance metrics for the measurement.

Sorts / Joins

DB2 uses the temporary table space to store data in transient or temporary work tables during query execution. For example, when doing sorts and joins, if the sort heap cannot hold all the temporary data, DB2 overflows the data to the temporary table space.
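As a rough sketch of how the temporary data can be steered onto the flash array, a 32 KB system temporary table space can be created with its container on a flash-backed file system (the buffer pool name, table space name and path below are hypothetical):

db2 "CREATE BUFFERPOOL BP32K SIZE 10000 PAGESIZE 32 K"
db2 "CREATE SYSTEM TEMPORARY TABLESPACE TMPSPC32K PAGESIZE 32 K MANAGED BY SYSTEM USING ('/flash/db2tmp') BUFFERPOOL BP32K"

Sorts and joins that overflow the sort heap will then spill to the flash device instead of the spinning disks.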

Three joins, including a nested loop join, a merge join and a hash join, were performed during the test process. The I/O activity for the disk-only test is as follows:


The average disk usage is measured for each individual disk in the J4400; the utilization across the whole volume is actually between 48% and 100%.

In comparison, the I/O activity for the flash/disk system is shown below:


By putting the temporary table space on the F5100 Flash Array, the IOPS increased from 5629 to 12983, an improvement of about 2.3 times.


Complex Queries

These queries contain more advanced business logic, mixed with both sequential and random reads. Besides the data table spaces, the temporary table spaces and index table spaces are also heavily accessed during query execution.

The I/O activity for the disk-only test is as follows:



The I/O activity for the flash/disk system is shown below:



The IOPS improves 77%, to 11345 compared to 6416 in the disk-only system. The query execution time improves 10.8 times, to 180 seconds compared to 1952 seconds in the disk-only system.



Adding a Sun Storage F5100 Flash Array to off-load index and temporary table space I/O from a disk system to flash-based storage improves overall system I/O performance: because the index and temporary table space I/O has been moved to the flash device, there is less work left for the disks, and the spinning disks can complete the remaining work more quickly.


The above testing shows how the combination of flash and disk technology can be applied to improve IBM DB2 UDB database performance for data warehousing workloads. In the experiments, indexes and temporary table spaces are stored on the F5100 Flash Array, which improves the overall performance by reducing the application response time and increasing the I/O throughput. Hybrid flash/disk technology is becoming a practical solution for database applications.




Tuesday Sep 22, 2009

Confusing coredump when running db2icrt on Solaris

Recently, when I was installing DB2 V9.5 on a brand new Sun system running Solaris x64, I got a coredump while creating a DB2 instance with the 'db2icrt' command. The error message was:

"Segmentation Fault - core dumped"

This looks like a memory access error. But it was also very confusing, because I had the same version of DB2 running on another server and had never had a problem creating an instance there.

After spending some time poking around on both systems, I finally understood what was going wrong on my new system: I had a typo in the server name in the /etc/hosts file! Because 'hostname' returned a name that could not be resolved, DB2 got an unknown host error when pinging it, which triggered the segmentation fault. Problem solved, though the error message is a bit misleading.
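If you run into the same symptom, a quick sanity check of host name resolution looks roughly like this:

# hostname
# getent hosts `hostname`        (returns nothing if the name is missing or misspelled in /etc/hosts)
# ping `hostname`

If getent comes back empty or ping fails, fix the entry in /etc/hosts before re-running db2icrt.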

Monday Apr 13, 2009

Get a Performance Jump on Sun Blade X6270



Thursday Mar 26, 2009

DB2 on Sun OpenStorage Blueprint

My new article - "Deploying DB2 Database on the Sun Storage 7000 Unified Storage System" has been published as a Sun Blueprint.

You can also go to the wiki page to comment and rate it.

Here is the brief introduction.

For DB2 applications, Sun Storage 7000 Unified Storage Systems meet a range of functional requirements for availability, capacity, and performance. This Sun BluePrints™ article describes the steps needed to configure and install a DB2 database in a Sun Storage 7000 Unified Storage System environment. The article addresses the following topics:

• Preparing a Sun Storage 7000 Unified Storage System for DB2 database storage
• Preparing a server running the Solaris™ Operating System (Solaris OS) to use NFS file systems or Internet SCSI (iSCSI) logical unit numbers (LUNs)
• Installing the DB2 software on the server and creating a database on the storage system
• Separating database transaction log files from data files and storing them on the storage system
• Taking a snapshot of an online DB2 database and recovering from the snapshot

Wednesday Mar 11, 2009

A Performance Study of Using SSDs in IBM DB2 Applications

Introduction

Database transactions are by nature random and usually involve small record sizes. Because the time to transfer a small record is short compared to the time required to reach the data location, database performance is paced by the ability to locate and retrieve information on the storage (mostly disk access time). Thus database applications are very sensitive to I/O wait time: no matter how fast the processor runs, it constantly has to wait on mechanical storage devices for its data.

Unlike magnetic disks, a flash memory SSD is a purely electronic device without any moving parts. Therefore, flash memory can provide uniform random access speed. The ability of flash memory to quickly perform a sector read or a sector write located anywhere in flash memory is one of the key characteristics that database applications can take advantage of.

On the other hand, in order to update an existing data item stored in flash memory, a time-consuming erase operation must be performed before overwriting. The read and write speeds of flash memory are therefore asymmetric, as writes take longer. This asymmetry should be considered when using flash memory for database applications.

I did some experiments recently to evaluate SSDs as stable storage for DB2 database workloads and to identify the areas where SSDs can be best utilized.

Before presenting the results from the workload analysis and performance study in the following sections, let me list the experimental settings briefly.

 Model                       Configuration
 1 x Sun Fire X4450          4 x dual-core Intel Xeon @ 2.93 GHz, 16 GB RAM, 4 x 143 GB SAS 10K RPM drives
 2 x SSD disks               32 GB
 1 x Sun StorageTek 2540     1 RAID controller, 12 x 146 GB SAS 15K RPM drives

The operating system on the X4450 is Solaris Express Community Edition snv_106, which has the latest integration for ZFS L2ARC.

Test Scenarios and Results

DB2 can use separate storage spaces for database tables, indexes, log files and temporary data. Various experiments were done to test the performance of using SSDs for the different DB2 storage spaces, as well as the combination of using SSDs with ZFS.

1. SSD for Transaction Log

In OLTP applications, when a transaction commits, the log records created by the transaction have to be written to storage. When a large number of concurrent transactions commit at a rapid rate, log records must be flushed to disk very frequently. In most cases, writing log records is a significant performance bottleneck.

The OLTP workload I used for my tests was built in-house at Sun. The data size is about 5 GB. It exhibits little locality or sequentiality in its data accesses, and most of the I/O activity during the steady phase consists of writes (e.g., forced log writes at commit time). Prefetching and write buffering become less effective for this type of workload, and performance is limited more by disk latency than by disk bandwidth or capacity.

In experiments 1 and 2, the "HDD test" used one hard drive for the table space containers and another hard drive for the log container. The "SSD test" used one SSD for the indexes and hot table containers (the rest of the tables are on a hard drive), and another SSD for the log container. The page size for both tests is 4K.
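For reference, placing hot containers and the log on the SSDs amounts to something like the following (the database name, container path and size here are hypothetical placeholders):

db2 "CREATE TABLESPACE HOTDATA MANAGED BY DATABASE USING (FILE '/ssd1/db2/hotdata.dbf' 2 G)"
db2 update db cfg for OLTPDB using NEWLOGPATH /ssd2/db2logs

Note that NEWLOGPATH only takes effect once all applications have disconnected from the database.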

The throughput results are shown below:

As we can see, the throughput doubled in the SSD test. In this regard, the extremely low latency of SSDs can reduce the average transaction commit time and improve transaction processing throughput significantly.

2. SSD for Indexes and Hot Data

There is little read activity in experiment 1 because most of the data had been loaded into the DB2 buffer pools during the ramp-up phase. To test read performance, a couple of sort queries were performed using a small (2 MB) buffer pool. The following diagram shows the elapsed time taken to process the sort queries.
 

The sorts on an SSD ran 2.5 times faster than on a hard disk.
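For reference, constraining the buffer pool to roughly 2 MB for this kind of read test can be done with a statement like the following (the default 4 KB buffer pool is assumed here; 500 pages x 4 KB = 2 MB):

db2 "ALTER BUFFERPOOL IBMDEFAULTBP IMMEDIATE SIZE 500"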

More read activity is exercised in the following experiments.

3. SSD as L2ARC for ZFS

The experiments in this section use another DSS workload, which mainly stresses I/O. 

A new feature called L2ARC has been added to ZFS that enables using an SSD as a high-performance secondary cache sitting between the ARC in RAM and the data on disk. Experiments were done to explore how DB2 can benefit from using ZFS with an SSD as the L2ARC.
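Adding SSDs as L2ARC cache devices to a ZFS pool is a one-line operation; a minimal sketch with hypothetical device names:

# zpool create dbpool c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0
# zpool add dbpool cache c3t0d0 c3t1d0
# zpool status dbpool

The cache devices show up in a separate "cache" section of the zpool status output, and ZFS populates them automatically as blocks are about to be evicted from the ARC.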

Three tests were conducted with the following configurations:
3.1. A ZFS pool consisting of 8 disks was used to hold the database data.
3.2. A ZFS pool consisting of 8 disks was used to hold the database data. Two SSDs were added to the ZFS pool as L2ARC.
3.3. A UFS + DirectIO file system based on an 8-disk volume was used to hold the database data.

The data size is around 30 GB. Queries such as table scans, index scans, sorts and joins were run for each test. The page size for all tests is 32 KB.
 

With the use of L2ARC, the performance of sequential reads was improved by a factor of 1.3, and that of random reads was improved by a factor of 2. 

Another significant difference is the read latency. The iostat output for tests 1 and 3 showed an average service time of around 12-13 ms, while in test 2 the service time for the L2ARC cache device was around 2-2.5 ms, about 6 times faster than what the disks were delivering. That also explains the reduced time spent on read activity with the use of L2ARC.

 

Summary

I/O performance has become ever more critical in achieving high performance for database applications. This write-up shows how flash memory SSDs, as a storage alternative to hard disks, can improve overall DB2 transaction processing. The key is to keep the data you access the most on the fastest storage.

Flash SSDs can be a good fit for frequently read tables and indexes due to their superior random read performance. External sorting is another operation that can benefit from the low latency of SSDs, because the read pattern of external sorting is quite random, particularly during the merge phase. The use of a low-write-latency SSD as a dedicated storage device for the transaction log can reduce the commit time delay considerably.

Using an SSD as the L2ARC for ZFS showed great improvements in query execution and significantly reduced the service time.

As flash SSDs get less expensive, it becomes more practical to put more database data on SSDs.

Monday Nov 10, 2008

Another Fast Backup and Recovery Solution for DB2 Database Applications

Today, Sun announces the Sun Storage 7000 Unified Storage Systems, a.k.a. Amber Road, the first Open Storage appliances. I tried out the product a while ago, running a DB2 workload that used it for database table space containers and log containers. What impressed me most is its simplicity in administration and its comprehensive analytics. No commands are needed for installation and configuration; everything can be done by clicking and dragging in the GUI. DTrace is used underneath to troubleshoot, analyze and optimize the system, and all of these features are presented graphically.

Running a DB2 database with data and transaction log files stored on Sun Storage 7000 Unified Storage Systems has several advantages:


  1. Extremely Fast Backup: Backup performance can be significantly improved using Snapshot copies in conjunction with DB2's set write suspend and set write resume commands (a minimal command sequence is sketched after this list). Snapshot copies can be created in a matter of seconds, regardless of the size of the database. This reduces the database backup window from hours to seconds and allows DBAs to take frequent full database backups without having to take the database offline.

  2. Quick Recovery: Since there is no data copying involved, an incredible amount of time is saved as the file system is put back to the original state it was at the time the Snapshot was taken.

  3. Availability and Reliability: The Amber Road Snapshot can create a database backup in a matter of seconds without bringing the database down. Amber Road provides RAID-Z, which is more reliable than RAID-5. It allows for better distribution of parity and eliminates the "RAID-5 write hole" (in which data and parity become inconsistent after a power loss). Data and parity are striped across all disks within a RAID-Z group.

  4. No Impact on System Response Time: Because a Snapshot copy is simply a picture of the file system at a specific point in time, and creating a database backup using Snapshot does not involve actual data I/O, the process of backing up a database has virtually no impact on system response time.
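As mentioned in point 1 above, the basic suspend/snapshot/resume sequence is quite short. A minimal sketch (the database name is hypothetical, and the snapshot itself is taken on the appliance through its BUI or CLI):

db2 connect to MYDB
db2 set write suspend for database
   (take the snapshot of the project/share holding the data and log files on the appliance)
db2 set write resume for database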

In future blog entries, I will introduce some hands-on examples of how to use the Sun Storage 7000 Unified Storage Systems with DB2.


Thursday Nov 06, 2008

DB2 Express-C on OpenSolaris

DB2 Express-C is a free version of the IBM DB2 database server. But it is not a crippled or trial version of DB2. It shares the same code as the other commercial DB2 editions and has a very generous license that doesn't impose limits on your database: there are no connection limits, no database size limits, and no user limits. It supports 2 cores and 2 GB of RAM absolutely free. Compared with SQL Server 2005 Express and Oracle 10g Express Edition, which have limitations on database size, DB2 Express-C has the most generous offering.


In Allan Packer's blog "Are Proprietary Databases Doomed?", he pointed out that for some companies database licenses have become their single biggest IT cost. The impact is probably greater on small and medium-sized companies that don't have the same ability to command the hefty discounts that larger companies typically enjoy from database vendors. Fortunately, we have more choices today. If you don't want to spend an arm and a leg on databases, you can either come over to the camp of open source databases, or, if for any reason you don't want an open source database, DB2 Express-C could be a good option for you.


The latest version of DB2 Express-C V9.5 for Solaris 10 x64 has been available for months. You can get it from here. Besides Solaris 10 for x64, you can also deploy Express-C on OpenSolaris. OpenSolaris contains a subset of the source code for the Solaris Operating System, but with an open source license. It is a viable alternative to Linux with more unique features. Among the familiar Sun features are the powerful DTrace dynamic tracing tool and the impressive ZFS filesystem, neither of which is likely to make it to Linux due to licensing and personality conflicts.


If you are familiar with Solaris 10, then there is no gap for you in using OpenSolaris. Actually it is even easier to install and deploy OpenSolaris, whether on your x64/x86 server, on your laptop, or on your SPARC server (yep, the coming OpenSolaris 2008.11 release will support SPARC).


To deploy DB2 Express-C V9.5 on OpenSolaris, you don't need to bother reading any installation/configuration documents for these two products if you already know how to deploy DB2 UDB on Solaris 10. The DB2 utilities for the Solaris platform, such as db2osconf and db2ptree, work the same way as they do with DB2 UDB on Solaris 10, and everything else just works fine as well. There is almost no change in the way of installation and configuration; the only exception is that DB2 Express-C doesn't support non-root installation.


Since OpenSolaris supports Solaris Containers technology, you can consolidate DB2 Express-C with other applications in different containers on a single server. Dedicated CPUs and memory caps can be assigned to each container for resource isolation. For example:


global# zonecfg -z myzone
zonecfg:myzone> add dedicated-cpu
zonecfg:myzone:dedicated-cpu> set ncpus=1-2
zonecfg:myzone:dedicated-cpu> end
zonecfg:myzone> add capped-memory
zonecfg:myzone:capped-memory> set physical=2g
zonecfg:myzone:capped-memory> set swap=20g
zonecfg:myzone:capped-memory> end
zonecfg:myzone> exit


DB2 Express-C plus OpenSolaris provides a reliable, secure and scalable data server solution, and it is cost-free!

Who says that – The best things in life are free!

Friday Oct 10, 2008

Running IBM DB2 UDB on Sun SPARC Enterprise T5440

The newly announced Sun SPARC Enterprise T5440 is based on the UltraSPARC T2 Plus. This quad-socket server packs 32 cores / 256 threads into a 4RU enclosure and can support up to 1/2 TB of memory. How do you scale a DB2 UDB database server on this powerful system to make full use of its capacity? The following suggests some best practices and tuning guidelines for DB2 deployments on the Sun SPARC Enterprise T5440.

First of all, before doing anything on the system, ensure that the system's firmware is up to date. This is especially important for running DB2 on CMT systems, which have a hypervisor that sits between the hardware and the operating system. The recommended System Firmware version for the T5440 is 7.2.0 or above. Note that DB2 can crash on T51x0 and T52x0 systems using early firmware versions; see IBM's tech notes for more information. The latest versions of Sun System Firmware can be downloaded from http://www.sun.com/bigadmin/patches/firmware.

The T5440 ships with the Solaris 10 5/08 release. LDoms on the T5440 support Solaris 10 11/06 and above. The required kernel patch is 137111-06 or above, plus the nxge patch 138048-05 for software Large Segment Offload (LSO) support. It is always recommended to upgrade to the latest update release of Solaris 10.

Using large memory pages with DB2 on Solaris is always a good practice. Large memory pages are intended to improve the performance of the kernel and database applications by improving the utilization of the processor's TLB. DB2 mainly uses large pages for database shared memory, which includes the buffer pools, database heap, package cache, lock heap, catalog cache and utility heap. The DB2 database shared memory is allocated as Intimate Shared Memory (ISM) segments. Recent Solaris 10 updates have improved the default behavior for the allocation of large pages: by default, Solaris 10 5/08 will use a maximum of 4M pages for DB2 shared memory segments. To enable 256M pages, add the following tunable to /etc/system and reboot.

      * enable 256M pages
      set max_uheap_lpsize=0x10000000
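Whether the DB2 shared memory segments actually got a larger page size can be checked after the instance is started, for example with pmap (db2sysc is the DB2 engine process; the pgrep pipeline is just one way to find its PID):

# pmap -s `pgrep db2sysc | head -1` | more

Look at the Pgsz column for the large shared memory mappings.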

For TCP/IP tunables, run the following commands from the command line as the root user:
ndd -set /dev/tcp tcp_xmit_hiwat 262144
ndd -set /dev/tcp tcp_recv_hiwat 262144

DB2 OLTP applications may benefit from the Fixed Priority scheduling class on the T5440. DB2 processes in the fixed priority class stay at a fixed priority level with a fixed time quantum, so there is no frequent ping-ponging of priorities and the related changes in time quantum that occur in the Time Sharing scheduling class (the default class for all processes). To use the Fixed Priority (FX) scheduling class, just run the priocntl(1) command from the command line as the root user:
# priocntl -s -c FX -i projid <projid for a db2 instance>

"ps -ecf <DB2-PID>" can print out the scheduling class and its priority assigned for a db2 process. The default priority "0" usually works well for DB2.

Since the T5440 is a NUMA architecture, memory latency should be considered when running database applications that are very sensitive to memory latency. Fortunately, the threaded design of the UltraSPARC T2 Plus chip acts to minimize the impact of memory latency. DB2 also supports the optimal placement of a database partition's processes and memory, which means a database partition's processes and memory can be restricted to a locality group (lgroup), giving lower memory latencies and increased performance. The following shows how to achieve this.

In my experiment, I created 3 Solaris Containers on a T5440 and deployed one DB2 instance in each container, plus one DB2 instance in the global zone, for a total of 4 DB2 instances on the system. A dedicated resource pool was created for each non-global zone (the global zone uses the default pool):

create pset db1_pset (uint pset.min = 64; uint pset.max=64)
create pset db2_pset (uint pset.min = 64; uint pset.max=64)
create pset db3_pset (uint pset.min = 64; uint pset.max=64)
create pool db1_pool (string pool.scheduler="FX")
create pool db2_pool (string pool.scheduler="FX")
create pool db3_pool (string pool.scheduler="FX")
associate pool db1_pool (pset db1_pset)
associate pool db2_pool (pset db2_pset)
associate pool db3_pool (pset db3_pset)

Here are the commands to create one of the non-global zones:
# zonecfg -z db2svr1
db2svr1: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:db2svr1> create
zonecfg:db2svr1> set zonepath=/export/home/zones/db2svr1
zonecfg:db2svr1> set autoboot=true
zonecfg:db2svr1> add net
zonecfg:db2svr1:net> set address=129.148.63.235
zonecfg:db2svr1:net> set physical=nxge0
zonecfg:db2svr1:net> end
zonecfg:db2svr1> add fs
zonecfg:db2svr1:fs> set dir=/db2data
zonecfg:db2svr1:fs> set special=/dev/dsk/c4t600A0B8000388C940000039048BE7480d0s4
zonecfg:db2svr1:fs> set raw=/dev/rdsk/c4t600A0B8000388C940000039048BE7480d0s4
zonecfg:db2svr1:fs> set type=ufs
zonecfg:db2svr1:fs> end
zonecfg:db2svr1> add fs
zonecfg:db2svr1:fs> set dir=/db2log
zonecfg:db2svr1:fs> set special=/dev/dsk/c4t600A0B8000388C940000038E48BE72A4d0s4
zonecfg:db2svr1:fs> set raw=/dev/rdsk/c4t600A0B8000388C940000038E48BE72A4d0s4
zonecfg:db2svr1:fs> set type=ufs
zonecfg:db2svr1:fs> end
zonecfg:db2svr1> add inherit-pkg-dir
zonecfg:db2svr1:inherit-pkg-dir> set dir=/opt/ibm
zonecfg:db2svr1:inherit-pkg-dir> end
zonecfg:db2svr1> add fs
zonecfg:db2svr1:fs> set dir=/opt/IBM
zonecfg:db2svr1:fs> set special=/opt/IBM
zonecfg:db2svr1:fs> set type=lofs
zonecfg:db2svr1:fs> set options=[rw,nodevices]
zonecfg:db2svr1:fs> end
zonecfg:db2svr1> set pool=db1_pool
zonecfg:db2svr1> verify
zonecfg:db2svr1> commit
zonecfg:db2svr1> exit

Each non-global zone has its local file systems mounted, and it can access the DB2 executables in the global zone.

# zoneadm list -vi
  ID NAME             STATUS     PATH                           BRAND    IP
   0 global           running    /                              native   shared
   1 db2svr1          running    /export/home/zones/db2svr1     native   shared
   2 db2svr2          running    /export/home/zones/db2svr2     native   shared
   3 db2svr3          running    /export/home/zones/db2svr3     native   shared

By binding a zone to a resource pool, the percentage of memory accesses that are satisfied locally should increase dramatically for the DB2 processes running inside the zone. This approach generally gave a ~10% performance improvement for DB2.

Use the poolstat(1) command to monitor resource pool CPU utilization:
# poolstat 5
                              pset
id pool                 size used load
  0 pool_default           64 60.3  253
  1 db1_pool               64 55.5  240
  2 db2_pool               64 51.7  249
  3 db3_pool               64 53.2  252


This way of monitoring makes it straightforward to see which zones or applications are saturating the CPU. You can then adjust the resources accordingly to fit their needs.

With Solaris Containers technology, we have the flexibility to maximize DB2 performance by running multiple DB2 instances on a T5440. You can also take advantage of other zero-cost virtualization technologies like LDoms to scale or consolidate solutions on a single system.

Friday Sep 19, 2008

Why create a project for a DB2 instance owner, and how?

To run a DB2 database on a Solaris server, we need to set the System V tunables. Before Solaris 10, this was done by setting kernel parameters in /etc/system. With Solaris 10, the /etc/system settings are still valid, but there are some drawbacks to using the /etc/system file to set the System V tunables:

1. It is not available inside Solaris Containers.
2. The system needs a reboot every time you change the file.
3. The settings affect the whole system.

We should use resource controls instead, which have several advantages over /etc/system:
1. They are supported in Solaris Containers.
2. Changes are dynamic; no reboot is required.
3. We can have different resource control settings for different DB2 instances or other applications running on the same system.

It is recommended to create a project for each DB2 instance owner. The simplest approach is to edit the /etc/project file directly as the root user, e.g.,

user.db2inst1:1234:::root:project.max-shm-memory=(privileged,123379030425,deny);project.max-shm-ids=(privileged,24576,deny);project.max-msg-ids=(privileged,20480,deny);project.max-sem-ids=(privileged,24576,deny);
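If you prefer not to edit /etc/project by hand, the same entry can be created with the projadd(1M) command; here is a sketch using the same project name, ID and resource controls (the comment and group fields are omitted):

# projadd -p 1234 \
      -K "project.max-shm-memory=(privileged,123379030425,deny)" \
      -K "project.max-shm-ids=(privileged,24576,deny)" \
      -K "project.max-msg-ids=(privileged,20480,deny)" \
      -K "project.max-sem-ids=(privileged,24576,deny)" \
      user.db2inst1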

db2inst1 is the instance name. Become user db2inst1 and run the id(1M) command to verify that the project user.db2inst1 has been associated with the user, e.g.,

$id -p

uid=7001(db2inst1) gid=5001(db2iadm) projid=1234(user.db2inst1)

Then run the prctl(1) command to verify the IPC resource control settings:

$prctl $$

Alternatively, the project entry can also be set up as follows,

db2inst1proj:1234:::root:project.max-shm-memory=(privileged,123379030425,deny);project.max-shm-ids=(privileged,24576,deny);project.max-msg-ids=(privileged,20480,deny);project.max-sem-ids=(privileged,24576,deny);

and, in addition, add the following to /etc/user_attr:
db2inst1::::project=db2inst1proj

DB2 provides a utility called db2osconf to generate the recommended values for the IPC tunables on the Solaris platform.

# ./db2osconf 

set msgsys:msginfo_msgmni = 20480
set semsys:seminfo_semmni = 24576 
set shmsys:shminfo_shmmax = 123379030425 
set shmsys:shminfo_shmmni = 24576

However, the utility (in DB2 v8.x and v9.1) only prints out the kernel parameters; you can refer to the System V IPC Configuration documentation to map the kernel parameters to the resource controls.
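For reference, the mapping between the db2osconf output above and the project resource controls used earlier in this entry is:

msgsys:msginfo_msgmni   ->  project.max-msg-ids
semsys:seminfo_semmni   ->  project.max-sem-ids
shmsys:shminfo_shmmax   ->  project.max-shm-memory
shmsys:shminfo_shmmni   ->  project.max-shm-ids

Note that project.max-shm-memory limits the total shared memory for the whole project, whereas shmmax was a per-segment limit, so the db2osconf value serves as a safe upper bound rather than an exact equivalent.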

In the latest DB2 V9.5 FP 3, the db2osconf utility has been enhanced to add the support for resource controls.

About

This is a blog about technical information on running the IBM DB2 database on Solaris.
