A Performance Study of Using SSDs in IBM DB2 Applications


Database transactions are random by nature and typically involve small record sizes. Because transferring a small record takes far less time than seeking to its location, database performance is paced by how quickly the storage can locate and retrieve data (mostly disk access). Database applications are therefore very sensitive to I/O wait time: no matter how fast the processor runs, it constantly waits on mechanical storage devices for its data.

Unlike magnetic disks, a flash memory SSD is a purely electronic device without any moving parts. Therefore, flash memory provides uniform random access speed. The ability to quickly read or write a sector located anywhere in flash memory is one of the key characteristics that database applications can take advantage of.

On the other hand, updating an existing data item stored in flash memory requires a time-consuming erase operation before the overwrite. The read and write speeds of flash memory are therefore asymmetric: writes take longer than reads. This asymmetry should be considered when using flash memory for database applications.

I did some experiments recently to evaluate SSDs as stable storage for DB2 database workloads and to identify the areas where SSDs can best be utilized.

Before presenting the results from the workload analysis and performance study in the following sections, let me list the experimental settings briefly.

Model                      Configuration
1 x Sun Fire X4450         4 x Dual-Core Intel Xeon @ 2.93 GHz, 16 GB RAM, 4 x 143 GB SAS 10K RPM drives
2 x SSD disks              32 GB
1 x Sun StorageTek 2540    1 RAID controller, 12 x 146 GB SAS 15K RPM drives

The operating system on the X4450 is Solaris Express Community Edition snv_106, which has the latest integration for ZFS L2ARC.

Test Scenarios and Results

DB2 can use separate storage spaces for database tables, indexes, log files and temporary data. Various experiments are done to test the performance of using SSD for different DB2 storage spaces, and the combination of using SSD with ZFS.
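In DB2, separating these storage spaces comes down to placing their containers on different devices. The following is a minimal sketch, not the exact setup used in these tests; the mount points (/ssd0, /ssd1, /hdd0), file names, and database name mydb are hypothetical:

```shell
# Hypothetical layout: /ssd0 and /ssd1 are SSD file systems, /hdd0 a hard-drive one.
# Place hot data and indexes on the SSD, the remaining tables on the hard drive.
db2 "CREATE TABLESPACE hot_ts MANAGED BY DATABASE USING (FILE '/ssd0/hot_ts.dbf' 1G)"
db2 "CREATE TABLESPACE cold_ts MANAGED BY DATABASE USING (FILE '/hdd0/cold_ts.dbf' 4G)"
# Point the transaction log at its own dedicated device.
db2 "UPDATE DB CFG FOR mydb USING NEWLOGPATH /ssd1/db2log"
```

DMS (database-managed) tablespaces are used here so each container maps to a specific file on a specific device.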

1. SSD for Transaction Log

In OLTP applications, when a transaction commits, the log records created by the transaction have to be written to the storage. When a large number of concurrent transactions commit at a rapid rate, the log records will be requested to be flushed to disk very often. In most cases writing log records is a significant performance bottleneck.   
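A back-of-the-envelope calculation shows why log-write latency caps commit throughput when commits serialize behind synchronous log flushes. The latency figures below are illustrative assumptions, not measurements from these tests:

```shell
# Upper bound on serialized commits/sec = 1 / log-write latency.
# Latencies are assumed round numbers for illustration only.
hdd_us=5000   # ~5 ms synchronous small write on a 10K RPM drive (assumption)
ssd_us=250    # ~0.25 ms synchronous small write on an SSD (assumption)
hdd_commits=$((1000000 / hdd_us))
ssd_commits=$((1000000 / ssd_us))
echo "HDD: ~${hdd_commits} commits/s, SSD: ~${ssd_commits} commits/s"
```

Group commit and controller write caches raise both numbers in practice, but the ratio illustrates why the latency of the log device matters so much.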

The OLTP workload I used for my tests was built in-house at Sun. The data size is about 5 GB. It exhibits little locality or sequentiality in its data accesses, and most I/O activity during the steady phase consists of writes (e.g., forced log writes at commit time). Prefetching and write buffering are less effective for this type of workload, and performance is limited more closely by disk latency than by disk bandwidth or capacity.

In experiments 1 and 2, the "HDD test" used one hard drive for the table space containers and another hard drive for the log container. The "SSD test" used one SSD for the index and hot-table containers (the remaining tables were on a hard drive) and another SSD for the log container. The page size for both tests was 4 KB.

The throughput results were as follows:

As we can see, throughput doubled in the SSD test. The extremely low latency of SSDs reduces average transaction commit time and can significantly improve transaction-processing throughput.

2. SSD for Indexes and Hot Data

There was little read activity in experiment 1 because most of the data had been loaded into the DB2 buffer pools during the ramp-up phase. To test read performance, a couple of sort queries were run with a small (2 MB) buffer pool. The following diagram shows the elapsed time taken to process the sort queries.

The sorts on an SSD ran 2.5 times faster than on a hard disk.

Read activity is exercised further in the following experiments.

3. SSD as L2ARC for ZFS

The experiments in this section use a different workload, a DSS workload that mainly stresses I/O.

A new feature called L2ARC has been added to ZFS. It enables using an SSD as a high-performance secondary cache that sits between the ARC in RAM and the data on disk. Experiments were done to explore how DB2 can benefit from using ZFS with an SSD as L2ARC.
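Attaching SSDs as L2ARC is done by adding them to the pool as cache devices. A sketch with hypothetical Solaris device names (adjust to your own system):

```shell
# Create a pool over 8 disks for the database data (device names are examples).
zpool create dbpool c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 c2t1d0
# Add two SSDs to the pool as L2ARC cache devices.
zpool add dbpool cache c3t0d0 c3t1d0
# Verify the cache vdevs and watch per-device activity.
zpool status dbpool
zpool iostat -v dbpool 5
```

The cache devices warm up over time as blocks are evicted from the ARC, so L2ARC benefits show up after a warm-up period rather than immediately.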

Three tests were conducted with the following configurations:
3.1. A ZFS pool consisting of 8 disks held the database data.
3.2. The same 8-disk ZFS pool held the database data, with two SSDs added to the pool as L2ARC.
3.3. A UFS + DirectIO file system on an 8-disk volume held the database data.

The data size is around 30 GB. Queries such as table scans, index scans, sorts and joins were run for each test. The page size for all tests was 32 KB.

With the use of L2ARC, the performance of sequential reads was improved by a factor of 1.3, and that of random reads was improved by a factor of 2. 

Another significant difference is read latency. The iostat output for tests 3.1 and 3.3 showed an average service time of around 12-13 ms, while in test 3.2 the service time for the L2ARC cache device was around 2-2.5 ms, about 6 times faster than the disks were delivering. That also explains the shorter time spent on read activity with L2ARC.
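For reference, these service times come from the Solaris iostat utility; the asvc_t column reports the average time (in milliseconds) a request spends being actively serviced by the device:

```shell
# Extended device statistics with descriptive device names,
# sampled every 5 seconds, 3 samples. asvc_t is the active service time (ms).
iostat -xn 5 3
```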



I/O performance has become ever more critical to achieving high performance in database applications. This write-up shows how flash SSDs, as a storage alternative to hard disks, can improve overall DB2 transaction processing. The key is to keep the data you access the most on the fastest storage.

Flash SSDs can be a good fit for frequently read tables and indexes due to their superior random-read performance. External sorting is another operation that benefits from the low latency of SSDs, because the read pattern of external sorting is quite random, particularly during the merge phase. Using a low-write-latency SSD as a dedicated storage device for the transaction log can considerably reduce commit delay.

Using an SSD as L2ARC for ZFS showed a great improvement in query execution and significantly reduced service times.

As flash SSDs get less expensive, it becomes more practical to run more database applications on SSDs.


Much has been written about the variability of random write performance between different SSD suppliers. I do not see any disclosure, when configuring an X4450 with SSDs, of which manufacturer Sun is sourcing them from. Can anyone assure me that we are using Intel SSDs, either the Intel X25-E or the Intel X25-M?

Additionally, can anyone comment on any drawbacks to configuring an X4450 with 6 SSDs and no traditional hard drives?

Thanks for your help,

Posted by John Hoffmann on March 27, 2009 at 05:31 AM EDT #

Hi John, yes, the manufacturer of the SSDs I used for my tests is Intel. Using 6 SSDs to replace the HDDs in an X4450 might not be enough for larger databases, and is not appropriate for DSS workloads.

Posted by Cherry Shu on March 31, 2009 at 05:09 AM EDT #

see 24 Samsung SSDs in a raid array:

Posted by engel on April 01, 2009 at 11:19 PM EDT #

Cherry, you have commented that the SSDs are from Intel - (1)

The section "1. SSD for Transaction Log" has a critical flaw. You highlighted that moving from HDD to SSD doubled the throughput. Logging is sequential in nature, and unless a whole lot of rollbacks were occurring, the activity on these drives can be characterized as append only.

(a) HDDs are great for sequential writes, and on par with SSDs, unless the database was doing small writes. If the database was doing small writes, they can easily be coalesced in the memory of disk controllers, translating them into big writes.

(b) In case you are not using disk controllers with memory, databases need to sync logs onto persistent/durable storage before acknowledging commits on a transaction. It is not clear if you turned off write-buffers on both Intel SSDs and the hard drive. As I understand the SSDs have a much heavier penalty compared to HDDs when the write-buffer is disabled. I would request to see these results in your blog next - that will be very interesting.

Posted by Darpan Dinker on April 06, 2009 at 01:59 PM EDT #



