By tonyn on Jun 14, 2005
SVM default interlace and resync buffer values
I'm a newbie and a recent blogging convert :-) Writing has never been my biggest hobby though I really enjoy reading. However, OpenSolaris and seeing all the beneficial information/conversations generated from fellow Sun employees' blogs inspired me to write a short one.
I'm Tony Nguyen, another engineer on the Solaris Volume Manager team. Btw, the SVM team is a wonderful collection of engineers/people and I really enjoy being a part of it. Anyway, there seems to be a common misperception about SVM performance (with the default interlace and resync buffer values) that we hear from time to time. While there's current work underway to change these defaults for better out-of-the-box performance, sysadmins can get the improved performance today with some simple changes to how they create SVM metadevices. This blog discusses how increasing the default stripe and RAID5 interlace values and the mirror resync buffer size improves overall SVM performance.
The current default interlace size for both stripe and RAID (RAID5) metadevices is 16K, or 32 blocks. The interlace value means data is stored in 16K chunks, with consecutive 16K chunks spread across the subcomponents (columns of a RAID, or components of a stripe) of a metadevice. When processing I/O, SVM reads/writes an interlace's worth of data from each component simultaneously. The current resync buffer size is set at 64K, or 128 blocks. The resync buffer size is the amount of data transferred from one submirror (or submirror component) to another when SVM performs data synchronization for a mirror (RAID1) metadevice (e.g. when attaching a submirror).
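To make the chunk layout concrete, here's a small sketch (my own illustration, not SVM source code) of how a logical block offset maps to a stripe column under a given interlace. The function name, the 3-column layout, and the math being done per-block are all assumptions for illustration:

```python
# Illustrative sketch: how interlace-sized chunks rotate across the
# components of a stripe. Not SVM internals; names are made up.

INTERLACE_BLOCKS = 32   # default 16K interlace = 32 x 512-byte blocks
NUM_COLUMNS = 3         # e.g. a 3-component stripe

def locate(logical_block, interlace=INTERLACE_BLOCKS, columns=NUM_COLUMNS):
    """Return (column, block offset within that column) for a logical block."""
    chunk = logical_block // interlace   # which interlace-sized chunk
    column = chunk % columns             # consecutive chunks rotate across columns
    row = chunk // columns               # completed rounds across all columns
    offset = row * interlace + logical_block % interlace
    return column, offset

# Consecutive 16K chunks land on successive components:
print(locate(0))    # (0, 0)  - first chunk on column 0
print(locate(32))   # (1, 0)  - next chunk on column 1
print(locate(64))   # (2, 0)
print(locate(96))   # (0, 32) - wraps back to column 0
```

This is why SVM can read/write from each component simultaneously: a large request spans chunks on different columns.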
Now, the 16K and 64K values for the interlace and resync buffer, respectively, date back to the I/O capabilities of older hardware and Solaris releases. It's quite obvious that small interlace and resync buffer values produce a high number of I/O operations for large I/O sizes, since the I/O is done in small chunks. Naturally, increasing the interlace and resync buffer values improves I/O and resync performance. But how large should these values be?
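A quick back-of-the-envelope sketch (my own arithmetic, not a measurement) shows how the operation count drops as the chunk size grows:

```python
# How many chunk-sized operations does one large I/O break into?
# Pure arithmetic illustration; the function name is made up.

def ops_per_io(io_size_kb, chunk_kb):
    """Number of chunk-sized operations needed to cover one I/O request."""
    return -(-io_size_kb // chunk_kb)   # ceiling division

# A 1 MB transfer with the 16K default vs. a 512K chunk size:
print(ops_per_io(1024, 16))    # 64 operations
print(ops_per_io(1024, 512))   # 2 operations
```

The same arithmetic applies to the resync buffer: a 64K buffer needs eight times as many transfers as a 512K buffer to move the same amount of mirror data.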
We've conducted some performance testing with values ranging from 32K to 1024K for both the interlace and resync buffer sizes. The results showed that a 512K interlace and resync buffer size gives the best additional performance: almost twice the throughput when the I/O size is greater than 16K, and half the submirror resync time (when metattaching submirrors or hotsparing components). A 1024K resync buffer gives only an additional 4-8% resync performance compared to the 512K buffer size. So how does one use these new values?
Specify the 512K interlace value when creating metadevices:

RAID5: metainit d5 -r cxtxdxsx cytydysy cztzdzsz -i 512k

Stripe (RAID0): metainit d0 1 3 cxtxdxsx cytydysy cztzdzsz -i 512k

To change the resync buffer size, edit /etc/system to specify:

set md_mirror:md_resync_bufsz = 1024

and reboot the machine. (md_resync_bufsz is given in 512-byte blocks, so 1024 blocks corresponds to the 512K buffer discussed above.)
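Putting it all together, the whole recipe might look like the session below. The device names (c1t0d0s0 and friends) and metadevice numbers are made up for illustration; substitute your own slices:

```shell
# Create a three-way stripe (RAID0) with a 512K interlace
# (hypothetical slice names).
metainit d0 1 3 c1t0d0s0 c1t1d0s0 c1t2d0s0 -i 512k

# Create a RAID5 metadevice with a 512K interlace.
metainit d5 -r c2t0d0s0 c2t1d0s0 c2t2d0s0 -i 512k

# Bump the mirror resync buffer to 512K (1024 x 512-byte blocks).
# This takes effect only after a reboot.
echo 'set md_mirror:md_resync_bufsz = 1024' >> /etc/system

# Check the resulting configuration, including the interlace value.
metastat d0
```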
Let me know if you found this blog to be of some use or would like to know about some other areas in SVM. By the way, other SVM engineers who've posted excellent blogs are:
And the team plans to post more SVM-related blogs in the near future, so stay tuned.
Technorati Tag: Solaris