Analyzing the Sun Storage 7000 with Filebench

Today, I am proud to have contributed to the industry's first open storage appliances, the Sun Storage 7000 Unified Storage Systems from Sun Microsystems. Open Storage delivers open-source software on top of standard x86-based commodity hardware, packaged as an appliance, at a fraction of the cost of proprietary hardware. Sun's open-source software brings the added advantages of flexibility, ease of customization, and support from both Sun Microsystems and a large, growing open-source community.

At a high level, the Sun Storage 7000 has the following components:

1) x86-based Sun servers: The Sun Storage 7000 comes with 1, 2, or 4 sockets of quad-core AMD Opteron processors. You can easily configure the amount of processing power and RAM according to your own storage requirements.

2) OpenSolaris-based appliance software: Along with existing OpenSolaris technologies such as ZFS, DTrace, and Zones, the Sun Storage 7000 provides a browser-based graphical user interface that lets you configure your storage server, set up your storage environment, and easily monitor metrics such as CPU, network, and disk usage. I made extensive use of an excellent feature called Analytics (described later in this article).

3) Solid-state devices: The Sun Storage 7000 may be configured with a number of solid-state devices, which act as an extra layer of cache for disk I/O. There are two types, for read and write acceleration, named readzillas and logzillas respectively (see the sketch after this list). These devices have been shown to deliver better read and write performance than 7,200 rpm SATA disks alone can sustain. You can read more about these devices in Adam Leventhal's excellent blog.

4) JBODs (Just a Bunch Of Disks): The Sun Storage 7000 may be connected to external arrays of disks. One example is the Sun Storage J4400 array, which contains 24 750 GB, 7,200 rpm SATA disks.
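The appliance manages the solid-state devices automatically, but conceptually they map onto standard ZFS constructs: a logzilla becomes a separate intent-log (slog) device, and a readzilla becomes a second-level ARC (L2ARC) cache device. A minimal sketch of the ZFS-level equivalent, with hypothetical pool and device names:

```
# Hypothetical pool/device names; the appliance does this for you.
# Attach a write-optimized SSD (logzilla) as a separate ZFS intent log:
zpool add tank log c4t0d0

# Attach a read-optimized SSD (readzilla) as an L2ARC cache device:
zpool add tank cache c4t1d0
```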

One of the tools we used to evaluate this platform is Filebench, an open-source framework for simulating application workloads on file systems. We deployed a Sun Storage 7410, a 4-socket, 16-core system configured with 128 GB of RAM, two Sun Storage J4400s, one Sun Multithreaded 10 GigE card, 3 logzillas, and 2 readzillas. The appliance is connected through a switch to 18 Sun v40z clients, each with a 1 GigE interface.

We configured the storage as ZFS mirrors (a RAID 10 layout). A ZFS filesystem was created for each client and mounted over NFSv4. While Filebench supports synchronizing threads within a run, our main challenge was synchronizing the separate Filebench instances running on the different clients, so that they drive the appliance simultaneously. We ran a variety of workloads: single- and multi-threaded streaming reads, streaming writes, random reads, and file creation. No benchmark can replicate the exact requirements of every customer, but we believe these workloads illustrate well what the Sun Storage 7410 can do. You may read Roch's interesting blog on how the workloads were designed.
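The appliance's UI drives all of this configuration, but the underlying steps are roughly equivalent to the following sketch (pool, filesystem, and host names are hypothetical):

```
# On the appliance: build a mirrored pool (RAID 10 layout) from JBOD disks,
# then carve out one ZFS filesystem per client and share it over NFS.
zpool create tank mirror c1t0d0 c2t0d0 mirror c1t1d0 c2t1d0   # ...more pairs
zfs create tank/client01
zfs set sharenfs=on tank/client01

# On each (Solaris) client: mount its filesystem over NFSv4.
mount -F nfs -o vers=4 7410-server:/tank/client01 /mnt/bench
```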

Coordinating so many clients was no easy task, and we spent days writing a script, which is available in the toolkit here. While we have added sufficient comments to the script, it is by no means ready for easy installation and use; please do get in touch with me if you plan to use it. We also monitored CPU, network, and disk metrics on the appliance using DTrace (see 11metrics.d in the toolkit). DTrace has a small but not negligible overhead, so the results below are likely 3-5% lower than what we would achieve without DTrace running in the background.
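The full script lives in the toolkit linked above; stripped to its core idea, it starts the DTrace collector on the appliance and then kicks off Filebench on all clients as close to simultaneously as possible. A minimal sketch (host names and the workload file are hypothetical, and the exact Filebench invocation depends on your version):

```
#!/bin/sh
# Minimal sketch of the coordination wrapper; the real script is in the toolkit.

# Start the DTrace metrics collector on the appliance in the background.
ssh appliance 'dtrace -s 11metrics.d' > metrics.out &
collector=$!

# Kick off Filebench on every client at (nearly) the same instant,
# remembering each run's PID so we can wait on the client runs alone.
pids=""
for host in client01 client02 client03   # ...through client18
do
    ssh $host 'filebench -f /var/tmp/streamread.f' > $host.out &
    pids="$pids $!"
done

# Wait for every client run to finish, then stop the collector.
wait $pids
kill $collector
```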

Results from the different Filebench workloads are as follows:

Test (executed per client) | Aggregate metric (18 clients) | Network I/O (MBytes/s) | CPU util (%)
1 thread streaming reads from a 20 GB uncached set, 30 sec | 871.0 MBytes/s | 936 | 82
1 thread streaming reads from the same set, 30 sec | 1012.6 MBytes/s | 1086 | 68
20 threads streaming reads from a different 20 GB uncached set, 30 sec | 924.2 MBytes/s | 1008 | 88
10 threads streaming reads from the same set, 30 sec | 1005.9 MBytes/s | 1071 | 83
1 thread streaming writes, 120 sec | 461.5 MBytes/s | 589 | 68
20 threads streaming writes, 120 sec | 444.5 MBytes/s | 503 | 81
20 threads 8 KB random reads from a different 20 GB uncached set, 30 sec | 6204 IOPS | 52 | 14
128 threads 8 KB random reads from the same set, 30 sec | 7047 IOPS | 57 | 23
128 threads 8 KB synchronous writes to a 20 GB set, 120 sec | 24555 IOPS | 111 | 73

We used the ZFS record size of 128 KB for the MBytes/s-oriented tests, and an 8 KB record size for the IOPS (I/O operations per second) tests. The 8 KB record size delivers better performance by aligning the read/write requests with the records on disk. Please also note that the file sets mentioned above are 20 GB per client, so for 18 clients the total working set for these tests was 360 GB.
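On a stock ZFS system the equivalent knob is the per-filesystem recordsize property; a sketch with a hypothetical filesystem name:

```
# 128 KB records (the ZFS default) for the streaming, MBytes/s-oriented tests:
zfs set recordsize=128k tank/client01

# 8 KB records for the IOPS-oriented tests, so each random 8 KB read or write
# maps onto exactly one ZFS record instead of straddling a larger one:
zfs set recordsize=8k tank/client01
```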

A few observations from the data:

(1) For the streaming read tests, we sustain network I/O of close to 1 GByte/s. LSO (Large Send Offload) helps us significantly in this regard (please refer to this blog article on the benefits of LSO). Also, please read how our team helped deliver better LSO here.

(2) Reading from a cached dataset improves streaming read performance by about 15-20%. You may observe from the table that caching also lowers CPU utilization and reduces disk activity, since reads are served from RAM instead of the spindles.
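One simple way to confirm that the second run is served from memory is to watch the ZFS ARC hit/miss counters on the appliance while the test runs; a sketch using the standard Solaris kstat interface:

```
# Snapshot the ARC hit/miss counters before and after a run; a fully
# cached run shows hits climbing while misses stay nearly flat.
kstat -p zfs:0:arcstats:hits zfs:0:arcstats:misses
```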

(3) One thread on one client can read close to 50 MBytes/s, so with 18 clients (18 threads) we get to about 900 MBytes/s, which is already approaching the capacity of the appliance's single 10 GigE link. A multithreaded read per client (20 threads per client, 360 threads in total) therefore does not increase read performance by much.

(4) For random reads, we are bottlenecked by the IOPS each disk can deliver (about 150-180 for a 7,200 rpm disk); with roughly 48 spindles across the two J4400s, that puts the pool's ceiling in the same range as the ~7,000 IOPS we measured. Using the same system attached to more disks, Roch achieved 28,559 IOPS (cold run) and 36,478 IOPS (warm run) from a 400 GB dataset.

(5) We get great synchronous write performance because the logzillas offer much lower write latency than the raw disks could provide.

Using Analytics, we can easily track how the appliance behaves under each workload. Here is a screenshot from Analytics; you can see trend plots of how the appliance behaved during the test. The three charts show NFSv4 operations per second, CPU utilization, and disk I/O operations per second. While these charts show aggregate metrics, you can easily break them down by type of operation, per CPU, and per disk, respectively.



In conclusion, the Sun Storage 7000 series packs a nice bunch of goodies which, combined with open-source software, should set a new direction for inexpensive, open storage in the years to come.
Comments:


Are there any published iSCSI benchmark numbers anywhere we can share with a customer?

thanks
Gary Francis

Posted by Gary Francis on December 24, 2008 at 02:46 AM PST #

Sorry for the late reply. Most of our work is based on 10 GigE, so I do not have access to any iSCSI numbers.

Posted by Amitabha Banerjee on January 08, 2009 at 06:22 AM PST #

Can I get a copy of the command line arguments so I can run this at a customer?

Posted by John Keane on February 10, 2009 at 05:28 AM PST #

John, the scripts are available where Amitabha mentions above:
http://blogs.sun.com/hotnets/resource/11metrics.tar
Be prepared to spend some time setting it up and gathering the data.

Posted by Roch on February 11, 2009 at 10:09 PM PST #
