FS perf 102: Filesystem Bandwidth

Now that you can grab the disk's bandwidth, the next question is "How do I see what bandwidth my local file system can push?". First, let's check writes for ZFS.
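
In case you want to follow along, here's a minimal setup sketch - the pool name bw_hog and the disk c0t1d0 are taken from the iostat and zpool output below, so substitute your own disk:

fsh-mullet# zpool create bw_hog c0t1d0
fsh-mullet# cd /bw_hog

With the pool created (ZFS mounts it at /bw_hog automatically), time the write: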

fsh-mullet# /bin/time sh -c 'lockfs -f .; mkfile 1g 1g.txt; lockfs -f .'
real       17.1
user        0.0
sys         1.1

So that's 1GB/17.1s = ~62MB/s for a 1 gig file (mkfile 1g writes 2^30 bytes, about 1074 decimal megabytes, and 1074MB/17.1s ≈ 62MB/s). While the mkfile(1M) runs, you can use iostat(1M) to see how much disk bandwidth it's driving:

fsh-mullet# iostat -Mxnz 1
                    extended device statistics              
    r/s    w/s   Mr/s   Mw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0  541.0    0.0   67.6  0.0 35.0    0.0   64.7   1 100 c0t1d0
                    extended device statistics              
    r/s    w/s   Mr/s   Mw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0  567.0    0.0   70.3  0.0 33.9    0.0   59.9   1 100 c0t1d0
                    extended device statistics              
    r/s    w/s   Mr/s   Mw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0  254.9    0.0   29.0  0.0 15.7    0.0   61.6   0  64 c0t1d0
                    extended device statistics              
    r/s    w/s   Mr/s   Mw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0  528.1    0.0   66.0  0.0 35.0    0.0   66.2   1 100 c0t1d0

We can also use zpool(1M) to show just the I/O for ZFS:

fsh-mullet# zpool iostat 1
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
bw_hog      32.5K  33.7G      0    538      0  67.4M
bw_hog       189M  33.6G      0     30      0   459K
bw_hog       189M  33.6G      0      0      0      0
bw_hog       189M  33.6G      0    509      0  63.7M
bw_hog       189M  33.6G      0    544      0  68.1M
bw_hog       189M  33.6G      0    544      0  68.1M
bw_hog       189M  33.6G      0    535      0  67.0M
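
If your pool has more than one device, zpool(1M)'s -v flag breaks the same statistics down per vdev - a quick sketch against the same pool:

fsh-mullet# zpool iostat -v bw_hog 1

Here bw_hog is a single disk, so the per-device line just mirrors the pool totals.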

Now let's look at UFS writes:

fsh-mullet# /bin/time sh -c 'lockfs -f .; mkfile 1g 1g.txt; lockfs -f .'
real       18.7
user        0.1
sys         6.3

So UFS is doing 1GB/18.7s = ~57MB/s. Here's some of the iostat output:

fsh-mullet# iostat -Mxnz 1
                    extended device statistics              
    r/s    w/s   Mr/s   Mw/s wait actv wsvc_t asvc_t  %w  %b device
    4.0   70.0    0.0   58.9  0.0 10.8    0.0  145.6   0  99 c0t1d0
                    extended device statistics              
    r/s    w/s   Mr/s   Mw/s wait actv wsvc_t asvc_t  %w  %b device
    3.0   70.0    0.0   57.8  0.0 10.6    0.0  144.5   0  99 c0t1d0
                    extended device statistics              
    r/s    w/s   Mr/s   Mw/s wait actv wsvc_t asvc_t  %w  %b device
    4.0   70.0    0.0   59.4  0.0 11.2    0.0  151.3   0  99 c0t1d0

This was done on a 2-way SPARC v210 box with a single SCSI disk.
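
If you want to check what you're running on, psrinfo(1M) lists the processors and 'iostat -En' prints a description of each disk - both ship with Solaris:

fsh-mullet# psrinfo -v
fsh-mullet# iostat -En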

And why the lockfs(1M) calls, you ask? 'lockfs -f' flushes all pending data to disk - timing a write when the data doesn't necessarily get flushed is just not legit here. Persistent data is good.
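
To see why, try the same write without the flushes (a comparison sketch - 'nolock.txt' is just an illustrative name, and your numbers will vary with memory size):

fsh-mullet# /bin/time mkfile 1g nolock.txt

This can come back looking artificially fast, since some of the data may still be sitting in memory rather than on disk.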

Comments:

This is all interesting, but what about using some REAL storage rather than a wimpy single SCSI drive? Also, some real benchmarks would be good as well. Maybe try a 30TB SE-6920 or a Flex380, running a benchmark through the ZFS filesystem like perhaps the SPC-1 benchmark? http://www.storageperformance.org/ Then, compare ZFS to UFS, VxFS and QFS, now that would be a real comparison!

Posted by Russ on November 22, 2005 at 03:27 AM PST #

Yes, real storage would be nice - but let me mention that my intent with this entry was not to publish numbers, but to show how to grab some very simple numbers for filesystem performance - to encourage people to try ZFS out.

We're still tweaking the performance of ZFS and real numbers will come out soon.

Posted by eric kustarz on November 22, 2005 at 06:14 AM PST #
