Filesystem Benchmarks: iozone

In my last post I discussed the vxbench I/O load generator, which may (or may not) be available from Symantec for the use of all. Recent work with Windows 2003 Server has given me the excuse to use Iozone, which has many things in common with vxbench. In fact, I feel a taxonomic table coming on:

Feature                   | VxBench                                      | Iozone
--------------------------|----------------------------------------------|----------------------------------
Open source               | No. Copyrighted but freely available.        | Yes. ANSI C
Async I/O                 | Yes. aread, awrite, arand_mixed workloads    | Yes. -H, -k options
Memory-mapped I/O         | Yes. mmap_read, mmap_write workloads         | Yes. -B option
Multi-process workloads   | Yes. -P/-p options                           | Yes. Default
Multi-threaded workloads  | Yes. -t option                               | Yes. -T option
Single-stream measurement | Yes.                                         | Yes.
Spreadsheet output        | No.                                          | Yes.
Large file compatible     | No.                                          | Yes.
Random reads/writes       | Yes. rand_[read|write|mixed]                 | Yes. -i 2 option
Strided I/O               | Yes. stride=n subopt                         | Yes. -j option
Simulate compute delay    | Yes. sleep workload, sleeptime=n seconds     | Yes. -J milliseconds
Caching options           | O_SYNC, O_DSYNC, direct I/O, unbuffered I/O  | O_SYNC,
Operating systems         | Solaris, AIX, HP-UX, Linux. Not MS Windows   | As vxbench, plus MS Windows. POSIX
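
To make the Iozone column concrete, an invocation exercising a few of those options might look something like the lines below. The file size, record size and thread count are made up for illustration, so check the Iozone documentation for the exact behaviour of each flag on your build.

iozone -i 0 -i 2 -s 512m -r 64k -t 4 -T   # sequential write plus random read/write,
                                          # four POSIX threads, 512 Mb file per thread
# Swap in -H 16 for POSIX async I/O, or add -B to drive the files through mmap().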

There are challenges in using these tools. The first is that they are not benchmarks; they are load generators with no load (benchmark) defined. There are two approaches to defining a load: (a) how many operations of a specific type can be achieved in a set time, or (b) how long it takes to complete a specific number of operations. The difference, for a lot of people, is a matter of taste. The consequence is that each new analyst who approaches these tools starts to write a new cookbook.

Another challenge is that the principal dimensions of performance in a benchmark are:

  1. Latency - how long until the first byte is returned to the user, or is committed to disk and the operation returns.
  2. Throughput - how much data, under different access patterns, can be sent to or retrieved from permanent storage.
  3. Efficiency - how much of the system's resources were consumed in moving data to and from storage rather than doing computation upon it. (Resources can be memory, CPU cycles, and hardware and software synchronisation mechanisms; in this millennium we also bring in the consumption of electricity and the generation of heat.)

Load generators, including the ones we are discussing, are pretty good on the first two counts but no good at the third. That distinction marks the difference between a load generator and a benchmarking framework such as Filebench, SLAMD or tools such as LoadRunner from Mercury. It is no minor matter to coordinate the gathering of system metrics with the execution of the workload. It's even more difficult to achieve this across distributed systems sharing access to a filesystem such as NFS or Shared QFS; in that case a common and precise idea of the current time needs to be maintained across the systems.

Tools such as Iozone and vxbench need to be embedded in scripted frameworks to do performance metrics collection. In several Unixes that simply means running any and every tool whose name ends in "stat" in the background. In Microsoft's world there are the CIM probes accessible through VBScript or Perl, and in Solaris 10 DTrace provides access to arbitrary counters.
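
At its simplest, that scripting can be a few lines of Bourne shell wrapped around the run. The sketch below is illustrative only: the sampling interval, output file names and the Iozone arguments are placeholders rather than the commands behind the measurements later in this post.

iostat -x 5 > iostat.out &  iostat_pid=$!   # per-device utilisation, 5 second samples
vmstat 5    > vmstat.out &  vmstat_pid=$!   # CPU, memory and paging, 5 second samples
iozone -a -g 1g -b run1.xls                 # the workload under test
kill $iostat_pid $vmstat_pid                # stop the collectors once the run completes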


Putting Iozone to Work

Using Iozone we can generate output similar to the graphs below.

I created a 32 Gb volume across 12 Seagate ST13640 disks (on 2 JBODs connected via 2 Adaptec Ultra320 SCSI controllers to a dual 2 GHz AMD Opteron with 2 Gb RAM). For the care and feeding of this sandbox, I am grateful to Paul Humphreys and his band of lab engineers.

I then ran iozone and collected the results. As it dumps straight into spreadsheet format, you can quickly do some interesting graphical analysis, such as this example at the tool's website. However, I was after something more mundane.
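
A run of that shape - a 1 Gb file, record sizes swept from 4 Kb to 1 Mb to match the graphs below, sequential write and read tests, results dumped to an Excel-compatible file - looks something like this. The drive letter, output file name and exact flag spelling are placeholders rather than a transcript of the original command lines.

iozone -Ra -i 0 -i 1 -n 1g -g 1g -y 4k -q 1m -b ntfs.xls -f E:\iozone.tmp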

The first graph below is from OpenOffice. I don't like it much because it follows the data, so you end up with a powers-of-two x-axis which hides important detail. Also, all OpenOffice graphs tend to look the same without extensive fiddling.

The graph below it is done with R and although it is plainer, I think it gives a clearer picture.

Here is the data and R code - not a lot to it really. The columns are the I/O size in Kb, followed by the FAT32 write (FW), NTFS write (NW), FAT32 read (FR) and NTFS read (NR) rates in Mb/s. I continue to urge you to use this tool as I have in the past. James Holtman has made a compelling case for the use of R in performance analysis in his papers Visualisation of Performance Data (CMG2005) and The Use of "R" for System Performance Analysis (CMG2004). Sadly, CMG do not make their papers available to the wider community.

size  FW      NW      FR      NR
4     426.78  327.59  627.4   560.92
8     544.29  467.6   733.93  672.68
16    594.61  362.57  878.66  725.4
32    628.45  587.71  883.71  754.75
64    662.49  606.35  886.44  748.12
128   664.56  619.54  846.31  815.49
256   700.33  666.51  933.96  769.15
512   12.55   13.6    664.16  660.76
1024  8.8     10.77   600.13  592.36
# Read the whitespace-separated results above: I/O size in Kb, then FAT32
# write (FW), NTFS write (NW), FAT32 read (FR) and NTFS read (NR) in Mb/s.
g_data <- read.table("C:\\home\\dominika\\FATvNTFS.csv", header=TRUE)
attach(g_data)

plot(size, FR, type="l",               # FAT32 read drawn first to set up the axes
   main="NTFS and FAT32 I/O Performance",
   sub="Sequential Reads/Writes to 1 Gb File in 32 Gb Filesystem",
   xlim=c(0,1024),
   ylim=c(0,1000),
   xlab="I/O size (Kb)",
   ylab="I/O rate (Mb/s)",
   lty=5,col=5, lwd=2  )

lines(size, FW, lty=2, col=2, lwd=2)   # FAT32 write
lines(size, NW, lty=3, col=3, lwd=2)   # NTFS write
lines(size, NR, lty=4, col=4, lwd=2)   # NTFS read

# Label each curve directly on the plot rather than drawing a legend
text(150,700,"FAT Wr"); text(130,600,"NTFS Wr")
text(350,750,"NTFS Rd"); text(400,800,"FAT Rd")

The graph appears to show us a good deal, but it's what it doesn't show that has to be remembered - the qualitative side to all this.

The expectation of several people I showed it to had been that NTFS, being the more modern filesystem, should have better performance. Not so, but for good reasons. Yes, in the simple case FAT32 is faster than NTFS, but out-and-out performance is not the point of NTFS. It has many value-add features not found in FAT, such as file and directory permissions, encryption, compression, quotas, content-addressability (indexing) and so forth. These come at a cost, as do other features in NTFS that the OS relies on to provide facilities such as shadow copy and replication.

Longer code path - longer to wait for those I/Os to return!
