Vdbench: dangerous use of stopafter=100, possibly inflating throughput results.

In short: doing random I/O against very small files can inflate your throughput numbers.

When doing random I/O against a file using File system Workload Definition (FWD) parameters, Vdbench needs to know when to stop using the currently selected file and move on to the next one.
The ‘stopafter=’ parameter (default 100) tells Vdbench to stop after that many blocks. For Vdbench 5.02 you can also specify ‘stopafter=nn%’, which stops after nn% of the size of the file has been read or written.

This all works great, but here’s the catch: if your file size is very small, for instance just 8k, the default stopafter=100 value will cause the same block to be read 100 times. After the first access that block will typically be served straight from file system cache, which is what can inflate your throughput numbers.
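
To make the scenario concrete, here is a minimal parameter file sketch; the names, anchor directory, and thread/interval values are just illustrative assumptions, not taken from any real workload. It runs random 8k reads against 8k files while leaving stopafter= at its default of 100:

    * 1000 files of only 8k each
    fsd=fsd1,anchor=/tmp/vdbtest,depth=1,width=1,files=1000,size=8k

    * Random 8k reads; stopafter= is not specified, so it defaults to 100 blocks
    fwd=fwd1,fsd=fsd1,operation=read,fileio=random,fileselect=random,xfersize=8k,threads=8

    * Run for 60 seconds at maximum rate
    rd=rd1,fwd=fwd1,fwdrate=max,format=yes,elapsed=60,interval=1

With size=8k and xfersize=8k each file contains only a single block, so that one block is read 100 times before Vdbench moves on to the next file.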

The stopafter= parameter was really only meant for large files, and this side effect was not anticipated.

Solution:
For Vdbench 5.01, change ‘stopafter=’ to a value that matches the file size. ‘stopafter=’ allows only one fixed value, so if you use multiple different file sizes this won’t work for you.
For Vdbench 5.02 (beta), use stopafter=100%. This makes sure that you never read or write more blocks than the file contains. (See the parameter file sketch below.)
I will modify 5.02 as soon as possible so that the default value never exceeds the current file size.
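
As a sketch of the workaround, again with illustrative names and values, only the FWD line from the earlier example needs to change:

    * Vdbench 5.02 (beta): never go beyond the end of the file
    fwd=fwd1,fsd=fsd1,operation=read,fileio=random,fileselect=random,xfersize=8k,threads=8,stopafter=100%

    * Vdbench 5.01: for 8k files read with 8k blocks, stop after 1 block
    * fwd=fwd1,fsd=fsd1,operation=read,fileio=random,fileselect=random,xfersize=8k,threads=8,stopafter=1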

Note: 5.02 is currently only available (in beta) internally at Sun/Oracle.

Henk.
