Vdbench: dangerous use of stopafter=100, possibly inflating throughput results.
By Henk Vandenbergh on Jan 26, 2010
In short: doing random I/O against very small files can inflate your throughput numbers.
When doing random I/O against a file using File Workload Definition (FWD) parameters, Vdbench needs to know when to stop working on the currently selected file and move on to the next one.
The ‘stopafter=’ parameter (default 100) tells Vdbench to stop after that many blocks. Starting with Vdbench 5.02 you can also specify ‘stopafter=nn%’, meaning nn% of the file size.
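As a sketch, a minimal parameter file using stopafter= could look like this (the fsd/fwd/rd names and the anchor path are made up for illustration):

```
fsd=fsd1,anchor=/tmp/vdbtest,depth=1,width=1,files=100,size=8k
fwd=fwd1,fsd=fsd1,operation=read,fileio=random,xfersize=8k,stopafter=100
rd=rd1,fwd=fwd1,fwdrate=max,format=yes,elapsed=60,interval=1
```

With this workload Vdbench selects a file, issues random 8k reads against it, and moves on to the next file after ‘stopafter=’ blocks.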
This all works great, but here is the catch: if your file size is very small, for instance only 8k, the default stopafter=100 causes the same 8k block to be read 100 times. Since those repeated reads are almost certainly served from cache, your throughput numbers get inflated.
The stopafter= parameter was really only meant for large files, and this side effect was not anticipated.
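A back-of-the-envelope sketch of the inflation (plain Python, not part of Vdbench):

```python
# Illustration: with an 8k file, an 8k transfer size, and the default
# stopafter=100, Vdbench issues 100 reads against a file that contains
# only one block, so the same (likely cached) block is read 100 times.
file_size = 8 * 1024      # bytes in the file
xfersize = 8 * 1024       # bytes per I/O
stopafter = 100           # default number of blocks before switching files

blocks_in_file = file_size // xfersize
rereads_per_block = stopafter / blocks_in_file

print(blocks_in_file)     # → 1 distinct block in the file
print(rereads_per_block)  # → 100.0 reads of that same block
```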
For Vdbench 5.01, change ‘stopafter=’ to a value that matches the file size. ‘stopafter=’ accepts only one fixed value, so if you have multiple different file sizes this workaround won’t work for you.
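For example, with 8k files and an 8k transfer size there is only one block per file, so the 5.01 workaround would be (names hypothetical):

```
fwd=fwd1,fsd=fsd1,operation=read,fileio=random,xfersize=8k,stopafter=1
```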
For Vdbench 5.02 (beta), use stopafter=100%. This makes sure that you never read or write more blocks than the file contains.
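In 5.02 the same FWD would instead use the percentage form, so the limit scales with each file's size (names again hypothetical):

```
fwd=fwd1,fsd=fsd1,operation=read,fileio=random,xfersize=8k,stopafter=100%
```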
I will modify 5.02 as soon as possible so that the default value never exceeds the current file size.
Note: 5.02 is currently only available (in beta) internally at Sun/Oracle.