# First Past the Line

## Analyzing performance benchmarking data

### Eoin Lawless <Eoin.Lawless@Sun.Com>, Version 1.1

Imagine you're a developer. You've just thought up a clever little algorithm for, let's say, improving the latency of small packets on a 10Gig network interface. Now you want to see exactly how much better it is. You run a standard benchmark, like iperf, on both the original code and your new improved code. But - horror of horrors! - the old code appears to be faster. You run the test again; this time the new code is better. To be sure, you run the comparison a third time - and it's a dead heat. What's going on? How can you say definitively which is better?

Unfortunately, benchmark results of complex computer systems are rarely completely reproducible. Almost invariably there is some noise - in fact, as Brendan Gregg recently demonstrated, actual noise (his shouting) can cause a spike in disk drive latency.

In this article I outline a procedure for benchmark results analysis. I'll be using the R statistics package, but the ideas are not tied to R and many other stats packages or spreadsheets can be used.

R is a variant of the S statistics programming language. It is open source and freely available from http://www.r-project.org. Binaries are available there for Windows and Mac OS X, and it is easily compiled on any recent Solaris.

I'm going to introduce just enough R to enable people to compare two sets of performance benchmark results. I'm not going to explain R syntax in any great detail - that is done better at http://cran.r-project.org/doc/manuals/R-intro.html. Although I use the R software, the overall approach is the important aspect, not R-specific details. Finally, if you are impatient and want the quick version, jump to the checklist.

## Understand the benchmark

Before using benchmark results to analyze or compare performance, it's essential to understand what the benchmark is measuring. What is a sensible result? For example, if we're measuring latency of small packet transfers, we'd expect to see millisecond or microsecond numbers, not picoseconds or minutes. Is a larger result better, or a smaller result better? Often when looking at raw numbers, we forget what they represent. You don't want to boast to your boss that you've improved small packet latency by making it larger!

What size change is important: 1%? 5%? When making many successive small changes, you may need to ensure that small regressions don't accumulate; conversely, when comparing major changes, a very minor regression may be acceptable.

## Comparing Data Sets

The exact criteria that our performance testing needs to meet will vary from situation to situation. A reasonable set of requirements is:

• We want to detect changes of 1% or greater
• We want more than 80% probability of finding a real change
• We don't want more than 5% false positives (this is called the significance level)
• We want to do as few benchmark runs as possible

## Look at the data!

Raw numbers are hard to absorb. Plotting the data, preferably in more than one format, will make obvious features jump out. Let's start our tour of R by doing just that. Here are two sample sets of data, comparing openssl results from successive builds of Solaris Nevada:

| Build 97  | Build 98  |
|-----------|-----------|
| 600870.36 | 596775.85 |
| 606059.30 | 597114.60 |
| 600799.39 | 592075.09 |
| 611484.07 | 605196.20 |
| 605925.89 | 592342.02 |
| 606033.32 | 605277.01 |
| 611066.67 | 592401.92 |
| 600784.94 | 597161.81 |
| 600697.75 | 604817.72 |
| 611689.59 | 597408.31 |
| 600719.45 | 592312.69 |
| 606255.10 | 597660.42 |
| 610618.37 | 604943.36 |
| 611113.47 | 604761.17 |
| 605951.83 | 592601.77 |

The same data is much easier to take in when plotted side by side as bar charts, and again as a strip chart; the commands to produce both plots follow.

The data files are called snv_97.data and snv_98.data. Start R and load the data:

```
> basefile <- read.table("snv_97.data", header = FALSE, col.names = c("basedata"))
> testfile <- read.table("snv_98.data", header = FALSE, col.names = c("testdata"))
> attach(basefile)
> attach(testfile)
> basedata
[1] 600870.4 606059.3 600799.4 611484.1 605925.9 606033.3 611066.7 600784.9
[9] 600697.8 611689.6 600719.4 606255.1 610618.4 611113.5 605951.8
> testdata
[1] 596775.8 597114.6 592075.1 605196.2 592342.0 605277.0 592401.9 597161.8
[9] 604817.7 597408.3 592312.7 597660.4 604943.4 604761.2 592601.8
> datamax <- max(basedata, testdata)   # assignments do not print; query the name to see the value
> datamax
[1] 611689.6
> datamin <- min(basedata, testdata)
> datamin
[1] 592075.1
```
Let's have a look at a bar plot:
```
> par(mfcol=c(1,2))   # to plot the two charts side by side
> barplot(basedata, xlab="Iteration", ylab="Result", ylim=c(0, datamax*1.1))
> barplot(testdata, xlab="Iteration", ylab="Result", ylim=c(0, datamax*1.1))
> dev.off()
```
Now we want to display a strip chart containing both sets of results:
```
> plot.new()
> stripchart(basedata, method="stack", xlim=c(0.95 * datamin, 1.05 * datamax), col="red", at=1.2, offset=1)
> stripchart(testdata, method="stack", add=TRUE, at=0.8, col="blue", offset=1)
> dev.off()
```
We can save the output to a file rather than the screen by opening a bitmap device before issuing the plotting commands; the file is written when the device is closed with dev.off():
```
> bitmap(file="stripchart.png", type="png256", height=3)
```

At this point I should note that R comes with a comprehensive help system. So, for example,

```
> help(stripchart)
```
displays help for the stripchart command. There is also an online introduction, which is more general than this guide.

Visual inspection shows there is trouble here: both sets of data are clumped into distinct groups. Before trying to compare the performance of these two builds it's necessary to find out what is happening and why the data is clumping like this. Time to use Solaris's famed diagnostic tools: dtrace, vmstat, mpstat, intrstat and friends.

## Testing for (mis)behaviour

As we've seen, data can sometimes have odd behaviours that make a straightforward comparison difficult, useless or even misleading. R has many statistical tests to check data for such (mis)behaviours. Some of these issues are clearly visible when we plot or graph the data; other behaviour is either not as obvious, or else we may wish to have a programmatic way of identifying anomalies. In particular there are a few common issues with data sets that make a naive comparison of results either misleading or plain wrong:
• Multimodal data
• Outliers
• Non-normal data (when used with tests that assume normality)
• Insufficient data

#### Is the data multimodal?

We say a dataset is multimodal if the frequency of results has two or more distinct peaks. Tests for multimodality are not included in the base R distribution, but are available in the optional diptest package.
```
> install.packages("diptest")
```
Unfortunately this package is not easy to use, and I will skip a detailed example. Generally the strip chart makes multiple modes visually obvious, though the dip test is very useful for automatic analysis of large numbers of data sets.
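For completeness, here is a minimal sketch; it assumes a version of the diptest package that provides the dip.test() function (older versions expose only dip(), which returns the test statistic without a p-value):
```
> library(diptest)
> dip.test(basedata)   # a small p-value suggests the data has more than one mode
```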

#### Are there outliers?

An outlier is a single point that is distant from the bulk of the data set. There is an unfortunate tendency to blindly discard inconvenient outliers, especially when they make a comparison look bad. In our experience, outliers are often due to a benchmark misconfiguration or error (e.g. a damaged cable or a faulty disk), but they can also be symptomatic of real problems. There is an optional add-on package for R called outliers that can be used to test data sets for their presence.
```
> install.packages("outliers")   # package installs from the web
> library(outliers)
> grubbs.test(basedata)

Grubbs test for one outlier

data:  basedata
G = 1.2892, U = 0.8728, p-value = 1
alternative hypothesis: highest value 611689.59 is an outlier
```
The p-value in this case is 1, so we have no evidence that the highest value is an outlier.

#### Is the data 'normal'?

If the data is normal it is much easier to use in comparisons. Normality effectively means that results cluster symmetrically around a central mean value, with roughly 95% of results within two standard deviations of the mean, and with the frequency of a result falling off very rapidly the further it is from the mean. R comes with a test called the Shapiro-Wilk normality test, used as follows:
```
> shapiro.test(basedata)

Shapiro-Wilk normality test

data:  basedata
W = 0.8351, p-value = 0.01076
```
If the p-value is less than 0.05 then the data is probably not normal, as we see in this case. This means that we cannot use the Student T-Test to compare this data set with another. This is no problem though - we can always use the Wilcoxon test, which does not require normal data.

#### Do we have enough data?

There is an abundance of theory for calculating the number of results required to detect a change of a given size (this is called Power Analysis). R includes a suitable test, but unfortunately we need to know the mean and standard deviation of the result distribution before we can apply it.

We can estimate these quantities from the data set if it contains three or more points. With a small number of points the accuracy will be low; however, it does serve as a starting point. We'll use two new sample data sets, as the first two are not normal: s10u5_10.data and s10u6_07b.data.

```
> basefile <- read.table("s10u5_10.data", header = FALSE, col.names = c("basedata"))
> testfile <- read.table("s10u6_07b.data", header = FALSE, col.names = c("testdata"))
> attach(basefile)
> attach(testfile)
> basedata
[1] 325.9843 321.4696 326.9845
> testdata
[1] 319.1681 318.2914 317.6821 319.5697 320.9252 322.6830 321.3920 316.2342
[9] 318.3186 323.7438 317.3700
>
```
The strip chart gives a clear picture of the distribution of results. (Base data, basedata, is in red).
We will use power.t.test to see if we have enough results to detect a 1% change.
```
> sd <- max(sd(basedata), sd(testdata))
> d <- mean(basedata) * 0.01
> power.t.test(sd=sd, sig.level=0.05, power=0.8, delta=d, type="two.sample")

Two-sample t test power calculation

n = 13.87397
delta = 3.248128
sd = 2.938169
sig.level = 0.05
power = 0.8
alternative = two.sided

NOTE: n is number in *each* group

```
So, using our estimated values of the mean and standard deviation, we need 14 data points in each group to be able to detect a 1% change in performance. Note however that a bad estimate of the standard deviation, sd, will result in a very large estimate of the number of data points required. In this case I would concentrate on getting extra numbers for basedata.

See http://www.stat.uiowa.edu/~rlenth/Power for a more sophisticated applet for calculating required sample sizes. Note however that power analysis is easy to get wrong. As a general rule, if it is easy to get extra benchmark runs, then do so!
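The sensitivity to the sd estimate is easy to demonstrate: the required sample size grows roughly with the square of the standard deviation, so doubling the sd estimate roughly quadruples n. A quick sketch, reusing the sd and d values computed above:
```
> power.t.test(sd = 2 * sd, sig.level = 0.05, power = 0.8, delta = d, type = "two.sample")
# n in the output is now roughly four times larger than before
```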

## Comparing the data sets

The final step, once we have confirmed both sets of data are usable, is to compare the datasets. There are three possible outcomes:
• The performance has changed
• The performance is the same
• The results are inconclusive
Hopefully we have excluded the third possibility by gathering enough data and by investigating bimodal data sets and outlier data points.

#### Has the performance changed?

The statistical procedure for comparing two sets of data for a difference is called hypothesis testing. We have two alternative hypotheses, conventionally called the null hypothesis and the alternative hypothesis. The null hypothesis represents no change, nothing happening; the alternative hypothesis represents a new finding or an important result. The choice of null hypothesis is very important: the idea is that less harm is caused by wrongly accepting the null hypothesis than by wrongly rejecting it. Some examples:

| Scenario | Null hypothesis | Alternative hypothesis | Reason |
|---|---|---|---|
| Doctor testing a new cancer drug | The drug has no effect | The drug improves survival rates by more than 4% | A new drug should not be sold unless it is certain it does good |
| A food company wants to use a new food colouring | The additive may cause harm to some people | The additive causes no harm | Need very strong proof that the additive causes no harm |
| Performance regression testing (i.e. performance QE) | No change | Change greater than 1% | Crying wolf with insufficient evidence wastes developers' time |

We are looking for a change (either an improvement or a regression). There are many methods for hypothesis testing, each with multiple subvariants. We typically use just two, the Student T-Test and the Wilcoxon Rank Sum test (the two-sample variant of the Wilcoxon test, also known as the Mann-Whitney test). Of these the Wilcoxon is more general.

Both tests calculate the probability (the p-value) that the observed spread of results could occur if the null hypothesis were true. We only reject the null hypothesis if this probability is very low, typically 0.05, or 5%. This figure is referred to as the significance level of the test. If the p-value is less than the significance level, then we must act on the basis that there is a change.

Bear in mind that even if we don't reject the null hypothesis (i.e. don't find a change), this does not imply that all is good; it could be that there is a change but that the data we have collected is inconclusive. Hence the importance of checking the data and making sure we have enough data points.

#### Is performance the same?

Just because our data does not show a performance change doesn't necessarily mean that performance is unchanged. Our data might simply be inadequate for the job of detecting a change of the size we're interested in. We have failed to establish the alternative hypothesis, but that doesn't mean we can accept the null hypothesis.

Is there a way to test whether we can accept the null hypothesis (i.e. that performance has stayed the same)? As it turns out there is a relatively simple way. If the 90% confidence interval reported by the Wilcoxon (or T) test lies entirely within 1% of the baseline average, then we can safely assume the test build's performance has remained within 1% of the base build's performance. (See http://web.vims.edu/fish/faculty/pdfs/hoenig2.pdf for details.)
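As a sketch, this check can be done programmatically; the 1% threshold and the basedata/testdata variables follow the earlier examples:
```
> m <- mean(basedata) * 0.01                            # 1% of the baseline mean
> w <- wilcox.test(testdata, basedata, conf.int = TRUE, conf.level = 0.9)
> all(w$conf.int > -m, w$conf.int < m)                  # TRUE if the 90% interval is within +/- 1%
```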

#### Example 1

We will use two sets of results from an openssl benchmark, the baseline running on snv_106 and the test on snv_107. Neither data set is multimodal or has outliers.
```
> basefile <- read.table("ssl.snv_106.data", header = FALSE, col.names = c("basedata"))
> testfile <- read.table("ssl.snv_107.data", header = FALSE, col.names = c("testdata"))
> attach(basefile)
> attach(testfile)
> power.t.test(sd=sd(basedata), sig.level=0.05, power=0.75, delta=mean(basedata) * 0.01, type="two.sample")

Two-sample t test power calculation

n = 5.296159
delta = 285.0265
sd = 155.7760
sig.level = 0.05
power = 0.75
alternative = two.sided
```
For testdata, power.t.test cannot compute n at our usual power of 0.75 (75%), so we use a higher (better) power level:
```
> power.t.test(sd=sd(testdata), sig.level=0.05, power=0.95, delta=mean(testdata) * 0.01, type="two.sample")

Two-sample t test power calculation

n = 2.135454
delta = 281.677
sd = 42.21613
sig.level = 0.05
power = 0.95
alternative = two.sided
```
We have more than enough data and can now safely compare the two data sets. We will use the Wilcoxon test, even though the T-Test is applicable here, since the Wilcoxon can be applied more widely. Be careful with the ordering of basedata and testdata in the function call: with the ordering below (test data, then baseline data), a regression for a "bigger is better" benchmark will show as negative and an improvement as positive.
```
> wilcox.test(testdata, basedata, conf.int=TRUE)

Wilcoxon rank sum test

data:  testdata and basedata
W = 0, p-value = 2.896e-07
alternative hypothesis: true location shift is not equal to 0
95 percent confidence interval:
-432.64 -234.41
sample estimates:
difference in location
-265.75

```
Here is how the result is interpreted:
| Output | Interpretation |
|---|---|
| Null hypothesis | No change in performance |
| Alternative hypothesis | Improvement or regression |
| p-value | A p-value less than 0.05 indicates a change in performance |
| 95 percent confidence interval | The change in performance is 95% sure to be within these limits |
| difference in location | Negative is a regression, positive an improvement |
In our case the p-value is very low, so we definitely have a change. Our benchmark is "bigger is better" and the change is negative, so we have a regression.

#### Example 2

```
> basefile <- read.table("jes.snv_103.data", header = FALSE, col.names = c("basedata"))
> attach(basefile)
> attach(testfile)    # testfile read from the test build's data file, as in earlier examples
> sd <- max(sd(basedata), sd(testdata))
> d <- mean(basedata) * 0.01
> power.t.test(sd=sd, sig.level=0.05, power=0.8, delta=d, type="two.sample")
> mean(basedata) * 0.01

> wilcox.test(testdata, basedata, conf.int=TRUE, conf.level=0.9)
Wilcoxon rank sum test

data:  testdata and basedata
W = 111, p-value = 0.917
alternative hypothesis: true location shift is not equal to 0
90 percent confidence interval:
-1.918  2.146
sample estimates:
difference in location
0.3145

```
As before, we interpret the output of the Wilcoxon test as follows:

| Output | Interpretation |
|---|---|
| Null hypothesis | No change in performance |
| Alternative hypothesis | Improvement or regression |
| p-value | A p-value less than 0.05 indicates a change in performance |
| 90 percent confidence interval | The change in performance is 90% sure to be within these limits |
| difference in location | Negative is a regression, positive an improvement |

Our p-value is 0.917, which is far higher than 0.05, so this data does not indicate any change in performance between the two builds. But can we go further and say performance is unchanged? If the 90% confidence interval is within 1% of the base mean value, then we can accept that the performance is unchanged.
```
> mean(basedata) * 0.01
[1] 9.729103
```
Our 90% confidence interval is from -1.918 to 2.146, which is well within ±9.7291, so for this benchmark we can accept that performance is unchanged.

#### Example 3

```
> basefile <- read.table("web.s10u7_01.data", header = FALSE, col.names = c("basedata"))
> testfile <- read.table("web.s10u7_02.data", header = FALSE, col.names = c("testdata"))
> attach(basefile)
> attach(testfile)
> sd <- max(sd(basedata), sd(testdata))
> d <- mean(basedata) * 0.01
> power.t.test(sd=sd, sig.level=0.05, power=0.8, delta=d, type="two.sample")
> mean(basedata) * 0.01
```
In this final example we look at a case where the data is inconclusive. The data sets are from two builds of Solaris 10, and the benchmark is a standard web server benchmark (web.s10u7_01.data and web.s10u7_02.data). The Wilcoxon test does not indicate any change in performance. However, the 90% confidence interval (-18 to 460) is far wider than ±1% of the base mean (±32), so we cannot assume that the performance has stayed constant. Our data is not adequate and we need to investigate further. For instance, several results look like outliers: is there an intermittent performance problem?
```
> wilcox.test(basedata, testdata, conf.int=TRUE, conf.level=0.9)

Wilcoxon rank sum test

data:  basedata and testdata
W = 25, p-value = 0.3301
alternative hypothesis: true location shift is not equal to 0
90 percent confidence interval:
-18 460
sample estimates:
difference in location
19
> mean(basedata) \* 0.01
[1] 32.6425

```

## Summary and Checklist

You may have noticed that I make very little mention of the average value of our results. This is deliberate: unless we first check for outliers, multiple modes and normality, it makes little sense to compare average results directly.

This has been a very rough guide to statistical analysis of performance benchmark results; please let me know of any mistakes. Here is a checklist to help:

• Make sure you understand the benchmark you are running
  • Are the results sensible?
  • Is larger better or is smaller better?
  • What size change is significant to you?
• Look at the data - strip chart and histogram
  • Are there any obvious anomalies?
  • Do the numbers make rough sense in context?
• Is the data 'normal'? If so we can use the Student T-Test.
• Is the data multimodal?
  • If the data is multimodal, the average is no longer a useful measure.
  • Investigate the reasons for the multimodal results.
• Is there an outlier?
  • Is there a reason (e.g. an obvious configuration error or runtime failure in the benchmark)?
• If the data is normal
  • Do we have enough results to detect the size of change we're interested in?
  • Does the Student T-Test confirm a difference between the results?
  • Does the Wilcoxon test confirm a difference between the results?
• If the data is unimodal but not normal
  • Do we have enough results to detect the size of change we're interested in? (Use t-test power analysis as a rough estimate.)
  • Does the Wilcoxon test confirm a difference between the results?
• If we don't detect a change in performance, can we say that performance is unchanged? (Is the 90% confidence interval within a ±1% range of the base mean?)
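
Much of this checklist can be wrapped up in a single R function. The sketch below is an illustration only, using base R (the Grubbs and dip tests live in optional packages and are omitted here); compare_runs is a hypothetical helper name, not part of any package:
```
# Compare two benchmark result vectors; 'threshold' is the relative change of interest.
compare_runs <- function(basedata, testdata, threshold = 0.01) {
    # Shapiro-Wilk: p >= 0.05 means we cannot reject normality
    normal <- shapiro.test(basedata)$p.value >= 0.05 &&
              shapiro.test(testdata)$p.value >= 0.05

    # Power analysis: how many runs per group to detect a 'threshold' change?
    s <- max(sd(basedata), sd(testdata))
    d <- mean(basedata) * threshold
    n.needed <- ceiling(power.t.test(sd = s, sig.level = 0.05, power = 0.8,
                                     delta = d, type = "two.sample")$n)

    # Wilcoxon rank sum test with a 90% confidence interval
    w <- wilcox.test(testdata, basedata, conf.int = TRUE, conf.level = 0.9)

    list(normal      = normal,
         n.needed    = n.needed,
         enough.data = min(length(basedata), length(testdata)) >= n.needed,
         changed     = w$p.value < 0.05,
         unchanged   = all(w$conf.int > -d, w$conf.int < d))
}
```
If changed is TRUE, the sign of the reported difference in location tells you whether it is a regression; if changed is FALSE but unchanged is also FALSE, the data is inconclusive and more runs are needed.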