Tuesday Sep 22, 2009

Two Recent News Stories

Netflix Prize awarded

The Netflix Prize was finally awarded three years after the competition began. A team of seven individuals (BellKor's Pragmatic Chaos) scooped the $1 million prize.

Netflix is a movie rental company, and in 2006 it was looking for a way to improve its movie recommendation system. This was important to Netflix, as it is to Amazon and pretty much every other online retailer, since a good recommendation system allows customers to quickly find products that will appeal to them, and hence increases sales and repeat custom.

They released a database of 100 million movie ratings from almost half a million customers. The challenge was to predict the ratings these users would give to a quiz dataset of movies. Netflix knew and withheld these ratings, and scored competitors by how closely their predictions matched the real scores the customers had given.

The competition is important in that it significantly advanced the state of the art in recommender systems (aka collaborative filtering) in an open and collaborative manner.

A whole host of techniques were either developed or enhanced to tackle the problem and to handle the large size of the database - the largest publicly available database of customer rating data.

One interesting outcome was that no single method of analysing the data was sufficient to achieve the required quality of predictions. Combinations or ensembles of many different techniques provided the best results.

All the details, including the papers describing the winning algorithms, are available at the Netflix Prize website.

Dead Fish responds to Human Emotion

This story has been doing the rounds of Slashdot and the Register recently, but is pertinent here given my interest in statistics.

The researchers in human brain mapping took the standard statistical tools for analysing MRI data of human brains and applied them to a fish - a dead Atlantic salmon, to be exact. The test otherwise followed normal lines: "the salmon was shown a series of photographs depicting human individuals in social situations. The salmon was asked to determine what emotion the individual in the photo must have been experiencing."

Afterwards they found correlations between brain activity, as measured by the MRI equipment and analysed using standard techniques, and the photographs of the people - thus proving that a dead fish responds to the emotions displayed by humans in social situations... or perhaps that the standard statistical analysis needs more rigour.

The original poster is online.

Friday Sep 04, 2009

Confidence and Deviation

The seventh circle of benchmarking hell is reserved for the analyst who quotes a single number when reporting the performance of a system, and supplies no other information. In the outer circles of this particular hell the analyst might provide details such as how many times the benchmark was run, what units the result is in, complete details on the configuration of the system and benchmark etc.

For marketing purposes it may be desirable to have a pithy result like "15% faster than the competition!" However if we have a genuine wish to understand one or more sets of performance data, then there are lots of statistical tools that can help.

In this entry I'm going to look at two related concepts, and explain the important differences between them: standard deviation and confidence intervals.

Although in theory computers are deterministic - returning the same output for a given input - in practice the inputs are so numerous and varied that we cannot completely control them all. For example, consider a disk IO benchmark: a sudden noise or an increase in vibration due to a fan starting can cause a change in performance.

In order to account for the range of possible results we typically perform a benchmark multiple times, while controlling external disturbances as far as possible. At the end we will have a set of numbers. It is not feasible to quote the entire set of numbers to describe the performance of the system, so usually we use the average value. However the average doesn't take any account of the range of individual results. This is where standard deviation makes its appearance. It is estimated using a simple formula,

    s = sqrt( sum( (x_i - mean(x))^2 ) / (n - 1) )

where n is the number of results, but don't worry about the exact details for now.

It is essentially a measure of dispersion or of how spread out the numbers are, while the mean is a measure of the central point of the range.

The mean and standard deviation are intrinsic properties of a system. It is usually not possible to measure them directly, but we can estimate them by making a number of measurements. For example: do people prefer the colour blue to the colour red? We can't ask everyone in the world, so we estimate the answer by asking a sample of the population. In the same way we estimate the mean and standard deviation of a system's performance by running a benchmark a number of times, using the average value to estimate the mean and the formula above to estimate the standard deviation.
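
As a concrete illustration, here is a minimal R sketch using five made-up benchmark results (the numbers are purely hypothetical); the built-in sd() function implements the formula shown above:

> results <- c(98, 100, 102, 104, 96)   # five runs of a hypothetical benchmark
> mean(results)                         # the average is our estimate of the mean
[1] 100
> sd(results)                           # R's estimate of the standard deviation
[1] 3.162278
> sqrt(sum((results - mean(results))^2) / (length(results) - 1))   # the formula, spelled out
[1] 3.162278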

It is important to note that our estimate of the standard deviation does not tell us anything about how accurate our estimate of the mean is. A large (estimated) standard deviation does not imply that our estimate of the mean is bad, and conversely a small (estimated) standard deviation does not imply that our estimate of the mean is highly accurate. (A large standard deviation is bad only in so far as it makes the performance of the system less predictable.) The mean and standard deviation are unchanging properties of the system that we are merely estimating - and nothing so far tells us how good our estimates are.

This is where confidence intervals come in. We typically calculate the confidence interval for the mean, but it can be done for the estimate of the standard deviation as well. It provides a description of the data in the form "we are 95% certain that the mean of the population is in the range X to Y." This is a statement about the accuracy of our estimates: the smaller the range X to Y, the more accurate the estimate. We can generally make this range smaller, if required, by running more benchmarks. This is quite different to the standard deviation (or mean), which has a fixed value that we estimate, and whose real (unknown) value will not change however often we rerun the benchmark.

To summarize:

  • Standard deviation is a measure of the intrinsic variability of a system.
  • The 95% confidence interval is a measure of the quality of our estimate of the mean (or standard deviation).
  • We can reduce the standard deviation only by improving the system itself.
  • We can reduce the size of the confidence interval by running the benchmark more times.

Finally, I am not going to describe the mechanics of calculating the 95% confidence interval. Wikipedia describes it quite well, and most stats packages will do the heavy lifting for you; I like to use R.
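
For example, reusing the hypothetical results from the sketch above, t.test() reports a 95% confidence interval for the mean as a side effect:

> t.test(results)$conf.int    # 95% confidence interval for the mean of the toy results
[1]  96.07351 103.92649
attr(,"conf.level")
[1] 0.95

Running more benchmarks (a longer results vector) would shrink this interval, without changing the underlying mean or standard deviation of the system.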

I hope this helps you understand where and when standard deviation and confidence intervals can be used, and keeps you out of benchmark analysis hell. There are some caveats - for instance, the assumption of normality - but I will leave those for another day.

Tuesday Sep 01, 2009

Bad practices exposed

There is an interesting (and frightening!) study recently published in the ACM Transactions on Storage: A nine year study of file system and storage benchmarking together with a short summary of the results and a set of recommendations.

It surveys nine years' worth of papers on file system and storage benchmarks and makes for sobering reading:

We found that most popular benchmarks are flawed, and many research papers used poor benchmarking practices and did not provide a clear indication of the system's true performance.
and:
Finally, only about 45% of the surveyed papers included any mention of statistical dispersion.
We can only hope that the paper will raise awareness of the importance of rigor and thoroughness in performance benchmarking.

Tuesday Feb 17, 2009

First Past the Line - Analyzing Performance Benchmarking Data

Eoin Lawless
Eoin.Lawless@Sun.Com
Version 1.1

Imagine you're a developer. You've just thought up a clever little algorithm for, let's say, improving the latency of small packets on a 10Gig network interface. Now you want to see exactly how much better it is. You run a standard benchmark, like iperf, on both the original code and your new improved code. But - horror of horrors! - the old code appears to be faster. You run the test again; this time the new code is better. To be sure, you run the comparison a third time - and it's a dead heat. What's going on? How can you say definitively which is better?

Unfortunately, benchmarks of complex computer systems rarely generate completely reproducible results. Almost invariably there is some noise - in fact, as Brendan Gregg recently demonstrated, actual noise (his shouting) can cause a spike in disk drive latency.

In this article I outline a procedure for benchmark results analysis. I'll be using the R statistics package, but the ideas are not tied to R and many other stats packages or spreadsheets can be used.

R is a variant of the S statistics programming language. It is open source and freely available from http://www.r-project.org. Binaries are available there for Windows and Mac OS X, and it is easily compiled on any recent Solaris.

I'm going to introduce just enough R to enable people to compare two sets of performance benchmark results. I'm not going to explain R syntax in any great detail - that is done better at http://cran.r-project.org/doc/manuals/R-intro.html. Although I use the R software, the overall approach is the important aspect, not the R-specific details. Finally, if you are impatient and want the quick version, jump to the checklist.

Understand the benchmark

Before using benchmark results to analyze or compare performance, it's essential to understand what the benchmark is measuring. What is a sensible result? For example, if we're measuring the latency of small packet transfers, we'd expect to see millisecond or microsecond numbers, not picoseconds or minutes. Is a larger result better, or a smaller result better? Often when looking at raw numbers, we forget what they represent. You don't want to boast to your boss that you've improved small packet latency by making it larger!

What size of change is important: 1%, 5%? When making many successive small changes, you may need to ensure that small regressions don't accumulate; conversely, when comparing major changes, a very minor regression may be acceptable.

Comparing Data Sets

The exact criteria that our performance testing needs to meet will vary from situation to situation. A reasonable set of requirements are:

  • We want to detect changes of 1% or greater
  • We want more than 80% probability of finding a real change
  • We don't want more than 5% false positives (this is called the significance level)
  • We want to do as few benchmark runs as possible

Look at the data!

Raw numbers are hard to absorb. Plotting the data, preferably in more than one format, will make obvious features jump out. Let's start our tour of R by doing just that. Here are two sample sets of data, comparing openssl results from successive builds of Solaris Nevada:

Build 97      Build 98
600870.36     596775.85
606059.30     597114.60
600799.39     592075.09
611484.07     605196.20
605925.89     592342.02
606033.32     605277.01
611066.67     592401.92
600784.94     597161.81
600697.75     604817.72
611689.59     597408.31
600719.45     592312.69
606255.10     597660.42
610618.37     604943.36
611113.47     604761.17
605951.83     592601.77

Now look at the same data plotted side-by-side as bar charts, and as a strip chart (the R commands to produce both plots are below).

The data files are called snv_97.data and snv_98.data. Start R and load the data:

> basefile <- read.table("snv_97.data", header = FALSE,  col.names= c("basedata"))
> testfile <- read.table("snv_98.data", header = FALSE,  col.names= c("testdata"))
> attach(basefile)
> attach(testfile)
> basedata
 [1] 600870.4 606059.3 600799.4 611484.1 605925.9 606033.3 611066.7 600784.9
 [9] 600697.8 611689.6 600719.4 606255.1 610618.4 611113.5 605951.8
> testdata
 [1] 596775.8 597114.6 592075.1 605196.2 592342.0 605277.0 592401.9 597161.8
 [9] 604817.7 597408.3 592312.7 597660.4 604943.4 604761.2 592601.8
> datamax <- max(basedata, testdata)
> datamax
[1] 611689.6
> datamin <- min(basedata, testdata)
> datamin
[1] 592075.1
Let's have a look at a bar plot:
> par(mfcol=c(1,2))  # to plot the two charts side by side
> barplot(basedata, xlab="Iteration", ylab="Result", ylim=c(0, datamax * 1.1))
> barplot(testdata, xlab="Iteration", ylab="Result", ylim=c(0, datamax * 1.1))
> dev.off(2)
Now we want to display a strip chart containing both sets of results:
> plot.new()
> stripchart(basedata, method="stack", xlim=c(0.95 * datamin, 1.05 * datamax), col="red", at=1.2, offset=1)
> stripchart(testdata, method="stack", add=TRUE, at=0.8, col="blue", offset=1)
> dev.off(2)
We can save the output by first doing:
> bitmap(file="stripchart.png", type="png256", height=3)
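
Note that the bitmap device must be opened before the plotting commands are issued; putting the pieces above together, a sketch of the complete save sequence would be:

> bitmap(file="stripchart.png", type="png256", height=3)   # open the output file first
> stripchart(basedata, method="stack", xlim=c(0.95 * datamin, 1.05 * datamax), col="red", at=1.2, offset=1)
> stripchart(testdata, method="stack", add=TRUE, at=0.8, col="blue", offset=1)
> dev.off()                                                # close the device to write the file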

At this point I should note that R comes with a comprehensive help. So, for example,

> help(stripchart)
displays help for the stripchart command. There is also an online introduction, which is more general than this guide.

By a visual inspection we can see there is trouble here. Both sets of data show that the results are clumped into groups. Before trying to compare the performance of these two builds it's necessary to find out what is happening and why the data is clumping like this. Time to use Solaris's famed diagnostic tools: dtrace, vmstat, mpstat, intrstat and friends.

Testing for (mis)behaviour

As we've seen, data can sometimes have odd behaviours that make a straightforward comparison difficult, useless or even misleading. R has many statistical tests to check data for such (mis)behaviours. Some of these issues are clearly visible when we plot or graph the data; others are not as obvious, or else we may wish to have a programmatic way of identifying anomalies. In particular there are a few common issues with data sets that make a naive comparison of results either misleading or plain wrong:
  • Multimodal data
  • Outliers
  • Non-normal data (when used with tests that assume normality)
  • Insufficient data

Is the data multimodal?

We say a dataset is multimodal if the frequency of results has two or more distinct peaks. Tests for multimodality are not included in the base R package, but are available in the optional diptest package.
> install.packages("diptest")
Unfortunately this package is not easy to use, and I will skip a detailed example. Generally the strip chart makes multiple modes visually obvious, though the dip test is very useful for automatic analysis of large numbers of data sets.
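
For anyone who wants to experiment anyway, a minimal sketch is below; it assumes the diptest package exports a dip.test() function that reports a p-value for Hartigan's dip test of unimodality:

> library(diptest)
> dip.test(basedata)   # assumed helper: a small p-value suggests more than one mode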

Are there outliers?

An outlier is a single point that is distant from the bulk of the data set. There is an unfortunate tendency to blindly discard inconvenient outliers, especially when they make a comparison look bad. In our experience, outliers are often due to a benchmark misconfiguration or error (e.g. a damaged cable or a faulty disk), but they can also be symptomatic of real problems. There is an optional add-on package for R called outliers that can be used to test data sets for their presence.
> install.packages("outliers")
# package  installs from the web
> library(outliers)
> grubbs.test(basedata)

        Grubbs test for one outlier

data:  basedata 
G = 1.2892, U = 0.8728, p-value = 1
alternative hypothesis: highest value 611689.59 is an outlier 
The p-value in this case is 1, so we accept that there are no outliers.

Is the data 'normal'?

If the data is normal it is much easier to use it in comparisons. Normality effectively means that results are clustered around a central mean value, with roughly 95% of results within two standard deviations of the mean, and the frequency of a result falling off very rapidly the further from the mean it is. R comes with a test called the Shapiro-Wilk normality test, used as follows:
> shapiro.test(basedata)

        Shapiro-Wilk normality test

data:  basedata 
W = 0.8351, p-value = 0.01076
If the p-value is less than 0.05 then the data is probably not normal, as we see in this case. This means that we cannot use the Student T-Test to compare this data set with another. This is not a problem though - we can always use the Wilcoxon test, which does not require normal data.
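
A handy visual companion to the Shapiro-Wilk test is a normal quantile-quantile plot; if the points follow the reference line closely, the data is plausibly normal:

> qqnorm(basedata)   # sample quantiles plotted against theoretical normal quantiles
> qqline(basedata)   # reference line; systematic curvature suggests non-normality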

Do we have enough data?

There is an abundance of theory for calculating the number of results required to detect a change of a given size (the theory is called Power Analysis). R includes a suitable test, but unfortunately we need to know the mean and standard deviation of the result distribution before we can apply it.

We can estimate these quantities from the data set if it contains three or more points. With a small number of points the accuracy will be low; however, it does serve as a starting point. We'll use two new sample data sets, since the first two are not normal: s10u5_10.data and s10u6_07b.data.

> basefile <- read.table("s10u5_10.data", header = FALSE,  col.names= c("basedata"))
> testfile <- read.table("s10u6_07b.data", header = FALSE,  col.names= c("testdata"))
> attach(basefile)
> attach(testfile)
> basedata
[1] 325.9843 321.4696 326.9845
> testdata
 [1] 319.1681 318.2914 317.6821 319.5697 320.9252 322.6830 321.3920 316.2342
 [9] 318.3186 323.7438 317.3700
>
The strip chart gives a clear picture of the distribution of results. (Base data, basedata, is in red).
We will use power.t.test to see if we have enough results to detect a 1% change.
> sd <- max( sd(basedata), sd(testdata))
> d <- mean(basedata) * 0.01
> power.t.test( sd=sd, sig.level=0.05, power=0.8, delta=d, type="two.sample")

     Two-sample t test power calculation

              n = 13.87397
          delta = 3.248128
             sd = 2.938169
      sig.level = 0.05
          power = 0.8
    alternative = two.sided

 NOTE: n is number in *each* group

So, using our estimated values of the mean and standard deviation, we need to get 14 datapoints in each group to be able to detect a 1% change in performance. Note however that a bad (inflated) estimate of the standard deviation, sd, will result in a very large estimate of the number of datapoints required. In this case I would concentrate on getting extra numbers for basedata.

See http://www.stat.uiowa.edu/~rlenth/Power for a more sophisticated applet for calculating required sample sizes. Note however that power analysis is easy to get wrong. As a general rule, if it is easy to get extra benchmark runs, then do so!
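
If extra runs are genuinely hard to come by, power.t.test can also be turned around: omit delta and supply n, and it reports the smallest change that the existing sample size could reliably detect. A sketch, reusing the estimates above:

> power.t.test( n=length(basedata), sd=sd, sig.level=0.05, power=0.8, type="two.sample")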

Comparing the data sets

The final step, once we have confirmed both sets of data are usable, is to compare the datasets. There are three possible outcomes:
  • The performance has changed
  • The performance is the same
  • The results are inconclusive
Hopefully we have excluded the third possibility by gathering enough data and by investigating bimodal datasets and outlier datapoints.

Has the performance changed?

The statistical procedure for comparing two sets of data for a difference is called hypothesis testing. We have two alternative hypotheses, conventionally called the null hypothesis and the alternative hypothesis. The null hypothesis represents no change, nothing happening; the alternative hypothesis represents a new finding or an important result. The choice of null hypothesis is very important, the idea being that less harm is caused by wrongly accepting the null hypothesis than by wrongly rejecting it. Some examples:

  • Doctor testing a new cancer drug. Null hypothesis: the drug has no effect. Alternative: the drug improves survival rates by more than 4%. Reason: a new drug should not be sold unless it is certain it does good.
  • A food company wants to use a new food colouring. Null hypothesis: the additive may cause harm to some people. Alternative: the additive causes no harm. Reason: very strong proof is needed that the additive causes no harm.
  • Performance regression testing (i.e. Performance QE). Null hypothesis: no change. Alternative: a change greater than 1%. Reason: crying wolf with insufficient evidence wastes developers' time.

We are looking for a change (either an improvement or a regression). There are many methods for hypothesis testing, each with multiple subvariants. We typically use just two, the Student T-Test and the Wilcoxon rank sum test. Of these the Wilcoxon is more general.

Both tests calculate the probability (the p-value) that the observed spread of results could occur if the null hypothesis were true. We only reject the null hypothesis if this probability is very low, typically below 0.05, or 5%. This figure is referred to as the significance level of the test. If the p-value is less than the significance level, then we must act on the basis that there is a change.

Bear in mind that even if we don't reject the null hypothesis (i.e. don't find a change), this does not imply that all is good; it could be that there is a change but that the data we have collected is inconclusive. Hence the importance of checking the data and making sure we have enough datapoints.
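
Although the examples below all use the Wilcoxon test, the T-Test call has the same shape; a sketch (appropriate only when both data sets pass the normality check) looks like this:

> t.test(testdata, basedata, conf.level=0.95)   # Welch two-sample t test: a p-value plus a 95% CI for the difference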

Is performance the same?

Just because our data does not show a performance change doesn't necessarily mean that performance is unchanged. Our data might simply be inadequate for the job of detecting a change of the size we're interested in. We have failed to reject the null hypothesis, but that doesn't mean we can accept it.

Is there a way to test whether we can accept the null hypothesis (i.e. that performance has stayed the same)? As it turns out there is a relatively simple way. If the 90% confidence interval reported by the Wilcoxon (or T) test lies entirely within 1% of the baseline average, then we can safely assume the test build performance has remained within 1% of the base build performance. (See http://web.vims.edu/fish/faculty/pdfs/hoenig2.pdf for details.)
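
A sketch of that check in R, assuming basedata and testdata have been loaded as in the examples that follow:

> res <- wilcox.test(testdata, basedata, conf.int=TRUE, conf.level=0.90)
> tolerance <- 0.01 * mean(basedata)    # 1% of the baseline mean
> all(abs(res$conf.int) < tolerance)    # TRUE means performance can be treated as unchanged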

Example 1

We will use two sets of results from an openssl benchmark, the baseline running on snv_106 and the test on snv_107. Neither data set is multimodal or has outliers.
> basefile <- read.table("ssl.snv_106.data", header = FALSE,  col.names= c("basedata"))
> testfile <- read.table("ssl.snv_107.data", header = FALSE,  col.names= c("testdata"))
> attach(basefile)
> attach(testfile)
> power.t.test( sd=sd(basedata), sig.level=0.05, power=0.75, delta=mean(basedata) * 0.01, type="two.sample")

     Two-sample t test power calculation

              n = 5.296159
          delta = 285.0265
             sd = 155.7760
      sig.level = 0.05
          power = 0.75
    alternative = two.sided
For testdata, power.t.test cannot compute n at a power of 0.75 (75%) - with so little spread, even two runs give more power than that - so we use a higher (better) power level:
> power.t.test( sd=sd(testdata), sig.level=0.05, power=0.95, delta=(mean(testdata) * 0.01), type="two.sample")

     Two-sample t test power calculation

              n = 2.135454
          delta = 281.677
             sd = 42.21613
      sig.level = 0.05
          power = 0.95
    alternative = two.sided
We have more than enough data. We can now safely compare the two data sets. We will use the Wilcoxon test, even though the T-Test is applicable here, since the Wilcoxon can be applied more widely. Be careful with the ordering of basedata and testdata in the function call. With the ordering below (test data, then baseline data) a regression for a "bigger is better" benchmark will show as negative and an improvement as positive.
> wilcox.test(testdata,basedata, conf.int=TRUE)

        Wilcoxon rank sum test

data:  testdata and basedata
W = 0, p-value = 2.896e-07
alternative hypothesis: true location shift is not equal to 0
95 percent confidence interval:
 -432.64 -234.41
sample estimates:
difference in location
               -265.75

Here is how the result is interpreted:
  • Null hypothesis: no change in performance
  • Alternative hypothesis: an improvement or a regression
  • p-value: a value less than 0.05 indicates a change in performance
  • 95 percent confidence interval: the change in performance is 95% sure to lie within these limits
  • difference in location: negative is a regression, positive an improvement
In our case the p-value is very low, so we definitely have a change. Our benchmark is "bigger is better" and the change is negative, so we have a regression.
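
It can also be helpful to express the shift as a percentage of the baseline; for the numbers above it works out at a little under a 1% regression:

> shift <- wilcox.test(testdata, basedata, conf.int=TRUE)$estimate
> 100 * shift / mean(basedata)    # estimated change as a percentage of the baseline mean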

Example 2


 basefile <- read.table("jes.snv_103.data", header = FALSE,  col.names= c("basedata"))
 testfile <- read.table("jes.snv_104.data", header = FALSE,  col.names= c("testdata"))
 attach(basefile)
 attach(testfile)
 sd <- max( sd(basedata), sd(testdata))
 d <- mean(basedata) \* 0.01
 power.t.test( sd=sd, sig.level=0.05, power=0.8, delta=d, type="two.sample")
 mean(basedata) \*0.01

> wilcox.test(testdata, basedata, conf.int=TRUE, conf.level=0.9)
        Wilcoxon rank sum test

data:  testdata and basedata
W = 111, p-value = 0.917
alternative hypothesis: true location shift is not equal to 0
90 percent confidence interval:
 -1.918  2.146
sample estimates:
difference in location
                0.3145

As before we interpret the output of the Wilcoxon test as follows:
  • Null hypothesis: no change in performance
  • Alternative hypothesis: an improvement or a regression
  • p-value: a value less than 0.05 indicates a change in performance
  • 90 percent confidence interval: the change in performance is 90% sure to lie within these limits
  • difference in location: negative is a regression, positive an improvement

Our p-value is 0.917, which is far higher than 0.05, so this data does not indicate any change in performance between the two builds. However, does that imply the converse? If the 90% confidence interval is within 1% of the base mean value, then we can accept that the performance is unchanged.
> mean(basedata) * 0.01
[1] 9.729103
Our 90% confidence interval is from -1.918 to 2.146, which is well within ± 9.7291, so for this benchmark we can accept that performance is unchanged.

Example 3


> basefile <- read.table("web.s10u7_01.data", header = FALSE,  col.names= c("basedata"))
> testfile <- read.table("web.s10u7_02.data", header = FALSE,  col.names= c("testdata"))
> attach(basefile)
> attach(testfile)
> sd <- max( sd(basedata), sd(testdata))
> d <- mean(basedata) * 0.01
> power.t.test( sd=sd, sig.level=0.05, power=0.8, delta=d, type="two.sample")
> mean(basedata) * 0.01
In this final example we look at a case where the data is inconclusive. The datasets are from two builds of Solaris 10, and the benchmark is a standard web server benchmark (web.s10u7_01.data and web.s10u7_02.data). The Wilcoxon test does not indicate any change in performance. However, the 90% confidence interval (-18 to 460) is far wider than ± 1% of the base mean (± 32), so we cannot assume that the performance has stayed constant. Our data is not adequate and we need to investigate further. For instance, several results look like outliers: is there an intermittent performance problem?
>  wilcox.test(basedata,testdata, conf.int=TRUE, conf.level=0.9)

        Wilcoxon rank sum test

data:  basedata and testdata
W = 25, p-value = 0.3301
alternative hypothesis: true location shift is not equal to 0
90 percent confidence interval:
 -18 460
sample estimates:
difference in location
                    19
> mean(basedata) * 0.01
[1] 32.6425
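
A sensible next step would be to re-run the earlier checks on these two data sets; the output is omitted here since the conclusion depends on what the checks reveal:

> library(outliers)
> grubbs.test(basedata)    # any suspicious extreme values in the baseline?
> grubbs.test(testdata)    # ...or in the test build's results?
> power.t.test( sd=max(sd(basedata), sd(testdata)), sig.level=0.05, power=0.8, delta=mean(basedata) * 0.01, type="two.sample")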


Summary and Checklist

You may have noted that I make very little mention of the average value of our results. This is deliberate. Unless we first check for outliers, multiple modes and normality, it makes little sense to directly compare average results.

This has been a very rough guide to statistical analysis of performance benchmark results - please let me know of any mistakes. Here is a checklist to help:

  • Make sure you understand the benchmark you are running
    • Are the results sensible?
    • Is larger better or is smaller better?
    • What size change is significant to you?
  • Look at the data - strip chart and histogram
    • Are there any obvious anomalies?
    • Do the numbers make rough sense in the context?
  • Is the data 'normal'? If so we can use the Student T-Test.
  • Is the data multimodal?
    • If the data is multimodal, the average is no longer a useful measure.
    • Investigate the reasons for the multimodal results.
  • Is there an outlier?
    • Is there a reason (e.g. an obvious configuration error or runtime failure in the benchmark)?
  • If the data is normal
    • Do we have enough results to detect the size change we're interested in?
    • Does the Student T-Test confirm a difference between the results?
    • Does the Wilcoxon test confirm a difference between the results?
  • If the data is unimodal but not normal
    • Do we have enough results to detect the size change we're interested in? [use t-test power analysis as a rough estimate]
    • Does the Wilcoxon test confirm a difference between the results?
  • If we don't detect a change in performance, can we say that performance is unchanged (is the 90% confidence interval within a ±1% range of the base mean)?
