Sun's Performance Lifestyle - PerfPIT, Non-Debug BFUs and What Happens


Among the services that my group, Performance QA, provides for the Solaris development community, two tend to be highlighted as extremely important: PerfPIT and Performance SelfTest. I mentioned both of these services in a previous post, Enabling Sun's Performance Lifestyle.

Both PerfPIT and Performance SelfTest use the same underlying mechanisms to evaluate the impact of a developer's changes on overall performance, so let's go through the process in a bit more detail.

Background Knowledge

The process for building Solaris is pretty straightforward, and is documented in detail in the Building OpenSolaris section of the developer reference. One of the possible images that can be created during a build is a BFU archive, which is generated when you set the "-a" flag in your nightly options.

Personally I use the following options in nightly (your mileage may vary, though).

NIGHTLY_OPTIONS="-MNazmni";             export NIGHTLY_OPTIONS

Oh, you might have noticed the -z option above. We like the -z option; everyone should like the -z option. Why? It gzips everything for you, which of course means that moving the BFU archives around is much, much quicker.
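As a toy illustration of what -z buys you (the file name generic.usr mirrors one of the real BFU archive names, but the contents below are just compressible filler, not a real archive):

```shell
# Stand-in for a BFU archive: real ones are produced by nightly; this is
# just highly compressible filler with a similar-looking name.
mkdir -p /tmp/bfu-demo
yes "usr/bin/ls" | head -100000 > /tmp/bfu-demo/generic.usr
cp /tmp/bfu-demo/generic.usr /tmp/bfu-demo/generic.usr.orig
gzip -f /tmp/bfu-demo/generic.usr        # what nightly's -z flag does for you
ls -l /tmp/bfu-demo/generic.usr.gz /tmp/bfu-demo/generic.usr.orig
```

The gzipped copy is a small fraction of the original size, which is exactly what you want when you are shuffling archives between build machines and the benchmarking lab.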

A BFU (aka Blindingly Fast Update, aka Bonwick Faulkner Upgrade) archive is used to upgrade Solaris to a newer rev without completely reinstalling the system. Further details on using BFU to install a new image can be found here. Now, with the background reading out of the way.....


To use either PerfPIT or Performance SelfTest, at a very high level a developer needs to provide:
  • Non-debug baseline BFU archives
  • Non-debug BFU archives with just the developer's changes
  • A Solaris Express (Nevada) build number to run these changes on

Firstly, the non-debug BFU archives are extremely important: performance runs with debugging enabled have pretty much zero usefulness (and thanks to DTrace, no one ever needs to use a debug build to try and narrow down a performance problem on Solaris again). To this end we automatically reject debug BFU archives. As an example of how detrimental the impact of debugging can be on performance, here's a small extract from a table of results comparing a debug BFU archive against a recent non-debug BFU archive on top of a Solaris Express install. (By way of explanation, this was a BFU that I ran manually by mistake, so it didn't get the "is it a debug BFU?" check.)

System                     Benchmark Test       BFU Archive (debug)  Base BFU Archive  % Std Deviation  % Change
SunFire V240 2 x 1002MHz   am_thread_2500_100   312.60               123.22            0.34%            -153.70
                           am_thread_2500_150   471.23               185.52            0.82%            -154.00
                           am_thread_2500_200   754.06               247.12            0.31%            -205.14

Not quite what you want to see on a Monday morning ;).
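For the curious, the % Change column is just the relative difference between the two runs. Assuming the am_thread figures are timings where lower is better, the sign is flipped so that regressions show up as negative. A quick sketch using the first row of the table (the last digit differs from the table's -153.70 only because the published figures are already rounded):

```shell
# % Change for the first table row: debug vs. baseline result.
# Negative means the debug archive performed worse than the baseline.
debug=312.60
base=123.22
awk -v d="$debug" -v b="$base" 'BEGIN { printf "%.2f\n", -100 * (d - b) / b }'
# prints -153.69
```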

The Solaris Express build requested is quite important as well; generally people use the latest, greatest, released bits, and that just makes sense.

Building The BFU Archives

We generally ask that developers build their baseline and test BFU archives on the same machine. There are multiple reasons for this, but primarily it is to ensure that the only delta between the baseline and test BFU archives is the actual code change. Historically we have seen phantom performance degradations in our upstream testing that were eventually traced to issues such as which compiler was used, the libc version on the build system, and so on. Building both sets of BFU archives on the same system just saves time all round.

So the process is:

  • Create your workspace
  • Build your baseline non debug BFU archives
  • Apply your changes
  • Build the BFU archives with your changes
  • Install the BFU archives on a machine to ensure that they don't create warm bricks...
    For this just install the baseline BFU archive first, make sure it boots, and then install your changed BFU archives.
    This is just a sanity test, but again it saves time in the long run.
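The build half of the steps above can be sketched as a small shell wrapper. Everything here except the nightly invocation itself is an assumption about local layout: the workspace path, the env file name, and the "apply your changes" step are placeholders for your own setup and SCM.

```shell
# Hypothetical wrapper over the baseline/test build steps; paths and the
# env file name (opt.env) are assumptions -- substitute your own.
build_baseline_and_test() {
    ws=$1                                   # path to your workspace
    cd "$ws" || return 1

    /opt/onbld/bin/nightly ./opt.env        # baseline non-debug build
    cp -r archives archives.baseline        # stash the baseline BFU archives

    # ...apply your code changes here (SCM merge, patch, etc.)...

    /opt/onbld/bin/nightly ./opt.env        # rebuild with the changes in place
}
```

After both builds complete, install the baseline archives on a scratch machine, confirm it boots, then layer the changed archives on top, exactly as described above.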

The Benchmarking Process

Okay, so someone has got this far and the BFUs are ready to go; what happens then? The BFU archives are brought into our lab (firewalled away to remove any unwanted variables from the benchmarking process) and the runs are scheduled. Once a run starts, we go through the following steps:
  1. Install the requested machine(s) with the requested Solaris Express version
  2. BFU the machine with the baseline BFU archives
  3. Execute the benchmarks and collect the results
  4. Reinstall the machine (to ensure a completely clean run)
  5. BFU the machine with the test BFU archives
  6. Execute the benchmarks and collect the results
We then gather all the results and run some analysis on them. This is completely automatic unless something is outside reasonable bounds for improvement or degradation, in which case the suspect results are flagged and manually reviewed. Finally, we send the results back to the developer, and, all things going according to plan, everything is green with big plus signs in front of four-digit percentage gains. (Okay, I did say according to plan, but it would be a plan that finished with one of Niall's favourite lines: "all goals met, all pigs watered and ready to fly".) Realistically, though, gains in the high double digits have been seen with certain projects, and small incremental gains are continuously seen.
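The "out of reasonable bounds" check can be imagined as something like the following sketch. The 5% threshold, the input format, and the function name are all invented for illustration; the real harness is internal.

```shell
# Flag any benchmark whose %change (test vs. baseline) exceeds a threshold.
# Input lines: <test-name> <baseline> <test-result>; lower numbers are better.
flag_results() {
    awk -v limit=5 '
        {
            change = -100 * ($3 - $2) / $2
            status = (change < -limit || change > limit) ? "REVIEW" : "ok"
            printf "%-22s %8.2f%%  %s\n", $1, change, status
        }'
}

# The debug-by-mistake row from the table trips the check; a result close
# to its baseline passes through untouched.
printf '%s\n' \
    "am_thread_2500_100 123.22 312.60" \
    "am_thread_2500_150 185.52 186.01" | flag_results
```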



