Thursday Mar 20, 2008

Performance tuning recipe

Dan Berger posted a comment about the compiler flags we'd used for Ruby. Basically, we've not done compiler flag tuning yet, so I'll write a quick outline of the first steps in tuning an application.

  • First of all, profile the application with whatever flags it usually builds with (see the first sketch after this list). This is partly to get some idea of where the hot code is, but it's also useful to have some idea of what performance you're starting from. The other benefit is that it's tricky to debug build issues if you've already changed the defaults.
  • It should be pretty easy at this point to identify the build flags. Probably they will flash past on the screen, or in the worst case, they can be extracted (from non-stripped executables) using dumpstabs or dwarfdump. It's worth checking that the flags you want to use are actually the ones being passed into the compiler.
  • Of course, I'd certainly use spot to gather all the profiles. One particularly useful feature of spot is that it archives its results, so after multiple runs of the application with different builds it's still possible to look at the old code and identify the compiler flags that were used.
  • I'd probably try -fast, which is a macro flag, meaning that it enables a number of optimisations that typically improve application performance. I'll probably post later about this flag, since there's quite a bit to say about it. Performance under the flag should give an idea of what to expect from aggressive optimisations. If the flag does improve the application's performance, then you probably want to identify the exact options that provide the benefit and use those explicitly (the second sketch below shows one way to go about this).
  • In the profile, I'd be looking for routines which seem to be taking too long, and I'd be trying to figure out what was causing the time to be spent. For SPARC, the execution count data from bit that's shown in the spot report is vital for distinguishing between code that runs slowly and code that runs fast but is executed many times. I'd then try various other options based on what I saw: memory stalls might make me try the prefetch options, and TLB misses would make me tweak the -xpagesize option.
  • I'd also want to look at using crossfile optimisation and profile feedback, both of which tend to benefit all codes, but particularly those with a lot of branches. The compiler can already make pretty good guesses about loops ;) (The third sketch below shows the build steps.)
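
As a rough illustration of the first couple of steps, here's the kind of command sequence I have in mind. The application name (myapp), its source file, and the input data are placeholders, and the flags shown are just stand-ins for whatever the application normally builds with; spot, collect, and er_print are the Sun Studio performance tools.

    # Build with the application's usual flags, plus -g so the tools can
    # attribute time back to the source.
    cc -O -g -o myapp myapp.c

    # Profile the baseline run; spot archives the results of each run, so
    # earlier builds and their flags remain available for comparison.
    spot ./myapp input.data

    # Alternatively, record an experiment with the performance analyzer
    # and list the hottest functions.
    collect ./myapp input.data
    er_print -functions test.1.er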
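
To see how much headroom aggressive optimisation gives, the next experiment is just a rebuild and another profile. Again the names are placeholders, and the exact set of options that -fast enables varies by compiler release and platform, so check the cc(1) man page for your version.

    # Try the aggressive macro flag and compare against the baseline.
    cc -fast -g -o myapp_fast myapp.c
    spot ./myapp_fast input.data

    # If -fast helps, work out which of its component options provide the
    # benefit and use those explicitly in the build.

    # Follow-up experiments depend on what the profile shows, for example
    # larger pages for TLB misses, or prefetch for memory stalls.
    cc -fast -g -xpagesize=4M -o myapp_lp myapp.c
    cc -fast -g -xprefetch=auto -o myapp_pf myapp.c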
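
And a sketch of the crossfile optimisation and profile feedback builds. The profile directory and workload names are placeholders; the important point is that the training run should be representative of the real workload.

    # Crossfile (whole-program) optimisation.
    cc -fast -xipo -g -o myapp_ipo myapp.c

    # Profile feedback: build to collect a profile, run a training
    # workload, then rebuild using the collected data.
    cc -fast -xipo -xprofile=collect:./profile -o myapp_train myapp.c
    ./myapp_train training.data
    cc -fast -xipo -xprofile=use:./profile -o myapp_fb myapp.c
    spot ./myapp_fb input.data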

These directions are more a list of possible experiments than necessarily an item-by-item checklist, but they form a good basis. And they are not an exhaustive list...

Thursday Apr 12, 2007

Training workload quality in CPU2006

The paper on training workload quality in CPU2006 is basically an update of previous work on training workloads in CPU2000.

The training workloads in CPU2006 are pretty good. Surprisingly, as pointed out in the paper, SPEC didn't have to change many of the workloads to make this happen. This supports the hypothesis that programs generally run through the same code paths regardless of the input.

What I like about this work is that it provides a way of assessing training workload quality. This directly addresses one of the concerns some people have about using profile feedback for optimisation: whether the selected training workload is going to be a bad choice for their actual workload.

In terms of the methodology, it's tempting to think that the best test would be to measure performance before and after using the training workload. That approach was rejected because the performance gain is a function of both the quality of the training workload and the ability of the compiler to exploit it (together with whether there is anything the compiler can actually do to improve performance given that knowledge). So a performance gain doesn't necessarily mean a good quality training workload, and the absence of a gain doesn't necessarily mean a poor quality training workload.

The final metrics of code coverage and branch behaviour are about as hardware- and compiler-independent as possible. It should be possible to break any program on any hardware down into very similar blocks, even if the instructions in those blocks end up being different. So the approach seems particularly good for evaluating cross-platform benchmarks.
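
To make the idea concrete, here's a rough sketch of how the coverage comparison could be done. It uses GCC's gcov purely as a generic stand-in for the coverage tooling, and the file and workload names are placeholders; the methodology in the paper works at the basic-block level and also looks at branch behaviour, so this is only an approximation of the idea.

    # Build with coverage instrumentation (gcov as an illustration; any
    # basic-block coverage tool would do).
    gcc -O2 -fprofile-arcs -ftest-coverage -o app app.c

    # Record which source lines the training workload executes.
    ./app train.input
    gcov app.c
    awk -F: '$1 ~ /[0-9]/ {print $2+0}' app.c.gcov | sort -u > train.lines

    # Reset the counters and repeat with the reference workload.
    rm -f *.gcda
    ./app ref.input
    gcov app.c
    awk -F: '$1 ~ /[0-9]/ {print $2+0}' app.c.gcov | sort -u > ref.lines

    # Lines exercised by the reference run but never seen during training;
    # the smaller this number, the better the training workload covers the
    # code that actually matters.
    comm -13 train.lines ref.lines | wc -l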

For those interested in using this method, there's a pretty detailed how-to guide on the developer portal.

About

Darryl Gove is a senior engineer in the Solaris Studio team, working on optimising applications and benchmarks for current and future processors. He is also the author of the books:
  • Multicore Application Programming
  • Solaris Application Programming
  • The Developer's Edge