Code complete: burn this chapter

That's a somewhat inflammatory title for this post, but continuing from my previous post on the book Code Complete, I think that the chapters (25 & 26) on performance really do not contain good advice or good examples.

To make this more concrete, consider the example on pg 593 where Steve McConnell compares the performance of these two code fragments:

Original:

for i = 1 to 10
  a[ i ] = i
end for

Unrolled:

a[ 1 ] = 1
a[ 2 ] = 2
a[ 3 ] = 3
a[ 4 ] = 4
a[ 5 ] = 5
a[ 6 ] = 6
a[ 7 ] = 7
a[ 8 ] = 8
a[ 9 ] = 9
a[ 10 ] = 10

Steve finds that Visual Basic and Java run the unrolled version of the loop faster.
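
For concreteness, here's the same comparison rendered as compilable C; this is my own sketch, not the book's code. The interesting point is that a compiler will happily do this unrolling for you - gcc, for example, has -funroll-loops - so hand-unrolling mostly costs readability.

#include <stdio.h>

int main(void)
{
  int a[11];   /* index 0 unused, to match the book's 1-based pseudocode */

  /* Original: the rolled loop */
  for (int i = 1; i <= 10; i++)
  {
    a[i] = i;
  }

  /* Unrolled: the same assignments written out by hand */
  a[1] = 1;  a[2] = 2;  a[3] = 3;  a[4] = 4;  a[5] = 5;
  a[6] = 6;  a[7] = 7;  a[8] = 8;  a[9] = 9;  a[10] = 10;

  printf("%d\n", a[10]);   /* use the result */
  return 0;
}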

There are a couple of examples that talk about incorrect access ordering for arrays. Here's some C code that illustrates the problem:

Slow code:

for (column = 0; column < max_column; column++)
{
  for (row = 0; row < max_row; row++)
  {
    data[row][column] = stuff();
  }
}

Fast code:

for (row = 0; row < max_row; row++)
{
  for (column = 0; column < max_column; column++)
  {
    data[row][column] = stuff();
  }
}

On page 599 it is suggested that the slow code is inefficient because it might cause paging to disk; on page 623 it is suggested that the loop with the higher trip count should be on the inside to amortise the initialisation overhead of the inner loop. Neither of these explanations is right. As I'm sure most of you recognise, the code is slow because of the cache misses incurred when accessing non-adjacent memory locations. There is a cost to initialising the inner loop, but nothing significant, and yes, you could get paging to disk - but only if you are running out of memory (and if you're running out of memory, you're hosed anyway!). You're more likely to get TLB misses (and perhaps that is what Mr McConnell intended to say).
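
To see the cache effect directly, here's a minimal sketch of a measurement (my own, not from the book; the array size and the use of clock() are arbitrary choices). Compiled without optimisation, the column-first version is typically several times slower purely because consecutive stores land on different cache lines; with optimisation enabled the compiler may well interchange the loops itself, which rather reinforces the point of the next section.

#include <stdio.h>
#include <time.h>

#define ROWS 2048
#define COLS 2048

static double data[ROWS][COLS];

int main(void)
{
  clock_t t;

  /* Slow: the rightmost index is held fixed while the leftmost varies,
     so consecutive stores are COLS * sizeof(double) bytes apart. */
  t = clock();
  for (int column = 0; column < COLS; column++)
    for (int row = 0; row < ROWS; row++)
      data[row][column] = 1.0;
  printf("column-first: %.3fs\n", (double)(clock() - t) / CLOCKS_PER_SEC);

  /* Fast: the rightmost index varies fastest, so the stores are sequential. */
  t = clock();
  for (int row = 0; row < ROWS; row++)
    for (int column = 0; column < COLS; column++)
      data[row][column] = 2.0;
  printf("row-first:    %.3fs\n", (double)(clock() - t) / CLOCKS_PER_SEC);

  return 0;
}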

I consider the above issues to be quite serious, and unfortunately I'm not terribly happy with the rest of the material either. Hence my recommendation to ignore (or burn ;) these chapters. I'll go through my other reservations now.

Lack of details. The timing information is presented with no supporting detail (pg 623): "C++ Straight Time = 4.75 Code-Tuned Time = 3.19 Time Savings = 33%". What was the compiler? What compiler flags were given? What was the test harness?

The book presents this as though "C++" runs the code slowly, but in reality it's more likely a test of the effectiveness of the compiler, and of the user's ability to use that compiler. I'd be surprised if any compiler with minimal optimisation enabled did not perform the loop interchange necessary to get good performance. Which leads to my next observation:

Don't compilers do this? I think the book falls into one of the common "optimisation book" traps, where lots of ink is spent describing and naming the various optimisations. This gives the false impression that the expert programmer needs to be able to identify these optimisations and apply them to their program by hand. Most compilers will apply all of these optimisations - after all, that is what compilers are supposed to do: take the grudgery out of producing optimal code. It's great for page count to enumerate all the possible ways that code might be restructured for performance, but in most situations the manual restructuring will lead to code with exactly the same performance.

Profiling. It's not there! To me, the most critical thing a developer can do when optimising a program is to profile it. Understanding where the time is being spent is the necessary first step towards improving the performance of the application. This omission is alarming. The chapter already encourages users to do manual optimisations where there might be no gains (at the cost of time spent restructuring that could be better spent writing new code, and the risk that the resulting code is less maintainable), but without profiling the application, users are effectively encouraged to do this over the entire source code, not just the lines that actually matter.
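
As an illustration of what profiling buys you, here's a hedged sketch of the workflow using gcc and gprof (any profiler will do; the program and its function names are invented for the example). The flat profile shows essentially all the time in one routine, which is exactly the information the chapter never asks the reader to gather.

/* profile_demo.c - a made-up example to show the profiling workflow:
 *
 *   gcc -pg profile_demo.c -o profile_demo
 *   ./profile_demo
 *   gprof profile_demo gmon.out
 *
 * The flat profile will attribute almost all of the time to hot(),
 * so that is the only routine worth spending tuning effort on. */
#include <stdio.h>

static double hot(void)            /* deliberately expensive */
{
  double sum = 0.0;
  for (int i = 0; i < 200000000; i++)
    sum += (double)i * 0.5;
  return sum;
}

static double cold(void)           /* cheap - not worth optimising */
{
  double sum = 0.0;
  for (int i = 0; i < 1000; i++)
    sum += (double)i;
  return sum;
}

int main(void)
{
  printf("%f %f\n", hot(), cold());
  return 0;
}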

Assembly language. Yes, I love assembly language, there's nothing I enjoy better than working with it (no comment), but I wouldn't encourage people to drop into it for performance reasons unless they had utterly exhausted every other option. The book includes an example using Delphi where the assembly language version ran faster than the high-level version. My guess is that the compilers had some trouble with aliasing, and hence issued more loads than were necessary - a check of the assembly code that the compilers generated would indicate that, and then it's pretty straightforward to write assembly-language-like high-level code from which the compiler can produce optimal code. [Note that I view reading and analysing the code at the assembly language level to be very useful, but I wouldn't recommend leaping into writing assembly language without a good reason.]
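
To illustrate what I mean by assembly-language-like high-level code, here's a hedged sketch (my own, in C rather than the book's Delphi) of the kind of aliasing problem I suspect was at work, and how C99's restrict qualifier resolves it without writing any assembly. The function names and signatures are invented for the example.

/* Without restrict, the compiler must assume that writing dst[i] could
   modify *scale, so it reloads *scale on every iteration of the loop. */
void scale_plain(float *dst, const float *src, const float *scale, int n)
{
  for (int i = 0; i < n; i++)
    dst[i] = src[i] * (*scale);
}

/* With restrict, the pointers are promised not to overlap, so *scale can
   be loaded once and kept in a register - which is the code the hand-written
   assembly version would have been trying to get. */
void scale_restrict(float * restrict dst, const float * restrict src,
                    const float * restrict scale, int n)
{
  for (int i = 0; i < n; i++)
    dst[i] = src[i] * (*scale);
}

Comparing the compiler's output for the two (gcc -S, or the equivalent listing option for your compiler) shows whether the redundant loads have gone - reading the assembly, rather than writing it.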

So what would I recommend?

  • Profile. Always profile. This will indicate where the time is being spent, and what sort of gains you should expect from optimising parts of the application.
  • Know the tools. Make sure that you know what compiler flags are available, and that you are requesting the right kind of things from the compiler. All too often there are stories about how A is faster than B, which are due to people not knowing how to use the tools.
  • Identify those parts of the code where the time is spent, and examine them in detail to determine whether it's a shortcoming of the compiler, the compiler flags, or an ambiguity in the source code that causes time to be spent there. Many performance problems can be solved by adding a new flag, or perhaps a minor tweak to the source code.
  • Only when you have exhausted all other options, and you know that you can get a significant performance gain, should you start wildly hacking at the source code or recoding parts in assembly language.

The other thing to recommend is a read of Bart Smaalder's Performance Anti-patterns.

Comments:

I've steered clear of McConnell's book largely because it's ungodly thick, but also because the reviews I've seen are simply non-critical. Thanks for taking the time to point out where it's weak -- now I know it's a real book with real flaws, and will give (some of) it a chance.

At the risk of sounding the sycophant, I use the first portion of your book when I teach performance to admins all the time to remind them of the importance of examining how a troublesome application was compiled.

Posted by Michael Ernest on June 11, 2009 at 02:06 AM PDT #

I thought you had invented a new word - "take the grudgery out of producing optimal code" - but apparently the term is known to Urban Dictionary. It does seem like a good term for what compilers ought to do.

Posted by John Henning on June 11, 2009 at 02:33 AM PDT #

@Michael. It's a good book. I'm always impressed when someone delivers a _good_ book that thick! Too many books end up not having a great mix of source to insight. Glad you find mine useful.

@John. It wouldn't be the first time I've hit that problem. I got lots of blank faces when I described someone as "having kittens". (http://dictionary.reference.com/browse/have+kittens) or "living in cloud cuckoo land" (http://idioms.thefreedictionary.com/live+in+cloud-cuckoo+land)

Posted by Darryl Gove on June 11, 2009 at 03:12 AM PDT #

Actually the only reason to stay away from assembler (or "assembly", as you call it) is because it isn't portable.

Otherwise, if it were, that's all I'd ever write code in, all day long, day in, day out.

I find it much easier to program in assembler than in a high-level language. The code is so simple and so fast, once a library of routines is built up, anything can be put together on short notice.

Posted by UX-admin on June 11, 2009 at 06:07 AM PDT #

What edition of CC are you looking at? The 2nd Edition came out in 2004. 1st in 1993 (the one I have). Cache might not have been a problem in the early 90's. But it certainly is now.

It is interesting that all the optimization techniques I learned back in the 70's for the Cray and CDC machines are now pessimizations for today's architectures and compilers.

Things change, so advice can get stale.

Posted by Richard Friedman on June 25, 2009 at 03:40 PM PDT #

2nd Edition.

Posted by Darryl Gove on June 25, 2009 at 04:25 PM PDT #

About

Darryl Gove is a senior engineer in the Solaris Studio team, working on optimising applications and benchmarks for current and future processors. He is also the author of the books:
Multicore Application Programming
Solaris Application Programming
The Developer's Edge
