Sunday Oct 12, 2008

The Seduction of Single-Threaded Performance

The following is a dramatization, used to illustrate some concepts regarding performance testing and the architecture of computer systems. Artistic license may have been taken with events, people and time-lines. However, the performance data I have listed is real and current.

I got contacted recently by the Systems Architect of latestrage.com. He has been a happy Sun customer for many years, but was a little displeased when he took delivery of a beta test system of one of our latest UltraSPARC servers.

"Not very fast", he said.

"Is that right, how is it not fast?", I inquired eagerly.

"Well, it's a lot slower than one of the LowMarginBrand x86 servers we just bought", he trumpeted indignantly.

"How were you measuring their speed?", I asked, getting wary.

"Ahh, simple - we were compressing a big file. We were careful to not let it be limited by I/O bandwidth or memory capacity, though..."

What then ensues is a discussion about what was being used to test "performance", whether it matches latestrage.com's typical production workload and further details about architecture and objectives.

Data compression utilities are a classic example of a seemingly mature area in computing: lots of utilities, lots of different algorithms, a few options in some utilities, reasonable portability between operating systems - but one significant shortcoming: there is no commonly available utility that is multi-threaded.

Let me pretend I am still in this situation of using compression to evaluate system performance, and that I want to compare the new Sun SPARC Enterprise T5440 with a couple of current x86 servers. Here is my own first observation about such a test, using a single-threaded compression utility:

[Chart: Single-Threaded Throughput]

Now if you browse down to older blog entries, you will see I have written my own multi-threaded compression utility. It consists of a thread to read data, as many threads to compress or decompress data as demand requires, and one thread to write data. Let me see whether I can fully exploit the performance of the T5440 with Tamp...
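To make that pipeline concrete, here is a minimal sketch in C of the same shape. This is not Tamp's actual source: the "compression" is just a byte copy, the workers take turns at the read and write ends under a lock rather than using dedicated reader and writer threads, and it makes no attempt to preserve block order on output - something a real utility must do.

#include <pthread.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define NWORKERS 4
#define BLKSZ    (256 * 1024)   /* Tamp's 256KB block size (see notes below) */

static pthread_mutex_t rlock = PTHREAD_MUTEX_INITIALIZER;  /* serialize reads  */
static pthread_mutex_t wlock = PTHREAD_MUTEX_INITIALIZER;  /* serialize writes */

/* Stub standing in for a real compressor such as QuickLZ. */
static size_t compress_block(const char *in, size_t len, char *out)
{
    memcpy(out, in, len);
    return len;
}

static void *worker(void *arg)
{
    char *in = malloc(BLKSZ), *out = malloc(BLKSZ);
    ssize_t n, w;

    (void)arg;
    for (;;) {
        /* Read stage: take a turn pulling one block from stdin. */
        pthread_mutex_lock(&rlock);
        n = read(STDIN_FILENO, in, BLKSZ);
        pthread_mutex_unlock(&rlock);
        if (n <= 0)
            break;

        /* Compress stage: the only part that runs in parallel. */
        size_t clen = compress_block(in, (size_t)n, out);

        /* Write stage: serialized; a real utility must also restore
         * the original block order here before writing. */
        pthread_mutex_lock(&wlock);
        w = write(STDOUT_FILENO, out, clen);
        pthread_mutex_unlock(&wlock);
        if (w < 0)
            break;
    }
    free(in);
    free(out);
    return NULL;
}

int main(void)
{
    pthread_t tid[NWORKERS];
    int i;

    for (i = 0; i < NWORKERS; i++)
        pthread_create(&tid[i], NULL, worker, NULL);
    for (i = 0; i < NWORKERS; i++)
        pthread_join(tid[i], NULL);
    return 0;
}

Even a sketch like this shows where the serial choke-points are: whatever happens under rlock or wlock cannot scale past one CPU - which foreshadows the read-side limit I describe next.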

Well, this turned out to be not quite the end of the story. I designed my tests with my input file located on a TMPFS (in-memory) filesystem, and with the output being discarded. This left the system focusing on the computation of compression, without being obscured by I/O. This is the same objective that latestrage.com had.

What I found on the T5440 was that Tamp would not use more than 12-14 threads for compression - it was limited by the speed at which a single thread could read data from TMPFS.

So, I chose to use another dimension by which we can scale up work on a server - add more sources of workload. This is represented by multiple "Units of Work" in my chart below.

After completing my experiments I discovered that, as expected, the T5440 may disappoint if we restrict ourselves to a workload that cannot fully utilize the available processing capacity. If we add more work, however, we find it handily surpasses equivalent 4-socket quad-core x86 systems.

[Chart: Multi-Threaded Throughput]

Observing Single-Thread Performance on a T5440

A little side-story, and another illustration of how inadequate a single-threaded workload is at determining the capability of the T5440. Take a look at the following output from vmstat, and answer this question:

Is this system "maxed out"?

(Note: the "us", "sy" and "id" columns list how much CPU time is spent in User, System and Idle modes, respectively)

 kthr      memory            page            disk          faults      cpu
 r b w   swap  free  re  mf pi po fr de sr d0 d1 d2 d3   in   sy   cs us sy id 
 0 0 0 1131540 12203120 1  8  0  0  0  0  0  0  0  0  0 3359 1552 419  0  0 100 
 0 0 0 1131540 12203120 0  0  0  0  0  0  0  0  0  0  0 3364 1558 431  0  0 100 
 0 0 0 1131540 12203120 0  0  0  0  0  0  0  0  0  0  0 3366 1478 420  0  0 99 
 0 0 0 1131540 12203120 0  0  0  0  0  0  0  0  0  0  0 3354 1500 441  0  0 100 
 0 0 0 1131540 12203120 0  0  0  0  0  0  0  0  0  0  0 3366 1549 460  0  0 99 

Well, the answer is yes. It is running a single-threaded process, which is using 100% of one CPU. For the sake of my argument we will say the application is the critical application on the system. It has reached its highest throughput and is therefore "maxed out". You see, when one CPU represents less than 0.5% of the entire CPU capacity of a system, a single saturated CPU will be rounded down to 0%. In the case of the T5440, one CPU is 1/256th of the system, or 0.39%.

Here is a tip for watching a system that might be doing nothing, but then again might be doing something as fast as it can:

$ mpstat 3 | grep -v ' 100$'

The grep drops every line whose last column - the "idl" value - is exactly 100, so only CPUs doing real work (plus the header lines) remain. This is what you might see:

CPU minf mjf xcal  intr ithr  csw icsw migr smtx  srw syscl  usr sys  wt idl
  0    2   0   48   204    4    2    0    0    0    0   127    1   1   0  99
 32    0   0    0     2    0    3    0    0    0    0     0    0   8   0  92
 48    0   0    0     6    0    0    5    0    0    0     0  100   0   0   0
CPU minf mjf xcal  intr ithr  csw icsw migr smtx  srw syscl  usr sys  wt idl
  0    1   0   49   205    5    3    0    0    0    0   117    0   1   0  99
 32    0   0    0     4    0    5    0    0    1    0     0    0  14   0  86
 48    0   0    0     6    0    0    5    0    0    0     0  100   0   0   0
CPU minf mjf xcal  intr ithr  csw icsw migr smtx  srw syscl  usr sys  wt idl
  0    0   0   48   204    4    2    0    0    0    0   103    0   1   0  99
 32    0   0    0     3    0    4    0    0    0    0     3    0  14   0  86
 48    0   0    0     6    0    0    5    0    0    0     0  100   0   0   0

mpstat uses "usr", "sys", and "idl" to represent CPU consumption. For more on "wt" you can read my older blog.

For more on utilization, see the CPU/Processor page on solarisinternals.com

To read more about the Sun SPARC Enterprise T5440 which is announced today, go to Allan Packer's blog listing all the T5440 blogs.

Tamp - a Multi-Threaded Compression Utility

Some more details on this:

  • It uses a freely-available Lempel-Ziv-derived algorithm, optimised for compression speed
  • It was compiled using the same compiler and optimization settings for SPARC and x86.
  • It uses a compression block size of 256KB, so files smaller than this will not gain much benefit
  • I was compressing four 1GB database files. They were being reduced in size by a little over 60%.
  • Browse my blog for more details and a download

Friday Sep 26, 2008

Tamp - a Lightweight Multi-Threaded Compression Utility

UPDATE: Tamp has been ported to Linux, and is now at version 2.5

Packages for Solaris (x86 and SPARC), and a source tarball are available below.

Back Then

Many years ago (more than I care to remember), I saw an opportunity to improve the performance of a database backup. This was before the time of Oracle on-line backup, so the best choice at that time was to:

  1. shut down the database
  2. export to disk
  3. start up the database
  4. back up the export to tape

The obvious thing to improve here is the time between steps 1 and 3. We had a multi-CPU system running this database, so it occurred to me that compressing the export may speed things up.

I say "may" because it is important to remember that if the compression utility has lower throughput than the output of the database export (i.e. raw output; excluding any I/O operations to save that data) we may just end up with a different bottleneck, and not run any faster; perhaps even slower.

As it happens, this era also pre-dated gzip and other newer compression utilities, so the best choice was the venerable old "compress" - and with it, the export actually was slower. It did save some disk space, because Oracle export files are eminently compressible.

So, I went off looking for a better compression utility. I was now more interested in something that was fast. It needed to not be the bottleneck in the whole process.

What I found did the trick - it reduced the export time by 20-30%, and saved some disk space as well. The reason it saved time was that it could compress at least as fast as Oracle's "exp" utility could produce data to compress, and it eliminated some of the I/O - the real bottleneck.

More Recently

I came across a similar situation more recently - I was again doing "cold" database restores and wanted to speed them up. It was a little more challenging this time, as the restore was already parallel at the file level, and there were more files than CPUs involved (72). In the end, I could not speed up my 8-odd minute restore of ~180GB unless I already had the source files in memory (via the filesystem cache). That would only work in some cases, and is unlikely to work in the "real world", where you would not normally have this much spare memory available to the filesystem cache.

Anyway, in cases where all my compressed backup files were in memory, it took my restore down to about 3 minutes - all read I/O on the set of arrays holding my backup had been eliminated, which meant no I/O was competing with the re-writing of the database files.

Multi-Threaded Lightweight Compression

I could not even remember the name of the utility I used years ago, but I knew already that I would need something better. The computers of 2008 have multiple cores, and often multiple hardware threads per core, yet all of the compression utilities included in current Unix distributions (well, almost all of them) are still single-threaded - a very effective way to limit throughput on a multi-CPU system.

Now, there are some multi-threaded compression utilities available, if not widely available:

  • PBZIP2 is a parallel implementation of BZIP2. You can find out more here
  • PIGZ is a parallel implementation of GZIP, although it turns out it is not possible to decompress a GZIP stream with more than one thread (see the framing sketch below for why). PIGZ is available here.
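As an aside, the reason a block-oriented format can be decompressed in parallel when a gzip stream cannot is explicit framing: if every compressed block is preceded by its length, a reader can carve the file into independent work items without decompressing a single byte. Here is a hypothetical sketch of such a container in C - it is not the actual on-disk format of Tamp, PBZIP2 or anything else, and a real format would also pin down byte order and add a magic number and checksums.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct { long offset; uint32_t clen, rawlen; } blkidx_t;

/* Append one framed block: [u32 compressed len][u32 raw len][payload]. */
static void put_block(FILE *f, const void *payload, uint32_t clen, uint32_t rawlen)
{
    fwrite(&clen, sizeof clen, 1, f);
    fwrite(&rawlen, sizeof rawlen, 1, f);
    fwrite(payload, 1, clen, f);
}

/* Build an index of block offsets without decompressing anything;
 * each entry could then be handed to a different thread. */
static size_t index_blocks(FILE *f, blkidx_t *idx, size_t max)
{
    uint32_t clen, rawlen;
    size_t n = 0;
    long off = ftell(f);

    while (n < max &&
           fread(&clen, sizeof clen, 1, f) == 1 &&
           fread(&rawlen, sizeof rawlen, 1, f) == 1) {
        idx[n].offset = off;
        idx[n].clen = clen;
        idx[n].rawlen = rawlen;
        n++;
        fseek(f, (long)clen, SEEK_CUR);   /* skip the payload */
        off = ftell(f);
    }
    return n;
}

int main(void)
{
    FILE *f = tmpfile();
    blkidx_t idx[16];
    size_t i, n;

    put_block(f, "aaaa", 4, 8);      /* fake blocks: the payloads here */
    put_block(f, "bbbbbb", 6, 16);   /* stand in for compressed data  */
    rewind(f);

    n = index_blocks(f, idx, 16);
    for (i = 0; i < n; i++)
        printf("block %zu at offset %ld: %u -> %u bytes\n",
               i, idx[i].offset, idx[i].clen, idx[i].rawlen);
    fclose(f);
    return 0;
}

A gzip stream has no such frame boundaries, which is why PIGZ can parallelize compression but not decompression.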

Here is a chart showing some utilities I have tested on a 64-way Sun T5220. The place to be on this chart is toward the bottom right-hand corner.

Here is a table with some of the numbers from that chart:

Utility         Reduction (%)   Elapsed (s)
tamp                66.18            0.31
pigz --fast         71.18            1.04
pbzip2 --fast       77.17            4.17
gzip --fast         71.10           16.13
gzip                75.73           40.29
compress            61.61           18.21

To answer your question - yes, tamp really is 50-plus-times faster than "gzip --fast" (16.13 seconds versus 0.31 seconds, or roughly 52x).

Tamp

The utility I have developed is called tamp. As the name suggests, it does not aim to provide the best compression (although it is better than compress, and sometimes beats "gzip --fast").

It is however a proper parallel implementation of an already fast compression algorithm.

If you wish to use it, feel free to download it. I will be blogging in the near future on a different performance test I conducted using tamp.

Compression Algorithm

Tamp makes use of the compression algorithm from QuickLZ version 1.40. I have tested a couple of other algorithms, and the code in tamp.c can easily be modified to use a different algorithm. You can get QuickLZ from here (you will need to download the source yourself if you want to build tamp).
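To illustrate what "easily modified" might look like, here is one hypothetical shape for such a hook - this is not the actual interface in tamp.c. Every backend is reduced to a single function signature, so switching algorithms means changing one table entry; the only backend shown is a do-nothing byte copy standing in for a wrapper around QuickLZ's compress routine.

#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical pluggable-compressor interface: any algorithm that can be
 * expressed in this shape can be slotted in. Returns the compressed
 * length, or 0 if the output would not fit in dstcap bytes. */
typedef size_t (*compress_fn)(const void *src, size_t srclen,
                              void *dst, size_t dstcap);

/* Stub backend that "compresses" by copying. A QuickLZ backend would
 * call qlz_compress() here instead - see quicklz.h for its exact
 * signature and required destination-buffer headroom. */
static size_t copy_backend(const void *src, size_t srclen,
                           void *dst, size_t dstcap)
{
    if (srclen > dstcap)
        return 0;
    memcpy(dst, src, srclen);
    return srclen;
}

static const struct {
    const char *name;
    compress_fn fn;
} backends[] = {
    { "copy", copy_backend },
    /* { "quicklz", quicklz_backend },  added by wrapping qlz_compress() */
};

int main(void)
{
    char out[64];
    size_t n = backends[0].fn("hello, world", 12, out, sizeof out);

    printf("%s: 12 bytes in, %zu bytes out\n", backends[0].name, n);
    return 0;
}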

Update, Jan 2012 - changed the downloads to .zip files, as it seems blogs.oracle.com interprets a download of a file ending in .gz as a request to compress the file via gzip before sending it. That confuses most people.
