Wednesday Jan 16, 2008

Throughput computing series: System Concurrency and Parallelism

Most environments include some open source software as part of the application stack. Depending on the packages, configuring and compiling it can take a fair amount of time. To speed the install process, parallelism can easily be used to take advantage of the throughput of CMT servers.

Let us consider the following five open source packages:
  • httpd-2.2.6
  • mysql-5.1.22-rc
  • perl-5.10.0
  • postgresql-8.2.4
  • ruby-1.8.6

The following experiments time the installation of these packages in serial, parallel, and concurrent fashion.

Parallel builds

After the "configure" phase is complete, these packages are all compiled using gmake. This is where parallelism within each job can be used to speed the install process. By using the "gmake -j" option, the level of parallelism can specified for each of the packages. This can dramatically improve the overall compile time as seen below.

compile time *without* concurrency
    • Jobs were run serially, but with parallelism within each job.
    • 79% reduction in compile time at 32 threads/job.

Concurrency and Parallelism

The build processes for the various packages cannot each be parallelized perfectly. In fact, the best overall gain of any of the packages is 6x out of 32 threads. This is where concurrency comes into play: if we start all of the compiles at the same time and use parallelism as well, the overall build time drops further, as sketched below.
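
A minimal ksh sketch of this approach, using the package list above (log file names are illustrative):

    #!/bin/ksh
    # Start all five builds at once, each with 32-way parallelism.
    for pkg in httpd-2.2.6 mysql-5.1.22-rc perl-5.10.0 postgresql-8.2.4 ruby-1.8.6
    do
      ( cd ${pkg} && gmake -j 32 > ../${pkg}.log 2>&1 ) &
    done
    wait    # block until the last build finishes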

compile times with concurrency and parallelism
    • All 5 jobs were run concurrently, with 1 and with 32 threads per job.
    • 88% overall reduction in compile time from serial to parallel with concurrency.
    • 42% reduction in compile time over parallel jobs run serially.

Load it up!

Hopefully, this helps to better describe how to achieve better system throughput through parallelism and concurrency. Sun's CMT servers are multi-threaded machines capable of a high level of throughput. Whether you are building packages from source or installing pre-built packages, you have to load up the machine to see that throughput.

Monday Jan 14, 2008

Throughput computing series: Parallel commands (pbzip2)

In this installment of the throughput computing series, I will explore how to get parallelism from the system point of view. The system administrator who first begins to configure the system will start forming impressions from the moment the shrink wrap comes off the server. First impressions and potential parallel options will be explored in this entry.

Off with the shrink-wrap... on with the install

Unfortunately, most installation processes involve a fair number of single-threaded procedures. As mentioned before, the CMT processor is designed to optimize the overall throughput of a server, often to the detriment of single-threaded processes. There are several schools of thought on this. The first is: why bother? The install happens only once, so it really doesn't matter. That is true for most typical environments, but the current trend toward grid computing and virtualization often makes "time to provision" a critical factor. To help speed provisioning, there are some things that can be done with parallelized commands and concurrency.

pbzip2 to the rescue

A very common, time-consuming part of provisioning is the packing and unpacking of software packages. Commonly, gzip or bzip2 is used to unpack data and packages, but neither is a parallel program. Fortunately, a parallel version of bzip2 is available: "pbzip2" lets you specify the level of parallelism to speed compression and decompression.
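
For example (archive name illustrative), the -p option selects the number of threads:

    pbzip2 -p8 packages.tar           # compress with 8 threads
    pbzip2 -d -p8 packages.tar.bz2    # decompress with 8 threads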

I spent a little time experimenting with pbzip2 after repeated interactions that always seemed to come back to "gzip" performance. I decided to run some quick benchmarks with pbzip2 on both a T2000 (8 cores @ 1.4GHz) and a v20z (2 AMD cores @ 2.2GHz).

pbzip2 benchmark

The setup used a 135MB text file: the trade_history.txt file created with the EGen program distributed by the TPC council for the TPC-E benchmark. The file was compressed using the following simple test script:
    #!/bin/ksh
    # Time pbzip2 compression and decompression at increasing thread counts.
    for i in 1 2 4 8 16 32
    do
      print "pbzip2 compress: ${i} threads\n"
      timex pbzip2 -p${i} small.txt
      print "pbzip2 decompress: ${i} threads\n"
      timex pbzip2 -d -p${i} small.txt.bz2
    done
    
    
[Charts: T2000 pbzip2 throughput]


At lower thread counts, the v20z with its two AMD cores does better. This is expected, since the AMD x64 processor is optimized for single-threaded performance. But as you crank up the thread count, the T2000 starts to really shine. This demonstrates my main point: to push massive throughput within a single application, you need lots of threads and parallelism.

    ...The next entry will explore how concurrency and parallelism can help improve build times.

Tuesday Jan 08, 2008

Throughput computing series: Defining Throughput Computing

This is the first installment in a series of entries that discuss different aspects of throughput computing. This series aims to improve the understanding of how SPARC CMT servers can be utilized to increase business throughput. Let's start with a definition.

What is throughput computing?

Oxford American defines "throughput" as:
    "The amount of materials or items passing through a system or process"
In computing terms, throughput is the amount of "work" that can be done in a given period of time. "Orders per second", "paychecks per hour", "queries per second", and "webpages per minute" are all throughput metrics; each measures how much work a system can complete in a fixed interval.

Misguided throughput metrics

  • Latency or Response time is not a throughput metric.
  • CPU % is not a throughput metric.
  • IO wait% is definitely not a throughput metric... or anything other than a measure of idle time :)
  • The "Load average" of a system is not a measure of throughput. OK... you get the idea.

Job level parallelism

Job level parallelism is about taking a single job and breaking it into multiple pieces. Say you have 10,000 letters to put stamps on. If it takes 3 seconds per letter, you would need 30,000 seconds or more than 8 hours to complete the task. Now consider you are a teacher and you bring the letters to class. There are 20 students in the class so each student will place stamps on 500 letters. With only 500 letters to complete per student, the job can be done in only 1500 seconds or 25 minutes.

In terms of throughput, one person processes one letter every 3 seconds, or 60/3 = 20 letters per minute... and a class of 20 students can process 20*20 = 400 letters per minute.
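
The same pattern maps directly to the shell: GNU xargs can fan a task list out to a fixed number of workers. Here stamp_letter and letters.txt are hypothetical stand-ins for the per-item work and the work list:

    # Process the list with up to 20 concurrent workers, one letter each.
    xargs -P 20 -n 1 stamp_letter < letters.txt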

Concurrency

Concurrency comes from running multiple jobs or applications together on a system. A job may be single-threaded or use multiple threads of execution as discussed above, and the jobs need not be related or even come from the same application. To further increase concurrency, virtualization is often used to run multiple OS images on the same machine in order to take advantage of modern multi-threaded systems.

Putting it all together with Chip Multi-Threading

Denis Sheahan sums up Sun's throughput computing initiative in his paper on CMT as:
    "Sun’s Throughput Computing initiative represents a new paradigm in system design, with a focus toward maximizing the overall throughput of key commercial workloads rather than the speed of a single thread of execution. Chip multi-threading (CMT) processor technology is key to this approach, providing a new thread-rich environment that drives application throughput and processor resource utilization while effectively masking memory access latencies."
The salient point is that you must have an application with multiple threads of execution in order to take advantage of CMT. Those threads can come from a single job that has been parallelized or from multiple jobs of different types running concurrently.

Monday Jan 07, 2008

Throughput computing series... getting the most out of your SPARC CMT server.

I was thinking about developing a CMT throughput benchmark, but it occurred to me that there are many *good* examples of throughput already out there among the benchmarks we publish... just look at the bmseer postings on the recent T2 results and the long line of performance records on the T2000.

The biggest disconnect with CMT servers is a misunderstanding of throughput and multi-threaded applications. I made a posting last year which touched on some initial impressions, but I thought it would be a good idea to dig in further.

This entry is to kick off a series of postings that explore different aspects of throughput computing in a CMT environment. The rough outline is as follows:
    Overview
    • Definition of Throughput computing, multi-threading, and concurrency.
    Explore system parallelism
    • Unix commands and parallel options
    • Concurrent builds/compiles.
    • Configuring the system for parallelism
    Configuring applications for parallelism
    • Concurrency vs multi-threading
    • Single-Threaded jobs
    Database parallelism with Oracle
    • Parallel loader and datapump
    • Index build parallelism
    • Concurrent processing in Oracle
    • Configuring Oracle for CMT servers