Monday Apr 06, 2009

New Feature for Sysbench - Generate Transactions at a Steady Rate

Perhaps I am becoming a regular patcher of sysbench...

I have developed a new feature for sysbench - the ability to generate transactions at a steady rate determined by the user.

This mode is enabled using the following two new options:
--tx-rate
Rate at which sysbench should attempt to send transactions to the database, in transactions per second. This is independent of num-threads. The default is 0, which means send transactions as fast as possible (i.e., do not pause between the end of one transaction and the start of the next). It is also independent of other options like --oltp-user-delay-min and --oltp-user-delay-max, which add think time between the individual statements generated by sysbench.
--tx-jitter
Magnitude of the variation in transaction start times, in microseconds. The default is zero, which asks each thread to vary its transaction period by up to 10 percent (i.e. 10^6 / tx-rate * num-threads / 10). A standard pseudo-random number generator is used to decide each transaction start time.
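
As an example, asking 16 threads to deliver a combined 100 transactions per second for 10 minutes might look something like this (a hypothetical invocation - the OLTP and MySQL options are standard sysbench 0.4 options; only --tx-rate and --tx-jitter come from the patch):

$ sysbench --test=oltp --mysql-user=sbtest --mysql-db=sbtest \
    --num-threads=16 --max-time=600 --max-requests=0 \
    --tx-rate=100 --tx-jitter=5000 run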

My need for these options is simple - I want to generate a steady load for my MySQL database. It is one thing to measure the maximum achievable throughput as you change your database configuration, hardware, or num-threads. I am also interested in how the system's (or just mysqld's) utilization changes, at the same transaction rate, when I change other variables.

An upcoming post will demonstrate a use of sysbench in this mode.

For the moment, my new feature can be added to sysbench 0.4.12 (and probably many earlier versions) via this patch. These changes are tested on Solaris, but I chose only APIs that are documented as also being available on Linux. I have also posted my patch on SourceForge as a sysbench feature enhancement request.

Wednesday Jan 14, 2009

You Learn Something Every Day

Just learned how to save about a bazillion keystrokes over the remainder of my file-editing & programming life.

This is because I just learned that C-M-l (or Control-Meta-l, where "Meta" is the "Diamond" key on a Sun keyboard) is the (X)Emacs key sequence for "switch-to-other-buffer".

I have been doing this via Control-x, "b", Enter, or in other words, switch-buffer, then pressing Enter to accept the default, which has the same definition as "other-buffer". And I do it all the time.

D'oh...

By the way, I have been using (X)Emacs for approximately 20 years. I was lucky enough to find it when I first started on Unix, because I felt Vi was not powerful enough. Of course, any mention of Emacs and Vi in the same breath is likely to start a war, so I apologize to those who are not interested...

Wednesday Dec 17, 2008

MySQL 5.1 Memory Allocator Bake-Off

After getting sysbench running properly with a scalable memory allocator (see last post), I can now return to what I was originally testing - what memory allocator is best for the 5.1 server (mysqld).

This stems from studies I have made of some patches that have been released by Google. You can read about the work Google has been doing here.

I decided I wanted to test a number of configurations based on the MySQL community source, 5.1.28-rc, namely:

  • The baseline - no Google SMP patch, default memory allocator (5.1.28-rc)
  • With Google SMP patch, mem0pool enabled, no custom malloc (pool)
  • With Google SMP patch, mem0pool enabled, linked with mtmalloc (pool-mtmalloc)
  • With Google SMP patch, mem0pool disabled, linked with tcmalloc (TCMalloc)
  • With Google SMP patch, mem0pool disabled, linked with umem (umem)
  • With Google SMP patch, mem0pool disabled, linked with mtmalloc (mtmalloc)

Here are some definitions, by the way:

  • mem0pool - InnoDB's internal "memory pools" feature, found in mem0pool.c. (NOTE: even if this is enabled, other parts of the server will not use this allocator - they will use whatever allocator is linked with mysqld.)
  • tcmalloc - the "libtcmalloc_minimal.so.0.0.0" that is built from google-perftools-0.99.2
  • Hoard - the Hoard memory allocator, version 3.7.1
  • umem - the libumem library (included with Solaris)
  • mtmalloc - the mtmalloc library (included with Solaris)
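
For reference, there are two common ways to put mysqld onto a different allocator on Solaris - relink it (as was done for the builds above), or interpose the library at run time for a quick experiment. A sketch of the run-time approach (library paths are the stock Solaris ones):

# interpose libumem under a 64-bit mysqld without relinking
LD_PRELOAD_64=/usr/lib/64/libumem.so.1; export LD_PRELOAD_64
./bin/mysqld_safe &

For a permanent change, the allocator is instead added at link time (e.g. LIBS="-lmtmalloc" when configuring the build).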

My test setup was a 16-CPU Intel system, running Solaris Nevada build 100. I chose to use only an x86 platform, as I was not able to build tcmalloc on SPARC. I also chose to run with the database in TMPFS, and with an InnoDB buffer pool smaller than the database size. This was to ensure that we would be CPU-bound if possible, rather than slowed by I/O.

Where I needed to build a package myself (not necessary for mtmalloc or umem), I used GCC 4.3.1, except for Hoard, which seemed to prefer the Sun Studio 11 C compiler (over Sun Studio 12 or GCC).

My test was a sysbench OLTP read-write run, of 10 minutes. Each series of runs at different thread counts is preceded by a database re-build and 20 minute warmup. Here are my throughput results for 1-32 SysBench threads, in transactions per second:

These results show that while the Google SMP changes are a benefit, the disabling of InnoDB's mem0pool does not seem to provide any further benefit for my configuration. My results also show that TCMalloc is not a good allocator for this workload on this platform, and Hoard is particularly bad, with significant negative scaling above 16 threads.

The remaining configurations are pretty similar, with mtmalloc and umem a little ahead at higher thread counts.

Before I get a ton of comments and e-mails, I would like to point out that I did some verification of my TCMalloc builds, as the results I got surprised me. I verified that it was using the supplied assembler for atomic routines, and I built it with optimization (-O3) and without.

I also discovered that TCMalloc was emitting this diagnostic when mysqld was starting up:

src/tcmalloc.cc:151] uname failed assuming no TLS support (errno=0)

I rectified this with a change in tcmalloc.cc, and called this configuration "TCMalloc -O3, TLS". It is shown against the other two configurations below.

I often like to have a look at the CPU cost of different configurations. This helps to demonstrate headroom, and whether different throughput results may be due to less efficient code or something else. The chart below lists what I found - note that this is system-wide CPU (user & system) utilization, and I was running my SysBench client on the same system.

Lastly, I did do one other comparison, which was to measure how much each memory allocator affected the virtual size of mysqld. I did not expect much difference, as the most significant consumer - the InnoDB buffer pool - should dominate with large, long-lived allocations. This was indeed the case, and memory consumption grew little after the initial start-up of mysqld. The only allocator that then caused any noticeable change was mtmalloc, which for some reason made the heap grow by 35MB following a 5 minute run (it was originally 1430 MB).


Friday Dec 12, 2008

Scalability and Stability for SysBench on Solaris

My mind is playing "Suffering Succotash..."

I have been working on MySQL performance for a while now, and the team I am in have discovered that SysBench could do with a couple of tweaks for Solaris.

Sidebar - sysbench is a simple "OLTP" benchmark which can test multiple databases, including MySQL. Find out all about it here, but go to the download page to get the latest version.

To simulate multiple users sending requests to a database, sysbench uses multiple threads. This leads to two issues we have identified with SysBench on Solaris, namely:

  • The implementation of random() is explicitly identified as unsafe in multi-threaded applications on Solaris. My team has found this is a real issue, with occasional core-dumps happening to our multi-threaded SysBench runs.
  • SysBench does quite a bit of memory allocation, and could do with a more scalable memory allocator.

Neither of these issues is necessarily relevant only to Solaris, by the way.

Luckily there are simple solutions. We can fix the random() issue by using lrand48() - in effect a drop-in replacement. Then we can fix the memory allocator by simply choosing to link with a better allocator on Solaris.
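
For the allocator, you do not even need to rebuild sysbench to try this out - on Solaris the library can simply be interposed when launching it, or linked in when building it. A sketch, using libumem:

$ LD_PRELOAD=/usr/lib/libumem.so.1 ./sysbench --test=oltp --num-threads=16 ... run

(Use /usr/lib/64/libumem.so.1 for a 64-bit binary, or add -lumem at link time to make the change permanent.)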

To help with a decision on memory allocator, I ran a few simple tests to check the performance of the two best-known scalable allocators available in Solaris. Here are the results ("libc" is the default memory allocator):

Throughput

To see the differences more clearly, let's do a relative comparison, using "umem" (A.K.A. libumem) as the reference:

Relative Throughput

So - the default allocator delivers around 20% less throughput at 16 or 32 threads. There is very little difference at 1 thread, too (where the default memory allocator should be the one with the lowest synchronization overhead).

Where you see another big difference is CPU cost per transaction:

CPU Cost

I will just point out two other reasons why I would recommend libumem:

I have logged these two issues as sysbench bugs:

However, if you can't wait for the fixes to be released, try these:

Sunday Oct 12, 2008

The Seduction of Single-Threaded Performance

The following is a dramatization. It is used to illustrate some concepts regarding performance testing and architecting of computer systems. Artistic license may have been taken with events, people and time-lines. The performance data I have listed is real and current however.

I got contacted recently by the Systems Architect of latestrage.com. He has been a happy Sun customer for many years, but was a little displeased when he took delivery of a beta test system of one of our latest UltraSPARC servers.

"Not very fast", he said.

"Is that right, how is it not fast?", I inquired eagerly.

"Well, it's a lot slower than one of the LowMarginBrand x86 servers we just bought", he trumpeted indignantly.

"How were you measuring their speed?", I asked, getting wary.

"Ahh, simple - we were compressing a big file. We were careful to not let it be limited by I/O bandwidth or memory capacity, though..."

What then ensues is a discussion about what was being used to test "performance", whether it matches latestrage.com's typical production workload and further details about architecture and objectives.

Data compression utilities are a classic example of a seemingly mature area in computing. Lots of utilities, lots of different algorithms, a few options in some utilities, reasonable portability between operating systems, but one significant shortcoming - there is no commonly available utility that is multi-threaded.

Let me pretend I am still in this situation of using compression to evaluate system performance, and I am wanting to compare the new Sun SPARC Enterprise T5440 with a couple of current x86 servers. Here is my own first observation about such a test, using a single-threaded compression utility:

Single-Threaded Throughput

Now if you browse down to older blog entries, you will see I have written my own multi-threaded compression utility. It consists of a thread to read data, as many threads to compress or decompress data as demand requires, and one thread to write data. Let me see whether I can fully exploit the performance of the T5440 with Tamp...

Well, this turned out to be not quite the end of the story. I designed my tests with my input file located on a TMPFS (in-memory) filesystem, and with the output being discarded. This left the system focusing on the computation of compression, without being obscured by I/O. This is the same objective that latestrage.com had.

What I found on the T5440 was that Tamp would not use more than 12-14 threads for compression - it was limited by the speed at which a single thread could read data from TMPFS.

So, I chose to use another dimension by which we can scale up work on a server - add more sources of workload. This is represented by multiple "Units of Work" in my chart below.

After completing my experiments I discovered that, as expected, the T5440 may disappoint if we restrict ourselves to a workload that can not fully utilize the available processing capacity. If we add more work however, we will find it handily surpasses the equivalent 4-socket quad-core x86 systems.

Multi-Threaded Throughput

Observing Single-Thread Performance on a T5440

A little side-story, and another illustration of how inadequate a single-threaded workload is at determining the capability of the T5440. Take a look at the following output from vmstat, and answer this question:

Is this system "maxed out"?

(Note: the "us", "sy" and "id" columns list how much CPU time is spent in User, System and Idle modes, respectively)

 kthr      memory            page            disk          faults      cpu
 r b w   swap  free  re  mf pi po fr de sr d0 d1 d2 d3   in   sy   cs us sy id 
 0 0 0 1131540 12203120 1  8  0  0  0  0  0  0  0  0  0 3359 1552 419  0  0 100 
 0 0 0 1131540 12203120 0  0  0  0  0  0  0  0  0  0  0 3364 1558 431  0  0 100 
 0 0 0 1131540 12203120 0  0  0  0  0  0  0  0  0  0  0 3366 1478 420  0  0 99 
 0 0 0 1131540 12203120 0  0  0  0  0  0  0  0  0  0  0 3354 1500 441  0  0 100 
 0 0 0 1131540 12203120 0  0  0  0  0  0  0  0  0  0  0 3366 1549 460  0  0 99 

Well, the answer is yes. It is running a single-threaded process, which is using 100% of one CPU. For the sake of my argument we will say the application is the critical application on the system. It has reached its highest throughput and is therefore "maxed out". You see, when one CPU represents less than 0.5% of the entire CPU capacity of a system, then a single saturated CPU will be rounded down to 0%. In the case of the T5440, one CPU is 1/256th, or 0.39%.

Here is a tip for watching a system that might be doing nothing, but then again might be doing something as fast as it can:

$ mpstat 3 | grep -v ' 100$'

This is what you might see:

CPU minf mjf xcal  intr ithr  csw icsw migr smtx  srw syscl  usr sys  wt idl
  0    2   0   48   204    4    2    0    0    0    0   127    1   1   0  99
 32    0   0    0     2    0    3    0    0    0    0     0    0   8   0  92
 48    0   0    0     6    0    0    5    0    0    0     0  100   0   0   0
CPU minf mjf xcal  intr ithr  csw icsw migr smtx  srw syscl  usr sys  wt idl
  0    1   0   49   205    5    3    0    0    0    0   117    0   1   0  99
 32    0   0    0     4    0    5    0    0    1    0     0    0  14   0  86
 48    0   0    0     6    0    0    5    0    0    0     0  100   0   0   0
CPU minf mjf xcal  intr ithr  csw icsw migr smtx  srw syscl  usr sys  wt idl
  0    0   0   48   204    4    2    0    0    0    0   103    0   1   0  99
 32    0   0    0     3    0    4    0    0    0    0     3    0  14   0  86
 48    0   0    0     6    0    0    5    0    0    0     0  100   0   0   0

mpstat uses "usr", "sys", and "idl" to represent CPU consumption. For more on "wt" you can read my older blog.

For more on utilization, see the CPU/Processor page on solarisinternals.com

To read more about the Sun SPARC Enterprise T5440 which is announced today, go to Allan Packer's blog listing all the T5440 blogs.

Tamp - a Multi-Threaded Compression Utility

Some more details on this:

  • It uses a freely-available Lempel-Ziv-derived algorithm, optimised for compression speed
  • It was compiled using the same compiler and optimization settings for SPARC and x86.
  • It uses a compression block size of 256KB, so files smaller than this will not gain much benefit
  • I was compressing four 1GB database files. They were being reduced in size by a little over 60%.
  • Browse my blog for more details and a download

Friday Sep 26, 2008

Tamp - a Lightweight Multi-Threaded Compression Utility

UPDATE: Tamp has been ported to Linux, and is now at version 2.5

Packages for Solaris (x86 and SPARC), and a source tarball are available below.

Back Then

Many years ago (more than I care to remember), I saw an opportunity to improve the performance of a database backup. This was before the time of Oracle on-line backup, so the best choice at that time was to:

  1. shut down the database
  2. export to disk
  3. start up the database
  4. back up the export to tape

The obvious thing to improve here is the time between steps 1 and 3. We had a multi-CPU system running this database, so it occurred to me that perhaps compressing the export may speed things up.

I say "may" because it is important to remember that if the compression utility has lower throughput than the output of the database export (i.e. raw output; excluding any I/O operations to save that data) we may just end up with a different bottleneck, and not run any faster; perhaps even slower.

As it happens, this era also pre-dated gzip and other newer compression utilities. So, using the venerable old "compress", it actually was slower. It did save some disk space, because Oracle export files are eminently compressible.

So, I went off looking for a better compression utility. I was now more interested in something that was fast. It needed to not be the bottleneck in the whole process.

What I found did the trick - It reduced the export time by 20-30%, and saved some disk space as well. The reason why it saved time was that it was able to compress at least as fast as Oracle's "exp" utility was able to produce data to compress, and it eliminated some of the I/O - the real bottleneck.

More Recently

I came across a similar situation more recently - I was again doing "cold" database restores and wanted to speed them up. It was a little more challenging this time, as the restore was already parallel at the file level, and there were more files than CPUs involved (72). In the end, I could not speed up my 8-odd minute restore of ~180GB, unless I already had the source files in memory (via the filesystem cache). That would only work in some cases, and is unlikely to work in the "real world", where you would not normally want this much spare memory to be available to the filesystem.

Anyway, it took my restore down to about 3 minutes in cases where all my compressed backup files were in memory - this was because it had now eliminated all read I/O from the set of arrays holding my backup. This meant I had eliminated all competing I/O's from the set of arrays where I was re-writing the database files.

Multi-Threaded Lightweight Compression

I could not even remember the name of the utility I used years ago, but I knew already that I would need something better. The computers of 2008 have multiple cores, and often multiple hardware threads per core. All of the current included-in-the-distro compression utilities (well, almost all utilities) for Unix are still single-threaded - a very effective way to limit throughput on a multi-CPU system.

Now, there are some multi-threaded compression utilities available, if not widely available:

  • PBZIP2 is a parallel implementation of BZIP2. You can find out more here
  • PIGZ is a parallel implementation of GZIP, although it turns out it is not possible to decompress a GZIP stream with more than one thread. PIGZ is available here.

Here is a chart showing some utilities I have tested on a 64-way Sun T5220. The place to be on this chart is toward the bottom right-hand corner.

Here is a table with some of the numbers from that chart:

Utility         Reduction (%)   Elapsed (s)
tamp                66.18            0.31
pigz --fast         71.18            1.04
pbzip2 --fast       77.17            4.17
gzip --fast         71.10           16.13
gzip                75.73           40.29
compress            61.61           18.21

To answer your question - yes, tamp really is 50-plus-times faster than "gzip --fast".

Tamp

The utility I have developed is called tamp. As the name suggests, it does not aim to provide the best compression (although it is better than compress, and sometimes beats "gzip --fast").

It is however a proper parallel implementation of an already fast compression algorithm.

If you wish to use it, feel free to download it. I will be blogging in the near future on a different performance test I conducted using tamp.

Compression Algorithm

Tamp makes use of the compression algorithm from QuickLZ version 1.40. I have tested a couple of other algorithms, and the code in tamp.c can easily be modified to use a different algorithm. You can get QuickLZ from here (you will need to download the source yourself if you want to build tamp).

Update, Jan 2012 - changed the downloads to .zip files, as it seems blogs.oracle.com interprets a download of a file ending in .gz as a request to compress the file via gzip before sending it. That confuses most people.


Saturday Sep 06, 2008

Installing Solaris from a USB Disk

I regularly do a full install of a Solaris development release onto my laptop. Why full? Well, that is another story for another day, but it is not because the Solaris upgrade software (including Live Upgrade) is lacking.

I decided I no longer see the sense in burning a DVD to do this, and I know that Solaris can boot from a USB device.

I used James C. Liu's blog as an inspiration, but the following is what I have found worked well to boot an install image located on a USB disk. You may also be interested in the Solaris Ready USB FAQ.

NOTE: This procedure only has a chance of working if you have a version of Solaris 10 or later that uses GRUB and has a USB driver that works with your drive.

  1. Set up an 8GB "Solaris2" partition on the USB drive using fdisk. Make it the active partition.
  2. Set up a UFS slice using all but the first cylinder of that 8GB as slice 0 using format. Run newfs. Mount.

    The first cylinder ends up being dedicated to a "boot" slice. I do not know what it is used for; perhaps it avoids overwriting the PC-style partition table and boot program.

  3. Mount the DVD ISO using lofiadm/mount (hint: google lofiadm solaris iso; a sketch of the commands appears after this list)
  4. Use cpio to copy the contents of the DVD ISO into the UFS partition on the USB drive, e.g.:

    # cd <rootdir of DVD ISO>
    # find . | cpio -pdum <rootdir of USB filesystem>
    

  5. Run installgrub to install the stage1 & stage2 files from the DVD ISO onto the USB drive. If the filesystem on your USB drive is mounted as /dev/dsk/c2t0d0s0, for example, then use:

    # cd <rootdir of DVD ISO>
    # /sbin/installgrub boot/grub/stage1 boot/grub/stage2 /dev/rdsk/c2t0d0s0
    

  6. Boot off the USB disk. It uses the same GRUB install that would be on a DVD.
  7. Now, I can not remember whether the next step was either:

    • Wait for the install to fail (unable to find distribution), or:

    • Exit/quit out of installation

    ...but you need to get to a shell.

  8. Manually mount the USB partition at /cdrom

    NOTE: your controller numbers are probably not as you expect at this point, so double-check what you are mounting.

  9. Re-start the install
    I used "suninstall". I think you can use "solaris-install" instead.

The install seemed to run fine from there, however it went through a sysconfig stage after the reboot.

Then I ended up with one teeny problem - my X server would not start.

I discovered some issues with fonts, and then decided to check the install log. I discovered a number of packages had reported status like:


Installation of <SUNWxwfnt> partially failed.
19997 blocks
pkgadd: ERROR: class action script did not complete successfully

Installation of <SUNWxwcft> partially failed.

Installation of <SUNW5xmft> partially failed.

Installation of <SUNW5ttf> partially failed.

Installation of <SUNWolrte> partially failed.

Installation of <SUNWhttf> partially failed.

I have since pkgrm/pkgadd-ed these packages (using -R while running the laptop on an older release, with the new boot environment mounted), and all is now well.
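
For anyone wanting to do the same, the fix-up commands look roughly like this (illustrative only - package names, the alternate-root mount point and the media path will vary):

# pkgrm -R /a SUNWxwfnt
# pkgadd -R /a -d /cdrom/cdrom0/Solaris_11/Product SUNWxwfnt

Here /a is the new boot environment mounted while running the older release, and -d points at the Product directory on the same install media.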

Thursday Sep 04, 2008

Building GCC 4.x on Solaris

I needed to build GCC 4.3.1 for my x86 system running a recent development build of Solaris. I thought I would share what I discovered, and then improved on.

I started with Paul Beach's Blog on the same topic, but I knew it had a couple of shortcomings, namely:

  • No mention of a couple of pre-requisites that are mentioned in the GCC document Prerequisites for GCC
  • A mysterious "cannot compute suffix of object files" error in the build phase
  • No resolution of how to generate binaries that have a useful RPATH (see Shared Library Search Paths for a discussion on the importance of RPATH).

I found some help on this via this forum post, but here is my own cheat sheet.

  1. Download & install GNU Multiple Precision Library (GMP) version 4.1 (or later) from sunfreeware.com. This will end up located in /usr/local.
  2. Download, build & install MPFR Library version 2.3.0 (or later) from mpfr.org. This will also end up in /usr/local.
  3. Download & unpack the GCC 4.x base source (the one of the form gcc-4.x.x.tar.gz) from gcc.gnu.org
  4. Download my example config_make script, edit as desired (you probably want to change OBJDIR and PREFIX, and you may want to add other configure options).
  5. Run the config_make script
  6. "gmake install" as root (although I instead create the directory matching PREFIX, make it writable by the account doing the build, then "gmake install" using that account).

You should now have GCC binaries that look for the shared libraries they need in /usr/sfw/lib, /usr/local/lib and PREFIX/lib, without anyone needing to set LD_LIBRARY_PATH. In particular, modern versions of Solaris will have a libgcc_s.so in /usr/sfw/lib.

If you copy your GMP and MPFR shared libraries (which seem to be needed by parts of the compiler) into PREFIX/lib, you will also have a self-contained directory tree that you can deploy to any similar system more simply (e.g. via rsync, tar, cpio, "scp -pr", ...)

Monday Apr 21, 2008

Comparing the UltraSPARC T2 Plus to Other Recent SPARC Processors

Update - now that the UltraSPARC T2 Plus has been released, it is available in several new Sun servers. Allan Packer has published a new collection of blog entries that provide lots of detail.

Here is my updated table of details comparing a number of current SPARC processors. I can not guarantee 100% accuracy on this, but I did quite a bit of reading...

Name UltraSPARC IV+ SPARC64 VI UltraSPARC T1 UltraSPARC T2 UltraSPARC T2 Plus
Codename Panther Olympus-C Niagara Niagara 2 Victoria Falls
Physical
process 90nm 90nm 90nm 65nm 65nm
die size 335 mm2 421 mm2 379 mm2 342 mm2
pins 1368 1933 1831
transistors 295 M 540 M 279 M 503 M
clock 1.5 – 2.1 GHz 2.15 – 2.4 GHz 1.0 – 1.4 GHz 1.0 – 1.4 GHz 1.2 – 1.4 GHz
Architecture
cores 2 2 8 8 8
threads/core 1 2 4 8 8
threads/chip 2 4 32 64 64
FPU : IU 1 : 1 1 : 1 1 : 8 1 : 1 1 : 1
integration 8 × small crypto 8 × large crypto, PCI-E, 2 × 10Gbe 8 × large crypto, PCI-E, multi-socket coherency
virtualization domains1 hypervisor
L1 i$ 64K/core 128K/core 16K/core
L1 d$ 64K/core 128K/core 8K/core
L2 cache (on-chip) 2MB, shared, 4-way, 64B lines 6MB, shared, 10-way, 256B lines 3MB, shared, 12-way, 64B lines 4MB, shared, 16-way, 64B lines
L3 cache 32MB shared, 4-way, tags on-chip, 64B lines n/a n/a
MMU on-chip
on-chip, 4 × DDR2 on-chip, 4 × FB-DIMM on-chip, 2 × FB-DIMM
Memory Models TSO TSO TSO, limited RMO
Physical Address Space 43 bits 47 bits 40 bits
i-TLB 16 FA + 512 2-way SA 64 FA
d-TLB 16 FA + 512 2-way SA 64 FA 128 FA
combined TLB 32 FA + 2048 2-way SA
Page sizes 8K, 64K, 512K, 4M, 32M, 256M 8K, 64K, 512K, 4M, 32M, 256M 8K, 64K, 4M, 256M
Memory bandwidth2 (GB/sec) 9.6 25.6 60+ 32

Footnotes

  • 1 - domains are implemented above the processor/chip level
  • 2 - theoretical peak - does not take cache coherency or other limits into account

Glossary

  • FA - fully-associative
  • FPU - Floating Point Unit
  • i-TLB - Instruction Translation Lookaside Buffer (d means Data)
  • IU - Integer (execution) Unit
  • L1 - Level 1 (similarly for L2, L3)
  • MMU - Memory Management Unit
  • RMO - Relaxed Memory Order
  • SA - set-associative
  • TSO - Total Store Order


Tuesday Apr 08, 2008

What Drove Processor Design Toward Chip Multithreading (CMT)?

I thought of a way of explaining the benefit of CMT (or more specifically, interleaved multithreading - see this article for details) using an analogy the other day. Bear with me as I wax lyrical on computer history...

Deep back in the origins of the computer, there was only one process (as well as one processor). There was no operating system, so in turn there were no concepts like:

  • scheduling
  • I/O interrupts
  • time-sharing
  • multi-threading

What am I getting at? Well, let me pick out a few of the advances in computing, so I can explain why interleaved multithreading is simply the next logical step.

The first computer operating systems (such as GM-NAA I/O) simply replaced (automated) some of the tasks that were undertaken manually by a computer operator - load a program, load some utility routines that could be used by the program (e.g. I/O routines), record some accounting data at the completion of the job. They did nothing during the execution of the job, but they had nothing to do - no other work could be done while the processor was effectively idle, such as when waiting for an I/O to complete.

Then multi-programming operating systems were developed. Suddenly we had the opportunity to use the otherwise wasted CPU resource while one program was stalled on an I/O. In this case the O.S. would switch in another program. Generically this is known as scheduling, and operating systems developed (and still develop) more sophisticated ways of sharing out the CPU resources in order to achieve the greatest/fairest/best utilization.

At this point we had enshrined in the OS the idea that CPU resource was precious, not plentiful, and there should be features designed into the system to minimize its waste. This would reduce or delay the need for that upgrade to a faster computer as we continued to add new applications and features to existing applications. This is analogous to conserving water to offset the need for new dams & reservoirs.

With CMT, we have now taken this concept into silicon. If we think of a load or store to or from main (uncached) memory as a type of I/O, then thread switching in interleaved multithreading is just like the idea of a voluntary context switch. We are not giving up the CPU for the duration of the "I/O", but we are giving up the execution unit, knowing that if there is another thread that can use it, it will.

In a way, we are delaying the need to increase the clock rate or pipe-lining abilities of the cores by taking this step.

Now the underlying details of the implementation can be more complex than this (and they are getting more complex as we release newer CPU architectures like the UltraSPARC T2 Plus - see the T5140 Systems Architecture Whitepaper for details), but this analogy to I/O's and context switches works well for me to understand why we have chosen this direction.

To continue to throw engineering resources at faster, more complicated CPU cores seems to be akin to the idea of the mainframe (the closest descendant to early computers) - just make it do more of the same type of workload.

See here for the full collection of UltraSPARC T2 Plus blogs

Thursday Feb 21, 2008

Margins in Consumer Telephony

Here is a little observation on telephone margins that is dear to my heart. Below is a list of rates (in US dollars per minute, taxes and other fees not shown) for various methods of calling from the US to a land-line in Australia. The last four options use VoIP.

Source      Carrier            Add-on Plan               Add-on $/month   Rate
Land-line   AT&T               none                      -                $4.00 (peak)
Land-line   AT&T               none                      -                $2.76 (off-peak)
Mobile      AT&T               none                      -                $3.49
Mobile      AT&T               World Connect             $3.99            $0.09
Land-line   AT&T               Occasional Calling        $1.00            $1.75
Land-line   AT&T               Worldwide Value Calling   $5.00            $0.09
Land-line   Time-Warner Cable  -                         -                $0.10
Land-line   Comcast            -                         -                $0.09
Land-line   Vonage             -                         -                $0.05
Land-line   AT&T CallVantage   -                         -                $0.04
Land-line   Callcentric        -                         -                $0.0231
Land-line   CallWithUs         -                         -                $0.0148

As you may see, there is a 27000% range in these numbers. Even with that one carrier there is a 100x range. Plenty of opportunity for profit.

Hopefully it is useful to be aware there can be some very steep rates for ex-pat Aussies to call home if they are away from their preferred carrier.

I have been quite satisfied with CallWithUs, if anyone is interested. They even have a call-back feature if I want to call from my mobile.

While I'm on the topic, I should also mention this helpful message I got from my wireless (mobile) provider (although they are no longer my provider):

When you're on the go and don't have the info you need, AT&T 411 is here to help. Whether you're searching for a business or residence - dial 4-1-1 to get quick access to phone numbers and addresses. Plus, with AT&T 411 you can find movie times, driving directions and more. And it's just $1.79 per call plus standard airtime charges.*

Thanks for the reminder - I will be vigilant to avoid that $1.79 charge, and stick to 1-800-FREE411...

Wednesday Feb 13, 2008

Utilization - Can I Have More Accuracy Please?

Just thought I would share another Ruby script - this one takes the output of mpstat and makes it more like the output of mpstat -a, only the values are floating point. I wrote it to process mpstat output that I got from a customer. It can also cope with the date (in Unix ctime format) being prepended to every line. Here is some sample output:

CPUs minf mjf xcal  intr ithr  csw icsw migr smtx  srw syscl  usr sys  wt idl
 4 7.0 0.0 26.0 114.25 68.0 212.75 16.75 64.75 11.75 0.0 141.25 1.0 1.0 0.0 98.5
 4 0.75 0.0 929.75 2911.5 1954.75 10438.75 929.0 4282.0 715.0 0.0 6107.25 39.25 35.75 0.0 25.25
 4 0.0 0.0 892.25 2830.25 1910.5 10251.5 901.5 4210.0 694.5 0.0 5986.0 38.5 35.0 0.0 26.75
 4 0.0 0.0 941.5 2898.25 1926.75 10378.0 911.75 4258.0 698.0 0.0 6070.5 39.0 35.5 0.0 25.25
 4 0.0 0.0 893.75 2833.75 1917.75 10215.0 873.75 4196.25 715.25 0.0 5925.25 38.0 34.75 0.0 27.25

The script is here.

Interestingly, you can use this to get greater accuracy on things like USR and SYS than you would get if you just used vmstat, sar, iostat or mpstat -a. This depends on the number of CPUs you have in your system though.

Now, if you do not have a lot of CPUs, but still want greater accuracy, I have another trick. This works especially well if you are conducting an experiment and can run a command at the beginning and end of the experiment. This trick is based around the output of vmstat -s:

# vmstat -s
[...]
 54056444 user   cpu
 42914527 system cpu
1220364345 idle   cpu
        0 wait   cpu

Those numbers are "ticks" since the system booted. A tick is usually 0.01 seconds.
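
If you capture those counters before and after an experiment, the deltas convert directly into utilization percentages. A minimal sketch of the idea (this is not the script mentioned below, and it assumes the counters do not wrap; wait time is folded into idle):

before=`vmstat -s | awk '/cpu$/ {print $1}'`     # user, system, idle, wait
# ... run the experiment here ...
after=`vmstat -s | awk '/cpu$/ {print $1}'`
echo $before $after | awk '{
    usr = $5 - $1; sys = $6 - $2; idl = ($7 - $3) + ($8 - $4)
    tot = usr + sys + idl
    printf "usr %.2f%%  sys %.2f%%  idle %.2f%%\n",
        100 * usr / tot, 100 * sys / tot, 100 * idl / tot }'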

NEW: I have now uploaded a script that uses these statistics to track system-wide utilization.

Friday Nov 02, 2007

Comparing the UltraSPARC T2 to Other Recent SPARC Processors

This is now a placeholder. You probably want to read my updated blog on SPARC processor details to get the latest.

Friday Aug 31, 2007

How Event-Driven Utilization Measurement is Better than Sample-Based

...and how to measure both at the same time

With the delivery of Solaris 10, Sun made two significant changes to how system utilization is measured. One change was to how CPU utilisation is measured.

Solaris used to (and virtually all other POSIX-like OSes still do) measure CPU utilisation by sampling it. This happened once every "clock tick". A clock tick is a kernel administrative routine which is executed once (on one CPU) for every clock interrupt received, which happens once every 10 milliseconds. At this time, the state of each CPU was inspected, and a "tick" would be added to one of the "usr", "sys", "wt" or "idle" buckets for that CPU.

The problem with this method is two-fold:

  • It is statistical, which is to say it is an approximation of something, derived via sampling
  • The sampling happens just before the point when Solaris looks for threads that are waiting to be woken up to do work.

Solaris 10 now uses microstate accounting. Microstates are a set of finer-grained states of execution, including USR, SYS, TRP (servicing a trap), LCK (waiting on an intra-process lock), SLP (sleeping), LAT (on a CPU dispatch queue), although these all fall under one of the traditional USR, SYS and IDLE. These familiar three are still used to report system-wide CPU utilisation (e.g. in vmstat, mpstat, iostat), however you can see the full set of states each process is in via "prstat -m".

The key difference in system-wide CPU utilization comes in how microstate accounting is captured - it is captured at each and every transition from one microstate to another, and it is captured in nanosecond resolution (although the granularity of this is platform-dependent). To put it another way, it is event-driven, rather than statistically sampled.

This eliminated both of the issues listed above, but it is the second issue that can cause some significant variations in observed CPU utilization.

If we have a workload that does a unit of work that takes less than one clock tick, then yields the CPU to be woken up again later, it is likely to avoid being on a CPU when the sampling is done. This is called "hiding from the clock", and is not difficult to achieve (see "hide from the clock" below).

Other types of workloads that do not explicitly behave like this, but do involve processes that are regularly on and off the CPU can look like they have different CPU utilization on Solaris releases prior to 10, because the timing of their work and the timing of the sampling end up causing an effect which is sort-of like watching the spokes of a wheel or propeller captured on video. Another factor involved in this is how busy the CPUs are - the closer a CPU is to either idle or fully utilized, the more accurate sampling is likely to be.

What This Looks Like in the Wild

I was recently involved in an investigation where a customer had changed only their operating system release (to Solaris 10), and they saw an almost 100% increase (relative) in reported CPU utilization. We suspected that the change to event-based accounting may have been a factor in this.

During our investigations, I developed a DTrace utility which can capture CPU utilization that is like that reported by Solaris 10, then also measure it the same way as Solaris 9 and 8, all at the same time.

The DTrace utility, called util-old-new, is available here. It works by enabling probes from the "sched" provider to track when threads are put on and taken off CPUs. It is event-driven, and sums up nanoseconds the same way Solaris 10 does, but it also tracks the change in a system variable, "lbolt64" while threads are on CPU, to simulate how many "clock ticks" the thread would have accumulated. This should be a close match, because lbolt64 is updated by the clock tick routine, at pretty much the same time as when the old accounting happened.

Using this utility, we were able to prove that the change in observed utilisation was pretty much in line with the way Solaris has changed how it measures utilisation. The up-side for the customer was that their understanding of how much utilisation they had left on their system was now more accurate. The down-side was that they now had to re-assess whether, and by how much, this changed the amount of capacity they had left.

Here is some sample output from the utility. I start the script when I already have one CPU-bound thread on a 2-CPU system, then I start up one instance of Alexander Kolbasov's "hide-from-clock", which event-based accounting sees, but sample-based accounting does not:

mashie[bash]# util-old-new 5
NCPUs = 2
Date-time              s8-tk/1000   s9-tk/1000      ns/1000
2007 Aug 16 12:12:14          508          523          540
2007 Aug 16 12:12:19          520          523          553
2007 Aug 16 12:12:24          553          567          754
2007 Aug 16 12:12:29          549          551          798
2007 Aug 16 12:12:34          539          549          810
^C

The Other Change in Utilization Measurement

By the way, the other change was to "hard-wire" the Wait I/O ("%wio" or "wt" or "wait time") statistic to zero. The reasoning behind this is that CPU's do not wait for I/O (or any other asynchronous event) to complete - threads do. Trying to characterize how much a CPU is not doing anything in more than one statistic is like having two fuel gauges on your car - one for how much fuel remains for highway driving, and another for city driving.


P.S. This entry is intended to cover what I have spoken about in my previous two entries. I will soon delete the previous entries.

Thursday Jul 12, 2007

nicstat - Update for Solaris & Linux

I have made a minor change to nicstat on Solaris and Linux. The way it schedules its work has been improved.

Use the links from my latest entry on nicstat for the latest source and binaries.

I will write up a more detailed explanation along with a treatise on the merits of different scheduling methodologies in a post in the near future.

About

Tim Cook's Weblog. The views expressed on this blog are my own and do not necessarily reflect the views of Oracle.
