Wednesday Mar 05, 2014

Random or Sequential I/O? How to find out with DTrace

One of the perennial performance or sizing questions for a workload is how much I/O it generates. This can be broken down several ways:

  • Network versus disk
  • Read versus write
  • Random versus sequential (for the disk component)

Many of the metrics have always been easy to quantify. There are standard operating system tools to measure disk I/O, network packets and bytes, etc. The question of how much random versus sequential I/O is much harder to answer, but it can be an important one if your storage is a potential limiter of performance and that storage has a significant amount of "conventional" disk - i.e. spindles of rotating rust.

Sequential I/O on conventional disks can generally be served at a higher throughput because conventional disks can avoid almost all rotational delay and seek time penalties. Random I/O will always incur these penalties, at a varying rate.

So if you need to decompose your workload enough to understand how much Random versus Sequential I/O there is, what do you do? You may be able to discuss it with the application developer to get their take on how the application behaves, what it stores, and how and when it needs to fetch data into memory or write or update it on disk. This information (or access to the developer) is not always available though.

What about getting inside the application and measuring the I/O as it happens? That is a task for DTrace. I have developed a script that tallies up all I/O for all filesystems and can tell you how much of the I/O is Sequential (a read or write at one location in a file is followed by a read or write at the next location in that file) or Random.
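Before getting to the output, here is a minimal sketch of the underlying idea - not the script itself, which does a great deal more - using the fbt provider. It assumes fop_read()'s first two arguments are the vnode and the uio, which is the case on recent Solaris releases but is worth checking on yours:

# classify reads as sequential or random by remembering, per vnode, where
# the previous read ended (assumes each request completes in full; state
# is never cleaned up in this sketch)
dtrace -qn '
fbt::fop_read:entry
{
    this->vp  = (uintptr_t)arg0;
    this->uio = (uio_t *)arg1;
    this->off = this->uio->uio_loffset;
    @io[lastoff[this->vp] == this->off ? "sequential" : "random"] = count();
    lastoff[this->vp] = this->off + this->uio->uio_resid;
}
tick-10s
{
    printa("%-12s %@d reads\n", @io);
    trunc(@io);
}'

The real script applies the same test to writes, splits the counts per filesystem, and accounts for the page operations discussed below.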

Here is some example output from the script, where I first ran a Sequential filebench test:

Sample interval: 120 seconds
End time: 2011 Mar 23 10:42:20

FYI: Page I/O on VREG vnodes after read/write:
  fop_read      fop_getpage      2

/u                         rd/s    wr/s   rdKB/s   wrKB/s   acc/look:       0
             sequential 2616.20 0.62501  2676408 0.718235    readdir:       0
                 random 2.65836 0.20834 2713.623 0.641998  geta/seta:       0
                   page 0.00833       0 0.033334        0      cr/rm:       0
                  TOTAL 2618.86 0.83334  2679122 1.360233      other: 2.86669

Then I ran a random filebench test:

Sample interval: 60 seconds
End time: 2011 Mar 22 12:09:28

FYI: Page I/O on VREG vnodes after read/write:
  fop_write      fop_putpage      1
  fop_read       fop_getpage      5
  fop_write      fop_getpage     18

/u                         rd/s    wr/s   rdKB/s   wrKB/s   acc/look: 2.28871
             sequential 157.036 161.446 1255.552 1118.578    readdir:       0
                 random 20113.5 17119.0 160860.2 136919.0  geta/seta:       0
                   page       0 4.27672        0 17.10687      cr/rm: 0.05012
                  TOTAL 20270.6 17284.7 162115.7 138054.7      other: 5.26237

As you can see, the script outputs a table breaking down reads and writes by operations per second and by KB/s, and further by "sequential", "random" and "page". There are also totals, plus some statistics on a few other VFS operations.

What does "page" mean? Well, this script uses Solaris' virtual filesystem interface to measure filesystem I/O. Through this interface we can see conventional read and write operations, however there are also "page" operations that can happen which do not necessarily map to a read or write from an application. Because of this uncertainty I decided to list the I/O via these page operations separately. They will generally be small for an application that uses conventional I/O calls, but may be large if an application for is using the mmap() interface for example.

For the purposes of this blog entry I have simplified the output of the script. It normally outputs a table for every filesystem that has I/O during its run. The script could easily be modified to select a desired filesystem. It could also be modified to select I/O related to a specific user, process, etc.

References

Monday Feb 24, 2014

nicstat update - version 1.95

Another minor nicstat release is now available.

Changes for Version 1.95, January 2014

Common

  • Added "-U" option, to display separate read and write utilization.
  • Simplified display code regarding "-M" option.

Solaris

  • Fixed fetch64() to check type of kstats
  • Fixed memory leak in update_nicdata_list()

Availability

nicstat source and binaries are available from sourceforge.

History

For more history on nicstat and examples of use, see my earlier entry.

Thursday Jul 28, 2011

nicstat update - version 1.90

Yes! A new version is now available with some long-awaited features. Many thanks to those who suggested improvements and helped with testing.

Changes for Version 1.90, April 2011

Common

  • nicstat.sh script, to provide for automated multi-platform deployment. See the Makefiles for details.
  • Added "-x" flag, to display extended statistics for each interface.
  • Added "-t" and "-u" flags, to include TCP and UDP (respectively) statistics. These come from tcp:0:tcpstat and udp:0:udpstat on Solaris, or from /proc/net/snmp and /proc/net/netstat on Linux.
  • Added "-a" flag, which equates to "-tux".
  • Added "-l" flag, which lists interfaces and their configuration.
  • Added "-v" flag, which displays nicstat version.

Solaris

  • Added use of libdladm.so:dladm_walk_datalink_id() to get list of interfaces. This is better than SIOCGLIFCONF, as it includes interfaces given exclusively to a zone.
    NOTE: this library/routine can be (and by default is) linked in to nicstat in "lazy" mode, meaning that a Solaris 11 binary built with knowledge of the routine will also run on Solaris 10 without failing when the routine or library is not found - in this case nicstat will fall back to the SIOCGLIFCONF method. A sketch of the link line appears after this list.
  • Added search of kstat "link_state" statistics as a third method for finding active network interfaces. See the man page for details.
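For the curious, the lazy linking is a link-editor feature rather than anything special in the nicstat code. A link line along these lines does the trick (the library list is illustrative, not copied from the real Makefile):

cc -o nicstat nicstat.c -lkstat -lsocket -lnsl \
    -z lazyload -ldladm -z nolazyload

The -z lazyload / -z nolazyload pair restricts lazy loading to libdladm alone, so that library is only pulled in the first time one of its routines is called.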

Linux

  • Added support for SIOCETHTOOL ioctl, so that nicstat can look up interface speed/duplex (i.e. "-S" flag not necessarily needed any longer).
  • Removed need for LLONG_MAX, improving Linux portability.

Availability

nicstat source and binaries are available from sourceforge.

History

For more history on nicstat, see my earlier entry.

Thursday Sep 17, 2009

querystat - DTrace script to monitor your queries, query cache and server thread pre-emption

I was recently helping some colleagues check what was happening with their MySQL queries, and wrote a DTrace script to do it. Time to share that script.

First of all, a look at some output from the script:

mashie[bash]# ./querystat.d -p `pgrep mysqld`
Tracing started at 2009 Sep 17 16:28:35
2009 Sep 17 16:28:38   throughput 3 queries/sec
2009 Sep 17 16:28:41   throughput 4 queries/sec
2009 Sep 17 16:28:44   throughput 528 queries/sec
2009 Sep 17 16:28:47   throughput 1603 queries/sec
2009 Sep 17 16:28:50   throughput 1676 queries/sec
^C
Tracing ended   at 2009 Sep 17 16:28:51
Average latency, all queries: 107 us
Latency distribution, all queries (us): 
           value  ------------- Distribution ------------- count    
              16 |                                         0        
              32 |@@                                       170      
              64 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@        3728     
             128 |@@@@@                                    533      
             256 |                                         26       
             512 |                                         18       
            1024 |                                         2        
            2048 |                                         1        
            4096 |                                         0        
            8192 |                                         1        
           16384 |                                         1        
           32768 |                                         0        
Query cache statistics:
    count             hit: 6
    count            miss: 4474
    avg latency      miss: 107 (us)
    avg latency       hit: 407 (us)
Latency distribution, for query cache hit (us): 
           value  ------------- Distribution ------------- count    
              64 |                                         0        
             128 |@@@@@@@@@@@@@                            2        
             256 |@@@@@@@                                  1        
             512 |@@@@@@@@@@@@@@@@@@@@                     3        
            1024 |                                         0        
Latency distribution, for query cache miss (us): 
           value  ------------- Distribution ------------- count    
              16 |                                         0        
              32 |@@                                       170      
              64 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@        3728     
             128 |@@@@@                                    531      
             256 |                                         25       
             512 |                                         15       
            1024 |                                         2        
            2048 |                                         1        
            4096 |                                         0        
            8192 |                                         1        
           16384 |                                         1        
           32768 |                                         0        
Average latency when query WAS NOT pre-empted: 73 us
Average latency when query     WAS pre-empted: 127 us
Pre-emptors:
[...]
   mysql                                     6
   Xorg                                     18
   sched                                    25
   firefox-bin                              44
   sysbench                               3095

You can see that while the script is running (prior to pressing <Ctrl>-C), we get a throughput count every 3 seconds.

Then we get some totals, some averages, and even some distribution histograms, covering all queries, then with breakdowns on whether we used the query cache, and whether the thread executing the query was pre-empted.

This may be useful for determining things like:

  • Do I have some queries in my workload that consume a lot more CPU than others?
  • Is the query cache helping or hurting?
  • Are my database server threads being pre-empted (kicked off the CPU) by (an)other process(es)?

Things have become easier since I first tried this, when I had to use the PID provider to trace functions in the database server.
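With those static probes available, the core of the measurement is only a few lines. The sketch below is not querystat.d itself - the real script also tracks the query cache and pre-emption - and it assumes a mysqld built with the DTrace probes, using the probe names documented for MySQL's DTrace support:

mashie[bash]# dtrace -Z -p `pgrep mysqld` -qn '
mysql$target:::query-start { self->ts = timestamp; }
mysql$target:::query-done  /self->ts/
{
    @queries = count();
    @avg_us  = avg((timestamp - self->ts) / 1000);
    self->ts = 0;
}
END
{
    printa("queries:        %@d\n", @queries);
    printa("avg latency us: %@d\n", @avg_us);
}'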

If you want to try my DTrace script, get it from here. NOTE: You will need a version of MySQL with DTrace probes for it to work.

Friday Sep 04, 2009

nicstat - the Solaris and Linux Network Monitoring Tool You Did Not Know You Needed

Update - Version 1.95, January 2014

Added "-U" option, to display separate read and write utilization. Simplified display code regarding "-M" option. For Solaris, fixed fetch64() to check type of kstats andf ixed memory leak in update_nicdata_list(). Full details at the entry for version 1.95

Update - Version 1.92, October 2012

Added "-M" option to display throughput in Mbps (Megabits per second). Fixed some bugs. Full details at the entry for version 1.92

Update - Version 1.90, July 2011

Many new features available, including extended NIC, TCP and UDP statistics. Full details at the entry for version 1.90

Update - February 2010

Nicstat can now produce parseable output if you add a "-p" flag. This is compatible with System Data Recorder (SDR). Links below are for the new version - 1.22.

Update - October 2009

Just a little one - nicstat now works on shared-ip Solaris zones.

Update - September 2009

OK, this is heading toward overkill...

The more I publish updates, the more I get requests for enhancement of nicstat. I have also decided to complete a few things that needed doing.

The improvements for this month are:

  • Added support for a "fd" or "hd" (in reality anything starting with an upper or lower-case F or H) suffix to the speed settings supplied via the "-S" option. This advises nicstat the interface is half-duplex or full-duplex. The Linux version now calculates %Util the same way as the Solaris version.
  • Added a script, enicstat, which uses ethtool to get speeds and duplex modes for all interfaces, then calls nicstat with an appropriate -S value.
  • Made the Linux version more efficient.
  • Combined the Solaris and Linux source into one nicstat.c. This is a little ugly due to #ifdefs, but that's the price you pay.
  • Wrote a man page.
  • Wrote better Makefiles for both platforms.
  • Wrote a short README.
  • Licensed nicstat under the Artistic License 2.0

All source and binaries will from now on be distributed in a tarball. This blog entry will remain the home of nicstat for the time being.

Lastly, I have heard the requests for easier availability in OpenSolaris. Stay tuned.

Update - August 2009

That's more like it - we should get plenty of coverage now :)

A colleague pointed out to me that nicstat's method of calculating utilization for a full-duplex interface is not correct.

Now nicstat will look for the kstat "link_duplex" value, and if it is 2 (which means full-duplex), it will use the greater of rbytes or wbytes to calculate utilization.
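In shell terms the calculation amounts to something like this (a sketch only; statistic names such as rbytes64/wbytes64 and how a driver exports them can vary, and nicstat itself does this in C via libkstat):

IF=e1000g0; SPEED=1000000000; T=10      # interface, speed in bits/sec, interval
r1=$(kstat -p -n $IF -s rbytes64 | awk '{ print $2; exit }')
w1=$(kstat -p -n $IF -s wbytes64 | awk '{ print $2; exit }')
sleep $T
r2=$(kstat -p -n $IF -s rbytes64 | awk '{ print $2; exit }')
w2=$(kstat -p -n $IF -s wbytes64 | awk '{ print $2; exit }')
# full-duplex: utilization is set by the busier direction, not the sum
echo "$r1 $r2 $w1 $w2" | awk -v speed=$SPEED -v t=$T '{
    rbps = ($2 - $1) * 8 / t; wbps = ($4 - $3) * 8 / t
    busier = (rbps > wbps) ? rbps : wbps
    printf "%%Util = %.2f\n", 100 * busier / speed
}'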

No change to the Linux version. Use the links in my previous post for downloading.

Update - July 2009

I should probably do this at least once a year, as nicstat needs more publicity...

A number of people have commented to me that nicstat always reports "0.00" for %Util on Linux. The reason for this is that there is no simple way an unprivileged user can get the speed of an interface in Linux (quite happy for someone to prove me wrong on that however).

Recently I got an offer of a patch from David Stone, to add an option to nicstat that tells it what the speed of an interface is. Pretty reasonable idea, so I have added it to the Linux version. You will see this new "-S" option explained if you use nicstat's "-h" (help) option.

I have made another change which makes nicstat more portable, hence easier to build on Linux.

History

A few years ago, a bloke I know by the name of Brendan Gregg wrote a Solaris kstat-based utility called nicstat. In 2006 I decided I needed to use this utility to capture network statistics in testing I do. Then I got a request from a colleague in PAE to do something about nicstat not being aware of "e1000g" interfaces.

I have spent a bit of time adding to nicstat since then, so I thought I would make the improved version available.

Why Should I Still Be Interested?

nicstat is to network interfaces as "iostat" is to disks, or "prstat" is to processes. It is designed as a much better version of "netstat -i". Its differences include:

  • Reports bytes in & out as well as packets.
  • Normalizes these values to per-second rates.
  • Reports on all interfaces (while iterating)
  • Reports Utilization (rough calculation as of now)
  • Reports Saturation (also rough)
  • Prefixes statistics with the current time

How about an example?

eac-t2000-3[bash]# nicstat 5
    Time      Int   rKB/s   wKB/s   rPk/s   wPk/s    rAvs    wAvs %Util    Sat
17:05:17      lo0    0.00    0.00    0.00    0.00    0.00    0.00  0.00   0.00
17:05:17  e1000g0    0.61    4.07    4.95    6.63   126.2   628.0  0.04   0.00
17:05:17  e1000g1   225.7   176.2   905.0   922.5   255.4   195.6  0.33   0.00
    Time      Int   rKB/s   wKB/s   rPk/s   wPk/s    rAvs    wAvs %Util    Sat
17:05:22      lo0    0.00    0.00    0.00    0.00    0.00    0.00  0.00   0.00
17:05:22  e1000g0    0.06    0.15    1.00    0.80   64.00   186.0  0.00   0.00
17:05:22  e1000g1    0.00    0.00    0.00    0.00    0.00    0.00  0.00   0.00
eac-t2000-3[bash]# nicstat -i e1000g0 5 4
    Time      Int   rKB/s   wKB/s   rPk/s   wPk/s    rAvs    wAvs %Util    Sat
17:08:49  e1000g0    0.61    4.07    4.95    6.63   126.2   628.0  0.04   0.00
17:08:54  e1000g0    0.06    0.04    1.00    0.20   64.00   186.0  0.00   0.00
17:08:59  e1000g0   239.2    2.33   174.4   33.60  1404.4   71.11  1.98   0.00
17:09:04  e1000g0    0.01    0.04    0.20    0.20   64.00   186.0  0.00   0.00

For more examples, see the man page.

References & Resources

Friday Aug 14, 2009

nicstat - Update for Solaris only

Update - August 2009

That's more like it - we should get plenty of coverage now :)

A colleague pointed out to me that nicstat's method of calculating utilization for a full-duplex interface is not correct.

Now nicstat will look for the kstat "link_duplex" value, and if it is 2 (which means full-duplex), it will use the greater of rbytes or wbytes to calculate utilization.

No change to the Linux version. Use the links in my previous post for downloading.

Monday Apr 27, 2009

pstime - a mash-up of ps(1) and ptime(1)

I have done some testing in the past where I needed to know the amount of CPU consumed by a process more accurately than I can get from the standard set of operating system utilities.

Recently I hit the same issue - I wanted to collect CPU consumption of mysqld.

To capture process CPU utilization over an interval on Solaris, about the best I can get is the output from a plain "prstat" command, which might look like:

mashie ) prstat -c -p `pgrep mysqld` 5 2
Please wait...
   PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP       
  7141 mysql     278M  208M cpu0    39    0   0:38:13  40% mysqld/45
Total: 1 processes, 45 lwps, load averages: 0.63, 0.33, 0.18
   PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP       
  7141 mysql     278M  208M cpu1    32    0   0:38:18  41% mysqld/45
Total: 1 processes, 45 lwps, load averages: 0.68, 0.34, 0.18

I am after data from the second sample only (I am still not sure exactly how prstat gets data for the first sample, which comes out almost instantaneously), so you can guess I will need some sed/perl that is a little more complicated than I would prefer.
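To give you an idea of the kind of post-processing I mean, extracting just the second sample looks something like this (a rough sketch; the column positions assume the plain prstat output above):

mashie ) prstat -c -p `pgrep mysqld` 5 2 | awk '
    /^ *PID/ { sample++ }                          # each header starts a new sample
    sample == 2 && $1 ~ /^[0-9]+$/ { print "pid " $1 " cpu " $9 }'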

pstime reads PROCFS (i.e., the virtualized file system mounted on /proc) and captures CPU utilization figures for processes. It will report the %USR and %SYS either for a specific list of processes, or for every process running on the system (i.e., running at both sample points). A high-resolution timestamp is recorded at the time each process' data is captured, and again after N seconds, where N is the first parameter supplied to pstime.

The default output of pstime is expressed as either a percentage of whole system CPU, or CPU seconds, with four significant digits. Solaris itself records the original figures in nanosecond resolution, although we do not expect today's hardware to be that accurate.

Here is an example:

mashie ) pstime 10 `pgrep sysbench\|mysqld`
  UID    PID  %USR  %SYS COMMAND
mysql   7141 44.17 3.391 /u/dist/mysql60-debug/bin/mysqld --defaults-file=/et
mysql  19870 2.517 2.490 sysbench --test=oltp --oltp-read-only=on --max-time=
mysql  19869 0.000 0.000 /bin/sh -p ./run-sysbench

Downloads

Monday Apr 06, 2009

New Feature for Sysbench - Generate Transactions at a Steady Rate

Perhaps I am becoming a regular patcher of sysbench...

I have developed a new feature for sysbench - the ability to generate transactions at a steady rate determined by the user.

This mode is enabled using the following two new options:
--tx-rate
Rate at which sysbench should attempt to send transactions to the database, in transactions per second. This is independent of num_threads. The default is 0, which means send as many as possible (i.e., do not pause between the end of one transaction and the start of another). It is also independent of other options like --oltp-user-delay-min and --oltp-user-delay-max, which add think time between individual statements generated by sysbench.
--tx-jitter
Magnitude of the variation in transaction start times, in microseconds. The default is zero, which asks each thread to vary its transaction period by up to 10 percent (i.e. 10^6 / tx-rate * num-threads / 10). A standard pseudo-random number generator is used to decide each transaction start time.
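For example, to ask 16 threads to deliver a steady 500 transactions per second for ten minutes (the rate options are from my patch; the rest is stock sysbench, and the connection settings are illustrative):

sysbench --test=oltp --mysql-user=test --mysql-db=sbtest \
    --num-threads=16 --max-time=600 --max-requests=0 \
    --tx-rate=500 --tx-jitter=0 run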

My need for these options is simple - I want to generate a steady load for my MySQL database. It is one thing to measure the maximum achievable throughput as you change your database configuration, hardware, or num-threads. I am also interested in how the system (or just mysqld's) utilization changes, at the same transaction rate, when I change other variables.

An upcoming post will demonstrate a use of sysbench in this mode.

For the moment my new feature can be added to sysbench 0.4.12 (and probably many earlier versions) via this patch. These changes are tested on Solaris, but I did choose only APIs that are documented as also available on Linux. I have also posted my patch on sourceforge as a sysbench feature enhancement request.

Wednesday Dec 17, 2008

MySQL 5.1 Memory Allocator Bake-Off

After getting sysbench running properly with a scalable memory allocator (see last post), I can now return to what I was originally testing - what memory allocator is best for the 5.1 server (mysqld).

This stems out of studies I have made of some patches that have been released by Google. You can read about the work Google has been doing here.

I decided I wanted to test a number of configurations based on the MySQL community source, 5.1.28-rc, namely:

  • The baseline - no Google SMP patch, default memory allocator (5.1.28-rc)
  • With Google SMP patch, mem0pool enabled, no custom malloc (pool)
  • With Google SMP patch, mem0pool enabled, linked with mtmalloc (pool-mtmalloc)
  • With Google SMP patch, mem0pool disabled, linked with tcmalloc (TCMalloc)
  • With Google SMP patch, mem0pool disabled, linked with umem (umem)
  • With Google SMP patch, mem0pool disabled, linked with mtmalloc (mtmalloc)

Here are some definitions, by the way:

mem0pool: InnoDB's internal "memory pools" feature, found in mem0pool.c (NOTE: even if this is enabled, other parts of the server will not use this memory allocator - they will use whatever allocator is linked with mysqld)
tcmalloc: The libtcmalloc_minimal.so.0.0.0 library that is built from google-perftools-0.99.2
Hoard: The Hoard memory allocator, version 3.7.1
umem: The libumem library (included with Solaris)
mtmalloc: The mtmalloc library (included with Solaris)
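As an aside, if you just want a quick look at how mysqld behaves with one of the bundled Solaris allocators, you do not even need to re-link - interposing the library at start-up is enough (for the configurations above the allocators really were linked in at build time; the path below is a placeholder):

# libmtmalloc.so.1 can be substituted the same way
LD_PRELOAD=libumem.so.1 $MYSQL_HOME/bin/mysqld \
    --defaults-file=$MYSQL_HOME/my.cnf &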

My test setup was a 16-CPU Intel system, running Solaris Nevada build 100. I chose to use only an x86 platform, as I was not able to build tcmalloc on SPARC. I also chose to run with the database in TMPFS, and with an InnoDB buffer pool smaller than the database. This was to ensure that we would be CPU-bound if possible, rather than slowed by I/O.

Where I had to build a package myself (not needed for mtmalloc or umem), I used GCC 4.3.1, except for Hoard, which seemed to prefer the Sun Studio 11 C compiler (over Sun Studio 12 or GCC).

My test was a sysbench OLTP read-write run, of 10 minutes. Each series of runs at different thread counts is preceded by a database re-build and 20 minute warmup. Here are my throughput results for 1-32 SysBench threads, in transactions per second:

These results show that while the Google SMP changes are a benefit, the disabling of InnoDB's mem0pool does not seem to provide any further benefit for my configuration. My results also show that TCMalloc is not a good allocator for this workload on this platform, and Hoard is particularly bad, with significant negative scaling above 16 threads.

The remaining configurations are pretty similar, with mtmalloc and umem a little ahead at higher thread counts.

Before I get a ton of comments and e-mails, I would like to point out that I did some verification of my TCMalloc builds, as the results I got surprised me. I verified that it was using the supplied assembler for atomic routines, and I built it with optimization (-O3) and without.

I also discovered that TCMalloc was emitting this diagnostic when mysqld was starting up:

src/tcmalloc.cc:151] uname failed assuming no TLS support (errno=0)

I rectified this with a change in tcmalloc.cc, and called this configuration "TCMalloc -O3, TLS". It is shown against the other two configurations below.

I often like to have a look at what the CPU cost of different configurations are. This helps to demonstrate headroom, and whether different throughput results may be due to less efficient code or something else. The chart below lists what I found - note that this is system-wide CPU (user & system) utilization, and I was running my SysBench client on the same system.

Lastly, I did one other comparison, which was to measure how much each memory allocator affected the virtual size of mysqld. I did not expect much difference, as the most significant consumer - the InnoDB buffer pool - should dominate with large, long-lived allocations. This was indeed the case, and memory consumption grew little after the initial start-up of mysqld. The only allocator that caused any noticeable change was mtmalloc, which for some reason made the heap grow by 35MB over a 5 minute run (it was originally 1430 MB).

References

Friday Dec 12, 2008

Scalability and Stability for SysBench on Solaris

My mind is playing "Suffering Succotash..."

I have been working on MySQL performance for a while now, and the team I am in have discovered that SysBench could do with a couple of tweaks for Solaris.

Sidebar - sysbench is a simple "OLTP" benchmark which can test multiple databases, including MySQL. Find out all about it here, but go to the download page to get the latest version.

To simulate multiple users sending requests to a database, sysbench uses multiple threads. This leads to two issues we have identified with SysBench on Solaris, namely:

  • The implementation of random() is explicitly identified as unsafe in multi-threaded applications on Solaris. My team has found this is a real issue, with occasional core-dumps happening to our multi-threaded SysBench runs.
  • SysBench does quite a bit of memory allocation, and could do with a more scalable memory allocator.

Neither of these issues is necessarily relevant only to Solaris, by the way.

Luckily there are simple solutions. We can fix the random() issue by using lrand48() - in effect a drop-in replacement. Then we can fix the memory allocator by simply choosing to link with a better allocator on Solaris.

To help with a decision on memory allocator, I ran a few simple tests to check the performance of the two best-known scalable allocators available in Solaris. Here are the results ("libc" is the default memory allocator):

Throughput

To see the differences more clearly, let's do a relative comparison, using "umem" (A.K.A. libumem) as the reference:

Relative Throughput

So - around 20% less throughput at 16 or 32 threads. Very little difference at 1 thread, too (where the default memory allocator should be the one with the lowest synchronization overhead).

Where you see another big difference is CPU cost per transaction:

CPU Cost

I will just point out two other reasons why I would recommend libumem:

I have logged these two issues as sysbench bugs:

However, if you can't wait for the fixes to be released, try these:

Sunday Oct 12, 2008

The Seduction of Single-Threaded Performance

The following is a dramatization. It is used to illustrate some concepts regarding performance testing and architecting of computer systems. Artistic license may have been taken with events, people and time-lines. The performance data I have listed is real and current however.

I got contacted recently by the Systems Architect of latestrage.com. He has been a happy Sun customer for many years, but was a little displeased when he took delivery of a beta test system of one of our latest UltraSPARC servers.

"Not very fast", he said.

"Is that right, how is it not fast?", I inquired eagerly.

"Well, it's a lot slower than one of the LowMarginBrand x86 servers we just bought", he trumpeted indignantly.

"How were you measuring their speed?", I asked, getting wary.

"Ahh, simple - we were compressing a big file. We were careful to not let it be limited by I/O bandwidth or memory capacity, though..."

What then ensues is a discussion about what was being used to test "performance", whether it matches latestrage.com's typical production workload and further details about architecture and objectives.

Data compression utilities are a classic example of a seemingly mature area in computing. Lots of utilities, lots of different algorithms, a few options in some utilities, reasonable portability between operating systems, but one significant shortcoming - there is no commonly available utility that is multi-threaded.

Let me pretend I am still in this situation of using compression to evaluate system performance, and that I want to compare the new Sun SPARC Enterprise T5440 with a couple of current x86 servers. Here is my own first observation about such a test, using a single-threaded compression utility:

Single-Threaded Throughput

Now if you browse down to older blog entries, you will see I have written my own multi-threaded compression utility. It consists of a thread to read data, as many threads to compress or decompress data as demand requires, and one thread to write data. Let me see whether I can fully exploit the performance of the T5440 with Tamp...

Well, this turned out to be not quite the end of the story. I designed my tests with my input file located on a TMPFS (in-memory) filesystem, and with the output being discarded. This left the system focusing on the computation of compression, without being obscured by I/O. This is the same objective that latestrage.com had.

What I found on the T5440 was that Tamp would not use more than 12-14 threads for compression - it was limited by the speed at which a single thread could read data from TMPFS.

So, I chose to use another dimension by which we can scale up work on a server - add more sources of workload. This is represented by multiple "Units of Work" in my chart below.

After completing my experiments I discovered that, as expected, the T5440 may disappoint if we restrict ourselves to a workload that can not fully utilize the available processing capacity. If we add more work however, we will find it handily surpasses the equivalent 4-socket quad-core x86 systems.

Multi-Threaded Throughput

Observing Single-Thread Performance on a T5440

A little side-story, and another illustration of how inadequate a single-threaded workload is at determining the capability of the T5440. Take a look at the following output from vmstat, and answer this question:

Is this system "maxed out"?

(Note: the "us", "sy" and "id" columns list how much CPU time is spent in User, System and Idle modes, respectively)

 kthr      memory            page            disk          faults      cpu
 r b w   swap  free  re  mf pi po fr de sr d0 d1 d2 d3   in   sy   cs us sy id 
 0 0 0 1131540 12203120 1  8  0  0  0  0  0  0  0  0  0 3359 1552 419  0  0 100 
 0 0 0 1131540 12203120 0  0  0  0  0  0  0  0  0  0  0 3364 1558 431  0  0 100 
 0 0 0 1131540 12203120 0  0  0  0  0  0  0  0  0  0  0 3366 1478 420  0  0 99 
 0 0 0 1131540 12203120 0  0  0  0  0  0  0  0  0  0  0 3354 1500 441  0  0 100 
 0 0 0 1131540 12203120 0  0  0  0  0  0  0  0  0  0  0 3366 1549 460  0  0 99 

Well, the answer is yes. It is running a single-threaded process, which is using 100% of one CPU. For the sake of my argument we will say the application is the critical application on the system. It has reached its highest throughput and is therefore "maxed out". You see, when one CPU represents less than 0.5% of the entire CPU capacity of a system, a single saturated CPU will be rounded down to 0%. In the case of the T5440, one CPU is 1/256th, or 0.39%.

Here is a tip for watching a system that might be doing nothing, but then again might be doing something as fast as it can:

$ mpstat 3 | grep -v ' 100$'

This is what you might see:

CPU minf mjf xcal  intr ithr  csw icsw migr smtx  srw syscl  usr sys  wt idl
  0    2   0   48   204    4    2    0    0    0    0   127    1   1   0  99
 32    0   0    0     2    0    3    0    0    0    0     0    0   8   0  92
 48    0   0    0     6    0    0    5    0    0    0     0  100   0   0   0
CPU minf mjf xcal  intr ithr  csw icsw migr smtx  srw syscl  usr sys  wt idl
  0    1   0   49   205    5    3    0    0    0    0   117    0   1   0  99
 32    0   0    0     4    0    5    0    0    1    0     0    0  14   0  86
 48    0   0    0     6    0    0    5    0    0    0     0  100   0   0   0
CPU minf mjf xcal  intr ithr  csw icsw migr smtx  srw syscl  usr sys  wt idl
  0    0   0   48   204    4    2    0    0    0    0   103    0   1   0  99
 32    0   0    0     3    0    4    0    0    0    0     3    0  14   0  86
 48    0   0    0     6    0    0    5    0    0    0     0  100   0   0   0

mpstat uses "usr", "sys", and "idl" to represent CPU consumption. For more on "wt" you can read my older blog.

For more on utilization, see the CPU/Processor page on solarisinternals.com

To read more about the Sun SPARC Enterprise T5440 which is announced today, go to Allan Packer's blog listing all the T5440 blogs.

Tamp - a Multi-Threaded Compression Utility

Some more details on this:

  • It uses a freely-available Lempel-Ziv-derived algorithm, optimised for compression speed
  • It was compiled using the same compiler and optimization settings for SPARC and x86.
  • It uses a compression block size of 256KB, so files smaller than this will not gain much benefit
  • I was compressing four 1GB database files. They were being reduced in size by a little over 60%.
  • Browse my blog for more details and a download

Monday Apr 21, 2008

Comparing the UltraSPARC T2 Plus to Other Recent SPARC Processors

Update - now the UltraSPARC T2 Plus has been released, and is available in several new Sun servers. Allan Packer has published a new collection of blog entries that provide lots of detail.

Here is my updated table of details comparing a number of current SPARC processors. I can not guarantee 100% accuracy on this, but I did quite a bit of reading...

Name UltraSPARC IV+ SPARC64 VI UltraSPARC T1 UltraSPARC T2 UltraSPARC T2 Plus
Codename Panther Olympus-C Niagara Niagara 2 Victoria Falls
Physical
process 90nm 90nm 90nm 65nm 65nm
die size 335 mm2 421 mm2 379 mm2 342 mm2
pins 1368 1933 1831
transistors 295 M 540 M 279 M 503 M
clock 1.5 – 2.1 GHz 2.15 – 2.4 GHz 1.0 – 1.4 GHz 1.0 – 1.4 GHz 1.2 – 1.4 GHz
Architecture
cores 2 2 8 8 8
threads/core 1 2 4 8 8
threads/chip 2 4 32 64 64
FPU : IU 1 : 1 1 : 1 1 : 8 1 : 1 1 : 1
integration 8 × small crypto 8 × large crypto, PCI-E, 2 × 10Gbe 8 × large crypto, PCI-E, multi-socket coherency
virtualization domains1 hypervisor
L1 i$ 64K/core 128K/core 16K/core
L1 d$ 64K/core 128K/core 8K/core
L2 cache (on-chip) 2MB, shared, 4-way, 64B lines 6MB, shared, 10-way, 256B lines 3MB, shared, 12-way, 64B lines 4MB, shared, 16-way, 64B lines
L3 cache 32MB shared, 4-way, tags on-chip, 64B lines n/a n/a
MMU on-chip
on-chip, 4 × DDR2 on-chip, 4 × FB-DIMM on-chip, 2 × FB-DIMM
Memory Models TSO TSO TSO, limited RMO
Physical Address Space 43 bits 47 bits 40 bits
i-TLB 16 FA + 512 2-way SA 64 FA
d-TLB 16 FA + 512 2-way SA 64 FA 128 FA
combined TLB 32 FA + 2048 2-way SA
Page sizes 8K, 64K, 512K, 4M, 32M, 256M 8K, 64K, 512K, 4M, 32M, 256M 8K, 64K, 4M, 256M
Memory bandwidth2 (GB/sec) 9.6 25.6 60+ 32

Footnotes

  • 1 - domains are implemented above the processor/chip level
  • 2 - theoretical peak - does not take cache coherency or other limits into account

Glossary

  • FA - fully-associative
  • FPU - Floating Point Unit
  • i-TLB - Instruction Translation Lookaside Buffer (d means Data)
  • IU - Integer (execution) Unit
  • L1 - Level 1 (similarly for L2, L3)
  • MMU - Memory Management Unit
  • RMO - Relaxed Memory Order
  • SA - set-associative
  • TSO - Total Store Order

References:

Wednesday Feb 13, 2008

Utilization - Can I Have More Accuracy Please?

Just thought I would share another Ruby script - this one takes the output of mpstat and makes it more like the output of mpstat -a, only the values are floating point. I wrote it to process mpstat output that I got from a customer. It can also cope with the date (in Unix ctime format) being prepended to every line. Here is some sample output:

CPUs minf mjf xcal  intr ithr  csw icsw migr smtx  srw syscl  usr sys  wt idl
 4 7.0 0.0 26.0 114.25 68.0 212.75 16.75 64.75 11.75 0.0 141.25 1.0 1.0 0.0 98.5
 4 0.75 0.0 929.75 2911.5 1954.75 10438.75 929.0 4282.0 715.0 0.0 6107.25 39.25 35.75 0.0 25.25
 4 0.0 0.0 892.25 2830.25 1910.5 10251.5 901.5 4210.0 694.5 0.0 5986.0 38.5 35.0 0.0 26.75
 4 0.0 0.0 941.5 2898.25 1926.75 10378.0 911.75 4258.0 698.0 0.0 6070.5 39.0 35.5 0.0 25.25
 4 0.0 0.0 893.75 2833.75 1917.75 10215.0 873.75 4196.25 715.25 0.0 5925.25 38.0 34.75 0.0 27.25

The script is here.
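If you do not have Ruby to hand, a rough awk rendering of the same idea (usr/sys/idl only, no timestamp handling, and not the actual script) looks like this:

mpstat 5 | awk '
    # columns 13, 14 and 16 are usr, sys and idl in stock mpstat output
    /^CPU/ {
        if (ncpu)
            printf "%d CPUs: usr %.2f sys %.2f idl %.2f\n", ncpu, usr/ncpu, sys/ncpu, idl/ncpu
        ncpu = usr = sys = idl = 0
        next
    }
    { usr += $13; sys += $14; idl += $16; ncpu++ }'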

Interestingly, you can use this to get greater accuracy on things like USR and SYS than you would get if you just used vmstat, sar, iostat or mpstat -a. This depends on the number of CPUs you have in your system though.

Now, if you do not have a lot of CPUs, but still want greater accuracy, I have another trick. This works especially well if you are conducting an experiment and can run a command at the beginning and end of the experiment. This trick is based around the output of vmstat -s:

# vmstat -s
[...]
 54056444 user   cpu
 42914527 system cpu
1220364345 idle   cpu
        0 wait   cpu

Those numbers are "ticks" since the system booted. A tick is usually 0.01 seconds.

NEW: I have now uploaded a script that uses these statistics to track system-wide utilization.
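The idea behind it is simple enough to sketch in the shell (this is the gist only, not the uploaded script; the interval here is 60 seconds, and wait ticks are folded into idle):

snap() { vmstat -s | awk '$3 == "cpu" { t[$2] = $1 }
    END { print t["user"], t["system"], t["idle"], t["wait"] }'; }

set -- $(snap); u1=$1 s1=$2 i1=$3 w1=$4
sleep 60
set -- $(snap); u2=$1 s2=$2 i2=$3 w2=$4

# the tick length cancels out, so we never need to know it
echo $((u2-u1)) $((s2-s1)) $((i2-i1)) $((w2-w1)) | awk '{
    total = $1 + $2 + $3 + $4
    printf "usr %.2f%%  sys %.2f%%  idle %.2f%%\n", 100*$1/total, 100*$2/total, 100*($3+$4)/total
}'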

Friday Nov 02, 2007

Comparing the UltraSPARC T2 to Other Recent SPARC Processors

This is now a placeholder. You probably want to read my updated blog on SPARC processor details to get the latest.

Friday Aug 31, 2007

How Event-Driven Utilization Measurement is Better than Sample-Based

...and how to measure both at the same time

With the delivery of Solaris 10, Sun made two significant changes to how system utilization is measured. One change was to how CPU utilisation is measured.

Solaris used to (and virtually all other POSIX-like OSes still do) measure CPU utilisation by sampling it. This happened once every "clock tick". A clock tick is a kernel administrative routine which is executed once (on one CPU) for every clock interrupt that is received, which happens once every 10 milliseconds. At that time, the state of each CPU was inspected, and a "tick" would be added to one of the "usr", "sys", "wt" or "idle" buckets for that CPU.

The problem with this method is two-fold:

  • It is statistical, which is to say it is an approximation of something, derived via sampling
  • The sampling happens just before the point when Solaris looks for threads that are waiting to be woken up to do work.

Solaris 10 now uses microstate accounting. Microstates are a set of finer-grained states of execution, including USR, SYS, TRP (servicing a trap), LCK (waiting on an intra-process lock), SLP (sleeping) and LAT (on a CPU dispatch queue), although these all map back to one of the traditional USR, SYS and IDLE. These familiar three are still used to report system-wide CPU utilisation (e.g. in vmstat, mpstat, iostat), however you can see the full set of states each process is in via "prstat -m".

The key difference in system-wide CPU utilization comes in how microstate accounting is captured - it is captured at each and every transition from one microstate to another, and it is captured in nanosecond resolution (although the granularity of this is platform-dependent). To put it another way, it is event-driven, rather than statistical sampling.

This eliminated both of the issues listed above, but it is the second issue that can cause some significant variations in observed CPU utilization.

If we have a workload that does a unit of work that takes less than one clock tick, then yields the CPU to be woken up again later, it is likely to avoid being on a CPU when the sampling is done. This is called "hiding from the clock", and is not difficult to achieve (see "hide from the clock" below).

Other types of workloads that do not explicitly behave like this, but do involve processes that are regularly on and off the CPU, can appear to have different CPU utilization on Solaris releases prior to 10. The timing of their work and the timing of the sampling combine to produce an effect a bit like watching the spokes of a wheel or propeller captured on video. Another factor involved in this is how busy the CPUs are - the closer a CPU is to either idle or fully utilized, the more accurate sampling is likely to be.

What This Looks Like in the Wild

I was recently involved in an investigation where a customer had changed only their operating system release (to Solaris 10), and they saw an almost 100% increase (relative) in reported CPU utilization. We suspected that the change to event-based accounting may have been a factor in this.

During our investigations, I developed a DTrace utility which can capture CPU utilization that is like that reported by Solaris 10, then also measure it the same way as Solaris 9 and 8, all at the same time.

The DTrace utility, called util-old-new, is available here. It works by enabling probes from the "sched" provider to track when threads are put on and taken off CPUs. It is event-driven, and sums up nanoseconds the same way Solaris 10 does, but it also tracks the change in a system variable, "lbolt64", while threads are on CPU, to simulate how many "clock ticks" the thread would have accumulated. This should be a close match, because lbolt64 is updated by the clock tick routine, at pretty much the same time as the old accounting happened.
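The event-driven half of this is easy to demonstrate with a few lines of DTrace. The sketch below is only a much-simplified illustration of the idea - it is not util-old-new, which also tracks lbolt64 so the old and new methods can be compared side by side:

mashie[bash]# dtrace -qn '
sched:::on-cpu  /pid != 0/ { self->t = timestamp; }
sched:::off-cpu /self->t/  { @ms[pid, execname] = sum(timestamp - self->t); self->t = 0; }
tick-5s
{
    normalize(@ms, 1000000);
    printf("---- on-cpu milliseconds per process, last 5 seconds ----\n");
    printa("%6d %-16s %@d\n", @ms);
    trunc(@ms);
}'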

Using this utility, we were able to prove that the change in observed utilisation was pretty much in line with the way Solaris has changed how it measures utilisation. The upside for the customer was that their understanding of how much utilisation they had left on their system was now more accurate. The downside was that they now had to re-assess whether, and by how much, this changed the amount of capacity they had left.

Here is some sample output from the utility. I start the script when I already have one CPU-bound thread on a 2-CPU system, then I start up one instance of Alexander Kolbasov's "hide-from-clock", which event-based accounting sees, but sample-based accounting does not:

mashie[bash]# util-old-new 5
NCPUs = 2
Date-time              s8-tk/1000   s9-tk/1000      ns/1000
2007 Aug 16 12:12:14          508          523          540
2007 Aug 16 12:12:19          520          523          553
2007 Aug 16 12:12:24          553          567          754
2007 Aug 16 12:12:29          549          551          798
2007 Aug 16 12:12:34          539          549          810
^C

The Other Change in Utilization Measurement

By the way, the other change was to "hard-wire" the Wait I/O ("%wio" or "wt" or "wait time") statistic to zero. The reasoning behind this is that CPUs do not wait for I/O (or any other asynchronous event) to complete - threads do. Trying to characterize how much a CPU is not doing anything in more than one statistic is like having two fuel gauges on your car - one for how much fuel remains for highway driving, and another for city driving.

References & Resources

P.S. This entry is intended to cover what I have spoken about in my previous two entries. I will soon delete the previous entries.

About

Tim Cook's Weblog. The views expressed on this blog are my own and do not necessarily reflect the views of Oracle.
