Thursday Sep 04, 2008

Building GCC 4.x on Solaris

I needed to build GCC 4.3.1 for my x86 system running a recent development build of Solaris. I thought I would share what I discovered, and then improved on.

I started with Paul Beach's Blog on the same topic, but I knew it had a few shortcomings, namely:

  • No mention of some prerequisites that are listed in the GCC document Prerequisites for GCC
  • A mysterious "cannot compute suffix of object files" error in the build phase
  • No resolution of how to generate binaries that have a useful RPATH (see Shared Library Search Paths for a discussion on the importance of RPATH).

I found some help on this via this forum post, but here is my own cheat sheet.

  1. Download & install GNU Multiple Precision Library (GMP) version 4.1 (or later) from sunfreeware.com. This will end up located in /usr/local.
  2. Download, build & install MPFR Library version 2.3.0 (or later) from mpfr.org. This will also end up in /usr/local.
  3. Download & unpack the GCC 4.x base source (the one of the form gcc-4.x.x.tar.gz) from gcc.gnu.org
  4. Download my example config_make script, then edit as desired (you probably want to change OBJDIR and PREFIX, and you may want to add other configure options); a sketch of its essentials appears after this list.
  5. Run the config_make script
  6. "gmake install" as root (although I instead create the directory matching PREFIX, make it writable by the account doing the build, then "gmake install" using that account).

You should now have GCC binaries that look for the shared libraries they need in /usr/sfw/lib, /usr/local/lib and PREFIX/lib, without anyone needing to set LD_LIBRARY_PATH. In particular, modern versions of Solaris will have a libgcc_s.so in /usr/sfw/lib.

If you copy your GMP and MPFR shared libraries (which seem to be needed by parts of the compiler) into PREFIX/lib, you will also have a self-contained directory tree that you can deploy to any similar system more simply (e.g. via rsync, tar, cpio, "scp -pr", ...)

Wednesday Feb 13, 2008

Utilization - Can I Have More Accuracy Please?

Just thought I would share another Ruby script - this one takes the output of mpstat and makes it more like the output of mpstat -a, only the values are floating point. I wrote it to process mpstat data that I got from a customer. It can also cope with the date (in Unix ctime format) being prepended to every line. Here is some sample output:

CPUs minf mjf xcal  intr ithr  csw icsw migr smtx  srw syscl  usr sys  wt idl
 4 7.0 0.0 26.0 114.25 68.0 212.75 16.75 64.75 11.75 0.0 141.25 1.0 1.0 0.0 98.5
 4 0.75 0.0 929.75 2911.5 1954.75 10438.75 929.0 4282.0 715.0 0.0 6107.25 39.25 35.75 0.0 25.25
 4 0.0 0.0 892.25 2830.25 1910.5 10251.5 901.5 4210.0 694.5 0.0 5986.0 38.5 35.0 0.0 26.75
 4 0.0 0.0 941.5 2898.25 1926.75 10378.0 911.75 4258.0 698.0 0.0 6070.5 39.0 35.5 0.0 25.25
 4 0.0 0.0 893.75 2833.75 1917.75 10215.0 873.75 4196.25 715.25 0.0 5925.25 38.0 34.75 0.0 27.25

The script is here.

Interestingly, you can use this to get greater accuracy on figures like USR and SYS than you would get if you just used vmstat, sar, iostat or mpstat -a. How much more depends on the number of CPUs in your system - mpstat reports integer percentages per CPU, so averaging across N CPUs gives a resolution of 1/N of a percent (0.25% in the four-CPU sample above).

Now, if you do not have a lot of CPUs but still want greater accuracy, I have another trick. It works especially well if you are conducting an experiment and can run a command at the beginning and end of the experiment. The trick is based on the output of vmstat -s:

# vmstat -s
[...]
 54056444 user   cpu
 42914527 system cpu
1220364345 idle   cpu
        0 wait   cpu

Those numbers are "ticks" since the system booted. A tick is usually 0.01 seconds.

NEW: I have now uploaded a script that uses these statistics to track system-wide utilization.
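
Purely to illustrate the arithmetic, the delta calculation amounts to something like the rough Ruby sketch below. This is not the uploaded script - the interval handling and output format here are just for illustration.

#!/bin/env ruby -w
# Rough sketch of the delta arithmetic only - not the uploaded script.
# Capture the four CPU tick counters from "vmstat -s", sleep for the
# requested interval, capture them again, then report each state as a
# percentage of the ticks that elapsed in between.

def cpu_ticks
    ticks = {}
    `vmstat -s`.each_line do |line|
        if line =~ /^\s*(\d+)\s+(user|system|idle|wait)\s+cpu/
            ticks[$2] = $1.to_i
        end
    end
    ticks
end

interval = (ARGV[0] || 60).to_i
before = cpu_ticks
sleep interval
after = cpu_ticks

total = 0
%w[user system idle wait].each { |s| total += after[s] - before[s] }
%w[user system idle wait].each do |s|
    printf "%-6s %6.2f%%\n", s, 100.0 * (after[s] - before[s]) / total
end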

Friday Aug 31, 2007

How Event-Driven Utilization Measurement is Better than Sample-Based

...and how to measure both at the same time

With the delivery of Solaris 10, Sun made two significant changes to how system utilization is measured. One change was to how CPU utilisation is measured; the other is covered towards the end of this entry.

Solaris used to measure CPU utilisation by sampling it, and virtually all other POSIX-like OSes still do. The sampling happened once every "clock tick". The clock tick is a kernel administrative routine, executed once (on one CPU) for every clock interrupt received, which is once every 10 milliseconds. At that point the state of each CPU was inspected, and a "tick" was added to one of the "usr", "sys", "wt" or "idle" buckets for that CPU, depending on what the CPU was doing at the time.

The problem with this method is two-fold:

  • It is statistical, which is to say it is an approximation of something, derived via sampling
  • The sampling happens just before the point when Solaris looks for threads that are waiting to be woken up to do work.

Solaris 10 now uses microstate accounting. Microstates are a set of finer-grained states of execution, including USR, SYS, TRP (servicing a trap), LCK (waiting on an intra-process lock), SLP (sleeping) and LAT (waiting on a CPU dispatch queue), although for system-wide reporting these all roll up into the traditional USR, SYS and IDLE. Those familiar three are still used to report system-wide CPU utilisation (e.g. in vmstat, mpstat, iostat), but you can see the full set of states each process is in via "prstat -m".

The key difference in system-wide CPU utilization comes from how microstate accounting is captured - it is captured at each and every transition from one microstate to another, and at nanosecond resolution (although the granularity of this is platform-dependent). To put it another way, it is event-driven rather than statistical sampling.

This eliminated both of the issues listed above, but it is the second issue that can cause some significant variations in observed CPU utilization.

If we have a workload that does a unit of work that takes less than one clock tick, then yields the CPU to be woken up again later, it is likely to avoid being on a CPU when the sampling is done. This is called "hiding from the clock", and is not difficult to achieve (see "hide from the clock" below).
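
In rough terms, a hiding workload looks something like the toy Ruby sketch below. This only illustrates the principle - the hide-from-clock program used in the demonstration further down is Alexander Kolbasov's, not this - and how well it hides depends on how its wake-ups line up with the clock ticks.

#!/bin/env ruby -w
# Toy illustration of "hiding from the clock" - not Alexander Kolbasov's
# hide-from-clock program.  Burn CPU for well under one 10ms tick, then
# sleep across the next tick, so a tick-based sampler rarely catches us
# on a CPU even though we are consuming a noticeable slice of one.
TICK = 0.01     # the usual Solaris clock tick, in seconds

loop do
    deadline = Time.now + TICK / 4
    work = 0
    work += 1 while Time.now < deadline     # spin for roughly 2.5 milliseconds
    sleep TICK                              # be off-CPU when the next tick fires
end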

Other workloads that do not explicitly behave like this, but do involve processes that are regularly on and off the CPU, can appear to have different CPU utilization on Solaris releases prior to 10, because the timing of their work relative to the timing of the sampling produces an effect rather like watching the spokes of a wheel or propeller on video. Another factor is how busy the CPUs are - the closer a CPU is to either idle or fully utilized, the more accurate sampling is likely to be.

What This Looks Like in the Wild

I was recently involved in an investigation where a customer had changed only their operating system release (to Solaris 10), and they saw an almost 100% increase (relative) in reported CPU utilization. We suspected that the change to event-based accounting may have been a factor in this.

During our investigations, I developed a DTrace utility which captures CPU utilization the way Solaris 10 reports it, and at the same time measures it the way Solaris 9 and 8 did.

The DTrace utility, called util-old-new, is available here. It works by enabling probes from the "sched" provider to track when threads are put on and taken off CPUs. It is event-driven, and sums up nanoseconds the same way Solaris 10 does, but it also tracks the change in the system variable "lbolt64" while threads are on CPU, to simulate how many "clock ticks" the thread would have accumulated. This should be a close match, because lbolt64 is updated by the clock tick routine, at pretty much the same time as the old accounting happened.
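
To give a feel for the mechanism, the skeleton of the approach looks something like the simplified D sketch below. This is not util-old-new itself: it lumps all threads together - including the idle threads - and does not split usr and sys or scale the numbers per CPU the way the real script does.

#!/usr/sbin/dtrace -s
/*
 * Simplified sketch of the idea behind util-old-new (not the actual script).
 * Each thread's time on CPU is charged both in nanoseconds and in elapsed
 * clock ticks, the latter via the kernel variable lbolt64.
 */
#pragma D option quiet

sched:::on-cpu
{
        self->ts = timestamp;
        self->tick = `lbolt64;
}

sched:::off-cpu
/self->ts/
{
        @ns = sum(timestamp - self->ts);
        @ticks = sum(`lbolt64 - self->tick);
        self->ts = 0;
        self->tick = 0;
}

tick-5sec
{
        printa("on-CPU nanoseconds: %@d\n", @ns);
        printa("on-CPU clock ticks: %@d\n", @ticks);
        clear(@ns);
        clear(@ticks);
}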

Using this utility, we were able to prove that the change in observed utilisation was pretty much in line with the way Solaris has changed how it measures utilisation. The up-side for the customer was that their understanding of how much utilisation they had left on their system was now more accurate. The down-side was that they now had to re-assess whether, and by how much, this changed the amount of capacity they had left.

Here is some sample output from the utility. I start the script when I already have one CPU-bound thread on a 2-CPU system, then I start up one instance of Alexander Kolbasov's "hide-from-clock", which event-based accounting sees, but sample-based accounting does not:

mashie[bash]# util-old-new 5
NCPUs = 2
Date-time              s8-tk/1000   s9-tk/1000      ns/1000
2007 Aug 16 12:12:14          508          523          540
2007 Aug 16 12:12:19          520          523          553
2007 Aug 16 12:12:24          553          567          754
2007 Aug 16 12:12:29          549          551          798
2007 Aug 16 12:12:34          539          549          810
^C

The Other Change in Utilization Measurement

By the way, the other change was to "hard-wire" the Wait I/O ("%wio" or "wt" or "wait time") statistic to zero. The reasoning behind this is that CPUs do not wait for I/O (or any other asynchronous event) to complete - threads do. Trying to express how long a CPU spends doing nothing as more than one statistic is like having two fuel gauges on your car - one for how much fuel remains for highway driving, and another for city driving.

References & Resources

P.S. This entry is intended to cover what I have spoken about in my previous two entries. I will soon delete the previous entries.

Friday Mar 16, 2007

Simplifying "lockstat -I" Output (or Ruby Is Better Than Perl)

There, I said it. I have been a Perl scripter for nearly 20 years (since early version 3). I think Ruby has pretty much everything Perl has, and more, like:

  • It is really object-oriented (rather than Perl's bolt-on objects). I am much more likely to end up using this functionality.
  • It has operator overloading
  • It has a "case" statement
  • There is no Obfuscated Ruby Contest (though IMHO there could be one)

Anyway, enough religious argument. I want to thank my colleague Neel for putting me onto it, and now I will provide a simple example.

Post-Processing "lockstat -I"

The Ruby script below post-processes the output of "lockstat -I". Why, you ask? Well, because the output of "lockstat -I" can be tens of thousands of lines, and it does not coalesce by CPU.

The script below will coalesce across CPUs, and across PCs within a function (useful if you forgot the "-k" flag to lockstat). A very simple change will also make it coalesce by PIL, but I will leave that as an exercise for the reader.

Ruby Post-Processor Script

One thing about this is that I would write this script almost exactly the same way if I used Perl. That is another plus of Ruby - it is easy to pick up for a Perl programmer.

#!/bin/env ruby -w
# lockstatI.rb -	Simplify "lockstat -I" output
#
# http://blogs.sun.com/timc, 16 Mar 2007
#
# SYNOPSIS
#	lockstat -I [-i 971 -n <nnnnnn> ] sleep <nnn> | lockstatI.rb

PROG = "lockstatI"

#-- Once we have printed values that cover this proportion, then quit
CUTOFF = 95.0

DASHES = ('-' * 79) + "\n"

#========================================================================
#	MAIN
#========================================================================

print "#{PROG} - will display top #{CUTOFF}% of events\\n"

events = 0
period = 0
state = 0
counts = {}

#-- The classic state machine
ARGF.each_line do |line|
    next if line == "\n"
    case state
    when 0
	if line =~ /^Profiling interrupt: (\d+) events in (\d.*\d) seconds/
	then
	    puts line
	    state = 1
	    events = $1
	    period = $2
	    next
	end
    when 1
	if line == DASHES then
	    state = 2
	end
    when 2
	if line == DASHES then
	    break
	end
	f = line.split
	count = f[0]
	cpu, pil = f[5].split('+')

	#-- Coalesce PCs within functions; i.e. do not differentiate by
	#-- offset within a function.  Useful if "-k" was not specified
	#-- on the lockstat command.
	caller = f[6].split('+')[0]

	if pil then
	    caller = caller + "[" + pil + "]"
	end
	counts[caller] = counts[caller].to_i + count.to_i
    end
end

#-- Give me an array of keys sorted by descending value
caller_keys = counts.keys.sort { |a, b| counts[b] <=> counts[a] }

#-- Dump it out
printf "%12s  %5s  %5s  %s\\n", "Count", "%", "cuml%", "Caller[PIL]"
cuml = 0.0
caller_keys.each do |key|
    percent = counts[key].to_f * 100.0 / events.to_f
    cuml += percent
    printf "%12d  %5.2f  %5.2f  %s\\n", counts[key], percent, cuml, key
    if cuml >= CUTOFF then
	break
    end
end

Example Output

Free beer to the first person who can tell me what this is showing. It was not easy to comprehend the 90,000-line "lockstat -I" output prior to post-processing it, though. You get this problem when you have 72 CPUs...

lockstatI - will display top 95.0% of events
Profiling interrupt: 4217985 events in 60.639 seconds (69559 events/sec)
       Count      %  cuml%  Caller[PIL]
     1766747  41.89  41.89  disp_getwork[11]
     1015005  24.06  65.95  (usermode)
      502560  11.91  77.86  lock_set_spl_spin
       83066   1.97  79.83  lock_spin_try[11]
       62670   1.49  81.32  mutex_enter
       53883   1.28  82.60  mutex_vector_enter
       40847   0.97  83.57  idle
       40024   0.95  84.51  lock_set_spl[11]
       27004   0.64  85.15  splx
       17432   0.41  85.57  send_mondo_set[13]
       15876   0.38  85.94  atomic_add_int
       14841   0.35  86.30  disp_getwork

Wednesday Feb 14, 2007

nicstat - the Solaris Network Monitoring Tool You Did Not Know You Needed

Update

This is a placeholder entry - see the latest blog on nicstat for the current source and binaries.

About

Tim Cook's Weblog. The views expressed on this blog are my own and do not necessarily reflect the views of Oracle.
