Sunday Jan 17, 2010

More Sun Ray on OpenSolaris build 130 workarounds

One more workaround for Sun Ray on build 130. Whether this is the last remains to be seen.

Now that gdm is run via smf, gnome-session is run via ctrun(1) so that it gets its own contract, and therefore any problems it has do not result in gdm being restarted.

However the Sun Ray sessions are not started that way, so I was seeing all users logged out if just one session had a problem.

A rather trivial failure such as this:

Jan 16 11:10:47 pearson genunix: [ID 603404 kern.notice] NOTICE: core_log: metacity[7448] core dumped: /var/cores/core.metacity.7448 

would result in gdm being restarted:


[ Jan 16 00:06:08 Method "start" exited with status 0. ]
[ Jan 16 11:10:47 Stopping because process dumped core. ]
[ Jan 16 11:10:47 Executing stop method ("/lib/svc/method/svc-gdm stop"). ]
[ Jan 16 11:10:47 Method "stop" exited with status 0. ]

which in turn meant all the users were logged out. Oops.

The solution was simple but like the previous workarounds leaves your warranty in tatters!

# mv /usr/bin/gnome-session /usr/bin/gnome-session.orig
# cat > /usr/bin/gnome-session << EOF
> #!/bin/ksh -p
> exec ctrun -l child \$0.orig "\$@"
> EOF
# chmod 755 /usr/bin/gnome-session

This results in all your gnome sessions having their own contract as their parent is ctrun:

: pearson FSS 33 $; ptree $(pgrep -o -u gdm gnome-session)
22433 /usr/sbin/gdm-binary
  22440 /usr/lib/gdm-simple-slave --display-id /org/gnome/DisplayManager/Displa
    22965 ctrun -l child /usr/bin/gnome-session.orig --autostart=/usr/share/gdm
      22967 /usr/bin/gnome-session.orig --autostart=/usr/share/gdm/autostart/Lo
        23062 /usr/lib/gdm-simple-greeter
        23063 gnome-power-manager
: pearson FSS 34 $; 

and means that any failures are now ring-fenced to just that session.

Sunday Dec 13, 2009

Twitter as a sysadmin tool?

After the disk failures I have suffered I have decided to start using Twitter to monitor the health of my home server and network, as this is easier to pick up on my phone than email. Also it provides a nice log.

Using the command line twitter script from http://blogs.sun.com/sandip/entry/tweeting_from_command_line_using, modified so that it will read the account information from a file, I have changed the standard script I have for watching the system to use twitter rather than email to notify me of issues. It posts to the twitter account "syslogathome", which I now follow; you can too, but I'm sure you won't want to.

My script to check things is here.

#!/bin/ksh

export PATH=/usr/sbin:/usr/bin

function tweet
{
        echo tweeting $1
        /usr/local/bin/tweet.py "$1"
}
function tweet_services
{
        typeset zone="$1 "
        if [[ "$1" != "global" ]]
        then
                typeset zl="pfexec zlogin $1 "
        else
                typeset zl=""
        fi
        ${zl}svcs -x | nawk '/^svc:/ { s=$0 } /^Reason:/ { print s,$0 }' | while read line
        do
                tweet "$zone$line"
        done
}
function tweet_zfs
{
        zpool list -H -o name,health | while read zfs state
        do
                [[ "$state" != "ONLINE" ]] && tweet "$zfs $state"
        done
}
function tweet_disks
{
        export IFS="    "
        kstat -p -m sderr -s "Predictive Failure Analysis" | while read err value
        do
                (( $value != 0 )) && tweet "$err     $value"
        done
}
function tweet_net
{
        typeset speed=$(dladm show-linkprop -co VALUE -p speed nge0)
        if (( $speed != 1000 ))
        then
                tweet "network running in degraded state $speed" 
        fi
}
function tweet_phone
{
        if ! ping phone 1 > /dev/null 2>&1
        then
                tweet "Phone is not responding"
        fi
}

for zone in $(zoneadm list)
do
        tweet_services $zone
done
tweet_zfs
tweet_disks
tweet_net
tweet_phone
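
The script runs from cron; a sketch of the sort of crontab entry involved, with a made-up install path and schedule:

# tweet any problems every 30 minutes
0,30 * * * * /usr/local/bin/syslogathome_check >/dev/null 2>&1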

Monday Dec 07, 2009

So long NIS+, it was fun

With the push of this feature into Solaris:

6874309 Remove NIS+ from Solaris 
PSARC/2009/530 Removal of NIS+

a bit of Solaris history is made. The namespace that was to replace NIS (YP) has been survived by the system it was to replace.

NIS+ was the default name service in Solaris 2.0 and it was a long while before Sun relented and shipped a NIS (YP) server for the release. As a support engineer, however, I found NIS+ interesting, and it was reasonably secure.


Its flaws, however, limited its adoption:

  • Servers could not be in the domain they served. This was eventually fixed; however, I find it amazing that we have the same situation now with LDAP, where native LDAP clients can't be served by themselves.

  • It was hard. The technical difficulty of getting NIS+ namespaces to work, since they both used secure RPC and were used by secure RPC, gave it a reputation for being hard to set up and unreliable. The reliability problems were, however, resolved, and there were many large scale deployments that ran successfully.

  • The use of secure RPC made short-running programs very expensive if they used NIS+1, so scripts that used NIS+ were slow.

NIS+ allowed me to learn many things:

  • I wrote a NIS+2html gateway that allowed you to navigate an entire NIS+ namespace from a browser (the browser was Mosaic) using CGI.

  • An interposing library that allowed you to see all the NIS+ calls being made.

  • A TCL library giving direct access to the NIS+ library calls. This allowed very fast scripting since only one secure RPC session had to be generated.

Unfortunately none of them made it out of Sun, as this was long before we became more open.


However, its fate looked sealed when its EOF was announced in Solaris 9, but a surprise reprieve allowed it to live on in Solaris 10. It looks like the same will not be true for OpenSolaris. If you are still using NIS+ then you need to be finalizing your plans to move to LDAP!

It seems my baby is unlikely to make it to 21.


So long NIS+. It was fun.

1Each process would have to generate a secure RPC session key and negotiate a secure connection with the server. If the process then only made a single call to the server this session key would then be thrown away.

Wednesday Dec 02, 2009

Tracing getipnodesXXXX calls

When I wrote the D script to decode gethostbyname() and gethostbyaddr() library calls I fully intended to proactively write the script to do getipnodebyname() and getipnodebyaddr(), and for that matter all the getXbyY routines. However that spare five minutes never arrived, so it was only today, while investigating a real issue, that I had need of a script to decode getipnodebyaddr(). Fortunately taking the other script and modifying it to work with getipnodebyXXXX was not that hard.

It can only decode 5 addresses per call before it runs out of space for DIF, as it has to do more than the gethostbyXXXX() version: it has to cope with both IPv4 and IPv6 addresses:

dhcp-10-18-9-247-gmp03# dtrace -32 -CZs gethostbyXXXX.d -c "getent ipnodes ibm.com"
129.42.17.103	ibm.com
129.42.18.103	ibm.com
129.42.16.103	ibm.com
Look up: ibm.com:
Host: ibm.com
	h_address[0]: 0:0:0:0:0:0:0:0:0:0:ff:ff:81:2a:11:67
	h_address[1]: 0:0:0:0:0:0:0:0:0:0:ff:ff:81:2a:12:67
	h_address[2]: 0:0:0:0:0:0:0:0:0:0:ff:ff:81:2a:10:67

dhcp-10-18-9-247-gmp03# dtrace -32 -CZs getipnodebyXXXX.d -c "smbadm list"  
[*] [CJG]
[*] [cjg.uk.sun.com]
	[+x6250a-vbox10.cjg.uk.sun.com] [10.18.8.140]
[*] [CJG] [S-1-5-21-1782774743-1218725973-889210084]
[.] [DHCP-10-18-9-24] [S-1-5-21-277162072-319636157-2443625992]
Look up: x6250a-vbox10:
Host: x6250a-vbox10.cjg.uk.sun.com
	h_address[0]: 10.18.8.140

The script is here. Feel free to use it.

Tuesday Nov 24, 2009

Clear up those temporary files

One of my (many) pet peeves is shell scripts that fail to delete any temporary files they use. Included in this pet peeve are shell scripts that create more temporary files than they absolutely need; in most cases the number is 0, but there are a few cases where you really do need a temporary file. If it is temporary, make sure you always delete the file.

The trick here is to use the EXIT trap handler to delete the file. That way, if your script is killed (unless it is killed with SIGKILL), it will still clean up. Since you will be using mktemp(1) to create your temporary file, and you want to minimize any race condition where the file could be left around, you need to do (korn shell):

trap '${TMPFILE:+rm ${TMPFILE}}' EXIT

TMPFILE=$(mktemp /tmp/$0.temp.XXXXXX)

If further down the script you delete or rename the file, all you have to do is unset TMPFILE, e.g.:

mv $TMPFILE /etc/foo && unset TMPFILE
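
Putting it all together, a minimal sketch of the whole pattern (the work done with the file is just an illustration):

#!/bin/ksh

trap '${TMPFILE:+rm ${TMPFILE}}' EXIT

TMPFILE=$(mktemp /tmp/${0##*/}.temp.XXXXXX) || exit 1   # basename, not full $0

ps -ef > $TMPFILE       # work that genuinely needs a temporary file
grep nfsd $TMPFILE

# TMPFILE is still set here, so the EXIT trap removes the file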

Thursday Nov 12, 2009

The Kings of Computing use dtrace!

I've said many times that dtrace is not just a wonderful tool for developers and performance gurus. The Kings of Computing, who are of course System Admins, also find it really useful.

There is an ancient version of make called Parallel make that occasionally suffers from a bug (1223984) where it gets into a loop like this:

waitid(P_ALL, 0, 0x08047270, WEXITED|WTRAPPED)	Err#10 ECHILD
alarm(0)					= 30
alarm(30)					= 0
waitid(P_ALL, 0, 0x08047270, WEXITED|WTRAPPED)	Err#10 ECHILD
alarm(0)					= 30
alarm(30)					= 0
waitid(P_ALL, 0, 0x08047270, WEXITED|WTRAPPED)	Err#10 ECHILD

This will then consume a CPU and the user's CPU shares. The application is never going to be fixed so the normal advice is not to use it. However, since it can be NFS mounted from anywhere, I can't reliably delete all copies of it, so occasionally we will see runaway processes on our build server.

It turns out this is a snip to fix with dtrace. Simply look for cases where the wait system call returns an error with errno set to ECHILD (10); if that happens enough times in a row for the same process and that process does not call fork, then stop the process.

The script is simple enough for me to just do it on the command line:


# dtrace -wn 'syscall::waitsys:return / arg1 <= 0 && 
execname == "make.bin" && errno == 10  && waitcount[pid]++ > 20 / {

	stop();

	printf("uid %d pid %d", uid, pid) }

syscall::forksys:return / arg1 > 0 / { waitcount[pid] = 0 }'
dtrace: description 'syscall::waitsys:return ' matched 2 probes
dtrace: allowing destructive actions
CPU     ID                    FUNCTION:NAME
  2  20588                   waitsys:return uid 36580 pid 29252
  3  20588                   waitsys:return uid 36580 pid 2522
  5  20588                   waitsys:return uid 36580 pid 28663
  7  20588                   waitsys:return uid 36580 pid 29884
 10  20588                   waitsys:return uid 36580 pid 941
 15  20588                   waitsys:return uid 36580 pid 1098

This was way easier than messing around with prstat, truss and pstop!

Sunday Nov 08, 2009

Access hour by day of the week

At the request of the users, the access hours for Sun Ray users in the house have been relaxed so that on Friday and Saturday nights the Sun Rays in bedrooms can be used later.

This required that the access hours script be updated to understand the day of the week, and hence the access_hours file is also updated in an incompatible way. There is now an extra column, the first column after the name of the user, representing the days of the week when the rule is applied. The day of the week field will take a wild card '*', ranges (1-5) for Monday to Friday, or lists (1,3,5). Sunday is day 0 as any self respecting geek would have it.

The new access_hours file I have looks something like this:

    user0:0-4:0001:2300:P8.00144f7dc383
    user2:0-4:0630:2300
    user3:0-4:0630:2230
    user4:0-4:0630:2100
    user4:5-6:0630:2200

The script is still here: http://blogs.sun.com/chrisg/resource/check_access_hours
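
Purely as an illustration of the sort of day-of-week matching the script now needs, here is a hypothetical ksh function (not taken from the linked script) that tests whether today matches a field such as '*', '0-4' or '1,3,5':

function day_match
{
        typeset field=$1
        typeset dow=$(date +%w)         # 0 = Sunday
        typeset IFS=,
        typeset part

        [[ $field == "*" ]] && return 0
        for part in $field
        do
                case $part in
                *-*)    (( ${part%-*} <= dow && dow <= ${part#*-} )) && return 0;;
                *)      (( part == dow )) && return 0;;
                esac
        done
        return 1
}

# e.g. day_match 5-6 && echo "weekend rule applies"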

Friday Oct 09, 2009

Preparing for OpenSolaris @ home

Since the "nevada" builds of Solaris next are due to end soon and for some time the upgrade of my home server has involved more than a little bit of TLC to get it to work I will be moving to an OpenSolaris build just as soon as I can.

However before I can do this I need to make sure I have all the software to provide home service. This is really a note to myself so I don't forget anything.

  • Exim Mail Transfer Agent (MTA). Since I am using certain encryption routines, virus detection and spamassassin, I was unable to use the standard MTA, sendmail, when the system was originally built and have been using exim from Blastwave. I hope to build and use exim without getting all the cruft that comes with the Blastwave packages. So far this looks like it will be simple as OpenSolaris now has OpenSSL.

  • An imapd. Currently I have a Blastwave version but again I intend to build this from scratch; again the addition of OpenSSL and libcrypto should make this easy.

  • Clamav. To protect any Windows systems, and to generally not pass on viruses to others, clamav has been scanning all incoming email. Again I will build this from scratch as I already do.

  • Spamassassin. Again I already build this for nevada so building it for OpenSolaris will be easy.

  • Ddclient. Having dynamic DNS allows me to log in remotely and read email.

  • Squeezecenter. This is a big issue and in the past has proved hard to get built thanks to all the perl dependencies. It is for that reason I will continue to run it in a zone, so that I don't have to trash the main system. Clearly, with all my digital music loaded into the Squeezecenter software, this has to work.

I'm going to see if I can jump through the legal hoops that will allow me to contribute the builds to the contrib repository via Source Juicer. However as this is my spare time I don't know whether the legal reviews will be funded.

Due to the way OpenSolaris is delivered I also need to be more careful about what I install, rather than being able to choose everything. First I need my list from my laptop. Then in addition to that I'll need:

  • Samba - pkg:/SUNWsmba

  • cups - pkg:/SUNWcups

  • OpenSSL - pkg:/SUNWopenssl

Oh and I'll need the Sun Ray server software.

Thursday Sep 10, 2009

Using dtrace to find double frees

Some of the most common failures that result in customer calls are misuses of the memory allocation routines, malloc, calloc, realloc, valloc, memalign and free. There are many ways in which you can misuse these routines and the data that they return and the resulting failures often occur within the routines even though the problem is with the calling program.

I'm not going to discuss here all the ways you can abuse these routines but look at a particular type of abuse: the double free. When you allocate memory using these routines it is your responsibility to free it again so that the memory does not “leak”. However you must only free the memory once. Freeing it more than once is a bug and the results of that are undefined.

This very simple code has a double free:

#include <stdlib.h>

void
doit(int n, char *x)
{
        if (n-- == 0)
                free(x);
        else
                doit(n,x);
}
int
main(int argc, char **argv)
{
        char *x;
        char *y;

        x = malloc(100000);
        
        doit(3, x);
        doit(10, x);
}

and if you compile and run that program all appears well.


However a more realistic program could go on to fail in interesting ways, leaving you with the difficult task of finding the culprit. It is for that reason that libumem has good checking for double frees:


: exdev.eu FSS 26 $;  LD_PRELOAD=libumem.so.1 /home/cg13442/lang/c/double_free
Abort(coredump)
: exdev.eu FSS 27 $; mdb core
Loading modules: [ libumem.so.1 libc.so.1 ld.so.1 ]
> ::status
debugging core file of double_free (64-bit) from exdev
file: /home/cg13442/lang/c/double_free
initial argv: /home/cg13442/lang/c/double_free
threading model: native threads
status: process terminated by SIGABRT (Abort), pid=18108 uid=14442 code=-1
> ::umem_status
Status:         ready and active
Concurrency:    16
Logs:           (inactive)
Message buffer:
free(e53650): double-free or invalid buffer
stack trace:
libumem.so.1'umem_err_recoverable+0xa6
libumem.so.1'process_free+0x17e
libumem.so.1'free+0x16
double_free'doit+0x3a
double_free'doit+0x4d
double_free'doit+0x4d
double_free'doit+0x4d
double_free'doit+0x4d
double_free'doit+0x4d
double_free'doit+0x4d
double_free'doit+0x4d
double_free'doit+0x4d
double_free'doit+0x4d
double_free'doit+0x4d
double_free'main+0x100
double_free'_start+0x6c

> 

Good though this is, there are situations when libumem is not used and others where it can't be used1. In those cases it is useful to be able to use dtrace to do this, and anyway it is always nice to have more than one arrow in your quiver:


: exdev.eu FSS 54 $; /usr/sbin/dtrace -qs doublefree.d -c /home/cg13442/lang/c/double_free 2> /dev/null
Hit Control-C to stop tracing
double free?
	Address: 0xe53650
	Previous free at: 2009 Jun 23 12:23:22, LWP -1
	This     free at: 2009 Jun 23 12:23:22, LWP -1
	Frees 42663 nsec apart
	Allocated 64474 nsec ago by LWP -1

              libumem.so.1`free
              double_free`doit+0x3a
              double_free`doit+0x4d
              double_free`doit+0x4d
              double_free`doit+0x4d
              double_free`doit+0x4d
              double_free`doit+0x4d
              double_free`doit+0x4d
              double_free`doit+0x4d
              double_free`doit+0x4d

: exdev.eu FSS 56 $; 

If run as root you can get the real LWP values that did the allocation and the frees:

: exdev.eu FSS 63 $; pfexec /usr/sbin/dtrace -qs doublefree.d -c /home/cg1344>
Hit Control-C to stop tracing
double free?
	Address: 0xe53650
	Previous free at: 2009 Jun 23 14:21:29, LWP 1
	This     free at: 2009 Jun 23 14:21:29, LWP 1
	Frees 27543 nsec apart
	Allocated 39366 nsec ago by LWP 1

              libumem.so.1`free
              double_free`doit+0x3a
              double_free`doit+0x4d
              double_free`doit+0x4d
              double_free`doit+0x4d
              double_free`doit+0x4d
              double_free`doit+0x4d
              double_free`doit+0x4d
              double_free`doit+0x4d
              double_free`doit+0x4d

: exdev.eu FSS 64 $;

Here is the script in all its glory.

#!/usr/sbin/dtrace -qs

BEGIN
{
	printf("Hit Control-C to stop tracing\\n");
}
ERROR 
/ arg4 == DTRACEFLT_KPRIV || arg4 == DTRACEFLT_UPRIV /
{
	lwp = -1;
}

pid$target::realloc:entry,
pid$target::free:entry
{
	self->addr = arg0;
	self->recurse++;
}

pid$target::realloc:return,
pid$target::free:return
/ self->recurse /
{
	self->recurse--;
	self->addr = 0;
}

pid$target::malloc:entry,
pid$target::memalign:entry,
pid$target::valloc:entry,
pid$target::calloc:entry,
pid$target::realloc:entry,
pid$target::free:entry
/ lwp != -1 && self->lwp == 0 /
{
	self->lwp = curlwpsinfo->pr_lwpid;
}

pid$target::malloc:entry,
pid$target::calloc:entry,
pid$target::realloc:entry,
pid$target::memalign:entry,
pid$target::valloc:entry,
pid$target::free:entry
/ self->lwp == 0 /
{
	self->lwp = lwp;
}

pid$target::malloc:return,
pid$target::calloc:return,
pid$target::realloc:return,
pid$target::memalign:return,
pid$target::valloc:return
{
	alloc_time[arg1] = timestamp;
	allocated[arg1] = 1;
	free_walltime[arg1] = 0LL;
	free_time[arg1] = 0LL;
	free_lwpid[arg1] = 0;
	alloc_lwpid[arg1] = self->lwp;
	self->lwp = 0;
}

pid$target::realloc:entry,
pid$target::free:entry
/ self->recurse == 1 && alloc_time[arg0] && allocated[arg0] == 0 /
{
	printf("double free?\\n");
	printf("\\tAddress: 0x%p\\n", arg0);
	printf("\\tPrevious free at: %Y, LWP %d\\n", free_walltime[arg0],
		free_lwpid[arg0]);
	printf("\\tThis     free at: %Y, LWP %d\\n", walltimestamp,
		self->lwp);
	printf("\\tFrees %d nsec apart\\n", timestamp - free_time[arg0]);
	printf("\\tAllocated %d nsec ago by LWP %d\\n",
		timestamp - alloc_time[arg0], alloc_lwpid[arg0]);

	ustack(10);
}

pid$target::realloc:entry,
pid$target::free:entry
/ self->recurse == 1 && alloc_time[arg0] && allocated[arg0] == 1 /
{
	free_walltime[arg0] = walltimestamp;
	free_time[arg0] = timestamp;
	free_lwpid[arg0] = self->lwp;

	allocated[arg0] = 0;
}

pid$target::free:entry
/self->lwp && self->recurse == 0/
{
	self->lwp = 0;
}

1In most of the cases where it “can't” be used, it is because it finds fatal problems early on in the start up of applications. Then the application writers make bizarre claims that this is a problem with libumem and will tell you it is not supported with their app. In fact the problem is with the application.

Wednesday Sep 09, 2009

Understanding iostat

1Iostat has been around for years and, until dtrace came along and allowed us to look more deeply into the kernel, was the tool for analysing how the IO subsystem was working in Solaris. However interpreting the output has in the past proved to cause problems.

First, if you are looking at latency issues it is vital that you use the smallest time quantum for iostat that you can, which as of Solaris 10 is 1 second. Here is a sample of some output produced by “iostat -x 1”:

                  extended device statistics                 
device    r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b 
sd3       0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0 
                 extended device statistics                 
device    r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b 
sd3       5.0 1026.5    1.6 1024.5  0.0 25.6   24.8   0  23 
                 extended device statistics                 
device    r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b 
sd3       0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0 


The first thing to draw your attention to is the column “%b”, which the manual tells you is:

%b percent of time the disk is busy (transactions in progress)

So in this example the disk was “busy”, i.e. had at least one transaction (command) in progress, for 23% of the time period, i.e. 0.23 seconds, as the time period was 1 second.

Now look at the “actv” column. Again the manual says:

actv average number of transactions actively being serviced (removed from the queue but not yet completed)
This is the number of I/O operations accepted, but not yet serviced, by the device.

In this example the average number of transactions outstanding for this time quantum was 25.6. Now here is the bit that is so often missed. Since we know that all the transactions actually took place within 0.23 seconds, and were not evenly spread across the full second, the average queue depth when busy was 100/23 * 25.6, or 111.3. Thanks to dtrace and this D script you can see the actual IO pattern2:

Even having done the maths, iostat smooths out peaks in the IO pattern and thus under-reports the peak number of transactions as 103.0 when the true value is 200.

The same is true for the bandwidth. The iostat above reports 1031.5 transactions a second (r/s + w/s), but again this does not take into account that all those IO requests happened in 0.23 seconds. So the true figure for the device would be 1031.5 * 100/23, which is 4485 transactions/sec.
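
The scaling arithmetic is easy to check with bc(1), which truncates rather than rounds, hence 4484.7 against the 4485 above:

$ echo 'scale=1; 25.6 * 100 / 23' | bc
111.3
$ echo 'scale=1; 1031.5 * 100 / 23' | bc
4484.7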
If we up the load on the disk a bit then you can conclude more from the iostat:
device    r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b 
sd3       0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0 
                 extended device statistics                 
device    r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b 
sd3       5.0 2155.7    1.6 2153.7 30.2 93.3   57.1  29  45 
                 extended device statistics                 
device    r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b 
sd3       0.0 3989.1    0.0 3989.1 44.6 157.2   50.6  41  83 
                 extended device statistics                 
device    r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b 
sd3       0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0 

Since the %w column is non-zero, and from the manual %w is:

%w percent of time there are transactions waiting for service (queue non-empty)

This is telling us that the device's active queue was full. So on the third line of the above output the device's queue was full for 0.41 seconds. Since the queue depth is quite easy to find out3, and in this case was 256, you can deduce that the queue depth for that 0.41 seconds was 256. Thus the average for the remaining 0.59 seconds was (157.2-(0.41*256))/0.59, which is 88.5. The graph of the dtrace results tells a different story:


These examples demonstrate what can happen if your application dumps a large number of transactions onto a storage device. While the throughput will be fine, and if you look at iostat data things can appear OK, if the granularity of the samples is not close to your requirement for latency, any problem can be hidden by the statistical nature of iostat.

1Apologies to those who saw a draft of this on my blog briefly.

2The application creating the IO attempts to keep 200 transactions in the disk at all times. It is interesting to see that it fails, as it does not get notification of the completion of the IO until all or nearly all the outstanding transactions have completed.

3This command will do it for all the devices on your system:

echo '*sd_state::walk softstate | ::print -d -at "struct sd_lun" un_throttle' | pfexec mdb -k

however be warned the throttle is dynamic so dtrace gives the real answer.

Monday Sep 07, 2009

Recovering /etc/name_to_major

What do you do if you manage to delete or corrupt /etc/name_to_major? Assuming you don't have a backup, a ZFS snapshot or an alternative boot environment (in which case you are probably in the wrong job), you would appear to be in trouble.

First thing is not to panic. Do not reboot the system. If you do that it won't boot and your day has just got a whole lot worse. The data needed to rebuild /etc/name_to_major is in the running kernel, so it can be rebuilt from that. If your system is an x86 system it is also in the boot archive.

However, if you have no boot archive, or have overwritten it with the bad name_to_major, this script will extract it from the kernel, albeit slowly:

#!/bin/ksh
i=0
while (( i < 1000 ))
do
        print "0t$i::major2name" | mdb -k | read x && echo $x $i
        let i=i+1
done

Redirect that into a file1, then move the remains of your /etc/name_to_major out of the way and copy the file into place.
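
So the recovery looks something like this, assuming the script above was saved as /var/tmp/rebuild (a made-up name; the mdb errors go to stderr, so they stay out of the file):

# /var/tmp/rebuild > /var/tmp/name_to_major.new
# mv /etc/name_to_major /etc/name_to_major.bad
# cp /var/tmp/name_to_major.new /etc/name_to_major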

Next time make sure you have a backup, or snapshot, or alternative boot environment!

1You will see lots of errors of the form “mdb: failed to convert major number to name”; these are to be expected. They can be limited to just one by adding “|| break” to the mdb line, but that assumes that you have no holes in the major number listing, which you may have if you have removed a device, so best not to risk that.

Thursday Sep 03, 2009

Improved sd.conf format

Editing sd.conf has always been somewhat difficult, thanks to its not being a documented interface: the interface was never intended to be exposed and it was even architecture specific. Fortunately Michael documented it, which meant that it was known even if the syntax remained obscure.

However, after ARC case 2008/465 was approved and the changes pushed as part of bug 6518995, you can now use a more human readable syntax1:

sd-config-list=
        "ATA     VBOX HARDDISK", "disksort:false";

As it turns out the “disksort”2 option, along with throttle-max and throttle-min, are the ones I most often want to tune.
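
For example, a sketch of an entry setting all three for one (made-up) device, assuming the comma separated name:value pairs described in the case:

sd-config-list=
        "ATA     VBOX HARDDISK", "disksort:false, throttle-max:32, throttle-min:8";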

Here is the current list of tunables lifted straight from the ARC case.


Tunable_Name                    Commitment      Data_Type

cache-nonvolatile               Private         BOOLEAN
controller-type                 Private         UINT32
delay-busy                      Committed       UINT32
disksort                        Private         BOOLEAN
timeout-releasereservation      Private         UINT32
reset-lun                       Private         BOOLEAN
retries-busy                    Private         UINT32
retries-timeout                 Committed       UINT32
retries-notready                Private         UINT32
retries-reset                   Private         UINT32
throttle-max                    Private         UINT32
throttle-min                    Private         UINT32


1This reminds me of the change to /etc/printcap that allowed you to specify the terminal flags as strings rather than as a bitmap. All the mystery seemed to be removed!

2While I used disksort as an example for this case I can't think of any reason why you would have it enabled for a virtual disk in VirtualBox.

Thursday Aug 27, 2009

Starting remote X applications

Someone has posted a script to start a remote xterm on BigAdmin which exposes a number of issues, so I thought it would be better if Google stood some chance of finding a better answer, or at least an answer that does not rely on inherently insecure settings.

Remote X applications should be started using ssh -X so that the X traffic is encrypted, and if you add -C, compressed too, which can be a significant performance boost. So a script to do this could be handy, although to be honest knowing the ssh options, or having them set as the default in your .ssh/config, is just as easy:

: exdev.eu FSS 31 $; egrep '^(Compress|ForwardX)' ~/.ssh/config
ForwardX11 yes
Compression yes
: exdev.eu FSS 32 $; ssh -f pearson /usr/X11/bin/xterm         
: exdev.eu FSS 33 $; 

or more usefully to start graphical tools:

: exdev.eu FSS 33 $; ssh -f pearson pfexec /usr/sadm/admin/bin/dhcpmgr
: exdev.eu FSS 34 $; 

However, if you really want a script to do it, here is one that will, with no need to mess with your .ssh/config:

#!/bin/ksh
REMOTE_PATH=${REMOTE_PATH:-${PATH}}
APP=${0##*/}
if (( $# < 1 )) 
then
        print "USAGE: ${APP} host [args]" >&2
        exit 1
fi
host=$1
shift
exec /usr/bin/ssh -o ClearAllForwardings=yes -C -Xfn $host \
        PATH=${REMOTE_PATH} pfexec ${APP#r} "$@"

If you save this into a file called “rxterm” then running “rxterm remotehost” will start an xterm on the system remotehost, assuming you can ssh to that system.

More entertainingly you can save it as “rdhcpmgr” and it will start the dhcpmgr program on a remote system and securely display it on your current display (assuming your PATH includes /usr/sadm/admin/bin and your profile allows you access to that application). You can use it to start any application by simply naming it after the application in question with a preceding “r”.
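
For example, something like this (using the host from the earlier examples):

$ cp rxterm rdhcpmgr
$ ./rdhcpmgr pearson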

Tuesday Aug 04, 2009

Making a simple script faster

Many databases get backed up by simply stopping the database, copying all the data files and then restarting the database. This is fine for things that don't require 24 hour access. However, if you are concerned about the time it takes to take the backup, then don't do this:

stop_database
cp /data/file1.db .
gzip file1.db
cp /data/file2.db .
gzip file2.db
start_database

Now there are many ways to improve this, ZFS and snapshots being one of the best, but if you don't want to go there then at the very least stop doing the “cp”. It is completely pointless. The above should just be:

stop_database
gzip < /data/file1.db > file1.db
gzip < /data/file2.db > file2.db
start_database

You can continue to make it faster by backgrounding those gzips, as sketched below, if the system has spare capacity while the backup is running, but that is another point. Just stopping those extra copies will make life faster as they are completely unnecessary.
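
For completeness, the backgrounded variant would look something like this (a sketch, assuming the system really does have the spare capacity):

stop_database
gzip < /data/file1.db > file1.db &
gzip < /data/file2.db > file2.db &
wait    # both compressions must finish before the database restarts
start_database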

Friday Jul 31, 2009

Adding a Dtrace provider to the kernel

Since writing scsi.d I have been pondering whether there should really be a scsi dtrace provider that allows you to do all that scsi.d does and more. The push of 6797025 both removed the main reason for not doing this and gave impetus to do it, as scsi.d needed incompatible changes to use the new return function as the return “probe”.

This work is very much work in progress and may or may not see the light of day due to some other issues around scsi addressing; however, I thought I would document how I added a kernel dtrace provider so that, if you want to, you don't have to do so much searching1.

Adding the probes themselves is simplicity itself using the DTRACE_PROBEN() macros. Following the convention I added this macro:

#define	DTRACE_SCSI_2(name, type1, arg1, type2, arg2)			\
	DTRACE_PROBE2(__scsi_##name, type1, arg1, type2, arg2);

to usr/src/uts/common/sys/sdt.h. Then after including <sys/sdt.h> in each file I put this macro in each of the places I wanted my probes:

 	DTRACE_SCSI_2(transport, struct scsi_pkt *, pkt,
 	    struct scsi_address *, P_TO_ADDR(pkt))

The bit that took a while to find was how to turn these into a provider. To do that edit the file “usr/src/uts/common/dtrace/sdt_subr.c” and create the attribute structure2:

 static dtrace_pattr_t scsi_attr = {
 { DTRACE_STABILITY_EVOLVING, DTRACE_STABILITY_EVOLVING, DTRACE_CLASS_ISA },
 { DTRACE_STABILITY_PRIVATE, DTRACE_STABILITY_PRIVATE, DTRACE_CLASS_UNKNOWN },
 { DTRACE_STABILITY_PRIVATE, DTRACE_STABILITY_PRIVATE, DTRACE_CLASS_UNKNOWN },
 { DTRACE_STABILITY_PRIVATE, DTRACE_STABILITY_PRIVATE, DTRACE_CLASS_ISA },
 { DTRACE_STABILITY_EVOLVING, DTRACE_STABILITY_EVOLVING, DTRACE_CLASS_ISA },
 };

and add it to the sdt_providers array:


	{ "scsi", "__scsi_", &scsi_attr, 0 },

then add the probes to the sdt_args array:

	{ "scsi", "transport", 0, 0, "struct scsi_pkt \*", "scsi_pktinfo_t \*"},
	{ "scsi", "transport", 1, 1, "struct scsi_address \*", "scsi_addrinfo_t \*"},
	{ "scsi", "complete", 0, 0, "struct scsi_pkt \*", "scsi_pktinfo_t \*"},
	{ "scsi", "complete", 1, 1, "struct scsi_address \*", "scsi_addrinfo_t \*"},

Finally you need to create a file containing the definitions of the output structures, scsi_pktinfo_t and scsi_addrinfo_t and define translators for them. That goes into /usr/lib/dtrace and I called mine scsa.d (there is already one called scsi.d).

/*
 * CDDL HEADER START
 *
 * The contents of this file are subject to the terms of the
 * Common Development and Distribution License (the "License").
 * You may not use this file except in compliance with the License.
 *
 * You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
 * or http://www.opensolaris.org/os/licensing.
 * See the License for the specific language governing permissions
 * and limitations under the License.
 *
 * When distributing Covered Code, include this CDDL HEADER in each
 * file and include the License file at usr/src/OPENSOLARIS.LICENSE.
 * If applicable, add the following below this CDDL HEADER, with the
 * fields enclosed by brackets "[]" replaced with your own identifying
 * information: Portions Copyright [yyyy] [name of copyright owner]
 *
 * CDDL HEADER END
 */
/*
 * Copyright 2009 Sun Microsystems, Inc.  All rights reserved.
 * Use is subject to license terms.
 */

#pragma D depends_on module scsi
#pragma D depends_on provider scsi

inline char TEST_UNIT_READY = 0x0;
#pragma D binding "1.0" TEST_UNIT_READY
inline char REZERO_UNIT_or_REWIND = 0x0001;
#pragma D binding "1.0" REZERO_UNIT_or_REWIND

inline char SCSI_HBA_ADDR_COMPLEX = 0x0040;
#pragma D binding "1.0" SCSI_HBA_ADDR_COMPLEX

typedef struct scsi_pktinfo {
	caddr_t pkt_ha_private;
	uint_t	pkt_flags;
	int	pkt_time;
	uchar_t *pkt_scbp;
	uchar_t *pkt_cdbp;
	ssize_t pkt_resid;
	uint_t	pkt_state;
	uint_t 	pkt_statistics;
	uchar_t pkt_reason;
	uint_t	pkt_cdblen;
	uint_t	pkt_tgtlen;
	uint_t	pkt_scblen;
} scsi_pktinfo_t;

#pragma D binding "1.0" translator
translator scsi_pktinfo_t  < struct scsi_pkt *P > {
	pkt_ha_private = P->pkt_ha_private;
	pkt_flags = P->pkt_flags;
	pkt_time = P->pkt_time;
	pkt_scbp = P->pkt_scbp;
	pkt_cdbp = P->pkt_cdbp;
	pkt_resid = P->pkt_resid;
	pkt_state = P->pkt_state;
	pkt_statistics = P->pkt_statistics;
	pkt_reason = P->pkt_reason;
	pkt_cdblen = P->pkt_cdblen;
	pkt_tgtlen = P->pkt_tgtlen;
	pkt_scblen = P->pkt_scblen;
};

typedef struct scsi_addrinfo {
	struct scsi_hba_tran	*a_hba_tran;
	ushort_t a_target;	/* ua target */
	uchar_t	 a_lun;		/* ua lun on target */
	struct scsi_device *a_sd;
} scsi_addrinfo_t;

#pragma D binding "1.0" translator
translator scsi_addrinfo_t  < struct scsi_address *A > {
	a_hba_tran = A->a_hba_tran;
	a_target = !(A->a_hba_tran->tran_hba_flags & SCSI_HBA_ADDR_COMPLEX) ?
		0 : A->a.spi.a_target;
	a_lun = !(A->a_hba_tran->tran_hba_flags & SCSI_HBA_ADDR_COMPLEX) ?
		0 : A->a.spi.a_lun;
	a_sd = (A->a_hba_tran->tran_hba_flags & SCSI_HBA_ADDR_COMPLEX) ?
		A->a.a_sd : 0;
};

Again, this is just enough to get going so I can see and use the probes:

jack@v4u-2500b-gmp03:~$ pfexec dtrace -P scsi -l
   ID   PROVIDER            MODULE                          FUNCTION NAME
 1303       scsi              scsi                    scsi_transport transport
 1313       scsi              scsi                 scsi_hba_pkt_comp complete
jack@v4u-2500b-gmp03:~$ 
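
With the provider and translators in place, sketches like this become possible (assuming the build above; pkt_time is the timeout and pkt_flags the flags from the translated packet):

# pfexec dtrace -qn 'scsi:::transport
{
	printf("%s: timeout %d flags %x\n", probefunc,
	    args[0]->pkt_time, args[0]->pkt_flags);
}'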

While this all works well for parallel scsi, getting the address of devices on fibre is not clear to me. If you have any suggestions I'm all ears.

1If there is such a document already in existence then please add a comment. I will just wish I could have found it.

2These may not be the right attributes, but they get me to the point where it compiles and can be used in a PoC.

Friday Jul 24, 2009

gethrtime and the real time of day

Seeing Katsumi Inoue blogging about Oracle 10g reporting timestamps using the output from gethrtime() reminded me that I have on occasion wished I had a log to map hrtime to the current time. As Katsumi points out, the output of gethrtime() is not absolutely tied to the current time, so there is no way to take the output from it and tell when in real time the output was generated unless you have some reference point. To make things more complex the output is reset each time the system reboots.

For this reason it is useful to keep a file that contains a history of the hrtime and the real time so that any logs can be retrospectively coerced back into a readable format.

There are lots of ways to do this, but since on this blog we seem to be in dtrace mode, here is how using dtrace:

pfexec /usr/sbin/dtrace -o /var/log/hrtime.log -qn 'BEGIN,tick-1hour,END {
printf("%d:%d.%9.9d:%Y\\n",
        timestamp, walltimestamp/1000000000,
        walltimestamp%1000000000, walltimestamp);
}'


Then you get a nice file that contains three columns: the hrtime, the time in seconds since January 1st 1970, and a human readable representation of the time in the current timezone:

: s4u-10-gmp03.eu TS 39 $; cat /var/log/hrtime.log    
5638545510919736:1248443226.350000625:2009 Jul 24 14:47:06
5642145449325180:1248446826.279995332:2009 Jul 24 15:47:06
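
Given such a file, you can turn any hrtime stamp back into an approximate epoch time by interpolating from the nearest earlier reference point; a sketch with nawk, where t is the stamp you want to convert (the value here is made up):

nawk -F: -v t=5640000000000000 '
$1 <= t { ref = $1; wall = $2 }
END { printf("%d\n", wall + (t - ref) / 1000000000) }' /var/log/hrtime.log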

I have to confess, however, that using dtrace for this does not feel right, not least as you need to be root for this to be reliable. Also the C code is trivial to write, compile and run from cron, sending the output to syslog:

: exdev.eu FSS 39 $; cat  ./gethrtime_base.c
#include <sys/time.h>
#include <stdio.h>

int
main(int argc, char **argv)
{
	hrtime_t hrt = gethrtime();
	struct timeval tv;
	gettimeofday(&tv, NULL);

	printf("%lld:%d.%6.6d:%s", hrt, tv.tv_sec, tv.tv_usec,
			ctime(&tv.tv_sec));
}
: exdev.eu FSS 40 $; make ./gethrtime_base
cc    -o gethrtime_base gethrtime_base.c 
: exdev.eu FSS 41 $;  ./gethrtime_base
11013365852133078:1248444379.163215:Fri Jul 24 15:06:19 2009
: exdev.eu FSS 42 $; ./gethrtime_base | logger -p daemon.notice -t hrtime
: exdev.eu FSS 43 $;  tail -10 /var/adm/messages | grep hrtime
Jul 24 15:32:33 exdev hrtime: [ID 702911 daemon.notice] 11014939896174861:1248445953.109855:Fri Jul 24 15:32:33 2009
Jul 24 16:09:21 exdev hrtime: [ID 702911 daemon.notice] 11017148054584749:1248448161.131675:Fri Jul 24 16:09:21 2009
: exdev.eu FSS 50 $; 
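
Run from cron, the logger pipeline above gives you the permanent record; a sketch of the crontab entry, with a made-up install path:

# hourly hrtime/walltime reference points, sent to syslog
0 * * * * /usr/local/bin/gethrtime_base | logger -p daemon.notice -t hrtime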

Wednesday Jul 22, 2009

1,724,593 the highest load average ever?

As I cycled home I realised there was one more thing I could do in exploring the limits of threads and processes on Solaris: the highest load average ever. Modifying the thread creator program so that each thread does not sleep once started, but instead waits until all the threads are set up and then goes into an infinite compute loop, should get me the highest load average possible on a system. Or so you would think.

With 784001 threads the load stabilised at:

10:16am  up 18:07,  2 users,  load average: 22114.50, 22022.68, 21245.781

Which was somewhat disappointing. However an earlier run with just 780,000 threads managed to peak the load at 1,724,593 while it was exiting:

 7:44am  up 15:35,  2 users,  load average: 1724593.79, 477392.80, 188985.10

I'm still pondering how 780,000 threads can result in a load average of more than 1 million.

Friday Jul 17, 2009

10 Steps to OpenSolaris Laptop Heaven

If you have recently come into possession of a laptop onto which to load Solaris, here are my top tips:

  1. Install OpenSolaris. At the time of writing the release is 2009.06; install that, as parts of this advice may become obsolete with later releases. Do not install Solaris 10 or, even worse, Nevada. You should download the live CD, burn it onto a disk, boot that and let it install, but before you start the install read the next tip.

  2. Before you start the install, open a terminal so that you can turn on compression on the root pool once it is created. You have to keep running “zpool list” until you see the pool is created and then run (pfexec zfs set compression=on rpool). You may think that disk is big but after a few months you will be needing every block you can get. Also laptop drives are so slow that compression will probably make things faster.

  3. Before you do anything after installation take a snapshot of the system so you can always go back (pfexec beadm create opensolaris@initialinstall). I really mean this.

  4. Add the extras repository. It contains virtualbox, the flash plugin for firefox, true type fonts and more. All you need is a sun online account. See https://pkg.sun.com/register/ and http://blogs.sun.com/chrisg/entry/installing_support_certificates_in_opensolaris

  5. Decide whether you want to use the development or support repository. If in doubt choose the supported one. Sun employees get access to the support repository. Customers need to get a support contract (http://www.opensolaris.com/learn/subscriptions/). Then update to the latest bits (pfexec pkg image-update).

  6. Add any extra packages you need. Since I am now writing this retrospectively there may be things missing. My starting list is:

    • OpenOffice (pfexec pkg install openoffice)

    • SunStudio (pfexec pkg install sunstudioexpress)

    • Netbeans (pfexec pkg install netbeans)

    • Flash (pfexec pkg install flash)

    • Virtualbox (pfexec pkg install virtualbox)

    • TrueType fonts (pfexec pkg install ttf-fonts-core)

  7. If you are a Sun Employee install the punchin packages so you can access SWAN. I actually rarely use this as I have a Solaris 10 virtualbox image that I use for punchin so I can be both on and off SWAN at the same time but it is good to have the option.

  8. Add your keys to firefox so that you can browse the extras and support repositories from firefox. See http://wikis.sun.com/display/OpenSolarisInfo200906/How+to+Browse+the+Support+and+Extra+Repositories.

  9. Go to Fluendo and get and install the free mp3 decoder. They also sell a complete and legal set of decoders for the major video formats; I have them and have been very happy with them. They allow me to view the videos I have of cycling events.

  10. Go to Adobe and get acroread. I live in hope that at some point this will be in a repository either at Sun or one Adobe runs so that it can be installed using the standard pkg commands but until then do it by hand.

Enjoy.

Thursday Jul 16, 2009

784956 Processes

This week we had a customer claiming that they were unable to create more than 60,000 processes. This turned out to be due to them tuning max_nprocs, maxuprc and pidmax but not setting segkpsize, so the system would run out of “memory” before it ran into the resource limits for processes.

Tuning segkpsize to 8G resolved it but I just had to see how many processes I could get running on an M8000.

Using these settings in /etc/system:

set segkpsize=0x300000
set pidmax=999999
set maxuprc=999990
set max_nprocs=999999

and a simple forker program:

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

int
main(int argc, char **argv)
{
        pid_t pid;
        int count=0;
        while (count < (argc == 2 ? atoi(argv[1]) : 100) &&
            (pid = fork()) != -1) {
                if (pid != 0 ) {
                        /* Parent */
                        if (count % 1000 == 0)
                                printf("%d\\n", count);
                        count++;
                } else {
                        pause();
                        exit(0);
                }
        }
        if (pid < 0)
                perror("fork");
        printf("%d\\n", count);
}

I was slightly disappointed at the result:


$ ./forker 100000
1000
2000
3000
.....
782000
783000
784000
fork: Resource temporarily unavailable
784956
$

Only 784956 processes, plus the ones already running when the system booted. Trying to count them with ps obviously fails but mdb gives the real count.

# ps -e| wc
ksh: cannot fork: too many processes
# 
# echo nproc::print -d | mdb -k  
0t785025
# 

Someone must have managed to get more.

Tuesday Jun 30, 2009

Using dtrace to track down memory leaks

I've been working with a customer to try and find a memory “leak” in their application. Many things have been tried, libumem and the mdb ::findleaks command among them, all with no success.

So I was, as I am sure others before me have been, pondering if you could use dtrace to do this. Well, I think you can. I have a script that puts probes into malloc et al and counts how often they are called by each thread, and likewise how often free is called.

Then, in the entry probe of each function of the target application, note how many calls there have been to the allocators and how many to free and, with a bit of care, realloc. Then in the return probe compare the number of calls to allocate and free with the saved values and aggregate the results. The principle is that you find the routines that result in allocations that they don't clear up. This should give you a list of functions that are possible leakers, which you can then investigate1.
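
The counting boils down to something like this minimal D sketch (the real allocated.d, linked below, handles realloc, recursion and more; handle_read is just an example function name):

pid$target::malloc:entry { self->allocs++; }
pid$target::free:entry / arg0 != 0 / { self->frees++; }

pid$target::handle_read:entry
{
	self->a = self->allocs;
	self->f = self->frees;
}

pid$target::handle_read:return
{
	/* net allocations made by handle_read and its descendants */
	@net["handle_read"] = sum((self->allocs - self->a) -
	    (self->frees - self->f));
}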

Using the same technique for getting dtrace to “follow fork” that I described here, I ran this up on diskomizer, a program that I understand well and am reasonably sure does not have systemic memory leaks. The dtrace script reports three sets of results.

  1. A count of how many times each routine and its descendants have called a memory allocator.

  2. A count of how many times each routine and its descendants have called free or realloc with a non NULL pointer as the first argument.

  3. The difference between the two numbers above.

Then with a little bit of nawk to remove all the functions for which the counts are zero gives:

# /usr/sbin/dtrace -Z -wD TARGET_OBJ=diskomizer2 -o /tmp/out-us \
	-s /tmp/followfork.d \
	-Cs /tmp/allocated.d -c \
         "/opt/SUNWstc-diskomizer/bin/sparcv9/diskomizer -f /devs -f background \
          -o background=0 -o SECONDS_TO_RUN=1800"
dtrace: failed to compile script /tmp/allocated.d: line 20: failed to create entry probe for 'realloc': No such process
dtrace: buffer size lowered to 25m
dtrace: buffer size lowered to 25m
dtrace: buffer size lowered to 25m
dtrace: buffer size lowered to 25m
 
# nawk '$1 != 0 { print  $0 }' < /tmp/out.3081
allocations
           1 diskomizer`do_dev_control
           1 diskomizer`set_dev_state
           1 diskomizer`set_state
           3 diskomizer`report_exit_reason
           6 diskomizer`alloc_time_str
           6 diskomizer`alloc_time_str_fmt
           6 diskomizer`update_aio_read_stats
           7 diskomizer`cancel_all_io
           9 diskomizer`update_aio_write_stats
          13 diskomizer`cleanup
          15 diskomizer`update_aio_time_stats
          15 diskomizer`update_time_stats
          80 diskomizer`my_calloc
         240 diskomizer`init_read
         318 diskomizer`do_restart_stopped_devices
         318 diskomizer`start_io
         449 diskomizer`handle_write
         606 diskomizer`do_new_write
        2125 diskomizer`handle_read_then_write
        2561 diskomizer`init_buf
        2561 diskomizer`set_io_len
       58491 diskomizer`handle_read
       66255 diskomizer`handle_write_then_read
      124888 diskomizer`init_read_buf
      124897 diskomizer`do_new_read
      127460 diskomizer`expect_signal
freecount
           1 diskomizer`expect_signal
           3 diskomizer`report_exit_reason
           4 diskomizer`close_and_free_paths
           6 diskomizer`update_aio_read_stats
           9 diskomizer`update_aio_write_stats
          11 diskomizer`cancel_all_io
          15 diskomizer`update_aio_time_stats
          15 diskomizer`update_time_stats
          17 diskomizer`cleanup
         160 diskomizer`init_read
         318 diskomizer`do_restart_stopped_devices
         318 diskomizer`start_io
         442 diskomizer`handle_write
         599 diskomizer`do_new_write
        2125 diskomizer`handle_read_then_write
        2560 diskomizer`init_buf
        2560 diskomizer`set_io_len
       58491 diskomizer`handle_read
       66246 diskomizer`handle_write_then_read
      124888 diskomizer`do_new_read
      124888 diskomizer`init_read_buf
      127448 diskomizer`cancel_expected_signal
mismatch_count
     -127448 diskomizer`cancel_expected_signal
          -4 diskomizer`cancel_all_io
          -4 diskomizer`cleanup
          -4 diskomizer`close_and_free_paths
           1 diskomizer`do_dev_control
           1 diskomizer`init_buf
           1 diskomizer`set_dev_state
           1 diskomizer`set_io_len
           1 diskomizer`set_state
           6 diskomizer`alloc_time_str
           6 diskomizer`alloc_time_str_fmt
           7 diskomizer`do_new_write
           7 diskomizer`handle_write
           9 diskomizer`do_new_read
           9 diskomizer`handle_write_then_read
          80 diskomizer`init_read
          80 diskomizer`my_calloc
      127459 diskomizer`expect_signal

#

From the above you can see that there are two functions that create and free the majority of the allocations and the allocations almost match each other, which is expected as they are effectively constructor and destructor for each other. The small mismatch is not unexpected in this context.

However it is the vast number of functions that are not listed at all, because they and their children make no calls to the memory allocator or have exactly matching allocation and free counts, that are important here. Those are the functions that we have just ruled out.

From here it is easy to drill down on the functions that interest you, i.e. the ones where there are unbalanced allocations.


I've uploaded the files allocated.d and followfork.d so you can see the details. If you find it useful then let me know.

1Unfortunately the list is longer than you want as on SPARC it includes any functions that don't have their own stack frame due to the way dtrace calculates ustackdepth, which the script makes use of.

2The script only probes particular objects, in this case the main diskomizer binary, but you can limit it to a particular library or even a particular set of entry points based on name if you edit the script.

About

This is the old blog of Chris Gerhard. It has mostly moved to http://chrisgerhard.wordpress.com
