Thursday Aug 27, 2009

Starting remote X applications

Someone has posted a script on BigAdmin to start a remote xterm which exposes a number of issues. I thought it would be better if google stood some chance of finding a better answer, or at least an answer that does not rely on inherently insecure settings.

Remote X applications should be started using "ssh -X" so that the X traffic is encrypted, and if you add -C it is compressed as well, which can be a significant performance boost. So a script to do this could be handy, although to be honest knowing the ssh options, or having them set as the default in your .ssh/config, is just as easy:

: FSS 31 $; egrep '^(Compress|ForwardX)' ~/.ssh/config
ForwardX11 yes
Compression yes
: FSS 32 $; ssh -f pearson /usr/X11/bin/xterm         
: FSS 33 $; 

or more usefully to start graphical tools:

: FSS 33 $; ssh -f pearson pfexec /usr/sadm/admin/bin/dhcpmgr
: FSS 34 $; 
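If you would rather not turn those options on globally, a per-host stanza limits them to the machines where you actually want X forwarding. A sketch, using the "pearson" host from the example above:

```
Host pearson
        ForwardX11 yes
        Compression yes
```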

However if you really want a script to do it, here is one that will, with no need to mess with your .ssh/config:

#!/bin/ksh -p
APP=${0##*/}
REMOTE_PATH=/usr/bin:/usr/sbin:/usr/sadm/admin/bin:/usr/X11/bin  # adjust to taste
if (( $# < 1 )); then
        print "USAGE: ${APP} host [args]" >&2
        exit 1
fi
host=$1; shift
exec /usr/bin/ssh -o ClearAllForwardings=yes -C -Xfn $host \
        PATH=${REMOTE_PATH} pfexec ${APP#r} "$@"

If you save this into a file called “rxterm” then running “rxterm remotehost” will start an xterm on the system remotehost assuming you can ssh to that system.

More entertainingly you can save it as “rdhcpmgr” and it will start the dhcpmgr program on a remote system and securely display it on your current display (assuming your PATH includes /usr/sadm/admin/bin and your profile allows you access to that application). You can use it to start any application by simply naming it after the application in question with a preceding “r”.
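The name dispatch is just parameter expansion: the script reads its own name and strips the leading "r" to get the command to run remotely. A quick illustration:

```shell
# What ${0##*/} would give when the script is saved under each name,
# and what ${APP#r} then strips it down to.
APP=rdhcpmgr
echo "${APP#r}"     # -> dhcpmgr
APP=rxterm
echo "${APP#r}"     # -> xterm
```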

Wednesday Jun 25, 2008

Why check the digest of files you copy and what to do when they don't match

I'm always copying data from home to work and, less often, from work to home. Mostly these are disk images. I always check the md5 sum just out of paranoia. It turns out you can't be paranoid enough! The thing to remember if the checksums don't match is not to copy the whole file again but to use rsync. It will bring over just the blocks that are corrupt.

: FSS 43 $; scp .
diskimage.fat.bz2    100% |*****************************|  1825 MB 11:10:31    
: FSS 44 $; digest -a md5 diskimage.fat.bz2    
: FSS 45 $; ssh digest -a md5 /tank/tmp/diskimage.fat.bz2
: FSS 46 $; ls -l diskimage.fat.bz2
-rw-r-----   1 cg13442  staff    1913779931 Jun 25 08:56 diskimage.fat.bz2
: FSS 47 $; rsync diskimage.fat.bz2            
: FSS 48 $; digest -a md5 diskimage.fat.bz2                        
: FSS 49 $; 

Since my home directory is now on ZFS and I snapshot every time my card gets inserted into the Sun Ray I can now take a look at what went wrong. Using my zfs_versions script I can get a list of the different versions of the file from all the snapshots:

: FSS 56 $; digest -a md5 $( zfs_versions diskimage.fat.bz2 | nawk '{ print $NF }')
(/home/cg13442/.zfs/snapshot/user_snap_2008-06-25-05:51:57/diskimage.fat.bz2) = 0a193e0e80dbf83beabca12de09702a0
(/home/cg13442/.zfs/snapshot/user_snap_2008-06-25-05:54:44/diskimage.fat.bz2) = 7aa78dba6a7556fe10115aa5fc345bad
(/home/cg13442/.zfs/snapshot/user_snap_2008-06-25-07:05:34/diskimage.fat.bz2) = c6a77429920f258dfca1dbbd5018a69c
(/home/cg13442/.zfs/snapshot/user_snap_2008-06-25-09:06:39/diskimage.fat.bz2) = 674f69eec065da2b4d3da4bf45c7ae5f
(/home/cg13442/.zfs/snapshot/user_snap_2008-06-25-09:38:22/diskimage.fat.bz2) = 191f26762d5b48e0010a575b54746e80
: FSS 57 $;

So the last two files in the list represent the corrupted file and the good file:

: FSS 57 $; cmp -l /home/cg13442/.zfs/snapshot/user_snap_2008-06-25-09:06:39/diskimage.fat.bz2 /home/cg13442/.zfs/snapshot/user_snap_2008-06-25-09:38:22/diskimage.fat.bz2 | head -10
84262913   0 360
84262914   0  14
84262915   0 237
84262916   0  25
84262917   0 342
84262918   0 304
84262919   0  41
84262920   0  12
84262921   0 372
84262922   0  20
: FSS 58 $;

and there appear to be blocks of zeros.

: FSS 58 $; cmp -l /home/cg13442/.zfs/snapshot/user_snap_2008-06-25-09:06:39/diskimage.fat.bz2 /home/cg13442/.zfs/snapshot/user_snap_2008-06-25-09:38:22/diskimage.fat.bz2 | nawk '$2 != 0 { print $0 } $2 == 0 { count++ } END { printf("%x\n", count ) }'
23d8c
: FSS 58 $;

So at least 0x23d8c bytes were zero that should not have been. I need to see if I can reproduce this.
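The same counting pipeline can be tried on small throwaway files (awk standing in for nawk; the file names here are made up):

```shell
# Make a good copy and a copy with two bytes zeroed, then count the
# zero bytes in the corrupt file the same way as above.
printf 'abcdef\n' > /tmp/cmp_good
printf 'ab\000\000ef\n' > /tmp/cmp_bad
cmp -l /tmp/cmp_bad /tmp/cmp_good |
    awk '$2 != 0 { print $0 } $2 == 0 { count++ } END { printf("%x\n", count) }'
# -> 2
```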

Anyway, the moral is: always check the md5 digest, and if it is wrong use rsync to correct it.

Tuesday Mar 11, 2008

zone copy, aka zcp.

After messing around with zones for a few minutes it became clear that it would be really useful if there were a zcp command that worked just like scp(1) but used zlogin as the transport rather than ssh — for those cases when you are root and don't want to mess with ssh authorizations, since you know you can zlogin without a password anyway.

Specifically I wanted to be able to do:

# zcp  /etc/resolv.conf

Well it turns out that this is really easy to do. The trick is to let scp(1) do the heavy lifting for you and use zlogin(1) to act as your transport. So I knocked together this script. You need to install it in your path as “zcp” and then make a hard link to it in the same directory called “zsh”. For example:

# /usr/sfw/bin/wget --quiet
# cp /usr/local/bin/zcp 
# ln /usr/local/bin/zcp /usr/local/bin/zsh
# chmod 755  /usr/local/bin/zsh
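The script itself is no longer linked above, but the mechanism is easy to sketch: scp(1) has a -S option naming the program to use for the connection, so a wrapper that hands the connection to zlogin does the job. This is a hypothetical reconstruction, not the original script, and the option handling is a guess:

```shell
#!/bin/ksh -p
# Sketch: install as "zcp", hard-link as "zsh"; dispatch on our own name.
case ${0##*/} in
zsh)    # Called by scp as its transport: zsh [-l user] zonename command...
        while getopts l: opt; do :; done    # swallow ssh-style options
        shift $((OPTIND - 1))
        zone=$1; shift
        exec zlogin "$zone" "$@"
        ;;
zcp)    # Let scp do the heavy lifting, riding over zlogin instead of ssh.
        exec /usr/bin/scp -S "${0%/*}/zsh" "$@"
        ;;
esac
```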

Now the glorious simplicity of zcp; I'll even throw in recursive copy for free:

# zcp -r /etc/inet
ipqosconf.1.sample   100% |*****************************|  2503       00:00    
config.sample        100% |*****************************|  3204       00:00    
wanboot.conf.sample  100% |*****************************|  3312       00:00    
hosts                100% |*****************************|   286       00:00    
ipnodes              100% |*****************************|   286       00:00    
netmasks             100% |*****************************|   384       00:00    
networks             100% |*****************************|   372       00:00    
inetd.conf           100% |*****************************|  1519       00:00    
sock2path            100% |*****************************|   566       00:00    
protocols            100% |*****************************|  1901       00:00    
services             100% |*****************************|  4201       00:00    
mipagent.conf-sample 100% |*****************************|  6274       00:00    
mipagent.conf.fa-sam 100% |*****************************|  6232       00:00    
mipagent.conf.ha-sam 100% |*****************************|  5378       00:00    
ntp.client           100% |*****************************|   291       00:02    
ntp.server           100% |*****************************|  2809       00:00    
slp.conf.example     100% |*****************************|  5750       00:00    
ntp.conf             100% |*****************************|   155       00:00    
ntp.keys             100% |*****************************|   253       00:00    
inetd.conf.orig      100% |*****************************|  6961       00:00    
ntp.drift            100% |*****************************|     6       00:00    
ipsecalgs            100% |*****************************|   920       00:00    
ike.preshared        100% |*****************************|   308       00:00    
ipseckeys.sample     100% |*****************************|   510       00:00    
datemsk.ndpd         100% |*****************************|    22       00:00    
ipsecinit.sample     100% |*****************************|  2380       00:00    
ipaddrsel.conf       100% |*****************************|   545       00:00    
inetd.conf.preupgrad 100% |*****************************|  6563       00:00    
hosts.premerge       100% |*****************************|   112       00:00    
ipnodes.premerge     100% |*****************************|    61       00:00    
hosts.postmerge      100% |*****************************|   286       00:00    
ipqosconf.2.sample   100% |*****************************|  3115       00:00    
ipqosconf.3.sample   100% |*****************************|  1097       00:00    

I'll file an RFE for this to go into Solaris and update this entry when I have the number.

Update: The Bug ID is 6673792. The script now also supports zsync and zdist, although neither of those has been tested yet.

Thursday Nov 15, 2007

nautilus meet ssh://

I blogged a while back about nautilus being able to browse and access file systems via ssh using ssh://, but pointed out that no real geek would use it. Well, I now have a real situation where it is actually easier than using the command line: manipulating and uploading webrevs, where it is even easier than rsync.

Pity about nautilus crashing more than occasionally but still impressive.

Wednesday Nov 14, 2007

scp, sftp, rsync to supportfiles?

As I cycled to work today I was reflecting on how easy it was to upload my webrev onto and contrasting that with uploading and downloading files to

For those not familiar with it: when you register you have the option to upload up to 3 ssh authorized keys to the site. If you are already familiar with ssh this is a simple cut'n'paste job. Eg:

ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEAv8aRUVQTgIqhXDJ/VAHzDEGCd3slBlAUtqjw0FytjkPeLkqPUJAQ2RBS5mN9g8IXO9uzDIZ/no0HW87J1kZhGdy/gKc/7E/z6moVG0ZWzKotfQ+AYGvH5E1WXpIpCuWPqNTEo0RMIvoGR3AwJeznKU1omQwItvQ6j+zU7cGHLZc= cg13442@estale

Then I can use scp(1), sftp(1) or rsync(1) to upload files to

Now how cool would it be if that were the case for supportfiles? When our customers register they could upload their authorized keys and then just use scp et al to manage any files they need to upload. No messing with passwords, web pages or curl. I can't help thinking system admins, who are exactly the people who upload to supportfiles, would like this.

Thursday Aug 30, 2007

More ssh-add & gnome-keyring.

I've updated my gnome-keyring SSH_ASKPASS program to improve the user experience. However to get this 100% I need some changes to ssh-add so that there is a stable interface between it and the SSH_ASKPASS program.

The new version will read the environment variable GNOME_KEY_ASKPASS and if that is an executable and gnome-keyring needs to prompt for a pass phrase it will use that program to do the prompt, reading the pass phrase from standard out of that program, in the same way that SSH_ASKPASS does for ssh-add. It will then store that pass phrase in the keyring and output that to standard output for ssh-add.
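For reference, the contract between ssh-add and an SSH_ASKPASS program is simple: the program is run with the prompt text as its argument and must print the pass phrase on standard output. A stand-in helper (the paths here are entirely hypothetical) shows the shape of it:

```shell
# A fake askpass: answers every prompt from a file, the way the
# gnome-keyring helper answers from the keyring.
mkdir -p /tmp/askpass_demo
cat > /tmp/askpass_demo/askpass <<'EOF'
#!/bin/sh
# $1 is the prompt ssh-add wants shown to the user.
cat /tmp/askpass_demo/secret
EOF
chmod +x /tmp/askpass_demo/askpass
echo "easy to guess" > /tmp/askpass_demo/secret
# ssh-add effectively does this when SSH_ASKPASS is set and no tty is attached:
/tmp/askpass_demo/askpass "Enter passphrase for /home/user/.ssh/id_rsa:"
# -> easy to guess
```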

So to use this I have this in my .dtprofile file:

: FSS 184 $; tail  -11 ~/.dtprofile
if whence gnome-keyring > /dev/null
then
        export SSH_ASKPASS=gnome-keyring
        if whence xsshaskpass > /dev/null
        then
                export GNOME_KEY_ASKPASS=xsshaskpass
        fi
elif whence xsshaskpass > /dev/null
then
        export SSH_ASKPASS=xsshaskpass
fi
: FSS 185 $; 

Then of course you need the xsshaskpass program. This just pops up a window and prompts the user to enter the pass phrase. There are lots of these around and I've always wondered why Solaris does not have one (if it does, let me know). Since they are trivially simple to write I guess it is just another way of making Solaris a little bit more elite. Here is my solution. Save it as xsshaskpass somewhere in your path and make it executable:

#!/usr/bin/ksh -p
if [[ -x /usr/bin/wish ]]  ; then
# \
        exec /usr/bin/wish -f "$0" ${1+"$@"} 
elif [[ -x /usr/sfw/bin/wish8.3 ]]  ; then
# \
        exec /usr/sfw/bin/wish8.3 -f "$0" ${1+"$@"} ; else
# \
        exec wish -f "$0" ${1+"$@"} ; fi
. config -borderwidth 10
label .l -text "[lindex $argv 0]"
entry .e -width 30 -show {*}
frame .buts
button .buts.doit -text o.k. -command { puts [.e get ] ; exit 0}
button .buts.quit -text quit -command { exit 0}
pack .buts.doit .buts.quit -side left
pack .l .e .buts
tkwait window .
exit 0

The nice thing is that this is all you have to do to set it up, and it could be done by the administrator. When ssh-add first runs at login it will prompt you twice (see below) for your pass phrase, which then gets stored in the gnome-keyring. Assuming you entered the correct pass phrase, that is it. You never have to enter your ssh pass phrase again.

However, since there is no way for the gnome-keyring program to know whether the pass phrase read from the user is good, it can end up storing a bad pass phrase in the keyring. To minimize this risk it prompts the user repeatedly until the same phrase is entered twice. Once a bad pass phrase is in the keyring you have to use gnome-keyring-manager to delete it. Unfortunately, all the gnome-keyring program has to go on when a bad pass phrase is found is that it is called with the arguments “Bad passphrase, try again: ", which does not tell the program which key is bad. There are various hacks that could work around this, but I'm coming to the conclusion that the simplest would be to modify ssh-add to put the name of the file for which it is prompting into the environment of the SSH_ASKPASS program, and hence the gnome-keyring program, so that it can be read from there. With that in place it would not matter if a bad pass phrase were stored in the keyring, since when the user eventually gets the pass phrase right it would still be stored.

Friday Aug 24, 2007

ssh-add meets gnome-keyring.

Now that we have the gnome keyring for storing passwords, and the excellent pidgin uses it (so I have to type my keyring passphrase so that pidgin can log in), it was irritating that I also had to type in a passphrase for ssh.

So I wrote a small program gnome-keyring.c and a Makefile which will allow you to store your ssh passphrase in the gnome keyring and then have ssh-add use the same program to retrieve it. To use it, save the two files in a new directory and in that directory type “make”. (This kind of assumes you have a compiler.) Then install the resulting binary in your path.

Now to save away your ssh passphrase in the gnome keyring type

: principia IA 35 $; gnome-keyring -s
enter password: 
Reenter password: 
: principia IA 36 $; gnome-keyring   
easy to guess
: principia IA 37 $; 

Now if you set the environment variable SSH_ASKPASS to be gnome-keyring in your .dtprofile, eg:

export SSH_ASKPASS=gnome-keyring

and then have your gnome session call “ssh-add” when the session starts, you will be prompted for the gnome-keyring passphrase and never have to type the ssh one.

I've only tested this on nevada build 71.

Irritatingly after I wrote this I did a google search for “ssh gnome-keyring” and discovered that I had reinvented the wheel, but I enjoyed it.


I've updated the program to be able to cope with having different passphrases for different ssh keys. This is a bit of a hack as it relies on the arguments that ssh-add passes to the program to work out which key to use, but it works.

: principia IA 169 $; gnome-keyring -s /home/cg13442/.ssh/id_rsa
enter password: 
Reenter password: 
: principia IA 170 $; gnome-keyring -g /home/cg13442/.ssh/id_rsa
not so easy to guess
: principia IA 171 $; gnome-keyring -s /home/cg13442/.ssh/id_dsa
enter password: 
Reenter password: 
: principia IA 172 $; gnome-keyring -g /home/cg13442/.ssh/id_dsa
easy to guess
: principia IA 173 $; 

Saturday Jul 14, 2007

/net can be evil

I've written before about what a fan I am of the automounter. However the curse of the automounter is laziness. I covered direct mounts before; the next topic is the “/net” mount point.

Like direct automount points, “/net” has its uses; however, when it is used without thought it is evil. The thing that has to be thought about is that “/net” quickly leads you into some of the eight fallacies of distributed computing, which I reproduce here from Geoff Arnold's blog:

Essentially everyone, when they first build a distributed application, makes the following eight assumptions. All prove to be false in the long run and all cause big trouble and painful learning experiences.
  1. The network is reliable
  2. Latency is zero
  3. Bandwidth is infinite
  4. The network is secure
  5. Topology doesn’t change
  6. There is one administrator
  7. Transport cost is zero
  8. The network is homogeneous

Now since when using “/net” you are just a user, not a developer, that cuts you some slack with me. However if you are an engineer looking at crash dumps that are many gigabytes in size via “/net”, or copying from a local tape to or from an NFS server on the other side of the world (or even close, but over a WAN), you need to be aware of fallacies 1, 2, 3 and 7. Then wonder if there is a better way; invariably there is a faster, less resource-hungry way to do this if you can log in to a system closer to the NFS server.

If that is the case then you should get yourself acquainted with some of the options to ssh(1): specifically compression, X11 forwarding and, for the more adventurous, agent forwarding.

Wednesday Mar 28, 2007

Shared shell. Very cool.

If you work for Sun, or are a Sun Spectrum customer and ever want to share a login session with someone else while doing your work then check out the shared shell:

For collaboration it is fantastic, whether you are both in Sun or one of you is a customer. (I suspect that two customers could use the same thing to collaborate as well; I could see nothing in the T's & Cs that would forbid that, but I am not a lawyer.) I particularly like the ability to “write” on the screen.


Monday Feb 26, 2007

nautilus ssh://

I know no real geek would dream of using nautilus or any file browser, after all the pinnacle of UI design was the screen on the vt220, however this is so cool as to make it worth it. If only for your “friends”, so that they will stop using ftp. I know this is not new news to many but I was surprised to discover a few people who I expected to know this did not. I'll not name names.

The nautilus file browser will allow you to browse files over ssh. If you have a system “” into which you can ssh, with or without a password, then enter this URI into the nautilus location bar: “ssh://”. If you can't see the location bar, hit the pencil symbol; it allows you to type it in.

I did this on a system running Nevada build 58 and it then proceeded to ask me if I wished to store my passwords in the gnome keyring and then popped up a window displaying the root file system of my remote system.

Now if you bookmark that you can easily copy data from one place to another using a secure and if you have set up ssh to do it, compressed channel.

Very cool on a laptop when you need to copy files onto your server, or if you want to browse your home server from work.


Tuesday Jan 24, 2006

X11 forwarding 101

I got asked this today:

After I su to root how can I forward an X session over ssh?

This actually hits a huge bugbear of mine, that of people using the xhost command to open up the X server. That is bad, but if those same people also have root access, well, that is just the end. You don't need to open up all of X to get this to work. Here is the shell function I use to achieve this:

function xroot
{
        xauth extract ${1:-${TMPDIR:-/tmp}/.Xauthority} :${DISPLAY#*:} && \
        echo export DISPLAY=:${DISPLAY#*:}  && \
        echo export XAUTHORITY=${1:-${TMPDIR:-/tmp}/.Xauthority}
}

This assumes you are using MIT-MAGIC-COOKIE-1 authentication; I dabbled with the SUN-RPC authentication but that requires a fully integrated name space. All the shell function does is use the xauth command to copy the record for the current display from my .Xauthority file into /tmp and then echo the DISPLAY and XAUTHORITY variables so that they can easily be cut and pasted. It does this because typically my .Xauthority file is on an NFS-mounted home directory that root cannot access.

So here it is in action:

Sun Microsystems Inc.   SunOS 5.11      snv_30  October 2007
: FSS 1 $; xroot
export DISPLAY=:30.0
export XAUTHORITY=/tmp/cg13442/636397/.Xauthority
: FSS 2 $; su - kroot
Sun Microsystems Inc.   SunOS 5.11      snv_30  October 2007
estale <kroot> # export DISPLAY=:30.0
estale <kroot> # export XAUTHORITY=/tmp/cg13442/636397/.Xauthority
estale <kroot> # set -o vi
estale <kroot> # xterm -e sleep 10
estale <kroot> #

There is more that the shell function could do to verify that the file it chooses for the .Xauthority is safe, but I don't need that as I have TMPDIR set to be a directory that no one else has access to.
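For completeness, one way (an assumption about the setup, not taken from the post) to arrange such a private TMPDIR is:

```shell
# Create a directory only the owner can enter and point TMPDIR at it.
# mktemp -d already creates the directory mode 700; the chmod is belt
# and braces.
TMPDIR=$(mktemp -d /tmp/${LOGNAME:-unknown}.XXXXXX)
export TMPDIR
chmod 700 "$TMPDIR"
ls -ld "$TMPDIR" | cut -c1-10    # -> drwx------
```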


Tuesday Jan 18, 2005

More proxy configuration

Seeing this posting about changing proxy configurations depending on location reminded me that there are many ways to skin a cat. This one has worked ever since Netscape started supporting PAC files for proxy configuration: first on my home system, then my laptop, and now everywhere. It relies on me always creating an ssh tunnel to the proxy server (well, you would, wouldn't you, so that you can get all that HTML compressed).

Since JDS on Solaris 10 supports the use of the automatic proxy configuration as well this now works just perfectly for all the clients that will use the defaults.

Whilst not the fastest way, as it involves a name-service lookup before it connects to the web site, it is very functional, ie it works:

function FindProxyForURL(url, host) {
	if (isResolvable(host)) {
		return "DIRECT";
	} else {
		return "PROXY localhost:8080;";
	}
}

Wednesday Sep 29, 2004

A tunnel to my automounter

As I have said previously, I really like the automounter, and feel it is my geeky duty to push my luck with what can be done with it.

When in the office we have a standard automounter mount point /share/install which allows access to all the install images of all the software that we have. When at home I wanted the same thing, but getting the data from the office over ADSL. ssh will do compression, which I have found can significantly improve access times, so could I tunnel NFS over ssh and still get the automounter to do its stuff?

First you have to tunnel the NFS tcp port over ssh:

ssh -C -L 6049:nfs-server:2049  myzone.atwork

where nfs-server is the name of the NFS server holding the install images and myzone.atwork is the name of a host at work that can access the NFS server.

Now thanks to nfs URLs I can mount the file system using:

mount nfs://localhost:6049/export/install

Automounting requires a small amount of hackery to work around a feature of the automounter whereby it assumes any mount from a “local” address can be achieved using a loopback mount. So the map entry for install looks like this:

install / -fstype=xnfs nfs://

Then in /usr/lib/fs/xnfs I have a mount script:

#!/bin/ksh -p
exec /usr/sbin/mount "$@"

And voilà, I have automounting NFS over a compressed ssh tunnel, mainly because I can! I can then live upgrade my home system over NFS, via an ssh tunnel with compression, to each new build as it comes out.

This also has the pleasant side effect of being able to pause any install from that directory by using pstop(1) to stop the ssh process and prun(1) to continue it, which can be useful if I want better interactive performance over the network for a period while the upgrade continues.
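pstop(1) and prun(1) are Solaris /proc tools; the same pause-and-resume can be sketched portably with job-control signals (pstop works through /proc rather than by sending SIGSTOP, but the effect on the transfer is the same). A sleep stands in for the ssh process here:

```shell
# Pause and resume a stand-in for the ssh process carrying the tunnel.
sleep 60 &
pid=$!
kill -STOP "$pid"                     # what pstop does to the process
sleep 1
state=$(ps -o stat= -p "$pid" | cut -c1)
echo "state: $state"                  # -> state: T (stopped)
kill -CONT "$pid"                     # what prun does
kill "$pid" 2>/dev/null
```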


This is the old blog of Chris Gerhard. It has mostly moved to

