Thursday Aug 06, 2009

Monitoring mounts

Sometimes, in the course of being a system administrator, it is useful to know which file systems are being mounted, when, which mounts fail, and why. While you can turn on the automounter's verbose mode, that only answers the question for the automounter.

DTrace makes answering the general question a snip:

: FSS 24 $; cat mount_monitor.d                         
#!/usr/sbin/dtrace -qs

fbt::domount:entry
/ args[1]->dir /
{
        self->dir = args[1]->flags & 0x8 ? stringof(args[1]->dir) :
            copyinstr((uintptr_t)args[1]->dir);
}
fbt::domount:return
/ self->dir != 0 /
{
        printf("%Y domount ppid %d, %s %s pid %d -> %s", walltimestamp,
              ppid, execname, self->dir, pid, arg1 == 0 ? "OK" : "failed");
}
fbt::domount:return
/ self->dir != 0 && arg1 == 0 /
{
        printf("\n");
        self->dir = 0;
}
fbt::domount:return
/ self->dir != 0 && arg1 != 0 /
{
        printf(" errno %d\n", arg1);
        self->dir = 0;
}
: FSS 25 $; pfexec /usr/sbin/dtrace -qs  mount_monitor.d
2009 Aug  6 12:57:57 domount ppid 0, sched /share/consoles pid 0 -> OK
2009 Aug  6 12:57:59 domount ppid 0, sched /share/chroot pid 0 -> OK
2009 Aug  6 12:58:00 domount ppid 0, sched /share/newsrc pid 0 -> OK
2009 Aug  6 12:58:00 domount ppid 0, sched /share/build2 pid 0 -> OK
2009 Aug  6 12:58:00 domount ppid 0, sched /share/chris_at_play pid 0 -> OK
2009 Aug  6 12:58:00 domount ppid 0, sched /share/ws_eng pid 0 -> OK
2009 Aug  6 12:58:00 domount ppid 0, sched /share/ws pid 0 -> OK
2009 Aug  6 12:58:03 domount ppid 0, sched /home/tx pid 0 -> OK
2009 Aug  6 12:58:04 domount ppid 0, sched /home/fl pid 0 -> OK
2009 Aug  6 12:58:05 domount ppid 0, sched /home/socal pid 0 -> OK
2009 Aug  6 12:58:07 domount ppid 0, sched /home/bur pid 0 -> OK
2009 Aug  6 12:58:23 domount ppid 0, sched /net/ pid 0 -> OK
2009 Aug  6 12:58:23 domount ppid 0, sched /net/ pid 0 -> OK
2009 Aug  6 12:58:23 domount ppid 0, sched /net/ pid 0 -> OK
2009 Aug  6 12:59:45 domount ppid 8929, Xnewt /tmp/.X11-pipe/X6 pid 8935 -> OK

In particular, that last line, if repeated often, can be a clue that things are not right.
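If the script's output is captured to a file, a quick filter pulls out just the failures worth chasing. The log path and both sample lines below are made up for illustration; they merely stand in for real captured output:

```shell
# Build a small sample log standing in for captured mount_monitor.d
# output (both lines are hypothetical).
cat <<'EOF' > /tmp/mount_monitor.log
2009 Aug  6 12:57:57 domount ppid 0, sched /share/consoles pid 0 -> OK
2009 Aug  6 12:59:45 domount ppid 8929, Xnewt /tmp/.X11-pipe/X6 pid 8935 -> failed errno 13
EOF

# Count the failed mounts; a rising count for the same path is the clue.
grep -c 'failed' /tmp/mount_monitor.log    # prints 1
```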

Saturday Jul 14, 2007

/net can be evil

I've written before about what a fan I am of the automounter. However, the curse of the automounter is laziness. I have already covered direct mounts; the next topic is the “/net” mount point.

Like direct automount points, “/net” has its uses; however, when it is used without thought it is evil. What has to be thought about is that “/net” quickly leads you into some of the eight fallacies of distributed computing, which I reproduce here from Geoff Arnold's blog:

Essentially everyone, when they first build a distributed application, makes the following eight assumptions. All prove to be false in the long run and all cause big trouble and painful learning experiences.
  1. The network is reliable
  2. Latency is zero
  3. Bandwidth is infinite
  4. The network is secure
  5. Topology doesn’t change
  6. There is one administrator
  7. Transport cost is zero
  8. The network is homogeneous

Now, since when using “/net” you are just a user, not a developer, that cuts you some slack with me. However, if you are an engineer reading multi-gigabyte crash dumps via “/net”, or copying data between a local tape and an NFS server on the other side of the world (or even close by, but over a WAN), you need to be aware of fallacies 1, 2, 3 and 7. Then ask whether there is a better way: invariably there is a faster, less resource-hungry way to do this if you can log in to a system closer to the NFS server.

If that is the case then you should get yourself acquainted with some of the options to ssh(1): specifically compression, X11 forwarding and, for the more adventurous, agent forwarding.
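Those options can also be set per host in ~/.ssh/config rather than typed each time; a minimal sketch, with a hypothetical host name (ForwardAgent in particular should only be enabled for hosts you fully trust):

```
Host build-server.example.com
    # -C on the command line; a win on slow WAN links
    Compression yes
    # -X on the command line; graphical tools run remotely, display locally
    ForwardX11 yes
    # -A on the command line; hop onward without copying private keys around
    ForwardAgent yes
```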

Friday Jun 15, 2007

Direct automount points are evil

As I have said before, I love the automounter. I love that via executable maps you can get it to do things that you really should not be able to do, like restoring files to act as a poor man's HSM or archive-retrieval solution. I love that automount maps can contain more automount points to form a hierarchy (e.g. from our own automounter setup):

: FSS 10 $; nismatch chroot auto_share
chroot -fstype=autofs
: FSS 11 $; 

which allows this.

There is, however, one feature of the automounter that, while well intentioned and sometimes required, is evil.

I speak of direct automount points.

They are evil for many reasons.

  1. They pollute the name space.

  2. They can't be added dynamically.

  3. They can't be deleted dynamically.

  4. They encourage sloppy administration.

I think it is point 4 that really wins it for me, and I have an example I found today. I found it because my lab system had mounted a load of NFS mount points that it should not have. Now if the server goes away my session hangs, even though I had no need to mount the server in the first place. The reason it had mounted them was name-space pollution: “find . -xdev ....” triggers the mount of direct mount points but not indirect mount points1.

What do I mean by sloppy administration? I'll take the example from today:

: FSS 8 $; niscat auto_direct.org_dir | grep cdrom
/cdroms/jumpstart foohost:/export/jumpstart
/cdroms/prefcs -rw foohost:/export/prefcs
/cdroms/fcs -rw foohost:/export/fcs
: FSS 9 $; 

Instead of just one indirect mount point, with the entries added to an indirect automount table, we have an extra directory that contains only mount points, which is really the definition of an indirect mount point. That in our case the mount points could live under an existing mount point just adds to the irritation. So now, to fix this, the automount table has to be updated and then every system in the domain needs rebooting, or at the very least the automount command run.
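A sketch of the indirect alternative, with a hypothetical map name (auto_cdroms) but the host and paths taken from the listing above: one entry in the master map, and the three mounts moved into their own indirect map, after which entries can be added and removed without touching every client:

```
# In auto_master: one indirect mount point instead of three direct entries
/cdroms         auto_cdroms

# In the new indirect map auto_cdroms: keys are relative to /cdroms
jumpstart       foohost:/export/jumpstart
prefcs          -rw foohost:/export/prefcs
fcs             -rw foohost:/export/fcs
```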

It should be the goal of every administrator to have an empty auto_direct table.

1Indirect mounts don't get triggered because they sit below another mount point, so the find stops before reading the directory. So even if the mount point is browsable the mounts don't get triggered.

Wednesday Sep 29, 2004

A tunnel to my automounter

As I have said previously, I really like the automounter, and feel it is my geeky duty to push my luck with what can be done with it.

In the office we have a standard automounter mount point, /share/install, which allows access to the install images of all the software that we have. When at home I wanted the same thing, but fetching the data from the office over ADSL. However, ssh can do compression, which I have found can significantly improve access times. Could I tunnel NFS over ssh and still get the automounter to do its stuff?

First you have to tunnel the NFS TCP port over ssh:

ssh -C -L 6049:nfs-server:2049  myzone.atwork

where nfs-server is the name of the NFS server holding the install images and myzone.atwork is the name of a host at work that can access the NFS server.

Now, thanks to NFS URLs, I can mount the file system using:

mount nfs://localhost:6049/export/install

Automounting requires a small amount of hackery to work around a feature of the automounter whereby it assumes any mount from a “local” address can be achieved using a loopback mount. So the map entry for install looks like this:

install / -fstype=xnfs nfs://

Then in /usr/lib/fs/xnfs I have a mount script:

#!/bin/ksh -p
# Hand everything back to the generic mount command, which understands nfs:// URLs
exec /usr/sbin/mount "$@"

And voilà, I have automounting NFS over a compressed ssh tunnel, mainly because I can! I can then live-upgrade my home system over NFS, via an ssh tunnel with compression, to each new build as it comes out.

This also has the pleasant side effect of letting me pause any install from the directory, by using pstop(1) to stop the ssh process and prun(1) to continue it, which can be useful if I want better interactive performance over the network for a period while the upgrade continues.
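The pause trick can be sketched with plain signals: pstop(1) and prun(1) stop and continue a process much as SIGSTOP and SIGCONT do, so as a portable stand-in for the ssh tunnel process, here a background sleep gets stopped and resumed:

```shell
# A background 'sleep' stands in for the long-running ssh tunnel.
sleep 60 &
pid=$!

kill -STOP "$pid"     # pause it, as pstop(1) would; no data flows
sleep 1               # give the state change a moment to land
state_stopped=$(ps -o state= -p "$pid" | tr -d ' ')

kill -CONT "$pid"     # resume it, as prun(1) would
sleep 1
state_running=$(ps -o state= -p "$pid" | tr -d ' ')

echo "stopped=$state_stopped running=$state_running"
kill "$pid" 2>/dev/null
```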


This is the old blog of Chris Gerhard.