Thursday Sep 06, 2007

Sharing some thoughts on Seedcamp

I attended Seedcamp on Wednesday 5th September as a guest on one of the many panels they held - this one was around APIs.

Summary: Fascinating, a huge opportunity for Sun and the developers.

Seedcamp aims to bring together developers, venture capitalists, business mentors and technology partners to seed the next generation of network service businesses.

As someone who works for Sun I sometimes wonder why more people don't take advantage of our great technology, especially as most of it, such as OpenSolaris, is free. I suspect that (a) people don't know what we have, and/or (b) they can't find anyone to talk to about it.

Like UNIX, we have a powerful box of tools but I think we too often leave it up to the ingenuity of the user to make the most of it. The opportunity is getting people to realise we have some great stuff and showing them how it can help.

As for my time there ... amazingly, "Web 2.0" wasn't mentioned once.

I sat on a panel with various people representing Google, Microsoft, Yahoo!, Mozilla and BT. They were all working in web services in some fashion:

  • Google (Google APIs)
  • Microsoft (Windows Live API)
  • Yahoo! (YUI)
  • Mozilla (Firefox extensions)
  • BT (Web21C)

The panel was all about start-ups finding the best way to spend their hard-won investment capital. Facebook app? Firefox extension? Using YUI, Windows Live or Google? Desktop or web service?

Given that I'm a Solaris engineer - which essentially means I'm a systems programmer - I was the odd one out on the panel. Clearly Java is something we'd very much like to promote but that wasn't the topic of conversation.

Where I think Sun can win is catching them as they grow by offering them scalability, DRAS and an open platform with all the tools they would expect from a UNIX-based OS. Renting a service from an ISP will take you so far but if you're hitting exponential growth you need to get some solid technology in there.

To do that we need to do more to get {Open}Solaris on start-ups' laptops and servers. The Mozilla guy (unprompted) advocated OpenSolaris as DTrace was such a powerful tool when it came to debugging, performance and scalability.

From a personal perspective I was amazed at the wild productivity of this group of people. Many were working on multiple projects at the same time, and their personal networks were diverse, meeting each other as they flitted from job to project to web community.

What struck me was that despite their focus being web-based services, they meet frequently in person: Foocamp, Barcamp, unconferences and just partying.

I was surprised that there was such a busy and diverse community of web developers in London and the UK. It's not just the Bay Area.

Following the panel session I had a wide-ranging corridor conversation with a few people including the Yahoo! guy and someone working on multiple projects including School of Everything. Thanks for your time guys.

Inspiring stuff.

Wednesday Jun 27, 2007

OpenSolaris Troubleshooting - The Unofficial Tourist Guide

I had the pleasure of presenting to the London OpenSolaris User Group (LOSUG) in June. The topic was OpenSolaris Troubleshooting - The Unofficial Tourist Guide.

My experience of debugging problems on non-Solaris systems was one of 'I know what I want to do, but I don't know how they do it', which led me to paraphrase the famous L.P. Hartley quote: "Diagnosing problems on unfamiliar operating systems is a foreign country: they do things differently there".

I also wanted to link the essence of good troubleshooting to the practical details of getting the data. I'm a programme leader for Sun Global Resolution Troubleshooting (SGRT) - Sun's implementation of the Kepner-Tregoe processes. Linking the abstract troubleshooting process to gathering actual useful data is one of the trickier aspects when it comes to teaching SGRT.

Suffice to say, OpenSolaris has a rich set of diagnostic tools - not just DTrace - and this talk covers the ones I use regularly.

The PDF of the presentation, which includes an update to the slides on how dbx and mdb deal with core files from other systems, is available on the LOSUG OpenSolaris pages.

It's a presentation I plan to keep updated so if you have any comments please let me know.

Thursday Jun 14, 2007

Fixing terminfo so that terminal applications like vi(1) restore the screen

Ever noticed in Solaris that when you finish using vi(1) the terminal screen is not restored? Similarly true for applications like screen(1) and prstat(1M).

This was irritating me somewhat a few months ago but I've only now just logged bug 6569261 (xterm, xtermc, xterm-color: smcup/rmcup definitions are incorrect/missing) on it.

Essentially the problem is that the smcup and rmcup definitions don't work with our gnome-terminal and xterm implementations.

terminfo(4) defines smcup and rmcup as follows:

       exit_ca_mode       rmcup  te       String to end programs
                                          that use cup

       enter_ca_mode      smcup  ti       String to begin programs
                                          that use cup

These sequences are typically used by terminal applications that control the cursor (eg vi). smcup switches to an alternate screen buffer, rmcup switches back.

The practical effect of this is that the scrolling screen buffer is replaced by a non-scrolling screen buffer when running vi and restored when vi exits.

The rmcup/smcup terminfo descriptions fail to work on the Solaris xterm and gnome-terminal implementations. The scrolling buffer is not restored and, in some cases, strange escape sequences are seen (eg mysterious '0' characters).

The Solaris xtermc and xterm-color terminfo defines:


These escape sequences don't make much sense for xterm. The xterm terminfo has nothing for rmcup/smcup.

The Sun Freeware terminfo definitions are more as expected:

  xterm-color rmcup=\E[2J\E[?47l\E8,

For 'standard' xterm: smcup saves the cursor and switches to the alternate screen buffer; rmcup clears the screen, returns to the normal screen buffer and restores the cursor position.

  xterm       rmcup=\E[?1049l

The 1049 sequence is an XFree86 addition which our xterm doesn't appear to support. Our current gnome-terminal (snv_65) does support this sequence.

  xtermc      rmcup=\E@0\E[?4r

This is as broken as our definition.

The workaround is to define smcup and rmcup appropriately. Here's one way to fix it:

  1. mkdir $HOME/terminfo
  2. TERM=xterm infocmp > xterm.src
  3. Edit xterm.src and add/change rmcup and smcup:
    rmcup=\E[2J\E[?47l\E8, smcup=\E7\E[?47h,
    every entry must be terminated with a comma (,).
  4. TERMINFO=$HOME/terminfo tic xterm.src
  5. TERMINFO=$HOME/terminfo TERM=xterm xterm

You should now find that your scrolling terminal window is restored after 'fixed' window terminal applications finish.

Thursday May 03, 2007

Open or Closed Discussions? Mail alias usage @ Sun - focus is the key

Chris recently blogged on Open or Closed Discussions? - his thoughts on how heavily Sun people use mail aliases and, in particular, whether these discussions should be open (eg on OpenSolaris forums) or closed (internal aliases). Essentially I believe there are good reasons to use both - which was Chris's conclusion too, unless I'm very much mistaken.

I see mail aliases being effective communication devices when the participants:

  • Share a common interest
  • Share common goal(s)

As most mail aliases represent specific topics the former is usually satisfied but it's the latter that is often overlooked.

For example, if I'm trying to resolve a customer issue on a specific topic then I'm more likely to get a better technical discussion on an internal support alias than an external one. Why? Because, as we all work for Sun, we share common goals. In particular satisfying the customer and thus growing our service revenue.

Don't be mistaken - the OpenSolaris forums are an excellent technical resource and you might get a better answer there from both Sun and non-Sun contributors. I'm deeply impressed by the Sun/Solaris technical know-how of the OpenSolaris community that lies outside Sun. However, in the long run it's in Sun's interest to develop strong, internal, focussed technical communities as that's an important part of how we stay in business.

The key word is focus. When I work through my TODO list my priorities are more focussed on satisfying paying customers. I spend a greater portion of my mail alias time responding to internal requests than on OpenSolaris aliases. It's not to say I don't contribute to OpenSolaris aliases - it's just I don't spend as much time there.

Shared interest and common goals = focus.

Wednesday Feb 28, 2007

Process exit status explored: SMF and what does shell's $? represent?


All UNIX shells I've seen allow access to the exit value of the last command through the $? macro. The ksh(1) man page neatly states:

    The value of a simple-command is its exit status if it terminates
    normally. If it terminates abnormally due to receipt of a signal,
    the value is the signal number plus 128.  See signal.h(3HEAD) for
    a list of signal values. Obviously, normal exit status values 129
    to 255 cannot be distinguished from abnormal exit caused by
    receiving signal numbers 1 to 127.

How is exit status set? And how much can we derive from it? The reason I asked this question was I was trying to understand SMF's handling of process exit status.

SMF digression

I was looking at the SMF method failure messages. There are three method failed error paths in usr/src/cmd/svc/startd/method.c method_run():

signal:  "%s: Method \"%s\" failed due to signal %s.\n"
exit():  "%s: Method \"%s\" failed with exit status %d.\n"
other:   "%s: Method \"%s\" failed with exit status %d.\n"

What I was confused about is under what conditions the third error path can ever be taken. Note the ambiguity in that the last two error messages are identical.

Exit status

Exit status is documented in the wait(3c) man page. This is what we have:

      In the  following,  status  is  the  object  pointed  to  by

        o  If the child process terminated due to an _exit() call,
           the  low  order 8 bits of status will be 0 and the high
           order 8 bits will contain the low order 7 bits  of  the
           argument  that the child process passed to _exit(); see

        o  If the child process terminated due to  a  signal,  the
           high order 8 bits of status will be 0 and the low order
           7 bits will contain the number of the signal that caused
           the  termination.  In  addition, if  WCOREFLG is set, a
           "core   image"   will   have   been    produced;    see
           signal.h(3HEAD) and wait.h(3HEAD).

In other words:

If the lower 8 bits are zero, we called exit() or fell off the end of main(); the exit status is in the top 8 bits.

If the lower 8 bits are not zero and the upper 8 bits are zero, we took a signal; if the top bit (WCOREFLG) is set in the lower 8 bits we also core dumped. The signal taken is in the lower 7 bits.

What this doesn't document is when the lower 8 bits are non-zero and the upper 8 bits are non-zero. The exit status is set by wstat():

/*
 * convert code/data pair into old style wait status
 */
int
wstat(int code, int data)
{
	int stat = (data & 0377);

	switch (code) {
	case CLD_EXITED:
		stat <<= 8;
		break;
	case CLD_DUMPED:
		stat |= WCOREFLG;
		break;
	case CLD_KILLED:
		break;
	case CLD_TRAPPED:
	case CLD_STOPPED:
		stat <<= 8;
		stat |= WSTOPFLG;
		break;
	case CLD_CONTINUED:
		stat = WCONTFLG;
		break;
	default:
		cmn_err(CE_PANIC, "wstat: bad code");
		/* NOTREACHED */
	}

	return (stat);
}

If the lower 8 bits and the upper 8 bits are non-zero it looks like a CLD_STOPPED or CLD_TRAPPED. Reading the source further the upper 8 bits will be the signal that caused it.

What you also have to bear in mind is that applications interpret the status returned from wait(3c). For ksh(1) this is as documented above, but what of SMF?

SMF handling of exit status

The SMF code looks like this (with comments from me):

   if (!WIFEXITED(ret_status)) {

       WIFEXITED tests for ((int)((stat)&0xFF) == 0)

       We didn't exit cleanly, so let's find out why

     if (WIFSIGNALED(ret_status)) {

       WIFSIGNALED does this test:
       ((int)((stat)&0xFF) > 0 && (int)((stat)&0xFF00) == 0)

       We already know the first test is non-zero and, assuming
       I've not confused my types, ANDing 0xFF with a signed int
       will always be >=0 as we implicitly cast to the larger
       type (unsigned), AND with 0xFF and then explicitly cast
       back to signed. In other words, it's something of a no-op.

       The second test is more interesting as it relates to the
       overloading of the exit status. If the upper 8 bits are
       zero then it's a simple signal.

       Log: "%s: Method \"%s\" failed due to signal %s.\n"

       Signal is derived using WTERMSIG(), see
       wait.h(3head). Currently that's:

       #define	WTERMSIG(stat)		((int)((stat)&0x7F))

     } else {

       We can only reach this clause if we have something more
       complex than a signal, like the child stopping.

       Log: "%s: Method \"%s\" failed with exit status %d.\n"

       The exit status is: ((int)(((stat)>>8)&0xFF))

       As noted above, I believe the value is the signal that
       caused the CLD_STOPPED or CLD_TRAPPED.

     ** Jump out **

   Normal exit
   *exit_code = WEXITSTATUS(ret_status);
   if (*exit_code != 0) {

     Log: "%s: Method \"%s\" failed with exit status %d.\n"



There's an ambiguity in the SMF 'method failed' code in that the same message is logged for a non-zero exit status and for the child being stopped or trapped.

As far as shells are concerned, the relevant man pages need to be checked for how the exit status maps to the $? value presented by the shell.

Friday Feb 16, 2007

Dereferencing user space pointers in DTrace(1M) - a worked example


Every time I write a DTrace script I learn something new. This time it was all about exploring a user process problem and using copyin().

Take a look at bug 6523693 (mountd core dump in nfsauth_prog on S10). I could have added some debugging to the mountd source or tinkered about with truss(1) but it was (a) Friday afternoon, and (b) it's not the Solaris 10 way :-)

What I was trying to find ...

Having explored the problem and the core dump a little I found that check_client() was being called with a NULL clnames pointer.

in_access_list(ffbffc60, 0, ffa33, 1, 2a7d0, 19400)
check_client_old(0, ffbffc60, 0, 1, ffa30, ffa33)
nfsauth_access_svc(ffbffcd4, ffbffcd0, 2d330, ff32f928, 4b59c, cfb40)
nfsauth_prog+0x98(fdb38, fdb38, 2d330, 0, ff0d2000, 1)
_svc_prog_dispatch+0x184(fdb38, fdc18, 2d330, 1, 162a0, 1)
_svc_run_mt+0x640(fdb38, ff32ce34, 0, ff322f58, ffffff39, ff327054)
svc_run+0x94(193dc, ff327018, 2a008, ff322f58, 4f788, 0)
main+0x754(19000, 19000, 12c00, 18400, 3, 19000)
_start+0x108(0, 0, 0, 0, 0, 0)

The OpenSolaris version of nfsauth_access_svc looks like this:

static void
nfsauth_access(auth_req *argp, auth_res *result)
{
	struct netconfig *nconf;
	struct nd_hostservlist *clnames = NULL;
	struct netbuf nbuf;
	struct share *sh;
	char tmp[MAXIPADDRLEN];
	char *host = NULL;

	result->auth_perm = NFSAUTH_DENIED;

	/*
	 * Convert the client's address to a hostname
	 */
	nconf = getnetconfigent(argp->req_netid);
	if (nconf == NULL) {
		syslog(LOG_ERR, "No netconfig entry for %s", argp->req_netid);
		return;
	}

	nbuf.len = argp->req_client.n_len;
	nbuf.buf = argp->req_client.n_bytes;

	if (netdir_getbyaddr(nconf, &clnames, &nbuf)) {
		host = &tmp[0];
		if (strcmp(nconf->nc_protofmly, NC_INET) == 0) {
			struct sockaddr_in *sa;

			/* LINTED pointer alignment */
			sa = (struct sockaddr_in *)nbuf.buf;
			(void) inet_ntoa_r(sa->sin_addr, tmp);
		} else if (strcmp(nconf->nc_protofmly, NC_INET6) == 0) {
			struct sockaddr_in6 *sa;

			/* LINTED pointer */
			sa = (struct sockaddr_in6 *)nbuf.buf;
			(void) inet_ntop(AF_INET6, sa->sin6_addr.s6_addr,
				    tmp, INET6_ADDRSTRLEN);
		}
		clnames = anon_client(host);
	}

	/*
	 * Now find the export
	 */
	sh = findentry(argp->req_path);
	if (sh == NULL) {
		syslog(LOG_ERR, "%s not exported", argp->req_path);
		goto done;
	}

	result->auth_perm = check_client(sh, &nbuf, clnames, argp->req_flavor);

The combination of static function names and pointers to pointers made this somewhat hard to debug using the pid provider. In particular, I wanted to decode struct nd_hostservlist *clnames.

Dereferencing a user address pointer

We first decode the call to netdir_getbyaddr(nconf, &clnames, &nbuf) which graciously provides us with the address of the pointer:

pid$1::netdir_getbyaddr:entry
/self->nfsauth == 1/
{
	self->arg1 = arg1;
	self->clnames = *(uint32_t *)copyin(self->arg1, 4);
	printf("clnames %#p", self->clnames);
}

We save the value of arg1 (&clnames) as a thread-local variable so that we can use it later to monitor what happens to that pointer. As it's a user address pointer we can't look at it directly; we have to copy it into the kernel.

copyin() copies from the user address to DTrace scratch space. It returns a DTrace pointer to the result. At this point we have to assume that we're examining a 32-bit process - pointers are 32-bit - so the copyin() copies 4 bytes. To use this further we need to cast it to what it represents ... in this case a pointer to a 32-bit number. To examine that value we dereference the DTrace pointer. The result is what clnames is pointing to - which should be NULL at this point.

On the return from that function we can check what clnames is set to, and decode it if possible. As DTrace doesn't have conditionals we have to use predicates:

pid$1::netdir_getbyaddr:return
/self->nfsauth == 1 && *(uint32_t *)copyin(self->arg1, 4)/
{
	printf("returned %d, ", arg1);

	self->clnames = *(uint32_t *)copyin(self->arg1, 4);
	printf("clnames %#p\n", self->clnames);

	this->h_cnt = (int *)copyin(self->clnames, 4);
	this->h_hostservs = (uint32_t *)copyin(self->clnames + 4, 4);
	printf("h_cnt %d, ", *this->h_cnt);
	printf("h_hostservs %#p\n", *this->h_hostservs);

	this->h_hostp = (uint32_t *)copyin(*this->h_hostservs, 4);
	this->h_host = copyinstr(*this->h_hostp);
	printf("h_host %s", this->h_host);
}

pid$1::netdir_getbyaddr:return
/self->nfsauth == 1 && ! *(uint32_t *)copyin(self->arg1, 4)/
{
	printf("returned %d, ", arg1);

	self->clnames = *(uint32_t *)copyin(self->arg1, 4);
	printf("clnames %#p", self->clnames);
}

The predicate dereferences the clnames pointer and only decodes the result if it's non-NULL.

The probe actions further demonstrate dereferencing user pointers. The structure we're picking apart looks like this:

struct nd_hostservlist {
	int			h_cnt;		/* number of nd_hostservs */
	struct nd_hostserv	*h_hostservs;	/* the entries */
};

As self->clnames is a DTrace pointer to a 32-bit user address we can use copyin() again to get what it's pointing to ... in this case a structure. The first member of which is an int ...

	this->h_cnt = (int *)copyin(self->clnames, 4);

The second member of which is another pointer to another structure:

	this->h_hostservs = (uint32_t *)copyin(self->clnames + 4, 4);

The latter can be dereferenced again using copyin() to decode:

struct nd_hostserv {
	char		*h_host;	/* the host name */
	char		*h_serv;	/* the service name */
};

This is the complete script:

#!/usr/sbin/dtrace -s

/*
 * This DTrace script traces calls to netdir_getbyaddr() et al. in
 * nfsauth_access_svc(). It's somewhat complicated by some of the
 * functions being static and the general complexity of manipulating
 * user pointers in DTrace's kernel space. It handles both resolved
 * and unresolved names.
 *
 * This was written to debug 6523693 (mountd core dump in nfsauth_prog
 * on S10) where clnames was NULL on calling check_client(). It should
 * be started:
 *	# ./mountd-auth-debug.d $(pgrep -x mountd)
 */

pid$1::nfsauth_access_svc:entry
{
	self->nfsauth = 1;
}

pid$1::nfsauth_access_svc:return
/self->nfsauth == 1/
{
	self->nfsauth = 0;
}

pid$1::netdir_getbyaddr:entry
/self->nfsauth == 1/
{
	self->arg1 = arg1;
	self->clnames = *(uint32_t *)copyin(self->arg1, 4);
	printf("clnames %#p", self->clnames);
}

pid$1::netdir_getbyaddr:return
/self->nfsauth == 1 && *(uint32_t *)copyin(self->arg1, 4)/
{
	printf("returned %d, ", arg1);

	self->clnames = *(uint32_t *)copyin(self->arg1, 4);
	printf("clnames %#p\n", self->clnames);

	this->h_cnt = (int *)copyin(self->clnames, 4);
	this->h_hostservs = (uint32_t *)copyin(self->clnames + 4, 4);
	printf("h_cnt %d, ", *this->h_cnt);
	printf("h_hostservs %#p\n", *this->h_hostservs);

	this->h_hostp = (uint32_t *)copyin(*this->h_hostservs, 4);
	this->h_host = copyinstr(*this->h_hostp);
	printf("h_host %s", this->h_host);
}

pid$1::netdir_getbyaddr:return
/self->nfsauth == 1 && ! *(uint32_t *)copyin(self->arg1, 4)/
{
	printf("returned %d, ", arg1);

	self->clnames = *(uint32_t *)copyin(self->arg1, 4);
	printf("clnames %#p", self->clnames);
}

pid$1::anon_client:entry
/self->nfsauth == 1/
{
	printf("host '%s'", copyinstr(arg0));
}

pid$1::anon_client:return
/self->nfsauth == 1/
{
	printf("returned %#p\n", arg1);

	this->h_cnt = (int *)copyin(arg1, 4);
	this->h_hostservs = (uint32_t *)copyin(arg1 + 4, 4);
	printf("h_cnt %d, ", *this->h_cnt);
	printf("h_hostservs %#p\n", *this->h_hostservs);

	this->h_hostp = (uint32_t *)copyin(*this->h_hostservs, 4);
	this->h_host = copyinstr(*this->h_hostp);
	printf("h_host %s", this->h_host);
}

pid$1::check_client:entry
/self->nfsauth == 1/
{
	printf("clnames %#p\n", arg2);

	this->h_cnt = (int *)copyin(arg2, 4);
	this->h_hostservs = (uint32_t *)copyin(arg2 + 4, 4);
	printf("h_cnt %d, ", *this->h_cnt);
	printf("h_hostservs %#p\n", *this->h_hostservs);

	this->h_hostp = (uint32_t *)copyin(*this->h_hostservs, 4);
	this->h_host = copyinstr(*this->h_hostp);
	printf("h_host %s", this->h_host);
}


As we say in Perl, TIMTOWTDI (there's more than one way to do it). I did this as much as an exercise as anything else but, think about this:

I can send this script to any customer running Solaris 10 and decode their mountd behaviour associated with the bug. No debug binaries, no intrusive and wasteful trussing or complicated debugger procedures.

As any DTrace person will tell you, most DTrace scripts are evolved as they explore the problem. DTrace is a superb observability tool but like all observations it matters where you look and having found one clue you follow the trail to the next.

Think: Treasure trail ... enjoy :-)

PS Bonus points for anyone who can spot the obvious lack of error checking in that code

Thursday Feb 01, 2007

Understanding snoop(1M) NFSv3 file handles


Reading the snoop(1M) trace of NFS traffic you'll see references to file handles, but interpreting these is not straightforward. As any NFS engineer should tell you, file handles are opaque to the NFS client and are meaningful only to the NFS server that issued them.

For Solaris the file handle is derived from the underlying file system so some parts of the file handle are meaningful only to the underlying file system.

Here's an example snoop(1M) output:

  3   0.00000 v4u-450f-gmp03 -> v4u-80a-gmp03 NFS C GETATTR3 FH=FD0D
  4   0.00035 v4u-80a-gmp03 -> v4u-450f-gmp03 NFS R GETATTR3 OK

So what does FH=FD0D mean? File handles are surely longer than that?


To make things easier to read, snoop(1M) hashes the file handle down to 16 bits. Check the sum_filehandle() code in OpenSolaris:

int
sum_filehandle(int len)
{
	int i, l;
	int fh = 0;

	for (i = 0; i < len; i += 4) {
		l = getxdr_long();
		fh ^= (l >> 16) ^ l;
	}

	return (fh);
}

To see the complete file handle we need the verbose output:

# snoop -p3,4 -v -i /tmp/snoop2.out | grep NFS:
NFS:  ----- Sun NFS -----
NFS:  Proc = 1 (Get file attributes)
NFS:  File handle = [FD0D]
NFS:   0080001000000002000A0000000091EF23696D2E000A000000008FDF5F48F2A0
NFS:  ----- Sun NFS -----
NFS:  Proc = 1 (Get file attributes)
NFS:  Status = 0 (OK)
NFS:    File type = 1 (Regular File)
NFS:    Mode = 0644
NFS:     Setuid = 0, Setgid = 0, Sticky = 0
NFS:     Owner's permissions = rw-
NFS:     Group's permissions = r--
NFS:     Other's permissions = r--
NFS:    Link count = 1, User ID = 0, Group ID = 0
NFS:    File size = 301, Used = 1024
NFS:    Special: Major = 0, Minor = 0
NFS:    File system id = 137438953488, File id = 37359
NFS:    Last access time      = 01-Feb-07 15:12:16.398735000 GMT
NFS:    Modification time     = 01-Feb-07 15:12:16.410570000 GMT
NFS:    Attribute change time = 01-Feb-07 15:12:16.410570000 GMT

Note the file system ID and file ID values for later.

To break down that file handle you need to understand the NFS server's implementation; for NFSv3 on OpenSolaris we have:

typedef struct {
	fsid_t	_fh3_fsid;			/* filesystem id */
	ushort_t _fh3_len;			/* file number length */
	char	_fh3_data[NFS_FH3MAXDATA];	/* and data */
	ushort_t _fh3_xlen;			/* export file number length */
	char	_fh3_xdata[NFS_FH3MAXDATA];	/* and data */
} fhandle3_t;

Which means ...

fh3_fsid    0080001000000002
fh3_len     000A
fh3_data    0000000091EF23696D2E
fh3_xlen    000A
fh3_xdata   000000008FDF5F48F2A0

The fh3_fsid is itself a compressed version of the dev_t for the device and the file system type, check cmpldev(). Essentially:

  • the first 32 bits (0x00800010) are the compressed dev_t - the major number in the top 14 bits, the minor number in the 18 bits below
  • the second 32 bits (0x00000002) are the file system type, see struct vfssw

That compressed fsid is what you see in mnttab, for this case:

# grep 800010 /etc/mnttab
/dev/dsk/c0t0d0s0       /       ufs     rw,intr,largefiles,logging,xattr,onerror=panic,dev=800010       1170093759

Reassuringly, file system type 2 is ufs.

The fh3_data is derived from the underlying file system (ufs) which is a ufid structure:

struct ufid {
	ushort_t ufid_len;
	ushort_t ufid_flags;
	int32_t	ufid_ino;
	int32_t	ufid_gen;
};

So 0000000091EF23696D2E breaks down as:

ufid_flags   0000
ufid_ino     000091EF
ufid_gen     23696D2E

Reassuringly again, ufid_ino (the inode) makes sense:

# mdb
> 000091EF=U
                37359
> !ls -li /export
total 922
     37359 -rw-r--r--   1 root     root         301 Feb  1 15:12 motd

That's the file I was checking from the NFS client and it matches the file ID from the snoop output.

The fh3_xdata represents the export data, ie the exported file system. The inode number in this case is 0x00008FDF. Checking:

> !share
-               /export   rw   ""  
> 00008FDF=U
                36831
> !ls -lid /export
     36831 drwxr-xr-x   2 root     sys          512 Feb  1 15:12 /export

If you've been paying attention you might be wondering what happened to the file system ID (137438953488). This is the uncompressed dev_t value. We can check it by compressing it (14 bit shift of the major, add the minor):

> 0t137438953488=J
                2000000010
> (0t137438953488>>0t14) + 0t137438953488&0xffffffff=J
                800010

Yes, that looks familiar.


As already noted, NFS file handles are only meaningful to the NFS server and this example is just for the Solaris NFSv3 implementation. However, I hope it's given some insight into how that works and with this knowledge it's relatively easy to match snoop(1M) file handles to files on the NFS server.

Wednesday Jan 10, 2007

pfiles(1), locked files and a pinch of salt

Following a few days working this out I thought I'd share my thoughts on why the locking information returned by pfiles(1) should be taken with a pinch of salt.

What pfiles(1) tells us about locked open files

On Solaris, if we run pfiles(1) on a process that has open files it checks each file to see if it has any locks. The way it does this is to use fcntl(2) and F_GETLK. This is more complex than it first appears as it runs the system call from the context of the process being inspected. Suffice to say, it produces output like this:

# pfiles 102226
102226: /home/foo/lock /home/foo/testfile
  Current rlimit: 256 file descriptors
   0: S_IFCHR mode:0620 dev:301,0 ino:12582918 uid:54321 gid:7 rdev:24,1
   1: S_IFCHR mode:0620 dev:301,0 ino:12582918 uid:54321 gid:7 rdev:24,1
   2: S_IFCHR mode:0620 dev:301,0 ino:12582918 uid:54321 gid:7 rdev:24,1
   3: S_IFREG mode:0666 dev:314,5 ino:2557867 uid:54321 gid:10 size:343
      advisory write lock set by system 0x1 process 102225

All fine and dandy, but what is meant by "system" and is that process ID actually useful?

How the process and system IDs are derived

To understand this we need to look much closer at how these values are set and in particular whether we're using NFSv3 or NFSv4. I'm ignoring NFSv2 but that will behave similarly to NFSv3.

There are two sides to this, who sets the lock and who checks it with pfiles(1).

Setting the lock

For NFSv3 we use lockd/statd and the locking code passes the PID of the locking process to the server. There is no 'system id' passed to the server. The NFSv3 server passes the PID to the underlying file system in the flock structure of the VOP_FRLOCK() call.

For NFSv4 it is considerably more complicated but essentially it passes what should be an opaque 'lock_owner' structure to the server. This can only be understood by the client that sent it. The NFSv4 server creates a unique identifier which it assigns to the PID member of the flock structure. As with NFSv3 this flock structure is passed to the underlying file system.

The pfiles(1) output

The pfiles(1) output is based on an F_GETLK call for the open file. This returns a structure with l_pid and l_sysid. The values of these are as follows:

For NFSv3 the locking code returns the PID passed to the underlying file system in the original VOP_FRLOCK() call. For locks set by NFSv3 this is the true PID of the locking process. The sysid may be set to zero, in which case it is ignored by pfiles(1). If the lock was held by a process on this machine the sysid will be set to the system id of the NFS server plus 0x4000. For locks set by NFSv4 the PID is not a real process ID but the unique identifier used at lock time on the server.

For NFSv4, if we're the client that set the lock we can return sensible values for the PID and system ID - though the latter is *always* the client itself. If we're not the client we fabricate the PID and sysid as follows:

  • sysid: we take the sysid of the server and add 1
  • pid: we add up all the bytes in the clientid and owner
		/*
		 * Construct a new sysid which should be different from
		 * sysids of other systems.
		 */
		flk->l_sysid++;
		flk->l_pid = lo_to_pid(&lockt_denied->owner);

Interpreting the results

So, with that in mind we have the following result:

NFSv3 set the lock, read by NFSv3

Process ID is correct, no system identification. eg:
      advisory write lock set by process 102250

Or perhaps:

      advisory write lock set by system 0x4001 process 102265

NFSv4 set the lock, read by NFSv3

Process ID is unique but meaningless to the client. No system identification. eg:
      advisory write lock set by process 776458

NFSv3 set the lock, read by NFSv4

The lock owner will not match on the client, result is fabricated. eg:
      advisory write lock set by system 0x2 process 512

NFSv4 set the lock, read by NFSv4

If the lock owner matches, we return the true PID of the process running on this system. The sysid is that of the server as understood by the client. eg:
      advisory write lock set by system 0x1 process 102313
If the lock owner doesn't match the client the result is fabricated. eg:
      advisory write lock set by system 0x2 process 1580


While checking this out I found and logged two curious bugs in this area.

If pfiles(1) is run on an x64 box the 64-bit version is run. If the target process is 32-bit it mixes up the process and system id. Compare these two outputs:

# /usr/bin/amd64/pfiles 2353 | grep advisory
      advisory write lock set by system 0x943
# /usr/bin/i86/pfiles 2353 | grep advisory  
      advisory write lock set by system 0x1 process 2371

Jan 11th - the following is not a bug

The other bug is that for failed lock attempts over NFSv4 or NFSv3 the PID is not set; in my test code I had set l_pid to 12345 before each call.

My code output:

F_GETLK returned type 2, sysid 1, pid 2371
F_SETLK returned type 2, sysid 1, pid 12345

truss -t fcntl -vall output

fcntl(3, F_GETLK, 0x08047C60)                   = 0
        typ=F_WRLCK  whence=SEEK_SET start=0     len=0     sys=1  pid=2371
fcntl(3, F_SETLK, 0x08047C60)                   Err#11 EAGAIN
        typ=F_WRLCK  whence=SEEK_SET start=0     len=0     sys=1  pid=12345

Notice how the PID is left untouched when the lock attempt fails but is filled in correctly when the lock is tested?

The fcntl(2) man page clearly states that the l_pid and l_sysid values are only used for F_GETLK calls:

     The l_pid and l_sysid fields are used only with  F_GETLK  or
     F_GETLK64  to return the process ID of the process holding a
     blocking lock and to indicate which system is  running  that
     process.

One more complication - pfiles(1) doesn't show locks held by the process itself

The reason pfiles(1) fails to show locks held by a process is due to the way that pfiles works. It effectively hijacks the process being checked and makes the fcntl(2) call within that process context. Processes can re-lock files they already have open and locked as per the fcntl(2) man page:

     There will be at most one type of lock set for each byte  in
     the  file.  Before  a  successful  return  from  an F_SETLK,
     F_SETLK64, F_SETLKW, or F_SETLKW64 request when the  calling
     process has previously existing locks on bytes in the region
     specified by the request, the previous lock  type  for  each
     byte  in  the  specified  region will be replaced by the new
     lock type.


The output of pfiles(1) is tremendously useful, but the locking information is not necessarily accurate or meaningful.

If you're running pfiles(1) against an NFSv4 file and the process ID matches a local process, that process is more than likely the one holding the lock. Otherwise, it's meaningless.

If you're running pfiles(1) against an NFSv3 file then it's harder. Again, if the process ID makes sense on this client then it's probably correct. Otherwise you could check it against known NFS clients of the same file system ... but that's time-consuming.

To properly identify lock owners some mdb(1) or dtrace(1M) is required - and this blog entry is already too long :-)

Friday Dec 08, 2006

SIGBUS versus SIGSEGV - according to siginfo.h(3head)

Having asked a number of colleagues I failed to find a consistent answer to the question of the differences between SIGBUS and SIGSEGV. According to the Solaris signal(3head) man page we have:

     Name             Value   Default     Event
     SIGBUS           10      Core        Bus Error
     SIGSEGV          11      Core        Segmentation Fault

So I dug a bit further and found that siginfo_t can tell you more about the origins of the signal, in particular we have, from the siginfo.h(3head) man page:

     Signal         Code                 Reason
     SIGSEGV        SEGV_MAPERR          address not mapped to object
                    SEGV_ACCERR          invalid permissions for mapped object
     SIGBUS         BUS_ADRALN           invalid address alignment
                    BUS_ADRERR           non-existent physical address
                    BUS_OBJERR           object specific hardware error

Obviously this may be open to interpretation but that clarifies a few things for me.

For the techie, take a look at the OpenSolaris source code for the trap() function. Here we see the handling for various types of trap including page faults. For example, there's a section where a decision is made as to whether to return SIGBUS or SIGSEGV:

	case T_WIN_OVERFLOW + T_USER:	/* window overflow in ??? */
	case T_WIN_UNDERFLOW + T_USER:	/* window underflow in ??? */
	case T_SYS_RTT_PAGE + T_USER:	/* window underflow in user_rtt */
	case T_INSTR_MMU_MISS + T_USER:	/* user instruction mmu miss */
	case T_DATA_MMU_MISS + T_USER:	/* user data mmu miss */
	case T_DATA_PROT + T_USER:	/* user data protection fault */
		switch (type) {
		/* ... */
		}
		/*
		 * In the case where both pagefault and grow fail,
		 * set the code to the value provided by pagefault.
		 */
		(void) instr_size(rp, &addr, rw);
		bzero(&siginfo, sizeof (siginfo));
		siginfo.si_addr = addr;
		if (FC_CODE(res) == FC_OBJERR) {
			siginfo.si_errno = FC_ERRNO(res);
			if (siginfo.si_errno != EINTR) {
				siginfo.si_signo = SIGBUS;
				siginfo.si_code = BUS_OBJERR;
				fault = FLTACCESS;
			}
		} else { /* FC_NOMAP || FC_PROT */
			siginfo.si_signo = SIGSEGV;
			siginfo.si_code = (res == FC_NOMAP) ?
			    SEGV_MAPERR : SEGV_ACCERR;
			fault = FLTBOUNDS;
		}

I was digging around this following a discussion regarding bug 6466257 (mmap file writing fails on nfs3 client with EMC nas device) and the signals delivered by mmap(2). The man page suggests that either SIGBUS or SIGSEGV can be returned for a number of error conditions but doesn't seem sure which. The answer is, "it depends".

So my conclusion is sadly another question - can an application developer infer anything from a SIGBUS versus a SIGSEGV? The answer, I believe, is yes - but quite often the remedy is the same: fix the code :-)

Wednesday Oct 27, 2004

Finally worked out how to use ::grep

The mdb(1) man page has this terse entry for ::grep:

     ::grep command

         Evaluate the specified command string,  and  then  print
         the  old  value  of  dot if the new value of dot is non-
         zero. If the command contains whitespace or  metacharac-
         ters,  it must be quoted. The ::grep dcmd can be used in
         pipelines to filter a list of addresses.

I've struggled to work out how to use it till now. Take a look at this:

> ::walk umem_log | ::grep '*(.+4) == 37ac048' | ::bufctl_audit 

The libumem umem_log walker extracts all the entries from the transaction log but I only want to see those that refer to a particular buffer (0x37ac048).

To do this we pipe the walker to ::grep, which runs a command and then evaluates the value of '.'; the command in this case is just an expression. From the mdb(1) man page:

     expression [! word ...] [ ; ]

         A command can consist only of an arithmetic  expression.
         The  expression is evaluated and the dot variable is set
         to its value, and then the previous dcmd  and  arguments
         are executed using the new value of dot.

The expression tests the value of the contents of '.+4'. Why? Well...

typedef struct umem_bufctl_audit {
        struct umem_bufctl      *bc_next;       /* next bufctl struct */
        void                    *bc_addr;       /* address of buffer */
        struct umem_slab        *bc_slab;       /* controlling slab */
        umem_cache_t            *bc_cache;      /* controlling cache */
        hrtime_t                bc_timestamp;   /* transaction time */
        thread_t                bc_thread;      /* thread doing transaction */
        struct umem_bufctl      *bc_lastlog;    /* last log entry */
        void                    *bc_contents;   /* contents at last free */
        int                     bc_depth;       /* stack depth */
        uintptr_t               bc_stack[1];    /* pc stack */
} umem_bufctl_audit_t;

(from /usr/include/umem_impl.h)

If the contents of bc_addr match, the expression evaluates to '1', which means ::grep prints the old value of '.'. Hurrah.

Jan 24th 2007 - I now know what was confusing me here:

There's still an outstanding question about why "the previous dcmd and arguments are executed using the new value of dot" doesn't appear to apply in the ::grep line. eg:

> 1234=Y
                1970 Jan  1 02:17:40    
> ::grep .
> ::grep '.=Y' 
                1970 Jan  1 02:17:40    
> .
                1970 Jan  1 02:17:40    
> =Y
                1970 Jan  1 02:17:40    

I'm not going to worry about it.

The man page excerpt I quoted refers to using an expression on its own with no pipeline. In other words, if you just have an expression on the line it repeats the last dcmd with the new value of dot. Within a ::grep dcmd this doesn't apply.



