Thursday Jul 24, 2014

Crashdump restructuring in Solaris

In Solaris 11.2 the crashdump restructuring project changed the way crash dump data are stored. Data which are not pure kernel pages now go into separate files. Together with my colleague Sriman, we made it happen.

The first noticeable change is the layout of the crash directory. The files are now stored under the /var/crash/data/uuid/ directory. The long hexadecimal string (uuid) was added to better align with FMA - it is the UUID (universally unique ID) of the crash event, which can be found in fmadm faulty output. In fact, if you look at FMA panic events from earlier versions you can see that the resource string for the event was already designed this way; it has just been materialized with this project.

For example, after 2 panic events the /var/crash directory will look like this:

0 -> data/404778fb-88da-4188-f222-8da174d44fa4
1 -> data/6e50417e-95fc-4ab4-e9a8-abbe80bc6b48
bounds
data/
  404778fb-88da-4188-f222-8da174d44fa4/
    vmcore-zfs.0
    vmcore.0
    vmdump-zfs.0
    vmdump.0
  6e50417e-95fc-4ab4-e9a8-abbe80bc6b48/
    vmdump-zfs.1
    vmdump.1

The 0, 1 symlinks maintain the sequential ordering of the old layout.

The example reflects a configuration in which savecore is not run automatically after boot (i.e. dumpadm -n is in effect) and the administrator has extracted the first crash dump by hand (by running savecore 0 in the /var/crash/0/ directory). If you look at the console after the system reboots from a panic, there are commands which you can copy-and-paste into the terminal to perform the extraction.

The other change in the above example is the new vmcore-zfs.N file. This is not the only new file which can appear. Depending on the dumpadm(1M) configuration there can be files like:

  • vmcore.N - core kernel pages
  • vmcore-zfs.N - ZFS metadata (ZIO buffers)
  • vmcore-proc.N - process pages
  • vmcore-other.N - other pages (ZFS data, free pages)

By splitting the dump into multiple files it is possible to transfer just the vmcore.N file for analysis, quickly assess what caused the panic, and transfer the rest of the files later on.

If any of the "auxiliary" files is missing, mdb will report it:

root@va64-v20zl-prg06:/var/crash/0# mdb 0
mdb: failed to locate file ./vmcore-zfs.0. Contents of ZFS metadata (ZIO buffers) 
will not be available
Loading modules: [ unix genunix specfs dtrace mac cpu.generic 
cpu_ms.AuthenticAMD.15 uppc pcplusmp zvpsm scsi_vhci zfs mpt sd ip hook neti arp
 usba kssl sockfs lofs idm cpc nfs fcip fctl ufs logindmux ptm sppp ipc ]
> ::status
debugging crash dump vmcore.0 (64-bit) from va64-v20zl-prg06
operating system: 5.12 dump.on12 (i86pc)
usr/src version: 23877:f2e76e2d0329:on12_51+48
usr/closed version: 1961:cad017e4c7e4:on12_51+4
image uuid: 404778fb-88da-4188-f222-8da174d44fa4
panic message: forced crash dump initiated at user request
complete: yes, all pages present as configured
dump content: kernel [LOADED,UNVERIFIED] (core kernel pages)
              zfs [MISSING] (ZFS metadata (ZIO buffers))
panicking PID: 101575 (not dumped)

Another choice is not to dump some of the sections at all. For example, to dump only kernel pages and the pages of the process running at the time of panic, but not ZFS metadata, the system can be configured as:

dumpadm -c curproc-zfs

Also, the unix.N file is no longer extracted automatically (it can be done with the -u option of savecore(1M) if you need the file for some reason) since it is embedded in the vmcore.N file; mdb will find it automatically.

How do you load a dump into mdb with all these files around? The easiest way to access an extracted dump is to use just the suffix, i.e.

  mdb N

which will pick up all the files based on metadata in the main vmcore.N file. This worked even before this change, except that there was just one file (two if counting unix.N).

It is still possible to specify the files by hand, just remember to put the main file (vmcore.N) as the first argument:

  mdb vmcore.N vmcore-zfs.N ...

The other change (which is hard to notice unless you're dealing with lots of crash dump files) was that we laid out the infrastructure in kernel/mdb/libkvm to be properly backwards compatible with respect to the on-disk crash dump format. As a result, mdb will automatically load crash dump files produced by earlier Solaris versions. Currently it supports the three latest versions.

Thursday Feb 09, 2012

OpenGrok 0.11 setup on Solaris 11

OpenGrok 0.11 has just been released (see Lubos' post with the release notes). This is a nice version number coincidence for trying it on Solaris 11. In case you are wondering what OpenGrok is, it is a blindingly fast source code search and cross reference engine accessible over the web. It is written in Java. It is also behind the source code browser on src.opensolaris.org, albeit running an older version. For more information about the project take a look at its project page.

Now, how do you get OpenGrok running for your source code base on Solaris 11? I will illustrate this on source code coming from three different Source Code Management systems (for complete support see the full list). The complete setup on a freshly installed Solaris 11 has six main steps:

  1. Install the prerequisites first:
    1. install a couple of Source Code Management systems (depending on your needs):
      • install Mercurial:
            pkg install developer/versioning/mercurial
        
      • install CVS
            pkg install developer/versioning/cvs
        
      • git
            pkg install developer/versioning/git
        
    2. download, compile and install exuberant ctags:
      pkg install developer/gcc-45
      pkg install system/header
      wget http://prdownloads.sourceforge.net/ctags/ctags-5.8.tar.gz
      tar xfz ctags-5.8.tar.gz
      cd ctags-5.8
      ./configure && make && make install
      
    3. install Tomcat6:
        pkg install web/java-servlet/tomcat
      
  2. Now download and install OpenGrok package:
    location=http://hub.opensolaris.org/bin/download/Project+opengrok/files/
    pkgadd -d $location/OSOLopengrok-0.11.pkg OSOLopengrok
    
  3. Mirror some source code as the webservd user (note that OpenGrok by itself does not synchronize or mirror source code; this has to be done separately):
    cd /var/opengrok/src/
    cvs -d anonymous@cvs.openssl.org:/openssl-cvs co -rOpenSSL_1_0_0-stable openssl
    hg clone ssh://anon@hg.opensolaris.org/hg/opengrok/trunk opengrok-dev
    git clone http://git.samba.org/samba.git
    
      Run the following first (as root) to make sure history indexing does not prompt to confirm the host identity when consulting remote repositories (CVS):
      
      # store the pubkeys
      ssh-keyscan -t rsa,dsa cvs.openssl.org >> /etc/ssh/known_hosts
      ssh-keyscan -t rsa,dsa hg.opensolaris.org >> /etc/ssh/known_hosts
      
  4. Deploy and start the web application (as root):
    EXUBERANT_CTAGS=/usr/local/bin/ctags \
        /usr/opengrok/bin/OpenGrok deploy && \
        svcadm enable tomcat6
    
  5. Index the source code and send the configuration to the running instance (as webservd user):
    EXUBERANT_CTAGS=/usr/local/bin/ctags \
       /usr/opengrok/bin/OpenGrok index
    
  6. Enable the service (as root):
    svcadm enable opengrok
    

OpenGrok is now accessible at http://SERVER_HOSTNAME:8080/source/ (where SERVER_HOSTNAME is the hostname of the server on which the above setup was done).

Except for the part with ctags it is a pretty streamlined, no-brainer process. Hopefully the exuberant-ctags package will be available again in the standard Oracle pkg repositories.

And here is the result:

Monday Nov 28, 2011

Netcat I/O enhancements

When Netcat was integrated into OpenSolaris it was already clear that a couple of enhancements would be needed. The biggest set of changes, made after Solaris 11 Express was released, brings various I/O enhancements to the netcat shipped with Solaris 11. Also, since Solaris 11, the netcat package is installed by default in all distribution forms (live CD, text install, ...).

Now, let's take a look at the new functionality:

/usr/bin/netcat           alternative program name (symlink)
-b bufsize                I/O buffer size
-E                        use exclusive bind for the listening socket
-e program                program to execute
-F                        no network close upon EOF on stdin
-i timeout                extension of timeout specification
-L timeout                linger on close timeout
-l -p port addr           combination that was previously not allowed
-m byte_count             quit after receiving byte_count bytes
-N file                   pattern for UDP scanning
-I bufsize                size of input socket buffer
-O bufsize                size of output socket buffer
-R redir_spec             port redirection
  addr/port[/{tcp,udp}]   syntax of redir_spec
-Z                        bypass zone boundaries
-q timeout                timeout after EOF on stdin

Obviously, the Swiss army knife of networking tools just got a bit thicker.

While the options are pretty self-explanatory by themselves, their combination with other options, the context of use, or boundary values of option arguments make it possible to construct small but powerful tools. For example:

  • the port redirector makes it possible to convert a TCP stream to UDP datagrams.
  • the buffer size specification makes it possible to send one-byte TCP segments or to produce IP fragments easily.
  • the socket linger option can be used to produce TCP RST segments by setting the timeout to 0.
  • the execute option makes it possible to simulate TCP/UDP servers or clients with a shell/Python/Perl/whatever script.
  • etc.

If you find other helpful uses, please share them via comments.

Manual page nc(1) contains more details, along with examples on how to use some of these new options.

Thursday Mar 31, 2011

KSSL presentation

A while back I gave a presentation about KSSL at a CZOSUG meeting. The presentation seems to be gone from the opensolaris.org infrastructure, so I have copied the file here.

Wednesday Jun 02, 2010

netcat and network stack debugging

One of the options of the netcat program (/usr/bin/nc) available in OpenSolaris (if you don't have it installed, simply run pkg install SUNWnetcat) is the -D (aka debugging) option. Only recently did I realize that not everyone out there knows how to capture the debugging data once this option is set, since this is pretty Solaris specific. What this option does is basically trigger a flag inside the ip kernel module specific to the given connection structure (conn_t). netcat does this by calling the setsockopt() system call with the SO_DEBUG option for the given socket. As a result the conn_debug flag is set inside the conn_t structure associated with the socket. This flag is then consulted in various functions manipulating the structure.
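
For illustration, here is a minimal sketch of such a setsockopt() call (this is not netcat's actual code, just the standard sockets API it uses):

#include <sys/types.h>
#include <sys/socket.h>

/*
 * Turn on per-connection debugging for an already created socket.
 * On Solaris this is what ends up setting conn_debug in the conn_t.
 */
int
enable_so_debug(int sock)
{
	int on = 1;

	return (setsockopt(sock, SOL_SOCKET, SO_DEBUG, &on, sizeof (on)));
}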

When there is an interesting event and conn_debug is set, the function calls the strlog() kernel function to record the data. For example, here is a snippet of usr/src/uts/common/inet/tcp/tcp_input.c:tcp_input_listener():

   1366 	if (listener->tcp_conn_req_cnt_q >= listener->tcp_conn_req_max) {
   1367 		mutex_exit(&listener->tcp_eager_lock);
   1368 		TCP_STAT(tcps, tcp_listendrop);
   1369 		TCPS_BUMP_MIB(tcps, tcpListenDrop);
   1370 		if (lconnp->conn_debug) {
   1371 			(void) strlog(TCP_MOD_ID, 0, 1, SL_TRACE|SL_ERROR,
   1372 			    "tcp_input_listener: listen backlog (max=%d) "
   1373 			    "overflow (%d pending) on %s",
   1374 			    listener->tcp_conn_req_max,
   1375 			    listener->tcp_conn_req_cnt_q,
   1376 			    tcp_display(listener, NULL, DISP_PORT_ONLY));
   1377 		}
   1378 		goto error2;
   1379 	}

To capture the data logged via strlog() it is necessary to know the STREAMS module ID, which in our case is TCP_MOD_ID, defined in usr/src/uts/common/inet/tcp_impl.h as 5105.

To read the data one can use either the strace(1M) command line tool or the strerr(1M) daemon, both of which produce text logs. To read everything, one can use this command (it needs read access to /dev/log so it has to be run as root):

# strace 5105 all all

Here are two examples using netcat. First we try to bind to a port for which we don't have privileges:

$ nc -D -l -4 -p 23
nc: Permission denied

which produces the following entry from strace:

000004 12:11:04 19c7ff6a  1 ..E 5105 0 ip: [ID 234745 kern.debug] tcp_bind: no priv for port 23

Next we try to bind to an already occupied port:

$ nc -D -l -4 -p 4444
nc: Address already in use

which produces the following entry from strace:

000005 12:15:33 19c86878  1 ..E 5105 0 ip: [ID 326978 kern.debug] tcp_bind: requested addr busy

This is of course traceable via normal tools such as ppriv(1) or truss(1), but the point is that much finer-grained detail can be captured from the network modules. The format of the log entries is explained in the strace(1M) man page.

Friday Oct 30, 2009

signal() versus sigaction() on Solaris

This entry is mostly for newcomers to Solaris/OpenSolaris from other UNIX-like systems. When I was taught about signal() and sigaction(), my understanding was that sigaction() is just a superset of signal() and also POSIX conformant, but that otherwise they accomplish the same thing. This is indeed the case on some UNIX-like operating systems. On Solaris, as I only recently discovered (to my dismay :)), it's different.

Consider the following code (please ignore the fact that it's not strictly checking return values and that the signal handler is not safe):

#include <stdio.h>
#include <signal.h>
#include <unistd.h>
#include <sys/types.h>

void sig_handler(int s) {
	printf("Got signal! Sleeping.\n");

	sleep(10);

	printf("returning from signal handler\n");
}

int main(void) {
        struct sigaction s_action;

	printf("Setting signal handler: ");
#ifdef POSIX_SIGNALS
	printf("sigaction\n");
	(void) sigemptyset(&s_action.sa_mask);
	s_action.sa_handler = sig_handler;
        s_action.sa_flags = 0;
	(void) sigaction(SIGHUP, &s_action, (struct sigaction *) NULL);
#else
	printf("signal\n");
	signal(SIGHUP, sig_handler);
#endif

	printf("Waiting for signal\n");
	while(1)
		pause();

	return (0);
}

Now try to compile and run it with and without -DPOSIX_SIGNALS and send two SIGHUP signals to the process within the 10 second window (so the second signal is received while the signal handler is still running). With sigaction(), the signal will be caught by the handler in both cases. With signal(), however, the second signal will cause the process to exit. This is because the kernel resets the signal handler to the default upon delivering the signal for the first time. This is described in the signal(3C) man page in a somewhat hidden sentence inside the second paragraph (it really pays off to read man pages slowly and with attention to detail):

     If signal()  is
     used,  disp  is  the address of a signal handler, and sig is
     not  SIGILL, SIGTRAP, or  SIGPWR, the system first sets  the
     signal's disposition to  SIG_DFL before executing the signal
     handler.

The sigaction(2) man page has this section:

     SA_RESETHAND    If set and the signal is caught, the  dispo-
                     sition of the signal is reset to SIG_DFL and
                     the signal will not be blocked on  entry  to
                     the  signal  handler  (SIGILL,  SIGTRAP, and
                     SIGPWR cannot be  automatically  reset  when
                     delivered; the system silently enforces this
                     restriction).

sigaction() does not set this flag by default, which results in the different behavior. I found out that this behavior has been present since Solaris 2.0 or so.

In fact, the signal() routine in libc is implemented via sigaction(). From $SRC/lib/libc/port/sys/signal.c:

     58 /*
     59  * SVr3.x signal compatibility routines. They are now
     60  * implemented as library routines instead of system
     61  * calls.
     62  */
     63 
     64 void(*
     65 signal(int sig, void(*func)(int)))(int)
     66 {
     67 	struct sigaction nact;
     68 	struct sigaction oact;
     69 
     70 	CHECK_SIG(sig, SIG_ERR);
     71 
     72 	nact.sa_handler = func;
     73 	nact.sa_flags = SA_RESETHAND|SA_NODEFER;
     74 	(void) sigemptyset(&nact.sa_mask);
     75 
     76 	/*
     77 	 * Pay special attention if sig is SIGCHLD and
     78 	 * the disposition is SIG_IGN, per sysV signal man page.
     79 	 */
     80 	if (sig == SIGCHLD) {
     81 		nact.sa_flags |= SA_NOCLDSTOP;
     82 		if (func == SIG_IGN)
     83 			nact.sa_flags |= SA_NOCLDWAIT;
     84 	}
     85 
     86 	if (STOPDEFAULT(sig))
     87 		nact.sa_flags |= SA_RESTART;
     88 
     89 	if (sigaction(sig, &nact, &oact) < 0)
     90 		return (SIG_ERR);
     91 
     92 	return (oact.sa_handler);
     93 }

I am pretty sure that the SA_RESETHAND flag is set in signal() in order to preserve backwards compatibility.

This means that to work around this problem with signal(), one should install the signal handler again inside the signal handler itself (a minimal sketch follows the table below). However, this is not a complete solution since there is still a window during which the signal can be delivered while the disposition is set to SIG_DFL - and the default action for SIGHUP is Exit, as the signal.h(3HEAD) man page explains in a really useful table:

          Name        Value   Default                    Event
     SIGHUP           1       Exit       Hangup (see termio(7I))
     ...
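
A minimal sketch of that (incomplete) workaround - re-installing the handler as the very first thing the handler does; the race window before signal() runs again is exactly the problem described above:

#include <signal.h>
#include <unistd.h>

void sig_handler(int s) {
	/* re-arm immediately; the disposition is SIG_DFL until this runs */
	(void) signal(SIGHUP, sig_handler);
}

int main(void) {
	(void) signal(SIGHUP, sig_handler);
	while (1)
		pause();

	return (0);
}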

Now let's look at FreeBSD. Its SIGNAL(3) man page contains this separate paragraph:

     The handled signal is unblocked when the function returns and the process
     continues from where it left off when the signal occurred.  Unlike previ-
     ous signal facilities, the handler func() remains installed after a sig-
     nal has been delivered.

The second sentence is actually printed in bold letters. I also tried on Linux and NetBSD and the behavior is the same as in FreeBSD.

So, to conclude all of the above: using signal() is really not portable.

Tuesday Sep 01, 2009

Netcat as small packet factory

Recently I needed to test a bug fix in in.iked(1M) (or rather in libike.so, with which in.iked is linked) after which the daemon should respond to IKEv2 requests with a Notification message telling the peer to fall back to IKEv1 (previously it did not respond to IKEv2 packets at all). This can be tested by:

  • installing an OS instance which supports IKEv2 and initiating from there
  • writing a simple program (C/Perl/etc.) which will construct the UDP payload

Surely, there should be an easier way to send a UDP packet with an arbitrary (in my case ISAKMP) payload. It turns out this is very easy to do from the command line with nc(1), which is available in OpenSolaris (install it via 'pkg install SUNWnetcat'). Let's try to send some garbage first to see if it works:

perl -e 'print "\x41\x41";' | nc -u rpe-foo.czech 500

Yep, tshark(1) (shipped by default with Wireshark in OpenSolaris) reports an IKE packet, a malformed one (which is not surprising):

Capturing on eri0
  0.000000 10.18.144.12 -> 10.18.144.11 ISAKMP [Malformed Packet]

0000  00 0a e4 2f 61 eb 00 03 ba 4e 3d 38 08 00 45 00   .../a....N=8..E.
0010  00 1e 26 98 40 00 ff 11 20 fb 0a 12 90 0c 0a 12   ..&.@... .......
0020  90 0b e2 66 01 f4 00 0a 34 57 41 41               ...f....4WAA

Our two A's are there just after the UDP header (Ethernet header 14 bytes, IP header 20 bytes, UDP header 8 bytes, 42 bytes in sum; our 2 bytes come right after the 8 UDP header bytes on the 3rd line).

With that we can go and construct an IKEv1 packet first to see if the daemon reacts to it. We will need to construct the payload, which is an IKEv1 header. IKEv1 is defined in RFC 2409 (The Internet Key Exchange (IKE)). IKEv1 uses the ISAKMP header definition, so we need to look into RFC 2408 (Internet Security Association and Key Management Protocol (ISAKMP)) for the actual header definition. It's there in section 3.1:

                         1                   2                   3
     0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    !                          Initiator                            !
    !                            Cookie                             !
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    !                          Responder                            !
    !                            Cookie                             !
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    !  Next Payload ! MjVer ! MnVer ! Exchange Type !     Flags     !
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    !                          Message ID                           !
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    !                            Length                             !
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

I'd like to construct a packet which resembles the first packet sent by an IKEv1 Initiator. So, our packet code (similar to shell code) will look like this (without thinking too much about what the values should look like):

  • Initiator's cookie, must not be zero
        \x11\x22\x33\x44\x55\x66\x77\x88
    
  • Responder's cookie, must be zero in the initial packet from Initiator
        \x00\x00\x00\x00\x00\x00\x00\x00
    
  • next payload, let's try 0 first
        \x00
    
  • Major and Minor Version (4 bits each)
        \x10
    
  • Exchange Type
                                Exchange Type      Value
                             NONE                    0
                             Base                    1
                             Identity Protection     2
                             Authentication Only     3
                             Aggressive              4
                             Informational           5
                             ISAKMP Future Use     6 - 31
                             DOI Specific Use     32 - 239
                             Private Use         240 - 255
    
    So let's try Base first:
        \x01
    
  • Flags (Initiator)
        \x00
    
  • Message ID
        \x66\x66\x66\x66
    
  • Length
        \x28
    

We need to massage our packet code into a command line. The code:

 \x11\x22\x33\x44\x55\x66\x77\x88
 \x00\x00\x00\x00\x00\x00\x00\x00
 \x00
 \x10
 \x01
 \x00
 \x66\x66\x66\x66
 \x28

We want the source port to be 500 as well because of section 2.5.1 in RFC 2408, so we use the -p option (this requires the net_privaddr privilege, so either become root or use pfexec(1)). Also, we do not want to wait for a response indefinitely, so we use the -w option:

perl -e 'print "\x11\x22\x33\x44\x55\x66\x77\x88\x00\x00\x00\x00\x00\x00\x00\x00\x00\x10\x01\x00\x66\x66\x66\x66\x28";' \
    | nc -w 1 -p 500 -u rpe-foo.czech 500

The packet was received but there was no reply and tshark still considers it a Malformed Packet. Let's check the header again - oh yeah, the Length field is 4 bytes, not just one. Let's try again:

perl -e 'print "\x11\x22\x33\x44\x55\x66\x77\x88\x00\x00\x00\x00\x00\x00\x00\x00\x00\x10\x01\x00\x66\x66\x66\x66\x28\x00\x00\x00";' \
    | nc -w 1 -p 500 -u rpe-foo.czech 500

Okay, this is our Base exchange, but still no response:

294.029154 10.18.144.12 -> 10.18.144.11 ISAKMP Base

0000  00 0a e4 2f 61 eb 00 03 ba 4e 3d 38 08 00 45 00   .../a....N=8..E.
0010  00 38 26 a7 40 00 ff 11 20 d2 0a 12 90 0c 0a 12   .8&.@... .......
0020  90 0b 01 f4 01 f4 00 24 34 71 11 22 33 44 55 66   .......$4q."3DUf
0030  77 88 00 00 00 00 00 00 00 00 00 10 01 00 66 66   w.............ff
0040  66 66 28 00 00 00                    

Let's try something more provocative and set the Exchange type to Identity protection:

perl -e 'print "\x11\x22\x33\x44\x55\x66\x77\x88\x00\x00\x00\x00\x00\x00\x00\x00\x00\x10\x02\x00\x66\x66\x66\x66\x28\x00\x00\x00";' \
    | nc -w 1 -p 500 -u rpe-foo.czech 500

Oh yeah, this finally deserved a response:

383.050874 10.18.144.12 -> 10.18.144.11 ISAKMP Identity Protection (Main Mode)

0000  00 0a e4 2f 61 eb 00 03 ba 4e 3d 38 08 00 45 00   .../a....N=8..E.
0010  00 38 26 a8 40 00 ff 11 20 d1 0a 12 90 0c 0a 12   .8&.@... .......
0020  90 0b 01 f4 01 f4 00 24 34 71 11 22 33 44 55 66   .......$4q."3DUf
0030  77 88 00 00 00 00 00 00 00 00 00 10 02 00 66 66   w.............ff
0040  66 66 28 00 00 00                                 ff(...

383.051672 10.18.144.11 -> 10.18.144.12 ISAKMP Informational

0000  00 03 ba 4e 3d 38 00 0a e4 2f 61 eb 08 00 45 00   ...N=8.../a...E.
0010  00 99 d3 8b 40 00 ff 11 73 8c 0a 12 90 0b 0a 12   ....@...s.......
0020  90 0c 01 f4 01 f4 00 85 ed 05 11 22 33 44 55 66   ..........."3DUf
0030  77 88 85 75 8e 0f fa a5 5d de 0b 10 05 00 69 a5   w..u....].....i.
0040  63 e4 00 00 00 7d 00 00 00 61 00 00 00 01 01 10   c....}...a......
0050  00 1e 11 22 33 44 55 66 77 88 85 75 8e 0f fa a5   ..."3DUfw..u....
0060  5d de 80 0c 00 01 00 06 00 39 55 44 50 20 50 61   ]........9UDP Pa
0070  63 6b 65 74 20 64 6f 65 73 20 6e 6f 74 20 63 6f   cket does not co
0080  6e 74 61 69 6e 20 65 6e 6f 75 67 68 20 64 61 74   ntain enough dat
0090  61 20 66 6f 72 20 49 53 41 4b 4d 50 20 70 61 63   a for ISAKMP pac
00a0  6b 65 74 80 08 00 00                              ket....

Now that we have proved to ourselves that we can construct a semi-valid packet, it's time to try IKEv2. The IKEv2 header is defined in RFC 4306, section 3.1:

                           1                   2                   3
       0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      !                       IKE_SA Initiator's SPI                  !
      !                                                               !
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      !                       IKE_SA Responder's SPI                  !
      !                                                               !
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      !  Next Payload ! MjVer ! MnVer ! Exchange Type !     Flags     !
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      !                          Message ID                           !
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      !                            Length                             !
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

At first sight it looks the same (for backward compatibility). However, some of the values are different. For the IKE header, the main differences are in the Exchange Type and Flags:

                       Exchange Type            Value

                       RESERVED                 0-33
                       IKE_SA_INIT              34
                       IKE_AUTH                 35
                       CREATE_CHILD_SA          36
                       INFORMATIONAL            37
                       RESERVED TO IANA         38-239
                       Reserved for private use 240-255

IKE_SA_INIT is our guy ('echo 0t34=x | mdb' produces 0x22).

The flags are now used to indicate the role in the exchange. We set the 3rd bit to say we are the Initiator. We will retain the source port even though IKEv2 supports ports other than 500 and 4500, because we're dealing with an IKEv1 implementation. Now slightly change our packet code (and don't forget to change the Version field to 2.0):

perl -e 'print "\x11\x22\x33\x44\x55\x66\x77\x88\x00\x00\x00\x00\x00\x00\x00\x00\x00\x20\x22\x08\x66\x66\x66\x66\x28\x00\x00\x00";' \
    | nc -w 1 -p 500 -u rpe-foo.czech 500

And we got a nice response (since the responder runs a recent version of libike.so):

1013.190867 10.18.144.12 -> 10.18.144.11 ISAKMP IKE_SA_INIT

0000  00 0a e4 2f 61 eb 00 03 ba 4e 3d 38 08 00 45 00   .../a....N=8..E.
0010  00 38 26 aa 40 00 ff 11 20 cf 0a 12 90 0c 0a 12   .8&.@... .......
0020  90 0b 01 f4 01 f4 00 24 34 71 11 22 33 44 55 66   .......$4q."3DUf
0030  77 88 00 00 00 00 00 00 00 00 00 20 22 08 66 66   w.......... ".ff
0040  66 66 28 00 00 00                                 ff(...

1013.192005 10.18.144.11 -> 10.18.144.12 ISAKMP Informational

0000  00 03 ba 4e 3d 38 00 0a e4 2f 61 eb 08 00 45 00   ...N=8.../a...E.
0010  00 83 d3 8d 40 00 ff 11 73 a0 0a 12 90 0b 0a 12   ....@...s.......
0020  90 0c 01 f4 01 f4 00 6f 66 da 11 22 33 44 55 66   .......of.."3DUf
0030  77 88 5c 36 e3 75 a2 7b 8e fe 0b 10 05 00 87 03   w.\6.u.{........
0040  0c f5 00 00 00 67 00 00 00 4b 00 00 00 01 01 10   .....g...K......
0050  00 05 11 22 33 44 55 66 77 88 5c 36 e3 75 a2 7b   ..."3DUfw.\6.u.{
0060  8e fe 80 0c 00 01 00 06 00 23 49 6e 76 61 6c 69   .........#Invali
0070  64 20 49 53 41 4b 4d 50 20 6d 61 6a 6f 72 20 76   d ISAKMP major v
0080  65 72 73 69 6f 6e 20 6e 75 6d 62 65 72 80 08 00   ersion number...
0090  00   

The only thing which is not nice is our terminal, since nc(1) dumped the binary response to it. Let's try again with some post-processing:

# perl -e 'print "\x11\x22\x33\x44\x55\x66\x77\x88\x00\x00\x00\x00\x00\x00\x00\x00\x00\x20\x22\x08\x66\x66\x66\x66\x28\x00\x00\x00";' \
    | nc -w 1 -p 500 -u rpe-foo.czech 500 | od -c
0000000 021   "   3   D   U   f   w 210 237   y 254 264 351 333 007 344
0000020 013 020 005  \0   [ 251 244   j  \0  \0  \0   g  \0  \0  \0   K
0000040  \0  \0  \0 001 001 020  \0 005 021   "   3   D   U   f   w 210
0000060 237   y 254 264 351 333 007 344 200  \f  \0 001  \0 006  \0   #
0000100   I   n   v   a   l   i   d       I   S   A   K   M   P       m
0000120   a   j   o   r       v   e   r   s   i   o   n       n   u   m
0000140   b   e   r 200  \b  \0  \0
0000147

The fix is obviously in place.

Monday May 04, 2009

External contributions to testing community

I have just integrated a couple of changes which I believe are the first contributed externally to the Testing community as an open-source contribution. The changes add a couple of new tests to the nc test suite to cover the enhancement described in PSARC/2008/680 (present in Nevada since build 106). This is the stuff which allows you to run nc(1) in client mode with complex port list specifications. Previously it was only possible to use simple port ranges like 22-80; with this change one can connect to e.g. 22,24,50-80,66,1024-2048. A little example of how it might be useful:

$ nc -v -z grok.czech 22,25,80-88,8080
Connection to grok.czech 22 port [tcp/ssh] succeeded!
nc: connect to 129.157.71.49 port 25 [host grok.czech] (tcp) failed: Connection refused
Connection to grok.czech 80 port [tcp/*] succeeded!
nc: connect to 129.157.71.49 port 81 [host grok.czech] (tcp) failed: Connection refused
nc: connect to 129.157.71.49 port 82 [host grok.czech] (tcp) failed: Connection refused
nc: connect to 129.157.71.49 port 83 [host grok.czech] (tcp) failed: Connection refused
nc: connect to 129.157.71.49 port 84 [host grok.czech] (tcp) failed: Connection refused
nc: connect to 129.157.71.49 port 85 [host grok.czech] (tcp) failed: Connection refused
nc: connect to 129.157.71.49 port 86 [host grok.czech] (tcp) failed: Connection refused
nc: connect to 129.157.71.49 port 87 [host grok.czech] (tcp) failed: Connection refused
nc: connect to 129.157.71.49 port 88 [host grok.czech] (tcp) failed: Connection refused
Connection to grok.czech 8080 port [tcp/*] succeeded!

Back to the testing part. The putback (yes, stcnv-gate is still using Teamware) log for this change looks like this (I have modified Erik's e-mail a bit):

6786859 portranges_complex_spec is missing the listener
6754842 extended port list specification needs to be tested
Code contributed by Erik Trauschke <erik.trauschke AT freenet.de>

I think this is a really nice example of the ideal state - the contributor not only did the feature part but also the testing part. It shows a great degree of responsibility - not just throwing some code "over the fence" but fully participating in the process to ensure quality over the long term.

The tests are both positive and negative. Each test purpose in the portranges directory is numbered and the following numbers match the test purpose numbers:

  • 5-12 ensure nc treats ports 0 and 65536 as invalid
    Previously, it was possible to listen on ports 0 and 65536; with Erik's changes this is no longer true, so we need regression tests for both cases (client/server) and both ports.
  • 13-19 see if various malformed port list specifications are considered invalid
    Each purpose needs not only positive tests which make sure the functionality actually works, but also negative tests which ensure it does not misbehave. In this case, invalid port list specifications are thrown at nc(1) to see that it reacts accordingly (with an error, that is).
  • 20-25 test the functionality of complex port lists
    This is the bunch of tests which verify that the functionality actually works.
  • 26 tests reallocation
    Since the internal representation of the port list is now dynamically allocated, with a default pre-allocated size which is reallocated if needed, we need to test the reallocation case.

To make such integrations possible, there is now a Test development process. It's similar to the process used in the ON community but more lightweight. The main difference is that the request-sponsor part is done informally via the testing-discuss mailing list and there is no list of bugs to pick from. But don't be shy - whether you're adding new functionality or a completely new program, the Testing community is here to help you.

Tuesday Mar 31, 2009

Command line history and editing for IPsec utilities

Since the days when John Beck added command line editing to zonecfg, Mark Phalan has done a similar thing for the Kerberos utilities and Huie-Ying Lee for sftp. The IPsec utilities (ipseckey(1M) and ikeadm(1M)) have offered the ability to enter commands in interactive mode for a long time, but only since Nevada build 112 do they support command line editing and history too. Again, this is thanks to libtecla (shipped with Solaris/OpenSolaris).

Lessons learned:

  • adding full-blown command line editing support is hard
    Adding the initial support is quite easy. However, more advanced features can require substantial work. This is especially true for tab completion. For sftp, Huie-Ying decided to add tab completion in a future phase because of the ambiguities when completing file names (when to complete local files versus remote files).
    I did the same with tab completion for the IPsec utilities - the integration only delivers basic command line editing support, without tab completion. The problem with ipseckey(1M) and ikeadm(1M) is that their grammar is quite bifurcated and has contexts. For example, you cannot use encr_alg with AH SAs in ipseckey. Or, it would be erroneous to tab complete a valid command in the middle of entering a key if the key hex sequence contained a sub-string of a valid command. The hardest part is, I think, offering the right tables of valid commands in a given context. For example, in our case a command line must start with a top-level command. Each top-level command offers several valid sub-commands and we do not offer invalid sub-commands for a given top-level command, so it is necessary to track the state of the finite state machine describing the grammar contexts.
    Also, after the user has entered src we do not want to allow them to enter it again on the same command line. And if the user has already entered, say, add esp spi, we are expecting an SPI number, not a command name.
    Ideally, to solve this problem in a nice way there should be a meta library (or an additional API in libtecla) which would offer the ability to link command tables and set the contexts.
  • interruptible cycles in command line mode
    ipseckey's monitor command reads from a PF_KEY socket in a loop. The loop is normally interruptible by SIGINT. To achieve this in the libtecla environment (we do not want to exit the command line upon SIGINT and yet we still need to be able to interrupt the cycle), something like this is needed:
         static void
         monitor_catch(int signal)
         {
                 if (!interactive)
                         errx(signal, gettext("Bailing on signal %d."), signal);
         }

         void
         doreadcycle(void)
         {
                 ...

                 /* Catch ^C. */
                 newsig.sa_handler = monitor_catch;
                 newsig.sa_flags = 0;
                 (void) sigemptyset(&newsig.sa_mask);
                 (void) sigaddset(&newsig.sa_mask, SIGINT);
                 (void) sigaction(SIGINT, &newsig, &oldsig);

                 for (; ; ) {
                         rc = read(keysock, samsg, sizeof (get_buffer));
                         /* handle the data */
                 }

                 /* restore old behavior */
                 if (interactive)
                         (void) sigaction(SIGINT, &oldsig, NULL);
         }
    
  • interaction with SMF
    While it's fine to bail out with an error in interactive mode, due to the nature of the IPsec commands (they can read config files using the same routines as in interactive mode and they are used as SMF services to bring up IPsec policy and keys after boot) we need to distinguish between interactive and non-interactive mode.
  • maximum command line history value
    It seems that the second parameter to new_GetLine() - histlen - is commonly misunderstood. This variable does not express the maximum number of lines in the history but rather the maximum size of the history buffer in bytes. If the buffer becomes full, libtecla does not trim the last line but shifts instead.
    Given that the first parameter to new_GetLine() expresses the maximum command line size (in bytes), one needs to do some calculations and estimates of what will be needed to avoid an overly big buffer - ipseckey is used to enter key material, so the line can become quite long. Say we wanted to keep 1024 lines: if the maximum length of a line is 1024 bytes, this gives us a 1 megabyte buffer, which seems too much for a simple application. Thus I did some guessing and set the buffer size accordingly.
    For "common" ipseckey configuration commands (think a moderately bifurcated 'add') it's circa 300 characters. Mostly, however, users enter query commands like 'flush esp', 'dump ah' and the like, so those are somewhere around 30 characters. Say 30% of the commands are configuration and the rest are queries; to hold 100 such commands only about 10K of memory is required. In the end I chose 64K, enough to hold 15 of the big (4K) commands. A minimal sketch of the resulting initialization follows this list.
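
Here is that minimal sketch (the sizes are just the estimates from above, nothing mandated by libtecla):

#include <libtecla.h>

#define	MAX_LINE_LEN	1024		/* longest expected command line, in bytes */
#define	HIST_BUF_SIZE	(64 * 1024)	/* history buffer size, in bytes */

GetLine *
init_input_line(void)
{
	/*
	 * The second argument to new_GetLine() is the size of the
	 * history buffer in bytes, not the number of history lines.
	 */
	return (new_GetLine(MAX_LINE_LEN, HIST_BUF_SIZE));
}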

Friday Nov 07, 2008

Automatic webrev upload

I will start this one a little bit generically..

Part of the standard OpenSolaris/Solaris development process is code review. To facilitate a review, a so-called webrev is needed. A webrev is a set of HTML/text/PDF pages and documents which display all the changes between a local repository containing the changes and its parent repository. To produce a webrev, simply switch to the repository and run the webrev script (it is part of the SUNWonbld package, which can be downloaded from the OpenSolaris download center):

$ cd /local/Source/bugfix-314159.onnv
$ webrev

Assuming /opt/onbld/bin is present in your PATH, a webrev will be generated under the /local/Source/bugfix-314159.onnv/webrev/ directory.

For OpenSolaris changes, the webrev is usually uploaded to cr.opensolaris.org (every OpenSolaris member has an account automatically created for them) which serves it under http://cr.opensolaris.org/~OSol_username/ (where OSol_username is your OpenSolaris username), and a request for review with a link to the webrev is sent to one of the mailing lists relevant to the change.
Dan Price has written a script which produces an RSS feed of recently uploaded webrevs, which is a pretty handy substitute for feeds from news/headlines/magazines :)

For a long time I was basically doing the following:

$ cd /local/Source/bugfix-foo.onnv && webrev
$ scp -r webrev cr.opensolaris.org:bugfix-foo.onnv

This had two flaws: first, it was slow (because of the rcp protocol over an SSH channel), and second, I had to delete the old webrev via a separate command (using sftp(1) to rename the old webrev into the .trash directory) before uploading a new version of the webrev (otherwise a couple of permissions errors would follow).

To solve the first problem, rsync (with SSH transport) can be used, which makes the upload nearly instantaneous. The second problem can be worked around by using incremental webrevs. Still, this does not seem good enough for code reviews with many iterations.

So, the change made in CR 6752000 introduces the following command line options for automatic webrev upload:

  • -U uploads the webrev
  • -n suppresses webrev generation
  • -t allows specifying a custom upload target

The webrev.1 man page has been updated to explain the usage. For common OpenSolaris code reviews the invocation will probably mostly look like this:

$ webrev -O -U

This will upload the webrev to cr.opensolaris.org under a directory named after the local repository. Further invocations will replace the remote webrev with a fresh version.
But it is possible to get more advanced. After the initial webrev is posted, an incremental webrev can be both generated and posted. Assuming you have switched to the repository (via bldenv) and we're dealing with the 4th round of code review, the following commands will perform the task:

webrev_name=`basename $CODEMGR_WS`
webrev -O -U -o $CODEMGR_WS/${webrev_name}.rd4 \
    -p $CODEMGR_WS/${webrev_name}.rd3

The above commands hide some maybe-not-so-obvious behavior, so I'll try to explain it in a table:

+---------------------------+------------------------+-----------------------------------------------------+
| command                   | local webrev directory | remote webrev directory                             |
+---------------------------+------------------------+-----------------------------------------------------+
| webrev -O -U              | $CODEMGR_WS/webrev/    | cr.opensolaris.org/~OSOLuser/`basename $CODEMGR_WS` |
+---------------------------+------------------------+-----------------------------------------------------+
| webrev -O -o \             | $CODEMGR_WS/my_webrev/ | cr.opensolaris.org/~OSOLuser/my_webrev              |
|   $CODEMGR_WS/my_webrev   |                        |                                                     |
+---------------------------+------------------------+-----------------------------------------------------+
| webrev -O \                | $CODEMGR_WS/fix.rd2/   | cr.opensolaris.org/~OSOLuser/fix.rd2                |
|  -p $CODEMGR_WS/fix.rd1 \  |                        |                                                     |
|  -o $CODEMGR_WS/fix.rd2   |                        |                                                     |
+---------------------------+------------------------+-----------------------------------------------------+

Basically, without the -o flag webrev will generate the webrev into a local directory named 'webrev' but will upload it to a remote directory named after the basename of the local repository. With the -o flag webrev will use the basename of the specified directory for both local and remote storage. This is done to keep the default behavior of generating the local webrev into a directory named 'webrev'; at the same time, uploading different webrevs into the same remote directory named 'webrev' would not make sense.

NOTE: This behavior also applies when webrev is not run from within a repository entered via ws or bldenv; I have just used $CODEMGR_WS to denote the root directory of a workspace/repository.

Also, it is now possible to call webrev from within the Cadmium Mercurial plugin, so all webrev commands can be prefixed with hg.

All in all, it was fun dealing with webrevs of webrev. I am looking forward to more entries in the RSS feed :)

NOTE: It will take some time before the changes appear in the SUNWonbld packages offered by the download center, so it's better to update the sources from the ssh://anon@hg.opensolaris.org/hg/onnv/onnv-gate repository and build and upgrade the SUNWonbld package from there.

Sunday Apr 13, 2008

poll(2) and POLLHUP with pipes in Solaris

During nc(1) pre-integration testing, a short time before it went back, I found that 'cat /etc/passwd | nc localhost 4444' produced an endless loop with 100% CPU utilization, looping in calls to poll(2) (I still remember my laptop suddenly getting much warmer than it should be and the CPU fan cranking up). 'nc localhost 4444 < /etc/passwd' did not exhibit that behavior.
The cause was a difference between the poll(2) implementations on BSD and Solaris. Since I am working on Netcat in Solaris again (adding more features, stay tuned), it's time to take a look back and maybe even help people porting similar software from BSD to Solaris.

The issue appears because on Solaris POLLHUP is set in the read events bitfield for stdin after the pipe is closed (or, to be more precise, after the producer/write end is done). The following poll.c (which resembles the readwrite() function from nc) illustrates the issue:

01 #include <stdio.h>
02 #include <poll.h>
03 
04 #define LEN  1024
05 
06 int main(void) {
07      int timeout = -1;
08      int n;
09      char buf[LEN];
10      int plen = LEN;
11 
12      struct pollfd pfd;
13 
14      pfd.fd = fileno(stdin);
15      pfd.events = POLLIN;
16 
17      while (pfd.fd != -1) {
18              if ((n = poll(&pfd, 1, timeout)) < 0) {
19                      err(1, "Polling Error");
20              }
21              fprintf(stderr, "revents = 0x%x [ %s %s ]\n",
22                  pfd.revents,
23                  pfd.revents & POLLIN ? "POLLIN" : "",
24                  pfd.revents & POLLHUP ? "POLLHUP" : "");
25      
26              if (pfd.revents & (POLLIN|POLLHUP)) {
27                      if ((n = read(fileno(stdin), buf, plen)) < 0) {
28                              fprintf(stderr,
29                                  "read() returned neg. val (%d)\n", n);
30                              return;
31                      } else if (n == 0) {
32                              fprintf(stderr, "read() returned 0\n", n);
33                              pfd.fd = -1;
34                              pfd.events = 0;
35                      } else {
36                              fprintf(stderr, "read: %d bytes\\n", n);
37                      }
38              }
39      }
40 }

Running it on NetBSD (chosen because my personal non-work mailbox is hosted on a machine running it) produces the following:

otaku[~]% ( od -N 512 -X -v /dev/zero | sed 's/ [ \t]*/ /g'; sleep 3 ) | ./poll
revents = 0x1 [ POLLIN  ]
read: 1024 bytes
revents = 0x1 [ POLLIN  ]
read: 392 bytes
revents = 0x11 [ POLLIN POLLHUP ]
read() returned 0

I had to post-process the output of od(1) (because of differences between the output of od(1) on NetBSD and Solaris) and slow the execution down a bit (via sleep) in order to make things more visible (try to run the command without the sleep and the pipe will be closed too quickly). On OpenSolaris the same program produces a different pattern:

moose:~$ ( od -N 512 -X -v /dev/zero | sed 's/ [ \t]*/ /g' ; sleep 3 ) | ./poll
revents = 0x1 [ POLLIN  ]
read: 1024 bytes
revents = 0x1 [ POLLIN  ]
read: 392 bytes
revents = 0x10 [  POLLHUP ]
read() returned 0

So, the program above is correct. Had the statement on line 26 checked only POLLIN, the command above (with or without the sleep) would go into an endless loop on Solaris:

revents = 0x11 [ POLLIN POLLHUP ]
read: 1024 bytes
revents = 0x11 [ POLLIN POLLHUP ]
read: 392 bytes
revents = 0x10 [  POLLHUP ]
revents = 0x10 [  POLLHUP ]
revents = 0x10 [  POLLHUP ]
...

Both OSes set POLLHUP after the pipe is closed. The difference is that while BSD always indicates POLLIN (even if there is nothing to read), Solaris strips it after the data stream has ended. So, which one is correct? The poll() function as described by the Open Group says that "POLLHUP and POLLIN are not mutually exclusive". This means both implementations seem to conform to the IEEE Std 1003.1, 2004 Edition standard (part of POSIX) in this respect.

However, the POSIX standard also says:

    In each pollfd structure, poll ( ) shall clear the revents member, except that where the application requested a report on a condition by setting one of the bits of events listed above, poll ( ) shall set the corresponding bit in revents if the requested condition is true. In addition, poll ( ) shall set the POLLHUP, POLLERR, and POLLNVAL flag in revents if the condition is true, even if the application did not set the corresponding bit in events.

This might still be OK even though the POLLIN flag remains set in NetBSD's poll() even when no data are available for reading (try to comment out lines 33 and 34 and run as above), because the standard says about the POLLIN flag: For STREAMS, this flag is set in revents even if the message is of zero length.

Without further reading it is hard to tell how exactly a POSIX compliant poll() should behave. On the Austin group mailing list there was a thread about poll() behavior with respect to POLLHUP suggesting this is a fuzzy area.

Anyway, to see where exactly POLLHUP is set for pipes in OpenSolaris, go to fifo_poll(). The function _sets_ the revents bit field to POLLHUP, so the POLLIN flag is wiped off after that. fifo_poll() is part of the fifofs kernel module, which has been around in Solaris since the late eighties (I was still in elementary school the year fifovnops.c appeared in the SunOS code base :)). NetBSD has fifofs too, but there the POLLHUP flag gets set via a bit logic operation in pipe_poll(), which is part of the syscall processing code. The difference between the OpenSolaris and NetBSD (whoa, the NetBSD project uses OpenGrok!) attitudes to POLLHUP is now clear.

Thursday Apr 03, 2008

ZFS is going to save my laptop data next time

The flashback is still vivid even weeks later: the day before my presentation at the FIRST Technical Colloquium in Prague I brought my two-year-old laptop with the work-in-progress slides to the office. Since I wanted to finish the slides in the evening, a live-upgrade process was fired off on the laptop to get a fresh Nevada version (of course, to show off during the presentation ;)).
LU is a very I/O intensive process and the red Ferrari notebooks tend to get _very_ hot. In the afternoon I noticed that the process had failed. To my astonishment, the I/O operations started to fail. After a couple of reboots (and zpool status / fmadm faulty commands) it was obvious that the disk could not be trusted anymore. I was able to rescue some data from the ZFS pool which spanned the biggest slice of the internal disk, but not all of it. (ZFS is not willing to hand out corrupted data.) My slides were lost, as well as other data.

After some time I stumbled upon James Gosling's blog entry about ZFS mirroring on a laptop. This got me started (or, more precisely, I was astonished and wondered how this idea could have escaped me, since ZFS had been in Nevada for a long time by then) and I discovered several similar and more in-depth blog entries about the topic.
After some experiments with a borrowed USB disk it was time to make it a reality on a new laptop.

The process was a multi-step one:

  1. First I had to extend the free slice #7 on the internal disk so it spanned the remaining space on the disk, because it had been trimmed after the experiments. In the end the slices look like this in format(1) output:
    Part      Tag    Flag     Cylinders         Size            Blocks
      0       root    wm       3 -  1277        9.77GB    (1275/0/0)   20482875
      1 unassigned    wm    1278 -  2552        9.77GB    (1275/0/0)   20482875
      2     backup    wm       0 - 19442      148.94GB    (19443/0/0) 312351795
      3       swap    wu    2553 -  3124        4.38GB    (572/0/0)     9189180
      4 unassigned    wu       0                0         (0/0/0)             0
      5 unassigned    wu       0                0         (0/0/0)             0
      6 unassigned    wu       0                0         (0/0/0)             0
      7       home    wm    3125 - 19442      125.00GB    (16318/0/0) 262148670
      8       boot    wu       0 -     0        7.84MB    (1/0/0)         16065
      9 alternates    wu       1 -     2       15.69MB    (2/0/0)         32130
    
  2. Then the USB drive was connected to the system and recognized via format(1):
    AVAILABLE DISK SELECTIONS:
           0. c0d0 
              /pci@0,0/pci-ide@12/ide@0/cmdk@0,0
           1. c5t0d0 
              /pci@0,0/pci1025,10a@13,2/storage@4/disk@0,0
    
  3. The live upgrade boot environment which was not active was deleted via ludelete(1M) and its slice was commented out in /etc/vfstab. This was needed to make zpool(1M) happy.
  4. A ZFS pool was created out of the slice on the internal disk (c0d0s7) and the external USB disk (c5t0d0). I had to force it because zpool(1M) complained about the overlap of c0d0s2 (the slice spanning the whole disk) and c0d0s7:
    # zpool create -f data mirror c0d0s7 c5t0d0
    
    For a while I struggled with finding a name for the pool (everybody seems to either stick with the 'tank' name or come up with some double-cool-stylish name, which I wanted to avoid because the excitement of such a name is likely to wear off) but then chose the ordinary data (it's what it is, after all).
  5. I have verified that it is possible to disconnect the USB disk and safely reconnect it while an I/O operation is in progress:
    root:moose:/data# mkfile 10g /data/test &
    [1] 10933
    root:moose:/data# zpool status
      pool: data
     state: ONLINE
     scrub: none requested
    config:
    
    	NAME        STATE     READ WRITE CKSUM
    	data        ONLINE       0     0     0
    	  mirror    ONLINE       0     0     0
    	    c0d0s7  ONLINE       0     0     0
    	    c5t0d0  ONLINE       0     0     0
    
    errors: No known data errors
    
    It survived without a hitch (okay, I had to wait a little bit longer for the zpool command to complete due to the still ongoing I/O, but that was it) and the contents were resynced automatically after the USB disk was reconnected:
    root:moose:/data# zpool status
      pool: data
     state: DEGRADED
    status: One or more devices are faulted in response to persistent errors.
    	Sufficient replicas exist for the pool to continue functioning in a
    	degraded state.
    action: Replace the faulted device, or use 'zpool clear' to mark the device
    	repaired.
     scrub: resilver in progress for 0h0m, 3.22% done, 0h5m to go
    config:
    
    	NAME        STATE     READ WRITE CKSUM
    	data        DEGRADED     0     0     0
    	  mirror    DEGRADED     0     0     0
    	    c0d0s7  ONLINE       0     0     0
    	    c5t0d0  FAULTED      0     0     0  too many errors
    
    errors: No known data errors
    
    Also, under heavy I/O it is necessary to clear the pool state after the resilver completes, via zpool clear data, because the USB drive gets marked as faulted. Normally this will not happen (unless the drive has really failed) because I will be connecting and disconnecting the drive only when powering on or shutting down the laptop, respectively.
  6. After that I used Mark Shellenbaum's blog entry about ZFS delegated administration (it was Mark who did the integration) and the ZFS Delegated Administration chapter from the OpenSolaris ZFS Administration Guide, created permission sets for my local user, and assigned those permissions to the ZFS pool 'data' and the user:
      # chmod A+user:vk:add_subdirectory:fd:allow /data
      # zfs allow -s @major_perms clone,create,destroy,mount,snapshot data
      # zfs allow -s @major_perms send,receive,share,rename,rollback,promote data
      # zfs allow -s @major_props copies,compression,quota,reservation data
      # zfs allow -s @major_props snapdir,sharenfs data
      # zfs allow vk @major_perms,@major_props data
    
    All of the commands had to be done under root.
  7. Now the user is able to create a home directory for himself:
      $ zfs create data/vk
    
  8. Time to set up the environment of datasets and prepare it for data. I have separated the datasets according to a 'service level'. Some data are very important to me (e.g. presentations ;)) so I want them multiplied via the ditto blocks mechanism; with the copies dataset property set to 2 they are actually present 4 times (the pool is mirrored). Also, documents are not usually accompanied by executable code, so the exec property was set to off, which prevents running scripts or programs from that dataset.
    Some data are volatile and come in high quantity, so they do not need any additional protection and it is a good idea to compress them with a better compression algorithm to save some space. The following table summarizes the setup:
     +-------------------+--------------------------------+------------------------+
     | dataset           | properties                     | comment                |
     +-------------------+--------------------------------+------------------------+
     | data/vk/Documents | copies=2                       | presentations          |
     | data/vk/Saved     | compression=on exec=off        | stuff from web         |
     | data/vk/DVDs      | compression=gzip-8 exec=off    | Nevada ISOs for LU     |
     | data/vk/CRs       | compression=on copies=2        | precious source code ! |
     +-------------------+--------------------------------+------------------------+
    
    So the commands will be:
      $ zfs create -o copies=2 data/vk/Documents
      $ zfs create -o compression=gzip-3 -o exec=off data/vk/Saved
      ...
    
  9. Now it is possible to migrate all the data, change the user's home directory to /data/vk (e.g. via /usr/ucb/vipw) and log in again.
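
To double check that the delegation from step 6 and the properties from step 8 ended up as intended, something along these lines should do (just a quick sanity check; zfs allow with only a dataset argument prints the delegated permissions):
  $ zfs allow data
  $ zfs get -r copies,compression,exec data/vk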

However, this is not the end of it but just the beginning. There are many things that could make the setup even better, to name a few:

  • set up some sort of automatic snapshots for selected datasets
    The set of scripts and the SMF service for doing ZFS snapshots and backups (see ZFS Automatic For The People and related blog entries) made by Tim Foster could be used for this task.
  • make zpool scrub run periodically (a sample crontab entry is sketched below this list)
  • detect failures of the disks
    It would be ideal to see this in the Gnome panel or the Gnome system monitor.
  • set up off-site backup via SunSSH + zfs send
    This could be done using the hooks provided by Tim's scripts (see above); a minimal send/receive pipeline is also sketched below the list.
  • Set quotas and reservations for some of the datasets.
  • Install ZFS scripts for Gnome nautilus so I will be able to browse, take and destroy snapshots in nautilus. Now which set of scripts to use? Chris Gerhard's or Tim Foster's? Or should I just wait for the official ZFS support for nautilus to be integrated?
  • Find out how exactly the recovery scenario (in case of laptop drive failure) will look.
    Importing the ZFS pool from the USB disk should suffice, but during my experiments I was not able to complete it successfully.
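
A couple of the items above boil down to one-liners. The following is only a sketch; the pool name data, the dataset data/vk, the snapshot name offsite, the remote machine backuphost and its pool backup are all assumed for illustration:
  # root's crontab entry to scrub the pool every Sunday at 3am
  0 3 * * 0 /usr/sbin/zpool scrub data

  # off-site backup of a snapshot over SunSSH
  $ zfs snapshot data/vk@offsite
  $ zfs send data/vk@offsite | ssh backuphost zfs receive backup/vk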

With all the above, the data should be safe from disk failure (after all, disks are often called "spinning rust", so they are going to fail sooner or later) and even from the loss of both the laptop and the USB disk.

Lastly, a philosophical thought: one of my colleagues considers hardware to be a necessary (and very faulty) layer which is only needed to make it possible to express ideas in software. This might seem extreme, but come to think of it, ZFS is special in this sense - being the software which provides that bridge, its core idea is to isolate hardware faults.

Friday Feb 15, 2008

FIRST Technical Colloquium in Prague

Two weeks ago (yeah, I am a slacker) a FIRST technical colloquium was held in Prague and we (Sasha and I) were given the opportunity to attend (the fact that Derrick serves as FIRST chair in the steering committee has, of course, something to do with it).

I only attended one day of the technical colloquium (Tuesday the 29th). The day was filled with various talks and presentations, most of them given by members of CERT teams from around the world, since the event was a joint meeting of FIRST and TF-CSIRT. It was definitely interesting to see very different approaches to the shared problem set (dealing with incidents, setting up honeypots, building forensic analysis labs, etc.). These differences stemmed not only from the sizes of the networks and organizations but also (and that was kind of funny) from nationalities.
In the morning I talked about the integration of Netcat into Solaris, describing the process, current features and planned enhancements and extensions.

The most anticipated talk was by Adam Laurie, who is an entertaining guy involved in many hacker-like activities (see e.g. the A hacker games the hotel article by Wired) aimed at proving the insecurity of many publicly used systems.
Adam (brother of Ben Laurie, the author of Apache-SSL and an OpenSSL contributor) started with an intro about satellite scanning and insecure hotel safes (with backdoors installed by the manufacturers which can be defeated with a screwdriver). Then he proceeded to talk about RFID chips, mainly about cloning them.

Also, at the "social event" in the evening I had the pleasure to share a table with Ken van Wyk who is overall cool fellow and the author of Secure coding and Incident response books from O'Reilly.

All in all, it was interesting to see so many security types in one room and to get to know some of them.

Monday Feb 11, 2008

Grepping dtrace logs

I have been working on a tough bug for a non-trivial amount of time. The bug is a combination of a race condition and data consistency issues. To debug it I am heavily using a multi-threaded Apache process and dtrace. The logs produced by dtrace are huge and consist mostly of dumps of internal data structures.

An excerpt from such a log looks e.g. like this:

  1  50244       pk11_return_session:return       128   8155.698
  1  50223            pk11_RSA_verify:entry       128   8155.7069
  1  50224           pk11_get_session:entry       128   8155.7199
  1  50234          pk11_get_session:return       128   8155.7266
PK11_SESSION:
  pid = 1802
  session handle = 0x00273998
  rsa_pub_key handle -> 0x00184e70
  rsa_priv_key handle -> 0x00274428
  rsa_pub = 0x00186248
  rsa_priv = 0x001865f8

  1  50224           pk11_get_session:entry       128   8155.7199

  1  50244       pk11_return_session:return       128   8155.698

Side note: this is a post-processed log (probes together with their bodies are sorted by timestamp, and timestamps are converted to milliseconds - see the Coloring dtrace output entry for details and scripts).

Increasingly, I was asking myself questions which resembled this one: "when was function foo() called with data which contained the value bar?"

This quickly led to a script which does the tedious job for me. dtrace-grep.pl accepts 2 or 3 parameters. The first two are the probe pattern and the data pattern, respectively. The third, which is optional, is the input file (if not supplied, stdin will be used). An example of use on the file pasted above looks like this:

~/bin$ ./dtrace-grep.pl pk11_get_session 0x00186248 /tmp/test
  1  50234          pk11_get_session:return       128   8155.7266
PK11_SESSION:
  pid = 1802
  session handle = 0x00273998
  rsa_pub_key handle -> 0x00184e70
  rsa_priv_key handle -> 0x00274428
  rsa_pub = 0x00186248
  rsa_priv = 0x001865f8


~/bin$ 

Now I have to get back to work, to do some more pattern matching. Oh, and the script is here.

Thursday Jan 17, 2008

Adding dtrace SDT probes

It seems that many developers and dtrace users have found themselves in a position where they wanted to add some SDT probes to a module to get more insight into what's going on, but had to pause and think: "okay, more probes. But where to put them? Do I really need additional probes when I already have the fbt ones?" To do this well, a systematic approach is needed in order not to over-do or under-do it. I will use KSSL (the Solaris kernel SSL proxy [1]) for illustration.

With CR 6556447, tens of SDT probes were introduced into the KSSL module and other modules which interface with it. In addition to the new SDT probes, in KSSL we also got rid of the KSSL_DEBUG macros compiled only into DEBUG kernels and substituted them with SDT probes. As a result, much better observability and error detection was achieved with both debug and non-debug kernels. The other option would have been to create a KSSL dtrace provider, but that would be too big a gun for what needed to be achieved.

Generally, the following interesting data points for data gathering/observation can be identified in code:

  • data paths
    When there is more than one path by which data can flow into a subsystem. E.g. for TCP we have a couple of cases in which SSL data can reach the KSSL input queue. To identify exactly where tcp_kssl_input() was called from, we use SDT probes:
    	if (tcp->tcp_listener || tcp->tcp_hard_binding) {
    ...
    		if (tcp->tcp_kssl_pending) {
    			DTRACE_PROBE1(kssl_mblk__ksslinput_pending,
    			    mblk_t *, mp);
    			tcp_kssl_input(tcp, mp);
    		} else {
    			tcp_rcv_enqueue(tcp, mp, seg_len);
    		}
    	} else {
    ...
    		/* Does this need SSL processing first? */
    			if ((tcp->tcp_kssl_ctx != NULL) &&
    			    (DB_TYPE(mp) == M_DATA)) {
    				DTRACE_PROBE1(kssl_mblk__ksslinput_data1,
    				    mblk_t *, mp);
    				tcp_kssl_input(tcp, mp);
    			} else {
    				putnext(tcp->tcp_rq, mp);
    				if (!canputnext(tcp->tcp_rq))
    					tcp->tcp_rwnd -= seg_len;
    			}
    ...
    
  • data processed in while/for loops
    To observe what happens in each iteration of the loop. This can be used in code like this:
    while (mp != NULL) {
    
      DTRACE_PROBE1(kssl_mblk__handle_record_cycle, mblk_t *, mp);
    
      /* process the data */
      ...
    
      mp = mp->b_cont;
    }
    
  • switch statements
    If significant/non-trivial processing happens inside a switch statement, it may be useful to add SDT probes there too. E.g.:
      content_type = (SSL3ContentType)mp->b_rptr[0];
      switch(content_type) {
        /* select processing according to type */
        case content_alert:
           DTRACE_PROBE1(kssl_mblk__content_alert, mblk_t *, mp);
           ...
           break;
        case content_change_cipher_spec:
           DTRACE_PROBE1(kssl_mblk__change_cipher_spec, mblk_t *, mp);
           ...
           break;
        default:
           DTRACE_PROBE1(kssl_mblk__unexpected_msg, mblk_t *, mp);
           break;
      }
    
  • labels which cannot be (easily) identified in any other way
    Useful if the code which follows the label is generic (assignments, no function calls), e.g.:
                                    /*
                                     * Give this session a chance to fall back to
                                     * userland SSL
                                     */
                                    if (ctxmp == NULL)
                                            goto no_can_do;
    ...
    no_can_do:
                                DTRACE_PROBE1(kssl_no_can_do, tcp_t *, tcp);
                                    listener = tcp->tcp_listener;
                                    ind_mp = tcp->tcp_conn.tcp_eager_conn_ind;
                                    ASSERT(ind_mp != NULL);
    

You've surely noticed that some of the probe definitions above have a common prefix (kssl_mblk-). This is one of the things which make SDT probes so attractive. With prefixes it is possible to do e.g. the following:

sdt:::kssl_err-*
{
  trace(timestamp);
  printf("hit error in %s\n", probefunc);
  stack(); ustack();
}

The important part is that we do not specify the module or function name. The implicit wildcard (module/function left out) combined with the explicit wildcard (prefix + asterisk) leads to all KSSL error probes being enabled, regardless of which module or function they are defined in. This is obviously very useful for technologies which span multiple Solaris subsystems or modules (such as KSSL).
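
Since the prefix is just a part of the probe name, the whole probe family can also simply be listed to see what is available on a particular system. A quick sketch (assuming the kssl module is loaded; the exact list will differ between builds):

# dtrace -l -n 'sdt:::kssl_err-*'
# dtrace -l -n 'sdt:::kssl_mblk-*'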

The nice thing about the error probes is that they can be leveraged in test suites. For each test case we can first run a dtrace script with the above probeset covering all KSSL errors in the background, and after the test completes just check whether it produced any data. If it did, the test case can be considered failed. There is no need to check kstat(1M) (and other counters), log files, etc.
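
A minimal sketch of such a test wrapper might look like this (kssl_errors.d is assumed to contain the kssl_err-* probeset from above, and run_kssl_test_case stands in for whatever drives the actual test case):

#!/bin/ksh
# watch for KSSL error probes while the test case runs
dtrace -q -s kssl_errors.d -o /tmp/kssl_errors.$$ &
dtrace_pid=$!
sleep 2                 # give dtrace time to enable the probes

run_kssl_test_case      # hypothetical test case driver

kill $dtrace_pid
# any output from the probeset means some error probe fired
if [ -s /tmp/kssl_errors.$$ ]; then
        echo "FAIL: KSSL error probe(s) fired"
        exit 1
fi
exit 0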

Also, thanks to the way dtrace probes are enabled, we can have both a generic probeset (using this term for lack of a better one) as above and, in addition, a probe-specific action, e.g.:

/* probeset of all KSSL error probes */
sdt:::kssl_err-*
{
  trace(timestamp);
  printf("hit error in %s\n", probefunc);
}

/*
  the probe definition is:
     DTRACE_PROBE2(kssl_err__bad_record_size,
         uint16_t, rec_sz, int, spec->cipher_bsize);
 */
sdt:kssl:kssl_handle_record:kssl_err-bad_record_size
{
  trace(timestamp);
  printf("rec_sz = %d, cipher_bsize = %d\n", arg0, arg1);
}

If the kssl_err-bad_record_size probe fires, the generic clause fires for it too, because the probeset contains that probe.

Similarly to the error prefix, we can have a data-specific prefix. For KSSL it is the kssl_mblk- prefix, which can be used for tracing all mblks (msgb(9S)) as they flow through the TCP/IP, STREAMS and KSSL modules. With such probes it is possible to do e.g. the following:

/* how many bytes from a mblk to dump */
#define DUMP_SIZE       48

/* use macros from <sys/strsun.h> */
#define MBLKL(mp)       ((mp)->b_wptr - (mp)->b_rptr)
#define DB_FLAGS(mp)    ((mp)->b_datap->db_flags)

#define PRINT_MBLK_INFO(mp)                                             \
        printf("mblk = 0x%p\n", mp);                                    \
        printf("mblk size = %d\n", MBLKL((mblk_t *)mp));                \
        PRINT_MBLK_PTRS(mp);

#define PRINT_MBLK(mp)                                                  \
                trace(timestamp);                                       \
                printf("\n");                                           \
                PRINT_MBLK_INFO(mp);                                    \
                printf("DB_FLAGS = 0x%x", DB_FLAGS((mblk_t *)mp));      \
                tracemem(((mblk_t *)mp)->b_rptr, DUMP_SIZE);            \
                tracemem(((mblk_t *)mp)->b_wptr - DUMP_SIZE,            \
                        DUMP_SIZE);

sdt:::kssl_mblk-*
{
  trace(timestamp);
  printf("\n");
  PRINT_MBLK(arg0)
}

This is actually an excerpt from my (currently internal) KSSL debugging suite.
An example of the output from such a probe can be seen in my Coloring dtrace output post.
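
One practical note: because the script above uses C preprocessor directives (#define), it has to be run through cpp, i.e. with the -C flag. Assuming the script is saved as kssl_mblk.d:

# dtrace -C -s kssl_mblk.d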

For more complex projects it would be a waste to stop here. Prefixes could be structured further. However, this has some drawbacks. In particular, I was thinking about having kssl_mblk- and kssl_err- prefixes. Now what to do for places where an error condition occurred _and_ we would like to see the associated mblk? Using something like kssl_mblk_err-* comes to one's mind. However, there is a problem with that - what about the singleton cases (only mblk, only err)? Sure, using multiple wildcards in dtrace is possible (e.g. syscall::*read*:), but this would make things ugly and complicated given the number of mblk+err cases (it's probably safe to assume that the number of such cases will be low). Simply, it's not worth the hassle. Rather, at such places I went with 2 separate probes.
To conclude, using structured prefixes is highly beneficial only for sets of probes where the categories/sub-prefixes form non-intersecting sets (e.g. data type and debug level).

Of course, all of the above is valid not only for the kernel but also for custom userland probes!

[1] High-level description of KSSL can be found in blueprint 819-5782.

