Thursday Nov 20, 2008

Testing netcat

After multiple rounds of code review, the netcat (or nc) test suite is now finally in the onnv-stc2 gate. The test suite has its home in the OpenSolaris Networking community (see the networking tests page for the list of networking test suites).
The source code is present in the src/suites/net/nc/ directory and the SUNWstc-netcat packages can be downloaded from the OpenSolaris Download Center.

Before I go further, this is how it looks when the test suite is run (the output is trimmed a bit):

vk:honeymooners:/opt/SUNWstc-nc$ run_test nc
Validating Arguments...
New TET_ROOT for this run : /var/tmp/honeymooners_27828
The results will be available in /var/tmp/results.27828
tcc: journal file is /var/tmp/results.27828/testlog
12:45:57  Execute /tests/dflag/tc_dflag
12:46:04  Execute /tests/hflag/tc_hflag
12:46:05  Execute /tests/kflag/tc_kflag
12:46:11  Execute /tests/nflag/tc_nflag
12:46:15  Execute /tests/portranges/tc_portranges
12:46:23  Execute /tests/pflag/tc_pflag
12:46:26  Execute /tests/sflag/tc_sflag
12:46:35  Execute /tests/Uflag/tc_Uflag
12:46:36  Execute /tests/vflag/tc_vflag
12:46:43  Execute /tests/zflag/tc_zflag
12:46:46  Execute /tests/iflag/tc_iflag
12:46:59  Execute /tests/lflag/tc_lflag
12:47:29  Execute /tests/rflag/tc_rflag
12:48:16  Execute /tests/Tflag/tc_Tflag
12:48:33  Execute /tests/uflag/tc_uflag
12:48:50  Execute /tests/wflag/tc_wflag
##################################################
TC /tests/dflag/tc_dflag

TP 1 tc_dflag PASS
##################################################
TC /tests/hflag/tc_hflag

TP 1 tc_hflag PASS

...

##################################################
                 SUMMARY      
                 =======      
 
Number of Tests : 50

PASS            : 50
FAIL            : 0
UNRESOLVED      : 0
UNINITIATED     : 0
OTHER           : 0
 
##################################################

Test Logs are at /var/tmp/results.27828, Journal File = /var/tmp/results.27828/testlog 

vk:honeymooners:/opt/SUNWstc-nc$

It's been almost a year since I started developing the test suite last Christmas (see the initial blog entry about nc-tet). Since then, I have lost part of the source code in a hard drive crash, redone the source tree structure, fixed ksh style issues, fixed numerous bugs in the test suite code and made the test suite more robust. One might ask whether having a test suite for such a simple program as nc(1) was worth the hassle. I have only one answer to that: absolutely. First, it gives confidence that (most of; see below) the existing functionality does not break when changing or adding features, and second, it helped me (and, I hope, the others participating in or observing the code review on testing-discuss too) explore what it takes to write a test suite from scratch (I will not go into details here on whether I prefer CTI-TET over STF and vice versa).

The Beautiful Code book (which I really recommend to anyone tinkering with any source code) contains a chapter called Beautiful Tests by Alberto Savoia. I hope that at least some of the test purposes in the nc test suite have some degree of beauty in at least one of the ways highlighted by Alberto (1. simplicity/efficiency, 2. helping make the software under test better in terms of quality and testability, 3. breadth/thoroughness).

One of the important questions for a test suite is its level of code coverage. Obviously, for software adhering to the OpenSolaris interface taxonomy model it is important that the test suite exercises all of the Committed interfaces and the execution paths around them. For nc(1) this means a subset of the command line options and their arguments (see PSARC 2007/389 for the actual list). The key is certainly to test the features which are most likely to break with an intrusive code change.

A very crude view of test coverage for the nc(1) test suite (counting test purposes gives only a remote idea of real coverage but at least provides a visual image) looks like this:

       rflag: +
       Tflag: +++++---
       pflag: +
       iflag: +-
       vflag: ++
       kflag: +
       Uflag: +-
       dflag: +
       uflag: ++-
       sflag: +-
       hflag: +
       nflag: +-
       wflag: +
  portranges: +---
       lflag: ++++++++----------

Each plus character stands for one positive test purpose; each minus for one negative test purpose.

Side note: the above ASCII graph was produced using the test-coverage-graph.sh script (which presumes a certain naming scheme for test purpose files). Just pipe a listing of test purpose filenames compliant with the scheme used in the onnv-stc2 gate into the script and it will spew out a graph similar to the above.

In the above musing about code coverage I left out an important piece - why some of the features are not tested. For nc(1) the yet-untested part is the SOCKS protocol support, basically because the test suite environment does not contain a SOCKS server to test against. There might not be many people using the -x/-X options, but from my own experience nothing is more frustrating than discovering some old dusty corner which should have been fixed long ago or removed completely. So for now, on my workstation (which sits behind a SOCKS proxy) I have the following in ~/.ssh/config for a server outside the corporate network; the server hosts my personal mailbox, so it is accessed every day:

Host bar
  User foo
  Hostname outside.kewl.org
  # NOTE: for nc(1) testing
  ProxyCommand /usr/bin/nc -x socks-proxy.foothere.bar outside.kewl.org %p
  ForwardAgent no
  ForwardX11 no

This ensures (along with periodic upgrades of the workstation to recent Nevada builds) that the SOCKS support gets tested as well. And yes, ssh-socks5-proxy-connect(1) and ssh-http-proxy-connect(1) are not really needed.

Now with the test suite in place, anybody modifying nc(1) (there are some RFEs for nc in the oss-bit-size list, and other bug fixes or features are also welcome) can have pretty high confidence that their change will not break things. Yes, this means that more nc(1) features are coming.

Friday Nov 07, 2008

Automatic webrev upload

I will start this one a little bit generically...

Part of the standard OpenSolaris/Solaris development process is code review. To facilitate a review, a so-called webrev is needed. A webrev is a set of HTML/text/PDF pages and documents which display all the changes between a local repository containing the changes and its parent repository. To produce a webrev, simply switch to the repository and run the webrev script (it is part of the SUNWonbld package, which can be downloaded from the OpenSolaris download center):

$ cd /local/Source/bugfix-314159.onnv
$ webrev

Assuming /opt/onbld/bin is present in your PATH, the webrev will be generated under the /local/Source/bugfix-314159.onnv/webrev/ directory.

For OpenSolaris changes, the webrev is usually uploaded to cr.opensolaris.org (every OpenSolaris member gets an account created automatically), which serves it under http://cr.opensolaris.org/~OSol_username/ (where OSol_username is your OpenSolaris username), and a request for review with a link to the webrev is sent to one of the mailing lists relevant to the change.
Dan Price has written a script which produces an RSS feed of recently uploaded webrevs, which is a pretty handy substitute for feeds from news/headlines/magazines :)

For a long time I was basically doing the following:

$ cd /local/Source/bugfix-foo.onnv && webrev
$ scp -r webrev cr.opensolaris.org:bugfix-foo.onnv

This had two flaws: first, it was slow (because of the rcp protocol over the SSH channel), and second, I had to delete the old webrev via a separate command before uploading a new version (using sftp(1) to rename it to a .trash directory; otherwise a couple of permission errors would follow).

To solve the first problem, rsync (with SSH transport) can be used, which makes the upload nearly instantaneous. The second problem can be worked around by using incremental webrevs. Still, this does not seem good enough for code reviews with many iterations.

So, the change made in CR 6752000 introduces the following command line options for automatic webrev upload:

  • -U uploads the webrev
  • -n suppresses webrev generation
  • -t allows specifying a custom upload target

The webrev.1 man page has been updated to explain the usage. For common OpenSolaris code reviews the usage will probably mostly look like this:

$ webrev -O -U

This will upload the webrev to cr.opensolaris.org under a directory named after the local repository. Further invocations will replace the remote webrev with a fresh version.
But it is possible to get more advanced. After the initial webrev is posted, an incremental webrev can be both generated and posted. Assuming you have switched to the repository (via bldenv) and we're dealing with the 4th round of code review, the following commands will perform the task:

webrev_name=`basename $CODEMGR_WS`
webrev -O -U -o $CODEMGR_WS/${webrev_name}.rd4 \
    -p $CODEMGR_WS/${webrev_name}.rd3

The above commands hide some maybe-not-so-obvious behavior, so I'll try to explain it in a table:

+---------------------------+------------------------+-----------------------------------------------------+
| command                   | local webrev directory | remote webrev directory                             |
+---------------------------+------------------------+-----------------------------------------------------+
| webrev -O -U              | $CODEMGR_WS/webrev/    | cr.opensolaris.org/~OSOLuser/`basename $CODEMGR_WS` |
+---------------------------+------------------------+-----------------------------------------------------+
| webrev -O -o \             | $CODEMGR_WS/my_webrev/ | cr.opensolaris.org/~OSOLuser/my_webrev              |
|   $CODEMGR_WS/my_webrev    |                        |                                                     |
+---------------------------+------------------------+-----------------------------------------------------+
| webrev -O \                | $CODEMGR_WS/fix.rd2/   | cr.opensolaris.org/~OSOLuser/fix.rd2                |
|  -p $CODEMGR_WS/fix.rd1 \  |                        |                                                     |
|  -o $CODEMGR_WS/fix.rd2   |                        |                                                     |
+---------------------------+------------------------+-----------------------------------------------------+

Basically, without the -o flag webrev generates the webrev into a local directory named 'webrev' but uploads it to a remote directory named after the basename of the local repository. With the -o flag webrev uses the basename of the directory specified via -o for both the local and the remote directory. This keeps the default behavior of generating the local webrev into a directory named 'webrev' while avoiding uploads of different webrevs to the same remote directory named 'webrev', which would not make sense.

NOTE: This behavior applies even when webrev is not run from within a repository environment set up via ws or bldenv; I have just used $CODEMGR_WS to denote the root directory of a workspace/repository.

Also, it is now possible to call webrev from within the Cadmium Mercurial plugin, so all webrev commands can be prefixed with hg.

All in all, it was fun dealing with webrevs of webrev. I am looking forward to more entries in the RSS feed :)

NOTE: It will take some time before the changes appear in the SUNWonbld packages offered by the download center, so it's better to update the sources from the ssh://anon@hg.opensolaris.org/hg/onnv/onnv-gate repository and build and upgrade the SUNWonbld package from there.

Wednesday Sep 03, 2008

strsep() in libc

As of today, the strsep() function lives in Nevada's libc (tracked by CR 4383867 and PSARC 2008/305). This constitutes another step in the quest for a more feature-full (in terms of compatibility) libc in OpenSolaris. In binary form, the changes will be available in build 99. The documentation will be part of the string(3C) man page.

Here's a small example of how to use it:

#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <err.h>

int parse(const char *str) {
        char *p = NULL;
        char *inputstring, *origstr;
        int ret = 1;

        if (str == NULL)
                errx(1, "NULL string");

        /*
         * We have to remember the original pointer because strsep()
         * will change the 'inputstring' pointer.
         */
        if ((origstr = inputstring = strdup(str)) == NULL)
                errx(1, "strdup() failed");

        printf("=== parsing '%s'\n", inputstring);
        for ((p = strsep(&inputstring, ",")); p != NULL;
           (p = strsep(&inputstring, ","))) {
                if (*p != '\0')
                        printf("%s\n", p);
                else {
                        warnx("syntax error");
                        ret = 0;
                        goto bad;
                }
        }
bad:
        printf("=== finished parsing\n");
        free(origstr);
        return (ret);
}

int main(int argc, char *argv[]) {
        if (argc != 2)
                errx(1, "usage: prog <string>");

        if (!parse(argv[1]))
                exit(1);

        return (0);
}

This example was actually used as a unit test (use e.g. "1,22,33,44" and "1,22,,44,33" as input strings) and it also nicely illustrates important properties of strsep()'s behavior:

  • While searching for tokens, strsep() modifies the original string. This property is shared with strtok().
  • Unlike strtok(), strsep() is able to detect empty fields.

There is a function in Solaris' libc which can do token splitting without modifying the original string - strcspn(). The other notable property of strsep() is that (unlike strtok()) it does not conform to ANSI C. Time to draw a table:

 function(s)    ISO C90   modifies    detects
                          input       empty fields
--------------+---------+-----------+--------------
 strsep()        No        Yes         Yes
 strtok()        Yes       Yes         No
 strcspn()       Yes       No          Sort of

None of the above functions is bullet-proof. The bottom line is that the user should decide which one is the most suitable for a given task and use it with its properties in mind.
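
For illustration, here is a minimal sketch (mine, not from the putback; the sample input string is just an assumption) of strcspn()-based splitting. It leaves the input untouched and detects empty fields, except a trailing one - hence the "Sort of" above:

#include <stdio.h>
#include <string.h>

int main(void) {
        const char *str = "1,22,,44,33";        /* sample input */
        const char *p = str;

        while (*p != '\0') {
                /* length of the token before the next separator */
                size_t len = strcspn(p, ",");

                if (len == 0)
                        printf("empty field\n");
                else
                        printf("%.*s\n", (int)len, p);

                p += len;
                if (*p == ',')
                        p++;    /* skip the separator */
        }
        /* 'str' itself was never modified */
        return (0);
}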

Thursday Aug 07, 2008

Customizing Mercurial outgoing output

Part of the transition to Mercurial in OpenSolaris are changes in the integration processes. Every RTI now has to contain the output of hg outgoing -v so that the CRT advocates can better see the impact of the changes in terms of changed files. However, the default output is not very readable:

    $ hg outgoing -v
    comparing with /local/ws-mirrors/onnv-clone.hg
    searching for changes
    changeset:   7248:225922d15fe6
    user:        Vladimir Kotal 
    date:        2008-08-06 23:39 +0200
    
    modified:    usr/src/cmd/ldap/ns_ldap/ldapaddent.c usr/src/cmd/sendmail/db/config.h usr/src/cmd/ssh/include/config.h usr/src
    /cmd/ssh/include/openbsd-compat.h usr/src/cmd/ssh/include/strsep.h usr/src/cmd/ssh/libopenbsd-compat/Makefile.com usr/src
    /cmd/ssh/libopenbsd-compat/common/llib-lopenbsd-compat usr/src/cmd/ssh/libopenbsd-compat/common/strsep.c usr/src/cmd/ssh/libssh
    /common/llib-lssh usr/src/common/util/string.c usr/src/head/string.h usr/src/lib/libc/amd64/Makefile usr/src/lib/libc
    /i386/Makefile.com usr/src/lib/libc/port/gen/strsep.c usr/src/lib/libc/port/llib-lc usr/src/lib/libc/port/mapfile-vers usr/src
    /lib/libc/sparc/Makefile usr/src/lib/libc/sparcv9/Makefile usr/src/lib/passwdutil/Makefile.com usr/src/lib/passwdutil
    /bsd-strsep.c usr/src/lib/passwdutil/passwdutil.h usr/src/lib/smbsrv/libsmb/common/mapfile-vers usr/src/lib/smbsrv/libsmb
    /common/smb_util.c
    added:       usr/src/lib/libc/port/gen/strsep.c
    deleted:     usr/src/cmd/ssh/include/strsep.h usr/src/cmd/ssh/libopenbsd-compat/common/strsep.c usr/src/lib/passwdutil/bsd-strsep.c
    log:PSARC 2008/305 strsep() in libc
    4383867 need strsep() in libc
    --------------------------------------------------------------------
    

In the above case, the list of modified files spans a single line, which makes the web form used for RTIs go really wild in terms of width (I had to wrap the lines manually in the above example, otherwise this page would suffer from the same problem). The following steps can be used to make the output a bit nicer:

  1. create ~/bin/Mercurial/outproc.py with the following contents:
    from mercurial import templatefilters
    
    def newlines(text):
        return text.replace(' ', '\n')
    
    def outgoing_hook(ui, repo, **kwargs):
        templatefilters.filters["newlines"] = newlines
    
  2. hook into the outgoing command in ~/.hgrc by adding the following lines into the [extensions] and [hooks] sections so it looks like this:
    [extensions]
    outproc=~/bin/Mercurial/outproc.py
    
    [hooks]
    pre-outgoing=python:outproc.outgoing_hook
    
  3. create ~/bin/Mercurial/style.outgoing with the following contents:
    changeset = outgoing.template
    
  4. create ~/bin/Mercurial/outgoing.template with the following contents (the file can be downloaded here):
    changeset:	{rev}:{node|short}
    user:		{author}
    date:		{date|isodate}
    
    modified:
    {files|stringify|newlines}
    
    added:
    {file_adds|stringify|newlines}
    
    deleted:
    {file_dels|stringify|newlines}
    
    log:
    {desc}
    ------------------------------------------------------------------------
    
  5. add the following into your ~/.bashrc (or to .rc file of the shell of your choice):
    alias outgoing='hg outgoing --style ~/bin/Mercurial/style.outgoing'
    

After that it works like this:

    $ outgoing
    comparing with /local/ws-mirrors/onnv-clone.hg
    searching for changes
    changeset:	7248:225922d15fe6
    user:		Vladimir Kotal 
    date:		2008-08-06 23:39 +0200
    
    modified:
    usr/src/cmd/ldap/ns_ldap/ldapaddent.c
    usr/src/cmd/sendmail/db/config.h
    usr/src/cmd/ssh/include/config.h
    usr/src/cmd/ssh/include/openbsd-compat.h
    usr/src/cmd/ssh/include/strsep.h
    usr/src/cmd/ssh/libopenbsd-compat/Makefile.com
    usr/src/cmd/ssh/libopenbsd-compat/common/llib-lopenbsd-compat
    usr/src/cmd/ssh/libopenbsd-compat/common/strsep.c
    usr/src/cmd/ssh/libssh/common/llib-lssh
    usr/src/common/util/string.c
    usr/src/head/string.h
    usr/src/lib/libc/amd64/Makefile
    usr/src/lib/libc/i386/Makefile.com
    usr/src/lib/libc/port/gen/strsep.c
    usr/src/lib/libc/port/llib-lc
    usr/src/lib/libc/port/mapfile-vers
    usr/src/lib/libc/sparc/Makefile
    usr/src/lib/libc/sparcv9/Makefile
    usr/src/lib/passwdutil/Makefile.com
    usr/src/lib/passwdutil/bsd-strsep.c
    usr/src/lib/passwdutil/passwdutil.h
    usr/src/lib/smbsrv/libsmb/common/mapfile-vers
    usr/src/lib/smbsrv/libsmb/common/smb_util.c
    
    added:
    usr/src/lib/libc/port/gen/strsep.c
    
    deleted:
    usr/src/cmd/ssh/include/strsep.h
    usr/src/cmd/ssh/libopenbsd-compat/common/strsep.c
    usr/src/lib/passwdutil/bsd-strsep.c
    
    log:
    PSARC 2008/305 strsep() in libc
    4383867 need strsep() in libc
    ------------------------------------------------------------------------
    

I asked Richard Lowe (who has been very helpful with getting the transition process done) whether the next Mercurial version could include the newlines function and whether there could be an outgoingtemplate option similar to logtemplate in hgrc(5).
In the meantime I will be using the above for my RTIs.

Sunday Apr 27, 2008

Test suite for netcat

In the OpenSolaris world we care very much about correctness and hate regressions (of any kind). If I loosely paraphrase Bryan Cantrill, the degree of devotion should be obvious:

"Have you tested your change in every way you know of ? If not, do not go any further with the integration unless you do so."

This implies that an ordinary bug fix should have a unit test accompanying it. But unit tests are cumbersome when performed by hand and do not mean much if they are not accumulated over time.

For the integration of Netcat into OpenSolaris I have developed a number of unit tests (basically at least one for each command line option) and a couple more after spotting some bugs in nc(1). This means that nc(1) is ripe for having a test suite so the tests can be performed automatically. This is tracked by RFE 6646967. The test suite will live in the onnv-stc2 gate which is hosted and maintained by the OpenSolaris Testing community.

To create a test suite one can choose between two frameworks: STF and CTI-TET. I have chosen the latter because I wanted to try something new and also because CTI-TET seems to be the recommended framework these days.

The work on the nc test suite started during the Christmas break of 2007 and, after recovering from the lost data, it is now in a pretty stable state and ready for code review. This is actually somewhat exciting because the nc test suite is supposed to be the first OpenSolaris test suite developed in the open.

A fresh webrev is always stored on cr.opensolaris.org in the nc-tet.onnv-stc2 directory. Everybody is invited to participate in the code review.

The code review should be performed via the testing-discuss at opensolaris.org mailing list (subscribe via Testing / Discussions). It has a web interface in the form of the testing-discuss forum.

So, if you're familiar with ksh scripting or the CTI-TET framework (neither is necessary) you have a unique chance to bash (not bash) my code ! Watch for the official code review announcement on the mailing list in the next couple of days.

Lastly, another piece of philosophical food for thought: test suites are sets of programs and scripts which serve mainly one purpose - they should prevent bugs from happening in the software they test. But test suites are software too, and the presence of bugs in test suites is an annoying phenomenon. How do we get rid of that one ?

Sunday Apr 13, 2008

poll(2) and POLLHUP with pipes in Solaris

During nc(1) preintegration testing, a short time before it went back, I found that 'cat /etc/passwd | nc localhost 4444' produced an endless loop with 100% CPU utilization, looping in calls to poll(2) (I still remember my laptop suddenly getting much warmer than it should be and the CPU fan cranking up). 'nc localhost 4444 < /etc/passwd' did not exhibit that behavior.
The cause was a difference between the poll(2) implementations on BSD and Solaris. Since I am working on Netcat in Solaris again (adding more features, stay tuned), it's time to take a look back and maybe even help people porting similar software from BSD to Solaris.

The issue appears because on Solaris POLLHUP is set in the read events bitfield for stdin after the pipe is closed (or, to be more precise, after the producer/write end is done). poll.c (which resembles the readwrite() function from nc) illustrates the issue:

01 #include <stdio.h>
02 #include <poll.h>
03 #include <err.h>
04 #include <unistd.h>
05 #define LEN  1024
06 int main(void) {
07      int timeout = -1;
08      int n;
09      char buf[LEN];
10      int plen = LEN;
11 
12      struct pollfd pfd;
13 
14      pfd.fd = fileno(stdin);
15      pfd.events = POLLIN;
16 
17      while (pfd.fd != -1) {
18              if ((n = poll(&pfd, 1, timeout)) < 0) {
19                      err(1, "Polling Error");
20              }
21              fprintf(stderr, "revents = 0x%x [ %s %s ]\n",
22                  pfd.revents,
23                  pfd.revents & POLLIN ? "POLLIN" : "",
24                  pfd.revents & POLLHUP ? "POLLHUP" : "");
25 
26              if (pfd.revents & (POLLIN|POLLHUP)) {
27                      if ((n = read(fileno(stdin), buf, plen)) < 0) {
28                              fprintf(stderr,
29                                  "read() returned neg. val (%d)\n", n);
30                              return (1);
31                      } else if (n == 0) {
32                              fprintf(stderr, "read() returned 0\n");
33                              pfd.fd = -1;
34                              pfd.events = 0;
35                      } else {
36                              fprintf(stderr, "read: %d bytes\n", n);
37                      }
38              }
39      }
40      return (0);
41 }

Running it on NetBSD (chosen because my personal non-work mailbox is hosted on a machine running it) produces the following:

otaku[~]% ( od -N 512 -X -v /dev/zero | sed 's/ [ \t]*/ /g'; sleep 3 ) | ./poll
revents = 0x1 [ POLLIN  ]
read: 1024 bytes
revents = 0x1 [ POLLIN  ]
read: 392 bytes
revents = 0x11 [ POLLIN POLLHUP ]
read() returned 0

I had to post-process the output of od(1) (because of the difference between the output of od(1) on NetBSD and Solaris) and slow the execution down a bit (via the sleep) in order to make things more visible (try running the command without the sleep and the pipe will be closed too quickly). On OpenSolaris the same program produces a different pattern:

moose:~$ ( od -N 512 -X -v /dev/zero | sed 's/ [ \t]*/ /g' ; sleep 3 ) | ./poll
revents = 0x1 [ POLLIN  ]
read: 1024 bytes
revents = 0x1 [ POLLIN  ]
read: 392 bytes
revents = 0x10 [  POLLHUP ]
read() returned 0

So, the program above is obviously correct. Had the statement on line 26 checked only POLLIN, the command above (with or without the sleep) would go into an endless loop on Solaris:

revents = 0x11 [ POLLIN POLLHUP ]
read: 1024 bytes
revents = 0x11 [ POLLIN POLLHUP ]
read: 392 bytes
revents = 0x10 [  POLLHUP ]
revents = 0x10 [  POLLHUP ]
revents = 0x10 [  POLLHUP ]
...

Both OSes set POLLHUP after the pipe is closed. The difference is that while BSD always indicates POLLIN (even if there is nothing to read), Solaris strips it after the data stream has ended. So, which one is correct ? The poll() function as described by the OpenGroup says that "POLLHUP and POLLIN are not mutually exclusive". This means both implementations seem to conform to the IEEE Std 1003.1, 2004 Edition standard (part of POSIX) in this respect.

However, the POSIX standard also says:

    In each pollfd structure, poll() shall clear the revents member, except that where the application requested a report on a condition by setting one of the bits of events listed above, poll() shall set the corresponding bit in revents if the requested condition is true. In addition, poll() shall set the POLLHUP, POLLERR, and POLLNVAL flag in revents if the condition is true, even if the application did not set the corresponding bit in events.

This might still be OK even though the POLLIN flag remains set in NetBSD's poll() after no more data are available for reading (try commenting out lines 33 and 34 and running as above), because the standard says about the POLLIN flag: "For STREAMS, this flag is set in revents even if the message is of zero length."

Without further reading it is hard to tell how exactly a POSIX compliant poll() should behave. On the Austin group mailing list there was a thread about poll() behavior w.r.t. POLLHUP suggesting this is a fuzzy area.

Anyway, to see where exactly POLLHUP is set for pipes in OpenSolaris, go to fifo_poll(). The function _sets_ the revents bit field to POLLHUP, so the POLLIN flag is wiped off after that. fifo_poll() is part of the fifofs kernel module which has been around in Solaris since the late eighties (I was still in elementary school the year fifovnops.c appeared in the SunOS code base :)). NetBSD has fifofs too, but there the POLLHUP flag gets set via a bit logic operation in pipe_poll(), which is part of the syscall processing code. The difference between the OpenSolaris and NetBSD (whoa, the NetBSD project uses OpenGrok !) POLLHUP attitudes is now clear.

Thursday Apr 03, 2008

ZFS is going to save my laptop data next time

The flashback is still alive even weeks after: the day before my presentation at the FIRST Technical Colloquium in Prague I brought my 2-year-old laptop with the work-in-progress slides to the office. Since I wanted to finish the slides in the evening, a live-upgrade process was fired off on the laptop to get a fresh Nevada version (of course, to show off during the presentation ;)).
LU is a very I/O intensive process and the red Ferrari notebooks tend to get _very_ hot. In the afternoon I noticed that the process had failed. To my astonishment, I/O operations started to fail. After a couple of reboots (and zpool status / fmadm faulty commands) it was obvious that the disk could not be trusted anymore. I was able to rescue some data from the ZFS pool which was spanning the biggest slice of the internal disk, but not all of it. (ZFS is not willing to give out corrupted data.) My slides were lost, as well as other data.

After some time I stumbled upon James Gosling's blog entry about ZFS mirroring on a laptop. This got me started (or, more precisely, I was astonished and wondered how this idea could have escaped me, given that ZFS had been in Nevada for a long time by then) and I discovered several similar and more in-depth blog entries about the topic.
After some experiments with a borrowed USB disk it was time to make it reality on a new laptop.

The process was a multi-step one:

  1. First I had to extend the free slice #7 on the internal disk so it spanned the remaining space on the disk, because it had been trimmed after the experiments. In the end the slices looked like this in format(1) output:
    Part      Tag    Flag     Cylinders         Size            Blocks
      0       root    wm       3 -  1277        9.77GB    (1275/0/0)   20482875
      1 unassigned    wm    1278 -  2552        9.77GB    (1275/0/0)   20482875
      2     backup    wm       0 - 19442      148.94GB    (19443/0/0) 312351795
      3       swap    wu    2553 -  3124        4.38GB    (572/0/0)     9189180
      4 unassigned    wu       0                0         (0/0/0)             0
      5 unassigned    wu       0                0         (0/0/0)             0
      6 unassigned    wu       0                0         (0/0/0)             0
      7       home    wm    3125 - 19442      125.00GB    (16318/0/0) 262148670
      8       boot    wu       0 -     0        7.84MB    (1/0/0)         16065
      9 alternates    wu       1 -     2       15.69MB    (2/0/0)         32130
    
  2. Then the USB drive was connected to the system and recognized via format(1):
    AVAILABLE DISK SELECTIONS:
           0. c0d0 
              /pci@0,0/pci-ide@12/ide@0/cmdk@0,0
           1. c5t0d0 
              /pci@0,0/pci1025,10a@13,2/storage@4/disk@0,0
    
  3. The live upgrade boot environment which was not active was deleted via ludelete(1M) and the slice was commented out in /etc/vfstab. This was needed to make zpool(1M) happy.
  4. A ZFS pool was created out of the slice on the internal disk (c0d0s7) and the external USB disk (c5t0d0). I had to force it because zpool(1M) complained about the overlap of c0d0s2 (the slice spanning the whole disk) and c0d0s7:
    # zpool create -f data mirror c0d0s7 c5t0d0
    
    For a while I struggled to find a name for the pool (everybody seems either to stick to the 'tank' name or to come up with some double-cool-stylish name, which I wanted to avoid because the excitement of such a name is likely to degrade over time), but then I chose the ordinary data (it's what it is, after all).
  5. I verified that it is possible to disconnect the USB disk and safely reconnect it while an I/O operation is in progress:
    root:moose:/data# mkfile 10g /data/test &
    [1] 10933
    root:moose:/data# zpool status
      pool: data
     state: ONLINE
     scrub: none requested
    config:
    
    	NAME        STATE     READ WRITE CKSUM
    	data        ONLINE       0     0     0
    	  mirror    ONLINE       0     0     0
    	    c0d0s7  ONLINE       0     0     0
    	    c5t0d0  ONLINE       0     0     0
    
    errors: No known data errors
    
    It survived without a hitch (okay, I had to wait a little longer for the zpool command to complete due to the still ongoing I/O, but that was it) and resynced the contents automatically after the USB disk was reconnected:
    root:moose:/data# zpool status
      pool: data
     state: DEGRADED
    status: One or more devices are faulted in response to persistent errors.
    	Sufficient replicas exist for the pool to continue functioning in a
    	degraded state.
    action: Replace the faulted device, or use 'zpool clear' to mark the device
    	repaired.
     scrub: resilver in progress for 0h0m, 3.22% done, 0h5m to go
    config:
    
    	NAME        STATE     READ WRITE CKSUM
    	data        DEGRADED     0     0     0
    	  mirror    DEGRADED     0     0     0
    	    c0d0s7  ONLINE       0     0     0
    	    c5t0d0  FAULTED      0     0     0  too many errors
    
    errors: No known data errors
    
    Also, with heavy I/O it is necessary to mark the zpool as clear after the resilver completes (via zpool clear data) because the USB drive gets marked as faulted. Normally this will not happen (unless the drive really fails) because I will be connecting and disconnecting the drive only when powering on or shutting down the laptop, respectively.
  6. After that I used Mark Shellenbaum's blog entry about ZFS delegated administration (it was Mark who did the integration) and the ZFS Delegated Administration chapter from the OpenSolaris ZFS Administration Guide, created a permission set for my local user and assigned those permissions to the ZFS pool 'data' and the user:
      # chmod A+user:vk:add_subdirectory:fd:allow /data
      # zfs allow -s @major_perms clone,create,destroy,mount,snapshot data
      # zfs allow -s @major_perms send,receive,share,rename,rollback,promote data
      # zfs allow -s @major_props copies,compression,quota,reservation data
      # zfs allow -s @major_props snapdir,sharenfs data
      # zfs allow vk @major_perms,@major_props data
    
    All of the commands had to be run as root.
  7. Now the user is able to create a home directory for himself:
      $ zfs create data/vk
    
  8. Time to set up the environment of data sets and prepare it for data. I have separated the data sets according to a 'service level'. Some data are very important to me (e.g. presentations ;)) so I want them multiplied via the ditto blocks mechanism: with the copies dataset property set to 2 they are actually present 4 times (on top of the mirror). Also, documents are not usually accompanied by executable code, so the exec property was set to off, which prevents running scripts or programs from those datasets.
    Some data are volatile and present in high quantity, so they do not need any additional protection, and it is a good idea to compress them with a better compression algorithm to save some space. The following table summarizes the setup:
       dataset             properties                       comment
     +-------------------+--------------------------------+------------------------+
     | data/vk/Documents | copies=2                       | presentations          |
     | data/vk/Saved     | compression=on exec=off        | stuff from web         |
     | data/vk/DVDs      | compression=gzip-8 exec=off    | Nevada ISOs for LU     |
     | data/vk/CRs       | compression=on copies=2        | precious source code ! |
     +-------------------+--------------------------------+------------------------+
    
    So the commands will be:
      $ zfs create -o copies=2 data/vk/Documents
      $ zfs create -o compression=gzip-3 -o exec=off data/vk/Saved
      ...
    
  9. Now it is possible to migrate all the data, change the user's home directory to /data/vk (e.g. via /usr/ucb/vipw) and relogin.

However, this is not the end, just the beginning. There are many things which could make the setup even better, to name a few:

  • set up some sort of automatic snapshots for selected datasets
    The set of scripts and SMF service for doing ZFS snapshots and backups (see ZFS Automatic For The People and related blog entries) made by Tim Foster could be used for this task.
  • make zpool scrub run periodically
  • detect failures of the disks
    It would be ideal to see this in the Gnome panel or Gnome system monitor.
  • set up off-site backup via SunSSH + zfs send
    This could be done using the hooks provided by Tim's scripts (see above).
  • Set quotas and reservations for some of the datasets.
  • Install ZFS scripts for Gnome nautilus so I will be able to browse, perform and destroy snapshots in nautilus. Now which set of scripts to use ? Chris Gerhard's or Tim Foster's ? Or should I just wait for the official ZFS support for nautilus to be integrated ?
  • Find out how exactly the recovery scenario (in case of laptop drive failure) would look.
    Importing the ZFS pool from the USB disk should suffice, but during my experiments I was not able to complete it successfully.

With all the above, the data should be safe from disk failure (after all, disks are often called "spinning rust", so they are going to fail sooner or later) and, once off-site backups are in place, even from the loss of both the laptop and the USB disk.

Lastly, a philosophical thought: one of my colleagues considers hardware a necessary (and very faulty) layer which is only needed to make it possible to express ideas in software. This might seem extreme, but come to think of it, ZFS is special in this sense - being software which provides that bridge, its core idea is to isolate hardware faults.

Friday Feb 15, 2008

FIRST Technical Colloquium in Prague

Two weeks ago (yeah, I am a slacker) the FIRST technical colloquium was held in Prague and we (me and Sasha) were given the opportunity to attend (the fact that Derrick serves as FIRST chair in the steering committee has, of course, something to do with it).

I only attended one day of the technical colloquium (Tuesday the 29th). The day was filled with various talks and presentations, most of them given by members of various CERT teams from around the world. This was because the event was a joint meeting of FIRST and TF-CSIRT. It was definitely interesting to see very different approaches to the shared problem set (dealing with incidents, setting up honey pots, building forensic analysis labs, etc.). These differences stemmed not only from the sizes of the networks and organizations but also (and that was kind of funny) from nationalities.
In the morning I talked about the integration of Netcat into Solaris, describing the process, current features and planned enhancements and extensions.

The most anticipated talk was by Adam Laurie, an entertaining guy involved in many hacker-like activities (see e.g. the A hacker games the hotel article by Wired) directed at proving insecurities in many publicly used systems.
Adam (brother of Ben Laurie, the author of Apache-SSL and an OpenSSL contributor) first started with an intro about satellite scanning and insecure hotel safes (with backdoors installed by the manufacturers which can be overcome with a screwdriver). Then he proceeded to talk about RFID chips, mainly about cloning them.

Also, at the "social event" in the evening I had the pleasure of sharing a table with Ken van Wyk, an overall cool fellow and the author of the Secure Coding and Incident Response books from O'Reilly.

All in all, it was interesting to see so many security types in one room and to get to know some of them.

Monday Feb 11, 2008

Grepping dtrace logs

I have been working on a tough bug for some non-trivial time. The bug is a combination of a race condition and data consistency issues. To debug it I am using a multi-threaded apache process and dtrace heavily. The logs produced by dtrace are huge and contain mostly dumps of internal data structures.

An excerpt from such a log looks e.g. like this:

  1  50244       pk11_return_session:return       128   8155.698
  1  50223            pk11_RSA_verify:entry       128   8155.7069
  1  50224           pk11_get_session:entry       128   8155.7199
  1  50234          pk11_get_session:return       128   8155.7266
PK11_SESSION:
  pid = 1802
  session handle = 0x00273998
  rsa_pub_key handle -> 0x00184e70
  rsa_priv_key handle -> 0x00274428
  rsa_pub = 0x00186248
  rsa_priv = 0x001865f8

  1  50224           pk11_get_session:entry       128   8155.7199

  1  50244       pk11_return_session:return       128   8155.698

Side note: this is a post-processed log (probes together with their bodies are sorted by timestamp, and the timestamps are converted to milliseconds - see the Coloring dtrace output entry for details and scripts).

Increasingly, I was asking myself questions which resembled this one: "when was function foo() called with data which contained value bar ?"

This quickly led to a script which does the tedious job for me. dtrace-grep.pl accepts 2 or 3 parameters. The first 2 are a probe pattern and a data pattern, respectively. The third, which is optional, is the input file (if not supplied, stdin is used). An example of use on the above pasted file looks like this:

~/bin$ ./dtrace-grep.pl pk11_get_session 0x00186248 /tmp/test
  1  50234          pk11_get_session:return       128   8155.7266
PK11_SESSION:
  pid = 1802
  session handle = 0x00273998
  rsa_pub_key handle -> 0x00184e70
  rsa_priv_key handle -> 0x00274428
  rsa_pub = 0x00186248
  rsa_priv = 0x001865f8


~/bin$ 

Now I have to get back to work, to do some more pattern matching. Oh, and the script is here.

Thursday Jan 17, 2008

Adding dtrace SDT probes

It seems that many developers and dtrace users have found themselves in a position where they wanted to add some SDT probes to a module to get more insight into what's going on, but had to pause, thinking: "okay, more probes. But where to put them ? Do I really need the additional probes when I already have the fbt ones ?" A systematic approach is needed here in order not to over-do or under-do it. I will use KSSL (the Solaris kernel SSL proxy [1]) for illustration.

With CR 6556447, tens of SDT probes were introduced into the KSSL module and other modules which interface with it. Also, in addition to the new SDT probes, in KSSL we got rid of the KSSL_DEBUG macros, which were compiled only into DEBUG kernels, and substituted them with SDT probes. As a result, much better observability and error detection was achieved with both debug and non-debug kernels. The other option would have been to create a KSSL dtrace provider, but that would be too big a gun for what we needed to achieve.

Generically, the following interesting data points for data gathering/observation can be identified in code:

  • data paths
    When there is more than one path by which data can flow into a subsystem. E.g. for TCP we have a couple of cases in which SSL data can reach the KSSL input queue. To identify where exactly tcp_kssl_input() was called from, we use SDT probes:
    	if (tcp->tcp_listener || tcp->tcp_hard_binding) {
    ...
    		if (tcp->tcp_kssl_pending) {
    			DTRACE_PROBE1(kssl_mblk__ksslinput_pending,
    			    mblk_t *, mp);
    			tcp_kssl_input(tcp, mp);
    		} else {
    			tcp_rcv_enqueue(tcp, mp, seg_len);
    		}
    	} else {
    ...
    		/* Does this need SSL processing first? */
    			if ((tcp->tcp_kssl_ctx != NULL) &&
    			    (DB_TYPE(mp) == M_DATA)) {
    				DTRACE_PROBE1(kssl_mblk__ksslinput_data1,
    				    mblk_t *, mp);
    				tcp_kssl_input(tcp, mp);
    			} else {
    				putnext(tcp->tcp_rq, mp);
    				if (!canputnext(tcp->tcp_rq))
    					tcp->tcp_rwnd -= seg_len;
    			}
    ...
    
  • data processed in while/for cycles
    To observe what happens in each iteration of the cycle. Can be used in code like this:
    while (mp != NULL) {
    
      DTRACE_PROBE1(kssl_mblk__handle_record_cycle, mblk_t *, mp);
    
      /* process the data */
      ...
    
      mp = mp->b_cont;
    }
    
  • switch statements
    If significant/non-trivial processing happens inside a switch it may be useful to add SDT probes there too. E.g.:
      content_type = (SSL3ContentType)mp->b_rptr[0];
      switch (content_type) {
        /* select processing according to type */
        case content_alert:
           DTRACE_PROBE1(kssl_mblk__content_alert, mblk_t *, mp);
           ...
           break;
        case content_change_cipher_spec:
           DTRACE_PROBE1(kssl_mblk__change_cipher_spec, mblk_t *, mp);
           ...
           break;
        default:
           DTRACE_PROBE1(kssl_mblk__unexpected_msg, mblk_t *, mp);
           break;
      }
    
  • labels which cannot be (easily) identified in another way
    Useful if the code which follows the label is generic (assignments, no function calls), e.g.:
                                    /*
                                     * Give this session a chance to fall back to
                                     * userland SSL
                                     */
                                    if (ctxmp == NULL)
                                            goto no_can_do;
    ...
    no_can_do:
                                    DTRACE_PROBE1(kssl_no_can_do, tcp_t *, tcp);
                                    listener = tcp->tcp_listener;
                                    ind_mp = tcp->tcp_conn.tcp_eager_conn_ind;
                                    ASSERT(ind_mp != NULL);
    

You've surely noticed that some of the probe definitions above have a common prefix (kssl_mblk-). This is one of the things which make SDT probes so attractive. With prefixes it is possible to do e.g. the following:

sdt:::kssl_err-*
{
  trace(timestamp);
  printf("hit error in %s\n", probefunc);
  stack(); ustack();
}

The important part is that we specify neither the module nor the function name. The implicit wildcard (funcname/probename left out) combined with the explicit wildcard (prefix + asterisk) leads to all KSSL error probes being activated regardless of which module or function they are defined in. This is obviously very useful for technologies which span multiple Solaris subsystems or modules (such as KSSL).

The nice thing about the error probes is that they can be leveraged in test suites. For each test case we can first run a dtrace script with the above probeset covering all KSSL errors in the background, and after the test completes just check whether it produced any data. If it did, the test case can be considered failed. No need to check kstat(1M) (and other counters), log files, etc.

Also, thanks to the way dtrace probes are activated, we can have both a generic probeset (using this term for lack of a better one) as above and a probe-specific action, e.g.:

/* probeset of all KSSL error probes */
sdt:::kssl_err-*
{
  trace(timestamp);
  printf("hit error in %s\n", probefunc);
}

/*
  the probe definition is:
     DTRACE_PROBE2(kssl_err__bad_record_size,
         uint16_t, rec_sz, int, spec->cipher_bsize);
 */
sdt:kssl:kssl_handle_record:kssl_err-bad_record_size
{
  trace(timestamp);
  printf("rec_sz = %d , cipher_bsize = %d\n", arg0, arg1);
}

If the probe kssl_err-bad_record_size gets activated, the generic probe fires too, because the probeset contains that probe.

Similarly to the error prefix, we can have a data-specific prefix. For KSSL it is the kssl_mblk- prefix, which can be used for tracing all mblks (msgb(9S)) as they flow through the TCP/IP, STREAMS and KSSL modules. With such probes it is possible to do e.g. the following:

/* how many bytes from a mblk to dump */
#define DUMP_SIZE       48

/* use macros from <sys/strsun.h> */
#define MBLKL(mp)       ((mp)->b_wptr - (mp)->b_rptr)
#define DB_FLAGS(mp)    ((mp)->b_datap->db_flags)

/* assumed helper (not part of the original excerpt): dump the data pointers */
#define PRINT_MBLK_PTRS(mp)                                             \
        printf("b_rptr = 0x%p\n", ((mblk_t *)mp)->b_rptr);              \
        printf("b_wptr = 0x%p\n", ((mblk_t *)mp)->b_wptr);

#define PRINT_MBLK_INFO(mp)                                             \
        printf("mblk = 0x%p\n", mp);                                    \
        printf("mblk size = %d\n", MBLKL((mblk_t *)mp));                \
        PRINT_MBLK_PTRS(mp);

#define PRINT_MBLK(mp)                                                  \
                trace(timestamp);                                       \
                printf("\n");                                           \
                PRINT_MBLK_INFO(mp);                                    \
                printf("DB_FLAGS = 0x%x", DB_FLAGS((mblk_t *)mp));      \
                tracemem(((mblk_t *)mp)->b_rptr, DUMP_SIZE);            \
                tracemem(((mblk_t *)mp)->b_wptr - DUMP_SIZE,            \
                        DUMP_SIZE);

sdt:::kssl_mblk-*
{
  trace(timestamp);
  printf("\n");
  PRINT_MBLK(arg0)
}

This is actually an excerpt from my (currently internal) KSSL debugging suite.
An example of output from such a probe can be seen in my Coloring dtrace output post.

For more complex projects it would be a waste to stop here. Prefixes can be structured further. However, this has some drawbacks. In particular, I was thinking about having kssl_mblk- and kssl_err- prefixes. Now what to do for places where an error condition occurred _and_ we would like to see the associated mblk ? Using something like kssl_mblk_err-* comes to mind. However, there is a problem with that - what about the singleton cases (only mblk, only err) ? Sure, using multiple wildcards in dtrace is possible (e.g. syscall::*read*:) but this would make things ugly and complicated given the number of mblk+err cases (it's probably safe to assume that the number of such cases will be low). Simply, it's not worth the hassle. Rather, I went with 2 separate probes in such places.
To conclude, using structured prefixes is highly beneficial only for sets of probes where the categories/sub-prefixes create non-intersecting sets (e.g. data type and debug level).

Of course, all of the above is valid not only for the kernel but also for custom userland probes !
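
To illustrate the userland case, here is a minimal sketch with a hypothetical provider and probe name (nothing from KSSL; the names and files are made up). The provider definition lives in its own .d file and the compiled objects are post-processed with dtrace -G:

/*
 * myapp_provider.d (hypothetical) would contain:
 *
 *     provider myapp {
 *         probe err__badinput(int);
 *     };
 *
 * myapp.c:
 */
#include <sys/sdt.h>

static int
validate(int len)
{
        if (len < 0) {
                /* fires only when enabled by dtrace; nearly free otherwise */
                DTRACE_PROBE1(myapp, err__badinput, len);
                return (-1);
        }
        return (0);
}

int
main(void)
{
        return (validate(-1) == 0 ? 0 : 1);
}

/*
 * Build sketch:
 *     cc -c myapp.c
 *     dtrace -G -32 -s myapp_provider.d myapp.o
 *     cc -o myapp myapp.o myapp_provider.o
 * The probe can then be enabled with e.g.:
 *     dtrace -n 'myapp$target:::err-badinput' -c ./myapp
 */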

[1] High-level description of KSSL can be found in blueprint 819-5782.

Saturday Dec 01, 2007

Netcat in Solaris

CR 4664622 has been integrated into Nevada and will be part of build 80 (which means it will not be part of the next SXDE release, but I can live with that :)).

During the course of getting the process done I stumbled upon several interesting obstacles. For example, during the ingress Open Source Review I was asked by our open-source caretaker what the "support model" for Netcat will be once it is integrated. I was puzzled. For Netcat, support is not really needed: it has been around for ages (ok, since 1996 according to Wikipedia) and is a pretty stable piece of code which is basically no longer developed. Nonetheless, this brings some interesting challenges with the move to a community model where more and more projects are integrated by people outside Sun (e.g. the ksh93 project).

The nc(1) man page will be delivered in build 81. In the meantime you can read the Netcat review blog entry which contains a link to the updated man page. The older version of the man page is contained in the mail communication for PSARC 2007/389.
Note: I have realized that the ARC case directory does not have to include the most up-to-date man page at the time of integration. Only when something _architectural_ changes does the man page have to be updated (which was not the case with Netcat, since we only added a new section describing how to set up nc(1) with RBAC). Thanks to Jim for the explanation.

I have some ideas on how to make Netcat in Solaris even better and will work on getting them done over time. In particular, there are the following RFEs: 6515928, 6573202. However, this does not mean that there is only a single person who can work on nc(1). Since it is now part of ONNV, anyone is free to hack it.

So, I'd like to invite everyone to participate - if you have an idea how to extend Netcat or what features to add, it is sitting in ONNV waiting for new RFEs (or bugs) and sponsor requests (be sure to read Jayakara Kini's explanation of how to contribute if you're not an OpenSolaris contributor yet).

Also, if you're a Netcat user and use Netcat in a cool way, I want to hear about it !

Saturday Nov 17, 2007

Chemical plants, Linux and dtrace

When my friend from Usti nad Labem (the 9th biggest city in the Czech Republic) asked me to present about OpenSolaris at a local Linux conference, I got hooked. First, Usti is an interesting city (surrounded by beautiful countryside, yet with chemical plants in the vicinity of the city center) and I had not been there for a long time; second, the opportunity to present about OpenSolaris technologies to Linux folks is unique.

When we (Sasha agreed to go present with me) arrived in Usti we were greeted by slightly apocalyptic weather (see the pictures below). Fortunately, the environment where all the presentations took place compensated for that.

40 minutes to present about something which most people in the room are not very aware of is challenging. The fact that OpenSolaris is open source and a home for several disruptive technologies makes that both easier and harder. We greatly leveraged Bryan Cantrill's Dtrace review video taped at Google for the second part of the presentation, where we demo'ed dtrace. I have even borrowed some of his quotes. I am pretty sure he wouldn't object since his presentations were perused in the past. To make the list of attributions complete, the first part draws substantially on Lukas' past presentations about the OpenSolaris project.

It's much better to demo a technology than to just talk about how great it is. (I remember a funny moment in Bryan's presentation where a dtrace probe didn't "want" to fire and Bryan jokingly said "bad demo!" to the screen. I nearly fell off my chair at that moment.) "So, I have finally seen dtrace in action !" was one of the great things to hear after the presentation.

The "OpenSolaris technologies" presentation can be downloaded here.

Friday Nov 02, 2007

Netcat package and code review

As you might know, a Netcat implementation is going to be part of OpenSolaris. The initial Netcat integration is based on a reimplementation from OpenBSD (here's why).

As Jeff Bonwick said, open sourced code is nothing compared to the fact that all design discussions and decisions suddenly happen in public (loosely paraphrased). This is a great wave to ride, and I jumped on it when it was not really small anymore, so I at least posted the webrev pointer for the initial Netcat integration (CR 4664622) to the opensolaris-code mailing list (which is roughly the equivalent of freebsd-hackers, openbsd-tech or similar mailing lists) to get some code review comments.

Since then a couple of things have changed. Thanks to Dan Price and others it's now possible to upload webrevs to cr.opensolaris.org. I have started using the service, so the new and official place for the Netcat webrev is

The webrev has moved location but what I said in the opensolaris-code post still holds true:

Any constructive notes regarding the code are welcome. (I am mainly looking for functional/logic errors, packaging flaws or parts of the code which could mismatch the PSARC case.)

The following things could help any potential code reviewer:

  • Summary of the changes done to the original implementation in order to adapt it to the Solaris environment.
  • PSARC 2007/389 case covering interfaces delivered by this project. For more information about ARCs see Architecture Process and Tools community pages.
  • SUNWnetcat package (x86) which contains /usr/bin/nc binary
  • webrev of the differences between my version of Netcat and the one which is currently in OpenBSD.
    Only the *.[ch] files matter, of course. (This is a very easy thing to do with a distributed SCM since it only requires one to reparent and regenerate the webrev against the new parent workspace.)
  • Updated manual page
    This is slightly different from the man page in the PSARC materials because it contains a new section about using nc with privileges and an associated set of examples in the EXAMPLES section. The man page in the PSARC materials will not be updated because after a case is approved, the man page is updated only if architectural changes are needed. In the case of privileges, it is only an addition describing specific usage, with no architectural changes.

The conclusion for non code reviewers ? I hope it is clear that in (Open)Solaris land we value quality and transparency. Peer reviews and architectural reviews are just pieces (albeit crucial ones) which help to achieve that goal.

Tuesday Aug 28, 2007

Getting code into libc

In my previous entry about the BSD compatibility gap closure process I promised to provide a guide on how to get new code into libc. I will use the changes done via CR 6495220 to illustrate the process with examples.

Process related and technical changes which are usually needed:

  • get PSARC case done
  • File a CR to create a manual page according to the man page draft supplied with the PSARC case. You will probably need to go through the functions being added and assign them an MT-Level according to the attributes(5) man page (if this was not done prior to filing the PSARC case).
  • actually add the code into libc
    This includes moving/introducing files from the SCM point of view and making the necessary changes to the Makefiles.
    In terms of symbols, the functions need to be delivered twice: once as an underscored (strong) symbol and once as a WEAK alias to the strong symbol. This allows libraries to use their own private implementation of the functions, because the weak symbol is silently overridden by the private symbol by the runtime linker. (See the sketch at the end of this entry.)
  • Add entries to c_synonyms.h and synonyms.h.
    synonyms.h is used in libc for the symbol alias construction (see above). c_synonyms.h provides access to the underscored symbols for other (non-libc) libraries, giving them a way to call the underscored symbols directly without risking namespace clashes/pollution.
    This step goes hand in hand with the previous one. nm(1) can be used to check that it worked as expected:
    $ nm -x /usr/lib/libc.so.1 | grep '|_\?err$'
    [5783] |0x00049c40|0x00000030|FUNC |GLOB |0  |13 |_err
    [6552] |0x00049c40|0x00000030|FUNC |WEAK |0  |13 |err
    
  • Do the necessary packaging changes.
    If you're adding a new header file, change SUNWhea's prototype_* files (most probably just prototype_com; see the example line after this list).
    If the file was previously installed into the proto area during the build, it needs to be removed from the exception files (for i386 and sparc).
  • Modify libc's mapfile.
    This is needed for the symbols to become visible and versioned. Public symbols belong to the latest SUNW section (see the mapfile fragment after this list). After you have compiled the changes you can check the result via a command similar to the following:
    pvs -dsv -N SUNW_1.23 /usr/lib/libc.so.1 \
      | sed -n '1,/SUNW.*:/p' | egrep '((v?errx?)|(v?warnx?));'
                  vwarnx;
                  ...
      
    If you're adding private (underscored) symbols, do not forget to add them to the SUNWprivate section. This is usually needed because the strong symbols are accompanied by weak siblings: weak symbols go to the global part of the most recent SUNW section and strong symbols go to the global part of the SUNWprivate section.
  • Update libc's lint library.
    If you are adding private symbols, add them as well. See the _vwarnfp et al. entries for an example.
    After you're done it's time to run a nightly build with lint checks and fix the noise (see below).
  • Add per-symbol filters.
    If you are moving stuff from a library to libc you will probably want to preserve the existing interfaces. To accomplish this, per-symbol filters can be added to the library you're moving from. So, if symbol foo is moved from libbar to libc, change the line in the global section of libbar's mapfile to look like this:
    foo = FUNCTION FILTER libc.so.1;
    
    This was done with the *fp functions in libipsecutils' mapfile. The specialty in that case was that the *fp functions were renamed to their underscored variants while moving them, via redefines in errfp.h.
  • Fix the build/lint noise introduced by the changes.
    There could be the following kinds of noise:
    • build noise
      Can be caused by a symbol type clash (a symbol of the same name defined in libc as FUNC and in $SRC/cmd as OBJT), which is not harmful because ld(1) will do due diligence and prefer the non-libc symbol. This can be fixed by renaming the local symbol. There could also be a #define clash caused by the inclusion of c_synonyms.h; that is fixed via renaming as well.
    • lint noise
      In the 3rd pass of the lint checks an inconsistency in function declarations can be found, such as this:
      /builds/.../usr/include/err.h", line 43: warning: function argument declared
      inconsistently: warn(arg 1) in utils.c(62) char * and llib-lc:err.h(43) const char *
      (E_INCONS_ARG_DECL2)
      
      The problem with this output is that there are about 23 files named utils.c in ONNV. CR 6585621 is waiting for someone to provide a remedy by adding the -F flag to LINTFLAGS in $SRC/lib and $SRC/cmd.
      After the right file(s) are found, the fix is usually renaming again. Where renaming is not possible, -erroff=E_FUNC_DECL_VAR_ARG2 can be passed to lint(1).
  • Make sure there are no duplicate symbols in libc after the changes.
    This is necessary because duplicates might confuse debugging tools (mdb, dtrace). For the err/warn stuff there was one such occurrence:
    [6583] | 301316| 37|FUNC |GLOB |0 |13 |_warnx
    [1925] | 320000| 96|FUNC |LOCL |0 |13 |_warnx
    
    This can usually be solved by renaming the local variant. (One way to spot duplicates is sketched after this list.)
  • Test thoroughly.
    • Test with different compilers: Sun Studio does different things than gcc, so it is a good idea to test the code with both.
    • Try to compile different consolidations (e.g. Companion CD, SFW) on top of the changes. For the err/warn project a bug was filed to get the RPM build fixed.
    • Test that the WEAK symbols actually work.
    • Test the programs in ONNV affected by the changes, e.g. the programs which needed to be modified because of the build/lint noise.
  • Produce an annotated putback list explaining the changes.
    This is handy for a CRT advocate and saves time.
  • If the change requires some sort of action from fellow gatelings, send a heads-up, e.g. like the heads-up for err/warn.
  • If you are actually adding code to libc (this includes moving code from other libraries to libc), send an e-mail similar to the heads-up e-mail to the opensolaris-code mailing list, e.g. like this message about err/warn.
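
For illustration, here is a minimal sketch of the strong/weak symbol pattern described above. This is not the actual libc source; the _verr prototype and the exact file layout are assumptions, showing only the usual shape of such a file:

  #include <stdarg.h>

  /* strong sibling, normally made visible via the synonyms.h machinery */
  extern void _verr(int, const char *, va_list);

  /*
   * The implementation lives under the underscored (strong) name;
   * the public name is only a weak alias.  A program or library
   * shipping its own err() therefore silently wins at runtime.
   */
  #pragma weak err = _err

  void
  _err(int eval, const char *fmt, ...)
  {
          va_list ap;

          va_start(ap, fmt);
          _verr(eval, fmt, ap);
          va_end(ap);
  }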
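On the packaging side, delivering a new header boils down to a single prototype line; the mode/owner shown here follow the common pattern for headers and are illustrative:

  f none usr/include/err.h 644 root bin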
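The mapfile change then looks roughly like this (a sketch only: the public version name matches the pvs check above, while the private section name and the exact symbol placement are assumptions):

  SUNW_1.23 {                     # most recent public version section
      global:
          err;
          errx;
          verr;
          verrx;
          warn;
          warnx;
          vwarn;
          vwarnx;
  } SUNW_1.22;                    # inherits from the previous version

  SUNWprivate_1.1 {
      global:
          _err;                   # the strong (underscored) siblings
          _errx;
          _verr;
          _verrx;
          _warn;
          _warnx;
          _vwarn;
          _vwarnx;
  };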
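Finally, one way to hunt for the duplicate symbols mentioned above; the awk field numbers assume the nm -x output format shown earlier:

  $ nm -x /usr/lib/libc.so.1 | awk -F'|' \
      '$4 ~ /FUNC/ { gsub(/ /, "", $8); print $8 }' | sort | uniq -d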

Thursday Aug 23, 2007

Closing BSD compatibility gap

I do not get to meet customers very often, but I clearly remember the last time I participated in a pseudo-technical session with a new customer. The engineers were keen on learning details about all the features which make Solaris outstanding, but they were also bewildered by the lack of common functions such as daemon() (see CR 4471189). Yes, there are a number of private implementations in various places in ONNV, but that is not very useful. Until recently, this was also the case for the err/warn function family.

With the putback of CR 6495220 there are now the following functions living in libc:

  void err(int, const char *, ...);
  void verr(int, const char *, va_list);
  void errx(int, const char *, ...);
  void verrx(int, const char *, va_list);
  void warn(const char *, ...);
  void vwarn(const char *, va_list);
  void warnx(const char *, ...);
  void vwarnx(const char *, va_list);

These functions have been present in BSD systems for a long time (in FreeBSD since 1994), and the configure scripts of various pieces of software check for their presence and functionality in libc (setting the HAVE_ERR_H define). On Solaris, those checks now pass too.
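
To show what this buys us, here is a minimal consumer (a hypothetical example, not taken from ONNV):

  #include <err.h>
  #include <fcntl.h>
  #include <unistd.h>

  int
  main(int argc, char **argv)
  {
          int fd;

          if (argc < 2)
                  errx(1, "usage: %s file", argv[0]);     /* exits, no errno text */

          if ((fd = open(argv[1], O_RDONLY)) == -1)
                  err(1, "cannot open %s", argv[1]);      /* appends strerror(errno) */

          warnx("%s opened just fine", argv[1]);          /* report and continue */
          (void) close(fd);
          return (0);
  }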

The err(3C) man page covering these functions will be delivered in the same build as the changes, that is build 72.

The change is covered by the PSARC 2006/662 architectural review and the stability level of all the functions is Committed (see Architecture Process and Tools for more details on how this works). Unfortunately, the case is still closed; hopefully it will be opened soon.
Update 09-28-2007: the PSARC/2006/662 case is now open, including the one-pager document and the e-mail discussion. Thanks to John Plocher, Darren Reed and Bill Sommerfeld.

As I suggested in the err/warn heads-up, now is the time to stop creating new private implementations and to look at purging the duplicates (there are many of them, though not all can be replaced with err/warn from libc).

I will write about how to get code into libc in general from a more detailed perspective next time.

However, this does not mean there is nothing left to do in this area. Specifically, FreeBSD's err(3) contains the functions err_set_exit(), err_set_file(), errc(), warnc(), verrc() and vwarnc(), which could be ported over too. Also, there is __progname or getprogname(3). Moreover, a careful (code) reader will have noticed that err.c contains private implementations of functions with the fp suffix; this function (sub)family could be made Committed as well. So, there is still a lot of work which could be done, and anyone is free to pick any of it up (see Jayakara Kini's blog entry on how to become an OpenSolaris contributor if you're not already).
