Monday Feb 11, 2008

Grepping dtrace logs

I have been working on a tough bug for a non-trivial amount of time. The bug is a combination of a race condition and data consistency issues. To debug it I am heavily using a multi-threaded Apache process and dtrace. The logs produced by dtrace are huge and consist mostly of dumps of internal data structures.

An excerpt from such a log looks e.g. like this:

  1  50244       pk11_return_session:return       128   8155.698
  1  50223            pk11_RSA_verify:entry       128   8155.7069
  1  50224           pk11_get_session:entry       128   8155.7199
  1  50234          pk11_get_session:return       128   8155.7266
PK11_SESSION:
  pid = 1802
  session handle = 0x00273998
  rsa_pub_key handle -> 0x00184e70
  rsa_priv_key handle -> 0x00274428
  rsa_pub = 0x00186248
  rsa_priv = 0x001865f8

Side note: this is a post-processed log (probes together with their bodies are sorted by timestamp, and timestamps are converted to milliseconds - see the Coloring dtrace output entry for details and scripts).

Increasingly, I was asking myself questions which resembled this one: "when was function foo() called with data which contained the value bar?"

This quickly led to a script which does the tedious job for me. dtrace-grep.pl accepts 2 or 3 parameters. The first 2 are the probe pattern and the data pattern, respectively. The third, which is optional, is the input file (if not supplied, stdin is used). An example of use on the file pasted above looks like this:

~/bin$ ./dtrace-grep.pl pk11_get_session 0x00186248 /tmp/test
  1  50234          pk11_get_session:return       128   8155.7266
PK11_SESSION:
  pid = 1802
  session handle = 0x00273998
  rsa_pub_key handle -> 0x00184e70
  rsa_priv_key handle -> 0x00274428
  rsa_pub = 0x00186248
  rsa_priv = 0x001865f8


~/bin$ 

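For the curious, the idea behind the script can be sketched roughly as follows (this is an illustration only, not the actual dtrace-grep.pl; it assumes that every record starts with a probe header line and that all lines up to the next header belong to that record):

#!/usr/bin/perl
# Sketch of the idea behind dtrace-grep: print only those probe records
# whose header matches the probe pattern and which contain the data pattern.
use strict;
use warnings;

die "usage: $0 probe-pattern data-pattern [file]\n" if @ARGV < 2;
my ($probe_pat, $data_pat, $file) = @ARGV;

my $fh;
if (defined $file) {
    open($fh, '<', $file) or die "cannot open $file: $!";
} else {
    $fh = \*STDIN;
}

my @record;
while (my $line = <$fh>) {
    # a new record starts with a probe header line (CPU, probe ID, probe name)
    if ($line =~ /^\s*\d+\s+\d+\s+\S+:\S+/) {
        check_record(@record);
        @record = ();
    }
    push @record, $line;
}
check_record(@record);

sub check_record {
    my @rec = @_;
    return unless @rec;
    my $text = join('', @rec);
    print $text if $rec[0] =~ /$probe_pat/ && $text =~ /$data_pat/;
}
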
Now I have to get back to work, to do some more pattern matching. Oh, and the script is here.

Thursday Jan 17, 2008

Adding dtrace SDT probes

It seems that many developers and dtrace users have found themselves in a position where they wanted to add some SDT probes to a module to get more insight into what's going on, but they had to pause and think: "okay, more probes. But where to put them? Do I really need the additional probes when I already have the fbt ones?" To do this well, a systematic approach is needed in order not to overdo or underdo it. I will use KSSL (the Solaris kernel SSL proxy [1]) for illustration.

With CR 6556447, tens of SDT probes were introduced into the KSSL module and other modules which interface with it. In addition to the new SDT probes, in KSSL we also got rid of the KSSL_DEBUG macros, which were compiled only into DEBUG kernels, and substituted them with SDT probes. As a result, much better observability and error detection was achieved with both debug and non-debug kernels. The other option would have been to create a KSSL dtrace provider, but that would be too big a gun for what we needed to achieve.

Generically, the following interesting points for data gathering/observation can be identified in code:

  • data paths
    When there is more than one path by which data can flow into a subsystem. E.g. for TCP we have a couple of ways in which SSL data can reach the KSSL input queue. To identify exactly where tcp_kssl_input() was called from, we use SDT probes:
    	if (tcp->tcp_listener || tcp->tcp_hard_binding) {
    ...
    		if (tcp->tcp_kssl_pending) {
    			DTRACE_PROBE1(kssl_mblk__ksslinput_pending,
    			    mblk_t *, mp);
    			tcp_kssl_input(tcp, mp);
    		} else {
    			tcp_rcv_enqueue(tcp, mp, seg_len);
    		}
    	} else {
    ...
    		/* Does this need SSL processing first? */
    			if ((tcp->tcp_kssl_ctx != NULL) &&
    			    (DB_TYPE(mp) == M_DATA)) {
    				DTRACE_PROBE1(kssl_mblk__ksslinput_data1,
    				    mblk_t *, mp);
    				tcp_kssl_input(tcp, mp);
    			} else {
    				putnext(tcp->tcp_rq, mp);
    				if (!canputnext(tcp->tcp_rq))
    					tcp->tcp_rwnd -= seg_len;
    			}
    ...
    
  • data processed in while/for loops
    To observe what happens in each iteration of the loop. This can be used in code like this:
    while (mp != NULL) {
    
      DTRACE_PROBE1(kssl_mblk__handle_record_cycle, mblk_t *, mp);
    
      /* process the data */
      ...
    
      mp = mp->b_cont;
    }
    
  • switch statements
    If significant/non-trivial processing happens inside a switch statement, it may be useful to add SDT probes there too. E.g.:
      content_type = (SSL3ContentType)mp->b_rptr[0];
      switch(content_type) {
        /* select processing according to type */
        case content_alert:
           DTRACE_PROBE1(kssl_mblk__content_alert, mblk_t *, mp);
           ...
           break;
        case content_change_cipher_spec:
           DTRACE_PROBE1(kssl_mblk__change_cipher_spec, mblk_t *, mp);
           ...
           break;
        default:
           DTRACE_PROBE1(kssl_mblk__unexpected_msg, mblk_t *, mp);
           break;
      }
    
  • labels which cannot be (easily) identified in another way
    Useful if the code which follows the label is generic (assignments, no function calls), e.g.:
                                    /*
                                     * Give this session a chance to fall back to
                                     * userland SSL
                                     */
                                    if (ctxmp == NULL)
                                            goto no_can_do;
    ...
    no_can_do:
                                    DTRACE_PROBE1(kssl_no_can_do, tcp_t *, tcp);
                                    listener = tcp->tcp_listener;
                                    ind_mp = tcp->tcp_conn.tcp_eager_conn_ind;
                                    ASSERT(ind_mp != NULL);
    

You've surely noticed that some of the probe definitions above share a common prefix (kssl_mblk-). This is one of the things which make SDT probes so attractive. With prefixes it is possible to do e.g. the following:

sdt:::kssl_err-*
{
  trace(timestamp);
  printf("hit error in %s\\n", probefunc);
  stack(); ustack();
}

The important part is that we do not specify module or function name. The implicit wildcard (module/function name left out) combined with the explicit wildcard (prefix + asterisk) causes all KSSL error probes to be activated, regardless of the module or function in which they are defined. This is obviously very useful for technologies which span multiple Solaris subsystems or modules (such as KSSL).

The nice thing about the error probes is that they can be leveraged in test suites. For each test case we can first run a dtrace script with the above probeset covering all KSSL errors in the background, and after the test completes simply check whether it produced any data. If it did, the test case can be considered failed. No need to check kstat(1M) (and other counters), log files, etc.
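
To illustrate, a test case wrapper built on this idea might look roughly like the sketch below (the D script name, the test command and the timing are made up for this example; they are not from the actual test suite):

#!/usr/bin/perl
# Sketch only: run a test case with the KSSL error probeset enabled in the
# background and consider the test case failed if any kssl_err-* probe fired.
use strict;
use warnings;

my $dscript = 'kssl-errors.d';        # contains the sdt:::kssl_err-* clause
my $outfile = "/tmp/kssl-errors.$$";

my $pid = fork();
die "fork failed: $!" unless defined $pid;
if ($pid == 0) {
    # child: run dtrace in the background, logging to $outfile
    exec('dtrace', '-q', '-s', $dscript, '-o', $outfile)
        or die "cannot exec dtrace: $!";
}
sleep 2;                              # give dtrace time to enable the probes

system('./run-test-case');            # the actual test case (made-up name)

kill 'TERM', $pid;                    # stop the dtrace consumer
waitpid($pid, 0);

# any output means some kssl_err-* probe fired => the test case failed
if (-s $outfile) {
    print "FAIL: KSSL error probes fired during the test\n";
    exit 1;
}
print "PASS\n";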

Also, thanks to the way dtrace probes are activated, we can have both a generic probeset (using this term for lack of a better one) as above and, in addition, a probe-specific action, e.g.:

/* probeset of all KSSL error probes */
sdt:::kssl_err-*
{
  trace(timestamp);
  printf("hit error in %s\\n", probefunc);
}

/* 
  the probe definition is:
     DTRACE_PROBE2(kssl_err__bad_record_size,
         uint16_t, rec_sz, int, spec->cipher_bsize);
 */
sdt:kssl:kssl_handle_record:kssl_err-bad_record_size
{
  trace(timestamp);
  tracemem(arg0, 32);
  printf("rec_sz = %d , cipher_bsize = %d\\n", arg1, arg2);
}

If the kssl_err-bad_record_size probe gets activated, the generic clause fires too, because the probeset contains that probe.

Similarly to the error prefix, we can have a data-specific prefix. For KSSL it is the kssl_mblk- prefix, which can be used for tracing all mblks (msgb(9S)) as they flow through the TCP/IP, STREAMS and KSSL modules. With such probes it is possible to do e.g. the following:

/* how many bytes from a mblk to dump */
#define DUMP_SIZE       48

/* use macros from  */
#define MBLKL(mp)       ((mp)->b_wptr - (mp)->b_rptr)
#define DB_FLAGS(mp)    ((mp)->b_datap->db_flags)

#define PRINT_MBLK_INFO(mp)                                             \
        printf("mblk = 0x%p\n", mp);                                    \
        printf("mblk size = %d\n", MBLKL((mblk_t *)mp));                \
        PRINT_MBLK_PTRS(mp);

#define PRINT_MBLK(mp)                                                  \
                trace(timestamp);                                       \
                printf("\n");                                           \
                PRINT_MBLK_INFO(mp);                                    \
                printf("DB_FLAGS = 0x%x", DB_FLAGS((mblk_t *)mp));      \
                tracemem(((mblk_t *)mp)->b_rptr, DUMP_SIZE);            \
                tracemem(((mblk_t *)mp)->b_wptr - DUMP_SIZE,            \
                        DUMP_SIZE);

sdt:::kssl_mblk-*
{
  trace(timestamp);
  printf("\\n");
  PRINT_MBLK(arg0)
}

This is actually an excerpt from my (currently internal) KSSL debugging suite.
An example of output from such a probe can be seen in my Coloring dtrace output post.

For more complex projects it would be a waste to stop here. Prefixes could be structured further. However, this has some drawbacks. In particular, I was thinking about having kssl_mblk- and kssl_err- prefixes. Now what to do for places where an error condition occurred _and_ we would like to see the associated mblk? Using something like kssl_mblk_err-* comes to one's mind. However, there is a problem with that - what about the singleton cases (only mblk, only err)? Sure, using multiple wildcards in dtrace is possible (e.g. syscall::*read*:), but it would make things ugly and complicated given the number of mblk+err cases (it's probably safe to assume that the number of such cases will be low). Simply, it's not worth the hassle. Rather, I went with two separate probes.
To conclude, using structured prefixes is highly beneficial only for sets of probes where the categories/sub-prefixes form non-intersecting sets (e.g. data type and debug level).

Of course, all of the above applies not only to the kernel but also to custom userland probes!

[1] High-level description of KSSL can be found in blueprint 819-5782.

Saturday Nov 17, 2007

Chemical plants, Linux and dtrace

When my friend from Usti nad Labem (the 9th biggest city in the Czech Republic) asked me to present about OpenSolaris at a local Linux conference, I was hooked. First, Usti is an interesting city (it is surrounded by beautiful countryside, yet has chemical plants in the vicinity of the city center) and I had not been there for a long time; second, the opportunity to present OpenSolaris technologies to Linux folks is unique.

When we (Sasha agreed to present with me) arrived in Usti we were greeted by slightly apocalyptic weather (see pictures below). Fortunately, the environment where all the presentations took place compensated for that.

Forty minutes to present about something which most people in the room are not very aware of is challenging. The fact that OpenSolaris is open source and home to several disruptive technologies makes that both easier and harder. We greatly leveraged Bryan Cantrill's DTrace review video taped at Google for the second part of the presentation, where we demoed dtrace. I even borrowed some of his quotes; I am pretty sure he wouldn't object, since his presentations have been perused in the past. To make the list of attributions complete, substantial material in the first part comes from Lukas' past presentations about the OpenSolaris project.

It's much better to demo the technology than to just talk about how great it is (I remember a funny moment in Bryan's presentation where a dtrace probe didn't "want" to fire and Bryan jokingly said "bad demo!" to the screen; I nearly fell off my chair at that moment). "So, I have finally seen dtrace in action!" was one of the great things to hear after the presentation.

The "OpenSolaris technologies" presentation can be downloaded here.

Thursday Jun 14, 2007

Coloring dtrace output

Currently I am working on a subtle bug in KSSL (the SSL kernel proxy). In order to diagnose the root cause of the bug I use a set of dtrace scripts to gather data at various probes. One of the dtrace scripts I am using looks like this:

#!/usr/sbin/dtrace -Cs
/*
  trace the mblk which caused activation of the
  kssl-i-kssl_handle_any_record_recszerr probe

  NOTE: this presumes that every mblk seen by TCP/KSSL
        is reasonably sized (roughly 100 bytes)
 */

/* how many bytes from a mblk to dump */
#define DUMP_SIZE       48

/* 
  generic kssl mblk dump probe 
  (covers both input path and special cases)
 */
sdt:::kssl_mblk-*
{
  trace(timestamp);
  printf("\\nmblk size = %d",
      ((mblk_t \*)arg0)->b_wptr - ((mblk_t \*)arg0)->b_rptr);
  tracemem(((mblk_t \*)arg0)->b_rptr, DUMP_SIZE);
  tracemem(((mblk_t \*)arg0)->b_wptr - DUMP_SIZE, DUMP_SIZE);
}

The scripts usually collect big chunks of data from the various SDT probes I have put into the kssl kernel module. After the data is collected I usually spend big chunks of time sifting through it. At one point I got a suspicion that the problem is actually a race condition of sorts. In order to shed some light on what was going on I used less(1), which provides highlighting of data when searching. While this is sufficient when searching for a single pattern, it does not scale when more patterns are used. This is when I got the idea to color the output of the dtrace scripts with a simple Perl script, to see the correlations (or lack of them) between the events. An example of the output colored by the script:

colored dtrace output (data)

This looks slightly more useful than plain black output in a terminal, but even on a 19" display the big picture is missing. So I changed the dtrace-coloring script to be able to strip the data parts of the probes and print just the headers:

colored dtrace output (headers)

This is done via the '-n' command line option (the default is to print everything). The output containing just the colored headers is especially nice for tracking down race conditions and other time-sensitive misbehaviors. You can download the script for dtrace log coloring here: dtrace-coloring.pl

Colors can be assigned to regular expressions in the '%coloring' hash in the script itself. For the example above I used the following assignments:

my %coloring = (
  '.*kssl_mblk-i-handle_record_cycle.*' => '4b6983',    # dark blue
  '.*kssl_mblk-enqueue_mp.*'            => '6b7f0d',    # dark green
  '.*kssl_mblk-getnextrecord_retmp.*'   => 'a11c10',    # dark red
);

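The coloring itself can then be as simple as matching each line of the log against the regular expressions from the hash and wrapping matching lines in the configured color. A minimal sketch of that idea might look like the following (it assumes HTML output with inline styles, which the RGB-like values above suggest; the actual dtrace-coloring.pl may well do it differently and also handles the '-n' stripping):

#!/usr/bin/perl
# Sketch of the coloring idea only (not the actual dtrace-coloring.pl):
# wrap every line matching one of the configured regexes in its color.
# NOTE: HTML escaping of the log content is omitted for brevity.
use strict;
use warnings;

my %coloring = (
  '.*kssl_mblk-i-handle_record_cycle.*' => '4b6983',    # regex => RRGGBB, as above
);

print "<pre>\n";
while (my $line = <>) {
    chomp $line;
    my $color;
    for my $re (keys %coloring) {
        if ($line =~ /$re/) {
            $color = $coloring{$re};
            last;
        }
    }
    if (defined $color) {
        print qq{<span style="color: #$color">$line</span>\n};
    } else {
        print "$line\n";
    }
}
print "</pre>\n";
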
In the outputs above you might have noticed that a timestamp is printed when a probe fires. This is useful for pre-processing of the log file: dtrace(1) (or libdtrace, to be more precise) does not sort events as they come from the kernel (see CR 6496550 for more details). When hunting down a race condition on a multiprocessor machine, having the output sorted is crucial. So, in order to get a consistent picture suitable for race condition investigation, a sort script is needed. You might use a crude script of mine or you can write your own :)
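
For illustration, the core of such a sorting script might look roughly like this (a sketch only, not the crude script mentioned above; it assumes that each record starts with a probe header line whose last field is the timestamp printed via trace(timestamp), and that all lines up to the next header belong to that record):

#!/usr/bin/perl
# Sketch of a dtrace log sorter: group each probe header line with the data
# lines that follow it and sort the resulting records by timestamp.
use strict;
use warnings;

my @records;                    # each element: [ timestamp, record text ]
my $current;

while (my $line = <>) {
    if ($line =~ /^\s*\d+\s+\d+\s+\S+:\S+/) {        # probe header line
        push @records, $current if defined $current;
        my ($ts) = $line =~ /([\d.]+)\s*$/;           # timestamp is the last field
        $ts = 0 unless defined $ts;
        $current = [ $ts, $line ];
    } elsif (defined $current) {
        $current->[1] .= $line;                       # data line of the last record
    }
}
push @records, $current if defined $current;

# print the records back out, sorted by timestamp
print $_->[1] for sort { $a->[0] <=> $b->[0] } @records;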
