Wednesday Sep 15, 2004

ZFS feature article

The feature article on www.sun.com this week is ZFS

The article features some great background as well as descriptions of what exactly we are doing with ZFS and more than a few quotes from Jeff Bonwick.

A recommended read.

Tuesday Sep 14, 2004

reading and writing long dtrace command lines

If you read through my last dtrace adventure, you may have noticed more than a few multi-line dtrace commands.

Like most engineers, I will look for the easiest way to accomplish what I need to get done. Actually writing D-scripts into a file is only something that I'll do if things start getting a bit complex.

When I'm playing with dtrace, I tend to use "ksh -o vi" as my shell.

As my scripts commands start heading toward that right margin, I use the edit option within ksh and actually lay out the command across multiple lines so I can read what I'm doing. For those that haven't tried this, it's just a matter of hitting <ESC>v while typing.

It's also a good way of evolving the commands as you go, which is the basis of a lot of dtrace-instigated investigative work.

It certainly helps me put together dtrace commands that are a little more complex than you would normally type on the command line, but not quite complex enough to justify writing a script.
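As an illustration (this is a made-up example, not one of the commands from the posts below), a one-liner like this is far easier to read and tweak once it's laid out over a few lines in the editor:

/usr/sbin/dtrace -n '
syscall::read:entry
/execname == "gnome-terminal"/
{
        @reads[probefunc, arg0] = count();
}
tick-10s
{
        exit(0);
}'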

Friday Sep 10, 2004

dtracing on the train - gnome-vfs-daemon

There was some discussion internally recently about using gnome-vfs-daemon as a process to start demonstrating dtrace with. As a result I thought it would be interesting to have a look at it on my notebook while coming to work on the train (about a 90 minute trip).

OK, first off, let's have a look at the system calls. We could do this with

$ /usr/sbin/dtrace -n 'syscall:::entry/execname=="gnome-vfs-daemon"/'

But that really generates a lot of output. One of the things that I do notice though is the following sequence repeating itself every three seconds.

  0    397                     open64:entry 
  0    397                     open64:entry 
  0    103                      ioctl:entry 
  0    103                      ioctl:entry 
  0    103                      ioctl:entry 
  0    103                      ioctl:entry 
  0    103                      ioctl:entry 
  0    103                      ioctl:entry 
  0    103                      ioctl:entry 
  0    103                      ioctl:entry 
  0    103                      ioctl:entry 
  0    103                      ioctl:entry 
  0    103                      ioctl:entry 
  0    103                      ioctl:entry 
  0    103                      ioctl:entry 
  0    103                      ioctl:entry 
  0    103                      ioctl:entry 
  0    103                      ioctl:entry 
  0    103                      ioctl:entry 
  0    103                      ioctl:entry 
  0    309                     llseek:entry 
  0     17                      close:entry 
  0    317                    pollsys:entry 

This piqued my interest. How about an aggregation over a minute?

$ /usr/sbin/dtrace -q -n '
syscall:::entry/execname=="gnome-vfs-daemon"/{
	@p[probefunc] = count();
}
tick-1m {
        printf("\\nCount Syscall\\n----- -------\\n");
        printa(" %@3d  %s\\n",@p); exit(0);
}'

Count Syscall
----- -------
  20  pollsys
  20  close
  20  llseek
  40  open64
 360  ioctl

So, for each pollsys() and close(), it looks like I am seeing two open64()s and 18 ioctl()s, and from earlier we know that we see two open64()s followed by the ioctl()s. Let's have a closer look at these system calls. Also, while we are at it, let's try and find out what the ioctl is. We now move to a script file.

#!/usr/sbin/dtrace -s

syscall::ioctl:entry
/execname == "gnome-vfs-daemon"/
{
    self->interest = timestamp;
    self->fd = arg0;
    self->cmd1 = (char)(arg1>>8);
    self->cmd2 = arg1&0xff;
}

syscall::ioctl:return
/self->interest/
{
    printf("ioctl(%d, '%c'<<8|%d, buf) took %u us",
        self->fd,
        self->cmd1,
        self->cmd2,
        (timestamp - self->interest)/1000
        );
    ustack();
    self->interest = 0;
}

syscall::open64:entry
/execname == "gnome-vfs-daemon"/
{
    self->file = copyinstr(arg0);
    self->interest = 1;
}

syscall::open64:return
/self->interest/
{
    printf("%s returns fd %d\\n", self->file, (int)arg1);
    ustack();
    self->interest = 0;
}

An excerpt of the output shows

CPU     ID                    FUNCTION:NAME
  0    398                    open64:return /etc/fstab returns fd -1

              libc.so.1`__open64+0x15
              libc.so.1`_endopen+0xa8
              libc.so.1`fopen64+0x26
              libgnomevfs-2.so.0.600.0`_gnome_vfs_get_unix_mount_table+0x3e
              libglib-2.0.so.0.400.1`g_source_callback_funcs
              libgnomevfs-2.so.0.600.0`poll_fstab

  0    398                    open64:return /etc/mnttab returns fd 44

              libc.so.1`__open64+0x15
              libc.so.1`_endopen+0xa8
              libc.so.1`fopen64+0x26
              libgnomevfs-2.so.0.600.0`_gnome_vfs_get_current_unix_mounts+0x37
              libglib-2.0.so.0.400.1`g_source_callback_funcs
              libgnomevfs-2.so.0.600.0`poll_mtab

  0    104                     ioctl:return ioctl(44, 'm'<<8|7, buf) took 232 us
              libc.so.1`ioctl+0x15
              libgnomevfs-2.so.0.600.0`_gnome_vfs_get_current_unix_mounts+0x55

  0    104                     ioctl:return ioctl(44, 'm'<<8|7, buf) took 4 us
              libc.so.1`ioctl+0x15
              libgnomevfs-2.so.0.600.0`_gnome_vfs_get_current_unix_mounts+0xfe

OK, so we first attempt to open /etc/fstab, which obviously does not exist in Solaris, then we open /etc/mnttab, which does. We then go and do lots of ioctl()s on it.

The 'm'<<8|7 (as well as being MTIOCGUARANTEEDORDER in sys/mtio.h) is apparently MNTIOC_GETMNTENT (common/sys/mntio.h).

This has only gone back into Solaris 10 in the last few months. It's generally called by getmntent(3C).
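If you want to see for yourself which processes are issuing this particular ioctl, a one-liner along these lines does the job (0x6d is ASCII 'm'; strictly speaking the same command value would also match MTIOCGUARANTEEDORDER on a tape descriptor, so treat it as a rough filter):

# dtrace -q -n 'syscall::ioctl:entry
/(arg1 >> 8) == 0x6d && (arg1 & 0xff) == 7/
{
        @[execname] = count();
}
tick-30s { exit(0); }'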

So basically, it looks like gnome-vfs-daemon is reading the mount table of the system every three seconds. This certainly seems like overkill. Indeed, on the notebook I ran it on, if we try the following script

#!/usr/sbin/dtrace -s

syscall::ioctl:entry
/execname == "gnome-vfs-daemon"/
{
    self->interest = timestamp;
}
syscall::ioctl:return
/self->interest/
{
    @ioctl[probefunc] = quantize((timestamp - self->interest)/1000);
    self->interest = 0;
}

syscall::open64:entry
/execname == "gnome-vfs-daemon"/
{
    self->interest = timestamp;
    self->file = copyinstr(arg0);
}

syscall::open64:return
/self->interest/
{
    @open64[self->file] = quantize((timestamp - self->interest)/1000);
    self->interest = 0;
}

tick-1min {
    printa (@ioctl);
    printa (@open64);
    exit(0);
}

We can look at just how much time is being spent (we could always actually sum the numbers, but I like looking at graphs).

  ioctl                                             
           value  ------------- Distribution ------------- count    
               1 |                                         0        
               2 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@   340      
               4 |                                         0        
               8 |                                         0        
              16 |                                         0        
              32 |                                         0        
              64 |                                         0        
             128 |                                         0        
             256 |@@                                       20       
             512 |                                         0        


  /etc/fstab                                        
           value  ------------- Distribution ------------- count    
               8 |                                         0        
              16 |@@@@@@@@@@@@@@@@@@@@                     10       
              32 |@@@@@@@@@@@@@@@@@@@@                     10       
              64 |                                         0        

  /etc/mnttab                                       
           value  ------------- Distribution ------------- count    
              32 |                                         0        
              64 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 20       
             128 |                                         0        

So, worst case on this system is 340*2 + 10*16 + 10*32 + 20*64 = 2440 microseconds every minute, or 122 microseconds every iteration. This might not initially sound like an awful lot to worry about. However, in this iteration I am only seeing 81 mount points on my system and I only have one gnome-vfs-daemon running. The truly scary concept is when we start looking at this issue in the light of large Sun Ray servers, say with 250-300 entries in the mount table and around 80 users (instances of gnome-vfs-daemon).

Fortunately there is an answer. If we look at the man page for mnttab(4), we see

NOTES
     The snapshot of the mnttab information is taken any  time  a
     read(2)  is  performed  at  offset  0 (the beginning) of the
     mnttab file. The file modification time returned by  stat(2)
     for  the  mnttab  file  is  the  time  of the last change to
     mounted file  system  information.  A  poll(2)  system  call
     requesting  a POLLRDBAND event can be used to block and wait
     for the system's mounted file system information to be  dif-
     ferent  from  the most recent snapshot since the mnttab file
     was opened.

So we actually only need to stat the file to see if the mount table has changed. This is a much more efficient way of doing things.
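Just to put a number on "much more efficient", here is a small sketch (not something I ran as part of this investigation) that quantizes the time spent in stat-family calls on /etc/mnttab. I'm assuming the caller ends up in stat64(2); adjust the probe if your libc routes stat(3C) through a different system call.

#!/usr/sbin/dtrace -s

/* hypothetical sketch: how long does a stat of /etc/mnttab take? */
syscall::stat64:entry
/copyinstr(arg0) == "/etc/mnttab"/
{
    self->ts = timestamp;
}

syscall::stat64:return
/self->ts/
{
    @["stat64(/etc/mnttab) usecs"] = quantize((timestamp - self->ts) / 1000);
    self->ts = 0;
}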

After logging a bug on this, the engineer who took it determined that the code to do the stat() was actually in there, but due to some faulty logic, it would never get executed. This is now logged into the gnome bugzilla as 152114 and 152115 and with luck should be fixed shortly.

Tuesday Aug 31, 2004

Wow, Bryan and Adam have been busy

Just noticed two rather large dtrace putbacks into build 67 today.

Bryan has just completed a whole lot of small fixes that include the gas/as problem with the fbt provider that was discussed on the bigadmin forum and some interaction issues with kmdb.

Adam has just put back the plockstat provider that he has been talking about.

Go guys!

Thursday Aug 26, 2004

Dtrace should be used just like any other tool

I believe that we will have succeeded with dtrace when all kinds of folks, from sysadmins to developers, start using dtrace as a normal part of their troubleshooting toolkit.

As a simple example, today I noticed my workstation running with a load average of just above 1. Not unusual, except I really wasn't having the machine do anything at the time. Unfortunately, I don't have the output from the commands, as I addressed the problem this morning and it's way out of my scroll buffer now. I'm actually pulling the commands back out of my shell history.

Maybe we've got something fork/execing a lot (that won't show in prstat). OK, let's pull out dtrace and have a look at what's going on ...

# dtrace -n 'proc::exec_common:exec-success {trace(arg0);}'

No, that was pretty quiet. OK, let's look at what is getting scheduled.

# dtrace -n 'sched::resume:on-cpu {printf("%s: %d\n", execname, pid);}'

OK, this is scrolling past relatively quickly, so it appears that we've got a bit of context switching going on and pretty much always have something ready to run. This probe will print a line every time we put something onto a processor. How about using an aggregation to tidy things up a bit?

# dtrace -n 'sched::resume:on-cpu {@[execname,pid] = count();}'

Hmmmm, much nicer. I can see that sched and one particular instance of mozilla (I have to run a few in different profiles) are finding themselves put onto a processor much more often than anything else. The mozilla in question is running with pid 20051. Running pargs 20051 tells me a bit more about which instance it is. It's the one that I use generally and do my email with. At this point I could probably have killed off and restarted the mozilla and the problem would have gone away, but I was interested in what it was doing.

OK, so what are the system calls?

# dtrace -n 'syscall:::entry /pid == 20051/ { @[probefunc] = count();}'

This showed me that we were spread pretty equally around an ioctl on fd3, a read on fd5 and a write on fd6. Running pfiles 20051 tells us that these file descriptors are pipes, and very little else.
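If you wanted dtrace itself to confirm which descriptors are involved (rather than eyeballing the pfiles output), you could aggregate on the file descriptor as well. A sketch along these lines, using the pid from this particular incident:

# dtrace -n 'syscall::ioctl:entry,syscall::read:entry,syscall::write:entry
/pid == 20051/
{
        @[probefunc, arg0] = count();
}'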

If we were interested, we could also aggregate on the user-level stacks for each of the calls, like this.

# dtrace -n 'syscall::ioctl:entry /pid == 20051/ { @[ustack()] = count();}'
# dtrace -n 'syscall::read:entry /pid == 20051/ { @[ustack()] = count();}'
# dtrace -n 'syscall::write:entry /pid == 20051/ { @[ustack()] = count();}'

One interesting thing here was that for each of the above probes, there was only one user stack. I suppose that shouldn't be surprising given that they are always calling with the same file descriptor.

It was also interesting that the other instance I had running at the time was not doing anything even vaguely similar. There are two main differences between them.

  • My general instance also does my email
  • My non-general instance only talks to our escalation tracking system, so it's only really one website

At this point I killed and restarted the mozilla, and ran some of the commands before and after starting my mail. The load on the box was much lighter, but mail didn't make a difference. I can only conclude that a site I must have visited left something playing up in mozilla. Oh well, won't be the last time.

The point of me blogging this is to show just how easy it is to start getting to the bottom of problems that would otherwise be very difficult to observe. That's the whole idea of dtrace. It's a tool to let you observe the system, and it can be used just as easily as the others that I interspersed here (e.g. pfiles, prstat and pargs).

As I have been saying to Jacob before every soccer match, "Get in and have a go". It's the only way to really start to appreciate the usefulness.

Monday Aug 23, 2004

Kprobes in Linux vs Dtrace

An article on osnews points at an article at IBM about Kprobes on Linux

While this is probably a step in the right direction I still have some concerns. I would encourage the author to look at adding in some more protection. i.e. Always practice safe probing.

  1. I don't see any checking for NULL pointer dereferences for the printk's. If this is the case, then a poorly written kprobe can still take out a production box. In fact, any bad piece of code could take it out.
  2. It still seems rather clunky to get simple probes inserted. Looking through the article shows a lot of work required to get the probes in. The equivalent probe in dtrace would be
    #!/usr/sbin/dtrace -s
    
    syscall::fork1:entry, syscall::forkall:entry, syscall::vfork:entry {
        printf("\\n\\tpid=%d kthread=0x%llx\\n", pid, (long long)curthread);
        printf("\\tt_state=0x%x cpu=%d\\n",
            curthread->t_state, curthread->t_cpu->cpu_id);
        printf("\\n\\tCaller program is \\"%s\\"\\n\\n", execname);
        printf("\\tUser Space stack\\n");
        ustack();
        printf("\\n\\tKernel Space Stack\\n");
        stack(10);
    }
    
    
    Which gives us the following results
    # ./fork.d
    dtrace: script './fork.d' matched 3 probes
    CPU     ID                    FUNCTION:NAME
      2    207                      vfork:entry 
            pid=1443 kthread=0x300056a3c60
            t_state=0x4 cpu=2
    
            Caller program is "csh"
    
            User Space stack
    
                  libc.so.1`vfork+0x20
                  csh`execute+0xcbc
                  csh`process+0x360
                  csh`main+0xe94
                  csh`_start+0x108
    
            Kernel Space Stack
    
                  unix`syscall_trap32+0xcc
    
    
    Alternatively, with the knowledge that in Solaris each of these three system calls calls cfork() (which you could also determine with dtrace), we could simply do
    #!/usr/sbin/dtrace -s
    
    fbt::cfork:entry {
        printf("\\n\\tpid=%d kthread=0x%llx\\n", pid, (long long)curthread);
        printf("\\tt_state=0x%x cpu=%d\\n",
            curthread->t_state, curthread->t_cpu->cpu_id);
        printf("\\n\\tCaller program is \\"%s\\"\\n\\n", execname);
        printf("\\tUser Space stack\\n");
        ustack();
        printf("\\n\\tKernel Space Stack\\n");
        stack(10);
    }
    
    Which would give us exactly the same output, as the calls to cfork() are made as tail calls. On an x86 box it would look something like:
    # ./fork.d
    dtrace: script './fork.d' matched 1 probe
    CPU     ID                    FUNCTION:NAME
      0   3882                      cfork:entry 
            pid=669 kthread=0xffffffffd5ae6000
            t_state=0x4 cpu=0
    
            Caller program is "csh"
    
            User Space stack
    
                  libc.so.1`vfork+0x45
                  csh`execute+0x12f
                  csh`process+0x24b
                  csh`main+0xa25
                  80580ea
    
            Kernel Space Stack
    
                  unix`sys_call+0xda
    
    

Now there are also a couple of other nice things to consider here.

  1. No need to register and unregister the probe. If I'm not running the dtrace script, then the probe does not exist.
  2. If I want to change the query, I just edit the script.

  3. This one is actually a pretty basic probe. I can get much more complex information with very little effort, and as I have already stated, it's just a matter of modifying the script and the probe does not exist unless I am running the script.

But the most important thing to remember is that we have protection against the probe taking out the system. That means that we have no hesitation in running dtrace probes on production boxes, where outage time is measured in thousands of dollars per second (yes we have such customers).


Update

Dan Price made a suggestion which tidies the script up even more, meaning that even if we change the way that we do fork(), the script will remain working. This gives us stability with kernel releases as well. To see the new script, look at the comments for this entry.

Sunday Aug 15, 2004

New stuff in Solaris Express

Wow, there are even a few things in there that I had not been aware of, like

  • VTS for x86
  • DHCP for Logical Interfaces

Two of the items that jump out at me as being very useful are:


KMDB

This is one that I had been waiting for and when I saw the putback happen, there was an immediate congrats sent to Matt Simmons. The great thing was that Matt was working in New Zealand at this point in time so I saw it during my working day!

Kmdb brings much of the functionality of mdb to the kernel debugger (replacing kadb).

One of the really great things about it is that you don't need to have booted with it in order to use it. The 'mdb -K' option will load the module and drop you directly into the debugger on the console. I have lost count of the number of times that I have wished that a customer had booted with kadb so that when a rare problem crops up, we can actually do something useful. Well, now we'll be able to do something about it. Again, great work Matt.
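For those who haven't tried it, a session looks something like the following (from memory, so take the exact banner text with a grain of salt); ::status gives a quick summary of the running kernel, and :c resumes the system and returns you to your shell.

# mdb -K
Welcome to kmdb
[0]> ::status
[0]> :c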


IPsec and NAT-Traversal

I was waiting for this one. It now means that we can do a completely Sun-on-Sun VPN connection from my notebook back into work, rather than relying on third parties who have decided that Solaris on x86 is not something they want to do for their VPN support.

Solaris Express 8/04

Solaris Express 08/04, which is based on build 63, should be available for download by August 17 (note this is August 17 in the US), if the announcements that I have seen are anything to go by.

I'll talk about some of these in more detail in later blogs, but the new features for the release include:

  • DHCP Event Scripting
  • DHCP for Logical Interfaces
  • x86: Sun Validation Test Suite 6.0
  • Kernel Modular Debugger
  • Java 2 Platform, Standard Edition 5
  • Zones-Wide Resource Control
  • Stream Control Transmission Protocol
  • Zebra Multi-Protocol Routing Suite
  • IPsec and NAT-Traversal
  • Enhancement to the nfsmapid Daemon
  • sendmail Version 8.13
  • Per-thread Mode Enhancement
  • perzone Audit Policy
  • OpenSSL and OpenSSL PKCS#11 Engine
  • x86: Fibre Channel Connectivity for Storage Devices
  • BIND 9
  • Samba
  • Flex 2.5.4a
  • SIP Proxy Server
  • libusb 0.1.8

Remember that you will need the three software cd images to install Solaris Express.

As usual, all docs are on docs.sun.com without password protection.

Tuesday Aug 10, 2004

Getting the mute and volume keys working in x86

Around April I logged a bug to try and get this stuff working. I had managed a very kludgy binary patching process to get it working with Xsun, but had also decided to switch to Xorg (speed reasons mainly), so I was out in the cold again.

I'm running a modified version of the kb8042 driver which returns the right keycode/scancode mapping for them. The next trick was to see how the Xorg X server reacted to this. Not good. We get a whole lot of complaints in /var/log/Xorg.0.log about KEY_UNUSED whenever I press or release one.

Hmmmm, ...

Time to hit the Xorg source code. It looks like there is a map into which I have to add in the keysymbol numbers. OK, I have patched those into the binary with mdb. The errors have stopped, but now whenever I try to map those keys to their functions in the gnome keyboard shortcuts, pressing the key undefines the function (just like if I had pressed backspace).

Gawd it's frustrating to be soooooo close. Maybe something will occur to me overnight.

Monday Aug 02, 2004

Dtrace, Solaris, Open Sourcing ...

Adam Leventhal has just got back from OSCon and has made some interesting blog entries in the last few days.

Linux, Solaris and Open Source is definitely worth a read. He discusses a conversation he had with Greg Kroah-Hartman about Linux kernel development, goes on to talk a little about Open Solaris and the Solaris Community, and covers some of the feedback he is getting (good looking feedback too!).

Adam is now officially a celebrity :-) In DTrace Spotting it appears that he was recognised in an airport as one of the DTrace developers and a conversation was struck up.

Other folks talking about OSCon include Eric Schrock here and Jim Grisanzio, who also includes some photos. Isn't it nice to finally be able to associate faces with some of the names that we have been seeing around?

Wednesday Jul 14, 2004

Playing with dtrace in user space

In chapter 30 of the dtrace reference guide we look at user-level tracing. There is only one example there, so I thought I'd write a few more, as I see this as being an extremely useful tool in user-space as well as the obvious kernel use.

Now, in the current build of Solaris Express, we cannot directly run a process from the dtrace command line, so we need to do it with truss. The sample command I will be using is a simple "ls -ls". You'll probably need two terminal sessions to do this: one to deal with the command and the other for the dtrace script.

On running the truss command you'll get something like

$ truss -f -t\!all -U a.out:main ls -ls
3470/1@1:       -> main(0x2, 0xffbfe784, 0xffbfe790, 0x26000)

To restart the command again after you have the dtrace running, simply use prun.

$ prun 3470

For the dtrace commands that use aggregations you need to ctrl-c the command once the process has finished.

OK, on to some commands.

1. Let's look at how often we call each function within the lifetime of this process.

# dtrace -n 'pid3470:::entry { @funcs[probefunc] = count(); } END { printa(@funcs); }'
dtrace: description 'pid3470:::entry ' matched 2921 probes

CPU     ID                    FUNCTION:NAME
  2      2                             :END 
  pthread_rwlock_unlock                                             1
  _fflush_u                                                         1
  rwlock_lock                                                       1
  ...
  iflush_range                                                     90
  callable                                                        136
  elf_find_sym                                                    139
  _ti_bind_clear                                                  140
  rt_bind_clear                                                   140
  strcmp                                                          146

This shows us the "hot" functions in our code.

2. We might also be interested in how long we spend in these functions per call.

# dtrace -n 'BEGIN { depth = 1; } pid3497:::entry { self->start[depth++] = vtimestamp; } \
  pid3497:::return { @funcs[probefunc] = quantize(vtimestamp - self->start[depth-1]); \
  self->start[--depth] = 0;}END { printa(@funcs);}'
dtrace: description 'BEGIN ' matched 5816 probes

This gives us some histograms of how long we spend in various functions.

e.g.

  strcmp                                            
           value  ------------- Distribution ------------- count    
            1024 |                                         0        
            2048 |@@@@@@@@@@@@@                            47       
            4096 |@@@@@@@@@@@@@@@@@@@@@@@@@@@              98       
            8192 |                                         0        
           16384 |                                         0        
           32768 |                                         1        
           65536 |                                         0        

We could just as easily have specified the functions we were interested in, as shown below.
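For example, something along these lines would time just strcmp for that process (purely a sketch; substitute whatever pid your own truss run gave you):

# dtrace -n 'pid3497::strcmp:entry { self->ts = vtimestamp; } \
  pid3497::strcmp:return /self->ts/ { @["strcmp"] = quantize(vtimestamp - self->ts); self->ts = 0; }'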

3. OK, suppose for argument's sake we were interested in strcmp (since I listed it already). How about we look at the codepath that we take through it?

# dtrace -n 'pid3486::strcmp: { trace(probename);}'
dtrace: description 'pid3511::strcmp: ' matched 256 probes

We get a really long list as we are looking at all calls to it (and we have a few). This may not be very useful. The last call looks like:

  2  47463                     strcmp:entry   entry                            
  2  47464                         strcmp:0   0                                
  2  47465                         strcmp:4   4                                
  2  47466                         strcmp:8   8                                
  2  47508                        strcmp:b0   b0                               
  2  47509                        strcmp:b4   b4                               
  2  47462                    strcmp:return   return

This list is actually a call flow through strcmp for all calls to that function. Looking at the full list could give us ideas about where we might optimise. This could be even more useful if we knew where the hot instructions within this code flow were.

4. We can do this by turning on probes for all instructions within strcmp (like above), but aggregating on the probename, which will be the function offset.

# dtrace -n 'pid3517::strcmp: { @hot[probename] = count();} END {printa(@hot);}'
dtrace: description 'pid3517::strcmp: ' matched 257 probes

The end of this list looks like

  b0                                                               74
  b4                                                               74
  c                                                                77
  18                                                               77
  14                                                               77
  10                                                               77
  2c                                                               82
  24                                                               82
  28                                                               82
  20                                                               82
  30                                                               82
  0                                                               146
  entry                                                           146
  8                                                               146
  return                                                          146
  4                                                               146

We can ignore the entry and returns as we already account for those. We can tell from them (however) that in this run we called strcmp 146 times.

5. Anyway, if we run up mdb on /lib/libc.so.1 we can find out what these instructions are.

# mdb /lib/libc.so.1 -
Loading modules: [ libc.so.1 ]
> strcmp::dis
strcmp:                         subcc     %o0, %o1, %o2
strcmp+4:                       be        +0xac         
strcmp+8:                       sethi     %hi(0x1010000), %o5
...

OK, we would expect to execute these on each call, so what about the ones we hit 82 times?

strcmp+0x20:                    ldub      [%o1 + %o2], %o0
strcmp+0x24:                    ldub      [%o1], %g1
strcmp+0x28:                    subcc     %o0, %g1, %o0
strcmp+0x2c:                    bne       +0x1c4        
strcmp+0x30:                    addcc     %o0, %g1, %g0

strcmp is probably not the best example to use as it is a call that has already been very highly optimised, but I hope you get the idea. This is going to be very useful in finding bottlenecks and suboptimal codepaths in userspace.

Tuesday Jul 13, 2004

Solaris Express 7/04 out shortly

Just received the notification that the July release of Solaris Express should be available for download by July 14 and will be s10_60. When I say July 14, I do mean July 14 in the US; out here in Asia/Pacific the vagaries of timezones mean that it should be available by July 15.

Some of the new features for this release are:

  • Stream Control Transmission Protocol
  • New Functionality Introduced in Solaris Zones Software Partitioning Technology
  • New Solaris Project and Resource Management Command Functionality
  • New Functions for Converting Strings
  • Java Support for pstack Command
  • New Solaris Unicode locales
  • USB End-User Device Support Enhancements (revised)

As has been the case for a while now, all documentation is available at http://docs.sun.com with no password.

Please also remember that the distribution is now three cd images, (1of3, 2of3 and 3of3) and that you will need them all to do an installation. The old CD0 is no more.

Alan Coopersmith also lists some things that are in the release that he has an interest in.


Update

From the comments I have gotten back from the Solaris on x86 mailing list, it looks like it is available NOW. Certainly beats last month, when it became available after the date I said.

Saturday Jul 10, 2004

More folks finding out about Dtrace

It's great that we are all seeing folks realise just what is possible with Dtrace. Bryan emailed me in response to my last blog entry; he is also seeing this, and sent me a few URLs of folks who are writing about it. One in particular stands out.

Daniel Berrange has written a well thought out entry on what he sees Dtrace is, how it compares with some other tools, and expresses a desire that the folks coding Linux take note of the functionality.

I for one hope that Linux community (& vendors supporting it) realize the value of a polished tool like DTrace and take prompt steps to close the gap to Solaris.

He also lists some resources for finding out about Dtrace.

This type of posting is great. As was noted in my previous entry, the detractors appear to be those who have not tried it. Get out and have a good look at it before you start with the "If it's in Solaris, the linux stuff must be better". Sure, Linux has done some great stuff; but it would be arrogant to believe that it is the only Operating Environment that is showing innovation and great advances.

As someone who has seen some of what is yet to come into Solaris Express, there is still a lot of great stuff coming.

Slashdot points to The Register's Dtrace article, and it's generally favourable. I'm moderately impressed by the mostly informed commentary associated with the article.

As one reader notes

What strikes me most about the commentary here is that the raves are coming from people who have actually used it, not from Sun (or not *only* from Sun; some people there seem justifiably proud of their work:-). The snarky comments are exclusively from people who haven't used DTrace ("gee, sounds like ____; what's new about that?"), and are being soundly rebutted by those who have.

Unfortunately, as it was posted anonymously, it started at mod level 0 and no-one has modded it up.

I think that this person has hit the nail on the head. Pretty much all of the disparaging remarks are coming from those who have not tried it. For goodness sakes folks, Solaris Express is a free download for non-commercial use. As noted in the slashdot comments and many other places, real admins are starting to use this for real work.

There was another analogy made, which is close too.

That's kind of like saying perl is an all round text processing tool, then asking why using perl is better than using cut, sort, and tr.

You can do a lot with cut sort and tr. Often they're all you need, but perl lets you solve problems those three tools can't even address.

I also saw the question asked a lot: "Isn't this a lot like the functionality that X provides in Y?". It was comforting to see this question almost invariably answered with something along the lines of 'To see how X compares with Dtrace, have a read of the Usenix paper that was presented in Boston this year'. I can certainly recommend the paper. It will address a lot of the questions about Dtrace that people have. It will also fill in the blanks for those who do not understand just what we are doing with Dtrace.

Friday Jul 09, 2004

Great Dtrace article on The Register

Fired up The Register this morning to find that the first article I came across was an interview with Bryan and Adam titled Sun delivers Unix shocker with DTrace.

There are also some real uses described in the article, from Brendan Gregg and Aeysis' Jenson.

Good interview guys!

About

Alan is a kernel and performance engineer in the Solaris and Network Domain, Technical Support Centre, based in Australia, who tends to have the nasty calls gravitate towards him.
