Monday Aug 01, 2005

Off to OSCON

As folks like Keith and Bryan have already noted, there are plenty of OpenSolaris happenings at OSCON this year. I'll be at the OpenSolaris BOF at 8:30pm Wednesday evening, so stop by if you're around.

I'll also be knocking about the conference/Portland in general Tuesday night through Friday. Leave a message for me at my hotel (503-222-0001) if you'd like to meet up for a beer and talk about OpenSolaris, Solaris, or smf(5). I'll even be happy to help you write an smf(5) manifest for any open source or commercial service; the only condition is that you're willing to publish the manifest for others to use. :)


Wednesday Jul 27, 2005

rpc/bind in single-user mode

A really simple tip today. If you want rpcbind(1M) or any other service started while in single-user mode (boot -s) or netbooted to single-user for repairs, all you need to do is temporarily enable the service. Using temporary enable (svcadm enable -t) gets around the repository not yet being writable -- after all, you weren't looking for your change to take effect outside of your maintenance environment.

You may also find that the service you want requires other services. svcs -l will show you the complete dependency list with current states displayed. Rather than going through and doing manual enables of everything, you can use the -r option for svcadm enable to tell it to recursively enable all services that are required. So, to temporarily enable rpcbind(1M) and all the services it requires while in single-user mode, use:

   # svcadm enable -rt rpc/bind

As promised, pretty simple. Someone commented this isn't well covered in our current documentation set. I've filed a bug, so you should see a similar task in the System Administration Guide in a future release.


Thursday Jul 14, 2005

smf(5) fault/retry models

Someone asked on an internal Sun alias how svc.startd(1M) determines whether there was a fault in the service and when to put the service in maintenance. Unfortunately, this is not well-described in our existing manpages, but I'm working on that. Still, it takes a little while for manpage changes to propagate into the mainline Solaris release so I figured I'd include my description from email here. A more formal version will be coming, and I'll update this post if subsequent questions yield a better description.

It is important to mention first that svc.startd(1M) offers three separate service models: contract, transient, and wait. These are described in the Service Developer Introduction. Here, I'll only touch on the fault/retry models for the two common ones: contract and transient.

Next, I'd like to point out that there's a distinction between method failures and service failures, from svc.startd(1M)'s point of view. So, I'll go over each type of failure and how it is handled.

svc.startd(1M) believes a method has failed if it returns a non-zero exit code. Method failures cause a service to go into the maintenance state immediately if the exit code is $SMF_EXIT_ERR_CONFIG or $SMF_EXIT_ERR_FATAL. All other failures will cause the service to go back to offline. Remember, as smf(5) describes, if a service is offline and its dependencies are satisfied, we try to start the service. But, if 3 method failures happen in a row, or if the service is restarting too quickly, that service will go into maintenance.
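To make the exit-code handling concrete, here's a minimal sketch of a start method for an imaginary "myapp" service. The service name, function, and config path are made up for illustration; the exit code values are the ones defined by smf_method(5), which a real method would pick up by sourcing /lib/svc/share/smf_include.sh rather than hard-coding:

```shell
# Exit code values from smf_method(5); a real method would source
# /lib/svc/share/smf_include.sh instead of defining these itself.
SMF_EXIT_OK=0
SMF_EXIT_ERR_FATAL=95
SMF_EXIT_ERR_CONFIG=96

# Hypothetical start method for an imaginary "myapp" service, written
# as a function so the return value is easy to inspect.
myapp_start() {
	conf=$1
	# Broken or missing configuration isn't worth retrying:
	# SMF_EXIT_ERR_CONFIG sends the service straight to maintenance.
	[ -f "$conf" ] || return $SMF_EXIT_ERR_CONFIG
	# Start the daemon here.  Any other non-zero return just sends
	# the service back to offline -- until it fails 3 times in a row.
	return $SMF_EXIT_OK
}

myapp_start /nonexistent/myapp.conf
rc=$?
echo "start method returned $rc"
```

Since the (hypothetical) configuration file is missing, this method returns $SMF_EXIT_ERR_CONFIG and the service would go straight to maintenance rather than cycling through retries.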

A service failure is determined by a combination of the service model (transient or contract) and the value of the startd/ignore_error property.

A contract type service is considered to have failed if any of the following conditions occur:

  • all processes in the service exit

  • any processes in the service coredump

  • a process outside the service sends a service process a fatal signal (e.g. an admin pkills a service process)

The latter two of these conditions may be ignored by the service by specifying core and/or signal in startd/ignore_error. All of these service failures are detected by contract events. I've talked earlier about contracts and fault isolation in smf(5) too.
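In manifest terms, a service opts out of those two failure conditions with the startd/ignore_error property. A sketch of the fragment (the property group syntax is per smf(5); the service it would live in is left to your imagination):

```xml
<!-- Inside the <service> element of a hypothetical manifest: ignore
     core dumps and externally-delivered fatal signals -->
<property_group name='startd' type='framework'>
	<propval name='ignore_error' type='astring' value='core,signal'/>
</property_group>
```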

Defining a service as transient means that svc.startd(1M) doesn't track processes for that service, so none of the service errors above matter. Thus, a transient service only goes to maintenance if a method failure occurs.
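The transient model is declared the same way, via the startd/duration property; a hypothetical fragment:

```xml
<!-- Inside the <service> element: svc.startd(1M) won't track
     processes for this service -->
<property_group name='startd' type='framework'>
	<propval name='duration' type='astring' value='transient'/>
</property_group>
```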


Friday Jun 24, 2005

assembling services for boot with smf(5)

In order to deal with some pretty stringent backwards compatibility requirements for install, smf(5) attempts to self-assemble services during boot. I won't go into the vagaries of why it is so helpful to avoid having to write the smf(5) repository during install/upgrade now, and instead stick to the tactical details of what happens in Solaris/OpenSolaris. Most of this focuses on the behaviour of smf(5) during the first boot.

Some background details

First, a refresher (or an introduction, depending on your perspective). The delivery mechanism for services in Solaris smf is the service manifest. Manifests are a bunch of XML goo which describe metadata and optionally data about a service: how do I start it, what's the service model, is it an inetd service, etc. If you're interested in how to write a manifest, see the Service Developer Introduction. However, the system doesn't really know anything about a service described by a manifest until it is imported with svccfg import. At import time, the data is copied from the manifest and put into the smf(5) repository. (I wrote a bit about the design choices we made when creating the repository in another post.)

Once a service is in the repository, all of the smf(5) framework kicks in for that service -- displaying it with svcs(1), starting it during boot, restarting on failure, etc. But, until the service has been imported into the repository, smf(5) knows nothing about it. The persistent repository information is stored as a data file in /etc/svc/repository.db.

The seed repository

As I already mentioned, smf(5) only cares about the repository, so we need to generate enough of a repository to get the system to the point where it can import more manifests. What's required for that? Essentially, the / and /usr filesystems must be mounted and writable. We need the utilities from /usr and obviously we can't write to /etc/svc/repository.db without a writable root filesystem. We call this repository the seed.

When you build OpenSolaris (and when we build Solaris internally) in addition to compiling all of the smf(5) source code, we also assemble the seed repository. Essentially, what goes in the seed is what's required to start svc:/system/manifest-import:default. You can see how we assemble this during the build in usr/src/cmd/svc/seed/Makefile. We build a seed for both the global zone and the non-global zones, since the non-global zones require fewer services to get to manifest-import.

You can look at what's in the seed repository for your system. For something resembling current OpenSolaris bits, this looks like:

    $ cp /lib/svc/seed/global.db /tmp/global.db
    $ svccfg     
    svc:> repository /tmp/global.db
    svc:> list

If you look carefully at that Makefile, though, you'll see we don't place these in the normal repository location; we stash them at /lib/svc/seed/[non]global.db. This ensures that when folks use bfu, we don't overwrite the local service configuration with the very limited seed repository! It is important to note that no /etc/svc/repository.db gets generated during the OpenSolaris build process.

Placing the seed

There's logic in both bfu and the i.manifest package class-action script which place the appropriate seed repository in /etc/svc/repository.db if the file doesn't already exist. Thus, if you're using bfu or the Solaris pkgadd(1M), we copy the seed to the correct location for you. The Solaris install CD/DVD/netinstall image also creates its own repository.db, based on the seed we generate during the ON build.

Finally, the boot discussion

OK, so now we have a system with a very very limited seed set of services as its repository. Those services are really limited, and don't even include essential things like ssh(1). But, the system is able to boot to the point of running the manifest-import service. We'll assume it does so successfully. Then the manifest-import start method kicks in. First we find all the manifests in /var/svc/manifest:

    nonsite_dirs=`/usr/bin/find /var/svc/manifest/* -name site -prune -o -type d \
	-print -prune`

    nonsite_manifests=`/lib/svc/bin/mfstscan $nonsite_dirs`
    site_manifests=`/lib/svc/bin/mfstscan /var/svc/manifest/site`

    manifests="$nonsite_manifests $site_manifests"

mfstscan is a private command which finds any manifests in the subdirectories provided and checks to see if they've changed since last import. Then we import any manifests which have changed, keeping a running count so the user knows something is happening:

    set -- $manifests
    backup=`echo "$#/$#" | sed 's/.//g'`
    fwidth=`echo "$#\c" | wc -c`

    echo "Loading smf(5) service descriptions: \c" > /dev/msglog

    i=1; n=$#
    while [ $# -gt 0 ]; do
    	printf "%${fwidth}s/%${fwidth}s" $i $n > /dev/msglog
    	svccfg_import $1
    	i=`expr $i + 1`
    	shift
    	echo "$backup\c" > /dev/msglog
    done

But how do services get enabled?

Some of you may have noticed, though, that OpenSolaris has most of the services specified as disabled in their service manifests. You can see this, for example, with the syslog manifest. The enabled='false' means that the service will be disabled after it is imported. But, on your running Solaris system, it is enabled.

What's happening after the manifest imports in the manifest-import start method is that profiles are being applied. As I described here, profiles are just a way to enable or disable a bunch of services. Solaris/OpenSolaris tries to deliver as many services as possible as disabled in their manifest, so that an administrator can choose carefully which services they want to enable, and when we add new ones, they won't be automatically enabled unless absolutely necessary to get to manifest-import. We look for the following profiles and apply them in order:

  • /var/svc/profile/generic.xml
  • /var/svc/profile/platform.xml
  • /var/svc/profile/site.xml

generic.xml is created as a link to generic_open.xml both during the build process and by the Solaris packaging tools. The platform.xml profile is created as a link to the appropriate file during boot by the manifest-import start method:

    if [ ! -f /var/svc/profile/platform.xml ]; then
    	this_karch=`uname -m`
    	this_plat=`uname -i`

    	if [ -f /var/svc/profile/platform_$this_plat.xml ]; then
    		platform_profile=platform_$this_plat.xml
    	elif [ -f /var/svc/profile/platform_$this_karch.xml ]; then
    		platform_profile=platform_$this_karch.xml
    	else
    		platform_profile=platform_none.xml
    	fi

    	ln -s $platform_profile /var/svc/profile/platform.xml
    fi

Finally, site.xml is left entirely to the control of the administrator. If you're interested in re-visiting this boot order, take a look at smf_bootstrap(5). These profiles enable all of the services that were just added by manifest-import. If you want to use one of our limited profiles like generic_limited_net.xml, you can do the following before reboot (i.e. during jumpstart):

    $ ln -sf /var/svc/profile/generic_limited_net.xml /var/svc/profile/generic.xml

Or, you can just place your customizations into /var/svc/profile/site.xml, and they'll be applied after the Solaris/OpenSolaris-delivered profiles are applied.


Monday Jun 20, 2005

Where did all my blogs go?

I'm away from my mail and my blog for a few days, but realized that yesterday may have brought an unpleasant surprise for folks. The main OpenSolaris blogs page no longer contains the entire list of entries that appeared on Opening Day! Not to worry -- the OpenSolaris webmasters keep an archived list of posts. You can keep reading through the entire list of blogs from Opening Day, or start with one of the higher-level lists Bryan and I have been compiling. I'll return later this week and finish up the tour of smf(5) code I've been planning.


Wednesday Jun 15, 2005

OpenSolaris blogs: networking, drivers, security, and standards

Looking through the huge number of blogs written yesterday about deep technical topics in the OpenSolaris source code, I'm again flattened by the depth of expertise that's represented in these entries. There's a lot to navigate, and Bryan and I will be trying to give some lists by subject area. I hope others will start their own as they find content that's useful to them. Even if you don't have a blog, social bookmarking services allow you to create tagged lists of bookmarks.


Devices and Drivers:

  • Artem talks about 1394 (Firewire) and SCSI.
  • Sophia gives some details about Solaris' AGPgart.
  • Drivers are mentioned peripherally, but it's just a great entry so I won't leave out Jim's description of 4028137.
  • ATA and SATA from Mary.
  • Haik handles cfgadm, which isn't a driver per se, but is close enough to fit here.
  • Jerry handles failing faster for SVM (sounds strange, but is incredibly useful -- stopping retries early when hardware is truly dead decreases recovery times).
  • Dan talks about the Zones console and console driver.
  • SVM and RAID 0+1 and RAID 1+0.
  • New boot, described by Jan changes device interaction with system startup.
  • Mark gets pretty deep into DMA.
  • Dilpreet covers hardening a leaf device driver
  • ACPI is getting plenty of coverage on David's blog.
  • Fritz gives an overview of the USB architecture.
  • Alok mentions SVM diskset import, an its-about-time feature that appeared in Solaris 10.
  • The devid cache in Solaris 10 is covered by Jerry.

By the way, I threw in the towel early on collecting SVM entries for this section. There's a bunch of really great content that will deserve its own list at some point!



And, by the way, most of the entries above just wouldn't have been possible without the help of Chandan's incredible source browser for OpenSolaris.

If you're interested in seeing all the archived blog links, check out our blogs page. I've tried to be really inclusive here and pick up entries that will suit many experience levels. The unifying factor is that every entry is by an expert in the area. If you have an entry I've missed in one of the topic areas, leave it on the comments for everyone to see.


Tuesday Jun 14, 2005

(Open)Solaris: getting started

OpenSolaris has arrived. It is an amazing thing to be able to share with the world what we've all been pouring our lives into for so long. I realized that a great place to start would be my first putback to Solaris as part of the kernel group. I'd had more than a passing familiarity with Solaris as part of the team that released Sun Cluster 3.0. As the cluster software was intricately tied to Solaris, I had a number of opportunities to do minor modifications to Solaris to make it interoperate better with the clustering software. But, by 2001 I was in the big leagues -- I had joined the larger team primarily responsible for the code released today in OpenSolaris.

Almost everyone new to Solaris starts by fixing a few bugs, and I expect that will be common for new people contributing to OpenSolaris too. Fixing a small-ish bug is the best way to figure out what's really involved in putting code back into (Open)Solaris. My first bug was 4314534: NFS cannot be controlled by SRM. Essentially, our resource management tools work on LWPs, not kernel threads. NFS ran as a bunch of kernel threads, so administrators were unable to have NFS as a managed resource; it often took priority over other applications on the system. A senior engineer had already suggested an approach:

     A better way to solve this would be to have nfsd (or lockd) create the
     lwps.  nfsd can park a thread in the kernel (in some nfssys call) that
     blocks until additional server threads are needed.  It can then return to
     user level, call thread_create (with THR_BOUND) for however many lwps are
     needed, and park itself again.  Since this will only happen when growing the
     NFS server thread pool, the performance impact should be negligible.  The
     newly created lwps will similarly make an nfssys call to park themselves in
     the kernel waiting for work to do.  The threads parked in the kernel should
     still be interruptible so that signals and /proc control works correctly.
     If the server pool needs to shrink, an appropriate number of lwps simply
     return to user level and exit.

The userland code was pretty simple, and is shared between nfsd and lockd in thrpool.c. svcwait() calls svcblock() which hangs around in the kernel (by using _nfssys(SVCPOOL_WAIT, &id)) until a thread is needed. Then it starts up a new thread for svcstart():

/*
 * Thread to call into the kernel and do work on behalf of NFS.
 */
static void *
svcstart(void *arg)
{
	int id = (int)arg;
	int err;

	while ((err = _nfssys(SVCPOOL_RUN, &id)) != 0) {
		/*
		 * Interrupted by a signal while in the kernel.  If
		 * this process is still alive, try again.
		 */
		if (err == EINTR)
			continue;
		break;
	}

	/*
	 * If we weren't interrupted by a signal, but did
	 * return from the kernel, this thread's work is done,
	 * and it should exit.
	 */
	return (NULL);
}

SVCPOOL_WAIT and SVCPOOL_RUN were new sub-options to the _nfssys() system call. They needed to be defined in nfssys.h then added in nfs_sys.c.

I also had to make the additions and modifications to usr/src/uts/common/rpc/svc.c (unfortunately still encumbered) to signal the userland thread instead of directly creating new kernel thread workers. The design is described by the comment for p_signal_create_thread and friends in rpc/svc.h:

	/*
	 * Userspace thread creator variables.
	 * Thread creation is actually done in userland, via a thread
	 * that is parked in the kernel.  When that thread is signaled,
	 * it returns back down to the daemon from whence it came and
	 * does the lwp create.
	 *
	 * A parallel "creator" thread runs in the kernel.  That is the
	 * thread that will signal for the user thread to return to
	 * userland and do its work.
	 *
	 * Since the thread doesn't always exist (there could be a race
	 * if two threads are created in rapid succession), we set
	 * p_signal_create_thread to FALSE when we're ready to accept work.
	 *
	 * p_user_exit is set to true when the service pool is about
	 * to close.  This is done so that the user creation thread
	 * can be informed and cleanup any userland state.
	 */

Of course, much to my chagrin, the change had unforeseen implications, which caused bug 4528299. Fixing that was fun and required changes to lwp.c. I'll talk about that in a subsequent post.

None of this is particularly sexy or subtle, but hopefully now you see the type of place we all start with Solaris (and now OpenSolaris). A disclaimer is also required: I'm by no means an NFS expert -- those who were simply allowed me into their code to accomplish a specific task. Check out blogs by actual NFS experts like Spencer Shepler and David Robinson for more detailed NFS information.


Monday Jun 06, 2005

Solaris and drinks in NYC

Seems a few folks are interested, so I'm posting an update here as promised. In addition to catching us at the Developer Days I mentioned, we will be meeting up for drinks at the Roosevelt Hotel's main lounge, Tuesday 6/7/2005 at 8:00pm. Look for us near the bar, I've currently got two-tone brown and pink hair.

I know the hotel bar is lame, so be warned we may switch locations around 8:45pm. Call me at 650-274-8576 if you can't find us or you're running late. I'll likely pick up the first round, but don't know if I can fund much after that. Hope to see a few of you there!

Friday Jun 03, 2005

New York bound

A few kernel folks will be in New York and New Jersey for the "Solaris 10 Developer Series: Part IV". There are experts lined up from the Fault Manager team, Zones, networking, and of course smf(5). You can come see the entire circus either on June 8 in New York, NY, or on June 9 in Somerset, NJ from 8:30am to 5:00pm. We promise a deeply technical event.

Registration is required: log in with EventID: techne and Password: sun. More details about location and agenda are there too. Stop by and introduce yourself if you come!

A few of us are also thinking about meeting up with some local folk in New York to have a drink and talk about Solaris and OpenSolaris. If you're interested in hanging out with a few Solaris kernel engineers in the New York area around 8 or 9pm Tuesday June 7, send me a mail at Liane dot Praza at sun dot com. If it comes together, I'll post an update here with exact location and time.

Update: The developer days and customers we talked with over our week there were fun and very helpful with feedback and comments. Thanks to those of you who came. I've posted my slides, which now have more diagrams, as well as some more developer content. Dave promises he'll post his contracts slides too, so keep your eyes out for those.

Friday May 13, 2005

smf(5) and init.d script compatibility

Lots of large (and some not so large) organizations have run-books for systems operation that must span multiple OS versions. smf(5) can throw a spanner in the works, as the following procedure to stop, then restart a service no longer works in Solaris 10 (I'll use sendmail as an example):

  1. pkill sendmail
  2. Do whatever work was required
  3. /etc/init.d/sendmail start

If a user was running a program called "sendmail-monitor" or some such, their program would be killed off. That's not so good. This also doesn't work as expected on Solaris 10 because the pkill sendmail step only causes smf(5) to restart sendmail. And, if you're running the command in the global zone on a system that has some local zones installed, it'll kill sendmail in all of the local zones! A better run-book procedure would be:

  1. /etc/init.d/sendmail stop
  2. Do whatever work was required
  3. /etc/init.d/sendmail start

This has the benefit of working on all versions of Solaris, including Solaris 10. In Solaris 10, we've retained commonly-used init.d scripts like sendmail, nfs.server, and nscd. However, they've been re-implemented to use the appropriate smf(5) commands under the covers. Thus, /etc/init.d/nscd on a Solaris 10 system looks like:

# Copyright 2004 Sun Microsystems, Inc.  All rights reserved.
# Use is subject to license terms.
#ident  "@(#)nscd       1.1     04/12/20 SMI"

# This service is managed by smf(5).  Thus, this script provides
# compatibility with previously documented init.d script behaviour.


FMRI=svc:/system/name-service-cache:default

case "$1" in
'start')
        [ -f /etc/nscd.conf ] && [ -f /usr/sbin/nscd ] || exit 0
        /usr/sbin/svcadm enable -t $FMRI
        ;;

'stop')
        [ -f /usr/sbin/nscd ] && /usr/sbin/svcadm disable -t $FMRI
        ;;

*)
        echo "Usage: $0 { start | stop }"
        exit 1
        ;;
esac
We use the -t flag to svcadm to indicate that this change is temporary; that is, disable the service until either the operating system restarts, or the administrator enables the service again. Solaris itself never uses these scripts, but they're really helpful to maintain compatibility for administrators' well-trained finger macros. If we've missed important ones, let me know!

If you're converting your home-grown applications to be managed by smf(5), you can use the template above to create a similar init.d script for your application. It isn't required, but if admins are accustomed to uttering /etc/init.d/foo [start|stop], doing this will slightly reduce the amount of swearing when trying to reconfigure the application at 4AM.
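As a sketch, such a wrapper for a hypothetical "myapp" service might look like this (the script name and the svc:/application/myapp:default FMRI are made up for illustration; substitute your own service's FMRI):

```shell
#!/sbin/sh
# Hypothetical /etc/init.d/myapp compatibility wrapper for a service
# already managed by smf(5).  The FMRI below is illustrative only.

FMRI=svc:/application/myapp:default

case "$1" in
'start')
        /usr/sbin/svcadm enable -t $FMRI
        ;;

'stop')
        /usr/sbin/svcadm disable -t $FMRI
        ;;

*)
        echo "Usage: $0 { start | stop }"
        exit 1
        ;;
esac
```

As with the nscd script, the -t flag keeps the enable/disable temporary, so the service's persistent configuration in the repository is untouched.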

Friday May 06, 2005

smf(5) profiles: a preamble

I'll talk more about the details of smf(5) profiles later, but I've been motivated by an email question to post a profile use-case before I get around to the entire description. You can read about profiles in smf(5) and the somewhat inscrutable smf_bootstrap(5).

Profiles are really just a way to configure a bunch of services as enabled or disabled. They're generally located in /var/svc/profile (but that's not a requirement). You can use svccfg apply to apply a profile to a running system. See Stephen's post for a specific example.

Profiles are particularly useful if you want to enable or disable a bunch of services during jumpstart. All you need to do is create a site.xml file that reflects the services you want enabled and disabled, and drop it into /a/var/svc/profile/site.xml during your finish script. It will be imported automatically on the first reboot. For example, if I wanted to disable sendmail, I'd create a site.xml that looks like this:

<?xml version='1.0'?>
<!DOCTYPE service_bundle SYSTEM '/usr/share/lib/xml/dtd/service_bundle.dtd.1'>
<service_bundle type='profile' name='default'>
     <service name='network/smtp' version='1' type='service'>
          <instance name='sendmail' enabled='false'/>
     </service>
</service_bundle>
You can easily add more services to this example by just replicating the three "service" lines, and changing them to reflect the specific service name and instance name you want to enable/disable.
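For instance, a profile that disables sendmail and also disables the finger inetd service might look like the following (I'm using network/finger here just as a plausible second service; check svcs(1) output on your system for the exact service and instance names you want):

```xml
<?xml version='1.0'?>
<!DOCTYPE service_bundle SYSTEM '/usr/share/lib/xml/dtd/service_bundle.dtd.1'>
<service_bundle type='profile' name='default'>
     <service name='network/smtp' version='1' type='service'>
          <instance name='sendmail' enabled='false'/>
     </service>
     <service name='network/finger' version='1' type='service'>
          <instance name='default' enabled='false'/>
     </service>
</service_bundle>
```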

Wednesday Apr 27, 2005

OpenSolaris User Group wrapup

It's hard to come to work unhappy after attending an external-to-Sun Solaris event. Despite an exhausting day which started ridiculously early to receive the Sun Chairman's Award for Innovation with the rest of a well-deserving team (see Stephen's post for pictures), I had a great time talking to and with folks interested in Solaris.

The (basic) presentation I used is posted here. I made a few minor changes for last night's audience, but the vast majority of the content is identical. Jan didn't have canned slides -- the demo spoke for itself. You'll be able to see the work he and his team did re-architecting x86/x64 Solaris boot for yourself in an upcoming Solaris Express release.

Ben Rockwood was there with camera and video equipment. He's going to try to post a video for those who couldn't attend. I'm cringing at the idea; watching yourself after a presentation can be terrifying. Personal mortification at a permanent record of my mistakes aside, I hope this is helpful for some people. :)

I look forward to meeting more of you at the next meeting. Congratulations to Alan for kicking off such a great group. Hopefully you'll also be able to see a variety of Solaris engineers at the User Groups starting soon around the world.

Tuesday Apr 26, 2005

OpenSolaris User Group tonight

I've blogged about it already, but the first (SF Bay Area) OpenSolaris User Group is tonight at 7:30pm. Both Alan and Claire have given more details, including maps. It isn't just Alan, Jan, and me either -- there are a number of Sun/Solaris/OpenSolaris luminaries who should be in attendance.

Thursday Apr 14, 2005

Sun, fun, and Usenix

Returned from Usenix today and I'm still exhilarated and exhausted -- wish I could have stayed longer. I'm a complete laggard compared to everyone else who has been so diligently posting after crawling back to their hotel rooms/homes after closing out the BoF and the bar at 1 or 2am. As I mentioned already, we did two BoF sessions: Solaris 10 and Beyond, and Developing for Solaris and OpenSolaris. Lots of folks (80+) and lots of great questions on Tuesday night -- many stayed until the bitter end to chat with us about Solaris and other interesting topics.

The developer BoF I ran on Wednesday was, as expected, a bit smaller. There was lots of competition from interesting sessions and the topic was less general: we had somewhere between 40 and 60 people (which was still a great crowd). Sun folks included Dan, Bart, Alan, Tim Marsland, David Bustos, John, and Matt. Spencer even stopped by despite an early NFSv4 session the next morning. Roy Fielding was kind enough to indulge my request to talk a bit about his view of OpenSolaris from the perspective of his new role on the community advisory board.

As the slideset I used for the developer BoF was completely new, I want to take another look over it before posting it here -- didn't even have the time to spellcheck before the BoF! I hope everyone went home with at least one new tidbit about developing on [Open]Solaris they didn't know before, even if I wasn't as funny as Dan the night before. Apologies that my presentation didn't have more panache. Here's a quick [pre|re]view. In addition to talking about the Solaris lifestyle and specific features that I think make it a great platform for software development, we also talked about what's coming with OpenSolaris. I'm hoping to have time to add notes about some of the detailed examples we went through during the BoF too, but that may take a while longer. It is more fun to do them live, unscripted -- mistakes and all. :) Bart has promised to post his example of inserting stable DTrace probes into an application on his blog soon too.

Thanks to all of you who were able to come. We had a bunch of fun and learned quite a bit from the comments, questions, and subsequent discussions. If you were there, do you have any comments on what we could do better next time aside from having more free beer?

Saturday Apr 09, 2005

Usenix Solaris BOFs

I'm now re-posting this with some more solid information... A number of Solaris folks will be around at Usenix'05. Some of us will congregate at the two Solaris BOFs. Dan will be leading Solaris 10: Where We Are and Where We Are Headed 8-10pm Tuesday. I'll be leading the Developing for Solaris and OpenSolaris BOF 8:30pm-10:30pm Wednesday. While you can be sure I won't be able to resist mentioning smf(5) once or twice, I promise plenty of other good stuff too. If you're attending Usenix'05, stop by and say hello!

