Friday Aug 21, 2009

verexec(1): A simple execute-most-recent-version utility

We've published three of the approximately-six-monthly distribution releases now, and we're starting to refine pkg(5)'s capabilities to handle automatically an ever larger set of configuration scenarios. (Read: "edge cases".) Bart implemented actuators in 2008.11, and that let us handle any resource that could be configured or reconfigured via an smf(5) service instance. In future releases, we'll build out the set of instances that handle standard configuration.

In this post, I'd like to talk about a tool—verexec(1)—to assist with executable version selection. One of the common questions from maintainers of the various language platform packages is "how can I make sure that /usr/bin/[language] points to the latest version of the package?". Attempts to satisfy this goal have led to some of the largest preremove and postinstall script pairs in historical Solaris.

Largest, and most difficult to get completely correct.

We can make this problem much simpler—and scripting free—if we accept the minimal performance penalty of a runtime-based decision.

Basic function

verexec(1) works in a manner similar to isaexec(1): dispatch is based on the target command being hardlinked to verexec(1), which, based on the name of the target command and additional information, in turn invokes the correct version of the target command.

Like most pkg(5)-driven modifications, we want the identification of the newest version to be based on the presence or absence of a filesystem object delivered into a predetermined location. For verexec(1), one delivers a symbolic link in /etc/verexec.d/[target], pointing to the appropriate executable. For example, for Perl 5, /usr/bin/perl is delivered as a hardlink to /usr/bin/verexec, in SUNWpl5. [TK new name?] The Perl 5.8.4 package would deliver the symbolic link /etc/verexec.d/perl/5.8.4, pointing to /usr/perl5/5.8.4/bin/perl. (The version ordering matches that of pkg(5).)

When /usr/bin/perl is invoked via exec(2), verexec(1) opens—subject to various override mechanisms—/etc/verexec.d/perl and searches through the links, looking for the maximal version. That link is then read, and the program it points to is executed with the arguments passed to the original invocation.

This approach directly handles the delivery of multiple versions via separate packages. If we were delivering two versions, then removal or installation of either link would change the default version. With more than two versions, the default version only changes if the currently highest version is changed. If we deliver no versions, then verexec(1) displays an informative error message.
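The selection step can be sketched in shell. This is a rough illustration, not the actual verexec(1) implementation—the temporary directory stands in for /etc/verexec.d, and GNU sort's -V option stands in for pkg(5)'s version ordering:

```shell
# Build a stand-in for /etc/verexec.d/perl, with version-named
# symbolic links pointing at the candidate interpreters.
d=$(mktemp -d)
mkdir -p "$d/perl"
ln -s /usr/perl5/5.8.4/bin/perl "$d/perl/5.8.4"
ln -s /usr/perl5/5.10.0/bin/perl "$d/perl/5.10.0"

# Pick the maximal version among the links (sort -V approximates
# the pkg(5) version ordering for dotted version strings)...
best=$(ls "$d/perl" | sort -V | tail -1)
# ...then read that link to find the program to exec with the
# original arguments.
target=$(readlink "$d/perl/$best")
echo "$best -> $target"
rm -rf "$d"
```

Note that installing or removing one of the links changes the answer only when it is (or was) the highest version, which is exactly the behaviour described above.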

Refinements and commentary

Overrides. Since, at some sites, verexec(1) may be useful to provide access to a variety of local command versions, we provide a number of environment variable-based override mechanisms. First, we allow identification of an alternative symbolic link directory to /etc/verexec.d via the VEREXEC_ALTDIR variable. Second, we allow per-command interception via the VEREXEC_[command] pattern. Thus, in our Perl example, I could make /usr/bin/perl point to my local Perl installation via:

export VEREXEC_perl=/home/sch/bin/$(uname -p)/perl

instead of the system installation.
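The override precedence might be rendered as follows. This is a hypothetical sketch of the lookup order only—the choose() function and its echoed strings are invented for illustration, not taken from the prototype:

```shell
# choose() echoes where verexec would dispatch for a command:
# VEREXEC_<command> wins, then VEREXEC_ALTDIR, then /etc/verexec.d.
choose() {
    cmd=$1
    # Indirect lookup of the per-command override variable.
    override=$(eval echo "\${VEREXEC_$cmd}")
    if [ -n "$override" ]; then
        echo "exec $override"
    elif [ -n "$VEREXEC_ALTDIR" ]; then
        echo "search $VEREXEC_ALTDIR/$cmd"
    else
        echo "search /etc/verexec.d/$cmd"
    fi
}

VEREXEC_perl=/home/sch/bin/i386/perl
choose perl    # exec /home/sch/bin/i386/perl
unset VEREXEC_perl
choose perl    # search /etc/verexec.d/perl
```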

Native ISA selection. We could combine the function of isaexec(1) with verexec(1), by allowing links to have the pattern [version]-[bits], as returned by isainfo -n.

Search impact. Using a technique like this impacts the search functionality, in that a search request for /usr/bin/perl in our example will actually return the package delivering the verexec(1) link, and not an actual Perl interpreter. We can address this somewhat by ensuring that (a) the actual Perl interpreter is delivered in a package with "perl" as a basename, and (b) the link-delivering package is named and documented as a supporting package.

Delivery, while delaying change in default. Since this mechanism is so much simpler than the scripting previously required, it means that we should reexamine the transition to a new version of a significant language platform. In the past, because correct packaging was a significant amount of the work involved in integrating a new version, we've generally introduced a new version and made that version the default in a single integration. Since we don't want to introduce unnecessary risk, we have generally lagged integrating the latest version of these interpreters. With verexec(1) and pkg(5) in place, we might pursue an alternative, of introducing the new version early, and then changing the default in a subsequent, small integration later in the release.

Performance. If we find that the overhead of the additional exec(2)—one for verexec(1) and one for the link target—is too high, there are a number of performance optimizations. One approach would be to use verexec(1) to send a message to a privileged daemon, which would loopback mount the appropriately versioned executable at the verexec(1) location. Any loopback mount-based technique makes a trade-off: more efficient execution for more complex handling of updates and deletions, which would require the daemon (or privileged command or service) to inspect the existing loopback mounts and evaluate them for their continuing correctness.

I've a refinement-lacking prototype of verexec(1) available for review; we're discussing it further at


Tuesday Dec 09, 2008

2008.11: More ways to get it, and more ways to share it

As I did for 2008.05, I'll collect links to mirror sites here. These will also get links on the various download pages out there.

Construction of 2008.11 was faster, more efficient, and generally more predictable than that of 2008.05. Image packaging, snap upgrade, and the distribution constructor—and the notorious distro importer—saw many fixes and features that let us rapidly turn out corrected images and packages. As problems were found during testing, we could spin up new test variants in hours or less. (Thus the existence of RC1.5, which we were able to inject into the schedule because of this newfound speed. Dave's written more about the constructor.) You can get the ISO image, suitable for burning to a 700 MiB CD or for immediate use in a virtual environment, directly from the following locations:

As Glynn notes, we've set up anonymous rsync to make acquiring and mirroring the ISO images easier. In addition, I've posted draft instructions on how to be a content mirror. We've tested the pkg(5) content mirroring, as well as the rsync service, but would be happy to get feedback.

These links either take you to the site's page or directly to the osol-0811.iso CD image, which contains 11 "primary languages". It installs quite a bit faster, particularly on systems with slower CPUs. There is also an LZMA-compressed image, which has localization support for 42 languages, including those primary ones; it's available from several of the mirrors and as a torrent. (Consult the language lists for specifics.)


Sunday Nov 02, 2008

Recovering a dropped data pool

As you update your pkg(5)-based system to build 100a, you might see a message on the console during startup. It will look something like

WARNING: pool 'zeta1' could not be loaded as it was last accessed by
another system (host: rosseau hostid: 0x581d45e). See:

and in /var/adm/messages, you'll find it again:

Oct 27 14:55:21 rosseau zfs: [ID 427000 kern.warning] WARNING: pool
'zeta1' could not be loaded as it was last accessed by another system
(host: rosseau hostid: 0x581d45e). See:

zeta1 is a data pool on my system—the boot pool, rpool, upgraded normally—and recent changes in the hostid implementation mean that we have to import (or reimport) any non-boot pools on the system. That is,

$ pfexec zpool import -f zeta1

and our zeta1-based filesystems are again online.


Sunday May 04, 2008

CommunityOne: here and there

Sunday I spent at Moscone, teaching laptops and projectors to get along. Saturday, Nathaniel, Benjamin, and I dropped in for lunch at the Developer Summit. I managed to talk with a few people before the boys found large whacking sticks, and then it seemed best to drive to Pescadero for some beach time.

I'll be busy for the morning of CommunityOne. For Rich's keynote, I'll be running some of the less violent demos. Almost immediately after that, lead modernizer David Comay and I will be going into more detail in our session

S297399 Getting Started with OpenSolaris™; New Features & Building OpenSolaris™ Packages; 11:00 a.m., Moscone South/Esplanade 300.
We've got some additional demonstrations, including a worked example of package publication using some pre-release tools, which could be exciting.

I'm hoping to have some time for questions during the session but, if not, I'll be circulating during the afternoon and happy to talk to people about 2008.05, image packaging, or whatever. And, of course, there will be time to talk at the party after the day's sessions. In any case, I should be easy to spot: I have a new tie.

I'm told you can still register on-site—it's not too late.


Thursday Apr 17, 2008

OpenSolaris: Bug dependencies and release management

Right now, if you're subscribed to any of the Installation and Packaging community group or project lists, you'll see a lot of commit notifications as the various teams attempt to fix the bugs noted since the second Developer Preview release. We've been using the trial Bugzilla instance—which, it appears, is becoming the default defect tracker—and trying out various features.

For tracking release completeness, we're using "blocker bugs" or "tracker bugs", which are synthetic bug entries that we mark various important bugs ("stoppers") as blocking. That means that we end up creating a little dependency graph that shows what unfixed bugs are stopping us from reaching some initial set of release criteria. We have two tracker bugs that we're monitoring to make sure we've got a handle on things.

Bugzilla has two nice summaries for showing this information, in addition to the default bug status page. I'll use 571 as the example tracker bug, since we've made 1190 block it—which leads to a more interesting graph. The tree view is a useful and succinct representation, where indentation shows dependencies. The graph view is a bit unwieldy for this bug, but might be useful if the tree view became too long.

A useful technique if you're trying to bring a release together.


Thursday Feb 21, 2008

pkg(5): Reverse proxying your depot with Apache HTTPD

As part of the changes to get Developer Preview 2 ready, we decided to rejigger the HTTP handling so that we could have more options as more people attempt to use the early versions of image packaging. Previously, we ran pkg.depotd directly on port 80, in its read-only mode; now we use Apache HTTPD to listen on port 80, and use mod_proxy to proxy those incoming requests to a pkg.depotd instance listening on a separate port. There are a couple of different approaches.

Proxying and rewriting is one of those endlessly fun activities that somehow actually ends up being productive. Last time, proxying fun led Steve and me to fiddle around until we ended up with proxy and rewrite patterns to enable the country portals.

If you want to share the top-level component of your URL space, you'll need to watch pkg(5) developments, as you have to map the list of operations one-by-one—and I know there are some new operations forthcoming. That would involve adding something like the following to a VirtualHost directive in your Apache configuration.

ProxyRequests On

Redirect /index.html

ProxyPass /abandon
ProxyPass /add
ProxyPass /catalog
ProxyPass /close
ProxyPass /feed
ProxyPass /file
ProxyPass /filelist
ProxyPass /manifest
ProxyPass /open
ProxyPass /search
ProxyPass /versions

ProxyPass /css
ProxyPass /logo
ProxyPass /icon

ProxyPass /status

Configuring your server in this fashion allows you to mix an image packaging server in with your other site content. You can easily deliver static content alongside your depot, for example.
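The ProxyPass targets in the listing above were lost in transit; a completed version of the per-operation mapping might look like the following sketch, in which pkg.example.com, depot.example.com, and port 10000 are placeholders standing in for your own server names and depot port:

```apache
<VirtualHost *:80>
    ServerName pkg.example.com

    # Map each depot operation to the pkg.depotd instance.
    ProxyPass /catalog  http://depot.example.com:10000/catalog
    ProxyPass /manifest http://depot.example.com:10000/manifest
    ProxyPass /file     http://depot.example.com:10000/file
    ProxyPass /filelist http://depot.example.com:10000/filelist
    ProxyPass /search   http://depot.example.com:10000/search
    ProxyPass /versions http://depot.example.com:10000/versions
    # ...and so on for the remaining operations listed above.
</VirtualHost>
```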

If you don't mind pushing your package repository down one level in your URL space, then the above simplifies to

ProxyRequests On

ProxyPass /pkg/

(which should be a hint on how to create a repository farm under a single URL). To use the latter, you would use pkg(1)'s image-create subcommand

$ pkg image-create -F -a /path/to/image

to connect your image to your reverse-proxied packaging depot.
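(The image-create line above lost its authority argument in transit. A completed invocation might look like the following, where the authority name and depot URL are placeholders for your own values:

```
$ pkg image-create -F -a example.com=http://pkg.example.com/ /path/to/image
```

with the authority given as name=URL.)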

In the two examples above, you should of course replace the machine names and port numbers, like 10000, with values appropriate to your own installation.

Happy proxied package serving!

Feel free to share your alternative configurations, or approaches with other HTTP servers, here.


Monday Nov 05, 2007

pkg(5): Fueling the next steps

Most Fridays, I spend the two hours before lunch assisting at our co-operative daycare. The hour and a half before are pretty good thinking hours, most of which lately have been spent on packaging. As the year passes, and we move into fall here in Northern California, the thinking's been best assisted by having a brief, and warming, snack.

Still life with packaging

The notes in the photo are about some of the points raised about the use of hashing—but then I buckled down and wrote an outline for our ARC inception materials, which will probably take more than one or two Friday sessions.

Location: Café Borrone, Menlo Park, California.


Saturday Oct 13, 2007

pkg(5): Talking in the redwoods, talking on the beach, ...

Danek and I are at the 2007 OpenSolaris Developer Summit at UC Santa Cruz, talking about and, I hope, later demonstrating the image packaging prototype so far. I gave a brief overview of the project's goals and status, and mentioned some of the unmentionables we've encountered—some particularly undisciplined configuration files, some apparently important but encumbered drivers, and so on.

View the slides


Thursday Sep 27, 2007

pkg(5): project opens, development continues

I asked my teammates to take a brief break from prototyping, and we jumped from a collection of systems inside Sun to the clean project hosting at

As the migration registers, you should be able to access

If you're interested, please come check it out.


Friday Sep 07, 2007

pkg(5): a no scripting zone

In my previous two posts, we examined two packaging system options—installer-specific knowledge and integrated build system—that I believe present costs that exceed their benefits. Here, we will again examine a design choice from a negative perspective: package-associated scripting.

System V packaging is rich with scripting hooks: scripts named checkinstall, preinstall, postinstall, preremove, and request, and the class action scripts. Each of these scripts can do anything it likes. A script, even in a relatively primitive shell, is an open-ended program—opaque to the invoking framework. It's difficult to catch an incorrect script prior to package publication time, which blocks our intent to prevent propagation of bad package versions. With a more limited set of actions—potentially with that limit enforced or marked—a class of incompletely known resource-handling mechanisms can be kept off the most conservative systems.

One goal we have is to preserve or improve the hands-off behaviour associated with package operations. Legacy packaging allows hands-off operation only by imposing a series of tasks on the deploying administrator. The pkgask(1M) tool can enable the deployer to develop a response to the request script; coming up with an appropriate admin(4) can restrict the framework's built-in interactive queries. (Interaction with signed packages also requires the deployer to modify their pkgadd invocation.) Removing the scripting degree of freedom means that obstacles to hands-off behaviour come solely from an interactive installer or from interactive services acting during system startup.

There are some amusingly egregious violations of the hands-off principle across the space of known packages. Less fun is that these set a bad example for later package developers.

A particularly error-prone aspect of the scripting interface in packaging comes from the variety of contexts the package developer must understand (and test within). It is legitimate to install packages on live systems, in alternate filesystem hierarchies of the same or a different architecture, and in whole-root and filesystem-inheriting zones; in fact, you have multiple choices about how your package should install in a zone.

We can expect the proliferation of virtualized systems, via the various mechanisms like LDOMs and xVM, to keep all of these contexts relevant as degrees of sharing make virtualization even more appealing. Making sure that the package system operates safely in these shared contexts is critical—another of our goals.

Returning to the zones case, the example pseudo-script in pkginfo(4)—a series of nested shell if ; then blocks to navigate some of these contexts—is helpful, but misleading. There is much more variable state a package developer needs to consider to reach correctness. In fact, if you aren't required to rediscover or reinvent a set of resource-handling cases for each component your package delivers, it becomes substantially simpler to make the package and return to improving the software it contains. Reducing the set of steps reduces the developer burdens associated with packaging.

Two particular resources stand out: device drivers and smf(5) services. Although some limited amount of awareness—or at least easily duplicated code—makes these resources somewhat well-behaved during package operations, there are still problems that scripting presents: the addition of new contexts, the provision of multiple genealogies of copied code, and the failure to discover an associated best practice for any particular kind of resource.

There are other resources, of course; as a start, you could duplicate our survey of the ON postinstall and class action scripts.

I believe the key counterargument supporting scripting is that the set of configuration patterns on Unix-like systems is large, and that the easiest means of upgrading each of these potential patterns is to allow a complete programming environment to the package developer. Probably true, but if we look at service and application configuration with respect to when a correct configuration state is required, the update step appears to separate into three classes:

  1. Correct at system startup, no runtime context needed. These are the configuration settings that the various low-level boot components, the kernel, and the drivers need to bring the system to its running state. This class of configuration is generally limited to a specific set of resources, potentially established by a packaging system via corresponding resource-handling actions—or by an installer.

  2. Correct at system startup, requiring runtime context. These are settings where the manipulating agent might be influenced by policy or require some form of interprocess communication to effect configuration changes. smf(5) is an example of the latter, and handles its configuration evolution via the manifest-import service. Manipulation of the various local name service tables, like passwd or the RBAC configuration, is another example, since data about potential principals must be correct for a group of affected services. Since such configuration can be required on the system as a result of package operations, these resources must also be handled via packaging, or require the use of an appropriate installer.

  3. Correct prior to service startup. Most service and application configuration falls into this class. It's not necessary, for instance, to bring a web server's configuration up to date if the service has no enabled instances. There seem to be a number of avenues for handling this kind of configuration: leaving it to the service or application, providing assistance via a configuration mechanism, or giving a hook where such updates can be made as needed. But the packaging system needn't provide this hook—there are a number of possible facilities, of varying suitability.

I should point out that David is making the smf(5) configuration update scenarios much more capable and precise with the Enhanced Profiles project. So, at least, a "configuration mechanism with assistance" is likely to be present soon.

Since the first and second classes—and the ways their configuration manipulations vary across operating contexts—are generally known, eliminating the third class makes precise, scripting-free packages a viable design choice.

That's a long series of arguments in favour of a scripting-free package system. It would be reasonable to ask: "can you actually do it?" So, as a check on our prototype, we used the branded zone capability to let us create a pkg(5)-based whole root zone. Here's a transcript

# zonecfg -z pkg_test
pkg_test: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:pkg_test> create -t SUNWipkg
zonecfg:pkg_test> set zonepath=/export/pkg_test
zonecfg:pkg_test> commit
zonecfg:pkg_test> ^D
# zoneadm -z pkg_test install
Preparing image
Retrieving catalog
Installing SUNWcs SUNWesu SUNWadmr SUNWts SUNWipkg
Setting up SMF profile links
Copying SMF seed repository
Done (115s)

There's dependency following, but no constraint handling; there's no filtering or snapshotting, but also none of the obvious performance optimizations has been implemented (for our 211MB resultant image). But the main point is: it works—installs, boots, upgrades, and still boots—with no scripting. Time for a project proposal.


Monday Aug 06, 2007

pkg(5): No more installer magic

I thought I would continue probing some of the problems that present themselves to any packaging system that might follow the System V packaging. For the next few topics, I'll phrase the problem in terms of an outcome I believe we want to avoid. Here we discuss aspects of eliminating special metadata from installers.

If you're familiar with the packaging and installation components involved with Solaris Express, Developer Edition, or any of the OpenSolaris distributions that Sun produces, then it won't surprise you that there's a large amount of upgrade-specific knowledge in the installation layer. For instance, to upgrade a package installed with Solaris 10 to its currently delivered version in a recent build, there's the set of files associated with the package—and then there's the package history, which collects additional information, like files no longer delivered, or the responsibility of another package. The presence of this file, and the absence of a true package upgrade operation in System V packaging, mean that any kind of upgrade requires some form of installer: the operating system installer for upgrading the entire installation; an application-specific installer for upgrading a group of related application packages.

There are a number of other metadata files trapped in the installation layer, but the most important are the metacluster files, which group the package clusters into large installation profiles, and the package clusters themselves, which group sets of packages along feature boundaries, approximately. It seems evident that these groupings are merely another kind of dependency, much like how a System V package can naïvely state a dependency on any other System V package.

The System V dependencies also show that another critical piece of metadata—the versioning vector, or "arrow of time"—is encoded in the installer.

All of this information, if we are to allow packages to upgrade from one version to another in some linear fashion, needs to be pulled out of the installers and moved into some aspect of the packaging system. This change in responsibility means that the role of the installer becomes more precise: it must prepare a location for software installation, optionally lay down some initial, and possibly stale, image, and collect any required configuration information. Subsequent upgrade operations are driven primarily by the packaging system, which can utilize the version history and dependencies in a manner at least equivalent to what the historical metadata allows.

The other reason that we wish to push that historical metadata into the packaging system is so that it becomes accessible to a new class of application: the distro construction toolkit, which needs the dependency and versioning information to simplify the construction of self-consistent installable images. That leads us to an architectural diagram like

pkg Layering

where we've suggested some internal structure to an updated packaging system. This separation shouldn't be that surprising: in the current system, lumake(1M) (and most of Live Upgrade), as well as zoneadm(1M) for zones, are both performing image operations on either side of any packaging operations they might invoke. Designing a packaging system that makes the construction of distributions and their installers substantially simpler will require such an API layer.

There are some open questions that come to mind:

  • How much variation is there in the initial preparation an installer offers for the image location? Is space management a policy owned exclusively by the installer, or should the image operations layer have the ability to influence that allocation?
  • Is there a marshallable image description, beyond the package dependencies? Is there a marshallable image format, or are we restricted to end-use formats, like ISO images or archived zones?
  • What OS services should we consider modifying, beyond taking advantage of ZFS's current capabilities?

If you're interested in these questions, or the potential architecture of a packaging system, we'll be discussing these topics further in the Install community group.
