Friday Dec 04, 2009


During pkg(5) development it has become quite clear that computing the correct set of packages to install or upgrade is a non-trivial task.  Initially, we started delivering pkg(5) with a solution engine that simply took the latest available packages.  This worked so long as we only delivered packages that were all compatible, no third party publishers existed, and users were happy staying on the bleeding edge.

Since none of these conditions were maintainable, a more sophisticated solution was essential.

We gained some breathing room with the introduction of incorporation dependencies.  Such a dependency in a package specifies the version (at a variable level of precision) of compatibility with another package.  We have used a package full of these dependencies (termed an incorporation) during OpenSolaris development to ensure that the various operating system packages come from the same build - that there's no way to get build 123's drivers, but build 127's IP stack.  In effect these packages define surfaces of compatible package versions, and allow package maintainers to refactor their packages, exchange content, etc. without the need for dependencies at the package level that would prevent incompatible packages from appearing on the same system.  The use of incorporations has allowed us to continue OpenSolaris development with a solver that first applied all constraints imposed by installed incorporations, and then attempted to install the latest possible packages.

As we anticipated, however, the existing solver's deficiencies have become steadily more limiting.  Since the existing solver doesn't support back-tracking (i.e. revising an earlier package version selection during solution generation), trying to install third party packages that were published with different versions for various OpenSolaris releases was difficult if your machine was not running the most recent release, and dealing with nested incorporation dependencies was impossible.  I experimented with a more conservative solver that attempted to upgrade as little as possible; this made upgrades across multiple releases painfully slow, however, and still didn't deal with newer versions being uninstallable due to missing dependencies, etc.  In addition, we received multiple requests for exclude-type dependencies that would allow packages to prevent installation of incompatible packages; this was definitely outside the capabilities of our naive solver.

Conventional solvers iteratively attempt to satisfy package dependencies by walking the package dependency graph and selecting package versions to try; our experience with the large numbers of packages being generated by biweekly (or nightly, or even every-push) builds indicated that such an approach would be very slow in some cases, as the order of graph traversal might lead to the need to explore thousands of possible solutions.  Reading some of the research (in particular, the EDOS and ZYpp projects) indicated significant interest and progress in attacking packaging computations as boolean satisfiability problems, and we decided to try that approach.

Boolean satisfiability solvers need their problems posed in conjunctive normal form; i.e. as a conjunction (logical AND) of clauses containing disjunctions (logical ORs) of variables or their negations.  By assigning a unique boolean variable to the presence of each particular version of each package, we can construct clauses for dependencies and existence that allow the solver to compute solutions to our packaging problems.  For example, given four possible versions of package A, the fact that only one version of package A can be installed at a time yields the following set of clauses (using A1 to indicate the presence of version 1 of A, ! to represent negation and | to represent disjunction):

  • !A1 |  !A2
  • !A1 |  !A3
  • !A1 |  !A4
  • !A2 |  !A3
  • !A2 |  !A4
  • !A3 |  !A4

Clearly, large numbers of versions of a package can generate an inordinate number of clauses; more on that a bit later.
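A sketch of how such pairwise clauses might be generated; the integer-list clause encoding and variable numbering are illustrative, not pkg(5)'s actual internals:

```python
from itertools import combinations

def at_most_one(version_vars):
    """Pairwise CNF clauses forbidding any two versions of the same
    package from being installed together.  Clauses are lists of
    integers in the usual DIMACS style: a positive integer asserts a
    variable, a negative one asserts its negation."""
    return [[-a, -b] for a, b in combinations(version_vars, 2)]

# Four versions of package A, assigned variables 1..4:
print(at_most_one([1, 2, 3, 4]))
# [[-1, -2], [-1, -3], [-1, -4], [-2, -3], [-2, -4], [-3, -4]]
```

Note that n versions produce n(n-1)/2 clauses, which is exactly why large version counts blow up the clause count.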

If a require dependency exists on a particular package version, indicating that that version or newer is required, clauses are generated to describe that dependency.  For example, if package B@1 depended on A@2:

  • !B1 | A2 | A3 | A4

An optional dependency on a particular package version indicates that if that package is installed, it must be at least at the specified level.  Here, we end up excluding versions we don't want.... For example, if package B@1 has an optional dependency on A@3:

  • !B1 | !A1
  • !B1 | !A2

Our incorporate dependencies, which specify the version needed, also generate such exclusionary clauses.  For example, if package B@1 incorporates A@3 (i.e. if A is present it must be at version 3):

  • !B1 | !A1
  • !B1 | !A2
  • !B1 | !A4

Lastly, actual exclude dependencies indicate that the depended-upon package, if present, must be at a version below the specified one.  If package B@1 has an exclude dependency on A@3:

  • !B1 | !A3
  • !B1 | !A4
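The four dependency types above can be sketched as clause generators; the (version, variable) encoding below is an illustration, not pkg(5)'s actual data structures:

```python
# (version, variable) pairs for A@1..A@4, and the variable for B@1.
A = [(1, 1), (2, 2), (3, 3), (4, 4)]
B1 = 5

def require(dep_var, versions, min_ver):
    """dep_var requires the package at min_ver or newer."""
    return [[-dep_var] + [v for ver, v in versions if ver >= min_ver]]

def optional(dep_var, versions, min_ver):
    """If the package is installed at all, it must be >= min_ver."""
    return [[-dep_var, -v] for ver, v in versions if ver < min_ver]

def incorporate(dep_var, versions, ver):
    """If the package is present, it must be exactly ver."""
    return [[-dep_var, -v] for ver_, v in versions if ver_ != ver]

def exclude(dep_var, versions, ver):
    """The package, if present, must be below ver."""
    return [[-dep_var, -v] for ver_, v in versions if ver_ >= ver]

print(require(B1, A, 2))       # [[-5, 2, 3, 4]]
print(incorporate(B1, A, 3))   # [[-5, -1], [-5, -2], [-5, -4]]
```

These reproduce the clause sets listed above, with A1..A4 as variables 1..4 and B1 as variable 5.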

Once the packaging problem can be described as a series of clauses, it can be passed to a SAT solver for solution; the solver generates a set of packages that will meet the specified criteria, or declares that no solution exists.  The number of variables used in the solver is the number of package versions installed plus those considered; the number of clauses used depends on the number of versions of a package and the types of dependencies.  To minimize the size of the problem and the resulting memory footprint, we don't simply generate clauses for all possible packages and their versions; since packages are not (normally) allowed to decrease in version, we eliminate from consideration earlier versions of any installed packages, and any packages excluded by incorporations we're unwilling to change.  We also eliminate duplicate packages from publishers we're not willing to consider.  This "trimming" phase is actually the most time-consuming phase of generating the list of packages to install.
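A toy illustration of the trimming idea, with integer versions and exact-version pins standing in for the real FMRI version comparisons:

```python
def trim(candidates, installed, pinned):
    """candidates: name -> list of available version numbers;
    installed: name -> currently installed version;
    pinned: name -> version required by an incorporation we won't change.
    Returns the reduced candidate lists actually worth encoding as clauses."""
    trimmed = {}
    for name, versions in candidates.items():
        trimmed[name] = [
            v for v in versions
            if v >= installed.get(name, 0)                  # no downgrades
            and (name not in pinned or v == pinned[name])   # honor pins
        ]
    return trimmed

print(trim({"A": [1, 2, 3, 4]}, {"A": 2}, {}))   # {'A': [2, 3, 4]}
print(trim({"B": [1, 2, 3]}, {}, {"B": 2}))      # {'B': [2]}
```

Every version eliminated here is one variable and many clauses the solver never has to see.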

If all we needed was a single solution, this would be adequate; however, we'd like to find solutions that meet our definition of optimal.  We do this by finding solutions and then looking for better ones by re-solving the problem with additional constraints excluding areas we don't consider optimal. For example, when installing a package we're willing to update other packages if needed (within incorporation constraints, of course), but we'd like to minimize such changes.  This is an area we're still exploring, and a likely topic for additional blogging.
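One simple-minded version of this solve-then-constrain loop: take any solution, then add a clause demanding that at least one of the installed packages it removed be kept, and re-solve until no better solution exists.  The brute-force satisfiability check stands in for a real SAT solver, and the whole scheme is a sketch of the general technique rather than pkg(5)'s actual optimization strategy:

```python
from itertools import product

def satisfiable(clauses, n_vars):
    """Brute-force SAT check (a real implementation would call MiniSat);
    returns a satisfying assignment as a tuple of booleans, or None."""
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return bits
    return None

def minimize_removals(clauses, n_vars, installed_vars):
    """Iteratively exclude solutions that drop installed packages."""
    clauses = list(clauses)
    best = satisfiable(clauses, n_vars)
    while best is not None:
        dropped = [v for v in installed_vars if not best[v - 1]]
        if not dropped:
            break
        clauses.append(dropped)            # "keep at least one of these"
        trial = satisfiable(clauses, n_vars)
        if trial is None:
            break
        best = trial
    return best

# Vars: 1 = A@1 (installed), 2 = A@2, 3 = B@1 (to be installed).
# B@1 requires A@1 or newer; only one version of A may be installed.
clauses = [[3], [-3, 1, 2], [-1, -2]]
print(minimize_removals(clauses, 3, [1]))  # (True, False, True): A@1 kept
```

Without the extra constraint the first solution found happens to upgrade A needlessly; the added clause steers the solver back to the smaller change.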

We chose the MiniSat solver as a good place to start, as its authors had built a C version of MiniSat that would be easy to link into our packaging system, which is coded in Python.  About the only changes I made were to keep track of the clauses fed to the solver, so that it is possible to cheaply revise solutions by caching copies of the current state of the solver. Introduction of the SAT solver awaited Shawn Walker's very nice catalog rewrite, which added dependency information to the package catalog, as it was critical for performance reasons not to have to download hundreds of manifests to do package planning.  I integrated the new solver into the packaging gate for build 128 of OpenSolaris, now available from the main package repository and other mirror repositories.

One of the interesting implications of the solver change has been that it is more difficult to determine just why no image-updates are available.  The previous solver would fail (badly) when encountering missing packages in dependencies, etc.; the new solver just considers packages with missing dependencies as uninstallable and thus unavailable for upgrade.  Image-update will now very rarely generate any error messages, which is nice from a user aspect but makes debugging mis-configured or broken builds more difficult than before.  If you think you should be able to upgrade, try explicitly installing the version of entire (the incorporation that currently controls what software build you're running) you think you should be able to install, with -nv as flags; this will generate much more verbose debugging output when no solution can be found, as the packaging system then has some idea of what you'd like to achieve beyond just "get me newer bits if you can". Generation of more useful error messages will remain an important area for further work.

Other interesting areas for further enhancements enabled by the SAT solver include constructing the entire incorporation as an incorporation of other incorporations; this will allow developers to easily run the latest kernel and older window system bits, or vice versa.  We're also considering conditional (package A requires package B if package C is installed) and disjunction (package A requires package B or package C) dependencies to solve some of the more complex package configuration requests we've seen.
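Both proposed dependency types fall naturally out of the CNF encoding, which is part of their appeal.  A sketch (variable assignments are illustrative):

```python
def conditional(pkg, dep, predicate):
    """pkg requires dep whenever predicate is installed: !A | !C | B."""
    return [[-pkg, -predicate, dep]]

def disjunction(pkg, alternatives):
    """pkg requires at least one of the alternatives: !A | B | C."""
    return [[-pkg] + list(alternatives)]

# A = 1, B = 2, C = 3:
print(conditional(1, 2, 3))    # [[-1, -3, 2]]
print(disjunction(1, [2, 3]))  # [[-1, 2, 3]]
```

Each adds just one clause per dependency, so neither would strain the solver.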

Wednesday Feb 04, 2009

Fattening packages - supporting multiple variants in a single package

Dealing with parts of a package

Traditionally, packaging systems have placed optional components of a package in separate packages, and established conventions for naming such components, such as -localization, -locale, -devel, -doc, etc. This method of ad-hoc decomposition makes it more difficult for GUI programs to offer the appropriate choices when selecting components, makes the introduction of new optional components difficult and makes installing documentation after the fact a painful process.

Packaging options also exist which are mutually exclusive; the typical example is which architecture the package supports. One cannot select both sparc and x86, since the two architectures' files collide - /usr/bin/ls is either a sparc binary or an i386 binary. Other examples of such colliding options include debug vs. non-debug kernel binaries, and global vs. nonglobal zones.

Initially we started solving the problem of selecting parts of a package using more or less ad-hoc client-side filtering, but it became clear that the publisher of the package needed to be able to describe which components conflicted and which were optional, and that we needed to be able to clearly distinguish between the two cases. Thus, we refer to options that may be selected or not selected in any combination, such as various locales, documentation, etc., as facets. Options which must be mutually exclusive are called variants. Both variants and facets appear as tags on IPS actions, and result in the action being selected or de-selected for installation.

An example of variant tags on an action is:

dir group=sys mode=0755 owner=root path=kernel/drv/amd64 variant.arch=i386 variant.opensolaris.zone=global

Here this directory action is tagged with variant.arch=i386, indicating that this directory should only be installed on i386 machines (Solaris doesn't distinguish between 32 and 64 bit architectures), and variant.opensolaris.zone=global, which shows that this directory should only be installed in the global zone and not in local zones, as those don't contain kernel components.

Note that any action in the repository can be so tagged, including dependencies.

During installation planning, all actions not applicable to the current image are removed from consideration. If a user with a down-rev version of IPS that doesn't understand variants attempts to install a fat package, either an exception will occur indicating that duplicate actions have been found, or, if a set action with an architecture tag is found, an assertion will trigger. In either case the solution is to either stop attempting to install fat packages, or upgrade your version of IPS to the latest available for your build.
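The filtering rule is simple enough to sketch; actions and image configuration are modeled here as plain dictionaries, which is an illustration rather than IPS's actual representation:

```python
def applicable(action_tags, image_config):
    """Keep an action only if every variant tag it carries matches the
    image's configured value for that variant; untagged actions always
    apply."""
    return all(image_config.get(tag) == value
               for tag, value in action_tags.items()
               if tag.startswith("variant."))

i386_dir = {"path": "kernel/drv/amd64", "variant.arch": "i386"}
print(applicable(i386_dir, {"variant.arch": "i386"}))   # True
print(applicable(i386_dir, {"variant.arch": "sparc"}))  # False
print(applicable({"path": "usr/bin/ls"}, {"variant.arch": "sparc"}))  # True
```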

The initial implementation in build 106 supports only variants (facets are coming later on). To facilitate detection of attempts to install packages on the wrong architecture, our publication code now inserts set actions into packages to indicate what architectures the package supports:

set name=variant.arch value=sparc value=i386 

Publishing Fat packages

We produce fat packages by publishing separately on each architecture, and then merging the resulting manifests. Actions which are identical for both architectures are not tagged with variant tags; those that differ or exist only for one architecture are tagged.  Packages that exist only on one architecture are tagged to indicate that as well. This step can be repeated for different variants, and more than two variant types can be combined at once, so packages containing debug kernel binaries could be merged with non-debug versions, and then merged across several architectures. An example of a merged manifest can be seen here. Of course, those using pkgsend to create packages can simply insert the appropriate tags directly. For the curious, the code to merge packages can be seen here. Further work is needed to improve the performance of publishing the merged packages; allowing multiple repositories to share the filestore would help greatly here.
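The merge step can be sketched as a set operation over action strings; this toy version handles only a single variant type (variant.arch) and treats actions as opaque strings, where the real merge code compares parsed actions:

```python
def merge(manifests):
    """manifests maps a variant value (e.g. 'sparc', 'i386') to that
    build's list of action strings.  Actions present in every manifest
    are emitted untagged; the rest are tagged with their variant."""
    sets = {value: set(actions) for value, actions in manifests.items()}
    common = set.intersection(*sets.values())
    merged = sorted(common)
    for value, actions in sorted(sets.items()):
        merged += [f"{a} variant.arch={value}"
                   for a in sorted(actions - common)]
    return merged

sparc = ["dir path=usr/bin", "file path=usr/bin/ls elf-arch=sparc"]
i386 = ["dir path=usr/bin", "file path=usr/bin/ls elf-arch=i386"]
for line in merge({"sparc": sparc, "i386": i386}):
    print(line)
```

The shared directory action comes out once, untagged, while each architecture's ls binary is emitted with the appropriate variant.arch tag.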

Tuesday Feb 12, 2008

Indiana Preview 2 - my new desktop

This weekend I decided to bite the bullet and convert my desktop to Indiana Preview 2. Since unlike most people at Sun my desktop machine also receives my email, and hosts both my home directory and calendar server, the switch-over needed some quiet concentration on my part to ensure nothing important got left behind.

The installation of Preview 2 (now available here) went smoothly – not surprising, since I'd tested many trial builds on the same machine, a 2 x 2.8GHz Ultra 40. After installation completed and the machine rebooted, I created a second zpool with the two remaining drives; I use this for my home directory, mail spool, tunes and pkg server. This isolates me from any difficulties with the new installer or possible future upgrade problems. ZFS of course makes this all very easy:

: barts@cyber[227]; zfs list
NAME                              USED  AVAIL  REFER  MOUNTPOINT
rpool                            2.63G   224G  49.5K  /rpool
rpool@install                        0      -  49.5K  -
rpool/ROOT                       2.62G   224G    18K  none
rpool/ROOT@install                   0      -    18K  -
rpool/ROOT/preview2              2.62G   224G  2.09G  legacy
rpool/ROOT/preview2@install      66.8M      -  1.94G  -
rpool/ROOT/preview2/opt           483M   224G   483M  /opt
rpool/ROOT/preview2/opt@install    77K      -  3.61M  -
rpool/export                     2.44M   224G    19K  /export
rpool/export@install               15K      -    19K  -
rpool/export/home                2.41M   224G  2.39M  /export/home
rpool/export/home@install          19K      -    21K  -
zfs                               177G  51.8G    21K  /zfs
zfs/home                          133G  51.8G   133G  /export/home/cyber
zfs/local                         291M  51.8G   291M  /usr/local
zfs/mail                          110M  51.8G   110M  /var/mail
zfs/music                        43.4G  51.8G  43.3G  /zfs/music
zfs/music@2.1.2008               2.54M      -  42.3G  -
zfs/repo                           18K  51.8G    18K  /zfs/repo
: barts@cyber[228]; 

I then got to thinking about having a mirrored root pool; I hunted up one more 250GB drive, hot-plugged it into the machine (love those SATA features) and used cfgadm -al and cfgadm -c to get Solaris to find the drive. Running zpool attach took care of establishing the mirror; the mirror was resilvered in just a few minutes, since ZFS knows what's data and what's empty space.

Now I needed dovecot, since I run an IMAP server to allow remote access to my mail. Off to the Dovecot site for a tarball; download, configure and hmm – no C compiler. pkg search -r gcc told me that I needed SUNWgcc installed, so pkg install SUNWgcc grabbed the compiler, assembler and binutils. Cool. Run configure again and whoops – no headers! pkg search -r stdlib.h said I needed SUNWhea, so pkg install SUNWhea and I was compiling dovecot.... For a quick look at the packages available in Indiana so far, browse over to the package repository. I wrote this blog post using OpenOffice – which you'll find in a package called openoffice.

Indiana and IPS are usable, but we've still got a lot of work to do:

  • The packages need to be re-factored into smaller pieces and renamed and tagged to better support minimization, searching, etc.

  • The command line pkg interface needs some attention...

  • There's still much work to be done on constraints and simplifying pkg publishing, and there are lots of bugs, tracebacks and other issues to address.


However, it's coming together – and being able to upgrade to Preview 2 from Preview 1 without running any postinstall scripts helps us feel better about the assertions that started the project....

