Wednesday Nov 03, 2004

Conversions, conversions

I mentioned that I'd converted a few of the F/OSS software packages that we use on our home system. Of course, the point of smf(5) isn't to force you to rewrite all of your init.d scripts—it's to force you to encourage your vendor to do so. (Wait, that's not right, either.) I'll come back to this aspect, but let's talk a bit about service bundles.

The service bundle (or service manifest) is meant to accompany the package delivering the software. If a System V-style package uses the i.manifest class action script to identify its manifests, then these will be automatically imported into the repository upon package installation. (The class action script knows about zones, LiveUpgrade, and a couple of complicated circumstances where it's not safe to import automatically.) In any case, upon the next reboot, unimported manifests in /var/svc/manifest will be automatically imported, and such services will be present in the graph.

Service manifests should deliver services disabled by default, so no one gets surprised by "hey, port N is now open—why's that?" after a package installation. For these cases, all the administrator needs to do is

# svcadm enable service_fmri

That is, the intentions for the service are owned by the system administrator. (There are exceptions for service complexes, where multiple services are involved, but manipulating interior services of a complex should be done by using temporary enable/disable requests. See svcadm(1M) and smf_enable_instance(3SCF).)
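As a sketch, a minimal manifest for a hypothetical application/foo service, delivered disabled by default as described above, might read (service name, paths, and the single dependency are all illustrative):

```xml
<?xml version="1.0"?>
<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
<service_bundle type='manifest' name='foo'>
  <service name='application/foo' type='service' version='1'>
    <!-- Deliver the instance disabled; the administrator opts in. -->
    <create_default_instance enabled='false' />
    <single_instance />
    <dependency name='network' grouping='require_all' restart_on='none'
        type='service'>
      <service_fmri value='svc:/milestone/network:default' />
    </dependency>
    <exec_method type='method' name='start' exec='/opt/foo/bin/food'
        timeout_seconds='60' />
    <exec_method type='method' name='stop' exec=':kill' timeout_seconds='60' />
  </service>
</service_bundle>
```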

If you're delivering software outside of System V packaging, your installer can do the equivalent: use svccfg(1M) to import the service. If you support some form of upgrade, you can enable the service if you detect that the service was effectively enabled already. Remember to think about your uninstall action as well. For the packages we deliver, we disallow deletion of a package that contains an enabled service instance. (A check against accidental removal.)
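For example, an installer and uninstaller might do something like the following (the manifest path and service FMRI are hypothetical):

```
# svccfg import /var/svc/manifest/site/foo.xml
...
# svcadm disable application/foo
# svccfg delete svc:/application/foo
```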

So those are the fundamentals—more detail is available in our service developer introduction. But the big question is what F/OSS software would you like to see example conversions for, first? (And tell us what commercial packages matter, too.) I'm going to start describing a few of the ones I mentioned a few days ago but, again, the goal isn't to cause the creation of hundreds or thousands of custom descriptions of a standard software service, but to get the service described and to have that description accompany the service to deployment.

Sunday Oct 31, 2004

smf(5) at home

Our electrical power at home has been sagging occasionally. It tripped yesterday for a couple of seconds: enough for everything without batteries or some capacitance to reset. (So clocks were set yesterday, and again today.) Our main server went down as well, and I noticed one or two services—running under a very early Solaris 10 build—didn't come back up properly. There's a solution for that.

So I upgraded that server to the 10/2004 Express release of Solaris—the first one with smf(5). I ended up writing seven service conversions very quickly (for SASL, IMAP, stunnel, and Blastwave's Postfix, BIND9, Apache, and Squid), because otherwise Dina's email doesn't work. (Or my home blog.)

I need to reexamine the dependencies I chose for each of these service conversions, but if you're interested in seeing them, let me know. Apart from chasing down a few configuration files, it took only half an hour to get these services properly managed; that is, I was basically limited by typing speed.

Friday Oct 29, 2004

smf(5) content flowing

We've posted an smf(5) quickstart guide and a preliminary service developer introduction (written by Liane), and are listening in the Predictive Self-Healing forum.

If you like, comment to let me know when the images hit the Download Center—they'll most likely say "b69" somewhere in the filenames.

Wednesday Sep 22, 2004

smf(5): authorizations built-in

I mentioned yesterday that you can manipulate services if you have the appropriate authorizations, without needing to possess any privileges. For instance, my current shell has the following privileges and authorizations:

$ ppriv $$
117292: bash
flags = <none>
        E: basic
        I: basic
        P: basic
        L: all
$ auths

And if we try to manipulate a service managed by smf(5) with this set of authorizations, we'll get a predictable result.

$ svcadm restart network/smtp
svcadm: network/smtp: Permission denied.

However, smf(5) defines two big authorizations:

  • solaris.smf.manage, which allows the user to request a restart, refresh, or other state modification of any service instance, and
  • solaris.smf.modify, which allows the user to create or delete services and instances, as well as manipulate any of their property groups or properties.

(There are also lesser built-in authorizations—solaris.smf.modify.method, solaris.smf.modify.dependency, solaris.smf.modify.application, and solaris.smf.modify.framework—which allow the manipulation of properties within property groups of the mentioned type. And you can also customize the authorizations to allow an action at the instance level and to manipulate properties at the property group level.)
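As a sketch of such instance-level customization, you can set an action authorization on an instance's general property group with svccfg(1M) (the authorization name solaris.smf.manage.smtp is illustrative; you would define it yourself in auth_attr(4)):

```
# svccfg -s network/smtp:sendmail setprop general/action_authorization = astring: solaris.smf.manage.smtp
# svcadm refresh network/smtp:sendmail
```

A user granted only that authorization could then restart or refresh the sendmail instance without holding the broad solaris.smf.manage.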

To make assigning batches of authorizations scale, the role-based access control facility (RBAC) allows the definition of rights profiles. (The definitions are contained in /etc/security/prof_attr, and the documentation is prof_attr(4).) The service management facility delivers two rights profiles we think are convenient:

$ grep \^Service /etc/security/prof_attr
Service Management:::Manage services:auths=solaris.smf.manage,solaris.smf.modify
Service Operator:::Administer services:auths=solaris.smf.manage,solaris.smf.modify.framework

We can then use the user_attr(4) database to connect a user with the appropriate profile, like:

$ grep sch /etc/user_attr
sch::::profiles=Service Management

(You can edit /etc/user_attr by hand or, if your password database is local, using the -P option to useradd(1M). The equivalent configuration via direct authorizations would read

$ grep sch /etc/user_attr
sch::::auths=solaris.smf.manage,solaris.smf.modify

and will work fine, except that if the Service Management profile were enhanced subsequently, a user or role with the old explicit authorizations might not have the correct set for future operations.)
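For an existing local account, the profile assignment itself can be made with usermod(1M) (again assuming a local password database):

```
# usermod -P 'Service Management' sch
```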

So now our authorization list is expanded, but our privileges are unchanged:

$ auths
solaris.smf.manage,solaris.smf.modify
$ ppriv $$
117292: bash
flags = <none>
        E: basic
        I: basic
        P: basic
        L: all

and we can carry out our operation from an authorized, but unprivileged user account:

$ svcs network/smtp
STATE          STIME    FMRI
online         Sep_21   svc:/network/smtp:sendmail
$ svcadm restart network/smtp
$ svcs network/smtp
STATE          STIME    FMRI
online         23:58:21 svc:/network/smtp:sendmail

And, finally, since the user_attr(4) database has network name service backends, you can actually make authorization grants that apply across an administrative domain, whether you're giving out the big authorizations illustrated here or custom authorizations specific to a set of services running at your site.

Tuesday Sep 21, 2004

smf(5): asking versus doing

Let's consider how applications are traditionally started: we execute (or the system executes) a command, such as fooadm(1M), which in turn calls fork(2), does some detachment work, and then calls exec(2) to run food(1M) (which is what we wanted). A schematic of this sequence would look like

[diagram: fooadm(1M) forks and then execs food(1M)]

For long-running, always-needed applications (which we call services), this model raises some questions:

  • Who is responsible for food(1M)? After fooadm(1M) finishes, the intent behind the running food(1M) is murky. Is it acceptable for food(1M) to exit after some period of time? Or is exit a failure condition?
  • How did food(1M) get the privileges, resources, etc. that it needed to run? Either fooadm(1M) was executed holding enough privileges for food(1M) to function normally, or food(1M) is setuid-root and must verify that its requesters are authorized to execute it for some set of its offered operations. Ditto for resource assignments.
  • Why does each fooadm(1M)–food(1M) pair on the system (baradm(1M)–bard(1M), ...) handle this relationship slightly differently? I can only speculate.

(Lest anyone think I'm claiming novelty: most restarters (init(1M)) and super-servers (inetd(1M)) have answered the first two of these questions by offering a single, specific application model. But many daemons we run on systems today fit neither of these application models well.)

In smf(5), the service management facility, directly forking a service is discouraged. Instead, one requests that a service be enabled, and the master restarter, svc.startd(1M), or a delegate—like inetd(1M)—will do the fork(2)–exec(2) sequence. The equivalent diagram might be drawn as

[diagram: an enable request travels to svc.startd(1M) or a delegated restarter, which forks and execs food(1M)]

Upon receiving an enable request from smf_enable_instance(3SCF) or svcadm enable fmri, svc.startd(1M) determines if the service's dependencies are satisfied and, if so, requests that the responsible restarter start an instance of the service. What the responsible restarter is doing is:

  • setting the privileges, resource bindings, and application-specific environment for the service,
  • creating a fault boundary to place the service instance within,
  • calling fork(2)–exec(2) to run the service*, and
  • watching the service for error conditions, upon which the service is terminated (and restarted if appropriate).

(The master restarter and the delegates together handle these calculations and operations for every service on the system, propagating their state changes and evaluating the impact of those state changes in turn.)

Moreover, because the smf_enable_instance(3SCF) request is evaluated based on the authorizations of the caller, fooadm(1M) can be run with no significant privileges. Since we split authorizations into action authorizations (non-persistent operations, like "restart this service") and modify authorizations (changing configuration aspects), it becomes straightforward to create an operator role that can tend a service, but not change its configuration or affect any other independent service on the system.

More flexible administrative assignment is one aspect of introducing smf(5) into Solaris, but we'll contrast these two approaches again—and reveal exactly what those purple rectangles represent.

* Not every restarter need offer a fork(2)–exec(2) application model, but presently all smf(5) restarters do.

Monday Sep 20, 2004

smf(5): the system knowing more means...

you can choose to know less. Today, for instance, you have to know which application model your program runs under before you know how to start, restart, or stop it. The common example is that, if you run a process under inetd(1M), then telling inetd(1M) to take notice of your new service (or your newly deactivated service) requires knowing that

# pkill -HUP -x inetd

will cause inetd(1M) to rescan inetd.conf(4), its configuration file.

But for a service started by init(1M) out of inittab(4), you edit that configuration file and then

# init Q

And for a service that is started somewhere in /etc/rc*.d, restarting looks something like

# /etc/init.d/foo restart
Usage: /etc/init.d/foo {start|stop}
# /etc/init.d/foo stop
# /etc/init.d/foo start

and making this service start on boot consists of creating a link from the /etc/init.d file to some sequence number in the appropriate run-level directory. (I'll omit discussion of saf(1M), but that's another distinct method for managing services.)
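Concretely, that link creation looks something like this (run level 3 and sequence number 99 chosen arbitrarily for illustration):

```
# ln /etc/init.d/foo /etc/rc3.d/S99foo
```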

All of this, of course, could be simpler.

If the service describes what other service is responsible for starting it and stopping it (and restarting it or asking for its configuration to be refreshed), then a single command can relay the appropriate instructions to the responsible restarter. In smf(5), this command is svcadm(1M). Our examples above all reduce to

# svcadm restart application/foo

or to

# svcadm enable application/foo

depending on whether you wanted to restart or merely enable (and start) the service in question.

Of course, since we also know what other services application/foo requires, we can actually enable all required services automatically, by following our service graph. Let's make this more concrete: to enable the SSH daemon on Solaris, all you need to do is:

# svcadm enable -r network/ssh
# svcs -p network/ssh
STATE          STIME    FMRI
online         Sep_14   svc:/network/ssh:default
               Sep_14     100152 sshd

What's svcadm(1M) doing? We can ask it for verbose output:

# svcadm -v enable -r network/ssh
svc:/network/ssh enabled.
svc:/system/cryptosvc enabled.
svc:/system/filesystem/minimal:default enabled.
svc:/system/device/local enabled.
svc:/system/filesystem/usr enabled.
svc:/system/filesystem/root enabled.
svc:/network/loopback enabled.
svc:/system/filesystem/usr:default enabled.

svcadm(1M)'s output represents its traversal of the dependencies for sshd(1M). Taken across the many services included in Solaris, that's a lot of knowledge we've formalized and moved into the system. It becomes the basis for a lot of "meta-service" administration, including automated restart.

If you didn't know these dependencies already, you can interrogate the system (using svcs(1)) and cement your understanding; if you did, then you can answer the second-order questions like, "what is affected by a failure of system/utmp?" much more rapidly than in the past. Or you can instead know less and devote your newly freed neural capacity to understanding your application stack as a whole (or to maintaining encyclopedic knowledge of Simpsons trivia...).

Thursday Sep 16, 2004

smf(5): a view from the moon

One interesting aspect of smf(5) is that we have pulled apart many of the assumed interrelationships between system services, and made them explicit. Doing this makes building availability and failure models much easier, but it also lets us see one projection of Solaris's shambling shape. (There's another interesting technique for dynamic discovery of relationships via DTrace, but I'll let Bryan show the image from those experiments when he's ready.) Everyone wanted to visualize the service graph that results, so Dan Price and David Bustos came up with a way to generate one. Here's the result, generated on my two-way Opteron system earlier today:

[scaled-down image of the smf(5) service dependency graph]

Because we'll be tweaking the graph a bit more, I'm only showing this scaled down version, but we can take a bit of a tour just from the gross features:

  • Generally, earliest to latest proceeds from right to left.
  • The large structure just below the centre is network/rpc/bind and its dependents, the various RPC services.
  • Furthest to the right is network/pfil, which prepares the network interfaces for IP Filter.
  • Furthest to the left is system/fmd, the new Solaris Fault Manager.

As you might guess, we've had to write numerous graph-aware diagnosis algorithms to make a large structure like this one navigable. We're looking forward to further enhancing our reporting and visualization tools to make troubleshooting easier still.

(Of course, knowing all the dependencies in Solaris doesn't protect you from the occupational hazard of overdiagnosis, as John and I spent an hour poking at every possible aspect of his system, which ultimately required a new network cable. I figure once a year I still end up following an "all possible software causes" algorithm, ending up in mdb(1) poking around the kernel, rather than checking cable connections or a bad software install.)

Wednesday Sep 15, 2004

smf(5), new blogging, boot

We had our moment last week, which was pretty gratifying: my first slides on doing this project are from June 2001, and I'm not certain how long before that—1997? 1998?—we started sketching out aspects of the problem. smf(5), which we developed under the codename "Greenline", is now rolling out pretty well inside Sun, and with the inclusion of the Java Desktop System, makes for a nice fast workstation startup.

After being strapped into their ergonomic chairs for a year or two, people are understandably itchy to start explaining what they've been working on to others: Tobin's first entry suggests that he's setting himself a lofty goal of touring the entire facility in his blog. I know Liane and a couple of others are ready to get the word out as well. But, being a dilettante at heart, I'll just try to show some of the pieces I think are interesting.

The past couple of days I spent refining verbose boot, which lets you see what services the system is starting and in what order (and what went wrong). I'm eliding some of the boring parts, but it looks roughly like this:

SunOS Release 5.10 Version smf-fixes-sparc 64-bit
Copyright 1983-2004 Sun Microsystems, Inc.  All rights reserved.
Use is subject to license terms.
DEBUG enabled
misc/forthdebug (467758 bytes) loaded
[ system/filesystem/root:default starting (Root filesystem mount) ]
[ network/pfil:default starting (pfil) ]
[ network/loopback:default starting (Loopback network interface) ]
[ network/physical:default starting (Physical network interfaces) ]
Sep 15 16:26:33/11: system start time was Wed Sep 15 16:26:20 2004
[ system/filesystem/usr:default starting (/usr and / mounted read/write) ]
[ system/device/local:default starting (Standard Solaris device configuration.) ]
[ system/identity:node starting (System identity) ]
[ system/filesystem/minimal:default starting (Local filesystem mounts) ]
[ milestone/devices:default starting (Device configuration milestone.) ]
Hostname: gremlins-b10
[ system/sysevent:default starting (System event notification service.) ]
[ system/identity:domain starting (System identity) ]
NIS domain name is
[ system/cryptosvc:default starting (Cryptographic services) ]
[ system/manifest-import:default starting (Service manifest import) ]
[ system/rmtmpfiles:default starting (Remove temporary files) ]
[ system/sysidtool:net starting (sysidtool) ]
[ system/power:default starting (Power Management) ]
[ system/keymap:default starting (Keyboard defaults) ]
[ system/name-service-cache:default starting (Name service cache daemon) ]
[ milestone/single-user:default starting (Single-user milestone) ]
[ system/console-login:default starting (Console login) ]
[ milestone/multi-user:default starting (Multi-user milestone) ]
[ network/inetd:default starting (inetd) ]

gremlins-b10 console login:

(On a Java Desktop System with GDM2 enabled, it just presents the GDM graphical login shortly after system/utmp is available.) One new aspect of boot is that the service "common" names are actually localizable although, since Solaris supports various splits between the root filesystem, /usr, and other filesystems, there's a bit of hidden acrobatics involved. As another example, the reason we print out the start time is because, on this system, prior to that message, we can't access the timezone data to convert the time-of-day to the local timezone. Fun.

Tuesday Jul 27, 2004

The tantalizing aroma of svcs(1)

A side point: svcs(1) is pretty fast. Our example from last week was

$ svcs \*milestone\*
STATE          STIME    FMRI
online         Jul_23   svc:/milestone/devices:default
online         Jul_23   svc:/milestone/single-user:default
online         Jul_23   svc:/milestone/name-services:default
online         Jul_23   svc:/milestone/multi-user:default
online         Jul_23   svc:/milestone/multi-user-server:default

Other approaches to service management in Unix-like systems ask each service for its status. On a large system, with a complete representation of its running daemons as services, this can mean thousands of fork(2)/exec(2) pairs. That's not how smf(5) works, and so a command like svcs(1) can be quick:

$ time svcs \*milestone\*
STATE          STIME    FMRI
online         Jul_23   svc:/milestone/devices:default
online         Jul_23   svc:/milestone/single-user:default
online         Jul_23   svc:/milestone/name-services:default
online         Jul_23   svc:/milestone/multi-user:default
online         Jul_23   svc:/milestone/multi-user-server:default

real    0m0.027s
user    0m0.004s
sys     0m0.009s

In fact, we're not even calling fork(2) to get this report on service status:

$ truss -t fork,write,exit svcs \*milestone\*
STATE          STIME    FMRI
write(1, " S T A T E              ".., 29)      = 29
online         Jul_23   svc:/milestone/devices:default
write(1, " o n l i n e            ".., 55)      = 55
online         Jul_23   svc:/milestone/single-user:default
write(1, " o n l i n e            ".., 59)      = 59
online         Jul_23   svc:/milestone/name-services:default
write(1, " o n l i n e            ".., 61)      = 61
online         Jul_23   svc:/milestone/multi-user:default
write(1, " o n l i n e            ".., 58)      = 58
online         Jul_23   svc:/milestone/multi-user-server:default
write(1, " o n l i n e            ".., 65)      = 65

I'll start drawing a suitable architecture diagram...

Monday Jul 26, 2004

Enabling and disabling services

I thought I would show a different example of smf(5) today. Here's the state of the network/telnet service on my desktop:

# svcs -p network/telnet:default
STATE          STIME    FMRI
online         Jul_23   svc:/network/telnet:default

It's easy to enable and disable service instances using svcadm(1M):

# svcadm disable network/telnet
# svcs -p network/telnet:default
STATE          STIME    FMRI
disabled       13:08:15 svc:/network/telnet:default
# telnet localhost
telnet: Unable to connect to remote host: Connection refused

And we can enable it just as easily, too:

# svcadm enable network/telnet
# svcs -p network/telnet:default
STATE          STIME    FMRI
online         13:08:29 svc:/network/telnet:default

Note that, while the system declares the telnet service available, no processes are associated with the service instance. If we "telnet localhost" from another window, we can then see the telnet daemon and the login session:

# svcs -p network/telnet:default
STATE          STIME    FMRI
online         13:08:52 svc:/network/telnet:default
               13:08:52   116400 in.telnetd
               13:08:52   116403 login

In addition to the command line interface of svcadm(1M), services can be enabled and disabled programmatically, by calling smf_enable_instance(3SCF) and smf_disable_instance(3SCF). Since the framework relays the enable or disable request, we don't need privileges to signal any process (or even to know which process we might have to signal to make the update...).

Thursday Jul 22, 2004

A peek while bug flushing

With only a few days of exposure on the varied system configurations around here, there have been a few bugs raised against smf(5), the new service management facility. (I suppose it's similar to a pack of hounds flushing pheasants during a hunt (although, ultimately, whom the hounds chase and whom a gun is pointed at does differ from a hunt).) What's more exciting is that, as the kinks get smoothed out, people are instead starting to discuss possibilities. But I thought I'd show a little tiny piece of output instead. Here's the output of the new services listing command, svcs(1), looking only at the major milestones of the startup process:

$ svcs \*milestone\*
STATE          STIME    FMRI
online         11:03:56 svc:/milestone/devices:default
online         11:04:04 svc:/milestone/single-user:default
online         11:04:07 svc:/milestone/name-services:default
online         11:04:11 svc:/milestone/multi-user:default
online         11:04:18 svc:/milestone/multi-user-server:default

What name services am I running? Examine the name services milestone more closely:

$ svcs -d milestone/name-services:default 
STATE          STIME    FMRI
disabled       11:03:51 svc:/network/ldap/client:default
disabled       11:03:51 svc:/network/nis/server:default
disabled       11:03:51 svc:/network/rpc/nisplus:default
online         11:04:04 svc:/network/nis/client:default
online         11:04:07 svc:/network/dns/client:default

NIS, with a bit of DNS for seasoning. What's it take to be a NIS client these days?

$ svcs -p svc:/network/nis/client:default
STATE          STIME    FMRI
online         11:04:04 svc:/network/nis/client:default
               11:04:04   100202 ypbind

(But you knew that already.) More later.

Tuesday Jul 20, 2004

Checklists navigated

The project that I mentioned a little while back navigated our engineering processes (or, rather, we steered it through them) and integrated into the Solaris development release last week. Now it's getting a real shakedown as more and more people in the company get access to the bits. And it will hit the various Beta programs and Software Express shortly.

They've found a few bumps and rough edges, but nothing we can't address. We're even finding that it's straightforward to keep the hardened machines hardened. And everyone's moving fast again, as we don't have to keep our many, many changes in our heads anymore—people will tell us directly what's wrong/imperfect/improvable.

The project? It's the service management facility, which is the other major technology comprising S10's Predictive Self-Healing feature.

Thursday Jul 08, 2004

Why projects? Why not?

One of the questions I often ask myself is "why aren't more sites using projects?". As I wander from forum to forum, I regularly see people saying, "I want to consolidate three [application server] instances on my system"—or two [database] instances or n applications. Many of these applications need to run with identical credentials (user id, group id, authorizations, privileges, etc.) and are only distinguishable by their working directory, environment variables, or the like. Reading these requests is a bit frustrating, as this scenario is one of the key motivations we had when introducing the project(4) database—and I can only conclude that it's my failure to really communicate its utility.

Projects let you associate a label with a specific workload. In S8 6/00 and all subsequent releases, you can explicitly launch a workload with its appropriate project using the newtask(1) command. If extended accounting has been activated using acctadm(1M) with one of the standard record groupings, then the accounting records for processes within that workload will include their project ID. Writing an accounting record on every process exit can impact some workloads, so you can optionally choose to write records only when each task exits. A task is a new process collective that groups related work within a workload (so it could be a workload component, like a batch submission). acctadm(1M) will report the current status of the extended accounting subsystem if invoked without arguments:

$ acctadm
            Task accounting: inactive
       Task accounting file: none
     Tracked task resources: none
   Untracked task resources: extended
         Process accounting: inactive
    Process accounting file: none
  Tracked process resources: none
Untracked process resources: extended,host,mstate
            Flow accounting: inactive
       Flow accounting file: none
     Tracked flow resources: none
   Untracked flow resources: extended

The resource lines report what accounting resource groups and resources we can include in each record. We can expand the resource groups for each type of accounting using the -r option.

$ acctadm -r
process:
extended pid,uid,gid,cpu,time,command,tty,projid,taskid,ancpid,wait-status,zone,flag
basic    pid,uid,gid,cpu,time,command,tty,flag
task:
extended taskid,projid,cpu,time,host,mstate,anctaskid,zone
basic    taskid,projid,cpu,time
flow:
extended saddr,daddr,sport,dport,proto,dsfield,nbytes,npkts,action,ctime,lseen,projid,uid
basic    saddr,daddr,sport,dport,proto,nbytes,npkts,action

So we can enable the extended task record by invoking acctadm(1M) like

# acctadm -e extended task
# acctadm -E task
# acctadm -f /var/adm/exacct/task task

In S10, you can optionally enable accounting without having it write to a file (the -E invocation above), such that the records are retrievable using getacct(2).

Of course, that's all about accounting, but projects are useful even if you're not interested in the long-term resource consumption of your workloads. The project ID is useful for isolating your workload using conventional /proc-based tools like prstat(1M) and pgrep(1), as well as with DTrace. For instance, to see only the processes in one's own project, you can use the -J option to pgrep:

$ pgrep -lf -J user.sch
728069 /usr/bin/bash
728027 /usr/bin/bash
125169 /usr/bin/bash

To see workloads on the system, you can use prstat's -J option, which aggregates the activity by project ID, as well as displaying the most active processes:

$ prstat -c -J 1 1
   PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
653322 xx         19M   17M cpu2     0    3 166:34:10  12% setiathome/1
911046 xx         19M   17M cpu5     0    3 170:28:53  12% setiathome/1
668697 xx         19M   17M cpu4     0    3 138:53:14  12% setiathome/1
100378 daemon   2352K 1944K sleep   60  -20  30:18:23 0.2% nfsd/5
125214 sch      4472K 4152K cpu3     1    0   0:00:00 0.0% prstat/1
100066 root     7872K 6736K sleep   29    0   2:20:42 0.0% picld/13
125169 sch      2768K 2416K sleep    1    0   0:00:00 0.0% bash/1
100156 root       91M   36M sleep   59    0   0:46:59 0.0% poold/8
100249 root     6680K 4848K sleep    1    0   8:02:46 0.0% automountd/2
100254 root     5776K 3552K sleep   59    0   0:00:01 0.0% fmd/10
100262 root     4024K 3424K sleep   59    0   0:19:40 0.0% nscd/57
100265 root     1248K  776K sleep   59    0   0:00:00 0.0% sf880drd/1
100184 root     2288K 1384K sleep    1    0   0:00:00 0.0% ypbind/1
100172 daemon   2680K 1704K sleep   58    0   1:07:32 0.0% rpcbind/1
100158 root     2216K 1336K sleep   59    0   0:00:26 0.0% in.routed/1
PROJID    NPROC  SIZE   RSS MEMORY      TIME  CPU PROJECT                     
   130        3   56M   51M   0.3% 475:56:17  37% background                  
     0       61  341M  168M   1.0%  43:21:28 0.3% system                      
 36565        4   13M   11M   0.1%   0:00:01 0.1% user.sch                    
105403       14   39M   30M   0.2%   0:00:03 0.0% user.xxxxxxx                
 77194       17   74M   62M   0.4%   0:03:10 0.0% user.xxxxxx                 
Total: 133 processes, 279 lwps, load averages: 3.07, 3.07, 3.04

(This system's pretty idle during our U.S. shutdown, so it's doing its best to find extraterrestrial customers.)

To limit your DTrace predicates to only a project of interest, use the curpsinfo built-in variable to access the pr_projid field, like

/curpsinfo->pr_projid == $projid && ..../

where I've also used the $projid scripting macro, which expands to the project ID of the process that invoked dtrace(1M) (the result of getprojid(2)). You could instead explicitly enter your project ID of interest, or use one of the argument macros if writing a script you expect to reuse.
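Put together, an illustrative complete invocation counting system calls made by processes in the invoking user's project might look like:

```
# dtrace -n 'syscall:::entry /curpsinfo->pr_projid == $projid/ { @[execname] = count(); }'
```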

Projects also let you place resource controls on your workload, establish its resource pool bindings, and more. We'll make it easier to use them with the forthcoming service management facility. But I'll summarize: projects are a precise and efficient way to label your workloads (as opposed to pattern matching on arguments or environment variables). If you are consolidating workloads, either because of machine eliminations, organizational mergers, or other reasons, they are definitely worth considering. If you think there's a way to make them more applicable to your work, please let me know.

Friday Jul 02, 2004

Answer to sort(1) puzzle #1

/usr/bin/sort -ur -k 1,1n -k1,2nr input.d

Well, I underspecified that problem slightly; the output I was looking for is

1305 6565
1401 8192
1408 2312

which the anonymous poster's invocation will give. Alan's invocation gets the correct line, but has the first field backwards (if I had specified the problem fully). You could, of course, send Alan's output through another sort(1) stage to order the first field. The key to this puzzle is knowing that (a) sort(1) does a final comparison of the entire line using strcoll(3C), (b) that fields with specific modifiers ignore global modifiers (like the -r option here), and (c) that the Solaris implementation of sort(1) will output only the first unique line it finds in the collated sequence. The first two of these points are in the manual page; the last requires some experimenting.
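That extra sort(1) stage can be written portably, under the assumption that the second sort keeps the first line of each equal-key run (GNU sort documents this for -u, and -s makes "first" well defined by keeping the sort stable):

```shell
# Order by field 1 ascending and field 2 descending, then keep the first
# (i.e., highest-second-field) line for each distinct first field.
printf '1401 8192\n1401 3487\n1401 0807\n1305 3471\n1305 6565\n1408 2312\n1408 1233\n' \
  | sort -k1,1n -k2,2nr \
  | sort -s -u -k1,1n
# 1305 6565
# 1401 8192
# 1408 2312
```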

True story: This puzzle grew out of a service request where a customer was moving from a platform where the last unique line was the one displayed and needed to modify their script to produce the same output on Solaris.

Please comment if you want more puzzles, or if you think I should stop before getting started!

Thursday Jul 01, 2004

sort(1) puzzle #1

(I'm waiting for a build to finish, so here's a small Solaris sort(1) trick.)

Question: I have the following data file

1401 8192
1401 3487
1401 0807
1305 3471 
1305 6565
1408 2312
1408 1233

Using only sort(1), how do I generate a file sorted by the first field, with only the highest valued second field for each first field value?

Note that the unique line behaviour of sort(1) isn't well specified, so versions from other platforms may not be able to do this trick.
