Monday Mar 12, 2012

How I now configure the name switch in Solaris 11

/etc/nsswitch.conf is dead in S11: the name service switch is now configured through the svc:/system/name-service/switch SMF service. What I use now to configure the name switch is the following shell script, which takes either a nis or ldap argument:

#!/bin/ksh -p

# New way of configuring the name switch.  nsswitch.conf is dead in S11 so the
# following must be edited and run instead of editing nsswitch.conf.

me=${0##*/}
tmpfile=$(/usr/bin/mktemp -t ${me}.XXXXXX)

case $1 in
nis)
cat > $tmpfile <<EOF
setprop config/password = astring: "files nis [TRYAGAIN=0 UNAVAIL=return NOTFOUND=return]"
setprop config/group =    astring: "files nis [TRYAGAIN=0 UNAVAIL=return NOTFOUND=return]"
setprop config/host =     astring: "files dns"
setprop config/network =  astring: files
setprop config/protocol = astring: files
setprop config/rpc =      astring: files
setprop config/ether =    astring:   files
setprop config/netmask =  astring:   files
setprop config/bootparam = astring:  files
setprop config/publickey = astring:  files
setprop config/netgroup =  astring:  nis
setprop config/automount = astring:  "files nis"
setprop config/alias =     astring:  files
setprop config/service =   astring:  files
setprop config/printer = astring:    "user files"
setprop config/project = astring:    files
setprop config/auth_attr = astring:  files
setprop config/prof_attr = astring:  files
setprop config/tnrhtp = astring:     files
setprop config/tnrhdb = astring:     files
EOF
;;
ldap)
cat > $tmpfile <<EOF
setprop config/password = astring: "files ldap"
setprop config/group =    astring: "files ldap"
setprop config/host =     astring: "files dns"
setprop config/network =  astring: files
setprop config/protocol = astring: files
setprop config/rpc =      astring: files
setprop config/ether =    astring:   files
setprop config/netmask =  astring:   files
setprop config/bootparam = astring:  files
setprop config/publickey = astring:  files
setprop config/netgroup =  astring:  ldap
setprop config/automount = astring:  "files ldap"
setprop config/alias =     astring:  files
setprop config/service =   astring:  files
setprop config/printer = astring:    "user files ldap"
setprop config/project = astring:    files
setprop config/auth_attr = astring:  files
setprop config/prof_attr = astring:  files
setprop config/tnrhtp = astring:     files
setprop config/tnrhdb = astring:     files
EOF
;;
*)
print -u2 "Usage: ${0##*/} nis|ldap"
exit 1
;;
esac

svccfg -s svc:/system/name-service/switch -f $tmpfile || exit
rm $tmpfile
svcadm refresh svc:/system/name-service/switch:default
print "Current svc:/system/name-service/switch config:"
svccfg -s svc:/system/name-service/switch listprop 'config/*'
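The same SMF mechanism can also be driven by hand for a one-off change; a sketch, using config/host as the example property (any of the config/* properties shown above works the same way):

```shell
# Sketch: change a single name-service switch property by hand instead
# of rerunning the whole script.  config/host is just the example here.
svccfg -s svc:/system/name-service/switch setprop config/host = astring: '"files dns"'
svcadm refresh svc:/system/name-service/switch:default

# Verify the result:
svccfg -s svc:/system/name-service/switch listprop 'config/*'
```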

Tuesday Mar 06, 2012

Easy Memory Corruption Checking in Solaris

I've been using a shell script wrapper I wrote several years ago that makes the libumem and watchmalloc memory corruption debugging features easy to use. The script is called memdbg and it can be used like so:

memdbg [memdbg options] <program and program args>


# Start the krb5kdc daemon with memory leak detection on
$ memdbg -l ./krb5kdc

# Check the kinit command for memory corruption
$ memdbg kinit tester
I also created a companion script called show_leaks which can report memory leaks found by libumem on a running process. Here's an example:
# Report memory leaks in kadmind when it's been started with memdbg -l:
$ show_leaks kadmind
0818aa90     127 081a81b8`decrypt_key_data+0x42
08189710     254 0878ca18`krb5_dbekd_def_decrypt_key_data+0x8d
08189a90     127 081f8998`krb5_dbekd_def_decrypt_key_data+0x8d
0818a010     127 081ca110`krb5_dbekd_def_decrypt_key_data+0x8d
   Total     635 buffers, 48768 bytes

# See the stacks of all leaked allocs
$ show_leaks -v kadmind
0818aa90     127 081a81b8`decrypt_key_data+0x42
08189710     254 0878ca18`krb5_dbekd_def_decrypt_key_data+0x8d
08189a90     127 081f8998`krb5_dbekd_def_decrypt_key_data+0x8d
0818a010     127 081ca110`krb5_dbekd_def_decrypt_key_data+0x8d
   Total     635 buffers, 48768 bytes
======================= 081a81b8::bufctl_audit info ======================
            ADDR          BUFADDR        TIMESTAMP           THREAD
                            CACHE          LASTLOG         CONTENTS
         81a81b8          81ae280    e7d2112af0cb1                1
                          818aa90          815fcfc                0

======================= 0878ca18::bufctl_audit info ======================

I've blogged before about both libumem and watchmalloc (both are native to Solaris). At this point I use libumem much more because it doesn't slow a program down very much, at least when using the default options or -l leak reporting. Every Solaris developer should be using this at some point in testing; I have found numerous memory handling bugs using these tools.
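The memdbg wrapper itself isn't reproduced here, but a minimal sketch of what such a wrapper typically does with libumem (the environment variables are the documented libumem(3LIB) debugging knobs; "myprog" is just a placeholder):

```shell
# Minimal sketch: enable libumem debugging for one process.
# UMEM_DEBUG/UMEM_LOGGING are the documented libumem(3LIB) settings;
# "myprog" is a placeholder program name.
LD_PRELOAD=libumem.so.1 \
UMEM_DEBUG=default \
UMEM_LOGGING=transaction \
./myprog

# Leaks can then be inspected with mdb's ::findleaks dcmd, e.g. on a
# live process:
#   mdb -p $(pgrep myprog)
#   > ::findleaks
```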

Here are the links to the scripts mentioned above:
To use them, use the "save link as" function of your browser, chmod u+x memdbg and show_leaks, place them in your PATH, then run. Happy bug hunting!

Tuesday Nov 23, 2010

Simple auto kinit for OS X

I wrote a couple of small shell scripts to automate kinit when my MacBook Pro is connected to the internal company network. The first, kckinit, uses the OS X keychain to store my Kerberos password. The trick is to run the native kinit using < /dev/null. Apparently this causes kinit to display a dialogue window which has a "remember in keychain" option. Once I did that I am able to run "kinit myprinc@FOO.COM < /dev/null" and the password comes from the keychain. Here is the script:

#!/bin/ksh -p

# This looks odd but, when kinit detects that stdin cannot be used, it
# pops up a password dialog window which allows the password for that
# princ to be saved in the keychain, and from then on it will use the
# keychain to get the password.

if [[ $# -ne 1 || "$1" == @('-?'|'--help') ]]
then
	echo "Usage: ${0##*/} <sun|mit>"
	exit 1
fi

case $1 in
mit) /usr/bin/kinit will@ATHENA.MIT.EDU < /dev/null;;
sun) /usr/bin/kinit will@SUN.COM < /dev/null;;
*) print -u2 "Usage: ${0##*/} <sun|mit>"; exit 1;;
esac

The second script calls the first one in a loop that runs in the background. I call the script auto_kinit. Here is the script:
#!/bin/ksh -p

# princ and host were not spelled out in the original post; these are
# placeholders for the principal to check for and a host that is only
# reachable on the corporate network.
princ=myprinc@SUN.COM
host=some.internal.host

while true
do
	if (! /usr/bin/klist 2>/dev/null |
	      /usr/bin/fgrep "Default principal: $princ" >/dev/null 2>&1) &&
	   ping -c2 -oq $host >/dev/null 2>&1
	then
		~/bin/kckinit sun
	fi
	sleep 45
done
It will detect when it is on the corporate network and do a kckinit while running as a background job. I've added auto_kinit as a login startup item.

Thursday Sep 23, 2010

Highly Recommended Solaris Tools book

Ever since I was given a copy of the "Solaris Performance and Tools" book I've found it more and more useful. It has a lot of good information about the various diagnostic tools that are native to Solaris, including dtrace and mdb. If you ever need to examine a core dump or kernel panic, the mdb section of this book is fantastic. The same goes for dtrace.

Sunday Aug 01, 2010

OpenSolaris snv_134 dev build Release Notes

Recently I posted the release notes for a release of OpenSolaris that I mistakenly thought was publicly released. I have since located the release notes for the snv_134 developer build of OpenSolaris that is publicly available. Here are those notes:
The OpenSolaris development package repository

has been updated to reflect the changes up to and including snv_134 for
both the x86/x64 and SPARC platforms.

Starting with build 133, almost all packages in the development package
repository have been renamed with hierarchical, smf(5)-style names[1].
For general information on the format of package names, see the
pkg(5) manual page[2].

Before updating a system, review the "New issues" and "Existing issues"
sections of this document for all of the known issues that may affect
the update.

The development builds have undergone limited testing and users should
expect to uncover issues as the next release is developed.  Bug reports
and requests for enhancement are welcome through

Users who wish to update their system to the development build can do
so by setting their preferred publisher to the above URL and using the
"image-update" facility provided by the pkg(1) command or by the
"Update All" facility of the Package Manager GUI.

Existing issues in this repository update or in updating to it
3106 action upgrade needs to consider a missing origin

	When using image-update or the Package Manager to update, the
	packaging operation may fail with messages of the form

		Action removal failed for 'path/to/some/file'
		  File "/usr/lib/python2.4/vendor-packages/pkg/",
		  line 85, in copyfile
		    fs = os.lstat(src_path)
		OSError: [Errno 2] No such file or directory

	Work-around: This failure occurs if an "editable" file has been
	removed from the system prior to updating to build 133 or later.

	To restore the file in question

		user@host$ pfexec pkg fix 

	At this point the above packaging operation can be restarted.

14570 file install logic discommoded by excessive cleverness if preserve=rename*

	When using image-update or the Package Manager to update, the
	packaging operation may fail with messages of the form

		Action upgrade failed for 'etc/mail/'
		  line 232, in rename
		    os.rename(src, dst)
		OSError: [Errno 2] No such file or directory

	Work-around: This failure occurs if an "editable" file that has
	been marked "renameold" has been modified from the system prior
	to updating to build 133 or later.

	An example of such a file is the sendmail(4) configuration
	file, /etc/mail/  Special instructions[3] are
	available in cases where /etc/mail/ has been

	In other cases, first preserve the contents of the existing
	file

		user@host$ cp -p /path/to/file /path/to/file.orig

	Restore the modified file by searching for the package that
	delivers it

		user@host$ pkg search -l /path/to/file
		INDEX      ACTION VALUE              PACKAGE
		path       file   /path/to/file      pkg:/@ ...

	Then restore the file as follows

		user@host$ pfexec rm /path/to/file
		user@host$ pfexec pkg fix 

	At this point the above packaging operation can be restarted.

	Once booted into the new boot environment, changes recorded in
	/path/to/file.orig can be merged, if necessary, into the restored
	file.
6914346 upgrade from OpenSolaris 2009.06 (111b2) to 130 fails with stale boot archive

	After updating to build 130 or beyond, the system may panic
	with messages of the form

		undefined symbol 'pcie_get_rc_dip'
		WARNING: mod_load: cannot load module 'pci_autoconfig'

		failed to load misc/pci_autoconfig

	Work-around: Boot the original boot environment (BE) instead
	and correct the boot archive as follows

		user@host$ pfexec beadm mount  /mnt
		user@host$ pfexec bootadm update-archive -F -R /mnt
		user@host$ pfexec beadm unmount 

	At this point, the new BE can be booted into.

12380 image-update loses /dev/ptmx from /etc/minor_perm

	When using image-update or the Package Manager to update to
	build 125 or greater, remote access to the system via ssh(1) or
	rlogin(1) may become unavailable.  Alternatively, using
	terminal programs such as gnome-terminal(1) or xterm(1) may
	result in characters not being echoed or commands unable to be
	executed.
	Work-around: Boot the original boot environment (BE) instead
	and correct the /etc/minor_perm file contained within as
	follows

		user@host$ pfexec beadm mount  /mnt
		user@host$ pfexec sh -c \
			"grep ^clone: /etc/minor_perm >> /mnt/etc/minor_perm"
		user@host$ pfexec touch /mnt/reconfigure
		user@host$ pfexec bootadm update-archive -R /mnt
		user@host$ pfexec beadm unmount 

	At this point, the new BE can be booted into.

13534 "Could not update ICEauthority file /.ICEauthority" on bootup of build 130

	After the system boots, the following warning dialog boxes may
	be displayed

		Could not update ICEauthority file /.ICEauthority

		There is a problem with the configuration server
		(/usr/lib/gconf-sanity-check-2 exited with status 256)

	Work-around: Clicking on the "Close" button for each dialog box
	will permit one to login normally.  Once logged in, enter the
	following command to correct the home directory for the "gdm"
	user
		user@host$ pfexec usermod -d /var/lib/gdm gdm

11051 default ai 121 dev ai build should point to /dev

	When using the Automated Installer (AI) to install development
	builds over the network, the manifest used for the install
	service should be updated to reflect that packages should be
	installed from the development repository.

	First copy the default manifest from your install image.
	Assuming the name of the created AI service is , then
	AI represents the name of the smf(5) property group
	that contains the path to the image

		user@host$ image_path=`svcprop -c -p AI/image_path`

	Next copy the default.xml file from that image and change the
	"main url" attribute of the "ai_pkg_repo_default_publisher"
	element from "" to

		user@host$ cp ${image_path}/auto_install/default.xml /tmp

	Finally, associate the modified manifest with the install
	service

		user@host$ pfexec installadm add -m /tmp/default.xml \

	Note that users of the "bootable" AI CD and USB ISO should make
	a similar change to the custom manifest that can be specified as
	part of its installation procedure.

13233 /contrib packages should not depend on "entire"

	If packages from the "/contrib" repository have been installed
	on the system, attempts to update the system may cause the
	following error to occur

		pkg: Requested "install" operation would affect files that
		cannot be modified in live image.
		Please retry this operation on an alternate boot environment.

	Work-around: Uninstall the packages from "/contrib" which are
	causing the issue.  The list can be found through the following
	command

		user@host$ pkg contents -Ho,action.raw -t depend | \
			grep fmri=entire@ | cut -f1

	Once these packages have been uninstalled, repeat the packaging
	operation.

11523 only permit FMRIs from same publisher for network repositories

	When performing certain packaging operations, errors of the
	following form may be displayed

		pkg: The following pattern(s) did not match any
		packages in the current catalog.  Try relaxing the
		pattern, refreshing and/or examining the catalogs:


		The catalog retrieved for publisher '' only
		contains package data for these publisher(s):  To resolve this issue, update this
		publisher to use the correct repository origin, or add
		one of the listed publishers using this publisher's
		repository origin.

	These both reflect that the name of the publisher has been
	incorrectly set to a value other than "".  When
	using as an origin URI, the
	name of the publisher must be "" and there
	should be no other publishers with that name.

	In addition, specifying a publisher for both the and origin URIs is an error as
	only one of them should be in use at a time, using a publisher
	of "".

	Work-around: If there is a publisher "publisher name" defined
	for the origin URI, remove
	this first

		user@host$ pfexec pkg unset-publisher 

	Then reset the publisher back to the correct value

		user@host$ pfexec pkg set-publisher \

8347 Move boot archive from /boot/x86.microroot to /platform/i86pc/boot_archive

	Automated Installer servers must themselves be updated to at
	least build 128a in order to serve build 128a or greater
	images.  In addition, systems running the Distribution
	Constructor should also be updated in order to build images
	based on build 128a or greater.

10630 driver action gets confused by driver_aliases entries not covered by an

	When using image-update or the Package Manager to update to
	build 121 or later, messages of the following form may be
	displayed

		The 'pcieb' driver shares the alias 'pciexclass,060400'
		with the 'pcie_pci' driver, but the system cannot
		determine how the latter was delivered.  Its entry on
		line 2 in /etc/driver_aliases has been commented out.
		If this driver is no longer needed, it may be removed
		by booting into the 'opensolaris-2' boot environment
		and invoking 'rem_drv pcie_pci' as well as removing
		line 2 from /etc/driver_aliases or, before rebooting,
		mounting the 'opensolaris-2' boot environment and
		running 'rem_drv -b  pcie_pci' and
		removing line 2 from /etc/driver_aliases.

	Work-around: These messages can be ignored.

6877673 add_drv fails with a permissions entry with a minor name including a comma

	When using image-update or the Package Manager to update,
	messages of the following form may be displayed

		driver (clone) upgrade (removal of minor perm
		'vnic 0666 root sys') failed with return code 252
		command run was: /usr/sbin/update_drv -b /tmp/tmp65jZ-x -d
		-m vnic 0666 root sys clone
		command output was:
		No entry found for driver (clone) in file


		driver (asy) upgrade (addition of minor perm
		'*,cu 0600 uucp uucp') failed with return code 255
		command run was: /usr/sbin/update_drv -b /tmp/tmp65jZ-x -a
		-m *,cu 0600 uucp uucp asy
		command output was:
		Option (-m) : missing token: (*)

	Work-around: These messages can be ignored.

9568 image-update produces driver removal of policy warnings

	When using image-update or the Package Manager to update from
	builds prior to 123, warnings of the following form may be
	displayed during a packaging update

		driver (ibd) upgrade (removal of policy
		'read_priv_set=net_rawaccess write_priv_set=net_rawaccess')
		failed: minor node spec required.

	Work-around: These messages can be ignored.

10778 image-update to snv_120 produces warnings about etc/sma/snmp/mibs

	When using image-update or the Package Manager to update to
	build 120 or later, a message of the following form may be
	displayed

		Warning - directory etc/sma/snmp/mibs not empty - contents
		preserved in

Thursday Sep 03, 2009


Support Software Freedom Day

Change your world on September 19, 2009. Find an event near you.

Monday Nov 03, 2008

How to configure advanced kadmind logging in Solaris

After some experimenting and looking at source I've determined that kadmind does have support for rotating its own log separately from the krb5kdc log (by default kadmind logs to the log used by krb5kdc). To configure this, edit /etc/krb5/krb5.conf and add:
        admin_server = FILE:/var/krb5/kadmin.log
        admin_server_rotate = {
                period = 1d
                versions = 10
        }
in the [logging] section. Unfortunately, this is not documented properly in the krb5.conf man page but it basically works the same as the kdc_rotate parameter which is documented in man krb5.conf.
Note, to configure both the kdc and kadmind logging behavior to log to separate files, use something like:
# commenting out default so kadmind will log to a separate file
#        default = FILE:/var/krb5/kdc.log
        kdc = FILE:/var/krb5/kdc.log
        kdc_rotate = {
# How often to rotate kdc.log. Logs will get rotated no more
# often than the period, and less often if the KDC is not used
# frequently.
                period = 1d
# how many versions of kdc.log to keep around (kdc.log.0, kdc.log.1, ...)
                versions = 10
        }
# controls kadmind logging
        admin_server = FILE:/var/krb5/kadmin.log
        admin_server_rotate = {
                period = 1d
                versions = 10
        }
This is the supported way to rotate the krb5kdc and kadmind logs. Also note that the kdc.conf man page is in error regarding the logging section. Use krb5.conf to control KDC logging instead.
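For the new [logging] settings to take effect, the daemons have to be restarted. On Solaris the Kerberos daemons are managed by SMF; a sketch, assuming the standard service FMRIs:

```shell
# Sketch: restart the KDC and kadmind so the new [logging] settings in
# /etc/krb5/krb5.conf take effect (assumes the standard Solaris SMF
# service names for the Kerberos daemons).
svcadm restart svc:/network/security/krb5kdc:default
svcadm restart svc:/network/security/kadmin:default
```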

Friday Mar 07, 2008

The Rough Guide to configuring a Solaris KDC to use a LDAP DS for the KDB

Steps to configure a Solaris KDC and LDAP directory to store and
retrieve Kerberos records from the LDAP Directory Server.

OpenSolaris and Solaris 10 Update 5 were recently updated to allow a
Solaris KDC to use an LDAP Directory Server as the repository for the
Kerberos database.  This is a rough how-to for enabling this feature.

This document assumes that the LDAP directory already exists on the
Directory Server (DS).  Also note that the examples below assume the DS
is a Sun Java System Directory Server.  Other DS's can be used but the
procedure for creating and exporting the SSL certificate may differ.

In addition the examples below use various host names and domain names.
Wherever these occur you must substitute the appropriate host/domain
names for your environment.

Note, the Solaris KDC LDAP plugin only supports simple binds protected
by SSL (ldaps:) to the DS, unlike the MIT KDC LDAP plugin which also
supports unix domain (ldapi:) binds to the DS.  The reason the Solaris
KDC LDAP plugin is so restricted is that the native Solaris LDAP
library does not support unix domain binds at this time.  Unix domain
binds do not require the SSL configuration below but do require that
the KDC run on the same system as the DS.  SSL binds allow the KDC to
run either on the DS system or on a separate system.

1. Configure SSL for use between the KDC and DS.  Make sure the DS has a
   certificate and import that on the KDC if it is on a different
   system.  Here's the way to do that:

    (Note, the certificate for the DS must be stored in a local NSS
     compat certificate DB and the ldap_cert_path dbmodule parameter must
     point to the directory where the certificate DB resides.)

    ################## SSL Cert config steps Start ################################

    Steps to configure Solaris 10 to use a DS 6.1 self-signed certificate.
    When the directory instance is created it automatically creates a
    self-signed (SS) certificate.

    To use this on S10 as a client do:

    Export the PEM encoded DS SS certificate and save to
    /var/tmp/ds_cert (note /export/sun-ds6.1/directory/alias is the
    Directory instance).

        /usr/sfw/bin/certutil -L -n defaultCert \
            -d /export/sun-ds6.1/directory/alias \
            -P 'slapd-' -a > /var/tmp/ds_cert.pem

    Create the local certificate DB, just hit return twice when asked for a password:

        /usr/sfw/bin/certutil -N -d /var/ldap

    Add the DS SS PEM certificate to the local certificate DB:

        /usr/sfw/bin/certutil -A -n defaultCert -i /var/tmp/ds_cert.pem -a -t CT -d /var/ldap

    Or add the der certificate:

        /usr/sfw/bin/certutil -A -n defaultCert -i /var/tmp/ds_cert.der -r -t CT -d /var/ldap

    To test that SSL is working use (this example assumes that the
    "cn=directory manager" entry has admin privileges on the LDAP
    directory):
        /usr/bin/ldapsearch -Z -P /var/ldap -D "cn=directory manager" \
            -h  -b "" -s base objectclass='*'


    Note that certain versions of JES DS 6.* auto-create a self-signed
    certificate where:

                "CN=ds-server,CN=636,CN=Directory Server,O=Sun Microsystems"

    The problem with this is that CN=ds-server is the short hostname
    when it should really be the long, fully qualified form.

    End of Solaris 10 SSL config steps.

    Steps to configure an OpenSolaris KDC to use a DS 6.1 self-signed
    certificate:
    1. On DS export the self-signed DS certificate:

        /export/sun-ds6.1/ds6/bin/dsadm show-cert -F der /export/sun-ds6.1/directory2 \
            defaultCert > /tmp/defaultCert.cert.der

    2. On OpenSolaris KDC import the DS certificate:

        pktool setpin keystore=nss dir=/var/ldap
        chmod a+r /var/ldap/*.db
        pktool import keystore=nss objtype=cert trust="CT" \
            infile=/home/willf/pub/defaultCert.certutil.der \
            label=defaultCert dir=/var/ldap

    To list certs in the keystore:
        pktool list keystore=nss objtype=cert dir=/var/ldap

    To test try:

        /usr/bin/ldapsearch -Z -P /var/ldap -D "cn=directory manager" \
            -h  -b "" -s base objectclass='*'

    or use shortname for host depending on the certificate.

    Note that if the self-signed DS certificate has expired, this will fail so
    remember to check the expiration date on the certificate and renew it if
    needed.  To renew using Java Enterprise DS 6.1 use:

    ds6/bin/dsadm renew-selfsign-cert ./directory0 defaultCert

    then export the certificate on the DS and import that on the clients/KDCs.

    ################## SSL Cert config steps End ##################################

2. Populate the directory (depends on DS so this may not be necessary).
   Note native Solaris idsconfig can be used to set up the standard
   directory root and containers if needed.

    If idsconfig is run to set up the standard directory root and
    containers, choose crypt for the password protection during the setup:

    Do you want to store passwords in "crypt" format (y/n/h)? [n] y
    "2 proxy" for Credential level,
    "2 simple" for the Authentication Method.

3. Add the Solaris Kerberos schema ldif file to the existing schema.  It
   is located here: /usr/share/lib/ldif/kerberos.ldif

   ldapmodify -h -D "cn=directory manager" \
        -f /usr/share/lib/ldif/kerberos.ldif

4. Create the Kerberos directory container entry in the LDAP directory.
   To create krbcontainer and realm on the DS use:

    - First make sure ldap_kerberos_container_dn in krb5.conf is set
      properly (note dc=central,dc=sun,dc=com is the DS root container
      in these examples):

ldap_kerberos_container_dn = "cn=krbcontainer,dc=central,dc=sun,dc=com"

      In addition in the krb5.conf realm section set:

database_module = LDAP

      and add a [dbmodules] section like so:

    LDAP = {
        ldap_kerberos_container_dn = "cn=krbcontainer,dc=central,dc=sun,dc=com"
        db_library = kldap  # this must be kldap
        ldap_kdc_dn = "cn=kdc service,ou=profile,dc=central,dc=sun,dc=com"
        ldap_kadmind_dn = "cn=kadmin service,ou=profile,dc=central,dc=sun,dc=com"
        ldap_cert_path = /var/ldap # path to NSS cert DB where the DS cert is stored
        ldap_servers = ldaps:// # or use shortname depending on cert
    }

    Here is an example krb5.conf:

[libdefaults]
        default_realm = ACME.COM

[realms]
        ACME.COM = {
                # Note, the KDC can run on the same system as the DS or
                # another system.
                kdc =
                admin_server =
                database_module = LDAP
        }

[domain_realm]
 = ACME.COM

[dbmodules]
    LDAP = {
        db_library = kldap
        ldap_kerberos_container_dn = "cn=krbcontainer,dc=central,dc=sun,dc=com"
        ldap_kdc_dn = "cn=kdc service,ou=profile,dc=central,dc=sun,dc=com"
        ldap_kadmind_dn = "cn=kadmin service,ou=profile,dc=central,dc=sun,dc=com"
        ldap_cert_path = /var/ldap
        ldap_servers = ldaps://
    }

# The rest is standard for krb5.conf ...

      (Note there is a minimum of 2 for the ldap_conns_per_server
      krb5.conf parameter.)
      The setting of ldap_servers should always be ldaps: since the code
      currently does not support an insecure bind to the DS, and ldapi:
      (LDAP over unix domain sockets) is not supported on Solaris at
      this time.  Also note that one should pay attention to
      the form of hostname found in the DS certificate.  If the short
      name is used then this should be used in the ldap_servers
      specification.  For example if the certificate has CN=ds-server in
      the Subject then:

ldap_servers = ldaps://ds-server

      should be used.  In most cases a CA issues certs with the fully
      qualified domain name.

    - Next to create the Kerberos database (KDB) in the LDAP directory
      do (the -D arg is the DN of the DS admin allowed to make changes
      to the directory):

    kdb5_ldap_util -D <directory manager ID> create -r <realm> -s


    kdb5_ldap_util -D "cn=directory manager" create -r ACME.COM -s

      This creates the krbcontainer and several other directory objects.
      It also creates a /var/krb5/.k5.<realm> master key stash file
      which is used by the KDC for decrypting the principal keys.

5. Create KDC service roles and stash the role passwords:

    For security/safety each KDC service (kdc and kadmin) has its own
    role DN, allowing finer setting of the ACLs on the directory records.
    The ldap_kdc_dn role only needs read access at this point; the
    ldap_kadmind_dn role needs access to create, modify and delete
    entries.
   - First create the KDC service role entries on the DS:

    To securely add the KDC service roles to the directory the following
    should be entered on the keyboard interactive (stdin) entry mode of
    the ldapmodify command.  This can also be entered via a file however
    there are security concerns given the clear text password for the
    roles is contained in the file.
    The contents should look like:

dn: cn=kdc service,ou=profile,dc=central,dc=sun,dc=com
cn: kdc service
sn: kdc service
objectclass: top
objectclass: person
userpassword: <some cleartext password for the kdc role>

dn: cn=kadmin service,ou=profile,dc=central,dc=sun,dc=com
cn: kadmin service
sn: kadmin service
objectclass: top
objectclass: person
userpassword: <some cleartext password for the kadmin role>

    Note that if more protection is desired for the role password in the
    directory the following can be used.

userpassword: {md5}<role password converted to MD5 output>

    To get the MD5 conversion of the password, "digest -a md5" can be
    used.
    To create the role entries in the LDAP directory do:

    ldapmodify -a -Z -h -D "cn=directory manager"

    and enter the role information.  Use -Z to protect the information
    in transit.

   - Next the following parameters should be set in krb5.conf so the KDC
     can determine what DNs to use when binding to the DS:

    ldap_kdc_dn = "cn=kdc service,ou=profile,dc=central,dc=sun,dc=com"
    ldap_kadmind_dn = "cn=kadmin service,ou=profile,dc=central,dc=sun,dc=com"

   - Next stash the cleartext passwords for the kdc and kadmin service
     role DNs in the local password stash file used by the KDC to bind
     to the DS:

    kdb5_ldap_util stashsrvpw "cn=kdc service,ou=profile,dc=central,dc=sun,dc=com"
    kdb5_ldap_util stashsrvpw "cn=kadmin service,ou=profile,dc=central,dc=sun,dc=com"

    which creates a /var/krb5/service_passwd file entry for the DN of the
    kdc and kadmin services.  This should contain the simple bind password
    that the KDC will use to authenticate and bind to the DS.  Note,
    the password must be entered as the clear text password (do not
    convert to MD5 if that was how the password was specified in the
    role entry).

    At this

6. Set the appropriate ACLs for the KDC related roles in the LDAP
   directory:

cat <<EOF | ldapmodify -h -D "cn=directory manager"
# Set kadmin ACL for everything under krbcontainer.
dn: cn=krbcontainer,dc=central,dc=sun,dc=com
changetype: modify
add: aci
aci: (target="ldap:///cn=krbcontainer,dc=central,dc=sun,dc=com")(targetattr="krb\*")(version 3.0;\\
      acl kadmin_ACL; allow (all)\\
      userdn = "ldap:///cn=kadmin service,ou=profile,dc=central,dc=sun,dc=com";)

# Set kadmin ACL for everything under the people subtree if there are
# mix-in entries for krb princs:
dn: ou=people,dc=central,dc=sun,dc=com
changetype: modify
add: aci
aci: (target="ldap:///ou=people,dc=central,dc=sun,dc=com")(targetattr="krb\*")(version 3.0;\\
      acl kadmin_ACL; allow (all)\\
      userdn = "ldap:///cn=kadmin service,ou=profile,dc=central,dc=sun,dc=com";)
EOF

7. Add kadmin princs to the kadm5.keytab:
(Note: <FQDN> must be replaced with the fully qualified domain name of
the system that the KDC is running on.)  For example:

kadmin.local -q "ktadd -k /etc/krb5/kadm5.keytab kadmin/<FQDN>"
kadmin.local -q "ktadd -k /etc/krb5/kadm5.keytab changepw/<FQDN>"
kadmin.local -q "ktadd -k /etc/krb5/kadm5.keytab kadmin/changepw"
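The three ktadd commands above can be scripted so the FQDN is filled in automatically (a sketch; it assumes hostname(1) returns the fully qualified name on your KDC, which may need adjusting):

```shell
#!/bin/ksh -p
# Sketch: fill in <FQDN> automatically when adding the kadmin princs.
# Assumes hostname(1) returns the fully qualified name of the KDC host.
fqdn=$(hostname)
keytab=/etc/krb5/kadm5.keytab
for princ in "kadmin/${fqdn}" "changepw/${fqdn}" "kadmin/changepw"; do
    kadmin.local -q "ktadd -k ${keytab} ${princ}"
done
```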

8. Now start the KDC daemons.  At this point the KDC should be
   functioning to issue tickets and kadmin should work.

Other administrative commands:

To destroy a realm, use:

kdb5_ldap_util -D "cn=directory manager" destroy


To mix Kerberos principal attributes into a directory entry of the
people objectclass: (note, this mix-in is useful when there are entries
of a non-krb objectclass type, like a people objectclass, that one wants
to associate krb principal attributes with.)

1. Prep each entry to include krb principal attributes.  It should be modified
   like so:

# Adding the krbprincipalaux, krbTicketPolicyAux and krbPrincipalName
# attributes
cat <<EOF | ldapmodify -h -D "cn=directory manager"
dn: uid=willf,ou=people,dc=central,dc=sun,dc=com
changetype: modify
add: objectClass
objectClass: krbprincipalaux
objectClass: krbTicketPolicyAux
-
add: krbPrincipalName
krbPrincipalName: willf@ACME.COM
EOF

    Note the first component of the krbPrincipalName attribute does not
    have to match the entry's uid, but it's generally good if this is the
    case.  (I need to double check this but I think it's also possible
    to have more than one principal entry associated with a non-krb
    object, like this people class entry.)

    Ideally one would write a script to pull all the people DNs from
    the LDAP directory, create a file that modifies them as above, and
    feed that back to ldapmodify.

2. Add a subtree attribute to the appropriate realm container entry:

        kdb5_ldap_util -D "cn=directory manager" modify \\
            -subtrees 'ou=people,dc=central,dc=sun,dc=com' -r ACME.COM

    This informs the KDC LDAP plugin that it should also search for
    principal entries under the 'ou=people,dc=central,dc=sun,dc=com' DN
    in addition to the ACME realm container DN, which is searched by
    default.
3. Then either run addprinc (via kadmin or kadmin.local) to add the
   rest of the principal attributes, including the secret key:

    kadmin.local -q 'addprinc willf'

   Or do:

    kdb5_util load -update dumpfile

   to migrate existing krb princs from the dumpfile to the LDAP
   KDB.
To migrate all db2 KDB entries do:

1. While krb5.conf is configured to access the db2 KDB:

    kdb5_util dump > dumpfile

2. Configure krb5.conf to use the LDAP KDB then do:

    kdb5_util load -update dumpfile

Note, kdb5_util uses the ldap_kadmind_dn role to bind by default.  This
can be overridden via the -x binddn arg, but to be secure the DN's
password should be stashed beforehand (see the previous steps for how to
do this).
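Putting the migration together, with the optional -x binddn override mentioned above (the dumpfile path is an assumption):

```shell
# Sketch of the db2 -> LDAP KDB migration; /var/tmp/kdb.dump is an
# assumed scratch path.
# 1. With krb5.conf still pointing at the db2 KDB:
kdb5_util dump > /var/tmp/kdb.dump
# 2. Edit krb5.conf to use the LDAP KDB, then load the dump:
kdb5_util load -update /var/tmp/kdb.dump
# To bind as a DN other than ldap_kadmind_dn (its password must already
# be stashed -- see the stashsrvpw step earlier):
kdb5_util -x binddn="cn=kdc service,ou=profile,dc=central,dc=sun,dc=com" \
    load -update /var/tmp/kdb.dump
```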

Wednesday Sep 05, 2007

Update on Playing with Solaris memory debuggers

A long time ago I wrote about my experiences playing with various memory debuggers in Solaris. One thing I mentioned was:
Note, a core dump is not necessary.  Use "mdb -o nostop -p PID" where PID
is the proc. ID of the running process and then do the findleaks stuff:

echo '::findleaks' | mdb -o nostop -p $(pgrep gssd)
echo '000cb608::bufctl_audit' | mdb -o nostop -p $(pgrep gssd)
The "mdb -o nostop" trick does not work in the upcoming Solaris Nevada. Instead use:
	gcore $(pgrep -x daemon_name)
to get a core dump of a running process then do:
	echo "::findleaks" | mdb core_produced_by_gcore
and so on. You can read the rest of my previous blog entry here.
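For example, checking a running gssd for leaks on Nevada might look like this (the core.<pid> file name assumes the default gcore output naming):

```shell
# Sketch: leak-check a running gssd via gcore + mdb on Solaris Nevada.
# Assumes gcore's default output name of core.<pid>.
pid=$(pgrep -x gssd)
gcore $pid                         # writes core.$pid in the cwd
echo ::findleaks | mdb core.$pid
```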

Wednesday Nov 16, 2005

An Additional Use of NFSsec using krb5i

Using NFSsec with the krb5i method of security protection provides Kerberos authentication plus integrity checking of each NFS message transferred between client and server. This is certainly useful in guaranteeing that NFS data is not maliciously manipulated. The integrity checking can also be useful when using NFS over unreliable networks, where data corruption may not be caught by the CRC checksum used by TCP. This happened at Sun some time ago: a busy server was connected to the network via a faulty switch, and there were problems with a code build as a result. If NFSsec with krb5i had been used at that time there would not have been a problem with data corruption.
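On the client side, a krb5i NFS mount looks roughly like this (the server and export names are made-up placeholders, and Kerberos must already be configured on both ends):

```shell
# Hypothetical mount using krb5i; server/export names are placeholders.
mount -F nfs -o sec=krb5i buildsrv:/export/ws /mnt/ws
# Or make it permanent in /etc/vfstab or the automounter maps with the
# same sec=krb5i option.
```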

Monday Oct 24, 2005

debugging Solaris Kerberos and GSS-API apps with dtrace

I created a dtrace script based on Brendan Gregg's dapptrace script that allows one to easily trace all the function calls of a GSS or Kerberos application on Solaris. It can be run in a manner similar to apptrace and can be either given a command line to trace or a PID. The script, called krbdtrace is here. Note that this script can show the entire call path in the Kerberos and GSS-API libs. It shows return codes but does not handle the return types (they are all displayed as integers). Still, if there is an error that is hard to diagnose and krbdtrace shows a function returning a negative number, that may provide a clue as to what is failing. Update: it was pointed out to me that truss supports -u mech_krb5:: which will show more detail in tracing user level Kerberos code. Still, the krbdtrace script could be useful as a template for using dtrace to do things that truss cannot, such as reporting only functions that return negative integers and so on.
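As a sketch, here is the truss variant mentioned above plus a dtrace one-liner in the same spirit as krbdtrace that reports only mech_krb5 functions returning negative values (the probe spec and library object name are assumptions; adjust them to match your system):

```shell
# Trace user-level Kerberos calls in a running process with truss:
truss -u 'mech_krb5::' -p <PID>

# Hypothetical dtrace sketch: report mech_krb5 functions that return a
# negative value while running kinit (probe/library names are assumptions).
dtrace -n 'pid$target:mech_krb5.so.1::return /(int)arg1 < 0/ {
    printf("%s returned %d", probefunc, (int)arg1);
}' -c 'kinit user'
```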


Friday Jul 29, 2005

More about libumem

I just became aware of another blog with scripts that integrate libumem memory debugging with dbx.
You can read more about the dbx module here:
You can read more about libumem here:

Thanks to Chris Quenelle for this information!

As an aside, I've found libumem to be very useful in doing QA testing. I love to pound on code with these settings exported in the environment:


One can find memory leaks, double free's, writes past allocated buffers, and more with libumem. And integrated with dbx and -g compiled source, finding the problem will be easier than ever.
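The exported settings referred to above were dropped from this page; based on the memory-debugger notes in the "Playing with Solaris memory debuggers" post below, they were likely along these lines (an assumption, not the author's verbatim list):

```shell
# Assumed libumem QA settings (reconstructed from the notes below,
# not the author's verbatim list):
UMEM_DEBUG=default
UMEM_LOGGING=transaction
export UMEM_DEBUG UMEM_LOGGING
# then run the program under test with libumem preloaded, e.g.:
#   LD_PRELOAD=libumem.so.1 ./program_under_test
```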

Thursday Jun 16, 2005

Everything You Wanted to Know About Kerberos Enctypes But ...

I wrote a presentation about Kerberos encryption types (enctypes) and how they are used in Kerberos. It is aimed at both developers and administrators. You can download the PDF version here. Note, earlier versions of the presentation had a Sun Confidential label on the bottom of the slides, which was left there by mistake. I have removed this label in the latest version of the presentation. I've updated the presentation slightly as of Oct 8, 2007.


Tuesday Jun 14, 2005

Playing with Solaris memory debuggers


The following are notes I made for myself while using various Solaris memory debugging libraries. Since they were originally for my own consumption, I cannot vouch for the grammar.

UserSpace info:

I was playing with both watchmalloc and libumem to see what they can do.
Here's what I observe:

Using environment variables: MALLOC_DEBUG=WATCH,RW

watchmalloc can find bugs like (using MALLOC_DEBUG=RW):

foo = *p; /* invalid read */

*p = 1; /* invalid write */

memset(p, 1, 10); /* write past buffer */

Note, watchmalloc will core dump when it detects an error.  And it causes
programs to run MUCH slower.


libumem does have some memory guards to detect invalid writes but it requires
use of the mdb command ::umem_verify on the core.  It does not catch bad reads
like watchmalloc but it does have guards with patterns like 0xdeadbeef and
0xbaddcafe.  And it does not slow down program execution like watchmalloc.

With libumem one can do memory leak detection (using
UMEM_LOGGING=transaction UMEM_DEBUG=default):


then do 'echo ::findleaks|mdb core' to see: 

CACHE     LEAKED   BUFCTL CALLER
000bb088       1 000cb608 main+8
   Total       1 buffer, 320 bytes

then use:

echo '000cb608::bufctl_audit' | mdb core

to see the stack trace where the leak allocation took place.
Note, LEAKED is the number of times a leak occurred.

Note, a core dump is not necessary.  Use "mdb -o nostop -p PID" where PID
is the proc. ID of the running process and then do the findleaks stuff:

echo '::findleaks' | mdb -o nostop -p $(pgrep gssd)
echo '000cb608::bufctl_audit' | mdb -o nostop -p $(pgrep gssd)

This is good for daemons.  Or use "gcore <PID>" to get a core dump of a
running process.  This is useful to look for leaks in daemons like krb5kdc.

Also, to watch the memory size of a running daemon to see if it is growing over
time use:

prstat -p <PID of daemon> 300 > /tmp/prstat.out

Both watchmalloc and libumem will core dump on double free()'s.

Based on the above it seems like it would be good to use watchmalloc for
some memory corruption testing and use libumem for memory leak detection.

Note use:

print ::umem_status|mdb core

to see umem status for user space core when debugging with libumem.


print ::umalog | mdb core

to see umem transaction log and stack traces.

It's also good to do:

print ::stack | mdb core

and look at the stack trace (note, the values in each function listed
are the input registers %i0-5).
Look at umalog to see if it's possible to determine if memory was
freed earlier.  Use:

print "<address>::dis" | mdb core

to see the assembly around the stack function address to see where the
call was made (look for other calls).

Kernel memory debugging info:

In /etc/system do:

    * kmem lite flag, must use independently of other kmem flags
    * set kmem_flags = 0x100
    * kmem flags: audit, test, redzone
    set kmem_flags = 0x7

and reboot system.
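After rebooting, the flag value can be confirmed against the live kernel (a sketch using mdb -k; requires privileges):

```shell
# Sketch: verify kmem_flags on the running kernel.
echo "kmem_flags/X" | mdb -k
# Expect 7 (audit|test|redzone) if the /etc/system line above took effect.
```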


Use:

echo "::dcmds" | mdb unix.0 vmcore.0

to see debugging commands (look for kmem stuff).

Note, the kernel kmem outputs debug messages to syslog.


wx and You: your friend in managing workspace change


"Relieves OpenSolaris putback anxiety or your money back."

The following is a webpage I wrote to introduce Sun developers to a source code workspace management tool, wx, that I extensively modified after getting burned when using it to collapse a delta on a renamed file. Actually it was about a year after that incident that I realized, after talking to different folks at Sun, how wx could be modified to make my life easier (I had a hard time remembering all the development rules and I hate having to do things manually that can be automated). During my recovery from back surgery I started hacking on wx and discovered there was a lot that could be done to make the tool better. At the time I was not thinking that my version would be official so I called it wwx and let some of my Kerberos team members try it out. Within a month I started getting calls/e-mails from other Sun developers with thanks, suggestions and bug reports on wwx. Things snowballed from there and eventually my changes made it into the official version of wx.

Note that the contents below are oriented towards internal development in Sun using Teamware workspaces so some items may not be applicable to OpenSolaris development. It looks like we're moving to Mercurial controlled workspaces which are not compatible with wx. Instead one should use the Cadmium extensions to Mercurial. You can read more about this on the OpenSolaris site.


  1. About wx

  2. Examples

    1. How I normally use wx

    2. Getting a project gate/workspace ready for putback

  3. Advanced Usage

    1. How to skip particular pbchk/nits checks

    2. How to skip files in webrevs

    3. How to edit the active list (and deal with new files that haven't been "created")

    4. Environment Variables

    5. Tips for use with bugtraq

    6. How to return a file to its parent version

    7. How to initialize in a STC workspace (non-default src path)

    8. How to debug wx

  4. Credits

About wx

wx's reason for being is to help the developer keep track of file changes in a workspace and follow the Solaris source gate rules. Over the last several months I decided to enhance wx to be more functional and robust. As a result wx now does the following:

  • Keeps track of changed and new files in the active list.

  • Keeps track of renamed and deleted files in the renamed list.

  • Keeps track of comments associated with files in the active list.

  • Provides a wrapper around the underlying SCCS and workspace commands used to manipulate files in a workspace. This allows wx to update the active and renamed lists automatically and also do some checking to help the developer conform to the OpenSolaris gate rules.

  • The pbchk command does a number of new checks on active files and comments to make sure they conform to OpenSolaris rules. This includes looking for "sccs rmdel" which causes Teamware problems, too many deltas, files that are checked out that are not in the active list, poorly formed active list comments, active list comments that don't match those in the SCCS delta, and whether the RTI is approved for the bugs listed in the active list comments.

  • The webrev command generates accurate webrevs that include renamed/delete info. A webrev is a set of HTML code diffs that allow easy review with a browser.

  • The backup and restore commands allow easy backup and restore of files in the active and renamed list to a directory in your home directory which is usually backed up and thus safer than a workspace on a build system.

  • The putback/pb command allows the developer to do a putback with more safety and convenience.

  • The reedit command is enhanced to make it safer to use on files that have been renamed. Note, the reedit command is used to collapse deltas of files in the active list which is useful if the file was merged as a result of a resolve or anything else that could cause the file to be checked out and in more than once. Note, there is also a new "redelget" command that collapses file histories but leaves the file checked in. This is currently the only way to collapse new files.

  • Provides several informational commands so the developer can see what files are renamed, deleted, new, etc...

  • The mv and rm commands check for cyclic renames which cause problems for Teamware.

  • Provides a number of command aliases for ease of use.

  • See "wx help" for a list of all the commands, aliases and flags.

wx issues that the new wx fixes

  • Old "wx reedit" treats renamed files as new files, losing the file history in the process. New wx uses the Teamware nametable hash info to determine if a file has been renamed or is new. This is one reason new wx is slower than old wx but more accurate in this regard. And this is why both the parent workspace and local workspace nametables must be accurate. If wx thinks a file is new it will warn and ask the user if it is okay to proceed with the reedit (answer "no" if the file is not new).

  • Old "wx reedit" used the modification time of the SCCS delta files to determine if the parent file contains a delta that the child doesn't. This is risky because the child delta could be modified by a check in and the result would be that the reedit would lose the parent's delta code changes. New wx looks at the latest delta comment in the parent and determines if this exists in the child's delta history. If it doesn't then wx skips the reedit for that file and warns the user that a bringover is required.

  • Old "wx new" lists renamed files as new. New wx doesn't do that but it is slower as a result of more checking.

  • Not really a bug but old wx uses the paradigm of either editing the active list manually to add entries or using the update command to search all the directories to find files that are checked out (adding them to the active list). This means that the user must remember to update the active list after they check out files using "sccs edit". New wx can automatically update the active list when a file is being checked out with "wx co file" (edit/checkout/co are all aliases for the same command). This requires the developer to do fewer steps and the active list is more likely to stay in sync with the file changes in a workspace.

The best way to use new wx is to do all file manipulation in a workspace with the wx commands (do not use SCCS commands). Doing so consistently automatically updates the active, new and renamed lists and thus does not require that the update command be run in order to update these lists. Be aware that in order for wx to work properly the active and renamed file lists must be accurate. So if you do use SCCS commands or workspace filemv or filerm then you will need to use the 'wx update' command. Note, if you decide to remove a newly created file which is in your active list from your workspace, use 'wx rm file' to remove the file so the active list is updated accurately.

New wx now keeps track of file renames and deletes in a renamed list. This is separate from the active list since the active list only stores info on files that have been edited or are newly created. This separation allows wx commands like reedit to run only on active files and not files that have only been renamed/deleted. Note, if a file in the active list is renamed it will also appear in the renamed list.

New wx assumes that the parent workspace associated with the current child workspace contains the same set of files (except for new files) that the child workspace contains. Be careful if you change the parent of the child workspace since wx will assume that if it cannot locate a file in the parent then the local file is new. If this assumption is wrong, wx can output erroneous information or in the case of the reedit command, delete the SCCS history. Use "wx new" to see which files wx thinks are new.


Here's how I normally use wx:

  1. Create local workspace then do "ws workspace" and bringover files. Note, you do not have to do "ws workspace" in order to use wx in a workspace. Instead, just cd to a workspace and start using wx (note, wx will complain if you ws'ed to one workspace and cd to another and then try to use wx).

  2. "wx init". Initialize wx in a workspace (creates the wx subdir and some files for tracking changes). If I haven't checked out any files I use the "no update" option which creates an empty active list (fast). If you have modified files you should use one of the other update options listed. Note, you can keep an active list sorted by default or sort it manually by using the "wx update -q -s" command.

  3. "wx co file_to_modify ...". (Note, co is an alias for checkout/edit. This does a "sccs edit file_to_modify ..." and adds an entry to the active list for each file on the command line. For example I would "cd usr/src" then "wx co Makefile" to check out the Makefile for editing which would also put an entry for usr/src/Makefile in my active list.) Use "wx list" or "wx active" to list files in the active list.

  4. "wx create new_file ...". New files need to be "created" to be recognized by the Teamware tools. This command starts a new file history for the new file and adds an entry in the active list. Use "wx new" to list new files in the active list.

  5. "wx rm file_to_delete ...". Use this if you need to remove a file in the OpenSolaris approved way. wx will actually move the file from usr/ to deleted_files/ and add an entry in the renamed list. Note, for new files the files are not renamed and wx will ask if you want to remove the files completely from the workspace. Typically you would answer yes but make sure you have a backup of the files just in case you need them. The renamed list will not be updated in this case but entries in the active list will be removed. Use "wx renamed" to list renamed files.

  6. "wx mv file new_file". Use this if you need to rename file to new_file. Will add an entry in the renamed list. The active list will be updated with the new name if file is in the active list. Use "wx renamed" to list renamed files. See "wx help" for more info.

  7. "wx bu -t" (bu is alias for backup, -t only backup if needed). Use this periodically to back up files that wx is tracking. The files are backed up in the wx.backup subdir in your home directory, which is useful since most build systems don't back up workspaces, unlike your home directory. Note, "wx backup -i" provides info about the backup dir and files. There are other flags to control compression. See "wx help" for more info.

  8. When I have checked out and modified all the files required for the bug fix I build and test. If the testing looks good I then do:

  9. "wx bu -t". Just to be safe.

  10. "wx ci -c comment_file". Check in all active files. The check in comments will be taken from the active list comments which will be replaced by the comments in the comment_file. If you do this you can skip editing the active list comments manually.

  11. "wx webrev". Generates HTML based code diffs under the webrev/ directory at top of workspace. Have code reviewed using a web browser.

  12. Once the code review is done and all changes have been made I file an RTI using the web RTI page. XXX Note, for OpenSolaris I am not sure how an RTI is filed but I will add a pointer to that info when I find it.

  13. Login to the putback gate system, "ws /net/path_to_your_workspace", and reparent your workspace if necessary (See "man workspace").

  14. "wx pbchk". Check files and comments for OpenSolaris gate rules conformance. Note, nits runs a subset of pbchk checks and is more useful for checking your files while you've got them checked out.

  15. If "wx pbchk" complains about multiple SCCS deltas/file versions, use "wx redelget -m" to collapse the deltas so there is only one version difference between the local file and the parent. This will also reset the file history for new files to 1.1. Note, reci is an alias for redelget.

  16. "wx pb -n". Do a trial putback and check for conflicts. If there are conflicts, run "bringover files_in_conflict" then "wx resolve" to resolve the conflicts and collapse the deltas created by the merge ("wx resolve" uses the reedit subcommand to collapse the SCCS deltas). Note, pb is an alias for putback. You can see the list of files that wx provides to the putback command by doing "wx pblist". "wx pbcom" will display the putback comments. If you resolved conflicts then repeat steps 10-15.

  17. "wx pb". Do the real putback. Note, wx will do some checking and will prompt the user before actually doing the putback. It will pass the comments found in the active list to the putback command and will display the putback comments before the prompt to do the putback. You can also use "wx pbcom" to see what the putback comments will be.

  18. After the putback is done, if I intend to do more bug fix work in this workspace I'll save the current wx directory (either tar it or rename it), remove the current wx directory if I didn't rename it and then do "wx init" to initialize wx with fresh state. That way there is less chance of getting confused about the changes made for the new bug fix.
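Condensed into a shell session, the steps above look roughly like this (the workspace path, file names, and comment file are made-up placeholders):

```shell
# Hypothetical condensed wx session following the steps above; paths
# and file names are placeholders.
ws /net/builds/myws            # set the workspace
wx init                        # initialize wx state
wx co usr/src/Makefile         # check out files to edit
# ...edit, build, test...
wx bu -t                       # back up tracked files
wx ci -c /tmp/comment_file     # check in all active files with comments
wx webrev                      # generate code review diffs
wx pbchk                       # gate-rules conformance checks
wx pb -n                       # trial putback, check for conflicts
wx pb                          # real putback
```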

Getting a project gate/workspace ready for putback to the OpenSolaris gate:

    Note, the following describes the model that I use when working on a project that will require a significant amount of time and code change. In order to reduce the pain involved, my model uses a project source gate created by copying the source from the official OpenSolaris gate. Development occurs in workspaces that are children of this project gate, with intermediate putbacks from these child workspaces going back to the project gate. This allows the project to use its own rules regarding number of deltas in a putback and other issues without regard to the official OpenSolaris gate rules. When the project is done and ready to be putback to the OpenSolaris gate these are the steps I use (featuring wx):

    First, a caveat about using wx in a project gate. Never use "wx reedit" or redelget (or any equivalent alias that collapses a SCCS delta) in a project gate that is the parent of developer workspaces. This will confuse the Teamware commands like putback and bringover when used between a child workspace and the project gate. Generally, it is best to avoid the wx reedit/redelget commands until it is time to putback to the OpenSolaris gate.

  1. Create a local copy of the official OpenSolaris gate (I'll call it opensolaris_copy in this example). The parent of opensolaris_copy should be the official OpenSolaris gate.

  2. "putback usr/src delete_files" from the project gate to opensolaris_copy. This leaves the project gate intact and any modification (collapsing) of file SCCS delta histories for the real OpenSolaris putback will be done in opensolaris_copy. Note, if the putback reports conflicts these will need to be resolved in the project gate. The resolve will change files in the project gate but child workspaces will be able to properly bringover those changes. Backup the project gate before doing a resolve. Again, do NOT use wx reedit/redelget in a project workspace if you want bringover to work in child workspaces.

  3. "ws opensolaris_copy". Set the local ON clone copy as the current workspace.

  4. "wx init -ft". Initialize the opensolaris_copy workspace using a thorough wx update. This will create the active and renamed lists with all the files that were changed in the project gate.

  5. "wx list". Make sure active list looks okay.

  6. "wx new". Make sure new list looks okay.

  7. "wx renamed". Make sure renamed list looks okay.

  8. "wx ea". Update the active list comments. The comments will be placed in the SCCS delta history for files in the active list when the "wx redelget" step is done below and will also be used as the putback comment.

  9. "wx redelget". Collapse all the active list files SCCS delta history so there will only be one SCCS delta for each file in the active list when they are checked in. Note, recheckin and reci are aliases for redelget.

  10. Do a nightly build in the opensolaris_copy gate. If any files need to be modified, do the modification in the project gate and putback the changes to the local copy. Do a "wx deltachk" to see if there are any delta problems.

  11. Follow steps 13-16 from the "How I normally use wx" list of wx steps to putback the files from the opensolaris_copy gate to the official OpenSolaris gate.

Advanced Usage

How to skip particular pbchk/nits checks

Some of the checks that the pbchk and nits commands run will skip active files listed in a file called wx/command.NOT where command is the name of the wx command doing the particular check. For example, if I want to skip cstyle checking of all my active list files when I run "wx pbchk" I would first do "wx list >wx/cstyle.NOT". Note, I only skip checks for which I know I have an exemption from the gatekeepers. For example, for some open source code in Solaris there are exemptions from the cstyle rules. Similarly, for files containing non-Sun copyrights, one may want to list files that are known not to have a copyright problem in "wx/copyright.NOT".

How to skip files in webrevs

List files (one per line) in wx/webrev.NOT that you don't want to include in webrevs generated by "wx webrev".

How to edit the active list (and deal with new files that haven't been "created")

The original wx required the user to edit the active list to associate comments with the active files (note, wx checkin now provides a -c comment_file option which will automatically update the comments in the active list as well as use them for the SCCS check-in delta). Also, some people like to manually add new file entries to the active list before they were 'sccs created' so they could use wx for backups and nits checking, etc... The format of an entry in the active list is:

[filepath]
[one or more comment lines]
#empty line

where [filepath] is the file path of an existing or new file relative to the top of the workspace. Here's an example entry with three comment lines (note, each entry starts with the filepath; there is no empty line before it in the example below):


4772119 Enabling Kerberos's Triple DES/SHA-1 Enctype Support
PSARC/2002/178 Enabling Kerberos's Triple DES/SHA-1 Enctype Support
4705662 GSS/Kerberos clients requesting dec-cbc-crc before des-cbc-md5

Note, if you are adding new files to the active list you should run "wx new -t" to populate the new list. New files that haven't been created will be created when "wx delget" is run (note checkin and ci are aliases for delget).

Environment Variables

  • PUTBACK: specifies the command to do the putback. This is useful if you want to use something like Casper Dik's turbo-dir.flp scripts as in this example: "export PUTBACK='cm_env -g -o putback'"

  • WXDIFFCMD: specifies the diff command and args for the wx diffs commands like diffs and pdiffs. This is similar to the CDIFFCMD and UDIFFCMD environment variables that webrev uses. A good setting is: "export WXDIFFCMD='diff -bw -U 5'"

  • WXWEBREV: specifies the webrev command and args used by the "wx webrev" command.

  • WXDIR: specifies the directory where wx will keep its state files. The default location is in wx/ at the top of the workspace; via WXDIR you can change this to point to a different path like /tmp/my_wx. This is useful when you want to run wx in a workspace where you don't have write permission.

Tips for use with bugtraq

  • For the suggested fix field in bugtraq use "wx pdiffs" output. If that's too large, save it in a file, add it as an attachment and mention this in the suggested fix note.

How to return a file to its parent version

To undo all changes in a file that you've checked out and set it to the current version in the parent workspace use: wx reset file
Note, this will bringover the file from the parent and will undo local file renames.

To undo all changes in a file and return it to the version when it was originally brought over from the parent do:

  1. wx reedit file

  2. wx unedit file

This does not bringover the file from the parent so the file contents will be that of the last bringover of that file. It also does not undo a file rename.

How to initialize in a STC workspace (non-default src path)

Since the source in a STC workspace doesn't use the wx default path of usr/src you'll need to initialize using "wx init usr/ontest" in order for it to find the source. Note, if the source you're working on isn't found under usr/src from the top of your workspace then you'll need to initialize using the common path starting from the top of your workspace.

How to debug wx

To turn on debug output in wx do "wx command -D command args". This will send debug info to stderr, so to page the output redirect it with 2>&1 (e.g. "wx pbchk -D 2>&1 | more").


Thanks go to Jeff Bonwick who wrote wx, David Robinson who had a great idea about using the nametable hashes, Brent Callaghan who wrote webrev, John Beck (code review), Anup Sekhar (testing), Glenn Barry, Arun Perinkolam, Wyllys Ingersoll, Chin-Long Shu, Alastair McDermott, Matt Simmons, Valerie Anne Bubb, Bill Sommerfeld, Heiner Steven, Nikolay Molchanov, Alan Burlison, Nico Williams, Ann-Marie Westgate, James Carlson, Jeff Smith, and last but not least my mother for pointing out that my caching methodology could be more generalized.




