News, tips, partners, and perspectives for the Oracle Solaris operating system

Customize network configuration in a Golden Image

Ken Powell
Principal Software Engineer

Many of our customers use golden images of Solaris with fully installed and configured applications to deploy their production systems. They create a golden image by capturing a snapshot of their staging system's files in a Solaris Unified Archive.

Solaris normally clears the staging system's network configuration during golden image installation. This makes sense given that each production system will have a different IP address and often resides in a different site with unique subnet prefix and default route.

There are times, however, when you might want parts of your network configuration to survive golden image deployment. Now that Solaris 11.4 stores its persistent network configuration in SMF, you can do this by creating custom installation packages that establish the network configuration settings you wish to preserve.

In a previous blog, I showed how I used the Automated Installer to replicate my SPARC staging system's network configuration on other systems. I simplified the effort by using SMF to extract the working configuration in SC-profile format. In this blog, I'll start with the staging system as it was previously configured and use the same SC-profiles I generated for that blog. I'll show how I packaged and installed one of the SC-profiles so that the common part of my network configuration survives golden image deployment. The process works exactly the same on X86 systems.

Do the following on the staging system

I split the staging system's configuration in my previous blog into three SC-profile files. One of these, the labnet-profile.xml file, set up a pair of high-speed Ethernet links configured to support 9000-byte jumbo frames in a link aggregation. This is the part of the network configuration I'll be preserving in my golden image.

root@headbanger:~# dladm
LINK                CLASS     MTU    STATE    OVER
aggr0               aggr      9000   up       net0 net1
net0                phys      9000   up       --
net1                phys      9000   up       --
net2                phys      1500   up       --
net3                phys      1500   up       --
sp-phys0            phys      1500   up       --
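If you need to regenerate such a profile rather than reuse one from the earlier blog, SMF can emit the current configuration in SC-profile format. A sketch of one way to do it (the exact svccfg invocation I used was covered in the previous blog, and the available options can vary between Solaris builds):

```shell
root@headbanger:~# svccfg -s svc:/network/datalink-management:default \
                   extract -a > labnet-profile.xml
```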

Place the network configuration profile in a directory from which to build a package.

root@headbanger:~# mkdir proto
root@headbanger:~# cp labnet-profile.xml proto/.

Generate the initial package manifest from the files under the proto directory.

root@headbanger:~# pkgsend generate proto | \
                   pkgfmt > labnet-profile.p5m

Use a text editor to

  • add descriptive package metadata to the manifest,
  • adjust the destination directory of the SC-profile file, and
  • add a package actuator to automatically import the SC-profile upon installation.

This example places the SC-profile file in etc/svc/profile/node/, which causes SMF to import the profile into its node-profile layer. The resulting manifest, with my changes applied, is shown below.

set name=pkg.fmri value=labnet-profile@1.0
set name=pkg.summary value="lab net configuration"
set name=pkg.description \
    value="My sample lab system net configuration"
file labnet-profile.xml \
    path=etc/svc/profile/node/labnet-profile.xml \
    owner=root group=bin mode=0644 \
    restart_fmri=svc:/system/manifest-import:default
Build the new package in a temporary package repository.

root@headbanger:~# pkgrepo create /tmp/lab-repo
root@headbanger:~# pkgrepo -s /tmp/lab-repo set publisher/prefix=lab
root@headbanger:~# pkgsend -s /tmp/lab-repo publish \
                           -d proto labnet-profile.p5m

Create a package archive from the contents of the package repository.

root@headbanger:~# pkgrecv -s /tmp/lab-repo \
                           -a -d lab.p5p labnet-profile

The package I created can be installed directly from this archive. The archive can also be easily copied and installed on other systems. This particular package could even be installed directly on an X86 staging system.
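Before distributing the archive, you can confirm it contains the expected package; pkgrepo can read a package archive directly (assuming lab.p5p is in the current directory):

```shell
root@headbanger:~# pkgrepo list -s ./lab.p5p
```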

Install the package just created on the staging system.

root@headbanger:~# pkg set-publisher -g lab.p5p lab
root@headbanger:~# pkg install labnet-profile
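To confirm that the package actuator imported the profile into the node-profile layer on the staging system, you can list the datalink properties restricted to that layer, for example:

```shell
root@headbanger:~# svccfg -s datalink-management:default \
                   listprop -l node-profile datalinks
```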

Complete development and testing of the staging system.

Create the unified archive containing the golden image. Note that the staging system's network interfaces must be running with access to the site's package servers when creating this golden image.

root@headbanger:~# archiveadm create /var/tmp/labsys.uar
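You can optionally sanity-check the archive before copying it off the system; archiveadm's info subcommand summarizes what was captured:

```shell
root@headbanger:~# archiveadm info -v /var/tmp/labsys.uar
```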

Copy the golden image from the staging system to a distribution site. I used "http://example-ai.example.com/datapool/labsys.uar" for this example.

Do the following on the AI server

Solaris provides multiple options for deploying a unified archive on other systems. I chose to deploy my golden image from my lab's install server onto node beatnik.

Create an installation manifest that points to the golden image.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1">
<auto_install>
  <ai_instance name="netclone" auto_reboot="true">
    <target>
      <logical>
        <zpool name="rpool" is_root="true">
          <filesystem name="export" mountpoint="/export"/>
          <filesystem name="export/home"/>
          <be name="solaris"/>
        </zpool>
      </logical>
    </target>
    <software type="ARCHIVE">
      <source>
        <file uri="http://example-ai.example.com/datapool/labsys.uar"/>
      </source>
      <software_data action="install">
        <name>global</name>
      </software_data>
    </software>
  </ai_instance>
</auto_install>

Create the install service using

  • A Solaris install image based on the version of Solaris installed in the golden image,
  • The manifest created here,
  • The base SC-profile from my previous blog, and
  • The beatnik-specific SC-profile from my previous blog.

Note that there is no need to create a profile with labnet-profile.xml. That configuration information will be installed with the unified archive. I used node beatnik for my production server example.

# installadm create-service -y -n netclone-sparc \
    -a sparc \
    -p solaris=http://pkg.oracle.com/solaris/release/ \
    -s install-image/solaris-auto-install@latest
# installadm create-manifest -n netclone-sparc \
    -d -m netclone-manifest \
    -f netclone-manifest.xml
# installadm create-profile -n netclone-sparc \
    -p base-profile \
    -f base-profile.xml
# installadm create-client -n netclone-sparc \
    -e 0:0:5e:0:53:24
# installadm create-profile -n netclone-sparc \
     -p beatnik-profile \
     -f beatnik-profile.xml \
     -c mac=0:0:5e:0:53:24
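At this point you can review the service's manifest, profile, and client associations in one place:

```shell
# installadm list -n netclone-sparc -m -p -c
```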

Do the following on the production servers

Log into the console and start the installation.

{0} ok boot net:dhcp - install

Log in after installation completes. The dladm command output verifies that the aggr0 link definition from the unified archive was preserved.

root@beatnik:~# dladm
LINK                CLASS     MTU    STATE    OVER
aggr0               aggr      9000   up       net0 net1
net0                phys      9000   up       --
net1                phys      9000   up       --
net2                phys      1500   up       --
net3                phys      1500   up       --
sp-phys0            phys      1500   up       --

The svccfg listprop command shown here indicates, in its third column, the SMF database layer that each part of beatnik's datalink configuration comes from.

root@beatnik:~# svccfg -s datalink-management:default listprop -l all datalinks
datalinks                  application   sysconfig-profile
datalinks                  application   node-profile
datalinks                  application   manifest
datalinks/aggr0            datalink-aggr sysconfig-profile
datalinks/aggr0            datalink-aggr node-profile
datalinks/aggr0/aggr-mode  astring       node-profile          dlmp
datalinks/aggr0/force      boolean       node-profile          false
datalinks/aggr0/key        count         node-profile          0
datalinks/aggr0/lacp-mode  astring       node-profile          off
datalinks/aggr0/lacp-timer astring       node-profile          short
datalinks/aggr0/media      astring       node-profile          Ethernet
datalinks/aggr0/num-ports  count         node-profile          2
datalinks/aggr0/policy     astring       node-profile          L4
datalinks/aggr0/ports      astring       node-profile          "net0" "net1"
datalinks/aggr0/probe-ip   astring       sysconfig-profile
datalinks/net0             datalink-phys node-profile
datalinks/net0/devname     astring       admin                 i40e0
datalinks/net0/loc         astring       admin                 /SYS/MB
datalinks/net0/media       astring       admin                 Ethernet
datalinks/net0/mtu         count         admin                 9000
datalinks/net0/mtu         count         node-profile          9000
datalinks/net1             datalink-phys node-profile
datalinks/net1/devname     astring       admin                 i40e1
datalinks/net1/loc         astring       admin                 /SYS/MB
datalinks/net1/media       astring       admin                 Ethernet
datalinks/net1/mtu         count         admin                 9000
datalinks/net1/mtu         count         node-profile          9000
datalinks/net2             datalink-phys admin
datalinks/net2/devname     astring       admin                 i40e2
datalinks/net2/loc         astring       admin                 /SYS/MB
datalinks/net2/media       astring       admin                 Ethernet
datalinks/net3             datalink-phys admin
datalinks/net3/devname     astring       admin                 i40e3
datalinks/net3/loc         astring       admin                 /SYS/MB
datalinks/net3/media       astring       admin                 Ethernet
datalinks/sp-phys0         datalink-phys admin
datalinks/sp-phys0/devname astring       admin                 usbecm2
datalinks/sp-phys0/media   astring       admin                 Ethernet 

In this case, the layers correspond to the following sources. I defined it this way to show how various parts of the configuration from different sources fit together.

    manifest           from Solaris service manifests
    node-profile       from my package in the golden image
    sysconfig-profile  from profiles defined on the AI server
    admin              automatically generated on first boot 
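To see every layered value of a single property, you can narrow the same listprop query down to that property; administrative customizations take precedence over profile layers, which in turn take precedence over manifest defaults. For example, for the aggregation's port list:

```shell
root@beatnik:~# svccfg -s datalink-management:default \
                listprop -l all datalinks/aggr0/ports
```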
