Solaris tip of the week: Zones
By Jay Danielsen on Jun 25, 2008
This week's Solaris tip is on 'zones'. If the 'zones' term is not
familiar to you,
please find your nearest Solaris host and read the zones(5) man page.
My favorite application of the zones technology is to emulate a
production architecture in miniature; the global zone acts as a firewall to a set
of app server, database server, and directory server zone targets. With this configuration,
each developer has his/her own fully functional representation of a production
architecture on a single host. More on this configuration in a future 'Solaris tip of the week'.
First let's discuss how we can automate the installation of a single Solaris zone or set of zones.
Zones are defined with the Solaris zonecfg command; in its simplest form, a zone can be created from a template:
# echo "create -t SUNWdefault
commit" > /tmp/zonecfg.test
# zonecfg -z test < /tmp/zonecfg.test
This example creates a zone named 'test' from the SUNWdefault template. A complete
configuration would also set a zonepath and network settings, such as IP address
18.104.22.168 on network interface bge0.
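As a sketch, a fuller configuration file plus the install and boot steps might look like the following; the zonepath /zones/test is a placeholder, so adjust paths and addresses for your site:

```shell
# Write a fuller zone definition: template, zonepath, and a network interface.
cat > /tmp/zonecfg.test <<'EOF'
create -t SUNWdefault
set zonepath=/zones/test
add net
set address=18.104.22.168
set physical=bge0
end
commit
EOF

# These steps only work in a Solaris global zone:
if command -v zonecfg >/dev/null 2>&1; then
    zonecfg -z test < /tmp/zonecfg.test   # define the zone
    zoneadm -z test install               # install it (this is the coffee break)
    zoneadm -z test boot                  # boot the freshly installed zone
fi
```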
Grab a cup of coffee, come back in a bit, and you have a running
zone. Unfortunately, we're not quite finished ... when you log in to your
shiny new zone,
you still have to fill out all the paperwork:
- root password
- etc ...
This information is already defined for the Solaris global zone. To
simplify local zone configuration,
we will use the global zone's settings to develop a script named
'install.sh', which installs a fully configured local zone that is ready for use as soon as the
script completes.
Usage: install.sh [-s subnet] [-l "host1:ipaddr host2:ipaddr2"]
- if -s not specified, 22.214.171.124 is used
- if -l not specified, the zone target list defined in install.sh:get_hostlist() is used. Edit this list to match your host target requirements.
The install.sh script automates the following:
- auto-detects the default network interface
- enables root login via the zone console and ssh
- configures the global zone as an NFS server for the local zones
- enables DNS in the local zones if it is defined in the global zone
- enables the fair share scheduler, so all zones get equal access to CPU resources
- creates a sysidcfg file for first-boot configuration of locale, root password, timezone, and NFS
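For reference, a generated sysidcfg might look like the sketch below. Every value here is a placeholder that install.sh would fill in from the global zone's own configuration:

```
system_locale=C
terminal=xterm
timezone=US/Eastern
security_policy=NONE
name_service=NONE
network_interface=NONE
nfs4_domain=dynamic
root_password=<encrypted hash copied from the global zone's /etc/shadow>
```

With a sysidcfg file in place, the zone skips the interactive first-boot questions and comes up ready to use.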
To keep the zones lightweight for a developer deployment, local zones are defined to be similar in some ways to diskless clients.
For example, each local zone's /usr partition is mounted read-only
from the global zone.
This can be a problem, since many open source packages install in
the /usr/local directory. To allow for /usr/local packages in the global zone, the install.sh script
creates a /usr/local loopback (lofs) filesystem that overlays /usr in the local
zone. Each local zone then has its own private read-write /usr/local directory. As an
aside, this was one of the
main reasons that old JES software could only be installed in a
full-root zone:
the JES software attempted to install itself in various /usr directories.
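In zonecfg terms, this kind of read-write overlay can be sketched as an fs resource; the backing directory /zones/test/local is hypothetical and would be created per zone in the global zone:

```
add fs
set dir=/usr/local
set special=/zones/test/local
set type=lofs
add options rw
end
```

Because the mount is lofs (loopback), writes to the zone's /usr/local land in the per-zone backing directory while the rest of /usr stays read-only.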
I hope this script comes in handy ... let me know if you have any
questions or feedback.