Solaris tip of the week: Zones

This week's Solaris tip is on 'zones'. If the 'zones' term is not familiar to you, please find your nearest Solaris host and read the zones(5) man page before proceeding.

My favorite application of the zones technology is to emulate a production architecture in miniature; the global zone acts as a firewall in front of a set of app server, database server, and directory server zone targets. With this configuration, each developer has his/her own fully functional representation of the production architecture on a single host. More on this configuration in a future 'Solaris tip of the week'. First, let's discuss how we can automate the installation of a single Solaris zone or a set of zones.

Zones are defined with the Solaris zonecfg command ... in its simplest form:

# echo "create -t SUNWdefault
set zonepath=/export/test
add net
set address=172.0.1.1/24
set physical=bge0
end
verify
commit" > /tmp/zonecfg.test
# zonecfg -z test < /tmp/zonecfg.test

This example creates a zone named 'test' and configures IP address 172.0.1.1 on network interface bge0.
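Defining the configuration only records it; the zone still has to be installed and booted, and zlogin -C attaches to the zone console once it is up:

# zoneadm -z test install
# zoneadm -z test boot
# zlogin -C test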

Grab a cup of coffee, come back in a bit, and you have a running zone ...

Unfortunately, we're not quite finished ... When you log on to your shiny new zone, you still have to fill out all the paperwork:
- root password
- timezone
- nfs
- etc ...

This information is already defined for the Solaris global zone. To simplify local zone configuration, we will reuse the global zone's settings in a script named 'install.sh', which installs a fully configured local zone that is ready to use as soon as the script completes.

Usage: install.sh [-s subnet] [-l "host1:ipaddr1 host2:ipaddr2"]
Defaults:
- if -s is not specified, 172.0.1.0 is used
- if -l is not specified, the zone target list defined in install.sh:get_hostlist() is used. Edit this list to match your host target requirements.
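For example, a hypothetical invocation (the subnet, host names, and addresses are placeholders, and the exact ipaddr format is whatever install.sh:get_hostlist() expects):

# ./install.sh -s 10.82.1.0 -l "app1:10.82.1.11 db1:10.82.1.12 ldap1:10.82.1.13"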

The install.sh script automates the following:
- detects the default network interface
- enables root login via the zone console and ssh
- configures the global zone as an NFS server for the local zones
- enables DNS in the local zones if it is defined in the global zone
- enables the fair share scheduler, so all zones get equal access to CPU resources
- creates a sysidcfg file for first-boot configuration of locale, root password, timezone, and NFS (a sketch of such a file follows this list)
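For reference, the sysidcfg file is typically placed in <zonepath>/root/etc/sysidcfg before the zone's first boot. The sketch below shows the general shape of such a file; the exact fields and values install.sh writes (harvested from the global zone) are an assumption, and the values here are placeholders:

system_locale=C
terminal=xterm
network_interface=PRIMARY {hostname=test ip_address=172.0.1.1 netmask=255.255.255.0 protocol_ipv6=no}
security_policy=NONE
name_service=NONE
nfs4_domain=dynamic
timezone=US/Eastern
root_password=<encrypted hash copied from the global zone /etc/shadow>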

To keep the zones lightweight for a developer deployment, local zones are defined to be similar in some ways to diskless clients. For example, each local zone's /usr partition is mounted read-only from the global zone. This can be a problem, since many open source packages install into the /usr/local directory. To allow for /usr/local packages, the install.sh script creates a /usr/local loopback filesystem, backed by a directory in the global zone, that overlays the read-only /usr in each local zone (roughly like the snippet below). Each local zone then has its own private read-write /usr/local directory. As an aside, this was one of the main reasons that old JES software could only be installed in a full-root zone: the JES software attempted to install itself in various /usr directories.
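A minimal sketch of the zonecfg entries for such an overlay; the backing directory /export/test/local is an assumption, and install.sh may name it differently:

add fs
set dir=/usr/local
set special=/export/test/local
set type=lofs
end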

I hope this script comes in handy ... let me know if you have any suggestions for improvements ...

Comments:

Very useful information

Posted by Vikrant Raut on August 25, 2008 at 12:34 AM EDT #

Hi,
Thanks very much for your script, it proved very helpful in jumpstarting a project I am currently on. I do have a little request for help concerning its execution. It fails on my test environment with the following error, and I have been unsuccessful so far in figuring it out. Hope you can help.
Regards

network_interface=PRIMARY {hostname=cluster1 ip_address=10.82.1.210 netmask=255.255.255.0 protocol_ipv6=no}
+ uname -r
+ [ 5.10 = 5.11 ]
+ configure_ssh
+ cp /export/cluster1/root/etc/ssh/sshd_config /export/cluster1/root/etc/ssh/sshd_config.orig
cp: cannot access /export/cluster1/root/etc/ssh/sshd_config
+ cat /export/cluster1/root/etc/ssh/sshd_config.orig
+ sed -e s/PermitRootLogin.\*/PermitRootLogin yes/g
cat: cannot open /export/cluster1/root/etc/ssh/sshd_config.orig
+ configure_login
+ cp /export/cluster1/root/etc/default/login /export/cluster1/root/etc/default/login.orig
cp: cannot access /export/cluster1/root/etc/default/login
+ cat /export/cluster1/root/etc/default/login.orig
cat: cannot open /export/cluster1/root/etc/default/login.orig
+ sed -e s/CONSOLE/#CONSOLE/g
+ echo PS1="`hostname` # "
+ configure_nfs
+ cp /dev/null /export/cluster1/root/etc/.NFS4inst_state.domain
+ cp /export/cluster1/root/etc/default/nfs /export/cluster1/root/etc/default/nfs.1617
cp: cannot access /export/cluster1/root/etc/default/nfs
+ cat /export/cluster1/root/etc/default/nfs.1617
cat: cannot open /export/cluster1/root/etc/default/nfs.1617
+ sed -e s/#NFSMAPID_DOMAIN.\*/NFSMAPID_DOMAIN=network.com/g
+ configure_dns
+ cp /etc/resolv.conf /export/cluster1/root/etc/resolv.conf
+ configure_nsswitch
+ cp /export/cluster1/root/etc/nsswitch.conf /export/cluster1/root/etc/nsswitch.conf.orig
cp: cannot access /export/cluster1/root/etc/nsswitch.conf
+ cat /export/cluster1/root/etc/nsswitch.conf.orig
cat: cannot open /export/cluster1/root/etc/nsswitch.conf.orig
+ sed -e s/hosts:.\*/hosts: files dns/g
+ zoneadm -z cluster1 boot
zoneadm: zone 'cluster1': WARNING: The zone.cpu-shares rctl is set but
zoneadm: zone 'cluster1': FSS is not the default scheduling class for
zoneadm: zone 'cluster1': this zone. FSS will be used for processes
zoneadm: zone 'cluster1': in the zone but to get the full benefit of FSS,
zoneadm: zone 'cluster1': it should be the default scheduling class.
zoneadm: zone 'cluster1': See dispadmin(1M) for more details.
zoneadm: zone 'cluster1': /usr/sbin/devfsadm unexpectedly terminated due to signal 10
zoneadm: zone 'cluster1': call to zoneadmd failed
+ exit 0

--
Imad Bougataya

Posted by Imad Bougataya on October 16, 2008 at 09:11 PM EDT #

Sorry, I forgot to mention my config: Solaris 10 on x86

Posted by Imad Bougataya on October 16, 2008 at 09:28 PM EDT #

Hi Imad, I have used this script on both s10/x86 and OpenSolaris/x86. I suspect the error happened above the output you included in your comment. The errors seem to indicate that the "zoneadm -z cluster1 install" step failed ... Can you try the following:

# zoneadm -z cluster1 uninstall
# zoneadm -z cluster1 delete
# rm -rf /export/cluster1

If that succeeds, please re-run the install script and post the output of the "zoneadm -z cluster1 install" step if it still fails.

Another possibility - did you exceed your disk capacity during the install? This script installs to the /export directory - if /export is a mount point, perhaps it is nearly full?
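For instance, a quick way to check (assuming the default /export zone path), plus a listing to confirm the zone's state:

# df -h /export
# zoneadm list -cv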

Regards,
Jay

Posted by Jay Danielsen on October 17, 2008 at 07:27 AM EDT #

Hi Jay,

I'm running Solaris 2.6 on an Enterprise 450. When I log in as root and view the mount points, I see this annoying entry: /apps/ileaf6 (Interleaf 6, a word processor that we don't use anymore). So I tracked it down:

/etc/auto_apps:
...
ileaf6 IP_Address:/...apps/interleaf/ileaf6.4.1
...

and

/usr/local/globals
total 26
drwxr-xr-x 2 root other 512 Mar 11 1999 ./
drwxr-xr-x 13 root other 512 Oct 15 2004 ../
-rwxr-xr-x 1 root other 249 Mar 11 1999 clear_path*
-rwxr-xr-x 1 root other 2283 Nov 6 1998 env_defs*
-rwxr-xr-x 1 root other 2767 Jan 19 1999 ileaf5_env*
-rwxr-xr-x 1 root other 2775 Jan 18 1999 ileaf6_env*
-rwxr-xr-x 1 root other 706 Mar 11 1999 stp_env*

Are these scripts somehow launched at login? Do you know how I could stop ileaf6 from launching?

Thanks
Bob

Posted by Robert on March 06, 2009 at 06:44 AM EST #
