Monday Aug 11, 2008

Tupperware isn't the only Branded Container in town...

For this project, two Solaris 8 branded containers were installed on the test systems using "flar" images. The containers were created "unconfigured" so that new IP addresses could be assigned, avoiding conflicts with the source systems. Instructions from the docs.sun.com Solaris 8 branded containers administration guide were used to install and configure the test containers. The administration guide was excellent, with lots of "type this" and "you should see this" guidance.

We had two test scenarios that we wanted to play with: (1) branded containers on SAN shared storage, and (2) branded containers hosted on local storage with SAN shared storage for application data. There are advantages and disadvantages to both, and situations where each has significant value. I'll get into the details of that in a future blahg entry.

To separate the "storage administration" function from the "zone administration" function, storage was configured with device mounts within /z, regardless of the type of storage (SAN, NAS or internal) being used:
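(Device paths and server names below are made up for illustration; a SAN LUN holds the zone root storage and a NAS share holds shared admin tools.)

    # mkdir -p /z/s8zone /z/admin
    # mount -F ufs /dev/dsk/c4t0d0s0 /z/s8zone
    # mount -F nfs nas1:/export/admin /z/admin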

The storage for the container was mounted to the /zones/[zonename]/ path using loopback filesystems (lofs). This mount was created for testing using the command line:
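(The zone name here is made up for illustration.)

    # mkdir -p /zones/s8zone
    # mount -F lofs /z/s8zone /zones/s8zone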

So that the mount could persist for reboots, an entry was added to /etc/vfstab:
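(Again using the hypothetical zone name:)

    #device         device          mount            FS      fsck    mount   mount
    #to mount       to fsck         point            type    pass    at boot options
    /z/s8zone       -               /zones/s8zone    lofs    -       no      -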

Note that the "mount at boot" option is set to "no" in this example. This allows zones installed on SAN shared storage volumes to migrate back and forth between the systems. Zones installed on local, internal storage will use "yes" for the "mount at boot" option. For the shared SAN storage zones, the filesystems must be mounted as part of zone management when rebooting the host or attaching a zone to a new host:
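(Device and zone names below are made up for illustration, and the zone's configuration is assumed to already exist on the target host.)

    # mount -F ufs /dev/dsk/c4t0d0s0 /z/s8zone
    # mount -F lofs /z/s8zone /zones/s8zone
    # zoneadm -z s8zone attach
    # zoneadm -z s8zone boot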

The basic Solaris 8 branded container was created with the zonecfg command on the first Solaris 10 host system. The basic Solaris 8 branded zone contains a zonepath for the zone to be installed within, and a network interface for network access:
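(Zone name, IP address, and interface below are made up for illustration; the SUNWsolaris8 template selects the solaris8 brand.)

    # zonecfg -z s8zone
    zonecfg:s8zone> create -t SUNWsolaris8
    zonecfg:s8zone> set zonepath=/zones/s8zone
    zonecfg:s8zone> add net
    zonecfg:s8zone:net> set address=192.168.100.50
    zonecfg:s8zone:net> set physical=ce0
    zonecfg:s8zone:net> end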

Extra filesystems for administrative tools and applications are mounted into the branded container from the global zone mountpoints with the "add fs" command within zonecfg. The loopback filesystem (lofs) allows the Solaris 10 global zone to manage devices, volume management, filesystems, and mountpoints, and "project" that filesystem space into the branded container. While it is possible to pass physical devices into a container, it should be avoided whenever possible, especially in the case of branded containers, where device and filesystem management would run under the brand while the kernel being leveraged is from a different OS release. This loopback filesystem is defined during the zone configuration with:
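(Continuing the same hypothetical zonecfg session; /z/admin in the global zone is projected to /opt/admin inside the container.)

    zonecfg:s8zone> add fs
    zonecfg:s8zone:fs> set dir=/opt/admin
    zonecfg:s8zone:fs> set special=/z/admin
    zonecfg:s8zone:fs> set type=lofs
    zonecfg:s8zone:fs> end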

Because the Solaris 10 Global Zone (GZ) can run on machines that don't support native Solaris 8, some applications and software packages could become confused about the architecture of the underlying hardware and cause problems. The Solaris 8 branded container shields us from the hardware platform hosting the Solaris 10 GZ, so it is recommended that we set a machine architecture of "sun4u" (Sun UltraSPARC) for the branded container, essentially hiding the hardware platform from the operating environment within the container. The "machine" attribute can be assigned within the zone configuration with:
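(Continuing the same hypothetical session:)

    zonecfg:s8zone> add attr
    zonecfg:s8zone:attr> set name=machine
    zonecfg:s8zone:attr> set type=string
    zonecfg:s8zone:attr> set value=sun4u
    zonecfg:s8zone:attr> end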

Other filesystems can be added, additional network interfaces can be defined, and other system attributes can be assigned to the branded container as needed. Once the container has been configured within zonecfg, the configuration should be verified and committed to save the data.
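(Still in the same hypothetical session:)

    zonecfg:s8zone> verify
    zonecfg:s8zone> commit
    zonecfg:s8zone> exit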

The branded container is now configured and ready for installation of an image (flar) from a physical system. The Solaris 8 system image is created using the flarcreate command. Make sure that the source system is up to speed on patches, especially those related to patching, flash archives (flar) and the "df" patch (137749) that I mentioned in an earlier blahg entry.
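On the source Solaris 8 system, cutting the image looks something like this (archive name and location are made up for illustration):

    # flarcreate -n s8source /var/tmp/s8source.flar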

The configured zone can now be installed using the flar image of the physical Solaris 8 based system:
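(Zone name and archive path below are made up; the -u option leaves the installed image unconfigured, matching the approach described above.)

    # zoneadm -z s8zone install -u -a /var/tmp/s8source.flar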

Once the zone has been installed, the "p2v" utility must be run to turn the physical machine configuration into a virtual machine configuration within the zone:
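(With the hypothetical zone name; the p2v tool ships with the solaris8 brand support files, and if I recall correctly it lives under /usr/lib/brand/solaris8.)

    # /usr/lib/brand/solaris8/s8_p2v s8zone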

The Solaris 8 branded container can now be booted. There will likely be some system startup scripts, tools, and applications that will give warning messages and errors on boot up. Remember the nature of the zone / host system relationship and decide what needs to run in a zone and what functionality should remain in the host system's global zone. The zone will boot "unconfigured" and ask for hostname, console terminal type, name service information, and network configuration on the first boot. As an alternative, a "sysidcfg" file can be copied into the /zones/[zonename]/root/etc/ directory before the first boot to allow the zone to auto-configure with sysid upon first boot.
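With our hypothetical zone name, booting the zone and watching the first boot on its console looks like:

    # zoneadm -z s8zone boot
    # zlogin -C s8zone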

That's it. Other than fixing up the system startup scripts (/etc/rc*.d and /etc/init.d), and attaching a copy of the source system's SAN attached data, the move is done and ready to be tested. The really cool part of this is that it just works. We were expecting some application issues and possibly some "speed of light" problems, but everything just worked. Obviously some things had to be adjusted in the branded container, such as disabling the Veritas Volume Manager startup and some hardware inventory scripts that used "prtconf" to collect information, but these were identified early, and several reboots sorted out the symptoms from the zone's boot messages on the zlogin console.

More details later about migration of the zone between systems, various storage configurations that we tested, and some other (hopefully) interesting thoughts.


bill.

