Friday Jul 01, 2011

Be Eco-Responsible. Use Resource Management features in Solaris, Oracle Database and Go Green

This blog post is about a white paper that introduces the resource management features in Oracle Solaris and Oracle Database.

Who needs more hardware when under-utilized systems sit idle in datacenters, consuming valuable space and energy and polluting the environment with carbon emissions? Virtualization and consolidation are more popular than ever before, and Oracle Corporation offers products with rich resource management features for consolidated, virtualized and isolated environments. Put all these pieces together and what do we get? A complete green solution that helps make this planet a better place.

To increase awareness of the resource management features available in Oracle Solaris and Oracle Database, we just published a four-part white paper covering Oracle Solaris Resource Manager and Oracle Database Resource Manager. Neither resource manager is really a product, process or add-on to Solaris or Oracle Database. "Resource Manager" is the abstract term for the set of software modules integrated into the Solaris operating system and the Oracle database management system to facilitate resource management from the OS and RDBMS perspectives.

The first part of the series introduces the concept of resource management, Solaris Resource Manager and Oracle Database Resource Manager; compares and contrasts the two resource managers; and ends with a brief overview of resource management in high-availability (HA) environments. Access this paper from the following URL:

        Part 1: Introduction to Resource Management in Oracle Solaris and Oracle Database

The second part is all about the resource management features available in Oracle Solaris, including resource management in virtualized environments. The topics range from simple process binding to the dynamic reconfiguration of CPU and memory resources in a logical domain (LDom). The value of this paper lies in the volume of examples that follow each introduced feature; a one-line sketch of the process-binding case appears after the link below. Access the second part from the following URL:

        Part 2: Effective Resource Management Using Oracle Solaris Resource Manager
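
For instance, the simple process binding mentioned above takes a single command. A minimal sketch (the processor ID 2 and the process ID 1234 are placeholders, not values from the paper):

# pbind -b 2 1234

Running pbind with no arguments afterwards reports the current processor bindings.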

Similar to the second part, the third part covers the resource management features available in the Oracle database management system. The majority of the discussion revolves around creating resource plans in an Oracle database. This part also contains plenty of examples. Access the third part from the following URL:

        Part 3: Effective Resource Management Using Oracle Database Resource Manager

The final part briefly introduces the various hardware and software products that make Oracle an ideal consolidation platform. It also presents a fictional case study demonstrating how to consolidate multiple applications and databases onto a single server using Oracle virtualization technologies and the resource management features found in Oracle Solaris and Oracle Database. The description of the consolidation plan and its implementation takes up the major portion of this final part. Access the fourth and final part of this series from the following URL:

        Part 4: Resource Management Case Study for Mixed Workloads and Server Sharing

Happy reading.

Sunday Jul 19, 2009

Solaris 10: Zone Creation for Dummies

(Reproducing the three-and-a-half-year-old blog entry, a top-5 one, "as is" from my other blog hosted on Blogger. Source URL: http://technopark02.blogspot.com/2006/02/solaris-10-zone-creation-for-dummies.html)

About Zones

In its simple form, a zone is a virtual operating system environment created within a single instance of the Solaris operating system. Efficient resource utilization is the main goal of this technology.

Solaris 10's zone partitioning technology can be used to create local zones that behave like virtual servers. All local zones are controlled from the system's global zone. Processes running in a zone are completely isolated from the rest of the system. This isolation prevents processes running in one zone from monitoring or affecting processes running in other zones. Note that processes running in a local zone can be monitored from the global zone, but processes running in the global zone, or even in another local zone, cannot be monitored from a local zone.

As of now, the upper limit for the number of zones that can be created/run on a system is 8192; of course, depending on the resource availability, a single system may or may not run all the configured zones effectively.
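
For example, a root user in the global zone can observe per-zone activity with the standard tools (a quick sketch; the output shows whatever zones are configured on the system):

 % prstat -Z
 % ps -efZ

prstat -Z appends a per-zone summary of CPU and memory usage, and ps -efZ adds a ZONE column to the process listing.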

Global Zone

When we install Solaris 10, a global zone is installed automatically, and the core operating system runs in the global zone. To list all the configured zones, we can use the zoneadm command:


 % zoneadm list -v
  ID NAME             STATUS         PATH
   0 global           running        /

The global zone is the only zone that is:

  • bootable from the system hardware
  • used for system-wide administrative control, such as physical devices, routing, or dynamic reconfiguration (DR) -- i.e., the global zone is the only zone that is aware of all devices and all file systems
  • able to configure, install, manage, or uninstall non-global zones -- i.e., the global zone is the only zone that is aware of the existence of non-global (local) zones and their configurations. It is not possible to create local zones from within a local zone

Steps to create a Local Zone

Prerequisites:

  • Plenty of disk space to hold the newly installed zone -- at least 2 GB to copy the essential files to the local zone, plus whatever disk space is needed by the application(s) you plan to run in this zone; and
  • A dedicated IP address for network connectivity

Basic Zone creation steps with examples:

  1. Check the disk space & network configuration
    
     % df -h /
     Filesystem             size   used  avail capacity  Mounted on
     /dev/dsk/c1t1d0s0       29G    22G   7.1G    76%    /
    
     % ifconfig -a
     lo0: flags=2001000849 mtu 8232 index 1
             inet 127.0.0.1 netmask ff000000
     eri0: flags=1000843 mtu 1500 index 2
             inet 192.168.74.217 netmask fffffe00 broadcast 192.168.75.255
    
    
  2. Since there is more than 5 GB of free space, I've decided to install the local zone under /zones.
    
     % mkdir /zones
    
    
  3. The next step is to create the zone root. This is the path to the zone's root directory, relative to the global zone's root directory. The zone root must be owned by root with mode 700. This path will be used to set the zonepath property during the zone creation process
    
     % cd /zones
     % mkdir appserver
     % chmod 700 appserver
    
     % ls -l
     total 2
     drwx------   2 root     root         512 Feb 17 12:46 appserver
    
    
  4. Create & configure a new 'sparse root' local zone, with root privileges
    
     % zonecfg -z appserv
     appserv: No such zone configured
     Use 'create' to begin configuring a new zone.
     zonecfg:appserv> create
     zonecfg:appserv> set zonepath=/zones/appserver
     zonecfg:appserv> set autoboot=true
     zonecfg:appserv> add net
     zonecfg:appserv:net> set physical=eri0
     zonecfg:appserv:net> set address=192.168.175.126
     zonecfg:appserv:net> end
     zonecfg:appserv> add fs
     zonecfg:appserv:fs> set dir=/repo2
     zonecfg:appserv:fs> set special=/dev/dsk/c2t40d1s6
     zonecfg:appserv:fs> set raw=/dev/rdsk/c2t40d1s6
     zonecfg:appserv:fs> set type=ufs
     zonecfg:appserv:fs> set options noforcedirectio
     zonecfg:appserv:fs> end
     zonecfg:appserv> add inherit-pkg-dir
     zonecfg:appserv:inherit-pkg-dir> set dir=/opt/csw
     zonecfg:appserv:inherit-pkg-dir> end
     zonecfg:appserv> info
     zonepath: /zones/appserver
     autoboot: true
     pool:
     inherit-pkg-dir:
             dir: /lib
     inherit-pkg-dir:
             dir: /platform
     inherit-pkg-dir:
             dir: /sbin
     inherit-pkg-dir:
             dir: /usr
     inherit-pkg-dir:
             dir: /opt/csw
     net:
             address: 192.168.175.126
             physical: eri0
     zonecfg:appserv> verify
     zonecfg:appserv> commit
     zonecfg:appserv> exit
    
    

    Sparse Root Zone vs Whole Root Zone (Updated 05/07/2008)

    In a Sparse Root Zone, the directories /usr, /sbin, /lib and /platform are mounted as loopback file systems. That is, although those directories appear as normal directories inside the sparse root zone, they are mounted read-only. Any change made to those directories in the global zone is visible from the sparse root zone.
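
    A quick way to see this from inside a running sparse root zone (a sketch; run the commands in the local zone):

     % mount | grep /usr
     % touch /usr/testfile

    The mount output should list /usr as a read-only lofs mount, and the touch attempt should fail because /usr is read-only in the zone.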

    However, if you need the ability to write into any of the directories listed above, you may need to configure a Whole Root Zone. For example, software like ClearCase needs write permission to the /usr directory; in that case, configuring a Whole Root Zone is the way to go. The steps for creating and configuring a new 'Whole Root' local zone are as follows:

    
     % zonecfg -z appserv
     appserv: No such zone configured
     Use 'create' to begin configuring a new zone.
     zonecfg:appserv> create
     zonecfg:appserv> set zonepath=/zones/appserver
     zonecfg:appserv> set autoboot=true
     zonecfg:appserv> add net
     zonecfg:appserv:net> set physical=eri0
     zonecfg:appserv:net> set address=192.168.175.126
     zonecfg:appserv:net> end
     zonecfg:appserv> add inherit-pkg-dir
     zonecfg:appserv:inherit-pkg-dir> set dir=/opt/csw
     zonecfg:appserv:inherit-pkg-dir> end
     zonecfg:appserv> remove inherit-pkg-dir dir=/usr
     zonecfg:appserv> remove inherit-pkg-dir dir=/sbin
     zonecfg:appserv> remove inherit-pkg-dir dir=/lib
     zonecfg:appserv> remove inherit-pkg-dir dir=/platform
     zonecfg:appserv> info
     zonepath: /zones/appserver
     autoboot: true
     pool:
     inherit-pkg-dir:
             dir: /opt/csw
     net:
             address: 192.168.175.126
             physical: eri0
     zonecfg:appserv> verify
     zonecfg:appserv> commit
     zonecfg:appserv> exit
    
    

    Brief explanation of the properties that I added:

    * zonepath=/zones/appserver

    The local zone's root directory, relative to the global zone's root directory -- i.e., the local zone will have all of its bin, lib, usr, dev, net, etc, var, opt and other directories physically under the /zones/appserver directory

    * autoboot=true

    boot this zone automatically when the global zone is booted

    * physical=eri0

    The eri0 card is used as the physical network interface

    * address=192.168.175.126

    192.168.175.126 is the IP address. It must have all the necessary DNS entries

    [Added 08/25/08] The whole add fs section adds the file system to the zone. In this example, the file system that is being exported to the zone is an existing UFS file system.

    * set dir=/repo2

    /repo2 is the mount point in the local zone

    * set special=/dev/dsk/c2t40d1s6 and set raw=/dev/rdsk/c2t40d1s6

    Grant access to the block (/dev/dsk/c2t40d1s6) and raw (/dev/rdsk/c2t40d1s6) devices so the file system can be mounted in the non-global zone. Make sure the block device is not mounted anywhere right before installing the non-global zone; otherwise, the zone installation may fail with ERROR: file system check </usr/lib/fs/ufs/fsck> of </dev/rdsk/c2t40d1s6> failed: exit status <33>: run fsck manually. In that case, unmount the file system that is being exported, uninstall the partially installed zone (zoneadm -z <zone> uninstall), then install the zone from scratch (there is no need to re-configure the zone; just re-install it).

    * set type=ufs

    The file system is of type UFS

    * set options noforcedirectio

    Mount the file system with the option noforcedirectio [/Added 08/25/08]

    * dir=/opt/csw

    A read-only path that will be lofs'd (loopback mounted) from the global zone. Note: this works for sparse root zones only -- a whole root zone cannot have any shared file systems

    The zonecfg commands verify and commit verify and commit the zone configuration, respectively. Note that it is not strictly necessary to commit the zone configuration explicitly; it is done automatically when we exit the zonecfg tool. info displays information about the current configuration
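
    Resource controls can also be added to the zone configuration in the same way. A minimal sketch that assigns CPU shares to the zone (the value 20 is arbitrary, and the shares only take effect when the fair share scheduler, FSS, is managing the CPUs):

     zonecfg:appserv> add rctl
     zonecfg:appserv:rctl> set name=zone.cpu-shares
     zonecfg:appserv:rctl> add value (priv=privileged,limit=20,action=none)
     zonecfg:appserv:rctl> end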

  5. Check the state of the newly created/configured zone
    
     % zoneadm list -cv
       ID NAME             STATUS         PATH
        0 global           running        /
        - appserv          configured     /zones/appserver
    
    
  6. The next step is to install the configured zone. It takes a while to install the necessary packages
    
     % zoneadm -z appserv install
     /zones must not be group writable.
     could not verify zonepath /zones/appserver because of the above errors.
     zoneadm: zone appserv failed to verify
    
     % ls -ld /zones
     drwxrwxr-x   3 root     root         512 Feb 17 12:46 /zones
    
    

    Since /zones must not be group writable, let's change the mode to 700.

    
     % chmod 700 /zones
    
     % ls -ld /zones
     drwx------   3 root     root         512 Feb 17 12:46 /zones
    
     % zoneadm -z appserv install
     Preparing to install zone .
     Creating list of files to copy from the global zone.
     Copying <2658> files to the zone.
     Initializing zone product registry.
     Determining zone package initialization order.
     Preparing to initialize <1128> packages on the zone.
     Initialized <1128> packages on zone.
     Zone  is initialized.
     Installation of these packages generated errors: 
     Installation of <2> packages was skipped.
     Installation of these packages generated warnings: <CSWbdb3 CSWtcpwrap 
      CSWreadline CSWlibnet CSWlibpcap CSWjpeg CSWzlib  CSWcommon CSWpkgget SMCethr CSWxpm 
      SMClsof SMClibgcc SMCossld OpenSSH SMCtar SUNWj3dmx CSWexpat CSWftype2 CSWfconfig 
      CSWiconv  CSWggettext CSWlibatk CSWpango CSWpng CSWtiff CSWgtk2 CSWpcre CSWlibmm 
      CSWgsed CSWlibtool CSWncurses CSWunixodbc CSWoldap  CSWt1lib CSWlibxml2 CSWbzip2 
      CSWlibidn CSWphp>
     The file  contains a log of the zone installation.
    
    
  7. Verify the state of the appserv zone, one more time
    
     % zoneadm list -cv
       ID NAME             STATUS         PATH
        0 global           running        /
        - appserv          installed      /zones/appserver
    
    
  8. Boot up the appserv zone. Let's note down the ifconfig output to see how it changes after the local zone boots up. Also observe that there is no answer from the server yet, since it is not up
    
     % ping 192.168.175.126
     no answer from 192.168.175.126
    
     % ifconfig -a
     lo0: flags=2001000849 mtu 8232 index 1
             inet 127.0.0.1 netmask ff000000
     eri0: flags=1000843 mtu 1500 index 2
             inet 192.168.74.217 netmask fffffe00 broadcast 192.168.75.255
             ether 0:3:ba:2d:0:84
    
     % zoneadm -z appserv boot
     zoneadm: zone 'appserv': WARNING: eri0:1: no matching subnet found in netmasks(4) for 192.168.175.126; 
     using default of  255.255.0.0.
    
     % zoneadm list -cv
       ID NAME             STATUS         PATH
        0 global           running        /
        1 appserv          running        /zones/appserver
    
     % ping 192.168.175.126
     192.168.175.126 is alive
    
     % ifconfig -a
     lo0: flags=2001000849 mtu 8232 index 1
             inet 127.0.0.1 netmask ff000000
     lo0:1: flags=2001000849 mtu 8232 index 1
             zone appserv
             inet 127.0.0.1 netmask ff000000
     eri0: flags=1000843 mtu 1500 index 2
             inet 192.168.74.217 netmask fffffe00 broadcast 192.168.75.255
             ether 0:3:ba:2d:0:84
     eri0:1: flags=1000843 mtu 1500 index 2
             zone appserv
             inet 192.168.175.126 netmask ffff0000 broadcast 192.168.255.255
    
    

    Observe that the zone appserv has its own virtual instance of lo0, the system's loopback interface, and that the zone's IP address is also served by the eri0 network interface

  9. Log in to the zone console and perform the internal zone configuration. The zlogin utility can be used to enter a zone. The first time we log in to the console, we get a chance to answer a series of questions for the desired zone configuration. The -C option of zlogin is used to log in to the zone console.
    
     % zlogin -C -e [ appserv
     [Connected to zone 'appserv' console]
    
     Select a Language
    
       0. English
       1. es
       2. fr
    
     Please make a choice (0 - 2), or press h or ? for help: 0
    
     Select a Locale
    
       0. English (C - 7-bit ASCII)
       1. Canada (English) (UTF-8)
       2. Canada-English (ISO8859-1)
       3. U.S.A. (UTF-8)
       4. U.S.A. (en_US.ISO8859-1)
       5. U.S.A. (en_US.ISO8859-15)
       6. Go Back to Previous Screen
    
     Please make a choice (0 - 6), or press h or ? for help: 0
    
     ...
    
      Enter the host name which identifies this system on the network.  The name
       must be unique within your domain; creating a duplicate host name will cause
       problems on the network after you install Solaris.
    
       A host name must have at least one character; it can contain letters,
       digits, and minus signs (-).
    
         Host name for eri0:1 appserv v440appserv
    
     ...
     ...
    
     System identification is completed. 
     ...
     
     rebooting system due to change(s) in /etc/default/init
     
     [NOTICE: Zone rebooting]
    
     SunOS Release 5.11 Version snv_23 64-bit
     Copyright 1983-2005 Sun Microsystems, Inc.  All rights reserved.
     Use is subject to license terms.
     Hostname: v440appserv
    
     v440appserv console login: root
     Password:
     Feb 17 15:15:30 v440appserv login: ROOT LOGIN /dev/console
     Sun Microsystems Inc.   SunOS 5.11      snv_23  October 2007
    
     %
    
    

That is all there is to creating a local zone. Now simply log in to the newly created zone, just like connecting to any other system on the network.
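
For example, a root user in the global zone can enter the zone directly, or run a single command inside it (a sketch):

 % zlogin appserv
 % zlogin appserv zonename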

[New 08/27/2008] Mounting file systems in a non-global zone

Sometimes it might be necessary to export file systems or create new file systems when the zone is already running. This section focuses on exporting block devices and raw devices in such situations, i.e., when the local zone is already configured.

Exporting the Raw Device(s) to a non-global zone

If the file system does not exist on the device, raw devices can be exported as they are, so the file system can be created inside the non-global zone using the normal newfs command.

The following example shows how to export the raw device to a non-global zone when the zone is already configured.


# zonecfg -z appserv
zonecfg:appserv> add device
zonecfg:appserv:device> set match=/dev/rdsk/c5t0d0s6
zonecfg:appserv:device> end
zonecfg:appserv> verify
zonecfg:appserv> commit
zonecfg:appserv> exit

In this example /dev/rdsk/c5t0d0s6 is being exported.

After the zonecfg step, reboot the non-global zone to make the raw device visible inside the non-global zone. After the reboot, check the existence of the raw device.


# hostname
v440appserv

# ls -l /dev/rdsk/c5t0d0s6
crw-r-----   1 root     sys      118, 126 Aug 27 14:33 /dev/rdsk/c5t0d0s6

Now that the raw device is accessible within the non-global zone, we can use the regular Solaris commands to create any file system like UFS.

eg.,

# newfs -v c5t0d0s6
newfs: construct a new file system /dev/rdsk/c5t0d0s6: (y/n)? y
mkfs -F ufs /dev/rdsk/c5t0d0s6 1140260864 -1 -1 8192 1024 251 1 120 8192 t 0 -1 8 128 n
Warning: 4096 sector(s) in last cylinder unallocated
/dev/rdsk/c5t0d0s6: 1140260864 sectors in 185590 cylinders of 48 tracks, 128 sectors
 556768.0MB in 11600 cyl groups (16 c/g, 48.00MB/g, 5824 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
 32, 98464, 196896, 295328, 393760, 492192, 590624, 689056, 787488, 885920,
Initializing cylinder groups:
...............................................................................
...............................................................................
.........................................................................
super-block backups for last 10 cylinder groups at:
 1139344160, 1139442592, 1139541024, 1139639456, 1139737888, 1139836320,
 1139934752, 1140033184, 1140131616, 1140230048

Exporting the Block Device(s) to a non-global zone

If the file system exists on the device, block devices can be exported as they are, so the file system can be mounted inside the non-global zone using the normal Solaris command, mount.

The following example shows how to export the block device to a non-global zone when the zone is already configured.


# zonecfg -z appserv
zonecfg:appserv> add device
zonecfg:appserv:device> set match=/dev/dsk/c5t0d0s6
zonecfg:appserv:device> end
zonecfg:appserv> verify
zonecfg:appserv> commit
zonecfg:appserv> exit

In this example /dev/dsk/c5t0d0s6 is being exported.

After the zonecfg step, reboot the non-global zone to make the block device visible inside the non-global zone. After the reboot, check the existence of the block device; and mount the file system within the non-global zone.


# hostname
v440appserv

# ls -l /dev/dsk/c5t0d0s6
brw-r-----   1 root     sys      118, 126 Aug 27 14:40 /dev/dsk/c5t0d0s6

# fstyp /dev/dsk/c5t0d0s6
ufs

# mount /dev/dsk/c5t0d0s6 /mnt

# df -h /mnt
Filesystem             size   used  avail capacity  Mounted on
/dev/dsk/c5t0d0s6      535G    64M   530G     1%    /mnt

Mounting a file system from the global zone into the non-global zone

Sometimes it is desirable to have the flexibility of mounting a file system in the global zone or non-global zone on-demand. In such situations, rather than exporting the file systems or block devices into the non-global zone, create the file system in the global zone and mount the file system directly from the global zone into the non-global zone. Make sure to unmount that file system in the global zone if mounted, before attempting to mount it in the non-global zone.

eg.,

In the non-global zone:


# mkdir /repo1

In the global zone:


# df -h /repo1
/dev/dsk/c2t40d0s6     134G    64M   133G     1%    /repo1

# umount /repo1

# ls -ld /zones/appserv/root/repo1
drwxr-xr-x   2 root     root         512 Aug 27 14:45 /zones/appserv/root/repo1

# mount /dev/dsk/c2t40d0s6 /zones/appserv/root/repo1

Now go back to the non-global zone and check the mounted file systems.


# hostname
v440appserv

# df -h /repo1
Filesystem             size   used  avail capacity  Mounted on
/repo1                 134G    64M   133G     1%    /repo1

To unmount the file system from the non-global zone, run the following command from the global zone.

# umount /zones/appserv/root/repo1

Removing the file system from the non-global zone

eg.,

Earlier in the zone creation step, the block device /dev/dsk/c2t40d1s6 was exported and mounted on the mount point /repo2 inside the non-global zone. To remove the file system completely from the non-global zone, run the following in the global zone.


# zonecfg -z appserv
zonecfg:appserv> remove fs dir=/repo2
zonecfg:appserv> verify
zonecfg:appserv> commit
zonecfg:appserv> exit

Reboot the non-global zone for this setting to take effect.
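
The reboot can be issued from the global zone in a single step (a sketch):

# zoneadm -z appserv reboot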

Shutting down and booting up the local zones (Updated 01/15/2008)

  1. To bring down the local zone:
    
     % zlogin appserv shutdown -i 0
    
    
  2. To boot up the local zone:
    
     % zoneadm -z appserv boot
    
    

Just for the sake of completeness, the following steps show how to remove a local zone.

Steps to delete a Local Zone

  1. Shut down the local zone
    
     % zoneadm -z appserv halt
    
     % zoneadm list -cv
       ID NAME             STATUS         PATH
        0 global           running        /
        - appserv          installed      /zones/appserver
    
    
  2. Uninstall the local zone -- remove the root file system
    
     % zoneadm -z appserv uninstall
     Are you sure you want to uninstall zone appserv (y/[n])? y
    
      zoneadm list -cv
       ID NAME             STATUS         PATH
        0 global           running        /
        - appserv          configured     /zones/appserver
    
    
  3. Delete the configured local zone
    
     % zonecfg -z appserv delete
     Are you sure you want to delete zone appserv (y/[n])? y
    
      zoneadm list -cv
       ID NAME             STATUS         PATH
        0 global           running        /
    
    

[New: 07/14/2009]

Cloning a Non-Global Zone

The following instructions are for cloning a non-global zone on the same system. The example shown below clones the siebeldb zone. After the cloning process, a brand new zone, oraclebi, emerges as a replica of the siebeldb zone.

eg.,

# zoneadm list -cv
  ID NAME             STATUS     PATH                           BRAND    IP    
   0 global           running    /                              native   shared
   - siebeldb         installed  /zones/dbserver                native   excl

  1. Export the configuration of the zone that you want to clone/copy
    
    # zonecfg -z siebeldb export > /tmp/siebeldb.config.cfg
    
    
  2. Change the configuration values of the new zone that differ from the existing one -- for example, the IP address, dataset names, network interface, etc. To make these changes, edit /tmp/siebeldb.config.cfg, as shown in the sketch below
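
    The exported file is a plain list of zonecfg commands, so a text editor is all that is needed. A sketch of the kind of lines to adjust (the zonepath value here matches the zone root created in step 3; the address value is just a placeholder):

    # vi /tmp/siebeldb.config.cfg
      set zonepath=/zones3/oraclebi
      set address=...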

  3. Create the zone root directory for the new zone being created
    
    # mkdir /zones3/oraclebi
    # chmod 700 /zones3/oraclebi
    # ls -ld /zones3/oraclebi
    drwx------   2 root     root         512 Mar 12 15:41 /zones3/oraclebi
    
    
  4. Create a new (empty, non-configured) zone in the usual manner with the edited configuration file as an input
    
    # zonecfg -z oraclebi -f /tmp/siebeldb.config.cfg
    
    #  zoneadm list -cv
      ID NAME             STATUS     PATH                           BRAND    IP    
       0 global           running    /                              native   shared
       - siebeldb         installed  /zones/dbserver                native   excl   
       - oraclebi         configured /zones3/oraclebi               native   excl
    
    
  5. Ensure that the zone you intend to clone/copy is not running
    
    # zoneadm -z siebeldb halt
    
    
  6. Clone the existing zone
    
    # zoneadm -z oraclebi clone siebeldb
    Cloning zonepath /zones/dbserver...
    
    

    This step takes at least 5 minutes to clone the whole zone. Larger zones may take longer to complete the cloning process.

  7. Boot the newly created zone
    
    # zoneadm -z oraclebi boot
    
    

    Bring up the halted zone (the source zone) as well, if you wish.

  8. Log in to the console of the new zone to configure the IP address, networking, etc., and you are done.
    
    # zlogin -C oraclebi
    
    

[New: 07/15/2009]

Migrating a Non-Global Zone from One Host to Another

Keywords: Solaris, Non-Global Zone, Migration, Attach, Detach

The following instructions demonstrate, with examples, how to migrate the non-global zone orabi to another server.


# zoneadm list -cv
  ID NAME             STATUS     PATH                           BRAND    IP    
   0 global           running    /                              native   shared
   4 siebeldb         running    /zones/dbserver                native   excl  
   - orabi            installed  /zones3/orabi                  native   shared

  1. Halt the zone to be migrated, if running
    
    # zoneadm -z orabi halt
    
    
  2. Detach the zone. Once detached, it will be in the configured state
    
    # zoneadm -z orabi detach
    
    # zoneadm list -cv
      ID NAME             STATUS     PATH                           BRAND    IP    
       0 global           running    /                              native   shared
       4 siebeldb         running    /zones/dbserver                native   excl  
       - orabi            configured /zones3/orabi                  native   shared
    
    
  3. Move the zonepath for the zone to be migrated from the old host to the new host.

    Do the following on the old host:

    
    # cd /zones3
    # tar -Ecf orabi.tar orabi
    # compress orabi.tar
    
    # sftp newhost
    Connecting to newhost...
    sftp> cd /zones3
    sftp> put orabi.tar.Z
    Uploading orabi.tar.Z to /zones3/orabi.tar.Z
    sftp> quit
    
    

    On the newhost:

    
    # cd /zones3
    # uncompress orabi.tar.Z
    # tar xf orabi.tar
    
    
  4. On the new host, configure the zone.

    Create the equivalent zone orabi on the new host -- use the create -a subcommand of zonecfg with the zonepath on the new host. Make any required adjustments to the configuration and commit the configuration.

    
    # zonecfg -z orabi
    orabi: No such zone configured
    Use 'create' to begin configuring a new zone.
    zonecfg:orabi> create -a /zones3/orabi
    zonecfg:orabi> info
    zonename: orabi
    zonepath: /zones3/orabi
    brand: native
    autoboot: false
    bootargs: 
    pool: 
    limitpriv: all,!sys_suser_compat,!sys_res_config,!sys_net_config,!sys_linkdir,!sys_devices,!sys_config,!proc_zone,!dtrace_kernel,!sys_ip_config
    scheduling-class: 
    ip-type: shared
    inherit-pkg-dir:
     dir: /lib
    inherit-pkg-dir:
     dir: /platform
    inherit-pkg-dir:
     dir: /sbin
    inherit-pkg-dir:
     dir: /usr
    net:
     address: IPaddress
     physical: nxge1
     defrouter not specified
    zonecfg:orabi> set capped-memory
    zonecfg:orabi:capped-memory> set physical=8G
    zonecfg:orabi:capped-memory> end
    zonecfg:orabi> commit
    zonecfg:orabi> exit
    
    
  5. Attach the zone on the new host with a validation check and update the zone to match a host running later versions of the dependent packages
    
    # ls -ld /zones3
    drwxrwxrwx   5 root     root         512 Jul 15 12:30 /zones3
    # chmod g-w,o-w /zones3
    # ls -ld /zones3
    drwxr-xr-x   5 root     root         512 Jul 15 12:30 /zones3
    
    # zoneadm -z orabi attach -u
    Getting the list of files to remove
    Removing 1740 files
    Remove 607 of 607 packages
    Installing 1878 files
    Add 627 of 627 packages
    Updating editable files
    The file  within the zone contains a log of the zone update.
    
    # zoneadm list -cv
      ID NAME             STATUS     PATH                           BRAND    IP    
       0 global           running    /                              native   shared
       - orabi            installed  /zones3/orabi                  native   shared
    
    

    Note:

    It is possible to force the attach operation without performing the validation. You can do so with the help of the -F option.

    
    # zoneadm -z orabi attach -F
    
    

    Be careful when using this option because it could lead to an incorrect configuration, and an incorrect configuration could result in undefined behavior.

[New: 07/19/2009]

Tip: How to find out whether you are connected to the primary OS instance or to a virtual instance

If the command zonename returns global, you are connected to the OS instance that was booted from the physical hardware. If you see any string other than global, you are connected to a virtual OS instance (a non-global zone).

Alternatively, try running the prstat -Z or zoneadm list -cv commands. If the output shows exactly one zone, with a non-zero zone ID, it is an indication that you are connected to a non-global zone.
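
For example (a quick sketch; the non-global zone name is illustrative):

global# zonename
global

appserv# zonename
appserv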


Monday Apr 06, 2009

Controlling [Virtual] Network Interfaces in a Non-Global Solaris Zone

In the software world, some tools, like SAP NetWeaver's Adaptive Computing Controller (ACC), require full control over a network interface so they can bring the NICs up and down at will to fulfill their responsibilities. Those tools may function normally on Solaris 10 [and later] as long as they are run in the global zone. However, there might be some trouble when those tools are run in a non-global zone, especially on machines with only one physical network interface installed and when the non-global zones are created with the default configuration. This blog post suggests a few solutions to get around those issues, so the tools can function the way they normally do in the global zone.

If the machine has only one NIC installed, there are at least two issues that will prevent tools like ACC from working in a non-global zone.

  1. Since there is only one network interface on the system, it is not possible to dedicate that interface to the non-global zone where ACC is supposed to run. Hence all the zones, including the global zone, must share the physical network interface.
  2. When the physical network interface is being shared across multiple zones, it is not possible to plumb/unplumb the network interface from a Shared-IP Non-Global Zone. Only the root user in the global zone can plumb/unplumb the lone physical network interface.
    • When a non-global zone is created with the default configuration, a Shared-IP zone is created by default. Shared-IP zones have separate IP addresses, but share the IP routing configuration with the global zone.

Fortunately, Solaris 10 has a solution to the aforementioned issues in the form of network virtualization. Crossbow is the code name for network virtualization in Solaris. Crossbow provides the necessary building blocks to virtualize a single physical network interface into multiple virtual network interfaces (VNICs) - so the solution to the issue at hand is to create a virtual network interface, and then to create an Exclusive-IP Non-Global Zone using the virtual NIC. The rest of the blog post demonstrates the simple steps to create a VNIC and to configure a non-global zone as an Exclusive-IP Zone.

Create a Virtual Network Interface using Crossbow

  • Make sure the OS has Crossbow functionality
    
    global# cat /etc/release
                     Solaris Express Community Edition snv_111 SPARC
               Copyright 2009 Sun Microsystems, Inc.  All Rights Reserved.
                            Use is subject to license terms.
                                 Assembled 23 March 2009
    
    

    Crossbow has been integrated into Solaris Express Community Edition (Nevada) build 105 - hence all Nevada builds starting with build 105 will have the Crossbow functionality. OpenSolaris 2009.06 and the next major update to Solaris 10 are expected to have the support for network virtualization out-of-the-box.

  • Check the existing zones and the available physical and virtual network interfaces.
    
    global# zoneadm list -cv
      ID NAME             STATUS     PATH                           BRAND    IP    
       0 global           running    /                              native   shared
    
    global# dladm show-link
    LINK        CLASS    MTU    STATE    OVER
    e1000g0     phys     1500   up       --
    
    

    In this example, there is only one NIC, e1000g0, on the server; and there are no non-global zones installed.

  • Create a virtual network interface based on device e1000g0 with an automatically generated MAC address. If the NIC has factory MAC addresses available, one of them will be used. Otherwise, a random address is selected. The auto mode is the default action if none is specified.
    
    global# dladm create-vnic -l e1000g0 vnic1
    
    
  • Check the available network interfaces one more time. Now you should be able to see the newly created virtual NIC in addition to the existing physical network interface. It is also possible to list only the virtual NICs.
    
    global# dladm show-link
    LINK        CLASS    MTU    STATE    OVER
    e1000g0     phys     1500   up       --
    vnic1       vnic     1500   up       e1000g0
    
    global# dladm show-vnic
    LINK         OVER         SPEED  MACADDRESS           MACADDRTYPE         VID
    vnic1        e1000g0      1000   2:8:20:32:9:10       random              0
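
    If a VNIC is no longer needed at some later point (and no zone is configured to use it), it can be removed with dladm as well -- a minimal sketch:

    global# dladm delete-vnic vnic1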
    
    

Create a Non-Global Zone with the VNIC

  • Create an Exclusive-IP Non-Global Zone with the newly created VNIC being the primary network interface.
    
    global # mkdir -p /export/zones/sapacc
    global # chmod 700 /export/zones/sapacc
    
    global # zonecfg -z sapacc
    sapacc: No such zone configured
    Use 'create' to begin configuring a new zone.
    zonecfg:sapacc> create
    zonecfg:sapacc> set zonepath=/export/zones/sapacc
    zonecfg:sapacc> set autoboot=false
    zonecfg:sapacc> set ip-type=exclusive
    zonecfg:sapacc> add net
    zonecfg:sapacc:net> set physical=vnic1
    zonecfg:sapacc:net> end
    zonecfg:sapacc> verify
    zonecfg:sapacc> commit
    zonecfg:sapacc> exit
    
    global # zoneadm -z sapacc install
    
    global # zoneadm -z sapacc boot
    
    global #  zoneadm list -cv
      ID NAME             STATUS     PATH                           BRAND    IP    
       0 global           running    /                              native   shared
       1 sapacc           running    /export/zones/sapacc        	native   excl
    
    
  • Configure the new non-global zone including the IP address and the network services
    
    global # zlogin -C -e [ sapacc
    ...
    
      > Confirm the following information.  If it is correct, press F2;             
        to change any information, press F4.                                        
                                                                                    
                                                                                    
                      Host name: sap-zone2
                     IP address: 10.6.227.134                                       
        System part of a subnet: Yes                                                
                        Netmask: 255.255.255.0                                      
                    Enable IPv6: No                                                 
                  Default Route: Detect one upon reboot                             
    
    
  • Inside the non-global zone, check the status of the VNIC and the status of the network service
    
    local# hostname
    sap-zone2
    
    local# zonename
    sapacc
    
    local# ifconfig -a
    lo0: flags=2001000849 mtu 8232 index 1
            inet 127.0.0.1 netmask ff000000 
    vnic1: flags=1000843 mtu 1500 index 2
            inet 10.6.227.134 netmask ffffff00 broadcast 10.6.227.255
            ether 2:8:20:32:9:10 
    lo0: flags=2002000849 mtu 8252 index 1
            inet6 ::1/128 
    
    local# svcs svc:/network/physical
    STATE          STIME    FMRI
    disabled       13:02:18 svc:/network/physical:nwam
    online         13:02:24 svc:/network/physical:default
    
    
  • Check the network connectivity.

    From inside the non-global zone to the outside world:

    
    local# ping -s sap29
    PING sap29: 56 data bytes
    64 bytes from sap29 (10.6.227.177): icmp_seq=0. time=0.680 ms
    64 bytes from sap29 (10.6.227.177): icmp_seq=1. time=0.452 ms
    64 bytes from sap29 (10.6.227.177): icmp_seq=2. time=0.561 ms
    64 bytes from sap29 (10.6.227.177): icmp_seq=3. time=0.616 ms
    ^C
    ----sap29 PING Statistics----
    4 packets transmitted, 4 packets received, 0% packet loss
    round-trip (ms)  min/avg/max/stddev = 0.452/0.577/0.680/0.097
    
    
    From the outside world to the non-global zone:
    
    remotehostonWAN# telnet sap-zone2
    Trying 10.6.227.134...
    Connected to sap-zone2.sun.com.
    Escape character is '^]'.
    login: test
    Password: 
    Sun Microsystems Inc.   SunOS 5.11      snv_111 November 2008
    
    -bash-3.2$ /usr/sbin/ifconfig -a
    lo0: flags=2001000849 mtu 8232 index 1
            inet 127.0.0.1 netmask ff000000 
    vnic1: flags=1000843 mtu 1500 index 2
            inet 10.6.227.134 netmask ffffff00 broadcast 10.6.227.255
    lo0: flags=2002000849 mtu 8252 index 1
            inet6 ::1/128 
    -bash-3.2$ exit
    logout
    Connection to sap-zone2 closed.
    
    

Dynamic [Re]Configuration of the [Virtual] Network Interface in a Non-Global Zone

  • Finally, try unplumbing and re-plumbing the virtual network interface inside the Exclusive-IP Non-Global Zone
    
    global # zlogin -C -e [ sapacc
    [Connected to zone 'sapacc' console]
    ..
    
    zoneconsole# ifconfig vnic1 unplumb
    
    zoneconsole# /usr/sbin/ifconfig -a
    lo0: flags=2001000849 mtu 8232 index 1
            inet 127.0.0.1 netmask ff000000
    
    zoneconsole# ifconfig vnic1 plumb
    
    zoneconsole# ifconfig vnic1 10.6.227.134 netmask 255.255.255.0 up
    
    zoneconsole# /usr/sbin/ifconfig -a
    lo0: flags=2001000849 mtu 8232 index 1
            inet 127.0.0.1 netmask ff000000
    vnic1: flags=1000843 mtu 1500 index 2
            inet 10.6.227.134 netmask ffffff00 broadcast 10.6.227.255
    lo0: flags=2002000849 mtu 8252 index 1
            inet6 ::1/128
    
    

As simple as that! Before we conclude, note that prior to Crossbow, Solaris system administrators had to use Virtual Local Area Networks (VLANs) to achieve similar outcomes.

Check the Zones and Containers FAQ if you are stuck in a strange situation or if you need some interesting ideas around virtualization on Solaris.

Wednesday Feb 18, 2009

Sun Blueprint : MySQL in Solaris Containers

With the costs of managing a data center becoming a major concern due to the increasing number of under-utilized servers, customers are actively looking for solutions to consolidate their workloads in order to:

  • improve server utilization
  • improve data center space utilization
  • reduce power and cooling requirements
  • lower capital and operating expenditures
  • reduce carbon footprint, and so on

To cater to those customers, Sun offers several virtualization technologies -- such as Logical Domains, Solaris Containers and xVM -- free of cost for the SPARC and x86/x64 platforms.

To help our customers who are planning to consolidate their MySQL databases on systems running Solaris 10, we put together a document with the installation steps and the best practices for running MySQL inside a Solaris Container. Although the document focuses on the Solaris Containers technology, the majority of the tuning tips, including the ZFS tips, are applicable to all MySQL instances running [on Solaris] under different virtualization technologies.
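
To give a flavor of the ZFS-related tips: one commonly cited recommendation of that kind (an illustration on my part, not a quote from the document; the dataset name is hypothetical) is to match the recordsize of the InnoDB data file system to the 16 KB InnoDB page size:

# zfs set recordsize=16K tank/mysql/innodb-data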

You can access the blueprint document at the following location:

        Running MySQL Database in Solaris Containers

The blueprint document briefly explains the MySQL server and Solaris Containers technology, introduces different options for installing MySQL server on Solaris 10, shows the steps involved in installing and running Solaris Zones and MySQL, and finally provides a few best practices for running MySQL optimally inside a Solaris Container.

Feel free to leave a comment if you notice any incorrect information, or if you have general suggestions to improve documents like these.

Acknowledgments

Many thanks to Prashant Srinivasan, John David Duncan and Margaret B. for their help in different phases of this blueprint.
