Separation is a Good Thing...

One of the interesting side effects of using containers/zones is that it provides a layer of separation for administration functions. Along the way, we have an opportunity to look at other separations that we can provide between the various IT organizations contributing to the "service" being provided by the system.

As an example, how many of us IT types hear "I need the root password" from the database or application folks on a regular basis? Well, unless you have implemented RBAC (Role Based Access Control), and have trained your developers and DBAs to the point that they know what to ask for, this is probably a common request. I am a lot more comfortable as an administrator granting administrative access within a zone than within the Global Zone. I know that if they totally roach things in the local zone, I can still edit files and move things around from the Global Zone to fix things up, and the system is not "down" or in a state where I need to boot from DVD or Jumpstart and twiddle bits.

Implementing zones gives us a chance to integrate several areas of separation, providing simpler administration. As one example, I like to separate "disk/storage administration" from system/zone administration. I mount the (non-ZFS) storage under /z and then use the loopback filesystem (lofs) to make my zonepath. For example:


/z/[zonename]_root
      lofs mounted on /zones/[zonename]

/z/[zonename]_opt_oracle
      zonecfg "add fs" with mountpoint=/opt/oracle

/z/[zonename]_data_oracle
      zonecfg "add fs" with mountpoint=/data/oracle

Obviously this is even easier with ZFS, but I might get to that another day in another blahg entry. Using the resource mountpoint and the zonename in the filesystem information allows me to use grep in interesting ways with "df" and "mount" to more easily track down where things are being mounted and used.
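
With the zonename embedded in every mountpoint, finding everything a given zone touches is a one-liner (myzone again being a stand-in for a real zone name):

```shell
# All filesystems belonging to a particular zone
df -h | grep myzone

# All lofs mounts backed by the /z storage tree
mount | grep /z/
```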

I also try to avoid using devices within zones. This is especially important when I am using branded containers, as devices create interesting dependencies between the Solaris 8 software and the Solaris 10 kernel interfaces. One prime example: Veritas File System (vxfs) and Veritas Volume Manager (vxvm). Often, when migrating a physical Solaris 8 machine into a branded container, admins will try to move administrative functions into the container and treat it as a real machine. Just say no. There are huge benefits in moving "administrative functions" into the Global Zone and leaving only the application functionality within the branded container. Manage your local storage, SAN, and volume resources from the GZ, and just present "disk space" into the local zones.
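
As a concrete sketch of that approach: rather than an "add device" of the raw Veritas volume into the zone, the volume stays under GZ control and only the mounted filesystem is handed in. The disk group, volume, and zone names below are made up:

```shell
# In the Global Zone: create and mount the vxfs filesystem as usual
mkfs -F vxfs /dev/vx/rdsk/datadg/oravol
mount -F vxfs /dev/vx/dsk/datadg/oravol /z/myzone_data_oracle

# Then present it to the zone as plain "disk space" via lofs
zonecfg -z myzone <<EOF
add fs
set dir=/data/oracle
set special=/z/myzone_data_oracle
set type=lofs
end
commit
EOF
```

The zone sees an ordinary filesystem at /data/oracle, and all the VxVM/vxfs administration stays in the Global Zone where the Solaris 10 kernel interfaces actually live.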

Other administrative functions that are much easier to keep in the GZ include backups, network administration, configuration management, and performance and capacity management. I find a lot of effort being spent on trying to "shoehorn" these functions into a container, when logically they belong in the "system", or GZ. Separating the "system" (there is only one, and it is a shared resource) from the "application" (there can be many, and they consume the "system") has huge benefits in the ongoing administration and maintenance efforts. This is more of a "how to think" problem than a "what to think" education.

Just my opinions, your mileage may vary, objects in mirror may be closer than they appear, etc. Feel free to comment, debate, or object in the comments below.


bill.

