Fun and Games with JET / JS / SVM / Solaris

So this project is all about deployment. We have many (dozens of) T5220 servers, loaded with memory and disks. On each of those servers, we want to deploy 36 zones. One zone will be an MQ (Sun Java System Message Queue) instance for the local application/web zones, one will be a "template zone" for cloning, and 34 will be production zones running App Server (Sun Java System Application Server) and Web Server (Sun Java System Web Server). Of course, all of these wonderful products are part of the Sun Java Application Platform Suite, a key component of JES (Sun Java Enterprise System). There, we got all those marketing links and product names out of the way. :)

So this deployment architecture has several key dependencies and local requirements that presented "challenges". By "challenges", I mean decisions written in stone before the project started that make those of us doing the implementation react by rolling our eyes, sighing heavily, or responding with "Ummm... are you sure you want to do that?". Some key examples of the requirements include:

  • Full Root zones

  • Multiple dedicated filesystems per zone (see the zonecfg sketch after this list)

  • Separate /var filesystems in both the global and local zones

  • Live Upgrade in global and local zones

  • ZFS is not an option, Solaris Volume Manager and UFS only

  • Global zone deployed with all local zone filesystems created and waiting

  • Local zones deployed on demand using the pre-created filesystems

  • The provisioning servers (N1SPS and N1SM) will be clustered

  • The whole thing must fit on the internal disk drives
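
To make the first few of those concrete, a full root zone with its own dedicated filesystems looks roughly like the zonecfg below. This is just a sketch: the zone name, mount points, and metadevice numbers are made-up placeholders, not our actual layout.

    # Hypothetical sketch of one full root zone with dedicated UFS
    # filesystems on SVM metadevices. "create -b" starts from a blank
    # config (no inherit-pkg-dir entries), so the zone gets its own
    # full copy of /usr and friends instead of loopback mounts.
    zonecfg -z appzone01 <<EOF
    create -b
    set zonepath=/zones/appzone01
    set autoboot=true
    add fs
    set dir=/var
    set special=/dev/md/dsk/d203
    set raw=/dev/md/rdsk/d203
    set type=ufs
    end
    add fs
    set dir=/apps
    set special=/dev/md/dsk/d205
    set raw=/dev/md/rdsk/d205
    set type=ufs
    end
    add fs
    set dir=/logs
    set special=/dev/md/dsk/d206
    set raw=/dev/md/rdsk/d206
    set type=ufs
    end
    commit
    EOF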

Easy, right? Yeah, nothing in there really looks like a show-stopper. Maybe some stuff that isn't "de-facto standard", or "traditional", or even "the easy way to do it". Now throw in the final requirement: from the first day that you stick a Solaris installation DVD into the machine until the "final release" of the deployment framework into the test environment, you get 60 days. Yeah. 60. A total of 60 days to get Jumpstart, JET, N1SPS, N1SM, and Solaris Cluster all up, running, configured, production ready, and integrated into the customer's existing IT environment and standards. Sure, no problem.

So step one, we need a solid foundation to build this stack on. This means having a standard flar image to deploy with. We configured a Jumpstart/JET server in our development lab and started brainstorming our best guess at what we would need by the end of the project in terms of features, functionality, and configuration. We knew that the flar image was going to be a "frequent revision" item. The first task, to reduce deployment and patching times and to reduce exposure, was to "minimize" the Solaris image on the production systems. There were over 1400 packages in the image the customer was using (SUNWCall). This included the X Window System, GNOME, audio, CDE, demo software, and lots of other goodies that we knew we wanted to strip out for a headless server. My cohort started stripping packages out of SUNWCall, tracking dependencies and installation order as he went. I started with the "minimum" metaclusters (SUNWCrnet and SUNWCmreq) and worked my way up, hoping that we would meet somewhere in the middle with a minimum of conflicts.
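
In Jumpstart profile terms, the goal looks something like the fragment below. The handful of package lines is purely illustrative; the real profile names every package explicitly, in installation order.

    # Illustrative Jumpstart profile fragment: start from a minimum
    # metacluster and add back only what the stack actually needs.
    # The packages listed are examples, not our actual 327.
    install_type    initial_install
    system_type     standalone
    partitioning    default
    cluster         SUNWCrnet
    package         SUNWbash    add
    package         SUNWless    add
    package         SUNWzoner   add
    package         SUNWzoneu   add
    package         SUNWmdr     add
    package         SUNWmdu     add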

A couple of days later, we were darn close. We came up with a configuration that ran just fine, had no outstanding dependency conflicts, ran all of our target applications, and contained 327 total packages. That is about 1100 packages gone, and statistically about 70% fewer potential patches to apply. We definitely learned a lot about packaging, patching, package dependencies, package clusters and metaclusters, and just how twisty this maze of stuff gets once you go too deep into it. Definitely not for the conservative or inexperienced. If anyone is interested in the details, tools, and methods, drop me a note and I'll try to write a bit about the gory details.
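
One gory detail worth sharing now: every installed SVR4 package records its prerequisites in /var/sadm/pkg/<pkg>/install/depend, so a little script along these lines (a hypothetical example, not one of our actual tools) tells you which installed packages would complain if you removed a given one:

    #!/bin/sh
    # whodepends.sh -- hypothetical helper, for illustration only:
    # list installed packages that declare a prerequisite ("P")
    # dependency on the package named in $1.
    PKG=$1
    cd /var/sadm/pkg || exit 1
    for p in *; do
        dep="$p/install/depend"
        [ -f "$dep" ] || continue
        # lines starting with "P" in a depend file name the packages
        # this one requires; match the second field exactly
        nawk -v pkg="$PKG" '$1 == "P" && $2 == pkg { hit = 1 }
                            END { exit hit ? 0 : 1 }' "$dep" && echo "$p"
    done

Run something like "whodepends.sh SUNWcsl" before yanking a package, and you find out who would scream. Repeat until nothing screams.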

At this point, we have a running test machine, installed by hand. We want the configuration to be a repeatable, deployable object. The perfect tool in this case is "flar" (Flash Archive). We decided to make our flar evolutionary, so we didn't have to repeat the system build every time we wanted to revise a configuration file or toggle a service on or off by default. We wrote a little script called "do_flar" that cleans up log files, juggles some default settings, puts some configs into place, runs the "flar" command to create the archive, and then cleans up the system so that it is in a decent running state for us to continue work. Rebuilding the machine is then done through JET, reinstalling our master machine from the flar (a checkpoint release from the systems side). Very cool, very easy, very repeatable. More on the do_flar script later.
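
In the meantime, the general shape of the thing is something like this sketch. The paths and file lists here are made up; this is the idea, not the real script.

    #!/bin/sh
    # Rough sketch of the do_flar idea. /flars, /opt/build, and the
    # files being scrubbed are placeholders for illustration.
    NAME=base-`date '+%Y%m%d'`
    # 1. Scrub state that should not ship in the image
    cp /dev/null /var/adm/messages
    cp /dev/null /var/adm/wtmpx
    rm -f /var/adm/messages.?
    # 2. Swap in the default configs the deployed image should carry
    cp /opt/build/etc/syslog.conf /etc/syslog.conf
    # 3. Cut the archive; -c compresses, -x keeps the archive area
    #    itself out of the image
    flar create -n "$NAME" -c -x /flars /flars/$NAME.flar
    # 4. Put the lab configs back so this box stays usable for work
    cp /opt/build/etc/syslog.conf.lab /etc/syslog.conf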

The next challenge was to build the disk configuration. No ZFS allowed, and we need filesystems in each local zone for root, /var, apps, and logs. We also need Live Upgrade partitions for root and /var. That makes 6 filesystems each, times 36 local zones, plus root and /var for the global zone, plus Live Upgrade partitions there, plus swap, plus metadb space for Solaris Volume Manager. Ouch. That is well over 200 partitions, and over 200 filesystems to be created. All on the 8 internal 143GB disks. Mirrored. So all of this needs to fit in about 572GB of usable disk space. Next blog entry... squeezing a gallon of stuff into a quart-sized Zip-Loc. Or, for our UK friends, squeezing a bushel of barley into a peck-sized boot. Or, for the rest of the known metric universe, "Holy smokes! That will never fit!!". :P
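
A preview of why that is hard: a standard VTOC label gives you only 7 usable slices per disk, so there is no way to cut 200+ partitions out of 8 disks the traditional way. This is where Solaris Volume Manager soft partitions come in. The general idea looks something like the sketch below; the device names, slices, and sizes are placeholders, not our actual layout.

    # State database replicas first, then mirror one big slice from
    # each of a pair of disks...
    metadb -a -f -c 3 c0t0d0s7 c0t1d0s7
    metainit d11 1 1 c0t0d0s0
    metainit d12 1 1 c0t1d0s0
    metainit d10 -m d11
    metattach d10 d12
    # ...then carve the per-zone filesystems out of the mirror as
    # soft partitions, which are not limited to 7 per disk.
    metainit d201 -p d10 2g      # zone01 root
    metainit d202 -p d10 2g      # zone01 root, Live Upgrade ABE
    metainit d203 -p d10 1g      # zone01 /var
    metainit d204 -p d10 1g      # zone01 /var, Live Upgrade ABE
    metainit d205 -p d10 4g      # zone01 apps
    metainit d206 -p d10 2g      # zone01 logs
    newfs /dev/md/rdsk/d201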


bill.

