JET checkpoint, our milestone...

At this point, we have the packages and patches that we want, and the disk configuration that we want. We have a running system that we can fiddle with and install configuration files into. From this point forward, the system flar becomes evolutionary. Files like resolv.conf, the NTP configuration (ntp.conf), passwd, shadow, netmasks, networks, sshd_config, and the like can be configured on the JET-provisioned host and then wrapped up into our "Golden Image" flar for N1SPS/N1SM to use.
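Once the configured host looks right, wrapping it up is essentially a one-liner. A minimal sketch, assuming Solaris 10 flarcreate; the archive name and exclude path are illustrative, and the command is printed rather than executed so it can be reviewed before running it on the golden host:

```shell
#!/bin/sh
# Sketch only: the archive name and exclude path are assumptions,
# not the project's real values.
NAME=golden-image-v1
OUT=/var/tmp/${NAME}.flar

# -n names the archive, -c compresses it, -x keeps the scratch
# area out of the image. Printed, not executed, for review.
cmd="flarcreate -n ${NAME} -c -x /var/tmp ${OUT}"
echo "$cmd"
```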

For this project, we tried to maintain a line of separation between the "system" side of the configuration and the "deployed applications" side. Within the deployed-applications pile, we have two groups: IT and management applications, and end-user applications. Things like SunMC, backup tools, and administrative utilities are deployed through the IT and management steps. End-user applications (web servers, application servers, message queues) will be provisioned through other steps owned by different groups under N1SPS and N1SM (with DI, Dynamic Infrastructure). Rather than use both JET and N1SPS/N1SM to maintain our system configuration, we concentrated on the JET side and only worked within the deployment frameworks where it was necessary.

One accidental side effect that we leveraged early on was provided by the JET framework itself. All of the JET-generated scripts that twiddle the bits after the Jumpstart of the system are left behind on the JET-installed machine. In /var/opt/sun/jet, we find the post_install scripts that were used to create the system. This includes the scripts to set up the root disk mirroring, the metadbs, and the zone partitioning. The scripts for the other configuration tasks are also present: generating ssh keys, setting ndd configurations, enabling and disabling services, installing our JASS/SST driver, adding services that run a JASS audit every time the system boots, and the /etc/system setup script that I noted earlier.
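Poking at those leftover scripts is just a matter of walking the tree. A quick sketch; the top-level /var/opt/sun/jet path is real, but the sub-directory layout and script names below are mocked up for illustration:

```shell
#!/bin/sh
# List the post-install scripts a JET build leaves behind.
# /var/opt/sun/jet is the real location; the helper takes a path
# argument so the sketch can be exercised against a mock tree.
list_jet_scripts() {
    find "${1:-/var/opt/sun/jet}" -type f 2>/dev/null | sort
}

# Demo against a mock tree (these script names are invented).
mock=$(mktemp -d)
mkdir -p "$mock/post_install"
touch "$mock/post_install/mirror_root.sh" "$mock/post_install/create_zones.sh"
scripts=$(list_jet_scripts "$mock")
echo "$scripts"
rm -rf "$mock"
```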

Since all of this work was done to automate the installation and make our repetitive configuration tasks easier and safer, why not leverage that work in the deployment frameworks? So our flar not only contains the software that we want installed and the configurations of the services within that system, it also contains the scripts needed to deploy that image and those configurations onto a piece of hardware, regenerating a lot of the settings and configuration dynamically. Very cool.

As an example of how this is a timesaver for us, we had to juggle the zone soft partition sizes in mid-project. If we had been using the N1SPS and N1SM plans (including the DI, Dynamic Infrastructure, goodies) to deploy the disk configurations, we would have had to juggle configuration information for every configured host in the test environment. Instead, we modified the JET template with the new sizes, ran "make_client -f", did a "boot net - install" on our test machine, and then rolled up a new flar revision for the deployment folks, with the new disk configuration embedded in the flar through the included JET-generated scripts. Grand total time: a couple of hours. And because this was a "system side" change, the IT/management and end-user application folks under N1SPS and N1SM were never impacted by the change; in fact, they didn't even notice that it had taken place. Very cool.
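The whole cycle boils down to four steps. Sketched below as a reviewable plan; the client name and the /opt/SUNWjet paths are assumptions, and the boot command is typed at the client's OBP "ok" prompt, not in this shell:

```shell
#!/bin/sh
# Resize workflow as a printed plan. The client name and the
# JET install paths here are illustrative, not the real values.
CLIENT=testhost01
plan=$(cat <<EOF
1. edit the zone soft-partition sizes in the JET template for ${CLIENT}
2. /opt/SUNWjet/bin/make_client -f ${CLIENT}
3. (at the client's ok prompt)  boot net - install
4. flarcreate -n golden-image-v2 -c /var/tmp/golden-image-v2.flar
EOF
)
echo "$plan"
```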

Late in the project, we repeated all of our evolutionary steps on a single flar revision, just to test our release notes and documentation. We went back, installed the packages we wanted from the DVD, and applied all of our documented changes to produce the "final flar" image from scratch. This was a great way to test our documentation and our procedures, and it did uncover a few issues for us. Still, grand total time to produce the final system image flar with embedded scripts was one day, including testing. Nice.

One of my co-workers (Hi Jim!) became deeply involved with the packaging and cluster mechanics of Solaris distributions on this project. He wrote several tools (which I will let him write about at a future date, and hopefully contribute to OpenSolaris) for juggling and debugging package dependencies and installation-order information. One tool in particular I found incredibly useful: it "cooks" the package information from the installation media, takes a snapshot of the pkginfo from an installed machine, and produces the ordered list of packages to add that satisfies all of the pkgadd dependencies. This will allow us to create our own "metaclusters", so we can now add "SUNWCgolden" to the standard "SUNWCall, SUNWCprog, SUNWCuser, and SUNWCmin" Jumpstart options. No more "packages to add" and "packages to remove" post-install tasks to generate our flar from the installation media. Jim rocks.
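I won't steal Jim's thunder, but the core idea, topologically ordering packages by their prerequisite pairs, can be sketched with the stock tsort(1) utility. The SUNW* names and dependencies below are invented for illustration, not the real cluster contents:

```shell
#!/bin/sh
# Each input line is "prerequisite package", as you might harvest
# from the packages' depend files. tsort emits every package only
# after all of its prerequisites: a pkgadd-safe install order.
# These package names and pairs are illustrative only.
order=$(tsort <<'EOF'
SUNWcsr SUNWcsu
SUNWcsu SUNWssh
SUNWcsu SUNWapch
SUNWssh SUNWapch
EOF
)
# For this graph the only valid order is:
# SUNWcsr, SUNWcsu, SUNWssh, SUNWapch
echo "$order"
```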





