The most inviting Solaris 11 features - Part I, BootEnvironments


Disclaimer:


"Having been involved in various projects around the upcoming Solaris 11 release, we had the possibility to compile a list of features we assume UNIX Engineers will find to be cornerstones of Solaris 11 based platforms. It wasn't easy to keep the list short, due to the sheer amount of innovation and the tight integration of the new- or updated technologies in this Solaris release.

We have planned a series of blog posts with a short preview of each Solaris 11 technology on the list, in Question and Answer form."



This is the first post, featuring BootEnvironments.

And here we go with the questions: 

Q: What is a bootenvironment? 
A: It is a set of filesystems that together form a complete, bootable Solaris instance. After a fresh Solaris 11 install, right after the first boot, the filesystems created, together with their bootable capability, form the first bootenvironment.
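
To make the "set of filesystems" a bit more tangible, here is a minimal sketch of how you could inspect it yourself (commands only, output omitted; the rpool/ROOT layout shown is simply the default of a fresh install):

beadm list                           # the boot environments known on this system
zfs list -r rpool/ROOT               # the ZFS datasets the BEs are built from
zfs list -t snapshot -r rpool/ROOT   # the snapshots backing cloned BEs, if any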

Q: How is this different from LiveUpgrade ABEs on Solaris 10? 
A: BootEnvironments (BEs) have evolved from the LiveUpgrade technology. The main differences are:

  • BootEnvironments were integrated to be a core element in Solaris 11, and are not an add-on technology option anymore. 
  • BootEnvironments are the main and only way of managing Solaris 11 OS upgrades. 
  • ZFS is the filesystem of choice; BEs are created as ZFS clones by default, consuming essentially no extra storage space. 
  • One BE is auto-created at installation. 
  • IPS Integration: The new Image Packaging System provides hooks to manage BEs transparently at package installation/upgrade time. A system upgrade through 'pkg update' automatically creates a new BE in the background, upgrades the packages there, and awaits your reboot into the new BE (see the sketch after this list). Individual packages can request BE creation or a snapshot at installation time too. 
  • Non-global Zones contain bootenvironments too, and are associated with the parent bootenvironment in the global zone.
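
As an illustration of the IPS integration mentioned above, a routine system upgrade could look roughly like this (a sketch with output omitted; whether a new BE is created depends on the packages being updated):

pkg update     # clones the active BE, applies the updates there, activates the clone
beadm list     # the automatically created BE shows up, to be booted on the next reboot
init 6         # reboot into the upgraded boot environment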

Q: What are the main goals of using this technology? 
A: Minimizing downtime, fallback mechanisms and delegated administration: 

  • System upgrades can be done on alternate, cloned BootEnvironments without interrupting production operations; maintenance windows can basically be reduced to the reboot interval. 
  • Should a new, modified BootEnvironment not fulfill all expectations, you can boot any of the previously created, unaltered BEs. Or to put it another way: should anything happen to the active bootenvironment, you can revert at any time to a clone you created. 
  • Some bootenvironment management tasks (snapshot, activate, clone...) can be delegated to zone administrators, who can use the beadm command within the non-global zone, effectively managing their bootenvironments from within their own zone.

Now let's see an example. You get the task to prepare an environment for an SAP deployment. No problem: you log in to the right host (in our case: host1), create a new bootenvironment on it, activate the BE, reboot into it, and hand it over to the SAP administrators for the software setup: 

Step 1: What is the current BE setup/status on the host? 

root@host1:~# beadm list
BE        Active Mountpoint Space  Policy Created
--        ------ ---------- -----  ------ -------
solaris-0 NR     /          46.59M static 2011-09-24 15:19
root@host1:~# 

Step 2: Create the new BE: 

root@host1:~# beadm create SAP
root@host1:~# beadm list
BE        Active Mountpoint Space  Policy Created
--        ------ ---------- -----  ------ -------
SAP       -      -          36.0K  static 2011-09-24 16:19
solaris-0 NR     /          46.59M static 2011-09-24 15:19
root@host1:~#

Step 3: activate BE

root@host1:~# beadm activate SAP
root@host1:~# init 6
root@host1:~#

[...waiting for the host to come back up...]

Step 4: Login, check, handover: 

root@host1:~# beadm list
BE        Active Mountpoint Space  Policy Created
--        ------ ---------- -----  ------ -------
SAP       NR     /          77.51M static 2011-09-24 16:19
solaris-0 -      -          8.29M  static 2011-09-24 15:19
root@host1:~#
root@host1:~# echo "host1 is prepared for SAP setup" | mailx -s "SAP host" \
-c unixadmins@corp.com -r unixadmins@corp.com sapadmins@corp.com
root@host1:~# 

Step 5: The lengthy procedure of SAP setup.

sap@host1:~# ./setup_sap.sh 
sap@host1:~#

Of course it is not quite this easy, but this is a post about bootenvironments :) 

Step 6: After the application setup, create a snapshot of the BE with the SAP installation in it, just in case...

root@host1:~# beadm create SAP@backup
root@host1:~# beadm list -a
BE/Dataset/Snapshot                   Active Mountpoint Space  Policy Created
-------------------                   ------ ---------- -----  ------ -------
SAP
   rpool/ROOT/SAP                     NR     /          66.74M static 2011-09-24 16:19
   rpool/ROOT/SAP@2011-09-24-14:19:18 -      -          10.85M static 2011-09-24 16:19
   rpool/ROOT/SAP@backup              -      -          0      static 2011-09-24 16:42
solaris-0
   rpool/ROOT/solaris-0               -      -          8.29M  static 2011-09-24 15:19
root@host1:~#

The next day, during the first SAP tests, an SAP admin calls you with panic in his voice: during the SAP upgrade a procedure ran amok, and they need to restart the upgrade from scratch. You calm him down, explaining that your system is well prepared for cases like this, and that you can revert all the changes back to the point where they had their clean SAP installation, so they can start the upgrade again. 

Step 7: Create a new BE based on the SAP@backup snapshot, activate it and boot into it, destroy the old SAP BE, and snapshot the new SAP2ndtry BE too, you know, just in case...

root@host1:~# beadm list -s
BE/Snapshot                Space  Policy Created
-----------                -----  ------ -------
SAP
   SAP@2011-09-24-14:19:18 10.85M static 2011-09-24 16:19
   SAP@backup              24.0K  static 2011-09-24 16:42
solaris-0
root@host1:~# beadm create -e SAP@backup SAP2ndtry
root@host1:~# beadm list
BE        Active Mountpoint Space  Policy Created
--        ------ ---------- -----  ------ -------
SAP       NR     /          77.67M static 2011-09-24 16:19
SAP2ndtry -      -          36.0K  static 2011-09-24 16:58
solaris-0 -      -          8.29M  static 2011-09-24 15:19
root@host1:~# beadm activate SAP2ndtry
root@host1:~# init 6
root@host1:~#

[...wait for the host to come up...]

root@host1:~# beadm destroy SAP
Are you sure you want to destroy SAP?  This action cannot be undone(y/[n]): y
root@host1:~# beadm list
BE        Active Mountpoint Space  Policy Created
--        ------ ---------- -----  ------ -------
SAP2ndtry NR     /          86.90M static 2011-09-24 16:58
solaris-0 -      -          8.29M  static 2011-09-24 15:19
root@host1:~# beadm create SAP2ndtry@backup
root@host1:~#

You have saved their day, and spared them the effort of having to set up their application again from scratch. Even if they decided to go for PeopleSoft instead of SAP, all you'd need to do is create a BE cloned from the original solaris-0 BE, and you'd have a completely clean environment again for the PeopleSoft tests. 
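
That last step would be as simple as this (a quick sketch; the BE name PeopleSoft is made up for this example):

root@host1:~# beadm create -e solaris-0 PeopleSoft
root@host1:~# beadm activate PeopleSoft
root@host1:~# init 6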

And now, the surprise: all these reboots did not harm any other non-global zones, because...: 

root@host1:~# zonename
host1
root@host1:~# exit
logout

[Connection to zone 'host1' pts/2 closed]
root@s11ea:~# zoneadm list -cv
  ID NAME             STATUS     PATH                           BRAND    IP
   0 global           running    /                              solaris  shared
   7 host1            running    /zones/host1                   solaris  excl
   - testz            installed  /zones/testz                   solaris  excl
root@s11ea:~#

...because all of this happened within a zone! We managed the BEs inside a zone; no other zones were harmed in the making of this blogpost. 

To summarize: having separate bootenvironments allowed us to keep different configurations and application deployments apart from each other, and to easily revert to a desired snapshotted state. In this case we operated on the bootenvironments within a zone and never touched the bootenvironments of the global zone. 

Another example could be upgrading the operating system in a freshly cloned BE, rebooting into it, and, should something go wrong with the new release, simply reactivating the old BE and running the old OS release again. This, of course, brings up the question:

Q: What if I upgrade the OS, but it goes terribly wrong, and it can't even boot? How do I revert? 
A: Generally, Solaris is very well tested before a release is published. But should that case still happen, beadm auto-updates the boot menus: you can choose to boot a different BE from the GRUB menu at boot time on x86, or select the BE from the OBP using the 'boot -L' option on SPARC. 
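
On SPARC, for example, selecting a BE at the OBP prompt would look roughly like this (a sketch; the dataset name is illustrative):

ok boot -L
ok boot -Z rpool/ROOT/solaris-0

'boot -L' lists the boot environments available on the boot device; 'boot -Z <dataset>' then boots the chosen one. On x86 the same choice is simply an entry in the GRUB menu.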

We could go on and on about possible use-case scenarios; allow me to stop here and provide the links for further information: 

Q: Where can I find the documentation about bootenvironments? 
A: Solaris 11 has not been released yet, but you can download the Solaris 11 Early Adopters' release; that site also has a link to the downloadable documentation package.

Comments:

Great write up on Solaris 11's enhanced BE functionality. Thanks!

Posted by Brandon Wilson on October 27, 2011 at 03:19 PM CEST #
