Saturday Jul 12, 2008

So Long for now. Hope to be back!!

It is with sad emotions that I take a moment to inform you all that on July 10th, 2008, my position at Sun was eliminated. I have truly enjoyed working with all of you in my 11 years here at Sun. I will most certainly remember my employment at Sun as one of my greatest learning experiences, and I will miss working with everyone.


The personal and professional relationships I have made throughout my tenure here will leave lasting impressions on me, and I will cherish them forever. I owe a great deal of my success over these years to many of you. I'm especially proud of having been part of two fantastic groups within Sun: the Solaris Adoption Practice, managed by Dawit Bereket, and the OS Ambassadors Program, where I was surrounded by the most brilliant minds within Sun Microsystems.


Thanks for your friendship, teamwork, and for allowing me to be a part of a once in a lifetime experience. I hope that the future brings more opportunities for our paths to cross.


Thanks, best wishes, and good fortune to all of you. I hope to be back someday.

"Remember life should not be a journey to the grave with the intention of arriving safely in a pretty and well-preserved body, but rather to skid in broadside, thoroughly used up, totally worn out, and loudly proclaiming -


WOW - what a ride!"

Monday Jun 09, 2008

The Live Upgrade Experience


After many hours of pondering whether to write a blog, I'm finally taking the plunge and joining the ranks of the Sun bloggers. For my first post I'd like to share my experience with Live Upgrade.


The Live Upgrade feature of the Solaris operating environment enables you to maintain multiple operating images on a single system.  An image, called a boot environment (BE), represents a set of operating system and application software packages.  The BEs might contain different operating system and/or application versions.


On a system with the Solaris Live Upgrade software, your currently booted OS environment is referred to as your active, or current, BE.  You have one active, or current, BE; all others are inactive.  You can perform any number of modifications to inactive BEs on the same system, then boot from one of those BEs.  If there is a failure or some undesired behavior in the newly booted BE, Live Upgrade software makes it easy for you to fall back to the previously running BE.


I've found that Solaris Live Upgrade has many capabilities.  Upgrading a system to a new Solaris release couldn't be easier; it involves three basic commands.


lucreate(1M) – create a new boot environment
luupgrade(1M) – installs, upgrades, and performs other functions on software on a boot environment
luactivate(1M) – activate a boot environment
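
Strung together, the basic workflow looks like this (the boot environment names and media path are placeholders; the full walkthrough with real output follows below):

# lucreate -c oldBE -n newBE -m /:c0t1d0s0:ufs
# luupgrade -u -n newBE -s /path/to/solaris/media
# luactivate newBE
# init 6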


Some notes on patching using the Live Upgrade method


Solaris Live Upgrade is not just limited to OS upgrades. In fact it can be used to manage downtime and risk when patching a system as well.


Because you are applying the patches to the inactive boot environment, patching has only minimal impact on the currently running environment. The production applications can continue to run as the patches get applied to the inactive environment. System Administrators can now avoid taking the system down to single-user mode, which is the standard practice for applying the Recommended Patch Clusters.


You can boot the newly patched environment, test your applications, and if you are not satisfied, just reboot to the original environment. With Solaris 10 8/07, patching Solaris Containers with Solaris Live Upgrade is fully supported. This can have a dramatic impact on decreasing downtime during patching.
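
As a quick sketch, patching an inactive BE with the Recommended Patch Cluster looks something like this (the boot environment name and patch directory are placeholders; the Patching with Solaris Live Upgrade post further down walks through a real run):

# luupgrade -t -n newBE -s /var/tmp/10_Recommended `cat /var/tmp/10_Recommended/patch_order`
# luactivate newBE
# init 6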


Live Upgrade prerequisites when preparing to patch your systems.


System Type Used in Exercise



  • SunFire 220R

  • 2x 450MHz USII processors with 4MB of cache

  • 2048MB of memory

  • 2x internal 18GB SCSI drives

  • Sun StorEdge D1000


Preparing for Live Upgrade



  • I'm starting with a freshly installed Solaris 10 11/06 release system; we will call its root (/) file system the primary boot environment.  I'll begin by logging into the root account and patching the system with the latest Solaris 10 Recommended Patch Cluster, downloaded via SunSolve.

  • I will also install the required patches from the former Sun InfoDoc 72099, now Sun InfoDoc 206844. This document provides information about the minimum patch requirements for a system on which Solaris Live Upgrade software will be used.

  • It is imperative that you ensure the target system meets these patch requirements before attempting to use Solaris Live Upgrade software on your system (a quick check is sketched just below this list).
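
A quick way to perform that check is to look for each required patch ID with showrev(1M) (a sketch; substitute the actual patch IDs listed in InfoDoc 206844 for your release and architecture, and apply any missing ones with patchadd(1M)):

root@sunrise1 # showrev -p | grep <patch-id>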


To install the latest Solaris Live Upgrade packages, you will use a script called liveupgrade20.  The script runs silently and installs the latest Solaris Live Upgrade packages. You can run the command without the -noconsole and -nodisplay options to get the GUI install tool.  As you will see, I ran it with these options.


root@sunrise1 #  pkgrm SUNWlur SUNWluu
root@sunrise1 # mount -o ro -F hsfs `lofiadm -a /apps/solaris-images/s10u4/
SPARC/solarisdvd.iso` /mnt
root@sunrise1 # cd /mnt/Solaris_10/Tools/Installers
root@sunrise1 # ./liveupgrade20 -noconsole -nodisplay

Note: This will install the following packages: SUNWluu, SUNWlur, and SUNWlucfg.


For the purpose of this exercise, I wanted to see if performing a Live Upgrade of my Solaris 10 11/06 system with four non-global zones would work as documented. I created these four sparse-root zones with a Perl script I called s10-zone-create.   Feel free to use it at your own risk!
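
The script itself isn't reproduced here; under the hood it essentially wraps the standard zonecfg(1M) and zoneadm(1M) steps. A manual equivalent for the first zone would look roughly like this (values are taken from the example run below, and the default create gives a sparse-root zone that inherits lib, platform, sbin, and usr from the global zone; treat this as a sketch, not the script itself):

root@sunrise1 # zonecfg -z zoneA
zonecfg:zoneA> create
zonecfg:zoneA> set zonepath=/zones/zoneA
zonecfg:zoneA> add net
zonecfg:zoneA:net> set physical=qfe0
zonecfg:zoneA:net> set address=192.168.1.27
zonecfg:zoneA:net> end
zonecfg:zoneA> verify
zonecfg:zoneA> commit
zonecfg:zoneA> exit
root@sunrise1 # zoneadm -z zoneA install
root@sunrise1 # zoneadm -z zoneA boot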


The Perl script generated the following output. Note: I ran the script once for each zone I wanted to create.


root@sunrise1 # ./s10-zone-create 

Solaris 10 Zone Creator

------------------------------------------------------------------------------

What is the name of the new zone to be created?  zoneA

What is the name of the directory to be used for this zone? [zoneA]

What is the name of the ethernet interface to bind the IP address to? (ex: bge0) qfe0

Do you want to inherit standard directories (lib,platform,sbin,usr) from the global zone? [yN] y

Please verify the following information:

Zone name: zoneA

Zone Directory: zoneA

Zone IP Address: 192.168.1.27

Ethernet Interface: qfe0

Inherit Directories: y

Are these entries correct? [yN] y

Creating the zone...

Done!

Installing the zone, this will take awhile ...

Preparing to install zone <zoneA>

Creating list of files to copy from the global zone.

Copying <2543> files to the zone. 

Initializing zone product registry.

Determining zone package initialization order.

Preparing to initialize <1059> packages on the zone.

Zone <zoneA> is initialized.

Done!

Now booting the zone...

Done!

Zone setup complete, connect to the virtual console with the following command:

-> zlogin -C -e @ zoneA <-   *Exit by typing @.



As instructed, I logged onto each zone's console using the zlogin(1M) command above. I do this because once the zone has been booted, the normal system identification process for a newly installed Solaris OS instance is started.  You will be asked configuration questions concerning the naming service, timezone, and other system parameters that should be answered as appropriate for your site.  I've omitted the output here just to save some time.
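
As an aside, those interactive questions can usually be pre-answered by placing a sysidcfg file in the zone before its first boot (a minimal sketch with placeholder values; see sysidcfg(4) for the full keyword list and any keywords your Solaris update requires):

root@sunrise1 # cat > /zones/zoneA/root/etc/sysidcfg <<EOF
system_locale=C
terminal=vt100
timezone=US/Eastern
network_interface=NONE
name_service=NONE
security_policy=NONE
root_password=<encrypted-password-string>
EOF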


Before starting the actual Live Upgrade process, I ran the following command just so you can see that my four zones are running.


root@sunrise1 # zoneadm list -cv

  ID NAME     STATUS   PATH           BRAND    IP
   0 global   running  /              native   shared
   1 zoneA    running  /zones/zoneA   native   shared
   2 zoneB    running  /zones/zoneB   native   shared
   3 zoneC    running  /zones/zoneC   native   shared
   4 zoneD    running  /zones/zoneD   native   shared


 Now for the Live Upgrade


Now we are ready to make a copy of the root (/) file system.  Figure 1 below shows what I started with. I used the two internal disks; although they are not very fast, I created a partition on the second disk the same size as the root (/) partition on the first disk.


Figure 1


Figure 1  Before running the lucreate command


The object of this next step is to create a copy of the current root (/) file system so that you have what is pictured in Figure 2 below.


Figure 2


Figure 2  After running the lucreate command


You will need to name both the current (active) boot environment and the copy (the inactive boot environment).  The partition you will use for the copy must not appear in /etc/vfstab; in other words, it should not have a file system on it.  In this case I've named the active environment "sol10u3".


Because the goal of this exercise is to upgrade to Solaris 10 8/07, I've named the new inactive boot environment "sol10u4". You will also need to specify that you want to make a copy of the root (/) file system and where that new partition is located. For my exercise I used c0t1d0s0.  Also note that I've specified that the file system is to be UFS.


The last three pieces of information are concatenated for the `-m' argument.
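
In other words, each -m option takes the form mountpoint:device:fs_type, and it can be repeated once for every file system you want placed on the new boot environment. For example (the separate /var slice here is purely hypothetical, just to illustrate the repetition):

# lucreate -c sol10u3 -n sol10u4 -m /:c0t1d0s0:ufs -m /var:c0t1d0s4:ufs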



To create the new (inactive) boot environment, I issued the following command:


root@sunrise1 # lucreate -c sol10u3 -n sol10u4 -m /:c0t1d0s0:ufs

The command generated output similar to the following. The time to completion for this command will vary, depending on the speed of the system and its disks. In my case my SunFire 220R isn't very fast!! :(


Wed Oct 24 10:28:27 EDT 2007
root@sunrise1 # lucreate -c sol10u3 -n sol10u4 -m /:c0t1d0s0:ufs
Discovering physical storage devices
Discovering logical storage devices
Cross referencing storage devices with boot environment configurations
Determining types of file systems supported
Validating file system requests
The device name <c0t1d0s0> expands to device path </dev/dsk/c0t1d0s0>
Preparing logical storage devices
Preparing physical storage devices
Configuring physical storage devices
Configuring logical storage devices
Analyzing system configuration.
Comparing source boot environment <sol10u3> file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Searching /dev for possible boot environment filesystem devices

Updating system configuration files.
The device </dev/dsk/c0t1d0s0> is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment <sol10u4>.
Source boot environment is <sol10u3>.
Creating boot environment <sol10u4>.
Creating file systems on boot environment <sol10u4>.
Creating <ufs> file system for </> in zone <global> on </dev/dsk/c0t1d0s0>.
Mounting file systems for boot environment <sol10u4>.
Calculating required sizes of file systems for boot environment <sol10u4>.
Populating file systems on boot environment <sol10u4>.
Checking selection integrity.
Integrity check OK.
Populating contents of mount point </>.
Copying.
Creating shared file system mount points.
Copying root of zone <zoneA>.
Copying root of zone <zoneB>.
Copying root of zone <zoneC>.
Copying root of zone <zoneD>.
Creating compare databases for boot environment <sol10u4>.
Creating compare database for file system </>.
Updating compare databases on boot environment <sol10u4>.
Making boot environment <sol10u4> bootable.
Population of boot environment <sol10u4> successful.
Creation of boot environment <sol10u4> successful.
root@sunrise1 # date
Wed Oct 24 11:26:16 EDT 2007

The lucreate command took approximately 58 minutes.



At this point, you're ready to run the luupgrade command; however, if you encountered problems with the lucreate, you may find it very useful to run the lustatus(1M) utility to see the state of the boot environments.


In my case everything went as planned. Here is the output of the lustatus (1M) utility from my server.


root@sunrise1 # lustatus

Boot Environment           Is        Active  Active     Can     Copy
Name                       Complete  Now     On Reboot  Delete  Status
-------------------------- --------  ------  ---------  ------  ------
sol10u3                    yes       yes     no         no      -
sol10u4                    yes       no      yes        no      -

After the new boot environment is created, you can then begin the upgrade procedure.


As shown in the example below, I will upgrade from the Solaris 10 11/06 release. After completing this next step you will have the situation depicted in Figure 3.



Figure 3


Figure 3  After running the luupgrade command



To upgrade to the new Solaris release, you will use the luupgrade(1M) command with the -u option. The -s option identifies the path to the media. In my case I already had the media mounted on /mnt, as shown above in the Preparing for Live Upgrade section.


 root@sunrise1 # date

Wed Oct 24 11:58:51 EDT 2007 

The command line would be as follows:


root@sunrise1 # luupgrade -u -n sol10u4 -s /mnt

The command generated the following output:

183584 blocks
miniroot filesystem is <lofs>
Mounting miniroot at </mnt/Solaris_10/Tools/Boot>
Validating the contents of the media </mnt>.
The media is a standard Solaris media.
The media contains an operating system upgrade image.
The media contains <Solaris> version <10>.
Constructing upgrade profile to use.
Locating the operating system upgrade program.
Checking for existence of previously scheduled Live Upgrade requests.
Creating upgrade profile for BE <sol10u4>.
Determining packages to install or upgrade for BE <sol10u4>.
Performing the operating system upgrade of the BE <sol10u4>.
CAUTION: Interrupting this process may leave the boot environment unstable or unbootable.
Upgrading Solaris: 16% completed
Upgrading Solaris: 31% completed
Upgrading Solaris: 47% completed
Upgrading Solaris: 62% completed
Upgrading Solaris: 78% completed
Upgrading Solaris: 93% completed
Upgrading Solaris: 100% completed
Installation of the packages from this media is complete.
Updating package information on boot environment <sol10u4>.
Package information successfully updated on boot environment <sol10u4>.
Adding operating system patches to the BE <sol10u4>.
The operating system patch installation is complete.
INFORMATION: The file </var/sadm/system/logs/upgrade_log> on boot environment <sol10u4> contains a
log of the upgrade operation.
INFORMATION: The file </var/sadm/system/data/upgrade_cleanup> on boot environment <sol10u4> contains a
log of cleanup operations required.
INFORMATION: Review the files listed above. Remember that all of the files are located on boot environment
<sol10u4>. Before you activate boot environment <sol10u4>, determine if any additional system maintenance is required or
if additional media of the software distribution must be installed.
The Solaris upgrade of the boot environment <sol10u4> is complete.
root@sunrise1 # date
Wed Oct 24 18:05:27 EDT 2007

The luupgrade command took approximately 6 hours and 7 minutes.

The luupgrade process took a considerable amount of time because of the speed of my server and its SCSI disks. Yup!! It's an old timer, like myself.  :)


Now I've created a new (inactive) boot environment and upgraded it to the Solaris 10 8/07 release.  In the next two steps, you will first tell Solaris to use the new environment the next time you boot, and then you will reboot the system so that it comes up running the new environment, as shown below in Figure 4.



Figure 4





Figure 4 After running the luactivate command and rebooting




At this point you're ready to indicate which boot environment is to be used the next time you reboot. In my case the name is sol10u4.


The command line is as follows:


root@sunrise1 # luactivate sol10u4

The output of the luactivate command is as follows:


**********************************************************************

The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.

**********************************************************************

In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:

1. Enter the PROM monitor (ok prompt).

2. Change the boot device back to the original boot environment by typing:


     setenv boot-device /pci@1f,4000/scsi@3/disk@0,0:a


3. Boot to the original boot environment by typing:

     boot

**********************************************************************

Activation of boot environment <sol10u4> successful.


Please note the warning in the output above: the proper command to use to reboot is init or shutdown.


One final note: if you are finished with a boot environment and no longer need it, make sure that you dispose of it properly. The proper way is to use the ludelete(1M) command. If you destroy the boot environment through some other command or mechanism (e.g., newfs or rm -rf) you risk having your system not boot.
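
For example, once you are completely satisfied with the new environment and no longer need the fallback, removing the old boot environment is a one-liner:

root@sunrise1 # ludelete sol10u3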


Oh yeah!!  I was quite surprised that Live Upgrade was nice enough to change my OBP boot-device entry for me. Wow, that's kool!!


New: boot-device /pci@1f,4000/scsi@3/disk@1,0:a
Old: boot-device /pci@1f,4000/scsi@3/disk@0,0:a

Reboot your system


root@sunrise1 # init 6

Success!!!  :)  Give it a try!!
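
A couple of quick checks after the reboot will confirm that you really are running the new environment (the exact release string will depend on your media):

root@sunrise1 # cat /etc/release
root@sunrise1 # lustatus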



What was kool about this entire exercise was that it worked the first time as documented, and that for customers the reboot would be the only downtime they would experience using the Live Upgrade method.


On a personal note, I'd like to thank Bob Netherton, Jeff Victor, Steffen Weiberle, and Linda Kately for their insight on this subject. Thank you so much!  Your knowledge is incredible!!


Well, until next time.  My next post will be on patching using Live Upgrade.  Bye for now.


Any questions? Feel free to call me or send me an email.








Monday Jun 02, 2008

Exploring Sun's xVM VirtualBox Part II

This is Part II, as I mentioned in my previous post, Exploring Sun's xVM VirtualBox.



You'll first need to download VirtualBox, by clicking here.

Select the Platform and Language you desire. In my case, since my Toshiba M9 laptop has an Intel dual-core processor and is running Solaris Nevada Build 90, I selected the Platform OpenSolaris AMD64.

With VirtualBox, you can run unmodified operating systems – including all of the software that is installed on them – directly on top of your existing operating system, in a special environment that is called a “virtual machine”. Your physical computer is then usually called the “host”, while the virtual machine is often called a “guest”.
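
As a side note, everything in this walkthrough is done through the GUI, but VirtualBox also ships a command-line tool, VBoxManage, that can perform the same VM-creation steps. Very roughly (option spellings have changed between VirtualBox releases, so treat this as a sketch rather than exact 1.6 syntax, and check VBoxManage --help on your build):

# VBoxManage createvm --name WinXP --register
# VBoxManage modifyvm WinXP --ostype WindowsXP --memory 512
# VBoxManage createhd --filename WinXP.vdi --size 10240
  (attach the .vdi and the installation ISO -- modifyvm on older releases, storageattach on newer ones)
# VBoxManage startvm WinXP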


Part II

Step1: Let's start by launching Sun xVM VirtualBox.

 root@sunrise8 # /opt/VirtualBox/VirtualBox

You will now see the Welcome to VirtualBox window.

Click image to enlarge

Step2: Let's create a new virtual machine. Start by clicking on the New button in VirtualBox. You now get the Welcome to the New Virtual Machine Wizard dialog. Select the Next button.


Click image to enlarge

Step3: In the VM Name and OS Type dialog window, give a name to the new Virtual Machine and select the OS Type. In this case you'll notice the name I've selected is WinXP, and the OS Type I've selected from the OS Type drop-down menu is Windows XP. Now click the Next button.


Click image to enlarge

Step4: In the Create New Virtual Machine Memory dialog you can adjust the memory size of your virtual machine. You'll notice it gives you a recommended base memory size of 192MB. In this case I've chosen to give this virtual machine 512MB. You do this by moving the Base Memory Size slide bar or just changing the number in the associated box. After making the necessary changes click the Next button.


Click image to enlarge

Step5: In the Create New Virtual Machine Virtual Hard Disk dialog, you will select the image to be used as the boot hard disk for the virtual machine. You can either create a new hard disk using the New button or select an existing hard disk image from the drop-down list or by pressing the Existing button to invoke the Virtual Disk Manager dialog.


In this case I will choose New to create a new hard disk.


Click image to enlarge

Step6: The Welcome to the New Virtual Disk Wizard dialog will appear, which will help you create a new virtual hard disk image for your virtual machine.


In this case we will use the Next button to go to the next page of the wizard.


Click image to enlarge

Step7: In the Virtual Disk Image Type dialog window, you will have two choices for your virtual disk image type: a dynamically expanding image or a fixed-size image. Please read the captions for each choice. Choose wisely!!!


In our case here, since I have enough disk space, we'll choose the dynamic expanding image.


Click image to enlarge

Step8: In the Virtual Disk Location and Size dialog you can press the Select button to select the location and name of the file to store the virtual hard disk image, or type the file name in the entry field.


For the Image File Name I've selected the name WinXP.  Also in this dialog window you will select the size of the virtual hard disk image in megabytes. This size will be reported to the Guest OS as the size of the virtual hard disk.


By default the wizard has chosen 10.00GB for our virtual hard disk image size. For this exercise we'll leave it at 10GB. Then click the Next button.


Note: If you selected the dynamically expanding image in the previous Virtual Disk Image Type dialog window above, the image created here will not necessarily start out at 10GB, but can expand as needed up to the full 10GB size. Keep your disk space in mind!!!!


Click image to enlarge

Step9: The Create New Virtual Disk Summary dialog window will appear to give you a summary of the how your new virtual hard disk will be created.


The following screen shows your virtual hard disk image parameters. You can go back and change things if you'd like or in our case we will click the Finish button.


Click image to enlarge

Step10: The Virtual Hard Disk dialog showing the Boot hard Disk (Primary Master)


This dialog will show you the path to the Virtual Hard Disk Image file. Click the Next button.


Click image to enlarge

Step11: The Create New Virtual Machine Summary dialog will now show you the parameters of your new, soon-to-be-created virtual machine. Click the Finish button.


Click image to enlarge

The next screenshot of the Sun xVM VirtualBox now shows the Virtual Machines that were created.


Click image to enlarge

Step12: From the Sun xVM VirtualBox dialog window, we will need to change some details shown on the right in the dialog window. For installing the Windows XP image we will click on the CD/DVD-ROM details link and change the following parameters.


Note: I ignored the following error dialog box that appeared, warning that the USB Proxy Service has not been ported to this host. I just clicked OK to move ahead.


Click image to enlarge

In the WinXP Settings dialog window, I've selected the Mount CD/DVD-ROM checkbox and the ISO Image File checkbox. Click the Select folder icon next to the drop-down box to tell the installer the location of the OS image file. The Virtual Disk Manager dialog will open, where you can add image locations. After adding the location of the image, click the Select button, then click the OK button in the WinXP Settings dialog window.


Click image to enlarge

Step13: Now you will see the Sun xVM VirtualBox dialog window with changes you made above, showing the details of the new WinXP virtual machine.


Highlight the Virtual Machine you want to install, in this case WinXP, and click the Start button.


Click image to enlarge

After clicking the Start button you will now see the following WinXP Running Sun xVM VirtualBox informational dialog box.


Note: At this point, the Auto capture keyboard option is turned on. This will cause the Virtual Machine to automatically capture the keyboard every time the VM window is activated and make it unavailable to other applications running on your host machine. You can press the host key, identified here as the Right Ctrl key, to release the keyboard and mouse.


Click image to enlarge

Step14: The above dialog window will now change to a blue Windows Setup dialog window, where you will be prompted for various key strokes.  This will begin the setup of the Windows XP OS for the new Virtual Machine.


Click image to enlarge

If you are at all familiar with setting up Windows XP, the next several screenshots will be very familiar to you. On this next Windows XP Setup dialog window, you prepare your new WinXP VM for installation. For this exercise we will just hit the Enter key to continue.


Click image to enlarge

Now you'll be asked to accept the Windows XP Licensing Agreement. Hit the F8 Key


Click image to enlarge

After accepting the Windows XP Licensing Agreement, you'll be presented with the Windows XP Professional dialog window showing you the disk partition layout for your new WinXP virtual machine. As you'll notice, the size of the partition is what we specified at the beginning when setting up the WinXP VM. Hit the Enter key to start the installation.


Click image to enlarge

In this following Windows XP Professional dialog window we will select the filesystem type for Windows XP. In this exercise we will format the partition as NTFS.


Click image to enlarge

The Windows XP Professional Setup will now begin to format the partition.


Click image to enlarge

The Windows XP Professional Setup is now verifying the partition.


Click image to enlarge

The Windows XP Professional Setup is now copying files.


Click image to enlarge

Step15: You will now be presented with the Windows XP Regional and Language dialog window. Make your appropriate changes, and click the Next button.


Click image to enlarge

Windows allows you to personalize your software. Fill in the appropriate fields and click the Next button.


Click image to enlarge

You'll now be asked to enter your Windows XP Product Key information; then click the Next button.


Click image to enlarge

You'll now be asked to enter your computer name and Administrator password, then click the Next button.


Click image to enlarge

You'll now be asked to enter your Date and Timezone information, then click the Next button.


Click image to enlarge

Windows will now make your changes and move on.


Click image to enlarge

You will now be presented with the Network Settings dialog window. Here you have two choices, Typical or Custom. For this exercise we will select the Typical check box and click the Next button.


Click image to enlarge

Windows will now ask about your Workgroup and Computer Domain. For this exercise we will accept the default as shown and click the Next button.


Click image to enlarge

Windows is now copying files


Click image to enlarge

The Windows installer has now completed copying files and is moving on.


Click image to enlarge

Step16: VirtualBox will now allow you to adjust the desktop size. Click the OK button.


Click image to enlarge

You'll now see a Monitor Settings dialog box, where Windows has adjusted your screen resolution. If you can read the text, click the OK button.


Click image to enlarge

Windows XP will now begin to launch so that you can complete some necessary questions about your installation.


Click image to enlarge


Click image to enlarge

Loading Windows XP


Click image to enlarge

Step17: You now see the Welcome to Microsoft Windows window thanking you for purchasing Windows. Over the next several screens you will be setting up various pieces of Windows XP such as security, Internet connectivity, and Windows activation. Click the Next button.


Click image to enlarge

Here you will be asked to set up Windows security features. For this exercise we will elect not to set up security at this time. I checked the Not Right Now box and then clicked the Next button.


Click image to enlarge

Here Windows is checking Internet Connectivity. When completed, click the Next button.


Click image to enlarge

Windows will now ask whether this computer connects to the Internet directly or through another network. When completed, click the Next button.


Click image to enlarge

Windows is now ready to be activated. For the purpose of this exercise I will not activate Windows at this time. I checked the appropriate box and clicked the Next button.


Click image to enlarge

Windows will now ask who will use this computer. You can set up multiple users at this time if you'd like. For this exercise I just set up one user and clicked the Next button.


Click image to enlarge

You now get the Windows Thank You screen.  Click the Finish button.


Click image to enlarge

Windows will now launch, and you'll be presented with your Windows desktop.


Click image to enlarge

Step18: Let's launch a browser. As you can see I can get to the Internet.


Click image to enlarge

In this next screenshot, I've launched a terminal window showing that I've successfully booted a VM running OpenSolaris on a ZFS root filesystem, along with my Windows XP VM.


Click image to enlarge

Step19: Now we'll load the Sun's xVM Windows Guest Additions.


Click image to enlarge

On this next screen you'll be asked to accept the License Agreement. I've checked the appropriate box and clicked the Next button.


Click image to enlarge

The Sun xVM Windows Guest Additions installer will now ask you to choose a location to install the software. For this exercise I took the default location and clicked the Install button.


Click image to enlarge

The Sun's xVM Windows Guest Additions will now be installed.


Click image to enlarge

You'll now be presented with a Hardware Installation dialog window telling you that the software you installed for the VirtualBox Graphics Adapter has not passed Windows Logo testing to verify its compatibility with Windows XP. Read carefully!!  For this exercise I clicked the Continue Anyway button.


Click image to enlarge

On this next screen, you'll be asked to complete the Sun xVM Windows Guest Additions installation by rebooting the Windows XP guest OS. Check the appropriate box and click the Finish button.


Click image to enlarge

There you go, "A basic Windows XP Guest OS on VirtualBox"


Click image to enlarge

Friday May 16, 2008

Exploring Sun's xVM VirtualBox

If you're like most people, when it comes to Virtualization you've probably heard the names VMware or Xen.


There is a new name in town: VirtualBox.


VirtualBox is a family of powerful x86 virtualization products for enterprise as well as home use. Not only is VirtualBox an extremely feature rich, high performance product for enterprise customers, it is also the only professional solution that is freely available as Open Source Software under the terms of the GNU General Public License (GPL). See "About VirtualBox" for an introduction.


VirtualBox Version 1.6 is the first major release since the acquisition by Sun Microsystems.


Version 1.6 has new host platform support for Mac and Solaris, as well as new high-performance virtual devices and improved scalability.


VirtualBox also runs on Windows, Linux, Macintosh and OpenSolaris hosts and supports a large number of guest operating systems including but not limited to Windows (NT 4.0, 2000, XP, Server 2003, Vista), DOS/Windows 3.x, Linux (2.4 and 2.6), and OpenBSD.



Well, let's give it a look!  In this exercise, I already have VirtualBox installed and want to create two local VMs (virtual machines): Part I covers OpenSolaris 2008.05, and Part II covers Windows XP.


Equipment Used:



  • Toshiba M9 laptop
    Intel® Core™2 Duo Processor
    4GB memory


You'll first need to download VirtualBox, by clicking here.


Select the Platform and Language you desire. In my case, since my Toshiba M9 laptop has an Intel dual-core processor and is running Solaris Nevada Build 87, I selected the Platform OpenSolaris AMD64.


With VirtualBox, you can run unmodified operating systems – including all of the software that is installed on them – directly on top of your existing operating system, in a special environment that is called a “virtual machine”. Your physical computer is then usually called the “host”, while the virtual machine is often called a “guest”.


Part I


Step1: Let's start by launching Sun xVM VirtualBox.


 root@sunrise8 # /opt/VirtualBox/VirtualBox

You will now see the Welcome to VirtualBox window



Click image to enlarge



Step2: Let's create a new virtual machine. Start by clicking on the New button in VirtualBox. You now get the Welcome to the New Virtual Machine Wizard dialog. Select the Next button.



Click image to enlarge



Step3: In the VM Name and OS Type dialog window, give a name to the new Virtual Machine and select the OS Type. In this case you'll notice the name I've selected is OpenSolaris2008.05, and the OS Type I've selected from the OS Type drop-down menu is Solaris. Now click the Next button.


Click image to enlarge

Step4: In the Create New Virtual Machine Memory dialog you can adjust the memory size of your virtual machine. You'll notice it gives you a recommended base memory size of 512MB. In this case I've chosen to give this virtual machine 1024MB (1GB). You do this by moving the Base Memory Size slide bar or just changing the number in the associated box. After making the necessary changes click the Next button.



Click image to enlarge



Step5: In the Create New Virtual Machine Virtual Hard Disk dialog, you will select the image to be used as the boot hard disk for the virtual machine. You can either create a new hard disk using the New button or select an existing hard disk image from the drop-down list or by pressing the Existing button to invoke the Virtual Disk Manager dialog.


In this case I will choose New to create a new hard disk.



Click image to enlarge



Step6: The Welcome to the New Virtual Disk Wizard dialog will appear, which will help you create a new virtual hard disk image for your virtual machine.


In this case we will use the Next button to go to the next page of the wizard.



Click image to enlarge



Step7: In the Virtual Disk Image Type dialog window, you will have two choices for your virtual disk image type: a dynamically expanding image or a fixed-size image. Please read the captions for each choice. Choose wisely!!!


In our case here, since I have enough disk space, we'll choose the dynamic expanding image.



Click image to enlarge



Step8: In the Virtual Disk Location and Size dialog you can press the Select button to select the location and name of the file to store the virtual hard disk image, or type the file name in the entry field.


In our case shown here the Image File Name was pre-selected for us, called OpenSolaris2008.05.  Also in this dialog window you will select the size of the virtual hard disk image in megabytes. This size will be reported to the Guest OS as the size of the virtual hard disk.


By default the wizard has chosen 16.00GB for our virtual hard disk image size. I will change this to 10GB for this exercise. Then click the Next button.


Note: If you selected the dynamically expanding image in the previous Virtual Disk Image Type dialog window above, the image created here will not necessarily start out at 10GB, but can expand as needed up to the full 10GB size. Keep your disk space in mind!!!!



Click image to enlarge



Step9: The Create New Virtual Disk Summary dialog window will appear to give you a summary of the how your new virtual hard disk will be created.


The following screen shows your virtual hard disk image parameters. You can go back and change things if you'd like or in our case we will click the Finish button.



Click image to enlarge



Step10: The Virtual Hard Disk dialog showing the Boot hard Disk (Primary Master)


This dialog will show you the path to the Virtual Hard Disk Image file. Click the Next button.



Click image to enlarge



Step11: The Create New Virtual Machine Summary dialog will now show you the parameters of your new soon to be created virtual machine.


Click the Finish button 



Click image to enlarge



Note: To conserve the amount of reading in this post, I repeated Step1 through Step11 for my Windows XP virtual machine.


The next screenshot of the Sun xVM VirtualBox now shows the two Virtual Machines that are ready to be created and loaded.



Click image to enlarge



Step12: From the Sun xVM VirtualBox dialog window, we will need to change some details shown on the right in the dialog window. For installing the OpenSolaris2008.05 OS image and the Windows XP image we will click on the CD/DVD-ROM details link and change the following parameters.


Note: I ignored the following error dialog box that appeared, warning me that the USB Proxy Service has not been ported to this host. I just clicked OK to move ahead.


In the OpenSolaris2008.05 Settings dialog window, I've selected the Mount CD/DVD-ROM checkbox and the ISO Image File checkbox. Click the Select folder icon next to the drop-down box to tell the installer the location of the OS image file. The Virtual Disk Manager dialog will open, where you can add image locations. After adding the location of the image, click the Select button, then click the OK button in the OpenSolaris2008.05 Settings dialog window.



Click image to enlarge



Step13: Now you will see the Sun xVM VirtualBox dialog window with changes you made above, showing the details of the new OpenSolaris2008.05 virtual machine.


Highlight the Virtual Machine you want to install and click the Start button.



Click image to enlarge



After clicking the Start button you will now see the following informational dialog box. As you will notice, the Auto capture keyboard option is turned on. This will cause the Virtual Machine to automatically capture the keyboard every time the VM window is activated and make it unavailable to other applications running on your host machine. You can press the host key, identified here as the Right Ctrl key, to release the keyboard and mouse. Please read it carefully, and click the OK button.



Click image to enlarge



Step14: After clicking the OK button in the VirtualBox - Information dialog, you will now begin the installation of the OS for the new Virtual Machine. You will be asked several questions before the OS actually starts loading. For this first installation we will be installing OpenSolaris 2008.05 release. Select OpenSolaris 2008.05 in the Grub menu, and hit enter.



Click image to enlarge



After selecting OpenSolaris 2008.05 from the Grub menu, the live image for the new virtual machine will be prepared for installation. On the next screen you will be asked to select a keyboard type. The default is 41, US-English. For this exercise we'll take the default.



Click image to enlarge



Now you'll be asked to select the desktop language you'd prefer. The default is English.  In this case we'll take the default!! The system will now start configuring the devices, mount local partitions/cdrom, and read ZFS configuration.



Click image to enlarge



After the virtual machine is configured you will see the following VirtualBox informational dialog appear. Read the dialog for virtual machine optimization, and click OK to accept.



Click image to enlarge



Please read the "OpenSolaris License". When finished click the Close button



Click image to enlarge



After reading the "OpenSolaris License", your OpenSolaris 2008.05 Virtual Machine is running and ready to be installed with your operating system, as shown below. Let's start by shutting down the virtual machine and restarting it. From the System drop-down menu select Shut Down.


Click image to enlarge

To restart the new OpenSolaris 2008.05 virtual machine, from the VirtualBox window select the OpenSolaris 2008.05 virtual machine and click the Start button.



Click image to enlarge


Step15: You are now ready to install OpenSolaris 2008.05 release on the Virtual Machine. From the Virtual Machine click on the Install OpenSolaris icon to invoke the OpenSolaris 2008.05 installer. 



Click image to enlarge


You will now be presented with the OpenSolaris 2008.05 Installer dialog window. Read it and click the Next button.



Click image to enlarge


The installer will now ask where OpenSolaris should be installed. The following dialog window is for disk partition configuration. For this exercise we will take the default shown below. Click the Next button.



Click image to enlarge


Select Time Zone & Date information. I've selected the appropriate values for the OpenSolaris virtual machine and clicked the Next button.



Click image to enlarge


Select the appropriate locale for your OpenSolaris installation, then click the Next button.



Click image to enlarge


Next the installer will ask you for user information: the root user's password, a new user account, and the name of your system.



Click image to enlarge


Review the installation settings before proceeding. You have the option of going back to change information. In our case we'll accept the settings and click the Install button.



Click image to enlarge



Installing OpenSolaris 2008.05 on your Virtual Machine.



Click image to enlarge



Your installation of OpenSolaris 2008.05 is now complete. As stated, review the OpenSolaris install log for information. You must now reboot to start the system. However, I found that you first need to quit, shut down the virtual machine, and unmount the ISO image, or it will just read the mounted media and try to install again. So click the Quit button, and the install dialog window will go away.



Click image to enlarge



Now select System and shut down your virtual machine.



Click image to enlarge



From the Sun xVM VirtualBox window, click on the CD/DVD-ROM link; the following OpenSolaris2008.05 Settings dialog window will open.



Click image to enlarge



Uncheck the Mount CD/DVD-ROM drive checkbox to unmount the ISO Image.




Click image to enlarge




Step16: From the Sun xVM VirtualBox window, highlight the Virtual Machine you want to power on and click the Start button. In this case we'll highlight the newly installed OpenSolaris2008.05 virtual machine.



Click image to enlarge



Clicking the Start button will power on the new OpenSolaris2008.05 virtual machine, and you will again see the familiar VirtualBox informational dialog box. As you will notice, the Auto capture keyboard option is turned on. This will cause the Virtual Machine to automatically capture the keyboard every time the VM window is activated and make it unavailable to other applications running on your host machine. You can press the host key, identified here as the Right Ctrl key, to release the keyboard and mouse. Please read it carefully, and click the OK button.



Click image to enlarge



Step17: After clicking the OK button above, your virtual machine will be running and you will be presented with the Grub menu showing your new VirtualBox boot environment, shown here. Select the OS from the Grub menu and boot OpenSolaris 2008.05.




Click image to enlarge



OpenSolaris 2008.05 should now boot, as shown below.



Click image to enlarge


Step18: Now that we have OpenSolaris 2008.05 booted, let's log in using the user ID and password (or the root user) we created when we installed the OpenSolaris 2008.05 operating system.



Click image to enlarge



In this next screenshot, I've launched a terminal window showing that I've successfully booted a VM running OpenSolaris on a ZFS root filesystem.



Click image to enlarge
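
In that terminal, a couple of quick commands confirm the ZFS root (rpool is the default root pool name in OpenSolaris 2008.05; your exact output will differ):

$ zpool list
$ zfs list -r rpool
$ df -h /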



There you go, "Basic VirtualBox in a Nutshell". Part II, Running Windows XP in a VirtualBox VM, is coming in the next post.











Tuesday May 06, 2008

Download OpenSolaris 2008.05 Now Available

OpenSolaris 2008.05 is a Live CD, allowing users to experience OpenSolaris immediately, without the need to install it to their systems. When ready, installation is a single click away with a new improved easy-to-use installer. This release also introduces IPS, a new network based package management system, allowing users to install additional software from the network.

Click image to enlarge 

ZFS is also the default root file system, providing unique snapshot and rollback features that are especially useful during system upgrades. OpenSolaris 2008.05 has a significantly improved user environment, particularly for those familiar with Linux distributions.
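
Once it's installed, two of the features called out above are easy to try from a terminal (the package name and snapshot name below are just placeholders; rpool/ROOT/opensolaris is the default root dataset):

$ pfexec pkg refresh
$ pfexec pkg install <package-name>
$ pfexec zfs snapshot rpool/ROOT/opensolaris@before-changes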

Download OpenSolaris Now at OpenSolaris.com


Software downloaded here or available from the OpenSolaris Package Repository is subject to the LICENSE TERMS.


Friday Apr 25, 2008

Patching with Solaris Live Upgrade

In my many travels to customers, I get asked about patching Solaris. One of the most common uses of Solaris Live Upgrade is to minimize downtime for patching. Upgrading to a newer Solaris release is a relatively infrequent activity in comparison to patching a system. Depending on the length of the maintenance window and the need to minimize downtime, patching an alternate boot environment may be an advantageous operations choice.



  • Create a new boot environment if you haven't already.

  • Patch the new boot environment

  • Boot from the new boot environment

  • Check your results for the changes and see if they are acceptable 


What a concept! Patching an offline copy of an operating system, who would have thought! It's that easy!
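
Expressed as commands, the four bullets above boil down to something like this (the names and paths are placeholders; the detailed run with real output follows below):

# lucreate -c currentBE -n patchedBE -m /:c0t1d0s0:ufs
# luupgrade -t -n patchedBE -s /var/tmp/10_Recommended `cat /var/tmp/10_Recommended/patch_order`
# luactivate patchedBE
# init 6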


Having an Alternate Boot Environment (ABE) to live upgrade or patch makes perfect sense, don't you think?  With Solaris Live Upgrade you can take advantage of applying the patches in the background, with little impact on what is running on the system. Therefore, who cares how long it takes!  The only downtime you have is the reboot!!


As part of this exercise I wanted to test patching an ABE running Solaris 10 8/07, with Solaris Volume Manager (SVM) and Solaris Zones deployed, using the Live Upgrade feature in Solaris.


Note: In this case I already had an ABE from my previous blog entry, Live Upgrade with Solaris Volume Manager (SVM) and Zones.


System Type Used in Test



  • SunFire 220R

  • 2x 450MHz USII processors with 4MB of cache

  • 2048MB of memory

  • 2x internal 18GB SCSI drives

  • Sun StorEdge D1000


Live Upgrade Commands Used or referenced in this exercise


lucreate(1M) – create a new boot environment
lustatus(1M) – display status of boot environments
luupgrade(1M) – installs, upgrades, and performs other functions on software on a boot environment
luactivate(1M) – activate a boot environment
ludelete(1M) – delete a boot environment
lufslist(1M) – list configuration of a boot environment


Some notes on patching using the Live Upgrade method


Solaris Live Upgrade is not just limited to OS upgrades. In fact it can be used to manage downtime and risk when patching a system as well.


Because you are applying the patches to the inactive boot environment, patching has only minimal impact on the currently running environment. The production applications can continue to run as the patches get applied to the inactive environment. System Administrators can now avoid taking the system down to single-user mode, which is the standard practice for applying the Recommended Patch Clusters.


You can boot the newly patched environment, test your applications, and if you are not satisfied, just reboot to the original environment. With Solaris 10 8/07, patching Solaris Containers with Solaris Live Upgrade is fully supported. This can have a dramatic impact on decreasing downtime during patching.


Live Upgrade prerequisites when preparing to patch your systems.



  • I'm starting with a Solaris 10 8/07 release system, which was created as an ABE using Live Upgrade. See my previous blog on Live Upgrade with Solaris Volume Manager (SVM) and Zones.

  • Remember the required patches from the former Sun InfoDoc 72099, now Sun InfoDoc 206844.
    This document provides information about the minimum patch requirements for a system on which Solaris Live Upgrade software will be used.

  • As mentioned in my previous post, it is imperative that you ensure the target system meets these patch requirements before attempting to use Solaris Live Upgrade software on your system.




Now you are ready to apply the Recommended Patch Cluster to the system.


Obtain the patches:


Access the Recommended Patch Cluster for Solaris 10 from sun.com/sunsolve.


This step typically involves downloading the file 10_Recommended.zip (or 10_x86_Recommended.zip). For this example, assume you have downloaded it to /var/tmp. Then use the unzip command to uncompress the downloaded file, creating the directory 10_Recommended containing all the patches.
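
For example (assuming the default locations used above):

root@sunrise1 # cd /var/tmp
root@sunrise1 # unzip 10_Recommended.zip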


The file 10_Recommended/CLUSTER_README will tell you to take the system to single user mode, and then use the install_cluster script. In this case, since I'm patching the inactive boot environment, I do not have to take the system to single user mode.  I also will be using a slightly different procedure to apply the patches. 


Note: You will not be using the install_cluster script.


Steps:


1). Verify that the ABE you're about to patch is not the currently active boot environment; if it is, boot to the PBE before patching!! In this case s10u3 is my PBE and s10u4 is my ABE. As also shown below, I ran zoneadm list -cv to verify that my Solaris container/zone was running.


root@sunrise1 # lufslist s10u3
boot environment name: s10u3
This boot environment is currently active.

Filesystem               fstype  device size   Mounted on       Mount Options
-----------------------  ------  -----------   ---------------  -------------
/dev/md/dsk/d0           ufs     13811814400   /                logging
/dev/md/dsk/d1           swap     4255727616   -                -
/dev/dsk/c1t9d0s4        ufs     36415636992   /zones           logging
/dev/dsk/c1t2d0s0        ufs     36415636992   /solaris-stuff   logging

root@sunrise1 # lufslist s10u4
boot environment name: s10u4
This boot environment will be active on next system boot.

Filesystem               fstype  device size   Mounted on       Mount Options
-----------------------  ------  -----------   ---------------  -------------
/dev/md/dsk/d100         ufs     13811814400   /                logging
/dev/md/dsk/d101         swap     4255727616   -                -
/dev/dsk/c1t4d0s0        ufs      4289863680   /zones           logging

root@sunrise1 # zoneadm list -cv
  ID NAME     STATUS   PATH           BRAND    IP
   0 global   running  /              native   shared
   2 zoneA    running  /zones/zoneA   native   shared


2). Make sure you are in the patch directory: 


root@sunrise1 # cd /var/tmp/10_Recommended

3). Patch the ABE or Inactive Boot Environment:


root@sunrise1 # luupgrade -n s10u4 -s /var/tmp/10_Recommended -t `cat patch_order`

Validating the contents of the media </var/tmp/10_Recommended>.
The media contains 79 software patches that can be added.
Mounting the BE <s10u4>.
Adding patches to the BE <s10u4>.
Validating patches...

Loading patches installed on the system...

Done!

Loading patches requested to install.

Version of package SUNWcakr from directory SUNWcakr.v in patch 118918-24 differs from the package installed on the system.
Version of package SUNWcakr from directory SUNWcakr.us in patch 118918-24 differs from the package installed on the system.
Version of package SUNWcar from directory SUNWcar.us in patch 118918-24 differs from the package installed on the system.
Version of package SUNWcakr from directory SUNWcakr.v in patch 118833-36 differs from the package installed on the system.
Version of package SUNWkvm from directory SUNWkvm.v in patch 118833-36 differs from the package installed on the system.
Version of package SUNWcakr from directory SUNWcakr.us in patch 118833-36 differs from the package installed on the system.
Version of package SUNWdrr from directory SUNWdrr.us in patch 118833-36 differs from the package installed on the system.
Version of package SUNWefc from directory SUNWefc.us in patch 118833-36 differs from the package installed on the system.
Version of package SUNWkvm from directory SUNWkvm.us in patch 118833-36 differs from the package installed on the system.
Version of package SUNWcakr from directory SUNWcakr.v in patch 120011-14 differs from the package installed on the system.
Version of package SUNWcar from directory SUNWcar.v in patch 120011-14 differs from the package installed on the system.
Version of package SUNWcpc from directory SUNWcpc.v in patch 120011-14 differs from the package installed on the system.
Architecture for package SUNWiopc from directory SUNWiopc.v in patch 120011-14 differs from the package installed on the system.
Version of package SUNWkvm from directory SUNWkvm.v in patch 120011-14 differs from the package installed on the system.
Version of package SUNWcakr from directory SUNWcakr.us in patch 120011-14 differs from the package installed on the system.
Version of package SUNWcar from directory SUNWcar.us in patch 120011-14 differs from the package installed on the system.
Version of package SUNWcpc from directory SUNWcpc.us in patch 120011-14 differs from the package installed on the system.
Version of package SUNWdrr from directory SUNWdrr.us in patch 120011-14 differs from the package installed on the system.
Version of package SUNWefc from directory SUNWefc.us in patch 120011-14 differs from the package installed on the system.
Version of package SUNWkvm from directory SUNWkvm.us in patch 120011-14 differs from the package installed on the system.
Version of package SUNWmcon from directory SUNWmcon in patch 121211-02 differs from the package installed on the system.
Version of package SUNWmcos from directory SUNWmcos in patch 121211-02 differs from the package installed on the system.
Version of package SUNWmcosx from directory SUNWmcosx in patch 121211-02 differs from the package installed on the system.
Version of package SUNWmctag from directory SUNWmctag in patch 121211-02 differs from the package installed on the system.
Done!

The following requested patches have packages not installed on the system
Package SUNWgzipS from directory SUNWgzipS in patch 120719-02 is not installed on the system. Changes for package SUNWgzipS
will not be applied to the system.
Package SUNWbzipS from directory SUNWbzipS in patch 126868-01 is not installed on the system. Changes for package SUNWbzipS
will not be applied to the system.
Package SUNWpmr from directory SUNWpmr in patch 119042-10 is not installed on the system. Changes for package SUNWpmr
will not be applied to the system.
Package FJSVdrdr from directory FJSVdrdr.us in patch 119042-10 is not installed on the system. Changes for package FJSVdrdr
will not be applied to the system.
Package SUNWcart200 from directory SUNWcart200.v in patch 118833-36 is not installed on the system. Changes for package SUNWcart200
will not be applied to the system.
Package SUNWkvmt200 from directory SUNWkvmt200.v in patch 118833-36 is not installed on the system. Changes for package SUNWkvmt200
will not be applied to the system.
Package SUNWust1 from directory SUNWust1.v in patch 118833-36 is not installed on the system. Changes for package SUNWust1
will not be applied to the system.
Package SUNWsmaS from directory SUNWsmaS in patch 120272-13 is not installed on the system. Changes for package SUNWsmaS
will not be applied to the system.
Package SUNWcart200 from directory SUNWcart200.v in patch 120011-14 is not installed on the system. Changes for package SUNWcart200
will not be applied to the system.
Package SUNWkvmt200 from directory SUNWkvmt200.v in patch 120011-14 is not installed on the system. Changes for package SUNWkvmt200
will not be applied to the system.
Package SUNWldomr from directory SUNWldomr.v in patch 120011-14 is not installed on the system. Changes for package SUNWldomr
will not be applied to the system.
Package SUNWldomu from directory SUNWldomu.v in patch 120011-14 is not installed on the system. Changes for package SUNWldomu
will not be applied to the system.
Package SUNWpmu from directory SUNWpmu in patch 120011-14 is not installed on the system. Changes for package SUNWpmu
will not be applied to the system.
Package SUNWust1 from directory SUNWust1.v in patch 120011-14 is not installed on the system. Changes for package SUNWust1
will not be applied to the system.
Package SUNWsmbaS from directory SUNWsmbaS in patch 119757-09 is not installed on the system. Changes for package SUNWsmbaS
will not be applied to the system.
Package SUNWapch2S from directory SUNWapch2S in patch 120543-09 is not installed on the system. Changes for package SUNWapch2S
will not be applied to the system.
Package SUNWmysqlS from directory SUNWmysqlS in patch 120292-01 is not installed on the system. Changes for package SUNWmysqlS
will not be applied to the system.
Package SUNWipged from directory SUNWipged in patch 120849-04 is not installed on the system. Changes for package SUNWipged
will not be applied to the system.

The following requested patches are already installed on the system
Requested to install patch 120719-02 is already installed on the system.
Requested to install patch 121296-01 is already installed on the system.
Requested to install patch 118872-04 is already installed on the system.
Requested to install patch 120900-04 is already installed on the system.
Requested to install patch 121133-02 is already installed on the system.
Requested to install patch 119042-10 is already installed on the system.
Requested to install patch 126538-01 is already installed on the system.
Requested to install patch 118918-24 is already installed on the system.
Requested to install patch 119574-02 is already installed on the system.
Requested to install patch 119578-30 is already installed on the system.
Requested to install patch 126419-01 is already installed on the system.
Requested to install patch 126897-02 is already installed on the system.
Requested to install patch 118833-36 is already installed on the system.
Requested to install patch 122640-05 is already installed on the system.
Requested to install patch 125547-02 is already installed on the system.
Requested to install patch 125503-02 is already installed on the system.
Requested to install patch 120011-14 is already installed on the system.
Requested to install patch 120543-09 is already installed on the system.
Requested to install patch 119317-01 is already installed on the system.
Requested to install patch 120292-01 is already installed on the system.
Requested to install patch 120329-02 is already installed on the system.
Requested to install patch 124188-02 is already installed on the system.
Requested to install patch 121012-02 is already installed on the system.
Requested to install patch 118959-03 is already installed on the system.
Requested to install patch 124997-01 is already installed on the system.
Requested to install patch 119903-02 is already installed on the system.
Requested to install patch 121004-03 is already installed on the system.
Requested to install patch 120061-02 is already installed on the system.
Requested to install patch 118560-02 is already installed on the system.
Requested to install patch 121002-03 is already installed on the system.
Requested to install patch 124457-01 is already installed on the system.
Requested to install patch 123186-02 is already installed on the system.

The following requested patches do not update any packages installed on the system
Packages from patch 121211-02 are not installed on the system.

Checking patches that you specified for installation.

Done!

The following requested patches will not be installed because
they have been made obsolete by other patches already
installed on the system or by patches you have specified for installation.

0 All packages from patch 118731-01 are patched by higher revision patches.

1 All packages from patch 122660-10 are patched by higher revision patches.

2 All packages from patch 124204-04 are patched by higher revision patches.

The following requested patches will not be installed because
the packages they patch are not installed on this system.

0 Packages from patch 120849-04 are not installed on the system.

Approved patches will be installed in this order:

126868-01 118815-06 119254-45 120272-13 125369-12 119757-09

Preparing checklist for non-global zone check...

Checking non-global zones...

The following requested patches rejected on non-global zone.
Entire installation is possible but those patches
will not be installed on non-global zone.

Packages from patch 120849-04 are not installed on the system.

SUNWlu-zoneA: Packages from patch 120849-04 are not installed on the system.

This patch passes the non-global zone check.
126868-01 118815-06 119254-45 120272-13 125369-12 119757-09

Summary for zones:

Zone zoneA

Rejected patches:
120849-04
Patches that passed the dependency check:
126868-01 118815-06 119254-45 120272-13 125369-12 119757-09

Patching global zone
Adding patches...

Checking installed patches...
Verifying sufficient filesystem capacity (dry run method)...
Installing patch packages...

Patch 126868-01 has been successfully installed.
See /a/var/sadm/patch/126868-01/log for details

Patch packages installed:
SUNWbzip

Checking installed patches...
Verifying sufficient filesystem capacity (dry run method)...
Installing patch packages...

Patch 118815-06 has been successfully installed.
See /a/var/sadm/patch/118815-06/log for details

Patch packages installed:
SUNWesu
SUNWxcu4

Checking installed patches...
Verifying sufficient filesystem capacity (dry run method)...
Installing patch packages...

Patch 119254-45 has been successfully installed.
See /a/var/sadm/patch/119254-45/log for details
Executing postpatch script...

Patch packages installed:
SUNWinstall-patch-utils-root
SUNWpkgcmdsu
SUNWswmt

Checking installed patches...
Verifying sufficient filesystem capacity (dry run method)...
Installing patch packages...

Patch 120272-13 has been successfully installed.
See /a/var/sadm/patch/120272-13/log for details
Executing postpatch script...

Patch packages installed:
SUNWbzip
SUNWsmagt
SUNWsmcmd
SUNWsmmgr

Checking installed patches...
Verifying sufficient filesystem capacity (dry run method)...
Installing patch packages...

Patch 125369-12 has been successfully installed.
See /a/var/sadm/patch/125369-12/log for details

Patch packages installed:
FJSVfmd
SUNWcsr
SUNWfmd
SUNWhea
SUNWmdb

Checking installed patches...
Verifying sufficient filesystem capacity (dry run method)...
Installing patch packages...

Patch 119757-09 has been successfully installed.
See /a/var/sadm/patch/119757-09/log for details

Patch packages installed:
SUNWsmbac
SUNWsmbar
SUNWsmbau

Done!
Patching non-global zones...

Patching zone zoneA
Adding patches...

Checking installed patches...
Verifying sufficient filesystem capacity (dry run method)...
Installing patch packages...

Patch 126868-01 has been successfully installed.
See /a/var/sadm/patch/126868-01/log for details

Patch packages installed:
SUNWbzip

Checking installed patches...
Verifying sufficient filesystem capacity (dry run method)...
Installing patch packages...

Patch 118815-06 has been successfully installed.
See /a/var/sadm/patch/118815-06/log for details

Patch packages installed:
SUNWesu
SUNWxcu4

Checking installed patches...
Verifying sufficient filesystem capacity (dry run method)...
Installing patch packages...

Patch 119254-45 has been successfully installed.
See /a/var/sadm/patch/119254-45/log for details
Executing postpatch script...

Patch packages installed:
SUNWinstall-patch-utils-root
SUNWpkgcmdsu
SUNWswmt

Checking installed patches...
Verifying sufficient filesystem capacity (dry run method)...
Installing patch packages...

Pkgadd failed. See /var/tmp/120272-13.log.23488 for details

Removing partially installed patch
Validating patches...

Loading patches installed on the system...

Done!
Patch patchaddInterrupt does not exist.

Checking patches that you specified for removal.

Done!

Approved patches will be removed in this order:

120272-13

Checking installed patches...

Backing out patch 120272-13...

Patch 120272-13 has been backed out.

Skipping patch 120272-13
Installation of patch number 120272-13 has been suspended.
ln: cannot create link SUNWbzip/pkginfo_mvd_23488: Read-only file system
mv: SUNWbzip/pkginfo: override protection 755 (yes/no)? yes
mv: cannot unlink SUNWbzip/pkginfo: Read-only file system
chmod: WARNING: can't change SUNWbzip/pkginfo
ln: cannot create link SUNWsmagt/pkginfo_mvd_23488: Read-only file system
mv: SUNWsmagt/pkginfo: override protection 755 (yes/no)? yes
mv: cannot unlink SUNWsmagt/pkginfo: Read-only file system
chmod: WARNING: can't change SUNWsmagt/pkginfo
ln: cannot create link SUNWsmcmd/pkginfo_mvd_23488: Read-only file system
mv: SUNWsmcmd/pkginfo: override protection 755 (yes/no)? yes
mv: cannot unlink SUNWsmcmd/pkginfo: Read-only file system
chmod: WARNING: can't change SUNWsmcmd/pkginfo
Executing postpatch script...

Patch packages installed:

Checking installed patches...
Verifying sufficient filesystem capacity (dry run method)...
Installing patch packages...

Patch 125369-12 has been successfully installed.
See /a/var/sadm/patch/125369-12/log for details

Patch packages installed:
FJSVfmd
SUNWcsr
SUNWfmd
SUNWhea
SUNWmdb

Checking installed patches...
Verifying sufficient filesystem capacity (dry run method)...
Installing patch packages...

Patch 119757-09 has been successfully installed.
See /a/var/sadm/patch/119757-09/log for details

Patch packages installed:
SUNWsmbac
SUNWsmbar
SUNWsmbau

Done!
Unmounting the BE <s10u4>.
The patch add to the BE <s10u4> completed.


In the above command line example, you identify which boot environment to patch ("s10u4"), where the patches are located (the -s option and path argument), and which patches to apply (the -t option followed by the patch numbers).


Note:  The argument to the -t option is placed in backquotes, meaning the expression will be evaluated by the shell before the results (the list of patches) are passed to the luupgrade(1M) command.
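

As a hedged illustration of that command form (the patch directory and the patch_order file below are assumptions for the example, not the exact paths used above):


root@sunrise1 # luupgrade -t -n s10u4 -s /solaris-stuff/patches `cat /solaris-stuff/patches/patch_order`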


Indicate which boot environment is to be used the next time you reboot; in this exercise, it's "s10u4".


root@sunrise1 # luactivate s10u4
WARNING: Boot environment <s10u4> is already activated.

Here s10u4 is the boot environment you want to make active. As you can see from the above output, s10u4 is already activated. The reason is that when I created it, as shown in my previous blog entry Live Upgrade with Solaris Volume Manager (SVM) and Zones, I activated the ABE at that time. Otherwise I would have received output similar to this.



The output of the luactivate(1M) command will look like the following:


**********************************************************************
The target boot environment has been activated. It will be used when you reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands.
You MUST USE either the init or the shutdown command when you reboot. If you do not use either init or shutdown, the system will not boot
using the target BE.
**********************************************************************
In case of a failure while booting to the target BE, the following process needs to be followed to fallback to the currently working boot
environment:
1. Enter the PROM monitor (ok prompt).
2. Boot the machine to Single User mode using a different boot device (like the Solaris Install CD or Network). Examples:
At the PROM monitor (ok prompt): For boot to Solaris CD: boot cdrom -s For boot to network: boot net -s
3. Mount the Current boot environment root slice to some directory (like /mnt). You can use the following command to mount:
mount -Fufs /dev/dsk/c0t1d0s2 /mnt
4. Run <luactivate> utility with out any arguments from the current boot


Note the warning in the above about the proper command to use to reboot. You must use "init" or "shutdown".


Since the newly patched ABE was already activated, I can now boot the ABE s10u4. Since this is a SPARC system, and I have not made an entry in the eeprom or nvaliased the path to the new ABE, I will boot the newly patched ABE as follows.


ok boot /pci@1f,4000/scsi@3/disk@1,0:a

Resetting ...

Sun Ultra 60 UPA/PCI (2 X UltraSPARC-II 450MHz), No Keyboard
OpenBoot 3.23, 2048 MB memory installed, Serial #14809682.
Ethernet address 8:0:20:e1:fa:52, Host ID: 80e1fa52.

Rebooting with command: boot /pci@1f,4000/scsi@3/disk@0,0:a
Boot device: /pci@1f,4000/scsi@3/disk@0,0:a File and args:
SunOS Release 5.10 Version Generic_127111-06 64-bit
Copyright 1983-2007 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.
TSI: gfxp0 is GFX8P @ 1152x900
Hostname: sunrise1
checking ufs filesystems
/dev/rdsk/c1t2d0s0: is logging.
/dev/rdsk/c1t9d0s4: is logging.

sunrise1 console login:


Log into the ABE and check the zone


root@sunrise1 # zoneadm list -cv
  ID NAME             STATUS     PATH                           BRAND    IP   
   0 global           running    /                              native   shared
   2 zoneA            running    /zones/zoneA                   native   shared

root@sunrise1 # zlogin -C zoneA
[Connected to zone 'zoneA' console]

# df -h
Filesystem             size   used  avail capacity  Mounted on
/                       33G   720M    32G     3%    /
/dev                    33G   720M    32G     3%    /dev
/lib                    13G   5.8G   6.7G    47%    /lib
/platform               13G   5.8G   6.7G    47%    /platform
/sbin                   13G   5.8G   6.7G    47%    /sbin
/usr                    13G   5.8G   6.7G    47%    /usr
proc                     0K     0K     0K     0%    /proc
ctfs                     0K     0K     0K     0%    /system/contract
mnttab                   0K     0K     0K     0%    /etc/mnttab
objfs                    0K     0K     0K     0%    /system/object
swap                   4.6G   304K   4.6G     1%    /etc/svc/volatile
fd                       0K     0K     0K     0%    /dev/fd
swap                   4.6G    32K   4.6G     1%    /tmp
swap                   4.6G    40K   4.6G     1%    /var/run
# uname -a
SunOS zoneA 5.10 Generic_127111-06 sun4u sparc SUNW,Ultra-60


Some things to remember!! 



On a running system, some files might change during or after the lucreate/luupgrade process. You might want those changes reflected in the new boot environment in order to synchronize the two environments. To address this, there is a synchronization mechanism that the luactivate command will automatically call. See the synclist(4) man page for more information.
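

For a rough idea of what that mechanism uses, here are a few entries of the kind found in /etc/lu/synclist (illustrative examples of the format, not a copy of the actual file):


/var/mail                OVERWRITE
/etc/passwd              OVERWRITE
/etc/shadow              OVERWRITE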


The luactivate(1M) command can also be used to revert to the previous boot environment. In most cases this is very straightforward, but due to the adoption of GRUB on x86 platforms (the new boot loader for Solaris 10 1/06 and subsequent Solaris releases) there are some issues when reverting from a GRUB system to a pre-GRUB release. These are all explained in the Solaris Live Upgrade documentation referenced here.

You have now succeeded in using Live Upgrade to patch a Solaris 10 8/07 release.


Note that the reboot is the only downtime experienced using this upgrade method, a very different situation than if you had done a standard install or standard upgrade.


One final note: make sure that you dispose of a boot environment properly when you are finished with it. The proper way is to use the ludelete(1M) command. If you destroy the boot environment through some other mechanism (e.g. newfs or rm -rf) you risk ending up with a system that will not boot.
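

For example, a minimal sketch of proper disposal once you no longer need an old BE (the BE name here is just an example):


root@sunrise1 # ludelete s10u3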


Behind the scenes, Solaris Live Upgrade is simply using the patchadd command. You could accomplish the same patching of the inactive boot environment by mounting it and using patchadd(1M) with the -R option. If you want to understand the behavior of patching with Solaris Live Upgrade, familiarize yourself with the patchadd(1M) command.
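

A hedged sketch of that manual equivalent, assuming an ABE named s10u4 and an illustrative patch directory (not the exact paths used above):


root@sunrise1 # lumount s10u4 /a
root@sunrise1 # patchadd -R /a -M /solaris-stuff/patches 120272-13   # patch directory and patch ID are examples only
root@sunrise1 # luumount s10u4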


The Solaris 10 patchadd(1M) command is smart enough to order patches correctly, but Solaris 9 and earlier releases require patches to be in dependency order. Sun uses a command line similar to the above as part of its standard testing, so it makes sense to use a similar approach when you use Solaris Live Upgrade to patch, no matter what Solaris release you are patching.


Until next time!!  Have fun with Solaris Live Upgrade!!

Wednesday Apr 23, 2008

Live Upgrade with Solaris Volume Manager (SVM) and Zones

As I mentioned in my previous post of October 25th, 2007, titled The Live Upgrade Experience, the Live Upgrade feature of the Solaris operating environment enables you to maintain multiple operating images on a single system. An image, called a boot environment or BE, represents a set of operating system and application software packages. The BEs might contain different operating system and/or application versions.


As part of this exercise I want to test Live Upgrade when using Solaris Volume Manager (SVM) to mirror the rootdisk, where Solaris Containers/Zones are deployed.


System Type Used in Exercise



  • SunFire 220R

  • 2 x 450MHz USII processors with 4MB of cache

  • 2048MB of memory

  • 2 x internal 18GB SCSI drives

  • Sun StorEdge D1000


Preparing for Live Upgrade



  • I'm starting with a freshly installed Solaris 10 11/06 release system and its root file system; we will call that the primary boot environment. I'll begin by logging into the root account and patching the system with the latest Solaris 10 Recommended Patch Cluster, downloaded via SunSolve.

  • I will also install the required patches from what was formerly Sun InfoDoc 72099, now Sun InfoDoc 206844. This document provides information about the minimum patch requirements for a system on which Solaris Live Upgrade software will be used.

  • As mentioned in my previous post, it is imperative that you ensure the target system meets these patch requirements before attempting to use Solaris Live Upgrade software on your system.


Verifying booted OS release


root@sunrise1 # cat /etc/release

Solaris 10 11/06 s10s_u3wos_10 SPARC
Copyright 2006 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms.
Assembled 14 November 2006


Display and list the containers/zones 


root@sunrise1 # zoneadm list -cv

  ID NAME             STATUS     PATH                           BRAND    IP
   0 global           running    /                              native   shared
   1 zoneA            running    /zones/zoneA                   native   shared


Abbreviations Used:


PBE: Primary boot environment
ABE: Alternate boot environment


To install the latest Solaris Live Upgrade packages you will use a script called liveupgrade20. The script runs silently and installs the latest Solaris Live Upgrade packages. If you run the following command without the -noconsole and -nodisplay options, you will see the GUI install tool. As you will see, I ran it with these options.


root@sunrise1 # pkgrm SUNWlur SUNWluu
root@sunrise1 # mount -o ro -F hsfs `lofiadm -a /solaris-stuff/solaris-images/s10u4/SPARC/solarisdvd.iso` /mnt

root@sunrise1 # cd /mnt/Solaris_10/Tools/Installers
root@sunrise1 # ./liveupgrade20 -noconsole -nodisplay

Note: This will install the following packages SUNWluu SUNWlur SUNWlucfg


root@sunrise1 # pkginfo | egrep "SUNWluu|SUNWlur|SUNWlucfg"

    application SUNWlucfg                      Live Upgrade Configuration
    application SUNWlur                          Live Upgrade (root)
    application SUNWluu                         Live Upgrade (usr)


Introduction to Solaris Volume Manager (SVM)


Solaris Volume Manager (SVM) is included in Solaris and allows you to manage large numbers of disks and the data on those disks. Although there are many ways to use Solaris Volume Manager, most tasks include the following:


  • Increasing storage capacity

  • Increasing data availability

  • Easing administration of large storage devices

How does Solaris Volume Manager (SVM) manage storage?


Solaris Volume Manager uses virtual disks to manage physical disks and their associated data. In Solaris Volume Manager, a virtual disk is called a volume. For historical reasons, some command-line utilities also refer to a volume as a metadevice.


From the perspective of an application or a file system, a volume is functionally identical to a physical disk. Solaris Volume Manager converts I/O requests directed at a volume into I/O requests to the underlying member disk.


Solaris Volume Manager volumes are built from disk slices or from other Solaris Volume Manager volumes. An easy way to build volumes is to use the graphical user interface (GUI) that is built into the Solaris Management Console. The Enhanced Storage tool within the Solaris Management Console presents you with a view of all the existing volumes. By following the steps in wizards, you can easily build any kind of Solaris Volume Manager volume or component. You can also build and modify volumes by using Solaris Volume Manager command-line utilities.


For example, if you need more storage capacity as a single volume, you could use Solaris Volume Manager to make the system treat a collection of slices as one larger volume. After you create a volume from these slices, you can immediately begin using the volume just as you would use any “real” slice or device.
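

As a small sketch of that idea (the metadevice name and slices below are hypothetical, purely for illustration), two slices can be concatenated into one larger volume and then used like any other device:


root@sunrise1# metainit d30 2 1 c2t0d0s0 1 c2t1d0s0   # hypothetical slices
root@sunrise1# newfs /dev/md/rdsk/d30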


On to Configuring Solaris Volume Manager (SVM)


In this exercise we will first create and set up RAID-0 metadevices for both the root (/) file system and the swap partition. Then, if all goes well, we will use Live Upgrade to create the Alternate Boot Environment (ABE). For the purpose of this exercise I had enough disk capacity to create the ABE on another set of disks. However, if you do not have enough disk capacity, you will have to break the mirrors off from the PBE and use those disks for your ABE.


 Note: Keep in mind if you are going to break the mirrors off from the PBE and you are going to mirror swap with the lucreate command, it has a bug when it comes to the attach and detach flags. See SunSolve for BugID: 5042861 Synopsis: lucreate cannot perform SVM attach/detach on swap devices.


SVM Commands:


metadb(1M) – create and delete replicas of the metadevice state database
metainit(1M) – configure metadevices
metaroot(1M) – setup system files for root (/) metadevice
metastat(1M) – display status for metadevice or hot spare pool
metattach(1M) – attach a metadevice
metadetach(1M) – detach a metadevice
metaclear(1M) – delete active metadevices and hot spare pools



  1. c0t0d0s2 represents the first system disk (boot) also the PBE

  2. c1t0d0s2 represents the second disk (mirror) also will be used for the PBE

  3. c1t9d0s4 represents the disk where the zones are created for the PBE

  4. c0t1d0s2 represents the first system disk (boot) also the ABE

  5. c1t1d0s2 represents the second disk (mirror) also will be used for the ABE

  6. c1t4d0s0 represents the disk where the zones are created for the ABE

Set up the RAID-0 metadevices (stripe or concatenation volumes) corresponding to the / file system and the swap space, and automatically configure the system files (/etc/vfstab and /etc/system) for the root metadevice.

Duplicate the label's content from the boot disk to the mirror disk for both the PBE and the ABE:


root@sunrise1# prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c1t0d0s2
root@sunrise1# prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2

Create replicas of the metadevice state database:


Note: Option -f is needed because it is the first invocation/creation of the metadb(1M)


root@sunrise1# metadb -a -f -c 3 c0t0d0s7 c1t0d0s7 c0t1d0s7 c1t1d0s7


Verify meta databases:


root@sunrise1# metadb

        flags           first blk       block count
     a m  p  luo        16              8192            /dev/dsk/c0t0d0s7
     a    p  luo        8208            8192            /dev/dsk/c0t0d0s7
     a    p  luo        16400           8192            /dev/dsk/c0t0d0s7
     a    p  luo        16              8192            /dev/dsk/c1t0d0s7
     a    p  luo        8208            8192            /dev/dsk/c1t0d0s7
     a    p  luo        16400           8192            /dev/dsk/c1t0d0s7
     a        u         16              8192            /dev/dsk/c0t1d0s7
     a        u         8208            8192            /dev/dsk/c0t1d0s7
     a        u         16400           8192            /dev/dsk/c0t1d0s7
     a        u         16              8192            /dev/dsk/c1t1d0s7
     a        u         8208            8192            /dev/dsk/c1t1d0s7
     a        u         16400           8192            /dev/dsk/c1t1d0s7


Creation of metadevices:


Note: Option -f is needed because the slices we want to initialize as new metadevices contain file systems that are already mounted.


root@sunrise1# metainit -f d10 1 1 c0t0d0s0
    d10: Concat/Stripe is setup
root@sunrise1# metainit -f d11 1 1 c0t0d0s1
    d11: Concat/Stripe is setup
root@sunrise1# metainit -f d20 1 1 c1t0d0s0
    d20: Concat/Stripe is setup
root@sunrise1# metainit -f d21 1 1 c1t0d0s1
    d21: Concat/Stripe is setup

Create the first part of the mirror:


root@sunrise1# metainit d0 -m d10
    d0: Mirror is setup
root@sunrise1# metainit d1 -m d11
    d1: Mirror is setup

Make a copy of /etc/system and /etc/vfstab before proceeding:


root@sunrise1# cp /etc/vfstab /etc/vfstab-beforeSVM
root@sunrise1# cp /etc/system /etc/system-beforeSVM


Change /etc/vfstab and /etc/system to reflect mirror device:


Note: The metaroot(1M) command is only necessary when mirroring the root file system.


root@sunrise1# metaroot d0
root@sunrise1# diff /etc/vfstab /etc/vfstab-beforeSVM
        6,7c6,7
        < /dev/md/dsk/d1        -       -       swap    -       no      -
        < /dev/md/dsk/d0        /dev/md/rdsk/d0 /       ufs     1       no      logging
        ---
        > /dev/dsk/c0t0d0s1     -       -       swap    -       no      -
        > /dev/dsk/c0t0d0s0     /dev/rdsk/c0t0d0s0      /       ufs     1       no      logging  

Note: Don't forget to edit /etc/vfstab to reflect the other metadevices. For example, in this exercise we mirrored the swap partition, so we will have to add the following line to /etc/vfstab manually.


/dev/md/dsk/d1        -       -       swap    -       no      - 

Install the boot block code on the alternate boot disk:


root@sunrise1# installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c1t0d0s0

Reboot on the new metadevices (the operating system will now boot encapsulated):


root@sunrise1# shutdown -y -g 0 -i 6 

Attach the second part of the mirror:


root@sunrise1# metattach d0 d20
    d0: submirror d20 is attached 

root@sunrise1# metattach d1 d21
    d1: submirror d21 is attached

Verify all:


root@sunrise1# metastat -p
    d1 -m d11 d21 1
    d11 1 1 c0t0d0s1
    d21 1 1 c1t0d0s1
    d0 -m d10 d20 1
    d10 1 1 c0t0d0s0
    d20 1 1 c1t0d0s0

root@sunrise1# metastat |grep %
    Resync in progress: 41 % done
    Resync in progress: 46 % done 


Note: It would be a best practice to wait for the above resync of the mirrors to finish before proceeding!
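

One simple way to wait for the resync is a small polling loop like this sketch (the sleep interval is arbitrary):


root@sunrise1# while metastat | grep "Resync in progress" > /dev/null; do sleep 60; done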


Modify the system dump configuration:


root@sunrise1# mkdir /var/crash/`hostname`
root@sunrise1# chmod 700 /var/crash/`hostname`
root@sunrise1# dumpadm -s /var/crash/`hostname`
root@sunrise1# dumpadm -d /dev/md/dsk/d1  

Copy of the /etc/vfstab showing the newly created metadevice for (/) and (swap)


root@sunrise1# cat /etc/vfstab


#device         device          mount           FS      fsck    mount   mount
#to mount       to fsck         point           type    pass    at boot options
#
fd      -       /dev/fd fd      -       no      -
/proc   -       /proc   proc    -       no      -
/dev/md/dsk/d0  /dev/md/rdsk/d0 /       ufs     1       no      logging
/dev/md/dsk/d1        -       -       swap    -       no      -
/dev/dsk/c1t9d0s4       /dev/rdsk/c1t9d0s4      /zones  ufs     2       yes     logging
/dev/dsk/c1t2d0s0      /dev/rdsk/c1t2d0s0      /solaris-stuff  ufs     2       yes     logging
/devices        -       /devices        devfs   -       no      -
ctfs    -       /system/contract        ctfs    -       no      -
objfs   -       /system/object  objfs   -       no      -
swap    -       /tmp    tmpfs   -       yes     -



Record the Path to the Alternate Boot Device


You'll need to determine the path to the alternate root device by using the ls(1) -l command on the slice that is being attached as the second submirror to the root (/) mirror.


root@sunrise1# ls -l /dev/dsk/c1t0d0s0
lrwxrwxrwx   1 root     root          41 Nov  2 20:32 /dev/dsk/c1t0d0s0 -> ../../devices/pci@1f,4000/scsi@5/sd@0,0:a


Here you would record the string that follows the /devices directory: /pci@1f,4000/scsi@5/sd@0,0:a


Solaris Volume Manager users who are using a system with OpenBoot Prom (OBP) can use the OBP nvalias command to define a “backup root” device alias for the secondary root(/) mirror. For example:


ok nvalias rootmirror /pci@1f,4000/scsi@5/disk@0,0:a

Note: I needed to change the sd in the recorded path to disk.


Then, redefine the boot-device variable to reference both the primary and secondary submirrors, in the order in which you want them to be used, and store the configuration.


ok printenv boot-device
boot-device =      rootdisk net 

ok setenv boot-device rootdisk rootmirror net
boot-device =      rootdisk rootmirror net


ok nvstore

In the event of primary root disk failure, the system would automatically boot to the second submirror. Or, if you boot manually, rather than using auto boot, you would only enter:


ok boot rootmirror

Note: You'll want to do this to make sure you can boot the submirror!!!!


Now on to creating your ABE. Please note the following:


Now that we have successfully set up, configured, and booted our system with SVM, there are several ways to create the Alternate Boot Environment (ABE). You can use SVM commands such as metadetach(1M) and metaclear(1M) to break the mirrors, but for this exercise I had enough disks to create my ABE.


As noted above, if you are going to have swap mirrored, there is a bug with lucreate(1M): lucreate(1M) cannot perform SVM attach/detach on swap devices (BugID 5042861).


If you do not have enough disk space, and you are not going to mirror swap, the Live Upgrade command lucreate will detach the mirrors, preserve the data, and create the ABE for you.


Note: Before proceeding you'll want to boot back to the rootdisk, after testing booting from the mirror!


Live Upgrade commands to be used:


lucreate(1M) – create a new boot environment
lustatus(1M) – display status of boot environments
luupgrade(1M) – installs, upgrades, and performs other functions on software on a boot environment
luactivate(1M) – activate a boot environment
lufslist(1M) – list configuration of a boot environment


Since I will not be breaking the mirrors to create my ABE in this exercise, I've created a script called lu_create.sh to prepare the other disks for my ABE and create the ABE using the lucreate(1M) command.


The lucreate(1M) command has several flags. I will use the -C flag. The -C boot_device flag was provided for occasions when lucreate(1M) cannot figure out which physical storage device is your boot device. This might occur, for example, when you have a mirrored root device on the source BE on an x86 machine.


The -C specifies the physical boot device from which the source BE is booted. Without this option, lucreate(1M) attempts to determine the physical device from which a BE boots. If the device on which the root file system is located is not a physical disk (for example, if root is on a Solaris Volume Manager volume) and lucreate(1M) is able to make a reasonable guess as to the physical device, you receive the query.

Is the physical device devname the boot device for the logical device devname?

If you respond y, the command proceeds.


If you specify -C boot_device, lucreate(1M) skips the search for a physical device and uses the device you specify. A hyphen (-) as the argument to the -C option tells lucreate(1M) to proceed with whatever it determines is the boot device. If the command cannot find the device, you are prompted to enter it.


If you omit -C or specify -C boot_device and lucreate(1M) cannot find a boot device, you receive an error message.


Use of the -C - form is a safe choice, because lucreate(1M) either finds the correct boot device or gives you the opportunity to specify that device in response to a subsequent query.
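

For reference, a hedged sketch of that hyphen form (the file system mapping and BE name are illustrative only):


root@sunrise1# lucreate -C - -m /:/dev/md/dsk/d100:ufs -n s10u4   # mapping shown is an example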


Copy of the lu_create.sh script


root@sunrise1# cat lu_create.sh
#!/bin/sh

# Created by Mark Huff Sun Microsystems Inc. on 4/08/08
#ScriptName: lu-create.sh

# This script will use Solaris 10 Live Upgrade commands to create an Alternate Boot Environment (ABE),
# and also create Solaris Volume Manager (SVM) metadevices.

lustatus

metainit -f d110 1 1 c0t1d0s0
metainit -f d120 1 1 c1t1d0s0
metainit d100 -m d110

# The following line will create the ABE with the zones from the PBE
lucreate -C /dev/dsk/c0t1d0s2 -m /:/dev/md/dsk/d100:ufs  \
-m -:/dev/dsk/c0t1d0s1:swap \
-m /zones:/dev/dsk/c1t4d0s0:ufs -n s10u4

sleep 10
# The following lines will setup the metadevices for swap and attach the mirrors for / and swap
metainit -f d111 1 1 c0t1d0s1
metainit -f d121 1 1 c1t1d0s1
metainit d101 -m d111
metattach d100 d120
metattach d101 d121

lustatus

sleep 2

echo "The lufslist command will be run to list the configuration of a boot environment (BE). The output contains the disk slice, file system, file system type, and file system size for each BE mount point. The output also notes any separate file systems that belong to a non-global zone inside the BE being displayed."

lufslist s10u4

luactivate s10u4

lustatus

sleep 10

echo Please connect to console to see reboot!!

sleep 2

#init 6


After the lu_create.sh script has completed, reboot the system. It will now boot on the new ABE.


root@sunrise1# init 6 

Once the system has booted on the new ABE, list the configuration of the boot environment:


root@sunrise1# lufslist s10u4
               boot environment name: s10u4
               This boot environment is currently active.
               This boot environment will be active on next system boot.

Filesystem              fstype    device size Mounted on          Mount Options
----------------------- -------- ------------ ------------------- --------------
/dev/md/dsk/d100        ufs       13811814400 /                   logging
/dev/dsk/c0t1d0s1       swap       4255727616 -                   -
/dev/dsk/c1t4d0s0       ufs        4289863680 /zones              logging  


root@sunrise1# lufslist d0

               boot environment name: d0
               This boot environment is currently active.
               This boot environment will be active on next system boot.

Filesystem              fstype    device size Mounted on          Mount Options
----------------------- -------- ------------ ------------------- --------------
/dev/md/dsk/d0          ufs       13811814400 /                   logging
/dev/md/dsk/d1          swap       4255727616 -                   -
/dev/dsk/c1t9d0s4       ufs       36415636992 /zones              logging


At this point you're ready to run the luupgrade command; however, if you encountered problems with lucreate, you may find it very useful to run the lustatus(1M) utility to see the state of the boot environments.


In my case everything went as planned. Here is the output of the lustatus(1M) utility from my server.


To display the status of the current boot environment:


root@sunrise1# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
d0                         yes      no     no        yes    -
s10u4                      yes      yes    yes       no     -
 

Renamed the PBE to s10u3 


root@sunrise1# lurename -e d0 -n s10u3

To display the status of the current boot environment:


root@sunrise1# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
s10u3                      yes      no     no        yes    -
s10u4                      yes      yes    yes       no     -

To display and list the containers/zones    


root@sunrise1 # zoneadm list -cv
  ID NAME             STATUS     PATH                           BRAND    IP
   0 global           running    /                              native   shared
   1 zoneA            running    /zones/zoneA                   native   shared

To upgrade to the new Solaris release, you will use the luupgrade(1M) command with the -u option. The -s option identifies the path to the media.


In my case I already had the media mounted on /mnt, as shown above in the section Preparing for Live Upgrade.


I ran the date(1) command to record when I started the luupgrade(1M).


root@sunrise1# date

Wed Apr 23 12:00:00 EDT 2008


The command line option would be as follows:


root@sunrise1# luupgrade -u -n s10u4 -s /mnt 

The command generated the following output:

183584 blocks
miniroot filesystem is <lofs>
Mounting miniroot at </mnt/Solaris_10/Tools/Boot>
Validating the contents of the media </mnt>.
The media is a standard Solaris media.
The media contains an operating system upgrade image.
The media contains <Solaris> version <10>.
Constructing upgrade profile to use.
Locating the operating system upgrade program.
Checking for existence of previously scheduled Live Upgrade requests.
Creating upgrade profile for BE <s10u4>.
Determining packages to install or upgrade for BE <s10u4>.
Performing the operating system upgrade of the BE <s10u4>.
CAUTION: Interrupting this process may leave the boot environment unstable
or unbootable.
Upgrading Solaris: 100% completed
Installation of the packages from this media is complete.
Updating package information on boot environment <s10u4>.
Package information successfully updated on boot environment <s10u4>.
Adding operating system patches to the BE <s10u4>.
The operating system patch installation is complete.
INFORMATION: The file </var/sadm/system/logs/upgrade_log> on boot
environment <s10u4> contains a log of the upgrade operation.
INFORMATION: The file </var/sadm/system/data/upgrade_cleanup> on boot
environment <s10u4> contains a log of cleanup operations required.
INFORMATION: Review the files listed above. Remember that all of the files
are located on boot environment <s10u4>. Before you activate boot
environment <s10u4>, determine if any additional system maintenance is
required or if additional media of the software distribution must be
installed.
The Solaris upgrade of the boot environment <s10u4> is complete.


I ran the date(1) command to record when the luupgrade(1M) completed.


root@sunrise1# date

Wed Apr 23 15:43:00 EDT 2008

Note: The luupgrade took approximately 3 hours and 43 minutes.


We now must activate the newly created ABE by running the luactivate(1M) command.


root@sunrise1 # luactivate s10u4

**********************************************************************

The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.

**********************************************************************

In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:

1. Enter the PROM monitor (ok prompt).

2. Boot the machine to Single User mode using a different boot device
(like the Solaris Install CD or Network). Examples:

At the PROM monitor (ok prompt):
For boot to Solaris CD: boot cdrom -s
For boot to network: boot net -s

3. Mount the Current boot environment root slice to some directory (like
/mnt). You can use the following command to mount:

mount -Fufs /dev/dsk/c0t1d0s2 /mnt

4. Run <luactivate> utility with out any arguments from the current boot

 


Boot new ABE


After booting the new ABE, called s10u4, we will see that the new ABE has its new SVM metadevices along with the zones that we created when we ran the lu_create.sh script.


I suppose you could nvalias s10u4 at the OBP to its correct disk path, something like this.


ok nvalias s10u4 /pci@1f,4000/scsi@3/disk@1,0:a

or just boot the path, which in this case is the following:

ok boot /pci@1f,4000/scsi@3/disk@1,0:a

Resetting ...

Sun Ultra 60 UPA/PCI (2 X UltraSPARC-II 450MHz), No Keyboard
OpenBoot 3.23, 2048 MB memory installed, Serial #14809682.
Ethernet address 8:0:20:e1:fa:52, Host ID: 80e1fa52.

Rebooting with command: boot /pci@1f,4000/scsi@3/disk@1,0:a
Boot device: /pci@1f,4000/scsi@3/disk@1,0:a File and args:
SunOS Release 5.10 Version Generic_120011-14 64-bit
Copyright 1983-2007 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.
TSI: gfxp0 is GFX8P @ 1152x900
Hostname: sunrise1
Configuring devices.
Loading smf(5) service descriptions: 27/27
/dev/rdsk/c1t4d0s0 is clean

sunrise1 console login: root
Password:
Apr 23 18:13:48 sunrise1 login: ROOT LOGIN /dev/console
Last login: Tue Apr 22 14:28:09 on console
Sun Microsystems Inc. SunOS 5.10 Generic January 2005
You have mail.
Sourcing //.profile-EIS.....


root@sunrise1 # cat /etc/release
Solaris 10 8/07 s10s_u4wos_12b SPARC
Copyright 2007 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms.
Assembled 16 August 2007
 


Now that the new ABE, s10u4, is booted, you'll see below that the new environment has the new metadevices (d100 for / and d101 for swap).


root@sunrise1 # more /etc/vfstab
#live-upgrade:<Tue Apr 22 12:14:27 EDT 2008> updated boot environment <s10u4>
#device         device          mount           FS      fsck    mount   mount
#to mount       to fsck         point           type    pass    at boot options
#
fd      -       /dev/fd fd      -       no      -
/proc   -       /proc   proc    -       no      -
/dev/md/dsk/d100        /dev/md/rdsk/d100       /       ufs     1       no      logging
#live-upgrade:<Tue Apr 22 12:14:27 EDT 2008>:<s10u4>#  /dev/md/dsk/d1        -       -       swap    -       no      -
/dev/md/dsk/d101        -       -       swap    -       no      -     
/dev/dsk/c1t4d0s0       /dev/rdsk/c1t4d0s0      /zones  ufs     2       yes     logging
/dev/dsk/c1t2d0s0      /dev/rdsk/c1t2d0s0      /solaris-stuff  ufs     2       yes     logging
/devices        -       /devices        devfs   -       no      -
ctfs    -       /system/contract        ctfs    -       no      -
objfs   -       /system/object  objfs   -       no      -
swap    -       /tmp    tmpfs   -       yes     -

 
root@sunrise1 # df -h
Filesystem             size   used  avail capacity  Mounted on
/dev/md/dsk/d100        13G   4.7G   7.8G    38%    /
/devices                 0K     0K     0K     0%    /devices
ctfs                     0K     0K     0K     0%    /system/contract
proc                     0K     0K     0K     0%    /proc
mnttab                   0K     0K     0K     0%    /etc/mnttab
swap                   4.6G   1.4M   4.6G     1%    /etc/svc/volatile
objfs                    0K     0K     0K     0%    /system/object
fd                       0K     0K     0K     0%    /dev/fd
swap                   4.6G    48K   4.6G     1%    /tmp
swap                   4.6G    48K   4.6G     1%    /var/run
/dev/dsk/c1t4d0s0      3.9G   614M   3.3G    16%    /zones

As you can see the new BE also has the container/zone


root@sunrise1 # zoneadm list -cv
  ID NAME             STATUS     PATH                           BRAND    IP
   0 global           running    /                              native   shared
   1 zoneA            running    /zones/zoneA                   native   shared


Log into the zone in the new BE


root@sunrise1 # zlogin zoneA
[Connected to zone 'zoneA' pts/1]
Last login: Mon Apr 21 16:15:41 on console
Sun Microsystems Inc. SunOS 5.10 Generic January 2005
# cd /
# ls -al
total 1004
drwxr-xr-x 18 root root 512 Apr 23 15:41 .
drwxr-xr-x 18 root root 512 Apr 23 15:41 ..
drwx------ 3 root root 512 Apr 21 16:15 .sunw
lrwxrwxrwx 1 root root 9 Apr 22 12:00 bin -> ./usr/bin
drwxr-xr-x 12 root root 1024 Apr 23 17:44 dev
drwxr-xr-x 73 root sys 4096 Apr 23 17:47 etc
drwxr-xr-x 2 root sys 512 Nov 23 09:58 export
dr-xr-xr-x 1 root root 1 Apr 23 17:46 home
drwxr-xr-x 7 root bin 5632 Apr 23 15:03 lib
drwxr-xr-x 2 root sys 512 Nov 22 22:32 mnt
dr-xr-xr-x 1 root root 1 Apr 23 17:46 net
drwxr-xr-x 6 root sys 512 Dec 18 12:15 opt
drwxr-xr-x 54 root sys 2048 Apr 23 13:58 platform
dr-xr-xr-x 81 root root 480032 Apr 23 18:26 proc
drwxr-xr-x 2 root sys 1024 Apr 23 14:16 sbin
drwxr-xr-x 4 root root 512 Apr 23 15:41 system
drwxr-xr-x 5 root root 396 Apr 23 17:48 tmp
drwxr-xr-x 41 root sys 1024 Apr 23 15:34 usr
drwxr-xr-x 43 root sys 1024 Apr 23 15:41 var

# ifconfig -a
lo0:1: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
qfe0:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
inet 192.168.1.27 netmask ffffff00 broadcast 192.168.1.255



Success!! :)  Give it a try!!


Well there you go!  Live Upgrade using Solaris Volume Manager with Solaris containers/zones deployed!!


Well until next time.  My next post will be on patching using Live Upgrade.



Tuesday Feb 19, 2008

Introduction to iSCSI in Solaris 10 8/07 (U4)

After consulting with Jeff Victor, a colleague of mine, this week on the subject of iSCSI, I decided to do some reading and playing around with iSCSI myself. Jeff was actually testing iSCSI with LDOMs, etc. Look forward to seeing Jeff's work.


The Solaris 10 8/07 (U4) release provides support for iSCSI target devices, which can be disk or tape devices. Releases prior to Solaris 10 8/07 provided support for iSCSI initiators. 


So what does this mean?  Well, it means that Solaris can act as both a “server” (target) and a “client” (initiator) for iSCSI.


The advantage of setting up Solaris iSCSI targets is that you might have existing fibre-channel devices that can be connected to clients without the cost of fibre-channel HBAs. In addition, systems with dedicated arrays can now export replicated storage with ZFS or UFS file systems.


While experimenting with iSCSI you'll need to remember some basic commands.


You will use the iscsitadm(1M) command to set up and manage your iSCSI target devices. For the disk device that you select as your iSCSI target, you'll need to provide an equivalently sized ZFS or UFS file system as the backing store for the iSCSI daemon.
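

For a rough feel of that iscsitadm workflow (the base directory and target name below are purely illustrative; in this exercise we will instead let ZFS create the targets via the shareiscsi property), the setup looks roughly like this:


sunrise1# iscsitadm modify admin -d /export/iscsi_store   # example base directory for backing stores
sunrise1# iscsitadm create target --size 1g mytarget      # example target label
sunrise1# iscsitadm list target -v mytarget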


After the target device is set up, you'll use the iscsiadm(1M) command to identify your iSCSI targets, which will discover and use the iSCSI target device.


A good reference for you to start looking at when configuring iSCSI would be the Configuring Solaris iSCSI Targets and Initiators (Tasks), in the Systems Administration Guide: Devices and File Systems.


To assist you with iSCSI, I’ve jotted down some steps which will allow you to export some targets, play around with them and then destroy them afterwards.


The following hosts will be used in this exercise:


Hostname   IP address     Description
sunrise1   192.168.1.25   This machine has the zpool, called zpoolA, for this exercise
spirit2    192.168.1.30   This machine will be used to mount our iSCSI targets

Our zpool, zpoolA, will be used for storage.

The goal of this exercise is to couple ZFS with the iSCSI target in Solaris. Two new ZFS properties were added to support this feature. For this exercise we will only be using the shareiscsi property; the other property is iscsioptions.


    shareiscsi

Like the 'sharenfs' property, 'shareiscsi' indicates if a ZVOL should
be exported as an iSCSI target. The acceptable values for this property
are 'on', 'off', and 'direct'. In the future, we may support other
target types (for example, 'tape'). The default is 'off'. This property
may be set on filesystems, but has no direct effect; this is to allow
ZVOLs created under the ZFS hierarchy to inherit a default. For
example, an administrator may want ZVOLs to be shared by default, and
so set 'shareiscsi=on' for the pool.

    iscsioptions
	This read-only property, which is hidden, is used by the iSCSI target
daemon to store persistent information, such as the IQN. It cannot be
viewed or modified using the zfs command. The contents are not intended
for external consumers. 

First, we need to create some volumes to use. Since I already started with my zpoolA, we’ll use that. We’ll create three volumes, each 1Gbyte in
size using the zfs(1M) command.


sunrise1# zfs create -V 1G zpoolA/iscsi_luns/vol001
sunrise1# zfs create -V 1G zpoolA/iscsi_luns/vol002
sunrise1# zfs create -V 1G zpoolA/iscsi_luns/vol003

Now, you'll need to configure them as iSCSI targets


sunrise1# zfs set shareiscsi=on zpoolA/iscsi_luns/vol001
sunrise1# zfs set shareiscsi=on zpoolA/iscsi_luns/vol002
sunrise1# zfs set shareiscsi=on zpoolA/iscsi_luns/vol003

Voila! You've created three thinly provisioned 1GB volumes and then turned "shareiscsi" on.


It's that simple!  Seeing is believing! Here's how you check!


sunrise1# svcs -a | grep -i iscsi
disabled 7:02:49 svc:/network/iscsi_initiator:default
online 19:21:01 svc:/system/iscsitgt:default

sunrise1# iscsitadm list target -v
Target: zpoolA/iscsi_luns/vol001
iSCSI Name: iqn.1986-03.com.sun:02:7536c2517-91e6-cf6c-d566-d48fb182e9f7
Alias: zpoolA/iscsi_luns/vol001
Connections: 0
ACL list:
TPGT list:
LUN information:
LUN: 0
GUID: 0x0
VID: SUN
PID: SOLARIS
Type: disk
Size: 1GB
Backing store: /dev/zvol/rdsk/zpoolA/iscsi_luns/vol001
Status: online
Target: zpoolA/iscsi_luns/vol002    
iSCSI Name: iqn.1986-03.com.sun:02:78692333-1fb4-ee45-b547-ea49922ee538
Alias: zpoolA/iscsi_luns/vol002
Connections: 0
ACL list:
TPGT list:
LUN information:
LUN: 0
GUID: 0x0
VID: SUN
PID: SOLARIS
Type: disk
Size: 1GB
Backing store: /dev/zvol/rdsk/zpoolA/iscsi_luns/vol002
Status: online
Target: zpoolA/iscsi_luns/vol003    
iSCSI Name: iqn.1986-03.com.sun:02:792405e0-a3fc-4ccf-f86d-b79f7b1ee006
Alias: zpoolA/iscsi_luns/vol003
Connections: 0
ACL list:
TPGT list:
LUN information:
LUN: 0
GUID: 0x0
VID: SUN
PID: SOLARIS
Type: disk
Size: 1GB
Backing store: /dev/zvol/rdsk/zpoolA/iscsi_luns/vol003
Status: online 

Again make sure they’re visible by running the following command


sunrise1# iscsitadm list target

Target: zpoolA/iscsi_luns/vol001
iSCSI Name: iqn.1986-03.com.sun:02:7536c2517-91e6-cf6c-d566-d48fb182e9f7
Connections: 0
Target: zpoolA/iscsi_luns/vol002
iSCSI Name: iqn.1986-03.com.sun:02:78692333-1fb4-ee45-b547-ea49922ee538
Connections: 0
Target: zpoolA/iscsi_luns/vol003
iSCSI Name: iqn.1986-03.com.sun:02:792405e0-a3fc-4ccf-f86d-b79f7b1ee006
Connections: 0

Now, to start using the targets! We'll first need to tell our initiator where to look for the storage. Note: Keep in mind that the iSCSI connection is not initiated until the discovery method is enabled. For our purposes here, we will be configuring the device for dynamic discovery (SendTargets).


spirit2# iscsiadm add discovery-address 192.168.1.25:3260

We will now need to tell our initiator how to look for our targets. Note the -t flag; it enables SendTargets discovery. You can use the --sendtargets flag instead if you prefer.


spirit2# iscsiadm modify discovery -t enable

It is now time to use devfsadm(1M) to find our new storage. Note the -C flag: it selects cleanup mode, which prompts devfsadm to clean up dangling /dev links that are not normally removed. The -i flag configures only the devices for the named driver, driver_name.


spirit2# devfsadm -C -i iscsi

I've found if you run iostat -En you'll see that we have new storage, like this


spirit2# iostat -En
...
c10t2d0 Soft Errors: 2 Hard Errors: 0 Transport Errors: 0
Vendor: SUN Product: SOLARIS Revision: 1 Serial No:
Size: 1.07GB <1073741824 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
...

If you look at the syslog, you’ll probably notice messages complaining about a bad magic number. This is because our newly visible LUNs aren’t labelled yet. So all you'll need to do is run the format(1M) command and label them like any other disk.


Note: If you want to make the iSCSI drive available on reboot, create the file system, and add an entry to the /etc/vfstab file as you would with a UFS file system on a SCSI device.
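

As a hedged example, using the LUN and mount point from the commands that follow, such a vfstab entry might look like this (your device name will differ):


/dev/dsk/c10t2d0s0      /dev/rdsk/c10t2d0s0     /mnt/iscsi_mytest       ufs     2       yes     -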


Now that the LUNs are formatted, we can create file systems and mount them, for example:


spirit2# newfs /dev/rdsk/c10t2d0s0
spirit2# mkdir /mnt/iscsi_mytest
spirit2# mount /dev/dsk/c10t2d0s0 /mnt/iscsi_mytest

Once you've finished playing around with the volumes, you might want to return your hosts to a clean state, so simply unmount any file systems and remove the directories where they were mounted.


spirit2# umount /mnt/iscsi_mytest
spirit2# rmdir /mnt/iscsi_mytest

Now, if you created any zpools or SVM metadevices, clear them up as well. Once everything is clear, run the following. This optional procedure assumes that you are logged in to the local system where access to an iSCSI target device has already been configured.


Note: After removing a discovery address, iSNS server, or static configuration, or after disabling a discovery method, the associated targets are logged out. If these associated targets are still in use, for example they have mounted file systems, and you did not execute the previous commands (umount, rmdir), the logout of these devices will fail and they will remain on the active target list.


spirit2# iscsiadm modify discovery -t disable
spirit2# iscsiadm remove discovery-address 192.168.1.25:3260
spirit2# devfsadm -C -i iscsi

and on the target host do the following


sunrise1# zfs destroy zpoolA/iscsi_luns/vol001
sunrise1# zfs destroy zpoolA/iscsi_luns/vol002
sunrise1# zfs destroy zpoolA/iscsi_luns/vol003


There you go!  A simple exercise configuring and using iSCSI.


In conclusion, please note that these iSCSI target LUNs are shared, network-accessible block storage devices built on top of ZVOLs. The advantage of these ZVOLs is that they come with everything ZFS has to offer file systems, including snapshots, replication, compression, etc.
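

For instance (the snapshot name below is made up for illustration), you could snapshot or compress one of the volumes created above just like any other ZFS dataset:


sunrise1# zfs snapshot zpoolA/iscsi_luns/vol001@pre-test
sunrise1# zfs set compression=on zpoolA/iscsi_luns/vol001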


Special Thanks to Jeff Victor & Scott Dickson for their help on the subject. 


Well have fun with iSCSI... Until next time!! 

