Monday Sep 13, 2010

An End and a Beginning

I have worked at Sun, and now Oracle, for longer than I really want to admit, but I have reached a point in my career where I feel that I need to try something new. Today, September 13th, 2010, is my last day at Oracle. For the past 5 years I've worked on zones. The zones team is such an outstanding group of engineers and I'm going to miss working with them. I've learned a lot from them, as well as all of the other engineers at Sun that I have been privileged to work with over the years.

Now, I'm moving on to something new. Tomorrow I start work at Joyent. Hopefully I'll have a chance to blog more than I've done recently. My blog is moving to dtrace.org. I'll continue to work on zones, and Solaris in general, at Joyent, and I'm excited about the new challenges ahead. There are obviously a lot of changes going on with Solaris right now, so it's going to be interesting. And fun!

Friday Oct 23, 2009

solaris10 branded zones on OpenSolaris

For the past 9 or 10 months I've been pretty much heads down working with Jordan on the Solaris 10 branded zone project. Yesterday we integrated the first phase of this project into OpenSolaris. This brand allows you to run the Solaris 10 10/09 release, or later, inside of a zone running on OpenSolaris. We see this brand as one of the tools which will help people as they transition from running Solaris 10 to OpenSolaris.

We've divided this project into two phases. For this initial integration we have the following features:

  • basic brand emulation
    The brand emulation works for running the latest version of Solaris 10 (Solaris 10 10/09) on OpenSolaris. A zone running this brand is intended to be functionally equivalent to a native zone on Solaris 10 10/09 with the same configuration.
  • p2v
    A physical-to-virtual capability to install an archive of a system running Solaris 10 10/09 into the branded zone (a sketch of this workflow appears right after this list)
  • v2v
    A virtual-to-virtual capability to install an archive of a native zone from a system running Solaris 10 10/09 into the branded zone
  • multiple architecture support
    This brand runs on all sun4u, sun4v and x86 architecture machines that OpenSolaris has defined as supported platforms
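
To make the p2v workflow concrete, here is a rough sketch of migrating a standalone Solaris 10 10/09 system into a branded zone. The host names, paths, and the SYSsolaris10 template name are illustrative assumptions on my part; check zonecfg(1M) and the solaris10 brand documentation on your build for the exact options.

(on the Solaris 10 10/09 system, create a flash archive of the system image)
s10box# flarcreate -n s10box /net/filer/archives/s10box.flar

(on the OpenSolaris host, configure the branded zone and install it from the archive)
osol# zonecfg -z s10box "create -t SYSsolaris10; set zonepath=/zones/s10box"
osol# zoneadm -z s10box install -u -a /net/filer/archives/s10box.flar
osol# zoneadm -z s10box boot

The -u option runs sys-unconfig on the installed image so you can give the zone a new identity; -p, if your build supports it, preserves the original system's identity instead.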

There are a few limitations with this initial version of the code which we'll work on in the second phase of the project. We'll be adding support for:

  • Exclusive IP stack zones
  • Delegated ZFS datasets
  • The ability to run these branded zones on a system running xVM
  • The ability to upgrade the version of Solaris 10 running inside the zone to a later release of Solaris 10

We've done extensive testing of the brand using our internal Solaris 10 test suites and a variety of 3rd party applications. Now that the code has integrated, we're looking forward to getting feedback from more people about their real-world experiences running their own Solaris 10 application stacks inside the zone. If you give this branded zone a try, let us hear about your experiences on the OpenSolaris zones-discuss alias.

Tuesday May 26, 2009

Community One Slides

I'll be delivering two presentations at Community One this year. My slides are posted on the wiki in case you want to download them. I like to use my slides as an outline instead of just reading them, so hopefully people who are attending will actually get some value from hearing me speak. :-) Don't forget that the Tuesday Deep Dive is free if you register with the OSDDT code. There are several ways to get into the deep dives if you are planning on attending. All of these are on the wiki.

Monday May 18, 2009

Free Community One Deep Dive

The Deploying OpenSolaris Deep Dive on Tuesday at Community One is free if you register using the promotional code OSDDT. The session doesn't start until 11:00 am so that people can still attend the JavaOne keynote.

Chris Armes will start with an overview of deploying OpenSolaris in the data center. After lunch Ben Rockwood will be delivering a two-hour presentation on ZFS. This promises to be the highlight of the session. Nick, one of my co-authors on the OpenSolaris Bible, will then talk about high availability, and I'll wrap up with a talk on how to use zones for consolidation.

Friday May 08, 2009

Running Solaris 10 on OpenSolaris

Jordan just posted a nice blog about the work we've been doing for Solaris 10 branded zones running on OpenSolaris. His post also has a link to a Flash demo we put together showing the process of migrating a standalone Solaris 10 system into a zone on OpenSolaris. Both of us will be at Community One West and we'll be running the branded zone in the virtualization pod. If you're there and interested, stop by to check it out. I'll also be talking about this project as part of my presentations.

Thursday May 07, 2009

I'll be presenting at Community One West

I'll be delivering two presentations at Community One West at the beginning of June. The first presentation is on Monday June 1st and I'll be covering "Built-in Virtualization for the OpenSolaris Operating System". It will be an overview of some basic virtualization concepts and the various solutions available in OpenSolaris. I'll also be discussing the trade-offs among them. The second presentation is on Tuesday as part of the deep dives. I'll be discussing application consolidation using zones. I'll also be hanging around the virtualization demo pod when I'm not presenting.

In addition, I think there is going to be a book signing for the OpenSolaris Bible. My co-authors Nick and Dave are also going to be attending. This will be the first (and only?) time the three of us have actually been together at the same time.

At least some of the other zones engineers (Dan, Steve and Jordan) should be there too, so if you're attending, stop by the virtualization pod and say hi.

Saturday May 02, 2009

OpenSolaris books on google book search

I happened to be looking at Google Book Search today and I thought I'd see if the book I co-authored, the OpenSolaris Bible, was there. It is, and you can see it here. Although the table of contents and some sample chapters are available elsewhere, this provides a nice way to browse more material in the book. I think Google will let you see up to 20% of the book.

I also noticed that the other new OpenSolaris book, Pro OpenSolaris, is there, as is the venerable Solaris Internals.

Wednesday Feb 11, 2009

zones p2v

About two years ago the zones team sat down and began to create the solaris8 brand for zones. This brand allows you to run your existing Solaris 8 system images inside of a branded zone on Solaris 10. One of the key goals for this project was to easily enable migration of Solaris 8 based systems into a zone on Solaris 10. To accomplish this, as part of the project we built support for a "physical to virtual" capability, or p2v for short. The idea with p2v is that you can create an image of an existing system using a flash archive, cpio archive, a UFS dump, or even just a file system image that is accessible over NFS, then install the zone using that image. There is no explicit p2v tool you have to run; behind the scenes the zone installation process does all of the work to make sure the Solaris 8 image runs correctly inside of the zone.

Once we finished the solaris8 brand we followed that with the solaris9 brand which has this same p2v capability. Of course, while we were doing this work, we understood that having a similar feature for native zones would be useful as well. This would greatly simplify consolidation using zones, since you could deploy onto bare metal, then later consolidate that application stack into a zone with very little work.

The problem for p2v with native zones is that there is no brand module that mediates between the user-level code running in the zone and the kernel code, as we have with the solaris8 and solaris9 brands. Thus, the native zones must be running user-level code that is in sync with the kernel. This includes things like libc, which has a close relationship with the kernel. Every time a patch is applied which impacts both kernel code and user-level library code, all of the native zones must be kept in sync or unpredictable bugs will occur.

Just doing native p2v, as we did for the solaris8 and solaris9 brands, doesn't make sense since the odds that the system image you want to install in the zone will be exactly in sync with the kernel are pretty low. Most deployed systems are at different patch levels or even running different minor releases (e.g. Solaris 10 05/08 vs. 11/08), so there is no clean way to reliably p2v those images.

We really felt that native p2v was important, but we couldn't make any progress until we solved the problem of syncing up the system image to match the global zone. Fortunately I was able to find some time to add this capability, which we call update on attach. This was added into our zone migration subcommands, 'detach' and 'attach', which can be used to move zones from one system to another. Since zone migration has a problem similar to p2v, in that the source and target systems can be out of sync, we do a lot of validation to make sure that the new host can properly run the zone. Of course this validation made zone migration pretty restrictive. Now that we have "update on attach", we can automatically update the zone software when you move it to the new host.

While "update on attach" is a valuable feature in its own right, we also built this with an eye on p2v, since it is the enabling capability needed for p2v. In addition, we leveraged all of the work Dan Price did on the installers for the solaris8 and solaris9 brands and were able to reuse much of that. As with the solaris8 and solaris9 brands, the native brand installer accepts a variety of image inputs; flar, cpio, compressed cpio, pax xustar, UFS dump or a directly accessible root image (e.g. over NFS). It was also enhanced to accept a pre-existing image in the zone root path. This is useful if you use ZFS send and receive to set up the zone root and want to then p2v that as a fully installed zone.

I integrated the native p2v feature into NV build 109 this morning. The webrev from the code review is still available if anyone is interested in seeing the scope of the changes. At over 2000 lines of new code this is a pretty substantial addition to zones which should greatly improve future zone consolidation projects.

Tuesday Jan 20, 2009

OpenSolaris Bible Samples

A comment on my last post noted that there were no sample chapters available for the book; however, I just noticed that Wiley has posted some samples on the book's webpage.

The samples include chapter one, the index, and the detailed table of contents.

The index and TOC are probably the best sections for getting a feel for the material in the book. This is actually the first time I've seen the index myself, since it was produced after we finished writing and the final pages were nailed down. I haven't reviewed it closely yet, but at first glance it looks to be pretty comprehensive at 35 pages. I've always thought that the index was critical for a book like this. The detailed TOC is also useful for getting a sense of the topics covered in each chapter.

Tuesday Jan 06, 2009

Writing the OpenSolaris Bible

2008 was a busy year for me since I spent most of my free time co-authoring a book on OpenSolaris: the OpenSolaris Bible.

Having never written a book before, this was a new experience for me. Nick originally had the idea for writing a book on OpenSolaris and he'd already published Professional C++ with Wiley, so he had an agent and a relationship with a publisher. In December 2007 he contacted me about being a co-author and after thinking it through, I agreed. I had always thought writing a book was something I wanted to do, so I was excited to give this a try. Luckily, Dave agreed to be the third author on the book, so we had our writing team in place. After some early discussions, Wiley decided our material fit best into their "Bible" series, hence the title.

In early January 2008 the three of us worked on the outline and decided which chapters each of us would write. We actually started writing in early February of 2008. Given the publishing schedule we had with Wiley, we had to complete each chapter in about 3 weeks, so there wasn't a lot of time to waste. Also, because this project was not part of our normal work for Sun, we had to ensure that we only worked on the book on our own time, that is, evenings and weekends. In the end it turned out that we each wrote exactly a third of the book, based on the page counts. Since the book came out at around 1000 pages, with approximately 950 pages of written material, not counting front matter or the index, we each wrote over 300 pages of content. Over the course of the project we were also fortunate that many of our friends and colleagues who work on OpenSolaris were willing to review our early work and provide much useful feedback.

We finished the first draft at the end of August 2008 and worked on the revisions to each chapter through early December 2008. Of course the OpenSolaris 2008.11 release came out right at the end of our revision process, so we had to scramble to be sure that everything in the book was up-to-date with respect to the new release.

From a personal perspective, this was a particularly difficult year because we also moved to a "new" house in April of 2008. Our new house is actually about 85 years old and hadn't been very well maintained for a while, so it needs some work. The first week we moved in, we had the boiler go out, the sewer back up into the basement, the toilet and the shower wouldn't stop running, the electrical work for our office took longer than expected, our DSL wasn't hooked up right, and about a million other things all seemed to go wrong. Somehow we managed to cope with all of that, keep working for our real jobs, plus I was able to finish my chapters for the book on schedule. I'm pretty sure Sarah wasn't expecting anything like this when I talked to her about working on the book the previous December. Needless to say, we're looking forward to a less hectic 2009.

If you are at all interested in OpenSolaris, then I hope you'll find something in our book that is worthwhile, even if you already know a lot about the OS. The book is targeted primarily at end-users and system administrators. It has a lot of breadth and we tried to include a balanced mix of introductory material as well as advanced techniques. Here's the table of contents so you can get a feel for what's in the book.
I. Introduction to OpenSolaris.
    1. What Is OpenSolaris?
    2. Installing OpenSolaris.
    3. OpenSolaris Crash Course.

II. Using OpenSolaris.
    4. The Desktop.
    5. Printers and Peripherals.
    6. Software Management.

III. OpenSolaris File Systems, Networking, and Security.
    7. Disks, Local File Systems, and the Volume Manager.
    8. ZFS.
    9. Networking.
    10. Network File Systems and Directory Services.
    11. Security.

IV. OpenSolaris Reliability, Availability, and Serviceability.
    12. Fault Management.
    13. Service Management.
    14. Monitoring and Observability.
    15. DTrace.
    16. Clustering for High Availability.

V. OpenSolaris Virtualization.
    17. Virtualization Overview.
    18. Resource Management.
    19. Zones.
    20. xVM Hypervisor.
    21. Logical Domains (LDoms).
    22. VirtualBox.

VI. Developing and Deploying on OpenSolaris.
    23. Deploying a Web Stack on OpenSolaris.
    24. Developing on OpenSolaris. 
If this looks interesting, you can pre-order a copy from Amazon here. It comes out early next month, February 2009, and I'm excited to hear people's reactions once they've actually had a chance to look it over.

Tuesday Dec 23, 2008

Updating zones on OpenSolaris 2008.11 using detach/attach

In my last post I talked a bit about the new way that software and dataset management works for zones on the 2008.11 release.

One of the features that is still under development is to provide a way to automatically keep the non-global zones in sync with the global zone when you do a 'pkg image-update'. The IPS project still needs some additional enhancements to be able to describe the software dependencies between the global and non-global zones. In the meantime, you must manually ensure that you update the non-global zones after you do an image-update and reboot the global zone. Doing this will create new ZFS datasets for each zone which you can then manually update so that they match the global zone software release.

The easiest way to update the zones is to use the new detach/attach capabilities we added to the 2008.11 release. You can simply detach the zone, then re-attach it. We provide some support for the zone update on attach option for ipkg-branded zones, so you can use 'attach -u' to simply update the zone.

The following shows an example of this.
# zoneadm -z jj1 detach
# zoneadm -z jj1 attach -u
       Global zone version: pkg:/entire@0.5.11,5.11-0.101:20081119T235706Z
   Non-Global zone version: pkg:/entire@0.5.11,5.11-0.98:20080917T010824Z
Updating non-global zone: Output follows
                     Cache: Using /var/pkg/download.
PHASE                                          ITEMS
Indexing Packages                              54/54
DOWNLOAD                                        PKGS        FILES    XFER (MB)
Completed                                      54/54    2491/2491  52.76/52.76

PHASE                                        ACTIONS
Removal Phase                              1253/1253
Install Phase                              1440/1440
Update Phase                               3759/3759
Reading Existing Index                           9/9
Indexing Packages                              54/54
pkg:/entire@0.5.11,5.11-0.98:20080917T010824Z

Here you can see how the zone is updated when it is re-attached to the system. This updates the software in the currently active dataset associated with the global zone BE. If you roll back to an earlier image, the dataset associated with the zone and the earlier BE will be used instead of this newly updated dataset. We've also enhanced the IPS code so it can use the pkg cache from the global zone, which makes the zone update very quick.

Because the zone attach feature is implemented as a brand-specific capability, each brand provides its own options for how zones can be attached. In addition to the -u option, the ipkg brand supports a -a or -r option. The -a option allows you to take an archive (cpio, bzip2, gzip, or USTAR tar) of a zone from another system and attach it. The -r option allows you to receive the output of a 'zfs send' into the zone's dataset. Either of these options can be combined with -u to enable zone migration from one OpenSolaris system to another. An additional option, which didn't make it into 2008.11, but is in the development release, is the -d option, which allows you to specify an existing dataset to be used for the attach. The attach operation will take that dataset and add all of the properties needed to make it usable on the current global zone BE.
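
For example, migrating a zone between two OpenSolaris hosts with -a might look roughly like this. The zone name, paths, and archive format are placeholders, and it's worth double-checking exactly how the brand expects the archive to be rooted; you would also adjust the zone configuration (network, devices) for the new host before attaching:

host1# zonecfg -z jj1 export > /tmp/jj1.cfg
host1# zoneadm -z jj1 detach
host1# cd /export/zones/jj1; find . | cpio -oc | gzip > /tmp/jj1.cpio.gz

(copy jj1.cfg and jj1.cpio.gz to host2)

host2# zonecfg -z jj1 -f /tmp/jj1.cfg
host2# zoneadm -z jj1 attach -u -a /tmp/jj1.cpio.gz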

If you have used zones on 2008.11, you might have noticed that the zone's dataset is not mounted when the zone is halted. This is something we might change in the future, but in the meantime, one final feature related to zone detach is that it leaves the zone's dataset mounted. This provides an easy way to access the zone's data: simply detach the zone, work with its mounted file system, then re-attach the zone.

Wednesday Dec 10, 2008

zones on OpenSolaris 2008.11

The OpenSolaris 2008.11 release just came out and we've made some significant changes in the way that zones are installed on this release. The motivation for these changes is that we eventually want software management operations using IPS to work in a non-global zone much the same way they work in the global zone. Global zone software management uses the SNAP Upgrade project along with IPS; the idea is to create a new Boot Environment (BE) when you update the software in the global zone. A BE is based on a ZFS snapshot and clone, so that you can easily roll back if there are any problems with the newly installed software. Because the software in the non-global zones should be in sync with the global zone, when a new BE is created each of the non-global zones must also have a new ZFS snapshot and clone that matches up to the new BE.

We'd also eventually like to have the same software management capabilities within a non-global zone. That is, we'd like the non-global zone system administrator to be able to use IPS to install software in the zone, and as part of this process, a new BE inside the zone would be created based on a ZFS snapshot and clone. This way the non-global zone can take advantage of the same safety features for rolling back that are available in the global zone.

In order to provide these capabilities, we needed to make some important changes in how zones are laid out in the file system. To support all of this we need the actual zone root file system to be its own delegated ZFS dataset. In this way the non-global zone sysadmin can make their own ZFS snapshots and clones of the zone root and the IPS software can automatically create a new BE within the zone when a software management operation takes place in the zone.
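
For example, for a zone with a zonepath of /export/zones/foo, the layout ends up looking roughly like this (the dataset names, in particular the ROOT/zbe convention, are from memory, so treat them as illustrative rather than definitive):

rpool/export/zones/foo            (the zonepath dataset)
rpool/export/zones/foo/ROOT       (container for the zone's boot environments)
rpool/export/zones/foo/ROOT/zbe   (the zone's root file system, i.e. the zone boot environment)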

The gory details of this are discussed in the spec.

Not all of the capabilities described above work yet, but we have laid a foundation to enable them in the future. In particular, when you create a new global zone BE, all of the non-global zones are cloned as well. However, running image-update in the global zone still doesn't update each individual zone. You still need to do that manually, as Dan described in his blog about zones on the 2008.05 release. In a future post I'll talk about some other ways to update each zone. Another feature that isn't done yet is full SNAP Upgrade support from within the zone itself. That is, zone roots are now delegated ZFS datasets, but when you run IPS inside the zone itself, a new clone is not automatically created. Adding this feature should be fairly straightforward now that the basic support is in the release.

With all of these changes to how zone roots use ZFS in 2008.11, here is a summary of the important differences and limitations with using zones on 2008.11.

1) Existing zones can't be used. If you have zones installed on an earlier release of OpenSolaris and image-update to 2008.11 or later, those zones won't be usable.

2) Your global zone BE needs a UUID. If you are running 2008.11 or later then your global zone BE will have a UUID.

3) Zones are only supported in ZFS. This means that the zonepath must be a dataset. For example, if the zonepath for your zone is /export/zones/foo, then /export/zones must be a dataset. The zones code will then create the foo dataset and all the underlying datasets when you install the zone (see the short example after this list).

4) As I mentioned above, image-updating the global BE doesn't update the zones yet. After you image-update the global zone, don't forget to update the new BE for each zone so that it is in sync with the global zone.
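
To illustrate point 3, here is roughly what setting up a new zone looks like on 2008.11. The pool and zone names are made up for this example; on a default install, rpool/export is already mounted at /export, so the new dataset picks up the /export/zones mountpoint:

# zfs create rpool/export/zones
# zonecfg -z foo "create; set zonepath=/export/zones/foo"
# zoneadm -z foo install

The install step then creates the foo dataset and its children under rpool/export/zones.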

Thursday Sep 06, 2007

A busy week for zones

This is turning out to be a busy week for zones related news. First, the newest version of Solaris 10, the 8/07 release, is now available. This release includes the improved resource management integration with zones that has been available for a while now in the OpenSolaris nevada code base and which I blogged about here. It also includes other zones enhancements such as brandz and IP instances. Jeff Victor has a nice description of all of these new zone features on his blog.

If that wasn't enough, we have started to talk about our latest project, code named Etude. This is a new brand for zones, building on the brandz framework, which allows you to run a Solaris 8 environment within a zone. We have been working on this project for a good part of the year and it is exciting to finally be able to talk more about it. With Etude you can quickly consolidate those dusty old Solaris 8 SPARC systems, running on obsolete hardware, onto current-generation, energy-efficient systems. Marc Hamilton, VP of Solaris Marketing, describes this project at a high level on his blog, but for more details, Dan Price, our project lead, wrote up a really nice overview on his blog. If you have old systems still running Solaris 8 and would like an easy path to Solaris 10 and to newer hardware, then this project might be what you need.

Thursday Feb 01, 2007

Containers in SX build 56

The many Resource Management (RM) features in Solaris have been developed and evolved over the course of years and several releases. We have resource controls, resource pools, resource capping and the Fair Share Scheduler (FSS). We have rctls, projects, tasks, cpu-shares, processor sets and the rcapd(1M). Each of these features has its own commands and syntax for configuration. In some cases, particularly with resource pools, the syntax is quite complex and long sequences of commands are needed to configure a pool. When you first look at RM, it is not immediately clear when to use one feature vs. another, or whether some combination of these features is needed to achieve your RM objectives.

In Solaris 10 we introduced Zones, a lightweight system virtualization capability. Marketing coined the term 'containers' to refer to the combination of Zones and RM within Solaris. However, the integration between the two was fairly weak. Within Zones we had the 'rctl' configuration option, which you could use to set a couple of zone-specific resource controls, and we had the 'pool' property, which could be used to bind the zone to an existing resource pool, but that was it. Just setting the 'zone.cpu-shares' rctl wouldn't actually give you the right cpu shares unless you also configured the system to use FSS. But that was a separate step and easily overlooked. Without the correct configuration of these various, disparate components, even a simple test, such as a fork bomb within a zone, could disrupt the entire system.

As users started experimenting with Zones we found that many of them were not leveraging the RM capabilities provided by the system. We would get dinged in evaluations because Zones, without a correct RM configuration, didn't provide all of the containment users needed. We always expected Zones and RM to be used together, but due to the complexity of the RM features and the loose integration between the two, we were seeing that few Zones users actually had a proper RM configuration. In addition, our RM for memory control was limited to rcapd running within a zone and capping RSS on projects. This wasn't really adequate.

About 9 months ago the Zones engineering team started a project to try to improve this situation. We didn't want to just paper over the complexity with things like a GUI or wizards, so it took us quite a bit of design before we felt like we hit upon some key abstractions that we could use to truly simplify the interaction between the two components. Eventually we settled upon the idea of organizing the RM features into 'dedicated' and 'capped' configurations for the zone. We enhanced resource pools to add the idea of a 'temporary pool' which we could dynamically instantiate when a zone boots. We enhanced rcapd(1M) so that we could do physical memory capping from the global zone. Steve Lawrence did a lot of work to improve resident set size (RSS) accounting as well as adding new rctls for maximum swap and locked memory. These new features significantly improve RM of memory for Zones. We then enhanced the Zones infrastructure to automatically do the work to set up the various RM features that were configured for the zone. Although the project made many smaller improvements, the key ideas are the two new configuration options in zonecfg(1M). When configuring a zone you can now configure 'dedicated-cpu' and 'capped-memory'. Going forward, as additional RM features are added, we anticipate this idea will evolve gracefully to add 'dedicated-memory' and 'capped-cpu' configuration. We also think this concept can be easily extended to support RM features for other key parts of the system such as the network or storage subsystem.

Here is our simple diagram of how we eventually unified the RM view within Zones.
       | dedicated  | capped
---------------------------------
cpu    | temporary  | cpu-cap
       | processor  | rctl*
       | set        |
---------------------------------
memory | temporary  | rcapd, swap
       | memory     | and locked
       | set*       | rctl

* memory sets and cpu caps are under development but are not yet part of Solaris.

With these enhancements, it is now almost trivial to configure RM for a zone. For example, to configure a resource pool with a set of up to four CPUs, all you do in zonecfg is:
zonecfg:my-zone> add dedicated-cpu
zonecfg:my-zone:dedicated-cpu> set ncpus=1-4
zonecfg:my-zone:dedicated-cpu> set importance=10
zonecfg:my-zone:dedicated-cpu> end
To configure memory caps, you would do:
zonecfg:my-zone> add capped-memory
zonecfg:my-zone:capped-memory> set physical=50m
zonecfg:my-zone:capped-memory> set swap=128m
zonecfg:my-zone:capped-memory> set locked=10m
zonecfg:my-zone:capped-memory> end
All of the complexity of configuring the associated RM capabilities is then handled behind the scenes when the zone boots. Likewise, when you migrate a zone to a new host, these RM settings migrate too.
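
Once the zone is booted, you can sanity-check the RM settings from the global zone. The temporary pool naming shown here (a SUNWtmp_ prefix on the zone name) is from memory, so treat that detail as illustrative:

(the temporary pool created for the zone, e.g. SUNWtmp_my-zone, should show up)
# poolstat
(the swap and locked memory caps appear as zone rctls)
# prctl -n zone.max-swap -i zone my-zone
# prctl -n zone.max-locked-memory -i zone my-zone
(rcapstat -z reports per-zone physical memory cap enforcement)
# rcapstat -z 5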

Over the course of the project we discussed these ideas within the OpenSolaris Zones community, where we benefited from a lot of good input that shaped the final design and implementation. The full details of the project are available here and here.

This work is available in Solaris Express build 56, which was just posted. Hopefully folks using Zones will get a chance to try out the new features and let us know what they think. The whole core engineering team actively participates in the zones-discuss list and we're happy to try to answer any questions or just hear your thoughts.

Monday Feb 20, 2006

SVM root mirroring and GRUB

Although I haven't been working on SVM for over 6 months (I am working on Zones now), I still get questions about SVM and x86 root mirroring from time to time. Some of these procedures are different when using the new x86 boot loader (GRUB) that is now part of Nevada and S10u1. I have some old notes that I wrote up about 9 months ago that describe the updated procedures and I think these are still valid.

Root mirroring on x86 is more complex than root mirroring on SPARC. Specifically, there are issues with being able to boot from the secondary side of the mirror when the primary side fails. On x86 machines the system BIOS and fdisk partitioning are the complicating factors.

The x86 BIOS is analogous to the PROM interpreter on SPARC. The BIOS is responsible for finding the right device to boot from, then loading and executing GRUB from that device.

All modern x86 BIOSes are configurable to some degree but the discussion of how to configure them is beyond the scope of this document. In general you can usually select the order of devices that you want the BIOS to probe (e.g. floppy, IDE disk, SCSI disk, network) but you may be limited in configuring at a more granular level. For example, it may not be possible to configure the BIOS to probe the first and second IDE disks. These limitations may be a factor with some hardware configurations (e.g. a system with two IDE disks that are root mirrored). You will need to understand the capabilities of the BIOS that is on your hardware. If your primary boot disk fails you may need to break into the BIOS while the machine is booting and reconfigure to boot from the second disk.

On x86 machines fdisk partitions are used and it is common to have multiple operating systems installed. Also, there are different flavors of master boot programs (e.g. LILO) in addition to GRUB, which is the standard Solaris master boot program. The boot(1M) man page is a good resource for a detailed discussion of the multiple components that are used during booting on Solaris x86.

Since SVM can only mirror Solaris slices within the Solaris fdisk partition this discussion will focus on a configuration that only has Solaris installed. If you have multiple fdisk partitions then you will need to use some other approach to protect the data outside of the Solaris fdisk partition.

Once your system is installed you create your metadbs and root mirror using the normal procedures.
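
For reference, those normal procedures boil down to something like the following; the device names are just examples, with s7 holding the state database replicas and s0 as the root slice:

# metadb -a -f -c 3 c0t0d0s7 c0t1d0s7
# metainit -f d10 1 1 c0t0d0s0
# metainit d20 1 1 c0t1d0s0
# metainit d0 -m d10
# metaroot d0
(reboot so the system is running on the metadevice, then attach the second submirror)
# metattach d0 d20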

You must ensure that both disks are bootable so that you can boot from the secondary disk if the primary fails. Use the installgrub program to set up the second disk as a Solaris bootable disk (see installgrub(1M)). An example command is:

/sbin/installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0

Solaris x86 emulates some of the behavior of the SPARC eeprom. See eeprom(1M). The boot device is stored in the "bootpath" property that you can see with the eeprom command. The value should be set to the device tree path of the root mirror. For example:

bootpath=/pseudo/md@0:0,10,blk

Next you need to modify the GRUB boot menu so that you can manually boot from the second side of the mirror, should this ever be necessary. Here is a quick overview of the GRUB disk naming convention.

(hd0),(hd1) -- first & second BIOS disk (entire disk)
(hd0,0),(hd0,1) -- first & second fdisk partition of first BIOS disk
(hd0,0,a),(hd0,0,b) -- Solaris/BSD slice 0 and 1 on first fdisk partition on the first BIOS disk

Hard disk names start with hd followed by a number, where 0 maps to BIOS disk 0x80 (the first disk enumerated by the BIOS), 1 maps to 0x81, and so on. One annoying aspect of BIOS disk numbering is that the order may change depending on the BIOS configuration. Hence, the GRUB menu may become invalid if you change the BIOS boot disk order or modify the disk configuration. Knowing the disk naming convention is essential to handling boot issues related to disk renumbering in the BIOS. This will be a factor if the primary disk in the mirror is not seen by the BIOS, so that it renumbers and boots from the secondary disk in the mirror. Normally this renumbering will mean that the system can still automatically boot from the second disk, since you configured it to boot in the previous steps, but it becomes a factor when the first disk becomes available again, as described below.

You should edit the GRUB boot menu in /boot/grub/menu.lst and add an entry for the second disk in the mirror. It is important that you be able to manually boot from the second side of the mirror due to the BIOS renumbering described above. If the primary disk is unavailable, the boot archive on that disk may become stale. Later, if you boot and that disk is available again, the BIOS renumbering would cause GRUB to load that stale boot archive which could cause problems or may even leave the system unbootable.
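
A menu.lst entry for the second disk might look like the following, assuming the BIOS sees that disk as (hd1) and the Solaris fdisk partition and slice layout match the first disk; adjust the device string for your configuration:

title Solaris (boot from second side of mirror)
root (hd1,0,a)
kernel /platform/i86pc/multiboot
module /platform/i86pc/boot_archive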

If the primary disk is once again made available and then you reboot without first resyncing the mirror back onto the primary drive, then you should use the GRUB menu entry for the second disk to manually boot from the correct boot archive (the one on the secondary side of the mirror). Once the system is booted, perform normal metadevice maintenance to resync the primary disk. This will restore the current boot archive to the primary so that subsequent boots from that disk will work correctly.

The previous procedure is not normally necessary since you would replace the failed primary disk using cfgadm(1M) and resync but it will be required if the primary is simply not powered on, causing the BIOS to miss the disk and renumber. Subsequently powering up this disk and rebooting would cause the BIOS to renumber again and by default you would boot from the stale disk.

Note that all of the usual considerations of mddb quorum apply to x86 root mirroring, just as they do for SPARC.