ZFS Boot in Solaris 10 Update 6

Lori Alt gives us the deep-dive lowdown on ZFS boot. ~ 40 mins.

View the subtitled video here.

Comments:

When does Update 6 get released?

Posted by patrick giagnocavo on October 23, 2008 at 02:36 PM PDT #

The GA date for Update 6 is 11/10/08.

Posted by Lori Alt on October 24, 2008 at 04:58 AM PDT #

Lori, you are talking to the world here. To most of the world, 11/10/08 is October 11th... so on that basis S10 U6 is out already, then... or are you using US date formats? The easiest way to avoid this mistake is to write the date out, as 11th October or 10th November (or October 11th / November 10th). I cannot tell you how confusing it is that people in the USA use a different date format than most of the rest of the world. Rgds, Tim

Posted by Tim Thomas on October 24, 2008 at 08:56 PM PDT #

Are the slides presented available?

Posted by Vincent Fox on October 25, 2008 at 02:14 PM PDT #

Sorry about the date confusion. Yes, I should post dates
in a format which is non-ambiguous. The GA date
for U6 is 10 Nov 2008.

As for the slides, they are at:

http://opensolaris.org/os/community/zfs/boot/zfsboottalk.0910.pdf

which is linked from the OpenSolaris ZFS boot page:

http://opensolaris.org/os/community/zfs/boot/

Posted by Lori Alt on October 27, 2008 at 08:03 AM PDT #

About Live Upgrade: if I have it right, once the cloning is done, the two boot environments evolve separately, and an upgrade can take a few hours (or more).
When you boot into the new BE, all the data in its datasets dates from the time of the cloning, but some data will have changed in the old BE in the meantime and will be lost.
So if you don't have a separate dataset for /var or other data, you end up losing the work done between the lucreate and the actual booting of the new BE.
So people will still need a good separation (separate datasets) between system files and data files. /var, being at the intersection, poses problems, or at least questions: /var/svc could be changed by the upgrade, so it should stay in the BE's dataset, but log entries written to /var/log during the upgrade window will be lost, as will /var/mail.
To be completely transparent, Live Upgrade should copy all non-system files modified since the cloning from the old BE to the new one at boot time (in single user).
The other way is to configure all systems so that all data lives in datasets outside the BE.
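
For concreteness, the window I mean is roughly this (the BE name and install-media path are just illustrative):

    # lucreate -n s10u6                          (clone the running BE)
    # luupgrade -u -n s10u6 -s /net/server/install/s10u6   (upgrade the clone; may take hours)
    # luactivate s10u6
    # init 6                                     (boot the new BE)

Anything written to the old BE between the lucreate and the init 6 is not in the new BE's datasets.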

Posted by Michel Jansens on October 28, 2008 at 12:40 AM PDT #

Live Upgrade does have a mechanism for syncing the
new BE with the currently-running BE. Here is
the text from the luactivate man page:

"The first time you boot from a newly created BE,
Live Upgrade software synchronizes this BE with the
BE that was last active.... 'Synchronize' here means
that certain system files and directories are
copied from the last-active BE to the BE being
booted. (See synclist(4).)"

<end man page excerpt>

See the file /etc/lu/synclist for the list of files and
directories that are copied to the new BE. This occurs
when you do the "init 6" to shut down the system for the
purpose of booting the new BE, so it should pick up all
changes to those files. The synclist(4) man page describes
the format of that file, so that you can update it if
additional files need to be synced.
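
For example, each entry in /etc/lu/synclist pairs a file or directory with an action. The actual contents vary by release, but the entries look something like this (an illustrative excerpt, not the full list):

    /var/mail                OVERWRITE
    /var/spool/mqueue        OVERWRITE
    /etc/passwd              OVERWRITE

See synclist(4) for the set of actions (OVERWRITE, APPEND, PREPEND) and their exact semantics.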

Having a separate /var dataset doesn't have the effect
you want: lucreate knows that /var is part of the Solaris
name space and will clone that one too. So it too would
get out of sync with your current running environment if
it were not for the synclist mechanism.

Posted by Lori Alt on October 28, 2008 at 01:36 AM PDT #

Lori, Thanks a lot for your reply.

Posted by Michel Jansens on October 28, 2008 at 06:37 PM PDT #

Thanks for this info, Lori.

Could you tell me if the new support for ZFS in Live Upgrade extends to local zones installed on ZFS datasets?
Thanks a lot.

Posted by Michel Jansens on October 29, 2008 at 08:08 PM PDT #

OK, this is complicated: the new support for ZFS in Live
Upgrade supports zones as long as both the root file system
and the zone roots are ZFS datasets and are in the same pool,
and (I believe) the zone root datasets must be under the BE
in the dataset hierarchy. That is, if the BE is named
"mybe" and there is a local zone called z1, the dataset
hierarchy must look something like this:

rpool/ROOT/mybe
rpool/ROOT/mybe/z1

or:

rpool/ROOT/mybe
rpool/ROOT/mybe/zones/z1

That is, the zone root must be subordinate to the
BE's root in the dataset hierarchy.

There are bugs right now in supporting ZFS-rooted
zones when the root file system is UFS, and also some
problems with spreading the root dataset and the
zones over multiple pools. A patch to Live Upgrade
is in the works to fix these problems. But if you
stick to the configuration described above, zone
roots on ZFS do work.
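
To check whether a configuration qualifies, look at the dataset layout. For a working setup it would look something like this (names and mountpoints illustrative):

    # zfs list -r -o name,mountpoint rpool/ROOT/mybe
    NAME                      MOUNTPOINT
    rpool/ROOT/mybe           /
    rpool/ROOT/mybe/zones     /zones
    rpool/ROOT/mybe/zones/z1  /zones/z1

The key point is simply that each zone root dataset sits below the BE's root dataset.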

Posted by Lori Alt on October 30, 2008 at 02:42 AM PDT #

I wish I worked for Sun <3 nice video :)

Posted by douglas on February 12, 2009 at 11:40 AM PST #

From what I gather, there is nothing fundamentally wrong with having the global zone on UFS and whole-root zones built on SAN-attached ZFS file systems. The issues appear to come from LU. If I didn't plan to use LU, but instead used the U6 upgrade-on-attach feature, would this cause any issues?
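
(By upgrade on attach I mean something like the following, with the zone name purely illustrative:

    # zoneadm -z z1 detach
    ... upgrade the global zone ...
    # zoneadm -z z1 attach -u

i.e. letting "attach -u" bring the zone's packages up to the upgraded global zone's level.)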

Posted by paul on February 18, 2009 at 12:41 AM PST #
