Simplified administration with ZFS


The simplest case in system administration often comes down to the ability to move an OS image inexpensively and with as little effort as possible.  I recently acquired an old, used Sun Ultra 40 system and, this being a long weekend (in the US), decided to allocate time to make it my primary desktop system.  Well, it turns out I didn't need a weekend -- all I needed was an hour, a screwdriver and Scorpions to keep me company -- a thought-provoking combination.


Your mileage may vary, of course, depending on what you already have in place, i.e. what your starting point and destination are. In my case, I was migrating from an older HP Pavilion desktop (model a1150y) that I had previously outfitted with 4GB of RAM (for $60 from Amazon) and two mirrored 1.5TB SATA drives (for $100 or so each, prior to the flooding in Thailand that has since caused a spike in hard disk pricing). I'd been running a version of Oracle Solaris 11 Express, which I had been updating with SRUs (Support Repository Updates) as they were released by Oracle.


Since the disks were mirrored when installed on the HP system, I simply broke the mirror by disconnecting one of the disks (no advanced zfs commands were necessary) and plopped the 1.5TB disk into the Ultra 40.  Back when Ultra 40s were being sold by Sun, the largest SATA disks offered for this model (if memory serves me right) were 750GB. Taking the hard disk brackets off the original disks in the Ultra 40 and attaching them to the bigger disk took the most time in this exercise (that, and proofreading this blog entry). As soon as the system was powered on, the 4-year-old Phoenix BIOS had no issues detecting the ZFS submirror, and the GRUB menu simply came up -- showing all of the Boot Environments (why they're important) that had been created over the lifespan of the OS installation, going back to when it lived on the HP desktop.
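

(Incidentally, if the goal were to peel a copy off a mirrored data pool rather than a root pool, the same trick can be done in software with zpool split, which carves one side of each mirror into a new, importable pool. The pool names below are hypothetical; for the root pool, simply pulling one submirror, as I did, keeps that disk bootable as rpool.)


# Split one side of a mirrored data pool into a new pool that can be
# imported on another machine (pool names are hypothetical examples)
zpool split tank tank-copy

# On the destination system, bring the split-off copy in
zpool import tank-copy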


All Boot Environments are shown below:


isaac@HPensolaris:~# beadm list
BE             Active Mountpoint Space   Policy Created         
--             ------ ---------- -----   ------ -------         
151a-july11    -      -          12.29M  static 2011-07-18 12:54
151a-july11-1  -      -          15.34M  static 2011-09-19 15:12
S11EwithSRU11  NR     /          8.63G   static 2011-10-10 09:18
before_168     -      -          87.91M  static 2011-07-02 18:46
snv_150        -      -          49.48M  static 2010-10-17 11:05
snv_150-bkup   -      -          6.85M   static 2011-03-17 00:58
snv_150-bkup-1 -      -          356.53M static 2011-07-02 18:47
snv_151a_ga    -      -          10.59M  static 2011-03-17 11:57
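

Had GRUB defaulted to a different entry than the one I wanted, switching boot environments is a one-liner: beadm activate marks a BE as the default for the next boot (shown here with one of the names from the listing above).


# Make an existing boot environment the default for the next boot
beadm activate S11EwithSRU11

# Then reboot into it
init 6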


It would probably be unfair to hint at the performance improvement seen on this machine without mentioning that Solaris now has 16GB of RAM to address, versus 4GB. The processing power of two sockets of dual-core AMD CPUs at 3.2GHz is also gargantuan compared to the single-socket 3GHz dual-core Pentium 4 in the older HP machine.  That is simply the reality of the hardware -- I did not run any benchmarks on these two machines; that's not the aim here.  One of the other reasons I wanted to move away from that HP desktop was that its fan had always worked extra hard, generating noise that had become almost unbearable to deal with.  Looking into that further was something I'd planned on doing but never got around to. Now the HP box can serve as a system for Solaris 11 Automated Installation (and related) testing.


Yeah, the fact that the ZFS dataset submirror just worked in a different system (albeit of the same architecture) after being moved is slick! (ZFS is the only root file system available on Solaris 11, for a number of reasons -- one of which is our reliance on conducting system updates by leveraging ZFS features such as snapshots and clones.) No fsck, and firefox/thunderbird/office productivity (read: youtube) are all much faster now.
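

As a small illustration of that reliance on snapshots and clones: creating a new boot environment ahead of an update is nearly instantaneous, because beadm clones the active root dataset rather than copying it (the BE name here is just an example).


# Clone the active BE before applying an SRU; the clone consumes
# almost no extra space until the two environments diverge
beadm create before-next-sru
beadm list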


The next thing to do here is to add the second disk from that HP machine, to correct the following condition, which is being tracked automatically by the Solaris Fault Management subsystem.


isaac@HPensolaris:~# zpool status
  pool: rpool
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
    the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-2Q
 scan: resilvered 10.8G in 0h8m with 0 errors on Mon Oct 18 08:06:03 2010
config:

    NAME           STATE     READ WRITE CKSUM
    rpool          DEGRADED     0     0     0
      mirror-0     DEGRADED     0     0     0
        c13t0d0s0  ONLINE       0     0     0
        c9d0s0     UNAVAIL      0     0     0  cannot open

errors: No known data errors
isaac@HPensolaris:~#
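

Once that second disk is physically moved over, getting the mirror healthy again should look roughly like the following (device names are taken from the status output above; if the disk enumerates under a different name on the Ultra 40, the commented zpool replace line, with an illustrative new device name, is the fallback).


# Reconnect the disk and bring it back online; ZFS resilvers automatically
zpool online rpool c9d0s0

# If the device shows up under a new name on this controller, point the
# pool at it instead (the new name here is illustrative)
# zpool replace rpool c9d0s0 c13t1d0s0

# For a root-pool submirror, also reinstall the GRUB boot blocks so the
# system can boot from either disk, e.g.:
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c9d0s0

# Watch the resilver progress
zpool status rpool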


isaac@HPensolaris:~# fmdump -v|tail -10
          Location: -

Jan 15 19:20:03.5998 f6f64adc-b226-cbd7-9829-eedc74efdfd5 ZFS-8000-D3 Diagnosed
  100%  fault.fs.zfs.device

        Problem in: zfs://pool=rpool/vdev=fb5057fa88f00c2f
           Affects: zfs://pool=rpool/vdev=fb5057fa88f00c2f
               FRU: -
          Location: -
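

After the resilver completes, the fault manager's view can be double-checked, and the diagnosis cleared if it doesn't retire on its own (the UUID is the one reported by fmdump above).


# List any faults FMA still considers outstanding
fmadm faulty

# If the ZFS vdev fault doesn't clear automatically once the device is
# back online, mark it repaired by its event UUID
fmadm repair f6f64adc-b226-cbd7-9829-eedc74efdfd5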




