Friday Apr 20, 2007

ZFS pool in an ISCSI ZVOL

My last post, about backing up the ZFS pool on my laptop to iSCSI targets backed by ZFS zvols on a server, prompted a comment and also prompted me to think about whether this would be a worthwhile thing in the real world.

Initially I would say no. However, it does offer the tantalizing possibility of letting the administrator of the system hosting the iSCSI targets take backups of the pools without interfering with the contents of those pools at all.




It also lets you separate the snapshots taken for users, which would all live in the client pool, from the snapshots taken by administrators for disaster recovery, which would all live in the server pool.

If the server went pop, the recovery would be to build a new server and then restore the zvol, which would contain the whole client pool complete with all the client pool's snapshots. Similarly, if the client pool were to become corrupted, you could roll it back to a good state by rolling back the zvol on the server pool. Now clearly a selling point of ZFS is its always-consistent on-disk format, so this is less of a risk than with other file systems (unless there are bugs), but the belt-and-braces approach appeals to the latent sysadmin in me, who knows that the performance of a storage system that has lost your data is zero.
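A sketch of that second recovery path on the server, assuming the zvol naming used in the post below (the dataset and snapshot names are illustrative):

```shell
# On the server: keep periodic snapshots of the zvol that
# backs the client pool
zfs snapshot tank/iscsi/pearson@known-good

# If the client pool is corrupted, roll the zvol back on the
# server; the client can then re-import a consistent pool
zfs rollback tank/iscsi/pearson@known-good
```

The client pool never sees any of this; from its point of view the disk simply reverted to an earlier, consistent state.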

I'm going to see if I can build a server like this to see how well it performs but that won't be for at least a few weeks.


Wednesday Apr 18, 2007

Backing up laptop using ZFS over iscsi to more ZFS

After the debacle of my laptop zpool having to be rebuilt and “restored” using zfs send and zfs receive after a reinstall, I thought I would look for a better backup method: one that did not involve being clever with partitions on an external USB disk that are “ready” for when the whole disk is using ZFS.

The obvious solution is a variation on one I had played with before: store one half of a mirrored pool on another system. So welcome to iSCSI.

ZFS volumes can now be shared over iSCSI. So on the server, create a volume with the “shareiscsi” property set to “on” and enable the iSCSI target service:

# zfs get  shareiscsi tank/iscsi/pearson   
NAME                PROPERTY    VALUE               SOURCE
tank/iscsi/pearson  shareiscsi  on                  inherited from tank/iscsi
  
# svcadm enable  svc:/system/iscsitgt       
# 
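For completeness, the server-side creation might look something like this sketch (the dataset name and size are assumptions, chosen to match the 11GB LUN the client sees below):

```shell
# Create an 11GB zvol to back the client's half of the mirror
zfs create -V 11g tank/iscsi/pearson

# Share it as an iSCSI target; the property can also be
# inherited from a parent dataset, as in the output above
zfs set shareiscsi=on tank/iscsi/pearson
```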

Now on the client, tell the iSCSI initiator where the server is:


5223 # iscsiadm add discovery-address 192.168.1.20
5224 # iscsiadm list discovery-address            
Discovery Address: 192.168.1.20:3260
5225 # iscsiadm modify discovery --sendtargets enable
5226 # format < /dev/null
Searching for disks...done


AVAILABLE DISK SELECTIONS:
       0. c0d0 <DEFAULT cyl 3791 alt 2 hd 255 sec 63>
          /pci@0,0/pci-ide@1f,1/ide@0/cmdk@0,0
       1. c10t0100001731F649B400002A004625F5BEd0 <SUN-SOLARIS-1-11.00GB>
          /scsi_vhci/disk@g0100001731f649b400002a004625f5be
Specify disk (enter its number): 
5227 # 

Now attach the new device to the pool. I can see that some security would be a good thing here to protect my iSCSI-backed pool; more on that later.


5229 # zpool status newpool                                       
  pool: newpool
 state: ONLINE
status: The pool is formatted using an older on-disk format.  The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
        pool will no longer be accessible on older software versions.
 scrub: scrub completed with 0 errors on Wed Apr 18 12:30:43 2007
config:

        NAME        STATE     READ WRITE CKSUM
        newpool     ONLINE       0     0     0
          c0d0s7    ONLINE       0     0     0

errors: No known data errors
5230 # zpool attach newpool c0d0s7 c10t0100001731F649B400002A004625F5BEd0
5231 # zpool status newpool                                              
  pool: newpool
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress, 0.02% done, 8h13m to go
config:

        NAME                                        STATE     READ WRITE CKSUM
        newpool                                     ONLINE       0     0     0
          mirror                                    ONLINE       0     0     0
            c0d0s7                                  ONLINE       0     0     0
            c10t0100001731F649B400002A004625F5BEd0  ONLINE       0     0     0

errors: No known data errors
5232 # 

The 8 hours estimated to complete the resilver turns out to be hopelessly pessimistic and is quickly revised to a more realistic, but still overly pessimistic, 37 minutes. All of this over what is only a 100Mbit ethernet connection from this host. I'm going to try this on the Dell, which has a 1Gbit network, to see if that improves things even further. (Since the laptop has just been upgraded to build 62, the pool “needs” to be upgraded. However, since an upgraded pool could not then be imported on earlier builds, I won't upgrade the pool version until both boot environments are running build 62 or above.)
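When the time does come, the upgrade itself is a one-liner; a sketch:

```shell
# List pools running an older on-disk version
zpool upgrade

# One-way upgrade: run this only once every boot environment
# that needs to import the pool is on build 62 or later
zpool upgrade newpool
```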

I am left wondering how useful this could be in the real world. As a “nasty hack” you could have your ZFS-based NAS box serving out volumes to your clients, which then build zpools in them. Then on the NAS box you can snapshot and back up the volumes, which would actually give you a backup of the whole of each client pool, something many people want for disaster-recovery reasons. Which is in effect what I have here.
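On the NAS box, that backup might be sketched as follows (the dataset, snapshot, and file names are illustrative):

```shell
# Snapshot the zvol; this captures the entire client pool,
# including all of the client's own snapshots, in one
# crash-consistent image
zfs snapshot tank/iscsi/pearson@dr-20070418

# Stream the snapshot to a file (or pipe it to another host)
# for disaster recovery
zfs send tank/iscsi/pearson@dr-20070418 > /backup/pearson-pool.zfs
```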




Saturday Nov 26, 2005

ZFS "remote" replication.

More games with ZFS on my laptops. I was wondering about getting an external disk that I could attach to the zpool to act as a backup of the pool. This would seem the ideal solution, but it has one drawback: I don't have an external drive here, now.

I do however have another laptop. How about exporting (sharing) a file over a back-to-back ethernet link, then creating a lofi device from it and attaching that into the zpool?

I'll gloss over setting up the link, except to say it was a private link; I do like the way gigabit interfaces auto-detect the cable. ZFS does all the sharing for me, so there is no editing dfstab and no running extra commands. I did not even have to sacrifice a chicken. Just use zfs set to set the sharenfs option, and SMF then starts all the NFS server services automatically.

principia 788 # zfs list -o name,sharenfs backup
NAME                  SHARENFS
backup                root=192.168.2.1,rw=192.168.2.1
789 #
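The share itself would have been set with something like this sketch, using the options shown by the zfs list output above:

```shell
# Share the backup filesystem over NFS, restricted to the peer
# on the back-to-back link; SMF starts the NFS services for us
zfs set sharenfs='root=192.168.2.1,rw=192.168.2.1' backup
```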

Meanwhile on the host I am backing up (sigma):

sigma 761 # lofiadm -a /net/192.168.2.2/backup/sigma /dev/lofi/1
sigma 762 # zpool attach home c0d0s6 /dev/lofi/1

Now at this point I did expect it all to go wrong, and in some ways at least I was not disappointed. After a short while the system being backed up stopped responding to anything and I had to power cycle it. When it rebooted I was more than slightly apprehensive as to whether the zpool, which contains my home directory, would still be there. I should not have worried.

sigma 785 # zpool status home
  pool: home
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-2Q
 scrub: resilver completed with 0 errors on Sat Nov 26 17:42:39 2005
config:

        NAME             STATE     READ WRITE CKSUM
        home             DEGRADED     0     0     0
          mirror         DEGRADED     0     0     0
            c0d0s6       ONLINE       0     0     0
            c0d0s7       ONLINE       0     0     0
            /dev/lofi/1  FAULTED      0     0     0  cannot open
sigma 786 #

So now let's bring back the interface, get the lofi device back, and see what happens:

sigma 904 # zpool online home /dev/lofi/1
Bringing device /dev/lofi/1 online
sigma 905 # zpool status home
  pool: home
 state: ONLINE
 scrub: resilver completed with 0 errors on Sat Nov 26 18:39:20 2005
config:

        NAME             STATE     READ WRITE CKSUM
        home             ONLINE       0     0     0
          mirror         ONLINE       0     0     0
            c0d0s6       ONLINE       0     0     0
            c0d0s7       ONLINE       0     0     0
            /dev/lofi/1  ONLINE       0     0     0  588K resilvered
sigma 906 #

Now I can take it offline in a more graceful way:

sigma 906 # zpool offline home /dev/lofi/1
Bringing device /dev/lofi/1 offline
sigma 907 # zpool status home
  pool: home
 state: DEGRADED
status: One or more devices has been taken offline by the adminstrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
 scrub: resilver completed with 0 errors on Sat Nov 26 18:39:20 2005
config:

        NAME             STATE     READ WRITE CKSUM
        home             DEGRADED     0     0     0
          mirror         DEGRADED     0     0     0
            c0d0s6       ONLINE       0     0     0
            c0d0s7       ONLINE       0     0     0
            /dev/lofi/1  OFFLINE      0     0     0  588K resilvered
sigma 908 #

At this point I need to try the next step, which is to import the data in that file on another system.
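That next step would presumably look something like this sketch on the other laptop, which holds the backing file (the path is a guess):

```shell
# Attach the backing file as a lofi device on the host that
# holds it, then search /dev/lofi for importable pools
lofiadm -a /backup/sigma
zpool import -d /dev/lofi home
```

Since the lofi device was one half of a mirror, it should contain a complete replica of the pool.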


More on this later.



About

This is the old blog of Chris Gerhard. It has mostly moved to http://chrisgerhard.wordpress.com
