Backing up a laptop using ZFS over iSCSI to more ZFS

After the debacle of my laptop reinstall, in which the zpool had to be rebuilt and “restored” using zfs send and zfs receive, I thought I would look for a better backup method; one that did not involve being clever with partitions on an external USB disk that are “ready” for when the whole disk is using ZFS.

The obvious solution is a variation on one I had played with before: store one half of the pool on another system. So welcome to iSCSI.

ZFS volumes can now be shared using iSCSI. So on the server create a volume with the “shareiscsi” property set to “on” and enable the iSCSI target:

# zfs get  shareiscsi tank/iscsi/pearson   
NAME                PROPERTY    VALUE               SOURCE
tank/iscsi/pearson  shareiscsi  on                  inherited from tank/iscsi
  
# svcadm enable  svc:/system/iscsitgt       
# 
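For reference, creating such a volume from scratch would look something like this. A sketch only: the 11GB size is inferred from what format reports below, and setting shareiscsi on the parent tank/iscsi matches the “inherited from tank/iscsi” shown above:

# zfs create tank/iscsi
# zfs set shareiscsi=on tank/iscsi
# zfs create -V 11g tank/iscsi/pearson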

Now on the client tell the iSCSI initiator where the server is:


5223 # iscsiadm add discovery-address 192.168.1.20
5224 # iscsiadm list discovery-address            
Discovery Address: 192.168.1.20:3260
5225 # iscsiadm modify discovery --sendtargets enable
5226 # format < /dev/null
Searching for disks...done


AVAILABLE DISK SELECTIONS:
       0. c0d0 <DEFAULT cyl 3791 alt 2 hd 255 sec 63>
          /pci@0,0/pci-ide@1f,1/ide@0/cmdk@0,0
       1. c10t0100001731F649B400002A004625F5BEd0 <SUN-SOLARIS-1-11.00GB>
          /scsi_vhci/disk@g0100001731f649b400002a004625f5be
Specify disk (enter its number): 
5227 # 
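The name format presents for the remote LUN is not very meaningful, so if more than one target were exported it would be easy to pick the wrong one. Listing what the initiator has discovered helps map targets to devices; a sketch, with iscsiadm's -S option showing the OS device path for each target:

# iscsiadm list target -S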

Now attach the new device to the pool. I can see that some security would be a good thing here to protect my iSCSI-backed pool. More on that later.


5229 # zpool status newpool                                       
  pool: newpool
 state: ONLINE
status: The pool is formatted using an older on-disk format.  The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
        pool will no longer be accessible on older software versions.
 scrub: scrub completed with 0 errors on Wed Apr 18 12:30:43 2007
config:

        NAME        STATE     READ WRITE CKSUM
        newpool     ONLINE       0     0     0
          c0d0s7    ONLINE       0     0     0

errors: No known data errors
5230 # zpool attach newpool c0d0s7 c10t0100001731F649B400002A004625F5BEd0
5231 # zpool status newpool                                              
  pool: newpool
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress, 0.02% done, 8h13m to go
config:

        NAME                                        STATE     READ WRITE CKSUM
        newpool                                     ONLINE       0     0     0
          mirror                                    ONLINE       0     0     0
            c0d0s7                                  ONLINE       0     0     0
            c10t0100001731F649B400002A004625F5BEd0  ONLINE       0     0     0

errors: No known data errors
5232 # 

The 8 hours to complete the resilver turns out to be hopelessly pessimistic and is quickly revised down to a more realistic, but still overly pessimistic, 37 minutes. All of this over what is only a 100Mbit Ethernet connection from this host. I'm going to try this on the Dell, which has a 1Gbit network, to see if that improves things even further. (Since the laptop has just been upgraded to build 62 the pool “needs” to be upgraded. However, since an upgraded pool could not then be imported on earlier builds, I won't upgrade the pool version until both boot environments are running build 62 or above.)
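When the time does come, the upgrade itself is trivial. A sketch only, not run here, since it is one-way:

# zpool upgrade
# zpool upgrade newpool

The first form lists pools still at an older on-disk version; the second upgrades the named pool, after which older builds can no longer import it.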

I am left wondering how useful this could be in the real world. As a “nasty hack” you could have your ZFS-based NAS box serving out iSCSI volumes to your clients, which then build zpools on them. Then on the NAS box you can snapshot and back up the volumes, which would actually give you a backup of the whole of each client pool, something many people want for disaster recovery reasons. Which is in effect what I have here.
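On the NAS box that backup would be nothing more than snapshotting the zvol behind each client's target and sending it somewhere safe. A sketch, with the snapshot name and destination file invented for illustration:

# zfs snapshot tank/iscsi/pearson@backup
# zfs send tank/iscsi/pearson@backup > /backup/pearson.zfs

Restoring the client pool then just means zfs receive-ing that stream back into a zvol and importing the pool from it.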


Comments:

You could be replicating data between two drives in possibly different locations (geographically, or just data centre shelves). If each iSCSI target zvol sits on a separate server then you have replicated your data across two RAIDed drives (in an administratively simple fashion). That will get you a 2-way mirror that is accessible over different network paths. If the mirror is broken, you could reconstruct on either server. Or both. Like you, I wonder how reliable this would be in the "real world". How fast. And how available.

Posted by RNC on April 19, 2007 at 09:57 AM BST #

[Trackback] My last post about backing up ZFS on my laptop to iSCSI targets exported from a server backed by ZFS zvols prompted a comment and also prompted me to think about whether this would be a worthwhile thing in the real world. Initia...

Posted by The dot in ... --- ... on April 20, 2007 at 08:49 AM BST #

Interesting; it's a shame that format doesn't present the remote iSCSI target with a more meaningful name. Could be very easy to pick the wrong one. And did you say there would be more re security? I know there are some new zfs props for this, but perhaps that's not what you meant? cheers, c.

Posted by Calum Mackay on April 25, 2007 at 09:37 AM BST #

Over a 10/100 network you will only get 8-12MB/s; gigabit will give you up to 125MB/s, which is good enough for disks. I can hear "oh wait, I get 35MB/s on my files". Yes, due to smart caching of files at both ends of the copper. See smallnetbuilder's NAS charts, or you can just transfer a massive file to find out your speed.

Posted by Anthony on May 07, 2007 at 09:02 PM BST #
