Backing up a laptop using ZFS over iSCSI to more ZFS
By user12625760 on Apr 18, 2007
After the debacle of the reinstall, where my laptop's zpool had to be rebuilt and "restored" using zfs send and zfs receive, I thought I would look for a better backup method. One that did not involve being clever with partitions on an external USB disk that are "ready" for when the whole disk is using ZFS.
The obvious solution is a variation on one I had played with before: store one half of a mirrored pool on another system. So welcome to iSCSI.
ZFS volumes can now be shared over iSCSI. So on the server, create a volume with the "shareiscsi" property set to "on" and enable the iSCSI target service:
# zfs get shareiscsi tank/iscsi/pearson
NAME                PROPERTY    VALUE   SOURCE
tank/iscsi/pearson  shareiscsi  on      inherited from tank/iscsi
# svcadm enable svc:/system/iscsitgt
#
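The creation of the volume itself isn't shown above. A minimal sketch of the server-side setup, assuming the dataset names from the output above and an 11 GB volume (to match the 11.00GB LUN the client sees later), might look like:

```shell
# Create the parent dataset and an 11 GB ZFS volume (zvol) to export.
# The names and size are assumptions based on the transcript above.
zfs create tank/iscsi
zfs create -V 11g tank/iscsi/pearson

# Setting shareiscsi on the parent lets child volumes inherit it,
# which is why the SOURCE column above reads "inherited from tank/iscsi".
zfs set shareiscsi=on tank/iscsi

# Enable the iSCSI target daemon and confirm the LUN is exported.
svcadm enable svc:/system/iscsitgt
iscsitadm list target
```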
Now, on the client, tell the iSCSI initiator where the server is:
# iscsiadm add discovery-address 192.168.1.20
# iscsiadm list discovery-address
Discovery Address: 192.168.1.20:3260
# iscsiadm modify discovery --sendtargets enable
# format < /dev/null
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c0d0 <DEFAULT cyl 3791 alt 2 hd 255 sec 63>
          /pci@0,0/pci-ide@1f,1/ide@0/cmdk@0,0
       1. c10t0100001731F649B400002A004625F5BEd0 <SUN-SOLARIS-1-11.00GB>
          /scsi_vhci/disk@g0100001731f649b400002a004625f5be
Specify disk (enter its number):
#
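Before touching the pool it is worth checking that the initiator has actually logged in to the target, rather than relying on format alone. A quick sanity check:

```shell
# List discovered targets; -S also shows the Solaris device path
# for each LUN, which should match the c10t...d0 name seen in format.
iscsiadm list target -S
```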
Now attach the new device to the pool. I can see that some security would be a good thing here to protect my iSCSI volume. More on that later.
# zpool status newpool
  pool: newpool
 state: ONLINE
status: The pool is formatted using an older on-disk format.  The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
        pool will no longer be accessible on older software versions.
 scrub: scrub completed with 0 errors on Wed Apr 18 12:30:43 2007
config:

        NAME        STATE     READ WRITE CKSUM
        newpool     ONLINE       0     0     0
          c0d0s7    ONLINE       0     0     0

errors: No known data errors
# zpool attach newpool c0d0s7 c10t0100001731F649B400002A004625F5BEd0
# zpool status newpool
  pool: newpool
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress, 0.02% done, 8h13m to go
config:

        NAME                                        STATE     READ WRITE CKSUM
        newpool                                     ONLINE       0     0     0
          mirror                                    ONLINE       0     0     0
            c0d0s7                                  ONLINE       0     0     0
            c10t0100001731F649B400002A004625F5BEd0  ONLINE       0     0     0

errors: No known data errors
#
The 8 hours estimated to complete the resilver turns out to be hopelessly pessimistic and is quickly reduced to a more realistic, but still overly pessimistic, 37 minutes. All of this over what is only a 100Mbit Ethernet connection from this host. I'm going to try this on the Dell that has a 1Gbit network to see if that improves things even further. (Since the laptop has just been upgraded to build 62, the pool "needs" to be upgraded. However, since an upgraded pool can no longer be imported on earlier builds, I won't upgrade the pool version until both boot environments are running build 62 or above.)
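When the time does come to upgrade, the commands are simple; a sketch, assuming the pool name used above:

```shell
# Show the on-disk format versions this build supports and
# which pools are behind.
zpool upgrade -v

# The actual upgrade is deliberately deferred in the text above,
# since older builds cannot import a pool with a newer version:
# zpool upgrade newpool
```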
I am left wondering how useful this could be in the real world. As a "nasty hack" you could have your ZFS-based NAS box serving out volumes to your clients, which then build zpools in them. Then on the NAS box you can snapshot and back up the volumes, which would actually give you a backup of the whole of each client pool, something many people want for disaster recovery reasons. Which is in effect what I have here.
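On the NAS box, that backup step is just ordinary snapshot handling of the zvol. A sketch, reusing the volume name from earlier; the backup host and receiving dataset are hypothetical names for illustration:

```shell
# Snapshot the zvol that backs the client's half of the mirror.
# Best taken while the client pool is quiet, so the snapshot is
# crash-consistent rather than mid-write.
zfs snapshot tank/iscsi/pearson@backup-20070418

# Send the snapshot off-box; "backuphost" and "backup/pearson"
# are made-up names for this example.
zfs send tank/iscsi/pearson@backup-20070418 | \
    ssh backuphost zfs receive backup/pearson@backup-20070418
```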