By MandyWaite on May 29, 2008
I had to have an important build system reinstalled with the latest build of Nevada, and I was worried because I had the important stuff on a ZFS file system. The main reason I used ZFS was for the snapshots feature, which would allow me to go back to a previous version if I totally messed up. What I was worried about was whether there was any ZFS metadata tied into the OS, and whether losing that would stop me from accessing my ZFS pools and file systems after a complete reinstall. The important partitions were on the 2nd disk, and our lab guys only touch the first during reinstalls. I decided to back up the data anyway. In my days as a support engineer working with SunOS, we used to do something like this in the directory I wanted to back up:
tar cvf - . | (cd <target dir>; tar xvBpf - )
It was a long time ago, so it may be slightly wrong.
There are so many different versions of tar these days that I lost confidence in this, and generally, for directories that don't contain symlinks, it's easier just to use cp -r -p. Also, 'mv' actually works across filesystems now; it didn't once upon a time.
I wanted to back this up by copying everything, retaining permissions, and not recursing down symlinks. I was told that the example in the cpio man page described how to do this. It said:
% find . -depth -print | cpio -pdlmv newdir
But when I ran this it failed: the -l switch made it try to create links in the new directory back to the original files/dirs in the source directory. That was a problem in itself, but the reason it actually failed was that there were hard links in the source directory which it tried to link back to, and the target directory was on a different file system.
So I tried again without the -l switch to cpio:
% find . -depth -print | cpio -pdmv newdir
This actually seemed to work, and I had the right number of files in the target directory, but it was impossible to check that the links and permissions were all correct as there were just too many files and too complex a directory hierarchy.
As luck would have it, I had access to a second system which had a second, unused disk. I was able to create a ZFS pool on the second system and then use 'zfs send' to send my latest snapshot to a 'zfs recv' process on the second system. This process is described here.
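For anyone who hasn't tried it, the send/receive step itself is short. This is just a sketch; the pool, filesystem, snapshot, and host names below (workspace/build, backuppool, host2) are made-up placeholders, not my actual setup:

```shell
# Sketch only; names are placeholders.
# Take a snapshot of the filesystem you want to protect.
zfs snapshot workspace/build@backup
# Stream the snapshot to a pool on the second system over ssh.
zfs send workspace/build@backup | ssh host2 zfs recv backuppool/build
```

The receiving side ends up with a full copy of the filesystem as of that snapshot, which you can browse like any other ZFS filesystem.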
Once I had some level of confidence that my data was safe, I asked the lab guys if they could reinstall the system, and whether my ZFS pools and filesystems would still be around afterwards. One of them did the reinstall and emailed me back with the following info:
zz01# zpool import
  pool: workspace
    id: 2926848695052943867
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        workspace   ONLINE
          c3t1d0s5  ONLINE
I then simply had to run:
zz01# zpool import workspace
and I was back up and running.
It's silly really: I work for Sun and I could have emailed a bunch of aliases, but I generally like working things out for myself (even if it is by googling). I didn't entirely manage that in this case, but at least I didn't have hundreds of other Sun people having to come to my rescue.