Sunday Mar 01, 2009

Does a zfs snapshot flush writes?

I want to write about how we would go about creating a consistent snapshot of a pNFS community. But first I want to try to understand how zfs does this for a single filesystem. And I'm going to be deliberately braindead about figuring it out. I.e., I'm playing dumb.

The question at hand is whether or not a zfs snapshot flushes pending writes. I can determine this by:

  1. Reading the source, doh! But I'm too lazy to do that... (see how I'm playing dumb?)
  2. In one script, write timestamps to a file and see if there is any funny business occurring whilst I run another script in parallel and capture the timestamp of the zfs snapshot command. Hmm, sounds complicated and error prone.
  3. Create a dirty page in memory and see if the snapshot flushes it. I can do this by making a small write and keeping the file open while I run the snapshot command. If the write appears in the snapshot, we know a flush occurred.

I wrote a simple Perl script to test this:


[thud@warlock test]> more slam.pl 
#!/usr/bin/perl

use Time::HiRes qw(time gettimeofday);

open(FP, ">$ARGV[0]") || die "Can't open for writing $ARGV[0]: $!\n";

print FP time . "\n";

`zfs snapshot tank/test\@$ARGV[0]`;

close(FP);

Now time to test:

[thud@warlock ~]> zfs create tank/test
[thud@warlock ~]> chmod 777 /tank/test/
[thud@warlock test]> cd /tank/test
[thud@warlock test]> ./slam.pl one
[thud@warlock test]> zfs clone tank/test@one tank/one
[thud@warlock test]> more ../one/one 
1235973947.93013

Which would indicate that the flush had to occur. What if we add another write after the snapshot?

[thud@warlock test]> vi slam.pl	
[thud@warlock test]> ./slam.pl two
[thud@warlock test]> zfs clone tank/test@two tank/two
[thud@warlock test]> more ../one/one 
1235973947.93013
[thud@warlock test]> more ../two/one
1235973947.93013
[thud@warlock test]> more ../two/two
1235974891.62171
[thud@warlock test]> more two
1235974891.62171
1235974891.84716

That is a strong indication that zfs is flushing writes. Hmm, what if we add a pause to the script and see if we are flushing the writes to the active filesystem?

[thud@warlock test]> more slam.pl 
#!/usr/bin/perl

use Time::HiRes qw(time gettimeofday);

open(FP, ">$ARGV[0]") || die "Can't open for writing $ARGV[0]: $!\n";

print FP time . "\n";

`zfs snapshot tank/test\@$ARGV[0]`;

print FP time . "\n";

print "Type something, hit return\n";
my ($pause) = <STDIN>;
print "$pause\n";

print FP time . "\n";

close(FP);

[thud@warlock test]> ./slam.pl pause
Type something, hit return

And meanwhile, in another window:

[thud@warlock test]> zfs clone tank/test@pause tank/pause
[thud@warlock test]> more ../pause/pause 
1235975458.21851
[thud@warlock test]> more pause	
1235975458.21851

So, the zfs snapshot is causing a write to be flushed that wouldn't normally be flushed. I.e., we are waiting on input and haven't flushed the second write. What happens if we take another snapshot manually and look at the contents?

[thud@warlock test]> zfs snapshot tank/test@pause2
[thud@warlock test]> zfs clone tank/test@pause2 tank/pause2
[thud@warlock test]> more ../pause/pause 
1235975458.21851
[thud@warlock test]> more ../pause2/pause 
1235975458.21851
[thud@warlock test]> more pause	
1235975458.21851

Very, very interesting - this dirty write is clearly not flushed.

[thud@warlock test]> ./slam.pl pause
Type something, hit return
unpause
unpause

Time to stop playing dumb, even though it is fun to experiment here. I'll go look at the code tomorrow.
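
A note for tomorrow's source diving: one way to confirm the flush without reading all of the code might be to watch for a transaction group sync while the snapshot runs. This is just a sketch, assuming the fbt probe for txg_wait_synced is available on this build:

# dtrace -n 'fbt::txg_wait_synced:entry { printf("txg sync requested by %s", execname); }'

If 'zfs' shows up as the execname while the snapshot command runs in another window, the snapshot forced a sync.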


Originally posted on Kool Aid Served Daily
Copyright (C) 2009, Kool Aid Served Daily

Friday Sep 26, 2008

Moving a ZFS filesystem

In a past life, I did the code for moving files/directories across a qtree in WAFL. The mv used to degrade to a copy because of the effective change in the fsid. The algorithm I used to combat this was to copy directories and move files. I had to copy the directories because we needed both a source and a destination target for the renames, and that approach would not leave us in an inconsistent state.

So I was all prepared for pain and hoops when I wanted to rename a ZFS filesystem on a test box. And I am happy to report that the pain never materialized:

[root@jhereg ~]> zfs create pool/builds
[root@jhereg ~]> zfs list
NAME                           USED  AVAIL  REFER  MOUNTPOINT
pool                          59.9G  74.0G    22K  /builds
pool/builds                     18K  74.0G    18K  /builds/builds
pool/jasmith                  22.5G  74.0G   866K  /builds/jasmith
pool/jasmith/nfs41-instp       773M  74.0G   731M  /builds/jasmith/nfs41-instp
pool/jasmith/nfs41-instp-bld  11.4G  74.0G  11.7G  /builds/jasmith/nfs41-instp-bld
pool/jasmith/nfs41-open        767M  74.0G   712M  /builds/jasmith/nfs41-open
pool/jasmith/nfs41-open-bld   9.62G  74.0G  10.3G  /builds/jasmith/nfs41-open-bld
pool/webaker                  37.4G  74.0G  17.5G  /builds/webaker
pool/webaker/vbox0            7.46G  74.0G  7.46G  /builds/webaker/vbox0
pool/webaker/vbox_ds1         1.48G  74.0G  7.82G  /builds/webaker/vbox_ds1
pool/webaker/vbox_ds2           17K  74.0G    18K  /builds/webaker/vbox_ds2
pool/webaker/vbox_ds3          118M  74.0G  7.77G  /builds/webaker/vbox_ds3
pool/webaker/vbox_master      7.77G  74.0G  7.77G  /builds/webaker/vbox_master
pool/webaker/vbox_mds         3.07G  74.0G  8.99G  /builds/webaker/vbox_mds
[root@jhereg ~]> zfs rename pool/jasmith pool/builds/jasmith
[root@jhereg ~]> zfs list
NAME                                  USED  AVAIL  REFER  MOUNTPOINT
pool                                 59.9G  74.0G    23K  /builds
pool/builds                          22.5G  74.0G    18K  /builds/builds
pool/builds/jasmith                  22.5G  74.0G   866K  /builds/builds/jasmith
pool/builds/jasmith/nfs41-instp       773M  74.0G   731M  /builds/builds/jasmith/nfs41-instp
pool/builds/jasmith/nfs41-instp-bld  11.4G  74.0G  11.7G  /builds/builds/jasmith/nfs41-instp-bld
pool/builds/jasmith/nfs41-open        767M  74.0G   712M  /builds/builds/jasmith/nfs41-open
pool/builds/jasmith/nfs41-open-bld   9.62G  74.0G  10.3G  /builds/builds/jasmith/nfs41-open-bld
pool/webaker                         37.4G  74.0G  17.5G  /builds/webaker
pool/webaker/vbox0                   7.46G  74.0G  7.46G  /builds/webaker/vbox0
pool/webaker/vbox_ds1                1.48G  74.0G  7.82G  /builds/webaker/vbox_ds1
pool/webaker/vbox_ds2                  17K  74.0G    18K  /builds/webaker/vbox_ds2
pool/webaker/vbox_ds3                 118M  74.0G  7.77G  /builds/webaker/vbox_ds3
pool/webaker/vbox_master             7.77G  74.0G  7.77G  /builds/webaker/vbox_master
pool/webaker/vbox_mds                3.07G  74.0G  8.99G  /builds/webaker/vbox_mds

In all fairness to WAFL, I expect that with their virtual volumes, this type of operation is just as painless. The FSID of the filesystem is not really changing - just the point at which it is mounted in the name space. I.e., the operation is on the filesystem and not the individual files. We don't have to recurse to change the inodes for each file.
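
If I wanted to convince myself of that, checking the mountpoint property after the rename should do it - assuming nobody set an explicit mountpoint on these datasets, the SOURCE column should read "default", i.e., the path simply tracks the dataset name:

[root@jhereg ~]> zfs get mountpoint pool/builds/jasmith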

Back to the great ZFS discussion. What I wanted to do was push the existing sub-filesystems down a level in the dataset hierarchy and yet keep the existing name space. I.e., I don't want pool mounted on /builds, but I still want to see /builds/jasmith. I can finish this off thusly:

[root@jhereg ~]> zfs rename pool/webaker pool/builds/webaker
[root@jhereg ~]> zfs list pool
NAME   USED  AVAIL  REFER  MOUNTPOINT
pool  59.9G  74.0G    23K  /builds
[root@jhereg ~]> zfs set mountpoint=/pool pool
[root@jhereg ~]> zfs set mountpoint=/builds pool/builds
[root@jhereg ~]> zfs list
NAME                                  USED  AVAIL  REFER  MOUNTPOINT
pool                                 59.9G  74.0G    21K  /pool
pool/builds                          59.9G  74.0G    21K  /builds
pool/builds/jasmith                  22.5G  74.0G   866K  /builds/jasmith
pool/builds/jasmith/nfs41-instp       773M  74.0G   731M  /builds/jasmith/nfs41-instp
pool/builds/jasmith/nfs41-instp-bld  11.4G  74.0G  11.7G  /builds/jasmith/nfs41-instp-bld
pool/builds/jasmith/nfs41-open        767M  74.0G   712M  /builds/jasmith/nfs41-open
pool/builds/jasmith/nfs41-open-bld   9.62G  74.0G  10.3G  /builds/jasmith/nfs41-open-bld
pool/builds/webaker                  37.4G  74.0G  17.5G  /builds/webaker
pool/builds/webaker/vbox0            7.46G  74.0G  7.46G  /builds/webaker/vbox0
pool/builds/webaker/vbox_ds1         1.48G  74.0G  7.82G  /builds/webaker/vbox_ds1
pool/builds/webaker/vbox_ds2           17K  74.0G    18K  /builds/webaker/vbox_ds2
pool/builds/webaker/vbox_ds3          118M  74.0G  7.77G  /builds/webaker/vbox_ds3
pool/builds/webaker/vbox_master      7.77G  74.0G  7.77G  /builds/webaker/vbox_master
pool/builds/webaker/vbox_mds         3.07G  74.0G  8.99G  /builds/webaker/vbox_mds

And now I can construct other high level filesystems here:

[root@jhereg ~]> zfs create pool/home
[root@jhereg ~]> zfs set mountpoint=/export/home pool/home
[root@jhereg ~]> zfs set sharenfs=rw pool/home
[root@jhereg ~]> zfs create pool/home/tdh
[root@jhereg ~]> share | grep tdh
-@pool/home     /export/home/tdh   rw   ""  

Note that I set up the properties I want on pool/home and count on inheritance to make sure new filesystems under it have the properties I care about.
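
A quick sanity check on that inheritance - asking for the property recursively should show pool/home as "local" and everything underneath it as "inherited from pool/home" (I'm guessing at the output since I didn't capture it):

[root@jhereg ~]> zfs get -r sharenfs pool/home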


Originally posted on Kool Aid Served Daily
Copyright (C) 2008, Kool Aid Served Daily

Friday Apr 21, 2006

ssh wont start

I've had wont powered off for some time; it gets hot in my office. Anyway, I started it back up and I can't ssh into the box. I can't restart ssh either:

# svcadm restart ssh
# ps -ef | grep ssh
    root   250   232   0 10:45:28 console     0:00 grep ssh
# cd /var/svc/log
# tail network-ssh:default.log
[ Apr  2 10:42:55 Stopping because service disabled. ]
[ Apr  2 10:42:55 Executing stop method (:kill) ]
[ Apr  2 10:51:37 Executing start method ("/lib/svc/method/sshd start") ]
[ Apr  2 10:51:44 Method "start" exited with status 0 ]
[ Apr  3 12:46:02 Stopping because service disabled. ]
[ Apr  3 12:46:02 Executing stop method (:kill) ]
[ Apr  3 12:55:53 Executing start method ("/lib/svc/method/sshd start") ]
[ Apr  3 12:56:00 Method "start" exited with status 0 ]
[ Apr 13 22:37:28 Stopping because service disabled. ]
[ Apr 13 22:37:28 Executing stop method (:kill) ]

Thinking I wasn't using the right ps options:

# ps -ef
     UID   PID  PPID   C    STIME TTY         TIME CMD
    root     0     0   0 10:34:00 ?           0:34 sched
    root     1     0   0 10:34:03 ?           0:00 /sbin/init
    root     2     0   0 10:34:03 ?           0:00 pageout
    root     3     0   0 10:34:03 ?           0:00 fsflush
    root     7     1   0 10:34:05 ?           0:03 /lib/svc/bin/svc.startd
    root     9     1   0 10:34:06 ?           0:05 /lib/svc/bin/svc.configd
    root   197     1   0 10:34:21 ?           0:00 /usr/lib/utmpd
    root   110     1   0 10:34:17 ?           0:00 /usr/lib/power/powerd
    root    84     1   0 10:34:17 ?           0:00 /usr/lib/sysevent/syseventd
    root   111     1   0 10:34:17 ?           0:00 /usr/lib/picl/picld
    root   192     7   0 10:34:20 console     0:00 -sh
    root    99     1   0 10:34:17 ?           0:00 /usr/sbin/nscd
    root   253   232   0 10:48:15 console     0:00 ps -ef
    root   232   192   0 10:37:08 console     0:00 tcsh
  daemon   115     1   0 10:34:18 ?           0:00 /usr/lib/crypto/kcfd

Not much at all is running!

Okay, I rebooted to see if anything came up on the console. (Note, I tend to reboot when I don't understand something. It seems to drive other people crazy.) And I see:

cannot mount '/zoo': directory is not empty
use legacy mountpoint to allow this behavior, or use the -O flag
svc:/system/filesystem/local:default: WARNING: /usr/sbin/zfs mount -a failed: exit status 1
Apr 21 10:34:20 svc.startd[7]: svc:/system/filesystem/local:default: Method "/lib/svc/method/fs-local" failed with exit status 95.

Right away, I should check /var/svc/log/system-filesystem-local:default.log, but I'll take my sweet time getting there. :>

First, I find that svcs -x will tell me info:

# svcs -x
svc:/system/filesystem/local:default (local file system mounts)
 State: maintenance since Fri Apr 21 10:34:20 2006
Reason: Start method exited with $SMF_EXIT_ERR_FATAL.
   See: http://sun.com/msg/SMF-8000-KS
   See: /var/svc/log/system-filesystem-local:default.log
Impact: 28 dependent services are not running.  (Use -v for list.)

svc:/network/rpc/gss:default (Generic Security Service)
 State: uninitialized since Fri Apr 21 10:34:07 2006
Reason: Restarter svc:/network/inetd:default is not running.
   See: http://sun.com/msg/SMF-8000-5H
   See: gssd(1M)
Impact: 14 dependent services are not running.  (Use -v for list.)

svc:/network/rpc/smserver:default (removable media management)
 State: uninitialized since Fri Apr 21 10:34:08 2006
Reason: Restarter svc:/network/inetd:default is not running.
   See: http://sun.com/msg/SMF-8000-5H
   See: rpc.smserverd(1M)
Impact: 3 dependent services are not running.  (Use -v for list.)

svc:/application/print/server:default (LP print server)
 State: disabled since Fri Apr 21 10:34:07 2006
Reason: Disabled by an administrator.
   See: http://sun.com/msg/SMF-8000-05
   See: lpsched(1M)
Impact: 1 dependent service is not running.  (Use -v for list.)

If you do a 'svcs -xv' you can get more detailed info. That led me to looking at http://sun.com/msg/SMF-8000-5H, which in turn had me restarting services. That did not work. So I then went to the link to http://sun.com/msg/SMF-8000-KS. This led me to look at the log files.

# tail system-filesystem-local:default.log
[ Apr 13 22:37:39 Executing stop method (null) ]
[ Apr 19 11:21:21 Executing start method ("/lib/svc/method/fs-local") ]
WARNING: /usr/sbin/zfs mount -a failed: exit status 1
[ Apr 19 11:21:22 Method "start" exited with status 95 ]
[ Apr 21 10:14:41 Executing start method ("/lib/svc/method/fs-local") ]
WARNING: /usr/sbin/zfs mount -a failed: exit status 1
[ Apr 21 10:14:42 Method "start" exited with status 95 ]
[ Apr 21 10:34:19 Executing start method ("/lib/svc/method/fs-local") ]
WARNING: /usr/sbin/zfs mount -a failed: exit status 1
[ Apr 21 10:34:20 Method "start" exited with status 95 ]

Sweet, what happens if I try '/usr/sbin/zfs mount -a' manually?

# /usr/sbin/zfs mount -a
cannot mount '/zoo': directory is not empty
use legacy mountpoint to allow this behavior, or use the -O flag
# ls -la /zoo
total 6
drwxr-xr-x   3 root     root         512 Apr  6 21:46 .
drwxr-xr-x  42 root     root        1024 Mar 29 12:33 ..
dr-xr-xr-x   2 root     root         512 Apr  6 21:46 isos

Now, did I manage to, sometime in the recent past, create a directory directly in /zoo? Or is there an 'isos' filesystem?

# zfs list
NAME                   USED  AVAIL  REFER  MOUNTPOINT
zoo                   24.3G   110G  10.5K  /zoo
zoo/home               623M  9.39G  16.5K  /export/zfs
zoo/home/coach        12.5K  9.39G  12.5K  /export/zfs/coach
zoo/home/haynest      12.5K  9.39G  12.5K  /export/zfs/haynest
zoo/home/kanigix         8K  9.39G     8K  /export/zfs/kanigix
zoo/home/loghyr       12.5K  9.39G  12.5K  /export/zfs/loghyr
zoo/home/morgan       12.5K  9.39G  12.5K  /export/zfs/morgan
zoo/home/mrx          12.5K  9.39G  12.5K  /export/zfs/mrx
zoo/home/nfsv2        12.5K  9.39G  12.5K  /export/zfs/nfsv2
zoo/home/nfsv3        12.5K  9.39G  12.5K  /export/zfs/nfsv3
zoo/home/nfsv4         362K  9.39G   264K  /export/zfs/nfsv4
zoo/home/nfsv4@monday  97.5K      -   108K  -
zoo/home/spud         12.5K  9.39G  12.5K  /export/zfs/spud
zoo/home/stacy        12.5K  9.39G  12.5K  /export/zfs/stacy
zoo/home/tdh           622M  9.39G   622M  /export/zfs/tdh
zoo/home/thomas       12.5K  9.39G  12.5K  /export/zfs/thomas
zoo/isos              20.9G   110G  20.9G  /zoo/isos
zoo/local                9K   110G     9K  /zoo/local
zoo/scratch           43.7M   110G  43.7M  /zoo/scratch
zoo/x86               2.78G   110G  2.78G  /zoo/x86

Okay, there is a filesystem. But, if we look at the directory entry again, it is quite recent and I think much newer than when I created the zoo pool:

# ls -la /zoo
total 6
drwxr-xr-x   3 root     root         512 Apr  6 21:46 .
drwxr-xr-x  42 root     root        1024 Mar 29 12:33 ..
dr-xr-xr-x   2 root     root         512 Apr  6 21:46 isos
# ls -la /zoo/isos/
total 4
dr-xr-xr-x   2 root     root         512 Apr  6 21:46 .
drwxr-xr-x   3 root     root         512 Apr  6 21:46 ..

Let's delete the directory and see what happens:

# rm -rf /zoo/isos/
# /usr/sbin/zfs mount -a
#
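
As an aside, the reboot below is probably overkill; now that the start method can succeed, clearing the maintenance state should let SMF bring the dependent services back up (I didn't try it this time):

# svcadm clear svc:/system/filesystem/local:default
# svcs -x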

And let's reboot to get a clean slate:

# ps -ef |

*****************************************************************************
*
* Starting Desktop Login on display :0...
*
* Wait for the Desktop Login screen before logging in.
*
*****************************************************************************
grep ssh
    root   575   217   0 11:00:18 console     0:00 grep ssh
    root   408     1   0 11:00:02 ?           0:00 /usr/lib/ssh/sshd

And does ssh work?

[tdh@adept log]> ssh wont
Password:
Last login: Tue Apr 11 12:41:41 2006 from adept.internal.
Sun Microsystems Inc.   SunOS 5.11      snv_36  October 2007

Sweet!


Originally posted on Kool Aid Served Daily
Copyright (C) 2006, Kool Aid Served Daily

Tuesday Mar 28, 2006

Getting a specific property from all ZFS filesystems

I wanted to see what all of the ZFS filesystems thought they had set for the property of sharenfs. I could do it on a piece-by-piece basis:

[tdh@wont ~coach]> zfs get sharenfs zoo/home/coach
NAME             PROPERTY       VALUE                      SOURCE
zoo/home/coach   sharenfs       rw,anon=0                  inherited from zoo/home

Or I could get all of the properties from one filesystem:

[tdh@wont ~]> zfs get all zoo/home/tdh
NAME             PROPERTY       VALUE                      SOURCE
zoo/home/tdh     type           filesystem                 -
zoo/home/tdh     creation       Mon Mar 20 23:12 2006      -
zoo/home/tdh     used           15.8M                      -
zoo/home/tdh     available      9.98G                      -
zoo/home/tdh     referenced     15.8M                      -
zoo/home/tdh     compressratio  1.36x                      -
zoo/home/tdh     mounted        yes                        -
zoo/home/tdh     quota          none                       default
zoo/home/tdh     reservation    none                       default
zoo/home/tdh     recordsize     128K                       default
zoo/home/tdh     mountpoint     /export/zfs/tdh            inherited from zoo/home
zoo/home/tdh     sharenfs       rw,anon=0                  inherited from zoo/home
zoo/home/tdh     checksum       on                         default
zoo/home/tdh     compression    on                         inherited from zoo/home
zoo/home/tdh     atime          on                         default
zoo/home/tdh     devices        on                         default
zoo/home/tdh     exec           on                         default
zoo/home/tdh     setuid         on                         default
zoo/home/tdh     readonly       off                        default
zoo/home/tdh     zoned          off                        default
zoo/home/tdh     snapdir        visible                    default
zoo/home/tdh     aclmode        groupmask                  default
zoo/home/tdh     aclinherit     secure                     default

But I couldn't figure out how to say any of:

  • zfs share
  • zfs get sharenfs *
  • zfs get sharenfs all

So I decided to use the scripting features in 'zfs list' to do the same thing:

[tdh@wont ~coach]> zfs list -H -o name -t filesystem | xargs zfs get sharenfs
NAME             PROPERTY       VALUE                      SOURCE
zoo              sharenfs       off                        default
zoo/home         sharenfs       rw,anon=0                  local
zoo/home/coach   sharenfs       rw,anon=0                  inherited from zoo/home
zoo/home/haynest  sharenfs       rw,anon=0                  inherited from zoo/home
zoo/home/kanigix  sharenfs       rw,anon=0                  inherited from zoo/home
zoo/home/loghyr  sharenfs       rw,anon=0                  inherited from zoo/home
zoo/home/mrx     sharenfs       rw,anon=0                  inherited from zoo/home
zoo/home/nfsv2   sharenfs       rw,anon=0                  inherited from zoo/home
zoo/home/nfsv3   sharenfs       rw,anon=0                  inherited from zoo/home
zoo/home/nfsv4   sharenfs       rw,anon=0                  inherited from zoo/home
zoo/home/spud    sharenfs       rw,anon=0                  inherited from zoo/home
zoo/home/tdh     sharenfs       rw,anon=0                  inherited from zoo/home
zoo/home/thomas  sharenfs       rw,anon=0                  inherited from zoo/home
zoo/isos         sharenfs       off                        default
zoo/x86          sharenfs       off                        default

The power here is the extensibility built on top of the Unix paradigm of small programs linked with pipes. With a lot of other OSes (and some products built on top of Unix implementations), this would be a PVR or RFE filed with the engineering department: $$$ for the company, but an irate customer base.

Of course I think I'm pretty hot, until eshrock points out:

# zfs list -o name,sharenfs
NAME                  SHARENFS
zoo                   off
zoo/home              rw,anon=0
zoo/home/coach        rw,anon=0
zoo/home/haynest      rw,anon=0
zoo/home/kanigix      rw,anon=0
zoo/home/loghyr       rw,anon=0
zoo/home/mrx          rw,anon=0
zoo/home/nfsv2        rw,anon=0
zoo/home/nfsv3        rw,anon=0
zoo/home/nfsv4        rw,anon=0
zoo/home/nfsv4@monday  -
zoo/home/spud         rw,anon=0
zoo/home/tdh          rw,anon=0
zoo/home/thomas       rw,anon=0
zoo/isos              off
zoo/x86               off

Originally posted on Kool Aid Served Daily
Copyright (C) 2006, Kool Aid Served Daily

Tuesday Mar 21, 2006

Recovering a ZFS filesystem after a reinstall

So last night I reinstalled Nevada b35 on wont. Of course, I didn't bother nuking my ZFS filesystem. And of course it did not show up after the system came up. But that is the same as with a UFS filesystem: for UFS, you need to add an entry to /etc/vfstab to get it mounted automatically upon reboot.
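
For the UFS case, that is a one-line entry along these lines (using the /export/home slice from this box as the example):

#device to mount   device to fsck     mount point   FS type  fsck pass  mount at boot  mount options
/dev/dsk/c0d0s7    /dev/rdsk/c0d0s7   /export/home  ufs      2          yes            -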

What do I need to do with a ZFS filesystem?

  • Let's find it first:
    # zpool import
      pool: zoo
        id: 6577446991347315550
     state: ONLINE
    action: The pool can be imported using its name or numeric identifier.  The
            pool may be active on another system, but can be imported using
            the '-f' flag.
    config:
    
            zoo         ONLINE
              mirror    ONLINE
                c0d1s0  ONLINE
                c0d1s1  ONLINE
              mirror    ONLINE
                c0d1s3  ONLINE
                c0d1s4  ONLINE
    
  • And now let's import it:
    # zpool import zoo
    cannot import 'zoo': pool may be in use from other system
    use '-f' to import anyway
    # zpool import -f zoo
    cannot mount 'zoo/home/nfsv2': mountpoint or dataset is busy
    
  • And let's verify that we can see it:
    # df -h
    Filesystem             size   used  avail capacity  Mounted on
    /dev/dsk/c0d0s0         30G   4.3G    25G    15%    /
    /devices                 0K     0K     0K     0%    /devices
    ctfs                     0K     0K     0K     0%    /system/contract
    proc                     0K     0K     0K     0%    /proc
    mnttab                   0K     0K     0K     0%    /etc/mnttab
    swap                   1.5G   716K   1.5G     1%    /etc/svc/volatile
    objfs                    0K     0K     0K     0%    /system/object
    /usr/lib/libc/libc_hwcap2.so.1
                            30G   4.3G    25G    15%    /lib/libc.so.1
    fd                       0K     0K     0K     0%    /dev/fd
    swap                   1.5G   108K   1.5G     1%    /tmp
    swap                   1.5G    44K   1.5G     1%    /var/run
    /dev/dsk/c0d0s7        5.7G   5.8M   5.6G     1%    /export/home
    /vol/dev/dsk/c1t0d0/"kanigixx86"
                           2.8G   2.8G     0K   100%    /cdrom/"kanigixx86"
    zoo                    134G   100K   128G     1%    /zoo
    zoo/x86                134G   2.8G   128G     3%    /zoo/x86
    zoo/home                10G   100K  10.0G     1%    /export/zfs
    zoo/home/nfsv2          10G    98K  10.0G     1%    /export/zfs/nfsv2
    zoo/home/tdh            10G    98K  10.0G     1%    /export/zfs/tdh
    zoo/home/nfsv3          10G    98K  10.0G     1%    /export/zfs/nfsv3
    zoo/home/nfsv4          10G   108K  10.0G     1%    /export/zfs/nfsv4
    zoo/isos               134G   2.8G   128G     3%    /zoo/isos
    

No idea why 'zpool import -f zoo' spit out a warning. Well, it might have to do with the fact that I didn't do a 'zpool export' before I did the reinstall.
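
The cleaner sequence for next time would be to export the pool before the reinstall and import it afterwards; the import shouldn't need '-f' in that case:

# zpool export zoo
...reinstall...
# zpool import zoo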

And finally, let's make sure we can see the contents across the network on sandman:

# showmount -e wont
export list for wont:
/export/zfs       (everyone)
/export/zfs/tdh   (everyone)
/export/zfs/nfsv3 (everyone)
/export/zfs/nfsv4 (everyone)
/export/zfs/nfsv2 (everyone)
# cd /net/wont/export
# ls -la
total 3
dr-xr-xr-x   2 root     root           2 Mar 21 15:36 .
dr-xr-xr-x   2 root     root           2 Mar 21 15:36 ..
dr-xr-xr-x   1 root     root           1 Mar 21 15:36 zfs
# cd zfs
# ls -la
total 7
drwxr-xr-x   6 root     sys            6 Mar 20 23:12 .
dr-xr-xr-x   2 root     root           2 Mar 21 15:36 ..
dr-xr-xr-x   3 root     root           3 Mar 21 15:35 .zfs
dr-xr-xr-x   1 root     root           1 Mar 21 15:36 nfsv2
dr-xr-xr-x   1 root     root           1 Mar 21 15:36 nfsv3
dr-xr-xr-x   1 root     root           1 Mar 21 15:36 nfsv4
dr-xr-xr-x   1 root     root           1 Mar 21 15:36 tdh
# cd tdh
# ls -la
total 4
drwxr-xr-x   2 1066     staff          2 Mar 20 23:12 .
drwxr-xr-x   6 root     sys            6 Mar 20 23:12 ..
dr-xr-xr-x   3 root     root           3 Mar 21 15:36 .zfs

Obviously, the userid of 1066 has not been added to sandman. Notice that this is an NFSv4 mount and the ID mapping must be working correctly. I.e., if it were not, we would see nobody. An NFSv3 mount would show 1066 regardless of the ID domain settings.
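
If the mapping ever looks suspect, the first thing I would check is that both boxes agree on the NFSv4 domain used by nfsmapid (assuming the stock /etc/default/nfs configuration):

# grep NFSMAPID_DOMAIN /etc/default/nfs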


Originally posted on Kool Aid Served Daily
Copyright (C) 2006, Kool Aid Served Daily