Sunday Aug 26, 2007

Why is SUNW.nfs required to configure HA-NFS over ZFS in Solaris Cluster?


The typical way of configuring a highly available NFS file system in a Solaris Cluster environment is by using the HA-NFS agent (SUNW.nfs). The SUNW.nfs agent handles the sharing of the file systems that have to be exported.

With support for ZFS as a failover file system in SUNW.HAStoragePlus, there are two possible ways to configure a highly available NFS file system with ZFS as the underlying file system:

  1. Enabling the ZFS sharenfs property (i.e. sharenfs=on) for the file systems of the zpool, without using SUNW.nfs.
  2. Disabling the ZFS sharenfs property (i.e. sharenfs=off) for the file systems of the zpool and letting SUNW.nfs perform the actual share.
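The two options come down to a single ZFS property. A minimal sketch, where the pool and file system names (hanfspool/export) are illustrative:

```shell
# Option 1: let ZFS itself export the file system (no SUNW.nfs).
zfs set sharenfs=on hanfspool/export

# Option 2: keep ZFS sharing disabled; SUNW.nfs performs the share.
zfs set sharenfs=off hanfspool/export
```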

Of these two approaches, HA-NFS works correctly only when the SUNW.nfs agent is used (option 2). This blog explains the rationale behind the requirement of SUNW.nfs for configuring a highly available NFS file system with ZFS in a cluster environment.

Lock reclaiming by clients (NFSv2/NFSv3)

statd(1M) keeps track of the clients and processes holding locks on the server. The server can use this information to allow clients to reclaim their locks after an NFS server reboot or failover.

When a file system is shared by setting the ZFS sharenfs property to on, without SUNW.nfs, the lock information is kept under /var/statmon, which is on a local file system and specific to one host. In the case of a failover, the stored information is therefore not available on the node to which the service fails over, and the server is unable to ask clients to reclaim their locks.

The SUNW.nfs agent overcomes this problem by keeping the monitor information on stable storage (on multi-ported disks) that is accessible from all cluster nodes.

State information of clients (NFSv4)

NFSv4 is a stateful protocol in which nfsd(1M) keeps track of client state, such as opened and locked files, in stable storage.

When a file system is shared by setting the ZFS sharenfs property to on, the stable storage is under /var/nfs, which is not accessible from all nodes of the cluster. In a server failover scenario, the clients' reclaim requests will then fail, which might cause client applications to exit (unless they catch the SIGLOST signal).

The SUNW.nfs agent overcomes this problem by keeping the state information on stable storage that is shared among the cluster nodes, which lets the server have clients reclaim their state.


The pictorial difference is shown below. 

[Figure: HA-NFS without SUNW.nfs]

[Figure: HA-NFS using SUNW.nfs]
More precisely, the ZFS sharenfs property is not designed to work in a Solaris Cluster environment, and hence using the SUNW.nfs agent is a must for HA-NFS over ZFS.

P.S.:
The stable storage where SUNW.nfs keeps its information is on the highly available ZFS file system (given by the value of the Pathprefix property associated with the SUNW.nfs resource).
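For completeness, a hedged sketch of how such a setup is typically created. The resource group, resource, pool names, and paths below are assumptions, and the logical hostname and SUNW.HAStoragePlus resources that a real deployment also needs are omitted for brevity:

```shell
# Register the HA-NFS resource type (once per cluster).
clresourcetype register SUNW.nfs

# Pathprefix points into the highly available ZFS file system;
# SUNW.nfs keeps its state (statmon, NFSv4 state) under it.
# Group name and path are illustrative.
clresourcegroup create -p Pathprefix=/hanfspool/admin nfs-rg

# SUNW.nfs reads its share options from a dfstab file under Pathprefix:
#   /hanfspool/admin/SUNW.nfs/dfstab.nfs-rs
# containing e.g.:  share -F nfs -o rw /hanfspool/export
clresource create -g nfs-rg -t SUNW.nfs nfs-rs
```

Because the state lives under Pathprefix on the failover ZFS file system, whichever node hosts the resource group also sees the lock and client state, which is exactly what the sharenfs=on approach lacks.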

Venkateswarlu Tella (Venku)
Solaris Cluster Engineering
