By hstsao on May 17, 2009
ZFS is still a very young file system, and new features arrive with every Solaris update and ZFS patch.
This blog talks about /etc/zfs/zpool.cache, the file working behind the scenes of zpool.
/etc/zfs/zpool.cache exists to speed up pool import at boot, but that can get in the way of HA-ZFS under Sun Cluster: the automatic import from the cache file happens before Sun Cluster comes into play.
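One way to keep the cluster framework in control is to keep a failover pool out of the cache file entirely, so it is never auto-imported at boot. A minimal sketch using the cachefile pool property (the pool name tank is an assumption; check that your kernel patch level supports this property):
# zpool set cachefile=none tank     (pool is no longer recorded in /etc/zfs/zpool.cache)
# zpool get cachefile tank          (verify the setting)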
In the beginning, ZFS was designed to panic the system in the event of a catastrophic write failure to a pool; since then, ZFS has introduced the failmode pool property (PSARC 2007/567).
The default behavior will be to "wait" for manual intervention before
allowing any further I/O attempts. Any I/O that was already queued would
remain in memory until the condition is resolved. This error condition can
be cleared by using the 'zpool clear' subcommand, which will attempt to resume
any queued I/Os.
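For example, once the underlying device or path has recovered (a sketch, assuming a pool named tank):
# zpool status -x          (confirm which pool is in the faulted/wait state)
# zpool clear tank         (clear the error condition and resume the queued I/Os)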
The "continue" mode returns EIO to any new write request but attempts to
satisfy reads. Any write I/Os that were already in-flight at the time
of the failure will be queued and may be resumed using 'zpool clear'.
Finally, the "panic" mode provides the existing behavior explained above: the system panics.
The syntax for setting the pool property utilizes the "set" subcommand defined
in PSARC 2006/577:
# zpool set failmode=continue pool
# zpool create -o failmode=continue pool
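To check what a pool is currently set to (again assuming a pool named tank):
# zpool get failmode tank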
ZFS and Sun Cluster
Sun Alert Solution 245626: ZFS Pool Corruption May Occur With Sun Cluster 3.2 Running Solaris 10 with Patch 137137-09 or 137138-09
This issue is addressed in the following releases:
* Solaris 10 with patch 139579-02 or later (obsoleted by 139555-08)
* Solaris 10 with patch 139580-02 or later (obsoleted by 139556-08)
To avoid this problem, install Solaris 10 patch 139579-02 (for SPARC) or 139580-02 (for x86) immediately after you install 137137-09 or 137138-09 but before you reboot the cluster nodes.
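In practice the ordering on a SPARC node looks like this (a sketch, not a full procedure; patch IDs are from the Sun Alert above, and I assume the unpacked patches live in /var/tmp):
# cd /var/tmp
# patchadd 137137-09           (the kernel patch that introduces the problem)
# patchadd 139579-02           (the ZFS fix, applied before any reboot)
# init 6                       (reboot the node only after both patches are in place)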
Also apply the latest Sun Cluster 3.2u2 patch for Solaris 10.
There are also some IDR patches for ZFS performance.
To boot the system without importing any zpool:
boot -m milestone=none
To return to normal:
svcadm milestone all
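Putting it together, a rough recovery sketch for when a bad pool recorded in zpool.cache keeps a node from booting (assuming SPARC OBP; the remount step is needed because root comes up read-only at this milestone, and moving the cache file aside is a common workaround, not an official procedure):
ok boot -m milestone=none
# mount -o remount,rw /                              (make root writable)
# mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bad   (so no pool is auto-imported)
# svcadm milestone all                               (resume the normal boot)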