Monday Dec 19, 2011

More robust control of zfs in Solaris Cluster 3.x

In some situations there is a possibility that a zpool will not be exported correctly when it is controlled by a SUNW.HAStoragePlus resource. Please refer to the details in Document 1364018.1: Potential Data Integrity Issues After Switching Over a Solaris Cluster High Availability Resource Group With Zpools.

I'd like to mention this because ZFS is used more and more in Solaris Cluster environments. Therefore I highly recommend installing the following patches to get a more reliable Solaris Cluster environment in combination with zpools on SC3.3 and SC3.2. So, if you are already running such a setup, start planning NOW to install the following patch revision (or higher) for your environment...

Solaris Cluster 3.3:
145333-10 Oracle Solaris Cluster 3.3: Core Patch for Oracle Solaris 10
145334-10 Oracle Solaris Cluster 3.3_x86: Core Patch for Oracle Solaris 10_x86

Solaris Cluster 3.2:
144221-07 Solaris Cluster 3.2: CORE patch for Solaris 10
144222-07 Solaris Cluster 3.2: CORE patch for Solaris 10_x86
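To verify whether an installed core patch already meets the required revision, you can compare the revision suffix of the patch ID. A minimal sketch, assuming the usual `base-rev` patch ID format (the helper name `patch_rev_ok` and the hard-coded IDs are illustrative; on a real Solaris 10 system you would feed it the actual output of `showrev -p`):

```shell
# Compare an installed patch ID (e.g. "145333-09") against a required
# minimum (e.g. "145333-10"). Succeeds when the base IDs match and the
# installed revision is at least the required revision.
patch_rev_ok() {
  installed_base=${1%-*}; installed_rev=${1#*-}
  required_base=${2%-*};  required_rev=${2#*-}
  [ "$installed_base" = "$required_base" ] && \
    [ "$installed_rev" -ge "$required_rev" ]
}

# Illustrative check; on a real system the installed ID would come
# from something like: showrev -p | grep 145333
if patch_rev_ok "145333-09" "145333-10"; then
  echo "patch level OK"
else
  echo "patch update required"
fi
```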

Friday May 08, 2009

Administration of zpool devices in Sun Cluster 3.2 environment


Configure zpools carefully in Sun Cluster 3.2, because it is possible to use the same physical device in different zpools on different nodes at the same time. This means the zpool command does NOT check whether the physical device is already in use by another zpool on another node. For example, if node1 has an active zpool using device c3t3d0, it is still possible to create a new zpool with c3t3d0 on another node (assumption: c3t3d0 is the same shared device on all cluster nodes).

Output from testing:


When problems occurred due to such administration mistakes, errors like the following have been seen:

NODE1# zpool import tank
cannot import 'tank': I/O error

NODE2# zpool import tankothernode
cannot import 'tankothernode': one or more devices is currently unavailable

NODE2# zpool import tankothernode
cannot import 'tankothernode': no such pool available

NODE1# zpool import tank
cannot import 'tank': pool may be in use from other system, it was last accessed by NODE2 (hostid: 0x83083465) on Fri May 8 13:34:41 2009
use '-f' to import anyway
NODE1# zpool import -f tank
cannot import 'tank': one or more devices is currently unavailable


Furthermore, the zpool command also uses the disk without any warning if it is already part of a Solaris Volume Manager diskset or a Symantec (Veritas) Volume Manager diskgroup.

Summary for Sun Cluster environment:
ALWAYS MANUALLY CHECK THAT THE DEVICE YOU WANT TO USE FOR A ZPOOL IS FREE!!!
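One way to perform such a manual check is to search the `zpool status` output of every cluster node for the device before creating a new pool. A minimal sketch with canned output (the helper name `device_in_pool_output` and the sample text are illustrative; on a real cluster you would feed it the actual `zpool status` output from each node, and also check `metaset` and `vxdisk list` for SVM and VxVM usage):

```shell
# Check whether a device name appears in "zpool status" output.
# $1 = device name, stdin = zpool status output from one node.
device_in_pool_output() {
  grep -qw "$1"
}

# Canned output standing in for "zpool status" run on node1:
sample_output="  pool: tank
 state: ONLINE
config:
        NAME        STATE
        tank        ONLINE
          c3t3d0    ONLINE"

# Before "zpool create ... c3t3d0" on another node, test the device:
if printf '%s\n' "$sample_output" | device_in_pool_output c3t3d0; then
  echo "c3t3d0 is already in use"
fi
```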


This is addressed in bug 6783988.

About

I'm still mostly blogging about Solaris Cluster and support, whether for Sun Microsystems or Oracle. :-)
