Administration of zpool devices in Sun Cluster 3.2 environment


Configure zpools carefully in Sun Cluster 3.2, because it is possible to use the same physical device in different zpools on different nodes at the same time. The zpool command does NOT check whether the physical device is already in use by another zpool on another node. For example, if node1 has an active zpool containing the device c3t3d0, it is still possible to create a new zpool with c3t3d0 on another node (assuming c3t3d0 is the same shared device on all cluster nodes).
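For illustration, a hedged sketch of the sequence that can lead to the errors shown below (the pool names tank and tankothernode and the device c3t3d0 match the output further down; whether zpool create asks for -f depends on the ZFS version, but nothing cluster-aware prevents the second command):

NODE1# zpool create tank c3t3d0
NODE2# zpool create tankothernode c3t3d0

From this point on both nodes write ZFS labels to the same shared disk, and each pool can destroy the other's configuration.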

Output of testing...


If problems occur due to such administration mistakes, errors like the following have been seen:

NODE1# zpool import tank
cannot import 'tank': I/O error

NODE2# zpool import tankothernode
cannot import 'tankothernode': one or more devices is currently unavailable

NODE2# zpool import tankothernode
cannot import 'tankothernode': no such pool available

NODE1# zpool import tank
cannot import 'tank': pool may be in use from other system, it was last accessed by NODE2 (hostid: 0x83083465) on Fri May 8 13:34:41 2009
use '-f' to import anyway
NODE1# zpool import -f tank
cannot import 'tank': one or more devices is currently unavailable


Furthermore, the zpool command also uses a disk without any warning if it is already used by a Solaris Volume Manager diskset or a Symantec (Veritas) Volume Manager diskgroup.

Summary for Sun Cluster environments:
ALWAYS MANUALLY CHECK THAT THE DEVICE YOU WANT TO USE FOR A ZPOOL IS REALLY FREE!
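A hedged sketch of such a manual check: on every node that can see the shared device, verify that the disk is not already part of a ZFS pool (zpool status), an SVM diskset (metaset), or a VxVM diskgroup (vxdisk list). For example, for the device c3t3d0:

NODE1# zpool status | grep c3t3d0
NODE1# metaset
NODE1# vxdisk list

Only if the device shows up nowhere on any cluster node should it be used for a new zpool.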


This is addressed in bug 6783988.

Comments:

Can we have multiple zpools with HAStoragePlus in one failover resource group?
One Zpool=Data
One Zpool=Log

Posted by carina on May 26, 2009 at 09:33 AM CEST #

Yes, e.g.:
# clrg create zfs-rg
# clrs create -g zfs-rg -t SUNW.HAStoragePlus -p Zpools=Data,Log zfs-hastp-rs
or
# clrg create zfs-rg
# clrs create -g zfs-rg -t SUNW.HAStoragePlus -p Zpools=Data zfs1-hastp-rs
# clrs create -g zfs-rg -t SUNW.HAStoragePlus -p Zpools=Log zfs2-hastp-rs
Both are valid; I prefer the first option...

Posted by jschleich on May 27, 2009 at 01:24 AM CEST #

Hello Juergen, we were affected by this issue. It was "great fun", but thanks to Sun Support for the great support. :-)

You guys are doing a wonderful job at Mission Critical.
Well done. :)

Cheers from Munich. :)

Posted by Bernd Helber on August 25, 2009 at 02:04 AM CEST #

Hi jschleich,

I have the following error. How could I recover my zpool?

bash-3.00# zpool import
  pool: logs
    id: 4242057460003747848
 state: FAULTED
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
        devices and try again.
   see: http://www.sun.com/msg/ZFS-8000-6X
config:

        logs          UNAVAIL  missing device
          c1t2d0s7    ONLINE
          c1t3d0s7    ONLINE
          c1t4d0s7    ONLINE
          c1t5d0s7    ONLINE
          c2t8d0s7    ONLINE
          c2t9d0s7    ONLINE
          c2t10d0s7   ONLINE
          c2t11d0s7   ONLINE
          c2t12d0s7   ONLINE
          c2t13d0s7   ONLINE

        Additional devices are known to be part of this pool, though their
        exact configuration cannot be determined.

but when I run the import for the pool I get the following error...

bash-3.00# zpool import -f logs
cannot import 'logs': one or more devices is currently unavailable

bash-3.00# zpool status -v logs
cannot open 'logs': no such pool

Please help... I'm stuck with this.

Regards,
Arvindh.L

Posted by Arvindh L on September 07, 2009 at 10:56 AM CEST #

Hi Arvindh,
with this short description it is hard to say what is going on. I guess all drives are online; if not, you should check that all drives are accessible from Solaris. Otherwise maybe you hit RFE (Request for Enhancement) 6538021 ( http://sunsolve.sun.com/search/document.do?assetkey=1-1-6538021 ). For further investigation, please open a service request through the Sun Microsystems support channels.
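A hedged sketch of such a basic device check on Solaris (these commands are only an illustration, not part of the original answer):

bash-3.00# echo | format
bash-3.00# cfgadm -al
bash-3.00# zpool import -d /dev/dsk

format lists the disks the OS currently sees, cfgadm -al shows the attachment point status, and zpool import -d /dev/dsk scans that directory for importable pool labels.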
Best Regards,
Juergen

Posted by jschleich on September 08, 2009 at 10:54 AM CEST #

Hello Juergen,
We have a clustered environment with two Solaris boxes.
/oracle01 is a zpool mounted on node2. That zpool is running out of space and we have to add space to it.

We have EMC LUNs used to configure this. The LUNs are not used by any other zpools.
============================================

root@ssb-voor-02 bin$ zpool status -v oracle01

        NAME           STATE     READ WRITE CKSUM
        oracle01       ONLINE       0     0     0
          emcpower4c   ONLINE       0     0     0
          emcpower14c  ONLINE       0     0     0
====================

My question is: is it OK to run the command "zpool add oracle01 <newsan>", or do we need to run any cluster-specific commands as well?
This resource group has only one ZFS resource and only this zpool.

Posted by guest on September 28, 2011 at 05:19 AM CEST #

Yes, it is OK to run the mentioned command. Solaris Cluster uses the zpool name for switchover/failover. But I recommend doing a switchover of the resource group afterwards to test that the cluster works properly with the new shared device.
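A minimal sketch of the two steps (the device emcpower15c and the resource group name oracle01-rg are placeholders for illustration only):

root@ssb-voor-02 bin$ zpool add oracle01 emcpower15c
root@ssb-voor-02 bin$ clrg switch -n <other-node> oracle01-rg

clrg switch moves the resource group, and with it the zpool, to the other node, which verifies that the new device is visible and importable there as well.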
Best Regards,
Juergen

Posted by Juergen on October 05, 2011 at 01:26 PM CEST #

Posted by guest on September 28, 2011 at 10:19 AM CEST #
Now we have a different activity on the same servers.
All the EMC LUNs will be migrated to HDS (Hitachi).
Below is what has been planned so far:
The SAN team will sync the EMC LUNs to the HDS LUNs.
We shall add a quorum disk from HDS and remove the EMC quorum disk via scsetup.
After that we are going to offline the cluster resource groups and remove the EMC LUNs from the cluster nodes.
Run #cldevice refresh
Attach the HDS LUNs and run #cldevice populate
Then online the cluster resource groups.

Will the zpool get imported? We understand that new DID numbers may get generated for the HDS LUNs.

The EMC LUNs are already synced to HDS.
But how are the EMC pseudo names (e.g. emcpower14c), the zpool header, and the DID devices going to be affected? That's my doubt.

Posted by sunny on October 15, 2011 at 03:22 AM CEST #

Hi Juergen,

We have a SAN migration coming up, and the device names for the zpools will be changed. We can do 'zpool import -p device=new_device kpool', right? But what change is required at the cluster level? Kindly list the major steps in sequence.

Regards

kas

Posted by kas on October 20, 2011 at 04:55 PM CEST #

Hi,

for sunny:
yes, there are a lot of possibilities to migrate data. Normally I would say make all LUNs available (EMC and HDS) and then use zfs snapshot and zfs send/receive to migrate the data. The new DID numbers for the HDS LUNs are independent of the zpool configuration, because normally the physical devices are used for the zpool. If zpool import can read all zpool headers correctly after the data sync, then it could work…
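A hedged sketch of such a snapshot-based migration (oldpool and newpool are placeholders, not names from the comments above):

# zpool create newpool <HDS-device>
# zfs snapshot -r oldpool@migrate
# zfs send -R oldpool@migrate | zfs receive -F newpool

zfs send -R transfers the whole dataset tree including snapshots and properties; afterwards the HAStoragePlus resource can be pointed at the new pool (see the steps for kas below).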

for kas:
If you change the zpool name that is used by HAStoragePlus, then you also need to change the zpool name in the HAStoragePlus resource:
# Disable the HAStoragePlus resource
# Change the Zpools property with clrs set -p Zpools=<newZpoolname> <resourcename>
# Enable the HAStoragePlus resource
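A minimal sketch with concrete commands, assuming the resource is the zfs-hastp-rs from the example further up and the new pool is <newZpoolname>:

# clrs disable zfs-hastp-rs
# clrs set -p Zpools=<newZpoolname> zfs-hastp-rs
# clrs enable zfs-hastp-rs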

All, please ask such questions in the Oracle Solaris Cluster Community:
https://communities.oracle.com/portal/server.pt/community/oracle_solaris_cluster/393

Thanks, Juergen

Posted by Juergen on October 24, 2011 at 09:43 AM CEST #
