Evacuating resource groups from nodes


The new command "clresourcegroup evacuate" switches all resource groups off of a given node. This is a narrower form of the command "clnode evacuate", which switches both resource groups and disk device groups off of a given node. In the old command line interface, the "scswitch -S" command performed the same function as "clnode evacuate" does in the new command line interface. However, the old command line interface had no exact counterpart to "clresourcegroup evacuate".
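
For example, with a hypothetical node name "node1", the new and old forms look roughly like this (a sketch, not an excerpt from the documentation):

    # New CLI: move only resource groups off node1
    # (the trailing "+" operand selects all resource groups)
    clresourcegroup evacuate -n node1 +

    # New CLI: move both resource groups and device groups off node1
    clnode evacuate node1

    # Old CLI: same effect as "clnode evacuate node1"
    scswitch -S -h node1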
 
Recently, a co-worker asked me this question about node evacuation:

I have an nfs resource group configured and online on node1. I evacuate node1 first, and the resource group switches over to node2. Immediately after the nfs resource group goes online on node2, I evacuate node2. But then the nfs resource group is offline on both nodes.

If I wait for some time after the first evacuation to do the 2nd one, then the nfs resource group will be online on node1.

Is this expected? Does it happen for all agents?
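
For concreteness, the sequence described in the question corresponds roughly to the following (node and resource group names are examples, not the questioner's actual configuration):

    # nfs-rg starts out online on node1
    clresourcegroup evacuate -n node1 +    # nfs-rg switches over to node2

    # run immediately afterwards:
    clresourcegroup evacuate -n node2 +    # nfs-rg ends up offline on both nodes

    # check where the resource group is currently online
    clresourcegroup status nfs-rg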

The answer is "yes" and "yes". This is expected behavior, and it applies to all agents. In the new command line interface, it is controlled by the -T option of clresourcegroup(1CL). Here is an excerpt from the man page:

-T seconds
--time=seconds
--time seconds

Specifies the number of seconds to keep resource groups from switching back onto a node or zone after you have evacuated resource groups from the node or zone.

You can use this option only with the 'evacuate' subcommand. You must specify an integer value between 0 and 65535 for seconds. If you do not specify a value, 60 seconds is used by default.

Resource groups cannot fail over or automatically switch over onto a node or zone while that node or zone is being evacuated. The -T option specifies that resource groups are not to be brought online by the RGM on the evacuated node or zone for a period of _seconds_ seconds after the evacuation has completed. You can override the -T timer by switching a resource group onto the evacuated node or zone, using the clrg 'switch' or 'online' subcommand with the -n option. When such a switch is done, the -T timer is immediately considered to have expired for that node or zone. However, switchover commands such as 'clrg online' or 'clrg remaster' without the -n flag will continue to respect the -T timer and will avoid switching any resource groups onto the evacuated node.

To modify the evacuate behavior, you can specify a different value of -T than the default of 60 seconds.
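
For instance, here is a sketch of evacuating with a longer hold-off period and then overriding the timer, using the options described in the man page excerpt above (node and group names are hypothetical):

    # Evacuate node1 and keep resource groups off it for 300 seconds
    clresourcegroup evacuate -T 300 -n node1 +

    # Override the timer early by explicitly switching a group back onto node1
    clresourcegroup switch -n node1 nfs-rg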

In the old command-line interface, the feature corresponding to -T is the -K option, used with the scswitch -S command.
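
In old-CLI terms, the same evacuation with a hold-off period would look something like this (node name is hypothetical):

    # Evacuate node1 and keep resource groups off it for 300 seconds (old CLI)
    scswitch -S -h node1 -K 300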

Martin Rattner
Sun Cluster Engineering

Comments:

Hi, really superb... I have never seen so much information on cluster. I do have a query: what will happen in a 2-node cluster when all the private interconnects fail? Will the cluster hang on the primary, will it fail over to the secondary, or will both nodes hang? Guna

Posted by guna on March 22, 2007 at 12:57 AM PDT #

Guna, in a 2-node cluster, we typically use a shared storage device as a quorum disk. So when the nodes can't communicate, they will each try to grab the disk. Whichever node is able to do that first will remain in the cluster, and the other node will be brought down. In general, we use a mix of node membership votes and disk votes to decide a majority. In case of communication failure, the nodes that are part of this majority are allowed to remain up, and the others will panic themselves.

Posted by Suraj Verma on March 22, 2007 at 04:44 PM PDT #

A longer description of quorum devices is in the Sun Cluster concepts guide. In particular, see http://docs.sun.com/app/docs/doc/819-2969/6n57kl13o?a=view

Furthermore, if you need to post general questions about Sun Cluster, you can always post them to the Sun Cluster forum (see http://forum.java.sun.com/forum.jspa?forumID=842) where someone should be able to help answer your question.

Posted by Tim Read on March 22, 2007 at 07:57 PM PDT #

Guna, if the private interconnect fails entirely on a 2-node cluster, this is called a "partition" or "split brain". It raises the possibility of data corruption if each node thinks that it is the only surviving node and both of them attempt to write to shared storage. To avoid this problem, Sun Cluster uses a quorum device. Only one of the two nodes can achieve quorum; that node will remain up and will continue to provide services to clients. The other node will be immediately halted and removed from the cluster membership.

Posted by Martin Rattner on March 23, 2007 at 03:43 AM PDT #

Sorry for the duplicated comment!

Posted by Martin Rattner on March 23, 2007 at 03:44 AM PDT #

Hi Sun gurus,
I have a four-node Sun Cluster. I want to take 3 nodes offline (break the cluster), along with all their resources and device groups, perform a reinstall of the web applications, and then rejoin them to the cluster (ONLINE) with the 4th node.

Note that our four node cluster is configured as below.

runtime 1 and 2 - active-active
manager 1 and 2 - active-standby

I want to take the runtime 1, runtime 2, and manager 1 nodes off the cluster, reinstall the applications, and then join them back into the cluster.

Our applications are installed on local disks, and media data is stored on SAN disks.

Can anyone suggest a solution?

Thanks,
Shiva

Posted by Shiva Duddeda on August 07, 2009 at 06:44 AM PDT #

Hello friends,

Kudos for this great blog.

Can someone please help me with this: if I had to boot each node out of cluster mode (i.e., with boot -x), is there a way to start the cluster services and join both nodes, yet without bringing any RGs online (for some reason the shared data disks are not available right now)?

I need to be able to query the cluster with scstat after starting it and joining both nodes, but I do not want it to attempt to bring any RGs online while coming up.

Many thanks for your kind help.

Regards

Ahmad

Posted by Ahmad on October 18, 2011 at 10:33 AM PDT #

Many thanks, Martin, for your kind and generous reply and help. Very much appreciated.

Ahmad

Posted by Ahmad on October 31, 2011 at 10:58 PM PDT #
