Wednesday Aug 20, 2014

Oracle Solaris Cluster 4.2 Event and its SNMP Interface

Background

The cluster event SNMP interface was first introduced in the Oracle Solaris Cluster 3.2 release. The details of the SNMP interface are described in the Oracle Solaris Cluster System Administration Guide and the Cluster 3.2 SNMP blog.

Prior to the Oracle Solaris Cluster 4.2 release, when the event SNMP interface was enabled, it took effect only on WARNING or higher severity events. Events with WARNING or higher severity usually report the status change of a cluster component from ONLINE to OFFLINE, so the interface worked like an alert/alarm interface when components in the cluster went out of service (changed to OFFLINE). Consumers of this interface could not get notifications for all status and configuration changes in the cluster.

Cluster Event and its SNMP Interface in Oracle Solaris Cluster 4.2

The user model of the cluster event SNMP interface is the same as in the previous releases. The cluster event SNMP interface is not enabled by default on a freshly installed cluster; you can enable it by using the cluster event SNMP administration commands on any cluster node. Usually, you only need to enable it on one cluster node or a subset of the cluster nodes, because all cluster nodes get the same cluster events. When it is enabled, it is responsible for two basic tasks.
• Logs up to the 100 most recent NOTICE or higher severity events to the MIB.
• Sends SNMP traps to the hosts that are configured to receive the above events.


The changes in the Oracle Solaris Cluster 4.2 release are:
1) Introduction of the NOTICE severity for cluster configuration and status change events.
The NOTICE severity is introduced for cluster events in the 4.2 release. It falls between the INFO and WARNING severities. All severities for cluster events are now (from low to high):
• INFO (not exposed to the SNMP interface)
• NOTICE (newly introduced in the 4.2 release)
• WARNING
• ERROR
• CRITICAL
• FATAL

In the 4.2 release, the cluster event system is enhanced to make sure that at least one event with NOTICE or higher severity is generated whenever there is a configuration or status change of a cluster component instance. In other words, the cluster events with NOTICE or higher severity cover all status and configuration changes in the cluster (including all component instances). A cluster component instance here refers to an instance of one of the following cluster components: node, quorum, resource group, resource, network interface, device group, disk, zone cluster, and geo cluster heartbeat. For example, pnode1 is an instance of the cluster node component, and oracleRG is an instance of the cluster resource group component.

With the introduction of the NOTICE severity, when the cluster event SNMP interface is enabled, the consumers of the SNMP interface get notifications for all status and configuration changes in the cluster. A third-party system management platform that integrates with the cluster SNMP interface can raise and clear alarms programmatically, because it gets notifications for status changes from ONLINE to OFFLINE and also from OFFLINE to ONLINE.

2) Customization for the cluster event SNMP interface
• Up to 100 events are logged to the MIB. When the number of events stored in the MIB reaches 100 and a new qualified event arrives, the oldest event is removed before the new event is stored in the MIB (FIFO, first in, first out). 100 is both the default and the minimum value for the number of events stored in the MIB. It can be changed by setting the log_number property with the clsnmpmib command. The maximum value that can be set for the property is 500.

• The cluster event SNMP interface takes effect on NOTICE or higher severity events. NOTICE is also the default and lowest event severity for the SNMP interface. The interface can be configured to take effect only on higher severity events, such as WARNING or higher, by setting the min_severity property to WARNING. When the min_severity property is set to WARNING, the cluster event SNMP interface behaves the same as in the releases prior to 4.2.

Examples:
• Set the number of events stored in the MIB to 200
# clsnmpmib set -p log_number=200 event
• Set the interface to take effect on WARNING or higher severity events.
# clsnmpmib set -p min_severity=WARNING event

Administering the Cluster Event SNMP Interface

Oracle Solaris Cluster provides the following three commands to administer the SNMP interface:
clsnmpmib: administers the SNMP interface and the MIB configuration.
clsnmphost: administers the hosts that receive the SNMP traps.
clsnmpuser: administers SNMP users (specific to the SNMP v3 protocol).

Only clsnmpmib is changed in the 4.2 release to support the aforementioned customization of the SNMP interface. Here are some simple examples using the commands.

Examples:
1. Enable the cluster event SNMP interface on the local node
# clsnmpmib enable event
2. Display the status of the cluster event SNMP interface on the local node
# clsnmpmib show -v
3. Configure my_host to receive the cluster event SNMP traps.
# clsnmphost add my_host

The cluster event SNMP interface uses the common agent container SNMP adaptor, which is based on the JDMK SNMP implementation, as its SNMP agent infrastructure. By default, the port number for the SNMP MIB is 11161, and the port number for the SNMP traps is 11162. The port numbers can be changed by using the cacaoadm command. For example,
# cacaoadm list-params
Print all changeable parameters. The output includes the snmp-adaptor-port and snmp-adaptor-trap-port properties.
# cacaoadm set-param snmp-adaptor-port=1161
Set the SNMP MIB port number to 1161.
# cacaoadm set-param snmp-adaptor-trap-port=1162
Set the SNMP trap port number to 1162.
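Depending on your configuration, the common agent container may need to be stopped before these parameters can be changed and started again afterwards for the change to take effect; a minimal sketch:
# cacaoadm stop
# cacaoadm set-param snmp-adaptor-port=1161
# cacaoadm start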

The cluster event SNMP MIB is defined in sun-cluster-event-mib.mib, which is located in the /usr/cluster/lib/mib directory. Its OID is 1.3.6.1.4.1.42.2.80, which can be used to walk through the MIB data. Again, for more detailed information about the cluster event SNMP interface, please see the Oracle Solaris Cluster 4.2 System Administration Guide.
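For example, with the net-snmp tools installed on an administrative host, the MIB can be walked as sketched below; the host name cluster-node, the default MIB port 11161, and an SNMPv2c community named public are assumptions for illustration.
# snmpwalk -v2c -c public cluster-node:11161 1.3.6.1.4.1.42.2.80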

- Leland Chen 

Monday Aug 18, 2014

SCHA API for resource group failover / switchover history

The Oracle Solaris Cluster framework keeps an internal log of cluster events, including switchover and failover of resource groups. These logs can be useful to Oracle support engineers for diagnosing cluster behavior. However, until now there was no external interface to access the event history. Oracle Solaris Cluster 4.2 provides a new API option for viewing the recent history of resource group switchovers in a program-parsable format.

Oracle Solaris Cluster 4.2 provides a new option tag argument, RG_FAILOVER_LOG, for the existing API command scha_cluster_get, which can be used to list recent failover / switchover events for resource groups.

The command usage is as shown below:

# scha_cluster_get -O RG_FAILOVER_LOG number_of_days

number_of_days: the number of days of historical logs to scan.

The command returns a list of events in the following format. Fields are separated by a semicolon (;):

resource_group_name;source_nodes;target_nodes;time_stamp

source_nodes: the node names from which the resource group failed over or was switched over manually.

target_nodes: the node names to which the resource group failed over or was switched over manually.

There is a corresponding enhancement in the C API function scha_cluster_get() which uses the SCHA_RG_FAILOVER_LOG query tag.

In the example below, geo-infrastructure (failover resource group), geo-clusterstate (scalable resource group), oracle-rg (failover resource group), asm-dg-rg (scalable resource group) and asm-inst-rg (scalable resource group) are part of a Geographic Edition setup.

# /usr/cluster/bin/scha_cluster_get -O RG_FAILOVER_LOG 3
geo-infrastructure;schost1c;;Mon Jul 21 15:51:51 2014
geo-clusterstate;schost2c,schost1c;schost2c;Mon Jul 21 15:52:26 2014
oracle-rg;schost1c;;Mon Jul 21 15:54:31 2014
asm-dg-rg;schost2c,schost1c;schost2c;Mon Jul 21 15:54:58 2014
asm-inst-rg;schost2c,schost1c;schost2c;Mon Jul 21 15:56:11 2014
oracle-rg;;schost2c;Mon Jul 21 15:58:51 2014
geo-infrastructure;;schost2c;Mon Jul 21 15:59:19 2014
geo-clusterstate;schost2c;schost2c,schost1c;Mon Jul 21 16:01:51 2014
asm-inst-rg;schost2c;schost2c,schost1c;Mon Jul 21 16:01:10 2014
asm-dg-rg;schost2c;schost2c,schost1c;Mon Jul 21 16:02:10 2014
oracle-rg;schost2c;;Tue Jul 22 16:58:02 2014
oracle-rg;;schost1c;Tue Jul 22 16:59:05 2014
oracle-rg;schost1c;schost1c;Tue Jul 22 17:05:33 2014

Note that in the output some of the entries might have an empty string in the source_nodes field. Such entries correspond to events in which the resource group was switched online manually or was brought online during a cluster boot-up. Similarly, an empty target_nodes list indicates an event in which the resource group went offline.
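Because the fields are semicolon-separated, the output is also easy to post-process with standard tools. For example, this sketch filters the log above for just the oracle-rg entries and prints the time stamp together with the source and target nodes:
# scha_cluster_get -O RG_FAILOVER_LOG 3 | awk -F';' '$1 == "oracle-rg" { print $4 ": " $2 " -> " $3 }'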

- Arpit Gupta, Harish Mallya

Thursday Aug 07, 2014

Solaris Unified Archive Support in Zone Clusters

- Terry Fu <terry.fu@oracle.com>

Introduction

Oracle Solaris Cluster version 4.2 software provides support for Solaris Unified Archive in configuring and installing zone clusters. This new feature makes deploying a zone cluster with your customized configuration and application stacks much easier and faster. Now you can create a unified archive from a zone that has your application stack installed, and then easily deploy this archive into a zone cluster.

This blog introduces how to create unified archives for zone clusters, and how to configure and install a zone cluster from a unified archive.

How to Create Unified Archives for Zone Clusters

Solaris 11.2 introduced the unified archive, a new native archive file type. Users create unified archives from a deployed Solaris instance, and the archive can include or exclude any zones or zone clusters within the instance. Unified archives are created with the archiveadm(1M) utility. You can check out the Unified Archive in Solaris 11.2 blog for more information.

There are two types of Solaris Unified Archives: clone archives and recovery archives. A clone archive created for a global zone contains, by default, every system present on the host, which includes the global zone itself and every non-global zone. A recovery archive contains a single deployable system, which can be the global zone without any non-global zones, a non-global zone, or the global zone together with its existing non-global zones.

Both clone and recovery archives are supported for zone cluster configuration and installation. The unified archive used for a zone cluster can be created from either a global zone or a non-global zone.
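For example, a recovery archive of a single non-global zone might be created as sketched below; the zone name myzone and the NFS-mounted output path are illustrative.
# archiveadm create -r -z myzone /net/archive-host/export/myzone.sua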

How to Configure and Install a Zone Cluster from Unified Archive

The clzonecluster(1CL) utility is used to configure and install zone clusters. To configure a zone cluster from a unified archive, use the “-a” option with the “create” subcommand of “clzonecluster configure”. This option is supported in both interactive and non-interactive modes.

Though the unified archive contains the configuration for the zone cluster, you will need to set some properties to make the zone cluster configuration valid. When configuring a zone cluster from a unified archive, you will need to set a new zone path and the node scope properties, because the old ones in the unified archive may not be valid for the destination system. If the unified archive was created from a non-clustered zone, you also need to set the cluster attribute and set the enable_priv_net property to true. Moreover, you can change any other zone property as needed.

You can use the info command in the interactive mode to view the current configuration of the zone cluster.

The following shows an example of configuring a zone cluster from a unified archive.

phys-schost-1# clzonecluster configure sczone
sczone: No such zone cluster configured
Use 'create' to begin configuring a new zone cluster.
clzc:sczone> create -a absolute_path_to_archive -z archived_zone_1
clzc:sczone> set zonepath=/zones/sczone
clzc:sczone> info
zonename: sczone
zonepath: /zones/sczone
autoboot: true
hostid:
brand: solaris
bootargs:
pool:
limitpriv:
scheduling-class:
ip-type: exclusive
enable_priv_net: true
attr:
    name: cluster
    type: boolean
    value: true
clzc:sczone> add node
clzc:sczone:node> set physical-host=psoft1
clzc:sczone:node> set hostname=zc-host-1
clzc:sczone:node> add net
clzc:sczone:node:net> set address=vzsoft1a
clzc:sczone:node:net> set physical=sc_ipmp0
clzc:sczone:node:net> end
clzc:sczone:node> end
clzc:sczone> add node
clzc:sczone:node> set physical-host=psoft2
clzc:sczone:node> set hostname=zc-host-2
clzc:sczone:node> add net
clzc:sczone:node:net> set address=vzsoft2a
clzc:sczone:node:net> set physical=sc_ipmp0
clzc:sczone:node:net> end
clzc:sczone:node> end

The zone cluster is now configured. The following command installs the zone cluster from a unified archive on a global-cluster node.

phys-schost-1# clzonecluster install -a absolute_path_to_archive -z archived-zone sczone

The zone cluster is now installed. If the unified archive for the zone cluster installation was created from a non-clustered zone, you will need to install the Oracle Solaris Cluster packages on the zone nodes before forming a zone cluster.
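For example, assuming the ha-cluster publisher is already configured inside the zone-cluster nodes, the core cluster packages might be added with something like the following; the ha-cluster-minimal group package is used here only as an illustration.
# pkg install ha-cluster-minimal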

Note that configuring and installing a zone cluster are independent of each other, so you can use the Solaris Unified Archive feature in either or both steps. Also, the unified archives used to configure and to install the zone cluster do not have to be the same one.

Next Steps

After configuring and installing the zone cluster from unified archives, you can now boot the zone cluster.
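For example, boot the zone cluster configured above and then check its status:
# clzonecluster boot sczone
# clzonecluster status sczone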

If you would like to learn more about the use of Solaris Unified Archives for configuring and installing zone clusters in Solaris Cluster 4.2, see clzonecluster(1cl) and Oracle Solaris Cluster System Administration Guide.

HA-LDOM live migration in Oracle Solaris Cluster 4.2

Most Oracle Solaris Cluster resource types clearly separate their stopping and starting actions. However, the HA-LDom (HA Logical Domain / HA for Oracle VM for SPARC data service) agent has the capability of performing live migration, in which the entire switchover of the resource is performed while the cluster framework is stopping the resource. Suppose that the LDom resource starts out on node A and the administrator executes a switchover to node B. While the cluster framework is stopping the resource on node A, LDom live migration is actually migrating the LDom from node A to node B.

A problem that previously existed with this approach arose if another resource group declared a strong negative (SN) affinity for the LDom resource's group, or if load limits were in use. The LDom actually migrates onto the target node before the cluster framework knows that it has started there, so the cluster framework has no chance to evict any resource groups due to strong negative affinities or load limits. There is an interval of time in which the hard load limit or strong negative affinity is violated. This can overload the node and consequently cause the LDom live migration attempt to fail.

Oracle Solaris Cluster 4.2 eliminates this problem by providing a mechanism in the cluster framework by which excess workload can be migrated off the target node of the switchover before the live migration of the LDom is executed. This is accomplished by making use of the following two enhancements:

  1. A new resource property, Pre_evict, which helps ensure seamless live migration of the LDom. The Pre_evict property allows the cluster framework to move excess workload off the target node before the switchover begins, which then allows the live migration to succeed more often.

  2. A new scha_resourcegroup_get query tag SCHA_TARGET_NODES which allows a data service to perform the live migration by informing it of the target node to be used for the migration.

For the convenience of users and administrators, the HA-LDom agent in Oracle Solaris Cluster 4.2 has Pre_evict set to True by default, and the agent is enhanced to take advantage of these new API features (Pre_evict and the SCHA_TARGET_NODES query).
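For example, the property can be checked on an existing HA-LDom resource as sketched below; the resource name ldom-rs is illustrative.
# clresource show -p Pre_evict ldom-rs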

For more information, refer to the Oracle Solaris Cluster 4.2 documentation for the HA for Oracle VM Server for SPARC data service.

- Harish Mallya
Oracle Solaris Cluster 

Tuesday Aug 05, 2014

Using Oracle Solaris Unified Archives to Replicate an Oracle Solaris Cluster 4.2 Cluster

Oracle Solaris Automated Installation (AI) was first supported in Oracle Solaris Cluster 4.0 software to install and configure a new cluster from IPS repositories. With Oracle Solaris Unified Archive introduced in Oracle Solaris 11.2 software, the automated installation of Oracle Solaris Cluster 4.2 software with the Oracle Solaris 11.2 OS is expanded with the following added functionality:

  1. Install and configure a new cluster from archives. The cluster is in the initial state, just like one created using the standard method of running scinstall(1M) on the potential cluster nodes.

  2. Restore cluster nodes from recovery archives created for those same nodes, for example after hardware failures on the nodes.

  3. Replicate a new cluster from archives created for the nodes in a source cluster. The software packages and the cluster configuration on the new nodes will remain the same as in the source cluster, but the new nodes and some cluster objects (such as zone clusters) can have different system identities.

This document shows how to replicate a new cluster from the Oracle Solaris Unified Archives.

Replicating clusters can greatly reduce the effort of installing the Oracle Solaris OS, installing Oracle Solaris Cluster packages, configuring the nodes to form a cluster, installing and configuring applications, and applying SRUs or patches for maintenance. All of this can be done in one installation.

At first, the source cluster needs to be set up. This effort cannot be omitted. However, for use cases such as engineered systems, archives can be created for the source cluster nodes as master images, and can be used to replicate multiple clusters, as many as one wants, using this feature in Oracle Solaris Cluster 4.2 software. The more clusters you replicate, the more effort it saves.

The procedure to replicate a cluster includes the following steps:

  • Set up the source cluster and create archives for each source node.
  • Set up the AI server and the DHCP server.
  • Run scinstall on the AI server to configure the installation of the new cluster nodes, and add the new nodes to the configuration of the DHCP server.
  • Boot net install the new cluster nodes.

These same steps also apply to configuring a new cluster and restoring cluster nodes. The only difference is that, when running scinstall on the AI server, the menu options and inputs are different.

Requirements of the new cluster

When replicating a new cluster from a source cluster, the new cluster must have a hardware configuration similar to that of the source cluster:

  • Same number of nodes.
  • Same architecture.
  • Same private adapters for cluster transport.

As for the archives used to install the new cluster nodes:

  • They must be created for the global zone, not from a non-global zone.
  • Do not mix clone and recovery types of archives.
  • Exclude datasets on shared storage when creating the archives, and migrate that data separately.

Currently, Oracle Solaris Unified Archive only supports ZFS; therefore, other file systems and volume managers that are configured in the source cluster are not included in the archive. They can be migrated separately using the methods appropriate to those types.

If quorum servers or NAS devices are configured in the source cluster, the cluster configuration related to these objects is carried over to the new cluster. However, the configuration on these hosts themselves is not updated. Manual intervention is needed on these systems for them to function in the new cluster.

Set up the source cluster and create an archive for each node

The source cluster can be set up with any of the existing supported methods. Let’s use a two-node cluster (host names source-node1 and source-node2) with the HA for NFS agent and a zone cluster as a simple example for illustration purposes. This source cluster has a shared disk quorum device.

The zone cluster myzc is configured, installed, and booted online, and the two zone cluster nodes have the host names source-zcnode1 and source-zcnode2.

# clzonecluster status

=== Zone Clusters ===

--- Zone Cluster Status ---

Name   Brand     Node Name      Zone Host Name    Status   Zone Status
----   -----     ---------      --------------    ------   -----------
myzc   solaris   source-node1   source-zcnode1    Online   Running
                 source-node2   source-zcnode2    Online   Running

The HA for NFS resource group (named nfs-rg) contains a logical-hostname resource nfs-lh, an HAStoragePlus resource hasp-rs, and a SUNW.nfs resource nfs-rs. The local mount point for the NFS file system is /local/ufs as shown using clresource show.

# clresource list -v

Resource Name   Resource Type            Resource Group
-------------   -------------            --------------
nfs-rs          SUNW.nfs:3.3             nfs-rg
hasp-rs         SUNW.HAStoragePlus:10    nfs-rg
nfs-lh          SUNW.LogicalHostname:5   nfs-rg

# clresource show -v -p HostnameList nfs-lh

=== Resources ===

Resource:          nfs-lh
  --- Standard and extension properties ---
  HostnameList:    source-lh-hostname
    Class:         extension
    Description:   List of hostnames this resource manages
    Per-node:      False
    Type:          stringarray

# clresource show -p FilesystemMountPoints hasp-rs

=== Resources ===

Resource:                  hasp-rs
  --- Standard and extension properties ---
  FilesystemMountPoints:   /local/ufs
    Class:                 extension
    Description:           The list of file system mountpoints
    Per-node:              False
    Type:                  stringarray

On each node in this source cluster, create a unified archive for the global zone. The archive can be of the clone type or the recovery type. A clone archive created for the global zone contains multiple deployable systems: non-global zones are excluded from the global zone system, and each non-global zone is a single deployable system. A recovery archive created for the global zone consists of just one deployable system; installing from a global zone recovery archive installs the global zone as well as the non-global zones that it contains. Check the Unified Archive introduction blog for more details.

Since there is a zone cluster in the source cluster, create a recovery archive for each node so that the zones will get installed.

As an example, we use archive-host as the name of the host that is mounted under /net and exports a file system to store the archives.

source-node1# archiveadm create -r /net/archive-host/export/source-node1.sua

source-node2# archiveadm create -r /net/archive-host/export/source-node2.sua

Note that even though the recovery archive contains multiple boot environments (BEs), only the current active BE is updated to function in the new cluster.

Set Up the AI server and the DHCP server

The new cluster nodes must be networked with a designated AI server, a DHCP server, and the host that stores the archive files. The AI server must have a static IP address and be installed with Oracle Solaris 11.2. The archive files created for the source cluster nodes must be accessible from the AI server as well. The archive location can be an autofs mount point mounted via /net/host, an HTTP URL, or an HTTPS URL.

On the AI server, install the Oracle Solaris Cluster installation package ha-cluster/system/install. Do not install other Oracle Solaris Cluster packages. Installing this package also installs the Oracle Solaris installadm package and the Internet Systems Consortium (ISC) DHCP package service/network/dhcp/isc-dhcp, if they are not yet installed.

# pkg publisher
PUBLISHER    TYPE     STATUS  P  LOCATION
solaris      origin   online  F  http://ipkg.us.oracle.com/solaris11/release/
ha-cluster   origin   online  F  http://ipkg.us.oracle.com/ha-cluster/release

# pkg install ha-cluster/system/install

Run scinstall on the AI server to configure the installation

The tool to configure the AI installation of the new cluster nodes is /usr/cluster/bin/scinstall. It is the same tool that configures a new cluster in the non-AI method, but it presents a different menu when run on the AI server. Use the interactive method (run the scinstall command without any options) instead of the command-line options method, as it provides help text and prompts.
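For example, start the interactive tool on the AI server with no options:
# /usr/cluster/bin/scinstall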

An archive must be created for the global zone of each source cluster node, and each archive file can be specified to install only one node in the new cluster. This 1:1 mapping relationship must be maintained.

To avoid using the same host identities in both the source and the new clusters, the tool prompts for a host identity mapping file. It is a text file that contains a 1:1 mapping from host identities used in the source cluster to the new host identities in the new cluster. The host names of the physical nodes do not need to be included in this file. The host names or IP addresses configured for zone clusters, non-global zones, and logical-hostname or shared-address resources can be included. Hosts used in name services for zones can be included too.

The file can contain multiple lines, and each line has two columns. The first column is the host name or IP address used in the source cluster, and the second column is the corresponding new host name or IP address that will be used in the new cluster. A “#” at the beginning of a line marks a comment.

# cat /net/archive-host/export/mapping_new_cluster.txt
# zone cluster host names
source-zcnode1        new-zcnode1
source-zcnode2        new-zcnode2
# ha-nfs logicalhost resource host name
source-lh-hostname    new-lh-hostname

The scinstall tool provides menus and options, and prompts for the following user inputs, shown here with example values:

  • Root password: password for the root account used to access the nodes after installation completes.
  • Cluster name: name of the new cluster (example: new_cluster).
  • Node names and MACs: node names in the new cluster and their MAC addresses (example: new-node1 00:14:4F:FA:42:42, new-node2 00:14:4F:FA:EF:E8).
  • Archive locations: archive location for each node in the new cluster (example: new-node1: /net/archive-host/export/source-node1.sua, new-node2: /net/archive-host/export/source-node2.sua).
  • Network address and netmask (optional): network address and netmask for the private network (example: network address 172.17.0.0, netmask 255.255.240.0).
  • Host ID mapping file (optional): a text file for the 1:1 mapping from the old host identities in the source cluster to the new host identities (example: /net/archive-host/export/mapping_new_cluster.txt).

After confirming all the inputs, scinstall creates the install service for the new cluster, named cluster-name-{sparc|i386}, creates manifest files and sysconfig profiles for each new node, and associates each node as a client of this install service.

The output from running scinstall also contains the instructions for adding the new cluster nodes as clients to the DHCP server. The Installing Oracle Solaris 11.2 Systems guide describes in detail how to set up the ISC DHCP server and add clients to the DHCP configuration.
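For illustration only, a client entry for one of the new nodes in an ISC DHCP server configuration (dhcpd.conf) might look like the sketch below; the fixed IP address is a placeholder, and your subnet declaration and AI boot parameters will differ.

host new-node1 {
    hardware ethernet 00:14:4F:FA:42:42;
    fixed-address 192.168.10.21;
}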

Boot net install the new cluster nodes

Boot the nodes from the network to start the installation. For SPARC nodes:

ok boot net:dhcp - install

For x86 nodes, press the proper function key to boot from the network.

After the installation completes, the nodes are automatically rebooted three times before joining the new cluster. Check the cluster node status with the /usr/cluster/bin/clquorum command. The shared disk quorum device is re-created using a DID in the freshly populated device namespace.

# clquorum status

=== Cluster Quorum ===

--- Quorum Votes Summary from (latest node reconfiguration) ---

            Needed   Present   Possible
            ------   -------   --------
            2        3         3

--- Quorum Votes by Node (current status) ---

Node Name    Present   Possible   Status
---------    -------   --------   ------
new-node1    1         1          Online
new-node2    1         1          Online

--- Quorum Votes by Device (current status) ---

Device Name   Present   Possible   Status
-----------   -------   --------   ------
d1            1         1          Online

The zone cluster has the updated host names. Its zone status is Running, but its cluster status is Offline. Check its configuration for any manual updates to perform in the new environment. If the new configuration looks fine, use the clzonecluster reboot command to bring the zone cluster to the Online cluster status.

# clzonecluster status

=== Zone Clusters ===

--- Zone Cluster Status ---

Name   Brand     Node Name      Zone Host Name   Status    Zone Status
----   -----     ---------      --------------   ------    -----------
myzc   solaris   source-node1   new-zcnode1      Offline   Running
                 source-node2   new-zcnode2      Offline   Running

# clzonecluster reboot myzc

The private network address is changed to the one specified when running scinstall.

# cluster show -t global | grep private_
  private_netaddr:    172.17.0.0
  private_netmask:    255.255.240.0

The nfs-rg resource group and its resources are in the offline state as well, with the updated host name for the logical-hostname resource.

# clresource show -v -p HostnameList nfs-lh

=== Resources ===

Resource:          nfs-lh
  --- Standard and extension properties ---
  HostnameList:    new-lh-hostname
    Class:         extension
    Description:   List of hostnames this resource manages
    Per-node:      False
    Type:          stringarray

Since the archive does not contain the UFS file system used by resource group nfs-rg, the mount entry (that is, the /local/ufs entry) for this UFS file system is commented out in the /etc/vfstab file. Update /etc/vfstab to bring it back on all the nodes. Then, on one node, create the file system on the shared disk. Finally, bring the resource group nfs-rg online using the clresourcegroup online command. The files from this file system in the source cluster can be copied over, or you can just start with the new, empty file system.

# grep '/local/ufs' /etc/vfstab
/dev/global/dsk/d4s6 /dev/global/rdsk/d4s6 /local/ufs ufs 2 no logging

# cldevice list -v d4
DID Device   Full Device Path
----------   ----------------
d4           new-node2:/dev/rdsk/c1d4
d4           new-node1:/dev/rdsk/c1d4

# format /dev/rdsk/c1d4s2
# newfs /dev/did/rdsk/d4s6
newfs: construct a new file system /dev/did/rdsk/d4s6: (y/n)? y

# clresourcegroup online nfs-rg

# clresource status

=== Cluster Resources ===

Resource Name   Node Name   State     Status Message
-------------   ---------   -----     --------------
nfs-rs          new-node1   Online    Online - Service is online.
                new-node2   Offline   Offline

hasp-rs         new-node1   Online    Online
                new-node2   Offline   Offline

nfs-lh          new-node1   Online    Online - LogicalHostname online.
                new-node2   Offline   Offline

At this stage, this newly replicated cluster and the agent are fully functional.

- Lucia Lai <yue.lai@oracle.com>
Oracle Solaris Cluster 