
Using Oracle Solaris Unified Archives to Replicate an Oracle Solaris Cluster 4.2 Cluster

Krishna Murthy
Senior Manager, Software Development

Oracle Solaris Automated Installation (AI) was first supported in Oracle Solaris Cluster 4.0 software to install and configure a new cluster from IPS repositories. With the Oracle Solaris Unified Archive introduced in Oracle Solaris 11.2 software, the automated installation of Oracle Solaris Cluster 4.2 software with the Oracle Solaris 11.2 OS is expanded with the following added functionality:

  1. Install and configure a new cluster from archives. The cluster is in the initial state, just like one created using the standard method of running scinstall(1M) on the potential cluster nodes.

  2. Restore cluster nodes from recovery archives created for the same nodes, due to hardware failures on the nodes.

  3. Replicate a new cluster from archives created for the nodes in a source cluster. The software packages and the cluster configuration on the new nodes will remain the same as in the source cluster, but the new nodes and some cluster objects (such as zone clusters) can have different system identities.

This document shows how to replicate a new cluster from Oracle Solaris Unified Archives.

Replicating clusters can greatly reduce the effort of installing the Oracle Solaris OS, installing Oracle Solaris Cluster packages, configuring the nodes to form a cluster, installing and configuring applications, and applying SRUs or patches for maintenance. All of this can be done in one installation.

First, the source cluster needs to be set up; this effort cannot be omitted. However, for use cases such as engineered systems, archives can be created for the source cluster nodes as master images and then used to replicate as many clusters as you want with this feature in Oracle Solaris Cluster 4.2 software. The more clusters you replicate, the more effort it saves.

The procedure to replicate a cluster includes the following steps:

  • Set up the source cluster and create archives for each source node.
  • Set up the AI server and the DHCP server.
  • Run scinstall on the AI server to configure the installation of the new cluster nodes, and add the new nodes to the configuration of the DHCP server.
  • Boot net install the new cluster nodes.

These same steps also apply to configuring a new cluster and to restoring cluster nodes. The only difference is that, when running scinstall on the AI server, the menu options and inputs are different.

Requirements of the new cluster

When replicating a new cluster from a source cluster, the new cluster must have a hardware configuration similar to that of the source cluster:

  • Same number of nodes.
  • Same architecture.
  • Same private adapters for cluster transport.

As for the archives used to install the new cluster nodes, they must be created for the global zone, not from a non-global zone; do not mix clone and recovery types of archives; and exclude datasets on shared storage when creating archives, migrating that data separately.

Currently, the Oracle Solaris Unified Archive only supports ZFS; therefore, other file systems and volume managers that are configured in the source cluster are not included in the archive. They can be migrated using the corresponding methods for those types.

If quorum servers or NAS devices are configured in the source cluster, the cluster configuration related to these objects is carried over to the new cluster. However, the configuration on these hosts is not updated; manual intervention is needed on these systems for them to function in the new cluster.

Set up the source cluster and create an archive for each node

The source cluster can be set up with any of the existing supported methods. Let's use a two-node cluster (host names source-node1 and source-node2) with the HA for NFS agent and a zone cluster as a simple example for illustration purposes. This source cluster has a shared disk quorum device.

The zone cluster myzc is configured, installed, and booted online, and the two zone cluster nodes have host names source-zcnode1 and source-zcnode2.

# clzonecluster status

=== Zone Clusters ===

--- Zone Cluster Status ---

Name   Brand     Node Name      Zone Host Name    Status   Zone Status
----   -----     ---------      --------------    ------   -----------
myzc   solaris   source-node1   source-zcnode1    Online   Running
                 source-node2   source-zcnode2    Online   Running

The HA for NFS resource group (named nfs-rg) contains a logical-hostname resource nfs-lh, an HAStoragePlus resource hasp-rs, and a SUNW.nfs resource nfs-rs. The local mount point for the NFS file system is /local/ufs, as shown using clresource show.

# clresource list -v

Resource Name    Resource Type            Resource Group
-------------    -------------            --------------
nfs-rs           SUNW.nfs:3.3             nfs-rg
hasp-rs          SUNW.HAStoragePlus:10    nfs-rg
nfs-lh           SUNW.LogicalHostname:5   nfs-rg

# clresource show -v -p HostnameList nfs-lh

=== Resources ===

Resource:                     nfs-lh

  --- Standard and extension properties ---

  HostnameList:               source-lh-hostname
    Class:                    extension
    Description:              List of hostnames this resource manages
    Per-node:                 False
    Type:                     stringarray

# clresource show -p FilesystemMountPoints hasp-rs

=== Resources ===

Resource:                     hasp-rs

  --- Standard and extension properties ---

  FilesystemMountPoints:      /local/ufs
    Class:                    extension
    Description:              The list of file system mountpoints
    Per-node:                 False
    Type:                     stringarray

On each node in this source cluster, create a unified archive for the global zone. The archive can be of clone type or recovery type. A clone archive created for the global zone contains multiple deployable systems: non-global zones are excluded from the global zone system, and each non-global zone is a single deployable system. A recovery archive created for the global zone consists of just one deployable system; installing from a global zone recovery archive installs the global zone as well as the non-global zones that it contains. Check the Unified Archive introduction blog for more details.

Since there is a zone cluster in the source cluster, create a recovery archive for each node so that the zones will get installed.

As an example, we use archive-host as the name of the host that is mounted under /net and exports a file system to store the archives.

source-node1# archiveadm create -r /net/archive-host/export/source-node1.sua

source-node2# archiveadm create -r /net/archive-host/export/source-node2.sua

Note that even though the recovery archive contains multiple boot environments (BEs), only the currently active BE is updated to function in the new cluster.
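
Optionally, each archive can be sanity-checked before moving on. As a minimal example, assuming the archive paths created above, archiveadm info prints summary information about an archive and the deployable systems it contains:

# archiveadm info -v /net/archive-host/export/source-node1.sua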

Set up the AI server and the DHCP server

The new cluster nodes must be networked with a designated AI server, a DHCP server, and the host for storing the archive files. The AI server must have a static IP address and be installed with Oracle Solaris 11.2. The archive files created for the source cluster nodes must be accessible from the AI server as well. The archive location can be an autofs mount point mounted via /net/host, an http URL, or an https URL.
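
For example, with the /net automount path used in this article, a quick check from the AI server confirms that the archive files are reachable:

# ls -l /net/archive-host/export/*.sua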

On the AI server, install the Oracle Solaris Cluster installation package ha-cluster/system/install. Do not install other Oracle Solaris Cluster packages. Installing this package also installs the Oracle Solaris installadm package and the Internet Systems Consortium (ISC) DHCP package service/network/dhcp/isc-dhcp, if they are not yet installed.
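
If the ha-cluster publisher is not yet configured on the AI server, it can be added first; this is just a sketch that reuses the repository origin shown in the pkg publisher output below:

# pkg set-publisher -g http://ipkg.us.oracle.com/ha-cluster/release ha-cluster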

# pkg publisher
PUBLISHER        TYPE     STATUS P LOCATION
solaris          origin   online F http://ipkg.us.oracle.com/solaris11/release/
ha-cluster       origin   online F http://ipkg.us.oracle.com/ha-cluster/release

# pkg install ha-cluster/system/install

Run scinstall on the AI server to configure the installation

The tool that configures the AI installation of the new cluster nodes is /usr/cluster/bin/scinstall. Although it is the same tool used to configure a new cluster in the non-AI method, it presents a different menu when run on the AI server. Use the interactive method (running the scinstall command without any options) instead of the command-line options method, as it provides help text and prompts.
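
For example, start the tool on the AI server without options and follow the menus for installing and configuring a new cluster from unified archives:

# /usr/cluster/bin/scinstall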

An archive must be created for the global zone of each source cluster node, and one archive file can only be specified to install just one node in the new cluster. This 1:1 mapping relationship must be maintained.

To avoid using the same host identities in both the source and the new clusters, the tool prompts for a host identity mapping file. This is a text file that contains a 1:1 mapping from host identities used in the source cluster to the new host identities in the new cluster. The host names of the physical nodes do not need to be included in this file. The host names or IP addresses configured for zone clusters, non-global zones, and logical-hostname or shared-address resources can be included. Hosts used in name services for zones can be included too.

The file can contain multiple lines, and each line has two columns. The first column is the host name or IP address used in the source cluster, and the second column is the corresponding new host name or IP address that will be used in the new cluster. A "#" at the beginning of a line marks a comment line.

# cat /net/archive-host/export/mapping_new_cluster.txt
# zone cluster host names
source-zcnode1       new-zcnode1
source-zcnode2       new-zcnode2
# ha-nfs logicalhost resource host name
source-lh-hostname   new-lh-hostname

The scinstall tool provides menus and options, and prompts for the following user inputs, as shown with examples in the following table.

Item                          Description                                       Example Values
----                          -----------                                       --------------
Root password                 Password for the root account to access the
                              nodes after installation completes
Cluster name                  Name of the new cluster                           new_cluster
Node names and MACs           Node names in the new cluster and their MACs      new-node1  00:14:4F:FA:42:42
                                                                                new-node2  00:14:4F:FA:EF:E8
Archive locations             Archive location for each node in the new         new-node1: /net/archive-host/export/source-node1.sua
                              cluster                                           new-node2: /net/archive-host/export/source-node2.sua
Network address and netmask   Network address for the private network           Network address: 172.17.0.0
(optional)                    and its netmask                                   Netmask: 255.255.240.0
Host ID mapping file          A text file for 1:1 mapping from the old host     /net/archive-host/export/mapping_new_cluster.txt
(optional)                    identities in the source cluster to new host
                              identities

After confirming all the inputs, scinstall creates the install service for the new cluster, named cluster-name-{sparc|i386}, creates manifest files and system configuration profiles for each new node, and associates each node as a client of this install service.
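
To verify what scinstall created, the install services, their manifests, and the associated clients can be listed with installadm on the AI server. The service name new_cluster-sparc below is only an assumption derived from the example cluster name and a SPARC architecture:

# installadm list -m -c -n new_cluster-sparc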

The output from running scinstall also contains the instructions for adding the new cluster nodes as clients to the DHCP server. Installing Oracle Solaris 11.2 Systems has every detail about setting up the ISC DHCP server and adding clients to the DHCP configuration.
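
The exact statements to add come from the scinstall output and from that document. Purely as an illustration, a client entry in an ISC DHCP configuration file generally looks like the following, where the fixed-address values are hypothetical and the MAC addresses are the ones from the table above:

host new-node1 {
  hardware ethernet 00:14:4F:FA:42:42;
  fixed-address 10.134.100.11;
}
host new-node2 {
  hardware ethernet 00:14:4F:FA:EF:E8;
  fixed-address 10.134.100.12;
}

After updating the configuration, restart the DHCP server service so that it picks up the new clients.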

Boot net install the new cluster nodes

Boot the nodes from the network to start the installation. For SPARC nodes:

ok boot net:dhcp - install

For x86 nodes, press the proper function key to boot from the network.

After the installation completes, the nodes are automatically rebooted three times before joining the new cluster. Check the cluster node status with the /usr/cluster/bin/clquorum command. The shared disk quorum device is re-created using a DID in the freshly populated device namespace.

# clquorum status

=== Cluster Quorum ===

--- Quorum Votes Summary from (latest node reconfiguration) ---

            Needed   Present   Possible
            ------   -------   --------
            2        3         3

--- Quorum Votes by Node (current status) ---

Node Name       Present   Possible   Status
---------       -------   --------   ------
new-node1       1         1          Online
new-node2       1         1          Online

--- Quorum Votes by Device (current status) ---

Device Name     Present   Possible   Status
-----------     -------   --------   ------
d1              1         1          Online

The zone cluster has updated host names. Its zone status is Running but its cluster status is Offline. Check its configuration for any manual updates to perform in the new environment. If the new configuration looks fine, use the clzonecluster reboot command to bring the zone cluster to the Online cluster status.

# clzonecluster status

=== Zone Clusters ===

--- Zone Cluster Status ---

Name   Brand     Node Name      Zone Host Name   Status    Zone Status
----   -----     ---------      --------------   ------    -----------
myzc   solaris   source-node1   new-zcnode1      Offline   Running
                 source-node2   new-zcnode2      Offline   Running
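
As mentioned above, the zone cluster configuration can be reviewed before the reboot; for example, clzonecluster show prints the zone cluster and its node-scope properties:

# clzonecluster show -v myzc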

# clzonecluster reboot myzc

The private network address is changed to the one specified when running scinstall.

# cluster show -t global | grep private_
  private_netaddr:                                 172.17.0.0
  private_netmask:                                 255.255.240.0

The nfs-rg resource group and its resources are in the offline state as well, with an updated host name for the logical-hostname resource.

# clresource show -v -p HostnameList nfs-lh

=== Resources ===

Resource:                     nfs-lh

  --- Standard and extension properties ---

  HostnameList:               new-lh-hostname
    Class:                    extension
    Description:              List of hostnames this resource manages
    Per-node:                 False
    Type:                     stringarray

Since the archive does not contain the UFS file system used in resource group nfs-rg, the mount entry (that is, /local/ufs) for this UFS file system is commented out in the /etc/vfstab file. Update /etc/vfstab to bring it back on all the nodes. Then, on one node, create the file system on that shared disk. Finally, bring the resource group nfs-rg online using the clresourcegroup online command. The files in this file system in the source cluster can be copied over, or you can just start with the new file system.

# grep '/local/ufs' /etc/vfstab
/dev/global/dsk/d4s6   /dev/global/rdsk/d4s6   /local/ufs   ufs   2   no   logging

# cldevice list -v d4
DID Device   Full Device Path
----------   ----------------
d4           new-node2:/dev/rdsk/c1d4
d4           new-node1:/dev/rdsk/c1d4

# format /dev/rdsk/c1d4s2

# newfs /dev/did/rdsk/d4s6
newfs: construct a new file system /dev/did/rdsk/d4s6: (y/n)? y

# clresourcegroup online nfs-rg

# clresource status

=== Cluster Resources ===

Resource Name   Node Name   State     Status Message
-------------   ---------   -----     --------------
nfs-rs          new-node1   Online    Online - Service is online.
                new-node2   Offline   Offline

hasp-rs         new-node1   Online    Online
                new-node2   Offline   Offline

nfs-lh          new-node1   Online    Online - LogicalHostname online.
                new-node2   Offline   Offline

At this stage, this newly replicated cluster and the agent are fully functional.
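
As a final check, the overall status of the replicated cluster and its components can be displayed with the cluster command:

# cluster status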

- Lucia Lai <yue.lai@oracle.com>
Oracle Solaris Cluster 
