Tuesday Jul 26, 2016

SSL Live Migration for HA for Oracle VM Server for SPARC

As detailed in this article, the HA for Oracle VM Server for SPARC data service for Oracle Solaris Cluster can be used to enhance the availability of Oracle VM Server for SPARC. This high availability (HA) agent controls and manages a guest domain as a "black box": it can fail over the guest domain in case of failure, and it can also use the domain migration procedures to perform a managed switchover.

Up to this point, using the HA for Oracle VM Server for SPARC service to orchestrate guest domain migration required providing the Oracle Solaris Cluster software with administrative credentials for the control domains. Starting with Oracle Solaris Cluster 4.3 SRU3, a resource of type SUNW.ldom version 8 or later can live-migrate a guest domain by using SSL (Secure Sockets Layer) certificates that have been set up to establish a trust relationship between the control domains of different Oracle VM Server for SPARC servers, thereby enhancing system security.

To enable live migration of a guest domain by using SSL certificate-based authentication, you must first configure the SSL certificates as described in the version-specific Administration Guide for Oracle VM Server for SPARC.

Resource type SUNW.ldom version 8 introduces the extension property Use_SSL_Certificate, which can be tuned to enable or disable SSL certificate-based authentication for live migration. By default, Use_SSL_Certificate=FALSE is set, which disables SSL certificate-based authentication. Use_SSL_Certificate=TRUE can be set at any time to enable it.
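
You can inspect the current value of this property at any time. For example (a minimal sketch, assuming a resource named ldom-rs as used below):
$ /usr/cluster/bin/clresource show -p Use_SSL_Certificate ldom-rs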

To upgrade an existing resource of type SUNW.ldom to resource type version 8, assume the root role or a role that provides the solaris.cluster.modify and solaris.cluster.admin authorizations, and execute the following on any one node:
$ /usr/cluster/bin/clresource set -p TYPE_VERSION=8 ldom-rs
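
To confirm that the resource is now at version 8, you can display its Type_version property (a sketch, using the same hypothetical resource name):
$ /usr/cluster/bin/clresource show -p Type_version ldom-rs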

The steps below briefly describe how to set up SSL certificates for a guest domain that is managed by the Oracle Solaris Cluster data service for Oracle VM Server for SPARC 3.3 on a three-node cluster (node1, node2, node3), and how to leverage the configured SSL certificates for live migration of the guest domain.

Perform the following on each node where a guest domain could be managed by a resource of type SUNW.ldom version 8 or later. Setting up SSL certificates for a guest domain requires root privileges.

1. Create the /var/share/ldomsmanager/trust directory, if it does not already exist.
root@node1:~# /usr/bin/mkdir -p /var/share/ldomsmanager/trust
root@node2:~# /usr/bin/mkdir -p /var/share/ldomsmanager/trust
root@node3:~# /usr/bin/mkdir -p /var/share/ldomsmanager/trust


2. Securely copy the remote ldmd certificate to the local ldmd trusted certificate directory.

root@node1:~# /usr/bin/scp \
root@node2.example.com:/var/share/ldomsmanager/server.crt \
/var/share/ldomsmanager/trust/node2.pem

root@node1:~# /usr/bin/scp \
root@node3.example.com:/var/share/ldomsmanager/server.crt \
/var/share/ldomsmanager/trust/node3.pem


root@node2:~# /usr/bin/scp \
root@node1.example.com:/var/share/ldomsmanager/server.crt \
/var/share/ldomsmanager/trust/node1.pem

root@node2:~# /usr/bin/scp \
root@node3.example.com:/var/share/ldomsmanager/server.crt \
/var/share/ldomsmanager/trust/node3.pem


root@node3:~# /usr/bin/scp \
root@node1.example.com:/var/share/ldomsmanager/server.crt \
/var/share/ldomsmanager/trust/node1.pem

root@node3:~# /usr/bin/scp \
root@node2.example.com:/var/share/ldomsmanager/server.crt \
/var/share/ldomsmanager/trust/node2.pem
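
Before trusting a copied certificate, you may want to inspect its subject and validity period. A minimal check, shown here from node1 (adjust the file name for each certificate):
root@node1:~# /usr/bin/openssl x509 -noout -subject -dates \
-in /var/share/ldomsmanager/trust/node2.pem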



3. Create a symbolic link from the certificate in the ldmd trusted certificate directory to the /etc/certs/CA/ directory.
root@node1:~# /usr/bin/ln -s /var/share/ldomsmanager/trust/node2.pem \
/etc/certs/CA/

root@node1:~# /usr/bin/ln -s /var/share/ldomsmanager/trust/node3.pem \
/etc/certs/CA/


root@node2:~# /usr/bin/ln -s /var/share/ldomsmanager/trust/node1.pem \
/etc/certs/CA/

root@node2:~# /usr/bin/ln -s /var/share/ldomsmanager/trust/node3.pem \
/etc/certs/CA/


root@node3:~# /usr/bin/ln -s /var/share/ldomsmanager/trust/node1.pem \
/etc/certs/CA/

root@node3:~# /usr/bin/ln -s /var/share/ldomsmanager/trust/node2.pem \
/etc/certs/CA/


4. Restart the svc:/system/ca-certificates service.
root@node1:~# /usr/sbin/svcadm restart svc:/system/ca-certificates
root@node2:~# /usr/sbin/svcadm restart svc:/system/ca-certificates
root@node3:~# /usr/sbin/svcadm restart svc:/system/ca-certificates

5. Verify that the configuration is operational.
root@node1:~# /usr/bin/openssl verify /etc/certs/CA/node2.pem
/etc/certs/CA/node2.pem: OK
root@node1:~# /usr/bin/openssl verify /etc/certs/CA/node3.pem
/etc/certs/CA/node3.pem: OK
root@node2:~# /usr/bin/openssl verify /etc/certs/CA/node1.pem
/etc/certs/CA/node1.pem: OK
root@node2:~# /usr/bin/openssl verify /etc/certs/CA/node3.pem
/etc/certs/CA/node3.pem: OK
root@node3:~# /usr/bin/openssl verify /etc/certs/CA/node1.pem
/etc/certs/CA/node1.pem: OK
root@node3:~# /usr/bin/openssl verify /etc/certs/CA/node2.pem
/etc/certs/CA/node2.pem: OK

6. Restart the ldmd daemon.
root@node1:~# /usr/sbin/svcadm restart svc:/ldoms/ldmd:default
root@node2:~# /usr/sbin/svcadm restart svc:/ldoms/ldmd:default
root@node3:~# /usr/sbin/svcadm restart svc:/ldoms/ldmd:default
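
To confirm that the ldmd service came back online on each node, check its state with svcs (the STIME value shown is illustrative and will differ on your system):
root@node1:~# /usr/bin/svcs ldoms/ldmd
STATE          STIME    FMRI
online         16:05:18 svc:/ldoms/ldmd:default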


7. If the guest domain is already managed by a resource of type SUNW.ldom version 8 or later, set the Use_SSL_Certificate extension property to TRUE.
Assume the root role or a role that provides the solaris.cluster.modify and solaris.cluster.admin authorizations, and execute the following on any one node:
$ /usr/cluster/bin/clresource set -p Use_SSL_Certificate=TRUE ldom-rs

For a new resource of type SUNW.ldom version 8 or later, verify that live migration of the guest domain using SSL certificate authentication succeeds before you set Use_SSL_Certificate=TRUE and enable the resource.

If you are configuring the SSL certificates for a guest domain that is already running, verify that a dry-run live migration using SSL certificate authentication succeeds before setting Use_SSL_Certificate=TRUE.

Assume the root role or a role that has been assigned the "LDoms Management" profile, and execute the following command:
$ /usr/sbin/ldm migrate-domain -n -c domain-name target_host
$ echo $?
0
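
Once the dry run succeeds and Use_SSL_Certificate=TRUE is set, a managed switchover of the resource group that contains the SUNW.ldom resource triggers the SSL-authenticated live migration. A minimal sketch, assuming a hypothetical resource group named ldom-rg:
$ /usr/cluster/bin/clresourcegroup switch -n node2 ldom-rg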


For more information, refer to the following resources:

Oracle Solaris Cluster Data Service for Oracle VM Server for SPARC Guide
http://docs.oracle.com/cd/E56676_01/html/E56924/index.html

SUNW.ldom(5) Man Page
https://docs.oracle.com/cd/E56676_01/html/E56746/sunw.ldom-5.html

Oracle VM Server for SPARC 3.4 Administration Guide
http://docs.oracle.com/cd/E69554_01/html/E69557/index.html

Oracle VM Server for SPARC 3.3 Administration Guide
https://docs.oracle.com/cd/E62357_01/html/E62358/index.html

Oracle VM Server for SPARC 3.2 Administration Guide
https://docs.oracle.com/cd/E48724_01/html/E48732/index.html

Secure administration of Oracle VM Server for SPARC
https://blogs.oracle.com/jsavit/entry/secure_administration_of_oracle_vm


Tapan Avasthi
Oracle Solaris Cluster Engineering

Tuesday Jul 19, 2016

Oracle E-Business 12.2 support on Oracle Solaris Cluster

We are very pleased to announce that Oracle Solaris Cluster 4.3 SRU 3 on Oracle Solaris 11 introduces support for Oracle E-Business Suite 12.2.

In particular, starting with Oracle E-Business Suite 12.2.4 with Oracle Applications DBA (AD) and Oracle E-Business Suite Technology Stack (TXK) Delta 6, Oracle E-Business Suite 12.2 can now be managed by Oracle Solaris Cluster 4.3 SRU 3.

One advantage of deploying Oracle E-Business Suite 12.2 with Oracle Solaris Cluster is the ability to install the Primary Application Tier, and consequently the WebLogic Administration Server, on a logical host.

This means that if the physical node hosting the Primary Application Tier and its WebLogic Administration Server fails, Oracle Solaris Cluster fails over the Primary Application Tier to another node.

Oracle Solaris Cluster detects a node failure within seconds and automatically fails over the logical host to another node, where the Primary Application Tier services are automatically started again. Typically, the WebLogic Administration Server is available again within 2-3 minutes after a node failure. Without Oracle Solaris Cluster providing high availability for Oracle E-Business Suite 12.2, you would need to recover the Primary Application Tier and WebLogic Administration Server from a reliable backup.

Please note that if the physical node hosting the Primary Application Tier and its WebLogic Administration Server fails, patching is not possible until the WebLogic Administration Server is available again.

The following diagram shows a typical Oracle E-Business 12.2 deployment on Oracle Solaris Cluster 4.3 with Oracle Solaris 11, using Oracle Solaris Zone Clusters.


This deployment example is based on the information available in the “Deployment Option with Single Web Entry Point and Multiple Managed Servers” section of the My Oracle Support (MOS) note, Using Load-Balancers with Oracle E-Business Suite Release 12.2 (Doc ID 1375686.1), with the modification that the Admin Server and its Node Manager are running on the Web Entry Point server.

It is important to note that the Primary Application Tier, and consequently the WebLogic Administration Server, has been installed on the logical host primary-lh so that, in the event of a node failure, Oracle Solaris Cluster can fail over the Primary Application Tier services to another node, where those services are automatically started again.
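
For illustration, the logical host itself is an ordinary Oracle Solaris Cluster logical hostname resource. A minimal sketch of creating one, assuming a hypothetical resource group ebs-rg and resource name primary-lh-rs, and that primary-lh resolves on all nodes:
$ /usr/cluster/bin/clresourcegroup create ebs-rg
$ /usr/cluster/bin/clreslogicalhostname create -g ebs-rg -h primary-lh primary-lh-rs
$ /usr/cluster/bin/clresourcegroup online -eM ebs-rg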

For more information, please refer to the following:

How to Deploy Oracle RAC on an Exclusive-IP Oracle Solaris Zones Cluster

http://www.oracle.com/technetwork/articles/servers-storage-admin/rac-excl-ip-zone-cluster-2341657.html#About

Oracle Solaris Cluster Data Service for Oracle E-Business Suite as of Release 12.2 Guide

http://docs.oracle.com/cd/E56676_01/html/E60641/index.html

Neil Garthwaite
Oracle Solaris Cluster Engineering

Sunday Dec 06, 2015

Oracle Solaris Cluster Manager: Configuring HA for Oracle Solaris Zones

This article gives insight into the Oracle Solaris Zone wizard implemented in Oracle Solaris Cluster 4.3, which configures a highly available agent for Oracle Solaris Zones with the zonepath residing on a failover file system.

Tuesday Nov 10, 2015

Oracle Solaris Cluster Manager: Setting Up Geo Disaster Recovery Orchestration

Oracle Solaris Cluster Manager (Cluster Manager) provides a way to set up and administer disaster recovery (DR) orchestration, starting in Oracle Solaris Cluster 4.3. Before this version, DR orchestration could be administered only by using the command-line interface (CLI). The Cluster Manager browser interface makes this easier by providing a user-friendly alternative: operations are performed with clicks and a few user-entered fields, eliminating the need to memorize commands and subcommands. By providing inline and online help and performing appropriate checks, Cluster Manager also helps prevent errors.

Prerequisites for DR Orchestration to Be Usable Under Oracle Solaris Cluster Manager 4.3

  • Geographic Edition software is installed.
    # pkg info ha-cluster/group-package/ha-cluster-geo-full

  • Geographic Edition infrastructure is enabled and the Geographic Edition resource groups and resources are running (if the infrastructure is not yet enabled, see the sketch after this list).
    # /usr/cluster/bin/geoadm show
    # /usr/cluster/bin/clresourcegroup status geo-clusterstate geo-infrastructure
    # /usr/cluster/bin/clresource status -g geo-clusterstate,geo-infrastructure
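
If the geoadm show output indicates that the Geographic Edition infrastructure is not yet enabled, it can be enabled from one node of the cluster (a minimal sketch, assuming the package above is already installed):
    # /usr/cluster/bin/geoadm start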

Access DR Orchestration

In version 4.3, the DR orchestration feature is accessed via the Sites folder in the navigation pane.

Access Site Details

The site details can be accessed in one of the following ways:

  • Click the site name in the tree under the Sites folder in the navigation pane.
  • Click the Sites folder. Then, click the site name in the Sites table in the main panel.

Access Multigroup Details

Browse to the site details page as mentioned above, then click the multigroup name in the Multigroups table.

Access the Details of a Protection Group in a Multigroup

Browse to the Multigroup details page as mentioned above, then click the protection group name in the Protection Groups table.

Creating a Site and a Multigroup

Creating a Site

  • Click the Sites folder. Click the Create button in the Sites table. A wizard will pop up.
  • In the Initial Setup step, enter the site name. The role of the current cluster is controller by default. Click the Add Cluster button to add more clusters, then enter each cluster's name and role in the table below the button. To skip the next step and use the default site properties, select the checkbox.
    Note: In version 4.3, a cluster to be added must have the same root password as the current cluster.


  • In the Site Properties step, fill in the site property fields.


  • In the Review step, review the entered values and click Finish.

Creating a Multigroup

  • Browse to the Site details page as mentioned earlier.
  • Click the Create button in the Multigroups table.
  • In the pop-up, enter the multigroup name. The other fields are optional. Then click Create.
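
The site and multigroup operations in the browser interface have CLI equivalents in the geosite(1M) and geomg(1M) commands. For example, to list the configured sites and multigroups from the command line (a hedged sketch; see the man pages for the exact syntax):
# /usr/cluster/bin/geosite list
# /usr/cluster/bin/geomg list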

DR Orchestration Operations Available in the Cluster Manager Browser Interface

Site Operations

In All Sites Table

  • Create Site
  • Join Site
  • Leave Site
  • Validate Site

In Site Details Page

  • Add Cluster to the site
  • Set Role for clusters in the site
  • Accept another cluster in the site as controller
  • Remove a cluster from the site
  • Update site information from another cluster in the site

Multigroup Operations

In the Multigroups Table

  • Create a multigroup
  • Delete a multigroup
  • Update multigroup information from another cluster in the site
  • Validate a multigroup with another cluster in the site
  • Start a multigroup
  • Stop a multigroup
  • Switch over a multigroup
  • Take over a multigroup

In the Protection Groups Table of the Multigroup Details Page

  • Add Protection Group
  • Remove Protection Group

Editing Site and Multigroup Properties

  • Browse to the Site or Multigroup details page.
  • Click the Properties tab at the top below the breadcrumb.
  • Click the Edit button.


  • The area with the Edit button now has Save and Cancel buttons. The Value column changes into editable fields wherever applicable. Make the changes and click the Save button.

To see more about configuring and administering Oracle Solaris Cluster 4.3 Geographic Edition, including its CLI options, refer to the Oracle Solaris Cluster 4.3 Geographic Edition documentation.

Friday Nov 06, 2015

New choices for Oracle Solaris Cluster Public Networking: Link Aggregation (Trunk & DLMP) and VNIC

Starting with Oracle Solaris Cluster 4.3, in addition to IPMP, a cluster can use IP over link aggregation, and IP over VNIC over link aggregation, for public networking.
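
As a taste of what this looks like at the Oracle Solaris 11 layer, here is a hedged sketch of creating a trunk-mode link aggregation and plumbing IP on it (the link names and address are hypothetical):
# /usr/sbin/dladm create-aggr -m trunk -l net0 -l net1 aggr0
# /usr/sbin/ipadm create-ip aggr0
# /usr/sbin/ipadm create-addr -T static -a 192.0.2.10/24 aggr0/v4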
