Wednesday Aug 13, 2014

Support for Kernel Zones with the Oracle Solaris Cluster 4.2 Data Service for Oracle Solaris Zones

The Oracle Solaris Cluster Data Service for Oracle Solaris Zones is enhanced to support Oracle Solaris Kernel Zones (also called solaris-kz branded zones) with Oracle Solaris 11.2.

This data service provides high availability for Oracle Solaris Zones through three components in a failover or multi-master configuration:

  • sczbt: The orderly booting, shutdown and fault monitoring of an Oracle Solaris zone.
  • sczsh: The orderly startup, shutdown and fault monitoring of an application within the Oracle Solaris zone (managed by sczbt), using scripts or commands.
  • sczsmf: The orderly startup, shutdown and fault monitoring of an Oracle Solaris Service Management Facility (SMF) service within the Oracle Solaris zone managed by sczbt.

With Oracle Solaris Cluster 4.0 and 4.1, the sczbt component supports cold migration (boot and shutdown) of solaris and solaris10 branded zones.

The sczbt component now additionally supports cold and warm migration for kernel zones on Oracle Solaris 11.2. Warm migration (suspend and resume of a kernel zone) allows you to minimize planned downtime, for example when a cluster node is overloaded or needs to be shut down for maintenance.

By deploying kernel zones under control of the sczbt component, system administrators can provide highly available virtual machines with the added flexibility of consolidating zones with separate kernel patch levels on a single system. At the same time, the administrator can very efficiently consolidate multiple workloads onto one server.

The three data service components are now implemented as their own dedicated resource types as follows:

  • sczbt - ORCL.ha-zone_sczbt
  • sczsh - ORCL.ha-zone_sczsh
  • sczsmf - ORCL.ha-zone_sczsmf

Resource configuration is still done by amending the component configuration file and providing it to the component register script. There is no longer a need to configure or maintain a parameter file for the sczbt or sczsh component.
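As an illustration, a minimal sczbt configuration for a failover kernel zone with warm migration could look like the following sketch (the resource, resource group, zone, and file names are placeholders, and the authoritative parameter list and register script location are documented in the data service guide):

    # Excerpt of an sczbt component configuration file (placeholder values)
    RS=kz1-rs                # name of the sczbt resource to create
    RG=kz1-rg                # existing resource group that will contain it
    HAS_RS=kz1-hasp-rs       # HAStoragePlus resource the zone's storage depends on
    Zonename=kz1             # name of the kernel zone
    Zonebrand=solaris-kz     # kernel zone brand
    Migrationtype=warm       # warm (suspend/resume) instead of cold (shutdown/boot)

    # Provide the configuration file to the component register script
    /opt/SUNWsczone/sczbt/util/sczbt_register -f ./sczbt_config.kz1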

If existing deployments of the data service components are upgraded to Oracle Solaris Cluster 4.2, there is no need to re-register the resources. The previous SUNW.gds-based component resources continue to run unchanged.

For more details, please refer to the Oracle Solaris Cluster Data Service for Oracle Solaris Zones Guide.

Thorsten Früauf
Oracle Solaris Cluster Engineering

Thursday Aug 07, 2014

Oracle's Siebel 8.2.2 support on Oracle Solaris Cluster software

Swathi Devulapalli

The Oracle Solaris Cluster 4.1 data service for Siebel on SPARC now supports Oracle's Siebel 8.2.2 version. It is now possible to configure the Siebel Gateway Server and Siebel Server components of Siebel 8.2.2 software for failover/high availability.

What it is

Siebel is a widely used CRM solution that delivers a combination of transactional, analytical, and engagement features to manage all customer-facing operations. Siebel 8.2.2 is the first Siebel version that is certified and released for Oracle Solaris 11 software on the SPARC platform.

The Oracle Solaris Cluster 4.1 SRU3 software on Oracle Solaris 11 provides a high availability (HA) data service for Siebel 8.2.2. The Oracle Solaris Cluster data service for Siebel provides fault monitoring and automatic failover of the Siebel application. The data service makes two essential components of the Siebel application highly available: the Siebel Gateway Server and the Siebel Server. A resource of type SUNW.sblgtwy monitors the Siebel Gateway Server, and a resource of type SUNW.sblsrvr monitors the Siebel Server. Oracle Solaris Cluster 4.1 SRU3 on Oracle Solaris 11 extends this data service to support Siebel 8.2.2.

With the support of Siebel 8.2.2 on Oracle Solaris 11, the features of Oracle Solaris 11 and Oracle Solaris Cluster 4.1 are available to the Siebel 8.2.2 HA agent. The HA solution for the Siebel stack can be configured on a complete Oracle product stack, with an Oracle Solaris Cluster HA solution available on each tier of the stack: for example, the Oracle Solaris Cluster HA for Oracle agent in the database tier, the HA for Siebel agent in the application tier, and the HA for Oracle iPlanet Web Server agent in the web tier.

What’s new?

1. A new extension property, Siebel_version, has been introduced. This property indicates the Siebel server version number, for example 8.2.2.

The examples below illustrate the use of the Siebel_version property when creating the Siebel Gateway Server and Siebel Server resources.

Creation of Siebel Gateway Server resource:
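A representative sketch (the resource group, resource, and configuration directory names are placeholders; the data service guide lists the full set of required extension properties):

    # Register the Siebel Gateway Server resource type (once per cluster)
    clresourcetype register SUNW.sblgtwy

    # Create the gateway resource, specifying the new Siebel_version property
    clresource create -g siebel-gtwy-rg -t SUNW.sblgtwy \
        -p Confdir_list=/global/siebel/gtwysrvr \
        -p Siebel_version=8.2.2 \
        siebel-gtwy-rs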


Creation of Siebel Server resource:
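Similarly, a representative sketch for the Siebel Server resource (again with placeholder names and only the properties relevant to this example):

    # Register the Siebel Server resource type (once per cluster)
    clresourcetype register SUNW.sblsrvr

    # Create the Siebel Server resource, again specifying Siebel_version
    clresource create -g siebel-srvr-rg -t SUNW.sblsrvr \
        -p Confdir_list=/global/siebel/siebsrvr \
        -p Siebel_version=8.2.2 \
        siebel-srvr-rs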


2. Encryption facility for the HA-Siebel configuration files

The HA-Siebel solution uses the database user/password and the Siebel user/password to execute the start, stop, and monitor methods. These passwords are stored in the scsblconfig and scgtwyconfig files, located under the Siebel Server installation directory and the Siebel Gateway Server installation directory, respectively. The new HA-Siebel data service provides an option to encrypt these files, which the agent decrypts before use.

The Oracle Solaris Cluster administrator encrypts the configuration files following the steps provided in the HA-Siebel document. The HA-Siebel agent decrypts these files and uses the entries while executing the start, stop and monitor methods.

For detailed information on the configuration of encrypted files, refer to Configuring the Oracle Solaris Cluster Data Service for Siebel (Article 1509776.1) posted on My Oracle Support at http://support.oracle.com. You must have an Oracle support contract to access the site. Alternatively, you can also refer to the Oracle Solaris Cluster Data Service for Siebel Guide.

Friday Aug 01, 2014

Oracle Solaris Cluster 4.2 - Agent development just got better and easier

Agent or data service development allows you to deploy your application within the Oracle Solaris Cluster infrastructure. The Generic Data Service (GDS) has been designed and developed by Oracle Solaris Cluster Engineering to reduce the complexity associated with data service development so that all you need to do is to supply the GDS with a script to start your application. Of course, other scripts can also be used by the GDS to validate, stop and probe your application. Essentially, the GDS is our preferred data service development tool and is extensively deployed within our fully supported data service portfolio of popular applications. If we do not have a data service for your application then the GDS will be your friend to develop your own data service.

With Oracle Solaris Cluster 4.2 we are delivering a new version of the GDS. If you are familiar with data service development within Oracle Solaris Cluster then you will be pleased to know that we have kept the consistent behaviour that makes the GDS a trusted and robust data service development tool. By delivering a new and enhanced version of the GDS, data service development within Oracle Solaris Cluster is now much more flexible.

We have included some enhancements which previously required somewhat awkward workarounds when using the original GDS resource type SUNW.gds. For example, previously your application was always started under the control of the Process Monitor Facility (PMF); with the new version of the GDS, using the PMF is optional, controlled by setting the extension property PMF_managed=TRUE or FALSE. As you may be aware, when an application is started under the control of the PMF it has to leave behind at least one process for the PMF to monitor. While most applications do this, you may want your GDS resource to execute a script or command that does not leave behind any processes. Setting PMF_managed=FALSE for your GDS resource tells the GDS not to start your script or command under the control of the PMF; instead, your script or command is simply executed. This is very useful if your GDS resource performs some task that does not leave behind a process.
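A minimal sketch of such a resource, assuming placeholder resource group, resource, and command names (your configuration may need additional properties, such as a stop or probe command):

    # Register the new GDS resource type (once per cluster)
    clresourcetype register ORCL.gds

    # Create a resource whose start command runs a one-shot task and exits,
    # so the command is not placed under PMF control
    clresource create -g app-rg -t ORCL.gds \
        -p Start_command="/opt/myapp/bin/run-task" \
        -p PMF_managed=false \
        task-rs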

We have also enhanced the GDS probe's behaviour so that you can ensure that your probe command will always be executed, even on a system that is heavily loaded, potentially eliminating probe timeouts. We have bundled a new resource type so that you can proxy a service state, and we also provide the ability to subclass this new version of the GDS. Subclassing allows you to use the new version of the GDS but with your own resource type name and new or modified extension properties. In fact, there are over 20 improvements in this new version of the GDS.

While it is not practical to showcase all the capabilities of the new version of the GDS within this blog, its capability is significant when you consider that new extension properties and other improvements, such as subclassing, can all be combined. Furthermore, the new version of the GDS validates your configuration and informs you if your configuration has a conflict. Consequently, the new version of the GDS has enough functionality and flexibility to be our preferred data service development tool.

The original GDS resource type SUNW.gds still exists and is fully supported with Oracle Solaris Cluster 4.2. The new version of the GDS includes two new resource types, ORCL.gds and ORCL.gds_proxy. It is possible to use your existing SUNW.gds start/stop and probe scripts with ORCL.gds. However, if you included workarounds within your scripts you should consider removing or disabling those workarounds and instead use the new extension property that delivers the same behaviour.

Oracle Solaris Cluster 4.2 delivers enterprise Data Center high availability and business continuity for your physical and virtualised infrastructures. Deploying your application within that infrastructure is now better and easier with ORCL.gds and ORCL.gds_proxy. Comprehensive documentation, including demo applications, for the new version of the GDS can be found within the Oracle Solaris Cluster Generic Data Service (GDS) Guide.

Additionally, further links are available that describe data services: the Oracle Solaris Cluster Concepts Guide, all our data service documentation for Oracle Solaris Cluster 4.2, and finally our Oracle Solaris Cluster 4 Compatibility Guide, which lists our fully supported data services.

Neil Garthwaite
Oracle Solaris Cluster Engineering

Wednesday Mar 20, 2013

Configuring HA for Oracle WebLogic Server Managed Servers by Using the clsetup Wizard

This post describes the steps to use the clsetup wizard to configure an Oracle WebLogic Server Managed Servers instance as a highly available failover data service with Oracle Solaris Cluster software.

HA Prerequisites

  • The logical host resource to be used by the data service is configured.

  • The administration server instance of Oracle WebLogic Server software is running. This is to avoid the delay during discovery of the managed servers, which might take 10 minutes or longer.

  • The common agent container (cacao) service is running on the nodes on which Oracle WebLogic Server software will be configured.
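A quick way to check this last prerequisite on each candidate node is to query the common agent container directly (a simple sketch; the output format varies by release):

    # Verify that the common agent container is running on this node
    /usr/sbin/cacaoadm status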

Application Prerequisites

  • The boot.properties file is in the $DOMAIN_DIR/servers/<server-name>/security folder.

  • The Managed Server to be configured is created.

Points to Note

While using the clsetup wizard:

  • Type a question mark (?) to access the help.

  • Type a left angle bracket (<) or a right angle bracket (>) to move backward or forward, respectively.

  • Each input provided is validated by the wizard.

Procedure

From a command prompt, type clsetup to start the utility. From the clsetup menu, select "Data Services". In the next menu, select “Oracle WebLogic Server”.

The following list is an overview of the steps that the clsetup wizard takes you through to configure the data service.

1. Select Oracle WebLogic Server Location: Select "Global Cluster" or "Zone Cluster". If you select "Zone Cluster", in the next menu select the zone cluster in which to configure the data service.
2. WebLogic Server Configuration: Lists the available types of configuration. Select "Managed Servers".
3. Verify Prerequisites: Make sure all listed prerequisites are met.
4. Specify Domain Location: Be sure to provide the proper domain location. If you provide the proper domain location and no customized configuration is made in the Oracle WebLogic Server software, the clsetup wizard auto-discovers the parameter values.
5. Specify WebLogic Home Directory: Either select a directory from the discovered values or specify your own home directory by selecting the option "Specify explicitly".
6. Specify WebLogic Managed Server Start Script: Press "RETURN" to accept the discovered value or type your own value.
7. Specify WebLogic Server Environment File: Press "RETURN" to accept the discovered value or type your own value.
8. Select Configuration Mode for Managed Servers: Select "Failover".
9. Select Managed Server: Select from the displayed list the Managed Server that you want to configure.
10. Select Logical Hostname for Managed Server <Managed Server Name>: Select the appropriate logical hostname from the list.
11. Specify Monitoring URI: Specify the URI or leave the field blank. To specify multiple monitoring URIs, separate them by commas. If you specify a URI, it is validated.
12. Configure Highly Available Storage Resources: Select the appropriate storage from the list.
13. Review Panel: Displays the values entered. To edit the information, choose the corresponding number.
14. Summary Panel: Displays the values you entered. This panel is read-only.
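After the wizard completes, you can confirm what it created. A minimal sanity check might look like the following (the resource group name below is a placeholder; the wizard reports the actual names it creates):

    # Check the status of the resource group and resources created by the wizard
    clresourcegroup status wls-mgd-rg
    clresource status -g wls-mgd-rg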

Configuring the HA for Oracle WebLogic Server service by using the clsetup wizard is easy and effective, and it ensures proper configuration of properties for resources and resource groups.

For additional information for the Oracle Solaris Cluster 3.3 3/13 release, see Oracle Solaris Cluster Data Service for Oracle WebLogic Server Guide. For other applicable Oracle Solaris Cluster releases, see the Oracle Systems Software documentation indexes.

- Sabareesh

Friday Mar 08, 2013

Stay in sync with a changing SAP world

Over the years, the SAP kernel has changed a lot, and the architecture of our SAP agents eventually reached its limits. It was clearly time for a rewrite of the SAP agent. To accommodate the changes in the SAP kernel, we created the new HA for SAP NetWeaver agent. We also took this chance to support both central instance and traditional HA deployments within this one agent. This integration reduces resource complexity.


Most recently, with the SAP kernel 7.20_EXT, SAP introduced the ability to integrate HA frameworks. Of course, we did not miss the opportunity to integrate our new SAP NetWeaver agent with this functionality. With the new HA for SAP NetWeaver agent, it is now possible to use the SAP commands and the SAP Management Console to manage SAP instances under cluster control, without having root access. This improves the ease of use of SAP in an Oracle Solaris Cluster environment. We made this functionality available on Oracle Solaris 11 starting with Oracle Solaris Cluster 4.0 SRU 4.
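As an illustration of what this integration enables, an SAP administrator can query the cluster integration through the standard sapcontrol interface; the instance number below is a placeholder:

    # Show which HA framework the instance is integrated with (placeholder instance 00)
    sapcontrol -nr 00 -function HAGetFailoverConfig

    # Run the built-in HA configuration checks
    sapcontrol -nr 00 -function HACheckConfig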


One of the larger cities in Europe migrated its SAP system from Oracle Solaris 10 to Oracle Solaris 11 as soon as the HA for SAP NetWeaver agent was available, and reduced its system administration costs thanks to the benefits of Oracle Solaris 11 and Oracle Solaris Cluster 4.0. Information about this customer use case is available at:
http://www.oracle.com/us/corporate/customers/customersearch/city-of-nuremberg-1-solaris-ss-1912239.html


We look forward to your feedback and input!


Detlef
