Tuesday Dec 06, 2011

Announcing Release of Oracle Solaris Cluster 4.0!

We are very happy to announce the release of Oracle Solaris Cluster 4.0, the first release providing High Availability (HA) and Disaster Recovery (DR) capabilities for Oracle Solaris 11. This release comes within a few weeks of the release of Oracle Solaris 11.


Oracle Solaris Cluster 4.0 offers the best availability for enterprise applications, with instant system failure detection for the fastest service recovery. It includes out-of-the-box support for the Oracle database and applications such as Oracle WebLogic Server, and is pre-tested with Oracle's Sun servers, storage, and networking components. It is optimized to leverage the SPARC SuperCluster redundancy and reliability features and delivers the high availability infrastructure for the Oracle Optimized Solutions.


Oracle Solaris Cluster on Oracle Solaris 11 offers a unified installation experience, leveraging the Oracle Solaris Image Packaging System (IPS) for all the benefits it brings. These include error-free software updates, automatic package dependency resolution, and an automated installer for easy multi-node installation.
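As a rough sketch of what the IPS-based installation looks like on a node: the `ha-cluster` publisher name, the file-based repository path, and the `ha-cluster-full` group package name below are assumptions to be verified against the Oracle Solaris Cluster 4.0 installation guide.

```shell
# Hypothetical IPS installation of Oracle Solaris Cluster 4.0 on one node.
# Publisher name, repository path, and package name are assumptions.
# Guarded so the snippet is a harmless no-op on systems without IPS.
if command -v pkg >/dev/null 2>&1; then
  pkg set-publisher -g file:///repo/ha-cluster ha-cluster  # register the cluster publisher
  pkg install ha-cluster-full                              # full cluster software group package
  pkg update                                               # IPS resolves package dependencies automatically
else
  echo "IPS (pkg) not available on this system; skipping"
fi
```

Because IPS tracks dependencies per package, the same commands bring in the right agent and framework packages on every node, which is what makes the multi-node installation error-free.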


For a complete list of features and benefits, see "What's New in Oracle Solaris Cluster." Also, watch Bill Nesheim, VP Solaris Platform Engineering, in a webcast on Oracle Solaris Cluster 4.0 at 9AM PST. Stay tuned for more blog articles on the features. You can try the product out for evaluation, development, and testing use at the Oracle Technology Network, or obtain it for production use at the Oracle Software Delivery Cloud. We look forward to your feedback!


- Roma Baron, Sr. Program Manager, Solaris Cluster


- Meenakshi Kaul-Basu, Director, Solaris Cluster

Friday Mar 18, 2011

Oracle Solaris Cluster 3.3 available

On September 8, 2010, Oracle announced the availability of Oracle Solaris Cluster 3.3.

Oracle Solaris Cluster 3.3, built on the solid foundation of Oracle Solaris, offers the 
most extensive Oracle enterprise High Availability and Disaster Recovery solutions for the 
largest portfolio of mission-critical applications.

Integrated and thoroughly tested with Oracle's Sun servers, storage, connectivity 
solutions and Solaris 10 features, Oracle Solaris Cluster is now qualified with Solaris 
Trusted Extensions, supports InfiniBand for general networking or storage usage, and can 
be deployed with Oracle Unified Storage in campus cluster configurations. It extends its 
application support to new Oracle applications such as Oracle Business Intelligence, 
PeopleSoft, TimesTen, and MySQL Cluster.

This single, integrated HA and DR solution enables multi-tier deployments in virtualized 
environments. In this release, Oracle Solaris Containers clusters support even more 
configurations, including additional applications (Oracle WebLogic Server, Siebel CRM, and 
more) and integration with Oracle Solaris Cluster Geographic Edition.


Benefits:
    * Delivers unrivaled High Availability on the Oracle Solaris OS, with much faster failure 
      detection and recovery
    * Enables cost savings without performance compromise by integrating seamlessly with 
      Oracle Solaris Containers for application and database consolidation
    * Out-of-the-box support for a wide selection of applications
    * Certified with a broad range of storage arrays from Oracle and third parties on 
      SPARC and x86 platforms


New features:
-------------

Availability:
- Active Monitoring of Storage Resources
- Flexible load distribution of application services


Virtualization:
- Extended Oracle Solaris Containers cluster support:
  * NAS, GFS, RDSv1
  * More applications: Oracle WebLogic Server, OBIEE, MySQL Cluster,
    PeopleSoft, TimesTen

Hardware Integration:
- InfiniBand on public network and as storage connectivity

Application Integration:
- New agents: Oracle Business Intelligence Enterprise Edition,
  PeopleSoft Enterprise, MySQL Cluster, TimesTen
- Updates on Oracle E-Business Suite, WebLogic Server, MySQL, SAP
- Oracle 11gR2 database and RAC support

Disaster Recovery:
- Containers clusters with Geographic Edition
- Sun Unified Storage 7xxx in campus cluster

Security:
- Solaris Trusted Extensions

Ease of use:
- Wizards for ASM configuration setup
- GUI and CLI performance improvements
- Power management user interface
- Node rename


Compatibility information
--------------------------

Supported Solaris release: Solaris 10 10/09, Solaris 10 9/10


Media Kit and downloads
-----------------------------------

Software is available through:

- OTN (for evaluation and tests)
http://www.oracle.com/technetwork/server-storage/solaris-cluster/downloads/index.html

- e-delivery (for production use - requires purchase of commercial license)
http://edelivery.oracle.com

Select Product Pack:  Oracle Solaris
From results pick:  Oracle Solaris Cluster 3.3 Media Pack

Documentation
---------------------

* Oracle Solaris Cluster 3.3 Documentation Center:
http://www.oracle.com/technetwork/documentation/solaris-cluster-33-192999.html


* Release Notes Information:
http://wikis.sun.com/display/SunCluster/Release+Notes+Information

The Release Notes documents on this site are regularly updated with new
documentation to support new features, hardware qualifications, bug
workarounds, and other late-breaking information. Check the Release
Notes or Release Notes Supplement for your release before installing the
cluster or performing any maintenance.

Web site
----------------
http://www.oracle.com/technetwork/server-storage/solaris-cluster/overview/index.html


Thursday Jun 25, 2009

Single-node clusters, for disaster recovery and more...

At first sight, a single-node cluster may seem to be a pointless thing. After all, what sort of high availability can you get from one node? :)

That might be a valid point if HA alone were the only consideration, but there are quite a few other ways in which single-node clusters can be useful. Two of the most useful ones are:

  • As part of a Geographic Edition (SCGE) Disaster Recovery (DR) configuration.
  • For development and test.

Disaster Recovery with SCGE

SCGE allows two clusters, separated by enough distance that a disaster at one site will not affect the other, to be managed together. Several data replication products (AVS, SRDF, Oracle DataGuard, etc.) can be managed within this two-cluster partnership to ensure that the DR site has up-to-date information, ready to take over service.

Obviously this configuration requires two clusters, but if we assume that the DR site will be needed only in the (hopefully rare) instance of a disaster, and probably occasionally during maintenance of the primary, there is no need for it to be an exact copy of the primary site. In fact, it can be a single node. All that is required is that it be running Solaris Cluster software, i.e. it can be a Single-Node Cluster.

Carrying this idea further, it is also fully supported to have single-node clusters at both primary and secondary sites.

As I mentioned at the start, this won't give very much in the way of High Availability in the event of a local primary-site server failure, but you may not need that. Strange though it might seem at first glance, HA isn't a prerequisite for DR; whether you need it depends entirely on your business continuity needs (and that's a subject for a future blog entry).

No special tricks or configurations are required; it just works “out of the box”. With two single-node clusters and AVS (SNDR) replication between them, you have a fully supported Disaster Recovery configuration, implemented with no special additional hardware. Larger sites with external storage arrays and replication also work just as well with a single-node cluster as with a multi-node configuration.
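For the two single-node-cluster scenario, the Geographic Edition pairing is done with the geops and geopg commands. Here is a hedged sketch, assuming hypothetical cluster, partnership, and protection-group names; the exact flags and property arguments should be verified against the geops(1M) and geopg(1M) man pages for your release.

```shell
# Hedged sketch: pairing two single-node clusters and creating an AVS
# protection group. All names (newyork, paris-newyork, oracle-pg) are
# hypothetical. Guarded so it is a no-op on systems without SCGE.
if command -v geoadm >/dev/null 2>&1; then
  geoadm start                                    # enable the Geographic Edition framework (run on both clusters)
  geops add-trust -c newyork                      # establish trust with the partner cluster
  geops create -c newyork paris-newyork           # create the partnership (the partner side joins it)
  geopg create -s paris-newyork -o primary -d avs oracle-pg
  geopg start -e global oracle-pg                 # start the protection group on both sites
else
  echo "Geographic Edition commands not found; skipping"
fi
```

The point of the sketch is that nothing here cares how many nodes each cluster has: the same partnership and protection-group steps apply whether each side is one node or many.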

Development

Another place where a single-node cluster can be really useful is when developing cluster-based software, especially cluster agents. With the support for Solaris Containers (aka zones) that was added in Solaris Cluster 3.2, this has become even easier.

Fully testing a cluster agent requires that you simulate failures, such as system crashes or disconnections, and ensure that the agent reacts correctly. This is also true when testing that a given application operates correctly in a cluster environment. It's not something that you'd normally want to do on your desktop. However, providing an extra pair of systems in a cluster as lab test equipment for each developer is costly, and takes up valuable lab space and energy.

The solution? A single-node cluster, with some zones configured. With Solaris Cluster 3.2 you can specify zones (in the format of nodename:zonename) in the nodelist of an application resource group, see the clrg(1CL) manpage. The cluster software, running in the global zone, manages those applications just as if they were on separate physical nodes. You can request that the resource groups be switched between zones, or even crash or halt zones to test that automatic recovery is performed correctly. All without leaving your desk or rebooting your development system.
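Using the clrg(1CL) nodename:zonename syntax mentioned above, a minimal sketch of this setup might look like the following; the physical node, zone, and resource-group names are hypothetical.

```shell
# Sketch: a resource group whose "nodes" are two zones on the single
# physical node of a single-node cluster. Names are hypothetical; run
# from the global zone. Guarded so it is a no-op without Solaris Cluster.
if command -v clrg >/dev/null 2>&1; then
  clrg create -n phys-node1:zoneA,phys-node1:zoneB app-rg  # zones act as virtual nodes
  clrg online -M app-rg                      # manage and bring the group online
  clrg switch -n phys-node1:zoneB app-rg     # simulate a failover by switching zones
  clrg status app-rg                         # confirm where the group is running
else
  echo "Solaris Cluster commands not found; skipping"
fi
```

Halting zoneA while the group runs there (for example with zoneadm) then lets you verify that the cluster framework restarts the service in zoneB automatically.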

And for my next trick...

I hope that's given a brief idea of what can be done today with a single node. What might the future hold? Well, people will jump on me if I promise anything, but I really like some of the ideas that the Open HA Cluster guys have been demonstrating. Take a look at Thorsten's whitepaper if you want to try clustering VirtualBox systems - on your laptop!

As always, join us at http://www.opensolaris.org/os/community/ha-clusters/ to discuss this or any other cluster topics.


Steve McKinty
SCGE Architect


Tuesday Apr 21, 2009

Solaris Cluster + MySQL Demo at an Exhibition Hall near you

Hi,

If by any chance you are at the MySQL User Conference at the Santa Clara Convention Center, there are some great opportunities to see MySQL and Solaris Cluster in action. We have a demo in the exhibition hall, where you can see MySQL with zone clusters and MySQL with SC Geographic Edition.

I will host a Birds of a Feather session on Wednesday evening, 4/22/09, at 7 pm in meeting room 205; the title is "Configuring MySQL in Open HA Cluster, an Easy Exercise". I will also give a presentation on Thursday morning, 4/23/09, at 10:50 in Ballroom G, titled "Solutions for High Availability and Disaster Recovery with MySQL".

I hope to see you there in person.

-Detlef Ulherr

Availability Products Group

Wednesday Feb 25, 2009

Disaster Recovery Protection Options for Oracle with Sun Cluster Geographic Edition 01/09

With the announcement of Solaris Cluster 3.2 01/09 comes a new version of Sun Cluster Geographic Edition (SCGE). Among the features delivered in this release is support for replication of Oracle RAC databases using Oracle Data Guard. So it seems like a good opportunity to summarise the ways you can protect your Oracle infrastructure against disaster using the replication support provided by Sun Cluster Geographic Edition.

I'll start by breaking the discussion into two halves: the first covering deployments using a highly available (HA) Oracle implementation, the second using Oracle Real Application Clusters (RAC). Additionally, I'll reiterate the replication technologies that SCGE supports, namely EMC Symmetrix Remote Data Facility (SRDF), Hitachi TrueCopy, Sun StorageTek Availability Suite (AVS) and, last but not least, Oracle Data Guard (ODG). One final point to make is that SCGE support for SRDF/A is limited to takeover operations only.

HA Oracle Deployments

HA-Oracle deployments are found in environments where the cost/benefit analysis determines that you are prepared to accept the longer outages involved in switching or failing over an Oracle database, compared with the near-continuous service that Oracle RAC can offer, in exchange for avoiding the additional licensing costs involved.

Deployments of HA-Oracle can be on a file system: UFS or VxFS (stay posted for ZFS support) or on raw disk with, or without, a volume manager: Solaris Volume Manager (SVM) or Veritas Volume Manager (VxVM). Why not Oracle Automatic Storage Management you might ask? Well, while ASM is indeed supported on Oracle RAC, it poses problems when employed in a failover environment. There is a need to fail over either the ASM instance or just the disk groups used to support the dependent databases. These requirements currently preclude ASM from being supportable. Are we working on this? Of course we are!

So this gives us a set of storage deployment options that must be married with the replication options that SCGE supports, and with any restrictions that may come into play when deploying HA-Oracle in a Solaris Container (a.k.a. zone).

Coverage is extensive: Oracle 9i, 10g and 11g are supported on file systems (UFS or VxFS) or raw devices, with or without containers and with or without VxVM, using either AVS, SRDF or TrueCopy. In contrast, SVM restricts the replication technology support to AVS only.

Why isn't Oracle Data Guard supported here, especially given that it's one of the new replication modules in SCGE 3.2 01/09? The answer lies in the use of the Oracle Data Guard broker as an interface to control the replication. Unfortunately, the ODG broker stores a physical host name in its configuration files; after a failover, that name no longer matches the new host, invalidating the configuration. Consequently, Oracle does not support the ODG broker on 'cold failover' database implementations, even if this host name change could be avoided, say by putting the database into a Solaris Container.

Oracle RAC Deployments

With HA-Oracle options covered I'll now turn to Oracle RAC. As you will no doubt know from reading "Solaris Cluster 3.2 Software: Making Oracle Database 10G R2 and 11G RAC Even More Unbreakable", Solaris Cluster brings a number of additional benefits to Oracle RAC deployments including support for shared QFS as a means of storing the Oracle data files. So now you'll need to know what deployment options exist when you include SCGE in your architecture.

As we're still working on adding Solaris Container Cluster support to SCGE there is currently no support for Oracle RAC, or indeed any other data service, using this virtualisation technique.

Furthermore, I should remind you that AVS is not an option for any Oracle RAC deployment simply because it cannot intercept the writes coming from more than one node simultaneously.

On the positive side, storage replication products such as SRDF and TrueCopy are an option as they intercept writes at a storage array level rather than at the kernel level. These replication technologies are restricted to configurations using raw disk on hardware RAID or raw VxVM/CVM volumes. These storage options can then be used with Oracle 9i, 10g or 11g RAC. For a write up of just such a configuration, please read EMC's white paper on our joint demonstration at Oracle Open World 2008.

Combinations wishing to use shared QFS or ASM are currently precluded because of the additional steps that must be interposed prior to an SCGE switchover or takeover being effected. Are we looking to address this? Absolutely!

If you want unfettered choice of storage options on Solaris Cluster when replicating Oracle 10g and 11g RAC data, then the new Oracle Data Guard module for SCGE is the answer. You are free to choose any combination of raw disk, ASM, and shared QFS deployment that makes sense to you. You can configure a physical standby partner in either single-instance to single-instance or dual-instance to single-instance combinations, i.e. primary-site to standby-site configurations. All ODG replication modes are supported: maximum performance, maximum availability and maximum protection. Although SCGE can control logical standby configurations, Sun has not yet announced formal support for use of this feature.
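As an illustration, creating and exercising an ODG-based protection group might look like the sketch below. The "odg" replication-type identifier, the names, and the flags shown are assumptions to be checked against the geopg man page and the Data Guard module documentation.

```shell
# Hedged sketch: an Oracle Data Guard protection group in SCGE.
# Partnership, protection-group, and cluster names are hypothetical.
# Guarded so it is a no-op on systems without Geographic Edition.
if command -v geopg >/dev/null 2>&1; then
  geopg create -s paris-newyork -o primary -d odg sales-pg  # ODG-replicated protection group
  geopg start -e global sales-pg           # start the protection group on both sites
  geopg switchover -m newyork sales-pg     # planned role reversal to the standby site
else
  echo "Geographic Edition commands not found; skipping"
fi
```

Because the replication itself is handled by Data Guard rather than by the storage array, the underlying database storage (raw disk, ASM, or shared QFS) is invisible to the protection group, which is exactly why this module offers the unfettered storage choice described above.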

You've Read The Blog, Now See The Movie....

I hope that gives you a clear picture of how you can use a combination of Solaris Cluster and the various replication technologies that Sun Cluster Geographic Edition supports to create disaster recovery solutions for your Oracle databases. If you would like to see demonstrations of some of these capabilities, please watch the video of an ODG setup and an SRDF configuration on Sun Learning Exchange.

Tim Read
Staff Engineer
Solaris Cluster Engineering
