Thursday Dec 03, 2009
By dlnprasad on Dec 03, 2009
Solaris Cluster is a multi-system, multi-site high availability and disaster recovery solution that manages the availability of application services and data across local, regional, and geographically dispersed data centers. The Solaris Cluster environment extends the Solaris Operating System into a cluster operating system.
If you have been following the Solaris Cluster product and its features, you will have noticed extensive innovation by Sun in the high availability and disaster recovery space since we released our first HA product many years ago. For well over a decade, Solaris Cluster has been a market leader in providing business continuity and disaster recovery solutions for mission-critical business applications, spanning all the major industry segments.
Continuing with our tradition of innovation, we are pleased to announce another release - "Solaris Cluster 3.2 11/09" - an update to the Solaris Cluster 3.2 product.
This new release brings more features for high availability, virtualization, disaster recovery, flexibility, diagnosability, and ease of use. We are extending support for virtualization with more options for Solaris Containers Clusters (zone clusters), failover Solaris Containers, and LDoms. This release provides more file system and volume management choices. We have added new replication solutions for disaster recovery deployments, in addition to improving scalable services support. This release also brings support for the latest versions of many third-party software applications.
The following is a list of some of the new features in this update release:
- New deployment options for Oracle DB, RAC, and applications
- Solaris Containers cluster support for more Oracle solutions:
  - Oracle E-Business Suite
  - Siebel CRM 8
  - Single-instance Oracle database
- Support for Oracle Automated Storage Management (ASM) with single-instance Oracle database
- Support for Reliable Datagram Sockets (RDS) over InfiniBand for RAC
- Standalone QFS 4.6 & later in failover Solaris Containers
- Upgrade on attach for failover Solaris Containers
- Solaris Volume Manager three-mediator support in Campus Cluster deployments
- Hitachi Universal Replicator support in Campus Cluster deployments
- Support for 1TB disks as quorum devices
- Outgoing connection support for scalable services
- IPsec support for scalable services traffic
- SCTP support for scalable services
- Managed failover of IPsec session & key information
- Round-robin load balancing
- Hitachi Universal Replicator support
- Script-based plugin replication module
- MySQL replication
- New agent: HA Agent for LDoms guest domains
- New supported application version: SWIFTAlliance Access & Gateway 6.3
This feature-rich, high-quality product is available for download here. Download and try the latest Solaris Cluster release; we look forward to your comments.
Solaris Cluster Engineering
Thursday Oct 22, 2009
By mkb on Oct 22, 2009
A white paper titled "Addressing Virtualization and High-Availability Needs with Sun Solaris Cluster", written by IDC analyst, Jean Bozman, is now available!
What you will find in the paper includes:
- Businesses' High Availability needs in the Virtualized IT Environment and how Solaris Cluster addresses these requirements
- Why Virtualization Software and High Availability Software are being used to protect applications
- Worldwide Availability and Clustering Software Revenue, 2008-2013
- Integration of High Availability and Virtualization Use Cases
- Solaris Cluster Customer Snapshots
We look forward to your comments on the product.
Director, Solaris Cluster
Thursday Jun 25, 2009
By smckinty on Jun 25, 2009
At first sight, a single-node cluster may seem to be a pointless thing. After all, what sort of high availability can you get from one node?
That might be a valid point if HA alone were the only consideration, but there are quite a few other ways in which single-node clusters can be useful. Two of the most useful ones are:
- As part of a Geographic Edition (SCGE) Disaster Recovery (DR) configuration.
- For development and test.
Disaster Recovery with SCGE
SCGE allows two clusters, separated by enough distance that a disaster at one site will not affect the other, to be managed together. Several data replication products (AVS, SRDF, Oracle DataGuard, etc.) can be managed within this two-cluster partnership to ensure that the DR site has up-to-date information, ready to take over service.
Obviously this configuration requires two clusters, but if we assume that the DR site will be needed only in the (hopefully rare) instance of a disaster, and probably occasionally during maintenance of the primary, there is no need for it to be an exact copy of the primary site. In fact, it can be a single node. All that is required is that it be running Solaris Cluster software, i.e. it can be a Single-Node Cluster.
Carrying this idea further, it is also fully supported to have single-node clusters at both primary and secondary sites.
As I mentioned at the start, this won't give very much in the way of High Availability in the event of a local primary-site server failure, but you may not need that. Strange though it might seem at first glance, HA isn't a prerequisite for DR; it depends entirely on your business continuity needs (and that's a subject for a future blog entry).
No special tricks or configurations are required for this; it just works “out of the box”. With two single-node clusters and AVS (SNDR) replication between them, you have a fully-supported Disaster Recovery configuration, implemented with no special additional hardware. Larger sites with external storage arrays and replication also work just as well with a single-node cluster as with a multi-node configuration.
Another place where a single-node cluster can be really useful is when developing cluster-based software, especially cluster agents. With the support for Solaris Containers (aka zones) that was added in Solaris Cluster 3.2, this has become even easier.
Fully testing a cluster agent requires that you simulate failures, such as system crashes or disconnections, and ensure that the agent reacts correctly. This is also true when testing that a given application operates correctly in a cluster environment. It's not something that you'd normally want to do on your desktop. However, providing an extra pair of systems in a cluster as lab test equipment for each developer is costly, and takes up valuable lab space and energy.
The solution? A single-node cluster, with some zones configured. With Solaris Cluster 3.2 you can specify zones (in the format nodename:zonename) in the nodelist of an application resource group; see the clrg(1CL) manpage. The cluster software, running in the global zone, manages those applications just as if they were on separate physical nodes. You can request that the resource groups be switched between zones, or even crash or halt zones to test that automatic recovery is performed correctly. All without leaving your desk or rebooting your development system.
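As a rough sketch, the zone-based testing described above might look like the following clrg(1CL) command sequence on a single-node cluster (the host and zone names here are hypothetical, and the zones are assumed to be already installed and booted):

```shell
# Create a resource group whose nodelist is two zones on the
# same physical node, using the nodename:zonename format.
clrg create -n phys-host-1:zone-a,phys-host-1:zone-b test-rg

# Bring the resource group online; it starts in the first
# zone listed in the nodelist.
clrg online test-rg

# Simulate a failover by switching the group to the other zone.
clrg switch -n phys-host-1:zone-b test-rg

# Check where the group is currently running.
clrg status test-rg
```

From there, halting the hosting zone from the global zone (for example with zoneadm -z zone-b halt) lets you verify that the cluster restarts the group in the other zone automatically.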
And for my next trick...
I hope that's given a brief idea of what can be done today with a single node. What might the future hold? Well, people will jump on me if I promise anything, but I really like some of the ideas that the Open HA Cluster guys have been demonstrating. Take a look at Thorsten's whitepaper if you want to try clustering VirtualBox systems - on your laptop!
As always, join us at http://www.opensolaris.org/os/community/ha-clusters/ to discuss this or any other cluster topics.
Friday Apr 24, 2009
By mkb on Apr 24, 2009
Solaris Cluster is now supported with the latest in Intel technology: the Sun Fire x4170 (1U) and x4270/x4275 (2U) leading-edge x64 servers.
Included in the latest platform portfolio is the Sun Blade x6270, an Intel Xeon 5500 Nehalem Constellation blade. The new servers support a slew of features, such as double the compute threads (16 hyper-threads) and up to 144GB of memory.
With the addition of PCIe Gen 2, larger storage capacities can be achieved with double the I/O. This all comes in a package that includes SAS, SATA, and flash-based Solid State Disks (SSD).
Combine this with Solaris and Solaris Cluster, and you've got a scalable architecture for multi-threaded applications with mission-critical capability.
The x4170, x4270, and x4275, along with the x6270, are now supported with the following Solaris Cluster releases and configurations:
- Solaris 10
- Solaris Cluster 3.2
- 8-Node support
- 8-Node N+1
- 8-Node Cluster Pairs
- 8-Node Pair + N
- 8-Node Campus Cluster
Feel free to contact me or your sales representative for more detailed information!
Sr. Manager - Solaris Cluster
Oracle Solaris Cluster Engineering Blog