Thursday Dec 03, 2009

Solaris Cluster 3.2 11/09 is now available

Solaris Cluster is a multi-system, multi-site high availability and disaster recovery solution that manages the availability of application services and data across local, regional, and geographically dispersed data centers. The Solaris Cluster environment extends the Solaris Operating System into a cluster operating system.

If you have been following the Solaris Cluster product and its features, you will have noticed extensive innovation by Sun in the high availability and disaster recovery space since we released our first HA product many years ago. For well over a decade, Solaris Cluster has been a market leader in providing business continuity and disaster recovery solutions for mission-critical business applications spanning all the major industry segments.

Continuing with our tradition of innovation, we are pleased to announce another release - "Solaris Cluster 3.2 11/09" - an update to the Solaris Cluster 3.2 product.

This new release brings more features for high availability, virtualization, disaster recovery, flexibility, diagnosability, and ease of use. We are extending support for virtualization with more options for Solaris Containers clusters (zone clusters), failover Solaris Containers, and LDoms. This release provides more file system and volume management choices. We have added new replication solutions for disaster recovery deployments, in addition to improving scalable services support. This release also brings support for the latest versions of many third-party software applications.

The following is a list of some of the new features in this update release:

- New deployment options for Oracle DB, RAC, and applications

  • Solaris Containers cluster support for more Oracle solutions:
    • Oracle E-Business Suite
    • Siebel CRM 8
    • Single-instance Oracle database
  • Support for Oracle Automated Storage Management (ASM) with single-instance Oracle database
  • Support for Reliable Datagram Sockets (RDS) over InfiniBand for RAC
- Infrastructure features
  • Standalone QFS 4.6 & later in failover Solaris Containers
  • Upgrade on attach for failover Solaris Containers
  • Solaris Volume Manager three-mediator support in Campus Cluster deployments
  • Hitachi Universal Replicator support in Campus Cluster deployments
  • Support for 1 TB disks as quorum devices
- Scalable services features
  • Outgoing connection support for scalable services
  • IPsec support for scalable services traffic
  • SCTP support for scalable services
  • Managed failover of IPsec session & key information
  • Round-robin load balancing
- Geographic Edition
  • Hitachi Universal Replicator support
  • Script-based plugin replication module
  • MySQL replication
- Supported applications and agent features
  • New agent: HA Agent for LDoms guest domains
  • New supported application version: SWIFTAlliance Access & Gateway 6.3
Check here for additional details about the features listed above. Solaris Cluster 3.2 11/09 is supported on Solaris 10 5/09 and Solaris 10 10/09. Refer to the release notes for the list of minimum patches required to run this update. The release notes also have links to the product documentation for all the features listed above.

This feature-rich and high-quality product can be downloaded from here. Download and try this latest Solaris Cluster release. We look forward to your comments.

Thanks,

Prasad Dharmavaram
Roma Baron
Solaris Cluster Engineering

Thursday Jan 29, 2009

Working Across Boundaries: Solaris Cluster 3.2 1/09!

I am sure you have seen the recent blog post and the announcement of Solaris Cluster 3.2 1/09. This release has a cool set of features, which includes providing high availability and disaster recovery in virtualized environments. It is also exciting to see distributed applications like Oracle RAC run in separate virtual clusters! Some of the features integrated into this release were developed in the Open HA Cluster community. That is another first for us!

I am sure the engineers on the team will be writing blog entries in the coming weeks, detailing the features they have developed. Stay tuned for more big things from this very energetic and enthusiastic team!

It is very interesting to see the product getting rave reviews from you,  our customers. We value your feedback and take extra steps to make the product even better. We appreciate the acknowledgement! Here is one customer success story from EMBARQ that will catch your attention:
"Solaris Cluster provides a superior method of high availability. Our overall availability is 99.999%. No other solution has been able to ensure us with such tremendous uptime"        —  Kevin McBride, Network Systems Administrator III at EMBARQ

That is one big compliment! Thank you for your continued support and feedback!

A distributed team made this release possible. Some of the features are large and needed coordinated effort across various boundaries (teams, organizations, continents)! A BIG thanks to the entire Cluster Team for their hard work in getting yet another quality product released. It is a great team!

-Meenakshi Kaul-Basu

Director, Availability Engineering


Friday Jun 08, 2007

Sun Cluster 3.2 Doc Responses to VOC Survey - Part 2

This blog continues my discussion of the improvements that we have made to the Sun Cluster 3.2 documentation in response to a Voice of the Customer survey.

We identified four major customer requests in the VOC survey. For information on our response to the first two VOC requests, see Sun Cluster 3.2 Documentation Responses to Voice-of-the-Customer Feedback.

Third Request: Provide Task-based Information

This VOC survey was designed to cover several Sun products. Fortunately, Sun Cluster procedural documentation is already task based, so this was an easy request for us. Sun Cluster introductory information and conceptual information that is not relevant to a specific task is provided in an introductory chapter or the Overview document. This separation of procedural and introductory information makes the procedures shorter and more concise.

Let me know if you think this separation in our documentation works well for you.

Fourth Request: Provide More Information for the Novice User

The original Sun Cluster documentation was targeted toward a sophisticated, well-trained, UNIX-savvy system administrator. Sun Cluster required that the customer receive extensive training before using the Sun Cluster product. As a result, our documentation tended to assume a high level of knowledge, and some of our customers started to complain. Customers, it seemed, did not want to be required to hire senior system administrators to manage their clustered environment. And even the senior system administrators did not always have the UNIX experience we required. Customers also told us that the person trained on Sun Cluster might not be the person who administers the cluster. In response, we've added several documents to help the new cluster system administrator quickly ramp up and get started with Sun Cluster.

Here are some examples of our new and improved documentation:


  1. Sun Cluster Documentation Center

    This topic-based document helps novice users to find information more quickly in our Sun Cluster doc set. The documentation center begins with a Getting Started group of links specifically targeted for novice users.

  2. Sun Cluster Overview

    This guide provides information on the Sun Cluster product along with an introduction to key concepts and the Sun Cluster architecture.

  3. Quick Start Guide

    The Quick Start Guide provides detailed, easy-to-follow steps to install Sun Cluster in a configuration used by approximately 80% of our Sun Cluster customers.

  4. How To Install and Configure a Two-Node Cluster

    This "How To" provides simple task-based instructions for installing a two-node cluster.

  5. Quick Reference Card

    The Quick Reference card provides simplified steps to perform the most commonly used Sun Cluster tasks. This card is available in a customized format to make printing easier.

  6. Intro(1CL) man page

    This man page describes the new object-oriented command set for Sun Cluster 3.2. This man page also defines each of the new commands, provides short versions of the commands, and provides links to the command man pages.

  7. Object-Oriented Commands appendix

    This appendix is included in each of our procedural docs and provides a quick reference to the new object-oriented commands in Sun Cluster 3.2. The appendix lists the command short forms and their subcommands.


If you have any suggestions for other documentation that will help novice users, feel free to comment.

Rita McKissick
Sun Cluster Documentation

Monday May 14, 2007

Clustering Solaris Guests That Run on VMware with Sun Cluster 3.2 Software

Virtualization in today's world is attracting quite a lot of attention. In the era of power-packed machines, nobody wants to dedicate an entire piece of costly hardware to a single cause. From running multiple applications to multiple operating systems (OS) to multiple hardware domains, it's all about making the most of a box. But to truly leverage the benefits of virtualization, the applications need to cooperate. Most modern-day software runs on virtual platforms as if the virtualization were transparent, which is really the intent.

For the sake of this experiment, VMware was chosen as the virtualization technology, mostly because it has been around for quite some time now. Support for the Solaris(TM) 10 OS (both 32- and 64-bit versions) as a guest operating system is already present in multiple flavors of VMware. VMware ESX Server 3.0.1 (http://www.vmware.com/products/vi/esx/), with the features it provides, was the platform of choice. Documentation for VMware ESX Server can be found at http://www.vmware.com/support/pubs/vi_pubs.html. I would like to add that I did try VMware Workstation and VMware Server, but in a clustered setup with Sun Cluster 3.2 software, certain areas such as networking and shared devices, in particular private interconnects and device fencing, had problems.

The aim was to run Sun Cluster 3.2 software on top of Solaris 10 guest OSes, and thereby cluster VMware virtual machines running the Solaris OS. The initial assumption was that, since the Solaris OS runs on VMware without problems, Sun Cluster software should work just fine! It makes sense to mention up front that these are initial steps in this direction, and the Sun Cluster team is continuously investigating various virtualization techniques and Sun Cluster support for them. The setup described here was done on 32-bit Solaris (purely due to the hardware available at the time), but I strongly believe that things won't look or work any differently in a 64-bit environment.

Given below are the various aspects of the setup. All mention of nodes and guests here refers to the virtual Solaris guests on VMware ESX Server, which are to be clustered using Sun Cluster 3.2 software.

P.S.: Images are shown as thumbnails for the sake of brevity. Please click on the images to enlarge them.

I) Number of cluster nodes
The maximum number of nodes that can be supported is dictated by the Sun Cluster software; having VMware in the picture doesn't affect this aspect. The setup here has been tried with two- and three-node clusters. For the purpose of illustration, we have up to 3 physical hosts (dual-CPU Sun Fire V20z machines with 4 GB of RAM), with each of the clustered nodes on a different physical host. However, this could easily be extrapolated to the cluster members being on the same physical machine, or a combination thereof.

II) Storage
VMware ESX Server provides various ways to add virtual SCSI storage devices to the Solaris guests running on it. VMware virtual devices could be on:

  • Direct-attached SCSI storage

  • Fibre Channel SAN arrays

  • iSCSI SAN arrays

  • NAS

In all cases where the VMware ESX Server abstracts any of the above underlying storage devices, so that the guest just sees a SCSI disk, the guest has no direct control of the devices. The problem here, when it comes to clustering, is that SCSI reservations don't seem to work as expected in all cases. Sun Cluster fencing algorithms for shared devices require SCSI reservations, so reservations not working doesn't help the cause. One could, however, use such devices when the intent is not to share them between cluster nodes, that is, as local devices.

However, VMware ESX has a feature called Raw Device Mapping (RDM), which allows the guest operating systems to have direct access to the devices, bypassing the VMware layer. More information on RDM can be found in VMware documentation. The following documents could be starting points:

http://www.vmware.com/pdf/vi3_301_201_intro_vi.pdf

http://www.vmware.com/pdf/vi3_301_201_san_cfg.pdf

RDM works only with Fibre Channel or iSCSI. In the setup here, a SAN storage box connected through Fibre Channel was used for mapping LUNs to the physical hosts. These LUNs could then be mapped onto the VMware guests using RDM. SCSI reservations have been found to work fine with RDM (both SCSI-2 Reserve/Release and SCSI-3). These RDM devices can therefore be used as shared devices between the cluster nodes. Of course, they can also serve as local devices for a node.

One point to note here is that the virtual SCSI controllers for the guest OSes need to be different for the local and the shared disks. This is a VMware requirement when sharing disks. Also the compatibility mode for RDM, to allow direct access to the storage from the guest, should be “Physical”. For detailed information, please refer to VMware ESX documentation.

Figure 1 (click to enlarge) is a screenshot of the storage configuration on a physical host. It shows the LUNs from the SAN storage which the ESX Server sees.


Figure 1. Storage Configuration (SAN through Fibre Channel) on a VMware ESX Server

Figure 2 (click to enlarge) is a peek at what the device configuration for a Solaris guest OS looks like. It shows that a few devices (hard disks) are Mapped Raw LUNs; this, of course, is done through RDM. Each such RDM mapping shows the virtual HBA adapter for the guest (vmhba1 here), the LUN ID from the SAN storage that is being mapped (28 here), and the SCSI bus location for that device (SCSI 1:0 for this guest). The disks that show “Virtual Disk” against them are devices abstracted by the VMware layer to the guest OS. Note that there are 2 SCSI controllers for the guest OS: SCSI Controller 0 is used for the local devices, and SCSI Controller 1 is used for devices that are shared with other guests. Also note that the compatibility mode for the RDM-mapped device is “Physical”. This makes sure that the guest OS has direct and uninhibited access to the device.


Figure 2. Storage Configuration for a Solaris Guest OS

For sharing devices (mapped through RDM) between guests on the same physical host, one should enable “SCSI Bus Sharing” in the “Virtual Machine Properties” for the SCSI controller that caters to the shared devices, and set it to “Virtual”. (In Figure 2, SCSI Controller 1 in this setup is for sharing disks across physical hosts.) Then choose “Use an existing Virtual Disk” while adding a hard disk, and select the “.vmdk” file for the device that is intended to be shared. For example, Figure 2 shows the location of the .vmdk file for “Hard Disk 2”.

Sharing RDM-mapped devices between guest OSes across physical hosts involves setting the SCSI Bus Sharing to "Physical", as shown in Figure 2, and mapping the same LUN from the SAN storage to the physical hosts running VMware ESX. Using RDM, one would then map the same LUN as a device on all guest OSes that need to share it. For example, Node 1 in this setup has LUN ID 28 mapped as "Hard Disk 2". The same LUN should be mapped as a hard disk in all other guest OSes that intend to have LUN ID 28 as a shared device with Node 1.

Figure 3 (click to enlarge) shows the guest Solaris OS with the devices added to it. Controller “c1” has the 2 local disks shown in Figure 2, and controller “c2” has the shared disks.


Figure 3. Guest Solaris OS Showing Devices Presented To It From VMware ESX

In addition to SCSI devices, the guests could also use iSCSI or NAS devices. The functioning and setup for them would be similar to that on a normal Solaris machine.
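
As a quick sanity check after the RDM mapping, it helps to confirm from a clustered guest that each shared LUN shows up as a single DID device with a path on every node that is meant to share it. The following is a minimal sketch, not part of the original setup, written in Python; it assumes a Sun Cluster 3.2 installation with the cldevice command under /usr/cluster/bin and the usual "dN  node:/dev/rdsk/..." layout of "cldevice list -v" output, so treat the parsing as illustrative.

    #!/usr/bin/env python
    # Hedged sketch: classify DID devices as shared or local by counting how
    # many nodes report a path to them. Adjust CLDEVICE and the parsing to
    # match your installation; the output format can differ between releases.
    import collections
    import subprocess

    CLDEVICE = "/usr/cluster/bin/cldevice"

    def nodes_per_did():
        out = subprocess.check_output([CLDEVICE, "list", "-v"]).decode("utf-8", "replace")
        nodes = collections.defaultdict(set)
        for line in out.splitlines():
            fields = line.split()
            if len(fields) == 2 and fields[0].startswith("d") and ":" in fields[1]:
                did, path = fields
                nodes[did].add(path.split(":", 1)[0])   # node name before the colon
        return nodes

    if __name__ == "__main__":
        for did, owners in sorted(nodes_per_did().items()):
            kind = "shared" if len(owners) > 1 else "local"
            print("%-6s %-7s %s" % (did, kind, ",".join(sorted(owners))))

A LUN mapped through RDM to several guests should appear here as one DID instance reported by each of those guests; a LUN that appears on only one node is effectively a local device.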

III) Quorum
Both SCSI and Quorum Server type quorum devices were tried out without problems. Do keep in mind that a SCSI quorum device should be added to the guest via RDM; the guest OS must have direct access to the device. A NAS quorum device is expected to work as is.
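
For completeness, here is a hedged sketch, again not from the original experiment, of how such a shared-disk quorum device would typically be registered from one of the guests. It assumes the standard Sun Cluster 3.2 clquorum command under /usr/cluster/bin, and the DID name d4 is purely a made-up example; substitute a DID instance that is backed by an RDM-mapped LUN in your own configuration.

    #!/usr/bin/env python
    # Hedged sketch: register an RDM-backed shared disk as a SCSI quorum
    # device and print the resulting quorum status. "d4" is hypothetical.
    import subprocess

    CLQUORUM = "/usr/cluster/bin/clquorum"
    QUORUM_DISK = "d4"   # hypothetical DID instance backed by an RDM-mapped LUN

    def add_disk_quorum(did):
        # "clquorum add <did>" registers a shared disk as a quorum device.
        subprocess.check_call([CLQUORUM, "add", did])

    def show_quorum_status():
        subprocess.check_call([CLQUORUM, "status"])

    if __name__ == "__main__":
        add_disk_quorum(QUORUM_DISK)
        show_quorum_status()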

IV) Networking
VMware ESX Server's networking features are indeed rich. With virtual switches, VLAN support on virtual switches, NIC teaming, and so on, networking worked really well. In fact, for the setups here, a single NIC on the physical host was used to route traffic for the guest OSes. This included both public and private interconnect traffic in the clustered scenario. However, segregation at the virtual switch level or the physical NIC level can easily be achieved, and would be ideal in a production environment. One could either have separate VLANs (on the same virtual switch) for the public and private traffic, or have dedicated physical NICs mapped onto different virtual switches for each type of traffic.

Do note that the "pcn" driver, which is loaded by default in Solaris running on VMware, can be a little unstable. It is therefore advisable to install the VMware Tools on all the Solaris guest OSes involved and switch to the "vmxnet" driver, which is quite stable.

Figure 4 (click to enlarge) is a screenshot of the network configuration on a physical host. It shows the different virtual switches and the associated physical NICs, which cater to the traffic from the virtual machines on a physical host. Each virtual switch has a “Virtual Machine Port Group”, which is the interface to the external world for the guest OSes. In a typical production setup for Sun Cluster software, one could have a Port Group (say “VM Network”) for all public network traffic, and another dedicated Port Group (say “Interconnect Network 1”) for the private interconnect traffic between the cluster members.


Figure 4. Networking Setup on a Physical Host Running VMware ESX Server.

In this setup, because all traffic (public and private) from the clustered Solaris guests goes through a single Port Group, both the public and the private interconnect adapters for the guest show the same “VM Network” in Figure 2 shown earlier. We have leveraged single-NIC support for the private interconnects here. This can save a PCI slot on a VMware guest, which the user may want to use for adding more devices to the guest OS. Single-NIC support will be available to customers soon in a Sun Cluster 3.2 patch.

Note that the maximum number of PCI slots available to each guest OS in VMware ESX Server 3.0.1 is 5. This means that the total number of NICs + SCSI controllers must be <= 5. For more information, refer to the VMware ESX documentation.
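
As a closing check on the networking side, one can verify from a clustered guest that node membership and the private interconnect paths riding on the port groups described above are online. The sketch below is illustrative only; it assumes the Sun Cluster 3.2 clnode and clinterconnect commands under /usr/cluster/bin and simply prints their output rather than parsing it.

    #!/usr/bin/env python
    # Hedged sketch: print cluster membership and transport path status on a
    # clustered Solaris guest. Command locations are assumptions; adjust to
    # match your installation.
    import subprocess

    CLUSTER_BIN = "/usr/cluster/bin"

    def show(command, *args):
        print("### %s %s" % (command, " ".join(args)))
        subprocess.check_call(["%s/%s" % (CLUSTER_BIN, command)] + list(args))

    if __name__ == "__main__":
        show("clnode", "status")            # membership of the clustered guests
        show("clinterconnect", "status")    # state of the private interconnect paths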

V) Closing comments
The hardware setup used for this experiment:

  • 3 dual-CPU (2 x 1.792 GHz) Sun Fire V20z servers with 4 GB of RAM, 1 local disk, and a QLA 2300 FC-AL adapter for the SAN.

  • Sun StorEdge 3510 for the SAN, connected to the physical hosts through Fibre Channel.

The cluster functions just as a normal cluster would. We created Solaris Volume Manager metasets and VxVM disk groups and configured Sun Cluster agents to make applications highly available. All in all, Sun Cluster 3.2 software runs in a virtualized VMware setup as expected, clustering Solaris guests and adding yet another dimension to usability and availability. An overview of the configured cluster can be seen here.
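
To give a flavor of that last step, the sketch below outlines how a failover file system could be placed under cluster control with HAStoragePlus on this kind of setup. It is a hedged example rather than the exact commands we ran: the resource group name, resource name, and mount point are hypothetical, and the commands assume a standard Sun Cluster 3.2 installation with its binaries under /usr/cluster/bin and the mount point already defined in /etc/vfstab on every node.

    #!/usr/bin/env python
    # Hedged sketch: make a file system highly available with HAStoragePlus.
    # All names below are hypothetical placeholders for this illustration.
    import subprocess

    BIN = "/usr/cluster/bin"
    RG = "demo-rg"                  # hypothetical resource group name
    RS = "demo-hasp-rs"             # hypothetical HAStoragePlus resource name
    MOUNT_POINT = "/global/demo"    # hypothetical highly available mount point

    def run(*argv):
        subprocess.check_call(list(argv))

    if __name__ == "__main__":
        run("%s/clresourcetype" % BIN, "register", "SUNW.HAStoragePlus")
        run("%s/clresourcegroup" % BIN, "create", RG)
        run("%s/clresource" % BIN, "create", "-g", RG,
            "-t", "SUNW.HAStoragePlus",
            "-p", "FilesystemMountPoints=%s" % MOUNT_POINT, RS)
        run("%s/clresourcegroup" % BIN, "online", "-M", RG)   # bring the group online, managed

Once the resource group is online, the file system (and any application resources added to the same group) fails over between the clustered VMware guests just as it would between physical nodes.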

Sorry for the long post! But there was a handful of things to mention! Feedback and comments are welcome, as always.

Subhadeep Sinha.
Sun Cluster Engineering.

Friday Apr 27, 2007

Tips and Tools to Search in Sun Cluster 3.2 Documentation

As a writer for Sun Cluster software, I am constantly looking up information in various manuals and man pages on docs.sun.com to check facts and provide references. I've learned a few ways to help speed up my information search.

Use Browse Product Documentation to search a whole documentation set

The default docs.sun.com search choices are to search all titles in docs.sun.com or to search within a book or collection that is already displayed. But there is also a way to search within a specific documentation set or product line.

Say you are looking for information about HAStoragePlus. You found it once before in a Sun Cluster 3.2 document, but you can't remember which document that was. Was it in a data-service book, a man page, or the system administration guide? Since you know the product and release you want to search in, go to the Browse Product Documentation tab of docs.sun.com. From here you can search all collections that belong to the Sun Cluster 3.2 release, but ignore everything else.

To display the entire Sun Cluster 3.2 documentation set, drill down to Software > Enterprise Computing > High Availability/Clustering > Solaris Cluster > Solaris Cluster 3.2. You'll see "Solaris Cluster 3.2" in the Within search field. From here, in a single search you can look for "HAStoragePlus" throughout the Sun Cluster 3.2 hardware, software, data service, man page, release notes, and Geographic Edition collections.

Note: Don't get confused when you see "Solaris Cluster" instead of "Sun Cluster". The product software uses the name Sun Cluster 3.2, as does the documentation. But the suite of Sun Cluster 3.2 products (framework, data services, and Geographic Edition) is being marketed under the name Solaris Cluster, similar to its being marketed under the name Sun Java Availability Suite. It's all the same thing.

Use the Index when a word you search for isn't found

Although "One Term, One Meaning" is an ideal that technical writers aspire to, features can end up being called different names by different groups when a product is finally released. So the feature name you are used to using during development or Beta might not be the formal name used in the final product documentation. A look in the index of a book that is likely to document the feature could help you find the formal name of the feature you want to read about.

For example, a file system that is configured with HAStoragePlus is informally referred to as a "failover file system". A search for that term in the Sun Cluster 3.2 doc set finds hits in two books. Follow the hit in the Software Installation Guide to the index, and there you will see "failover file system, See highly available local file system". Run your search again on "highly available local file system", and you will get several hits instead of two.

Looking up a familiar term can also be helpful by showing you other possible index terms to look under. For example, the index listing for "installing" might include "See also adding" and "See also configuring". You can look up these additional terms if you don't quite find what you are looking for under "installing".

Use the Documentation Center for quick links to frequent topics

The Documentation Center is a new reference tool in Sun Cluster 3.2. This web page provides links to information at different levels of the documentation, grouped by topic or purpose. If there is an important feature you want to look up, you can probably find a link to it in the Sun Cluster 3.2 Documentation Center.

Use an external search engine

If you prefer, you can use an external search engine to locate information on docs.sun.com. Include "site:docs.sun.com" in your search criteria, and the search will be conducted only on docs.sun.com documents. Also include "Sun Cluster 3.2" and the search will hit mostly or only documents in the Sun Cluster 3.2 doc set.

Restore missing bullets

This is a readability tip rather than a search tip. A lot of Sun Cluster documentation uses bulleted lists to organize information and make it easier to read or scan. But sometimes on my browser the bullets don't display, leaving me with indented paragraphs that are hard to read. If that happens to you, just press Shift-Reload and the bullets should show up.

Learn how the docs.sun.com search works

The search syntax that docs.sun.com uses is very different from most search engines, especially how it interprets special characters. To avoid frustration, click the Search tips link, just under the search field on any docs.sun.com page, and read about how docs.sun.com works.

Tell us what you want

There is a Send comments link on the same line and to the right of the Search tips link, on any docs.sun.com page. This takes you to a feedback site for both documentation content and website functionality. If you are reporting an error in a book, or like something and want to see more of it, include the URL or the book and section title that you are talking about. Unfortunately, the Send comments site is not aware of what web page you are in when you click the link, so you have to provide that information in the Comments field.

I hope these tips help you find the Sun Cluster 3.2 information you need more quickly and easily.

Lisa Shepherd
Sun Cluster Technical Publications
"We're the M in RTFM"
