Thursday Dec 03, 2009

Solaris Cluster 3.2 11/09 is now available

Solaris Cluster is a multi-system, multi-site high availability and disaster recovery solution that manages the availability of application services and data across local, regional, and geographically dispersed data centers. The Solaris Cluster environment extends the Solaris Operating System into a cluster operating system.

If you have been following the Solaris Cluster product and its features, you will have noticed sustained innovation by Sun in the high availability and disaster recovery space since we released our first HA product many years ago. For well over a decade, Solaris Cluster has been a market leader in providing business continuity and disaster recovery solutions for mission-critical business applications across all the major industry segments.

Continuing with our tradition of innovation, we are pleased to announce another release - "Solaris Cluster 3.2 11/09" - an update to the Solaris Cluster 3.2 product.

This new release brings more features for high availability, virtualization, disaster recovery, flexibility, diagnosability, and ease of use. We are extending support for virtualization with more options for Solaris Containers clusters (zone clusters), failover Solaris Containers, and LDoms. This release provides more file system and volume management choices. We have added new replication solutions for disaster recovery deployments, in addition to improving scalable services support. This release also brings support for the latest versions of many third-party software applications.

The following is a list of some of the new features in this update release:

- New deployment options for Oracle DB, RAC, and applications

  • Solaris Containers cluster support for more Oracle solutions:
    • Oracle E-Business Suite
    • Siebel CRM 8
    • Single-instance Oracle database
  • Support for Oracle Automatic Storage Management (ASM) with single-instance Oracle database
  • Support for Reliable Datagram Sockets (RDS) over InfiniBand for RAC
- Infrastructure features
  • Standalone QFS 4.6 & later in failover Solaris Containers
  • Upgrade on attach for failover Solaris Containers
  • Solaris Volume Manager three-mediator support in Campus Cluster deployments
  • Hitachi Universal Replicator support in Campus Cluster deployments
  • Support for 1TB disks as quorum devices
- Scalable services features
  • Outgoing connection support for scalable services
  • IPsec support for scalable services traffic
  • SCTP support for scalable services
  • Managed failover of IPsec session & key information
  • Round-robin load balancing
- Geographic Edition
  • Hitachi Universal Replicator support
  • Script-based plugin replication module
  • MySQL replication
- Supported applications and agent features
  • New agent: HA Agent for LDoms guest domains
  • New supported application version: SWIFTAlliance Access & Gateway 6.3
Check here for additional details about the features listed above. Solaris Cluster 3.2 11/09 is supported on Solaris 10 5/09 and Solaris 10 10/09. Refer to the release notes for the list of minimum patches required to run this update. The release notes also have links to the product documentation for all the features listed above.

This feature-rich, high-quality product can be downloaded from here. Download and try the latest Solaris Cluster release. We look forward to your comments.

Thanks,

Prasad Dharmavaram
Roma Baron
Solaris Cluster Engineering

Monday Jul 14, 2008

LDoms guest domains supported as Solaris Cluster nodes

Folks, when we announced support for Solaris Cluster in LDoms I/O domains late last year in this blog entry, we also hinted at support for LDoms guest domains. It has taken a bit longer than we envisaged, but I am pleased to report that SC Marketing has just announced support for LDoms guest domains with Solaris Cluster!!

So, what exactly does "support" mean here? It means that you can create an LDoms guest domain running Solaris, and then treat that guest domain as a cluster node by installing SC software (specific version and patch information is noted later in the blog) inside the guest domain and having the SC software work with the virtual devices in the guest domain. The technically inclined reader would, at this point, have several questions pop into their head... How exactly does SC work with virtual devices? What do I have to do to make SC recognize these devices? Are there any differences between how SC is configured in LDoms guest domains and in non-virtualized environments? Read on below for a high-level summary of the specifics:

  • For shared storage devices (i.e., those accessible from multiple cluster nodes), the virtual device must be backed by a full SCSI LUN. That means no file-backed virtual devices, no slices, and no volumes. This limitation is required because SC needs advanced features in the storage devices to guarantee data integrity, and those features are available only for virtual storage devices backed by full SCSI LUNs. (See the example following this list.)

  • One may need to use storage which is unshared (i.e., accessed from only one cluster node), for things such as OS image installation for the guest domain. For such usage, any type of virtual device can be used, including those backed by files in the I/O domain. However, for such virtual devices, make sure to configure them to be synchronous. Check the LDoms documentation and release notes on how to do that. Currently (as of July 2008) one needs to add "set vds:vd_file_write_flags = 0" to the /etc/system file in the I/O domain exporting the file. This is required because the Cluster stores some key configuration information on the root filesystem (in /etc/cluster) and it expects that the information written to this location is written synchronously to the disks. If the root filesystem of the guest domain is on a file in the I/O domain, it needs this setting to be synchronous. (The example following this list shows this setting in context.)

  • Network-based storage (NAS, etc.) is fine when used from within the guest domain. Check the cluster support matrix for specifics; LDoms guest domains don't change this support.

  • For the cluster private interconnect, the LDoms virtual device "vnet" can be used just fine; however, the virtual switch to which it maps must have the option "mode=sc" specified for it. So essentially, for the ldm add-vsw subcommand, you would add another argument "mode=sc" on the command line while creating the virtual switch which will be used for the cluster private interconnect inside the guest domains (see the example following this list). This option enables a fastpath in the I/O domain for the Cluster heartbeat packets so that those packets do not compete with application network packets in the I/O domain for resources. This greatly improves the reliability of the Cluster heartbeats, even under heavy load, leading to a very stable cluster membership for applications to work with. Note, however, that good engineering practices should still be followed while sizing your server resources (both in the I/O domain as well as in the guest domains) for the application load expected on the system.

  • With this announcement, all features of Solaris Cluster supported in non-virtualized environments are supported in LDoms guest domains, unless explicitly noted in the SC release notes. Some limitations come from LDoms themselves, such as the lack of jumbo frame support over virtual networks or the lack of link-based failure detection with IPMP in guest domains. Check the LDoms documentation and release notes for such limitations, as support for these missing features is improving all the time.

  • For support of specific applications with LDoms guest domains and SC, check with your ISV. Support for applications in LDoms guest domains is improving all the time, so check often.

  • Software version requirements: LDoms 1.0.3 or higher, S10U5, and patches 137111-01, 137042-01, 138042-02, and 138056-01 or higher are required in BOTH the LDoms guest domains and the I/O domains exporting virtual devices to the guest domains. Solaris Cluster SC32U1 (3.2 2/08) with patch 126106-15 or higher is required in the LDoms guest domains.

  • Licensing for SC in LDoms guest domains follows the same model as for the I/O domains. You basically pay for the physical server, irrespective of how many guest domains and I/O domains are deployed on that physical server.
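
To make the device-related items above concrete, here is a minimal sketch of how the virtual devices for one guest-domain cluster node might be set up from the I/O domain. The device paths, file names, and domain/service names (guest1, primary-vds0, e1000g2, and so on) are illustrative assumptions only; check the ldm(1M) man page, the LDoms Admin Guide, and the SC release notes for the authoritative syntax and procedures.

    # Verify that the required patches (listed above) are installed in both
    # the I/O domain and the guest domain:
    showrev -p | egrep "137111|137042|138042|138056"

    # Shared storage: export a full SCSI LUN (no files, slices, or volumes).
    ldm add-vdsdev /dev/dsk/c3t5d0s2 sharedlun1@primary-vds0
    ldm add-vdisk shareddisk1 sharedlun1@primary-vds0 guest1

    # Unshared storage (e.g. the guest OS image) may be file-backed, but the
    # backing file must then be written synchronously; add the line below to
    # /etc/system in the exporting I/O domain (and reboot it) as noted above:
    #   set vds:vd_file_write_flags = 0
    ldm add-vdsdev /ldoms/guest1/bootdisk.img bootvol1@primary-vds0
    ldm add-vdisk bootdisk1 bootvol1@primary-vds0 guest1

    # Private interconnect: create the virtual switch with mode=sc so that
    # cluster heartbeat packets take the fastpath in the I/O domain.
    ldm add-vsw mode=sc net-dev=e1000g2 private-vsw0 primary
    ldm add-vnet private-net0 private-vsw0 guest1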

That covers the high-level overview of how SC is to be deployed inside LDoms guest domains. Check out the SC Release notes for additional details and some sample configurations. The whole virtualization space is evolving very rapidly and new developments are happening ever so quickly. Keep this blog page bookmarked and visit it frequently to find out how Solaris Cluster is evolving along with this space.

Cheers!

Ashutosh Tripathi
Solaris Cluster Engineering

Tuesday Oct 09, 2007

Announcing Solaris Cluster support in LDoms I/O domains

If you keep track of Solaris Cluster developments, you have probably already seen the Marketing announcement which went out recently. In this blog entry we will talk about this new support from a more technical point of view.

First, just to make sure we are on the same page about what exactly we are talking about: this is about supporting Solaris Cluster in LDoms I/O domains. For details on what an LDoms I/O domain is, please see the LDoms Admin Guide. Informally, an LDoms I/O domain "owns" at least one PCI bus on the system and thus has direct physical access to the devices on that bus. The I/O domain can then export services to other guest domains on the system. These "services" are in the form of virtual devices which are made available to other domains.

So, where does that leave support for Solaris Cluster and LDoms guest domains? You can create guest domains on the same system where SC is running in the I/O domain and deploy non-HA applications into those guest domains. This allows one to achieve better utilization of hardware and flexibility with respect to application deployments. The ability to manage HA applications inside LDoms guest domains is something we are currently working on; stay tuned.
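
As a rough illustration, a non-HA guest domain can be created from the clustered control domain along the following lines. The domain name, resource sizes, backing file, and service names (ldg1, primary-vsw0, primary-vds0, and so on) are hypothetical; see the LDoms Admin Guide for the exact syntax and options.

    ldm add-domain ldg1                          # create the guest domain
    ldm add-vcpu 8 ldg1                          # assign virtual CPUs
    ldm add-memory 4G ldg1                       # assign memory
    ldm add-vnet vnet0 primary-vsw0 ldg1         # virtual NIC on the public vsw
    ldm add-vdsdev /ldoms/ldg1/disk0.img vol1@primary-vds0
    ldm add-vdisk vdisk0 vol1@primary-vds0 ldg1  # boot disk for the guest
    ldm bind-domain ldg1
    ldm start-domain ldg1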

With that bit of informal taxonomy and scope clarification out of the way, let us look at how Solaris Cluster 3.2 can be deployed in such LDoms I/O domains. The first thing to note is that on some LDoms-capable servers there is only one PCI bus available, and hence there can be only one I/O domain on such systems (which would also, by definition, be the control domain). The figure below illustrates this deployment scenario.

The thing to note in the configuration above is that the non-clustered guest domains are using the public network provided via the control domain, which is running SC. This allows for sharing of network bandwidth between non-clustered applications running inside the guest domains and the HA applications running inside the control domain. Thus, sizing decisions should keep this in mind when deciding how many guest domains to run on the system and how much network load is present on the system. Similar considerations apply to any I/O bandwidth which may be shared between guest domains and I/O domains. There is no specific restriction on the use of LDoms virtualization features inside the guest domains, such as different kinds of virtual storage devices, dynamic assignment of CPUs, etc.

For our next deployment scenario, we will pick a server platform which has more than one PCI bus. This allows us to create additional I/O domains on the system and create more interesting (and potentially more useful to customers) scenarios. We will take the Sun Fire T2000 (Ontario) as the target system, as it has two PCI buses; note that not all systems have that flexibility, since some have only one PCI bus. In this deployment scenario, we take two Ontario machines in a split bus configuration. For details on how to set up a split bus configuration, see Alex's blog entry on split bus LDoms configurations with T2000s. The resulting 4 I/O domains (2 on each Ontario system) are then configured as two different clusters. The picture below should help clarify what we are talking about.
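
For reference, the bus reassignment on each T2000 would look roughly like the following sketch; the full steps (including the delayed reconfiguration and reboot of the control domain that a PCI bus change requires) are described in Alex's entry and the LDoms Admin Guide, so treat this only as an outline.

    ldm add-domain alternate          # create the second I/O domain
    ldm remove-io pci@780 primary     # detach bus_a from the control domain
    ldm add-io pci@780 alternate      # assign bus_a to the "alternate" domain
    # ...then add CPUs and memory to "alternate", bind it, and reboot the
    # control domain for the PCI bus change to take effect.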

In the configuration above, the bus pci@7c0 (bus_b) has been assigned to the "primary" domain and the bus pci@780 (bus_a) has been assigned to a domain named "alternate". One point to note here is that on the Ontario both internal disks are actually on bus_b, so a dual-channel fiber storage HBA card has been added to the "alternate" domain on its PCI bus (bus_a, slot 0) to provide local storage for the OS image as well as access to shared storage. Note that the disk which holds the OS image needs to be fiber bootable for this to work. On the PCI bus bus_b, we also need another HBA card to provide access to the shared storage. That exhausts all available I/O slots, and we do not have a way to add additional network cards to the system. That means that on the "alternate" cluster we are left with only 2 onboard NICs (e1000g0 and e1000g1) to provide both public and private network connectivity. Having just a single NIC for the public network is no problem (IPMP and SC support that just fine), but for a single private interconnect you would have to use the custom option of the scinstall command directly to install the cluster, because the standard installation option enforces a minimum of 2 private network interfaces. For mission-critical deployments, you may want to avoid this configuration because the single interconnect link can lead to reduced availability in some scenarios.

The above configuration creates two separate 2-node clusters on two T2000 boxes. That is useful for scenarios where a mission-critical application and other HA applications need to be consolidated on the same hardware, providing cost savings. It also caters to scenarios where increased isolation (resource-wise as well as administrative) between different applications is desired, requiring two different clusters for maximum isolation.

In the next configuration we study, we take the same two Ontario machines in a split bus configuration as before. However, instead of creating two different 2-node clusters, we create a single 4-node cluster. The schematic below illustrates that configuration:

Note that the two cluster nodes which are on the "alternate" domains have only single interconnect cards. To get the cluster to install in this configuration, you may want to first install a 4-node cluster with a single interconnect, and then add the additional interconnect on the primary domains using clsetup. Alternatively, one can first create a 2-node cluster on the primary domains, which have the two interconnect cards, and then add the two alternate-domain nodes to the cluster via the clnode add command after installing the SC software. For example:

    clnode add -n node1 -c clustername -e node3:e1000g4,switch1

The above command, when executed on node3, adds it to the cluster by using node1 as the sponsor node and using a single network card, e1000g4, connected to switch1 as the private interconnect card. Also note that while a quorum device is not strictly required for a 4-node cluster, in this situation you would definitely want to configure one so that the loss of a single physical machine does not lead to the loss of the whole cluster.
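
For example, once all four nodes have joined the cluster, a shared DID device (d4 below is just a placeholder name) can be registered as a quorum device and the result verified with the standard cluster commands:

    clquorum add d4       # register a shared DID device as a quorum device
    clquorum status       # verify the quorum vote counts
    clnode status         # confirm that all four nodes are cluster members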

This 4-node configuration can be useful in scenarios where the increased isolation of 2 different clusters is not necessary for the applications being deployed and ease of administration of a single cluster is desired. Note that all the power of LDoms is available to you in these configurations (my favourite: dynamically moving CPUs from one domain to another!), making your deployments very flexible and cost effective.
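
For instance, shifting CPUs between the two domains on a box is just a pair of commands from the control domain; the counts and domain names here are, of course, only an illustration:

    ldm remove-vcpu 4 primary       # free four virtual CPUs from one domain
    ldm add-vcpu 4 alternate        # ...and assign them to the other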

Note that the configurations shown in this blog are for the first generation of Sun Fire T2000 systems. On newer systems, such as the just-released Sun SPARC Enterprise T5x20 systems, the device assignments are a bit different, but the basic considerations for deploying SC on such platforms are the same. Check out the details of the T5x20 systems here. Read more about what people are saying about CMT and UltraSPARC T2 technologies on Allan Packer's weblog.

Hope this was useful. Stay tuned for Solaris Cluster support for LDoms guest domains, which will allow you to cluster LDoms guest domains and thus bring the full power of SC application/data management to guest domains.

Ashutosh Tripathi - Solaris Cluster Engineering
Alexandre Chartre - LDoms Engineering
