Monday Jul 14, 2008

LDoms guest domains supported as Solaris Cluster nodes

Folks, when we announced support for Solaris Cluster in LDoms I/O domains late last year on this blog entry, we also hinted at support for LDoms guest domains. It has taken a bit longer than we envisaged, but I am pleased to report that SC Marketing has just announced support for LDoms guest domains with Solaris Cluster!

So, what exactly does "support" mean here? It means that you can create an LDoms guest domain running Solaris, and then treat that guest domain as a cluster node by installing SC software (specific version and patch information is noted later in this blog) inside the guest domain, with the SC software working with the virtual devices in the guest domain. The technically inclined reader will, at this point, have several questions: How exactly does SC work with virtual devices? What do I have to do to make SC recognize these devices? Are there any differences between how SC is configured in LDoms guest domains vs. non-virtualized environments? Read on below for a high-level summary of the specifics:

  • For shared storage devices (i.e., those accessible from multiple cluster nodes), the virtual device must be backed by a full SCSI LUN. That means no file-backed virtual devices, no slices, and no volumes. This limitation exists because SC needs advanced features in the storage devices to guarantee data integrity, and those features are available only for virtual storage devices backed by full SCSI LUNs.

  • One may need to use storage which is unshared (i.e., accessed from only one cluster node), for things such as OS image installation for the guest domain. For such usage, any type of virtual device can be used, including those backed by files in the I/O domain. However, for such virtual devices, make sure to configure them to be synchronous; check the LDoms documentation and release notes on how to do that. Currently (as of July 2008) one needs to add "set vds:vd_file_write_flags = 0" to the /etc/system file in the I/O domain exporting the file. This is required because the Cluster stores some key configuration information on the root filesystem (in /etc/cluster) and expects that information to be written synchronously to the disks. If the root filesystem of the guest domain is on a file in the I/O domain, it needs this setting to be synchronous.
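Concretely, that tuning is a one-line addition to /etc/system in the I/O domain, for example:

```shell
# In the I/O domain that exports the file-backed virtual device:
echo 'set vds:vd_file_write_flags = 0' >> /etc/system
# /etc/system is read at boot, so reboot the I/O domain for the
# setting to take effect.
```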

  • Network-based storage (NAS etc.) is fine when used from within the guest domain. Check the cluster support matrix for specifics; LDoms guest domains don't change this support.

  • For the cluster private interconnect, the LDoms virtual device "vnet" can be used just fine; however, the virtual switch to which it maps must have the option "mode=sc" specified. So essentially, for the ldm add-vsw subcommand, you would add the argument "mode=sc" on the command line when creating the virtual switch to be used for the cluster private interconnect inside the guest domains. This option enables a fastpath in the I/O domain for the Cluster heartbeat packets so that those packets do not compete with application network packets in the I/O domain for resources. This greatly improves the reliability of the Cluster heartbeats, even under heavy load, leading to a very stable cluster membership for applications to work with. Note, however, that good engineering practices should still be followed when sizing your server resources (in the I/O domain as well as in the guest domains) for the application load expected on the system.
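For example, a private-interconnect virtual switch and a guest vnet on it might be created like this (the switch, device, and domain names are illustrative):

```shell
# Create a virtual switch for the cluster private interconnect,
# with the cluster heartbeat fastpath enabled via mode=sc.
# "priv-vsw0", "e1000g1", and "primary" are example names.
ldm add-vsw mode=sc net-dev=e1000g1 priv-vsw0 primary

# Give a guest domain a vnet on that switch (example names):
ldm add-vnet priv-net0 priv-vsw0 myguest
```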

  • With this announcement, all features of Solaris Cluster supported in non-virtualized environments are supported in LDoms guest domains, unless explicitly noted in the SC release notes. Some limitations come from LDoms itself, such as the lack of jumbo frame support over virtual networks or the lack of link-based failure detection with IPMP in guest domains. Check the LDoms documentation and release notes for such limitations, as support for these missing features is improving all the time.

  • For support of specific applications with LDoms guest domains and SC, check with your ISV. Support for applications in LDoms guest domains is improving all the time, so check often.

  • Software version requirements: LDoms 1.0.3 or higher, and S10U5 with patches 137111-01, 137042-01, 138042-02, and 138056-01 or higher, are required in BOTH the LDoms guest domains and the I/O domains exporting virtual devices to the guest domains. Solaris Cluster 3.2 2/08 (SC32U1) with patch 126106-15 or higher is required in the LDoms guest domains.
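One quick way to check whether the required patches are present on a given domain is to grep the patch list that `showrev -p` prints on Solaris 10 (a rough sketch; exact revisions still need to be compared against the list above):

```shell
# Run in each guest and I/O domain; patch IDs from the list above.
for p in 137111 137042 138042 138056 126106; do
  showrev -p | grep "Patch: $p" || echo "patch $p not found"
done
```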

  • Licensing for SC in LDoms guest domains follows the same model as for I/O domains: you pay for the physical server, irrespective of how many guest domains and I/O domains are deployed on it.
This covers the high-level overview of how SC is to be deployed inside LDoms guest domains. Check out the SC release notes for additional details and some sample configurations. The whole virtualization space is evolving very rapidly, so keep this blog page bookmarked and visit it frequently to find out how Solaris Cluster is evolving along with it.

    Cheers!

    Ashutosh Tripathi
    Solaris Cluster Engineering

    Tuesday Nov 21, 2006

    Sun Cluster HA Sun Java System Application Server - Configuration made easy

    The Sun Cluster 3.2 agent for the Sun Java System Application Server (version 8.2) enables two of its important components to be made highly available. The application server's configuration files have a number of inter-dependent settings that require careful editing when changed by hand to work in a Sun Cluster environment. This blog begins by outlining the key components of the application server and then provides the source for a tool designed to simplify its configuration.

    The two important components of the Sun Java System Application Server which are made HA are:

    1. Domain Administration Server (DAS)
    The domain administration server (DAS) is the single process that manages the application server entities like node agents, standalone instances, application clusters, and their configurations.

    2. Node agents (NA)
    This is the component that makes it possible to span a domain across machines. It is a standalone process, started with or without manual intervention, that controls the life cycle of the server instances it is responsible for. The server instances host the applications.

    In a typical Sun Cluster HA for Sun Java System Application Server configuration, the DAS and each node agent are placed under cluster control, each listening on its own failover IP address. To configure such a setup, you would need to modify:

    1. domain.xml
    In this file, you need to modify the http-listeners, the IIOP-listeners, and the client-hostname under the server-config tag. This makes the DAS listen on the failover IP. In addition, you need to modify the client-hostname of the respective node-agents so that each node agent listens on its respective failover IP; this makes the DAS aware of the IPs to which the NAs are bound.
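As a rough illustration, the values being changed look something like this (a hypothetical, heavily simplified fragment; the real domain.xml nests these elements differently, so consult the HA configuration guide for the exact locations):

```xml
<!-- Hypothetical, simplified fragment: it shows only which values
     change, not the real domain.xml structure. -->
<server-config>
  <!-- DAS listens on its failover IP -->
  <http-listener id="admin-listener" address="das-failover.example.com"/>
</server-config>
<node-agent name="na1">
  <!-- each node agent is bound to its own failover IP -->
  <jmx-connector client-hostname="na1-failover.example.com"/>
</node-agent>
```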

    2. nodeagent.properties
    In this file, you need to modify the agent.client.host entry so that the node agent listens on its respective failover IP. Make sure it is the same value as specified in domain.xml for that node agent.

    3. das.properties (optional)
    This file needs to be changed only if the value of the attribute "agent.das.host" is not the same as the value specified against the client-hostname tag of the DAS in domain.xml.
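The corresponding property-file entries might look like this (hostnames are hypothetical; the agent.client.host value must match the client-hostname given for that node agent in domain.xml):

```properties
# nodeagent.properties (one file per node agent)
agent.client.host=na1-failover.example.com

# das.properties (change only if agent.das.host does not already
# match the DAS client-hostname in domain.xml)
agent.das.host=das-failover.example.com
```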

    The Sun Cluster Data Service configuration guide for Sun Java System Application Server provides manual instructions for running the Application Server in the Sun Cluster environment. The same configuration can be done using the script below. The script requires Perl 5.8.2 or later and the XML::XPath module, which can be downloaded from http://search.cpan.org/CPAN/authors/id/M/MS/MSERGEANT/XML-XPath-1.13.tar.gz.

    Click this link to view and save the Perl script.

    This script modifies the above configuration files and creates a backup of the originals. If you run the script more than once, the backed-up files are overwritten, so it is suggested that you keep a separate copy of the original files.
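For readers curious about the mechanics, the core of what such a script does can be sketched in a few lines of Python (a hypothetical illustration of the approach; the actual tool is the Perl script above and uses XML::XPath):

```python
# Hypothetical sketch of the script's core idea: back up a config
# file, then rewrite every client-hostname attribute to point at
# the failover IP.  The element/attribute layout is illustrative,
# not the exact Application Server schema.
import shutil
import xml.etree.ElementTree as ET

def set_client_hostname(xml_path, failover_ip, backup_suffix=".bak"):
    # Keep a backup of the original file; as with the Perl script,
    # re-running overwrites the previous backup.
    shutil.copy(xml_path, xml_path + backup_suffix)
    tree = ET.parse(xml_path)
    for elem in tree.getroot().iter():
        if "client-hostname" in elem.attrib:
            elem.set("client-hostname", failover_ip)
    tree.write(xml_path)
```

The real script does more (it also handles nodeagent.properties and das.properties), but the back-up-then-rewrite pattern is the same.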

    Swathi,
    - Sun Cluster Engineering
