Thursday Sep 24, 2015

Best Practices for EC Backups

Ops Center has a backup and recovery feature for the Enterprise Controller - you can save the current EC state as a backup file, and restore the EC to that state using the file. It's an important feature, but I've seen a few folks asking for guidelines about how to use it. Every site is different, but here are some broad guidelines that we recommend:

  • Perform a backup at least once a week, and keep at least two backup files.
  • Once you've made a backup file, store it offsite or on a NAS share - don't keep it locally on the EC.
  • You can use a cron job to automate regular backups. Here's a sample cron job that runs a backup at midnight every Sunday (a fuller wrapper-script sketch follows this list):
    0 0 * * 0 /opt/SUNWxvmoc/bin/ecadm backup -o /bigdisk/oc_backups -l /bigdisk/oc_backups
  • Remember that some files and directories are not part of the EC backup for size reasons: ISO images, FLAR archives, firmware images, and Solaris 8-10 and Linux patches.
    Firmware images are automatically re-downloaded in Connected Mode, and ISO images and FLAR archives can be re-imported. You can also back up your Ops Center libraries separately with NetBackup or a similar tool.
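
If you want the offsite copy and pruning handled automatically too, here's a minimal sketch of a wrapper you could call from cron instead of running ecadm directly. It reuses the same ecadm options as the cron sample above; the NAS mount point (/net/nas/oc_backups) and the backup file-name prefix (sat-backup-*) are assumptions, so check what ecadm backup actually produces on your EC before trusting the prune step:

  #!/bin/sh
  # weekly EC backup wrapper - a sketch only; adjust the paths for your site
  BACKUP_DIR=/net/nas/oc_backups
  /opt/SUNWxvmoc/bin/ecadm backup -o "$BACKUP_DIR" -l "$BACKUP_DIR" || exit 1
  # keep only the two newest backup files on the share (prefix is an assumption)
  cd "$BACKUP_DIR" || exit 1
  for old in $(ls -t sat-backup-* 2>/dev/null | tail -n +3); do
      rm -f "$old"
  done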

Some folks have also asked if there's a good way to test the backup and recovery procedure, to make sure it's working. Well, there's really only one way to do it - do an EC backup, and also back up or clone the file systems. Then, uninstall and reinstall the EC, restore from the backup, and make sure that everything looks right.
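
On the command line, that test looks roughly like this - a sketch only, reusing the backup options from the cron sample above and assuming the restore subcommand takes the backup file with -i (check the ecadm help on your version); the uninstall and reinstall steps depend on your OS, so they're only noted as comments:

  # 1. Take a backup and note the file it produces
  /opt/SUNWxvmoc/bin/ecadm backup -o /bigdisk/oc_backups -l /bigdisk/oc_backups
  # 2. Back up or clone the EC file systems with your usual tools (zfs snapshot, etc.)
  # 3. Uninstall and reinstall the EC, following the install guide for your OS
  # 4. Restore from the backup file, then verify assets, jobs, and libraries in the UI
  /opt/SUNWxvmoc/bin/ecadm restore -i /bigdisk/oc_backups/<backup-file>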

Take a look at the Backup and Recovery chapter for more information about how to perform a backup.

Thursday Sep 17, 2015

Blank Credentials and Monitoring Delays

I saw a couple of unrelated but short questions this week, so I thought I'd answer them both.

"I tried to edit the credentials used by the Enterprise Controller to access My Oracle Support, but when I open the credentials window, the password field was blank, even though there should be an existing password. What's going on?"

So, naturally, once you've entered a password, we don't want to send that password back to the UI, because it'd be a security risk. In 12.3, though, the placeholder asterisks that would normally indicate an existing password aren't showing up. So, your credentials are still there, and they won't be changed unless you specifically enter new credentials and save them.

"I was trying to make sure that file system monitoring was working correctly on a managed system. I made a file to push utilization up past the 90% threshold, which should've generated an incident. However, the incident didn't show up for almost an hour. Why is there a delay?"

You can edit the Alert Monitoring Rule Parameters in Ops Center. However, the thresholds that you set have to be maintained for a certain amount of time before an alert is generated. For a file system utilization alert, the default value for this delay is 45 minutes. You can edit the monitoring rule to change this delay if needed.
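
If you want to re-run that kind of test, something like this does it - the file system and file size here are placeholders, so pick values that actually push your file system over the threshold:

  df -h /export/test                # check current utilization of the test file system
  mkfile 8g /export/test/fillfile   # create a large file to push usage past 90%
  # ...wait out the delay (45 minutes by default), then check the Incidents tab...
  rm /export/test/fillfile          # clean up once you've seen the incident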

Thursday Sep 10, 2015

Updating the OCDoctor on a Managed System

A new feature introduced in version 4.38 of the OCDoctor script has been causing some confusion, so I thought I'd explain it a bit.

Beginning with version 4.38, when you run the OCDoctor script with the --update option on a managed system, the OCDoctor script looks for a newer version on the Enterprise Controller rather than using external download sites. In Connected Mode, the Enterprise Controller runs a recurring job to download the latest OCDoctor, which managed systems can then pull from the EC.

This makes updates more feasible if you're in a dark site, and minimizes external connections in other sites. However, if you've downloaded the OCDoctor manually on the EC, you will need to place the OCDoctor zip file in the /var/opt/sun/xvm/images/os/others/ directory on the Enterprise Controller so that managed systems can download it.
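
In practice that looks something like the following; the OCDoctor install path on the managed system and the zip file name are the usual defaults, but treat them as assumptions for your environment:

  # On the Enterprise Controller: stage the manually downloaded zip for the agents
  cp OCDoctor-latest.zip /var/opt/sun/xvm/images/os/others/

  # On each managed system: pull the newer version from the EC
  /var/opt/sun/xvm/OCDoctor/OCDoctor.sh --update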

Thursday Sep 03, 2015

Installing Ops Center in a Zone

I got a question recently about an Ops Center deployment:

"I'm looking at installing an Enterprise Controller, co-located Proxy Controller, and database inside an Oracle Solaris 11 Zone. Is this doable, and are there any special things I should do to make it work?"

You can install all of these components in an S11 zone. There are a few things that you should do beforehand:

Limit the ZFS ARC cache size in the global zone. Without a limit, the ZFS ARC can consume memory that should be released. The recommended size of the ZFS ARC cache given in the Sizing and Performance guide is equal to (Physical memory - Enterprise Controller heap size - Database memory) x 70%. For example:

  # limit ZFS memory consumption, example (tune memory to your system):
  echo "set zfs:zfs_arc_max=1024989270" >>/etc/system
  # raise the default per-process file descriptor soft limit
  echo "set rlim_fd_cur=1024" >>/etc/system
  # set Oracle DB FDs
  projmod -s -K "process.max-file-descriptor=(basic,1024,deny)" user.root
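
To show where a number like that comes from, here's the formula worked through with made-up values - say 32 GB of physical memory, a 6 GB Enterprise Controller heap, and 8 GB for the database:

  # (32 GB - 6 GB - 8 GB) x 0.70 = 12.6 GB; convert to bytes for the /etc/system line:
  echo "12.6 * 1024 * 1024 * 1024" | bc      # -> 13529146982.4, round down
  # so /etc/system would get: set zfs:zfs_arc_max=13529146982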

Make sure the global zone has enough swap space configured. The recommended swap space for an EC is twice the physical memory if the physical memory is less than 16 GB, or 16 GB otherwise. For example:

  # current size of the swap volume, e.g. "8G" or "16.5G"
  volsize=$(zfs get -H -o value volsize rpool/swap)
  # strip the trailing "G" and any fractional part to get a whole number of GB
  volsize=${volsize%G}
  volsize=${volsize%%.*}
  # grow the swap volume to 16G if it is smaller than that
  if (( $volsize < 16 )); then zfs set volsize=16G rpool/swap; \
  else echo "Swap size sufficient at: ${volsize}G"; fi
  zfs list

In the non-global zone that you're using for the install, set the ulimit:

  echo "ulimit -Sn 1024">>/etc/profile

Finally, run the OCDoctor to check the prerequisites before you install.
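
For example, from the zone where the Enterprise Controller will be installed - the unpack location is an assumption, and --ec-prereq is the prerequisite-check option I'd expect, so confirm it against the --help output of your OCDoctor copy:

  cd /var/tmp/OCDoctor          # assumed location of the unpacked OCDoctor
  ./OCDoctor.sh --ec-prereq     # check the Enterprise Controller prerequisites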

Thursday Aug 27, 2015

How Many Systems Can Ops Center Manage?

I saw a question about how many systems you can manage through Ops Center. This is an important question when you're planning a deployment, or looking at expanding an existing deployment.

In general terms, an Enterprise Controller can manage up to 3,000 assets. A Proxy Controller can manage between 350 and 450 assets, although you'll get better performance if there are fewer assets.

The Sizing and Performance guide has more detailed information about the requirements and sizing guidelines for Ops Center.

Thursday Aug 20, 2015

Editing or Disabling Analytics

There was a recent question thread about how you can tweak the OS analytics settings in Ops Center.

"Ops Center collects analytics data every 5 minutes and retains it for 5 days. Is it possible to edit these settings?"

You can edit the retention period but not the collection interval.

To edit the retention period, log into the UI. Click the Administration section, then click the Configuration tab for the EC, and select the Report Service subsystem.

The repsvc.daily-samples-retention-days property specifies the number of days to retain OS analytics data. You can edit this property, then restart the EC to make it take effect.

"Can I turn off data collection for OS analytics entirely?"

Yes, you can. Bear in mind that this requires you to edit a config file, so be very careful.

Go to the /opt/sun/n1gc/lib directory on the EC and find the XVM_SATELLITE.properties file. Edit it to uncomment this line:

#report.service.disable=true

Then, restart the Enterprise Controller.
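
If you'd rather script it, here's a minimal sketch; the backup copy is just a precaution, and the -w (wait) option on ecadm is an assumption, so check the command's help on your version:

  cd /opt/sun/n1gc/lib
  cp XVM_SATELLITE.properties XVM_SATELLITE.properties.bak     # keep a backup copy
  # uncomment the report.service.disable line
  sed 's/^#report.service.disable=true/report.service.disable=true/' \
      XVM_SATELLITE.properties.bak > XVM_SATELLITE.properties
  # restart the Enterprise Controller
  /opt/SUNWxvmoc/bin/ecadm stop -w
  /opt/SUNWxvmoc/bin/ecadm start -w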

Thursday Aug 13, 2015

Recovering After a Proxy Controller Crash

I saw a question recently about how to restore your environment if a remote Proxy Controller system fails. This is a good question, and there are a few facets to the answer, depending on your environment.

Recovery is easiest if you have recently backed up the Proxy Controller. The backup file includes asset data, so if you can restore the PC using the backup file, you should be golden.

If you don't have a backup of the Proxy Controller, it's going to take a bit more work. First, you have to migrate the dead PC's assets to another Proxy Controller. If you have automatic failover enabled, this happens automatically (hence the name); otherwise you can do it manually.

Then, you can install a new Proxy Controller (using the Linux or Solaris procedure), and migrate the assets to that PC.

Thursday Aug 06, 2015

Mixing Servers in a Server Pool

I saw a couple of questions recently about what kinds of servers you can group together in a server pool.

"Can I create a server pool with different types of servers, like T4 and T5, or T4 and M10?"

Yes, you can. As long as you're not trying to mix SPARC and x86, you can put different types of hardware in a server pool.

"Is it a good idea to make server pools like this?"

It will work, but performance won't be quite as good. To enable migration between hardware, the cpu-arch property needs to be set to generic, which means that not all of the hardware features are used. If you have the hardware to build completely homogeneous server pools, you'll get better performance.
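
For reference, here's the underlying Logical Domains property when you set it by hand - a sketch with a hypothetical guest name (ldg1); in an Ops Center server pool you'd normally let the logical domain profile set this for you:

    ldm stop-domain ldg1                    # the domain must not be running when you change cpu-arch
    ldm set-domain cpu-arch=generic ldg1    # generic allows migration across different CPU types
    ldm start-domain ldg1
    ldm list -l ldg1                        # check the domain's properties in the long listing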

The Server Pools chapter explains how to set up a server pool for whatever virtualization type you're using.

Thursday Jul 30, 2015

Recovering LDoms From a Failed Server

I saw a recent question about Logical Domain recovery: If you have a control domain installed on a server and the server goes down with a hardware fault, what options do you have for recovering the logical domains from that control domain?

The answer is that you have options depending on how your environment is configured:

  • If you have the control domain in a server pool, and you have enabled automatic recovery on the LDom, you have the option of watching as the LDom is automatically brought back up on another control domain in the server pool.
  • If the control domain is in a server pool but you didn't enable automatic recovery, you can still migrate the guests manually by deleting the control domain asset. The guests will then be put in the Shutdown Guests list for the server pool, and you can bring them up on another control domain.
  • If you want to add the failed control domain back in, then before you rediscover it and put it back in the server pool, you should log in and make sure that the guest OSes aren't still running there, to avoid split-brain issues (a quick check is sketched after this list).
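
A quick way to make that check from the recovered control domain, sketched with a hypothetical guest name (ldg1):

    ldm list                 # see whether any guests still show as active on this control domain
    ldm stop-domain ldg1     # if one is still running here, stop it...
    ldm unbind-domain ldg1   # ...and unbind it before you rediscover the control domain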

    Take a look at the Recover Logical Domains from a Failed Server how-to for more information.

    Thursday Jul 23, 2015

    Kernel Zones Support in 12.3

    One of the new features in Ops Center 12.3 is support for Oracle Solaris kernel zones. I wanted to talk a bit about this, because there are some caveats, and a new document to help you with using this type of zone.

    Kernel zones differ from other zones in that they have a separate kernel and OS from the global zone, making them more independent. In Ops Center 12.3, you can discover and manage kernel zones. However, you can't migrate them, put them in a server pool, or change their configuration through the user interface.

    We put together a how-to that explains how you can discover existing kernel zones in your environment. You can also take a look at the What's New doc for more information about what's changed in 12.3.

    Thursday Jul 16, 2015

    New Books in 12.3

    One of the changes that we've made in Ops Center 12.3 is a change to the documentation library. We've divided the old Feature Reference Guide up into several smaller books so that it's easier to use:

    • Configure Reference talks about how to get the software working - discovering assets; configuring libraries, networks, and storage; and managing jobs.
    • Operate Reference talks about incidents, reports, hardware management, and OS management, provisioning, and updating.
    • Virtualize Reference describes the use and management of Oracle Solaris Zones, Oracle VM Servers for SPARC, and server pools.
    • Oracle SuperCluster Operate Reference covers the management of Oracle SuperCluster.

    The What's New doc has more information about these new books. You can find the new books by clicking Feature Reference on the main doc site.

    Thursday Jul 09, 2015

    New Virtualization Icons

    There's a change in the UI that I wanted to talk about, since it's been confusing some people after they upgrade to version 12.3. The icons that represent the different virtualization types, such as Oracle Solaris Zones or Logical Domains, have changed. Here are the new icons:

    We made this change because there were getting to be a lot of supported virtualization types, particularly now that Kernel Zones are supported. The new icons make it easier to differentiate between different types so that you know at a glance what sort of system you're dealing with.

    The other new features in version 12.3 are discussed in the What's New document.

    Thursday Jul 02, 2015

    Upgrading to 12.3

    Now that Ops Center 12.3 is out, you might be wondering how to upgrade to it. I thought I'd walk you through the major steps involved in upgrading, and direct you to the documents that go into more detail.

    First off, to upgrade directly to 12.3, you have to be using some variant of 12.2 - 12.2.0, 12.2.1, or 12.2.2. If you're on 12.1, you have to upgrade to 12.2 first. In other words, the upgrade paths are 12.2.x -> 12.3 directly, and 12.1.x -> 12.2.x -> 12.3.

    The Upgrade guide also walks you through the other planning steps. It's a good idea to look at the release notes and the Oracle Solaris and Linux install guides before you upgrade, to make sure that you're aware of known issues and the latest system requirements.

    Once you've done your planning, you go to the Upgrade guide chapter that matches your environment - there are separate procedures for HA and non-HA environments, and separate procedures for upgrading from the command line or from the UI.

    These chapters will walk you through downloading the upgrade (you can get it through the UI, from the Oracle Technology Network, or from the Oracle Software Delivery Cloud), and applying the upgrade to the Enterprise Controller(s), Proxy Controllers, and Agents.

    Thursday Jun 25, 2015

    Ops Center 12.3 is Released

    Version 12.3 of Ops Center is now available!

    This is a major upgrade from the prior versions. In addition to bug fixes and performance enhancements, there are a number of new features:

    • Asset discovery refinements, including the ability to run discovery probes only on a specific network or from a specific Proxy Controller
    • Support for discovering and managing existing Oracle Solaris 11 Kernel Zones
    • The ability to create a custom Oracle Solaris 11 AI manifest and use it for provisioning
    • Refined search: Searching for one or more assets now displays the search results in a new tab in the Navigation pane, making navigation a bit easier
    • New and expanded books in the doc library

    Take a look at the What's New In This Release document for a more detailed breakdown of the new features, and the Upgrade guide for more information about upgrading to version 12.3.

    You can also take a look at the 12.3 documentation library here.

    Thursday Jun 18, 2015

    Using MPxIO and Veritas in Ops Center

    Hey, folks. This is a guest post by Doug Schwabauer about using MPxIO and Veritas in Ops Center. It's well worth the read.

    Enterprise Manager Ops Center has a rich feature set for Oracle VM Server for SPARC (LDOMs) technology. You can provision Control Domains, service domains, LDOM guests, control the size and properties of the guests, add networks and storage to guests, create Server Pools, etc. The foundation for many of these features is called a Storage Library. In Ops Center, a Storage Library can be Fibre Channel-based, iSCSI-based, or NAS-based.

    For FC and iSCSI-based LUNs, Ops Center only recognizes LUNs that are presented to the Solaris host via MPxIO, the built-in multipathing software for Solaris. LUNs that are presented via other multipathing software, such as Symantec DMP, EMC PowerPath, and so on, are not recognized.

    Sometimes users do not have the option to use MPxIO, or may choose not to use MPxIO for other reasons, such as use cases where SCSI-3 Persistent Reservations (SCSI-3 PR) are required.

    It is possible to have a mix of MPxIO-managed LUNs and other LUNs, and to mix and match the LDOM guests and storage libraries for different purposes.
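
    To see which LUNs are currently under MPxIO control on a given host, the stock Solaris tools are enough - for example (the device name here is one of the MPxIO-managed LUNs from the listings later in this post):

    mpathadm list lu     # logical units currently managed by MPxIO
    mpathadm show lu /dev/rdsk/c0t60080E5000183F120000107754E60374d0s2   # path details for one LUN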

    Below are two such use cases. In both, the user wanted to use Veritas Cluster Server (VCS) to cluster select applications running within LDOMs that are managed with Ops Center. However, the cluster requires I/O fencing via SCSI-3 PR. In this scenario, storage devices cannot be presented via MPxIO, since SCSI-3 PR requires direct access to storage from inside the guest OS and MPxIO does not provide that. As a result, the user thought they would have to choose between not using Ops Center and not using VCS.

    We found and presented a "middle road" solution, where the user is able to do both - use Ops Center for the majority of their LDOM/OVM Server for SPARC environment, but still use Veritas Dynamic Multi-Pathing (DMP) software to manage the disk devices used for the data protected by the cluster.

    In both use cases, the hardware is the same:

    • 2 T4-2, each with 4 FC cards and 2 ports/card
    • Sun Storedge 6180 FC LUNs presented to both hosts
    • One primary service domain and one alternate service domain (a root complex domain)
    • Each service domain sees two of the four FC cards.

    See the following blog posts for more details on setting up Alternate Service Domains:

    In our environment, the primary domain owns the cards in Slots 4 and 6, and the alternate domain owns the cards in Slots 1 and 9.

    (Refer to the system's service manual for system schematics and bus/PCI layouts.)

    You can control which specific cards, and even which ports on a card, use MPxIO and which don't.

    You can either enable MPxIO globally, and then just disable it on certain ports, or disable MPxIO globally, and then just enable it on certain ports. Either way will accomplish the same thing.
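
    The global switch is normally thrown with stmsboot - a sketch, assuming FC HBAs under the fp driver as in the fp.conf example below; run only one of the two, and note that stmsboot will ask about updating the configuration and rebooting:

    stmsboot -D fp -e     # enable MPxIO on all FC (fp) ports, then disable it per port in fp.conf
    stmsboot -D fp -d     # or: disable MPxIO on all FC ports, then enable it per port in fp.conf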

    See the Enabling or Disabling Multipathing on a Per-Port Basis document for more information.

    For example:

    root@example:~# tail /etc/driver/drv/fp.conf
    # "target-port-wwn,lun-list"
    #
    # To prevent LUNs 1 and 2 from being configured for target
    # port 510000f010fd92a1 and target port 510000e012079df1, set:
    #
    # pwwn-lun-blacklist=
    # "510000f010fd92a1,1,2",
    # "510000e012079df1,1,2";
    mpxio-disable="no";      <---------------------- Enable MPxIO globally
    name="fp" parent="/pci@400/pci@2/pci@0/pci@0/SUNW,qlc@0" port=0 mpxio-disable="yes";  <--- Disable on port


    root@example:~# ls -l /dev/cfg
    total 21
    .
    .
    .
    lrwxrwxrwx   1 root     root          60 Feb 13 12:51 c3 -> ../../devices/pci@400/pci@1/pci@0/pci@8/SUNW,qlc@0/fp@0,0:fc
    lrwxrwxrwx   1 root     root          62 Feb 13 12:51 c4 -> ../../devices/pci@400/pci@1/pci@0/pci@8/SUNW,qlc@0,1/fp@0,0:fc
    lrwxrwxrwx   1 root     root          60 Feb 13 12:51 c5 -> ../../devices/pci@400/pci@2/pci@0/pci@0/SUNW,qlc@0/fp@0,0:fc
    lrwxrwxrwx   1 root     root          62 Feb 13 12:51 c6 -> ../../devices/pci@400/pci@2/pci@0/pci@0/SUNW,qlc@0,1/fp@0,0:fc
    .
    .
    .

    Therefore "c5" on the example host will not be using MPxIO.

    Similar changes were made for the other 3 service domains.

    Now, for the guest vdisks that will not use MPxIO, the backend devices used were just raw /dev/dsk names - no multipathing software is involved. You will see a mix of these below: the aa-guest2-vol0 and aa-guest3-vol0 backends (the long c0t60080E5...d0s2 device names) use MPxIO, while the quorum and clusterdata backends (the c5t... and c3t... device names) do not:

    VDS
        NAME             LDOM             VOLUME         OPTIONS          MPGROUP        DEVICE
        primary-vds0     primary          aa-guest2-vol0                  aa-guest2      /dev/rdsk/c0t60080E5000183F120000107754E60374d0s2
                                          quorum1                                        /dev/dsk/c5t20140080E5184632d12s2
                                          quorum2                                        /dev/dsk/c5t20140080E5184632d13s2
                                          quorum3                                        /dev/dsk/c5t20140080E5184632d14s2
                                          clusterdata1                                   /dev/dsk/c5t20140080E5184632d8s2
                                          clusterdata2                                   /dev/dsk/c5t20140080E5184632d7s2
                                          clusterdata3                                   /dev/dsk/c5t20140080E5184632d10s2
                                          aa-guest3-vol0                  aa-guest3      /dev/dsk/c0t60080E5000183F120000138B5522A1C4d0s2

    VDS
        NAME             LDOM             VOLUME         OPTIONS          MPGROUP        DEVICE
        alternate-vds0   example-a    aa-guest2-vol0                  aa-guest2      /dev/rdsk/c0t60080E5000183F120000107754E60374d0s2
                                          clusterdata3                                   /dev/dsk/c3t20140080E5184632d10s2
                                          clusterdata2                                   /dev/dsk/c3t20140080E5184632d7s2
                                          clusterdata1                                   /dev/dsk/c3t20140080E5184632d8s2
                                          quorum3                                        /dev/dsk/c3t20140080E5184632d14s2
                                          quorum2                                        /dev/dsk/c3t20140080E5184632d13s2
                                          quorum1                                        /dev/dsk/c3t20140080E5184632d12s2
                                          aa-guest3-vol0                  aa-guest3      /dev/rdsk/c0t60080E5000183F120000138B5522A1C4d0s2

    Here you can see in Ops Center what the Alternate Domain's virtual disk services look like:

    From the guest LDOM's perspective, we see 12 data disks (c1d0 is the boot disk), which are really 2 paths to 6 LUNs - one path from the primary domain and one from the alternate:

    AVAILABLE DISK SELECTIONS:
           0. c1d0 <SUN-SUN_6180-0784-100.00GB>
              /virtual-devices@100/channel-devices@200/disk@0
           1. c1d1 <SUN-SUN_6180-0784-500.00MB>
              /virtual-devices@100/channel-devices@200/disk@1
           2. c1d2 <SUN-SUN_6180-0784-500.00MB>
              /virtual-devices@100/channel-devices@200/disk@2
           3. c1d3 <SUN-SUN_6180-0784-500.00MB>
              /virtual-devices@100/channel-devices@200/disk@3
           4. c1d4 <SUN-SUN_6180-0784-500.00MB>
              /virtual-devices@100/channel-devices@200/disk@4
           5. c1d5 <SUN-SUN_6180-0784-500.00MB>
              /virtual-devices@100/channel-devices@200/disk@5
           6. c1d6 <SUN-SUN_6180-0784 cyl 51198 alt 2 hd 64 sec 64>
              /virtual-devices@100/channel-devices@200/disk@6
           7. c1d7 <SUN-SUN_6180-0784 cyl 51198 alt 2 hd 64 sec 64>
              /virtual-devices@100/channel-devices@200/disk@7
           8. c1d8 <SUN-SUN_6180-0784 cyl 25598 alt 2 hd 64 sec 64>
              /virtual-devices@100/channel-devices@200/disk@8
           9. c1d9 <SUN-SUN_6180-0784 cyl 25598 alt 2 hd 64 sec 64>
              /virtual-devices@100/channel-devices@200/disk@9
          10. c1d10 <SUN-SUN_6180-0784 cyl 25598 alt 2 hd 64 sec 64>
              /virtual-devices@100/channel-devices@200/disk@a
          11. c1d11 <SUN-SUN_6180-0784 cyl 25598 alt 2 hd 64 sec 64>
              /virtual-devices@100/channel-devices@200/disk@b
          12. c1d12 <SUN-SUN_6180-0784-500.00MB>
              /virtual-devices@100/channel-devices@200/disk@c

    Again from Ops Center, you can click on the Storage tab of the guest, and see that the MPxIO-enabled LUN is known to be "Shared" by the hosts in Ops Center, while the other LUNs are not:

    At this point, since VCS was going to be installed on the LDOM OS and a cluster built, the Veritas stack, including VxVM and VxDMP, was enabled on the guest LDOMs to combine the two paths from the primary and alternate domains into a single multipathed device.

    For example:

    root@aa-guest1:~# vxdisk list
    DEVICE       TYPE            DISK         GROUP        STATUS
    sun6180-0_0  auto:ZFS        -            -            ZFS
    sun6180-0_1  auto:cdsdisk    sun6180-0_1  data_dg      online shared
    sun6180-0_2  auto:cdsdisk    -            -            online
    sun6180-0_3  auto:cdsdisk    -            -            online
    sun6180-0_4  auto:cdsdisk    -            -            online
    sun6180-0_5  auto:cdsdisk    sun6180-0_5  data_dg      online shared
    sun6180-0_6  auto:cdsdisk    sun6180-0_6  data_dg      online shared

    root@aa-guest1:~# vxdisk list sun6180-0_6
    Device:    sun6180-0_6
    devicetag: sun6180-0_6
    type:      auto
    clusterid: aa-guest
    disk:      name=sun6180-0_6 id=1427384525.22.aa-guest1
    group:     name=data_dg id=1427489834.14.aa-guest1
    info:      format=cdsdisk,privoffset=256,pubslice=2,privslice=2
    flags:     online ready private autoconfig shared autoimport imported
    pubpaths:  block=/dev/vx/dmp/sun6180-0_6s2 char=/dev/vx/rdmp/sun6180-0_6s2
    guid:      {a72e068c-d3ce-11e4-b9a0-00144ffe28bc}
    udid:      SUN%5FSUN%5F6180%5F60080E5000184108000000004C2CF217%5F60080E5000184632000056B354E6025F
    site:      -
    version:   3.1
    iosize:    min=512 (bytes) max=2048 (blocks)
    public:    slice=2 offset=65792 len=104783616 disk_offset=0
    private:   slice=2 offset=256 len=65536 disk_offset=0
    update:    time=1430930621 seqno=0.15
    ssb:       actual_seqno=0.0
    headers:   0 240
    configs:   count=1 len=48144
    logs:      count=1 len=7296
    Defined regions:
     config   priv 000048-000239[000192]: copy=01 offset=000000 enabled
     config   priv 000256-048207[047952]: copy=01 offset=000192 enabled
     log      priv 048208-055503[007296]: copy=01 offset=000000 enabled
     lockrgn  priv 055504-055647[000144]: part=00 offset=000000
    Multipathing information:
    numpaths:   2
    c1d11s2         state=enabled   type=secondary
    c1d9s2          state=enabled   type=secondary

    connectivity: aa-guest1 aa-guest2

    root@aa-guest1:~# vxdmpadm getsubpaths dmpnodename=sun6180-0_6
    NAME         STATE[A]   PATH-TYPE[M] CTLR-NAME          ENCLR-TYPE   ENCLR-NAME    ATTRS
    ========================================================================================
    c1d11s2      ENABLED(A)  SECONDARY    c1                 SUN6180-     sun6180-0        -
    c1d9s2       ENABLED(A)  SECONDARY    c1                 SUN6180-     sun6180-0        -

    In this way, the 2 guests that were going to be clustered together are now ready for VCS installation and configuration.

    The second use case changes a little bit in that both MPxIO and Veritas DMP are used in the primary and alternate domains, and DMP is still used in the guest as well. The advantage of this is that there is more redundancy and I/O throughput available at the service domain level, because multipathed devices are used for the guest virtual disk services instead of just the raw /dev/dsk/c#t#d# devices.

    Now the disk services look something like this, where the quorum and clusterdata vdsdevs are DMP-based (the /dev/vx/dmp paths) and the guest boot volumes still use MPxIO devices:

    VDS
        NAME             LDOM             VOLUME         OPTIONS          MPGROUP        DEVICE
        primary-vds0     primary          aa-guest2-vol0                  aa-guest2      /dev/rdsk/c0t60080E5000183F120000107754E60374d0s2
                                          aa-guest3-vol0                  aa-guest3      /dev/dsk/c0t60080E5000183F120000138B5522A1C4d0s2
                                          quorum1                                        /dev/vx/dmp/sun6180-0_6s2
                                          quorum2                                        /dev/vx/dmp/sun6180-0_7s2
                                          quorum3                                        /dev/vx/dmp/sun6180-0_8s2
                                          clusterdata1                                   /dev/vx/dmp/sun6180-0_12s2
                                          clusterdata2                                   /dev/vx/dmp/sun6180-0_5s2
                                          clusterdata3                                   /dev/vx/dmp/sun6180-0_14s2


    VDS
        NAME             LDOM             VOLUME         OPTIONS          MPGROUP        DEVICE
        alternate-vds0   example-a    aa-guest2-vol0                  aa-guest2      /dev/rdsk/c0t60080E5000183F120000107754E60374d0s2
                                          aa-guest3-vol0                  aa-guest3      /dev/rdsk/c0t60080E5000183F120000138B5522A1C4d0s2
                                          quorum1                                        /dev/vx/dmp/sun6180-0_6s2
                                          quorum2                                        /dev/vx/dmp/sun6180-0_7s2
                                          quorum3                                        /dev/vx/dmp/sun6180-0_8s2
                                          clusterdata1                                   /dev/vx/dmp/sun6180-0_12s2
                                          clusterdata2                                   /dev/vx/dmp/sun6180-0_5s2
                                          clusterdata3                                   /dev/vx/dmp/sun6180-0_14s2

    Again, the advantage here is that 2 paths to the same LUN are being presented from each service domain, so there is additional redundancy and throughput available. You can see the two paths:

    root@example:~# vxdisk list sun6180-0_5
    Device:    sun6180-0_5
    devicetag: sun6180-0_5
    type:      auto
    clusterid: aa-guest
    disk:      name= id=1427384268.11.aa-guest1
    group:     name=data_dg id=1427489834.14.aa-guest1
    info:      format=cdsdisk,privoffset=256,pubslice=2,privslice=2
    flags:     online ready private autoconfig shared autoimport
    pubpaths:  block=/dev/vx/dmp/sun6180-0_5s2 char=/dev/vx/rdmp/sun6180-0_5s2
    guid:      {0e62396e-d3ce-11e4-b9a0-00144ffe28bc}
    udid:      SUN%5FSUN%5F6180%5F60080E5000184108000000004C2CF217%5F60080E5000183F120000107F54E603CA
    site:      -
    version:   3.1
    iosize:    min=512 (bytes) max=2048 (blocks)
    public:    slice=2 offset=65792 len=104783616 disk_offset=0
    private:   slice=2 offset=256 len=65536 disk_offset=0
    update:    time=1430930621 seqno=0.15
    ssb:       actual_seqno=0.0
    headers:   0 240
    configs:   count=1 len=48144
    logs:      count=1 len=7296
    Defined regions:
     config   priv 000048-000239[000192]: copy=01 offset=000000 enabled
     config   priv 000256-048207[047952]: copy=01 offset=000192 enabled
     log      priv 048208-055503[007296]: copy=01 offset=000000 enabled
     lockrgn  priv 055504-055647[000144]: part=00 offset=000000
    Multipathing information:
    numpaths:   2
    c5t20140080E5184632d7s2 state=enabled   type=primary
    c5t20250080E5184632d7s2 state=enabled   type=secondary

    The Ops Center view of the virtual disk services is much the same:

    Now the cluster can be set up just as it was before. To the guest, the virtual disks have not changed - just the back-end presentation of the LUNs has changed. This was transparent to the guest.
