Thursday Jul 02, 2015

Upgrading to 12.3

Now that Ops Center 12.3 is out, you might be wondering how to upgrade to it. I thought I'd walk you through the major steps involved in upgrading, and direct you to the documents that go into more detail.

First off, to upgrade directly to 12.3, you have to be using some variant of 12.2 - 12.2.0, 12.2.1, or 12.2.2. If you're on 12.1, you have to upgrade to 12.2 first. Here's a flowchart of the upgrade paths:


The Upgrade guide also walks you through the other planning steps. It's a good idea to look at the release notes and the Oracle Solaris and Linux install guides before you upgrade, to make sure that you're aware of known issues and the latest system requirements.

Once you've done your planning, you go to the Upgrade guide chapter that matches your environment - there are separate procedures for HA and non-HA environments, and separate procedures for upgrading from the command line or from the UI.

These chapters will walk you through downloading the upgrade (you can get it through the UI, from the Oracle Technology Network, or from the Oracle Software Delivery Cloud), and applying the upgrade to the Enterprise Controller(s), Proxy Controllers, and Agents.
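
Before and after you apply the upgrade, it's also worth a quick sanity check from the command line. Here's a minimal sketch, assuming the default Oracle Solaris install path for the Ops Center CLI tools (the path is different on Linux):

root@example:~# /opt/SUNWxvmoc/bin/ecadm status        (on the Enterprise Controller)
root@example:~# /opt/SUNWxvmoc/bin/proxyadm status     (on each Proxy Controller)

Both commands should report the respective controller as online before you start, and again once the upgrade jobs finish.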

Thursday Jun 25, 2015

Ops Center 12.3 is Released

Version 12.3 of Ops Center is now available!

This is a major upgrade from the prior versions. In addition to bug fixes and performance enhancements, there are a number of new features:

  • Asset discovery refinements, including the ability to run discovery probes only on a specific network or from a specific Proxy Controller
  • Support for discovering and managing existing Oracle Solaris 11 Kernel Zones
  • The ability to create a custom Oracle Solaris 11 AI manifest and use it for provisioning
  • Refined search: Searching for one or more assets now displays the search results in a new tab in the Navigation pane, making navigation a bit easier
  • New and expanded books in the doc library

Take a look at the What's New In This Release document for a more detailed breakdown of the new features, and the Upgrade guide for more information about upgrading to version 12.3.

You can also take a look at the 12.3 documentation library here.

Thursday Jun 18, 2015

Using MPxIO and Veritas in Ops Center

Hey, folks. This is a guest post by Doug Schwabauer about using MPxIO and Veritas in Ops Center. It's well worth the read.

Enterprise Manager Ops Center has a rich feature set for Oracle VM Server for SPARC (LDoms) technology. You can provision control domains, service domains, and LDom guests; control the size and properties of the guests; add networks and storage to guests; create server pools; and more. The foundation for many of these features is the Storage Library. In Ops Center, a Storage Library can be Fibre Channel-based, iSCSI-based, or NAS-based.

For FC- and iSCSI-based LUNs, Ops Center only recognizes LUNs that are presented to the Solaris host via MPxIO, the built-in multipathing software for Solaris. LUNs that are presented via other multipathing software, such as Veritas (Symantec) DMP or EMC PowerPath, are not recognized.

Sometimes users don't have the option of using MPxIO, or choose not to use it for other reasons, such as use cases where SCSI-3 Persistent Reservations (SCSI-3 PR) are required.

It is possible to have a mix of MPxIO-managed LUNs and other LUNs, and to mix and match the LDOM guests and storage libraries for different purposes.

Below are two such use cases. In both, the user wanted to use Veritas Cluster Server (VCS) to cluster selected applications running within LDoms that are managed by Ops Center. However, the cluster requires I/O fencing via SCSI-3 PR. In this scenario, the storage devices cannot be presented via MPxIO, because SCSI-3 PR requires direct access to the storage from inside the guest OS, and MPxIO doesn't provide that capability. The user therefore thought they would have to give up either Ops Center or VCS.

We found and presented a "middle road" solution, where the user is able to do both: use Ops Center for the majority of their LDoms/Oracle VM Server for SPARC environment, but still use Veritas Dynamic Multi-Pathing (DMP) to manage the disk devices that hold the data protected by the cluster.

In both use cases, the hardware is the same:

  • Two T4-2 servers, each with four FC cards and two ports per card
  • Sun StorEdge 6180 FC LUNs presented to both hosts
  • One primary service domain and one alternate service domain (a root complex domain) on each host
  • Each service domain sees two of the four FC cards.

See the following blog posts for more details on setting up Alternate Service Domains:

In our environment, the primary domain owns the cards in Slots 4 and 6, and the alternate domain owns the cards in Slots 1 and 9.

(Refer to the system's Service Manual for system schematics and bus/PCI layouts.)

You can control which specific cards, and even which individual ports on a card, use MPxIO and which don't.

You can either enable MPxIO globally and then disable it on certain ports, or disable MPxIO globally and then enable it on certain ports. Either approach accomplishes the same thing.

See the Enabling or Disabling Multipathing on a Per-Port Basis document for more information.

For example:

root@example:~# tail /etc/driver/drv/fp.conf
# "target-port-wwn,lun-list"
#
# To prevent LUNs 1 and 2 from being configured for target
# port 510000f010fd92a1 and target port 510000e012079df1, set:
#
# pwwn-lun-blacklist=
# "510000f010fd92a1,1,2",
# "510000e012079df1,1,2";
mpxio-disable="no";      <---------------------- Enable MPxIO globally
name="fp" parent="/pci@400/pci@2/pci@0/pci@0/SUNW,qlc@0" port=0 mpxio-disable="yes";  <--- Disable on port


root@example:~# ls -l /dev/cfg
total 21
.
.
.
lrwxrwxrwx   1 root     root          60 Feb 13 12:51 c3 -> ../../devices/pci@400/pci@1/pci@0/pci@8/SUNW,qlc@0/fp@0,0:fc
lrwxrwxrwx   1 root     root          62 Feb 13 12:51 c4 -> ../../devices/pci@400/pci@1/pci@0/pci@8/SUNW,qlc@0,1/fp@0,0:fc
lrwxrwxrwx   1 root     root          60 Feb 13 12:51 c5 -> ../../devices/pci@400/pci@2/pci@0/pci@0/SUNW,qlc@0/fp@0,0:fc
lrwxrwxrwx   1 root     root          62 Feb 13 12:51 c6 -> ../../devices/pci@400/pci@2/pci@0/pci@0/SUNW,qlc@0,1/fp@0,0:fc
.
.
.

Therefore "c5" on the example host will not be using MPxIO.

Similar changes were made for the other 3 service domains.
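
Keep in mind that edits to fp.conf take effect only after the fp driver re-reads its configuration, which normally means a reboot. The per-port multipathing document referenced above outlines the standard workflow, roughly:

root@example:~# stmsboot -u     <--- update vfstab device names after the manual edit; prompts for the required reboot

After the reboot, the ls -l /dev/cfg output shown above (or stmsboot -L) confirms which controllers ended up under MPxIO control.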

Now, for the guest vdisks that will not use MPxIO, the backend devices are just raw /dev/dsk names - no multipathing software is involved. You'll see a mix of both kinds in the listings below: the devices with long WWN-based c0t60080E... names are under MPxIO, while the c3/c5 device names are raw, non-MPxIO paths:

VDS
    NAME             LDOM             VOLUME         OPTIONS          MPGROUP        DEVICE
    primary-vds0     primary          aa-guest2-vol0                  aa-guest2      /dev/rdsk/c0t60080E5000183F120000107754E60374d0s2
                                      quorum1                                        /dev/dsk/c5t20140080E5184632d12s2
                                      quorum2                                        /dev/dsk/c5t20140080E5184632d13s2
                                      quorum3                                        /dev/dsk/c5t20140080E5184632d14s2
                                      clusterdata1                                   /dev/dsk/c5t20140080E5184632d8s2
                                      clusterdata2                                   /dev/dsk/c5t20140080E5184632d7s2
                                      clusterdata3                                   /dev/dsk/c5t20140080E5184632d10s2
                                      aa-guest3-vol0                  aa-guest3      /dev/dsk/c0t60080E5000183F120000138B5522A1C4d0s2

VDS
    NAME             LDOM             VOLUME         OPTIONS          MPGROUP        DEVICE
    alternate-vds0   example-a    aa-guest2-vol0                  aa-guest2      /dev/rdsk/c0t60080E5000183F120000107754E60374d0s2
                                      clusterdata3                                   /dev/dsk/c3t20140080E5184632d10s2
                                      clusterdata2                                   /dev/dsk/c3t20140080E5184632d7s2
                                      clusterdata1                                   /dev/dsk/c3t20140080E5184632d8s2
                                      quorum3                                        /dev/dsk/c3t20140080E5184632d14s2
                                      quorum2                                        /dev/dsk/c3t20140080E5184632d13s2
                                      quorum1                                        /dev/dsk/c3t20140080E5184632d12s2
                                      aa-guest3-vol0                  aa-guest3      /dev/rdsk/c0t60080E5000183F120000138B5522A1C4d0s2
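
Since Ops Center doesn't recognize the non-MPxIO LUNs, vdsdev entries like the quorum and clusterdata volumes above would typically be created outside Ops Center with the ldm command. Here's a rough sketch, using a device path and volume name from the listing (the vdisk names and the guest name aa-guest1 are assumptions for illustration):

root@example:~# ldm add-vdsdev /dev/dsk/c5t20140080E5184632d12s2 quorum1@primary-vds0
root@example:~# ldm add-vdsdev /dev/dsk/c3t20140080E5184632d12s2 quorum1@alternate-vds0
root@example:~# ldm add-vdisk quorum1-p quorum1@primary-vds0 aa-guest1
root@example:~# ldm add-vdisk quorum1-a quorum1@alternate-vds0 aa-guest1

Note that no mpgroup is set on these volumes: each path is presented to the guest as a separate vdisk, which is exactly what DMP inside the guest needs in order to handle the multipathing itself.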

Here you can see in Ops Center what the Alternate Domain's virtual disk services look like:

From the guest LDom's perspective, we see 12 data disks (c1d0 is the boot disk), which are really two paths to each of 6 LUNs - one path from the primary domain and one from the alternate:

AVAILABLE DISK SELECTIONS:
       0. c1d0 <SUN-SUN_6180-0784-100.00GB>
          /virtual-devices@100/channel-devices@200/disk@0
       1. c1d1 <SUN-SUN_6180-0784-500.00MB>
          /virtual-devices@100/channel-devices@200/disk@1
       2. c1d2 <SUN-SUN_6180-0784-500.00MB>
          /virtual-devices@100/channel-devices@200/disk@2
       3. c1d3 <SUN-SUN_6180-0784-500.00MB>
          /virtual-devices@100/channel-devices@200/disk@3
       4. c1d4 <SUN-SUN_6180-0784-500.00MB>
          /virtual-devices@100/channel-devices@200/disk@4
       5. c1d5 <SUN-SUN_6180-0784-500.00MB>
          /virtual-devices@100/channel-devices@200/disk@5
       6. c1d6 <SUN-SUN_6180-0784 cyl 51198 alt 2 hd 64 sec 64>
          /virtual-devices@100/channel-devices@200/disk@6
       7. c1d7 <SUN-SUN_6180-0784 cyl 51198 alt 2 hd 64 sec 64>
          /virtual-devices@100/channel-devices@200/disk@7
       8. c1d8 <SUN-SUN_6180-0784 cyl 25598 alt 2 hd 64 sec 64>
          /virtual-devices@100/channel-devices@200/disk@8
       9. c1d9 <SUN-SUN_6180-0784 cyl 25598 alt 2 hd 64 sec 64>
          /virtual-devices@100/channel-devices@200/disk@9
      10. c1d10 <SUN-SUN_6180-0784 cyl 25598 alt 2 hd 64 sec 64>
          /virtual-devices@100/channel-devices@200/disk@a
      11. c1d11 <SUN-SUN_6180-0784 cyl 25598 alt 2 hd 64 sec 64>
          /virtual-devices@100/channel-devices@200/disk@b
      12. c1d12 <SUN-SUN_6180-0784-500.00MB>
          /virtual-devices@100/channel-devices@200/disk@c
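
If you need to map a given guest disk back to its volume and service domain, you can dump the guest's disk configuration from the control domain (the guest name here is an assumption):

root@example:~# ldm list -o disk aa-guest1

The VDISK section of that output shows each guest disk's volume and the service domain it comes from, which you can then match against the VDS listings above.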

Back in Ops Center, you can click the guest's Storage tab and see that the MPxIO-enabled LUN is known to be "Shared" by the hosts, while the other LUNs are not:

At this point, since VCS was going to be installed on the LDom OS and a cluster built, the Veritas stack, including VxVM and VxDMP, was enabled on the guest LDoms to coalesce the two paths from the primary and alternate domains into a single device.

For example:

root@aa-guest1:~# vxdisk list
DEVICE       TYPE            DISK         GROUP        STATUS
sun6180-0_0  auto:ZFS        -            -            ZFS
sun6180-0_1  auto:cdsdisk    sun6180-0_1  data_dg      online shared
sun6180-0_2  auto:cdsdisk    -            -            online
sun6180-0_3  auto:cdsdisk    -            -            online
sun6180-0_4  auto:cdsdisk    -            -            online
sun6180-0_5  auto:cdsdisk    sun6180-0_5  data_dg      online shared
sun6180-0_6  auto:cdsdisk    sun6180-0_6  data_dg      online shared

root@aa-guest1:~# vxdisk list sun6180-0_6
Device:    sun6180-0_6
devicetag: sun6180-0_6
type:      auto
clusterid: aa-guest
disk:      name=sun6180-0_6 id=1427384525.22.aa-guest1
group:     name=data_dg id=1427489834.14.aa-guest1
info:      format=cdsdisk,privoffset=256,pubslice=2,privslice=2
flags:     online ready private autoconfig shared autoimport imported
pubpaths:  block=/dev/vx/dmp/sun6180-0_6s2 char=/dev/vx/rdmp/sun6180-0_6s2
guid:      {a72e068c-d3ce-11e4-b9a0-00144ffe28bc}
udid:      SUN%5FSUN%5F6180%5F60080E5000184108000000004C2CF217%5F60080E5000184632000056B354E6025F
site:      -
version:   3.1
iosize:    min=512 (bytes) max=2048 (blocks)
public:    slice=2 offset=65792 len=104783616 disk_offset=0
private:   slice=2 offset=256 len=65536 disk_offset=0
update:    time=1430930621 seqno=0.15
ssb:       actual_seqno=0.0
headers:   0 240
configs:   count=1 len=48144
logs:      count=1 len=7296
Defined regions:
 config   priv 000048-000239[000192]: copy=01 offset=000000 enabled
 config   priv 000256-048207[047952]: copy=01 offset=000192 enabled
 log      priv 048208-055503[007296]: copy=01 offset=000000 enabled
 lockrgn  priv 055504-055647[000144]: part=00 offset=000000
Multipathing information:
numpaths:   2
c1d11s2         state=enabled   type=secondary
c1d9s2          state=enabled   type=secondary

connectivity: aa-guest1 aa-guest2

root@aa-guest1:~# vxdmpadm getsubpaths dmpnodename=sun6180-0_6
NAME         STATE[A]   PATH-TYPE[M] CTLR-NAME          ENCLR-TYPE   ENCLR-NAME    ATTRS
========================================================================================
c1d11s2      ENABLED(A)  SECONDARY    c1                 SUN6180-     sun6180-0        -
c1d9s2       ENABLED(A)  SECONDARY    c1                 SUN6180-     sun6180-0        -
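
(If a path that was presented later doesn't show up in the vxdisk or vxdmpadm output, the usual sequence inside the guest is to rebuild the device nodes and have VxVM rescan, for example:)

root@aa-guest1:~# devfsadm
root@aa-guest1:~# vxdctl enable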

In this way, the 2 guests that were going to be clustered together are now ready for VCS installation and configuration.

The second use case changes a little bit in that both MPxIO and Veritas DMP are used in the primary and alternate domains, and DMP is still used in the guest as well. The advantage of this is that there is more redundancy and I/O throughput available at the service-domain level, because multipathed devices are used for the guest virtual disk services instead of just raw /dev/dsk/c#t#d# devices.

Now the disk services look something like this, where the /dev/vx/dmp vdsdevs are DMP-based and the /dev/rdsk/c0t... ones are MPxIO-based:

VDS
    NAME             LDOM             VOLUME         OPTIONS          MPGROUP        DEVICE
    primary-vds0     primary          aa-guest2-vol0                  aa-guest2      /dev/rdsk/c0t60080E5000183F120000107754E60374d0s2
                                      aa-guest3-vol0                  aa-guest3      /dev/dsk/c0t60080E5000183F120000138B5522A1C4d0s2
                                      quorum1                                        /dev/vx/dmp/sun6180-0_6s2
                                      quorum2                                        /dev/vx/dmp/sun6180-0_7s2
                                      quorum3                                        /dev/vx/dmp/sun6180-0_8s2
                                      clusterdata1                                   /dev/vx/dmp/sun6180-0_12s2
                                      clusterdata2                                   /dev/vx/dmp/sun6180-0_5s2
                                      clusterdata3                                   /dev/vx/dmp/sun6180-0_14s2


VDS
    NAME             LDOM             VOLUME         OPTIONS          MPGROUP        DEVICE
    alternate-vds0   example-a    aa-guest2-vol0                  aa-guest2      /dev/rdsk/c0t60080E5000183F120000107754E60374d0s2
                                      aa-guest3-vol0                  aa-guest3      /dev/rdsk/c0t60080E5000183F120000138B5522A1C4d0s2
                                      quorum1                                        /dev/vx/dmp/sun6180-0_6s2
                                      quorum2                                        /dev/vx/dmp/sun6180-0_7s2
                                      quorum3                                        /dev/vx/dmp/sun6180-0_8s2
                                      clusterdata1                                   /dev/vx/dmp/sun6180-0_12s2
                                      clusterdata2                                   /dev/vx/dmp/sun6180-0_5s2
                                      clusterdata3                                   /dev/vx/dmp/sun6180-0_14s2

Again, the advantage here is that two paths to the same LUN are presented from each service domain, so there is additional redundancy and throughput available. You can see the two paths:

root@example:~# vxdisk list sun6180-0_5
Device:    sun6180-0_5
devicetag: sun6180-0_5
type:      auto
clusterid: aa-guest
disk:      name= id=1427384268.11.aa-guest1
group:     name=data_dg id=1427489834.14.aa-guest1
info:      format=cdsdisk,privoffset=256,pubslice=2,privslice=2
flags:     online ready private autoconfig shared autoimport
pubpaths:  block=/dev/vx/dmp/sun6180-0_5s2 char=/dev/vx/rdmp/sun6180-0_5s2
guid:      {0e62396e-d3ce-11e4-b9a0-00144ffe28bc}
udid:      SUN%5FSUN%5F6180%5F60080E5000184108000000004C2CF217%5F60080E5000183F120000107F54E603CA
site:      -
version:   3.1
iosize:    min=512 (bytes) max=2048 (blocks)
public:    slice=2 offset=65792 len=104783616 disk_offset=0
private:   slice=2 offset=256 len=65536 disk_offset=0
update:    time=1430930621 seqno=0.15
ssb:       actual_seqno=0.0
headers:   0 240
configs:   count=1 len=48144
logs:      count=1 len=7296
Defined regions:
 config   priv 000048-000239[000192]: copy=01 offset=000000 enabled
 config   priv 000256-048207[047952]: copy=01 offset=000192 enabled
 log      priv 048208-055503[007296]: copy=01 offset=000000 enabled
 lockrgn  priv 055504-055647[000144]: part=00 offset=000000
Multipathing information:
numpaths:   2
c5t20140080E5184632d7s2 state=enabled   type=primary
c5t20250080E5184632d7s2 state=enabled   type=secondary

The Ops Center view of the virtual disk services is much the same:

Now the cluster can be set up just as it was before. From the guest's perspective the virtual disks have not changed; only the back-end presentation of the LUNs has changed, and that change is transparent to the guest.

Thursday Jun 11, 2015

Providing Contact Info for ASR

Ops Center includes a feature called Auto Service Request, which can automatically file service requests for managed hardware. However, I've seen a bit of confusion about how to get it running.

First, the prereqs - to get ASR running, you need to be in connected mode, and you need to have a set of My Oracle Support (MOS) credentials entered in the Edit Authentications window. Your MOS credentials have to be associated with a customer service identifier (CSI) with rights over the hardware that you want to be enabled for ASR.

Once you've got that, you'll click the Edit ASR Contact Information action in the Administration section. This opens a window where you specify the default contact information for your assets, which is used for all ASRs by default.

If you have assets that need separate contact information, you can specify separate ASR contact information for an asset or a group of assets. That info is used in place of the default contact info.

Finally, once you've got the contact info in the system, you click Enable ASR. This action launches a job to enable the assets for ASR, and it attempts to enable new assets for ASR when they're discovered. From then on, if a critical incident occurs on the hardware, ASR should create a service request for it.

Take a look at the Auto Service Request chapter of the Admin Guide for more information.

Thursday Jun 04, 2015

Enterprise Controllers in Logical Domains

I saw a few questions about installing Enterprise Controllers in Logical Domains, and what's possible with that sort of deployment. Here are some answers:

"Is it supported to install the Enterprise Controller in a Logical Domain?"

Yep. The Certified Systems Matrix lists the supported OSes for EC installation, and Oracle VM Server for SPARC is supported (as are some Oracle Solaris Zones).

"Can you use Oracle Solaris Cluster to provide High Availability for an Enterprise Controller installed on a Logical Domain?"

Yes, this is possible. It deserves its own post, so I'll go into more detail on it soon, but yes, it works.

"If I have two Enterprise Controllers installed on Logical Domains, can I have EC 1 discover and manage the LDom for EC 2, and vice versa?"

No. The Agent Controllers installed on EC and PC systems are different from standard Agents, and if you install an Agent from one EC on another EC's system, it's going to get confused.

Thursday May 28, 2015

Uploading and Deploying Oracle Solaris 11 Files

I saw a question recently about uploading flat files, such as config files or tarballs, to an Oracle Solaris 11 library and then deploying them to Oracle Solaris 11 servers. This is an easy task for Oracle Solaris 8, 9, or 10, but it's trickier to work out with Oracle Solaris 11.

Here are the steps to upload and deploy such files with Oracle Solaris 11 in Ops Center, using our software library for the content.

  1. Create an Oracle Solaris 11 package (pkg) that contains the config files. Here's an example of how to do so: http://docs.oracle.com/cd/E23824_01/html/E21798/glcej.html (there's also a rough sketch of steps 1-3 after this list)
  2. Add that pkg to the repository. (The above example also covers this step.)
  3. Sync Ops Center with the repository so that the new pkg is added to Ops Center's catalog of software.
  4. Create an Ops Center Oracle Solaris 11 Profile that installs the pkg created in Step 1.
  5. Apply the profile in an update plan to the target systems.
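
Here's a rough sketch of steps 1 through 3 using the standard IPS tools; the proto directory, package name, and repository path are placeholders, and the linked example in step 1 walks through the manifest details.

Step 1: generate a manifest from a directory tree that contains the config files, then add a pkg.fmri (name/version) action to it with pkgmogrify or a text editor:

root@example:~# pkgsend generate /path/to/proto | pkgfmt > mycfg.p5m

Step 2: publish the package to your IPS repository:

root@example:~# pkgsend publish -d /path/to/proto -s /path/to/myrepo mycfg.p5m
root@example:~# pkgrepo refresh -s /path/to/myrepo

Step 3 is then just a matter of syncing Ops Center's Oracle Solaris 11 library with that repository so the new package appears in the catalog.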

For more information about OS Profiles, see the OS Updates chapter.

Thursday May 21, 2015

Special Database Options

When you're installing Ops Center, you have two options for the product database: you can use an embedded database that's automatically installed on the Enterprise Controller and managed by Ops Center, or you can use a remote database that you manage yourself.

With regard to the customer-managed database, I saw an important question recently: when you install this database, do you have to enable any of the advanced or special features? Some folks want to use the bare-minimum installation for security reasons.

The answer here is that Ops Center only requires the base installation; no special features are used. As long as you're using one of the DB versions listed in the Certified Systems Matrix, you're golden.

Thursday May 14, 2015

Supported OS, LDom, and firmware versions

A couple of weeks ago, I did a post about the supported versions for LDoms. As of Ops Center version 12.2.2, LDoms 3.2 is not supported, even though the latest Oracle Solaris 11.2 SRU (SRU 8) ships with it. That post raised a couple of follow-up questions that I thought I should answer:

"If LDoms 3.2 isn't supported, are new versions of Oracle Solaris that contain it supported?"

The OS versions themselves are supported, yes. If you have a non-virtualized Oracle Solaris 11.2 OS, you can upgrade to SRU 8 without difficulty. It's only on LDoms systems that you should avoid SRUs that contain LDoms 3.2 until it's officially supported.

"The Certified Systems Matrix recommends using the latest firmware for managed servers. Does the latest firmware have a minimum OS level?"

No, the firmware and OS levels are independent. Even if you have an LDoms system that's using an earlier version of S11.2, updating the hardware underneath it to use the latest firmware shouldn't cause problems.

EDIT: Ops Center 12.3 supports LDoms 3.2.

Thursday May 07, 2015

Clustered Ops Center installation

Today's question from an Ops Center user:

"Can we cluster Ops Center deployments using, say, Solaris Cluster? How would we do it?"

Well, there are a couple of possible ways that you can install the Enterprise Controller so that it can fail over.

The first method is to use the documented HA installation, which uses Oracle Clusterware, two or more Enterprise Controller systems, and a remote customer-managed database. The procedures for this kind of installation are documented in the Oracle Solaris and Linux install guides.

The second is to install the Enterprise Controller in an LDom controlled by Oracle Solaris Cluster, and then have the Enterprise Controller fail over between hosts via Cluster. You can use an embedded database or a remote database with this solution.

Thursday Apr 30, 2015

Using Maintenance Mode

So, after last week's post about blacklisting assets in Ops Center, a couple of people pointed out that there's another, probably easier, way to temporarily stop an asset from generating ASRs while you're doing maintenance on it: putting the asset in maintenance mode.

Putting an asset in maintenance mode stops it from generating new incidents, so that when you power off or reconfigure it, Ops Center doesn't freak out. Ops Center doesn't stop managing the asset, and you can then disable maintenance mode when you're done.

Bear in mind that Ops Center will also treat the asset as though it's about to go down: If you put a Proxy Controller in maintenance mode it can't run jobs, and if you put an Oracle VM Server for SPARC in maintenance mode Ops Center will try to migrate its guests to another system in the server pool, or stop them if no other system is available.

Take a look at the Incidents chapter in the Feature Reference Guide for more information about maintenance mode.

Thursday Apr 23, 2015

Disabling ASR for a specific asset

Enabling ASR for your environment helps you get quick assistance when you have a hardware issue - Ops Center takes asset information when a hardware incident occurs, and generates a service request based on contact information and MOS credentials that you've provided.

The trick, though, is that you enable ASR for your entire environment. You can provide separate contact info for some groups, but when you enable ASR, it's on for all assets associated with the credentials. So, what if you're performing some hardware maintenance and don't want to accidentally cause ASR to open a service request?

In cases like this, you can add an asset to a blacklist to prevent ASRs from being generated for it. Select the Enterprise Controller in the Administration section of the UI, then click the Configuration tab. In the list of subsystems, select Auto Service Request. You can then enter one or more serial numbers, as a comma-separated list, in the serial blacklist field to disable ASR creation for those assets. The assets are still monitored, and ASRs are created for other assets as normal.

The ASR chapter in the Admin guide goes into more detail about how to get ASR running.

Thursday Apr 16, 2015

LDom versions

I saw a recent question about LDOM versions that come along with Oracle Solaris versions:

"I'm looking at upgrading some of my Control Domains to Oracle Solaris 11.2 SRU 8, which comes with LDoms 3.2. If I upgrade, will the new version be supported for migration?"

LDoms 3.2 is not yet supported by Ops Center, so if you start using it, you'll find that Ops Center won't know how to find migration targets for it.

If you're interested in which versions are supported, keep an eye on the Certified Systems Matrix; we'll add new versions there once they're certified.

EDIT: As Jay Lake pointed out in the comments, MOS Document 2014856.1 contains a workaround for this issue:

On each control domain (CDom) in the server pool, add the following line to /opt/sun/n1gc/etc/ldoms.properties:

archlist.LDMMGR.3.2=native,generic,migration-class1,sparc64-class1

Then restart the agent on that control domain. Repeat for every control domain in the pool.
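
Putting that together, on a given control domain the workaround looks roughly like this (the agentadm path shown assumes the default Oracle Solaris agent install location):

root@cdom:~# echo "archlist.LDMMGR.3.2=native,generic,migration-class1,sparc64-class1" >> /opt/sun/n1gc/etc/ldoms.properties
root@cdom:~# /opt/SUNWxvmoc/bin/agentadm stop
root@cdom:~# /opt/SUNWxvmoc/bin/agentadm start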

EDIT 2: Ops Center 12.3 supports LDoms 3.2.

Thursday Apr 09, 2015

Adding an Asset to a Group During Discovery

A lot of people use user-defined groups to organize their assets in ways that make sense for their environment - like having groups for different locations, or groups based on the primary purpose of the assets. If you're adding a number of new assets, though, it can be a bit of a pain to discover all of them and then manually add them to the correct group.

A solution to this issue is to use group rules and tags to add the assets to the correct group during the discovery process. First, you edit the group and use the Create Group Rules option. You can create a rule that will add assets to the group automatically if they have a specific tag.

Then, when you're discovering systems that belong in a group, use the Tags step of the discovery profile wizard to add the corresponding tag to those assets as they're discovered. They will then be added to the correct group automatically.

See the Asset Management chapter of the Feature Reference Guide for more information about discovery and asset groups.

Thursday Apr 02, 2015

Ops Center's port usage

I saw a question about the ports used by Ops Center:

"There's a table in the Ports and Protocols guide showing what ports have to be opened for Ops Center, but I'm confused about directionality. Do these ports have to be open bi-directional?"

Nope. The ports only have to be open in the direction indicated by the first column - so, the ports listed for "Enterprise Controller to Proxy Controller" only need to be open in that direction.

Thursday Mar 26, 2015

Modifying Asset Groups

I got a question recently about the system-defined groups used in Ops Center:

"As I started discovering assets in Ops Center, I saw that they were being sorted into system-defined groups based on asset type. How can I modify these groups for my environment - for example, to put assets from two different labs into two groups?"

Well, you can't. The system-defined groups will forever be system-defined. However, if you're looking for a way to control the way that your assets are organized, you can create user-defined groups.

For example, if you have two labs, you can create a group for each lab, and then add each asset to the correct lab group.

Another possibility is to use group rules to add the assets automatically. For example, when you discover the assets in Lab A, you could add a "LabA" tag to them. Then, you could set up a group for Lab A, and create a rule that will automatically add assets to the group if they have the "LabA" tag.

You can learn more about discovery and user-defined groups in the Asset Management chapter.

About

This blog discusses issues encountered in Ops Center and highlights the ways in which the documentation can help you.
