
Recent Posts

General

What's In Your Datacenter

Starting in 12.3.1, Ops Center keeps count of your assets. Despite the name, the numbers are not a physical inventory of assets; they are the number of access points to the assets. An access point is the connection between the Enterprise Controller and an asset. Often it's a 1-to-1 relationship, but it is common to have multiple connections between an asset and the Enterprise Controller. For example, a server will have an access point for its service processor and one for its operating system. A logical domain will have an access point for its control domain and one for its operating system. So what the asset counter really shows you is how many connections the Enterprise Controller is handling by type and how many connections each Proxy Controller is handling by type.

If you are only interested in a particular type of asset, use the Asset Counter tab in the Enterprise Controller's dashboard. Go to the Administration section of the navigation pane, then click the Asset Counter tab in the center pane. Let's say you are running a job that creates logical domains and you only need to check progress. You could always check the Jobs pane to see the job complete, but to see each logical domain as it completes, refresh the Asset Counter tab and watch the count in the Ldoms column increase.

To investigate the access points, or if you wonder whether it's time to rebalance your Proxy Controllers, call the OCDoctor. Two of the OCDoctor's options now show the number of access points on the Enterprise Controller and on each Proxy Controller: the --troubleshoot option and the --collectlogs option. A new script in OCDoctor, AssetCount.sh, gives you the same information as you see in the UI's Asset Counter tab when you run it with its standard parameter:

  # /var/opt/sun/xvm/OCDoctor/toolbox/AssetCount.sh standard

To drill down into the count, run the script with each of its other parameters: machine, agent, and all.

To see each Proxy Controller:

  # /var/opt/sun/xvm/OCDoctor/toolbox/AssetCount.sh machine

To see each Proxy Controller by type of access:

  # /var/opt/sun/xvm/OCDoctor/toolbox/AssetCount.sh agent

To put the output together in one long listing, use the all parameter:

  # /var/opt/sun/xvm/OCDoctor/toolbox/AssetCount.sh all

The resulting output for a smallish datacenter, just 72 access points, looks like this:

EC 72
Proxy  Assets  Zones  Ldoms  OVMGuests  Servers  Storages  Switches  ExadataCells  MSeriesChassis  MSeriesD
-------------------------------------------------------------------------------------------------
pc4    32      5      25     0          2        0         0         0             0               0
pc1    28      0      26     0          2        0         0         0             0               0
pc0    12      2      4      0          6        0         0         0             0               0

Proxy  Agents  Agentless  SPs
--------------------------
pc4    25      2          0
pc1    1       1          0
pc0    5       5          5

Proxy 32 pc4
Zones 5 S11zone101 S11zone102 S11zone100 S11zone103 S11zone104
Ldoms 25 stdldom21 stdldom34 stdldom36 stdldom22 stdldom45 stdldom47 ...
OVMGuests 0
Servers 2 10.187.70.169 pc4
...
Proxy 28 pc1
Zones 0
Ldoms 26 stdldom21 stdldom34 stdldom36 stdldom22 stdldom45 stdldom47 ...
OVMGuests 0
Servers 2 10.187.70.171 pc1

By the way, if you're wondering at what point the number of assets affects performance, don't worry - Ops Center tells you if it's feeling the strain. When the Enterprise Controller exceeds 2700 access points, you'll get a Warning incident. At 3000 access points, you'll get a Critical incident. For a Proxy Controller, you'll get a Warning when it exceeds 450 access points and a Critical incident at 500 access points. So there's time and some headroom to make adjustments.
The Sizing and Performance Guide has more information, and if you want to adjust the number of access points each Proxy Controller is handling, see Rebalancing Assets.
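If you'd like to keep an eye on those thresholds between OCDoctor runs, a small wrapper around AssetCount.sh can do the arithmetic for you. This is just a sketch: it assumes the standard output starts with an "EC <count>" line, as in the sample above, and it reuses the warning and critical levels mentioned in this post.

  #!/bin/sh
  # Sketch: check the Enterprise Controller access point count against the
  # warning (2700) and critical (3000) levels mentioned above.
  # Assumes AssetCount.sh standard prints a line starting with "EC <count>".
  COUNT=`/var/opt/sun/xvm/OCDoctor/toolbox/AssetCount.sh standard | awk '/^EC /{print $2; exit}'`
  [ -z "$COUNT" ] && { echo "Could not read the access point count"; exit 1; }
  if [ "$COUNT" -ge 3000 ]; then
      echo "Critical: $COUNT access points on the Enterprise Controller"
  elif [ "$COUNT" -ge 2700 ]; then
      echo "Warning: $COUNT access points on the Enterprise Controller"
  else
      echo "OK: $COUNT access points on the Enterprise Controller"
  fi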


FAQ

What are JMX Credentials and What is Ops Center Doing With Them?

When you discover a Solaris Cluster, you're asked to provide ssh credentials and JMX credentials. You already know the ssh credentials, but what about those JMX credentials? They're for the agent on the cluster's global node. The agent uses JMX, so they're called JMX credentials. Think of them as agent credentials. The only thing these credentials do is allow the agent on the global node to respond to the Enterprise Controller. Without the JMX creds, you can discover and manage the cluster server itself, but nothing else. If you look in the log file, you'll see a message like "JMXMP provider exception java.net.ConnectException: Connection refused." With the JMX creds, Ops Center authenticates the agent, connects to the agent, and acquires all the agent's information about the global node. JMX credentials can be anything convenient for you, like cluster1 and cluster2, and simple passwords. For all global nodes to use the same credentials, create one set in the discovery profile and run the discovery job. However, if for some reason you need to use a unique set of credentials for each global node, create each set of credentials in a credential profile and then run a discovery job for each global node. You'll use the same discovery profile but change the credential profile for each job. You can still keep it simple, like cluster1node1 and cluster1node2. For more information, take a look at the Oracle Solaris Cluster section of the Configuration Reference.


General

Ops Center Communication Issue with Some Java Versions

Some Ops Center users have run into an issue recently with certain Java versions on managed assets. Basically, if you upgrade to one of the problematic versions, it can disrupt communication between the different parts of Ops Center, causing assets to show as unreachable and jobs to fail. The problem children are:

Java SE JDK and JRE 6 Update 101 or later
Java SE JDK and JRE 7 Update 85 or later
Java SE JDK and JRE 8 Update 51 or later

There's a simple workaround, which you can apply before or after you've hit the issue, using the OCDoctor script (by default, it's in the /var/opt/sun/xvm/OCDoctor/ directory). On the EC, run the OCDoctor script with the --update option (the fix requires version 4.51 or later). Then run it again with the --troubleshoot and --fix options. If you're using JDK 6, you then need to download a new Agent Controller bundle based on the version of Ops Center that you're using:

Enterprise Manager Ops Center 12.2.1: https://updates.oracle.com/OCDoctor/152112-01.tar
Enterprise Manager Ops Center 12.2.2: https://updates.oracle.com/OCDoctor/152113-01.tar
Enterprise Manager Ops Center 12.3.0: https://updates.oracle.com/OCDoctor/152114-01.tar

Finally, use the same OCDoctor options (--update, then --troubleshoot --fix) on the Proxy Controllers. This will either clear up the issue or prevent it from appearing at all.
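Put together, the workaround looks something like the following. This is a sketch of the steps described above, assuming the OCDoctor script is named OCDoctor.sh; the option names and the JDK 6 bundle URLs come from this post.

  # On the Enterprise Controller (sketch; assumes the script name OCDoctor.sh):
  cd /var/opt/sun/xvm/OCDoctor/
  ./OCDoctor.sh --update                 # the fix needs OCDoctor 4.51 or later
  ./OCDoctor.sh --troubleshoot --fix

  # Only if the affected assets run JDK 6: download the Agent Controller bundle
  # that matches your release, for example for Ops Center 12.3.0:
  # wget https://updates.oracle.com/OCDoctor/152114-01.tar

  # Then repeat the same two steps on each Proxy Controller:
  ./OCDoctor.sh --update
  ./OCDoctor.sh --troubleshoot --fix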


FAQ

Best Practices for EC Backups

Ops Center has a backup and recovery feature for the Enterprise Controller - you can save the current EC state as a backup file, and restore the EC to that state using the file. It's an important feature, but I've seen a few folks asking for guidelines about how to use it. Every site is different, but here are some broad guidelines that we recommend:

Perform a backup at least once a week, and keep at least two backup files.

Once you've made a backup file, store it offsite or on a NAS share - don't keep it locally on the EC.

You can use a cron job to automate regular backups. Here's a sample cron job to perform a backup (a fuller sketch that also copies the file to a NAS share appears at the end of this post):

  0 0 * * 0 /opt/SUNWxvmoc/bin/ecadm backup -o /bigdisk/oc_backups -l /bigdisk/oc_backups

Remember that some files and directories are not part of the EC backup for size reasons: isos, flars, firmware images, and Solaris 8-10 and Linux patches. Firmware images are automatically re-downloaded in Connected Mode. Isos and flars can be re-imported. You can also do separate backups of your Ops Center libraries via NetBackup or the like.

Some folks have also asked if there's a good way to test the backup and recovery procedure, to make sure it's working. Well, there's really only one way to do it - do an EC backup, and also back up or clone the file systems. Then, uninstall and reinstall the EC, restore from the backup, and make sure that everything looks right.

Take a look at the Backup and Recovery chapter for more information about how to perform a backup.
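If you want to automate the offsite copy as well, you can wrap the backup command in a small script and point your cron job at it. This is only a sketch: the NAS mount point and the two-copy retention are assumptions, and the ecadm path is the one from the sample cron job above.

  #!/bin/sh
  # Sketch: run an EC backup, then copy the newest file to a NAS share.
  BACKUP_DIR=/bigdisk/oc_backups
  NAS_DIR=/net/nas/oc_backups        # hypothetical NAS mount point

  /opt/SUNWxvmoc/bin/ecadm backup -o "$BACKUP_DIR" -l "$BACKUP_DIR" || exit 1

  # Copy the newest backup file to the NAS share, keeping the two most recent copies there.
  LATEST=`ls -t "$BACKUP_DIR" | head -1`
  cp "$BACKUP_DIR/$LATEST" "$NAS_DIR/" && \
      ls -t "$NAS_DIR" | tail +3 | while read f; do rm -f "$NAS_DIR/$f"; done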


FAQ

Blank Credentials and Monitoring Delays

I saw a couple of unrelated but short questions this week, so I thought I'd answer them both.

"I tried to edit the credentials used by the Enterprise Controller to access My Oracle Support, but when I open the credentials window, the password field was blank, even though there should be an existing password. What's going on?"

Naturally, once you've entered a password, we don't want to send that password back to the UI, because it'd be a security risk. In 12.3, though, the asterisks that would normally indicate an existing password aren't showing up. So, your credentials are still there, and they won't be changed unless you specifically enter new credentials and save them.

"I was trying to make sure that file system monitoring was working correctly on a managed system. I made a file to push utilization up past the 90% threshold, which should've generated an incident. However, the incident didn't show up for almost an hour. Why is there a delay?"

You can edit the Alert Monitoring Rule Parameters in Ops Center. However, the thresholds that you set have to be maintained for a certain amount of time before an alert is generated. For a file system utilization alert, the default value for this delay is 45 minutes. You can edit the monitoring rule to change this delay if needed.
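On the second question, if you want to reproduce the test yourself, something like this on the managed Solaris system will push a file system over the threshold so you can watch for the incident. It's just a sketch: the file system and file size are examples, and you'll still have to wait out the delay unless you shorten it in the monitoring rule.

  # Check current utilization of the file system you want to test (example: /export).
  df -h /export

  # Create a throwaway file big enough to push utilization past 90% (size is an example).
  mkfile 8g /export/oc_monitor_test

  # Confirm you're over the threshold, then wait for the incident.
  df -h /export

  # Clean up afterwards.
  rm /export/oc_monitor_test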


FAQ

Installing Ops Center in a Zone

I got a question recently about an Ops Center deployment: "I'm looking at installing an Enterprise Controller, co-located Proxy Controller, and database inside an Oracle Solaris 11 Zone. Is this doable, and are there any special things I should do to make it work?"

You can install all of these components in an S11 zone. There are a few things that you should do beforehand:

- Limit the ZFS ARC cache size in the global zone. Without a limit, the ZFS ARC can consume memory that should be released. The recommended size of the ZFS ARC cache given in the Sizing and Performance guide is equal to (Physical memory - Enterprise Controller heap size - Database memory) x 70%; there's a worked example of this formula at the end of this post. For example:

  # limit ZFS memory consumption, example (tune memory to your system):
  echo "set zfs:zfs_arc_max=1024989270" >> /etc/system
  echo "set rlim_fd_cur=1024" >> /etc/system
  # set Oracle DB FDs
  projmod -s -K "process.max-file-descriptor=(basic,1024,deny)" user.root

- Make sure the global zone has enough swap space configured. The recommended swap space for an EC is twice the physical memory if the physical memory is less than 16 GB, or 16 GB otherwise. For example:

  volsize=$(zfs get -H -o value volsize rpool/swap)
  volsize=${volsize%G}
  volsize=${volsize%%.*}
  if (( volsize < 16 )); then
      zfs set volsize=16G rpool/swap
  else
      echo "Swap size sufficient at: ${volsize}G"
  fi
  zfs list

- In the non-global zone that you're using for the install, set the ulimit:

  echo "ulimit -Sn 1024" >> /etc/profile

Finally, run the OCDoctor to check the prerequisites before you install.
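To make the ARC formula concrete, here's a sketch of how you might arrive at a value before writing it to /etc/system. The memory, heap, and database figures are examples only, so plug in your own.

  #!/bin/ksh
  # Sketch: zfs_arc_max = (physical memory - EC heap - database memory) x 70%.
  # All sizes in bytes; the figures below are examples only.
  PHYS_MEM=$((32 * 1024 * 1024 * 1024))   # 32 GB physical memory
  EC_HEAP=$((10 * 1024 * 1024 * 1024))    # 10 GB Enterprise Controller heap
  DB_MEM=$((8 * 1024 * 1024 * 1024))      # 8 GB database memory

  ARC_MAX=$(( (PHYS_MEM - EC_HEAP - DB_MEM) * 70 / 100 ))
  echo "Calculated zfs_arc_max: $ARC_MAX bytes"

  # Then, in the global zone, append the limit to /etc/system (takes effect after a reboot):
  # echo "set zfs:zfs_arc_max=$ARC_MAX" >> /etc/system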


General

Using MPxIO and Veritas in Ops Center

Hey, folks. This is a guest post by Doug Schwabauer about using MPxIO and Veritas in Ops Center. It's well worth the read. Enterprise Manager Ops Center has a rich feature set for Oracle VM Server for SPARC (LDOMs) technology. You can provision control domains, service domains, and LDOM guests, control the size and properties of the guests, add networks and storage to guests, create server pools, and so on. The foundation for many of these features is called a Storage Library. In Ops Center, a Storage Library can be Fibre Channel based, iSCSI based, or NAS based. For FC and iSCSI-based LUNs, Ops Center only recognizes LUNs that are presented to the Solaris host via MPxIO, the built-in multipathing software for Solaris. LUNs that are presented via other MP software, such as Symantec DMP, EMC PowerPath, etc., are not recognized.

Sometimes users do not have the option to use MPxIO, or may choose not to use MPxIO for other reasons, such as use cases where SCSI3 Persistent Reservations (SCSI3 PR) are required. It is possible to have a mix of MPxIO-managed LUNs and other LUNs, and to mix and match the LDOM guests and storage libraries for different purposes. Below are two such use cases. In both, the user wanted to use Veritas Cluster Server (VCS) to cluster selected applications running within LDOMs that are managed with Ops Center. However, the cluster requires I/O fencing via SCSI3 PR. In this scenario, storage devices cannot be presented via MPxIO, since SCSI3 PR requires direct access to storage from inside the guest OS, and MPxIO does not provide that capability. Therefore, the user thought they would have to choose between not using Ops Center and not using VCS. We found and presented a "middle road" solution, where the user is able to do both - use Ops Center for the majority of their LDOM/OVM Server for SPARC environment, but still use Veritas Dynamic Multi-Pathing (DMP) software to manage the disk devices used for the data protected by the cluster.

In both use cases, the hardware is the same:

2 T4-2 servers, each with 4 FC cards and 2 ports per card
Sun Storedge 6180 FC LUNs presented to both hosts
One primary service domain and one alternate service domain (a root complex domain)
Each service domain sees two of the 4 FC cards.

See the following blog posts for more details on setting up alternate service domains:

https://blogs.oracle.com/jsavit/entry/availability_best_practices_availability_using
https://blogs.oracle.com/jsavit/entry/availability_best_practices_example_configuring

In our environment, the primary domain owns the cards in slots 4 and 6, and the alternate domain owns the cards in slots 1 and 9. (Refer to the System Service Manual for system schematics and bus/PCI layouts.) A user can control which specific cards, and even which ports on a card, use MPxIO and which don't. You can either enable MPxIO globally and then disable it on certain ports, or disable MPxIO globally and then enable it on certain ports. Either way accomplishes the same thing. See the Enabling or Disabling Multipathing on a Per-Port Basis document for more information.
For example:

root@example:~# tail /etc/driver/drv/fp.conf
# "target-port-wwn,lun-list"
#
# To prevent LUNs 1 and 2 from being configured for target
# port 510000f010fd92a1 and target port 510000e012079df1, set:
#
# pwwn-lun-blacklist=
# "510000f010fd92a1,1,2",
# "510000e012079df1,1,2";
mpxio-disable="no";      <---------------------- Enable MPxIO globally
name="fp" parent="/pci@400/pci@2/pci@0/pci@0/SUNW,qlc@0" port=0 mpxio-disable="yes";  <--- Disable on port

root@example:~# ls -l /dev/cfg
total 21
...
lrwxrwxrwx   1 root     root          60 Feb 13 12:51 c3 -> ../../devices/pci@400/pci@1/pci@0/pci@8/SUNW,qlc@0/fp@0,0:fc
lrwxrwxrwx   1 root     root          62 Feb 13 12:51 c4 -> ../../devices/pci@400/pci@1/pci@0/pci@8/SUNW,qlc@0,1/fp@0,0:fc
lrwxrwxrwx   1 root     root          60 Feb 13 12:51 c5 -> ../../devices/pci@400/pci@2/pci@0/pci@0/SUNW,qlc@0/fp@0,0:fc
lrwxrwxrwx   1 root     root          62 Feb 13 12:51 c6 -> ../../devices/pci@400/pci@2/pci@0/pci@0/SUNW,qlc@0,1/fp@0,0:fc
...

Therefore "c5" on the example host will not be using MPxIO. Similar changes were made for the other 3 service domains. Now, for the guest vdisks that will not use MPxIO, the backend devices used were just raw /dev/dsk names - no multipathing software is involved. You will see a mix of these below - the c0t60080E... device names use MPxIO, and the c5t.../c3t... device names do not:

VDS
    NAME             LDOM         VOLUME          OPTIONS   MPGROUP        DEVICE
    primary-vds0     primary      aa-guest2-vol0            aa-guest2      /dev/rdsk/c0t60080E5000183F120000107754E60374d0s2
                                  quorum1                                  /dev/dsk/c5t20140080E5184632d12s2
                                  quorum2                                  /dev/dsk/c5t20140080E5184632d13s2
                                  quorum3                                  /dev/dsk/c5t20140080E5184632d14s2
                                  clusterdata1                             /dev/dsk/c5t20140080E5184632d8s2
                                  clusterdata2                             /dev/dsk/c5t20140080E5184632d7s2
                                  clusterdata3                             /dev/dsk/c5t20140080E5184632d10s2
                                  aa-guest3-vol0            aa-guest3      /dev/dsk/c0t60080E5000183F120000138B5522A1C4d0s2

VDS
    NAME             LDOM         VOLUME          OPTIONS   MPGROUP        DEVICE
    alternate-vds0   example-a    aa-guest2-vol0            aa-guest2      /dev/rdsk/c0t60080E5000183F120000107754E60374d0s2
                                  clusterdata3                             /dev/dsk/c3t20140080E5184632d10s2
                                  clusterdata2                             /dev/dsk/c3t20140080E5184632d7s2
                                  clusterdata1                             /dev/dsk/c3t20140080E5184632d8s2
                                  quorum3                                  /dev/dsk/c3t20140080E5184632d14s2
                                  quorum2                                  /dev/dsk/c3t20140080E5184632d13s2
                                  quorum1                                  /dev/dsk/c3t20140080E5184632d12s2
                                  aa-guest3-vol0            aa-guest3      /dev/rdsk/c0t60080E5000183F120000138B5522A1C4d0s2

Here you can see in Ops Center what the alternate domain's virtual disk services look like:
From the guest LDOM perspective, we see 12 data disks (c1d0 is the boot disk), which is really 2 paths to 6 LUNs - one path from the primary domain and one from the alternate:

AVAILABLE DISK SELECTIONS:
       0. c1d0 <SUN-SUN_6180-0784-100.00GB>
          /virtual-devices@100/channel-devices@200/disk@0
       1. c1d1 <SUN-SUN_6180-0784-500.00MB>
          /virtual-devices@100/channel-devices@200/disk@1
       2. c1d2 <SUN-SUN_6180-0784-500.00MB>
          /virtual-devices@100/channel-devices@200/disk@2
       3. c1d3 <SUN-SUN_6180-0784-500.00MB>
          /virtual-devices@100/channel-devices@200/disk@3
       4. c1d4 <SUN-SUN_6180-0784-500.00MB>
          /virtual-devices@100/channel-devices@200/disk@4
       5. c1d5 <SUN-SUN_6180-0784-500.00MB>
          /virtual-devices@100/channel-devices@200/disk@5
       6. c1d6 <SUN-SUN_6180-0784 cyl 51198 alt 2 hd 64 sec 64>
          /virtual-devices@100/channel-devices@200/disk@6
       7. c1d7 <SUN-SUN_6180-0784 cyl 51198 alt 2 hd 64 sec 64>
          /virtual-devices@100/channel-devices@200/disk@7
       8. c1d8 <SUN-SUN_6180-0784 cyl 25598 alt 2 hd 64 sec 64>
          /virtual-devices@100/channel-devices@200/disk@8
       9. c1d9 <SUN-SUN_6180-0784 cyl 25598 alt 2 hd 64 sec 64>
          /virtual-devices@100/channel-devices@200/disk@9
      10. c1d10 <SUN-SUN_6180-0784 cyl 25598 alt 2 hd 64 sec 64>
          /virtual-devices@100/channel-devices@200/disk@a
      11. c1d11 <SUN-SUN_6180-0784 cyl 25598 alt 2 hd 64 sec 64>
          /virtual-devices@100/channel-devices@200/disk@b
      12. c1d12 <SUN-SUN_6180-0784-500.00MB>
          /virtual-devices@100/channel-devices@200/disk@c

Again from Ops Center, you can click on the Storage tab of the guest and see that the MPxIO-enabled LUN is known to be "Shared" by the hosts in Ops Center, while the other LUNs are not. At this point, since VCS was going to be installed on the LDOM OS and a cluster built, the Veritas stack, including VxVM and VxDMP, was enabled on the guest LDOMs to correlate the two paths from the primary and alternate domains into one path.
For example:

root@aa-guest1:~# vxdisk list
DEVICE       TYPE            DISK         GROUP        STATUS
sun6180-0_0  auto:ZFS        -            -            ZFS
sun6180-0_1  auto:cdsdisk    sun6180-0_1  data_dg      online shared
sun6180-0_2  auto:cdsdisk    -            -            online
sun6180-0_3  auto:cdsdisk    -            -            online
sun6180-0_4  auto:cdsdisk    -            -            online
sun6180-0_5  auto:cdsdisk    sun6180-0_5  data_dg      online shared
sun6180-0_6  auto:cdsdisk    sun6180-0_6  data_dg      online shared

root@aa-guest1:~# vxdisk list sun6180-0_6
Device:    sun6180-0_6
devicetag: sun6180-0_6
type:      auto
clusterid: aa-guest
disk:      name=sun6180-0_6 id=1427384525.22.aa-guest1
group:     name=data_dg id=1427489834.14.aa-guest1
info:      format=cdsdisk,privoffset=256,pubslice=2,privslice=2
flags:     online ready private autoconfig shared autoimport imported
pubpaths:  block=/dev/vx/dmp/sun6180-0_6s2 char=/dev/vx/rdmp/sun6180-0_6s2
guid:      {a72e068c-d3ce-11e4-b9a0-00144ffe28bc}
udid:      SUN%5FSUN%5F6180%5F60080E5000184108000000004C2CF217%5F60080E5000184632000056B354E6025F
site:      -
version:   3.1
iosize:    min=512 (bytes) max=2048 (blocks)
public:    slice=2 offset=65792 len=104783616 disk_offset=0
private:   slice=2 offset=256 len=65536 disk_offset=0
update:    time=1430930621 seqno=0.15
ssb:       actual_seqno=0.0
headers:   0 240
configs:   count=1 len=48144
logs:      count=1 len=7296
Defined regions:
 config   priv 000048-000239[000192]: copy=01 offset=000000 enabled
 config   priv 000256-048207[047952]: copy=01 offset=000192 enabled
 log      priv 048208-055503[007296]: copy=01 offset=000000 enabled
 lockrgn  priv 055504-055647[000144]: part=00 offset=000000
Multipathing information:
numpaths:   2
c1d11s2         state=enabled   type=secondary
c1d9s2          state=enabled   type=secondary
connectivity: aa-guest1 aa-guest2

root@aa-guest1:~# vxdmpadm getsubpaths dmpnodename=sun6180-0_6
NAME         STATE[A]   PATH-TYPE[M] CTLR-NAME          ENCLR-TYPE   ENCLR-NAME    ATTRS
========================================================================================
c1d11s2      ENABLED(A)  SECONDARY    c1                 SUN6180-     sun6180-0        -
c1d9s2       ENABLED(A)  SECONDARY    c1                 SUN6180-     sun6180-0        -

In this way, the two guests that were going to be clustered together are now ready for VCS installation and configuration.

The second use case changes a little bit in that both MPxIO and Veritas DMP are used in the primary and alternate domains, and DMP is still used in the guest as well. The advantage of this is that there is more redundancy and I/O throughput available at the service domain level, because multipathed devices are used for the guest virtual disk services, instead of just the raw /dev/dsk/c#t#d# devices.
Now the disk services look something like this. The aa-guest2-vol0 and aa-guest3-vol0 backend devices are the MPxIO-based ones (the c0t60080E... device names), while the quorum and clusterdata devices are Veritas DMP-based (the /dev/vx/dmp/... names):

VDS
    NAME             LDOM         VOLUME          OPTIONS   MPGROUP        DEVICE
    primary-vds0     primary      aa-guest2-vol0            aa-guest2      /dev/rdsk/c0t60080E5000183F120000107754E60374d0s2
                                  aa-guest3-vol0            aa-guest3      /dev/dsk/c0t60080E5000183F120000138B5522A1C4d0s2
                                  quorum1                                  /dev/vx/dmp/sun6180-0_6s2
                                  quorum2                                  /dev/vx/dmp/sun6180-0_7s2
                                  quorum3                                  /dev/vx/dmp/sun6180-0_8s2
                                  clusterdata1                             /dev/vx/dmp/sun6180-0_12s2
                                  clusterdata2                             /dev/vx/dmp/sun6180-0_5s2
                                  clusterdata3                             /dev/vx/dmp/sun6180-0_14s2

VDS
    NAME             LDOM         VOLUME          OPTIONS   MPGROUP        DEVICE
    alternate-vds0   example-a    aa-guest2-vol0            aa-guest2      /dev/rdsk/c0t60080E5000183F120000107754E60374d0s2
                                  aa-guest3-vol0            aa-guest3      /dev/rdsk/c0t60080E5000183F120000138B5522A1C4d0s2
                                  quorum1                                  /dev/vx/dmp/sun6180-0_6s2
                                  quorum2                                  /dev/vx/dmp/sun6180-0_7s2
                                  quorum3                                  /dev/vx/dmp/sun6180-0_8s2
                                  clusterdata1                             /dev/vx/dmp/sun6180-0_12s2
                                  clusterdata2                             /dev/vx/dmp/sun6180-0_5s2
                                  clusterdata3                             /dev/vx/dmp/sun6180-0_14s2

Again, the advantage here is that two paths to the same LUN are being presented from each service domain, so there is additional redundancy and throughput available.
You can see the two paths:

root@example:~# vxdisk list sun6180-0_5
Device:    sun6180-0_5
devicetag: sun6180-0_5
type:      auto
clusterid: aa-guest
disk:      name= id=1427384268.11.aa-guest1
group:     name=data_dg id=1427489834.14.aa-guest1
info:      format=cdsdisk,privoffset=256,pubslice=2,privslice=2
flags:     online ready private autoconfig shared autoimport
pubpaths:  block=/dev/vx/dmp/sun6180-0_5s2 char=/dev/vx/rdmp/sun6180-0_5s2
guid:      {0e62396e-d3ce-11e4-b9a0-00144ffe28bc}
udid:      SUN%5FSUN%5F6180%5F60080E5000184108000000004C2CF217%5F60080E5000183F120000107F54E603CA
site:      -
version:   3.1
iosize:    min=512 (bytes) max=2048 (blocks)
public:    slice=2 offset=65792 len=104783616 disk_offset=0
private:   slice=2 offset=256 len=65536 disk_offset=0
update:    time=1430930621 seqno=0.15
ssb:       actual_seqno=0.0
headers:   0 240
configs:   count=1 len=48144
logs:      count=1 len=7296
Defined regions:
 config   priv 000048-000239[000192]: copy=01 offset=000000 enabled
 config   priv 000256-048207[047952]: copy=01 offset=000192 enabled
 log      priv 048208-055503[007296]: copy=01 offset=000000 enabled
 lockrgn  priv 055504-055647[000144]: part=00 offset=000000
Multipathing information:
numpaths:   2
c5t20140080E5184632d7s2 state=enabled   type=primary
c5t20250080E5184632d7s2 state=enabled   type=secondary

The Ops Center view of the virtual disk services is much the same. Now the cluster can be set up just as it was before. To the guest, the virtual disks have not changed - just the back-end presentation of the LUNs has changed. This was transparent to the guest.


FAQ

Providing Contact Info for ASR

Ops Center includes a feature called Auto Service Request (ASR), which can automatically file service requests for managed hardware. However, I've seen a bit of confusion about how to get it running.

First, the prereqs - to get ASR running, you need to be in Connected Mode, and you need to have a set of My Oracle Support (MOS) credentials entered in the Edit Authentications window. Your MOS credentials have to be associated with a customer service identifier (CSI) with rights over the hardware that you want to be enabled for ASR.

Once you've got that, you'll click the Edit ASR Contact Information action in the Administration section. This opens a window where you specify the default contact information for your assets, which is used for all ASRs by default. If you have assets that need separate contact information, you can specify separate ASR contact information for an asset or a group of assets. That info is used in place of the default contact info.

Finally, once you've got the contact info in the system, you click Enable ASR. This action launches a job to enable the assets for ASR, and it attempts to enable new assets for ASR when they're discovered. From then on, if a critical incident occurs on the hardware, ASR should create a service request for it.

Take a look at the Auto Service Request chapter of the Admin Guide for more information.


Document

Access Point counts in the OCDoctor

When you're trying to figure out questions of scaling in Ops Center, it's important to be able to tell exactly how much load the different parts of the infrastructure are handling. In Ops Center, the relevant number is the access point count. An access point is a connection between a Proxy Controller and a managed asset. We don't just count the number of assets directly because, if different parts of an asset are managed by different Proxy Controllers, that puts more load on the system.

There's a tool in the OCDoctor's toolbox directory that lets you count the access points in your environment. The AssetCount.sh script gives you the total number of access points managed by the Enterprise Controller, and gives additional information depending on which option you use:

The standard option shows the number of access points for each Proxy Controller - both the total and a detailed breakdown by asset category.
The machine option gives a list of the access points on each Proxy Controller in machine-readable format.
The agent option shows, for each Proxy Controller, how many assets are agent-managed, how many are agentless, and how many are SPs.

The Scaling and Performance Guide explains how to use the AssetCount.sh script. You can find it in the version 12.2.2 documentation library.
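For reference, the invocations look like this (the script lives in the OCDoctor toolbox directory, as shown in the "What's In Your Datacenter" post above):

  # Totals plus a per-Proxy-Controller breakdown by asset category:
  /var/opt/sun/xvm/OCDoctor/toolbox/AssetCount.sh standard

  # List of the access points on each Proxy Controller in machine-readable form:
  /var/opt/sun/xvm/OCDoctor/toolbox/AssetCount.sh machine

  # Per-Proxy-Controller counts of agent-managed, agentless, and SP access points:
  /var/opt/sun/xvm/OCDoctor/toolbox/AssetCount.sh agent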


Document

Sizing and Performance Guide

When you're planning an Ops Center deployment, planning to expand a datacenter where Ops Center is installed, or looking to optimize your Ops Center deployment, it's vital to have information about scaling and performance. You need to know whether your new systems will need another Proxy Controller, or whether the system you're planning on using for your Enterprise Controller is beefy enough. We've added a Sizing and Performance Guide to the Ops Center library, to help you answer these sorts of questions for your environment:

The Resource Utilization chapter discusses the relative resource usage of common Ops Center uses, such as OS provisioning, update management, and virtualization.
The Scaling and Performance Guidelines chapter provides detailed information about the resources used by, and the scaling capabilities of, Ops Center components such as the Enterprise Controller, Proxy Controllers, database, virtualization controllers, and networks.
The Reference Systems chapter provides several reference system specifications for you to use in your planning.
The Report Service Configuration Properties appendix explains how to edit the reporting properties, or disable reporting entirely, to improve performance.

You can find this document, along with the rest of the documentation, in the Ops Center 12.2.2 documentation library.


FAQ

Managing user access in multiple sites

An Ops Center user who's setting up their environment sent in a question about their users: "I'm looking to manage three different data centers from one Enterprise Controller instance. However, the three data centers have different administrators. How can I make sure that each administrator can see and manage only the resources that they're supposed to?"

The answer here is that you can use Ops Center's asset groups, combined with its fine-grained roles capabilities, to control which users can see and do what. First, you create a new asset group in the Assets section of the UI. In this example, I'm creating a group for one of the three data centers. Once you've created the group, you can add the correct assets to it by selecting the assets and clicking Add Asset to Group.

Now that you have the assets for one of the datacenters grouped together, you add that admin to Ops Center. Then you'll select that user and click the Manage User Roles icon. When the wizard comes up, you make sure they have the correct roles, then deselect the "Use the default role associations" checkbox. When you click Next, you select which groups the roles should apply to. So, for this user, we can apply their Asset Admin role only to the Data Center A group.

And there you have it. Rinse and repeat for other groups and users, and each user will be able to see and manage only the correct assets. For more information, check out the Asset Management and User and Role Management chapters.

