Thursday Jul 09, 2015

New Virtualization Icons

There's a change in the UI that I wanted to talk about, since it's been confusing some people after they upgrade to version 12.3. The icons that represent the different virtualization types, such as Oracle Solaris Zones or Logical Domains, have changed. Here are the new icons:

We made this change because the number of supported virtualization types has grown, particularly now that Kernel Zones are supported. The new icons make it easier to tell the types apart, so you know at a glance what sort of system you're dealing with.

The other new features in version 12.3 are discussed in the What's New document.

Thursday Jun 18, 2015

Using MPxIO and Veritas in Ops Center

Hey, folks. This is a guest post by Doug Schwabauer about using MPxIO and Veritas in Ops Center. It's well worth the read.

Enterprise Manager Ops Center has a rich feature set for Oracle VM Server for SPARC (LDOMs) technology. You can provision control domains, service domains, and LDOM guests, control the size and properties of the guests, add networks and storage to guests, create Server Pools, and more. The foundation for many of these features is the Storage Library. In Ops Center, a Storage Library can be Fibre Channel-based, iSCSI-based, or NAS-based.

For FC and iSCSI-based LUNs, Ops Center only recognizes LUNs that are presented to the Solaris host via MPxIO, the built-in multipathing software for Solaris. LUNs that are presented via other multipathing software, such as Symantec DMP, EMC PowerPath, and so on, are not recognized.

Sometimes users do not have the option to use MPxIO, or may choose not to use MPxIO for other reasons, such as use cases where SCSI3 Persistent Reservations (SCSI3 PR) are required.

It is possible to have a mix of MPxIO-managed LUNs and other LUNs, and to mix and match the LDOM guests and storage libraries for different purposes.

Below are two such use cases. In both, the user wanted to use Veritas Cluster Server (VCS) to cluster select applications running within LDOMs that are managed with Ops Center. However, the cluster requires I/O fencing via SCSI3 PR. In this scenario, storage devices cannot be presented via MPxIO, because SCSI3 PR requires direct access to storage from inside the guest OS, and MPxIO does not provide that capability. The user therefore thought they would have to choose between not using Ops Center and not using VCS.

We found and presented a "middle road" solution, where the user is able to do both - use Ops Center for the majority of their LDOM/OVM Server for SPARC environment, but still use Veritas Dynamic Multi-Pathing (DMP) software to manage the disk devices used for the data protected by the cluster.

In both use cases, the hardware is the same:

  • Two SPARC T4-2 servers, each with 4 FC cards and 2 ports per card
  • Sun StorEdge 6180 FC LUNs presented to both hosts
  • One primary service domain and one alternate service domain (a root complex domain) on each server
  • Each service domain sees two of the 4 FC cards.

See the following blog posts for more details on setting up Alternate Service Domains:

In our environment, the primary domain owns the cards in Slots 4 and 6, and the alternate domain owns the cards in Slots 1 and 9.

(Refer to the server's System Service Manual for system schematics and bus/PCI layouts.)

You can control which specific cards, and even which individual ports on a card, use MPxIO and which don't.

You can either enable MPxIO globally, and then just disable it on certain ports, or disable MPxIO globally, and then just enable it on certain ports. Either way will accomplish the same thing.

See the Enabling or Disabling Multipathing on a Per-Port Basis document for more information.

For example:

root@example:~# tail /etc/driver/drv/fp.conf
# "target-port-wwn,lun-list"
#
# To prevent LUNs 1 and 2 from being configured for target
# port 510000f010fd92a1 and target port 510000e012079df1, set:
#
# pwwn-lun-blacklist=
# "510000f010fd92a1,1,2",
# "510000e012079df1,1,2";
mpxio-disable="no";      <---------------------- Enable MPxIO globally
name="fp" parent="/pci@400/pci@2/pci@0/pci@0/SUNW,qlc@0" port=0 mpxio-disable="yes";  <--- Disable on port


root@example:~# ls -l /dev/cfg
total 21
.
.
.
lrwxrwxrwx   1 root     root          60 Feb 13 12:51 c3 -> ../../devices/pci@400/pci@1/pci@0/pci@8/SUNW,qlc@0/fp@0,0:fc
lrwxrwxrwx   1 root     root          62 Feb 13 12:51 c4 -> ../../devices/pci@400/pci@1/pci@0/pci@8/SUNW,qlc@0,1/fp@0,0:fc
lrwxrwxrwx   1 root     root          60 Feb 13 12:51 c5 -> ../../devices/pci@400/pci@2/pci@0/pci@0/SUNW,qlc@0/fp@0,0:fc
lrwxrwxrwx   1 root     root          62 Feb 13 12:51 c6 -> ../../devices/pci@400/pci@2/pci@0/pci@0/SUNW,qlc@0,1/fp@0,0:fc
.
.
.

Therefore "c5" on the example host will not be using MPxIO.

Similar changes were made for the other 3 service domains.
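
If you want to confirm which ports actually ended up under MPxIO control, a quick check along these lines should work. This is a sketch: the per-port settings in fp.conf only take effect after a reboot (stmsboot -u is the usual step after editing the driver configuration files), and mpathadm is the standard Solaris multipathing CLI.

root@example:~# stmsboot -u
root@example:~# mpathadm list lu

MPxIO-managed LUNs show up in the mpathadm output under their c0t<WWN>d0 names, while LUNs on the disabled port keep their per-port c5t... names in format output.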

Now, for the guest vdisks that will not use MPxIO, the backend devices are just raw /dev/dsk paths; no multipathing software is involved. You will see a mix of both types in the listings below: devices named with the long LUN WWN (c0t60080E...d0) are MPxIO-managed, while the c3 and c5 devices are single-path devices on the non-MPxIO ports:

VDS
    NAME             LDOM             VOLUME         OPTIONS          MPGROUP        DEVICE
    primary-vds0     primary          aa-guest2-vol0                  aa-guest2      /dev/rdsk/c0t60080E5000183F120000107754E60374d0s2
                                      quorum1                                        /dev/dsk/c5t20140080E5184632d12s2
                                      quorum2                                        /dev/dsk/c5t20140080E5184632d13s2
                                      quorum3                                        /dev/dsk/c5t20140080E5184632d14s2
                                      clusterdata1                                   /dev/dsk/c5t20140080E5184632d8s2
                                      clusterdata2                                   /dev/dsk/c5t20140080E5184632d7s2
                                      clusterdata3                                   /dev/dsk/c5t20140080E5184632d10s2
                                      aa-guest3-vol0                  aa-guest3      /dev/dsk/c0t60080E5000183F120000138B5522A1C4d0s2

VDS
    NAME             LDOM             VOLUME         OPTIONS          MPGROUP        DEVICE
    alternate-vds0   example-a    aa-guest2-vol0                  aa-guest2      /dev/rdsk/c0t60080E5000183F120000107754E60374d0s2
                                      clusterdata3                                   /dev/dsk/c3t20140080E5184632d10s2
                                      clusterdata2                                   /dev/dsk/c3t20140080E5184632d7s2
                                      clusterdata1                                   /dev/dsk/c3t20140080E5184632d8s2
                                      quorum3                                        /dev/dsk/c3t20140080E5184632d14s2
                                      quorum2                                        /dev/dsk/c3t20140080E5184632d13s2
                                      quorum1                                        /dev/dsk/c3t20140080E5184632d12s2
                                      aa-guest3-vol0                  aa-guest3      /dev/rdsk/c0t60080E5000183F120000138B5522A1C4d0s2
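
For reference, vdsdevs and vdisks like the quorum devices above would have been added with ldm commands along these lines. This is just a sketch using names from the listings; the guest name aa-guest1 is illustrative, and in practice the same LUN is added once from the primary domain (via its c5 path) and once from the alternate domain (via its c3 path):

root@example:~# ldm add-vdsdev /dev/dsk/c5t20140080E5184632d12s2 quorum1@primary-vds0
root@example:~# ldm add-vdisk quorum1 quorum1@primary-vds0 aa-guest1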

Here you can see in Ops Center what the Alternate Domain's virtual disk services look like:

From the guest LDOM's perspective, we see 12 data disks in addition to the boot disk (c1d0). These are really two paths to each of 6 LUNs, one path from the primary domain and one from the alternate domain:

AVAILABLE DISK SELECTIONS:
       0. c1d0 <SUN-SUN_6180-0784-100.00GB>
          /virtual-devices@100/channel-devices@200/disk@0
       1. c1d1 <SUN-SUN_6180-0784-500.00MB>
          /virtual-devices@100/channel-devices@200/disk@1
       2. c1d2 <SUN-SUN_6180-0784-500.00MB>
          /virtual-devices@100/channel-devices@200/disk@2
       3. c1d3 <SUN-SUN_6180-0784-500.00MB>
          /virtual-devices@100/channel-devices@200/disk@3
       4. c1d4 <SUN-SUN_6180-0784-500.00MB>
          /virtual-devices@100/channel-devices@200/disk@4
       5. c1d5 <SUN-SUN_6180-0784-500.00MB>
          /virtual-devices@100/channel-devices@200/disk@5
       6. c1d6 <SUN-SUN_6180-0784 cyl 51198 alt 2 hd 64 sec 64>
          /virtual-devices@100/channel-devices@200/disk@6
       7. c1d7 <SUN-SUN_6180-0784 cyl 51198 alt 2 hd 64 sec 64>
          /virtual-devices@100/channel-devices@200/disk@7
       8. c1d8 <SUN-SUN_6180-0784 cyl 25598 alt 2 hd 64 sec 64>
          /virtual-devices@100/channel-devices@200/disk@8
       9. c1d9 <SUN-SUN_6180-0784 cyl 25598 alt 2 hd 64 sec 64>
          /virtual-devices@100/channel-devices@200/disk@9
      10. c1d10 <SUN-SUN_6180-0784 cyl 25598 alt 2 hd 64 sec 64>
          /virtual-devices@100/channel-devices@200/disk@a
      11. c1d11 <SUN-SUN_6180-0784 cyl 25598 alt 2 hd 64 sec 64>
          /virtual-devices@100/channel-devices@200/disk@b
      12. c1d12 <SUN-SUN_6180-0784-500.00MB>
          /virtual-devices@100/channel-devices@200/disk@c
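
To map these guest disk instances back to the exporting service domains, you can list the guest's disk bindings from the control domain. A minimal sketch, assuming the clustered guest is aa-guest1:

root@example:~# ldm list -o disk aa-guest1

The VDISK section of that output shows each disk name alongside the volume and the virtual disk service (primary-vds0 or alternate-vds0) that backs it.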

Again from Ops Center, you can click on the Storage tab of the guest, and see that the MPxIO-enabled LUN is known to be "Shared" by the hosts in Ops Center, while the other LUNs are not:

At this point, since VCS was going to be installed in the guest OS and a cluster built, the Veritas stack, including VxVM and VxDMP, was enabled on the guest LDOMs to correlate the two paths from the primary and alternate domains into a single device.
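
Getting VxVM to claim the new vdisks happens in the guest itself. Assuming the Veritas packages are already installed there, a rescan along these lines is typically all that's needed (a sketch, not the exact commands used in this environment):

root@aa-guest1:~# vxdctl enable
root@aa-guest1:~# vxdisk scandisks

vxdctl enable rebuilds the VxVM device list and DMP database, and vxdisk scandisks picks up any newly presented vdisks.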

For example:

root@aa-guest1:~# vxdisk list
DEVICE       TYPE            DISK         GROUP        STATUS
sun6180-0_0  auto:ZFS        -            -            ZFS
sun6180-0_1  auto:cdsdisk    sun6180-0_1  data_dg      online shared
sun6180-0_2  auto:cdsdisk    -            -            online
sun6180-0_3  auto:cdsdisk    -            -            online
sun6180-0_4  auto:cdsdisk    -            -            online
sun6180-0_5  auto:cdsdisk    sun6180-0_5  data_dg      online shared
sun6180-0_6  auto:cdsdisk    sun6180-0_6  data_dg      online shared

root@aa-guest1:~# vxdisk list sun6180-0_6
Device:    sun6180-0_6
devicetag: sun6180-0_6
type:      auto
clusterid: aa-guest
disk:      name=sun6180-0_6 id=1427384525.22.aa-guest1
group:     name=data_dg id=1427489834.14.aa-guest1
info:      format=cdsdisk,privoffset=256,pubslice=2,privslice=2
flags:     online ready private autoconfig shared autoimport imported
pubpaths:  block=/dev/vx/dmp/sun6180-0_6s2 char=/dev/vx/rdmp/sun6180-0_6s2
guid:      {a72e068c-d3ce-11e4-b9a0-00144ffe28bc}
udid:      SUN%5FSUN%5F6180%5F60080E5000184108000000004C2CF217%5F60080E5000184632000056B354E6025F
site:      -
version:   3.1
iosize:    min=512 (bytes) max=2048 (blocks)
public:    slice=2 offset=65792 len=104783616 disk_offset=0
private:   slice=2 offset=256 len=65536 disk_offset=0
update:    time=1430930621 seqno=0.15
ssb:       actual_seqno=0.0
headers:   0 240
configs:   count=1 len=48144
logs:      count=1 len=7296
Defined regions:
 config   priv 000048-000239[000192]: copy=01 offset=000000 enabled
 config   priv 000256-048207[047952]: copy=01 offset=000192 enabled
 log      priv 048208-055503[007296]: copy=01 offset=000000 enabled
 lockrgn  priv 055504-055647[000144]: part=00 offset=000000
Multipathing information:
numpaths:   2
c1d11s2         state=enabled   type=secondary
c1d9s2          state=enabled   type=secondary

connectivity: aa-guest1 aa-guest2

root@aa-guest1:~# vxdmpadm getsubpaths dmpnodename=sun6180-0_6
NAME         STATE[A]   PATH-TYPE[M] CTLR-NAME          ENCLR-TYPE   ENCLR-NAME    ATTRS
========================================================================================
c1d11s2      ENABLED(A)  SECONDARY    c1                 SUN6180-     sun6180-0        -
c1d9s2       ENABLED(A)  SECONDARY    c1                 SUN6180-     sun6180-0        -

In this way, the two guests that are going to be clustered together are now ready for VCS installation and configuration.

The second use case changes things a little in that both MPxIO and Veritas DMP are used in the primary and alternate domains, and DMP is still used in the guest as well. The advantage of this is that there is more redundancy and I/O throughput available at the service domain level, because multipathed devices are used for the guest virtual disk services instead of just the raw /dev/dsk/c#t#d# devices.
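
Switching a backend from the raw per-port path to the DMP node is essentially just a matter of re-adding the vdsdev with the /dev/vx/dmp path. A hedged sketch, using the quorum1 device from the listings (the corresponding vdisk should be removed from the guest, or the guest stopped, before a vdsdev that is in use is removed):

root@example:~# ldm remove-vdsdev quorum1@primary-vds0
root@example:~# ldm add-vdsdev /dev/vx/dmp/sun6180-0_6s2 quorum1@primary-vds0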

Now the disk services look something like this. The /dev/vx/dmp vdsdevs are DMP-based, while the c0t60080E... devices (the guest vol0 volumes) are still MPxIO-based:

VDS
    NAME             LDOM             VOLUME         OPTIONS          MPGROUP        DEVICE
    primary-vds0     primary          aa-guest2-vol0                  aa-guest2      /dev/rdsk/c0t60080E5000183F120000107754E60374d0s2
                                      aa-guest3-vol0                  aa-guest3      /dev/dsk/c0t60080E5000183F120000138B5522A1C4d0s2
                                      quorum1                                        /dev/vx/dmp/sun6180-0_6s2
                                      quorum2                                        /dev/vx/dmp/sun6180-0_7s2
                                      quorum3                                        /dev/vx/dmp/sun6180-0_8s2
                                      clusterdata1                                   /dev/vx/dmp/sun6180-0_12s2
                                      clusterdata2                                   /dev/vx/dmp/sun6180-0_5s2
                                      clusterdata3                                   /dev/vx/dmp/sun6180-0_14s2


VDS
    NAME             LDOM             VOLUME         OPTIONS          MPGROUP        DEVICE
    alternate-vds0   example-a    aa-guest2-vol0                  aa-guest2      /dev/rdsk/c0t60080E5000183F120000107754E60374d0s2
                                      aa-guest3-vol0                  aa-guest3      /dev/rdsk/c0t60080E5000183F120000138B5522A1C4d0s2
                                      quorum1                                        /dev/vx/dmp/sun6180-0_6s2
                                      quorum2                                        /dev/vx/dmp/sun6180-0_7s2
                                      quorum3                                        /dev/vx/dmp/sun6180-0_8s2
                                      clusterdata1                                   /dev/vx/dmp/sun6180-0_12s2
                                      clusterdata2                                   /dev/vx/dmp/sun6180-0_5s2
                                      clusterdata3                                   /dev/vx/dmp/sun6180-0_14s2

Again, the advantage here is that 2 paths to the same LUN are being presented from each service domain, so there is additional redundancy and throughput available. You can see the two paths:

root@example:~# vxdisk list sun6180-0_5
Device:    sun6180-0_5
devicetag: sun6180-0_5
type:      auto
clusterid: aa-guest
disk:      name= id=1427384268.11.aa-guest1
group:     name=data_dg id=1427489834.14.aa-guest1
info:      format=cdsdisk,privoffset=256,pubslice=2,privslice=2
flags:     online ready private autoconfig shared autoimport
pubpaths:  block=/dev/vx/dmp/sun6180-0_5s2 char=/dev/vx/rdmp/sun6180-0_5s2
guid:      {0e62396e-d3ce-11e4-b9a0-00144ffe28bc}
udid:      SUN%5FSUN%5F6180%5F60080E5000184108000000004C2CF217%5F60080E5000183F120000107F54E603CA
site:      -
version:   3.1
iosize:    min=512 (bytes) max=2048 (blocks)
public:    slice=2 offset=65792 len=104783616 disk_offset=0
private:   slice=2 offset=256 len=65536 disk_offset=0
update:    time=1430930621 seqno=0.15
ssb:       actual_seqno=0.0
headers:   0 240
configs:   count=1 len=48144
logs:      count=1 len=7296
Defined regions:
 config   priv 000048-000239[000192]: copy=01 offset=000000 enabled
 config   priv 000256-048207[047952]: copy=01 offset=000192 enabled
 log      priv 048208-055503[007296]: copy=01 offset=000000 enabled
 lockrgn  priv 055504-055647[000144]: part=00 offset=000000
Multipathing information:
numpaths:   2
c5t20140080E5184632d7s2 state=enabled   type=primary
c5t20250080E5184632d7s2 state=enabled   type=secondary

The Ops Center view of the virtual disk services is much the same:

Now the cluster can be set up just as it was before. To the guest, the virtual disks have not changed - just the back-end presentation of the LUNs has changed. This was transparent to the guest.

Thursday Mar 12, 2015

Giving Feedback in the OC Docs

There are a couple of features of the docs that are sometimes overlooked, so I thought I'd mention them.

If you click on a book in the documentation library, over on the left there are a couple of buttons. The feedback button lets you send us a message about the doc that you're looking at, so if something is confusing or incorrect, you can let us know and we'll fix it. There's also a download button where you can get a PDF of the current doc.


Also, if you're on the main page for Oracle Docs, there's a flying question mark box that you can click to give feedback about the doc site in general:


Speaking for our team, we do actually read those feedback messages, so let us know what you think.

Thursday Feb 19, 2015

Creating Networks for Server Pools

I've seen a couple of questions recently about creating networks for Server Pools. There's an important guideline that you should bear in mind, particularly when you're planning out your Server Pools: the servers in a server pool should be homogeneous in terms of their network tagging mode.

The reason for this is that, if a Server Pool contained servers that attach the same network in tagged mode on some servers and untagged mode on others, a guest migrated from one server to another could lose its network configuration.

The same principle applies even to standalone Oracle VM Servers for SPARC: you can't connect the same network to a standalone server in both tagged and untagged modes.
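
One quick way to sanity-check this on the servers themselves is to compare the virtual switch VLAN settings on each control domain in the pool. A sketch; in the VSW section of the output, a populated VID column means the network is attached in tagged mode, while an untagged network carries only a PVID:

root@example:~# ldm list-services primary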

Thursday Nov 13, 2014

New Library Design

As you may have noticed, we've revamped the design for docs.oracle.com. While the new look and feel lets us present information for a lot of products in a clearer (and snazzier) format, it also means that the path to the Ops Center docs from the main page is different.

The new library front page has a set of icons for the various categories of documentation. To get to Ops Center, you'll click on the Enterprise Manager category.


This brings up the main page for Enterprise Manager software, which has a tab for Ops Center. You'll click that tab:

Then, once the tab is displayed, you'll click the version of Ops Center that you want:


It's worth noting that the URLs for the Ops Center libraries haven't changed, so if you have them bookmarked you don't need to change anything. You can also still get to the library through the Help button at the top of the Ops Center UI. This change only affects how you get to those libraries through the landing page.

Thursday Aug 28, 2014

Oracle Solaris 11.2 Support

Oracle Solaris 11.2 was released just after Ops Center 12.2.1, and I've seen a few questions about when we'll support it, and to what degree.

The first answer is that Oracle Solaris 11.2 is now supported for most features in Ops Center. You can manage it, install it, update it, create zones on it - basically, anything you can do with Oracle Solaris 11.1, you can also do with Oracle Solaris 11.2.

The caveat to that is that the features introduced with Oracle Solaris 11.2, such as Kernel Zones, are not yet officially supported through Ops Center. We're working hard on adding that support right now.

Thursday Aug 14, 2014

CPU Architectures in 12.2.1

I've talked about a few of the enhancements that came in version 12.2.1 over the past few weeks. The last one is a new generic architecture class for newer systems that gives you new migration options for logical domains.

Some new systems - Oracle SPARC T4 servers, Oracle M5 and M6 servers, and Fujitsu M10 servers - have a class1 architecture. When you start a guest domain on one of these systems, Ops Center recognizes the architecture, and you can migrate the guest to systems with other CPU architectures without losing any LDOM capabilities.
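
Under the covers, this maps to the cpu-arch property on the domain in Oracle VM Server for SPARC. As a hedged sketch (the property value and the domain name myguest are illustrative assumptions, and the domain typically has to be stopped before the property can be changed), the equivalent ldm command looks something like this:

root@example:~# ldm set-domain cpu-arch=migration-class1 myguest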

Take a look at the Oracle VM Server for SPARC chapter in the Feature Reference Guide for more information about CPU architectures and other Logical Domain configuration options.

Thursday Aug 07, 2014

Audit Logs in 12.2.1

An increased audit capacity for Ops Center is something that folks have been asking about for a while. In 12.2.1, we added audit logs to provide this capability.

The audit logs are kept on the Enterprise Controller system starting in 12.2.1. They contain records of user logins, changes to user accounts, and job details. The logs require root access on the EC system and can't be edited, so they're a secure means of tracking who's logged in to Ops Center and done what.

The Feature Reference Guide has more information about how to view the audit logs.

Thursday Jul 31, 2014

LDAP Enhancements in 12.2.1

In Ops Center 12.2 and older versions, there were limitations on how a user pulled from an LDAP server could log in to Ops Center - basically, users could only log in using the user name field.

One of the improvements in version 12.2.1 is that an Ops Center Admin can designate other fields, such as email address, name, or member ID, to be used when logging in to Ops Center.

The Admin Guide's section on Users and Roles has more information about adding users from directory servers.

Thursday Jul 24, 2014

Upgrading to version 12.2.1

Now that Ops Center 12.2.1 is out, I thought I'd give a brief walkthrough of the upgrade process.

The first thing to do if you're planning on upgrading to 12.2.1 is to check the upgrade paths. If you're using an older version of Ops Center, such as 12.1.4, you'll have to upgrade to version 12.2 first. Here's a flowchart that shows the supported upgrade paths:


Once you've figured out your upgrade path, you should check the release notes. They have a list of known issues that could be relevant to your environment. In particular, if you're using version 12.2, there's a patch that you have to apply if you want to upgrade through the UI.

Once you've taken a look at the release notes, the Upgrade Guide will take you through the upgrade itself, with different procedures based on how you're doing your upgrade (through the UI or command line) and what sort of environment you have.

Thursday Jan 02, 2014

Installing and Configuring Ops Center

Our previous post was about planning an Ops Center installation. Once you've done the planning, you're ready to install and configure the software.

The initial installation is done from the command line. You put the correct bundle on the EC and PC systems, unpack it, and install it. Depending on your planning choices you'll use a few different options with the install commands.
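
As a rough illustration, the command-line steps for the Enterprise Controller look something like this. This is a sketch: the bundle file name below is a placeholder, the unpacked directory name can vary by release, and the exact install options depend on your planning choices (co-located versus remote database, and so on):

unzip enterprise-controller-bundle.zip      # placeholder file name
cd xvmoc_full_bundle
./install                                   # pass options that match your planning decisions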

Once the installer tells you that it's done, your final step is to log in to the UI and configure the Enterprise Controller. During the config process, you will choose a connection mode (disconnected mode for dark sites, or connected mode if the EC has internet access), set up the libraries for Linux, Oracle Solaris 8-10, and Oracle Solaris 11 content, and set up DHCP for OS provisioning if you need it.

Installing Ops Center is a complex process, and this post is just an overview and not a quick-start. The Oracle Solaris Installation Guide and the Linux Installation Guide go into much greater detail. If you have specific questions about installing Ops Center, let me know.

Thursday Dec 19, 2013

Planning an Ops Center Installation

Installing Ops Center in your environment takes some planning. There are several planning decisions that you need to make before you can kick off the installation and configuration.

The first decision is whether or not you want to set up High Availability for your Enterprise Controller. You can set up Ops Center with a single Enterprise Controller, or you can use Oracle Clusterware to set up multiple ECs in a cluster. Having multiple ECs uses more systems, but it also makes your environment much more resistant to system failures; EC failover is much quicker than restoring a single EC from a backup file.

Figuring out where you want to put the product database is your next step. You can use a co-located database, which Ops Center can install on the Enterprise Controller system. However, if you're using EC HA, or if you want to have more control over the database, you can use an existing Oracle Database 11g.

Finally, before you begin the installation, you should decide how many Proxy Controllers you want to use. Proxy Controllers work with the Enterprise Controller to execute jobs. One Proxy Controller can generally handle about a thousand managed systems. If you have fewer than that, you can enable the co-located Proxy Controller on the Enterprise Controller system, but if you have more you'll likely want multiple Proxy Controllers on separate systems. Also, if you plan on performing OS provisioning (OSP), you'll want to have Proxy Controllers on the networks of the OSP target systems.

Planning an Ops Center installation is a complex process, and this post is just an overview. The Oracle Solaris Installation Guide and the Linux Installation Guide go into much greater detail. If you have specific questions about planning an installation, let me know.

Thursday Dec 12, 2013

New OCDoctor released

There's a new version of the OCDoctor available - 4.26. It makes the connectivity check option a bit clearer, and its troubleshooting mode checks for several new issues.

You can get the new version at https://updates.oracle.com/OCDoctor/OCDoctor-latest.zip. If you have an Ops Center installation in connected mode, an automated job will get the new version for you.
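
If you'd rather update or run the checks by hand, commands along these lines should do it. This is a sketch that assumes the default OCDoctor location on the Enterprise Controller:

/var/opt/sun/xvm/OCDoctor/OCDoctor.sh --update
/var/opt/sun/xvm/OCDoctor/OCDoctor.sh --troubleshoot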

Thursday Nov 07, 2013

Plugins in Ops Center and Cloud Control

Cloud Control just released an updated plugin for Oracle Virtual Networking, so I thought I'd mention a bit about how both Ops Center and Cloud Control use plugins.

On the Ops Center side, we have a plugin to connect Ops Center to Cloud Control, letting them share monitoring data. This guide explains how to install and use that plugin.

In Cloud Control, they have a more extensive collection of plugins, letting you link Cloud Control with a variety of other products. The Cloud Control library plug-in tab goes into more detail about how you can use these plugins in your environment.

Friday Oct 18, 2013

MSR Issue on 12.1 Enterprise Controllers

We've noticed a problem with MSR initialization and synchronization on Enterprise Controllers that are using Java 7u45. If you're running into the issue, these jobs fail with Java errors. Java 7u45 is bundled with Oracle Solaris 11.1 SRU 12, so if you're using that version or if you plan to use it, you should be aware of this issue.

There's a simple fix. You can do the fix before upgrading to SRU 12, but you can't do it before you install the Enterprise Controller.

First, log on to the Enterprise Controller system and stop the EC using the ecadm command. This command is in the /opt/SUNWxvmoc/bin directory on Oracle Solaris systems and in the /opt/sun/xvmoc/bin directory on Linux systems:

ecadm stop -w

Then run this command to fix the issue; it increases the Java thread stack size in the cacao java-flags for the oem-ec instance from 256k to 384k:

cacaoadm set-param java-flags=`cacaoadm get-param -v java-flags -i oem-ec | sed 's/Xss256k/Xss384k/'` -i oem-ec

And then restart the EC:

ecadm start -w

Once you apply this fix, you should be set.
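
If you want to double-check the change after the restart, read the flags back with the same get-param call used in the fix; the output should now contain Xss384k rather than Xss256k:

cacaoadm get-param -v java-flags -i oem-ec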

About

This blog discusses issues encountered in Ops Center and highlights the ways in which the documentation can help you.
