Tuesday Apr 28, 2015

EM Compliance ‘Required Data Available’ flag – Understanding and Troubleshooting


A recent release of Enterprise Manager added a new column to the top-level Compliance Standard Results page named 'Required Data Available'. This column conveys whether the data required to properly evaluate all of the rules within the Compliance Standard is available in the repository for each associated target. If the value for a given target is 'No', the compliance results for that target should not be considered valid. A value of 'Yes' indicates that the required configuration data is being collected and the results are valid, at least from a data-availability perspective.

A common challenge for customers is understanding what this column means and how to correct the cause of a 'No' status. The goal of this guide is to explain how the column's value is derived and to provide some tools to correct the situation.


Monday Jan 05, 2015

Enterprise Manager Ops Center - OS provisioning across secondary network interfaces


One of Enterprise Manager Ops Center's core functions is provisioning an OS onto bare-metal servers. If the network you are provisioning across is connected to one of the onboard ports (on the first onboard network chip), all is well and provisioning will work as expected. This is the case for well over 95% of customers. However, if you are provisioning across a network connected to a port on a card in an expansion slot (or on a second onboard network chip), the provisioning job will fail because an incorrect MAC address is set in the JET/AI/Kickstart server. If you have hit this issue, please read on. If you are provisioning over an onboard NIC port, stop reading now and happy OS provisioning.

The Cause

When Ops Center discovers the ILOM (ALOM/XSCF and all the other various LOMs) of a server, only certain pieces of information can be collected from the LOM while the OS is running. We maintain a policy of creating as little impact as possible during discovery, so we do not force you to shut down the OS during discovery.

Information we can collect:

  • the number of network interfaces
  • the MAC address of the first network interface (port)

Information we can NOT collect:

  • the MAC address of all the other network interfaces (ports)

Since the LOM only provides the first MAC address, Ops Center must calculate the MAC addresses of the remaining network interfaces. Ops Center will get the MAC addresses for the onboard NICs correct but its calculated MAC addresses will be wrong for any NICs not on the first onboard network chip.

If we have an example system that has 4 onboard network ports (on the motherboard) and an expansion network card in the PCI-E/X slot with an additional 4 network ports, Ops Center's view of that server, based on the information from the LOM, would not match the physical server.

Interface                      Name   Calculated MAC (from LOM)   Actual MAC           Correct?
Number of network interfaces   -      8                           8                    YES
Interface 0 (onboard)          net0   00:21:28:17:72:b2           00:21:28:17:72:b2    YES
Interface 1 (onboard)          net1   00:21:28:17:72:b3           00:21:28:17:72:b3    YES
Interface 2 (onboard)          net2   00:21:28:17:72:b4           00:21:28:17:72:b4    YES
Interface 3 (onboard)          net3   00:21:28:17:72:b5           00:21:28:17:72:b5    YES
Interface 4 (PCI-E/X card)     net4   00:21:28:17:72:b6           00:14:4f:6b:fd:28    NO
Interface 5 (PCI-E/X card)     net5   00:21:28:17:72:b7           00:14:4f:6b:fd:29    NO
Interface 6 (PCI-E/X card)     net6   00:21:28:17:72:b8           00:14:4f:6b:fd:30    NO
Interface 7 (PCI-E/X card)     net7   00:21:28:17:72:b9           00:14:4f:6b:fd:31    NO
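To make the divergence in the table concrete, here is a minimal Python sketch of the increment-from-first-MAC derivation described above (our own illustration of the behavior, not Ops Center's actual code):

```python
# Ops Center knows only the first port's MAC and the port count, so it
# assumes consecutive addresses. This helper mimics that assumption.

def derive_macs(first_mac: str, count: int) -> list[str]:
    """Derive MAC addresses by incrementing the first port's address."""
    base = int(first_mac.replace(":", ""), 16)
    return [
        ":".join(f"{base + i:012x}"[j:j + 2] for j in range(0, 12, 2))
        for i in range(count)
    ]

calculated = derive_macs("00:21:28:17:72:b2", 8)
# net0-net3 (onboard) match reality, but net4 and above do not:
print(calculated[4])  # 00:21:28:17:72:b6 - the card's real MAC is 00:14:4f:6b:fd:28
```

Comparing `calculated[4:]` with the actual expansion-card MACs in the table shows exactly the mismatch that breaks the provisioning job.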

You can confirm that the MAC addresses for an expansion network card have been calculated by looking at the Network tab in the BUI for the LOM object.

You can see that the displayed MAC addresses for GB_4 and GB_5 are a simple increment of 1 from that of GB_3, which should not be the case, since GB_4 and GB_5 are on a PCI-E/X expansion card. While most Oracle (Sun) servers have 4 onboard network interfaces of the same type, some servers have 2 x 1 Gbit interfaces and 2 x 10 Gbit interfaces. In that case, only the first onboard network interfaces will display the correct MAC addresses.

It should be noted that if you have discovered both the LOM and the running operating system, Ops Center will have identified the correct MAC addresses for all the network interfaces, since it combines the information gathered from the LOM and the operating system to display the full picture (correct values). Unfortunately, you cannot rely on these when re-provisioning: part of the OSP job deletes the OS object (we are re-provisioning it, after all), and the cached MAC address values may expire before the JET/AI/Kickstart server is configured.
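As a rough sketch of that combination (assumed behavior for illustration, not Ops Center internals), the OS-reported addresses take precedence over the LOM-calculated ones, which is why a managed OS shows correct values:

```python
# LOM-derived view: all 8 ports assumed consecutive from the first MAC.
lom_calculated = {f"net{i}": f"00:21:28:17:72:b{2 + i:x}" for i in range(8)}

# What the OS agent actually reports for the expansion-card ports.
os_reported = {
    "net4": "00:14:4f:6b:fd:28",
    "net5": "00:14:4f:6b:fd:29",
}

# OS-reported values win on conflict, correcting the calculated guesses.
merged = {**lom_calculated, **os_reported}
print(merged["net4"])  # 00:14:4f:6b:fd:28
```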

The Impact

If you provision across net0, net1, net2, or net3, all works well; but if you select net4 or above, the job will fail with a timeout in the "Monitor OS Installation" task, because the JET/AI/Kickstart server was configured with the wrong MAC address and therefore never responded to the OSP request. Please note that a misidentified MAC address is not the only possible cause of a timeout in the "Monitor OS Installation" task. This error only indicates that some step of the OS provisioning has failed and can be caused by a number of different issues.

The Solution

There are two ways of provisioning to secondary network interfaces:

1) Use the MAC address method (simplest method - only available in 12.2.0+)

In Ops Center 12.2.0, we introduced an option to specify the MAC address to provision across directly in the BUI. When running the "Install Server" action/wizard, the "Boot Interface Resource Assignments" page has a check-box, [Identify Network Interface by MAC Address]. Selecting this check-box changes the wizard from using netX interface names (which rely on the discovered MAC addresses) to letting you enter the MAC address manually. The entered MAC address is used to set up the JET/AI/Kickstart server and to interrogate the server's OBP to work out the netX interface required for wanboot.
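Since the wizard relies on a correctly formed MAC address, a quick sanity check before typing it in can save a failed job. The following helper is our own suggestion, not part of Ops Center:

```python
import re

# Hypothetical pre-check: validate and normalize a manually entered
# MAC address (lowercase, colon-separated) before using it in the wizard.
MAC_RE = re.compile(r"^([0-9a-f]{2}:){5}[0-9a-f]{2}$")

def normalize_mac(mac: str) -> str:
    """Return a lowercase, colon-separated MAC, or raise ValueError."""
    mac = mac.strip().lower().replace("-", ":")
    if not MAC_RE.match(mac):
        raise ValueError(f"not a valid MAC address: {mac!r}")
    return mac

print(normalize_mac("00-14-4F-6B-FD-28"))  # 00:14:4f:6b:fd:28
```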

It is as easy as that and your provisioning job will progress as normal.

2) Overload the MAC address before provisioning (the way we did it before we had method #1)

Assuming you have already discovered and managed the system's LOM, you can overload (update) the discovered/calculated network interface MAC addresses.

In the BUI, select "Assets" ==> "All Assets" ==> "Add Assets"

then choose "Manually declared server to be a target of OS provisioning"

While this wizard can declare multiple servers using an XML file, in this example we will declare a single server. The wizard normally lets us declare a server's network interfaces, but since some of the MAC addresses we declare are already part of an existing discovered server LOM, Ops Center will identify the overlapping MAC address and merge this data with the existing server. The matching interfaces stay the same, while the newly declared MAC addresses overload (replace) the incorrect calculated addresses.

Select Declare a single server, then click the [Green Plus] icon

Enter the port name [GB_X] and the actual MAC address.

Repeat this for all the interfaces, up to and including the one you want to provision across. 

Do not skip any interfaces, as the interface numbering is based on the order in which the entries are stored in the database.
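The numbering rule can be illustrated with a small Python sketch (the port names and MACs are the example values from the table earlier; the mapping logic is our own illustration, not Ops Center code):

```python
# Each declared port gets its netX name from its position in the list,
# so skipping an entry would shift every later interface's number.
declared = [  # (port name, actual MAC) entered in the wizard, in order
    ("GB_0", "00:21:28:17:72:b2"),
    ("GB_1", "00:21:28:17:72:b3"),
    ("GB_2", "00:21:28:17:72:b4"),
    ("GB_3", "00:21:28:17:72:b5"),
    ("GB_4", "00:14:4f:6b:fd:28"),  # expansion card - the port we want
]

netx = {f"net{i}": mac for i, (_, mac) in enumerate(declared)}
print(netx["net4"])  # 00:14:4f:6b:fd:28 - correct only because no entry was skipped
```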

When you have entered all required interfaces, you then have to fill in the server details.

Once completed, click the [Declare Asset] button and wait for the job to complete. Normally, this will just take a few seconds. 

You can check in the BUI that the updated MAC addresses have been applied.

Now, just run your provisioning job as per normal and the correct MAC address will be configured in the JET/AI/Kickstart server.

As you can see, if you have updated your Enterprise Controller to Ops Center 12.2.0 or higher, option #1 is the simpler method.

All the best with your OS provisioning,


Tuesday Jul 22, 2014

Upgrading to Enterprise Manager Ops Center

Hi all,

This is just a quick note for those looking to upgrade to the new release of Ops Center.

It is available for download:

  • From OTN - after July 25, 2014
  • From OSDC - after July 31, 2014
  • From the Ops Center BUI - after July 20, 2014

Before you can see the new version in an existing 12.2.0 (or earlier) installation of Ops Center, you will need to apply a quick patch to make the new version visible for download. Details of how to access this IDR and how to apply it can be found in the MOS Note (Doc ID 1908726.1).

Happy upgrading!



Thursday Jun 26, 2014

Creating A Secondary I/O domain with Ops Center 12.2

Contributed by Juergen Fleischer and Mahesh Sharma.

The purpose of this blog is to show you how to create a secondary I/O domain. The first I/O domain is commonly known as the Control Domain (CDOM). Various terms are used for a secondary domain, such as alternate I/O domain or redundant I/O domain.

The secondary I/O domain is assigned some physical I/O devices, which may be a PCIe bus root complex, a PCI device, or an SR-IOV (Single Root I/O Virtualization) virtual function.

Within Ops Center, when creating a secondary domain, we also use the terms Physical I/O Domain and Root Domain. A Physical I/O Domain maps PCI device end points, and a Root Domain maps PCIe buses, with an option to create SR-IOV functions.

In this blog we will show you how to create a Root Domain by assigning PCIe buses, giving us a redundant I/O domain that lets us shut down the CDOM without affecting any of our guests.

Our host, a T5-2 (the same host as in the previous blogs), has two free PCIe buses that have not been assigned to domains (pci_2 and pci_3). So let's create a secondary I/O domain (Root Domain) with these buses and give the domain two whole cores and 4 GB of memory.

We'll start by creating a Logical Domain Profile. This is created from:

Navigation -> Plan Management -> Profiles and Policies, then click on Logical Domain. On the right, under Actions, select Create Profiles.

From the Identify Profile screen, give the profile a name (Secondary-I/O in our case) and select Root Domain as the Sub-type, as shown above. Click Next.

Step 2 is where we provide a name for our secondary I/O domain; we called ours secondary. Click Next to continue.

On the next screen, shown below, we entered the number of cores and the amount of memory.

Step 4 is where we specify how many PCIe buses to assign to this secondary domain. In our case we specified two, for pci_2 and pci_3.

The next few steps are optional and are not required for this example.

Step 7 is the Summary; if everything looks correct, click Finish.

Note: The metadata is on the local disk (file://guest). This is fine for the secondary domain, as it will not be migrated. It is the logical domains and their guests that get migrated, so their metadata must be on shared storage if you want migration to succeed!

Now that we have created the profile, we will create our secondary domain (Root Domain).

This is done from Navigation -> Plan Management -> Deployment Plans -> Create Logical Domain; select the plan we have just created. From the Actions panel on the right, select "Apply Deployment Plan".

From the Select Target Asset pop-up, select the CDOM and move it to the Target list. Select Next.

Complete Step 1 by specifying the secondary domain's name.

In Step 2, we pick the PCIe buses that will be assigned to the secondary domain.

In Step 3, we kept the default Virtual Disk Server (vds) name.

The next few screens are not required in this example.

From the Summary screen, click Finish. This will create the secondary domain.

Once the secondary domain has been created, we can check that it was built as we specified, via both the Ops Center BUI and the command line.

Let's do both:

And from the command line:

While on the command line, we can also check that the buses have been assigned correctly.