Wednesday May 15, 2013

How to go Physical to Virtual with Oracle Solaris Zones using Enterprise Manager Ops Center

Many customers have large collections of physical Solaris 8, 9 and 10 servers in their datacenters and are wondering how to virtualize them. This leads to a commonly asked question: can Enterprise Manager Ops Center 12c be used to P2V (Physical to Virtual) my old servers? Ops Center does not have a single-button P2V capability, but it can deploy physical servers, LDoms and branded zones from flash archives (flars) taken of your existing physical servers. Ops Center achieves P2V by deploying flars and leveraging its patching and automation capabilities, making the P2V process consistent, repeatable and as cost effective as possible.

As with any virtualization project, there will be a number of things that will need to be updated as you move from a physical to a virtual environment. It is a common misconception that you can virtualize a system and change nothing about it. There are always a few things that have to be changed on an OS or process level to make it compatible with the virtualized environment. As a best practice, there are many more things that should be updated, re-allocated and redesigned as part of a virtualization project but that is a subject for another blog.

In this blog, we will be covering migrating physical servers to Oracle Solaris Zones.

We will also review this in our community call on Monday, 05/20/2013. Here are the web conference details.


Topic: How to P2V to Oracle Solaris Zones using Enterprise Manager Ops Center
Date: Monday, May 20, 2013
Time: 5:00 pm, Eastern Daylight Time (New York, GMT-04:00)
Meeting Number: 594 835 639
Meeting Password: oracle123
To join the online meeting (Now from mobile devices!)
1. Go to
2. If requested, enter your name and email address.
3. If a password is required, enter the meeting password: oracle123
4. Click "Join".

To view in other time zones or languages, please click the link:

To join the teleconference only
Call-in toll-free number:       1-866-682-4770  (US/Canada)      
Other countries:

Conference Code:       7629343#
Security code:            7777# 

Ops Center does not do anything you could not do by hand; it automates the process and, through templates and wizards, drives consistency and repeatable results.

Physical Server to Branded Zones

Let’s first look at converting existing Solaris 8, 9 and 10 physical servers into Solaris 8/9 branded zones (on Solaris 10) and Solaris 10 branded zones (on Solaris 11).

There are 4 basic steps involved in converting a physical server to a virtual server: capture, groom, deploy and then perform post-P2V configuration.

Capturing the flar

This is done by creating a flash archive of the source system. If the source system is a Solaris 10 environment with a ZFS root, it is possible to use a ZFS-based flar, but for consistency and ease of coding in the grooming scripts, I recommend that all flars be taken as cpio flash archives.

When capturing the flar, we should include all the root file systems that will be required for the new zone to boot. If the application data is small, it can be included in the flar, but if it is large, you should copy the application data after the P2V migration is complete.

Example flar capture command line

# flarcreate -n [HostNameOfSourceSystem] -S -c -L cpio \
    -x /[Dir where archive will be stored] \
    /[Dir where archive will be stored]/[HostNameOfSourceSystem].flar

Note that we are using the -x flag to exclude the directory where the archive is stored. You can use multiple -x flags to exclude any other directories you do not want in the archive, such as large application data. Large archives become difficult to unpack, repack and upload. As you can imagine, if your source archive contained 100 GB of application data, you would probably need 300-400 GB of space just to perform grooming, and that would make the process much slower.
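
Before grooming, it is worth confirming that the captured archive really is cpio-based. Flash archive headers are plain key=value text, so this can be scripted; the following is only a sketch, and the `files_archived_method` header key is an assumption you should verify against your own archives:

```shell
#!/bin/sh
# Sketch: check whether a flash archive was captured with the cpio method.
# Flar headers are plain key=value text; "files_archived_method" is the
# header key assumed here -- verify it against your own archives.
is_cpio_flar() {
    # Only the header needs to be read, so limit how much of the file we scan
    head -40 "$1" | grep -q '^files_archived_method=cpio'
}

# Example: is_cpio_flar /export/flars/myhost.flar && echo "cpio format"
```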


Grooming the flar

The first question I often get is, “Why do I need to groom my flar?” In an ideal world, you would not need to, but rarely do we live in an ideal world:

1) If your environments are like most customers', there is a wide variety of configurations, patching levels and processes that have been applied to the source system over its lifecycle. This level of entropy means that there may well be things that must be fixed, like the following real-world examples of problems with source flars that I have observed:
    • A customer’s patching methodology had disabled the startup scripts that are required for a branded zone to boot.
    • Some Solaris “df” command patches were missing that are mandatory for Solaris 8/9 branded zones (with a separate /var) to work.
So formalized grooming is a chance to automate the fixing of these anomalies and to help standardize your new environment.

2) A common restriction placed on you in a migration project is that you cannot alter, update or reconfigure the source machine to suit its target virtualized environment. So you have to at least apply any mandatory fixes to the archive file itself, and preferably also remove anything that may conflict with, or fail in, the new virtualized environment. Once again, formalized grooming is an ideal way to address these requirements.

You will often come across the argument that everything must be identical on the new virtualized environment, in an attempt to minimize change. Let’s be honest here: while virtualization aims to provide a functionally identical environment, there is a massive amount of change happening under the covers when you virtualize a machine, and there are a few things that must be changed to make it work. The history of virtualization projects shows that once existing servers are virtualized, they are almost never revisited to clean up the idiosyncrasies of the source machines and bring them into a standard format. Therefore, I am a supporter of more aggressive grooming and the cleaning up of years of entropy as part of the P2V process.

The way to make grooming consistent and less manual and tedious is to convert the steps into an Operational Plan and use Ops Center to run them in a consistent manner.

Here are some of the modules I usually include in my grooming plans:


  • Export flar info as shell variables to be used elsewhere in the program.
  • Check that the flar is in cpio format.
  • Unpack the flar archive.
  • Check that the OS release meets the minimum for branded zones.
  • Remove the service tag identity file. Old service tags and duplicate service tags can confuse Ops Center.
  • Check that no soft partition data is included in the flar.
  • Remove any outstanding FMD faults from the source machine.
  • Disable any references to disk suite (svm/sds/ods) from the source machine.
  • Disable any NFS mounts in vfstab from the source machine, as they may not be available in the new virtualized environment and will stop the zone from booting.
  • Add missing packages if required (the packages you want to add are laid out under a directory structure).
  • Add missing patches if required (the patches you want to add are laid out under a directory structure).
  • Disable application startup in the flash archive. It is often advisable to disable automatic application startup as part of the P2V process, so that it cannot conflict with or corrupt the production source machine.
  • Un-configure the Ops Center agent and clean up its identity files in case the source system was already under Ops Center control.
  • Repack the flar archive.
These are just some of the common modules that I have used over the years on P2V projects. You may not need all of these, and you may need to create your own module if you find something unique in your source machines. Grooming is an iterative process, as your source machines can vary wildly from any machine I have ever come across. This is not an Ops Center limitation; it is just the nature of P2V on a source pool of unknown quality. So if you hit an issue, troubleshoot it, add a new module to the Operational Plan and try again. To help get you started, I have included some sample code [Sample Grooming script]. This script provides examples of what can be done and should be treated as a starting point for building your own grooming script.
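
To give a feel for what such a module looks like, here is a sketch of the NFS-mount step, written as a standalone function; `FLAR_ROOT`, the `#P2V#` marker and the backup-file name are all illustrative conventions, not part of the sample grooming script itself:

```shell
#!/bin/sh
# Sketch of one hypothetical grooming module: comment out NFS mounts in
# the unpacked archive's vfstab so an unavailable NFS server cannot stop
# the new zone from booting. Takes the unpacked flar root as $1.
disable_nfs_mounts() {
    vfstab="$1/etc/vfstab"
    [ -f "$vfstab" ] || return 0
    cp "$vfstab" "$vfstab.pre_p2v"              # keep a backup for reference
    # vfstab column 4 is the filesystem type; prefix nfs entries with a marker
    awk '$4 == "nfs" { print "#P2V# " $0; next } { print }' \
        "$vfstab.pre_p2v" > "$vfstab"
}

# Example: disable_nfs_mounts /var/tmp/flar_unpack
```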

Loading the script into an Operational Plan is as simple as:

Creating a new Operational Profile
Give the profile a name, and select Operating System as the target type.

Creating an Operational Plan

Click Browse to find the script and then load it. I have increased the profile timeout to 180 minutes, as grooming large flars can take a while.
Browse and load script
Finally, specify any variables for which the script requires user input. You can specify the variable as required or optional and provide a hint/description to help the user at run time.
Specify variable input

So how do I groom my flar?

Step 1) Copy the flar taken from the source system onto any system managed by Ops Center that has sufficient space.

Step 2) Launch your grooming “Operational Plan” against the system to which you copied the flar.

Launching an op plan

Step 3) Have a cup of coffee... then check the logs

Check the logs

Step 4) You now have a flar ready for uploading into Ops Center.


Deploying the flar

Deployment is where this process comes under full Ops Center control.

Step 1) Load your groomed flar into the EC library.

Load Groomed flar

Step 2) Create a profile for the zone you want to deploy.

Create zone profile

Step 3) Deploy the zone.

Deploy a zone

When the job completes, you have your source system as a zone.


Post-P2V configuration

What are the sorts of actions you can do here?

  • Add additional networks if you only deployed with a single network.
  • Update the application configuration.
  • Update secondary apps like backup software, application monitoring, etc.
  • Re-enable application startup scripts and remote file systems in vfstab.
  • Do any updates to patches/packages/applications.

To make these actions repeatable and consistent, I employ Operational Plans or Update Profiles.

Operational Plans – These are scripts that make actions repeatable and can be modified by operator input (Operational Plan Variables) at run time.

Update Profiles - These can contain patches, packages, scripts and customized configuration files (based on the system you are deploying to).

Choose the right method for what you are trying to do or combine both of them together in a Software Deployment/Update Plan.
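
For example, if your grooming plan disabled application startup by renaming rc scripts, the re-enabling Operational Plan can simply reverse the rename. This is only a sketch: the `.p2v_off` suffix is a hypothetical convention and should match whatever your own grooming script actually used:

```shell
#!/bin/sh
# Hypothetical post-P2V Operational Plan step: re-enable application
# startup scripts that grooming disabled by renaming them with a
# ".p2v_off" suffix (an assumed convention -- match your grooming plan).
reenable_rc_scripts() {
    dir=$1
    for f in "$dir"/*.p2v_off; do
        [ -e "$f" ] || continue        # no disabled scripts found
        mv "$f" "${f%.p2v_off}"        # strip the suffix to re-enable
    done
}

# Example: reenable_rc_scripts /etc/rc3.d
```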

Congratulations, you have P2Ved your system.


It is fairly straightforward to automate the migration of physical servers to Oracle Solaris Zones using Enterprise Manager Ops Center. Operational Plans and Update Profiles are great tools to automate many of those operational tasks and increase their reliability and repeatability.

Look out for a future blog on “How to P2V to Oracle Virtual Machine for SPARC using Enterprise Manager Ops Center“.

Monday Oct 29, 2012

OS Analytics - Deep Dive Into Your OS

Oracle Enterprise Manager Ops Center provides a feature called "OS Analytics". This feature allows you to get a better understanding of how the Operating System is being utilized. You can research the historical usage as well as real time data. This post will show how you can benefit from OS Analytics and how it works behind the scenes.

The recording of our call to discuss this blog is available here:

Download the presentation here

See also:

Blog about Alert Monitoring and Problem Notification

Blog about Using Operational Profiles to Install Packages and other content

Here is a quick summary of what you can do with OS Analytics in Ops Center:

  • View historical charts and real-time values of CPU, memory, network and disk utilization
  • Find the top CPU and memory processes in real time or on a given historical day
  • Determine proper monitoring thresholds based on historical data
  • View Solaris services status details
  • Drill down into the details of a process
  • View the busiest zones, if applicable

Where to start

To start with OS Analytics, choose the OS asset in the tree and click the Analytics tab.

You can see the CPU utilization, Memory utilization and Network utilization, along with the current real time top 5 processes in each category (click the image to see a larger version):

In the above screen, you can click each of the top 5 processes to see a more detailed view of that process. Here is an example of one of the processes:

One of the cool things is that you can see the process tree for this process along with some port binding and open file descriptors.

On Solaris machines with zones, you get an extra level of tabs, allowing you to get more information on the different zones:

This is a good way to see the busiest zones. For example, one zone may not take a lot of CPU but it can consume a lot of memory, or perhaps network bandwidth. To see the detailed Analytics for each of the zones, simply click each of the zones in the tree and go to its Analytics tab.

Next, click the "Processes" tab to see real time information of all the processes on the machine:

An interesting column is the "Target" column. If you have configured Ops Center to work with Enterprise Manager Cloud Control, the two products will talk to each other and Ops Center will display the correlated target from Cloud Control in this table. If you are only using Ops Center, this column will remain empty.

Next, if you view a Solaris machine, you will have a "Services" tab:

By default, all services will be displayed, but you can choose to display only certain states, for example, those in maintenance or the degraded ones. You can highlight a service and choose to view the details, where you can see the Dependencies, Dependents and also the location of the service log file (not shown in the picture as you need to scroll down to see the log file).

The "Threshold" tab is particularly helpful: you can view historical trends of different monitored values and, based on the graph, determine what the monitoring values should be:

You can ask Ops Center to suggest monitoring levels based on the historical values or you can set your own. The different colors in the graph represent the current set levels: Red for critical, Yellow for warning and Blue for Information, allowing you to quickly see how they're positioned against real data.

It's important to note that when looking at longer periods, Ops Center smooths out the data and uses averages. So when looking at values such as CPU Usage, try shorter time frames which are more detailed, such as one hour or one day.

Applying new monitoring values

When first applying new values to monitored attributes, a popup will ask you to confirm moving the machine off its current Monitoring Policy. This is fine if you want either custom monitoring for a specific machine, or to use the current machine as a "gold image" and extract a Monitoring Policy from it. You can later apply the new Monitoring Policy to other machines and also set it as the default Monitoring Policy.

Once you're done with applying the different monitoring values, you can review and change them in the "Monitoring" tab. You can also click the "Extract a Monitoring Policy" in the actions pane on the right to save all the new values to a new Monitoring Policy, which can then be found under "Plan Management" -> "Monitoring Policies".

Visiting the past

Under the "History" tab you can "go back in time". This is very helpful when you know that a machine was busy a few hours ago (perhaps in the middle of the night?), but you were not around to take a look at it in real time. Here's a view into yesterday's data on one of the machines:

You can see an interesting CPU spike happening at around 3:30 am along with some memory use. In the bottom table you can see the top 5 CPU and Memory consumers at the requested time. Very quickly you can see that this spike is related to the Solaris 11 IPS repository synchronization process using the "pkgrecv" command.

The "time machine" doesn't stop here - you can also view historical data to determine which of the zones was the busiest at a given time:

Under the hood

The data collected is stored on each of the agents under /var/opt/sun/xvm/analytics/historical/

  • An "" file exists for the main OS. Inside you will find many small text files, named after the Epoch time stamp in which they were taken
  • If you have any zones, there will be a file called "" containing the same small files for all the zones, as well as a folder with the name of the zone along with "" in it
  • If this is the Enterprise Controller or the Proxy Controller, you will have folders called "proxy" and "sat" in which you will find the "" for that controller
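
Since the collection files are named after Epoch time stamps, a small helper makes browsing them by hand easier. A sketch (GNU `date` syntax, as found on Linux; on Solaris you could substitute a perl one-liner):

```shell
#!/bin/sh
# Sketch: translate the Epoch-stamp file names used by the analytics
# collector into readable dates. Uses GNU date syntax; on Solaris,
# perl -e 'print scalar gmtime($ARGV[0])' is an alternative.
epoch_to_date() {
    date -u -d "@$1" '+%Y-%m-%d %H:%M:%S'
}

# Example: list the five newest collection files with readable dates
# (the directory layout is as described above)
# for f in $(ls /var/opt/sun/xvm/analytics/historical | tail -5); do
#     echo "$f -> $(epoch_to_date "$f")"
# done
```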

The actual script collecting the data can be viewed for debugging purposes as well:

  • On Linux, the location is: /opt/sun/xvmoc/private/os_analytics/collect
  • On Solaris, the location is /opt/SUNWxvmoc/private/os_analytics/collect

If you would like to redirect all the standard error into a file for debugging, touch the following file and the output will go into it:

# touch /tmp/.collect.stderr  

The temporary data is collected under /var/opt/sun/xvm/analytics/.collectdb until it is zipped.

If you would like to review the properties for the Analytics, you can view those per each agent in /opt/sun/n1gc/lib/ Find the section "Analytics configurable properties for OS and VSC" to view the Analytics specific values.

I hope you find this helpful! Please post questions in the comments below.

Eran Steiner

Stay Connected:
Twitter | Facebook | YouTube | Linkedin | Newsletter

Friday Mar 16, 2012

Oracle Enterprise Manager 12c Integration With Oracle Enterprise Manager Ops Center 11g

In a blog entry earlier this year, we announced the availability of the Ops Center 11g plug-in for Enterprise Manager 12c. In this article I will walk you through the process of deploying the plug-in on your existing Enterprise Manager agents and show you some of the capabilities the plug-in provides. We'll also look at the integration from the Ops Center perspective. I will show you how to set up the connection to Enterprise Manager and give an overview of the information that is available.



Tuesday Jan 10, 2012

Oracle Enterprise Manager Ops Center plug-in for Enterprise Manager Cloud Control 12c is now available

A systems monitoring plug-in to connect Oracle Enterprise Manager Cloud Control 12c Release 1 ( and Oracle Enterprise Manager Ops Center 11g Release 1, Update 3 ( is now available in the Oracle Enterprise Manager Cloud Control 12c Self Update software catalog (see screenshot below).

This Oracle Enterprise Manager Ops Center 11g plug-in for Enterprise Manager Cloud Control 12c increases the efficiency of datacenter operations by sharing event and target attribute information between the business service owners and the infrastructure operations staff.  This enables business service owners to be fully aware of hardware, virtualization, and operating system event information while troubleshooting application issues.  They can see hardware component level details from the user interface they are comfortable working from within EM Cloud Control 12c. 

At the same time, the operations staff can see which Oracle applications are running on which infrastructure asset within Ops Center before taking maintenance actions and disrupting business.  

More information on this plug-in can be found in the following user guide:

The Oracle Enterprise Manager Ops Center 11g plug-in for Enterprise Manager Grid Control 10g and 11g is located within the Ops Center 11g product media.  No additional download is required.  More information can be found in the following user guide:

For more information, please go to the Oracle Enterprise Manager web page or follow us at:

Twitter | Facebook | YouTube | Linkedin | Newsletter

Tuesday Dec 20, 2011

Oracle Enterprise Manager Ops Center 11g UPDATE 3 is now available - Learn more in a webcast on December 22

Oracle Enterprise Manager Ops Center 11g Update 3 was released last week and is now available for download via the Ops Center user interface. You will be able to download it soon at  You can read the updated readme file for all changes and installation instructions.

Here is a summary of what's new in Oracle Enterprise Manager Ops Center 11g Update 3:

  • Operating Systems support
    • Oracle Enterprise Linux 5.6, 5.7
    • Red Hat 5.6 and 5.7
    • Solaris 10 Update 10 (X86 and SPARC)
  • Enhanced support of SPARC T4 systems
    • LDOM provisioning
    • Component firmware provisioning
    • RAID support
    • Hardware management provided by HMP
  • Added driver for Netra SPARC family
  • Disconnected Mode Improvements
    • Major upgrade to the harvester utility
    • Improved upload process for offline content on the Enterprise Controller
  • Many other performance related and general product improvements

The Oracle Enterprise Manager Ops Center engineering team will be hosting a webcast on Thursday, December 22 at 8:00 A.M. Pacific time to review the key enhancements in Update 3. The webcast will also cover download and upgrade instructions. You will also have an opportunity to ask questions of the experts from the Ops Center engineering team in this webcast.

Date: Thursday, December 22, 2011
Time: 8:00 A.M. Pacific time, 11:00 am, Eastern Standard Time (New York, GMT-05:00)

The recording of the call is now available at:

The presentation may be downloaded from:


For more information on Oracle Enterprise Manager, please go to our web page or follow us at:

Twitter | Facebook | YouTube | Linkedin

Thursday Nov 03, 2011

Alert Monitoring and Problem Notification in Oracle Enterprise Manager Ops Center

Oracle Enterprise Manager Ops Center provides full lifecycle management of your Oracle hardware and operating systems, including your virtual environments. A significant portion of any given asset's lifecycle is spent in daily operations and when things are running smoothly, there isn't much for an administrator to do. When things go awry, it's critical to know what happened and why as quickly as possible. Oracle Enterprise Manager Ops Center provides alert monitoring and problem notification and management capabilities to enable you to do just that. I'll walk you through a quick and simple example of how you can use these features and hopefully it will spark ideas of how you can implement even more interesting solutions using the same basic steps.

The first step is to tune your monitoring rules. Each type of asset will have a default set of monitoring rules that are applied when the asset is first managed. Rules can be managed on individual assets via their Monitoring tab, or by applying Monitoring Profiles to individual assets or groups of assets. Monitoring rules can be configured to raise alerts when, for example, a monitored attribute exceeds a threshold value for a selectable period of time. For more details on how to configure your monitoring rules, please see section 9 of the Advanced User's Guide, available by clicking on the Help link from within the browser user interface. If you update monitoring rules in a Monitoring Profile, be sure to apply that profile to your desired assets in order to make it affect their monitoring rules. For this example I have set a very short window for the CPU Usage attribute to generate an alert after only 1 minute of high CPU utilization, as shown in the screenshot below.

When an alert is generated, a new problem will be created if none is already open for the issue. Otherwise the alert will be added to an existing problem. Problems aggregate alerts and annotations together and provide the opportunity to assign and track resolution. Any users who have their Notification Profile defined to receive notification of the problem will get an email or page with the pertinent details. The image below shows how you might specify to have the root user subscribe to get email notification of all WARNING or higher level problems.

Problems can be managed holistically from the Message Center in the top of the left-hand navigation panel or they can be viewed for individual assets by selecting the Problems tab. When looking at an open problem, icons along the top allow you to see existing alerts and annotations, to add an annotation, to assign the problem to a user or to take action on the problem, as shown in the screenshot below.

Annotations can be simple textual comments or suggested actions which can include the execution of an existing Operational Plan. For more detail on how to use Operational Plans, see section 11 of the Advanced User's Guide. For this example, I created a simple Operational Plan to execute a prstat. Be sure to select the appropriate Subtype, in this case a Global Zone.

When adding an annotation to a problem, you can optionally select the checkbox at the bottom of the window in order to save that annotation to the Problems Knowledge Base and associate it to future problems of the same type and severity as shown below.

When an annotation has been saved to the Problems Knowledge Base, it can be edited to include additional severities and can also be changed to execute automatically when a future problem is initially created, as shown below. For more detail on the Problems Knowledge Base, please refer to section 10 of the Advanced User's Guide.

When a new problem is detected, the newly added Automated Action will execute the associated Operational Plan and attach the output as an annotation to the problem. To demonstrate this in action, I executed several 'dd' commands on the host to force excessive CPU usage. In this case, the prstat output shows the high CPU usage of the processes that were running at the time that the alert was generated, even though they lasted only a few minutes.

This is clearly a simple example and would not suffice to capture very short-lived processes, but it illustrates the possibilities available. The automatic action could have been a more in-depth data-gathering script utilizing DTrace, or could even have made system changes, depending on the real scenario it was built to address. I hope this quick walk-through has provoked thoughts of how you might implement Alert Monitoring and Problem Notification and Management in your enterprise using Oracle Enterprise Manager Ops Center.

Follow Oracle Enterprise Manager Ops Center at:

Twitter | Facebook | YouTube | Linkedin

Friday Sep 16, 2011

How to Install Oracle Enterprise Manager Ops Center 11g (Step by Step Guide)

This morning I came across a blog post from Gokhan Atil, a founding member of the Turkish Oracle User Group (TROUG), about Oracle Enterprise Manager Ops Center, and thought of sharing it with all of you. In his latest blog, Gokhan writes about how to install Oracle Enterprise Manager Ops Center 11g Release 1 on Oracle Linux 5.5 (32-bit). We really appreciate all the contributions by Gokhan and many other Oracle User Group members around the world.

As you know, Oracle OpenWorld is once again around the corner, October 2-6, 2011. There will be many sessions, demos and hands-on labs related to Oracle Enterprise Manager Ops Center and Oracle Enterprise Manager. Hope to see you at Oracle OpenWorld 2011.

Friday Jul 29, 2011

Hello from a new member of the Oracle Enterprise Manager team


My name is Anand Akela and I am a new member of the Oracle Enterprise Manager product marketing team. I wanted to introduce myself before you start seeing my postings here.

I will be focusing on cloud management, virtualization management and infrastructure management offerings in the Oracle Enterprise Manager portfolio.

I have been working in various product marketing, product management and engineering roles in the systems management, server, data center energy efficiency and enterprise software areas. I also serve as the chairman of the Data Collection and Analysis workgroup at The Green Grid, an industry consortium developing and promoting energy efficiency for data centers and enterprise systems.

I am looking forward to interacting with you through the Oracle Enterprise Manager blog and other social media tools.

Anand Akela
Twitter | LinkedIn

