By Hari Srinivasan-Oracle on Sep 19, 2014
Now that Enterprise Manager 12cR4 has been out for a little while, more people are getting around to upgrading their agents. Since the monthly patch bundles were released, we already have a few agent-side patches that we want to apply to our newly upgraded agents. I've written about simplifying your agent patching before, but this feature still seems to fly under the radar. It's days like these that I miss running a massive Enterprise Manager with thousands of databases, because this is one of the things that would have made me dance in my cubicle. Let's say you have 100 12.1.0.3.0 agents (50 with the Database plug-in, 50 with the Middleware plug-in). In my previous blog on EM patches, I explained the different types of patches available for EM, so I'm not going to go into detail here. What I'm going to illustrate is how we can upgrade those 100 agents and patch them with the following patches in one step (current as of today):
The core Enterprise Manager system is typically patched with the quarterly PSU patches (released in January, April, July, and October) or a one-off when directed by Support for a critical issue. PSU patches are cumulative, so you need not apply each of them; just apply the latest. The OMSes must be shut down during patching; however, some patches are being released with rolling patch instructions for multi-OMS systems. These patches must be applied at the host level and cannot be automated via EM. ALWAYS read the readme, yes, every time. The patching steps can change from patch to patch, so it's critical to read the readme. OPatch or OPatchauto will be used to apply these patches. Did I mention to read the readme for every patch? It's also important to note that there may be additional steps when patching in a multi-OMS or standby environment, so read the output of OPatchauto carefully.
Always download the latest OPatch release for the appropriate version. If you read the readme, you already know this! Download patch 6880880 for version 11.1 (the OPatch version used by EM) and unzip it into the $ORACLE_HOME. Most patching errors are related to not updating OPatch.
For more information on PSU Patches and patching EM:
Oracle Enterprise Manager Cloud Control Administrators Guide - Chapter 16 Patching Oracle Management Server and the Repository
EM 12c Cloud Control: List of Available Patch Set Updates PSU (Doc ID 1605609.1)
How to Determine the List of Patch Set Update(PSU) Applied to the Enterprise Manager OMS and Agent Oracle Homes? (Doc ID 1358092.1)
Each plug-in has binaries that will require patches as well. The same downtime requirements apply for plug-in patches as for the quarterly PSUs. Starting in 12.1.0.3, the plug-in patches are released as a monthly bundle. This means that if you have 6 plug-ins, you may have 6 OMS-side patches to apply, one for each plug-in. Bundles are not always released for every plug-in every month. They are cumulative, so pick the latest.
Starting with 12.1.0.4, the individual OMS-side plug-in bundles are grouped into a System Patch each month. For example, the June 2014 System Patch includes the MOS, Cloud, DB, FA, FMW, SMF, and Siebel plug-ins. Patches that are not required are skipped automatically.
For more information on the EM Patch Bundles and Patching EM:
Enterprise Manager 12.1.0.4.0 (PS3) Master Bundle Patch List (Doc ID 1900943.1)
Enterprise Manager 12.1.0.3 Bundle Patch Master Note (Doc ID 1572022.1)
Agent patches are applied to each agent. They can be applied via EM using the MOS patch plans, which makes things a lot easier when you have hundreds or thousands of agents to patch! The patch plans will start a blackout, validate prerequisites, check for conflicts, and update OPatch for you. If you don't use a patch plan, you can patch manually with OPatch; don't forget to read the readme! The agent must be shut down during the patch application. There are 4 main types of agent patches you will see:
You can apply the latest agent bundle, JDBC patch, and the plug-in bundles in one patch plan. If there's a conflict, you'll be notified. If the agents you've selected don't have the specified plug-ins, you'll also receive notice during the analyze step. As of now, for my 12.1.0.4.0 agents, I would apply the 12.1.0.4.1 patch (18873338), the two available plug-in agent patches for DB monitoring (19002534) and FMW monitoring (18953219), and the latest JDBC patches (18502187, 18721761), all in one patch plan.
I discovered a new feature in 12.1.0.4 while testing this. Previously, you had to have Normal Oracle Home preferred credentials set for all agent targets to be patched, or select Override and specify the Normal Oracle Home credentials. In 12.1.0.4, the agent uses its internal credentials to patch itself, so setting preferred credentials or specifying them at run time is not required. The user doing the patching does require the Manage Target Patch and Patch Plan privileges.
The OMS and agent are the key components, and my main focus here. However, it's important to keep the infrastructure stack up to date as well. This includes the Oracle Fusion Middleware and Oracle Database installations used by EM. The recommendation is to follow the best practices for each of these components and regularly apply the available PSU patches. The following reference notes will help in identifying the current PSU patches. The WebLogic Server version used by EM 12c is 10.3.6.
Hopefully this helps you understand the various components involved in keeping EM up to date. You may not want to patch every month, or even every quarter, but the patches are available to keep the software current, and the bundles make them easier to apply. You'll want to set up a plan for software maintenance in your environment. The whitepaper Oracle Enterprise Manager Software Planned Maintenance will help guide you through the best practices.
When implementing database as a service and/or snap clone, a common request was for a way to hide the other service types like IaaS, MWaaS, etc from the self service portal for the end users. Before EM12c R4, there was no way to restrict the portal view. Essentially, any user with the EM_SSA_USER role would be directed to the self service portal and would then be able to see all service types supported by EM12c.
Of course, you could always set Database as your default self service portal from the 'My Preferences' pop-up, but this only helps with the post-login experience. The end user still sees all the options, as shown in the screen above.
In EM12c R4, a new out-of-the-box role called EM_SSA_USER_BASE has been introduced. By default, this role does not give access to any portal; that is an explicit selection. Here is how you use this role:
1. Create a custom role and add the EM_SSA_USER_BASE role to it.
2. In the Resource Privileges step, select the resource type 'Cloud Self Service Portal for Database' and edit it.
3. Check the 'Access the Cloud Self Service Portal for Database.' privilege, then finish the rest of the wizard.
Now, when a user with this custom role accesses the self service portal, they can only do so for databases and nothing else.
While the EM_SSA_USER role will continue to work, we recommend you start using the new EM_SSA_USER_BASE role. For more details on DBaaS or Snap Clone roles, refer to the cloud admin guide chapter on roles and users.
-- Adeesh Fulay (@AdeeshF)
Surprisingly, a popular question on our internal forum is about the possibility of using the Enterprise Manager (EM) Job System to replace customers' numerous cron jobs. The answer is obviously YES! I say surprisingly because the EM Job System has existed for around 10 years (I believe since EM 10.2.0.1), and my hope was that by now customers would have moved to enterprise-class job schedulers instead of cron. So here is a quick post for some of our new users on how to get started with this conversion from cron to EM jobs.
Before we learn about the how, let’s look at the why. The EM job system is:
Now back to our topic.
Let’s start with a sample crontab that we want to convert.
A cron expression consists of 6 fields, where the first 5 fields represent the schedule, while the last field represents the command or script to run.
| Field Name | Mandatory? | Allowed Values | Allowed Special Characters |
|---|---|---|---|
| Minutes | Yes | 0-59 | * / , - |
| Hours | Yes | 0-23 | * / , - |
| Day of month | Yes | 1-31 | * / , - ? L W |
| Month | Yes | 1-12 or JAN-DEC | * / , - |
| Day of week | Yes | 0-6 or SUN-SAT | * / , - ? L # |
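As a mental model for the table above, here is a small illustrative Python sketch (my own, not EM or cron code; the script path is made up) that splits a crontab entry into the five schedule fields plus the command:

```python
def parse_crontab_entry(entry):
    """Split a crontab line into the 5 schedule fields plus the command.

    The first five whitespace-separated tokens are minutes, hours,
    day of month, month, and day of week; everything after them is
    the command to run.
    """
    parts = entry.split(None, 5)
    if len(parts) < 6:
        raise ValueError("expected 5 schedule fields plus a command")
    keys = ("minutes", "hours", "day_of_month", "month", "day_of_week")
    return dict(zip(keys, parts[:5])), parts[5]

# The weekly-at-midnight sample used later in this post.
schedule, command = parse_crontab_entry("00 0 * * Sun /u01/scripts/backup.sh")
```

Running it on the sample entry yields minutes '00', hours '0', and day of week 'Sun', with everything after the fifth field treated as the command.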
Cron jobs run on the operating system, often using the native shell or other tools installed on the operating system. The equivalent of this capability in Enterprise Manager is the ‘OS Command’ job type. Here are the steps required to convert the first entry in the crontab to an EM job:
1. Navigate to the Job Activity page
2. Select the ‘OS Command’ job and click Go
A 5-tab wizard will appear. Let’s step through this one by one.
3. Select the first tab called ‘General’. Here provide a meaningful name and description for the job. Since this job will be run on the Host target, keep the target type selection as ‘Host’. Next, select all host targets in EM that you wish to run this script against.
While cron jobs are defined on a per-host basis, in EM a single job definition can be run and managed across multiple hosts or groups of hosts. This avoids having to maintain the same crontab across multiple hosts.
4. Select the ‘Parameters’ tab. Here enter the command or script as specified in the last field of the crontab entry. When constructing the command, you can make use of the various target properties.
5. Next select ‘Credentials’. Here we provide the credentials required to connect to the host and execute the required commands or scripts. Three options are presented to the user:
Note: If your OS user does not have the required privileges to execute the configured command, Named Credentials also support the use of sudo, PowerBroker, sesu, etc.
6. Next, we set the schedule and this is where it gets interesting. As discussed before, crontab uses a textual representation for the schedule, while EM Job system has a graphical representation for the schedule.
Our sample schedule in the crontab is ‘00 0 * * Sun’. This translates to a weekly job at 12 midnight on every Sunday. To set this in EM, choose the ‘Repeating’ schedule type. The screenshot below shows all the other selections.
The key here is to select the correct 'Frequency Type'; the rest of the selections are straightforward. This tab also lets you choose the desired time zone for the schedule. Your options are to either start the job with respect to a fixed time zone, or start it in each target's own time zone. The latter is very popular; for example, I may want to start a job at 2 AM local time in every region around the world.
Another selection of note is the 'Grace Period'. This is an extremely powerful feature, but one that is often unused. Typically, we expect jobs to be started within a few seconds or minutes of the start time (depending on the load on the system and the number of jobs scheduled), but a job might not start on time for many reasons; the most common are the agent being down or a blackout. The grace period controls the latest start time for a delayed job, after which the iteration is marked as skipped. By default, jobs are scheduled with an indefinite grace period, but I highly recommend setting a value for it. In the sample above, I set a 3-hour limit, which may seem large but is reasonable given the weekly nature of the job. So the job system will wait until 3 AM (the job start time is 12 AM) to start the job, after which the iteration will be skipped. For repeating schedules, the grace period should always be less than the repeat interval. If the job starts on time, the grace period is ignored.
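The grace-period rule can be sketched in a few lines of Python. This is only an illustration of the logic described above, not EM code:

```python
from datetime import datetime, timedelta

def job_action(scheduled_start, actual_start, grace_period=None):
    """Decide whether a delayed job iteration runs or is skipped.

    grace_period=None models the default indefinite grace period;
    otherwise the iteration is skipped once the delay exceeds it.
    """
    delay = actual_start - scheduled_start
    if grace_period is None or delay <= grace_period:
        return "run"
    return "skipped"

# Weekly job scheduled for midnight with a 3-hour grace period.
scheduled = datetime(2014, 9, 21, 0, 0)
print(job_action(scheduled, datetime(2014, 9, 21, 2, 30), timedelta(hours=3)))  # run
print(job_action(scheduled, datetime(2014, 9, 21, 3, 30), timedelta(hours=3)))  # skipped
```

A start 2.5 hours late is still within the 3-hour window and runs; a start 3.5 hours late is skipped.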
7. Finally, we navigate to the ‘Access’ tab. This tab has two parts:
To prevent EM from sending a deluge of emails, I recommend the following settings in the notifications region:
You can always come back and modify these settings to suit your needs.
Not all cron jobs need to be converted to the OS Command job type. For example, if you are taking Oracle database backups via cron, then you probably want to use the out-of-the-box job type for RMAN scripts: just provide the RMAN script, the list of databases to run it against, and the credentials required to connect to the databases. Similarly, if you run SQL scripts against numerous databases, you can leverage the SQL Script job type. There are over 50 job types available in EM12c, all usable from both the UI and EM CLI.
Finally, the best way to learn more about the EM Job System is to actually play with it. I also recommend blogs from Maaz, Kellyn, and other users on this topic. Good luck!!
Maaz Anjum: http://maazanjum.com/2013/12/30/create-a-simple-job-for-a-host-target-in-em12c/
Kellyn Pot'vin: http://dbakevlar.com/
-- Adeesh Fulay (@adeeshf)
So you just installed a new EM12c R4 environment or upgraded your existing EM environment to EM12c R4. After the upgrade, you go to the Job System activity page (via the Enterprise->Job->Activity menu) and view the progress details of a job. Nothing seems to have changed: it's the same UI, the same multi-page drill-down to view step output, the same number of clicks, etc. Wrong! In this two-part blog post, I talk about two Job System Easter eggs (hidden features) that most of you will find interesting. These are:
So before I go any further, let me address the question of why these features are hidden. As we were building them, we realized that we would not be able to ship the desired quality of code by the set dates. Hence, instead of removing the code, we decided to ship it in a disabled state so as not to impact customers, while still allowing a brave few to experiment with it and provide valuable feedback.
1. New Job Progress Tracking UI
The job system UI hasn't changed much since its introduction almost 10 years ago. Updating all the job-system-related UIs in a single release is a daunting task, so we decided to take a piecemeal approach instead. In the first installment, we have revamped the job progress tracking page.
The current UI, as shown above, while very functional, is also very laborious. Multiple clicks and drill-downs are required to view the step output for a particular target. Also, any click leads to a complete page refresh, which wastes time and resources. The new UI tries to address all of these concerns. It is a single-page UI, which means that no matter where you click, you never leave the page and thus never lose the context of the target or step you were in. It also significantly reduces the number of clicks required to complete the same task as in the current UI. So let's take a look at this new UI.
First, as I mentioned earlier, you need to enable this UI. To do this, run the following emctl command on any of the OMSes:
./emctl set property -name oracle.sysman.core.jobs.ui.useAdfExecutionUi -value true
This command will prompt for the sysman password, and then will enable the new UI.
NOTE: This command does not require a restart of the OMS. Once run, the new UI will be enabled for all users across all OMSes.
Now revisit the job progress tracking page from before. You will be directed to the new UI.
There are in all 6 key regions on this new single page job progress tracking UI. Starting from top left, these are:
Essentially, considering that jobs have two levels (executions and steps), we have experimented with a multi-master style layout. EM has never used such a layout before, and hence concerns were raised when we chose to do so.
Master 1 (region 2) -> Detail 1 (regions 3, 4, & 5)
Master 2 (region 4) -> Detail 2 (region 5)
In summary, with this new UI, we have been able to significantly reduce the number of clicks required to track job progress and drill into details. We have also been able to show all relevant information on a single page, avoiding unnecessary page redirections and reloads. I would love to hear from you whether this experiment has paid off and whether you find the new UI useful.
In the next part of this blog, I talk about the new EM CLI verbs to import and export job definitions across EM environments. This has been a long-standing enhancement request, and we are quite excited about our efforts.
-- Adeesh Fulay (@adeeshf)
Many of us who use EM CLI to write scripts and automate daily tasks should not miss the new list verb released with Oracle Enterprise Manager 12.1.0.4.0. The combination of list and the Jython-based scripting support in EM CLI makes it easier to automate complex tasks with just a few lines of code. Before I jump into a script, let me highlight the key attributes of the list verb and why it's simply excellent!
1. Multiple resources under a single verb:
A resource can be a set of users, targets, etc. Using the list verb, you can retrieve information about a resource from the repository database.
Here is an example which retrieves the list of administrators within EM.
$ emcli list -resource="Administrators"
$ emcli @myAdmin.py
Enter password : ******
The output will be the same as standard mode.
Contents of myAdmin.py script
To get a list of all available resources use
$ emcli list -help
With every release of EM, more resources are being added to the list verb. If there is a resource that you feel would be valuable, go ahead and contact Oracle Support to log an enhancement request with product development. Be sure to say how the resource would help improve your daily tasks.
2. Consistent Formatting:
It is possible to format the output of any resource consistently using these options:
This option is used to specify which columns should be shown in the output.
Here is an example which shows the list of administrators and their account status
$ emcli list -resource="Administrators" -columns="USER_NAME,REPOS_ACCOUNT_STATUS"
To get a list of columns in a resource use:
$ emcli list -resource="Administrators" -help
You can also specify the width of each column. For example, here the column width of USER_TYPE is set to 20 and DEPARTMENT to 30.
$ emcli list -resource=Administrators -columns="USER_NAME,USER_TYPE:20,COST_CENTER,CONTACT,DEPARTMENT:30"
This is useful if your terminal is too small, or if you want to fine-tune specific columns for quicker use or improved readability.
This option is used to resize column widths.
Here is the same example as above, but using -colsize to define the width of user_type to 20 and department to 30.
$ emcli list -resource=Administrators -columns="USER_NAME,USER_TYPE,COST_CENTER,CONTACT,DEPARTMENT" -colsize="USER_TYPE:20,DEPARTMENT:30"
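Under the hood, -columns and -colsize amount to fixed-width text formatting. Here is a rough Python sketch of the idea (my own illustration, not EM CLI internals):

```python
def format_row(values, widths):
    """Render one output row with fixed column widths,
    padding or truncating each value to its width."""
    return " ".join(str(v)[:w].ljust(w) for v, w in zip(values, widths))

# USER_TYPE capped at 20 characters, DEPARTMENT at 30, as in the -colsize example.
header = format_row(["USER_NAME", "USER_TYPE", "DEPARTMENT"], [12, 20, 30])
row = format_row(["SYSMAN", "Super Administrator", "IT Operations"], [12, 20, 30])
```

Each value is truncated or padded to its column's width, so every row lines up regardless of content length.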
The existing standard EMCLI formatting options are also available in list verb. They are:
-format="name:pretty" | -format="name:script” | -format="name:csv" | -noheader | -script
There are so many uses depending on your needs. Have a look at the resources and columns in each resource. Refer to the EMCLI book in EM documentation for more information.
3. Search:
Using the -search option in the list verb makes it possible to search for specific rows by column value within a resource. This is similar to a WHERE clause in SQL*Plus. The following operators are supported:
is (Must be followed by null or not null)
Here is an example which searches for all EM administrators in the marketing department located in the USA.
$emcli list -resource="Administrators" -search="DEPARTMENT ='Marketing'" -search="LOCATION='USA'"
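Note that multiple -search clauses combine as a logical AND. As a mental model in plain Python (not EM CLI code; the sample rows are invented):

```python
def matches(row, conditions):
    """True when a row (a dict of column values) satisfies every
    column=value condition, mirroring ANDed -search clauses."""
    return all(row.get(col) == val for col, val in conditions.items())

admins = [
    {"USER_NAME": "JDOE", "DEPARTMENT": "Marketing", "LOCATION": "USA"},
    {"USER_NAME": "ASMITH", "DEPARTMENT": "Marketing", "LOCATION": "UK"},
]
hits = [a for a in admins if matches(a, {"DEPARTMENT": "Marketing", "LOCATION": "USA"})]
```

Only rows satisfying both conditions survive, just as only administrators in Marketing and in the USA are returned above.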
Here is another example which shows all the named credentials created since a specific date.
$emcli list -resource=NamedCredentials -search="CredCreatedDate > '11-Nov-2013 12:37:20 PM'"
Note that the timestamp has to be in the format DD-MON-YYYY HH:MI:SS AM/PM
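In Python terms, that timestamp format corresponds to the strptime/strftime pattern '%d-%b-%Y %I:%M:%S %p'. Here is a small helper (my own sketch, not part of EM CLI) for producing and validating such timestamps:

```python
from datetime import datetime

# 'DD-MON-YYYY HH:MI:SS AM/PM' as a strptime/strftime pattern.
CRED_DATE_FORMAT = "%d-%b-%Y %I:%M:%S %p"

def to_search_timestamp(dt):
    """Format a datetime for use in a -search comparison."""
    return dt.strftime(CRED_DATE_FORMAT)

def parse_search_timestamp(text):
    """Parse (and thereby validate) a timestamp in that format."""
    return datetime.strptime(text, CRED_DATE_FORMAT)

stamp = to_search_timestamp(datetime(2013, 11, 11, 12, 37, 20))  # '11-Nov-2013 12:37:20 PM'
```

Building the string this way avoids typos in hand-written timestamps inside -search clauses.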
Some resources need a bind variable to be passed to produce output. A bind variable is created in the resource and then referenced in the command. For example, the following command lists all the default preferred credentials for target type oracle_database:
$ emcli list -resource="PreferredCredentialsDefault" -bind="TargetType='oracle_database'" -colsize="SetName:15,TargetType:15"
You can provide multiple bind variables.
To verify whether a column is searchable or requires a bind variable, use the -help option. Here is an example:
$ emcli list -resource="PreferredCredentialsDefault" -help
4. Secure access
When the list verb collects data, it displays only the content that the administrator currently logged in to EM CLI has access to.
For example, consider this use case:
AdminA has access only to TargetA.
AdminA logs into EM CLI
Executing the list verb to get the list of all targets will only show TargetA.
5. User defined SQL
Using the -sql option, user-defined SQL can be executed. The SQL provided in the -sql option is executed as the EM user MGMT_VIEW, which has read-only access to the EM-published MGMT$ database views in the SYSMAN schema.
To get the list of EM-published MGMT$ database views, go to the Extensibility Programmer's Reference in the EM documentation; there is a chapter on Using Management Repository Views. It's always recommended to stick to the documented MGMT$ views. Suppose you use a view MGMT$ABC that is not in that chapter: since it is not documented and not supported, its structure or data may change during an upgrade. Using only supported views ensures that your scripts using -sql will continue to work after an upgrade.
Here’s an example
$ emcli list -sql='select * from mgmt$target'
 1| from emcli import *
 2| search_list = ['PROPERTY_NAME=\'DBVersion\'', 'TARGET_TYPE=\'oracle_database\'', 'PROPERTY_VALUE LIKE \'11.2%\'']
 3| if len(sys.argv) == 2:
 4|     print login(username=sys.argv[0])
 5|     l_prop_val_to_set = sys.argv[1]
 6|     l_targets = list(resource="TargetProperties", search=search_list, columns="TARGET_NAME,TARGET_TYPE,PROPERTY_NAME")
 7|     for target in l_targets.out()['data']:
 8|         t_pn = 'LifeCycle Status'
 9|         print "INFO: Setting Property name " + t_pn + " to value " + l_prop_val_to_set + " for " + target['TARGET_NAME']
10|         set_target_property_value(property_records=target['TARGET_NAME'] + ":" + target['TARGET_TYPE'] + ":" + t_pn + ":" + l_prop_val_to_set)
11| else:
12|     print "\n ERROR: Property value argument is missing"
13|     print "\n INFO: Format to run this file is filename.py <username> <Database Target LifeCycle Status Property Value>"
Line 1: Imports the emcli verbs as functions.
Line 2: search_list is a variable passed to the search option of the list verb. I am using escape characters for the single quotes. To pass more than one value for the same option of the list verb, define them as above: comma-separated values surrounded by square brackets.
Line 3: This "if" condition ensures the user provides the expected arguments with the script; otherwise the script prints an error message (line 12).
Line 4: Logs in to EM. You can remove this if you have set up EM CLI with autologin. For more details about setup and autologin, see the EM CLI book in the EM documentation.
Line 5: l_prop_val_to_set is another variable: the property value to be set. Remember we are changing the value from Test to Production. The benefit of this variable is that you can reuse the script to change the property value from and to any other values.
Line 6: The output of the list verb is stored in l_targets. In the list verb, I pass the resource as TargetProperties, the search as the search_list variable, and only the three columns I need for this task: TARGET_NAME, TARGET_TYPE, and PROPERTY_NAME.
Line 7: A for loop. The data in l_targets is available in JSON format; the loop makes each row available in the 'target' variable.
Line 8: t_pn holds the "LifeCycle Status" property name. If required, I could make this an input as well and then use the script to change any target property. In this example, I just wanted to change the "LifeCycle Status".
Line 9: A message informing the user that the script is setting the property value for the target.
Line 10: The set_target_property_value verb, which sets the value using the property_records option. Once one target is set, the loop moves to the next one. In my example, I am showing just three databases, but the real use is when you have 20 or 50 targets.
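For line 10, the property_records value is a single colon-separated string. A tiny Python helper showing how such a record could be assembled (the name:type:property:value field order is my reading of the property_records format):

```python
def property_record(target_name, target_type, property_name, property_value):
    """Assemble the colon-separated record passed to the
    set_target_property_value verb's property_records option.
    Field order (name:type:property:value) is an assumption here."""
    return ":".join([target_name, target_type, property_name, property_value])

record = property_record("dbxyz", "oracle_database", "LifeCycle Status", "Production")
```

Factoring the record out like this keeps the loop body readable when the script grows.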
The recommendation is to always test scripts before running them on a production system. We tested this on a small set of targets, optimizing the script for fewer lines of code and better messaging.
For your quick reference, the resources available in Enterprise Manager 12.1.0.4.0 with the list verb are:
$ emcli list -help
Watch this space for more blog posts using the list verb and EM CLI Scripting use cases. I hope you enjoyed reading this blog post and it has helped you gain more information about the list verb. Happy Scripting!!
With the latest release of Enterprise Manager 12c, Release 4 (12.1.0.4), the EM development team has added new functionality to help the EM administrator monitor the health of the EM infrastructure. Taking feedback delivered by customers directly and through customer advisory boards, some nice enhancements have been made to the "Manage Cloud Control" sections of the UI, commonly known in the EM community as "the MTM pages" (MTM stands for Monitor the Monitor). This part of the EM Cloud Control UI is viewed by many as mission control for EM administrators.
In this post we’ll highlight some of the new information that’s on display in these redesigned pages and explain how the information they present can help EM administrators identify potential bottlenecks or issues with the EM infrastructure. The first page we’ll take a look at is the newly designed Repository information page. You can get to this from the main Setup menu, through Manage Cloud Control, then Repository.
1. Comprehensive Database Service Catalog
Before we get deep into the implementation of a service catalog, let's first understand what it is and what benefits it provides. Per ITIL, a service catalog is an exhaustive list of IT services that an organization provides or offers to its employees or customers. Service catalogs have become widely popular in the cloud computing space, primarily as the medium for providing standardized and pre-approved service definitions. There is already some good collateral out there on Oracle database service catalogs. The two whitepapers I recommend reading are:
EM12c has come with an out-of-the-box service catalog and self service portal since Release 1. For customers, it provides the following benefits:
Starting with Release 4, the scope of services offered via the service catalog has been expanded to include databases with varying levels of availability: Single Instance (SI) or Real Application Clusters (RAC) databases with multiple Data Guard based standby databases. Some salient points of the Data Guard integration:
The different service levels or service tiers are popularly represented using metals - Platinum, Gold, Silver, Bronze, and so on. The Oracle MAA whitepaper (referenced above) calls out the various service tiers as defined by Oracle's best practices, but customers can choose any logical combinations from the table below:
| Primary | Standby [1 or more] |
A sample service catalog might look like the image below. Here we have defined 4 service levels, deployed across 2 data centers, with 3 standardized sizes. Again, it is important to note that this is just an example to get the creative juices flowing. I imagine each customer will come up with their own catalog based on their application requirements, their RTO/RPO goals, and the product licenses they own. In the screenwatch titled 'Build Service Catalog using EM12c DBaaS', I walk through the complete steps required to set up this sample service catalog in EM12c.
2. Additional Storage Options for Snap Clone
In my previous blog posts, I have described the snap clone feature in detail. Essentially, it provides a storage-agnostic, self-service, rapid, and space-efficient approach to solving your data cloning problems. The net benefit is that you get incredible storage savings (on average 90%), all while cloning databases in a matter of minutes. Space and time: two things enterprises would love to save. This feature has been designed with the goal of providing data cloning capabilities while protecting your existing investments in servers, storage, and software. With this in mind, we have pursued a dual solution approach of hardware and software. In the hardware approach, we connect directly to your storage appliances and perform all the low-level actions required to rapidly clone your databases. In the software approach, we use an intermediate software layer to talk to any storage vendor or any storage configuration to perform the same low-level actions. This delivers the benefits of database thin cloning without requiring you to drastically change your infrastructure or IT's operating style.
In Release 4, we expand the options supported by snap clone with the addition of database CloneDB. CloneDB is not a new feature; it was first introduced in the 11.2.0.2 patchset and has over the years become more stable and mature. CloneDB leverages a combination of the Direct NFS (dNFS) feature of the database, RMAN image copies, sparse files, and copy-on-write technology to create thin clones of databases from existing backups in a matter of minutes. It essentially has all the traits that we want to present to our customers via the snap clone feature. For more information on CloneDB, I highly recommend the following sources:
The advantages of the new CloneDB integration with EM12c Snap Clone are:
3. Improved Rapid Start Kits
DBaaS deployments tend to be complex, and their setup requires a series of steps, typically performed by different users across different UIs. The Rapid Start Kit provides a single-command solution to set up Database as a Service (DBaaS) and Pluggable Database as a Service (PDBaaS). One command creates all the Cloud artifacts: roles, administrators, credentials, database profiles, PaaS Infrastructure Zones, database pools, and service templates. Once the Rapid Start Kit has been successfully executed, requests can be made to provision databases and PDBs from the self service portal. The Rapid Start Kit can create complex topologies involving multiple zones, pools, and service templates, and it also supports standby databases and the use of RMAN image backups.
The Rapid Start Kit is in reality a simple EM CLI script that takes a set of XML files as input and executes the complete automation in a matter of seconds. On a full-rack Exadata, it took only 40 seconds to set up PDBaaS end to end. The kit works both on Oracle's engineered systems, like Exadata and SuperCluster, and on commodity hardware. One can draw a parallel to the Exadata One Command script, which likewise takes a set of inputs from the administrators and then runs a simple script that configures everything from the network to provisioning the DB software.
Steps to use the kit:
The database_cloud_setup.py script takes two inputs:
emcli @database_cloud_setup.py -pdbaas -cloud_boundary=/tmp/my_boundary.xml -cloud_input=/tmp/pdb_inputs.xml
The script will prompt a few times for the passwords of key users such as sysman, the cloud admin, and the SSA admin. Once it completes, you can simply log in to EM as the self service user and request databases from the portal.
More information available in the Rapid Start Kit chapter in Cloud Administration Guide.
4. Extensible Metering and Chargeback
Last but not least, Metering and Chargeback in Release 4 has been made extensible in all possible regards. The new extensibility features allow customers, partners, system integrators, etc. to:
A slew of EM CLI verbs have also been added that allow administrators to create, edit, delete, and import/export charge plans, and to assign cost centers, all via the command line.
More information available in the Chargeback API chapter in Cloud Administration Guide.
5. Miscellaneous Enhancements
There are other miscellaneous, yet important, enhancements that are worth a mention. Most of these have been requested by customers like you. They are:
I hope this was a good introduction to the new Database as a Service enhancements in EM12c R4. I encourage you to explore many of these new and existing features and give us feedback.
-- Adeesh Fulay (@adeeshf)
If you have an Enterprise Manager 12c Release 3 or older agent monitoring database targets, Support may recommend that you install a JDBC (Java database connectivity) patch, such as patch 17591700, to prevent high CPU consumption by the agent.
JDBC patches, including 17591700, have readme files containing instructions for installing the patches to a database home, not to an EM agent. This post provides an example of how to install a JDBC patch to an EM agent, walking through the steps of installing patch 17591700 to an EM12c Release 3 agent.
Here are the steps:
1. Identify the version of the JDBC client in the Agent Binaries home.
$ setenv ORACLE_HOME /u01/em12/core/12.1.0.3.0
Note: One way to find out the Agent Binaries home is to look in file /etc/oragchomelist on the agent host. It should contain an entry for an agent install in the format of:
<Agent Binaries home>:<Agent home>
For example, file /etc/oragchomelist contains: /u01/em12/core/12.1.0.3.0:/u01/em12/agent_inst
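That lookup can be scripted with a simple field split on the colon. A minimal sketch, using the example entry from this post as a sample line rather than reading /etc/oragchomelist directly:

```shell
#!/bin/sh
# Split an oragchomelist-style entry of the form <binaries_home>:<agent_home>.
# The sample line below is the example value from this post.
agent_line='/u01/em12/core/12.1.0.3.0:/u01/em12/agent_inst'

binaries_home=$(printf '%s' "$agent_line" | cut -d: -f1)
agent_home=$(printf '%s' "$agent_line" | cut -d: -f2)

echo "Agent Binaries home: $binaries_home"
echo "Agent home:          $agent_home"
```
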
$ $ORACLE_HOME/OPatch/opatch lsinv -details | grep 'Oracle JDBC/OCI Instant Client'
Oracle JDBC/OCI Instant Client 11.1.0.7.0
In this case, the version of the JDBC client is 11.1.0.7.0.
2. Download the patch from MOS (My Oracle Support) website. The version of the JDBC client should match the version of the database for which the patch is intended. Stage the patch zip file, p17591700_111070_Generic.zip, on the agent host. I stage the patch in directory /u01/stage/jdbc_patch. I will refer to this location as the patch stage directory.
3. Go to the patch stage directory and extract files from the zip file.
$ cd /u01/stage/jdbc_patch
$ unzip p17591700_111070_Generic.zip
4. Stop the agent.
$ /u01/em12/agent_inst/bin/emctl stop agent
Oracle Enterprise Manager Cloud Control 12c Release 3
Copyright (c) 1996, 2013 Oracle Corporation. All rights reserved.
Stopping agent ..... stopped.
5. Install the patch.
$ cd /u01/stage/jdbc_patch/17591700
$ $ORACLE_HOME/OPatch/opatch apply
(OPatch version banner omitted)
Oracle Home : /u01/em12/core/12.1.0.3.0
Central Inventory : /u01/app/oraInventory
from : /u01/em12/core/12.1.0.3.0/oraInst.loc
Log file location : /u01/em12/core/12.1.0.3.0/cfgtoollogs/opatch/17591700_Mar_17_2014_12_03_05/apply2014-03-17_12-03-05PM_1.log
Applying interim patch '17591700' to OH '/u01/em12/core/12.1.0.3.0'
Verifying environment and performing prerequisite checks...
Patch 17591700: Optional component(s) missing : [ oracle.dbjava.jdbc, 11.1.0.7.0 ]
Interim patch 17591700 is a superset of the patch(es) [ 16087066 ] in the Oracle Home
OPatch will roll back the subset patches and apply the given patch.
All checks passed.
Backing up files...
Rolling back interim patch '16087066' from OH '/u01/em12/core/12.1.0.3.0'
Patching component oracle.dbjava.ic, 11.1.0.7.0...
(individual "Updating jar file" messages for the class files restored in ojdbc6.jar and ojdbc5.jar omitted for brevity)
RollbackSession removing interim patch '16087066' from inventory
OPatch back to application of the patch '17591700' after auto-rollback.
Patching component oracle.dbjava.ic, 11.1.0.7.0...
Verifying the update...
Patch 17591700 successfully applied
Log file location: /u01/em12/core/12.1.0.3.0/cfgtoollogs/opatch/17591700_Mar_17_2014_12_03_05/apply2014-03-17_12-03-05PM_1.log
6. Start the agent.
$ /u01/em12/agent_inst/bin/emctl start agent
Oracle Enterprise Manager Cloud Control 12c Release 3
Copyright (c) 1996, 2013 Oracle Corporation. All rights reserved.
Starting agent ............. started.
The above step completes the process of installing patch 17591700 to the 12.1.0.3 agent.
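For repeatability, the whole procedure can be collected into one script. This is a sketch, not a supported tool: the paths are the example values from this walkthrough, and with DRY_RUN=1 (the default here) it only prints the commands it would run.

```shell
#!/bin/sh
# Sketch of the full agent JDBC patch procedure. Paths are the example values
# from this post; adjust for your environment. DRY_RUN=1 prints each command
# instead of executing it.
DRY_RUN=${DRY_RUN:-1}
ORACLE_HOME=/u01/em12/core/12.1.0.3.0
AGENT_INST=/u01/em12/agent_inst
PATCH_DIR=/u01/stage/jdbc_patch/17591700

run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "WOULD RUN: $*"
    else
        "$@"
    fi
}

run "$AGENT_INST/bin/emctl" stop agent      # step 4: stop the agent
run cd "$PATCH_DIR"                         # step 5: install the patch
run "$ORACLE_HOME/OPatch/opatch" apply
run "$AGENT_INST/bin/emctl" start agent     # step 6: start the agent
```
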
Oracle Exadata Database Machine is an ideal consolidation platform for an enterprise database cloud, and Oracle Enterprise Manager provides an optimized and comprehensive solution to rapidly set up, manage, and deliver enterprise clouds. Significant innovations have been delivered in the cloud computing space via Exadata X4, Enterprise Manager 12c, and Database 12c, and customers can start realizing the benefits of this combination, one of the most powerful enterprise database cloud solutions in the industry.
As per OracleVoice blog on Forbes.com: "Why Database As A Service (DBaaS) Will Be The Breakaway Technology of 2014":
"Database as a Service (DBaaS) is arguably the next big thing in IT. Indeed, the market analysis firm 451 Research projects an astounding 86% cumulative annual growth rate, with annual revenues from DBaaS providers rising from $150 million in 2012 to $1.8 billion by 2016."
In this blog post, I will walk through steps that simplify DBaaS setup on Exadata and describe the automation kits available to rapidly achieve the following:
There are two separate automation kits provided with EM 12c: the first enables rapid monitoring and management setup of the Exadata stack in EM 12c, and the second enables rapid setup of DBaaS.
1) Deploy an EM 12c site or use an existing site - If you do not have an existing EM 12c R3 setup, you can use the EM Automation Kit for Exadata to install EM 12c R3 Plug-in Update 1. This kit is available via patch 17036016 on My Oracle Support (MOS) and can be used to deploy the latest EM 12c release. Refer to the patch readme and MOS note "Obtaining the Oracle Enterprise Manager Setup Automation kit for Exadata (Doc ID 1440951.1)" for additional details. Please note that this will set up the EM12c Oracle Management Service (OMS) along with the Management Repository (OMR); they can be deployed on a single machine, or the OMS and OMR can be set up on different machines.
2) Deploy EM 12cR3 agents and required plug-ins on the Exadata machine - The Agent kit is also part of the same EM Automation Kit for Exadata and can be used for deploying agents and plug-ins on the Exadata stack. Refer to MOS note "Obtaining the Oracle Enterprise Manager Setup Automation kit for Exadata (Doc ID 1440951.1)" for additional details. The best practice is to use the most recent version of the Agent kit and to deploy the latest plug-ins. Patch details for each platform are described in the MOS note.
The Agent kit script requires Java 1.6.0_43 or greater on the database node where it is run. The script must be run as the root OS user on the Exadata database node; however, JAVA_HOME, and PATH including JAVA_HOME/bin, must be set for the agent OS owner, so these environment variables need to be set up in the profile of the agent OS owner.
The Agent Automation kit helps with achieving the following -
Note - In the case of Exadata X4, ensure you have the latest EM 12cR3 Bundle Patch (released in January 2014). Refer to the following MOS notes -
Enterprise Manager 12.1.0.3 Bundle Patch Master Note (Doc ID 1572022.1)
Enterprise Manager for Exadata Plug-in 12cR3 Bundle Patch Bug List (Doc ID 1613177.1)
3) Discover Grid Infrastructure and RAC targets – The setup script above will discover cluster, Grid Infrastructure, RAC database, and listener targets. Discover Grid Infrastructure, ASM, and RAC targets manually if required.
4) Please note that this setup script will not discover the Oracle Exadata Database Machine target in EM 12c. You need to discover the machine using the following steps
5) Set up the Database Cloud using the Rapid Start Kit - Once you have set up Exadata management in EM 12c, the next step is to set up the database cloud. Refer to the Rapid Start Kit for setting up the cloud for both DBaaS and Pluggable DBaaS (PDBaaS). This kit will help achieve the following -
Here are brief steps for setting up the Database Cloud using the Rapid Start Kit, available in the EM 12cR3 Agent Kit, after logging in to the first database node of the Exadata machine as the EM 12c agent owner.
Note: Currently the Rapid Start Kit for DBaaS makes use of the "Exadata Data Warehouse" database profile available out-of-box. However, you can create your own DBCA-based profiles and customize dbaas_cloud_input.xml. Also, if you need to use an RMAN-backup-based or Snap Clone-based profile, you can log in to the EM12c SSA Portal as the SSA Administrator to create the profile and set up the service template.
At this stage, you will be able to manage and deliver your Exadata powered enterprise database cloud using EM 12c.
The Oracle Enterprise Manager Special Interest Group (SIG) is a growing body of IOUG members who manage or are interested in all aspects of Oracle Enterprise Manager. This IOUG SIG is managed by volunteers and supported by Oracle Enterprise Manager product managers and developers. The purpose of the SIG is to bring relevant information and education through webcasts, discussions, and networking to users interested in learning more about the product, and to share user experiences. On January 28th at 10 AM Pacific time, the Oracle Enterprise Manager SIG is hosting a webcast on "Managing Oracle Enterprise Manager Cloud Control 12c with Oracle Clusterware". In this webcast, Leighton Nelson, Associate Principal Database Administrator and Oracle ACE, will discuss the steps required to configure virtual host names and create an Oracle Clusterware resource for Oracle Enterprise Manager to provide seamless failover and improved high availability.
Thank you to all who attended our webcast on Enterprise Manager 12c Snap Clone last month. In this webcast, we talked about how EM12c Snap Clone can help:
For those who missed this webcast, the replay is available here and the slides have been uploaded to slideshare.
Feel free to reach out to us if you have any questions on Snap Clone or Database as a Service.
- Adeesh Fulay (@adeeshf)
When using Database Configuration Assistant (DBCA) to create a database instance to house the repository of Enterprise Manager Cloud Control, people often end up with an instance containing the DB Control schema, which they then have to remove before installing EM12c. To create an instance without the DB Control (SYSMAN) schema in the first place, make the following selections during the DBCA database creation process.
1. In the Database Template step, select the Custom Database option.
2. In the Management Options step, uncheck the Configure Enterprise Manager option.
Note: If DBCA locates an agent installation on the host, it will provide an option to register the database instance that you are creating with the corresponding Management Service. If you choose that option, DBCA will add the database instance to the Enterprise Manager site as a managed target upon completion of instance creation. This option, however, does not create the SYSMAN schema in the database instance.
3. In the Database Content step, uncheck the Enterprise Manager Repository option.
Remember to make the above-mentioned selections, and you will have an instance without DB Control, which is fit for housing an EM repository.
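For scripted installs, the same selections can be expressed with DBCA in silent mode. This is a sketch only: the global database name, SID, and passwords are placeholders, and the command is built but not executed here.

```shell
#!/bin/sh
# Build (but do not run) a DBCA silent-mode command roughly equivalent to the
# GUI selections above: the custom template (New_Database.dbt) and no
# Enterprise Manager configuration (-emConfiguration NONE), so no DB Control
# schema is created. gdbName/sid/passwords are placeholders.
DBCA_CMD="dbca -silent -createDatabase \
 -templateName New_Database.dbt \
 -gdbName emrep.example.com -sid emrep \
 -emConfiguration NONE \
 -sysPassword change_me -systemPassword change_me"

echo "$DBCA_CMD"
```
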
Note: From EM12c Release 2, there is also an option to create a database instance with a pre-configured EM repository; refer to the EM12c installation documentation for instructions.
Happy New Year to all! For the first blog post of the new year, let's look at a relatively new feature in EM that has gained significant popularity over the last year - EM 12c DBaaS Snap Clone.
The ‘Oracle Cloud Management Pack for Oracle Database’, a.k.a. the Database as a Service (DBaaS) feature in EM 12c, has grown tremendously since its release two years ago. It started with basic single instance and RAC database provisioning, a technical service catalog, an out-of-box self service portal, metering and chargeback, etc. Since then we have added provisioning of schemas and pluggable databases, full clones using RMAN backups, and Snap Clone. This video showcases the various EM12c DBaaS features.
This blog will cover one of the most exciting and popular features – Snap Clone. In one line, Snap Clone is a self service way of creating rapid and space efficient clones of large (~TB) databases.
Self Service - empowers the end users (developers, testers, data analysts, etc) to get access to database clones whenever they need it.
Rapid - implies the time it takes to clone the database. This is in minutes and not hours, days, or weeks.
Space Efficient - represents the significant reduction in storage (>90%) required for cloning databases.
To best explain the benefits of Snap Clone, let’s look at a Banking customer scenario:
Based on the above scenario, the storage required, if using traditional cloning techniques, can be calculated as follows:
5 Prod DB = 30 TB
5 Standby DB = 30 TB
5 Masked DB = 30 TB (These will be used for creating clones)
6 Clones (6 * 30 TB) = 180 TB
Total = 270 TB
Time = days to weeks
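A quick sanity check of the arithmetic above (a throwaway sketch, nothing EM-specific):

```shell
#!/bin/sh
# Recompute the traditional-cloning storage totals from the scenario above.
PROD_TB=30                      # 5 Prod DB     = 30 TB
STANDBY_TB=30                   # 5 Standby DB  = 30 TB
MASKED_TB=30                    # 5 Masked DB   = 30 TB (source for clones)
CLONES_TB=$((6 * MASKED_TB))    # 6 full clones = 6 * 30 TB = 180 TB
TOTAL_TB=$((PROD_TB + STANDBY_TB + MASKED_TB + CLONES_TB))
echo "Total = ${TOTAL_TB} TB, time = days to weeks"
```
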
As the numbers indicate, this is quite horrible. Not only does 30 TB turn into 270 TB, but creating 6 clones of all production databases would take forever. In addition, there are other issues with data cloning, like:
Snap Clone to the Rescue
All of the above issues lead to slow turnaround times, and users have to wait for days and weeks to get access to their databases. Basically, we end up with competing priorities and requirements, where the user demands self service access, rapid cloning, and the ability to revert data changes, while IT demands standardization, better control, reduction in storage and administrative overhead, better visibility into the database stack, etc.
EM 12c DBaaS Snap Clone tries to address all these issues. It provides:
So how does Snap Clone work?
The secret sauce lies in the Storage Management Framework (SMF) plug-in. This plug-in sits between the storage system and the DBA, and provides the much needed layer of abstraction required to shield DBAs and users from the nuances of the different storage systems. At the storage level, Snap Clone makes use of storage copy-on-write (or similar) technology. There are two options in terms of using and interacting with storage:
1. Direct connection to storage: Here storage admins can register NetApp and ZFS storage appliances with EM, and EM then connects directly to the storage appliance and performs all required snapshot and clone operations. This approach requires you to license the relevant options on the storage appliance, but it is the easiest, most efficient, and most fault tolerant approach.
2. Connection to storage via the ZFS file system: This is a storage-vendor-agnostic solution and can be used by any customer. Here, instead of connecting to the storage directly, the storage admin mounts the volumes on a Solaris server and formats them with the ZFS file system. All snapshot and clone operations required on the storage are then conducted via the ZFS file system. The good thing about this approach is that it does not require thin cloning options to be licensed on the storage, since the ZFS file system provides these capabilities.
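To make the second option concrete, here is a sketch of the underlying copy-on-write ZFS operations (the dataset names are hypothetical, and the SMF plug-in drives the equivalent calls for you; with DRY_RUN=1 the commands are only printed):

```shell
#!/bin/sh
# Illustrative copy-on-write operations behind a ZFS-based Snap Clone setup.
# Dataset names are hypothetical. DRY_RUN=1 prints instead of executing.
DRY_RUN=${DRY_RUN:-1}
DATASET=tank/masked_db      # ZFS dataset holding the masked database copy
CLONE=tank/dev_clone1       # writable clone handed to a self-service user

zfs_cmd() {
    if [ "$DRY_RUN" = 1 ]; then echo "WOULD RUN: zfs $*"; else zfs "$@"; fi
}

zfs_cmd snapshot "${DATASET}@baseline"        # point-in-time snapshot, near-zero space
zfs_cmd clone "${DATASET}@baseline" "$CLONE"  # writable copy-on-write clone
```
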
For more details on how to setup and use Snap Clone, refer to a previous blog post.
Now, let's go back to our Banking customer scenario and see how Snap Clone helped them reduce their storage cost and time to clone.
With traditional cloning:
5 Prod DB = 30 TB
5 Standby DB = 30 TB
5 Masked DB = 30 TB
6 Clones (6 * 30 TB) = 180 TB
Total = 270 TB
Time = days to weeks

With Snap Clone:
5 Prod DB = 30 TB
5 Standby DB = 30 TB
5 Masked DB = 30 TB
6 Clones (6 * 5 * 2 GB) = 60 GB
Total = 90 TB
Time = minutes
Assuming the clone databases will see minimal writes, we allocate about 2 GB of write space per clone. With 6 clones for each of the 5 production databases, this totals just 60 GB in required storage. That is a whopping 99.97% savings in storage for the clones. Plus, these clones are created in a matter of minutes, not the usual days or weeks. The product has out-of-the-box charts that show the storage savings across all storage devices and cloned databases. See the screenshot below.
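The 99.97% figure can be verified in a few lines of shell, comparing the ~2 GB of write space per clone (the assumption from this scenario) against the 180 TB that full clones would need:

```shell
#!/bin/sh
# Verify the storage-savings claim for the clones.
SNAP_GB=$((6 * 5 * 2))      # 6 clones * 5 DBs * 2 GB write space = 60 GB
FULL_GB=$((180 * 1024))     # 180 TB for full clones, expressed in GB
savings=$(awk -v s="$SNAP_GB" -v f="$FULL_GB" \
    'BEGIN { printf "%.2f", (1 - s / f) * 100 }')
echo "Snap Clone uses ${SNAP_GB} GB instead of ${FULL_GB} GB (${savings}% saved)"
```
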
As I said earlier, Snap Clone is most effective when cloning large (~TB) databases. Common scenarios where we see customers make the best use of Snap Clone are:
It's obvious that Snap Clone has a strong affinity to applications, since it is application data that you want to clone and use. Hence it is important to add that the Snap Clone feature, when combined with EM12c Middleware as a Service (MWaaS), can provide a complete end-to-end self service application deployment experience. If you have existing portals or need to integrate Snap Clone with existing processes, use our RESTful APIs for easy integration with third-party systems.
In summary, Snap Clone is a new and exciting way of dealing with data cloning challenges. It shields DBAs from the nuances of different storage systems, while allowing end users to request and use clones in a rapid and self service fashion. All of this while saving storage costs. So try this feature out today, and your development and test teams will thank you forever.
In subsequent blog posts, we will look at some popular deployment models used with Snap Clone.
-- Adeesh Fulay (@adeeshf)