Friday Jun 13, 2014

EM12c Release 4: New Compliance features including DB STIG Standard

Enterprise Manager’s compliance framework is a powerful and robust feature that lets users continuously validate their target configurations against a specified standard. Enterprise Manager’s compliance library is filled with a wide variety of standards based on Oracle’s recommendations, best practices and security guidelines. These standards can easily be associated to a target to generate a report showing its degree of conformance to that standard. (To get an overview of Database compliance management in Enterprise Manager, see this screenwatch.)

Starting with Enterprise Manager 12c Release 4, the compliance library contains a new standard based on the US Defense Information Systems Agency (DISA) Security Technical Implementation Guide (STIG) for Oracle Database 11g. According to the DISA website, “The STIGs contain technical guidance to ‘lock down’ information systems/software that might otherwise be vulnerable to a malicious computer attack.” In essence, a STIG is a technical checklist an administrator can follow to secure a system or software. Many US government entities are required to follow these standards; however, many non-US government entities and commercial companies also base their standards directly or partially on these STIGs.

You can find more information about the Oracle Database and other STIGs on the DISA website.

The Oracle Database 11g STIG consists of two categories of checks, installation and instance. Installation checks focus primarily on the security of the Oracle Home while the instance checks focus on the configuration of the running database instance itself. If you view the STIG compliance standard in Enterprise Manager, you will see the rules organized into folders corresponding to these categories.

The rule names contain a rule ID (DG0020, for example) which maps directly to the check name in the STIG checklist, along with a helpful brief description. The description field contains the text from the STIG documentation to aid in understanding the purpose of the check. All of the rules have also been documented in the Oracle Database Compliance Standards reference documentation.

In order to use this standard, both the OMS and agent must be at version 12.1.0.4, as it takes advantage of several features new in this release, including:

  • Agent-Side Compliance Rules
  • Manual Compliance Rules
  • Violation Suppression
  • Additional BI Publisher Compliance Reports

Agent-Side Compliance Rules

Agent-side compliance rules are essentially the result of a tighter integration between Configuration Extensions and Compliance Rules. If you ever created custom compliance content in past versions of Enterprise Manager, you likely used Configuration Extensions to collect additional information into the EM repository so it could be used in a repository compliance rule. This process, although powerful, made it easy to get confused when modeling the SQL in the rule creation wizard. With agent-side rules, the user only needs to choose the Configuration Extension/Alias combination, and that’s it. Enterprise Manager does the rest for you.

This tighter integration also means their lifecycles are managed together. When you associate an agent-side compliance standard to a target, the required Configuration Extensions are deployed automatically for you. The opposite is also true: when you unassociate the compliance standard, the Configuration Extensions are undeployed as well.

The Oracle Database STIG compliance standard is implemented as an agent-side standard which is why you simply need to associate the standard to your database targets without previously deploying the associated Configuration Extensions.

You can learn more about using Agent-Side compliance rules in the screenwatch Using Agent-Side Compliance Rules on Enterprise Manager's Lifecycle Management page on OTN.

Manual Compliance Rules

There are many checks in the Oracle Database STIG, as well as in other common standards, which simply cannot be automated. These can be as simple as “Ensure the datacenter entrance is secured.” or as complex as Oracle Database STIG Rule DG0186, “The database should not be directly accessible from public or unauthorized networks”. These checks require a human to perform them and attest to their successful completion.

Enterprise Manager now supports these types of checks with manual rules. When first associated to a target, each manual rule generates a single violation. These violations must be manually cleared by a user, who in essence attests to the check’s successful completion. The user can permanently clear the violation or set a future date on which the violation will be regenerated. Setting a future date is useful when policy dictates a periodic re-validation of conformance, at which point the user must re-perform the check. The optional reason field gives the user an opportunity to provide details of the check results.

Violation Suppression

There are situations that require temporarily or permanently suppressing a legitimate violation or finding, such as approved exceptions and grace periods. Enterprise Manager now supports this. Unlike clearing a manual rule violation, suppression simply removes the violation from the compliance results UI, and in turn its negative impact on the score. The violation still remains in the EM repository and can be accounted for in compliance reports. Temporarily suppressing a violation can give users a grace period in which to address an issue. If the issue is not addressed within the specified period, the violation automatically reappears in the results. Again, the user may enter a reason for the suppression, which is permanently saved with the event along with the suppressing user’s ID.

Additional BI Publisher compliance reports

As I am sure you have learned by now, BI Publisher now ships with, and is integrated into, Enterprise Manager 12c Release 4. This means users can take full advantage of the powerful reporting engine by using the Oracle-provided reports or building their own. There are many new compliance-related reports available in this release, covering all aspects including association status and the compliance library, as well as summary and detailed results reports.

 10 New Compliance Reports

Compliance Summary Report Example showing STIG results


Together with the Oracle Database 11g STIG compliance standard these features provide a complete solution for easily auditing and reporting the security posture of your Oracle Databases against this well known benchmark. You can view an overview presentation and demo in the screenwatch Using the STIG Compliance Standard on Enterprise Manager's Lifecycle Management page on OTN.

Additional EM12c Compliance Management Information

Compliance Management - Overview ( Presentation )

Compliance Management - Custom Compliance on Default Data (How To)

Compliance Management - Custom Compliance using SQL Configuration Extension (How To)

Compliance Management - Custom Compliance using Command Configuration Extension (How To)

Tuesday Jun 10, 2014

EM12c: Using the LIST verb in emcli

Many of us who use EM CLI to write scripts and automate daily tasks should not miss the new list verb released with Oracle Enterprise Manager 12c Release 4. The combination of list and the Jython-based scripting support in EM CLI makes it easier to automate complex tasks with just a few lines of code. Before I jump into a script, let me highlight the key attributes of the list verb and why it’s simply excellent!

1. Multiple resources under a single verb:
A resource can be a set of users, targets, etc. Using the list verb, you can retrieve information about a resource from the repository database.

Here is an example which retrieves the list of administrators within EM.
Standard mode
$ emcli list -resource="Administrators"

Interactive mode
$ emcli
emcli>list(resource="Administrators")
The output will be the same as standard mode.

Script mode
$ emcli @<script_name>.py
Enter password :  ******
The output will be the same as standard mode.

Contents of the script
print list(resource="Administrators",jsonout=False).out()

To get a list of all available resources use
$ emcli list -help

With every release of EM, more resources are being added to the list verb. If there is a resource you feel would be valuable, contact Oracle Support to log an enhancement request with product development. Be sure to say how the resource is going to help improve your daily tasks.

2. Consistent Formatting:
It is possible to format the output of any resource consistently using these options:

-columns
  This option is used to specify which columns should be shown in the output.

Here is an example which shows the list of administrators and their account status
$ emcli list -resource="Administrators" -columns="USER_NAME,REPOS_ACCOUNT_STATUS"

To get a list of columns in a resource use:
$ emcli list -resource="Administrators" -help

You can also specify the width of each column. For example, here the column width of user_type is set to 20 and department to 30.
$ emcli list -resource=Administrators -columns="USER_NAME,USER_TYPE:20,COST_CENTER,CONTACT,DEPARTMENT:30"

This is useful if your terminal is too small or you need to fine tune a list of specific columns for your quick use or improved readability.

-colsize
  This option is used to resize column widths.
Here is the same example as above, but using -colsize to define the width of user_type to 20 and department to 30.
$ emcli list -resource=Administrators -columns="USER_NAME,USER_TYPE,COST_CENTER,CONTACT,DEPARTMENT" -colsize="USER_TYPE:20,DEPARTMENT:30"

The existing standard EM CLI formatting options are also available in the list verb. They are:
-format="name:pretty" | -format="name:script" | -format="name:csv" | -noheader | -script
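If you capture the CSV output in a script, it can be processed with standard tooling. Here is a minimal Python sketch that parses `-format="name:csv"` style output into dictionaries; the sample rows below are invented for illustration, not real emcli output:

```python
import csv
import io

# Sample output as might be produced by:
#   emcli list -resource="Administrators" -columns="USER_NAME,REPOS_ACCOUNT_STATUS" -format="name:csv"
# (illustrative values only)
sample = """USER_NAME,REPOS_ACCOUNT_STATUS
SYSMAN,OPEN
SUBIN,OPEN
"""

# DictReader turns each CSV row into a dict keyed by the header names
rows = list(csv.DictReader(io.StringIO(sample)))

# Example post-processing: find accounts that are not OPEN
locked = [r["USER_NAME"] for r in rows if r["REPOS_ACCOUNT_STATUS"] != "OPEN"]
print(rows[0]["USER_NAME"])
```

This is handy when feeding list verb output into other scripts or reports.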

There are so many uses depending on your needs. Have a look at the resources and columns in each resource. Refer to the EMCLI book in EM documentation for more information.

3. Search:
Using the -search option in the list verb makes it possible to search for a specific row in a specific column within a resource, similar to a SQL WHERE clause. The following operators are supported:
           is (Must be followed by null or not null)

Here is an example which searches for all EM administrators in the marketing department located in the USA.
$emcli list -resource="Administrators" -search="DEPARTMENT ='Marketing'" -search="LOCATION='USA'"

Here is another example which shows all the named credentials created since a specific date. 
$emcli list -resource=NamedCredentials -search="CredCreatedDate > '11-Nov-2013 12:37:20 PM'"
Note that the timestamp has to be in the format DD-MON-YYYY HH:MI:SS AM/PM
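If you are generating that -search argument from a script, Python's strftime can produce the required DD-MON-YYYY HH:MI:SS AM/PM format. A small sketch (the helper name is my own, and it assumes the default C locale for month and AM/PM names):

```python
from datetime import datetime

def emcli_timestamp(dt):
    # %d-%b-%Y %I:%M:%S %p -> e.g. 11-Nov-2013 12:37:20 PM,
    # the format the list verb expects in date comparisons
    return dt.strftime("%d-%b-%Y %I:%M:%S %p")

ts = emcli_timestamp(datetime(2013, 11, 11, 12, 37, 20))
search = "CredCreatedDate > '%s'" % ts
print(search)
```

The resulting string can be passed directly as `-search="..."` on the emcli command line.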

Some resources need a bind variable to be passed to get output. A bind variable is created in the resource and then referenced in the command. For example, this command lists all the default preferred credentials for target type oracle_database:

$ emcli list -resource="PreferredCredentialsDefault" -bind="TargetType='oracle_database'" -colsize="SetName:15,TargetType:15"

You can provide multiple bind variables.

To verify whether a column is searchable or requires a bind variable, use the -help option. Here is an example:
$ emcli list -resource="PreferredCredentialsDefault" -help

4. Secure access
When the list verb collects data, it only displays content to which the administrator currently logged into EM CLI has access.

For example, consider this use case:
AdminA has access only to TargetA.
AdminA logs into EM CLI.
Executing the list verb to get the list of all targets will only show TargetA.

5. User defined SQL
Using the -sql option, user-defined SQL can be executed. The SQL provided in the -sql option is executed as the EM user MGMT_VIEW, which has read-only access to the EM published MGMT$ database views in the SYSMAN schema.

To get the list of EM published MGMT$ database views, go to the Extensibility Programmer's Reference book in the EM documentation; there is a chapter about Using Management Repository Views. It is always recommended to stick to the documented MGMT$ database views. If you use a view that is not in that chapter, say a hypothetical MGMT$ABC, it is not supported, and its structure or the data in it might change during an upgrade. Using a supported view ensures that your scripts using -sql will continue working after an upgrade.

Here’s an example
  $ emcli list -sql='select * from mgmt$target'

6. JSON output support
JSON (JavaScript Object Notation) represents data as a collection of name/value pairs. There is a lot of reading material about JSON online for more information.
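As a sketch of what that structure looks like, here is a small Python 3 example that filters a payload shaped like the list verb's JSON output (a dict with a 'data' key holding one record per row); the records themselves are invented for illustration:

```python
import json

# Illustrative JSON in the shape returned by list(...).out()
payload = json.loads("""
{"data": [
  {"TARGET_NAME": "dbxyz", "TARGET_TYPE": "oracle_database", "PROPERTY_VALUE": "11.2.0.4"},
  {"TARGET_NAME": "dbabc", "TARGET_TYPE": "oracle_database", "PROPERTY_VALUE": "12.1.0.1"}
]}
""")

# Keep only the 11.2 databases, as the script below does via -search
names = [row["TARGET_NAME"] for row in payload["data"]
         if row["PROPERTY_VALUE"].startswith("11.2")]
print(names)
```

Once the output is parsed this way, each row is a plain dictionary you can loop over, which is exactly how the script below consumes it.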

As an example, we had a requirement where an EM administrator had many 11.2 databases in a test environment, and the developers requested that the lifecycle status be changed from Test to Production. This meant the admin had to go to the EM “All Targets” page, identify the set of 11.2 databases, and then go into each target database page and manually change the property to Production. It sounds easy, but this administrator had numerous targets, and the task is repeated for every release cycle.

We told him there is an easier way: a script he can reuse whenever anyone wants to change a set of targets to a different lifecycle status.

Here is a Jython script which uses list and JSON to change the LifeCycle Status property value of all 11.2 database targets.

If you are new to scripting and Jython, I suggest visiting the basic chapters of any Jython tutorial. Understanding Jython is important for writing the logic for your use case. If you already write scripts in Perl or shell, or know a programming language like Java, you can easily follow the logic.

Disclaimer: The scripts in this post are subject to the Oracle Terms of Use located here.

 1 from emcli import *
 2 search_list = ['PROPERTY_NAME=\'DBVersion\'','TARGET_TYPE=\'oracle_database\'','PROPERTY_VALUE LIKE \'11.2%\'']
 3 if len(sys.argv) == 2:
 4    print login(username=sys.argv[0])
 5    l_prop_val_to_set = sys.argv[1]
 6    l_targets = list(resource="TargetProperties", search=search_list, columns="TARGET_NAME,TARGET_TYPE,PROPERTY_NAME")
 7    for target in l_targets.out()['data']:
 8       t_pn = 'LifeCycle Status'
 9       print "INFO: Setting Property name " + t_pn + " to value " + l_prop_val_to_set + " for " + target['TARGET_NAME']
 10      print set_target_property_value(property_records=target['TARGET_NAME'] + ":" + target['TARGET_TYPE'] + ":" + t_pn + ":" + l_prop_val_to_set)
 11 else:
 12    print "\n ERROR: Property value argument is missing"
 13    print "\n INFO: Format to run this file is <script_name>.py <username> <Database Target LifeCycle Status Property Value>"

You can download the script from here. I could not upload the file with a .py extension, so you need to rename the file with a .py extension before executing it using emcli.

A line by line explanation for beginners:

 1 Imports the emcli verbs as functions.
 2 search_list is a variable passed to the -search option of the list verb. I am using escape characters for the single quotes. To pass more than one value for the same option in the list verb, define them as above: comma-separated values surrounded by square brackets.
 3 An “if” condition to ensure the user provides two arguments with the script; otherwise line #12 prints an error message.
 4 Logs into EM. You can remove this if you have set up EM CLI with autologin. For more details about setup and autologin, please go to the EM CLI book in the EM documentation.
 5 l_prop_val_to_set is another variable. This is the property value to be set. Remember we are changing the value from Test to Production. The benefit of this variable is that you can reuse the script to change the property value from and to any other values.
 6 Here the output of the list verb is stored in l_targets. In the list verb I pass the resource as TargetProperties, search as the search_list variable, and only the three columns needed for this task: target_name, target_type and property_name.
 7 A for loop. The data in l_targets is available in JSON format. Using the for loop, each row will now be available in the ‘target’ variable.
 8 t_pn is the “LifeCycle Status” variable. If required, I could make this an input as well and use the script to change any target property. In this example, I just wanted to change the “LifeCycle Status”.
 9 A message informing the user that the script is setting the property value for the current target.
 10 This line shows the set_target_property_value verb which sets the value using the property_records option. Once it is set for one target, the loop moves to the next one. In my example, I am just showing three dbs, but the real use is when you have 20 or 50 targets.
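If you want to experiment with this control flow outside of emcli, the same argument-checking and looping logic can be sketched in plain Python 3. Here fake_list is a hypothetical stand-in for the list verb that returns canned rows in the same {'data': [...]} shape; the target names are invented:

```python
def fake_list(resource, search, columns):
    # Stand-in for the emcli list verb: returns canned rows in the
    # same {'data': [...]} shape the real verb produces.
    return {"data": [
        {"TARGET_NAME": "dbxyz", "TARGET_TYPE": "oracle_database"},
        {"TARGET_NAME": "dbabc", "TARGET_TYPE": "oracle_database"},
    ]}

def build_property_records(argv):
    # Mirrors the script's argument check: exactly two arguments expected
    if len(argv) != 2:
        return None  # the error branch (lines 11-13 in the script)
    username, l_prop_val_to_set = argv  # username unused in this offline sketch
    records = []
    rows = fake_list("TargetProperties", [], "TARGET_NAME,TARGET_TYPE,PROPERTY_NAME")
    for target in rows["data"]:
        t_pn = "LifeCycle Status"
        # Same target:type:property:value record passed to
        # set_target_property_value in the real script
        records.append(target["TARGET_NAME"] + ":" + target["TARGET_TYPE"]
                       + ":" + t_pn + ":" + l_prop_val_to_set)
    return records

print(build_property_records(["subin", "Production"]))  # two records, one per target
```

This lets you validate the record-building logic before pointing the real script at an EM repository.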

The script is executed as:
$ emcli @<script_name>.py subin Production

The recommendation is to first test scripts before running them on a production system. We tested on a small set of targets while optimizing the script for fewer lines of code and better messaging.

For your quick reference, you can see the resources available with the list verb in Enterprise Manager by running:
$ emcli list -help

Watch this space for more blog posts using the list verb and EM CLI Scripting use cases. I hope you enjoyed reading this blog post and it has helped you gain more information about the list verb. Happy Scripting!!


Stay Connected:
Twitter |
Facebook | YouTube | Linkedin | Newsletter
Download the Oracle Enterprise Manager 12c Mobile app

Monday Jun 09, 2014

EM12c Release 4: New EMCLI Verbs

Here are the new EM CLI verbs in Enterprise Manager 12c Release 4. These verbs help you write new scripts or enhance your existing scripts for further automation.

Basic Administration Verbs
 invoke_ws - Invoke EM web service.

ADM Verbs
 associate_target_to_adm - Associate a target to an application data model.
 export_adm - Export Application Data Model to a specified .xml file.
 import_adm - Import Application Data Model from a specified .xml file.
 list_adms - List the names, target names and application suites of existing Application Data Models
 verify_adm - Submit an application data model verify job for the target specified.

BI Publisher Reports Verbs
 grant_bipublisher_roles - Grants access to the BI Publisher catalog and features.
 revoke_bipublisher_roles - Revokes access to the BI Publisher catalog and features.

Blackout Verbs
 create_rbk - Create a Retro-active blackout.

CFW Verbs
 cancel_cloud_service_requests -  To cancel cloud service requests
 delete_cloud_service_instances -  To delete cloud service instances
 delete_cloud_user_objects - To delete cloud user objects.
 get_cloud_service_instances - To get information about cloud service instances
 get_cloud_service_requests - To get information about cloud requests
 get_cloud_user_objects - To get information about cloud user objects.

Chargeback Verbs
 add_chargeback_entity - Adds the given entity to Chargeback.
 assign_charge_plan - Assign a plan to a chargeback entity.
 assign_cost_center - Assign a cost center to a chargeback entity.
 create_charge_entity_type - Create  charge entity type
 export_charge_plans - Exports charge plans metadata to file
 export_custom_charge_items -  Exports user defined charge items to a file
 import_charge_plans - Imports charge plans metadata from given file
 import_custom_charge_items -  Imports user defined charge items metadata from given file
 list_charge_plans - Gives a list of charge plans in Chargeback.
 list_chargeback_entities - Gives a list of all the entities in Chargeback
 list_chargeback_entity_types - Gives a list of all the entity types that are supported in Chargeback
 list_cost_centers - Lists the cost centers in Chargeback.
 remove_chargeback_entity - Removes the given entity from Chargeback.
 unassign_charge_plan - Un-assign the plan associated to a chargeback entity.
 unassign_cost_center - Un-assign the cost center associated to a chargeback entity.

Configuration/Association History
 disable_config_history - Disable configuration history computation for a target type.
 enable_config_history - Enable configuration history computation for a target type.
 set_config_history_retention_period - Sets the amount of time for which Configuration History is retained.

 config_compare - Submits the configuration comparison job
 get_config_templates - Gets all the comparison templates from the repository

Compliance Verbs
 fix_compliance_state -  Fix compliance state by removing references in deleted targets.

Credential Verbs

Data Subset Verbs
 export_subset_definition - Exports specified subset definition as XML file at specified directory path.
 generate_subset - Generate subset using specified subset definition and target database.
 import_subset_definition - Import a subset definition from specified XML file.
 import_subset_dump - Imports dump file into specified target database.
 list_subset_definitions - Get the list of subset definition, adm and target name

Delete pluggable Database Job Verbs
 delete_pluggable_database - Delete a pluggable database

Deployment Procedure Verbs
 get_runtime_data - Get the runtime data of an execution

Discover and Push to Agents Verbs
 generate_discovery_input - Generate Discovery Input file for discovering Auto-Discovered Domains
 refresh_fa - Refresh Fusion Instance
 run_fa_diagnostics - Run Fusion Applications Diagnostics

Fusion Middleware Provisioning Verbs
 create_fmw_domain_profile - Create a Fusion Middleware Provisioning Profile from a WebLogic Domain
 create_fmw_home_profile - Create a Fusion Middleware Provisioning Profile from an Oracle Home
 create_inst_media_profile - Create a Fusion Middleware Provisioning Profile from Installation Media

Incident Rules Verbs
 add_target_to_rule_set - Add a target to an enterprise rule set.
 delete_incident_record - Delete one or more open incidents
 remove_target_from_rule_set - Remove a target from an enterprise rule set.

 Job Verbs
 export_jobs - Export job details in to an xml file
 import_jobs - Import job definitions from an xml file
 job_input_file - Supply details for a job verb in a property file
 resume_job - Resume a job or set of jobs
 suspend_job - Suspend a job or set of jobs

 Oracle Database as Service Verbs
 config_db_service_target - Configure DB Service target for OPC

Privilege Delegation Settings Verbs
 clear_default_privilege_delegation_setting - Clears the default privilege delegation setting for a given list of platforms
 set_default_privilege_delegation_setting - Sets the default privilege delegation setting for a given list of platforms
 test_privilege_delegation_setting - Tests a Privilege Delegation Setting on a host

SSA Verbs
 cleanup_dbaas_requests - Submit cleanup request for failed request
 create_dbaas_quota - Create Database Quota for a SSA User Role
 create_service_template - Create a Service Template
 delete_dbaas_quota - Delete the Database Quota setup for a SSA User Role
 delete_service_template - Delete a given service template
 get_dbaas_quota - List the Database Quota setup for all SSA User Roles
 get_dbaas_request_settings - List the Database Request Settings
 get_service_template_detail - Get details of a given service template
 get_service_templates -  Get the list of available service templates
 rename_service_template -  Rename a given service template
 update_dbaas_quota - Update the Database Quota for a SSA User Role
 update_dbaas_request_settings - Update the Database Request Settings
 update_service_template -  Update a given service template.

 get_saved_configs  - Gets the saved configurations from the repository

 Server Generated Alert Metric Verbs
 validate_server_generated_alerts  - Server Generated Alert Metric Verb

Services Verbs
 edit_sl_rule - Edit the service level rule for the specified service

Siebel Verbs
 list_siebel_enterprises -  List Siebel enterprises currently monitored in EM
 list_siebel_servers -  List Siebel servers under a specified siebel enterprise
 update_siebel - Update a Siebel enterprise or its underlying servers

SiteGuard Verbs
 add_siteguard_aux_hosts -  Associate new auxiliary hosts to the system
 configure_siteguard_lag -  Configure apply lag and transport lag limit for databases
 delete_siteguard_aux_host -  Delete auxiliary host associated with a site
 delete_siteguard_lag -  Erases apply lag or transport lag limit for databases
 get_siteguard_aux_hosts -  Get all auxiliary hosts associated with a site
 get_siteguard_health_checks -  Shows schedule of health checks
 get_siteguard_lag -  Shows apply lag or transport lag limit for databases
 schedule_siteguard_health_checks -  Schedule health checks for an operation plan
 stop_siteguard_health_checks -  Stops all future health check execution of an operation plan
 update_siteguard_lag -  Updates apply lag and transport lag limit for databases

Software Library Verbs
 stage_swlib_entity_files -  Stage files of an entity from Software Library to a host target.

Target Data Verbs
 create_assoc - Creates target associations
 delete_assoc - Deletes target associations
 list_allowed_pairs - Lists allowed association types for specified source and destination
 list_assoc - Lists associations between source and destination targets
 manage_agent_partnership - Manages partnership between agents. Used for explicitly assigning agent partnerships

Trace Reports
 generate_ui_trace_report  -  Generate and download UI Page performance report (to identify slow rendering pages)

 add_virtual_platform - Add Oracle Virtual Platform(s).
 modify_virtual_platform - Modify Oracle Virtual Platform.

To get more details about each verb, execute
$ emcli help <verb_name>
Example: $ emcli help list_assoc

New resources in list verb
These are the new resources in EM CLI list verb :

Credential Resource Group
  PreferredCredentialsDefaultSystemScope - Preferred credentials (System Scope)
  PreferredCredentialsSystemScope - Target preferred credential

Privilege Delegation Settings
  TargetPrivilegeDelegationSettingDetails  - List privilege delegation setting details on a host
  TargetPrivilegeDelegationSetting - List privilege delegation settings on a host
  PrivilegeDelegationSettings  - Lists all Privilege Delegation Settings
  PrivilegeDelegationSettingDetails - Lists details of  Privilege Delegation Settings

To get more details about each resource, execute
$ emcli list -resource="<resource_name>" -help
Example: $ emcli list -resource="PrivilegeDelegationSettings" -help

Deprecated Verbs:
Agent Administration Verbs
 resecure_agent - Resecure an agent

To get the complete list of verbs, execute:
$ emcli help

Update (6/11):- Please note that the "Gold Agent Image Verbs" and "Agent Update Verbs" verbs shown under "emcli help" are not supported yet.


Wednesday Jun 04, 2014

EM12c Release 4: Database as a Service Enhancements

Oracle Enterprise Manager 12c Release 4 (or simply EM12c R4) is the latest update to the product. As with previous versions, this release provides tons of enhancements and bug fixes, contributing to improved stability and quality. One of the areas that is most exciting and has seen tremendous growth in the last few years is Database as a Service. EM12c R4 provides a significant update to Database as a Service. The key themes are:

  1. Comprehensive Database Service Catalog (includes single instance, RAC, and Data Guard)
  2. Additional Storage Options for Snap Clone (includes support for Database feature CloneDB)
  3. Improved Rapid Start Kits
  4. Extensible Metering and Chargeback
  5. Miscellaneous Enhancements

1. Comprehensive Database Service Catalog

Before we get deep into the implementation of a service catalog, let's first understand what it is and what benefits it provides. Per ITIL, a service catalog is an exhaustive list of IT services that an organization provides or offers to its employees or customers. Service catalogs have been widely popular in the space of cloud computing, primarily as the medium to provide standardized and pre-approved service definitions. There is already some good collateral out there that talks about Oracle database service catalogs. The two whitepapers I recommend reading are:

Service Catalogs: Defining Standardized Database Service

High Availability Best Practices for Database Consolidation: The Foundation for Database as a Service [Oracle MAA]

EM12c comes with an out-of-the-box service catalog and self service portal since release 1. For the customers, it provides the following benefits:

  • Present a collection of standardized database service definitions,
  • Define standardized pools of hardware and software for provisioning,
  • Role based access to cater to different class of users,
  • Automated procedures to provision the predefined database definitions,
  • Setup chargeback plans based on service tiers and database configuration sizes, etc

Starting with Release 4, the scope of services offered via the service catalog has been expanded to include databases with varying levels of availability: Single Instance (SI) or Real Application Clusters (RAC) databases with multiple Data Guard based standby databases. Some salient points of the Data Guard integration:

  • Standby pools can now be defined across different datacenters or within the same datacenter as the primary (this helps in modelling the concept of near and far DR sites)
  • The standby databases can be single instance, RAC, or RAC One Node databases
  • Multiple standby databases can be provisioned, where the maximum limit is determined by the version of database software
  • The standby databases can be in either mount or read only (requires active data guard option) mode
  • All database versions 10g to 12c supported (as certified with EM 12c)
  • All 3 protection modes can be used - Maximum Availability, Maximum Performance, and Maximum Protection
  • Log apply can be set to sync or async along with the required apply lag

The different service levels or service tiers are popularly represented using metals - Platinum, Gold, Silver, Bronze, and so on. The Oracle MAA whitepaper (referenced above) calls out the various service tiers as defined by Oracle's best practices, but customers can choose any logical combinations from the table below:

[Table: supported primary/standby combinations in EM 12cR4. The primary database can be SI or RAC; the standby databases (one or more) can be SI, RAC, or RON, where RON (RAC One Node) is supported via custom post-scripts in the service template.]

A sample service catalog would look like the image below. Here we have defined 4 service levels, deployed across 2 data centers, with 3 standardized sizes. Again, it is important to note that this is just an example to get the creative juices flowing. I imagine each customer will come up with their own catalog based on the application requirements, their RTO/RPO goals, and the product licenses they own. In the screenwatch titled 'Build Service Catalog using EM12c DBaaS', I walk through the complete steps required to set up this sample service catalog in EM12c.

2. Additional Storage Options for Snap Clone

In my previous blog posts, I have described the snap clone feature in detail. Essentially, it provides a storage agnostic, self service, rapid, and space efficient approach to solving your data cloning problems. The net benefit is that you get incredible amounts of storage savings (on average 90%) while cloning databases in a matter of minutes. Space and time: two things enterprises would love to save on. This feature has been designed with the goal of providing data cloning capabilities while protecting your existing investments in server, storage, and software. With this in mind, we have pursued a dual solution approach of hardware and software. In the hardware approach, we connect directly to your storage appliances and perform all the low level actions required to rapidly clone your databases. In the software approach, we use an intermediate software layer to talk to any storage vendor or any storage configuration to perform the same low level actions. This delivers the benefits of database thin cloning without requiring you to drastically change the infrastructure or IT's operating style.

In release 4, we expand the scope of options supported by snap clone with the addition of database CloneDB. While CloneDB is not a new feature (it was first introduced in a patchset), it has over the years become more stable and mature. CloneDB leverages a combination of the Direct NFS (dNFS) feature of the database, RMAN image copies, sparse files, and copy-on-write technology to create thin clones of databases from existing backups in a matter of minutes. It essentially has all the traits that we want to present to our customers via the snap clone feature. For more information on CloneDB, I highly recommend the following sources:

Blog by Tim Hall: Direct NFS (DNFS) CloneDB in Oracle Database 11g Release 2

Oracle OpenWorld Presentation by Cern: Efficient Database Cloning using Direct NFS and CloneDB

The advantages of the new CloneDB integration with EM12c Snap Clone are:

  • Space and time savings
  • Ease of setup - no additional software is required other than the Oracle database binary
  • Works on all platforms
  • Reduce the dependence on storage administrators
  • Cloning process fully orchestrated by EM12c, and delivered to developers/DBAs/QA Testers via the self service portal
  • Uses dNFS to deliver better performance, availability, and scalability over kernel NFS
  • Complete lifecycle of the clones managed by EM12c - performance, configuration, etc

3. Improved Rapid Start Kits

DBaaS deployments tend to be complex, and their setup requires a series of steps, typically performed by different users across different UIs. The Rapid Start Kit provides a single command solution to set up Database as a Service (DBaaS) and Pluggable Database as a Service (PDBaaS). One command creates all the Cloud artifacts: Roles, Administrators, Credentials, Database Profiles, PaaS Infrastructure Zone, Database Pools, and Service Templates. Once the Rapid Start Kit has been successfully executed, requests can be made to provision databases and PDBs from the self service portal. The Rapid Start Kit can create complex topologies involving multiple zones, pools, and service templates. It also supports standby databases and the use of RMAN image backups.

The Rapid Start Kit is, in reality, a simple emcli script that takes a set of XML files as input and executes the complete automation in a matter of seconds. On a full rack Exadata, it took only 40 seconds to set up PDBaaS end-to-end. The kit works both on Oracle's engineered systems, like Exadata and SuperCluster, and on commodity hardware. One can draw a parallel to the Exadata One Command script, which similarly takes a set of inputs from the administrators and then runs a simple script that configures everything from the network to provisioning the DB software.

Steps to use the kit:

  • The kit can be found under the SSA plug-in directory on the OMS: EM_BASE/oracle/MW/plugins/oracle.sysman.ssa.oms.plugin_12.
  • It can be run from this default location or from any server which has emcli client installed
  • For most scenarios, you would use the script dbaas/setup/
  • For Exadata, special integration is provided to reduce the number of inputs even further. The script to use for this scenario would be dbaas/setup/
  • The script takes two inputs:
    • Cloud boundary xml: This file defines the cloud topology in terms of the zones and pools, along with the host names, Oracle Home locations, or container database names that would be used as infrastructure for provisioning database services. This file is optional in the case of Exadata, as the boundary is well known via the Exadata system target available in EM.
    • Input xml: This file captures inputs for users, roles, profiles, service templates, etc. Essentially, all inputs required to define the DB services and other settings of the self service portal.
  • Once all the xml files have been prepared, invoke the script as follows for PDBaaS:
    emcli -pdbaas 

     The script will prompt for passwords a few times for key users like sysman, the cloud admin, the SSA admin, etc. Once complete, you can simply log in to EM as the self service user and request databases from the portal.

More information is available in the Rapid Start Kit chapter of the Cloud Administration Guide.

4. Extensible Metering and Chargeback

Last but not least, Metering and Chargeback in release 4 has been made extensible in all possible regards. The new extensibility features allow customers, partners, system integrators, etc. to:

  • Extend chargeback to any target type managed in EM
  • Promote any metric in EM as a chargeback entity
  • Extend list of charge items via metric or configuration extensions
  • Model abstract entities like no. of backup requests, job executions, support requests, etc

A slew of emcli verbs has also been added, allowing administrators to create, edit, delete, and import/export charge plans, and to assign cost centers, all via the command line.
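To make the idea of modeling abstract entities concrete, here is a small hypothetical sketch that prices metered usage against a simple charge plan. The entity names and rates are invented for illustration and are not EM's actual chargeback model:

```python
# Hypothetical charge plan: rates in cents per unit for abstract charge
# entities like those listed above (backup requests, job executions, etc).
CHARGE_PLAN = {
    "backup_request": 50,    # cents per backup request
    "job_execution": 10,     # cents per job execution
    "db_uptime_hour": 25,    # cents per metered hour of database uptime
}

def charge_cents(usage, plan=CHARGE_PLAN):
    """Sum charges (in cents) for a list of (entity, quantity) usage records."""
    return sum(plan[entity] * qty for entity, qty in usage)

usage = [("backup_request", 40), ("job_execution", 200), ("db_uptime_hour", 720)]
print(charge_cents(usage) / 100)  # 220.0 (dollars)
```

Working in integer cents keeps the aggregation exact; a real plan would of course come from the charge plan defined in EM, not a hard-coded dictionary.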

More information is available in the Chargeback API chapter of the Cloud Administration Guide.

5. Miscellaneous Enhancements

There are other miscellaneous, yet important, enhancements that are worth a mention. Most of these were requested by customers like you:

  • Custom naming of DB Services
    • Self service users can provide custom names for DB SID, DB service, schemas, and tablespaces
    • Every custom name is validated for uniqueness in EM
  • 'Create like' of Service Templates
    • Now creating variants of a service template is only a click away. This would be vital when you publish service templates to represent different database sizes or service levels.
  • Profile viewer
    • View the details of a profile like datafile, control files, snapshot ids, export/import files, etc prior to its selection in the service template
  • Cleanup automation - for failed and successful requests
    • Single emcli command to cleanup all remnant artifacts of a failed request
    • Cleanup can be performed on a per-request basis or for an entire pool
    • As an extension, you can also delete successful requests
  • Improved delete user workflow
    • Allows administrators to reassign cloud resources to another user or delete all of them
  • Support for multiple tablespaces for schema as a service
    • In addition to multiple schemas, users can also specify multiple tablespaces per request

I hope this was a good introduction to the new Database as a Service enhancements in EM12c R4. I encourage you to explore many of these new and existing features and give us feedback.

Good luck!


Cloud Management Page on OTN

Cloud Administration Guide [Documentation]

-- Adeesh Fulay (@adeeshf)

Thursday Apr 24, 2014

The case for Snap Clone over point tools

Today, I stumbled over a competitor blog, conspicuous by its factual incorrectness about Enterprise Manager Snap Clone. However, I must compliment the author of the blog, because inadvertently he has raised a point that we have been highlighting all along. The author, with reference to Dataguard and storage technologies, argues against cobbling technologies together and adding another technology stack to the mix without any automated management.

Precisely the point! In the wide realm of technologies, there are necessities and there are accessories, a.k.a. nice-to-haves. The necessities are technologies that are needed anyway, such as high fidelity, high performance storage from a reputed vendor, or a good DR solution for a mission critical database environment. Similarly, for any Oracle DBA worth his or her salt, Enterprise Manager 12c is a necessity, a part of daily life. The Enterprise Manager agent, keeping vigil on every host, is therefore not an overhead but the representative (the "agent" in the true sense) of the DBA. Deep diagnostics, performance management, large scale configuration management, patching, and compliance management make Enterprise Manager the darling of any Oracle DBA. Surveys suggest that DBAs spend a considerable amount of time in Enterprise Manager performing tasks well beyond data cloning. So why invest in an accessory just for cloning Oracle test databases and unnecessarily proliferate the number of point tools (and possibly several instances of them) that you need to manage and maintain? Past history suggests that very few such point tools have solved customers' CAPEX and OPEX problems over the long run. It is like using a spreadsheet for expenses and an ERP system for all other financial tasks. This is not to suggest that these point tools lack good, innovative features. Over my tenure in the industry, I have come across several such tools with nice features, but often the hidden costs outweigh the benefits. Our position on this has been consistent, whether it concerns a competitor's tool or our own. A few years back, we integrated My Oracle Support into Enterprise Manager with the same goal: that Enterprise Manager serve as the single pane of glass for the Oracle ecosystem. The same has been our position on any product that we acquire.

Snap Clone's support for Dataguard and native storage stems from popular customer demand to leverage technologies they already invested in, and not create standalone islands of automation. Moreover, several customers have voiced in favor of the performance and scalability advantages that they would get by leveraging the native storage APIs. How else would you support one of the world's largest banks, a Snap Clone customer, who performs 60,000 (sixty thousand) data refreshes per year! In any case, that should not imply that we bind ourselves to any of those technologies. We do support cloning on various storage systems based on ZFS filesystem. Similarly, the Test Master refresh can be achieved with one among RMAN, Dataguard, Golden Gate or storage replication and optionally orchestrated with EM Job System.

Enterprise Manager 12c has taken a great step in delivering features via plug-ins that can be revisioned independently of the framework. An unwanted side effect is that awareness often lags behind what is actually supported in the latest version of the product. For example, filesystem support was introduced last Fall. And of course, Enterprise Manager 12c Snap Clone supports RAC. My esteemed colleague and DBA par excellence has highlighted some of these in her blog to dispel some of the prevalent awareness issues. Snap Clone's usage among the E-Business Suite and Developer communities does not need any special accreditation: it is heavily used by the world's largest E-Business Suite developer community, the Oracle E-Business Suite Engineering organization itself! It is true that Snap Clone does not support restoration to any arbitrary point in time, but then our customers and prospects have not voiced a need for it. In reality, most customers want to perform intermediate data transformations such as masking and subsetting as they clone from production to test, and Enterprise Manager 12c already boasts sophisticated data masking technologies, again via the same interface. It also includes testing features like Real Application Testing (RAT) that can complement and follow the test database creation. Future releases of Enterprise Manager will support a tighter integration among these features.

Snap Clone is delivered as part of the Database as a Service feature set, which has been pioneering, industry-leading, and adopted at a great pace. Little wonder that we have already received a copious number of OpenWorld paper submissions on the topic. In this emerging trend of DBaaS adoption, we find no reason to fragment tasks such as fresh database creation, pluggable database provisioning, and cloning across siloed point tools (not to mention the broader PaaS capabilities which may be needed for complete application testing). Each use case may be different, but all need a single service delivery platform. EM12c is that platform for Oracle. Period. So, think twice before 'adding another technology to the mix'. You do not need to.

Thursday Jan 02, 2014

What is EM 12c DBaaS Snap Clone?

Happy New Year to all! For the first blog post of the new year, let's look at a relatively new feature in EM that has gained significant popularity over the last year: EM 12c DBaaS Snap Clone.

The ‘Oracle Cloud Management Pack for Oracle Database’, a.k.a. the Database as a Service (DBaaS) feature in EM 12c, has grown tremendously since its release two years ago. It started with basic single instance and RAC database provisioning, a technical service catalog, an out-of-box self service portal, metering and chargeback, etc. Since then, we have added provisioning of schemas and pluggable databases, full clones using RMAN backups, and Snap Clone. This video showcases the various EM12c DBaaS features.

This blog will cover one of the most exciting and popular features – Snap Clone. In one line, Snap Clone is a self service way of creating rapid and space efficient clones of large (~TB) databases.

  • Self Service - empowers the end users (developers, testers, data analysts, etc) to get access to database clones whenever they need them.
  • Rapid - refers to the time it takes to clone the database: minutes, not hours, days, or weeks.
  • Space Efficient - represents the significant reduction in storage (>90%) required for cloning databases.

Customer Scenario

To best explain the benefits of Snap Clone, let’s look at a Banking customer scenario:

  • 5 production databases total 30 TB of storage
  • All 5 production databases have a standby
  • Clones of the production database are required for data analysis and reporting
  • 6 total clones across different teams every quarter
  • For security reasons, sensitive data has to be masked prior to cloning

Based on the above scenario, the storage required, if using traditional cloning techniques, can be calculated as follows:

5 Prod DB                  = 30 TB
5 Standby DB            = 30 TB
5 Masked DB             = 30 TB (These will be used for creating clones)
6 Clones (6 * 30 TB) = 180 TB
Total                           = 270 TB
Time = days to weeks
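The storage arithmetic above is easy to verify; the few lines below simply restate the traditional-cloning math from the scenario:

```python
# Traditional cloning storage math for the banking scenario (sizes in TB).
PROD_TOTAL_TB = 30   # combined size of the 5 production databases
CLONES = 6

prod_tb    = PROD_TOTAL_TB            # 5 production databases: 30 TB
standby_tb = PROD_TOTAL_TB            # 5 full standby copies: 30 TB
masked_tb  = PROD_TOTAL_TB            # 5 masked copies used as clone sources: 30 TB
clones_tb  = CLONES * PROD_TOTAL_TB   # 6 full clones: 180 TB

total_tb = prod_tb + standby_tb + masked_tb + clones_tb
print(total_tb)  # 270
```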

As the numbers indicate, this is quite horrible. Not only does 30 TB turn into 270 TB, but creating 6 clones of all the production databases would also take forever. In addition, there are other issues with data cloning:

  • Lack of automation. Scripts are good but often not a long term solution.
  • Traditional cloning techniques are slow, while existing storage vendor solutions are DBA unfriendly
  • Data explosion often outpaces storage capacity and hurts IT's ability to provide clones for dev and testing
  • Archaic processes that require multiple users to share a single clone, or only supports fixed refresh cycles
  • Different priorities between DBAs and Storage admins

Snap Clone to the Rescue

All of the above issues lead to slow turnaround times, and users have to wait for days and weeks to get access to their databases. Basically, we end up with competing priorities and requirements, where the user demands self service access, rapid cloning, and the ability to revert data changes, while IT demands standardization, better control, reduction in storage and administrative overhead, better visibility into the database stack, etc.

EM 12c DBaaS Snap Clone tries to address all these issues. It provides:

  • Rapid and space efficient cloning of databases by leveraging storage copy-on-write (or similar) technology
  • Support for all database versions from 10g to 12c
  • Support for various storage vendors and configurations (NAS and SAN)
  • Lineage and association tracking between clone master and its various clones and snapshots
  • 'Time Travel' capability to restore and access past data
  • Deep visibility into storage, OS, and database layer for easy triage of performance and configuration issues
  • Simplified access for end user via out-of-the-box self service portal
  • RESTful APIs to integrate with custom portals and third party products
  • Ability to meter and charge back on the clone databases
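The RESTful APIs are documented in the Cloud Administration Guide. Purely as an illustration of how a custom portal might integrate, the sketch below builds a hypothetical self service database request; the base URL and payload field names are assumptions for the example, not the documented resource model:

```python
import json

# Hypothetical sketch only: the base URL and payload fields below are
# illustrative assumptions, not the documented EM12c Cloud resource model.
EM_CLOUD_BASE = "https://em.example.com/em/cloud"   # assumed API base URL

def build_db_request(zone, service_template, db_name):
    """Build the JSON body a custom portal might POST to request a database."""
    return json.dumps({
        "based_on": service_template,      # service template published by the DBA
        "zone": zone,                      # self service zone to provision into
        "params": {"database_sid": db_name},
    })

body = build_db_request("finance_zone", "gold_11g_rac", "FINDB1")
print(json.loads(body)["zone"])  # finance_zone
```

A real integration would POST this body to the EM Cloud API endpoint with the appropriate authentication; consult the Cloud Administration Guide for the actual resource paths and media types.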

So how does Snap Clone work?

The secret sauce lies in the Storage Management Framework (SMF) plug-in. This plug-in sits between the storage system and the DBA, and provides the much needed layer of abstraction required to shield DBAs and users from the nuances of the different storage systems. At the storage level, Snap Clone makes use of storage copy-on-write (or similar) technology. There are two options in terms of using and interacting with storage:

1. Direct connection to storage: Here storage admins can register NetApp and ZFS storage appliance with EM, and then EM directly connects to the storage appliance and performs all required snapshot and clone operations. This approach requires you to license the relevant options on the storage appliance, but is the easiest and the most efficient and fault tolerant approach.

2. Connection to storage via the ZFS file system: This is a storage vendor agnostic solution that can be used by any customer. Here, instead of connecting to the storage appliance, the storage admin mounts the volumes on a Solaris server and formats them with the ZFS file system. All snapshot and clone operations required on the storage are then conducted via the ZFS file system. The good thing about this approach is that it does not require thin cloning options to be licensed on the storage, since the ZFS file system provides these capabilities.

For more details on how to setup and use Snap Clone, refer to a previous blog post.

Now, let's go back to our Banking customer scenario and see how Snap Clone helped them reduce their storage costs and time to clone.

5 Prod DB                      = 30 TB
5 Standby DB                   = 30 TB
5 Masked DB                    = 30 TB
6 Clones (6 * 5 * 2 GB)        = 60 GB (was 6 * 30 TB = 180 TB)
Total                          = 90 TB (was 270 TB)
Time                           = minutes (was days to weeks)

Assuming the clone databases will have minimal writes, we allocate about 2 GB of write space per clone. For 5 production databases and 6 clones, this totals just 60 GB of required storage space, a whopping 99.97% savings in storage. Plus, these clones are created in a matter of minutes, not the usual days or weeks. The product has out-of-the-box charts that show the storage savings across all storage devices and cloned databases. See the screenshot below.

Snap Clone Savings
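The savings quoted above can be checked with the same kind of back-of-the-envelope arithmetic, assuming (as stated) about 2 GB of write space per clone:

```python
# Snap Clone storage math for the same banking scenario.
PROD_DBS = 5
CLONES = 6
PROD_TOTAL_TB = 30           # combined size of the production databases
WRITE_SPACE_GB_PER_CLONE = 2 # assumed write space per cloned database

# Thin clones share blocks with the masked masters; only writes consume space.
clone_storage_gb = CLONES * PROD_DBS * WRITE_SPACE_GB_PER_CLONE  # 60 GB
full_clone_gb = CLONES * PROD_TOTAL_TB * 1024                    # 184320 GB

savings_pct = 100 * (1 - clone_storage_gb / full_clone_gb)
print(clone_storage_gb, round(savings_pct, 2))  # 60 99.97
```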

Where can you use Snap Clone databases?

As I said earlier, Snap Clone is most effective when cloning large (~TB) databases. Common scenarios where our customers put Snap Clone to best use are:

  • Application upgrade testing. For example, EBusiness suite upgrade to R12.
  • Functional testing. For example, testing using production datasets.
  • Agile development. For example, run parallel development sprints by giving each sprint its own cloned database.
  • Data Analysis and Reporting. For example, stock market analysis at the close of market everyday.

It's obvious that Snap Clone has a strong affinity to applications, since it is application data that you want to clone and use. Hence, it is important to add that the Snap Clone feature, when combined with EM12c middleware-as-a-service (MWaaS), can provide a complete end-to-end self service application deployment experience. If you have existing portals, or need to integrate Snap Clone with existing processes, use our RESTful APIs for easy integration with third party systems.

In summary, Snap Clone is a new and exciting way of dealing with data cloning challenges. It shields DBAs from the nuances of different storage systems, while allowing end users to request and use clones in a rapid and self service fashion. All of this while saving storage costs. So try this feature out today, and your development and test teams will thank you forever.

In subsequent blog posts, we will look at some popular deployment models used with Snap Clone.

-- Adeesh Fulay (@adeeshf)

Additional References

Cloud Management Page on OTN

Cloud Administration Guide (Documentation)

Enterprise Manager 12c Database-as-a-Service Snap Clone Overview (Presentation)

Tuesday Dec 31, 2013

Database Lifecycle Management for Cloud Service Providers

Adopting the Cloud Computing paradigm enables service providers to maximize revenues while driving capital costs down through greater efficiencies in working capital and OPEX. In the case of an enterprise private cloud, corporate IT, which plays the role of the provider, may not be interested in revenues, but still cares about providing differentiated service at lower cost. Efficiency and cost are what eventually make the service profitable and sustainable. This basic tenet has to be satisfied irrespective of the type of service: infrastructure (IaaS), platform (PaaS), or software application (SaaS). In this blog, we focus specifically on the database layer and how its lifecycle gets managed by service providers.

Any service provider needs to ensure that:

  • Hardware and software population are in control. As new consumers come in and some consumers retire, there is a constant flux of resources in the data center. The flux has to be managed and controlled
  • The platform for providing the service is standardized, so that operations can be conducted predictably and at scale across a pool of resources
  • Mundane and repeatable tasks like backup, patching, etc are automated
  • Customer attrition does not happen owing to heightened compliance risk

While the Database Lifecycle Management features of Enterprise Manager have been widely adopted, I feel that their applicability to service providers is not yet well understood, and hence not fully appreciated. In this blog, let me try to address how the lifecycle management features can be effective in meeting each of the above requirements.

1. Controlling hardware and software population:

Enterprise Manager 12c provides a near real-time view of the assets in a data center. It comes with out-of-box inventory reports that show the current population and the growth trend within the data center. The inventory can be further sliced and diced based on cost center, owner, etc. In a cloud, whether private or public, the target properties of each asset can be appropriately populated, so that the provider can easily figure out the distribution of assets. For example, how many databases are owned by Marketing LOB can be easily answered. The flux within the data center is usually higher when virtualization techniques such as server virtualization and Oracle 12c multitenant option are used. These technologies make the provisioning process extremely nimble, potentially leading to a higher number of virtual machines (VMs) or pluggable databases (PDBs) within the data center and hence accentuating the need for such ongoing reporting. The inventory reports can be also created using BI Publisher and delivered to non-EM users, such as a CIO.

Now, not all reports can always be readily available. There can be situations where a data center manager seeks adhoc information, such as how many databases owned by a particular customer are running on Exadata. This involves an adhoc query based on an association (databases running on Exadata) and target properties (the owner being that customer). Enterprise Manager 12c provides a sophisticated Configuration Search feature that lets administrators define such adhoc queries and save them for reuse.

2. Standardization of platform:

The massive standardization of platform components is not merely a nice-to-have for a cloud service provider, it is rather a must-have. A provider may choose to offer various levels of services, tagged with levels such as gold, silver and bronze. However, for each such level, the platform components need to be standardized, not only for ease of manageability but also for ensuring consistency of QOS across all the tenants. So how can the platform be standardized? We can highlight two major Enterprise Manager 12c features here:

The ability to rollout gold images that can be version controlled within Enterprise Manager's Software Library. The inputs of the provisioning process can be "locked down" by the designer of the provisioning process, thereby ensuring that each deployment is a replica of the other.

The ability to compare the configuration of deployments (often referred to as the "Points of Delivery" of the services). This is a very powerful feature that supports 1-n comparisons across multiple tiers of the stack. For example, one can compare an entire database machine from storage cells, compute nodes to databases with one or more of those.

3. Automation of repeatable tasks:

A large portion of OPEX for a service provider is expended executing mundane and repeatable tasks like backups, log file cleanup, or patching. Enterprise Manager 12c comes with an automation framework comprising Jobs and Deployment Procedures that lets administrators define these repetitive actions and schedule them as needed. EMCC's task automation framework is scalable and carries functions such as the ability to schedule, resume, and retry, which are of paramount importance when conducting mass operations in an enterprise scale cloud. The task automation verbs are also exposed through the EMCLI interface. Oracle Cloud administrators make extensive use of EMCLI for large scale operations on thousands of tenant services.
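As a sketch of how such mass operations might be driven from a script via EMCLI, the snippet below assembles a blackout command for many database targets at once. The verb name is real, but the flag spellings shown are assumptions for illustration; check `emcli help create_blackout` for the exact syntax:

```python
# Sketch: build an emcli command line for a bulk blackout across many database
# targets. Flag spellings here are illustrative assumptions; verify the exact
# syntax with `emcli help create_blackout` before use.
def blackout_cmd(name, targets, duration="02:00"):
    """Assemble an argv list for one create_blackout invocation."""
    target_arg = ";".join(f"{t}:oracle_database" for t in targets)
    return ["emcli", "create_blackout",
            f"-name={name}",
            f"-add_targets={target_arg}",
            f"-schedule=duration:{duration}",
            "-reason=scheduled patching"]

cmd = blackout_cmd("q3_patching", ["findb1", "findb2"])
print(" ".join(cmd[:2]))  # emcli create_blackout
```

The argv list could then be handed to a process runner; building it in code makes it easy to generate one blackout per pool or per tenant from inventory data.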

One of the most popular features of Enterprise Manager 12c is the out-of-box procedures for patch automation. The patching procedures can patch the Linux operating system, clusterware, and the database. To minimize the downtime involved in the patching process, Enterprise Manager 12c also supports out-of-place patching, which prepares the patched software ahead of time and migrates the instances one by one as needed. This technique is widely adopted by service providers to make sure the tenants' downtime related SLAs are respected and adhered to. The coordination of such downtime can be instrumented by Enterprise Manager 12c's blackout functionality.

4. Managing Compliance risks:

In a service driven model, the provider is liable in case of security breaches. The consumer, and in turn the customers of the consumer's apps, need to be assured that their data cannot be breached owing to platform level vulnerabilities. Security breaches often happen owing to faulty configuration such as default passwords, relaxed file permissions, or an open network port. The hardening of the platform, therefore, has to be done at all levels: OS, network, database, etc. To manage compliance, administrators can create baselines referred to as Compliance Standards. Any deviation from the baselines triggers compliance violation notifications, alerting administrators to resolve the issue before it creates risk in the environment.

We can therefore see how four major asks from a service provider can be satisfied with the Lifecycle Management features of Enterprise Manager 12c. As substantiated through several third party studies and customer testimonials, these result in higher efficiency with lower OPEX.

Stay Connected:

Twitter | Facebook | YouTube | LinkedIn | Google+ | Newsletter

Tuesday Nov 12, 2013

Automate RAC Cluster Upgrades using EM12c

One of the most arduous processes in DB maintenance is upgrading databases across major versions, especially for complex RAC Clusters.
With the release of the Database Plug-in, EM12c Release 3 now supports automated upgrades of RAC Clusters in addition to standalone databases.

This automation includes:

  • Upgrade of the complete Cluster across the nodes (Example: CRS, ASM, RAC DB -> GI, RAC DB)
  • Best practices in tune with your operations, where you can automate upgrade in steps:
    Step 1: Upgrade the Clusterware to Grid Infrastructure (Allowing you to wait, test and then move to DBs).
    Step 2: Upgrade RAC DBs either separately or in group (Mass upgrade of RAC DB's in the cluster).
  • Standard pre-requisite checks like Cluster Verification Utility (CVU) and RAC checks
  • Division of the upgrade process into non-downtime activities (like laying down the new Oracle Homes (OH) and running checks) and downtime activities (like upgrading Clusterware to GI and upgrading RAC), thereby lowering the downtime required.
  • Ability to configure Back up and Restore options as a part of this upgrade process. You can choose to :
    a. Take Backup via this process (either Guaranteed Restore Point (GRP) or RMAN)
    b. Set the procedure to pause just before the upgrade step to allow you to take a custom backup
    c. Ignore backup completely, if there are external mechanisms already in place. 

    Mass Upgrade of RAC using EM12c

High Level Steps:

  1. Select the Procedure "Upgrade Database" from Database Provisioning Home page.
  2. Choose the Target Type for upgrade and the Destination version
  3. Pick and choose the Cluster, it picks up the complete topology since the clusterware/GI isn't upgraded already
  4. Select the Gold Image of the destination version for deploying both the GI and RAC OHs
  5. Specify new OH patch, credentials, choose the Restore and Backup options, if required provide additional pre and post scripts
  6. Set the Break points in the procedure execution to isolate Downtime activities
  7. Submit and track the procedure's execution status. 

The animation below captures the steps in the wizard. For the step-by-step process, and to understand the support matrix, check this documentation link.

Explore the functionality!!

In the next blog, we will talk about automating rolling upgrades of databases in a Physical Standby Data Guard environment using Transient Logical Standby.

Wednesday Jul 24, 2013

Understanding Agent Resynchronization

Agent Resynchronization (resync) is an important topic but often misunderstood or misused. In this Q&A styled blog, I discuss how and when it is appropriate to use agent resynchronization.

What is Agent Resynchronization?

The Management Agent can be reconfigured using target information present in the Management Repository. Resynchronization pushes all targets and related information from the Management Repository to the Management Agent and then unblocks the Agent.

Why do agents need to be re-synchronized?

Read More

Thursday Jul 11, 2013

Oracle Enterprise Manager 12c Release 3: What’s New in EMCLI

If you have been using the classic Oracle Enterprise Manager Command Line Interface (EMCLI), you are in for a treat. Oracle Enterprise Manager 12c R3 comes with a new EMCLI kit called ‘EMCLI with Scripting Option’. Not my favorite name, as I would have preferred to call it EMSHELL, since it truly provides a shell similar to bash or csh. Unlike the classic EMCLI, this new kit provides a Jython-based scripting environment along with a large collection of verbs. This scripting environment enables users to employ established programming language constructs like loops (for, while) and conditional statements (if-else) in both interactive and scripting modes.

Benefits of ‘EMCLI with Scripting Option’

Some of the key benefits of the new EMCLI are:

  • Jython based scripting environment
  • Interactive and scripting mode
  • Standardized output format using JSON
  • Can connect to any EM environment (no need to run EMCLI setup …)
  • Stateless communication with OMS (no user data is stored with the client)
  • Generic list function for EM resources
  • Ability to run user-defined SQL queries to access published repository views
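One practical consequence of the standardized JSON output is that verb results become trivial to post-process. The sketch below shows the kind of filtering one might do inside the Jython shell; the sample payload and its field names are invented for illustration, so the real schema emitted by the verbs may differ:

```python
import json

# Sample payload shaped like a JSON target list. The field names here are
# illustrative assumptions, not the exact schema emitted by the EMCLI verbs.
raw = json.dumps({"data": [
    {"Target Name": "findb1", "Target Type": "oracle_database"},
    {"Target Name": "host01", "Target Type": "host"},
    {"Target Name": "findb2", "Target Type": "oracle_database"},
]})

def databases(payload):
    """Return the names of all oracle_database targets in a JSON target list."""
    return [t["Target Name"] for t in json.loads(payload)["data"]
            if t["Target Type"] == "oracle_database"]

print(databases(raw))  # ['findb1', 'findb2']
```

Because the output format is consistent across verbs, the same pattern (parse, filter, act) applies to resource lists, job results, and repository view queries alike.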

Before we go any further, there are two topics that warrant some discussion – Jython and JSON.

Read More

Thursday Apr 11, 2013

Qualcomm Deploys Application Changes Faster with Oracle Enterprise Manager

Listen in as Qualcomm talks about saving time and energy by making application changes faster through Oracle Enterprise Manager.

Stay Connected:
Twitter |
Facebook | YouTube | Linkedin | Newsletter
Download the Oracle Enterprise Manager Cloud Control12c Mobile app

Thursday Mar 14, 2013

Database as a Service: Glad that you asked these!

Thanks for visiting my earlier blog post on the new Database as a Service (DBaaS) features which got released in Enterprise Manager 12cR2 Plugin Update 1.

Our first public webcast on DBaaS since the release was held this morning (the recording will soon be available). The webcast was well attended, with peak attendance going well over our expectations. I wish we had more time for the technical Q&A; since we didn't, let me use the blogosphere to answer some of the questions that were asked. I am repeating some of the questions we answered during the webcast, because they warrant more detail than the duration permitted.

Kevin from the audience asked, "What's the difference between regular provisioning and DBaaS?" Sometimes the apparently obvious questions are the most difficult to answer. The recently released whitepaper covers traditional provisioning versus DBaaS in detail. Long story short: in a traditional provisioning model, IT (usually a DBA) uses scripts and tools to provision databases on behalf of end users. In DBaaS, the DBA's role changes: the DBA simply creates a service delivery platform, and end users provision databases on demand, as and when they need them, with minimal inputs. Here's how the process unfolds:

  • The DBA pools together a set of server resources that can host databases, or a set of databases that can host schemas, and creates a Self-Service zone.
  • The DBA creates a gold image and a provisioning procedure and expresses them as a service template.
  • As a result, end users do not have to deal with the intricacies of the provisioning process. They supply a couple of very simple inputs, such as the service template and the zone, and everything else happens under the hood. The provisioning process, the physicality of the database, and so on are completely abstracted away.
  • And finally, because DBaaS deals with shared resource utilization and self-service automation, it is usually complemented by quota, retirement and chargeback.

The following picture can make it clear.

In terms of licensing, for a traditional administrator driven database provisioning, you need the Database Lifecycle Management Pack.  If you want to enable DBaaS on top of it, simply add the Cloud Management Pack for Database.

I will combine the next two questions. Alfred asked, "Is RAC a requirement?" (the short answer for which is "No") while Jud asked, "Is the schema-level provisioning supported in an environment where the target DBs are running in VMs?" First of all, in our DBaaS solution we support multiple models, as shown below.

In the dedicated database model, the database can run on a pool of servers or a pool of clusters, so both single-instance and RAC databases are supported. Similarly, in the dedicated schema (Schema as a Service) model, the database can run on a single instance or on RAC, which can in turn be hosted on physical servers or VMs. Enterprise Manager treats both physical servers and VMs as hosts, and as long as the hosts have the agent installed, they can participate in DBaaS. The bottom line is that as we move up from IaaS and offer these higher-order services, the underlying infrastructure becomes irrelevant. This should also satisfy Steve, who asked, "As the technology matures is there an attempt by Oracle to provide ODA vs EXADATA as the foundation of the dbaas to lower the cost?" The answer is yes. But why wait? DBaaS is supported on Exadata and ODA platforms today. In fact, HDFC Bank in India is running DBaaS on Exadata; you can read about them in the latest Oracle Magazine.

Another interesting question came from Yuri. He asked, "Is there an option to disable startup/shutdown for the self-service users?" It can be answered in multiple ways. First, in the dedicated schema (Schema as a Service) model, the end user cannot control the database instance state, because the instance houses database services (schemas) owned by others too. So this may be a good model for enterprises trying to limit what end users can do at the database instance level. In the dedicated database model, however, the out-of-box Enterprise Manager self-service console allows the end user to perform operations like startup and shutdown on the database instance. In general, if you want to create your own tailored self-service console with a limited set of operations exposed, using the APIs may be the way to go. Enterprise Manager 12c supports RESTful APIs for self-service operations, so a limited set of capabilities can be exposed. Check this technical presentation for the supported APIs.
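As a sketch of what driving those RESTful self-service operations might look like, the snippet below builds (but does not send) an HTTP request for a shutdown operation. The endpoint path, resource identifier, and request body are illustrative assumptions, not the documented resource model; a tailored console would expose only the subset of such calls you choose.

```python
import json
import urllib.request

# Hypothetical base URL for an EM 12c cloud REST endpoint.
BASE_URL = "https://em.example.com/em/cloud"

def build_shutdown_request(db_resource_id):
    """Build (but do not send) a POST request asking the self-service
    layer to shut down a database instance. All names are illustrative."""
    body = json.dumps({"operation": "SHUTDOWN"}).encode("utf-8")
    return urllib.request.Request(
        url=f"{BASE_URL}/dbplatforminstance/{db_resource_id}",
        data=body,
        method="POST",
        headers={"Content-Type": "application/json"},
    )

req = build_shutdown_request("byrequest/123")
print(req.get_method(), req.full_url)
```

Wrapping only the operations you want to permit in helpers like this is one way to expose a limited capability set while keeping the full console hidden from end users.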

Gordon's question precisely brings out the value of the Enterprise Manager 12c offering. He asked, "How do the services in the cloud get added to Cloud Control monitoring and alerting?" Ever since Amazon became the poster child of public IaaS, enterprises have tried emulating its model within their data centers. What most people ignore or forget is that resources in a cloud have a life beyond the provisioning process; initial provisioning is just the beginning of that lifecycle. In Amazon's case, the management and monitoring of resources is the headache of Amazon's IT staff, and consumers are oblivious to the time and effort it takes to manage them. In a private cloud scenario, one does not have that luxury. Once the database gets provisioned, it needs to be monitored for performance, compliance and configuration drift by the company's own IT staff. In Enterprise Manager 12c, the agent is deployed on the hosts that constitute the pool, so the databases are automatically managed without any additional work. Enterprise Manager comprehensively manages the entire lifecycle, and both administrators and self-service users have tailored views of the databases. This also gives me an opportunity to address a question from a participant who alluded to a third-party tool used exclusively for database provisioning. First, as I mentioned during the webcast, Enterprise Manager 12c is the only tool that handles all the use cases (creation of full databases, creation of schemas, and cloning, both full clone and Snap Clone) from a single management interface. The point tools out there handle only a fraction of these use cases; some specialize in cloning while others specialize in seed database provisioning. Second, as stated in the previous answer, provisioning is only the initial phase of the lifecycle, and a provisioning tool cannot be synonymous with a cloud management tool. Thanks, Gordon, for helping me make that point!

Sam and Cesar share the honors for the most difficult question, which came right at the beginning. "Has it started? Been on hold for a while." was their reaction at two minutes past ten. This is possibly the most embarrassing one for me, because I was caught in traffic. With due apologies for that, I wish my car operated like Enterprise Manager's Database as a Service!


Tuesday Mar 12, 2013

Monitoring virtualization targets in Oracle Enterprise Manager 12c

Contributed by Sampanna Salunke, Principal Member of Technical Staff, Enterprise Manager

To monitor any target instance in Oracle Enterprise Manager 12c, you would typically go to the target home page and use the target menu to navigate to:

  • Monitoring->All Metrics page to view all the collected metrics
  • Monitoring->Metric and Collection Settings to set thresholds and/or modify collection frequencies of metrics

The thresholds and collection frequencies modified this way affect only the target instance you are changing.

However, some virtualization targets need to be monitored and managed differently because of changes to the way data is collected and thresholds/collection frequencies are applied. These target types include:

  • Oracle VM Server
  • Oracle VM Guest

As an optimization to minimize the number of connections made to Oracle VM Manager when collecting data for virtualization targets, the performance metrics for Oracle VM Server and Oracle VM Guest targets are "bulk-collected" at the Oracle VM Server Pool level. This means that thresholds and collection frequencies for Oracle VM Server and Oracle VM Guest metrics need to be set on the Oracle VM Server Pool that they belong to. For example, if a user wants to set thresholds on the "Oracle VM Server Load: CPU Utilization" metric for an Oracle VM Server target, the sequence of steps is:

1. Navigate to the homepage of the Oracle VM Server Pool target that the Oracle VM Server target belongs to

2. Click on the target menu->Monitoring->Metric and Collection Settings

3. Expand the view option to "All Metrics" if required, find the "Oracle VM Server Load" metric, and change the thresholds or collection frequency of "CPU Utilization" as required.

Note that any changes made at the Oracle VM Server Pool level for a "bulk-collected" metric affect all the targets in the server pool for which the metric is applicable. Here, since the user modified the "Oracle VM Server Load: CPU Utilization" threshold, the change is applied to all the Oracle VM Server targets in that server pool (sg-pool1 in this example).

To summarize: the difference between "traditional" and "bulk-collected" monitoring is that thresholds and collection frequencies are modified at the parent target, and the changes are applied to all the child targets for which the metrics are applicable. Data and alerts, however, continue to be uploaded and displayed against the child target as usual.
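This parent/child behavior can be sketched with a toy model: a threshold set once at the pool level is evaluated against every member server, while each alert still names the individual child target. The class and method names below are illustrative, not Enterprise Manager APIs.

```python
# Toy model of "bulk-collected" monitoring. Thresholds live on the parent
# (the server pool); alerts are still raised against individual children.

class ServerPool:
    def __init__(self, name):
        self.name = name
        self.thresholds = {}  # metric name -> threshold value

    def set_threshold(self, metric, value):
        # One change at the pool level...
        self.thresholds[metric] = value

    def evaluate(self, readings):
        # ...is applied to every child target the metric covers,
        # but each resulting alert names the child, not the pool.
        alerts = []
        for target, metrics in readings.items():
            for metric, value in metrics.items():
                limit = self.thresholds.get(metric)
                if limit is not None and value > limit:
                    alerts.append((target, metric, value))
        return alerts

pool = ServerPool("sg-pool1")
pool.set_threshold("CPU Utilization", 80)
print(pool.evaluate({
    "ovs1.example.com": {"CPU Utilization": 92},
    "ovs2.example.com": {"CPU Utilization": 40},
}))  # -> [('ovs1.example.com', 'CPU Utilization', 92)]
```

The single `set_threshold` call standing in for the one edit on the pool's Metric and Collection Settings page is the key point: there is no per-server threshold to maintain.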


Wednesday Mar 06, 2013

Snap Clone: Instant, self-serviced database-on-demand

Snap Clone: Introduction
Oracle released Enterprise Manager 12c Release 2 Plug-in Update 1 in February 2013. This release has several new cloud management features, such as Schema as a Service and Snap Clone. While Schema as a Service is relevant in the context of new database services, Snap Clone is useful for functional testing on pre-existing data.

One big consumer group of the cloud is QA engineers, or testers, who perform User Acceptance Tests (UAT) for various applications. To perform a UAT, they need to create copies of the production database. For intense testing, such as pre-upgrade scenarios, they need a full updateable copy of the production data. In other situations, such as functional testing, they make only minimal updates to the data but need multiple functional copies at the same time. Enterprise Manager 12c supports both scenarios. In the former case, it leverages RMAN backups to clone the data. In the latter case, it leverages copy-on-write technology at the storage layer to perform Enterprise Manager 12c Snap Clone (or just Snap Clone). Currently, the NAS technologies NetApp and ZFS Storage Appliance are supported for Snap Clone. With this technology, the entire data set does not need to be cloned; the new database can physically point to the source blocks within the same filer and only needs to allocate new blocks when there are updates to the cloned copy.

Underlying “Copy on Write” technology
To cover the underlying technology, let us look at the NetApp and Sun ZFS storage technologies. NetApp supports pooling storage resources and creating volumes, called FlexVols, on top of them. NetApp FlexClone technology enables true data cloning: instant replication of FlexVols without requiring additional storage space at creation time. Each cloned volume is a transparent, virtual copy that can be used for a wide range of operations such as product/system development testing, bug fixing, upgrade checks, and data set simulations. FlexClone volumes have all the capabilities of a FlexVol volume, including growing, shrinking, and being the source of a snapshot copy or even another FlexClone volume. Data ONTAP makes this possible through copy-on-write technology. When a volume is cloned, ONTAP does not allocate any new physical space but simply updates metadata to point to the old blocks of the parent volume. NetApp filers use a Write Anywhere File Layout (WAFL) to manage disk storage. When a file is changed, the snapshot copy still points to the disk blocks where the file existed before it was modified, and only the changes (deltas) are written to new disk blocks. A block in WAFL can currently have a maximum of 255 pointers to it, which means a single FlexVol volume can be cloned up to 255 times. All the metadata updates are just pointer changes, and the filer takes advantage of locality of reference, NVRAM, and RAID technology to keep everything fast and reliable. I found the documentation on the NetApp site especially useful for understanding the concept. The following picture provides a graphical illustration of how this works.

Oracle Solaris ZFS employs a similar copy-on-write methodology: clones point to the source blocks of data, and data is never overwritten in place. When a block needs to be modified, ZFS creates new pointers to the new data and a new master block (uberblock) that points to the modified data tree; only then does it switch to the new uberblock and tree. In addition to providing data integrity, keeping both the new and previous versions of the data on disk allows services such as snapshots to be implemented very efficiently.

The best way to think of a storage snapshot is as a point-in-time view of the data. It is a time machine, letting you look into the past. Because it is all just pointers, you can browse the snapshot as if it were the active filesystem. It is read-only, because you cannot change the past, but you can look at it and read the data. NetApp and Sun ZFS snapshots write new information to a special area of disk reserved for storing these changes, called the SnapReserve. Then the pointers that tell the system where to find the data are updated to point to the new data in the SnapReserve.

Space efficiency: since only the deltas are recorded, you get the disk savings of copy-on-write snapshots (typically a few hundred kilobytes for a 1 terabyte database).

Time efficiency: because a snapshot is just pointers, restoring data (using SnapRestore) simply updates the pointers to point at the original data again, which is far faster than copying all the data back. Taking a snapshot completes in seconds, even for very large (terabyte-scale) volumes, and so do restores. A typical terabyte database therefore takes only a couple of minutes to clone, back up, or restore.
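The pointer mechanics described above can be sketched in a few lines: a snapshot copies only the logical-to-physical pointer table, a write allocates a fresh block instead of overwriting, and a restore just swaps the old pointer table back. This is a conceptual toy, not filer code.

```python
# Minimal copy-on-write simulation: snapshots copy pointers, never data.

class CowVolume:
    def __init__(self, blocks):
        self.store = list(blocks)                 # physical blocks on disk
        self.pointers = list(range(len(blocks)))  # logical -> physical map
        self.snapshots = {}

    def snapshot(self, name):
        # Cheap: only the pointer table is copied, no data blocks move.
        self.snapshots[name] = list(self.pointers)

    def write(self, logical_index, data):
        # Copy on write: append a new physical block, repoint the logical one.
        self.store.append(data)
        self.pointers[logical_index] = len(self.store) - 1

    def read(self):
        return [self.store[p] for p in self.pointers]

    def restore(self, name):
        # Restore is near-instant: just swap the pointer table back.
        self.pointers = list(self.snapshots[name])

vol = CowVolume(["A", "B", "C"])
vol.snapshot("before-test")
vol.write(1, "B'")
print(vol.read())            # -> ["A", "B'", "C"]
vol.restore("before-test")
print(vol.read())            # -> ["A", "B", "C"]
```

Note that both `snapshot` and `restore` touch only the pointer table, which is why real filers can snapshot and restore terabyte volumes in seconds: the cost is proportional to the metadata, not the data.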

So, what is the additional benefit of Enterprise Manager Snap Clone over storage level cloning?

Snap Clone is complementary to the copy-on-write technologies described above. It leverages those technologies, but provides additional value in:

  1. Automated registration and association with the Test Master database: Snap Clone registers the storage with Enterprise Manager in the context of the Test Master database. For example, it queries the filer to find the storage volumes and then associates them with the volumes that hold the datafiles. It also gives administrators granular control over which databases can be cloned, since there may be databases that DBAs do not want cloned.
  2. Database as a Service using a self-service paradigm: Snap Clone lets a self-service user (typically a functional tester) provision a clone based on the Test Master. On the administrator side, this includes setting up the pool of servers that will host the databases, creating a zone, creating service templates for provisioning, and setting access controls for users at both the zone level and the service template level.
  3. Time travel: Functional testers often need to go back to an earlier incarnation of a database. Enterprise Manager allows self-service users to take multiple snapshots of the database as backups and then easily restore from an earlier snapshot. Since a snapshot is only a thin copy, backup and restore are almost instantaneous, typically a couple of minutes. During a restore, a large part of that time is spent starting the database and discovering its state in Enterprise Manager, not in the restore itself.
  4. Manageability: Finally, Enterprise Manager provides complete manageability of these clones, including performance management, lifecycle management, and so on. When cloning at the storage volume level, sysadmin tools have little visibility into the databases and applications consuming those volumes. For inventory management, capacity planning and compliance, it is important to track the storage association and lineage of the clones at the database level. Enterprise Manager provides that rich set of manageability features.

So how does this work in Enterprise Manager 12c?

To understand the Snap Clone feature of Enterprise Manager and its relevance to DBaaS, it is important to understand the sequence of steps that enable the feature and DBaaS itself.

Step 1: Setting up the DBaaS Pool
First, the sysadmin has to designate a few servers (which become Enterprise Manager hosts when the agent is deployed on them) to constitute the PaaS Infrastructure Zone. Each of these servers should have the connectivity required to mount the volumes participating in the Snap Clone process.

The DBA intrinsically knows the exact versions and flavors of databases used within each line of business, along with operating system compatibility. As the next level of streamlining, he or she can add each unique type of database configuration (single instance, cluster database, and so on) to a single place called a Pool.

A database pool contains a set of homogeneous resources that can be used to provision a database instance within a PaaS Infrastructure Zone. For Snap Clone in particular, the administrator needs to pre-provision the same version of Oracle Homes either on standalone hosts or in a RAC cluster, which should be a part of the PaaS Infrastructure Zone.

Step 2: Setting up the Test Master
The administrator has to set up a Test Master as a clone of production. Sometimes the administrator also creates another copy of production at the source itself for masking and subsetting. The solution varies with the customer's specific needs and infrastructure: one can use RMAN, Data Guard, GoldenGate, or storage technologies such as NetApp SnapMirror, and most of our customers have figured out one way or another to do it. Customers who wish to use Enterprise Manager for this can use the Database Clone feature (which leverages RMAN behind the scenes) or the data synchronization feature of Change Manager (part of the Database Lifecycle Management Pack) to keep production and the Test Master consistent. There is no single way of accomplishing this; it all depends on the specific use case. Where the customer needs to mask or subset the data at the source, the corresponding Enterprise Manager features can be used as well.

The Test Master has to be created on a ZFS Storage Appliance or a NetApp filer. The currently supported versions are:
·    ZFS Storage Appliance models 7410 and 7420
·    Any NetApp storage model running a supported version of Data ONTAP®. The NetApp interoperability matrix is available here

Here’s a sample layout of database files on a NetApp filer that could constitute the Test Master database:
·    /vol/oradata (datafiles and indexes): [8-16 luns]
·    /vol/oralog (redo logs only): [2-4 luns]
·    /vol/orarch (archived redo logs): [2-4 luns]
·    /vol/controlfiles (small volume for controlfiles): [2-4 luns]
·    /vol/oratemp (temp tablespace): [4-8 luns]

Step 3: Register the storage and designate the Test Master
Once the Test Master database has been created, one has to:
1.    Discover the Test Master database as an Enterprise Manager target
2.    Register the storage with Enterprise Manager. Enterprise Manager uses an agent installed on a Linux x86-64 host to communicate with the filer. For NetApp storage, the connection is over HTTP or HTTPS; for Sun ZFS storage, it is over SSH.

Enterprise Manager associates a database with a filer by deriving the volumes from the data files and then matching them against the volumes seen by the filer. For a database to participate in Snap Clone, it should be located wholly on FlexVols or shares with copy-on-write enabled; Enterprise Manager performs the necessary validations for this.

Step 4: Creating the service template using the Profile
Finally, the Test Master needs to be exposed to self-service users as a source for functional clones. This is done by creating a provisioning profile. A provisioning profile, in general, is an Enterprise Manager concept that denotes a gold image, whether in the form of a tar archive, an RMAN backup, or a Test Master. The profile makes the process repeatable by several users, such as QA teams testing different parts of an application.

The profile is exposed in the service catalog via a service template, which also includes the provisioning procedure and the pre- and post-scripts for deploying the image.

Finally comes the user-side experience
Enterprise Manager supports a self-service model where users can provision databases without being gated by the DBA. The self-service user picks a service template (which links, via the provisioning profile, to the Test Master), specifies the zone in which to deploy, and the database gets provisioned. This new database is actually a "thin clone" of the Test Master; new blocks are allocated only when data is updated. The user can also back up the cloned database; the backups are essentially read-only snapshots of the database. If the user needs to restore, the latest incarnation of the database is simply pointed at a snapshot, so the restore is instantaneous. This literally enables the self-service user to go back in time, in a "time travel" fashion. In addition to provisioning and backup, self-service users can also monitor their databases: check their status, look at session statistics, and so on.

Before concluding this blog entry, let me point to some collateral related to DBaaS that we recently published. Check out the new whitepaper, demos, and presentation. We will soon publish a technical whitepaper on performing E-Business Suite testing using Snap Clone. Till then...


Thursday Feb 28, 2013

Oracle Enterprise Manager Cloud Control 12c Release 2 Plug-in Update 1 now available on OTN

Contributed by Martin Pena, Director, Product Management, Oracle Enterprise Manager

The Oracle Enterprise Manager Cloud Control 12c Release 2 software binaries, updated with the new plug-ins and updated plug-in versions, are now available for download from Oracle Technology Network (OTN). Note that this is not a patch or patch set release; the new and updated plug-ins have simply been integrated with the binaries to make it easier for users to deploy them as part of installing or upgrading Cloud Control. The Management Agent software binaries have not changed.

+  New Management Plug-ins: The following new and updated plug-ins are now available as part of this release. In addition to providing new and enhanced functionality, they incorporate numerous bug fixes.

Plug-In Name / Version
*Enterprise Manager for Oracle Database (DB) (new)
*Enterprise Manager for Oracle Virtualization (VT) (new)
*Enterprise Manager Storage Management Framework (SMF) (new)
*Enterprise Manager for Oracle Cloud (SSA) (new)

How to access Oracle Enterprise Manager Cloud Control 12c Release 2 Plug-in Update 1:

Follow the option that is relevant for your installation to deploy the latest plug-in releases and updates into your environment:

+  If you have not yet installed Enterprise Manager Cloud Control 12c, download the Cloud Control 12c Release 2 Plug-in Update 1 binaries from Oracle Technology Network (OTN) and install the new version. The plug-ins will be deployed as part of the installation process.

+  If you already have Enterprise Manager Cloud Control 12c Release 1 installed [with or without Bundle Patch 1], download Enterprise Manager Cloud Control 12c Release 2 Plug-in Update 1 from Oracle Technology Network (OTN) and upgrade to the new version. The plug-ins will be deployed as part of the upgrade process.

+  If you have already installed or upgraded to 12c Release 2, you do not need to download the binaries from Oracle Technology Network (OTN); you can instead deploy the new and updated plug-in versions using Cloud Control's Self Update feature. To deploy all of the plug-ins simultaneously, use the deploy_plugin_on_server emcli command as shown below:

           emcli deploy_plugin_on_server -plugin="plug-in_id[:version][;plug-in_id[:version]]" [-sys_password=sys_password] [-prereq_check]

For example:

           emcli deploy_plugin_on_server -plugin="oracle.sysman.vt;oracle.sysman.ssa;oracle.sysman.db"

For more information, refer to the Plug-in Manager chapter in the Enterprise Manager Administrator's Guide, available here:

To review more install/upgrade usecases and the FAQ, please refer to the Getting Started chapter in the Basic Installation Guide, available here:

Join the live webcast "Oracle Enterprise Manager Cloud Control 12c Release 2 Plug-in Update 1 Installation and Upgrade Overview" to learn more and ask questions on Friday, March 01, at 9:00 AM PT.


