Monday Jun 23, 2014

Patch automation of Data Guard Databases takes a leap with EM12cR4


Patching has always been one of the many joys (or dreads) of Oracle DBAs. The effort compounds as the database sprawl grows and the architecture complexity increases. Additionally, there is increasing pressure to patch systems with minimal or zero downtime.

EM12c's Database patch automation functionality provides end-to-end management, starting with proactive patch recommendations (like quarterly PSUs and CPUs), a complete pre-flight checker to ensure predictable patching during the maintenance window, and automation for patching the comprehensive list of products in the Oracle Database family. (EM12c Patch Automation overview)

With the introduction of "Out of Place" patching in EM12c, the automation feature got a complete overhaul. Over the past 3 years, I have seen customers moving to this mode to achieve their automation goals. (What is Out of Place?)

Patching Data Guard environments (read: a Primary and its corresponding Standby Databases) has always been a challenging task for DBAs. In addition to handling the differences in steps needed for the configuration, the very nature of its distributed environment and the need to incorporate additional custom tasks make it more demanding. Until now, EM only supported automation in disparate steps and in "In Place" mode.

Starting with 12.1.0.4 (EM12c R4), the support is enhanced to be tighter, can be done in 'Out of Place' mode, and includes new supplemental features to manage the process along with its additional tasks.

In this post we will take a closer look at the Data Guard standby patching feature and then dive into a use case you can try out in your environment.


Tuesday Jun 17, 2014

EM12c Release 4: Job System Easter Eggs - Part 2

This is part 2 of my two part blog post on Job System Easter Eggs (hidden features). The two features being covered are:

  1. New Job progress tracking UI
  2. Import/Export of job definitions

In the previous post, I talked about the new UI, while in this post I will cover import and export of job definitions.


2.  Import/Export of job definitions

The ability to export and import job definitions across EM environments has been a popular enhancement request for quite some time. There are 3 primary drivers:

  • Test to Production: Moving active and library job definitions from test site to production
  • Migration: Move job definitions across EM versions. This is often used as an alternative to upgrade.
  • As Failsafe: For that extra protection, in addition to repository backups. This can be useful for selective restoration of job definitions.

In the first iteration, we are exposing these functions as emcli verbs. Since these verbs are not formally supported in this release, no documentation is included in the EMCLI Reference Guide, so to learn more about them we will have to look at the emcli help. The two verbs are called export_jobs and import_jobs.

>> ./emcli help | grep _jobs
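The output will look roughly like this (illustrative only; the one-line descriptions are taken from the verb reference in the New EMCLI Verbs post later on this page, and your exact formatting may differ):

export_jobs - Export job details in to an xml file
import_jobs - Import job definitions from an xml file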


Some salient points about the import and export verbs:

Export:

  • Rich filter criteria to form the list of jobs to export
  • Active and Library jobs can be exported
  • Preview mode, to view results before exporting them
  • Job definitions are exported as zip files for ease of transfer
  • Contextual information like targets, credentials, access settings, etc. is exported, but not imported. In the future, we may support importing this information as well.
  • System jobs and nested jobs are not exported

Import:

  • Special Preview mode to view contents of the exported file, and to determine possible conflicts during import
  • Rich filter criteria to selectively import job definitions from the exported file
  • Job definitions are imported to the library ONLY. This is true even if an active job was exported
  • Two failure modes - skip on error, and rollback on error
  • Import only works on same or higher version of EM12c. The patch set number matters.


Export Job Definitions

Let's walk through an example.

1. We start with export_jobs. The help for this verb will show you all the available options. Most of the filters are self-explanatory, so I will skip the explanation. The most important option of all is the -preview flag. This, when used in conjunction with other filters, will show results without exporting the job definitions.

>> ./emcli help export_jobs

emcli export_jobs
   [-name="job name1;job name2;..."]
   [-type="job type1;job type2;..."]
   [-targets="tname1:ttype1;tname2:ttype2;..."]
   [-owner="owner1;owner2;..."]
   [-preview]
   -export_file=<Zip filename that will be created>
   -libraryjobs

2. Now let's play with this verb. If we invoke it with just the -preview flag, it will list all job definitions that can be exported, both active ones and those from the library. Note: system jobs and nested jobs are skipped from this output.

>> ./emcli export_jobs -preview

Not all job types are exportable. To determine the list of job types supported via the import and export verbs, use the get_job_types verb.

>>  ./emcli get_job_types

Currently, there are over 50 job types supported, and this list will continue to grow with every release.

3. From the list above, I am primarily interested in jobs created by the user AFULAY, so I apply the -owner filter.

>> ./emcli export_jobs -owner=AFULAY -preview


In this output I see 2 jobs: 'Library Job', a simple OS command job stored in the job library, and 'Test Job', an active OS command job scheduled against a bunch of targets.

Note: if multiple options are specified, like -name and -owner, then these are ANDed together.
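For example, to preview only the jobs owned by AFULAY whose name matches 'Test Job' (both filters must match):

>> ./emcli export_jobs -owner=AFULAY -name='Test Job' -preview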

4. Since I am the lazy kind, I would rather export everything and later decide what I wish to import. So here it goes. The -export_file option takes the location and file name for the export file. Note that the actual output file is an xml file containing job metadata, but the export file is always archived and stored in zip format. At this point, I am sure most would instinctively unzip the file and start perusing its contents, but doing so would be analogous to removing the warranty void sticker from your new TV or Blu-ray player. Basically, attempting to manually modify the contents of the exported zip file is highly discouraged.

>> ./emcli export_jobs -export_file=/tmp/afulay_alljobs.zip


Note how the status column reports success or failure for each job being exported. With the file exported, we now move on to the import verb.


Import Job Definitions

In the previous section, we exported a file with all job definitions in it. Now let's say we share this file with a bunch of other admins and ask them to import whichever job definitions make sense or are relevant to their environment.

1. To understand the capabilities of the import verb, we take a look at the help content.

>> ./emcli help import_jobs

emcli import_jobs
   [-name="job name1;job name2;..."]
   [-type="job type1;job type2;..."]
   [-targets="tname1:ttype1;tname2:ttype2;..."]
   [-owner="owner1;owner2;..."]
   [-preview]
   [-force]
   [-stoponerror]
   -import_file=<Zip filename that contains job definitions>

Most of the options are quite similar to export_jobs, barring a few. The -force flag allows the admin to update an existing job definition. Typically, you will run into this situation when a conflicting job is found in the new environment and you want to either update its definition with the new version from the import file or overwrite localized changes. The -stoponerror flag, when specified, will stop the import process on the first encountered failure and then roll back all jobs imported in that session. We will likely change this label to rollbackonerror to correctly represent its behavior. The default behavior is to skip failed jobs and continue importing others.
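For instance, two sketches using the file we exported earlier:

>> ./emcli import_jobs -import_file=/tmp/afulay_alljobs.zip -force
>> ./emcli import_jobs -import_file=/tmp/afulay_alljobs.zip -stoponerror

The first overwrites conflicting job definitions with the versions from the file, while the second aborts and rolls back on the first failure.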

2. Before our admins import job definitions, they first need to view the contents of the exported file. This again can be done using the -preview option.

>> ./emcli import_jobs -preview -import_file=/tmp/afulay_alljobs.zip


The -preview option in the import verb is very special. It not only lists the contents of the exported file, but also connects to the new EM environment and looks for potential conflicts during import, so this is a deep validation test. As seen in the above screenshot, there are two sections in the output: the first is a listing of all job definitions from the import file, while the second is a list of all conflicts. Note: for demo's sake, I am exporting and importing to the same EM site, and hence every job shows up as a conflict. To address this issue, I will eventually delete the 'Library Job' from my job library and import it from the import file.

Disclaimer:

In the interest of full disclosure, I should mention that there are a few known bugs in the import verb, hence the rationale for not releasing these verbs formally with EM12c R4. Some bugs I ran into while writing this blog were:

  • You cannot export an active job, delete it, and import it back into the same EM environment; this currently is only possible with library jobs. This is an obscure case though.
  • The -force flag is a little flaky, so sometimes it won't force an import even when you want it to
  • The -owner flag does not work on the import file; it instead throws an exception

That said, when the job does get imported, it does so properly, so there is never any risk of metadata corruption.


3. If I try to import the 'Library Job', the verb will fail and give me an error message.

>> ./emcli import_jobs -name='LIBRARY JOB' -import_file=/tmp/afulay_alljobs.zip


The Status column reports Error, while the Notes column gives the reason as 'job already exists'.

4. Now let's delete the library job and try to import it.

>> ./emcli delete_library_job -name='LIBRARY JOB'

>> ./emcli import_jobs -name='LIBRARY JOB' -import_file=/tmp/afulay_alljobs.zip


Success!! We were able to delete the library job and import it back from the import file.


In summary, these are two very useful enhancements made in EM12c R4. Unfortunately, due to time constraints and our inability to meet the set quality standards, we decided to ship these features in a disabled state. This ensures that production sites and users are not impacted, while still giving a few brave souls the opportunity to test these features. In my assessment, the new UI is fairly robust, as I have been using it exclusively for a while. On the other hand, there are a few known bugs with the import and export emcli verbs, so use these with caution.

-- Adeesh Fulay (@adeeshf)  

Monday Jun 16, 2014

EM12c Release 4: Job System Easter Eggs - Part 1

So you just installed a new EM12c R4 environment or upgraded your existing EM environment to EM12c R4. Post upgrade, you go to the Job System activity page (via the Enterprise->Job->Activity menu) and view the progress details of a job. Well, nothing seems to have changed: it's the same UI, the same multi-page drill down to view step output, the same number of clicks, etc. Wrong! In this two part blog post, I talk about two Job System Easter Eggs (hidden features) that most of you will find interesting. These are:

  1. New Job progress tracking UI
  2. Import/Export of job definitions

So before I go any further, let me address the issue of why these features are hidden. As we were building them, we realized that we would not be ready to ship the desired quality of code by the set dates. Hence, instead of removing the code, it was decided to ship it in a disabled state so as not to impact customers, while still allowing a brave few to experiment with it and provide valuable feedback.

1.  New Job Progress Tracking UI

The job system UI hasn't changed much since its introduction almost 10 years ago. It is a daunting task to update all the job system related UIs in a single release, and hence we decided to take a piecemeal approach instead. In the first installment, we have revamped the job progress tracking page.

Old Job Progress Tracking UI

The current UI, as shown above, while very functional, is also very laborious. Multiple clicks and drill downs are required to view the step output for a particular target. Also, any click leads to a complete page refresh, which wastes time and resources. The new UI tries to address all these concerns. It is a single page UI, which means that no matter where you click, you never leave the page and thus never lose context of the target or step you were in. It also significantly reduces the number of clicks required to complete the same task as in the current UI. So let's take a look at this new UI.

First, as I mentioned earlier, you need to enable this UI. To do this, run the following emctl command from any of the OMSes:

./emctl set property -name oracle.sysman.core.jobs.ui.useAdfExecutionUi -value true

This command will prompt for the sysman password and then enable the new UI.

NOTE: This command does not require a restart of the OMS. Once run, the new UI will be enabled for all users across all OMSes.

EMCTL Output
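To switch everyone back to the classic UI later, presumably the same property can be set back to false (individual users can also toggle views per page via the Page Options region described below):

./emctl set property -name oracle.sysman.core.jobs.ui.useAdfExecutionUi -value false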

Now revisit the job progress tracking page from before. You will be directed to the new UI.

New Job Progress Tracking UI

There are in all 6 key regions on this new single page job progress tracking UI. Starting from top left, these are:

  1. Job Run Actions - These are actions that can be performed on the job run, like suspend, resume, retry, stop, edit, etc.
  2. Executions - This region displays all the executions in the job run. An execution, in most cases, represents a single target and hence runs independently from other executions. This region thus shows the progress and status of all executions in a single view. The best part of this region is the column titled 'Execution Time'. The cigar chart in this column represents two things - one, the duration of the execution, and two, the difference in start times. The visual representation helps in identifying runaway executions, or just compare execution times across different targets. The Actions menu allows various options like start, stop, debug, delete, etc.
  3. Execution Summary - Clicking on an execution in the above region, paints the area on the right. This specific region shows execution summary with information like status, start & end date, execution id, command, etc
  4. Execution Steps - This region lists the steps that make up the execution.
  5. Step Output - Clicking on a step from the above region, paints this region. This shows the details of the step. This includes the step output and the ability to download it to a text file.
  6. Page Options - We imagine that learning any new UI takes time, and hence this final region provides the option to switch between the new and the classic view. Additionally, this also allows you to set the auto refresh rate for the page.

Essentially, considering that jobs have two levels - executions and steps, we have experimented with a multi-master style layout. EM has never used such a layout and hence there were concerns raised when we chose to do so.

Master 1 (region 2) -> Detail 1 (regions 3, 4, & 5)

Master 2 (region 4) -> Detail 2 (region 5)


In summary, with this new UI, we have been able to significantly reduce the number of clicks required to track job progress and drill into details. We have also been able to show all relevant information in a single page, thus avoiding unnecessary page redirections and reloads. I would love to hear from you whether this experiment has paid off and whether you find this new UI useful.

In the next part of this blog, I talk about the new emcli verbs to import and export job definitions across EM environments. This has been a long-standing enhancement request, and we are quite excited about our efforts.

-- Adeesh Fulay (@adeeshf)  

Friday Jun 13, 2014

EM12c Release 4: New Compliance features including DB STIG Standard

Enterprise Manager’s compliance framework is a powerful and robust feature that provides users the ability to continuously validate their target configurations against a specified standard. Enterprise Manager’s compliance library is filled with a wide variety of standards based on Oracle’s recommendations, best practices and security guidelines. These standards can be easily associated to a target to generate a report showing its degree of conformance to that standard. ( To get an overview of  Database compliance management in Enterprise Manager see this screenwatch. )

Starting with release 12.1.0.4 of Enterprise Manager, the compliance library will contain a new standard based on the US Defense Information Systems Agency (DISA) Security Technical Implementation Guide (STIG) for Oracle Database 11g. According to the DISA website, "The STIGs contain technical guidance to 'lock down' information systems/software that might otherwise be vulnerable to a malicious computer attack." In essence, a STIG is a technical checklist an administrator can follow to secure a system or software. Many US government entities are required to follow these standards; however, many non-US government entities and commercial companies also base their standards directly or partially on these STIGs.

You can find more information about the Oracle Database and other STIGs on the DISA website.

The Oracle Database 11g STIG consists of two categories of checks, installation and instance. Installation checks focus primarily on the security of the Oracle Home while the instance checks focus on the configuration of the running database instance itself. If you view the STIG compliance standard in Enterprise Manager, you will see the rules organized into folders corresponding to these categories.

The rule names contain a rule ID (DG0020, for example) which maps directly to the check name in the STIG checklist, along with a helpful brief description. The actual description field contains the text from the STIG documentation to aid in understanding the purpose of the check. All of the rules have also been documented in the Oracle Database Compliance Standards reference documentation.

In order to use this standard, both the OMS and agent must be at version 12.1.0.4, as it takes advantage of several features new in this release, including:

  • Agent-Side Compliance Rules
  • Manual Compliance Rules
  • Violation Suppression
  • Additional BI Publisher Compliance Reports

Agent-Side Compliance Rules

Agent-side compliance rules are essentially the result of a tighter integration between Configuration Extensions and Compliance Rules. If you ever created custom compliance content in past versions of Enterprise Manager, you likely used Configuration Extensions to collect additional information into the EM repository so it could be used in a repository compliance rule. This process, although powerful, made it easy to get the SQL modeling wrong in the rule creation wizard. With agent-side rules, the user only needs to choose the Configuration Extension/Alias combination, and that's it. Enterprise Manager will do the rest for you.

This tighter integration also means their lifecycles are managed together. When you associate an agent-side compliance standard to a target, the required Configuration Extensions will be deployed automatically for you. The opposite is also true: when you unassociate the compliance standard, the Configuration Extensions will be undeployed.

The Oracle Database STIG compliance standard is implemented as an agent-side standard which is why you simply need to associate the standard to your database targets without previously deploying the associated Configuration Extensions.

You can learn more about using Agent-Side compliance rules in the screenwatch Using Agent-Side Compliance Rules on Enterprise Manager's Lifecycle Management page on OTN.

Manual Compliance Rules

There are many checks in the Oracle Database STIG, as well as in other common standards, which simply cannot be automated. These could be as simple as "Ensure the datacenter entrance is secured." or as complex as Oracle Database STIG Rule DG0186, "The database should not be directly accessible from public or unauthorized networks". These checks require a human to perform them and attest to their successful completion.

Enterprise Manager now supports these types of checks via manual rules. When first associated to a target, each manual rule generates a single violation. These violations must be manually cleared by a user, who is in essence attesting to the check's successful completion. The user can permanently clear the violation or give a future date on which the violation will be regenerated. Setting a future date is useful when policy dictates a periodic re-validation of conformance, wherein the user will have to re-perform the check. The optional reason field gives the user an opportunity to provide details of the check results.

Violation Suppression

There are situations that require the need to permanently or temporarily suppress a legitimate violation or finding. These include approved exceptions and grace periods. Enterprise Manager now supports the ability to temporarily or permanently suppress a violation. Unlike when you clear a manual rule violation, suppression simply removes the violation from the compliance results UI and in turn its negative impact on the score. The violation still remains in the EM repository and can be accounted for in compliance reports. Temporarily suppressing a violation can give users a grace period in which to address an issue. If the issue is not addressed within the specified period, the violation will reappear in the results automatically. Again the user may enter a reason for the suppression which will be permanently saved with the event along with the suppressing user ID.

Additional BI Publisher compliance reports

As I am sure you have learned by now, BI Publisher now ships with, and is integrated into, Enterprise Manager 12.1.0.4. This means users can take full advantage of the powerful reporting engine by using the Oracle-provided reports or building their own. There are many new compliance-related reports available in 12.1.0.4, covering all aspects including association status, the library, and summary and detailed results.

 10 New Compliance Reports

Compliance Summary Report Example showing STIG results

Conclusion

Together with the Oracle Database 11g STIG compliance standard, these features provide a complete solution for easily auditing and reporting the security posture of your Oracle Databases against this well-known benchmark. You can view an overview presentation and demo in the screenwatch Using the STIG Compliance Standard on Enterprise Manager's Lifecycle Management page on OTN.

Additional EM12c Compliance Management Information

Compliance Management - Overview ( Presentation )

Compliance Management - Custom Compliance on Default Data (How To)

Compliance Management - Custom Compliance using SQL Configuration Extension (How To)

Compliance Management - Custom Compliance using Command Configuration Extension (How To)



Tuesday Jun 10, 2014

EM12c: Using the LIST verb in emcli

Many of us who use EM CLI to write scripts and automate our daily tasks should not miss out on the new list verb released with Oracle Enterprise Manager 12.1.0.3.0. The combination of list and Jython based scripting support in EM CLI makes it easier to achieve automation for complex tasks with just a few lines of code. Before I jump into a script, let me highlight the key attributes of the list verb and why it’s simply excellent!

1. Multiple resources under a single verb:
A resource can be a set of users, targets, etc. Using the list verb, you can retrieve information about a resource from the repository database.

Here is an example which retrieves the list of administrators within EM.
Standard mode
$ emcli list -resource="Administrators"


Interactive mode
emcli>list(resource="Administrators")
The output will be the same as standard mode.

Standard mode
$ emcli @myAdmin.py
Enter password :  ******

The output will be the same as standard mode.

Contents of myAdmin.py script
login()
print list(resource="Administrators",jsonout=False).out()


To get a list of all available resources use
$ emcli list -help

With every release of EM, more resources are being added to the list verb. If you have a resource which you feel would be valuable then go ahead and contact Oracle Support to log an enhancement request with product development. Be sure to say how the resource is going to help improve your daily tasks.

2. Consistent Formatting:
It is possible to format the output of any resource consistently using these options:

  -columns

  This option is used to specify which columns should be shown in the output.

Here is an example which shows the list of administrators and their account status
$ emcli list -resource="Administrators" -columns="USER_NAME,REPOS_ACCOUNT_STATUS"

To get a list of columns in a resource use:
$ emcli list -resource="Administrators" -help


You can also specify the width of each column. For example, here the column width of user_type is set to 20 and department to 30.
$ emcli list -resource=Administrators -columns="USER_NAME,USER_TYPE:20,COST_CENTER,CONTACT,DEPARTMENT:30"

This is useful if your terminal is too small or you need to fine tune a list of specific columns for your quick use or improved readability.

  -colsize
  This option is used to resize column widths.
Here is the same example as above, but using -colsize to define the width of user_type to 20 and department to 30.
$ emcli list -resource=Administrators -columns="USER_NAME,USER_TYPE,COST_CENTER,CONTACT,DEPARTMENT" -colsize="USER_TYPE:20,DEPARTMENT:30"

The existing standard EMCLI formatting options are also available in list verb. They are:
-format="name:pretty" | -format="name:script” | -format="name:csv" | -noheader | -script

There are so many uses depending on your needs. Have a look at the resources and columns in each resource. Refer to the EMCLI book in EM documentation for more information.

3. Search:
Using the -search option in the list verb makes it possible to search for a specific row in a specific column within a resource. This is similar to a SQL WHERE clause. The following operators are supported:
          =
          !=
          >
          <
          >=
          <=
          like
          is (must be followed by null or not null)

Here is an example which searches for all EM administrators in the marketing department located in the USA.
$ emcli list -resource="Administrators" -search="DEPARTMENT ='Marketing'" -search="LOCATION='USA'"

Here is another example which shows all the named credentials created since a specific date. 
$ emcli list -resource=NamedCredentials -search="CredCreatedDate > '11-Nov-2013 12:37:20 PM'"
Note that the timestamp has to be in the format DD-MON-YYYY HH:MI:SS AM/PM

Some resources need a bind variable to be passed to get output. A bind variable is created in the resource and then referenced in the command. For example, this command will list all the default preferred credentials for target type oracle_database:

$ emcli list -resource="PreferredCredentialsDefault" -bind="TargetType='oracle_database'" -colsize="SetName:15,TargetType:15"


You can provide multiple bind variables.

To verify whether a column is searchable or requires a bind variable, use the -help option. Here is an example:
$ emcli list -resource="PreferredCredentialsDefault" -help


4. Secure access
When the list verb collects data, it displays only the content to which the administrator currently logged into EM CLI has access.

For example, consider this use case:
AdminA has access only to TargetA.
AdminA logs into EM CLI.
Executing the list verb to get the list of all targets will show only TargetA.

5. User defined SQL
Using the -sql option, user-defined SQL can be executed. The SQL provided in the -sql option is executed as the EM user MGMT_VIEW, which has read-only access to the EM published MGMT$ database views in the SYSMAN schema.

To get the list of EM published MGMT$ database views, go to the Extensibility Programmer's Reference book in the EM documentation. There is a chapter about Using Management Repository Views. It's always recommended to reference the documentation for the supported MGMT$ database views. Suppose you use a view, say MGMT$ABC, which is not in that chapter: because that view is not documented and not supported, its structure or the data in it might change during an upgrade. Using a supported view ensures that your scripts using -sql will continue working after an upgrade.

Here’s an example
  $ emcli list -sql='select * from mgmt$target'

6. JSON output support
JSON (JavaScript Object Notation) represents data as a collection of name/value pairs. There is a lot of reading material about JSON online for more information.
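As a rough illustration (the exact keys depend on the resource and the columns requested), the Jython list() verb returns an object whose out() method yields a dictionary with a 'data' key holding one entry per row; this is the structure the script below iterates over:

{
  "data": [
    {"TARGET_NAME": "dbxyz", "TARGET_TYPE": "oracle_database", "PROPERTY_NAME": "DBVersion"},
    ...
  ]
}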

As an example, we had a requirement where an EM administrator had many 11.2 databases in their test environment, and the developers requested the administrator to change the lifecycle status from Test to Production. This meant the admin had to go to the EM "All Targets" page, identify the set of 11.2 databases, and then go into each target database page and manually change the property to Production. Sounds easy, but this administrator had numerous targets, and the task is repeated for every release cycle.

We told him there is an easier way to do this with a script, and that he can reuse the script whenever anyone wants to change a set of targets to a different lifecycle status.

Here is a Jython script which uses list and JSON to change the LifeCycle property value of all 11.2 database targets.

If you are new to scripting and Jython, I would suggest visiting the basic chapters of any Jython tutorial. Understanding Jython is important for writing the logic for your use case. If you already write Perl or shell scripts, or know a programming language like Java, then you can easily understand the logic.

Disclaimer: The scripts in this post are subject to the Oracle Terms of Use located here.

 1 from emcli import *
 2 search_list = ['PROPERTY_NAME=\'DBVersion\'','TARGET_TYPE=\'oracle_database\'','PROPERTY_VALUE LIKE \'11.2%\'']
 3 if len(sys.argv) == 2:
 4    print login(username=sys.argv[0])
 5    l_prop_val_to_set = sys.argv[1]
 6    l_targets = list(resource="TargetProperties", search=search_list, columns="TARGET_NAME,TARGET_TYPE,PROPERTY_NAME")
 7    for target in l_targets.out()['data']:
 8       t_pn = 'LifeCycle Status'
 9       print "INFO: Setting Property name " + t_pn + " to value " + l_prop_val_to_set + " for " + target['TARGET_NAME']
 10      print set_target_property_value(property_records=target['TARGET_NAME']+":"+target['TARGET_TYPE']+":"+t_pn+":"+l_prop_val_to_set)
 11 else:
 12   print "\n ERROR: Property value argument is missing"
 13   print "\n INFO: Format to run this file is filename.py <username> <Database Target LifeCycle Status Property Value>"

You can download the script from here. I could not upload the file with a .py extension, so you will need to rename it to myScript.py before executing it using emcli.

A line by line explanation for beginners:

Line 1: Imports the emcli verbs as functions.
Line 2: search_list is a variable passed to the search option of the list verb. I am using escape characters for the single quotes. In the list verb, to pass more than one value for the same option, you define comma-separated values surrounded by square brackets, as above.
Line 3: This is an "if" condition to ensure the user provides two arguments with the script; otherwise, line 12 prints an error message.
Line 4: Logs into EM. You can remove this if you have set up emcli with autologin. For more details about setup and autologin, see the EM CLI book in the EM documentation.
Line 5: l_prop_val_to_set is another variable. This is the property value to be set. Remember, we are changing the value from Test to Production. The benefit of this variable is that you can reuse the script to change the property value from and to any other values.
Line 6: Here the output of the list verb is stored in l_targets. In the list verb I pass the resource as TargetProperties, the search as the search_list variable, and only the three columns I need for my task: target_name, target_type, and property_name.
Line 7: This is a for loop. The data in l_targets is available in JSON format. Using the for loop, each row becomes available in the 'target' variable.
Line 8: t_pn holds the "LifeCycle Status" property name. If required, I could take this as an input as well and then use the script to change any target property. In this example, I just wanted to change "LifeCycle Status".
Line 9: This is a message informing the user that the script is setting the property value for the target, for example dbxyz.
Line 10: This line shows the set_target_property_value verb, which sets the value using the property_records option. Once the value is set for one target, the loop moves to the next one. In my example, I am just showing three DBs, but the real use is when you have 20 or 50 targets.

The script is executed as:
$ emcli @myScript.py subin Production



The recommendation is to first test the scripts before running them on a production system. We tested on a small set of targets while optimizing the script for fewer lines of code and better messaging.

For your quick reference, the resources available in Enterprise Manager 12.1.0.4.0 with list verb are:
$ emcli list -help


Watch this space for more blog posts using the list verb and EM CLI Scripting use cases. I hope you enjoyed reading this blog post and it has helped you gain more information about the list verb. Happy Scripting!!

Disclaimer: The scripts in this post are subject to the Oracle Terms of Use located here.



Monday Jun 09, 2014

EM12c Release 4: New EMCLI Verbs

Here are the new EM CLI verbs in Enterprise Manager 12c Release 4 (12.1.0.4). These help you write new scripts or enhance your existing scripts for further automation.

Basic Administration Verbs
 invoke_ws - Invoke EM web service.

ADM Verbs
 associate_target_to_adm - Associate a target to an application data model.
 export_adm - Export Application Data Model to a specified .xml file.
 import_adm - Import Application Data Model from a specified .xml file.
 list_adms - List the names, target names and application suites of existing Application Data Models
 verify_adm - Submit an application data model verify job for the target specified.

BI Publisher Reports Verbs
 grant_bipublisher_roles - Grants access to the BI Publisher catalog and features.
 revoke_bipublisher_roles - Revokes access to the BI Publisher catalog and features.

Blackout Verbs
 create_rbk - Create a Retro-active blackout.

CFW Verbs
 cancel_cloud_service_requests -  To cancel cloud service requests
 delete_cloud_service_instances -  To delete cloud service instances
 delete_cloud_user_objects - To delete cloud user objects.
 get_cloud_service_instances - To get information about cloud service instances
 get_cloud_service_requests - To get information about cloud requests
 get_cloud_user_objects - To get information about cloud user objects.

Chargeback Verbs
 add_chargeback_entity - Adds the given entity to Chargeback.
 assign_charge_plan - Assign a plan to a chargeback entity.
 assign_cost_center - Assign a cost center to a chargeback entity.
 create_charge_entity_type - Create  charge entity type
 export_charge_plans - Exports charge plans metadata to file
 export_custom_charge_items -  Exports user defined charge items to a file
 import_charge_plans - Imports charge plans metadata from given file
 import_custom_charge_items -  Imports user defined charge items metadata from given file
 list_charge_plans - Gives a list of charge plans in Chargeback.
 list_chargeback_entities - Gives a list of all the entities in Chargeback
 list_chargeback_entity_types - Gives a list of all the entity types that are supported in Chargeback
 list_cost_centers - Lists the cost centers in Chargeback.
 remove_chargeback_entity - Removes the given entity from Chargeback.
 unassign_charge_plan - Un-assign the plan associated to a chargeback entity.
 unassign_cost_center - Un-assign the cost center associated to a chargeback entity.

Configuration/Association History
 disable_config_history - Disable configuration history computation for a target type.
 enable_config_history - Enable configuration history computation for a target type.
 set_config_history_retention_period - Sets the amount of time for which Configuration History is retained.

ConfigurationCompare
 config_compare - Submits the configuration comparison job
 get_config_templates - Gets all the comparison templates from the repository

Compliance Verbs
 fix_compliance_state -  Fix compliance state by removing references in deleted targets.

Credential Verbs
 update_credential_set

Data Subset Verbs
 export_subset_definition - Exports specified subset definition as XML file at specified directory path.
 generate_subset - Generate subset using specified subset definition and target database.
 import_subset_definition - Import a subset definition from specified XML file.
 import_subset_dump - Imports dump file into specified target database.
 list_subset_definitions - Get the list of subset definition, adm and target name

Delete pluggable Database Job Verbs
 delete_pluggable_database - Delete a pluggable database

Deployment Procedure Verbs
 get_runtime_data - Get the runtime data of an execution

Discover and Push to Agents Verbs
 generate_discovery_input - Generate Discovery Input file for discovering Auto-Discovered Domains
 refresh_fa - Refresh Fusion Instance
 run_fa_diagnostics - Run Fusion Applications Diagnostics

Fusion Middleware Provisioning Verbs
 create_fmw_domain_profile - Create a Fusion Middleware Provisioning Profile from a WebLogic Domain
 create_fmw_home_profile - Create a Fusion Middleware Provisioning Profile from an Oracle Home
 create_inst_media_profile - Create a Fusion Middleware Provisioning Profile from Installation Media

Incident Rules Verbs
 add_target_to_rule_set - Add a target to an enterprise rule set.
 delete_incident_record - Delete one or more open incidents
 remove_target_from_rule_set - Remove a target from an enterprise rule set.

 Job Verbs
 export_jobs - Export job details in to an xml file
 import_jobs - Import job definitions from an xml file
 job_input_file - Supply details for a job verb in a property file
 resume_job - Resume a job or set of jobs
 suspend_job - Suspend a job or set of jobs

Oracle Database as a Service Verbs
 config_db_service_target - Configure DB Service target for OPC

Privilege Delegation Settings Verbs
 clear_default_privilege_delegation_setting - Clears the default privilege delegation setting for a given list of platforms
 set_default_privilege_delegation_setting - Sets the default privilege delegation setting for a given list of platforms
 test_privilege_delegation_setting - Tests a Privilege Delegation Setting on a host

SSA Verbs
 cleanup_dbaas_requests - Submit cleanup request for failed request
 create_dbaas_quota - Create Database Quota for a SSA User Role
 create_service_template - Create a Service Template
 delete_dbaas_quota - Delete the Database Quota setup for a SSA User Role
 delete_service_template - Delete a given service template
 get_dbaas_quota - List the Database Quota setup for all SSA User Roles
 get_dbaas_request_settings - List the Database Request Settings
 get_service_template_detail - Get details of a given service template
 get_service_templates -  Get the list of available service templates
 rename_service_template -  Rename a given service template
 update_dbaas_quota - Update the Database Quota for a SSA User Role
 update_dbaas_request_settings - Update the Database Request Settings
 update_service_template -  Update a given service template.

 SavedConfigurations
 get_saved_configs  - Gets the saved configurations from the repository

 Server Generated Alert Metric Verbs
 validate_server_generated_alerts  - Server Generated Alert Metric Verb

Services Verbs
 edit_sl_rule - Edit the service level rule for the specified service

Siebel Verbs
 list_siebel_enterprises -  List Siebel enterprises currently monitored in EM
 list_siebel_servers -  List Siebel servers under a specified siebel enterprise
 update_siebel - Update a Siebel enterprise or its underlying servers

SiteGuard Verbs
 add_siteguard_aux_hosts -  Associate new auxiliary hosts to the system
 configure_siteguard_lag -  Configure apply lag and transport lag limit for databases
 delete_siteguard_aux_host -  Delete auxiliary host associated with a site
 delete_siteguard_lag -  Erases apply lag or transport lag limit for databases
 get_siteguard_aux_hosts -  Get all auxiliary hosts associated with a site
 get_siteguard_health_checks -  Shows schedule of health checks
 get_siteguard_lag -  Shows apply lag or transport lag limit for databases
 schedule_siteguard_health_checks -  Schedule health checks for an operation plan
 stop_siteguard_health_checks -  Stops all future health check execution of an operation plan
 update_siteguard_lag -  Updates apply lag and transport lag limit for databases

Software Library Verbs
 stage_swlib_entity_files -  Stage files of an entity from Software Library to a host target.

Target Data Verbs
 create_assoc - Creates target associations
 delete_assoc - Deletes target associations
 list_allowed_pairs - Lists allowed association types for specified source and destination
 list_assoc - Lists associations between source and destination targets
 manage_agent_partnership - Manages partnership between agents. Used for explicitly assigning agent partnerships

Trace Reports
 generate_ui_trace_report  -  Generate and download UI Page performance report (to identify slow rendering pages)

VI EMCLI Verbs
 add_virtual_platform - Add Oracle Virtual Platform(s).
 modify_virtual_platform - Modify Oracle Virtual Platform.

To get more details about each verb, execute
$ emcli help <verb_name>
Example: $ emcli help list_assoc

New resources in list verb
These are the new resources in EM CLI list verb :
Certificates
  WLSCertificateDetails

Credential Resource Group
  PreferredCredentialsDefaultSystemScope - Preferred credentials (System Scope)
  PreferredCredentialsSystemScope - Target preferred credential

Privilege Delegation Settings
  TargetPrivilegeDelegationSettingDetails  - List privilege delegation setting details on a host
  TargetPrivilegeDelegationSetting - List privilege delegation settings on a host
  PrivilegeDelegationSettings  - Lists all Privilege Delegation Settings
  PrivilegeDelegationSettingDetails - Lists details of  Privilege Delegation Settings

To get more details about each resource, execute
$ emcli list -resource="<resource_name>" -help
Example: $ emcli list -resource="PrivilegeDelegationSettings" -help

Deprecated Verbs:
Agent Administration Verbs
 resecure_agent - Resecure an agent

To get the complete list of verbs, execute:
$ emcli help

Update (6/11): Please note that the "Gold Agent Image Verbs" and "Agent Update Verbs" shown under "emcli help" are not supported yet.


Wednesday Jun 04, 2014

EM12c Release 4: Database as a Service Enhancements

Oracle Enterprise Manager 12.1.0.4 (or simply EM12c R4) is the latest update to the product. As with previous versions, this release provides tons of enhancements and bug fixes that contribute to improved stability and quality. One of the most exciting areas, and one that has seen tremendous growth in the last few years, is Database as a Service. EM12c R4 provides a significant update to Database as a Service. The key themes are:

  1. Comprehensive Database Service Catalog (includes single instance, RAC, and Data Guard)
  2. Additional Storage Options for Snap Clone (includes support for Database feature CloneDB)
  3. Improved Rapid Start Kits
  4. Extensible Metering and Chargeback
  5. Miscellaneous Enhancements


1. Comprehensive Database Service Catalog

Before we get deep into the implementation of a service catalog, let's first understand what it is and what benefits it provides. Per ITIL, a service catalog is an exhaustive list of IT services that an organization provides or offers to its employees or customers. Service catalogs have been widely popular in the space of cloud computing, primarily as the medium to provide standardized and pre-approved service definitions. There is already some good collateral out there that talks about Oracle database service catalogs. The two whitepapers I recommend reading are:

Service Catalogs: Defining Standardized Database Service

High Availability Best Practices for Database Consolidation: The Foundation for Database as a Service [Oracle MAA]

EM12c has come with an out-of-the-box service catalog and self service portal since release 1. For customers, it provides the following benefits:

  • Present a collection of standardized database service definitions
  • Define standardized pools of hardware and software for provisioning
  • Role based access to cater to different classes of users
  • Automated procedures to provision the predefined database definitions
  • Setup of chargeback plans based on service tiers and database configuration sizes

Starting with Release 4, the scope of services offered via the service catalog has been expanded to include databases with varying levels of availability: Single Instance (SI) or Real Application Clusters (RAC) databases with multiple Data Guard based standby databases. Some salient points of the Data Guard integration:

  • Standby pools can now be defined across different datacenters or within the same datacenter as the primary (this helps in modelling the concept of near and far DR sites)
  • The standby databases can be single instance, RAC, or RAC One Node databases
  • Multiple standby databases can be provisioned, where the maximum limit is determined by the version of database software
  • The standby databases can be in either mount or read only (requires active data guard option) mode
  • All database versions 10g to 12c supported (as certified with EM 12c)
  • All 3 protection modes can be used: Maximum Availability, Maximum Performance, and Maximum Protection
  • Log apply can be set to sync or async along with the required apply lag

The different service levels or service tiers are popularly represented using metals: Platinum, Gold, Silver, Bronze, and so on. The Oracle MAA whitepaper (referenced above) calls out the various service tiers as defined by Oracle's best practices, but customers can choose any logical combination from the table below:

Combinations supported in EM 12cR4:

Primary | Standby [1 or more]
--------|--------------------
SI      | -
SI      | SI
RAC     | -
RAC     | SI
RAC     | RAC
RON     | -
RON     | RON

where RON = RAC One Node, which is supported via custom post-scripts in the service template

A sample service catalog would look like the image below. Here we have defined 4 service levels, deployed across 2 data centers, with 3 standardized sizes. Again, it is important to note that this is just an example to get the creative juices flowing. I imagine each customer will come up with their own catalog based on application requirements, RTO/RPO goals, and the product licenses they own. In the screenwatch titled 'Build Service Catalog using EM12c DBaaS', I walk through the complete steps required to set up this sample service catalog in EM12c.


2. Additional Storage Options for Snap Clone

In my previous blog posts, I have described the snap clone feature in detail. Essentially, it provides a storage agnostic, self service, rapid, and space efficient approach to solving your data cloning problems. The net benefit is that you get incredible amounts of storage savings (on average 90%), all while cloning databases in a matter of minutes. Space and time: two things enterprises would love to save on. This feature has been designed with the goal of providing data cloning capabilities while protecting your existing investments in server, storage, and software. With this in mind, we have pursued a dual solution approach of hardware and software. In the hardware approach, we connect directly to your storage appliances and perform all the low level actions required to rapidly clone your databases. In the software approach, we use an intermediate software layer to talk to any storage vendor or any storage configuration to perform the same low level actions. Thus we deliver the benefits of database thin cloning without requiring you to drastically change your infrastructure or IT's operating style.

In Release 4, we expand the scope of options supported by snap clone with the addition of database CloneDB. While CloneDB is not a new feature (it was first introduced in the 11.2.0.2 patch set), it has over the years become more stable and mature. CloneDB leverages a combination of the Direct NFS (dNFS) feature of the database, RMAN image copies, sparse files, and copy-on-write technology to create thin clones of databases from existing backups in a matter of minutes. It essentially has all the traits that we want to present to our customers via the snap clone feature. For more information on CloneDB, I highly recommend reading the following sources:

Blog by Tim Hall: Direct NFS (DNFS) CloneDB in Oracle Database 11g Release 2

Oracle OpenWorld Presentation by Cern: Efficient Database Cloning using Direct NFS and CloneDB

The advantages of the new CloneDB integration with EM12c Snap Clone are:

  • Space and time savings
  • Ease of setup - no additional software is required other than the Oracle database binary
  • Works on all platforms
  • Reduce the dependence on storage administrators
  • Cloning process fully orchestrated by EM12c, and delivered to developers/DBAs/QA Testers via the self service portal
  • Uses dNFS to deliver better performance, availability, and scalability than kernel NFS
  • Complete lifecycle of the clones managed by EM12c - performance, configuration, etc


3. Improved Rapid Start Kits

DBaaS deployments tend to be complex, and their setup requires a series of steps. These steps are typically performed across different users and different UIs. The Rapid Start Kit provides a single command solution to set up Database as a Service (DBaaS) and Pluggable Database as a Service (PDBaaS). One command creates all the Cloud artifacts like Roles, Administrators, Credentials, Database Profiles, PaaS Infrastructure Zone, Database Pools and Service Templates. Once the Rapid Start Kit has been successfully executed, requests can be made to provision databases and PDBs from the self service portal. The Rapid Start Kit can create complex topologies involving multiple zones, pools and service templates. It also supports standby databases and the use of RMAN image backups.

The Rapid Start Kit in reality is a simple emcli script which takes a bunch of xml files as input and executes the complete automation in a matter of seconds. On a full rack Exadata, it took only 40 seconds to set up PDBaaS end-to-end. This kit works both for Oracle's engineered systems like Exadata, SuperCluster, etc. and for commodity hardware. One can draw a parallel to the Exadata One Command script, which likewise takes a bunch of inputs from the administrators and then runs a simple script that configures everything from the network to provisioning the DB software.

Steps to use the kit:

  • The kit can be found under the SSA plug-in directory on the OMS: EM_BASE/oracle/MW/plugins/oracle.sysman.ssa.oms.plugin_12.1.0.8.0/dbaas/setup
  • It can be run from this default location or from any server which has emcli client installed
  • For most scenarios, you would use the script dbaas/setup/database_cloud_setup.py
  • For Exadata, special integration is provided to reduce the number of inputs even further. The script to use for this scenario would be dbaas/setup/exadata_cloud_setup.py
  • The database_cloud_setup.py script takes two inputs:
    • Cloud boundary xml: This file defines the cloud topology in terms of the zones and pools, along with the host names, Oracle Home locations, or container database names that will be used as infrastructure for provisioning database services. This file is optional in the case of Exadata, as the boundary is well known via the Exadata system target available in EM.
    • Input xml: This file captures inputs for users, roles, profiles, service templates, etc. Essentially, all inputs required to define the DB services and other settings of the self service portal.
  • Once all the xml files have been prepared, invoke the script as follows for PDBaaS:
    emcli @database_cloud_setup.py -pdbaas 
          -cloud_boundary=/tmp/my_boundary.xml 
          -cloud_input=/tmp/pdb_inputs.xml

         The script will prompt for passwords a few times for key users like sysman, cloud admin, SSA admin, etc. Once complete, you can simply log into EM as the self service user and request databases from the portal.

More information available in the Rapid Start Kit chapter in Cloud Administration Guide


4. Extensible Metering and Chargeback

Last but not least, Metering and Chargeback in Release 4 has been made extensible in all possible regards. The new extensibility features allow customers, partners, system integrators, etc. to:

  • Extend chargeback to any target type managed in EM
  • Promote any metric in EM as a chargeback entity
  • Extend list of charge items via metric or configuration extensions
  • Model abstract entities like the number of backup requests, job executions, support requests, etc.

A slew of emcli verbs have also been added that allow administrators to create, edit, delete, and import/export charge plans, and to assign cost centers, all via the command line.
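For example, two of these verbs (listed under Chargeback Verbs in the New EMCLI Verbs post earlier on this page) that you might call from a script:

$ emcli list_charge_plans
$ emcli list_cost_centers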

More information available in the Chargeback API chapter in Cloud Administration Guide.


5. Miscellaneous Enhancements

There are other miscellaneous, yet important, enhancements that are worth a mention. These have mostly been asked for by customers like you. They are:

  • Custom naming of DB Services
    • Self service users can provide custom names for DB SID, DB service, schemas, and tablespaces
    • Every custom name is validated for uniqueness in EM
  • 'Create like' of Service Templates
    • Now creating variants of a service template is only a click away. This would be vital when you publish service templates to represent different database sizes or service levels.
  • Profile viewer
    • View the details of a profile like datafile, control files, snapshot ids, export/import files, etc prior to its selection in the service template
  • Cleanup automation - for failed and successful requests
    • Single emcli command to cleanup all remnant artifacts of a failed request
    • Cleanup can be performed on a per request basis or for the entire pool
    • As an extension, you can also delete successful requests
  • Improved delete user workflow
    • Allows administrators to reassign cloud resources to another user or delete all of them
  • Support for multiple tablespaces for schema as a service
    • In addition to multiple schemas, user can also specify multiple tablespaces per request

I hope this was a good introduction to the new Database as a Service enhancements in EM12c R4. I encourage you to explore many of these new and existing features and give us feedback.

Good luck!

References:

Cloud Management Page on OTN

Cloud Administration Guide [Documentation]

-- Adeesh Fulay (@adeeshf)

Thursday Apr 24, 2014

The case for Snap Clone over point tools

Today, I stumbled over a competitor's blog, conspicuous by its factual incorrectness about Enterprise Manager Snap Clone. However, I must compliment the author of the blog because, inadvertently, he has raised a point that we have been highlighting all along. The author, with reference to Data Guard and storage technologies, argues against cobbling technologies together and adding another technology stack to the mix without any automated management.

Precisely the point! In the wide realm of technologies, there are necessities and there are accessories, a.k.a. nice-to-haves. The necessities are technologies that are needed anyway, such as high fidelity, high performance storage from a reputed vendor, or a good DR solution for a mission critical database environment. Similarly, for any Oracle DBA worth his/her salt, Enterprise Manager 12c is a necessity, a part of daily life. The Enterprise Manager agent, keeping vigil on every host, is therefore not an overhead but the representative (the "agent" in the true sense) of the DBA. Deep diagnostics, performance management, large scale configuration management, patching and compliance management make Enterprise Manager the darling of any Oracle DBA.

All surveys suggest that DBAs spend a considerable amount of time in Enterprise Manager performing things well beyond data cloning. So why invest in an accessory just for the cloning of Oracle test databases, and unnecessarily proliferate the number of point tools (and possibly several instances of them) that you need to manage and maintain? History shows that very few such point tools solved customers' CAPEX and OPEX problems over the long run. It is like using a spreadsheet for expenses and an ERP system for all other financial tasks. This is not to suggest that these point tools do not have good, innovative features. Over my tenure in the industry, I have come across several such tools with nice features, but often the hidden costs outweigh the benefits. Our position in this aspect has been consistent, whether it concerns a competitor's tool or our own. A few years back, we integrated My Oracle Support into Enterprise Manager with the same consistent goal: that Enterprise Manager will serve as the single pane of glass for the Oracle ecosystem. The same has been our position on any product that we acquire.

Snap Clone's support for Data Guard and native storage stems from popular customer demand to leverage technologies customers have already invested in, and not to create standalone islands of automation. Moreover, several customers have voiced in favor of the performance and scalability advantages that they get by leveraging the native storage APIs. How else would you support one of the world's largest banks, a Snap Clone customer, which performs 60,000 (sixty thousand) data refreshes per year? In any case, that should not imply that we bind ourselves to any of those technologies. We also support cloning on various storage systems based on the ZFS filesystem. Similarly, the Test Master refresh can be achieved with any one of RMAN, Data Guard, GoldenGate or storage replication, optionally orchestrated with the EM Job System.

Enterprise Manager 12c has taken a great step in delivering features via plugins that can be revised independently of the framework. An unwanted side effect is that awareness often lags behind what is actually supported in the latest version of the product. For example, the filesystem support was introduced last Fall. And of course, Enterprise Manager 12c Snap Clone supports RAC. My esteemed colleague and DBA par excellence has highlighted some of these points in her blog to dispel some of the prevalent awareness issues. Snap Clone's usage among the E-Business Suite and developer community does not need any special accreditation: it is heavily used by the world's largest E-Business Suite developer community, the Oracle E-Business Suite Engineering organization itself! It is true that Snap Clone does not support restoration to any arbitrary point in time, but then our customers and prospects have not voiced a need for it. In reality, most customers want to perform intermediate data transformations such as masking and subsetting as they clone from production to test, and Enterprise Manager 12c already boasts sophisticated data masking technologies, again via the same interface. It also includes testing features like Real Application Testing (RAT) that can complement and follow the test database creation. Future releases of Enterprise Manager will support a tighter integration among these features.

Snap Clone is delivered as part of the Database as a Service feature set, which has been pioneering, industry-leading and adopted at a great pace. Little wonder that we have already received a copious number of Openworld paper submissions on the topic. In this emerging trend of DBaaS adoption, we find no reason to fragment tasks such as fresh database creation, pluggable database provisioning and cloning across siloed point tools (not to mention the broader PaaS capabilities which may be needed for complete application testing). Each use case may be different, but all of them need a single service delivery platform. EM12c is that platform for Oracle. Period. So, think twice before 'adding another technology to the mix'. You do not need to.

Thursday Jan 02, 2014

What is EM 12c DBaaS Snap Clone?

Happy New Year to all! Being the first blog post of the new year, let's look at a relatively new feature in EM that has gained significant popularity over the last year - EM 12c DBaaS Snap Clone.

The ‘Oracle Cloud Management Pack for Oracle Database’, a.k.a. the Database as a Service (DBaaS) feature in EM 12c, has grown tremendously since its release two years ago. It started with basic single instance and RAC database provisioning, a technical service catalog, an out-of-box self service portal, metering and chargeback, etc. But since then we have added provisioning of schemas and pluggable databases, full clones using RMAN backups, and Snap Clone. This video showcases the various EM12c DBaaS features.

This blog will cover one of the most exciting and popular features – Snap Clone. In one line, Snap Clone is a self service way of creating rapid and space efficient clones of large (~TB) databases.

  • Self Service - empowers end users (developers, testers, data analysts, etc.) to get access to database clones whenever they need them
  • Rapid - refers to the time it takes to clone the database: minutes, not hours, days, or weeks
  • Space Efficient - represents the significant reduction in storage (>90%) required for cloning databases

Customer Scenario

To best explain the benefits of Snap Clone, let’s look at a Banking customer scenario:

  • 5 production databases totaling 30 TB of storage
  • All 5 production databases have a standby
  • Clones of the production database are required for data analysis and reporting
  • 6 total clones across different teams every quarter
  • For security reasons, sensitive data has to be masked prior to cloning

Based on the above scenario, the storage required, if using traditional cloning techniques, can be calculated as follows:

5 Prod DB                  = 30 TB
5 Standby DB            = 30 TB
5 Masked DB             = 30 TB (These will be used for creating clones)
6 Clones (6 * 30 TB) = 180 TB
                               ------------------
Total                           = 270 TB
Time = days to weeks

As the numbers indicate, this is quite horrible. Not only does 30 TB turn into 270 TB, but creating 6 clones of all production databases would also take forever. In addition to this, there are other issues with data cloning, like:

  • Lack of automation. Scripts are good but often not a long term solution.
  • Traditional cloning techniques are slow, while existing storage vendor solutions are DBA-unfriendly
  • Data explosion often outpaces storage capacity and hurts IT's ability to provide clones for dev and testing
  • Archaic processes that require multiple users to share a single clone, or only supports fixed refresh cycles
  • Different priorities between DBAs and Storage admins

Snap Clone to the Rescue

All of the above issues lead to slow turnaround times, and users have to wait for days and weeks to get access to their databases. Basically, we end up with competing priorities and requirements, where the user demands self service access, rapid cloning, and the ability to revert data changes, while IT demands standardization, better control, reduction in storage and administrative overhead, better visibility into the database stack, etc.

EM 12c DBaaS Snap Clone tries to address all these issues. It provides:

  • Rapid and space efficient cloning of databases by leveraging storage copy-on-write (or similar) technology
  • Supports all database versions from 10g to 12c
  • Supports various storage vendors and configurations (NAS and SAN)
  • Lineage and association tracking between clone master and its various clones and snapshots
  • 'Time Travel' capability to restore and access past data
  • Deep visibility into storage, OS, and database layer for easy triage of performance and configuration issues
  • Simplified access for end user via out-of-the-box self service portal
  • RESTful APIs to integrate with custom portals and third party products
  • Ability to meter and charge back on the clone databases

So how does Snap Clone work?

The secret sauce lies in the Storage Management Framework (SMF) plug-in. This plug-in sits between the storage system and the DBA, and provides the much needed layer of abstraction required to shield DBAs and users from the nuances of the different storage systems. At the storage level, Snap Clone makes use of storage copy-on-write (or similar) technology. There are two options in terms of using and interacting with storage:

1. Direct connection to storage: Here storage admins can register NetApp and ZFS storage appliances with EM, and then EM directly connects to the storage appliance and performs all required snapshot and clone operations. This approach requires you to license the relevant options on the storage appliance, but it is the easiest, most efficient, and most fault tolerant option.

2. Connection to storage via ZFS file system: This is a storage vendor agnostic solution and can be used by any customer. Here, instead of connecting to the storage directly, the storage admin mounts the volumes to a Solaris server and formats them with the ZFS file system. All snapshot and clone operations required on the storage are then conducted via the ZFS file system. The good thing about this approach is that it does not require thin cloning options to be licensed on the storage, since the ZFS file system provides these capabilities.
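
Either way, the space efficiency comes from the same copy-on-write idea: a clone shares every block with its master and consumes new space only for blocks it changes. The toy Python sketch below is purely illustrative (it is not EM or SMF code), but it captures why a multi-terabyte clone can occupy just a few gigabytes:

    class Volume(object):
        def __init__(self, blocks, parent=None):
            self.blocks = dict(blocks)  # block number -> data owned by this volume
            self.parent = parent        # master volume to fall back on for reads

        def clone(self):
            return Volume({}, parent=self)  # shares everything, owns nothing yet

        def read(self, n):
            if n in self.blocks:
                return self.blocks[n]
            return self.parent.read(n)      # unmodified blocks come from the master

        def write(self, n, data):
            self.blocks[n] = data           # only changed blocks consume new space

    master = Volume(dict((i, 'block-%d' % i) for i in range(1000)))
    clone = master.clone()
    clone.write(0, 'changed')
    print '%d of %d blocks take up new space' % (len(clone.blocks), len(master.blocks))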

For more details on how to setup and use Snap Clone, refer to a previous blog post.

Now, let's go back to our Banking customer scenario and see how Snap Clone helped them reduce their storage cost and time to clone.

5 Prod DB                      = 30 TB
5 Standby DB                   = 30 TB
5 Masked DB                    = 30 TB
6 Clones (6 * 5 * 2 GB)        = 60 GB (instead of 6 * 30 TB = 180 TB)
                                 ------------------
Total                          = ~90 TB (down from 270 TB)
Time                           = minutes (down from days to weeks)

Assuming the clone databases will have minimal writes, we allocate about 2 GB of write space per clone. For 5 production databases and 6 clones, this totals just 60 GB in required storage space - a whopping 99.97% savings in storage. Plus, these clones are created in a matter of minutes and not the usual days or weeks. The product has out-of-the-box charts that show the storage savings across all storage devices and cloned databases. See the screenshot below.

Snap Clone Savings
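
For those who like to check the arithmetic, a quick back-of-the-envelope sketch (sizes taken from the scenario above):

    TB = 1024                  # GB per TB
    full_clones = 6 * 30 * TB  # traditional: 6 clones x 30 TB, in GB
    snap_clones = 6 * 5 * 2    # Snap Clone: 6 requests x 5 DBs x 2 GB of writes
    print 'Snap Clone storage: %d GB' % snap_clones
    print 'Savings on clones : %.2f%%' % (100.0 * (1 - float(snap_clones) / full_clones))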

Where can you use Snap Clone databases?

As I said earlier, Snap Clone is most effective when cloning large (~TB) databases. Common scenarios where our customers best use Snap Clone are:

  • Application upgrade testing. For example, E-Business Suite upgrade to R12.
  • Functional testing. For example, testing using production datasets.
  • Agile development. For example, run parallel development sprints by giving each sprint its own cloned database.
  • Data Analysis and Reporting. For example, stock market analysis at the close of market everyday.

It's obvious that Snap Clone has a strong affinity to applications, since it's application data that you want to clone and use. Hence it is important to add that the Snap Clone feature, when combined with EM12c Middleware as a Service (MWaaS), can provide a complete end-to-end self service application deployment experience. If you have existing portals or need to integrate Snap Clone with existing processes, then use our RESTful APIs for easy integration with third party systems.
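
As a hedged illustration of such an integration (the /em/cloud entry point and the Accept media type below follow the Cloud Administration Guide's API chapter, but treat the exact values, host, and port as assumptions to verify against your release), a custom portal could discover a user's cloud resources like this:

    import base64, urllib2

    # Placeholders: substitute your OMS host/port and a self-service user's credentials
    url = 'https://myoms.example.com:7802/em/cloud'
    req = urllib2.Request(url)
    req.add_header('Authorization', 'Basic ' + base64.b64encode('ssa_user:password'))
    req.add_header('Accept', 'application/oracle.com.cloud.common.Cloud+json')  # media type assumed
    print urllib2.urlopen(req).read()  # JSON describing zones, service templates, etc.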

In summary, Snap Clone is a new and exciting way of dealing with data cloning challenges. It shields DBAs from the nuances of different storage systems, while allowing end users to request and use clones in a rapid and self service fashion. All of this while saving storage costs. So try this feature out today, and your development and test teams will thank you forever.

In subsequent blog posts, we will look at some popular deployment models used with Snap Clone.

-- Adeesh Fulay (@adeeshf)

Additional References

Cloud Management Page on OTN

Cloud Administration Guide (Documentation)

Enterprise Manager 12c Database-as-a-Service Snap Clone Overview (Presentation)

Tuesday Dec 31, 2013

Database Lifecycle Management for Cloud Service Providers

Adopting the Cloud Computing paradigm enables service providers to maximize revenues while driving capital costs down through greater efficiencies of working capital and OPEX changes. In the case of an enterprise private cloud, corporate IT, which plays the role of the provider, may not be interested in revenues, but it still cares about providing differentiated service at lower cost. The efficiency and cost eventually make the service profitable and sustainable. This basic tenet has to be satisfied irrespective of the type of service: infrastructure (IaaS), platform (PaaS) or software application (SaaS). In this blog, we specifically focus on the database layer and how its lifecycle gets managed by service providers.

Any service provider needs to ensure that:

  • Hardware and software populations are under control. As new consumers come in and some consumers retire, there is a constant flux of resources in the data center. The flux has to be managed and controlled
  • The platform for providing the service is standardized, so that operations can be conducted predictably and at scale across a pool of resources
  • Mundane and repeatable tasks like backup, patching, etc are automated
  • Customer attrition does not happen owing to heightened compliance risk

While the Database Lifecycle Management features of Enterprise Manager have been widely adopted, I feel that the applicability of these features to service providers is not yet well understood, and hence not fully appreciated. In this blog, let me try to address how the lifecycle management features can be effective in addressing each of the above requirements.

1. Controlling hardware and software population:

Enterprise Manager 12c provides a near real-time view of the assets in a data center. It comes with out-of-box inventory reports that show the current population and the growth trend within the data center. The inventory can be further sliced and diced based on cost center, owner, etc. In a cloud, whether private or public, the target properties of each asset can be appropriately populated, so that the provider can easily figure out the distribution of assets. For example, how many databases are owned by Marketing LOB can be easily answered. The flux within the data center is usually higher when virtualization techniques such as server virtualization and Oracle 12c multitenant option are used. These technologies make the provisioning process extremely nimble, potentially leading to a higher number of virtual machines (VMs) or pluggable databases (PDBs) within the data center and hence accentuating the need for such ongoing reporting. The inventory reports can be also created using BI Publisher and delivered to non-EM users, such as a CIO.


Now, not all reports can always be readily available. There can be situations where a data center manager seeks ad hoc information, such as how many databases owned by a particular customer are running on Exadata. This involves an ad hoc query based upon an association (a database running on Exadata) and target properties (the owner being the customer). Enterprise Manager 12c provides a sophisticated Configuration Search feature that lets administrators define such ad hoc queries and save them for reuse.

2. Standardization of platform:

The massive standardization of platform components is not merely a nice-to-have for a cloud service provider, it is rather a must-have. A provider may choose to offer various levels of services, tagged with levels such as gold, silver and bronze. However, for each such level, the platform components need to be standardized, not only for ease of manageability but also for ensuring consistency of QOS across all the tenants. So how can the platform be standardized? We can highlight two major Enterprise Manager 12c features here:

The ability to rollout gold images that can be version controlled within Enterprise Manager's Software Library. The inputs of the provisioning process can be "locked down" by the designer of the provisioning process, thereby ensuring that each deployment is a replica of the other.

The ability to compare the configuration of deployments (often referred to as the "Points of Delivery" of the services). This is a very powerful feature that supports 1-n comparisons across multiple tiers of the stack. For example, one can compare an entire database machine from storage cells, compute nodes to databases with one or more of those.

3. Automation of repeatable tasks:

 A large portion of OPEX for a service provider is expended executing mundane and repeatable tasks like backup, log file cleanup or patching. Enterprise Manager 12c comes with an automation framework comprising Jobs and Deployment Procedures that lets administrators define these repetitive actions and schedule them as needed. EMCC's task automation framework is scalable and carries functions such as the ability to schedule, resume, and retry, which are of paramount importance in conducting mass operations in an enterprise scale cloud. The task automation verbs are also exposed through the EMCLI interface. Oracle Cloud administrators make extensive use of EMCLI for large scale operations on thousands of tenant services.

One of the most popular features of Enterprise Manager 12c is the out-of-box procedures for patch automation. The patching procedures can patch the Linux operating system, clusterware and the database. To minimize the downtime involved in the patching process, Enterprise Manager 12c also supports out-of-place patching, which prepares the patched software ahead of time and migrates the instances one by one as needed. This technique is widely adopted by service providers to make sure the tenants' downtime related SLAs are respected and adhered to. The coordination of such downtime can be instrumented by Enterprise Manager 12c's blackout functionality.

4. Managing Compliance risks:

In a service driven model, the provider is liable in case of security breaches. The consumer, and in turn the customer of the consumer's apps, needs to be assured that their data is not breached owing to platform level vulnerabilities. Security breaches often happen owing to faulty configuration such as default passwords, relaxed file permissions, or an open network port. The hardening of the platform therefore has to be done at all levels: OS, network, database, etc. To manage compliance, administrators can create baselines referred to as Compliance Standards. Any deviation from the baselines triggers compliance violation notifications, alerting administrators to resolve the issue before it creates risk in the environment.

We can therefore see how four major asks from a service provider can be satisfied with the Lifecycle Management features of Enterprise Manager 12c. As substantiated through several third party studies and customer testimonials, these result in higher efficiency with lower OPEX.


Tuesday Nov 12, 2013

Automate RAC Cluster Upgrades using EM12c

One of the most arduous processes in DB maintenance is upgrading Databases across major versions, especially for complex RAC Clusters.
With the release of Database Plug-in (12.1.0.5.0), EM12c Rel 3 (12.1.0.3.0) now supports automated upgrading of RAC Clusters in addition to Standalone Databases.

This automation includes:

  • Upgrade of the complete Cluster across the nodes. (Example: 11.1.0.7 CRS, ASM, RAC DB -> 11.2.0.4 or 12.1.0.1 GI, RAC DB)
  • Best practices in tune with your operations, where you can automate upgrade in steps:
    Step 1: Upgrade the Clusterware to Grid Infrastructure (Allowing you to wait, test and then move to DBs).
    Step 2: Upgrade RAC DBs either separately or in group (Mass upgrade of RAC DB's in the cluster).
  • Standard pre-requisite checks like Cluster Verification Utility (CVU) and RAC checks
  • Division of the Upgrade process into Non-downtime activities (like laying down the new Oracle Homes (OH) and running checks) and Downtime activities (like upgrading Clusterware to GI and upgrading RAC), thereby lowering the downtime required.
  • Ability to configure Back up and Restore options as a part of this upgrade process. You can choose to :
    a. Take Backup via this process (either Guaranteed Restore Point (GRP) or RMAN)
    b. Set the procedure to pause just before the upgrade step to allow you to take a custom backup
    c. Ignore backup completely, if there are external mechanisms already in place. 

    Mass Upgrade of RAC using EM12c


High Level Steps:

  1. Select the Procedure "Upgrade Database" from Database Provisioning Home page.
  2. Choose the Target Type for upgrade and the Destination version
  3. Pick the Cluster; EM picks up the complete topology, since the clusterware/GI isn't upgraded yet
  4. Select the Gold Image of the destination version for deploying both the GI and RAC OHs
  5. Specify new OH patch, credentials, choose the Restore and Backup options, if required provide additional pre and post scripts
  6. Set the Break points in the procedure execution to isolate Downtime activities
  7. Submit and track the procedure's execution status. 

The animation below captures the steps in the wizard. For the step-by-step process and to understand the support matrix, check this documentation link.

Explore the functionality!!

In the next blog, we will talk about automating rolling upgrades of Databases in a Physical Standby Data Guard environment using Transient Logical Standby.

Wednesday Jul 24, 2013

Understanding Agent Resynchronization

Agent Resynchronization (resync) is an important topic but often misunderstood or misused. In this Q&A styled blog, I discuss how and when it is appropriate to use agent resynchronization.

What is Agent Resynchronization?

The Management Agent can be reconfigured using target information present in the Management Repository. Resynchronization pushes all targets and related information from the Management Repository to the Management Agent and then unblocks the Agent.
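
For example, a resync can be triggered from the EMCLI scripting shell (the verb name matches the classic 'emcli resyncAgent' verb); the agent name below is a placeholder in host:port form:

    login(username='sysman')

    # Push all target and related information from the repository back to
    # this agent, then unblock it
    resyncAgent(agent='myhost.example.com:3872')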

Why do agents need to be re-synchronized?

Read More

Thursday Jul 11, 2013

Oracle Enterprise Manager 12c Release 3: What’s New in EMCLI

If you have been using the classic Oracle Enterprise Manager Command Line Interface (EMCLI), you are in for a treat. Oracle Enterprise Manager 12c R3 comes with a new EMCLI kit called ‘EMCLI with Scripting Option’. Not my favorite name, as I would have preferred to call it EMSHELL, since it truly provides a shell similar to bash or csh. Unlike the classic EMCLI, this new kit provides a Jython-based scripting environment along with a large collection of verbs to use. This scripting environment enables users to use established programming language constructs like loops (for, while) and conditional statements (if-else) in both interactive and scripting modes.

Benefits of ‘EMCLI with Scripting Option’

Some of the key benefits of the new EMCLI are:

  • Jython based scripting environment
  • Interactive and scripting mode
  • Standardized output format using JSON
  • Can connect to any EM environment (no need to run EMCLI setup …)
  • Stateless communication with OMS (no user data is stored with the client)
  • Generic list function for EM resources
  • Ability to run user-defined SQL queries to access published repository views
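
To make these benefits concrete, here is a minimal sketch of an interactive session (the OMS URL and credentials are placeholders for your environment):

    # Point the client at an OMS and log in -- no 'emcli setup' required
    set_client_property('EMCLI_OMS_URL', 'https://myoms.example.com:7802/em')
    set_client_property('EMCLI_TRUSTALL', 'true')
    login(username='sysman')

    # The generic list verb returns JSON; iterate over it like any Jython structure
    for t in list(resource='Targets').out()['data']:
        if t['TARGET_TYPE'] == 'oracle_database':
            print t['TARGET_NAME']

The same logic can be saved to a file and run in scripting mode with 'emcli @myscript.py'.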

Before we go any further, there are two topics that warrant some discussion – Jython and JSON.

Read More

Thursday Apr 11, 2013

Qualcomm Deploys Application Changes Faster with Oracle Enterprise Manager

Listen in as Qualcomm talks about saving time and energy by making application changes faster through Oracle Enterprise Manager.


Thursday Mar 14, 2013

Database as a Service: Glad that you asked these!

Thanks for visiting my earlier blog post on the new Database as a Service (DBaaS) features that were released in Enterprise Manager 12cR2 Plugin Update 1.

Our first public webcast on DBaaS since the release was held this morning (the recording will soon be available on O.com). The webcast was pretty well attended, with peak attendance going well over our expectations. I wish we had more time to handle the technical Q&A, but since we didn't, let me use the blogosphere to answer some of the questions that were asked. I am repeating some of the questions that we answered during the webcast, because they warrant details beyond what the duration permitted.

Kevin from the audience asked, "What's the difference between regular provisioning and DBaaS?" Sometimes the apparently obvious ones are the most difficult to answer. The recently released whitepaper covers regular/traditional provisioning versus DBaaS in detail. Long story cut short: in a traditional provisioning model, IT (usually a DBA) uses scripts and tools to provision databases on behalf of end users. In DBaaS, IT's role changes and the DBA simply creates a service delivery platform for end users to provision databases on demand as and when they need them. And that too, with minimal inputs! Here's how the process unfolds:

  • The DBA pools together a bunch of server resources that can host databases or a bunch of databases that can host schema and creates a Self-Service zone.
  • The DBA creates a gold image and provisioning procedure and expresses that as a service template
  • As a result, the end users do not have to deal with the intricacies of the provisioning process. They input a couple of very simple things like the service template and the zone and everything else happens under the hood. The provisioning process, the physicality of the database, etc are completely abstracted out.
  • And finally, because DBaaS deals with shared resource utilization and self-service automation, a DBaaS solution is usually complemented by quota, retirement and chargeback.

The following picture can make it clear.


In terms of licensing, for traditional administrator-driven database provisioning, you need the Database Lifecycle Management Pack. If you want to enable DBaaS on top of it, simply add the Cloud Management Pack for Database.

I will combine the next two questions. Alfred asked, "Is RAC a requirement?" (the short answer for which is "No") while Jud asked, "Is the schema-level provisioning supported in an environment where the target DBs are running in VMs?" First of all, in our DBaaS solution we support multiple models, as shown below.

In the dedicated database model, the database can run on a pool of servers or a pool of clusters, so both single instance and RAC are supported. Similarly, in the dedicated schema (Schema as a Service) model, it can run on a single instance or RAC, which can in turn be hosted on physical servers or VMs. Enterprise Manager treats both physical servers and VMs as hosts, and as long as the hosts have the agent installed, they can participate in DBaaS. The bottom line is that as we move up from IaaS and offer these higher order services, the underlying infrastructure becomes irrelevant. This should also satisfy Steve, who queried, "As the technology matures is there an attempt by Oracle to provide ODA vs EXADATA as the foundation of the dbaas to lower the cost?" The answer is YES. But why wait? DBaaS is supported on Exa and ODA platforms TODAY. In fact, HDFC Bank in India is running DBaaS on Exadata. You can read about them in the latest Oracle Magazine.

Another interesting question came from Yuri. He asked, "Is there an option to disable startup/shutdown for the self-service users?" It can be answered in multiple ways. First of all, in the Schema as a Service or dedicated schema model, the end user cannot control the database instance state, because the instance houses database services (schemas) owned by others too. So this may be a good model for enterprises trying to limit what end users can do at the database instance level. However, in a dedicated database model, the Enterprise Manager out-of-box self-service console allows the end user to perform operations like startup and shutdown on the database instance. In general, if you want to create your own tailored self-service console with a limited set of operations exposed in the self-service interface, using the APIs may be the way to go. Enterprise Manager 12c also supports RESTful APIs for self-service operations, and hence a limited set of capabilities may be exposed. Check this technical presentation for the supported APIs.

Gordon's question precisely brings out the value of the Enterprise Manager 12c offering. He asked, "How do the services in the cloud get added to Cloud Control monitoring and alerting?" Ever since Amazon became the poster child of public IaaS, enterprises have tried emulating its model within their data centers. What most people ignore or forget is that resources in a cloud have a life beyond the provisioning process. Initial provisioning is just the beginning of that lifecycle. In Amazon's case, the management and monitoring of resources is the headache of Amazon's IT staff, and consumers are oblivious to the time and effort it takes for them to manage the resources. In a private cloud scenario, one does not have that luxury. Once the database gets provisioned, it needs to be monitored for performance, compliance and configuration drifts by the company's own IT staff. In Enterprise Manager 12c, the agent is deployed on the hosts that constitute the pool, making the databases automatically managed without any additional work. It comprehensively manages the entire lifecycle, and both administrators and self-service users have tailored views of the databases. Well, this also gives me an opportunity to address a question by a participant who alluded to a 3rd party tool exclusively for database provisioning purposes. First of all, as I mentioned during the webcast, Enterprise Manager 12c is the only tool that handles all the use cases - creation of full databases, schemas and cloning (both full clone and Snap Clone) - from a single management interface. The point tools out there handle only a fraction of these use cases: some specialize in cloning while others specialize in seed database provisioning. Second, as stated in the previous answer, provisioning is only the initial phase of the lifecycle, and a provisioning tool cannot be synonymous with a cloud management tool. Thanks Gordon for helping me make that point!

Sam and Cesar share the honors for the most difficult question, which came right at the beginning. "Has it started? Been on hold for a while." was their reaction at two minutes past ten. This is possibly the most embarrassing one for me, because I was caught in traffic. With due apologies for that, I wish my car operated like Enterprise Manager's Database as a Service!

