Friday Jun 27, 2014

Convert Crontab to Enterprise Manager Jobs

Surprisingly, a popular question posted on our internal forum is about the possibility of using the Enterprise Manager (EM) Job System to replace customers' numerous cron jobs. The answer is obviously YES! I say surprisingly because the EM Job System has been around for about 10 years (since EM 10.2.0.1, I believe), and my hope was that, by now, customers would have moved to enterprise-class job schedulers instead of cron. So, here is a quick post for our new users on how to get started with this conversion from cron to EM Jobs.

Benefits of EM Job System

 Before we learn about the how, let’s look at the why. The EM job system is:

  • Free - (Yes, I said free) It is included with the base EM at no cost.
  • Flexible - It supports multiple options for scheduling, notification, authentication, etc.
  • Highly scalable - The job system seamlessly scales out with every new Oracle Management Server (OMS). In fact, in case of an OMS failure, job steps are automatically picked up by the next available OMS without affecting job execution.
  • General purpose - It provides numerous out-of-the-box job types like run OS command, start/stop, backup, SQL script, patch, etc. that span multiple target types. As of today, there are over 50 job types available in the product.
  • Enterprise grade - It allows users to automate multiple administrative tasks like backup, patching, cloning, etc. across multiple targets. Customers have not only converted their cron jobs to EM, but have also replaced other enterprise tools like Autosys and migrated thousands of jobs to the EM Job System.
  • APIs - Jobs can be scheduled and managed from the UI and via EMCLI (the command line interface).

Now back to our topic.

The Conversion Process

Let’s start with a sample crontab that we want to convert.

Sample Crontab
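For illustration, assume a crontab along the following lines (the script paths are hypothetical); the first entry, with the schedule '00 0 * * Sun', is the one we convert in the steps below:

$ crontab -l
00 0 * * Sun /u01/scripts/weekly_cleanup.sh > /tmp/weekly_cleanup.log 2>&1
30 2 * * * /u01/scripts/nightly_refresh.sh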

A crontab entry consists of 6 fields, where the first 5 fields represent the schedule, while the last field is the command or script to run. The schedule fields are:

Field Name      Mandatory?   Allowed Values      Allowed Special Characters
Minutes         Yes          0-59                * / , -
Hours           Yes          0-23                * / , -
Day of month    Yes          1-31                * / , - ? L W
Month           Yes          1-12 or JAN-DEC     * / , -
Day of week     Yes          0-6 or SUN-SAT      * / , - ? L #

Cron jobs run on the operating system, often using the native shell or other locally installed tools. The equivalent of this capability in Enterprise Manager is the 'OS Command' job type. Here are the steps required to convert the first entry in the crontab to an EM job:

1. Navigate to the Job Activity page
Job activity menu

2. Select the ‘OS Command’ job and click Go
OS Command

A 5-tab wizard will appear. Let’s step through this one by one.

3. Select the first tab called ‘General’. Here provide a meaningful name and description for the job. Since this job will be run on the Host target, keep the target type selection as ‘Host’. Next, select all host targets in EM that you wish to run this script against.

While cron jobs are defined on a per-host basis, in EM a job definition can be run and managed across multiple hosts or groups of hosts. This avoids having to maintain the same crontab across multiple hosts.

General

4. Select the ‘Parameters’ tab. Here enter the command or script as specified in the last field of the crontab entry. When constructing the command, you can make use of the various target properties.
Parameters tab

5. Next select ‘Credentials’. Here we provide the credentials required to connect to the host and execute the required commands or scripts. Three options are presented to the user:

  • Preferred – default credential set for the host
  • Named - Credentials are stored within Enterprise Manager as "named" entities. Named credentials can be a username/password, or a public key-private key pair. Here we choose pre-created named credentials.
  • New – This allows us to create and use a new named credential.

Note: If your OS user does not have the required privileges to execute the specified command, Named Credentials also support the use of sudo, PowerBroker, sesu, etc.

Credentials tab
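Named credentials can also be created ahead of time from the command line. Here is a minimal sketch using the create_named_credential EMCLI verb (the credential name and attribute values are illustrative; check 'emcli help create_named_credential' for the exact options in your release):

$ emcli create_named_credential -cred_name=NC_HOST_ORACLE -auth_target_type=host -cred_type=HostCreds -attributes="HostUserName:oracle;HostPassword:<password>"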

6. Next, we set the schedule, and this is where it gets interesting. As discussed before, crontab uses a textual representation for the schedule, while the EM Job System uses a graphical representation.

Our sample schedule in the crontab is ‘00 0 * * Sun’. This translates to a weekly job at 12 midnight on every Sunday. To set this in EM, choose the ‘Repeating’ schedule type. The screenshot below shows all the other selections.
Schedule tab

The key here is to select the correct 'Frequency Type'; the rest of the selections are self-explanatory. This tab also lets you choose the desired timezone for the schedule. Your options are to either start the job with respect to a fixed timezone, or start it in each individual target's timezone. The latter is very popular; for example, you may want to start a job at 2 AM local time in every region around the world.

Another selection of note is 'Grace Period'. This is an extremely powerful feature, but one often not used by customers. Typically, we expect jobs to be started within a few seconds or minutes of the start time (based on the load on the system and the number of jobs scheduled), but a job might not start on time for many reasons. The most common reasons are the Agent being down or a blackout. The grace period controls the latest start time for the job in case it is delayed; beyond that, the iteration is marked as skipped. By default, jobs are scheduled with an indefinite grace period, but I highly recommend setting a value for it. In the sample above, I set a 3 hour limit, which may seem large but, given the weekly nature of the job, seems reasonable. So the job system will wait until 3 AM (the job start time is 12 AM) to start the job, after which the iteration will be skipped. For repeating schedules, the grace period should always be less than the repeat interval. If the job starts on time, the grace period is ignored.

7. Finally, we navigate to the ‘Access’ tab. This tab has two parts:

  • Privilege assignment to roles and users: this allows you to control job level access for other users
  • Email notifications for the Job owner: this allows you to control the events for which you wish to receive notifications. Note that this only sets notifications for the job owner; other users can subscribe to emails by setting up notification and/or incident rules.

To prevent EM from sending a deluge of emails, I recommend the following settings in the notifications region:

  • Match status and severity: Both
  • Select severity of status: Critical
  • Select status: Problems & Action Required

       You can always come back and modify these settings to suit your needs.

Access tab

Not all cron jobs need to be converted to OS Command jobs. For example, if you are taking Oracle database backups using cron, then you probably want to use the out-of-the-box job type for RMAN scripts. Just provide the RMAN script, the list of databases to run it against, and the credentials required to connect to the databases. Similarly, if you run SQL scripts against numerous databases, you can leverage the SQL Script job type for this purpose. There are over 50 job types available in EM12c, all available for use from the UI and EMCLI.
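For example, once your converted jobs are in place, their runs can be tracked from the command line as well. A quick sketch using the get_jobs EMCLI verb (the job name here is hypothetical; see 'emcli help get_jobs' for the filters available in your release):

$ emcli get_jobs -name="WEEKLY_CLEANUP"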

Finally, the best way to learn more about the EM Job System is to actually play with it. I also recommend blogs from Maaz, Kellyn, and other users on this topic. Good luck!!

References

Maaz Anjum: http://maazanjum.com/2013/12/30/create-a-simple-job-for-a-host-target-in-em12c/
Kellyn Pot'vin: http://dbakevlar.com/

-- Adeesh Fulay (@adeeshf)

Monday Jun 16, 2014

EM12c Release 4: Job System Easter Eggs - Part 1

So you just installed a new EM12c R4 environment or upgraded your existing EM environment to EM12c R4. Post upgrade, you go to the Job System activity page (via the Enterprise->Job->Activity menu) and view the progress details of a job. Well, nothing seems to have changed: it's the same UI, the same multi-page drill down to view step output, the same number of clicks, etc. Wrong! In this two part blog post, I talk about two Job System Easter Eggs (hidden features) that most of you will find interesting. These are:

  1. New Job progress tracking UI
  2. Import/Export of job definitions

So before I go any further, let me address why these features are hidden. As we were building these features, we realized that we would not be ready to ship the desired quality of code by the set dates. Hence, instead of removing the code, it was decided to ship it in a disabled state so as not to impact customers, while still allowing a brave few to experiment with it and provide valuable feedback.

1.  New Job Progress Tracking UI

The job system UI hasn't changed much since its introduction almost 10 years ago. It is a daunting task to update all the job system related UIs in a single release, and hence we decided to take a piecemeal approach instead. In the first installment, we have revamped the job progress tracking page.

Old Job Progress Tracking UI

The current UI, as shown above, while very functional, is also very laborious. Multiple clicks and drill downs are required to view the step output for a particular target. Also, any click leads to a complete page refresh, which wastes time and resources. The new UI tries to address all these concerns. It is a single page UI, which means no matter where you click, you never leave the page and thus never lose context of the target or step you were in. It also significantly reduces the number of clicks required to complete the same task as in the current UI. So let's take a look at this new UI.

First, as I mentioned earlier, you need to enable this UI. To do this, run the following emctl command from any of the OMS hosts:

./emctl set property -name oracle.sysman.core.jobs.ui.useAdfExecutionUi -value true

 This command will prompt for the sysman password, and then will enable the new UI.

NOTE: This command does not require a restart of the OMS. Once run, the new UI will be enabled for all users across all OMSes.

EMCTL Output
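Since this is a hidden feature, you may want to check the property's current value, or switch back to the classic behavior later. Assuming the same property name, the following emctl commands should do it (the second simply sets the value back to false):

./emctl get property -name oracle.sysman.core.jobs.ui.useAdfExecutionUi
./emctl set property -name oracle.sysman.core.jobs.ui.useAdfExecutionUi -value false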

Now revisit the job progress tracking page from before. You will be directed to the new UI.

New Job Progress Tracking UI

There are six key regions in all on this new single-page job progress tracking UI. Starting from the top left, these are:

  1. Job Run Actions - These are actions that can be performed on the job run, like suspend, resume, retry, stop, edit, etc.
  2. Executions - This region displays all the executions in the job run. An execution, in most cases, represents a single target and hence runs independently from other executions. This region thus shows the progress and status of all executions in a single view. The best part of this region is the column titled 'Execution Time'. The cigar chart in this column represents two things - one, the duration of the execution, and two, the difference in start times. The visual representation helps in identifying runaway executions, or in comparing execution times across different targets. The Actions menu allows various options like start, stop, debug, delete, etc.
  3. Execution Summary - Clicking on an execution in the above region paints the area on the right. This specific region shows an execution summary with information like status, start and end date, execution id, command, etc.
  4. Execution Steps - This region lists the steps that make up the execution.
  5. Step Output - Clicking on a step in the above region paints this region with the details of the step, including the step output and the ability to download it to a text file.
  6. Page Options - We imagine that learning any new UI takes time, and hence this final region provides the option to switch between the new and the classic view. Additionally, it also allows you to set the auto refresh rate for the page.

Essentially, considering that jobs have two levels - executions and steps, we have experimented with a multi-master style layout. EM has never used such a layout and hence there were concerns raised when we chose to do so.

Master 1 (region 2) -> Detail 1 (regions 3, 4, & 5)

Master 2 (region 4) -> Detail 2 (region 5)


In summary, with this new UI, we have been able to significantly reduce the number of clicks required to track job progress and drill into details. We have also been able to show all relevant information on a single page, thus avoiding unnecessary page redirection and reloads. I would love to hear from you whether this experiment has paid off and whether you find this new UI useful.

In the next part of this blog, I talk about the new emcli verbs to import and export job definitions across EM environments. This has been a long-standing enhancement request, and we are quite excited about our efforts.

-- Adeesh Fulay (@adeeshf)  

Tuesday Jun 10, 2014

EM12c: Using the LIST verb in emcli

Many of us who use EM CLI to write scripts and automate our daily tasks should not miss out on the new list verb released with Oracle Enterprise Manager 12.1.0.3.0. The combination of list and Jython based scripting support in EM CLI makes it easier to achieve automation for complex tasks with just a few lines of code. Before I jump into a script, let me highlight the key attributes of the list verb and why it’s simply excellent!

1. Multiple resources under a single verb:
A resource can be a set of users, targets, etc. Using the list verb, you can retrieve information about a resource from the repository database.

Here is an example which retrieves the list of administrators within EM.
Standard mode
$ emcli list -resource="Administrators"


Interactive mode
emcli>list(resource="Administrators")
The output will be the same as standard mode.

Script mode
$ emcli @myAdmin.py
Enter password :  ******

The output will be the same as standard mode.

Contents of myAdmin.py script
login()
print list(resource="Administrators",jsonout=False).out()


To get a list of all available resources use
$ emcli list -help

With every release of EM, more resources are being added to the list verb. If you have a resource which you feel would be valuable then go ahead and contact Oracle Support to log an enhancement request with product development. Be sure to say how the resource is going to help improve your daily tasks.

2. Consistent Formatting:
It is possible to format the output of any resource consistently using these options:

  -columns

  This option is used to specify which columns should be shown in the output.

Here is an example which shows the list of administrators and their account status
$ emcli list -resource="Administrators" -columns="USER_NAME,REPOS_ACCOUNT_STATUS"

To get a list of columns in a resource use:
$ emcli list -resource="Administrators" -help


You can also specify the width of each column. For example, here the column width of USER_TYPE is set to 20 and DEPARTMENT to 30.
$ emcli list -resource=Administrators -columns="USER_NAME,USER_TYPE:20,COST_CENTER,CONTACT,DEPARTMENT:30"

This is useful if your terminal is too small or you need to fine tune a list of specific columns for your quick use or improved readability.

  -colsize
  This option is used to resize column widths.
Here is the same example as above, but using -colsize to define the width of USER_TYPE as 20 and DEPARTMENT as 30.
$ emcli list -resource=Administrators -columns="USER_NAME,USER_TYPE,COST_CENTER,CONTACT,DEPARTMENT" -colsize="USER_TYPE:20,DEPARTMENT:30"

The existing standard EMCLI formatting options are also available in list verb. They are:
-format="name:pretty" | -format="name:script” | -format="name:csv" | -noheader | -script

There are so many uses depending on your needs. Have a look at the resources and columns in each resource. Refer to the EMCLI book in EM documentation for more information.

3. Search:
Using the -search option in the list verb makes it possible to search for specific rows in a specific column within a resource. This is similar to a SQL WHERE clause. The following operators are supported:
          =
           !=
           >
           <
           >=
           <=
           like
           is (Must be followed by null or not null)

Here is an example which searches for all EM administrators in the marketing department located in the USA.
$emcli list -resource="Administrators" -search="DEPARTMENT ='Marketing'" -search="LOCATION='USA'"

Here is another example which shows all the named credentials created since a specific date. 
$ emcli list -resource=NamedCredentials -search="CredCreatedDate > '11-Nov-2013 12:37:20 PM'"
Note that the timestamp has to be in the format DD-MON-YYYY HH:MI:SS AM/PM.

Some resources need a bind variable to be passed to get output. A bind variable is created in the resource and then referenced in the command. For example, the following command lists all the default preferred credentials for target type oracle_database:

$ emcli list -resource="PreferredCredentialsDefault" -bind="TargetType='oracle_database'" -colsize="SetName:15,TargetType:15"


You can provide multiple bind variables.

To verify whether a column is searchable or requires a bind variable, use the -help option. Here is an example:
$ emcli list -resource="PreferredCredentialsDefault" -help


4. Secure access
When the list verb collects data, it only displays content to which the administrator currently logged into EM CLI has access.

For example, consider this use case:
AdminA has access only to TargetA.
AdminA logs into EM CLI.
Executing the list verb to get the list of all targets will only show TargetA.

5. User-defined SQL
Using the -sql option, user-defined SQL can be executed. The SQL provided in the -sql option is executed as the EM user MGMT_VIEW, which has read-only access to the EM published MGMT$ database views in the SYSMAN schema.

To get the list of EM published MGMT$ database views, go to the Extensibility Programmer's Reference book in the EM documentation. There is a chapter about Using Management Repository Views. It is always recommended to stick to the documented, supported MGMT$ database views. Suppose you use an MGMT$ABC view that is not in that chapter; since the view is not documented and not supported, it might undergo a change in its structure or data during an upgrade. Using a supported view ensures that your scripts using -sql will continue working after an upgrade.

Here’s an example
  $ emcli list -sql='select * from mgmt$target'
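The filtering shown earlier with -search can also be expressed directly in the SQL. For instance, to restrict the query to database targets (column names as published in the MGMT$TARGET view; note the escaped $ because the command is wrapped in double quotes for the shell):

$ emcli list -sql="select target_name, target_type from mgmt\$target where target_type = 'oracle_database'"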

6. JSON output support   
JSON (JavaScript Object Notation) enables data to be displayed as a collection of name/value pairs. There is a lot of reading material about JSON online if you want more information.

As an example, we had a requirement where an EM administrator had many 11.2 databases in their test environment, and the developers requested that the administrator change the lifecycle status from Test to Production. This meant the admin had to go to the EM "All Targets" page, identify the set of 11.2 databases, and then go into each target database page and manually change the property to Production. It sounds easy, but this administrator had numerous targets, and the task is repeated for every release cycle.

We told him there is an easier way to do this with a script, and that he can reuse the script whenever anyone wants to change a set of targets to a different lifecycle status.

Here is a jython script which uses list and JSON to change all 11.2 database target’s LifeCycle Property value.

If you are new to scripting and Jython, I would suggest visiting the basic chapters of any Jython tutorial. Understanding Jython is important for writing the logic for your use case.
If you already write scripts in Perl or shell, or know a programming language like Java, then you can easily understand the logic.

Disclaimer: The scripts in this post are subject to the Oracle Terms of Use located here.

 1  from emcli import *
 2  search_list = ['PROPERTY_NAME=\'DBVersion\'','TARGET_TYPE=\'oracle_database\'','PROPERTY_VALUE LIKE \'11.2%\'']
 3  if len(sys.argv) == 2:
 4      print login(username=sys.argv[0])
 5      l_prop_val_to_set = sys.argv[1]
 6      l_targets = list(resource="TargetProperties", search=search_list, columns="TARGET_NAME,TARGET_TYPE,PROPERTY_NAME")
 7      for target in l_targets.out()['data']:
 8          t_pn = 'LifeCycle Status'
 9          print "INFO: Setting Property name " + t_pn + " to value " + l_prop_val_to_set + " for " + target['TARGET_NAME']
10          print set_target_property_value(property_records=target['TARGET_NAME']+":"+target['TARGET_TYPE']+":"+t_pn+":"+l_prop_val_to_set)
11  else:
12      print "\n ERROR: Property value argument is missing"
13      print "\n INFO: Format to run this file is filename.py <username> <Database Target LifeCycle Status Property Value>"

You can download the script from here. I could not upload the file with a .py extension, so you will need to rename the file to myScript.py before executing it using emcli.

A line-by-line explanation for beginners:

Line 1 - Imports the emcli verbs as functions.
Line 2 - search_list is a variable passed to the search option of the list verb. I am using escape characters for the single quotes. To pass more than one value for the same option in the list verb, define the values as shown: comma-separated and surrounded by square brackets.
Line 3 - This is an "if" condition to ensure the user provides two arguments with the script; otherwise, line 12 prints an error message.
Line 4 - Logging into EM. You can remove this if you have set up emcli with autologin. For more details about setup and autologin, see the EM CLI book in the EM documentation.
Line 5 - l_prop_val_to_set is another variable. This is the property value to be set. Remember, we are changing the value from Test to Production. The benefit of this variable is that you can reuse the script to change the property value from and to any other values.
Line 6 - Here the output of the list verb is stored in l_targets. In the list verb, I pass the resource as TargetProperties, search as the search_list variable, and only the three columns I need: TARGET_NAME, TARGET_TYPE, and PROPERTY_NAME. I don't need the other columns for my task.
Line 7 - This is a for loop. The data in l_targets is available in JSON format. Using the for loop, each record becomes available in the 'target' variable.
Line 8 - t_pn holds the "LifeCycle Status" property name. If required, I could take this as an input as well and then use the script to change any target property. In this example, I just wanted to change "LifeCycle Status".
Line 9 - This is a message informing the user that the script is setting the property value for the target.
Line 10 - This line uses the set_target_property_value verb, which sets the value using the property_records option. Once it is set for a target, the loop moves on to the next one. In my example, I am just showing three databases, but the real value is when you have 20 or 50 targets.

The script is executed as:
$ emcli @myScript.py subin Production



The recommendation is to test scripts before running them on a production system. We tested on a small set of targets and optimized the script for fewer lines of code and better messaging.

For your quick reference, the resources available in Enterprise Manager 12.1.0.4.0 with list verb are:
$ emcli list -help


Watch this space for more blog posts using the list verb and EM CLI Scripting use cases. I hope you enjoyed reading this blog post and it has helped you gain more information about the list verb. Happy Scripting!!




Friday Jun 06, 2014

EM12c Release 4: Cloud Control to Major Tom...

With the latest release of Enterprise Manager 12c, Release 4 (12.1.0.4), the EM development team has added new functionality to help EM administrators monitor the health of the EM infrastructure. Taking feedback delivered by customers directly and through customer advisory boards, some nice enhancements have been made to the "Manage Cloud Control" sections of the UI, commonly known in the EM community as "the MTM pages" (MTM stands for Monitor the Monitor). This part of the EM Cloud Control UI is viewed by many as mission control for EM administrators.

In this post we'll highlight some of the new information that's on display in these redesigned pages and explain how the information they present can help EM administrators identify potential bottlenecks or issues with the EM infrastructure. The first page we'll take a look at is the newly designed Repository information page. You can get to this from the main Setup menu, through Manage Cloud Control, then Repository.

[Read More]

Wednesday Jun 04, 2014

EM12c Release 4: Database as a Service Enhancements

Oracle Enterprise Manager 12.1.0.4 (or simply put, EM12c R4) is the latest update to the product. As with previous versions, this release provides tons of enhancements and bug fixes, contributing to improved stability and quality. One of the areas that is most exciting and has seen tremendous growth in the last few years is Database as a Service. EM12c R4 provides a significant update to Database as a Service. The key themes are:

  1. Comprehensive Database Service Catalog (includes single instance, RAC, and Data Guard)
  2. Additional Storage Options for Snap Clone (includes support for Database feature CloneDB)
  3. Improved Rapid Start Kits
  4. Extensible Metering and Chargeback
  5. Miscellaneous Enhancements


1. Comprehensive Database Service Catalog

Before we get deep into the implementation of a service catalog, let's first understand what it is and what benefits it provides. Per ITIL, a service catalog is an exhaustive list of IT services that an organization provides or offers to its employees or customers. Service catalogs have been widely popular in the space of cloud computing, primarily as the medium to provide standardized and pre-approved service definitions. There is already some good collateral out there that talks about Oracle database service catalogs. The two whitepapers I recommend reading are:

Service Catalogs: Defining Standardized Database Service

High Availability Best Practices for Database Consolidation: The Foundation for Database as a Service [Oracle MAA]

EM12c has come with an out-of-the-box service catalog and self service portal since Release 1. For customers, it provides the following benefits:

  • Present a collection of standardized database service definitions,
  • Define standardized pools of hardware and software for provisioning,
  • Role based access to cater to different class of users,
  • Automated procedures to provision the predefined database definitions,
  • Setup chargeback plans based on service tiers and database configuration sizes, etc

Starting with Release 4, the scope of services offered via the service catalog has been expanded to include databases with varying levels of availability - Single Instance (SI) or Real Application Clusters (RAC) databases with multiple Data Guard based standby databases. Some salient points of the Data Guard integration:

  • Standby pools can now be defined across different datacenters or within the same datacenter as the primary (this helps in modelling the concept of near and far DR sites)
  • The standby databases can be single instance, RAC, or RAC One Node databases
  • Multiple standby databases can be provisioned, where the maximum limit is determined by the version of database software
  • The standby databases can be in either mount or read only (requires active data guard option) mode
  • All database versions 10g to 12c supported (as certified with EM 12c)
  • All 3 protection modes can be used - Maximum Availability, Maximum Performance, and Maximum Protection
  • Log apply can be set to sync or async along with the required apply lag

The different service levels or service tiers are popularly represented using metals - Platinum, Gold, Silver, Bronze, and so on. The Oracle MAA whitepaper (referenced above) calls out the various service tiers as defined by Oracle's best practices, but customers can choose any logical combinations from the table below:

Combinations supported in EM 12cR4:

Primary   Standby [1 or more]
SI        -
SI        SI
RAC       -
RAC       SI
RAC       RAC
RON       -
RON       RON

where RON = RAC One Node, which is supported via custom post-scripts in the service template

A sample service catalog would look like the image below. Here we have defined 4 service levels, deployed across 2 data centers, with 3 standardized sizes. Again, it is important to note that this is just an example to get the creative juices flowing. I imagine each customer will come up with their own catalog based on application requirements, RTO/RPO goals, and the product licenses they own. In the screenwatch titled 'Build Service Catalog using EM12c DBaaS', I walk through the complete steps required to set up this sample service catalog in EM12c.


2. Additional Storage Options for Snap Clone

In my previous blog posts, I have described the Snap Clone feature in detail. Essentially, it provides a storage agnostic, self service, rapid, and space efficient approach to solving your data cloning problems. The net benefit is that you get incredible amounts of storage savings (on average 90%) while cloning databases in a matter of minutes. Space and time: two things enterprises would love to save on. This feature has been designed with the goal of providing data cloning capabilities while protecting your existing investments in server, storage, and software. With this in mind, we have pursued a dual solution approach of hardware and software. In the hardware approach, we connect directly to your storage appliances and perform all the low level actions required to rapidly clone your databases. In the software approach, we use an intermediate software layer to talk to any storage vendor or any storage configuration to perform the same low level actions. This delivers the benefits of database thin cloning without requiring you to drastically change the infrastructure or IT's operating style.

In Release 4, we expand the scope of options supported by Snap Clone with the addition of database CloneDB. While CloneDB is not a new feature (it was first introduced in the 11.2.0.2 patch set), it has over the years become more stable and mature. CloneDB leverages a combination of the Direct NFS (dNFS) feature of the database, RMAN image copies, sparse files, and copy-on-write technology to create thin clones of databases from existing backups in a matter of minutes. It essentially has all the traits that we want to present to our customers via the Snap Clone feature. For more information on CloneDB, I highly recommend reading the following sources:

Blog by Tim Hall: Direct NFS (DNFS) CloneDB in Oracle Database 11g Release 2

Oracle OpenWorld Presentation by Cern: Efficient Database Cloning using Direct NFS and CloneDB

The advantages of the new CloneDB integration with EM12c Snap Clone are:

  • Space and time savings
  • Ease of setup - no additional software is required other than the Oracle database binary
  • Works on all platforms
  • Reduce the dependence on storage administrators
  • Cloning process fully orchestrated by EM12c, and delivered to developers/DBAs/QA Testers via the self service portal
  • Uses dNFS to deliver better performance, availability, and scalability over kernel NFS
  • Complete lifecycle of the clones managed by EM12c - performance, configuration, etc


3. Improved Rapid Start Kits

DBaaS deployments tend to be complex, and their setup requires a series of steps. These steps are typically performed across different users and different UIs. The Rapid Start Kit provides a single-command solution to set up Database as a Service (DBaaS) and Pluggable Database as a Service (PDBaaS). One command creates all the Cloud artifacts like Roles, Administrators, Credentials, Database Profiles, PaaS Infrastructure Zone, Database Pools, and Service Templates. Once the Rapid Start Kit has been successfully executed, requests can be made to provision databases and PDBs from the self service portal. The Rapid Start Kit can create complex topologies involving multiple zones, pools, and service templates. It also supports standby databases and the use of RMAN image backups.

The Rapid Start Kit in reality is a simple emcli script which takes a bunch of XML files as input and executes the complete automation in a matter of seconds. On a full rack Exadata, it took only 40 seconds to set up PDBaaS end-to-end. This kit works both for Oracle's engineered systems like Exadata, SuperCluster, etc. and on commodity hardware. One can draw a parallel to the Exadata One Command script, which again takes a bunch of inputs from the administrators and then runs a simple script that configures everything from the network to provisioning the DB software.

Steps to use the kit:

  • The kit can be found under the SSA plug-in directory on the OMS: EM_BASE/oracle/MW/plugins/oracle.sysman.ssa.oms.plugin_12.1.0.8.0/dbaas/setup
  • It can be run from this default location or from any server which has emcli client installed
  • For most scenarios, you would use the script dbaas/setup/database_cloud_setup.py
  • For Exadata, special integration is provided to reduce the number of inputs even further. The script to use for this scenario would be dbaas/setup/exadata_cloud_setup.py
  • The database_cloud_setup.py script takes two inputs:
    • Cloud boundary xml: This file defines the cloud topology in terms of the zones and pools, along with host names, Oracle home locations, or container database names that would be used as infrastructure for provisioning database services. This file is optional in case of Exadata, as the boundary is well known via the Exadata system target available in EM.
    • Input xml: This file captures inputs for users, roles, profiles, service templates, etc. Essentially, all inputs required to define the DB services and other settings of the self service portal.
  • Once all the xml files have been prepared, invoke the script as follows for PDBaaS:
    emcli @database_cloud_setup.py -pdbaas 
          -cloud_boundary=/tmp/my_boundary.xml 
          -cloud_input=/tmp/pdb_inputs.xml

         The script will prompt for passwords a few times for key users like sysman, cloud admin, SSA admin, etc. Once complete, you can simply log into EM as the self service user and request for databases from the portal.

More information available in the Rapid Start Kit chapter in Cloud Administration Guide


4. Extensible Metering and Chargeback

Last but not the least, Metering and Chargeback in Release 4 has been made extensible in all possible regards. The new extensibility features allow customers, partners, system integrators, etc. to:

  • Extend chargeback to any target type managed in EM
  • Promote any metric in EM as a chargeback entity
  • Extend list of charge items via metric or configuration extensions
  • Model abstract entities like the number of backup requests, job executions, support requests, etc.

A slew of emcli verbs has also been added, allowing administrators to create, edit, delete, and import/export charge plans, and to assign cost centers, all via the command line.

More information available in the Chargeback API chapter in Cloud Administration Guide.


5. Miscellaneous Enhancements

There are other miscellaneous, yet important, enhancements that are worth a mention. Most of these have been requested by customers like you. They are:

  • Custom naming of DB Services
    • Self service users can provide custom names for DB SID, DB service, schemas, and tablespaces
    • Every custom name is validated for uniqueness in EM
  • 'Create like' of Service Templates
    • Now creating variants of a service template is only a click away. This would be vital when you publish service templates to represent different database sizes or service levels.
  • Profile viewer
    • View the details of a profile like datafile, control files, snapshot ids, export/import files, etc prior to its selection in the service template
  • Cleanup automation - for failed and successful requests
    • Single emcli command to cleanup all remnant artifacts of a failed request
    • Cleanup can be performed on a per-request basis or for the entire pool
    • As an extension, you can also delete successful requests
  • Improved delete user workflow
    • Allows administrators to reassign cloud resources to another user or delete all of them
  • Support for multiple tablespaces for schema as a service
    • In addition to multiple schemas, users can also specify multiple tablespaces per request

I hope this was a good introduction to the new Database as a Service enhancements in EM12c R4. I encourage you to explore many of these new and existing features and give us feedback.

Good luck!

References:

Cloud Management Page on OTN

Cloud Administration Guide [Documentation]

-- Adeesh Fulay (@adeeshf)

Monday Mar 17, 2014

Installing a JDBC patch to an EM agent

If you have an Enterprise Manager 12c Release 3 or older agent monitoring database targets, Support may recommend that you install a JDBC (Java database connectivity) patch, such as patch 17591700, to prevent high CPU consumption by the agent.


JDBC patches, including 17591700, have readme files containing instructions for installing the patches to a database, not to an EM agent. This post provides an example of how to install a JDBC patch to an EM agent. It walks through the steps of installing patch 17591700 to an EM12c Release 3 agent.


Here are the steps:

1.  Identify the version of the JDBC client in the Agent Binaries home.

  • Set the ORACLE_HOME environment variable to the Agent Binaries home.

$ setenv ORACLE_HOME /u01/em12/core/12.1.0.3.0


Note: One way to find out the Agent Binaries home is to look in file /etc/oragchomelist on the agent host. It should contain an entry for an agent install in the format of:

<Agent Binaries home>:<Agent home>

For example, file /etc/oragchomelist contains:

/u01/em12/core/12.1.0.3.0:/u01/em12/agent_inst



  • Run the following command to identify the version of the JDBC client.

$ $ORACLE_HOME/OPatch/opatch lsinv -details | grep 'Oracle JDBC/OCI Instant Client'
Oracle JDBC/OCI Instant Client           11.1.0.7.0

In this case, the version of the JDBC client is 11.1.0.7.0.


2.  Download the patch from the MOS (My Oracle Support) website. The JDBC client version identified above should match the database version for which the patch is intended (here, 11.1.0.7.0). Stage the patch zip file, p17591700_111070_Generic.zip, on the agent host. I staged the patch in directory /u01/stage/jdbc_patch. I will refer to this location as the patch stage directory.


3.  Go to the patch stage directory and extract files from the zip file.

$ cd /u01/stage/jdbc_patch

$ unzip p17591700_111070_Generic.zip

Archive: p17591700_111070_Generic.zip

creating: 17591700/

inflating: 17591700/README.txt

creating: 17591700/files/

creating: 17591700/files/jdbc/

creating: 17591700/files/jdbc/lib/

inflating: 17591700/etc/xml/GenericActions.xml

inflating: 17591700/etc/xml/ShiphomeDirectoryStructure.xml


4.  Stop the agent.

$ /u01/em12/agent_inst/bin/emctl stop agent

Oracle Enterprise Manager Cloud Control 12c Release 3

Copyright (c) 1996, 2013 Oracle Corporation. All rights reserved.

Stopping agent ..... stopped.


5.  Install the patch.

  • Go to the <patch stage directory>/<patch number>.

$ cd /u01/stage/jdbc_patch/17591700

  • Run opatch apply command.

$ $ORACLE_HOME/OPatch/opatch apply

Oracle Interim Patch Installer version 11.1.0.10.0

Copyright (c) 2013, Oracle Corporation. All rights reserved.

Oracle Home : /u01/em12/core/12.1.0.3.0

Central Inventory : /u01/app/oraInventory

from : /u01/em12/core/12.1.0.3.0/oraInst.loc

OPatch version : 11.1.0.10.0

OUI version : 11.1.0.11.0

Log file location : /u01/em12/core/12.1.0.3.0/cfgtoollogs/opatch/17591700_Mar_17_2014_12_03_05/apply2014-03-17_12-03-05PM_1.log

Applying interim patch '17591700' to OH '/u01/em12/core/12.1.0.3.0'

Verifying environment and performing prerequisite checks...

Patch 17591700: Optional component(s) missing : [ oracle.dbjava.jdbc, 11.1.0.7.0 ]

Interim patch 17591700 is a superset of the patch(es) [ 16087066 ] in the Oracle Home

OPatch will roll back the subset patches and apply the given patch.

All checks passed.

Backing up files...

Rolling back interim patch '16087066' from OH '/u01/em12/core/12.1.0.3.0'

Patching component oracle.dbjava.ic, 11.1.0.7.0...

Updating jar file "/u01/em12/core/12.1.0.3.0/jdbc/lib/ojdbc6.jar" with "/u01/em12/core/12.1.0.3.0/.patch_storage/16087066_Feb_4_2013_04_52_18/files//jdbc/lib/ojdbc6.jar/oracle/jdbc/aq/AQNotificationEvent$EventType.class"

Updating jar file "/u01/em12/core/12.1.0.3.0/jdbc/lib/ojdbc6.jar" with "/u01/em12/core/12.1.0.3.0/.patch_storage/16087066_Feb_4_2013_04_52_18/files//jdbc/lib/ojdbc6.jar/oracle/jdbc/aq/AQNotificationEvent.class"

Updating jar file "/u01/em12/core/12.1.0.3.0/jdbc/lib/ojdbc6.jar" with "/u01/em12/core/12.1.0.3.0/.patch_storage/16087066_Feb_4_2013_04_52_18/files//jdbc/lib/ojdbc6.jar/oracle/jdbc/dcn/DatabaseChangeEvent$AdditionalEventType.class"

Updating jar file "/u01/em12/core/12.1.0.3.0/jdbc/lib/ojdbc6.jar" with "/u01/em12/core/12.1.0.3.0/.patch_storage/16087066_Feb_4_2013_04_52_18/files//jdbc/lib/ojdbc6.jar/oracle/jdbc/dcn/DatabaseChangeEvent$EventType.class"

Updating jar file "/u01/em12/core/12.1.0.3.0/jdbc/lib/ojdbc6.jar" with "/u01/em12/core/12.1.0.3.0/.patch_storage/16087066_Feb_4_2013_04_52_18/files//jdbc/lib/ojdbc6.jar/oracle/jdbc/dcn/DatabaseChangeEvent.class"

Updating jar file "/u01/em12/core/12.1.0.3.0/jdbc/lib/ojdbc6.jar" with "/u01/em12/core/12.1.0.3.0/.patch_storage/16087066_Feb_4_2013_04_52_18/files//jdbc/lib/ojdbc6.jar/oracle/jdbc/driver/NTFAQEvent.class"

Updating jar file "/u01/em12/core/12.1.0.3.0/jdbc/lib/ojdbc6.jar" with "/u01/em12/core/12.1.0.3.0/.patch_storage/16087066_Feb_4_2013_04_52_18/files//jdbc/lib/ojdbc6.jar/oracle/jdbc/driver/NTFConnection.class"

Updating jar file "/u01/em12/core/12.1.0.3.0/jdbc/lib/ojdbc6.jar" with "/u01/em12/core/12.1.0.3.0/.patch_storage/16087066_Feb_4_2013_04_52_18/files//jdbc/lib/ojdbc6.jar/oracle/jdbc/driver/NTFDCNEvent.class"

Updating jar file "/u01/em12/core/12.1.0.3.0/jdbc/lib/ojdbc6.jar" with "/u01/em12/core/12.1.0.3.0/.patch_storage/16087066_Feb_4_2013_04_52_18/files//jdbc/lib/ojdbc6.jar/oracle/jdbc/driver/NTFManager.class"

Updating jar file "/u01/em12/core/12.1.0.3.0/jdbc/lib/ojdbc6.jar" with "/u01/em12/core/12.1.0.3.0/.patch_storage/16087066_Feb_4_2013_04_52_18/files//jdbc/lib/ojdbc6.jar/oracle/jdbc/driver/T4CConnection.class"

Updating jar file "/u01/em12/core/12.1.0.3.0/jdbc/lib/ojdbc6.jar" with "/u01/em12/core/12.1.0.3.0/.patch_storage/16087066_Feb_4_2013_04_52_18/files//jdbc/lib/ojdbc6.jar/oracle/jdbc/driver/T4CTTIokpn.class"

Updating jar file "/u01/em12/core/12.1.0.3.0/jdbc/lib/ojdbc5.jar" with "/u01/em12/core/12.1.0.3.0/.patch_storage/16087066_Feb_4_2013_04_52_18/files//jdbc/lib/ojdbc5.jar/oracle/jdbc/aq/AQNotificationEvent$EventType.class"

Updating jar file "/u01/em12/core/12.1.0.3.0/jdbc/lib/ojdbc5.jar" with "/u01/em12/core/12.1.0.3.0/.patch_storage/16087066_Feb_4_2013_04_52_18/files//jdbc/lib/ojdbc5.jar/oracle/jdbc/aq/AQNotificationEvent.class"

Updating jar file "/u01/em12/core/12.1.0.3.0/jdbc/lib/ojdbc5.jar" with "/u01/em12/core/12.1.0.3.0/.patch_storage/16087066_Feb_4_2013_04_52_18/files//jdbc/lib/ojdbc5.jar/oracle/jdbc/dcn/DatabaseChangeEvent$AdditionalEventType.class"

Updating jar file "/u01/em12/core/12.1.0.3.0/jdbc/lib/ojdbc5.jar" with "/u01/em12/core/12.1.0.3.0/.patch_storage/16087066_Feb_4_2013_04_52_18/files//jdbc/lib/ojdbc5.jar/oracle/jdbc/dcn/DatabaseChangeEvent$EventType.class"

Updating jar file "/u01/em12/core/12.1.0.3.0/jdbc/lib/ojdbc5.jar" with "/u01/em12/core/12.1.0.3.0/.patch_storage/16087066_Feb_4_2013_04_52_18/files//jdbc/lib/ojdbc5.jar/oracle/jdbc/dcn/DatabaseChangeEvent.class"

Updating jar file "/u01/em12/core/12.1.0.3.0/jdbc/lib/ojdbc5.jar" with "/u01/em12/core/12.1.0.3.0/.patch_storage/16087066_Feb_4_2013_04_52_18/files//jdbc/lib/ojdbc5.jar/oracle/jdbc/driver/NTFAQEvent.class"

Updating jar file "/u01/em12/core/12.1.0.3.0/jdbc/lib/ojdbc5.jar" with "/u01/em12/core/12.1.0.3.0/.patch_storage/16087066_Feb_4_2013_04_52_18/files//jdbc/lib/ojdbc5.jar/oracle/jdbc/driver/NTFConnection.class"

Updating jar file "/u01/em12/core/12.1.0.3.0/jdbc/lib/ojdbc5.jar" with "/u01/em12/core/12.1.0.3.0/.patch_storage/16087066_Feb_4_2013_04_52_18/files//jdbc/lib/ojdbc5.jar/oracle/jdbc/driver/NTFDCNEvent.class"

Updating jar file "/u01/em12/core/12.1.0.3.0/jdbc/lib/ojdbc5.jar" with "/u01/em12/core/12.1.0.3.0/.patch_storage/16087066_Feb_4_2013_04_52_18/files//jdbc/lib/ojdbc5.jar/oracle/jdbc/driver/NTFManager.class"

Updating jar file "/u01/em12/core/12.1.0.3.0/jdbc/lib/ojdbc5.jar" with "/u01/em12/core/12.1.0.3.0/.patch_storage/16087066_Feb_4_2013_04_52_18/files//jdbc/lib/ojdbc5.jar/oracle/jdbc/driver/T4CConnection.class"

Updating jar file "/u01/em12/core/12.1.0.3.0/jdbc/lib/ojdbc5.jar" with "/u01/em12/core/12.1.0.3.0/.patch_storage/16087066_Feb_4_2013_04_52_18/files//jdbc/lib/ojdbc5.jar/oracle/jdbc/driver/T4CTTIokpn.class"

RollbackSession removing interim patch '16087066' from inventory

OPatch back to application of the patch '17591700' after auto-rollback.

Patching component oracle.dbjava.ic, 11.1.0.7.0...

Verifying the update...

Patch 17591700 successfully applied

Log file location: /u01/em12/core/12.1.0.3.0/cfgtoollogs/opatch/17591700_Mar_17_2014_12_03_05/apply2014-03-17_12-03-05PM_1.log

OPatch succeeded
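Optionally, before restarting the agent, you can confirm that the patch is now recorded in the Agent Binaries home inventory, reusing the opatch lsinv check from step 1:

$ $ORACLE_HOME/OPatch/opatch lsinv | grep 17591700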


6. Start the agent.

$ /u01/em12/agent_inst/bin/emctl start agent

Oracle Enterprise Manager Cloud Control 12c Release 3

Copyright (c) 1996, 2013 Oracle Corporation. All rights reserved.

Starting agent ............. started.


The above step completes the process of installing patch 17591700 to the 12.1.0.3 agent.

Thursday Feb 13, 2014

Steps to Fast Track your Database Cloud implementation on Exadata

Oracle Exadata Database Machine is the ideal consolidation platform for an Enterprise Database Cloud, and Oracle Enterprise Manager provides the most optimized and comprehensive solution to rapidly set up, manage, and deliver enterprise clouds. Clearly, very significant innovations have been delivered via Exadata X4, Enterprise Manager 12c, and Database 12c in the cloud computing space, and customers can start realizing benefits from this combination, the most powerful and unique enterprise database cloud solution in the industry.

As per OracleVoice blog on Forbes.com:  "Why Database As A Service (DBaaS) Will Be The Breakaway Technology of 2014":

"Database as a Service (DBaaS) is arguably the next big thing in IT. Indeed, the market analysis firm 451 Research projects an astounding 86% cumulative annual growth rate, with annual revenues from DBaaS providers rising from $150 million in 2012 to $1.8 billion by 2016."

In this blog post, I will walk through steps that aim to simplify DBaaS setup on Exadata and also describe the automation kits available to achieve the following rapidly -

  • Setup Monitoring and Management of Exadata Database Machine platform in EM 12c
  • Setup and Deliver DBaaS on Exadata using EM 12c
  • Manage and Optimize Exadata and EM 12c powered DBaaS cloud platform on an ongoing basis



There are 2 separate automation kits provided with EM 12c: the first kit enables rapid monitoring and management setup of the Exadata stack in EM 12c, and the second kit enables rapid setup of DBaaS.

1) Deploy EM 12c site or use existing site - If you do not have an existing EM 12c R3 setup, you can use the EM Automation Kit for Exadata for installing EM 12c R3 Plug-in Update 1. This kit is available via patch 17036016 on My Oracle Support (MOS) and can be used to deploy the latest EM 12c release. Refer to the patch Readme and MOS note "Obtaining the Oracle Enterprise Manager Setup Automation kit for Exadata (Doc ID 1440951.1)" for additional details. Please note that this will set up the EM12c Oracle Management Service along with the Management Repository. They can be deployed on a single machine, or the OMS and OMR can be set up on different machines.

2) Deploy EM 12cR3 agents and required plug-ins on the Exadata Machine - The Agent kit is also part of the same EM Automation Kit for Exadata and can be used for deploying agents and plug-ins on the Exadata stack. Refer to MOS note "Obtaining the Oracle Enterprise Manager Setup Automation kit for Exadata (Doc ID 1440951.1)" for additional details. The best practice is to use the most recent version of the Agent kit and also deploy the latest plug-ins. Patch details for the respective platforms are described in the MOS note.

The Agent kit script requires Java 1.6.0_43 or a greater version on the database node where the script is run. The Agent kit script needs to be run as the root OS user on the Exadata DB node; however, JAVA_HOME, and PATH including JAVA_HOME/bin, should be set up for the agent OS owner, so these OS environment variables need to be set in the profile of the agent OS owner.

The Agent Automation kit helps with achieving the following -

  • EMCLI setup on Exadata Server
  • EM 12c R3 site compatibility checks
  • Setup and remove SSH between Exadata nodes to test SSH setup
  • Deploy EM 12c Agent and required Plugins on all DB Nodes of Exadata Machine
  • Confirm Exachk tool availability and run Exachk tool
  • Run Exadata Discovery Prerequisites
  • Discover Targets Cluster, Grid Infrastructure, RAC database and listener targets

Note - In case of Exadata X4, ensure you have the latest EM 12cR3 Bundle Patch (released in January 2014). Refer to the following MOS notes -
Enterprise Manager 12.1.0.3 Bundle Patch Master Note (Doc ID 1572022.1)
Enterprise Manager for Exadata Plug-in 12cR3 Bundle Patch Bug List (Doc ID 1613177.1)

3) Discover Grid Infrastructure and RAC targets – The above setup script will discover Cluster, Grid Infrastructure, RAC database, and listener targets. Discover Grid Infrastructure, ASM, and RAC targets manually if required.

4) Please note that this setup script will not discover the Oracle Exadata Database Machine target in EM 12c. You need to discover the machine using the following steps:

  • From the Setup menu, select Add Targets, then select “Add targets Manually”.
  • In the “Add Targets Manually” page, select 'Add Targets Using Guided Process (Also Adds Related Targets)' and Target Type as Oracle Exadata Database Machine.
  • Click Add Using Guided Discovery and follow the wizard.


5) Setup Database Cloud Using Rapid Start Kit - Once you have set up Exadata management in EM 12c, the next step is to set up the database cloud. Refer to the Rapid Start Kit for setting up the cloud for both DBaaS and Pluggable DBaaS (PDBaaS). This kit will help achieve the following -

  • Create Cloud Admin, SSA Admin and SSA User custom roles
  • Create Cloud Admin, SSA Admin and SSA Users
  • Grant Quota to SSA User custom roles
  • Setup Zones with Placement Policy Constraints
  • Setup Pools with Placement Constraints
  • Setup Service Template/Catalog and grant it to SSA User custom roles.

Here are brief steps for setting up the Database Cloud using the Rapid Start Kit, available in EM Agent Kit 12.1.0.3.0, after logging into the first DB node of the Exadata machine as the EM 12c agent owner:

  • Change to <location where Agent kit is unzipped>/cloudsetup directory.
  • Review the input files under config directory and customize the dbaas_cloud_input.xml for configuring DBaaS cloud and pdbaas_cloud_input.xml for configuring Pluggable Database as a Service.
  • Run the following command to setup DBaaS on Exadata Machine.
<EMCLI Home>/emcli login -username=sysman
<EMCLI Home>/emcli @exadata_cloud_setup.py -dbaas
The above command will use dbaas_cloud_input.xml (under cloudsetup/config) as the input file for configuring DBaaS.
  • To setup PDBaaS on Exadata, please use the following command.
<EMCLI Home>/emcli @exadata_cloud_setup.py -pdbaas
The above command will use pdbaas_cloud_input.xml (under cloudsetup/config) as the input file for configuring PDBaaS.

Note: Currently the Rapid Start Kit for DBaaS makes use of the 11.2.0.3.0 Database "Exadata Data Warehouse" profile available out of the box. However, you can create your own DBCA-based profiles and customize dbaas_cloud_input.xml. Also, if you need to use an RMAN backup based or Snap Clone based profile, you can log into the EM12c SSA Portal as the SSA Administrator to create the profile and set up the service template.

At this stage, you will be able to manage and deliver your Exadata powered enterprise database cloud using EM 12c.

Additional References:

Obtaining the Oracle Enterprise Manager Setup Automation kit for Exadata (Doc ID 1440951.1)

Oracle® Enterprise Manager Cloud Administration Guide 12c Release 3 (12.1.0.3)

Oracle® Enterprise Manager Exadata Management Getting Started Guide Release 12.1.0.5.0


Monday Jan 27, 2014

IOUG Oracle EM SIG Techcast: Managing Oracle Enterprise Manager 12c with Oracle Clusterware


The Oracle Enterprise Manager Special Interest Group (SIG) is a growing body of IOUG members who manage or are interested in all aspects of Oracle Enterprise Manager. This IOUG SIG is managed by volunteers and supported by Oracle Enterprise Manager product managers and developers. The purpose of the SIG is to bring relevant information and education through webcasts, discussions and networking to users interested in learning more about the product, and to share user experiences.

On January 28th at 10 AM Pacific Time, the Oracle Enterprise Manager SIG is hosting a webcast on "Managing Oracle Enterprise Manager Cloud Control 12c with Oracle Clusterware". In this webcast, Leighton Nelson, Associate Principal Database Administrator and Oracle ACE, will discuss the steps required to configure virtual host names and create an Oracle Clusterware resource for Oracle Enterprise Manager to provide seamless failover and improved high availability levels.


Register Now !


Wednesday Jan 15, 2014

Oracle Enterprise Manager Snap Clone Webcast Replay and Slides

Thank you to all who attended our webcast on Enterprise Manager 12c Snap Clone last month. In this webcast, we talked about how EM12c Snap Clone can help:

  • Leverage storage copy-on-write technologies for rapid provisioning
  • Integrate cloning with other Oracle Enterprise Manager 12c Lifecycle Management features, such as data masking and sub-setting
  • “Time travel” across multiple database snapshots to restore and access past data
  • Reduce administrative overhead through integrated lifecycle management

 For those who missed this webcast, the replay is available here and the slides have been uploaded to slideshare.

Feel free to reach out to us if you have any questions on Snap Clone or Database as a Service.

- Adeesh Fulay (@adeeshf)


Tuesday Jan 07, 2014

Create a Database Instance without DB Control Schema

When using Database Configuration Assistant (DBCA) to create a database instance to house the repository of Enterprise Manager Cloud Control, people often end up with an instance containing the DB Control schema, and they have to remove the schema from the instance before installing EM12c. To create an instance without the DB Control or SYSMAN schema in the first place, make the following selections in the DBCA database creation process.


1. In the Database Template step, select the Custom Database option.



2. In the Management Options step, uncheck the Configure Enterprise Manager option.



Note: If DBCA locates an agent installation on the host, it will provide an option to register the database instance that you are creating with the corresponding Management Service. If you choose that option, DBCA will add the database instance to the Enterprise Manager site as a managed target upon completion of instance creation. This option, however, does not create SYSMAN schema in the database instance.



3. In the Database Content step, uncheck the Enterprise Manager Repository option.


Remember to make the above-mentioned selections, and you will have an instance without DB Control, which is fit for housing an EM repository.



Note: From EM12c Release 2, there is an option to create an 11.2.0.3 database instance with a pre-configured EM repository. For instructions see:

Creating a Database Instance with Preconfigured Repository Using Database Templates

Installing Oracle Database and Creating a Database


Thursday Jan 02, 2014

What is EM 12c DBaaS Snap Clone?

Happy New Year to all! Since this is the first blog post of the new year, let's look at a relatively new feature in EM that has gained significant popularity over the last year - EM 12c DBaaS Snap Clone.

The 'Oracle Cloud Management Pack for Oracle Database', a.k.a. the Database as a Service (DBaaS) feature in EM 12c, has grown tremendously since its release two years ago. It started with basic single-instance and RAC database provisioning, a technical service catalog, an out-of-the-box self-service portal, metering and chargeback, etc. Since then we have added provisioning of schemas and pluggable databases, full clones using RMAN backups, and Snap Clone. This video showcases the various EM12c DBaaS features.

This post covers one of the most exciting and popular features - Snap Clone. In one line, Snap Clone is a self-service way of creating rapid, space-efficient clones of large (~TB) databases.

  • Self Service - empowers end users (developers, testers, data analysts, etc.) to get access to database clones whenever they need them.
  • Rapid - refers to the time it takes to clone the database: minutes, not hours, days, or weeks.
  • Space Efficient - represents the significant reduction in storage (>90%) required for cloning databases.

Customer Scenario

To best explain the benefits of Snap Clone, let’s look at a Banking customer scenario:

  • 5 production databases totaling 30 TB of storage
  • All 5 production databases have a standby
  • Clones of the production databases are required for data analysis and reporting
  • 6 total clones across different teams every quarter
  • For security reasons, sensitive data has to be masked prior to cloning

Based on the above scenario, the storage required, if using traditional cloning techniques, can be calculated as follows:

5 Prod DB                  = 30 TB
5 Standby DB            = 30 TB
5 Masked DB             = 30 TB (These will be used for creating clones)
6 Clones (6 * 30 TB) = 180 TB
                               ------------------
Total                           = 270 TB
Time = days to weeks

As the numbers indicate, this is quite horrible. Not only does 30 TB turn into 270 TB, but creating 6 clones of all production databases would take forever. In addition, there are other issues with data cloning, like:

  • Lack of automation. Scripts are good but often not a long-term solution.
  • Traditional cloning techniques are slow, while existing storage vendor solutions are DBA-unfriendly.
  • Data explosion often outpaces storage capacity and hurts IT's ability to provide clones for dev and testing.
  • Archaic processes that require multiple users to share a single clone, or that only support fixed refresh cycles.
  • Different priorities between DBAs and storage admins.

Snap Clone to the Rescue

All of the above issues lead to slow turnaround times, and users have to wait for days and weeks to get access to their databases. Basically, we end up with competing priorities and requirements, where the user demands self service access, rapid cloning, and the ability to revert data changes, while IT demands standardization, better control, reduction in storage and administrative overhead, better visibility into the database stack, etc.

EM 12c DBaaS Snap Clone tries to address all these issues. It provides:

  • Rapid and space-efficient cloning of databases by leveraging storage copy-on-write (or similar) technology
  • Support for all database versions from 10g to 12c
  • Support for various storage vendors and configurations (NAS and SAN)
  • Lineage and association tracking between clone master and its various clones and snapshots
  • 'Time Travel' capability to restore and access past data
  • Deep visibility into storage, OS, and database layer for easy triage of performance and configuration issues
  • Simplified access for end user via out-of-the-box self service portal
  • RESTful APIs to integrate with custom portals and third party products
  • Ability to meter and charge back on the clone databases

So how does Snap Clone work?

The secret sauce lies in the Storage Management Framework (SMF) plug-in. This plug-in sits between the storage system and the DBA, and provides the much needed layer of abstraction required to shield DBAs and users from the nuances of the different storage systems. At the storage level, Snap Clone makes use of storage copy-on-write (or similar) technology. There are two options in terms of using and interacting with storage:

1. Direct connection to storage: Here storage admins can register NetApp and ZFS storage appliances with EM, and EM then connects directly to the storage appliance and performs all required snapshot and clone operations. This approach requires you to license the relevant options on the storage appliance, but it is the easiest, most efficient, and most fault-tolerant option.

2. Connection to storage via the ZFS file system: This is a storage vendor-agnostic solution and can be used by any customer. Here, instead of connecting to the storage directly, the storage admin mounts the volumes on a Solaris server and formats them with the ZFS file system. All snapshot and clone operations required on the storage are then conducted via the ZFS file system. The good thing about this approach is that it does not require thin cloning options to be licensed on the storage, since the ZFS file system provides these capabilities.

For more details on how to setup and use Snap Clone, refer to a previous blog post.

Now, let's go back to our banking customer scenario and see how Snap Clone helped them reduce their storage cost and time to clone.

5 Prod DB                  = 30 TB
5 Standby DB               = 30 TB
5 Masked DB                = 30 TB
6 Clones (6 * 5 * 2 GB)    = 60 GB (instead of 6 * 30 TB = 180 TB)
                             ------------------
Total                      = ~90 TB (instead of 270 TB)
Time                       = minutes (instead of days to weeks)

Assuming the clone databases will see minimal writes, we allocate about 2 GB of write space per clone. For 5 production databases and 6 clones, this totals just 60 GB of required storage, a whopping 99.97% savings compared to the 180 TB the full clones would have needed. Plus, these clones are created in a matter of minutes, not the usual days or weeks. The product has out-of-the-box charts that show the storage savings across all storage devices and cloned databases. See the screenshot below.

Snap Clone Savings
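If you want to sanity-check the arithmetic yourself, here is a minimal back-of-the-envelope sketch in plain Python (the figures are the assumed values from the scenario above):

# assumed figures from the banking scenario above
base_tb  = 30.0 + 30.0 + 30.0        # prod + standby + masked copies, in TB
clone_tb = (6 * 5 * 2) / 1024.0      # 6 clones x 5 DBs x 2 GB of write space each, in TB
print "Total storage : %.2f TB" % (base_tb + clone_tb)            # ~90 TB
print "Clone savings : %.2f%%" % ((1 - clone_tb / 180.0) * 100)   # vs. 180 TB of full clones, ~99.97%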

Where can you use Snap Clone databases?

As I said earlier, Snap Clone is most effective when cloning large (~TB) databases. Common scenarios where we see our customers use Snap Clone are:

  • Application upgrade testing. For example, E-Business Suite upgrade to R12.
  • Functional testing. For example, testing using production datasets.
  • Agile development. For example, run parallel development sprints by giving each sprint its own cloned database.
  • Data analysis and reporting. For example, stock market analysis at the close of market every day.

It's obvious that Snap Clone has a strong affinity to applications, since it is application data that you want to clone and use. Hence it is important to add that the Snap Clone feature, when combined with EM12c Middleware as a Service (MWaaS), can provide a complete end-to-end self-service application deployment experience. If you have existing portals or need to integrate Snap Clone with existing processes, use our RESTful APIs for easy integration with third-party systems.

In summary, Snap Clone is a new and exciting way of dealing with data cloning challenges. It shields DBAs from the nuances of different storage systems, while allowing end users to request and use clones in a rapid and self service fashion. All of this while saving storage costs. So try this feature out today, and your development and test teams will thank you forever.

In subsequent blog posts, we will look at some popular deployment models used with Snap Clone.

-- Adeesh Fulay (@adeeshf)

Additional References

Cloud Management Page on OTN

Cloud Administration Guide (Documentation)

Enterprise Manager 12c Database-as-a-Service Snap Clone Overview (Presentation)

Saturday Nov 02, 2013

Sending notification after an event has remained open for a specified period

Enterprise Manager (EM) 12c allows you to create an incident rule to send a notification and/or create an incident after an event has been open for a specified period. Such an incident rule will help prevent premature alerts on issues that may correct themselves within a certain amount of time.

For example, suppose there are some agents in an unstable network area, and there are often communication failures between the agents and the OMS lasting three or four minutes at a time. In this scenario, you may only want to receive alerts after an agent in that area has been in the Agent Unreachable status for at least five minutes.

Note: Many non-target availability metrics allow users to specify the “number of occurrences” or the number of consecutive times metric values reach thresholds before a notification is sent. It is best to use the feature for such metrics.


This article provides a step-by-step guide for creating an incident rule set to cater for the above scenario, that is, to create an incident and send a notification after the Agent Unreachable event has remained open for a five-minute duration.


Steps to create the incident rule

1.     Log on to the console and navigate to Setup -> Incidents -> Incident Rules.

Note: A non-super user requires the Create Enterprise Rule Set privilege, which is a resource privilege, to create an incident rule.


The Incident Rules - All Enterprise Rules page displays.


2.     Click Create Rule Set …

The Create Rule Set page displays.


3.     Enter a name for the rule set (e.g. Rule set for agents in flaky network areas), optionally enter a description, and leave everything else at default values, and click + Add.

The Search and Select: Targets page pops up.


Note:  While you can create a rule set for individual targets, it is a best practice to use a group for this purpose.



4.     Select an appropriate group, e.g. the AgentsInFlakyNetwork group. The Select button becomes enabled; click it.

The Create Rule Set page displays.


5.     Leave everything at default values, and click the Rules tab.

The Create Rule Set page displays.


6.     Click Create…

The Select Type of Rule to Create page pops up.


7.     Leave the Incoming events and updates to events option selected, and click Continue.

The Create New Rule : Select Events page displays.


8.     Select Target Availability from the Type drop-down list.

The page shows more options for Target Availability.


9.     Select the Specific events of type Target Availability option, and click + Add.

The Select Target Availability events page pops up.


10.   Select Agent from the Target Type dropdown list.

The page expands.


11.   Click the Agent unreachable checkbox, and click OK.


Note: If you want to also receive a notification when the event is cleared, click the Agent unreachable end checkbox as well before clicking OK.


The Create New Rule : Select Events page displays.


12.   Click Next.

The Create New Rule : Add Actions page displays.


13.   Click + Add.

The Add Actions page displays.


14.   Do the following:

a.     Select the Only execute the actions if specified conditions match option (you don't want the action to always trigger).

The following options appear in the Conditions for Actions section.

b.     Select the Event has been open for specified duration option.

The Conditions for actions section expands.

c.     Change the values of Event has been open for to 5 Minutes as shown below.

d.     In the Create Incident or Update Incident section, click the Create Incident checkbox as shown below:

e.     In the Notifications section, enter an appropriate EM user or email address in the E-mail To field.

f.     Click Continue (in the top right hand corner).

The Create New Rule : Add Actions page displays.


15.   Click Next.

The Create New Rule : Specify name and Description page displays.


16.   Enter a rule name, and click Next.

The Create New Rule : Review page appears.


17.   Click Continue, and proceed to save the rule set.

The incident rule set creation completes.

After one of the agents in the group specified in the rule set is stopped for over 5 minutes, EM will send a mail notification and create an incident as shown in the following screenshot.



In conclusion, you have seen the steps to create an example incident rule set that only creates an incident and triggers a notification after an event has been open for a specified period. Such an incident rule can help prevent unnecessary incidents and alert notifications, freeing up EM administrators' time for more important tasks.

- Loc Nhan

Thursday Oct 17, 2013

IOUG SIG Webcast on October 30th : Performance Tuning your DB Cloud


The Oracle Enterprise Manager Special Interest Group (SIG) is a growing body of IOUG members who manage or are interested in all aspects of Oracle Enterprise Manager. This IOUG SIG is managed by volunteers and supported by Oracle Enterprise Manager product managers and developers. The purpose of the SIG is to bring relevant information and education through webcasts, discussions and networking to users interested in learning more about the product, and to share user experiences.

On October 30th at 10 AM Pacific time, the Oracle Enterprise Manager SIG is hosting a webcast on "Performance Tuning your DB Cloud in OEM 12c Cloud Control - 360 Degrees". In this webcast, Tariq Farooq, CEO of BrainSurface, and Mike Ault of Oracle will provide a tutorial on how to monitor and tune the performance of your Oracle database cloud environment.

You will learn how to leverage Oracle Enterprise Manager for tuning, trouble-shooting & monitoring your Oracle Database Cloud Ecosystem. The session covers lessons learned, tips/tricks, recommendations, best practices, gotchas and a whole lot more on how to effectively use Oracle Enterprise Manager Cloud Control 12c for quick, easy & intuitive performance tuning of your Oracle Database Cloud.

Session Objectives:
• Leveraging OEM12c Cloud Control for Oracle DB Tuning/Monitoring
• Limited Deep-Dive on AWR
• Oracle DB Cloud Performance Tuning
• Best Practices for DB Cloud Maintenance/Monitoring

Register Now!


Wednesday Jul 24, 2013

Understanding Agent Resynchronization

Agent Resynchronization (resync) is an important topic but is often misunderstood or misused. In this Q&A-style post, I discuss how and when it is appropriate to use agent resynchronization.

What is Agent Resynchronization?

A Management Agent can be reconfigured using the target information present in the Management Repository. Resynchronization pushes all targets and related information from the Management Repository to the Management Agent and then unblocks the agent.

 Why do agents need to be re-synchronized?

There are two primary reasons why you may need to use agent resynchronization:

1. Agent is blocked

An agent is blocked whenever it is out of sync with the repository. This typically happens due to a corrupt targets.xml, missing files or directories, or bugs in the code (they are rare, but a few do exist :) ) that leave the plug-in inventories in a strange state. In this condition, the OMS rejects all heartbeat and upload requests from the blocked agent. This means the blocked agent will not be able to upload any alerts or metric data to the OMS, but it does continue to collect monitoring data. This is useful because, once the agent is resynchronized, no monitoring data is lost.

2. Agent is lost and has to be reinstalled

This could be considered a special case of the blocked-agent condition, but it is worth discussing separately. If an agent host or file system is ever lost, the recommended way to reinstall the agent is by cloning from a reference install. This not only recovers the agent, but also avoids having to track and reapply customizations and patches.

Note: It is important to retain the same port when reinstalling the agent.

Agent resync, when run on a reinstalled agent, reconfigures it using the target information present in the repository. The OMS detects that the agent has been reinstalled and blocks it temporarily to prevent the auto-discovered targets in the reinstalled agent from overwriting previous customizations.

Note: NEVER, NEVER, combine agent recovery with upgrade! If you lose your agent, recover it first using the original version, and then upgrade it to the new release.

Which interfaces are available for this operation?

There are two interfaces that will allow you to perform agent resync.

1. The Enterprise Manager Console

  a. Navigate to Setup->Manage Cloud Control->Agents to view list of all agents

  b. Select the desired agent and visit its home page

  c. Finally, select the 'Resynchronization...' option from the agent menu

Agent Resynchronization Menu Item

2. EMCLI

The agent can also be resynchronized via EMCLI. The command is as follows:

>> emcli resyncAgent -agent="Agent Host:Port"
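As an aside, if you are using the newer 'EMCLI with Scripting Option' (described further down this page), verbs are exposed as Jython functions, so the same operation should be expressible from the interactive prompt along these lines (the agent name is just a placeholder):

>>resyncAgent(agent="myagenthost.acme.com:3872")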

How long does it take to resynchronize an agent?

This is a popular question, but unfortunately there is no straight answer. The time for resynchronization depends on the amount of data stored in the repository about the agent. When this action is invoked, the OMS does not consult the agent - it simply asks the agent to delete everything first, and then pushes the known state to it. The majority of the time is spent pushing the plug-in content, so the more plug-ins deployed to the agent, the longer it takes. Metric Extensions and Configuration Extensions deployed to the agent also add to the time.

Additional  Resources:

Upgrading  Oracle Management Agents

Back Up and Recover Enterprise Manager



Thursday Jul 11, 2013

Oracle Enterprise Manager 12c Release 3: What’s New in EMCLI

If you have been using the classic Oracle Enterprise Manager Command Line Interface (EMCLI), you are in for a treat. Oracle Enterprise Manager 12c R3 comes with a new EMCLI kit called 'EMCLI with Scripting Option'. Not my favorite name, as I would have preferred to call it EMSHELL since it truly provides a shell similar to bash or csh. Unlike the classic EMCLI, this new kit provides a Jython-based scripting environment along with a large collection of verbs to use. This scripting environment enables users to use established programming language constructs like loops (for, while), conditional statements (if-else), etc. in both interactive and scripting modes.

Benefits of ‘EMCLI with Scripting Option’

Some of the key benefits of the new EMCLI are:

  • Jython based scripting environment
  • Interactive and scripting mode
  • Standardized output format using JSON
  • Can connect to any EM environment (no need to run EMCLI setup …)
  • Stateless communication with OMS (no user data is stored with the client)
  • Generic list function for EM resources
  • Ability to run user-defined SQL queries to access published repository views

Before we go any further, there are two topics that warrant some discussion – Jython and JSON.

Jython

Jython is the Java implementation of the Python programming language. I have been working with Python (or CPython) and Jython for the last 10 years, and to me it is the best scripting language ever. It is fun and easy to learn, the syntax is simple and self-formatting, and it is dynamically typed. This comic from XKCD summarizes it best:

Python

There are numerous tutorials for Python/Jython on the web, so feel free to pick any one you like, but remember that the Jython version supported by the new kit is v2.5.1.

JSON

JSON stands for JavaScript Object Notation. It is a data interchange format, much like XML, but it is easier to read and write for both humans and machines, and unlike XML it contains very little metadata (element and attribute names). The JSON format is quite simple; it basically represents data as a collection of name/value pairs, and these pairs can be contained within arrays, lists, or maps. Here is a sample:

{"menu": {

"id": "file",

"value": "File",

"popup": {

"menuitem": [

{"value": "New", "onclick": "CreateNewDoc()"},

{"value": "Open", "onclick": "OpenDoc()"},

{"value": "Close", "onclick": "CloseDoc()"}

]

}

}}

JSON is quite popular. You will often find it used with REST based web services APIs or even with some modern databases like MongoDB. Most programming languages provide libraries to work with JSON.

The EMCLI kit uses JSON as its output format as well. Many of the verbs return output in JSON format for ease of programmatic use. I say many, since there are still some verbs that don't, but that is only a matter of time.
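Since the new kit runs on Jython, JSON documents map naturally onto Jython dictionaries and lists. Purely as an illustration (not an EMCLI verb), the sample menu document above could be held and navigated at the interactive prompt like this:

>>menu_doc = {'menu': {'id': 'file', 'value': 'File', 'popup': {'menuitem': [{'value': 'New', 'onclick': 'CreateNewDoc()'}, {'value': 'Open', 'onclick': 'OpenDoc()'}, {'value': 'Close', 'onclick': 'CloseDoc()'}]}}}
>>print menu_doc['menu']['popup']['menuitem'][0]['value']
New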

Now let’s get back to EMCLI.

Steps to setup the kit for ‘EMCLI with Scripting Option’

1. To download the new EMCLI kit, go to Setup->Command Line Interface. Here you will notice the new section for ‘EMCLI with Scripting Option’. Click on the link to download the kit to your desktop or desired server.

Download

You can also download the kit directly from the following url:

http(s)://<host>:<port>/em/public_lib_download/emcli/kit/emcliadvancedkit.jar
 

2. Copy the kit (emcliadvancedkit.jar) to a directory where you wish to install EMCLI

kit

3. To install, run the following command. Note that we need the Java version to be 1.6.0_43 or greater.

java -jar emcliadvancedkit.jar client -install_dir=<emcli_client_dir>

Verify Java version 

4. The last step to complete the setup is to run ‘sync’. Before using EMCLI you have to connect to the OMS to install all verb-related command line help. In classic EMCLI, this happens automatically when you run the ‘setup’ command. But in the new EMCLI, since we do not run setup, we run the ‘sync’ command instead.

The ‘sync’ verb now accepts some additional arguments. Run the following command:
emcli sync -url=http(s)://<host>:<port>/em -username=<user> -trustall

It will prompt for the user password and then take a few minutes to download and install all the help content.

emcli sync

5. Now confirm the setup with a simple test. We do this using the interactive mode. Just run 'emcli', and once you see the prompt, run 'help()'. This will list all verbs along with their descriptions.

emcli interactive mode

With the setup complete, let’s now have some fun.

Using the ‘EMCLI with Scripting Option’

Connect to the interactive mode by running ‘emcli’ from the command prompt. Now try the following commands:

1. Basic Jython: Since EMCLI is built using the Jython interpreter, you can run Jython commands at the EMCLI prompt. For example, you can try the following:

>>1+2

>>print "Hello Jython"

>>mylist = [1,2,3]

>>print mylist

Jython test

2. EMCLI Status: Next, print the status of the EMCLI session using the ‘status()’ command.

emcli status

You will notice that the EM URL and user are not set. To set them, we have to set the client properties. Run help('client_properties') for more details.

client properties

The help text instructs us to set the client properties to connect to a specific EM environment. The 4 properties of interest to us are the following:

  • EMCLI_OMS_URL - the EM URL
  • EMCLI_USERNAME - the EM user to connect as. We will use the login() function to set this.
  • EMCLI_TRUSTALL - I like to set this to true, but the default is false.
  • EMCLI_OUTPUT_TYPE - I like to set this to JSON, even for interactive mode.

To set these properties run the following:

>>set_client_property('EMCLI_OMS_URL','http(s)://<host>:<port>/em')

>>set_client_property('EMCLI_TRUSTALL','TRUE')

>>set_client_property('EMCLI_OUTPUT_TYPE', 'JSON')

>>login(username="<em_user>",password="<password>")

You should see the message on successful login. Now we are connected to EM.

login

3. Understanding help and verb invocations: Most of the help text presented in EMCLI is tailored towards the classic interface. Since Jython is a programming language, verb invocations are done in the function form. There is a simple mechanism for converting the classic invocation format for use in both interactive and scripting mode. Let’s use the login() verb as an example.

The EMCLI help for login is as follows:

>>help('login')

emcli login

-username=<EM Console Username>

[-password=<EM Console Password>]

[-force]

This means, when using classic EMCLI, you would invoke it as follows:

emcli login -username="foo" -password="bar" -force

Instead, in the interactive or script mode, the invocation would look like:

login(username="<em_user>",password="<password>",force=True)

Essentially, all verbs are now functions, and all arguments to the verb are now parameters passed to the function. Since the -force argument does not take any value, it is treated as a Boolean in Jython and takes the values True or False.

Note: The -force parameter in the login() function is not applicable to the interactive or script mode, but is being used in this example to explain the concept of passing Boolean values. Again, you should never use the -force parameter in the interactive or script mode.

Another such conversion that you may come across is for lists of values. For example:

In classic EMCLI, some verbs will ask for the same attribute to be repeated with varying values to represent a list.

emcli grant_privs -name='jan.doe' 
         -privilege="USE_ANY_BEACON"
         -privilege="FULL_TARGET;TARGET_NAME=host1.acme.com:TARGET_TYPE=host"

In interactive or script mode, you can use native Jython lists instead and pass them as parameters. In Jython, lists are represented within square brackets ([]).

>>priv_list = ['USE_ANY_BEACON','FULL_TARGET;TARGET_NAME=host1.acme.com:TARGET_TYPE=host']
>>grant_privs(name='jan.doe',privilege=priv_list)

4. Sample Use Case: Let's take a very simple use case to demonstrate the interaction with EMCLI in the interactive mode. Our sample use case is to list all targets of type oracle_database whose names start with the characters 'db'.

For this use case, we will make use of the new generic ‘list’ verb. Traditionally, each feature in EM provided its own verbs for list, get, show, and describe. Rather than working with multiple such variants, the new generic ‘list’ verb takes a page from the REST web service specification and provides a generic action that can work against different EM resources.

To learn more about this verb, we run:

>>help('list')

help

The help text shows us the format of this verb. Essentially, there are 3 parameters that we care about:
  • resource = the EM resource which is to be queried
  • columns = specify the different resource attributes to display
  • search = filters to narrow down the result

First, we need to know the list of resources that are supported by this verb. For this we run:

>>list('help')

list help

From the output, it is obvious that for our sample use case we want to query the Targets resource.

Second, we need to know which columns are supported by the Targets resource. For this, we run

>>list('help',resource="Targets")

help resources

From the output, we can determine that we need the column related to target name and type. With this we have all the information we need to construct the final function call for our sample use case.

For ease of explanation, I will break down the process of determining the final function call into small incremental steps. Once you gain proficiency, you will be able to define this function in a single pass.

       1. List all targets in the EM environment. For this we run,

>>list(resource="Targets")

This command will spew a lot of text on your screen as there are likely to be numerous targets in your EM environment. So instead of listing all of them on the screen, let’s just get a count. For this, we need to understand the output format of this verb.

Any function that you run in the interactive or script mode returns an object of class Response (<class 'emcli.response.Response'>). The Response class has 4 key methods:

  • out() - Provides the verb execution output. The output can be text or JSON; the isJson() method on the Response object can be used to determine whether the output is JSON.
  • error() - Provides the error text (if any) when there are errors or exceptions during verb execution.
  • exit_code() - Provides the exit code of the verb execution. The exit code is zero for a successful execution and non-zero otherwise.
  • isJson() - Provides details about the type of output. It returns True if response.out() can be parsed into a JSON object.

So let’s look at a code snippet.

snippet

For the first function call, which lists all targets in EM, we store the result in a variable called 'all_tgts'. This variable holds the response object, and 'all_tgts.out()' gives us the actual output. The output is returned in JSON format, which automatically gets converted into a Jython dictionary (a collection of name/value pairs represented by curly brackets). The output dictionary has a key called 'data' whose value is a Jython list containing all the search results. Finally, len() is a native Jython function that returns the number of elements in a Jython list. As seen in the output, we found 878 targets in the EM environment, which is clearly not what we want.
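In case the screenshot is hard to read, here is a minimal reconstruction of that snippet (878 is simply the count returned in my environment; yours will differ):

>>all_tgts = list(resource="Targets")
>>print len(all_tgts.out()['data'])
878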

2. Now we add search parameters to filter our results. We add two search filters: first, the target type should be equal to oracle_database, and second, the target name should be like 'db%'. You can add multiple search filters to the function call, but all of these filters must be encapsulated in a Jython list. The search filter supports various operators: =, !=, >, <, >=, <=, like, null, and not null. As with a SQL query, you can also control which columns are displayed in the output.

So let’s run our final function.

>>search_filters=["TARGET_TYPE ='oracle_database'","TARGET_NAME like 'db%'"]

>>list(resource="Targets", columns="TARGET_NAME,TARGET_TYPE", search=search_filters)

The formatted output looks like this. As mentioned before, it is a Jython dictionary, which can easily be accessed programmatically. The value of the 'data' key is a Jython list that contains all the search results, while the other keys provide metadata related to the result.

{
    'exceedsMaxRows': False,
    'columnHeaders': ['TARGET_NAME', 'TARGET_TYPE'],
    'columnLength': [256, 64],
    'columnNames': ['TARGET_NAME', 'TARGET_TYPE'],
    'data':
    [
        {'TARGET_NAME': 'db9328.acme.com', 'TARGET_TYPE': 'oracle_database'},
        {'TARGET_NAME': 'db3092.acme.com', 'TARGET_TYPE': 'oracle_database'},
    ],
    'filler': '\n\n\n'
}

You must have noticed that I have hardly talked about the scripting mode. This is on purpose, as I believe the interactive mode is the best interface for learning the new EMCLI. Once you master the interactive mode, converting your code snippets into a script is fairly easy. In future blog posts, I will cover the scripting mode and numerous other use cases that are a perfect fit for the new EMCLI.
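To give you a taste of the scripting mode anyway, here is a minimal sketch of what a standalone script for our sample use case might look like. The OMS URL, user, password, and file name are placeholders, and I am assuming the script is simply passed to emcli on the command line (for example, emcli list_db_targets.py):

# list_db_targets.py - list oracle_database targets whose names start with 'db'
set_client_property('EMCLI_OMS_URL', 'https://myoms.acme.com:7802/em')   # placeholder URL
set_client_property('EMCLI_TRUSTALL', 'TRUE')
set_client_property('EMCLI_OUTPUT_TYPE', 'JSON')
login(username='em_user', password='welcome1')   # placeholders; in practice, prompt for the password

search_filters = ["TARGET_TYPE ='oracle_database'", "TARGET_NAME like 'db%'"]
resp = list(resource="Targets", columns="TARGET_NAME,TARGET_TYPE", search=search_filters)

for tgt in resp.out()['data']:
    print tgt['TARGET_NAME'], tgt['TARGET_TYPE']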

In summary, 'EMCLI with Scripting Option' is a new kit that is built on top of a Jython interpreter. It is much superior to the classic EMCLI, as it provides a complete programming environment with the ability to use native Jython functions and primitives. The output is presented in JSON format, which is both human- and machine-readable and avoids the need to parse text output. The client is completely stateless, which means no user data is stored with the client, so numerous sessions can be launched from a single client, each connecting to a different EM environment and as a different user.

I encourage you to play around with this new EMCLI kit and post the different use cases that you find interesting and that would benefit the community. You can reach me on Twitter @AdeeshF.

Additional Reading:

The EMCLI Documentation Guide

