Thursday Jan 02, 2014

What is EM 12c DBaaS Snap Clone?

Happy New Year to all! For the first blog post of the new year, let's look at a relatively new feature in EM that has gained significant popularity over the last year - EM 12c DBaaS Snap Clone.

The ‘Oracle Cloud Management Pack for Oracle Database’, a.k.a. the Database as a Service (DBaaS) feature in EM 12c, has grown tremendously since its release two years ago. It started with basic single instance and RAC database provisioning, a technical service catalog, an out-of-the-box self service portal, metering and chargeback, etc. Since then we have added provisioning of schemas and pluggable databases, full clones using RMAN backups, and Snap Clone. This video showcases the various EM12c DBaaS features.

This blog will cover one of the most exciting and popular features – Snap Clone. In one line, Snap Clone is a self service way of creating rapid and space efficient clones of large (~TB) databases.

Self Service - empowers the end users (developers, testers, data analysts, etc.) to get access to database clones whenever they need them.
Rapid - refers to the time it takes to clone the database: minutes, not hours, days, or weeks.
Space Efficient - represents the significant reduction in storage (>90%) required for cloning databases.

Customer Scenario

To best explain the benefits of Snap Clone, let’s look at a Banking customer scenario:

  • 5 production databases totaling 30 TB of storage
  • All 5 production databases have a standby
  • Clones of the production database are required for data analysis and reporting
  • 6 total clones across different teams every quarter
  • For security reasons, sensitive data has to be masked prior to cloning

Based on the above scenario, the storage required, if using traditional cloning techniques, can be calculated as follows:

5 Prod DB               =  30 TB
5 Standby DB            =  30 TB
5 Masked DB             =  30 TB (these will be used for creating clones)
6 Clones (6 * 30 TB)    = 180 TB
                          ------------------
Total                   = 270 TB
Time                    = days to weeks

As the numbers indicate, this is quite horrible. Not only does 30 TB turn into 270 TB, but creating 6 clones of all production databases would take forever. In addition, there are other issues with data cloning:

  • Lack of automation. Scripts are good but often not a long term solution.
  • Traditional cloning techniques are slow, while existing storage vendor solutions are DBA unfriendly.
  • Data explosion often outpaces storage capacity and hurts IT's ability to provide clones for dev and testing.
  • Archaic processes that require multiple users to share a single clone, or only support fixed refresh cycles.
  • Different priorities between DBAs and storage admins.

Snap Clone to the Rescue

All of the above issues lead to slow turnaround times, and users have to wait days or weeks to get access to their databases. Basically, we end up with competing priorities and requirements: the user demands self service access, rapid cloning, and the ability to revert data changes, while IT demands standardization, better control, reduction in storage and administrative overhead, better visibility into the database stack, etc.

EM 12c DBaaS Snap Clone tries to address all these issues. It provides:

  • Rapid and space efficient cloning of databases by leveraging storage copy-on-write (or similar) technology
  • Support for all database versions from 10g to 12c
  • Support for various storage vendors and configurations (NAS and SAN)
  • Lineage and association tracking between clone master and its various clones and snapshots
  • 'Time Travel' capability to restore and access past data
  • Deep visibility into storage, OS, and database layer for easy triage of performance and configuration issues
  • Simplified access for end users via the out-of-the-box self service portal
  • RESTful APIs to integrate with custom portals and third party products
  • Ability to meter and charge back on the clone databases

So how does Snap Clone work?

The secret sauce lies in the Storage Management Framework (SMF) plug-in. This plug-in sits between the storage system and the DBA, and provides the much needed layer of abstraction required to shield DBAs and users from the nuances of the different storage systems. At the storage level, Snap Clone makes use of storage copy-on-write (or similar) technology. There are two options in terms of using and interacting with storage:

1. Direct connection to storage: Here, storage admins can register NetApp and ZFS storage appliances with EM, and then EM directly connects to the storage appliance and performs all required snapshot and clone operations. This approach requires you to license the relevant options on the storage appliance, but it is the easiest, most efficient, and most fault tolerant approach.

2. Connection to storage via the ZFS file system: This is a storage vendor agnostic solution and can be used by any customer. Here, instead of connecting to the storage appliance directly, the storage admin mounts the volumes on a Solaris server and formats them with the ZFS file system. All snapshot and clone operations required on the storage are then conducted via the ZFS file system. The good thing about this approach is that it does not require thin cloning options to be licensed on the storage, since the ZFS file system provides these capabilities.
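To make the copy-on-write idea concrete, below is a minimal sketch of the kind of snapshot and clone operations that happen at the ZFS layer in this configuration. This is illustrative only, not what EM literally executes (the SMF plug-in abstracts all of this); the pool, dataset, and clone names are hypothetical.

# Illustrative only; the SMF plug-in drives operations of this kind under the hood.
# 'tank/testmaster' and the snapshot/clone names are hypothetical.
import subprocess

def zfs(*args):
    # Echo and run a zfs command, raising an error if it fails.
    cmd = ['zfs'] + list(args)
    print(' '.join(cmd))
    subprocess.check_call(cmd)

# Take a copy-on-write snapshot of the Test Master's dataset: a point-in-time,
# read-only view that completes in seconds regardless of database size.
zfs('snapshot', 'tank/testmaster@clone_req_42')

# Create a writable clone backed by that snapshot. No data is copied; new
# blocks are allocated only as the clone diverges from the source.
zfs('clone', 'tank/testmaster@clone_req_42', 'tank/dbclone42')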

For more details on how to setup and use Snap Clone, refer to a previous blog post.

Now, let's go back to our Banking customer scenario and see how Snap Clone helped them reduce their storage cost and time to clone.

5 Prod DB               = 30 TB
5 Standby DB            = 30 TB
5 Masked DB             = 30 TB
6 Clones (6 * 5 * 2 GB) = 60 GB (was 6 * 30 TB = 180 TB)
                          ------------------
Total                   = ~90 TB (was 270 TB)
Time                    = minutes (was days to weeks)

Assuming the clone databases will have minimal writes, we allocate about 2 GB of write space per clone. For 5 production databases and 6 clones, this totals just 60 GB in required storage space, a whopping 99.97% savings in storage for the clones. Plus, these clones are created in a matter of minutes, not the usual days or weeks. The product has out-of-the-box charts that show the storage savings across all storage devices and cloned databases. See the screenshot below.

Snap Clone Savings
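If you want to sanity check the math, here is the same back-of-the-envelope calculation in a few lines of Python; the 2 GB of write space per clone is the assumption stated above.

TB = 1.0
GB = TB / 1024

base = 3 * 30 * TB             # prod + standby + masked copies
full_clones = 6 * 30 * TB      # traditional: every clone is a full copy
snap_clones = 6 * 5 * 2 * GB   # Snap Clone: ~2 GB write space per cloned DB

print(base + full_clones)                               # 270.0 TB, traditional
print(round(base + snap_clones, 2))                     # ~90.06 TB, Snap Clone
print(round(100 * (1 - snap_clones / full_clones), 2))  # 99.97 (% saved on clone storage)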

Where can you use Snap Clone databases?

As I said earlier, Snap Clone is most effective when cloning large (~TB) databases. Common scenarios where our customers put Snap Clone to best use are:

  • Application upgrade testing. For example, E-Business Suite upgrade to R12.
  • Functional testing. For example, testing using production datasets.
  • Agile development. For example, run parallel development sprints by giving each sprint its own cloned database.
  • Data Analysis and Reporting. For example, stock market analysis at the close of market every day.

It's obvious that Snap Clone has a strong affinity to applications, since it's application data that you want to clone and use. Hence it is important to add that Snap Clone, when combined with EM12c Middleware-as-a-Service (MWaaS), can provide a complete end-to-end self service application deployment experience. If you have existing portals or need to integrate Snap Clone with existing processes, then use our RESTful APIs for easy integration with third party systems.
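To give a flavor of what such an integration might look like, here is a minimal Python sketch that queries the self service API over HTTP. Treat the endpoint path, header, and credentials as illustrative assumptions; the exact resource model and media types are documented in the Cloud Administration Guide.

# A hedged sketch of calling the EM cloud REST API from a custom portal;
# the URL, header, and credentials are assumptions, not a definitive contract.
import requests

resp = requests.get(
    'https://em.example.com:7802/em/cloud',        # assumed EM base URL
    auth=('ssa_user', 'password'),                 # a self service user account
    headers={'X-specification-Version': '10001'},  # version header; verify against the guide
    verify=False)                                  # lab setup with self-signed certs only

resp.raise_for_status()
print(resp.json())  # top-level cloud resource describing the available service types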

In summary, Snap Clone is a new and exciting way of dealing with data cloning challenges. It shields DBAs from the nuances of different storage systems, while allowing end users to request and use clones in a rapid and self service fashion, all while saving storage costs. So try this feature out today; your development and test teams will thank you forever.

In subsequent blog posts, we will look at some popular deployment models used with Snap Clone.

-- Adeesh Fulay (@adeeshf)

Additional References

Cloud Management Page on OTN

Cloud Administration Guide (Documentation)

Enterprise Manager 12c Database-as-a-Service Snap Clone Overview (Presentation)

Tuesday Dec 31, 2013

Database Lifecycle Management for Cloud Service Providers

Adopting the Cloud Computing paradigm enables service providers to maximize revenues while driving capital costs down through greater efficiencies of working capital and OPEX changes. In the case of an enterprise private cloud, corporate IT, which plays the role of the provider, may not be interested in revenues, but still cares about providing differentiated service at lower cost. Efficiency and cost are what eventually make the service profitable and sustainable. This basic tenet has to be satisfied irrespective of the type of service: infrastructure (IaaS), platform (PaaS), or software application (SaaS). In this blog, we specifically focus on the database layer and how its lifecycle gets managed by service providers.

Any service provider needs to ensure that:

  • Hardware and software population is under control. As new consumers come in and some consumers retire, there is a constant flux of resources in the data center. This flux has to be managed and controlled.
  • The platform for providing the service is standardized, so that operations can be conducted predictably and at scale across a pool of resources.
  • Mundane and repeatable tasks like backup, patching, etc. are automated.
  • Customer attrition does not happen owing to heightened compliance risk.

While the Database Lifecycle Management features of Enterprise Manager have been widely adopted, I feel that the applicability of these features to service providers is not yet well understood, and hence not fully appreciated. In this blog, let me try to address how the lifecycle management features can be effective in addressing each of the above requirements.

1. Controlling hardware and software population:

Enterprise Manager 12c provides a near real-time view of the assets in a data center. It comes with out-of-box inventory reports that show the current population and the growth trend within the data center. The inventory can be further sliced and diced based on cost center, owner, etc. In a cloud, whether private or public, the target properties of each asset can be appropriately populated, so that the provider can easily figure out the distribution of assets; for example, how many databases are owned by the Marketing LOB can be easily answered. The flux within the data center is usually higher when virtualization techniques such as server virtualization and the Oracle 12c multitenant option are used. These technologies make the provisioning process extremely nimble, potentially leading to a higher number of virtual machines (VMs) or pluggable databases (PDBs) within the data center and hence accentuating the need for such ongoing reporting. The inventory reports can also be created using BI Publisher and delivered to non-EM users, such as a CIO.


Now, not all reports can always be readily available. There can be situations where a data center manager seeks adhoc information, such as how many databases owned by a particular customer are running on Exadata. This involves an adhoc query based upon an association (database running on Exadata) and target properties (owner being the customer). Enterprise Manager 12c provides a sophisticated Configuration Search feature that lets administrators define such adhoc queries and save them for reuse.

2. Standardization of platform:

The massive standardization of platform components is not merely a nice-to-have for a cloud service provider; it is a must-have. A provider may choose to offer various levels of service, tagged with labels such as gold, silver, and bronze. However, for each such level, the platform components need to be standardized, not only for ease of manageability but also for ensuring consistency of QoS across all the tenants. So how can the platform be standardized? We can highlight two major Enterprise Manager 12c features here:

The ability to roll out gold images that can be version controlled within Enterprise Manager's Software Library. The inputs of the provisioning process can be "locked down" by the designer of the provisioning process, thereby ensuring that each deployment is a replica of the others.

The ability to compare the configuration of deployments (often referred to as the "Points of Delivery" of the services). This is a very powerful feature that supports 1-n comparisons across multiple tiers of the stack. For example, one can compare an entire database machine, from storage cells and compute nodes to databases, with one or more other database machines.

3. Automation of repeatable tasks:

A large portion of OPEX for a service provider is expended while executing mundane and repeatable tasks like backup, log file cleanup, or patching. Enterprise Manager 12c comes with an automation framework comprising Jobs and Deployment Procedures that lets administrators define these repetitive actions and schedule them as needed. EMCC's task automation framework is scalable and carries functions such as the ability to schedule, resume, and retry, which are of paramount importance in conducting mass operations in an enterprise scale cloud. The task automation verbs are also exposed through the EMCLI interface, and Oracle Cloud administrators make extensive use of EMCLI for large scale operations on thousands of tenant services.
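To give a taste of what mass operations look like, here is a small sketch in the Jython-based EMCLI (covered in detail in the ‘EMCLI with Scripting Option’ post further down this page) that lists every database target, the kind of loop you would wrap around any repetitive action:

>>dbs = list(resource="Targets", search=["TARGET_TYPE ='oracle_database'"])
>>for db in dbs.out()['data']:
...    # drive the repetitive action (job submission, etc.) per target here
...    print db['TARGET_NAME']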

One of the most popular features of Enterprise Manager 12c is the out-of-box procedures for patch automation. The patching procedures can patch the Linux operating system, clusterware, and the database. To minimize the downtime involved in the patching process, Enterprise Manager 12c also supports out-of-place patching, which prepares the patched software ahead of time and migrates the instances one by one as needed. This technique is widely adopted by service providers to make sure the tenants' downtime related SLAs are respected. The coordination of such downtime can be instrumented by Enterprise Manager 12c's blackout functionality.

4. Managing Compliance risks:

In a service driven model, the provider is liable in case of security breaches. The consumer, and in turn the customer of the consumer's apps, needs to be assured that their data is not breached owing to platform level vulnerabilities. Security breaches often happen owing to faulty configuration such as default passwords, relaxed file permissions, or an open network port. The hardening of the platform therefore has to be done at all levels: OS, network, database, etc. To manage compliance, administrators can create baselines referred to as Compliance Standards. Any deviation from the baselines triggers compliance violation notifications, alerting administrators to resolve the issue before it creates risk in the environment.

We can therefore see how four major asks from a service provider can be satisfied with the Lifecycle Management features of Enterprise Manager 12c. As substantiated through several third party studies and customer testimonials, these result in higher efficiency with lower OPEX.


Tuesday Nov 12, 2013

Automate RAC Cluster Upgrades using EM12c

One of the most arduous processes in DB maintenance is upgrading databases across major versions, especially for complex RAC Clusters.
With the release of the Database Plug-in (12.1.0.5.0), EM12c Release 3 (12.1.0.3.0) now supports automated upgrading of RAC Clusters in addition to standalone databases.

This automation includes:

  • Upgrade of the complete cluster across the nodes. (Example: 11.1.0.7 CRS, ASM, RAC DB -> 11.2.0.4 or 12.1.0.1 GI, RAC DB)
  • Best practices in tune with your operations, where you can automate the upgrade in steps:
    Step 1: Upgrade the Clusterware to Grid Infrastructure (allowing you to wait, test, and then move to the DBs).
    Step 2: Upgrade the RAC DBs either separately or in a group (mass upgrade of RAC DBs in the cluster).
  • Standard pre-requisite checks like the Cluster Verification Utility (CVU) and RAC checks
  • Division of the upgrade process into non-downtime activities (like laying down the new Oracle Homes (OH) and running checks) and downtime activities (like upgrading Clusterware to GI and upgrading RAC), thereby lowering the downtime required.
  • Ability to configure backup and restore options as a part of this upgrade process. You can choose to:
    a. Take a backup via this process (either Guaranteed Restore Point (GRP) or RMAN)
    b. Set the procedure to pause just before the upgrade step to allow you to take a custom backup
    c. Ignore backup completely, if there are external mechanisms already in place.

    Mass Upgrade of RAC using EM12c


High Level Steps:

  1. Select the procedure "Upgrade Database" from the Database Provisioning home page.
  2. Choose the target type for upgrade and the destination version.
  3. Pick the cluster; the procedure picks up the complete topology, since the clusterware/GI isn't upgraded yet.
  4. Select the Gold Image of the destination version for deploying both the GI and RAC OHs.
  5. Specify the new OH patches and credentials, choose the restore and backup options, and if required provide additional pre and post scripts.
  6. Set the break points in the procedure execution to isolate downtime activities.
  7. Submit and track the procedure's execution status (a scripted alternative is sketched below).
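For those who prefer the command line, the same deployment procedure can in principle be located and submitted through EMCLI. The sketch below uses the get_procedures and submit_procedure verbs; the procedure type string, GUID, and input file contents are placeholders, so check help('get_procedures') and help('submit_procedure') in your release for the exact parameters.

>># Hedged sketch; the type string and file path are placeholders.
>>get_procedures(type="DBUpgrade")   # locate the upgrade procedure and note its GUID
>>submit_procedure(procedure="<procedure_guid>", input_file="data:/tmp/upgrade_input.properties", instance_name="rac_upgrade_run1")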

The animation below captures the steps in the wizard. For the step-by-step process and the support matrix, check this documentation link.

Explore the functionality!!

In the next blog, we will talk about automating rolling upgrades of databases in a physical standby Data Guard environment using Transient Logical Standby.

Wednesday Jul 24, 2013

Understanding Agent Resynchronization

Agent Resynchronization (resync) is an important topic but often misunderstood or misused. In this Q&A styled blog, I discuss how and when it is appropriate to use agent resynchronization.

What is Agent Resynchronization?

The Management Agent can be reconfigured using target information present in the Management Repository. Resynchronization pushes all targets and related information from the Management Repository to the Management Agent and then unblocks the Agent.

Why do agents need to be resynchronized?

There are two primary reasons why you may need to use agent resynchronization:

1. Agent is blocked

An agent is blocked whenever it is out-of-sync with the repository. This typically happens due to a corrupt targets.xml, missing files and directories, or bugs in the code (they are rare but a few do exist :) ) that can leave the plug-in inventories in a strange state. In this condition, the OMS rejects all heartbeat or upload requests from the blocked agent. This means the blocked agent will not be able to upload any alerts or metric data to the OMS, but it does continue to collect monitoring data. This is useful, as once the agent is resynchronized, no monitoring data is lost.

2. Agent is lost and has to be reinstalled

This could be considered to be a special case of agent blocked condition, but it is worth discussing separately. If an agent host or file system is ever lost, the recommended way to reinstall it is by cloning from a reference install. This not only recovers the agent, but avoids having to track and reapply customizations and patches.

Note: It is important to retain the same port when reinstalling the agent.

Agent resync, when run on a reinstalled agent, reconfigures it using target information present in the repository. The OMS detects that the agent has been re-installed and blocks it temporarily to prevent the auto-discovered targets in the re-installed agent from overwriting previous customizations.

Note: NEVER, NEVER, combine agent recovery with upgrade! If you lose your agent, recover it first using the original version, and then upgrade it to the new release.

Which interfaces are available for this operation?

There are two interfaces that will allow you to perform agent resync.

1. The Enterprise Manager Console

  a. Navigate to Setup->Manage Cloud Control->Agents to view the list of all agents

  b. Select the desired agent and visit its home page

  c. Finally, select the 'Resynchronization...' option from the agent menu

Agent Resynchronization Menu Item

2. EMCLI

The agent can also be resynchronized via EMCLI. The command is as follows:

>> emcli resyncAgent -agent="Agent Host:Port"

How long does it take to resynchronize an agent?

This is a popular question, but unfortunately there is no straight answer. The time for resynchronization depends on the amount of data stored in the repository about the agent. When this action is invoked, the OMS does not consult the agent - it just asks the agent to delete everything first, and then pushes the known state to it. The majority of the time is spent in pushing the plug-in content, so the more plug-ins deployed to the agent, the longer it takes. Metric Extensions and Configuration Extensions deployed to the agent also contribute to the time.

Additional Resources:

Upgrading  Oracle Management Agents

Back Up and Recover Enterprise Manager



Thursday Jul 11, 2013

Oracle Enterprise Manager 12c Release 3: What’s New in EMCLI

If you have been using the classic Oracle Enterprise Manager Command Line Interface (EMCLI), you are in for a treat. Oracle Enterprise Manager 12c R3 comes with a new EMCLI kit called ‘EMCLI with Scripting Option’. Not my favorite name, as I would have preferred to call this EMSHELL since it truly provides a shell similar to bash or cshell. Unlike the classic EMCLI, this new kit provides a Jython-based scripting environment along with a large collection of verbs to use. This scripting environment enables users to use established programming language constructs like loops (for, while), conditional statements (if-else), etc. in both interactive and scripting mode.

Benefits of ‘EMCLI with Scripting Option’

Some of the key benefits of the new EMCLI are:

  • Jython based scripting environment
  • Interactive and scripting mode
  • Standardized output format using JSON
  • Can connect to any EM environment (no need to run EMCLI setup …)
  • Stateless communication with OMS (no user data is stored with the client)
  • Generic list function for EM resources
  • Ability to run user-defined SQL queries to access published repository views

Before we go any further, there are two topics that warrant some discussion – Jython and JSON.

Jython

Jython is the Java implementation of the Python programming language. I have been working with Python (or CPython) and Jython for the last 10 years, and to me it is the best scripting language ever. It is fun and easy to learn, the syntax is simple, it is self formatted, and it is dynamically typed. This comic from XKCD summarizes it best:

Python

There are numerous tutorials for Python/Jython on the web, so feel free to pick any one you like; just remember that the Jython version supported by the new kit is v2.5.1.

JSON

JSON stands for JavaScript Object Notation. It is a data interchange format, much like XML, that is easier to read and write for both humans and machines, but unlike XML it contains very little metadata (element and attribute names). The JSON format is quite simple; it basically represents data as a collection of name/value pairs. These pairs can be contained within arrays, lists, or maps. Here is a sample:

{"menu": {
  "id": "file",
  "value": "File",
  "popup": {
    "menuitem": [
      {"value": "New", "onclick": "CreateNewDoc()"},
      {"value": "Open", "onclick": "OpenDoc()"},
      {"value": "Close", "onclick": "CloseDoc()"}
    ]
  }
}}

JSON is quite popular. You will often find it used with REST based web services APIs or even with some modern databases like MongoDB. Most programming languages provide libraries to work with JSON.
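For instance, in Python (2.6 or later), round-tripping a JSON document takes one call in each direction; as we will see below, the EMCLI kit's Jython performs this conversion for you automatically on verb output.

import json

text = '{"menu": {"id": "file", "value": "File"}}'  # trimmed version of the sample above
doc = json.loads(text)            # parse JSON text into native dicts/lists
print doc['menu']['value']        # -> File
print json.dumps(doc, indent=2)   # serialize back out, pretty-printed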

The EMCLI kit uses JSON as its output format as well. Many of the verbs return output in JSON format for ease of programmatic use. I say many, since there are still some verbs that don't, but that is only a matter of time.

Now let’s get back to EMCLI.

Steps to set up the kit for ‘EMCLI with Scripting Option’

1. To download the new EMCLI kit, go to Setup->Command Line Interface. Here you will notice the new section for ‘EMCLI with Scripting Option’. Click on the link to download the kit to your desktop or desired server.

Download

You can also download the kit directly from the following url:

http(s)://<host>:<port>/em/public_lib_download/emcli/kit/emcliadvancedkit.jar
 

2. Copy the kit (emcliadvancedkit.jar) to a directory where you wish to install EMCLI.

kit

3. To install, run the following command. Note that we need the Java version to be 1.6.0_43 or greater.

java -jar emcliadvancedkit.jar client -install_dir=<emcli_client_dir>

Verify Java version 

4. The last step to complete the setup is to run ‘sync’. Before using EMCLI, you have to connect to the OMS to install all verb-related command line help. In classic EMCLI, this happens automatically when you run the ‘setup’ command. But in the new EMCLI, since we do not run setup, we run the ‘sync’ command instead.

The ‘sync’ verb now accepts some additional arguments. Run the following command:
emcli sync -url=http(s)://<host>:<port>/em -username=<user> -trustall

It will prompt for the user password and then take a few minutes to download and install all the help content.

emcli sync

5. Now confirm the setup with a simple test. We do this using the interactive mode. Just run ‘emcli’, and once you see the prompt, run ‘help()’. This will print a list of all verbs along with their descriptions.

emcli interactive mode

With the setup complete, let’s now have some fun.

Using the ‘EMCLI with Scripting Option’

Connect to the interactive mode by running ‘emcli’ from the command prompt. Now try the following commands:

1. Basic Jython: Since EMCLI is built using the Jython interpreter, you can run Jython commands at the EMCLI prompt. For example, you can try the following:

>>1+2

>>print "Hello Jython"

>>mylist = [1,2,3]

>>print mylist

Jython test

2. EMCLI Status: Next, print the status of the EMCLI session using the ‘status()’ command.

emcli status

You will notice that the EM url and user are not set. To do this we have to set the client_properties. Run ‘help('client_properties')’ for more details.

client properties

The help text instructs us to set the client properties to connect to a specific EM environment. The 4 properties of interest to us are the following:

Name                  Details
EMCLI_OMS_URL         The EM URL
EMCLI_USERNAME        The EM user to connect as. We will use the login() function to set this.
EMCLI_TRUSTALL        I like to set this to true, but the default is false.
EMCLI_OUTPUT_TYPE     I like to set this to JSON even for interactive mode

To set these properties run the following:

>>set_client_property('EMCLI_OMS_URL','http(s)://<host>:<port>/em')

>>set_client_property('EMCLI_TRUSTALL','TRUE')

>>set_client_property('EMCLI_OUTPUT_TYPE', 'JSON')

>>login(username="<em_user>",password="<password>")

You should see a message on successful login. Now we are connected to EM.

login

3. Understanding help and verb invocations: Most of the help text presented in EMCLI is tailored towards the classic interface. Since Jython is a programming language, verb invocations are done in the function form. There is a simple mechanism for converting the classic invocation format for use in both interactive and scripting mode. Let’s use the login() verb as an example.

The EMCLI help for login is as follows:

>>help('login')

emcli login

-username=<EM Console Username>

[-password=<EM Console Password>]

[-force]

This means, when using classic EMCLI, you would invoke it as follows:

emcli login -username="foo" -password="bar" -force

Instead, in the interactive or script mode, the invocation would look like:

login(username="<em_user>",password="<password>",force=True)

Essentially, all verbs are now functions, and all arguments to the verb are now parameters passed to the function. Since the -force argument does not take any value, it is treated as a Boolean in Jython and takes the values of True or False.

Note: The -force parameter in the login() function is not applicable to the interactive or script mode, but is being used in this example to explain the concept of passing Boolean values. Again, you should never use the -force parameter in the interactive or script mode.

Another such conversion that you may come across is for lists of values. For example, in classic EMCLI, some verbs ask for the same attribute to be repeated with varying values to represent a list.

emcli grant_privs -name='jan.doe' 
         -privilege="USE_ANY_BEACON"
         -privilege="FULL_TARGET;TARGET_NAME=host1.acme.com:TARGET_TYPE=host"

In interactive or script mode, you can use native Jython lists instead and pass them as parameters. In Jython, lists are represented within square brackets ([]).

>>priv_list = ['USE_ANY_BEACON','FULL_TARGET;TARGET_NAME=host1.acme.com:TARGET_TYPE=host']
>>grant_privs(name='jan.doe',privilege=priv_list)

4. Sample Use Case: Let’s take a very simple use case to demonstrate the interaction with EMCLI in the interactive mode. Our sample use case is to “list all targets of type oracle_database whose name starts with the characters ‘db’”.

For this use case, we will make use of the new generic ‘list’ verb. Traditionally, each feature in EM provided its own verbs for list, get, show, and describe. Rather than working with multiple such variants, the new generic ‘list’ verb takes a page from the REST web service specification and provides a generic action that can work against different EM resources.

To learn more about this verb, we run:

>>help('list')

help

The help text shows us the format of this verb. Essentially, there are 3 parameters that we care about:
  • resource = the EM resource which is to be queried
  • columns = specify the different resource attributes to display
  • search = filters to narrow down the result

First, we need to know the list of resources that are supported by this verb. For this we run:

>>list('help')

list help

From the output, it is obvious that for our sample use case we want to query the Targets resource.

Second, we need to know which columns are supported by the Targets resource. For this, we run:

>>list('help',resource="Targets")

help resources

From the output, we can determine that we need the columns related to target name and type. With this, we have all the information we need to construct the final function call for our sample use case.

For ease of explanation, I will break down the process of determining the final function call into small incremental steps. Once you gain proficiency, you will be able to define this function in a single pass.

       1. List all targets in the EM environment. For this we run,

>>list(resource="Targets")

This command will spew a lot of text on your screen, as there are likely to be numerous targets in your EM environment. So instead of listing all of them on the screen, let's just get a count. For this, we need to understand the output format of this verb.

Any function that you run in the interactive or script mode returns an object of class Response (<class 'emcli.response.Response'>). The Response class has 4 key methods:

Function        Description
out()           Provides the verb execution output, which can be text or JSON. The isJson() method on the Response object can be used to determine whether the output is JSON.
error()         Provides the error text (if any) of the verb execution if there are any errors or exceptions during verb execution.
exit_code()     Provides the exit code of the verb execution. The exit code is zero for a successful execution and non-zero otherwise.
isJson()        Provides details about the type of output. It returns True if response.out() can be parsed into a JSON object.

So let’s look at a code snippet.

snippet

For the first function call to list all targets in EM, we store the results in a variable called ‘all_tgts’. This variable contains the response object, and ‘all_tgts.out()’ gives us the actual output. The output returned is in JSON format, which automatically gets converted into a Jython dictionary (a collection of name-value pairs represented by curly brackets). The output dictionary has a key named ‘data’ which contains all search results in the form of a Jython list as its value. Finally, len() is a native Jython function which returns the number of elements in a Jython list. As seen in the output, we found 878 targets in the EM environment, which is clearly not what we desire.
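For readers who cannot see the screenshot, the snippet boils down to something like the following (the target count will of course differ in your environment):

>>all_tgts = list(resource="Targets")
>>all_tgts.isJson()
True
>>len(all_tgts.out()['data'])
878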

 2. Now we add search parameters to filter our results. We add two search filters: first, the target type should be equal to oracle_database, and second, the target name should be like db%. You can add multiple search filters to the function call, but all these filters should be encapsulated in a Jython list. The search filter supports various operators: =, !=, >, <, >=, <=, like, null, and not null. Similar to a SQL query, you can also control which columns are to be displayed in the output.

So let’s run our final function.

>>search_filters=["TARGET_TYPE ='oracle_database'","TARGET_NAME like 'db%'"]

>>list(resource="Targets", columns="TARGET_NAME,TARGET_TYPE", search=search_filters)

The formatted output looks like this. As mentioned before, it is in the form of a Jython dictionary which can be easily accessed programmatically. The value of the ‘data’ key is a Jython list that contains all search results, while the other keys provide metadata related to the result.

{
  'exceedsMaxRows': False,
  'columnHeaders': ['TARGET_NAME', 'TARGET_TYPE'],
  'columnLength': [256, 64],
  'columnNames': ['TARGET_NAME', 'TARGET_TYPE'],
  'data': [
    {'TARGET_NAME': 'db9328.acme.com', 'TARGET_TYPE': 'oracle_database'},
    {'TARGET_NAME': 'db3092.acme.com', 'TARGET_TYPE': 'oracle_database'},
  ],
  'filler': '\n\n\n'
}

You must have noticed that I hardly talked about the scripting mode. This is on purpose, as I believe that the interactive mode is the best interface to learn the new EMCLI. Once you master the interactive mode, converting your code snippets into a script is fairly easy; a small sketch follows. In future blog posts, I will cover scripting mode and numerous other use cases that seem like a perfect fit for the new EMCLI.
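As a preview, here is what that conversion might look like as a standalone script, assuming the kit accepts a script path on the command line (check the EMCLI documentation for the exact script-mode invocation in your release); the OMS URL and credentials are placeholders.

# find_dbs.py - the interactive session above, rewritten as a script (a sketch).
set_client_property('EMCLI_OMS_URL', 'https://myoms.example.com:7802/em')  # placeholder URL
set_client_property('EMCLI_TRUSTALL', 'TRUE')
set_client_property('EMCLI_OUTPUT_TYPE', 'JSON')
login(username='sysman', password='<password>')  # placeholder credentials

filters = ["TARGET_TYPE ='oracle_database'", "TARGET_NAME like 'db%'"]
resp = list(resource='Targets', columns='TARGET_NAME,TARGET_TYPE', search=filters)

for row in resp.out()['data']:
    print row['TARGET_NAME'] + ' (' + row['TARGET_TYPE'] + ')'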

In summary, ‘EMCLI with Scripting Option’ is a new kit that is built on top of a Jython interpreter. It is much superior to the classic EMCLI, as it provides a complete programming environment with the ability to use native Jython functions and primitives. The output is presented in the JSON format which is both human and machine readable, and avoids the need for parsing text output. The client is completely stateless, which means no user data is stored with the client. This means numerous sessions can be launched from a single client, each connecting to a different EM environment, and as a different user.

I encourage you to play around with this new EMCLI kit, and post the different use cases that you found interesting and would benefit the community. You can reach me on twitter @AdeeshF.

Additional Reading:

The EMCLI Documentation Guide


Thursday Apr 11, 2013

Qualcomm Deploys Application Changes Faster with Oracle Enterprise Manager

Listen in as Qualcomm talks about saving time and energy by making application changes faster through Oracle Enterprise Manager.


Thursday Mar 14, 2013

Database as a Service: Glad that you asked these!

Thanks for visiting my earlier blog post on the new Database as a Service (DBaaS) features released in Enterprise Manager 12cR2 Plug-in Update 1.

Our first public webcast on DBaaS since the release was held this morning (the recording will soon be available on O.com). The webcast was pretty well attended, with peak attendance going well over our expectation. I wish we had more time to handle the technical Q&A, but since we didn't, let me use the blogosphere to answer some of the questions that were asked. I am repeating some of the questions that we answered during the webcast, because they warrant details beyond what the duration permitted.

Kevin from the audience asked, "What's the difference between regular provisioning and DBaaS?" Sometimes the apparently obvious ones are the most difficult to answer. The recently released whitepaper covers regular/traditional provisioning versus DBaaS in detail. Long story cut short: in a traditional provisioning model, IT (usually a DBA) uses scripts and tools to provision databases on behalf of end users. In DBaaS, IT's role changes, and the DBA simply creates a service delivery platform for end users to provision databases on demand as and when they need them. And that too, with minimal inputs! Here's how the process unfolds:

  • The DBA pools together a bunch of server resources that can host databases, or a bunch of databases that can host schemas, and creates a Self-Service zone.
  • The DBA creates a gold image and provisioning procedure and expresses them as a service template.
  • As a result, the end users do not have to deal with the intricacies of the provisioning process. They input a couple of very simple things, like the service template and the zone, and everything else happens under the hood. The provisioning process, the physicality of the database, etc. are completely abstracted out.
  • And finally, because DBaaS deals with shared resource utilization and self-service automation, it is usually complemented by quota, retirement, and chargeback.

The following picture can make it clear.


In terms of licensing, for traditional administrator driven database provisioning, you need the Database Lifecycle Management Pack. If you want to enable DBaaS on top of it, simply add the Cloud Management Pack for Database.

I will combine the next two questions. Alfred asked, "Is RAC a requirement?" (the short answer to which is "No"), while Jud asked, "Is schema-level provisioning supported in an environment where the target DBs are running in VMs?" First of all, our DBaaS solution supports multiple models, as shown below.

In the dedicated database model, the database can run on a pool of servers or a pool of clusters, so both single instance and RAC are supported. Similarly, in the dedicated schema (Schema as a Service) model, the schema can run on single instance or RAC, which can in turn be hosted on physical servers or VMs. Enterprise Manager treats both physical servers and VMs as hosts, and as long as the hosts have the agent installed, they can participate in DBaaS. The bottom line is that as we move up from IaaS and offer these higher order services, the underlying infrastructure becomes irrelevant. This should also satisfy Steve, who queried, "As the technology matures is there an attempt by Oracle to provide ODA vs EXADATA as the foundation of the dbaas to lower the cost?" The answer is YES. But why wait? DBaaS is supported on Exa and ODA platforms TODAY. In fact, HDFC Bank in India is running DBaaS on Exadata. You can read about them in the latest Oracle Magazine.

Another interesting question came from Yuri. He asked, "Is there an option to disable startup/shutdown for the self-service users?" It can be answered in multiple ways. First of all, in the Schema as a Service or dedicated schema model, the end user cannot control the database instance state, because the instance houses database services (schemas) owned by others too. So this may be a good model for enterprises trying to limit what end users can do at the database instance level. However, in the dedicated database model, the Enterprise Manager out-of-box self-service console allows the end user to perform operations like startup and shutdown on the database instance. In general, if you want to create your own tailored self-service console with a limited set of operations exposed in the self-service interface, using the APIs may be the way to go. Enterprise Manager 12c also supports RESTful APIs for self-service operations, and hence a limited set of capabilities may be exposed. Check this technical presentation for the supported APIs.

Gordon's question precisely brings out the value of the Enterprise Manager 12c offering. He asked, "How do the services in the cloud get added to Cloud Control monitoring and alerting?" Ever since Amazon became the poster child of public IaaS, enterprises have tried emulating its model within their data centers. What most people ignore or forget is that resources in a cloud have a life beyond the provisioning process. Initial provisioning is just the beginning of that lifecycle. In Amazon's case, the management and monitoring of resources is the headache of Amazon's IT staff, and consumers are oblivious to the time and effort it takes to manage the resources. In a private cloud scenario, one does not have that luxury. Once the database gets provisioned, it needs to be monitored for performance, compliance, and configuration drift by the company's own IT staff. In Enterprise Manager 12c, the agent is deployed on the hosts that constitute the pool, making the databases automatically managed without any additional work. It comprehensively manages the entire lifecycle, and both administrators and self-service users have tailored views of the databases.

Well, this also gives me an opportunity to address a question by a participant who alluded to a 3rd party tool exclusively for database provisioning purposes. First of all, as I mentioned during the webcast, Enterprise Manager 12c is the only tool that handles all the use cases (creation of full databases, schemas, and cloning, both full clone and Snap Clone) from a single management interface. The point tools out there handle only a fraction of these use cases; some specialize in cloning while others specialize in seed database provisioning. Second, as stated in the previous answer, provisioning is only the initial phase of the lifecycle, and a provisioning tool cannot be synonymous with a cloud management tool. Thanks Gordon for helping me make that point!

Sam and Cesar share the honors for the most difficult question that came right at the beginning. "Has it started?  Been on hold for a while." was their reaction at two minutes past ten. This is possibly the most embarrassing one for me because I was caught in traffic. With due apologies for that, I wish my car operated like Enterprise Manager's  Database as a Service!


Tuesday Mar 12, 2013

Monitoring virtualization targets in Oracle Enterprise Manager 12c

Contributed by Sampanna Salunke, Principal Member of Technical Staff, Enterprise Manager

For monitoring any target instance in Oracle Enterprise Manager 12c, you would typically go to the target home page and click on the target menu to navigate to:

  • Monitoring->All Metrics to view all the collected metrics
  • Monitoring->Metric and Collection Settings to set thresholds and/or modify collection frequencies of metrics

The thresholds and collection frequencies modified this way affect only the target instance that you are making changes to.

However, some virtualization targets need to be monitored and managed differently, due to changes made to the way data is collected and thresholds/collection frequencies are applied. Such target types include:

  • Oracle VM Server
  • Oracle VM Guest

As an optimization to minimize the number of connections made to Oracle VM Manager to collect data for virtualization targets, the performance metrics for Oracle VM Server and Oracle VM Guest targets are “bulk-collected” at the Oracle VM Server Pool level. This means that thresholds and collection frequencies of Oracle VM Server and Oracle VM Guest metrics need to be set on the Oracle VM Server Pool that they belong to. For example, if a user wants to set thresholds on the “Oracle VM Server Load:CPU Utilization” metric for an Oracle VM Server target, the sequence of steps to perform is:

1. Navigate to the homepage of the Oracle VM Server Pool target that the Oracle VM Server target belongs to

2. Click on the target menu->Monitoring->Metric and Collection Settings


3. Expand the view option to “All Metrics” if required, and find the “Oracle VM Server Load” metric and change the thresholds or collection frequency of "CPU Utilization" as required.


Note that any changes made at the Oracle VM Server Pool for a “bulk collected” metric affect all the targets in the server pool for which the metric is applicable. In this example, since the user modified the “Oracle VM Server Load: CPU Utilization” threshold, the change is applied to all the Oracle VM Server targets in the server pool sg-pool1.

To summarize: the difference between “traditional” monitoring and “bulk-collected” monitoring is that thresholds and collection frequencies of bulk-collected metrics are modified at the parent target, and the changes made are applied to all the child targets for which the metrics are applicable. However, data and alerts uploaded continue to appear as normal against the child target.


Wednesday Mar 06, 2013

Snap Clone: Instant, self-serviced database-on-demand

Snap Clone: Introduction
Oracle just released Enterprise Manager 12c Release 2 Plug-in Update 1 in February 2013. This release has several new cloud management features, such as Schema as a Service and Snap Clone. While the relevance of Schema as a Service is in the context of new database services, Snap Clone is useful in performing functional testing on pre-existing data.

One big consumer group of cloud is QA Engineers or Testers. They perform User Acceptance Tests (UAT) for various applications. To perform a UAT, they need to create copies of the production database. For intense testing, such as in pre-upgrade scenarios, they need a full updateable copy of the production data. In other situations, such as functional testing, they need to perform minimal updates to the data, but at the same time need multiple functional copies. Enterprise Manager 12c supports both scenarios. In the former case, it leverages RMAN backups to clone the data. In the latter case, it leverages “Copy on Write” technology at the storage layer to perform Enterprise Manager 12c Snap Clone (or just Snap Clone). Currently, the NAS technologies NetApp and ZFS Storage Appliance are supported for Snap Clone. By using this technology, the entire data set does not need to be cloned; the new database can physically point to the source blocks within the same filer and only needs to allocate new blocks if there are updates to the cloned copy.

Underlying “Copy on Write” technology
To cover the underlying technology, let us look at the NetApp and Sun ZFS storage technologies. First of all, NetApp supports pooling of storage resources and creating volumes on top of those. These volumes are called FlexVols. NetApp FlexClone technology enables true data cloning: instant replication of FlexVols without requiring additional storage space at the time of creation. Each cloned volume is a transparent, virtual copy that can be used for a wide range of operations such as product/system development testing, bug fixing, upgrade checks, data set simulations, etc. FlexClone volumes have all the capabilities of a FlexVol volume, including growing, shrinking, and being the source of a snapshot copy or even another FlexClone volume.

Data ONTAP makes this happen with Copy on Write technology. When a volume is cloned, ONTAP does not allocate any new physical space but simply updates the metadata to point to the old blocks of the parent volume. NetApp filers use a Write Anywhere File Layout (WAFL) to manage disk storage. When a file is changed, the snapshot copy still points to the disk blocks where the file existed before it was modified, and only the changes (deltas) are written to new disk blocks. A block in WAFL currently can have a maximum of 255 pointers to it, which means that a single FlexVol volume can be cloned up to 255 times. All the metadata updates are just pointer changes, and the filer takes advantage of locality of reference, NVRAM, and RAID technology to keep everything fast and reliable. I found this documentation on the NetApp site especially useful for understanding the concept. The following picture provides a graphical illustration of how this works.



Oracle ZFS employs a similar copy-on-write methodology that creates clones pointing to the source blocks of data. When one needs to modify a block, data is never overwritten in place. Oracle Solaris ZFS instead creates new pointers to the new data and a new master block (uberblock) that points to the modified data tree. Only then does it move to using the new uberblock and tree. In addition to providing data integrity, having new and previous versions of the data on disk allows services such as snapshots to be implemented very efficiently.

The best way to think of a storage snapshot is as a point-in-time view of the data. It's a time machine, letting you look into the past. Because it's all just pointers, you can actually look at the snapshot as if it were the active filesystem. It's read-only, because you can't change the past, but you can look at it and read the data. NetApp and Sun ZFS snapshots just write the new information to a special bit of disk reserved for storing these changes, called the SnapReserve. Then, the pointers that tell the system where to find the data get updated to point to the new data in the SnapReserve.

Space efficiency: Since we are only recording the deltas, you get the disk savings of copy-on-write snapshots (typically a few hundred kilobytes for a 1 terabyte database).

Time efficiency: Because the snapshot is just pointers, to restore data (using SnapRestore) we simply update the pointers to point to the original data again. This is faster than copying all the data back. So taking a snapshot completes in seconds, even for really large volumes (like, terabytes), and so do restores. A typical terabyte database therefore takes only a couple of minutes to clone, back up, and restore.
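To see why pointer-based snapshots are so cheap, here is a deliberately tiny toy model in Python (not real WAFL or ZFS code): the "filesystem" is just a table of block pointers, so a snapshot copies the tiny pointer table, never the huge data blocks.

# A toy model of copy-on-write snapshots; purely illustrative.
blocks = {1: 'data-A', 2: 'data-B', 3: 'data-C'}   # physical blocks on disk
active = {'f1': [1, 2], 'f2': [3]}                 # file -> block pointers

# Taking a snapshot = copying the pointer table (fast, tiny), not the data.
snapshot = dict((f, list(ptrs)) for f, ptrs in active.items())

# A later write goes to a NEW block; only the active pointers move.
blocks[4] = 'data-B2'
active['f1'] = [1, 4]

print(snapshot['f1'])  # [1, 2] - the snapshot still sees the old block
print(active['f1'])    # [1, 4] - the active filesystem sees the new one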

So, what is the additional benefit of Enterprise Manager Snap Clone over storage level cloning?

Snap Clone is complementary to the copy-on-write technologies described above. It leverages those technologies; however, it provides additional value in:

  1. Automated registration and association with the Test Master database: Registering the storage with Enterprise Manager in the context of the Test Master database. For example, it queries the filer to find the storage volumes and then associates those with the volumes that the datafiles reside on. It provides granular control to the admins to make a database clonable, since there could be databases that DBAs do not want cloned off.
  2. Database as a Service using a self-service paradigm: Allows a self-service user (typically a functional tester) to provision a clone based on the Test Master. The self-service capability has administrator side features like setting up the pool of servers which will host the databases, creating a zone, creating service templates for provisioning, and setting access controls for the users both at the zone level and the service template level.
  3. Time travel: Functional testers often need to go back to an earlier incarnation of a database. Enterprise Manager allows self-service users to take multiple snapshots of the database as backups. The users can then easily restore from an earlier snapshot. Since a snapshot is only a thin copy, backup and restore are almost instantaneous, typically a couple of minutes; during a restore, a large part of that time is spent actually starting the database and discovering its state in Enterprise Manager, not in the restore itself.
  4. Manageability: Finally, Enterprise Manager provides complete manageability of these clones, including performance management, lifecycle management, etc. For example, when cloning at the storage volume level, sysadmin tools have little idea of the databases and applications that are consuming those volumes. From an inventory management, capacity planning, and compliance standpoint, it is important to track the storage association and lineage of the clones at the database level. Enterprise Manager provides that rich set of manageability features.


So how does this work in Enterprise Manager 12c?

In order to understand the Snap Clone feature of Enterprise Manager and its relevance to DBaaS, it is important to understand the sequence of steps that enable the feature and DBaaS.


Step 1: Setting up the DBaaS Pool
First of all, the sysadmin has to designate a few servers (which become Enterprise Manager hosts when the agent is deployed on them) to constitute the PaaS Infrastructure Zone. Each of these servers should have the connectivity to be able to mount the volumes participating in the Snap Clone process.

The DBA intrinsically knows the exact versions and flavors of databases being used within each LoB, along with the operating system version compatibility. As the next level of streamlining, he/she can add each unique type of database configuration to a single place called a Pool; for example, single instance 11.1.0.7, cluster database 11.2.0.2, etc.

A database pool contains a set of homogeneous resources that can be used to provision a database instance within a PaaS Infrastructure Zone. For Snap Clone in particular, the administrator needs to pre-provision the same version of Oracle Homes, either on standalone hosts or in a RAC cluster, which should be part of the PaaS Infrastructure Zone.


Step 2: Setting up the Test Master
In this step the administrator has to set up a Test Master as a clone of the production database. Sometimes, the administrator has to create another copy of production at the source itself for masking and subsetting. The solution would vary depending on the customer's specific need and infrastructure. One can use RMAN, Data Guard, GoldenGate, or even storage technologies such as NetApp SnapMirror; usually our customers have figured out one way or another to do it. If the customer wishes to use EM for this, they can use the Database Clone feature to clone the data (this leverages RMAN behind the scenes) or even use the data synchronization feature of Change Manager (part of the Database Lifecycle Management Pack) to keep production and Test Master consistent. There is no unique way of accomplishing this; it all depends on the specific use case. There can be cases where the customer may need to mask or subset the data at the source, for which they can use those EM features as well.



The test master has to be created on a ZFS Storage Appliance or NetApp filer. Currently, the versions supported are:
  • ZFS Storage Appliance models 7410 and 7420
  • Any NetApp storage model running ONTAP® version 7.2.1.1P1D18 or above. The NetApp interoperability matrix is available here

Here’s a sample of database files on a NetApp filer that could constitute the Test Master database:
  • /vol/oradata (datafiles and indexes): [8-16 luns]
  • /vol/oralog (redo logs only): [2-4 luns]
  • /vol/orarch (archived redo logs): [2-4 luns]
  • /vol/controlfiles (small vol for controlfiles): [2-4 luns]
  • /vol/oratemp (temp tablespace): [4-8 luns]

Step 3: Register the storage and designate the Test Master
Once the Test Master database has been created, one has to:
1. Discover the Test Master database as an EM target
2. Register the storage with Enterprise Manager. Enterprise Manager uses an agent installed on Linux x86-64 to communicate with the filer. For NetApp storage, the connection is over http or https. For Sun ZFS storage, the connection is over ssh.

Enterprise Manager associates a database with a filer by deriving the volumes from the data files and then associating those volumes with the ones seen by the filer. For a database to participate in Snap Clone, it should be wholly located on flexvols or shares with Copy on Write enabled. Enterprise Manager performs the necessary validations for that.

Step 4: Creating the service template using the Profile
Finally, the Test Master needs to be exposed as a source of cloning to functional clones to self-service users. This is done by creating a provisioning profile. Provisioning Profile, in general, is an Enterprise Manager concept that denotes a gold image-whether in the form of a “tarball” archive or an RMAN backup or a Test Master.  The concept of profile makes the process repeatable by several users, such as QA testing different parts of the application.

The profile is exposed to the service catalog via a service template, which also includes the provisioning procedure and the pre- and post-scripts for deploying the image.

Finally comes the user-side experience. Enterprise Manager supports a self-service model where users can provision databases without being gated by the DBA. The self-service user picks a service template (which links, via the provisioning profile, to the Test Master), specifies the zone to deploy into, and the database gets provisioned. This new database is actually a "thin clone" of the Test Master, and new blocks are allocated only when data is updated. The user can also take backups of the cloned database, which are essentially read-only snapshots of the database. If the user needs to restore, the latest incarnation of the database is simply pointed at a snapshot, making the restore instantaneous. This literally enables the self-service user to go back in time, in a "time travel" fashion. In addition to provisioning and backups, self-service users can also monitor their databases: check their status, look at session statistics, and so on.
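Under the covers, both the thin clone and the instantaneous restore rest on the appliance's snapshot and clone primitives. Enterprise Manager drives these through the filer's APIs, but conceptually the operations resemble the following ZFS commands (pool, dataset, and snapshot names are hypothetical):

$ zfs snapshot pool1/testmaster@gold                  # read-only, space-efficient point-in-time image
$ zfs clone pool1/testmaster@gold pool1/devclone1     # writable thin clone sharing all unmodified blocks
$ zfs rollback pool1/devclone1@backup1                # "time travel": revert the clone to an earlier snapshot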



Before concluding this blog entry, let me point to a bunch of collateral related to DBaaS that we recently published. Check out the new whitepaper, demos, and presentation. We will soon publish a technical whitepaper on performing E-Business Suite testing using Snap Clone. Till then...


Thursday Feb 28, 2013

Oracle Enterprise Manager Cloud Control 12c Release 2 Plug-in Update 1 (12.1.0.2) now available on OTN

Contributed by Martin Pena, Director, Product Management, Oracle Enterprise Manager

The Oracle Enterprise Manager Cloud Control 12c Release 2 (12.1.0.2) software binaries, updated with the new plug-ins and updated plug-in versions, are now available for download from Oracle Technology Network (OTN). Note that this is not a patch or patch set release; the new and updated plug-ins have simply been integrated with the binaries to make it easier for users to deploy the plug-ins as part of installing or upgrading Cloud Control. The management agent software binaries have not been changed.


+  New Management Plug-ins: The following new and updated plug-ins are now available as part of this release. In addition to providing new and enhanced functionality, the plug-ins incorporate numerous bug fixes.

Plug-In Name / Version
*Enterprise Manager for Oracle Database (DB) 12.1.0.3 (new)
*Enterprise Manager for Oracle Virtualization (VT) 12.1.0.4 (new)
*Enterprise Manager Storage Management Framework (SMF) 12.1.0.1 (new)
*Enterprise Manager for Oracle Cloud (SSA) 12.1.0.5 (new)

How to access Oracle Enterprise Manager Cloud Control 12c Release 2 Plug-in Update 1 (12.1.0.2):

Follow the option that is relevant for your installation to deploy the latest plug-in releases and updates into your environment:

+  If you have not yet installed Enterprise Manager Cloud Control 12c, download the Cloud Control 12c Release 2 Update 1 (12.1.0.2) binaries from Oracle Technology Network (OTN) and install the new version. The plug-ins will be deployed as part of the installation process.

+  If you already have Enterprise Manager Cloud Control 12c Release 1 (12.1.0.1) installed [with or without Bundle Patch 1], download Enterprise Manager Cloud Control 12c Release 2 Plug-in Update 1 (12.1.0.2) from Oracle Technology Network (OTN) and upgrade to the new version. The plug-ins will be deployed as part of the upgrade process.

+  If you have already installed or upgraded to 12c Release 2 (12.1.0.2), you do not need to download the binaries from Oracle Technology Network (OTN); instead, you can deploy the new and updated plug-in versions using Cloud Control's Self Update feature. To deploy all of the plug-ins simultaneously, use the deploy_plugin_on_server emcli command as shown below:

           emcli deploy_plugin_on_server -plugin="plug-in_id[:version][;plug-in_id[:version]]" [-sys_password=sys_password] [-prereq_check]

For example:

           emcli deploy_plugin_on_server -plugin="oracle.sysman.vt;oracle.sysman.ssa;oracle.sysman.db"
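For instance, to pin the exact plug-in versions listed above and run the prerequisite check before deploying (the password placeholder is illustrative; the flags follow the syntax shown above):

           emcli deploy_plugin_on_server -plugin="oracle.sysman.db:12.1.0.3.0;oracle.sysman.vt:12.1.0.4.0;oracle.sysman.ssa:12.1.0.5.0" -prereq_check
           emcli deploy_plugin_on_server -plugin="oracle.sysman.db:12.1.0.3.0;oracle.sysman.vt:12.1.0.4.0;oracle.sysman.ssa:12.1.0.5.0" -sys_password=<sys_password>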

For more information, refer to the Plug-in Manager chapter in the Enterprise Manager Administrator's Guide, available here:
http://docs.oracle.com/cd/E24628_01/doc.121/e24473/plugin_mngr.htm

To review more install/upgrade usecases and the FAQ, please refer to the Getting Started chapter in the Basic Installation Guide, available here:
http://docs.oracle.com/cd/E24628_01/install.121/e22624/getting_started_overview.htm

Join the live webcast "Oracle Enterprise Manager Cloud Control 12c Release 2 Plug-In Update 1 Installation and Upgrade Overview" to learn more and ask questions on Friday, March 01 at 9:00 a.m. PT.

Stay Connected:
Twitter |
Facebook | YouTube | Linkedin | Newsletter

Wednesday Feb 27, 2013

Oracle Enterprise Manager Cloud Control 12c Release 2 Plug-In Update 1 Installation and Upgrade Overview


Friday, March 01, 9:00 a.m. PT

Register Now

Join us for this live technical presentation to learn about Oracle Enterprise Manager Cloud Control 12c Release 2 Plug-in Update 1 release and how it impacts you. This webcast is a must-attend event for users who have EM 12.1.0.x in their environment or are planning to deploy EM 12.1.0.x or upgrade to EM 12.1.0.x from older Enterprise Manager versions.

Oracle has recently released new and updated Enterprise Manager plug-ins, which enable optimum utilization of compute resources, giving customers more flexibility and control during application development and leading to faster time-to-market for delivering IT services. These plug-ins provide enhanced support and extend EM's capabilities for Database as a Service (DBaaS) and Infrastructure as a Service (IaaS), and introduce new features for Testing as a Service (TaaS).

During this presentation we will review the following topics:

  • Overview of Enterprise Manager Cloud Control 12c Release 2 Plug-in Update 1
    • Does it have new features or bug fixes?
    • How do I get EM 12c Release 2 Plug-in Update 1 binaries?
    • Various Install/Upgrade/Additional OMS Usecases
    • What happens to Agent Binaries?
    • Go over Install/Upgrade FAQ on EM 12c Release 2 Plug-in Update 1
  • Quick overview of the new versions/revisions of the plug-ins
  • How to deploy the new plug-ins in your existing EM 12.1.0.x environment
    • In Offline or Online mode
    • Using emcli to reduce OMS downtime
  • Documentation
  • Q&A


Monday Feb 25, 2013

New and updated Oracle Enterprise Manager 12c Plug-ins for Infrastructure as a Service (IaaS)

With the recent announcement of Oracle Enterprise Manager 12c Release 2 Plug-in Update 1 (12.1.0.2), building and managing an Infrastructure as a Service (IaaS) cloud is simpler than ever. The Enterprise Manager for Oracle Virtualization (VT) plug-in 12.1.0.4 now supports Oracle VM 3.2.1, which is enhanced for building a more secure and scalable enterprise-class infrastructure cloud. In particular, we saw significant performance improvements in handling parallel operations in the areas of storage and VM management. Some of the key features supported for Oracle VM 3.2.1 include:

• Virtual Machine Tagging - During deployment, users can specify tags (which can be edited later) for the machines they are creating, and can search based on those tags.

• Periodic Storage Repository Refresh - Synchronization between data on the repository and on the Oracle VM Manager can now be automated.

• Oracle VM Agent password update support - The Oracle VM Agent password can now be updated via the EM UI.

• Virtual Server Roles support - Servers can be marked to play different roles, such as the utility role or the virtual machine role.

• VM start policy - Users can set the policy for where a VM should be started (e.g., Current Server, Best Server, or based on the Pool Policy).

• OCFS2 timeout support - Users can set the heartbeat timeout for servers in a clustered server pool.

More details on Oracle VM 3.2.1 features can be found here

Enterprise Manager for Oracle Cloud (SSA) 12.1.0.5 exposes the new capabilities provided by the Enterprise Manager for Oracle Virtualization plug-in 12.1.0.4 to self-service users through the Self Service Portal.

Some of the key features and improvements in Enterprise Manager for Oracle Virtualization 12.1.0.4 and Enterprise Manager for Cloud 12.1.0.5 plug-ins for building Infrastructure as a Service cloud include:

• Faster and more consistent synchronization with Oracle VM Manager - Any VM status change is now reflected in Enterprise Manager much faster and more consistently than before. Both the target status and the OVM status of a VM target instance in EM are updated consistently as the VM goes through status changes.

• Improved UI page performance - Many of the key UI pages for cloud management now load much faster.

• Improved assembly deployment - There are multiple enhancements in this area, including better error handling, improved and more robust Storage Repository selection for better distribution of storage usage, and enhanced network placement logic.

• Out-of-box assembly support - Database assemblies available in the Self Update Console can be downloaded and used in the Self Service Portal for easy deployment of databases in your cloud. Assembly deployment is fully integrated with EM agent deployment, so hosts and applications such as database instances can be discovered and managed as soon as the deployment completes.

Follow the steps below to enable the latest functionality using these plug-ins:

1.    Apply patchset update (PSU) patch 14840279 to EM 12c Release 2 (12.1.0.2) OMS $ORACLE_HOME. This is a recommended patch on 12.1.0.2.0
2.    Deploy the Oracle Virtualization Plug-in 12.1.0.4.0 and Oracle Cloud Plug-in 12.1.0.5.0 on the OMS using the Enterprise Manager 12c console. If you plan to use the database assemblies, also deploy the Oracle Database Plug-in 12.1.0.3.0

To deploy the above three plug-ins, use the following EM CLI commands:

$ emcli login -username=sysman -password=<password>
$ emcli deploy_plugin_on_server -plugin="oracle.sysman.vt;oracle.sysman.ssa;oracle.sysman.db"

3.    Apply the Virtualization Plug-in patch 16235354 onto the Plug-in $ORACLE_HOME. (Make sure the PSU mentioned in step 1 is applied before this step.)
4.    Deploy the Oracle Virtualization 12.1.0.4.0 Plug-in on the Enterprise Manager Agent managing Oracle VM Manager using the Enterprise Manager console. (This is the agent used when registering the OVM Manager to Cloud Control)
5.    Apply patch 16219750 on this same Agent used to manage the Oracle VM Manager target. This can be done via a Patching Plan in the Enterprise Manager Console.
6.    Apply the Virtualization Plug-in patch 16235337 on the Agent. (Make sure patches mentioned in steps 3 and 5 are applied before this step - this can also be applied via a Patching Plan)
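As a hedged illustration of step 3, a plug-in-home patch is applied with the standard OPatch workflow; the middleware and plug-in home paths below are hypothetical, and the patch README always takes precedence:

$ export OMS_HOME=/u01/app/oracle/middleware/oms                  # hypothetical OMS home
$ export PLUGIN_HOME=/u01/app/oracle/middleware/plugins/oracle.sysman.vt.oms.plugin_12.1.0.4.0   # hypothetical plug-in home
$ cd /tmp/16235354                                                # directory where patch 16235354 was unzipped
$ $OMS_HOME/OPatch/opatch apply -oh $PLUGIN_HOME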

Refer to support note 1371536.1 for more details.


Tuesday Feb 12, 2013

SquareTwo Financial uses Oracle Data Masking for Compliance and Improves Performance by 96%

Watch as leading financial services firm SquareTwo Financial talks about maintaining compliance while increasing IT productivity and performance by replacing in-house data masking with the Oracle Data Masking solution.


Wednesday Jan 30, 2013

Coles Deploys Oracle Exadata and Oracle Enterprise Manager 12c

Read the latest news about Coles Supermarkets, one of Australia's largest grocery chains with more than 100,000 employees and 2,000 stores country-wide. Learn how Coles completely revamped their data warehouse with Oracle Exadata and Oracle Enterprise Manager 12c. The new system improved Coles' processes and critical reporting by as much as 3 to 4x out of the box, with 4 to 6x faster query performance. The result: higher quality of service for the business and for customers during peak seasonal spikes.



Thursday Jan 24, 2013

HDFC Bank Deploys Database-as-a-Service with Oracle Enterprise Manager 12c

Listen in as one of India’s largest banks discusses the benefits of using Oracle Enterprise Manager 12c to manage their database-as-a-service deployment.
