Wednesday May 29, 2013

The Enterprise Manager: Episode 8 - Database as a Service

Our intrepid hero Ed Muntz is back after a short stint in marketing to rescue CIO Felix from the IT challenges created by virtualization sprawl and the management challenges of his enterprise databases.

Spinning up VMs for database deployments is creating VM sprawl. Ed Muntz explains how Oracle Enterprise Manager enables rapid deployment of a private database as a service and provides total control over the Database as a Service cloud.



Tuesday May 28, 2013

IOUG Webcast - Upgrading to Enterprise Manager 12c: Best Practices and Real World Lessons

Upgrading to Enterprise Manager 12c: Best Practices and Real World Lessons
Wednesday, June 5, 2013 12:00 PM - 1:00 PM CST

Join the IOUG webcast to learn how CernerWorks, the remote hosting division of Cerner, a global health IT company, upgraded their Oracle Enterprise Manager system from 11g to 12c.

In this webcast, Aaron Rimel from Cerner and Akanksha Sheoran from Oracle will discuss the available upgrade paths and why Cerner chose the path it followed.

With thousands of managed databases in Cerner's IT environment, high availability, ease of upgrade and minimal downtime were key requirements for the upgrade process. You will learn how all of these requirements were met with proper planning and out-of-the-box features included with Oracle Enterprise Manager 12c. 

Register here

You can also ask questions during the webcast via Twitter, using the hashtag #em12c on twitter.com or by going to tweetchat.com/room/em12c.



Monday May 20, 2013

Test Drive Oracle Enterprise Manager 12c Today!

Are you looking for a place to test Oracle Enterprise Manager Cloud Control 12c features? Amazon Web Services' new Oracle Enterprise Manager Cloud Control 12c – Monitoring Essentials Test Drive is the perfect solution for anybody interested in trying out Oracle Enterprise Manager 12c before upgrading or deploying. Through a partnership with Apps Associates, AWS provides 5 hours of free access to an Oracle Enterprise Manager 12c environment with database and middleware targets, giving you a look at the new features and capabilities of Oracle Enterprise Manager 12c. A step-by-step guided workflow introduces the monitoring framework, including an overview of navigation, auto discovery, monitoring, management and reporting features.



Wednesday May 15, 2013

How to go Physical to Virtual with Oracle Solaris Zones using Enterprise Manager Ops Center

Many customers have large collections of physical Solaris 8, 9 and 10 servers in their datacenters and are wondering how they are going to virtualize them. This leads to a commonly asked question: can Enterprise Manager Ops Center 12c be used to P2V (Physical to Virtual) my old servers? Ops Center does not have a single-button P2V capability, but it can deploy physical servers, LDOMs and branded zones based on flash archives (flars) taken from your existing physical servers. Ops Center achieves P2V by deploying flars and leveraging its patching and automation capabilities to make the P2V process consistent, repeatable and as cost-effective as possible.

As with any virtualization project, a number of things will need to be updated as you move from a physical to a virtual environment. It is a common misconception that you can virtualize a system and change nothing about it. There are always a few things that have to be changed at the OS or process level to make the system compatible with the virtualized environment. As a best practice, many more things should be updated, re-allocated and redesigned as part of a virtualization project, but that is a subject for another blog.

In this blog, we will be covering migrating physical servers to Oracle Solaris Zones.

We will also review this in our community call on Monday, 05/20/2013. Here are the web conference details.

----

Topic: How to P2V to Oracle Solaris Zones using Enterprise Manager Ops Center
Date: Monday, May 20, 2013
Time: 5:00 pm, Eastern Daylight Time (New York, GMT-04:00)
Meeting Number: 594 835 639
Meeting Password: oracle123
-------------------------------------------------------
To join the online meeting (Now from mobile devices!)
-------------------------------------------------------
1. Go to https://oracleconferencing.webex.com/oracleconferencing/j.php?ED=236653047&UID=1653030737&PW=NNzUxYjUwOWY5&RT=MiMxMQ%3D%3D
2. If requested, enter your name and email address.
3. If a password is required, enter the meeting password: oracle123
4. Click "Join".

To view in other time zones or languages, please click the link:
https://oracleconferencing.webex.com/oracleconferencing/j.php?ED=236653047&UID=1653030737&PW=NNzUxYjUwOWY5&ORT=MiMxMQ%3D%3D

-------------------------------------------------------
To join the teleconference only
-------------------------------------------------------
Call-in toll-free number: 1-866-682-4770 (US/Canada)
Other countries:
https://www.intercallonline.com/listNumbersByCode.action?confCode=7629343

Conference Code: 7629343#
Security code: 7777#
-----

Ops Center does not do anything different from what you could do by hand; it just automates the process and, through the use of templates and wizards, drives consistency and repeatable results.

Physical Server to Branded Zones

Let’s first look at converting existing Solaris 8, 9 and 10 physical servers into Solaris 8/9 branded zones (on Solaris 10) and Solaris 10 branded zones (on Solaris 11).

There are four basic steps involved in converting a physical server to a virtual server: capture, grooming, deployment and customization.

Capture

This is done by creating a flash archive of the source system. If the source system is a Solaris 10 environment with a ZFS root, it is possible to use a ZFS-based flar, but for consistency and ease of coding in the grooming scripts, I recommend that all flars be taken as cpio flash archives.

When capturing the flar, we should include all the root file systems that will be required for the new zone to boot. If the application data is small, it can be included in the flar, but if it is large, you should copy the application data after the P2V migration is complete.

Example flar capture command line:

# flarcreate -n [HostNameOfSourceSystem] -S -c -L cpio \
    -x /[Dir where archive will be stored] \
    /[Dir where archive will be stored]/[HostNameOfSourceSystem].flar

Note that we are using the -x flag to exclude the directory where we are storing the archive. You can use multiple -x flags to exclude any other directories you do not want to include as part of the archive, such as large application data. Large archives simply become difficult to unpack, repack and upload. As you can imagine, if your source archive contained 100 GB of application data, you would probably need 300-400 GB of space just to perform grooming, and that would make the process much slower.
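For example, a capture that also excludes a large application data area might look like the following; /[Large application data dir] is just a placeholder for whatever directories you decide to leave out on your source system.

# flarcreate -n [HostNameOfSourceSystem] -S -c -L cpio \
    -x /[Dir where archive will be stored] \
    -x /[Large application data dir] \
    /[Dir where archive will be stored]/[HostNameOfSourceSystem].flar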

Grooming

The first question I often get is, “Why do I need to groom my flar?” In an ideal world, you would not need to, but rarely do we live in an ideal world:

1) If your environments are like most customers', a wide variety of configurations, patching levels and processes have been applied to the source systems during their lifecycle. This level of entropy means that there may well be things that must be fixed, like the following real-world examples of problems with source flars that I have observed:
    • A customer’s patching methodology had disabled the startup scripts that are required for a branded zone to boot.
    • Some Solaris “df” command patches that are mandatory for Solaris 8/9 branded zones (with a separate /var) to work were missing.
Formalized grooming is a chance to automate the fixing of these anomalies and to help standardize your new environment.

2) A common restriction placed on you in a migration project is that you cannot alter, update or reconfigure the source machine to suit its target virtualized environment. So you have to apply any mandatory fixes to the archive file itself, and preferably also remove anything that may conflict with or fail in the new virtualized environment. Once again, formalized grooming is an ideal way to address these requirements.

You will often come across the argument that everything must be identical in the new virtualized environment, in an attempt to minimize change. Let's be honest: while virtualization aims to provide a functionally identical environment, a massive amount of change happens under the covers when you virtualize a machine, and a few things must be changed to make it work. The history of virtualization projects has shown that once existing servers are virtualized, they are almost never revisited to clean up the idiosyncrasies of the source machines and bring them into a standard format. Therefore, I am a supporter of more aggressive grooming and of cleaning up years of entropy as part of the P2V process.

The way to make grooming consistent, and less manual and tedious, is to convert the steps into an Operational Plan and have Ops Center run them for you.

Here are some of the modules I usually include in my grooming plans, and what each one does:

  • Get_FLAR_info: Export flar info as shell variables to be used elsewhere in the program.
  • Flar_Cpio_Fmt_Check: Check that the flar is in cpio format.
  • Unpack_Archive: Unpack the flar archive.
  • Min_OS_Release: Check that the OS release meets the minimum for branded zones.
  • ServiceTag_Identity: Remove the service tag identity file. Old service tags and duplicate service tags can confuse Ops Center.
  • SoftPartition_Check: Check that no soft partition data is included in the flar.
  • Clean_FMD: Remove any outstanding FMD faults from the source machine.
  • Clean_SVM: Disable any references to disk suite (svm/sds/ods) from the source machine.
  • Clean_NFS: Disable any NFS mounts in vfstab from the source machine, as they may not be available in the new virtualized environment and will stop the zone from booting.
  • Add_Packages: Add missing packages if required (the packages you want to add are laid out under a directory structure).
  • Add_Patches: Add missing patches if required (the patches you want to add are laid out under a directory structure).
  • App_Disable: Disable application startup in the flash archive. It is often advisable to disable automatic application startup as part of the P2V process, so that it cannot conflict with or corrupt the production source machine.
  • Agent_Cleanup: Unconfigure the Ops Center agent and clean up its identity files in case the source system was already under Ops Center control.
  • Pack_Archive: Repack the flar archive.

These are just some of the common modules that I have used over the years on P2V projects. You may not need any or all of these, and you may need to create your own module if you find something unique in your source machines. Grooming is an iterative process, as your source machines can vary wildly from any machine I have ever come across. This is not an Ops Center thing; it is just the nature of P2V on a source pool of unknown quality. So if you hit an issue, troubleshoot it, add a new module to the Operational Plan and try again. To help get you started, I have included some sample code [Sample Grooming script]. This script shows what can be done and should be treated as a starting point for building your own grooming script.
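To give a flavour of what a module can look like, here is a minimal sketch of a Clean_NFS-style step written as a shell script. It is not taken from the sample script above; the FLAR_UNPACK_DIR variable and the #P2V# marker are assumptions used purely for illustration.

#!/bin/ksh
# Clean_NFS (illustrative sketch): comment out NFS entries in the unpacked
# flar's vfstab so the new zone can boot even if the original NFS servers
# are unreachable. Assumes an earlier Unpack_Archive step has set
# FLAR_UNPACK_DIR to the directory holding the unpacked cpio archive.

VFSTAB="${FLAR_UNPACK_DIR}/etc/vfstab"

if [ -f "$VFSTAB" ]; then
    cp "$VFSTAB" "${VFSTAB}.pre_p2v"   # keep the original for reference
    # an NFS entry has a colon in its device field (server:/export/...)
    awk '$1 ~ /:/ && $1 !~ /^#/ { print "#P2V# " $0; next } { print }' \
        "${VFSTAB}.pre_p2v" > "$VFSTAB"
fi

The same check, back up, edit-in-place pattern works for most of the other modules listed above.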

Loading the script into an Operational Plan is as simple as:

1. Create a new Operational Profile. Give the plan a name, and select Operating System as the target type.

[Screenshot: Creating an Operational Plan]

2. Click Browse to find the script and then load it. I have increased the profile timeout to 180 minutes, as grooming large flars can take a while.

[Screenshot: Browse and load script]

3. Finally, specify any variables for which the script requires user input. You can mark each variable as required or optional and provide a hint/description to help the user at run time.

[Screenshot: Specify variable input]
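As an aside on variables: my understanding is that the values entered for Operational Plan variables at run time are made available to the script as environment variables, but do verify the exact mechanism against your Ops Center release's documentation. Assuming that behaviour, a hypothetical FLAR_FILE variable could be validated at the top of the grooming script like this:

# FLAR_FILE is a hypothetical Operational Plan variable, assumed here to
# arrive as an environment variable when the plan runs.
if [ -z "$FLAR_FILE" ] || [ ! -f "$FLAR_FILE" ]; then
    echo "FLAR_FILE is not set or does not point to a flash archive" >&2
    exit 1
fi
echo "Grooming flash archive: $FLAR_FILE"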


So how do I groom my flar?

Step 1) Copy the flar taken from the source system onto any system managed by Ops Center that has sufficient space.

Step 2) Launch your grooming “Operational Plan” against the system to which you copied the flar.

[Screenshot: Launching an Operational Plan]

Step 3) Have a cup of coffee... then check the logs.

[Screenshot: Checking the logs]

Step 4) You now have a flar ready for uploading into Ops Center.

Deployment

Deployment is where this process comes under full Ops Center control.

Step 1) Load your groomed flar into the Enterprise Controller (EC) library.

[Screenshot: Load the groomed flar]

Step 2) Create a profile for the zone you want to deploy.

[Screenshot: Create a zone profile]

Step 3) Deploy the zone.

[Screenshot: Deploy a zone]

When the job completes, you have your source system as a zone.

Customization

What sorts of actions can you perform here?

  • Add additional networks if you only deployed with a single network.
  • Update the application configuration.
  • Update secondary apps like backup software, application monitoring, etc.
  • Re-enable application startup scripts and remote file systems in vfstab.
  • Do any updates to patches/packages/applications.

 To make these actions repeatable and consistent, I employ Operational Plans or Update Profiles.

Operational Plans – These are scripts that make actions repeatable and can be modified by operator input (Operational Plan Variables) at run time.

Update Profiles - These can contain patches, packages, scripts and customized configuration files (based on the system you are deploying to).

Choose the right method for what you are trying to do or combine both of them together in a Software Deployment/Update Plan.
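As a sketch of what such a customization Operational Plan might contain, the fragment below reverses the changes made by the hypothetical Clean_NFS and App_Disable grooming steps shown earlier; the #P2V# marker and the rc-script name are placeholders, not part of any shipped script.

#!/bin/ksh
# Illustrative customization sketch: restore vfstab entries and an application
# startup script that grooming disabled, now that the zone is running in its
# new environment.

VFSTAB=/etc/vfstab

# restore NFS mounts that were commented out with a "#P2V# " prefix
if grep '^#P2V# ' "$VFSTAB" >/dev/null 2>&1; then
    cp "$VFSTAB" "${VFSTAB}.pre_customize"
    sed 's/^#P2V# //' "${VFSTAB}.pre_customize" > "$VFSTAB"
    mountall -F nfs    # mount the re-enabled remote file systems
fi

# re-enable a placeholder application startup script disabled during grooming
if [ -f /etc/rc3.d/_S99myapp ]; then
    mv /etc/rc3.d/_S99myapp /etc/rc3.d/S99myapp
fi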

Congratulations, you have P2V'd your system.

Conclusion

It is fairly straightforward to automate the migration of physical servers to Oracle Solaris Zones using Enterprise Manager Ops Center. Operational Plans and Update Profiles are great tools to automate many of those operational tasks and increase their reliability and repeatability.

Look out for a future blog on “How to P2V to Oracle Virtual Machine for SPARC using Enterprise Manager Ops Center“.

Thursday May 02, 2013

Bullet Proof Your Oracle Applications and Databases Workshop - Guest Blog by Rajesh Krishnan of Acolade

Guest Blog by Rajesh Krishnan of  Oracle Partner Acolade Consulting 

Testing can be a costly and time-consuming process. Because of that, few organisations test their data and business applications as thoroughly as they should.

Rather than being done only for an upgrade or a new application deployment, testing should be regarded as an essential part of an organisation's standard IT practice. Many assume that the cost of achieving this is prohibitive.

The tools to identify and prevent performance problems before they occur are almost always significantly cheaper than the cost of the problems in production.

Oracle offers a comprehensive suite of testing tools that makes this easy and cost-effective for organisations to achieve.

A recent testing methodology and tools workshop was conducted in Australia by Oracle and Specialized Partner Acolade Consulting to address common issues and customer pain points relating to testing. Testing best practices, techniques and tool rollouts were showcased to address and mitigate these risks. Some of the key points discussed were:

  • Make your testing more comprehensive and robust
  • Test Automation and techniques
  • Take the risk out of change
  • Use real production workload to test planned changes in Applications and Databases
  • Replay real Database workloads to identify performance issues and SQL degradation, and to test Database consolidation, before changes affect Production
  • Reduce testing time and costs
  • Case studies for Oracle EBS – Finance and HR modules

Following are the links to the presentations:

1. Oracle's Approach to Database and Applications Testing

2. Acolade Consulting Testing Best Practices

3. Testing Lifecycle Management

4. Department of Primary Industries Victoria Case Study

5. Test Automation

How to Monitor Systems in Enterprise Manager using Ops Center

In this post, we'll use Ops Center to add hardware monitoring to Enterprise Manager. We'll discuss the existing capabilities of Host targets, show how to create an Infrastructure Stack and demonstrate some of the features it provides to Enterprise Manager.



A recording of this community call is now available here:

WebEx Recording: Ops Center integration with Cloud Control
          


Prerequisites

This blog post uses both the Enterprise Manager 12c Cloud Control and Ops Center products. The following list describes the initial setup state and provides links to Oracle documents you can use to install and configure both products:

  • Enterprise Manager 12c
    • Install and configure an Oracle Management Server and Repository (OMS and OMR)
    • Install the Oracle Management Agent (OMA) on a target OS instance
  • Ops Center 12c
    • Install and Configure the Enterprise and Proxy Controller (EC and PC)
    • Discover and manage the system and its associated OS (installing the Ops Center agent on the OS instance)

I've configured the environment for this post in the following way:

  • Enterprise Manager 12c: OMS and OMR running separately
  • Ops Center 12c: EC and PC running co-located
  • Two sample systems, both running Solaris 11
    • An Oracle SPARC T4-2 server
    • An Oracle SunFire X4200 M2 server

Host Capabilities in Enterprise Manager

By default, installing an Enterprise Manager agent on an OS instance creates an associated Host target. A Host provides a lot of useful data about the platform that hosts the OMA, such as:

  • CPU and memory utilization
  • File system size and utilization
  • Network interfaces and activity
  • Program and process resources
  • User activity

The Host does not provide sensor data associated with the server and cannot report issues with the underlying hardware. Fortunately, Ops Center can do both of these things, and you can incorporate server data, monitoring thresholds and alerts into your Enterprise Manager environment.

Setting Up an Infrastructure Stack

Ops Center: Configure the connection to Enterprise Manager

To connect Ops Center with Enterprise Manager, select the left-hand Navigation link Administration > Enterprise Manager Cloud Control. Click the right-hand Action link Configure/Connect to open a pop-up dialog box. The dialog lets you configure the OMS and OMR settings. The screen shot below shows both steps from the wizard.

Enterprise Manager: Download and deploy the Ops Center Plug-In

Enterprise Manager 12c provides a deployable plug-in to manage an Infrastructure Stack. For Enterprise Manager installations running in online mode, you can download it from the Extensibility > Plug-Ins menu, under the Servers, Storage and Network category.

Download and deploy the plug-in to the OMS. You can deploy it immediately to the OMA instances you want to integrate with Ops Center, or wait until you create an Infrastructure Stack. (Enterprise Manager will automatically install the plug-in on the OMA if it is not already present.)

Enterprise Manager: Create an Infrastructure Stack Target

An Infrastructure Stack associates data from a system in Ops Center with targets in Enterprise Manager. To create one, select the menu option Setup > Add Targets > Add Targets Manually. From the wizard, select Infrastructure Stack from the pull-down menu and identify the Monitoring Agent that will be used.

The subsequent configuration screen allows you to define the name for the target, to identify the Enterprise Controller that will provide the data for the server, and to specify the Ops Center login credentials. Any user account defined in Ops Center is suitable for the target.

Infrastructure Stack Capabilities

What benefits does an Infrastructure Stack provide? As the consolidation point for server-related information, it enables you to perform three principal tasks:

  • Monitoring, with metrics and thresholds for the server
  • Reporting, using a set of standard Information Publisher reports
  • Incident Management, with Ops Center hardware alerts

Let's look at some examples for each.

Monitoring

An Infrastructure Stack provides a wealth of information about a server, including identification information, state, capabilities and sensor data. The home page provides a summary of current values for power consumption, temperature, fan speed, and reported incidents.

The metrics section provides more detailed data such as sensor values, thresholds, installed firmware and server capabilities.

Note that the reported values ultimately depend on what data is available for a specific type of server. Some earlier models don't report temperature data, for instance.

Reporting

Enterprise Manager provides three standard Information Publisher reports:

  • Infrastructure Stack Topology, showing the server and OS associated with the stack
  • Infrastructure Stack Configuration, providing a tabular summary of key data
  • Hardware Sensors, showing current values and thresholds for monitored server data

The following screen shots provide sample report output for the infrastructure stack configuration and hardware sensor data.

Incident Management

Incident reporting is an optional capability for an Infrastructure Stack. Enabling the feature causes Ops Center to forward hardware alarms, allowing you to consolidate problem management in Enterprise Manager.

To enable incident reporting, navigate to the Infrastructure Stack and select the menu option Monitoring > Metric and Collection Settings. Select the link Collection Schedule for Infrastructure Stack Alarms to edit the settings:

If you toggle the collection schedule to Enabled, Enterprise Manager will activate incident reporting based on hardware alarms. By default, the data refresh frequency is once every five minutes, with warning or critical alarms being reported.

In this example, we simulated a hardware alarm using the IPMItool utility from the Hardware Management Pack. Ops Center forwarded the event and all associated data, which generated an actionable incident in Enterprise Manager.
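The exact simulation depends on the service processor, but a simple way to generate a test event with ipmitool is to inject one of its pre-defined sample events into the system event log and then confirm it was recorded; event 1 corresponds to a "Temperature - Upper Critical - Going High" assertion. Treat this as an illustrative way to exercise the alert path rather than the exact command used for this screenshot.

# ipmitool event 1
# ipmitool sel list | tail -3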

Summary

In this post, we have demonstrated how to integrate Ops Center data into Enterprise Manager, and described the features available for an Infrastructure Stack. If you would like to learn more, please join us for the WebEx demo on May 9th.

