An Oracle blog about Oracle Enterprise Manager and Oracle Management Cloud

Recent Posts

Enterprise Manager on OCI Installation Phase 1 - EM App OCI Environment Prerequisites...You Must Do These Things Before Installing the EM App!

Hi there!  If you're reading this blog post, you are either in the process of installing the new Oracle Enterprise Manager (EM) app into Oracle Cloud Infrastructure (OCI), or you are an OCI Administrator who's been asked to help your friend (an EM administrator who wants to install the EM app) with Phase 1 of the deployment process: the required OCI Compartment setup. Phase 1 covers the preparation of the OCI Compartment that is required before installing the EM app, and these prerequisite steps must be done by an OCI Administrator. The steps only take a few minutes, but THEY MUST BE COMPLETED BEFORE STARTING THE EM APP INSTALLATION OR YOUR EM INSTALLATION WILL FAIL.

Once you have completed the prerequisite steps, the person installing EM will need the name of the OCI Compartment into which the EM app will be installed. In the example steps below, we assume the OCI administrator created a compartment called "eminfra".

OCI Compartment setup prerequisites for EM App Installation (estimated time: 5 minutes, requires OCI Administrator role)

Create a compartment for the EM App installation, and make sure the EM administrator's user account is added to the compartment. We'll assume you've created one called "eminfra".

Image 1: The home page of your OCI Compartment.
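If you prefer the command line to the console, this compartment-creation step can be sketched with the OCI CLI. This is a dry run only: the tenancy OCID below is a placeholder, and the run() helper just prints each command (change echo to "$@" to actually execute, assuming a configured OCI CLI v2).

```shell
# Dry-run sketch of the compartment step (placeholder OCID, prints only).
TENANCY_OCID="ocid1.tenancy.oc1..exampleuniqueID"   # placeholder value
run() { echo "+ $*"; }                              # swap echo for "$@" to execute

run oci iam compartment create \
    --compartment-id "$TENANCY_OCID" \
    --name eminfra \
    --description "Compartment for the EM app installation"
```

Adding the EM administrator's user account to the compartment is done through IAM group membership and a policy, which the console walks you through.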
In the OCI compartment, create a Dynamic Group called 'OEM_Group' to group the instances of the Compartment, and add the following rule:

ALL {instance.compartment.id = '<compartment ocid>'}

Image 2: Creating a Dynamic Group for your EM instance

Next, create a Policy in the root compartment of the tenancy with the following rules:

Allow dynamic-group OEM_Group to manage instance-family in tenancy
Allow dynamic-group OEM_Group to manage volume-family in tenancy

Image 3: Creating a Policy for your EM instance

Then create a Virtual Cloud Network (VCN) with the following properties:

- Public Subnet
- Internet Gateway

Image 4: Creating a VCN for your EM instance

Finally, add the following Stateful Ingress rules to the Security List of the VCN you just created:

Destination Port Range | Protocol | Type of Service
22                     | TCP      | SSH
7803                   | TCP      | Console
4903                   | TCP      | Agent Upload
7301                   | TCP      | JVMD
9851                   | TCP      | BIP

Image 5: Creating Security Rules for your EM instance

(FastConnect Customers only) Configuring a Service Gateway: If (and only if) you are using a private subnet/FastConnect with your VCN, a few additional steps are required to create the Service Gateway and define the routing rule and egress rules. If you are not using a private subnet/FastConnect, you can skip this section and move to the EM installation section.

Create the Service Gateway with "All <RegionCode> Services in Oracle Services Network", where <RegionCode> refers to the OCI region of your EM compartment. (FastConnect Customers only)

Image FC-1: Creating a Service Gateway for your private subnet.

Add a new Route Rule for the Service Gateway you just created. (FastConnect Customers only)

Image FC-2: Creating a Route Rule for your new Service Gateway for your private subnet.

If (and only if) your private subnet has restrictions on outgoing traffic/egress, you must add egress rules for the service network CIDRs for your OCI region.
For a list of CIDRs that apply to your region, refer to the OCI documentation "Public IP Address Ranges for the Oracle Services Network".

(FastConnect Customers only) Image FC-3: Sample Egress Rules for the CIDRs associated with the US-Ashburn region. Consult the OCI documentation for your own region.

And that's it! You're now done with the OCI prerequisites and are ready to continue with Phase 2 of the EM app deployment process. Alternatively, you can return to the overview of the EM app deployment process. (In either case, make sure you copy and paste the Compartment Name so you can use it later.)

TO: Helpful OCI Administrator
Many thanks for your help!
FROM: Your Friendly EM Administrator
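For OCI administrators who'd rather script the dynamic-group and policy steps above, here is a dry-run sketch using the OCI CLI. The OCIDs are placeholders and run() only prints each command (change echo to "$@" to execute for real); note that the OCI policy verbs are instance-family and volume-family.

```shell
# Dry-run sketch of the Phase 1 dynamic group and policy (placeholder OCIDs).
COMPARTMENT_OCID="ocid1.compartment.oc1..exampleuniqueID"  # placeholder
TENANCY_OCID="ocid1.tenancy.oc1..exampleuniqueID"          # placeholder
run() { echo "+ $*"; }                                     # prints only

run oci iam dynamic-group create \
    --name OEM_Group \
    --description "Instances in the EM compartment" \
    --matching-rule "ALL {instance.compartment.id = '$COMPARTMENT_OCID'}"

# The policy must be created in the root compartment of the tenancy.
run oci iam policy create \
    --compartment-id "$TENANCY_OCID" \
    --name OEM_Policy \
    --description "Let OEM_Group manage instances and volumes" \
    --statements '["Allow dynamic-group OEM_Group to manage instance-family in tenancy","Allow dynamic-group OEM_Group to manage volume-family in tenancy"]'
```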


Enterprise Manager on OCI Installation Phase 2 - Installing the EM App Into Your OCI Compartment

Hi there!  If you're reading this blog post, you are in the process of installing the new Oracle Enterprise Manager (EM) app into Oracle Cloud Infrastructure (OCI), and you've already completed Phase 1 (OCI setup prerequisites) and are ready to start on Phase 2: Installing the EM App Into Your OCI Compartment.

STOP. IF YOU ARE NOT SURE WHETHER YOUR OCI ADMINISTRATOR HAS COMPLETED THE PHASE 1 PREREQUISITES, CHECK WITH THEM BEFORE PROCEEDING. The prerequisite steps only take a few minutes, but THEY MUST BE COMPLETED BEFORE STARTING THE EM APP INSTALLATION OR YOUR EM INSTALLATION WILL FAIL.

OK, now that we've gotten that out of the way, we can proceed with the installation. You will need the name of the OCI Compartment into which you will be installing your EM app. In the example steps below, we assume the OCI administrator created a compartment called "eminfra".

Licensing/Account Information (estimated time: 5 minutes)

Read the Oracle Enterprise Manager License Policies to ensure you comply with the licensing requirements. You'll also need to set up and configure your OCI account and obtain the OCID of the Compartment where you intend to install the EM environment (see OCI sign-up and OCI Compartments for details).

Image 1: A view of the OCI home page for your compartment

Installing the EM image (estimated time: 5 minutes of work, then 30 minutes installation time)

Select the EM image in the OCI Marketplace. From the OCI Main Menu, click on the Marketplace, search for Oracle Enterprise Manager 13.3 and click on it.
Image 2: Searching for Enterprise Manager in the OCI Marketplace

Review the Oracle Enterprise Manager overview and click on Launch Instance.

Image 3: Enterprise Manager app overview

Select the Package Version, specify your OCI Compartment name, accept the Terms of Use and click Launch Instance.

Image 4: Selecting the Enterprise Manager version and specifying your OCI compartment

Create an Instance Name, select your desired OCI Availability Domain and select the desired shape for the VM. You can choose any shape that is available.

Image 5: Naming your EM app Instance, selecting the Availability Domain and Shape
Image 6: Selecting the OCI Compute Shape for your EM image instance

Enter the SSH public key that will be used to access the instance, as well as the Virtual Cloud Network and Subnet created in the prerequisites, and then click on Create!

Image 7: Entering the SSH key, selecting the VCN.

<COFFEE BREAK!> Now you can go get some coffee while the image is installed in your OCI compartment. The installation takes approximately 30 minutes of elapsed time. </COFFEE BREAK>

Logging into your new EM instance (estimated time: 5 minutes)

Once your VM instance is running, click on the instance, copy the Public IP Address, and SSH to the VM instance with your SSH key:

$ ssh -i <private_ssh_key> opc@<public IP Address>

Image 8: Homepage of your OCI instance

Check the status of your newly-installed EM: in the command-line console, switch to the 'oracle' user and then check the OMS status using emctl:

$ sudo su - oracle
$ /u01/app/em13c/middleware/bin/emctl status oms

Image 9: Sample output from the status oms command

Change default passwords.
The passwords for the EM user sysman, the EM Agent Registration Password, the Fusion Middleware user weblogic and the Node Manager can be found in the file below (access as the root user):

$ cat /root/.oem/.sysman.pwd

Change the sysman password by executing the following command:

$ /u01/app/em13c/middleware/bin/emctl config oms -change_repos_pwd

Log in to your new EM Console:

https://<public ip address>:7803/em

Start monitoring. You can use a local agent to monitor your EM OMS and OMR while you get familiar with your installation. To do that, make the following entry in the new host's /etc/hosts file (IP address first, then hostname):

<public IP address of the EM VM> emcc.marketplace.com

(If you do this, you can also use this URL to log in to your new EM:)

https://emcc.marketplace.com:7803/em

What if I need to troubleshoot the EM installation on OCI?

This installation is mostly automated, so we don't expect you'll need to do much troubleshooting. It's important to note that once your EM is installed and running, ongoing lifecycle management/maintenance/patching/upgrade of your Enterprise Manager is your responsibility, just as if you had installed it on-premises. Similarly, if you are planning on using some of the larger OCI shapes, some configuration of the OMS and OMR may be required for your EM to fully take advantage of the increased CPU and Memory footprint...also just as if you had installed it on-premises. Refer to the Cloud Control Administrator's Guide in our Enterprise Manager documentation for best practices.

But if you do need to troubleshoot a failed installation, the installation log is located at /var/log/emgc_install.log. If you find any authentication-related errors in the log file, make sure your OCI dynamic group and policies are set properly, and then rerun the script below after fixing the dynamic group and policies mentioned in the prerequisites:

$ sudo -s
$ cd /root/bootstrap
$ . ./configure_db_and_oms.sh

Image 10: Sample EM on OCI deployment log

Congratulations, your installation is complete!

What's next? In upcoming blog posts, we'll discuss how to get started with monitoring and managing targets, agent installation and agent lifecycle best practices for OCI-hosted Oracle Enterprise Manager instances. You can also attend these free upcoming webinars to learn how to use your EM against specific targets such as Database, Exadata and others.

Two last reminders: Once your EM is installed and running, ongoing lifecycle management/maintenance/patching/upgrade of your Enterprise Manager is your responsibility, just as if you had installed it on-premises. Refer to the Cloud Control Upgrade Guide and Cloud Control Administrator's Guide for best practices in these areas. Similarly, if you are planning on using some of the larger OCI IaaS Compute Options, some configuration of the OMS and OMR may be required for your EM to fully take advantage of the increased CPU and Memory footprint...also just as if you had installed it on-premises. Refer to the Cloud Control Advanced Installation and Configuration Guide for more details.

How did it go? If you've gone through the experience, we'd like to hear from you. How did it go? Suggestions for things we can make simpler? Are you using it for dev/test, production or both? What are you monitoring? Join the conversation in the Oracle Enterprise Manager Forum. Finally, here's the link to the Oracle Cloud Marketplace again. Good luck and keep us posted on your progress. Enjoy your new Oracle Enterprise Manager instance!

Image 16: Enjoy your new Oracle Enterprise Manager instance!
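If you end up in the troubleshooting scenario above, a small helper like this can pre-screen /var/log/emgc_install.log for authentication/authorization failures that usually point at a missing dynamic group or policy. The grep patterns here are our assumption of what such errors look like, not an exhaustive list.

```shell
# Scan an EM install log for auth-related errors (pattern list is a guess).
check_em_install_log() {  # usage: check_em_install_log /var/log/emgc_install.log
  if grep -Eiq 'not.?authenticated|not.?authorized|authentication|403|401' "$1"; then
    echo "possible dynamic-group/policy problem: re-check the Phase 1 prerequisites"
  else
    echo "no authentication errors found"
  fi
}
```

Run it before rerunning configure_db_and_oms.sh, so you fix the IAM setup first instead of waiting through another failed install.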


Maximize Oracle Exadata Performance from SQL to Storage

Contributing Author: Ashish Agrawal, Director, Product Management, Oracle

Oracle Exadata is a high-performance system for hosting the Oracle Database, delivering the highest levels of database performance available. Oracle Exadata Database Machine consists of database servers, Oracle Exadata Storage Servers, an InfiniBand fabric for storage networking and all the other components required to host an Oracle Database. It delivers outstanding I/O and SQL processing performance for online transaction processing (OLTP), data warehousing (DW) and consolidation of mixed workloads. Its high performance comes from unique features such as database-aware storage, which can offload database processing from the database servers to storage, and accelerated Oracle Database processing, which speeds up I/O operations using Flash Cache.

Oracle Enterprise Manager provides the tools to effectively and efficiently manage your Oracle Exadata Database Machine: Oracle Exadata monitoring, patching and provisioning, Oracle Exadata virtualization support, and compliance management. Oracle Enterprise Manager provides a consolidated view of all the Oracle Exadata hardware components and their physical location, with indications of status. It also provides a software view of the databases residing on the machine and their resource consumption on the compute nodes and Oracle Exadata Storage Cells. Still, there are additional challenges in Oracle Exadata management:

Optimize Resource Usage
- On which resources will the database machine bottleneck, based on current load growth from on-boarded databases?
- Which Oracle Exadata systems have available capacity headroom to support consolidation of additional databases?

Maximize SQL performance
- For databases migrated to Oracle Exadata, which SQLs are performing better or worse than expected?
- Which of the poor SQLs regressed due to bad plans vs. excessive waits?
- Is the application performance suffering due to excessive non-CPU or non-I/O waits?

Troubleshooting Problems
- Which databases are reporting the most storage-related problems?
- How do I drill down from the application tier to Oracle Exadata to perform root-cause analysis on my application?

Oracle Management Cloud is a suite of next-generation integrated monitoring, management, and analytics cloud services. Data is automatically analyzed and correlated across all Oracle Management Cloud services, and the resulting insights are made instantly available via intuitive dashboards: real-time diagnostics, capacity planning, operational forecasting, and business analytics. Oracle Management Cloud's unified data platform helps customers improve IT stability, prevent application outages, improve capacity planning, troubleshoot issues, increase DevOps agility and harden security across their entire application and infrastructure portfolio. Oracle Management Cloud enables Oracle Exadata customers to maximize their investments in Oracle Exadata by leveraging machine learning against the full breadth of the operational data set to maximize performance, optimize resources and troubleshoot operational issues rapidly. So, let us see how Oracle Management Cloud helps to overcome the challenges discussed above.

1. Optimize Resource Usage

The Oracle Exadata Analytics application in Oracle Management Cloud IT Analytics pinpoints Oracle Exadata systems expected to run out of resources in the short and long term. It identifies the top growing databases across the fleet, identifies the key resource (CPU, Memory, I/O or Storage) expected to run out and the expected lead time to acquire additional capacity, and uncovers underutilized capacity. Oracle Exadata Analytics provides a unified view of inventory, availability, performance and errors. It identifies database nodes that are CPU- or memory-bound, storage cell performance issues, storage cells approaching maximum performance capacity, and storage cell outliers.
Additionally, it can detect whether the Flash Cache is underutilized or not being used as expected. Oracle Exadata Analytics provides inventory and capacity views across the enterprise. It includes a fleet view of Oracle Exadata systems across the enterprise by system type (Quarter Rack, Half Rack) and the number of Oracle Exadata databases by database version. Further, it provides drilldown into a single Oracle Exadata system to view its hardware and software components: compute, storage and networking. It also shows the configured vs. utilized capacity of CPU, Memory, I/O and Storage.

Using regression models, Oracle Exadata Analytics forecasts capacity growth and compares it with the available headroom. It shows a single view of resource utilization across the entire database fleet, including compute (CPU and Memory) and storage (ASM, Disk). Oracle Exadata Analytics identifies the total lead time to expand capacity using a machine-learning-based forecast: it projects resource growth and compares it with the configured capacity headroom. It uses seasonality to identify growth patterns, e.g. weekly peaks vs. daily peaks. It classifies each Oracle Exadata system based on available lead time: 30, 60, 90 or 180 days of capacity. It can alert users on critical systems that are expected to hit their capacity headroom in 30 days or less. One can set alert rule conditions and specify preferences for receiving notifications when alerts are triggered. By default, the following alert rules are available for use:

- Oracle Exadata IOPS Capacity Headroom Rule
- Oracle Exadata Storage Capacity Headroom Rule
- Host CPU Capacity Headroom Rule
- Host Memory Capacity Headroom Rule

One can receive these alerts as early warnings about the capacity utilization of resource types such as Oracle Exadata IOPS, Oracle Exadata storage, host CPU and host memory.
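To make the lead-time idea concrete, here is a toy illustration (not the Oracle Management Cloud algorithm, which also models seasonality): fit a least-squares line through "day utilization%" samples with awk and report the estimated day at which utilization hits 100%.

```shell
# Toy capacity forecast: linear regression over "day utilization%" pairs.
days_to_capacity() {
  awk '{ n++; sx += $1; sy += $2; sxx += $1*$1; sxy += $1*$2 }
       END {
         slope = (n*sxy - sx*sy) / (n*sxx - sx*sx)
         intercept = (sy - slope*sx) / n
         if (slope <= 0) { print "no growth trend"; exit }
         printf "%.0f\n", (100 - intercept) / slope   # day the fit reaches 100%
       }'
}
```

For example, five daily samples growing from 50% by 5% per day put the fitted line at 100% on day 11; a system like that would fall into the "30 days or less" alert bucket described above.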
One can then take remedial measures to manage capacity, such as moving the workload to another host, adding more capacity, or analyzing the reason for increased resource usage and resolving the issue. Oracle Management Cloud also provides an out-of-the-box Oracle Exadata Dashboard which analyzes Storage Cells and shows those approaching their IOPS and MBPS limits. One can compare Small Read I/O per second for disk type FlashDisk or Hard Disk with the Maximum Disk IOPS for flash or hard disks to identify Storage Cells approaching maximum IOPS capacity, and compare Small Read Throughput for disk type FlashDisk or Hard Disk with the Maximum Flash Disk MBPS for flash or physical disks.

2. Maximize Database & SQL Performance

For maximizing database performance on Oracle Exadata, Database Performance Analytics automatically provides insights into database performance by analyzing database performance degradation, databases with varying workload performance, database inefficiency and top SQL statements. There are some unique database and SQL performance challenges that are difficult to solve using conventional tools and methodology, primarily due to the lack of long-term data and of the means to get insights from that data:

- Identify databases that are not taking full advantage of CPU or I/O due to poor application design or contention, database workloads that are degrading with growth or anomalous workloads showing degradation, and unstable workloads showing a high degree of variability in performance.
- Identify SQLs whose response time is slowly getting worse over time, SQLs and associated databases contributing to high resource usage (CPU or I/O), and application SQLs with a high degree of variability.

The Database Performance Analytics and SQL Analytics applications in IT Analytics help solve these challenges.
The Database Performance Analytics application analyzes database performance based on long-term performance data and provides insights into it. It analyzes performance degradation by response time, identifies inefficient databases by wait time, analyzes the performance of highly variant SQLs and databases, and identifies databases which are becoming increasingly inefficient. It also identifies top SQLs across enterprise-wide databases.

SQL Analytics is a long-term SQL warehouse which stores SQL performance data. The SQL Analytics application analyzes SQL performance problems for enterprise-wide applications across a fleet of databases, and provides trends and key insights into SQL performance issues, thereby helping you to be proactive in avoiding future database performance problems. An Oracle Exadata Database Administrator can find SQLs resulting in poor end-user experience because they are degrading in response time, SQLs resulting in varying end-user performance because their response times vary, SQLs that are inefficient and represent the best tuning opportunities so that the application can perform even better, and SQLs that have multiple execution plans. An Oracle Exadata Database Administrator can also find CPU- and I/O-intensive SQLs. SQL Analytics categorizes the SQLs across databases and applications which need attention. The categories are:

Degrading SQLs: SQLs with more than a 20% increase in SQL response time, based on linear regression. The value of the SQL response time is derived from the total elapsed time divided by the total number of executions for the SQL.

Variant SQLs: SQLs with a relative variability of more than 1.66. The relative variability of a SQL is measured by the standard deviation of the SQL response time divided by the average of the SQL response time. SQLs that have a relative variability of more than three are identified as SQLs with highly variant performance.
A relative variability close to zero indicates stable response times, while greater than 1.66 indicates higher variability in response times. A relative variability greater than three indicates a very high degree of variability in response times.

Inefficient SQLs: SQLs with an inefficiency of more than 20%. The inefficiency percentage of a SQL is derived from the inefficient wait time (wait time other than I/O, CPU, or idle wait time events) divided by the total database time.

SQLs with Plan Changes: These SQLs use multiple execution plans. Typically, a SQL with multiple execution plans may be a source of SQL performance issues leading to varying, unexpected or poor application performance.

All of the above categories of SQLs need an Oracle Exadata Database Administrator's attention. SQL Analytics provides this analysis out of the box and helps them quickly narrow the problem down to a specific database and SQL. Tuning these SQLs proactively will result in better application performance. Out of the box, SQL Analytics answers the question of which SQLs consume the most CPU or I/O across the entire Oracle Exadata estate. This helps the Oracle Exadata Database Administrator quickly narrow down where the most CPU or I/O resources are consumed and, if needed, proactively tune those SQLs or control their resource consumption using Oracle Database Resource Manager or I/O Resource Management (IORM). Out of the box, SQL Analytics also provides fine-grained performance information and insights about an individual SQL. It provides the SQL text, the database name and the host where the SQL is being executed. Further, it provides a Performance Summary, which includes Average Response Time, % change in Average Response Time, Executions Per Hour, Variability and Inefficiency. The Execution Plan Insights show the number of execution plans the SQL has used, the best and worst performing plans, and the execution plans which have consumed the most CPU and I/O.
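The relative-variability measure described above is simple enough to sketch directly: standard deviation of the response-time samples divided by their mean, classified with the 1.66 and 3.0 thresholds. This is only an illustration of the metric's definition, not how SQL Analytics computes it internally.

```shell
# Relative variability = stddev(response time) / mean(response time).
# Reads one response-time sample per line; prints the value and its class.
relative_variability() {
  awk '{ n++; s += $1; ss += $1*$1 }
       END {
         mean = s/n
         var = ss/n - mean*mean; if (var < 0) var = 0   # guard float rounding
         rv = sqrt(var) / mean
         class = (rv > 3) ? "highly variant" : (rv > 1.66) ? "variant" : "stable"
         printf "%.2f %s\n", rv, class
       }'
}
```

A constant series scores 0.00 (stable), while a series that is mostly near zero with one large spike pushes the ratio past the 1.66 threshold.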
The Performance Trend by various measures section includes Average Response Time, Executions Per Hour, Active Sessions, I/O Time, CPU Time and Other Wait Time. SQL Analytics provides an activity breakdown of Active Sessions by I/O Wait, CPU Time and Other Wait, and provides the Response Time Distribution and Response Time Breakdown of the SQL. With access to fine-grained SQL performance statistics for an individual SQL, an Oracle Exadata Database Administrator can find out where the SQL went bad, whether there was an execution plan change, and which execution plans are good and which are bad.

3. Troubleshoot Problems Rapidly

One of the most common issues in IT operations is related to application performance, so when apps are slow, fingers often point at the database and the DBAs. DBAs should be able to answer whether the problem is in the database or somewhere else. The reality is that databases are complex and full of critical information, so database monitoring and troubleshooting need to become a priority. DBAs should work with the rest of the IT department to provide complete visibility over database-related entities, to make sure they are not a bottleneck and to help the team troubleshoot issues faster. When it comes to troubleshooting, logs are one of the most important assets, but unfortunately they are spread across a variety of sources. When an issue happens, DBAs need to access different logs across many sources, which is not easy and sometimes not possible because of a lack of privileges. To address these issues, DBAs can use Oracle Log Analytics as part of Oracle Management Cloud to collect, aggregate and store logs from across all databases (single instance, RAC, ASM, Oracle Exadata). Log Analytics can monitor each separate entity to automatically collect all database-related logs through out-of-the-box log sources.
There are different capabilities to collect and access logs in Log Analytics, such as monitoring log files through the Cloud Agent or extracting data from the Oracle Database through the same agent. Users can also send data to Oracle Management Cloud through syslog or an API. In addition, Log Analytics can automatically parse all events through rich built-in parsers, so collecting and parsing data requires almost zero effort. The Log Analytics solution supports all the components of Oracle Exadata out of the box, including: Database Alert logs, Database Trace logs, Database Incidents, Listener log files, Database Audit logs, Database Audit XML logs, the ASM alert log, ASM Trace logs, the OS message log file, the OS secure log file, etc. Once all the necessary data is in Log Analytics, system administrators can use the different search capabilities (query language or visual builder) to easily search and slice and dice through their data to troubleshoot, gain insight, find unknowns and eventually find the root cause of issues. DBAs can easily get answers to questions such as: How many times has a database restarted? How many login failures happened within the last 24 hours? How many instance shutdowns and crashes happened within the last 7 days? Or they can monitor the hourly trend of logs to identify change over the past 24 hours. The example below shows how easily users can search for a specific "OS Process ID" from all their "Oracle Database Instance" and "Oracle Database Listener" logs within the last 24 hours.

Oracle Exadata Database administrators can easily build custom dashboards to monitor the overall health of their Oracle Exadata, and they can edit the content of the dashboards at any time. They can create dashboards which contain both log-related data and metric-related data from the monitoring services in one place, so they have all the necessary information in one place to correlate events together and get context around the issue in troubleshooting scenarios.
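To give a flavor of the "how many times has a database restarted?" question, here is the manual, single-log version with plain grep against a copy of an alert log. Log Analytics answers this at fleet scale with parsed fields; the "Starting ORACLE instance" string is the message the alert log prints at instance startup.

```shell
# Count instance starts recorded in one database alert log.
count_restarts() {  # usage: count_restarts /path/to/alert_SID.log
  grep -c 'Starting ORACLE instance' "$1"
}
```

The gap between this one-file approach and a centralized, privilege-free search across every database is exactly the problem Log Analytics is solving.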
As an example, the user below is monitoring "ORA-600" and "ORA-7445" errors at the dashboard level to get visibility into the number of individual occurrences. This dashboard also visualizes the error distribution over time, making it easy to drill down into any specific error while troubleshooting.

Database Health Overview Dashboard Through Logs

Dashboards and alerts are typically the first two things that administrators and product owners check to make sure the system is healthy and there are no issues or outages. Users can easily create these kinds of dashboards and visualize what is important to them based on their own use cases. Here is the link to how to create a dashboard in Oracle Management Cloud: https://docs.oracle.com/en/cloud/paas/management-cloud/logcs/creating-custom-dashboards.html.

Oracle Exadata Database administrators can also take advantage of smart alerts in Log Analytics. They can create complex saved-search queries and create alerts to get notified about issues in real time. They can even create anomaly alerts based on their data and use cases, eliminating noise based on a calculated baseline for each search. In the example below, users can easily create an alert for authentication failures based on "Linux Secure Logs", since "Authentication Failure" is a known error category in Oracle Management Cloud Log Analytics.

Topology-aware Log Exploration

The Log Analytics service is not only able to collect logs from any source or entity; it is also aware of the associations between those entities, for example databases, servers, middleware servers, etc. In addition, further associations can be defined to customize the topology view of an environment or application. Database Administrators can easily see all the components that make up their applications and use the topology flow to filter and drill down into specific entities and their associations for troubleshooting and root-cause analysis.
For instance, users can use Log Analytics to quickly find whether there are any errors in their database instances, as shown in the diagram below. The entity flow diagram shows a database instance which is not normal (colored red); users can select that entity, or a combination of entities, to drill down into the relevant logs and search for any specific error, e.g. "ORA*", from the events.

Topology Aware Log Exploration

Out-of-the-Box Deep Oracle Knowledge

One of the unique capabilities of Log Analytics is that all logs automatically get classified into commonly known and used error categories when it comes to Oracle products like Oracle Database, Oracle Exadata and WebLogic Server. All the relevant logs get associated with labels based on out-of-the-box defined conditions. There are many labels built into the product, like "Data Corruption" instead of "ORA-0227", "Connection Error" instead of "ORA-03106" or "Deadlock" instead of "ORA-00060". As mentioned, for Oracle products like databases, Oracle Exadata, WebLogic Servers, etc., all error categories and their trends are shown automatically if they appear in the data coming to Oracle Management Cloud. As shown in the screenshot below, the system automatically finds all the error categories, and the priority, count and trend of each error. Users not only get visibility into these out of the box, but can also use them to drill down into the relevant data for troubleshooting and root-cause analysis use cases. Users can create their own labels based on simple or complex conditions to enrich their dataset and expand the error category labels for different data types.

Out-of-the-box Error Categories

These error categories (labels) help system administrators search faster and more easily in troubleshooting and root-cause analysis use cases.
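A toy version of this labeling idea, using only the three example mappings from the text, looks like a lookup from ORA code to error-category label (the real product derives labels from rich parsing conditions, not a static table):

```shell
# Map an ORA code to its error-category label (examples from the text only).
ora_label() {
  case "$1" in
    ORA-0227)  echo "Data Corruption" ;;
    ORA-03106) echo "Connection Error" ;;
    ORA-00060) echo "Deadlock" ;;
    *)         echo "Uncategorized" ;;
  esac
}
```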
For instance, users can search for all "Memory Error" events across log sources in their environment just by searching for label = "memory error", as shown in the first screenshot below, or search for label = deadlock, as in the second screenshot.

Search for Error Categories Across Different Data Types
Search for Error Categories Across Different Data Types

Log Analytics Machine Learning "Cluster"

Using the Oracle Log Analytics Machine Learning "Cluster" capability, users can reduce millions of log events to a smaller set of patterns based on common signatures, rapidly troubleshoot problems by quickly identifying outliers and potential issues, see the trend of each clustered group, and correlate clustered events that show similar trends. Sometimes there are unknown issues, and DBAs have a hard time finding the best place to start the investigation and narrowing the scope of the search down to only the relevant data. Machine Learning Clustering empowers users to automatically find potential issues and outliers (events that have happened only once). These insights are all provided out of the box, which is especially valuable for faster troubleshooting and getting to the root cause when there is no indication or obvious information around the issue.

Logs Clustering

All of the above helps the Oracle Exadata Database administrator rapidly troubleshoot Oracle Exadata problems. While most enterprise IT systems provide a reactive approach to system monitoring, capacity utilization, workload performance and issue resolution, Oracle Management Cloud delivers an automated, proactive and end-to-end integrated management solution to customers. Oracle Management Cloud for Oracle Exadata provides a complete management solution for Oracle Exadata deployments: Oracle Exadata, Oracle Exadata Cloud Machine and Oracle Exadata Cloud Service.
Oracle Management Cloud for Oracle Exadata enables customers to get complete Oracle Exadata visibility, streamline the capacity planning process, and automate database performance issue identification, while helping to proactively troubleshoot issues. It requires lower management effort and cost than comparable solutions, and administrators get the best prebuilt functionality for Oracle Exadata, Oracle Database, and applications. One can conclude that Oracle Management Cloud provides comprehensive solutions for Oracle Exadata, enabling Oracle Exadata administrators and DBAs to maximize performance, optimize capacity, and rapidly troubleshoot problems. Additional Resources: Webcast: Maximize Oracle Exadata Performance from SQL to Storage, Thursday March 14, 2019 @ 9:00 a.m. PST. Join us for an Oracle webcast where our experts will provide you with the tips and tricks you need to get 360-degree insight into Oracle Exadata's performance. Register Now!

Contributing Author: Ashish Agrawal, Director, Product Management, Oracle


Simplify Management of Pluggable Databases: Complete Lifecycle Management of Oracle Multitenant Databases

Guest Contributor: Martin Pena, Senior Director, Product Management, Oracle

Oracle Enterprise Manager Database Lifecycle Management Pack comes with out-of-box Deployment Procedures to provision, clone, and patch the various configurations of the Oracle Database. The Management Pack offers new capabilities that simplify support for the entire lifecycle of pluggable databases, including migration, plugging, and unplugging. Its features include pluggable database (PDB) provisioning and management from the self-service portal, PDB patching and upgrades, and PDB relocation to new platforms. Recent updates to the solution include support for Oracle Database 18c, Fleet Maintenance for PDB patching and upgrades, and new PDB relocation features. The intuitive Oracle Enterprise Manager user interface and workflows enable DBAs to easily master provisioning new PDBs within a container database (CDB), migrating non-CDBs as new pluggable databases, and plugging and unplugging PDBs. Oracle Enterprise Manager extends its Database as a Service (DBaaS) capabilities to multitenant databases, providing rapid provisioning via the built-in self-service portal. Oracle Enterprise Manager's DBaaS capabilities let administrators identify pooled resources, configure role-based access, and define the service catalog and the related chargeback plans. Oracle Enterprise Manager is the industry's most widely trusted product for administration and management of Oracle Database. As we see continuous adoption of Oracle Multitenant at large enterprise customers, we would like to introduce these customers to Oracle Enterprise Manager for lifecycle management of multitenant databases. The Management Pack offers provisioning, patching, upgrade, cloning, unplug/plug, and relocation operations for Oracle Database.
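For reference, the unplug/plug operations that these Deployment Procedures automate correspond to standard Oracle Multitenant SQL. The sketch below assembles the typical statement sequence; the PDB name and XML manifest path are illustrative placeholders, and in practice the Management Pack generates and runs the equivalent steps for you:

```python
# Hypothetical PDB name and manifest path, for illustration only.
pdb_name = "sales_pdb"
manifest = "/u01/app/oracle/manifests/sales_pdb.xml"

# Unplug from the source CDB (run as SYSDBA while connected to the CDB root).
unplug_steps = [
    f"ALTER PLUGGABLE DATABASE {pdb_name} CLOSE IMMEDIATE",
    f"ALTER PLUGGABLE DATABASE {pdb_name} UNPLUG INTO '{manifest}'",
    f"DROP PLUGGABLE DATABASE {pdb_name} KEEP DATAFILES",
]

# Plug into the target CDB, reusing the existing datafiles.
plug_steps = [
    f"CREATE PLUGGABLE DATABASE {pdb_name} USING '{manifest}' NOCOPY TEMPFILE REUSE",
    f"ALTER PLUGGABLE DATABASE {pdb_name} OPEN",
]

for stmt in unplug_steps + plug_steps:
    print(stmt + ";")
```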
We cannot stress enough that the basic design principle behind all of these capabilities is to achieve enterprise scale and operational efficiency for our customers. To learn more about Oracle Enterprise Manager Database Lifecycle Management Pack, we invite you to watch the expert webcast listed below. Webcast: Simplified Lifecycle Management for Oracle Database 18c Multitenant. Discover how to manage pluggable database provisioning, patching, and upgrades from the self-service portal, and find out how to relocate pluggable databases to new platforms. In the session, our experts review recent updates to the Management Pack, including fleet maintenance for pluggable database patching and upgrades and new pluggable database relocation features. We also cover the rich features offered by Oracle Database Lifecycle Management Pack as well as the Oracle Enterprise Manager Database as a Service offering for Oracle Multitenant.



Webcast Series: Oracle Enterprise Manager and Oracle Management Cloud

These webcasts are brought to you by the Oracle Enterprise Manager and Oracle Management Cloud development organization. We are presenting some of the best sessions from Oracle OpenWorld San Francisco in case you missed them. Please check out the Oracle Enterprise Manager and Oracle Management Cloud blogs for more information.

Upcoming Webcasts

Efficient Troubleshooting Using Machine Learning in Oracle Log Analytics (May 9, 9am PT): Oracle Log Analytics provides easy access to search, correlate, cluster, and analyze your log data. In this webcast, we'll discuss how to identify and detect anomalies within your business flows and applications using machine learning. Register Now

Extreme Database Optimization: Simple Steps to Maximize Performance (May 23, 9am PT): Oracle Database and Oracle Enterprise Manager continuously introduce new and exciting capabilities that help improve DBA productivity. This session covers performance diagnosis and tuning and the new capabilities in Oracle Database and Oracle Enterprise Manager, such as Performance Hub, Automatic Workload Repository, multitenant enhancements, and many others. Register Now

Automate STIG, CIS, and Custom Oracle Database Security Standards (June 6, 9am PT): Reduce Oracle Database attacks through automated system hardening based on industry standard benchmarks. In this session, we'll explore lifecycle best practices including identifying assets, prioritizing vulnerability regression, and remediation management. Register Now

Watch On-Demand

Simplified Lifecycle Management for Oracle Database 18c Multitenant: In this session, learn about Oracle Database pluggable database patching, upgrading, and provisioning from the self-service portal. Learn about fleet maintenance for pluggable databases and new pluggable database relocation features. Watch Now | Read the Blog

Learn to Troubleshoot Oracle Database and Oracle Exadata using Oracle Log Analytics: In this session our Oracle experts discuss how Oracle Database and Oracle Exadata administrators can use Oracle Log Analytics to troubleshoot performance issues faster. Watch Now | Read the Blog

Maximize Oracle Exadata Performance from SQL to Storage: This session aims to provide you with the tips and tricks you need to get 360-degree insight into Oracle Exadata's performance. Watch Now | Read the Blog



Reasons to upgrade to Enterprise Manager 13.2

Before, during, and after OpenWorld, I received numerous queries about the release of Enterprise Manager 13.2. This is an important release for Enterprise Manager 12c customers whose Premier Support expires this month and who traditionally wait for release 2 of any product before upgrading, and for customers who have already upgraded to 13c but want their critical issues addressed. There are also specific capabilities sought by specific sets of customers; for example, security-sensitive customers have been asking for TLS 1.2 support. So here we are, announcing the release of Enterprise Manager 13.2. The primary focus of this release was the security, scalability, and availability of the platform. Some long-pending requirements have been addressed in the release. First comes support for TLS 1.2: the communication between the agent and the OMS, between the agent and the targets, and, for target types such as databases, between the OMS and the targets is now secured with the TLS 1.2 protocol. Second, Enterprise Manager 13.2 supports IPv6 addresses, allowing targets to be managed on IPv6-enabled hosts. Lastly, to ensure higher availability of the Enterprise Manager platform, the "Always On" Monitoring feature has now been certified to run on a different host than the OMS. Enterprise Manager 13.2 continues to improve upon the 13c theme of unified hardware and software management. Highlights in this area include better engineered systems and infrastructure management, with support for Exadata X6-2 and X6-8 and Oracle VM 3.4 (and therefore the latest PCA models). A couple of other improvements in infrastructure management worth mentioning are seamless ASR integration for hardware telemetry and Solaris compliance checks. Management of database and middleware platforms continues to evolve in this release.
The fleet-maintenance feature, first introduced in 12cR5, can now support a wider range of database configurations, making patching really simple for cloud-scale environments. For Fusion Middleware, Enterprise Manager now supports WebLogic 12.2.x, enabling customers to enjoy the benefits of multitenancy for their WebLogic platform. Some of the benefits can also be enjoyed in a hybrid environment, as on-prem targets as well as DBCS, JCS, and SOACS targets can be managed by a single Enterprise Manager instance. Visit the OTN page to get more information on Enterprise Manager 13c.
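On the client side, you can sanity-check the TLS 1.2 requirement with Python's standard ssl module: build a context that refuses anything older than TLS 1.2 and use it when connecting to an OMS port (the hostname and port below are placeholders, not taken from any specific installation):

```python
import ssl

def tls12_minimum_context():
    """Client context that refuses handshakes below TLS 1.2."""
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx

ctx = tls12_minimum_context()
# Example use (placeholder host/port):
#   import socket
#   with socket.create_connection(("oms.example.com", 4903)) as sock:
#       with ctx.wrap_socket(sock, server_hostname="oms.example.com") as tls:
#           print(tls.version())
```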



Enterprise Manager 13.2 Now Available

Oracle Enterprise Manager Cloud Control 13c Release 2 (13.2) is now publicly available. This release marks an important milestone in the ongoing evolution of Oracle's on-premises systems management platform, which provides a single pane of glass for customers to manage the Oracle stack, whether in their on-premises data centers or in the Oracle Cloud. Through deep integration with Oracle's product stack, Oracle Enterprise Manager provides market-leading management and automation support for Oracle applications, databases, middleware, hardware, and engineered systems. Highlights of the release include, but are not limited to: more secure and scalable management with support for the TLS 1.2 and IPv6 protocols; better engineered systems and infrastructure management with support for Exadata X6-2 and X6-8, Oracle VM 3.4, ASR integration, and out-of-box Solaris compliance checks; and improved hybrid cloud management with automated service discovery of public cloud (DBCS and JCS) assets and with support for Fusion Middleware 12.2.x in JCS and SOACS environments. Additionally, the release contains critical fixes for monitoring and managing the Oracle ecosystem. You can find more information about this release on OTN. Please check back for additional resources. For the latest download, go to Oracle Enterprise Manager Downloads.



Some Exciting Customer Presentations!

While I always enjoy getting in front of customers to present on some new and really cool functionality in Enterprise Manager Cloud Control, it's even better to be in the audience listening to actual customers doing just that! There are a couple of those presentations coming up on Thursday this week at OOW. Unfortunately I can't be there to listen in, but if you're interested in learning how to provide scalable and flexible patching solutions, make sure you get along to these two presentations and learn directly from these customers! Pivot from Manual to Scalable with Oracle Database Lifecycle Management Pack [CON2494] This presentation is on at 10:45 am on Thursday in Moscone South - 305. Sarah Brydon and Ashwin Vaidya, both from PayPal, are the presenters. Are your skilled DBAs spending time installing software and applying patches instead of growing your business? Come to this session to learn how PayPal moved from manual execution of homegrown scripts for Oracle Real Application Clusters installs and patching to leveraging the power of Oracle Database Lifecycle Management Pack. Oracle Real Application Clusters provisioning moved from days to an hour per node, with improved build consistency and quality. Security and reliability patching scaled to deliver patches across the enterprise faster and more consistently than ever before. The session includes best practices for implementing provisioning and discusses PayPal's experience with patching plans and the new fleet maintenance feature in Oracle Enterprise Manager 13c. Database Patch at 1,000 Scale: Nationwide's Oracle Enterprise Manager Fleet Maintenance Odyssey [CON2260] This session is on at 12 noon on Thursday, in the same room (Moscone South - 305). The presenters this time are Gary Henderson and John Norman, both from Nationwide.
Database currency is becoming an increasingly important part of every company’s security posture, but there are significant challenges with patching at scale—say, 1,200 databases. This session is on how Nationwide followed Oracle’s best practice (out-of-place patching) and used Oracle Enterprise Manager’s command-line interface with the new switch database capability to accomplish patching at scale and position the company for forthcoming fleet maintenance. So there you have it - two fantastic presentations, learning from real customer experiences.  It doesn't come much more realistic than that! 



Database Lifecycle Management and Database Cloud Management at OOW2016

Every year just before OOW, the product teams produce what we call Focus On documents that show you all the sessions (both customer- and Oracle-presented) around a particular area. This year is, of course, no different in that respect, so if you want to know what's coming up in the Enterprise Manager space, you can do no better than going directly to our Focus on Enterprise Manager document. Likewise, there is a Focus on Oracle Management Cloud document for those interested in the OMC space. These are living, breathing documents, so as changes are made (whether they be changes in time, room allocation, speaker, or whatever) they will be automatically propagated to the relevant Focus on document, so remember to come back and check on them regularly. In this post, however, I wanted to focus specifically on Database Lifecycle Management and Database Cloud Management sessions at OOW this year. So let's drill into those sessions in a little more detail. Please note I've left the rooms / times off the listing below as they are still subject to change. See the Focus on Enterprise Manager document for the most up-to-date information on these (I've included the session IDs for ease of cross-reference). Also note the Hands-on Labs, where you can get practical experience using Enterprise Manager functionality in different areas, always fill up quickly, so it's first in, best dressed for those! Harden and Standardize Your Database Configurations Across Clouds [CON6978], a fantastic presentation by Martin Pena (Senior Director, Product Management, Oracle), along with Tim Albrecht (Database Administrator, Wells Fargo) and Madhav Ravipati (Lead/Supervisor Database Administrator, PG&E). Do you think your database environments are secure? This presentation examines how to really be sure by using Oracle Enterprise Manager 13c database lifecycle management functionality.
It provides the ability to evaluate various targets, along with Oracle's engineered systems, as they relate to business best practices for configuration, security, and storage. In this session, see the tools available to enforce standardization on your IT landscape, including Oracle Database 12c Security Technical Implementation Guide checks, ORAchk/EXAchk functionalities, and ways to build gold images of your environment to check configuration management, drift, and consistency. Hybrid DBaaS with Oracle Enterprise Manager: Comcast's 7 Goals for the Cloud [CAS1595], presented by Bala Kuchibhotla (VP of Software Development at Oracle), Jayson Hurd (Database Administrator, Comcast) and Tejas Gohil (Manager, Comcast). Comcast has a footprint of more than 2,500 Oracle databases running on 1,500 servers. Oracle Enterprise Manager's cloud management capabilities have revolutionized Comcast's service delivery method, cutting process times and operational costs dramatically. Comcast has shifted its delivery model from a ticket-based, mostly manual process to a fully automated database-as-a-service (DBaaS), portal-based service catalog model. In this session Comcast demonstrates how it was able to cut costs and increase compliance and efficiency by embracing the following: a fully automated environment, self-service, controlled delegation of routine tasks and maintenance, reduction in mean time to repair, rapid provisioning, a common interface, and maximized consolidation. Oracle Enterprise Manager 13c: Unifying Self-Service PaaS Across Multivendor Cloud [CAS2348], a customer presentation by Adrian Turner (Infrastructure Architect, Maersk). This session demonstrates Maersk's approach to providing quota-limited self-service capability across multiple vendor clouds, using Oracle Enterprise Manager 13c's cloud management pack.
It explores the following: the infrastructure-as-a-service delivery approach (templates, automation, and provisioning); Oracle Enterprise Manager 13c configuration (agent deployment, administration hierarchies, monitoring collections, and configuration collections); architecture compliance enforcement through the Oracle Enterprise Manager 13c compliance framework; Oracle Enterprise Manager cloud management pack configuration and allocation to database and middleware pools; and self-service delivery via REST APIs. Deep Dive: Snap Clone and Data Refresh Solutions using Oracle Enterprise Manager [CON6982], presented by Janardhana Korapala (Database Admin Consultant, Dell Inc.), Subhadeep Sengupta (Consulting Product Manager, Oracle), and Martin Pena (Senior Director, Product Management, Oracle). Creating private copies of large multiterabyte databases is a critical requirement for many customers in their efforts to build and deliver high-quality applications. The challenge is to provide these copies quickly and without adding additional storage costs. In addition, these copies may need to be continuously refreshed to match production data. In this session, learn how the Oracle Enterprise Manager Snap Clone feature performs instant cloning of large databases while saving more than 90 percent on storage costs; fits perfectly with your existing investments in storage, servers, and engineered systems; and is best suited for performing thin cloning even within Oracle Cloud instances. Expanding DBaaS Beyond Data Centers: Hybrid Cloud Onboarding via Oracle Enterprise Manager [CON6985], presented by Bala Kuchibhotla (Vice President, Software Development, Oracle), Bharat Paliwal (Senior Development Director, Oracle), and Subhadeep Sengupta (Consulting Product Manager, Oracle). Hybrid clouds are becoming standard for building next-generation SaaS applications. The challenge for IT is how to run and manage database operations supporting both environments.
Oracle's DBaaS hybrid cloud solutions simplify coexistence and adoption of Oracle Cloud deployments. They allow transparent movement of workloads using the same architecture, standards, products, and management. In this session, learn how Oracle Enterprise Manager 13c enables DBAs to: migrate databases to, within, and from Oracle Cloud on demand; implement a unified service catalog that includes schemas, databases, and pluggable databases; and obtain a single-pane-of-glass view and perform lifecycle management tasks across database clouds. Hands-on Lab: Harnessing the Value of Hybrid Cloud: Complete Management of Cloud Services [HOL7631]. In this hands-on lab, learn how Oracle Enterprise Manager 13c manages the full range of Oracle cloud services, including Oracle Compute Cloud Service, Oracle Database Cloud Service, Oracle Java Cloud Service, and Oracle Cloud Machine. Experience firsthand how cloud services are managed by the same Oracle Enterprise Manager tools that customers use on premises to monitor, provision, and maintain Oracle Database, Oracle engineered systems, Oracle Applications, Oracle Middleware, and a variety of third-party systems. Discover the latest enhancements, including gold image provisioning and standardized software deployment, data cloning and "continuous refresh" from production, and end-to-end provisioning on Oracle Compute Cloud Service. Hands-on Lab: Relief from Chronic Patching Pain: Oracle Enterprise Manager Fleet Maintenance [HOL7632]. Whether you are patching a handful or even thousands of databases, long downtimes and multiple maintenance windows can cause pain for DBAs and application owners. Ease the suffering and improve the quality of life for DBAs by using the Oracle Enterprise Manager fleet maintenance solution. It allows administrators to patch and upgrade database software with minimal downtime and perform updates at scale across your entire database estate, significantly reducing the time required for maintenance activities.
Now available for both traditional database estates as well as cloud implementations, the Oracle Enterprise Manager fleet maintenance solution is a must-have for managing databases at scale. Attend this session to learn more. So there you have it! Plenty to see and do this year with Enterprise Manager at #OOW2016!



Oracle Enterprise Manager 13c: Centralized Administration and Auditing of Oracle WebLogic Server

Oracle Enterprise Manager Cloud Control 13c enables administrators to perform administration operations against Oracle WebLogic Server as well as to view the audit records for those operations. Traditionally, administrators were required to use the WebLogic Server Administration Console or the Oracle Enterprise Manager Fusion Middleware Control console to administer their domain. Now, however, administration and configuration operations are available directly from the Oracle Enterprise Manager Cloud Control 13c console. From the Cloud Control console, administrators can do the following: leverage Change Center to lock a domain configuration prior to making configuration changes; configure domain, cluster, server, machine, and multitenancy settings; manage JDBC data sources; access the System MBean Browser to view, invoke, or edit MBeans; start, stop, or restart middleware components; schedule and track execution of WLST scripts; and view the audit trail of WebLogic operations performed from either the Oracle Enterprise Manager Cloud Control console or the Oracle Enterprise Manager Command Line Interface (EMCLI). Having a single tool for monitoring, administering, and viewing audit records for Oracle WebLogic Server not only simplifies an administrator's job and increases productivity, but also maintains the security of a middleware environment. To see first-hand some of the Oracle WebLogic Server administration and auditing capabilities available from Oracle Enterprise Manager Cloud Control 13c, watch and listen to the recorded demonstration available on the Oracle Learning Library here. Additional information can also be obtained from the Oracle Enterprise Manager Cloud Control 13c Online Documentation Library: how to enable auditing of WebLogic Server operations; which WebLogic Server operations can be audited; administering middleware targets; and configuring and managing auditing.



Oracle Enterprise Manager 13c: Track Compliance with the Oracle WebLogic Server 12c Security Technical Implementation Guide (STIG)

Oracle Enterprise Manager Cloud Control 13c enables administrators to track compliance of their WebLogic Server environments with the Defense Information Systems Agency's Oracle WebLogic Server 12c Security Technical Implementation Guide (STIG). The Security Technical Implementation Guide contains technical guidance to "lock down" information systems and software that might otherwise be vulnerable to a malicious computer attack. Originally published by the Defense Information Systems Agency for the Department of Defense, the STIG for Oracle WebLogic Server 12c can be used by administrators from the public or private sector to ensure their WebLogic Server environments are sufficiently secure and not vulnerable to attack. Beginning with Oracle Enterprise Manager Cloud Control 13c Release 1 and the Fusion Middleware Plug-in Release, a predefined compliance standard named "Security Technical Implementation Guide (STIG Version 1.1) for Oracle WebLogic Server 12c" is available from the Cloud Control console. Administrators can associate this compliance standard with their 12c WebLogic domains to ensure that those domains adhere to strict security guidelines. The screenshot below highlights one of the 72 predefined rules included in the Security Technical Implementation Guide (STIG Version 1.1) for Oracle WebLogic Server 12c compliance standard. The next screenshot highlights a violation event for one of the automated rules included in the same compliance standard. To see first-hand how to use Oracle Enterprise Manager Cloud Control 13c to track compliance with the Oracle WebLogic Server 12c STIG, watch and listen to the recorded demonstration available on the Oracle Learning Library here.
Additional information can also be obtained from the Oracle Enterprise Manager Cloud Control 13c Online Documentation Library: identify the automated rules included in the Security Technical Implementation Guide (STIG Version 1.1) for Oracle WebLogic Server 12c compliance standard; identify the rules in the standard which need to be verified manually; understand how to use the STIG-based compliance standard; and access the actual Oracle WebLogic Server 12c STIG, Version 1, Release 1, published by DISA.



DBaaS RESTful API Cook Book

Having a GUI tool like Enterprise Manager Cloud Control to manage your entire data center is an imperative for most enterprises these days. But in the cloud paradigm, you must also have a uniform API that can be used to tailor the cloud to your business processes and economic models. That API is known as the REST API, and services based on it are known as RESTful services. With the latest release of Oracle Enterprise Manager, enhancements have been made to the REST API to facilitate the administration as well as the operational use of features in the Cloud Pack for Oracle Database. You are now able to fully automate the administration of your cloud assets. The REST API also facilitates the self-service model for private cloud, enabling you to fully automate the request and governance of database needs by a given line of business (LOB). Through the use of the REST API, you can streamline the process of delivering critically needed databases to the LOBs at a fraction of the current operational expense (OPEX). Some critical business benefits are as follows:

Standardization: With the implementation of the DBaaS REST API you will be able to reduce OPEX by standardizing the provisioning of databases in a fully automated fashion. A database request from different parts of your organization would be self-service and based on a standard build of the database specific to the binaries, structure, and data.

Centralization: With the streamlining of database provisioning via the DBaaS REST API, you can centralize the overall management and lifecycle of the entire private cloud landscape and ensure that service levels are maintained.

Simplification: With a central and standard delivery model, you will also simplify the entire configuration landscape. This will reduce OPEX by reducing the number of Oracle Homes (OHs) and databases being managed.
Consolidation of OHs and databases will also contribute to a reduction in capital expense (CAPEX) by ensuring that you maximize the use of existing resources in a data center, or that pending purchases are fully used. Finally, the simplification and consolidation will also reduce CAPEX by reducing environments in the data center itself. This translates to reduced power and A/C and a reduction in the real estate required to satisfy the business needs, while ensuring that IT can provide the proper environments, maintaining world-class standards for the business. Recently, the Enterprise Manager product management team released a cookbook that provides direction and requirements for the execution of DBaaS specific to the use of the REST API provided with the Cloud Pack for Oracle Database. That cookbook is available on OTN. If you want more information on DBaaS, you can find white papers, demonstrations, case studies, and more here. Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter. Download the Oracle Enterprise Manager Cloud Control 12c Mobile app.
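As a flavor of what automation against the REST API looks like, the sketch below composes a self-service database request as JSON. The template URI, field names, and parameters are illustrative placeholders, not the real resource model; consult the cookbook on OTN for the actual endpoints and payloads:

```python
import json

# Illustrative request body for a self-service database creation;
# "based_on", "name", and "params" are hypothetical field names.
request_body = {
    "based_on": "/em/cloud/dbaas/servicetemplate/1",  # hypothetical template URI
    "name": "finance_db",
    "params": {
        "username": "fin_admin",
        "database_sid": "findb",
    },
}

payload = json.dumps(request_body)
print(payload)
# An HTTP client would POST this payload to the appropriate zone resource,
# with authentication and the media type required by the API.
```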



Enterprise Manager 13c: What’s New in Database Lifecycle Management

A month back, we announced the release of Enterprise Manager 13c. Enterprise Manager 13c includes several improvements over Enterprise Manager 12c, which had itself seen pretty successful adoption over the last four years. Enterprise Manager 12c rested on the key themes of enterprise-grade management, stack management, and cloud lifecycle management. The 13c version simply bolsters those pillars. In the area of database lifecycle management, we have delivered some key features that span both the depth and the width of the data center. In terms of depth, it leverages the unification of hardware and software management to manage Exadata better. In terms of width, it offers simpler and more scalable cloud management. Over the next few paragraphs, I will touch upon a few features that highlight the above. Exadata Patching: The patching of Exadata has been a laborious process for our key customers. It involves multiple components across multiple nodes, and often multiple administrators in the process. In Enterprise Manager 13c, some of the important hardware management features have been assimilated into the Cloud Control product. In the course of developing the hardware (aka infrastructure) management features, we have modeled the infrastructure target types in Enterprise Manager: servers, storage, network, and VMs. This also enables more sophisticated management of engineered systems. For example, Enterprise Manager can now patch an Exadata stack: storage servers, InfiniBand drivers, compute nodes, grid infrastructure, and, as before, the databases on top. The patching application offers the facility to run the pre-flight checks and monitor the logs from a single place (imagine having to manually monitor the patch execution logs for grid infrastructure, operating system, and storage for all the compute and storage cells in a rack).
Enhancements in Database Compliance Management: There have been some significant enhancements in database compliance management. These include: a) Integration of Oracheck: A long-standing requirement that should thrill Exadata customers is the integration of Oracheck (including Exacheck) into the Compliance framework of Enterprise Manager. Exacheck had formerly existed as a standalone utility, but customers have asked for at-scale execution over all the Exa's in a data center, with the ability to flag, suppress or escalate any violation. They can also generate reports on overall health and share them with other administrators and IT Managers. b) Integration with corrective action: Another much sought-after feature has been to integrate compliance violations with corrective actions. For example, a DBA may want to lock an account with a default username and password. Enterprise Manager 13c makes this a reality. Configuration Drift and Consistency Management: When it comes to managing a cloud horizontally at scale, the configuration comparison feature of Enterprise Manager has always been a DBA's favorite. The new configuration drift management feature evolves it further by enabling administrators to proactively spot the "needle in the haystack" among the hundreds and thousands of members that can be part of a cloud or even span multiple clouds (for example, on-prem and Oracle Cloud). The DBA can set up a golden standard with a selected list of configuration parameters and make sure that all the members of a cloud or a pool of databases subscribe to that standard. In case of any deviation, the DBA will be automatically notified. Consistency management can be applied across the different phases of the lifecycle (for example, the test database should be the same as production) or across the different members of a system (for example, all the nodes of an Exadata should be exactly alike).
Additional Info:
- Database Lifecycle Management OTN Page
- New! Data Sheet: Database Lifecycle Management
- New! Screenwatch: Manage Consistency at Scale with Database Lifecycle Management



Oracle Enterprise Manager 13c: New and Exciting Features

Oracle Enterprise Manager 13c: What's New

We just announced the release of Oracle Enterprise Manager 13c. Well, if the number 13 makes you jittery, rest assured that this new release is an improvement on Oracle Enterprise Manager 12c, which has witnessed unprecedented adoption among customers worldwide. Oracle Enterprise Manager 12c rests on the key themes of enterprise-grade management, stack management and cloud lifecycle management. The 13c version simply bolsters those pillars. Our first goal in this release has been to make monitoring cloud-scale and resilient. Oracle Enterprise Manager today is the nerve center of IT operations at thousands of enterprises, our very own public cloud operations being one of them. Millions of assets in Oracle's SaaS and PaaS public cloud operations are managed by Enterprise Manager round the clock, which requires that Oracle Cloud's own Oracle Enterprise Manager instance stay up and running during unplanned and planned downtime windows. Oracle Enterprise Manager 13c therefore introduces "always on" monitoring, where a small companion monitoring application continues to receive critical alerts from the agents out-of-band while the management server is down. One can start the application, take Oracle Enterprise Manager down for patching and continue to be alerted on critical events. Speaking of planned downtime windows, another exciting feature being introduced is "notification blackouts", which lets administrators monitor their targets during maintenance windows while notifications from critical alerts are turned off. When it comes to stack management, the BIG news for our customers is the unification of hardware and software management. Ever since Oracle acquired Sun, we have promised converged systems management, but until now customers have been managing hardware through a separate tool called Ops Center.
In Oracle Enterprise Manager 13c, some of the important hardware management features have been assimilated into the Cloud Control product. This not only benefits platform administrators, who can now drill down into infrastructure problems easily; it also benefits system and storage administrators, because they can enjoy the scalability, availability and security framework features of Cloud Control. As an example, critical incidents in the hardware layer can now be published to a third-party ticketing system using the connector framework, something that Oracle hardware customers have requested for a long time. As part of developing the hardware (aka infrastructure) management features, we have modeled the infrastructure target types in Enterprise Manager: servers, storage, network and VMs. This also enables more sophisticated management of Engineered Systems, including the ability to patch a complete Exadata and Exalytics stack. The patching application offers the facility to run the pre-flight checks and monitor the logs from a single place (imagine having to manually monitor the patch execution logs for grid infrastructure, operating system and storage for all the compute and storage cells in a rack). Another enhancement that should thrill Engineered Systems customers is the integration of Exacheck into the Compliance framework of Enterprise Manager; this lets them generate automated notifications and reports for any violation in their Exadata configurations. One request we always received from our database customers was to enable fine-grained access control. Most organizations have multiple personas (central DBAs, application DBAs, developers, etc.) and would like segregation of duties among them. For example, a developer may be allowed to tune the application but not patch the underlying database. Oracle Enterprise Manager 13c enables fine-grained privileges for controlling access to specific features.
In terms of cloud management, the release focuses on three key aspects: the ability to perform database consolidation planning for various scenarios, the ability to manage configuration drift at scale, and improved data lifecycle management across production and test instances. The new Consolidation Workbench provides an end-to-end workflow with three distinct steps:

- What-if analysis on various consolidation scenarios: commodity to Engineered Systems, non-multitenant to multitenant databases, and on-prem to Oracle Cloud
- The actual enactment of the consolidation, by integrating with the provisioning features
- Post-consolidation testing using the SQL Performance Analyzer

The new configuration drift management feature enables administrators to proactively spot the "needle in the haystack" among the hundreds and thousands of members that can constitute a cloud, or even across multiple clouds. And last but not least, Snap Clone customers will benefit from the ability to keep test databases in sync with production. It would be an incomplete disclosure to limit Oracle Enterprise Manager 13c's capabilities to the above features. There are many more new features (see the full list) as well as hundreds of enhancements to existing features. I am certain that IT administrators and consultants will be looking for a top-12 feature list. For them, here's the summary:

1. Gold image based agent lifecycle management (view screenwatch)
2. "Always on" monitoring (view screenwatch)
3. Notification blackouts for managing target downtime windows (view screenwatch)
4. Cloud-scale configuration drift management
5. Hardware and infrastructure management (view screenwatch)
6. Engineered Systems patching
7. Exadata and Exalogic-Nimbula VM provisioning
8. Exacheck integration into the compliance framework
9. Flexible access control for database management (view screenwatch)
10. Database Consolidation Workbench (view screenwatch)
11. Continuous data refresh between production and test databases
12. Unification of Middleware Consoles into Enterprise Manager 13c (view screenwatch)

To summarize, Enterprise Manager 13c reinforces its ability to manage along two dimensions: vertically, across the stack, and horizontally, within and across clouds. So, number 13 indeed sounds lucky for Enterprise Manager customers, right?



Oracle OpenWorld Sneak Preview: Oracle’s Newest PaaS Offerings for IT Operations Management – Oracle Management Cloud

OpenWorld is always a very exciting time for Oracle and our customers, and every year we reveal some of our newest offerings. We sat down with Prakash Ramamurthy, Senior Vice President of Oracle Management Cloud, to discuss this year’s conference. Q: Tell us about what’s happening this year at OpenWorld for Oracle Management Cloud? PR: This year, we have a rare and fantastic opportunity to introduce an entirely new category of systems management cloud services for Oracle customers – the Oracle Management Cloud. Oracle Management Cloud is a suite of next-generation, integrated monitoring, management and analytics solutions delivered as a service on Oracle Cloud. It is designed for today’s heterogeneous environments across on-premises, Oracle Cloud and third-party cloud services. Q: Why will Oracle Management Cloud be so valuable to customers? PR: Customers today are struggling with the need to reduce time-to-market for new applications, even as those applications are getting more and more complex and distributed. Whereas a new offering five years ago might have been supported by a monolithic, on-premises infrastructure, today that offering may be composed of half a dozen cloud services from several different providers. Oracle Management Cloud was purpose-built to provide the new level of operational efficiency and visibility that these rapidly-changing environments require. Q: What services will be available in the Oracle Management Cloud? PR: We will be offering three services out of the gate focused on some of the highest pain points for customers today. Oracle Application Performance Monitoring Cloud Service provides development and operations teams with the information that they need to find and fix application issues fast, via deep visibility into your application performance from end user experience, through application server requests, and down to application logs. 
Oracle Log Analytics Cloud Service monitors, aggregates, indexes, and analyzes all log data from applications and infrastructure – enabling users to search, explore, and correlate this data to troubleshoot problems faster, derive operational insight and make better decisions. And finally, Oracle IT Analytics Cloud Service provides 360-degree insight into the performance, availability, and capacity of applications and IT investments, enabling line-of-business executives, analysts, and administrators to make critical decisions about their IT operations based on comprehensive system and data analysis. Q: Where can customers go to learn about these new offerings? PR: The single best place to experience the power of these new offerings will be in the Oracle Management Cloud Launch General Session, which is titled GEN9778 - Oracle Management Cloud: Real-Time Monitoring, Log, & IT Operations Analytics and takes place on Tuesday, October 27 at 11AM in Moscone South Room 102. Oracle OpenWorld attendees should pre-register for the session as there is limited space and attendees will be receiving a gift from Oracle. In addition to the General Session, there will be several dedicated track sessions, demonstration stations and hands-on labs throughout the remainder of the week. It’s going to be a very exciting time. Stay Connected: Twitter | Facebook | YouTube | Linkedin | Newsletter Download the Oracle Enterprise Manager 12c Mobile app


Product Info

Snap Cloning Databases on Exadata using Enterprise Manager

Historically, Exadata has mostly been deployed for heavy, production workloads, leaving cheap commodity hardware and third-party storage to serve as infrastructure for development and testing. From the viewpoint of Enterprise Manager, we have seen customers clone production databases running on Exadata to secondary storage such as a ZFS Storage Appliance, or even third-party NAS or SAN, for the purpose of testing. Customers mainly used RMAN (with or without Enterprise Manager) to clone the databases. While the master clones (often referred to as Test Masters) could be further cloned via storage-efficient snapshots, there were significant limitations to this approach. First, testing on systems unlike Exadata (in both compute and storage) often yielded erroneous inferences. Second, both the compute and storage on existing Exadata racks often remained underutilized. Most surveys establish that there are several Dev/Test copies for every production database, and leaving Dev/Test outside the realm of Exadata can only yield partial usage of Engineered Systems. Two recent advancements in Exadata break this barrier. First, the compute nodes on Exadata can now be virtualized. Consolidated environments can now use Oracle Virtual Machine (OVM) on X5-2, X4-2, X3-2, and X2-2 database servers to deliver higher levels of consolidation and isolation between workloads. The virtual machines can be configured on demand with the appropriate number of virtual CPUs (vCPUs) for iterative testing. Second, with Oracle Database 12c Release 1 (12.1) BP5 or later, space-efficient database snapshots can now be quickly created for test and development purposes on Exadata. Snapshots start with a shared read-only copy of the production database (referred to as the Test Master in Enterprise Manager parlance) that has been masked and/or cleansed of any sensitive information.
Snapshot technology as deployed on Exadata is "allocate on first write", not copy-on-write. As changes are made, each snapshot writes the changed blocks to a sparse disk group. Multiple users can create independent snapshots from the same base database, so multiple test and development environments can share space while maintaining independent databases for each task. The base database must remain read-only during the usable lifespan of the snapshots created from it. If there is a need to create another set of snapshots from a more recent copy of production data, a new read-only base must be created from the production database. Enterprise Manager 12cR5 leverages these capabilities of Exadata to extend Snap Clone to Exadata sparse clones. As shown in this tutorial, with Snap Clone, Enterprise Manager can create a Test Master using either Data Guard technology or RMAN, preceded by data masking. The Test Master can be created either on the same Exadata rack or on a different one. Once the Test Master has been created, snapshots can then be created on the sparse disk groups using the Deployment Procedures. The Deployment Procedures also automate the post-cloning discovery and promotion of the cloned targets, making them fully managed right from inception. Internal testing confirms that cloning a terabyte of database, with complete discovery of all its components, takes less than a minute. Enterprise Manager also helps DBAs track the lineage of the clones by providing a report on the production database, the Test Master and its clones. Enterprise Manager Snap Clone on Exadata supports both regular and pluggable databases, with optional ACFS configuration. In addition to support for Exadata sparse clones, Snap Clone continues to support NAS (ZFS Storage Appliance and NetApp) and SAN (certain EMC storage arrays), in case users want to deploy these for their Dev/Test environments.
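The space economics of allocate-on-first-write snapshots can be illustrated with a quick back-of-the-envelope calculation. The numbers below (Test Master size, clone count, change rate) are purely hypothetical; only the snapshot behavior itself comes from the text above.

```shell
#!/bin/sh
# Illustrative comparison: full RMAN copies vs. sparse snapshots.
# All sizes and percentages here are made-up example figures.
TEST_MASTER_GB=1024        # a 1 TB read-only Test Master (hypothetical)
CLONES=10                  # ten independent Dev/Test snapshots (hypothetical)
CHANGED_PCT=2              # each clone rewrites ~2% of its blocks (assumed)

# Full copies duplicate the whole database for every environment.
full_copies_gb=$((TEST_MASTER_GB * CLONES))

# Sparse snapshots only consume space for blocks actually changed.
sparse_gb=$((TEST_MASTER_GB * CLONES * CHANGED_PCT / 100))

echo "Full copies:      ${full_copies_gb} GB"
echo "Sparse snapshots: ${sparse_gb} GB (plus the shared ${TEST_MASTER_GB} GB Test Master)"
```

Even with these rough assumptions, the shared read-only base is what lets many environments coexist in a fraction of the space of full copies.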
Further Reading Resources:
- To learn more about Database as a Service, visit the OTN page.
- Prerequisites for setting up Exadata snapshots are documented here.
- Watch the video, Snap Clone Multitenant (Pluggable Database) on Exadata, here.

--Subhadeep Sengupta (@Subh_here)


Best Practices

Understanding Plans, Profiles and Policies in Enterprise Manager Ops Center

Enterprise Manager Ops Center uses a combination of Plans and Profiles to maximize repeatability and reuse, while giving you the flexibility to provision and update what you need in your environment. The structured building-block approach allows reuse of many of the components instead of re-entering data each time, but it can make the whole thing look very confusing until you understand the relationship between plans and profiles. The sorts of activities covered by plans and profiles are:

- OS provisioning and configuration
- BIOS and firmware updates
- LDOM creation
- Zone creation
- Boot environment creation
- Patching and adding packages (S10 and S11)
- Configuration files and pre/post action scripts (S10)
- Automation using operational plans (scripts)

The Building Blocks

So, firstly, let's look at the building blocks and see what the difference is between a Plan and a Profile.

Profiles

Profiles contain the configuration information required to perform an action, e.g.:

- OS Provisioning profile - contains type/architecture, resourcing, OS version, software group, language, timezone, password, filesystems, name services
- OS Configuration profile - contains type/architecture, agent/agentless, multipathing, network configuration
- Discovery profile - contains asset type, tags, IP address, credentials, grouping
- Logical Domains profile - contains type, name, CPU/cores, memory, automatic recovery, storage, networks

These are just some examples of the 16 types of profiles available in the BUI.

Plans

Plans are objects that can be run (executed) to make something happen. Plans contain profiles, other plans, policies, or a combination of these.

Policies

In addition to these profiles, there are update and monitoring policies and credentials that can also be created and edited here.

Examples

Let's look at a couple of examples from a process perspective, and how they actually look in the BUI. A side note: the screenshots here were taken from an Ops Center 12.2.2 environment.
Ops Center 12.3.0 introduces new styles of icons, but the principles are all the same.

Process overview

For example, if you were going to provision a bare-metal physical server, you would have two choices:

- A simple provision that would just lay down an operating system
- A complex provision that could update the BIOS on an x86 server, lay down the OS, update (patch) the OS, add applications, run scripts, apply a non-default monitoring profile, etc.

You choose the one that best suits what you are trying to do.

Simple OS Provisioning

If you just wanted to get an OS laid down, with basic network/filesystems configured and possibly an agent installed, you would choose a "Provision OS" plan. This plan contains 2 profiles, an "OS Provisioning" profile and an "OS Configuration" profile. These profiles contain the answers to the same questions that would have been asked if you provisioned the server manually. A point to remember: you must create your profiles (answering the questions that the wizard presents) before they are available to be added to a plan. In the BUI it looks like this:

Complex OS provisioning

This plan must have the same 2 profiles that the simple approach did, but it has the option to add many other steps to patch the deployed OS (Update OS, Update Software), add software (Install, Operational plans), and further configure the OS/application using scripts (Pre-install, Post-install, Operational plans). In the BUI it looks like this:

The steps (profiles) you choose will be determined by what you want to achieve and whether you are provisioning Solaris 10/Linux or Solaris 11.

Duplicating steps

In addition, most of these optional steps can be duplicated to allow you to execute more than one profile. To do this, add your first profile for that step, then select (highlight) that step and, if it is available, the copy icon (with 1, 2 shown on it) will become active. Click that icon and the step will be duplicated, allowing you to run more than one profile.
This makes the whole operation much more flexible, as you could have an update profile for your OS, one for web servers and one for databases. So if the plan was to build a web server, it would contain both the OS update profile and the web server update profile, avoiding the need to have the OS patches in 2 profiles.

Other examples

Another example of this would be if you wanted to build an LDOM, or if you wanted to build an LDOM and deploy an OS into it (complex or simple); you would choose the appropriate plan.

Building an LDOM

Building an LDOM requires a "Create Logical Domain" plan, which has only a single step: a "Logical Domain" profile.

Building an LDOM and provisioning an OS

You can build the LDOM and provision the OS into it in a single action by creating a "Configure and Install Logical Domains" plan, which contains two steps: a "Logical Domain" profile and an "Install Server" profile.

Summary

By now, hopefully, the pattern has become clear. Plans and profiles are just the building blocks that allow you to deploy your system in the way you want. Each of the components is modular, so they can be reused as much as possible, and maintenance becomes easier because you have fewer places to change when you want to change your configurations. There are many other types of plans offered by Ops Center that will create zones, build M-series physical domains and deploy OVM for x86 virtual machines, individually or combined with OS deployment, but they all follow the same basic structure. While how to do all of this is laid out in the online documentation, my best advice is to get yourself some test systems and try it out. There is often no substitute for having actually done it. Regards, Rodney Lindner
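As a recap, the plan/profile composition described in this post can be modeled as a simple lookup. The plan and profile names below follow the examples in the article, but the function itself is only an illustration of the relationship, not an Ops Center interface.

```shell
#!/bin/sh
# Illustrative model only: plans are containers of profile steps.
# Names follow the article's examples; this is not an Ops Center API.
profiles_in_plan() {
    case "$1" in
        "Provision OS")
            # simple OS provisioning: two profiles
            echo "OS Provisioning"
            echo "OS Configuration"
            ;;
        "Create Logical Domain")
            # a single-step plan
            echo "Logical Domain"
            ;;
        "Configure and Install Logical Domains")
            # composite plan: build the LDOM, then install the OS
            echo "Logical Domain"
            echo "Install Server"
            ;;
        *)
            echo "unknown plan: $1" >&2
            return 1
            ;;
    esac
}

# List the profile steps of the simple provisioning plan
profiles_in_plan "Provision OS"
```

The point of the model is simply that a plan never holds configuration data itself; it only sequences profiles that do.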


Best Practices

New Enterprise Manager Release Allows Customers to Manage Their Hybrid Cloud from a Single Interface

Oracle is today announcing Oracle Enterprise Manager 12c Release 5, which simplifies the journey to the cloud. For the first time, you will be able to manage your hybrid cloud as one, from a single pane of glass. According to a recent Computerworld survey, cloud consumers are struggling with security issues, integration headaches and performance concerns in public and hybrid cloud environments. The latest release of Oracle Enterprise Manager addresses these concerns through new enhancements that manage, migrate, test and deploy applications and databases across hybrid clouds, with 100% reuse of existing management tools and practices. These enhancements will provide centralized visibility and control to Oracle customers on their cloud journey, while helping to ensure that their existing company-wide standards are applied across the estate. The following set of hybrid cloud features is now available for those using Oracle Database and Oracle Java Cloud Services:

- Workload portability and secure, bi-directional cloning: replication of application artifacts and data across the hybrid estate, with automated workload migration
- Quality of service management: deep, expert performance management, diagnostics and tuning, leveraging the full breadth of Oracle's extensive best practices for optimizing our stack
- Lifecycle and cloud management: automated patching and rapid provisioning of database and middleware, with self-service access for agility
- Data governance and configuration compliance controls: configuration and compliance management, data masking, compliance frameworks and rich notifications for control
- Migration of on-premises IT assets to Oracle Cloud, or from Oracle Cloud to on-premises, with a single click

A simple, secure deployment architecture

This rich functionality is exposed through an extremely simple deployment architecture.
Administrators will install the Oracle Enterprise Manager Hybrid Cloud Gateway to manage and encrypt all required communication between their existing on-premises Enterprise Manager and the Oracle Cloud Platform...and that's it! With no additional network reconfiguration, Oracle customers can immediately begin to manage their hybrid estate as if it were one.

Customers, partners and analysts welcome Oracle's hybrid cloud management approach

"e-DBA, an Oracle Platinum Partner and customer, has seen the benefits Oracle Enterprise Manager brings to our clients regardless of the size and complexity of their estate, which is why we also use it internally," said Phil Brown, Global Lead for Enterprise Management Technologies, e-DBA. "Reusing our existing skills will accelerate adoption of the Oracle Cloud Platform since our hybrid cloud will be a natural extension of our existing estate, all seamlessly managed as one." "As Bridge Consulting has expanded our Oracle footprint to include Oracle Cloud Platform, we have recognized the importance of centralized management control over our entire hybrid estate," said Marco Bettini, Co-founder and CTO of Bridge Consulting. "The new Oracle Enterprise Manager capabilities are exactly what we need to ensure our hybrid cloud can be managed and consumed as one single environment, and we are especially glad we can leverage our existing best practices and knowledge across the entirety of our hybrid estate." "Our customers have consistently told us that they want to treat their hybrid cloud as a single unit, with workload portability as well as consistent governance across the entire estate," said Dan Koloski, Senior Director of Product Management, Systems and Cloud Management, Oracle.
"The latest release of Oracle Enterprise Manager provides customers the ability to lift-and-shift applications from on-premises to cloud and from cloud to on-premises, while leveraging all of the rich quality-of-service and lifecycle management capabilities they use today. We are excited to enable a hybrid cloud that truly functions as an extension of an on-premises data center." "IDC predicts that the majority of IT organizations will adopt a hybrid cloud strategy in the next five years. For these companies, a key success factor is the ability to consistently support workload monitoring, management and portability across on-premises IT and public cloud services," said Mary Johnston Turner, Research Vice President, IDC. "Oracle customers who are evaluating a hybrid cloud strategy spanning Oracle on-premises and public cloud databases, web services, backup services and development resources will be well served to consider the new Enterprise Manager hybrid cloud management capabilities."

Demonstration

One of the most appealing aspects of this solution is that it is so simple to deploy and operate.

Clear benefits

The value of Oracle Enterprise Manager, and the benefits recorded by customers, have been empirically demonstrated by numerous studies from Crimson Consulting, Pique Solutions, Forrester and IDC. Oracle Enterprise Manager 12c Release 5 will help you simplify your journey to the cloud by managing your hybrid estate as one. A LIVE webcast with more information and Q&A will be aired on June 25, 2015.
Please register here to learn how to overcome the challenges involved with managing IT environments where public and private clouds and on-premises infrastructure can thrive as one.

Resources:
- Read the Solution Brief on Hybrid Cloud Management
- Watch an Enterprise Manager demo
- Visit our Hybrid Cloud Management page on OTN
- Attend a technical webcast on Enterprise Manager
- Download Enterprise Manager

"THE PREVIOUS IS INTENDED TO OUTLINE OUR GENERAL PRODUCT DIRECTION. IT IS INTENDED FOR INFORMATION PURPOSES ONLY, AND MAY NOT BE INCORPORATED INTO ANY CONTRACT. IT IS NOT A COMMITMENT TO DELIVER ANY MATERIAL, CODE, OR FUNCTIONALITY, AND SHOULD NOT BE RELIED UPON IN MAKING PURCHASING DECISIONS. THE DEVELOPMENT, RELEASE, AND TIMING OF ANY FEATURES OR FUNCTIONALITY DESCRIBED FOR ORACLE'S PRODUCTS REMAINS AT THE SOLE DISCRETION OF ORACLE."


Best Practices

Data to collect when logging a Service Request on Enterprise Manager Ops Center

If you ever have to log a Service Request (SR) on your Enterprise Manager Ops Center instance, the question often arises as to what data you should include with the SR. Including the right information when you log the SR will greatly reduce your Time To Resolution (TTR). So, what data should you add to the SR? That will depend on what type of issue you are experiencing.

Basic Information for All SRs

While the My Oracle Support (MOS) portal will help by asking you relevant questions, here is a list of things to think about while answering them:

- A clear problem description - Tell us what you were attempting to do when you encountered the issue, what host(s) were involved and what error message you saw. You would be surprised how many SRs get logged with "It didn't work", or just the error message, as the problem description, without telling us what you were actually trying to do.
- Time and date - Tell us the time and date you saw the issue. The log files are timestamped, so knowing the time the issue occurred (at least approximately) will reduce the amount of searching through the log files that needs to be done.
- Software version - Always tell us what version of Enterprise Manager Ops Center you are using. To find your version, look in the Ops Center BUI under [Administration] ==> [Enterprise Controller]; the Enterprise Controller version is listed at the top of the middle panel (Summary tab). Don't forget to mention any IDRs (patches) that have been applied to your Enterprise Manager Ops Center instance.

Additional Data to Include with Your SR

The most common thing to include with your SR is an OCDoctor --collectlogs output (# /var/opt/sun/xvm/OCDoctor/OCDoctor.sh --collectlogs), but what to include will often depend on which part of the process Ops Center was at when the issue occurred. If your issue falls under multiple sections below, please supply the data from all relevant sections.
It should be noted that all of the Ops Center logs roll over over time, and on a busy environment that may not be a long time. So it is important to collect any log files as soon as possible after you have seen your issue, or to reproduce the issue just before collecting the logs.

Browser User Interface (BUI) issues
- Collect screenshots of what you are seeing in the browser
- Run # /var/opt/sun/xvm/OCDoctor/OCDoctor.sh --collectlogs on your Enterprise Controller and collect the output file

Job related issues (job fails)
- Provide the job number
- Provide the full job logs: in the BUI, click through to the failed task on the job and then export the logs; select the radio button for "Full Job Log" and save the job log
- Run # /var/opt/sun/xvm/OCDoctor/OCDoctor.sh --collectlogs on your Enterprise Controller and collect the output file

OS provisioning issues
- Capture the full job log (as described above)
- Run # /var/opt/sun/xvm/OCDoctor/OCDoctor.sh --collectlogs on the relevant Proxy Controller and collect the output file
- Capture any console logs of the physical server or, if deploying to LDOMs, capture /var/log/vntsd/[LDOM name]/console.log
- If deploying to an LDOM, run # /var/opt/sun/xvm/OCDoctor/OCDoctor.sh --collectlogs on the relevant CDOM and collect the output file

Patching issues
- Capture the full job log (as described above)
- Run # /var/opt/sun/xvm/OCDoctor/OCDoctor.sh --collectlogs on the system being patched and collect the output file
- Run # /var/opt/sun/xvm/OCDoctor/OCDoctor.sh --collectlogs on your Enterprise Controller and collect the output file

Discovery issues
- Capture the full job log (as described above)
- Run # /var/opt/sun/xvm/OCDoctor/OCDoctor.sh --collectlogs on your Enterprise Controller and collect the output file
- Run # /var/opt/sun/xvm/OCDoctor/OCDoctor.sh --collectlogs on the relevant Proxy Controller and collect the output file

Agent deployment issues
- Capture the agent install output and logfiles (/var/tmp/agent_install, /var/scn/install/log)
- Capture the agent configuration output and logfile (/var/tmp/agentadm.log)
- If the agent has actually been installed, run # /var/opt/sun/xvm/OCDoctor/OCDoctor.sh --collectlogs on the system you are installing the agent on and collect the output file
- Run # /var/opt/sun/xvm/OCDoctor/OCDoctor.sh --collectlogs on the relevant Proxy Controller and collect the output file

Domain Model issues
- Run # /var/opt/sun/xvm/OCDoctor/OCDoctor.sh --collectlogs -u [EC Master Admin User] on your Enterprise Controller and collect the output file. This collects the domain model information as well as the normal output from an OCDoctor log collection. Note: this can take a long time to run. The [EC Master Admin User] is the user that you specified as part of the initial install; on some customer systems this will be root, but it does not have to be.

Collecting the right information and providing it as part of the initial logging of the SR will make it easier for the engineer to analyze your issue and avoid a lot of unnecessary back and forth. Overall, it will get your issue resolved more quickly. I hope this information will help you if you ever have to log an SR on Enterprise Manager Ops Center. Regards, Rodney Lindner
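For scripting repeat collections, the per-issue checklist above can be summarized in a small helper like the sketch below. The OCDoctor path is the one quoted in this post; the helper itself and its issue-type names are illustrative, not an Oracle-supplied tool.

```shell
#!/bin/sh
# Illustrative helper (not an Oracle tool): given an issue type, print which
# OCDoctor log collections to run and where. Paths match the post above.
OCDOCTOR=/var/opt/sun/xvm/OCDoctor/OCDoctor.sh

collect_plan() {
  case "$1" in
    bui|job)
      echo "Enterprise Controller: $OCDOCTOR --collectlogs" ;;
    discovery|patching)
      # these issues need logs from more than one tier
      echo "Enterprise Controller: $OCDOCTOR --collectlogs"
      if [ "$1" = "discovery" ]; then
        echo "Proxy Controller: $OCDOCTOR --collectlogs"
      else
        echo "System being patched: $OCDOCTOR --collectlogs"
      fi ;;
    provisioning|agent)
      echo "Proxy Controller: $OCDOCTOR --collectlogs" ;;
    domainmodel)
      echo "Enterprise Controller: $OCDOCTOR --collectlogs -u [EC Master Admin User]" ;;
    *)
      echo "usage: collect_plan {bui|job|provisioning|patching|discovery|agent|domainmodel}" >&2
      return 1 ;;
  esac
}

collect_plan patching   # prints the two collections needed for a patching issue
```

Remember that the script only tells you where to run OCDoctor; you still have to copy the resulting output files off each system and attach them to the SR.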

If you ever have to log a Service Request (SR) on your Enterprise Manager Ops Center instance, the question often arises as to what data you should include with the SR. Including the right information...

Product Info

Data Clone and Refresh (part 1)

In mid January we released the latest version of Enterprise Manager Cloud Control 12c, the 12cR4 Plugin Update. This release included many enhancements in the cloud management space, making it the complete DBaaS automation solution. The main areas of enhancement are:
- Self Service Portal and Service Catalog, including add and remove standby capabilities, externalized database size and improved customization
- Cloud Framework, with an improved self service portal, role and service type based access across all pages, and improved administration capabilities
- Data Clone and Refresh, including an improved self service experience, admin flows for non-cloud cloning and refreshes, and additional storage certification
- Fleet Maintenance, such as the ability to subscribe to database and grid infrastructure images, deploying them automatically to servers

In this posting, I'll be focusing on the data clone enhancements. My next post will focus on the refresh enhancements.

Improved Self Service Experience

Enterprise Manager ships with an out-of-box self-service portal that allows developers, testers, DBAs, and other self service users to log on and request various services. It also provides an administrative interface for DBAs to deliver one-off or special requests for services. These services include:
- New single instance (SI) and Real Application Clusters (RAC) databases using pre-defined golden standards. This is ideal for developers asking for standard databases with or without data.
- New SI and RAC databases along with Data Guard standby databases, either within the same data center or across different geographical regions. This is ideal for production and semi-production workloads that have high availability requirements.
- Schemas hosted in one or more databases, provided as Schema as a Service.
- Pluggable databases that are hosted in one or more Oracle Database 12c Multitenant container databases, provided as Pluggable Database as a Service.
- Database thin clones, using the Enterprise Manager 12c Snap Clone feature that leverages storage Copy-on-Write (CoW) technology on Oracle and non-Oracle storage. Snap Clone lets users, such as functional quality assurance testers, create multiple copies of the database in minutes without consuming additional space, take private backups, and refresh data from the original source. Snap Clone supports cloning of databases on file storage and on Oracle Automatic Storage Management with block storage.
- Database full clones using RMAN backups or RMAN duplicates. This could be ideal for intense testing, such as database upgrades and performance testing on Exadata.

All cloning services come integrated with masking, and the ability to change configuration and software versions. Fleet Maintenance allows administrators to patch and upgrade database software with near zero or zero downtime. The subscription based model enables updates at scale, and across the entire cloud ecosystem, thus significantly reducing the time required for maintenance activities. Additionally, self service users can perform lifecycle operations like starting and stopping, checking the status, health monitoring, etc. on the requested databases and schemas. Provisioning can be done on both physical infrastructure using Deployment Procedures and on an Oracle VM virtualized server infrastructure using Virtual Assemblies and Virtual Templates. The portal provides access to a Service Catalog which lists various published service templates for standardized database configurations and versions. Users can review their past and outstanding requests, resource quotas, and current utilization, as well as chargeback information for the databases and services they own. The portal also allows users to automatically back up their databases on a scheduled basis or take on-demand backups. Users can restore the database to any of these backups. The self-service portal is the user's view into the cloud, and thus it is designed to be easy to use and yet useful.
The portal is also backed by both a command line interface and an API that can be used to request and manage cloud resources via scripts in lieu of the user interface.

Admin Flows for Non-Cloud Cloning and Refreshes

Often, people think of cloning as important only as a cloud operation, but that's not the only place that cloning can be important. It's ideal, for example, as a way to build environments for proofs of concept, test master creation, or indeed anywhere that you may need a one-off clone. With the Clone and Refresh functionality, you can clone from an existing snapshot of a database, or you can choose to clone to a particular point in time or SCN. Not only that, you can integrate both masking and patching (for PSUs) in this flow as well. Once you have created a clone, you can then refresh it from the source later with a few clicks. And finally, once you've done cloning through the user interface, you might then decide to clone and refresh in a scripted manner, using the EM CLI verbs or REST APIs that are provided for this. You can even schedule the cloning through EM CLI to occur at a time that suits you.

Now let's look at the details of what can happen as part of this clone and refresh flow, based on the environment you can see in the image below. On the left hand side you can see our Production database. In this example, this is a database running in a 3 node RAC environment, with some RMAN backups already taken. So what can I do with this database when I clone it?

Firstly, I can mask sensitive data. Generally, when you take a copy of your Production database to another environment, you want to mask some of the data in that database, such as credit card numbers, salary details and so on. The admin flow allows you to apply a pre-defined masking format to your data as it is cloned, or indeed execute your own custom SQL scripts to change the data as you need to.

Secondly, I can actually test patching as part of the flow. In the example I'm showing here, I'm cloning my Production database to an 11. test environment.

Thirdly, I can change the configuration as part of the admin flow as well. Again, in the example shown here we're moving from a 3 node RAC environment in Production to a single instance test environment.

Finally, if my underlying storage supports copy-on-write technology, I can also take advantage of that and take a snap or thin clone of my Production environment, where blocks are only written to my test environment as they are changed in Production. That means I can build a lot more snap clones and still require only a very small percentage of the storage of my Production environment.

Additional Storage Certifications

Earlier releases of the Enterprise Manager Snap Clone functionality provided two different solutions from a storage perspective. If you already had either Oracle's Sun ZFS Storage Appliance or NetApp, you could create a vendor specific hardware solution. If not, you could use either the Solaris ZFS File System or the CloneDB functionality provided since Oracle Database 11g. More details on the EM12cR4 Snap Clone functionality are provided here. In the January plugin release, we added additional storage certification, so you can now also use an EMC SAN with ASM to create snap clones. If you want more details on the Snap Clone and EMC solution, you can find details about it here.

Further Information

You can find more information on the material I've just covered in Part VII of the Cloud Administration Guide in the online documentation here. You can also watch the screenwatch we recorded to show you the cloning process in action here.

Stay Connected: Twitter | Facebook | YouTube | Linkedin | Newsletter Download the Oracle Enterprise Manager 12c Mobile app
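As a rough sketch of the scripted approach mentioned above, an EM CLI clone script could look like the following. The `emcli login` and `emcli sync` verbs are standard; the clone verb and every flag after it are hypothetical placeholders, so check `emcli help` in your release for the actual data clone verbs and their arguments.

```shell
#!/bin/sh
# Sketch of driving a scheduled clone through EM CLI. 'login' and 'sync' are
# standard EM CLI verbs; the clone verb and its flags below are HYPOTHETICAL
# placeholders -- look up the real verbs with 'emcli help' in your release.
EMCLI="${EMCLI:-emcli}"   # assumed to be on PATH; adjust for your OMS host
DRY_RUN="${DRY_RUN:-1}"   # 1 = print the commands instead of executing them

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

run "$EMCLI" login -username=CLOUD_ADMIN
run "$EMCLI" sync
# Placeholder verb and flags (NOT a real EM CLI signature):
run "$EMCLI" create_clone -source_db=PROD -dest_db=TEST1 -point_in_time_scn=1234567
```

Wrapping the calls in a `run` function like this lets you preview the generated commands before pointing the script at a live OMS, which is handy when you schedule it from cron or the EM job system.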


Best Practices

Test Drive Oracle’s Application Platform as a Service Solution

We've just launched a new set of workshops in several U.S. cities: Oracle Platform as a Service for Application Development. This follows the success of the Database as a Service workshop series (see previous blog entry). It’s another opportunity to test drive new Oracle Enterprise Manager capabilities, but it goes far beyond Enterprise Manager. This time we focus on Java development and testing in the private and public cloud, and the cloud operations needed to support them. So bring your laptop, connect to our live environment and try it for yourself! The day begins with an overview of APaaS benefits and the architectural choices for building your enterprise private or public cloud (or both). You then use step-by-step workbooks that guide you through creating an application platform / middleware cloud environment. The event is perfect for application developers, IT managers and anyone developing, testing and deploying Java applications. The time is evenly split between private and public cloud labs. These are the workbooks we’ll go through: Middleware as a Service SOA as a Service Fusion Middleware Provisioning Creating and Exploring Java Cloud Service Building and Deploying an Application with Java Cloud Service Managing Java Cloud Service Operations Looks interesting? Register for an event near you.


Product Info

New TimesTen Plug-in for Oracle Enterprise Manager

Co-contributor: Simon Law, TimesTen Product Manager. Last Friday, Oracle released a new version of the TimesTen Plug-in for Oracle Enterprise Manager 12c Cloud Control. This is a landmark release with many new features for database administrators in the enterprise. Besides database performance and availability monitoring, the new plug-in offers administrators the ability to manage and administer their TimesTen instances and databases, such as starting and stopping TimesTen services, loading and unloading databases to and from memory, and scheduling backups and restoring databases. Additionally, users can monitor database and replication activities, memory and disk usage, and workload performance statistics, and identify the longest running and most executed SQL statements. International users will be pleased to know that the new TimesTen plug-in was globalized, with its user interface translated into nine different languages, the same languages available in Oracle Enterprise Manager. The TimesTen plug-in can be downloaded through Enterprise Manager Self Update. For more information, visit the Enterprise Manager Extensibility Exchange or the Oracle Technology Network (OTN) TimesTen product page.



Oracle Database In-Memory Advisor

Recently, Oracle announced the general availability of the Oracle Database In-Memory Advisor, a new capability built into the Oracle Enterprise Manager 12c Tuning Pack. With this release, Oracle introduces new and extensive capabilities for managing Oracle Database In-Memory, an Oracle Database 12c Enterprise Edition option. Oracle Database In-Memory can be used to improve queries in a variety of OLTP and/or data warehouse operations. The In-Memory Advisor helps to optimize performance with recommendations to run analytics processing faster. It gives customers insight into the sizing of the workload and offers actionable recommendations for running workloads at peak performance. The Advisor analyzes workloads and related objects, making specific recommendations as to which objects would give you the greatest benefit and performance if they were placed In-Memory.

Key capabilities:
- Assists with In-Memory size selection
  - Recommendations for tables, partitions and sub-partitions for a given In-Memory size
  - Uses workload and performance data to prioritize objects
  - Takes into account differences in disk and memory footprint, as well as compression ratios
- Actionable recommendations
  - Workload based cost/benefit analysis
  - Cost: offers estimated memory size with various compression options
  - Benefit: offers estimated database time reduction metrics for workload processing
  - In-Memory area population plan
- Reporting
  - Ability to vary In-Memory size to receive a specific loading plan
  - Generates DDL scripts with all the tables/partitions/sub-partitions recommended
  - Top SQL benefits from any given configuration

For more information, visit the Oracle Database In-Memory Advisor page on the Oracle Technology Network. You can download the In-Memory Advisor via MOS note 1965343.1. Stay Connected: Twitter | Facebook | YouTube | Linkedin | Newsletter Download the Oracle Enterprise Manager 12c Mobile app



Oracle Enterprise Manager at Collaborate15

April 12-16, 2015  •  Mandalay Bay Resort & Casino, Las Vegas  •  #C15LV

Since the release of Oracle Enterprise Manager 12c, the user community has responded to the significantly increased adoption of this popular product by featuring deep-dive sessions and labs in various user conferences spread all over the globe. Nowhere is this more evident than at Collaborate, a very popular user conference being held this year in Las Vegas and comprised of the respected IOUG, OAUG and Quest user groups. Collaborate15 has almost 50 sessions devoted to Enterprise Manager topics, ranging from private cloud management, Oracle technology or "stack" management, and applications management, heavily sprinkled throughout all three groups. Although some of those sessions on new features and "inside baseball" capabilities are being delivered by Oracle folks, the majority are developed and presented by experienced Enterprise Manager customers and partners. Word is starting to get out. In fact, Kellyn Pot'Vin-Gorman just published an excellent blog, "Everything I Needed to Know about Enterprise Manager I Learned at Collaborate15", which takes you on a highly descriptive day-by-day journey of all the sessions, hands-on labs, demos and SIG meetings which feature Oracle Enterprise Manager. Oracle has also published a handy guide for Collaborate15 attendees to help build their schedules, called "Focus On Enterprise Manager at Collaborate15", which is duplicated below for your convenience. The Oracle Enterprise Manager product group will also have a solid contingent present throughout the conference, so please stop by the Oracle Demogrounds area and check out the OEM demo kiosks. See you there!
Each entry: Title | Time | Room | Session ID | Primary Presenter

Sunday April 12 - Pre-Conference Day
- Hands-on Lab: Everything I Needed to Know About Enterprise Manager I Learned at COLLABORATE | 9:00 AM start | Kellyn Pot'Vin-Gorman, Courtney Llamas and Werner de Gruyter, Oracle
- Expanding Use of Your Enterprise Manager System | 3:30 PM – 4:30 PM | South Seas I | 10188 | Brent Sloterbeek, Gentex

Monday April 13 - Day One
- Zero to Manageability in One Hour: Build a Solid Foundation for Oracle Enterprise Manager 12c | 9:15 AM – 10:15 AM | Banyan B | 0 | Courtney Llamas, Oracle
- Architecting your own DBaaS in a Private Cloud with EM12c | 9:15 AM – 10:15 AM | Reef F | 556 | Gustavo Antunez, Pythian
- The Best Oracle Database 12c Tuning Features | 10:30 AM – 11:30 AM | Palm B | 567 | Rich Niemiec, ROLTA
- Exadata Exachk and Oracle Enterprise Manager 12c: Keeping up with Exadata | 12:00 PM – 12:30 PM | Banyan B | 746 | Bobby Curtis, Enkitec
- Application Management Suite Deep Dive - Patching and Cloning | 3:15 PM – 4:15 PM | South Seas I | 10479 | Ravi Madabhushanam, Apps Associates
- The Power of the AWR Warehouse | 3:15 PM – 4:15 PM | Banyan B | 112 | Kellyn Pot'Vin-Gorman, Oracle
- Panel: Wait, Before We Get the Project Underway, What Do You Think Database as a Service Is... | 3:15 PM – 4:15 PM | Reef F | 598 | Steve Lemme, Oracle; Erik Benner, Mythics; Seth Miller, Collier IT; Gustavo Antunez, Pythian
- The New Tool - Automatic Diagnostic Repository (ADR): How Can It Help You Troubleshoot Your Databases? | 3:15 PM – 4:15 PM | Palm C | 241 | Michael Nelson, Northrop-Grumman
- Getting Started with Oracle Enterprise Manager Cloud Control 12c | 4:30 PM – 5:30 PM | Banyan B | 856 | Leighton Nelson, World Wide Technology
- Stabilize Your Plans with SQL Plan Management Including 12c | 4:30 PM – 5:30 PM | Palm B | 231 | Alfredo Krieg, Sherwin Williams Company
- Top Weblogic Administration Tasks You Can Automate Now | 4:30 PM – 5:30 PM | Breakers H | 10506 | Mrityunjay Kant, AST Corporation
- Welcome Reception - Enterprise Manager Demos: Total Cloud Control with Oracle Enterprise Manager (Technology Area); Applications Management with Oracle Enterprise Manager (Applications Area) | 5:30 PM – 8:00 PM | Exhibit Hall - Oracle Demogrounds | Oracle Enterprise Manager Team

Tuesday April 14 - Day Two
- Enterprise Manager Demos: Total Cloud Control with Oracle Enterprise Manager (Technology Area); Applications Management with Oracle Enterprise Manager (Applications Area) | 9:30 AM – 3:30 PM | Exhibit Hall - Oracle Demogrounds | Oracle Enterprise Manager Team
- Protect Your Identities: Managing User Access Using Oracle Identity and Access Management With Bonus Tuning Tips | 9:45 AM – 10:45 AM | Banyan C | 100 | Ken Ramey, Centroid Systems
- Upgrading to an OEM 12c High Availability Architecture for Monitoring and Job Execution - Keeping the NOC from Knocking | 9:45 AM – 10:45 AM | Banyan B | 741 | Bill Petro, American Express
- Enable Oracle GoldenGate Monitoring for the Masses with Oracle Enterprise Manager 12c | 11:00 AM – 12:00 PM | Banyan B | 219 | Bobby Curtis, Enkitec
- What is My EnterpriseOne Web Users Real Experience? | 11:00 AM – 12:00 PM | Surf C | 101760 | Frank Jordan, ERP Suites
- The Best Oracle Database 12c New Features | 11:00 AM – 12:00 PM | Palm A | 565 | Rich Niemiec, ROLTA
- Oracle WebLogic Performance Tuning | 11:00 AM – 12:00 PM | Banyan C | 239 | Jon Gilmore, Zirous, Inc
- Where did my day go?: Oracle Enterprise Manager 12c Administration | 2:00 PM – 3:00 PM | Banyan B | 230 | Alfredo Krieg, Sherwin Williams Company
- Automated Database Patching with Cloud Control 12c: Lessons Learned and Best Practices | 4:30 PM – 5:30 PM | Banyan B | 182 | Fernando de Souza, General Dynamics IT
- Happy Hour - Enterprise Manager Demos: Total Cloud Control with Oracle Enterprise Manager (Technology Area); Applications Management with Oracle Enterprise Manager (Applications Area) | 5:30 PM – 7:00 PM | Exhibit Hall - Oracle Demogrounds | Oracle Enterprise Manager Team

Wednesday April 15 - Day Three
- Your Own Private Cloud | 8:00 AM – 9:00 AM | Reef F | 460 | Gleb Otochkin, Pythian
- Managing and Monitoring Fusion Middleware Using Oracle Enterprise Manager 12c | 8:00 AM – 9:00 AM | Banyan C | 336 | Shawn Ruff, Mythics
- You've Got It—Flaunt It: Oracle Enterprise Manager Extensibility | 8:00 AM – 9:00 AM | Banyan B | 281 | Ray Smith, Portland General Electric
- Exadata 101 - What You Need to Know | 8:00 AM – 9:00 AM | Banyan D | 569 | Rich Niemiec, ROLTA
- How to Comply With Audit and Make Your Life Easier | 8:00 AM – 9:00 AM | South Pacific J | 389 | Frank Pound, Bank of Canada
- Enterprise Manager Demos: Total Cloud Control with Oracle Enterprise Manager (Technology Area); Applications Management with Oracle Enterprise Manager (Applications Area) | 9:30 AM – 3:30 PM | Exhibit Hall - Oracle Demogrounds | Oracle Enterprise Manager Team
- Enterprise Manager 12c Cloud Control for Managing Oracle E-Business Suite 12.2 | 9:15 AM – 10:15 AM | South Seas H | 10550 | Angelo Rosado, Oracle
- SIG: OAUG Oracle Enterprise Manager for Applications Special Interest Group | 10:45 AM – 11:45 AM | South Seas H | 10083 | James Lui, Aramark; Erik Benner, Mythics
- Oracle 12c Multitenant Database Lifecycle Management with Enterprise Manager 12c Cloud Control | 10:45 AM – 11:45 AM | Banyan B | 108 | Krishna Kapa, UBS
- SIG: IOUG Oracle Enterprise Manager Special Interest Group Meeting | 12:30 PM – 1:00 PM | South Seas Ballroom B | 949 | Enterprise Manager Leadership Committee
- Design and Implement your Own Self-Service Enabled Private Cloud with Oracle EM12c | 2:45 PM – 3:45 PM | Reef F | 464 | Kai Yu, Dell
- Migrate Your Cron Jobs to Oracle Enterprise Manager Cloud Control 12c | 2:45 PM – 3:45 PM | Banyan B | 335 | Vladimir Lugo, Loyola Marymount University
- PeopleSoft Application and System Monitoring Basics | 2:45 PM – 3:45 PM | Mandalay Bay Ballroom C | 100270 | Raj Garrepally, Emory University
- Servers and Systems and Storage, Oh my! Infrastructure Management Using OEM 12c | 4:00 PM – 5:00 PM | Banyan B | 453 | Erik Benner, Mythics
- Recipe for Building a Private Database Cloud with Oracle RAC 12c | 4:00 PM – 5:00 PM | Reef F | 656 | Leighton Nelson, World Wide Technology
- Panel: Oracle Best Practices for Managing Oracle Applications | 4:00 PM – 5:00 PM | Mandalay Bay Ballroom B | 104800 | Ken Baxter, Oracle

Thursday April 16 - Day Four (Final Day)
- Pluggable database as a service: Combining Oracle 12c Database multitenant architecture and Oracle Enterprise Manager 12c | 8:30 AM – 9:30 AM | Reef F | 107 | Krishna Kapa, UBS
- Future Now: Advanced Database Management for Today's DBA | 9:45 AM – 10:45 AM | Banyan B | 866 | GP Gongloor, Oracle
- 12 Things to Consider for Migrating EBS to Exadata | 11:00 AM – 12:00 PM | Banyan D | 660 | Arun Kumar Anthireyan, BIAS Corporation
- Anomaly Detection for Database Monitoring | 11:00 AM – 12:00 PM | Palm B | 701 | Alex Gorbachev, Pythian
- Data Clone and Refresh Made Easy with Enterprise Manager 12c Snap Clone | 11:00 AM – 12:00 PM | Palm D | 921 | Pete Sharman, Oracle
- Under The Hood of Enterprise Manager - A troubleshooting primer | 11:00 AM – 12:00 PM | Banyan B | 157 | Courtney Llamas and Werner de Gruyter, Oracle
- Best Practices for Planning and Deploying Private Database Clouds with Oracle RAC 12c Technologies | 11:00 AM – 12:00 PM | Reef F | 729 | Mark Scardina, Oracle
- Smarter Monitoring with Adaptive Thresholds and Time Based Metrics | 12:15 PM – 1:15 PM | Banyan B | 144 | Courtney Llamas, Oracle
- Case Studies Moving ASM Files | 12:15 PM – 1:15 PM | Palm D | 211 | Anthony Noriega, ADN

Updated March 10, 2015



Q&A: The Cloud Journey with Enterprise Manager

On last week's Enterprise Manager webcast, we had a great opportunity to catch up on the latest product news and hear how Enterprise Manager is helping companies transition to the cloud. We specifically focused on new capabilities around Platform as a Service for databases and middleware. You can still view the webcast on demand. As a speaker on the webcast, I really enjoyed seeing the high number of questions in the text chat window - that's maybe the most fun part! Here are some of the questions & answers we had. (They've been slightly edited for clarity.) As always, you're welcome to comment via the blog.

Q: Do you recommend a DBA onsite or do you provide DBA support?
A: Either way. You can have your own DBA managing your database private cloud, or hire a consultant from Oracle or an Oracle partner. The important thing is that a DBA can support many more databases once users have self-service.

Q: Please explain provisioning of a schema in the database.
A: Schema as a Service allows you to deploy a schema and use it as if it were a separate DB - but with the benefits of consolidated management at the DB layer. In Database 12c, this is achieved using multi-tenant, pluggable DBs.

Q: What is Showback?
A: Showback is like chargeback, except that no money actually changes hands. IT is just showing the users how many resources they used.

Q: In Database Replay, does the workload run on an actual production-like environment, or is it just a simulation?
A: Database Replay allows you to replay the captured workload on the actual proposed environment. This is often useful for upgrade exercises and testing new configurations.

Q: Yes, but if consolidation is in the planning stage, then the actual proposed environment won't be there. Are you talking about the proposed environment in the cloud only?
A: Using Consolidation Planner, you can simulate the workload and arrive at the target environment requirement for a specific set of workloads. Also, using Real Application Testing and SQL Performance Analyzer, you can test using captured workloads and replay them.

Q: Can EM provision MS SQL Server, and can EM provision in the Azure or Amazon cloud?
A: EM has no out-of-box capabilities for provisioning non-Oracle DBs. Current capabilities are focused on private cloud. Provisioning for public cloud is on the roadmap.

Q: Can you buy plugins for MS SQL Server?
A: Yes, plugins are available for non-Oracle DBs, including SQL Server, Sybase, and DB2. The Oracle Extensibility Exchange on OTN has a list of available plugins from Oracle and from third party sources.

Q: I have a customer that wants to use Azure Pack as their provisioning UI. Can EM talk to Azure Pack in order to do this?
A: Yes, your customer can use Oracle Cloud APIs and build the relevant application for provisioning.

Q: So would Oracle Cloud APIs be an integration layer to Azure Pack?
A: You need to write your own code to integrate the Oracle private cloud solution into any third party solution.

Q: What's the difference between full and snap clone?
A: As the names suggest, a full clone is a copy of the source database - it will occupy the same amount of space as the source. Snap Clone is a sparse copy of the source database, where the copy will occupy very little space on disk as it uses copy-on-write storage technology.

Q: So Snap Clone only stores the changes and reads most data from the source?
A: Snap Clone stores only the changes made in the clone copy.

Q: In Snap Clone, what is the continuous refresh from production?
A: When you clone a database (typically from production to either a test or dev environment), there may be a need to refresh the clone on a regular basis.

Q: What is drift tracking?
A: Using the new configuration management features, you can define a configuration standard for your databases. If there is a deviation, you can get a report, typically called a drift analysis.

Q: What are some of the critical security-related configurations that you recommend for DB12c?
A: EM12c can be used to track and enforce compliance, including industry standards such as PCI, or customer policies created and monitored by EM itself.

Q: With self-service DBaaS, can the end user update or upgrade the database or DB home?
A: The end user (Self Service User) can upgrade the database service that he/she created earlier. Only the Cloud Admin can upgrade Oracle Homes.

Q: Can you explain this with an example?
A: The Cloud Admin will upgrade all Oracle Homes in a Pool to the next patchset release. Now, SSA users can subscribe to upgrade the database services they created. If SSA users do not do so by a certain time, the Cloud Admin can force-upgrade their databases.

Q: What is the difference between a hosting environment and a cloud?
A: Resource abstraction, metering, chargeback, self-service, and a few other things are industry-accepted characteristics of a cloud.

Q: Is dev/test in the public cloud, and production in the private cloud, the most common architecture?
A: It's definitely very common. Dev/test environments are often temporary in nature, and that lends itself very well to public cloud.

Q: What's the difference between Exadata and Exalogic?
A: Both are Oracle hardware systems. Exadata is optimized for databases, whereas Exalogic is meant for Oracle middleware and applications.

Q: Thank you for answering all my questions. Great webcast!
A: Thanks so much! Glad we got the opportunity to share the news.
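On the Oracle Cloud APIs mentioned in the Azure Pack answers: programmatic access to the EM private cloud is over REST. The sketch below only prints an example request; the /em/cloud entry point is taken from the EM12c Cloud Administration Guide's REST API chapter, but verify the path and any required headers against your release, and the host and user shown are placeholders.

```shell
#!/bin/sh
# Illustrative sketch of calling the EM12c cloud REST entry point with curl.
# The /em/cloud path is per the Cloud Administration Guide's REST API; verify
# it for your release. Host and user are placeholders. Prints the command
# instead of running it, since a live OMS is required.
EM_HOST="${EM_HOST:-em.example.com}"
SSA_USER="${SSA_USER:-ssa_user}"
CLOUD_URL="https://$EM_HOST/em/cloud"

# -k skips certificate verification -- acceptable in a lab, not in production.
echo "curl -k -u $SSA_USER $CLOUD_URL"
```

An integration layer for something like Azure Pack would wrap calls like this: the third-party portal collects the request, and your glue code translates it into the corresponding EM cloud REST calls.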



Test Drive Oracle Enterprise Manager at a City Near You

Are you tired of watching product demos? Prefer to try products for yourself? Then this is for you. Oracle is currently running a workshop series called Oracle Database as a Service Test Drives. You bring your laptop, connect to live Oracle Database 12c and Oracle Enterprise Manager 12c instances, and try our latest database cloud management solution for yourself. The day begins with an overview of DBaaS benefits and the architectural choices for building your enterprise cloud (OK, the introduction is actually a PowerPoint presentation!) You then use a step-by-step workbook that guides you throughout the day through the steps of creating a database cloud environment. The event is perfect for DBAs, application developers, IT managers and anyone involved in private cloud deployments. These are the workbooks we’ll go through: Database as a Service using Snap Clone – self service Database as a Service using pluggable databases (PDB) – self service Database as a Service setup for cloud administrators Cloud management – chargeback and metering Database consolidation testing with Real Application Testing Database lifecycle management with Enterprise Manager  Looks interesting? Register for an event near you.


Product Info

Snap Clone using EMC SAN and ASM

Recently we announcedthe latest update to Enterprise Manager Cloud Control 12c Release 4. One of the enhancements in that release is support for SnapCloneon Automatic Storage Management (ASM) and EMC Storage.  Before we examine the details of this specific enhancement, let's look at a quick refresher on what Snap Clone provides for you. What is Snap Clone?Snap Clone is a storage agnostic and selfservice approach to creating rapid and space efficient clones of largedatabases (andby large, we’re talking terabytes or more). Now that’s probably morebuzz words in one sentence than anyone’s brain can deal with withoutexploding, so let’s explain some of those terms more: Storageagnostic –by that I mean Snap Clone supports all storage vendors, both NAS andSAN. It can leverage storage layer APIs or layer a ZFS filesystem on top to provide copy on write. Selfservice –in the XaaS world – where X can be any of I, MW, P and DB –one of the key features is empowering the end user to do the work,rather than waiting on some techie to find time in their otherwise busyschedules. So it’s the end user who makes the adhoc clones here, notthe storage admin Rapid –People simply don’t have the time to wait weeks for provisioning tohappen any more, so you have to support the functionality to clonedatabases in minutes rather than the days or weeks things used to take. Spaceefficient –When you’re working with terabyte or larger databases, you simply maynot have the storage to create full-sized clones, so you have tosignificantly reduce the storage footprint to start with. Over the various EM releases,  more and more functionality hasbeen added to Snap Clone: EM12cR2 provided Snap Clone for NAS storage (NetApp and SunZFSSA).  
It provided RMAN backup based clones, and included the Snap Clone Analyzer to show you the storage savings you could make using Snap Clone.
- EM12cR3 added support for Snap Clone using the Solaris File System (ZFS) and admin flows for Snap Clone for PDBs (pluggable databases).
- EM12cR4 added a lot more:
  - Snap Clone using CloneDB – this is the biggie, as it means Snap Clone can now be used with ANY Oracle database release that supports CloneDB, regardless of what storage it's on
  - Data Guard standby as a test master – allows offloading the impact of creating the test master from your Production environment
  - NetApp Ontap 8.x cluster mode support
  - Certification for engineered systems, with I/O over InfiniBand
  - Support for NFSv4

And in the latest plugin update that's just been shipped, we added:
- Integrated data lifecycle management
- Snap Clone using EMC SAN and ASM
- Admin flows for test master creation
- Integration with masking, patching, upgrades etc.

Snap Clone using EMC SAN and ASM

Most NAS technologies offer storage efficient clones in the form of snapshots. The snapshots make use of underlying volumes, known as flexvols (NetApp) or shares (ZFS). Unfortunately, SAN storage does not provide native snapshotting capability unless a file is created on it by leveraging TCP/IP over iSCSI over Ethernet. However, this defeats the purpose of having a high speed fibre channel fabric, not to mention that it makes little sense to overlay SAN with a filesystem. Another complaint we heard from our customers is that cloning is a data intensive operation that could flood the corporate IT backbone if Ethernet is used. Consequently, a lot of our customers want native support for SAN for cloning purposes, especially the ones who run ASM on SAN. And they are quite a lot in number. Using Snap Clone on ASM and EMC storage provides the ability to create 'live' thin clones of databases that are on ASM.
A live clone is NOT snapshot based but rather a live copy of the database, residing on copy-on-write storage technology, that can be within the same cluster or indeed another one. Both single instance and RAC are supported – supported versions are or higher of the database, and 11.2 and higher of the Grid Infrastructure code. This functionality works on both EMC VMAX (with TimeFinder VP Snap) and VNX storage appliances. Diagrammatically, the configuration looks like this:

Why Use Snap Clone with EMC SAN and ASM

There are a number of major challenges that Snap Clone can be used to address:

- Lack of automation – Manual tasks such as provisioning and cloning of new databases (for example, for test or development systems) are one area that many DBAs complain is too time consuming. It can take days to weeks, often because of the need to coordinate the involvement of different groups, as shown in the image below. When an end user, be it a developer or a QA engineer, needs a database, he or she typically has to go through an approval process like this, which then translates into a series of tasks for the DBA, the sysadmin and the storage admin. The sysadmin has to provide the compute capacity, while the storage admin has to provide the space on a filer. Finally, the DBA would install the bits, create the database (optionally on Real Application Clusters), and deliver that to the user. Clearly, this is a cumbersome and time-consuming process that needs to be improved on.
- Database unfriendly solutions – Obviously, when there is a need looking for a solution, different people take different approaches to solving that need. There are a variety of point solutions and storage solutions out there, but the vast bulk of them are not database aware. They tend to clone storage volumes rather than databases and have no visibility into the database stack, which of course makes it hard to triage performance issues as a DBA.
They also lack the ability to track configuration, compliance and data security issues, as well as having limited or no lifecycle capabilities. As mentioned before, DBAs would like to leverage the native Fibre Channel protocol of SAN for cloning. This will make cloning fast and efficient without disrupting regular network traffic.
- Storage issues and archaic processes – Of course, one of the main issues is storage. Data volumes are ever increasing, particularly in these Big Data days, and the growth can often outpace your storage capacity. You can throw more disks at the problem, but it never seems to be enough, and you can end up with degraded performance if you take the route of sharing clones between users. There can also be different processes and different priorities between the storage team and the DBA team, and you may still have fixed refresh cycles, making it difficult to clone on an ad hoc basis.

So the end result of all of this is that far too often, there are competing priorities at odds. Users want flexibility – simplified self service access, rapid cloning, and the ability to revert data changes. IT, on the other hand, wants standardization and control, which allows a reduction in storage use, a reduction in administrative overhead, visibility into the complete database stack and lineage tracking. Snap Clone on EMC storage helps you to address all these competing priorities, using hardware you may already own. Indeed, EMC has been well established as an Oracle database storage vendor over many years, and that integration has become tighter and tighter over the past few years. In addition to that, the actual setup and configuration can be simpler than is the case when using other hardware, as you do not need to create Database Profiles in this configuration. Service Templates are created directly on either a single instance or RAC database that resides on ASM.
Because you're using this combination of ASM and EMC SAN storage, the database is already Snap Clone enabled, as it resides on copy-on-write storage technology.

In my next post, I'll discuss more details of what else is new in the Snap Clone product in this latest release, so stay tuned for more details on that soon!

For More Information

You can see more details on how you actually set Snap Clone up on EMC storage by viewing the following screenwatches:
- Clone Databases in Minutes Using Snap Clone Self Service Portal with EMC Storage + ASM
- Register EMC Storage for Enterprise Manager Snap Clone

For more details on using Enterprise Manager Cloud Control 12c to provide Database as a Service functionality, visit the OTN page located here.

Stay Connected: Twitter | Facebook | YouTube | Linkedin | Newsletter
Download the Oracle Enterprise Manager 12c Mobile app
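The copy-on-write idea that makes these thin clones so space efficient can be illustrated with a small sketch. This is a hypothetical toy model, not Oracle or EMC code: a thin clone shares the test master's blocks and stores only the blocks it changes.

```python
# Hypothetical sketch of copy-on-write thin cloning (illustration only,
# not Oracle or EMC code): a clone shares the master's blocks and
# stores only the blocks it modifies.

class TestMaster:
    def __init__(self, blocks):
        self.blocks = dict(enumerate(blocks))  # block number -> data

class ThinClone:
    def __init__(self, master):
        self.master = master
        self.delta = {}            # only modified blocks are stored here

    def read(self, n):
        # Reads fall through to the master unless the clone wrote the block.
        return self.delta.get(n, self.master.blocks[n])

    def write(self, n, data):
        self.delta[n] = data       # copy-on-write: the master stays untouched

master = TestMaster(["a", "b", "c", "d"])
clone = ThinClone(master)
clone.write(2, "c'")

print(clone.read(2))     # -> c'  (the clone sees its own version)
print(master.blocks[2])  # -> c   (the master is unchanged)
print(len(clone.delta))  # -> 1   (the clone consumed 1 block, not 4)
```

This is why a terabyte-scale clone can be created in minutes with a tiny footprint: only the diverging blocks ever consume new storage.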



Top Oracle Enterprise Manager 12c Questions

Guest contributors: Courtney Llamas and David Wolf

Q: What are the steps involved in upgrading from Oracle Enterprise Manager 12c Release 3 to Release 4? Is it an upgrade in place or a new install/migration?
A: Oracle Enterprise Manager 12c R3 to R4 is an out-of-place upgrade, which means the installer will install into a new Oracle Home and migrate the application over. You will need additional space for the new Oracle Home. You can reduce the downtime of the upgrade by performing a "Software Only Install" and then upgrading later. See the upgrade documentation here.

Q: Does the patch and upgrade functionality work on Oracle Database 11g targets or only on Oracle Database 12c targets?
A: Patching and upgrading can be performed on any certified target version. For more details on database patching and lifecycle management, visit this page.

Q: Where can I find a full list of new enhancements compared to Release 3?
A: The new features are listed in the Oracle Enterprise Manager Cloud Control Introduction Guide.

Q: Is there a list available of new features by management pack?
A: The new features are listed in the Oracle Enterprise Manager Cloud Control Introduction Guide. They are listed by plug-in, not management pack. For the license information, read this page.

Q: What are the best hardware configuration and setup in order to provide high availability capabilities for database as a service (i.e. Oracle RAC, Oracle Exadata)?
A: The more you rely on Oracle Enterprise Manager, the more you need to think about high availability. The best recommendation is to have a multi-OMS system with a standby for disaster recovery. You can scale up to this as the environment grows, but starting with a multi-OMS system will give you the availability and scalability you need. A standby database with Oracle Data Guard and a software replicated standby OMS would be the next step. Please read the whitepaper Deploying a Highly Available Oracle Enterprise Manager 12c Cloud Control for more information.
Q: Can Snap Clone be used with EMC storage, or is Oracle ZFS storage required?
A: Snap Clone can be used with EMC VMAX and/or VNX Block Storage; both are supported. More details can be found on this page under the section: Database Cloning and Dynamic Data Refresh.

Q: Does Oracle Enterprise Manager support monitoring of hypervisors?
A: Oracle Enterprise Manager supports monitoring of Oracle VM natively, and VMware via a Blue Medora third party plug-in. More details on managing physical and virtual hosts can be found here.

Q: Is the metering and chargeback functionality part of the base product, or does it require an additional management pack?
A: Metering and Chargeback for Oracle Database is part of the Oracle Cloud Management Pack for Oracle Database. Metering and Chargeback for guest virtual machines (VMs) and hosts are included with the base product and do not require a separate license. Read this whitepaper for more on Metering and Chargeback with Oracle Enterprise Manager.

Q: Does Enterprise Data Governance require additional licensing?
A: Use of the Enterprise Data Governance capabilities within Oracle Enterprise Manager requires an Oracle Database Lifecycle Management Pack license.

Q: How is the new AWR Warehouse feature different from the existing AWR report in Oracle Enterprise Manager 12c Release 3?
A: The existing AWR report in Oracle Enterprise Manager pulls AWR data from the source database and relies on the AWR retention setting of that database. This is typically only saved for 8 days. There's been a big demand for saving this data for 30-60 days and even longer. AWR Warehouse automates the extract of the source AWR data and its loading into a warehouse database, so that data can be retained without affecting the performance and storage of the source database. The reports are the same; it just allows you to dig back further in time. Read this article for more details.

Q: Does this new release require more disk space for the AWR Warehouse? More CPU and memory?
A: Not for the OMS. The recommendation is to set up the AWR repository in a separate database, outside of the Oracle Enterprise Manager repository database. Hosting on the same server would be fine, so long as there is sufficient memory and CPU for both instances. Read this article on AWR Warehouse for more details.

Q: Does AWR Warehouse also support Oracle Database 10g and/or 11g AWR?
A: AWR Warehouse must be installed on Oracle Database or higher, or a version with the appropriate patch. It also must be an equal or higher database version than the source databases it accommodates. Check out this demo to understand AWR Warehouse. Read the AWR Warehouse documentation.

Q: Is there a list of 3rd party technologies that can be managed by Oracle Enterprise Manager 12c?
A: Check out the Oracle Enterprise Manager Extensibility Exchange for a list of the available 3rd party plug-ins and connectors.

Stay Connected: Twitter | Facebook | YouTube | Linkedin | Newsletter
Download the Oracle Enterprise Manager 12c Mobile app



Register Now for the Cloud Platform Online Forum

January 28th, 2015, 10:00 a.m. PST / 1:00 p.m. EST. Register Today. Modern Business. Modern Cloud. Is Your PaaS Delivering the Agility Your Users Demand? Don't miss the opportunity to join Oracle and IDC in a series of deep discussions on Cloud Platform as a Service. This online event kicks off on Wednesday, January 28th at 10:00 a.m. PST. Take advantage of 20+ sessions, 10+ demos and 100+ tips and techniques to plan your PaaS adoption effectively. Topics encompass private, public and hybrid cloud. You will also learn how to plan your transition from traditional IT delivery to cloud delivery using Oracle technologies you use every day. Register at bit.ly/PaaSForum. Oracle Enterprise Manager discussions during this event include tips and techniques on workload consolidation using database as a service and application platform as a service; transforming traditional IT delivery using self-service; effectively collaborating with business users with showback/chargeback; and more. Our speaker, Sudip Datta, Vice President of Product Management, will also provide a glimpse into the future of PaaS management. Stay Connected: Twitter | Facebook | YouTube | Linkedin | Newsletter Download the Oracle Enterprise Manager 12c Mobile app



Enterprise Manager Ops Center - Ops Center EC/PC sizing

Counting Access Points vs Assets

When designing a customer's Ops Center environment, one of the first questions I get asked is how many servers an Enterprise Controller (EC) or a Proxy Controller (PC) can support. Customers think of sizing as being the number of servers, LDOMs, etc., and these are lumped together into a total number of assets. In the real world, the answer to the "How many assets can my Enterprise Controller/Proxy Controller support?" question is, "It depends". How you place your Proxy Controllers in your network, and what you manage from each Proxy Controller, can impact the total number of assets that can be supported from an Enterprise Controller. The amount of load placed on an Enterprise Controller or Proxy Controller is not directly determined by the number of assets it manages, but by the number of Access Points (APs) it is supporting. A simple view of Access Points is that they are a connection, but it is a little more complex in that they are also a reflection of the way Ops Center internally models its data structures. The actual load on any given Enterprise Controller or Proxy Controller closely tracks the number of Access Points it is managing. The number of Access Points seen by the Enterprise Controller can differ depending on whether an asset is connected via a single Proxy Controller or multiple Proxy Controllers. Therefore, the Access Point count on the Enterprise Controller will be higher than the actual number of managed assets if assets are managed by multiple Proxy Controllers. When the documentation refers to Enterprise Controller/Proxy Controller capacity and uses the term "Asset", it is actually counting "Access Points". Let's look at a couple of examples to make this clearer:

Physical Server ILOM

In this example, a single physical server has multiple data collection methods. A Proxy Controller can gather LOM data by accessing the service processor and by using the agent running inside the operating system.
If the same Proxy Controller is used to access a single asset, the two data feeds are consolidated into a single Access Point. The Enterprise Controller's total load will be a single Access Point. If different Proxy Controllers are used to access a single asset, each Proxy Controller will record it as a separate Access Point. The Enterprise Controller's total load will be the sum of both Proxy Controllers, for a total of 2 Access Points.

LDOMs

Another example would be an LDOM guest, where we obtain data from the Control Domain agent and the agent running inside the LDOM guest OS. Once again, if both data feeds are via a single Proxy Controller, they only count as 1 Access Point on both the Proxy Controller and the Enterprise Controller. Whereas, if each data feed is via a separate Proxy Controller, they each count as 1 Access Point on each Proxy Controller, and the total Access Point count on the Enterprise Controller will be 2. With the release of Enterprise Manager Ops Center 12.2.2, we have updated and renamed the Enterprise Manager Ops Center Sizing and Performance Guide (https://docs.oracle.com/cd/E40871_01/doc.122/e57052/toc.htm). This revised document is easier to understand and has been updated to reflect current releases of Ops Center. Note: Since the term "Access Point" was not considered to be commonly understood, the term "Asset" has been used in the documentation. To make the counting of Access Points easier, we have added a program to the OCDoctor toolbox. This program, "AssetCount.sh", is new in OCDoctor version 4.45 and can be found on any EC/PC or system that has an agent deployed. The path to the OCDoctor toolbox is /var/opt/sun/xvm/OCDoctor/toolbox.
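The consolidation rule described above can be sketched in a few lines of code. This is a hypothetical illustration of the counting rule, not the actual OCDoctor algorithm: feeds for the same asset through the same Proxy Controller collapse into one Access Point, while feeds through different Proxy Controllers each count, and the EC total is the sum over all proxies.

```python
# Hypothetical sketch of the Access Point counting rule described above
# (an illustration only, not the OCDoctor AssetCount.sh algorithm).

def count_access_points(feeds):
    """feeds: list of (proxy_controller, asset) pairs, one per data feed.
    Feeds for the same asset through the same proxy consolidate into a
    single Access Point; feeds through different proxies do not."""
    per_proxy = {}
    for proxy, asset in feeds:
        per_proxy.setdefault(proxy, set()).add(asset)
    ec_total = sum(len(assets) for assets in per_proxy.values())
    return ec_total, {p: len(a) for p, a in per_proxy.items()}

# Physical server: ILOM feed and OS-agent feed via the SAME proxy -> 1 AP.
print(count_access_points([("pc1", "server1"), ("pc1", "server1")]))
# -> (1, {'pc1': 1})

# The same two feeds via DIFFERENT proxies -> 2 APs on the EC.
print(count_access_points([("pc1", "server1"), ("pc2", "server1")]))
# -> (2, {'pc1': 1, 'pc2': 1})
```

The same rule explains the LDOM example: an LDOM guest seen via its Control Domain's proxy and via its own OS agent's proxy contributes 2 Access Points to the EC total when those are different proxies.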
# /var/opt/sun/xvm/OCDoctor/toolbox/AssetCount.sh
       AssetCount.sh [-v] { standard | machine | agent | all }
       standard - show Asset count
       machine - show machine output with Assets names
       agent - show information about agents, agentless Assets, Service Processors
       all - show all the information together
       -v : verbose output, present additional notes around the counting algorithm
       (Version 1.0 2015/01/08)
#

Let's just look at the standard output (we are running this on the Enterprise Controller). The output shows:
- The total Access Point count for the EC (72)
- The number of Access Points for each type of asset, for each Proxy Controller (Note: the total Access Point count for each Proxy Controller is labeled as Assets)

root@ec1:~# /var/opt/sun/xvm/OCDoctor/toolbox/AssetCount.sh standard
EC 72
Proxy Assets Zones Ldoms OVMGuests Servers Storages Switches ExadataCells MSeriesChassis MSeriesDomain
------------------------------------------------------------------------------------------------------
pc4   32     5     25    0         2       0        0        0            0              0
pc1   28     0     26    0         2       0        0        0            0              0
pc0   12     2     4     0         6       0        0        0            0              0
Use option '-v' to see additional notes on the counting algorithm.
#

You can also use the "machine" option to list out which asset is attached to which proxy.
# /var/opt/sun/xvm/OCDoctor/toolbox/AssetCount.sh machine
EC 72
Proxy 32 pc4
Zones 5 S11zone101 S11zone102 S11zone100 S11zone103 S11zone104
Ldoms 25 stdldom21 stdldom34 stdldom36 stdldom22 stdldom45 stdldom47 stdldom25 stdldom43 stdldom49 stdldom30 stdldom23 stdldom29 stdldom42 stdldom20 stdldom32 stdldom46 stdldom44 stdldom26 stdldom31 stdldom27 stdldom48 stdldom40 stdldom35 stdldom41 stdldom28
OVMGuests 0
Servers 2 pc4
...
Proxy 28 pc1
Zones 0
Ldoms 26 stdldom21 stdldom34 stdldom36 stdldom22 stdldom45 stdldom47 stdldom25 stdldom43 stdldom49 stdldom30 stdldom23 stdldom29 stdldom42 stdldom33 stdldom20 stdldom32 stdldom46 stdldom44 stdldom26 stdldom31 stdldom27 stdldom48 stdldom40 stdldom35 stdldom41 stdldom28
OVMGuests 0
Servers 2 pc1
...
#

You can see this would be incredibly verbose in a large environment (I have truncated it here). You can clearly see that the LDOMs (stdldomXX) are being reported by both "pc1" (via Control Domain) and "pc4" (via OS agent). Note the differing LDOM count on "pc1" and "pc4": as "stdldom33" has no OS or agent on it, it only reports against "pc1". You can also use the "agent" option to display agent/agentless/LOM totals for each Proxy Controller.

# /var/opt/sun/xvm/OCDoctor/toolbox/AssetCount.sh agent
EC 72
Proxy Agents Agentless SPs
--------------------------
pc4   25     2         0
pc1   1      1         0
pc0   5      5         5
Use option '-v' to see additional notes on the counting algorithm.
#

In addition, OCDoctor.sh --troubleshoot will also check the number of configured Access Points and compare it against the recommended Access Point count. It will print a warning at approximately 90% of the recommended capacity and a warning if you exceed the recommended capacity. Armed with the information above, you should be able to better design your Ops Center deployments and to monitor the growing load on the Enterprise Controller/Proxy Controllers as you add more assets. Regards, Rodney



Webcast: Zero to Manageability in One Hour—Build a Solid Foundation for Oracle Enterprise Manager 12c

Wednesday, January 21, 2015, 1:00 p.m. EST | 10:00 a.m. PST. The goal in every Oracle Enterprise Manager 12c rollout is to take it from zero to manageability in the shortest possible time. This presentation will show you how to accomplish this feat. Oracle experts will demonstrate how to properly architect and deploy Oracle Enterprise Manager 12c, including designing a highly available and scalable environment. Through this demonstration, a list of essential techniques and tips compiled by Oracle Enterprise Manager Development's Strategic Customer Programs team will also be shared. Topics such as users, roles, groups, templates, and incidents will be discussed, plus key architectural decisions. By attending this webinar, you will learn how to:
- Organize targets, notifications and users properly
- Configure for best practices after the install is complete
- Properly plan and architect an Oracle Enterprise Manager 12c environment

Featured Speaker: Courtney Llamas, Principal Member of Technical Staff, Oracle
Register Now
Stay Connected: Twitter | Facebook | YouTube | Linkedin | Newsletter Download the Oracle Enterprise Manager 12c Mobile app



New Enterprise Manager Release Delivers Adaptive Private PaaS

We are pleased to announce an update to Oracle Enterprise Manager Cloud Control 12c Release 4. The update is now available on OTN. So what exactly is adaptive private PaaS? Recent releases of Enterprise Manager have expanded capabilities around Platform as a Service (PaaS) delivery in your private cloud. In particular, the EM Cloud Management Packs have focused on two critical areas for Oracle customers: Database as a Service (DBaaS) and Middleware as a Service (MWaaS). In this release, these PaaS capabilities have become more adaptive to complex, rapidly growing environments. Let's look at 3 areas where database and middleware users and managers will benefit.

Controlling Expanding Database as a Service Environments

Rapid adoption of database as a service can lead to even faster growth in the number of database instances and the number of database versions and configurations. This can severely impact your management costs and could even cripple your database as a service initiative. The new release enhances our solution to this problem:
- Configuration standardization with integrated advisory, to detect differences across databases and eliminate configuration drift
- Database fleet patching using minimum downtime techniques, to bring database configurations back into compliance
- Rules for custom placement, to intelligently find a suitable target for database placement, based on current load, current population and placement constraints

A database as a service approach can improve service to database users while simultaneously reducing database management costs.

Developing More Rapidly, with Increased Security

Agile application development and testing requires convenient access to up-to-date test data. The Enterprise Manager Snap Clone feature gives DBAs, developers and QA engineers direct access to self-service cloning, so they can create fully functional copies of production databases within minutes.
This release introduces several exciting new Snap Clone capabilities:
- Continuous data refresh from the source database. As your production system gets updated, you can continuously refresh your test data.
- Integrated data masking, subsetting and patching. Use the Enterprise Manager Data Masking and Subsetting Pack together with Snap Clone to keep your test databases lean and free of sensitive information, and keep them up to date with the latest PSUs and patch sets.
- Restore a database to a previous point in time with a convenient calendar view.
- Snap Clone support on EMC VMAX and VNX Block Storage. This adds to Snap Clone's native support for Oracle ZFS Storage Appliance and NetApp Storage Appliance, in addition to generic support of other storage systems.

Software developers can also take advantage of new test-to-production (and reverse) cloning of SOA, OSB and WebCenter environments with application artifacts automation.

Flexible APaaS Service Catalogs

If you're providing middleware (e.g. an application platform) as a service to application developers, you now have additional ways to adapt these services to developers' needs:
- More APaaS catalog options. In addition to Java apps and WebLogic Server, you can offer SOA, OSB and WebCenter in your self-service portal for easy, template-based provisioning.
- Updated REST API support for new service catalog options, for easy integration with 3rd party orchestration tools and service desks.
- Sharing of service instances among self-service users, for an efficient, consolidated platform on which to provision middleware services.

For a more comprehensive list of updates, please visit Database as a Service and Middleware as a Service on OTN. In addition, we will post several articles about the new DBaaS and MWaaS capabilities on this blog over the next few weeks.



Enterprise Manager Ops Center - The Power of Groups

To Group or not to Group

Any customer with more than a handful of servers will struggle to take advantage of Ops Center's full features unless they embrace the use of groups. The ability to group a number of like servers into a logical target that can then be operated on means you can do a single action instead of running the same action hundreds of times, once for each server in the group. Grouping also allows you to view and manage your environment in line with your current organizational units and business practices.

What can be done with Groups

Groups avoid the need to select individual servers and can be used as a target for:
- Update profiles (Patching) - Update patches/packages and files, and apply pre/post actions
- Operational plans - Run a script, update config files, etc.
- Perform actions - Reboot/halt/refresh/power-off/power-on actions on all assets in the group
- Monitoring profiles - Apply customized monitoring profiles
- Reporting - Run a single report that includes multiple servers

Groups can also:
- Display membership - The Membership Graph or the asset tree view both show the servers that make up a group
- Display incidents/alerts - Groups work as a roll-up point for incidents on any server in the group
- Display top CPU/Memory/Network consumers - The "Asset summary" of a group shows top consumers of CPU/Memory/Network resources
- Restrict the assets that a given user has access to in Ops Center

Types of Groups

Groups are totally logical constructs. An asset (Server, OS, Zone, LDOM) can be a member of as many groups as you like. Deleting a group does not delete the assets it contains. While most often groups will contain assets of all the same type (e.g. system LOMs), as this will give you a group where an action like "power off" makes sense for all the members of the group, it is also possible to create a group that is made up of differing asset types, e.g. all the assets (OS/LOM/Zones) that are part of a physical server.
This type of group would normally be used to restrict the permissions of users so that they can only view/manage the servers for which they are responsible. A key fact to remember when thinking about groups is that an asset that is a member of one group is not precluded from being a member of other groups.

Ops Center Predefined Groups

As a starting point, Ops Center provides a number of predefined groups found under the [ Navigation ==> Assets ] menu. While most of the groupings are what you would expect, there are a few that require a little more explanation.

Standard Views
- [ All Assets ] - Not really a group, as everything turns up here
- [ Engineered Systems ] - A grouping of all discovered Engineered Systems (SPARC SuperCluster). Note that each Engineered System is also its own sub-group
- [ Operating Systems ] - Grouping based on OS type, release and version
- [ Servers ] - Grouping based on physical servers
  - SPARC
    - M-Series - M[3/4/5/8/9]000, M6 and M10 servers
    - Other SPARC - servers that are not sun4u or sun4v architecture, or non Oracle/Sun servers
    - U-Class - servers that have sun4u architecture CPUs (V240/V490/SF6800 etc.)
    - V-Class - servers that have sun4v architecture CPUs (T1000/T2000/T5XX0/T3/T4/T5 etc.) - not V-series servers as you might first think
  - x86
    - Other x86 - non Oracle/Sun servers
    - x64 - 64-bit servers
    - x86 32-bit - 32-bit servers
- [ Chassis ] - 6000/8000 blade based chassis and their server blades
- [ Network Switches ] - Managed InfiniBand and network switches. Ops Center only manages a limited number of switch models, and these will normally be found as part of an Engineered System (Exadata/Exalogic/SPARC SuperCluster).
- [ Racks ] - Both Engineered System racks and manually declared racks. It is not commonly known that you can declare all the racks in your data center in Ops Center and place all your servers in their respective racks, giving you a useful data center view.
All the predefined groups are useful, but as you can see, they are based on broad brush physical characteristics of a server and its OS. There is no allowance for how you actually use your servers. For that you will need to build your own "User Defined Groups".

User Defined Groups

User Defined Groups are an extremely powerful addition to Ops Center and allow you to model your application, organizational units and business constraints into Ops Center's management interface. Basically, it makes Ops Center capable of working much more in alignment with the way your business operates. Before we go on to how to create User Defined Groups, let's go over, in a little more detail, what you could use them for:
- Applications - create a group of all your web servers to report on patch levels, apply patches, update application configurations, restart applications, and list top resource consumers.
- Prod/Dev/Test - create a group based on production status, so you can apply differing monitoring/alerting profiles, produce management reports and restrict admin access.
- Admin By - create a group of all the servers that an admin (or admins) is responsible for, so they can quickly respond to alerts, or so you can limit the functions they are authorized to perform.
- Patching - create a group based on the servers that are in the 1st round of patching, so you can easily and consistently patch, run before and after patching reports, and maintain consistent patch levels across key applications.

These are just a few of the many things for which groups can be used. Setting up groups will greatly decrease your day to day workload and increase the manageability of your environment. Without the use of grouping, it is unlikely that you will be able to scale your Ops Center environment efficiently beyond about 30 servers.
Creating a User Defined Group

First select the "Create Group" action [ Navigation ==> All Assets ==> Create Group ].

Static Groups

Static groups are just as the name suggests: you define a group and place static members in it. The default action of the "Configure Group" wizard is to create a static group; as long as the "Configure group rules" checkbox is unchecked, this will be a static group. Give the group a name (mandatory), a description (optional), and one or more group tags (optional), then click "Next" and "Finish" to complete the wizard and launch the job that creates the group. Tagging is another powerful feature that will be the topic of another blog, but in summary, it is a way of storing an arbitrary tag (value pair) with an asset or group, which means you can store any important information with the asset, such as asset number, production status, etc. Now, one by one, navigate to your servers and manually add each server to the group you have created. Select the individual server's page and select the "Add Asset to Group" action. Select the group you want to add to (our example group is "Static Group") and then click the [Add Assets to Group] button.

Dynamic (Smart) Groups

Dynamic (smart) groups are once again much as the label says. An asset (server/OS etc.) will become part of the group based on it matching one or more criteria. The criteria are evaluated every time the group is accessed, so if you deploy a new server, create a zone or update any other matched attribute, it will change the group membership. The next time you access the group, its membership will be automatically updated to reflect the current view of the environment. There is a large number of attributes that can be used to build criteria, and the criteria can be combined to make complex grouping rules. There is more than enough to discuss on building these rules for another blog, but today let's just go with a single example to give you a feel for the capabilities of dynamic grouping.
We will launch the "Create Group" wizard, as we did for the static group, but this time we will give it a more descriptive name and description. Last but not least, we will check the "Configure group rules" check-box, which will make the group we create a dynamic group. Rules can be as simple as "hostname starts with prod" or as complex as multiple rules, each with multiple criteria matches. This is why I will be going into more detail on building these rule sets in another blog in the next few weeks. For this example, I have chosen a moderate level of complexity: a single rule that only matches assets having all 4 of the following attributes:

OS Version contains 11 (I also could have used Version equals 5.11)
Has an IP address on the subnet
Is a Non Global Zone
Is managed by Ops Center (it would be unusual not to be in a managed state, but a Proxy Controller in an NGZ is an example of a non-managed asset)

Click [Next] to see the preview screen and to check that we matched the assets we want. You can see that we have matched 4 Solaris 11 zones. Now let's see how that looks in the BUI [Navigation ==> Assets ==> All User Defined Groups (pull-down)]. You see we have our static group and the dynamic group we have just created. OK, let's create a second group, but this time for managed Non Global Zones of the network. Create a new name and description, and configure our rule, but this time look for Non Global Zones on the network. The preview shows no matching hosts, which in this case is correct, as I have not deployed a zone on that network yet. Finish the wizard and now let's look in the BUI to see what we have. Checking the [All User Defined Groups] pull-down, we now see our static group and 2 different S11 NGZ groups, one with 4 members and one with no members. (I was not quite consistent with the group naming, but I could fix that using the [Modify Group] action.)
Now if I go off and deploy a few new zones, we can see what our smart groups look like. I have deployed 2 zones on the first subnet and one more zone on the second. As you can see, the new zones automatically appear in the correct groups.

Dynamic (Smart) Groups - Criteria

There are far too many attributes to go through here (a few are listed below) and I will be writing a further blog to show you how to use some of the more useful ones.

Semantic Tags

But I will call out one special criterion (attribute) that is probably the most useful one of all: the Semantic Tag. A Semantic Tag is an arbitrary tag, or a tag/value pair, that can be added to any asset to store descriptive information about that asset. You can add a tag to an asset by simply clicking the [Edit Tag] action.

Examples of Semantic Tags (key only):

  Tag Name    Description
  PRODUCTION  Describes the production status
  SALES       Describes the business unit that owns the server

Examples of Semantic Tag/Value pairs (key and value):

  Tag Name           Value                           Description
  PRODUCTION_STATUS  DEV/UAT/PROD                    The value describes the production status
  Admin_By           admin1@oracle.com               The value describes the name/group of the administrator of the system (could even be their email)
  Business_Unit      SALES/ENGINEERING/ACCOUNTS      The value describes the business unit that owns the server
  Application        DataBase/Web/ApplicationServer  The value describes the application running on the asset

As you can see, you can create a Semantic Tag to store any information about your server that you require. These tags and tag/value pairs can be used as attributes to create dynamic groups. Configure a group using a "Semantic Tag Key & Value", and the group based on the tag/value pair is ready to use.

Nesting Groups

One final feature of groups is that you can nest them (have a group that contains 1 or more other groups). Create a group as before. This time click the check-box for "Configure subgroups".
Then you must drag the subgroups you want to include to the "Selected Group" icon. Repeat this procedure until you have all your required groups selected. Now click [Next], [Next] and [Finish], then check what our new group looks like. You can see the S11-Zones group contains both S11 NGZ groups. And by highlighting the S11-Zones group, we can see its membership and incident information for all included assets. Summary I hope this has given you a good understanding of groups and how they can make your use of Ops Center more productive. Regards, Rodney



Enterprise Manager Ops Center - OS provisioning across secondary network interfaces

Description

One of Enterprise Manager Ops Center's core functions is provisioning an OS onto bare-metal servers. If the network you are provisioning across is connected to one of the onboard ports (on the first onboard network chip), all is well and provisioning will work as expected. This covers more than 95% of customers, but if you are trying to provision across a network that is connected to a port on a card in an expansion slot (or on a second onboard network chip), your provisioning job will fail due to an incorrect MAC address being set in the JET/AI/Kickstart server. If you are one of the people who has hit this issue, please read on. If you are provisioning over an onboard NIC port, stop reading now and happy OS provisioning.

The Cause

When Ops Center discovers the ILOM (ALOM/XSCF and all the other various LOMs) of a server, only certain pieces of information can be collected from the LOM while the OS is running. We maintain a policy of creating as little impact as possible during discovery, so we do not force you to shut down the OS during discovery.

Information we can collect:
- the number of network interfaces
- the MAC address of the first network interface (port)

Information we can NOT collect:
- the MAC addresses of all the other network interfaces (ports)

Since the LOM only provides the first MAC address, Ops Center must calculate the MAC addresses of the remaining network interfaces. Ops Center will get the MAC addresses for the onboard NICs correct, but its calculated MAC addresses will be wrong for any NICs not on the first onboard network chip. Take an example system with 4 onboard network ports (on the motherboard) and an expansion network card in a PCI-E/X slot providing an additional 4 network ports: Ops Center's view of that server, based on the information from the LOM, would not match the physical server.
  Interface                      Name   Calculated MAC (from LOM)   Actual MAC          Correct?
  Number of network interfaces   -      8                           8                   YES
  Interface 0 (onboard)          net0   00:21:28:17:72:b2           00:21:28:17:72:b2   YES
  Interface 1 (onboard)          net1   00:21:28:17:72:b3           00:21:28:17:72:b3   YES
  Interface 2 (onboard)          net2   00:21:28:17:72:b4           00:21:28:17:72:b4   YES
  Interface 3 (onboard)          net3   00:21:28:17:72:b5           00:21:28:17:72:b5   YES
  Interface 4 (PCI-E/X card)     net4   00:21:28:17:72:b6           00:14:4f:6b:fd:28   NO
  Interface 5 (PCI-E/X card)     net5   00:21:28:17:72:b7           00:14:4f:6b:fd:29   NO
  Interface 6 (PCI-E/X card)     net6   00:21:28:17:72:b8           00:14:4f:6b:fd:30   NO
  Interface 7 (PCI-E/X card)     net7   00:21:28:17:72:b9           00:14:4f:6b:fd:31   NO

You can confirm that the MAC addresses for an expansion network card have been calculated by looking at the Network tab in the BUI for the LOM object. You can see the displayed MAC addresses for GB_4 and GB_5 are just a simple increment of 1 from that of GB_3, which should not be the case, as GB_4 and GB_5 are on a PCI-E/X expansion card. While most Oracle (Sun) servers have 4 onboard network interfaces of the same type, some servers have 2 x 1Gbit interfaces and 2 x 10Gbit interfaces; in that case, only the first onboard network interfaces will display the correct MAC addresses. It should be noted that if you have discovered both the LOM and the running operating system, Ops Center will have identified the correct MAC addresses for all the network interfaces, as it combines the information gathered from the LOM and the operating system to display the full picture (correct values). Unfortunately, you cannot rely on these when re-provisioning: part of the OSP job deletes the OS object (we are re-provisioning it, after all), and the cached MAC address values may expire before the JET/AI/Kickstart server is configured.
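The wrong addresses in the table follow directly from the increment scheme. Here is a minimal sketch of that naive calculation (hypothetical illustration, not Ops Center's actual code):

```python
def mac_to_int(mac: str) -> int:
    """Parse a colon-separated MAC address into an integer."""
    return int(mac.replace(":", ""), 16)

def int_to_mac(n: int) -> str:
    """Format an integer back into a colon-separated MAC address."""
    h = f"{n:012x}"
    return ":".join(h[i:i + 2] for i in range(0, 12, 2))

def calculated_macs(first_mac: str, count: int) -> list:
    """Derive every interface's MAC by incrementing the first one --
    correct for onboard ports, wrong for expansion-card ports."""
    base = mac_to_int(first_mac)
    return [int_to_mac(base + i) for i in range(count)]

macs = calculated_macs("00:21:28:17:72:b2", 8)
# macs[0..3] match the onboard ports, but macs[4] is "00:21:28:17:72:b6",
# not the expansion card's real address 00:14:4f:6b:fd:28.
```

The increment is a reasonable guess for ports on the same network chip, which is exactly why only the expansion-card rows in the table come out wrong.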
The Impact

If you were to provision across net0, net1, net2, or net3, all would work well, but if you selected net4 or above for provisioning, the job would fail due to a timeout in the "Monitor OS Installation" task: the JET/AI/Kickstart server would have been configured with the wrong MAC address and so would not have responded to the OSP request. Please note that a misidentified MAC address is not the only possible cause of a timeout in the "Monitor OS Installation" task. This error only indicates that some step of the OS provisioning has failed and can be caused by a number of different issues.

The Solution

There are 2 ways of provisioning to secondary network interfaces.

1) Use the MAC address method (simplest method - only available in 12.2.0+)

In Ops Center 12.2.0, we introduced an option to specify the MAC address to provision across directly in the BUI. When running the "Install Server" action/wizard, the "Boot Interface Resource Assignments" page has a check-box [Identify Network Interface by MAC Address]. Selecting this check-box changes the wizard from using netX interface names that rely on the discovered MAC address to letting you manually enter the MAC address. The entered MAC address is used to set up the JET/AI/Kickstart server and to interrogate the OBP of the server to work out the netX interface that is required for wanboot. It is as easy as that, and your provisioning job will progress as normal.

2) Overload the MAC address before provisioning (the way we did it before we had method #1)

Assuming you have already discovered and managed the system's LOM, you can overload (update) the discovered/calculated network interface MAC addresses. In the BUI, select "Assets" ==> "All Assets" ==> "Add Assets", then choose "Manually declared server to be a target of OS provisioning". While this wizard can declare multiple servers using an XML file, in this example we will just be doing a single server.
This wizard normally lets us declare a server's network interfaces, but as some of the MAC addresses we will be declaring are already part of an existing discovered server LOM, Ops Center will identify the overlapping MAC address and merge this data with the existing server. The matching interfaces will stay the same, but the new MAC addresses will overload (replace) the incorrect addresses. Select "Declare a single server", then click the [Green Plus] icon. Enter the port name [GB_X] and the actual MAC address. Repeat this for all the interfaces, up to and including the one you want to provision across. Do not skip any interfaces, as the interface numbering is based on the order the entries are stored in the database. When you have entered all required interfaces, fill in the server details. Once completed, click the [Declare Asset] button and wait for the job to complete; normally this will just take a few seconds. You can check in the BUI that the updated MAC addresses have been applied. Now just run your provisioning job as normal, and the correct MAC address will be configured in the JET/AI/Kickstart server. As you can see, if you have updated your Enterprise Controller to Ops Center 12.2.0 or higher, option #1 is the simpler method. All the best with your OS provisioning, Rodney


Best Practices

Get Compliant with Oracle Java VM Database PSU OCT 2014 using EM12c

Check for compliance and automate patching of your Oracle Database fleet using EM12c. Along with its regular quarterly database PSU/SPU/CPU, in October 2014 Oracle released a Java VM PSU patch which is recommended to be applied to all databases in your fleet (for more information see support note 1929745.1, which explains this in detail). The mandate is primarily to apply the patches to databases that use the Java VM option. Ideally, you would apply it against all databases, so that if a new database that uses Java VM is created in the future, it is covered. Oracle Enterprise Manager 12c provides Compliance Management and automated patching features for Oracle Databases. Using these you can quickly identify databases in your fleet that need the patch, automate its application, and track the process across your fleet. To begin, download the starter kit here. It contains utilities to jump-start your compliance reporting by providing a readily importable compliance standard and a step-by-step guide for the features, divided into 2 parts.

Part 1: Compliance Standards and Reports: The standard contains rules that use SQL-based configuration search to identify the Oracle Database instances in your fleet that use the Java VM/JServer option. It further runs a patch check to see whether they have either the Java VM PSU or the Mitigation patch.
The Compliance dashboard lists the databases for you to start planning your patching activity. Follow the instructions in the document to import the compliance standard.

Part 2: Patching the Databases: The Oracle Java VM OCT 2014 PSU is applicable to all database deployment types, including standalone and RAC cluster databases. The Java PSU 2014 is a non-rolling patch, so applying it to a RAC database needs to be done in non-rolling (parallel) or complete-downtime mode. Very importantly, the Java PSU 2014 needs either the Oracle DB PSU (OCT 2014) or the Oracle DB SPU/CPU (OCT 2014) as a prerequisite patch. If you cannot apply the Java PSU patch but want to handle the vulnerability, you can apply the Mitigation patch and lock down Java in the database. EM12c supports patching the Oracle Database family of products, which you can take advantage of using different approaches. The possible approaches are:

Option 1: Apply the Oracle JAVA VM PSU OCT 2014
  Applicable versions: see MOS note 1929745.1. If you are on other versions, either pick up the Mitigation patch or plan to move to an applicable version and then pick up these patches.
  For standalone database environments:
    1. Apply along with Oracle Database Patch Set Update (PSU) OCT 2014 (recommended)
    2. Apply along with Oracle Security Patch Update (SPU) OCT 2014
  Notes: The Oracle JAVA VM PSU needs the DB PSU or DB SPU of OCT 2014 as a prerequisite. In EM12c you can multi-select targets and apply both the JAVA VM PSU and its prerequisite patch together; EM handles the internal ordering and the post-patching steps.
  For Oracle RAC environments:
    1. Apply along with Oracle Grid Infrastructure Patch Set Update (GI PSU) OCT 2014 (recommended)
    If you want to skip GI and apply just to the RAC DB:
    2. Apply along with Oracle Database Patch Set Update (PSU) OCT 2014
    3. Apply along with Oracle Security Patch Update (SPU) OCT 2014
  Notes: EM12c supports patching the complete cluster, including Grid Infrastructure and Oracle Databases. Selecting the GI PSU applies the patch to the complete environment in a single go. The patching is done in "Parallel" mode, where all the database instances are shut down completely, as required by the Oracle JAVA VM PSU.

Option 2: Apply the Mitigation Patch only
  Applicable versions: see MOS note 1929745.1. Applicable to standalone database and RAC database environments.
  Notes: The patch application runs the SQL to lock Java. It has no prerequisites. For RAC environments choose ROLLING mode.

The appendix section in the document covers additional features available to enhance the automation at scale. We recommend using EM12c R4, although support is available with EM12cR3; the kit contains the compliance standard XML for EM12.1.0.3 users as well. Get started! (Getting Started Kit Download - zip, 3.69 MB) Note: these features are licensed under the Enterprise Manager 12c Database Lifecycle Management pack.

Reference Links
Oracle Recommended Patches -- "Oracle JavaVM Component Database PSU" (OJVM PSU) Patches (Doc ID 1929745.1)
Technical Reference Guide MOS Note on this topic - 1936634.1


Best Practices

Using JVM Diagnostics (JVMD) to help tune production JVMs

Contributing Author: Shiraz Kanga, Consulting Member of Technical Staff, Oracle

Tuning a production JVM involves more than merely adding more RAM to it via the -Xmx parameter. It depends upon an understanding of how your application truly behaves in a production environment. Most JVM tuning is done by developers with a simulated load in their development or QA environment, which is unlikely to be truly representative of the production load running on production hardware. One of the tools that actually captures real-world production data is JVM Diagnostics (JVMD), so it is a good idea to use data collected by JVMD for tuning your production JVMs. Note that JVMD is a component of Oracle Enterprise Manager, licensed as part of both the WebLogic Server Management Pack and the Management Pack for Non-Oracle Middleware.

Figure 1. Heap Utilization and Garbage Collections for a specific JVM

In this document we primarily address the HotSpot JVM. There are several aspects of tuning this JVM that we will look into.

Tuning Heap Size

The main parameters needed to tune heap size are:
-Xms<n>[g|m|k] is the initial and minimum size of the Java heap
-Xmx<n>[g|m|k] is the maximum size that the Java heap can grow up to

Figure 2. Heap after GC and Garbage Collection Overhead for a specific JVM

The Java heap size here refers to the total size of the young and old generation spaces. To start, take a look at the heap usage chart (Figure 1) of your production JVM under maximum load in the JVMD Performance Diagnostics page. You should see some patterns in the minimum and maximum heap sizes over time. You can use this data as a rough guide for your choice of -Xms and -Xmx, with a reasonable amount of padding. After setting these you should start monitoring the garbage collection charts of your production JVMs (Figure 2) in the JVMD Live Heap Analysis page.
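The sizing heuristic above can be made concrete with a quick calculation. This is an illustrative sketch only; the 20% padding figure is my assumption, not a JVMD recommendation:

```python
def suggest_heap_flags(observed_min_mb: int, observed_max_mb: int,
                       padding: float = 0.20) -> str:
    """Turn the heap-usage range observed in JVMD under peak load into
    -Xms/-Xmx values, with a padding factor for headroom."""
    xms = int(observed_min_mb * (1 + padding))
    xmx = int(observed_max_mb * (1 + padding))
    return f"-Xms{xms}m -Xmx{xmx}m"

# e.g. a JVM observed using between 500 MB and 2000 MB under peak load:
suggest_heap_flags(500, 2000)  # "-Xms600m -Xmx2400m"
```

Whatever padding you choose, validate the result against the GC charts afterwards rather than treating the numbers as final.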
It is useful to look into the JVMD metric called "Heap use after GC", which provides a good reflection of the actual amount of heap memory being used by your application. Ideally this metric should remain relatively steady over time, with only a few full garbage collections occurring. If there are too many full garbage collections, the performance of your production application is impacted, since a full GC blocks application threads while the entire heap is scanned. You can monitor this with the JVM GC Overhead% chart on the same page of JVMD; garbage collection overhead is the percentage of total time spent in garbage collection. Increasing -Xmx can make full collections happen less frequently, but really it is time to dig deeper into your tuning options. The key questions you need to answer are: how frequently does garbage collection take place, how long does each collection take, and what is the actual memory used (i.e. heap after GC)? Also be sure that you NEVER make the heap size larger than the available free RAM on your system, as performance will drop sharply once RAM starts getting swapped to disk.

The Sun HotSpot JVM relies on generational garbage collection to achieve optimum performance. The -XX:SurvivorRatio command-line parameter can further help in tuning garbage collection. The Java heap has a young generation for newly created objects and an old generation for long-lived objects. The young generation is further subdivided into the Eden space, where new objects are allocated, and the Survivor spaces, where new objects that are still in use can survive their first few garbage collections before being promoted to the old generation. The Survivor Ratio is the ratio of Eden to Survivor space in the young object area of the heap. Increasing this setting optimizes the JVM for applications with high object creation and low object preservation.
In applications that generate more medium- and long-lived objects, this setting should be lowered from the default, and vice versa. For example, -XX:SurvivorRatio=10 sets the ratio between each survivor space and the Eden space to 1:10. If survivor spaces are too small, they will overflow directly into the old generation; if they are too large, they will sit empty. At each GC, the JVM determines the number of times an object can be copied before it is tenured, called the tenure threshold. This threshold should be set to keep the survivor space half full.

Most tuning operations represent a trade-off of some type or another. In the case of garbage collection, the trade-off usually involves memory used vs. throughput and latency. The throughput of a JVM is measured in terms of the time spent doing garbage collection vs. the time spent outside of garbage collection (referred to as application time). It is the inverse of the GC overhead mentioned above and represents the amount of work done by an application as a ratio of time spent in GC. Throughput can be tuned with -XX:GCTimeRatio=99, where 99 is the default and represents a 1% GC overhead. Latency is the amount of time delay that is caused by garbage collection. Latency for GC pauses can be tuned by specifying the maximum pause time goal with the command-line option -XX:MaxGCPauseMillis=<N>. This is interpreted as a hint that pause times of <N> milliseconds or less are desired; by default, there is no maximum pause time goal. If a pause time goal is specified, the heap size and other garbage-collection-related parameters are adjusted in an attempt to keep garbage collection pauses shorter than the specified value. Note that these adjustments may cause the garbage collector to reduce the overall throughput of the application, and in some cases the desired pause time goal cannot be met.
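The two relationships described above are simple ratios. A small sketch to make the arithmetic concrete (the formulas follow the HotSpot tuning documentation; the code itself is illustrative):

```python
def survivor_space_size(young_gen_mb: float, survivor_ratio: int) -> float:
    """With -XX:SurvivorRatio=N, Eden:survivor is N:1 and the young
    generation holds Eden plus two survivor spaces, so each survivor
    space is young / (N + 2)."""
    return young_gen_mb / (survivor_ratio + 2)

def gc_overhead_target(gc_time_ratio: int) -> float:
    """-XX:GCTimeRatio=N asks for an application:GC time ratio of N:1,
    i.e. a GC overhead of 1 / (1 + N)."""
    return 1.0 / (1 + gc_time_ratio)

survivor_space_size(120, 10)   # a 120 MB young gen: 10 MB per survivor space
gc_overhead_target(99)         # the default: 0.01, i.e. 1% GC overhead
```

So lowering GCTimeRatio to 19, for example, tells the JVM you can tolerate up to 5% of total time in GC.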
Some lesser-known options concern the permanent generation space, which is used by the JVM itself to hold metadata, class structures and so on:
-XX:PermSize=<n>[g|m|k] is the initial and minimum size of the permanent generation space.
-XX:MaxPermSize=<n>[g|m|k] is the maximum size of the permanent generation space. If you ever get the message java.lang.OutOfMemoryError: PermGen space, then your application is loading a very large number of classes and this should be raised.
-Xss<n>[g|m|k] is the size of the thread stack. Each thread in a Java application has its own stack. The stack is used to hold return addresses, arguments to functions and method calls, and so on. The default stack size setting for a thread in Java is 1MB. In a highly multi-threaded system like an application server, at any given point in time there are multiple thread pools and many threads in use, so this may need to be reduced. Since stack space has to be allocated in contiguous blocks, if the machine is being used actively and there are many threads running in the system, you may encounter an OutOfMemory error even when you have sufficient heap space. Recursive code can quickly exhaust the stack; if you use such code then you may need to increase the -Xss setting. However, if you see java.lang.OutOfMemoryError: unable to create new native thread, then you may have too many threads or each thread may have too large a stack, so you may need to decrease it.

Tuning the Garbage Collection Algorithm

Garbage collection is expensive. Generational garbage collectors divide the JVM memory into several spaces.
Eden space: all objects are placed here when first created.
Survivor spaces: one or more regions where objects mature.
Tenured space: where long-lived objects are stored.
Permanent generation: this area is used only by the JVM itself to hold metadata, such as data structures for classes, methods, and interned strings.

One thing that people often forget to try is to lower the amount of garbage being created in the first place. There are a lot of ways to do this, specific to the application/code being written; it often involves techniques such as using StringBuilder/StringBuffer instead of Strings, lowering the amount of logging, etc.

There are several GC algorithms available in a Java VM. The following command-line options allow you to select a specific GC algorithm:
-XX:+UseSerialGC uses a single-threaded garbage collector for both the young and old generations. (Normally this is a poor choice and should be used only for small Java heap sizes such as -Xmx256m or smaller.)
-XX:+UseParallelGC uses a multithreaded (parallel) garbage collector for the young generation and a single-threaded garbage collector for the old generation.
-XX:+UseParallelOldGC uses a multithreaded garbage collector for both the young and old generations.
-XX:+UseParNewGC enables a multithreaded young generation garbage collector.
-XX:+UseConcMarkSweepGC enables the VM's mostly concurrent garbage collector, and also auto-enables -XX:+UseParNewGC. (Use this if you cannot meet your application's worst-case latency requirements because full garbage collections take too long.)
-XX:+UseG1GC enables the Garbage First (G1) collector (available in Java 7, and in the latest releases of Java 6).

In practice, the default on server-class machines in both Java 6 and Java 7 is the parallel collector. Changing the algorithm requires detailed analysis of the application's behavior; if you see a nice regular sawtooth chart in the heap usage, you may not need any changes at all.
If not, we recommend trying out each GC algorithm under a realistic load and then comparing it to the default algorithm's behavior under the same load. Usually you will find that the default algorithm outperforms the new setting and that there is no reason to change it. As you can see, tuning the JVM and its garbage collectors is largely a trade-off between space and time. If you had infinite heap space, you would never need to collect garbage; conversely, if you could tolerate infinite time delays, you could run a cleanup as frequently as you like and keep the heap compact. Clearly, both of those situations are impossible, and finding the middle ground that is right for you requires a careful balancing act based on understanding how GC works and what the application requires.

References: Java SE 6 HotSpot Virtual Machine Garbage Collection Tuning http://www.oracle.com/technetwork/java/javase/gc-tuning-6-140523.html



Oracle Announces Oracle Enterprise Manager for MySQL Database

To help customers simplify management of complex IT environments, Oracle is extending Oracle Enterprise Manager capabilities to MySQL databases. With this new offering, customers can manage deployments of MySQL, either on-premises or in a cloud. Existing users of Oracle Enterprise Manager can now easily add MySQL to their environments. With this capability, customers can manage their applications and technology stack, including web and departmental applications that rely on MySQL, all from a single console. With Oracle Enterprise Manager for MySQL Database, customers can benefit from:
Auto-discovery of MySQL targets
Availability monitoring
Configuration and metrics collection
Performance dashboards
Configurable thresholds

Watch the Demo:

MySQL Enterprise Edition customers can download Oracle Enterprise Manager for MySQL Database, for MySQL Release 5.5 and higher, which is available on Oracle Linux and other Linux distributions, Microsoft Windows, and Oracle Solaris. Watch the demo for more details.

Stay Connected: Twitter | Facebook | YouTube | Linkedin | Newsletter
Download the Oracle Enterprise Manager 12c Mobile app



Software Configuration Pollution: Is the invisible smog covering your Data center too?

Source: TIME Magazine

Data center technologies are being pushed to new frontiers by an unprecedented explosion in data and big data analytics. The IT Resource Strategy survey from IOUG 2014 ("Efficiency isn't enough: Data centers lead the drive to innovation") talks about the need to relieve the data management department, mired in maintenance activities. It's one of the key revenue pockets to dig into to achieve a competitive edge beyond merely having an efficient IT. The explosion has resulted in an ever-growing database population: the IOUG 2013 survey points out that 28% of respondents have annual database growth of more than 20%, and fewer than 50% have consolidated. It paints a picture of the current state of database deployment as siloed, dispersed, varied and complex.

We (the EM team) looked into these statistics and into what we have been hearing from customer issues. To make more sense of the situation we ran our own closed survey with a bunch of customers: we sent them scripts to collect software configuration information about their database Oracle Homes. The results were shocking. We analyzed the data from the 15,000 Oracle Homes collected. Taking a sample of one customer's database fleet, which had 2196 database Oracle Homes, we found 396 distinct configurations among them. (Plotting this possibly used all 256 colors available in the spectrum.) Beyond the standard dimensions along which software is divided, like product release versions and platforms, they differed at the more troublesome layer: patch levels. We call this "Software Configuration Pollution". There could be different reasons for how and why the pollution was created, like different applications supported, different LOBs, and categorization based on usage. But being in a situation where the software differs at such a scale in patch levels does not give you the comfort of reliable, consistent and performant IT support. Most of it is invisible till disaster strikes.
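The "distinct configurations" figure is just the count of unique (version, platform, patch set) combinations across the fleet. A hypothetical sketch of that tally (home data and patch IDs invented for illustration):

```python
from collections import Counter

# Each Oracle Home reduced to (release version, platform, installed patches).
homes = [
    ("11.2.0.3", "Linux x86-64", frozenset({"patchA"})),
    ("11.2.0.3", "Linux x86-64", frozenset({"patchA"})),
    ("11.2.0.3", "Linux x86-64", frozenset({"patchA", "patchB"})),
    ("11.2.0.4", "Solaris SPARC", frozenset()),
]

distinct = Counter(homes)
len(distinct)            # number of distinct configurations (3 here)
distinct.most_common(1)  # the most widespread configuration in the fleet
```

At fleet scale the same tally is what turns 2196 homes into 396 distinct configurations: the patch-level component of the key is what drives most of the spread.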
Do you want to know if this invisible smog is covering your data center too? Download this quick-scan SQL script & readme. We are going to discuss this further and present new, innovative solutions to handle this invisible "Software Configuration Pollution" problem during the upcoming Oracle OpenWorld 2014. If you are coming to OOW, stop by these sessions. List of sessions: Focus On doc | Quick-add sessions to your calendar (.ics file). Swing by the demo booths @ Moscone South, Left - SLD-107, SLD-102.

Sessions you don't want to miss:

Monday, Sep 29th 2014: Database Software Currency: Using Oracle Enterprise Manager 12c Provisioning and Patching (from Nationwide Insurance), 2:45 PM - 3:30 PM, Moscone South - 301, CON3178
Tuesday, Sep 30th 2014: Oracle Enterprise Manager 12c general session, Drive the Future of Self-Service IT with Oracle Enterprise Manager 12c [GEN8250], 12:00 PM - 12:45 PM, Moscone South - 103
Wednesday, Oct 1st 2014: Databases to Oracle Exadata: The Saga Continues for Oracle Enterprise Manager-Based Patching, 10:15 AM - 11:00 AM, Moscone South - 300, CON8121
Wednesday, Oct 1st 2014: DBaaS 2.0: Rapid Provisioning, Richer Services, Integrated Testing, and More, 3:30 PM - 4:15 PM, Moscone South - 301, CON8016
Thursday, Oct 2nd 2014: Security Compliance and Data Governance: Dual Problems, Single Solution, 1:15 PM - 2:00 PM, Moscone South - 301, CON8015

See you at Oracle OpenWorld! Safe travels.



Oracle OpenWorld: Oracle Enterprise Manager 12c General Session

Heading to Oracle OpenWorld? Be sure to check out the Oracle Enterprise Manager 12c general session, Drive the Future of Self-Service IT with Oracle Enterprise Manager 12c [GEN8250]. Session Abstract: Successful strategies for cloud computing and self-service IT demand a unified management solution that provides visibility, insight, and control across the IT landscape. In this session, key representatives from Oracle Enterprise Manager Product Development will discuss customer and partner experiences in deploying and managing large-footprint private cloud environments encompassing Oracle Applications, Oracle Fusion Middleware, Oracle Database, and Oracle Engineered Systems. In the second part of the session, attendees will get a sneak preview of several exciting new offerings in the Oracle Enterprise Manager family. Don’t miss this opportunity to glimpse the future of Oracle’s systems management offerings. For the complete list of OpenWorld sessions, demos and hands-on labs, read the Oracle Enterprise Manager 12c Focus on Doc for more. Stay Connected: Twitter | Facebook | YouTube | Linkedin | Newsletter Download the Oracle Enterprise Manager 12c Mobile app


Best Practices

Simplify deployment of JVMD Agents to command line Java applications

Contributing Author: Shiraz Kanga, Consulting Member of Technical Staff, Oracle

Most customers of Oracle Enterprise Manager who use JVM Diagnostics use the tool to monitor their Java application servers, such as WebLogic, WebSphere, and Tomcat. In this environment it is fairly easy to deploy the JVMD agent: since it is distributed as a WAR file, you merely deploy it into a running application server using the management GUI or command-line tools, and then you can start monitoring, with no need to restart the app server or modify any startup commands or scripts. However, with other types of Java applications that do not allow code deployment at runtime, such as AWT/Swing or command-line Java applications, those steps are necessary. Modifying startup scripts is complex because each application comes with its own custom, unique launch script. Additionally, the command that actually launches the runtime needs to combine the java command with its related parameters (like -Xmx), the JVMD agent with its own parameters (like console host/port), and the application itself, which may have further custom parameters. This complexity often causes confusion. I recently had customers who needed to monitor Hadoop, HDFS, Zookeeper, Kafka, Cassandra and Solr with JVMD. To tame some of this complexity, I created a simple script-based framework that makes things a bit easier. Feel free to use my approach to quickly set up JVMD with these or any other command-line Java programs, or use it as the basis for your own modifications. The framework modifies the startup scripts supplied with these tools in order to add the JVMD agent. All the code/scripts are attached in a zip file. Both original and modified versions of all changed scripts are included, so you can easily see the modifications I made with a simple diff. Here's how these scripts are set up.
Everything is configured using four environment variables, as shown below:

export JVMD_AGENT_HOME=/home/skanga/servers
export JVMD_MANAGER_HOST=jvmdconsole.us.oracle.com
export JVMD_MANAGER_PORT=3800
export JVMD_UNIQUE_ID=<unique name for each server process>

Here JVMD_AGENT_HOME must contain jamagent-env.sh (from the attached zip file) and jamagent.war (which can be downloaded from your JVMD console). The first three variables are likely to remain unchanged for all the JVMs being monitored, so you can add them directly into jamagent-env.sh if needed. JVMD_UNIQUE_ID will always be unique, so it must not be placed there. However, it has two other modes in which you specify a pointer to the unique ID instead of the ID itself: you can point to either an environment variable or a JVM system property that holds the actual unique ID. If you use one of those modes, you could add this variable to jamagent-env.sh too.

If JVMD_UNIQUE_ID starts with the string "sysprop-", the actual unique ID is read from the JVM system property named by the string following "sysprop-". For example, if JVMD_UNIQUE_ID is "sysprop-server_name" and we have a system property -Dserver_name=MyTestingServer, then JVMD will use MyTestingServer as the JVM's unique identifier.

If JVMD_UNIQUE_ID starts with the string "envvar-", the actual unique ID is read from the environment variable named by the string following "envvar-". For example, if JVMD_UNIQUE_ID is "envvar-server_name" and we have an environment variable server_name=MyTestingServer, then JVMD will use MyTestingServer as the JVM's unique identifier.

Caution: Do not use a dash (minus) character in the name of the environment variable holding the unique ID; use an underscore instead.

Generic Launch Script Modifications

After these four environment variables are set, we need to modify our launch scripts. Make sure you have a backup of all files before you proceed.
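As a hedged sketch of the indirection just described (the real logic lives in jamagent-env.sh from the attached zip; the function name and output format below are invented for illustration):

```shell
#!/bin/sh
# Resolve the effective unique ID from JVMD_UNIQUE_ID, honoring the
# "envvar-" indirection described above. ("sysprop-" cannot be resolved
# in the shell; it is passed through for the JVM agent to resolve.)
resolve_unique_id() {
  case "$JVMD_UNIQUE_ID" in
    sysprop-*)
      # Defer to a JVM system property, e.g. -Dserver_name=...
      echo "sysprop:${JVMD_UNIQUE_ID#sysprop-}" ;;
    envvar-*)
      # Read the named environment variable right now.
      var="${JVMD_UNIQUE_ID#envvar-}"
      eval "echo \"\${$var}\"" ;;
    *)
      # A literal unique ID.
      echo "$JVMD_UNIQUE_ID" ;;
  esac
}
```

For example, with server_name=MyTestingServer exported and JVMD_UNIQUE_ID=envvar-server_name, resolve_unique_id prints MyTestingServer. Note the dashes in values like zookeeper-server are fine; the caution above applies only to the environment variable's name.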
In the main script that you use to launch your Java application, look for a line with a format similar to:

$JAVA $JAVA_OPTS $MAIN_CLASS $MAIN_CLASS_ARGS

and replace it with:

$JAVA $JAVA_OPTS $JVMD_AGENT_INSERT $MAIN_CLASS $MAIN_CLASS_ARGS

In other words, we simply add a $JVMD_AGENT_INSERT just before the name of the main class. If there are multiple such lines, modify them all in the same way. In order to set $JVMD_AGENT_INSERT, we also need to source jamagent-env.sh (with some error checking), so we insert a snippet like this just before the java invocation:

# add JVMD Agent Env settings
[[ -e "${JVMD_AGENT_HOME}/jamagent-env.sh" ]] && source "${JVMD_AGENT_HOME}/jamagent-env.sh" || { echo "ERROR: JVMD_AGENT_HOME undefined or does not contain jamagent-env.sh" 1>&2 ; exit 1; }

NOTE: Everything after the comment above should be on a single line of code in your launch script. This line gets mangled by the blogging software, so it is best to cut and paste it from one of the scripts in the attached zip file.

We will now look at how I used these techniques to add JVMD monitoring to Kafka, Hadoop, Zookeeper, Cassandra and Solr.

1) Kafka 2.8.0-

I used Kafka 2.8.0- and downloaded it directly from the Kafka site. In Kafka, ALL processes are initiated through a common launcher called kafka-run-class.sh in the bin folder. All the other shell scripts (including the built-in Zookeeper) call this one. So this single insertion point is the only place we need to modify in order to add JVMD monitoring to Kafka. Pretty simple.
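Putting both modifications together, a minimal end-to-end skeleton might look like the following. This is a hedged sketch, not part of the attached framework: JAVA, JAVA_OPTS and MAIN_CLASS are placeholders for whatever your real launch script defines, and the launch wrapper function is mine.

```shell
#!/bin/sh
# Minimal launch-script skeleton with the two JVMD insertions applied.
launch() {
  : "${JAVA:=java}"
  : "${JAVA_OPTS:=-Xmx512m}"
  : "${MAIN_CLASS:=com.example.Main}"

  # add JVMD Agent Env settings (kept on one line, as noted above)
  [ -e "${JVMD_AGENT_HOME}/jamagent-env.sh" ] && . "${JVMD_AGENT_HOME}/jamagent-env.sh" || { echo "ERROR: JVMD_AGENT_HOME undefined or does not contain jamagent-env.sh" 1>&2; return 1; }

  # $JVMD_AGENT_INSERT (set by jamagent-env.sh) goes just before the main class
  $JAVA $JAVA_OPTS $JVMD_AGENT_INSERT $MAIN_CLASS "$@"
}
```

Setting JAVA=echo is a quick way to dry-run the script and eyeball the final command line before monitoring for real.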
Using the modified script (inside the attached zip file) you can run the servers as shown below:

TEST - with mods to use JVMD

cd /home/skanga/servers/kafka_2.8.0-
export JVMD_AGENT_HOME=/home/skanga/servers
export JVMD_MANAGER_HOST=jvmdconsole.us.oracle.com
export JVMD_MANAGER_PORT=3800

# start a zookeeper server
export JVMD_UNIQUE_ID=zookeeper-server
./zookeeper-server-start.sh ../config/zookeeper.properties

# start a kafka server
export JVMD_UNIQUE_ID=kafka-server
./kafka-server-start.sh ../config/server.properties

2) Hadoop 2.4.1

The scripts called hadoop, hdfs, mapred and yarn in the Hadoop bin directory ALL need to be modified for JVMD monitoring. Using the modified scripts (inside the attached zip file) you can run all the servers as shown below:

TEST - with mods for hadoop command to use JVMD

cd /home/skanga/servers/hadoop-2.4.1
export JVMD_AGENT_HOME=/home/skanga/servers
export JVMD_MANAGER_HOST=jvmdconsole.us.oracle.com
export JVMD_MANAGER_PORT=3802

# Launch the hdfs nfs gateway
export JVMD_UNIQUE_ID=hdfs-nfs3-gateway
./bin/hdfs nfs3

# Run a mapreduce history server
export JVMD_UNIQUE_ID=mapred-historyserver
./bin/mapred historyserver

# Run a yarn resource manager
export JVMD_UNIQUE_ID=yarn-resourcemanager
./bin/yarn resourcemanager

# Run a hadoop map-reduce job to find the value of PI (QuasiMonteCarlo method)
export JVMD_UNIQUE_ID=hadoop-test-pi-montecarlo
./bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.1.jar pi 1024 100

3) Zookeeper 3.4.6

The standalone version of Zookeeper has a common environment setup script called zkEnv.sh where most of the JVMD setup can be done. After that, a minor modification is needed to the java launch command in zkServer.sh, after which all JVMD monitoring works fine. The scripts zkCleanup.sh and zkCli.sh probably do not need monitoring, but it can easily be added if really needed.
TEST - with mods for zkServer.sh command to use JVMD

cd /home/skanga/servers/zookeeper-3.4.6/bin
export JVMD_AGENT_HOME=/home/skanga/servers
export JVMD_MANAGER_HOST=jvmdconsole.us.oracle.com
export JVMD_MANAGER_PORT=3800
export JVMD_UNIQUE_ID=zk-server

# start the zookeeper server
./zkServer.sh start
./zkServer.sh status
./zkServer.sh stop

4) Cassandra 2.0.9

The Apache Cassandra data store has a common environment setup script called conf/cassandra-env.sh where we can add the command to source our include script. Then a minor modification is needed to the java launch command in bin/cassandra, after which all JVMD monitoring works fine. The other scripts probably do not need monitoring, but it can easily be added if really needed.

TEST - with mods for cassandra command to use JVMD

cd /home/skanga/servers/apache-cassandra-2.0.9/bin
export JVMD_AGENT_HOME=/home/skanga/servers
export JVMD_MANAGER_HOST=jvmdconsole.us.oracle.com
export JVMD_MANAGER_PORT=3800
export JVMD_UNIQUE_ID=cassandra-server

# start cassandra
./cassandra -f

5) Solr 4.9.0

The Solr search server is an interesting case. In production scenarios, users will probably deploy the Solr WAR file in their own application server. In that scenario, the standard JVMD WAR file can be deployed to the same application server and monitored easily. However, the Solr distribution also includes an embedded mode, which may be used by simply running java -jar start.jar. For this scenario I converted that java command into a simple script called start.sh, added to the same folder as start.jar, in order to run it.
Using this script (inside the attached zip file) you can run a test as shown below:

TEST - with addition of start.sh command to use JVMD with Solr

cd /home/skanga/servers/solr-4.9.0/example
export JVMD_AGENT_HOME=/home/skanga/servers
export JVMD_MANAGER_HOST=jvmdconsole.us.oracle.com
export JVMD_MANAGER_PORT=3800
export JVMD_UNIQUE_ID=solr-server

# start solr
./start.sh

After everything is set up properly for your servers, you should see all the relevant JVMs in the default pool with the proper IDs, as shown in the image below.

JVMs in Default Pool (with hostnames & IP addresses blanked out). Click the image to expand it in a new tab.

Remember to be a bit patient and wait a few seconds until the connections are established and the servers appear in the console.


Best Practices

Enterprise Manager Ops Center - Changing alert severity in default monitoring policies

Modifying Monitoring Policies

Ops Center delivers default monitoring policies for the various types of assets managed and monitored by Ops Center. These policies are specific to each asset type. In the real world, these policies act only as a starting point, and you will need to customize them to suit your own environment. Most of the customization can be done in the BUI (Browser User Interface), which is covered in the manuals and other blogs on this site, but occasionally you will need to manually edit the underlying XML of the default policies to get the customization you require. That method is covered in this blog entry.

In the BUI, you can easily copy these default policies and then modify them to suit your own environment. You can make the following modifications in the BUI:

- enable/disable monitoring rules
- add a new monitoring rule
- delete an existing monitoring rule
- modify the thresholds/severities/triggers for most alert rules

Modifications are normally done by highlighting the rule, clicking the edit icon, making your changes and then clicking the Apply button. Remember that once you have made all the rule changes, the policy should be applied or reapplied to your target assets. Most rules are editable in this way. However, not all rules can be edited in the BUI. A rule like "Operating System Reachability" cannot be edited from the BUI and must be changed manually by editing the underlying XML. Such rules can be identified by the fact that no edit icon is available when the rule is selected. Only Ops Center factory default policies (the product's standard default policies) can be edited by modifying the XML on the filesystem. When a policy is modified in the BUI, Ops Center copies the default policy to a custom policy, and these modified policies are stored in the database, not as XML on the filesystem.
This means that if you want to change one of these non-editable rules, you must manually edit the factory default policy. Then make a copy of the policy to create a custom policy and, if required, re-apply any additional customizations in the BUI, so that your new policy absorbs the manual modifications. While the default values are normally sufficient for most customers, I had a request from a customer who wanted to change the "Operating System Reachability" severity from Warning (the default) to Critical. He considered this an important event that needed to be alerted at a higher level, so that it would grab the attention of his administration staff. Below is the procedure for achieving such a modification.

Manually Modifying the Default Alert Severity

As part of a standard install, Ops Center will create an alert of severity Warning if it loses connectivity with an operating system (S8/9 OS or S10/11 GZ). This creates an alert with the description "The asset can no longer be reached". So here is the procedure for changing the default alert severity for the "Operating System Reachability" alert from Warning to Critical. Be aware that there is a different alert for "Non-global zone Reachability", which is not covered here, but modifying it, or other alerts, would follow a similar procedure. We will be modifying the XML files for the default monitoring policies. These can be found at /var/opt/sun/xvm/monitoringprofiles on your EC.
root@ec:/var/opt/sun/xvm/monitoringprofiles# ls
Chassis.xml            MSeriesDomain.xml                 ScCluster.xml
CiscoSwitch.xml        NasLibrary.xml                    ScNode.xml
Cloud.xml              NonGlobalZone.xml                 ScZoneClusterGroup.xml
ExadataCell.xml        OperatingSystem.xml               ScZoneClusterNode.xml
FileServer.xml         OvmGuest.xml                      Server.xml
GlobalZone.xml         OvmHost.xml                       Storage.xml
IscsiStorageArray.xml  OvmManager.xml                    Switch.xml
LDomGuest.xml          PDU.xml                           Tenancy.xml
LDomHost.xml           RemoteOracleEngineeredSystem.xml  VirtualPool.xml
LocalLibrary.xml       SanLibrary.xml
MSeriesChassis.xml     SanStorageArray.xml
root@ec:/var/opt/sun/xvm/monitoringprofiles#

Follow the steps below to modify the monitoring policy.

First, in the BUI, identify which policies you want to modify. Look at an asset in the BUI and select the "Monitoring" tab. At the top of the screen, you will see which monitoring policy (Alert Monitoring Rules) it is running. In this case, the policy is called "OC - Global Zone", which corresponds to the "GlobalZone.xml" file. Alternatively, log on to the EC and grep for the alert rule name:

# grep "Operating System Reachability" *
GlobalZone.xml:        <name>Operating System Reachability</name>
OperatingSystem.xml:   <name>Operating System Reachability</name>
#

In this case, we want to change the "OC - Operating System" and "OC - Global Zone" policies, as they both have the "Operating System Reachability" rule, so we will be editing both the "GlobalZone.xml" and "OperatingSystem.xml" files.

Next, make a backup copy of any XML file you modify (in case you mess something up).

# pwd
/var/opt/sun/xvm/monitoringprofiles
# cp OperatingSystem.xml OperatingSystem.xml.orig
# cp GlobalZone.xml GlobalZone.xml.orig

Then edit each file and look for the rule name:

<monitor>
  <enabled>true</enabled>
  <monitorType>Reachability</monitorType>
  <name>Operating System Reachability</name>
  <parameter>
    <name>unreachable.duration.minutes.WARNING</name>
    <value>3</value>
  </parameter>
</monitor>

and change "unreachable.duration.minutes.WARNING" to "unreachable.duration.minutes.CRITICAL".
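That edit can also be scripted across both files. A hedged sketch follows (the function name is mine, and note that sed -i is a GNU sed option; on a Solaris EC use perl -pi -e or a temporary file instead):

```shell
#!/bin/sh
# Switch the Operating System Reachability rule from WARNING to CRITICAL
# in the factory default policy files, keeping a .orig backup of each.
raise_reachability_severity() {
  dir="$1"   # e.g. /var/opt/sun/xvm/monitoringprofiles
  for f in "$dir"/OperatingSystem.xml "$dir"/GlobalZone.xml; do
    cp "$f" "$f.orig"
    sed -i 's/unreachable\.duration\.minutes\.WARNING/unreachable.duration.minutes.CRITICAL/' "$f"
  done
}
```

Run it once on the EC, then continue with the remaining steps (restart, re-apply the policy) as described below.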
The rule should now look like this:

<monitor>
  <enabled>true</enabled>
  <monitorType>Reachability</monitorType>
  <name>Operating System Reachability</name>
  <parameter>
    <name>unreachable.duration.minutes.CRITICAL</name>
    <value>3</value>
  </parameter>
</monitor>

Repeat for the other file(s). Make a backup copy of your modified XML files, as they may be overwritten during an upgrade. Now restart the EC so that the new monitoring policies are re-read, and apply the new policy to the hosts on which you want the updated rule. Check the Message Center in the Navigation panel and you will see that your alert has changed from "Warning" to "Critical".

A best-practice option would now be to use the BUI to copy the new ("OC - Global Zone" and "OC - Operating System") policies to your own custom policies, adding any additional rule modifications. Copying the new OC policy to a custom policy saves it into the database, so it will not be overridden by any subsequent Ops Center upgrade. Remember to apply the custom policy to your asset(s) or asset groups. It is good practice to keep the name of the source policy in the name of your custom policy; it will make your life easier if you ever get confused about which policy applies to which type of asset, or if you want to go back to the original source policy. If you want your new custom policy to be automatically applied when you discover or provision a new asset, select the policy and click the "Set as Default Policy" action for that asset class. The green tick on the icon indicates that a policy is the default for its asset class.

You have now successfully modified the default alert severity for an alert that could not be modified in the BUI.

Regards,
Rodney Lindner
Senior IT/Product Architect
Systems Management - Ops Center Engineering


Best Practices

Q&A: Oracle's Andrew Sutherland on Managing the Entire Oracle Stack with Oracle Enterprise Manager 12c

As senior vice president of middleware for Oracle in Europe, Dr. Andrew Sutherland has more than 20 years’ experience in emerging technologies and their application to business problems. Currently, he manages a team of architects, business development managers, and technical specialists who help customers make the best use of their investments in Oracle technologies. Given his breadth and depth of experience, we decided to ask Sutherland how Oracle Enterprise Manager 12c Release 4 is helping the Oracle customers he works with. Q. What makes Oracle Enterprise Manager 12c different from competitors' offerings? A. Oracle Enterprise Manager's approach is unique in that it manages across the entire Oracle stack, from applications, middleware, and the database all the way down to servers and storage. That means it can truly unlock the value of the entire Oracle stack. Q. What is the payoff for organizations that adopt such a comprehensive approach? A. Our customers are able to manage the entire Oracle estate in the most cost-effective way possible by automating many of their day-to-day tasks. To give you an idea of its scope, many of our customers have made sure that Oracle Enterprise Manager 12c’s dashboard is available to their senior IT management team. They use it to ensure that all parts of their IT stack are delivering what they should be delivering, when they should be delivering it. Perhaps most important of all, Oracle Enterprise Manager 12c enables organizations to move beyond the old paradigm of multiple independent IT stacks to offer infrastructure as a service and platform as a service. Q. As someone who helps customers make the most of their investment in Oracle technology, what do you find most promising about Oracle Enterprise Manager 12c Release 4? A. There are three key areas that are especially exciting. First, it provides an accelerated path to the cloud. 
Whether you are building a small, medium, or large private cloud within your organization, it provides the tools you need to make it happen, from designing the cloud to provisioning and testing. Secondly, this release provides monitoring and management tools that go both deeper into the stack and wider across components within the stack. That means an even more comprehensive dashboard. Finally, Oracle Enterprise Manager 12c Release 4 offers true enterprise-grade management. With the growth of social and mobile connectivity, the need for a highly performant and robust stack is more prominent than ever. And Oracle Enterprise Manager 12c is there to do exactly that: manage true, enterprise-grade IT deployments. Q. What should Oracle customers do if they want to learn more about the latest release of Oracle Enterprise Manager 12c? A. First, speak to your Oracle contact, whether it is a partner or Oracle representative, to get more complete information. Also consider coming to an Oracle Day event in your area, especially if you can attend one dedicated to cloud computing. And in the meantime, you can always head to the Oracle Enterprise Manager pages on oracle.com to get started. Find out more about Oracle Enterprise Manager 12c Release 4. Watch a short video featuring Sutherland.


Best Practices

Reducing Downtime While Patching Multi-OMS Environments

Enterprise Manager has now been released for a few weeks, as have the OMS bundle patches (also known as System patches). If you plan to apply these bundle patches to your OMS and are concerned about the downtime, you can reduce it by referring to this whitepaper, which contains patching instructions designed to minimize downtime. The whitepaper covers the various Enterprise Manager High Availability (EM HA) use cases (levels 1, 2, 3 and 4) and contains instructions on reducing downtime while applying patches in each of them. It also clearly defines which steps require downtime and which do not. If you have a multi-OMS setup, you can also refer to the whitepaper's coverage of script creation using the opatchauto command, which automates the substeps and further reduces downtime. During our internal testing of this whitepaper on an EM HA setup, we noticed a significant reduction in downtime. If your customer plans an Enterprise Manager upgrade, then as a post-upgrade recommendation they should patch their OMS with the latest bundle patches by following the instructions outlined in this whitepaper.

White paper on OTN: http://www.oracle.com/technetwork/oem/install-upgrade/reducedowntime-patchmultioms-2251257.pdf

MOS note for the latest Bundle Patches: Enterprise Manager (PS3) Master Bundle Patch List (Doc ID 1900943.1)


Product Info

Enterprise Manager Ops Center 12c R2 U1 Released

We are happy and excited to announce that on July 20, 2014, Oracle Enterprise Manager Ops Center 12c Release 2 Update 1 was released for all platforms, including Oracle Solaris SPARC/x86 and Linux. Ops Center 12cR2 PSU1 is an update release containing improvements and enhancements in the following areas: performance, new hardware support and general quality.

In the performance area, we have made improvements in core Ops Center components such as the Enterprise Controller, Proxy Controller and Virtualization Agent. We have reduced the Enterprise Controller's memory footprint and improved startup times for the Enterprise Controller and agents. We have also looked at areas such as the deployment wizards and the management of LUNs, and improved performance there.

For new hardware, we support discovery, monitoring and provisioning of both OS and firmware for the X4-4, X4-8, M4000 and M10. We also made improvements to firmware management for the X4-2 and introduced enhanced support for add/modify hardware configurations on Oracle SuperCluster.

In the general quality area, we improved security, refined logging, made improvements to OS provisioning and enhanced areas such as LDAP and the UI.

For customers with a support contract, any new updates to the Ops Center components would historically appear automatically in the Download window of the UI. However, we have noticed a bug that prevents this new version from appearing. A fix/IDR is in place and is described in the release notes for this version, available here. There is also MOS note 1908726.1, which describes the IDR and the procedure to enable the download. Software downloads outside of this automatic process will be available on OTN and eDelivery in the coming days. Before upgrading your Enterprise Controller, please check the upgrade guide available here. There is also an excellent blog that gives advice on pre-upgrade tasks to ensure a smooth and successful upgrade experience, here.


Best Practices

EM12c Release 4: Upgrading Agents with Ease

Now that Enterprise Manager 12cR4 has been out for a little while, more people are getting around to upgrading their agents. Since the monthly patch bundles were released, we already have a few agent-side patches that we want to apply to our newly upgraded agents. I've written about simplifying your agent patching before, but this feature still seems to fly under the radar. It's days like these that I miss running a massive Enterprise Manager with thousands of databases, because this is one of the things that would have made me dance in my cubicle.

Let's say you have 100 agents (50 with the Database plug-in, 50 with the Middleware plug-in). In my previous blog on EM patches, I explained the different types of patches available for EM, so I'm not going to go into detail here. What I am going to illustrate is how we can upgrade those 100 agents and patch them with the following patches in one step (current as of today):

- Agent: 18873338
- Agent-side plug-in: 19002534
- Agent-side plug-in: 18953219
- 18502187, 18721761

Apply Prerequisite Patches to OMS Servers

Before staging and applying any agent patches, be sure to read the readme completely for each patch. The Database and Middleware patches often rely on their OMS-side patch, so be sure to apply the OMS prerequisite patches first. In this example, applying the OMS plug-in system patch 18945232 is required. Not applying the OMS-side patches first can cause patch applications to fail, or other inconsistencies.

Stage Patches on OMS Servers

First we need to download and stage the five patches on our OMS servers. In the case of a multi-OMS setup, they must be staged on each server. The patches are staged in the $ORACLE_HOME/install/oneoffs directory; during the upgrade or install of a new agent, the patches are picked up from there.

1. Download all five patches and transfer them to your OMS servers.

2. Create the $ORACLE_HOME/install/oneoffs/ directory on each of your OMS servers:

$ mkdir -p $ORACLE_HOME/install/oneoffs

3. Stage the zip files in $ORACLE_HOME/install/oneoffs/:

$ cd $ORACLE_HOME/install/oneoffs
$ cp /tmp/*.zip .
$ ls -l
-rw-r--r-- 1 cllamas dba   516862 Jul 21 08:21 p18502187_111070_Generic.zip
-rw-r--r-- 1 cllamas dba   101566 Jul 21 08:21 p18721761_121040_Generic.zip
-rw-r--r-- 1 cllamas dba 14110135 Jul 21 08:21 p18873338_121040_Generic.zip
-rw-r--r-- 1 cllamas dba    61619 Jul 21 08:21 p18953219_121060_Generic.zip
-rw-r--r-- 1 cllamas dba    13242 Jul 21 08:21 p19002534_121060_Generic.zip

4. Repeat on each OMS server.

Upgrade an Agent

Now that we've staged our five patches, let's upgrade an agent. Agent upgrades are performed via the Upgrade Console.

1. Click Setup / Manage Cloud Control / Upgrade Agents.

2. Click Add to select the agent(s) you want to upgrade, and click OK.

3. In the Choose Credentials section, you'll notice it only asks for privileged credentials. This is because the agent uses its existing connection to upgrade, and you only need to provide credentials for the root.sh steps, if required. If you have a privileged credential with root capabilities that is not set as your Preferred Credential, you can select Override Privileged Credentials and select or create the credential. If you don't have the root credentials, you will be prompted to run root.sh manually as needed.

5. In the Additional Inputs section, you can add pre-upgrade or post-upgrade scripts, or specify alternate parameters or a staging location if necessary.

6. Click Submit to submit the upgrade procedure.

7. If you did not provide a root credential, you will receive a warning. Click OK to proceed.

8. Once you submit the procedure, you'll see the list of targets in the top half of the screen, and the steps of the selected target in the bottom half. Clicking any of the links will take you to the specific job output for that step.
As you can see, the Agent Upgrade procedure takes care of starting and stopping the blackouts required to avoid false alarms on the targets.

Validate Patches

Once the agent is upgraded, let's go to the agent and verify which patches were applied.

$OPatch/opatch lsinventory -oh /scratch/agent/core/

Oracle Interim Patch Installer version 2014, Oracle Corporation. All rights reserved.

Oracle Home       : /scratch/agent/core/
Central Inventory : /home/oraInventory
   from           : /scratch/agent/core/
Log file location : /scratch/agent/core/
Lsinventory Output file location : /scratch/agent/core/

--------------------------------------------------------------------------------
Installed Top-level Products (1):

EM Platform (Agent)
1 product installed in this Oracle Home.

Interim patches (5):

Patch  18873338 : applied on Mon Jul 21 08:29:10 PDT 2014
Unique Patch ID: 17759482
Patch description: "EM-AGENT BUNDLE PATCH 12."
   Created on 17 Jun 2014, 09:46:07 hrs PST8PDT
   Bugs fixed: 18476937, 17438375, 18277098, 17995417, 17579501, 18873338

Patch  18502187 : applied on Mon Jul 21 08:28:55 PDT 2014
Unique Patch ID: 17615617
   Created on 6 May 2014, 06:29:08 hrs PST8PDT
   Bugs fixed: 6895422, 16479818, 18421945, 13583799

Patch  18721761 : applied on Mon Jul 21 08:28:30 PDT 2014
Unique Patch ID: 17795715
   Created on 27 Jun 2014, 04:34:49 hrs PST8PDT
   Bugs fixed: 14671238, 8855559, 8563945, 13737031, 13737032, 11807297, 12984377

Patch  10203435 : applied on Fri May 23 23:31:00 PDT 2014
Unique Patch ID: 15915936.1
   Created on 7 Feb 2013, 18:06:13 hrs PST8PDT
   Bugs fixed: 10203435

Patch  17018143 : applied on Fri May 23 23:30:55 PDT 2014
Unique Patch ID: 17273347
   Created on 7 Feb 2014, 21:45:46 hrs UTC
   Bugs fixed: 17018143

--------------------------------------------------------------------------------
OPatch succeeded.
Since this Agent only had the DB Plug-in installed at the time of upgrade, it received the Database Plug-in patch, but not the Middleware patch.  An Agent with the Middleware Plug-in would have received the Middleware patch and not the Database patch.

$ $AGENT_HOME/OPatch/opatch lsinventory -oh /scratch/agent/plugins/oracle.sysman.db.agent.plugin_12.
Oracle Interim Patch Installer version 2014, Oracle Corporation.  All rights reserved.

Oracle Home       : /scratch/agent/plugins/oracle.sysman.db.agent.plugin_12.
Central Inventory : /home/oraInventory
   from           : /scratch/agent/plugins/oracle.sysman.db.agent.plugin_12.
Log file location : /scratch/agent/plugins/oracle.sysman.db.agent.plugin_12.
Lsinventory Output file location : /scratch/agent/plugins/oracle.sysman.db.agent.plugin_12.
--------------------------------------------------------------------------------
Installed Top-level Products (1):
Enterprise Manager plug-in for Oracle Database
1 products installed in this Oracle Home.

Interim patches (1) :

Patch  19002534    : applied on Mon Jul 21 08:32:04 PDT 2014
Unique Patch ID:  17759438
Patch description:  "EM DB PLUGIN BUNDLE PATCH (AGENT SIDE)"
   Created on 17 Jun 2014, 09:10:22 hrs PST8PDT
   Bugs fixed:
     19002534, 18308719

--------------------------------------------------------------------------------
OPatch succeeded.

So, with one upgrade step in the console, we successfully upgraded our agent and applied the Agent bundle patch, the Database Agent-side Plug-in patch and two JDBC patches.  If you have problems, you can look at the Agent log files, starting with the $AGENT_HOME/cfgtoollogs/agentDeploy logs.  In there you will see the steps taken to upgrade, including applying the one-off patches, as seen below:

INFO: Mon Jul 21 08:32:20 2014 - ====== Summary ======
INFO: Mon Jul 21 08:32:20 2014 - Following patches were successfully applied to the mentioned homes:
INFO: Mon Jul 21 08:32:20 2014 - 19002534 => /scratch/agent/plugins/oracle.sysman.db.agent.plugin_12.
INFO: Mon Jul 21 08:32:20 2014 - 18721761 => /scratch/agent/core/
INFO: Mon Jul 21 08:32:20 2014 - 18873338 => /scratch/agent/core/
INFO: Mon Jul 21 08:32:20 2014 - 18502187 => /scratch/agent/core/
INFO: Mon Jul 21 08:32:20 2014 - Following patches were not applied:
INFO: Mon Jul 21 08:32:20 2014 - 18953219
INFO: Mon Jul 21 08:32:20 2014 - Log file location:/scratch/agent/core/
INFO: Mon Jul 21 08:32:20 2014 - Apply completed.

As you can see, patch 18953219 was rejected because we do not have the Middleware plug-in installed on this Agent.  After testing on a few agents, you're able to move forward with the rest of the upgrades knowing that they will be patched and ready to go!

Summary

Upgrading Agents with the latest and greatest patches is easy if you take the time to stage them on the OMS server.  The added benefit is that any new Agent deploys from the console will get the core Agent patches that you stage.  Plug-in patches will not get applied during Agent deploy, as there is no Plug-in deployed at that time.  To always push the most current Plug-in, create a custom Plug-in Update from one of your patched agents using EM CLI, and then import it into EM.  After doing this, all new plug-in deployments will include the patches you have tested.  For detailed instructions on how to create a Custom Plug-in Update, read this previous blog post.
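As a postscript, the agentDeploy Summary section shown earlier is easy to split into applied versus skipped patch IDs when you are checking several agents. This is an illustrative sketch (the sample log text is abbreviated and paths are shortened), not something from the original post.

```shell
# Abbreviated agentDeploy summary lines (timestamps removed for brevity).
log='INFO: 19002534 => /scratch/agent/plugins/oracle.sysman.db.agent.plugin
INFO: 18721761 => /scratch/agent/core/
INFO: 18873338 => /scratch/agent/core/
INFO: Following patches were not applied:
INFO: 18953219'

# Everything before the "not applied" marker with "=>" is an applied
# patch; everything after the marker is a skipped patch.
printf '%s\n' "$log" | awk '
  /not applied/ { skipped = 1; next }
  skipped       { print "skipped:", $2; next }
  /=>/          { print "applied:", $2 }'
```

Running the same filter over the real log quickly confirms which patches were rejected (here, 18953219, because the Middleware plug-in is absent).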

Now that Enterprise Manager 12cR4 has been out for a little while,...

Best Practices

Pre-Upgrade Checks for Enterprise Manager Ops Center

With the release of Enterprise Manager Ops Center 12.2.1, it is time to go through the upgrade cycle. I thought I would share the pre-upgrade checks I go through when I upgrade to a new Ops Center build. As part of the development team, I get involved in pre-release Quality Assurance testing, which means I end up doing hundreds of upgrades as part of the testing process. Update releases come out regularly and contain enhancements and bug fixes. As with any other application in your environment, you should upgrade Ops Center to the current release/update in a timely and controlled manner. For those of you who are long-time sys-admins, there is no rocket science here. It is the same sort of planning you would do for any other enterprise-level application.  In my test environments, I have my Enterprise Controller (EC) and Proxy Controllers (PC) inside Solaris Zones (Solaris 11), so I have a couple of extra checks I do, but the process as a whole is still valid if your EC/PC are on their own separate hardware.

1) Read the Release Notes

Yes, those release notes/README files are important and you should spend the time reading them. They will contain the latest information about the update and any known issues and workarounds.

2) Check Free Disk Space

Confirm that there is enough disk space to unpack and install the upgrade. How much is enough is the ultimate question. It will vary with each upgrade and will depend on how you have configured your underlying filesystems and your actual environment. Here are some guidelines. Please note that the numbers I quote tend to be a little generous, as it is always better to have more free space than not enough. There should always be a few GB of space free in the root partition (it is just good sys-admin practice - below 90% used would be ideal). The filesystem that holds /var/tmp will need space for the DB backup that is run as part of the installer. The size of this will depend on the size of your DB.
So check how much an "ecadm backup" takes on your system. The filesystem that holds /var/tmp is also the temporary location where we unpack the upgrade bundle. The filesystem that holds /var/opt/sun/xvm will have the majority of the upgrade code installed into it, as well as a copy of the installer under the update-saved-state directory. So what does that mean for size requirements? You need about 5 times the upgrade bundle. The current upgrade bundle is 3.8GB unpacked, so that would be 20GB. The DB backup will take about 10% of the actual DB size.

root@ec:/ec_backup# du -hs *
1.3G   sat-backup-pre-12.2.1-upgrade.20140702
root@ec:/ec_backup# du -hs /var/opt/sun/xvm/oracle/oradata/OCDB
14G    /var/opt/sun/xvm/oracle/oradata/OCDB
root@ec:/ec_backup#

Although more space is actually used during the backup before it is packed up, I would allow for about 4 GB of space. So for my environment, I would look for about 25GB (rounding up) of free space (your number may vary). I am sure I could scrimp and save and get this number down, but the idea is to have plenty of free space to allow the upgrade to go through without incident.

3) Backups Backups Backups

Before commencing any upgrade, you should make sure you can roll back if something goes horribly wrong. Years of history in administration and support have made me a paranoid person. I believe you can never have too many backups, so I do the following: Confirm you have a successful database backup using "ecadm backup".
(You should already be doing this on a weekly basis.)

root@ec:/# /opt/SUNWxvmoc/bin/ecadm backup -d pre-12.2.1-upgrade -o /ec_backup/sat-backup-pre-12.2.1-upgrade.20140702
ecadm: using logFile = /var/opt/sun/xvm/logs/sat-backup-2014-07-02-11:52:16.log
ecadm: *** PreBackup Phase
ecadm: *** Backup Phase
ecadm: *** PostBackup Phase
ecadm: *** Backup complete
ecadm: *** Output in /ec_backup/sat-backup-pre-12.2.1-upgrade.20140702
ecadm: *** Log in /var/opt/sun/xvm/logs/sat-backup-2014-07-02-11:52:16.log
root@ec:/#

Of course, copy the generated backup file to somewhere safe on another system. Confirm you have a successful filesystem backup using your Enterprise backup software. (You should already be doing this on a weekly basis.) I would recommend full filesystem backups and having a separate backup of the /var/opt/sun/xvm directory and any of your Ops Center software libraries if you did not put them in the default location (/var/opt/sun/xvm/locallib/swlib[0-2]). Take a ZFS snapshot (recursive) of the full zone (rpool and any other zpools that are part of the zone). This is normally your easiest and fastest rollback method should you need it. NOTE: Make sure you know how to recover/rollback a zone. "zfs snapshot -r rpool" recursively snapshots all underlying filesystems, but "zfs rollback -r rpool" will only roll back a single filesystem. You need to roll back each filesystem separately. If you are not sure, practice it on a test zone first.
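Because rollback does not recurse the way the snapshot did, a per-filesystem loop is needed. The dry-run sketch below (my illustration, not from the original post) only prints the commands; on a real system you would feed it the actual list from `zfs list -H -o name -r rpool` and remove the `echo`.

```shell
# Snapshot name taken with "zfs snapshot -r" as shown in this post.
snap=pre-OC-12.2.1-install.20140702

# Dry run: print one "zfs rollback" per filesystem, since
# "zfs rollback -r" only rolls back a single filesystem.
printf '%s\n' rpool rpool/ROOT/solaris rpool/ROOT/solaris/var |
while read -r fs; do
  echo "zfs rollback -r ${fs}@${snap}"
done
```

Reviewing the printed commands before removing the `echo` is a cheap way to make sure you are rolling back the zone you think you are.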
### Take a zfs snapshot ### root@ec:/# zfs list NAME                              USED  AVAIL  REFER  MOUNTPOINT rpool                             156G  41.1G    31K  /rpool rpool/ROOT                        134G  41.1G    31K  legacy rpool/ROOT/solaris                134G  41.1G  24.6G  / rpool/ROOT/solaris-backup-1       174K  41.1G  1.37G  / rpool/ROOT/solaris-backup-1/var   110K  41.1G  27.9G  /var rpool/ROOT/solaris-backup-2       296K  41.1G  24.2G  / rpool/ROOT/solaris-backup-2/var   232K  41.1G  48.4G  /var rpool/ROOT/solaris/var            109G  41.1G  77.2G  /var rpool/VARSHARE                     88K  41.1G  66.5K  /var/share rpool/ec_backup                  1.29G  41.1G  1.29G  /ec_backup rpool/export                      161K  41.1G    32K  /export rpool/export/home                 111K  41.1G    32K  /export/home rpool/export/home/ocadmin          61K  41.1G  40.5K  /export/home/ocadmin rpool/oracle                     20.7G  41.1G  20.7G  /var/opt/sun/xvm/oracle root@ec:/# root@ec:/# zfs snapshot -r rpool@pre-OC-12.2.1-install.20140702 root@ec:/# 4) Check for any failed services It is good practice to clear/enable/disable any broken SMF services, but there are a few key ones to check. Make sure all the Ops Center services that should be running are running and the ones that should not are not. A classic example here is when you have an EC running without a collocated PC. The PC shows as disabled, but still shows in a "svcs -xv" output. root@ec:/var/tmp/downloads# svcs -xv svc:/application/management/common-agent-container-1:scn-proxy (Cacao, a common Java container for JDMK/JMX based management solution)  State: disabled since June 12, 2014 08:07:08 AM EST Reason: Disabled by an administrator. 
See: http://support.oracle.com/msg/SMF-8000-05
See: man -M /usr/share/man -s 1M cacaoadm
See: man -M /usr/share/man -s 5 cacao
Impact: 1 dependent service is not running:
svc:/application/scn/proxy-available:default
root@ec:/var/tmp/downloads#

In this case, our EC did not have a collocated PC, so we should ensure that these services are really disabled and don't try to start up during the upgrade process.

root@ec:/var/tmp/downloads# svcadm disable svc:/application/scn/proxy-available:default
root@ec:/var/tmp/downloads# svcadm disable svc:/application/management/common-agent-container-1:scn-proxy

If you are using zones, either on the system where the EC is installed in the GZ or where your EC/PC run in a NGZ, you also need to check that the IPS proxies are running to allow the Solaris 11 packaging system to work correctly. In a Global Zone (GZ), check that zones-proxyd is online.

root@t4-1-syd04-b:~# svcs svc:/application/pkg/zones-proxyd:default
STATE          STIME    FMRI
online         Jul_02   svc:/application/pkg/zones-proxyd:default
root@t4-1-syd04-b:~#

In a Non Global Zone (NGZ), check that the zones-proxy-client is online.

root@ec:~# svcs svc:/application/pkg/zones-proxy-client:default
STATE          STIME    FMRI
online          8:54:47 svc:/application/pkg/zones-proxy-client:default
root@ec:~#

What you are looking for is a clean bill of health from the "svcs -xv" command.

root@ec:/var/tmp/downloads# svcs -xv
root@ec:/var/tmp/downloads#

5) Check the pkg publishers

To be able to do a successful upgrade, you need the pkg publishers for a system to be working. In a zones environment, that means the publishers in the GZ and all the NGZs should be working. Publishers that don't resolve when a package links into a zone will cause the whole upgrade to stop. So here are a couple of things to look for when you are using an EC in a zone.
If this is a test environment where you have multiple EC/PCs in different zones, either those EC/PCs should be running or the publishers that point to a non-running EC/PC should be cleared. This can be done by issuing:

# pkg unset-publisher Publisher-Name

The aim here is to clear all the local publishers in the zone and just use the proxied publishers in the GZ. If you have the GZ pointing to a PC that points to the EC that is being upgraded, where the EC is in a NGZ under the GZ (yes, this is the whole chicken-and-egg problem), you have a slightly different problem. During the upgrade, parts of the EC will be shut down, which will stop the remote PC from proxying access to the EC's IPS repository. So you need to set the publishers to point to an IPS repository that they can reach. Luckily, the actual IPS repository on the EC does keep running on port 11000 throughout the upgrade.

root@t4-1-syd04-b:~# pkg publisher
PUBLISHER                   TYPE     STATUS P LOCATION
solaris                     origin   online F https://oracle-oem-oc-mgmt-pc217:8002/IPS/
cacao                       origin   online F https://oracle-oem-oc-mgmt-pc217:8002/IPS/
mp-re          (non-sticky) origin   online F https://oracle-oem-oc-mgmt-pc217:8002/IPS/
opscenter                   origin   online F https://oracle-oem-oc-mgmt-pc217:8002/IPS/
root@t4-1-syd04-b:~# pkg unset-publisher opscenter
root@t4-1-syd04-b:~# pkg unset-publisher mp-re
root@t4-1-syd04-b:~# pkg unset-publisher cacao
root@t4-1-syd04-b:~# pkg set-publisher -G '*' -g http://ec:11000/ solaris
root@t4-1-syd04-b:~# pkg publisher
PUBLISHER                   TYPE     STATUS P LOCATION
solaris                     origin   online F http://ec:11000/
root@t4-1-syd04-b:~#

You can reset all the publishers that were set by Ops Center to their original state by rebooting the system or by running the install_ips_ac.sh script in each zone.
# /var/opt/sun/xvm/utils/install_ips_ac.sh -P PC_IP_Address

Use the appropriate IP address for the EC/PC when it is pointing to itself.

6) Run OCDoctor troubleshoot

Run the OCDoctor troubleshoot script over your EC and PCs before an upgrade. It is a good sanity check to look for and fix underlying problems before you start the upgrade process. If you are in connected mode, your EC should already have the latest version of OCDoctor downloaded. Otherwise, you can update it by running "OCDoctor.sh --update" or downloading it from https://java.net/projects/oc-doctor/downloads/download/OCDoctor-4.36.zip

Note: The error "'root' should not be a role" can be safely ignored, as it was only required for earlier versions of Ops Center.

root@ec:/var/tmp/downloads# /var/opt/sun/xvm/OCDoctor/OCDoctor.sh -t
Ops Center Doctor 4.34  [OC,SunOS11] [Read only] [02-Jul-2014 11:25AM EST]
======================== Checking Enterprise Controller... ==============================
OK: Total number of OSes: 12  Total LDOMs: 7  Total Zones:
ERROR: User 'root' should not be a role. You should convert it to a normal user before the installation.
This can be done by running: # rolemod -K type=normal root OK: Files in /var/opt/sun/xvm/images/agent/ have the right permissions OK: Files in /var/opt/sun/xvm/osp/web/pub/pkgs/ have the right permissions OK: both pvalue and pdefault in systemproperty are equal to false (at id 114) OK: Found only 285 OCDB*.aud files in oracle/admin/OCDB/adump folder OK: Found no ocdb*.aud files in oracle/admin/OCDB/adump folder OK: No auth.cgi was found in cgi-bin OK: User 'oracleoc' home folder points to the right location OK: User 'allstart' home folder points to the right location OK: Apache logs are smaller than 2 GB OK: n1gc folder has the right permissions OK: All agent packages are installed properly OK: All Enterprise Controller packages are installed properly OK: Enterprise Controller status is online OK: the version is the latest one ( OK: satadm timeouts were increased OK: tar command was properly adjusted in satadm OK: stclient command works properly OK: Colocated proxy status is 'disabled' OK: Local Database used space is 19%, 6G out of 32G (local DB, using 1 files) OK: Debug is disabled in .uce.rc OK: Debug is disabled for cacao instance oem-ec OK: no 'conn_properties_file_name' value in .uce.rc OK: 30G available in / OK: 30G available in /var OK: 30G available in /var/tmp OK: 30G available in /var/opt/sun/xvm OK: 30G available in /opt OK: DNS does not unexpectedly resolve hostname '_default_' OK: Found the server .uce.rc at /var/opt/sun/xvm/uce/opt/server/cgi-bin/.uce.rc OK: Server .uce.rc has the correct file permissions OK: Server .uce.rc has the correct ownership OK: Connectivity to the KB public servers works properly (using download_large.cgi) OK: Grouplock file doesn't exist OK: package hmp-tools@2.2.1 is not installed OK: package driver/x11/xsvc is not installed OK: Cacao facet is set to False OK: All Solaris 11 agent bundles in /var/opt/sun/xvm/images/agent are imported properly to the repository OK: Disconnected mode is not configured OK: Locales are OK 
("en_US.UTF-8") OK: No need to check for Solaris 11 agent bundle issue as this EC is newer than Update 1 OK: No partially installed packages OK: UCE 'private' folder exists OK: No http_proxy is set in the user profile files OK: 'public' folder has the right ownership OK: 'public' folder is writable for uce-sds OK: 'private' folder has the right ownership OK: 'private' folder is writable for uce-sds OK: '/var/tmp' folder is writable for uce-sds OK: No old jobs rerun (CR 6990675) OK: No need to adjust SEQ_COUNT (MAXID:2986 SEQCOUNT:2986) OK: no row with ssh.tunnel.info found in DB table HD_RESOURCE_PARAMETER NOTICE: Can't perform cryptoadm test inside a zone.         Run --troubelshoot from the global zone as well to test the crypto services. OK: System time is not in the past OK: User uce-sds is part of all the proper groups OK: oracleoc user ulimit -Sn is 1024 OK: oracleoc user ulimit -Hn is 65536 OK: FC Libraries do not contain duplicate LUNs OK: 'update-saved-state' folder exists and has the right permissions OK: verify-db does not return 'Invalid pad value' message OK: No credential issues found =========== Proxy controller is installed but not configured, skipping ================== =========== Agent controller is installed but not configured, skipping ================== root@ec:/var/tmp/downloads# Now do the upgrade Choose whichever upgrade method you like. Both the BUI and CLI methods will give you the same end result. The Ops Center upgrade is not a difficult upgrade and following some simple pre-work checks will maximize your chance of a straightforward and successful upgrade. Regards, Rodney Lindner


Best Practices

Patching 101 - The User Friendly Guide to Understanding EM Patches

There was a conversation on Twitter last week about available patches for Enterprise Manager (EM), and it got a little deeper than 140 characters will allow.  I've written this blog to give a quick Patching 101 on the types of EM patches you might see and the details around how they can be applied.

OMS Patches

The core Enterprise Manager system is typically patched with the quarterly PSU patches (released Jan, Apr, Jul, Oct) or a one-off when directed by Support for a critical issue.  PSU patches are cumulative, so you need not apply each of them, just apply the latest.  The OMSes must be shut down during patching; however, some patches are being released with rolling patch instructions for multi-OMS systems.  These patches must be applied at the host level, and cannot be automated via EM.  ALWAYS read the readme, yes, every time.  The patching steps can change from patch to patch, so it's critical to read the readme.  OPatch or OPatchauto will be used to apply these patches.  Did I mention to read the readme for every patch?  It's also important to note that there may be additional steps when patching in a multi-OMS or standby environment, so read the output of OPatchauto carefully.  Always download the latest OPatch release for the appropriate version.  If you read the readme, you already know this!  Download patch 6880880 for 11.1 (the OPatch version used by EM) and unzip it into the $ORACLE_HOME.  Most errors in patching are related to not updating OPatch.

For more information on PSU patches and patching EM:
Oracle Enterprise Manager Cloud Control Administrator's Guide - Chapter 16 Patching Oracle Management Server and the Repository
EM 12c Cloud Control: List of Available Patch Set Updates PSU (Doc ID 1605609.1)
How to Determine the List of Patch Set Updates (PSU) Applied to the Enterprise Manager OMS and Agent Oracle Homes? (Doc ID 1358092.1)

Each plug-in has binaries that will require patches as well.
The same downtime requirements apply for plug-in patches as for the quarterly PSUs.  The plug-in patches are now being released as a monthly bundle.  This means that if you have 6 plug-ins, you may have 6 OMS-side patches to apply - 1 for each plug-in.  Bundles are not always released for every plug-in every month.  They are cumulative, so pick the latest.  The individual OMS-side plug-in bundles are now being grouped into a System Patch each month.  So, for example, in June 2014 the System Patch includes the MOS, Cloud, DB, FA, FMW, SMF, and Siebel plug-ins.  Non-required patches will be skipped.

For more information on the EM Patch Bundles and patching EM:
Enterprise Manager (PS3) Master Bundle Patch List (Doc ID 1900943.1)
Enterprise Manager Bundle Patch Master Note (Doc ID 1572022.1)

Agent Patches

Agent patches are applied to each agent.  They can be applied via EM using the MOS patch plans, which makes it a lot easier when you have 100s or 1000s of Agents to patch!  The Patch Plans will start a blackout, validate prerequisites, check for conflicts, and update OPatch for you.  If you don't use a Patch Plan, you can patch manually with OPatch; don't forget to read the readme!  The Agent must be shut down during the patch application.  There are 4 main types of Agent patches you will see:

Core Agent - The core Agent now has monthly patch bundles.  These are also cumulative, so my recommendation is to apply the latest one.
Agent-side Discovery Plug-in - This is the lightweight piece of the plug-in used for target discovery.  Discovery plug-in patches are cumulative with other discovery plug-in patches for that component.
Agent-side Monitoring Plug-in - This is the more detailed monitoring side of the plug-in for the required components.  Monitoring plug-in patches are cumulative with other monitoring plug-in patches for that component.
So if there's a Discovery and a Monitoring patch available for the DB Plug-in, you need to apply both of them.
JDBC - JDBC patches for the Agent.  These patches do get applied to the Agent, and can be applied via the Patch Plans.

You can apply the latest Agent bundle, JDBC patch and the plug-in bundles in one patch plan.  If there's a conflict, you'll be notified.  If the Agents you've selected don't have the specified plug-ins, you'll also receive notice during the analyze step.  As of now, for my agents, I would apply the Agent bundle patch (18873338), the two available plug-in agent patches, DB monitoring (19002534) and FMW monitoring (18953219), and the latest JDBC patches (18502187, 18721761), all in one patch plan.  I discovered a new feature while testing this.  Normally you had to have Normal Oracle Home preferred credentials set for all Agent targets to patch, or select Override and specify the Normal Oracle Home credentials.  Now the Agent uses its internal credentials to patch itself, so setting preferred credentials or specifying them at run-time is not required.  The user patching does require the Manage Target Patch and Patch Plan privileges.

For more details on Agent patching:
Oracle Enterprise Manager Cloud Control Administrator's Guide - Chapter 17 Patching Enterprise Manager Agents
Simplified Agent and Plug-in Patching

Infrastructure

The OMS and Agent are the key components, and my main focus here.  However, it's important to keep the infrastructure stack up to date as well.  This includes the Oracle Fusion Middleware and Oracle Database that are used by EM.  The recommendation is to follow the best practices for each of these components, and regularly update with the available PSU patches.  The following reference notes will help in identifying the current PSU patches.  The WebLogic Server version used by EM 12c is 10.3.6.
Oracle Recommended Patches -- Oracle Database (MOS 756671.1)
Master Note on WebLogic Server Patch Set Updates (PSUs) (MOS 1470197.1)

Summary

Hopefully this will help you understand the various types of components involved with keeping EM up to date.  Obviously, you may not want to patch each month, and maybe not every quarter, but the patches are available to keep the software up to date, and the bundles make them easier to apply.  You'll want to set up a plan for planned software maintenance in your environment.  The whitepaper Oracle Enterprise Manager Software Planned Maintenance will help guide you through the best practices.


Best Practices

Limit Self Service User Access to Database Self Service Portal

When implementing database as a service and/or snap clone, a common request was for a way to hide the other service types, like IaaS, MWaaS, etc., from the self service portal for the end users. Before EM12c R4, there was no way to restrict the portal view. Essentially, any user with the EM_SSA_USER role would be directed to the self service portal and would then be able to see all service types supported by EM12c. Of course, you could always set Database as your default self service portal from the 'My Preferences' pop-up, but this only helps with the post-login experience. The end user still gets to see all the options, as shown in the screen above. In EM12c R4, a new out-of-the-box role called EM_SSA_USER_BASE has been introduced. This role, by default, does not give access to any portal; that is an explicit selection. Here is how you use this role:

1. Create a custom role and add the EM_SSA_USER_BASE role to it.
2. In the Resource Privileges step, select the Resource Type 'Cloud Self Service Portal for Database', and edit it.
3. Check the 'Access the Cloud Self Service Portal for Database.' privilege. Finish the rest of the wizard.

Now, when a user with this custom role accesses the self service portal, they can only do so for databases and nothing else. While the EM_SSA_USER role will continue to work, we recommend you start using the new EM_SSA_USER_BASE role. For more details on DBaaS or Snap Clone roles, refer to the cloud admin guide chapter on roles and users. -- Adeesh Fulay (@AdeeshF)
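As a footnote to the steps above: the custom role itself can also be created from EM CLI rather than the console. This is a hypothetical sketch (the role name is made up, and the portal-access resource privilege is still easiest to grant from the console wizard as described in step 2); it only prints the command rather than executing it.

```shell
# Example role name - an assumption, pick whatever fits your naming scheme.
role="DB_SSA_PORTAL_USER"

# Dry run: print the EM CLI command that would create a custom role
# based on EM_SSA_USER_BASE.
echo "emcli create_role -name=\"$role\" -roles=\"EM_SSA_USER_BASE\""
```

Remove the `echo` to run it for real once you have verified the command against your EM CLI version's help output (`emcli help create_role`).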


Product Info

Convert Crontab to Enterprise Manager Jobs

Surprisingly, a popular question posted on our internal forum is about the possibility of using the Enterprise Manager (EM) Job System to replace customers' numerous cron jobs. The answer is obviously YES! I say surprisingly because the EM Job System has been in existence for around 10 years, and my hope was that, by now, customers would have moved to using more enterprise-class job schedulers instead of cron. So, here is a quick post on how to get started with this conversion from cron to EM Jobs for some of our new users.

Benefits of EM Job System

Before we learn about the how, let's look at the why. The EM Job System is:

Free - (Yes, I said free) It is included with base EM at no cost.
Flexible - It supports multiple options for scheduling, notification, authentication, etc.
Infinitely scalable - The job system seamlessly scales to every new Oracle Management Server (OMS). In fact, in case of OMS failures, the job steps are automatically picked up by the next available OMS without affecting the job execution.
General purpose - It provides numerous out-of-the-box job types like run OS command, start/stop, backup, SQL script, patch, etc. that span multiple target types. As of today, there are over 50 job types available in the product.
Enterprise grade - It allows users to automate multiple administrative tasks like backup, patching, cloning, etc. across multiple targets. Customers have not only converted their cron jobs to EM, but have also replaced other enterprise tools like Autosys and migrated 1000s of jobs to the EM Job System.
APIs - Jobs can be scheduled and managed from the UI and using EMCLI (the command line interface).

Now back to our topic.

The Conversion Process

Let's start with a sample crontab that we want to convert. A cron expression consists of 6 fields, where the first 5 fields represent the schedule, while the last field represents the command or script to run.

Field Name    | Mandatory? | Allowed Values  | Allowed special characters
Minutes       | Yes        | 0-59            | * / , -
Hours         | Yes        | 0-23            | * / , -
Day of month  | Yes        | 1-31            | * / , - ? L W
Month         | Yes        | 1-12 or JAN-DEC | * / , -
Day of week   | Yes        | 0-6 or SUN-SAT  | * / , - ? L #

Cron jobs run on the operating system, often using the native shell or other tools installed on the operating system. The equivalent of this capability in Enterprise Manager is the 'OS Command' job type. Here are the steps required to convert the first entry in the crontab to an EM job:

1. Navigate to the Job Activity page.
2. Select the 'OS Command' job type and click Go. A 5-tab wizard will appear. Let's step through this one by one.
3. Select the first tab, called 'General'. Here, provide a meaningful name and description for the job. Since this job will be run on the Host target, keep the target type selection as 'Host'. Next, select all host targets in EM that you wish to run this script against. While cron jobs are defined on a per-host basis, in EM a job definition can be run and managed across multiple hosts or groups of hosts. This avoids having to maintain the same crontab across multiple hosts.
4. Select the 'Parameters' tab. Here, enter the command or script as specified in the last field of the crontab entry. When constructing the command, you can make use of the various target properties.
5. Next, select 'Credentials'. Here we provide the credentials required to connect to the host and execute the required commands or scripts. Three options are presented to the user:
Preferred - the default credential set for the host
Named - Credentials are stored within Enterprise Manager as "named" entities. Named credentials can be a username/password, or a public key-private key pair. Here we choose pre-created named credentials.
New - This allows us to create and use a new named credential.
Note: If your OS user does not have the required privileges to execute the set command, Named Credentials also support the use of sudo, powerbroker, sesu, etc.
6.
Next, we set the schedule, and this is where it gets interesting. As discussed before, crontab uses a textual representation for the schedule, while the EM Job System has a graphical representation. Our sample schedule in the crontab is '00 0 * * Sun'. This translates to a weekly job at 12 midnight every Sunday. To set this in EM, choose the 'Repeating' schedule type. The screenshot below shows all the other selections. The key here is to select the correct 'Frequency Type'; the rest of the selections are quite obvious. This also lets you choose the desired timezone for the schedule. Your options are to either start the job with respect to a fixed timezone, or start it in each individual target's timezone. The latter is very popular; for example, I want to start a job at 2 AM local time in every region around the world. Another selection of note is the 'Grace Period'. This is an extremely powerful feature, but often not used by many customers. Typically, we expect jobs to be started within a few seconds or minutes (based on the load on the system and the number of jobs scheduled) of the start time, but a job might not start on time for many reasons. The most common reasons are the Agent being down or a blackout. The grace period controls the latest start time for the job in case the job is delayed, else it is marked as skipped. By default, jobs are scheduled with indefinite grace periods, but I highly recommend setting a value. In the sample above, I set a 3 hr limit, which may seem large, but given the weekly nature of the job it seems reasonable. So the job system will wait until 3 am (the job start time is 12 am) to start the job, after which the iteration will be skipped. For repeating schedules, the grace period should always be less than the repeat interval. If the job starts on time, the grace period is ignored. 7. Finally, we navigate to the 'Access' tab.
This tab has two parts: Privilege assignment to roles and users: this allows you to control job-level access for other users. Email notifications for the job owner: this allows you to control the events for which you wish to receive notifications. Note, this only sets notifications for the job owner; other users can subscribe to emails by setting up notification and/or incident rules. To prevent EM from sending a deluge of emails, I recommend the following settings in the notifications region: Match status and severity: Both; Select severity of status: Critical; Select status: Problems & Action Required. You can always come back and modify these settings to suit your needs. Not all cron jobs need to be converted to OS Command jobs. For example, if you are taking Oracle database backups using cron, then you probably want to use the out-of-the-box job type for RMAN scripts. Just provide the RMAN script, the list of databases to run it against, and the credentials required to connect to the databases. Similarly, if you run SQL scripts against numerous databases, you can leverage the SQL Script job type for this purpose. There are over 50 job types available in EM12c, all available for use from the UI and EMCLI. Finally, the best way to learn more about the EM Job System is to actually play with it. I also recommend blogs from Maaz, Kellyn, and other users on this topic. Good luck!! References Maaz Anjum: http://maazanjum.com/2013/12/30/create-a-simple-job-for-a-host-target-in-em12c/ Kellyn Pot'vin: http://dbakevlar.com/ -- Adeesh Fulay (@adeeshf)
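Before converting a crontab entry, it can help to sanity-check how its five schedule fields line up against the table above. This is a minimal shell sketch; the script path is hypothetical and only illustrates the sample '00 0 * * Sun' schedule discussed in the post.

```shell
# Break the sample crontab entry into its five schedule fields plus the command.
# 'set -f' stops the shell from glob-expanding the '*' fields.
set -f
entry='00 0 * * Sun /home/oracle/scripts/weekly_backup.sh'
set -- $entry
echo "minute=$1 hour=$2 day_of_month=$3 month=$4 day_of_week=$5 command=$6"
```

The first five fields map directly onto the wizard's 'Repeating' schedule selections: minute 00, hour 0, and day-of-week Sun together mean "weekly, Sundays at midnight".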



Creating A Secondary I/O domain with Ops Center 12.2

Contributed by Juergen Fleischer and Mahesh Sharma. The purpose of this blog is to show you how to create a Secondary I/O domain. The first I/O domain is commonly known as the Control Domain (CDOM). Various terms are used for a Secondary domain, such as alternative I/O domain or redundant I/O domain. The secondary I/O domain will have been assigned some physical I/O devices, which may be a PCIe bus root complex, a PCI device, or an SR-IOV (Single Root I/O Virtualization) virtual function. Within Ops Center, when creating a Secondary Domain, we also use the terms Physical I/O Domain and Root Domain. The Physical I/O Domain maps PCI device end points, and the Root Domain maps PCIe buses, with an option to create SR-IOV functions. In this blog we will show you how to create a Root Domain by assigning PCIe buses, so that we have a redundant I/O domain which will enable us to shut down the CDOM without affecting any of our guests. Our host, a T5-2 (we're using the same host as in the previous blogs), has two free PCIe buses that have not been assigned to domains (pci_2 and pci_3). So let's create a Secondary I/O domain (Root Domain) with these buses and give the domain two whole cores with 4 GB of memory. We'll start by creating a Logical Domain Profile. This is created from: Navigation -> Plan Management -> Profiles and Policies, then click on Logical Domain. On the right, under Actions, select Create Profiles. From the Identify Profile screen, give the Profile a name (Secondary-I/O in our case) and select Root Domain as the Sub-type, as shown above. Click Next. Step 2 is where we provide a name for our Secondary I/O domain; we called ours secondary. Click Next to continue. On the next screen, shown below, we entered the number of cores and the amount of memory. Step 4 is where we specify how many PCIe buses we will assign to this secondary domain. In our case we have specified two, for pci_2 and pci_3. The next few steps are optional and are not required for this example.
Step 7 is the Summary; if everything looks correct, click Finish. Note: The metadata is on the local disk (file://guest). This is fine for the Secondary Domain as it will not be migrated. It's just the Logical Domains and their guests that will get migrated, therefore making it mandatory to have the metadata on shared storage for these – if you want migration to succeed! Now that we have created the Profile, we will create our Secondary Domain (Root Domain). This is done from Navigation -> Plan Management -> Deployment Plans -> Create Logical Domain; select the plan we have just created. From the Actions panel on the right, select "Apply Deployment Plan". From the Select Target Asset pop-up, select the CDOM and move it to the Target list. Select Next. Complete Step 1 by specifying the secondary name. In Step 2, we pick the PCIe buses that will be assigned to the secondary domain. In Step 3 we kept the default Virtual Disk Server (vds) name. The next few screens are not required in this example. From the Summary screen click Finish. This will create the Secondary Domain. Once the Secondary Domain has been created, we can check whether it's built as we specified. We can check via the Ops Center BUI and also from the command line. Let's do both: And from the command line. While on the command line we can also check whether the buses have been assigned correctly. This can also be seen from the BUI. Now that we have a Secondary I/O domain (Root Domain), we can install Solaris 11 on it. We'll cover this in a possible future blog. Hope this helps!
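On the control domain itself, free buses stand out in `ldm list-io` output by their empty DOMAIN column. The sketch below parses a saved sample of that output; the column layout is an assumption based on typical `ldm list-io` listings, not taken from the blog's screenshots.

```shell
# Parse a sample 'ldm list-io'-style listing (layout assumed) and print the
# PCIe buses that no domain owns, i.e. rows with an empty DOMAIN column.
cat > /tmp/ldm_io_sample.txt <<'EOF'
NAME   TYPE  BUS    DOMAIN
pci_0  BUS   pci_0  primary
pci_1  BUS   pci_1  primary
pci_2  BUS   pci_2
pci_3  BUS   pci_3
EOF
awk 'NR > 1 && NF == 3 { print $1 }' /tmp/ldm_io_sample.txt
```

With the sample data above this prints pci_2 and pci_3 – the two buses we hand to the Root Domain.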



Discovering an Oracle ZFS 7120 Storage Appliance with Ops Center 12.2 and creating an iSCSI LUN

Contributed by Juergen Fleischer and Mahesh Sharma. In this blog we will show you how to discover an Oracle ZFS 7120 Storage Appliance within Ops Center (12R2) as a dynamic storage library. Once discovered, we can create, delete, resize and dynamically assign iSCSI LUNs from Ops Center. First we need to configure the 7120 for iSCSI LUN distribution. Creating an iSCSI Target Group We need to create an iSCSI Target Group to define the ports and protocols by which the LUN will be presented. To do this we need to log on to the 7120 BUI from a browser, i.e. https://<7120-hostname>:215. To configure the iSCSI target, go to: Configure -> SAN -> click the "plus +" icon next to iSCSI, shown below. From the pop-up, enter a meaningful alias and decide which network interface to use. Click OK to finish. Note: CHAP authentication can be used and is supported by Ops Center; it would be recommended for a production environment. If you are not sure which network interface to use, have a look at the Network section, which will show how the networks are defined. Once the iSCSI target has been created (OC-iscsi in our case), we need to create a "Target Group" that includes our created target. To create a Target Group, place the cursor over OC-iscsi and a Move icon will appear. Drag the Move icon to the iSCSI Target group. Once moved across to the iSCSI Target Group, a default target group is created called target-0. Click on the edit symbol next to this, rename it with something meaningful and click OK. In our case we used Ldom-Guest. Click Apply to confirm the changes. Now that we have created a Target on the ZFS 7120 appliance, we can discover the 7120 with Ops Center. Discovering the Oracle ZFS 7120 Storage Appliance We started by creating Discovery credentials for the 7120, from: Plan Management -> Credentials; on the right-hand side, under Actions, click Create Credentials.
From the Create Credentials pop-up we selected "Storage Administrator" as the protocol from the drop-down menu and completed the other fields as shown. Next we created the Discovery Profile, from: Plan Management -> Profiles and Policies -> Discovery. On the right-hand side, under Actions, click "Create Profile". From the pop-up we selected Sun ZFS Storage Appliance as the Asset type, as shown. Click Next when completed. From the Targets page, we entered the IP address of the 7120. The "Target Group" and "Target" were entered in the Plugin Specific Information section, as shown below. We entered the Credentials for the 7120 (Step 4) that we created earlier, and then reviewed and accepted the Summary. Once the Discovery job completed, the 7120 can be seen under the asset tree. We can also see that we have a new Dynamic Block Storage entry under the Libraries section. Now that we have discovered the 7120, let's create a LUN. Creating an iSCSI LUN from Ops Center Creating an iSCSI LUN is now very simple from Ops Center. Go to: Navigation -> Libraries and select the 7120 from the Dynamic Storage, as shown below. From the center panel, click "Add LUN" and a pop-up window will appear as shown. Enter the name for the LUN and the size, and then click "Create". When the job completes you can see it in the Summary window, as below. Hope this helps!
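For completeness, making the new LUN visible on a Solaris 11 host is done on the host side with the iSCSI initiator. These steps are not covered by the blog itself; the sketch below only prints the commands (DRY_RUN=echo), and the appliance address is a hypothetical documentation IP.

```shell
# Host-side sketch (assumed steps, not from the blog): point the Solaris iSCSI
# initiator at the 7120 and rebuild the device tree. DRY_RUN=echo prints the
# commands instead of running them; replace 192.0.2.10 with your 7120 address.
DRY_RUN=echo
$DRY_RUN iscsiadm add discovery-address 192.0.2.10:3260
$DRY_RUN iscsiadm modify discovery --sendtargets enable
$DRY_RUN devfsadm -i iscsi   # create device nodes for the newly visible LUN
```

Remove `DRY_RUN=echo` (or set it empty) on a real host to actually run the commands as root.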



Patch automation of Data Guard Databases takes a leap with EM12cR4

Patching has always been one of the many joys (or dreads) of Oracle DBAs. The effort compounds as Database sprawl grows and architecture complexity increases. Additionally, there is increasing pressure to patch systems with minimal or zero downtime. EM12c's Database patch automation functionality provides end-to-end management, starting with proactive patch recommendations (like quarterly PSUs and CPUs), a complete pre-flight checker to ensure predictable patching during the maintenance window, and automation for patching the comprehensive list of products in the Oracle Database family. (EM12c Patch Automation overview) With the introduction of "Out of Place" patching in EM12c, the automation feature got a complete overhaul. Over the past 3 years I have seen customers moving to this mode to achieve their automation goals. (What is Out of Place?) Patching Data Guard environments (read: a Primary and its corresponding Standby Databases) has always been a challenging task for DBAs. In addition to handling the differences in the steps needed for the configuration, the distributed nature of the environment and the need to incorporate additional custom process tasks make it more demanding. Until now, EM only supported automation in disparate steps and in "In Place" mode. Starting with EM12c R4, the support is tighter, can be done in 'Out of Place' mode, and has new supplemental features to manage the process along with its additional tasks. In this post we will take a closer look at the Data Guard standby patching feature and then dive into a use case you could try out in your environment. Oracle Data Guard Patching using EM12c Quick Overview: Patch databases in a Data Guard setup: Standalone (P) – Standalone (S)* RAC (P) – RAC (S) RAC (P) – Standalone (S) DB-A (P), DB-B (S) – DB-B (P), DB-A (S) (common in dev.
environments) Multiple Standby environments Supports both In-Place and Out of Place methods; for RAC, patch in rolling or parallel modes. Built-in best practices – patch the Standby first. Supports both switchover and non-switchover based flows. Intelligent enough to identify patch levels across primary and standby and run compensatory actions, like running SQL on the primary when patching standbys with switchover. Advance the automation with Change Activity Planner integration to handle the rollout process at mass scale. *(P) represents the Primary Database and (S) represents its Standby. What's Hot? #1 Complete End-to-End Patching: The patch automation is end to end, beginning with proactive recommendations for the Primary and Standby DBs based on Oracle's quarterly recommended patches like Patch Set Updates (PSU), Grid Infrastructure (GI) PSU and Critical Patch Updates (CPU). As seen in the summary above, the patch automation covers a wide range of Data Guard deployments. It ranges from a simple Primary site with a Standalone or RAC DB and a Standalone Standby DB, to the mixed configurations commonly seen in development environments (an Oracle Home on one host shared by a Primary and a Standby, with their corresponding partner DBs on another host sharing an Oracle Home), all the way to complex deployments of mission-critical DBs with multiple standbys spread across sites. The automation is configuration aware and executes the right steps needed to patch the Standby and the Primary. For example, it patches the standby, brings it back to the proper state (like mount or read only), and handles application of the SQL to the Primary. Additionally, as part of the new enhancements, the user is guided to the right way of patching the system, beginning with the standby site, but retains the flexibility to handle cases where patching starts with the Primary site.
Note the current model is process based, wherein patching of the Primary and Standby is done in silos at different time windows, giving the administrator the opportunity to do a switchover and testing as needed. In the future, we will also introduce an option to make this procedural, wherein the complete system can be patched in a single go without any pause. That said, the process can be well managed using the Change Activity Planner, as explained later in this post. #2 Built-in best practices The solution has quite a few gems built in; let's look at a few important ones. Screenshot of a Patch Plan created for patching the Standby RAC Cluster a. Standby-first Patching: The patch automation follows Oracle's recommendation of patching the Standby first. The solution is intelligent enough to understand the topology of the databases picked, and their patch levels, to guide the user with the right inputs. The examples below showcase it: a. A user starting off with the Standby Database receives a thumbs up. b. If a user starts off with a Primary, it detects whether the corresponding Standby Database is patched or not and raises a flag. c. Also, in situations where the user picks the new standby after the switchover of the Primary (after patching the standby), the automation automatically pulls the Primary configuration into the patch plan to complete the SQL application. b. "Out of Place" Patching: 'Out of Place' is a very precious gem in EM12c's patch automation feature, and it is extended to patching of Data Guard databases with Release 4. It is a very important and popular mode of patching for 3 primary reasons: (i) Reduced or Zero Downtime: Since you are patching the inactive Oracle Home (OH), it directly saves the time otherwise spent applying patches to the existing OH.
It gets even better in Clusters, where both GI and RAC OHs can be patched 'Out of Place' with zero downtime. (ii) Flexibility in scheduling maintenance: 'Out of Place' patching comes in handy when you have a consolidated environment where multiple Databases share a single OH. Most of the time, it's hard for the DBAs to get a common maintenance window from their application colleagues. When you patch in this mode, the databases with approval for maintenance can be selectively switched to the new OH, leaving the rest of the DBs running in the existing OH without any impact. (iii) Risk Free: In situations where you find issues post patching, since the patching process is 'out of place', the old OH is still available to handle them. Using the patch automation's 'Switch Back' feature, the user can switch the DB back to its old OH, restoring normalcy quickly, and can introspect the failure without any impact to the business. To Switch or Not to Switch Typically, after patching the Standby Database it is switched with the Primary and tests are run; after successful tests the new standby is patched, and finally the databases are switched back, bringing the system back to its original configuration state. Different enterprises opt to switch or not to switch during their patching practices based upon various parameters such as usage, location, etc. You can still automate the patching with or without switching. In some cases, like RAC Clusters in the primary, or in both primary and standby, one could choose to patch without switchover, as patching happens in rolling fashion without affecting the data transport between the sites. (The example at the end of the post covers this scenario.)
If you opt to do a switchover, EM12c Release 4 supports automated switchover either from the UI, under the Database's menu option, or via the new emcli verb:

emcli switchover -primary_target_name="database" -primary_target_type="oracle_database" -standby_target_name="database1" -standby_target_type="oracle_database"

emcli switchover -primary_target_name="primary" -primary_target_type="rac_database" -standby_target_name="standby" -standby_target_type="rac_database" -swap_jobs -swap_thresholds

Advanced Automation - Additional features for Enterprises: The most common question I am asked by customers is "How do I adapt this automation to my enterprise?". Enterprises normally have a large number of environments, typically have a few DBAs dedicated to completing the patching effort (sometimes spread out across the globe), and have custom practices with additional steps involved in patching. Other than the standard practice of moving to the command line interface (EMCLI) to ease the automation as you mature, you can leverage a couple of features available in the automation suite to fit the automation within your enterprise. Use Patch Templates: Just as the literal meaning of 'template' suggests, a Patch Template stores the patches and the deployment options used in the testing cycles. A user can create a patch template from a successfully deployed patch plan. Patch templates do not store targets. The lead DBA creates the patch templates after testing and shares them with the other DBAs. Each DBA then creates a Patch Plan from the template and adds the targets to be patched during their maintenance cycle. It simplifies DBA communication by replacing a documented list of patches and options with just the name of the Patch Template to use for the rollout. This also removes the human error of missing patches, which can happen while copying patches from a document.
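The two emcli switchover invocations differ only in the target type and the RAC-only swap flags, so a small wrapper can keep that straight. This is just a convenience sketch: the target names are hypothetical, and DRY_RUN=echo prints the command rather than executing it.

```shell
# Build the documented 'emcli switchover' call for either a single-instance or
# a RAC pair. Target names are hypothetical; DRY_RUN=echo prints the command.
DRY_RUN=echo
do_switchover() {   # usage: do_switchover <primary> <standby> <target_type>
  extra=""
  if [ "$3" = "rac_database" ]; then
    extra="-swap_jobs -swap_thresholds"   # RAC-only flags from the verb above
  fi
  $DRY_RUN emcli switchover \
      -primary_target_name="$1" -primary_target_type="$3" \
      -standby_target_name="$2" -standby_target_type="$3" $extra
}
do_switchover proddb prodstby rac_database
```

Clear DRY_RUN (and log in with emcli first) to run the switchover for real.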
Change Activity Planner: Change Activity Planner (CAP) helps in planning and handling long-running processes at large scale, like patch rollouts. I covered its introduction in EM12c R3 in an earlier blog post. We have made quite a few enhancements to the Change Activity Planner feature in Release 4. Let's take a look at the top three features that can be leveraged for handling patching of Standby Databases at scale. As a lead DBA or Engineering: Use the new graphical interface to weave together the tasks involved in patching standbys as followed in your enterprise. CAP now supports Jobs and Deployment Procedures as tasks, in addition to Patch Plans and Manual tasks. Add additional tasks like backup jobs from your EM Library, or call into switch backs using jobs or manual steps. As a DBA or Operations DBA: Review the tasks assigned to you. Directly create the Patch Plan from the task and track the progress from the task, without having to land in the Patching feature. Example Scenario Let's take a quick look at an example of how you could leverage EM12c R4 in patching the Clusters in Standby with the quarterly recommended GI PSU (applicable to both the GI and RAC layers). Since in a cluster environment both the Primary and Standby DBs can be patched in rolling fashion in their silos without affecting the data transport, you can skip the switchover process. Hope this was a good, comprehensive introduction to the new enhanced patch automation support for Data Guard Databases. I encourage you to explore the functionality and provide us with feedback. References: Lifecycle Management Guide in the OTN Documentation for step-by-step patching tutorials for different configurations.



Creating a Network Aggregated Link with Ops Center (12R2)

Contributed by Juergen Fleischer and Mahesh Sharma. In this short blog we will use Ops Center to create a Link Aggregation. This will allow us to combine multiple network connections in parallel to increase throughput and provide redundancy. If you recall from "Ops Center 12.2: Creating a CDOM - Detaching Unused PCIe Buses - Adding PCIe Buses", we created the CDOM with just one network connection (net0, ixgbe2); this was from pci_0, which is the onboard bus. pci_0 also had device ixgbe3 attached, which we did not configure during provisioning. In the same blog we also attached another PCIe bus (pci_1) to the Control Domain, which gave us another two network devices (ixgbe0 and ixgbe1), as seen below. So to summarize, we have the following devices:

Link  PCIe Bus          Physical Device
net0  pci_0 (on board)  ixgbe2
net1  pci_0 (on board)  ixgbe3
net4  pci_1             ixgbe0
net5  pci_1             ixgbe1

The idea here is that net1, net4 and net5 will be connected to our Production network on the CDOM. As mentioned above, we would like to combine these nets to provide some redundancy and better throughput. So, let us create the aggregated link with Ops Center. From Navigation -> Assets, select the CDOM OS asset as shown below. From the tabs in the middle panel, select "Networks", then "Link Aggregations". Click on the green plus sign, "Create Link Aggregations". On the next screen, specify an aggregation name and move the nets from "Available Network Interfaces" to "Link Aggregation Members", as shown below. The next screen allows you to configure the links. The last step is to review the Summary and click Finish. Let's confirm these changes have taken place from the command line and from the Ops Center BUI. From the command line, log on to the Control Domain and run the following: We can see we have an aggregated link called aggr0, which consists of net1, net4 and net5. This can also be seen from the BUI. Hope this helps!
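The command-line check can be scripted as well. The sketch below parses a saved sample of `dladm show-aggr -x`-style output; the exact column layout is an assumption based on typical dladm output, not taken from the blog's screen grab.

```shell
# List the member ports of aggr0 from a sample 'dladm show-aggr -x'-style
# listing (layout assumed): member rows carry '--' in the LINK column.
cat > /tmp/aggr_sample.txt <<'EOF'
LINK   PORT  SPEED    DUPLEX  STATE
aggr0  --    10000Mb  full    up
--     net1  10000Mb  full    up
--     net4  10000Mb  full    up
--     net5  10000Mb  full    up
EOF
awk '$1 == "--" { print $2 }' /tmp/aggr_sample.txt
```

With the sample above, this prints net1, net4 and net5 – the three members we added to aggr0.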



Ops Center 12.2: Creating a CDOM - Detaching Unused PCIe Buses – Adding PCIe Buses

Contributed by Juergen Fleischer and Mahesh Sharma. In this blog we wanted to show you how to provision a Control Domain (CDOM) and detach the PCIe buses that are not required by the CDOM for I/O operations. By detaching the buses we have the opportunity to create another I/O domain if required. We'll also show you how to add a PCIe bus to the CDOM. We have a factory-default SPARC T5-2 server, which means a CDOM is already installed by default and has all resources allocated to it. Let's have a look at our factory-reset T5-2. As you can see from the above output, the CDOM (Primary Domain) owns all the PCIe buses and has all memory and CPUs assigned to it. Let's use Ops Center to create a CDOM with the following specifications: 2 whole cores; 4 GB memory; Solaris 11.1 SRU 19.6; LDOM s/w version; a filesystem with 4 MB swap and the rest in /; and, most importantly, detach the PCIe buses that are not required by the Primary Domain. We will add all these specifications within our Plan Management Profiles and apply them while provisioning. Creating the OS Provisioning Profile Let's start by creating a Profile for OS Provisioning (OSP). I will not cover all the steps, as I'm sure you're familiar with most of them. From: Navigation -> Plan Management -> Profiles and Policies, highlight "OS Provisioning". On the right-hand side, under Actions, select "Create Profiles". From the screen shown below, name the Profile and select "Oracle VM Server for SPARC". On the next screen we select the OS and OVM Server version as required. Continue through the other steps, completing the information as required. On Step 6 we specify how we want to lay out our filesystem; remember, our criteria are 4 MB for swap and the rest in /. View the Summary (Step 8) and click Finish. Creating the OS Configuration Profile Next we will create the OS Configuration (OSC) Profile. From: Navigation -> Plan Management -> Profiles and Policies, highlight "OS Configuration".
On the right-hand side, under "Actions", select "Create Profile". In Step 1, identify the Profile Name and select Oracle VM Server for SPARC, as shown. Step 2 is where we identify the CDOM parameters - like cores and memory - and have the option to "Detach Unused Buses". Complete the remaining steps as required. Step 5 is the Summary. Creating the Deployment Plan This is where we combine the OSP and OSC Profiles to create the Deployment Plan. From: Navigation -> Plan Management -> Deployment Plans -> Provision OS, click "Create Plan from Template" on the right-hand side. Name the Plan and select the OSP and OSC Profiles created in the previous steps, as shown. After creating the Deployment Plan, you are ready to provision the CDOM. To do this, select the iLOM from the assets menu and select "Install Server" from the Actions panel. A job will be created that will look similar to this: Once the provisioning job has completed successfully, we can review the CDOM and check whether our parameters have been set. We can check via the BUI or the command line – we'll use a little of each. Let's check whether our specification of 4 GB of memory and 2 whole cores has been applied via the BUI. We'll now log on to the CDOM and check whether the unused buses are detached – to do this via the Ops Center BUI, just click on the I/O Resources tab in the image above. From the CLI the output will look like this: Here we can see that the Primary Domain now only owns pci_0. Compare this to the output of `ldm ls-io` given at the top of the blog. Adding a PCIe bus Now we have a CDOM using only the required PCIe buses - in our case pci_0. Let's show you how to add another PCIe bus to the CDOM. In this example we'll be adding pci_1, which will give us some extra network interfaces to play with. So at present the network devices available to our CDOM are ixgbe2 (net0) and ixgbe3 (net1).
The device numbers at first seem strange, as you may be expecting to see ixgbe0 and ixgbe1 – which would make more sense... right! The reason for the numbering is the way Solaris maps instance names to physical devices in /etc/path_to_inst. Solaris keeps a history of devices and how they are mapped. Let's add another PCIe bus to the CDOM so that we can have some more ixgbe interfaces. In our case we want to eventually aggregate the interfaces from this PCIe bus. PCIe buses cannot be dynamically added to the CDOM, therefore we will put the CDOM into delayed reconfiguration mode. With delayed reconfiguration, changes take effect after the next reboot. Then add the PCIe bus: As it states above, a reboot will be required. After the reboot we can see that the Primary Domain (CDOM) is now the owner of pci_1 and pci_0. Let's see the network devices we have available now. In a future blog we'll show you how we can aggregate the network devices via Ops Center. Hope this helps!
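For reference, the underlying LDoms commands that this flow drives look roughly like the sketch below. This is drawn from general `ldm` usage rather than the blog's (image-based) output, and DRY_RUN=echo prints the commands instead of executing them.

```shell
# Sketch of the manual equivalent (DRY_RUN=echo; run for real only on a CDOM).
DRY_RUN=echo
$DRY_RUN ldm start-reconf primary   # enter delayed reconfiguration mode
$DRY_RUN ldm add-io pci_1 primary   # attach the bus; applied at next reboot
$DRY_RUN shutdown -y -g0 -i6        # reboot the CDOM to apply the change
$DRY_RUN ldm list-io                # afterwards, verify primary owns pci_1
```

Ops Center performs the equivalent steps for you; the sketch is only to show what "delayed reconfiguration" means at the CLI level.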



Creating a VLAN network in Ops Center 12.2

Contributed by Juergen Fleischer and Mahesh Sharma. In this short blog we would like to show you how to create a VLAN-tagged network in Ops Center 12R2. In a future blog we will show you how we're using VLAN-tagged networks to separate the network traffic between our LDOM guests for security reasons. To create a VLAN-tagged network, from: Navigation -> Networks, highlight the "default" network as shown below. On the right, under Actions, click on "Define Network". On the next screen, complete the mandatory information (i.e. Network IP and Network Name) as the minimum. Click Next to continue. The next screen of interest to us is Step 3, "Assign Fabric". This is where we enter the VLAN ID. Highlight any fabric and enter the VLAN ID. It really does not matter which fabric you assign the VLAN ID to in this example, as the association between VLAN ID and fabric is an internal Ops Center connection. Assigning the correct fabric to a VLAN ID does become important when you're using InfiniBand and Exalogic. The next few steps are not required in this example and have been skipped. Step 9 is the Summary screen; if all is correct, press Finish. A job will run and the new VLAN-tagged network will be created. Hope this blog helps!
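Under the covers, a VLAN-tagged datalink on Solaris 11 is created with `dladm`. The sketch below shows the host-side equivalent with a hypothetical VLAN ID of 20 over net0; it is not part of the Ops Center flow above, and DRY_RUN=echo prints the commands instead of running them.

```shell
# Host-side sketch (hypothetical VLAN ID 20 on net0; DRY_RUN=echo prints only).
DRY_RUN=echo
$DRY_RUN dladm create-vlan -l net0 -v 20 vlan20   # tagged link over net0
$DRY_RUN ipadm create-ip vlan20                   # IP interface on the VLAN link
$DRY_RUN dladm show-vlan                          # verify the new VLAN link
```

Ops Center manages this for you once the VLAN ID is defined against the network, so the sketch is only to show what the tagging means at the OS level.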



Creating and sharing FC LUNs from an Oracle ZFS 7120 Storage Appliance

Contributed by Juergen Fleischer and Mahesh Sharma. While working on a larger project in which we will be using Ops Center to create a resilient OVM for SPARC environment, we borrowed a ZFS 7120 Storage Appliance. The 7120 will be used to distribute FC LUNs to boot our Control Domain (CDOM). As we'd never had the opportunity to play with a 7120 before, we needed to do some quick learning and thought it would be a good idea to share it with you. Please note: we are not storage experts, and there may be an easier way to create and share FC LUNs – as mentioned, this is just one component of a larger project. Hopefully you may find some of the information useful. So our aim: Create FC LUNs on the 7120 and share them. Make the LUNs visible to our host – we'll be using a factory-reset SPARC T5-2 server. In a later blog we will show you how to discover the ZFS appliance with Ops Center so we can create a dynamic storage library. Creating and Sharing LUNs The first step was to figure out which WWN (World Wide Name) and ports are used for FC. This is done by logging into the 7120 via a web browser, i.e. https://<7120-hostname>:215. Don't forget to add the port number! Then click on Configure -> SAN -> Fibre Channel Ports. Next to the port that you are interested in (Port 2 in our case), click the i and another screen will pop up. From the pop-up, the WWN of the port can be identified. We now need to see whether the FC port is connected to the T5-2 host, which in our case is running Solaris 11. To check whether the ports can be seen from the host, we used the following: This shows us that the FC port on the 7120 can be seen by our host. So all is good... so far! Now we need to create and present some LUNs to the T5-2 from the 7120. To do this, first we need to create a "Target Group" to define the ports and protocols that will be inherited by the LUN and presented to the T5-2. This is done from Configure -> SAN -> Fibre Channel Ports; click the drop-down as shown in the image and change from Initiator to Target.
This will require a reboot of the 7120 – you'll be prompted by a pop-up. Now we need to add the Fibre Channel Ports to the Fibre Channel Target Groups. This is done by placing the mouse in the Fibre Channel Ports box; a Move icon will appear – drag this to the left. Once the ports are under the Target Groups, you can rename the group by clicking on the pen symbol (shown below), or leave them in the default group. We created two groups, one named Ops-Center-Primary (for the CDOM) and the other Ops-Center-Secondary (for the alternate domain). Next we can create some LUNs. The first LUN we created will be used for installing the OS on the CDOM. We created a 20 GB LUN and named it Primary-LUN. The other LUN we created was for the alternate domain, called Secondary-LUN – we're very imaginative with our naming! We did this by going to: Shares -> SHARES and clicking the icon next to LUNs. From the "Create LUN" pop-up, we entered the name and required size, then clicked Apply. Now that we have created the shared LUN, we need to assign it to one of the Target Groups that we created earlier. From: Shares -> PROJECTS, highlight the default project and you'll see a pen edit symbol - unfortunately not seen in my image grab! The next screen shows you the LUNs that were created; click on the pen edit symbol. From the next screen click on Protocols and select the "Target Group" that you created earlier; in our case we chose Ops-Center-Primary, and then APPLY. Now that we have the LUN prepared on the 7120, we need to make sure it's visible on the T5-2. From the host we can see that our host adapter is now connected. Let's make sure that this is the correct device. From "format" we can also confirm the 20 GB LUN is now finally available. Please note: we have removed all our internal disks. Ops Center (12cR2) has been designed to pick the first disk it sees to provision the OS on, so removing all internal disks will ensure that the OS is provisioned on the LUN.
Once we provision the Primary Domain (CDOM), we'll follow the steps above again and make another LUN available to the alternate domain. Hope you find this blog useful!


EM12c Release 4: Job System Easter Eggs - Part 2

This is part 2 of my two-part blog post on Job System Easter Eggs (hidden features). The two features being covered are:

New Job progress tracking UI
Import/Export of job definitions

In the previous post, I talked about the new UI, while in this post I will cover import and export for job definitions.

2. Import/Export of job definitions

The ability to export and import job definitions across EM environments has been a popular enhancement request for quite some time. There are 3 primary drivers:

Test to Production: Moving active and library job definitions from a test site to production
Migration: Moving job definitions across EM versions. This is often used as an alternative to upgrade.
Failsafe: For extra protection, in addition to repository backups. This can be useful for selective restoration of job definitions.

In the first iteration, we are exposing these functions as emcli verbs. Since these emcli verbs are not formally supported in the release, no documentation is included in the EMCLI Reference Guide, so to learn more about them we have to look at the emcli help. The two verbs are called export_jobs and import_jobs.

>> ./emcli help | grep _jobs

Some salient points about the import and export verbs:

Export:
Rich filter criteria to form the list of jobs to export
Active and library jobs can be exported
Preview mode, to view results before exporting them
Job definitions are exported as zip files for ease of transfer
Contextual information like targets, credentials, access settings, etc. is exported, but not imported. In the future, we may be able to support import for this information as well.
System jobs and nested jobs are not exported

Import:
Special preview mode to view the contents of the exported file, and to determine possible conflicts during import
Rich filter criteria to selectively import job definitions from the exported file
Job definitions are imported to the library ONLY.
This is true even if an active job was exported.
Two failure modes: skip on error, and rollback on error
Import only works on the same or a higher version of EM12c; the patch set number matters.

Export Job Definitions

Let's walk through an example.

1. We start with export_jobs. The help for this verb shows all the available options. Most of the filters are self-explanatory, so I will skip the explanation. The most important option of all is the -preview flag. This, when used in conjunction with other filters, shows the results without actually exporting the job definitions.

>> ./emcli help export_jobs
emcli export_jobs
  [-name="job name1;job name2;..."]
  [-type="job type1;job type2;..."]
  [-targets="tname1:ttype1;tname2:ttype2;..."]
  [-owner="owner1;owner2;..."]
  [-preview]
  -export_file=<Zip filename that will be created>
  -libraryjobs

2. Now let's play with this verb. If we invoke it with just the -preview flag, it lists all job definitions that can be exported, both active and from the library. Note: system jobs and nested jobs are skipped from this output.

>> ./emcli export_jobs -preview

Not all job types are exportable. To determine the list of job types supported via the import and export verbs, use the get_job_types verb.

>> ./emcli get_job_types

Currently, there are over 50 job types supported, and this list will continue to grow with every release.

3. From the list above, I am primarily interested in jobs created by the user AFULAY, so I apply the -owner filter.

>> ./emcli export_jobs -owner=AFULAY -preview

In this output I see 2 jobs: 'Library Job', which is a simple OS command job stored in the job library, and 'Test Job', which is an active OS command job scheduled against a bunch of targets. Note: if multiple options are specified, like -name and -owner, they are ANDed together.

4. Since I am the lazy kind, I would rather export everything and later decide what I wish to import. So here it goes.
The -export_file option takes the location and file name for the export file. Note that the actual output is an XML file containing the job metadata, but the export file is always archived and stored in zip format. At this point, I am sure most would instinctively unzip the file and start perusing its contents, but doing so would be analogous to removing the warranty-void sticker from your new TV or Blu-ray player. In short, attempts to manually modify the contents of the exported zip file are highly discouraged.

>> ./emcli export_jobs -export_file=/tmp/afulay_alljobs.zip

Note how the Status column reports success or failure for each job being exported. With the file exported, we now move on to the import verb.

Import Job Definitions

In the previous section, we exported a file with all job definitions in it. Now let's say we share this file with a bunch of other admins and ask them to import whatever job definitions make sense or are relevant to their environment.

1. To understand the capabilities of the import verb, we take a look at the help content.

>> ./emcli help import_jobs
emcli import_jobs
  [-name="job name1;job name2;..."]
  [-type="job type1;job type2;..."]
  [-targets="tname1:ttype1;tname2:ttype2;..."]
  [-owner="owner1;owner2;..."]
  [-preview]
  [-force]
  [-stoponerror]
  -import_file=<Zip filename that contains job definitions>

Most of the options are quite similar to export_jobs, barring a few. The -force flag allows the admin to update an existing job definition. Typically, you will run into this when a conflicting job is found in the new environment and you want to either update its definition with the new version from the import file or overwrite the localized changes. The -stoponerror flag, when specified, stops the import process on the first failure encountered, and then rolls back all jobs imported in the session. We will likely rename this flag to rollbackonerror to correctly represent its behavior.
The default behavior is to skip failed jobs and continue importing the others.

2. Before our admins import job definitions, they first need to view the contents of the exported file. This again can be done using the -preview option.

>> ./emcli import_jobs -preview -import_file=/tmp/afulay_alljobs.zip

The -preview option in the import verb is special. It not only lists the contents of the exported file, but also connects to the new EM environment and looks for potential conflicts during import, so it is a deep validation test. As seen in the above screenshot, there are two sections in the output: the first is a listing of all job definitions from the import file, while the second is a list of all conflicts. Note: for demo's sake, I am exporting and importing to the same EM site, and hence every job shows up as a conflict. To address this, I will eventually delete the 'Library Job' from my job library and import it from the import file.

Disclaimer: In the interest of full disclosure, I should mention that there are a few known bugs with the import verb, hence the rationale for not releasing these verbs formally with EM12c R4. Some bugs I ran into while writing this blog were:

You cannot export an active job, delete it, and import it back into the same EM environment; this currently is only possible with library jobs. This is an obscure case, though.
The -force flag is a little flaky, so sometimes it won't force an import even when you want it to.
The -owner flag does not work on the import file; it instead throws an exception.

That said, when a job does get imported, it does so properly, so there is never any risk of metadata corruption.

3. If I try to import the 'Library Job', the verb fails and gives me an error message.

>> ./emcli import_jobs -name='LIBRARY JOB' -import_file=/tmp/afulay_alljobs.zip

The Status column reports Error, while the Notes column gives the reason as 'job already exists'.

4. Now let's delete the library job and try to import it.
>> ./emcli delete_library_job -name='LIBRARY JOB'
>> ./emcli import_jobs -name='LIBRARY JOB' -import_file=/tmp/afulay_alljobs.zip

Success!! We were able to delete the library job and import it back from the import file.

In summary, these are two very useful enhancements made in EM12c R4. Unfortunately, due to time constraints and our inability to meet the set quality standards, we decided to ship these features in a disabled state. This ensures that production sites and users are not impacted, while still giving a few brave souls the opportunity to test these features. In my assessment, the new UI is fairly robust, as I have been using it exclusively for a while. On the other hand, there are a few known bugs with the import and export emcli verbs, so use these with caution.

-- Adeesh Fulay (@adeeshf)
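The two import failure modes described above (the default skip-on-error behavior versus the rollback triggered by -stoponerror) can be sketched in plain Python. This is only an illustration of the semantics, not emcli's actual implementation; the function and data names are made up.

```python
# Hypothetical sketch of import_jobs failure modes:
# default = skip conflicting jobs and continue; stop_on_error = roll back
# everything imported in the session on the first failure.

def import_jobs(definitions, existing_library, stop_on_error=False):
    """Import job definitions into a library dict; return per-job status."""
    imported, status = [], {}
    for job in definitions:
        if job in existing_library:            # conflict: job already exists
            status[job] = "Error: job already exists"
            if stop_on_error:
                for done in imported:          # roll back this session's imports
                    del existing_library[done]
                    status[done] = "Rolled back"
                return status
            continue                           # default: skip and carry on
        existing_library[job] = "definition"
        imported.append(job)
        status[job] = "Success"
    return status

library = {"LIBRARY JOB": "definition"}
print(import_jobs(["Test Job", "LIBRARY JOB", "Other Job"], library))
```

With the default mode, 'LIBRARY JOB' is reported as a conflict while the other two jobs import successfully; with stop_on_error=True, the same conflict would also undo 'Test Job'.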


EM12c Release 4: Job System Easter Eggs - Part 1

So you just installed a new EM12c R4 environment, or upgraded your existing EM environment to EM12c R4. Post upgrade, you go to the Job System activity page (via the Enterprise -> Job -> Activity menu) and view the progress details of a job. Well, nothing seems to have changed: it's the same UI, the same multi-page drill-down to view step output, the same number of clicks, etc. Wrong! In this two-part blog post, I talk about two Job System Easter Eggs (hidden features) that most of you will find interesting. These are:

New Job progress tracking UI
Import/Export of job definitions

So before I go any further, let me address the question of why these features are hidden. As we were building them, we realized that we would not be ready to ship the desired quality of code by the set dates. Hence, instead of removing the code, it was decided to ship it in a disabled state so as not to impact customers, while still allowing a brave few to experiment with it and provide valuable feedback.

1. New Job Progress Tracking UI

The job system UI hasn't changed much since its introduction almost 10 years ago. It is a daunting task to update all the job-system-related UIs in a single release, so we decided to take a piecemeal approach instead. In the first installment, we have revamped the job progress tracking page. The current UI, as shown above, while very functional, is also very laborious. Multiple clicks and drill-downs are required to view the step output for a particular target. Also, any click leads to a complete page refresh, which wastes time and resources. The new UI tries to address all these concerns. It is a single-page UI, which means that no matter where you click, you never leave the page and thus never lose the context of the target or step you were in. It also significantly reduces the number of clicks required to complete the same task as in the current UI. So let's take a look at this new UI. First, as I mentioned earlier, you need to enable it.
To do this, you need to run the following emctl command from any of the OMSes:

./emctl set property -name oracle.sysman.core.jobs.ui.useAdfExecutionUi -value true

This command prompts for the sysman password, and then enables the new UI. NOTE: This command does not require a restart of the OMS. Once run, the new UI is enabled for all users across all OMSes. Now revisit the job progress tracking page from before. You will be directed to the new UI. There are in all 6 key regions on this new single-page job progress tracking UI. Starting from the top left, these are:

Job Run Actions - Actions that can be performed on the job run, like suspend, resume, retry, stop, edit, etc.
Executions - This region displays all the executions in the job run. An execution, in most cases, represents a single target and hence runs independently from other executions. This region thus shows the progress and status of all executions in a single view. The best part of this region is the column titled 'Execution Time'. The cigar chart in this column represents two things: one, the duration of the execution, and two, the difference in start times. The visual representation helps in identifying runaway executions, or simply comparing execution times across different targets. The Actions menu offers various options like start, stop, debug, delete, etc.
Execution Summary - Clicking on an execution in the above region paints the area on the right. This region shows the execution summary, with information like status, start and end date, execution id, command, etc.
Execution Steps - This region lists the steps that make up the execution.
Step Output - Clicking on a step in the above region paints this region. It shows the details of the step, including the step output and the ability to download it to a text file.
Page Options - We imagine that learning any new UI takes time, and hence this final region provides the option to switch between the new and the classic view.
Additionally, it also allows you to set the auto-refresh rate for the page. Essentially, considering that jobs have two levels (executions and steps), we have experimented with a multi-master style layout. EM has never used such a layout, and hence concerns were raised when we chose to do so.

Master 1 (region 2) -> Detail 1 (regions 3, 4, & 5)
Master 2 (region 4) -> Detail 2 (region 5)

In summary, with this new UI we have been able to significantly reduce the number of clicks required to track job progress and drill into details. We have also been able to show all relevant information on a single page, thus avoiding unnecessary page redirections and reloads. I would love to hear from you whether this experiment has paid off and whether you find this new UI useful. In the next part of this blog, I talk about the new emcli verbs to import and export job definitions across EM environments. This has been a long-standing enhancement request, and we are quite excited about our efforts.

-- Adeesh Fulay (@adeeshf)


EM12c Release 4: New Compliance features including DB STIG Standard

Enterprise Manager’s compliance framework is a powerful and robust feature that gives users the ability to continuously validate their target configurations against a specified standard. Enterprise Manager’s compliance library is filled with a wide variety of standards based on Oracle’s recommendations, best practices, and security guidelines. These standards can easily be associated with a target to generate a report showing its degree of conformance to that standard. (For an overview of database compliance management in Enterprise Manager, see this screenwatch.) Starting with this release of Enterprise Manager, the compliance library contains a new standard based on the US Defense Information Systems Agency (DISA) Security Technical Implementation Guide (STIG) for Oracle Database 11g. According to the DISA website, “The STIGs contain technical guidance to ‘lock down’ information systems/software that might otherwise be vulnerable to a malicious computer attack.” In essence, a STIG is a technical checklist an administrator can follow to secure a system or software. Many US government entities are required to follow these standards; however, many non-US government entities and commercial companies base their standards directly or partially on these STIGs. You can find more information about the Oracle Database and other STIGs on the DISA website. The Oracle Database 11g STIG consists of two categories of checks: installation and instance. Installation checks focus primarily on the security of the Oracle Home, while the instance checks focus on the configuration of the running database instance itself. If you view the STIG compliance standard in Enterprise Manager, you will see the rules organized into folders corresponding to these categories. The rule names contain a rule ID (DG0020, for example) which maps directly to the check name in the STIG checklist, along with a helpful brief description.
The actual description field contains the text from the STIG documentation to aid in understanding the purpose of the check. All of the rules have also been documented in the Oracle Database Compliance Standards reference documentation. In order to use this standard, both the OMS and agent must be at this release's version, as it takes advantage of several features new in this release, including:

Agent-Side Compliance Rules
Manual Compliance Rules
Violation Suppression
Additional BI Publisher Compliance Reports

Agent-Side Compliance Rules

Agent-side compliance rules are essentially the result of a tighter integration between Configuration Extensions and Compliance Rules. If you ever created custom compliance content in past versions of Enterprise Manager, you likely used Configuration Extensions to collect additional information into the EM repository so it could be used in a repository compliance rule. This process, although powerful, could make it confusing to correctly model the SQL in the rule creation wizard. With agent-side rules, the user only needs to choose the Configuration Extension/Alias combination, and that’s it. Enterprise Manager will do the rest for you. This tighter integration also means their lifecycles are managed together. When you associate an agent-side compliance standard with a target, the required Configuration Extensions are deployed automatically for you. The opposite is also true: when you disassociate the compliance standard, the Configuration Extensions are undeployed. The Oracle Database STIG compliance standard is implemented as an agent-side standard, which is why you can simply associate the standard with your database targets without first deploying the associated Configuration Extensions. You can learn more about using agent-side compliance rules in the screenwatch Using Agent-Side Compliance Rules on Enterprise Manager's Lifecycle Management page on OTN.
Manual Compliance Rules

There are many checks in the Oracle Database STIG, as well as in other common standards, which simply cannot be automated. This could be something as simple as “Ensure the datacenter entrance is secured.” or as complex as Oracle Database STIG Rule DG0186, “The database should not be directly accessible from public or unauthorized networks.” These checks require a human to perform them and attest to their successful completion. Enterprise Manager now supports these types of checks as manual rules. When first associated with a target, each manual rule generates a single violation. These violations must be manually cleared by a user, who is in essence attesting to their successful completion. The user is able to permanently clear the violation or give a future date on which the violation will be regenerated. Setting a future date is useful when policy dictates a periodic re-validation of conformance, wherein the user will have to re-perform the check. The optional reason field gives the user an opportunity to provide details of the check results.

Violation Suppression

There are situations that require permanently or temporarily suppressing a legitimate violation or finding, including approved exceptions and grace periods. Enterprise Manager now supports the ability to temporarily or permanently suppress a violation. Unlike clearing a manual rule violation, suppression simply removes the violation from the compliance results UI, and in turn its negative impact on the score. The violation still remains in the EM repository and can be accounted for in compliance reports. Temporarily suppressing a violation can give users a grace period in which to address an issue. If the issue is not addressed within the specified period, the violation reappears in the results automatically. Again, the user may enter a reason for the suppression, which is permanently saved with the event along with the suppressing user's ID.
Additional BI Publisher Compliance Reports

As I am sure you have learned by now, BI Publisher now ships with, and is integrated into, Enterprise Manager. This means users can take full advantage of the powerful reporting engine by using the Oracle-provided reports or building their own. There are many new compliance-related reports available in this release, covering all aspects including association status and the library, as well as summary and detailed results reports.

10 New Compliance Reports
Compliance Summary Report Example showing STIG results

Conclusion

Together with the Oracle Database 11g STIG compliance standard, these features provide a complete solution for easily auditing and reporting the security posture of your Oracle Databases against this well-known benchmark. You can view an overview presentation and demo in the screenwatch Using the STIG Compliance Standard on Enterprise Manager's Lifecycle Management page on OTN.

Additional EM12c Compliance Management Information

Compliance Management - Overview (Presentation)
Compliance Management - Custom Compliance on Default Data (How To)
Compliance Management - Custom Compliance using SQL Configuration Extension (How To)
Compliance Management - Custom Compliance using Command Configuration Extension (How To)
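The suppression semantics described above (a permanently suppressed violation stays hidden, while a temporarily suppressed one reappears once its grace period expires) can be sketched as a small filter. This is an illustrative model only, not EM's internal logic; the field names are made up.

```python
from datetime import date

# Illustrative sketch of violation suppression: decide which violations
# should appear in the compliance results UI on a given day.

def visible_violations(violations, today):
    """Return rule IDs of violations that should appear in the results UI."""
    shown = []
    for v in violations:
        until = v.get("suppressed_until")      # None = not suppressed
        if until is None:
            shown.append(v["rule"])
        elif until != "permanent" and today >= until:
            shown.append(v["rule"])            # grace period expired: reappears
    return shown

viols = [
    {"rule": "DG0020"},                                        # active
    {"rule": "DG0186", "suppressed_until": "permanent"},       # suppressed
    {"rule": "DG0073", "suppressed_until": date(2014, 6, 1)},  # temporary
]
print(visible_violations(viols, date(2014, 7, 1)))  # → ['DG0020', 'DG0073']
```

Note that, as the post explains, suppressed violations are only hidden from the UI and score; they remain in the repository for reporting.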


EM12c: Using the LIST verb in emcli

Many of us who use EM CLI to write scripts and automate daily tasks should not miss out on the new list verb released with this version of Oracle Enterprise Manager. The combination of the list verb and Jython-based scripting support in EM CLI makes it easier to automate complex tasks with just a few lines of code. Before I jump into a script, let me highlight the key attributes of the list verb and why it's simply excellent!

1. Multiple resources under a single verb:
A resource can be a set of users or targets, etc. Using the list verb, you can retrieve information about a resource from the repository database. Here is an example which retrieves the list of administrators within EM.

Standard mode
$ emcli list -resource="Administrators"

Interactive mode
emcli> list(resource="Administrators")
The output will be the same as standard mode.

Script mode
$ emcli @myAdmin.py
Enter password : ******
The output will be the same as standard mode.

Contents of the myAdmin.py script:
login()
print list(resource="Administrators", jsonout=False).out()

To get a list of all available resources use:
$ emcli list -help

With every release of EM, more resources are being added to the list verb. If there is a resource which you feel would be valuable, go ahead and contact Oracle Support to log an enhancement request with product development. Be sure to say how the resource is going to help improve your daily tasks.

2. Consistent formatting:
It is possible to format the output of any resource consistently using these options:

-columns
This option is used to specify which columns should be shown in the output. Here is an example which shows the list of administrators and their account status:
$ emcli list -resource="Administrators" -columns="USER_NAME,REPOS_ACCOUNT_STATUS"

To get a list of columns in a resource use:
$ emcli list -resource="Administrators" -help

You can also specify the width of each column. For example, here the column width of user_type is set to 20 and department to 30.
$ emcli list -resource=Administrators -columns="USER_NAME,USER_TYPE:20,COST_CENTER,CONTACT,DEPARTMENT:30"

This is useful if your terminal is too small or you need to fine-tune a list of specific columns for quick use or improved readability.

-colsize
This option is used to resize column widths. Here is the same example as above, but using -colsize to define the width of user_type as 20 and department as 30:
$ emcli list -resource=Administrators -columns="USER_NAME,USER_TYPE,COST_CENTER,CONTACT,DEPARTMENT" -colsize="USER_TYPE:20,DEPARTMENT:30"

The existing standard EMCLI formatting options are also available in the list verb. They are:
-format="name:pretty" | -format="name:script" | -format="name:csv" | -noheader | -script

There are many uses depending on your needs. Have a look at the resources and the columns in each resource. Refer to the EMCLI book in the EM documentation for more information.

3. Search:
Using the -search option in the list verb makes it possible to search for a specific row in a specific column within a resource. This is similar to the SQL WHERE clause in SQL*Plus. The following operators are supported:
=
!=
>
<
>=
<=
like
is (must be followed by null or not null)

Here is an example which searches for all EM administrators in the marketing department located in the USA:
$ emcli list -resource="Administrators" -search="DEPARTMENT ='Marketing'" -search="LOCATION='USA'"

Here is another example which shows all the named credentials created since a specific date:
$ emcli list -resource=NamedCredentials -search="CredCreatedDate > '11-Nov-2013 12:37:20 PM'"

Note that the timestamp has to be in the format DD-MON-YYYY HH:MI:SS AM/PM.

Some resources need a bind variable to be passed to get output. A bind variable is created in the resource and then referenced in the command.
For example, this command will list all the default preferred credentials for the target type oracle_database:
$ emcli list -resource="PreferredCredentialsDefault" -bind="TargetType='oracle_database'" -colsize="SetName:15,TargetType:15"

You can provide multiple bind variables. To verify whether a column is searchable or requires a bind variable, use the -help option. Here is an example:
$ emcli list -resource="PreferredCredentialsDefault" -help

4. Secure access:
When the list verb collects data, it only displays content to which the administrator currently logged into emcli has access. For example, consider this use case: AdminA has access only to TargetA. AdminA logs into EM CLI. Executing the list verb to get the list of all targets will only show TargetA.

5. User-defined SQL:
Using the -sql option, user-defined SQL can be executed. The SQL provided in the -sql option is executed as the EM user MGMT_VIEW, which has read-only access to the EM published MGMT$ database views in the SYSMAN schema. To get the list of EM published MGMT$ database views, go to the Extensibility Programmer's Reference book in the EM documentation; there is a chapter about Using Management Repository Views. It's always recommended to reference the documentation for the supported MGMT$ database views. Suppose you are using a MGMT$ABC view which is not in that chapter. During an upgrade, since the view was not in the book and not supported, it is possible the view might undergo a change in its structure or data. Using a supported view ensures that your scripts using -sql will continue working after an upgrade. Here's an example:
$ emcli list -sql='select * from mgmt$target'

6. JSON output support:
JSON (JavaScript Object Notation) enables data to be displayed in a collection of name/value pairs.
There is a lot of reading material about JSON online for more information. As an example, we had a requirement where an EM administrator had many 11.2 databases in their test environment, and the developers had requested the administrator to change their lifecycle status from Test to Production. This meant the admin had to go to the EM "All Targets" page, identify the set of 11.2 databases, then go into each target database page and manually change the property to Production. It sounds easy, but this administrator had numerous targets, and the task is repeated for every release cycle. We told him there is an easier way to do this with a script, which he can reuse whenever anyone wants to change a set of targets to a different lifecycle status. Here is a Jython script which uses list and JSON to change the LifeCycle Status property value of all 11.2 database targets. If you are new to scripting and Jython, I would suggest visiting the basic chapters in any Jython tutorial; understanding Jython is important for writing the logic for your use case. If you already write scripts in Perl or shell, or know a programming language like Java, then you can easily understand the logic. Disclaimer: The scripts in this post are subject to the Oracle Terms of Use located here.
 1  from emcli import *
 2  search_list = ['PROPERTY_NAME=\'DBVersion\'','TARGET_TYPE=\'oracle_database\'','PROPERTY_VALUE LIKE \'11.2%\'']
 3  if len(sys.argv) == 2:
 4     print login(username=sys.argv[0])
 5     l_prop_val_to_set = sys.argv[1]
 6     l_targets = list(resource="TargetProperties", search=search_list, columns="TARGET_NAME,TARGET_TYPE,PROPERTY_NAME")
 7     for target in l_targets.out()['data']:
 8        t_pn = 'LifeCycle Status'
 9        print "INFO: Setting Property name " + t_pn + " to value " + l_prop_val_to_set + " for " + target['TARGET_NAME']
10        print set_target_property_value(property_records=target['TARGET_NAME']+":"+target['TARGET_TYPE']+":"+t_pn+":"+l_prop_val_to_set)
11  else:
12     print "\n ERROR: Property value argument is missing"
13     print "\n INFO: Format to run this file is filename.py <username> <Database Target LifeCycle Status Property Value>"
The benefit of this variable is you can reuse the script to change the property value from and to any other values.  6 Here the output of the list verb is stored in l_targets. In the list verb I am passing the resource as TargetProperties, search as the search_list variable and I only need these three columns – target_name, target_type and property_name. I don’t need the other columns for my task.  7 This is a for loop. The data in l_targets is available in JSON format. Using the for loop, each pair will now be available in the ‘target’ variable.  8 t_pn is the “LifeCycle Status” variable. If required, I can have this also as an input and then use my script to change any target property. In this example, I just wanted to change the “LifeCycle Status”.  9 This a message informing the user the script is setting the property value for dbxyz.  10 This line shows the set_target_property_value verb which sets the value using the property_records option. Once it is set for a target pair, it moves to the next one. In my example, I am just showing three dbs, but the real use is when you have 20 or 50 targets. The script is executed as:$ emcli @myScript.py subin Production The recommendation is to first test the scripts before running it on a production system. We tested on a small set of targets and optimizing the script for fewer lines of code and better messaging.For your quick reference, the resources available in Enterprise Manager with list verb are:$ emcli list -helpWatch this space for more blog posts using the list verb and EM CLI Scripting use cases. I hope you enjoyed reading this blog post and it has helped you gain more information about the list verb. Happy Scripting!!Disclaimer: The scripts in this post are subject to the Oracle Terms of Use located here. Stay Connected: Twitter | Facebook | YouTube | Linkedin | Newsletter mt=8">Download the Oracle Enterprise Manager 12c Mobile app


Product Info

EM12c Release 4: New EMCLI Verbs

Here are the new EM CLI verbs in Enterprise Manager 12c Release 4. These help you write new scripts or enhance your existing scripts for further automation.

Basic Administration Verbs
invoke_ws - Invoke EM web service.

ADM Verbs
associate_target_to_adm - Associate a target to an application data model.
export_adm - Export Application Data Model to a specified .xml file.
import_adm - Import Application Data Model from a specified .xml file.
list_adms - List the names, target names and application suites of existing Application Data Models.
verify_adm - Submit an application data model verify job for the target specified.

BI Publisher Reports Verbs
grant_bipublisher_roles - Grants access to the BI Publisher catalog and features.
revoke_bipublisher_roles - Revokes access to the BI Publisher catalog and features.

Blackout Verbs
create_rbk - Create a retroactive blackout.

CFW Verbs
cancel_cloud_service_requests - Cancel cloud service requests.
delete_cloud_service_instances - Delete cloud service instances.
delete_cloud_user_objects - Delete cloud user objects.
get_cloud_service_instances - Get information about cloud service instances.
get_cloud_service_requests - Get information about cloud requests.
get_cloud_user_objects - Get information about cloud user objects.

Chargeback Verbs
add_chargeback_entity - Adds the given entity to Chargeback.
assign_charge_plan - Assign a plan to a chargeback entity.
assign_cost_center - Assign a cost center to a chargeback entity.
create_charge_entity_type - Create charge entity type.
export_charge_plans - Exports charge plans metadata to file.
export_custom_charge_items - Exports user-defined charge items to a file.
import_charge_plans - Imports charge plans metadata from given file.
import_custom_charge_items - Imports user-defined charge items metadata from given file.
list_charge_plans - Gives a list of charge plans in Chargeback.
list_chargeback_entities - Gives a list of all the entities in Chargeback.
list_chargeback_entity_types - Gives a list of all the entity types that are supported in Chargeback.
list_cost_centers - Lists the cost centers in Chargeback.
remove_chargeback_entity - Removes the given entity from Chargeback.
unassign_charge_plan - Unassign the plan associated with a chargeback entity.
unassign_cost_center - Unassign the cost center associated with a chargeback entity.

Configuration/Association History Verbs
disable_config_history - Disable configuration history computation for a target type.
enable_config_history - Enable configuration history computation for a target type.
set_config_history_retention_period - Sets the amount of time for which Configuration History is retained.

Configuration Compare Verbs
config_compare - Submits the configuration comparison job.
get_config_templates - Gets all the comparison templates from the repository.

Compliance Verbs
fix_compliance_state - Fix compliance state by removing references to deleted targets.

Credential Verbs
update_credential_set

Data Subset Verbs
export_subset_definition - Exports the specified subset definition as an XML file at the specified directory path.
generate_subset - Generate a subset using the specified subset definition and target database.
import_subset_definition - Import a subset definition from the specified XML file.
import_subset_dump - Imports a dump file into the specified target database.
list_subset_definitions - Get the list of subset definitions, ADMs and target names.

Delete Pluggable Database Job Verbs
delete_pluggable_database - Delete a pluggable database.

Deployment Procedure Verbs
get_runtime_data - Get the runtime data of an execution.

Discover and Push to Agents Verbs
generate_discovery_input - Generate Discovery Input file for discovering Auto-Discovered Domains.
refresh_fa - Refresh Fusion Instance.
run_fa_diagnostics - Run Fusion Applications Diagnostics.

Fusion Middleware Provisioning Verbs
create_fmw_domain_profile - Create a Fusion Middleware Provisioning Profile from a WebLogic Domain.
create_fmw_home_profile - Create a Fusion Middleware Provisioning Profile from an Oracle Home.
create_inst_media_profile - Create a Fusion Middleware Provisioning Profile from Installation Media.

Incident Rules Verbs
add_target_to_rule_set - Add a target to an enterprise rule set.
delete_incident_record - Delete one or more open incidents.
remove_target_from_rule_set - Remove a target from an enterprise rule set.
Job Verbs
export_jobs - Export job details into an XML file.
import_jobs - Import job definitions from an XML file.
job_input_file - Supply details for a job verb in a property file.
resume_job - Resume a job or set of jobs.
suspend_job - Suspend a job or set of jobs.

Oracle Database as a Service Verbs
config_db_service_target - Configure DB Service target for OPC.

Privilege Delegation Settings Verbs
clear_default_privilege_delegation_setting - Clears the default privilege delegation setting for a given list of platforms.
set_default_privilege_delegation_setting - Sets the default privilege delegation setting for a given list of platforms.
test_privilege_delegation_setting - Tests a Privilege Delegation Setting on a host.

SSA Verbs
cleanup_dbaas_requests - Submit cleanup request for a failed request.
create_dbaas_quota - Create Database Quota for an SSA User Role.
create_service_template - Create a Service Template.
delete_dbaas_quota - Delete the Database Quota setup for an SSA User Role.
delete_service_template - Delete a given service template.
get_dbaas_quota - List the Database Quota setup for all SSA User Roles.
get_dbaas_request_settings - List the Database Request Settings.
get_service_template_detail - Get details of a given service template.
get_service_templates - Get the list of available service templates.
rename_service_template - Rename a given service template.
update_dbaas_quota - Update the Database Quota for an SSA User Role.
update_dbaas_request_settings - Update the Database Request Settings.
update_service_template - Update a given service template.
Saved Configurations Verbs
get_saved_configs - Gets the saved configurations from the repository.

Server Generated Alert Metric Verbs
validate_server_generated_alerts - Server Generated Alert Metric verb.

Services Verbs
edit_sl_rule - Edit the service level rule for the specified service.

Siebel Verbs
list_siebel_enterprises - List Siebel enterprises currently monitored in EM.
list_siebel_servers - List Siebel servers under a specified Siebel enterprise.
update_siebel - Update a Siebel enterprise or its underlying servers.

Site Guard Verbs
add_siteguard_aux_hosts - Associate new auxiliary hosts to the system.
configure_siteguard_lag - Configure apply lag and transport lag limit for databases.
delete_siteguard_aux_host - Delete auxiliary host associated with a site.
delete_siteguard_lag - Erases apply lag or transport lag limit for databases.
get_siteguard_aux_hosts - Get all auxiliary hosts associated with a site.
get_siteguard_health_checks - Shows schedule of health checks.
get_siteguard_lag - Shows apply lag or transport lag limit for databases.
schedule_siteguard_health_checks - Schedule health checks for an operation plan.
stop_siteguard_health_checks - Stops all future health check executions of an operation plan.
update_siteguard_lag - Updates apply lag and transport lag limit for databases.

Software Library Verbs
stage_swlib_entity_files - Stage files of an entity from the Software Library to a host target.

Target Data Verbs
create_assoc - Creates target associations.
delete_assoc - Deletes target associations.
list_allowed_pairs - Lists allowed association types for specified source and destination.
list_assoc - Lists associations between source and destination targets.
manage_agent_partnership - Manages partnership between agents. Used for explicitly assigning agent partnerships.

Trace Reports Verbs
generate_ui_trace_report - Generate and download a UI page performance report (to identify slow-rendering pages).

VI EMCLI Verbs
add_virtual_platform - Add Oracle Virtual Platform(s).
modify_virtual_platform - Modify Oracle Virtual Platform.

To get more details about each verb, execute:

$ emcli help <verb_name>

Example: $ emcli help list_assoc

New resources in the list verb

These are the new resources in the EM CLI list verb:

Certificates
  WLSCertificateDetails

Credential Resource Group
  PreferredCredentialsDefaultSystemScope - Preferred credentials (System Scope)
  PreferredCredentialsSystemScope - Target preferred credentials

Privilege Delegation Settings
  TargetPrivilegeDelegationSettingDetails - List privilege delegation setting details on a host
  TargetPrivilegeDelegationSetting - List privilege delegation settings on a host
  PrivilegeDelegationSettings - Lists all Privilege Delegation Settings
  PrivilegeDelegationSettingDetails - Lists details of Privilege Delegation Settings

To get more details about each resource, execute:

$ emcli list -resource="<resource_name>" -help

Example: $ emcli list -resource="PrivilegeDelegationSettings" -help

Deprecated Verbs:

Agent Administration Verbs
resecure_agent - Resecure an agent.

To get the complete list of verbs, execute:

$ emcli help

Update (6/11): Please note that the "Gold Agent Image Verbs" and "Agent Update Verbs" shown under "emcli help" are not supported yet.
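If you drive EM CLI from the operating-system shell rather than from inside the emcli interpreter, the help invocations above can be assembled programmatically before running them. A small sketch under my own naming (the helper functions are hypothetical; the commands are only built here, not executed):

```python
def emcli_help_command(verb_name=None):
    # argv list for "emcli help [<verb_name>]";
    # hand the list to subprocess.call() to actually run it.
    cmd = ['emcli', 'help']
    if verb_name:
        cmd.append(verb_name)
    return cmd

def emcli_resource_help_command(resource_name):
    # argv list for: emcli list -resource="<resource_name>" -help
    return ['emcli', 'list', '-resource=%s' % resource_name, '-help']

print(emcli_help_command('list_assoc'))
print(emcli_resource_help_command('PrivilegeDelegationSettings'))
```

Keeping the argv as a list (rather than one shell string) avoids quoting problems when resource names contain spaces.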


Best Practices

EM12c Release 4: Cloud Control to Major Tom...

With the latest release of Enterprise Manager 12c, Release 4, the EM development team has added new functionality to help the EM Administrator monitor the health of the EM infrastructure. Taking feedback delivered from customers directly and through customer advisory boards, some nice enhancements have been made to the "Manage Cloud Control" sections of the UI, commonly known in the EM community as "the MTM pages" (MTM stands for Monitor the Monitor). This part of the EM Cloud Control UI is viewed by many as mission control for EM Administrators. In this post we'll highlight some of the new information on display in these redesigned pages and explain how it can help EM administrators identify potential bottlenecks or issues with the EM infrastructure. The first page we'll take a look at is the newly designed Repository information page. You can get to this from the main Setup menu, through Manage Cloud Control, then Repository. Once this page loads you'll see the new layout, which includes three tabs containing more drill-down information. The Repository Tab The first tab, Repository, gives you a series of six panels or regions on screen that display key information the EM Administrator needs to review from time to time to ensure the infrastructure is in good health. Rather than go through every panel, let's call out a few and let you explore the others later on your own EM site. First, we have the Repository Details panel. At a glance the EM Administrator can see the current version of the EM repository database and, more critically, three important pieces of information relating to availability and reliability: Is the database in Archive Log mode? Is the database using Flashback? When was the last database backup taken?
In the test environment above the answers are not too worrying; however, Production environments should have at least Archivelog mode enabled, Flashback is a nice feature to enable prior to upgrades (for fast rollback), and all Production sites should have a backup. In this case the backup information in the control file indicates there have been no recorded backups. The next region of interest on this page shows key information about the Repository configuration, specifically the initialisation parameters (from the spfile). If you're storing your EM Repository in a Cluster Database, you can view the parameters on each individual instance using the Instance Name drop-down selector in the top right of the region. Additionally, you'll note there is now a check performed on the active configuration to ensure that you're using, at the very least, Oracle's minimum recommended values. Should the values in your EM Repository not meet these requirements, they will be flagged in this table with a red X for non-compliance. You can of course change these values within EM by selecting the Database target and modifying the parameters in the spfile (and optionally the run-time values, if the parameter allows dynamic changes). The last region to call out on this page before moving on is the new-look Repository Scheduler Job Status region. This region is an update of a similar region seen on previous releases of the MTM pages in Cloud Control, but some important new functionality that customers have requested has been added. First up: restarting Repository jobs. As you can see from the graphic, you can now optionally select a job (by selecting the row in the UI table element) and click the Restart Job button to take care of any jobs which have stopped or stalled for any reason. Previously this needed to be done at the command line using EMDIAG or through a PL/SQL package invocation. You can now take care of it directly from within the UI.
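The three availability checks called out above lend themselves to simple automation. As an illustrative sketch only (the dictionary keys and the seven-day backup threshold are my assumptions, not EM field names):

```python
def repository_health_warnings(details, max_backup_age_days=7):
    # 'details' mirrors the Repository Details panel; the keys are
    # illustrative, not actual EM repository column names.
    warnings = []
    if not details.get('archivelog_mode'):
        warnings.append('Database is not in ARCHIVELOG mode')
    if not details.get('flashback_on'):
        warnings.append('Flashback Database is not enabled')
    backup_age = details.get('last_backup_age_days')
    if backup_age is None:
        warnings.append('No recorded backup in the control file')
    elif backup_age > max_backup_age_days:
        warnings.append('Last backup is %d days old' % backup_age)
    return warnings

# A repository like the test environment described above: archiving on,
# but no Flashback and no recorded backup.
print(repository_health_warnings({'archivelog_mode': True,
                                  'flashback_on': False,
                                  'last_backup_age_days': None}))
```

A check like this could run from a cron job against data pulled out of the repository database, raising an alert whenever the list is non-empty.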
Next, you'll see that a feature has been added to allow the EM administrator to customise the run time of some of the background jobs that run in the Repository. We heard from some customers that ensuring these jobs don't clash with Production backups and similar activities is a key requirement. This new functionality allows you to select the pencil icon to edit the schedule time for these more resource-intensive background jobs and modify the schedule to avoid such clashes. Moving on, let's select the Metrics tab. The Metrics Tab There are some big changes here. This page contains new information regions that help the Administrator understand the direct impact the inbound metric flows are having on the EM Repository. Many customers have provided feedback that they are in the dark about the impact of adding new targets, large numbers of new hosts, or new target types into EM, and what this does to the Repository. This page helps the EM Administrator get to grips with it. Let's take a quick look at two regions on this page. First up, there's a bubble chart showing a comprehensive view of the top resource consumers of metric data over the last 30 days, charted as the number of rows loaded against the number of collections for the metric. The size of the bubble indicates relative volume. You can see from the example above that a quick glance shows Host metrics are the largest inbound flow into the repository when measured by number of rows. Close behind, though, are a large number of collections for Oracle WebLogic Server and Application Deployment. Taken together, the Host collections amount to around 0.7 MB of data, while the total information collected for WebLogic Server and Application Deployments is 0.38 MB and 0.37 MB respectively. If you want this breakdown of the volume of data collected, simply hover over a bubble in the chart and you'll get a floating tooltip showing the information.
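The ranking the bubble chart conveys can be mimicked on exported data. A sketch with invented sample figures loosely shaped like the example above (none of these numbers come from a real repository):

```python
def top_metric_consumers(samples, n=3, key='rows'):
    # Rank target types by rows loaded (or by 'collections'), largest
    # first: the same ordering the bubble chart conveys visually.
    return sorted(samples, key=lambda s: s[key], reverse=True)[:n]

# Invented sample points: one dict per target type.
samples = [
    {'target_type': 'host',                'rows': 700000, 'collections': 4200},
    {'target_type': 'weblogic_j2eeserver', 'rows': 380000, 'collections': 5100},
    {'target_type': 'j2ee_application',    'rows': 370000, 'collections': 4900},
    {'target_type': 'oracle_database',     'rows': 150000, 'collections': 1200},
]

top = top_metric_consumers(samples, n=2)
# By rows, 'host' leads; ranked by 'collections', WebLogic Server would lead instead.
```

Switching the key between 'rows' and 'collections' reproduces the two different readings of the chart discussed above.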
Clicking on any bubble in the chart takes you one level deeper into a drill-down of the metric collection. Doing this reveals the individual metric elements for these target types and again shows a representation of the relative cost, in terms of number of rows, number of collections, and storage cost of data, for each metric type. Looking at another panel on this page, we can see a different view of this data. It shows the Top N metrics (the drop-down allows you to select 10, 15 or 20) sorted by volume of data. In the case above, the largest metric collection by volume over the last 30 days is the information about OS Registered Software on a Host target. Taken together, these two regions provide a powerful tool for the EM Administrator to understand the potential impact of any new targets that have been discovered and promoted into management by EM12c. It's a great tool for identifying the cause of a sudden increase in Repository storage consumption or Redo log and Archive log generation. Using the information on this page, EM Administrators can take action to mitigate any load impact by deploying monitoring templates to the targets causing the most load, if appropriate. The last tab we'll look at on this page is the Schema tab. The Schema Tab Selecting this tab brings up a window onto the SYSMAN schema with a focus on space usage in the EM Repository. Understanding which tablespaces are growing, and at what rate, is essential information for the EM Administrator to stay on top of managing space allocations for the EM Repository so that it works as efficiently as possible and performs well for users. Not least because managing storage well ensures continued availability of EM for monitoring purposes. The first region to highlight here shows the trend of space usage for the tablespaces in the EM Repository over time.
You can see the upward trend here showing that storage in the EM Repository has been consumed steadily over the last few days. This is normal, as the EM instance used here is brand new, with Agents being added daily to bring targets into monitoring. If your Enterprise Manager configuration has reached a steady state over a period of time, where the number of new inbound targets is relatively small and the metric collection settings are fairly uniform and standardised (using Templates and Template Collections), you're likely to see a trend of space allocation that plateaus. The table below the trend chart shows the Top 20 Tables/Indexes, sorted in descending order of space consumed. You can switch the trend view chart and corresponding detail table by choosing a different tablespace in the EM Repository using the drop-down picker in the top right of this region. The last region to highlight on this page shows information about the purge policies in effect in the EM Repository. This information illustrates to EM Administrators the default purge policies for the different categories of information available in the EM Repository. Of course, it has also been a long-requested feature to be able to modify these default retention periods, and you can do that from this screen too. As there are interdependencies between some data elements, you can't modify retention policies on a feature-by-feature basis. Instead, retention policies bundle categories of information together in Groups, and policies are modified at the Group level. Understanding the impact of this really deserves a blog post all of its own, as modifying these settings can have a significant impact on both the EM Repository's storage footprint and its performance. For now, we're just highlighting the feature's visibility on these new pages.
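A growth trend like the one in this chart can be reduced to a rough projection. As a simple sketch (a naive average of daily deltas, assuming evenly spaced samples; the figures are invented, and real capacity planning deserves more care than this):

```python
def average_daily_growth(usage_mb):
    # usage_mb: daily space-usage samples for one tablespace, oldest first.
    deltas = [b - a for a, b in zip(usage_mb, usage_mb[1:])]
    return sum(deltas) / float(len(deltas))

def days_until_full(usage_mb, capacity_mb):
    # Linear projection from the average daily growth; returns None when
    # usage has plateaued or is shrinking (no projected fill date).
    growth = average_daily_growth(usage_mb)
    if growth <= 0:
        return None
    return (capacity_mb - usage_mb[-1]) / growth

usage = [100.0, 112.0, 121.0, 135.0, 148.0]  # invented daily samples, in MB
# Average growth is (148 - 100) / 4 = 12 MB/day, so a 200 MB tablespace
# with 52 MB of headroom would fill in roughly 4.3 days.
```

A plateauing configuration, as described above for steady-state sites, would show near-zero growth and no projected fill date.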
As a user of EM12c, we hope the new features you see here address some of the feedback that's been given on these pages over the past few releases. We'll look out for any comments or feedback you have on these pages! Update: In response to Phil's comment/query below: Phil, you can indeed view the impact by target. By selecting a large metric 'bubble' and clicking it, you drill down into the default view, which shows the detailed impact of the metrics within a metric grouping. You can instead select the radio button to switch this view to show the impact per target (Top 25). By way of illustration, I've posted an example screenshot below; you can see the bubble chart changes to show the collections and rows collected by target instead. Hope this helps, Phil.
