
Best practices, tips, and news for managing Oracle’s Engineered Systems, both on-premises and in the cloud.

Recent Posts

Discovering Exadata Cloud in Enterprise Manager: Quickly and Easily

Running a database in the cloud is no longer a new concept. Oracle now offers a couple of cloud options built on the Exadata Database Machine: Oracle Exadata Cloud Service and Oracle Exadata Cloud Machine (collectively referred to as Exadata Cloud). Moving a database into the cloud, however, does not remove the requirement to monitor that database properly.

If you have depended on Oracle Enterprise Manager to monitor your database environments and Exadata Database Machines in the past, you will be happy to know that a database running as part of Exadata Cloud can still be discovered in your on-premises Enterprise Manager environment and monitored along with all of your other enterprise databases. The discovery process is documented in a whitepaper on the Maximum Availability Architecture Enterprise Manager page, titled Oracle Enterprise Manager for Exadata Cloud - Implementation, Management, and Monitoring Best Practices. This whitepaper guides you step by step through discovering your Exadata Cloud database to provide the necessary end-to-end monitoring solution.


What everybody needs to know about monitoring and managing Exadata with Oracle Enterprise Manager

With the recent release of the new Exadata X7, it is a good time to point out an important MOS note that provides details for properly monitoring and managing an Exadata environment via Oracle Enterprise Manager: Exadata Storage Software Versions Supported by the Oracle Enterprise Manager Exadata Plug-in (Doc ID 1626579.1). With the new components and software versions introduced for the Exadata servers (including Exadata storage software 18.1, IB Switch firmware 2.2.0, etc.), it is very important to make sure you are running the correct version of Enterprise Manager as well as the correct versions of the Exadata plug-in and the Systems Infrastructure plug-in. Doing so ensures that you can properly discover, monitor, and manage your Exadata servers.

The document provides important notes such as the following: Enterprise Manager 13.2.0.0.0 with Exadata plug-in version 13.2.2.0.0, Systems Infrastructure plug-in version 13.2.3.0.0, and the use of EM 13c target types are required in order to discover, monitor, and manage Exadata X7 and earlier Exadata hardware running any of the following software versions:

- Exadata storage software 18.1.0.0.0 or above
- Exadata storage software 12.2.1.1.0 or above
- IB Switch firmware 2.2.0 or above

This MOS document is a must for anybody using Oracle Enterprise Manager to manage their Exadata servers. Be sure to bookmark it and refer to it often for any updated requirements/dependencies!
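As a quick illustration of the kind of check this note implies, the sketch below compares an installed plug-in version against the minimum required for X7 support. Only the required version comes from the note above; the "installed" value is a made-up placeholder, not something queried from a live EM.

```shell
# Hedged sketch: version-gate check using sort -V (GNU coreutils).
required="13.2.2.0.0"     # minimum Exadata plug-in version for X7 (Doc ID 1626579.1)
installed="13.2.1.0.0"    # placeholder: pretend this is what your EM reports
oldest=$(printf '%s\n%s\n' "$required" "$installed" | sort -V | head -n 1)
if [ "$oldest" = "$required" ]; then
  msg="Exadata plug-in $installed meets the $required minimum"
else
  msg="Exadata plug-in $installed is below the $required minimum: upgrade before discovering X7"
fi
echo "$msg"
```

The same comparison works for the Systems Infrastructure plug-in or the EM version itself; in a real environment you would feed it the versions reported by your own EM installation.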


Availability, Reliability & MAA

July 2017 EM Recommended Patch List

The critical patch list for Oracle products was released on Tuesday, July 18th. As a result, a list of recommended patches has been compiled for both EM 13.2 and EM 12.1.0.5. This list contains recommended patches for the EM infrastructure, including the OMS, WLS, plug-ins, repository, and agent. These patches are also included in the Exadata Quarterly Full Stack Download patch.

For more details on these EM recommended patches for Exadata monitoring, refer to the My Oracle Support note titled Patch Requirements for Setting up Monitoring and Administration for Exadata [1323298.1]. This note was created to provide a list of the recommended patches for monitoring an Exadata machine, but it is applicable to any EM environment. Just keep in mind that your EM environment may require other agent plug-in bundle patches depending on what target types are managed in it.

The note contains not only a recommended list of patches for both EM 13.2.0.0 and 12.1.0.5 but also a link to the My Oracle Support note that provides the steps for applying these patches: Applying Enterprise Manager 12c Recommended Patches [1664074.1]. The patch apply process in that note provides maximum availability for the EM environment by structuring the steps so that the primary OMS server stays up as long as possible.


Availability, Reliability & MAA

Don't get bit by a newly discovered bug in EM 13c DR environments using alias hostnames

Do you have a 13c EM environment that is set up for disaster recovery using alias hostnames for the OMS servers? If so, you need to know about an issue that affects tasks such as provisioning and staging of directives. The issue occurs because certain tasks in EM require the target agent to communicate back to the central agent running on the OMS server. To do this, the target agent must be able to resolve the hostname of the central agent from which it is requesting a file transfer. If the OMS server is installed under an alias name that is configured via the /etc/hosts file on that server, then the target agent will not be able to resolve the hostname of the server where that central agent is running, and the file transfer will fail. This stops any provisioning tasks that use this agent-to-agent file transfer method, causing them to fail.

There is, however, an identified solution to this problem, documented in the following My Oracle Support note: Agent to agent peer file transfer in provisioning, patching fails contacting central agent in EM HA/DR environment with alias hostnames (Doc ID 2254095.1). If your EM environment is configured for DR in this way, make sure you read through this MOS note to see if you are impacted and, if so, follow the simple steps to prevent these agent file transfer issues.
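To see why the alias trips things up, here is a tiny self-contained sketch (hypothetical hostnames and throwaway files standing in for /etc/hosts, not a real EM configuration): the alias resolves on the OMS server, whose local hosts file defines it, but the lookup comes up empty on a target agent host that has no such entry.

```shell
# Minimal illustration: the alias exists only in the OMS server's local hosts file.
oms_hosts=$(mktemp)   # stands in for /etc/hosts on the OMS server
agent_hosts=$(mktemp) # stands in for /etc/hosts on a target agent host (no entry)
printf '10.0.0.5 oms-alias.example.com oms-alias\n' > "$oms_hosts"
# lookup NAME in FILE: print the address if any hosts entry lists that name
lookup() { awk -v h="$2" '{for (i=2; i<=NF; i++) if ($i == h) {print $1; exit}}' "$1"; }
oms_view=$(lookup "$oms_hosts" oms-alias.example.com)
agent_view=$(lookup "$agent_hosts" oms-alias.example.com)
echo "OMS server resolves alias to:   ${oms_view:-<nothing>}"
echo "Target agent resolves alias to: ${agent_view:-<nothing>}"  # empty -> transfer fails
rm -f "$oms_hosts" "$agent_hosts"
```

Resolution from the agent side is exactly what the agent-to-agent transfer needs; the MOS note above describes the supported way to address it.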


General

January 2017 EM Recommended Patch List

The critical patch list for Oracle products was released on Tuesday, January 17th. As a result, a list of recommended patches has been compiled for both EM 13.2 and EM 12.1.0.5. This list contains recommended patches for the EM infrastructure, including the OMS, WLS, plug-ins, repository, and agent. These patches are also included in the Exadata Quarterly Full Stack Download patch.

For more details on these EM recommended patches for Exadata monitoring, refer to the My Oracle Support note titled Patch Requirements for Setting up Monitoring and Administration for Exadata [1323298.1]. This note was created to provide a list of the recommended patches for monitoring an Exadata machine, but it is applicable to any EM environment. Just keep in mind that your EM environment may require other agent plug-in bundle patches depending on what target types are managed in it.

The note contains not only a recommended list of patches for both EM 13.2.0.0 and 12.1.0.5 but also a link to the My Oracle Support note that provides the steps for applying these patches: Applying Enterprise Manager 12c Recommended Patches [1664074.1]. The patch apply process in that note provides maximum availability for the EM environment by structuring the steps so that the primary OMS server stays up as long as possible.


General

October 2016 EM Recommended Patch List

The critical patch list for Oracle products was released on Tuesday, October 18th. Enterprise Manager 13.2 was also recently released, so a list of recommended patches has been compiled for both EM 13.2 and EM 12.1.0.5. This list contains recommended patches for the EM infrastructure, including the OMS, WLS, plug-ins, repository, and agent. These patches are also included in the Exadata Quarterly Full Stack Download patch. Keep in mind that since EM 13.2 was just recently released, there are very few patches available for this version.

For more details on these EM recommended patches for Exadata monitoring, refer to the My Oracle Support note titled Patch Requirements for Setting up Monitoring and Administration for Exadata [1323298.1]. This note was created to provide a list of the recommended patches for monitoring an Exadata machine, but it is applicable to any EM environment. Just keep in mind that your EM environment may require other agent plug-in bundle patches depending on what target types are managed in it.

The note contains not only a recommended list of patches for both EM 13.2.0.0 and 12.1.0.5 but also a link to the My Oracle Support note that provides the steps for applying these patches: Applying Enterprise Manager 12c Recommended Patches [1664074.1]. The patch apply process in that note provides maximum availability for the EM environment by structuring the steps so that the primary OMS server stays up as long as possible.


Availability, Reliability & MAA

Oracle Enterprise Manager 13c and Always-On Monitoring - Part 2

As discussed in an earlier blog post, Always-On Monitoring (AOM) provides the capability of monitoring all (or a specific list of) targets, using the same EM agents already deployed to those targets, during complete downtime of the EM environment. It is especially useful for monitoring critical targets during planned EM downtime such as patching and/or upgrades. In the previous blog post we reviewed the high-level steps for the setup and configuration of the AOM tool. In this blog post, we will discuss the following topics: configuring High Availability, configuring Disaster Recovery, and troubleshooting AOM.

As with any other application, Always-On Monitoring should be configured for High Availability (HA) and Disaster Recovery (DR) according to the Maximum Availability Architecture standards. This ensures that the identified list of targets that must be monitored during Enterprise Manager downtime will continue to be monitored even if a system failure occurs on one of the AOM instances.

High Availability

Setting up High Availability for AOM is as simple as setting up additional AOM instances. This not only helps ensure availability of monitoring but also provides load sharing. Incoming alerts from agents can be directed to another AOM instance if one goes down, via a server load balancer (SLB). To keep events from a target in the proper order, a single target sends its messages to one AOM instance only. If that AOM instance goes down, the application reassigns the queues previously managed by the down instance to another instance. Not only are the incoming agent alerts shared by multiple AOM instances, but the work of sending out notifications can also be shared among them. Adding another AOM instance is as simple as running the following command from the new server: emsca add_ems.
Below is a sample of the output from this command:

% $AOM_HOME/scripts/emsca add_ems
Oracle Enterprise Manager Cloud Control 13c Release 1
Copyright (c) 1996, 2015 Oracle Corporation. All rights reserved.
---------------------------------------------------------------
Always-On Monitoring Repository Connection String : (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=aomRepo.domain)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=aom)))
Always-On Monitoring Repository Username [ems] :
Always-On Monitoring Repository Password [ems] :
Enterprise Manager Repository Connection String : (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=emServer.domain)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=emdb)))
Enterprise Manager Repository Username : sysman
Enterprise Manager Repository Password :
Connecting to Always-On Monitoring repository.
Enter Enterprise Manager Middleware Home : /u01/app/oracle/MWare
Registering Always-On Monitoring instance
Always-On Monitoring Upload URL: https://aomserver.domain:8081/upload
Oracle PKI Tool : Version 12.1.3.0.0
Copyright (c) 2004, 2014, Oracle and/or its affiliates. All rights reserved.
Certificate was added to keystore
removed the key from the repository

Disaster Recovery

For those familiar with the Disaster Recovery setup for an Enterprise Manager 13c environment, the AOM Disaster Recovery setup is pretty much the same implementation. It is an active/passive configuration: if the primary AOM site goes down, the virtual IP fails over to a standby node. Once the AOM instance is started, it runs exactly the same on the DR site as it did on the primary site. To support DR, the AOM repository needs to be available on the DR site, and the AOM instance file systems need to be replicated to the DR site.

Troubleshooting

The following commands are helpful in troubleshooting AOM.

emsctl status

This command returns the status of the AOM service.
Below is an example:

% $AOM_HOME/scripts/emsctl status
Oracle Enterprise Manager Cloud Control 13c Release 1
Copyright (c) 1996, 2015 Oracle Corporation. All rights reserved.
------------------------------------------------------------------
Always-On Monitoring Version : 13.1.0.0.0
Always-On Monitoring Home: /u01/app/oracle/aom
Started At: January 13, 2016 4:11:01 PM PST
Last Repository Sync: February 2, 2016 1:41:17 PM PST
Upload URL: https://aomserver.domain:8081/upload
Always-On Monitoring Process ID: 15399
Always-On Monitoring Repository: (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=aomRepo.domain)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=aom)))
Enterprise Manager Repository: (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=emServer.domain)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=emdb)))
Notifications Enabled: false
Always-On Monitoring is up.

emsctl ping

Another way to check the status of the AOM service is to use the ping option, as seen below:

Oracle Enterprise Manager Cloud Control 13c Release 1
Copyright (c) 1996, 2015 Oracle Corporation. All rights reserved.
------------------------------------------------------------------
Always-On Monitoring is running

emsctl list_agents

This command provides a count and list of the agent URLs that have communicated with the AOM service in the past hour.

emsctl getstats

This command displays AOM performance statistics.

Log files are also helpful during AOM troubleshooting. AOM records events that occur during operation in log files located under the $AOM_HOME/logs directory, listed below. The .log files are rotated once the primary file reaches the maximum size of 10 MB. Two copies of the files are kept: the primary and one previous (rotated) file.
emsca logs:
- emsca.err (errors only)
- emsca.log.0 (rotating log file that contains all output, including errors)

ems logs:
- ems.err (errors only)
- ems.log.0 (rotating log file that contains all output, including errors)

There are different log levels that determine the type of information recorded in these log files. The levels are INFO, DEBUG, WARN (the default setting), and ERROR. The log level can be changed by adding the logLevel property to the $AOM_HOME/conf/emsconfig.properties file. The AOM instance must be bounced for the change to take effect, using the commands below:

emsctl stop
emsctl start

Below is a recommended list of steps to take when troubleshooting AOM:

1. Check the log files.
2. Verify that the upload URL is set in Enterprise Manager, using this command: emctl get property -name "oracle.sysman.core.events.ems.emsURL"
3. Verify that the URL is set on the Management Agent. To do this, verify the proper setting for EMS_URL in the $AGENT_HOME/sysman/config/emd.properties file.
4. Verify that Always-On Monitoring is running and notifications are enabled. This can be checked by running the emsctl status command.
5. Verify that downtime contacts have been specified in Enterprise Manager and Always-On Monitoring. Verifying this is a manual process, depending on how the downtime contacts were configured. If a global downtime contact was specified, verify the setting using: emcli get_oms_config_property -property_name='oracle.sysman.core.events.ems.downtimeContact' If per-target downtime contacts were specified, verify them by looking at the “Downtime Contact” target property for each of the targets.

To summarize, Always-On Monitoring offers a solution to the long-time problem of who monitors our critical targets when EM is down. To make sure you get the most from this solution, at a minimum it is important to set up AOM with High Availability.
If a DR/standby site is set up for the EM environment, it is an MAA best practice to also set up a DR site for the AOM application. For more detailed information on Always-On Monitoring, refer to the Always-On Monitoring chapter in the Enterprise Manager Cloud Control Administrator’s Guide.
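The queue-reassignment behavior described under High Availability can be sketched with a toy model. This is purely illustrative: the instance and target names are invented, and AOM's real assignment policy is internal to the product; a simple "hand orphaned queues to a surviving instance" rule stands in for it here.

```shell
# Toy model: each target's event queue is owned by exactly one AOM instance
# (so events from that target stay ordered). When an instance dies, its
# queues are reassigned to a survivor. "target:instance" pairs, POSIX sh.
assignments="db1:aom1 db2:aom2 db3:aom2"   # invented initial ownership
down="aom2"                                # the instance that just failed
survivor="aom1"                            # stand-in reassignment policy
new=""
for pair in $assignments; do
  t=${pair%%:*}
  inst=${pair##*:}
  [ "$inst" = "$down" ] && inst="$survivor"   # reassign orphaned queue
  new="$new $t:$inst"
  echo "queue for $t -> $inst"
done
```

The point of the model is the invariant, not the policy: before and after failover, every target maps to exactly one live instance, which is what preserves event ordering per target.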


General

July 2016 EM Recommended Patch List

The critical patch list for Oracle products was released on Tuesday, July 19th. As a result, a list of recommended patches has been compiled for EM 12c. This list contains recommended patches for the EM infrastructure, including the OMS, WLS, plug-ins, repository, and agent. These patches are also included in the Exadata Quarterly Full Stack Download patch.

For more details and to review these EM recommended patches for Exadata monitoring, refer to the My Oracle Support note titled Patch Requirements for Setting up Monitoring and Administration for Exadata [1323298.1]. This note was created to provide a list of the recommended patches for monitoring an Exadata machine, but it is applicable to any EM environment. Just keep in mind that your EM environment may require other agent plug-in bundle patches depending on what target types are managed in it.

The note contains not only a recommended list of patches for both EM 12.1.0.5 and 12.1.0.4 but also a link to the My Oracle Support note that provides the steps for applying these patches: Applying Enterprise Manager 12c Recommended Patches [1664074.1]. The patch apply process in that note provides maximum availability for the EM environment by structuring the steps so that the primary OMS server stays up as long as possible. These MOS notes will be updated for EM 13 starting with the release of version 13.2.

Also, it is important to note that the final error correction patch for EM 12.1.0.4 was April 2016, so the patch list for this version in the notes only contains updates for the following components:

- OMS: update for the bundle patches and any identified one-off patches
- WLS: update for the latest PSU patch
- RDBMS: update for the latest PSU patches
- Agent: update for the bundle patches


Availability, Reliability & MAA

Oracle Enterprise Manager 13c and Always-On Monitoring

One of the common concerns of the administrators of an Oracle Enterprise Manager (EM) environment is that when you have to take EM down for planned maintenance, you are blind to the status of some of your company’s most critical assets. Well, as of EM 13c, Oracle now has a solution for you! Introducing Always-On Monitoring, or AOM for short.

Always-On Monitoring provides the capability of monitoring all (or a specific list of) targets, using the same EM agents already deployed to those targets, during complete downtime of the EM environment. It is especially useful for monitoring critical targets during planned EM downtime such as patching and/or upgrades. AOM synchronizes with EM for data such as notification contacts and is able to take the alerts from the EM agents and send email notifications to the identified downtime contacts. AOM can be configured and left running all the time so that it is ready to “take over” alert notification for critical targets once notifications have been enabled. This blog contains the first of a couple of posts on AOM, starting with the setup and configuration of the tool. The setup/configuration of AOM consists of the following steps:

1. AOM Installation - The AOM application is included with the EM 13c software distribution under the sysman/ems directory and is also available via the Self-Update function in EM. Installing the application is as simple as unzipping the zip file in the location chosen for the AOM install. An AOM installation consists of a database to hold the AOM repository and a server to run the AOM instance. Chapter 12 of the Oracle Enterprise Manager Cloud Control Administrator’s Guide contains the detailed steps required for installing AOM. This server must have Java version 1.7 installed. At this time, the AOM instance/application must run on the EM OMS, but the AOM repository database can be created on any Oracle database server.
With the EM 13.2 release, the AOM instance/application will be able to be installed on any host within the monitored environment.

2. Configure AOM Communication to EM - The configuration is pretty simple and is all handled via a script called emsca, found under the AOM installation home in the scripts directory. (Fun fact: the name of this script and the command-line utility start with ems rather than aom due to a pre-release name change of the product.) This script handles all of the necessary steps to configure AOM, including the creation of the user and schema for the AOM repository. Once this configuration is complete, make sure that EM has an email server configured. At this time, AOM can only send email messages for the alerts, and it uses the email server configuration that it gets from EM during synchronization. See "Configuring Email Servers in Enterprise Manager" for more details on how to set up email servers in EM.

3. Configuration of downtime contacts - The downtime contacts are those who will receive the email notifications in the event of an alert on the identified targets. Setting up the downtime contacts is done in your EM environment, can be done in a couple of different ways, and is documented in the section “Configuring Downtime Contacts in Enterprise Manager”.

4. Synchronizing AOM and EM - This synchronization copies the notification configuration and downtime contacts for the EM targets over to the AOM application. It should be done before starting AOM for the first time. It is a simple command using the AOM command-line utility called emsctl. Here is a sample of the command:

% $AOM_HOME/scripts/emsctl sync

5. Note that AOM runs a synchronization job every 24 hours by default, although this is configurable. It is a good practice to run a manual sync after any changes to downtime contacts.

6. Configure EM for AOM - The final step in the configuration process is to tell EM to include the upload URL for the AOM service.
Once this is done, EM sends this URL to all existing agents and any future agents. This makes it a one-time step that is then handled for you as your number of agents changes in the future. To get the AOM upload URL, issue the command below (the upload URL appears in the output):

% $AOM_HOME/scripts/emsctl status
Oracle Enterprise Manager Cloud Control 13c Release 1
Copyright (c) 1996, 2015 Oracle Corporation. All rights reserved.
------------------------------------------------------------------
Always-On Monitoring Version: 13.1.0.0.0
Always-On Monitoring Home: /u01/app/oracle/aom
Started At: January 13, 2016 4:11:01 PM PST
Last Repository Sync: February 2, 2016 1:41:17 PM PST
Upload URL: https://aomserver.domain:8081/upload
Always-On Monitoring Process ID: 15399
Always-On Monitoring Repository: (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=aomRepo.domain)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=aom)))
Enterprise Manager Repository: (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=emServer.domain)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=emdb)))
Notifications Enabled: false
Total Downtime Contacts Configured: 29

Then issue the following emctl command to set the property in EM (note this is emctl, not emsctl), substituting the correct upload URL and sysman password for your environment:

% emctl set property -name "oracle.sysman.core.events.ems.emsURL" -value "https://aomserver.domain:8081/upload" -sysman_pwd sysman

7. Starting AOM and Enabling Notifications - Starting AOM is done via the emsctl command with the start option:

% $AOM_HOME/scripts/emsctl start

Even though the setup of AOM is complete and it has been started, you will not receive any alerts when EM is down unless you enable notifications.
As you probably guessed by now, this is done via the emsctl command:

% $AOM_HOME/scripts/emsctl enable_notification

A sync with EM is done each time you run enable_notification, unless you use the option -nosync. To disable notifications, use the same sort of command with the disable_notification option:

% $AOM_HOME/scripts/emsctl disable_notification

It is a good practice to leave AOM started but keep notifications disabled, enabling them just before you shut down EM for planned maintenance. REMEMBER that you will get more notifications than you probably want if you leave this enabled all of the time, because AOM has no knowledge of the notifications that EM is already sending.

To summarize, Always-On Monitoring offers a solution to the long-time problem of who monitors our critical targets when EM is down. The next blog will provide the details on how to configure AOM for High Availability and Disaster Recovery.
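Putting the steps together, a planned maintenance window might look like the dry-run sketch below. The run helper only executes a command if it is actually installed, so on a machine without AOM the sequence simply prints itself; the paths and ordering are a suggestion based on this post, not an Oracle-documented procedure.

```shell
# Dry-run sketch of a planned EM maintenance window with AOM covering alerts.
AOM_HOME=${AOM_HOME:-/u01/app/oracle/aom}   # assumed install location (see post)
run() { echo "+ $*"; command -v "${1##*/}" >/dev/null 2>&1 && "$@"; return 0; }
run "$AOM_HOME/scripts/emsctl" sync                  # refresh contacts from EM first
run "$AOM_HOME/scripts/emsctl" enable_notification   # AOM takes over alert email
run emctl stop oms -all                              # EM goes down for maintenance
# ... apply EM patches / perform the upgrade here ...
run emctl start oms
run "$AOM_HOME/scripts/emsctl" disable_notification  # hand alerting back to EM
```

Note the order: enable AOM notifications before stopping the OMS, and disable them again after EM is back, so you avoid the duplicate-notification problem described above.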


General

April 2016 EM Recommended Patch List

As part of the recent April 2016 PSU patch release for Enterprise Manager (EM), a list of recommended patches has been compiled for EM 12c. This list contains recommended patches for the EM infrastructure, including the OMS, WLS, plug-ins, repository, and agent. These patches are also included in the Exadata Quarterly Full Stack Download patch.

For more details and to review these EM recommended patches for Exadata monitoring, refer to the My Oracle Support note titled Patch Requirements for Setting up Monitoring and Administration for Exadata [1323298.1]. This note was created to provide a list of the recommended patches for monitoring an Exadata machine, but it is applicable to any EM environment. Just keep in mind that your EM environment may require other agent plug-in bundle patches depending on what target types are managed in it.

The note contains not only a recommended list of patches for both EM 12.1.0.5 and 12.1.0.4 but also a link to the My Oracle Support note that provides the steps for applying these patches: Applying Enterprise Manager 12c Recommended Patches [1664074.1]. The patch apply process in that note provides maximum availability for the EM environment by structuring the steps so that the primary OMS server stays up as long as possible.


General

January 2016 EM Recommended Patch List

As part of the recent January 2016 PSU patch release for Enterprise Manager (EM), a list of recommended patches has been compiled for EM 12c. This list contains recommended patches for the EM infrastructure, including the OMS, WLS, plug-ins, repository, and agent. These patches are also included in the Exadata Quarterly Full Stack Download patch.

For more details and to review these EM recommended patches for Exadata monitoring, refer to the My Oracle Support note titled Patch Requirements for Setting up Monitoring and Administration for Exadata [1323298.1]. This note was created to provide a list of the recommended patches for monitoring an Exadata machine, but it is applicable to any EM environment. Just keep in mind that your EM environment may require other agent plug-in bundle patches depending on what target types are managed in it.

The note contains not only a recommended list of patches for both EM 12.1.0.5 and 12.1.0.4 but also a link to the My Oracle Support note that provides the steps for applying these patches: Applying Enterprise Manager 12c Recommended Patches [1664074.1]. The patch apply process in that note provides maximum availability for the EM environment by structuring the steps so that the primary OMS server stays up as long as possible.


Architecture & Planning

Maintaining Trace and Log Files for an Enterprise Manager 12c Environment

In an EM 12c environment, the majority of the log and trace files for the OMS, WebLogic Server, HTTP Server, agent, and database are maintained automatically. There are a few exceptions, however, for which the size/retention of the files must be maintained manually. Also, for most of the automated methods, details such as file size and number of files to retain can be controlled. Understanding this helps ensure that log and trace files are retained for the expected period of time and that the amount of disk space consumed by these types of files stays in check. Below are some helpful tips for managing these files, as well as pointers to MOS notes and Oracle documentation for further information.

Management Agent Log and Trace Files

Agent logs are segmented and have a limited overall size, and hence need not be deleted or managed manually. The agent log files are located in the $AGENT_INST/sysman/log directory. The log files are segmented (archived) into a configurable number of segments of a configurable size. These settings are controlled by properties in emd.properties but should be modified via the EMCTL utility. The latest segment is always filename.log and the oldest is filename.log.X, where X is the highest number. For more details on the Management Agent log and trace files, refer to the Oracle Enterprise Manager Cloud Control Administrator’s Guide. For details on these emctl setproperty commands, refer to MOS note 12c Cloud Control Agent: How to Configure Logging for the 12.1.0.1 Management Agent? [1363757.1]

Key Considerations:

Control the size of the logs: The size of each of the individual agent log files is controlled by the property <handler>.totalSize. This property specifies the total size in MB for the file segments. Once a file reaches this size, it is archived to a new file. The default setting is 100 MB.
Here is an example of finding the value for the gcagent.log file in 12c:

emctl getproperty agent -name "Logger.log.totalSize"

For more details on setting the maximum size of the agent log files, refer to the documentation and MOS links above.

Control when the agent writes to log files: The amount of data that an agent will log is controlled by the property <handler>.level, where the possible levels are DEBUG, INFO, WARN, ERROR, and FATAL. By default, this is set to INFO, which means that log messages with the level of INFO and above (WARN, ERROR, and FATAL) will be logged. The recommendation is to keep the default setting unless debugging an agent issue; in that case, set the logging level to DEBUG for specific modules only rather than changing the root logging level. For more details on setting the agent logging level, refer to the documentation and MOS links above.

Control when to purge log files: The number of archived log segments that will be maintained is specified by the property <handler>.segment.count. The default setting varies by log file. Here is an example of finding the value for the gcagent.log file in 12c:

emctl getproperty agent -name "Logger.log.segment.count"

For more details on setting the number of log files to retain, refer to the documentation and MOS links above.

OMS Trace and Log Files

The maintenance of trace and log files on the OMS servers can be handled automatically via properties set for the different components.

Oracle Management Service (OMS) Log and Trace Files

The OMS log and trace files are stored under the middleware_home/gc_inst/em/OMS_NAME/sysman/log directory. These files are segmented (archived) into a configurable number of segments of a configurable size, so they do not need to be manually deleted or managed. These settings are controlled by properties in emomslogging.properties but should be modified via the EMCTL utility.
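As a sketch of the equivalent checks on the OMS side, using the log4j property names referenced in MOS note 1448308.1 (the byte value below is an illustrative assumption, not a recommendation):

```shell
# Check the current maximum size (in bytes) of the OMS emoms.log segments
emctl get property -name "log4j.appender.emlogAppender.MaxFileSize"

# Raise the log segment size to ~20 MB (illustrative value)
emctl set property -name "log4j.appender.emlogAppender.MaxFileSize" -value 20000000
```
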
For more details on the OMS log and trace files, refer to the Oracle Enterprise Manager Cloud Control Administrator's Guide. For details on the location of the OMS log and trace files, as well as the commands to modify the logging and tracing parameters, refer to MOS note EM 12c: Steps to Locate and Manage the Various Logs/Trace files in a 12c OMS Installation [1448308.1]

Key Considerations:

Control the size of the logs and trace files: The size of the individual log and trace files is controlled by the properties log4j.appender.emlogAppender.MaxFileSize and log4j.appender.emtrcAppender.MaxFileSize respectively. These properties specify the total size in bytes for the file segments. Once a file reaches this size, it is archived and a new file is started. Here is an example of finding the value for the emoms.log file in 12c:

emctl get property -name "log4j.appender.emlogAppender.MaxFileSize"

For more details on setting the maximum size of the OMS log files, refer to the documentation and MOS links above.

Control when the OMS writes to log files: The amount of data that the OMS will log is controlled by setting a logging level, where the possible levels are DEBUG, INFO, WARN, ERROR, and FATAL. The specific category or logging module name must also be specified. The recommendation is to keep the default setting unless debugging an issue; in that case, set the logging level to DEBUG for specific modules only rather than changing the root logging level, which can cause a large volume of messages to be written to the trace files and potentially slow down the system. For more details on setting the OMS logging level, refer to the documentation and MOS links above.

Control when to purge log files: The number of archived log and trace files that will be maintained is specified by the properties log4j.appender.emlogAppender.MaxBackupIndex and log4j.appender.emtrcAppender.MaxBackupIndex respectively.
Here is an example of finding the value for the emoms.log file in 12c:

emctl get property -name "log4j.appender.emlogAppender.MaxBackupIndex"

For more details on setting the maximum number of OMS log and trace files to retain, refer to the documentation and MOS links above.

Oracle WebLogic Server and HTTP Server Log Files

The different WebLogic Server components generate their own log files. These files are stored under different sub-directories in the middleware_home/gc_inst location. For the majority of these log files, the size and number of files can be maintained automatically: the log files are segmented (archived) into a new segment of a configurable size based on a configurable rotation type. These settings can be set in the WebLogic Server Administration Console or via the WLST utility. Starting with the 12.1.0.2 OMS release, the log file rotation and retention options are set out-of-the-box for the GCDomain.log*, EMGC_ADMINSERVER.log* and access.log* files. For more details on the WebLogic and HTTP Server log files, refer to the Oracle Fusion Middleware Administrator's Guide. For the steps required to modify the rotation file size and retention limit for the GCDomain.log*, EMGC_ADMINSERVER.log* and access.log* files, refer to MOS note 12 Cloud Control: How to Enable Log Rotation Policy and Automatically Delete Older GCDomain.log, EMGC_ADMINSERVER.log and access.log Files? [1450535.1]

Key Considerations:

Control the size of the log files: The size of the individual log files can be based on file size or time. If the Rotation type "By Size" is selected, the log file will be archived to a new segment once the file reaches a specified size. If the Rotation type "By Time" is selected, the log file will be archived to a new segment according to the specified Rotation interval (in hours). The default setting is a rotation type of "By Size" and a file size of 5M.
These default settings should be sufficient for most EM installations. If needed, the settings can be modified by connecting to the Oracle WebLogic Server Administration Console. For more details on how to modify this setting, refer to the MOS note above.

Control when the WebLogic and HTTP servers write to log files: The amount of data that will be logged can be controlled by specifying the level for the log and for the different message destinations. Some of the available levels are Debug, Info, Notice, Warning, Trace, Error, Critical, Alert, Emergency and Off. The default setting is Warning for the log files and Error for the domain log broadcaster, and these settings should be sufficient for most EM installations. If needed, the settings can be modified by connecting to the Oracle WebLogic Server Administration Console.

Control when to purge log files: The maximum number of archived log files that will be maintained can be controlled by selecting the option to limit the number of retained files and then specifying the number of files to retain. This number does not include the current log file. The default setting is to retain 10 files, which should be sufficient for most EM installations. If needed, this setting can be modified by connecting to the Oracle WebLogic Server Administration Console. For more details on how to modify this setting, refer to the MOS note above.

NOTE: The following log files need to be maintained and manually purged. One method for addressing these files is to create a cron job that finds all files older than a specific period of time and deletes them. Here is an example crontab entry to remove all of the *.out* files under the OMS instance domain directory that are older than 30 days:

00 * * * * cd /u01/app/oracle/gc_inst/user_projects/domains; find . -name "*.out*" -mtime +30 -exec rm {} \;

All files under middleware_home/gc_inst/WebTierIH1/diagnostics/logs/OHS/ohs#/.
Log files under the admin server and emgc_oms server:

middleware_home/gc_inst/user_projects/domains/<domain_name>/servers/EMGC_ADMINSERVER/logs/*.out*
middleware_home/gc_inst/user_projects/domains/<domain_name>/servers/EMGC_OMS#/logs/*.out*

Database Trace and Log Files

Starting with release 11g of Oracle Database, all diagnostic data, including the alert log, is stored in the Automatic Diagnostic Repository (ADR). Each instance of each product stores diagnostic data underneath its own ADR home directory. The ADR homes are grouped together under the same directory, referred to as the ADR base, whose location is set via the DIAGNOSTIC_DEST initialization parameter. The different ADR home directories known to the Automatic Diagnostic Repository Command Interpreter (ADRCI) utility can be seen by issuing a "show homes" command. To execute commands such as purge or list, the ADRCI utility must be told which home to operate against via the "set home adr_home" command. Failure to do this will result in the error: DIA-48448: This command does not support multiple ADR homes.

Alert Log

The alert log is stored as an XML file in the ADR and can be viewed with Enterprise Manager and with the ADRCI utility. Oracle also provides a text-formatted version of the alert log for backward compatibility. For details on using the ADRCI PURGE command, refer to the Oracle Database Utilities 12c Release 1 (12.1) guide.

Key Considerations:

Control the size of the text-based alert log: The size of the text alert log must be controlled manually. It is safe to delete the alert log while the instance is running, but it is recommended to create an archived copy first for possible future reference.

Control when to purge the XML-based alert log file: The ADRCI utility can be used to purge the XML-formatted alert log file. The content is only automatically purged based on the ADR purging policies; data in the files that has not yet reached the age in the purging policy is retained.
Alert log content is subject to purging after one year (long-lived, or LONGP_POLICY). To see the default purging policies for long-lived ADR content, issue the SHOW CONTROL command in ADRCI; the values are specified in hours. The default values should be sufficient for most database installations. NOTE: if you have multiple homes on the server, you must issue the SET HOMEPATH command to select a specific home before issuing a SHOW CONTROL command. The content can also be purged manually. Here is an example of a command to purge alert log content older than 30 days (note that the age is specified in minutes):

purge -age 43200 -type alert

For further details on the ADRCI utility, refer to the Oracle Database Utilities 12c Release 1 (12.1) guide.

Trace Files

Trace files are created by server and background processes, by the SQL trace facility, and by enabling SQL tracing for a session or an instance. The file extension for trace files is .trc (e.g. orcl_ora_762.trc). Trace files are sometimes accompanied by a corresponding trace map file (extension .trm), which contains structural information about the trace file.

Key Considerations:

Control the size of the trace files: The maximum size of trace files can be controlled using the initialization parameter MAX_DUMP_FILE_SIZE, which limits the size of trace files to a specific number of operating system blocks.

Control when the database writes to trace files: The only background process that allows this control is the ARCn process, via the LOG_ARCHIVE_TRACE initialization parameter. There are multiple trace levels that can be set, which control the amount of trace data written. For more details, refer to the Oracle Database Administrator's Guide 12c Release 1 (12.1).

Control when to purge trace files: Trace files can be purged from the ADR home based on purging policies. The content is only purged based on these purging policies.
Data in the files that has not yet reached the age in the purging policy is retained. Incidents and incident dump content are subject to purging after one year (long-lived, or LONGP_POLICY), while trace files, core dumps and incident packaging information are subject to purging after 30 days (short-lived, or SHORTP_POLICY). Some Oracle products, such as Oracle Database, automatically purge diagnostic data at the end of its life cycle. The default values should be sufficient for most database installations. To see the default purging policies for short- and long-lived ADR content, issue the SHOW CONTROL command in ADRCI; the values are specified in hours. NOTE: if you have multiple homes on the server, you must issue the SET HOMEPATH command to select a specific home before issuing a SHOW CONTROL command. The content can also be purged manually. Here is an example of a command to purge trace files (including dumps) older than 30 days (note that the age is specified in minutes):

purge -age 43200 -type trace

For further details on the ADRCI utility, refer to the Oracle Database Utilities 12c Release 1 (12.1) guide.

Listener Log Files

The listener log, much like the alert log, is stored in the ADR home for the listener as an XML file, along with a text-formatted version for backward compatibility.

Key Considerations:

Control the size of the listener log files: When the XML-formatted listener log file (log.xml) reaches 10MB in size, it is archived into a file named log_1.xml, log_2.xml, etc. This only applies to the XML-formatted file.

Purge listener log files: Listener log files, and the information within them, are not purged automatically. Unlike the RDBMS product, the network products have never supported automatic purging, so maintenance of the listener log and trace files is manual. The XML-formatted files can be purged using the ADRCI utility, as with the XML-formatted alert logs.
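The manual ADRCI purges described above lend themselves to a small script; here is a sketch that converts a retention in days to the minutes that ADRCI expects. The ADR home below is illustrative only (list the real ones with adrci exec="show homes"), and adrci is assumed to be on the PATH:

```shell
# Purge listener XML alert/log content older than KEEP_DAYS from one ADR home.
# The home path below is a made-up example; substitute your own.
KEEP_DAYS=30
AGE_MIN=$(( KEEP_DAYS * 24 * 60 ))   # adrci purge takes an age in minutes

echo "Purging content older than ${KEEP_DAYS} days (${AGE_MIN} minutes)"
adrci exec="set home diag/tnslsnr/myhost/listener; purge -age ${AGE_MIN} -type alert"
```
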
NOTE: I have found on an 11.2 RAC install that the ADR location for the LISTENER trace and log files is not the same as for the SCAN listener log and trace files. When connecting to the ADRCI utility, it will show only one ADR base location, which can be seen by issuing the "show base" command. To point to the ADR base for the Oracle Grid install, issue the "set base" command, for example: set base /u01/app/11.2.0.4/grid/log. After issuing the set base command, a "show homes" will list all of the diag locations that can be controlled. Be sure to issue the "set home" command before purging trace files. The locations are as follows:

· ADR base for the LISTENER log and trace files: based on the DIAGNOSTIC_DEST initialization parameter.
· ADR base for the SCAN listener log and trace files: located under the GRID_HOME/log directory regardless of the value of the DIAGNOSTIC_DEST initialization parameter in the ASM or database instances.

REFERENCES

· Master Note for 12c Cloud Control OMS [1957139.1]
· 12 Cloud Control: How to Enable Log Rotation Policy and Automatically Delete Older GCDomain.log, EMGC_ADMINSERVER.log and access.log Files? [1450535.1]
· EM 12c: Steps to Locate and Manage the Various Logs/Trace files in a 12c OMS Installation [1448308.1]
· 12c Cloud Control Agent: How to Configure Logging for the 12.1.0.1 Management Agent? [1363757.1]
· Oracle Database Utilities 12c Release 1 (12.1)
· Oracle Database Administrator's Guide 12c Release 1 (12.1)
· Oracle Fusion Middleware Administrator's Guide


Performance & Scalability

Tools For Generating Consistent Loads

It's finally ready: the new database machine you've spent months planning and architecting, all those shiny new components perfectly aligned for extreme performance. Will it give you the results you expect? How can you baseline and compare your new and existing systems? Pointing your application at your new machine may take some time to set up and, depending on the behavior of the application, may not stress all hardware components as you might like. You need a set of easy-to-configure scripts for load testing, and you need tools and procedures to compare old systems to new. This will be the first of three blog posts to help with that effort. In this post, I'll go over some tools you can use to generate various loads. In the next two posts in the series, I'll talk about using Enterprise Manager to evaluate and baseline loads, and strategies to compare systems using consistent loads.

Test Suite

My current test suite includes two methods to generate loads: one leverages Swingbench, which is a well-known and popular load generating tool, and the other is a solution I cooked up myself. Both sets of scripts can be altered to tailor their load characteristics. I've also included a variable load script wrapper for each, which you can use to adjust the load over time. For example, you can have a load test that runs for a total of 5 hours, and within that 5 hour window your load could fluctuate every 30 minutes from heavy to light. The custom scripts are also flexible enough to support altering their behavior if you have a specific set of SQL/PLSQL commands you would like to run. For this article, my database is running on an Exadata X2-2 quarter rack.

Using Swingbench

Swingbench is a great tool for quickly generating loads on an Oracle database. It's easy to set up and has many configurable options. Although Swingbench has a nice GUI interface for creating your test schemas and running your load, I really like the command line interface.
With the CLI you can create scripts to interact with Swingbench and nohup loads on remote hosts, so your load can run for hours or days without needing to be logged in.

Setup

If you don't have it already, download a copy of Swingbench and unzip the files on your host machine. You can run Swingbench from your database host or a remote client; if you co-locate them on your database host, take this into account during load measurement. There are a few different types of schemas you can create with Swingbench, and each type has an associated XML wizard file in the bin directory to help with creating that schema. I tend to use the Order Entry (OE) schema the most, as its behavior is more representative of an OLTP system, so we will be using the oewizard.xml file for this example. Open up the XML file in your favorite editor and update the connection information for the system user that will create the schema, then run oewizard on the file like this:

oewizard -c oewizard.xml -cl -cs //<your_host_or_vip>/<service> -u <test_user_name> -p <test_user_pw> -ts <tablespace_name> -create -df <asm_disk_group> -part -scale 4 -debug

You can use -scale to adjust the size of your schema, which will also increase the time it takes to build. A scale of 4 gives me about a 10G schema.

Execution

When your schema is ready, edit the supplied swingconfig.xml file with your connection info and use charbench to verify your schema:

charbench -c swingconfig.xml

With our schema ready, we can now define our load, also using the swingconfig.xml file. There are a number of parameters you can adjust to define your load. Here are the ones I find effective:

<NumberOfUsers>40</NumberOfUsers>
<MinDelay>100</MinDelay>
<MaxDelay>1000</MaxDelay>
<LogonDelay>150</LogonDelay>
<WaitTillAllLogon>false</WaitTillAllLogon>

MinDelay and MaxDelay specify the wait time between transactions in milliseconds.
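As a back-of-the-envelope sketch (my own arithmetic, not a Swingbench feature), the delay settings imply a rough ceiling on transaction rate, which is handy when sizing a test:

```shell
# Rough transaction-rate ceiling implied by the swingconfig.xml values above
users=40
min_delay_ms=100
max_delay_ms=1000

# Average think time between transactions, in milliseconds
avg_delay_ms=$(( (min_delay_ms + max_delay_ms) / 2 ))

# Upper bound on transactions per second across all users,
# ignoring the service time of the transactions themselves
tps=$(( users * 1000 / avg_delay_ms ))

echo "avg delay: ${avg_delay_ms} ms, est. peak rate: ~${tps} tps"
```

Lowering Min/MaxDelay or raising NumberOfUsers moves this ceiling up, which is exactly the knob-turning described below.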
A LogonDelay helps avoid connection storms (unless that's what you want to test), and I like setting WaitTillAllLogon to false so my load starts right away and there is a nice ramp-up over time. If I want to push the system hard, I set Min/MaxDelay low and increase the number of users. Further down the swingconfig.xml file you will find descriptions of the actual transactions that will be executed. Each transaction type can be turned on/off, and its weighted value compared to other transactions can be adjusted. This section is where you will do most of your tweaking to get the load profile you want.

Example

Here's a Top Activity graph in Enterprise Manager showing two consecutive tests. The first test had 300 users with a Min/MaxDelay of 15/20. I decreased the Min/MaxDelay to 10/15 for an increased load, which you can see below.

Here's an example of a heavily overloaded system in which the application doesn't scale. I've set up Swingbench with 800 users connecting every 2 seconds for a slow buildup, a Min/MaxDelay of 5/15, and only the "Order Products" transactions, which perform single-row inserts with no batching. Around 11:30am there are ~500 sessions and the system starts to overload. CPU has become saturated, and other background processes like the database writers start to slow down, causing excessive Concurrency and Configuration waits in the buffer cache. Our SGA for this test was 10G.

Variable Loads With Swingbench

In order to generate variable Swingbench loads over time, I've created a small wrapper script, variable_load.pl, written in Perl, that can be used to define how long your load should run and the variation in that load. To adjust the load, you define how many users will be connected. Here's a snippet of the script which describes each parameter.
### how long you want the test to run in hours:minutes
$runtime = "00:30";

### your swingbench config file
$conf_file = 'swingconfig.xml';

### Adjust your variable loads here
###
### RunTime   = how long this load will run in hours:minutes
### UserCount = how many user connections
### SleepTime = how many seconds to sleep before running the load, if needed
###
###             RunTime  UserCount  SleepTime
@swing_list = ( ["00:02", 400, 120],
                ["00:05", 200, 0],
                ["00:05", 100, 0] );

With these settings, here's what our load profile looks like.

Custom Load Generation

There have been times during my own performance testing when I needed to generate a very specific type of load. Most recently, I needed to generate a heavy large-block IO load, so I put together these scripts in response to that need. I tried to keep them easy to set up, run and alter if necessary. The load uses a single schema and creates a test table for each session that will be connected, so the load needs to be initialized based on the maximum number of sessions expected for testing.

Setup and Execution

Download the package to your host and unzip/untar it in an empty directory. Edit the load_env.sh file to set up the variables for your test; this is where you will define the maximum number of test tables you will need. Run the init_load.sh script to set up your user test schema and test tables. You will be prompted for the SYSTEM user password. Run the start_load.sh script to begin your test. This script requires two parameters, the low and high values for the test tables to use and thus the number of sessions. This was done to allow running a small load and then ramping up and running additional loads as needed. Examples:

start_load.sh 1 10 : runs ten sessions, using test tables 1 through 10.
start_load.sh 16 20 : runs 5 sessions, using test tables 16 through 20.
start_load.sh 1 1 : runs 1 session.

Running stop_load.sh will kill all sessions and thus stop the load. Here's what a load of 20 users looks like.
Lots of large block IO! Custom variable loads can also be run using the variable_load.pl script found in the package. It has the same parameters to adjust as in the Swingbench script. Here's an example of a variable load that ramps up, overloads the system, then drops back down again. As the IO gets heavy, we start seeing more contention in the log buffer.

Customizations

It's possible to design your own custom loads with these scripts; for example, you may need to execute a particular PL/SQL package or test how well a SQL statement scales against a large partitioned table. This can be achieved by editing the init_object.sh and load_object.sh files.

init_object.sh : Edit this script to create or initialize any objects needed for your test. This script gets executed multiple times depending on how many sessions you plan to run concurrently. If you don't have session-specific setup, you can leave an empty code block.

load_object.sh : This is the code that gets executed for each session you define. If you had PL/SQL you wanted to test, this is where you would put it. As an example, for this test I created some database links for each instance and altered the script to select from our test table using the database links, thus creating a heavy network load. I've included this example script, load_object_network.sh, in the package zip file as well.

Ready!

With a set of tools to define consistent, predictable loads, we are now ready to baseline our systems. Next in the series I will go over the tools available in Enterprise Manager which will help in that effort.


Architecture & Planning

Fast Recovery Area for Archive Destination

If you are using the Fast Recovery Area (FRA) for the archive destination and the destination is set to USE_DB_RECOVERY_FILE_DEST, you may notice that the Archive Area % Used metric does not trigger anymore. Instead, you will see the Recovery Area % Used metric trigger when it hits a Warning threshold of 85% full and a Critical threshold of 97% full. Unfortunately, this metric is controlled by the database and the thresholds cannot be modified (see MOS Note 428473.1 for more information). Thresholds of 85/97 are not sufficient for some of the larger, busier databases and may not give you enough time to kick off a backup and clear enough logs before the archiver hangs. If you need different thresholds, you can easily accomplish this by creating a Metric Extension (ME) and setting thresholds to your desired values. This blog will walk through an example of creating an ME to monitor archive on FRA destinations; for more information on MEs and how they can be used, refer to the Oracle Enterprise Manager Cloud Control Administrator's Guide.

To determine if you are using the FRA for your archive destination, issue an archive log list command from SQL, or view it in EM 12c by selecting the Availability menu and clicking on Recovery Settings. In this example, we will create a Metric Extension that uses the following query to monitor the Fast Recovery Area destination:

select name,
       round(space_limit / 1048576) space_limit_in_mb,
       round(space_used / 1048576) space_used_in_mb,
       round((space_used / 1048576) / (space_limit / 1048576), 2) * 100 percent_usage
  from v$recovery_file_dest;

Create a Metric Extension

To create a metric extension, select Enterprise / Monitoring / Metric Extensions, and then click on Create. On the General Properties screen, select either Cluster Database or Database Instance, depending on which target you need to monitor. If you have both RAC and single instance databases, you may need to create one for each. In this example we will create a Cluster Database metric.
Enter a Name for the ME and a Display Name, then select SQL for the Adapter. Adjust the Collection Schedule as desired; for this example we will leave it at 15 minutes. Enter the query that you wish to execute, in this example the query above that checks for space used on the recovery destination. The next step is to create the columns to store the data returned from the query. Click Add and add a column for each of the fields, in the same order that the data is returned. The table below shows how each field is defined as either a Key or Data column.

Name            Display Name                    Column Type  Value Type  Metric Category  Unit
FRADestination  Fast Recovery Area Destination  Key          String      Capacity
FRALimit        Fast Recovery Area Limit        Data         Number      Capacity         MB
FRAUsed         Fast Recovery Area Used         Data         Number      Capacity         MB
FRAPercentUsed  Fast Recovery Area % Used       Data         Number      Capacity         %

For the Fast Recovery Area % Used column, you can set a default threshold by selecting a Comparison Operator and setting Warning and Critical thresholds. In this example we set Warning to > 60% and Critical to > 75%. When all columns have been added, review your columns and click Next. On the Credentials screen, you can choose to use the default monitoring credentials or specify new credentials. We will use the default monitoring credentials established for our target, in this case DBSNMP. The next step is to test your metric extension. Click on Add to select a target for testing, then click Select. Now click the Run Test button to execute the test against the selected target(s). We can see in the example below that the metric extension has executed and returned a value of 77 for Fast Recovery Area % Used. Click Next to proceed. Review the metric extension in the final screen and click Finish. The metric will be created in Editable status; before you can deploy it to targets, you must select the metric, click Actions and select Save as Deployable Draft.
This step allows you to test your metric on a set of targets. Once you've confirmed the metric behaves as you expect, click Actions and select Publish Metric Extension. Finally, we want to apply this metric to a target. You can also add the metric to a template, which can be mass deployed to all targets; however, in this example we will deploy to a target directly. Select the metric, click Actions and then Deploy to Targets. Click Add and select the target you wish to deploy to, then click Submit. The deployment job will be shown in the final window. Once deployment is complete, we can go to the target and select Cluster Database / Monitoring / Metric & Collection Settings to see the new metric and its thresholds. After 15 minutes, we should be able to see the most recent upload. In our example, you can see we have already triggered a Critical event, as our destination is 77% full. If you have an Incident Rule which creates an incident or notification for all events of Warning or Critical severity, you will receive an incident similar to the one below. If you have selective metrics creating incidents or notifications, be sure to add your new ME to the Incident Rules.

By creating a Metric Extension, we are now able to customize the thresholds at which our Fast Recovery Area monitoring notifies us when it's filling up. Taking this a step further, we could create a Corrective Action on the Metric Extension we just created to kick off an archive log backup. By creating a Corrective Action to trigger the archive log backup, you can reduce time spent by DBAs logging in, looking at the issue and kicking off a backup.
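As a quick sanity check of the arithmetic the ME query performs, the same rounding can be reproduced outside the database. A minimal sketch with made-up byte counts (the space_limit/space_used values below are illustrative, not from a real v$recovery_file_dest):

```shell
# Reproduce the ME query's percent_usage calculation in awk,
# using made-up values for space_limit and space_used (in bytes)
space_limit=10737418240   # 10 GB limit (illustrative)
space_used=8268351488     # ~7.7 GB used (illustrative)

awk -v limit="$space_limit" -v used="$space_used" 'BEGIN {
    limit_mb = limit / 1048576
    used_mb  = used / 1048576
    # equivalent to round(used_mb / limit_mb, 2) * 100 in the query
    pct = int(used_mb / limit_mb * 100 + 0.5)
    printf "limit: %d MB, used: %d MB, %% used: %d\n", limit_mb, used_mb, pct
}'
```

With these sample values the result lands just past the 75% Critical threshold chosen above, which is the scenario shown in the test run.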


Installation & Configuration

Monitoring Archive Area % Used on Cluster Databases

One of the most critical things to monitor on an Oracle Database is your archive area. If the archive area fills up, your database will halt until it can continue to archive the redo logs. If your archive destination is set to a file system, then the Archive Area % Used metric is often the best way to go. This metric allows you to monitor a particular file system for the percentage of space that has been used. However, there are a couple of things to be aware of for this critical metric.

Cluster Database vs. Database Instance

You will notice in EM 12c that the Archive Area metric exists on both the Cluster Database and the Database Instance targets. The majority of Cluster Databases (RAC) are built against database best practices, which indicate that the archive destination should be shared read/write between all instances. The purpose of this is that in case of recovery, any instance can perform the recovery and has all necessary archive logs to do so. Monitoring this destination for a Cluster Database at the instance level caused duplicate alerts and notifications, as both instances would hit the Warning/Critical threshold for Archive Area % Used within minutes of each other. To eliminate duplicate notifications, the Archive Area % Used metric for Cluster Databases was introduced. This allows the archive destination to be monitored at the database level, much like tablespaces are monitored in a RAC database. In the Database Instance (RAC instance) target, you will notice the Archive Area % Used metric collection schedule is set to Disabled. If you have a RAC database and you do not share archive destinations between instances, you will want to disable the Cluster Database metric and enable the Database Instance metric to ensure that each destination is monitored individually.
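To confirm which kind of archive destination a database uses before choosing between these metrics, the archive log list command can be scripted; a minimal sketch, assuming OS authentication as a SYSDBA-capable user on the database host:

```shell
# Show the archive mode and destination for the local instance.
# Assumes OS authentication on the database host.
sqlplus -s / as sysdba <<'EOF'
archive log list
EOF
```

A destination of USE_DB_RECOVERY_FILE_DEST means the Fast Recovery Area is in use, in which case the Recovery Area % Used metric applies rather than Archive Area % Used.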


Availability, Reliability & MAA

Oracle Enterprise Manager Software Planned Maintenance

A critical component of managing an application is patching and maintaining its software. Applying patches is not only required for bug fixes; it is also a means of obtaining new and beneficial functionality. It is therefore an important task in maintaining a healthy and productive Enterprise Manager (EM) solution. The patching process itself can present challenges that potentially increase the work and time involved in each patching exercise, such as patch conflicts, unmet prerequisites, and even unnecessary downtime. Spending the proper time to set up a patching strategy can save time and effort, and reduce errors and risk when patching a production EM environment.

The MAA team has recently published a new whitepaper which provides an overview of the recommended patching strategy for Oracle Enterprise Manager. This information is intended as a guideline for maintaining a patched and highly available EM environment and may need customization to accommodate the requirements of an individual organization. The main topics covered in this paper are outlined below:

Enterprise Manager Components
Defining Business Requirements
Patching Strategy Overview
Types of Patches
Define Patch List and Steps
Planning
Preparation
Testing
Sample Patching Steps
Optional Patching Strategy

http://www.oracle.com/technetwork/database/availability/em-patching-bp-2132261.pdf


Installation & Configuration

Java Heap Size Settings For Enterprise Manager 12c

This blog provides an update to a previous blog (Oracle Enterprise Manager 12c Configuration Best Practices (Part 1 of 3)) on how to increase the java heap size for an OMS running release 12cR3. The entire series can be found in the My Oracle Support note titled Oracle Enterprise Manager 12c Configuration Best Practices [1553342.1].

Increase JAVA Heap Size

For larger enterprises, there may be a need to increase the amount of memory used for the OMS. One of the symptoms of this condition is "sluggish" performance on the OMS. If it is determined that the OMS needs more memory, this is done by increasing the JAVA heap size parameters. However, it is very important to increase this parameter incrementally and be careful not to consume all of the memory on the server. Also, java does not always perform better with more memory.

Verify: The parameters for the java heap size are stored in the following file:

<MW_HOME>/user_projects/domains/GCDomain/bin/startEMServer.sh

Recommendation: If you have more than 250 agents, increase the -Xmx parameter, which specifies the maximum size for the java heap, to 2 GB. As the number of agents grows, it can be incrementally increased. Note: Do not increase this beyond 4 GB without contacting Oracle. Change only the -Xmx value in the line containing USER_MEM_ARGS="-Xms256m -Xmx1740m ...options..." as seen in the example below. Do not change the Xms or MaxPermSize values. Note: change both lines as seen below. The second occurrence will be used if running in debug mode.
Steps to modify the Java setting for versions prior to 12cR3 (12.1.0.3)

Before:

if [ "${SERVER_NAME}" != "EMGC_ADMINSERVER" ] ; then
  USER_MEM_ARGS="-Xms256m -Xmx1740m -XX:MaxPermSize=768M -XX:-DoEscapeAnalysis -XX:+UseCodeCacheFlushing -XX:ReservedCodeCacheSize=100M -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+CMSClassUnloadingEnabled"
  if [ "${JAVA_VENDOR}" = "Sun" ] ; then
    if [ "${PRODUCTION_MODE}" = "" ] ; then
      USER_MEM_ARGS="-Xms256m -Xmx1740m -XX:MaxPermSize=768M -XX:-DoEscapeAnalysis -XX:+UseCodeCacheFlushing -XX:ReservedCodeCacheSize=100M -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+CMSClassUnloadingEnabled -XX:CompileThreshold=8000 -XX:PermSize=128m"
    fi
  fi
  export USER_MEM_ARGS
fi

After:

if [ "${SERVER_NAME}" != "EMGC_ADMINSERVER" ] ; then
  USER_MEM_ARGS="-Xms256m -Xmx2560m -XX:MaxPermSize=768M -XX:-DoEscapeAnalysis -XX:+UseCodeCacheFlushing -XX:ReservedCodeCacheSize=100M -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+CMSClassUnloadingEnabled"
  if [ "${JAVA_VENDOR}" = "Sun" ] ; then
    if [ "${PRODUCTION_MODE}" = "" ] ; then
      USER_MEM_ARGS="-Xms256m -Xmx2560m -XX:MaxPermSize=768M -XX:-DoEscapeAnalysis -XX:+UseCodeCacheFlushing -XX:ReservedCodeCacheSize=100M -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+CMSClassUnloadingEnabled -XX:CompileThreshold=8000 -XX:PermSize=128m"
    fi
  fi
  export USER_MEM_ARGS
fi

Steps to modify the Java setting for version 12.1.0.3:

emctl set property -name JAVA_EM_MEM_ARGS -value "<value>"
emctl stop oms -all
emctl start oms

Please note that this value is seeded in emgc.properties and is used to start the OMS. Be careful when setting this property: the OMS uses it at startup and can fail to start if it is not specified correctly.
Below is an example of the command:

emctl set property -name JAVA_EM_MEM_ARGS -value "-Xms256m -Xmx2048m -XX:MaxPermSize=768M -XX:-DoEscapeAnalysis -XX:+UseCodeCacheFlushing -XX:ReservedCodeCacheSize=100M -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+CMSClassUnloadingEnabled"
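For the 12.1.0.3 method, the property can be read back before restarting the OMS to confirm the value was stored as intended. This is a sketch; confirm that the get property verb is available in your emctl version:

```shell
# Read the stored heap setting back, then bounce and check the OMS
emctl get property -name JAVA_EM_MEM_ARGS
emctl stop oms -all
emctl start oms
emctl status oms
```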


Maintenance & Health

Creating a disk monitoring metric extension for Exadata Compute Nodes

It is highly desirable to monitor the Exadata Compute node disks for current failures or degraded performance. By using the Enterprise Manager metric extension functionality, Compute nodes can be monitored for these conditions and an alert created in the event of an issue. The following steps will guide you through this process.

1. First, a root monitoring credential set must be created. Log in to the OMS using emcli:

$ ./emcli login -username=sysman
Enter password :
Login successful

2. Create the credential set:

$ ./emcli create_credential_set -set_name=root_monitoring -target_type=cluster -supported_cred_types=HostCreds -descript=root_monitoring
Create credential set for target type host

3. Next, log in to EM and go to the monitoring credentials page (Setup --> Security --> Monitoring Credentials) to set up credentials for a test target. Select Cluster and push the "Manage Monitoring Credentials" button. Find the target you want to test on with the credential set defined in step 2 (in this case root_monitoring). Highlight the credential set and push the "Set Credentials" button. Enter the credentials and use the test and save button to ensure they are correctly defined.

4. Next, create the metric extension: Enterprise --> Monitoring --> Metric Extensions. Select the Create button.

5. On the General Properties screen set the following:
Target type: "Host"
Name: "Compute_Node_Disk_Monitoring"
Display Name: "Compute Node Disk Monitoring"
Adapter: "OS Command - Multiple Columns"
Data Collection: "Enabled"
Repeat Every: "5 Minutes"
Use of Metric Data: "Alerting and Historical Trending"
Upload Interval: "1 Collections"
Select the Next button.

6. Now create the script to run on the agent. On your local machine, create a file called megaclicommand.sh that contains the following:

/opt/MegaRAID/MegaCli/MegaCli64 AdpAllInfo -aALL | grep "Virtual Drives" -A 6 | grep -w 'Degraded\|Critical\|Offline\|Failed' | sed 's/Degraded/Virtual Drives Degrades/g' | sed 's/Offline/Virtual Drives Offline/g' | sed 's/Critical Disks/Critical Physical Disks/g' | sed 's/Failed Disks/Failed Physical Disks/g'

7. On the "Create New : Adapter" page enter the following:
Command: "/bin/sh"
Script: "%scriptsDir%/megaclicommand.sh"
Delimiter: ":"

8. In the Upload Custom Files section, select the Upload button and select the file created in step 6. Click OK and, once back on the "Create New : Adapter" page, select the Next button.

9. On the "Create New : Columns" page create two columns.
Column one should be set up as:
Name: "Type"
Display Name: "Type"
Column Type: "Key Column"
Value Type: "String"
Metric Category: "Fault"
Column two should be set up as:
Name: "Value"
Display Name: "Value"
Column Type: "Data Column"
Value Type: "Number"
Metric Category: "Fault"
Comparison Operator: ">"
Critical: "0"
After setting up the two columns, select the Next button.

10. On the Credentials screen, select the "Specify Credential Set" radio button. In the drop-down box, select the credential set created in step 2. Click the Next button.

11. On the "Create New : Test" page, add a target to test with in the "Test Targets" section. Click the "Run Test" button and ensure that results are displayed properly in the "Test Results" box. The results should be similar to those below:

Type                       Value
Virtual Drives Degrades    0
Virtual Drives Offline     0
Critical Physical Disks    0
Failed Physical Disks      0

12. Next, the Metric Extension must be saved as a deployable draft. This is accomplished on the main metric extension page. This allows the metric to be deployed to targets for testing.
However, at this stage only the developer has access to publish the metric. After satisfactory testing is completed, the metric is then published; this is once again accomplished from the main metric extension page.

To ensure that administrators are notified in the event the created metric fails, an incident rule should be created.

1. To begin, navigate to the Incident Rules home page. From the Setup button in the upper right-hand corner of the Enterprise Overview page: Setup -> Incidents -> Incident Rules. Now click the "Create Rule Set..." button.

2. On the Create Rule Set screen enter the following information:
Name: whatever the rule should be called, i.e. Metric Collection Error Rule
Enabled box: should be checked
Type: Enterprise
Applies To: Targets
Select the "All Targets of Type" radio button at the bottom of the screen, followed by Host in the drop-down box.

3. Now select the "Rules" tab at the bottom of the screen.

4. Choose the "Create.." button in the middle of the screen.

5. On the "Select Type of Rule to Create" popup box, select the "Incoming events and updates to events" radio button. Click the Continue button.

6. On the "Create New Rule: Select Events" screen, check the Type check box. In the drop-down, select "Metric Extension Update". Click the Next button.

7. On the "Add Conditional Actions" page, you can specify conditions that must occur for the action to apply, whether an incident should be created, and email notifications. Specify the desired properties and select the Continue button.

8. If no additional rules are required, select the Next button on the "Create New Rule: Add Actions" page.

9. On the next screen, either accept the default rule name or specify the desired name.

10. On the "Create New Rule : Review" page, ensure everything looks correct and select the Continue button.

11. Lastly, click the "Save" button to complete the setup.

12. The metric can now be deployed to the desired target by selecting the "Deploy to Targets" option from the "Actions" drop-down button on the Metric Extensions page.
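The expected results in step 11 can also be sanity-checked outside of EM by running the same grep/sed pipeline used in megaclicommand.sh against canned output. The sample AdpAllInfo text below is illustrative, not captured from a real controller; on a Compute node the pipeline reads from MegaCli64 instead:

```shell
# Feed sample "Virtual Drives" section text through the megaclicommand.sh
# pipeline and show the Type : Value rows the metric extension adapter parses.
sample='Virtual Drives    : 1
  Degraded        : 0
  Offline         : 0
  Physical Devices  : 10
  Disks           : 8
  Critical Disks  : 0
  Failed Disks    : 0'

result=$(echo "$sample" \
  | grep "Virtual Drives" -A 6 \
  | grep -w 'Degraded\|Critical\|Offline\|Failed' \
  | sed 's/Degraded/Virtual Drives Degrades/g' \
  | sed 's/Offline/Virtual Drives Offline/g' \
  | sed 's/Critical Disks/Critical Physical Disks/g' \
  | sed 's/Failed Disks/Failed Physical Disks/g')

echo "$result"
```

Each output line keeps the ":" delimiter configured in step 7, which is how the adapter splits the Type key column from the Value data column.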


Installation & Configuration

Simplified Agent and Plug-in Deployment

At your site of hundreds or thousands of hosts, have you had to patch agents immediately as they get deployed? For this reason I've always been a big fan of cloning an agent that has the required plug-ins and all the recommended core agent and plug-in patches, then using that clone for all new agent deployments. With Oracle Enterprise Manager 12c this got even easier, as you can now clone the agent using the console "Add Host" method. You still have to rely on the EM users to use the clone.

The one problem I have with cloning is that you have to have a reference target for each platform that you support. If you have a consolidated environment and only have Linux x64, this may not be a problem. If you are managing a typical data center with a mixture of platforms, it can become quite the maintenance nightmare just to maintain your golden images. You must update golden image agents whenever you get a new patch (generic or platform specific) for the agent or plug-in, and recreate the clone for each platform. Typically, I find people create a clone for their most common platforms and forget about the rest. That means maybe 80% of their agents meet their standard patch requirements and plug-ins upon deployment, but the other 20% have to be patched post-deploy, or worse, never get patched!

Deployed agents and plug-ins can be patched easily using EM Patches & Updates, but what about the agents still getting deployed or upgraded? Wouldn't it be nice if they got patched as part of the deployment or upgrade? This article will show you two new features in EM 12.1.0.3 (EM 12cR3) that will help you deploy the most current agent and plug-in versions. Whether you have 100s or 1000s of agents to manage, reducing maintenance and keeping the agents up to date is an important task, and being able to deploy or upgrade to a fully patched agent will save you a lot of time and effort.
Agent One-off Patches

Using the new feature available in EM 12cR3, you can enforce which one-off patches get applied during agent deployment or upgrade when using the Console or EM CLI. This keeps all agents at a consistent patch level and removes the extra steps required to patch agents after deployment or upgrade. As part of your change management process, it is recommended to have a gold image agent that you perform all your patching and testing on. You will need one per platform if you have platform-specific patches. After you have fully tested the agent one-off patches and decided they are to be part of your agent golden image, stage them on each OMS in $OMS_HOME/install/oneoffs/<agentversion>/<platform>, where agentversion is like 12.1.0.3.0 and platform matches an option in the table below:

Operating System Platform               Directory
Generic                                 Generic
Linux                                   Linux
Linux x64                               linux_x64
Oracle Solaris on x86-64 (64-bit)       solarix_x64
Oracle Solaris on SPARC (64-bit)        Solaris
HP-UX PA-RISC (64-bit)                  hpunix
HP-UX Itanium                           Hpi
IBM S/390 Based Linux (32-bit)          linux_zseries64
IBM AIX on Power Systems (64-bit)       Aix
IBM Linux on Power Systems (64-bit)     linux_ppc64
Microsoft Windows x64 (64-bit)          windows_x64
Microsoft Windows (32-bit)              win32

For example, a non-platform-specific patch for 12.1.0.3.0 would reside in the following directory:

$OMS_HOME/install/oneoffs/12.1.0.3.0/Generic/

Once the patches are staged in the install directory for each OMS, agent deployments and agent upgrades done using the EM 12c Console will apply the one-off patches as a post-install/post-upgrade step. Using the EM CLI upgrade_agents or submit_add_host verbs will also deploy the one-off patches. Patches can be verified in $AGENT_HOME/cfgtoollogs/agentDeploy/agentDeploy<timestamp>.log or by checking the agent inventory with OPatch.
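The staging layout described above can be sketched as follows. The patch zip name is hypothetical, and a temporary directory stands in for the real OMS home; on a real system you would copy the tested patch into the corresponding directory under each OMS home.

```shell
# Create the one-off staging path for a generic 12.1.0.3.0 agent patch.
# OMS_HOME is a temporary stand-in; the patch zip name is hypothetical.
OMS_HOME=$(mktemp -d)
STAGE="$OMS_HOME/install/oneoffs/12.1.0.3.0/Generic"
mkdir -p "$STAGE"
touch "$STAGE/p12345678_121030_Generic.zip"   # stand-in for the real patch zip
ls "$STAGE"
```

Platform-specific patches go into the matching platform directory (e.g. linux_x64) instead of Generic, and the staging must be repeated on every OMS in the site.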
<AGENT_HOME>/OPatch/opatch lsinventory -oh <AGENT_HOME> -invPtrLoc <AGENT_HOME>/oraInst.loc

For full details and examples on this new feature, see the Oracle Enterprise Manager Cloud Control Advanced Installation and Configuration Guide, Appendix D "Applying One-Off Patches to Oracle Management Agents", and watch the self-running demo "How to push Plug-in patches while doing Fresh Agent Deployment or Agent Upgrade using Create Custom Plug-in Update".

Agent-Side Plug-in One-off Patches

So now that you've got your agent deployed and patched, it's time for the plug-ins. These patches get applied to the plug-in home (i.e. /oracle/agent/plugins/oracle.sysman.db.agent.plugin_12.1.0.4.0 is the plug-in home for the DB 12.1.0.4.0 plug-in for an agent installed in /oracle/agent). Just as with agent patches, you can easily deploy patches to existing agent plug-ins using EM Patches & Updates, but what about all those new plug-ins you have to deploy to newly installed agents? With EM 12cR3 you can now ensure that the patches are automatically applied each time the plug-in is deployed as well. This removes the additional step of having to go back and patch an agent plug-in that you just deployed.

Using the gold image agent you selected earlier, apply any plug-in patches that you want to include. Once you've successfully validated and tested the plug-in, create the Custom Plug-in Update using EM CLI. To create a Custom Plug-in Update, the user must have the EM_INFRASTRUCTURE_ADMIN role. The -overwrite flag is required when updating a Custom Plug-in Update that already exists. For example, there is a plug-in patch for the DB plug-in that we have applied to all our existing agents and we would like all new agents to have this plug-in patch:
$ emcli create_custom_plugin_update -agent_name="server1.oracle.com:3872" -plugin_id="oracle.sysman.db" -overwrite

To view a list of patches included in a particular Custom Plug-in Update, run the following command or view the details in the Console:

$ emcli list_patches_in_custom_plugin_update -plugin=<plugin_id>:<version> [-discovery]

For more details on the Custom Plug-in Update feature in 12cR3, see the Enterprise Manager 12c Cloud Control Administrator's Guide section on Plug-ins and watch the self-running demo "How to push Plug-in patches while doing Fresh Agent Deployment or Agent Upgrade using Create Custom Plug-in Update".

Summary

In summary, using these two new features of 12cR3 helps you ensure that freshly deployed or upgraded agents and plug-ins get the appropriate patches in one step. This will help reduce maintenance and maintain a consistent agent profile across all servers.


Maintenance & Health

Operational Considerations and Troubleshooting Oracle Enterprise Manager 12c

Oracle Enterprise Manager (EM) 12c has become a valuable component in monitoring and administering an enterprise environment. The more critical the applications, servers and services that are monitored and maintained via EM, the more critical the EM environment becomes. Therefore, EM must be as available as the most critical target it manages.

There are many areas that need to be discussed when talking about managing Enterprise Manager in a data center. Some of these are as follows:

• Recommendations for staffing roles and responsibilities for EM administration
• Understanding the components that make up an EM environment
• Backing up and monitoring EM itself
• Maintaining a healthy EM system
• Patching the EM components
• Troubleshooting and diagnosing guidelines

The Operational Considerations and Troubleshooting Oracle Enterprise Manager 12c whitepaper available on the Enterprise Manager Maximum Availability Architecture (MAA) site will help define administrator requirements and responsibilities. It provides guidance in setting up the proper monitoring and maintenance activities to keep Oracle Enterprise Manager 12c healthy and to ensure that EM stays highly available.


Installation & Configuration

Oracle Enterprise Manager 12c Configuration Best Practices (Part 3 of 3)

This is part 3 of a three-part blog series that summarizes the most commonly implemented configuration changes to improve performance and operation of a large Enterprise Manager 12c environment. A "large" environment is categorized by the number of agents, targets and users. See the Oracle Enterprise Manager Cloud Control Advanced Installation and Configuration Guide chapter on Sizing for more details on sizing your environment properly.

Part 1 of this series covered recommended configuration changes for the OMS and Repository
Part 2 covered recommended changes for the WebLogic server
Part 3 covers general configuration recommendations and a few known issues

The entire series can be found in the My Oracle Support note titled Oracle Enterprise Manager 12c Configuration Best Practices [1553342.1].

Configuration Recommendations

Configure E-Mail Notifications for EM Related Alerts

In some environments, the notifications for events for different target types may be sent to different support teams (i.e. notifications on host targets may be sent to a platform support team). However, the EM application administrators should be well informed of any alerts or problems seen on the EM infrastructure components.

Recommendation: Create a new incident rule for monitoring all EM components and set up the notifications to be sent to the EM administrator(s). The notification methods available can create or update an incident, send an email or forward to an event connector. To set up the incident rule set, follow the steps below. Note that each individual rule in the rule set can have different actions configured.

1. To create an incident rule for monitoring the EM components, click on Setup / Incidents / Incident Rules. On the All Enterprise Rules page, click on the out-of-box rule called "Incident management Ruleset for all targets" and then click on the Actions drop-down list and select "Create Like Rule Set...".

2. For the rule set name, enter a name such as MTM Ruleset. Under the Targets tab, select "Specified targets" and select "Targets" from the Add drop-down list. Click on the green "+" sign. Click on the drop-down arrow for Target Type and deselect all target types except "EM Service" and "OMS and Repository". Click "Search". Select the targets returned and click "Select".

3. Click on the Rules tab. To edit a rule, click on the rule name and click on Edit.

4. Modify the following rules (names for rules in 12.1.0.3 are in parentheses if they have changed):

a. Incident creation rule for metric alerts (Create incident for critical metric alerts)
   i. Leave the Type set as is, but change the Severity to add Warning by clicking on the drop-down list and selecting "Warning". Click Next.
   ii. Add or modify the actions as required (i.e. add email notifications). Click Continue and then click Next.
   iii. Leave the Name and Description the same and click Next.
   iv. Click Continue on the Review page.

b. Incident creation rule for target unreachable
   i. Leave the Type set as is, but change the Target type to add EM Service and OMS and Repository by clicking on the drop-down list and selecting both "EM Service" and "OMS and Repository". Click Next.
   ii. Add or modify the actions as required (i.e. add email notifications). Click Continue and then click Next.
   iii. Leave the Name and Description the same and click Next.
   iv. Click Continue on the Review page.

5. Modify the actions for any other rule as required and be sure to click the "Save" push button to save the rule set, or all changes will be lost.

Configure Out-of-Band Notifications for the EM Agent

Out-of-Band notifications act as a backup when there is a complete EM outage or a repository database issue. This is configured on the agent of the OMS server and can be used to send emails or execute another script that would create a trouble ticket.
It will send notifications about the following issues:

> Repository database down
> All OMSs are down
> Repository-side collection job that is broken or has an invalid schedule
> Notification job that is broken or has an invalid schedule

Recommendation: To set up Out-of-Band Notifications, refer to the MOS note "How To Setup Out Of Bound Email Notification In 12c" (Doc ID 1472854.1).

Modify the Performance Test for the EM Console Service

The EM Console Service has an out-of-box defined performance test that will be run to determine the status of this service. The test issues a request via an HTTP method to a specific URL. By default, the HTTP method used for this test is GET but, for performance reasons, it should be changed to HEAD. The URL used for this request is set to point to a specific OMS server by default. If a multi-OMS system has been implemented and the OMS servers are behind a load balancer, then the URL used by EM as the URL in notifications and by this EM Service test must be modified to point to the load balancer name instead of a specific server name. If this is not done and a portion of the infrastructure is down, then the EM Console Service will show down, as this test will fail.

Recommendation: Modify the HTTP method for the EM Console Service test, and the URL if required, following the detailed steps below.

Setting the Console URL if a multi-OMS system is implemented:

1. Click on Setup / Manage Cloud Control / Health Overview.
2. Click on the "Add" push button next to Console URL.
3. Type in the URL and click OK.

Modifying the HTTP method for the EM Console Service test:

1. Click on Targets / Services. From the list of services, click on the EM Console Service.
2. On the EM Console Service page, click on the Test Performance tab.
3. At the bottom of the page, click on the Web Transaction test called EM Console Service Test.
4. Click on the Service Tests and Beacons breadcrumb near the top of the page.
5. Under the Service Tests section, make sure the EM Console Service Test is selected and click on the Edit push button.
6. Under the Transaction section, make sure the Access Logout page transaction is selected and click on the Edit push button.
7. Under the Request section, change the HTTP Method from the default of GET to the recommended value of HEAD. The URL in this section should point to the load balancer name instead of a specific server name if multiple OMSs have been implemented and the Console URL was set according to the steps above.

Check for Known Issues

Job Purge Repository Job is Shown as Down

This issue is caused after upgrading EM from 12c to 12cR2. On the Repository page under Setup → Manage Cloud Control → Repository, the job called "Job Purge" is shown as down and the Next Scheduled Run is blank. Also, repvfy reports that this is a missing DBMS_SCHEDULER job. NOTE: this issue is fixed in version 12.1.0.3.

Recommendation: In EM 12cR2, the apply_purge_policies have been moved from the MGMT_JOB_ENGINE package to the EM_JOB_PURGE package. To remove this error, execute the command below:

$ repvfy verify core -test 2 -fix

To confirm that the issue is resolved, execute:

$ repvfy verify core -test 2

It can also be verified by refreshing the Job Service page in EM and checking the status of the job; it should now be Up.

Configure the Listener Targets in EM with the Listener Password (where required)

In a RAC environment, typically the grid home and RDBMS homes are owned by different OS users. The listener always runs from the grid home, and only the listener process owner can query or change the listener properties. The listener uses a password to allow other OS users (e.g. the agent user) to query the listener process for parameters, and EM has a default listener target metric that will query these properties. If the agent is not permitted to do this, a TNS incident (TNS-1190) will be logged in the listener's log file, and EM will report this error every time it is encountered there. This means that the listener targets in EM also need to have this password set; not doing so will cause many TNS incidents (TNS-1190).

Recommendation: Set a listener password and include it in the configuration of the listener targets in EM. For steps on setting the listener password, see MOS notes 260986.1 and 427422.1.
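As a sketch of the listener-side half of that recommendation (the listener name and the interactive prompts are illustrative; the MOS notes above remain the authoritative procedure), the password is set with the lsnrctl change_password command as the grid home owner:

```shell
$ lsnrctl
LSNRCTL> set current_listener LISTENER
LSNRCTL> change_password
LSNRCTL> save_config
```

The same password is then entered in the listener target's monitoring configuration in EM so the agent is allowed to query the listener properties.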


Installation & Configuration

Oracle Enterprise Manager 12c Configuration Best Practices (Part 2 of 3)

This is part 2 of a three-part blog series that summarizes the most commonly implemented configuration changes to improve performance and operation of a large Enterprise Manager 12c environment. A "large" environment is categorized by the number of agents, targets and users. See the Oracle Enterprise Manager Cloud Control Advanced Installation and Configuration Guide chapter on Sizing for more details on sizing your environment properly.

Part 1 of this series covered recommended configuration changes for the OMS and Repository
Part 2 covers recommended changes for the WebLogic server
Part 3 will cover general configuration recommendations and a few known issues

The entire series can be found in the My Oracle Support note titled Oracle Enterprise Manager 12c Configuration Best Practices [1553342.1].

WebLogic Server Recommendations

Stuck Thread Max Time

By design, WLS will ping applications and wait for a response for up to the value of Stuck Thread Max Time, which is set to 600 seconds by default. This is a heartbeat to ensure that a particular thread is not stuck. EM, on the other hand, will keep threads running as long as there is work in the queue, and they will not respond to a heartbeat. This is expected behavior for both EM and WebLogic Server; however, it will cause WLS to time out and error, which will create an incident within EM. If this parameter is not increased, the number of incidents created by this WLS error can be significant. Please note, an enhancement bug has been created requesting that EM install out of the box with a higher value for this parameter.

Recommendation: To assist in reducing these errors, increase the stuck thread timeout in the Admin server as per the steps below. Note that this will reduce the number of above alerts but may not remove them completely.

1. Log onto the WLS Admin server.
2. Click on Environment in the top right side menu and expand Servers. Click on one of the OMS server names.
3. Click on the Tuning tab in the middle window and then on Lock and Edit under the Change Center (top left).
4. Change the value for Stuck Thread Max Time to 1800.
5. Save and Activate the change. This will require a restart of the OMS server for it to go into effect and will need to be repeated for all servers in the Admin Console (i.e. OMS servers and ADMINSERVER), but only needs to be done once per site/domain. If the environment contains standby OMS servers, repeat these steps for all standby OMS servers and the ADMINSERVER, although a reboot is not required for the standby OMS servers as they are not running.

Modify Log Settings

The default severity setting for logging information in the WebLogic Server is set at a level that will create excessive logging data. These settings should be set to a higher severity level.

Recommendation: To modify these settings, follow the steps below:

1. Log onto the WLS Admin server.
2. Click on Environment in the top right side menu and expand Servers. Click on the first OMS server.
3. Click on the Logging tab in the middle window and then on Lock and Edit under the Change Center (top left).
4. Expand the Advanced option at the bottom of the page.
5. Change the Minimum log severity from Info to Warning.
6. Change the Domain Log Broadcaster Severity Level from Notice to Error.
7. Save and Activate the change. This does not require a restart of the OMS server for it to go into effect but will need to be repeated for all servers in the Admin Console (i.e. OMS servers and ADMINSERVER). This change only needs to be done once per site/domain. If the environment contains standby OMS servers, repeat these steps for all standby OMS servers and the ADMINSERVER.


Installation & Configuration

Oracle Enterprise Manager 12c Configuration Best Practices (Part 1 of 3)

The objective of this three-part blog series is to summarize the most commonly implemented configuration changes to improve performance and operation of a large Enterprise Manager 12c environment. A "large" environment is categorized by the number of agents, targets and users. See the Oracle Enterprise Manager Cloud Control Advanced Installation and Configuration Guide chapter on Sizing for more details on sizing your environment properly.

Part 1 of this series covers recommended configuration changes for the OMS and Repository. Part 2 will cover recommended changes for the WebLogic Server. Part 3 will cover general configuration recommendations and a few known issues. The entire series can be found in the My Oracle Support note titled Oracle Enterprise Manager 12c Configuration Best Practices [1553342.1].

OMS Recommendations

Increase Java Heap Size

For larger enterprises, there may be a need to increase the amount of memory used for the OMS. One of the symptoms of this condition is "sluggish" performance on the OMS. If it is determined that the OMS needs more memory, it is done by increasing the Java heap size parameters. However, it is very important to increase this parameter incrementally and be careful not to consume all of the memory on the server. Also, Java does not always perform better with more memory.

Verify: The parameters for the Java heap size are stored in the following file:

<MW_HOME>/user_projects/domains/GCDomain/bin/startEMServer.sh

Recommendation: If you have more than 250 agents, increase the -Xmx parameter, which specifies the maximum size for the Java heap, to 2 GB. As the number of agents grows, it can be incrementally increased. Note: Do not increase this larger than 4 GB without contacting Oracle. Change only the -Xmx value in the line containing USER_MEM_ARGS="-Xms256m -Xmx1740m ...options..." as seen in the example below. Do not change the Xms or MaxPermSize values. Note: Change both lines as seen below; the second occurrence will be used if running in debug mode.

Before:

if [ "${SERVER_NAME}" != "EMGC_ADMINSERVER" ] ; then
  USER_MEM_ARGS="-Xms256m -Xmx1740m -XX:MaxPermSize=768M -XX:-DoEscapeAnalysis -XX:+UseCodeCacheFlushing -XX:ReservedCodeCacheSize=100M -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+CMSClassUnloadingEnabled"
  if [ "${JAVA_VENDOR}" = "Sun" ] ; then
    if [ "${PRODUCTION_MODE}" = "" ] ; then
      USER_MEM_ARGS="-Xms256m -Xmx1740m -XX:MaxPermSize=768M -XX:-DoEscapeAnalysis -XX:+UseCodeCacheFlushing -XX:ReservedCodeCacheSize=100M -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+CMSClassUnloadingEnabled -XX:CompileThreshold=8000 -XX:PermSize=128m"
    fi
  fi
  export USER_MEM_ARGS
fi

After:

if [ "${SERVER_NAME}" != "EMGC_ADMINSERVER" ] ; then
  USER_MEM_ARGS="-Xms256m -Xmx2560m -XX:MaxPermSize=768M -XX:-DoEscapeAnalysis -XX:+UseCodeCacheFlushing -XX:ReservedCodeCacheSize=100M -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+CMSClassUnloadingEnabled"
  if [ "${JAVA_VENDOR}" = "Sun" ] ; then
    if [ "${PRODUCTION_MODE}" = "" ] ; then
      USER_MEM_ARGS="-Xms256m -Xmx2560m -XX:MaxPermSize=768M -XX:-DoEscapeAnalysis -XX:+UseCodeCacheFlushing -XX:ReservedCodeCacheSize=100M -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+CMSClassUnloadingEnabled -XX:CompileThreshold=8000 -XX:PermSize=128m"
    fi
  fi
  export USER_MEM_ARGS
fi

Repository Recommendations

Repvfy execute optimize

This command can be executed to establish a baseline and set the environment to the "recommended" values based on the configuration of that environment. The following command will check the existing settings and modify them if needed:

$ repvfy execute optimize

This command does several things, some of which include the following:

1. Internal task system: Verify there are at least 2 short-running and 2 long-running worker threads. Verify that the availability worker threads are disabled, since these threads are now obsolete.
2. Repository settings: Set the retention time for the MGMT_SYSTEM_ERROR_LOG table to 7 days (unless this setting has already been changed). Disable PL/SQL and metric tracing to reduce logging when not necessary. Recompile any invalid SYSMAN objects.
3. Target system: Tune the PING grace period to allow the OMS to wait a longer period of time after startup before checking the heartbeat of the agents.

Increase Task Workers

Task worker threads are used to pick up tasks from the dbms_scheduler jobs queue based on their type. These jobs are used to calculate metrics, roll up metrics for clusters and provide the self-monitoring metrics for EM. Tasks are defined as short or long. Many larger systems require more than one short and one long task worker to do the housekeeping jobs in a timely manner without creating a backlog. The recommendation is to have at least 2 short-running worker threads and 2 long-running worker threads.

Verify: To determine if you have a backlog:

$ repvfy verify repository -test 1001

If you have a backlog, execute the command below to gather more details on the performance data for the task workers:

$ repvfy dump task_health

Recommendation: If the output from the dump task_health indicates a backlog, execute the following statement to set the recommended number of task workers for both short-running tasks (type 0) and long-running tasks (type 1). This will increase the settings to the recommended settings for your environment (this command is not necessary if you already ran it in the first recommended step above):

$ repvfy execute optimize

If, after setting the recommended settings, the site has grown to such a size that there is still a task worker backlog, use this routine to increase the number of workers above 2:

$ sqlplus /nolog
SQL> connect SYSMAN;
SQL> exec gc_diag2_ext.SetWorkerCounts(<number>);

The number can be 3 or 4 (the routine will not accept values larger than 4). If you need to go higher than 4, contact Oracle Support.

Increase Ping Grace Period

Upon system startup, the OMS must ping each agent to get a current heartbeat and update the availability state for all the agents. In systems with hundreds or thousands of agents, this can take longer. By increasing the grace period before the ping/heartbeat system kicks in and contacts agents, we allow more time for the agents to start uploading first.

Recommendation: Execute the following statement. This command will evaluate the system and set the appropriate value for the Ping Grace Period to give the majority of the agents a chance to begin their upload upon system startup (this command is not necessary if you already ran it on your environment):

$ repvfy execute optimize

If, after an OMS restart, you still see a high number of pending agents for a prolonged period of time, this value may need to be set higher. Execute the following statement and contact Oracle Support, providing the output from the dump ping_health command:

$ repvfy dump ping_health
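The two-line -Xmx edit can also be scripted rather than done by hand. A hedged sketch that bumps both occurrences of -Xmx1740m to -Xmx2560m in a stand-in file (always back up the real startEMServer.sh first; the heredoc content below is a trimmed stand-in for the actual file):

```shell
#!/bin/sh
# Bump -Xmx from 1740m to 2560m in a stand-in for startEMServer.sh.
F=$(mktemp)
cat > "$F" <<'EOF'
USER_MEM_ARGS="-Xms256m -Xmx1740m -XX:MaxPermSize=768M"
USER_MEM_ARGS="-Xms256m -Xmx1740m -XX:MaxPermSize=768M -XX:PermSize=128m"
EOF
cp "$F" "$F.bak"                                    # back up first, as you would the real file
sed 's/-Xmx1740m/-Xmx2560m/g' "$F" > "$F.new" && mv "$F.new" "$F"
NEW_COUNT=$(grep -c 'Xmx2560m' "$F")                # both occurrences should now be updated
echo "lines updated: $NEW_COUNT"
```

The substitution targets only the -Xmx token, leaving Xms and MaxPermSize untouched, in line with the recommendation above.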


Installation & Configuration

Relocating targets with EM 12c

Multi-Agent targets

Some targets (like RAC databases, clusters, FMW domains, etc.) are considered 'clustered' and have failover built into them 'by design'. Enterprise Manager handles those targets in a special way: they are marked as 'multi-Agent' targets. They are discovered on all Agents of the 'cluster' or 'set of hosts' they can run on, and the OMS will decide which Agent will do the actual 'monitoring' of the target in question (OMS mediation). If that Agent goes down or becomes 'unavailable', the OMS will choose another Agent from the discovered set to take responsibility for that target and continue the monitoring.

For these targets, the 'relocate_target' functionality should not be used, since the OMS will take care of the failover and move the monitoring to a 'surviving Agent' in case a failover is needed. Forcing a target to get moved to another Agent should also not be done with the 'relocate' functionality, since the target is in almost all cases linked to other targets (like CRS or cluster targets) which have to have known associations with these targets.

To see which targets are considered 'OMS mediated', run this query in the repository:

SELECT host_name, entity_type, entity_name
  FROM em_manageable_entities
 WHERE manage_status = 2      -- Managed
   AND promote_status = 3     -- Promoted
   AND monitoring_mode = 1    -- OMS mediated
 ORDER BY host_name, entity_type, entity_name;

To see the list of Agents that have discovered these OMS mediated targets, and can assume monitoring of them, use these REPVFY commands:

To see which Agents can monitor an OMS mediated target:
$ repvfy show master_agent -name "<name_of_target>" -type "<type_of_target>"

To see the last 10 Agent failovers for a given target:
$ repvfy show master_agent_history -name "<name_of_target>" -type "<type_of_target>"

To see which Agents can monitor the various components of a Database Machine (Exadata):
$ repvfy show exadata_master_agent -name "<name_of_the_dbmachinetarget>"

For debugging/maintenance purposes, a special routine exists in REPVFY to force an OMS mediated failover:
SQL> exec gc_diag3_ext.ForceFailover('<name_of_target>','<type_of_target>');

For more information on EMDIAG, see 421053.1: EMDIAG Master Index.

Manual relocation of targets

In case a 'regular' target needs to get moved to another Agent, a special EMCLI verb exists to move the definition and monitoring settings of a target from one Agent to another Agent:

$ emcli relocate_targets -src_agent=<source_agent_target_name> -dest_agent=<dest_agent_target_name> -target_name=<name_of_target_to_be_relocated> -target_type=<type_of_target_to_be_relocated> -copy_from_src -force=yes

Gotchas:

For targets that have known 'associations' (services like FMW, EBiz, etc.), the '-force' flag will move all related targets together with the main service to the new Agent.

For those targets that have monitoring settings that are host/Agent specific, the values of those properties will have to get updated when the target moves:
$ emcli relocate_targets <...options...> -changed_param=<propName>:<propValue>

If there is 'clock skew' between the source and the destination Agent, the availability of the target might be impacted when the target gets moved from the old to the new Agent. To force the new time of the target, the 'ignoreTimeSkew' parameter can be used, to make the repository 'accept' the 'older' time from the new Agent:
$ emcli relocate_targets <...options...> -ignoreTimeSkew=yes

Automated relocation of targets

For cold failover clusters (CFC), the EMCLI way of moving a target from one Agent to another Agent will not work because of the interactive nature of EMCLI (and the password requirement). For those setups, there is an EMCTL command to take ownership of a target. An Agent can only assume control over a target; it cannot give a target 'away' or push a target onto another Agent. For security reasons, the list of Agents that can 'assume' control over a given target needs to get registered first in the repository.

For every target requiring automated failover (emctl failover), run this EMCLI command once to set up the list of possible Agents:

$ emcli set_standby_agent -src_agent=<source_agent_target_name> -dest_agent=<dest_agent_target_name> -target_name=<name_of_target_to_be_relocated> -target_type=<type_of_target_to_be_relocated>

If more than 2 Agents are needed, run the EMCLI command multiple times for each target, each time with a new Agent specified for the '-dest_agent' parameter.

Once the setup has been done, an Agent can take control over a target by running this command:

$ emctl relocate_target agent "<name_of_target>" "<type_of_target>"

Using this EMCTL command, the cluster scripts that are run before and after the failover of a node can be enhanced to let the Agent on the new (surviving) node assume control over the desired targets.

There is no built-in way in EM today to visualize or retrieve the failover configuration of a target. To be able to see the setup done for a particular target, the following command has been added to REPVFY:

To see which Agents can 'assume' control over a target:
$ repvfy show agent_failover -name "<name_of_target>" -type "<type_of_target>"

For more information on EMDIAG, see 421053.1: EMDIAG Master Index.

Stay Connected with Oracle Enterprise Manager: Twitter | Facebook | YouTube | Linkedin | Newsletter
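In a CFC setup, the emctl relocate_target call typically lives in the cluster's post-failover hook. A hypothetical sketch of such a hook; the target names and the DRY_RUN guard are illustrative assumptions, and a real script would invoke the surviving node's emctl directly:

```shell
#!/bin/sh
# Post-failover hook: the Agent on the surviving node assumes the targets.
# DRY_RUN=1 only prints the commands, so the wiring can be checked safely.
DRY_RUN=1
TARGETS="payroll_db:oracle_database hr_listener:oracle_listener"  # hypothetical targets
LAST_CMD=""
for t in $TARGETS; do
  name=${t%%:*}
  type=${t##*:}
  LAST_CMD="emctl relocate_target agent \"$name\" \"$type\""
  if [ "$DRY_RUN" = "1" ]; then
    echo "$LAST_CMD"
  else
    emctl relocate_target agent "$name" "$type"
  fi
done
```

Remember that each target/Agent pair must already have been registered with emcli set_standby_agent, as described above, or the emctl call will be rejected.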


Installation & Configuration

Using Advanced Notifications in Oracle Enterprise Manager 12c

When using an enterprise monitoring tool such as Oracle Enterprise Manager 12c, one of the most critical components is notification. Once an alert or issue has been identified, how do you tell the right people at the right time? Most enterprises use e-mail or open a trouble ticket. As you can imagine, no two enterprises are the same when it comes to their tools and processes. Many customers use one of the more common and well known trouble ticketing systems, but quite a few use non-standard or custom (homegrown) trouble ticketing systems. Some customers have special routing requirements or corporate standards and have custom applications which handle all emailing functions instead of directly emailing using an SMTP server. Oracle Enterprise Manager 12c can handle all of these situations by utilizing one of the various notification methods provided: e-mail, 3rd party connectors and advanced notification methods. There are three types of advanced notifications: SNMP, OS Command or PL/SQL. This blog will introduce you to the OS Command and PL/SQL notification methods available in EM 12c and provide an example of using a custom OS script for notifications.

Advanced Notification Methods: OS Command and PL/SQL

With the advanced notification methods, you can write a notification directly to a table or an OS log for further processing, or push a notification to any trouble ticketing system using their command line tools, providing the data and variables from EM 12c as input. This method is used by some customers whose corporate standard requires that all alerts be written to a log file, which the ticketing system can poll on a regular basis for alerts. Additionally, advanced notifications allow you to call a procedure whose interface is PL/SQL or access additional data in the Enterprise Manager repository. For example, for a database alert on % of Processes or Sessions Used, you could use PL/SQL to notify the proper application team (stored in a custom target property) that they have too many sessions. To create and configure advanced notifications, the user must have Super Admin privileges.

Creating OS or PL/SQL Scripts

When creating an advanced notification, the first step is to create the OS or PL/SQL script. It is highly recommended to include debugging and logging information so you can fully understand what information EM 12c is passing and assist in troubleshooting. When using an OS Command in a multi-OMS environment, the script needs to reside on all OMS servers or, preferably, in a shared location. If you plan to use PL/SQL, the procedure must be created in the repository database before configuring it as a notification method. Of course, any custom objects created in the repository should be created under a separate schema and privileges granted to the SYSMAN user. The Oracle Enterprise Manager Cloud Control Administrator's Guide chapter on Notifications has detailed examples of OS scripts and PL/SQL that can be utilized in different situations. This chapter also provides detailed information on passing information to the OS or PL/SQL script and on troubleshooting.

Creating Custom Notification Methods

After defining the OS Command or PL/SQL script, you need to add it to EM 12c as a notification method. In this example, we will create a simple OS script that logs events to a log file, which can be further processed by a custom ticketing system:

#!/bin/ksh
LOG_FILE=/tmp/event.log
if test -f "$LOG_FILE"
then
  echo "$TARGET_NAME $MESSAGE $EVENT_REPORTED_TIME" >> "$LOG_FILE"
else
  exit 100
fi

To create a notification method, log in as a user with Super Admin privileges and select Setup / Notifications / Notification Methods. Under the Scripts and SNMP Traps section, click the Add drop down box, select OS Command and click Go. Enter a name and provide the fully qualified script location. Use the Test button to validate and click Save.

Adding Notification Methods to Incident Rules

To receive notifications, you will need to create an Incident Rule set, select relevant targets and events to notify on, and then select the OS Command advanced notification method we created earlier. In this section we will go through the steps to create a simple incident rule. For full details on how to configure your Incident Rules, see the Oracle Enterprise Manager Cloud Control Administrator's Guide.

1. Go to Setup / Incidents / Incident Rules. At this point you can either select a rule set to edit, or create a new rule set. In this example we are going to create a new rule set by clicking Create.
2. On the first screen, enter a user friendly name and description. Since this example is planning to use an advanced notification, choose Enterprise.
3. Since we only want notifications on our Production targets, we're going to narrow down the list of targets. On the Targets tab select Specific targets, select Groups in the drop down box and click Add. Search for the desired group and select it.
4. Click on the Rules tab, and select Create. Select the type of rule to create; in this example we are using Incoming events and updates to events. Enter filter criteria; in this example we are filtering on all Metric Alerts in Severity Warning and Critical. Click Next.
5. On the Add Actions screen you will define the action that you wish the notification to perform. Click Add. In the Create Incident or Update Incident section you can choose to create an Incident, assign Incidents to an administrator, set priority/status or escalate (if update is selected). Under the Notifications section, you can select Basic Notifications to send an e-mail or page to a particular user or users, on top of any methods you might select in the Advanced Notifications section. In Advanced Notifications you will see the method we previously created; select that method and click Continue. Click Next.
6. Provide a user friendly name for your rule and a description, then click Next. Review the details of your rule and click Continue. Note the rules are not saved until the Save button is clicked. Click OK, review your rule set, click Save, then click OK.

Validating Incident Rules

Once you have configured your rule set, you should trigger a critical alert on one of the targets included in the rule set. Verify that the rule set triggers the notification method you created and logs the event in the log file. It's helpful to keep the e-mail option turned on during testing so you can validate when the rule was triggered as well.

Identify a target in the group you configured notifications for in the previous section. For this example, we will use a database instance and trigger a Process Limit Usage (%) event. From Oracle Database / Monitoring / All Metrics, we've identified a metric that we can lower thresholds on to trigger an event. Click on Process Limit Usage (%) to drill down to this metric. Notice our Real Time Value of Processes is 16.4.

Click on Modify Thresholds to set thresholds. Set the Warning and Critical thresholds lower than the Real Time Value we made note of earlier. Ours was 16.4, so set the Warning to 5 and Critical to 10. In the Occurrences before Alert field, you may wish to change the value to 1. Since the collection frequency is 10 minutes, if occurrences are set to 3, you will have to wait 30 minutes for the alert to trigger and again to clear. Click Save Thresholds and close the confirmation window.

Navigate to Oracle Database / Monitoring / Incident Manager. Since we have the out-of-box rule set enabled, we get an Incident for every critical metric alert, and we can see the incident for Process Limit in the Unacknowledged incidents view. If you have disabled the out-of-box rule set or don't see the incident, check the Events without incidents view. Select the appropriate line and in the lower pane click on the Events tab. On the Events tab, click on the Message link to drill into the event details. Here you'll see the notification method was called under the Last Comment field. From here, click on the Updates tab to see more details. In the Updates tab, you will see the details of the alert and the notification method that was called to run our /tmp/event_log.sh script. Finally, we can check the /tmp/event.log file and see the information that was reported. Be sure to go back and set your thresholds to their regular values to clear the false alert you triggered!

Summary

The options for notifications in Oracle Enterprise Manager 12c are very flexible. You can choose e-mail, one of the available connectors to integrate with a 3rd party ticketing system, or use one of the advanced notification methods (SNMP, OS Command or PL/SQL). In this blog, we've shown you how to create an advanced OS Command notification method, how to configure an incident rule set to call that method and how to validate it by triggering an alert. Once you are familiar with how to create advanced notification methods and rule sets, you can customize your notifications to suit your needs. For additional information on configuring your environment for enterprise monitoring, see the whitepaper Strategies for Scalable, Smarter Monitoring using Oracle Enterprise Manager Cloud Control 12c.

Stay Connected with Oracle Enterprise Manager: Twitter | Facebook | YouTube | Linkedin | Newsletter
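Once events land in /tmp/event.log, the ticketing side of the integration has to parse them back out. A minimal polling sketch, assuming the space-separated target/message/time layout the ksh script above writes; the sample line and field layout are assumptions, and a real integration would use a delimiter that cannot appear in the message text:

```shell
#!/bin/sh
# Pull the target name (first token) out of each logged event line.
LOG=$(mktemp)
printf '%s\n' 'proddb01 Process Limit Usage is 16.4%, crossed warning (5) threshold' >> "$LOG"
FIRST_TARGET=$(awk 'NR==1 {print $1}' "$LOG")   # target name is the first token
echo "first event target: $FIRST_TARGET"
rm -f "$LOG"
```

A production poller would also track how far into the file it has read (for example with a byte-offset checkpoint) so events are not ticketed twice.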


Installation & Configuration

Network Ports Used in Oracle Enterprise Manager 12c

When planning and configuring your Oracle Enterprise Manager 12c implementation, you will have many infrastructure considerations. One of the most often discussed pieces is the network ports that are used and how to configure load balancers, firewalls and ACLs for communication. This blog post will help identify the typical default port and range for each component, how to identify it and how to modify the port usage.

To modify most ports during installation, select the Advanced Installation and set the appropriate ports on the Port Configuration Details screen. Once the system is installed, you can use the following EMCTL or OMSVFY commands to validate components and port assignment:

$ emctl status oms -details
$ omsvfy show opmn
$ omsvfy show ports

To verify if a port is free, run the following command:

On Unix: $ netstat -an | grep <port_no>
On Microsoft Windows: > netstat -an | findstr <port_no>

For more information on OMSVFY (part of the EMDIAG toolkit) see MOS Note 421053.1: EMDIAG Troubleshooting Kits Master Index.

External Ports

These ports will be used in every Enterprise Manager 12c installation and will require firewall and/or ACL modifications if your network is restricted. These are also the components that will be added to your load balancer configuration.

Default Port 4889 (Range 4889 – 4898) – Enterprise Manager OHS Upload HTTP. Usage: Agent communication to OMS (unsecure); used in load balancer. Modify: After install, follow notes 1381030.1 and 1385776.1; requires changes on all Agents.

Default Port 1159 (Range 1159, 4899 – 4908) – Enterprise Manager OHS Upload HTTP SSL. Usage: Agent communication to OMS (secure); used in load balancer. Modify: After install, follow notes 1381030.1 and 1385776.1; requires changes on all Agents.

Default Port 7788 (Range 7788 – 7798) – Enterprise Manager OHS Central Console HTTP (Apache/UI). Usage: Web browser connecting to Cloud Control Console (unsecure); used in load balancer and for EM CLI. Modify: After install, follow note 1381030.1.

Default Port 7799 (Range 7799 – 7809) – Enterprise Manager OHS Central Console HTTP SSL (Apache/UI). Usage: Web browser connecting to Cloud Control Console (secure); used in load balancer and for EM CLI. Modify: After install, follow note 1381030.1.

Default Port 7101 (Range 7101 – 7200) – EM Domain WebLogic Admin Server HTTP SSL Port. Usage: Cloud Control Admin Server. Modify: After install, follow note 1109638.1.

Default Port 3872 (Range 3872, 1830 – 1849) – Cloud Control Agent. Usage: Only the OMS will connect to this port, to either report changes in the monitoring, submit jobs, or request real-time statistics. Modify: The port can be provided during Agent install. If the Agent port needs to be changed at a later date, this can be done with the following command on the Agent:

emctl setproperty agent -name EMD_URL -value https://hostname.domain:port/emd/main/

This will allow the Agent to run on the new port; however, the target does not get renamed, so it continues to show the original port.

Default Port 1521* (Range depends on listener configuration) – Database Targets – SQL*Net Listener. Usage: For the repository database, only the OMS will connect to store management data from the Agents. For all monitored target databases, the OMS will retrieve information requested by browser clients. Modify: To modify this port for the repository database, change the listener.ora file for the EM repository and restart the listener. Then, for every OMS machine using that repository, run the following:

emctl stop oms
emctl config oms -store_repos_details -repos_conndesc <connect descriptor of database> -repos_user sysman
emctl start oms
emctl config emrep -agent <agent name> -conn_desc <connect descriptor of database>

To modify this port for monitored targets, change the listener configuration on the target, then update Monitoring Configuration in EM.

Default Port 7101 (Range 7101 – 7200) – FMW Targets – Admin Console. Usage: Outgoing from OMS, used for managing FMW targets. Modify: After install, follow note 1109638.1.

ICMP (no port) – Usage: Outgoing from OMS to host servers if the Agent is unreachable; validates whether the server is up or down.

Internal Ports

These ports are required for internal Enterprise Manager communication and typically do not require additional firewall/ACL configuration.

Default Port 7201 (Range 7201 – 7300) – EM Domain WebLogic Managed Server HTTP Port. Usage: Fusion Middleware communication. Modify: Configured during installation.

Default Port 7301 (Range 7301 – 7400) – EM Domain WebLogic Managed Server HTTP SSL Port. Usage: Fusion Middleware communication. Modify: Configured during installation.

Default Port 7401 (Range 7401 – 7500) – Node Manager HTTP SSL Port. Usage: Fusion Middleware communication. Modify: Configured during installation.

Default Port 6702 (Range 6100 – 6199) – Oracle Notification Server (OPMN) Local. Usage: Ports used by OPMN can be verified from <MW_HOME>/gc_inst/WebTierIH1/config/OPMN/opmn/opmn.xml:

<debug comp="" rotation-size="1500000"/>
<notification-server interface="any">
<port local="6700" remote="6701"/>

Modify: To change opmn.xml to use free ports:
1. Stop the OMS.
2. Take a backup of the existing opmn.xml and ports.prop in the <MW_HOME>/gc_inst/WebTierIH1/config/OPMN/opmn directory.
3. Edit the opmn.xml file; under the <notification-server> element, modify the local/remote port as necessary to a free available port and save the file.
4. Edit the ports.prop file, modify the remote/local port parameters as necessary and save the file.
5. Start the OMS.

Default Port 6703 (Range 6200 – 6201) – Oracle Notification Server (OPMN) Remote. Usage and modification steps are the same as for the OPMN local port above.

Optional Ports

These ports are required only if certain components are to be used, and firewall/ACL changes may be needed.

Default Port 443 – Secure web connection (https) to updates.oracle.com, support.oracle.com, ccr.oracle.com, login.oracle.com and aru-akam.oracle.com. Usage: Outgoing from OMS, used for online communication with Oracle for OCM, MOS, Patching, Self-Updates and ASR. Modify: Proxy settings defined via the UI (Setup -> Proxy Settings); do not use the OMS parameters!

Default Port 51099 – Application Dependency and Performance RMI Registry Port. Usage: ADP. Modify: Configured during installation.

Default Port 55003 – Application Dependency and Performance Java Provider Port. Usage: ADP. Modify: Configured during installation.

Default Port 55000 – Application Dependency and Performance Remote Service Controller Port. Usage: ADP. Modify: Configured during installation.

Default Port 4210 – Listen Port. Usage: ADP. Modify: Configured during installation.

Default Port 4211 – SSL Listen Port. Usage: ADP. Modify: Configured during installation.

Default Port 3800 – JVM Managed Server Listen Port. Usage: JVM. Modify: Configured during installation.

Default Port 3801 – JVM Managed Server SSL Listen Port. Usage: JVM. Modify: Configured during installation.

Default Port 9701 (Range 9701 – 49152) – BI Publisher HTTP. Usage: BI Publisher. Modify: During install, can modify with the configureBIP script; post-install, can be modified per note 1524248.1.

Default Port 9702 (Range 9701 – 49152) – BI Publisher HTTP SSL Port. Usage: BI Publisher. Modify: During install, can modify with the configureBIP script; post-install, can be modified per note 1524248.1.

Stay Connected with Oracle Enterprise Manager: Twitter | Facebook | YouTube | Linkedin | Newsletter
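The netstat check above can be wrapped in a small helper when scripting a pre-install port audit. A sketch that decides whether a port appears in LISTEN state; it is fed canned netstat-style lines here, since live output differs per host, and the sample addresses are illustrative:

```shell
#!/bin/sh
# Count LISTEN lines mentioning a given port in netstat -an style output.
port_in_use() {
  # $1 = port number; reads netstat-style output on stdin
  grep "LISTEN" | grep -c "[.:]$1 "
}
SAMPLE='tcp 0 0 0.0.0.0:4889 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:7799 0.0.0.0:* LISTEN'
IN_USE=$(printf '%s\n' "$SAMPLE" | port_in_use 4889)
FREE=$(printf '%s\n' "$SAMPLE" | port_in_use 3872)
echo "4889 in use: $IN_USE, 3872 in use: $FREE"
```

The [.:] bracket expression covers both Linux netstat (colon before the port) and Solaris-style output (dot before the port); in practice you would pipe the real `netstat -an` into the helper.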

