Wednesday Jul 15, 2015

Release of BP01 for IDMLCM

By Brad Donison

Bundle Patch (BP) 01 for IDMLCM was released today.

This release is significant: it is the first release of the IDMLCM Deployment Tool supported for production systems. Prior to this, LCM-based deployments were not intended for production use. Because BP01 is a production-supported release, it adds the following deployment options beyond those in the original release of the tool:

  •      OIM (DB) with HA (supported on single node as well as 4 nodes)
  •      OAM (OMSS) with HA and existing OID (supported on single node as well as 4 nodes)
  •      OAM (OMSS) with HA and existing OUD (supported on single node as well as 4 nodes)

To download BP01 for IDMLCM, follow the link below.

More information about the IDMLCM Deployment Tool can be found here:

Release of BP 01 for OIM

By Devesh Nautiyal

BP 01, the first Bundle Patch for OIM, has now been released.

This release is identified as Bug 21169810 – Oracle Identity Management Suite Bundle Patch

BP01 includes 8 customer-visible bug fixes and 21 non-visible bug fixes. The OIM OPatch version is 21308416.

To download BP 01 for OIM, go to document:

Information And Bug Listing of Oracle Identity Manager Bundle Patches: (11gR2PS3) Version (Doc ID 2031368.1).

Within the document, select the link and then click on the Patch:21169810 link.

This will take you to the download page.

Release of BP 07 for OIM

By Kevin Burnett

The long-awaited release of BP 07 for OIM happened today.

This release is identified as Bug 20963120 – Oracle Identity Management Suite Bundle Patch

Besides rolling up the plethora of bug fixes from previous Bundle Patches, BP 07 includes 37 customer-visible bug fixes and 12 non-visible bug fixes. The OIM OPatch version is 21322054.

One of the more interesting bugs fixed in this release: if you create a user in OIM, then create a proxy of that user, and give both the same display name, you get an exception.

To download BP 07, go to document: Information And Bug Listing of Oracle Identity Manager Bundle Patches: (11gR2PS2) Version (Doc ID 1676169.1).

Within the document, select the link and then click on the Patch:20963120 link. This will take you to the download page.

With BP 07 for OIM, many problems can be solved with one simple install.

Wednesday Jul 01, 2015

Configuring OAM SSO for ATG BCC and Endeca XM



Single sign-on, or “SSO” as it’s commonly referred to, is an authentication method that allows a user access to multiple applications through a single, secure point of entry. Rather than authenticating separately for each application, users authenticate once through a centralized service. The benefits of SSO to end users are obvious, but there are also many cost and compliance advantages that interest large organizations, which is why Oracle’s enterprise customers have increasingly demanded SSO integration with Oracle Access Manager (OAM). With the introduction of Oracle Commerce 11 they now have it, and in this blog I will demonstrate how to use OAM to enable SSO between the ATG Business Control Center (BCC) and Endeca Experience Manager (XM).

Note: The above document is from our A-Team chronicles page. To access the full article, go here:

Monitoring OAM Environment



Security systems, including OAM, reside in a dynamic environment where the parameters that affect system performance are ever changing. On top of that, access management infrastructure like OAM serves as the front door to every application and system in an organization. Therefore, continuous monitoring of such key components is mandatory to ensure the continued success not just of your access and SSO solution but of your very applications themselves. Effective monitoring involves two types of controls: preventive monitoring, which helps ensure failures do not take place, and detective monitoring, which helps you detect a failure when it occurs and take corrective measures. OAM has features to facilitate both types of monitoring. We will go over all the monitoring capabilities offered by the product.

Note: The full article is from our A-Team chronicles page and can be viewed at:

Avoiding LibOVD Connection Leaks When Using OPSS User and Role API

The OPSS User and Role API provides an application with access to identity data (users and roles), without the application having to know anything about the underlying identity store (such as LDAP connection details).

For new development, we no longer recommend the use of the OPSS User and Role API – use the Identity Governance Framework (IGF) APIs instead. However, if you already have code which uses the User and Role API, that code is still supported for the time being.

The OPSS User and Role API supports both LDAP and non-LDAP backends (such as XML files); however, for production use, LDAP backends are the most common, and are strongly recommended. When using an LDAP backend, the OPSS User and Role API retrieves the LDAP connection details from the security providers defined in the WebLogic Security Realm.

Now there are two different modes in which the OPSS User and Role API may operate – virtualize=true, in which all LDAP access is via LibOVD; and virtualize=false, in which LibOVD is not used and the User and Role API’s own LDAP access code is used instead. The difference is that without LibOVD, the User and Role API can only access a single LDAP server (the first defined in the WebLogic security realm); whereas, with LibOVD enabled, the User and Role API uses LibOVD to join the users and roles of two or more LDAP servers into one. A common use case is where you have different LDAPs for different user communities (for example, customers might have user accounts in OUD, while employees might instead be stored in Active Directory).
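For reference, the virtualize flag is typically set as a property on the identity store service instance in the domain's jps-config.xml; a minimal fragment might look like this (the instance and provider names shown follow common OPSS defaults and may differ in your domain):

```xml
<serviceInstance name="idstore.ldap" provider="idstore.ldap.provider">
  <!-- Route all User and Role API LDAP access through LibOVD -->
  <property name="virtualize" value="true"/>
</serviceInstance>
```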

When using the OPSS User and Role API with LibOVD, you need to be careful not to cause connection leaks. The most common symptom of a connection leak is this error: “NamingException: No LDAP connection available to process request for DN”

Connection leaks can be caused by any of the methods which return a SearchResponse object – for example, the search() method. When any of these methods is called, the SearchResponse object it creates is associated with one or more backend LDAP connections in the LibOVD connection pools. These connections will not be returned to the pool until one of the following occurs:

  • you iterate until the end of the response – i.e. call hasNext() until it returns false
  • you call the close() method on the response

If you do not do either of the above, you will leak LibOVD backend LDAP connections. The JavaDoc for the close() method points this out:

Closes this response object and frees all resources associated with it. After close() is called the response object becomes invalid. Any subsequent invocation of its methods will yield undefined results. Close() should be invoked when discarding a response object so that resources are freed. If a response proceeds to its natural end (i.e. when hasNext() returns false) then close() will be called internally. There is no need to call close() explicitly in such case.
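Putting the two rules above into code, a safe pattern is to close the response in a finally block so the connections are returned even if an exception interrupts the iteration. This sketch is illustrative rather than taken from the original article; it assumes the oracle.security.idm types (IdentityStore, SearchParameters, SearchResponse) from the OPSS jars:

```java
import oracle.security.idm.IdentityStore;
import oracle.security.idm.SearchParameters;
import oracle.security.idm.SearchResponse;

public class SafeSearch {
    // Illustrative helper: iterate a search fully, always releasing
    // the LibOVD backend connections held by the response.
    static void listResults(IdentityStore store, SearchParameters params) throws Exception {
        SearchResponse resp = store.search(params);
        try {
            while (resp.hasNext()) {      // draining to the end also releases connections
                System.out.println(resp.next().getName());
            }
        } finally {
            resp.close();                 // releases the backend connections if the loop exited early
        }
    }
}
```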

Some people decide to use netstat, or packet capture tools such as Wireshark or tcpdump, to try to diagnose this issue. In itself, that is not a bad line of thought; however, you may not be able to detect this type of leak with those tools. That is because, after some time (depending on your configuration), either the LDAP server or some middle box such as a load balancer may shut down the underlying network socket; however, the in-memory structure holding the socket will still be checked out of the connection pool. A cleaner thread runs inside LibOVD which clears closed network sockets out of the pool; but if a connection is marked as in use, the cleaner thread will not remove it from the pool, even though the underlying network socket is closed. In this case, you will not see any build-up of actual network connections, even as your LibOVD connection pools become exhausted. One symptom you may see, however, is a lack of reuse of network connections.

Eventually, when the pool is completely filled with leaked connections, all clients will first hang for the maximum pool wait time, and then “NamingException: No LDAP connection available to process request for DN” will occur. At this point, LibOVD may (depending on the version) shut down and restart the connection pool, thus recovering from the problem. However, the connection leak will continue, and the hangs followed by errors will recur.

Note: The above document is from our A-Team chronicles page and not official Oracle documentation:

Improve SSL Support for Your WebLogic Domains



Every WebLogic Server installation comes with SSL support. But for some reason many installations get this interesting error message at startup:

Ignoring the trusted CA certificate “CN=Entrust Root Certification Authority – G2,OU=(c) 2009 Entrust, Inc. – for authorized use only,OU=See,O=Entrust, Inc.,C=US”. The loading of the trusted certificate list raised a certificate parsing exception PKIX: Unsupported OID in the AlgorithmIdentifier object: 1.2.840.113549.1.1.11.

This looks odd, and many people ignore these error messages. However, if your policy is to tolerate only genuine error messages, you will quickly find yourself looking for a solution. The Internet is full of possible solutions: some recommend removing the certificates from the JDK trust store, some recommend using a different trust store. But is this the best solution, and what are the side effects?

Main Article

Our way to the solution starts by understanding the error message. Here it is again.

Ignoring the trusted CA certificate “CN=Entrust Root Certification Authority – G2,OU=(c) 2009 Entrust, Inc. – for authorized use only,OU=See,O=Entrust, Inc.,C=US”. The loading of the trusted certificate list raised a certificate parsing exception PKIX: Unsupported OID in the AlgorithmIdentifier object: 1.2.840.113549.1.1.11.

The first sentence is the result while the second sentence explains the reason. Looking at the reason, we quickly find the “certificate parsing exception“. But what does “PKIX: Unsupported OID in the AlgorithmIdentifier object: 1.2.840.113549.1.1.11” tell us?

  • PKIX stands for Public Key Infrastructure (X.509). X.509 is the standard used to export, exchange, and import SSL certificates.
  • OID stands for Object Identifier. Object Identifiers are globally unique and organized in a hierarchy. This hierarchy is maintained by the standards bodies in every country. Every standards body is responsible for a specific branch and can define and assign entries in the hierarchy.

With this background information we can look up the number 1.2.840.113549.1.1.11 in the OID Repository (see References for the link) and get this result: “iso(1) member-body(2) us(840) rsadsi(113549) pkcs(1) pkcs-1(1) sha256WithRSAEncryption(11)“.

Combining the certificate information in the first sentence and the information from the OID lookup we have the following result:

The certificate from CN=Entrust Root Certification Authority – G2,OU=(c) 2009 Entrust, Inc. – for authorized use only,OU=See,O=Entrust, Inc.,C=US uses SHA256WithRSAEncryption which is not supported by the JDK!

You will probably see more messages for similar or different encryption algorithms used in other certificates.
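As an aside (this check is mine, not from the original article), you can list which signature algorithm names the JCA providers in your JDK register; SHA256withRSA is the JCA name behind OID 1.2.840.113549.1.1.11:

```java
import java.security.Security;

public class CheckSigAlgs {
    public static void main(String[] args) {
        // Security.getAlgorithms lists the registered JCA algorithm names
        // for a service type; the comparison is case-insensitive to be safe.
        boolean supported = Security.getAlgorithms("Signature").stream()
                .anyMatch(a -> a.equalsIgnoreCase("SHA256withRSA"));
        System.out.println("SHA256withRSA supported by this JDK: " + supported);
    }
}
```

On any reasonably recent JDK this prints `true`; the error above comes from the Certicom JSSE provider in older WebLogic releases, not from the JDK itself.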

The Root Cause

These factors cause this (and similar) error messages:

  • By default, the Java Cryptography Extension (JCE) that comes with the JDK ships with only the limited-strength jurisdiction policy files.
  • The default trust store of the JDK that holds this and other certificates can be found in JAVA_HOME/jre/lib/security/cacerts.
  • WebLogic Server versions before 12c come with the Certicom JSSE implementation. The Certicom implementation will no longer be updated, because the JDK already ships with the standard SunJSSE implementation.

The Problem

The Certicom implementation works perfectly with many SSL certificates but does not support newer and stronger algorithms. Removing certificates from the default trust store, or using a new trust store, works only if you do not need to install third-party certificates, for example from well-known Certificate Authorities.

The Solution

To remove these error messages and support newer SSL certificates we have to do these steps:

  • Upgrade the jurisdiction policy files with the Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy files. You can download the Unlimited Strength Jurisdiction Policy files that fit your JDK version from the Oracle Technology Network (see References). Follow the installation instructions that come with the distribution.
  • Enable SunJSSE Support in WebLogic Server
    • Log in to the WebLogic console
    • Go to [Select your Server] -> SSL -> Advanced
    • Set “Enable JSSE” to true.
  • Restart your domain completely (including NodeManager)
    • If you start your domains with a WLST script:


    • If you start your domains with the standard startup scripts:


Your Java and WebLogic environment is now ready to support newer SSL certificates!



Note: The above document is from our A-Team chronicles page and not official Oracle documentation:

Last Login Tracking in Oracle Unified Directory


Earlier, I posted about the performance impact of last login tracking in Oracle Internet Directory (OID). I was asked whether the same performance concerns apply to Oracle Unified Directory (OUD).

Well, OUD also supports last login tracking, and enabling it will have some performance impact. However, OUD does last login tracking in a smarter way than OID does, so the performance impact of enabling it for OUD is potentially substantially less.

OID stores the last login time to the nearest second, which means that if a single account binds once per second, OID performs a modify operation every second to update orcllastlogintime. This can add a substantial extra modify load on storage and on replication.

OUD by contrast allows you to configure the granularity at which the last login time is stored. You can set it to second granularity to get the same behaviour as OID. Alternatively, you can set it to daily granularity; in which case, no matter how many times in the same day a single user logs in, only the first login for that day will be recorded. Although it depends on your exact requirements, you may well find that daily granularity is all you need from a security/auditing viewpoint, and thus you can take advantage of the substantial performance benefits it offers.

The granularity is implemented by letting you specify a custom date/time format for storing the last login attribute. So, for example, you can use yyyyMMddHHmmss'Z' to get the same format that OID uses. Or you could use yyyyMMdd to record only the year, month and day, and exclude the time of day. To make it easier to read, you could add dashes, as in yyyy-MM-dd. The syntax to use is that of the java.text.SimpleDateFormat class, which is documented here:
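To illustrate how such patterns behave, here is a small standalone example (mine, not from the original post; the fixed timestamp is arbitrary):

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class LastLoginFormats {
    // Format an instant with the given SimpleDateFormat pattern, in UTC.
    static String format(String pattern, Date d) {
        SimpleDateFormat f = new SimpleDateFormat(pattern);
        f.setTimeZone(TimeZone.getTimeZone("UTC"));
        return f.format(d);
    }

    public static void main(String[] args) {
        Date d = new Date(1414281600000L); // 2014-10-26 00:00:00 UTC, chosen arbitrarily
        System.out.println(format("yyyy-MM-dd", d));        // daily granularity: 2014-10-26
        System.out.println(format("yyyyMMddHHmmss'Z'", d)); // generalized-time style: 20141026000000Z
    }
}
```

With the yyyy-MM-dd pattern, every bind on the same day formats to the same value, so only the first login of the day triggers a modify.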

Just like OID, OUD supports having different password policies for different sets of users, so last login tracking can be turned on for some but not others. To enable last login tracking for a password policy, you need to set the last-login-time-attribute property to the name of the attribute in which you wish to store the last login time. We can do this through either ODSM or the command line. In this post, I’m going to focus on the command-line approach. So, this is how to check whether it is turned on for a given password policy:

$ dsconfig -h localhost -p 4444 -D "cn=oudadmin" -j pwdfile -X \
-n get-password-policy-prop --policy-name "Default Password Policy"
Property                                 : Value(s)
account-status-notification-handler       : -
allow-expired-password-changes            : false
allow-user-password-changes               : true
default-password-storage-scheme           : Salted SHA-512
deprecated-password-storage-scheme        : -
expire-passwords-without-warning          : false
force-change-on-add                       : false
force-change-on-reset                     : false
grace-login-count                         : 0
idle-lockout-interval                     : 0 s
last-login-time-attribute                 : -
last-login-time-format                    : -
lockout-duration                          : 0 s
lockout-failure-count                     : 10
lockout-failure-expiration-interval       : 0 s
lockout-soft-duration                     : 0 s
lockout-soft-failure-count                : 0
max-password-age                          : 0 s
max-password-reset-age                    : 0 s
min-password-age                          : 0 s
password-attribute                        : userpassword
password-change-requires-current-password : false
password-expiration-warning-interval      : 5 d
password-generator                        : Random Password Generator
password-history-count                    : 0
password-history-duration                 : 0 s
password-validator                        : -
previous-last-login-time-format           : -
require-change-by-time                    : -
require-secure-authentication             : false
require-secure-password-changes           : false

You can see that, in the default configuration, last-login-time-attribute is not set, so last login tracking is disabled.

(Note: pwdfile is the path to a file containing the admin password. This is more secure than providing the password on the command line using the -w option, in which case someone running “ps” at the same time may see the command line including the password.)

So, let’s enable last login tracking in OUD. While OID has an out-of-the-box attribute to store the last login time – orcllastlogintime – for OUD we need to define a custom attribute in which to store it. So let’s start by creating a custom LDAP attribute (customLastLogin) and an object class to contain it (customUser).

We’ll create an LDIF file to contain our schema additions, and load it using ldapmodify. This is lastLogin.ldif:

dn: cn=schema
changetype: modify
add: attributeTypes
attributeTypes: ( customLastLogin-oid
 NAME 'customLastLogin'
 DESC 'Contains last login time'
 X-ORIGIN 'Custom'
 USAGE userApplications )
-
add: objectClasses
objectClasses: ( customUser-oid
 NAME 'customUser'
 DESC 'Custom user'
 SUP inetOrgPerson
 MAY customLastLogin
 X-ORIGIN 'Custom' )

Next we’ll load that LDIF file into our OUD:

$ ldapmodify -h localhost -p 1389 -D "cn=oudadmin" -j pwdfile -f lastLogin.ldif
Processing MODIFY request for cn=schema
MODIFY operation successful for DN cn=schema

OK, now let’s create a new password policy and tell it to store the last login time in the attribute we just created:

$ dsconfig -h localhost -p 4444 -D "cn=oudadmin" -j pwdfile -X -n \
create-password-policy \
--policy-name "Test Password Policy" \
--set password-attribute:userPassword \
--set default-password-storage-scheme:"Salted SHA-1" \
--set lockout-duration:300s --set lockout-failure-count:3 \
--set password-change-requires-current-password:true \
--set last-login-time-attribute:customLastLogin \
--set last-login-time-format:yyyy-MM-dd

Let’s create a container in which to store our test users, in testUsers.ldif:

dn: ou=testUsers,dc=example,dc=com
objectClass: organizationalUnit
ou: testUsers

And we’ll load that LDIF file:

$ ldapmodify -h localhost -p 1389 -D "cn=oudadmin" -j ~/pwdfile -a -f ~/testUsers.ldif

Finally, let’s create a user:

dn: cn=testUser1,ou=testUsers,dc=example,dc=com
objectclass: customUser
cn: testUser1
sn: testUser1
givenName: testUser1
userPassword: welcome1

Load the LDIF file in the same way as before.

Now we’ll assign the “Test Password Policy” to the testUsers container. Let’s do that using a collective attribute:

dn: cn=testUsers password policy,dc=example,dc=com
objectClass: top
objectClass: subentry
objectClass: collectiveAttributeSubentry
objectClass: extensibleObject
cn: testUsers password policy
ds-pwp-password-policy-dn;collective: cn=Test Password Policy,cn=Password Policies,cn=config
subtreeSpecification: {base "ou=testUsers", minimum 1}
collectiveConflictBehavior: real-overrides-virtual

Let’s check the user to confirm the attribute is set:

$ ldapsearch -h localhost -p 1389 -D "cn=oudadmin" -j ~/pwdfile -b "cn=testuser1,ou=testusers,dc=example,dc=com" -s base '(objectclass=*)' ds-pwp-password-policy-dn
dn: cn=testUser1,ou=testUsers,dc=example,dc=com
ds-pwp-password-policy-dn: cn=Test Password Policy,cn=Password Policies,cn=config

Yes, we see the ds-pwp-password-policy-dn set. So let’s try logging in as the user:

$ ldapsearch -h localhost -p 1389 -D "cn=testuser1,ou=testusers,dc=example,dc=com" -w welcome1 -b "" -s base '(objectclass=*)'
objectClass: ds-root-dse
objectClass: top

And check to see if customLastLogin gets populated:

$ ldapsearch -h localhost -p 1389 -D "cn=oudadmin" -j ~/pwdfile -b "cn=testuser1,ou=testusers,dc=example,dc=com" -s base '(objectclass=*)' customLastLogin
dn: cn=testUser1,ou=testUsers,dc=example,dc=com
customLastLogin: 2014-10-26

Note: The above document is from our A-Team chronicles page and not official Oracle documentation:

Cheat-sheet for installing Oracle Unified Directory (OUD)


This is a cheat-sheet for installing Oracle Unified Directory (OUD), including the graphical administration tool (Oracle Directory Services Manager – ODSM). While the core of OUD does not require an application server such as WebLogic, ODSM does, so you need to install that too (unless you want to do all administration from the command line). All of this information can be found in the documentation and on the Oracle Technology Network (OTN) website – all I’m doing here is drawing all the bits together in one place. I tested the steps below on Linux, but the steps for other supported platforms will be similar. These steps are for a single-node install (such as a development environment).

Step 1: Download JDK, WebLogic, ADF and OUD

OUD is written in Java, so it requires the JDK. OUD itself does not use WebLogic or ADF, but ODSM does. So we need to download all these bits:

Step 2: Install JDK

Extract jdk-7u67-linux-x64.tar.gz to a directory on your filesystem. There is no installer for this product; just extract the archive into the desired install location. I’m going to extract it under /app/oud, so I’ll end up with my JDK under /app/oud/jdk1.7.0_67. Make sure the newly installed JDK comes first in your PATH, e.g.:

export JAVA_HOME=/app/oud/jdk1.7.0_67
export PATH=$JAVA_HOME/bin:$PATH

We’ll assume the above environment variables are set for the remainder of these steps.

Step 3: Install OUD

Unzip into a temporary directory.

cd into Disk1, and then run: ./runInstaller -jreLoc $JAVA_HOME

You need to choose some install directory – I’ll once more use /app/oud. Then the OUD software is installed into /app/oud/Oracle_OUD1.

Step 4: Configure OUD

Go to the OUD install directory, e.g. /app/oud/Oracle_OUD1. Run the oud-setup script. The setup process will create a new OUD instance and start it for you.

Step 5: Install WLS

Run: java -jar wls_generic1036.jar

Do a “Typical” install. The screens are largely self-explanatory. Important: you must select the OUD install location as the Middleware Home – this is the parent of the Oracle_OUD1 directory. So in my case, the directory is /app/oud. If you install WLS somewhere else, you won’t be able to install ODSM. You will get a warning that the directory is not empty – ignore it.

Step 6: Install ADF Runtime

Unzip into a temporary directory.

cd into Disk1, and then run: ./runInstaller -jreLoc $JAVA_HOME

Choose the same Middleware Home as in the previous step.

Step 7: Create WebLogic domain

Go to the Middleware Home, then wlserver_10.3/common/bin, and run

Choose to create a new domain. Make sure to tick “Oracle Directory Services Manager” on the “Select Domain Source” screen.

Step 8: Start Admin Server

Go to the just created WLS domain directory, e.g. /app/oud/user_projects/domains/base_domain, and then into the bin subdirectory. Run to start your admin server

Step 9: Connect to ODSM

Now you can login to ODSM. Go to http://HOSTNAME:PORT/odsm, where HOSTNAME is your hostname and PORT is your Admin Server port. Login with the OUD credentials you configured in step 4.

Note: The above document is from our A-Team chronicles page and not official Oracle documentation:

Managing the performance impact of OID last login tracking


Does your environment have demanding performance requirements? High volume, customer-facing applications such as eCommerce or Internet banking, with business critical requirements for low response time?

Then having last login tracking enabled in OID (orclpwdtracklogin=1 in your password policy) can have a substantial performance cost. It converts every login, every bind/compare against an OID entry, into a modify of that OID entry to update the last login time attribute. This substantially increases the write/commit load on the database and the storage layer. It also substantially increases replication traffic in replicated environments. On that basis, you should seriously consider disabling it.

Here’s an example of how to check your OID password policies, to see if orclpwdtracklogin is enabled:

> $ORACLE_HOME/bin/ldapsearch -h localhost -p 3060 -D "cn=orcladmin" -w PASSWORD -b "" '(objectclass=pwdpolicy)'
displayname=Password Policy for Realm dc=example,dc=com

This search will return multiple password policies – for brevity, I’ve only shown the first one above. Once you’ve identified a policy with login tracking enabled, you can turn it off as follows:

> $ORACLE_HOME/bin/ldapmodify -h localhost -p 3060 -D "cn=orcladmin" -w PASSWORD
dn: cn=default,cn=pwdPolicies,cn=Common,cn=Products,cn=OracleContext,dc=example,dc=com
changetype: modify
replace: orclpwdtracklogin
orclpwdtracklogin: 0

modifying entry cn=default,cn=pwdPolicies,cn=Common,cn=Products,cn=OracleContext,dc=example,dc=com

What if you have a security requirement to keep track of last login times?

Explore whether there is somewhere else in the stack you can meet this requirement, other than in OID. For example, if you are using OAAM, then OAAM will be keeping track of last login information, and you don’t need to track it separately in OID.

And since this is controlled at a per-password policy level in OID, it is possible to have it turned on for some users, and turned off for others. For example, if all your end-users are having their logins tracked by OAAM, you don’t need to track last login information in OID. But if you have system administrators who on occasion log in to OID directly, you might still need to have OID track their last login times. So, you can have one OID password policy for normal users, which disables last login tracking; and another for system administrators, which keeps it on.

What about service accounts used by applications to access OID?

With normal user accounts, you might have five logins happening at the same time, but each of those logins will be as a distinct user account. For service accounts, however, it is not uncommon for there to be numerous concurrent logins into the same service account – for example, if one of your applications runs on a five node cluster, and if each node establishes an LDAP connection at the same time, that could be five concurrent logins to the same service account. With login tracking this will result in multiple concurrent updates to the same LDAP entry, hence multiple concurrent updates to the same database row. With RAC, these updates might end up on different RAC nodes, with the result that multiple RAC nodes are contending for the same database row. So the negative performance impact of login tracking can be far worse than that for end-user or system administrator accounts. For these reasons, having OID last login tracking turned on for service accounts is not recommended. Given the frequency of logins to these accounts, it has limited security value.

Monday Jun 22, 2015

Release of Remote Diagnostic Agent (RDA) version 8.08

Enterprise Manager development is pleased to announce the release of Remote Diagnostic Agent (RDA) 8.08 on June 9, 2015.

RDA is a set of command-line diagnostic scripts used to gather detailed diagnostic and trace information for various Oracle products and their environments. Collected data is used to assist in problem diagnosis.

RDA 8.08 includes:

  • 2 new and 29 extended data collection modules
  • 1 new and 26 extended data collection profiles
  • 30 extended HCVE rule sets, including 371 new rules
  • 6 extended tools

The RDA team completed 109 functional requests:

  • 74 bugs and enhancement requests from bug database
  • 18 direct mail requests
  • 17 Instrumentation Lifecycle Tracking System (ILTS) enhancement requests

RDA features include:

  • RDA collection modules and HCVE rule sets are identified by their RDA abbreviation. Any abbreviation can be prefixed by the group name (for example, OS.NET, DB.XDB, OFM.WLS, CGBU.NCC).
  • Collections are based on result set. Each result set has its own definition file and result directory (with the default names output.cfg and output respectively).
  • The result set definition file is object oriented. Each property name includes a type, allowing RDA to perform many validations automatically. RDA generalizes the target concept; this allows it to define a database once and re-use it in multiple modules.
  • Reports have been moved to a collect subdirectory to reduce the impact of large collections on browsing performance.
  • By default RDA runs in the current working directory. Consider making the RDA software directory read-only and put the result set elsewhere. This is not mandatory but it simplifies software upgrades.
  • Temporary files are stored outside the result set directory. This allows packaging the result set directory without issues.
  • For UNIX and Windows, Perl taint mode is enabled by default. This validates inputs before using them in operating system commands.

For More Information:

Complete details are available in RDA Release Notes (Doc ID 414970.1)

Friday Jul 11, 2014

Prepare for My Oracle Support (MOS) Release 14.3—Browser Requirements

If you have a CSI number, then you are eligible for technical support for your Oracle products using the My Oracle Support (MOS) portal.

With the rollout of My Oracle Support 14.3 on July 18, 2014, there will be changes to certified browsers.

If your browser is not upgraded, you may encounter issues or receive unexpected results, and the solution will be to move to a certified browser. Fixes will not be generated for browsers that are no longer supported.

Check your browser version before July 18 to minimize any adverse impact later, when you do need to use MOS for any research or to file an SR.

Happy Trails!


Wednesday May 28, 2014

MOSC Bits - Personalized Profile

It is a good idea to have a unique profile in MOSC. Your activities there are better recognized and might even become a well-known brand! This leads to recognition and trust.

My Oracle Support Communities (MOSC) is a well-established platform where experiences are shared. Reputation and trust are the basis for the quality of all communication there.

A personalized profile can help to build up a good reputation. Besides the experience counter, a good name and details about your location and business experience are valuable. Although a little bit hidden, the profile's avatar can be customized, too. The avatar is an eye-catcher and can act as a unique visual representation of you.

How do you add or modify your MOSC profile avatar (picture or icon)?

  • Don't look in the Edit Profile section.
  • After login, click on your profile's name at top right.
  • This lists all public information as part of the Bio section.
  • Select the Activity tab.
  • The Change Avatar link is at the same level, at far right.
  • A list of predefined symbolic pictures is populated.
  • Choose from the list of existing pictures, or try Add Another to upload an image file from your local computer (JPG, PNG, GIF, or BMP only; maximum file size 2.0 MB).
  • Note: Newly added images can be used only after running through a review process. Usually they can be selected for your personal avatar after one business day.

Tuesday May 06, 2014

Finding Your Way in the OIM RDA - a cheat sheet

Have you used the Remote Diagnostic Agent (RDA)?

It's a remarkable tool that keeps evolving, with new files added to give you a comprehensive look at your environment. Our Support Engineers use it to get a quick, accurate picture of your system without all the back and forth it would otherwise take for you to run commands and to collect necessary files used to troubleshoot and solve problems.

But because it changes and grows, with lots of modules and profiles, it can be hard to find what you're looking for. We constantly strive to improve its usability, most recently completing a project to reduce the number of questions you are asked when running it.

So, for Oracle Identity Manager (OIM), we are announcing a Cheat Sheet listing navigation to the most commonly referenced files, so that you can find what you're looking for more easily.

Let us know what you think! If there are other files that you often use but have trouble finding in an RDA, tell us and we will add them to the Cheat Sheet.

For details on how to run an RDA, start here:  Resolve Problems Faster! Use Remote Diagnostic Agent ( RDA ) - Fusion Middleware and WebLogic Server (Doc ID 1498376.1)

For our previous blog post on the RDA, see: Accelerate your resolution…

Don't be afraid to run the RDA, just to see how robust a tool it can be!

And if you run into a problem that you can't figure out, please upload a fresh copy of the RDA output to a Service Request for your Support Engineer to review.

Happy Trails

Tuesday Apr 22, 2014

OIM Clustering: Keeping separate environments separate

Oracle Identity Manager 11g incorporates several clustering technologies in order to ensure high availability across its different components. Several of these technologies use multicast to discover other cluster nodes on the same subnet. For testing and development purposes, it is common to have multiple distinct OIM environments co-existing on the same subnet. In that scenario, it is essential that the distinct environments utilise separate multicast addresses, so that they do not talk to each other; if they do, they will confuse one another, and many things can go wrong. This problem is less common with production environments, since best practice dictates that the production environment should be on a separate subnet from development and test, and multicast traffic cannot traverse subnet boundaries without special configuration.

Overview of OIM Clustering

Here's a rough diagram of the clustering components inside OIM:

[Diagram: three stacked clustering layers: Quartz Scheduler Cluster; Data Caching Cluster ( and earlier only); Application Server Cluster (WebLogic or WebSphere)]

There are three basic layers of clustering in OIM:

  • Application Server Clustering: This is the clustering layer of the underlying Java EE Application Server (Oracle WebLogic or IBM WebSphere). This is responsible for replication of the JNDI tree, EJBs, HTTP sessions, etc.
  • Data Caching: This provides in-memory caching of data to improve performance, while ensuring that database updates made on one node are propagated promptly to the others. OIM uses OSCache (OpenSymphony Cache) as the underlying technology for this.
  • Scheduler Clustering: This is used to ensure that, in a cluster, each execution of a scheduled job runs on only one node. Otherwise, if a job is scheduled to start at 9am, every node in the cluster might try to start it at the same time, resulting in multiple simultaneous executions of that job.

Clustering layers present in older versions only

  • In OIM 11gR1 and the 11gR2 base release, OIM used EclipseLink data caching, which included its own multicast clustering layer. From OIM onwards, while EclipseLink is still used for data access, its caching features are no longer used, so this form of multicast clustering is no longer present.
  • As well as using JGroups for OSCache, OIM 9.x also used JGroups for a couple of additional functions (forcibly stopping scheduled tasks and the diagnostic dashboard JMS test). In OIM 11g, JGroups is used for OSCache only.

Underlying technologies used

Different clustering components in OIM use different technologies:

Application Server Cluster
  • Unicast or Multicast
Consult the Application Server documentation for details.

EclipseLink Data Cache (OIM and earlier only)
  • Multicast for node discovery
  • T3 JNDI for node-to-node communication (WebLogic)
  • RMI for node-to-node communication (WebSphere)
Multicast is only used to find other nodes in the cluster. With WLS, JNDI connections are opened between the nodes for the cache coordination traffic. On WebSphere, RMI is used instead.

OSCache
  • Multicast using the JGroups package

Quartz Scheduler
  • Database tables
Unlike the other clustering components, Quartz does not use direct network communication between the nodes. Database tables are used for inter-cluster communication.
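Quartz's database-table approach can be pictured as a race to claim a lock row for each job firing: only the node that wins the row actually runs the job. Below is a minimal, self-contained Java sketch of that idea; it is not Quartz's actual implementation, and an in-memory map stands in for the database lock table.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of database-table-based scheduler coordination (not Quartz's real
// implementation): each node tries to claim a "lock row" for a given job
// firing, and only the winner executes the job.
public class SchedulerLockSketch {
    // Stands in for a database lock table keyed by job + fire time;
    // putIfAbsent models the atomic row-claim a real store does with SQL.
    private static final Map<String, String> lockTable = new ConcurrentHashMap<>();

    static boolean tryAcquire(String jobName, String fireTime, String nodeId) {
        String key = jobName + "@" + fireTime;
        // Atomic: returns null only for the first node that claims the row.
        return lockTable.putIfAbsent(key, nodeId) == null;
    }

    public static void main(String[] args) {
        // Two nodes race to run the same 9am job; only one wins.
        boolean node1Runs = tryAcquire("UserReconJob", "09:00", "node1");
        boolean node2Runs = tryAcquire("UserReconJob", "09:00", "node2");
        System.out.println("node1 runs: " + node1Runs); // true
        System.out.println("node2 runs: " + node2Runs); // false
    }
}
```

Because the coordination lives in shared tables, no multicast or direct node-to-node traffic is needed, which is why Quartz is unaffected by the multicast-address issues discussed below.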

Relevant Configuration Settings

I’m only going to talk about the OIM-specific clustering settings here. So I won’t go into the configuration of the WebLogic/WebSphere clustering layer, only the data cache and scheduler clustering layers. All configuration relevant to these can be found in the /db/oim-config.xml file in MDS. So let’s discuss the settings in this file which are relevant to clustering.

<cacheConfig clustered="…">
Must be set to true in a clustered install and to false for a single-instance install. This controls whether OSCache operates in clustered mode.

<cacheConfig>/<xLCacheProviderProps multicastAddress="">
The multicast address used by OSCache. (Also used by EclipseLink in versions and earlier; the same address is used for both.) Make sure this address is unique for each distinct OIM environment on the same subnet.

<xLCacheProviderProps>/<properties>
Can be used to manually override the JGroups configuration used by OSCache. Not recommended.

<schedulerConfig clustered="…">
Must be set to true in a clustered install and to false for a single-instance install.

<schedulerConfig multicastAddress="…">
In OIM 9.x, JGroups was used to forcibly stop jobs; in OIM 11g, a different mechanism is used instead. This setting is a left-over from OIM 9.x and is now ignored. However, to avoid confusion, it is recommended to set it to the same multicast address as the xLCacheProviderProps above.

<deploymentConfig>/<deploymentMode>
In a clustered install, this should be set to clustered; in a single instance, it should be set to simple. This controls whether EclipseLink operates in clustered mode.

<SOAConfig>/<username>
As its name implies, this is the username used by OIM to log in to SOA. However, in OIM and earlier, it also serves an additional purpose: on WebLogic, this username is used by EclipseLink clustering for inter-node communication. By default, this is weblogic; if you have renamed the weblogic user, you must change it. You are free to use another user if you wish, so long as that user is a member of the Administrators group. (On WebSphere, this user is used for OIM-SOA integration only, not for EclipseLink clustering.) To change this, see "2.6 Optional: Updating the WebLogic Administrator Server User Name in Oracle Enterprise Manager Fusion Middleware Control (OIM Only)". (If step 11 in those steps gives you a permissions error, just skip that step.)

<SOAConfig>/<passwordKey>
This is the name of the CSF credential which stores the password for the <SOAConfig> user. You should never change this setting in oim-config.xml from its default of SOAAdminPassword, but you will need to change the corresponding CSF entry whenever you change that user's password.
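Putting the settings together, the clustering-related stanzas of oim-config.xml for a clustered install might look roughly like the sketch below. This is illustrative only: the element names come from the settings above, but the exact nesting and the multicast address are examples, not copied from a real file.

```xml
<!-- Illustrative sketch only: nesting and values are examples, not a real oim-config.xml -->
<cacheConfig clustered="true">
  <!-- Multicast address must be unique per OIM environment on the subnet -->
  <xLCacheProviderProps multicastAddress="231.121.212.133">
    <!-- <properties> could override JGroups settings here; not recommended -->
  </xLCacheProviderProps>
</cacheConfig>

<!-- multicastAddress is ignored in 11g but kept in sync to avoid confusion -->
<schedulerConfig clustered="true" multicastAddress="231.121.212.133"/>

<deploymentConfig>
  <deploymentMode>clustered</deploymentMode>
</deploymentConfig>

<SOAConfig>
  <username>weblogic</username>            <!-- must be an Administrators group member -->
  <passwordKey>SOAAdminPassword</passwordKey>  <!-- leave at default; update CSF entry instead -->
</SOAConfig>
```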

What can go wrong

As I’ve mentioned, it is important that you have the correct clustering configuration for your environment. If you do not, many things can go wrong. I don’t propose to provide an exhaustive list of potential problems in this blog post, but just give one example I recently encountered at a customer site.

This customer was preparing to go live with Oracle Identity Manager. As part of their pre-production activities, they needed to document and test the procedure for periodic change of the weblogic password. They began their testing by changing the weblogic password in one of their development environments. After restarting the OIM managed server, they saw this message multiple times in their WebLogic log: <Authentication of user weblogic failed because of invalid password>. They also found that the WEBLOGIC user in OIM was locked.

What went wrong here? Well, several things were wrong in this environment:

  • They had <SOAConfig>/<username> set to weblogic, but they had not updated the SOAAdminPassword credential in CSF to the new weblogic password. This customer does not currently use any of the OIM functionality which requires SOA, so they normally leave their SOA server down, including for this test. You would think therefore that the <SOAConfig> would not be relevant to them; but, as I have pointed out above, it is also used for EclipseLink clustering.
  • Even though their development environments were single instance installs, they all had <deploymentConfig>/<deploymentMode> set to cluster instead of simple. As a result, EclipseLink clustering was active even though it did not need to be.
  • <cacheConfig>/<xLCacheProviderProps multicastAddress=””> was set to the same address in multiple development environments on the same subnet. As a result, even though these environments were meant to be totally separate, they were formed into a single EclipseLink cluster.

So what would happen was that this environment (let's call it DEV1) would, at startup, initialise EclipseLink clustering (since <deploymentConfig>/<deploymentMode> was set to cluster). It would then add itself to the multicast group configured in <cacheConfig>/<xLCacheProviderProps multicastAddress="">. At this point, DEV1 became visible to the other development environments (say DEV2 and DEV3). DEV2 would try to log in to DEV1 over T3, using the <SOAConfig>/<username> user (weblogic) and the SOAAdminPassword password from CSF. Because the weblogic password had changed, both DEV2 and DEV3 would receive an invalid credential error, and DEV1 would log <Authentication of user weblogic failed because of invalid password>. Setting <deploymentConfig>/<deploymentMode> to simple resolved this.
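In concrete terms, a standalone development environment such as DEV1 should end up with the values below. Again, this fragment is a sketch using the element names discussed above, not a copy of a real oim-config.xml.

```xml
<!-- Illustrative sketch: correct values for a single-instance (non-clustered) dev environment -->
<cacheConfig clustered="false"/>       <!-- no OSCache clustering -->
<schedulerConfig clustered="false"/>   <!-- no Quartz clustering -->
<deploymentConfig>
  <deploymentMode>simple</deploymentMode>  <!-- EclipseLink clustering off -->
</deploymentConfig>
<!-- And keep the SOAAdminPassword CSF credential in sync whenever the
     <SOAConfig>/<username> user's password changes -->
```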


This is the official blog of the Proactive Support Team for Identity Management: OIM, OAM, OID, OVD, OUD, DSEE, etc. Find information about our activities, publications, product related information and more.

