Setting up a simple high availability configuration for an Oracle Unified Directory topology
By Andreea Vaduva on Apr 18, 2012
Oracle Unified Directory is Oracle's latest all-Java implementation of an LDAP directory server.
It offers several improvements over earlier Oracle LDAP directory products, such as faster read/write operations, better handling of high data volumes, easier scaling, and built-in replication and proxy capabilities.
In this post we will explore some of the replication features of the Oracle Unified Directory server by walking through a simple high availability setup for an OUD topology.
Production OUD topologies need to offer a continuous, synchronized, and highly available flow of the business data managed and hosted on OUD LDAP stores. This is achieved by combining OUD LDAP data nodes with dedicated OUD instances acting as "replication" nodes. A cluster of replication nodes associated with the OUD LDAP data stores offers a high degree of availability for the data hosted on the directories.
A replication node is an instance of Oracle Unified Directory that is used only to synchronize data (read/write operations) between several dedicated OUD LDAP stores. A replication node can run in the same JVM that hosts an OUD LDAP store, or in a dedicated JVM process that handles only the replication traffic (the approach we take in this demo).
For our demo, we will create a simple four-node topology on the same physical host.
Two OUD nodes (each node is a separate JVM process), both active, will handle the user data, and two OUD replication nodes (also separate JVM processes) will handle the synchronization of any modification applied to the data on node1 or node2.
As a best practice, it is good to have at least two separate replication nodes in the topology, although we could start the replication scenario with only one replication node and add a second instance later.
With at least two replication nodes in the system, we ensure that any operation on the data nodes of our LDAP topology (node1/node2) will be propagated to the other LDAP nodes, even if one of the replication nodes fails.
The whole scenario can be run on a single physical host (or in VirtualBox or VMware) without any significant overhead. In future posts we will discuss tuning and monitoring operations.
Let's start by creating the two LDAP nodes (node1, node2) that will hold our data.
This is done by executing the oud-setup script in graphical mode:
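If you prefer a scripted install over the graphical wizard, the same choices can be passed on the command line. The sketch below is a hedged, minimal equivalent for node1; the option names follow the OUD 11g oud-setup utility and should be verified against `oud-setup --help` on your own installation, while the ports, password, and sample-entry count match the node1 configuration described next:

```shell
# Hypothetical CLI-mode equivalent of the graphical wizard for node1
# (verify option names with oud-setup --help before running).
./oud-setup --cli --no-prompt \
  --baseDN "dc=example,dc=com" \
  --sampleData 2000 \
  --ldapPort 1389 \
  --adminConnectorPort 4444 \
  --rootUserDN "cn=Directory Manager" \
  --rootUserPassword welcome1
```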
Creation of the first node: node1 listens on port 1389, and 4444 is its admin port.
Node1 is a standalone LDAP server.
The server will manage the directory tree dc=example,dc=com, which is a sample suffix. For this suffix, the wizard will generate 2,000 sample entries.
This is the final setup for node1:
At this stage, we already have one instance of our LDAP topology (node1) up and running!
We will continue with the creation of the second OUD LDAP node (node2).
The setup for node2 is nearly identical to that of node1; the LDAP listen port is 2389, and the admin port is 5444.
For node2, we will create the same directory tree structure, but we will leave the directory database empty. During the synchronization phase (see below), we will provision this directory with the data coming from node1.
This is the final setup for node2:
And at this stage we have the second node (node2) up and running!
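For reference, a scripted creation of node2 might look like the sketch below. This is again an assumption: the option names follow the OUD 11g oud-setup utility, and simply omitting any sample-data option is one way to leave the suffix database empty, as described above:

```shell
# Hypothetical CLI-mode setup for node2: same suffix, different ports,
# and no sample data, so the database stays empty until replication
# provisions it from node1.
./oud-setup --cli --no-prompt \
  --baseDN "dc=example,dc=com" \
  --ldapPort 2389 \
  --adminConnectorPort 5444 \
  --rootUserDN "cn=Directory Manager" \
  --rootUserPassword welcome1
```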
Creating the replication nodes is a nearly identical process to the one used for the previous LDAP nodes. We will create two OUD instances with the configuration wizard, and then set up the replication process as an additional step using the dsreplication command.
The first replication node will listen on port 3389; its admin port will be 6444.
The second replication node will listen on port 4389; its admin port will be 7444.
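The two replication-node instances can be created the same way as the data nodes; only the ports differ. A hedged sketch for the first one, again assuming the OUD 11g oud-setup option names (the second instance repeats this with ports 4389 and 7444):

```shell
# Hypothetical creation of the first replication-node instance.
# It is a plain OUD instance at this point; dsreplication will later
# configure it as a replication-server-only node.
# Note: each oud-setup run must target its own instance directory --
# how the instance name/path is chosen depends on your OUD version.
./oud-setup --cli --no-prompt \
  --baseDN "dc=example,dc=com" \
  --ldapPort 3389 \
  --adminConnectorPort 6444 \
  --rootUserDN "cn=Directory Manager" \
  --rootUserPassword welcome1
```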
At this stage we have four OUD instances running on our system: two LDAP nodes (node1, node2) holding the "business data", and two replication nodes. Node1 is already populated with data; node2 will be provisioned during the setup of the replication between it and node1.
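A quick way to confirm that all four JVMs are up is to look for the directory server processes. OUD is built on the OpenDS code base, so the server classes live in the org.opends namespace; this is a minimal check assuming a Unix-like host:

```shell
# Count the running OUD server JVMs; we expect 4 at this stage.
# The [o] bracket trick keeps grep from matching its own process line.
ps -ef | grep "[o]rg.opends.server" | wc -l
```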
Now let's set up the replication process between node1 and the first replication OUD server by executing the following dsreplication command. Node1 is the LDAP data node; the first replication node will hold only the replication information (it will not hold any LDAP data):
dsreplication enable \
  --host1 localhost --port1 4444 --bindDN1 "cn=Directory Manager" \
  --bindPassword1 welcome1 --noReplicationServer1 \
  --host2 localhost --port2 6444 --bindDN2 "cn=Directory Manager" \
  --bindPassword2 welcome1 --onlyReplicationServer2 \
  --replicationPort2 8989 --adminUID admin --adminPassword password \
  --baseDN "dc=example,dc=com" -X -n
This is what we should see at the prompt:
Next, we associate the second replication node with node1.
The only parameters we have to change from the previous command are the admin port and the replication port of the second replication node:
dsreplication enable \
  --host1 localhost --port1 4444 --bindDN1 "cn=Directory Manager" \
  --bindPassword1 welcome1 --noReplicationServer1 \
  --host2 localhost --port2 7444 --bindDN2 "cn=Directory Manager" \
  --bindPassword2 welcome1 --onlyReplicationServer2 \
  --replicationPort2 9989 --adminUID admin --adminPassword password \
  --baseDN "dc=example,dc=com" -X -n
We now execute the same command against node2 in order to associate it with the first replication node:
dsreplication enable \
  --host1 localhost --port1 5444 --bindDN1 "cn=Directory Manager" \
  --bindPassword1 welcome1 --noReplicationServer1 \
  --host2 localhost --port2 6444 --bindDN2 "cn=Directory Manager" \
  --bindPassword2 welcome1 --onlyReplicationServer2 \
  --replicationPort2 8989 --adminUID admin --adminPassword password \
  --baseDN "dc=example,dc=com" -X -n
At this stage we have associated node1 and node2 with the two replication nodes.
Before starting our operations (reads/writes) on node1 and node2, we have to initialize the replication topology with the following command:
dsreplication initialize-all --hostname localhost --port 4444 \
  --baseDN "dc=example,dc=com" --adminUID admin --adminPassword password
Here is what we should see:
As you can see, node2 has been fully provisioned with the data from node1!
Now we can monitor our configuration by executing the following command:
dsreplication status --hostname localhost --port 4444 \
  --baseDN "dc=example,dc=com" --adminUID admin --adminPassword password
To test the replication configuration, you can use any LDAP browser or client: connect to the first instance, modify one or several entries, then connect to the second instance and check that your modifications have been applied :)
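The same check can be scripted with the ldapmodify and ldapsearch tools shipped in each instance's bin directory. The entry DN below is an assumption: the setup wizard's sample data typically generates entries named uid=user.N under ou=People, so adjust the DN to one that actually exists in your directory:

```shell
# Write a change to node1 (LDAP port 1389)...
ldapmodify -h localhost -p 1389 -D "cn=Directory Manager" -w welcome1 <<EOF
dn: uid=user.0,ou=People,dc=example,dc=com
changetype: modify
replace: description
description: updated via node1
EOF

# ...then read the attribute back from node2 (LDAP port 2389); the new
# value should appear once replication has propagated the change.
ldapsearch -h localhost -p 2389 -D "cn=Directory Manager" -w welcome1 \
  -b "uid=user.0,ou=People,dc=example,dc=com" -s base "(objectclass=*)" description
```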
For additional training see: Oracle Unified Directory 11g: Services Deployment Essentials
About the Author:
Eugene Simos is based in France and joined Oracle through the BEA-WebLogic acquisition, where he worked in Professional Services, Support, and Education for major accounts across the EMEA region. He has worked in the banking sector and with AT&T and other telco companies, giving him extensive experience with production environments. Eugene currently specializes in Oracle Fusion Middleware, teaching an array of courses on WebLogic, WebCenter Content, BPM, SOA, Identity/Security, GoldenGate, Virtualisation, and Unified Communications Suite throughout the EMEA region.