Friday Apr 03, 2009

The Stupid Simple DPS MySQL Example

Rationale

    Even though I use DPS every day, I find myself looking for tips quite frequently.
Here is just a REALLY simple example of how to get started with MySQL as a data store.

There are very good, detailed examples in the documentation, but none that is really dead simple. That is precisely the gap this entry aims to fill.

Bird's Eye View

Below is a graph depicting how DPS maps from SQL to LDAP. SQL is a radically different model, and therefore, even in this "stupid simple" example, there are a number of things that DPS cannot guess, namely:

  1. The data source is configured to point to a specific database (through the jdbc url)
  2. The data view is configured to represent an LDAP objectClass from an SQL table
  3. Each column of the SQL table needs to be mapped to an LDAP attribute

Here's how all this looks from a DPS configuration standpoint:


The Meat

   In this example, we have a database engine containing a single database named "DEADSIMPLE". The DEADSIMPLE database has a single table "USERS" with two columns, "NAME" and "PASSWORD". The "USERS" table content is a single row as described in the above figure. This is all to make it as small and as easy as possible.
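To make the starting point concrete, here is a minimal sketch of that table and of the column-to-attribute mapping DPS will perform, using Python's sqlite3 as a stand-in for MySQL. The ID column is implied by the sn mapping configured later in the session; the helper function and names are my own illustration, not DPS code.

```python
import sqlite3

# SQLite standing in for MySQL: the DEADSIMPLE database's single USERS
# table. The text mentions NAME and PASSWORD; the dpconf session later
# maps sn to an ID column, so one is included here as well.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE USERS (ID INTEGER, NAME TEXT, PASSWORD TEXT)")
conn.execute("INSERT INTO USERS VALUES (0, 'admin', 'password')")

def row_to_entry(row, suffix="dc=example,dc=com"):
    """Mimic the DPS column-to-attribute mapping for one USERS row."""
    id_, name, password = row
    return {
        "dn": f"cn={name},{suffix}",
        "objectclass": ["top", "person"],
        "cn": name,
        "sn": str(id_),
        "userpassword": password,
    }

row = conn.execute("SELECT ID, NAME, PASSWORD FROM USERS").fetchone()
entry = row_to_entry(row)
print(entry["dn"])  # cn=admin,dc=example,dc=com
```

Running it prints the DN the proxy will expose for the table's single row.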

    We want to expose this data from the MySQL database as a proper "person" object, containing a "cn" (common name) attribute, an "sn" (surname) attribute and a "userPassword" attribute, so that we can authenticate as user cn=admin,dc=example,dc=com with password "password". Eventually, we want the entry to look as follows:

dn: cn=admin,dc=example,dc=com
objectclass: top
objectclass: person
userpassword: password
sn: 0
cn: admin

And here is the log of my session. I'll update this article later with more details.

$ echo password > /tmp/pwd
$ dpadm create -p 7777 -P 7778 -D cn=dpsadmin -w /tmp/pwd dps
Use 'dpadm start /path/to/sun/dsee/6.3/dps' to start the instance
$ dpadm start dps
Directory Proxy Server instance '/path/to/sun/dsee/6.3/dps' started: pid=966
$ dpconf create-jdbc-data-source -b replication -B jdbc:mysql:/// -J file:/path/to/mysql-connector-java-5.1.6-bin.jar -S com.mysql.jdbc.Driver sourceA
$ dpconf set-jdbc-data-source-prop sourceA db-user:root db-pwd-file:/tmp/pwd
The proxy server will need to be restarted in order for the changes to take effect
$ dpadm restart dps
Directory Proxy Server instance '/path/to/sun/dsee/6.3/dps' stopped
Directory Proxy Server instance '/path/to/sun/dsee/6.3/dps' started: pid=1065

$ dpconf create-jdbc-data-source-pool poolA
$ dpconf attach-jdbc-data-source poolA sourceA
$ dpconf create-jdbc-data-view viewA poolA dc=example,dc=com
$ dpconf create-jdbc-table dpsUsersTable users
$ dpconf add-jdbc-attr dpsUsersTable sn id
$ dpconf add-jdbc-attr dpsUsersTable cn name
$ dpconf add-jdbc-attr dpsUsersTable userPassword password
$ dpconf create-jdbc-object-class viewA person dpsUsersTable cn
$ ldapsearch -p 7777 -D cn=admin,dc=example,dc=com -w password -b dc=example,dc=com "(objectClass=*)"
version: 1
dn: dc=example,dc=com
objectclass: top
objectclass: extensibleObject
description: Glue entry automatically generated
dc: example

dn: cn=admin,dc=example,dc=com
objectclass: top
objectclass: person
userpassword: password
sn: 0
cn: admin


$ dpconf set-jdbc-attr-prop dpsUsersTable sn sql-syntax:INT

$ cat add.ldif
dn: cn=user,dc=example,dc=com
objectClass: person
cn: user
sn: 1
userPassword: password

$ ldapadd -p 7777 -D cn=admin,dc=example,dc=com -w password < add.ldif
adding new entry cn=user,dc=example,dc=com
ldap_add: Insufficient access
ldap_add: additional info: No aciSource setup in connection handler "default connection handler"


$ ldapmodify -p 7777 -D cn=dpsadmin -w password
dn: cn=mysql_aci,cn=virtual access controls
changetype: add
objectClass: aciSource
dpsAci: (targetAttr="*") (version 3.0; acl "Allow everything for MySQL"; allow(all) userdn="ldap:///anyone";)
cn: mysql_aci

adding new entry cn=mysql_aci,cn=virtual access controls

$ dpconf set-connection-handler-prop "default connection handler" aci-source:mysql_aci

$ ldapadd -p 7777 -D cn=admin,dc=example,dc=com -w password < add.ldif
adding new entry cn=user,dc=example,dc=com

$ ldapsearch -p 7777 -D cn=admin,dc=example,dc=com -w password -b dc=example,dc=com "(objectClass=*)"
version: 1
dn: dc=example,dc=com
objectclass: top
objectclass: extensibleObject
description: Glue entry automatically generated
dc: example

dn: cn=admin,dc=example,dc=com
objectclass: top
objectclass: person
userpassword: password
sn: 0
cn: admin

dn: cn=user,dc=example,dc=com
objectclass: top
objectclass: person
userpassword: password
sn: 1
cn: user

$ ldapmodify -p 7777 -D cn=admin,dc=example,dc=com -w password
dn: cn=user,dc=example,dc=com
changetype: modify
replace: userPassword
userPassword: newPassword

modifying entry cn=user,dc=example,dc=com

^C
$ ldapsearch -p 7777 -D cn=admin,dc=example,dc=com -w password -b dc=example,dc=com "(cn=user)"
version: 1
dn: cn=user,dc=example,dc=com
objectclass: top
objectclass: person
userpassword: newPassword
sn: 1
cn: user


Wednesday Apr 01, 2009

Setting DPS As Replication Hub - Part 1: a simple tut'

Rationale

    There may be cases where you would like to keep two environments up to date with the same data, but no replication or synchronization solution fits your particular needs. One example that comes to mind is migrating away from a legacy LDAP (RACF, OID, Sun DS 5...) to OpenDS. After initializing your new data store with the former data store's contents, without a synchronization mechanism you would have to switch to the new data store right away. That would hardly be acceptable in production: for one thing, importing the data might take longer than the maintenance window, and more importantly, should something unexpected happen, all real-life deployments want to preserve the option of rolling back to the legacy system (which has proved to work in the past, even if performance or functionality could use a dust-off).

Enter the DPS "replication" distribution algorithm. The idea is quite simple: route reads to a single data store, and duplicate writes across all data stores. I use the term data store here because it need not be LDAP only: any SQL database that has a JDBC driver can be replicated to as well. For this tutorial, though, I will use two LDAP stores. We will see a MySQL example in Part 2.

Bird's Eye View

    Unlike load balancing and failover algorithms, which work across sources in the same pool, distribution algorithms work across data views. A distribution algorithm is a way to pick the appropriate data view among the eligible data views to process a given client request. In this tutorial, I will show how the "replication" distribution algorithm duplicates write traffic across two distinct data sources.

In the graph below, you can see how this is structured in DPS configuration.

The Meat

We will assume here that we have two existing LDAP servers running locally and serving the same suffix dc=example,dc=com:

  1. Store A: dsA on port 1389
  2. Store B: dsB on port 2389

Let's first go about the mundane task of setting up both stores in DPS:
    For Store A:

#dpconf create-ldap-data-source dsA localhost:1389
#dpconf create-ldap-data-source-pool poolA
#dpconf attach-ldap-data-source poolA dsA
#dpconf set-attached-ldap-data-source-prop poolA dsA add-weight:1 bind-weight:1 delete-weight:1 modify-weight:1 search-weight:1
#dpconf create-ldap-data-view viewA poolA dc=example,dc=com

    For Store B:

#dpconf create-ldap-data-source dsB localhost:2389
#dpconf create-ldap-data-source-pool poolB
#dpconf attach-ldap-data-source poolB dsB
#dpconf set-attached-ldap-data-source-prop poolB dsB add-weight:1 bind-weight:1 delete-weight:1 modify-weight:1 search-weight:1
#dpconf create-ldap-data-view viewB poolB dc=example,dc=com

    Now, the distribution algorithm must be set to replication on both data views:

#dpconf set-ldap-data-view-prop viewA distribution-algorithm:replication replication-role:master
#dpconf set-ldap-data-view-prop viewB distribution-algorithm:replication replication-role:master

  And finally, the catch:

    When using dpconf to set the replication-role property to master, it effectively writes distributionDataViewType as a single-valued attribute in the data view configuration entry, when in reality the schema allows it to be multi-valued. To see that for yourself, simply do:

#ldapsearch -p <your DPS port> -D "cn=proxy manager" -w password "(cn=viewA)"
version: 1
dn: cn=viewA,cn=data views,cn=config
dataSourcePool: poolA
viewBase: dc=example,dc=com
objectClass: top
objectClass: configEntry
objectClass: dataView
objectClass: ldapDataView
cn: viewA
viewAlternateSearchBase: ""
viewAlternateSearchBase: "dc=com"
distributionDataViewType: write
distributionAlgorithm: com.sun.directory.proxy.extensions.ReplicationDistributionAlgoritm


and then try to issue the following command:

#dpconf set-ldap-data-view-prop viewA replication-role+:consumer
The property "replication-role" cannot contain multiple values.
XXX exception-syntax-prop-add-val-invalid

...

...or just take my word for it. 

The issue is that in order for DPS to process read traffic (bind, search, etc.), one data view needs to be a consumer, but for replication to work across data views, all of them must be masters as well. That is why you need to issue the following command on one (and only one) data view:

#ldapmodify -p <your DPS port> -D "cn=proxy manager" -w password
dn: cn=viewA,cn=data views,cn=config
changetype: modify
add: distributionDataViewType
distributionDataViewType: read

That's it!
Wasn't all that hard except it took some insider's knowledge, and now you have it.
Your search traffic will always go to Store A and all write traffic will get duplicated across Store A and B.
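That routing rule can be sketched in a few lines of Python; the view names, flag layout, and function are purely illustrative, a conceptual model of the algorithm rather than DPS internals:

```python
# Conceptual model of the "replication" distribution algorithm:
# reads are routed to the single view also flagged for reads, writes
# are duplicated across every view flagged for writes.
VIEWS = {
    "viewA": {"read", "write"},  # the one view carrying both flags
    "viewB": {"write"},
}

READ_OPS = {"bind", "search", "compare"}

def route(operation):
    """Return the list of data views that should process the operation."""
    wanted = "read" if operation in READ_OPS else "write"
    return [name for name, flags in VIEWS.items() if wanted in flags]

print(route("search"))  # ['viewA']
print(route("add"))     # ['viewA', 'viewB']
```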

Caveats

Note that while this is very useful in a number of situations where nothing else will work, it should only be used for transitions, as there are a number of caveats.
DPS does not store any historical information about traffic; therefore, in case of an outage of one of the underlying stores, contents may diverge across the data stores. This is especially a concern because this mode is typically used precisely where no synchronization solution can catch up after an outage.

Store A and Store B will end up out of sync if:

  • either store goes offline
  • either store is unwilling to perform because the machine is outpaced by traffic

Wednesday Aug 06, 2008

From An Old Suffix To A New One Live With No Down Time

You currently have your entries in an "old" suffix that we will from here on call dc=old, and you would like to move your entries to a new suffix we will refer to as dc=new for the purpose of this article. The catch is that you cannot stop your server and migrate your data offline. On top of this, during the transition period when your client applications get reconfigured to use dc=new, you need entries to appear to be in both dc=new and dc=old.

To make this a little simpler to picture in our minds, let's look at the data life cycle:

  1. Before the migration starts, all data resides under dc=old, requests on dc=new would fail but no application is "aware" of dc=new
    • e.g. cn=user.1,dc=old works but cn=user.1,dc=new fails. At this point the DIT looks like this:
      • dc=old
        • cn=user.1
        • cn=user.3
      • dc=new
  2. When the migration starts, all data resides under dc=old, both requests on dc=old and dc=new are honored
    • e.g. cn=user.1,dc=old and cn=user.1,dc=new will work but the entry is actually only stored physically under dc=old. The DIT is unchanged compared to 1. but we have shoehorned DPS in the topology and added a data view to take care of the transformation.
  3. While the migration is ongoing, to be migrated data resides under dc=old but migrated data resides on dc=new
    • this is when it will start to get a little complicated. Here is what the DIT might look like:
      • dc=old
        • cn=user.1
      • dc=new
        • cn=user.3
    • At this point, a request on cn=user.1,dc=old will work along with a request for cn=user.1,dc=new. But both requests for cn=user.3,dc=new and cn=user.3,dc=old will work as well. Basically, to the client application there is a true virtualization of where the data actually resides. This is crucial when you have a heterogeneous environment of applications being reconfigured to use the new suffix while some older applications might take too much work to reconfigure and are simply waiting to be discontinued.
  4. When data migration is complete, all data resides under dc=new but both requests for entries under both suffixes will be served. At this point our DIT would look like this:
    • dc=old
    • dc=new
      • cn=user.1
      • cn=user.3
  5. Once we are confident no application requests entries under dc=old then we can take the virtualization data views down. Only requests to dc=new will be served.
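The suffix virtualization at the heart of steps 2 through 4 boils down to rewriting the suffix portion of DNs in both directions. A minimal Python sketch of the idea (my own illustration, not DPS code):

```python
# Conceptual sketch of what a dn-mapping data view does: DNs arriving
# under the view's base are rewritten to the mapping source suffix
# before reaching the store, and result DNs are rewritten back.
def map_dn(dn, view_base, source_base):
    """Rewrite the suffix of dn from view_base to source_base."""
    if dn == view_base:
        return source_base
    if dn.endswith("," + view_base):
        return dn[: -len(view_base)] + source_base
    return dn  # DN not under the view's base: left untouched

# An incoming search under the virtual dc=new view is routed to dc=old:
print(map_dn("cn=user.1,dc=new", "dc=new", "dc=old"))  # cn=user.1,dc=old
# Result entry DNs are rewritten back before reaching the client:
print(map_dn("cn=user.1,dc=old", "dc=old", "dc=new"))  # cn=user.1,dc=new
```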

If you want to try it out for yourself, here are the steps to follow to get such a setup with DPS 6 but first let us agree on the environment:

  • Directory Server bits are installed in /path/to/sun/dsee/6.3/bits/ds6
  • Directory Proxy Server bits installed in /path/to/sun/dsee/6.3/bits/dps6
  • I won't use "cn=Directory Manager" but uid=admin instead with a password file containing the "password" string in /path/to/pwd
  • the data for dc=new and dc=old in our example is as follows

    dn: dc=old
    dc: old
    objectClass: top
    objectClass: domain
    aci: (targetattr=*) ( version 3.0; acl "allow all anonymous"; allow (all) userdn="ldap:///anyone";)

    dn: cn=user.1,dc=old
    objectClass: person
    objectClass: top
    cn: user.1
    sn: 1
    userPassword: {SSHA}PzAK73RDZIikdI8qRqD7MYubasZ5JyJa/BToMw==

    dn: dc=new
    dc: new

    objectClass: top
    objectClass: domain
    aci: (targetattr=*) ( version 3.0; acl "allow all anonymous"; allow (all) userdn="ldap:///anyone";)

    dn: cn=user.3,dc=new
    objectClass: person
    objectClass: top
    cn: user.3
    sn: 3
    userPassword: {SSHA}PzAK73RDZIikdI8qRqD7MYubasZ5JyJa/BToMw==

Before we begin, it is very convenient to set the following environment variables to make subsequent calls to the CLIs much easier to read:

export PATH=${PATH}:/path/to/sun/dsee/6.3/bits/ds6/bin:/path/to/sun/dsee/6.3/bits/dps6/bin:/path/to/sun/dsee/6.3/bits/dsrk6/bin
export LDAP_ADMIN_PWF=/path/to/pwd
export LDAP_ADMIN_USER=uid=admin
export DIRSERV_PORT=1389
export DIRSERV_HOST=localhost
export DIRSERV_UNSECURED=TRUE
export DIR_PROXY_HOST=localhost
export DIR_PROXY_PORT=7777
export DIR_PROXY_UNSECURED=TRUE

First off we need to create an instance of Directory Server to store our entries:

  1. Create and start the instance that we will name master
    >dsadm create -D uid=admin -w /path/to/pwd -p 1389 -P 1636 master
    Use 'dsadm start 'master'' to start the instance
    >dsadm start master
    Directory Server instance '/path/to/sun/dsee/6.3/live-migration/master' started: pid=2968
  2. On to creating a suffix and populating it with data
    >dsconf create-suffix dc=old
    >dsconf import /path/to/sun/dsee/6.3/instances/dc\=old.ldif dc=old
    New data will override existing data of the suffix "dc=old".
    Initialization will have to be performed on replicated suffixes.
    Do you want to continue [y/n] ?  y
    ## Index buffering enabled with bucket size 40
    ## Beginning import job...
    ## Processing file "/path/to/sun/dsee/6.3/instances/dc=old.ldif"
    ## Finished scanning file "/path/to/sun/dsee/6.3/instances/dc=old.ldif" (3 entries)
    ## Workers finished; cleaning up...
    ## Workers cleaned up.
    ## Cleaning up producer thread...
    ## Indexing complete.
    ## Starting numsubordinates attribute generation. This may take a while, please wait for further activity reports.
    ## Numsubordinates attribute generation complete. Flushing caches...
    ## Closing files...
    ## Import complete.  Processed 3 entries in 4 seconds. (0.75 entries/sec)
    Task completed (slapd exit code: 0).
  3. We can now check the data was successfully loaded with a quick broad sweep search:
    >ldapsearch -p 1389 -b "dc=old" "(objectClass=*)"
    version: 1
    dn: dc=old
    dc: old
    objectClass: top
    objectClass: domain

    dn: cn=user.1,dc=old
    objectClass: person
    objectClass: top
    cn: user.1
    sn: 1
    userPassword: {SSHA}PzAK73RDZIikdI8qRqD7MYubasZ5JyJa/BToMw==
  4. Repeat these last 3 steps for dc=new

Directory Proxy Server configuration

  1. Create and start an instance of the proxy

    >dpadm create -p 7777 -P7778 -D uid=admin -w /path/to/pwd proxy
    Use 'dpadm start /path/to/sun/dsee/6.3/live-migration/proxy' to start the instance

    >dpadm start proxy
    Directory Proxy Server instance '/path/to/sun/dsee/6.3/live-migration/proxy' started: pid=3061

  2. Connect the proxy to the Directory Server instance

    >dpconf create-ldap-data-source master localhost:1389

    >dpconf create-ldap-data-source-pool master-pool

    >dpconf attach-ldap-data-source master-pool master

    >dpconf set-attached-ldap-data-source-prop master-pool master add-weight:1 bind-weight:1 compare-weight:1 delete-weight:1 modify-dn-weight:1 modify-weight:1 search-weight:1

  3. Create a straight data view to dc=old and verify we can get through to the source

    >dpconf create-ldap-data-view actual-old master-pool dc=old

    >ldapsearch -p 7777 -b dc=old "(objectClass=*)"
    version: 1
    dn: dc=old
    dc: old
    objectClass: top
    objectClass: domain

    dn: cn=user.1,dc=old
    objectClass: person
    objectClass: top
    cn: user.1
    sn: 1
    userPassword: {SSHA}PzAK73RDZIikdI8qRqD7MYubasZ5JyJa/BToMw==

    >ldapsearch -p 7777 -b dc=old "(cn=user.1)"
    version: 1
    dn: cn=user.1,dc=old
    objectClass: person
    objectClass: top
    cn: user.1
    sn: 1
    userPassword: {SSHA}PzAK73RDZIikdI8qRqD7MYubasZ5JyJa/BToMw==

  4. Create a virtual data view representing the physical entries under dc=old as it were under dc=new

    >dpconf create-ldap-data-view virtual-new master-pool dc=new

    >dpconf set-ldap-data-view-prop virtual-new dn-mapping-source-base-dn:dc=old

    >ldapsearch -p 7777 -b dc=new "(cn=user.1)"
    version: 1
    dn: cn=user.1,dc=new
    objectClass: person
    objectClass: top
    cn: user.1
    sn: 1
    userPassword: {SSHA}PzAK73RDZIikdI8qRqD7MYubasZ5JyJa/BToMw==


  5. Repeat 3 and 4 for the physical dc=new suffix, and we now have a totally virtualized back end.

    >dpconf create-ldap-data-view actual-new master-pool dc=new

    >dpconf create-ldap-data-view virtual-old master-pool dc=old

    >dpconf set-ldap-data-view-prop virtual-old dn-mapping-source-base-dn:dc=new

    >ldapsearch -p 7777 -b dc=old "(cn=user.3)"
    version: 1
    dn: cn=user.3,dc=old
    objectClass: person
    objectClass: top
    cn: user.3
    sn: 3
    userPassword: {SSHA}5LFqYHLashsY7PFAvFV9pM+C2oedPTdV/AIADQ==

    >ldapsearch -p 7777 -b dc=new "(cn=user.3)"
    version: 1
    dn: cn=user.3,dc=new
    objectClass: person
    objectClass: top
    cn: user.3
    sn: 3
    userPassword: {SSHA}5LFqYHLashsY7PFAvFV9pM+C2oedPTdV/AIADQ==
