Saturday Dec 19, 2009

DPS Coherence Plug-In: New Features


Rationale

After conversations about this plug-in, both internally and with a handful of you out there, I sat down and added what I felt were the main missing features in my current implementation. These mainly cover security, but also some flexibility.

Bird's Eye View

On the security front, you can now filter out attributes so they don't make it into the cache (e.g. userPassword). What's more, you can filter out entries you don't want in the cache, to avoid polluting it with the occasional non-production (e.g. administration) hits.

On the flexibility front, you can set a time to live for entries that do make it into the cache. This lets you control whether a value is retained forever (the default) or evicted after a certain time. You can also provide a list of regular expressions for the Bind DNs (the identities) that are granted access to the cache. And of course, you can decide whether unauthenticated clients may access the cache (the default) or not.

The Meat 

  • attributeNotToCache: userPassword
This attribute in the plug-in's configuration entry can be multivalued. It is not a regular expression but a plain string, and case, as always in LDAP, matters not. Any attribute name matching one of the provided values will be stripped from the entry before it is stored in Coherence.
  • dnNotToCache: .*,ou=secret,o=o
This attribute can be multivalued. It prevents entries whose DN matches the regular expression from being stored in Coherence.
  • cacheForBindDN: cn=[2-3],o=o
This attribute can be multivalued. It is a regular expression. An authenticated client's Bind DN must match one of the provided regular expressions to be granted access to the contents stored in Coherence.
  • cacheForAnonymous: false
This attribute is single valued. It is a boolean, either true or false. When false, unauthenticated clients will not be granted access to the contents stored in Coherence and will therefore always hit the back-end.
  • cacheExpirationInMS: 30000
This attribute is single valued. It is a long and represents the length of time, in milliseconds, that an entry is kept in the cache after the last time it was accessed.
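As a rough illustration (this is my own sketch, not the plug-in's actual code), the filtering and access rules described above could look like this in Python; the settings, entry layout and helper names are all assumptions:

```python
import re

# Hypothetical settings mirroring the plug-in's configuration attributes.
ATTRS_NOT_TO_CACHE = {"userpassword", "aci"}        # plain strings, case-insensitive
DNS_NOT_TO_CACHE = [re.compile(r".*,ou=secret,o=o", re.IGNORECASE)]
CACHE_FOR_BIND_DN = [re.compile(r"cn=[2-3],o=o", re.IGNORECASE)]
CACHE_FOR_ANONYMOUS = False

def strip_attributes(entry):
    """Drop filtered attributes (case-insensitive name match) before caching."""
    return {name: values for name, values in entry.items()
            if name.lower() not in ATTRS_NOT_TO_CACHE}

def is_cacheable_dn(dn):
    """An entry is cacheable unless its DN matches an exclusion regex."""
    return not any(rx.fullmatch(dn) for rx in DNS_NOT_TO_CACHE)

def may_read_cache(bind_dn):
    """Anonymous access is governed by cacheForAnonymous; authenticated
    clients must match one of the cacheForBindDN regexes."""
    if bind_dn is None:
        return CACHE_FOR_ANONYMOUS
    return any(rx.fullmatch(bind_dn) for rx in CACHE_FOR_BIND_DN)

entry = {"cn": ["user.2"], "userPassword": ["secret"], "mail": ["u2@example.com"]}
print(strip_attributes(entry))                 # userPassword is stripped
print(is_cacheable_dn("cn=x,ou=secret,o=o"))   # excluded subtree
print(may_read_cache("cn=2,o=o"))              # matches cacheForBindDN
```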

So, in the end, here is an example configuration entry:

dn: cn=CoherencePlugin,cn=Plugins,cn=config
objectClass: top
objectClass: configEntry
objectClass: plugin
objectClass: extensibleObject
cn: CoherencePlugin
description: Oracle Coherence Cache Plugin
enabled: true
pluginClassName: com.sun.directory.proxy.extensions.CoherencePlugin
pluginType: preoperation search
pluginType: search result entry
pluginType: postoperation delete
pluginType: postoperation modify
pluginType: postoperation modify dn
cacheName: LDAPCache
attributeNotToCache: userpassword
attributeNotToCache: aci
dnNotToCache: .*,ou=secret,o=o
dnNotToCache: .*,cn=config
cacheForBindDN: cn=[2-3],o=o
cacheForBindDN: uid=user.[0-9]+,ou=People,o=o
cacheForAnonymous: false
cacheExpirationInMS: 30000

That's it for my Friday night; let me know if there is more that DPS+Coherence can do for you!

As always, if you want to try this DPS plug-in, ping me: arnaud -at- sun -dot- com

Wednesday Dec 16, 2009

Oracle Coherence Support Right In DPS!!!

Rationale

Why do caching in DPS ?

The Directory Server back-ends cannot absorb as many updates when they are stressed with a large proportion of searches. There is already caching on the back-end Directory Server itself, of course, and it helps performance a lot: reads return to the client faster, which relieves some of the stress and frees up resources for the writes that lock resources for longer atomic stretches of time. But as long as searches hit the back-end, even with a cache, there is some weight lifting to be done: open the connection, parse the request, put the request on the work queue, look up entries in the cache, return the entry, close the connection...

That's why caching right in DPS started to look appealing to me.

Why Coherence ?

One may think I made this choice because Sun is about to be gobbled up by Oracle, but the answer is no. Coherence is simply a compelling choice: these guys seem to have made all the technical choices I would have ... and then some. For one, you download the bits, just start it, and it works. It may sound like a marketing pitch, but see for yourself. Excellent 0 to 60 in my book. Once you have it working and get acquainted with it, you find the protocol is dead simple and the API is clean, robust and pretty lean. After that, you check the docs out (that's right, I read the docs after the fact) and start to realize how powerful a platform it is, how expandable it is, and how far you can push deployments to accommodate growing performance or reliability needs.

Bird's Eye View

The integration with Coherence is done by way of a DPS (7+) plug-in that asynchronously populates a Coherence cache with the entries returned by your regular traffic. When a request comes in, a lookup checks whether the entry is present in the cache; if it is, the entry is returned immediately, otherwise the request is routed to the back-end as usual.
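In rough Python, this lookup flow amounts to a read-through cache with asynchronous population. The classes and names below are stand-ins of my own, not the plug-in's actual API:

```python
from concurrent.futures import ThreadPoolExecutor

class CachingProxy:
    """Toy sketch of the DPS + Coherence flow: serve from the cache on a hit,
    otherwise route to the back-end and populate the cache asynchronously."""

    def __init__(self, backend):
        self.cache = {}                    # stand-in for the Coherence cache
        self.backend = backend             # callable: dn -> entry
        self.pool = ThreadPoolExecutor(max_workers=1)

    def search(self, dn):
        entry = self.cache.get(dn)
        if entry is not None:
            return entry                   # cache hit: back-end untouched
        entry = self.backend(dn)           # cache miss: route as usual
        self.pool.submit(self.cache.__setitem__, dn, entry)  # async populate
        return entry

calls = []
def backend(dn):
    calls.append(dn)
    return {"dn": dn, "cn": ["demo"]}

proxy = CachingProxy(backend)
proxy.search("cn=demo,o=o")     # first read hits the back-end
proxy.pool.shutdown(wait=True)  # let the async write complete
proxy.search("cn=demo,o=o")     # second read is served from the cache
print(len(calls))               # → 1
```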

Note that I'm not making any claims about the performance of this caching approach from the client's perspective, because our Directory Server back-end is already pretty darn fast. It certainly relieves the back-end of a chunk of "frequent" traffic, which will benefit the overall performance of the topology. That relief will most likely show up as improved write response times, but nothing here speaks to the performance of the Coherence cache lookup itself. I just haven't collected enough data so far.

The Meat

nitty gritty anyone ?

Suppose we have a setup where DPS fronts a Directory Server back-end, with a Coherence cache attached to the proxy.

The first read misses the cache: the request is routed to the back-end as usual, and the returned entry is placed in Coherence on its way back to the client.

The second read, however, is served straight out of Coherence, without hitting the back-end at all.

Understand the tremendous impact this can have on your back-end?

It is freed up to process writes or some heavyweight searches...

How to get the plug-in? Ask nicely.

Wednesday Dec 02, 2009

A Script To Rule Them All ... clients and backends that is


Rationale

With Directory Proxy Server, regardless of the version, investigating traffic can get:

  • a) real tricky
  • b) time consuming
  • c) confusing
  • d) all of the above

and the answer is ... drum roll ... d) !

So here is a script that you can feed your DPS access log to. It will output CSV files that you can then load into your favorite spreadsheet software, or just graph with tools like gnuplot and the like... It will just make your life easy...er.

Bird's Eye View

Disclaimer: it's nowhere near as clever as logconv.pl for Directory Server; it only munches the data so that YOU can more easily spot issues or identify patterns. So what does this script produce in the end?

It takes your DPS 6.x/7.0 access log and outputs three CSV files: one with the transaction volumes (suffixed "tx"), one with the average response times (suffixed "avg"), and one with the maximum response time over a minute (suffixed "max"). Why not all in one file? I did that initially, but a single CSV turned out to be really impractical. At least this way, when you open one of these files, you know what you're looking at.
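The per-minute aggregation behind these three files can be sketched in Python as follows; the (minute, etime) tuples stand in for what the real script parses out of the access log:

```python
from collections import defaultdict

def crunch(samples):
    """Aggregate (minute, response_time_ms) samples into the three views
    the script emits: transaction volume, average and maximum response time."""
    buckets = defaultdict(list)
    for minute, etime in samples:
        buckets[minute].append(etime)
    tx  = {m: len(v) for m, v in buckets.items()}          # "tx" file
    avg = {m: sum(v) / len(v) for m, v in buckets.items()}  # "avg" file
    mx  = {m: max(v) for m, v in buckets.items()}           # "max" file
    return tx, avg, mx

samples = [("18:00", 2), ("18:00", 4), ("18:01", 10)]
tx, avg, mx = crunch(samples)
print(tx["18:00"], avg["18:00"], mx["18:01"])   # → 2 3.0 10
```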

The Meat

Pre-requisites

Since I initially started this simply to be able to "grep" a file on a Windows system, I really had no plan and no idea it would end up being a tool like this. All that to say that I wrote it in Python instead of our customary Java. At least it has the merit of existing, so you don't have to start from scratch. You'll need Python, at least 2.4. If you're on Solaris or Linux, you're covered. If you're on Windows, simply download your favorite Python; I installed the 2.6.4 Windows version from here.

You will also need to download the script. You may as well get the NetBeans project if you'd like to adapt it to your specific needs or help improve on it.

How Does It Work

0 To 60 In No Time

python dps-log-cruncher.py access 

The Rest Of The Way

-c      : break up statistics per client
-s      : break up statistics per back-end server
-f hh:mm: start parsing at a given point in time
-t hh:mm: stop parsing after a given point in time
-h      : print this help message
-v      : print tool version

Some examples:

split the output per client for all clients:

python dps-log-cruncher.py -c \* access 

 split the output per back-end server for client 192.168.0.17:

python dps-log-cruncher.py -c 192.168.0.17 -s \* access 

 split the output for all clients, all servers:

python dps-log-cruncher.py -c \* -s \* access 

 only output results from 6pm (18:00) to 6:10pm (18:10):

python dps-log-cruncher.py -f 18:00 -t 18:10 access 

 output results between 6:00pm (18:00) to 6:10pm (18:10) and split results for all clients and back-end servers:

python dps-log-cruncher.py -f 18:00 -t 18:10 -c \* -s \* access 
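For illustration, the -f/-t time-window filter can be sketched like this; the function name is mine, not the script's:

```python
def in_window(timestamp, start=None, end=None):
    """Sketch of the -f/-t filter: keep an hh:mm timestamp only if it falls
    inside the requested window (inclusive). Zero-padded hh:mm strings
    compare correctly as plain strings, so no time parsing is needed."""
    if start is not None and timestamp < start:
        return False
    if end is not None and timestamp > end:
        return False
    return True

print(in_window("18:05", "18:00", "18:10"))  # → True
print(in_window("17:59", "18:00", "18:10"))  # → False
```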

Enhancements

This is a list to manage expectations as much as it is one for me to remember to implement:

  1. Selectable time granularity. Currently, all data is aggregated per minute; in some cases, it would be useful to see what happens per second.
  2. Improve error handling for parameters on the CLI.
  3. Add a built-in graphing capability to avoid having to resort to a spreadsheet. Spreadsheets do, however, give a lot of flexibility.
  4. Add the ability to filter / split results per bind DN.
  5. Output the response time distribution.

Support

Best effort is how I will label it for now, you can send your questions and support requests to arnaud -- at -- sun -- dot -- com.

Enjoy!
