Responding to Virtual Directory and Persistent Cache
By Mark Wilcox - OVD Product Manager on Jan 30, 2009
Over at Ash's Identity Management Rantings, Ash posted some questions about a topic that never seems to go away: "virtual directories and persistent cache".
This seems to be Ash's response to an earlier post by Clayton.
Ash had two questions:
1. Since performance is the main point here, does anyone have numbers on the performance hit caused by virtual directories?
The overhead is absolutely minimal: it's generally around 2-5 milliseconds, and the worst I've ever seen is around 50 milliseconds (remember, that's still only 5/100ths of a second). This includes doing a join of data. In short, performance is effectively the same as applications accessing the original source directly. To put it another way, there is a telco in Europe that uses OVD to authenticate all of its 3G phone users, without any cache within OVD. If direct access is sufficient for a telco, it is going to be more than adequate for any traditional IT application.
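To make the "join of data" point concrete, here is a toy sketch of what a virtual directory does at query time: attributes for one identity are merged from two backend sources on the fly, with nothing persisted anywhere. The data and function names are hypothetical illustrations, not OVD's actual API or adapters.

```python
# Hypothetical backend data, standing in for e.g. AD and an HR database.
DIRECTORY = {
    "jdoe": {"cn": "Jane Doe", "mail": "jdoe@example.com"},
}
HR_DB = {
    "jdoe": {"department": "Engineering", "employeeNumber": "4711"},
}

def virtual_lookup(uid):
    """Join the two sources for a single entry at request time."""
    entry = DIRECTORY.get(uid)
    if entry is None:
        return None
    # Merge in the HR attributes; the joined view exists only in
    # memory for the duration of this request -- no copy is stored.
    joined = dict(entry)
    joined.update(HR_DB.get(uid, {}))
    return joined
```

The join is just two keyed lookups plus a merge, which is why the per-request overhead stays in the low milliseconds.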
2. Is performance the only real justification for persistent cache?
When customers mention "persistent cache", they are typically concerned about performance, usually because they have a misconception about the overhead of direct access. The other reason I've heard for customers becoming interested in persistent cache is the possibility of the source becoming unavailable. However, a persistent cache is probably not the best way to address this issue, for two reasons. The first is that most of the time your primary sources are mission-critical to the point that if they are not functioning, the company is not functioning, regardless of whether the data was copied (e.g. if AD goes down, you can't log in to Windows). The second is that spending the effort to make those systems redundant and highly available will generally pay bigger dividends than synchronizing the data into another database, which does nothing to make the core application that generates the data more resilient.
Also, "Blink Industries" added this in a comment on Ash's post:
"I thought the whole point of the cache is to lighten the load against the system as a whole. It's a compromise of data freshness for performance. Plus the entire point of a cache is to "cache" frequently used data, of course depending on the algorithm used (LRU, MRU, etc.). I also assume that the cache is adjustable and can have specific timeouts for freshness. I think for a highly trafficked directory this is a great trade-off."
I posted this here because it allows me to address a key difference in caching. OVD does provide a Cache plug-in that is granular: you can apply it globally or per adapter. It also doesn't require any other data store (or software license), neither of which our competition can currently claim. But the reality is that it is almost never needed; I don't know of any production use of it, including in high-volume/low-latency scenarios like phone activation or online user registration. This is because caching already occurs at other points in the system. Pretty much every enterprise data source has a built-in cache, and the clients that talk to OVD are usually caching the results as well. Finally, given modern hardware and software, I doubt any enterprise has come close to the capacity limits of its enterprise identity stores. Thus there is no actual need for any type of cache at the virtual directory layer.
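The trade-off Blink Industries describes, an eviction policy such as LRU plus an adjustable freshness timeout, can be sketched in a few lines. This is a generic illustration of the concept, not OVD's Cache plug-in; the class name and parameters are mine.

```python
import time
from collections import OrderedDict

class TTLCache:
    """Minimal LRU cache with a freshness timeout: stale entries are
    dropped so the caller re-fetches from the authoritative source."""

    def __init__(self, max_entries=128, ttl_seconds=60.0):
        self.max_entries = max_entries
        self.ttl = ttl_seconds
        self._store = OrderedDict()  # key -> (value, inserted_at)

    def get(self, key):
        item = self._store.get(key)
        if item is None:
            return None
        value, inserted_at = item
        if time.monotonic() - inserted_at > self.ttl:
            # Past the freshness timeout: discard and report a miss.
            del self._store[key]
            return None
        self._store.move_to_end(key)  # mark as recently used (LRU)
        return value

    def put(self, key, value):
        self._store[key] = (value, time.monotonic())
        self._store.move_to_end(key)
        if len(self._store) > self.max_entries:
            self._store.popitem(last=False)  # evict least recently used
```

The `ttl_seconds` knob is exactly the freshness-for-performance compromise the comment mentions: a longer timeout means fewer trips to the backend but staler answers.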