Answering Outstanding Directory Questions (aka Continuing Education Process for James McGovern)

James McGovern has more questions about directories.

So here are the questions and my answers.

1. There is a lot of talk about leveraging a virtualization layer but no discussion on when it is a better strategy to simply copy data. Most directories I have run across aren't that big and most will fit into memory.

Answer: Virtual directories are NOT in-memory copies of existing directories.

Identity Virtualization means that when clients make a request, the virtual directory figures out how to retrieve and package the data from the proper sources and then present it in proper format to the client.

Virtualization is not intended to replace replication (for example, your core master sources must still be replicated for high availability, which you need to do regardless).

What virtualization allows you to avoid is unnecessary synchronization of data and abstracts your applications from the sources of identity data. This gives you more flexibility and allows you to deploy applications in less time because you don't have to worry about continuing to build identity silos.
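To make the on-demand idea concrete, here is a minimal sketch of the virtual-directory pattern. Nothing is copied or synchronized; each search is routed to the right backend and the result is reshaped on the way out. All the names here (`VirtualDirectory`, `DictBackend`, the sample attributes) are illustrative assumptions, not OVD's real API.

```python
# Hypothetical sketch: a virtual directory routes each request to a backend
# and transforms the result on demand - no data is ever copied or synced.

class VirtualDirectory:
    def __init__(self):
        # Map a search base to (backend, attribute-name mapping).
        self.routes = {}

    def mount(self, base, backend, attr_map):
        self.routes[base] = (backend, attr_map)

    def search(self, base, uid):
        backend, attr_map = self.routes[base]
        raw = backend.lookup(uid)  # fetched on demand, never synchronized
        # Rename attributes into the schema this client expects.
        return {attr_map.get(k, k): v for k, v in raw.items()}

class DictBackend:
    """Stands in for AD, a database, a web service, etc."""
    def __init__(self, rows):
        self.rows = rows
    def lookup(self, uid):
        return self.rows[uid]

hr = DictBackend({"jdoe": {"EMP_NAME": "Jane Doe", "EMP_MAIL": "jdoe@example.com"}})
vd = VirtualDirectory()
vd.mount("ou=people,dc=example,dc=com", hr, {"EMP_NAME": "cn", "EMP_MAIL": "mail"})

entry = vd.search("ou=people,dc=example,dc=com", "jdoe")
print(entry)  # {'cn': 'Jane Doe', 'mail': 'jdoe@example.com'}
```

The application only ever sees `cn` and `mail`; where the data actually lives, and what it is called there, is the virtual directory's problem.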

2. Mark Wilcox provided an example of virtualizing Microsoft ADAM that
was technically sound but didn't talk about the economic aspects. He
didn't mention what OVD costs but I have to assume it is pricier than
an ADAM instance. Does it make sense to spend say $30K to virtualize
something that costs say $3K?

Answer: I don't cover pricing because there is a lot of negotiation that goes into it.

Anyway, ADAM here is a meta-directory solution and, as with any meta-directory solution, the license cost is the smallest part of the total cost.

Meta-directory means you must synchronize the data, back it up, secure it, make it highly available by replicating it, monitor that all of that is working, and assign someone to manage it.

I shouldn't have to remind you that you're already doing all of that with your existing sources - why would anyone want to redo that work? Virtualization allows you to increase your return on existing investments (ROEI) because you can leverage what you have already done in new ways.

Even if the license were free (you could do what ADAM does with OpenLDAP or Fedora Directory Server, and they are free) - you can see that license cost is not the true total cost.

And if it turns out you deploy a new application that can use that same data but needs it in a different format (e.g. different attribute names, a different structure, or different access rules that perhaps ADAM can't support) - you have to set up that entire infrastructure all over again.

Compare that to identity virtualization, which you just run as a lightweight service that transforms your data on demand. And since it can present discrete application views (regardless of whether that data is stored in AD, a database, a Web Service, or anything you can connect Java to), a single service can appear to be many. Finally, because it's stateless, it's much easier to manage and make highly available.
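Here is a rough sketch of the "one service, many views" point: two applications see the same underlying record under different attribute names, transformed at request time. The view definitions and sample data are made up for illustration.

```python
# Sketch: a single identity source exposed through per-application views.
# Each view renames (and limits) attributes on demand for that application.

SOURCE = {"jdoe": {"givenName": "Jane", "sn": "Doe", "mail": "jdoe@example.com"}}

VIEWS = {
    "legacy-app": {"givenName": "first_name", "sn": "last_name", "mail": "email"},
    "portal":     {"givenName": "givenName", "sn": "surname", "mail": "mail"},
}

def search(view, uid):
    mapping = VIEWS[view]
    raw = SOURCE[uid]
    # Each view exposes only the attributes it maps, renamed for that app.
    return {new: raw[old] for old, new in mapping.items()}

print(search("legacy-app", "jdoe"))
# {'first_name': 'Jane', 'last_name': 'Doe', 'email': 'jdoe@example.com'}
```

Neither application ever learns that the other view exists, yet there is still only one copy of the data.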

3. Enterprise Applications such as Documentum use a synchronization paradigm for group structures and at some level this approach is fugly. Directories such as Active Directory have the ability to create dynamic group structures based on specifying attributes. How should products consume dynamic group structures? Additionally, what will cause Documentum, Alfresco and other products that are still doing LDAP synchronization in a legacy fashion to change?

Answer: I don't disagree that synchronizing groups is fugly :).

Dynamic Groups are a very nice way of simplifying the management of groups in a directory.

However, as mentioned, applications can have a hard time dealing with dynamic groups.

This is why OVD ships with a plug-in that makes Netscape-style dynamic groups look and act like static groups. And if you are using OID, it will answer membership-test queries against its dynamic groups just as if they were static groups.
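Conceptually, what such a plug-in does can be sketched as follows: a Netscape-style dynamic group stores a filter (via `memberURL`) instead of a member list, and the filter is evaluated on demand so that clients see what looks like ordinary static members. This is an illustrative toy, not OVD's implementation; the filter handling covers only simple `(attr=value)` terms.

```python
# Toy sketch of dynamic-to-static group expansion: evaluate the group's
# filter against user entries at query time instead of storing members.

USERS = {
    "uid=alice,ou=people": {"departmentNumber": "42", "cn": "Alice"},
    "uid=bob,ou=people":   {"departmentNumber": "7",  "cn": "Bob"},
}

def matches(entry, ldap_filter):
    # Support only the simple "(attr=value)" form for this sketch.
    attr, value = ldap_filter.strip("()").split("=", 1)
    return entry.get(attr) == value

def expand_dynamic_group(ldap_filter):
    """Return the static-looking member list for a dynamic group's filter."""
    return sorted(dn for dn, entry in USERS.items() if matches(entry, ldap_filter))

members = expand_dynamic_group("(departmentNumber=42)")
print(members)  # ['uid=alice,ou=people']
```

An application that only understands static `member` lists can consume this result unchanged, while the directory administrator manages nothing but the filter.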

In terms of when this architecture will change - the same way it always has. Either vendors will change their products to meet this need, or competing products will emerge that do this, the market will decide it's a compelling feature, and customers will move to the new architecture.

But the primary reason this has happened is that developers have had to be identity management experts as well as experts in their business domain.

This is why at Oracle all of our Fusion applications will leverage our Identity Services as a platform service instead of writing their own IdM code (plus any application written on Fusion Middleware will be able to take advantage of this as well). And it's why we're helping to make it easier for developers to use identity as a platform service via standards efforts like the IGF Identity Attribute Services API.

4. If directory-enabled products are still doing synchronization instead of binding at runtime, this would lead me to believe that the community at large needs to define best practices in creating directory-enabled enterprise applications. Who is in the best position to lead this?

Answer: Instead of trying to just focus on how to best do LDAP, it would be better for the community to converge around something like IGF.

IGF is helping define the standards and a simpler Identity Attribute access API for developers so that they can begin to access identity the same way they use DNS or a database.

They make the call and don't really worry about how the data is accessed or stored.

And it is protocol agnostic so that it could be LDAP today or some type of Web Service in the future.

Because IGF defines a standard way to represent the identity object(s) an application needs via the CARML specification - which is itself abstracted from storage - it will make it easier to map domain-specific identity to enterprise source(s), much as applications today can port their application-specific data to different databases. I would also encourage everyone to keep tabs on Clayton Donley's thoughts on this.
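The shape of such an API can be sketched as follows: the application declares which identity attributes it needs and calls one service; the provider behind it (LDAP today, a web service tomorrow) is swappable. To be clear, this is an assumed illustration of the idea, not the real IGF/CARML API.

```python
# Illustrative sketch of a protocol-agnostic identity attribute service.
# The application never mentions LDAP, DNs, or filters - only attribute names.

class AttributeService:
    def __init__(self, provider):
        self.provider = provider   # protocol-specific details live here

    def get_attributes(self, subject, names):
        record = self.provider.fetch(subject)
        return {n: record.get(n) for n in names}

class FakeLdapProvider:
    """Stands in for an LDAP-backed provider; could be a web service instead."""
    DATA = {"jdoe": {"mail": "jdoe@example.com", "title": "Engineer", "sn": "Doe"}}
    def fetch(self, subject):
        return self.DATA[subject]

svc = AttributeService(FakeLdapProvider())
print(svc.get_attributes("jdoe", ["mail", "title"]))
# {'mail': 'jdoe@example.com', 'title': 'Engineer'}
```

Swapping `FakeLdapProvider` for some future web-service-backed provider changes nothing in the application code - which is exactly the point.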

5. Without exploring the whole legacy conversation, shouldn't it be
considered a best practice for modern applications to not even be aware
of LDAP as a protocol? If an application instead interacted with an
STS, shouldn't it pick up the attributes at runtime vs. having
hard-coded bindings to a directory?

Answer: For current applications LDAP is the only standard, and LDAP is going to be around for a long time. Obviously in the future new standards may emerge; STS may or may not be one of them. But that is the goal of the IGF Identity Attributes API - applications won't need to worry about that. The Identity Attribute Service provider will handle it for them.

6. Taking this one step further, if you have XACML and you are writing
a PEP, why would your application ever need to know about LDAP?

Answer: Correct (though if you write your application using the JAAS standard today, you don't need to worry much about LDAP anyway) - if you do XACML you will be focusing on XACML. However, the PDP that your PEP talks to will likely pull the information it needs from at least one directory. So a PEP probably won't need to know LDAP, but the PDP will. But considering it's been over a decade since LDAP was created and applications are still just now adopting it, I would not plan on XACML happening any faster.
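The PEP/PDP split described above can be sketched in a few lines. The policy, the directory contents, and the function names below are all made up for illustration; in real XACML the PEP sends a decision request and the PDP evaluates declarative policies, but the division of labor is the same.

```python
# Minimal sketch of the PEP/PDP split: the PEP only asks "permit or deny?";
# the PDP resolves subject attributes from a directory-like store.

DIRECTORY = {"alice": {"role": "manager"}, "bob": {"role": "intern"}}

def pdp_decide(subject, action, resource):
    # The PDP - not the PEP - pulls identity data from the directory.
    role = DIRECTORY.get(subject, {}).get("role")
    if action == "approve" and resource == "expense-report":
        return "Permit" if role == "manager" else "Deny"
    return "Deny"

def pep_enforce(subject, action, resource):
    """The PEP never touches LDAP; it just enforces the PDP's answer."""
    return pdp_decide(subject, action, resource) == "Permit"

print(pep_enforce("alice", "approve", "expense-report"))  # True
print(pep_enforce("bob", "approve", "expense-report"))    # False
```

Notice that only `pdp_decide` reads `DIRECTORY`; the enforcement point is entirely ignorant of where identity data lives.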

7. Oracle has a wonderful product known as OctetString which allows an LDAP directory service to work with JDBC, but this is on the client. This raises the question of whether products such as Sun One Directory Server, OpenLDAP, Microsoft Active Directory and others should instead figure out how to allow a SQL client to connect to the directory and support it natively. What prevents vendors from going down this route?

Answer: I'm glad James gave OctetString nice praise, but OctetString was the name of our company, which created the virtual directory before we were acquired in November 2006. The name of the product is Oracle Virtual Directory.

The SQL question is common enough, that I will answer it in a separate post to make it easier to reference in the future.



This is the blog for Oracle Consulting Security North America team. Edited by Mark Wilcox - Chief Technology Officer for Oracle Consulting Security - North America.
