Friday Sep 14, 2007

OpenDS 1.0.0-build005 is now available

I have just uploaded OpenDS 1.0.0-build005, built from revision 3056 of our source tree, to our weekly builds folder. Direct links to download the core server and the DSML gateway are available in that folder.

I have also updated the archive that may be used to install OpenDS via Java Web Start. You may launch it using the Java Web Start URL, or visit the project site for more information.

NOTE: -- Even though it is displayed as an option in the QuickSetup installer, we do not support upgrading from previous OpenDS builds to the 1.0.0-build005 release. There are some changes in this release that are not backward compatible with the configuration used by previous releases, and these changes may cause the upgrade process to fail.

Detailed information about this build is available on the project site. Some of the changes that have been incorporated since OpenDS 1.0.0-build004 include:
  • Revision 2796 (Issue #2030) -- Update the filesystem entry cache to provide the ability to use a compact entry encoding.

  • Revision 2804 (Issues #2104, 2162) -- Update a number of the command-line utilities so that they operate in interactive mode rather than non-interactive mode by default.

  • Revision 2806 -- Update the account status notification handler API so that it is possible to provide additional properties along with the notification. This makes it possible to develop account status notification handlers that can act more intelligently and/or provide more useful information.

  • Revision 2811 (Issue #2097) -- Fix a problem in which total update initialization can fail to send a message to the replication cache.

  • Revision 2820 (Issues #43, 72) -- Implement support for the numSubordinates and hasSubordinates virtual attributes. Also, provide a new dbtest tool, which can be used to perform low-level debugging for the backends using the Berkeley DB Java Edition.

  • Revision 2824 (Issue #581) -- Provide an SMTP account status notification handler that can be used to send e-mail messages whenever an account status notification is generated. The notification message can be sent to the user that is the target of the notification and/or a specified set of administrators. The messages that will be sent are generated based on user-editable templates.

  • Revision 2843 (Issue #1831) -- Implement complete support for an interactive mode for dsconfig. The tool now provides a menu-driven interface for examining and updating the server configuration.

  • Revision 2856 -- Update the CreateRCScript tool so that it provides the ability to specify the user that the server should run as, and also lets the user specify the JAVA_HOME and JAVA_ARGS settings that should be used. Also, update the start-ds and stop-ds commands to support a "--quiet" argument, which causes them to not generate any output. This mode will be used when starting and stopping the server through the generated RC script.

  • Revision 2877 -- Fix a memory leak that can occur when a backend based on the Berkeley DB Java Edition is disabled.

  • Revision 2879 (Issue #2180) -- Fix a problem in the JE backend in which contention on an index key might cause that key to contain an incomplete or incorrect value.

  • Revision 2882 (Issue #2205) -- Fix a problem that caused replication to behave incorrectly when a replicated change included an attribute type that was not defined in the schema of the target server.

  • Revision 2889 (Issue #2158) -- Add support for storing compressed schema representations in the JE backend and re-enable the compact entry encoding by default.

  • Revision 2894 -- Add a number of new configuration definitions for objects that were previously using "generic" definitions. This will make it much easier for users to create new instances of these kinds of configuration objects.

  • Revision 2899 -- Add new directory environment properties that can be used to indicate whether the server should maintain a configuration archive and, if so, the maximum number of archived configurations that should be maintained.

  • Revision 2900 (Issue #1945) -- Update the server so that it has the ability to save a copy of its current configuration into a ".startok" file whenever it starts successfully. Also, expose an option in the start-ds script and in the directory environment configuration that provides the ability to start the server using the "last known good" configuration rather than the current configuration.

  • Revision 2904 (Issues #1481, 2031) -- Add the ability to set any Berkeley DB JE property in the server configuration, for both backends based on the Berkeley DB Java Edition and the filesystem entry cache.

  • Revision 2913 (Issue #257) -- Implement support for a plugin that can be used to maintain referential integrity within the server. Whenever an entry is deleted or renamed, then any references to that entry in a specified set of attributes will be removed or renamed accordingly.

  • Revision 2921 -- Update the LDAP connection handler to explicitly close the selector when it is disabled or the server is shut down to prevent problems with being unable to re-bind to that port when the server is restarted.

  • Revision 2926 (Issue #139) -- Implement support for a maximum blocked write time limit in the LDAP connection handler. If an attempt to write data to the client is stalled for too long, then the client connection will be terminated.

  • Revision 2932 (Issue #261) -- Implement support for a 7-bit clean plugin, which can be used to ensure that the values of a specified set of attributes will only be allowed to contain ASCII characters.

  • Revision 2933 (Issue #2218) -- Update the LDIFPluginResult object to provide a mechanism that can be used to indicate why an entry should not be imported/exported.

  • Revision 2935 (Issue #1830) -- Implement support for secure communication in the dsconfig utility.

  • Revision 2950 (Issue #2216) -- Implement support for an LDIF connection handler, which may be configured to watch for new LDIF files to be created in a specified directory and have changes defined in those files automatically applied in the server through internal operations.

  • Revision 2955 (Issue #2181) -- Implement support for delete and modify operations in the task backend.

  • Revision 2961 -- Update a number of command-line tools that can be used to perform operations either directly against a backend or through the task backend so that if a port number and/or bind DN are provided, then the tool will default to using the tasks interface.

  • Revision 2966 -- Implement support for encryption and authentication when using replication.

  • Revision 2974 (Issue #2155) -- Update the server configuration so that a password storage scheme is referenced by its DN rather than the storage scheme name.

  • Revision 2986 -- Update the replication changelog database so that it implements the backend API. This provides the ability to backup and restore the changelog database, and provides a groundwork for future LDAP access to the changelog contents.

  • Revision 2998 (Issue #1594) -- Provide the ability to expose a monitor entry for the server entry cache.

  • Revision 2999 (Issue #2057) -- Update the server to provide a basic framework to control when plugins will be invoked. In particular, this adds the ability to indicate whether a plugin should be invoked for internal operations, and it also adds the ability to have plugins that are notified whenever changes are applied through synchronization. The unique attribute plugin has been updated so that it can detect uniqueness conflicts introduced through synchronization and generate an alert to notify administrators of the problem.

  • Revision 3006 -- Make a number of minor changes to improve server performance.

  • Revision 3008 -- Update the server configuration handler to fix a problem in which some change listeners may not be notified when the associated entry is updated.

  • Revision 3024 -- Make a number of additional changes to improve server performance.

  • Revision 3031 (Issues #1335, 1336, 1878, 2201, 2250) -- Provide new utilities that can be used to configure the server to participate in a replication environment.

  • Revision 3033 -- Upgrade the Berkeley DB Java Edition library to version 3.2.44.

  • Revision 3044 -- Make a couple of minor changes to improve server performance.

  • Revision 3048 -- Add a new tool that may be used to manage tasks defined in the server.

  • Revision 3051 (Issue #2059) -- Display SHA-1 and MD5 fingerprints of a certificate instead of the complete certificate when prompting the user in the status panel about whether the certificate should be trusted.

  • Revision 3054 -- Update the server so that it is possible to call EmbeddedUtils.startServer after having previously called EmbeddedUtils.stopServer. Previously, the server shutdown process did not leave the server in a sufficient state to allow it to be restarted in the same JVM.

Friday Jun 08, 2007

OpenDS 0.9.0-build003 is now available

I have just uploaded OpenDS 0.9.0-build003, built from revision 2052 of our source tree, to our weekly builds folder. Direct links to download the core server and the DSML gateway are available in that folder.

I have also updated the archive that may be used to install OpenDS via Java Web Start. You may launch it using the Java Web Start URL, or visit the project site for more information.

Detailed information about this build is available on the project site. Some of the changes that have been incorporated since OpenDS 0.9.0-build002 include:
  • Revision 1997 (Issue #1749) -- Update the way that privileges are evaluated by the server. Previously, they were always evaluated based on the authentication identity. Now, all privileges except proxied-auth are evaluated based on the authorization identity.

  • Revision 2000 (Issue #1760) -- Fix a problem in the access control implementation that could prevent the use of operational attributes in the userattr bind rule.

  • Revision 2001 -- Rename the default error log file name from "error" to "errors" in order to be more consistent with other products.

  • Revision 2002 (Issue #1758) -- Update the server to provide a lockdown mode. This is a mode in which the server will only allow client connections over loopback interfaces, and will reject all requests from non-root users. A task has been added that can allow an administrator to manually enable or disable this mode, and an internal API is available to expose it to other server components.

  • Revision 2004 (Issue #609) -- Update the replication mechanism to provide modify conflict resolution for single-valued attributes. This uses a different mechanism than for multi-valued attributes and can allow the server to maintain less historical information for the attribute.

  • Revision 2009 (Issue #1761) -- Fix a problem that could prevent the QuickSetup installer from running properly (especially on Windows systems) if JAVA_HOME is not set.

  • Revision 2010 -- Fix a problem in the error logger that prevented an override severity of "all" from being handled properly.

  • Revision 2011 (Issue #1753) -- Fix a problem on Windows systems in which arguments could be incorrectly interpreted when the setup utility was run manually.

  • Revision 2017 (Issue #1601) -- Update the QuickSetup and Status Panel tools to improve the way that they handle focus changes between components so that it is easier to interact with these tools using only the keyboard.

  • Revision 2021 (Issue #1616) -- Update the QuickSetup tool so that it will always provide a button that can be used to launch the status panel even if the installation fails.

  • Revision 2024 (Issue #1634) -- Update the GUI tools so that when a text field gets input focus, its text is automatically selected.

  • Revision 2025 (Issue #1764) -- Fix a problem in the replication initialization where it can enter an infinite loop if there is no replication server available.

  • Revision 2026 (Issue #1117) -- Provide an entry cache implementation that is backed by a Berkeley DB JE instance. The backing database can be placed on a tmpfs or other kind of memory-based filesystem to allow for a space-efficient caching mechanism.

  • Revision 2042 (Issue #1750) -- Update the access control handler so that if it encounters any access control rules that cannot be parsed when the server is starting up, the errors will be logged and the server will be placed in lockdown mode. This helps avoid problems in which an incorrectly-specified access control rule would not be enforced as the administrator intended and could inadvertently grant too much access to users.

  • Revision 2045 (Issue #1729) -- Make changes to the server to allow for better integration with the Penrose virtual directory product.

  • Revision 2046 (Issue #1633) -- Ensure that the JMX connection handler is disabled by default. Given that there is currently no way to configure it in the QuickSetup utility, it is better to have it disabled than running, potentially without the administrator knowing about it.

  • Revision 2048 -- Update the global ACI definitions so that they allow read access to the entryUUID operational attribute.

  • Revision 2049 (Issues #660, 1675, 1770) -- Provide a new mechanism for encoding entries. It allows the DN to be excluded from the encoded entry (which can be helpful for the filesystem entry cache), and also compresses the object class sets and attribute descriptions to conserve space and improve encode/decode performance.

  • Revision 2050 (Issue #1775) -- Add a virtual attribute provider that can be used to assign entryUUID values for entries in private backends. The entryUUID values for these entries will be based on an MD5 digest of the normalized DN, but this should not present an instability problem because these entries aren't allowed to be renamed.

  • Revision 2051 (Issues #1765, 1776) -- Eliminate the search-unindexed privilege, since the unindexed-search privilege was added to do the same thing. Also, eliminate the index-rebuild privilege and fold all of its functionality into the ldif-import privilege, since having it as a separate privilege didn't add much value and created unnecessary administrative overhead.

  • Revision 2052 -- Update the entry cache initialization process so that a default entry cache is always instantiated before the backends are brought online. This helps avoid problems in backends that attempt to interact with the cache before the full entry cache initialization is complete.

Monday Jun 04, 2007

A Comparison of OpenDS and DSEE Functionality

I've been asked to give a brief presentation on OpenDS. The target audience is a group of people that are presumably already familiar with our existing DSEE product, so there's not much need to go into detail on what a directory is and why you might need one. I've also only been given about 10 minutes to talk about it, so I can't go into a lot of detail. With this in mind, I thought that one of the most pertinent topics to cover is a quick overview of where OpenDS is today in comparison to DSEE.

I have uploaded the slides for this presentation to the project site. I apologize that the presentation is relatively light on content, but with only ten minutes I don't have time for much more than a very high-level overview. More detailed information is available on the OpenDS web site, and you can also check our issue tracker for even more detail.

Friday Jun 01, 2007

OpenDS 0.9.0-build002 is now available

I have just uploaded OpenDS 0.9.0-build002, built from revision 1988 of our source tree, to our weekly builds folder. Direct links to download the core server and the DSML gateway are available in that folder.

I have also updated the archive that may be used to install OpenDS via Java Web Start. You may launch it using the Java Web Start URL, or visit the project site for more information.

Detailed information about this build is available on the project site. Some of the changes that have been incorporated since OpenDS 0.9.0-build001 include:
  • Revision 1924 -- Add support for trailing arguments in the command-line parser.

  • Revision 1926 (Issue #1606) -- Fix an access control problem that could cause some attributes to be incorrectly excluded from search results. Also, correct a problem with the way that userattr bind rule was interpreted.

  • Revision 1928 (Issue #1621) -- Fix an access control problem that could cause incorrect evaluation for search filters using a NOT component.

  • Revision 1934 (Issue #202) -- Fix a problem that could cause warnings about setting log file permissions on Windows systems.

  • Revision 1951 (Issue #1085) -- Make changes to the synchronization protocol to ensure that it can remain extensible and backward compatible across OpenDS versions.

  • Revision 1953 (Issue #1623) -- Fix a problem that prevented the access control handler from properly applying a target that was equal to the root DSE.

  • Revision 1966 (Issue #1103) -- Add monitor data for replication naming/modify conflicts, including the number of unresolved naming conflicts, the number of resolved naming conflicts, and the number of resolved modify conflicts since startup. This change was contributed by Chris Drake.

  • Revision 1967 (Issue #1561) -- Make sure that all threads are terminated before returning upon disabling a replication domain.

  • Revision 1968 (Issue #1323) -- Fix a problem that could cause error messages on startup with replication enabled due to two replication servers connecting to each other at the same time.

  • Revision 1975 (Issue #1624) -- Fix a string index out of range exception that could be encountered during a replication total update.

  • Revision 1982 (Issue #1620) -- Fix a problem that could cause get effective rights results to be incorrect for the delete and proxy rights.

  • Revision 1984 -- Fix a misplaced quotation mark in the server access log for search operations that included the proxied authorization control.

Friday May 18, 2007

OpenDS 0.9.0-build001 is now available

I have just uploaded OpenDS 0.9.0-build001, built from revision 1918 of our source tree, to our weekly builds folder. Direct links to download the core server and the DSML gateway are available in that folder.

I have also updated the archive that may be used to install OpenDS via Java Web Start. You may launch it using the Java Web Start URL, or visit the project site for more information.

Note that starting with this build, we have changed our build numbering system to be more consistent with other projects. The base version number is now that of the next official release we are working toward (in our case 0.9.0, since 0.8.0 was released last week), rather than the number of the last official release, as it was in the past.

Some of the changes that have been incorporated since the 0.8.0 release build include:
  • Revision 1835 -- Update the QuickSetup code to eliminate references to classes in OpenDS.jar, which could force QuickSetup to have to download that JAR file before displaying the setup box instead of being able to download it in the background.
  • Revision 1837 -- Update the Windows batch files so that they will pause if an error occurs because no suitable JVM was found. This makes it possible for users to see the error message if the batch file was launched in a graphical mode (e.g., by double-clicking the icon in Windows Explorer).
  • Revision 1839 -- Update the status panel so that it will resize itself whenever the user first authenticates. This will allow it to be big enough for the user to see all of its content.
  • Revision 1841 -- Update the status panel so that it does not display replication monitoring information if replication is not enabled.
  • Revision 1842 (Issue #1585) -- Update the server so that the audit logger is disabled by default. Also, change the format of the audit log messages so that they use standard LDIF change syntax and therefore can be easily replayed if necessary.
  • Revision 1843 (Issue #1584) -- Update the DSML gateway so that it properly treats the request ID as optional rather than required. Also, update the DSML search processing so that it is more forgiving when parsing scope and deref policy strings.
  • Revision 1850 (Issue #1597) -- Fix a typo in the name of the "pilotPerson" object class.
  • Revision 1856 (Issue #612) -- Update the replication code so that it can replay operations in parallel using multiple threads while still preserving dependencies between operations (e.g., ensuring that a child is always added after its parent).
  • Revision 1861 (Issue #1604) -- Update the status command so that it allows reading the user password from standard input.
  • Revision 1868 (Issue #1507) -- Update the search processing code of the Berkeley DB JE backend so that any unindexed searches will be checked against the virtual attribute subsystem to see if any of the virtual attribute providers may be used to process the search.
  • Revision 1873 (Issue #1502) -- Fix an issue with the override severity configuration for the error logger so that it works as described in the documentation. Also, rename the error and debug logging levels so that they use all lowercase characters and dashes rather than underscores.
  • Revision 1875 (Issue #1430) -- Update the LDAP protocol tools so that they provide a "--version" argument to display information about the version of the tool.
  • Revision 1889 (Issue #1283) -- Change the filename for backups created by the Berkeley DB JE backend so that they no longer contain a ".zip" extension. Even though some forms of the backups do use zip format, others (e.g., when encryption is enabled) do not.
  • Revision 1890 (Issues #1479, 1587, 1606) -- Update the access control processing code so that operational attributes do not get automatically included by clauses like (targetattr="*").
  • Revision 1897 (Issue #1615) -- Fix a problem that could prevent the server from starting under certain network configurations.
  • Revision 1907 -- Perform miscellaneous cleanup and bugfixes identified by the FindBugs utility.
  • Revision 1908 (Issue #1614) -- Add a stub to preserve the previous HistoricalCsnOrderingMatchingRule in its original package to allow older databases to continue to be used with upgraded versions of the server.
  • Revision 1918 (Issue #1622) -- Add global ACIs that allow anonymous read access to certain key operational attributes, like those in the root DSE and schema subentries, as well as server-wide attributes like entryDN, modifiersName, and modifyTimestamp.

OpenDS and Other Sun-Sponsored Open Source Projects

Last night, I gave a talk at CACTUS (the Capital Area Central Texas UNIX Society). Since it was a UNIX-focused group, the first part of the talk was about general open source projects that Sun is involved with, including OpenSolaris, OpenSPARC, and OpenJDK. The second part of the talk was more specific to OpenDS, including general information, information about its current state and what still needs to be done, and how people can get involved.

As requested, I have uploaded the slides for my presentation.

[UPDATE] -- We have now posted the slides and an MP3 recording of the presentation to the OpenDS documentation wiki.

Monday May 14, 2007

The OpenDS Virtual Attribute Subsystem

One of the key OpenDS components that makes virtual static groups possible is the virtual attribute subsystem. Virtual attributes are those attributes whose values are computed on the fly rather than actually being stored in the database. There are a number of uses for virtual attributes in the server, and there is an API (org.opends.server.api.VirtualAttributeProvider) that can be used to create new types of virtual attributes.

Some of the virtual attribute providers we have defined in OpenDS include:
  • The entryDN provider -- This is used to compute the entryDN operational attribute, which simply contains the DN of the entry (as defined in draft-zeilenga-ldap-entrydn).
  • The subschemaSubentry provider -- This is used to compute the subschemaSubentry operational attribute, which is used to specify the location of the schema governing the associated entry (as defined in RFC 4512).
  • The isMemberOf provider -- This is used to compute the isMemberOf attribute, which lists the DNs of the groups in which the associated user is a member.
  • The member provider -- This is used to compute the member or uniqueMember attribute for virtual static groups.
  • The user-defined provider -- This is used to allow users to define their own virtual attributes that will appear in entries based on a given set of criteria. More information about user-defined virtual attributes is provided below.

Virtual Attribute Configuration

Virtual attributes are configured below "cn=Virtual Attributes,cn=config". These entries need to have the ds-cfg-virtual-attribute object class, which requires the following attributes:
  • ds-cfg-virtual-attribute-class -- This specifies the class providing the virtual attribute logic.
  • ds-cfg-virtual-attribute-enabled -- This indicates whether the virtual attribute should be enabled so that it can generate values for the target entries.
  • ds-cfg-virtual-attribute-type -- This specifies the name of the attribute type for which the values will be generated.
  • ds-cfg-virtual-attribute-conflict-behavior -- This specifies how the server should behave if an entry already has one or more real values for an attribute that could be virtually generated. Allowed values are "real-overrides-virtual" (to indicate that only the real values should be used), "virtual-overrides-real" (to indicate that the real values should be ignored and only the virtual values should be used), and "merge-real-and-virtual" (in which both the real and virtual values will be used).
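The three conflict behaviors above can be sketched as follows. This is a minimal illustration of the semantics just described, not the actual OpenDS implementation; the function name is hypothetical.

```python
# Sketch of how the three conflict-behavior settings combine real and
# virtual values for a single attribute (illustrative only).

def resolve_values(behavior, real_values, virtual_values):
    """Return the values a client would see for one attribute."""
    if behavior == "real-overrides-virtual":
        # Virtual values are used only when no real values exist.
        return real_values if real_values else virtual_values
    if behavior == "virtual-overrides-real":
        # Real values are ignored whenever virtual values are generated.
        return virtual_values if virtual_values else real_values
    if behavior == "merge-real-and-virtual":
        # Both sets are combined, preserving order and dropping duplicates.
        seen, merged = set(), []
        for value in real_values + virtual_values:
            if value not in seen:
                seen.add(value)
                merged.append(value)
        return merged
    raise ValueError("unknown conflict behavior: " + behavior)

print(resolve_values("real-overrides-virtual", ["78701"], ["78727"]))
```

Here the real value "78701" wins because the behavior is real-overrides-virtual; with merge-real-and-virtual, the client would see both values.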

With only the above configuration attributes, the virtual attribute may be generated for all entries. If you wish to pare down the set of entries in which the virtual attribute may be present, you can use one or more of the additional configuration attributes (all of which are multivalued):
  • ds-cfg-virtual-attribute-base-dn -- This specifies the base DN(s) for the branches below which the virtual attribute may be used. If this is present, then only entries below one of the specified base DNs may include the virtual attribute.
  • ds-cfg-virtual-attribute-filter -- This specifies a search filter that may be used to control the entries in which the virtual attribute may be used. If this is present, then only entries matching at least one of the specified filters may include the virtual attribute.
  • ds-cfg-virtual-attribute-group-dn -- This specifies the DN(s) for the groups whose members will be allowed to have this virtual attribute. If this is present, then only user entries belonging to one of the specified groups may include the virtual attribute.
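The scoping rules above can be summarized in a small sketch: a criterion that is not configured is skipped entirely, and within a configured criterion, matching any one value is enough (the attributes are multivalued). The helper below is purely illustrative (the names are made up, and suffix matching on DN strings is a simplification of real subordinate-DN matching).

```python
# Illustrative sketch of the scoping checks for a virtual attribute.
# An entry is in scope only if it passes every *configured* criterion.

def in_scope(entry_dn, entry_matches_filter, member_of_groups,
             base_dns=None, filters=None, group_dns=None):
    # Base DN check: entry must be below at least one configured base DN.
    # (String suffix matching stands in for real DN subordinate matching.)
    if base_dns and not any(entry_dn.endswith(base) for base in base_dns):
        return False
    # Filter check: entry must match at least one configured filter.
    if filters and not any(entry_matches_filter(f) for f in filters):
        return False
    # Group check: entry must belong to at least one configured group.
    if group_dns and not any(g in member_of_groups for g in group_dns):
        return False
    return True
```

For example, with only a base DN configured, an entry under ou=People,dc=example,dc=com is in scope while an entry under a different branch is not.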

User-Defined Virtual Attributes

User-defined virtual attributes can be used to supply specific administrator-supplied values to entries matching the virtual attribute criteria. The net effect is essentially the same as what you can get using the Class of Service (CoS) capabilities of the Sun Java System Directory Server, but I think that the implementation and configuration are much more straightforward (although I may be a bit biased since I wrote the code).

In order to create a user-defined virtual attribute, add a new entry to the server configuration. It should contain the ds-cfg-user-defined-virtual-attribute object class (which extends the ds-cfg-virtual-attribute class and therefore accepts all of the configuration attributes described above), and it should have at least one value for the ds-cfg-virtual-attribute-value attribute to specify the value(s) that entries matching the criteria should be given. The ds-cfg-virtual-attribute-class attribute should be set to "org.opends.server.extensions.UserDefinedVirtualAttributeProvider".

For example, the following configuration entry assigns a default postalCode value for everyone in the Austin office (although if they already have a postalCode value in their entry, it will be used instead of the virtual value):
dn: cn=Austin postalCode,cn=Virtual Attributes,cn=config
objectClass: top
objectClass: ds-cfg-virtual-attribute
objectClass: ds-cfg-user-defined-virtual-attribute
cn: Austin postalCode
ds-cfg-virtual-attribute-class: org.opends.server.extensions.UserDefinedVirtualAttributeProvider
ds-cfg-virtual-attribute-enabled: true
ds-cfg-virtual-attribute-type: postalCode
ds-cfg-virtual-attribute-value: 78727
ds-cfg-virtual-attribute-conflict-behavior: real-overrides-virtual
ds-cfg-virtual-attribute-base-dn: ou=People,dc=example,dc=com
ds-cfg-virtual-attribute-filter: (&(l=Austin)(st=Texas))

Note that because of the way that virtual attributes are implemented in OpenDS, you can use them to supply values for pretty much any kind of attribute, including operational attributes. For example, you could use them to set the ds-pwp-password-policy-dn operational attribute to give users a custom password policy, ds-rlim-size-limit to define a custom size limit, or ds-privilege-name to assign one or more privileges. As an illustration, the following virtual attribute configuration entry gives a special set of privileges to everyone in the "Administrators" group:
dn: cn=Administrator Privileges,cn=Virtual Attributes,cn=config
objectClass: top
objectClass: ds-cfg-virtual-attribute
objectClass: ds-cfg-user-defined-virtual-attribute
cn: Administrator Privileges
ds-cfg-virtual-attribute-class: org.opends.server.extensions.UserDefinedVirtualAttributeProvider
ds-cfg-virtual-attribute-enabled: true
ds-cfg-virtual-attribute-type: ds-privilege-name
ds-cfg-virtual-attribute-value: modify-acl
ds-cfg-virtual-attribute-value: config-read
ds-cfg-virtual-attribute-value: config-write
ds-cfg-virtual-attribute-value: ldif-import
ds-cfg-virtual-attribute-value: ldif-export
ds-cfg-virtual-attribute-value: backend-backup
ds-cfg-virtual-attribute-value: backend-restore
ds-cfg-virtual-attribute-value: password-reset
ds-cfg-virtual-attribute-value: update-schema
ds-cfg-virtual-attribute-conflict-behavior: merge-real-and-virtual
ds-cfg-virtual-attribute-group-dn: cn=Administrators,ou=Groups,dc=example,dc=com

Friday May 11, 2007

Virtual Static Groups in OpenDS

Big static groups (with tens or hundreds of thousands of members, or more) are a problem in many large enterprise directories. Since a static group contains an explicit list of the DNs of its members, the more members it contains, the larger the entry will become. Maintaining these groups can become a management problem and isn't very efficient, and some types of searches involving them can be slow as well. Dynamic groups are much better when the groups contain thousands or millions of members, but the problem is that many client applications don't support them. It's easy to understand why, since the client does have a significant amount of work to do in order to determine whether a given user is a member of a dynamic group, but it's also unfortunate because it leads to a lot of cases in which directories are forced to end up with large static groups just to suit those applications.

OpenDS provides an interesting solution to this problem in the form of virtual static groups. A virtual static group is a special type of entry that behaves like a static group, but all operations that attempt to determine membership are passed through behind the scenes to another group. In many cases, virtual static groups can give you the management and scalability benefits of dynamic groups while still maintaining compatibility with clients that only support static groups.

In order to use virtual static groups, you first need a dynamic group that will provide the membership criteria. For the purposes of this example, let's say that we have the following entry:
dn: cn=Austin Users,ou=Groups,dc=example,dc=com
objectClass: top
objectClass: groupOfURLs
cn: Austin Users
memberURL: ldap:///ou=People,dc=example,dc=com??sub?(&(l=Austin)(st=Texas))
This group will automatically include any user with a location of Austin and a state of Texas. It's a much better choice for a dynamic group than a static group because the set of members will be automatically adjusted as new users are added, existing users are removed, or if someone moves from one place to another.
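The membership criteria encoded in that memberURL correspond to an ordinary subtree search. As a rough illustration (assuming a server with this sample data loaded), an equivalent standalone search might look like:

```shell
# Roughly the same search the memberURL above describes:
# base ou=People,dc=example,dc=com, subtree scope,
# filter (&(l=Austin)(st=Texas)); "1.1" requests no attributes.
bin/ldapsearch -b 'ou=People,dc=example,dc=com' -s sub \
  '(&(l=Austin)(st=Texas))' 1.1
```

Any entry returned by this search is a member of the dynamic group.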

To create a virtual static group that allows clients to interact with the Austin Users group in a static manner, add the following entry:
dn: cn=Virtual Static Austin Users,ou=Groups,dc=example,dc=com
objectClass: top
objectClass: groupOfUniqueNames
objectClass: ds-virtual-static-group
cn: Virtual Static Austin Users
ds-target-group-dn: cn=Austin Users,ou=Groups,dc=example,dc=com
With this group, uniqueMember will be treated as a virtual attribute (if we had used the groupOfNames object class instead of groupOfUniqueNames, then the member attribute would have been used instead). The key here is the ds-virtual-static-group auxiliary object class and the corresponding ds-target-group-dn attribute. When OpenDS sees these, it knows that it should treat the entry as a virtual static group.

Now, consider that the following users exist in the directory:
dn: uid=nawilson,ou=People,dc=example,dc=com
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
uid: nawilson
givenName: Neil
sn: Wilson
cn: Neil Wilson
l: Austin
st: Texas

dn: uid=bowendk,ou=People,dc=example,dc=com
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
uid: bowendk
givenName: Don
sn: Bowen
cn: Don Bowen
l: Peoria
st: Illinois
With these entries, the user "nawilson" will be considered a member of the group because he is located in Austin, Texas, whereas the user "bowendk" will not. A quick test with ldapsearch confirms this:
$ bin/ldapsearch -b 'cn=Virtual Static Austin Users,ou=Groups,dc=example,dc=com' \
  -s base --countEntries '(uniqueMember=uid=nawilson,ou=People,dc=example,dc=com)' 1.1
dn: cn=Virtual Static Austin Users,ou=Groups,dc=example,dc=com

# Total number of matching entries:  1

$ bin/ldapsearch -b 'cn=Virtual Static Austin Users,ou=Groups,dc=example,dc=com' \
  -s base --countEntries '(uniqueMember=uid=bowendk,ou=People,dc=example,dc=com)' 1.1
# Total number of matching entries:  0
Note that while dynamic groups are very efficient for determining whether a given user is a member, they can be very inefficient when it comes to retrieving the entire set of members. The same is true for virtual static groups that use dynamic groups to obtain their membership information: if you were to actually retrieve the uniqueMember attribute to list all of the members, that could be a very expensive operation. OpenDS provides a way to deal with this in the form of the ds-cfg-allow-retrieving-membership configuration attribute. It defaults to "false", which means that queries like those above that try to determine whether a given user is a member of the group will succeed, but the uniqueMember attribute won't be included in the resulting entry even if the client requests it. Most well-behaved clients won't ask for the membership attribute anyway, and of those that do, many don't actually use it, so this doesn't cause a problem. However, if you do have an application that expects to retrieve the membership attribute and won't behave properly if it isn't returned, then you're probably stuck with a traditional static group.
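If you do need clients to be able to read the full member list, the setting can be flipped through the server configuration. As a hypothetical sketch (the exact dsconfig subcommand and property names here are assumptions and may differ between builds):

```shell
# Hypothetical sketch: allow clients to retrieve the virtual
# uniqueMember values from virtual static groups. The subcommand
# and property names are assumptions, not confirmed syntax.
bin/dsconfig set-virtual-attribute-prop \
  --name "Virtual Static uniqueMember" \
  --set allow-retrieving-membership:true
```

Keep in mind the caveat above: with a large dynamic target group, retrieving the full membership can be very expensive.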

Also note that there's no requirement that virtual static groups be used only with dynamic groups. You can actually use them with any type of group that OpenDS supports, other than another virtual static group (to avoid circular references); right now we've only got static and dynamic groups, but we may add other types in the future. For example, if you've got a static group based on the groupOfUniqueNames object class but your client only supports groups with the groupOfNames class, then you could create a virtual static group with the groupOfNames structural object class and point its ds-target-group-dn attribute at the static group with the groupOfUniqueNames class.
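To make that last scenario concrete, a groupOfNames-based virtual static group that mirrors an existing groupOfUniqueNames static group might look like the following (the group names here are hypothetical):

```ldif
dn: cn=Mirrored Group,ou=Groups,dc=example,dc=com
objectClass: top
objectClass: groupOfNames
objectClass: ds-virtual-static-group
cn: Mirrored Group
ds-target-group-dn: cn=Real Unique Members Group,ou=Groups,dc=example,dc=com
```

Because this entry uses groupOfNames, membership queries against its member attribute are answered from the target group behind the scenes.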

Tuesday May 08, 2007

OpenDS builds on OpenJDK on OpenSolaris

Sun announced today that OpenJDK is now fully buildable. Aside from a few third-party components that Sun doesn't have the right to open source (which are currently available in binary-only form until that can be resolved), the entire JDK codebase is GPLv2 with the classpath exception. I decided to take a shot at building it, and it turns out to be a pretty simple process.

There are instructions available for building with NetBeans, but honestly I don't use NetBeans and didn't particularly want to install it just for this. Plus, if you really do need to have NetBeans to build it, then that somehow doesn't seem fair. At any rate, I didn't find not using NetBeans to be a problem.

The system that I was using was an Ultra 40 Workstation running Solaris Nevada (Sun's OpenSolaris distribution -- see and for details) build 61. I already had the Sun Studio 11 compilers installed, and the Subversion client is included as part of Nevada.

Here's what I did to build OpenJDK and then use that to build OpenDS:
  1. I started in /export/home/nawilson/sandbox, which is the directory that I use to hold all of the source code I check out from various repositories.

  2. I checked out the OpenJDK source code with the command:
    svn checkout --username guest
    When prompted for a password, I just pressed ENTER without typing anything.

  3. Once the checkout completed, I went into the openjdk/hotspot/trunk directory.

  4. I found the page, which is the HotSpot FAQ, and on that page I found a link to the osse-build-solaris-i586 file that can be used as a simple build script. I downloaded that file and copied it into the current working directory. I then edited it to point both ALT_BOOTDIR and ALT_JDK_IMPORT_PATH to "/usr/java".

  5. I edited my path so that /opt/SUNWspro/bin was the first directory contained in it. This was necessary to make sure that the Sun Studio 11 compiler was used instead of /usr/ucb/cc, which fails right away.

  6. I made the edited osse-build-solaris-i586 file executable and ran it without any arguments. About two hours later, I had a full JDK 1.6.0_01-b06 build in the build/solaris/jdk-solaris-i586 subdirectory.

  7. I returned to the /export/home/nawilson/sandbox directory and checked out OpenDS with the command:
    svn checkout --username guest
    Once the checkout completed, I went into the opends subdirectory.

  8. I set the JAVA_HOME environment variable to /export/home/nawilson/sandbox/openjdk/hotspot/trunk/build/solaris/jdk-solaris-i586 and ran "./" with no arguments. About 33 seconds later, I had a build/package/ file containing the core OpenDS server.

  9. I went into the build/package/OpenDS-0.8 directory and ran ./setup to launch our QuickSetup utility to configure the server, populate it with sample data, and start it up, and then used our graphical status panel to verify that everything looked fine.
I also later used "./ all" to perform a full build of OpenDS, including the DSML gateway and Javadoc documentation, and to run all 34,000+ unit tests. Everything passed with flying colors. As far as OpenDS is concerned, there is no distinguishable difference between the OpenJDK build that I just created and the real Java 1.6.0_01-b06 build that you can download from

OpenDS 0.8 Is Now Available

OpenDS has been public for about nine months now, and for all of that time we have tagged it as version 0.1. With our latest build, we're bumping that up to 0.8. In a few months we'll go to 0.9, and then to 1.0 a couple of months after that.

The easiest way to get OpenDS 0.8 is to use our Java Web Start installer (read more about it at Alternately, you can download the full server zip file at We also have a DSML gateway available as a WAR file at

If you haven't looked at OpenDS recently, here are some of the things we've added in the last few months:
  • We have added support for updating the schema with the server online. You can do this using modify operations, or using an "add schema file" task to add a new schema file. The schema file structure will be preserved, even when schema changes are replicated between servers.

  • We have added support for subordinate attribute types. Specifying a superior attribute will target all of its subordinate types (e.g., referencing the attribute "name" will include attributes like cn, sn, givenName, initials, c, l, st, o, ou, title, and generationQualifier).

  • We have added a lastmod plugin that maintains the creatorsName, createTimestamp, modifiersName, and modifyTimestamp attributes.

  • We have implemented a DSEE-compatible access control handler. This uses the aci attribute with a syntax that is handled like that in the Sun Java System Directory Server. See Managing Access Control for information on OpenDS support for access control.

  • We have implemented a privileges subsystem to make it possible to configure capabilities on a fine-grained level. You can use this to remove privileges from root users, or grant additional privileges to non-root users. See Root Users and Privileges for more information.

  • We have added support for the proxied authorization control, which makes it possible to perform an operation as one user while authenticated as another. We have also added support for using an alternate authorization identity when using SASL DIGEST-MD5 or PLAIN mechanisms.

  • We have added support for the get effective rights control, which can be used to determine what rights a user has for a particular entry.

  • We have added support for the server-side sort control, which can be used to sort entries before returning them to the client. We have also added support for the virtual list view control, which can be used in conjunction with the server-side sort control to retrieve the result set in pages rather than all at once.

  • We have added support for rejecting requests from unauthenticated clients. When operating in this mode, the server will require that clients authenticate before requesting any operations other than bind and StartTLS.

  • We have added support for groups. This includes static groups (both groupOfNames and groupOfUniqueNames variants) and dynamic groups (based on groupOfURLs). We have also added support for virtual static groups, which makes it possible to mirror a dynamic group as a static group.

  • We have improved support for SSL and StartTLS, including making it possible to configure them through the QuickSetup utility, and providing the ability to use different certificates for different listeners. We have added a number of new certificate mappers that can be used to map a certificate to a user entry when performing SASL EXTERNAL authentication.

  • We have added a number of password validators to the server. One looks at how similar the new password is to the user's current password. One looks at whether the new password matches the value of other attributes in the user entry. One looks at whether the new password matches a value in a dictionary. One looks at the sets of characters included in the new password. One looks at the number of unique characters in the password. One looks at whether there are strings of repeated characters in the password.

  • We have implemented support for the CRYPT password storage scheme, which may be needed by some UNIX clients to use OpenDS for authenticating users.

  • We have added a virtual attribute subsystem, including implementing support for the standard entryDN and subschemaSubentry attributes. We also have added support for isMemberOf, which will include the DNs of all groups in which the associated user is a member. We also have support for user-defined virtual attributes, much like the Class of Service functionality in the Sun Java System Directory Server.

  • We have added support for rebuilding indexes, either as an offline operation or with the server online using the tasks interface.

  • We have improved support for configuration archiving in the server, so that it includes changes made with the server offline. We have also implemented a mechanism for detecting external changes to the configuration file with the server online.

  • We have improved support for Windows systems, including the ability to run as a service.
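As an illustration of the subordinate attribute type support mentioned above, a filter that references the superior attribute "name" also matches values of its subordinate types. A hedged sketch (the data and base DN here are hypothetical):

```shell
# Matches entries where cn, sn, givenName, or any other subordinate
# type of "name" begins with "Neil" -- e.g., cn: Neil Wilson.
bin/ldapsearch -b 'dc=example,dc=com' -s sub '(name=Neil*)' dn
```

This lets clients search across all of the naming attributes without enumerating each one in the filter.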

Monday May 07, 2007

OpenDS Documentation Wiki

It seems that I've gone through another span of not writing much here, but as usual I've been busy with other things. OpenDS is coming along quite nicely, and we've added lots of new features and done a lot of cleanup work. One of the things that we've been focusing on recently is documentation. We've just recently moved all of our documentation to a wiki, and in the process we have updated things that were out of date and added lots of new content. You can find it at

Some of the notable content on the wiki includes:

Monday Mar 05, 2007

Read-Only Replicas Considered Harmful

Back in the dark ages, the Netscape/iPlanet Directory Server 4.x only supported single-master replication. No matter how many directories you had, only one instance was writable and all the others had to be read-only. If you tried to write to one of those read-only directories, you'd get a referral redirecting you to the master. In simple deployments, you'd have what was basically a star topology, where each of the read-only replicas was directly updated by the master. In more complicated environments, you might see replication hubs that accept changes from the supplier and forward them on to consumers, so that the supplier itself wasn't directly responsible for updating all of the read-only replicas. While the topology was simple, it didn't lend itself well to highly-available deployments, and it caused problems for applications that didn't handle referrals (although Directory Proxy Server was able to hide a lot of that from clients if it was installed).

When Directory Server 5.0 came out, we added the ability to have two masters (as long as they were both in the same data center). This was a big step forward for high availability, and in many deployments where two servers were enough to handle all of the load, you didn't need to have any read-only replicas. However, if you wanted to have more than two servers, or if you wanted to have servers in multiple data centers, then you still had to have read-only replicas.

When Directory Server 5.2 was released, we added support for up to four masters, and support for masters in different data centers. This was an even bigger leap forward. For the vast majority of single data center deployments, four servers are more than enough to handle all the client load, and in many two-data-center environments, two servers per data center were fine as well. However, you still needed those pesky read-only servers if you wanted to have more than two data centers with high availability in each one.

Now that Directory Server 6.0 is available, there's no longer any limit on the number of masters that you can have. You can make every server a master, and in the vast majority of environments that's exactly what you should do. No matter how many data centers you have or how many servers per data center, it's just plain easier if they're all masters. Note that you don't have to have them all directly connected to each other -- in larger environments spanning multiple data centers it's probably nice to have all of the local servers fully-interconnected but only a couple of cross-WAN links into other data centers -- but you can if you want. Some of the benefits of having only masters include:
  • Binary copy becomes much easier since you only have one type of server to manage. You can't use binary initialization from a master to a consumer (or vice versa), but if you have only masters then you can use binary copy between them as long as they meet the other constraints (e.g., they have the same system architecture, filesystem layout, and indexing configuration).

  • Password policy works better across the environment. If you have account lockout enabled and you try to authenticate to a read-only replica, then the failure counter will only be updated on that one replica but other servers in the environment won't know about it. But if it happens on a master, then that login failure will be replicated throughout the environment so that other servers will see it as well. The same is also true for the last login time functionality if you want to enable that.

  • If there are read-only replicas in the environment, then clients that try to modify them will need to be able to handle referrals, and even when they do follow a referral and send their write to a master, they may still have to deal with propagation delay if they immediately read the entry back from the read-only server. As I mentioned before, Directory Proxy Server can follow referrals on behalf of the client, and in fact the new release has features like server affinity that help avoid problems with propagation delay. However, if there are any clients that attempt to communicate directly with the server instead of going through a proxy, then it's a lot easier if all the servers are masters.

  • In the past, there were cases in which you were able to get higher modify performance if you constrained yourself to only sending writes to one server. That's not true anymore, and in fact you'll probably find that you get better overall write performance by spreading the writes out across multiple servers than you do if you just send them to one instance.

I have seen Directory Server 5.2 deployments that included read-only replicas just because the people who set things up thought that was just the way it was always done without thinking about whether or not it was the right approach. I have already seen a couple of cases with Directory Server 6 where people talking about how to deploy an environment were thinking about including read-only servers. Certainly it's still an option if you really do have a legitimate need for read-only servers, but don't feel like there's any need to do it that way simply because that's the way things were done in the past.

Note that with OpenDS, we're taking even more steps to help eliminate the last few potential arguments against making all servers masters. We're introducing an architecture where it's possible to separate the changelogs from the server instances (only some of the servers need to have changelogs, or you can put the changelogs on completely separate machines), so you can have masters without changelogs if you're concerned about the extra disk space associated with the changelog. We'll also be adding support for writable partial replicas (containing a subset of the attributes and/or a subset of the entries). If there are still other reasons that you think might tie into a scenario that requires read-only replicas, then let us know so we can think about ways to eliminate those roadblocks as well.

Thursday Mar 01, 2007

Data Distribution in DSEE 6

The latest version of the Sun Java Enterprise System was officially released today, and included in it is the 6.0 release of our Directory Server Enterprise Edition suite. There are some great changes in the Directory Server itself (no more limit on the number of masters, new graphical and command-line administrative interfaces, security improvements, added 64-bit platform support for Solaris x86/x64, etc.), and they'll make great fodder for future posts. However, I want to shift the focus of this entry to Directory Proxy Server. I haven't talked about it much in the past, but it has always offered very useful features like transparent load balancing and failover, improved compatibility for clients, data translation, and added security features. But Directory Proxy Server 6 takes a huge leap forward from its predecessor. Not only are there a lot of improvements in the core proxy functionality (e.g., operation-based load balancing, improved connection pooling, support for SASL EXTERNAL, etc.), but it also adds two major new categories of features: virtual directory operations and data distribution. In this post, I want to focus on data distribution.

The new data distribution capabilities in Directory Proxy Server 6 make it possible to dramatically scale the size and performance of your directory environment. On its own, the Directory Server is able to take advantage of large amounts of memory and large numbers of CPUs. However, eventually you're going to hit a limit on the amount of data you can put in a single box and still get acceptable performance. Some of our largest customers (both in terms of the number of entries in their directory environment and in the size of those entries) also have very strict response-time requirements (often single-digit milliseconds). To meet those requirements, you don't have the luxury of going to disk, so you've got to serve all of the data from memory (in some cases, going with a solid-state disk solution may be a possibility, but that's probably yet another topic for another time). Sun has some big machines (and for Directory Server, it's going to be hard to find anything available right now that can beat the Sun Fire X4600 with 16 Opteron cores and up to 128GB of memory, and if you've got to go monolithic then the E25K can hold over a terabyte of memory), but eventually there's a limit to what one box can hold.

Data distribution changes the game by allowing you to split up your data across multiple sets of servers. If one server can hold 25 million entries but you need to support 100 million, then you can break up the data into four sets. This is done in a manner that is virtually transparent to clients, so there's no need to artificially create hierarchy in your data or perform other kinds of transformations. When a client request comes into the Distribution Server, it figures out which set(s) of backend servers might need to be involved in processing that request, and then forwards it on to one of the servers in each of the sets (most of the time, only one set is involved, but some kinds of searches may need to involve multiple sets). You can customize how the data gets split up by specifying which distribution algorithm you want to use (or if you don't like any of them that are provided with the server, you can write your own), and you can customize the way that the Distribution Server picks the actual backend server within that set through a pluggable load balancing algorithm.

Another benefit that data distribution can provide is improved write performance. In the past, it's been easy to get improved read performance by simply adding more replicas, but that doesn't work for write operations because in a standard replicated environment, all of the changes have to go everywhere. With data distribution, replication only needs to occur between the servers in a backend set, so if you've got five sets of servers, then you've got the potential for five times the aggregate write performance. We've demonstrated this technology to a number of customers over the last couple of years, and we've seen some very impressive results.

I'll be the first to admit that data distribution isn't for everyone. It really is targeted at environments with amounts of data that can't fit on a single system, or at cases in which single-server write performance isn't adequate. There is a bit of a learning curve, and it's wise to put some thought into how best to split up the data. We've already got improvements lined up for when this functionality gets integrated into OpenDS that we hope will make it easier to use and lower the barrier to entry, and we're also making improvements that we hope will allow for more effective use of single-server (or single replicated set) deployments. If you're doing fine in your current environment and don't expect to grow a lot in terms of the amount of data or your performance requirements, then the traditional approach is probably still the best. But if you expect to see a lot of new data being added to the server, or much more stringent performance requirements, then data distribution might be right up your alley.

Monday Feb 12, 2007

LDAP, Transactions, and OpenDS

A few days ago, Trey (who, by the way, is doing a tremendous job in his new role as the OpenDS community liaison) wrote about LDAP directory servers versus relational databases. He neglected to mention one of my favorite benefits, which is the standardized protocol (which means that you don't have to rewrite your applications or at the very least change drivers if you switch from one server implementation to another, and you don't really have to worry about what exactly "varchar" means in the particular flavor you're using). But ignoring that, some of the comments were interesting. In this post, I want to focus on the ones regarding the ACID properties of the protocol and the server.

First, I do think that it's important to point out that both the Sun Java System Directory Server (through the Berkeley DB) and OpenDS (through the Berkeley DB Java Edition) use an underlying data store that fully supports ACID semantics, and we do make extensive use of transactions when interacting with it. In particular, any time a write operation is performed, the server will need to update multiple databases (in particular, the main id2entry DB and any associated indexes), and that is protected with a transaction to ensure that they will all be updated together as a single atomic unit, or that none of them get updated if some problem occurs. We also rely on the transactional nature for recovery in the event that the server isn't stopped gracefully (e.g., if the server should happen to crash or get forcefully killed, or if there's a hardware failure or underlying system crash) so that it comes back to a consistent state. The Directory Server is able to operate in a fully ACID-compliant manner so that any acknowledged change is guaranteed to be on stable storage before the result is returned to the client (although administrators can also configure the server to relax these restrictions if they can accept the trade-offs for better performance).

However, when most people are talking about the ACID-compliant nature (or potential lack thereof) of LDAP directories, they probably aren't talking about the underlying data store. They're talking about what is exposed through LDAP itself. It's true that the core protocol specification doesn't have much in the way of atomicity other than to say that multiple changes included in a single modification should all be applied atomically. However, there are a number of extensions to the protocol that can provide various forms of atomicity. They include:

  • Modify DN Operation -- In its simplest form, this operation simply changes the RDN of a leaf entry. However, it also provides the ability to move the entry below a new parent, and it doesn't necessarily need to be performed on a leaf entry. In fact, the modify DN operation can be used to rename an entire subtree (in which case all entries in that subtree will be renamed as a single atomic unit), although we do recommend that it be restricted to small subtrees to avoid locking huge portions of the tree while the operation is in progress. This capability is currently available in OpenDS, and will also be available in the imminent release of the Sun Java System Directory Server 6.0.

  • Subtree Delete Operation -- draft-armijo-ldap-treedelete defines a control which can be used to delete an entire subtree of entries as a single atomic operation. OpenDS currently supports this capability.

  • LDAP Increment Modify Extension -- RFC 4525 defines a new "increment" attribute modification type which can be used to atomically increment or decrement the integer value of an attribute, without the need to know its current value. This can be particularly useful in conjunction with the LDAP read entry controls as described below. OpenDS currently supports this capability.

  • LDAP Read Entry Controls -- RFC 4527 defines a pair of controls which can be used to atomically retrieve an entry as it appeared either immediately before or immediately after applying some change. This can be used to obtain an atomic "set and check" type of behavior, especially when combined with the increment extension. OpenDS currently supports this capability.

  • LDAP Assertion Control -- RFC 4528 defines a control which can be used to perform an operation only if a given assertion is satisfied. This can be used to obtain an atomic "check and set" type of behavior. OpenDS currently supports this capability.

  • LDAP Transactions -- draft-zeilenga-ldap-txn defines a set of extended operations and controls that can be used to perform multiple write operations as a single atomic entity. This draft isn't yet far enough along to have official OIDs assigned for these components, but we do intend to provide support for it in OpenDS.
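As a concrete example of the increment extension described above, the following LDIF change (for use with ldapmodify) atomically adds one to a counter attribute; the entry and attribute names here are hypothetical. If the request also includes the post-read entry control, the client learns the resulting value in the same atomic operation.

```ldif
# Hypothetical entry and attribute names, for illustration only.
dn: cn=Next UID,ou=Counters,dc=example,dc=com
changetype: modify
increment: nextUidValue
nextUidValue: 1
```

No prior read of the current value is needed, which eliminates the read-modify-write race that clients would otherwise have to manage themselves.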

With the capabilities that we already have and what we intend to provide in the future, hopefully we can put concerns about the ACID and transactional nature of directory servers to rest.

Friday Jan 26, 2007

Understanding Schema in OpenDS

It's been quite a while since my last post. It's very easy to get out of the habit, especially when there's so much else going on. But we're making lots of progress on OpenDS, and we're getting ready to start delving into some very interesting areas. We've recently had lots of good in-depth discussions about features like synchronization and proxy/virtual/distribution capabilities, we're also making headway in areas like schema management, configuration management, and access control, and we're hoping to have a project roadmap published in the near future.

This morning, I gave an internal presentation about LDAP schema in general, with emphasis on how it's treated in OpenDS. OpenDS is heavily dependent upon schema for correct operation, and also provides more obscure features like name forms, DIT content rules, and DIT structure rules that aren't as widely supported in other directories. The content of this presentation is available online at along with the rest of the OpenDS documentation that's currently available.
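As an example of one of those less common schema elements, a DIT content rule is identified by the OID of the structural object class it constrains (here, 2.16.840.1.113730.3.2.2, the OID of inetOrgPerson). A hypothetical definition, as it might appear in a schema entry, could look like this:

```ldif
dITContentRules: ( 2.16.840.1.113730.3.2.2
  NAME 'inetOrgPersonContent'
  DESC 'Hypothetical rule constraining inetOrgPerson entries'
  AUX ( strongAuthenticationUser )
  MUST ( uid )
  NOT ( telexNumber ) )
```

This rule would require uid in every inetOrgPerson entry, permit the strongAuthenticationUser auxiliary class, and prohibit the telexNumber attribute, even though the object class definitions alone would allow it.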

Other new documentation that has been added fairly recently includes an Introduction to OpenDS Development and a description of how to write a simple OpenDS plugin. We do hope to have more advanced plugin documentation in the near future once we get a more complete configuration framework in place, and we'll be getting some doc writers in the very near future, so hopefully we'll be able to provide even more information and keep it up to date as things change.
