Monday Nov 14, 2005

Unsupported Class Error with slamd

Not a gotcha, more of a reference point/information piece. If you try to import a job pack with Slamd and the major or minor class version is unsupported, you will get an error similar to the following.

java.lang.UnsupportedClassVersionError: com/sun/slamd/example/NewTestJobClass (Unsupported major.minor version 49.0)
        java.lang.ClassLoader.defineClass0(Native Method)
        java.lang.ClassLoader.defineClass(
        org.apache.catalina.loader.WebappClassLoader.findClassInternal(
        org.apache.catalina.loader.WebappClassLoader.findClass(
        org.apache.catalina.loader.WebappClassLoader.loadClass(
        org.apache.catalina.loader.WebappClassLoader.loadClass(
        java.lang.ClassLoader.loadClassInternal(
        java.lang.Class.forName0(Native Method)
        java.lang.Class.forName(
        com.sun.slamd.server.SLAMDServer.loadJobClass(
        com.sun.slamd.admin.JobPack.processJobPack(
        com.sun.slamd.admin.AdminServlet.handleInstallJobPack(
        com.sun.slamd.admin.AdminServlet.doPost(
        javax.servlet.http.HttpServlet.service(
        javax.servlet.http.HttpServlet.service(

which in this case was caused by attempting to import a job class built with a 1.5 JVM into a slamd server running on a 1.4 JVM. The workaround is simple: move your slamd server to a 1.5 JVM. The javadoc for UnsupportedClassVersionError sheds a little more light on the subject.
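If you want to check which compiler produced a suspect class file before importing it, the major.minor version sits right in the class file header. Here is a minimal sketch (Python used for brevity; not part of the original post) that reads the version from the first eight bytes:

```python
import struct

# Map class-file major versions to JDK releases (45 = 1.1 ... 49 = 5.0).
MAJOR_TO_JDK = {45: "1.1", 46: "1.2", 47: "1.3", 48: "1.4", 49: "5.0"}

def class_file_version(data: bytes):
    """Return (major, minor) from the first 8 bytes of a .class file."""
    magic, minor, major = struct.unpack(">IHH", data[:8])
    if magic != 0xCAFEBABE:
        raise ValueError("not a Java class file")
    return major, minor

# Example: header bytes of a class compiled with a 1.5 javac (49.0).
header = bytes.fromhex("cafebabe00000031")
major, minor = class_file_version(header)
print(major, minor, MAJOR_TO_JDK.get(major))  # 49 0 5.0
```

A major version of 49 means the class needs a 1.5 JVM, while a 1.4 JVM can only load up to 48 — exactly the mismatch in the stack trace above.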

Thursday May 19, 2005

Not nuking your Access Manager ldap config.....

This one caught me today, so it's time to share and help others avoid it ;). I wasn't watching what I had in a script that adds LDIF data to a directory server for an Access Manager benchmark we run as part of our ongoing Java Enterprise System benchmarking effort.

When you install Access Manager it creates a bunch of entries in your directory server related to the Access Manager. To add some user data on top of this, I generated a 100,000 user file with MakeLDIF from slamd, let's say it's /tmp/foo.ldif, and added it into my userRoot instance of the directory server using ldif2db.

ldif2db -n userRoot -i /tmp/foo.ldif
All fine, one would think, but ldif2db actually rebuilds the entire userRoot backend, and hence when I try to access the Access Manager login screen I get the following error in my logs (/var/opt/SUNWam/amAuthentication.error in this case).
"2005-05-19 15:09:11"   "Invalid Domain"        amAuthentication.error  AUTHENTICATION-20
"Not Available"    "Not Available" INFO    "Not Available" "Not Available" 
"cn=dsameuser,ou=DSAME Users,dc=jestest,dc=sun,dc=com"     "Not Available"
So what I should have done is back up the original contents and add them back in, like so
./db2ldif -n userRoot -a /tmp/bkup.ldif
./ldif2db -n userRoot -i /tmp/bkup.ldif -i /tmp/foo.ldif
And now back to my regular scheduled work....

[ update - May 20th ]
Just noticed I had a typo in the ldif2db ordering: the original LDIF file has to go first, or you end up in the situation I was in initially.

Wednesday Oct 06, 2004

SLAMD goes opensource...

SLAMD has gone opensource, released under the Sun Public License. You can download the lot from here [] or here [].

Friday Sep 03, 2004

A Sample Slamd Deployment - Benchmarking Directory Server

A few weeks ago I did a brief posting about Slamd, so I figured it's time to flesh it out with a few more details. I won't go into the details of setting up a slamd server; all of that is dealt with in the Slamd Documentation.

The Sample Rig

The rig I'm going to discuss here is a Sun Java System Directory Server box which I set up a few weeks ago. Hardware-wise, the directory server machine is a v20z; the client machines I use for this particular rig are six old Netra boxes, with an Ultra 30 as the slamd server.

Diagrammatically, the layout looks like this:

The rig is on a private subnet. The gigabit switch is dedicated to just this rig.

The LDAP Data, using MakeLDIF

One of the tools that comes with Slamd is MakeLDIF. Production LDAP data is generally not available for benchmarking purposes, so MakeLDIF lets you create a sample LDIF file that closely resembles real-world LDAP data.

For the rig above we have created sample datasets of one hundred thousand, two hundred and fifty thousand, one million and five million users. For the purposes of this article I will use a simplified version of the two hundred and fifty thousand user example.
First, you create a sample template to use with MakeLDIF, i.e.

define suffix=dc=example,dc=com
define maildomain=example.com
define numusers=250000

branch: [suffix]

branch: ou=People,[suffix]
subordinateTemplate: person:[numusers]

template: person
rdnAttr: employeeNumber
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
givenName: <first>
sn: <last>
cn: {givenName} {sn}
initials: {givenName:1}{sn:1}
uid: {givenName}.{sn}
mail: {uid}@[maildomain]
userPassword: password
telephoneNumber: <random:telephone>
homePhone: <random:telephone>
pager: <random:telephone>
mobile: <random:telephone>
employeeNumber: <sequential:1>
street: <random:numeric:5> <file:streets> Street
l: <file:cities>
st: <file:states>
postalCode: <random:numeric:5>
postalAddress: {cn}${street}${l}, {st}  {postalCode}
description: This is the description for {cn}.

(you can download this template here).

So this is a pretty generic LDAP config, nothing complex in it.
The major things to note here are the various tags. The <first> and <last> tags look up first and last names respectively in two files that ship with slamd, first.names and last.names; MakeLDIF has a built-in algorithm to ensure that no combination of first and last names is repeated in the data.
The <sequential> tag creates sequentially incrementing values, while <random>, as you might guess, generates random values. <random> also takes varying arguments that allow you to create (in this example) phone numbers etc.

So after creating your template you're all set to generate your data. In our case we also want to capture the login information for our benchmarking, so we run the following

java -jar MakeLDIF.jar -t twofifty.template -L logins.txt -o twofifty.ldif

Well actually, let's time it to show it's a nice quick process (this is on an Ultra 30).
# timex java -jar MakeLDIF.jar -t twofifty.template -L logins.txt -o twofifty.ldif
Processed 1000 entries
Processed 2000 entries

Processed 249000 entries
Processed 250000 entries
Processing complete.
250002 total entries written.

real       34.60
user       31.41
sys         1.81

and that's it for MakeLDIF; you end up with an LDIF file containing your benchmarking data.

# ls -l twofifty.ldif  
-rw-r--r--   1 root     3456     150331659 Sep  3 15:33 twofifty.ldif

MakeLDIF is explained in much more detail in the Slamd Documentation.

Loading the data

The data load into the directory server is extremely straightforward. The import cache was increased to half a gigabyte from the default of 20MB, and we just use ldif2db. In this case we are using the userRoot backend, so the exact command is
./ldif2db -n userRoot -i /export/data/twofifty.ldif

Slamd Job Data

Slamd has a large set of available jobs for benchmarking purposes. For this article I will look at just two, the prime job and the searchrate job.

For the purposes of repeatable benchmarks I tend to lean towards using command-line tools as much as possible, so with Slamd I use the CommandLineJobScheduler.
First off you have to generate a configuration file; let's say we want to generate the config file for the SearchRate job.

# cd /opt/slamd/webapps/slamd/WEB-INF
# java -cp lib/slamd_server.jar:classes:lib/ldapjdk.jar \
        CommandLineJobScheduler \
        -g com.sun.slamd.example.SearchRateJobClass \
        -f /tmp/searchrate.conf
This gives you a file such as this.

We then customise this file for our particular config: in our case we want to specify the clients we are going to use, the duration, the search base, etc. The config file is pretty much self-explanatory, but for the sake of completeness I'll list the major parameters that I changed here.

param_binddn=cn=Directory Manager
param_searchscope=One Level Below Base
A similar type config file is generated for the prime job as well.
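The generated file is simple key=value text, which makes it easy to tweak programmatically between runs. A minimal sketch of reading it back (the format is assumed from the two parameters shown above; this helper is not part of Slamd):

```python
def parse_job_config(text):
    """Parse key=value lines as produced by CommandLineJobScheduler -g.

    Blank lines and '#' comments are skipped; values may themselves
    contain '=' characters, so we split on the first one only."""
    params = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        params[key] = value
    return params

conf = """\
# searchrate job parameters
param_binddn=cn=Directory Manager
param_searchscope=One Level Below Base
"""
print(parse_job_config(conf)["param_binddn"])  # cn=Directory Manager
```

Splitting on only the first '=' matters here, since DNs like cn=Directory Manager contain '=' themselves.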

Slamd Client Configs

For each of the slamd clients we use this config file. The only things really worth mentioning for this example are the following.
# Specify the amount of memory to use in megabytes.

# Specify the address and port information for the SLAMD server.


Kicking off the job

So we have our slamd job config files and our rig is loaded with the LDAP data; it's time to get some numbers. We start up each of the slamd clients (log into the machine and run /opt/slamd_client/), and then kick off the job using the CommandLineJobScheduler (you can of course do all of this via the web interface as well).

java -cp lib/slamd_server.jar:classes:lib/ldapjdk.jar \
        CommandLineJobScheduler \
        -f /tmp/prime.conf
And then sit back ;).
Your client logs will show an entry such as
The SLAMD client has started.
Ready to accept new job requests.
Received a request to process job 20040831124550-0897501
Received a request to start processing.
Once the job has finished you will see something like
Done processing job 20040831124550-0897501

The directory server is now primed, so we repeat the process with the searchrate job, and grab our results.

A bit further into the automation

Now obviously enough, kicking everything off manually is not exactly desirable, so we have a couple of little wrappers that we throw around the entire process. I won't go into a huge amount of detail, but to do a full benchmarking run in a consistent and coherent manner (logging a bug off one run that can't be reproduced is going to do nothing but irritate people), we go through a few steps.

First off, the system under test is rebooted; once it comes back up, it reboots the slamd server and all of the client machines. Once these are all back up again, it logs into the slamd server, starts up the directory server backend and the slamd server, then in turn goes to each one of the clients and starts up the slamd client.

The job is then submitted via the CommandLineJobScheduler and off we go. Once the length of time for the benchmark has passed we monitor the client log files to check that everything has finished, grab the results and start the entire process again.
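The actual wrapper scripts aren't shown in the post, but the start-up sequence they drive can be sketched as an ordered command plan. Everything here is hypothetical (hostnames, the startup.sh and start-client.sh script paths); only the CommandLineJobScheduler invocation mirrors the command shown above:

```python
# Hypothetical hostnames; the real wrapper scripts are not shown in the post.
SLAMD_SERVER = "slamd-master"
CLIENTS = ["netra1", "netra2", "netra3", "netra4", "netra5", "netra6"]

def run_plan(job_conf):
    """Build the ordered command list for one benchmark iteration:
    start the slamd server, then each client, then submit the job."""
    plan = [["ssh", SLAMD_SERVER, "/opt/slamd/bin/startup.sh"]]
    plan += [["ssh", c, "/opt/slamd_client/start-client.sh"] for c in CLIENTS]
    plan.append(["java", "-cp", "lib/slamd_server.jar:classes:lib/ldapjdk.jar",
                 "CommandLineJobScheduler", "-f", job_conf])
    return plan

for cmd in run_plan("/tmp/searchrate.conf"):
    print(" ".join(cmd))
```

In a real wrapper each list would be handed to subprocess.run (or ssh in a shell script), with the ordering guaranteeing clients are only started after the server is up.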

OS Tunings

The group that I work in sets up benchmarks slightly differently than our colleagues in the Market Development Engineering group. We aim to set things up as close to out-of-the-box performance as possible, avoiding things such as sq_max_size=0[1]. Rather, we set up the OS as close to what you as a customer would set it up (within reason; obviously we don't want things such as I/O wait time on the system, so we may create, say, a RAID 0 stripe for log files or datasets).
The ndd tunings that were used in this config are
ndd -set /dev/tcp tcp_conn_req_max_q 1024
ndd -set /dev/tcp tcp_conn_req_max_q0 4096
ndd -set /dev/tcp tcp_keepalive_interval 600000
ndd -set /dev/tcp tcp_ip_abort_cinterval 10000
ndd -set /dev/tcp tcp_ip_abort_interval 60000
ndd -set /dev/tcp tcp_smallest_anon_port 8192
while in /etc/system we set
set rlim_fd_max=100000
set rlim_fd_cur=100000
I guess you could say that we use tunings to make things realistic and also to remove any variance that may arise in repeated runs of the benchmark.
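As a quick sanity check that the file descriptor limits actually took effect, a process can inspect what it inherited; the rlim_fd_cur/rlim_fd_max settings in /etc/system become the soft and hard RLIMIT_NOFILE values a process sees. A small sketch (Python used for brevity; not from the original post):

```python
import resource

# rlim_fd_cur / rlim_fd_max in /etc/system set the default soft and hard
# per-process file descriptor limits, visible as RLIMIT_NOFILE.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft fd limit: {soft}, hard fd limit: {hard}")
```

If the directory server process reports the old defaults here, the /etc/system change didn't take (e.g. the box wasn't rebooted after editing it).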

Directory Server Tunings

Sun Java System Directory Server has multitudes of things to tune. For the specific example mentioned above I did the following (the eventual tunings went a lot further, but that's for another post).

First off, we did runs with logging both disabled and enabled; obviously enough, disabling logging makes life a lot faster, but it's not that realistic. The result size limit was dropped from 2000 down to 100, the operation time limit down to 300, the idle timeout was set to half an hour, and the database cache was upped to half a gigabyte (this is the most important one, to be honest), as was the memory cache.

And into the future.....

Well seeing as Adam has just putback plockstat and its associated DTrace provider I just have to play a bit with DTrace on the directory server rig. Once I get some D scripts together I will post them up here.

Next up for slamd articles, an example of a multitier setup, using several components from the Java Enterprise System.

[1] sq_max_size=0 should never be used on a production system, see the sq_max_size section of the Solaris Tunables Guide for more details.

Monday Aug 09, 2004

Slamd - Distributed Load Generation Engine

Slamd is a distributed load generation engine which has been developed internally at Sun and is now available externally. To put it simply, this is the dog's danglies of generic load generation engines.

Primarily designed for use on LDAP products such as the Sun Java System Directory Server (available as part of the Java Enterprise System), it can also be used against a multitude of other applications, and includes its own scripting engine etc.

I have been using Slamd internally for a while now, working on some stuff with Directory Server on amd64 hardware (it screams along, and with stuff like FireEngine in S10 it's just getting faster). Over the next few weeks I'll post (as I get time) a few entries around using slamd.



