Tuesday Apr 21, 2009

How to write a JMS Faban driver (Part One)

Choosing a benchmarking tool to put load on a system under test (SUT) can be subjective.  Most people prefer something quick and easy so they can get something up and running as soon as possible.  SoapUI and JMeter are two examples that can be slotted into that category, since they have a user interface that will do a lot of the heavy lifting for you.  However, I mostly use Faban, an open source driver and harness framework.  Akara Sucharitakul, a colleague at Sun, is the primary author of this great tool.  You can use it to place load on an application (the driver) and, since it also has a web interface, use its scheduling capabilities to schedule a run.  You can conveniently use this interface to change run parameters such as ramp up, steady state, and ramp down.  The scheduling portion (the harness) gives you the facility to programmatically add tasks, which helps automate the repetitious tasks of benchmarking, an activity that often requires a lot of experimentation.  Examples of these routine tasks include restarting your application server or reloading your database before each run.  There is good news too - an early access release of Faban was just released in March, available as a 03/20/09 nightly build on the Faban website.

I will admit that there is a learning curve associated with using Faban - you need to write Java code.  However, the Faban package includes sample drivers that you can use as templates.  So how do you start to write a driver?

There are detailed instructions that explain this, but I'm just going to give a quick primer.  Suppose we want to write a simple driver that writes a JMS message to a queue.  The two files you'll have to write (or edit, if you are using the provided samples as a template) are your Java driver file and the accompanying config file, run.xml.

  1. Download the 1.0ea build (the 03/20/09 build, or a later one if a newer build is available from the website).

  2. Extract the file to a working directory.  This will now be referred to as your FABAN_HOME.

  3. Enter the samples directory.  There are some sample drivers that you can peruse, like web101.  Let's use that one as a template for our JMS driver.

  4. Copy the directory 'web101' and rename it, e.g. jms.

  5. The driver code resides in the $FABAN_HOME/samples/jms/src/sample/driver directory.   You'll have to refactor the one that we copied over, WebDriver.java, into something like JMSDriver.java.

  6. The config file resides in $FABAN_HOME/samples/jms/deploy/run.xml.  You will also have to edit that with the values that are appropriate for a JMS driver.
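Before running, you will edit values in run.xml such as the scale and the run control times (ramp up, steady state, ramp down). The fragment below is only an illustrative sketch - element names and namespaces can vary between Faban versions, so treat the run.xml copied from the web101 sample as the authoritative template.

```xml
<!-- Illustrative sketch only; adapt the run.xml copied from web101. -->
<fa:scale>1</fa:scale>                  <!-- number of driver scale units -->
<fa:runControl unit="time">
    <fa:rampUp>60</fa:rampUp>           <!-- seconds -->
    <fa:steadyState>300</fa:steadyState>
    <fa:rampDown>30</fa:rampDown>
</fa:runControl>
```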

The advantage of copying an existing sample as a template is that you can take advantage of its build.xml and use the same targets to build, run, and deploy the resulting driver code to the harness.  But let's keep things simple and just build and run the driver via the ant run target.

Let's change some of the values we need for the JMS driver.  First, we will need to change two annotations to correspond to a JMS driver instead of a web driver: @BenchmarkDefinition and @BenchmarkDriver.  The third annotation, @FixedSequence, refers to how the operations in the driver are going to be called.  In this case it's very simple - I only have one operation, sendJMSMessage, so I am going to call it in a fixed sequence, without randomness.  The two values associated with this annotation are the deviation, which refers to how much deviation I will allow, and the sequence of operations, which is trivial here because the only operation is sendJMSMessage.

The fourth annotation, @NegativeExponential, selects a negative exponential distribution for the think time (or cycle time).  Think time, or cycle time, is the length of time between requests - here, the time between sending messages.  This annotation sets the default think time distribution for the class, but we can override it at the method level (i.e. at the operation level, sendJMSMessage).  You can delete this annotation if you want, or just leave it as is.  You will see how it is overridden at the method level.

@BenchmarkDefinition(name = "Sample JMS Driver",
                     version = "0.2")
@BenchmarkDriver(name = "JMSDriver", threadPerScale = 1)
@FixedSequence(deviation = 2,
               value = {"sendJMSMessage"})
@NegativeExponential(cycleType = CycleType.CYCLETIME,
                     cycleMean = 5000,
                     cycleDeviation = 2)
public class JMSDriver {
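To build some intuition for what @NegativeExponential means, here is a small standalone sketch (my own illustration, not Faban's implementation) of drawing negative exponential cycle times with a 5000 ms mean, like the annotation above specifies:

```java
import java.util.Random;

public class CycleTimeDemo {

    // Inverse-transform sampling: if U ~ Uniform(0,1), then
    // -mean * ln(1 - U) is exponentially distributed with the given mean.
    static double sampleCycleTime(Random rng, double meanMs) {
        return -meanMs * Math.log(1.0 - rng.nextDouble());
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        double sum = 0;
        int n = 100_000;
        for (int i = 0; i < n; i++) {
            sum += sampleCycleTime(rng, 5000.0);
        }
        // The empirical mean converges to the configured 5000 ms mean,
        // even though individual gaps between messages vary widely.
        System.out.printf("mean cycle time ~ %.0f ms%n", sum / n);
    }
}
```

Most samples are short, a few are long; that burstiness is what makes the negative exponential a common model for user think time.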

We now have to code the sendJMSMessage operation that we referred to in the @FixedSequence annotation.  The sample web101 has three operations defined: doOperation1, doOperation2, and doOperation3.  We can erase two of them and just refactor doOperation1 for our JMS purposes.

There are two annotations relevant here.  The first one, @BenchmarkOperation, designates that the following method definition is an operation of your driver.  max90th refers to the 90th percentile response time that is considered acceptable, which is 2 seconds in this example.  Timing is set to AUTO, which means that I'm letting Faban take care of the timing mechanics.  (There is a way of setting this to MANUAL so you can precisely record the method call you are interested in, but let's leave it at AUTO for the sake of simplicity.)  Here you see that we have overridden the class-level think time annotation with @FixedTime: we are going to send a message every second.  There is no distribution to the think time - it's going to be fixed at 1 second, but we will tolerate a 2% deviation.

@BenchmarkOperation(name = "sendJMSMessage",
                    max90th = 2,
                    timing = Timing.AUTO)
@FixedTime(cycleType = CycleType.CYCLETIME,
           cycleTime = 1000,
           cycleDeviation = 2)
public void doSendJMSMessage() {
    logger.info("sending message");
    try {
        // assumes 'producer' and 'session' were created in the constructor
        producer.send(session.createTextMessage("test message"));
    } catch (JMSException ex) {
        logger.info("Exception: " + ex.getMessage());
    }
}

I've simplified the code greatly and omitted the plumbing.  The ConnectionFactory, Connection, Session, and Destination objects are created in the JMSDriver constructor, so they are all ready when the JMSDriver is instantiated.  I still need to show how to read values from the configuration file, run.xml, to do some of that plumbing code; that topic will be covered in my next blog entry.
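For reference, that constructor plumbing looks roughly like the sketch below. This is only an illustration: the JNDI names ("jms/ConnectionFactory", "jms/TestQueue") and the field names are my assumptions, and in the real driver those values should come from run.xml rather than being hard-coded.

```java
// Illustrative sketch only - JNDI names and fields are assumptions;
// the real driver should read these values from run.xml.
import javax.jms.*;
import javax.naming.InitialContext;

public class JMSDriver {
    private Connection connection;
    private Session session;
    private MessageProducer producer;

    public JMSDriver() throws Exception {
        InitialContext ctx = new InitialContext();
        ConnectionFactory factory =
                (ConnectionFactory) ctx.lookup("jms/ConnectionFactory");
        Destination destination = (Destination) ctx.lookup("jms/TestQueue");
        connection = factory.createConnection();
        // Non-transacted session with automatic acknowledgement.
        session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        producer = session.createProducer(destination);
        connection.start();
    }
}
```

Doing this setup once in the constructor keeps the per-operation cost down to a single send, so the measured response times reflect the message send rather than connection setup.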

Tuesday Feb 10, 2009

OK - where do I start if I need to tune GlassFish?

Scenario: The hot new application you have deployed on GlassFish is expected to garner a multitude of users who are going to hammer away at it. Where do you start to optimize your performance?

Yes, there is a performance guide available but you don't have time to digest this material. You could use the new Performance Advisor's "Tuner" feature of Enterprise Manager that helps you tune your application with a series of questions, but you have to present to your boss and you need to be prepared to answer all of his technical questions. Where do you start?

Solution: Along with the release of the GlassFish Portfolio, a whitepaper that I wrote has been published today. This performance white paper, titled "Optimize GlassFish Performance", lists the top 11 parameters that you can investigate when tuning your application deployed on GlassFish based on the data that the Java Performance Team has collected during our numerous benchmarking exercises. There is a brief explanation of each parameter with a recommended 'default' value followed with some data that illustrates the importance of tuning GlassFish for performance. Benchmarking in general does require experimentation and is very much application-specific, but this whitepaper should provide a good primer for those wanting to get their feet wet.

Wednesday Mar 05, 2008

A Lesson Learnt with Netbeans 6 and the HTTP Monitor tool

Our lab (Benchmarking and Profiling Web2.0 Applications) from JavaOne 2007 is currently being presented at Sun Tech Days. Since then, Netbeans 6 has been released, so we had to update the content and ensure that it worked smoothly. Sang Shin, who is presenting the lab in Australia and South Africa, contacted me with the problem that the benchmark test we run in the lab was failing, and sent me the summary file that Faban produces at the end of each run. (Note: Faban is the open source benchmarking framework we use.) To my surprise, the response times for the first request, when all of the jMaki components are loaded, had increased 1000-fold. On my laptop we had generally observed a response time of 0.02 seconds, but it had now ballooned to 20 to 30 seconds - and this was happening with only 6 users!

What was causing this drastic increase? To spare you my painful debugging, it turned out that we were using the bundled version of Glassfish that comes with the Netbeans Web and Java EE installation, and the HTTP Monitor tool is now turned ON by default. (HTTP Monitor is a tool you can use to monitor requests and headers from your client to your server.) Under load, the HTTP Monitor became the bottleneck, causing the response times to increase.

So how do you turn this tool off?

1. Click on the Services Tab of Netbeans.

2. Right click on the Glassfish node, then select Properties.

3. A second window will pop up displaying the properties of Glassfish. Uncheck the first check box at the bottom of the window denoting "Enable HTTP Monitor".

4. Restart your Glassfish server.

NOTE: You may have to undeploy and redeploy an already deployed application for this to take effect. For some reason, restarting Glassfish alone had no effect; I had to undeploy and redeploy. I suspect that there must be some modification of the web descriptors. Caveat: Be sure to turn off the HTTP Monitor, which is on by default in Glassfish bundled with Netbeans, BEFORE doing any performance tests.


