Wednesday May 13, 2009

Olio Java EE source code released

I'm happy to announce the availability of the Olio Java source code as an Apache Incubator project!

 

What is Olio exactly?

Project Olio is a Web 2.0 toolkit that can be used to investigate the functionality and behavior of web technologies.  The kit consists of a web application, driver, and harness code designed to be used with Faban, an open source benchmark driver and harness framework.  The web application is a social calendar where you can perform such activities as registering a personal profile, creating new events, registering to attend existing events, and extending friendship requests.  The driver and harness code allows you to schedule benchmarking runs with varying numbers of concurrent users to examine how the application behaves under load.  More information about Olio is available at the Olio Apache website.

There are already binary releases of the PHP and JRuby versions of Olio; however, the Java implementation is a bit behind, so we do not have a binary release just yet.  We are hoping the release of the Java code will encourage new involvement from other developers to help us release a binary version.  As such, it is not advisable to do a performance comparison between Java and PHP or JRuby until we are feature complete with respect to the other versions.

The Web Application 

Let's take a closer look at the web application.  On the home page, you can see a list of the top 10 events that you can browse.  (You'll have to ignore the nonsense words for the event, person, city, state, and country data; the kit includes code to randomly generate data for the number of concurrent users you specify.)

Olio home page

 

Suppose you click on a particular event you are interested in: you are shown the event details page, which lists the name, time, and location of the event.  It also lists the attendees for this event, with the option to attend the event yourself if you are logged in and registered as a user.  The blank map below shows the location of the event using the Yahoo Maps geolocator; however, since the data was generated randomly, as mentioned above, the location of this event does not map to anywhere recognizable.

 

event details 

When you click on a particular attendee of this event, you are brought to the person details page showing that person's profile, friend cloud, and recent posts.

 

Olio person page

This concludes the brief whirlwind tour of the application but I encourage you to download, build and deploy the code yourself to see what I haven't shown here.

What do I need?

 For the application, you will need:

  • Java SE 5 or 6
  • A Java EE 5-compliant application server (so far we have only tested with GlassFish v2)
  • A Java Persistence API (JPA) provider (EclipseLink is the JPA provider packaged with GlassFish v2)
  • A MySQL database (any database could be used, but we provide scripts and instructions for MySQL)

For the harness/driver, you will need:

  • Faban (the open source benchmark driver and harness framework mentioned above)

What is under the covers? 

Some of the technologies that this application features:

  • JPA as the object-relational persistence solution
  • AJAX
  • REST-based services (partially implemented)
  • jMaki widget wrappers for Yahoo and Dojo widgets

Where are we going? 

As I mentioned before, this implementation is not quite ready for a binary release and still needs some work.  Tasks that remain to be completed include:

  • Re-implementation of the memcached caching layer (this was stripped out for this release but needs to be put back)
  • REST-based services with JSR-311 (JAX-RS); I've started this already using the Jersey implementation (a sketch of what such a resource could look like follows this list)
  • Replacement of the jMaki widgets with an appropriate alternative
  • Minor features to 'catch up' with the PHP and JRuby versions
  • Investigation of a distributed file system, e.g. Hadoop (the current implementation only uses the local filesystem)
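
To give a flavor of the JAX-RS direction, here is a minimal sketch of a JSR-311 resource as it might look under Jersey.  The EventResource class, the /events path, and the literal XML are hypothetical illustrations for this post, not the actual Olio code.

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;

// Hypothetical resource for illustration only -- not the actual Olio implementation.
@Path("/events")
public class EventResource {

    // Returns a single event as XML, e.g. GET /events/42
    @GET
    @Path("{id}")
    @Produces("application/xml")
    public String getEvent(@PathParam("id") int id) {
        // A real implementation would look the event up via JPA and return a bound
        // object; a literal string keeps the sketch short.
        return "<event><id>" + id + "</id></event>";
    }
}
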
Help wanted

I hope I have piqued your interest.  Want to help out?  Interested in what direction we are taking or want to provide your input?  

  1. Visit our Olio page at Apache: http://incubator.apache.org/olio/
  2. Subscribe to our mailing lists via the Olio home page as a user (olio-user@incubator.apache.org), or contribute as a developer (olio-dev@incubator.apache.org)

 

Tuesday Apr 21, 2009

How to write a JMS Faban driver (Part One)

Choosing a benchmarking tool to put load on a system under test (SUT) can be subjective.  Most people will prefer something quick and easy so they can get something up and running as soon as possible.  SoapUI and JMeter are two examples that can be slotted into that category, since they have a user interface that will do a lot of the heavy lifting for you.  However, I mostly use Faban, an open source driver and harness framework.  Akara Sucharitakul, a colleague at Sun, is the primary author of this great tool.  You can use it to place load on an application (the driver) and also use its scheduling capabilities to schedule a run, since it also has a web interface.  You can conveniently use this interface to change run parameters such as ramp up, steady state, and ramp down.  The scheduling portion (the harness) lets you programmatically add tasks to handle the repetitive chores of benchmarking, which often requires a lot of experimentation.  Examples of these routine tasks include restarting your application server or reloading your database before each run.  There is good news too: an early access release of Faban, just released in March, is available as a 03/20/09 nightly build on the Faban website.


I will admit that there is a learning curve associated with using Faban - you need to write Java code.  However, there are sample drivers in the Faban package that you can use as templates.  How do you start to write a driver?


There are detailed instructions that explain this, but I'm just going to give a quick primer.  Suppose we want to write a simple driver that writes a JMS message to a queue.  The two files that you'll have to write (or edit, if you are using the provided samples as a template) are your Java driver file and the accompanying config file, run.xml.



  1. Download the 1.0ea build (the 03/20/09 build, or a later one if a newer build is available from the website).

  2. Extract the file to a working directory.  This will now be referred to as your FABAN_HOME.

  3. Enter the samples directory.  There are some sample drivers that you can peruse, like web101.  Let's use that one as a template for our JMS driver.

  4. Copy the directory 'web101' and rename it, e.g. to jms.

  5. The driver code resides in the $FABAN_HOME/samples/jms/src/sample/driver directory.  You'll have to refactor the file we copied over, WebDriver.java, to something like JMSDriver.java.

  6. The config file resides in $FABAN_HOME/samples/jms/deploy/run.xml.  You will also have to edit that with the values that are appropriate for a JMS driver.


The advantage of copying an existing sample as a template is that you can take advantage of the build.xml and use the same targets to build, run, and deploy the resulting driver code to the harness.  But let's keep things simple and just build and run the driver via the ant run target.


Let's change some of the values we need for the JMS driver.  First, we will need to change two annotations to correspond to a JMS driver instead of a web driver: @BenchmarkDefinition and @BenchmarkDriver.  The third annotation, @FixedSequence, refers to how the operations are going to be called in the driver.  In this case, it's very simple - I only have one operation, sendJMSMessage, so I am going to call it in a fixed sequence, without randomness.  The two values associated with this annotation are the deviation, which refers to how much deviation I will allow, and the sequence of operations, which here is simply the single operation sendJMSMessage.


The fourth annotation, @NegativeExponential, refers to the negative exponential distribution used for the think time (or cycle time) distribution.  Think time, or cycle time, is the length of time between requests - in our case, the time between sending messages.  This is the default think time distribution for the class, but we can override it at the method level (i.e. at the operation level, on sendJMSMessage).  You can delete this if you want, or just leave it as is.  You will see below how this annotation is overridden at the method level.

@BenchmarkDefinition(name = "Sample JMS Driver",
                     version = "0.2")
@BenchmarkDriver(name = "JMSDriver", threadPerScale = 1)
@FixedSequence(deviation = 2,
               value = {"sendJMSMessage"})
@NegativeExponential(cycleType = CycleType.CYCLETIME,
                     cycleMean = 5000,
                     cycleDeviation = 2)
public class JMSDriver {
...


We now have to code the sendJMSMessage operation that we referred to in the @FixedSequence annotation.  The web101 sample has three operations defined: doOperation1, doOperation2, and doOperation3.  We can erase two of them and just refactor doOperation1 for our JMS purposes.


There are two annotations relevant here.  The first one, @BenchmarkOperation, designates that the following method definition is an operation of your driver.  max90th refers to the maximum acceptable 90th-percentile response time, which is 2 seconds in this example.  timing is set to AUTO, which means that I'm letting Faban take care of the timing mechanics.  (It can be set to MANUAL so you can precisely record the method call you are interested in, but let's leave it at AUTO for the sake of simplicity.)  Here you see that we have overridden the think time annotation with @FixedTime: we are going to send a message every second.  There is no distribution to the think time - it is fixed at 1 second, though we will tolerate a 2% deviation.

@BenchmarkOperation(name = "sendJMSMessage",
                    max90th = 2,
                    timing = Timing.AUTO)
@FixedTime(cycleType = CycleType.CYCLETIME,
           cycleTime = 1000,
           cycleDeviation = 2)
public void doSendJMSMessage() {
    logger.info("sending message");
    try {
        producer.send(message);
    } catch (JMSException ex) {
        logger.info("Exception: " + ex.getMessage());
        ex.printStackTrace();
    }
}

I've simplified the code greatly and omitted the plumbing code.  The ConnectionFactory, Connection, Session, and Destination objects are created in the JMSDriver constructor, so they are all in place when the JMSDriver is instantiated.  I still need to show how to read values from the configuration file, run.xml, to do some of that plumbing, and that topic will be covered in my next blog entry.
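
In the meantime, here is a rough sketch of what that constructor plumbing could look like, assuming the connection factory and queue are looked up from JNDI.  This is not the actual sample code: the JNDI names and the message text are placeholders you would adjust for your environment (and, as noted above, a real driver would read such values from run.xml rather than hard-coding them).

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Destination;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;
import javax.naming.NamingException;

    // Fields used by doSendJMSMessage() above; both are created once per driver instance.
    private MessageProducer producer;
    private TextMessage message;

    public JMSDriver() throws NamingException, JMSException {
        // Placeholder JNDI names -- adjust to match your JMS provider's configuration.
        InitialContext ctx = new InitialContext();
        ConnectionFactory factory = (ConnectionFactory) ctx.lookup("jms/ConnectionFactory");
        Destination queue = (Destination) ctx.lookup("jms/testQueue");

        Connection connection = factory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        producer = session.createProducer(queue);
        message = session.createTextMessage("hello from the Faban JMS driver");
    }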

Friday Apr 17, 2009

visualgc plugin for visualvm

The VisualGC plugin for VisualVM gives a visual representation of the objects in the different spaces of your heap. [Read More]

Tuesday Feb 10, 2009

OK - where do I start if I need to tune GlassFish?

Scenario: The hot new application you have deployed on GlassFish is expected to garner a multitude of users who are going to hammer away at it. Where do you start to optimize your performance?

Yes, there is a performance guide available, but you don't have time to digest all this material. You could use the new Performance Advisor's "Tuner" feature of Enterprise Manager, which helps you tune your application with a series of questions, but you have to present to your boss and you need to be prepared to answer all of his technical questions. Where do you start?

Solution: Along with the release of the GlassFish Portfolio, a whitepaper that I wrote has been published today. This performance white paper, titled "Optimize GlassFish Performance", lists the top 11 parameters that you can investigate when tuning your application deployed on GlassFish, based on the data that the Java Performance Team has collected during our numerous benchmarking exercises. There is a brief explanation of each parameter with a recommended 'default' value, followed by some data that illustrates the importance of tuning GlassFish for performance. Benchmarking in general does require experimentation and is very much application-specific, but this whitepaper should provide a good primer for those wanting to get their feet wet.

Wednesday Mar 05, 2008

A Lesson Learnt with Netbeans 6 and the HTTP Monitor tool

Our lab (Benchmarking and Profiling Web 2.0 Applications) from JavaONE 2007 is currently being presented at Sun Tech Days. Since then, NetBeans 6 has been released, so we had to update the content and ensure that it worked smoothly. Sang Shin, who is presenting the lab in Australia and South Africa, contacted me with the problem that the benchmark test we run in the lab was failing, and sent me the summary file that Faban produces at the end of each run. (Note: Faban is the open source benchmarking framework we use.) To my surprise, the response times for the first request, when all of the jMaki components are loaded, had increased 1000-fold. On my laptop, we had generally observed a response time of 0.02 seconds, but it had now ballooned to 20 to 30 seconds - and this was happening with only 6 users!

What was causing this drastic increase? To spare you my painful debugging process: it turned out that we were using the bundled version of GlassFish that comes with the NetBeans Web and Java EE installation, and the HTTP Monitor tool is now turned ON by default. (HTTP Monitor is a tool you can use to monitor requests and headers going from your client to your server.) So what was happening was that, under load, the HTTP Monitor became the bottleneck, causing the response times to increase.

So how do you turn this tool off?

1. Click on the Services Tab of Netbeans.


2. Right click on the Glassfish node, then select Properties
3. A second window will pop up displaying the properties of GlassFish. Uncheck the check box at the bottom of the window labeled "Enable HTTP Monitor".

4. Restart your Glassfish server.

NOTE: You may have to undeploy and redeploy an already deployed application for this to take effect. For some reason, restarting GlassFish alone had no effect for me and I had to undeploy and redeploy; I suspect there must be some modification of the web descriptors involved. Caveat: Be sure to turn off the HTTP Monitor, which is on by default in the GlassFish bundled with NetBeans, BEFORE doing any performance tests.

Monday Nov 12, 2007

Out of a long hibernation

I've just been awakened from a very very long slumber. My blog has been silent for such a long time, it's been very embarrassing. I have been busy, just not diligent nor active on the blogging front despite having a few things that may interest a few souls out there. A colleague of mine, Matt Ingenthron, has recently chided me over IM that my blog needs a serious update so here is my first entry in a .... well, some things are best left unsaid.

So I will start gently, in case I overdo it on the first day and develop blisters on my fingers, and ease slowly into it. I have been working on Web 2.0 benchmarking on JAMM (Java application server with Memcached and MySQL). The Java application server part is actually GlassFish, but somehow the original acronym, GMMJ (GlassFish Memcached MySQL with Java), didn't sound as catchy as JAMM. A JavaONE lab we originally presented at last year's conference (2007) is being modified for a future Tech Days presentation later this year. And I wanted to blog a bit about the Faban open source project, a benchmarking harness that we are using for performance work in many of our team's projects.

Which brings me back to why I have been awakened to write a blog entry, albeit a short one with very little content, today... Matt and I had been working on developing a Faban driver for one of our customers to use in their project. My next blog entry will describe in a bit more detail how we developed this driver. (Remember? I didn't want to tax my out-of-shape fingers after their long period of inactivity.)

Sunday Mar 19, 2006

It's always that little thing that throws you off...

Update on the VMware experiments:

I was having a hard time with the performance of VMware on my laptop running Windows XP. As I typed commands in the terminal window, I would see the letters slowly appear while my hard drive spun away. Booting Solaris x86 was also a very slow process - like booting a three-year-old copy of Windows 95 on your old desktop. FTPing from the Solaris guest ran at 136 kb/sec compared to 784 kb/sec on the Windows OS. What the heck was going on?

I was already envisioning some cryptic configuration files that I would need to edit, setting some parameter to "true" or attributing some numerical value to a variable like Ne_x23_avw=235. Luckily for me, I didn't spend too much time twirling the knobs. The problem? I seemed to have forgotten about my trusty Symantec antivirus software. It seems each I/O operation from the Solaris x86 process was being monitored by Symantec's real-time protection. The solution? Create exclusion filters for the VMEM and VMDK file extensions. This worked like a charm for me. Sure, I still get annoying things like "lsss" or "pwddd" when I swear I am typing in short staccato keystrokes, but it happens a lot less now.

I gleaned the above bit of information from two sites: one on how to get more virtualization performance enhancements and one on how to set exclusion filters. The latter article doesn't exactly fit the shoe, but it points you to the right menu header for entering the exclusion extensions.

Friday Mar 03, 2006

VMWare server and Solaris x86

Well, we're into March already and I've been negligent about blogging more regularly.  As one would typically say: "I've been sooo busy with work." I have been experimenting with VMware Server to put Solaris x86 on my Gateway laptop.  I have Windows XP running as my host OS (I need Windows for some .NET stuff) and wanted a sandbox copy of Solaris 10 to play with.  Another reason was to compare the JVM monitoring tools in JDK 1.5 on Windows versus Linux/Solaris (more on this later - another good blog subject).

Thanks to Matt, who sent me the link to this article by Bill Rushmore, which outlines how to create a virtual machine with the VMware software.  What also proved helpful was this page outlining good resources for installing Solaris x86.

There are some issues that I still need to sort out; namely, how to get my Solaris instance networked when I am on VPN, and also when I use my wireless connection.  It works fine when I am plugged into SWAN at work. As the Rushmore article suggests, I am using the bridged network setting for my VM. Obviously, I still have to dig a bit further into that.  If anyone has any tips, I would greatly appreciate the pointers.
vmware_screenshot




Tuesday Jan 31, 2006

As promised, the JAXB2.0 techtip and one other observation


The techtip I wrote about JAXB 2.0 has now been published; it describes two of the new features of JAXB 2.0 compared to JAXB 1.0. Hopefully, after reading this tip, you will check out the JAXB samples provided with Java WSDP 2.0 in the jaxb directory and look at the other features of this 2.0 release.  Don't forget, there is also some code you can download and try that accompanies this tip.  As always, any feedback is appreciated.  For those of you looking for a more introductory article on JAXB, you can refer to the JWSDP 1.6 tutorial on JAXB since, as of today, the tutorial for JWSDP 2.0 is not yet available.


I did find out something interesting while writing code for this techtip. In the example code, I do some unmarshalling with the invoice schema (that is, creating a Java object from the XML document) and some marshalling (Java object to XML).

 Here is the relevant code:


JAXBContext jc = JAXBContext.newInstance("techtip");
Unmarshaller u = jc.createUnmarshaller();
...
JAXBElement poElement = (JAXBElement) u.unmarshal(new FileInputStream("po.xml"));
PurchaseOrderType po = (PurchaseOrderType) poElement.getValue();
...
Marshaller m = jc.createMarshaller();
m.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);
OutputStream os = new FileOutputStream("incorrectpo.xml");
m.marshal(po, os);

This looks reasonable, doesn't it? But why does the marshaller complain at runtime? Here is the exact message:

javax.xml.bind.MarshalException - with linked exception [org.xml.sax.SAXException: unable to marshal type "techtip.PurchaseOrderType" as an element because it is missing an @XmlRootElement annotation]

Let's look at the javadoc:

marshal

public void marshal(java.lang.Object obj,
java.io.Writer writer)
throws JAXBException
Marshal the content tree rooted at obj into a Writer.

Parameters:
obj - The content tree to be marshalled.
writer - XML will be sent to this writer.
Throws:
JAXBException - If any unexpected problem occurs during the marshalling.
MarshalException - If the ValidationEventHandler returns false from its handleEvent method or the Marshaller is unable to marshal obj (or any object reachable from obj). See Marshalling objects.
java.lang.IllegalArgumentException - If any of the method parameters are null

Still looks reasonable, doesn't it?

Pay attention to the last line:
m.marshal(po, os);

I am marshalling the purchase order object, po, since I thought that this was the object I got when I used the (PurchaseOrderType) poElement.getValue() call.  The JAXBContext, or so I thought, should know about this PO object because of the invoice schema. So it should be able to marshal this Java object, right?

Actually, no - and for a good reason. The correct line should read:

m.marshal(poElement, os);

I should be using the poElement object from the unmarshalling rather than the PurchaseOrderType object (which is one of the classes generated by the xjc process - the JAXB binding compiler).
For the explanation, I turn to Joe Fialli, one of the JAXB 2.0 specification leads (JSR 222), who also served as the technical reviewer for the article.

"The JAXBContext knows the PurchaseOrderType. However, for "po" above, it does not know the xml element tag. The XmlElement tag can be statically associated with a class with @XmlRootElement. (How it is done with Java to Schema.) However, for schema to java, PurchaseOrderType was generated from a complex type definition. More than one xml element *could* use PurchaseOrderType. The above code can only work for "poe", not just for instances of PurchaseOrderType. Schema to Java does generate @XmlRootElement on classes generated for anonymous complex type definitions since that type can only have one @XmlRootElement. However, a named complex type definition in XML Schema typically can easily have a one to many relationship with xml elements that reference the complex type definition."

Ahh - this makes sense now. A good reason to use the poElement when marshalling instead of the PurchaseOrderType (po) object.
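
As an aside, if you really did want to marshal the bare PurchaseOrderType, the usual route is to wrap it in a JAXBElement yourself. Here is a minimal sketch, assuming the schema declares a global element named purchaseOrder so that xjc generates a corresponding factory method on ObjectFactory (the method name createPurchaseOrder is therefore an assumption, not something taken from the techtip code):

// Assumes xjc generated an ObjectFactory with a createPurchaseOrder(PurchaseOrderType)
// factory method for a global "purchaseOrder" element -- adjust to your own schema.
ObjectFactory of = new ObjectFactory();
JAXBElement<PurchaseOrderType> wrapped = of.createPurchaseOrder(po);

// The marshaller now knows which xml element tag to use for the content tree.
m.marshal(wrapped, os);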

Thursday Jan 19, 2006

More about XML parsing

Well, I have just submitted a techtip on JAXB 2.0 that is to be published at the end of this month. It will discuss two of the many new features that JAXB 2.0 offers - other than faster performance - the new JAXP schema validation and the Java-to-schema binding. I'll blog when the techtip becomes officially available. Where can you get the new JAXB 2.0? JAXB 2.0 is now available as part of JWSDP 2.0 and also as separate binaries from the java.net JAXB 2.0 project. The great thing about these techtips is that there is code accompanying each tip, so you have a chance to get your hands dirty and really understand what is going on.
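
As a small taste of the JAXP schema validation feature, here is a minimal sketch of validating while unmarshalling. The file names (po.xsd, po.xml) and the "techtip" context path are just illustrative placeholders, not code from the tip itself.

import java.io.File;
import javax.xml.XMLConstants;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Unmarshaller;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;

// Build a Schema object from the XML Schema file.
SchemaFactory sf = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
Schema schema = sf.newSchema(new File("po.xsd"));

// In JAXB 2.0 you attach the Schema directly to the Unmarshaller,
// so the document is validated as part of unmarshal().
JAXBContext jc = JAXBContext.newInstance("techtip");
Unmarshaller u = jc.createUnmarshaller();
u.setSchema(schema);
Object result = u.unmarshal(new File("po.xml"));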

While on the topic of techtips, I also wrote another one, on October 25th, 2005, on StAX, the Sun Java Streaming XML Parser. That tip specifically discusses the iterator API - when and how to use it compared to the more well-known cursor API that most people are familiar with. Interested in the performance? Check out a whitepaper I co-wrote with other Sun colleagues comparing the performance of Sun's implementation of StAX (JSR 173) to other vendors'. You can find this whitepaper on the java.sun.com performance site with all of the nitty-gritty details. By the way, you'll find this site a great resource for information about Java performance. Our team regularly publishes papers there, so be sure to visit regularly.
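
To show the difference between the two APIs in a nutshell, here is a minimal sketch of each reading the same document; the file name is a placeholder, and checked exceptions (XMLStreamException, IOException) are omitted for brevity.

import java.io.FileInputStream;
import javax.xml.stream.XMLEventReader;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;
import javax.xml.stream.events.XMLEvent;

XMLInputFactory xif = XMLInputFactory.newInstance();

// Cursor API: you pull the parser forward and query its current state.
XMLStreamReader cursor = xif.createXMLStreamReader(new FileInputStream("po.xml"));
while (cursor.hasNext()) {
    if (cursor.next() == XMLStreamConstants.START_ELEMENT) {
        System.out.println("start element: " + cursor.getLocalName());
    }
}

// Iterator API: each call hands you an immutable event object you can keep or pass around.
XMLEventReader events = xif.createXMLEventReader(new FileInputStream("po.xml"));
while (events.hasNext()) {
    XMLEvent event = events.nextEvent();
    if (event.isStartElement()) {
        System.out.println("start element: " + event.asStartElement().getName().getLocalPart());
    }
}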

I'm currently working with our team to produce a whitepaper that will describe the performance of the web services stack on our new Niagara servers. Hopefully, we'll be able to get this out by the end of February - so stay tuned for further blog entries about this.

Tuesday Dec 06, 2005

Today is ...

Well, what better way to start off my blog than to launch it concurrently with the new UltraSPARC T1 processor?

If you haven't heard already, Sun is launching a new line of servers called the Sun Fire CoolThreads servers that feature the new UltraSPARC T1 processor. What is so cool about this server is that it runs 4 hardware threads concurrently per core instead of sequentially. The result? Greater efficiency and throughput with much lower power consumption.

I'll leave the nitty-gritty details about the chip to more qualified sources, but I wanted to blog about how this relates back to the work that I am currently doing with Java performance. I have been looking at XML performance with XMLTest, a benchmark we have been using to measure parsing performance with a battery of different XML documents. (You can get more details about XMLTest on java.net.) Specifically, we have been measuring how different XML parsing technologies scale on the Sun Fire CoolThreads server: Sun's implementation of the Streaming API for XML (StAX), called the Sun Java Streaming XML Parser (SJSXP), and the Java Architecture for XML Binding (JAXB).

I wanted to focus specifically on the scalability of these parsers on this server. What is scalability? In this case, I am focusing on vertical scalability: if I add additional processing power within the same server machine, can I get additional throughput for my application? This depends on the ability of the application to take advantage of the additional processing power by running different threads on different CPUs (or cores) within the same machine.

If I can get linear scalability, what does this mean? It typically refers to the ability to increase the throughput of an application in direct proportion to the additional processing power added (e.g. hardware threads). For example, say I can get a throughput of 40 transactions per second with 4 threads. With perfect linear scalability, if I double my processing power to 8 threads, then I could expect to double my throughput to 80 transactions per second. Recall that with the new Sun Fire UltraSPARC T1 server, you get 4 threads per core. If you are concerned with throughput on a large server, this is a GREAT thing: I can adjust to increased demand by simply adding processing power. Unfortunately, that is not always the case - bottlenecks may arise and decrease throughput as the number of processors increases, for reasons such as lock contention.
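
To make the idea concrete, here is a back-of-the-envelope sketch of how you might measure this kind of vertical scaling: run the same parsing work on 1, 2, 4, ... threads and compare operations per second. This is a simplified placeholder, not the XMLTest code itself, and parseDocument() is a stand-in for a real StAX or JAXB parse. With linear scaling, the printed throughput should roughly double each time the thread count does.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class ScalingSketch {

    // Stand-in for a real StAX or JAXB parse of a ~100KB document.
    static void parseDocument() {
    }

    public static void main(String[] args) throws InterruptedException {
        final int seconds = 30;                      // length of each measurement interval
        for (int threads = 1; threads <= 32; threads *= 2) {
            final AtomicLong completed = new AtomicLong();
            final long endTime = System.currentTimeMillis() + seconds * 1000L;
            ExecutorService pool = Executors.newFixedThreadPool(threads);
            for (int i = 0; i < threads; i++) {
                pool.submit(new Runnable() {
                    public void run() {
                        // Each thread repeats the parsing work until the interval expires.
                        while (System.currentTimeMillis() < endTime) {
                            parseDocument();
                            completed.incrementAndGet();
                        }
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(seconds + 30, TimeUnit.SECONDS);
            System.out.println(threads + " threads: "
                    + (completed.get() / seconds) + " ops/sec");
        }
    }
}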

So back to the XML processing world and the UltraSPARC T1 processor. As I mentioned before, this server runs 4 hardware threads per core, so the 8-core machine that we used essentially functions as a 32-way box! We have done experiments that show perfect scalability for JAXB and StAX as we increase the number of hardware threads. The figure below shows the scalability for JAXB 1.0:

JAXB 1.0 scalability figure
We used a 100KB XML document for this study. We did not need to adjust any Java tunings as we scaled up. With our initial tunings (the -server option with JDK 1.5), we were able to scale the performance out of the box. (For more information on the JVM and CoolThreads, take a look at Dave Dagastine's and Brian Doherty's blogs.)

You can expect more details about this in later entries, but the take-home message is clear: with our benchmark framework (feel free to check out the code and see what we are doing under the covers), JAXB and StAX can scale linearly up to 32 hardware threads.

Monday Dec 05, 2005

A first step

My first step (belated, admittedly) into the popular world of blogging. I hope to share in this forum some thoughts and aspects of my work in the Java Performance team amongst other things.

I'll keep this one short. Like a baby step.
