Friday, July 10, 2009

Top reasons why GlassFish v3 is a lightweight server

This blog has moved to alexismp.wordpress.com
Follow the link for the most up-to-date version of this blog entry.

I have been involved in helping a couple of consultants put together a presentation on the future of app servers, and one thing that struck me was that the resulting slides equated "lightweight app server" with the use of the Spring framework. Using Spring in WebSphere doesn't make it any lighter. I don't think that deploying an archive that is 90% runtime qualifies as lightweight (hence the SpringSource tc and dm server offerings), but I also think that painting every application server as monolithic and heavyweight is a gross caricature. So here are my top reasons why GlassFish *is* a lightweight application server.

#1 • Download size
For some people download size matters. For them and for everybody else, GlassFish v3 downloads start at 30MB for the web profile (get it here). The updatetool will then help you scale up or down from there. Of course you can also start with the "a la carte" approach and go even lighter (20MB for a functional RESTful+EJB31 server). Some others are fighting hard to fit on a single DVD or CD.

#2 • Pay for what you use
With the extensible architecture of GlassFish v3, services and containers are brought online only when artifacts that use them are deployed to the runtime. Deploy your first WAR and the web container will take a couple of seconds to start. Deploy your second webapp and it happens in a fraction of a second. Remove the last webapp and the web container will not be restarted on subsequent server restarts. Some people call that on-demand services.

#3 • Fast (re)deployment
Beyond incremental compilation (which most IDEs offer nowadays) and deploy-on-change (simply save the source and reload the web page), the time to (re)deploy an application is key to a developer's productivity. The GlassFish team has spent time optimizing that process to offer sub-second redeploy times for simple applications. GlassFish v3 also preserves sessions across redeployments, which is a fairly safe operation (new class loader, new application) and takes less than 5 seconds to recreate a Spring context (for instance with the jpetstore demo on my laptop), and even less for traditional Java EE webapps. This is all built into the product with no configuration or add-on required. Check out this recent (and short) screencast for an illustration.

#4 • Startup time
Even in the days of (fast) redeploy, startup time still matters to both developers and operations. GlassFish v3 starts in about 3 seconds with a warm Felix cache. Starting the web container takes about an extra 3 seconds. Deploying individual applications depends largely on their size and complexity, but let's say it starts at around 100ms and should not go beyond 30 seconds. Starting GlassFish v3 with Apache Roller already deployed (not exactly the lightest webapp out there) will cost you about 20 seconds.

#5 • Memory consumption
One might think the OSGi nature of the application server introduces an unwelcome memory overhead. For an application server like GlassFish v3, that certainly isn't a problem: a base GlassFish v3 runtime uses less than 20MB (another "side effect" of the modular & extensible architecture) and a non-trivial application only 50MB of heap (as reported by VisualVM). Not quite small enough to run on a feature phone, but that may well happen sooner than we all think, especially when using the static mode (no OSGi) and the embedded API.
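If you want to check heap usage of your own application, the figure tools like VisualVM report is available from the standard JDK management beans. Here's a minimal, generic sketch (my own example, not GlassFish-specific code):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

public class HeapCheck {
    // Currently used heap in megabytes, the same figure a monitoring
    // tool such as VisualVM shows on its memory tab.
    static long usedHeapMb() {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        return mem.getHeapMemoryUsage().getUsed() / (1024 * 1024);
    }

    public static void main(String[] args) {
        System.out.println("Used heap: " + usedHeapMb() + " MB");
    }
}
```

Run it inside (or alongside) your server's JVM to get a rough idea of the runtime's footprint.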

#6 • Spring *and* OSGi
No need to choose between standard Java EE, Spring, and OSGi. You can have all of the above in a single integrated product. In fact you can even use the unmodified, OSGi-fied Spring DM version of the framework and make it available at the cost of a couple of clicks in the update tool. The HK2 layer in GlassFish v3 abstracts OSGi away and manages to retain GlassFish's lightweight feel while allowing Java EE components to inject OSGi-based declarative services at the cost of a standard @Resource annotation. I don't know if you consider this lightweight, but I certainly find it an elegant integration.

#7 • Open Source
GlassFish is open source, so you can make it whatever you want, even a heavyweight monster if you so decide! The barrier to entry for using GlassFish is certainly very low.

But the real question is - is GlassFish v3 lightweight for you(r application)?
Whatever the answer is, I'd love to hear your comments and experience!

Monday, January 19, 2009

Random Java performance podcast comments

I was recently listening to this JavaWorld podcast on Java scalability and was surprised to hear a few things (hopefully well paraphrased here):

"Java is not the best choice given its threading model and synchronization". I just don't understand that statement. Java 5's concurrency API is, however, still under-used by many IMO.
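As a trivial illustration of that API (my own sketch, nothing from the podcast), a fixed thread pool replaces hand-rolled Thread and synchronized code:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SquareSum {
    // Sums the squares of the inputs using a fixed thread pool from
    // java.util.concurrent (introduced in Java 5) instead of raw Threads.
    static int sumOfSquares(List<Integer> numbers) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            List<Future<Integer>> results = new ArrayList<Future<Integer>>();
            for (final int n : numbers) {
                results.add(pool.submit(new Callable<Integer>() {
                    public Integer call() { return n * n; }
                }));
            }
            int sum = 0;
            for (Future<Integer> f : results) {
                sum += f.get(); // blocks until each task completes
            }
            return sum;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(sumOfSquares(Arrays.asList(1, 2, 3))); // prints 14
    }
}
```

No explicit locks, no wait/notify, and the pool size is tunable independently of the workload.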

"30-second GC pauses are common". Really? This sounds like 2000. Are you still seeing GC pauses above 10 seconds? There are now multiple GC algorithms to choose from, and the default options provide really good performance in a large majority of setups.
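If you suspect long pauses, the JDK's standard management beans (a generic facility, not specific to any server) will tell you which collectors are active and how much total time they have spent:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.util.List;

public class GcStats {
    public static void main(String[] args) {
        // Each active collector (young and old generation) shows up
        // as its own MXBean with cumulative counts and elapsed time.
        List<GarbageCollectorMXBean> gcs =
                ManagementFactory.getGarbageCollectorMXBeans();
        for (GarbageCollectorMXBean gc : gcs) {
            System.out.println(gc.getName() + ": " + gc.getCollectionCount()
                    + " collections, " + gc.getCollectionTime() + " ms total");
        }
    }
}
```

A few minutes of this data under real load says far more about your GC behavior than any rule of thumb.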

"Real-time Java is around the corner and will fix many latency issues". Real-time Java is really not the answer to most web site scalability or latency issues. Maybe the garbage-first GC scheduled for Java 7 will be an easier answer than the JVM tuning currently required.

"ORM's are not needed, straight JDBC is better for scalability". I can't help but think that this can apply to only a handful of popular web sites. For everyone else, frameworks like JPA are just no-brainers.

"More generally speaking, frameworks are bad for performance and scalability". I think some stack traces can indeed be intimidating. Frameworks should strike a good balance between productivity and out-of-the-box performance. Tuning expertise for a given framework is usually a function of that framework's popularity, although some popular frameworks are known to have scalability issues.

"Application servers are not a good idea because they mix business logic and web requests... a JVM should suffice". I am a strong believer in multi-tiered architectures and stateful applications, so this sounds very wrong to me. The notion of a container is one of the most important breakthroughs in recent years for productivity, transactions, persistence, and scalability. Clearly I don't buy, or even really understand, this assertion.

Performance and scalability objectives can justify de-normalizing the architecture (much like you would a database schema), but while this podcast has value, I don't believe most developers should follow its every recommendation right off the bat.

Tuesday, February 12, 2008

Nature, vacuum, and multi-core

Reading Olivier Rafal's post on the Microsoft Tech Days (where Sun is a sponsor, by the way), two thoughts come to mind:

In the speakers' room (where my nice Java polo shirt was a hit ;), Didier Girard told me that NHibernate and NSpring (the topic of his DNG talk the day before) have download numbers barely less than ten times lower than their big brothers Hibernate and Spring. So, first thought: is there such a gap in the existing technology (nature abhors a vacuum) to justify such download volumes?

Reading that "Eventually (...), .Net will be able to generate code (...) optimized to run on multi-CPU, multi-core hardware", I tell myself that the arrival of Java 5's concurrency API (2004!) was clearly not premature, and that it would seem quite difficult today to explain to a customer that an application cannot take advantage of multiple cores because it is written in Java (Doug Lea's fork/join, or Scala, promising to go much further still). Maybe that's the advantage of being both a hardware maker and a software vendor (we prefer "systems provider" ;-) ...

Java really is an innovative cockroach!

Tuesday, November 27, 2007

GlassFish v2 scalability

A few months ago, shortly before the release of GlassFish v2, I wrote about its performance record ahead of the leaders WebLogic and WebSphere. There is now a new result [1] at more than 8,400 JOPS, almost 10 times the previous one!

It is actually a clustered benchmark that demonstrates GlassFish's horizontal scalability (6 nodes, 18 instances), just as its vertical scalability was demonstrated by the previous result (a single instance on 8 cores with 4 threads each).

What is particularly interesting in this clustered configuration:
•  The role of the database and its sizing (it is part of the SUT)
•  The machines HP used for a score in the same category: two Superdomes, no less!

Our Mr. Benchmark for GlassFish (among other things), Scott, has a detailed analysis of the results. Obviously, in the end, only performance and scalability with your own application matter. It's a bit like Formula 1: we don't drive those cars, but we regularly benefit from the same technological advances.

SPEC and the benchmark name SPECjAppServer 2004 are registered trademarks of the Standard Performance Evaluation Corporation. Competitive benchmark results stated above reflect results published on www.spec.org as of 11/26/07. For the latest SPECjAppServer 2004 benchmark results, visit http://www.spec.org/. Referenced scores:
[1] Six Sun SPARC Enterprise T5120 (6 chip, 48 cores) appservers and one Sun Fire E6900 (24 chips, 48 cores) database; 8,439.36 JOPS@Standard

Thursday, October 4, 2007

SDPY - SPECj and GlassFish

If you're following GlassFish at all, you probably know that it has performance results above those of commercial vendors. Just one year ago, we had high hopes of dramatically improving v2's performance over the v1 results. And we did. Performance is up by 60%!

Thursday, September 27, 2007

Is the GlassFish competition feeling pressure?

It sounds like BEA has noticed that GlassFish v2 was released with excellent performance.

Bill Roth's argument about Sun using JDK 6 is just funny. Apparently, we should be penalized for proactively supporting new releases of the JDK. It's hardly a deficiency on the GlassFish side if you ask me.

The GlassFish vs. SJS Application Server argument is just as weak. First of all, as Scott Oaks explains it, SPEC submissions can only be done on commercial products. Second, GlassFish V2 and SJS Application Server 9.1 are technically one and the same thing. No performance difference to expect whatsoever.

So Bill, when will Weblogic support Java 6 (released almost a year ago)?
(my blog engine doesn't require an account, feel free to answer here)

Oh, and marcf is also trying to dismiss GlassFish. Interesting how a fish can make people nervous.

Friday, September 7, 2007

Geronimo and GlassFish startup time compared

Julien has a comparison of open source application servers which has an interesting quote about Geronimo's startup time:

"The startup is a bit slow: I suspect that the GBeans framework (IoC) flexibility price is some CPU time."

The interesting part to me was Julien making a connection between startup performance and IoC. Trying the Tomcat incarnation of Geronimo 2.0.1 for myself (at 55MB, the download is the same size as GlassFish v2, which comes with clustering, WSIT, and OpenESB), I obtained the following start-up times:

On my machine, using Java 6, a Geronimo warm start is 43 seconds (a cold start with Java 5 is 60 seconds). A good chunk of that time is taken by starting the EJB container. GlassFish v2 on the same machine starts in 10 seconds (stopping actually takes almost as long as starting!).

To be fair, startup time was an important effort for GlassFish v2, and not all services are started as part of those 10 seconds. So I also tried Little-G, which also starts up in 10 seconds. Such identical figures don't make startup time an argument in favor of the stripped-down version.

As a reminder, the demo of the GlassFish v3 preview at JavaOne this year showed the HK2 kernel and the web container starting in about a second. Now, GlassFish v3 work is only starting, but I understand the goal is to improve startup time and to contradict Julien's thinking about flexible architectures ;). Modularity shouldn't come at a price.

Wednesday, July 11, 2007

SDPY - Episode GlassFish

About two years ago (already!), I presented GlassFish's goal. We are now on the eve of a version 2 that integrates full clustering and a high-performance web services stack, interoperable with Microsoft .Net 3.0 (WCF) and compliant with WS-I BSP (Basic Secure Profile) 1.0.

Of course, the news is also the SPEC performance record announced yesterday, which puts GlassFish simply ahead of BEA, Oracle and the others, as well as this other result based exclusively on an open source stack. When it comes to Java application servers, there is no longer a choice to be made between open source and enterprise features.

On the contributions side, beyond Oracle, which provides the reference implementation of JPA, Ericsson is the second major player contributing to GlassFish, through the SailFin project, an open source SIP communication server. The number of individual contributors is also growing; the most deserving were rewarded last May. The number of contributors "by borrowing" is doing well too: JAX-WS in WebLogic 10, Metro & JSF in JBoss 5, to name only the most significant.

Finally, on the deployments side, http://blogs.sun.com/stories lists some of the GlassFish production deployments. No need to mention the number of (polite) refusals we got when asking for an online reference. Several French customers are taking part in the GlassFish 2 beta program, which starts these days, and others are just waiting for a final release to go into production.

Tuesday, July 10, 2007

GlassFish update - SPECjAppServer and Beta 3 available

GlassFish version 2 beta 3 is available. The build is 50g (bug fixes, Woodstox, HADB, in-process MQ, ...). Mid-September 2007 is still the target for a final release. Go on vacation and a shiny new GlassFish (clustering, Metro, ...) will be available when you get back.

Performance-wise, the latest SPECjAppServer 2004 result for GlassFish is 778.14 JOPS@Standard, i.e. 11% behind the record, which uses only proprietary pieces (OS, hardware, app server). What is perhaps more interesting is the use of fully open source building blocks, including the database (which is part of the SUT): PostgreSQL (8.2.4) tuned for Solaris. PostgreSQL is emerging as the real competitor to Oracle for transactional, SMP-scalable databases. Finally, this GlassFish/SJS Application Server result was obtained at one third of the price of Oracle's...

Monday, May 21, 2007

GlassFish tips for demoers and others (avoid those restarts)

One of the good things about having your environment change is that it makes you ask yourself the question of why you ended up with some habits. NetBeans 6.0 Milestone 9 not bundling Tomcat by default (but still supporting it) is one such example I think. On that note I'd invite you to read Geertjan's post on Oddly Shaped Bicycles.

So the above thread got the discussion going about NetBeans experiences with people using GlassFish as part of their demos. Whether it is to demo Java EE 5 features, deploy and run OpenESB artifacts, run OpenPortal, an interoperable JAX-WS web service, or a JRuby on Rails application, a whole lot of people use GlassFish nowadays. Whether you use Tomcat or GlassFish, a seamless experience can be achieved with fast startups and incremental deployments.

Startup time for GlassFish is not perfect (we're working on it) but very good for a full-blown application server. Luckily, incremental deployment is most often extremely fast and usually requires no restart, which makes life so much more enjoyable for demoers, but also for pretty much everyone else (having an unplanned application server restart during a demo is never good).

So here's a little list of dos and don'ts when using GlassFish in demos (not your typical use-case, but still...). If this looks too long, skip to the last bullet.

 • Use GlassFish v2. First of all, if you're using GlassFish v1, that version was pretty much frozen more than a year ago. GlassFish v2 has much better startup time, better error handling, and fewer restarts (almost everything in the web container is now dynamic). GlassFish v2 is now in beta 2, so if you're using a NetBeans milestone, you really shouldn't be afraid to use a GlassFish beta! :)

 • Delete does not undeploy. One of the most annoying problems reported is application server restarts. With GlassFish v1, this clearly happens when there is a mismatch between your domain.xml (in domains/domain1/) and the file system, and this is most likely due to deleting projects from NetBeans (after you're done with the previous demo) without undeploying them (I filed this RFE).

 • Undeploy only works on existing projects. When using the web UI to undeploy non-existing applications, you get a (what could be a bit friendlier) error/exception. All it says is that it couldn't find the application (most likely a DPL5040). When using the GlassFish web UI, make sure you pay attention to the upper right corner, which will tell you ("Restart Required") when a restart is needed (subsequent deployments from NetBeans will trigger a restart if you don't do it yourself first).

 • Hand cleaning. If you don't want to start the app server just to clean out older applications and then restart it, you can always carefully remove the appropriate entries in domain.xml (not sure I should be suggesting this). Just make sure you keep a backup of domain.xml and run asadmin verify-domain-xml to make sure you have a valid config file.

 • Clean, then restart. (Re)start your application server before the demo only after any conflicts between domain.xml and the filesystem have been resolved (see above).

 • Clean or remove the log file. When starting GlassFish from NetBeans, the application server's log file is shown in the IDE. If it's big, it'll take (what can seem like) forever to 'cat' through before actually showing the start-up sequence. You should probably delete the GlassFish log file (domains/domain1/logs/server.log) or have the log file rotate at a small size (a few hundred KB).

 • Restart from scratch? If you want to be really extra safe, you can re-create the domain using:
% asadmin delete-domain domain_name
% asadmin create-domain domain_name
(be careful, this creates a brand new domain with no resources other than the default ones)

 • Backup/Restore! Probably a better alternative (one that could make most of the above obsolete) is to do a domain backup/restore:
% asadmin backup-domain domain_name
% asadmin restore-domain [--filename backup_file] domain_name
(make sure the domain is stopped and in the desired stable state when creating the backup).

If you have reproducible use-cases of an annoying/unneeded application server restart with GlassFish v2, please report it (bug, or comment here).
Feel free to suggest tips. I'm sure I'm missing a few.

Tuesday, May 8, 2007

GlassFish v3, available as a "preview"

Even though GlassFish v2 is not finished yet (planned for September), we are already talking about GlassFish v3. The very principle of open source makes this possible without compromising license revenue that doesn't exist. It allows feedback from users and potential contributors as early in the project as possible.

GlassFish v3 is available as a preview here. A single binary (.jar) for all platforms, only 14 MB, and above all a modular architecture that runs web applications (classic), PHP (via Quercus), server-side JavaScript (Phobos), or JRuby applications, all with a startup time of under a second...

More info:
 • Hundred Kilobytes Kernel (the architect's blog)
 • Video of v3 in action.

Wednesday, May 2, 2007

GlassFish v3 screencast follow-up and more

Jerome Dochez, GlassFish architect, has a follow-up to the GlassFish v3 screencast.
Bits and documentation now live on http://hk2.dev.java.net/.

In other GlassFish news, as we're approaching GlassFish Day and JavaOne:
• Lighter weight Tomcat has to be more scalable? Think again.
• LoadBalancer administration for GlassFish
• GlassFish only an apt-get away for Dell users

Oh and GFv2 Beta 2 anytime now....

Saturday, February 10, 2007

Bloated blog


I've just added Snap to my blog. I'm not sure why I didn't do this before; I find it quite pleasant when reading blogs that use it on their links. I've been adding way too many things to this blog lately (Google Analytics, StatCounter, MyBlogLog, ...) and this page is becoming a bit bloated IMO; obviously it can't load as fast as a regular, non-instrumented web page/blog.

FireBug 1.0 is a really neat Web debugger I've been using to debug some of my jMaki wanderings (more on that later) and I've found it to be very intuitive. It doesn't get in your way when you don't need it and when you do, you can easily drill down to the exact information you're looking for in no time.

I imagine most people use it to explore the DOM, debug JavaScript and CSS code (I try not to write any), etc., but I found the "Net" profiling feature to be extremely helpful for debugging AJAX apps, as it lets you see XMLHttpRequest calls and responses (including headers). Try it out on GMail, you'll be amazed (or shocked) to see all those requests flying by.

This feature can also serve as a network profiler. Here's what it shows on a simple page load of this blog:

Purple colored time-lines above are noise more than anything else. Time for some cleaning up...

OK, by way of popular demand, Snap is gone! I hope you'll like FireBug more than Snap :)

Monday, February 5, 2007

JAX-WS 2.1, the web services stack that rocks (this is a blog, not a press release)


JAX-WS 2.1 is now available. Like GlassFish v2 (the open source Java EE 5 application server), this is a maturity release that simplifies the user experience and increases the framework's flexibility while improving its performance. A real campaign platform!

Web services are not necessarily slow if you use the right architecture (the hardest part) and the right implementation (which changes quickly). Whether for web services or other technologies, benchmarks are interesting for the team developing the technology as part of continuous integration, but since there is no industry standard in this area (SPEC or otherwise), any result must be interpreted with caution. That doesn't make them all suspect; it is simply often impossible to compare results from distinct benchmarks. That said, Java web services stacks have been making headlines these past few days.

First, WSO2 (most of the Apache Axis 2 developers) published results comparing Axis 2 and Codehaus's XFire. The (flowery) reply from the XFire camp didn't take long (if you don't know the BileBlog, focus on the substance, not the form :).

With the release of JAX-WS 2.1, it is the GlassFish JAX-WS team's turn (they had been working on this for a long time) to publish their JAX-WS vs. Axis 2 numbers. These tests were run on the same hardware, with the same client side for both Axis 2 and JAX-WS 2.1, and the results clearly show an advantage of between 30% and 100% in favor of GlassFish JAX-WS:


As discussed in the comments of the post above, the tests were run with Java 5(_10). Using Java 6 (available since December 2006) seems to improve GlassFish's performance by 7-10%. In any case, far fewer JVM tuning options were used than in the WSO2 test.

In any web services benchmark, the binding part is an important subset. JAXB 2 (the standard integrated into Java EE 5 and used by JAX-WS) has been very successful in a short time (it only dates back to early 2006) thanks to its flexibility, its performance, and its adoption by JBoss, BEA, and others, who ship the GlassFish implementation as-is in their products. Axis 2 does not (yet) support JAXB 2, so the tests were run with XMLBeans (an Apache technology originally from BEA), whose serialization/deserialization performance on large, complex data is very good.

Comparing these results with WSO2's, the first thing you notice is the use of ADB, which I had never heard of before and which seems to have limitations in its XML Schema support. Which raises the question: "what happens when my WSDL contract, defined independently of any data structure, starts using schemas not supported by ADB?". Assuming Axis 2 is clearly superior to XFire, as the WSO2 test suggests, it is surprising to see transactions-per-second figures so much lower than those measured by the JAX-WS team (about 4 times better) on the same Axis technology. Could there be a limitation in Tomcat's HTTP listener that explains it (the machine may not have been fully loaded, which matters more than saturating the gigabit network...)? Here, the tests were run with Grizzly.

JAX-WS 2.1 is the version integrated into GlassFish v2, whose beta is imminent and whose final release is expected before JavaOne. This lets you use, in a single integrated product, not only the JAX-WS 2.1 stack but also its natural extension WSIT, which supports a number of WS-* specifications (security, reliability, optimization), as shown in this table, along with interoperability validated against Microsoft .Net 3.0 (WCF).

Finally, these results are not final:
- even though fairly large messages were exchanged in the JAX-WS 2.1 benchmark, MTOM/XOP (interoperable optimization) and FastInfoset (even faster but less interoperable) were not used. The results could be improved further.
- once XFire and Axis 2 become JAX-WS compliant (work is in progress), it will be interesting to redo tests that all rely on the same programming model.
- it would be interesting to compare all these Java results with comparable tests on the .Net side.
- this reminds me of the Systinet server. Whatever happened to it?
- the only tests that matter are your own.

The value of JAX-WS 2.1 is not limited to performance; it also offers interesting improvements for developers, as described in this post (Maven integration, stateful web services, and integration with various application servers or with Spring are just a few examples). To try JAX-WS 2.1, you can download the standalone bits from http://jax-ws.dev.java.net or download GlassFish v2 M4. In either case, complete and up-to-date documentation can be found here.

Why JAX-WS 2.1 isn't a dot release (ok, technically it is :)


JAX-WS 2.1 is final. Much like GlassFish v2, this release is about removing rough edges, focusing on usability, extensibility, and increasing performance.

Web services are not slow when used wisely, which boils down to choosing the right architecture (the hardest part) and the right implementation (things change often). Performance benchmarks are certainly interesting for internal teams tracking progress in continuous-build mode, but since there's no industry standard benchmark (a la SPEC) at this point, any such benchmark results should be taken with a grain of salt. It doesn't mean they're wrong, just that you can't do a straight comparison of results across benchmarks. With that in mind, some web services-related benchmarks popped up lately discussing the various merits of the main Java web services stacks...

First, four days ago, the WSO2 guys (the main force behind Axis 2, as I understand it) published a benchmark comparing Axis 2 and Codehaus XFire. It didn't take long before the XFire camp replied (via the BileBlog).

Now the GlassFish JAX-WS team is releasing their own numbers (testing started long before WSO2 published their results) vs. Axis 2. These tests use the same hardware and the same driver talking to Axis 2 and JAX-WS 2.1, and show better results across the board, from 30% to 100% better than Axis 2:


As discussed in the post comments, the JAX-WS 2.1 benchmark is using Java 5(_10). A move to Java 6 improves GlassFish's performance by 7-10%. The VM tuning certainly has fewer options than in WSO2's tests.

Obviously, in any Web Services benchmark, the binding part plays a great role. JAXB 2 (the standard Java EE 5 API used by JAX-WS) has really been kicking butt in terms of flexibility, performance and adoption (JBoss, BEA and others are using it straight out of GlassFish). Axis 2's support for JAXB is not ready yet, so the test was run using XMLBeans which shows good sustained marshalling/unmarshalling performance on bigger and more complex data sets.

Contrasting this with the WSO2 results: I had never heard of ADB before, and I don't know how many people are using it, but it seems to have schema limitations, which begs the question of what to do when the contract-first approach with generic data (a protocol needs to be data-independent) starts using XML Schema constructs that break ADB. Also, granting that Axis 2 beats XFire in the WSO2 test, the Axis TPS figures seem pretty low compared to what the JAX-WS team measured on Axis. Maybe the server utilization isn't high enough (more important than saturating the gigabit network IMHO) and they're hitting some Tomcat (HTTPd?) listener limitation. GlassFish has Grizzly.

Note that JAX-WS 2.1 is the version integrated into GlassFish v2, which will shortly be in beta and final before JavaOne. This will package up into a single product the JAX-WS 2.1 implementation together with its natural extension WSIT, which brings many WS-* specification implementations (as shown here) and great Microsoft WCF interop.

Of course, these results are by no means final:
- while some fairly big messages were exercised in the JAX-WS 2.1 benchmark, MTOM/XOP (more interoperable) and FastInfoset (better performance) were not used. These could help achieve even better results.
- when XFire and Axis 2 get to JAX-WS conformance (they both have plans), it will be interesting to compare them all using the same programming model.
- it would also be interesting to see how these numbers all compare to .Net.
- this all reminds me of Systinet's Web Services engine. Whatever happened to it?
- test with your own messages/payloads/architecture/load, this is the only benchmark that matters.

JAX-WS 2.1 is not only about performance, it's also about developer ease of use (stateful web services, Maven integration, and integration with multiple application frameworks including Spring, to name just a few). See this blog for more info. Want to test-drive JAX-WS 2.1? Download the standalone bits from http://jax-ws.dev.java.net or get GlassFish v2 M4. In both cases, extensive and up-to-date documentation can be found here.

Alexis Moussine-Pouchkine's Weblog
