Tuesday Oct 11, 2011

Silicon Valley Code Camp 2011 Trip Report

2200+ code campers attended 209 sessions delivered by 172 passionate speakers, ate 507 large pizzas, 1269 sandwiches, and 5670 snack packs, drank 5000 bottles of water and 18 cambros of coffee, and consumed 1560 parking spaces.

Where does all this happen? At Silicon Valley Code Camp 2011, of course!

This organically grown code camp has seen tremendous growth over the past 6 years. The Java tracks were added in 2007 after Van Riper visited the very first edition of the code camp, which was .NET focused. Now .NET sessions make up about a third of the agenda, Java-based sessions another third, and everything else fills up the rest.

Here is a brief chart showing the growth in the number of code campers registered, attended, and sessions delivered over the past 6 years:

I've attended each code camp since the introduction of Java (2007, 2008, 2009, 2010, and now this one) as a speaker.

This year, Oracle speakers delivered several sessions.

My first session was "Develop, Deploy, and Monitor a Java EE 6 session using GlassFish 3.1 Cluster" and covered how to:
  • Walk through and deploy a typical Java EE 6 application using NetBeans and GlassFish.
  • Create a two-instance GlassFish cluster and front-end it with a web server and a load balancer.
  • Demonstrate session replication when one of the instances fails.
  • Use the extensible monitoring infrastructure to generate application-specific monitoring data.
The complete instructions and the source code are available and you can try it yourself as well.

The second session, "The Java EE 7 Platform: Developing for the Cloud", was packed and standing-room-only; the slides are available:

The demo shown in the talk is explained below (complete instructions):

I'll be giving a similar presentation at JAX London (London), JFall (Netherlands), Oredev (Sweden), Devoxx (Belgium), IndicThreads (India), and JavaOne Latin America (Brazil) in the days to come. Make sure to catch me to ask your questions or share your use cases to help make Java EE 7 successful.

If you attended any of the sessions, please make sure to complete an evaluation.

Check out some pictures from the camp ...

And enjoy the album from Saturday morning:

I had to rush back home because of prior personal commitments. Anyway, looking forward to v7.0 next year.

Java EE 7 is planned to go final around the same time so there will be lots of material to share. And looking at the current trends, the registration and attendance numbers will be much higher than this year. So make sure to register early :-)

Here are some tweet-feedback ...

#svcc The Sillcon Valley Code Camp was very well done, organisation was excellent. Thanks to the SVCC staff and @vanriper ;-)
. @sv_code_camp if anyone with a passion for tech needs a reason to move to the bay area, all they have to do is attend #svcc
#svcc very well organized this year, they're handling growth quite well.

Many thanks to Van, Kevin, Peter, and the rest of the crew for yet another successful camp!

Sunday Feb 27, 2011

GlassFish 3.1 Now Released: Java EE 6 with Clustering and High Availability + Commercial Support

GlassFish 3.1 is now released - download now!

Here are some numbers about the release:

This is the first major release of GlassFish under Oracle and fourth major in the overall train:

  • GlassFish v1 was about Java EE 5 compliance (the first one to be so!)
  • GlassFish v2 added Clustering and High Availability
  • GlassFish 3 was about Java EE 6 compliance (yet again, the first one!)

The two main themes in GlassFish 3.1 are:

  • Clustering & Centralized Administration - ability to create multiple clusters per domain and multiple instances per cluster from CLI and web-based Admin console
  • High Availability - in-memory state replication

This is the only application server that provides Java EE 6 Web Profile and Full Platform compliance in a clustered environment. And yes, you can buy commercial support from Oracle!

Now you may think GlassFish 3.1 = GlassFish 3 + GlassFish v2. In reality, there are many more improvements made exclusively in GlassFish 3.1.

The blog "GlassFish 3.1 >= GlassFish 2.x + GlassFish 3.0" provides a complete list of items added/improved above & beyond the previous releases.

The commercial version also contains the closed-source value-adds in GlassFish Server Control (née GlassFish Enterprise Manager):

  1. DAS Backup & Recovery - A disaster recovery solution that allows you to back up an existing domain in an archive and recover in case of a disaster.
  2. Performance Tuner (online help only) - Analyzes the underlying infrastructure and tunes the GlassFish runtime for optimal throughput & scalability.
  3. Monitoring scripting client - Allows you to introspect the runtime using JavaScript and create your own dashboard.
  4. Coherence ActiveCache - A new feature that enables integration with Oracle Coherence. It allows you to replace in-memory session replication with Oracle Coherence, moving the replicated state to a separate tier that can be scaled out independently of the application tier. It needs to be licensed separately from Oracle GlassFish Server and will be available later this year.
  5. Integration with Oracle Access Manager - Delegates authorization & authentication to OAM.
  6. Load Balancer & Plugin Installer - A reverse proxy engine that runs on Oracle Web Server and provides failover.

The value-adds #1 through #5 are pre-bundled with the commercial version and #6 needs to be downloaded and configured explicitly.

The nightly, promoted, and milestone builds have been available for many months now and this is the final build. Here are some pointers for you to get started:

Here are some specific screencasts to get you started:

GlassFish For Business talks about the umpteen update and patch releases done in between the major releases and also explains the difference between the open source and commercial versions. Based upon the standard Oracle middleware support policy, premier support for GlassFish 3.1 will end in March 2016 and extended support in March 2019.

OK, so you have heard all about it and are ready to download and try it out. Here is a little graphic to assist you in deciding which bundle to download:

Download Java EE 6 SDK, Oracle GlassFish Server, or GlassFish Server Open Source Edition based upon your needs. And if you don't know your needs (that's possible!) then start with the Java EE 6 SDK as that is a comprehensive bundle including JDK and/or NetBeans (the complete IDE for your Java EE 6 and GlassFish 3.1 needs), docs, tutorials, and samples.

What is holding you back from using GlassFish 3.1 as your primary Java EE 6 deployment platform? :-)

Wednesday Nov 17, 2010

Screencast #34: GlassFish 3.1 Clustering, High Availability and Centralized Administration

Two main themes of GlassFish 3.1 are: High Availability and Centralized Administration. This screencast demonstrates how to create a 2-instance cluster on a single machine using web-based Administration Console, deploy the canonical clusterjsp application, and show session fail-over capabilities in GlassFish 3.1.

From download to testing an application with high availability ... all in under 10 minutes ... enjoy!

A future screencast will show how to create multiple nodes on different machines, with 1 or more instances on each node, all of them front-ended with a load balancer.

Technorati: screencast glassfish clustering highavailability

Friday Jun 25, 2010

TOTD #142: GlassFish 3.1 - SSH Provisioning and Start/Stop instance/cluster on local/remote machines

GlassFish 3.1 Milestone 2 enables SSH provisioning that allows you to create, start, stop, and delete a cluster spanning multiple instances on local and remote machines from the Domain Administration Server (DAS). This Tip Of The Day (TOTD) builds upon TOTD #141 and explains how you can create such a cluster on Amazon EC2 with Ubuntu 10.04. Carla also blogged about a similar scenario here.

The cluster topology created is shown below:

The key points shown in the topology are:

  • It consists of DAS and a remote machine "fruits" (shown in green color)
  • There is one cluster "food" spanning these two machines (shown in yellow color)
  • DAS has "broccoli" and "spinach" instances (shown in red color)
  • "fruits" has "apple", "banana", and "orange" instances (shown in red color)

Amazon EC2 assigned the public IP address of "ec2-184-72-12-163.us-west-1.compute.amazonaws.com" to DAS and "ec2-184-72-17-228.us-west-1.compute.amazonaws.com" to the remote machine. These IP addresses are used in the command invocations below.

Let's get started!

  1. Configure SSH between DAS and the remote machine - More details about SSH key setup here.
    1. Copy the keypair  generated in TOTD #141 (ec2-keypair.pem) to DAS as:
      ~/.ec2 >scp -i /Users/arungupta/.ec2/ec2-keypair.pem /Users/arungupta/.ec2/ec2-keypair.pem \
      ec2-keypair.pem 100% 1751 1.7KB/s 00:00
      Notice, the public IP address of the DAS is specified here. This key will be used to copy the private keys generated in the next step to the remote machine.
    2. Generate a private/public key pair on DAS as:
      ubuntu@ip-10-160-47-196:~$ ssh-keygen -t dsa
      Generating public/private dsa key pair.
      Enter file in which to save the key (/home/ubuntu/.ssh/id_dsa): 
      Enter passphrase (empty for no passphrase): 
      Enter same passphrase again: 
      Your identification has been saved in /home/ubuntu/.ssh/id_dsa.
      Your public key has been saved in /home/ubuntu/.ssh/id_dsa.pub.
      The key fingerprint is:
      0a:b8:cd:8c:a0:7f:3d:00:9e:ec:ac:06:a1:f1:2f:cb ubuntu@ip-10-160-47-196
      The key's randomart image is:
      +--[ DSA 1024]----+
      |                 |
      |                 |
      |                 |
      |o ..             |
      |o*.o. S          |
      |+.=*.. .         |
      |ooo.+o.          |
      | ++ o o          |
      |o.E+ .           |
      +-----------------+
    3. Copy the generated public key to the ".ssh" directory of the remote machine as:
      ubuntu@ip-10-160-142-175:~/.ssh$ scp -i ec2-keypair.pem id_dsa.pub \
    4. Make sure the ssh connection works between the DAS and the remote machine without specifying any key or passphrase, as shown below:
      ssh ubuntu@ec2-184-72-17-228.us-west-1.compute.amazonaws.com
  2. Install the "sun-java6-jdk" and "unzip" packages and GlassFish on the DAS and the remote machine as explained in TOTD #141. In short:
    ssh -i /Users/arungupta/.ssh/ec2-keypair.pem ubuntu@ec2-XX-XX-XX-XX.us-west-1.compute.amazonaws.com
    sudo add-apt-repository "deb http://archive.canonical.com/ lucid partner"
    sudo apt-get update
    sudo apt-get install sun-java6-bin sun-java6-jre sun-java6-jdk
    sudo update-java-alternatives -s java-6-sun
    sudo apt-get install unzip
    wget http://dlc.sun.com.edgesuite.net/glassfish/3.1/promoted/glassfish-3.1-b06.zip
    unzip glassfish-3.1-b06.zip
  3. Start GlassFish on the DAS and the remote machine as:
    export ENABLE_REPLICATION=true
    export PATH=~/glassfishv3/bin:$PATH
    asadmin start-domain --verbose &
  4. Create the cluster and instances by issuing the following commands on the DAS:
    1. Create the cluster as:
      ubuntu@ip-10-160-142-175:~$ asadmin create-cluster food
      _ThreadID=23;_ThreadName=http-thread-pool-4848(2);|Hibernate Validator bean-validator-3.0-JBoss-4.0.2_03|#]
      Instantiated an instance of org.hibernate.validator.engine.resolver.JPATraversableResolver.|#]
      Command create-cluster executed successfully.
    2. Create a node on the remote machine as:
      ubuntu@ip-10-160-142-175:~$ asadmin create-node-ssh --nodehost \
      ec2-184-72-17-228.us-west-1.compute.amazonaws.com --nodehome /home/ubuntu/glassfishv3 fruits
      Command create-node-ssh executed successfully.
    3. List all the nodes as:
      ubuntu@ip-10-160-142-175:~$ asadmin list-nodes
      Command list-nodes executed successfully.
    4. Create two instances ("broccoli" and "spinach") on DAS as:
      ubuntu@ip-10-160-142-175:~$ asadmin create-instance --cluster=food \
      --systemproperties AJP_INSTANCE_NAME=broccoli:AJP_INSTANCE_PORT=19090 broccoli
      Creating instance broccoli on localhost|#]
      Using DAS host localhost and port 4848 from existing das.properties for nodeagent 
      ip-10-160-142-175. To use a different DAS, create a new nodeagent by specifying a
      new --nodeagent name with the correct values for --host and --port.|#]
      Command _create-instance-filesystem executed successfully.|#]
      Command create-instance executed successfully.
      ubuntu@ip-10-160-142-175:~$ asadmin create-instance --cluster=food \
      --systemproperties AJP_INSTANCE_NAME=spinach:AJP_INSTANCE_PORT=19091 spinach
      Creating instance spinach on localhost|#]
      Using DAS host localhost and port 4848 from existing das.properties for nodeagent 
      ip-10-160-142-175. To use a different DAS, create a new nodeagent by specifying a
      new --nodeagent name with the correct values for --host and --port.|#]
      Command _create-instance-filesystem executed successfully.|#]
      Command create-instance executed successfully.
      The AJP_INSTANCE_NAME and AJP_INSTANCE_PORT properties will be used by mod_jk in a subsequent blog.
    5. Create three instances ("apple", "banana", and "orange") on the remote machine as:
      ubuntu@ip-10-160-142-175:~$ asadmin create-instance --cluster=food --node=fruits \
      --systemproperties AJP_INSTANCE_NAME=apple:AJP_INSTANCE_PORT=19090 apple
      Creating instance apple on fruits|#]
      Command _create-instance-filesystem executed successfully.
      Command create-instance executed successfully.
      ubuntu@ip-10-160-142-175:~$ asadmin create-instance --cluster=food --node=fruits \
      --systemproperties AJP_INSTANCE_NAME=banana:AJP_INSTANCE_PORT=19091 banana
      Creating instance banana on fruits|#]
      Using DAS host ip-10-160-142-175.us-west-1.compute.internal and port 4848 from 
      existing das.properties for nodeagent ip-10-160-142-20. To use a different DAS,
      create a new nodeagent by specifying a new --nodeagent name with the correct
      values for --host and --port.
      Command _create-instance-filesystem executed successfully.
      Command create-instance executed successfully.
      ubuntu@ip-10-160-142-175:~$ asadmin create-instance --cluster=food --node=fruits \
      --systemproperties AJP_INSTANCE_NAME=orange:AJP_INSTANCE_PORT=19092 orange
      Creating instance orange on fruits|#]
      Using DAS host ip-10-160-142-175.us-west-1.compute.internal and port 4848 from 
      existing das.properties for nodeagent ip-10-160-142-20. To use a different DAS,
      create a new nodeagent by specifying a new --nodeagent name with the correct
      values for --host and --port.
      Command _create-instance-filesystem executed successfully.
      Command create-instance executed successfully.
  5. Start the cluster
    1. List all instances as:
      ubuntu@ip-10-160-142-175:~$ asadmin list-instances
      broccoli not running
      spinach not running
      apple not running
      banana not running
      orange not running
      Command list-instances executed successfully.
    2. Start the cluster as:
      ubuntu@ip-10-160-142-175:~$ asadmin start-cluster food
      . . .
      Command start-cluster executed successfully.
    3. List all the instances again as:
      ubuntu@ip-10-160-142-175:~$ asadmin list-instances
      . . .
      broccoli running
      spinach running
      apple running
      banana running
      orange running

      The HTTP ports of each instance can be grepped from DAS's "domain.xml". Here are the ports for each created instance:
      broccoli 28080
      spinach 28081
      apple 28080
      banana 28081
      orange 28082

      On Amazon, you may have to poke holes in the firewall as:
      ec2-authorize default -p 28080
      ec2-authorize default -p 28081
      ec2-authorize default -p 28082
      And now "http://ec2-184-72-12-163.us-west-1.compute.amazonaws.com:28080/" ("broccoli" instance on DAS) will show the default index page. Similarly other host and port combinations will show this page as well.
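The port-grepping step mentioned above (finding each instance's HTTP port in the DAS's "domain.xml") can be sketched in a few lines. The XML below is a simplified, hypothetical sample, not the exact GlassFish 3.1 schema; the idea is just to pull the per-instance system properties out of the file:

```ruby
require 'rexml/document'

# Simplified, made-up fragment standing in for the real domain.xml.
SAMPLE = <<XML
<domain>
  <servers>
    <server name="broccoli">
      <system-property name="HTTP_LISTENER_PORT" value="28080"/>
    </server>
    <server name="spinach">
      <system-property name="HTTP_LISTENER_PORT" value="28081"/>
    </server>
  </servers>
</domain>
XML

# Map each server instance to its HTTP listener port.
def http_ports(xml)
  doc = REXML::Document.new(xml)
  ports = {}
  doc.elements.each('//server') do |server|
    prop = server.elements["system-property[@name='HTTP_LISTENER_PORT']"]
    ports[server.attributes['name']] = prop.attributes['value'] if prop
  end
  ports
end

puts http_ports(SAMPLE).inspect
```

On a real installation you would read "glassfishv3/glassfish/domains/domain1/config/domain.xml" instead of the inline sample.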

This blog showed how to create a GlassFish 3.1 cluster spanning multiple instances on Amazon EC2 with Ubuntu 10.04.

Subsequent blogs will show:

  • How to deploy an app to this cluster (and some variations)
  • How to front-end this cluster with mod_jk for load balancing

Technorati: totd glassfish clustering ssh instance amazon ec2 ubuntu

Thursday May 27, 2010

TOTD #138: GlassFish 3.1 Milestone 1 - Clustering and Application Versioning Demos

GlassFish Server Open Source Edition 3.1 Milestone 1 is now available. The key functional drivers of this release are:

  • Clustering and Centralized Administration
  • High Availability

The Feature List shows the complete set of features planned for the release. The Draft Engineering Schedule shows what/when the features will be delivered. Per the schedule, Milestone 1 is the promoted build b02.

This is a bleeding-edge build and the first time clustering capabilities and application versioning are shown in GlassFish 3.1. GlassFish Server Open Source Edition 2.1.1 is the current stable release for all clustering and high availability capabilities.

The key features that work in this milestone build are Basic Clustering Support and Application Versioning. These features are now explained below.

Basic Clustering - This feature allows you to create a multi-node cluster with a few local/remote server instances and start them. The concept remains essentially similar to GlassFish v2. Here are the key concepts:

  1. New commands such as "create-cluster", "create-local-instance", "start-instance", and "list-instances" are now available.
  2. There is a central Domain Administration Server (DAS), corresponding to a GlassFish domain, that manages the entire cluster. This is also the "central repository" of all the artifacts and is the single point of entry for cluster administration.
  3. The cluster consists of multiple instances (local and/or remote) that are synchronized with the DAS using SSH Provisioning (as opposed to the "Node Agent" in v2). The first boot of an instance synchronizes its file system with the DAS for configuration files, domain.xml, and any deployed applications. Communication between an instance and the DAS uses the CLI interface.
  4. All applications are deployed to the DAS with a "--target" switch indicating the target cluster.
  5. Using an interim switch (ENABLE_REPLICATION=true), any command executed on the DAS is re-executed on the local/remote instances. This allows application deployment, JDBC connection pool/resource CRUD, and other similar commands to be re-executed on the instances participating in the cluster.
  6. Each instance's administration data is accessible at "http://{host}:{port}/management/domain".
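The management endpoint in the last point is easy to probe with a small script. Here is a minimal sketch; the helper name is mine, only the "/management/domain" path comes from the text above:

```ruby
require 'uri'

# Hypothetical helper: build the REST management URL for a given instance.
def management_url(host, port)
  URI::HTTP.build(host: host, port: port, path: '/management/domain').to_s
end

puts management_url('localhost', 4848)
# Against a running instance you could then fetch it, e.g.:
#   require 'net/http'
#   puts Net::HTTP.get(URI(management_url('localhost', 4848)))
```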
The complete details about how to create a cluster and instances, deploy an application, enable replication, etc. are available at the 3.1 Milestone Clustering Demo Wiki. Here is a quick summary of commands that worked for me. The WAR file used below is from the demo wiki.
./bin/asadmin start-domain --verbose &
./bin/asadmin create-cluster c1
./bin/asadmin create-local-instance --cluster c1 --systemproperties HTTP_LISTENER_PORT=18080:HTTP_SSL_LISTENER_PORT=18181:IIOP_SSL_LISTENER_PORT=13800:IIOP_LISTENER_PORT=13700:JMX_SYSTEM_CONNECTOR_PORT=17676:IIOP_SSL_MUTUALAUTH_PORT=13801:JMS_PROVIDER_PORT=18686:ASADMIN_LISTENER_PORT=14848 in1
./bin/asadmin create-local-instance --cluster c1 --systemproperties HTTP_LISTENER_PORT=28080:HTTP_SSL_LISTENER_PORT=28181:IIOP_SSL_LISTENER_PORT=23800:IIOP_LISTENER_PORT=23700:JMX_SYSTEM_CONNECTOR_PORT=27676:IIOP_SSL_MUTUALAUTH_PORT=23801:JMS_PROVIDER_PORT=28686:ASADMIN_LISTENER_PORT=24848 in2
./bin/asadmin list-instances
*** list-instances ***
name: in1, host: dhcp-usca14-133-151.SFBay.Sun.COM, port: 14848, state: Not Running
name: in2, host: dhcp-usca14-133-151.SFBay.Sun.COM, port: 24848, state: Not Running
./bin/asadmin deploy --target c1 helloworld.war
./bin/asadmin start-local-instance in1
./bin/asadmin start-local-instance in2
./bin/asadmin list-instances
. . .
name: in1, host: dhcp-usca14-133-151.SFBay.Sun.COM, port: 14848, state: Uptime: 1 minutes, 8 seconds, Total milliseconds: 68984

name: in2, host: dhcp-usca14-133-151.SFBay.Sun.COM, port: 24848, state: Uptime: 31,665 milliseconds, Total milliseconds: 31665
. . .
curl http://localhost:18080/helloworld/hi.jsp
<html><head><title>JSP Test</title>

<h2>Hello, World.</h2>
Thu May 27 17:53:47 PDT 2010
curl http://localhost:28080/helloworld/hi.jsp
<html><head><title>JSP Test</title>

<h2>Hello, World.</h2>
Thu May 27 17:53:56 PDT 2010
./bin/asadmin stop-instance in1
./bin/asadmin stop-instance in2
./bin/asadmin delete-local-instance in1
./bin/asadmin delete-local-instance in2
./bin/asadmin delete-cluster c1

AS_DEBUG=true and/or AS_LOGFILE=true variables can be set to see some interesting debugging information.

This is only the beginning of a journey, and many more exciting features will be released in the subsequent milestones. Milestone 2 is planned for Jun 21st, stay tuned!

Application Versioning - Will Hartung provided an excellent description of why application versioning is important. Basically, multiple versions of an application can be deployed concurrently on a GlassFish domain, with only one version enabled at a given time. The application can be quickly rolled back to a previous version in case a bug is encountered in a newer one; "quickly" is the keyword. The time otherwise spent undeploying the current application, copying the new archive over to the server, expanding the archive, and deploying/starting the application is all cut out, since the previous version is still deployed.
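The versioning behavior just described can be modeled in a few lines of Ruby. This is a conceptual toy, not the asadmin API: several versions stay deployed, at most one is enabled, and "rollback" is just flipping the enabled flag:

```ruby
# Toy model of one-enabled-version-at-a-time application versioning.
class VersionedApp
  attr_reader :enabled, :versions

  def initialize
    @enabled = nil
    @versions = []
  end

  # Deploying a new version keeps the old ones around but enables the new one.
  def deploy(version)
    @versions << version
    @enabled = version
  end

  # Enabling a version implicitly disables the previously enabled one --
  # no undeploy, no archive copy, no expansion, hence the fast rollback.
  def enable(version)
    raise ArgumentError, "#{version} is not deployed" unless @versions.include?(version)
    @enabled = version
  end
end

app = VersionedApp.new
app.deploy('helloworld:beta')
app.deploy('helloworld:rc')     # beta stays deployed but is now disabled
app.enable('helloworld:beta')   # quick rollback to the earlier version
puts app.enabled
```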

Here is a quick summary of commands that worked for me:

./bin/asadmin deploy helloworld.war
./bin/asadmin deploy --name=helloworld:test helloworld.war
./bin/asadmin deploy --name=helloworld:beta helloworld.war
./bin/asadmin deploy --name=helloworld:rc helloworld.war
./bin/asadmin list-applications
helloworld <web>
helloworld:1 <web>
helloworld:beta <web>
helloworld:rc <web>
./bin/asadmin show-component-status helloworld:beta
Status of helloworld:beta is disabled.
./bin/asadmin show-component-status helloworld:rc
Status of helloworld:rc is enabled.
./bin/asadmin enable helloworld:beta
./bin/asadmin show-component-status helloworld:rc
Status of helloworld:rc is disabled.
./bin/asadmin show-component-status helloworld:beta 
Status of helloworld:beta is enabled.
./bin/asadmin undeploy helloworld:1
./bin/asadmin list-applications
helloworld <web>
helloworld:beta <web>
helloworld:rc <web>

An exception with the message "javax.management.MalformedObjectNameException: Invalid character ':' in value part of property" is thrown when this WAR is deployed, though. This is tracked as issue #12077.

In this milestone build, the "show-component-status" command is used to check the enabled status of a particular version. The milestone 2 build will provide support for a "--verbose" option with the "list-applications" and "list-components" commands that will show the enabled status as part of the console output.

Read more details in Clustering Infrastructure One Pager, Clustering Design Spec, and Application Versioning One Pager.

Please try out these features, help review the different One Pagers, and file issues in the Issue Tracker.

Technorati: totd glassfish 3.1 clustering version cluster application

Tuesday Aug 11, 2009

TOTD #92: Session Failover for Rails applications running on GlassFish

GlassFish High Availability allows you to set up a cluster of GlassFish instances and achieve a highly scalable architecture using in-memory session state replication. This cluster can be very easily created and tested using the "clusterjsp" sample bundled with GlassFish. Here are some clustering related entries published on this blog so far:
  • TOTD #84 shows how to setup Apache + mod_proxy balancer for Ruby-on-Rails load balancing
  • TOTD #81 shows how to use nginx to front end a cluster of GlassFish Gems
  • TOTD #69 explains how a GlassFish cluster can be front-ended using Sun Web Server and Load Balancer Plugin
  • TOTD #67 shows the same thing using Apache httpd + mod_jk
#67 & #69 use the web application "clusterjsp" (bundled with GlassFish), which uses JSP to demonstrate in-memory session state replication. This blog creates a similar application, "clusterrails", this time using Ruby-on-Rails, and deploys it on GlassFish v2.1.1. The idea is to demonstrate how Rails applications can leverage the in-memory session replication feature of GlassFish.

Rails applications can be easily deployed as a WAR file on GlassFish v2 as explained in TOTD #73. This blog will guide you through the steps of creating the Controller and View to mimic "clusterjsp" and configuring the Rails application for session replication.
  1. Create a template Rails application and create/migrate the database. Add a Controller/View as:

    ~/samples/jruby/session >~/tools/jruby/bin/jruby script/generate controller home index
    JRuby limited openssl loaded. gem install jruby-openssl for full support.
          exists  app/controllers/
          exists  app/helpers/
          create  app/views/home
          exists  test/functional/
          create  test/unit/helpers/
          create  app/controllers/home_controller.rb
          create  test/functional/home_controller_test.rb
          create  app/helpers/home_helper.rb
          create  test/unit/helpers/home_helper_test.rb
          create  app/views/home/index.html.erb

  2. Edit the controller in "app/controllers/home_controller.rb" and change the code to (explained below):

    class HomeController < ApplicationController
      include Java

      def index
        @server_served = servlet_request.get_server_name
        @port = servlet_request.get_server_port
        @instance = java.lang.System.get_property "com.sun.aas.instanceName"
        @server_executed = java.net.InetAddress.get_local_host.get_host_name
        @ip = java.net.InetAddress.get_local_host.get_host_address
        @session_id = servlet_request.session.get_id
        @session_created = servlet_request.session.get_creation_time
        @session_last_accessed = servlet_request.session.get_last_accessed_time
        @session_inactive = servlet_request.session.get_max_inactive_interval

        if params[:name] != nil
          servlet_request.session[params[:name]] = params[:value]
        end

        @session_values = ""
        value_names = servlet_request.session.get_attribute_names
        unless value_names.has_more_elements
          @session_values = "<br>No parameter entered for this request"
        else
          @session_values << "<UL>"
          while value_names.has_more_elements
            param = value_names.next_element
            unless param.start_with?("__")
              value = servlet_request.session.get_attribute(param)
              @session_values << "<LI>" + param + " = " + value + "</LI>"
            end
          end
          @session_values << "</UL>"
        end
      end

      def adddata
        servlet_request.session.set_attribute(params[:name], params[:value])
        render :action => "index"
      end

      def cleardata
        servlet_request.session.invalidate  # clear any stored session data
        render :action => "index"
      end
    end

    The "index" action initializes some instance variables using the "servlet_request" variable mapped from "javax.servlet.http.ServletRequest" class. The "servlet_request" provides access to different properties of the request received such as server name/port, host name/address and others. It also uses an application server specific property "com.sun.aas.instanceName" to fetch the name of particular instance serving the request. In this blog we'll create a cluster with 2 instances. The action then prints the servlet session attributes name/value pairs entered so far.

    The "adddata" action takes the name/value pair entered on the page and stores it in the servlet session. The "cleardata" action clears any data that is stored in the session.
  3. Edit the view in "app/views/home/index.html.erb" and change to (explained below):

    <p>Find me in app/views/home/index.html.erb</p>
    <B>HttpSession Information:</B>
    <LI>Served From Server:   <b><%= @server_served %></b></LI>
    <LI>Server Port Number:   <b><%= @port %></b></LI>
    <LI>Executed From Server: <b><%= @server_executed %></b></LI>
    <LI>Served From Server instance: <b><%= @instance %></b></LI>
    <LI>Executed Server IP Address: <b><%= @ip %></b></LI>
    <LI>Session ID:    <b><%= @session_id %></b></LI>
    <LI>Session Created:  <%= @session_created %></LI>
    <LI>Last Accessed:    <%= @session_last_accessed %></LI>
    <LI>Session will go inactive in  <b><%= @session_inactive %> seconds</b></LI>
    <% form_tag "/session/home/index" do %>
      <label for="name">Name of Session Attribute:</label>
      <%= text_field_tag :name, params[:name] %><br>

      <label for="value">Value of Session Attribute:</label>
      <%= text_field_tag :value, params[:value] %><br>

        <%= submit_tag "Add Session Data" %>
    <% end  %>
    <% form_tag "/session/home/cleardata" do %>
        <%= submit_tag "Clear Session Data" %>
    <% end %>
    <% form_tag "/session/home/index" do %>
        <%= submit_tag "Reload Page" %>
    <% end %>
    <B>Data retrieved from the HttpSession: </B>
    <%= @session_values %>

    The view dumps the property values retrieved in the action. It then consists of forms to enter the session name/value pairs, clear the session, and reload the page. The application is now ready; let's configure it for WAR packaging.
  4. Generate a template "web.xml" and copy it to "config" directory as:

    ~/samples/jruby/session >~/tools/jruby/bin/jruby -S warble war:webxml
    mkdir -p tmp/war/WEB-INF
    ~/samples/jruby/session >cp tmp/war/WEB-INF/web.xml config/
    1. Edit "tmp/war/WEB-INF/web.xml" and change the first few lines from:

      <!DOCTYPE web-app PUBLIC
        "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN"
        "http://java.sun.com/dtd/web-app_2_3.dtd">

      to:

      <web-app version="2.4" xmlns="http://java.sun.com/xml/ns/j2ee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd">

      This is required because the element to be added next is introduced in the Servlet 2.4 specification.
    2. Add the following element:

      <distributable/>

      as the first element, right after "<web-app>". This element marks the web application as distributable across multiple JVMs in a cluster.
  5. Generate and configure "warble/config.rb" as described in TOTD #87. This configuration is an important step otherwise you'll encounter JRUBY-3789. Create a WAR file as:

    ~/samples/jruby/session >~/tools/jruby/bin/jruby -S warble
    mkdir -p tmp/war/WEB-INF/gems/specifications
    cp /Users/arungupta/tools/jruby-1.3.0/lib/ruby/gems/1.8/specifications/rails-2.3.2.gemspec tmp/war/WEB-INF/gems/specifications/rails-2.3.2.gemspec

    . . .

    mkdir -p tmp/war/WEB-INF
    cp config/web.xml tmp/war/WEB-INF
    jar cf session.war  -C tmp/war .

  6. Download the latest GlassFish v2.1.1, install/configure GlassFish, and create/configure/start a cluster using the script described here. Make sure to change the download location and filename in the script. This script creates a cluster "wines" with two instances - "cabernet" running on port 58080 and "merlot" running on port 58081.
  7. Deploy the application using the command:

    ~/samples/jruby/session >asadmin deploy --target wines --port 5048 --availabilityenabled=true session.war
Now, the screenshots from the two instances are shown and explained below. The two (or more) instances are front-ended by a load balancer, so none of this is typically visible to the user, but it helps to understand what is happening.
Here is a snapshot of this application deployed on "cabernet":

The instance name and the session id are highlighted in the red box. It also shows the time when the session was created in the "Session Created" field.

And now the same application from "merlot":

Notice that the session id exactly matches the one from the "cabernet" instance. Similarly, "Session Created" matches but "Last Accessed" does not because the same session is accessed from a different instance.

Let's enter some session data in the "cabernet" instance and click on the "Add Session Data" button as shown below:

The session attribute is "aaa" and value is "111". Also the "Last Accessed" time is updated. In the "merlot" page, click on the "Reload Page" button and the same session name/value pairs are retrieved as shown below:

Notice that the "Last Accessed" time is after the time shown in the "cabernet" instance. The session information added in "cabernet" is automatically replicated to the "merlot" instance.

Now, let's add a new session name/value pair in the "merlot" instance as shown below:

The "Last Accessed" is updated and the session name/value pair ("bbb"/"222") is shown in the page. Click on "Reload page" in "cabernet" instance as shown below:

This time the session information added to "merlot" is replicated to "cabernet".

So any session information added in "cabernet" is replicated to "merlot" and vice versa.

Now, let's stop the "cabernet" instance as shown below:

and click on "Reload Page" in "merlot" instance to see the following:

Even though the instance where the session data was added has been stopped, the replicating instance continues to serve both session values.

As explained earlier, these two instances are front-ended by a load balancer typically running at port 80. So the user makes a request to port 80 and the correct session values are served even if one of the instances goes down, thereby providing in-memory session replication.

Please leave suggestions on other TOTD that you'd like to see. A complete archive of all the tips is available here.

Technorati: totd glassfish clustering rubyonrails jruby highavailability loadbalancer

Tuesday Jun 16, 2009

TOTD #84: Using Apache + mod_proxy_balancer to load balance Ruby-on-Rails running on GlassFish

TOTD #81 explained how to install/configure nginx for load-balancing/front-ending a cluster of Rails applications running on GlassFish Gem. Another popular approach in the Rails community is to use Apache HTTPD + mod_proxy_balancer. A user asked for the exact details of this setup on the GlassFish Gem Forum. This Tip Of The Day (TOTD) explains the steps.
  1. Create a simple Rails scaffold and run this application using GlassFish Gem on 3 separate ports as explained in TOTD #81.
  2. Setup and configure HTTPD and mod_proxy_balancer
    1. Setup and install Apache HTTPD as explained here. I believe mod_proxy_balancer and other related modules come pre-bundled with HTTPD; at least that's what I observed with Mac OS X 10.5.7. Make sure that the "mod_proxy_balancer" module is enabled by verifying that the following line is uncommented in "/etc/apache2/httpd.conf":

      LoadModule proxy_balancer_module libexec/apache2/mod_proxy_balancer.so

      Please note another similar file exists in "/etc/httpd/httpd.conf" but ignore that one.
    2. Setup a mod_proxy_balancer cluster by adding the following fragment in "httpd.conf" as:

      <Proxy balancer://glassfishgem>
      BalancerMember http://localhost:3000
      BalancerMember http://localhost:3001
      BalancerMember http://localhost:3002
      </Proxy>

      The port numbers must exactly match those used in the first step.
    3. Specify the ProxyPass directives to map the cluster to a local path as:

      ProxyPass / balancer://glassfishgem/

      The "/" at the end of "balancer://glassfishgem" is very important to ensure that all the files are resolved correctly.
    4. Optionally, the following directive can be added to view the access log:

      CustomLog /var/log/glassfishgem.log/apache_access_log combined

      Make sure to create the directory specified in "CustomLog" directive.
  3. Now the application is accessible at "http://localhost/runlogs". If a new GlassFish instance is started then update the <Proxy> directive and restart your HTTPD as "sudo httpd -k restart". Dynamic update of BalancerMembers can be configured as explained here.
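The dynamic-update approach mentioned above relies on mod_proxy_balancer's built-in manager application. As a sketch (the "/balancer-manager" location and the loopback-only access rule are choices made for this example, using Apache 2.2-style directives):

    # Enable the balancer manager UI so BalancerMembers can be
    # enabled/disabled at runtime without restarting HTTPD.
    # Restrict access -- it can take workers out of rotation.
    <Location /balancer-manager>
    SetHandler balancer-manager
    Order deny,allow
    Deny from all
    Allow from 127.0.0.1
    </Location>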
TOTD #81 started the Rails application in root context. You can alternatively start the application in a non-root context as:

~/tools/jruby/rails/runner >../../bin/jruby -S glassfish -e production -c myapp
Starting GlassFish server at: in production environment...
Writing log messages to: /Users/arungupta/tools/jruby-1.3.0/rails/runner/log/production.log.
Press Ctrl+C to stop.
. . .
~/tools/jruby/rails/runner >../../bin/jruby -S glassfish -e production -c myapp -p 3001
Starting GlassFish server at: in production environment...
Writing log messages to: /Users/arungupta/tools/jruby-1.3.0/rails/runner/log/production.log.
Press Ctrl+C to stop.
. . .
~/tools/jruby/rails/runner >../../bin/jruby -S glassfish -e production -c myapp -p 3002
Starting GlassFish server at: in production environment...
Writing log messages to: /Users/arungupta/tools/jruby-1.3.0/rails/runner/log/production.log.
Press Ctrl+C to stop.

and then the ProxyPass directive will change to:

ProxyPass /myapp/ balancer://glassfishgem/myapp/

The changes are the "/myapp/" path segments in both arguments. The application is now accessible at "http://localhost/myapp/runlogs".

After a discussion on the Apache HTTP Server forum, the BalancerMember host/port can be printed in the log file using a custom log format. So add the following log format to "/etc/apache2/httpd.conf":

LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-agent}i\" \"%{BALANCER_WORKER_NAME}e\"" custom

And change the format from the default "combined" to the newly defined "custom" format as:

CustomLog /var/log/glassfishgem.com/apache_access_log custom

Three subsequent invocations of "http://localhost/runlogs" then print the following log entries:

::1 - - [17/Jun/2009:10:53:53 -0700] "GET /runlogs HTTP/1.1" 304 - "-" "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv: Gecko/2009060214 Firefox/3.0.11" "http://localhost:3002"
::1 - - [17/Jun/2009:10:54:04 -0700] "GET /runlogs HTTP/1.1" 200 621 "-" "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv: Gecko/2009060214 Firefox/3.0.11" "http://localhost:3000"
::1 - - [17/Jun/2009:10:54:05 -0700] "GET /runlogs HTTP/1.1" 304 - "-" "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv: Gecko/2009060214 Firefox/3.0.11" "http://localhost:3001"

As evident from the last fragment of each log line, the load is distributed amongst three GlassFish Gem instances. More details on load balancer algorithm are available here.
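To sketch how those algorithms are selected (the weights below are illustrative, not from the original setup), mod_proxy_balancer accepts a "loadfactor" per member and an "lbmethod" for the whole cluster:

    # Weighted distribution: the first member gets roughly twice
    # the requests of the other two.
    <Proxy balancer://glassfishgem>
    BalancerMember http://localhost:3000 loadfactor=2
    BalancerMember http://localhost:3001 loadfactor=1
    BalancerMember http://localhost:3002 loadfactor=1
    # lbmethod=byrequests (default) balances by request count;
    # bytraffic balances by I/O volume instead.
    ProxySet lbmethod=byrequests
    </Proxy>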

Feel free to drop a comment on this blog if you are using GlassFish in production for your Rails applications. Several stories are already available at rubyonrails+glassfish+stories.

Technorati: glassfish rubyonrails apache httpd mod_proxy_balancer loadbalancing clustering

Wednesday Apr 29, 2009

TOTD #81: How to use nginx to load balance a cluster of GlassFish Gem?

nginx (pronounced "engine-x") is an open-source and high-performance HTTP server. It provides common features such as reverse proxying with caching, load balancing, modular architecture using filters (gzipping, chunked responses, etc), virtual servers, flexible configuration and much more.

nginx is known for its high performance and low resource consumption. It's a fairly popular front-end HTTP server in the Rails community along with Apache, Lighttpd, and others. This TOTD (Tip Of The Day) will show how to install/configure nginx for load-balancing/front-ending a cluster of Rails applications running on GlassFish Gem.
  1. Download, build, and install nginx using the simple script (borrowed from dzone):

    ~/tools > curl -L -O http://sysoev.ru/nginx/nginx-0.6.36.tar.gz
    ~/tools > tar -xzf nginx-0.6.36.tar.gz
    ~/tools > curl -L -O http://downloads.sourceforge.net/pcre/pcre-7.7.tar.gz
    ~/tools > tar -xzf pcre-7.7.tar.gz
    ~/tools/nginx-0.6.36 > ./configure --prefix=/usr/local/nginx --sbin-path=/usr/sbin --with-debug --with-http_ssl_module --with-pcre=../pcre-7.7
    ~/tools/nginx-0.6.36 > make
    ~/tools/nginx-0.6.36 > sudo make install
    ~/tools/nginx-0.6.36 > which nginx

    OK, nginx is now installed; start it ("sudo nginx") and verify by visiting "http://localhost" as shown below:

  2. Create a simple Rails scaffold as:

    ~/samples/jruby >~/tools/jruby/bin/jruby -S rails runner
    ~/samples/jruby/runner >~/tools/jruby/bin/jruby script/generate scaffold runlog miles:float minutes:integer
    ~/samples/jruby/runner >sed s/'adapter: sqlite3'/'adapter: jdbcsqlite3'/ <config/database.yml >config/database.yml.new
    ~/samples/jruby/runner >mv config/database.yml.new config/database.yml
    ~/samples/jruby/runner >~/tools/jruby/bin/jruby -S rake db:migrate
  3. Run this application using GlassFish Gem on 3 separate ports as:

    ~/samples/jruby/runner >~/tools/jruby/bin/jruby -S glassfish
    Starting GlassFish server at: in development environment...
    Writing log messages to: /Users/arungupta/samples/jruby/runner/log/development.log.
    Press Ctrl+C to stop.

    The default port is 3000. Start the second one by explicitly specifying the port using the "-p" option ..

    ~/samples/jruby/runner >~/tools/jruby/bin/jruby -S glassfish -p 3001
    Starting GlassFish server at: in development environment...
    Writing log messages to: /Users/arungupta/samples/jruby/runner/log/development.log.
    Press Ctrl+C to stop.

    and the last one on port 3002 ...

    ~/samples/jruby/runner >~/tools/jruby/bin/jruby -S glassfish -p 3002
    Starting GlassFish server at: in development environment...
    Writing log messages to: /Users/arungupta/samples/jruby/runner/log/development.log.
    Press Ctrl+C to stop.

    On Solaris and Linux, you can run GlassFish as a daemon as well.
  4. Nginx currently uses a simple round-robin algorithm. Other load balancers such as nginx-upstream-fair (fair proxy) and nginx-ey-balancer (maximum connections) are also available. The built-in algorithm will be used for this blog. Edit "/usr/local/nginx/conf/nginx.conf" to specify an upstream module which provides load balancing:
    1. Create a cluster definition by adding an upstream module (configuration details) right before the "server" module:

      upstream glassfish {
        server localhost:3000;
        server localhost:3001;
        server localhost:3002;
      }

      The cluster specifies a bunch of GlassFish Gem instances running at the backend. Each server can be weighted differently as explained here. The port numbers must exactly match those specified at startup. The modified "nginx.conf" looks like:

      The changes are highlighted on lines #35 through #39.
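The weighting mentioned above can be sketched like this (the weight value is made up for illustration):

      # Send roughly 3 of every 5 requests to the instance on port 3000
      upstream glassfish {
        server localhost:3000 weight=3;
        server localhost:3001;
        server localhost:3002;
      }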
    2. Configure load balancing by specifying this cluster using "proxy_pass" directive as shown below:

      proxy_pass http://glassfish;

      in the "location" module. The updated "nginx.conf" looks like:

      The change is highlighted on line #52.
  5. Restart nginx by using the following commands:

    sudo kill -15 `cat /usr/local/nginx/logs/nginx.pid`
    sudo nginx
Now "http://localhost" shows the default Rails page as shown below:

"http://localhost/runlogs" now serves the page from the deployed Rails application.

Now let's configure logging so that the upstream server IP address and port are printed in the log files. In "nginx.conf", uncomment the "log_format" directive and add the "$upstream_addr" variable as shown:

    log_format  main  '$remote_addr - [$upstream_addr] $remote_user [$time_local] $request '
                      '"$status" $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  logs/access.log  main;

Also change the log format to "main" by uncommenting the "access_log logs/access.log main;" line as shown above (the default format is "combined"). Accessing "http://localhost/runlogs" shows the following lines in "logs/access.log":

     - [] - [29/Apr/2009:15:27:57 -0700] GET /runlogs/ HTTP/1.1 "200" 3689 "-" "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_5_6; en-us) AppleWebKit/525.27.1 (KHTML, like Gecko) Version/3.2.1 Safari/525.27.1" "-"
     - [] - [29/Apr/2009:15:27:57 -0700] GET /favicon.ico HTTP/1.1 "200" 0 "http://localhost/runlogs/" "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_5_6; en-us) AppleWebKit/525.27.1 (KHTML, like Gecko) Version/3.2.1 Safari/525.27.1" "-"
     - [] - [29/Apr/2009:15:27:57 -0700] GET /stylesheets/scaffold.css?1240977992 HTTP/1.1 "200" 889 "http://localhost/runlogs/" "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_5_6; en-us) AppleWebKit/525.27.1 (KHTML, like Gecko) Version/3.2.1 Safari/525.27.1" "-"

The browser makes multiple requests (3 in this case) to load resources on a page and they are nicely load-balanced on the cluster. If the instance running on port 3002 is killed, then the access log shows entries like:

     - [] - [29/Apr/2009:15:28:53 -0700] GET /runlogs/ HTTP/1.1 "200" 3689 "-" "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_5_6; en-us) AppleWebKit/525.27.1 (KHTML, like Gecko) Version/3.2.1 Safari/525.27.1" "-"
     - [,] - [29/Apr/2009:15:28:53 -0700] GET /favicon.ico HTTP/1.1 "200" 0 "http://localhost/runlogs/" "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_5_6; en-us) AppleWebKit/525.27.1 (KHTML, like Gecko) Version/3.2.1 Safari/525.27.1" "-"
     - [] - [29/Apr/2009:15:28:53 -0700] GET /stylesheets/scaffold.css?1240977992 HTTP/1.1 "200" 889 "http://localhost/runlogs/" "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_5_6; en-us) AppleWebKit/525.27.1 (KHTML, like Gecko) Version/3.2.1 Safari/525.27.1" "-"

The second log line shows that the server running on port 3002 did not respond, so the request automatically fell back to port 3000. Nice!

But this is inefficient because a back-end trip is made even for serving a static file ("/favicon.ico" and "/stylesheets/scaffold.css?1240977992"). This can be easily solved by enabling Rails page caching as described here and here.
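A minimal sketch of that page caching, assuming the scaffolded "runlogs" controller from step 2 (Rails 2.x syntax; the action names are from the generated scaffold):

    # app/controllers/runlogs_controller.rb (sketch)
    class RunlogsController < ApplicationController
      # Rails writes the rendered HTML under public/, so repeat
      # requests can be served by the front-end without a backend trip
      caches_page :index, :show

      # ... scaffolded actions unchanged ...
    end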

More options about logging are described in NginxHttpLogModule and upstream module variables are defined in NginxHttpUpstreamModule.

Are you using nginx to front-end your GlassFish cluster?

Apache + JRuby + Rails + GlassFish = Easy Deployment! shows similar steps if you want to front-end your Rails application running using JRuby/GlassFish with Apache.

Hear all about it in Develop with Pleasure, Deploy with Fun: GlassFish and NetBeans for a Better Rails Experience session at Rails Conf next week.

Please leave suggestions on other TOTD (Tip Of The Day) that you'd like to see. A complete archive of all tips is available here.

Technorati: rubyonrails glassfish v3 gem jruby nginx loadbalancing clustering

Thursday Apr 23, 2009

GlassFish asadmin CLI-driven Cluster Setup

Here is a simple script that:
  • Installs GlassFish
  • Creates a new domain using cluster profile
  • Creates 2 instances in the cluster
  • Deploys a sample application to verify the cluster setup
Everything in just one simple script!

This script can be used on a virtual (Virtual Box, EC2 instance, etc.) or a physical image of an Operating System.

echo A | java -Xmx256m -jar ~/Downloads/glassfish-installer-v2.1-b60e-darwin.jar -console
chmod +x ./lib/ant/bin/ant
./lib/ant/bin/ant -f setup.xml
echo 'AS_ADMIN_ADMINPASSWORD=adminadmin' > password
echo 'AS_ADMIN_PASSWORD=adminadmin' >> password
echo 'AS_ADMIN_MASTERPASSWORD=changeit' >> password
./bin/asadmin create-domain --user admin --passwordfile ./password --savelogin=true --portbase 5000 --interactive=false --profile cluster cloud
./bin/asadmin start-domain cloud
./bin/asadmin create-node-agent --user admin --port 5048 --interactive=false --passwordfile ./password cloud-nodeagent
./bin/asadmin start-node-agent --interactive=false --passwordfile ./password cloud-nodeagent
./bin/asadmin create-cluster --port 5048 wines
./bin/asadmin create-instance --port 5048 --nodeagent cloud-nodeagent --systemproperties HTTP_LISTENER_PORT=58080 --cluster wines cabernet
./bin/asadmin create-instance --port 5048 --nodeagent cloud-nodeagent --systemproperties HTTP_LISTENER_PORT=58081 --cluster wines merlot
./bin/asadmin deploy --target wines --port 5048 --availabilityenabled=true samples/quickstart/clusterjsp/clusterjsp.ear
./bin/asadmin start-cluster --port 5048 --interactive=false --passwordfile ./password wines

After the script execution is complete, open up "http://localhost:58080/clusterjsp". The page shows it is served from the "cabernet" instance. Enter some session data by adding values in the text box towards the end of the page. Then stop the "cabernet" instance as explained in TOTD #67 after the string "OK, now show time!".
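If you prefer the command line over the admin console for that step, asadmin can also report and stop instances; a sketch using the ports from the script above (verify the exact options with "asadmin list-commands" on your install):

./bin/asadmin list-instances --port 5048
./bin/asadmin stop-instance --port 5048 cabernet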

Now load the page "http://localhost:58081/clusterjsp" and it shows that the page is served from "merlot" instance. And the exact same session data is displayed towards the bottom of the page.

It basically shows that the session data added in one instance is replicated to the "buddy instance" ("merlot" in this case) automatically.

This script is tested on OpenSolaris 2008.11, Windows Vista, and Mac OS X 10.5.x.

The cluster and instance creation can be easily done using the web-based admin console as described here. But this TOTD also shows the power of scriptability for common GlassFish administration commands - asadmin is your hidden gem and learn more about it in this recorded webinar!

TOTD #69 explains how to use Sun Web Server and Load-Balancer Plugin for the load balancer + clustering setup on Windows Vista. TOTD #67 explains the same steps for Apache + mod_jk on Mac OSX.

Technorati: glassfish cli asadmin clustering highavailability

Wednesday Feb 11, 2009

TOTD #69: GlassFish High Availability/Clustering using Sun Web Server + Load Balancer Plugin on Windows Vista

TOTD #67 shows how to configure GlassFish High Availability using Apache httpd + mod_jk on Mac OS X. Even though that's a standard and supported configuration, there are several advantages to replacing Apache httpd with Sun Web Server and mod_jk with the Load Balancer plugin that comes with GlassFish.

This Tip Of The Day (TOTD) shows how to configure Clustering and Load Balancing using GlassFish v2.1, Sun Web Server, Load Balancer plugin on Windows Vista. This blog is using JDK 6 U7, GlassFish v2.1 (cluster profile), Sun Web Server 7 U4, and Load Balancer plug-in with Sun GlassFish Enterprise Server 2.1 Enterprise Profile (with HADB link).

Let's get started!
  1. Install the required software
    1. Download JDK (if not already installed).
    2. Download and Install GlassFish v2.1. Make sure to configure using "ant -f setup-cluster.xml". This will ensure that the created domain is capable of creating clusters and can perform in-memory session replication for applications deployed on the cluster.
    3. Download and Install Sun Web Server. The process is very simple: unzip the downloaded bundle, click on "setup.exe", and take all the defaults.
    4. Download GlassFish Enterprise Profile for Load Balancer plugin bits. Start the install by clicking on the downloaded file and select the options as shown below:

    5. Copy the following "loadbalancer.xml" into the "https-<host>" (replace <host> with the host name of your machine) directory under the Sun Web Server installation:

      <?xml version="1.0" encoding="UTF-8"?>
      <!DOCTYPE loadbalancer PUBLIC "-//Sun Microsystems Inc.//DTD Sun Java System Application Server 9.1//EN"

      <loadbalancer>
       <cluster name="cluster1" policy="round-robin" policy-module="">
        <instance name="instance1" enabled="true" disable-timeout-in-minutes="60" listeners="http://localhost:38080" weight="100"/>
        <instance name="instance2" enabled="true" disable-timeout-in-minutes="60" listeners="http://localhost:38081" weight="100"/>
        <web-module context-root="/clusterjsp" disable-timeout-in-minutes="30" enabled="true" error-url=""/>
        <health-checker interval-in-seconds="7" timeout-in-seconds="5" url="/"/>
       </cluster>
       <property name="response-timeout-in-seconds" value="120"/>
       <property name="reload-poll-interval-in-seconds" value="7"/>
       <property name="https-routing" value="false"/>
       <property name="require-monitor-data" value="false"/>
       <property name="active-healthcheck-enabled" value="false"/>
       <property name="number-healthcheck-retries" value="3"/>
       <property name="rewrite-location" value="true"/>
      </loadbalancer>

      The parameters to be changed are explained below:
      1. Sun Web Server installation directory
      2. HTTP port of instances created in the cluster. The ports specified are the default ones and can be found by clicking on the instance as shown below:

      3. Context root of the application that will be deployed in the cluster. The Domain Administration Server (DAS) can be configured to populate this file whenever any application is deployed to the cluster.
  2. Create the cluster as explained in TOTD #67. The admin console shows the following screenshot after the cluster is created and all instances are created/started:


    and the following for 2 instances:

  3. Deploy "clusterjsp" as explained in TOTD #67. The admin console shows the following screenshot after "clusterjsp" is deployed:

  4. Start Sun Web Server using "startserv.bat" in "https-<host>" directory.
This concludes the installation and configuration steps. Now, show time!

Accessing "http://localhost/clusterjsp" shows:

The Sun Web Server is running on port 80 and uses "loadbalancer.xml" to serve the request from the instances configured in the <loadbalancer> fragment. This particular page is served by "instance1" as indicated in the image. Let's add session data with property name "aaa" and value "111". The value is shown as:

The instance serving the data, "instance1" in this case, and the session data are highlighted.

Now let's stop "instance1" using the admin console and it looks like:

Click on "RELOAD PAGE" and it looks like:

Exactly the same session data is served, this time by "instance2".

The sequence above proves that the session data created by the user is preserved even if the instance serving the data goes down. This is possible because of GlassFish High Availability: the session data is served by the "replica partner" where it's already copied using in-memory session replication.

The following articles are also useful:
Please leave suggestions on other TOTD (Tip Of The Day) that you'd like to see. A complete archive of all tips is available here.

Technorati: totd glassfish highavailability clustering loadbalancing lbplugin sunwebserver windows vista

Arun Gupta is a technology enthusiast, a passionate runner, author, and a community guy who works for Oracle Corp.
