Sunday Feb 27, 2011

GlassFish 3.1 Now Released: Java EE 6 with Clustering and High Availability + Commercial Support

GlassFish 3.1 is now released - download now!


This is the first major release of GlassFish under Oracle and the fourth major release in the overall train:

  • GlassFish v1 was about Java EE 5 compliance (the first one to be so!)
  • GlassFish v2 added Clustering and High Availability
  • GlassFish 3 was about Java EE 6 compliance (yet again, the first one!)

The two main themes in GlassFish 3.1 are:

  • Clustering & Centralized Administration - ability to create multiple clusters per domain and multiple instances per cluster from CLI and web-based Admin console
  • High Availability - in-memory state replication

This is the only application server that provides Java EE 6 Web Profile and Full Platform compliance in a clustered environment. And yes, you can buy commercial support from Oracle!
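
To give a flavor of the new clustering support from the CLI, here is a minimal asadmin sketch (the cluster and instance names are made up; "create-local-instance" assumes the instances live on the same machine as the DAS):

asadmin start-domain
asadmin create-cluster c1
asadmin create-local-instance --cluster c1 i1
asadmin create-local-instance --cluster c1 i2
asadmin start-cluster c1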

Now you may think GlassFish 3.1 = GlassFish 3 + GlassFish v2. But in reality there are many more improvements made exclusively in GlassFish 3.1.

GlassFish 3.1 >= GlassFish 2.x + GlassFish 3.0 provides a complete list of items added/improved above & beyond the previous releases.

The commercial version also contains the closed-source value-adds in GlassFish Server Control (née GlassFish Enterprise Manager):

  1. DAS Backup & Recovery - A disaster recovery solution that lets you back up an existing domain into an archive and recover it in case of a disaster.
  2. Performance Tuner (online help only) - Analyzes the underlying infrastructure and tunes the GlassFish runtime for optimal throughput & scalability.
  3. Monitoring scripting client - Lets you introspect the runtime using JavaScript and create your own dashboard.
  4. Coherence ActiveCache - New feature that enables integration with Oracle Coherence. It replaces in-memory session replication with Oracle Coherence, moving the replicated state to a separate tier so that the application tier can be scaled out independently of the session-storage tier. It needs to be licensed separately from Oracle GlassFish Server and will be available later this year.
  5. Integration with Oracle Access Manager - Delegates authorization & authentication to OAM.
  6. Load Balancer & Plugin Installer - Reverse proxy engine that runs on Oracle Web Server and provides failover.

The value-adds #1 through #5 are pre-bundled with the commercial version and #6 needs to be downloaded and configured explicitly.

The nightly, promoted, and milestone builds have been available for many months now, and this is the final build.

GlassFish For Business talks about the umpteen update and patch releases done in between the major releases and also explains the difference between the open source and commercial versions. Based upon the standard Oracle middleware support policy, premier support for GlassFish 3.1 will end in March 2016 and extended support in March 2019.

OK, so you have heard all about it and are ready to download and try it out. Here is a little graphic to assist you in deciding which bundle to download:

Download the Java EE 6 SDK, Oracle GlassFish Server, or GlassFish Server Open Source Edition based upon your needs. And if you don't know your needs (that's possible!), then start with the Java EE 6 SDK as it is a comprehensive bundle including the JDK and/or NetBeans (the complete IDE for your Java EE 6 and GlassFish 3.1 needs), docs, tutorials, and samples.

What is holding you back from using GlassFish 3.1 as your primary Java EE 6 deployment platform? :-)

Wednesday Nov 17, 2010

Screencast #34: GlassFish 3.1 Clustering, High Availability and Centralized Administration

Two main themes of GlassFish 3.1 are: High Availability and Centralized Administration. This screencast demonstrates how to create a 2-instance cluster on a single machine using web-based Administration Console, deploy the canonical clusterjsp application, and show session fail-over capabilities in GlassFish 3.1.
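
For reference, the deployment step in the screencast boils down to a single asadmin command along these lines (a sketch; "c1" is a hypothetical cluster name and the archive name may differ):

asadmin deploy --target c1 --availabilityenabled=true clusterjsp.ear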

From download to testing an application with high availability ... all in under 10 minutes ... enjoy!

A future screencast will show how to create multiple nodes on different machines, with one or more instances on each node, all of them front-ended by a load balancer.

Technorati: screencast glassfish clustering highavailability

Tuesday Aug 11, 2009

TOTD #92: Session Failover for Rails applications running on GlassFish

GlassFish High Availability lets you set up a cluster of GlassFish instances and achieve a highly scalable architecture using in-memory session state replication. This cluster can be very easily created and tested using the "clusterjsp" sample bundled with GlassFish. Here are some clustering related entries published on this blog so far:
  • TOTD #84 shows how to setup Apache + mod_proxy balancer for Ruby-on-Rails load balancing
  • TOTD #81 shows how to use nginx to front end a cluster of GlassFish Gems
  • TOTD #69 explains how a GlassFish cluster can be front-ended using Sun Web Server and Load Balancer Plugin
  • TOTD #67 shows the same thing using Apache httpd + mod_jk
#67 & #69 use a web application "clusterjsp" (bundled with GlassFish) that uses JSP to demonstrate in-memory session state replication. This blog creates a similar application, "clusterrails" - this time using Ruby-on-Rails - and deploys it on GlassFish v2.1.1. The idea is to demonstrate how Rails applications can leverage the in-memory session replication feature of GlassFish.

Rails applications can be easily deployed as a WAR file on GlassFish v2 as explained in TOTD #73. This blog will guide you through the steps of creating the Controller and View to mimic "clusterjsp" and configuring the Rails application for session replication.
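
(Warbler is assumed to be installed in the JRuby installation; if not, something like the following should take care of it:)

~/samples/jruby/session >~/tools/jruby/bin/jruby -S gem install warbler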
  1. Create a template Rails application and create/migrate the database. Add a Controller/View as:

    ~/samples/jruby/session >~/tools/jruby/bin/jruby script/generate controller home index
    JRuby limited openssl loaded. gem install jruby-openssl for full support.
          exists  app/controllers/
          exists  app/helpers/
          create  app/views/home
          exists  test/functional/
          create  test/unit/helpers/
          create  app/controllers/home_controller.rb
          create  test/functional/home_controller_test.rb
          create  app/helpers/home_helper.rb
          create  test/unit/helpers/home_helper_test.rb
          create  app/views/home/index.html.erb

  2. Edit the controller in "app/controllers/home_controller.rb" and change the code to (explained below):

    class HomeController < ApplicationController
      include Java

      def index
        @server_served = servlet_request.get_server_name
        @port = servlet_request.get_server_port
        @instance = java.lang.System.get_property "com.sun.aas.instanceName"
        # host name/address of the machine executing the request (assumption:
        # reconstructed via java.net.InetAddress; the original assignments were lost)
        @server_executed = java.net.InetAddress.get_local_host.get_host_name
        @ip = java.net.InetAddress.get_local_host.get_host_address
        @session_id = servlet_request.session.get_id
        @session_created = servlet_request.session.get_creation_time
        @session_last_accessed = servlet_request.session.get_last_accessed_time
        @session_inactive = servlet_request.session.get_max_inactive_interval

        # store the name/value pair (if any) passed with this request in the session
        if (params[:name] != nil)
          servlet_request.session[params[:name]] = params[:value]
        end

        # collect all session attributes, skipping internal ones (names starting with "__")
        @session_values = ""
        value_names = servlet_request.session.get_attribute_names
        unless (value_names.has_more_elements)
          @session_values = "<br>No parameter entered for this request"
        else
          @session_values << "<UL>"
          while (value_names.has_more_elements)
            param = value_names.next_element
            unless (param.starts_with?("__"))
              value = servlet_request.session.get_attribute(param)
              @session_values << "<LI>" + param + " = " + value + "</LI>"
            end
          end
          @session_values << "</UL>"
        end
      end

      def adddata
        servlet_request.session.set_attribute(params[:name], params[:value])
        render :action => "index"
      end

      def cleardata
        servlet_request.session.invalidate  # assumption: invalidate the session to clear all stored data
        render :action => "index"
      end
    end

    The "index" action initializes some instance variables using the "servlet_request" variable mapped from "javax.servlet.http.ServletRequest" class. The "servlet_request" provides access to different properties of the request received such as server name/port, host name/address and others. It also uses an application server specific property "com.sun.aas.instanceName" to fetch the name of particular instance serving the request. In this blog we'll create a cluster with 2 instances. The action then prints the servlet session attributes name/value pairs entered so far.

    The "adddata" action takes the name/value pair entered on the page and stores them in the servlet request. The "cleardata" action clears any data that is storied in the session.
  3. Edit the view in "app/views/home/index.html.erb" and change to (explained below):

    <p>Find me in app/views/home/index.html.erb</p>
    <B>HttpSession Information:</B>
    <LI>Served From Server:   <b><%= @server_served %></b></LI>
    <LI>Server Port Number:   <b><%= @port %></b></LI>
    <LI>Executed From Server: <b><%= @server_executed %></b></LI>
    <LI>Served From Server instance: <b><%= @instance %></b></LI>
    <LI>Executed Server IP Address: <b><%= @ip %></b></LI>
    <LI>Session ID:    <b><%= @session_id %></b></LI>
    <LI>Session Created:  <%= @session_created %></LI>
    <LI>Last Accessed:    <%= @session_last_accessed %></LI>
    <LI>Session will go inactive in  <b><%= @session_inactive %> seconds</b></LI>
    <% form_tag "/session/home/index" do %>
      <label for="name">Name of Session Attribute:</label>
      <%= text_field_tag :name, params[:name] %><br>

      <label for="value">Value of Session Attribute:</label>
      <%= text_field_tag :value, params[:value] %><br>

        <%= submit_tag "Add Session Data" %>
    <% end  %>
    <% form_tag "/session/home/cleardata" do %>
        <%= submit_tag "Clear Session Data" %>
    <% end %>
    <% form_tag "/session/home/index" do %>
        <%= submit_tag "Reload Page" %>
    <% end %>
    <B>Data retrieved from the HttpSession: </B>
    <%= @session_values %>

    The view dumps the property values retrieved in the action. Then it consists of forms to enter session name/value pairs, clear the session, and reload the page. The application is now ready; let's configure it for WAR packaging.
  4. Generate a template "web.xml" and copy it to "config" directory as:

    ~/samples/jruby/session >~/tools/jruby/bin/jruby -S warble war:webxml
    mkdir -p tmp/war/WEB-INF
    ~/samples/jruby/session >cp tmp/war/WEB-INF/web.xml config/
    1. Edit "tmp/war/WEB-INF/web.xml" and change the first few lines from:

      <!DOCTYPE web-app PUBLIC
        "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN"


      <web-app version="2.4" xmlns="" xmlns:xsi="" xsi:schemaLocation="">

      This is required because the element to be added next is introduced in the Servlet 2.4 specification.
    2. Add the following element as the first element, right after the opening "<web-app>" tag:

      <distributable/>

      This element marks the web application as distributable across multiple JVMs in a cluster.
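    Putting the two edits together, the top of the edited "web.xml" should look roughly like this (the rest of the Warbler-generated file stays as-is):

      <?xml version="1.0" encoding="UTF-8"?>
      <web-app version="2.4" xmlns="http://java.sun.com/xml/ns/j2ee"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd">
        <distributable/>
        <!-- ... servlet/filter entries generated by Warbler ... -->
      </web-app>
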
  5. Generate and configure "config/warble.rb" as described in TOTD #87. This configuration is an important step, otherwise you'll encounter JRUBY-3789. Create a WAR file as:

    ~/samples/jruby/session >~/tools/jruby/bin/jruby -S warble
    mkdir -p tmp/war/WEB-INF/gems/specifications
    cp /Users/arungupta/tools/jruby-1.3.0/lib/ruby/gems/1.8/specifications/rails-2.3.2.gemspec tmp/war/WEB-INF/gems/specifications/rails-2.3.2.gemspec

    . . .

    mkdir -p tmp/war/WEB-INF
    cp config/web.xml tmp/war/WEB-INF
    jar cf session.war  -C tmp/war .

  6. Download the latest GlassFish v2.1.1, install/configure GlassFish, and create/configure/start a cluster using the script described here. Make sure to change the download location and filename in the script. This script creates a cluster "wines" with two instances - "cabernet" running on port 58080 and "merlot" running on port 58081.
  7. Deploy the application using the command:

    ~/samples/jruby/session >asadmin deploy --target wines --port 5048 --availabilityenabled=true session.war
Now, the screenshots from the two instances are shown and explained below. The two (or more) instances are front-ended by a load balancer, so none of this is typically visible to the user, but it helps to understand what is going on.
Here is a snapshot of this application deployed on "cabernet":

The instance name and the session id are highlighted in the red box. It also shows the time when the session was created in the "Session Created" field.

And now the same application from "merlot":

Notice that the session id exactly matches the one from the "cabernet" instance. Similarly, "Session Created" matches but "Last Accessed" does not, because the same session is accessed from a different instance.

Let's enter some session data in the "cabernet" instance and click on the "Add Session Data" button as shown below:

The session attribute is "aaa" and the value is "111". Also, the "Last Accessed" time is updated. In the "merlot" page, click on the "Reload Page" button and the same session name/value pairs are retrieved as shown below:

Notice, the "Last Accessed" time is after the time showed in "cabernet" instance. The session information added in "cabernet" is automatically replicated to the "merlot" instance.

Now, let's add a new session name/value pair in the "merlot" instance as shown below:

The "Last Accessed" is updated and the session name/value pair ("bbb"/"222") is shown in the page. Click on "Reload page" in "cabernet" instance as shown below:

This time the session information added to "merlot" is replicated to "cabernet".

So any session information added in "cabernet" is replicated to "merlot" and vice versa.
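
The same round-trip can also be verified from the command line. Here is a quick sketch with curl (hypothetical, assuming the WAR is deployed at the "/session" context root as above) that reuses one session cookie against both instance ports:

# store a session attribute via "cabernet" and save the session cookie
curl -c cookies.txt "http://localhost:58080/session/home/index?name=aaa&value=111"
# replay the same cookie against "merlot" - the output should list aaa = 111
curl -b cookies.txt "http://localhost:58081/session/home/index"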

Now, let's stop the "cabernet" instance as shown below:

and click on "Reload Page" in "merlot" instance to see the following:

Even though the instance from which the session data was added is stopped, the replicating instance continues to serve both session values.

As explained earlier, these two instances are front-ended by a load balancer, typically running at port 80. So the user makes a request to port 80 and the correct session values are served even if one of the instances goes down, thereby providing in-memory session replication.

Please leave suggestions on other TOTDs that you'd like to see. A complete archive of all the tips is available here.

Technorati: totd glassfish clustering rubyonrails jruby highavailability loadbalancer

Thursday Apr 23, 2009

GlassFish asadmin CLI-driven Cluster Setup

Here is a simple script that:
  • Installs GlassFish
  • Creates a new domain using the cluster profile
  • Creates 2 instances in the cluster
  • Deploys a sample application to verify the cluster setup
Everything in just one simple script!

This script can be used on a virtual (Virtual Box, EC2 instance, etc.) or a physical image of an Operating System.

echo A | java -Xmx256m -jar ~/Downloads/glassfish-installer-v2.1-b60e-darwin.jar -console
chmod +x ./lib/ant/bin/ant
./lib/ant/bin/ant -f setup.xml
echo 'AS_ADMIN_ADMINPASSWORD=adminadmin' > password
echo 'AS_ADMIN_PASSWORD=adminadmin' >> password
echo 'AS_ADMIN_MASTERPASSWORD=changeit' >> password
./bin/asadmin create-domain --user admin --passwordfile ./password --savelogin=true --portbase 5000 --interactive=false --profile cluster cloud
./bin/asadmin start-domain cloud
./bin/asadmin create-node-agent --user admin --port 5048 --interactive=false --passwordfile ./password cloud-nodeagent
./bin/asadmin start-node-agent --interactive=false --passwordfile ./password cloud-nodeagent
./bin/asadmin create-cluster --port 5048 wines
./bin/asadmin create-instance --port 5048 --nodeagent cloud-nodeagent --systemproperties HTTP_LISTENER_PORT=58080 --cluster wines cabernet
./bin/asadmin create-instance --port 5048 --nodeagent cloud-nodeagent --systemproperties HTTP_LISTENER_PORT=58081 --cluster wines merlot
./bin/asadmin deploy --target wines --port 5048 --availabilityenabled=true samples/quickstart/clusterjsp/clusterjsp.ear
./bin/asadmin start-cluster --port 5048 --interactive=false --passwordfile ./password wines

After the script execution is complete, open up "http://localhost:58080/clusterjsp". The page shows that it is served from the "cabernet" instance. Enter some session data by adding values in the text boxes placed towards the end of the page. Then stop the "cabernet" instance as explained in TOTD #67 after the string "OK, now show time!".
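
Stopping the instance can also be done from the CLI; a sketch using the same admin port and password file as the script above:

./bin/asadmin stop-instance --user admin --port 5048 --interactive=false --passwordfile ./password cabernet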

Now load the page "http://localhost:58081/clusterjsp" and it shows that the page is served from the "merlot" instance, with the exact same session data displayed towards the bottom of the page.

It basically shows that the session data added in one instance is automatically replicated to the "buddy instance" ("merlot" in this case).

This script is tested on OpenSolaris 2008.11, Windows Vista, and Mac OSX 10.5.x.

The cluster and instance creation can be easily done using the web-based admin console as described here. But this TOTD also shows the power of scriptability for common GlassFish administration commands - asadmin is your hidden gem; learn more about it in this recorded webinar!

TOTD #69 explains how to use Sun Web Server and Load-Balancer Plugin for the load balancer + clustering setup on Windows Vista. TOTD #67 explains the same steps for Apache + mod_jk on Mac OSX.

Technorati: glassfish cli asadmin clustering highavailability

Wednesday Feb 11, 2009

TOTD #69: GlassFish High Availability/Clustering using Sun Web Server + Load Balancer Plugin on Windows Vista

TOTD #67 shows how to configure GlassFish High Availability using Apache httpd + mod_jk on Mac OS X. Even though that's a standard and supported configuration, there are several advantages to replacing Apache httpd with Sun Web Server and mod_jk with the Load Balancer plugin that comes with GlassFish.

This Tip Of The Day (TOTD) shows how to configure clustering and load balancing using GlassFish v2.1, Sun Web Server, and the Load Balancer plugin on Windows Vista. This blog is using JDK 6 U7, GlassFish v2.1 (cluster profile), Sun Web Server 7 U4, and the Load Balancer plug-in from Sun GlassFish Enterprise Server 2.1 Enterprise Profile (with HADB link).

Let's get started!
  1. Install the required software
    1. Download JDK (if not already installed).
    2. Download and Install GlassFish v2.1. Make sure to configure using "ant -f setup-cluster.xml". This will ensure that the created domain is capable of creating clusters and can perform in-memory session replication for applications deployed on the cluster.
    3. Download and Install Sun Web Server. The process is very simple: unzip the downloaded bundle, click on "setup.exe", and take all the defaults.
    4. Download GlassFish Enterprise Profile for the Load Balancer plugin bits. Start the install by clicking on the downloaded file and selecting the options as shown below:

    5. Copy the following "loadbalancer.xml" into the "https-<host>" directory (replace <host> with the host name of your machine) of the Sun Web Server installation directory:

      <?xml version="1.0" encoding="UTF-8"?>
      <!DOCTYPE loadbalancer PUBLIC "-//Sun Microsystems Inc.//DTD Sun Java
      System Application Server 9.1//EN"
      "http://www.sun.com/software/appserver/dtds/sun-loadbalancer_1_2.dtd">
      <loadbalancer>
       <cluster name="cluster1" policy="round-robin" policy-module="">
        <instance name="instance1" enabled="true"
      disable-timeout-in-minutes="60" listeners="http://localhost:38080" weight="100"/>
        <instance name="instance2" enabled="true"
      disable-timeout-in-minutes="60" listeners="http://localhost:38081" weight="100"/>
        <web-module context-root="/clusterjsp"
      disable-timeout-in-minutes="30" enabled="true" error-url=""/>
        <health-checker interval-in-seconds="7" timeout-in-seconds="5" url="/"/>
       </cluster>
       <property name="response-timeout-in-seconds" value="120"/>
       <property name="reload-poll-interval-in-seconds" value="7"/>
       <property name="https-routing" value="false"/>
       <property name="require-monitor-data" value="false"/>
       <property name="active-healthcheck-enabled" value="false"/>
       <property name="number-healthcheck-retries" value="3"/>
       <property name="rewrite-location" value="true"/>
      </loadbalancer>

      The parameters to be changed are listed below:
      1. Sun Web Server installation directory
      2. HTTP port of instances created in the cluster. The ports specified are the default ones and can be found by clicking on the instance as shown below:

      3. Context root of the application that will be deployed in the cluster. The Domain Administration Server (DAS) can be configured to populate this file whenever any application is deployed to the cluster.
  2. Create the cluster as explained in TOTD #67. The admin console shows the following screenshot after the cluster is created and all instances are created/started:


    and the following for 2 instances:

  3. Deploy "clusterjsp" as explained in TOTD #67. The admin console shows the following screenshot after "clusterjsp" is deployed:

  4. Start Sun Web Server using "startserv.bat" in "https-<host>" directory.
This concludes the installation and configuration steps - now it's show time!

Accessing "http://localhost/clusterjsp" shows:

The Sun Web Server is running on port 80 and uses "loadbalancer.xml" to serve requests from the instances configured in the <loadbalancer> fragment. This particular page is served by "instance1" as indicated in the image. Let's add session data with property name "aaa" and value "111". The value is shown as:

The instance serving the data, "instance1" in this case, and the session data are highlighted.

Now let's stop "instance1" using the admin console and it looks like:

Click on "RELOAD PAGE" and it looks like:

Exactly the same session data is served, this time by "instance2".

The sequence above proves that the session data created by the user is preserved even if the instance serving the data goes down. This is possible because of GlassFish High Availability. The session data is served by the "replica partner" where it's already copied using in-memory session replication.

Please leave suggestions on other TOTDs (Tip Of The Day) that you'd like to see. A complete archive of all tips is available here.

Technorati: totd glassfish highavailability clustering loadbalancing lbplugin sunwebserver windows vista

Thursday Jan 29, 2009

TOTD #67: How to front-end a GlassFish Cluster with Apache + mod_jk on Mac OSX Leopard ?

GlassFish provides support for High Availability by creating a cluster of server instances with session state replication. This enhances the scalability and availability of your application and is a critical piece of the decision-making criteria when selecting an Application Server. Clustering in GlassFish Version 2 provides a comprehensive introduction to clustering, high availability, and load balancing in GlassFish.

GlassFish provides out-of-the-box support for load-balancing HTTP(S), JMS, and RMI/IIOP traffic, front-ended by Sun Java System Web Server, Apache Web Server, or Microsoft IIS (more details here) using the Load Balancer plug-in. This plug-in however is not available for Mac OS X, and a popular technique on that platform for front-ending is Apache httpd + mod_jk. This is exactly what this TOTD (Tip Of The Day) is going to describe.

This TOTD is going to explain how to front-end a 3-instance GlassFish cluster with Apache httpd and mod_jk on Mac OS X.

This blog draws on information from several earlier blogs. And thanks to Vivek and Shreedhar for helping me understand the guts of GlassFish High Availability.

Without further ado, let's get started. The steps are slightly involved, so fasten your seatbelt!
  1. First, let's create a 3-instance cluster following the screencast at GlassFish Clustering in under 10 minutes. Use "cluster1" as the cluster name and "instance1", "instance2", "instance3" as the instance names. The admin console will look like:

    Deploy "clusterjsp" and make sure it works using port hopping as explained in the screencast. Click on each instance to identify their associated HTTP port.
  2. Define "jvmRoute" and "enableJK" properties on the newly created cluster as:

    ~/samples/v2/clustering/glassfish/bin >./asadmin create-jvm-options --target cluster1 "-DjvmRoute=\\${AJP_INSTANCE_NAME}"
    Command create-jvm-options executed successfully.
    ~/samples/v2/clustering/glassfish/bin >./asadmin create-jvm-options --target cluster1 "-Dcom.sun.enterprise.web.connector.enableJK=\\${AJP_PORT}"
    Command create-jvm-options executed successfully.

    These properties are required to enable "stickiness" for "mod_jk". More details about how these properties are used internally are explained here.
  3. Configure the above system properties for each instance in the cluster as shown:

    ~/samples/v2/clustering/glassfish/bin >./asadmin create-system-properties --target instance1 AJP_INSTANCE_NAME=instance1
    Command create-system-properties executed successfully.
    ~/samples/v2/clustering/glassfish/bin >./asadmin create-system-properties --target instance1 AJP_PORT=9090
    Command create-system-properties executed successfully.
    ~/samples/v2/clustering/glassfish/bin >./asadmin create-system-properties --target instance2 AJP_INSTANCE_NAME=instance2
    Command create-system-properties executed successfully.
    ~/samples/v2/clustering/glassfish/bin >./asadmin create-system-properties --target instance2 AJP_PORT=9091
    Command create-system-properties executed successfully.
    ~/samples/v2/clustering/glassfish/bin >./asadmin create-system-properties --target instance3 AJP_INSTANCE_NAME=instance3
    Command create-system-properties executed successfully.
    ~/samples/v2/clustering/glassfish/bin >./asadmin create-system-properties --target instance3 AJP_PORT=9092
    Command create-system-properties executed successfully.

    Note the value of the "AJP_PORT" property for each instance; this will be used for configuring "mod_jk" later. You may have to restart the cluster in order for these properties to be synchronized to each instance. This can be easily done using the admin console as explained in the screencast above.
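
    A CLI equivalent of that restart would be along these lines (assuming the same setup as above):

    ~/samples/v2/clustering/glassfish/bin >./asadmin stop-cluster cluster1
    ~/samples/v2/clustering/glassfish/bin >./asadmin start-cluster cluster1
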
  4. Install httpd: Mac OS X Leopard 10.5.6 comes bundled with Apache httpd 2.2, so that's cool! Otherwise it can be downloaded from the Apache httpd site. However, the pre-installed version has some intricacies with directory names that are explained below.
  5. Let's install and configure "mod_jk" in "httpd".
    1. The mod_jk binaries for Mac OSX are supposedly available online, but installing the available module in httpd gives the following error:

      httpd: Syntax error on line 116 of /private/etc/apache2/httpd.conf: Cannot load /usr/libexec/apache2/mod_jk.so into server: dlopen(/usr/libexec/apache2/mod_jk.so, 10): no suitable image found.  Did find:
              /usr/libexec/apache2/mod_jk.so: mach-o, but wrong architecture
    2. The alternative is to build "mod_jk". Fortunately it turned out to be a straightforward process because of this blog.
      1. Download the latest Connectors source code (version 1.2.27). The file "BUILD.txt" (included in the source bundle) provides clear instructions to build, which are described below as well.
      2. Configure the build environment as shown:

        ~/workspaces/tomcat-connectors-1.2.27-src/native >./configure --with-apxs=/usr/sbin/apxs

        It shows the output as:

        . . .
        checking for target platform... unix
        no apache given
        no netscape given
        configure: creating ./config.status
        config.status: creating Makefile
        config.status: creating apache-1.3/Makefile
        config.status: creating apache-1.3/Makefile.apxs
        config.status: creating apache-2.0/Makefile
        config.status: creating apache-2.0/Makefile.apxs
        config.status: creating common/Makefile
        config.status: creating common/
        config.status: creating common/jk_types.h
        config.status: creating jni/Makefile
        config.status: creating common/portable.h
        config.status: executing depfiles commands
      3. Edit "native/apache-2.0/Makefile.apxs" and add "-arch x86_64" as described here. Please note that this string needs to be specified twice.
      4. Invoke "make" and "" is generated in "native/apache-2.0" directory.
    3. Copy the generated "mod_jk.so" as:

      ~/workspaces/tomcat-connectors-1.2.27-src/native/apache-2.0 >sudo cp mod_jk.so /usr/libexec/apache2/
    4. Load the "mod_jk" module in httpd by editing "/etc/apache2/httpd.conf". Please note another similar file exists in "/etc/httpd/httpd.conf" but ignore that one. Add the following as the last "LoadModule" line:

      LoadModule jk_module     libexec/apache2/mod_jk.so
    5. Configure "mod_jk" by adding the following lines immediately below the previously "LoadModule" line:

      JkWorkersFile /etc/apache2/workers.properties
      # Where to put jk logs
      JkLogFile /var/log/httpd/mod_jk.log
      # Set the jk log level [debug/error/info]
      JkLogLevel debug
      # Select the log format
      JkLogStampFormat "[%a %b %d %H:%M:%S %Y] "
      # JkRequestLogFormat set the request format
      JkRequestLogFormat "%w %V %T"
      # Send all jsp requests to GlassFish
      JkMount /*.jsp loadbalancer

      The key lines in this configuration are the first and the last one. The first line tells "mod_jk" the location of the "workers.properties" file (explained later). The last line instructs httpd to redirect only JSP requests to GlassFish. This allows static content such as images, text files, and media to be served by "httpd" itself.

      Also create the log directory specified in the configuration as:

      sudo mkdir /var/log/httpd
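
      If you would rather route the entire application (static content included) through GlassFish instead of just JSPs, a broader mount also works - shown here as a hypothetical alternative, not used in this setup:

      JkMount /clusterjsp/* loadbalancer
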
    6. Create a new file "/etc/apache2/workers.properties". Again, this is not in the "/etc/httpd" directory. Use the following contents:

      # Define 1 real worker using ajp13
      worker.list=loadbalancer
      # Set properties for instance1
      worker.instance1.type=ajp13
      worker.instance1.host=localhost
      worker.instance1.port=9090
      # Set properties for instance2
      worker.instance2.type=ajp13
      worker.instance2.host=localhost
      worker.instance2.port=9091
      # Set properties for instance3
      worker.instance3.type=ajp13
      worker.instance3.host=localhost
      worker.instance3.port=9092
      # Load-balancer worker that fronts the three instances
      worker.loadbalancer.type=lb
      worker.loadbalancer.balance_workers=instance1,instance2,instance3


      Read more about the "workers.properties" format. Essentially this file is used to load-balance a 3-instance cluster and specify configuration values for each instance. Note that the value of "worker.instanceX.port" for each instance is exactly the same as the "AJP_PORT" value specified during instance configuration earlier.
  6. Copy "tomcat-ajp.jar" from the "lib" directory of the latest Tomcat 5.5.xcommons-logging.jar (version 1.1.1), and commons-modeler.jar (version 2.0.1) to GLASSFISH_HOME/lib. This is done as:

    ~/samples/v2/clustering/glassfish/lib >cp ~/tools/apache-tomcat-5.5.27/server/lib/tomcat-ajp.jar .
    ~/samples/v2/clustering/glassfish/lib >cp ~/Downloads/commons-logging-1.1.1/commons-logging-1.1.1.jar .
    ~/samples/v2/clustering/glassfish/lib >cp ~/tools/commons-modeler-2.0.1/commons-modeler-2.0.1.jar .

    You may have to restart the cluster in order for these JARs to be loaded by each instance.
  7. An "httpd" instance is already running on port# 80 in my particular instance of Mac OS X. Instead of mangling with that, I decided to change the listening port for the new instance that will be spawn for out front-end. This can be easily done by editing "/etc/apache2/httpd.conf" and looking for lines similar to:

    Listen 80

    And change "Listen 80" to "Listen 81".
That completes the configuration, phew!
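
Before starting "httpd", it is worth letting Apache validate the configuration; "apachectl" ships with Mac OS X, so a syntax check is a one-liner:

sudo apachectl configtest

A "Syntax OK" response means httpd could load the module and parse the configuration.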

Lets start "httpd" as:

sudo httpd

The "httpd" logs are available in "/private/var/log/apache2". The following message indicates a successful start of the web server:

. . .
[Thu Jan 29 11:14:16 2009] [warn] Init: Session Cache is not configured [hint: SSLSessionCache]
[Thu Jan 29 11:14:16 2009] [warn] No JkShmFile defined in httpd.conf. Using default /usr/logs/jk-run
[Thu Jan 29 11:14:16 2009] [warn] No JkShmFile defined in httpd.conf. Using default /usr/logs/jk-run
[Thu Jan 29 11:14:16 2009] [notice] Digest: generating secret for digest authentication ...
[Thu Jan 29 11:14:16 2009] [notice] Digest: done
[Thu Jan 29 11:14:16 2009] [warn] pid file /private/var/run/httpd.pid overwritten -- Unclean shutdown of previous Apache run?
[Thu Jan 29 11:14:16 2009] [notice] Apache/2.2.9 (Unix) mod_ssl/2.2.9 OpenSSL/0.9.7l DAV/2 mod_jk/1.2.27 configured -- resuming normal operations

OK, now show time!

If everything has been configured properly as described above, then "http://localhost:81/clusterjsp/HaJsp.jsp" looks like:

Enter the session attribute name as "aaa" and the value of the attribute as "111". After you click on the "ADD SESSION DATA" button, the updated page looks like:

The highlighted part shows that the request is served from "instance1" along with the recently added session data. Let's stop "instance1" and see if the promise of high availability is fulfilled :)

Click on "Instances" tab, select "instance1" and click "Stop". The admin console looks like:

Notice "instance1" is shown as stopped. Clicking on "Reload Page" on "http://localhost:81/clusterjsp/HaJsp.jsp" shows:


Even though "instance1" is not runing the session data is still available. And that is possible because of the seamless session failover from primary ("instance1") to the replica partner ("instance2"). The highlighted part indicates that the request is now indeed served by "instance2".

Please leave suggestions on other TOTDs (Tip Of The Day) that you'd like to see. A complete archive of all tips is available here.

Technorati: totd glassfish highavailability clustering loadbalancer mod_jk apache httpd mac leopard
