Tuesday Aug 11, 2009

TOTD #92: Session Failover for Rails applications running on GlassFish


GlassFish High Availability allows you to set up a cluster of GlassFish instances and achieve a highly scalable architecture using in-memory session state replication. Such a cluster can be created and tested very easily using the "clusterjsp" sample bundled with GlassFish. Here are some clustering-related entries published on this blog so far:
  • TOTD #84 shows how to setup Apache + mod_proxy balancer for Ruby-on-Rails load balancing
  • TOTD #81 shows how to use nginx to front end a cluster of GlassFish Gems
  • TOTD #69 explains how a GlassFish cluster can be front-ended using Sun Web Server and Load Balancer Plugin
  • TOTD #67 shows the same thing using Apache httpd + mod_jk
#67 & #69 use the web application "clusterjsp" (bundled with GlassFish), which uses JSP to demonstrate in-memory session state replication. This blog creates a similar application "clusterrails" - this time using Ruby-on-Rails - and deploys it on GlassFish v2.1.1. The idea is to demonstrate how Rails applications can leverage the in-memory session replication feature of GlassFish.

Rails applications can be easily deployed as a WAR file on GlassFish v2 as explained in TOTD #73. This blog will guide you through the steps of creating the Controller and View to mimic "clusterjsp" and configuring the Rails application for session replication.
  1. Create a template Rails application and create/migrate the database. Add a Controller/View as:

    ~/samples/jruby/session >~/tools/jruby/bin/jruby script/generate controller home index
    JRuby limited openssl loaded. gem install jruby-openssl for full support.
    http://wiki.jruby.org/wiki/JRuby_Builtin_OpenSSL
          exists  app/controllers/
          exists  app/helpers/
          create  app/views/home
          exists  test/functional/
          create  test/unit/helpers/
          create  app/controllers/home_controller.rb
          create  test/functional/home_controller_test.rb
          create  app/helpers/home_helper.rb
          create  test/unit/helpers/home_helper_test.rb
          create  app/views/home/index.html.erb

  2. Edit the controller in "app/controllers/home_controller.rb" and change the code to (explained below):

    class HomeController < ApplicationController
      include Java

      def index
        # Request/host information exposed through the underlying servlet request
        @server_served = servlet_request.get_server_name
        @port = servlet_request.get_server_port
        @instance = java.lang.System.get_property "com.sun.aas.instanceName"
        @server_executed = java.net.InetAddress.get_local_host().get_host_name()
        @ip = java.net.InetAddress.get_local_host().get_host_address

        # Details of the servlet session backing this request
        @session_id = servlet_request.session.get_id
        @session_created = servlet_request.session.get_creation_time
        @session_last_accessed = servlet_request.session.get_last_accessed_time
        @session_inactive = servlet_request.session.get_max_inactive_interval

        # Store the submitted name/value pair (if any) in the servlet session
        if params[:name] != nil
          servlet_request.session[params[:name]] = params[:value]
        end

        # Build an HTML list of all session attributes, skipping internal "__" ones
        @session_values = ""
        value_names = servlet_request.session.get_attribute_names
        unless value_names.has_more_elements
          @session_values = "<br>No parameter entered for this request"
        else
          @session_values << "<UL>"
          while value_names.has_more_elements
            param = value_names.next_element
            unless param.starts_with?("__")
              value = servlet_request.session.get_attribute(param)
              @session_values << "<LI>" + param + " = " + value + "</LI>"
            end
          end
          @session_values << "</UL>"
        end
      end

      def adddata
        # Add a name/value pair to the servlet session and re-render the index view
        servlet_request.session.set_attribute(params[:name], params[:value])
        render :action => "index"
      end

      def cleardata
        # Invalidate the servlet session, discarding all stored attributes
        servlet_request.session.invalidate
        render :action => "index"
      end
    end

    The "index" action initializes some instance variables using the "servlet_request" variable, which maps to the "javax.servlet.http.HttpServletRequest" class. "servlet_request" provides access to different properties of the received request such as server name/port, host name/address, and others. It also uses an application-server-specific property, "com.sun.aas.instanceName", to fetch the name of the particular instance serving the request. In this blog we'll create a cluster with 2 instances. The action then prints the session attribute name/value pairs entered so far.

    The "adddata" action takes the name/value pair entered on the page and stores it in the servlet session. The "cleardata" action clears any data that is stored in the session.
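
    These actions need no custom routes: the default "config/routes.rb" generated by Rails 2.3 already maps "/home/index", "/home/adddata", and "/home/cleardata" (under the WAR's "/session" context) to the corresponding controller actions through the catch-all routes. For reference, this is the relevant part of the generated file - shown only as a sketch, your generated file may differ slightly:

    ActionController::Routing::Routes.draw do |map|
      # Default catch-all routes generated by Rails; they resolve
      # /home/:action to HomeController#:action
      map.connect ':controller/:action/:id'
      map.connect ':controller/:action/:id.:format'
    end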
  3. Edit the view in "app/views/home/index.html.erb" and change to (explained below):

    <h1>Home#index</h1>
    <p>Find me in app/views/home/index.html.erb</p>
    <B>HttpSession Information:</B>
    <UL>
    <LI>Served From Server:   <b><%= @server_served %></b></LI>
    <LI>Server Port Number:   <b><%= @port %></b></LI>
    <LI>Executed From Server: <b><%= @server_executed %></b></LI>
    <LI>Served From Server instance: <b><%= @instance %></b></LI>
    <LI>Executed Server IP Address: <b><%= @ip %></b></LI>
    <LI>Session ID:    <b><%= @session_id %></b></LI>
    <LI>Session Created:  <%= @session_created %></LI>
    <LI>Last Accessed:    <%= @session_last_accessed %></LI>
    <LI>Session will go inactive in  <b><%= @session_inactive %> seconds</b></LI>
    </UL>
    <BR>
    <% form_tag "/session/home/index" do %>
      <label for="name">Name of Session Attribute:</label>
      <%= text_field_tag :name, params[:name] %><br>

      <label for="value">Value of Session Attribute:</label>
      <%= text_field_tag :value, params[:value] %><br>

        <%= submit_tag "Add Session Data" %>
    <% end  %>
    <% form_tag "/session/home/cleardata" do %>
        <%= submit_tag "Clear Session Data" %>
    <% end %>
    <% form_tag "/session/home/index" do %>
        <%= submit_tag "Reload Page" %>
    <% end %>
    <BR>
    <B>Data retrieved from the HttpSession: </B>
    <%= @session_values %>

    The view dumps the property values set by the action from the servlet request. It then contains forms to enter session name/value pairs, clear the session data, and reload the page. The application is now ready; let's configure it for WAR packaging.
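
    The packaging in the next two steps uses the Warbler gem. If it is not already installed in your JRuby, it can typically be added with something like the following (a hedged sketch; the prompt and paths mirror the ones used above):

    ~/samples/jruby/session >~/tools/jruby/bin/jruby -S gem install warbler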
  4. Generate a template "web.xml" and copy it to the "config" directory as:

    ~/samples/jruby/session >~/tools/jruby/bin/jruby -S warble war:webxml
    mkdir -p tmp/war/WEB-INF
    ~/samples/jruby/session >cp tmp/war/WEB-INF/web.xml config/
    1. Edit "config/web.xml" (the copy made above is the one that Warbler packages into the WAR) and change the first few lines from:

      <!DOCTYPE web-app PUBLIC
        "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN"
        "http://java.sun.com/dtd/web-app_2_3.dtd">
      <web-app>

      to

      <web-app version="2.4" xmlns="http://java.sun.com/xml/ns/j2ee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd">

      This is required because the element to be added next is introduced in the Servlet 2.4 specification.
    2. Add the following element:

      <distributable/>

      as the first element, right after "<web-app>". This element marks the web application to be distributable across multiple JVMs in a cluster.
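
      After both edits, the top of "config/web.xml" should look roughly like this (only the header is shown as a sketch; the rest of the generated file stays unchanged):

      <web-app version="2.4" xmlns="http://java.sun.com/xml/ns/j2ee"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd">
        <distributable/>
        ...
      </web-app>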
  5. Generate and configure "config/warble.rb" as described in TOTD #87. This configuration is an important step; otherwise you'll encounter JRUBY-3789. Create a WAR file as:

    ~/samples/jruby/session >~/tools/jruby/bin/jruby -S warble
    mkdir -p tmp/war/WEB-INF/gems/specifications
    cp /Users/arungupta/tools/jruby-1.3.0/lib/ruby/gems/1.8/specifications/rails-2.3.2.gemspec tmp/war/WEB-INF/gems/specifications/rails-2.3.2.gemspec

    . . .

    mkdir -p tmp/war/WEB-INF
    cp config/web.xml tmp/war/WEB-INF
    jar cf session.war  -C tmp/war .

  6. Download the latest GlassFish v2.1.1, install/configure GlassFish, and create/configure/start a cluster using the script described here. Make sure to change the download location and filename in the script. This script creates a cluster "wines" with two instances: "cabernet" running on port 58080 and "merlot" running on port 58081. The equivalent "asadmin" commands are sketched below.
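
    If you prefer to run the commands by hand instead of using the script, the cluster and its instances can be created roughly as follows (a hedged sketch that assumes a node agent named "agent1" and default admin credentials; the script linked above remains the reference):

    asadmin create-cluster wines
    asadmin create-node-agent agent1
    asadmin start-node-agent agent1
    asadmin create-instance --nodeagent agent1 --cluster wines --systemproperties HTTP_LISTENER_PORT=58080 cabernet
    asadmin create-instance --nodeagent agent1 --cluster wines --systemproperties HTTP_LISTENER_PORT=58081 merlot
    asadmin start-cluster wines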
  7. Deploy the application using the command:

    ~/samples/jruby/session >asadmin deploy --target wines --port 5048 --availabilityenabled=true session.war
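
    Once deployed, the application can be sanity-checked directly against each instance port before involving a load balancer (a hedged check that assumes the instance HTTP ports created above and the default "session" context root of session.war):

    curl http://localhost:58080/session/home/index
    curl http://localhost:58081/session/home/index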
Now, the screenshots from the two instances are shown and explained below. The two (or more) instances are front-ended by a load balancer, so none of this is typically visible to the user, but it helps in understanding what is happening.
Here is a snapshot of this application deployed on "cabernet":



The instance name and the session id are highlighted in the red box. It also shows the time when the session was created in the "Session Created" field.

And now the same application from "merlot":



Notice that the session id exactly matches the one from the "cabernet" instance. Similarly, "Session Created" matches, but "Last Accessed" does not because the same session is accessed from a different instance.

Let's enter some session data in the "cabernet" instance and click the "Add Session Data" button as shown below:



The session attribute is "aaa" and the value is "111". Also, the "Last Accessed" time is updated. On the "merlot" page, click the "Reload Page" button and the same session name/value pairs are retrieved, as shown below:



Notice that the "Last Accessed" time is later than the time shown in the "cabernet" instance. The session information added in "cabernet" is automatically replicated to the "merlot" instance.

Now, let's add a new session name/value pair in the "merlot" instance as shown below:



The "Last Accessed" time is updated and the session name/value pair ("bbb"/"222") is shown on the page. Click "Reload Page" in the "cabernet" instance as shown below:



This time the session information added to "merlot" is replicated to "cabernet".

So any session information added in "cabernet" is replicated to "merlot" and vice versa.

Now, let's stop the "cabernet" instance as shown below:



and click "Reload Page" in the "merlot" instance to see the following:



Even though the instance from which the session data was added is stopped, the replica instance continues to serve both session values.

As explained earlier, these two instances are front-ended by a load balancer, typically running on port 80. So the user makes a request to port 80 and the correct session values are served even if one of the instances goes down, thereby providing high availability through in-memory session state replication.

Please leave suggestions on other TOTD that you'd like to see. A complete archive of all the tips is available here.

Technorati: totd glassfish clustering rubyonrails jruby highavailability loadbalancer

Thursday Jan 29, 2009

TOTD #67: How to front-end a GlassFish Cluster with Apache + mod_jk on Mac OSX Leopard ?

GlassFish provides support for High Availability by creating a cluster of server instances and replicating session state across them. This enhances the scalability and availability of your application and is a critical piece of the decision-making criteria when selecting an Application Server. Clustering in GlassFish Version 2 provides a comprehensive introduction to clustering, high availability, and load balancing in GlassFish.

GlassFish provides out-of-the-box support for load-balancing HTTP(S), JMS, and RMI/IIOP traffic when front-ended by Sun Java System Web Server, Apache Web Server, or Microsoft IIS (more details here) using the Load Balancer plug-in. This plug-in, however, is not available for Mac OS X, and a popular technique for front-ending on that platform is Apache httpd + mod_jk. This is exactly what this TOTD (Tip Of The Day) is going to describe.

This TOTD is going to explain how to front-end a 3-instance GlassFish cluster with Apache httpd and mod_jk on Mac OS X.

This blog is using information from the following blogs:
And thanks to Vivek and Shreedhar for helping me understand the guts of GlassFish High Availability.

Without further ado, let's get started. The steps are slightly involved, so fasten your seatbelt!
  1. First, let's create a 3-instance cluster following the screencast at GlassFish Clustering in under 10 minutes. Use the cluster name "cluster1" and instance names "instance1", "instance2", and "instance3". The admin console will look like:



    Deploy "clusterjsp" and make sure it works using port hopping as explained in the screencast. Click on each instance to identify its associated HTTP port.
  2. Define "jvmRoute" and "enableJK" properties on the newly created cluster as:

    ~/samples/v2/clustering/glassfish/bin >./asadmin create-jvm-options --target cluster1 "-DjvmRoute=\${AJP_INSTANCE_NAME}"
    Command create-jvm-options executed successfully.
    ~/samples/v2/clustering/glassfish/bin >./asadmin create-jvm-options --target cluster1 "-Dcom.sun.enterprise.web.connector.enableJK=\${AJP_PORT}"
    Command create-jvm-options executed successfully.

    These properties are required to enable "stickiness" for "mod_jk". More details about how these properties are used internally are explained here.
  3. Configure the above system properties for each instance in the cluster as shown:

    ~/samples/v2/clustering/glassfish/bin >./asadmin create-system-properties --target instance1 AJP_INSTANCE_NAME=instance1
    Command create-system-properties executed successfully.
    ~/samples/v2/clustering/glassfish/bin >./asadmin create-system-properties --target instance1 AJP_PORT=9090
    Command create-system-properties executed successfully.
    ~/samples/v2/clustering/glassfish/bin >./asadmin create-system-properties --target instance2 AJP_INSTANCE_NAME=instance2
    Command create-system-properties executed successfully.
    ~/samples/v2/clustering/glassfish/bin >./asadmin create-system-properties --target instance2 AJP_PORT=9091
    Command create-system-properties executed successfully.
    ~/samples/v2/clustering/glassfish/bin >./asadmin create-system-properties --target instance3 AJP_INSTANCE_NAME=instance3
    Command create-system-properties executed successfully.
    ~/samples/v2/clustering/glassfish/bin >./asadmin create-system-properties --target instance3 AJP_PORT=9092
    Command create-system-properties executed successfully.

    Note the value of the "AJP_PORT" property for each instance; it will be used when configuring "mod_jk" later. You may have to restart the cluster in order for these properties to be synchronized to each instance. This can be done easily from the admin console as explained in the screencast above, or from the command line as sketched below.
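
    For example (a hedged sketch, run from the same "glassfish/bin" directory and assuming default admin credentials):

    ./asadmin stop-cluster cluster1
    ./asadmin start-cluster cluster1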
  4. Install httpd: Mac OS X Leopard 10.5.6 comes bundled with Apache httpd 2.2, so that's cool! Otherwise it can be downloaded from httpd.apache.org. However, the pre-installed version has some intricacies with directory names that are explained below.
  5. Let's install and configure "mod_jk" in "httpd".
    1. The mod_jk binaries for Mac OS X are supposedly available at www.apache.org/dist/tomcat/tomcat-connectors/jk/binaries/macosx/. But installing the available module in httpd gives the following error:

      httpd: Syntax error on line 116 of /private/etc/apache2/httpd.conf: Cannot load /usr/libexec/apache2/mod_jk-1.2.25-httpd-2.2.4.so into server: dlopen(/usr/libexec/apache2/mod_jk-1.2.25-httpd-2.2.4.so, 10): no suitable image found.  Did find:
          /usr/libexec/apache2/mod_jk-1.2.25-httpd-2.2.4.so: mach-o, but wrong architecture
    2. The alternative is to build "mod_jk". Fortunately, this turned out to be a straightforward process thanks to this blog.
      1. Download the latest Connectors source code (version 1.2.27). The file "BUILD.txt" (included in the source bundle) provides clear instructions to build; they are described below as well.
      2. Configure the build environment as shown:

        ~/workspaces/tomcat-connectors-1.2.27-src/native >./configure --with-apxs=/usr/sbin/apxs

        It shows the output as:

        . . .
        checking for target platform... unix
        no apache given
        no netscape given
        configure: creating ./config.status
        config.status: creating Makefile
        config.status: creating apache-1.3/Makefile
        config.status: creating apache-1.3/Makefile.apxs
        config.status: creating apache-2.0/Makefile
        config.status: creating apache-2.0/Makefile.apxs
        config.status: creating common/Makefile
        config.status: creating common/list.mk
        config.status: creating common/jk_types.h
        config.status: creating jni/Makefile
        config.status: creating common/portable.h
        config.status: executing depfiles commands
      3. Edit "native/apache-2.0/Makefile.apxs" and add "-arch x86_64" as described here. Please note that this string needs to be specified twice.
      4. Invoke "make" and "mod_jk.so" is generated in the "native/apache-2.0" directory.
    3. Copy the generated "mod_jk.so" as:

      ~/workspaces/tomcat-connectors-1.2.27-src/native/apache-2.0 >sudo cp mod_jk.so /usr/libexec/apache2/
    4. Load the "mod_jk" module in httpd by editing "/etc/apache2/httpd.conf". Please note that a similar file exists at "/etc/httpd/httpd.conf"; ignore that one. Add the following as the last "LoadModule" line:

      LoadModule jk_module     libexec/apache2/mod_jk.so
    5. Configure "mod_jk" by adding the following lines immediately below the previous "LoadModule" line:

      JkWorkersFile /etc/apache2/worker.properties
      # Where to put jk logs
      JkLogFile /var/log/httpd/mod_jk.log
      # Set the jk log level [debug/error/info]
      JkLogLevel debug
      # Select the log format
      JkLogStampFormat "[%a %b %d %H:%M:%S %Y] "
      # JkRequestLogFormat set the request format
      JkRequestLogFormat "%w %V %T"
      # Send all jsp requests to GlassFish
      JkMount /*.jsp loadbalancer

      The key lines in this configuration are the first and the last. The first line tells "mod_jk" where to find the "worker.properties" file (explained later). The last line instructs it to forward only JSP requests. This allows static content such as images, text files, and media to be served by "httpd" itself (an optional broader mount is sketched at the end of this step).

      Also create the log directory specified in the configuration as:

      sudo mkdir /var/log/httpd
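
      If you instead want "httpd" to forward everything under the "clusterjsp" context (servlets and static resources included), an additional mount along these lines could be used; this is optional and not part of the original setup:

      JkMount /clusterjsp/* loadbalancer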
    6. Create a new file "/etc/apache2/worker.properties". Again, this is not in the "/etc/httpd" directory. Use the following contents:

      # Define 1 real worker using ajp13
      worker.list=loadbalancer
      # Set properties for instance1
      worker.instance1.type=ajp13
      worker.instance1.host=localhost
      worker.instance1.port=9090
      worker.instance1.lbfactor=50
      worker.instance1.cachesize=10
      worker.instance1.cache_timeout=600
      worker.instance1.socket_keepalive=1
      worker.instance1.socket_timeout=300
      # Set properties for instance2
      worker.instance2.type=ajp13
      worker.instance2.host=localhost
      worker.instance2.port=9091
      worker.instance2.lbfactor=50
      worker.instance2.cachesize=10
      worker.instance2.cache_timeout=600
      worker.instance2.socket_keepalive=1
      worker.instance2.socket_timeout=300
      # Set properties for instance3
      worker.instance3.type=ajp13
      worker.instance3.host=localhost
      worker.instance3.port=9092
      worker.instance3.lbfactor=50
      worker.instance3.cachesize=10
      worker.instance3.cache_timeout=600
      worker.instance3.socket_keepalive=1
      worker.instance3.socket_timeout=300

      worker.loadbalancer.type=lb
      worker.loadbalancer.balance_workers=instance1,instance2,instance3

      Read more about the worker.properties format. Essentially this file load-balances the 3-instance cluster and specifies configuration values for each instance. Note that the value of "worker.instanceX.port" for each instance is exactly the same as the "AJP_PORT" specified during instance configuration earlier. An optional stickiness setting is sketched below.
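
      Session stickiness at the mod_jk level can also be made explicit by adding the following property to the same file (recent mod_jk versions default to sticky sessions, so this line is shown only for clarity):

      worker.loadbalancer.sticky_session=1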
  6. Copy "tomcat-ajp.jar" from the "server/lib" directory of the latest Tomcat 5.5.x, "commons-logging.jar" (version 1.1.1), and "commons-modeler.jar" (version 2.0.1) to GLASSFISH_HOME/lib. This is done as:

    ~/samples/v2/clustering/glassfish/lib >cp ~/tools/apache-tomcat-5.5.27/server/lib/tomcat-ajp.jar .
    ~/samples/v2/clustering/glassfish/lib >cp ~/Downloads/commons-logging-1.1.1/commons-logging-1.1.1.jar .
    ~/samples/v2/clustering/glassfish/lib >cp ~/tools/commons-modeler-2.0.1/commons-modeler-2.0.1.jar .

    You may have to restart the cluster in order for these JARs to be loaded by each instance.
  7. An "httpd" instance is already running on port 80 in my particular installation of Mac OS X. Instead of mangling with that, I decided to change the listening port for the new instance that will be spawned for our front-end. This can be done easily by editing "/etc/apache2/httpd.conf" and looking for lines similar to:

    #Listen 12.34.56.78:80
    Listen 80

    And change "Listen 80" to "Listen 81".
That completes the configuration, phew!
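
Before starting the web server, the configuration syntax can optionally be verified with Apache's standard check:

sudo apachectl -t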

Let's start "httpd" as:

sudo httpd

The "httpd" logs are available in "/private/var/log/apache2". The following messages indicate a successful start of the web server:

. . .
[Thu Jan 29 11:14:16 2009] [warn] Init: Session Cache is not configured [hint: SSLSessionCache]
[Thu Jan 29 11:14:16 2009] [warn] No JkShmFile defined in httpd.conf. Using default /usr/logs/jk-runtime-status
[Thu Jan 29 11:14:16 2009] [warn] No JkShmFile defined in httpd.conf. Using default /usr/logs/jk-runtime-status
[Thu Jan 29 11:14:16 2009] [notice] Digest: generating secret for digest authentication ...
[Thu Jan 29 11:14:16 2009] [notice] Digest: done
[Thu Jan 29 11:14:16 2009] [warn] pid file /private/var/run/httpd.pid overwritten -- Unclean shutdown of previous Apache run?
[Thu Jan 29 11:14:16 2009] [notice] Apache/2.2.9 (Unix) mod_ssl/2.2.9 OpenSSL/0.9.7l DAV/2 mod_jk/1.2.27 configured -- resuming normal operations

OK, now it's show time!

If everything has been configured properly as described above then "http://localhost:81/clusterjsp/HaJsp.jsp" looks like:



Enter the session attribute name as "aaa" and the value of the attribute as "111". After you click the "ADD SESSION DATA" button, the updated page looks like:



The highlighted part shows that the request is served from "instance1" and the recently added session data. Let's stop "instance1" and see if the promise of high availability is fulfilled :)

Click on the "Instances" tab, select "instance1", and click "Stop". The admin console looks like:



Notice that "instance1" is shown as stopped. Clicking "Reload Page" on "http://localhost:81/clusterjsp/HaJsp.jsp" shows:




Aha!

Even though "instance1" is not running, the session data is still available. That is possible because of the seamless session failover from the primary ("instance1") to the replica partner ("instance2"). The highlighted part indicates that the request is now indeed served by "instance2".

Here are some other useful links to consider:
Please leave suggestions on other TOTD (Tip Of The Day) that you'd like to see. A complete archive of all tips is available here.

Technorati: totd glassfish highavailability clustering loadbalancer mod_jk apache httpd mac leopard