Wednesday Mar 02, 2011

Enhancements to JMS clustering in GlassFish 3.1

GlassFish 3.1 introduces several enhancements to JMS clustering. This blog highlights the key changes.

GlassFish 3.1 continues to support auto-clustering of both conventional (service HA only) and enhanced (service and data HA) MQ clusters in the LOCAL and REMOTE JMS integration modes. In addition, we have added support for conventional clusters in Embedded mode. In Embedded mode, the broker is started in the same process space (JVM) as GlassFish, which eliminates the overhead of running multiple processes. This mode is now the default for both clustered and stand-alone GlassFish instances.

However, while stand-alone (non-clustered) GlassFish servers in Embedded mode use direct, in-process communication with the MQ broker, clustered instances use TCP. The communication mode is selected automatically and cannot be configured by the user. Direct-mode communication uses API calls and completely bypasses the network stack, which yields a significant speed-up. It cannot be used for clustered instances, since a cluster needs to be able to handle broker or connection failures by failing over to another broker.

The following MQ clustering modes and JMS integration options are supported:
MQ conventional cluster with master broker - EMBEDDED and LOCAL modes
MQ conventional cluster without master broker - EMBEDDED and LOCAL modes
MQ enhanced cluster - LOCAL mode only
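
For reference, the JMS integration mode itself is the type attribute of the jms-service element and can be changed with the asadmin set command. A minimal sketch, assuming a cluster configuration named mycluster-config (the name is hypothetical):

# switch a (hypothetical) cluster configuration to LOCAL mode
asadmin set mycluster-config.jms-service.type=LOCAL

# switch a stand-alone server back to the EMBEDDED default
asadmin set server.jms-service.type=EMBEDDED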


Lazy-initialization of MQ broker in embedded mode

In the embedded mode, the start-up of the MQ broker is deferred until it is really required. A lightweight Grizzly service is configured to listen on the JMS port. When a request comes in on the JMS port for the first time, the MQ broker is started before the request is processed. The Grizzly service proxies all subsequent requests on this port to the MQ broker. This behavior is controlled by the lazy-init property of the default_JMS_host in domain.xml, which is true by default. To disable lazy initialization, set the lazy-init flag to false; this disables the Grizzly service, and the MQ broker is then started eagerly along with the GlassFish server. A restart of the GlassFish server is required for a change to the lazy-init property to take effect.
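
For example, lazy initialization could be disabled for the default server configuration along these lines (a sketch; the dotted name assumes the default_JMS_host mentioned above):

# disable lazy initialization of the embedded broker
asadmin set server.jms-service.jms-host.default_JMS_host.lazy-init=false

# restart for the change to take effect
asadmin restart-domain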

Support for Dynamic Cluster changes

In GlassFish v2.1, the MQ broker address-list was populated only during start-up. As a consequence, any changes to the cluster topology at run-time were not reflected until the entire cluster was restarted. In GlassFish 3.1, we now support dynamic changes to the cluster topology. The JMS service listens for cluster change events and propagates them to the MQ broker dynamically, eliminating the need for a restart.


Improvements to MQ conventional cluster with master broker

Conventional clusters in MQ have traditionally required configuring a master broker for certain admin-related operations, such as the create/update/delete of durable subscriptions and physical destinations. MQ broker instances are also required to rendezvous with the master broker at start-up.
This imposes the requirement that the master broker be started before the remaining broker instances can start and function correctly. There have been several complaints from users running into "master broker not started" errors when the start-up of the master broker is delayed. To address this issue, a new broker property, imq.cluster.nowaitForMasterBrokerTimeoutInSeconds, has been introduced. It can be configured through GlassFish (as a property in the jms-host element of domain.xml) and defines the timeout interval before the instances start reporting the error. This is designed to make the MQ cluster more tolerant of delays in master broker start-up.
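
As a sketch, the timeout could be set as a jms-host property using asadmin; the cluster configuration name and the 90-second value below are illustrative, and the dots in the property name are escaped as described in the last section of this post:

# allow brokers to wait up to 90 seconds for the master broker to come up
asadmin set 'mycluster-config.jms-service.jms-host.default_JMS_host.property.imq\.cluster\.nowaitForMasterBrokerTimeoutInSeconds=90'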

Dynamically changing the master broker

A significant enhancement to the MQ conventional cluster is a new feature that allows users to dynamically change the master broker without requiring a cluster restart. In earlier releases, changing the master broker required the user to perform a manual backup and restore of the MQ configuration store, followed by a restart of the whole cluster. This is now possible by running a single GlassFish command, change-master-broker, and a restart of the cluster is no longer required. By default, the first configured instance in the GlassFish instance list for the cluster hosts the master broker. This can now be changed to any other GlassFish instance in the cluster; the only restriction is that the chosen instance must be part of the cluster. While running this command, ideally all the instances in the cluster should be running. At a minimum, however, the instances associated with the old and the new master brokers must be running.
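
A minimal sketch of the command, assuming the broker associated with a (hypothetical) instance named instance2 should become the new master broker:

# nominate the broker associated with instance2 as the new master broker
asadmin change-master-broker instance2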


MQ conventional cluster of peer brokers

Another significant feature in this release is a new MQ clustering mode: the MQ conventional cluster of peer brokers, newly introduced in MQ 4.5/GlassFish 3.1. In this mode, the earlier requirement of nominating one of the clustered instances as a master broker is done away with, and all MQ instances are equal peers. Instead, a user-configured database is used to store the shared configuration data. This mode can be enabled using the new CLI command configure-jms-cluster (covered in the next section).
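
As a sketch, a conventional cluster of peer brokers might be configured as follows; the cluster name, database details, and password file path are all hypothetical, and the DB password is read from the password file under the key AS_ADMIN_JMSDBPASSWORD:

# configure a conventional peer-broker cluster with shared config data in MySQL
asadmin --passwordfile /tmp/jmspasswords configure-jms-cluster \
  --clustertype=conventional --configstoretype=shareddb --messagestoretype=file \
  --dbvendor=mysql --dbuser=mquser \
  --dburl=jdbc:mysql://dbhost:3306/mqstore mycluster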

New CLI command to switch between JMS clustering modes

A new CLI command, configure-jms-cluster, has been introduced to configure and switch between the different MQ clustering modes, for example from conventional to enhanced and vice versa. The syntax of the command is:

configure-jms-cluster [--clustertype=conventional|enhanced] [--messagestoretype=jdbc|file] [--configstoretype=masterbroker|shareddb] [--dbvendor vendor] [--dbuser user] [--dburl url] [--force] [--property (name=value)[:name=value]*] clusterName

a. Message store type (JDBC | file) – defaults to file

b. Config store type (MasterBroker | SharedDB) – defaults to MasterBroker

c. Cluster Type: Conventional | Enhanced (if enhanced, then (a) and (b) are ignored).

d. The DB password needs to be passed in through the password file using the key AS_ADMIN_JMSDBPASSWORD
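
For illustration, an enhanced (HA) cluster might be configured as follows; the cluster name, database details, and password file path are hypothetical:

# the password file supplies the DB password under the expected key
echo "AS_ADMIN_JMSDBPASSWORD=secret" > /tmp/jmspasswords

# for an enhanced cluster, the store-type options are ignored
asadmin --passwordfile /tmp/jmspasswords configure-jms-cluster \
  --clustertype=enhanced --dbvendor=mysql --dbuser=mquser \
  --dburl=jdbc:mysql://dbhost:3306/mqstore mycluster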

The command only handles the configuration change from the existing clustering mode to the new one. Hence, extreme caution should be taken when running it against an existing cluster where JMS-related activity has occurred; by JMS-related activity, I am referring to activities such as the creation of destinations or durable subscriptions and the exchange of messages. When running against such a cluster, manual steps need to be followed to back up the config and message stores. The steps are detailed in the MQ admin guide. The best practice is to run this command right after you have created a cluster but before you have added any instances to it. That way, you can be sure that no JMS activity has occurred and the operation is perfectly safe.


Setting arbitrary broker props
A small but important improvement to GlassFish JMS is the ability to configure any MQ broker property through GlassFish. Properties can be configured on either the jms-service or the jms-host element; if the same property is configured on both, the one on the jms-host takes precedence. There are two ways to configure these properties. The first is to specify them when using the create-jms-host command; if you are using the LOCAL or EMBEDDED JMS integration modes, you will then need to make this new JMS host the default JMS host. The second is to use the asadmin set command. In that case the property names should be fully qualified, and any '.' in the name should be escaped with a backslash ('\').
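
A brief sketch of both approaches; the host name and property values are illustrative, and imq.autocreate.queue is a standard MQ broker property:

# 1) supply broker properties when creating a JMS host
asadmin create-jms-host --mqhost localhost --mqport 7676 \
  --mquser admin --mqpassword admin \
  --property imq.autocreate.queue=false myJmsHost

# 2) use asadmin set; quote the dotted name so the escaped dots survive the shell
asadmin set 'server.jms-service.property.imq\.autocreate\.queue=false'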




Thursday May 18, 2006

Migrating to GlassFish

If you are thinking of moving your existing enterprise application over to GlassFish, the first Java EE 5-compliant application server, but are worried about migration woes, don't fret! There is help on hand by way of Sun's Application Server Migration Tool. This tool allows you to migrate your existing enterprise application archives (EARs), web archives (WARs), resource archives (RARs) and source code from a host of supported application servers over to GlassFish. And the tool is free to use!

So, what does the tool do?

The tool works on the input archive or source code to translate the runtime deployment descriptors from the source application server's format to GlassFish-compliant ones. It also parses JSP and Java source files (in the case of source code input) and provides runtime support for certain custom JSP tags and proprietary APIs. For source code inputs, it generates Ant-based scripts to build the archive; for archive inputs, it rebundles the migrated archive.

The migration tool can help you migrate to GlassFish from earlier versions of Sun's Application Server and a host of other competitive application servers, such as BEA WebLogic 5.x, 6.x and 8.x; WebSphere 4.x and 5.x; JBoss 3.x; and Apache Tomcat.

How to use the tool?

The tool is available for download as a zip archive. It needs to be downloaded and extracted to a location on your local hard disk. You will need to make sure you have JDK 5 and GlassFish installed on the machine, and you should have the JDK_PATH, J2EE_PATH, AS_HOME and ASMT_HOME environment variables set up correctly. The complete set of installation instructions is available here. The tool can be run in either UI mode or command-line mode. All you need to do is specify the location of your archive/source code, indicate the app server for which the source was created, point the tool to where you want the output to be generated, and you are all set. Once the migration is complete, the tool generates a comprehensive report describing what has been migrated, what else needs to be done, and errors, if any.
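
For example, on a Unix-like system the setup might look like the following before launching the tool; the paths are purely illustrative:

# illustrative locations; point these at your actual JDK, GlassFish and tool installs
export JDK_PATH=/usr/jdk/jdk1.5.0
export J2EE_PATH=/opt/glassfish
export AS_HOME=/opt/glassfish
export ASMT_HOME=/opt/asmt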

What can't the tool do?

The tool provides limited support for proprietary APIs and custom JSP tags. Unsupported API usages show up in the report and need to be corrected manually before you can deploy your application. Hence, before you start the migration, it is a good idea to run the Application Verification Kit (AVK) to get an idea of how Java EE-compliant your application is. The AVK report should give you a fairly good idea of how much effort is involved in the migration.

If you need more help with this tool, you can refer to the online documentation.
