CAPS 6 & Java MQ: Part 1 - Overview & Considerations

Why use Java MQ with CAPS 6?

Java MQ is now the default provider in CAPS 6 and offers a number of advantages over STCMS, the default provider in previous versions. Although STCMS is still supported, Java MQ offers multiple clustering modes, five-nines availability using MySQL Cluster or HADB, wildcard destinations, schema validation, fine-grained management and tuning capabilities, and a growing list of other features.

Java MQ is mature (it has been around since 2001) and has conservatively developed stable features over time whilst maintaining strict compliance with the JMS specification. The online documentation is clear and current throughout, and makes claims only for functionality that is well tested - not to be underestimated! If you're looking for a stable and reliable JMS provider, you should look at Java MQ - it just works!

Last and by no means least, it's now completely open sourced in the Open MQ community project. So it's an open source JMS provider that does what it says on the tin, is very reliable and supports multiple modes for HA! And did I mention that it's the default for CAPS 6? So give it a go...

A 'little' intro on JMS resource adapters

GlassFish ships with a standard resource adapter, referred to in the Admin Console and server.log files as jmsra or MQRA. CAPS 6 also installs its own resource adapter into GlassFish, known as sun-jms-adapter or JMSJCA, which can also be installed in other app servers and/or with other JMS servers. Sun also provides a third adapter for plugging into other app servers, called genericra, but JMSJCA already does that and a whole lot more.

Confused? There's no need to be, really. You must use JMSJCA for all CAPS 6 deployments. There are efforts underway to unify the three adapters, but for now your projects are already set up to use JMSJCA, so all you need to decide is how to deploy and configure MQ with JMSJCA.

Incidentally, CAPS 6 installs default JMS connection pools and resources into GlassFish that simplify the use of JMSJCA. However, you may need to add new connection pools for different broker clusters and configurations. You can read more about how to do this, and about the available resource adapters, in this Java CAPS Grok entry.
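As a rough sketch of what that looks like, a new JMSJCA-backed pool and resource can be created with the standard GlassFish connector commands. The pool name, JNDI name, host and port below are invented for illustration, and the ConnectionURL property name should be checked against the JMSJCA documentation for your release:

# Create a pool backed by the CAPS resource adapter (colons in the URL are escaped for asadmin)
asadmin create-connector-connection-pool --raname sun-jms-adapter \
  --connectiondefinition javax.jms.ConnectionFactory \
  --property 'ConnectionURL=mq\://mqhost1\:7676' jmsjcaPool

# Expose the pool under a JNDI name for applications to look up
asadmin create-connector-resource --poolname jmsjcaPool jms/myConnectionFactory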

One thing to be aware of is that if you deploy a standard MDB from your CAPS-enabled NetBeans IDE, it will use jmsra by default; you should therefore use the JMSJCA MDBs provided by CAPS 6 instead. Note also that the GlassFish asadmin commands for JMS configuration use jmsra by default. My Grok entry explains these common gotchas in more detail.
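To make that last gotcha concrete, the command below (JNDI name invented) creates a connection factory bound to the built-in jmsra, which is usually not what you want for CAPS 6; the connector commands shown earlier are the JMSJCA route:

# create-jms-resource binds to the default jmsra, not to JMSJCA
asadmin create-jms-resource --restype javax.jms.ConnectionFactory jms/builtInCF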

JMS lifecycle options for running with GlassFish

Lifecycle options for STCMS

In CAPS 5.1.x, the STCMS message server ran as a sub-process but was managed by the Integration Server. A new feature for CAPS 6 is that multiple STCMS instances can now be configured and GlassFish can manage each of their lifecycles using a lifecycle-module.

See step 2 from my previous blog entry on how to create additional managed STCMS servers for this purpose.
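For orientation only, the shape of the GlassFish command involved is shown below. The class name, classpath and module name are placeholders, not the real STCMS values; the referenced blog entry has the actual steps:

# Placeholder values - see the referenced blog entry for the real class name and properties
asadmin create-lifecycle-module --classname com.example.StcmsLifecycleListener \
  --classpath /path/to/stcms-lifecycle.jar stcms-instance-2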

Lifecycle options for Java MQ

CAPS 6 ships with GlassFish Enterprise Server, which 'embeds' a Java MQ broker by default. This can be changed: there are in fact three lifecycle options for running MQ brokers with GlassFish.

NB: These modes of lifecycle management are provided through MQRA. However, I've already stated that only JMSJCA should be used for CAPS 6 deployments. This is true, but it doesn't prevent functionality being 'borrowed' from MQRA, in this case for the purposes of Java MQ lifecycle management.

Here's my view of the applicability of each of the options with specific regard to CAPS 6 deployments:

EMBEDDED - Starts and runs inside the GlassFish server VM process. This is the default option after an installation and is only supported for non-clustered use. EMBEDDED mode using MQRA connections enables direct in-VM communication without going over the TCP/IP stack; however, when used with JMSJCA (as it will be for all CAPS 6 deployments) a local TCP/IP connection will always be formed. Note also that EMBEDDED does not currently support any form of MQ broker clustering.

LOCAL - The app server process forks off the MQ broker process and will automatically restart it if the process is killed. This means there can be only one broker per domain, making it an inflexible option for future change. LOCAL is currently supported only for non-clustered deployments, although there is no technical reason for this that I am aware of, and the restriction may be removed in future CAPS 6 updates.

REMOTE - Requires that MQ broker process(es) are started and stopped manually by an administrator, independently of the app server lifecycle. This offers the most flexibility in terms of deployment considerations such as the number of brokers required, and it is not tied to any future GlassFish clustering that may be introduced. It also makes it easier to upgrade and patch the MQ broker. If you're using REMOTE, you can cluster Java MQ brokers without clustering the app server.

As far as JMSJCA is concerned, you can start brokers manually and connect to them from any deployed applications without actually caring about the above lifecycle options. However, it is good practice to set the type to REMOTE so that the app server does not unnecessarily start another broker process, thereby consuming valuable resources.

Remember that if you use LOCAL or EMBEDDED, your brokers are co-located and as such will start and stop with the domain; the JMS service will therefore be unavailable whenever you stop the domain. If you require clustering, or are using remote JMS clients (i.e. standalone JMS clients, or JMS clients running in another application server) that potentially connect to the broker, then I would recommend using a separate tier for your JMS brokers, and therefore choosing REMOTE.

If you need to scale the number of brokers independently of the number of domains, e.g. to support increased messaging capacity, you would typically use REMOTE and manually manage a separate tier for your messaging.
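In a REMOTE setup, the brokers in that tier are simply started and stopped with the MQ command-line tools. A minimal sketch, with hypothetical host, port and broker names:

# Start a broker on a node of the messaging tier, outside any GlassFish domain
# (this form runs in the foreground)
imqbrokerd -name capsBroker1 -port 7676

# Shut it down independently of the app server (prompts for the admin password)
imqcmd shutdown bkr -b mqhost1:7676 -u admin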

In summary, and if you agree with my logic: use REMOTE, and manage the lifecycle of each broker manually, if you require HA or have external clients; otherwise, use EMBEDDED or LOCAL. Each of these options is supported for all styles of CAPS application, whether they are repository-based, use JMSJCA MDBs, or are composite applications that use the JMS BC (available today in Open ESB). I hope this is clear.

Changing the lifecycle mode

You can change the mode in the Admin Console or via the app server domain's configuration file.

To change it in the Admin Console, start GlassFish, open the Admin Console in your browser and navigate to Configuration -> Java Message Service. Then select the desired value from the Type dropdown of the JMS Service.

To change directly in the domain's configuration file, edit domain.xml in .../JavaCAPS6/appserver/domains/<domain_name>/config/. Locate the jms-service tag and change the type attribute to EMBEDDED, LOCAL or REMOTE, as discussed. I recommend taking a backup of domain.xml first.

<jms-service addresslist-behavior="random" addresslist-iterations="3" init-timeout-in-seconds="60"
reconnect-attempts="3" reconnect-enabled="true" reconnect-interval-in-seconds="5" type="EMBEDDED">
</jms-service>

A server restart is required in either case. Note also that removing the jms-service altogether is not an option since your domain will fail to start if you do so!
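For the command-line inclined, the same change can also be scripted with asadmin's set command rather than editing domain.xml by hand. The dotted name below is what I would expect for a default domain, but verify it with asadmin get first; a restart is still required here too:

asadmin get server.jms-service.type
asadmin set server.jms-service.type=REMOTE
asadmin stop-domain domain1
asadmin start-domain domain1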

Clustering and high availability in Java MQ

One of the strengths of Java MQ is that it has multiple modes of clustering. Which mode to use depends on the type of availability that your application(s) require. There are two main types of availability that we might want to achieve:

Service availability - application client failover to another broker if a broker fails.
Data availability - data replication and preservation of JMS semantics if a broker fails.

Data availability usually implies some level of service availability, as described on Ramesh's blog.

If it's service availability that is required, this can be achieved with conventional clustering using either a file or JDBC store. If data availability is required as well as service availability, this currently requires high-availability clustering using the JDBC HA store option with an HA database such as MySQL Cluster. Irrespective of the type of clustering required, with CAPS 6 you will need to use REMOTE mode and manage the broker lifecycle manually.
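As a taster for Parts 2 and 3, a conventional (service-availability) cluster is created simply by telling each broker about its peers when it is started; the host names and ports below are illustrative. Enhanced (data-availability) clustering additionally needs the shared JDBC HA store configuration, which Part 3 walks through.

# Conventional cluster: start each broker with the full list of cluster members
# (run the first command on mqhost1 and the second on mqhost2)
imqbrokerd -name broker1 -port 7676 -cluster mqhost1:7676,mqhost2:7676
imqbrokerd -name broker2 -port 7676 -cluster mqhost1:7676,mqhost2:7676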

Part 2 discusses the use of service availability with Java MQ and CAPS 6, how to configure it and how to build, deploy and test the different styles of CAPS 6 applications. Broker clustering using JDBC HA will be covered in Part 3.

Comments:

hi louis,

first of all, thanks for the two great articles on CAPS and HA. these articles really help to figure out what should / can be used in CAPS and what should not, since this was one of the biggest discussion points (in my opinion) for the V6 release since it came out.

do you know if it is possible to manage the MQ instances that are LOCAL or REMOTE (not EMBEDDED) via eManager? or are there plans to support non-embedded MQ instances in eManager? for me it is not really obvious how to e.g. insert test messages or investigate message properties when using REMOTE or LOCAL servers without having to code something custom.

regards chris

Posted by Christian Brennsteiner on August 20, 2008 at 01:42 AM BST #

Very helpful post IMHO. Thank you.

Posted by Johan Gardner on November 11, 2008 at 02:45 PM GMT #

Hi,

I found documentation everywhere on how to create HA brokers. However, I cannot find a single reference on how to create HA clients. I am interested, let's say, in having 2 clients receiving data. One is active and one is not. Only the active one receives data; if it crashes, the second should take over. I see the takeover mechanism only for brokers. I think that for an HA system this is not enough.

Any ideas on how to create HA clients?

Thanks
Bogdan

Posted by Bogdan on July 27, 2009 at 08:55 PM BST #
