Highly what?

I'm breaking my own personal record of posting once every few months and writing something again (3rd time in 5 days). I may pass out due to the shock of it all. Obviously my claim of having a life is proving to be unfounded.

I'm going to completely diverge from my last post and not go anywhere close to talking about the basics of JMS and how to use MQ with it. I'm also not writing about the features of MQ (a request in the comments from 2 entries ago). I'm going to actually write something pseudo-technical (it happens some days) although very high level.

What I'm going to talk about is MQ's implementation of High Availability, what we have now, why we did it that way and where it is going. (I've promised to do this at least twice in my old infrequent blogs).

Before I go anywhere at all, I should start with the very basic facts. MQ has a couple of components, and one of those pieces is a server which we call the broker (this entry gives a quick overview of the pieces). You can connect those brokers together into a cluster.

In the far distant past, I did a fairly lightweight entry on clustering. It doesn't go into a lot of depth on the subject (actually, it gives a really high-level overview of various forms of clustering and then dives right into the protocol), but it might be worth a read.

Before I start the serious stuff, I may as well start with the basic stuff - terminology. I have a pet peeve here, and I'm going to break my own rule in these blog entries just to keep things simple. The term high availability drives me nuts because it means so many things to so many people. To me there are really two facets of HA: service availability, which means that your client applications can continue to operate if something goes wrong, and data availability, which means that not only does your service keep operating BUT your messages don't get stuck somewhere in the process. I'm still going to use the term HA, but understand that when I do it means data availability (which was introduced in 4.1) and not service availability, which I will refer to as clustering (and which was introduced in 3.0 or something like that).

Ok, disclaimers are out of the way.

A little background

While we talk about HA as a new feature in 4.1, there are actually multiple ways to get availability:

  • clustering - provides service availability
  • SunCluster - provides data availability through an active/passive configuration. The file-based persistence implementation (as opposed to JDBC persistence) stores messages on a highly available filesystem. When one broker crashes, another (with the same identity) starts up, loads the data and begins to process messages.
  • HA - provides data availability by using a highly available database (something like Oracle RAC, HADB or MySQL Cluster)
I'm only going to talk about the last.

How it works

Yes - I did just say that we provide HA by using an HA database (seems redundant, doesn't it?).

In the 4.1 version of HA, all of the brokers in a cluster share a single JDBC database (although each broker uses its own set of tables). The brokers watch each other, and when they notice that one has gone down they hold an election to decide who gets its messages. The elected broker then gobbles up the messages in the database tables owned by the failed broker. Once the takeover of the messages is complete, the clients fail over to the takeover broker to guarantee message ordering.
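
To make that a little more concrete, here is a minimal sketch of the per-broker settings that (as far as I remember) drive the shared-JDBC HA mode. In real life these live in each broker's config.properties file; I've written them as a tiny Java snippet just so the names stand out. The property names, the cluster id and the vendor value are all from memory, so please check them against the administration guide rather than taking this as gospel.

    import java.util.Properties;

    // Sketch of the per-broker settings for the shared-JDBC HA mode.
    // Property names are recalled from the 4.1 docs, not copied from them;
    // normally they would go in each broker's config.properties file.
    public class HaBrokerConfigSketch {
        public static void main(String[] args) {
            Properties brokerProps = new Properties();
            brokerProps.setProperty("imq.cluster.ha", "true");               // turn on HA clustering
            brokerProps.setProperty("imq.cluster.clusterid", "myHAcluster"); // same id on every broker in the cluster
            brokerProps.setProperty("imq.brokerid", "habroker1");            // unique per broker - keys that broker's tables
            brokerProps.setProperty("imq.persist.store", "jdbc");            // persist to the shared database
            brokerProps.setProperty("imq.persist.jdbc.dbVendor", "hadb");    // or mysql, oracle, etc.
            brokerProps.list(System.out);                                    // dump the settings for inspection
        }
    }

The imq.brokerid value is what gives each broker its own set of tables in the shared database, which is why the takeover described above amounts to one broker grabbing the tables owned by another.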

At some point, I'll write something that talks about how we monitor for a broker failure, but that is more information than I plan to go into today. (I'm actively trying not to delve into the nitty-gritty bits as much as I want to.)
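
Since I mentioned that the clients fail over to the takeover broker, here is a rough sketch of what that looks like from the client side, using the Open MQ client classes (com.sun.messaging.ConnectionFactory and ConnectionConfiguration). The host names and ports are made up, and the choice of properties reflects my recollection of the client runtime, so treat it as an illustration rather than a recipe.

    import javax.jms.Connection;
    import javax.jms.JMSException;

    import com.sun.messaging.ConnectionConfiguration;
    import com.sun.messaging.ConnectionFactory;

    // Sketch of a client that can follow a takeover: it is given the addresses
    // of the brokers in the cluster and asked to reconnect automatically if the
    // broker it is currently talking to goes away.
    public class HaClientSketch {
        public static Connection connect() throws JMSException {
            ConnectionFactory cf = new ConnectionFactory();
            // list the brokers in the HA cluster (hypothetical hosts and ports)
            cf.setProperty(ConnectionConfiguration.imqAddressList,
                           "mq://hostA:7676,mq://hostB:7676");
            // let the client runtime reconnect instead of surfacing the failure
            cf.setProperty(ConnectionConfiguration.imqReconnectEnabled, "true");
            return cf.createConnection();
        }
    }

The point is simply that the application code never has to know which broker ends up owning the messages after a failure - the client runtime takes care of following the takeover.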

Why we did it the way we did

Obviously there are a lot of ways to implement HA.

First, you need to decide if you are doing active/backup (some servers are hanging out waiting for the moment when they get to be the one to start processing the messages) or active/active (all servers are operational). Once you have made that decision, you need to decide whether you want a shared store (where all the messages are stored in a single place) or a replicated store (where a copy of each message is stored on every server).

We decided on an active/active configuration because it maps well onto the cluster architecture used by GlassFish (which we integrate into).

We decided on using a single shared store (in this case a JDBC database) for several reasons. The most relevant one at this point is the fact that there are two hard problems in HA: handling the takeover and processing of messages when a failure occurs, and maintaining data integrity across the cluster through failures.

Using an HA database allowed us to focus on getting the first problem right, because we pushed the data integrity problem off onto the database.

Where we are going next

Our path forward is to solve the second issue (data integrity) and deliver a version of HA that does not require a database at its core. This means that in what we are calling MQ.next (which really just means some future version of MQ) we are planning to have a version of HA without SunCluster and without an HA database. How we are going to do it is TBD, but there will be more on mq.dev.java.net or this blog as we start to solve it.

Comments:

Thanks!

Great entry Linda.

My question: You have this entry in the blog:

"all of the brokers in a cluster share a single file store."

Is the file store you are referring to the HA database?

Posted by Tom Kincaid on October 23, 2007 at 04:55 AM BST #

Hi Tom,

You are right - that statement is confusing. Thanks for pointing it out. Isn't it good blogs are editable?

-- Linda

Posted by Linda Schneider on October 23, 2007 at 07:52 AM BST #

hi linda,

do you know which HA DB is generally supported or useable for open mq.

regards chris

Posted by Christian Brennsteiner on February 14, 2008 at 07:26 AM GMT #
