How Does Sailfin (Sun GlassFish Communications Server) SIP Session Replication Module Select Replica Instances?

For scalable, highly available middleware deployments, persisting session state to every instance in the cluster can be a sub-optimal solution. Replicating sessions to all instances results in significantly higher network traffic just for replicating state, reducing the bandwidth available for growing application user requests. This all-to-all approach is perhaps suited to small clusters with a limited number of concurrent requests.

A better approach for securing scaling advantages is buddy replication. In this approach, each instance selects one (or more) other instances in the cluster to which it replicates all of its sessions. This works well for fairly large deployments, but there are factors to consider in terms of the overhead the replication subsystem must handle, at the cost of performance, particularly when a large number of concurrent sessions are being processed and later expired. One such overhead is the need for instances to form ring-like replica partnerships based on a certain order in which buddies become available and are selected. When a buddy instance fails, there is the cost of readjusting and forming a new buddy relationship with another surviving instance; when the original buddy recovers, one of the instances in the cluster must readjust again to use the recovering instance as its replica partner. Think of it as a ring of chain links that are randomly removed, with the consistent goal of keeping the ring connected and the overhead of relinking every time a link is removed, added, or a new one joins the chain, as sketched below.
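To make the ring-style partnership and its relinking cost concrete, here is a minimal sketch. It is not Sailfin or GlassFish code; the BuddyRing class and the instance names are hypothetical, and a real implementation would also re-replicate active sessions on every membership change.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Hypothetical illustration of ring-style buddy replication partner selection.
// Instances are kept in a fixed order; each instance replicates to the next
// live member in the ring. Every membership change forces a "re-link".
public class BuddyRing {

    private final List<String> members = new CopyOnWriteArrayList<>();

    public BuddyRing(List<String> initialMembers) {
        members.addAll(initialMembers);
    }

    // The buddy of 'self' is simply the next surviving member in ring order.
    public String buddyOf(String self) {
        int idx = members.indexOf(self);
        if (idx < 0 || members.size() < 2) {
            return null; // no partner available
        }
        return members.get((idx + 1) % members.size());
    }

    // Membership changes require every instance to recompute its buddy,
    // and typically to re-replicate its active sessions to the new partner.
    public void onInstanceFailed(String failed) {
        members.remove(failed);
    }

    public void onInstanceJoined(String joined) {
        members.add(joined);
    }

    public static void main(String[] args) {
        BuddyRing ring = new BuddyRing(List.of("instance1", "instance2", "instance3"));
        System.out.println(ring.buddyOf("instance1")); // instance2
        ring.onInstanceFailed("instance2");
        System.out.println(ring.buddyOf("instance1")); // instance3 (re-linked)
    }
}
```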

There is also a cost to consider (if such were the design approach): each time the cluster shape changes, any cookie information pertaining to replica locations that is sent back to the LB as part of the response headers must be dynamically updated. Typically, that cost should also be avoided through more efficient means.

In the case of GlassFish, we have used buddy replication fairly successfully, with each instance having a single replica buddy. When an instance that was processing requests fails and the LB directs a request to a random surviving instance, we locate the session elsewhere in the cluster, as illustrated in the sketch below. This has worked well for reasonably large mission-critical environments whose scalability and availability requirements are within the boundaries of this approach.
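The cost of locating a session after failover can be illustrated with a purely hypothetical, in-memory simulation. The SessionLocator class and its broadcast-style loop below are assumptions for illustration, not the actual GlassFish lookup implementation.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Hypothetical, in-memory illustration of "locate the session in the cluster":
// when a request for an unknown session lands on a random instance, that
// instance queries the other members until one of them returns the replica.
public class SessionLocator {

    // instanceName -> (sessionId -> serialized session state)
    private final Map<String, Map<String, byte[]>> replicaStores = new HashMap<>();

    public void store(String instance, String sessionId, byte[] state) {
        replicaStores.computeIfAbsent(instance, k -> new HashMap<>()).put(sessionId, state);
    }

    // Broadcast-style lookup: ask every other instance for the session.
    // The network round trips this implies are the cost referred to above.
    public Optional<byte[]> locate(String sessionId, String askingInstance) {
        for (Map.Entry<String, Map<String, byte[]>> e : replicaStores.entrySet()) {
            if (e.getKey().equals(askingInstance)) {
                continue; // already checked locally
            }
            byte[] state = e.getValue().get(sessionId);
            if (state != null) {
                return Optional.of(state);
            }
        }
        return Optional.empty();
    }
}
```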

In Sailfin 2.0, the scaling and reliability needs of telco applications are typically very high, and we needed a scalable approach to ensure that SIP Session Replication overhead sustained good performance along with the added reliability and availability. We therefore used a consistent hashing algorithm to dynamically assign a replica instance for each new session. We did this by leveraging the consistent hashing mechanism that the Sailfin Converged Load Balancer (CLB) uses to proxy requests to a target instance based on a BEKey. For replication, the same logic of using a hashed key for target instance assignment is taken a bit further.
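Here is a minimal consistent-hash sketch showing how a key such as a BEKey can map to a target instance. The ConsistentHashSelector class, the MD5-based ring positions, and the virtual-node count are assumptions for illustration, not the actual CLB implementation.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.List;
import java.util.SortedMap;
import java.util.TreeMap;

// Minimal consistent-hash sketch (not the actual CLB code): instances are
// placed on a hash ring and a key (analogous to the BEKey) is mapped to the
// first instance at or after its hash position.
public class ConsistentHashSelector {

    private final SortedMap<Long, String> ring = new TreeMap<>();

    public ConsistentHashSelector(List<String> instances) {
        for (String instance : instances) {
            // A handful of virtual nodes per instance smooths the distribution.
            for (int v = 0; v < 100; v++) {
                ring.put(hash(instance + "#" + v), instance);
            }
        }
    }

    public String selectInstance(String key) {
        long h = hash(key);
        SortedMap<Long, String> tail = ring.tailMap(h);
        return tail.isEmpty() ? ring.get(ring.firstKey()) : tail.get(tail.firstKey());
    }

    private static long hash(String s) {
        try {
            byte[] digest = MessageDigest.getInstance("MD5")
                    .digest(s.getBytes(StandardCharsets.UTF_8));
            // Use the first 8 bytes of the digest as the ring position.
            long h = 0;
            for (int i = 0; i < 8; i++) {
                h = (h << 8) | (digest[i] & 0xFF);
            }
            return h;
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }
}
```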

For replica selection, we pre-calculate, for each new session, the instance that the CLB would most likely fail over to if the current primary instance serving the session were to fail in future. That instance is the one the current primary should replicate to. This gave us significant benefits: no client cookie updates were required to carry replica partner information dynamically, and no readjustment of replica partnerships was needed when a particular instance failed, since the hashing algorithm provides another instance to replicate to with just an API call. When the failed instance comes back into the cluster, the unexpired sessions it owned in its prior incarnation migrate back to it to maintain a balanced set of sessions across the cluster, and the replica selection algorithm again assigns the original failover instance as this primary's replication partner. A sketch of this replica selection follows below.
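Building on the hypothetical ConsistentHashSelector sketch above, the following shows one way such replica selection could work: re-hash the key over the cluster minus the primary and use the result as the replication partner. The ReplicaSelector class and the BEKey value shown are assumptions for illustration, not Sailfin code.

```java
import java.util.ArrayList;
import java.util.List;

// Builds on the ConsistentHashSelector sketch above (hypothetical, not CLB code).
// The replica for a session is the instance the hash would pick if the current
// primary were already gone, i.e. the predicted failover target.
public class ReplicaSelector {

    public static String selectReplica(String beKey, String primary, List<String> members) {
        List<String> survivors = new ArrayList<>(members);
        survivors.remove(primary);
        if (survivors.isEmpty()) {
            return null; // single-instance cluster: nothing to replicate to
        }
        // Hashing over the ring minus the primary yields the instance the LB
        // would route this key to after a failover of the primary.
        return new ConsistentHashSelector(survivors).selectInstance(beKey);
    }

    public static void main(String[] args) {
        List<String> members = List.of("instance1", "instance2", "instance3");
        String beKey = "sip-session-abc123";            // hypothetical BEKey value
        String primary = new ConsistentHashSelector(members).selectInstance(beKey);
        String replica = selectReplica(beKey, primary, members);
        System.out.println("primary=" + primary + ", replica=" + replica);
    }
}
```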

Since this is based on a hashed selection algorithm with a predetermined failover target, replica selection is dynamic and does not depend on a particular order of instances being ready in the cluster before one instance can point all of its sessions at a replication partner. More importantly, because failover lands on the specific instance where the replica data is located, there is significantly less network overhead to locate a particular session in the cluster when a request within that session's scope reaches the CLB. This leaves more bandwidth available to serve a larger number of user sessions. The approach is thus superior to buddy replication and helped us scale to higher throughput and sustain a larger number of long-running sessions. The short demonstration below illustrates why no cluster-wide lookup is needed after failover.
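The following small demo, again using the hypothetical classes sketched above, shows the key property: re-hashing the key over the surviving members yields exactly the instance that was pre-selected as the replica, so the session data is already local to the instance that receives the failed-over traffic.

```java
import java.util.ArrayList;
import java.util.List;

// Continues the hypothetical sketches above. It demonstrates why no
// cluster-wide session lookup is needed: after the primary fails, the hash
// maps the key to exactly the instance that already holds the replica.
public class FailoverDemo {

    public static void main(String[] args) {
        List<String> members = List.of("instance1", "instance2", "instance3");
        String beKey = "sip-session-abc123"; // hypothetical BEKey value

        String primary = new ConsistentHashSelector(members).selectInstance(beKey);
        String replica = ReplicaSelector.selectReplica(beKey, primary, members);

        // Simulate the primary failing: the LB re-hashes over the survivors.
        List<String> survivors = new ArrayList<>(members);
        survivors.remove(primary);
        String failoverTarget = new ConsistentHashSelector(survivors).selectInstance(beKey);

        // The failover target and the pre-selected replica are the same instance,
        // so the replica data is already local to the instance receiving traffic.
        System.out.println("replica=" + replica + ", failoverTarget=" + failoverTarget);
        System.out.println("same instance? " + replica.equals(failoverTarget));
    }
}
```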

It must be emphasized here that system-level and application-server-level tuning and sizing are essential to ensure sustained performance, scalability, and reliability, in addition to the improvements provided by the SSR scheme and other parts of the Sailfin v2 server (aka Sun GlassFish Communications Server 2.0).

As always, we welcome your feedback and encourage you to try Sailfin and send us any inputs and questions you may have in this respect.

Sailfin Promoted Builds are available here: Sailfin Downloads

