Introducing GlassFish Communications Server

This is the big post that explains what Project SailFin is. I hope it makes for an interesting read, and I look forward to your comments.

Introduction

Convergence is happening at various levels in the communications industry. Carriers seek to consolidate fixed and mobile services. There is operator consolidation due to acquisitions: when two operators merge, they need to rapidly combine subscriber databases, CRM, and billing systems to reap the cost savings. So carriers find that moving to a standards-based network and software infrastructure serves them well, allowing them to stay agile and keep costs low. Increasingly you hear about a Service Delivery Platform (SDP): one universal platform for network-facing functions, operating consumer portals, running CRM applications, integrating with OSS/BSS, and even exposing services through gateways. We wanted to drive GlassFish adoption in this challenging segment: if it is good enough for carriers, it is more than good enough for enterprises. Project SailFin started with this vision in June 2007. There is a lot of magic that goes into delivering reliable digital television and home phone services, beyond an application server, however good and extensive it may be. Carriers and Network Equipment Providers are good at building SDPs. We just want to be a part of it with GlassFish.

Service delivery platforms are mission critical. Let's face it: software bugs and hardware failures are unavoidable. So the game is about detecting failures, recovering rapidly, and keeping downtime to a minimum. Clusters need to expand and shrink with demand fluctuations, and upgrades need to be performed without downtime. A lot of the work this past year went into integrating a JSR 289 compliant SIP Servlets container into GlassFish and adding a converged load balancer and a session data replication system. Well, we are done and just about ready to make it generally available. The official name is GlassFish Communications Server (GlassFish CS), because it is based on GlassFish Enterprise. We may also use the nickname SailFin to refer to the same thing.

GlassFish CS is an open-source application server that supports the Java EE 5 specification and the SIP Servlets 1.1 API (figure 1). It is based on GlassFish v2.1 and so inherits much of its core architecture and administrative infrastructure. One can deploy Java EE applications, pure SIP applications, and converged applications that mix the two. In addition, SailFin includes a novel built-in converged load balancer, support for session data high availability, support for integration in SAF API based deployments, a choice of Application Routers, and IMS/Diameter support in a future release. Since GlassFish Communications Server is built on GlassFish Enterprise Server v2.1, it puts fresh and powerful technology in the hands of developers for creating next-generation services. Developers can mix SIP Servlets technology with technologies like EJB, the Java Persistence API, JDBC, and web services, and reuse container services like JNDI, JMS, dependency injection, security, and transaction management.

Figure 1: GlassFish, an Integrated Execution Platform for Converged SOA, SIP, and Java EE Services

SIP Protocol Stack and SIP Servlets

GlassFish CS integrates a high-performance SIP protocol stack implemented using Grizzly. Incoming requests pass through a chain of handlers, each performing a specific function. For example, the Overload Handler, if enabled, sends an appropriate error response when the server is overloaded. The optional load balancer, if enabled, applies the configured sticky load balancing logic and either forwards the request to the appropriate instance in the cluster or passes it up the stack. Initial requests are matched with deployed applications and routed appropriately; optionally, the default or a user-deployed Application Router is consulted to make the initial routing decisions. Multiple SIP listeners can be configured to accept TCP and/or UDP traffic on multiple network interfaces, and SIP over TLS is supported.
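To give a feel for the programming model the container hosts, here is a minimal JSR 289 style SIP servlet; the class name and responses are invented for illustration, not part of the product.

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.sip.SipServlet;
    import javax.servlet.sip.SipServletRequest;

    // A minimal SIP servlet: accepts every INVITE and BYE with a 200 OK.
    // It deploys to the converged container like any other servlet.
    public class HelloSipServlet extends SipServlet {

        @Override
        protected void doInvite(SipServletRequest req)
                throws ServletException, IOException {
            // Send a 200 OK final response to establish the dialog.
            req.createResponse(200).send();
        }

        @Override
        protected void doBye(SipServletRequest req)
                throws ServletException, IOException {
            // Acknowledge dialog tear-down.
            req.createResponse(200).send();
        }
    }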



Figure 2: SIP Protocol and Request Processing Stack in GlassFish

Web Services

Network services like dialing and SMS, and service enablers like location, are often exposed to developers through web services gateways. Web services are a mechanism for mashing up services from simpler building blocks. SailFin includes the Metro web services stack. Metro implements the important WS-* standards and the WS-I standardized interoperability profiles to ensure interoperability between Java and .NET web services. Metro is a popular stack, reused by many projects and well tuned for performance. It is likely to be of great interest to carriers looking to build service gateways.
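As a taste of the programming model Metro supports, here is a minimal JAX-WS service; the class name, operation, and endpoint URL are invented for illustration.

    import javax.jws.WebService;
    import javax.xml.ws.Endpoint;

    // A minimal JAX-WS web service; the stack generates the WSDL automatically.
    @WebService
    public class LocationService {

        // Exposed as a web service operation.
        public String lookupCell(String subscriberId) {
            return "cell-42 for " + subscriberId;
        }

        public static void main(String[] args) {
            // Publish on an embedded endpoint for quick testing;
            // in GlassFish the service would simply be deployed in a WAR.
            Endpoint.publish("http://localhost:8080/location",
                    new LocationService());
        }
    }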

Figure 3: Metro Web Services Stack in GlassFish

JBI and Open ESB Support

GlassFish has built-in support for Open ESB because it includes an implementation of the Java Business Integration (JBI) specification, JSR 208. JBI is a Java standard for structuring business systems according to a service-oriented architecture (SOA). In the JBI approach to SOA, a composite application is assembled from distinct services, where each service performs one or more business-related processes. JBI gives an enterprise a lot of agility in building composite applications, because services and data used in one application can be shared and reused in other applications.

The Open ESB runtime hosts a set of pluggable component containers that integrate various types of IT assets. These containers are interconnected by a fast, reliable, in-memory messaging bus called the Normalized Message Router (NMR), also referred to as the JBI bus. Service containers adapt IT assets to a standard services model based on XML message exchange, using standardized message exchange patterns (MEPs) defined on abstract WSDL. This improves interoperability and allows a mix-and-match of technologies from various vendors. When sending and receiving messages outside the JBI environment, the engine component containers communicate over the in-memory NMR and pass messages out to the client through an appropriate binding component container. When communication stays entirely within the JBI environment, no protocol conversion, message serialization, or message normalization is necessary, because all messages are already normalized and in the standard abstract WSDL format. JBI also standardizes the way composite applications are packaged.

GlassFish is Open ESB ready because it contains the Normalized Message Router and a Java EE Service Engine that can optionally expose any EJB or web service endpoint on the JBI bus. With this basic infrastructure, a GlassFish CS instance can host service engines and binding components for any protocol, participate in workflows, and integrate with billing, provisioning, or network element management systems.

Management and Monitoring 

The GlassFish management framework allows an administrator to configure and monitor instances and clusters securely and remotely from a web-based central administration console. Administration in a carrier deployment environment can be automated in many ways: the command line interface (CLI) can be used to script and automate processes, and a stable JMX API is available to programmatically monitor the server and to query and change configuration data.
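For instance, here is a small sketch of querying the server over JMX; the host, port (8686 is the customary DAS default), and credentials are assumptions that vary per installation.

    import java.util.HashMap;
    import java.util.Map;
    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class DomainMonitor {
        public static void main(String[] args) throws Exception {
            // 8686 is the usual JMX connector port on the DAS; adjust as needed.
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://localhost:8686/jmxrmi");

            // Supply admin credentials; these values are placeholders.
            Map<String, Object> env = new HashMap<String, Object>();
            env.put(JMXConnector.CREDENTIALS,
                    new String[] {"admin", "adminadmin"});

            JMXConnector connector = JMXConnectorFactory.connect(url, env);
            try {
                MBeanServerConnection mbsc =
                        connector.getMBeanServerConnection();
                // List all registered MBeans as a simple smoke test.
                for (ObjectName name : mbsc.queryNames(null, null)) {
                    System.out.println(name);
                }
            } finally {
                connector.close();
            }
        }
    }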



Figure 4: Centralized Administration


The GlassFish administration infrastructure, based on the Domain Administration Server (DAS) and Node Agents (NA), is carried over. The DAS performs aggregate actions such as starting and stopping clusters and instances, propagating configuration changes, and distributing deployed applications to the affected instances. The DAS also manages the central repository where all deployed applications, libraries, and configuration data are stored. The repository can be backed up and restored automatically, to revert to a previous configuration after a failure. The DAS does not need to be up and running for normal operation of services in a domain. Node Agents assist the DAS in managing the lifecycle of instances and provide a watchdog restart service for instances. Communication between the DAS, NAs, and instances is always secure.

Monitoring is supported through JMX and SNMP interfaces. The monitoring level can be varied dynamically from OFF to LOW to HIGH, changing the amount of information that is collected. It can be controlled at a fine-grained level to watch only the desired sub-systems, making it suitable for production deployments where performance degradation of running services is undesirable. Similarly, logging can be enabled and disabled dynamically for any sub-system, log verbosity can be controlled, and log files are automatically rotated for field serviceability. Additional log filters can be installed in the field for customization.

The administration web console is commonly used to administer and monitor GlassFish Communications Server. It provides an intuitive interface for navigating the complex configuration elements, along with wizards for completing common tasks like deploying applications, creating clusters, and configuring high availability and the load balancer.


Figure 5: GlassFish Web Administration Console


GlassFish provides a Self Management Framework. System administrators responsible for watching the application server for unpredictable conditions and applying corrective actions no longer have to do so manually. This feature helps prevent failures by detecting emerging problems rapidly and preemptively addressing them, improving availability through self-healing. A self-management rule is an association between an event and actions; events and actions are implemented by MBeans, and rules can be created to act on pre-defined events or new event types (a sketch of a custom action follows the list below). Some pre-defined event sources are:

•    Lifecycle events: generated when an instance is ready, shut down, or abruptly goes down
•    Monitor events: watch any statistic and generate an event when a threshold is crossed
•    Log events: generated when a log message is produced at a certain level by a specified logger
•    Trace events: configured to emit an event when a specified EJB or servlet is entered/exited
•    Timer events: used to perform actions at pre-determined times
•    Notification events: generic JMX notifications
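Since actions are MBeans that receive the triggering event as a JMX notification, a custom action can be sketched as a NotificationListener. This is an illustrative assumption about the contract, not the framework's exact packaging:

    import javax.management.Notification;
    import javax.management.NotificationListener;

    // A custom self-management action: the framework delivers the rule's
    // event to this listener as a JMX notification. When deployed, the
    // class would also be registered as an MBean (e.g. via a standard
    // AlertActionMBean interface, omitted here for brevity).
    public class AlertAction implements NotificationListener {

        public void handleNotification(Notification notification,
                                       Object handback) {
            // React to the event; a real action might page an operator or
            // adjust configuration through the management APIs.
            System.err.println("Self-management event: "
                    + notification.getType()
                    + " : " + notification.getMessage());
        }
    }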


Carrier Grade Service Execution

In this section, we will look at how SailFin provides high quality and continuous service. High availability requires that:

•    The system operates at the desired latency levels even when pushed above its engineered capacity
•    System throughput is steady and stable under increased load
•    Hardware and software failures are tolerated
•    Service failover is reliable, fast, and invisible to clients
•    A client's application state is persisted across recovery
•    The system and services are designed to scale horizontally
•    Adding capacity to, or removing it from, the platform does not affect running services and clients
•    Load is spread evenly based on simple and configurable criteria
•    Configuration can change dynamically without requiring restarts
•    Load spikes and overloaded resource conditions are detected and handled correctly

GlassFish Communications Server is designed to provide a carrier-grade service execution environment for web and converged SIP applications.

Active and Fully Replicated Services

Java EE and SIP container services are replicated in every instance in a cluster. Each server instance has a local JNDI service that is kept synchronized with the global namespace maintained on the Domain Administration Server, a local Java Message Service, Transaction Manager, Timer Service, and so on. There are no specially designated instances that perform special functions, which eliminates all single points of failure. Since services are available locally in every instance, the default service invocation is local and optimized to be fast. Services scale naturally as more instances and capacity are added.

GlassFish uses an active/active approach to high availability. All deployed applications are available on every instance in the cluster, and the load balancer can send requests to any of them. There is no need to configure an active primary and a standby secondary: all servers actively handle traffic, and no passive standby servers sit idle waiting for the uncommon failure scenario. Hardware investment is efficiently utilized.

Cluster health is actively monitored by a heartbeat service that detects when an instance goes down and triggers appropriate recovery actions on neighboring instances. For example, when a server crashes, in-flight distributed transactions are automatically recovered by another instance, to prevent database row locks from freezing the application for other users. If the failed instance tries to come up during the recovery, it is paused until recovery completes and then allowed to join the cluster and process traffic.

Performance: Low Latency and Consistent Throughput

Using the SipInviteProxy performance test with immediate call tear-down (call length of 0 seconds), GlassFish Communications Server is benchmarked at 900 calls per second on a dual-processor, 2-core (2.8 GHz AMD) system running SuSE Linux and Sun JDK 1.5u12. Average latency is 13.67 ms, and 99.86% of calls complete in under 120 ms under this stress load. Since this is based on GlassFish, all other containers are also well tuned for performance; GlassFish was used to publish very impressive SPECjAppServer2004 benchmark scores.

Overload Protection

SIP is an asynchronous peer-to-peer protocol; SIP requests are retransmitted if responses are not received in time. When a server falls behind in handling traffic, retransmissions are inevitable. This can lead to a snowball effect of increasing traffic, spiraling degradation in quality of service, and even total service failure. GlassFish Communications Server employs overload protection features that constantly monitor system CPU utilization and Java heap memory utilization to trigger traffic throttling logic. New incoming requests are answered with 503 Service Unavailable responses until the system returns to a safe and sustainable operating condition. Clients may choose to try another server or retry after some time, while existing clients and traffic continue to see consistent service levels.

Converged Load Balancing

Communications service delivery platforms run on horizontally scaled blade servers, so it is important to distribute traffic uniformly across all the blades. At the same time, it is more efficient to route subsequent messages to the same target server where the session state is cached in memory. In converged collaborative applications, one or more users may interact with the same service using different protocols, like SIP and HTTP, concurrently. Converged load balancing refers to a unique feature in GlassFish Communications Server that recognizes when incoming SIP and HTTP messages belong to the same executing application session instance and routes them coherently to the same target server.

The converged load balancing (CLB) function can be enabled in any GlassFish cluster. When enabled for self-load balancing, each instance can act as a load balancer in addition to handling traffic. Customers save money because they can use a simple L3 load balancer and do not need the more expensive SIP-capable load balancers from third parties.


Figure 6: Horizontal Scaling with Converged Load Balancer

It is also possible to use GlassFish clusters without self-load balancing and rely on an external SIP-aware hardware load balancer instead. When the load-balancing logic runs inside the server instances, a simple L3 load balancer suffices to distribute incoming packets to any instance in the cluster.

The default CLB policy performs sticky round-robin load balancing on SIP and HTTP traffic. Since many SIP clients and proxies may not preserve all headers, no additional SIP headers are added; the load balancing decision consumes the SIP from-tag, to-tag, and call-id parameters of the request. HTTP requests are load balanced using the session identifier in this mode.

As additional hardware capacity is added to the cluster, it is important that the benefit is made immediately visible to all clients, both old and new. The CLB monitors cluster size increases and incorporates them into the load-balancing table; requests that are not already sticky to target servers are directed to the new server instances. Server capacity can also be reduced, with user sessions and requests migrated to the remaining load-processing servers. This dynamic clustering capability enables planned downtime and service upgrades without disrupting the entire service: blades can be brought down one by one for maintenance and restarted to resume function. The functionality is based on applying a consistent hashing scheme to configurable headers or parts of incoming requests.
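To illustrate the consistent-hashing idea (a toy model, not SailFin's actual implementation), the sketch below maps a key built from the from-tag, to-tag, and call-id onto a ring of invented instance names. A given key keeps landing on the same instance, and adding or removing an instance only remaps the keys in that instance's arc of the ring:

    import java.util.SortedMap;
    import java.util.TreeMap;

    // Toy consistent-hash ring: instances are placed on a ring of hash
    // values, and a request key is routed to the first instance at or
    // after its own hash, wrapping around at the end of the ring.
    public class ConsistentHashRing {

        private final SortedMap<Integer, String> ring =
                new TreeMap<Integer, String>();

        public void addInstance(String instanceName) {
            ring.put(hash(instanceName), instanceName);
        }

        public void removeInstance(String instanceName) {
            ring.remove(hash(instanceName));
        }

        // Route a SIP request by the same parameters the default policy uses.
        public String route(String fromTag, String toTag, String callId) {
            int h = hash(fromTag + "|" + toTag + "|" + callId);
            SortedMap<Integer, String> tail = ring.tailMap(h);
            // Wrap around to the start of the ring if necessary.
            Integer key = tail.isEmpty() ? ring.firstKey() : tail.firstKey();
            return ring.get(key);
        }

        private static int hash(String s) {
            // Spread hashCode bits; a real ring would use a stronger hash.
            int h = s.hashCode();
            return (h ^ (h >>> 16)) & 0x7fffffff;
        }

        public static void main(String[] args) {
            ConsistentHashRing ring = new ConsistentHashRing();
            ring.addInstance("instance1");
            ring.addInstance("instance2");
            ring.addInstance("instance3");
            System.out.println(ring.route("a1b2", "c3d4",
                    "call-123@example.com"));
        }
    }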

Data Centric Rules

The default SIP and HTTP load balancing policies are suitable for many scenarios and provide sticky load distribution. However, in many IMS scenarios it may be necessary to look inside different SIP headers and URL fields, or even translate URI domains, and HTTP load balancing may need to consult custom cookies and headers. GlassFish Communications Server provides a facility for the administrator to customize load balancing based on data embedded in the SIP and HTTP messages. The consistent-hash algorithm is then applied to these administrator-specified fields. For example, one may specify SIP and HTTP rules as follows: if a Conference-Name header is present in a SIP message, load balance on that; otherwise use the user part of the From: SIP header. For HTTP requests, look for a parameter called Conference-Name in the request, or fall back to the Host: header.

    <user-centric-rules>
      <sip-rules>
        <if>
          <header name="Conference-Name"
                  return="request.Conference-Name">
            <exist/>
          </header>
          <else return="request.to.uri.resolve.user"/>
        </if>
      </sip-rules>

      <http-rules>
        <if>
          <request-uri parameter="ConferenceName"
                       return="parameter.ConferenceName">
            <exist/>
          </request-uri>
          <else return="request.Host"/>
        </if>
      </http-rules>
    </user-centric-rules>

Session Data Availability

The CLB assures service availability, even load distribution, failover of requests, and failback; but the data associated with user sessions must also be restored after a failure. GlassFish employs a peer-to-peer in-memory replication system to save critical container and user interaction state and ensure data availability. All active SIP dialogs have container state: dialog state represented as Dialog Fragments, session objects comprising SipSession, SipApplicationSession, and HttpSession, and associated timers. Modified state objects are replicated to a peer at the end of every SIP transaction or after a web request completes. Just like the CLB, the replication management sub-system runs inside the GlassFish server instance. It is responsible for making replicas and saving them to its replica peer, acting as a replicating peer for another instance in the cluster, and delivering requested replicas during failover and cluster healing.
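From the application's point of view, replication is transparent: state kept in session attributes is replicated along with the dialog state. A minimal sketch, assuming attribute values are made Serializable (which replication generally requires); the class and attribute names are invented for illustration:

    import java.io.IOException;
    import java.io.Serializable;
    import javax.servlet.ServletException;
    import javax.servlet.sip.SipApplicationSession;
    import javax.servlet.sip.SipServlet;
    import javax.servlet.sip.SipServletRequest;

    public class CallStateServlet extends SipServlet {

        // Attribute values should be Serializable so they can be replicated.
        static class CallInfo implements Serializable {
            String caller;
            long startMillis;
        }

        @Override
        protected void doInvite(SipServletRequest req)
                throws ServletException, IOException {
            CallInfo info = new CallInfo();
            info.caller = req.getFrom().getURI().toString();
            info.startMillis = System.currentTimeMillis();

            // State stored on the application session is part of the
            // replicated unit, along with the SIP dialog state itself.
            SipApplicationSession appSession = req.getApplicationSession();
            appSession.setAttribute("callInfo", info);

            req.createResponse(180).send(); // Ringing
        }
    }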


Figure 7: Session Data Replication in GlassFish

The diagram above shows a healthy four-instance GlassFish cluster running in highly available mode with CLB and session data replication enabled. For illustration, we show two SIP user agents and an HTTP browser interacting with the converged service. The CLB has directed this traffic collectively to Instance 4. The state for the interaction consists of one SipApplicationSession, two SipSession objects, one HttpSession object, and possibly many SipTimer objects. These objects are transparently replicated to the peer, Instance 1.

When a failure occurs and Instance 4 goes down, the CLB performs request rerouting. For illustration, let's assume that this group of requests is now sent to Instance 3. This instance does not have the necessary session data, so when a request arrives that touches this data, a query is sent to locate the replica. Instance 1 responds to this query and delivers the network of dialog state and session data to Instance 3. Session data ownership has now migrated to Instance 3, which will maintain replicas on its own replication partner. When instances leave or enter the cluster, replication relations are recalculated to ensure that there is always one replica available in the cluster.
