Monday Jul 13, 2009

Support for Cloud images of SailFin

If you are looking for support for the SailFin AMI, I am told there are currently two issues:

  1. This AMI was packaged with OpenSolaris Just Enough OS (JeOS), which I am told is not yet officially supported on EC2. It has been in use for a while, so support may be close, but I do not know.
  2. The SailFin build is v2 b20, while the supported release is 1.5.

This should not stop you from running a beta using this AMI.

The product manager wants to know which platforms there is demand for, so post a comment or two to let him know what you are looking for.

Thursday Jul 09, 2009

SailFin in the Amazon EC2 Cloud

Getting Started with SailFin on Amazon EC2

A few SailFin users have inquired about SailFin hosting. I run one out of my garage; it is free, but sorry, I do not have the additional capacity :-) I had also rented a dedicated server at GoDaddy ($80/mo) to host our demo server. So far it has worked out well and it is quite a reasonable option. There may be other hosting providers out there that we do not know about; do let us know if you run into any.

The Sun ISV Engineering folks stepped up to create a rentable, ready-to-go SailFin AMI for Amazon EC2. This is easier to use because it has SailFin and MySQL pre-installed and pre-configured as services, so when the image comes up, you have a running server. All it needs is your service/application. The downside of any cloud image is that every time you bring it up, it forgets what was configured. So you will have to write a script that starts up the image and then deploys your application (a sketch of such a script appears at the end of this walkthrough). Not a big deal. I will get you started in a moment...

It is always good to know where additional help is, in case you need it. For me, the Getting Started document for EC2 & OpenSolaris [1] was all that was needed. I will condense the steps provided there and add some details specific to SailFin.

  • First, get yourself an account on Amazon EC2, if you do not already have one. EC2 is not free, nor is it super cheap; be prepared to supply a credit card number for billing. It appears to be reliable, though I have not done any performance testing yet.
  • After you sign up you will get a Private Key and a Certificate. You need to save them as separate files with the .pem extension. The location of these two files needs to be set in the environment for the EC2 command line tools to work.
  • The AMI/API command line tools can be downloaded from the following locations:
    http://s3.amazonaws.com/ec2-downloads/ec2-ami-tools.zip
    http://s3.amazonaws.com/ec2-downloads/ec2-api-tools.zip
    After completing the downloads, follow the instructions in the Amazon Getting Started Guide to set up the tools. The guide is available from: http://docs.amazonwebservices.com/AWSEC2/latest/GettingStartedGuide/setting-up-your-tools.html. You will see how to download and configure the EC2 CLI tools in the early sections of [1]. Here is what I have in my setup:
EC2_HOME=/Users/Sreeram/ec2-sailfin/ec2-api-tools-1.3-36506
export EC2_HOME

EC2_PRIVATE_KEY=${EC2_HOME}/pk-<yourPKid>.pem
export EC2_PRIVATE_KEY

EC2_CERT=${EC2_HOME}/cert-<yourCERT-id>.pem
export EC2_CERT

EC2_URL=http://ec2.amazonaws.com
export EC2_URL

Just how does one fire up an AMI instance?

The AMI is called ami-9df312f4. It is SailFin v2, build 20, which has recent bug fixes. This AMI is provisioned with OpenSolaris, SailFin and MySQL, all pre-configured. When you start the AMI, SailFin and MySQL should both be up and ready to use. I suspect that once you have your own application on top of it, you will want to automate its deployment as well.

ElasticFox is a very efficient graphical interface for EC2. When beginning, though, let's use the command line interface, so you get the hang of what is going on under the hood. All the commands are in the ${EC2_HOME}/bin directory.

  • First, generate a keypair:
ec2-add-keypair mykeypair

This will produce output that you must copy and paste into a file called mykeypair. Save everything between (and including) the "-----BEGIN RSA PRIVATE KEY-----" and "-----END RSA PRIVATE KEY-----" lines. You can generate as many keypairs as you want; you may want to keep one for SailFin usage. Make the file readable only by you, or ssh will refuse to use it:

chmod 600 mykeypair
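
If you prefer to script this step, something like the following should do it (a sketch; sed just extracts the private-key block from the ec2-add-keypair output):

ec2-add-keypair mykeypair | sed -n '/-----BEGIN RSA PRIVATE KEY-----/,/-----END RSA PRIVATE KEY-----/p' > mykeypair
chmod 600 mykeypair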

  • Now start up the SailFin 2.0b20 AMI, for example:
ec2-run-instances ami-9df312f4 -k mykeypair
The output should be something like this:

RESERVATION    r-c73171ae    609350199924    default
INSTANCE    i-dfa197b6    ami-9df312f4            pending    mykeypair    0        m1.small    2009-07-09T22:00:03+0000    us-east-1d    aki-6552b60c    ari-6452b60d        monitoring-disabled

  • It does take a while for the instance to fire up. Check status with the instance id reported above:
ec2-describe-instances | grep i-dfa197b6

INSTANCE    i-dfa197b6    ami-9df312f4            pending    mykeypair    0        m1.small    2009-07-09T22:00:03+0000    us-east-1d    aki-6552b60c    ari-6452b60d        monitoring-disabled

Eventually (5 minutes or less) it will enter running state and is usable. You will then see a public DNS address for the active instance:

INSTANCE    i-dfa197b6    ami-9df312f4
ec2-174-129-140-114.compute-1.amazonaws.com    ip-10-244-15-112.ec2.internal    running    mykeypair    m1.small    2009-07-09T22:00:03+0000    us-east-1d    aki-6552b60c    ari-6452b60d        monitoring-disabled

You can see the DNS name of the instance in the command output, in this case ec2-174-129-140-114.compute-1.amazonaws.com. Also note the instance identifier, in this case i-dfa197b6.

Opening Ports

The typical SailFin server needs some critical ports open for traffic and administration. You can authorize ports explicitly, and these authorizations are persistent. Here we use the default security group.

    • ec2-authorize default -p 22 (opens the SSH port; otherwise the ssh command below would not work)
    • ec2-authorize default -p 8080 (HTTP traffic)
    • ec2-authorize default -p 4848 (Admin Server port)
    • ec2-authorize default -p 5060 (SIP traffic)
    • To allow UDP SIP traffic: ec2-authorize default -p 5060 --protocol udp
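
You can double-check what is open on the group with ec2-describe-group, which lists each permission:

ec2-describe-group default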

Explore the running image

You can use the DNS name of the instance when configuring your SIP phones. I used the DynDNS service to point a domain at this Amazon instance. The IP address of the instance is encoded in the DNS name; for example, the external IP address of ec2-72-44-33-244.compute-1.amazonaws.com is 72.44.33.244.

You can ssh to your instance. Use the DNS name that was reported by ec2-describe-instances (above):

ssh -i mykeypair -l root ec2-174-129-140-114.compute-1.amazonaws.com

Macintosh-202:ec2-api-tools-1.3-36506 Sreeram$ ssh -i mykeypair -l root ec2-75-101-184-167.compute-1.amazonaws.com
The authenticity of host 'ec2-75-101-184-167.compute-1.amazonaws.com (75.101.184.167)' can't be established.
RSA key fingerprint is 15:3a:03:d1:b9:3e:2d:37:7f:77:41:05:a2:1e:9a:d6.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ec2-75-101-184-167.compute-1.amazonaws.com,75.101.184.167' (RSA) to the list of known hosts.
Last login: Thu Jul  9 22:47:48 2009 from sca-ea-fw-1.sun

Sun Microsystems Inc.   SunOS 5.11      snv_101b        November 2008

Welcome to an OpenSolaris on Amazon EC2 instance

Use of this OpenSolaris instance is subject to the license terms found at
http://www.sun.com/amazon/license/ami and in /etc/notices/LICENSE.

Users are advised to protect their MySQL database by following guidelines
mentioned within MySQL 5.1 database documentation
http://dev.mysql.com/doc/refman/5.1/en/security-guidelines.html

Additional software packages from the OpenSolaris Package Repository are
governed by the licenses provided in the individual software packages which can
be viewed using the package manager graphical user interface.

OpenSolaris 2008.11 AMI contains temporary fix for bug id #6787193 and
#6788721.
For details refer to http://blogs.sun.com/ec2/entry/opensol2008_11_fixes

For important security information, and image usage instructions please review
the files in /root/ec2sun.

For latest updates and late breaking information, please visit the OpenSolaris
on EC2 blog at http://blogs.sun.com/ec2

Register at http://www.sun.com/third-party/global/amazon/ to receive latest
news on OpenSolaris AMIs

For technical questions: ec2-solaris-support@sun.com
You have new mail.
root@ip-10-244-159-143:~# ls
ec2sun
root@ip-10-244-159-143:~# cd ec2sun
root@ip-10-244-159-143:~/ec2sun# ls
DTrace.README  README  mysql.README  sailfin.README  sysbench.README

Be sure to look at the README files once before you start using the instance. You should see instances of MySQL and SailFin already running. To check their status you can use:
root@ip-10-244-159-143:~/ec2sun# svcs -a | grep -i mysql
online         22:41:31 svc:/application/mysql:default
root@ip-10-244-159-143:~/ec2sun# svcs -a | grep -i domain1
online         22:42:33 svc:/application/SUNWappserver/domain1:default
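
Both are managed by SMF, so if you ever need to bounce one, svcadm works with the FMRIs reported above:

svcadm restart svc:/application/mysql:default
svcadm restart svc:/application/SUNWappserver/domain1:default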


You can also point your browser to the administration server: http://<DNS-address>:4848 

For your convenience, a database connection pool and a JDBC resource have been configured in SailFin. You can see them by exploring JDBC Resources in the Admin console. Ping the connection pool named mysql; it should succeed. Now run your usual asadmin scripts to deploy your service to SailFin.
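
The same checks and the deployment can also be done from a shell on the instance (a sketch; myapp.war is a placeholder for your own archive):

asadmin ping-connection-pool mysql
asadmin deploy myapp.war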

Stopping the Instance 

    You can terminate the instance as follows:

    ec2-terminate-instances i-<something>
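
Putting it all together, the wrapper script mentioned earlier might look roughly like this (a sketch under obvious assumptions: myapp.war stands in for your archive, and a real script would parse the ec2-describe-instances output instead of hand-pasting the DNS name):

#!/bin/sh
# Start a fresh SailFin instance
ec2-run-instances ami-9df312f4 -k mykeypair

# ...wait for ec2-describe-instances to report "running", then note the public DNS name
HOST=ec2-174-129-140-114.compute-1.amazonaws.com

# Deploy the application to the remote SailFin admin server
asadmin deploy --host $HOST --port 4848 --user admin myapp.war

# ...and when you are done, terminate the instance (insert your instance id)
ec2-terminate-instances i-<something>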

Using ElasticFox

ElasticFox is a free Firefox plugin that simplifies dealing with the EC2 cloud. You can search for images, start them up, configure them and do most of the management. Download details for ElasticFox are on the AWS website. The documentation is great, and you will figure it out quite easily.


    Tuesday Dec 23, 2008

    Introducing GlassFish Communications Server

This is the big blog post that goes over what Project SailFin is. I hope it is an interesting read, and I hope to receive your comments.

    Introduction

Convergence is happening at various levels in the communications industry. Carriers seek to consolidate fixed and mobile services. There is operator consolidation due to acquisitions: when two operators merge, they need to rapidly combine subscriber databases, CRM and billing systems to reap the cost-savings benefits. So carriers find that moving to a standards-based network and software infrastructure is good for them; it allows them to be agile and keep costs low. Increasingly you hear about a Service Delivery Platform (SDP): one universal platform for network-facing functions, operating consumer portals, running CRM applications, integrating with OSS/BSS, and even exposing services through gateways. We wanted to drive GlassFish adoption in this challenging segment: if it is good enough for carriers, it is more than good enough for enterprises. Project SailFin started with this vision in June 2007. There is a lot of magic that goes into delivering reliable digital television and home phone services, beyond an application server, however good and extensive it may be. Carriers and Network Equipment Providers are good at building SDPs. We just want to be a part of it with GlassFish.

Service Delivery Platforms are mission critical. Let's face it, software bugs and hardware failures are unavoidable. So the game is about detecting failures, recovering rapidly, and keeping downtime to a minimum. Clusters need to expand and shrink with demand fluctuations. Upgrades need to be performed without downtime. A lot of the work this past year went into integrating a JSR 289 compliant SIP Servlets container in GlassFish and adding a converged load balancer and a session data replication system. Well, we are done and about ready to make it generally available. The official name is GlassFish Communications Server (GlassFish CS), because it is based on GlassFish Enterprise. We may also use the nickname SailFin to refer to the same thing.

GlassFish CS is an open-source application server that supports the Java EE 5 specification and the SIP Servlets 1.1 API (figure 1). It is based on GlassFish v2.1 and so inherits much of its core architecture and administrative infrastructure. One can deploy Java EE applications, pure SIP applications, and converged applications that mix all of the above. In addition, SailFin includes a novel built-in converged load balancer, support for session data high availability, support for integration in SAF API based deployments, a choice of Application Routers, and IMS/Diameter support in a future release. Since GlassFish Communications Server is built on GlassFish Enterprise Server v2.1, it puts fresh and powerful technology in the hands of developers for creating next-generation services. Developers can mix SIP Servlets technology with technologies like EJB, the Java Persistence API, JDBC and web services, and reuse container services like JNDI, JMS, dependency injection, security and transaction management.

Figure 1: GlassFish: An Integrated Execution Platform for Converged SOA, SIP, and Java EE Services

    SIP Protocol Stack and SIP Servlets

GlassFish CS integrates a high-performance SIP protocol stack implemented using Grizzly. Incoming requests pass through a chain of handlers that perform specific functions. For example, the Overload Handler, if enabled, sends an appropriate error response if the server is overloaded. The optional load balancer, if enabled, applies the specified sticky load balancing logic and either forwards the request to the appropriate instance in the cluster or passes it up the stack. Initial requests are matched with deployed applications and appropriately routed. Optionally, the default or a user-deployed Application Router is consulted to make the initial routing decisions. Many SIP listeners can be configured to accept TCP and/or UDP traffic on multiple network interfaces. SIP over TLS is supported.



Figure 2: SIP Protocol and Request Processing Stack in GlassFish

    Web Services

Network services like dialing and SMS, and service enablers like location, are often exposed to developers through web services gateways. Web services are a mechanism for mashing up services from simpler building blocks. SailFin includes the Metro web services stack. Metro implements the important WS-* standards and the WS-I standardized interoperability profiles in order to assure interoperability between Java and .NET web services. Metro is a popular stack, reused by many and well tuned for performance. It is likely to be of great interest to carriers who are looking to build service gateways.

    Figure 3: Metro Web Services Stack in GlassFish

    JBI and Open ESB Support

GlassFish has built-in support for Open ESB by including an implementation of the Java Business Integration (JBI) specification, JSR 208. This is a Java standard for structuring business systems according to a service-oriented architecture (SOA). In the JBI approach to SOA, a composite application is assembled from distinct services, where each service performs one or more business-related processes. JBI gives an enterprise a lot of agility in building composite applications, because services and data that are used in one application can be shared and reused in other applications.

The Open ESB runtime hosts a set of pluggable component containers, which integrate various types of IT assets. These pluggable component containers are interconnected with a fast, reliable, in-memory messaging bus called the Normalized Message Router (NMR), also referred to as the JBI Bus. Service containers adapt IT assets to a standard services model based on XML message exchange, using standardized message exchange patterns (MEPs) over abstract WSDL. This improves interoperability and allows a mix-and-match of technologies from various vendors. When sending and receiving messages outside the JBI environment, the engine component containers communicate over the in-memory NMR messaging infrastructure and pass messages out to the client through an appropriate binding component container. When communication is entirely within the JBI environment, no protocol conversion, message serialization, or message normalization is necessary, because all messages are already normalized and in the standard abstract WSDL format. JBI also standardizes the way composite applications are packaged.

GlassFish is Open ESB ready because it contains the Normalized Message Router and a Java EE Service Engine that can optionally expose any EJB or web service endpoint on the JBI bus. With this basic infrastructure, a GlassFish Communications Server instance can host service engines and binding components for any protocol, participate in workflows, and integrate with billing, provisioning, or network element management systems.

    Management and Monitoring 

The GlassFish management framework allows an administrator to configure and monitor instances and clusters securely and remotely, from a web-based central administration console. Administration in a carrier deployment environment can be automated in many ways: the command line interface (CLI) can be used to script and automate processes, and a stable JMX API is available to programmatically monitor the server, query configuration, and change configuration data.
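
For example, a provisioning script might drive the CLI with commands along these lines (a sketch; the cluster, node agent, instance and application names are all placeholders):

asadmin create-cluster mycluster
asadmin create-instance --nodeagent agent1 --cluster mycluster instance1
asadmin start-cluster mycluster
asadmin deploy --target mycluster myapp.war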



    Figure 4: Centralized Administration


The GlassFish administration infrastructure, based on the Domain Administration Server (DAS) and Node Agents (NA), is carried over. The DAS performs aggregate actions such as starting and stopping clusters and instances, propagating configuration changes, and distributing deployed applications to affected instances. The DAS also manages the central repository where all deployed applications, libraries and configuration data are stored. The repository can be automatically backed up and restored, to revert to a previous configuration after a failure. The DAS does not need to be up and running for normal operation of services in a domain. Node Agents assist the DAS in managing the lifecycle of instances and provide a watchdog restart service for instances. Communication between the DAS, NAs and instances is always secure.

Monitoring is supported through JMX and SNMP interfaces. The monitoring level may be varied dynamically from OFF to LOW to HIGH, changing the amount of information that is collected. The monitoring level can be controlled at a fine-grained level to watch only the desired sub-systems, making it suitable for production deployments where performance degradation of running services is not acceptable. Similar to monitoring, logging can be enabled and disabled dynamically for any sub-system, log verbosity levels can be controlled, and log files are automatically rotated for field serviceability. Additional log filters can be installed in the field for customization.
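
For example, the monitoring level of a single sub-system can be raised at runtime through the CLI (a sketch using the dotted-names syntax; the exact attribute path may vary by release):

asadmin set server.monitoring-service.module-monitoring-levels.http-service=HIGH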

The administration web console is commonly used to administer and monitor GlassFish Communications Server. The web console provides an intuitive interface for navigating the complex configuration elements, with wizards for completing common tasks like deploying applications, creating clusters, and configuring high availability and the load balancer.


    Figure 5: GlassFish Web Administration Console


GlassFish provides a Self Management Framework. System administrators, who are responsible for monitoring the application server for unpredictable conditions and applying corrective actions, no longer have to watch for these conditions manually. This feature helps prevent failures by detecting emerging problems rapidly and preemptively addressing them, thus improving availability through self-healing. A self-management rule is an association between an event and actions; events and actions are implemented by MBeans. Rules can be created to act on pre-defined events or new event types. Some pre-defined event sources are:

•    Lifecycle events: generated when an instance is ready, is shut down, or goes down abruptly
•    Monitor events: monitor any statistic and generate an event when a threshold is crossed
•    Log events: generated when a log message is produced at a certain level by a specified logger
•    Trace events: configured to emit an event when a specified EJB or Servlet is entered/exited
•    Timer events: used to perform actions at pre-determined times
•    Notification events: generic JMX notification events


    Carrier Grade Service Execution

    In this section, we will look at how SailFin provides high quality and continuous service. High availability requires that:

•    The system operates at the desired latency levels even when above the engineered capacity
•    System throughput is steady and stable under increased loads
•    Hardware and software failures are tolerated
•    Service failover is reliable, fast and invisible to the clients
•    The client's application state is preserved across recovery
•    The system and services are designed to scale horizontally
•    Adding capacity to, or removing it from, the platform does not affect running services and clients
•    Load is spread evenly, based on simple and configurable criteria
•    Configuration can change dynamically without requiring restarts
•    Load spikes and overloaded resource conditions are detected and handled correctly

    GlassFish Communication Server is designed to provide carrier grade service execution environment for web and converged SIP applications.

    Active and Fully Replicated Services

Java EE and SIP container services are replicated in every instance in a cluster. Each server instance has a local JNDI service that is kept synchronized with the global namespace maintained on the Domain Administration Server, a local Java Message Service, a Transaction Manager, a Timer Service, and so on. There are no specially designated instances that perform special functions; this eliminates all single points of failure. Since services are available locally in every instance, the default service invocation is local and optimized to be fast. Services scale naturally as more instances and capacity are added.

GlassFish uses an Active/Active approach to high availability. All deployed applications are available at every instance in the cluster, and the load balancer can send requests to any of them. There is no need to configure an active primary and a standby secondary, or passive stand-by servers for the uncommon failure scenario: all servers actively handle traffic, so the hardware investment is efficiently utilized.

    Cluster health is actively monitored by a heartbeat service that detects when an instance goes down and triggers appropriate recovery actions from neighboring instances. For example, when a server crashes, in-flight distributed transactions are arranged for automatic recovery by another instance, to prevent database row locking from freezing the application for other users.  If the failed instance tries to come up during the recovery, it is paused until recovery is completed and then allowed to join the cluster and process traffic.

    Performance: Low Latency and Consistent Throughput

Using the SipInviteProxy performance test with immediate call tear-down (call length is 0 seconds), GlassFish Communications Server is benchmarked at 900 CPS on a dual-processor, 2-core (2.8GHz AMD) system using SuSE Linux and Sun JDK 1.5u12. Average latency is 13.67ms, and 99.86 percent of calls complete in under 120ms under this stress load. Since this is based on GlassFish, all the other containers are also well tuned for performance; GlassFish was used to publish very impressive SPECjAppServer2004 benchmark scores.

    Overload Protection

SIP is an asynchronous peer-to-peer protocol. SIP requests are retransmitted if responses are not received in time. When a server falls behind in handling traffic, retransmissions are inevitable. This can lead to a snowball effect of increasing traffic, a spiraling degradation in quality of service, and even total service failure. GlassFish Communications Server employs overload protection features that constantly monitor system CPU utilization and Java heap memory utilization to trigger traffic throttling logic. New incoming requests are answered with 503 Service Unavailable responses until the system returns to a safe and sustainable operating condition. Clients may choose to try another server or try again after some time. Existing clients and traffic are assured consistent service levels.

    Converged Load Balancing

    Communications service delivery platforms run on horizontally scaled blade servers. It is important to distribute traffic uniformly across all the blades. It is more efficient to route the subsequent messages to the same target server where the session state is cached in memory. In converged collaborative applications, one or more users may interact with the same service and use different protocols like SIP and HTTP concurrently. Converged Load Balancing refers to a unique feature in GlassFish Communications Server that recognizes if the incoming SIP and HTTP messages belong to the same executing application session instance and route them coherently to the same target server.

The Converged Load Balancing (CLB) function can be enabled in any GlassFish cluster. When enabled for self-load-balancing, each instance can act as a load balancer in addition to handling traffic. Customers save money because they can use a simple L3 load balancer and do not need the more expensive SIP-capable load balancers from third parties.


    Figure 6: Horizontal Scaling with Converged Load Balancer

It is also possible to use a GlassFish cluster without self-load-balancing and use an external SIP-aware hardware load balancer. When the load-balancing logic runs inside the server instances, a simple L3 load balancer suffices to distribute incoming packets to any instance in the cluster.

    The default CLB policy performs sticky round robin load balancing on SIP and HTTP traffic. Since many SIP clients and proxies may not preserve all headers, no additional SIP headers are added. The load balancing decision consumes SIP from-tag, to-tag, and call-id parameters of the request. HTTP requests are load balanced using the session identifier, in this mode.

As additional hardware capacity is added to the cluster, it is important that the benefit is made immediately visible to all clients, both old and new. CLB monitors cluster size increases and incorporates them into the load-balancing table; requests that are not already sticky to target servers are directed to the new server instances. Server capacity can also be reduced, with user sessions and requests migrated to the remaining load-processing servers. This dynamic clustering capability enables planned downtime and service upgrades without disrupting the entire service: blades can be brought down one by one for maintenance and restarted to resume function. The functionality is based on applying a consistent hashing scheme to configurable headers or parts of incoming requests.
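
Operationally, growing a running cluster boils down to a couple of CLI commands (a sketch; the node agent, cluster and instance names are placeholders):

asadmin create-instance --nodeagent agent2 --cluster mycluster instance5
asadmin start-instance instance5

and, later, to shrink the cluster again:

asadmin stop-instance instance5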

    Data Centric Rules

The default SIP and HTTP load balancing policies are suitable for many scenarios and provide sticky load distribution. However, in many IMS scenarios it may be necessary to look inside different SIP headers and URL fields, or even translate URI domains, and HTTP load balancing may need to consult custom cookies and headers. GlassFish Communications Server provides a facility for the administrator to customize load balancing based on data embedded in the SIP and HTTP messages: the consistent-hash algorithm is applied to these administrator-specified fields. For example, one may specify SIP and HTTP rules as follows: if a Conference-Name header is available in a SIP message, load balancing is performed on that; otherwise, use the user part of the From: SIP header. For HTTP requests, look for a parameter called ConferenceName in the request, or use the Host: header.

    <user-centric-rules>
      <sip-rules>
        <if>
          <header name="Conference-Name" return="request.Conference-Name">
            <exist/>
          </header>
          <else return="request.to.uri.resolve.user"/>
        </if>
      </sip-rules>
      <http-rules>
        <if>
          <request-uri parameter="ConferenceName" return="parameter.ConferenceName">
            <exist/>
          </request-uri>
          <else return="request.Host"/>
        </if>
      </http-rules>
    </user-centric-rules>

    Session Data Availability

CLB assures service availability, even load distribution, failover of requests, and failback; but the data associated with user sessions must also be restored after a failure. GlassFish employs a peer-to-peer in-memory replication system to save critical container and user interaction state and ensure data availability. All active SIP dialogs have container state: dialog state represented as Dialog Fragments, session objects comprising SipSession, SipApplicationSession and HttpSession, and associated timers. Modified state objects are replicated to a peer at the end of every SIP transaction or after a web request is completed. Just like the CLB, the replication management sub-system runs inside the GlassFish server instance. It is responsible for making replicas and saving them to its replica peer, and for acting as the replicating peer of another instance in the cluster. It is also responsible for delivering and acquiring requested replicas during failover and cluster healing.


    Figure 7: Session Data Replication in GlassFish

The diagram above shows a healthy four-instance GlassFish cluster running in highly available mode with CLB and session data replication enabled. For illustration, we show two SIP user agents and an HTTP browser interacting with the converged service. The CLB has directed this traffic collectively to Instance 4. The state for the interaction consists of one SipApplicationSession, two SipSession objects, one HttpSession object and possibly many SipTimer objects. These objects are transparently replicated to the peer, Instance 1.

When a failure occurs and Instance 4 goes down, the CLB performs request rerouting. For illustration, let's assume that this group of requests is now sent to Instance 3. This instance does not have the necessary session data, so when a request is received that touches this data, a query is sent out to locate the replica. Instance 1 responds to this query and delivers the network of dialog state and session data to Instance 3. The session data ownership has now migrated to Instance 3, which will maintain replicas on its own replication partner. When instances leave or join the cluster, replication relations are recalculated to ensure that there is always one replica available in the cluster.

    2008 is over and SailFin is getting ready to leave the station

My last post was exulting about JSR 289 passing the vote way back in July. We have been moving along since then. I thought SailFin would be the first to pass the JSR 289 compatibility tests, but Oracle made sure that they had already passed the TCK by the time it was made public! And then we found that there were issues in the TCK that needed discussion with the spec lead. Thanks to Mihir, we got past all of that and had a clean run in November.

The economic turmoil and the U.S. presidential election have taken their share of the headline news. It has been a less sensational but productive year for the SailFin community. Some major accomplishments:

• Community membership crossed 200, with over 45 members holding developer status.
• We found early adopters through the community, with great interest from companies building unified communications solutions and conferencing systems for enterprises and SMBs.
• Ericsson has been a tremendous partner all along. Ericsson Service Development Studio 4.1 is an Eclipse-based IDE that includes SailFin by default.
• IPTV live field trials of GlassFish and SailFin ran with Sonaecom in Portugal. I am told more such engagements are in progress.

The GA release of SailFin is planned for January 2009. As we approach this big date, there will be many more informative blog entries on the performance, the unique features, and the team behind SailFin.

    Thursday Jul 24, 2008

JSR-289 passes JCP Executive Committee Vote

The JCP Executive Committee has approved the JSR 289 Final Approval Ballot. Congratulations to the EG and Spec Leads! We had many along the way, so it is good to name them all: Nasir Khan, Jarek Wilkiewicz, Mihir Kulkarni and Yannis. JSR-289 standardizes the converged SIP Servlets and Java EE application model, which is a great step forward and keeps up the momentum. We just got a TCK preview and hope to get the final version of the TCK soon. Look for more news on SailFin release dates with JSR-289 compatibility, high availability and everything else we are working on.

    Monday May 21, 2007

    Adding Voice to Java Web Applications - Samples

We just posted our slides for the JavaOne talk (TS-4919: Adding Telephony to Enterprise Java Applications) on the SailFin website. During the talk we described two samples: a simple Click-To-Dial application that lets two logged-in users have a conversation, and a full-blown multi-user collaboration application, Conference Manager, with live voice mixing. The source code for the latter is coming soon. Enjoy the Click-To-Dial sample and let us know how it works for you.

    Tuesday May 08, 2007

    Project SailFin launches in GlassFish Community


After last JavaOne, I started looking at a completely different area: Java and communications. After tinkering to see how I could add voice to a Java EE web application, it seemed like there was no easy and portable way to do it. I looked around to see what was already out there that was close enough to Java EE and could be accepted by enterprise developers. The answer seemed to be SIP Servlets. Sun joined the JSR-289 expert group and focused on the proposed application model that combines SIP and Java EE. JSR-289 is in the Early Draft Review stage. Take a look and get involved.

We were also lucky to find a great partner in Ericsson. By the end of 2006, we had Ericsson's SIP Application Server running alongside GlassFish. We were able to inject Java EE services like EntityManager inside SIP Servlets and initiate calls from inside web applications. One thing led to another, and here we are at another JavaOne! SailFin is live today as the first open source SIP Servlets technology project in the GlassFish Community. Visit us to see the sources and build instructions.

    We sent around an evaluation kit inside Sun and the people at Sun Labs built an amazing fully Java EE and JSR-289 based Conferencing application on top of SailFin. Jonathan Kaplan from the Labs and I have a talk about this on Thursday (TS-4919: Adding Telephony to Java Technology Based Enterprise Applications, 4:10-5:10pm, Hall E, Rm 134). Also visit us at the Pod (# 968) to see the demo.

An interesting tidbit: the Surfing Duke image on the SailFin project web page was designed by James Gosling. Thanks, James.