Tuesday Sep 20, 2011

Grizzly 2.0: Asynchronous HTTP client

Ryan has implemented a Grizzly provider for the Ning Asynchronous HTTP Client.

More info on Ryan's blog.

Tuesday Aug 23, 2011

Grizzly 2.1.2 has been released

The Grizzly team is glad to announce the Grizzly 2.1.2 release.

Here is the link to Ryan's blog, where you can find more info.

Grizzly 2.0: HttpServer API. Asynchronous HTTP Server - Part II

In my previous blog, "Asynchronous HTTP Server - Part I", we discussed the possibility of delegating HTTP request processing to a custom thread in order not to affect the stability and response time of the entire HTTP server. An example of where this might be useful is when dealing with tasks, like database requests, which may take a relatively long time to complete.

We saw that by using Grizzly's ability to suspend/resume HTTP request processing, we'd be able to move a possible bottleneck from the Grizzly HttpServer service thread-pool to an application-specific custom thread-pool. This is very important, because it lets us isolate applications inside the HttpServer and avoid situations where clients of one application block clients of another application and, potentially, the entire server.

Now we will try to go further and see how we can deal with another potential bottleneck, client<->server communication using blocking streams, without having to resort to an additional thread pool. Here are some use cases (maybe you'll be able to come up with more of them):

  • Client<->server communication is slow. We spend a significant amount of time waiting for data to become available or for the outbound socket buffer to be flushed so that we can continue writing. This slowness might be caused by the underlying network architecture (e.g. relatively slow, high-latency mobile networks) or by a DoS-like attack.
  • The client and/or server is sending large amounts of data (e.g. a large file upload/download). This operation can usually be time consuming even if client<->server communication is fast. Given this, it would only take a couple of clients uploading/downloading files to occupy all available threads and make the server unresponsive to other clients.
  • A combination of the two above.

The Grizzly 2.x HttpServer API has the concept of NIO streams (non-blocking streams), which are pretty similar to standard java.io streams, but have the ability to check whether the next read/write operation can be executed without blocking. For example:

-----------------------------------------------
public class MyHttpHandler extends HttpHandler {
    @Override
    public void service(Request request, Response response) throws Exception {
        NIOInputStream nioInputStream = request.getInputStream();
        System.out.println("Number of bytes we can read without blocking is: " + nioStream.readyData());

        NIOOutputStream nioOutputStream = response.getOutputStream();
        System.out.println("I'm able to write 8K without blocking: " + nioOutputStream.canWrite(8192));
    }
}
-----------------------------------------------

The methods in the sample above can help you figure out whether the next I/O operation is going to block or not. Additionally, Grizzly provides a notification mechanism that may be used to get notified when the next read/write operation can be executed without blocking.

The general schema for non-blocking reading may look like:

-----------------------------------------------
final NIOInputStream nioInputStream = request.getInputStream();

// Register ReadHandler, which is going to be notified
// when at least 1 byte is available for reading
nioInputStream.notifyAvailable(new ReadHandler() {

    @Override
    public void onDataAvailable() throws Exception {
        processAvailable();

        // Reregister this ReadHandler to be notified 
        // when the next chunk of data becomes available
        nioInputStream.notifyAvailable(this);
    }

    @Override
    public void onAllDataRead() throws Exception {
        // Process the last chunk of data
        processAvailable();
        // Complete request processing
        // (preparing and sending response back to the client)
        complete();
    }

    @Override
    public void onError(Throwable t) {
        // Error occurred when reading data
        handleError(t);
        complete();
    }
});
-----------------------------------------------

* It's possible to set the minimum number of bytes you require to become available before the ReadHandler is notified.
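
A hedged one-line variant (assuming this Grizzly 2.x API exposes a two-argument notifyAvailable(handler, size); myReadHandler stands for the ReadHandler from the sample above):

-----------------------------------------------
// Assumption: the two-argument notifyAvailable(ReadHandler, size) variant
// postpones the notification until at least `size` bytes are ready
nioInputStream.notifyAvailable(myReadHandler, 1024);
-----------------------------------------------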

A similar approach is available for non-blocking writing using NIOOutputStream.
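
As an illustration, here is a minimal sketch of the write side; it assumes the NIOOutputStream.notifyCanWrite(WriteHandler, length) notification and the WriteHandler callbacks mirror the read side shown above, and writeNextChunk(...)/handleError(...) are hypothetical application helpers:

-----------------------------------------------
final NIOOutputStream nioOutputStream = response.getOutputStream();

// Ask to be notified once at least 8K can be written without blocking
nioOutputStream.notifyCanWrite(new WriteHandler() {

    @Override
    public void onWritePossible() throws Exception {
        // Write the next chunk; it won't block at this point
        writeNextChunk(nioOutputStream);
    }

    @Override
    public void onError(Throwable t) {
        // Error occurred when writing data
        handleError(t);
    }
}, 8192);
-----------------------------------------------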

Usually we register the Read/WriteHandler in the HttpHandler.service(...) method, but the actual notification may come from Grizzly asynchronously on a different thread. So before registering the Read/WriteHandler we have to suspend HTTP request processing (so Grizzly won't finish it after exiting the HttpHandler.service() method), and we must not forget to resume it once processing is done.

-----------------------------------------------
public class MyHttpHandler extends HttpHandler {
    @Override
    public void service(final Request request, final Response response) throws Exception {
        // ReadHandler might be called asynchronously, so
        // we have to suspend the response
        response.suspend();
        final NIOInputStream nioInputStream = request.getInputStream();

        nioInputStream.notifyAvailable(new ReadHandler() {

            @Override
            public void onDataAvailable() throws Exception {
                processAvailable();
                nioInputStream.notifyAvailable(this);
            }

            @Override
            public void onAllDataRead() throws Exception {
                processAvailable();                
                complete();
            }

            @Override
            public void onError(Throwable t) {
                handleError(t);
                complete();
            }

            private void complete() {
                // Complete HTTP request processing
                ...................
                // Don't forget to resume!!!
                response.resume();
            }
        });
    }
}
-----------------------------------------------

PS: The samples above demonstrate how the NIO streams feature may be used in binary mode via the NIOInputStream and NIOOutputStream abstractions. The same set of methods/features can be used in character mode via NIOReader and NIOWriter.
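
For instance, the character-mode counterpart of the first sample might look like the sketch below; treat the accessors as assumptions (depending on the Grizzly 2.x version, the NIO flavors may be exposed via request.getReader()/response.getWriter() or via dedicated getNIOReader()/getNIOWriter() methods):

-----------------------------------------------
NIOReader nioReader = request.getReader();       // assumed to return the NIO flavor
System.out.println("Number of characters we can read without blocking is: " + nioReader.readyData());

NIOWriter nioWriter = response.getWriter();      // assumed to return the NIO flavor
System.out.println("I'm able to write 8K characters without blocking: " + nioWriter.canWrite(8192));
-----------------------------------------------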

For more information, please see the following docs and samples:

Monday Aug 08, 2011

Grizzly 2.0: HttpServer API. Asynchronous HTTP Server - Part I

In my previous blog entry I described the basic Grizzly HttpServer API abstractions and offered a couple of samples showing how one can implement a light-weight, Servlet-like web application.

Here I'll try to show how we can process HTTP requests asynchronously within an HttpHandler, in other words, implement an asynchronous HTTP application.

What do we mean by "asynchronous"?

Normally HttpServer has a service thread-pool, whose threads are used for HTTP request processing, which includes the following steps:

  1. parse the HTTP request;
  2. execute the processing logic by calling HttpHandler.service(Request, Response);
  3. flush the HTTP response;
  4. return the service thread to the thread-pool.

Normally the steps above are executed sequentially in a service thread. Using the "asynchronous" feature it's possible to delegate the execution of steps 2 and 3 to a custom thread, which lets us release the service thread faster.

Why would we want to do that?

As was said above, the service thread-pool instance is shared between all the HttpHandlers registered on the HttpServer. Assume we have application (HttpHandler) "A", which executes a long-lasting task (say a SQL query on a pretty busy DB server), and application "B", which serves static resources. It's easy to imagine a situation where a couple of application "A" clients block all the service threads while waiting for the response from the DB server. The main problem is that clients of application "B", which is pretty light-weight, cannot be served at the same time because there are no available service threads. So it might be a good idea to isolate these applications by executing application "A" logic in a dedicated thread pool, so service threads won't be blocked.

Ok, let's do some coding and make sure the issue we've just described is real :)


-----------------------------------------------
HttpServer httpServer = new HttpServer();

NetworkListener networkListener = new NetworkListener("sample-listener", "127.0.0.1", 18888);

// Configure NetworkListener thread pool to have just one thread,
// so it would be easier to reproduce the problem
ThreadPoolConfig threadPoolConfig = ThreadPoolConfig
        .defaultConfig()
        .setCorePoolSize(1)
        .setMaxPoolSize(1);

networkListener.getTransport().setWorkerThreadPoolConfig(threadPoolConfig);

httpServer.addListener(networkListener);

httpServer.getServerConfiguration().addHttpHandler(new HttpHandler() {
    @Override
    public void service(Request request, Response response) throws Exception {
        response.setContentType("text/plain");
        response.getWriter().write("Simple task is done!");
    }
}, "/simple");

httpServer.getServerConfiguration().addHttpHandler(new HttpHandler() {
    @Override
    public void service(Request request, Response response) throws Exception {
        response.setContentType("text/plain");
        // Simulate long lasting task
        Thread.sleep(10000);
        response.getWriter().write("Complex task is done!");
    }
}, "/complex");

try {
    httpServer.start();
    System.out.println("Press any key to stop the server...");
    System.in.read();
} catch (Exception e) {
    System.err.println(e);
}
-----------------------------------------------

In the sample above we create, initialize and run an HTTP server, which has 2 applications (HttpHandlers) registered: "simple" and "complex". To simulate a long-lasting task in the "complex" application we just cause the current thread to sleep for 10 seconds.

Now, if you try to call the "simple" application from your Web browser using the URL http://localhost:18888/simple - you'll see the response immediately. However, if you try to call the "complex" application at http://localhost:18888/complex - you'll see the response in 10 seconds. That's fine. But try to call the "complex" application first and then quickly, in a different tab, call the "simple" application; do you see the response immediately? Probably not. You'll see the response right after the "complex" application execution completes. The sad thing here is that the service thread which is executing the "complex" operation is idle (the same situation occurs when you wait for a SQL query result), so the CPU is doing nothing, but still we're not able to process another HTTP request.

How can we rework the "complex" application to execute its task in a custom thread pool? Normally application (HttpHandler) logic is encapsulated within the HttpHandler.service(Request, Response) method; once we exit this method, Grizzly finishes and flushes the HTTP response. So, coming back to the service thread processing steps:

  1. parse the HTTP request;
  2. execute the processing logic by calling HttpHandler.service(Request, Response);
  3. flush the HTTP response;
  4. return the service thread to the thread-pool.

we see that it wouldn't be enough just to delegate HTTP request processing to a custom thread in step 2, because in step 3 Grizzly will automatically flush the HTTP response back to the client in whatever state it is currently in. We need a way to instruct Grizzly not to do step 3 automatically on the service thread; instead, we want to be able to perform this step ourselves once asynchronous processing is complete.

Using the Grizzly HttpServer API this can be achieved the following way:

  • Response.suspend(...) to instruct Grizzly not to flush the HTTP response in the service thread;
  • Response.resume() to finish HTTP request processing and flush the response back to the client.

So the asynchronous version of the "complex" application (HttpHandler) will look like:

-----------------------------------------------
httpServer.getServerConfiguration().addHttpHandler(new HttpHandler() {
    final ExecutorService complexAppExecutorService =
        GrizzlyExecutorService.createInstance(
            ThreadPoolConfig.defaultConfig()
            .copy()
            .setCorePoolSize(5)
            .setMaxPoolSize(5));
            
    @Override
    public void service(final Request request, final Response response) throws Exception {
                
        response.suspend(); // Instruct Grizzly to not flush response, once we exit the service(...) method
                
        complexAppExecutorService.execute(new Runnable() {   // Execute long-lasting task in the custom thread
            public void run() {
                try {
                    response.setContentType("text/plain");
                    // Simulate long lasting task
                    Thread.sleep(10000);
                    response.getWriter().write("Complex task is done!");
                } catch (Exception e) {
                    response.setStatus(HttpStatus.INTERNAL_SERVER_ERROR_500);
                } finally {
                    response.resume();  // Finishing HTTP request processing and flushing the response to the client
                }
            }
        });
    }
}, "/complex");
-----------------------------------------------

* As you might have noticed, the "complex" application uses the Grizzly ExecutorService implementation. This is the preferred approach; however, you can still use your own ExecutorService.
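
If you do plug in your own executor, a plain JDK thread pool is a drop-in sketch (sized to five threads only to match the Grizzly configuration above); the rest of the sample stays the same:

-----------------------------------------------
// A standard java.util.concurrent pool instead of GrizzlyExecutorService
final ExecutorService complexAppExecutorService =
        Executors.newFixedThreadPool(5);
-----------------------------------------------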

The three most important steps in the code above are:

  1. Suspend HTTP response processing: response.suspend()
  2. Delegate the task to the custom thread pool: complexAppExecutorService.execute(...)
  3. Resume HTTP response processing: response.resume()

Now, using your browser, you can make sure "simple" and "complex" applications are not affecting each other, and the "simple" application works just fine when the "complex" application is busy.

Wednesday Jul 20, 2011

Grizzly 2.0: HttpServer API. Implementing simple HTTP server

The HttpServer API is the Grizzly 2.0 replacement for the GrizzlyWebServer API described here by Jeanfrancois.

HttpServer inherits all the GrizzlyWebServer features and additionally introduces the new NIO Streams API, which lets you make all your reads and writes non-blocking.

But let's start from the basics.

The HttpServer API is pretty similar to the Servlet API, so everyone who has worked with Servlets will easily map the basic HttpServer API abstractions to the corresponding Servlet API abstractions. Here they are:

  • HttpServer - the Grizzly HTTP server, which can be used to create standalone HTTP programs or to embed Grizzly within other applications to provide HTTP services.
  • NetworkListener - an abstraction over the Grizzly NIO Transport and Filter implementations. It also allows enabling/disabling HTTP-related features such as keep-alive, chunked transfer-encoding, custom add-ons, etc. An HttpServer can support multiple NetworkListeners. Also keep in mind that all HttpHandlers added to the ServerConfiguration will be shared across all listeners.
  • HttpHandler - akin to javax.servlet.Servlet.
  • Request - similar to javax.servlet.http.HttpServletRequest.
  • Response - similar to javax.servlet.http.HttpServletResponse.
  • Session - similar to javax.servlet.http.HttpSession.
  • ServerConfiguration - allows the developer to add custom HttpHandler implementations to the server, as well as exposing JMX/monitoring features.
  • AddOn - the general interface for HttpServer add-ons, which are supposed to extend the basic HttpServer functionality.


1. Serving Static Resources.

Ok, now we're ready to implement and start the simplest HTTP server, which will serve static resources from the current folder on all available network interfaces (0.0.0.0) and the default TCP port 8080.


-----------------------------------------------
HttpServer server = HttpServer.createSimpleServer();
try {
    server.start();
    System.out.println("Press any key to stop the server...");
    System.in.read();
} catch (Exception e) {
    System.err.println(e);
}
-----------------------------------------------

After running the code above, the files in the current folder can be retrieved using the URL http://localhost:8080/<filename>.

If we want to customize our HTTP server to serve static resources from the folder "/home/myhome/downloads" on the localhost interface only, port 18888, with context-root "/xyz", the code has to be changed:


-----------------------------------------------
HttpServer httpServer = new HttpServer();
        
NetworkListener networkListener = new NetworkListener("sample-listener", "localhost", 18888);
httpServer.addListener(networkListener);
        
httpServer.getServerConfiguration().addHttpHandler(new StaticHttpHandler("/home/myhome/downloads"),
    "/xyz");
        
try {
    httpServer.start();
    System.out.println("Press any key to stop the server...");
    System.in.read();
} catch (Exception e) {
    System.err.println(e);
}
-----------------------------------------------

A bit more work, isn't it? ;) Now the files located in the "/home/myhome/downloads" folder can be retrieved using the URL http://localhost:18888/xyz/<filename>.


2. Serving Dynamic Resources.

Ok, what if we want to serve dynamic resources, in other words, generate the response at runtime?

That's easy: instead of using StaticHttpHandler (sample above), we're going to create a custom HttpHandler, whose API and features are pretty similar to javax.servlet.Servlet. Let's implement an HttpHandler which returns the current server time in the format HH:mm:ss.


-----------------------------------------------
HttpServer httpServer = new HttpServer();
        
NetworkListener networkListener = new NetworkListener("sample-listener", "localhost", 18888);
httpServer.addListener(networkListener);
        
httpServer.getServerConfiguration().addHttpHandler(new HttpHandler() {
    final SimpleDateFormat formatter = new SimpleDateFormat("HH:mm:ss");
            
    @Override
    public void service(Request request, Response response) throws Exception {
        final Date now = new Date();
        final String formattedTime;
        synchronized (formatter) {
            formattedTime = formatter.format(now);
        }
                
        response.setContentType("text/plain");
        response.getWriter().write(formattedTime);
    }
}, "/time");

try {
    httpServer.start();
    System.out.println("Press any key to stop the server...");
    System.in.read();
} catch (Exception e) {
    System.err.println(e);
}
-----------------------------------------------

Looks familiar? :) You can test the implementation using URL: http://localhost:18888/time .

Indeed, the Grizzly HttpServer API is similar to Servlets, having a similar API and features. In the next blog I'll describe how to process HTTP requests asynchronously, which is particularly useful when you deal with long-lasting tasks, so as not to block the execution of other HTTP requests coming in at the same time.

Monday Jul 11, 2011

Grizzly based Sun RPC server/client

Quoting Tigran's email:

"Finally I am brave enough to publish our Sun RPC implementation code.

http://code.google.com/p/nio-jrpc/

There are two ways to use it.

Direct usage from your code:

        final ProtocolFilter protocolKeeper = new ProtocolKeeperFilter();
        final ProtocolFilter rpcFilter = new RpcParserProtocolFilter();
        final ProtocolFilter rpcProcessor = new RpcProtocolFilter();
        final ProtocolFilter rpcDispatcher = new RpcDispatcher(_programs);

        final ProtocolChain protocolChain = new DefaultProtocolChain();
        protocolChain.addFilter(protocolKeeper);
        protocolChain.addFilter(rpcFilter);
        protocolChain.addFilter(rpcProcessor);

        // use GSS if configured
        if (_gssSessionManager != null ) {
            final ProtocolFilter gssManager = new GssProtocolFilter(_gssSessionManager);
            protocolChain.addFilter(gssManager);
        }
        protocolChain.addFilter(rpcDispatcher);


or provided helper class:


    final int PORT = 1717;
    final int PROG_NUMBER = 100017;
    final int PROG_VERS = 1;

        RpcDispatchable dummy = new RpcDispatchable() {

            @Override
            public void dispatchOncRpcCall(RpcCall call) throws OncRpcException, IOException {
                call.reply(XdrVoid.XDR_VOID);
            }

        };

        OncRpcSvc service = new OncRpcSvc(PORT);
        service.register(new OncRpcProgram(PROG_NUMBER, PROG_VERS), dummy);
        service.start();


The code is used in our NFS implementation and has been in production since September 2009, serving 12K requests per second"

Thursday Feb 24, 2011

Grizzly 2.0 release

Grizzly 2.0 has been released!

The Grizzly team did a great job improving Grizzly 1.x usability, API and performance, and finally we're ready to release Grizzly 2.0, which includes:

  • Completely new core module
    • Transport, Connection API, which simplifies network channels management, life-cycle control;
    • FilterChain, Filter API, providing easy mechanism for message parsing, serializing, asynchronous processing;
    • MemoryManager API, to separate memory management from essential core to be able to customize memory allocation if needed;
    • IOStrategy API, to customize (if needed) the processing of channel I/O events: process in the same thread, switch to a worker thread, etc. (see the sketch after this list);
  • Non-blocking HTTP Codec to work with HTTP messages, both client and server side;
  • Improved http-server module
    • Non-blocking Input, Output streams, which provide support for asynchronous, non-blocking HTTP requests processing;
  • Websockets;
  • Comet;
  • Documentation (the most wanted part :) )
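
For example, choosing an IOStrategy is a one-line call on the transport. A minimal sketch, assuming the strategy classes from the org.glassfish.grizzly.strategies package:

-----------------------------------------------
// Build a TCP NIO transport and choose how I/O events are processed:
// WorkerThreadIOStrategy hands events off to a worker thread-pool,
// SameThreadIOStrategy processes them directly in the selector thread
TCPNIOTransport transport = TCPNIOTransportBuilder.newInstance().build();
transport.setIOStrategy(WorkerThreadIOStrategy.getInstance());
// ... or: transport.setIOStrategy(SameThreadIOStrategy.getInstance());
-----------------------------------------------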

Talking about performance, here is an informal comparison of http-server benchmarks for Grizzly 1.9.32 and 2.0, which also shows the advantages of having customizable IOStrategies and applying them depending on specific requirements and server machine configuration. We ran the HTTP Echo test, in which HTTP POST was used to send some content and the Grizzly HTTP server replied with the same content.

HTTP Echo benchmark charts: the Single-Byte Echo and the 65K Byte Echo (bytes mode), and the Single-Char Echo and the 65K Char Echo (characters mode).


Please give us feedback!

Thursday Sep 30, 2010

Grizzly 1.0.38 has been released

We've released Grizzly 1.0.38 to fix:

  • CR 6981517 "Synchronizing a large application which worked in<=2.1.1p3 fails with an EOF error with 2.1.1p4+"
  • CR 6979497 ClassCastException using key.attachment()s 
  • https://grizzly.dev.java.net/issues/show_bug.cgi?id=891 "default linger=100 causes problems with connection closing" 

You can download it from: http://download.java.net/maven/2/com/sun/grizzly/grizzly-framework-http/1.0.38/

Here are instructions on how to patch Glassfish v2.1.x.

Thanks to all who helped!

 

Wednesday Sep 29, 2010

Grizzly 2.0 RC2 has been released

We've just released Grizzly 2.0 RC2!

  1. The most significant change we've made since RC1 is the source and Maven repackaging:
    • Public Grizzly 2.0 API classes are now located under org.glassfish.grizzly.* (earlier com.sun.grizzly.*)
    • The new Grizzly 2.0 Maven artifacts groupId is org.glassfish.grizzly (earlier com.sun.grizzly)
  2. We've renamed the webcontainer module to http-server (plus renamed the basic abstractions), because the name "webcontainer" has a Servlet implication, which is not the case for this specific module
  3. Provided fixes and performance improvements. For more information see the complete list of changes.

We will appreciate your feedback.

Wednesday Jun 16, 2010

Grizzly 2.0: websockets support

We've added websockets (draft-hixie-thewebsocketprotocol-76) support to Grizzly 2.0, both client and server side, and ported the websockets-chat sample taken from Grizzly 1.9.19 (blog from Justin).

The API, common for client and server parts:

  • WebSocket, an abstraction which represents a websocket unit
  • WebSocketEngine, which works as the WebSocketApplication registry for the server part and also contains the base websockets handshake logic
  • WebSocketFilter, the key part of the websocket implementation. It's a Grizzly Filter, which is responsible for handling websockets communication and forwarding processing to the correct websocket handlers.

The server-side API consists of

The client-side API:

Here are the websockets chat sample sources, which include:

To run the standalone websockets server, please do the following:

  • svn checkout https://www.dev.java.net/svn/grizzly/branches/2dot0/code/samples/websockets/chat
  • cd chat
  • mvn install
  • java -cp ./target/grizzly-websockets-chat-2.0.0-SNAPSHOT.jar com.sun.grizzly.samples.websockets.ChatWebSocketServer ./webapp

To run the standalone client (assuming you have completed the first 3 steps from the server-side list):

  • java -cp ./target/grizzly-websockets-chat-2.0.0-SNAPSHOT.jar com.sun.grizzly.samples.websockets.ChatWebSocketClient

To run the application in a browser, I used the latest dev version of Chromium, which implements the latest websockets draft, and requested the following URL: http://localhost:8080/index.html

So finally you're able to chat using the standalone client and a browser.

Monday Apr 19, 2010

Grizzly 1.9.19-beta2 with Websockets support!

We have released Grizzly 1.9.19-beta2, which has fixes for issues [1] and [2]. But the most interesting part is that this release has websockets support, and considering that beta2 will be integrated into the Glassfish v3.1 branch this week, Glassfish v3.1 will start to support websockets! Details on the websockets implementation can be found on Justin's blog [3], and questions can be asked on the Grizzly mailing list.

[1] http://tinyurl.com/y4zkt4j 

[2] http://tinyurl.com/y54zfpr

[3] http://www.antwerkz.com/ 

Monday Mar 15, 2010

Grizzly 2.0: asynchronous HTTP server and client samples

We've completed the initial implementation of the Grizzly 2.0 HTTP module. The main difference from Grizzly 1.x is that we separated the HTTP parsing and processing logic, so the HTTP module has two HttpFilter implementations, client and server, which are responsible for asynchronous parsing/serializing of HTTP messages. The developer is thus responsible for implementing just the HTTP processing logic. Here is the general schema of HTTP message processing:

(read phase):  TransportFilter  - (Buffer) -> [SSLFilter] - (Buffer) -> HttpFilter - (HttpContent) -> CustomHttpProcessorFilter

(write phase): CustomHttpProcessorFilter - (HttpPacket) -> HttpFilter - (Buffer) -> [SSLFilter] - (Buffer) -> TransportFilter

The big advantage of the Filter approach is that it's possible to reuse the HttpFilter logic for both HTTP client and HTTP server code.
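
To make the schema above concrete, here is a minimal server-side sketch. Note that it uses the final Grizzly 2.x class names (FilterChainBuilder, HttpServerFilter, etc. under org.glassfish.grizzly.*), which differ from the pre-release packages this post describes, so treat the exact types and signatures as assumptions:

-----------------------------------------------
// TransportFilter -> HttpServerFilter -> custom HTTP processing Filter
FilterChain filterChain = FilterChainBuilder.stateless()
        .add(new TransportFilter())         // raw Buffer I/O
        .add(new HttpServerFilter())        // Buffer <-> HttpContent/HttpPacket
        .add(new BaseFilter() {             // the "CustomHttpProcessorFilter"
            @Override
            public NextAction handleRead(FilterChainContext ctx) throws IOException {
                // Read phase: we receive an already parsed HttpContent
                final HttpContent httpContent = ctx.getMessage();
                final HttpRequestPacket request =
                        (HttpRequestPacket) httpContent.getHttpHeader();

                // Write phase: send an HttpPacket back down the chain;
                // HttpServerFilter will serialize it to a Buffer
                final HttpResponsePacket response = request.getResponse();
                ctx.write(HttpContent.builder(response)
                        .content(Buffers.wrap(ctx.getMemoryManager(), "Hello"))
                        .last(true)
                        .build());
                return ctx.getStopAction();
            }
        })
        .build();

TCPNIOTransport transport = TCPNIOTransportBuilder.newInstance().build();
transport.setProcessor(filterChain);
transport.bind(8080);
transport.start();
-----------------------------------------------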

I've also created a sample, which includes:

1) *A simple Web server, which is implemented as a custom Grizzly 2.0 Filter and is able to serve HTTP requests for static resources (local files).

2) *A simple HTTP client, which downloads a remote HTTP resource to a local file.

The next step for Grizzly 2.0 is to simplify Web server and HTTP client development by providing a higher-level API. On the server side it will be similar to the GrizzlyWebServer/GrizzlyAdapter/GrizzlyRequest/GrizzlyResponse API known from Grizzly 1.9.x, and for the client side we will also need to come up with some reasonable API.

Feedback is very much appreciated.

* Just a small note that both the server and the client operate in non-blocking mode.

Thursday Mar 11, 2010

Release: Grizzly 1.0.33

We've released Grizzly 1.0.33, which has fixes for the following 2 issues:

"Empty request entity received"
https://grizzly.dev.java.net/issues/show_bug.cgi?id=11452

"file cache breaks virtual server docroot discrimination"
CR6910122

You can download it from:

http://download.java.net/maven/2/com/sun/grizzly/grizzly-framework-http/1.0.33/

Here are instructions on how to patch an existing GFv2.1.x with the latest Grizzly binaries.

Thanks to all who helped!

Thursday Feb 25, 2010

Grizzly 2.0: simple authentication example

I've just created a simple example, which shows how we can implement simple authentication for a string-based protocol using Grizzly 2.0.

The server side is implemented as a chain of the following Filters:

TransportFilter <- Buffer -> StringFilter <- String -> MultiLineFilter <- MultiLinePacket w/ auth headers -> ServerAuthFilter <- MultiLinePacket w/o auth headers -> EchoFilter

The client-side filter chain is the following:

TransportFilter <- Buffer -> StringFilter <- String -> MultiLineFilter <- MultiLinePacket w/ auth headers -> ClientAuthFilter <- MultiLinePacket w/o auth headers -> ClientFilter

The most interesting thing happens on the client side inside ClientAuthFilter. If it appears that we're sending a message via a non-authenticated client, ClientAuthFilter suspends the message(s) sending and initiates the authentication process; once authentication is completed, ClientAuthFilter resumes the suspended message writes.

So communication happens on the following layers:

Client               Server
TransportFilter      TransportFilter
StringFilter         StringFilter
MultiLineFilter      MultiLineFilter
ClientAuthFilter     <- Authentication messages -> ServerAuthFilter
ClientFilter         <-- Custom Messages --> EchoFilter

Wednesday May 27, 2009

Grizzly 2.0: Streaming and Messaging

Originally, when designing the Grizzly 2.0 Connection API, we were thinking that Streams could cover all the possible scenarios developers may have and could be easily used with both TCP and UDP transports. But when we started to implement the UDP transport, we came to the conclusion that it's not actually true. For sure UDP is a message-oriented protocol, but we assumed it would be easy to emulate that using Streams like:

streamWriter.writeInt(...);
streamWriter.writeLong(...);
streamWriter.flush();

for a connected UDP Connection, or

streamWriter.writeInt(...);
streamWriter.writeLong(...);
streamWriter.flush(dstAddress); 

for non-connected UDP Connections. A similar trick could be done for reading UDP messages.

So, from an API point of view, Streams could work fine even for message-oriented protocols, but a problem appears when we build a UDP server (i.e. have a non-connected UDP socket). In this case the server may receive packets from different clients, but as Streams are not thread-safe, we can process only one single client packet at a time and block other clients until the first packet is completely processed. This creates a sensitive performance issue for a UDP server, because many UDP packets may get lost due to big delays in processing. For such a use case we need the possibility to not block the connection while processing an incoming message, but to make the connection available to process the next message. This is not doable with Streams because of the mentioned thread-safety limitation, so in Grizzly 2.0 M2 we've added a message-based API for Connections.

Message-based API:

The message-based API is reachable via the Connection interface, which now extends the Readable and Writable interfaces, and is represented by a set of read and write methods:

  • Future<ReadResult<Buffer, L>> read(...);
  • Future<WriteResult<Buffer, L>> write(...);

where <L> represents the source/destination address type (SocketAddress for NIO TCP and UDP transports).

Ok, with Connections it should be straightforward how we can use the message-based API, but what about FilterChains? In the original Grizzly samples, the TransportFilter, which we add first to a FilterChain, provided StreamReader/StreamWriter objects to be used by the next Filters in the chain. So the TransportFilter was oriented to work just in Stream mode. Now it's possible to create a TransportFilter which will work in Message mode:

transport.getFilterChain().add(new TransportFilter(TransportFilter.Mode.Message)); 

If we add the TransportFilter like above, then it will not provide Streams, which could be used by the rest of the FilterChain, but a message (Buffer), which can be accessed via the FilterChainContext:

Buffer message = (Buffer) context.getMessage();

so the next Filters in the chain should deal directly with the message, not with Streams.

Here is a simple example of using the message-based API for a UDP echo server.
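
Since the original sample link may no longer resolve, here is a minimal sketch of what such an echo Filter might look like in Message mode; the method names follow the final Grizzly 2.x Filter API and may differ slightly from the M2 snapshot described in this post:

-----------------------------------------------
import java.io.IOException;

import org.glassfish.grizzly.filterchain.BaseFilter;
import org.glassfish.grizzly.filterchain.FilterChainContext;
import org.glassfish.grizzly.filterchain.NextAction;

// Echo Filter that works directly with the message (Buffer), not with Streams
public class UdpEchoFilter extends BaseFilter {

    @Override
    public NextAction handleRead(FilterChainContext ctx) throws IOException {
        // The peer address the datagram came from
        final Object peerAddress = ctx.getAddress();

        // The message itself - a Buffer, as provided by the Message-mode TransportFilter
        final Object message = ctx.getMessage();

        // Echo the same Buffer back to the sender
        ctx.write(peerAddress, message, null);

        return ctx.getStopAction();
    }
}
-----------------------------------------------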

Stream-based API:

We discussed the Stream-based API in one of my previous blogs. So here I just want to add that for the TCP transport we recommend using the Stream API, because it makes implementation easier for most use cases and takes care of internal buffer management, optimizing memory allocation, etc.

Let's try to summarize the Streaming and Messaging APIs. For sure, in each particular case it's up to you to decide which API to use. Here are just our recommendations:

  • The Streams API is recommended to be used with the TCP transport and can also be used for connected UDP Connections;
  • The Message API is recommended to be used with non-connected UDP Connections (UDP servers).
