Tuesday Aug 23, 2011

Grizzly 2.0: HttpServer API. Asynchronous HTTP Server - Part II

In my previous blog, "Asynchronous HTTP Server - Part I", we discussed the possibility of delegating HTTP request processing to a custom thread in order not to affect the stability and response time of the entire HTTP server. An example of where this might be useful is when dealing with tasks, like database requests, which may take a relatively long time to complete.

We saw that by using Grizzly's ability to suspend/resume HTTP request processing, we'd be able to move a possible bottleneck from the Grizzly HttpServer service thread-pool to an application-specific custom thread-pool. This is very important, because it lets us isolate applications inside the HttpServer and avoid situations where clients of one application block clients of another application and potentially the entire server.

Now we will try to go further and see how to deal with another potential bottleneck, client<->server communication using blocking streams, without having to resort to an additional thread pool. Here are some use cases (maybe you'll be able to come up with more):

  • Client<->server communication is slow. We spend a significant amount of time waiting for data to become available or for the outbound socket buffer to be flushed so that we can continue writing. This slowness might be caused by the underlying network architecture (for example, relatively slow, high-latency mobile networks) or by a DoS-like attack.
  • The client and/or server is sending large amounts of data (for example, a large file upload/download). This operation can be time consuming even if client<->server communication is fast. Given this, it would only take a couple of clients uploading/downloading files to occupy all available threads and make the server unresponsive to other clients.
  • A combination of the two above.

The Grizzly 2.x HttpServer API has the concept of NIO streams (non-blocking streams), which are pretty similar to standard java.io streams but have the ability to check whether the next read/write operation can be executed without blocking. For example:

-----------------------------------------------
public class MyHttpHandler extends HttpHandler {
    @Override
    public void service(Request request, Response response) throws Exception {
        NIOInputStream nioInputStream = request.getInputStream();
        System.out.println("Number of bytes we can read without blocking is: " + nioStream.readyData());

        NIOOutputStream nioOutputStream = response.getOutputStream();
        System.out.println("I'm able to write 8K without blocking: " + nioOutputStream.canWrite(8192));
    }
}
-----------------------------------------------

The methods in the sample above can help you figure out whether the next I/O operation is going to block. Additionally, Grizzly provides a notification mechanism that may be used to get notified once the next read/write operation can be executed without blocking.

The general scheme for non-blocking reading may look like this:

-----------------------------------------------
final NIOInputStream nioInputStream = request.getInputStream();

// Register ReadHandler, which is going to be notified
// when at least 1 byte is available for reading
nioInputStream.notifyAvailable(new ReadHandler() {

    @Override
    public void onDataAvailable() throws Exception {
        processAvailable();

        // Reregister this ReadHandler to be notified 
        // when next chunk of data gets available
        nioInputStream.notifyAvailable(this);
    }

    @Override
    public void onAllDataRead() throws Exception {
        // Process the last chunk of data
        processAvailable();
        // Complete request processing
        // (preparing and sending response back to the client)
        complete();
    }

    @Override
    public void onError(Throwable t) {
        // Error occurred when reading data
        handleError(t);
        complete();
    }
});
-----------------------------------------------

* It's possible to set the minimum number of bytes that must become available before the ReadHandler is notified.
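
For example, a one-line sketch (it assumes the two-argument notifyAvailable overload and a ReadHandler instance like the one above; check the exact signature for your Grizzly version):

-----------------------------------------------
// Notify the handler only once at least 1024 bytes are available
// (readHandler is a ReadHandler instance like the one in the sample above)
nioInputStream.notifyAvailable(readHandler, 1024);
-----------------------------------------------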

A similar approach is available for non-blocking writing using NIOOutputStream.
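
As a rough sketch of the write side (the exact notifyCanWrite signature may vary between Grizzly 2.x versions, and sendMoreData() is just a placeholder for application-specific logic):

-----------------------------------------------
final NIOOutputStream nioOutputStream = response.getOutputStream();

// Register a WriteHandler, which is going to be notified
// once at least 8K can be written without blocking
nioOutputStream.notifyCanWrite(new WriteHandler() {

    @Override
    public void onWritePossible() throws Exception {
        // Write the next chunk of data;
        // sendMoreData() stands in for application logic
        sendMoreData(nioOutputStream);
    }

    @Override
    public void onError(Throwable t) {
        // Error occurred while writing data
        handleError(t);
        complete();
    }
}, 8192);
-----------------------------------------------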

Usually we register a Read/WriteHandler in the HttpHandler.service(...) method. Since the actual notification may come from Grizzly asynchronously in a different thread, we have to suspend HTTP request processing before registering the Read/WriteHandler (so Grizzly won't finish it after exiting the HttpHandler.service() method), and we must not forget to resume it once processing is done.

-----------------------------------------------
public class MyHttpHandler extends HttpHandler {
    @Override
    public void service(final Request request, final Response response) throws Exception {
        // ReadHandler might be called asynchronously, so
        // we have to suspend the response
        response.suspend();
        final NIOInputStream nioInputStream = request.getInputStream();

        nioInputStream.notifyAvailable(new ReadHandler() {

            @Override
            public void onDataAvailable() throws Exception {
                processAvailable();
                nioInputStream.notifyAvailable(this);
            }

            @Override
            public void onAllDataRead() throws Exception {
                processAvailable();                
                complete();
            }

            @Override
            public void onError(Throwable t) {
                handleError(t);
                complete();
            }

            private void complete() {
                // Complete HTTP request processing
                ...................
                // Don't forget to resume!!!
                response.resume();
            }
        });
    }
}
-----------------------------------------------

PS: The samples above demonstrate how the NIO streams feature may be used in binary mode via the NIOInputStream and NIOOutputStream abstractions. The same set of methods/features can be used in character mode via NIOReader and NIOWriter.
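
For instance (the getNIOReader()/getNIOWriter() accessor names are my assumption and may be named slightly differently depending on the Grizzly 2.x release):

-----------------------------------------------
// Character-mode counterparts of the NIO streams; the same
// readiness checks and Read/WriteHandler notifications apply,
// just in terms of characters instead of bytes
NIOReader nioReader = request.getNIOReader();
NIOWriter nioWriter = response.getNIOWriter();
-----------------------------------------------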

For more information, please refer to the Grizzly documentation and samples.

Thursday Feb 25, 2010

Grizzly 2.0: simple authentication example

I've just created a simple example which shows how we can implement simple authentication for a string-based protocol using Grizzly 2.0.

The server side is implemented as a chain of the following Filters:

TransportFilter <- Buffer -> StringFilter <- String -> MultiLineFilter <- MultiLinePacket w/ auth headers -> ServerAuthFilter <- MultiLinePacket w/o auth headers -> EchoFilter

The client-side filter chain is the following:

TransportFilter <- Buffer -> StringFilter <- String -> MultiLineFilter <- MultiLinePacket w/ auth headers -> ClientAuthFilter <- MultiLinePacket w/o auth headers -> ClientFilter

The most interesting thing happens on the client side, inside ClientAuthFilter. If it appears that we're sending a message via a non-authenticated client, ClientAuthFilter suspends the message sending and initiates the authentication process; once authentication is completed, ClientAuthFilter resumes the suspended message writes.

So communication happens on the following layers:

Client                                               Server
TransportFilter                                      TransportFilter
StringFilter                                         StringFilter
MultiLineFilter                                      MultiLineFilter
ClientAuthFilter   <- Authentication messages ->     ServerAuthFilter
ClientFilter       <-- Custom Messages -->           EchoFilter
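
To give a flavor of the idea, here's a minimal client-side sketch. It is not the actual sample code: it parks pending messages in a queue instead of using Grizzly's suspend/resume machinery, the class name and startAuthentication() are made up, and the package layout assumes a recent Grizzly 2.x build.

-----------------------------------------------
import java.io.IOException;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

import org.glassfish.grizzly.filterchain.BaseFilter;
import org.glassfish.grizzly.filterchain.FilterChainContext;
import org.glassfish.grizzly.filterchain.NextAction;

// Simplified illustration of the ClientAuthFilter idea
public class SimpleClientAuthFilter extends BaseFilter {

    private volatile boolean authenticated;
    private final Queue<Object> pendingMessages =
            new ConcurrentLinkedQueue<Object>();

    @Override
    public NextAction handleWrite(final FilterChainContext ctx) throws IOException {
        if (authenticated) {
            // Already authenticated - let the message continue down the chain
            return ctx.getInvokeAction();
        }

        // Not authenticated yet: park the outgoing message
        // and kick off the authentication handshake
        pendingMessages.add(ctx.getMessage());
        startAuthentication(ctx);
        return ctx.getStopAction();
    }

    // Placeholder for the authentication handshake; once the server
    // confirms, flush the parked messages down the chain
    private void startAuthentication(final FilterChainContext ctx) {
        // ... exchange auth headers with ServerAuthFilter ...
        authenticated = true;

        Object msg;
        while ((msg = pendingMessages.poll()) != null) {
            ctx.write(msg);
        }
    }
}
-----------------------------------------------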

Wednesday May 27, 2009

Grizzly 2.0: Streaming and Messaging

Originally, when designing the Grizzly 2.0 Connection API, we thought that Streams could cover all the possible scenarios developers may have and could easily be used with both TCP and UDP transports. But when we started to implement the UDP transport, we came to the conclusion that this isn't actually true. UDP is certainly a message-oriented protocol, but we assumed it would be easy to emulate that using Streams, like:

streamWriter.writeInt(...);
streamWriter.writeLong(...);
streamWriter.flush();

for a connected UDP Connection, or

streamWriter.writeInt(...);
streamWriter.writeLong(...);
streamWriter.flush(dstAddress); 

for non-connected UDP Connections. A similar trick could be done for reading UDP messages.

So, from an API point of view, Streams could work fine even for message-oriented protocols, but a problem appears when we build a UDP server (i.e. have a non-connected UDP socket). In this case the server may receive packets from different clients, but since Streams are not thread-safe, we can process only a single client packet at a time and have to block other clients until the first packet is completely processed. This creates a serious performance issue for a UDP server, because many UDP packets may get lost due to long processing delays. For such a use case we need the ability to not block the connection while processing an incoming message, but instead make the connection available to process the next message. This is not doable with Streams because of the mentioned thread-safety limitation, so in Grizzly 2.0 M2 we've added a message-based API for Connections.

Message-based API:

The message-based API is reachable via the Connection interface, which now extends the Readable and Writable interfaces, and is represented by a set of read and write methods:

  • Future<ReadResult<Buffer, L>> read(...);
  • Future<WriteResult<Buffer, L>> write(...);

where <L> represents the source/destination address type (SocketAddress for the NIO TCP and UDP transports).
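
As a rough sketch of how this reads in code for a non-connected UDP Connection (here `connection` is assumed to be an already-created Connection obtained from the UDP transport, and accessor names such as getSrcAddress() follow the bullets above but may differ slightly in the actual M2 release):

// Read one datagram and echo it back to its sender
Future<ReadResult<Buffer, SocketAddress>> readFuture = connection.read();
ReadResult<Buffer, SocketAddress> readResult = readFuture.get();

Buffer message = readResult.getMessage();
SocketAddress peerAddress = readResult.getSrcAddress();

// Write the same Buffer back to the address the datagram came from
Future<WriteResult<Buffer, SocketAddress>> writeFuture =
        connection.write(peerAddress, message);
writeFuture.get();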

OK, with Connections it should be straightforward how we can use the message-based API, but what about FilterChains? In the original Grizzly samples, the TransportFilter, which we add first to a FilterChain, provided StreamReader/StreamWriter objects to be used by the next Filters in the chain, so the TransportFilter was oriented to work only in Stream mode. Now it's possible to create a TransportFilter which will work in Message mode:

transport.getFilterChain().add(new TransportFilter(TransportFilter.Mode.Message)); 

If we add the TransportFilter like above, it will not provide Streams to be used by the rest of the FilterChain, but rather a message (Buffer), which can be accessed via the FilterChainContext:

Buffer message = (Buffer) context.getMessage();

so the next Filters in the chain should deal directly with the message, not with Streams.

Here is a simple example of using the message-based API for a UDP echo server.

Stream-based API:

We discussed the stream-based API in one of my previous blogs. Here I just want to add that for the TCP transport we recommend using the Stream API, because it makes implementation easier for most use cases and takes care of internal buffer management, memory allocation optimization, etc.

Let's try to summarize the Streaming and Messaging APIs. In each particular case it's up to you to decide which API to use; here are just our recommendations:

  • The Stream API is recommended for the TCP transport and could also be used for connected UDP Connections;
  • The Message API is recommended for non-connected UDP Connections (UDP servers).

Tuesday May 26, 2009

Grizzly 2.0 M2 Release

The next Grizzly 2.0 milestone has been reached.

Here is the list of new features available with Grizzly 2.0 M2:

  • UDP transport,
  • New Connection API, which lets Connections operate in "message" mode. This API fits message-oriented transports like UDP better,
  • Performance improvements,
  • Codec API and specifically SSL and FilterChain Codecs,
  • Extended Transport API to support multi-binding, added support for unbind(),
  • Significantly improved documentation,
  • All-in-one OSGi bundles for framework, http and http-servlet are available. You can launch them by just doing java -jar grizzly-<module>

In the next blog I'll provide more details about the Connection "stream" and "message" APIs and give a simple example for the UDP transport.

 
