Grizzly 1.9.0: Asynchronous HTTP responses

In Grizzly 1.9.0, which will be released very soon, we've implemented a new feature in the HTTP module.

Now it is possible to send HTTP responses asynchronously without blocking a worker thread. What does this mean, and what advantages does it bring?

In earlier Grizzly versions, when we sent an HTTP response, the current thread was blocked until the whole response had been written to the network. This is fine when the response is relatively small and the server is not overloaded with HTTP requests.

But when the server is under load and cannot write HTTP responses quickly, we block the thread and wait for the OS to become ready to write the next chunk of data to the network, so the write operation becomes a bottleneck for server scalability.

In Grizzly 1.9 it is possible to leverage the Grizzly asynchronous write queue. Now, if the channel cannot write more data on the wire, instead of blocking the thread we add the HTTP response message to a write queue. The queue is then processed once the OS signals that the channel is available for writing again.
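To make the idea concrete, here is a minimal, self-contained sketch of the try-direct-then-queue pattern. This is illustrative code only, not Grizzly's actual internal implementation; the class and interface names are made up, and a simple interface stands in for a non-blocking socket channel:

```java
import java.nio.ByteBuffer;
import java.util.ArrayDeque;
import java.util.Queue;

// Hypothetical sketch of an asynchronous write queue: attempt a direct
// write first, and only queue the data the channel could not accept.
// Queued buffers are flushed when the selector reports the channel is
// writable again (OP_WRITE).
public class AsyncWriteQueueSketch {

    // Stand-in for a non-blocking SocketChannel: a write() call may
    // consume only part of the buffer, like a full socket send buffer.
    public interface NonBlockingSink {
        int write(ByteBuffer src);
    }

    private final Queue<ByteBuffer> queue = new ArrayDeque<>();
    private final NonBlockingSink sink;

    public AsyncWriteQueueSketch(NonBlockingSink sink) {
        this.sink = sink;
    }

    // Returns true if the whole buffer went out directly, false if a
    // remainder was queued (the caller would then register OP_WRITE).
    public boolean write(ByteBuffer buf) {
        if (queue.isEmpty()) {
            sink.write(buf);        // direct attempt first, no blocking
        }
        if (buf.hasRemaining()) {
            queue.add(buf);         // channel is full: queue the rest
            return false;
        }
        return true;
    }

    // Called when the selector signals the channel is writable again.
    public void onWritable() {
        while (!queue.isEmpty()) {
            ByteBuffer head = queue.peek();
            sink.write(head);
            if (head.hasRemaining()) {
                return;             // channel filled up again; wait
            }
            queue.poll();           // fully written, drop from queue
        }
    }

    public boolean hasPending() {
        return !queue.isEmpty();
    }
}
```

The important property is that the worker thread returns immediately in both cases: either the data fit on the wire, or the remainder sits in the queue until the selector fires.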

The asynchronous mode for HTTP responses is turned off by default. Here are the ways it can be turned on:

1) Programmatically, using SelectorThread

SelectorThread selectorThread = new SelectorThread();
selectorThread.setPort(PORT);
selectorThread.setAsyncHttpWriteEnabled(true);

2) Using system property

-Dcom.sun.grizzly.http.asyncwrite.enabled=true
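For example, the system property can be passed on the JVM command line when starting the server (the jar name here is just a placeholder):

```shell
java -Dcom.sun.grizzly.http.asyncwrite.enabled=true -jar my-grizzly-server.jar
```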


Though the asynchronous mode for HTTP responses is very useful and has big advantages compared to blocking mode, there is one "detail" :) we have to be careful with: ByteBuffer cloning.

When we cannot write more data to the channel, we add the ByteBuffer to the asynchronous write queue, which means we cannot continue working with that ByteBuffer until it is released from the queue. So, to process the next HTTP request, we have to create a new ByteBuffer. Basically, to increase server scalability with the asynchronous write queue, we pay with memory. In Grizzly 1.9 we use a simple ByteBuffer pool of limited size to avoid creating new ByteBuffers all the time. The size of the ByteBuffer pool can be tuned.
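A bounded buffer pool of this kind can be sketched in a few lines. Again, this is an illustrative sketch, not Grizzly's actual SocketChannelOutputBuffer implementation; the class name and sizes are made up. The key point is that an empty pool falls back to allocation (because the pooled buffers may still be parked in the write queue), and a full pool simply drops returned buffers, so memory stays bounded:

```java
import java.nio.ByteBuffer;
import java.util.concurrent.ArrayBlockingQueue;

// Hypothetical bounded ByteBuffer pool. take() reuses a pooled buffer
// when one is available, otherwise allocates a fresh one; release()
// returns a buffer to the pool, silently dropping it if the pool is
// already at its maximum size.
public class BufferPoolSketch {

    private final ArrayBlockingQueue<ByteBuffer> pool;
    private final int bufferSize;

    public BufferPoolSketch(int maxPoolSize, int bufferSize) {
        this.pool = new ArrayBlockingQueue<>(maxPoolSize);
        this.bufferSize = bufferSize;
    }

    public ByteBuffer take() {
        ByteBuffer buf = pool.poll();
        // Pool empty: the pooled buffers may still sit in the async
        // write queue, so we must allocate a new one.
        return (buf != null) ? buf : ByteBuffer.allocate(bufferSize);
    }

    public void release(ByteBuffer buf) {
        buf.clear();
        pool.offer(buf);   // dropped if the pool is full: memory stays bounded
    }

    public int pooledCount() {
        return pool.size();
    }
}
```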

1) Programmatically:

SocketChannelOutputBuffer.setMaxBufferPoolSize(int size); 

2) Using system properties:

-Dcom.sun.grizzly.http.asyncwrite.maxBufferPoolSize=<size> 

Comments:

Is there a perf penalty? Is this always on, or does it switch on if the send blocks?

Posted by peter morelli on December 03, 2008 at 09:41 AM CET #

[Trackback] The latest release of the monster include tons of enhancements and new features like new Asynchronous I/O support, OSGi improvement, ExecutorServices now supported natively, Asynchronous Http Write...

Posted by Jean-Francois Arcand's Blog on December 03, 2008 at 08:22 PM CET #

There is no performance penalty. The asynchronous write queue works like this: first it tries to write the buffer directly to the wire, and only if it is not possible to write the whole buffer is it added to the write queue.
So for low-load scenarios, messages will not even be added to the queue, and there will be no penalty.
Under high load we expect a performance improvement, because we will not block the thread, so the same thread can be reused to process other requests. But we will need to create an additional ByteBuffer (which I referred to as "cloning"), so for the performance improvement we pay some extra memory.

Posted by oleksiys on December 04, 2008 at 06:52 AM CET #

Oh, I just missed the second part of the question.
> is this always on, or does it switch on if the send blocks

By default the async HTTP response feature is switched off. You can switch it on using one of the options I described above.

Posted by oleksiys on December 04, 2008 at 08:53 AM CET #
