Wednesday Sep 24, 2008

Be careful with SelectionKey interests! :)

I spent one day catching a bug in the client-side implementation of Grizzly 2.0.

The code below:
Set<SelectionKey> readyKeySet = selector.selectedKeys();
for (Iterator<SelectionKey> it = readyKeySet.iterator(); it.hasNext(); ) {
    SelectionKey key = it.next();
    it.remove();
    // ... process the ready key
}

is the usual way of processing I/O events with Java NIO.
The SelectionKey should now tell us which operation is ready to be processed, but

int readyOperations = key.readyOps(); 

returns 0   8-)
This is really unexpected: it means the SelectionKey is not ready for any I/O operation, and even worse, now returns immediately all the time with a non-ready SelectionKey. So our Selector thread loads the CPU up to 100%.

After one day of investigation and trying different patches, I realized that the issue occurred because I had forgotten to unregister the OP_CONNECT interest from the SelectionKey. Once I added logic to unregister the OP_CONNECT interest when a channel gets connected, everything started to work just fine.
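A minimal sketch of the fix, assuming the usual selector-loop structure (the helper name `onConnectable` is mine, not Grizzly's): once finishConnect() succeeds, OP_CONNECT is dropped from the interest set so the selector stops reporting the key with an empty ready set.

```java
import java.io.IOException;
import java.nio.channels.SelectionKey;
import java.nio.channels.SocketChannel;

// Hypothetical helper, not the actual Grizzly 2.0 code: clear OP_CONNECT
// once the channel is connected, otherwise keeps waking up
// immediately with readyOps() == 0.
void onConnectable(SelectionKey key) throws IOException {
    SocketChannel channel = (SocketChannel) key.channel();
    if (channel.finishConnect()) {
        // drop OP_CONNECT from the interest set and start reading
        key.interestOps((key.interestOps() & ~SelectionKey.OP_CONNECT)
                        | SelectionKey.OP_READ);
    }
}
```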

I saw this issue only on my Mac, so I'm not sure how JVMs on other OSes will behave. Anyway, if you don't want to spend a day catching strange NIO bugs, be careful with SelectionKey interests :)

Thursday Sep 04, 2008

Grizzly 2.0 is available on Maven

We started to push Grizzly 2.0 artifacts to the Maven repository [1].

Currently there are two Maven artifacts available: grizzly-framework [2] and grizzly-framework-samples [3], which contain the Grizzly 2.0 core classes and several samples, respectively.
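Assuming standard Maven usage, depending on the framework artifact would look roughly like this (the groupId and version shown here are guesses; check the repository [1] for the exact coordinates):

```xml
<!-- hypothetical coordinates; verify against the repository -->
<dependency>
    <groupId>com.sun.grizzly</groupId>
    <artifactId>grizzly-framework</artifactId>
    <version>2.0.0-SNAPSHOT</version>
</dependency>
```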

Grizzly 2.0 sources can be retrieved using SVN [4] or browsed online [5].

[4] svn checkout

Friday Jun 20, 2008

Starting new Grizzly 2.0

It's always so cool to start something new... Starting Grizzly 2.0, we decided to build the new Grizzly from the very beginning, using our experience with Grizzly 1.x and the community feedback we have been getting over the last years.

So by the end of this year, 2008, we are planning to have a completely new bear, which will be even friendlier and more powerful :)

Everyone is welcome to take part in the discussion of how Grizzly 2.0 should look and what should be improved compared to Grizzly 1.x. For now we have a compiled draft plan with initial ideas, design notes, and examples.


Wednesday May 14, 2008

Grizzly @ JavaOne 2008

Jeanfrancois and I presented Grizzly at the JavaOne 2008 conference in San Francisco.

It was my first presentation experience, so it wasn't perfect, but I hope the people who came and were interested in NIO, and particularly in Grizzly, got an idea of what Grizzly is and how they can start using it.

Here are our presentation slides. You can find more information on the project Grizzly web site, and all your questions can be answered on our mailing lists.

Wednesday Jan 16, 2008

"Grizzly 1.7.0 presents"... Part I: Asynchronous write queues

Finally Grizzly 1.7.0 is released, and we have implemented a feature that was requested a long time ago: asynchronous write queues.

Before, it was possible to use the asynchronous nature of NIO in Grizzly, but the developer had to implement and control such a queue himself. Alternatively, the write operation could be done in a blocking manner: OutputWriter.flushChannel(...).

With the newly implemented async write queue, it's possible to register a ByteBuffer for writing and get a notification via a callback listener when the ByteBuffer has been written. It's also possible to register, together with the ByteBuffer, a custom ByteBuffer processor, which will be called before the ByteBuffer is written and can process/change/encrypt its data before it goes on the wire. Grizzly itself uses a ByteBuffer processor for its SSL async write queue implementation.

The async write queue API is represented by the interface com.sun.grizzly.async.AsyncQueueWritable.

The Grizzly client-side connector handlers (TCPConnectorHandler, UDPConnectorHandler, SSLConnectorHandler) implement the AsyncQueueWritable interface. It's also possible to use the async write queue functionality inside a ProtocolFilter via Context.getAsyncQueueWritable().

What features does the async write queue API offer?

1) public void writeToAsyncQueue(ByteBuffer buffer) throws IOException;

Just registers the buffer on a write queue. No completion notification is needed.

2) public void writeToAsyncQueue(ByteBuffer buffer, AsyncWriteCallbackHandler callbackHandler) throws IOException;

Registers the buffer on a write queue and provides a callback handler, which will be notified when the buffer write completes, or if any error happens during the write operation.

3) public void writeToAsyncQueue(ByteBuffer buffer, AsyncWriteCallbackHandler callbackHandler, AsyncQueueDataProcessor writePreProcessor) throws IOException;

Does the same as (2), but also registers a writePreProcessor associated with the buffer, which will be called when it is the buffer's turn to be written. The writePreProcessor can change the buffer's content, or substitute a different buffer. I found this feature useful when implementing SSL async write queue support: I register a buffer with raw data together with an SSLWritePreProcessor, which is called just before the buffer is written and encrypts its content according to the SSL requirements.

4) public void writeToAsyncQueue(ByteBuffer buffer, AsyncWriteCallbackHandler callbackHandler, AsyncQueueDataProcessor writePreProcessor, boolean isCloneByteBuffer) throws IOException;

Extends the functionality of (3) by adding the possibility to clone the source buffer before adding it to the queue, so that the clone, not the original, is queued. Cloning can be useful in situations where we don't want to care much about memory management: it's possible to call this method with the clone flag set to "true" and continue working with the passed buffer (clear it, modify it, ...) right away.

For better memory management, though, I would suggest not using cloning and instead releasing the allocated buffer from the AsyncWriteCallbackHandler.

There are other operations in the API, similar to (1)-(4), that take one more parameter: SocketAddress dstAddress. These operations are not supported for the TCP and SSL transports, as those are always implemented using connected channels; but with the UDP transport, where the channel may not be connected, it's possible to set the destination address the buffer will be sent to.

* If the queue is empty, all write operations first try to write the buffer directly to the destination channel on the same thread.
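To make the queue / callback / pre-processor interplay concrete, here is a tiny self-contained sketch of the pattern in plain Java (not the Grizzly API; all class and method names here are mine): buffers are queued together with an optional pre-processor and a completion callback, and a flush pass writes them in order.

```java
import java.nio.ByteBuffer;
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.function.Consumer;
import java.util.function.UnaryOperator;

// Illustrative sketch of an async write queue: each entry carries the
// buffer, an optional pre-processor (applied just before the write, as
// the SSL case above does), and an optional completion callback.
class WriteQueueSketch {
    private record Entry(ByteBuffer buffer,
                         UnaryOperator<ByteBuffer> preProcessor,
                         Consumer<ByteBuffer> onComplete) {}

    private final Queue<Entry> queue = new ArrayDeque<>();

    void write(ByteBuffer buf,
               Consumer<ByteBuffer> onComplete,
               UnaryOperator<ByteBuffer> preProcessor) {
        queue.add(new Entry(buf, preProcessor, onComplete));
    }

    // Called when the channel is writable; channelWrite stands in for the
    // actual socket write.
    void flush(Consumer<ByteBuffer> channelWrite) {
        Entry e;
        while ((e = queue.poll()) != null) {
            ByteBuffer out = e.preProcessor() != null
                    ? e.preProcessor().apply(e.buffer()) : e.buffer();
            channelWrite.accept(out);          // actual I/O happens here
            if (e.onComplete() != null) {
                e.onComplete().accept(out);    // completion notification
            }
        }
    }
}
```

In the real API the flush is driven by the selector thread when the channel becomes writable; the sketch only shows the ordering of pre-processing, write, and completion callback.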

Looking forward to hearing your feedback, so we will be able to fix/improve the current implementation :)
Next time I'll try to describe the Grizzly asynchronous read queue feature...

Monday Jul 16, 2007

SSL client implementation in Grizzly.

When I implemented SSL client support for Grizzly, it was the first time I had used SSLEngine in practice. I have to say it took several days until I had a more or less complete understanding of how it should be implemented, first of all the handshake phase and the bunch of related SSLEngine states.
I tried to hide all the SSLEngine operations and let the developer use just the common Grizzly ConnectorHandler methods: connect, read, write, close. One high-level method was added: handshake. All of the listed methods can be used in both blocking and non-blocking manner.
SSLConnectorHandler requires a special kind of CallbackHandler, an SSLCallbackHandler, which additionally has an onHandshake(IOEvent) callback method. This method is called when a non-blocking handshake operation completes.
How to configure SSLConnectorHandler... SSLConnectorHandler can be used with the default SSL configuration, meaning the SSL artifacts (keystore, truststore) will be loaded according to system properties. However, you can set a custom SSL configuration using either SSLConnectorHandler.configure(SSLConfig) or SSLConnectorHandler.setSSLContext(SSLContext). The new SSL configuration becomes active after the SSLConnectorHandler is (re)connected.

Here is a small code fragment showing a simple use case for SSLConnectorHandler (blocking mode):

// create new standalone SSLConnectorHandler instance
final SSLConnectorHandler sslConnector = new SSLConnectorHandler();

// initialize buffers
final ByteBuffer sendBuffer = ByteBuffer.wrap("sending data".getBytes());
final ByteBuffer receiveBuffer = ByteBuffer.allocate(256);
try {
        // Step #1: Connect
        sslConnector.connect(new InetSocketAddress(HOST, PORT));
        assert sslConnector.isConnected();

        // Step #2: Handshake
        assert sslConnector.handshake(receiveBuffer, true);

        // Step #3: Write some data
        sslConnector.write(sendBuffer, true);

        // Step #4: Read the response
        sslConnector.read(receiveBuffer, true);
        System.out.println("Response length: " + receiveBuffer.remaining());

} catch (IOException e) {
    // log exception
}
A non-blocking implementation is more difficult, but it lets you build more scalable applications.
Here is the example of Grizzly non-blocking SSL client implementation.

PS: To be able to run the samples you need keystore and truststore files (info...)

Wednesday Jun 20, 2007

Connection Management/Cache in Grizzly 1.5

Recently the Connection Management/Cache feature was added to Grizzly 1.5!
The actual implementation was provided by Ken Cavanaugh and was originally targeted at Corba, but Ken kindly proposed to integrate it into SOAP/TCP and Grizzly even before he did it for Corba :)
Connection Management/Cache makes it possible to control active connections and to reclaim the least recently used (LRU) connections when a limit is reached. On the client side it also serves as a connection cache, which makes it possible to reuse existing connections to a target host instead of creating new ones.
After the integration of Connection Management/Cache into Grizzly, all its features became available in the Grizzly framework.

For the server part we need to create an instance of CacheableSelectionKeyHandler and set it on the Controller:
Controller controller = new Controller();
SelectionKeyHandler cacheableKeyHandler = new CacheableSelectionKeyHandler(highWaterMark, numberToReclaim);
controller.setSelectionKeyHandler(cacheableKeyHandler);
highWaterMark: the maximum number of active inbound connections the Controller will handle
numberToReclaim: the number of LRU connections that will be reclaimed when the highWaterMark limit is reached

On the client side, Connection Management/Cache is represented by several classes, among them CacheableConnectorHandlerPool and CacheableConnectorHandler:
Controller controller = new Controller();
ConnectorHandlerPool cacheableHandlerPool = new CacheableConnectorHandlerPool(controller, highWaterMark, numberToReclaim, maxParallel);
controller.setConnectorHandlerPool(cacheableHandlerPool);
................... complete controller initialization ...............................
ConnectorHandler clientConnector = controller.acquireConnectorHandler(Controller.Protocol.TCP);
clientConnector.connect(....);   // initiates a new connection or reuses one from the cache
......................... execute client operations ....................................
clientConnector.close();         // if the limit is not reached, the connection is put back into the cache
highWaterMark: the maximum number of active outbound connections the Controller will handle
numberToReclaim: the number of LRU connections that will be reclaimed when the highWaterMark limit is reached
maxParallel: the maximum number of active outbound connections to a single destination (usually <host>:<port>)
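To illustrate the highWaterMark / numberToReclaim policy, here is a small self-contained sketch (not the actual Grizzly implementation; class and method names are mine) built on LinkedHashMap's access-order mode:

```java
import java.util.LinkedHashMap;

// Illustrative LRU reclaim policy: when the cache grows past
// highWaterMark, the numberToReclaim least recently used entries are
// evicted (a real connection cache would also close them).
class LruConnectionCacheSketch<K, V> {
    private final int highWaterMark;
    private final int numberToReclaim;
    // accessOrder = true keeps entries ordered least-recently-used first
    private final LinkedHashMap<K, V> cache = new LinkedHashMap<>(16, 0.75f, true);

    LruConnectionCacheSketch(int highWaterMark, int numberToReclaim) {
        this.highWaterMark = highWaterMark;
        this.numberToReclaim = numberToReclaim;
    }

    void put(K key, V connection) {
        cache.put(key, connection);
        if (cache.size() > highWaterMark) {
            // reclaim the numberToReclaim least recently used connections
            var it = cache.entrySet().iterator();
            for (int i = 0; i < numberToReclaim && it.hasNext(); i++) {
                it.next();   // a real implementation would close it here
                it.remove();
            }
        }
    }

    V get(K key) { return cache.get(key); }   // marks the entry as recently used
    int size()   { return cache.size(); }
}
```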


From the simple examples above we can see that Connection Management/Cache is very easy to use in Grizzly. It doesn't require significant changes compared to the default Grizzly configuration.

Friday Jun 15, 2007

Multi Selector threads in Grizzly 1.5

The default Grizzly Controller configuration uses a single Selector thread for handling Channel events, meaning OP_ACCEPT and OP_READ events are handled by a single Selector. To process a bigger number of concurrent requests this is not enough, and we may need to spread the Channels among several Selector threads.
It's very easy to do in Grizzly: Controller.setReadThreadsCount(threadNum);
This lets the main (Controller) Selector thread process only OP_ACCEPT events, and each accepted Channel will be registered for reading on a different Selector thread taken from a queue. The default multi-Selector-thread implementation works in a round-robin manner.

Here is an example of how to start a Grizzly Controller with several Selector threads.
        final ProtocolFilter readFilter = new ReadFilter();
        final ProtocolFilter echoFilter = new EchoFilter();
        TCPSelectorHandler selectorHandler = new TCPSelectorHandler();
        final Controller controller = new Controller();
        controller.setSelectorHandler(selectorHandler);
        controller.setReadThreadsCount(5);
        controller.setProtocolChainInstanceHandler(
                new DefaultProtocolChainInstanceHandler() {
            public ProtocolChain poll() {
                ProtocolChain protocolChain = protocolChains.poll();
                if (protocolChain == null) {
                    protocolChain = new DefaultProtocolChain();
                    protocolChain.addFilter(readFilter);
                    protocolChain.addFilter(echoFilter);
                }
                return protocolChain;
            }
        });
        controller.start();




