Thursday Oct 09, 2008

Recent Grizzly 2.0 updates

Recently we made several updates in Grizzly 2.0:

1) introduced the Buffer interface, which will be common to all kinds of buffers used in Grizzly

2) changed the read/write API as Jeanfrancois proposed. Instead of separate read/write, readAsync/writeAsync, and readNow/writeNow methods, there is now just read/write. The I/O mode (blocking/async) can be changed using Transport.configureBlocking(boolean) for the whole transport, or Connection.configureBlocking() for a specific connection.

3) completed the standalone filter chain implementation, which works in both synchronous and asynchronous modes.
It's possible to do the following:

filterChain.add(new TransportFilter());
filterChain.add(new SSLFilter());
filterChain.add(new HttpFilter());


HttpRequest request = new HttpRequest(host, port, url);
Future writeFuture = filterChain.write(connection, request);
Future readFuture =;
HttpResponse response = (HttpResponse) readFuture.get(timeout, timeunit);

We're planning to have more interesting features: Stream filter, smart parser filter, HTTP support...

Grizzly 2.0 roadmap

  • Status
    • Expected release December 2008
  • Tasks 
    • (required) Support TCP NIO transport                 DONE
    • (required) Support UDP NIO transport
    • (required) Support SSL
    • (required) Performance. Grizzly 2.0 should be faster or equal to Grizzly 1.x
    • (required) Documentation: complete javadoc for sources
    • (required) Documentation: tutorial (set of blogs), which covers all common framework features
    • (required) Support AIO's TCP/UDP as Transport
    • (required) High level design of the http server module
    • (feature) Standalone filter read/write                     DONE
    • (feature) Stream filter
    • (feature) Smart parser filter
Yeah, we really have big plans, so any help, contribution, or feedback would be greatly appreciated!


Wednesday Sep 24, 2008

Be careful with SelectionKey interests! :)

I spent one day catching a bug in the client-side implementation of Grizzly 2.0.

The code below:

Set<SelectionKey> readyKeySet = selector.selectedKeys();
for (Iterator<SelectionKey> it = readyKeySet.iterator(); it.hasNext(); ) {
    SelectionKey key =;

is the usual way of processing I/O events with Java NIO.
The SelectionKey should tell us which operation is ready to be processed, but

int readyOperations = key.readyOps(); 

returns 0   8-)
This is really unexpected, because it means the SelectionKey is not ready for any I/O operation, and even worse, now returns immediately all the time with the non-ready SelectionKey. So our Selector thread loads the CPU up to 100%.

After one day of investigation and trying different patches, I realized that the issue occurred because I had forgotten to unregister the OP_CONNECT interest from the SelectionKey. Once I added logic to unregister the OP_CONNECT interest when a channel gets connected, everything started to work just fine.

I saw this issue only on my Mac, so I'm not sure how JVMs on other OSes will behave. Anyway, if you don't want to spend a day catching strange NIO bugs - be careful with SelectionKey interests :)

Thursday Sep 04, 2008

Grizzly 2.0 is available on Maven

We started to push Grizzly 2.0 artifacts to the Maven repository [1].

Currently there are two Maven artifacts available: grizzly-framework [2] and grizzly-framework-samples [3], which contain, respectively, the Grizzly 2.0 core classes and several samples.

Grizzly 2.0 sources can be retrieved using SVN [4] or browsed online [5].

[4] svn checkout

Friday Jun 20, 2008

Starting new Grizzly 2.0

It's always so cool to start something new... With Grizzly 2.0 we decided to build Grizzly from scratch, using our experience with Grizzly 1.x and the community feedback we have been getting over the last years.

So by the end of this year, 2008, we are planning to have a completely new bear, which will be even more friendly and powerful :)

Everyone is welcome to take part in the discussion of how Grizzly 2.0 should look and what should be improved compared to Grizzly 1.x. For now we have a compiled draft plan with initial ideas, design, and examples.


Wednesday May 14, 2008

Grizzly @ JavaOne 2008

Jeanfrancois and I presented Grizzly at the JavaOne 2008 conference in San Francisco.

For me it was my first presentation experience, so it wasn't perfect, but I hope the people who came, interested in NIO and in particular in Grizzly, got a sense of what Grizzly is and how they can start to use it.

Here are our presentation slides. You can find more information on the project Grizzly web site, and all your questions can be answered on our mailing lists.

Wednesday Mar 26, 2008

Fast Infoset: JAX-RPC API to JAX-WS API migration.

If you migrate a Web service client written on top of the JAX-RPC API to the newer JAX-WS API and need to keep it communicating using Fast Infoset, you need to follow an easy migration procedure.
Very likely the old code enables Fast Infoset using one of the following JAX-RPC-specific methods:

  1. Setting the System property
    java -Dcom.sun.xml.rpc.client.ContentNegotiation=pessimistic ...
  2. Setting Content Negotiation on a Stub
    stub = ...; // Obtain reference to stub
    stub._setProperty(com.sun.xml.rpc.client.StubPropertyConstants.CONTENT_NEGOTIATION_PROPERTY, "pessimistic");
  3. Setting Content Negotiation on an instance of Call
    call = ...; // Obtain reference to call
    call.setProperty(com.sun.xml.rpc.client.dii.CallPropertyConstants.CONTENT_NEGOTIATION_PROPERTY, "pessimistic");

For JAX-WS API based Web services, Fast Infoset can be enabled in a way very similar to the JAX-RPC cases (2) and (3) described above. The JAX-WS API uses a different property name, though the supported property values are the same: “none”, “optimistic”, “pessimistic”.

After migration to the JAX-WS API, a Web service client will be represented by either a Stub or a Dispatch object. The Fast Infoset enabling procedure is the same for both representations: it's required to set the Content Negotiation property on the Web service Stub or Dispatch.
So the JAX-WS-specific way to enable Fast Infoset is the following:

  1. Setting Content Negotiation on a Stub or Dispatch
    stubOrDispatch = ...; // Obtain reference to Stub or Dispatch
    ((BindingProvider) stubOrDispatch).getRequestContext().put("", "pessimistic");

Additional links:
[1] How to enable Fast Infoset
[2] Fast Infoset in Java Web Services Developer Pack, Version 1.6 

Wednesday Jan 16, 2008

"Grizzly 1.7.0 presents"... Part I: Asynchronous write queues

Finally Grizzly 1.7.0 is released, and we have implemented a feature that was requested a long time ago: asynchronous write queues.

Before, it was possible to use the asynchronous nature of NIO in Grizzly, but the developer needed to implement and control such a queue himself; alternatively, the write operation could be done in a blocking manner: OutputWriter.flushChannel(...).

With the newly implemented async write queue it's possible to register a ByteBuffer for writing and get a notification via a callback listener when the ByteBuffer has been written. It's also possible to register, together with the ByteBuffer, a custom ByteBuffer processor, which will be called before the ByteBuffer is written and can process/change/encrypt the ByteBuffer data before it goes on the wire. Grizzly itself uses a ByteBuffer processor for the SSL async write queue implementation.

The async write queue API is represented by the interface com.sun.grizzly.async.AsyncQueueWritable.

The Grizzly client-side connector handlers (TCPConnectorHandler, UDPConnectorHandler, SSLConnectorHandler) implement the AsyncQueueWritable interface. It's also possible to use the async write queue functionality inside a ProtocolFilter, via Context.getAsyncQueueWritable().

What features does the async write queue API offer?

1) public void writeToAsyncQueue(ByteBuffer buffer) throws IOException;

Just registers the buffer on a write queue; no completion notification is needed.

2) public void writeToAsyncQueue(ByteBuffer buffer, AsyncWriteCallbackHandler callbackHandler) throws IOException;

Registers the buffer on a write queue and provides a callback handler, which will be notified on write completion, or if any error happens during the write operation.

3) public void writeToAsyncQueue(ByteBuffer buffer, AsyncWriteCallbackHandler callbackHandler, AsyncQueueDataProcessor writePreProcessor) throws IOException;

Does the same as (2), but also registers a writePreProcessor associated with the buffer, which will be called when it's the buffer's turn to be written. The writePreProcessor can change the buffer content, or substitute a different buffer. I found this feature useful when implementing SSL async write queue support: I register a buffer with raw data and an SSLWritePreProcessor, which is called just before the buffer is written and encrypts its content according to SSL requirements.

4) public void writeToAsyncQueue(ByteBuffer buffer, AsyncWriteCallbackHandler callbackHandler, AsyncQueueDataProcessor writePreProcessor, boolean isCloneByteBuffer) throws IOException;

This method extends the functionality of (3) by adding the possibility to clone the source buffer before adding it to the queue, so the clone, not the original, is queued. Cloning can be useful when we don't want to care much about memory management: it's possible to call this method with clone set to "true" and continue to work with the passed buffer (clear it, modify it...) right away.

But for better memory management, I would suggest not using cloning, and instead releasing the allocated buffer from the AsyncWriteCallbackHandler.

The API also provides operations similar to (1)-(4) with one more parameter: SocketAddress dstAddress. These operations are not supported for the TCP and SSL transports, which are always implemented using connected channels; but with the UDP transport, where a channel may be unconnected, it's possible to set the destination address where the buffer will be sent.

*If the queue is empty, all write operations first try to write the buffer directly to the destination channel on the same thread.

Looking forward to hearing feedback, so we can fix/improve the current implementation :)
Next time I'll try to describe the Grizzly asynchronous read queue feature...

Tuesday Nov 06, 2007

FastInfoset 1.2.2 was released!

FastInfoset version 1.2.2 was recently released!

It includes fixes for the following bugs:

  • Parsing of escaped characters was not normalized (bug report)
  • Hexadecimal encoding algorithm fix: typo in transformation table
  • Stateful mode fix: when the vocabulary max memory limit was reached, the vocabulary stopped working not just for new values, but even for values that were already indexed.

Here are links to the complete distribution package and, separately, to the sources.

Monday Oct 22, 2007

SOAP/TCP makes Web services faster

Last month two important events happened related to SOAP/TCP.
First of all, as part of project Metro, the SOAP/TCP 1.0 implementation is now available in the Glassfish v2 release.

The next great thing: SOAP/TCP is now .NET interoperable!!! Read more on Paul's blog.

Now I want to dwell on what advantages SOAP/TCP brings to Web services.

We always mentioned that the main target for SOAP/TCP is performance, but we never showed any numbers. Here I would like to publish some benchmark results I've got using Japex.

Testing environment configuration:

  • localhost
  • Sun X4100 server, 4 CPUs
  • Sun Solaris 10
  • Sun JDK 1.5.0_13
  • Glassfish v2 release
  • 16 client threads

Web Service test operations: 

Most of the tests are echo-like (except getOrder), meaning the same data set (depending on the test) is sent to and returned from the Web service. For example, the echoInt test executes a Web service operation which implements the following logic:

public int echoInt(int value) {
    return value;
}

Here is a complete description of the tests (provided by Ken Cavanaugh):

  • EchoVoid: no data sent or returned
  • EchoString: "Hello World" sent and returned
  • EchoStruct: a small class with an int, a float, and a String sent and returned
  • EchoSynthetic: a small class with a String, a Struct (as in EchoStruct), and a variable-length byte[] sent and returned.  The size of the byte array is indicated by the name of the test.
  • EchoArray: basically an ArrayList of an Item, with each Item initialized to the same data (so Strings are shared in the marshaled representation).  The number of Items is given in the name of the test.  An Item contains:
    • String
    • String
    • float
    • int
    • Location, which is 3 Strings
    • XMLGregorianCalendar (standard Java type; the BigInt or BigDecimal fields are null)
  • GetOrder sends a very small request (customer ID and order number), and returns a large Order class containing:
    • int
    • int
    • XMLGregorianCalendar
    • Customer, which is a class containing:
      • int
      • String
      • String
      • String
      • XMLGregorianCalendar
      • String
      • String
      • Address (which contains 7 Strings)
      • Address
    • ArrayOfLineItem which contains an ArrayList of LineItem, each of which contains:
      • int
      • int
      • int
      • String
      • int
      • float 

Test results are measured in "transactions per second %", with the SOAP/TCP benchmark result taken as 100%. The red bar represents the SOAP/TCP results, the blue bar is FastInfoset-encoded SOAP over HTTP, and the green bar is regular SOAP/HTTP.

Let's start with the results for smaller testcases.

benchmark results (small testcases)


We can see that for small testcases, where the HTTP headers can be even bigger than the SOAP message, SOAP/TCP's header compactness plays a very big role: we get over 50% performance improvement with SOAP/TCP compared to FastInfoset over HTTP, and over 70% compared to SOAP/HTTP.

Medium testcases:

benchmark results (medium testcases)

Here we can see that for echoStruct and echoArray40, SOAP/TCP has less of an advantage over FastInfoset/HTTP, because the SOAP messages are getting bigger and header processing becomes relatively cheaper. Though for bigger messages, SOAP/TCP's stateful characteristic (FastInfoset's stateful mode) does its work: in SOAP/TCP the FastInfoset vocabulary is not transferred with each message, but is kept alive on both client and server for as long as the SOAP/TCP connection is alive.

On the echoSynthetic testcases we can see SOAP/TCP's throughput advantages: as the transferred byte buffer size changes from 4K to 12K, the difference between the SOAP/TCP and HTTP results grows.

Big testcases:

benchmark results (big testcases)

On bigger tests, when transferring complex structures, SOAP/TCP also wins owing to its stateful FastInfoset vocabulary and better throughput. Though its advantage is not as huge as for smaller messages, a 15% performance improvement is still significant.

So, from the graphs above we can see that SOAP/TCP makes Web services faster: over 50% for smaller structures, 15% for bigger ones.

Monday Jul 16, 2007

SSL client implementation in Grizzly.

When I implemented SSL client support for Grizzly, it was the first time I used SSLEngine in practice. I have to say it took several days until I had a more or less complete understanding of how it should be implemented - first of all the handshake phase and the bunch of related SSLEngine states.
I tried to hide all SSLEngine operations and let the developer use just the common Grizzly ConnectorHandler methods: connect, read, write, close. One high-level method was added: handshake. All the listed methods can be used in either a blocking or non-blocking manner.
SSLConnectorHandler requires a special kind of CallbackHandler - SSLCallbackHandler, which additionally has an onHandshake(IOEvent) callback method, called when a non-blocking handshake operation completes.
How to configure SSLConnectorHandler... It can be used with the default SSL configuration, meaning the SSL artifacts (keystore, truststore) will be loaded according to system properties. However, you can set a custom SSL configuration using either SSLConnectorHandler.configure(SSLConfig) or SSLConnectorHandler.setSSLContext(SSLContext). The new SSL configuration becomes active after the SSLConnectorHandler is (re)connected.

Here is a small code fragment showing a simple use case for SSLConnectorHandler (blocking mode):

// create new standalone SSLConnectorHandler instance
final SSLConnectorHandler sslConnector = new SSLConnectorHandler();

// initialize buffers
final ByteBuffer sendBuffer = ByteBuffer.wrap("sending data".getBytes());
final ByteBuffer receiveBuffer = ByteBuffer.allocate(256);
try {
        // Step #1: Connect
        sslConnector.connect(new InetSocketAddress(HOST, PORT));
        assert sslConnector.isConnected();

        // Step #2: Handshake
        assert sslConnector.handshake(receiveBuffer, true);

        // Step #3: Write some data
        sslConnector.write(sendBuffer, true);

        // Step #4: Read response
, true);
        System.out.println("Response length: " + receiveBuffer.remaining());

} catch (IOException e) {
    // log exception
}
A non-blocking implementation is more difficult, but lets you build more scalable applications.
Here is an example of a Grizzly non-blocking SSL client implementation.

PS: To be able to run the samples, you need keystore and truststore files (info...)

Wednesday Jun 20, 2007

Connection Management/Cache in Grizzly 1.5

Recently the Connection Management/Cache feature was added to Grizzly 1.5!
The actual implementation was provided by Ken Cavanaugh and was originally targeted at CORBA, but Ken kindly proposed integrating it into SOAP/TCP and Grizzly even before he did it for CORBA :)
Connection Management/Cache provides the ability to control active connections and reclaim least recently used (LRU) connections when a limit is reached. On the client side it also serves as a connection cache, which makes it possible to reuse existing connections to a target host instead of creating new ones.
After the integration of Connection Management/Cache into Grizzly, all of these features are available in the Grizzly framework.

For the server part, we need to create an instance of CacheableSelectionKeyHandler and set it on the Controller:
Controller controller = new Controller();
SelectionKeyHandler cacheableKeyHandler = new CacheableSelectionKeyHandler(highWaterMark, numberToReclaim);
controller.setSelectionKeyHandler(cacheableKeyHandler);
highWaterMark: the maximum number of active inbound connections the Controller will handle
numberToReclaim: the number of LRU connections that will be reclaimed once the highWaterMark limit is reached.

On the client side, Connection Management/Cache is represented by several classes: CacheableConnectorHandlerPool, CacheableConnector:
Controller controller = new Controller();
ConnectorHandlerPool cacheableHandlerPool = new CacheableConnectorHandlerPool(controller, highWaterMark, numberToReclaim, maxParallel);
controller.setConnectorHandlerPool(cacheableHandlerPool);
................... complete controller initialization ...............................
ConnectorHandler clientConnector = controller.acquireConnectorHandler(Controller.Protocol.TCP);
clientConnector.connect(....);   // Initiates a new connection or reuses one from the cache
......................... execute client operations ....................................
clientConnector.close();          // If the limit is not reached, the connection will be put back into the cache
highWaterMark: the maximum number of active outbound connections the Controller will handle
numberToReclaim: the number of LRU connections that will be reclaimed once the highWaterMark limit is reached.
maxParallel: the maximum number of active outbound connections to a single destination (usually <host>:<port>).


From the simple example above we can see that Connection Management/Cache is very easy to use in Grizzly; it doesn't require significant changes compared to the default Grizzly configuration.

Friday Jun 15, 2007

Multi Selector threads in Grizzly 1.5

The default Grizzly Controller configuration uses a single Selector thread for handling Channel events, which means OP_ACCEPT and OP_READ events are handled by one Selector. To process a bigger number of concurrent requests this is certainly not enough, and we may need to spread Channels among several Selector threads.
It's very easy to do in Grizzly: Controller.setReadThreadsCount(threadNum);
It lets the main (Controller) Selector thread process only OP_ACCEPT events, while each accepted Channel is registered for reading on a different Selector thread taken from a queue. The default multi-Selector-thread implementation works in a round-robin manner.

Here is an example of how to start a Grizzly Controller with several Selector threads:
        final ProtocolFilter readFilter = new ReadFilter();
        final ProtocolFilter echoFilter = new EchoFilter();
        TCPSelectorHandler selectorHandler = new TCPSelectorHandler();
        final Controller controller = new Controller();
        controller.setSelectorHandler(selectorHandler);
        controller.setReadThreadsCount(readThreadsCount);
                new DefaultProtocolChainInstanceHandler(){
            public ProtocolChain poll() {
                ProtocolChain protocolChain = protocolChains.poll();
                if (protocolChain == null){
                    protocolChain = new DefaultProtocolChain();
                return protocolChain;


Wednesday Jun 13, 2007

Strange "Software caused connection abort: recv failed"

Recently I was investigating a Grizzly-related bug, which appeared when Grizzly tried to redirect (HTTP response code 302) a Java HTTPS client. The bug appeared only when the client tried to send some payload data to the server, not just HTTP headers.
Finally I realized that it has nothing to do with either Grizzly or HTTPS/SSL. The following scenario fully reproduces the problem with plain Sockets:
1) Client -> Server: client sends request chunk#1
2) Server -> Client: server reads chunk#1, processes, writes response
3) Server closes connection
4) Client -> Server: client sends request chunk#2
5) Client tries to read the server response: " Software caused connection abort: recv failed"

As a result, the client will not be able to read a single byte of the server's response!
What's interesting is that on step (4) we don't see any exception, but this step (sending data over a connection which was closed by the peer) is the reason for the exception we get on (5). Removing step (4) from the scenario, or moving it before step (3), makes everything work.
The actual exception looks strange to me, as it is thrown from a different place than the one causing the problem.

Thursday Jun 07, 2007

How to make Web service use TCP transport with JAX-WS

SOAP/TCP right now is part of the WSIT project, but it's also possible to use it with the plain JAX-WS framework.
There are lots of places which should be improved; however, nothing stops us from using it right now :)
Several steps, especially on the server side, are similar or even identical to the WSIT ones, but the client-side configuration is completely different.

So, "how to deploy a web service to Glassfish and make it reachable via SOAP/TCP?"
  1. JavaEE 5 or JSR109 WS deployment.
    Actually, all Web services deployed to Glassfish using this deployment method become TCP transport enabled. They are automatically registered on the SOAP/TCP listener and are ready to process requests.
    *One known issue with JavaEE 5 deployment of a WS: if the WS is deployed to a Web container, it is required to have a web.xml descriptor in which it is specified to load on startup. Only in this case will the SOAP/TCP listener get to know about the Web service.

  2. Servlet deployment.
    For this deployment type, things are not much different. One more servlet context listener should be added to the Web service's (web application's) web.xml file [1],
    and the servlet should be set to load on startup; otherwise the SOAP/TCP listener will not know about the Web service until WSStartupServlet is started.

"How to make the client choose SOAP/TCP as the transport to work with a web service?"

We need to configure the JAX-WS client to be aware of the SOAP/TCP transport. For that, we have to provide the following service provider file [2], located on the client classpath and containing line [3].
The Web service WSDL file provided to a client should contain the Web service endpoint address (/definitions/service/port/soap:address@location) with the SOAP/TCP-specific protocol scheme [4]. Only in that case will the SOAP/TCP transport pipe be attached and used for sending/receiving SOAP messages. Currently, server-side WSDL generation doesn't support changing the protocol scheme for SOAP/TCP, so this change should be made manually.

[1] <listener-class></listener-class>
[2]  META-INF/services/


