Tuesday May 26, 2009

Grizzly 2.0 M2 Release

The next Grizzly 2.0 milestone has been reached.

Here is the list of new features available with Grizzly 2.0 M2:

  • UDP transport,
  • New Connection API, which lets Connections operate in "message" mode. This API is a better fit for message-oriented transports like UDP,
  • Performance improvements,
  • Codec API, specifically the SSL and FilterChain Codecs,
  • Extended Transport API to support multi-binding, and added support for unbind(),
  • Significantly improved documentation,
  • All-in-one OSGi bundles for framework, http and http-servlet are available. You can launch them by just doing java -jar grizzly-<module>
In the next blog post I'll provide more details about the Connection "stream" and "message" APIs and give a simple example of the UDP transport.

 

Wednesday Apr 22, 2009

Grizzly 2.0 StreamReader/StreamWriter API

When you work with Grizzly 2.0, you'll most probably run into a situation where you need to read or write some data on the network :))

The core Grizzly 2.0 I/O API is based on 3 main entities:

  • Connection, which represents any kind of Transport connection. For example, for the TCP NIO transport one Connection represents one NIO SocketChannel.
  • StreamReader, which represents the Connection's input stream and is used to read data from the Connection.
  • StreamWriter, which represents the Connection's output stream and is used to write data to the Connection.

So, to read data from a Connection, we read from the Connection's StreamReader: connection.getStreamReader().readXXX(); Similarly, to write data to a Connection, we write to its StreamWriter: connection.getStreamWriter().writeXXX(...);

The StreamReader and StreamWriter APIs have a set of readXXX() and writeXXX() methods for working with Java primitives and arrays of primitives. For example, to write a float value to a Connection we call the corresponding StreamWriter write method: connection.getStreamWriter().writeFloat(0.3f);
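Just to illustrate, here is a minimal sketch of a blocking exchange of primitives (readInt()/readFloat()/writeInt() are assumed members of the readXXX()/writeXXX() families, and peerConnection stands for the Connection on the remote side):

// sender side: write two primitives and flush them to the Connection
StreamWriter writer = connection.getStreamWriter();
writer.setBlocking(true);        // blocking mode: calls return once the data is written
writer.writeInt(42);
writer.writeFloat(0.3f);
writer.flush();

// receiver side: read the primitives back in the same order
StreamReader reader = peerConnection.getStreamReader();
reader.setBlocking(true);
int intValue = reader.readInt();        // 42
float floatValue = reader.readFloat();  // 0.3f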

How can Streams work in non-blocking mode?

In Grizzly 2.0 we can work with StreamReader and StreamWriter either in blocking or non-blocking mode. The mode can be checked and set using the following methods: isBlocking(), setBlocking(boolean).

When Streams operate in blocking mode, all of their methods which may work asynchronously (i.e. return a "Future") will work in blocking mode, and the returned Future will always have a ready result.
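In other words (a tiny sketch, reusing the StreamWriter flush() call described further below):

streamWriter.setBlocking(true);
Future<Integer> future = streamWriter.flush(); // blocks until all bytes are written
assert future.isDone();  // in blocking mode the returned Future is already completed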

As for non-blocking mode, let's start with...

StreamReader. 

In order to use a StreamReader in non-blocking mode, we may want to check whether the StreamReader has enough data to be read: streamReader.availableDataSize(); Once the StreamReader has enough data, we can safely read it without blocking.
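A poll-style check might look like the following sketch (MESSAGE_SIZE is a hypothetical constant for the number of bytes we expect, and readInt() is an assumed readXXX() variant):

streamReader.setBlocking(false);
if (streamReader.availableDataSize() >= MESSAGE_SIZE) {
    // enough bytes have already arrived, so the reads below won't block
    int header = streamReader.readInt();
    ...
}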

It is also possible to ask the StreamReader to notify us once it has enough data, or to provide any other custom Condition, which the StreamReader will check each time new data arrives, notifying us once the Condition is met. For example:

Future<Integer> future = streamReader.notifyAvailable(10, completionHandler);

The StreamReader returns a Future object, which will be marked as done once the StreamReader has 10 bytes available for reading. At the same time we pass a CompletionHandler, which will be notified once the StreamReader has those 10 bytes available. So it's possible to have both poll- and push-like notifications with a StreamReader.
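For instance, a caller could block on the returned Future with a timeout and then read the data once it's buffered (a rough sketch; readByteArray() is assumed to be one of the readXXX() variants for arrays of primitives):

Future<Integer> future = streamReader.notifyAvailable(10, completionHandler);
future.get(5, TimeUnit.SECONDS);   // wait (at most 5 seconds) until 10 bytes arrive

byte[] data = new byte[10];
streamReader.readByteArray(data);  // safe now: the bytes are already buffered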

We can also ask the StreamReader to notify us when more complex conditions are met:

Future<Integer> future = streamReader.notifyCondition(customCondition, completionHandler); 

This is how we implemented the SSL handshake mechanism: the SSLStreamReader notifies us when the handshake status becomes NEED_WRAP.

StreamWriter. 

Non-blocking mode for a StreamWriter means that stream flushing will be done in a non-blocking way. For example:

Future<Integer> future = streamWriter.flush(); 

where the flush() operation returns a Future<Integer>, which can be used to check whether the bytes were flushed and how many bytes were written to the Connection. It is also possible to pass a CompletionHandler to the flush() operation, which will be notified by the StreamWriter once the bytes have been flushed.
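Put together, a non-blocking write might look roughly like this sketch (writeLong() is an assumed writeXXX() variant):

streamWriter.setBlocking(false);
streamWriter.writeLong(System.currentTimeMillis());

Future<Integer> future = streamWriter.flush();  // returns immediately
if (future.isDone()) {
    int bytesWritten = future.get();  // how many bytes were written to the Connection
}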

 

Thursday Dec 04, 2008

Grizzly 1.9: ExecutorService as common thread pool interface

Yesterday we released Grizzly 1.9.

It's one of the biggest releases we've had so far. Here is the announcement from Jean-Francois with the complete list of what was added in the new release.

Here I'll provide more details on the new Grizzly 1.9 thread pool API (actually, there is nothing new for those who use java.util.concurrent objects). Since version 1.9 we have stopped supporting the Pipeline API and moved to the more standard and well-known ExecutorService.

So now a developer doesn't have to write a Pipeline-based thread pool wrapper in order to use a custom thread pool implementation, but can directly use their own ExecutorService implementation, which will be responsible for the worker threads' lifecycle.

ExecutorService, as the thread pool API, simplifies integrating Grizzly with existing platforms/frameworks, which may have their own thread pools (based on ExecutorServices) and are now able to share the same thread pool with Grizzly.
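For example, sharing a pool the application already owns could be as simple as the following sketch (assuming, as described above, that Controller.setThreadPool() accepts any ExecutorService):

// reuse an ExecutorService the application already owns
ExecutorService sharedPool = Executors.newFixedThreadPool(10);

Controller controller = new Controller();
controller.setThreadPool(sharedPool);  // Grizzly workers now run on the shared pool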

Here is an example of how a custom Grizzly thread pool can be configured and set:

Controller controller = new Controller();
ThreadPoolExecutor threadPool = new ThreadPoolExecutor(corePoolSize, maximumPoolSize,
        keepAliveTime, unit, workQueue, new DefaultWorkerThreadFactory());
threadPool.setMaximumPoolSize(5);
controller.setThreadPool(threadPool);
........................................


private class DefaultWorkerThreadFactory implements ThreadFactory {
    public Thread newThread(Runnable r) {
        // WorkerThreadImpl associates a ByteBuffer of the given initial size with the thread
        Thread thread = new WorkerThreadImpl(null, "WorkerThread", r, initialByteBufferSize);
        thread.setPriority(priority);
        return thread;
    }
}

Tuesday Dec 02, 2008

Grizzly 1.9.0: Asynchronous HTTP responses

In Grizzly 1.9.0, which will be released very soon, we've implemented a new feature for the HTTP module.

Now it is possible to send HTTP responses asynchronously without blocking a worker thread. What does this mean, and what advantages do we get?

In earlier Grizzly versions, when we sent an HTTP response, the current thread was blocked until the whole response had been written to the network. This is fine when the response is relatively small and the server is not overloaded with HTTP request processing.

But when the server is under load and is not able to write HTTP responses fast enough... we block a thread and wait until the OS becomes ready to write the next chunk of data to the network, so the write operation becomes a bottleneck for our server's scalability.

In Grizzly 1.9 it is possible to leverage the advantages offered by the Grizzly asynchronous write queue. Now, if the channel can not write more data to the wire, instead of blocking a thread we add the HTTP response message to a write queue. The asynchronous write queue will be processed once the OS signals that the channel is available for writing.

The asynchronous mode for HTTP responses is turned off by default. Here are the ways it can be turned on:

1) Programmatically, using SelectorThread

SelectorThread selectorThread = new SelectorThread();
selectorThread.setPort(PORT);
selectorThread.setAsyncHttpWriteEnabled(true);

2) Using system property

-Dcom.sun.grizzly.http.asyncwrite.enabled=true
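The same flag can also be set from code, as the programmatic equivalent of the command-line switch (assuming the property is set before the HTTP layer is initialized, since it is read at startup):

System.setProperty("com.sun.grizzly.http.asyncwrite.enabled", "true");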


Though asynchronous mode for HTTP responses is very useful and has huge advantages compared to blocking mode, it has one "detail" :) which we have to be careful with. I'm talking about ByteBuffer cloning.

At the moment when we can not write more data to the channel, we add the ByteBuffer to the asynchronous write queue, which means we can not continue to work with this ByteBuffer until it is released from the asynchronous write queue. So, to process the next HTTP request, we have to create a new ByteBuffer. Basically, to increase server scalability by using the asynchronous write queue, we pay with memory. In Grizzly 1.9 we use a simple ByteBuffer pool of limited size to avoid creating new ByteBuffers all the time. The size of the ByteBuffer pool can be tuned:

1) Programmatically:

SocketChannelOutputBuffer.setMaxBufferPoolSize(int size); 

2) Using system properties:

-Dcom.sun.grizzly.http.asyncwrite.maxBufferPoolSize=<size> 

Grizzly 2.0: SSL support

Recently we've added SSL support to Grizzly 2.0.

Unlike Grizzly 1.x, Grizzly 2.0 doesn't have a special transport called SSL or TLS. The SSL support is implemented using the new Transformer API we've introduced, which can be used either standalone or within a FilterChain.

Here is a brief description of the classes:

SSLEncoderTransformer: encodes a plaintext input Buffer into a TLS/SSL-encoded output Buffer.
SSLDecoderTransformer: decodes a TLS/SSL-encoded Buffer into a plaintext data Buffer.
SSLCodec: encapsulates the encoder and decoder transformers together with the SSL configuration.

As I mentioned, it is possible to use SSL both standalone and within a FilterChain.

1) Standalone

In standalone mode, the developer should explicitly initialize the SSL connection by executing the SSL handshake. Then it's possible to use the Connection I/O methods read/write to send or receive data.

Connection connection = null;

// Initialize the SSLCodec
SSLCodec sslCodec = new SSLCodec(createSSLContext());

TCPNIOTransport transport = TransportFactory.instance().createTCPTransport();
try {
    transport.bind(PORT);
    transport.start();

    // Connect the client
    ConnectFuture future = transport.connect("localhost", PORT);
    connection = (TCPNIOConnection) future.get(10, TimeUnit.SECONDS);

    // Run the handshake
    Future handshakeFuture = sslCodec.handshake(connection);

    // Wait until the handshake is completed
    handshakeFuture.get(10, TimeUnit.SECONDS);

    MemoryManager memoryManager = transport.getMemoryManager();
    Buffer message = MemoryUtils.wrap(memoryManager, "Hello world!");

    // Write the message, passing SSLCodec.getEncoder() as a parameter
    Future writeFuture = connection.write(message, sslCodec.getEncoder());
    writeFuture.get();

    // Obtain a Buffer which corresponds to the SSLEngine requirements
    Buffer receiverBuffer = SSLResourcesAccessor.getInstance().obtainAppBuffer(connection);

    // Read the message, passing SSLCodec.getDecoder() as a parameter
    Future readFuture = connection.read(receiverBuffer, sslCodec.getDecoder());
    .....................................................

2) FilterChain mode

In FilterChain mode, the developer just needs to add an SSLFilter to the FilterChain. The SSLFilter itself has an SSLCodec, which in its turn has the SSL encode/decode transformers and the SSL configuration.

Connection connection = null;
SSLCodec sslCodec = new SSLCodec(createSSLContext());

TCPNIOTransport transport = TransportManager.instance().createTCPTransport();
transport.getFilterChain().add(new TransportFilter());
// Add the SSLFilter
transport.getFilterChain().add(new SSLFilter(sslCodec));
transport.getFilterChain().add(new EchoFilter());

try {
    transport.bind(PORT);
    transport.start();

    ...................

The last thing I wanted to mention is SSL configuration. How can we configure SSL?

As I said, the SSLCodec represents the core of SSL processing; it contains the encoder/decoder Transformers and the SSL configuration. In order to configure the SSLCodec, it is possible to pass a ready SSLContext, which can be created in your custom code, or to use the Grizzly 2.0 utility class SSLContextConfigurator.

Here is an example of how SSLContextConfigurator can be used:

SSLContextConfigurator sslContextConfigurator = new SSLContextConfigurator();

// "cl" is the ClassLoader used to locate the keystore resources
URL cacertsUrl = cl.getResource("ssltest-cacerts.jks");
if (cacertsUrl != null) {
    sslContextConfigurator.setTrustStoreFile(cacertsUrl.getFile());
}

URL keystoreUrl = cl.getResource("ssltest-keystore.jks");
if (keystoreUrl != null) {
    sslContextConfigurator.setKeyStoreFile(keystoreUrl.getFile());
}

return sslContextConfigurator.createSSLContext();

If you have any questions, please ask them on the Grizzly mailing lists :)
