Friday Mar 19, 2010

Glassfish "hangs"?

From time to time I see messages on the Glassfish mailing list reporting that Glassfish hangs at some point. I'd like to provide some simple instructions which will help the Glassfish team find the problem and help you ASAP.

  1. Don't panic. In most cases the "hang" is caused by a custom (web) application.
  2. When you find that Glassfish is not responsive - please take a thread dump snapshot:
    • Find the pid of the Glassfish process. Glassfish 2.x: jps | grep PELaunch, or Glassfish 3.x: jps | grep ASMain
    • Force a thread dump to be written to the jvm.log file: kill -3 <pid> (a programmatic alternative is sketched right after this list)
  3. Locate the jvm.log file (usually found in the instance log directory): GF/domains/domain1/logs/jvm.log
  4. Check the jvm.log file and make sure the listed threads are not blocked inside your (web) application classes.
  5. If they are not - report the problem on the glassfish users mailing list or forum, providing as detailed info/observations on the use case as possible.
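
If kill -3 is inconvenient (for example on Windows), the same kind of information can be captured from inside the JVM with the standard Thread.getAllStackTraces() API. This is just a plain-JDK sketch; how you expose it (e.g. via a small diagnostic servlet) is up to you:

import java.util.Map;

public class ThreadDumper {
    // Prints a stack trace of every live thread to System.err,
    // similar in spirit to the kill -3 output in jvm.log.
    public static void dumpAllThreads() {
        Map<Thread, StackTraceElement[]> traces = Thread.getAllStackTraces();
        for (Map.Entry<Thread, StackTraceElement[]> entry : traces.entrySet()) {
            Thread t = entry.getKey();
            System.err.println("Thread: " + t.getName() + " state: " + t.getState());
            for (StackTraceElement frame : entry.getValue()) {
                System.err.println("    at " + frame);
            }
        }
    }
}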

Thursday Mar 18, 2010

new String(byte[]): Charset or charset name?

Today I was investigating the benchmark results of a simple Grizzly 2.0 based Web server; for some reason it showed a noticeable performance gap compared to the 1.9.x implementation. I suspected something was happening on the HTTP side, because the Grizzly 2.0 core shows equal or better results in different tests.

After spending some time on it I realized that the problem is caused by the simple String constructor new String(byte[] buffer, Charset charset), which is used in Grizzly 2.0. Grizzly 1.9.x uses new String(byte[] buffer, String charsetName). I assumed that by passing a Charset to the String constructor I'd be able to optimize it and skip the charset resolution phase. But that was only half true. Even though I was able to skip charset resolution, it turned out that the new String(byte[], Charset) constructor uses a completely different execution path than the charset-name constructor. Here is the code where each of them ends up:

new String(byte[] buffer, String charsetName) path:

static char[] decode(String charsetName, byte[] ba, int off, int len)
        throws UnsupportedEncodingException
{
    StringDecoder sd = (StringDecoder) deref(decoder);
    String csn = (charsetName == null) ? "ISO-8859-1" : charsetName;
    if ((sd == null) || !(csn.equals(sd.requestedCharsetName())
                          || csn.equals(sd.charsetName()))) {
        sd = null;
        try {
            Charset cs = lookupCharset(csn);
            if (cs != null)
                sd = new StringDecoder(cs, csn);
        } catch (IllegalCharsetNameException x) {}
        if (sd == null)
            throw new UnsupportedEncodingException(csn);
        set(decoder, sd);
    }
    return sd.decode(ba, off, len);
}

new String(byte[] buffer, Charset charset) path:

static char[] decode(Charset cs, byte[] ba, int off, int len) {
    StringDecoder sd = new StringDecoder(cs, cs.name());
    byte[] b = Arrays.copyOf(ba, ba.length);
    return sd.decode(b, off, len);
}

Byte copying? Why?

From the javadoc I understand that new String(byte[], Charset) has a new(?) feature: "This method always replaces malformed-input and unmappable-character sequences with this charset's default replacement string". Maybe that's the reason?

Anyway, here is the difference I observe in my profiler: new String(byte[] buffer, Charset charset) and new String(byte[] buffer, String charsetName)

Constructor with the Charset is 7x slower ???!!!

Of course the numbers might differ depending on the byte[] length; in my case the length was 11.
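
For anyone who wants to reproduce this locally, here is a minimal micro-benchmark sketch (my own, not the profiler run above). The iteration count and the 11-byte payload are arbitrary, and on a modern JIT you'd normally want a proper harness and a warm-up phase:

import java.nio.charset.Charset;

public class StringCtorBench {
    private static final byte[] DATA = "hello world".getBytes(); // 11 bytes, as in my test
    private static final Charset UTF8 = Charset.forName("UTF-8");

    public static void main(String[] args) throws Exception {
        final int iterations = 1000000;
        long t0 = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            new String(DATA, UTF8);              // Charset-based constructor
        }
        long t1 = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            new String(DATA, "UTF-8");           // charset-name-based constructor
        }
        long t2 = System.nanoTime();
        System.out.println("new String(byte[], Charset): " + (t1 - t0) / 1000000 + " ms");
        System.out.println("new String(byte[], String):  " + (t2 - t1) / 1000000 + " ms");
    }
}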

Monday Mar 15, 2010

Grizzly 2.0: asynchronous HTTP server and client samples

We've completed the initial implementation of the Grizzly 2.0 HTTP module. The main difference from Grizzly 1.x is that we separated the HTTP parsing and processing logic, so the HTTP module has two HttpFilter implementations, client and server, which are responsible for asynchronous parsing/serializing of HTTP messages. The developer is responsible for implementing just the HTTP processing logic. Here is the general schema of HTTP message processing:

(read phase):  TransportFilter  - (Buffer) -> [SSLFilter] - (Buffer) -> HttpFilter - (HttpContent) -> CustomHttpProcessorFilter

(write phase): CustomHttpProcessorFilter - (HttpPacket) -> HttpFilter - (Buffer) -> [SSLFilter] - (Buffer) -> TransportFilter

The big advantage of the Filter approach is that it's possible to reuse the HttpFilter logic for both HTTP client and HTTP server code.
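
To make the schema above more concrete, here is a rough wiring sketch of the server side. The class names used for the server-side HttpFilter implementation (shown here simply as HttpFilter) and for CustomHttpProcessorFilter are placeholders taken from the diagram, not necessarily the actual Grizzly 2.0 class names:

TCPNIOTransport transport = TransportFactory.getInstance().createTCPTransport();
transport.getFilterChain().add(new TransportFilter());
// transport.getFilterChain().add(new SSLFilter(sslCodec));    // optional TLS layer
transport.getFilterChain().add(new HttpFilter());                 // server-side HttpFilter: Buffer <-> HttpContent/HttpPacket
transport.getFilterChain().add(new CustomHttpProcessorFilter());  // your HTTP processing logic
transport.bind(PORT);
transport.start();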

I've also created a sample, which includes:

1) *Simple Web server, which is implemented as a custom Grizzly 2.0 Filter and is able to serve HTTP requests for static resources (local files).

2) *Simple HTTP client, which downloads a remote HTTP resource to a local file.

The next step for Grizzly 2.0 is to simplify Web server and HTTP client development by providing a higher-level API. On the server side it will be similar to the GrizzlyWebServer/GrizzlyAdapter/GrizzlyRequest/GrizzlyResponse API known from Grizzly 1.9.x, and for the client side we will also need to come up with some reasonable API.

Feedback is very much appreciated.

* Just a small note that both the server and the client operate in non-blocking mode.

Thursday Mar 11, 2010

Release: Grizzly 1.0.33

We've released Grizzly 1.0.33, which has fixes for the following 2 issues:

"Empty request entity received"
https://grizzly.dev.java.net/issues/show_bug.cgi?id=11452

"file cache breaks virtual server docroot discrimination"
CR6910122

You can download it from:

http://download.java.net/maven/2/com/sun/grizzly/grizzly-framework-http/1.0.33/

Here are instructions on how to patch an existing GFv2.1.x installation with the latest Grizzly binaries.

Thanks to all who helped!

Thursday Feb 25, 2010

Grizzly 2.0: simple authentication example

I've just created a simple example which shows how to implement simple authentication for a string-based protocol using Grizzly 2.0.

The server side is implemented as a chain of the following Filters:

TransportFilter <- Buffer -> StringFilter <- String -> MultiLineFilter <- MultiLinePacket w/ auth headers -> ServerAuthFilter <- MultiLinePacket w/o auth headers -> EchoFilter

The client-side filter chain is the following:

TransportFilter <- Buffer -> StringFilter <- String -> MultiLineFilter <- MultiLinePacket w/ auth headers -> ClientAuthFilter <- MultiLinePacket w/o auth headers -> ClientFilter

The most interesting thing happens on the client side, inside ClientAuthFilter. If it turns out that we're sending a message via a non-authenticated client, ClientAuthFilter suspends the sending of the message(s) and initiates the authentication process; once authentication is completed, ClientAuthFilter resumes the suspended message writes.

So the communication happens on the following layers:

Client                                             Server
TransportFilter                                    TransportFilter
StringFilter                                       StringFilter
MultiLineFilter                                    MultiLineFilter
ClientAuthFilter  <-- Authentication messages -->  ServerAuthFilter
ClientFilter      <-- Custom messages -->          EchoFilter

Friday Oct 16, 2009

Patching Glassfish 2.1.x with the latest Grizzly 1.0.x releases

From time to time the Grizzly team works on issues found by Glassfish 2.1.x users. Very often the issue has already been (or might have been) fixed in the latest Grizzly 1.0.x release, and to avoid forcing the user to perform a complete Glassfish upgrade, we ask them to try the latest Grizzly 1.0.x and check whether it fixes the problem.

Here I'd like to describe how you can patch an existing Glassfish 2.1.x with the latest Grizzly 1.0.x.

1) Check the Grizzly version which is currently integrated into Glassfish.

To check the current Grizzly version, we need to set the system property "com.sun.enterprise.web.connector.grizzly.displayConfiguration" to true. We can do that by editing the Glassfish instance's domain.xml file (usually located under GF/domains/domain1/config/domain.xml); specifically, we need to find the XML element <java-config> and add the child element [1].

After restarting the Glassfish instance, in server.log (usually located under GF/domains/domain1/logs/server.log) we'd see output like [2], where the Grizzly version is 1.0.30. Please note that if you see output similar to [2], but without the Grizzly version in it, it means your version is older than 1.0.30, so you might be interested in a Grizzly upgrade :)

2) Find and download Grizzly 1.0.x binary file.

You can receive a link to the required Grizzly binaries directly from the Grizzly team, or find the latest Grizzly binaries in the Maven repository [3].

3) Apply the latest Grizzly 1.0.x binary.

Assume we've downloaded the Grizzly 1.0.x binary and saved it as "/home/my/grizzly-framework-http-1.0.30.jar". Now we need to put this jar file on the Glassfish classpath prefix, to force Glassfish to use the latest Grizzly classes instead of the embedded ones. Again we have to edit the Glassfish instance's domain.xml file (usually located under GF/domains/domain1/config/domain.xml), find the XML element <java-config> and edit/add its XML attribute classpath-prefix to make it point to the saved Grizzly binary, so it will look like [4].

4) Restart Glassfish

It probably also makes sense to repeat step 1, to make sure we're running the latest Grizzly version.

That's it :) 

[1] <jvm-options>-Dcom.sun.enterprise.web.connector.grizzly.displayConfiguration=true</jvm-options> 

[2] 
Grizzly 1.0.30 running on Mac OS X-10.5.8 under JDK version: 1.6.0_15-Apple Inc.
port: 8080
maxThreads: 5
ByteBuffer size: 4096
useDirectByteBuffer: 8192
maxKeepAliveRequests: 250
keepAliveTimeoutInSeconds: 30
Static File Cache enabled: false
Pipeline : com.sun.enterprise.web.portunif.PortUnificationPipeline
Round Robin Selector Algorithm enabled: false
Round Robin Selector pool size: 1
Asynchronous Request Processing enabled: true|#]

[3] http://download.java.net/maven/2/com/sun/grizzly/grizzly-framework-http/

[4] <java-config classpath-prefix="/home/my/grizzly-framework-http-1.0.30.jar" classpath-suffix="" debug-enabled="true" debug-options="-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=9009" env-classpath-ignored="true" java-home="${com.sun.aas.javaRoot}" javac-options="-g" rmic-options="-iiop -poa -alwaysgenerate -keepgenerated -g" system-classpath=""> 

Wednesday May 27, 2009

Grizzly 2.0: Streaming and Messaging

Originally, when designing the Grizzly 2.0 Connection API, we were thinking that Streams could cover all the possible scenarios developers may have and could easily be used with both TCP and UDP transports. But when we started to implement the UDP transport, we came to the conclusion that this is not actually true. UDP is certainly a message-oriented protocol, but we assumed it would be easy to emulate that using Streams like:

streamWriter.writeInt(...);
streamWriter.writeLong(...);
streamWriter.flush();

for a connected UDP Connection, or

streamWriter.writeInt(...);
streamWriter.writeLong(...);
streamWriter.flush(dstAddress); 

for non-connected UDP Connections. A similar trick could be done for reading UDP messages.

So, from the API point of view, Streams could work fine even for message-oriented protocols, but a problem appears when we build a UDP server (i.e. have a non-connected UDP socket). In that case the server may receive packets from different clients, but as Streams are not thread-safe, we can process only a single client packet at a time and have to block the other clients until the first packet is completely processed. This creates a noticeable performance issue for a UDP server, because many UDP packets may get lost due to long processing delays. For such a use case we need the possibility to not block the connection while processing an incoming message, but to make the connection available to process the next message. This is not doable with Streams because of the mentioned thread-safety limitation, so in Grizzly 2.0 M2 we've added a message-based API for Connections.

Message-based API:

The Message-based API is reachable via the Connection interface, which now extends the Readable and Writable interfaces, and is represented by a set of read and write methods:

  • Future<ReadResult<Buffer, L>> read(...);
  • Future<WriteResult<Buffer, L>> write(...);

where <L> represents the source/destination address type (SocketAddress for the NIO TCP and UDP transports).
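
For example, a UDP echo on the server side could look roughly like the sketch below. The ReadResult/WriteResult accessor names (getMessage(), getSrcAddress()), the no-argument read() form and the write(address, message) overload are my assumptions based on the signatures above, so treat this as an illustration rather than the exact API:

// Read one datagram, then echo it back to whoever sent it (hypothetical accessors).
Future<ReadResult<Buffer, SocketAddress>> readFuture = connection.read();
ReadResult<Buffer, SocketAddress> result = readFuture.get(10, TimeUnit.SECONDS);

Buffer message = result.getMessage();                 // the received datagram
SocketAddress peerAddress = result.getSrcAddress();   // who sent it

// Echo the same Buffer back to the sender, without blocking the connection.
Future<WriteResult<Buffer, SocketAddress>> writeFuture =
        connection.write(peerAddress, message);
writeFuture.get(10, TimeUnit.SECONDS);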

OK, with Connections it should be straightforward how to use the Message-based API, but what about FilterChains? In the original Grizzly samples, the TransportFilter, which we add first to a FilterChain, provided the StreamReader/StreamWriter objects to be used by the next Filters in the chain. So the TransportFilter was oriented to work just in Stream mode. Now it's possible to create a TransportFilter which will work in Message mode:

transport.getFilterChain().add(new TransportFilter(TransportFilter.Mode.Message)); 

If we add the TransportFilter like above, it will not provide Streams to be used by the rest of the FilterChain, but a message (Buffer), which could be accessed via the FilterChainContext:

Buffer message = (Buffer) context.getMessage();

so the next Filters in the chain should deal directly with the message, not with Streams.

Here is a simple example of using the Message-based API for a UDP echo server.

Stream-based API:

We discussed the Stream-based API in one of my previous blogs. Here I just want to add that for the TCP transport we recommend using the Stream API, because it makes the implementation easier for most use cases and takes care of internal buffer management, optimizing memory allocation, etc.

Let's try to summarize the Streaming and Messaging APIs. Of course, in each particular case it's up to you to decide which API to use. Here are just our recommendations:

  • The Stream API is recommended for the TCP transport and could also be used for connected UDP Connections;
  • The Message API is recommended for non-connected UDP Connections (UDP servers).

Tuesday May 26, 2009

Grizzly 2.0 M2 Release

Next Grizzly 2.0 milestone is reached.

Here is the list of new features available with Grizzly 2.0 M2:

  • UDP transport,
  • New Connection API, which lets Connections operate in "message" mode. This API fits better for message-oriented transports like UDP,
  • Performance improvements,
  • Codec API and specifically SSL and FilterChain Codecs,
  • Extended Transport API to support multi-binding, added support for unbind(),
  • Significantly improved documentation,
  • All-in-one OSGi bundles for framework, http and http-servlet are available. You can launch them by just doing java -jar grizzly-<module>

In the next blog I'll provide more details about the Connection "stream" and "message" APIs and give a simple example for the UDP transport.

 

Wednesday Apr 22, 2009

Grizzly 2.0 StreamReader/StreamWriter API

When you work with Grizzly 2.0, you'll most probably come to a situation where you need to read or write some data on the network :))

The core Grizzly 2.0 I/O API is based on 3 main entities:

  • Connection, which represents any kind of Transport connection. For example, for the TCP NIO Transport, one Connection represents one NIO SocketChannel.
  • StreamReader represents the Connection's input stream, using which it's possible to read data from the Connection.
  • StreamWriter represents the Connection's output stream, used to write data to the Connection.

So to read data from a Connection, we read it from the Connection's StreamReader: connection.getStreamReader().readXXX(); Similarly, to write some data to a Connection, we use its StreamWriter: connection.getStreamWriter().writeXXX(...);

The StreamReader and StreamWriter API has a set of readXXX() and writeXXX() methods to work with Java primitives and arrays of primitives. For example, to write a float value to a Connection we call the corresponding StreamWriter write method: connection.getStreamWriter().writeFloat(0.3f);

How can Streams work in non-blocking mode?

In Grizzly 2.0 we can work with a StreamReader and StreamWriter either in blocking or non-blocking mode. The mode can be checked and set using the following methods: isBlocking(), setBlocking(boolean).

When Streams operate in blocking mode, all their methods which may work asynchronously (i.e. return a Future) will work in blocking mode, and the returned Future will always have a ready result.

As for non-blocking mode, let's start with...

StreamReader. 

In order to use a StreamReader in non-blocking mode, we may want to check whether the StreamReader has enough data to be read: streamReader.availableDataSize(); Once the StreamReader has enough data, we can safely read it without blocking.

It is also possible to ask the StreamReader to notify us once it has enough data, or to provide any other custom Condition which the StreamReader should check each time new data arrives, and notify us once this Condition is met. For example:

Future<Integer> future = streamReader.notifyAvailable(10, completionHandler);

The StreamReader returns a Future object, which will be marked as done once the StreamReader has 10 bytes available for reading. At the same time we pass a CompletionHandler, which will be notified once the StreamReader has 10 bytes available. So it's possible to have both poll- and push-like notifications with a StreamReader.

We can also ask the StreamReader for a notification when more complex conditions are met:

Future<Integer> future = streamReader.notifyCondition(customCondition, completionHandler); 

This is the way we implemented the SSL handshake mechanism, so the SSLStreamReader notifies us when the handshake status becomes NEED_WRAP.

StreamWriter. 

Non-blocking mode for a StreamWriter means that stream flushing will be done in a non-blocking way. For example:

Future<Integer> future = streamWriter.flush(); 

where the flush() operation returns a Future<Integer>, which could be used to check whether the bytes were flushed and how many bytes were written to the Connection. It is also possible to pass a CompletionHandler to the flush() operation, which will be notified by the StreamWriter once the bytes have been flushed.
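
Putting the pieces together, a tiny non-blocking exchange could look like the sketch below. It only uses the methods mentioned above; the readFloat() call, the null CompletionHandler argument and the timeout values are assumptions for illustration:

// Write a float without blocking, then wait (via Future) until 4 bytes arrive and read the reply.
StreamWriter writer = connection.getStreamWriter();
writer.setBlocking(false);
writer.writeFloat(0.3f);
Future<Integer> flushFuture = writer.flush();
flushFuture.get(10, TimeUnit.SECONDS);          // how many bytes were written

StreamReader reader = connection.getStreamReader();
reader.setBlocking(false);
// Ask to be notified when at least 4 bytes (one float) are available; no CompletionHandler here.
Future<Integer> availableFuture = reader.notifyAvailable(4, null);
availableFuture.get(10, TimeUnit.SECONDS);
float reply = reader.readFloat();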

 

Tuesday Apr 21, 2009

Grizzly 2.0 M1 Release

Finally we're ready to make the first milestone release of Grizzly 2.0.
The M1 release has 4 modules:

  • Framework: Grizzly 2.0 core
  • RCM: Resource Consumption Filter implementation, based on Grizzly 2.0 API
  • HTTP: Lightweight Web container, based on Grizzly 2.0 API
  • Servlet: Lightweight Servlet container, based on Grizzly 2.0 API. 

There are 2 modules which provide examples of the basic features provided by Grizzly 2.0:

  • Framework-samples: "echo" client/server application, SSL example, Connection lifecycle control using Grizzly 2.0 API, custom I/O Strategy example.
  • HTTP-samples: simple Grizzly lightweight webserver using Grizzly core API (WebFilter) and higher level GrizzlyWebServer API. Both HTTP and HTTPS scenarios.

The following new features are included in this release:

  • New I/O API, based on Grizzly StreamReader and StreamWriter
  • Extensible memory management API, which makes it possible to implement either simple or complex memory management logic and use it with Grizzly.
  • Extensible Strategy API for processing Connection events. Grizzly bundles 4 built-in strategies:
    • SameThreadStrategy, which processes all I/O events in the same thread where they were received (the selector thread), so no additional worker threads are used. This strategy could be very useful and optimal for processing a small number of clients, and lets the server have great response times.
    • LeaderFollowerStrategy, which performs the I/O processing in the same thread (the selector thread) and delegates the selector polling logic to another thread.
    • WorkerThreadStrategy, which processes all I/O events in a separate worker thread.
    • SimpleDynamicStrategy, which dynamically applies one of the above 3 strategies, depending on the number of SelectionKeys selected last time.
  • WebFilter. Grizzly lightweight webcontainer implemented as Filter.
  • GrizzlyWebServer. High level API to work with Grizzly lightweight webcontainer.

Later I will write separate blogs describing each feature....

Stay tuned... 

 

Tuesday Dec 09, 2008

Glassfish V3: Asynchronous HTTP responses

We've completed the integration of the latest Grizzly 1.9.0 binaries into Glassfish v3.

One of the most interesting features which now becomes available in GFv3 is asynchronous HTTP responses. Recently I described how this feature could be used with standalone Grizzly 1.9.0. Now it is also applicable to GFv3.

To enable asynchronous HTTP responses in GFv3, we need to add the following line to domain.xml (/domain/configs/config/java-config):

<jvm-options>-Dcom.sun.grizzly.http.asyncwrite.enabled=true</jvm-options>

It's also possible to change the size of the ByteBuffer pool used for cloning ByteBuffers before an asynchronous write (details). Again, we can do this by adding a domain.xml (/domain/configs/config/java-config) property:

<jvm-options>-Dcom.sun.grizzly.http.asyncwrite.maxBufferPoolSize=NEW_SIZE</jvm-options>

PS: This feature was added after the GFv3 Prelude release, so it is currently available on GFv3 trunk only.

Thursday Dec 04, 2008

Simple Grizzly 2.0

From time to time I see mails from people who are looking forward to using Grizzly 2.0, but don't know where to start. Currently we don't have much documentation on that, and mostly I have to point people to the unit test code to give them some idea about Grizzly 2.0.

Here I'll try to show a very simple scenario of how Grizzly 2.0 could be used. I will not show FilterChains, CallbackHandlers... nothing like that. I will show how Grizzly 2.0 could be used instead of plain Sockets. So, even if you have just a basic understanding of how Java Sockets work, you'll understand how Grizzly 2.0 could be used instead.

Why use Grizzly 2.0 instead of Sockets?

  1. Grizzly 2.0 provides a very simple API, based on Future and CompletionHandler, which lets you work with connections asynchronously (in a non-blocking manner).
  2. Grizzly 2.0 uses asynchronous read and write queues, which means you don't have to care about simultaneous reads and writes from different threads.
  3. You'll understand the Grizzly 2.0 API better, which will help you write more complex Grizzly-based systems.

 

Client, based on Grizzly 2.0:

TCPNIOTransport transport = TransportFactory.getInstance().createTCPTransport();
// Enable standalone mode
transport.setProcessorSelector(new NullProcessorSelector());
// Start transport
transport.start();
// Connect to a server
ConnectFuture connectFuture = transport.connect(host, port);
Connection connection = connectFuture.get();
// Initialize data, which will be sent
MemoryManager memoryManager = transport.getMemoryManager();
Buffer buffer = MemoryUtils.wrap(memoryManager, "Hello World!", Charset.forName("UTF-8"));
// Send data (async write queue will be used underneath)
Future writeFuture = connection.write(buffer);
// We can wait until write will be completed
writeFuture.get(timeout, TimeUnit.SECONDS);
// Initialize receive buffer
Buffer receiveBuffer = memoryManager.allocate(SIZE);
//Read response
Future readFuture = connection.read(receiveBuffer);
/*
 * Tip. If we need to read the whole buffer (readFully), use the following:
 * Future readFuture = connection.read(receiveBuffer, null, null,
 *         new MinBufferSizeCondition(receiveBuffer.remaining()));
 */

readFuture.get(timeout, TimeUnit.SECONDS);

......................
connection.close();

 

Socket-like server, based on Grizzly:

TCPNIOTransport transport = TransportFactory.getInstance().createTCPTransport();
// Enable standalone mode
transport.setProcessorSelector(new NullProcessorSelector());
// Bind transport to listen on specific port
transport.bind(PORT);
// Start transport
transport.start();
while(!transport.isStopped()) {
     // Accept new client connection
     AcceptFuture acceptFuture = transport.accept();
     Connection connection = acceptFuture.get();
     // Process accepted connection (may be in separate thread)
     processAcceptedConnection(connection);
}
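
The sample leaves processAcceptedConnection() to the reader. Purely for illustration, here is one possible (hypothetical) shape of it: an echo handler that relies only on the Connection read/write/close and MemoryManager calls shown in the client sample above (the Connection.getTransport() call and the flip() on the Buffer are assumptions):

// Hypothetical echo implementation of processAcceptedConnection().
static void processAcceptedConnection(Connection connection) throws Exception {
     MemoryManager memoryManager = connection.getTransport().getMemoryManager();
     Buffer buffer = memoryManager.allocate(1024);

     // Read a chunk of data from the client (async read queue underneath)
     Future readFuture = connection.read(buffer);
     readFuture.get(10, TimeUnit.SECONDS);

     // Echo the received bytes back to the client
     buffer.flip();
     Future writeFuture = connection.write(buffer);
     writeFuture.get(10, TimeUnit.SECONDS);

     connection.close();
}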

From the samples above we can see that Grizzly 2.0 provides an API similar to Java Sockets, but it lets us work with connections asynchronously! Grizzly may need a couple more initialization lines than Sockets:

  1.  TCPNIOTransport transport = TransportFactory.getInstance().createTCPTransport();
  2.  transport.setProcessorSelector(new NullProcessorSelector());


The "1" just initializes the TCP transport.
The "2" configures transports, so it will be used in standalone mode. FilterChains, CallbackHandlers and other possible Processors will not be invoked.

And again, if you want a working example of how Grizzly could be used in standalone mode (in a Socket-like manner), please take a look here.

Grizzly 1.9: ExecutorService as common thread pool interface

Yesterday we released Grizzly 1.9.

It's one of the biggest releases we've had so far. Here is the announcement from Jean-Francois with the complete list of what was added in the new release.

Here I'll provide more details on the new Grizzly 1.9 thread pool API (actually, there is nothing new for those who use the java.util.concurrent classes). Since version 1.9 we've stopped supporting the Pipeline API and moved to the more standard and well-known ExecutorService.

So now a developer doesn't have to write a thread pool wrapper based on Pipeline in order to use a custom thread pool implementation, but can directly use their own ExecutorService implementation, which will be responsible for the worker threads' lifecycle.

ExecutorService as the thread pool API simplifies integrating Grizzly with existing platforms/frameworks, which may have their own thread pools (based on ExecutorService) and are now able to share the same thread pool with Grizzly.

Here is an example of how a Grizzly thread pool could be configured and set:

Controller controller = new Controller();
ThreadPoolExecutor threadPool = new ThreadPoolExecutor(corePoolSize, maximumPoolSize,
        keepAliveTime, unit, workQueue, new DefaultWorkerThreadFactory());
threadPool.setMaximumPoolSize(5);
controller.setThreadPool(threadPool);
........................................


private class DefaultWorkerThreadFactory implements ThreadFactory {
     public Thread newThread(Runnable r) {
            Thread thread = new WorkerThreadImpl(null, "WorkerThread", r, initialByteBufferSize);
            thread.setPriority(priority);
            return thread;
     }
}

Tuesday Dec 02, 2008

Grizzly 1.9.0: Asynchronous HTTP responses

In Grizzly 1.9.0, which will be released very soon, we've implemented a new feature for the HTTP module.

Now it is possible to send HTTP responses asynchronously, without blocking a worker thread. What does it mean and what advantages do we get?

In earlier Grizzly versions, when we sent an HTTP response, the current thread was blocked until the whole response had been written to the network. This is fine when the response is relatively small and the server is not overloaded with processing HTTP requests.

But when the server is under load and is not able to write HTTP responses fast enough, we block a thread and wait until the OS becomes ready to write the next chunk of data to the network, so the write operation becomes a bottleneck for our server's scalability.

In Grizzly 1.9 it is possible to leverage the advantages offered by the Grizzly asynchronous write queue. Now, if the channel cannot write more data to the wire, instead of blocking a thread we add the HTTP response message to a write queue. The asynchronous write queue is processed once the OS signals that the channel is available for writing.

The asynchronous mode for HTTP responses is turned off by default. Here are the ways it can be turned on:

1) Programmatically, using SelectorThread

SelectorThread selectorThread = new SelectorThread();
selectorThread.setPort(PORT);
selectorThread.setAsyncHttpWriteEnabled(true);

2) Using system property

-Dcom.sun.grizzly.http.asyncwrite.enabled=true


Though the asynchronous mode for HTTP responses is very useful and has huge advantages compared to the blocking mode, it has one "detail" :) which we have to be careful with. I'm talking about ByteBuffer cloning.

When we cannot write more data to the channel, we add the ByteBuffer to the asynchronous write queue, which means we cannot continue to work with this ByteBuffer until it is released from the asynchronous write queue. So, to process the next HTTP request, we have to create a new ByteBuffer. Basically, to increase server scalability by using the asynchronous write queue, we pay with memory. In Grizzly 1.9 we use a simple ByteBuffer pool with a limited size, to avoid creating new ByteBuffers all the time. The size of the ByteBuffer pool can be tuned:

1) Programmatically:

SocketChannelOutputBuffer.setMaxBufferPoolSize(int size); 

2) Using system properties:

-Dcom.sun.grizzly.http.asyncwrite.maxBufferPoolSize=<size> 

Grizzly 2.0: SSL support

Recently we've added SSL support to Grizzly 2.0.

Unlike Grizzly 1.x, Grizzly 2.0 doesn't have a special transport called SSL or TLS. The SSL support is implemented using the new Transformer API we've introduced, which could be used either standalone or within a FilterChain.

Here is a brief description of the classes:

SSLEncoderTransformer: encodes a plaintext input Buffer into a TLS/SSL encoded output Buffer.
SSLDecoderTransformer: decodes a TLS/SSL encoded Buffer into a plaintext data Buffer.
SSLCodec: encapsulates the encoder and decoder transformers together with the SSL configuration.

As I mentioned, it is possible to use SSL both standalone and within a FilterChain.

1) Standalone

In standalone mode, the developer should explicitly initialize the SSL connection by executing the SSL handshake. Then it's possible to use the Connection I/O methods read/write to send or receive data.

Connection connection = null;

// Initiate the SSLCodec
SSLCodec sslCodec = new SSLCodec(createSSLContext());

TCPNIOTransport transport = TransportFactory.instance().createTCPTransport();
try {
    transport.bind(PORT);
    transport.start();

    // Connect client
    ConnectFuture future = transport.connect("localhost", PORT);
    connection = (TCPNIOConnection) future.get(10, TimeUnit.SECONDS);

    // Run handshake
    Future handshakeFuture = sslCodec.handshake(connection);

    // Wait until handshake will be completed
    handshakeFuture.get(10, TimeUnit.SECONDS);

    MemoryManager memoryManager = transport.getMemoryManager();
    Buffer message = MemoryUtils.wrap(memoryManager, "Hello world!");

    // Write the message with SSLCodec.getEncoder() parameter.
    Future writeFuture = connection.write(message, sslCodec.getEncoder());
    writeFuture.get();

    // Obtain the Buffer, which corresponds to the SSLEngine requirements.
    Buffer receiverBuffer = SSLResourcesAccessor.getInstance().obtainAppBuffer(connection);

    // Read the message with SSLCodec.getDecoder() parameter
    Future readFuture = connection.read(receiverBuffer, sslCodec.getDecoder());
    .....................................................

2) FilterChain mode

In FilterChain mode, the developer should just add an SSLFilter to the FilterChain. The SSLFilter itself has an SSLCodec, which in its turn has the SSL encode/decode transformers and the SSL configuration.

Connection connection = null;
SSLCodec sslCodec = new SSLCodec(createSSLContext());

TCPNIOTransport transport = TransportFactory.instance().createTCPTransport();
transport.getFilterChain().add(new TransportFilter());
// Add SSLFilter
transport.getFilterChain().add(new SSLFilter(sslCodec));
transport.getFilterChain().add(new EchoFilter());

try {
    transport.bind(PORT);
    transport.start();

    ...................

The last thing I wanted to mention is SSL configuration. How can we configure SSL?

As I said, the SSLCodec represents the core of the SSL processing; it contains the encoder/decoder Transformers and the SSL configuration. In order to configure the SSLCodec, it is possible to pass a ready SSLContext, which could be created in your custom code, or to use the Grizzly 2.0 utility class SSLContextConfigurator.

Here is an example of how SSLContextConfigurator could be used:

SSLContextConfigurator sslContextConfigurator = new SSLContextConfigurator();
// "cl" is the ClassLoader used to locate the test keystores
ClassLoader cl = getClass().getClassLoader();

URL cacertsUrl = cl.getResource("ssltest-cacerts.jks");
if (cacertsUrl != null) {
    sslContextConfigurator.setTrustStoreFile(cacertsUrl.getFile());
}

URL keystoreUrl = cl.getResource("ssltest-keystore.jks");
if (keystoreUrl != null) {
    sslContextConfigurator.setKeyStoreFile(keystoreUrl.getFile());
}

return sslContextConfigurator.createSSLContext();

If you have any questions, please ask them on the Grizzly mailing lists :)
