Tyrus client – shared container

The current “trunk” version contains one notable feature – a shared client container. It can save a significant portion of resources on the client side if you are creating lots of client connections, and there is almost no drawback.

This feature was added as a response to TYRUS-275 and it roughly mirrors what some other containers are doing in their JSR 356 implementations. The specification does not really state which system resources should be reusable (that’s ok, I think this is an implementation detail), but it also does not cover thread pool related settings, which might introduce some inconsistencies among implementations.

The WebSocket client implementation in Tyrus re-creates the client runtime whenever WebSocketContainer#connectToServer is invoked. This approach gives us some perks like out-of-the-box isolation and a relatively low thread count (currently we have 1 selector thread and 2 worker threads). It also gives you the ability to stop the client runtime – one Session instance is tied to exactly one client runtime, so we can stop it when the Session is closed. This seems like a good solution for most WebSocket client use cases – you usually use the Java client from an application which uses it for communicating with the server side, and you typically don’t need more than 10 instances (my personal estimate is that more than 90% of applications won’t use more than 1 connection). There are several reasons for this – it is just a client, and it needs to preserve server resources – one WebSocket connection means one TCP connection and we don’t really want clients to consume more than needed. The previous statement may be invalidated by the WebSocket multiplexing extension, but for now, it still holds.
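For illustration, a plain client connection using the default (per-connection) runtime might look like the sketch below. The endpoint class and the ws:// URI are placeholders of mine, not part of the original example:

import java.net.URI;

import javax.websocket.ClientEndpoint;
import javax.websocket.Session;

import org.glassfish.tyrus.client.ClientManager;

@ClientEndpoint
public class SimpleClient {

    public static void main(String[] args) throws Exception {
        ClientManager client = ClientManager.createClient();

        // default mode: this call creates and starts a dedicated client runtime
        // (1 selector thread + 2 worker threads) just for this connection
        Session session = client.connectToServer(SimpleClient.class,
                URI.create("ws://localhost:8025/echo"));

        // closing the session also stops its client runtime
        session.close();
    }
}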

On the other hand, WebSocket client implementations in some other containers took another (also correct) approach – they share one client runtime for all client connections. That means they might not have this strict one session / one runtime policy and they cannot really give the user a way to control system resources, but it surely has another advantage – it can handle many more open connections. Thread pools are shared among client sessions, which may or may not have some unforeseen consequences, but if it is implemented correctly, it should outperform the Tyrus solution mentioned in the previous paragraph in some use cases, like the one mentioned in TYRUS-275 – performance tests. The reporter created a simple program which used the WebSocket API to create clients and connect to a remote endpoint, and measured how many clients he could create (or in other words: how many parallel client connections can be created; I guess the original test case was meant to measure the possible number of concurrent clients on the server side, but that does not really matter for this post). The Tyrus implementation lost compared to some others, and it was exactly because it did not have the shared client runtime capability.

I hope the common ground is established and explained, so we can take a more detailed look at the Tyrus implementation of the shared client runtime. The default client implementation uses Grizzly as the client container and all the work was done there – the client SPI did not need to be changed (phew), so basically only some static fields and some basic synchronisation were required for this to work. The Grizzly part was really easy – it is only about sharing the created TCPNIOTransport instance. Another small enhancement related to this feature is the shared container timeout – if the shared client runtime does not handle any connections for a set period of time, it will just shut down and clear all used resources. Everything will be automatically recreated when needed. The default value is 30 seconds.

How can you use this feature?

ClientManager client = ClientManager.createClient();
client.getProperties().put(GrizzlyClientContainer.SHARED_CONTAINER, true);

And you are done. You might also want to specify the shared container idle timeout (in seconds):

client.getProperties().put(GrizzlyClientContainer.SHARED_CONTAINER_IDLE_TIMEOUT, 5);

And last but not least, you might want to specify the thread pool sizes used by the shared container (please use this only when you know what you are doing; Grizzly by default does not limit the maximum number of used threads, so if you do set a limit, please make sure the thread pool size fits your purpose):

client.getProperties().put(GrizzlyClientSocket.SELECTOR_THREAD_POOL_CONFIG, ThreadPoolConfig.defaultConfig().setMaxPoolSize(3));
client.getProperties().put(GrizzlyClientSocket.WORKER_THREAD_POOL_CONFIG, ThreadPoolConfig.defaultConfig().setMaxPoolSize(10));
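Putting it all together, a minimal sketch of a shared-container client could look like the following. The echo endpoint URI and the endpoint class are placeholder assumptions, and the exact package of GrizzlyClientContainer may differ between Tyrus versions:

import java.net.URI;

import javax.websocket.ClientEndpoint;
import javax.websocket.OnMessage;
import javax.websocket.Session;

import org.glassfish.tyrus.client.ClientManager;
// package name is an assumption; adjust to the Tyrus version you are using
import org.glassfish.tyrus.container.grizzly.client.GrizzlyClientContainer;

@ClientEndpoint
public class SharedContainerClient {

    @OnMessage
    public void onMessage(String message) {
        System.out.println("Received: " + message);
    }

    public static void main(String[] args) throws Exception {
        ClientManager client = ClientManager.createClient();

        // enable the shared client runtime and shut it down after 5 seconds
        // without any open session
        client.getProperties().put(GrizzlyClientContainer.SHARED_CONTAINER, true);
        client.getProperties().put(GrizzlyClientContainer.SHARED_CONTAINER_IDLE_TIMEOUT, 5);

        // both sessions share one Grizzly transport (selector and worker thread pools)
        Session s1 = client.connectToServer(SharedContainerClient.class,
                URI.create("ws://localhost:8025/echo"));
        Session s2 = client.connectToServer(SharedContainerClient.class,
                URI.create("ws://localhost:8025/echo"));

        s1.getBasicRemote().sendText("hello");

        s1.close();
        s2.close();
    }
}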

And that’s it for today. You can use the mentioned features with Tyrus 1.4-SNAPSHOT and newer. Feel free to comment on this post or drop me a note at users@tyrus.java.net with any feedback or suggestions on how this can be improved.
