This article is a bit different from the normal data source articles I have been writing because it focuses on an application use of WebLogic Server (WLS) data sources, although the application is still part of WLS. The Java Message Service (JMS) supports either a file store or a JDBC store for JMS persistent messages (the JDBC store can also be used for transaction log information, diagnostics, etc.). The file store is easier to configure, generates no network traffic, and is generally faster. However, the JDBC store is popular because most customers have already invested in High Availability (HA) solutions for their database, such as RAC, Data Guard, or GoldenGate, so putting the store in the database makes HA and migration much easier (with a file store, the disk must be shared or migrated). Recent releases have done some work to improve JDBC store performance and take advantage of RAC clusters.
It's obvious from the JDBC store configuration that JMS uses just a single table in the database. JMS uses this table somewhat like a queue, so there are hot spots at the beginning and end of the table as messages are added and consumed; that might be fixed in a future WLS release, but it is a consideration for current store performance. From the beginning, the JMS store was single-threaded on a single database connection. Starting in WLS 10.3.6 (see this link), the store can run with multiple worker threads, each with its own connection, by setting Worker Count on the JDBC Persistent Store page in the console. There are no documented restrictions or recommendations about how to set this value. Should we set it to the maximum allowed value of 1000 so we get a lot of work done? Not quite ...
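If you prefer scripting to the console, the same attribute can be set with WLST. This is a minimal sketch, assuming an admin server at t3://localhost:7001, sample credentials, and a JDBC store already configured under the hypothetical name MyJDBCStore:

```python
# WLST (online) sketch -- run with: java weblogic.WLST set_worker_count.py
# The URL, credentials, store name, and value 8 are all illustrative assumptions.
connect('weblogic', 'welcome1', 't3://localhost:7001')
edit()
startEdit()
cd('/JDBCStores/MyJDBCStore')
# WorkerCount > 1 enables multiple worker threads, each with its own
# database connection (available starting in WLS 10.3.6).
cmo.setWorkerCount(8)
save()
activate()
disconnect()
```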
Since the connections contend with each other, using too many of them hurts. To begin with, there is overhead in distributing the work among the threads, so if JMS is lightly loaded, multiple connections are actually worse. Under high load, we found that for one configuration, 8 threads gave the best performance, but 4 threads were almost as good at half the resources, using the Oracle Thin driver against an Oracle database (more about database vendors below). For queues, multiple connections are a big win, with some gains as high as 200%. Handling a topic is another ... well, topic. It's complicated by the fact that a message can go to one or many topics, and we want to aggregate acknowledgements to reduce contention and improve performance. Topic testing saw more modest gains of around 20%, depending on the test.
How about different data source types? It turns out that when using a RAC cluster and updates are scattered across multiple instances, there is too much overhead in locking and cache fusion across the RAC instances. That makes it important to reserve all of the connections on a single RAC instance. For a generic data source, there is nothing to worry about: you have just one node. For a multi data source (MDS), you can get all connections on a single instance by setting the AlgorithmType to "Failover" (see this link). All connections are then reserved on the first configured generic data source within the MDS until a failure occurs; at that point the failed data source is marked as suspended, and all connections come from the next generic data source in the MDS. You don't want to set the AlgorithmType to "Load-Balancing". In the case of Active GridLink (AGL), it's actually difficult to get connection affinity to a single node, and without it performance can seriously degrade: some benchmarks saw a performance loss of 50% when using multiple connections spread across instances. For WLS 10.3.6 and 12.1.1, using AGL with multiple connections is not recommended. In WLS 12.1.2, this was fixed so that JMS reserves all connections on the same instance. If there is a failure, all of the reserved connections are closed, a new connection is reserved using Runtime Connection Load Balancing (RCLB), hopefully on a lightly loaded instance, and then the rest of the connections are reserved on the same instance. In one benchmark, performance improved by 200% when using multiple connections on the same instance.
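To make the MDS setting concrete, here is a WLST (online) sketch that switches a multi data source to the Failover algorithm. The MDS name JMSStoreMDS, the credentials, and the URL are hypothetical; the JDBCDataSourceParams path follows the standard WLS edit-tree layout:

```python
# WLST (online) sketch -- names and URL are illustrative assumptions.
connect('weblogic', 'welcome1', 't3://localhost:7001')
edit()
startEdit()
# JDBCDataSourceParams of the multi data source holds AlgorithmType;
# "Failover" keeps all reserved connections on one member (one RAC instance)
# until that member fails, instead of spreading them with "Load-Balancing".
cd('/JDBCSystemResources/JMSStoreMDS/JDBCResource/JMSStoreMDS/JDBCDataSourceParams/JMSStoreMDS')
cmo.setAlgorithmType('Failover')
save()
activate()
disconnect()
```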
How about different database vendors? Your performance will vary with the application and the database. The discussion above about RAC cluster performance is interesting and may have implications for any application that you are moving to a cluster. Another consideration specific to the Oracle database is indexing the table for efficient access by multiple connections. In this case, a reverse key index is recommended for the primary key. The bytes in the key are reversed, so keys that would normally be grouped together because their left-most bytes are the same are instead distributed more evenly (imagine using a B-tree to store a run of sequential numbers left-padded with 0's, for example).
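The effect of reversing keys is easy to see outside the database. This small Python illustration (not Oracle internals; the zero-padded sequence keys are just an assumed stand-in for a store's primary key) shows how consecutive keys that share a common prefix, and so would crowd into the same B-tree leaf block, scatter once reversed:

```python
def reverse_key(key: str) -> str:
    """Reverse the characters of a key, as a reverse key index reverses bytes."""
    return key[::-1]

# Ten consecutive, zero-padded sequence keys, like "00001000" .. "00001009".
seq_keys = [f"{n:08d}" for n in range(1000, 1010)]

# Normal ordering: all ten keys share the prefix "000010", so a B-tree
# packs them into the same (rightmost) leaf block -- a hot spot when
# multiple connections insert concurrently.
prefixes = {k[:6] for k in seq_keys}          # 1 distinct prefix

# Reversed ordering: the formerly low-order digit now leads, so the ten
# keys start with ten different characters and spread across the index.
reversed_keys = [reverse_key(k) for k in seq_keys]
rev_prefixes = {k[0] for k in reversed_keys}  # 10 distinct leading chars
```

The trade-off, of course, is that range scans over the original key order no longer touch adjacent index blocks, which is acceptable for a store that looks up rows by exact key.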
Bottom line: this feature may give you a big performance boost, but you should try it with your own application, database, and hardware, varying the worker count to find the sweet spot.