Friday Aug 30, 2013

Introducing Elastic JMS

In WebLogic 12.1.2, we enhanced the way that you can configure JMS servers, stores, and subdeployments so that the JMS subsystem can automatically scale with the Managed Servers in a cluster. We call this Elastic JMS. My friend Maciej Gruszka calls it Magic JMS!

 Here are some details:

JMS Servers: In releases before WebLogic Server 12.1.2, each JMS Server was individually configured and targeted at a single Managed Server. It didn’t matter whether or not that Managed Server was part of a cluster. Starting in WebLogic Server 12.1.2, you can target a JMS Server at a cluster. Under the covers, WebLogic spins up a JMS Server on each Managed Server in the cluster. If you add or remove servers from the cluster, JMS Servers are added or removed automatically.
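As an example, targeting a JMS Server at a cluster takes only a couple of WLST calls. This is a minimal sketch assuming an online WLST session; the names (MyCluster, ClusterJMSServer) and connection details are placeholders, not a copy-paste recipe:

```python
# WLST (online) sketch: create a JMS Server and target it at a cluster
# instead of an individual Managed Server.  Names are placeholders.
connect('admin_user', 'admin_password', 't3://adminhost:7001')
edit()
startEdit()
jmsServer = cmo.createJMSServer('ClusterJMSServer')
jmsServer.addTarget(getMBean('/Clusters/MyCluster'))   # WebLogic runs one instance per Managed Server
save()
activate()
```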

WebLogic Persistent Stores: Like JMS Servers, in releases before WebLogic Server 12.1.2, each WebLogic Persistent Store (file store or JDBC store) was individually configured and targeted to a single Managed Server, clustered or not. In WebLogic Server 12.1.2, you can target a WebLogic Persistent Store at a cluster. Under the covers, WebLogic creates a store instance on each Managed Server in the cluster. Each instance of a file store uses the same path to either a shared file system or to a local file. Each instance of a JDBC store uses the same JDBC data source, but gets its own underlying tables.
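A cluster-targeted store follows the same pattern. The sketch below assumes a file store on a shared path and reuses the placeholder names from the previous sketch; a JDBC store would set a data source instead of a directory:

```python
# WLST sketch: cluster-targeted file store, paired with the cluster-targeted JMS Server.
edit()
startEdit()
fileStore = cmo.createFileStore('ClusterFileStore')
fileStore.setDirectory('/shared/jms/stores')             # same path used by every store instance
fileStore.addTarget(getMBean('/Clusters/MyCluster'))
# Point the JMS Server at the store so each JMS Server instance pairs with
# the store instance on its own Managed Server.
getMBean('/JMSServers/ClusterJMSServer').setPersistentStore(fileStore)
save()
activate()
```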

Subdeployments: A subdeployment defines the list of JMS Servers that will host a queue or topic. In releases before WebLogic Server 12.1.2, when you defined a subdeployment for a distributed queue or topic, you listed each JMS Server in the cluster. When you scaled up the cluster by adding a Managed Server and a corresponding JMS Server, you also needed to update the subdeployment with the new JMS Server. Starting in WebLogic Server 12.1.2, subdeployments are much simpler. You can list a single JMS Server that is targeted at the cluster. When you scale up the cluster, the distributed queue is automatically extended to the new JMS Server instance without any changes to the subdeployment.
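To see how little the subdeployment now contains, here is a hedged sketch of a JMS system module with a single-entry subdeployment and a uniform distributed queue; the names (ClusterJMSModule, ClusterSub, OrdersQueue) are placeholders:

```python
# WLST sketch: JMS module targeted at the cluster, a subdeployment listing just
# the one cluster-targeted JMS Server, and a distributed queue that scales with the cluster.
edit()
startEdit()
module = cmo.createJMSSystemResource('ClusterJMSModule')
module.addTarget(getMBean('/Clusters/MyCluster'))
sub = module.createSubDeployment('ClusterSub')
sub.addTarget(getMBean('/JMSServers/ClusterJMSServer'))  # a single JMS Server, not a per-server list
udq = module.getJMSResource().createUniformDistributedQueue('OrdersQueue')
udq.setJNDIName('jms/OrdersQueue')
udq.setSubDeploymentName('ClusterSub')
save()
activate()
```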

Pulling it all together: By using cluster-targeted JMS Servers and Persistent Stores, you get some nice benefits:

  • Simplified configuration – Even initial JMS configuration is much simpler than it was in the past: no need for individually configured JMS Servers and related items.
  • Elastic scalability – As you scale the cluster, the JMS services automatically scale with it. 
  • Support for Dynamic Clusters – Because Dynamic Clusters require homogeneous targeting of services, the new configuration options make it possible to run JMS on Dynamic Clusters.

Check out the documentation at http://docs.oracle.com/middleware/1212/wls/JMSAD/dynamic_messaging.htm or see my video for more details.

Wednesday Jul 31, 2013

JMS JDBC Store Performance Using Multiple Connections

This article is a bit different than the normal data source articles I have been writing because it's focused on an application use of WebLogic Server (WLS) data sources, although the application is still part of WLS. Java Message Service (JMS) supports the use of either a file store or a JDBC store for persistent messages (the JDBC store can also be used for Transaction Log information, diagnostics, etc.). The file store is easier to configure, generates no network traffic, and is generally faster. However, the JDBC store is popular because most customers have invested in High Availability (HA) solutions for their database, like RAC, Data Guard, or Golden Gate, so using a JDBC store makes HA and migration much easier (for a file store, the disk must be shared or migrated). Some work has been done in recent releases to improve JDBC store performance and take advantage of RAC clusters.

It's obvious from the JDBC store configuration that JMS uses just a single table in the database. JMS use of this table is sort of like a queue, so there are hot spots at the beginning and end of the table as messages are added and consumed - that might get fixed in WLS in the future, but it is a consideration for the current store performance. Since the beginning, JMS has been single-threaded on a single database connection. Starting in WLS 10.3.6 (see this link), the store can run with multiple worker threads, each with its own connection, by setting Worker Count on the JDBC Persistent Store page in the console. There are no documented restrictions or recommendations about how to set this value. Should we set it to the maximum allowed of 1000 so we get a lot of work done? Not quite ...
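If you prefer WLST to the console, the same setting is a single attribute on the JDBC store. This is a sketch; the attribute name WorkerCount mirrors the console label, and the store name is a placeholder, so treat the exact path and name as assumptions:

```python
# WLST (online) sketch: let an existing JDBC store use several worker threads,
# each with its own database connection.  'MyJDBCStore' is a placeholder name.
edit()
startEdit()
store = getMBean('/JDBCStores/MyJDBCStore')
store.setWorkerCount(8)       # start small; see the tuning discussion below
save()
activate()
```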

Since there is contention between the connections, using too many connections is not good. To begin with, there is overhead in managing the work among the threads, so if JMS is lightly loaded, it's worse to use multiple connections. Under high load, we found that for one configuration, 8 threads gave the best performance, but 4 was almost as good at half the resources, using the Oracle Thin driver on an Oracle database (more about database vendors below). Optimization for queues with multiple connections is a big win, with some gains as high as 200%. Handling a topic is another ... well, topic. It's complicated by the fact that a message can go to a single or multiple topics, and we want to aggregate acknowledgements to reduce contention and improve performance. Topic testing saw more modest gains of around 20%, depending on the test.

How about different data source types? It turns out that when using a RAC cluster and updates are scattered across multiple instances, there is too much overhead in locking and cache fusion across the RAC instances. That makes it important that all of the reserved connections are on a single RAC instance. For a generic data source, there is nothing to worry about - you have just one node. In the case of a multi data source (MDS), you can get all connections on a single instance by setting the AlgorithmType to "Failover" (see this link). All connections will be reserved on the first configured generic data source within the MDS until a failure occurs; then the failed data source will be marked as suspended and all connections will come from the next generic data source in the MDS. You don't want to use an AlgorithmType of "Load-Balancing". In the case of Active GridLink (AGL), it's actually difficult to get connection affinity to a single node, and without it, performance can seriously degrade. Some benchmarks saw a performance loss of 50% when using multiple connections on different instances. For WLS 10.3.6 and 12.1.1, it is not recommended to use AGL with multiple connections. In WLS 12.1.2, this was fixed so that JMS reserves all connections on the same instance. If there is a failure, all of the reserved connections are closed, a new connection is reserved using Runtime Connection Load Balancing (RCLB), hopefully on a lightly loaded instance, and then the rest of the connections are reserved on the same instance. In one benchmark, performance improved by 200% when using multiple connections on the same instance.
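For the multi data source case, the algorithm setting that pins connections to one RAC instance looks roughly like this in WLST; the MDS name MyMDS and the member data source names are placeholders:

```python
# WLST (online) sketch: configure a multi data source to hand out all connections
# from one member ("Failover") instead of spreading them ("Load-Balancing").
edit()
startEdit()
cd('/JDBCSystemResources/MyMDS/JDBCResource/MyMDS/JDBCDataSourceParams/MyMDS')
set('AlgorithmType', 'Failover')            # all reservations come from the first live member
set('DataSourceList', 'ds-rac1,ds-rac2')    # member generic data sources, one per RAC instance
save()
activate()
```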

How about different database vendor types? Your performance will vary based on the application and the database. The discussion above regarding RAC cluster performance is interesting and may have implications for any application that you are moving to a cluster. Another consideration specific to the Oracle database is indexing the table for efficient access by multiple connections. In this case, it is recommended to use a reverse key index for the primary key. The bytes in the key are reversed so that keys that would normally be grouped together, because their left-most bytes are the same, are distributed more evenly (imagine using a B-tree to store a bunch of sequential numbers, left-padded with 0's, for example).
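As a rough illustration of why the reverse key index helps (plain Python, nothing WebLogic-specific): sequential, zero-padded keys share a long prefix, so a normal B-tree index piles them into the same leaf blocks, while reversing the key bytes scatters them:

```python
# Sequential message keys differ only in their last characters...
seq_keys = ['0000001041', '0000001042', '0000001043', '0000001044']

# ...so a standard index clusters them in the same blocks (a hot spot under
# concurrent inserts), while a reverse key index effectively stores the reversed
# value, spreading consecutive keys across the index.
print([k[::-1] for k in seq_keys])   # ['1401000000', '2401000000', '3401000000', '4401000000']
```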

Bottom line: this feature may give you a big performance boost, but try it with your own application, database, and hardware, and vary the worker count to find what works best.




Thursday May 19, 2011

WebLogic MDB and Distributed Queue Elastic Capabilities

Recently, a customer asked me how MDBs interact with a Distributed Queue, especially after an Automatic Service Migration event occurs. We have some great technology built into WebLogic Server to make migration and failover seamless. I thought there might be some other folks interested in knowing more about this, too.