Tuesday Sep 01, 2015

Using Orachk to Clean Up Concrete Classes for Application Continuity

As I described in the blog Part 2 - 12c Database and WLS - Application continuity, Application Continuity (AC) is a great feature for hiding connection failures from the user with minimal changes to the application and configuration. Getting rid of any references to Oracle concrete classes is the first step.

Oracle has a utility program that you can download from MOS to validate various hardware, operating system, and software attributes associated with the Oracle database and more (it’s growing). The program name is orachk. The latest version is 12.1.0.2.4 and starting in this version, there are some checks available for applications running with AC.

There is enough documentation about getting started with orachk so I’ll just say to download and unzip the file. The AC checking is part of a larger framework that will have additional analysis in future versions. This article focuses on the analysis for Oracle concrete classes in the application code.

AC is unable to replay transactions that use the deprecated oracle.sql concrete classes ARRAY, BFILE, BLOB, CLOB, NCLOB, OPAQUE, REF, or STRUCT as a variable type, in a cast, as the return type of a method, or in a constructor call. See New Jdbc Interfaces for Oracle types (Doc ID 1364193.1) for further information about concrete classes. They must be modified for AC to work with the application. See Using API Extensions for Oracle JDBC Types for many examples of using the newer Oracle JDBC types in place of the older Oracle concrete types.

There are three values that control the AC checking (called acchk in orachk) for Oracle concrete classes. They can be set either on the command line or via shell environment variables (or a mix of the two), as follows.

Command line argument: -asmhome jarfilename
Shell environment variable: RAT_AC_ASMJAR
Usage: This must point to a version of asm-all-5.0.3.jar that you download from http://asm.ow2.org/.

Command line argument: -javahome JDK8dirname
Shell environment variable: RAT_JAVA_HOME
Usage: This must point to the JAVA_HOME directory for a JDK8 installation.

Command line argument: -appjar dirname
Shell environment variable: RAT_AC_JARDIR
Usage: To analyze the application code for references to Oracle concrete classes like oracle.sql.BLOB, this must point to the parent directory name for the code. The program will analyze .class files, and recursively .jar files and directories. If you have J2EE .ear or .war files, you must recursively explode these into a directory structure with .class files exposed.

This test works with software classes compiled for Oracle JDBC 11 or 12.

When you run the AC checking, the additional checking about database server, etc. is turned off. It would be common to run the concrete class checking on the mid-tier to analyze software that accesses the Oracle driver.

I chose some old QA test classes that I knew had some bad usage of concrete classes and ran the test on a small subset for illustration purposes. The command line was the following.

$ ./orachk -asmhome /tmp/asm-all-5.0.3.jar -javahome /tmp/jdk1.8.0_40 -appjar /tmp/appdir

This is a screen shot of the report details. There is additional information reported about the machine, OS, database, timings, etc.

From this test run, I can see that my one test class has five references to STRUCT that need to be changed to java.sql.Struct or oracle.jdbc.OracleStruct.

Note that WLS programmers have been using the weblogic.jdbc.vendor.oracle.* interfaces for over a decade to wrap Oracle concrete classes, and this AC analysis doesn't pick that up (there are five weblogic.jdbc.vendor.oracle.* interfaces that correspond to concrete classes). These references should be removed as well. For example, running the following code, which uses the Oracle extension API through the WLS wrapper,

import weblogic.jdbc.vendor.oracle.OracleThinBlob;

rs.next();
OracleThinBlob blob = (OracleThinBlob)rs.getBlob(2);
java.io.OutputStream os = blob.getBinaryOutputStream();

on a Blob column works with the normal driver, but with the replay driver it yields

java.lang.ClassCastException: weblogic.jdbc.wrapper.Blob_oracle_jdbc_proxy_oracle
$1jdbc$1replay$1driver$1TxnReplayableBlob$2oracle$1jdbc$1internal
$1OracleBlob$$$Proxy cannot be cast to weblogic.jdbc.vendor.oracle.OracleThinBlob

It must be changed to use the standard JDBC API

rs.next();
java.sql.Blob blob = rs.getBlob(2);
java.io.OutputStream os = blob.setBinaryStream(1);

So it’s time to remove references to the deprecated Oracle and WebLogic classes and preferably migrate to the standard JDBC APIs, or at least to the new Oracle interfaces. This will clean up the code and get it ready to take advantage of Application Continuity in the Oracle database.

Monday Mar 09, 2015

WLS JDBC Driver Patching

The handling of Oracle driver jar patches is complicated but getting sorted out. This article tries to gather the information in one place with pointers to more details.  There are a few patches that are still not available, marked as TBA (To Be Available) in the tables below.  As these files become available, this page will be updated.

WLS 10.3.6, 12.1.1, and 12.1.2 shipped Database 11.2.0.3 jar files.  However, these are non-standard versions of the jars with additional bug fixes and enhancements to support WLS.  That means that you can't just drop in an 11.2.0.3 patch or upgrade to 11.2.0.4 using standard released jar files. Although support is required to provide 11.2.0.3 patches as needed, it will be difficult, and the recommendation is to upgrade to a special 11.2.0.4 patch that contains 11.2.0.4 plus all of the patches and enhancements in the 11.2.0.3 database jar files shipped with WLS. It's further complicated because WLS started using the Oracle Universal Installer in 12.1.2, requiring a different patch format.

WLS 10.3.6, 12.1.1, 12.1.2, and 12.1.3 also support running with Oracle Database 12c client jar files. For WLS 10.3.6 through 12.1.2, the jar files must be manually installed; there is no installer or patch to automate this upgrade. To get patches, you must be running with the Database 12.1.0.2 jar files; WLS patches will not be generated for the Database 12.1.0.1 jar files. WLS 12.1.3 ships with a pre-release version of Database 12.1.0.2 driver jar files and a patch will be available to upgrade to the production version of these files. After this upgrade, standard database Oracle patch files will work as expected for WLS 12.1.3 (and WLS 12.1.2 with a manual upgrade to database 12.1.0.2 jar files).

Patching the installed Oracle Driver

WLS Release: 10.3.6
Oracle Driver Install: 11.2.0.3.0AS11.1.1.6.0
Database Jar Patch Strategy: 11.2.0.4 WLS patch
Documentation: https://support.oracle.com/epmos/faces/DocumentDisplay?id=1970437.1

WLS Release: 12.1.1
Oracle Driver Install: 11.2.0.3.0AS11.1.1.6.0
Database Jar Patch Strategy: 11.2.0.4 WLS patch
Documentation: https://support.oracle.com/epmos/faces/DocumentDisplay?id=1970437.1

WLS Release: 12.1.2
Oracle Driver Install: 11.2.0.3.0AS12.1.2.0.0
Database Jar Patch Strategy: 11.2.0.4 opatch
Documentation: Patch Request 18557114 for bug 19477203

WLS Release: 12.1.3
Oracle Driver Install: Pre-12.1.0.2
Database Jar Patch Strategy: 12.1.0.2 opatch to bring up to shipping 12.1.0.2; standard opatch for additional bug fixes
Documentation: Patch 20741228: 12.1.3 WLS UPGRADE TO 12.1.0.2 JDBC RELEASE

Running with the Database 12c Driver

WLS Release: 10.3.6
12.1.0.2 Installation: Manual installation of 12.1.0.2
Database Jar Patch Strategy: 12.1.0.2 WLS patch
Documentation for installation: https://support.oracle.com/epmos/faces/DocumentDisplay?id=1564509.1
Documentation for patching: TBA

WLS Release: 12.1.1
12.1.0.2 Installation: Manual installation of 12.1.0.2
Database Jar Patch Strategy: 12.1.0.2 WLS patch
Documentation for installation: https://support.oracle.com/epmos/faces/DocumentDisplay?id=1564509.1
Documentation for patching: TBA

WLS Release: 12.1.2
12.1.0.2 Installation: Manual installation of 12.1.0.2
Database Jar Patch Strategy: 12.1.0.2 opatch
Documentation for installation: https://docs.oracle.com/middleware/1212/wls/JDBCA/ds_12cdriver.htm#JDBCA272
Documentation for patching: Standard patch procedure

WLS Release: 12.1.3
12.1.0.2 Installation: Pre-12.1.0.2 installed; patch to bring up to shipping 12.1.0.2
Database Jar Patch Strategy: 12.1.0.2 opatch
Documentation for installation: https://docs.oracle.com/middleware/1213/wls/JDBCA/ds_12cdriver.htm#JDBCA272 and Patch 20741228: 12.1.3 WLS UPGRADE TO 12.1.0.2 JDBC RELEASE
Documentation for patching: Standard patch procedure

On a related topic, updating non-Oracle driver jar files is covered by the following note.

https://support.oracle.com/epmos/faces/DocumentDisplay?id=1

This includes the DataDirect and MySQL drivers that are shipped in the kit. The jar file is backed up and removed, the new file installed, and the CLASSPATH adjusted if the jar name changes.

You'll notice that releases earlier than WLS 10.3.6 are not discussed.  Releases earlier than WLS 10.3.4 depend only on the ojdbcN.jar file.  It's possible that they will work with the 11.2.0.4 jar file, but that hasn't been certified, and those releases are no longer in error correction support.  WLS 10.3.4 and 10.3.5 depend not only on a specific ojdbc jar file but also on ONS/UCP jars that have the package names renamed.  They will likely not work correctly with the 11.2.0.4 jar file (certainly not Active GridLink).  Since these releases ended error correction support in May 2012, you will need to upgrade to WLS 10.3.6 or 12.1.x to use later driver jar files.

Monday Aug 04, 2014

Setting V$SESSION for a WLS Datasource

Every Oracle database connection runs in the context of a database process called a session.  There is a v$session view that contains a lot of information about the database sessions that are active.  By default, when you use the Oracle Thin driver, the value of v$session.program is set to "JDBC Thin Client".  That separates the Java applications from sqlplus and the scores of database background programs, but doesn't provide much additional information since all of the Java connections look the same.  It's easy to set this and some other values on v$session using connection properties on the Oracle Thin driver.  The following connection properties are supported:  v$session.osuser, v$session.process, v$session.machine, v$session.terminal, and v$session.program.  Setting these will set the corresponding value on the session on the database side.  These values are then available from the v$session view.
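
Outside of WLS, you can set the same connection properties directly on the driver's data source. A minimal standalone sketch (the URL, credentials, and program value are made up for illustration):

import java.sql.Connection;
import java.util.Properties;
import oracle.jdbc.pool.OracleDataSource;

public class SessionProps {
  public static void main(String[] args) throws Exception {
    OracleDataSource ods = new OracleDataSource();
    ods.setURL("jdbc:oracle:thin:@//dbhost:1521/myservice"); // hypothetical URL
    ods.setUser("scott");
    ods.setPassword("tiger");
    Properties props = new Properties();
    props.put("v$session.program", "MyBatchApp"); // shows up in v$session.program
    ods.setConnectionProperties(props);
    try (Connection conn = ods.getConnection()) {
      // query v$session from another session to see the value
    }
  }
}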

The simple approach is to hard-code a value into a normal connection Property.  That's fine if you want to associate a fixed value with a data source.  It's more interesting if you dynamically set the value at runtime. For example, if there are multiple servers running within a domain and the information needs to be server specific, a normal cluster deployment with one fixed value is not useful, and the option of deploying the DataSource to every server individually and then hand-editing each one's descriptor with unique values for these properties is not manageable. You can easily handle this using a System Property.  The value that is specified is taken to be a Java system property that you set on the command line of the application server.  It is retrieved using System.getProperty() and set as the value of the connection property.    There's a new Encrypted Property in WLS 12.1.3; I'll write another article about that.

If you use bin/startWebLogic.sh to start the server, it will put -Dweblogic.Name=${SERVER_NAME} on the command line.  If you set the v$session.program System Property connection property to "weblogic.Name", your session program value will match the WLS server that is making the connection. 

You can set connection properties by editing the data source configuration on the "Connection Pool" tab in the WebLogic administration console.  Properties are set in the Properties and System Properties text boxes.  Let's say that I set four of the values to test values and one to a system property, generating the descriptor fragment as follows.

<property>
  <name>v$session.osuser</name>
  <value>test1</value>
</property>
<property>
  <name>v$session.process</name>
  <value>test2</value>
</property>
<property>
  <name>v$session.machine</name>
  <value>test3</value>
</property>
<property>
  <name>v$session.terminal</name>
  <value>test4</value>
</property>
<property>
  <name>v$session.program</name>
  <sys-prop-value>weblogic.Name</sys-prop-value>
</property>

Alternatively, you could set these values using on-line or off-line WLST.

Here's a fragment of an off-line WLST script.

cd('/JDBCSystemResource/myds/JdbcResource/myds')
cd('JDBCDriverParams/NO_NAME_0')
cd('Properties/NO_NAME_0')
create('v$session.program','Property')
cd('Property')
cd('v$session.program')
set('SysPropValue', 'weblogic.Name')

If $SERVER_NAME is myserver and I then go to run a query, here is the resulting output.

SQL> select program, osuser, process, machine, terminal 
  from v$session where program = 'myserver';
myserver test1 test2 test3 test4

If the server names aren't obvious enough, you could set the program to "WebLogic Server $SERVER_NAME".  You could set -Djdbc.process=<PID> to tie connections to a specific WLS server process. You might want to add the WLS data source name to the program value.  You could set osuser to the Java value "user.name".

 Using system properties can make this a powerful feature for tracking information about the source of the connections, especially for large configurations.


Friday Aug 01, 2014

Using 3rd party JDBC Jar Files

If you look at the documentation for adding a 3rd party JDBC jar file, it recommends adding it to the system classpath by updating commEnv.sh.  See http://docs.oracle.com/middleware/1212/wls/JDBCA/third_party_drivers.htm#JDBCA233.  However, there are more options, each with advantages and disadvantages.  To use some of these options, you need to know something about application archive files.

The general precedence of jar files in the classpath is the following:

system classpath >> DOMAIN_HOME/lib >> APP-INF/lib >> WEB-INF/lib

The highest priority is given to jar files on the system classpath.  The advantage of using the system classpath is that it works for all environments, including standalone applications. 

The rest of the options only work for applications deployed in the application server.  That includes the WLS administration console and WLST but not an application like 'java utils.Schema'.  These approaches work on WLS 10.3.6 and later.

The simplest way to add a jar file when you don't need it accessed in a standalone application is to drop it in DOMAIN_HOME/lib.  This is an easy way to access a driver for a data source in the application server. It won't work for jar files that are shipped with WLS and already part of the system classpath because the system classpath takes precedence.  That's why this article focuses on "3rd party" jar files.  More information and recommendations about using DOMAIN_HOME/lib can be found at http://docs.oracle.com/middleware/1212/wls/WLPRG/classloading.htm#i1096881 .

APP-INF/lib will only work in an EAR file and applies to all applications in the EAR file.  APP-INF is located at the root directory in the EAR file at the same level as META-INF.  This approach and the next have the advantage of packaging the driver with the application archive.  However, it's not efficient if the driver is needed by more than one archive and you probably should use DOMAIN_HOME/lib.

WEB-INF/lib only works in a WAR file and must be located at the root directory in the WAR file.  It only applies to the corresponding web application.  By using a WEB-INF/weblogic.xml file that has "<prefer-web-inf-classes>true</prefer-web-inf-classes>", the jar files in WEB-INF/lib take precedence over others (that is, the default precedence defined above is overridden).  This is useful if you want different versions of the driver for different applications in various WAR files.   Note that starting in WLS 12.1.1 with Java EE 6 support, it's possible to have a standalone WAR file that is not embedded in an EAR file.
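
For reference, a minimal weblogic.xml fragment that sets this element might look like the following; this is a sketch, so verify the element nesting against the descriptor schema for your WLS release.

<weblogic-web-app xmlns="http://xmlns.oracle.com/weblogic/weblogic-web-app">
  <container-descriptor>
    <prefer-web-inf-classes>true</prefer-web-inf-classes>
  </container-descriptor>
</weblogic-web-app>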

Here's an indented list that attempts to capture the hierarchy of the files and directories described above in EAR and WAR files.

Application (ear)

  • Web module (war)
    • WEB-INF/lib
    • WEB-INF/web.xml
    • WEB-INF/weblogic.xml
  • EJB module
  • META-INF
  • APP-INF/lib

There is sample code for a WAR file at this link.  The data source definition can appear either as a descriptor in web.xml or as an annotation in the java file.  weblogic.xml can be used to set prefer-web-inf-classes appropriately.  The @Resource annotation is used to reference the data source in the application code.  This program prints the version of the driver.  You can play with two versions of the Derby driver to see how the precedence works.
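
I won't reproduce the sample here, but the core pattern is roughly the following sketch (the JNDI name and class name are made up, not taken from the sample):

import java.io.IOException;
import java.sql.Connection;
import java.sql.SQLException;
import javax.annotation.Resource;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.sql.DataSource;

public class DriverVersionServlet extends HttpServlet {
  @Resource(name = "jdbc/myds") // hypothetical data source JNDI name
  private DataSource ds;

  protected void doGet(HttpServletRequest req, HttpServletResponse resp)
      throws IOException {
    try (Connection conn = ds.getConnection()) {
      // Print the version of whichever driver jar won the precedence contest.
      resp.getWriter().println(conn.getMetaData().getDriverVersion());
    } catch (SQLException e) {
      throw new IOException(e);
    }
  }
}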

Note:  You can't have two versions of the same jar, one in domain/lib or the system classpath and the other in WEB-INF/lib or APP-INF/lib with prefer-web-inf-classes or prefer-application-packages set.  That is, either you use domain/lib or the system classpath to get the jar into all applications in the domain, or you embed it in the application, but not both.  If you don't follow this restriction, it's possible (depending on the jar, the version changes, and the order in which the jars are referenced) that a ClassCastException will occur in the application.

The choice of where you locate the JDBC jar file depends on your application.  For standalone application access, use the system classpath.  For simple access in all or most applications, use DOMAIN_HOME/lib.  For access across an EAR file, use APP-INF/lib.  For access within a single WAR file, use WEB-INF/lib.



Monday Sep 16, 2013

Ensuring high level of performance with WebLogic JDBC

In this post you will find some common best practices aimed at ensuring high levels of performance with WebLogic JDBC.  However, rather than just throwing out some tips, I will detail why each recommendation is beneficial to JDBC and WebLogic Server performance.

  • Use Oracle JDBC thin driver (Type 4) rather than OCI driver (Type 2)

The Oracle JDBC thin driver is lightweight (easy to install and administer), platform-independent (entirely written in Java), and generally provides slightly higher performance than the JDBC OCI (Oracle Call Interface) driver.  The thin driver does not require any additional software on the client side.  Note that the Oracle JDBC FAQ stipulates that the performance benefit with the thin driver is not consistent and that the OCI driver can even deliver better performance in some scenarios.

  • Use PreparedStatement objects rather than Statement 
With PreparedStatements, the compiled SQL statement is kept in a cache and parsed only once against the database.  As a result, unnecessary parsing and round-trips to the database are avoided when the same statement is used again within the same connection.  The statement cache size defines the total number of prepared statements that can be cached for a single connection from the data source.
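
For example, one PreparedStatement with a bind variable can serve many lookups with a single parse. A fragment (assumes an open Connection conn and java.sql imports; the table and columns are made up):

// Reuse one PreparedStatement with a bind variable instead of building
// a new SQL string for each lookup.
String sql = "select ename from emp where empno = ?";
try (PreparedStatement ps = conn.prepareStatement(sql)) {
  for (int empno : new int[] {7369, 7499, 7521}) {
    ps.setInt(1, empno);
    try (ResultSet rs = ps.executeQuery()) {
      while (rs.next()) {
        System.out.println(rs.getString(1));
      }
    }
  }
}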

  • Close all JDBC resources in a finally block

This includes ResultSet, Connection, Statement, and PreparedStatement objects; closing them avoids potential memory leaks.  Calling connection.close() won't necessarily clean up all the other objects automatically, because the implementation of close() may differ between JDBC drivers.  Also, JDBC objects not properly closed can lead to this error:

java.sql.SQLException: ORA-01000: maximum open cursors exceeded.

If you don't explicitly close Statements and ResultSets right away, cursors may accumulate and exceed the maximum number allowed in your DB before the Connection is closed. 
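
A minimal sketch of the close-in-finally pattern (assumes a DataSource ds; the JDK7 try-with-resources form covered in a later post below shortens this considerably):

Connection conn = null;
Statement stmt = null;
ResultSet rs = null;
try {
  conn = ds.getConnection();
  stmt = conn.createStatement();
  rs = stmt.executeQuery("select 1 from dual");
  while (rs.next()) {
    // process the row
  }
} catch (SQLException e) {
  // handle or log the error
} finally {
  // Close in reverse order of creation; swallow close() failures so they
  // don't mask an earlier exception.
  if (rs != null) try { rs.close(); } catch (SQLException ignore) {}
  if (stmt != null) try { stmt.close(); } catch (SQLException ignore) {}
  if (conn != null) try { conn.close(); } catch (SQLException ignore) {}
}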

  • Set Shrink frequency to 0 in production environments

Setting Shrink Frequency to 0 disables shrinking, so a connection pool that has incrementally grown to meet demand keeps those connections.  The Shrink Frequency parameter specifies the number of seconds to wait before shrinking a connection pool, given that ShrinkingEnabled is kept at its default value or set to true.  WebLogic shrinks a connection pool by reducing the number of connections to the greater of either the minimum capacity or the number of connections in use, and thus frees up some resources.  In production we can afford to keep no-longer-used connections active rather than destroying and re-creating them, so they are immediately available for the next demand spike.  

  • Consider skipping the SQL-query test when Test Connections on Reserve is enabled

When Test Connections on Reserve is enabled (see Advanced Connection Pool Configuration in the console), WebLogic Server checks a database connection prior to returning it to a client, to avoid passing an invalid connection to the application.  Since this operation can impact performance, it's recommended to use Seconds to Trust an Idle Pool Connection (set to 10 seconds by default), which defines how long WebLogic Server will trust the connection and therefore skip the connection test after the connection has been proven valid.

  • Enable Pinned-to-Thread
Disabled by default, this option can improve performance by enabling threads to keep a pooled database connection even after the application closes the logical connection.  This eliminates potential contention between threads getting a database connection.  However, this parameter should be used with great care because the connection pool maximum capacity is ignored when pinned-to-thread is enabled.  A connection no longer used will not return to the pool but will stay linked to the thread, and no shrinking can apply to that pool.
  • Ensure that the Maximum Thread Constraint property doesn't exceed the number of database connections

This property (see Environment > Work Manager in the console) sets the maximum number of possible concurrent requests. If it exceeds the number of database connections, then some threads might end up waiting until existing database connections are made available.


Visit Tuning Data Source Connection Pools and Tuning Data Sources for additional parameter tuning of JDBC data sources and connection pools to improve system performance with WebLogic Server 12.1.2, and Performance Tuning Your JDBC Application for application-specific design and configuration. 

Thursday Sep 05, 2013

Part 3 - 12c Database and WLS - Multi Tenant Database (CDB/PDB)

I introduced WLS support for 12c database at https://blogs.oracle.com/WebLogicServer/entry/part_1_12c_database_and and the Application Continuity feature at https://blogs.oracle.com/WebLogicServer/entry/part_2_12c_database_and  . 

One of the biggest new features in the 12c database is officially called the Oracle Multitenant option. I'm going to point out what I think are some of the highlights here and eventually get down into some WLS sample code.  See http://docs.oracle.com/cd/E16655_01/server.121/e17636/part_cdb.htm#BGBIDDFD for the Oracle Database Administration documentation for this feature.  Technical folks, administrators and programmers, will call this feature "CDB/PDB" after the Container Database and Pluggable Databases that are used to implement it.  As the names imply, you have a CDB to contain a collection of PDB's.  If you are an administrator, you can appreciate the ability to create, manage, monitor, backup, patch, tune, etc. one entity instead of a bunch of them.  As a programmer, it's great that you can connect to your PDB and it looks and behaves just like a non-PDB.  All that you need to know is the host, port, and service name, just like the non-PDB.  Of course, the service name needs to be unique in your environment so that you get to the right PDB in the right CDB.

As you might expect, there needs to be a container at the top to hold all of the PDB's - it's called the CDB$ROOT.  There also needs to be some users in charge of the PDB's.  These users are called "common users" and the names must begin with "C##".  A common user is created in the root (creation fails within a PDB) and exists in all PDB's. "Local users" are created within individual PDB's and behave like users in non-PDB's.  When you are doing administration from tools like sqlplus or sqldeveloper, you connect to the database in the root and can switch to a PDB by using something like  "alter session set container = PDB1" and then "alter session set container = CDB$ROOT" to get back to the root.  There are new views to keep track of the CDB/PDB information - use "select pdb from cdb_services where pdb is not null" to see what PDB's exist.  Some operations can only be done at the root level.  For example, Database Resident Connection Pooling (DRCP), another new 12c feature, can only be started at the root.  A PDB is managed pretty much like a non-PDB with startup and shutdown.  Each PDB has an administrative database service that you can't configure (Enterprise Manager needs something consistent to manage the PDB) and you can create additional database services with your own choice of attributes. You can use the dba_services table within a PDB to see just the services for that PDB.

So how do you use this?  The easy way to use this is for improving density and scalability on the data tier.  You can move multiple standalone databases into a single container database with individual PDB's corresponding to each standalone database.  On the WebLogic Server side, you can still have a data source defined for each of these PDB's, just as you have today. 

Feeling more ambitious?  You can share a single data source across multiple PDB's.  To do that, you need to get a connection and switch to the PDB that you want to work on by executing the SQL statement "ALTER SESSION SET CONTAINER = pdbname".  You might be wondering if any state is leaked from one PDB to another by doing this.  Under the covers, the Oracle database server tells the Oracle driver that the PDB has been switched so driver connection state can be cleaned up.  Then the driver tells the WLS data source that the PDB has been switched so that the WLS connection state can be cleaned up. Although general use of a PDB is transparent to WLS so that  it can be used with versions starting with WLS release 10.3.6, PDB SET CONTAINER isn't supported until WLS release 12.1.2.
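
A minimal sketch of the switch on a connection from a shared data source (the PDB name and query are illustrative):

// Switch the session to PDB1, verify where we landed, then work as usual.
try (Connection conn = ds.getConnection();
     Statement stmt = conn.createStatement()) {
  stmt.execute("ALTER SESSION SET CONTAINER = PDB1");
  try (ResultSet rs = stmt.executeQuery(
      "select sys_context('USERENV','CON_NAME') from dual")) {
    rs.next();
    System.out.println("Now in container " + rs.getString(1)); // prints PDB1
  }
}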

Switching PDB's is not a cheap operation so you should minimize doing this.  To minimize the overhead, you really want to keep track of what PDB is associated with each connection.  WLS data source has a mechanism for associating labels with connections that's ideal for this purpose.  See http://docs.oracle.com/middleware/1212/wls/JDBCA/ds_labeling.htm#BCGFADIH  for the connection labeling documentation.

Take a look at the sample code at this source code link (the .txt file is actually a .java file).  It's written up as a servlet (don't get caught up in those details). It assumes that there is a 12c container database with PDB1 and PDB2 that has been configured as a generic data source with a JNDI name "ds0".  When it first gets the data source, a labeling callback is created and registered (we only want to do this once).    The program gets connections on each PDB.  The first time it gets a connection on a PDB, it needs to do a switch; the next time, it can reuse the same connection.   The labeling getConnection API is used; it takes a Properties object and the label name is "TENANT".  Looking at the callback, if the label properties match then 0 is returned, otherwise a non-zero value is returned (some debugging information is printed so it's possible to watch the matching process).  The key work is in the configure method, which is called when the tenant label doesn't match.  In that case, the SET CONTAINER is done using the tenant label value and the label properties are reset.
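
Though the full sample is worth reading, the shape of the callback is roughly the following sketch. It assumes the oracle.ucp.ConnectionLabelingCallback interface described in the connection labeling documentation; the class name and property handling here are mine, not the sample's.

import java.util.Properties;
import oracle.ucp.ConnectionLabelingCallback;

// Sketch: cost() returns 0 when the connection already carries the
// requested TENANT label; configure() switches the PDB when it doesn't.
public class TenantLabelingCallback implements ConnectionLabelingCallback {
  public int cost(Properties requested, Properties current) {
    String want = requested.getProperty("TENANT");
    String have = current.getProperty("TENANT");
    return (want != null && want.equals(have)) ? 0 : Integer.MAX_VALUE;
  }

  public boolean configure(Properties requested, Object connection) {
    try {
      java.sql.Connection conn = (java.sql.Connection) connection;
      try (java.sql.Statement stmt = conn.createStatement()) {
        stmt.execute("ALTER SESSION SET CONTAINER = "
            + requested.getProperty("TENANT"));
      }
      // The real sample also re-applies the label properties on the
      // connection here so the next cost() comparison matches.
      return true;
    } catch (Exception e) {
      return false;
    }
  }
}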

Climbing back out of the code, the Oracle Multitenant option can be a big win on the database server side immediately with no change on the WLS side except maybe for the URL in the JDBC descriptor.  Longer term, I believe applications will want higher density on the mid-tier and start to collapse the number of data sources. Along the same lines of improving density but not for multiple tenants, you can also move your SALES and CRM databases into PDB's within a container, and merge the two data sources into a single data source using the methods described above.  It will be interesting to see how customers leverage the power of this new feature.

Wednesday Jul 31, 2013

JMS JDBC Store Performance Using Multiple Connections

This article is a bit different than the normal data source articles I have been writing because it's focused on an application use of WebLogic Server (WLS) data sources, although the application is still part of WLS.  Java Messaging Service (JMS) supports the use of either a file store or a JDBC store to store JMS persistent messages (the JDBC store can also be used for Transaction Log information, diagnostics, etc.).  The file store is easier to configure, generates no network traffic, and is generally faster.  However, the JDBC store is popular because most customers have invested in High Availability (HA) solutions, like RAC, Data Guard or Golden Gate, for their database, so using a JDBC store on the database makes HA and migration much easier (for the file store, the disk must be shared or migrated). Some work has been done in recent releases to improve the JDBC store performance and take advantage of RAC clusters.

It's obvious from the JDBC store configuration that JMS uses just a single table in the database.  JMS use of this table is sort of like a queue, so there are hot spots at the beginning and end of the table as messages are added and consumed - that might get fixed in WLS in the future, but it is a consideration for the current store performance.  JMS has been single threaded on a single database connection since the beginning.  Starting in WLS 10.3.6 (see this link), the store can run with multiple worker threads, each with its own connection, by setting Worker Count on the JDBC Persistent Store page in the console.  There are no documented restrictions or recommendations about how to set this value.  Should we set it to the maximum allowed of 1000 so we get a lot of work done?  Not quite ...

Since we have contention between the connections, using too many connections is not good.  To begin with, there is overhead in managing the work among the threads so if JMS is lightly loaded, it's worse to use multiple connections.  When you have a high load, we found that for one configuration, 8 threads gave the best performance but 4 was almost as good at half the resources using the Oracle Thin driver on an Oracle database (more about database vendors below).  Optimization for queues with multiple connections is  a big win with some gains as high as 200%.  Handling a topic is another ... well, topic.  It's complicated by the fact that a message can go to a single or multiple topics and we want to aggregate acknowledgements to reduce contention and improve performance.  Topic testing saw more modest gains of  around 20%, depending on the test.

How about different data source types? It turns out that when using a RAC cluster and updates are scattered across multiple instances, there is too much overhead in locking and cache fusion across the RAC instances.  That makes it important that all of the reserved connections are on a single RAC instance.   For a generic data source, there is nothing to worry about - you have just one node.  In the case of multi data source (MDS), you can get all connections on a single instance by setting AlgorithmType to "Failover" (see this link).   All connections will be reserved on the first configured generic data source within the MDS until a failure occurs; then the failed data source will be marked as suspended and all connections will come from the next generic data source in the MDS.  You don't want to set AlgorithmType to "Load-Balancing".  In the case of Active GridLink (AGL), it's actually difficult to get connection affinity to a single node, and without it, performance can seriously degrade.  Some benchmarks saw a performance loss of 50% when using multiple connections on different instances.  For WLS 10.3.6 and 12.1.1, it is not recommended to use AGL with multiple connections.  In WLS 12.1.2, this was fixed so that JMS will reserve all connections on the same instance.  If there is a failure, all of the reserved connections are closed, a new connection is reserved using Runtime Connection Load Balancing (RCLB), hopefully on a lightly loaded instance, and then the rest of the connections are reserved on the same instance.  In one benchmark, performance improved by 200% when using multiple connections on the same instance.

How about different database vendor types?  Your performance will vary based on the application and the database.  The discussion above regarding RAC cluster performance is interesting and may have implications for any application that you are moving to a cluster.  Another thing that is specific to the Oracle database is indexing the table for efficient access by multiple connections.  In this case, it is recommended to use a reverse key index for the primary key.  The bytes in the key are reversed such that keys that normally would be grouped because the left-most bytes are the same are now distributed more evenly (imagine using a B-tree to store a bunch of sequential numbers with left padding with 0's, for example). 

Bottom line: this feature may give you a big performance boost but you might want to try it with your application, database, hardware, and vary the worker count.




Monday Jul 22, 2013

Part 2 - 12c Database and WLS - Application continuity

I introduced WLS support for 12c database in this blog link and described one of the prerequisites for Application Continuity (AC) at this blog link.

When an error occurs on a connection, it would be nice to be able to keep processing on another connection - that's what AC does.  With a single-node database, getting another connection means that the database and network are still available, e.g., maybe it was just a network glitch.  However, in the case of a Real Application Cluster (RAC), chances are good that even if you lost a connection on one instance you can get a connection on another instance for the same database.  For this feature to work correctly, it's necessary to ensure that all of the work that you did on the connection before the error is replayed on the new connection - that's why AC is also called replay. To replay the operations, it's necessary to keep a list of the operations that have been done on the connection so they can be done again.  Of course, when you replay the operations, data might have changed since the last time, so it's necessary to keep track of the results to make sure that they match.  So AC may fail if replaying the operations fails.

You can read the WLS documentation, an AC white paper, and the Database documentation to get the details. This article gives an overview of this feature (and a few tidbits that the documentation doesn't cover).

To use this feature, you need to use both the 12c driver and a 12c database.  Note that if you use an 11.2.0.3 driver and/or database, it might appear to work but replay will only happen for read-only transactions (the 11g mode is not supported but we don't have a mechanism to prevent this).

You need to configure a database service to run with AC.  To do that, you need to set the new 12c service attributes FAILOVER_TYPE=TRANSACTION and COMMIT_OUTCOME=true on the server side.

There's not much to turning on AC in WLS - you just use the replay driver "oracle.jdbc.replay.OracleDataSourceImpl" instead of "oracle.jdbc.OracleDriver"  when configuring the data source.  There's no programming in the application needed to use it.  Internal to WLS, when you get a connection we call the API to start collecting the operations and when you close a connection we call the API to clear the operation history.
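
Outside WLS, a standalone test would look roughly like this sketch (the URL and credentials are made up); inside WLS, only the driver class name in the data source configuration changes:

import java.sql.Connection;

public class ReplayTest {
  public static void main(String[] args) throws Exception {
    // The replay data source records the operation history needed by AC.
    oracle.jdbc.replay.OracleDataSourceImpl rds =
        new oracle.jdbc.replay.OracleDataSourceImpl();
    rds.setURL("jdbc:oracle:thin:@//dbhost:1521/ac_service"); // hypothetical
    rds.setUser("scott");
    rds.setPassword("tiger");
    try (Connection conn = rds.getConnection()) {
      // work done here is recorded and can be replayed after a failover
    }
  }
}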

WLS introduced a labeling callback for applications that use the connection labeling feature.  If this callback is registered, it will be called when a new connection is being initialized before replay.  Even if you aren't using labeling, you might still want to be called and there is a new connection initialization callback that is for replay (labeling callback trumps initialization callback if both exist).

It sounds easy and perfect - what's the catch?   I've already mentioned that you need to get rid of references to concrete classes and use the new Oracle interfaces.  For some applications that might be significant work.  I've mentioned that if replaying the operations fails or is inconsistent, AC fails.  There are a few other operations that turn off AC - see the documentation for details. One of the big ones is that you can't use replay with XA transactions (at least for now).   Selecting from v$instance or sys_context, or other traces used for test instrumentation, needs to be in callouts because the values change when replayed.  If you use sysdate or systimestamp, you need to grant KEEP DATE TIME to your user.

We are also tracking two defects - AC doesn't work with Oracle proxy authentication and it doesn't work with the new DRCP feature. 

There's another more complex topic to consider (not currently in the current WLS documentation).  By default, when a local transaction is completed on the connection, replay ends.  You can keep working on the connection but failure from that point on will return an error to the application. This default is based on the service attribute SESSION_STATE_CONSISTENCY with a value of DYNAMIC.  You can set the value to STATIC if your application does not modify non-transactional session state (NTSS) in the transaction.  I'm not sure how many applications fall into this trap but the safe thing is to default to dynamic.  I'll include such a code fragment below.  A related common problem that people run into is forgetting to disable autocommit, which defaults to true, and the first (implicit) commit turns off replay if SESSION_STATE_CONSISTENCY is set to DYNAMIC.
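
The autocommit trap is easy to sidestep explicitly; a two-line sketch:

Connection conn = ds.getConnection();
// Autocommit defaults to true, and the first implicit commit ends replay
// when SESSION_STATE_CONSISTENCY is DYNAMIC, so turn it off up front.
conn.setAutoCommit(false);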

It's important to know how to turn on debugging so that if a particular sequence doesn't replay, you can understand why.  You simply need to run with the debug driver (ojdbc6_g.jar or ojdbc7_g.jar) and run with -Dweblogic.debug.DebugJDBCReplay (or turn this debug category on in the configuration).

AC won't replay everything and you still need to have some application logic to deal with the failures or return them to the end user.  Also, there's some overhead in time and memory to keep the replay data.  Still, it seems like a great feature for a lot of applications where you don't need to change anything but the driver name and you can avoid showing an error to the end user or simplify some recovery logic.

P.S. Confused about NTSS? So was I.  Examples of non-transactional session state that can change at run-time are ALTER SESSION, PL/SQL global variables, SYS_CONTEXT, and temporary table contents.  Here's an example of a PL/SQL global variable.  Imagine a package with the following body:

 current_order number := null;
 current_line number;
 procedure new_order (customer_id number) is
 begin
  current_order := order_seq.nextval;
  insert into orders values (current_order, customer_id);
  current_line := 0;
 end new_order;
 procedure new_line (product_id number, count number) is
 begin
  current_line := current_line + 1;
  insert into order_lines values (current_order, current_line, product_id, count);
 end new_line;
end order;

and a pseudo-code sequence in WLS like this:
getConnection()
exec "begin order.new_order(:my_customer_id); end;"
commit;
exec "begin order.new_line(:my_product_id, :my_count); end;"
<DB server failure and failover>
commit;

In this scenario, we won't replay the first transaction, because it's already committed and we'd end up with two orders in the database. But if we don't replay it, we lose the order_id, so the new_line call won't work. So we have to disable replay after the commit.  If you can guarantee that no such scenario exists, you can mark the service session-state as static.



Thursday Jul 11, 2013

Using Oracle JDBC Type Interfaces

One of the hot new features in Oracle database 12c is Application Continuity (AC).  The feature basically will detect that a connection has gone bad and substitute a new one under the covers (I'll talk about it more in another article).  To be able to do that, the application is given a connection wrapper instead of the real connection.  Wrappers or dynamic proxies can only be generated for classes based on interfaces.  The Oracle types, like REF and ARRAY, were originally introduced as concrete classes.   In WLS 10.3.6 and/or the JDBC 11.2.0.3 driver, there are new interfaces for the Oracle types that you will need to use to take advantage of this new AC feature.

First, some history so that you understand the needed API changes.  In the early days of WebLogic data source, any references to vendor proprietary methods were handled by hard-coded references.  Keeping up with adding and removing methods was a significant maintenance problem. At the peak of the insanity, we had over one thousand lines of code that referenced Oracle-proprietary methods and the server could not run without an Oracle jar in the classpath (even for DB2-only shops).  In release 8.1 in March 2003, we introduced wrappers for all JDBC objects such that we dynamically generated proxies that implement all public interface methods of, and delegate to, the underlying vendor object.  The hard-coded references and the maintenance nightmare went away and just as importantly, we could provide debugging information, find leaked connections, automatically close objects when the containing object closed, replace connections that fail testing, etc.  The Oracle types were concrete classes so proxies were generated for these classes implementing the WLS vendor interfaces weblogic.jdbc.vendor.oracle.*.  Applications can cast to the WLS vendor interfaces or use getVendorObj to access the underlying driver object. Later, we added an option to unwrap data types, with a corresponding loss of functionality like no debug information.

Although the focus of this article is Oracle types, the dynamic proxies work for any vendor.   For example, you can cast a DB2 connection to use a proprietary method  ((com.ibm.db2.jcc.DB2Connection)conn).setDB2ClientUser("myname").

Starting with Oracle driver 11.2.0.3, the database team needed wrappers for the new AC feature and introduced new interfaces.  For WebLogic data source users, that's good news - no more unwrapping, the weblogic.jdbc.vendor package is no longer needed, and it's all transparent.  Before you go and change your programs to use the new Oracle proprietary interfaces, the recommended approach is to first see if you can just use standard JDBC API's.  In fact, as part of defining the new interfaces, Oracle proprietary methods were dropped if there was an equivalent standard JDBC API or the method was not considered to add significant value.  This table defines the mapping.  The goal is to get rid of references to the first and second columns and replace them with the third column.

Old Oracle type      Deprecated WLS interface                     New interface
oracle.sql.ARRAY     weblogic.jdbc.vendor.oracle.OracleArray      oracle.jdbc.OracleArray
oracle.sql.STRUCT    weblogic.jdbc.vendor.oracle.OracleStruct     oracle.jdbc.OracleStruct
oracle.sql.CLOB      weblogic.jdbc.vendor.oracle.OracleThinClob   oracle.jdbc.OracleClob
oracle.sql.BLOB      weblogic.jdbc.vendor.oracle.OracleThinBlob   oracle.jdbc.OracleBlob
oracle.sql.REF       weblogic.jdbc.vendor.oracle.OracleRef        oracle.jdbc.OracleRef

This is a job for a shell hacker!  Much of it can be automated and the compiler can tell you if you are referencing a method that has gone away - then check if the missing method is in the equivalent java.sql interface (e.g., getARRAY() becomes the JDBC standard getArray()).

You can take a look at a sample program that I wrote to demonstrate all of these new interfaces at   https://blogs.oracle.com/WebLogicServer/resource/StephenFeltsFiles/OracleTypes.txt (note that this is actually a ".java" program). It covers programming with all of these Oracle types.  While use of Blob and Clob might be popular, Ref and Struct might not be used as much.  The sample program shows how to create, insert, update, and access each type using both standard and extension methods.  Note that you need to use the Oracle proprietary createOracleArray() instead of the standard createArrayOf(). Although the sample program doesn't use the standard createBlob() or createClob(), these are supported for the Oracle driver.
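
As a flavor of the change, here is a fragment showing the old concrete cast next to its replacements (assumes an open Connection conn and an enclosing method that throws SQLException; the table and column are made up):

try (Statement stmt = conn.createStatement();
     ResultSet rs = stmt.executeQuery("select shape from shapes_tab")) {
  rs.next();
  // Old style, breaks under wrapping/replay:
  // oracle.sql.STRUCT s = (oracle.sql.STRUCT) rs.getObject(1);
  // Preferred: the standard JDBC interface ...
  java.sql.Struct s = (java.sql.Struct) rs.getObject(1);
  Object[] attrs = s.getAttributes();
  // ... or the new Oracle interface when extension methods are needed.
  oracle.jdbc.OracleStruct os = (oracle.jdbc.OracleStruct) s;
}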

The API's can be reviewed starting with the 11.2.0.3 version of the Javadoc.  A copy is available at http://download.oracle.com/otn_hosted_doc/jdeveloper/905/jdbc-javadoc/oracle/jdbc/package-summary.html .

This is a first step toward using Application Continuity.  But it's also a good move to remove Oracle API's that will eventually go away and use standard JDBC interfaces and new Oracle interfaces.



Thursday Jun 20, 2013

JDBC in Java Mission Control

In case you haven't tried them before, JRockit Flight Recorder (JFR) and JRockit Mission Control (JMC) are useful tools to analyze what's happening in the Java Virtual Machine (JVM) and your running application. Basically, you make a record of events and information in a defined time period and then analyze the generated jfr file with JMC.  WebLogic Server 10.3.3 added support in the diagnostics component to work tightly with JMC. Part of that support is analysis of what is happening on the JDBC front and that's what I'm going to focus on here.  If you want some background, read about JFR at http://docs.oracle.com/cd/E15289_01/doc.40/e15070.pdf  and JMC at http://www.oracle.com/technetwork/middleware/jrockit/overview/index-090630.html.

The command name to get started is jrmc. If the JVM browser is not visible, click Window -> Show View -> JVM Browser. Right click your WebLogic Server JVM in the list and select Start Console.  It should look something like the following screen shot.

Now you need to record a sample for analysis.  Right click the JVM connection again and select Start Recording.  Select a timer period like one minute and click start.  For this article, I ran a demo program OTRADE that was used at Oracle Open World 2012.  It has a simple load generator that simulates updating stock values for Oracle stock.  The variety of SQL statements is limited but enough to demonstrate how JFR works.  A window opens to the JFR analysis when the time period is complete.  I'm going to focus the analysis on JDBC events. Select the Events button on the left.  Select Window -> Show View -> Event Types. Unselect Java Application, expand WebLogic, and select JDBC (minimize the Event Type window).  This is what the overview tab looks like with just the JDBC events showing.


The Log tab allows you to drill into individual records.

The Trace tab allows you to see the stack trace where the SQL call was made.  This can be useful if there is a misbehaving SQL statement and you want to know who is calling it.

There are a lot of other interesting areas for analysis that I won't dive into.  You might want to look at Code -> Hot methods to see where time is being spent or Memory -> object statistics to see the big memory users.

It's possible to load some "unsupported" plug-ins. I'd recommend the one for WebLogic.  Select Help, Install Plugins, Expand, WebLogic Tab Pack, Next, Finish, and Install All.  You will need to restart jrmc and select the WebLogic button on the left.


Then select the Database tab.


The default WLDF diagnostic volume is Low.  You can reset it from the administrative console by selecting Environment -> Servers, clicking on the name of your server, selecting Configuration -> General, and changing the Diagnostic Volume value to Medium or High.

If you are aware of what's happening on the JVM front, you will realize that the last version of JRockit is JDK6.  That's fine for WLS 10.3.6, which supports the server running on JDK6 or JDK7, but 12.1.1 moves to JDK7, which is based on HotSpot.  The early versions of  JDK7 on HotSpot don't have the ability to run JFR but that's coming back in Oracle JDK7 with updates greater than or equal to 7u4.  I tested using the latest publicly available update at this time, 7u25, with WLS 10.3.6.  You need to have a Java Advanced or Java Suite license (included in WebLogic Server EE and WebLogic Suite license). You can download it from "All Java SE Downloads on MOS [ID 1439822.1]". Of course, all of the "JRockit" names now become "Java".  JRMC is now JMC (JFR remains the same).  The JFR component is now gated behind switches and needs to be actively turned on.  Also, HotSpot still needs to set PermGen (that isn't fixed until JDK8).  So your command line will be something like

java -Xmx1024m -XX:MaxPermSize=350m -XX:+UnlockCommercialFeatures -XX:+FlightRecorder   weblogic.Server

This article is a bit long but I've only scratched the surface on how to use this tool to analyze your WebLogic data source usage among the many other WLS components.  I realize the pictures in this format of the blog appear small - use whatever mechanism your browser has to zoom in on them - the detail is available.

P.S. The JFR recording data is included in WebLogic Diagnostic Framework (WLDF) Diagnostic Images, and is also included in DFW incident reports. Those mechanisms can be used to capture JFR recording data as well.  See Using WLDF with Oracle JRockit Flight Recorder and Configuring and Capturing Diagnostic Images.


Friday Jun 07, 2013

Using try-with-resources with JDBC objects

There is a new language construct that is introduced in JDK7 that applies to JDBC.  It’s called try-with-resource.  There is a tutorial at http://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html.
It allows you to automatically close an object when it is no longer needed.  Theoretically, the object must implement java.lang.AutoCloseable.  In JDK7, that list includes java.sql.CallableStatement, Connection, PreparedStatement, Statement, ResultSet, and *RowSet.  The following code

Statement stmt = null;
try {
  stmt = con.createStatement();
} catch (Exception ignore) {
} finally {
  if (stmt != null) stmt.close();
}

becomes a shorter version using the new syntax

try (Statement stmt = con.createStatement()) {
} catch (Exception ignore) {
}

Here’s what a larger example looks like that I embedded in a WebLogic servlet for testing.  Note that there are two resources in the first try-with-resource, separated by a semi-colon.

private String doit() {
  String table2 = "test222";
  String dropsql = "drop table " + table2;
  String createsql = "create table " + table2 + " ( col1 int, col2 int )";
  String insertsql = "insert into " + table2 + " values (1,2)";
  String selectsql = "select col1, col2 from " + table2;
  try {
    ds = getDS();
  } catch (Exception e) {
    return ("failed to get datasource");
  }
  try (Connection conn = ds.getConnection();
       Statement stmt = conn.createStatement()) {
    try {
      stmt.execute(dropsql);
    } catch (Exception ignore) {} // ignore if table not dropped
    stmt.execute(createsql);
    stmt.execute(insertsql);
    try (ResultSet rs = stmt.executeQuery(selectsql)) {
      rs.next();
    } catch (Exception e2) {
      e2.printStackTrace();
      return ("failed");
    }
  } catch (Exception e) {
    e.printStackTrace();
    return ("failed");
  }
  return "DONE";
}

I’m not sure whether or not I like the programming paradigm – you can decide that for yourself.

How do I know that the connection, statement, and result set get closed?  I can turn on JDBCSQL debugging in the server and look at the server log output (this is a good trick to use to see what is happening under the covers).  Here is a stripped down version of the output.

 Connection@1 CreateStatement() 
 Connection@1 CreateStatement returns StatementWrapper@2 
 StatementWrapper@2 execute(drop table test222) 
 StatementWrapper@2 execute(drop table test222) throws java.sql.SQLSyntaxErrorException: table or view does not exist
 StatementWrapper@2 execute(create table test222 ( col1 int, col2 int )) 
 StatementWrapper@2 execute returns false 
 StatementWrapper@2 execute(insert into test222 values (1,2)) 
 StatementWrapper@2 execute returns false 
 StatementWrapper@2 executeQuery(select col1, col2 from test222) 
 StatementWrapper@2 executeQuery returns ResultSetImpl@3 
 ResultSetImpl@3 next() 
 ResultSetImpl@3 next returns true 
 ResultSetImpl@3 close() 
 ResultSetImpl@3 close returns 
 StatementWrapper@2 close() 
 StatementWrapper@2 close returns 
 Connection@1 close() 
 Connection@1 close returns

You might be thinking that there are not a lot of JDBC 4.1 compliant drivers out there and that’s true (the Oracle thin driver will have an ojdbc7.jar in version 12c, when it is released).  However, an object like java.sql.Statement implements AutoCloseable, so when running on JDK7 the driver Statement is also assumed to implement AutoCloseable.  The object just needs to have a close() method, so all of these JDBC objects have met the criteria since the initial specification.  WebLogic Server started supporting JDK7 in version 10.3.6.  Testing with all drivers that are shipped with WebLogic Server, plus some drivers from various vendors, shows that this JDK7 language feature works just fine with pre-JDBC 4.1 drivers. 

Note that this trick doesn't work for new JDK7 methods - you will get an AbstractMethodError.

So if you like this programming paradigm, start using it now with WebLogic Server 10.3.6 or later.

Wednesday Apr 24, 2013

Getting Connections with Active GridLink

As a follow-up to an earlier blog, someone asked "Can you please tell us more about gravitationShrinkFrequencySeconds and DRCP ?"   DRCP support isn’t shipping yet (wait until the next version of the Database and of WLS) so I can’t talk about it.

I think that people understand how connections are allocated in generic data sources and multi data sources (MDS).   Let me talk generally about how connections are allocated in Active GridLink (AGL). With the former (generic and MDS), connection allocation is pretty much dependent on WLS.  With AGL, it is also very dependent on the database configuration and runtime statistics.  The runtime information is provided to WLS via Oracle Notification Service (ONS) in the form of up and down events and Runtime Load Balancing events.

Connections are added to the pool initially based on the configured initial capacity. Connect time load balancing based on the listener(s) is used to spread the connections across the instances in the RAC cluster. For that to work correctly, you must either specify a Single Client Access Name (SCAN) address or use LOAD_BALANCE=ON for multiple non-SCAN addresses.  If you have multiple addresses and you forget to use LOAD_BALANCE=ON, then all of the connections will end up on the first RAC instance – not what you want.  Connection load balancing is intended to give out connections based on load on the available RAC instances.  It’s not perfect.  If you have two instances and they are evenly loaded, there should be approximately 50% of the connections on each instance (but don’t enter a bug if they split up 49/51).  There is a general problem with load balancing in that the statistics can lag reality.  If you have big swings in demand on the two instances, then you can end up creating a connection on the more heavily loaded instance.
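
For example, a long-format URL that load-balances across two listener addresses might look like the following sketch:

// Hedged sketch of a long-format URL; hosts and service name are made up.
// With a SCAN address this collapses to "jdbc:oracle:thin:@//rac-scan:1521/myservice".
String url = "jdbc:oracle:thin:@(DESCRIPTION="
    + "(ADDRESS_LIST=(LOAD_BALANCE=ON)"
    + "(ADDRESS=(PROTOCOL=TCP)(HOST=rac-node1)(PORT=1521))"
    + "(ADDRESS=(PROTOCOL=TCP)(HOST=rac-node2)(PORT=1521)))"
    + "(CONNECT_DATA=(SERVICE_NAME=myservice)))";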

If you go to get a connection from the pool, runtime load balancing is used to pick the instance.  That’s again based on the load on the various instances and the information is updated by default every 30 seconds.  If a connection is available on the desired instance, then you have a hit on the pool and you get the available connection.

If you go to get a connection from the pool and one doesn’t exist on the desired instance, then you have a miss.  In this case, a connection is added to the pool on demand based on connection load balancing.  That is, if you go to reserve a connection and there isn’t one available in the pool, then a new one is created and it is done using connection load balancing as described above.  There’s an assumption here that the connect time and runtime load balancing match up.

If you are running within an XA transaction, then that takes priority in determining the connection that you get.  There are restrictions on XA support within a RAC cluster such that all processing for an XA transaction branch must take place on one instance in the cluster (the restrictions are complicated so the one-instance limitation is the only safe approach).  Additionally, performance is significantly better using a single instance so you want that to happen anyway.  The first time that you get a connection, it’s via the mechanisms described above (runtime load balancing or connection load balancing).  After that, any additional requests for connections within the same XA transaction will get a connection from the same instance.  If you can’t get a connection with “XA affinity,” the application will get an exception.

Similar to XA affinity, if you are running within a Web session, you will get affinity for multiple connections created by the same session.  Unlike XA affinity, “Session Affinity” is not mandatory.  If you can’t get a connection to the same instance, it will go to another instance (but there will be a performance penalty).

When you take down a RAC instance, it generates a “planned down event”.  Any unused connections for that instance are released immediately and connections in use are released when returned to the pool.  If a RAC instance fails, it generates an “unplanned down event.”  In that case, all connections for that instance are destroyed immediately.

When a RAC instance becomes available, either because a new instance is added or an existing instance is restarted, it generates an “up event.”  When that occurs, connections are proactively created on the new instance.

The pool can get filled up with connections to the “wrong” instance (heavily loaded) if load changes over time.  To help that situation, we release unused connections based on runtime load balancing information.  When gravitation shrinking occurs, one unused connection is destroyed on a heavily loaded instance.  The default period for gravitation shrinking is 30 seconds but you can control it by setting the system property “weblogic.jdbc.gravitationShrinkFrequencySeconds” to an integer value.  We don’t actively create a connection at this point.  We wait until there is demand for a new connection and then the connection load balancing will kick in.
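For example, to change the period to 60 seconds (an arbitrary value for illustration), you would add the following to the server start arguments.

-Dweblogic.jdbc.gravitationShrinkFrequencySeconds=60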

Finally, normal shrinking happens if not disabled.  When this occurs, half of the unused connections down to minimum capacity are destroyed.  The algorithm currently just takes the first set of connections on the list without regard to load balancing (it’s random with respect to instances).  The default period is 900 seconds and you can configure this using ShrinkFrequencySeconds.

There are possible improvements that could be made with respect to pool misses and to gravitation and normal shrinking.  And the database team is working on improving the load balancing done on their end. Still, it works pretty well with instances dynamically coming and going and the load changing over time.

Friday Apr 19, 2013

Migrating from Multi Data Source to Active GridLink

Multi data source (MDS) for RAC connectivity has been supported in WebLogic Server since 2005.  As the popularity of Oracle RAC has grown, so has the use of MDS.  Now with the introduction of Active GridLink (AGL) in early 2011, users of MDS want to migrate to AGL.  There is no automated mechanism to do so, but it’s not that difficult.

First, no changes should be required to your applications.  A standard application looks up the MDS in JNDI and uses it to get connections.  By giving the AGL data source the same JNDI name as the MDS, the application’s lookup and use of the data source remain exactly the same.

The only changes necessary should be to your configuration.  AGL is composed of information from the MDS and the member generic data sources, combined into a single AGL descriptor.  The only additional information that is needed is the configuration of Oracle Notification Service (ONS) on the RAC cluster.  In many cases, the ONS information will consist of the same host names as used in the MDS; the only additional information is the port number, and that can be simplified by the use of a SCAN address.

The MDS descriptor doesn’t have much information in it.  It has a list of the member generic data sources, which will help you figure out where to get the remaining information that you need.  It has a JNDI name, which must become the name of your new AGL data source to keep things transparent to the application.  If you want to run the MDS in parallel with the AGL, then you will need to give the AGL a new name, and the application must also be changed to use the new JNDI name.  You don’t need to worry about the algorithm type.

Each of the member generic data sources will have its own URL.  As described in Appendix B, Using Multi Data Sources with Oracle RAC, it will look like the following.

jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=host1-vip)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=dbservice)(INSTANCE_NAME=inst1)))

Each member should have its own host and port pair.  They will likely have the same service and often have the same port on different hosts.  The URL for the AGL is a combination of the host and port pairs.

jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=host1-vip)(PORT=1521))
(ADDRESS=(PROTOCOL=TCP)(HOST=host2-vip)(PORT=1521)))
(CONNECT_DATA=(SERVICE_NAME=dbservice)))

It is preferable to use an Oracle Single Client Access Name (SCAN) address instead of multiple host or Virtual IP (VIP) addresses.  Then the URL would look something like this.

jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=scanaddress)(PORT=1521)))
(CONNECT_DATA=(SERVICE_NAME=dbservice)))

It’s a lot simpler and makes changes to the nodes in the cluster transparent.  This section isn’t intended to be a complete guide to writing Oracle URLs – see the Oracle RAC Administration Guide.

Assuming that you will replace the MDS with the AGL, you will need to delete the MDS and the generic data sources from the configuration using the administration console and add a single AGL data source.  The process is described earlier in this chapter.  Give it the same JNDI name as your MDS had.  Select whether your generic data sources used an XA or non-XA driver. You can enter the complete URL as described above.  The user and password should be the same as what you had on one (hopefully all) of the MDS members.  When you get to the “Test GridLink Datasource Connection” page, click on the “Test All Listeners” button to see that you specified the new URL correctly.  The next page is where you need the new information for the ONS connections.  Specify one or more host:port pairs. For example, “host1-vip:6200” or “scanaddress:6200”.  If possible, use a single SCAN address and port.  Make sure that FAN enabled is checked.  On the next page, test the ONS connections.  Finally, you are ready to deploy the data source.

There are many data source parameters that you can’t configure on the creation flow.  You need to go back and edit the AGL data source configuration.  The parameters that you set should generally be based on the parameters that you were using in the MDS member data sources.  Hopefully they were all the same; if not, you need to decide what the right values are.

Monday Dec 10, 2012

Data Source Connection Pool Sizing

One of the most time-consuming procedures of a database application is establishing a connection. The connection pooling of the data source can be used to minimize this overhead.  That argues for using the data source instead of accessing the database driver directly.

Configuring the size of the pool in the data source is somewhere between an art and a science – this article will try to move it closer to science.

From the beginning, WLS data source has had initial capacity and maximum capacity configuration values.  When the system starts up, initial capacity connections are created, and initial capacity was originally also the lower bound when the pool shrank.  The pool can grow to maximum capacity.  Customers found that they might want to set the initial capacity to 0 (more on that later) but didn’t want the pool to shrink to 0.  In WLS 10.3.6, we added minimum capacity to specify the lower limit to which a pool will shrink.  If minimum capacity is not set, it defaults to the initial capacity for upward compatibility.  We also did some work on the shrinking in release 10.3.4 to reduce thrashing; the algorithm that used to shrink to the maximum of the currently used connections or the initial capacity (basically, all of the unused connections were released) was changed to shrink by half of the unused connections.
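As a rough sketch, these values live in the connection pool parameters of the data source descriptor, something like the following (the numbers are illustrative, not recommendations).

<jdbc-connection-pool-params>
  <initial-capacity>0</initial-capacity>
  <min-capacity>5</min-capacity>
  <max-capacity>25</max-capacity>
  <shrink-frequency-seconds>900</shrink-frequency-seconds>
</jdbc-connection-pool-params>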

The simple approach to sizing the pool is to set the initial/minimum capacity to the maximum capacity.  Doing this creates all of the connections at startup, avoids creating connections on demand, and keeps the pool stable.  However, there are a number of reasons not to take this simple approach.

When WLS is booted, the deployment of the data source includes synchronously creating the connections.  The more connections that are configured in initial capacity, the longer the boot time for WLS (there have been several projects for parallel boot in WLS but none is available yet).  Related to creating a lot of connections at boot time is the problem of logon storms (the database gets too much work at one time).  WLS has a solution for that in setting the login delay seconds on the pool, but that also increases the boot time.

There are a number of cases where it is desirable to set the initial capacity to 0.  By doing that, the overhead of creating connections is deferred out of the boot and the database doesn’t need to be available.  An application may not want WLS to automatically connect to the database until it is actually needed, such as for some cold/warm failover configurations.

There are a number of cases where minimum capacity should be less than maximum capacity.  Connections are generally expensive to keep around.  They cause state to be kept on both the client and the server, and the state on the backend may be heavy (for example, a process).  Depending on the vendor, connection usage may cost money.  If the workload is not constant, then database connections can be freed up by shrinking the pool when connections are not in use.  When using Active GridLink, connections can be created as needed according to runtime load balancing (RLB) percentages instead of by connection load balancing (CLB) during data source deployment.

Shrinking is an effective technique for clearing the pool when connections are not in use.  In addition to the obvious reason that there are times when the workload is lighter, there are some configurations where the database and/or firewall conspire to make long-unused or too-old connections no longer viable.  There are also some data source features where the connection has state and cannot be used again unless the state matches the request.  Examples of this are identity-based pooling, where the connection has a particular owner, and XA affinity, where the connection is associated with a particular RAC node.  At this point, WLS does not re-purpose (discard/replace) connections, and shrinking is a way to get rid of the unused existing connection and get a new one with the correct state when needed.

So far, the discussion has focused on the relationship of initial, minimum, and maximum capacity.  Computing the maximum size requires some knowledge about the application and the current number of simultaneously active users, web sessions, batch programs, or whatever access patterns are common.  The application should be written to reserve and close connections only as needed, but multiple statements, if needed, should be done in one reservation (don’t get/close more often than necessary).  This means that the size of the pool is likely to be significantly smaller than the number of users.

If possible, you can pick a size and see how it performs under simulated or real load.  There is a high-water mark statistic (ActiveConnectionsHighCount) that tracks the maximum connections concurrently used.  In general, you want the size to be big enough so that you never run out of connections but no bigger.   It will need to deal with spikes in usage, which is where shrinking after the spike is important.  Of course, the database capacity also has a big influence on the decision since it’s important not to overload the database machine.  Planning also needs to happen if you are running in a Multi-Data Source or Active GridLink configuration and expect that the remaining nodes will take over the connections when one of the nodes in the cluster goes down.  For XA affinity, additional headroom is also recommended. 
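If you want to watch the high-water mark programmatically rather than in the console, a minimal sketch using JMX against the WLS runtime MBeans might look like this (the host, port, and credentials are placeholders, and the WebLogic JMX client library must be on the classpath).

import java.util.Hashtable;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import javax.naming.Context;

public class PoolHighWater {
    public static void main(String[] args) throws Exception {
        // Host, port, and credentials are placeholders for illustration.
        JMXServiceURL url = new JMXServiceURL("t3", "localhost", 7001,
                "/jndi/weblogic.management.mbeanservers.runtime");
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.SECURITY_PRINCIPAL, "weblogic");
        env.put(Context.SECURITY_CREDENTIALS, "password");
        env.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES,
                "weblogic.management.remote");
        try (JMXConnector connector = JMXConnectorFactory.connect(url, env)) {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            // Query the data source runtime MBeans and print the high-water mark.
            for (ObjectName on : mbs.queryNames(
                    new ObjectName("com.bea:Type=JDBCDataSourceRuntime,*"), null)) {
                System.out.println(on.getKeyProperty("Name") + " high-water mark: "
                        + mbs.getAttribute(on, "ActiveConnectionsHighCount"));
            }
        }
    }
}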

In summary, setting initial and maximum capacity to be the same may be simple but there are many other factors that may be important in making the decision about sizing.

Friday Nov 30, 2012

Data Source Use of Oracle Edition Based Redefinition (EBR)

Edition-based redefinition is a new feature in the 11gR2 release of the Oracle database. It enables you to upgrade the database component of an application while it is in use, thereby minimizing or eliminating down time. It works by allowing a pre-upgrade and a post-upgrade view of the data to exist at the same time, providing a hot upgrade capability. You can then specify which view you want for a particular session.  See the Oracle Database Advanced Application Developer's Guide for further information. There is also a good white paper at Edition Based Redefinition.

Using this feature of the Oracle database does not require any new WebLogic Server functionality. It is set for each connection in the pool automatically by simply specifying

SQL ALTER SESSION SET EDITION = edition_name

in the Init SQL parameter in the data source configuration. This can be configured either via the console or via WLST (setInitSQL on the JDBCConnectionPoolParams). This SQL statement is executed for each newly created physical database connection. Note that we are assuming that a data source references only one edition of the database.
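For example, a data source descriptor fragment with this setting might look like the following (the edition name is a placeholder).

<jdbc-connection-pool-params>
  <init-sql>SQL ALTER SESSION SET EDITION = my_edition</init-sql>
</jdbc-connection-pool-params>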

To make use of this feature, you would have an earlier version of the application with a data source that references the earlier EDITION, and a later version of the application with a data source that references the later EDITION.  Once you start talking about multiple versions of a WLS application, you should be using the WLS "side-by-side" or "versioned" deployment feature.  See Developing Applications for Production Redeployment for more information.  By combining Oracle database EBR and WLS versioned deployment, the application can be upgraded with no downtime, making the combination of features more powerful than either independently.

There is a catch - you need to be running with a versioned database and a versioned application initially so that you can then switch versions.  The recommended way to version a WLS application is to simply add the "Weblogic-Application-Version" property in the MANIFEST.MF file (you can also specify it at deployment time).  The recommended way to configure the data source is to use a packaged data source descriptor that's stored in the ear or war so that everything is self-contained.  There are some restrictions.  You can't use a packaged data source with Logging Last Resource (LLR) - you need to use a system resource.  You can't use an application-scoped packaged data source with EmulateTwoPhaseCommit for the global-transactions-protocol with a versioned application - use a global scope.  See Configuring JDBC Application Modules for Deployment for more details.
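For instance, a minimal MANIFEST.MF carrying the version property might look like this (the version string is arbitrary).

Manifest-Version: 1.0
Weblogic-Application-Version: v2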

There's one known problem - it doesn't work correctly with an XA data source (patch available with bug 14075837).
