When performance is measured, performance improves. When performance is measured and reported, the rate of performance accelerates.
(LDS Conference Report, October 1970, p. 107)
Like everything in life, a SOA Suite installation that is monitored and tracked has a much better chance of performing well than one that is not measured. With that in mind, I came up with a tool to measure the impact of configuration changes on database usage in SOA Suite. The tool can be used to assess the impact of different configurations on both database growth and database performance, helping you decide which optimizations offer real benefit to the composite under test.
The basic approach of the tool is to take a snapshot of the number of rows in the SOA tables before executing a composite. The composite is then executed. After the composite has completed another snapshot is taken of the SOA tables. This is illustrated in the diagram below:
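Invoking the tool from SQL*Plus looks roughly like this; the 'BEFORE'/'AFTER' phase labels are my own convention and may not match the values the actual procedure expects:

```sql
-- Sketch of the measurement workflow using the snapshot procedure
-- described later in the post; the phase labels are illustrative.
begin
  soa_snapshot.take_soa_snapshot('AsyncTest1', 'BEFORE');
end;
/

-- ... invoke the composite under test (e.g. from SoapUI) ...

begin
  soa_snapshot.take_soa_snapshot('AsyncTest1', 'AFTER');
end;
/
```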
An example of the data collected by the tool is shown below:
Test Name | Total Tables Changed | Total Rows Added | Notes |
---|---|---|---|
AsyncTest1 | 13 | 15 | Async Interaction with simple SOA composite, one retry to send response. |
AsyncTest2 | 12 | 13 | Async interaction with simple SOA composite, no retries on sending response. |
AsyncTest3 | 12 | 13 | Async interaction with simple SOA composite, no callback address provided. |
OneWayTest1 | 12 | 13 | One-Way interaction with simple SOA composite. |
SyncTest1 | 7 | 7 | Sync interaction with simple SOA composite. |
Note that the first three columns are provided by the tool; the fourth column is just an aide-mémoire to identify what the test actually did. The tool also allows us to drill into the data to get a better look at what is actually changing, as shown in the table below:
Test Name | Table Name | Rows Added |
---|---|---|
AsyncTest1 | AUDIT_COUNTER | 1 |
AsyncTest1 | AUDIT_DETAILS | 1 |
AsyncTest1 | AUDIT_TRAIL | 2 |
AsyncTest1 | COMPOSITE_INSTANCE | 1 |
AsyncTest1 | CUBE_INSTANCE | 1 |
AsyncTest1 | CUBE_SCOPE | 1 |
AsyncTest1 | DLV_MESSAGE | 1 |
AsyncTest1 | DOCUMENT_CI_REF | 1 |
AsyncTest1 | DOCUMENT_DLV_MSG_REF | 1 |
AsyncTest1 | HEADERS_PROPERTIES | 1 |
AsyncTest1 | INSTANCE_PAYLOAD | 1 |
AsyncTest1 | WORK_ITEM | 1 |
AsyncTest1 | XML_DOCUMENT | 2 |
Here we have drilled into the test case with the retry of the callback to see which tables are actually being written to.
Finally, we can compare two tests to see the difference in the number of rows written and the tables updated, as shown below:
Test Name | Base Test Name | Table Name | Row Difference |
---|---|---|---|
AsyncTest1 | AsyncTest2 | AUDIT_TRAIL | 1 |
Here are the additional tables referenced by this test:
Test Name | Base Test Name | Additional Table Name | Rows Added |
---|---|---|---|
AsyncTest1 | AsyncTest2 | WORK_ITEM | 1 |
I created a database stored procedure, soa_snapshot.take_soa_snapshot(test_name, phase), that queries all the SOA tables and records the number of rows in each table. By running the stored procedure before and after the execution of a composite, we can capture the number of rows in the SOA database before and after the composite executes. I then created a view that shows the difference in row counts before and after composite execution. This view has a number of sub-views that allow us to query specific items. The schema is shown below:
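The underlying snapshot storage isn't shown here, but the idea behind the delta view can be sketched as follows; the snapshot table name and its columns (test_name, phase, table_name, row_count) are my assumptions, not the actual schema:

```sql
-- Illustrative sketch only, not the real view definition: compute
-- per-table row growth between the BEFORE and AFTER snapshots of a test.
create or replace view simple_delta_view_sketch as
select after_snap.test_name,
       after_snap.table_name,
       after_snap.row_count - before_snap.row_count as deltarows
from   soa_snapshot_tab after_snap
join   soa_snapshot_tab before_snap
  on   before_snap.test_name  = after_snap.test_name
 and   before_snap.table_name = after_snap.table_name
where  after_snap.phase  = 'AFTER'
and    before_snap.phase = 'BEFORE'
and    after_snap.row_count > before_snap.row_count;
```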
The different tables and views are:

* summary_delta_view – total rows and tables changed per test.
* simple_delta_view – per-table row growth for a single test.
* different_rows_view – row-count differences between two tests for tables that both tests touched.
* extra_tables_view – tables written by one test but not the other; the pair of tests to compare is registered via the soa_ctx_pkg package.
I created a web service wrapper to the take_soa_snapshot procedure so that I could use SoapUI to perform the tests.
Here we can see how many rows in how many tables changed as a result of running a test:
-- Display the total number of rows and tables changed for each test
select * from summary_delta_view
order by test_name;

TEST_NAME            TOTALDELTAROWS TOTALDELTASIZE TOTALTABLES
-------------------- -------------- -------------- -----------
AsyncTest1 15 0 13
AsyncTest1noCCIS 15 0 13
AsyncTest1off 8 0 8
AsyncTest1prod 13 0 12
AsyncTest2 13 0 12
AsyncTest2noCCIS 13 0 12
AsyncTest2off 7 0 7
AsyncTest2prod 11 0 11
AsyncTest3 13 0 12
AsyncTest3noCCIS 13 65536 12
AsyncTest3off 7 0 7
AsyncTest3prod 11 0 11
OneWayTest1 13 0 12
OneWayTest1noCCI 13 65536 12
OneWayTest1off 7 0 7
OneWayTest1prod 11 0 11
SyncTest1 7 0 7
SyncTest1noCCIS 7 0 7
SyncTest1off 2 0 2
SyncTest1prod 5 0 5

20 rows selected
Here for a given test we can see which tables had rows inserted.
-- Display the tables which grew and show the number of rows they grew by
select * from simple_delta_view
where test_name='AsyncTest1'
order by table_name;
TEST_NAME TABLE_NAME DELTAROWS DELTASIZE
-------------------- ------------------------------ ---------- ----------
AsyncTest1 AUDIT_COUNTER 1 0
AsyncTest1 AUDIT_DETAILS 1 0
AsyncTest1 AUDIT_TRAIL 2 0
AsyncTest1 COMPOSITE_INSTANCE 1 0
AsyncTest1 CUBE_INSTANCE 1 0
AsyncTest1 CUBE_SCOPE 1 0
AsyncTest1 DLV_MESSAGE 1 0
AsyncTest1 DOCUMENT_CI_REF 1 0
AsyncTest1 DOCUMENT_DLV_MSG_REF 1 0
AsyncTest1 HEADERS_PROPERTIES 1 0
AsyncTest1 INSTANCE_PAYLOAD 1 0
AsyncTest1 WORK_ITEM 1 0
AsyncTest1 XML_DOCUMENT 2 0
13 rows selected
Here we can see the differences in rows for two tests.
-- Return difference in rows updated between two tests
select * from different_rows_view
where test1='AsyncTest1' and test2='AsyncTest2';
TEST1 TEST2 TABLE_NAME DELTA
-------------------- -------------------- ------------------------------ ----------
AsyncTest1 AsyncTest2 AUDIT_TRAIL 1
Here we can see tables that were used by one test but not by the other test.
-- Register base test case for use in extra_tables_view
-- First parameter (test1) is test we expect to have extra rows/tables
begin soa_ctx_pkg.set('AsyncTest1', 'AsyncTest2'); end;
/
anonymous block completed
-- Return additional tables used by test1
column TEST2 FORMAT A20
select * from extra_tables_view;
TEST1 TEST2 TABLE_NAME DELTAROWS
-------------------- -------------------- ------------------------------ ----------
AsyncTest1 AsyncTest2 WORK_ITEM 1
I used the tool to find out the following. All tests were run using SOA Suite 11.1.1.7.
The following is based on a very simple composite as shown below:
Each BPEL process is basically the same as the one shown below:
Setting | Total Rows Written | Total Tables Updated |
---|---|---|
No Retry | 13 | 12 |
One Retry | 15 | 13 |
When a fault policy causes a retry then the following additional database rows are written:
Table Name | Number of Rows |
---|---|
AUDIT_TRAIL | 1 |
WORK_ITEM | 1 |
Setting | Total Rows Written | Total Tables Updated |
---|---|---|
Development | 13 | 12 |
Production | 11 | 11 |
When the audit level is set to Development instead of Production, the following additional database rows are written:
Table Name | Number of Rows |
---|---|
AUDIT_TRAIL | 1 |
WORK_ITEM | 1 |
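The audit level can be changed for the whole SOA infrastructure in Enterprise Manager, or overridden for a single component in composite.xml via the standard bpel.config.auditLevel property. A sketch of the per-component override (the component name here is hypothetical):

```xml
<!-- composite.xml: override the audit level for one BPEL component -->
<component name="SimpleProcess">
  <implementation.bpel src="SimpleProcess.bpel"/>
  <property name="bpel.config.auditLevel" type="xs:string" many="false">Production</property>
</component>
```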
Setting | Total Rows Written | Total Tables Updated |
---|---|---|
Production | 11 | 11 |
Off | 7 | 7 |
When the audit level is set to Production rather than Off, the following additional database rows are written:
Table Name | Number of Rows |
---|---|
AUDIT_COUNTER | 1 |
AUDIT_DETAILS | 1 |
AUDIT_TRAIL | 1 |
COMPOSITE_INSTANCE | 1 |
Setting | Total Rows Written | Total Tables Updated |
---|---|---|
On | 13 | 12 |
Off | 13 | 12 |
When capture composite instance state is on rather than off, no additional database rows are written. Note, however, that other activities occur when composite instance state is captured.
Setting | Total Rows Written | Total Tables Updated |
---|---|---|
async.persist | 13 | 12 |
async.cache | 7 | 7 |
sync | 7 | 7 |
When async.persist (the default) is chosen instead of sync or async.cache, the following additional database rows are written:
Table Name | Number of Rows |
---|---|
AUDIT_DETAILS | 1 |
DLV_MESSAGE | 1 |
DOCUMENT_CI_REF | 1 |
DOCUMENT_DLV_MSG_REF | 1 |
HEADERS_PROPERTIES | 1 |
XML_DOCUMENT | 1 |
As you would expect, sync mode behaves just like a regular synchronous (request/reply) interaction and creates the same number of rows in the database. async.cache also creates the same number of rows as a sync interaction because it stores state in memory and provides no restart guarantee.
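These settings correspond to the bpel.config.oneWayDeliveryPolicy property, which can be set per component in composite.xml. A sketch of switching a component to async.cache (the component name here is hypothetical):

```xml
<!-- composite.xml: change the one-way delivery policy for one BPEL component -->
<component name="SimpleProcess">
  <implementation.bpel src="SimpleProcess.bpel"/>
  <property name="bpel.config.oneWayDeliveryPolicy" type="xs:string" many="false">async.cache</property>
</component>
```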
The results above are based on a trivial test case. The numbers will be different for bigger and more complex composites. However, by taking snapshots of different configurations you can produce the numbers that apply to your own composites.
The capture procedure supports multiple steps in a test case, but the views only support two snapshots per test case.
The sample project I used is available here.
The scripts used to create the user (createUser.sql), create the schema (createSchema.sql) and sample queries (TableCardinality.sql) are available here.
The Web Service wrapper to the capture state stored procedure is available here.
The sample SoapUI project that I used to take a snapshot, perform the test and take a second snapshot is available here.
Antony, good stuff. However, doesn't look like the sql scripts are available at https://blogs.oracle.com/reynolds/entry/com/reynolds/resource/PerfTuningTool/DBScripts.zip
Sorry about the link being corrupted for the SQL scripts. It should be https://blogs.oracle.com/reynolds/resource/PerfTuningTool/DBScripts.zip
I have fixed the link in the blog entry.
Thanks for letting me know.