Still not enough customers know about Oracle SQL Tuning Sets (STS). Some time ago my colleague Mike Dietrich confirmed this in some of his web seminars about Upgrade and Performance Stability. So I decided to post an English version about the STS technology and provide the corresponding scripts so you can get started immediately.

What are the use cases for SQL Tuning Sets? With SQL Tuning Sets (STS), for example, you can collect the relevant statements for (recurring) tuning tasks, and the corresponding information about their execution performance can easily be displayed. You can also use existing AWR data to build SQL Tuning Sets. An STS provides the input for the SQL Tuning Advisor and the SQL Access Advisor and the foundation for the SQL Performance Analyzer (SPA). And if you need to test on a remote system, SQL Tuning Sets can be copied to a remote database using Data Pump export and import.

The SQL Tuning Set functionality is available in the Enterprise Edition of the Oracle database – on-premises or in the cloud – without additional licensing of a Management Pack or Option. Information on this can be found in the Oracle Licensing Guide.

SQL Tuning Sets have existed in Oracle Database for a long time. They were introduced in connection with the implementation of Oracle’s SQL Tuning Framework. An STS, as the name suggests, is a collection of SQL statements that are stored in the database under a user-defined name. Not only the SQL commands themselves are stored, but also related information about the execution. 

In general an SQL Tuning Set consists of the following components:

  •     One or more SQL statements
  •     Execution context such as bind variables, parsing schema, etc.
  •     Basic execution statistics such as elapsed time, CPU time, etc.
  •     Execution plans

SQL Tuning Sets can be generated from various sources such as the Automatic Workload Repository (AWR), the cursor cache, or Real Application Testing Database Replay. If, for example, you want to store as many statements as possible quickly and easily in the STS, capturing from the cursor cache is suitable. If only the top statements are of interest, you can restrict yourself to the AWR method.

To create an STS, you can use either Enterprise Manager Cloud Control, which offers a very intuitive graphical interface, or the command-line PL/SQL interface. Since not everyone has Enterprise Manager available, the examples in the following sections are performed on the command line.

The infrastructure for SQL Tuning Sets consists mainly of the package DBMS_SQLTUNE and the associated data dictionary views DBA_SQLSET and DBA_SQLSET_STATEMENTS. The complete functionality of the package and the data dictionary views is documented in the corresponding chapters of the manuals.

Before you start, make sure that the user working with STS has the required privilege ADMINISTER SQL TUNING SET.
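
For example, a DBA could grant this privilege to the demo schema used throughout this posting (DWH_DATA is simply the schema from the examples below):

grant administer sql tuning set to dwh_data;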

In the following sections we will illustrate how to …

Please note: The scripts used in this posting can be downloaded from GitHub.

Create STS from Cursor Cache
Statements can be filtered according to certain criteria before they are stored in the database under a name. The following basic filter criteria are available: 

  • SQL_TEXT for SQL Text
  • PARSING_SCHEMA_NAME for user schema
  • MODULE in DBMS_APPLICATION_INFO
  • ACTION in DBMS_APPLICATION_INFO
  • ELAPSED_TIME
  • CPU_TIME
  • BUFFER_GETS
  • DISK_READS
  • ROWS_PROCESSED
  • EXECUTIONS

Use the SELECT_CURSOR_CACHE function to select the statements from the cursor cache that meet certain criteria.

In the following example, a maximum of 100 SELECT statements from the schema DWH_DATA are selected and displayed in the order of their elapsed time. The conditions on PARSING_SCHEMA_NAME (the user schema) and SQL_TEXT (only SELECT statements) go into the parameter BASIC_FILTER, while RESULT_LIMIT limits the result to 100 statements and RANKING_MEASURE1 sorts them by elapsed time.

set linesize window

SELECT last_exec_start_time, elapsed_time/1000/1000 elapsed_sec, sql_id, sql_text
FROM dbms_sqltune.select_cursor_cache (
basic_filter =>'parsing_schema_name = ''DWH_DATA'' and sql_text like ''SELECT%''', 
result_limit => 100, 
ranking_measure1 =>'elapsed_time'      )
/

LAST_EXEC_START_TIM ELAPSED_SEC SQL_ID        SQL_TEXT
------------------- ----------- ------------- --------------------------------------------------------------------------------
2023-07-20/14:07:22   10.979077 1a09zh5111gn8 SELECT
                                                  'G' || tt1.PG_FeatureValue_19_1 PG_FeatureValue_19_1,
                                                  'B' || 1010

2023-07-20/14:06:15   10.636308 4f82wsuf4jvj6 SELECT
                                                  'B' || tt1.PG_FeatureValue_03_1 PG_FeatureValue_03_1,
                                                  'B' || 1010

2023-07-20/14:06:38   10.169495 9554d72gknsm3 SELECT
                                                  'B' || tt1.PG_FeatureValue_07_1 PG_FeatureValue_07_1,
                                                  'B' || 1010

2023-07-20/14:07:08    9.607707 4wnm88kuv8hb1 SELECT
                                                  'B' || 1010280 repchannel_1,
                                                  'B' || tt1.domain_productgroup_1 dom

2023-07-20/14:07:48     9.60563 2b5wt819wzcpk SELECT
                                                  'B' || 1010280 repchannel_1,
                                                  'B' || tt1.domain_productgroup_1 dom

2023-07-20/14:07:34    9.474815 9qq1u1vj5ddkx SELECT
                                                  'G' || tt1.PG_FeatureValue_19_1 PG_FeatureValue_19_1,
                                                  'B' || 1010

2023-07-20/14:05:50    9.208127 88c8ytct1c0gf SELECT
... 

All filter criteria can be combined and thus influence the output. In this way, SQL Tuning Sets can be created that satisfy certain performance or environment requirements, such as a module set via DBMS_APPLICATION_INFO, a parsing schema, a CPU time threshold, and so on. Further criteria can be found in the manual in the section DBMS_SQLTUNE.SELECT_CURSOR_CACHE.
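
As an illustration only, such a combined filter could look like the following sketch; the module name DWH_LOAD and the CPU time threshold of 1 second are made-up values:

SELECT sql_id, elapsed_time/1000/1000 elapsed_sec, module, sql_text
FROM dbms_sqltune.select_cursor_cache (
basic_filter =>'parsing_schema_name = ''DWH_DATA'' and module = ''DWH_LOAD'' and cpu_time > 1000000')
/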

Note: If you use a large number of bind variables or large bind values, keep in mind that there is an internal limit. You may adjust the internal parameter _cursor_bind_capture_area_size, which can be changed without restarting the instance. It is only relevant during capture of the SQL statement, because it determines how much bind data can be kept in memory for each query.
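
If such an adjustment is really necessary, it could look like the following sketch; the value is only an illustration, and underscore parameters should generally only be changed in coordination with Oracle Support:

ALTER SYSTEM SET "_cursor_bind_capture_area_size" = 800 SCOPE=MEMORY;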

How can we now store these statements in the database in a particular STS? To create an STS in the database, the first step is to create an object with a given name. This is done with the CREATE_SQLSET procedure.

First let’s connect to a PDB and drop the STS in case it already exists.
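
The connection details are environment specific; the service name pdb1 below is just a placeholder.

connect dwh_data@pdb1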

execute dbms_sqltune.drop_sqlset(sqlset_name=>'STS_1', sqlset_owner=>'DWH_DATA');

Then let’s create a new STS.

execute dbms_sqltune.create_sqlset(sqlset_name=>'STS_1', sqlset_owner=>'DWH_DATA');

If you want to accumulate statements incrementally over a certain time from the cursor cache in a SQL Tuning Set, use the procedure CAPTURE_CURSOR_CACHE_SQLSET. The following example polls the cursor cache every 5 minutes over a period of 30 minutes (1800 seconds). In our example we are interested in statements from schema DWH_DATA with an elapsed time of more than 5 seconds.

execute dbms_sqltune.capture_cursor_cache_sqlset(-
   sqlset_name     => 'STS_1',-
   time_limit      => 1800, -
   repeat_interval => 300, -
   basic_filter    => 'parsing_schema_name = ''DWH_DATA'' AND elapsed_time>5000000');

Note: The execution of the procedure CAPTURE_CURSOR_CACHE_SQLSET takes as long as the given time period – in our case 30 minutes. If you use DBMS_SCHEDULER to run such an incremental collection, the process can also be terminated prematurely with DBMS_SCHEDULER.STOP_JOB.
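
A minimal sketch of such a scheduler job; the job name STS_CAPTURE_JOB is chosen here only for illustration:

BEGIN
  dbms_scheduler.create_job(
    job_name   => 'STS_CAPTURE_JOB',
    job_type   => 'PLSQL_BLOCK',
    job_action => q'[BEGIN
                       dbms_sqltune.capture_cursor_cache_sqlset(
                         sqlset_name     => 'STS_1',
                         time_limit      => 1800,
                         repeat_interval => 300,
                         basic_filter    => 'parsing_schema_name = ''DWH_DATA'' AND elapsed_time>5000000');
                     END;]',
    enabled    => TRUE);
END;
/

-- stop the running capture prematurely if required
execute dbms_scheduler.stop_job(job_name => 'STS_CAPTURE_JOB');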

If you want to load statements all at once from the current cursor cache instead of incrementally, you can use the following PL/SQL code.

execute dbms_sqltune.drop_sqlset(sqlset_name=>'STS_2', sqlset_owner=>'DWH_DATA');
execute dbms_sqltune.create_sqlset(sqlset_name=>'STS_2', sqlset_owner=>'DWH_DATA');

DECLARE
 cur dbms_sqltune.SQLSET_CURSOR;
BEGIN
 OPEN cur FOR
   SELECT VALUE(P)
   FROM dbms_sqltune.select_cursor_cache(basic_filter => 
   'parsing_schema_name = ''DWH_DATA'' and sql_text like ''SELECT%''') P;
   dbms_sqltune.load_sqlset(sqlset_name => 'STS_2', populate_cursor => cur);
  CLOSE cur;
END;
/

Note: The VALUE function returns the object instance (here a SQLSET_ROW) for each row of the collection returned by the table function.

Create STS from AWR

Alternatively, the data collected by AWR and ASH in the workload repository can be used. This data is collected continuously and retained in the database for 8 days by default. Since ASH samples active sessions once per second and AWR stores the top SQL statements with each snapshot, the SQL statements causing a performance problem can be located via the associated snapshots. Instead of the SELECT_CURSOR_CACHE function, the SELECT_WORKLOAD_REPOSITORY function is used in this case. It takes the IDs of two snapshots marking the beginning and the end of the period. Additionally, as already explained in the previous example, filter criteria can be specified.

First we search for the corresponding snapshots using the data dictionary view DBA_HIST_SNAPSHOT.

ALTER SESSION SET NLS_TIMESTAMP_FORMAT = 'YYYY-MM-DD HH:MI';

col begin_snap format a20
col end_snap format a20

SELECT snap_id, begin_interval_time begin_snap, end_interval_time end_snap
FROM dba_hist_snapshot
ORDER BY snap_id;

   SNAP_ID BEGIN_SNAP                     END_SNAP
---------- ------------------------------ ------------------------------
      ...
      6541 2023-07-20 10:00     2023-07-20 11:00
      6542 2023-07-20 11:00     2023-07-20 12:00
      6543 2023-07-20 12:00     2023-07-20 01:00
      6544 2023-07-20 01:00     2023-07-20 02:00

In the example, the two snapshots 6543 and 6544 are used. In addition, further filter criteria can be specified as in the other examples.

execute dbms_sqltune.drop_sqlset(sqlset_name=>'STS_3', sqlset_owner=>'DWH_DATA');
execute dbms_sqltune.create_sqlset(sqlset_name=>'STS_3', sqlset_owner=>'DWH_DATA');

DECLARE
 cur dbms_sqltune.sqlset_cursor;
BEGIN
 OPEN cur FOR
  SELECT VALUE(P) 
  FROM dbms_sqltune.select_workload_repository
   (begin_snap=>6543, end_snap=> 6544, basic_filter => 'sql_text like ''SELECT%''   
   and parsing_schema_name = ''DWH_DATA''' ) P; 
  dbms_sqltune.load_sqlset (sqlset_name => 'STS_3', populate_cursor => cur);
  CLOSE cur;
END;
/

Note: The number of statements stored in the AWR is limited, because only the top statements are captured with each snapshot. If you want to change this number, you can do so with the procedure DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS.
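
For example, to raise the number of top SQL statements captured with each snapshot (the value 200 here is just an illustration):

execute dbms_workload_repository.modify_snapshot_settings(topnsql => 200);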

Reduce an STS

If the STS is too large or you want to divide the STS into several sets, you can use the SELECT_SQLSET function in conjunction with various filter criteria. The following example creates the subset STS_1A. It contains the 10 statements from the SQL Tuning Set STS_1 with the highest elapsed time.

execute dbms_sqltune.drop_sqlset(sqlset_name=>'STS_1A', sqlset_owner=>'DWH_DATA');
execute dbms_sqltune.create_sqlset(sqlset_name=>'STS_1A', sqlset_owner=>'DWH_DATA');

DECLARE cur dbms_sqltune.sqlset_cursor; 
BEGIN 
OPEN cur FOR
     SELECT VALUE (P)
     FROM table(DBMS_SQLTUNE.SELECT_SQLSET(sqlset_name =>'STS_1',
            ranking_measure1 =>'ELAPSED_TIME', result_limit=>&limit)) P;
  DBMS_SQLTUNE.LOAD_SQLSET(sqlset_name => 'STS_1A', populate_cursor => cur);
CLOSE cur;
END;
/
Enter value for limit: 10
old   7:          sqlset_name =>'STS_1', ranking_measure1 =>'ELAPSED_TIME', result_limit=>&limit)) P;
new   7:          sqlset_name =>'STS_1', ranking_measure1 =>'ELAPSED_TIME', result_limit=>10)) P;

PL/SQL procedure successfully completed.

Monitor STS

If you want to get an overview of all SQL Tuning Sets with owner DWH_DATA, you can use the following query.

select name, owner, created, statement_count
from all_sqlset 
where owner='DWH_DATA'; 

NAME                 OWNER                CREATED   STATEMENT_COUNT
-------------------- -------------------- --------- ---------------
STS_1                DWH_DATA             20-JUL-23              15
STS_1A               DWH_DATA             20-JUL-23              10
STS_2                DWH_DATA             20-JUL-23             283
STS_3                DWH_DATA             20-JUL-23              10

Information about the exact content of the SQL Tuning Sets can be obtained via the view DBA_SQLSET_STATEMENTS. Here you can sort and filter according to different criteria – comparable to V$SQL. In this way you get a quick overview of statements that do not meet the requirements, without necessarily having to generate an AWR report.

select executions, cpu_time/1000 cpu_in_ms, elapsed_time/1000 elapsed_in_ms,
       sql_id, substr(sql_text,1,80) 
from all_sqlset_statements
where sqlset_name like 'STS_2' and sqlset_owner='DWH_DATA'
ORDER BY 3 desc;

EXECUTIONS  CPU_IN_MS ELAPSED_IN_MS SQL_ID
---------- ---------- ------------- -------------
SUBSTR(SQL_TEXT,1,80)
--------------------------------------------------------------------------------
         1   7474.292     10979.077 1a09zh5111gn8
SELECT
    'G' || tt1.PG_FeatureValue_19_1 PG_FeatureValue_19_1,
    'B' || 1010

         1   6653.287     10636.308 4f82wsuf4jvj6
SELECT
    'B' || tt1.PG_FeatureValue_03_1 PG_FeatureValue_03_1,
    'B' || 1010

Transport an STS to a remote system

There is also the option of transporting SQL Tuning Sets to other databases. This is especially useful if, for example, the tuning process is to be outsourced. The procedure is described below.

Note: Please don’t use the schema SYS for this process.
Four steps are required.

1) Create a staging table (here STAGING_TABLE) on the source system.

execute dbms_sqltune.create_stgtab_sqlset(table_name => 'STAGING_TABLE', schema_name => 'DWH_DATA');

2) Load the staging table with data from one or more SQL Tuning Sets. In our example we want to transport STS_2.

execute dbms_sqltune.pack_stgtab_sqlset(sqlset_name => 'STS_2', 
                                        sqlset_owner => 'DWH_DATA', 
                                        staging_table_name => 'STAGING_TABLE', 
                                        staging_schema_owner=> 'DWH_DATA');

3) Export the STAGING_TABLE table with Data Pump and import it into the test/target environment.
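
A minimal sketch of this step on the command line, assuming the directory object DATA_PUMP_DIR exists on both systems and the dump file is copied to the target host by other means:

expdp dwh_data directory=DATA_PUMP_DIR dumpfile=sts_staging.dmp logfile=sts_exp.log tables=STAGING_TABLE

# on the target system, after copying sts_staging.dmp
impdp dwh_data directory=DATA_PUMP_DIR dumpfile=sts_staging.dmp logfile=sts_imp.log tables=STAGING_TABLE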

4) Extract the SQL Tuning Set(s) from the staging table.

execute dbms_sqltune.unpack_stgtab_sqlset(sqlset_name => 'STS_2', -
                                          sqlset_owner => 'DWH_DATA', -
                                          replace => TRUE, -
                                          staging_schema_owner=>'DWH_DATA', -
                                          staging_table_name => 'STAGING_TABLE');

Now the SQL Tuning Set(s) can be used in the new environment, which serves as a test environment, for example. The releases of the two environments can even be different: an SQL Tuning Set can be exported from a database running an older release and imported into an environment running a newer release in order to test the behavior there.

Further Reading