Tuesday May 24, 2016

Global Temporary Tables and Upgrading to Oracle Database 12c - Don't Get Caught Out

Prior to Oracle Database 12c, global temporary tables (GTTs) shared statistics between sessions, so each GTT had one set of statistics visible to all sessions using it. Of course, gathering statistics on GTTs can be very useful because it will help the optimizer find better execution plans, but this advantage was sometimes tempered by the fact that multiple sessions would see the same set of stats. To resolve this dilemma, Oracle Database 12c included the ability to create session-private statistics, allowing sessions to create and use their own set of statistics for each GTT. This new behavior is controlled by a DBMS_STATS preference GLOBAL_TEMP_TABLE_STATS, which can be set to SHARED or SESSION.

It is important to remember that the default in Oracle Database 12c is now SESSION, and that this has some implications if you're upgrading to this release. Firstly, you should consider whether your database application depends on SHARED GTT statistics. For example, you might have an ETL job that consists of multiple processes, and perhaps one process uses a database session to gather statistics on a GTT for other sessions to make use of later on. If your database applications are upgraded without taking into account the new default, then workloads relying on shared statistics won't see statistics on GTTs where they had seen them before. Dynamic sampling might kick in where it hadn't before (depending on your optimizer_dynamic_sampling level), and this could result in new or sub-optimal execution plans. So - if you're experiencing SQL execution plan degradations in Oracle Database 12c after an upgrade, check to see if any of the problem queries are associated with GTTs.

What should you do about this? Firstly, you can use the following query to see if GTT preferences match your expectations (you'll need to modify it to include your own application schema names):

select owner, table_name
from   dba_tables
where  temporary = 'Y'
and    owner in ('my_application_owner')
and    duration = 'SYS$SESSION'
order  by owner, table_name;

Note that the predicate on the duration column is used to identify GTTs created with "on commit preserve rows".
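If you also want to see the current preference value for each GTT, DBMS_STATS.GET_PREFS can be called inline. Here's a sketch (substitute your own application schema name):

```sql
select owner,
       table_name,
       dbms_stats.get_prefs('GLOBAL_TEMP_TABLE_STATS', owner, table_name)
          as gtt_stats_pref
from   dba_tables
where  temporary = 'Y'
and    duration = 'SYS$SESSION'
and    owner in ('my_application_owner')
order  by owner, table_name;
```

The third column will show SESSION or SHARED for each table, so you can spot any tables that don't match your expectations at a glance.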

If you want to be very selective, it is easy to change preferences on individual GTTs using a PL/SQL block similar to this. Just adjust the query to select the GTTs you want to change:

declare
   cursor c1 is
      select owner, table_name
      from   dba_tables
      where  temporary = 'Y'
      and    duration = 'SYS$SESSION'
      and    owner in ('my_application_owner');
begin
   for r in c1
   loop
      dbms_stats.set_table_prefs(r.owner, r.table_name,
                                 'GLOBAL_TEMP_TABLE_STATS', 'SHARED');
   end loop;
end;
/

You can also use a bigger switch and set the preference at the database or schema level. For example:

exec dbms_stats.set_global_prefs('GLOBAL_TEMP_TABLE_STATS','SHARED');
exec dbms_stats.set_schema_prefs('my_application_owner','GLOBAL_TEMP_TABLE_STATS','SHARED');

There is an "upgrade gotcha" you need to be aware of. If you have upgraded to Oracle Database 12c and are using optimizer_features_enable (OFE) set to a pre-12.1 version (let's say, for the sake of argument, it is '11.2.0.4') then be aware that the Oracle Optimizer will not "see" session-private statistics.

You can see this in action if you do something like this:

-- Set dynamic sampling to the default value to help us identify when stats are missing
alter session set optimizer_dynamic_sampling = 2;
-- Create a GTT
create global temporary table gtt1 (id number(10)) on commit preserve rows;
insert into gtt1 values (10);
-- Just to make sure you're using the default preference value, "SESSION":
exec dbms_stats.set_table_prefs(user,'GTT1','GLOBAL_TEMP_TABLE_STATS','SESSION')
-- Gather stats 
exec dbms_stats.gather_table_stats(user,'gtt1'); 

Execute a query on the GTT:

set autotrace on
select * from gtt1 where id = 1;

The execution plan will include the following comment in the "Note" section:

Global temporary table session private statistics used

If you now do the same with OFE set to a pre-12.1 value:

alter session set optimizer_features_enable = '11.2.0.4';
set autotrace on
select * from gtt1 where id = 1;

You will see the following note in the execution plan, where the use of dynamic sampling indicates that the session-private statistics are not being seen:

dynamic statistics used: dynamic sampling (level=2)

Note also, that if you do this:

exec dbms_stats.set_table_prefs(user,'GTT1','GLOBAL_TEMP_TABLE_STATS','SESSION')
alter session set optimizer_features_enable = '';
exec dbms_stats.gather_table_stats(user,'gtt1');

...then you will gather session-private statistics, which will not be seen by queries executed against the GTT if OFE remains set to a version prior to 12.1.


  • If you're setting OFE to a version prior to 12.1 and you have "on commit preserve rows" GTTs, you will probably want to change your GTT table preferences to SHARED so that statistics will be created and seen as expected.
  • If you are upgrading to Oracle Database 12c or developing new functionality on this release, you need to consider whether this change of behavior is relevant to your application. Do you have GTTs created with "on commit preserve rows" and does the application gather statistics on those tables? Has your application been modified to accommodate the new default? If not, you might need to change your GTT table preferences to SHARED.

Thursday May 05, 2016

Optimizer Feature Differences Between Oracle Database Releases

Each time we release a new version of the Oracle Database we publish a white paper that covers what's new in the Oracle Optimizer. It's pretty comprehensive and it covers all of the headline features, but it doesn't go deep down in amongst the dragons. What if you want a closer look at what's changed in the Oracle Optimizer between releases? That's what this post is about.

You'll know that there's a database parameter called optimizer_features_enable (OFE). It can be used to restrict which Optimizer features are enabled in your database. You'll also know that you shouldn't use it like a magic wand, but it is very useful for the trick I have in mind here.

I've put together a script for you to check out. It's not earth-shatteringly new (as you'll realize if you take a look around the Internet) but I think that you'll find it useful. The principle behind its operation is pretty simple: it captures parameters and configuration settings for two different OFE values (an older release and the current release, by default). Finally, it displays a comparison between the two sets of results. I think that the output is very interesting. I hope you think so too.
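The principle can be sketched like this. This is a simplified sketch, not the script itself: it assumes SELECT access to V$SES_OPTIMIZER_ENV and uses two example OFE values, and a scratch table name I've made up:

```sql
-- Capture the session optimizer environment under an older OFE value...
alter session set optimizer_features_enable = '11.2.0.4';
create table ofe_env_old as
   select name, value
   from   v$ses_optimizer_env
   where  sid = sys_context('userenv', 'sid');

-- ...then switch to the newer value and compare the two environments
alter session set optimizer_features_enable = '12.1.0.2';
select o.name, o.value as old_value, n.value as new_value
from   ofe_env_old o
join   v$ses_optimizer_env n
on     n.name = o.name
and    n.sid  = sys_context('userenv', 'sid')
where  o.value != n.value
order  by o.name;
```

Each row in the output is an optimizer environment setting that changes between the two releases, which is exactly the kind of difference the script is designed to surface.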

Friday Apr 22, 2016

How to Use SQL Plan Management


In December I posted an entry on the subject of SQL plan management (SPM). It was inspired by questions raised at Oracle Open World 2015 and since then I’ve been talking to more customers about it, particularly in the context of upgrades. As you might guess, I get a lot of questions on the topic of how to use SPM in practice, so it seems like a good time to collate my answers and give you some pointers on how to get the best from the feature. If you're not familiar with SPM, there’s a four-part blog series to get you up to speed.

I’m going to make the assumption that your system is a suitable candidate for implementing SPM. This implies that you have a finite set of repeatable queries (or at least a finite set of critical queries). In SPM terminology, a repeatable query is one that is executed multiple times without the query text being modified, so this will make it practical and worthwhile to capture and stabilize its execution plan. This usually means that the application will need to use bind variables and not incorporate literal values in query strings (unless cursor sharing is being used to address that). If your application does use literal values for some critical queries, consider using SQL profiles with the “force_match” parameter.
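For the force_match case, accepting a SQL profile looks something like this. A sketch only: the tuning task name is hypothetical and would come from an earlier SQL Tuning Advisor run:

```sql
declare
   l_profile_name varchar2(128);
begin
   l_profile_name := dbms_sqltune.accept_sql_profile(
       task_name   => 'my_tuning_task',   -- hypothetical task name
       force_match => TRUE);              -- also match statements that differ only in their literals
end;
/
```

With force_match set to TRUE, the profile will match SQL statements that are textually identical after the literal values have been normalized, which is what makes it useful for applications that don't use bind variables.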

Choosing between a strategic or tactical approach

SPM has been designed so that it can be used strategically. In other words, it will stabilize the execution plans of all your repeatable SQL statements and prevent them from encountering performance regression. If you need to avoid regression at all costs then a strategic, “SQL-plan-baseline-everything” approach is often the most appropriate choice. You should definitely consider creating SQL plan baselines for all queries if you are upgrading and want to continue to use the same SQL execution plans. After upgrade, you can evolve SQL plan baselines once the system is established, taking advantage of new optimizations and better execution plans in an incremental and verified way.

SPM can be implemented in a more tactical manner; using it to target a subset of your workload. This is appropriate if you are confident that you can identify a core set of critical queries and you can be sure that regressions outside this set are unlikely to significantly harm your service levels.

Using SPM proactively or reactively

You might have seen the following image in our documentation, comparing SPM with SQL profiles:

SQL Plan Management vs SQL Profiles

You can see that SPM is usually used proactively to prevent the use of suboptimal plans, whereas SQL Profiles are usually used reactively to correct problems once they have occurred. But don’t assume that this guidance is cast in stone. If you’ve read the earlier SPM series you’ll know that you can use SPM to “repair” queries that have regressed. In my experience, many DBAs use SPM reactively to correct individual queries, avoiding the need to change the application in any way (something that every DBA wants to have in their tool-box).

For the rest of this post, I’ll assume that SPM is being used proactively and that there are SQL plan baselines for all (or a significant part) of your workload.

When to capture SQL plan baselines

Capture queries from your workload when the system is performing well so that, by definition, all of the captured SQL execution plans will be “good” execution plans. SPM auto capture is a very popular choice with DBAs and it's particularly easy to use, but be aware that it will add some overhead to your workload because the execution plans are captured and stored at (hard) parse time. This isn't usually a problem because hard parse rates are generally low of course, but if you're concerned about it you can always capture in bulk from the cursor cache or from SQL tuning sets at any time. Make sure that you create accepted SQL plan baselines for every captured query and if a query has multiple execution plans (because of adaptive cursor sharing) go ahead and accept them all. This approach requires little manual intervention and there is no need to micro-manage which queries to accept.
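Bulk loading from the cursor cache or from a SQL tuning set looks something like this. A sketch: the parsing schema name and SQL tuning set name are hypothetical:

```sql
declare
   n pls_integer;
begin
   -- Load all plans currently in the cursor cache for a given application schema
   n := dbms_spm.load_plans_from_cursor_cache(
            attribute_name  => 'PARSING_SCHEMA_NAME',
            attribute_value => 'MY_APPLICATION_OWNER');

   -- Or load plans captured earlier into a SQL tuning set
   n := dbms_spm.load_plans_from_sqlset(
            sqlset_name => 'my_workload_sts');
end;
/
```

Both functions return the number of plans loaded, and plans loaded this way are accepted by default, which suits the "accept everything captured on a well-behaved system" approach described above.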

SQL plan baseline capture is an ongoing process, but Oracle recommends that you don't switch on auto-capture indefinitely. Instead, capture new queries when changes have been made to applications and workloads. Bulk capture (from the cursor cache or from SQL tuning sets) is a good ongoing approach too. For all types of capture, SQL plan baselines will be created without requiring you to figure out which ones are new and which ones already exist – SPM takes care of that for you.

If you are upgrading a critical system to Oracle Database 12c, then this is a perfect time to consider creating SQL plan baselines for all your queries. You can do this using the pre- or post-upgrade system (see here for details).

Implementing SPM and backing out SQL plan baselines

There is always risk associated with making changes to a production system, and change management procedures will usually demand that you have a back-out procedure. It is reasonable to consider what would happen if you add or enable a large number of SQL plan baselines and you encounter problems of some kind. To help you in these circumstances, the DBA_SQL_PLAN_BASELINES view includes timestamps for creation and modification (as well as the origin of the SQL plan baseline). You can use this information to identify recently added and changed SQL plan baselines. The DROP_SQL_PLAN_BASELINE and ALTER_SQL_PLAN_BASELINE procedures are very efficient so, for example, most systems will be capable of enabling and disabling hundreds of SQL plan baselines per second.
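A back-out along these lines would disable everything captured since a given point in time. This is a sketch: the cut-off timestamp is hypothetical, and you would adjust the WHERE clause to match the change you are backing out:

```sql
declare
   n pls_integer;
begin
   for r in (select sql_handle, plan_name
             from   dba_sql_plan_baselines
             where  created > timestamp '2016-05-01 00:00:00')
   loop
      n := dbms_spm.alter_sql_plan_baseline(
               sql_handle      => r.sql_handle,
               plan_name       => r.plan_name,
               attribute_name  => 'ENABLED',
               attribute_value => 'NO');
   end loop;
end;
/
```

Disabled SQL plan baselines remain in place, so re-enabling them later (attribute_value => 'YES') is just as quick if the problem turns out to lie elsewhere.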

Fixing SQL plan baselines

Individual SQL plan baselines can be fixed. This means that new plans will not be captured automatically for SQL statements that have fixed SQL plan baselines (although you can still load new plans manually and make them fixed if you want to). Be aware that if a SQL plan baseline is fixed for a particular query, but other non-accepted plans in the SQL plan history were previously captured (for this query), then these plan history entries will be subject to evolution. I have heard of DBAs encountering this, but it is a rare scenario - it is more usual to fix all baselines for a given SQL statement if you are considering fixing at all.
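Fixing (or un-fixing) an individual plan uses the ALTER_SQL_PLAN_BASELINE API. A sketch with hypothetical handle and plan names:

```sql
declare
   n pls_integer;
begin
   n := dbms_spm.alter_sql_plan_baseline(
            sql_handle      => 'SQL_abc123def456',      -- hypothetical
            plan_name       => 'SQL_PLAN_xyz789',       -- hypothetical
            attribute_name  => 'FIXED',
            attribute_value => 'YES');                  -- 'NO' to un-fix
end;
/
```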

Some DBAs use SPM very tactically, creating and fixing SQL plan baselines for a carefully chosen set of queries with the intention of never evolving them. However, before fixing SQL plan baselines, consider the possibility of using a slightly different approach. Instead, don't fix them, but allow SPM to capture alternative plans in the SQL plan history. The trick then is to use the SPM auto evolve task to report on the new SQL execution plans without accepting them by default. Like this:

begin
   dbms_spm.set_evolve_task_parameter(
       task_name => 'SYS_AUTO_SPM_EVOLVE_TASK'
   ,   parameter => 'ACCEPT_PLANS'
   ,   value     => 'FALSE');
end;
/

Now you have the option to view the evolve task report at any time and experiment with new plans, accepting the better ones individually and manually if you prefer.
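Viewing the report is straightforward. A sketch, assuming a SQL*Plus-style client (REPORT_AUTO_EVOLVE_TASK returns a CLOB, hence the LONG setting):

```sql
set long 1000000
select dbms_spm.report_auto_evolve_task from dual;
```

The report lists each new plan that was verified, whether it performed better than the accepted plan, and the findings behind that conclusion, so you can pick individual plans to accept manually.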

How and when to evolve

If SQL plan baselines have been comprehensively captured for a representative workload, the number of new plans identified is usually very low. In this case, the motivation to evolve plans will also be low. Nevertheless, evolution is still worthwhile to take into account new plans that are generated by change (such as changes to underlying schema objects, application changes or configuration changes).

In Oracle Database 12c, the evolve process is fully automated and enabled by default. We want you to be confident of using SPM with very little manual intervention, but I know that automation has surprised a number of DBAs that had originally intended to implement evolution down-the-line. If you want to postpone evolution for whatever reason, then the best approach is usually to set the evolve auto task parameter ACCEPT_PLANS to FALSE (see above). New execution plans will be verified and you can view the auto task report at any time to identify improvements and then be as selective as you like about which ones to accept using the evolve API on a case-by-case basis. For example:

DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE( … sql_handle=>'...', plan_name=>'...' …);

When upgrading a database, a strategic SPM approach will enable you to carry forward your existing SQL execution plans into the new release. Once the new environment is established, it is likely that you will capture a large number of new execution plans in the SQL plan history. After all, the Optimizer will be different. The motivation to evolve baselines is likely to be stronger because it makes good sense to make full use of the most up-to-date Optimizer, but you will probably want to verify that each new plan is better (and find out why if it’s not). For this scenario, it is not unreasonable to use ACCEPT_PLANS = FALSE to help you identify improved SQL execution plans to evolve individually. This will give you the opportunity to gain trust in evolution before you set ACCEPT_PLANS to TRUE.


Systems are rarely static. Schema changes or environment changes can sometimes make it impossible to continue to use a previously-valid execution plan defined in a SQL plan baseline. For example, if a SQL plan baseline specifies an index range scan but the corresponding index has been dropped, then it will become impossible for the SQL plan baseline execution plan to be used. This will not cause the corresponding SQL statement to fail; there is no error condition or error message, but it can mean that there will be a SQL statement that is no longer “protected” by a SQL plan baseline so it could conceivably suffer from a performance regression. If evolve is used frequently then this scenario can be avoided to a large extent because plan changes are captured in the SQL plan history where they can be verified and accepted when necessary.

I have seen cases where a customer evolves their SQL plan history infrequently, and subsequently some SQL statements have stopped matching their intended SQL plan baselines without being noticed. Also, this can sometimes happen because of a bug or where there has been a schema change without the DBAs realizing that it could make some execution plans in SQL plan baselines non-viable.  Fortunately, it is not difficult to identify queries in the SQL cursor cache that have a corresponding SQL plan baseline but (for whatever reason) it is not being successfully matched and used. Take a look at the utility scripts I’ve uploaded to GitHub – and in particular “nomatch.sql” shows you how it’s possible to identify SQL statements in the cursor cache that are failing to match their SQL plan baselines. As usual, if you have any suggestions on how I could improve the scripts, just let me know in the comments.

Click for more posts on SPM...

Friday May 08, 2015

Controlling Access To The In-Memory Column Store Via Hints

It’s been almost a year since I’ve had an opportunity to write a new blog post here due to my crazy schedule now that I'm the In-Memory Maven (thanks to Doug Burns & Graham Wood for the new and improved title). But this morning while I was writing a post for the In-Memory blog about controlling the use of the In-Memory column store (IM column store), I realized that the content on Optimizer hints really belonged over here.

So, I decided to split the blog post in two. I’ve put the information on the initialization parameters that control the use of the IM column store on the In-Memory blog, and the information about the Optimizer hints that control the use of the IM column store here.

[Read More]

Monday Jul 07, 2014

The Optimizer & the new Oracle Database In-Memory option

I know it's been forever since I've posted a new blog, but I am no longer the Optimizer lady. In case you haven't heard already, I have a new role now as the In-Memory lady (definitely need to work on a better title).

But for this week's In-Memory blog post I got a chance to combine both my old and my new roles when I wrote about how the In-Memory option and the Optimizer interact. And I thought the readers of this blog might be interested in this topic too.

So, if you are interested in seeing how these two fascinating pieces of the Oracle technology work together, check out the latest post on the new In-Memory blog.

You should also mark your calendars because the head of the Optimizer development team, Mohamed Zait, will also discuss this topic at Oracle Open World later this year!

Hope to catch up with a lot of you then.

Friday Sep 13, 2013

What's new in 12c: Adaptive joins part 2

In our earlier post on adaptive joins we explained how this new 12c functionality works and said we would follow up this post with a real-world demo.

[Read More]

Sunday Aug 25, 2013

What's new in 12c: Adaptive joins

As we promised in our previous post, we are starting a blog series describing all of the new Optimizer and statistics-related functionality in Oracle Database 12c. We begin the series with an in-depth look at adaptive plans, one of the key features in the new adaptive query optimization framework. [Read More]

Tuesday Jun 25, 2013

Oracle Database 12c is here!

Oracle Database 12c was officially released today and is now available for download. Along with the software release comes a whole new set of collateral that explains in detail all of the new features and functionality you will find in this release.

The Optimizer page on Oracle.com has all the juicy details about what you can expect from the Optimizer in Oracle Database 12c. Direct links are below.

[Read More]

The Oracle Optimizer blog is written by members of the Optimizer development team. The goal of this blog is to provide an insight into the workings of the Optimizer and the statistics it relies on. The views expressed on this blog are our own and do not necessarily reflect the views of Oracle and its affiliates. The views and opinions expressed by visitors on this blog are theirs solely and may not reflect ours.
