Tuesday Jul 19, 2016

Setting a Session Parameter Overrides OFE

I received an email recently that demonstrated something the author considered strange when the init.ora parameter optimizer_features_enable (OFE) is set in a database session. I thought I'd mention it here because the behavior he spotted is expected, but I don't think that it is entirely obvious.

Let's assume that you're logged into Oracle Database 12c. Now check the value of a hidden parameter applicable to this database version:

select ksppinm name,
       ksppstvl value
from   sys.x$ksppi x,
       sys.x$ksppcv y
where (x.indx = y.indx)
and   ksppinm = '_optimizer_aggr_groupby_elim';

NAME                            VALUE
=============================   =====
_optimizer_aggr_groupby_elim    TRUE

Next, set OFE to 11.2.0.4:

alter session set optimizer_features_enable = '11.2.0.4';

And you will then see this:

NAME                            VALUE
=============================   =====
_optimizer_aggr_groupby_elim    FALSE

That's not surprising - we have turned off an Oracle Database 12c feature by setting OFE down to 11.2.0.4. You can probably guess that the use of hidden parameters is not something I normally recommend, but there are circumstances where you might want to set them. Now, check this out:

alter session set "_optimizer_aggr_groupby_elim"=true;
alter session set optimizer_features_enable = '11.2.0.4';

NAME                            VALUE
=============================   =====
_optimizer_aggr_groupby_elim    TRUE

This time we see "TRUE", and it's because we do not cascade OFE to a parameter that has been changed in the session.

It's similar if you use a hint. In the following example, group-by and aggregation elimination will be available to the query even though you might expect the hint to imply otherwise:

alter session set "_optimizer_aggr_groupby_elim"=true;

SELECT /*+ optimizer_features_enable('11.2.0.4') */ ...

If the hidden parameter had not been set in the session, the hint would have disabled group-by elimination.




Thursday May 05, 2016

Optimizer Feature Differences Between Oracle Database Releases

Each time we release a new version of the Oracle Database we publish a white paper that covers what's new in the Oracle Optimizer. It's pretty comprehensive and it covers all of the headline features, but it doesn't go deep down in amongst the dragons. What if you want a closer look at what's changed in the Oracle Optimizer between releases? That's what this post is about.

You'll know that there's a database parameter called optimizer_features_enable (OFE). It can be used to restrict which Optimizer features are enabled in your database. You'll also know that you shouldn't use it like a magic wand, but it is very useful for the trick I have in mind here.

I've put together a script for you to check out. It's not earth-shatteringly new (as you'll realize if you take a look around the Internet) but I think that you'll find it useful. The principle behind its operation is pretty simple: it captures parameters and configuration settings for two different OFE values (11.2.0.4 and 12.1.0.2, by default) and then displays a comparison between the two sets of results. I think that the output is very interesting. I hope you think so too.
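If you just want a feel for the principle before grabbing the script, here's a minimal sketch of the approach, assuming a scratch table called OFE_SETTINGS and an account that can see the X$ views (the real script is more thorough):

create table ofe_settings (ofe varchar2(20), name varchar2(80), value varchar2(4000));

-- Capture the hidden parameter settings for the first OFE value
alter session set optimizer_features_enable = '11.2.0.4';

insert into ofe_settings
select '11.2.0.4', x.ksppinm, y.ksppstvl
from   sys.x$ksppi x, sys.x$ksppcv y
where  x.indx = y.indx;

-- ...and again for the second OFE value
alter session set optimizer_features_enable = '12.1.0.2';

insert into ofe_settings
select '12.1.0.2', x.ksppinm, y.ksppstvl
from   sys.x$ksppi x, sys.x$ksppcv y
where  x.indx = y.indx;

-- Report the parameters that differ between the two settings
select o.name, o.value as "11.2.0.4", n.value as "12.1.0.2"
from   ofe_settings o, ofe_settings n
where  o.name   = n.name
and    o.ofe    = '11.2.0.4'
and    n.ofe    = '12.1.0.2'
and    o.value != n.value
order by o.name;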


Friday Apr 22, 2016

How to Use SQL Plan Management

Introduction

In December I posted an entry on the subject of SQL plan management (SPM). It was inspired by questions raised at Oracle Open World 2015 and since then I’ve been talking to more customers about it, particularly in the context of upgrades. As you might guess, I get a lot of questions on the topic of how to use SPM in practice, so it seems like a good time to collate my answers and give you some pointers on how to get the best from the feature. If you're not familiar with SPM, there’s a four-part blog series to get you up to speed.

I’m going to make the assumption that your system is a suitable candidate for implementing SPM. This implies that you have a finite set of repeatable queries (or at least a finite set of critical queries). In SPM terminology, a repeatable query is one that is executed multiple times without the query text being modified, so this will make it practical and worthwhile to capture and stabilize its execution plan. This usually means that the application will need to use bind variables and not incorporate literal values in query strings (unless cursor sharing is being used to address that). If your application does use literal values for some critical queries, consider using SQL profiles with the “force_match” parameter.
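For example, here's a hedged sketch of accepting a profile recommended by a SQL Tuning Advisor task with force matching enabled, so that the profile also matches statements that differ only in their literal values (the task name is an illustrative placeholder):

DECLARE
  l_profile VARCHAR2(128);
BEGIN
  l_profile := DBMS_SQLTUNE.ACCEPT_SQL_PROFILE(
                 task_name   => 'my_tuning_task',  -- hypothetical tuning task
                 force_match => TRUE);             -- match regardless of literal values
END;
/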

Choosing between a strategic or tactical approach

SPM has been designed so that it can be used strategically. In other words, it will stabilize the execution plans of all your repeatable SQL statements and prevent them from encountering performance regression. If you need to avoid regression at all costs then a strategic, “SQL-plan-baseline-everything” approach is often the most appropriate choice. You should definitely consider creating SQL plan baselines for all queries if you are upgrading and want to continue to use the same SQL execution plans. After upgrade, you can evolve SQL plan baselines once the system is established, taking advantage of new optimizations and better execution plans in an incremental and verified way.

SPM can also be implemented in a more tactical manner, using it to target a subset of your workload. This is appropriate if you are confident that you can identify a core set of critical queries and you can be sure that regressions outside this set are unlikely to significantly harm your service levels.

Using SPM proactively or reactively

You might have seen the following image in our documentation, comparing SPM with SQL Profiles:

SQL Plan Management vs SQL Profiles

You can see that SPM is usually used proactively to prevent the use of suboptimal plans, whereas SQL Profiles are usually used reactively to correct problems once they have occurred. But don’t assume that this guidance is cast in stone. If you’ve read the earlier SPM series you’ll know that you can use SPM to “repair” queries that have regressed. In my experience, many DBAs use SPM reactively to correct individual queries, avoiding the need to change the application in any way (something that every DBA wants to have in their tool-box).

For the rest of this post, I’ll assume that SPM is being used proactively and that there are SQL plan baselines for all (or a significant part) of your workload.

When to capture SQL plan baselines

Capture queries from your workload when the system is performing well so that, by definition, all of the captured SQL execution plans will be “good” execution plans. SPM auto capture is a very popular choice with DBAs and it's particularly easy to use, but be aware that it will add some overhead to your workload because the execution plans are captured and stored at (hard) parse time. This isn't usually a problem because hard parse rates are generally low of course, but if you're concerned about it you can always capture in bulk from the cursor cache or from SQL tuning sets at any time. Make sure that you create accepted SQL plan baselines for every captured query and if a query has multiple execution plans (because of adaptive cursor sharing) go ahead and accept them all. This approach requires little manual intervention and there is no need to micro-manage which queries to accept.
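As a sketch of the two capture styles (the schema filter is an illustrative placeholder):

-- Automatic capture of repeatable statements as they are hard parsed:
ALTER SESSION SET optimizer_capture_sql_plan_baselines = TRUE;

-- ...or bulk capture from the cursor cache, filtered by parsing schema:
DECLARE
  n PLS_INTEGER;
BEGIN
  n := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(
         attribute_name  => 'PARSING_SCHEMA_NAME',
         attribute_value => 'MYAPP');   -- hypothetical application schema
END;
/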

SQL plan baseline capture is an ongoing process, but Oracle recommends that you don’t switch on auto-capture indefinitely. Instead, capture new queries when changes have been made to applications and workloads. Bulk capture (from the cursor cache or from SQL tuning sets) is a good ongoing approach too. For all types of capture, SQL plan baselines will be created without requiring you to figure out which ones are new and which ones already exist – SPM takes care of that for you.

If you are upgrading a critical system to Oracle Database 12c, then this is a perfect time to consider creating SQL plan baselines for all your queries. You can do this using the pre- or post-upgrade system (see here for details).

Implementing SPM and backing out SQL plan baselines

There is always risk associated with making changes to a production system, and change management procedures will usually demand that you have a back-out procedure. It is reasonable to consider what would happen if you add or enable a large number of SQL plan baselines and you encounter problems of some kind. To help you in these circumstances, the DBA_SQL_PLAN_BASELINES view includes timestamps for creation and modification (as well as the origin of the SQL plan baseline). You can use this information to identify recently added and changed SQL plan baselines. The DROP_SQL_PLAN_BASELINE and ALTER_SQL_PLAN_BASELINE procedures are very efficient so, for example, most systems will be capable of enabling and disabling hundreds of SQL plan baselines per second.
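As a hedged sketch of such a back-out, here is a loop that disables every baseline captured in the last day (the time window is illustrative; adjust it to match your change):

DECLARE
  n PLS_INTEGER;
BEGIN
  FOR r IN (SELECT sql_handle, plan_name
            FROM   dba_sql_plan_baselines
            WHERE  created > SYSDATE - 1)   -- illustrative window
  LOOP
    n := DBMS_SPM.ALTER_SQL_PLAN_BASELINE(
           sql_handle      => r.sql_handle,
           plan_name       => r.plan_name,
           attribute_name  => 'enabled',
           attribute_value => 'NO');
  END LOOP;
END;
/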

Fixing SQL plan baselines

Individual SQL plan baselines can be fixed. This means that new plans will not be captured automatically for SQL statements that have fixed SQL plan baselines (although you can still load new plans manually and make them fixed if you want to). Be aware that if a SQL plan baseline is fixed for a particular query, but other non-accepted plans in the SQL plan history were previously captured (for this query), then these plan history entries will be subject to evolution. I have heard of DBAs encountering this, but it is a rare scenario; it is more usual to fix all baselines for a given SQL statement if you are considering fixing at all.
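For reference, fixing an individual baseline uses the same ALTER_SQL_PLAN_BASELINE API mentioned above; a minimal sketch with placeholder handle and plan names:

DECLARE
  n PLS_INTEGER;
BEGIN
  n := DBMS_SPM.ALTER_SQL_PLAN_BASELINE(
         sql_handle      => 'SQL_b9104d57c4e28ffc',           -- placeholder
         plan_name       => 'SQL_PLAN_bk42daz2f53zwb9fe04b5', -- placeholder
         attribute_name  => 'fixed',
         attribute_value => 'YES');
END;
/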

Some DBAs use SPM very tactically, creating and fixing SQL plan baselines for a carefully chosen set of queries with the intention of never evolving them. However, before fixing SQL plan baselines, consider the possibility of using a slightly different approach. Instead, don't fix them, but allow SPM to capture alternative plans in the SQL plan history. The trick then is to use the SPM auto evolve task to report on the new SQL execution plans without accepting them by default. Like this:

BEGIN
  DBMS_SPM.SET_EVOLVE_TASK_PARAMETER(
      task_name => 'SYS_AUTO_SPM_EVOLVE_TASK'
  ,   parameter => 'ACCEPT_PLANS'
  ,   value     => 'FALSE'
  );
END;
/

Now you have the option to view the evolve task report at any time and experiment with new plans, accepting the better ones individually and manually if you prefer.
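To view that report, a minimal sketch (with the default arguments this should report the most recent execution of the auto evolve task):

SELECT DBMS_SPM.REPORT_EVOLVE_TASK(task_name => 'SYS_AUTO_SPM_EVOLVE_TASK')
FROM   dual;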

How and when to evolve

If SQL plan baselines have been comprehensively captured for a representative workload, the number of new plans identified is usually very low. In this case, the motivation to evolve plans will also be low. Nevertheless, evolution is still worthwhile to take into account new plans that are generated by change (such as changes to underlying schema objects, application changes or configuration changes).

In Oracle Database 12c, the evolve process is fully automated and enabled by default. We want you to be confident of using SPM with very little manual intervention, but I know that automation has surprised a number of DBAs who had originally intended to implement evolution down-the-line. If you want to postpone evolution for whatever reason, then the best approach is usually to set the evolve auto task parameter ACCEPT_PLANS to FALSE (see above). New execution plans will be verified and you can view the auto task report at any time to identify improvements and then be as selective as you like about which ones to accept using the evolve API on a case-by-case basis. For example:

DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE( … sql_handle=>'...', plan_name=>'...' …);

When upgrading a database, a strategic SPM approach will enable you to carry forward your existing SQL execution plans into the new release. Once the new environment is established, it is likely that you will capture a large number of new execution plans in the SQL plan history. After all, the Optimizer will be different. The motivation to evolve baselines is likely to be stronger because it makes good sense to make full use of the most up-to-date Optimizer, but you might want to verify that each new plan really is better (and find out why if it’s not). For this scenario, it is not unreasonable to use ACCEPT_PLANS = FALSE to help you identify improved SQL execution plans to evolve individually. This will give you the opportunity to gain trust in evolution before you set ACCEPT_PLANS to TRUE.

Troubleshooting

Systems are rarely static. Schema changes or environment changes can sometimes make it impossible to continue to use a previously-valid execution plan defined in a SQL plan baseline. For example, if a SQL plan baseline specifies an index range scan but the corresponding index has been dropped, then it will become impossible for the SQL plan baseline execution plan to be used. This will not cause the corresponding SQL statement to fail; there is no error condition or error message, but it can mean that there will be a SQL statement that is no longer “protected” by a SQL plan baseline, so it could conceivably suffer from a performance regression. If evolve is used frequently then this scenario can be avoided to a large extent because plan changes are captured in the SQL plan history where they can be verified and accepted when necessary.

I have seen cases where a customer evolves their SQL plan history infrequently, and subsequently some SQL statements have stopped matching their intended SQL plan baselines without being noticed. This can also happen because of a bug, or where there has been a schema change without the DBAs realizing that it could make some execution plans in SQL plan baselines non-viable. Fortunately, it is not difficult to identify queries in the SQL cursor cache that have a corresponding SQL plan baseline but (for whatever reason) are not successfully matching and using it. Take a look at the utility scripts I’ve uploaded to GitHub; in particular, “nomatch.sql” shows you how it’s possible to identify SQL statements in the cursor cache that are failing to match their SQL plan baselines. As usual, if you have any suggestions on how I could improve the scripts, just let me know in the comments.
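The core of the idea is a join between the cursor cache and the baseline views, something along these lines (the real script handles more edge cases):

SELECT s.sql_id, s.plan_hash_value, b.plan_name
FROM   v$sql s, dba_sql_plan_baselines b
WHERE  s.exact_matching_signature = b.signature
AND    b.enabled  = 'YES'
AND    b.accepted = 'YES'
AND    s.sql_plan_baseline IS NULL;   -- cursor is not using any baseline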



Tuesday Mar 15, 2016

Group-by and Aggregation Elimination

I get a fair number of questions on query transformations, and it’s especially true at the moment because we’re in the middle of the Oracle Database 12c Release 2 beta program. Sometimes people notice “something missing” or unusual in an execution plan and realize that a transformation is happening. For example, Join Elimination (thanks, Christian) can take you by surprise if you notice that a table in your query doesn’t appear in the SQL execution plan at all (and neither do any of its indexes).

I’m sure you’re aware that query transformations are an important step in query optimization because it’s often possible to reduce the cost of a query by eliminating steps like joins or sorts. Sometimes changing the shape of the query will allow it to use different access paths, different types of join, and entirely different types of query blocks. We cover most transformations in our “What to expect from the Optimizer” collateral and, in particular, release-specific Optimizer white papers (here's the Oracle Database 12c one).

In Oracle Database 12.1.0.2, we added a new transformation called Group-by and Aggregation Elimination, but it slipped through without a mention in our collateral. It happens to be one of the simplest transformations in the Oracle Optimizer’s repertoire, and I know that some of you are spookily well-informed and know about it already. You might have seen it getting a mention in Mike Dietrich’s upgrade blog.

Let’s take a look at what the transformation does…

Many applications contain queries that have a form where a group-by query block has a single table that is a “group-by view”. Here’s an example:

SELECT v.column1, v.column2, MAX(v.sm), SUM(v.sm)
FROM (SELECT t1.column1, t1.column2, SUM(t1.item_count) AS sm
      FROM   t1, t2
      WHERE  t1.column4 > 3
      AND    t1.id = t2.id
      AND    t2.column5 > 10
      GROUP BY t1.column1, t1.column2) V
GROUP BY v.column1, v.column2;


For the transformation to be possible, the outer query must have a single group-by view, but the inner query (that forms the view) can include multiple tables. Under certain conditions, we can transform these types of query to eliminate a group-by aggregation, which is an expensive operation. The resulting query is simpler and contains fewer group-by clauses and aggregate functions. For the query above, we can transform it into something like this:

SELECT t1.column1, t1.column2, SUM(t1.item_count) AS "MAX(V.SM)", SUM(t1.item_count) AS "SUM(V.SM)"
FROM   t1, t2
WHERE  t1.column4 > 3 AND
       t1.id = t2.id AND
       t2.column5 > 10
GROUP BY t1.column1, t1.column2; 


Here’s what the SQL execution plan looks like without the transformation. In my case there are 100,000 rows in each table and the elapsed time is 2.09 seconds:

------------------------------------------------------
|  Id | Operation             | Name | Rows  | Bytes |
------------------------------------------------------
|   0 | SELECT STATEMENT      |      |       |       |
|   1 |  HASH GROUP BY        |      | 66521 |  1494K|
|   2 |   VIEW                |      | 66521 |  1494K|
|   3 |    HASH GROUP BY      |      | 66521 |  2143K|
|   4 |     HASH JOIN         |      | 99800 |  3216K|
|   5 |      TABLE ACCESS FULL| T2   | 99800 |   877K|
|   6 |      TABLE ACCESS FULL| T1   | 99998 |  2343K|
------------------------------------------------------


You can see from the plan above that there are two HASH GROUP BY steps, one for the view and one for the outer query block. I used database version 12.1.0.2 for this test, so I disabled the query transformation by setting the database hidden parameter _optimizer_aggr_groupby_elim to FALSE.
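For reference, the setting I used was:

alter session set "_optimizer_aggr_groupby_elim" = false;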

Looking at the plan for the transformed query, you can see that there is only a single HASH GROUP BY step. The elapsed time was much lower too – only 1.29 seconds in my case:

----------------------------------------------------
|  Id | Operation           | Name | Rows  | Bytes |
----------------------------------------------------
|   0 | SELECT STATEMENT    |      |       |       |
|   1 |  HASH GROUP BY      |      | 66521 |  2143K|
|   2 |   HASH JOIN         |      | 99800 |  3216K|
|   3 |    TABLE ACCESS FULL| T2   | 99800 |   877K|
|   4 |    TABLE ACCESS FULL| T1   | 99998 |  2343K|
----------------------------------------------------


The example above is relatively easy to understand because the GROUP BY columns in the view and the outer query are the same. This doesn’t have to be the case. It’s sometimes possible to transform the query even if the outer query’s GROUP BY is a subset of the view’s GROUP BY. For example:

SELECT v.column1, v.column3, MAX(v.column1), SUM(v.sm)
FROM (SELECT t1.column1, t1.column2, t1.column3, SUM(t1.item_count) AS sm
      FROM   t1, t2
      WHERE  t1.column4 > 3 AND
             t1.id = t2.id  AND
             t2.column5 > 10
      GROUP BY t1.column1, t1.column2, t1.column3) V
GROUP BY v.column1, v.column3;

----------------------------------------------------
|  Id | Operation           | Name | Rows  | Bytes |
----------------------------------------------------
|   0 | SELECT STATEMENT    |      |       |       |
|   1 |  HASH GROUP BY      |      | 49891 |  1607K|
|*  2 |   HASH JOIN         |      | 99800 |  3216K|
|*  3 |    TABLE ACCESS FULL| T2   | 99800 |   877K|
|*  4 |    TABLE ACCESS FULL| T1   | 99998 |  2343K|
----------------------------------------------------


You don’t need to do anything special to enable this query transformation. It is enabled by default and it happens automatically whenever a particular query passes the validation criteria. In real-world, enterprise-class systems you can of course expect to see much more significant time-savings when this transformation can be applied. Note that the transformation is not applied for queries that use ROLLUP or CUBE grouping functions.

Were there problems with this transformation? Yes, there were (and this is why Mike Dietrich mentioned it). With any transformation, the Oracle Optimizer has to figure out when it can and can’t be applied and the logic behind this can be incredibly complex. The bottom line is that there were some cases where the transformation was being applied and it shouldn’t have been. Generally, this was where the outer group-by query was truncating or casting columns used by the inner group-by. This is now fixed and it’s covered by patch number 21826068. Please use MOS to check availability for your platform and database version.

Thursday Dec 03, 2015

Upgrade to Oracle Database 12c and Avoid Query Regression

Those of you that made it to the demo grounds at Oracle Open World this year (BTW - it’s still 2015 - just) will have had the chance to chat with the Oracle developers and throw lots of difficult questions at them! For everyone in the Optimizer development team it was a great opportunity to get lots of interaction and feedback, so thanks to all of you that took the time to come along and talk to us. We’re not all lucky enough to get a trip out to San Francisco, so I’ve been collating the main topics that came up to steer the subject matter of the next few blog posts. In this way I hope that we can all benefit from these interactions.

I can tell you right away that the number one demo ground question at OOW 2015 was …drum roll… “How can I reduce the risk of query regression when I upgrade to Oracle Database 12c?”. Well, maybe not those words exactly, but words to that effect. There is quite a lot of information out there on this topic, but people seem to struggle to find it… so we put our heads together and we realized that we should publish a 20,000ft view of this topic with pointers down into the detail. That’s the purpose of this post and, for the sake of brevity, I’m going to make the assumption that you are upgrading your Enterprise Edition database from Oracle Database 11g Release 2 to Oracle Database 12c.

The bottom line is this: if you want to mitigate the risk of query regression when you upgrade to Oracle Database 12c, then use SQL Plan Management (SPM). This is the recommended approach for the vast majority of systems out there, where the most critical SQL statements are reusable or, in other words, they are executed more than once.

Here are a couple of common scenarios:

Scenario 1: You want to use all Oracle Database 12c Optimizer features right away, but you need to “repair” any regressed queries quickly and with minimum effort.
Scenario 2: You’re upgrading and want to keep your “tried-and-tested” Oracle Database 11g execution plans. Nevertheless, you do not want your application to be frozen in time: you want to evolve and use improved execution plans that are available in the new release, and you need to do this in a proven and controlled way.

Scenario 1

This is something you’ll want to think about before your production system goes live, particularly if you have not been able to test a realistic workload on all of your production data. It’s also very relevant if you are running a benchmark or proof of concept, where the time that’s available to resolve problem queries can be pretty limited (I’m using some understatement there!).

Ideally you will have captured SQL plan baselines before you’ve upgraded, because then you’ll have a set of “good” execution plans at-the-ready. It isn’t absolutely necessary to do this, though. As long as you can reproduce or find an example of a good plan, then this can be used to create a SQL plan baseline on-demand. For example, you may find a better plan:

  • By running the problem query in a pre-upgrade environment (remembering that you can export and import SQL plan baselines to copy them between databases).
  • Inside an existing SQL Tuning Set (STS).
  • By reproducing the good plan in the post-upgrade environment using (for example) “alter session set optimizer_features_enable = 11…”, adjusting other Optimizer database parameters or by using hints. Yes, setting this parameter to an older version will give you the plan of the older version; that’s the whole purpose of it (and if it doesn’t work for you then it usually means that there’s a bug).

The next step is the particularly clever part, but I get the impression that a lot of Oracle experts don’t know that it’s even possible. When you’ve found a good plan and captured the details in a SQL plan baseline, you can use SPM to associate it with a regressed SQL statement without having to change the existing query or the existing application code. For details, take a look in the section, “Creating an accepted plan by modifying the SQL text” in an earlier Optimizer blog entry and also page 24 of SQL Plan Management with Oracle Database 12c. In both cases, an improved SQL execution plan is found using a hint. This plan is associated with a regressed SQL statement so that, in future, the better plan is used.
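As a hedged sketch of that association step (the SQL ID and plan hash value of the hinted test statement, and the SQL handle of the regressed statement, are illustrative placeholders):

DECLARE
  n PLS_INTEGER;
BEGIN
  n := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(
         sql_id          => 'aq7bdq3c5h1z2',         -- hinted test statement
         plan_hash_value => 1233343122,              -- its good plan
         sql_handle      => 'SQL_abcdef1234567890'); -- handle of the regressed statement
END;
/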

Scenario 2

You should capture SQL plan baselines in your Oracle Database 11g environment and export them so that they can be imported into the upgraded database. If you are upgrading in-place, then existing SQL plan baselines will be available without the need to export and import them. If you neglected to capture baselines in the pre-upgrade environment, then you still have the option to capture 11.2 execution plans in an Oracle Database 12c environment by executing your queries in a session where the database parameter optimizer_features_enable is set to “11.2.0.4” (or whatever version you like).
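The export/import mechanism uses a staging table; here's a minimal sketch, assuming illustrative names SPM_STAGE and MYAPP:

-- On the source (pre-upgrade) database:
BEGIN
  DBMS_SPM.CREATE_STGTAB_BASELINE(table_name => 'SPM_STAGE', table_owner => 'MYAPP');
END;
/

DECLARE
  n NUMBER;
BEGIN
  n := DBMS_SPM.PACK_STGTAB_BASELINE(table_name => 'SPM_STAGE', table_owner => 'MYAPP');
END;
/

-- Copy SPM_STAGE across (e.g. with Data Pump), then on the upgraded database:
DECLARE
  n NUMBER;
BEGIN
  n := DBMS_SPM.UNPACK_STGTAB_BASELINE(table_name => 'SPM_STAGE', table_owner => 'MYAPP');
END;
/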

Once SQL plan baselines are established in the upgraded database, you will enjoy plan stability while you get to know your system in the critical early days after the upgrade. Once you are happy with your shiny new database, you can evolve SQL plan baselines either automatically or at your own pace. Then you will gain advantage of all the new Optimizer features available in Oracle Database 12c.

Licensing

SQL Plan Management is an Oracle Database Enterprise Edition (EE) feature (you can see this here). You don’t need to buy an additional license to use SPM on EE. Optionally, you might choose to use SPM with SQL Tuning Sets (STS). If you do, then you will need to have purchased the Oracle Tuning Pack for Oracle Database (PDF) in addition to EE because STS requires this pack.

Top Tip

Whenever you plan to upgrade, check out Oracle’s Database Upgrade Blog. It’s full of really great information and it will hook you up with the latest advice at the finest level of detail. For example, here are some useful specifics on SPM to get you started.

So now is the time to upgrade, unless you’re a retailer like Amazon who’s heating up its systems for the big Christmas push, or perhaps you’re heading into your end-of-year financial reporting period. Nevertheless, even for you, the “now” is pretty close…



Tuesday Oct 20, 2015

Oracle OpenWorld 2015

Oracle OpenWorld 2015 is nearly here! Come and see our session on Monday, entitled The Next Generation of the Oracle Optimizer...and How You Can Use It NOW [CON8740]. The session will introduce you to the latest enhancements Oracle has made to make Optimizer statistics more comprehensive and accurate than ever before. You'll get some pointers to the latest best practice too.

If you want to get the deepest insight into the Optimizer features available in the latest generation of the Oracle Database, come and talk to the developers themselves in the demo grounds. They'll be in Moscone South, pod SLD-042 (you'll see it if you search for "optimizer" here).

Have a great Oracle OpenWorld 2015!

Monday Sep 07, 2015

Tips on SQL Plan Management and Oracle Database In-Memory – Part 3

In Part 1 of this series of tips on SQL Plan Management (SPM) and Oracle Database In-Memory I covered an example where a full table scan query made use of the In-Memory column store immediately without changing the SQL execution plan. In Part 2 I presented an example where the In-Memory column store made an alternative SQL execution plan viable, and where there was a corresponding SQL plan baseline already in place so that this plan could be used immediately. 

In this post I will consider a slightly different scenario:
  • SQL plan management is used to stabilize the SQL execution plans for our critical application (that until now has not been using the In-Memory column store).
  • The In-Memory column store is subsequently enabled and populated with application tables. 
  • The Optimizer identifies new SQL execution plans.
  • Most of the new execution plans have never been chosen by the Optimizer before.
Let me say right away that SPM behaves in a business-as-usual manner:
  • New plans for existing baselines are captured by SPM but they will not be used until they are accepted.
  • Existing SQL plan baselines are used, so queries continue to use "approved" SQL execution plans.
  • The database administrator chooses how and when to evolve the SQL plan baselines to take full advantage of In-Memory SQL execution plans.
This is probably one of the most common scenarios you’ll encounter if you use SPM and you start to use Oracle Database In-Memory. As Andy Rivenes pointed out in his blog post, SPM is a very good way to avoid query regressions by controlling how and when queries are affected as you populate the In-Memory column store with more and more tables. I'll use the following example to show you how SPM behaves:
  • There is a query called Q3.
  • Q3 queries a table called MYSALES. 
  • MYSALES is not yet populated into the In-Memory column store.
  • Q3 filters rows in MYSALES using a column called SALE_TYPE.
  • SALE_TYPE has relatively low cardinality, but an index is still useful.
  • There is a SQL plan baseline for Q3 to ensure that it uses an index range scan and not a full table scan.
This is the plan before the In-Memory column store is populated with MYSALES:

PLAN_TABLE_OUTPUT
-----------------
SQL_ID  8xkx5abshb4rz, child number 2
-------------------------------------
select /* SPM */ count(*),sum(val) from mysales where sale_type in (2,3)

Plan hash value: 719460714

-------------------------------------------------------------------------------------------------
| Id  | Operation                             | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |         |       |       |   246 (100)|          |
|   1 |  SORT AGGREGATE                       |         |     1 |     7 |            |          |
|   2 |   INLIST ITERATOR                     |         |       |       |            |          |
|   3 |    TABLE ACCESS BY INDEX ROWID BATCHED| MYSALES | 20000 |   136K|   246   (0)| 00:00:01 |
|*  4 |     INDEX RANGE SCAN                  | SI      | 20000 |       |    40   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   4 - access(("SALE_TYPE"=2 OR "SALE_TYPE"=3))

Note
-----
   - SQL plan baseline SQL_PLAN_bk42daz2f53zwb9fe04b5 used for this statement

It's an index range scan, and the “note” section (above) shows us that a SQL plan baseline is being used (its name ends in "4b5"). Let's take a look at our baselines; there's just one:

SELECT plan_name,sql_handle,sql_text,enabled, accepted
FROM   dba_sql_plan_baselines
WHERE  sql_text LIKE '%SPM%';

PLAN_NAME                       SQL_HANDLE           SQL_TEXT                            ENA ACC
------------------------------- -------------------- ----------------------------------- --- ---
SQL_PLAN_bk42daz2f53zwb9fe04b5  SQL_b9104d57c4e28ffc select /* SPM */ count(*),sum(val)  YES YES
                                                     from mysales
                                                     where sale_type in (2,3)

Now, populate the In-Memory column store:  

-- Mark MYSALES with the In-Memory attribute
ALTER TABLE mysales INMEMORY;

-- Access MYSALES to trigger population into In-Memory column store
SELECT count(*) FROM mysales;

If we re-run Q3, we still get an index range scan rather than the INMEMORY FULL scan we might have anticipated (because an In-Memory scan can be more efficient than an index range scan in some cases): 

-------------------------------------------------------------------------------------------------
| Id  | Operation                             | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |         |       |       |   246 (100)|          |
|   1 |  SORT AGGREGATE                       |         |     1 |     7 |            |          |
|   2 |   INLIST ITERATOR                     |         |       |       |            |          |
|   3 |    TABLE ACCESS BY INDEX ROWID BATCHED| MYSALES | 20000 |   136K|   246   (0)| 00:00:01 |
|*  4 |     INDEX RANGE SCAN                  | SI      | 20000 |       |    40   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------

Has the Optimizer decided that an index range scan is still the best option? We can answer that if we take another look at the SQL plan baselines: 

SELECT plan_name,sql_handle,sql_text,enabled, accepted
FROM   dba_sql_plan_baselines
WHERE  sql_text LIKE '%SPM%';

PLAN_NAME                       SQL_HANDLE           SQL_TEXT                            ENA ACC
------------------------------- -------------------- ----------------------------------- --- ---
SQL_PLAN_bk42daz2f53zwc69cec1f  SQL_b9104d57c4e28ffc select /* SPM */ count(*),sum(val)  YES NO
                                                     from mysales
                                                     where sale_type in (2,3)
SQL_PLAN_bk42daz2f53zwb9fe04b5  SQL_b9104d57c4e28ffc select /* SPM */ count(*),sum(val)  YES YES
                                                     from mysales
                                                     where sale_type in (2,3)

OK. There’s a new baseline, but it isn’t accepted (the value “NO” appears in the accepted column). This is exactly what SPM is supposed to do: we continue to use accepted and "approved" plans until we have verified or chosen to use alternatives. What is the new baseline plan in this case?

SELECT PLAN_TABLE_OUTPUT
FROM   V$SQL s, DBA_SQL_PLAN_BASELINES b,
        TABLE(
          DBMS_XPLAN.DISPLAY_SQL_PLAN_BASELINE(b.sql_handle,b.plan_name,'basic')
        ) t
WHERE  s.EXACT_MATCHING_SIGNATURE=b.SIGNATURE
AND    b.PLAN_NAME='SQL_PLAN_bk42daz2f53zwc69cec1f';

PLAN_TABLE_OUTPUT
-----------------
--------------------------------------------------------------------------------
SQL handle: SQL_b9104d57c4e28ffc
SQL text: select /* SPM */ count(*),sum(val) from mysales where sale_type in
         (2,3)
--------------------------------------------------------------------------------

--------------------------------------------------------------------------------
Plan name: SQL_PLAN_bk42daz2f53zwc69cec1f         Plan id: 3332172831
Enabled: YES     Fixed: NO      Accepted: NO      Origin: AUTO-CAPTURE
Plan rows: From dictionary
--------------------------------------------------------------------------------

Plan hash value: 3292460164

-----------------------------------------------
| Id  | Operation                   | Name    |
-----------------------------------------------
|   0 | SELECT STATEMENT            |         |
|   1 |  SORT AGGREGATE             |         |
|   2 |   TABLE ACCESS INMEMORY FULL| MYSALES |
-----------------------------------------------

There it is! The Optimizer has established that the In-Memory full table scan is a good choice, but it will not be used until the new SQL plan baseline has been accepted. Let's go ahead and accept it, but take note that in this example there’s a good chance that the difference in performance will be very small because, after all, it’s only a simple query on a small dataset. If the performance difference is small then automatic plan evolution won’t deem the performance improvement to be sufficient to trigger automatic acceptance of the new baseline. It's worth remembering this if you find that you have a bunch of new baselines that are not accepted automatically. I'll use “verify=>’NO’” to force acceptance: 

DECLARE
  cval CLOB;
BEGIN
  cval := DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE(
            sql_handle => 'SQL_b9104d57c4e28ffc',
            verify     => 'NO');
END;
/

SELECT plan_name,sql_handle,sql_text,enabled, accepted
FROM   dba_sql_plan_baselines
WHERE  sql_text LIKE '%SPM%';

PLAN_NAME                       SQL_HANDLE           SQL_TEXT                            ENA ACC
------------------------------- -------------------- ----------------------------------- --- ---
SQL_PLAN_bk42daz2f53zwc69cec1f  SQL_b9104d57c4e28ffc select /* SPM */ count(*),sum(val)  YES YES
                                                     from mysales
                                                     where sale_type in (2,3)
SQL_PLAN_bk42daz2f53zwb9fe04b5  SQL_b9104d57c4e28ffc select /* SPM */ count(*),sum(val)  YES YES
                                                     from mysales
                                                     where sale_type in (2,3)

Now, re-run the query:

PLAN_TABLE_OUTPUT
-----------------
SQL_ID  8xkx5abshb4rz, child number 0
-------------------------------------
select /* SPM */ count(*),sum(val) from mysales where sale_type in (2,3)

Plan hash value: 3292460164

---------------------------------------------------------------------------------------
| Id  | Operation                   | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |         |       |       |    24 (100)|          |
|   1 |  SORT AGGREGATE             |         |     1 |     7 |            |          |
|*  2 |   TABLE ACCESS INMEMORY FULL| MYSALES | 20000 |   136K|    24   (9)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - inmemory(("SALE_TYPE"=2 OR "SALE_TYPE"=3))
       filter(("SALE_TYPE"=2 OR "SALE_TYPE"=3))

Note
-----
   - SQL plan baseline SQL_PLAN_bk42daz2f53zwc69cec1f used for this statement

That’s more like it! We have accepted the new In-Memory execution plan and we’ve done it in a controlled manner. We are using the new SQL plan baseline (the name ends in "c1f").

In reality, you might have thousands of SQL plan baselines to evolve, but you can use the SPM evolve advisor task to automate the process of verification and acceptance. If you use this feature, then any SQL statements that don’t benefit significantly from the In-Memory column store will continue to use their existing SQL execution plans.

In this series of posts I don’t pretend to have covered every possible scenario, but I hope that this has given you some idea of how SPM will behave if you choose to use Oracle Database In-Memory. I'm still not absolutely sure that "Part 3" will be the last part, so this might end up being a trilogy in four or five parts (to steal a quote from a famous author). You can take and develop the scripts I wrote to try out scenarios of your own. They are available on GitHub. So go ahead and check them out, and post any questions you have in the comments section below.


Tuesday Aug 25, 2015

Tips on SQL Plan Management and Oracle Database In-Memory - Part 2

In Part 1 of this series of tips on SQL Plan Management (SPM) and Oracle Database In-Memory, I covered what would happen if we have a SQL plan baseline for a full table scan query when the table was populating the In-Memory column store. 

In this part I’m going to cover a scenario where a query has more than one SQL plan baseline: 

  • There is a query (called Q2, for short).
  • Q2 queries a table called MYSALES, which is not yet populating the In-Memory column store.
  • Q2 filters rows in MYSALES using a predicate on the SALE_TYPE column.
  • Data in SALE_TYPE is skewed, so there’s an index and a histogram on this column.
  • Because there is data skew, Q2 has two accepted SQL plan baselines; one with a full table scan and one with an index range scan.

You’ve probably come across this situation many times: the Oracle Optimizer must choose between a full table scan or an index range scan depending on predicate selectivity. The ability to change the execution plan based on the value of bind variables is called adaptive cursor sharing. If you’ve not come across that, then you’ll find it useful to check out the section on this topic in the Database SQL Tuning Guide.

What’s great about SPM is that it allows you to have multiple SQL plan baselines for individual queries, so you're not forced to pick one plan in preference to another. This capability is most relevant in environments where SQL statements use bind variables and there is a good deal of data skew. Queries like this are likely to have their plans affected by Oracle Database In-Memory because in-memory full table scans will have a lower cost than storage-resident table scans. Clearly, the In-Memory column store will affect the point of inflection where a full table scan becomes more efficient than an index range scan. How is this going to work with SPM?

Take a look at the following example. Q2 executes and matches 2 million rows because I picked the value of bind variable “:val” to do just that. The Optimizer chooses a full table scan: 

PLAN_TABLE_OUTPUT
-----------------
SQL_ID  d3u63rk540w0r, child number 1
-------------------------------------
select /* SPM */ count(*),sum(val) from mysales where sale_type = :val

Plan hash value: 3292460164

------------------------------------------------------------------------------
| Id  | Operation          | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |         |       |       |  2475 (100)|          |
|   1 |  SORT AGGREGATE    |         |     1 |    17 |            |          |
|*  2 |   TABLE ACCESS FULL| MYSALES |  2000K|    32M|  2475   (1)| 00:00:01 |
------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

  2 - filter("SALE_TYPE"=:VAL)

Note
-----
  - SQL plan baseline SQL_PLAN_93ct9zmnvtbuhc69cec1f used for this statement

For the second execution, the value “:val” is set so that it would match only 20,001 rows. This time the Optimizer chooses an index range scan: 

PLAN_TABLE_OUTPUT
-----------------
SQL_ID  d3u63rk540w0r, child number 2
-------------------------------------
select /* SPM */ count(*),sum(val) from mysales where sale_type = :val

Plan hash value: 1266559460

------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |         |       |       |   133 (100)|          |
|   1 |  SORT AGGREGATE                      |         |     1 |    17 |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| MYSALES | 20001 |   332K|   133   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN                  | SI      | 20001 |       |    44   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
  3 - access("SALE_TYPE"=:VAL)

Note
-----
  - SQL plan baseline SQL_PLAN_93ct9zmnvtbuh5d8bf80c used for this statement

As you will have figured out, the Optimizer has calculated that the index is less efficient than a full table scan when Q2 matches a large number of rows (2 million in this case) so we have two viable SQL execution plans for this query. Before I ran the queries above, I accepted two SQL plan baselines for Q2. You can see in the “note” sections above that two different baselines are used (one ending in “80c” and one ending in “c1f”). They can be seen in the dba_sql_plan_baselines view: 

SELECT plan_name,sql_text,enabled, accepted 
FROM   dba_sql_plan_baselines
WHERE  sql_text LIKE '%SPM%';

PLAN_NAME                           SQL_TEXT                                ENA ACC
----------------------------------- ----------------------------------      --- ---
SQL_PLAN_93ct9zmnvtbuhc69cec1f      select /* SPM */ count(*),sum(val)      YES YES
                                    from mysales where sale_type = :val           
SQL_PLAN_93ct9zmnvtbuh5d8bf80c      select /* SPM */ count(*),sum(val)      YES YES
                                    from mysales where sale_type = :val

We’re in good shape here. The Optimizer is adapting the query execution plan to take into account bind variable values and data skew. What’s more, SPM is working with us, not against us, because it is not forcing Q2 to use a single SQL execution plan.

What happens if we populate MYSALES into the In-Memory column store? 

-- Mark MYSALES with the In-Memory attribute
ALTER TABLE mysales INMEMORY;

-- Access MYSALES to trigger population into In-Memory column store
SELECT count(*) FROM mysales;

If we execute Q2 to match 2 million rows, the Optimizer continues to choose a full table scan: 

PLAN_TABLE_OUTPUT
-----------------
SQL_ID  d3u63rk540w0r, child number 1
-------------------------------------
select /* SPM */ count(*),sum(val) from mysales where sale_type = :val

Plan hash value: 3292460164

---------------------------------------------------------------------------------------
| Id  | Operation                   | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |         |       |       |   115 (100)|          |
|   1 |  SORT AGGREGATE             |         |     1 |    17 |            |          |
|*  2 |   TABLE ACCESS INMEMORY FULL| MYSALES |  2000K|    32M|   115  (20)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
  2 - inmemory("SALE_TYPE"=:VAL)
      filter("SALE_TYPE"=:VAL)

Note
-----
  - SQL plan baseline SQL_PLAN_93ct9zmnvtbuhc69cec1f used for this statement

The full table scan is now annotated with INMEMORY, so we know that some or all of the data for MYSALES is scanned via the In-Memory column store. The “note” section reports that the same baseline is being used as before (ending in “c1f”). This is good news, and it’s the scenario that was covered in Part 1 of this series.  

What if we executed the query to match 20,001 rows? You can probably guess what’s coming; the Optimizer judges that the In-Memory scan is more efficient than the index range scan: 

PLAN_TABLE_OUTPUT
-----------------
SQL_ID  d3u63rk540w0r, child number 2
-------------------------------------
select /* SPM */ count(*),sum(val) from mysales where sale_type = :val

Plan hash value: 3292460164

---------------------------------------------------------------------------------------
| Id  | Operation                   | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |         |       |       |   115 (100)|          |
|   1 |  SORT AGGREGATE             |         |     1 |    17 |            |          |
|*  2 |   TABLE ACCESS INMEMORY FULL| MYSALES | 20001 |   332K|   115  (20)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
  2 - inmemory("SALE_TYPE"=:VAL)
      filter("SALE_TYPE"=:VAL)

Note
-----
  - SQL plan baseline SQL_PLAN_93ct9zmnvtbuhc69cec1f used for this statement

Since there is a SQL plan baseline that allows a full table scan to be used, Q2 can use this access method straight away and we get immediate benefit from scanning the In-Memory column store!

Hold on a minute! Wasn’t that just a little bit too convenient? I arranged it so that there was a handy full-table-scan SQL plan baseline ready and waiting for when I "flipped the switch" and started using the In-Memory column store. This example might seem a little contrived, but it is a real-world example and I chose it to illustrate how SPM works together with both Oracle Database In-Memory and adaptive cursor sharing (and if you want more, there's an earlier blog on how adaptive cursor sharing interacts with SPM).

If, instead, I had started out with a single baseline that specified an index range scan, then this is the plan that would have been used even after MYSALES populated the In-Memory column store (and we would not have had an INMEMORY FULL scan). That’s not a bad thing; it is exactly what plan stability means and it is how SPM is meant to work. In the example above I made use of a couple of SQL execution plans that were validated and accepted before I initiated the In-Memory column store. In the more general case, where the Optimizer identifies a brand new execution plan for use with the In-Memory column store, we might want to validate it before we allow the database to use it in our critical application. How can we do that? Happily, it's what SPM evolution was built for, and it goes all the way back to the initial scenario I mentioned in Part 1. I'll cover the details in Part 3 (coming soon). 

If you want to try out this example for yourself, the scripts are in GitHub.

Wednesday Aug 12, 2015

Tips on SQL Plan Management and Oracle Database In-Memory Part 1

If you follow Oracle’s In-Memory blog then you probably came across a post mentioning how you should use SQL Plan Management when you’re upgrading to Oracle Database In-Memory. Whether you have read that post or not, you might be wondering what will happen if you have some SQL plan baselines and you begin to populate the In-Memory column store with a bunch of the tables used by those baselines. That’s what this post is about. Well, in fact, I’m going to break the topic up into a few posts because (as ever!) there is a little bit of subtlety to cover. Luckily, this will make your life easier rather than more difficult because you can get immediate benefit from In-Memory even if you don’t evolve SQL plan baselines on day one.

When I started to think about this post I thought that I would start with the first scenario that probably comes to mind if you’re familiar with SQL Plan Management (SPM): 
  • The Optimizer comes up with a new execution plan for a SQL statement because something has changed, and Oracle Database In-Memory would be a very good example of that! 
  • If there’s a SQL plan baseline for the statement, the database will use the baseline execution plan and capture the new plan.
  • Where appropriate, the new plan will be validated and accepted using SQL plan evolution. 

I will get to that, but first it’s better to start with a couple of more subtle points. With this information in our back pocket it will be easier to understand (and explain) the more traditional aspects of SQL plan evolution in the context of Oracle Database In-Memory. 

Here, I will cover the following example:
  • There is a table called MYSALES that’s not yet populated into the In-Memory column store. 
  • A query (called “Q1”) includes a full table scan of MYSALES. There is no index on the table that’s useful to Q1. 
  • Q1 has an active SQL plan baseline.
  • MYSALES is subsequently populated into the In-Memory column store.

Let’s take a look at Q1 and its SQL execution plan before populating MYSALES into the In-Memory column store (take note of the plan hash value; its significance will become clear further down)...

SQL_ID  4ss4zbb813250, child number 0
-------------------------------------
SELECT /* SPM */ COUNT(*) FROM   mysales WHERE  val = 'X'

Plan hash value: 3292460164

------------------------------------------------------------------------------
| Id  | Operation          | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |         |       |       |    69 (100)|          |
|   1 |  SORT AGGREGATE    |         |     1 |     2 |            |          |
|*  2 |   TABLE ACCESS FULL| MYSALES | 99991 |   195K|    69   (2)| 00:00:01 |
------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
  2 - filter("VAL"='X')

Note
-----
  - SQL plan baseline SQL_PLAN_7469nmnn7nsu3c69cec1f used for this statement

Q1 performs a full table scan of MYSALES. The "note" section makes it clear that a SQL plan baseline is used. This is what that looks like:

SELECT PLAN_TABLE_OUTPUT
FROM   V$SQL s, DBA_SQL_PLAN_BASELINES b,
      TABLE(
        DBMS_XPLAN.DISPLAY_SQL_PLAN_BASELINE(b.sql_handle,b.plan_name,'basic')
      ) t
WHERE  s.EXACT_MATCHING_SIGNATURE=b.SIGNATURE
AND    b.PLAN_NAME=s.SQL_PLAN_BASELINE
AND    s.SQL_ID='4ss4zbb813250';

--------------------------------------------------------------------------------
SQL handle: SQL_7219349d287a6343
SQL text: SELECT /* SPM */ COUNT(*) FROM   mysales WHERE  val = 'X'
--------------------------------------------------------------------------------

--------------------------------------------------------------------------------
Plan name: SQL_PLAN_7469nmnn7nsu3c69cec1f         Plan id: 3332172831
Enabled: YES     Fixed: NO      Accepted: YES     Origin: AUTO-CAPTURE
Plan rows: From dictionary
--------------------------------------------------------------------------------

Plan hash value: 3292460164

--------------------------------------
  Id  | Operation          | Name    |
--------------------------------------
|   0 | SELECT STATEMENT   |         |
|   1 |  SORT AGGREGATE    |         |
|   2 |   TABLE ACCESS FULL| MYSALES |
--------------------------------------

What happens if MYSALES is now populated into the In-Memory column store? 

-- Mark MYSALES with the In-Memory attribute
ALTER TABLE mysales INMEMORY;

-- Access MYSALES to trigger population into In-Memory column store
SELECT count(*) FROM mysales;

Let’s rerun our query and examine the execution plan:

SQL_ID  4ss4zbb813250, child number 1
-------------------------------------
SELECT /* SPM */ COUNT(*) FROM   mysales WHERE  val = 'X'

Plan hash value: 3292460164

---------------------------------------------------------------------------------------
| Id  | Operation                   | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |         |       |       |     3 (100)|          |
|   1 |  SORT AGGREGATE             |         |     1 |     2 |            |          |
|*  2 |   TABLE ACCESS INMEMORY FULL| MYSALES |   100K|   195K|     3   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - inmemory("VAL"='X')
       filter("VAL"='X')

Note
-----
   - SQL plan baseline SQL_PLAN_7469nmnn7nsu3c69cec1f used for this statement

There is still a full table scan, but this time the query will read data from MYSALES via the In-Memory column store rather than the storage-resident table and, even better, the same SQL plan baseline is used. That was pretty easy! The Optimizer chose a full table scan in both cases, so the same SQL plan baseline was used both times. The INMEMORY annotation for the full table scan is “for your information only”: it tells you that the query scanned some or all of the data for your table via the In-Memory column store, but as far as the Optimizer is concerned it is “just” a full table scan. The INMEMORY keyword does not affect the plan hash value, so the plan still matches the existing SQL plan baseline (above, you can see that the plan hash value is always "3292460164").

Why do I say the INMEMORY keyword indicates that some or all of the data for your table is scanned via the In-Memory column store? Remember that until all of the data belonging to MYSALES has been populated into the In-Memory column store, Oracle will automatically pick up the rest of the data from wherever it resides. That could be from memory (e.g. the buffer cache), from flash, or from disk.

It should be pretty obvious by now that if we decide to remove MYSALES from the In-Memory column store, the query will revert to scanning the storage-resident table and the plan will display “TABLE ACCESS FULL”. 
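For reference, taking the table back out of the column store is simply:

-- Remove the In-Memory attribute from MYSALES
ALTER TABLE mysales NO INMEMORY;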

This example is very simple, but the principle applies to queries that have the same execution plan for In-Memory versus non-In-Memory. What happens if there are execution plan changes and, in particular, if indexes are involved? Take a look at Part 2 to find out.

If you want to try out this example for yourself, the scripts are in GitHub.


Tuesday May 26, 2015

Space Management and Oracle Direct Path Load

Most of you will be familiar with the concept of direct path load and how it’s an efficient way to load large volumes of data into an Oracle database as well as being a great technique to use when moving and transforming data inside the database. It’s easy to use in conjunction with parallel execution too, so you can scale out and use multiple CPUs and database servers if you need to process more data in less time.

Probably less well known is how the Oracle database manages space during direct path load operations. This is understandable because the Oracle database uses a variety of approaches and it has not always been very obvious which one it has chosen. The good news is that Oracle Database 12c, from version 12.1.0.2 onwards, makes this information visible in the SQL execution plan, and it is also possible to understand what’s happening inside earlier releases with a bit of help from this post and some scripts I’ve linked to at the end.

[Read More]

Monday Jul 07, 2014

The Optimizer & the new Oracle Database In-Memory option

I know it's been forever since I've posted a new blog, but I am no longer the Optimizer lady. In case you haven't heard already, I have a new role now as the In-Memory lady (definitely need to work on a better title).

But for this week's In-Memory blog post I got a chance to combine both my old and my new roles when I wrote about how the In-Memory option and the Optimizer interact. And I thought the readers of this blog might be interested in this topic too.

So, if you are interested in seeing how these two fascinating pieces of Oracle technology work together, check out the latest post on the new In-Memory blog.

You should also mark your calendars because the head of the Optimizer development team, Mohamed Zait, will also discuss this topic at Oracle Open World later this year!

Hope to catch up with a lot of you then.

Tuesday Jun 25, 2013

Oracle Database 12c is here!

Oracle Database 12c was officially released today and is now available for download. Along with the software release comes a whole new set of collateral that explains in detail all of the new features and functionality you will find in this release.

The Optimizer page on Oracle.com has all the juicy details about what you can expect from the Optimizer in Oracle Database 12c. Direct links are below.

[Read More]

Thursday May 02, 2013

Upcoming event: Kscope 13

Kscope 13 is the annual conference of the Oracle Development Tools User Group, taking place June 23 - June 27 in New Orleans. This is a great conference for Oracle developers and architects, offering some of the best content by renowned experts.

I am lucky enough to be involved in five sessions this year around the Oracle Optimizer and performance. Below are details on the sessions I will be presenting. I hope you have an opportunity to check out some of these sessions if you plan to attend the conference!

[Read More]

Wednesday Mar 13, 2013

Oracle Open World 2013 Call For Papers!

Believe it or not, it's time once again to start thinking about Oracle OpenWorld! The Oracle OpenWorld 2013 call for papers opens March 13th. Oracle customers and partners are encouraged to submit proposals to present at this year's Oracle OpenWorld conference, which will be held September 22 - 26, 2013 at the Moscone Center in San Francisco. Details and submission guidelines are available on the Oracle OpenWorld Call for Papers web site. The deadline for submissions is April 12th 2013 at 11:59 pm PDT. We look forward to checking out your sessions on the Optimizer, SQL Plan Management, and statistics!

This also means you need to start thinking of new slogans for this year's Optimizer bumper sticker! Details on how to submit an entry will follow shortly.

[Read More]
