Tuesday Aug 25, 2015

Tips on SQL Plan Management and Oracle Database In-Memory - Part 2

In Part 1 of this series of tips on SQL Plan Management (SPM) and Oracle Database In-Memory, I covered what happens if we have a SQL plan baseline for a full table scan query when the table is populated into the In-Memory column store.

In this part I’m going to cover a scenario where a query has more than one SQL plan baseline: 

  • There is a query (called Q2, for short).
  • Q2 queries a table called MYSALES, which is not yet populated into the In-Memory column store.
  • Q2 filters rows in MYSALES using a predicate on the SALE_TYPE column.
  • Data in SALE_TYPE is skewed, so there’s an index and a histogram on this column.
  • Because there is data skew, Q2 has two accepted SQL plan baselines: one with a full table scan and one with an index range scan.

You’ve probably come across this situation many times: the Oracle Optimizer must choose between a full table scan and an index range scan depending on predicate selectivity. The ability to change the execution plan based on the value of bind variables is called adaptive cursor sharing. If you’ve not come across that, then you’ll find it useful to check out the section on this topic in the Database SQL Tuning Guide.

What’s great about SPM is that it allows you to have multiple SQL plan baselines for individual queries, so you're not forced to pick one plan in preference to another. This capability is most relevant in environments where SQL statements use bind variables and there is a good deal of data skew. Queries like this are likely to have their plans affected by Oracle Database In-Memory because in-memory full table scans will have a lower cost than storage-resident table scans. Clearly, the In-Memory column store will shift the point of inflection where a full table scan becomes more efficient than an index range scan. How is this going to work with SPM?

Take a look at the following example. Q2 executes and matches 2 million rows because I picked the value of bind variable “:val” to do just that. The Optimizer chooses a full table scan: 

PLAN_TABLE_OUTPUT
-----------------
SQL_ID  d3u63rk540w0r, child number 1
-------------------------------------
select /* SPM */ count(*),sum(val) from mysales where sale_type = :val

Plan hash value: 3292460164

------------------------------------------------------------------------------
| Id  | Operation          | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |         |       |       |  2475 (100)|          |
|   1 |  SORT AGGREGATE    |         |     1 |    17 |            |          |
|*  2 |   TABLE ACCESS FULL| MYSALES |  2000K|    32M|  2475   (1)| 00:00:01 |
------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

  2 - filter("SALE_TYPE"=:VAL)

Note
-----
  - SQL plan baseline SQL_PLAN_93ct9zmnvtbuhc69cec1f used for this statement
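
As an aside, if you'd like to reproduce this yourself, the test above can be driven from SQL*Plus along these lines (a minimal sketch; the bind value 1 is hypothetical, so pick a SALE_TYPE value that matches a large share of the rows in your copy of MYSALES):

VARIABLE val NUMBER
EXEC :val := 1

SELECT /* SPM */ count(*),sum(val) FROM mysales WHERE sale_type = :val;

-- Display the plan for the last statement executed in this session
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR());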

For the second execution, the bind variable “:val” is set to a value that matches only 20,001 rows. This time the Optimizer chooses an index range scan:

PLAN_TABLE_OUTPUT
-----------------
SQL_ID  d3u63rk540w0r, child number 2
-------------------------------------
select /* SPM */ count(*),sum(val) from mysales where sale_type = :val

Plan hash value: 1266559460

------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |         |       |       |   133 (100)|          |
|   1 |  SORT AGGREGATE                      |         |     1 |    17 |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| MYSALES | 20001 |   332K|   133   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN                  | SI      | 20001 |       |    44   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
  3 - access("SALE_TYPE"=:VAL)

Note
-----
  - SQL plan baseline SQL_PLAN_93ct9zmnvtbuh5d8bf80c used for this statement

As you will have figured out, the Optimizer has calculated that the index is less efficient than a full table scan when Q2 matches a large number of rows (2 million in this case), so we have two viable SQL execution plans for this query. Before I ran the queries above, I accepted two SQL plan baselines for Q2. You can see in the “note” sections above that two different baselines are used (one ending in “80c” and one ending in “c1f”). They can be seen in the dba_sql_plan_baselines view:

SELECT plan_name,sql_text,enabled, accepted 
FROM   dba_sql_plan_baselines
WHERE  sql_text LIKE '%SPM%';

PLAN_NAME                           SQL_TEXT                                ENA ACC
----------------------------------- ----------------------------------      --- ---
SQL_PLAN_93ct9zmnvtbuhc69cec1f      select /* SPM */ count(*),sum(val)      YES YES
                                    from mysales where sale_type = :val           
SQL_PLAN_93ct9zmnvtbuh5d8bf80c      select /* SPM */ count(*),sum(val)      YES YES
                                    from mysales where sale_type = :val
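
By the way, in case you're wondering how the two plans were accepted in the first place: one way to do it is to load them from the cursor cache once each plan has been executed (a sketch, using the SQL_ID from the plan output above; plans loaded manually like this are marked as accepted):

DECLARE
  n PLS_INTEGER;
BEGIN
  n := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => 'd3u63rk540w0r');
  DBMS_OUTPUT.PUT_LINE(n || ' plan(s) loaded');
END;
/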

We’re in good shape here. The Optimizer is adapting the query execution plan to take into account bind variable values and data skew. What’s more, SPM is working with us and not against us because it is not forcing Q2 to use a single SQL execution plan.

What happens if we populate MYSALES into the In-Memory column store? 

-- Mark MYSALES with the In-Memory attribute
ALTER TABLE mysales INMEMORY;

-- Access MYSALES to trigger population into In-Memory column store
SELECT count(*) FROM mysales;

If we execute Q2 to match 2 million rows, the Optimizer continues to choose a full table scan: 

PLAN_TABLE_OUTPUT
-----------------
SQL_ID  d3u63rk540w0r, child number 1
-------------------------------------
select /* SPM */ count(*),sum(val) from mysales where sale_type = :val

Plan hash value: 3292460164

---------------------------------------------------------------------------------------
| Id  | Operation                   | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |         |       |       |   115 (100)|          |
|   1 |  SORT AGGREGATE             |         |     1 |    17 |            |          |
|*  2 |   TABLE ACCESS INMEMORY FULL| MYSALES |  2000K|    32M|   115  (20)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
  2 - inmemory("SALE_TYPE"=:VAL)
      filter("SALE_TYPE"=:VAL)

Note
-----
  - SQL plan baseline SQL_PLAN_93ct9zmnvtbuhc69cec1f used for this statement

The full table scan is now annotated with INMEMORY, so we know that some or all of the data for MYSALES is scanned via the In-Memory column store. The “note” section reports that the same baseline is being used as before (ending in “c1f”). This is good news, and it’s the scenario that was covered in Part 1 of this series.  

What if we executed the query to match 20,001 rows? You can probably guess what’s coming; the Optimizer judges that the In-Memory scan is more efficient than the index range scan: 

PLAN_TABLE_OUTPUT
-----------------
SQL_ID  d3u63rk540w0r, child number 2
-------------------------------------
select /* SPM */ count(*),sum(val) from mysales where sale_type = :val

Plan hash value: 3292460164

---------------------------------------------------------------------------------------
| Id  | Operation                   | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |         |       |       |   115 (100)|          |
|   1 |  SORT AGGREGATE             |         |     1 |    17 |            |          |
|*  2 |   TABLE ACCESS INMEMORY FULL| MYSALES | 20001 |   332K|   115  (20)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
  2 - inmemory("SALE_TYPE"=:VAL)
      filter("SALE_TYPE"=:VAL)

Note
-----
  - SQL plan baseline SQL_PLAN_93ct9zmnvtbuhc69cec1f used for this statement

Since there is a SQL plan baseline that allows a full table scan to be used, Q2 can use this access method straight away and we get immediate benefit from scanning the In-Memory column store!

Hold on a minute! Wasn’t that just a little bit too convenient? I arranged it so that there was a handy full-table-scan SQL plan baseline ready and waiting for when I "flipped the switch" and started using the In-Memory column store. This example might seem a little contrived, but it is a real-world scenario and I chose it to illustrate how SPM works together with both Oracle Database In-Memory and adaptive cursor sharing (and if you want more, there's an earlier blog post on how adaptive cursor sharing interacts with SPM).

If, instead, I had started out with a single baseline that specified an index range scan, then this is the plan that would have been used even after MYSALES was populated into the In-Memory column store (and we would not have had an INMEMORY FULL scan). That’s not a bad thing; it is exactly what plan stability means and it is how SPM is meant to work. In the example above I made use of a couple of SQL execution plans that were validated and accepted before I started using the In-Memory column store. In the more general case, where the Optimizer identifies a brand-new execution plan for use with the In-Memory column store, we might want to validate it before we allow the database to use it in our critical application. How can we do that? Happily, this is what SPM evolution was built for, and it goes all the way back to the initial scenario I mentioned in Part 1. I'll cover the details in Part 3 (coming soon).
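
In the meantime, you can spot any new plans that SPM has captured but not yet accepted with a quick look at the same view we queried above:

SELECT plan_name, enabled, accepted
FROM   dba_sql_plan_baselines
WHERE  accepted = 'NO';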

If you want to try out this example for yourself, the scripts are on GitHub.

Wednesday Aug 12, 2015

Tips on SQL Plan Management and Oracle Database In-Memory - Part 1

If you follow Oracle’s In-Memory blog then you probably came across a post mentioning how you should use SQL Plan Management when you’re upgrading to Oracle Database In-Memory. Whether you have read that post or not, you might be wondering what will happen if you have some SQL plan baselines and you begin to populate the In-Memory column store with the tables used by those baselines. That’s what this post is about. Well, in fact, I’m going to break the topic up into a few posts because (as ever!) there is a little bit of subtlety to cover. Luckily, this will make your life easier rather than more difficult because you can get immediate benefit from In-Memory even if you don’t evolve SQL plan baselines on day one.

When I started to think about this post I thought that I would start with the first scenario that probably comes to mind if you’re familiar with SQL Plan Management (SPM): 
  • The Optimizer comes up with a new execution plan for a SQL statement because something has changed, and Oracle Database In-Memory would be a very good example of that! 
  • If there’s a SQL plan baseline for the statement, the database will use the baseline execution plan and capture the new plan.
  • Where appropriate, the new plan will be validated and accepted using SQL plan evolution. 

I will get to that, but first it’s better to start with a couple of more subtle points. With this information in our back pocket it will be easier to understand (and explain) the more traditional aspects of SQL plan evolution in the context of Oracle Database In-Memory. 

Here, I will cover the following example:
  • There is a table called MYSALES that’s not yet populated into the In-Memory column store. 
  • A query (called “Q1”) includes a full table scan of MYSALES. There is no index on the table that’s useful to Q1. 
  • Q1 has an active SQL plan baseline.
  • MYSALES is subsequently populated into the In-Memory column store.

Let’s take a look at Q1 and its SQL execution plan before populating MYSALES into the In-Memory column store. Take note of the plan hash value and the SQL plan baseline name in the output; I'll explain their significance further down...

SQL_ID  4ss4zbb813250, child number 0
-------------------------------------
SELECT /* SPM */ COUNT(*) FROM   mysales WHERE  val = 'X'

Plan hash value: 3292460164

------------------------------------------------------------------------------
| Id  | Operation          | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |         |       |       |    69 (100)|          |
|   1 |  SORT AGGREGATE    |         |     1 |     2 |            |          |
|*  2 |   TABLE ACCESS FULL| MYSALES | 99991 |   195K|    69   (2)| 00:00:01 |
------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
  2 - filter("VAL"='X')

Note
-----
  - SQL plan baseline SQL_PLAN_7469nmnn7nsu3c69cec1f used for this statement

Q1 performs a full table scan of MYSALES. The "note" section makes it clear that a SQL plan baseline is used. This is what that looks like:

SELECT PLAN_TABLE_OUTPUT
FROM   V$SQL s, DBA_SQL_PLAN_BASELINES b,
      TABLE(
        DBMS_XPLAN.DISPLAY_SQL_PLAN_BASELINE(b.sql_handle,b.plan_name,'basic')
      ) t
WHERE  s.EXACT_MATCHING_SIGNATURE=b.SIGNATURE
AND    b.PLAN_NAME=s.SQL_PLAN_BASELINE
AND    s.SQL_ID='4ss4zbb813250';

--------------------------------------------------------------------------------
SQL handle: SQL_7219349d287a6343
SQL text: SELECT /* SPM */ COUNT(*) FROM   mysales WHERE  val = 'X'
--------------------------------------------------------------------------------

--------------------------------------------------------------------------------
Plan name: SQL_PLAN_7469nmnn7nsu3c69cec1f         Plan id: 3332172831
Enabled: YES     Fixed: NO      Accepted: YES     Origin: AUTO-CAPTURE
Plan rows: From dictionary
--------------------------------------------------------------------------------

Plan hash value: 3292460164

--------------------------------------
| Id  | Operation          | Name    |
--------------------------------------
|   0 | SELECT STATEMENT   |         |
|   1 |  SORT AGGREGATE    |         |
|   2 |   TABLE ACCESS FULL| MYSALES |
--------------------------------------

What happens if MYSALES is now populated into the In-Memory column store? 

-- Mark MYSALES with the In-Memory attribute
ALTER TABLE mysales INMEMORY;

-- Access MYSALES to trigger population into In-Memory column store
SELECT count(*) FROM mysales;

Let’s rerun our query and examine the execution plan:

SQL_ID  4ss4zbb813250, child number 1
-------------------------------------
SELECT /* SPM */ COUNT(*) FROM   mysales WHERE  val = 'X'

Plan hash value: 3292460164

---------------------------------------------------------------------------------------
| Id  | Operation                   | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |         |       |       |     3 (100)|          |
|   1 |  SORT AGGREGATE             |         |     1 |     2 |            |          |
|*  2 |   TABLE ACCESS INMEMORY FULL| MYSALES |   100K|   195K|     3   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - inmemory("VAL"='X')
       filter("VAL"='X')

Note
-----
   - SQL plan baseline SQL_PLAN_7469nmnn7nsu3c69cec1f used for this statement

There is still a full table scan, but this time the query reads data for MYSALES via the In-Memory column store rather than from the storage-resident table and, even better, the same SQL plan baseline is used. That was pretty easy! The Optimizer chose a full table scan in both cases, so the same SQL plan baseline was used in both cases. The INMEMORY annotation on the full table scan is “for your information only”; it tells you that the query scanned some or all of the data for your table via the In-Memory column store, but as far as the Optimizer is concerned it is “just” a full table scan. The INMEMORY keyword does not affect the plan hash value, so the plan still matches the existing SQL plan baseline (above, you can see that the plan hash value is always "3292460164").
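
If you want to see this for yourself, a quick look at V$SQL confirms it; both child cursors report the same plan hash value (and the same baseline) even though only one of them scanned the column store:

SELECT child_number, plan_hash_value, sql_plan_baseline
FROM   v$sql
WHERE  sql_id = '4ss4zbb813250';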

Why do I say the INMEMORY keyword indicates that some or all of the data for your table is scanned via the In-Memory column store? Remember that until all of the data belonging to MYSALES has been populated into the In-Memory column store, Oracle will automatically pick up the rest of the data from wherever it resides: memory (e.g. the buffer cache), flash, or disk.
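
If you want to check how far population has progressed at any point in time, V$IM_SEGMENTS will tell you; BYTES_NOT_POPULATED drops to zero once the table is fully populated:

SELECT segment_name, populate_status, bytes_not_populated
FROM   v$im_segments
WHERE  segment_name = 'MYSALES';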

It should be pretty obvious by now that if we decide to remove MYSALES from the In-Memory column store, the query will revert to scanning the storage-resident table and the plan will display “TABLE ACCESS FULL”. 
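
For completeness, removing the table from the column store is just a matter of clearing the attribute:

-- Remove MYSALES from the In-Memory column store
ALTER TABLE mysales NO INMEMORY;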

This example is very simple, but the principle applies to queries that have the same execution plan with and without the In-Memory column store. What happens if there are execution plan changes and, in particular, if indexes are involved? Start by looking at Part 2.

If you want to try out this example for yourself, the scripts are on GitHub.


Wednesday Jun 24, 2015

What you need to know about SQL Plan Management and Auto Capture

SQL Plan Management (SPM) is an Oracle database feature that allows you to establish a set of SQL execution plans that will be used even if the database is subject to changes that would otherwise cause execution plan changes to occur. For example, you might have an end-of-day batch run that operates in a business context where there are extreme peaks and troughs in daily volume, or perhaps you are upgrading a database and want to be sure that plans are carried over (at least initially). You do not have to fix execution plans in stone with SPM; you can use plan evolution to automate the process of finding improved plans, improving performance in a controlled way and at your own pace. If you’re not familiar with SPM, a very good place to start is to take a look at Maria Colgan’s four-part blog post on the subject. It gives you all the tools you need to get started.

If you are using SPM in Oracle Database 11gR2 or 12c, or if you are considering whether you should use it, then this blog post is for you. I decided to publish this post because I recently encountered a couple of environments that ran into, let’s say, “difficulties” with SPM when capturing SQL plan baselines automatically and continuously over a very long period of time (more than a year, in fact). I’d like to give you a few pointers to help you avoid running into the same problems, and to explain why automatic SQL plan baseline capture was never intended to be used in that way.
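
For reference, automatic capture and baseline use are controlled by two initialization parameters (a sketch; automatic capture is off by default, while baseline use is on by default):

-- Capture a baseline for any repeatable SQL statement (off by default)
ALTER SYSTEM SET optimizer_capture_sql_plan_baselines = TRUE;

-- Use accepted plans in existing baselines (on by default)
ALTER SYSTEM SET optimizer_use_sql_plan_baselines = TRUE;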

[Read More]

Tuesday May 26, 2015

Space Management and Oracle Direct Path Load

Most of you will be familiar with the concept of direct path load and how it’s an efficient way to load large volumes of data into an Oracle database as well as being a great technique to use when moving and transforming data inside the database. It’s easy to use in conjunction with parallel execution too, so you can scale out and use multiple CPUs and database servers if you need to process more data in less time.
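
For example, inside the database a direct path load is typically just an APPEND-hinted insert (a sketch; SALES_HISTORY and SALES_STAGE are hypothetical tables):

-- Enable parallel DML for the session, then load with direct path
ALTER SESSION ENABLE PARALLEL DML;

INSERT /*+ APPEND PARALLEL(4) */ INTO sales_history
SELECT * FROM sales_stage;

COMMIT;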

Probably less well known is how the Oracle database manages space during direct path load operations. This is understandable because the Oracle database uses a variety of approaches and it has not always been very obvious which one it has chosen. The good news is that Oracle Database 12c from version 12.1.0.2 onwards makes this information visible in the SQL execution plan and it is also possible to understand what’s happening inside earlier releases with a bit of help from this post and some scripts I’ve linked to at the end.

[Read More]

Monday Dec 19, 2011

Why was the RULE hint ignored?

Recently I got a call from a customer that had just upgraded to Oracle Database 11.2.0.2 and was panicking because they thought the Rule Based Optimizer (RBO) had been removed from Oracle Database 11g. The reason they thought this was that when they added the RULE hint to a SQL statement they still got a Cost Based Optimizer (CBO) plan.

[Read More]

Thursday Oct 20, 2011

Optimizer Technical Papers

We recently got a comment about our paper on adaptive cursor sharing and cardinality feedback, in which the commenter wished for more technical papers from the optimizer team. We've actually published papers on a few different topics over the years, so we wanted to summarize and post links to them here. Unfortunately they are not all available online for free, but where possible, we've posted links to the full text.

Cost-Based Query Transformation in Oracle
Rafi Ahmed, Allison W. Lee, Andrew Witkowski, Dinesh Das, Hong Su, Mohamed Zait, Thierry Cruanes: Cost-Based Query Transformation in Oracle. VLDB 2006: 1026-1036
This paper talks about many of the query transformations in Oracle 11g which we have discussed here before, including join elimination, view merging, subquery unnesting, join predicate pushdown, join factorization, plus some others that we have not covered here before.  It also describes the cost-based framework itself, some of the challenges of transforming queries in a cost-based manner, and presents performance results for some experiments on a query workload from the Oracle E-Business Suite.

Optimizer Plan Change Management (full text)
Mohamed Ziauddin, Dinesh Das, Hong Su, Yali Zhu, Khaled Yagoub: Optimizer plan change management: improved stability and performance in Oracle 11g. PVLDB 1(2): 1346-1355 (2008)
This describes SQL plan management, which we discussed on here in a four-part SPM series.  The paper gives a bit more detail about the internals of SPM, plus analyzes the behavior of SPM on a large query workload from Oracle E-Business Suite.

Closing the Query Processing Loop in Oracle 11g (full text)
Allison W. Lee, Mohamed Zait: Closing the query processing loop in Oracle 11g. PVLDB 1(2): 1368-1378 (2008)
This paper discusses adaptive cursor sharing and cardinality feedback, plus the general concept of using feedback from execution to improve plan selection.  We've covered both of these topics on here before, but the paper also includes an analysis of the performance impact of the two features on a query workload from Oracle E-Business Suite.

Efficient and scalable statistics gathering for large databases in Oracle 11g
Sunil Chakkappen, Thierry Cruanes, Benoît Dageville, Linan Jiang, Uri Shaft, Hong Su, Mohamed Zait: Efficient and scalable statistics gathering for large databases in Oracle 11g. SIGMOD Conference 2008: 1053-1064
This paper discusses two major improvements to statistics gathering in 11g -- approximate NDV and incremental statistics gathering.  We've discussed both of these topics here before, but the paper has some additional details of the algorithms, plus results of experiments on a few different data sets.

Enhanced Subquery Optimizations in Oracle, VLDB 2009 (full text)
Srikanth Bellamkonda, Rafi Ahmed, Andrew Witkowski, Angela Amor, Mohamed Zait, Chun Chieh Lin: Enhanced Subquery Optimizations in Oracle. PVLDB 2(2): 1366-1377 (2009)
This paper talks about several of the newer subquery optimizations in Oracle.  We talked about basic subquery unnesting in two posts here a while ago, and alluded to some of the other subquery transformations we perform -- this paper talks about all of those.  It also talks about a new form of join elimination which we introduced in 11gR2.  Nearly all of the topics in this paper are NOT covered here on the blog yet, but we hope to fix that soon!

For future reference, we will be adding a new section on the right side of the page called "Optimizer Technical Papers" with links to all of these papers.

Tuesday May 24, 2011

Cardinality Feedback

Cardinality feedback was introduced in Oracle Database 11gR2. The purpose of this feature is to automatically improve plans for queries that are executed repeatedly, for which the optimizer does not estimate cardinalities in the plan properly. The optimizer may misestimate cardinalities for a variety of reasons, such as missing or inaccurate statistics, or complex predicates. Whatever the reason for the misestimate, cardinality feedback may be able to help. [Read More]

Monday May 09, 2011

Explain the Explain Plan white paper finally published!

New Oracle Optimizer white paper, Explain the Explain Plan, published on OTN.

[Read More]

Monday Mar 28, 2011

What happened to the Optimizer white papers on OTN?

We have gotten a lot of questions recently about what happened to the Optimizer white papers that used to be on OTN. The white papers are still there but the URL has changed slightly, which is causing the problem. Here are the new URLs for the most recent Optimizer white papers:

SQL Plan Management in Oracle Database 11g

Upgrading from Oracle Database 9i to 10g: What to expect from the Optimizer

Upgrading from Oracle Database 10g to 11g: What to expect from the Optimizer

Monday Aug 16, 2010

Dynamic sampling and its impact on the Optimizer

Dynamic sampling (DS) was introduced in Oracle Database 9i Release 2 to improve the optimizer's ability to generate good execution plans. The most common misconception is that DS can be used as a substitute for optimizer statistics. The goal of DS is to augment the optimizer statistics; it is used when regular statistics are not sufficient to get good quality cardinality estimates.
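
For example, DS can be requested for an individual query with the DYNAMIC_SAMPLING hint (a sketch; the SALES table and predicate are hypothetical, and the level ranges from 0 to 10):

SELECT /*+ dynamic_sampling(s 4) */ COUNT(*)
FROM   sales s
WHERE  cust_id = 100 AND channel_id = 3;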

So how and when will DS be used?

[Read More]

Monday Aug 02, 2010

We have moved!

You might have been wondering why things had gone so quiet on the Optimizer development team's blog, Optimizer Magic, over the last few months. Well, the blog has moved to blogs.oracle.com/optimizer. All of the old articles have moved too and we plan to be a lot more active at our new home, with at least one new post every month.

Monday Dec 21, 2009

Upgrading from 9i to 11g and the implicit migration from RBO

Now that Oracle Database 11g Release 2 is out, more and more folks are considering upgrading to 11g. However, if you are currently on Oracle 9i then you will have to tackle both the upgrade to a new release and the migration from the Rule-Based Optimizer (RBO) to the Cost-Based Optimizer (CBO). The RBO was de-supported in Oracle Database 10g, so in Oracle Database 11g you must use the CBO. Thanks to SQL Plan Management (SPM), originally discussed in our January post, you can handle the upgrade and the migration with ease. By loading the original RBO plans into SPM you can ensure the Optimizer won't change the execution plans during an upgrade and the implicit migration from RBO to CBO. In fact, there are two possible approaches you can take.
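
To give you a flavour of one of them: once the 9i plans have made their way into a SQL Tuning Set on the upgraded system, loading them into SPM is a one-liner (a sketch; the SQL Tuning Set name MY_9I_STS is hypothetical):

DECLARE
  n PLS_INTEGER;
BEGIN
  -- Plans loaded into SPM this way are marked as accepted
  n := DBMS_SPM.LOAD_PLANS_FROM_SQLSET(sqlset_name => 'MY_9I_STS');
END;
/

[Read More]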

Thursday Jun 26, 2008

Why are some of the tables in my query missing from the plan?

In 10gR2, we introduced a new transformation, table elimination (alternately called "join elimination"), which removes redundant tables from a query. A table is redundant if its columns are only referenced in join predicates, and it is guaranteed that those joins neither filter nor expand the resulting rows. There are several cases where Oracle will eliminate a redundant table. We will discuss each case in turn.
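
As a taste of what's to come, here is a sketch of the simplest case (SALES and CUSTOMERS are hypothetical tables): if CUSTOMERS.CUST_ID is the primary key and SALES.CUST_ID is a NOT NULL foreign key referencing it, the join below neither filters nor expands the SALES rows, and no other CUSTOMERS column is referenced, so the Optimizer can remove CUSTOMERS from the plan entirely:

SELECT s.order_id, s.amount
FROM   sales s, customers c
WHERE  s.cust_id = c.cust_id;

[Read More]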

Wednesday Feb 27, 2008

What to expect from the Optimizer when upgrading from Oracle Database 9i to 10g

One of the most daunting activities a DBA can undertake is upgrading the database to a new version. Having to comprehend all of the new features and to deal with potential plan changes can be overwhelming. In order to help DBAs upgrade from Oracle Database 9i to 10g, a new white paper called "Upgrading from Oracle Database 9i to 10g: What to expect from the Optimizer" has recently been posted on Oracle Technology Network (OTN). This paper aims to explain in detail what to expect from the CBO when you upgrade from Oracle Database 9i to 10g and describes what steps you should take before and after the upgrade to minimize any potential SQL regressions. This is a must read for any DBA planning on upgrading from 9i to 10g in the near future!

Tuesday Dec 11, 2007

Outerjoins in Oracle

Since release 6, Oracle has supported a restricted form of left outerjoin, which uses Oracle-specific syntax. In 9i, Oracle introduced support for ANSI SQL 92/99 syntax for inner joins and various types of outerjoin. The Oracle syntax for left outerjoin and that of ANSI SQL 92/99 are not equivalent, as the latter is more expressive.

There appears to be some confusion about equivalence between ANSI outer join and Oracle outer join syntax. The following examples explain the equivalences and non-equivalences of these two syntaxes.
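
To set the scene, here are the two syntaxes side by side (a sketch using the classic EMP and DEPT demo tables):

-- Oracle syntax: the (+) marks the side that may be null-extended,
-- so this preserves every row of EMP
SELECT e.ename, d.dname
FROM   emp e, dept d
WHERE  e.deptno = d.deptno(+);

-- The equivalent ANSI SQL 92/99 syntax
SELECT e.ename, d.dname
FROM   emp e LEFT OUTER JOIN dept d ON (e.deptno = d.deptno);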

[Read More]
About

The Oracle Optimizer blog is written by members of the Optimizer development team. The goal of this blog is to provide an insight into the workings of the Optimizer and the statistics it relies on. The views expressed on this blog are our own and do not necessarily reflect the views of Oracle and its affiliates. The views and opinions expressed by visitors on this blog are theirs solely and may not reflect ours.
