Insights into Statistics, Query Optimization and the Oracle Optimizer

Why do I have SQL statement plans that change for the worse?

Nigel Bayliss
Product Manager

Execution Plan Changes

I am often asked:

  • Why do my SQL execution plans keep changing, and why are they sometimes so bad?
  • Is there a bug? I have upgraded a database, but some SQL statements perform poorly compared to the previous release.

There are many possible reasons for plan changes. Still, an essential factor to consider is the possibility that one or more cardinality estimates (used during optimization) are inaccurate, even though optimizer statistics are fresh.

Estimating Cost

The optimizer uses statistics to estimate the number of rows that need to be processed during SQL execution. The row count estimates are known as cardinality estimates. Internal optimizer calculations take cardinality and use it to derive cost (the cost will vary depending on the operations used, such as index range scans, table scans, and so on). At SQL statement parse time, the optimizer considers the cost of competing execution plans, and the plan found to have the lowest cost is chosen.
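
If you want to see these estimates for yourself, the Rows column of an execution plan report shows the optimizer's cardinality estimate for each operation, alongside the associated Cost column. Here is a minimal sketch, using the ITEMS table from the example that follows:

explain plan for select * from items where item_type_code = 10;
select * from table(dbms_xplan.display);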

Consider this query:

select * from items where item_type_code = 10;

For the sake of example, we will assume that statistics tell us:

  • The ITEMS table has 10,000 rows

  • The column ITEM_TYPE_CODE is NOT NULL, NUMBER(10)

  • Values in ITEM_TYPE_CODE range between 1 and 1000, and there are 100 distinct values

You will probably assume that ITEM_TYPE_CODE values are evenly distributed with no range or value skew (the optimizer will assume this too, unless there’s a histogram). I am sure that you will figure out (just like the optimizer) that the query is likely to return about 100 rows (i.e., the cardinality is 100).
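
Roughly speaking, the optimizer's arithmetic for an equality predicate on a column with no histogram is:

estimated cardinality = num_rows * (1 / num_distinct) = 10,000 * (1 / 100) = 100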

Now consider this example:

  • The ITEMS table has 10,000 rows

  • Column ITEM_TYPE_CODE is now NOT NULL, VARCHAR2(4)

  • Values in ITEM_TYPE_CODE range between AAAP and ZZZU, there are 100 distinct values, and the average column length is 3.5 characters

What are the cardinalities for the following examples?

select * from items where item_type_code like '%BF%';
select * from items where substr(item_type_code,2,2) = 'ZZ';

Not quite so easy, and it is clear that we need to know more about the data. How can you (or the optimizer) calculate an accurate cardinality estimate without any additional information? In turn, how can you then expect the optimizer to come up with an accurate estimate of cost? I suspect that a mathematician might have scope for some clever tricks in these cases, but we don't even know whether the codes include lower-case or numeric characters.
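
When you suspect an estimate is shaky, a practical check is to compare estimated and actual row counts for a test execution. Here is a sketch using the second query above (the hint and DBMS_XPLAN format are standard; fetch all rows before displaying the plan):

select /*+ gather_plan_statistics */ * from items where substr(item_type_code,2,2) = 'ZZ';
select * from table(dbms_xplan.display_cursor(format => 'ALLSTATS LAST'));

A large gap between the E-Rows (estimated) and A-Rows (actual) columns is a strong sign that the optimizer is working with a poor cardinality estimate.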

Inaccurate Cardinality, Inaccurate Cost

If cardinality estimates are inaccurate, then the calculated costs are likely to be inaccurate too. The resulting execution plan might be suboptimal because it is based on incorrect or missing information. Still, it is essential to remember that this does not mean that a bad execution plan is certain. After all, the optimizer is designed to work around issues like this as much as possible.

Small changes in statistics might tip the balance from one plan to another if the underlying cardinality estimates are not on a solid foundation. In addition, a SQL statement harboring a poor cardinality estimate might perform well in database version X, but badly in version Y. Ironically, this can happen even if cost calculations are better in new version Y than old version X:

  • Consider a cost calculation that is near-perfect, but the cardinality estimates it is using are inaccurate. As a result, costing will be imperfect, and this will sometimes result in a suboptimal plan. In some cases, the suboptimal plan will degrade performance enough to be noticeable at the application level.

  • A new and improved optimizer cost calculation (in a new database release, for example) might deliver a cost estimate different from the old one. The new costing might cause the plan to change. If cardinality estimates are inaccurate, then the plan might be suboptimal. In some cases, the suboptimal plan will degrade performance enough to be noticeable at the application level.

Also, consider the following:

  • If a SQL statement uses bind variables, and if cardinality estimates are not accurate, then the execution plan might be suboptimal for some bind variable values, but not necessarily all. The optimizer includes adaptive cursor sharing and statistics feedback to deal with this. However, it is still possible that you will notice inconsistent performance if suboptimal plans are chosen initially or during the reoptimization process (and these rescue attempts might not be effective in every single case).
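
If you want to check whether these mechanisms have kicked in for a particular statement, V$SQL exposes some useful flags. A sketch (the SQL_ID is a placeholder for your own statement):

select child_number, is_bind_sensitive, is_bind_aware, is_reoptimizable
from   v$sql
where  sql_id = '<your sql_id>';

IS_BIND_SENSITIVE and IS_BIND_AWARE relate to adaptive cursor sharing, and IS_REOPTIMIZABLE (available in recent releases) indicates that statistics feedback will be applied at the next hard parse.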

Fortunately, Oracle has excellent solutions to these issues: for example, you can control plans or stop them from changing using various methods, but this is not the topic for today. Instead, we are going to look at tackling the root cause.

Tackling the Root Cause

If you have a problem SQL statement, I strongly suggest that you always ask yourself, ‘can I improve the cardinality estimates?’

Here are some ways to do that:

  • Ensure that statistics are present and not stale. Gather statistics regularly to keep them fresh. Hopefully, this is obvious.

  • Create histograms (manually or automatically), especially if there is a known range or value skew.

  • Create extended statistics for expressions and correlated columns.

  • Use dynamic statistics (dynamic sampling) at level 3 or more, especially if SQL statements are relatively long-running and have complex predicates (you can target individual queries or sessions; there is no need to use elevated dynamic sampling levels system-wide). A short sketch illustrating several of these techniques follows this list.
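
To make the last three suggestions more concrete, here is a sketch using the ITEMS example (the column, the expression, and the histogram SIZE setting are illustrative; adapt them to your own schema and data):

-- Histogram on a column with value skew
exec dbms_stats.gather_table_stats(null, 'ITEMS', method_opt => 'for columns item_type_code size 254')

-- Extended statistics on an expression used in a predicate, then re-gather
select dbms_stats.create_extended_stats(null, 'ITEMS', '(substr(item_type_code,2,2))') from dual;
exec dbms_stats.gather_table_stats(null, 'ITEMS')

-- Dynamic sampling targeted at an individual query
select /*+ dynamic_sampling(items 3) */ * from items where substr(item_type_code,2,2) = 'ZZ';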

I realize that it is all too easy for me to make these suggestions, but you want to know how to deal with a particular SQL statement quickly. I have much more to say on this topic, and follow-up posts are coming soon. You don’t need to go looking for them now; when available, I will update this post.

Food for Thought

Poor cardinality estimates are not responsible for all SQL performance issues, but they can be a significant contributory factor. If you encounter a SQL performance problem, take a close look at the predicates in the WHERE clause. You will often see that SQL developers have to deal with a data model that has evolved far from its original design. There might be logic to handle non-atomic column values or data with an overloaded meaning (for example, codes that must be treated differently depending on context). As a result, I regularly see columns in WHERE clauses with SUBSTR, CASE, DECODE, or REGEX wrapped around them before being compared to a value. I am not saying that examples like this are always a problem: it is all about context, and I will explain further in later posts.
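
For example, a predicate like the following asks the optimizer to estimate the selectivity of an expression rather than a plain column value (the logic is purely illustrative):

select * from items where decode(substr(item_type_code,1,1), 'Z', 'SPECIAL', 'NORMAL') = 'SPECIAL';

Without extra help, such as an extended statistic on the expression or dynamic sampling, the optimizer has little choice but to fall back on a default guess.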

In my experience, it is too easy to forget what we might be asking the optimizer to estimate. Sometimes additional measures are fully justified and are required to improve cardinality estimates to a level where consistent performance becomes the norm. For this reason, I suggest that you look closely at problem SQL statements and consider the tools that Oracle gives you to improve matters.

The takeaway is that the Oracle Optimizer has some fantastic features to deal with challenging queries. We should also remind ourselves that Oracle Databases all over the world each execute tens or hundreds of thousands of distinct queries without a hitch and without the need for any human intervention.
