How do I compare statistics?

This question came up recently when I was helping a customer migrate a large data warehouse to Oracle Database 11g. Prior to the upgrade, they were using an ESTIMATE_PERCENT of 0.000001, the smallest sample size allowed, when gathering statistics on their larger tables. When I asked why they picked such a tiny sample size, they said it was because they needed statistics to be gathered extremely quickly after their daily load, both at the partition and at the global level.

Since these large tables were partitioned, I thought they would be excellent candidates for incremental statistics. However, in order to use incremental statistics gathering you have to let the ESTIMATE_PERCENT parameter default to AUTO_SAMPLE_SIZE. Although the customer saw the benefit of using incremental statistics, they were not keen on changing the value of ESTIMATE_PERCENT. So I argued that with the new AUTO_SAMPLE_SIZE in Oracle Database 11g they would get statistics that were equivalent to a 100% sample but with the speed of a 10% sample. And since we would be using incremental statistics, we would only have to gather statistics on the freshly loaded partition, and the global statistics (table-level statistics) would be automatically aggregated correctly.

So, with much skepticism, they tried incremental statistics in their test system and were pleasantly surprised at the elapsed time. However, they didn't trust the statistics that were gathered and asked me, "How do I compare the statistics I got with AUTO_SAMPLE_SIZE to the statistics I normally get with an ESTIMATE_PERCENT of 0.000001?"

The answer to that was easy: use DBMS_STATS.DIFF_TABLE_STATS. Although the answer was easy, working out how to use the DBMS_STATS.DIFF_TABLE_STATS functions correctly was not such an easy process. In this post I hope to share some of the gotchas you may encounter using DIFF_TABLE_STATS that are not so obvious from the description of the functions in our documentation or our whitepaper on 'Upgrading to Oracle Database 10g: What to Expect From The Optimizer'. Let's take the SALES table from the SH schema as an example. The SALES table has 28 partitions and 918,843 rows.

First we will gather statistics on the SALES table with their original setting for ESTIMATE_PERCENT, 0.000001.
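As a rough sketch (the full script is attached at the end of this post, so treat the exact parameter list here as illustrative only):

BEGIN
  -- Gather statistics on SH.SALES using the customer's tiny fixed sample size
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'SH',
    tabname          => 'SALES',
    estimate_percent => 0.000001);
END;
/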


These statistics can now be backed up into a newly created stats table.
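Something along these lines, assuming we call the stats table MY_STATS_TAB (the name is just an example):

BEGIN
  -- Create a user statistics table to hold the exported statistics
  DBMS_STATS.CREATE_STAT_TABLE(ownname => 'SH', stattab => 'MY_STATS_TAB');
  -- Back up the current SALES statistics into it
  DBMS_STATS.EXPORT_TABLE_STATS(
    ownname => 'SH',
    tabname => 'SALES',
    stattab => 'MY_STATS_TAB');
END;
/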

Now that we have an export of the statistics from the manually specified ESTIMATE_PERCENT run, let’s re-gather statistics on the SALES table using the default, AUTO_SAMPLE_SIZE.
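For example, simply omitting ESTIMATE_PERCENT lets it default to AUTO_SAMPLE_SIZE:

BEGIN
  -- ESTIMATE_PERCENT defaults to DBMS_STATS.AUTO_SAMPLE_SIZE
  DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SH', tabname => 'SALES');
END;
/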

So, we are now ready to compare the two sets of statistics for the SALES table using the DBMS_STATS.DIFF_TABLE_STATS function. There are actually three versions of this function, depending on where the statistics being compared are located:

  • DBMS_STATS.DIFF_TABLE_STATS_IN_HISTORY
  • DBMS_STATS.DIFF_TABLE_STATS_IN_PENDING
  • DBMS_STATS.DIFF_TABLE_STATS_IN_STATTAB

In this case we will be using the DBMS_STATS.DIFF_TABLE_STATS_IN_STATTAB function. The functions also compare the statistics of the dependent objects (indexes, columns, partitions) and will only display statistics for an object from both sources if the difference between the statistics exceeds a certain threshold (%). The threshold can be specified as an argument to the function; the default value is 10%. The statistics corresponding to the first source will be used as the basis for computing the differential percentage.

The DBMS_STATS.DIFF_TABLE_STATS functions are table functions, so you must use the keyword TABLE when selecting from them; otherwise you will receive an error saying the object does not exist.
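For example, something like the following, assuming the statistics were exported into a stats table called MY_STATS_TAB in the SH schema (again, an illustrative name; the attached script has the exact call):

SELECT report, maxdiffpct
FROM   table(DBMS_STATS.DIFF_TABLE_STATS_IN_STATTAB(
         ownname  => 'SH',
         tabname  => 'SALES',
         stattab1 => 'MY_STATS_TAB'));

Since we supply only one stats table, the second source of the comparison is the set of statistics currently in the dictionary, and the threshold is left at its default of 10%.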


The table function returns a report (a CLOB) and maxdiffpct (a number). In order to display the report correctly you must use the SET LONG command so that enough of the CLOB is fetched for the report to be displayed in full.
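In SQL*Plus that might look like this before running the query above (the exact values are a matter of preference):

set long 100000
set linesize 200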


Now that we know how to generate the report, let's look at what it says.

The report has three sections. It begins with a comparison of the basic table statistics. In this case the table statistics (number of rows, number of blocks, etc.) are the same. The results so far are to be expected, since we can accurately extrapolate the table statistics from a very small sample. The second section of the report examines the column statistics.

Each of the columns where the statistics vary is listed (AMOUNT_SOLD, CUST_ID, TIME_ID) along with a copy of the statistics values from each source. Source A is the STATTAB, which in our case holds the ESTIMATE_PERCENT 0.000001 statistics. Source B is the statistics currently in the dictionary, which in our case is the AUTO_SAMPLE_SIZE set. You will notice quite a significant difference in the statistics, especially in the NDV (number of distinct values) and the minimum and maximum values for each of the columns. If we compare these results with the actual number of distinct values for these columns (below), we see that the statistics reported by source B, the AUTO_SAMPLE_SIZE set, are the most accurate.
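One simple way to check the actual number of distinct values yourself is a straightforward count against the table (a sketch, not necessarily the exact query used in the attached script):

SELECT COUNT(DISTINCT amount_sold) AS amount_sold_ndv,
       COUNT(DISTINCT cust_id)     AS cust_id_ndv,
       COUNT(DISTINCT time_id)     AS time_id_ndv
FROM   sh.sales;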

The report then goes on to list the column statistics differences for each of the partitions. In this section you will see that the problems occur only in the AMOUNT_SOLD and CUST_ID columns. The SALES table is range partitioned on TIME_ID, so there is a limited number of TIME_IDs in each partition; thus the percentage difference at this level is not enough to meet the threshold of 10%.

Finally, the report looks at the index statistics. In this case the index statistics were different, but the difference was not greater than 10%, so the report doesn't show them.

Along with the report the function returns the MAXDIFFPCT. This is the maximum percentage difference between the statistics. These differences can come from the table, column, or index statistics. In this case the MAXDIFFPCT was 96%!

So the statistics were significantly different with AUTO_SAMPLE_SIZE, but the proof is in the pudding. We put the new statistics to the test and found that the Optimizer chose much better execution plans with the new statistics. So much so that they were able to remove a ton of hints from their code that had previously been necessary due to poor statistics. You can't ask for anything better than that!

You can get a copy of the script I used to generate this post here.

+Maria Colgan

Comments:

Nice......

Posted by Zahid Hasan on May 25, 2012 at 05:42 AM PDT #

Thanks Maria for another nice post !

goran

Posted by goran on May 28, 2012 at 10:43 AM PDT #

Nice explanation about statistics collection and estimate statistics.

Posted by David on June 04, 2012 at 04:01 AM PDT #

So is just using AUTO_SAMPLE_SIZE for ESTIMATE_PERCENT enough, or do we also need to turn on the INCREMENTAL feature at the table level as shown below?

EXEC DBMS_STATS.SET_TABLE_PREFS('SH','SALES','INCREMENTAL','TRUE');

-Yasser

Posted by Yasser on June 13, 2012 at 12:29 AM PDT #

Hi Yasser,

To use incremental statistics you need to use AUTO_SAMPLE_SIZE for ESTIMATE_PERCENT, set the DBMS_STATS preference INCREMENTAL to TRUE for the table and ensure the GRANULARITY parameter of the gather stats command contains GLOBAL.

You can set the INCREMENTAL preference using the command you mentioned.
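A rough sketch of the three pieces together (the granularity value shown is just one valid option that contains GLOBAL):

EXEC DBMS_STATS.SET_TABLE_PREFS('SH','SALES','INCREMENTAL','TRUE');

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'SH',
    tabname          => 'SALES',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,  -- or simply let it default
    granularity      => 'GLOBAL AND PARTITION');
END;
/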

Thanks,
Maria

Posted by Maria Colgan on June 13, 2012 at 09:16 AM PDT #

I like this post a lot. I've had trouble convincing some of my customers to go with the auto sample size. Here is some proof.

Any chance you can post the examples in text that could be copied, rather than in an image?

Posted by Rich on June 14, 2012 at 06:58 PM PDT #

Hi Rich,

I have attached the script I used to generate this blog to the end of the post. Now that I know folks would find that useful I will do it for all future posts and will try and back fill my previous post.

Thanks,
Maria

Posted by Maria Colgan on June 15, 2012 at 11:28 AM PDT #

thanks for a wonderful insight.

Posted by bhanu pratap m on August 22, 2012 at 04:36 AM PDT #

As you indicated, "...the proof is in the pudding." I'm a bit curious about the approach chosen to compare plans. How were queries chosen to ensure a good sampling of the queries that are typically executed by an application (literals, binds, multi-join, partitioned (composite too), etc.)?

It would seem that there is the potential for a complete follow-on article to address the explain plan comparisons that were performed (and maybe this has been done already?).

On a side note, is there a canned method to estimate sysaux growth based on synopsis data? I know that there are two scripts that help gauge sysaux sizing but I don't believe that they specifically deal with synopsis data.

Thanks.

Posted by guest on September 28, 2012 at 09:18 AM PDT #

