Thursday Apr 28, 2016

Incremental Statistics Collection in Oracle 12.1.0.2 - A True Story

Recently I came across a really interesting customer case in the UK dealing with Incremental Statistics Collection issues related to an upgrade to Oracle Database 12.1.0.2.

This is the follow-up blog post to: 

A while back I already blogged about Incremental Statistics collection in Oracle Database 12.1.0.2: 

And you'll find more information in our documentation and in posts by our optimizer folks:  

The Basics

The idea of Incremental Statistics Collection is simply to save time and resources when gathering statistics for partitioned tables in order to update the global stats. In Oracle Database 12c we added the very important features of: 

  • Incremental stats now work with partition exchange as well
  • "Changed" partitions won't be eligible for new stats generation until a certain stale percentage (default: 10%) has been reached - this has to be enabled and can be tweaked
    • SQL> exec DBMS_STATS.SET_DATABASE_PREFS('INCREMENTAL_STALENESS','USE_STALE_PERCENT');
    • SQL> exec DBMS_STATS.SET_DATABASE_PREFS('STALE_PERCENT','12');

Furthermore we always recommend to:  

  • Not enable incremental stats collection globally but only for specific tables - otherwise the footprint of the synopses on disk can grow fairly large. The biggest footprint I've seen so far was almost 1TB in size in the SYSAUX tablespace
  • Enable it mostly for range-partitioned tables where only a few partitions undergo DML changes (see the short sketch right after this list)
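
A minimal sketch of enabling incremental statistics for one specific partitioned table (SH.SALES is used here purely as an example - substitute your own table), together with a quick check of the resulting preference:

SQL> exec DBMS_STATS.SET_TABLE_PREFS('SH','SALES','INCREMENTAL','TRUE');
SQL> select DBMS_STATS.GET_PREFS('INCREMENTAL','SH','SALES') from dual;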

The Case

Actually the synopsis table in this particular case contained "only" 300GB of data. And as the starting point was already Oracle Database 11.2.0.3, just a change from range-hash to list-hash partitioning would happen during the upgrade. As this happens via metadata swapping, the impact should have been small.

But the issue that came up after the upgrade had nothing to do with this change in the partitioning layout.

Issue No.1

During the maintenance window the incremental stats job did not finish - and this statement caused plenty of trouble:

delete from sys.wri$_optstat_synopsis$ where bo# = :tobjn and group# in (select * from table(:groups))

Not completing this statement within the 4 hours of the default maintenance window led to a rollback of the delete - and its rollback alone took 14 hours. It turned out that the delete has to happen (and complete) before the regathering of stats could start. 

I did recommend:
patch 21498770: AUTOMATIC INCREMENTAL STATISTICS JOB TAKING MORE TIME ON 12.1.0.2  (see also MOS Note:2041541.1 - Gather_Database_Stats_Job_Proc Taking More Time in 12.1.0.2 Than 11.2.0.4)

and the customer requested:
Patch 22893653: MERGE REQUEST ON TOP OF DATABASE PSU 12.1.0.2.160119 FOR BUGS 19450139 20807398
on top of their January 2016 PSU - the merge included the patch I mentioned.

Besides that, another issue was discovered.

Issue No.2

The daily purge of statistics didn't really work on large synopsis tables, as the default degree of parallelism introduced with Oracle 12c is derived from the number of blocks of the synopsis table - a bigger table means a higher parallel degree for the purge. It ended up with a PARALLEL hint of 60 - and that was counterproductive. Once the purge was started manually in serial mode or with a low DOP, it completed in less than 1 minute.
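
For reference, a minimal sketch of how such a manual purge can be invoked - purging everything older than 10 days here; the cutoff timestamp is of course your choice:

SQL> exec DBMS_STATS.PURGE_STATS(SYSDATE - 10);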

With a trace similar to this:

set serveroutput on;
EXEC dbms_output.enable(999999999);
-- switch DBMS_STATS tracing on (trace level 1+4)
EXEC dbms_stats.set_global_prefs('trace',1+4);
EXEC dbms_stats.gather_table_stats(ownname=>'&TABLE_OWNER',tabname=>'&TABLE_NAME');
-- switch DBMS_STATS tracing off again
EXEC dbms_stats.set_global_prefs('trace',0);

the issues could be identified as: 

Bug 21258096 - UNNECESSARY INCREMENTAL PARTITION GATHERS/HISTOGRAM REGATHERS

The customer requested another merge patch 22926433 which contains the following fixes:

19450139: KN:LNX:PERFORMANCE ISSUE WHEN RUNNING GATHER TABLE STATS WITH INCREMENTAL STATS
20807398: ORA-00600 [KGL-HASH-COLLISION] WITH FIX TO BUG 20465582
21258096: UNNECESSARY INCREMENTAL PARTITION GATHERS/HISTOGRAM REGATHERS
21498770: AUTOMATIC INCREMENTAL STATISTICS JOB TAKING MORE TIME ON 12.1.0.2

Finally the customer agreed with Support's recommendation to truncate the two synopsis tables, WRI$_OPTSTAT_SYNOPSIS_HEAD$ and WRI$_OPTSTAT_SYNOPSIS$, and regathered incremental statistics the following weekend. Of course they validated this action plan on their performance testing environment first - with the merge patch applied - and it had the desired effect and solved the issue.

Incremental statistics gathering now works as expected, and the job fits into the maintenance window.

Lessons Learned

Actually Oracle Support released a very helpful and important note just a few weeks ago (too late for this customer): 

It not only contains links to the patches for the issues the customer hit here, but also a long list for Oracle 11.2.0.4. 

Another MOS Note worth mentioning here: 

But these were not all the issues the customer faced - so I may write another blog post on this within the next few days.

--Mike

PS. All credits go to David Butler and Rob Dawley - thanks for your hard work, sorry for all the inconvenience - and especially thanks for writing it all together and forwarding it to me!!!

Wednesday Apr 27, 2016

Incremental Statistics Collection in Oracle 12.1.0.2 - Upgrade Pitfalls

A while back I already blogged about Incremental Statistics collection in Oracle Database 12.1.0.2: 

And you'll find more information in our documentation and in posts by our optimizer folks:  

And you may read about a related real-world customer example in this follow-up blog post ...

Database Upgrade

It is important to know that during a database upgrade the underlying tables containing the synopses for incremental stats collection may get reorganized. And depending on the amount of data, this can take a while.

The largest synopsis tables I have seen so far were almost 1TB in size, at a financial customer in Europe. But I have seen sizes around 300GB quite often in the past months.

What happens during the database upgrade? 

Incremental Statistics Collection got introduced with Oracle Database 11.1.0.6 and improved from release to release. But during a database upgrade a reorganization of the synopsis table can happen.

  • Upgrade from Oracle 11.1.0.6/7 or 11.2.0.1 to Oracle 11.2.0.2/3/4:
    • Restructuring of WRI$_OPTSTAT_SYNOPSIS$ to use range-hash partitioning 
    • Most data movement will happen here
    • As two synopsis tables exist for the interim period, this will consume twice the space of the synopsis table during the movement

  • Upgrade from Oracle 11.2.0.2/3/4 to Oracle 12.1.0.x:
    • Restructuring of WRI$_OPTSTAT_SYNOPSIS$ from range-hash partitioning to list-hash partitioning
    • There is little data movement in this case as the move happens with the help of metadata swapping

Which symptoms may you see?

Actually very simple and obvious symptoms:
Phase 1 of the parallel upgrade to Oracle Database 12c takes unusually long. It usually completes within a few minutes. But in those cases it can take literally hours.

If that happens, check your catupgrd0.log and watch out for the long-running statements. It does not necessarily mean that a huge synopsis table is the cause. For instance, one of my German reference customers, DVAG, had leftovers in SYSAUX because of bugs in earlier releases they had worked with. 

But if you spot such results (quoting a colleague here):

"The table WRI$_OPTSTAT_SYNOPSIS$ has 20420 partitions, 344618 subpartitions and 921207 MB size. [..] This transformation step lasts for 6,5 hours, so the whole upgrade process duration has an important impact from this step." 

then you should be alerted. 

How can you check this upfront? 

We haven't included a check in the preupgrd.sql yet. But the following queries will tell you whether you may see issues - be alerted when you get larger numbers as results: 

  • How many tables have incremental stats on?
    SQL> select count(distinct bo#) from sys.wri$_optstat_synopsis_head$;

  • How many partitions does your WRI$_OPTSTAT_SYNOPSIS$ have?
    SQL> select partition_count from dba_part_tables where table_name='WRI$_OPTSTAT_SYNOPSIS$';

  • How large is your synopsis table?
    SQL> select sum(bytes/(1024*1024)) "MB" from dba_segments where segment_name='WRI$_OPTSTAT_SYNOPSIS$';

  • Which tables have incremental stats switched ON?
    SQL> select u.name "OWNER", o.name "TABLE_NAME", p.valchar
         from sys.OPTSTAT_USER_PREFS$ p
         inner join sys.obj$ o on p.obj#=o.obj#
         inner join sys.user$ u on o.owner#=u.user#
         where p.PNAME = 'INCREMENTAL';

  • Are there synopses for tables which don't exist anymore?
    SQL> select distinct h.bo# from sys.wri$_optstat_synopsis_head$ h where not exists (select 1 from sys.tab$ t where t.obj# = h.bo#);

Especially a large number of monitored tables and a synopsis size of tens or hundreds of GB indicate that you should plan for a longer upgrade duration.

How do you cure this?

Support sometimes points to MOS Note:1055547.1 - SYSAUX Grows Because Optimizer Stats History is Not Purged and asks for a manual purge of stats, for instance:

begin
  -- purge the optimizer statistics history day by day, oldest day first
  for i in reverse 1..31
  loop
    dbms_stats.purge_stats(sysdate-i);
  end loop;
end;
/

But this won't clean up the synopsis tables - it only purges the history of object statistics. And it may create quite some noise in your UNDO. So in any case you may be better off setting your stats history retention to something like 10 days instead of the default of 31 days.
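
A minimal sketch for checking and lowering the statistics history retention as suggested above:

SQL> select DBMS_STATS.GET_STATS_HISTORY_RETENTION from dual;
SQL> exec DBMS_STATS.ALTER_STATS_HISTORY_RETENTION(10);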

First of all you have to make sure that this patch is applied to your target before the upgrade - it adds parallel index capabilities which speed up the rebuild a lot: 

Be aware:
Truncating WRI$_OPTSTAT_SYNOPSIS$ and WRI$_OPTSTAT_SYNOPSIS_HEAD$ is strictly not recommended. If you plan to do it the hard way, please check back with Oracle Support for their approval first.

Further Information?

Please read on here about a real world customer example ... 

 


--Mike

Tuesday Mar 01, 2016

Differences between Automatic Statistics Gathering job and GATHER_SCHEMA_STATS

Recently a customer raised a question whether there are differences between the Automatic Statistics Gathering job and a manual creation of stats via the GATHER_SCHEMA_STATS procedure.

The results in performance were quite interesting. Performance after an upgrade from Oracle Database 11.2.0.3 to Oracle Database 11.2.0.4 was not good when the automatic stats job was used. But performance improved significantly when schema stats were created manually - with the downside of taking more resources during the gathering.

Is the Automatic Stats Gathering job enabled?

That question can be answered quite easily. There's a very good MOS Note:1233203.1 - FAQ: Automatic Statistics Collection displaying this query:

SELECT CLIENT_NAME, STATUS FROM DBA_AUTOTASK_CLIENT WHERE CLIENT_NAME='auto optimizer stats collection';

The MOS Note also has the code to enable (or disable) the job.
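
For reference, a sketch of the DBMS_AUTO_TASK_ADMIN calls typically used for this:

SQL> exec DBMS_AUTO_TASK_ADMIN.DISABLE(client_name => 'auto optimizer stats collection', operation => NULL, window_name => NULL);
SQL> exec DBMS_AUTO_TASK_ADMIN.ENABLE(client_name => 'auto optimizer stats collection', operation => NULL, window_name => NULL);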

Which parameters/settings are used?

That question is a bit more tricky as the Note says: "The automatic statistics-gathering job uses the default parameter values for the DBMS_STATS procedures". But how do I display them?

The following script will display the parameters being used during the Automatic Statistics Gathering:

SET ECHO OFF
SET TERMOUT ON
SET SERVEROUTPUT ON
SET TIMING OFF
DECLARE
   v1  varchar2(100);
   v2  varchar2(100);
   v3  varchar2(100);
   v4  varchar2(100);
   v5  varchar2(100);
   v6  varchar2(100);
   v7  varchar2(100);
   v8  varchar2(100);
   v9  varchar2(100);
   v10 varchar2(100);        
BEGIN
   dbms_output.put_line('Automatic Stats Gathering Job - Parameters');
   dbms_output.put_line('==========================================');
   v1 := dbms_stats.get_prefs('AUTOSTATS_TARGET');
   dbms_output.put_line(' AUTOSTATS_TARGET:  ' || v1);
   v2 := dbms_stats.get_prefs('CASCADE');
   dbms_output.put_line(' CASCADE:           ' || v2);
   v3 := dbms_stats.get_prefs('DEGREE');
   dbms_output.put_line(' DEGREE:            ' || v3);
   v4 := dbms_stats.get_prefs('ESTIMATE_PERCENT');
   dbms_output.put_line(' ESTIMATE_PERCENT:  ' || v4);
   v5 := dbms_stats.get_prefs('METHOD_OPT');
   dbms_output.put_line(' METHOD_OPT:        ' || v5);
   v6 := dbms_stats.get_prefs('NO_INVALIDATE');
   dbms_output.put_line(' NO_INVALIDATE:     ' || v6);
   v7 := dbms_stats.get_prefs('GRANULARITY');
   dbms_output.put_line(' GRANULARITY:       ' || v7);
   v8 := dbms_stats.get_prefs('PUBLISH');
   dbms_output.put_line(' PUBLISH:           ' || v8);
   v9 := dbms_stats.get_prefs('INCREMENTAL');
   dbms_output.put_line(' INCREMENTAL:       ' || v9);
   v10:= dbms_stats.get_prefs('STALE_PERCENT');
   dbms_output.put_line(' STALE_PERCENT:     ' || v10);
END;
/

The settings of the DBMS_STATS.GATHER_SCHEMA_STATS procedure are documented:
https://docs.oracle.com/database/121/ARPLS/d_stats.htm#ARPLS68577 

When you compare the two you'll see that the settings/defaults are identical. 

But what is the difference between these two?

Both activities use the same parameters. So the stats will look the same - IF they get created. The real difference between the Automatic Statistics Gathering job and a manual invocation of GATHER_SCHEMA_STATS is that the latter will refresh ALL statistics whereas the Automatic Statistics Gathering job will refresh only statistics on objects where statistics are missing or marked as STALE.
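
If you want a manual run to mimic the automatic job and refresh only stale or missing statistics, the documented OPTIONS parameter of GATHER_SCHEMA_STATS can do that. A small sketch using the TEST1 schema from the example below - the first call refreshes everything, the second only objects with stale or missing statistics:

SQL> exec DBMS_STATS.GATHER_SCHEMA_STATS('TEST1');
SQL> exec DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'TEST1', options => 'GATHER AUTO');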

The same behavior appears when you compare the recommendation to gather dictionary statistics before the upgrade by using DBMS_STATS.GATHER_DICTIONARY_STATS versus a DBMS_STATS.GATHER_SCHEMA_STATS('SYS') call. The latter will refresh all statistics whereas the first one will take fewer resources but refresh only STALE and missing statistics.

A simple example

This script is kept as simple as possible.

  • It creates a test user
  • It creates two tables within this user - tablespace USERS
  • It inserts and updates information in the two tables
  • It flushes out the monitoring information (how many DMLs were run?)
  • It gathers stats on only one table to verify that STALE is working as intended
  • It kicks off the automatic stats gathering job
  • It kicks off the schema stats gathering call
  • It compares results before/after in the stats history table 

set timing on
set serverout on
set echo on
set termout on
column table_name Format a5
column owner      Format a6
column stale_stats Format a4
column last_analyzed Format a15
column sample_size format 9999999
drop user test1 cascade;
create user test1 identified by test1;
grant connect, resource, dba to test1;
alter user test1 default tablespace USERS;
create table TEST1.TAB1 as select * from dba_objects where rownum<50001;
exec dbms_stats.gather_table_stats('TEST1','TAB1');
create table TEST1.TAB2 as select * from dba_objects where rownum<50001;
exec dbms_stats.gather_table_stats('TEST1','TAB2');
insert into TEST1.TAB1 select * from dba_objects where rownum<50001;
commit;
insert into TEST1.TAB2 select * from dba_objects where rownum<50001;
commit;
insert into TEST1.TAB2 select * from dba_objects where rownum<50001;
commit;
update TEST1.TAB1 set object_id=object_id+0;
commit;
update TEST1.TAB2 set object_id=object_id+1;
commit;
exec DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;
select table_name,owner,stale_stats,to_char(last_analyzed,'DD-MON HH24:MI:SS') LAST_ANALYZED,SAMPLE_SIZE from dba_tab_statistics where table_name in ('TAB1','TAB2');
exec DBMS_STATS.GATHER_TABLE_STATS('TEST1','TAB1');
select table_name,owner,stale_stats,to_char(last_analyzed,'DD-MON HH24:MI:SS') LAST_ANALYZED,SAMPLE_SIZE from dba_tab_statistics where table_name in ('TAB1','TAB2');
exec DBMS_AUTO_TASK_IMMEDIATE.GATHER_OPTIMIZER_STATS;
pause Wait a bit - then press return ...
select table_name,owner,stale_stats,to_char(last_analyzed,'DD-MON HH24:MI:SS') LAST_ANALYZED,SAMPLE_SIZE from dba_tab_statistics where table_name in ('TAB1','TAB2');
exec dbms_stats.gather_schema_stats('TEST1');
select table_name,owner,stale_stats,to_char(last_analyzed,'DD-MON HH24:MI:SS') LAST_ANALYZED,SAMPLE_SIZE from dba_tab_statistics where table_name in ('TAB1','TAB2');
prompt End ...


The results

exec
DBMS_STATS.
FLUSH_DATABASE_MONITORING_INFO;
TABLE OWNER  STAL LAST_ANALYZED   SAMPLE_SIZE
----- ------ ---- --------------- -----------
TAB1  TEST1  YES  29-FEB 22:37:07       50000
TAB2  TEST1  YES  29-FEB 22:37:07       50000

exec
DBMS_STATS.
GATHER_TABLE_STATS('TEST1','TAB1');
TABLE OWNER  STAL LAST_ANALYZED   SAMPLE_SIZE
----- ------ ---- --------------- -----------
TAB1  TEST1  NO   29-FEB 22:37:12      100000
TAB2  TEST1  YES  29-FEB 22:37:07       50000

exec
DBMS_AUTO_TASK_IMMEDIATE.
GATHER_OPTIMIZER_STATS;

TABLE OWNER  STAL LAST_ANALYZED   SAMPLE_SIZE
----- ------ ---- --------------- -----------
TAB1  TEST1  NO   29-FEB 22:37:12      100000
TAB2  TEST1  NO   29-FEB 22:37:13      150000

exec
dbms_stats.
gather_schema_stats('TEST1');

TABLE OWNER  STAL LAST_ANALYZED   SAMPLE_SIZE
----- ------ ---- --------------- -----------
TAB1  TEST1  NO   29-FEB 22:37:43      100000
TAB2  TEST1  NO   29-FEB 22:37:43      150000

The results can be interpreted this way:

  • The sample size of 50k is based on the first activity during the CTAS
  • Once table TAB1 gets analyzed the sample size is now correct - and the time stamp got updated - statistics on TAB2 are still marked STALE of course as the underlying table has changed by more than 10%
  • The Automatic Statistics Gathering job will refresh only stats for objects where stats are missing or marked STALE - in this example here TAB2. Table TAB1's statistics remain unchanged.
  • When the GATHER_SCHEMA_STATS job gets invoked it will refresh all statistics - regardless if they were STALE or not. 

This is most likely the behavior seen by the customer who raised the question about the differences between these two ways to create statistics. The GATHER_SCHEMA_STATS run took longer and consumed more resources as it refreshes all statistics regardless of the STALE attribute.

And it's hard to figure out why the refresh of statistics created in a previous release may have led to suboptimal performance, especially as we are talking about a patch set upgrade - and not a full release upgrade. Thanks to Wissem El Khlifi who tweeted the following annotations I forgot to mention:

  • The Automatic Statistics Gathering job prioritizes objects with NO statistics over objects with STALE statistics
  • The Automatic Statistics Gathering job may get interrupted or may skip objects, leaving them with NO statistics gathered. You can also force skipping by locking statistics - then the Auto job will leave those objects out completely (see the lock/unlock sketch right after this list)
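
A minimal lock/unlock sketch, using the TEST1.TAB1 example table from the script above:

SQL> exec DBMS_STATS.LOCK_TABLE_STATS('TEST1','TAB1');
SQL> exec DBMS_STATS.UNLOCK_TABLE_STATS('TEST1','TAB1');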

You'll find more information about the Automatic Statistics Gathering job here:

And another strange finding ...

When I played with this example in 12c I encountered the strange behavior of the GATHER_OPTIMIZER_STATS call taking exactly 10 minutes until it returned to the command prompt.

First I thought this is a Multitenant only issue. But I realized quickly: this happens in non-CDB databases in Oracle 12c as well. And when searching the bug database I came across the following unpublished bug:

  • Bug 14840737
    DBMS_AUTO_TASK_IMMEDIATE.GATHER_OPTIMIZER_STATS RETURNS INCORRECTLY

which got logged in Oct 2012 and describes this exact behavior. I kick off the job - it updates the stats pretty soon after - but it still takes 10 minutes to return control to the command prompt. It is supposed to be fixed in a future release of Oracle Database ... 

 

--Mike 

Thursday Nov 13, 2014

Incremental Statistics Collection improved in Oracle 12c

Traveling right now through Asia. It was Beijing for 32 hours, Tokyo for 24 hours - and now we are running an internal 2-day workshop with colleagues from Korea, New Zealand, India and some other countries in Seoul. And yesterday I had the pleasure to listen to Tom Kyte's optimizer talk at the OTN Conference in Tokyo. And I learned a lot - as always when having the chance to listen to Tom, Graham Wood and the other great experts.

Oracle Database 11.1 offered a great new feature: Incremental Statistics Collection, which helped a lot to make stats collection on partitioned tables way more efficient. But it had a few flaws, and it took a while to work as expected. And it had one side effect when you used it heavily: it stored tons of data in WRI$_OPTSTAT_SYNOPSIS$. We saw it on some databases at almost 300GB. 

Now the thing with such a huge WRI$_OPTSTAT_SYNOPSIS$ can be: it gets a new partitioning layout during upgrades twice:

  • Upgrade from Oracle 11.1.0.x/11.2.0.1 to Oracle 11.2.0.2/3/4:
    • Change to range-hash partitioning for WRI$_OPTSTAT_SYNOPSIS$
    • This can cause a lot of data movement.
  • Upgrade from Oracle 11.2.0.2/3/4 to Oracle 12.1.0.x:
    • Change to list-hash partitioning
    • This will cause not as much data movement as in the previous change

Tom explained yesterday that in Oracle Database 12c Incremental Statistics Collection has gotten a few excellent extensions making it more efficient: 

  • Smaller footprint on disk for synopses compared to previous releases
  • Incremental stats with partition exchange operations
  • Ability to define a stale percentage for existing partitions

The latter one is very interesting as it means: before Oracle Database 12c, whenever you changed even a single row within an existing partition, this particular partition needed to be examined again during a recalculation of the global stats - even though just one record had changed - instead of still using the stored synopsis.

In Oracle Database 12c you can now define a stale percentage. First you have to enable the feature; second you can set the stale percentage yourself - otherwise the default of 10% applies, but only once the feature has been enabled. Otherwise the pre-12c behavior is kept (and this is the out-of-the-box behavior in Oracle Database 12c):

  • Switch incremental statistics on for a specific partitioned table:
    • SQL> exec DBMS_STATS.SET_TABLE_PREFS('SH','SALES','INCREMENTAL','TRUE'); 
  • Switch on the new 12c stale percentage feature globally:
    • SQL> exec DBMS_STATS.SET_DATABASE_PREFS('INCREMENTAL_STALENESS',
      'USE_STALE_PERCENT');
  • Change (only if desired) the stale percentage of default of 10%:
    • SQL> exec DBMS_STATS.SET_DATABASE_PREFS('STALE_PERCENT','12');
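
To double-check which preferences are in effect for a given table afterwards, a quick verification sketch (assuming the 12c preference names are accepted by GET_PREFS; SH.SALES is just the example table from above):

SQL> select DBMS_STATS.GET_PREFS('INCREMENTAL','SH','SALES') from dual;
SQL> select DBMS_STATS.GET_PREFS('INCREMENTAL_STALENESS','SH','SALES') from dual;
SQL> select DBMS_STATS.GET_PREFS('STALE_PERCENT','SH','SALES') from dual;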
-Mike

Tuesday Apr 03, 2012

New Slides - and a discussion about Dictionary Statistics

First of all we have just uploaded a new version of the Upgrade and Migration Workshop slides with some added information. So please feel free to download them from here. The slides contain one new interesting piece of information which led to a discussion I've had in the past days with a very large customer regarding their upgrades - and internally on the mailing list targeting an EBS database upgrade from Oracle 10.2 to Oracle 11.2.

Why are we creating dictionary statistics during upgrade?

I believe this forced dictionary statistics creation got introduced with the desupport of the Rule Based Optimizer (RBO) in Oracle 10g. The goal: as the RBO is not supported anymore, we have to make sure that the data dictionary has fresh, non-stale statistics. In Oracle 9i, by contrast, dictionary statistics could lead to strange behaviour in some databases - so in Oracle 9i creating them was strongly discouraged.

The upgrade scripts got hardcoded to create these stats. But during tests we had the following findings:

It's important to create dictionary statistics the night before the upgrade. Not two weeks before, not 60 minutes before your downtime begins. But very close to the upgrade. From Oracle 10g onwards you'd just say:

SQL> exec DBMS_STATS.GATHER_DICTIONARY_STATS;

This is important to make sure you have fresh dictionary statistics during upgrade for performance reasons. Tests have shown that running an upgrade without valid dictionary statistics might slow down the whole upgrade by factors of 2x-3x.

And it is also a great idea to create fresh dictionary statistics again after the upgrade if you suppressed the stats creation during the upgrade process. Suppress? Yes, you can set this underscore parameter in the init.ora:

_optim_dict_stats_at_db_cr_upg=FALSE

to suppress the forced dictionary statistics collection during an upgrade. We strongly believe that (a) most people use the default statistics creation process, which gathers dictionary statistics by default, and (b) you will create fresh dictionary stats right before the upgrade. Therefore we consider it safe to use the underscore parameter during the upgrade once you have followed our advice. And we've taken out that forced statistics collection during upgrade in the next release of the database.
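
A small sketch of setting the parameter ahead of a command-line upgrade (assuming an spfile-based instance; remember to remove the parameter again after the upgrade):

SQL> alter system set "_optim_dict_stats_at_db_cr_upg"=FALSE scope=spfile;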

Please note: If you are using the DBUA for the upgrade it will remove underscore parameters for the upgrade run to improve performance - which is generally a good idea. So you'll have to start the DBUA with this call:

$ dbua -initParam "_optim_dict_stats_at_db_cr_upg"=FALSE

-Mike

Tuesday Nov 29, 2011

Wrong statistics in AUX_STATS$ might puzzle the optimizer

We have recommended the creation of System Statistics for quite a long time. Since Oracle 9i the optimizer works with a CPU and IO cost based model. And in order to give the optimizer some knowledge about the IO subsystem's performance and throughput, System Statistics - once collected - get stored in AUX_STATS$. For this purpose, back in the old Oracle 9i days, some default values were defined - and you'll still find those defaults in Oracle Database 11g Release 2 in AUX_STATS$. But these old values don't reflect the performance of modern IO systems. So it might be a good best practice post upgrade to create fresh System Statistics if you haven't done this before. 

You can collect System Statistics with:

exec DBMS_STATS.GATHER_SYSTEM_STATS('start');

and end it later by executing:

exec DBMS_STATS.GATHER_SYSTEM_STATS('stop');


You could also run DBMS_STATS.GATHER_SYSTEM_STATS('interval', interval=>N) instead where N is the number of minutes when statistics gathering is stopped automatically.

Please make sure you do this during a real workload period. It won't make sense to gather these values while the database is in an idle state. You should do this ideally for several hours. It doesn't affect performance in a negative way as the values are collected in V$SYSSTAT and V$SESSTAT anyway. And in case you'd like to delete the stats and revert to the old default values you'd simply execute:
exec DBMS_STATS.DELETE_SYSTEM_STATS;

The tricky thing in Oracle Database 11.2 - and that's why I'm actually writing this blog post today - is bug 9842771. It leads to wrong values in AUX_STATS$ for SREADTIM and MREADTIM by factor 1000, sometimes guiding the optimizer into a totally wrong direction. The workaround is to overwrite these values manually and divide them by 10000. Use the DBMS_STATS.SET_SYSTEM_STATS procedure. See MOS Note:9842771.8 for the above bug for some further information. This issue is fixed in Oracle Database 11.2.0.3 and above.
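
A sketch of how to inspect the stored values and overwrite them manually - the numbers below are purely illustrative, use the corrected values for your own system:

SQL> select pname, pval1 from sys.aux_stats$ where sname = 'SYSSTATS_MAIN';
SQL> exec DBMS_STATS.SET_SYSTEM_STATS('SREADTIM', 4);
SQL> exec DBMS_STATS.SET_SYSTEM_STATS('MREADTIM', 10);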

To get some background information about the statistics collected in AUX_STATS$ please read this section in the Oracle Database 11.2 Performance Tuning Guide. Gathering System Statistics might also have some implications if you have mixed workloads - and it interacts with DB_FILE_MULTIBLOCK_READ_COUNT. For more information please read section 13.4.1.2.


Correction: I had to correct the factor mentioned above from 1000 to 10000. Thanks to Andre Duvekot from the Netherlands for highlighting this to me!!!


