Deleting statistics and/or dropping indexes on global temporary tables can help too

Post-Upgrade Performance Pangs..

After a 10g upgrade of a production 11.5.9 Oracle Applications instance, one of our clients was seeing performance problems relating to pricing, particularly in a custom concurrent program called XXONT_PROGRESS_NOTIFIED.

This triggered the onsite Senior DBA, Brian Fane, into action. While watching the SQL currently being executed by this job, he saw the following statement executing quite frequently:

         SELECT list_header_id
           FROM qp_preq_qual_tmp
          WHERE list_header_id = :b4
            AND line_index = :b3
            AND list_line_id = -1
            AND pricing_status_code = :b2
            AND pricing_phase_id = :b1
            AND ROWNUM = 1;
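      For reference, watching what a running concurrent request is executing can be done with a query along these lines. This is only a sketch: the join through v$session.sql_id is standard 10g, but the MODULE filter is an assumption about how this particular concurrent program identifies itself.

         -- Sketch: what is this job's session executing right now?
         -- The module filter is an assumption; the session could equally be
         -- located via fnd_concurrent_requests.oracle_process_id.
         SELECT s.sid, s.serial#, q.sql_id, q.sql_text
           FROM v$session s
           JOIN v$sql q
             ON q.sql_id       = s.sql_id
            AND q.child_number = s.sql_child_number
          WHERE s.module LIKE 'XXONT_PROGRESS_NOTIFIED%';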

     In fact, based on one AWR report, this statement accounted for about 19% of the gets and 5% of the total elapsed time between 0800 and 1300 hrs, and it ran nearly a million times. The execution plan looked like this:

         CHILD_NUMBER   QUERY

         0  SELECT STATEMENT   [ALL_ROWS] Cost=1
         0  COUNT STOPKEY
         0    TABLE ACCESS BY INDEX ROWID QP_PREQ_QUAL_TMP_T  Cost=0 Card=1  Bytes=82
         0      INDEX SKIP SCAN XXQP_QP_PREQ_QUAL_TMP_T_N1  Cost=0 Card=1  Bytes=
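The plan above was pulled from the cursor cache with a custom query; in 10g the same information can also be retrieved with DBMS_XPLAN, roughly as follows (a sketch; the sql_id is a placeholder to be substituted):

         -- Sketch: show the cached plan for a given sql_id (placeholder value)
         SELECT *
           FROM TABLE(dbms_xplan.display_cursor('&sql_id', NULL, 'TYPICAL'));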

Comparing with other 9i instances..

      In DEV, the query was not using XXQP_QP_PREQ_QUAL_TMP_T_N1, but instead was using QP_PREQ_QUAL_TMP_N1.

     Taking another approach in a BCV copy of PROD, he tried deleting the statistics and then re-gathering them: deleting them caused a full table scan, while re-gathering them reverted to the PROD-like behavior of using XXQP_QP_PREQ_QUAL_TMP_T_N1.

     So the variable in the situation was the custom index, and the question was whether it was needed at all. Looking a little further, Brian found that there were currently 11 queries in the SQL area referring to this index, and yet the index wasn't really helping. This SQL was useful for finding the current statements that use the custom index:

         SELECT DISTINCT old_hash_value
           FROM v$sql_plan
           JOIN v$sqltext_with_newlines USING (sql_id)
           JOIN v$sql USING (sql_id)
          WHERE object_name  = 'XXQP_QP_PREQ_QUAL_TMP_T_N1'
            AND object_owner = 'QP';
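      Index monitoring is another way to confirm, going forward, whether the optimizer ever touches the index. A minimal sketch is below; note that V$OBJECT_USAGE only reports indexes owned by the connected schema, so the check would be run as QP.

          -- Sketch: switch on usage monitoring for the custom index
          ALTER INDEX QP.XXQP_QP_PREQ_QUAL_TMP_T_N1 MONITORING USAGE;

          -- ...let the workload run for a representative period, then (as QP):
          SELECT index_name, monitoring, used, start_monitoring
            FROM v$object_usage
           WHERE index_name = 'XXQP_QP_PREQ_QUAL_TMP_T_N1';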

A Reliable validation point..

       So was this index being used in 9i at all? Was there another way to confirm this for PROD? Luckily, by a conscious choice, we had kept a backup of 9i's perfstat schema, which also contained the historical SQL plans used by the system (captured when snap_level >= 7). Brian imported the perfstat data into the BCV copy and ran this query to find the SQL plans that had used the custom index XXQP_QP_PREQ_QUAL_TMP_T_N1:

         SELECT /*+ PARALLEL(ssp, 8) */
                DISTINCT plan_hash_value
           FROM perfstat.stats$sql_plan ssp
          WHERE object_owner = 'QP'
            AND object_name = 'XXQP_QP_PREQ_QUAL_TMP_T_N1';

      This query returned no statements from at least the last 30 days of PROD's life on 9i in which this custom index was used. Simply put, the index was not being used in the 9i PROD instance.
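      To be comfortable with that conclusion, it helps to confirm how much history the imported perfstat data actually covers, and that the snapshots were taken at a level that captures plans. A sketch of both checks:

          -- Sketch: the window of history held in the imported statspack data
          SELECT MIN(snap_time), MAX(snap_time)
            FROM perfstat.stats$snapshot;

          -- Sketch: plans are only captured in stats$sql_plan at snap_level >= 7
          SELECT snap_level, COUNT(*)
            FROM perfstat.stats$snapshot
           GROUP BY snap_level;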

      Brian then worked with a developer to get a script for invoking this piece of code for testing purposes. The idea was to drop the index and let the CBO either do a full table scan or use another index (after gathering statistics).

Testing Results..

      Based on the code Brian got, and after doing two things, deleting any statistics that existed on the QP_PREQ_QUAL_TMP_T table and dropping the custom index XXQP_QP_PREQ_QUAL_TMP_T_N1, he did see an improvement in pricing performance. Here's the summary of the pre-change performance:

call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse      841      0.13       0.12          0          0          0          0
Execute  36094      8.19       8.79        285     371446     118861      36701
Fetch    30680     23.21      24.20      21307    3083457          0      41171
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total    67615     31.53      33.12      21592    3454903     118861      77872

      After making the changes:

call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse     1018      0.18       0.24          0          0          0          0
Execute  36273      8.66       9.38         31     369674     106981      36701
Fetch    30891     12.19      12.47          0    1464382          0      41352
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total    68182     21.03      22.10         31    1834056     106981      78053

The new performance shows gets being about half of the pre-change value, and elapsed time is about two thirds of the previous performance.
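For reference, summaries like the ones above are what tkprof produces from a 10046 trace of the test run; a sketch of capturing such a trace is below (the trace identifier and output file names are placeholders):

    -- Sketch: trace the test session with binds and waits, then format with tkprof
    ALTER SESSION SET tracefile_identifier = 'qp_tmp_test';
    ALTER SESSION SET events '10046 trace name context forever, level 12';

    -- ...run the pricing test script here...

    ALTER SESSION SET events '10046 trace name context off';

    -- From the OS:  tkprof <trace_file>.trc qp_tmp_test.prf sys=no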

Here's the code for the deletion of statistics and dropping the index:

    BEGIN
        dbms_stats.delete_table_stats(ownname => 'QP', tabname =>  'QP_PREQ_QUAL_TMP_T', cascade_indexes => true);
    END;
    /

   DROP INDEX QP.XXQP_QP_PREQ_QUAL_TMP_T_N1;

To back out:

   CREATE INDEX QP.XXQP_QP_PREQ_QUAL_TMP_T_N1 ON
      QP.QP_PREQ_QUAL_TMP_T
      (UNIQUE_KEY, PRICING_STATUS_CODE, PRICING_PHASE_ID);

    BEGIN
        fnd_stats.gather_table_stats(ownname => 'QP', tabname => 'QP_PREQ_QUAL_TMP_T', cascade => FALSE);
        fnd_stats.gather_index_stats(ownname => 'QP', indname => 'XXQP_QP_PREQ_QUAL_TMP_T_N1');
    END;
    /

Learning from this anecdote..

One question could be: why did we not create a 10g SQL profile (the successor to 9i's stored outlines) to fix the SQL at hand? The caveat is that you would need a SQL profile for every possible statement that might use QP.QP_PREQ_QUAL_TMP_T, which is not an easy proposition.
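For completeness, this is roughly what the SQL profile route looks like for a single statement in 10g, which also illustrates why repeating it for every statement against the table is impractical. The sql_id, task name and profile name below are placeholders:

    DECLARE
        l_task VARCHAR2(64);
    BEGIN
        -- Sketch: tune one offending statement (sql_id is a placeholder)
        l_task := dbms_sqltune.create_tuning_task(sql_id    => '&sql_id',
                                                  task_name => 'xxqp_tmp_tune_1');
        dbms_sqltune.execute_tuning_task(task_name => l_task);

        -- If the advisor recommends a profile, it still has to be reviewed and
        -- accepted, statement by statement:
        dbms_sqltune.accept_sql_profile(task_name => l_task,
                                        name      => 'xxqp_tmp_profile_1');
    END;
    /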

Even when statistics are gathered on all related objects, the CBO may not choose the best plan for a statement executed this many times. It also pays to keep a history of the SQL plans from known-good scenarios. And to nudge the CBO toward the optimal plan, it may be necessary to drop some indexes and force a cheaper plan, especially for a statement with over a million executions.
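On the 10g side, that plan history now lives in AWR, so the statspack check shown earlier has a direct equivalent (a sketch):

    -- Sketch: 10g/AWR equivalent of the statspack plan-history check
    SELECT DISTINCT plan_hash_value
      FROM dba_hist_sql_plan
     WHERE object_owner = 'QP'
       AND object_name  = 'XXQP_QP_PREQ_QUAL_TMP_T_N1';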

Note: This article is published with the explicit permission of Brian Fane, and is aimed at sharing tips with others in the Oracle world who might find themselves in the same situation.

Comments:

Gaurav, bear in mind that QP_PREQ_QUAL_TMP_T is a temporary table, so it should not have statistics. The fnd_stats package specifically excludes temporary tables from statistics gathering, and Oracle depends on dynamic sampling. I suspect the statistics gathering the client did is incorrect, as the table has 0 rows. The problem in 10.x is that dynamic_sampling is 2 instead of 1, so you'll get a whole lot of different plans in Pricing because of this new setting, with plans changing all the time. I have a client with huge problems in Pricing because of this, with statements having over 20 execution plans. Regards, Paul

Posted by Paul Albeck on May 30, 2009 at 11:08 AM EDT #
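As a follow-up to Paul's point, the dynamic sampling level he refers to is easy to check, and to experiment with at session level before changing anything instance-wide (a sketch):

    -- Sketch: the 10g default for this parameter is 2 (it was 1 in 9i)
    SELECT value
      FROM v$parameter
     WHERE name = 'optimizer_dynamic_sampling';

    -- Sketch: try the 9i-like behaviour for the current session only
    ALTER SESSION SET optimizer_dynamic_sampling = 1;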
