Wednesday Apr 29, 2015

Did you know that TABLE_REORG has replaced REBUILD_SCHEMA and REBUILD_TABLES?

Demantra TABLE_REORG procedure. 

TABLE_REORG can address partitioned tables and is more efficient!  Get the latest release at My Oracle Support using bug 17640575.

TABLE_REORG replaces and improves on the functionality of REBUILD_SCHEMA and REBUILD_TABLES.
It rebuilds the table in primary key order and moves empty columns to the end of the row.

REBUILD_SCHEMA uses ALTER TABLE ... MOVE <tablespace> to reduce chained rows for all tables in the schema.
However, it does not support partitioned tables.

REBUILD_TABLES is similar.  It was originally designed for MDP_MATRIX / SALES_DATA, but it can be run for all tables or
for a specific table.  Since 2010 it has supported partitioned tables.

The procedure MOVE_TABLE was fixed to handle partitioned tables.  It is also somewhat out of date; parts of the
code still use ANALYZE TABLE (e.g. for SALES_DATA and MDP_MATRIX).  For an "all tables" run it uses ANALYZE_SCHEMA, which does use dbms_stats.GATHER_TABLE_STATS.

All tables - tables where the stats value chain_cnt > 0 are rebuilt.  SALES_DATA is not automatically included unless the sys_params parameter Rebuild_Sales_Table = 1.

REBUILD_TABLES ( table name,  stats check,  sales,  all tables flag)

exec REBUILD_TABLES ( null, 1, null, 1) ;    -- Runs ANALYZE_SCHEMA(100000) first; use when stats are missing or very old
exec REBUILD_TABLES ( null, 0, null, 1) ;

exec REBUILD_TABLES ( null, 0, 1, 1) ;   -- Will include SALES_DATA
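To preview which tables the chain_cnt criterion would pick up before running an "all tables" rebuild, a query along these lines can be used.  The owner value is a placeholder for your Demantra schema, and note that CHAIN_CNT is only populated after an ANALYZE-style statistics collection:

```sql
-- List tables with recorded chained rows (owner is a placeholder).
SELECT table_name, chain_cnt, num_rows
  FROM dba_tables
 WHERE owner = 'DEMANTRA'
   AND chain_cnt > 0
 ORDER BY chain_cnt DESC;
```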

For more information see: Troubleshooting TABLE_REORG Package Issues - RDF Snapshot Drop When Process Fails + TABLE_REORG Guide, MOS Note 1964291.1.


Gathering statistics on partitioned tables.  Best practice:

For all 11gR2 environments with large partitioned or subpartitioned objects, turn on incremental statistics using this command:

exec dbms_stats.set_table_prefs('OWNER','TABLE','INCREMENTAL','TRUE');

Once this is set for a given table, gather statistics on that table using the normal tool (fnd_stats in EBS, or dbms_stats elsewhere).
The first gather after turning it on will take longer than previous analyses.  Going forward you will see the following:

1.  The normal dbms_stats or fnd_stats gather will only collect statistics on lower-level partitions if the statistics on that partition are stale.  This is a significant change: going forward, the default options of a gather command may in fact perform no re-analysis of the actual data if the modifications to the table do not warrant it.

2.  If a subpartition is stale, the normal stats gather will ONLY collect statistics on that subpartition.  The statistics for its parent partition will be re-derived, as will the global
    statistics; no other statistics will be gathered.

Making this change can reduce gather-stats time by hours in some cases.
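The setup described above can be sketched as follows; the owner and table names are placeholders for your own partitioned table:

```sql
-- Enable incremental statistics on a large partitioned table.
EXEC dbms_stats.set_table_prefs('DEMANTRA', 'SALES_DATA', 'INCREMENTAL', 'TRUE');

-- Confirm the preference took effect (should return TRUE).
SELECT dbms_stats.get_prefs('INCREMENTAL', 'DEMANTRA', 'SALES_DATA') FROM dual;

-- The first gather after the change builds the partition synopses,
-- so it will run longer; subsequent gathers only touch stale partitions.
EXEC dbms_stats.gather_table_stats('DEMANTRA', 'SALES_DATA');
```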

For more information: Demantra Gathering Statistics on Partitioned Objects Oracle RDBMS 11gR2, MOS Note 1601596.1

Monday Apr 27, 2015

The TABLE_REORG procedure run for SALES_DATA errored out due to lack of tablespace. Can I delete the RDF segments?

Yes, you may delete these temporary rdbms_redefinition tables.  They are snapshots of the table being reorganized; the redefinition process requires creating a
snapshot.  While you can use SQL to drop the RDF segments directly, it is advisable to call dbms_redefinition.abort_redef_table to release the snapshot, then
restart the procedure.  The dbms_redefinition.abort_redef_table procedure requires three parameters: schema, original table name, and interim table name.

Use the following PL/SQL:

BEGIN
  DBMS_REDEFINITION.ABORT_REDEF_TABLE (
    uname       => '&demantra_schema_name',
    orig_table  => '&original_table_name',
    int_table   => '&interim_table_name');
END;
/

Tuesday Jan 20, 2015

Demantra Large Table Partitions and Using the Flashback Recycle bin, recyclebin, dba_recyclebin and sys.RECYCLEBIN$ Purge Best Practice

This is covered using MOS note:

Demantra Large Table Partitions and Using the Flashback Recycle bin, recyclebin, dba_recyclebin and sys.RECYCLEBIN$ Purge Best Practice (Doc ID 1962730.1)

When you are using the flashback feature and Oracle partitions are involved, you will need to perform additional due diligence.  After the automatic
purge that occurs when the quota is reached, or after you issue a purge command against the recyclebin, you will notice orphaned
BIN$ objects that consume the same space as the original partition that was dropped and then purged.

 

This solution came out of a customer SR that puzzled us at first.  The flashback documentation does not discuss the flashback purge
and partitions.  We recommend following the current best practice when managing the flashback feature with partitions.

If you have already performed the purge, or if the auto purge has occurred, you will need to perform the following (customized, of course, to your BIN$ object names):

ALTER SESSION SET RECYCLEBIN=OFF;
drop table lpudemantra."BIN$A3+yI1NBASrgUwoVBkIBKg==$0";
drop table lpudemantra."BIN$A4gGznhiAVbgUwoVBkIBVg==$0";
.
.
.
.
drop table lpudemantra."BIN$9WCVYRl3BGbgQwoVBkIEZg==$0";
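Before dropping, you can check which BIN$ objects remain and how much space they still hold.  The owner value below is a placeholder, and the SPACE column is reported in database blocks:

```sql
-- List leftover recyclebin objects for the schema, largest first.
SELECT owner, object_name, original_name, type, space
  FROM dba_recyclebin
 WHERE owner = 'DEMANTRA'
 ORDER BY space DESC;
```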

If you have not experienced the purge of the recycle bin, attempt the following:

  1) select count(*) from sys.recyclebin$;

  2) Instead of simply trying to purge the table, use the following alternative:

     a) flashback table <table_name> to before drop;

          or

          if  <table_name> is currently used by another object:

        flashback table <table_name> to before drop rename to <new_table_name>;


     b) create a script that drops all the partitions one by one:

        spool drop_<table_name>_partitions.sql

        select 'alter table '|| table_owner|| '.'|| table_name ||' drop partition '|| partition_name||';'
        from dba_tab_partitions
        where table_name='<table_name>'
        /

        spool off


     c) run drop_<table_name>_partitions.sql


     d) drop table <table_name>;


     e) purge recyclebin;


To turn off the recyclebin (takes effect at the next instance startup):
  - alter system set recyclebin=off scope=spfile;

Tuesday Dec 03, 2013

Upgrading to 12.2.2? Do you want extra speed? Use a logon trigger for parallelism

When upgrading your Demantra application from an older version directly to 12.2.2, you can minimize the upgrade time by forcing parallelism with a logon trigger on the database server.

   Creation of new indexes on SALES_DATA can take a long time.

   As part of the upgrade, a script builds a new set of indexes, one per engine profile, on each profile's quantity_form expression.

   Index creation can take a long time on big data tables.

    - Remember to disable or drop the trigger when the upgrade has completed.

   The following trigger was created as the SYS database user and tested successfully by creating a DEMANTRA database user and logging in with it:

CREATE OR REPLACE TRIGGER force_parallel_ddl_trg
AFTER LOGON ON DATABASE
BEGIN
  IF (USER = 'DEMANTRA') THEN
    EXECUTE IMMEDIATE 'ALTER SESSION FORCE PARALLEL DDL';
  END IF;
END force_parallel_ddl_trg;
/

 Make sure to:

  1. Replace DEMANTRA with your database schema name.
  2. Drop the trigger once it is no longer needed, using the following:

 DROP TRIGGER force_parallel_ddl_trg;

About

This blog delivers the latest information regarding performance and install/upgrade. Comments welcome
