Wednesday May 13, 2015

TABLE_REORG Causing ORA-42012: error occurred while completing the redefinition and ORA-00600

Hello!  The latest information for you.

Submitting the TABLE_REORG procedure for a table that has a foreign key constraint referring to the same table leaves that constraint in a disabled state.  However, foreign key constraints that refer to other tables behave as expected; that is, the constraint is re-enabled at the end of the redefinition.

When this condition exists, TABLE_REORG will fail with the following:

ORA-42012: error occurred while completing the redefinition
ORA-00600: internal error code, arguments: [17183], [0x3FFF81BF400], [], [], [], [], [], [], [], [], [], []

It does generate a trace file and dump file.

ERROR at line 1:
ORA-42012: error occurred while completing the redefinition
ORA-00600: internal error code, arguments: [17183], [0x3FFF81BF400], [], [], [], [], [], [], [], [], [], []
ORA-06512: at "SYS.DBMS_REDEFINITION", line 82
ORA-06512: at "SYS.DBMS_REDEFINITION", line 1524
ORA-06512: at line 1
ORA-06512: at "DEMANTRA.TABLE_REORG", line 1193

To resolve this issue, apply Patch 13867469 if it is available for your database version and platform.  The fix for Bug 13867469 should be included in a future 12.2 release.

- See MOS Note <Note 13572659.8> for additional details.  Bug 13572659 - DBMS_REDEFINITION disables Foreign Keys used for REFERENCE partitioning

- Also available, patch 20954948 : MERGE REQUEST ON TOP OF 11.2.0.3.0 FOR BUGS 13040943 13572659 13642044

As a workaround, if you did not receive an ORA-00600, re-enable the constraint after the redefinition.
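
A minimal sketch of that workaround; the schema, table, and constraint names are substitution placeholders, and you should first confirm which constraint was left disabled:

-- Find foreign key constraints left disabled after the redefinition
select constraint_name, status
from dba_constraints
where owner = '&schema_owner'
and table_name = '&table_name'
and constraint_type = 'R'
and status = 'DISABLED';

-- Re-enable the constraint reported above (names are placeholders)
ALTER TABLE &schema_owner..&table_name ENABLE CONSTRAINT &constraint_name;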

Note: The above workaround is not applicable for a nested table built on a partitioned parent table. Reference: <Note 1929007.1>, Foreign Constraint On Nested Table is Created With Status Disabled.

Tuesday May 05, 2015

Demantra TABLE_REORG Procedure Failed with ORA- Error on SALES_DATA, MDP_MATRIX, or PROMOTION_DATA. How do I Restart? RUPD$_ MLOG$_

When you submit TABLE_REORG, DBMS_REDEFINITION is used.  This creates an MLOG$_ database object.  If the table you supplied to the TABLE_REORG procedure has a Primary Key (PK), then when DBMS_REDEFINITION creates the MLOG$_ object, it also automatically creates a RUPD$_ object.  These objects are meant to be used for Java RepAPI.  If you execute a 'drop snapshot log on tablename', the snapshot log as well as the temporary snapshot log are dropped.  However, dropping the MLOG$_ object directly is not best practice.

If these objects exist prior to executing TABLE_REORG, you will see the following in the log_table_reorg table: 'Table cannot be redefinitioned.'  This means that one or both objects/segments exist.  For example, if a table_reorg for mdp_matrix failed, the following segments would most likely be left behind:

RUPD$_MDP_MATRIX
MLOG$_MDP_MATRIX

Use the following SQL to verify (10g and above):

select substr(object_name,1,30)
from dba_objects
where regexp_like( object_name, 'MLOG\$|RUPD\$')
and owner = '&Schema_owner';

(The $ characters are escaped so they match the literal $ in the object names rather than acting as end-of-line anchors.)

While you can drop these temporary RDBMS segments, it is best practice to use the following:

BEGIN
 DBMS_REDEFINITION.ABORT_REDEF_TABLE (
   uname       => '&demantra_schema_name',
   orig_table  => '&original_table_name',
   int_table   => '&interim_table_name');
END;
/

In the above, supply the arguments:

- uname: DM, or your schema name
- orig_table: MDP_MATRIX is the original table name
- int_table: MLOG$_MDP_MATRIX is the interim table name
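
For example, with the values above plugged in (assuming the schema is DM):

BEGIN
 DBMS_REDEFINITION.ABORT_REDEF_TABLE (
   uname       => 'DM',
   orig_table  => 'MDP_MATRIX',
   int_table   => 'MLOG$_MDP_MATRIX');
END;
/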

Verify that the objects are dropped.  Submit the TABLE_REORG again AFTER repairing the cause of the last failure.

Thursday Apr 30, 2015

System Wide, Environment Performance Point Summary + Important Performance MOS Notes

Hello!  Wanted to post a great summary of performance action points.  Also, take a look at these two important Demantra performance related MOS notes.

  • Demantra TABLE_REORG procedure. Did you know that TABLE_REORG has replaced REBUILD_SCHEMA and REBUILD_TABLES? MOS Note 2005086.1
  • For more information: Demantra Gathering Statistics on Partitioned Objects Oracle RDBMS 11gR2, MOS Note 1601596.1
  • As a Demantra implementer or Demantra DBA you should be able to explain the action taken and provide justification of the current environment for each of these areas.
  • Remember, Demantra does not support editioning or hot patching.

Database Performance and Configurations

Prime Table Reorganization and Partitioning
- A few tables in Demantra account for 95% of the data storage: mdp_matrix, sales_data and promotion_data
- If any of these tables has more than 10 million rows, it is prone to performance issues
- Develop a strategy to partition these tables; consult the worksheet design team, as this can impact their user experience as well
- Have multiple partition schemes ready to be tested
- Compute the optimal column order using null statistics and column attributes
- Compute the optimal PCTFREE, PCTUSED and INITRANS values for the tables (see the sketch after this list)
- Ensure that the schema stats are up to date
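
A minimal sketch of those last two items; the PCTFREE/INITRANS values and the schema name are illustrative placeholders, not recommendations:

-- Compute values appropriate to your own workload first; these are examples only
ALTER TABLE sales_data PCTFREE 30 INITRANS 10;

-- Then refresh the schema statistics (schema name is a placeholder)
exec dbms_stats.gather_schema_stats(ownname => 'DM');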


Database Performance and Configurations

Review the CBO Settings
- Analysis of monitored long operations and a tuning approach for these
- Index reduction analysis using index usage analysis (see the monitoring sketch after this list)
- Review of index additions to improve targeted query performance
- Review of parallelism settings for possible impact on the CBO
- Review of logging settings and their performance impact
- Ensure that you make provision for database tuning in your project plan
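
One way to collect data for the index usage analysis above is Oracle's built-in index monitoring; a minimal sketch, with a hypothetical index name, run as the index owner:

ALTER INDEX dm.sales_data_idx1 MONITORING USAGE;   -- index name is a placeholder

-- ... run a representative workload, then check whether the index was used
select index_name, table_name, monitoring, used
from v$object_usage;

ALTER INDEX dm.sales_data_idx1 NOMONITORING USAGE;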


Application Server Configurations
- There are multiple parameters in the application server property files which need to be reviewed, but the following are extremely important as they are closely related to your environment:
- threadpool.query_run.per_user
- threadpool.query_run.size
- worksheet.full.load
- client.worksheet.calcSummaryExpression


Extending Batch Load Process
- Enable Incremental Load
- Extend EP_LOAD_PROCESS to run in parallel mode
- Extend Proport to Run In Parallel mode
- These Procedures are in PL/SQL and the extension is not invasive and can be implemented without touching upon the delivered functionality
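
As an illustration of that non-invasive parallel extension, independent slices of the load could be submitted as scheduler jobs; this is only a sketch, and my_ep_load_slice is a hypothetical wrapper procedure, not a delivered API:

BEGIN
  FOR i IN 1 .. 4 LOOP
    DBMS_SCHEDULER.CREATE_JOB(
      job_name   => 'EP_LOAD_SLICE_' || i,
      job_type   => 'PLSQL_BLOCK',
      job_action => 'BEGIN my_ep_load_slice(' || i || '); END;',  -- hypothetical wrapper
      enabled    => TRUE);
  END LOOP;
END;
/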


Analytical Engine Performance (Things to Consider)
- The bootstrap run will be the longest run and is not an indication of the engine runs thereafter
- A larger forecast horizon increases the number of records that will be created and will impact the processing time for other processes
- Gather schema stats after every forecast run, and especially before the forecast is run
- Forecast versioning: keep it to the number that is absolutely necessary
- Ensure that sales_data and mdp_matrix have their own tablespaces, with autoextend turned on.  Turn off redo logging for these tables
- Increase the bulk loader size to, say, 20000; this ensures the frequency of loading data into the DB is not too high

Determining the number of engines that you can execute in your environment.
General rule: 1.5 engines per CPU; if you have 4 CPUs, you can go up to 6 engines.  Do not load the blade server to 100%.  You could increase the number of
engines based on the server performance and your needs.  The engines will spawn twice that number of sessions on the database, so ensure that you have
enough CPUs to manage these threads.


Worksheet Design Guidelines for Performance
- Enable Simple Filters for all the user groups
- Leverage Open with functionality
- Design Open with Tree Content in the Collaborator Workbench Homepage
- Limit the Number of Aggregation levels in the worksheet definition
- Place the Aggregation levels in the page of the layout
- Limit the number of series that the worksheet contains, if possible
- Limit the Time Horizon of the Worksheet, if possible; e.g., for Accuracy worksheets, limit it to the time period for which Forecast
  accuracy has to be displayed


Cache the Exception worksheets
- Create a function-based index on the exception data series columns
- Level Caching
- The Demantra system has the capability to cache level members on a level-by-level basis.
- Explore the option of caching bigger levels for better GUI performance.  Setting an optimal threshold for Level Caching is a matter of trial and error.
- More details regarding Level Caching can be found in the Demantra Implementation Guide under the section “Managing Level Caching”


Client Configurations
- JRE Settings
- Garbage collections
- Heap size settings

Wednesday Apr 29, 2015

Did you know that TABLE_REORG has replaced REBUILD_SCHEMA and REBUILD_TABLES?

Demantra TABLE_REORG procedure. 

Did you know that TABLE_REORG has replaced REBUILD_SCHEMA and REBUILD_TABLES?  TABLE_REORG
can address partitioned tables and is more efficient!  Get the latest release at My Oracle Support using bug 17640575.

TABLE_REORG has truly replaced and improved on the functionality of REBUILD_SCHEMA and REBUILD_TABLES.
It rebuilds the table in primary key order and moves empty columns to the end of the row.

REBUILD_SCHEMA uses ALTER TABLE MOVE tablespace to reduce chained rows of all tables in the schema.
However, it does not support partitioned tables.

REBUILD_TABLES is similar.  It was originally designed for MDP_MATRIX / SALES_DATA, but it can run for all tables and
also for a specific table.  Since 2010 it has supported partitioned tables.

The procedure MOVE_TABLE was fixed to handle partitioned tables.  It is also somewhat out of date; ANALYZE TABLE is used in parts of the
code (e.g., for SALES_DATA and MDP_MATRIX).  For an "all tables" run it uses ANALYZE_SCHEMA, which does use dbms_stats.GATHER_TABLE_STATS.

All tables - where the stats value chain_cnt > 0.  It does not automatically include SALES_DATA unless the sys_params parameter Rebuild_Sales_Table = 1.

REBUILD_TABLES ( table name,  stats check,  sales,  all tables flag)

exec REBUILD_TABLES ( null, 1, null, 1) ;    -- With ANALYZE_SCHEMA(100000)  = for none or really old stats
exec REBUILD_TABLES ( null, 0, null, 1) ;    -- Without the stats check

exec REBUILD_TABLES ( null, 0, 1, 1) ;   -- Will include SALES_DATA

For more information see: Troubleshooting TABLE_REORG Package issues - RDF Snapshot drop when process fails + TABLE_REORG Guide MOS Note 1964291.1


Gathering statistics on partitioned tables.  Best practice:

For all 11gR2 environments with large partitioned or subpartitioned objects, turn on incremental statistics using this command:

exec dbms_stats.set_table_prefs('OWNER','TABLE','INCREMENTAL','TRUE');

Once this is set for any given table, gather statistics on that table using the normal tool (fnd_stats in EBS or dbms_stats elsewhere).
The first gather after turning it on will take longer than previous analyzes.  Going forward we will see the following:

1.  The normal dbms_stats or fnd_stats will only gather statistics on lower-level partitions if the statistics on that partition are stale.  This is a significant change: going forward, using the default options of a gather command may in fact perform no re-analyze on the actual data if the modifications to the table do not warrant it.

2.  If a subpartition is stale the normal stats will ONLY gather statistics on that subpartition.  The partition for that subpartition will be re-derived as will the global
    statistics, no other statistics will be gathered.
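
For example, a typical sequence after enabling the incremental preference might look like this (owner and table names are placeholders):

exec dbms_stats.set_table_prefs('DM','SALES_DATA','INCREMENTAL','TRUE');
exec dbms_stats.gather_table_stats('DM','SALES_DATA');   -- only stale (sub)partitions are re-analyzed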

Making this change promises to reduce statistics gathering time by hours in some cases.

For more information: Demantra Gathering Statistics on Partitioned Objects Oracle RDBMS 11gR2, MOS Note 1601596.1

Monday Apr 27, 2015

The table_reorg procedure run for SALES_DATA errored due to lack of tablespace.  Can I delete the RDF segments?

Yes, you may delete these temporary dbms_redefinition tables.  They are snapshots of the table to be reorganized; the redefinition requires creating a
snapshot.  While you can use SQL to release the RDF segments, it is advised that you call dbms_redefinition.abort_redef_table to release the snapshot, then re-start the procedure.  The dbms_redefinition.abort_redef_table procedure requires three parameters: schema, original table name, interim table name.

Use the following PL/SQL:

BEGIN
  DBMS_REDEFINITION.ABORT_REDEF_TABLE (
    uname       => '&demantra_schema_name',
    orig_table  => '&original_table_name',
    int_table   => '&interim_table_name');
END;
/

Wednesday Apr 01, 2015

New SQL to Determine Out of Order Ratio and Cluster Factor

Hello!  Please run the following for both mdp_matrix and sales_data.     Questions?  Email at Jeffery.goulette@oracle.com

1. The first SQL will need to be adjusted to accommodate your PK for sales_data and your PK for mdp_matrix.  Adjust the SELECT and FROM clauses to match your PK.  In the case of this sample, the PK was:

ITEM_ID
LOCATION_ID
SALES_DATE
IS_T_EP_SPF


-- ShowOOR.sql

SELECT (ROUND((
 (SELECT COUNT(*) AS CNT
 FROM
   (SELECT ITEM_ID,
     LOCATION_ID,
     SALES_DATE,
     IS_T_EP_SPF ,
     RELATIVE_FNO ,
     BLOCK_NUMBER ,
     ROW_NUMBER ,
     DATA_ROW ,
     (LAG(DATA_ROW) OVER(PARTITION BY RELATIVE_FNO, BLOCK_NUMBER ORDER BY ROW_NUMBER)) AS PREV_DATA_ROW
   FROM
     (SELECT ITEM_ID,
       LOCATION_ID,
       SALES_DATE,
       IS_T_EP_SPF ,
       RELATIVE_FNO ,
       BLOCK_NUMBER ,
       ROW_NUMBER ,
       (DENSE_RANK() OVER(PARTITION BY RELATIVE_FNO, BLOCK_NUMBER ORDER BY ITEM_ID,LOCATION_ID,SALES_DATE,IS_T_EP_SPF)) AS DATA_ROW
     FROM
       (SELECT ITEM_ID,
         LOCATION_ID,
         SALES_DATE,
         IS_T_EP_SPF ,
         DBMS_ROWID.ROWID_RELATIVE_FNO(ROWID) RELATIVE_FNO ,
         DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID) AS BLOCK_NUMBER ,
         DBMS_ROWID.ROWID_ROW_NUMBER(ROWID)   AS ROW_NUMBER
       FROM SALES_DATA
       ) C
     ) B
   ) A
 WHERE DATA_ROW != PREV_DATA_ROW
 AND DATA_ROW   != PREV_DATA_ROW + 1
 )                               /
 (SELECT COUNT(*) FROM SALES_DATA
 )),3)*100) AS "Out Of Order Ratio %"
FROM DUAL;


-- ShowCF.sql
undefine table_name
SELECT ui.index_name, us.blocks as "Table Blocks", ui.clustering_factor as "Index clustering Factor", ut.num_rows as "Table Rows"
FROM   user_indexes     ui,
       user_tables      ut,
       user_constraints uc,
       user_segments us
WHERE  ui.table_name = ut.table_name
AND    ut.table_name = uc.table_name
AND    ui.index_name = uc.index_name
AND    ut.table_name = us.segment_name
AND    us.segment_type = 'TABLE'
AND    uc.constraint_type = 'P'
AND    ut.table_name      = '&table_name';

Friday Feb 13, 2015

Demantra Table Reorganization, Fragmentation, Null Columns, Primary Key, Editioning, Cluster Factor, PCTFREE, Freelist, Initrans, Automatic Segment Management (ASM), Blocksize....

Hello All!   Why does Demantra require table reordering and null column movement on what seems to be a frequent basis?  Your other apps do not require this type of action.  Can this be explained?  I can give you the following.  Let me know if you have questions.  Thank You!   Jeff

 

Addressing the reason why Demantra requires frequent reordering/reorganization and null column placement: as data is inserted into a Demantra table, it will not be in any particular order, and the null columns will be scattered throughout the row.  The main goal is to increase throughput and reduce contention.  Since Demantra is both OLTP and data warehouse, the apps DBA must tune accordingly.  Of course EBS and VCP applications have elements of both OLTP and data warehouse, but they are largely transaction processing at their core.  Applications such as EBS and VCP can typically provide satisfactory response time using typical DBA tuning methods; specifically, indexing and proper SQL tuning are plenty to improve and maintain acceptable performance.  However, large tables in data warehouse implementations require unique DBA tuning techniques to reduce IO when accessing the big database tables.  Reducing wait times by eliminating contention is the main goal of table reorganization and the movement of null columns to the end of the row.  The data warehouse requirements of Demantra are no different than those of other data warehouse applications.

By themselves these techniques are not unique and are implemented in data warehouse applications every day.  For each and every read operation we want to fetch as many relevant rows as possible.  To that end we reorder the large tables according to the PK of each table.  The PK is the main access method to the data according to the business requirement.  In the case of Demantra, the PK chosen covers approximately 80% of the data access needed for the main forecasting worksheets.  The rest are addressed using typical DBA tuning methods.  There are new booked and shipped orders inserted into the Demantra world daily, weekly, etc.  When this occurs, the rows naturally lose table/index locality and thus the selectiveness is reduced.  This increases the cluster factor because the number of read operations increases to fetch the desired rows.  The CBO recognizes this and calculates the access method accordingly.

We want to assist the CBO decision-making process as much as possible.  If data were static this effort would be greatly reduced.  Because the Demantra data is dynamic, we need to push the CBO our way to deliver the access plan we desire.  If the table is in Primary Key (PK) order, making your Out of Order Ratio (OOR) low, rows are brought into the SGA in sequential order, reducing the multi-block reads needed to process the data.  The Cluster Factor (CF) is closely related to the OOR.

To review the cluster factor concept, see Demantra Large Database Object Diagnosis and Maintenance Best Practices for Best Performance (Doc ID 1629078.1)

The selectivity of a column value, the db_block_size, the avg_row_len and the cardinality all work together in helping the optimizer to decide whether to use an index versus a full table scan.  If a data column has high selectivity and a low CF, then an index scan is usually the fastest execution method.  Once your large tables are in PK order with a low CF, performance can increase dramatically.  The main goal of table reorganization is to reduce IO when accessing the big database tables.  What it does is two main things:

1. Reorders the table data according to the primary key index.
2. When using the “C” option, which stands for column reordering, it also pushes columns that have no data (nulls) to the end of the table row.  This is a good thing for tables having a high number of rows, as Oracle works in 254-column chunks.

Some of the larger Demantra tables have upwards of 200 columns.  Since every customer uses the Demantra tool in a different manner, a portion of these columns will naturally be left null.  Pushing the null columns to the end of the row decreases data access waits and contention by streamlining the fetch action.  The end result is a column-ordered, less fragmented table that reduces the IO operations for range operations, for example scanning data between dates.  The cluster factor will be lower and thus the CBO will look at our desired access method favorably.
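
To see which columns are mostly null, and are therefore candidates to move to the end of the row, the optimizer statistics can be queried; a small sketch, assuming current statistics and a placeholder schema:

select column_name, num_nulls, num_distinct
from dba_tab_col_statistics
where owner = '&schema_owner'
and table_name = 'MDP_MATRIX'
order by num_nulls desc;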

Demantra is sensitive to database scans therefore reviewing and implementing sound database performance techniques is essential.  While the EBS and Value Chain Planning (VCP) RDBMS can provide adequate response with proper statistics, indexes, tuning and patching, Demantra brings processing methods to the table that require tighter controls.  I will highlight some of these areas that require tighter control.

1) Tables that have 300+ columns, many of which are null.  The columns are in the table to represent the family of Demantra solutions, for
   example Demand Planning, Advanced Demand Planning, Promotion Planning, S&OP, etc.  When the null columns are pushed to the end of the row, the
   read operation becomes streamlined, thus increasing performance.

2) Demantra moves significant amounts of data to and from the disk.  It behooves the implementer/Applications DBA to implement a strategy that reduces wait constraints.  Here is a sample of techniques that require strategic implementation attention:

   - Block size 16k or larger
   - PCTFREE set at 30% to 40%
   - FREELIST increased according to your particular implementation
   - INITRANS increased according to your particular implementation
   - Statistics maintained at a 30% sample or larger
   - Parallel Processing at every possible contention point
   - Redo size appropriate to keep switches at 3-4 per hour (see the query after this list)
   - No Automatic Segment Management (ASM)
   - No Editioning
   - Partitions implemented
   - Large SGA to accommodate multi-block reads
   - Synchronous IO implemented
   - Careful setting of Demantra CBO parameters
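
To check the current switch rate against that 3-4 per hour target, v$log_history can be queried; a minimal sketch:

select to_char(first_time,'YYYY-MM-DD HH24') as hour, count(*) as log_switches
from v$log_history
group by to_char(first_time,'YYYY-MM-DD HH24')
order by 1;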


In conclusion, it is true that Demantra requires RDBMS implementation strategies that are demanding but not uncommon for OLTP and warehouse systems.
However, VCP requires one trick that is unique to VCP: creating statistics on empty MLOG$ objects, then locking the statistics to the object.
It has been proven that this increases performance by reducing a bottleneck.
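
A sketch of that trick, assuming a placeholder schema and snapshot log name, and that the MLOG$ object is empty at gather time:

exec dbms_stats.gather_table_stats(ownname => '&schema_owner', tabname => 'MLOG$_SALES_DATA');  -- gather while the log is empty
exec dbms_stats.lock_table_stats(ownname => '&schema_owner', tabname => 'MLOG$_SALES_DATA');    -- then lock the stats in place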

Reference Notes:
- Demantra Out of Order Ratio, Table_reorg Procedure, Partitions and Clustering Factor - Manage Your Demantra Database Objects (Doc ID 1548760.1)
- Demantra DB Health and Performance: Oracle Demantra Database Best Practices - A White Paper / Demantra Performance Clustering Factor Out of Order Ratio TABLE_REORG CHECK_REORG (Doc ID 1499638.1)
- Demantra Performance Clustering Factor Out of Order Ratio TABLE_REORG CHECK_REORG (Doc ID 1594372.1)
- Demantra How to Use TABLE_REORG to Reorder MDP_MATRIX in Primary Key (PK) Order Action Plan Version 7.3.1.3 and Later. See 1085012.1 Prior to 7.3.1.3 (Doc ID 1528966.1)
- Demantra Large Database Object Diagnosis and Maintenance Best Practices for Best Performance (Doc ID 1629078.1)

Wednesday Oct 01, 2014

Demantra Performance and Setup Analyzer v2.00 is FINALLY Available!

Hello Demantra Customer!  There is a new release of the Demantra Performance and Setup Analyzer!  Version 2.00 has been released!
You can pick and choose any SQL that you prefer, given the task at hand.   See MyOracleSupport note
Demantra Analyzer script for Setup and Performance Monitoring (11.5.X and R12), 1618885.1

I am considering adding more topical analysis and deeper digging into additional details for the next release. 

Any questions / comments, please email Jeffery.goulette@oracle.com

Thank You!

Demantra 12.2.4 Performance Enhancement Data Points that you need to know!

Demantra 12.2.4 Customers, There is a new MyOracleSupport note highly recommended by Demantra Development.  There are new performance enhancements available!  However, there is additional information in this note that is critical to implement some of these new enhancements.

Please see, Demantra 12.2.4 Worksheet Performance Enhancements Parameter dynamic_hint_enabled. Development Recommended Proper Setup and Use (Doc ID 1923933.1)

Friday Jul 11, 2014

Demantra Index Cluster Factor Major Update Performance Impact

Hello Everyone!   Please check out the following important information regarding the cluster factor and the impact to the CBO.   Please comment if you like.

Regards!   Jeff

 The Cluster Factor (CF) of an index is a very important statistic used by the Cost Based Optimizer (CBO) to determine
the most efficient execution plan.  However, there is a major flaw in the current CF calculation.  Simply put, the CF
is calculated by performing a Full Index Scan and looking at the rowid of each index entry.  If the table block being referenced
is different from the previous index entry, the CF is incremented.  If the table block being referenced is the same
as the previous index entry, the CF is not incremented.  So the CF gives an indication of how well ordered the data in the table is
in relation to the chosen index entries.

The lower the CF (closer to the number of blocks in the table versus the number of rows), the more efficient it is to use the index, as
fewer table blocks would need to be accessed to retrieve the necessary data via the chosen index.

So far so good.  However, what if part of the incoming row is stored in block 1 and the remainder in block 2?  When the row is
required, block 1 and block 2 will be read into cache, resulting in near-zero waiting for the entire row.  OK, fast forward: what if
the table has 100m rows, and this situation occurs 30% of the time?  Your current CF is most likely skewed
and incorrect.

Now for the good news, bug 13262857 - INDEX CLUSTERING FACTOR COMPUTATION IS PESSIMISTIC.  The computation of the index clustering
factor in dbms_stats package is pessimistic about the caching ratio of the table blocks.  It assumes that at most one block from the
table is cached.

This is an enhancement to allow a user to specify the number of blocks that dbms_stats package will consider when gathering the index clustering
factor statistics. Prior to this enhancement dbms_stats assumed 1 and it still does after the enhancement.  This enhancement allows the user to
specify a value between 1 and 255.  There is also an AUTO option which if specified then dbms_stats will use 1% of the table blocks up to
0.1% of the buffer cache size, in blocks.  

The new CF approach is currently available with patches that can be applied on both Exadata databases and Oracle versions 11.1.0.7, 11.2.0.2 and 11.2.0.3.
The Patch ID is 15830250.

Once applied, there is a new statistics collection preference that can be defined, called TABLE_CACHED_BLOCKS.  This basically sets the number of
table blocks we can assume would already be cached when performing an index scan and can be ignored when incrementing the CF during statistics gathering.
The default is 1, as stated, the same as today.  But can be set up to be a value between 1 and 255, meaning that during the collection of index statistics
the process will not increment the CF if the table block being referenced by the current index entry has already been referenced by any of the prior 255 index
entries, if set to 255.  It basically sets the appropriate parameter in the sys_op_countchg function used to calculate the CF value during statistic gathering
to not increment the CF if the current table block has already been accessed “x” index entries previously.

The TABLE_CACHED_BLOCKS preference can be set by either the DBMS_STATS.SET_TABLE_PREFS, DBMS_STATS.SET_SCHEMA_PREFS or DBMS_STATS.SET_DATABASE_PREFS procedures.
Apply the patch, change the TABLE_CACHED_BLOCKS preference for this table and re-calculate the index statistics:

SQL> exec dbms_stats.set_table_prefs(ownname=>'DM', tabname=>'SALES_DATA', pname=>'TABLE_CACHED_BLOCKS', pvalue=>50);

PL/SQL procedure successfully completed.

SQL> exec dbms_stats.gather_index_stats(ownname=>'DM', indname=>'SALES_DATA', estimate_percent=>null);

PL/SQL procedure successfully completed.

SQL> SELECT t.table_name, i.index_name, t.blocks, t.num_rows, i.clustering_factor
2  FROM user_tables t, user_indexes i
3  WHERE t.table_name = i.table_name AND i.index_name='SALES_DATA';

Although I found this bug internally, many thanks to several on the WWW for their comments.



Tuesday Nov 19, 2013

Oracle Demantra Gathering Statistics on Large Partitioned Tables in 11gR2

Hello!   Have you implemented partitions on a large Demantra table?  Are you using RDBMS version 11gR2?  Then consider implementing incremental statistics.

For all 11gR2 environments with large partitioned or subpartitioned objects, turn on incremental statistics using this command:

exec dbms_stats.set_table_prefs('OWNER','TABLE','INCREMENTAL','TRUE');

Once this is set for any given table, we should gather statistics on that table using the normal tool (fnd_stats in EBS or dbms_stats elsewhere).  The first gather after turning it on will take longer than previous analyzes.  Going forward we will see the following:

1 – The normal dbms_stats (or fnd_stats) will only gather statistics on lower-level partitions if the statistics on that partition are stale.  This is a significant change: going forward, using the default options of a gather command may in fact perform no re-analyze on the actual data if the modifications to the table don’t warrant it.

2 – If a subpartition is stale, the normal stats will ONLY gather statistics on that subpartition.  The partition for that subpartition will be re-derived, as will the global statistics; no other statistics will be gathered.

Making this change promises to reduce statistics gathering time by hours in some cases.

 

Saturday Oct 26, 2013

Demantra Partitioning and the First PK Column

  We have found that it is necessary in Demantra to have an index that matches the partition key, although it does not have to be the PK.  It is OK
  to create a new index instead of changing the PK.

  For example, if my PK on SALES_DATA is (ITEM_ID, LOCATION_ID, SALES_DATE) and I decide to partition by SALES_DATE, then I should add an index starting
  with the partition key, like this: (SALES_DATE, ITEM_ID, LOCATION_ID).

  * Note that the first column of the new index matches the partition key.

  It might also be helpful to create a 2nd index with the other PK columns reversed (SALES_DATE, LOCATION_ID, ITEM_ID). Again, the first column
  matches the partition key.
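
A sketch of such indexes; the names are placeholders, and LOCAL assumes you want them equipartitioned with the table:

CREATE INDEX sales_data_part_key_idx ON sales_data (sales_date, item_id, location_id) LOCAL;

-- The optional second index with the other PK columns reversed
CREATE INDEX sales_data_part_key_idx2 ON sales_data (sales_date, location_id, item_id) LOCAL;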

Wednesday Oct 23, 2013

Demantra Performance Clustering Factor Out of Order Ratio TABLE_REORG CHECK_REORG (Doc ID 1594372.1)

Hello!   There is a new document available:

Demantra Performance Clustering Factor Out of Order Ratio TABLE_REORG CHECK_REORG (Doc ID 1594372.1)

Demantra Performance Clustering Factor Out of Order Ratio TABLE_REORG CHECK_REORG

The table reorganization can be set up to run automatically in version 7.3.1.5.

In version 12.2.2 we run the TABLE_REORG.CHECK_REORG function at every appserver restart.
If the function recommends a reorg, then we strongly encourage you to reorg the database object.
This is documented in the official docs.

In versions 7.3.1.3 and 7.3.1.4, the TABLE_REORG module exists and can be used.
It has two main functions that are documented in the Implementation Guide Supplement,
Release 7.3, Part No. E26760-03, chapter 4.

In short, if you are using version 7.3.1.3 or higher, you can check for the need to run a reorg by doing the following 2 steps:

1. Run TABLE_REORG.CHECK_REORG('T');
2. Check the table LOG_TABLE_REORG for recommendations
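
Both steps as a runnable sketch (the 'T' argument is as documented above; the LOG_TABLE_REORG column layout varies by version, so a plain select is shown):

BEGIN
  TABLE_REORG.CHECK_REORG('T');
END;
/

select * from log_table_reorg;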

If you are on a version before 7.3.1.3, you will need to follow the instructions below to determine if you need to do a manual reorg.

How to determine if a table reorg is needed

1. It is strongly encouraged by DEV that you gather statistics on the required table.  The preferred percentage for the gather is 100% (see the sketch below).
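
A minimal sketch of such a gather; the schema and table names are substitution placeholders, and cascade => TRUE is my assumption so that the index statistics used in step 2 are also current:

exec dbms_stats.gather_table_stats(ownname => '&schema_owner', tabname => upper('&table_name'), estimate_percent => 100, cascade => TRUE);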

2. Run the following SQL to evaluate how table reorg might affect Primary Key (PK) based access:

 
SELECT ui.index_name,trunc((ut.num_rows/ui.clustering_factor)/(ut.num_rows/ut.blocks),2)
FROM user_indexes ui, user_tables ut, user_constraints uc
WHERE ui.table_name=ut.table_name
AND ut.table_name=uc.table_name
AND ui.index_name=uc.index_name
AND UC.CONSTRAINT_TYPE='P'
AND ut.table_name=upper('&enter_table_name');
 

3. Based on the result:

VALUE ABOVE 0.75 - DOES NOT REQUIRE REORG

VALUE BETWEEN 0.5 AND 0.75 - REORG IS RECOMMENDED

VALUE LOWER THAN 0.5 - IT IS HIGHLY RECOMMENDED TO REORG

Note that the expression simplifies to blocks/clustering_factor.  For example, a table stored in 50,000 blocks whose PK index has a clustering factor of 80,000 scores 50,000/80,000 = 0.62, so a reorg is recommended.

Friday Sep 27, 2013

Implementing Large-Scale Demantra Table Rebuilds To Improve Performance with Zero Downtime Using DBMS_REDEFINITION

Greetings Demantra Users! There is an excellent note detailing Oracle's implementation of Demantra in a big data environment. This was Oracle's largest ever Internal value chain planning database table. Read about how we did this, implementing RDBMS partitions. See Implementing Large-Scale Demantra Table Rebuilds To Improve Performance with Zero Downtime (Doc ID 1587179.1)

Wednesday Jul 17, 2013

Demantra Using Oracle database Real Application Clusters (RAC)

Demantra version:7.3.1.2

Database version:11.2.0.3

Due to the large amount of data within our system, we are planning to implement RAC to improve performance.  We have already implemented other performance guidelines from Oracle, like reordering columns, partitioning the sales_data table, etc.  For further performance improvement, since we expect more stores and items to be introduced, which would increase data volume, we are planning to go for RAC, as the Demantra Installation Guide mentions that RAC is supported with Demantra.

We would like the below information before proceeding with this major step:

1.) For Distributed Engine Scaling - Oracle Doc ID 468688.1 states that "The number of engines/machines defined in the Engine Administrator should not exceed 3 times the number of database CPUs".  So we want to know: in the case of RAC, will the above calculation be based on the cumulative CPUs of all database RAC nodes?  I.e., when using RAC, can we set the total number of engines up to 3 times the cumulative CPUs of all database RAC nodes?

Answer 1: For Distributed Engine Scaling -- using the cumulative CPU count is not quite right for RAC, since there will be extra interconnect overhead in a RAC system.  So you can try using the "3 times CPUs" rule, but you might have to scale it back based on empirical performance.

2.) Using a Distributed Engine with one master and multiple slave machines. Is it possible to point different slave engines to different RAC nodes?

Answer 2: Is it possible to point different slave engines to different RAC nodes?  No.  The RAC will automatically load balance when you connect using the SCAN addresses.

About

This blog delivers the latest information regarding performance and install/upgrade. Comments welcome
