Tuesday May 05, 2015


When you submit TABLE_REORG, DBMS_REDEFINITION is used.  This creates an MLOG$_ database object.  If the table you supplied to the TABLE_REORG procedure has a Primary Key (PK), then when DBMS_REDEFINITION creates the MLOG$_ object it also automatically creates a RUPD$_ object.  These objects are meant to be used for Java RepAPI.  If you execute a 'drop snapshot log on tablename', the snapshot log as well as the temporary snapshot log are dropped.  However, dropping the MLOG$_ object directly is not best practice.

If these objects exist prior to executing TABLE_REORG, you will see the following in the log_table_reorg table: 'Table cannot be redefinitioned.'  This means that one or both objects/segments exist.  For example, if a TABLE_REORG run for MDP_MATRIX failed, the following segments would most likely be left behind:


Use the following SQL to verify (10g and above):

select substr(object_name,1,30)
from dba_objects
where regexp_like(object_name, 'MLOG\$|RUPD\$')
and owner = '&Schema_owner';

While you can drop these temporary RDBMS segments, it is best practice to call DBMS_REDEFINITION.ABORT_REDEF_TABLE:

   begin
     dbms_redefinition.abort_redef_table (
       uname       => '&demantra_schema_name',
       orig_table  => '&original_table_name',
       int_table   => '&interim_table_name');
   end;
   /

In the above, supply the arguments:

- uname: DM, or your schema name
- orig_table: MDP_MATRIX, the original table name
- int_table: MLOG$_MDP_MATRIX, the interim table name

Verify that the objects are dropped.  Submit the TABLE_REORG again AFTER repairing the cause of the last failure.
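To confirm, you can re-run the same dictionary check shown above; after a successful abort it should return no rows (note the $ is escaped because it is a regex metacharacter):

```sql
-- confirm no MLOG$_/RUPD$_ segments remain for the schema
select substr(object_name,1,30)
from dba_objects
where regexp_like(object_name, 'MLOG\$|RUPD\$')
and owner = '&Schema_owner';
```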

Wednesday Apr 29, 2015

Did you know that TABLE_REORG has replaced REBUILD_SCHEMA and REBUILD_TABLES?

The Demantra TABLE_REORG procedure can address partitioned tables and is more efficient!  Get the latest release at My Oracle Support using bug 17640575.

TABLE_REORG has replaced and improved upon the functionality of REBUILD_SCHEMA and REBUILD_TABLES.
It rebuilds the table in primary key order and moves empty columns to the end of the row.

REBUILD_SCHEMA uses ALTER TABLE MOVE tablespace to reduce chained rows of all tables in the schema.
However, it does not support partitioned tables.

REBUILD_TABLES is similar.  It was originally designed for MDP_MATRIX / SALES_DATA, but it can run for all tables or
for a specific table.  Since 2010 it has supported partitioned tables.

The MOVE_TABLE procedure was fixed to handle partitioned tables.  It is also somewhat out of date; parts of the code still use ANALYZE TABLE
(e.g. for SALES_DATA and MDP_MATRIX).  For an "all tables" run it uses ANALYZE_SCHEMA, which does use dbms_stats.GATHER_TABLE_STATS.

All tables run: only tables where the stats value chain_cnt > 0 are included.  SALES_DATA is not automatically included unless the sys_params parameter 'Rebuild_Sales_Table' = 1.

REBUILD_TABLES ( table name,  stats check,  sales,  all tables flag)

exec REBUILD_TABLES ( null, 1, null, 1) ;    -- with ANALYZE_SCHEMA(100000); use when stats are missing or very old
exec REBUILD_TABLES ( null, 0, null, 1) ;

exec REBUILD_TABLES ( null, 0, 1, 1) ;   -- Will include SALES_DATA
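Based on the signature above, a run for a single table might look like the following sketch (the table name and flag values here are illustrative):

```sql
-- rebuild only MDP_MATRIX, checking stats first; all-tables flag off
exec REBUILD_TABLES ( 'MDP_MATRIX', 1, null, 0) ;
```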

For more information see: Troubleshooting TABLE_REORG Package issues - RDF Snapshot drop when process fails + TABLE_REORG Guide MOS Note 1964291.1

Gathering statistics on partitioned tables.  Best practice:

For all 11gR2 environments with large partitioned or subpartitioned objects, turn on incremental statistics using this command:

exec dbms_stats.set_table_prefs('OWNER','TABLE','INCREMENTAL','TRUE');

Once this is set for any given table, gather statistics on that table using the normal tool (fnd_stats in EBS, or dbms_stats elsewhere).
The first gather after turning it on will take longer than previous gathers.  Going forward, we will see the following:
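As a sketch, assuming a schema DM and table SALES_DATA (both placeholder names), the sequence would be:

```sql
-- enable incremental statistics for the partitioned table
exec dbms_stats.set_table_prefs('DM','SALES_DATA','INCREMENTAL','TRUE');

-- first gather after enabling: expect it to take longer than usual
exec dbms_stats.gather_table_stats('DM','SALES_DATA');

-- verify the preference took effect
select dbms_stats.get_prefs('INCREMENTAL','DM','SALES_DATA') from dual;
```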

1.  The normal dbms_stats or fnd_stats will only gather statistics on lower-level partitions if the statistics on that partition are stale.  This is a significant change: going forward, a gather command run with default options may perform no re-analysis of the actual data if the modifications to the table do not warrant it.

2.  If a subpartition is stale, the normal stats gather will ONLY gather statistics on that subpartition.  The statistics for its parent partition, as well as the global
    statistics, will be re-derived; no other statistics will be gathered.

Making this change promises to reduce statistics-gathering time by hours in some cases.

For more information: Demantra Gathering Statistics on Partitioned Objects Oracle RDBMS 11gR2, MOS Note 1601596.1

Monday Apr 27, 2015

The TABLE_REORG procedure run for SALES_DATA errored due to lack of tablespace.  Can I delete the RDF segments?

Yes, you may delete these temporary dbms_redefinition tables.  They are snapshots of the table to be reorganized; the redefinition requires creating a
snapshot.  While you can use SQL to release the RDF segments, it is advised that you call dbms_redefinition.abort_redef_table to release the snapshot, then restart the procedure.  The dbms_redefinition.abort_redef_table procedure requires three parameters: schema, original table name, interim table name.

Use the following SQL:

    begin
      dbms_redefinition.abort_redef_table (
        uname       => '&demantra_schema_name',
        orig_table  => '&original_table_name',
        int_table   => '&interim_table_name');
    end;
    /

Thursday Apr 02, 2015

Demantra Installing as SYS or SYSTEM Installer Fails or is Failing

We are aware of a bug that might raise an error during the installation of the Generic Patch recently released.

In some cases, when using SYS as the DBA user with the “Enable Automatic Table Maintenance” checkbox checked, the sys_grants.sql script, which is
run as part of the installation, may fail.

sys_grants.sql uses an internal function that, with a specific combination of hardware/platform, may generate an invalid value.

Run the installer using SYSTEM rather than SYS; then, after the installation completes, replace sys_grants.sql with the attached file and run it as SYS.
Make sure to add “TRUE” as the 4th parameter.

For example:


This has been addressed in release


Uptake of VCP is mandatory for all VCP and Demantra 12.2.4 installations.

Demantra has a new required patch: Patch 19945449 (ARU# 18420277, Demantra patch 12240066) is now available.  It was discovered that the engine throws a segmentation fault while running a CDP consumption profile.  Please see the additional information in the readme notes.

Reference Notes:
Oracle Demantra Post Release Mandatory Patch Application - Patch 19945449 (Doc ID 1960180.1)
Demantra Cumulative Patch. This CU is Specifically for r12.2.4.1. There are 13 Issues Patched and or Improved. (Doc ID 1952805.1)
Oracle Demantra Release Notes for Release (Doc ID 1947062.1)
Demantra Release Known Upgrade and Known Product Issues with Workarounds (Doc ID 1948769.1)
Oracle Demantra Demand Planning Announcing New Release is Now Available (Doc ID 1948684.1)
Troubleshooting Demantra Installation (Doc ID 1986634.1)


Wednesday Apr 01, 2015

New SQL to Determine Out of Order Ratio and Cluster Factor

Hello!  Please run the following for both mdp_matrix and sales_data.  Questions?  Email Jeffery.goulette@oracle.com

1. The first SQL will need to be adjusted to accommodate your PK for sales_data and your PK for mdp_matrix.  Adjust the select and from clause to match your PK.   In the case of this sample, the PK was:


-- ShowOOR.sql
-- (The original listing was garbled; this is a reconstructed sketch.)
-- It compares each row's physical position (ROWNUM) with its position in
-- PK order.  Adjust the PK column list (IS_T_EP_SPF here, from the sample)
-- to match your table's PK.
SELECT round((SUM(out_of_order) / COUNT(*)), 3) * 100 AS "Out Of Order Ratio %"
FROM (
  SELECT CASE
           WHEN data_row <> ROW_NUMBER() OVER (ORDER BY is_t_ep_spf)
           THEN 1 ELSE 0
         END AS out_of_order
  FROM (
    SELECT is_t_ep_spf, ROWNUM AS data_row
    FROM mdp_matrix
  )
);

-- ShowCF.sql
undefine table_name
SELECT ui.index_name, us.blocks as "Table Blocks", ui.clustering_factor as "Index clustering Factor", ut.num_rows as "Table Rows"
FROM   user_indexes     ui,
       user_tables      ut,
       user_constraints uc,
       user_segments us
WHERE  ui.table_name = ut.table_name
AND    ut.table_name = uc.table_name
AND    ui.index_name = uc.index_name
AND    ut.table_name = us.segment_name
AND    us.segment_type = 'TABLE'
AND    uc.constraint_type = 'P'
AND    ut.table_name      = '&table_name';
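To interpret the output: a clustering factor close to the number of table blocks indicates rows are well ordered relative to the index, while a value close to the number of rows indicates poor ordering.  As a sketch, the same join can normalize the clustering factor directly:

```sql
-- cf_vs_blocks near 1 is good; cf_vs_rows near 1 indicates poor clustering
SELECT ui.index_name,
       round(ui.clustering_factor / NULLIF(us.blocks, 0), 2)   AS cf_vs_blocks,
       round(ui.clustering_factor / NULLIF(ut.num_rows, 0), 2) AS cf_vs_rows
FROM   user_indexes     ui,
       user_tables      ut,
       user_constraints uc,
       user_segments    us
WHERE  ui.table_name   = ut.table_name
AND    ut.table_name   = uc.table_name
AND    ui.index_name   = uc.index_name
AND    ut.table_name   = us.segment_name
AND    us.segment_type = 'TABLE'
AND    uc.constraint_type = 'P'
AND    ut.table_name      = '&table_name';
```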

Thursday Feb 26, 2015

Setting Worksheet Related Parameters and Hardware Requirement Example

 Hello!   This is an example of determining how to set critical aps_params for worksheet performance:

If you expect to have 150 users with a concurrency rate of 30%, your settings and hardware should be:

   150 users * 30% = 45 concurrent users

   threadpool.query_run.per_user = 8

   threadpool.query_run.size = number of concurrent users (45) * threadpool.query_run.per_user (8) = 360

   MaxDBConnections = threadpool.query_run.size + 10 = 370

   To run a production environment with those settings you should have:

   Database: DB machine with ~60 CPUs (≈ MaxDBConnections / 6)

   Application Server: AP with 27 CPUs (= threadpool.query_run.size / 16 + 4), 4GB of memory

Only with this hardware can you run Demantra with 150 users at a 30% concurrency rate.  Is your hardware powerful enough for your environment?

Tuesday Feb 17, 2015

Demantra Table_Reorg. A new version has been released!

Demantra Customers, there is a new release of the table_reorg Tool.  This patch will install updates to the TABLE_REORG tool.  Demantra Version: 7.3.1.x and 12.2.x.  The minimum 7.3.1.x version is   You can download this patch using bug 17640575 in My Oracle Support.  For additional information see MOS note:
Demantra TABLE_REORG Tool New Release with Multiple Updates! Partitions, Drop_temps and More! 7.3.1.x to 12.2.x. (Doc ID 1980408.1)

Friday Feb 13, 2015

Demantra Table Reorganization, Fragmentation, Null Columns, Primary Key, Editioning, Cluster Factor, PCT Free, Freelist, Initrans, Automatic Segment Management (ASM), Blocksize....

Hello All!   Why does Demantra require table reordering and null column movement on a, what seems to be, frequent basis?  Your other apps do not require this type of action.  Can this be explained?   I can give you the following.  Let me know if you have questions.  Thank You!   Jeff


Addressing the reason why Demantra requires frequent reordering/reorganization and null column placement, I can explain the reason as follows.  As data is inserted into the Demantra table it will not be in any particular order.  The null columns will be scattered throughout the row.  The main quest is to increase throughput and reduce contention.  Since Demantra is both OLTP and data warehouse the apps DBA must tune accordingly.  Of course EBS and VCP applications have elements of both OLTP and data warehouse but are largely transaction processing at their core.  Applications such as EBS and VCP can typically provide satisfactory response time using typical DBA tuning methods.  Specifically, indexing and proper SQL tuning is plenty to improve and maintain acceptable performance.  However, large tables in data warehouse implementations require unique DBA tuning techniques to reduce IO when accessing the big database tables.  Reducing wait times by eliminating contention is the main goal of table reorganization and the movement of null columns to the end of the row.  The data warehouse requirements of Demantra are no different than other data warehouse applications.

By themselves these are not unique and are implemented in data warehouse applications every day.  For each and every read operation we want to fetch as many relevant rows as possible.  To that end we reorder the large tables according to the PK of each table.  The PK is the main access method to the data according to the business requirement.  In the case of Demantra, the PK chosen covers approximately 80% of the data access needed for the main forecasting worksheets.  The rest are addressed using typical DBA tuning methods.  There are new booked and shipped orders inserted into the Demantra world daily, weekly, etc.  When this occurs, the rows naturally lose table/index locality and thus selectivity is reduced.  This increases the cluster factor because the number of read operations increases to fetch the desired rows.  The CBO recognizes this and calculates the access method accordingly.

We want to assist the CBO decision-making process as much as possible.  If data were static this effort would be greatly reduced.  Because Demantra data is dynamic, we need to nudge the CBO our way to deliver the access plan we desire.  If the table is in Primary Key (PK) order, making your Out of Order Ratio (OOR) low, rows are brought into the SGA in sequential order, reducing the multi-block reads needed to process the data.  The Cluster Factor (CF) is closely related to the OOR.

To review the cluster factor concept, see Demantra Large Database Object Diagnosis and Maintenance Best Practices for Best Performance (Doc ID 1629078.1)

The selectivity of a column value, the db_block_size, the avg_row_len and the cardinality all work together in helping the optimizer to decide whether to use an index versus a full table scan.  If a data column has high selectivity and a low CF, then an index scan is usually the fastest execution method.  Once your large tables are in PK order with a low CF, performance can increase dramatically.  The main goal of table reorganization is to reduce IO when accessing the big database tables.  What it does is two main things:

1. Reorders the table data according to the primary key index.
2. When using the “C” option, which stands for column reordering, it also pushes columns that have no data (nulls) to the end of the table row.  This is a good thing for tables with a large number of columns, as Oracle stores rows in 254-column chunks.

Some of the larger Demantra tables have upwards of 200+ columns.  Since every customer uses the Demantra tool in a different manner, a portion of these columns will naturally be left null.  Pushing the null columns to the end of the row decreases data-access waits and contention by streamlining the fetch action.  The end result is a column-ordered, less fragmented table that reduces the IO for range operations, for example scanning data between dates.  The cluster factor will be lower, and thus the CBO will look on our desired access method favorably.
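As a sketch of how to see which columns actually carry data in a given install (assuming statistics are reasonably current), the data dictionary records per-column null counts:

```sql
-- columns with num_nulls close to the table's num_rows hold no data here
select column_name, num_nulls, num_distinct
from dba_tab_col_statistics
where owner = '&schema_owner'
and table_name = 'MDP_MATRIX'
order by num_nulls desc;
```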

Demantra is sensitive to database scans therefore reviewing and implementing sound database performance techniques is essential.  While the EBS and Value Chain Planning (VCP) RDBMS can provide adequate response with proper statistics, indexes, tuning and patching, Demantra brings processing methods to the table that require tighter controls.  I will highlight some of these areas that require tighter control.

1) Tables that have 300+ columns, many of which are null.  The columns are in the table to represent the whole family of Demantra solutions, for
   example Demand Planning, Advanced Demand Planning, Promotion Planning, S&OP, etc.  When the null columns are pushed to the end of the row, the
   read operation becomes streamlined, thus increasing performance.

2) Demantra moves significant amounts of data to and from the disk.  It behooves the implementer/Applications DBA to implement a strategy that reduces wait constraints.  Here is a sample of techniques that require strategic implementation attention:

   - Block size 16k or larger
   - PCTFREE set at 30% to 40%
   - FREELIST increased according to your particular implementation
   - INITRANS increased according to your particular implementation
   - Statistics maintained at a 30% sample or larger
   - Parallel Processing at every possible contention point
   - Redo size appropriate to keep switches at 3-4 per hour
   - No Automatic Segment Management (ASM)
   - No Editioning
   - Partitions implemented
   - Large SGA to accommodate multi-block reads
   - Synchronous IO implemented
   - Careful setting of Demantra CBO parameters
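To gauge how badly a table needs attention before applying these techniques, the chain_cnt recorded by ANALYZE can be checked (a sketch; chain_cnt is only populated after an ANALYZE TABLE ... COMPUTE/ESTIMATE STATISTICS run):

```sql
-- report chained/migrated rows per table, worst first
select table_name, num_rows, chain_cnt,
       round(chain_cnt / NULLIF(num_rows, 0) * 100, 2) as pct_chained
from dba_tables
where owner = '&schema_owner'
and chain_cnt > 0
order by chain_cnt desc;
```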

In conclusion, it is true that Demantra requires RDBMS implementation strategies that are unique but not uncommon for OLTP and Warehouse systems.
However, VCP requires one trick that is unique to VCP: creating statistics on empty MLOG$ objects and then locking the statistics on those objects.
This has been proven to increase performance by reducing a bottleneck.
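The MLOG$ technique mentioned above can be sketched as follows (the MLOG$ table name is illustrative; run the gather while the log is empty):

```sql
-- gather stats on the empty materialized view log, then lock them
exec dbms_stats.gather_table_stats('&schema_owner', 'MLOG$_SALES_DATA');
exec dbms_stats.lock_table_stats('&schema_owner', 'MLOG$_SALES_DATA');
```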

Reference Notes:
- Demantra Out of Order Ratio, Table_reorg Procedure, Partitions and Clustering Factor - Manage Your Demantra Database Objects (Doc ID 1548760.1)
- Demantra DB Health and Performance: Oracle Demantra Database Best Practices - A White Paper / Demantra Performance Clustering Factor Out of Order Ratio TABLE_REORG CHECK_REORG (Doc ID 1499638.1)
- Demantra Performance Clustering Factor Out of Order Ratio TABLE_REORG CHECK_REORG (Doc ID 1594372.1)
- Demantra How to Use TABLE_REORG to Reorder MDP_MATRIX in Primary Key (PK) Order Action Plan Version and Later. See 1085012.1 Prior to (Doc ID 1528966.1)
- Demantra Large Database Object Diagnosis and Maintenance Best Practices for Best Performance (Doc ID 1629078.1)

Tuesday Jan 20, 2015

Demantra Large Table Partitions and Using the Flashback Recycle bin, recyclebin, dba_recyclebin and sys.RECYCLEBIN$ Purge Best Practice

This is covered using MOS note:

Demantra Large Table Partitions and Using the Flashback Recycle bin, recyclebin, dba_recyclebin and sys.RECYCLEBIN$ Purge Best Practice (Doc ID 1962730.1)

When you are using the flashback feature and Oracle partitions are involved, you will need to perform additional due diligence.  After the automatic
purge that occurs when the quota is reached, or after you issue a purge command against the recyclebin, you will notice that there are orphaned
BIN$ objects that consume the same space as the original partition that was dropped and then purged.


This was the solution, based on a customer SR that puzzled us at first.  The flashback documentation does not discuss the flashback purge
and partitions.  We recommend following the current best practice when managing the flashback feature and partitions.

If you have already purged, or if the auto purge has occurred, you will need to perform the following (customized, of course, to your BIN$ object names):

drop table lpudemantra."BIN$A3+yI1NBASrgUwoVBkIBKg==$0";
drop table lpudemantra."BIN$A4gGznhiAVbgUwoVBkIBVg==$0";
drop table lpudemantra."BIN$9WCVYRl3BGbgQwoVBkIEZg==$0";

If you have not experienced the purge of the recycle bin, attempt the following:

  1) select count(*) from sys.recyclebin$;

  2) Instead of simply trying to purge the table we can use the following alternative:

     a) flashback table <table_name> to before drop;


          if  <table_name> is currently used by another object:

        flashback table <table_name> to before drop rename to <new_table_name>;

     b) create a script that will drop all the partitions one by one:

        spool drop_<table_name>_partitions.sql

        select 'alter table '|| table_owner|| '.'|| table_name ||' drop partition '|| partition_name||';'
        from dba_tab_partitions
        where table_name='<table_name>';

        spool off

     c) run drop_<table_name>_partitions.sql

     d) drop table <table_name>;

     e) purge recyclebin;

To turn off the recyclebin
  - alter system set recyclebin=off scope=spfile;
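After the restart, the setting can be confirmed with a quick sanity check:

```sql
-- verify the recyclebin parameter is off
select value from v$parameter where name = 'recyclebin';
```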

Friday Jan 09, 2015

Demantra Are you upgrading or have upgraded? New Mandatory Patch is available.

Demantra has a new required patch: Patch 19945449 (ARU# 18420277, Demantra patch 12240066) is now available.
It was discovered that the engine throws a segmentation fault while running a CDP consumption profile.  Please see the additional information in the readme notes.

Wednesday Dec 10, 2014

Hello Demantra Customers! Are you at version A new cumulative patch will be released soon.

Hello Demantra Customers!  A new cumulative patch will be released soon.  This cumulative patch is different from the 12.2.x patch strategy in that it will only
contain new fixes.  It is scheduled to be released later this month, December 2014.  To monitor, see note 1952805.1 or visit the Oracle Demantra blog at https://blogs.oracle.com/demantra/

Wednesday Nov 26, 2014

Oracle Demantra Demand Planning Announcing New Release is Now Available MOS Note 1948684.1

Announcing Oracle Demantra Demand Planning Release

We are excited to announce Oracle Demantra Demand Planning is now available for new and existing customers.

Oracle Demantra Demand Planning includes several features in 12.2.4. Key features include:

  • Oracle In-Memory Consumption-Driven Planning
  • 64-bit Analytical Engine
  • User Access Control
  • Additional Demantra Hierarchies in APCC
  • Launch Management Extended to DM and PTP Components
  • APCC Support for Trade Hierarchies and Data
  • Streamlined Promotion Creation and Improved Method Usability
  • Promotion Calendar Enhancements
  • Printing Promotion Calendar
  • Streamlined Simulations
  • Enhanced Shipment and Consumption Planning Support
  • Demantra- Improved Worksheet Performance
  • VCP and Demantra- Support E1 Long Item Code
  • VCP and Demantra- Certified with Oracle DB in Memory

Uptake of VCP is mandatory for all VCP and Demantra 12.2.4 installations.

  • Demantra Patch Number is 19973580.
  • For E1 customers Long Item Code  Patch Number is 19608405.
  • Demantra will only work with VCP
  • Demantra and VCP will work with EBS 12.1.3 or EBS 12.2.4.


If you would like additional information regarding Oracle Demantra functionality, certification and deployment, please refer to the Demantra Documentation Library on My Oracle Support (Document 443969.1).

Contact Jeffery.goulette@oracle.com with direct questions.   Regards!   Jeff

Monday Nov 24, 2014

UPDATED: Does Demantra support Edition Based Redefinition (EBR)? How do I address EBR in my EBS instance?

Hello Demantra Customers!  Do you know what Edition Based Redefinition (EBR) is or how it impacts Demantra?  EBS uses EBR for online patching.

There is a new update to the referenced MOS note below.  Demantra DOES NOT support Edition Based Redefinition. 

Edition-Based Redefinition enables you to upgrade an application's database objects while the application is in use, thus minimizing or eliminating down time. This is accomplished by changing (redefining) database objects in a private environment known as an edition.  Only when all changes have been made and tested do you make the new version of the application available to users.

Edition-based redefinition allows multiple versions of PL/SQL objects, views and synonyms in a single schema, which makes it possible to perform upgrades of database applications with zero down time.

Large, mission critical applications built on Oracle Database 11g Release 1 and earlier versions are often unavailable for tens of hours while the application’s database objects are patched or upgraded.  Oracle Database 11g Release 2 introduces edition-based redefinition, a revolutionary new capability that allows online application upgrades with uninterrupted availability of the application.  When the installation of the upgrade is complete, the pre-upgrade application and the post-upgrade application can be used at the same time.

EBS supports EBR from the 12.2 release onwards in the online patching feature.
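To check whether a given schema has been editions-enabled (it should not be for Demantra), a quick dictionary query works:

```sql
-- EDITIONS_ENABLED should be 'N' for the Demantra schema
select username, editions_enabled
from dba_users
where username = upper('&demantra_schema_name');
```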

To read about EBR and Demantra, see the new white paper, Oracle Demantra and RDBMS Edition-Based Redefinition (EBR) EBS schema - Processing and Patching (Doc ID 1932210.1)

Friday Oct 31, 2014

Demantra Upgrading to 12.2.x? Critical Known Issues and Lessons Learned

Oracle Demantra customer, are you considering an upgrade to your Oracle Demantra software?  There are three critical MOS notes that you may want to review.

  • 12.2.1, 12.2.2, 12.2.3 and 12.2.4 Install Upgrade ADVISOR: Demantra (Doc ID 1664177.2)
  • DEMANTRA WARNING Have you OR are you Planning to Upgrade to Release 12.2.2? Upgrade from 12.2.2 to 12.2.3 or 12.2.4 Collections EP Load is Failing ORA-20002, ORA-00001, ORA-06512 (Doc ID 1917715.1)
  • Upgrading to Demantra 12.2.4? Here are current known issues from Demantra Development, Proactive Services and Oracle Support (Doc ID 1928367.1)

New White Papers in Development Sponsored White Paper Library!

Hello Demantra Customers!  There is a new Development sponsored Value Chain Planning and Demantra library!  This library is exclusive to Demantra Development and Oracle Proactive Support.  There are excellent white papers and more!  See Development and Proactive Services Document Library White Papers (Doc ID 1669052.1)

 As of today, we provide libraries for:

    Value Chain Planning
    - ASCP Data Collections
    - Demantra Demand Planning  * see below
    - Strategic Network Optimization
    - Rapid Planning
    - Global Order Promising
    - Collaborative Planning
    - WebLogic
    - IO Inventory Optimization
    - SPP Service Parts Planning
    - DP Distribution Planning
    - Integration
    - Information Centers

    Specifically for Demantra
    - Architecture
    - Install and Upgrade
    - Integration, Data Loading and CTO
    - Workflow Issues
    - Worksheet Related
    - Engine Related
    - Performance
    - User Interface Use and Design
    - Diagnostic Tools
    - Weblogic Related


This blog delivers the latest information regarding performance and install/upgrade. Comments welcome

