Monday Nov 23, 2015

Looking for Install Guide, System Requirements, Implementation Guide and User Guides? See Below

Hello!  The guides can be found at the following URL.  They have been updated where required.

Doc 443969.1 will be updated as soon as possible.  

Regards!  Jeff

Wednesday Nov 18, 2015

Demantra Demand Planning Analyzer v200.2 is NOW Available!

Hello!  There is a new version of the Demantra Analyzer.  

See the latest in this MOS note: 1618885.1

See the overview doc here: 2079537.1

There are additional data points, a new format that is easier to review, and additional comprehensive verification.  Give it a try!

Regards!   Jeff

Wednesday Oct 21, 2015

Demantra is available!

Hello!  Demantra does not have a version 12.2.5 release.  Instead, Demantra provides mandatory patch 21959406.  

Please see MOS notes 2068456.1 and 2068458.1. 

It is best to keep ASCP and Demantra at the same level, but it is not 100% required unless otherwise advised.  Regards!  Jeff

Monday Oct 19, 2015

New Demantra Information Center White Papers - Topical Essays

 See the Information Center Overview and Alerts: Oracle Demand Management (Demantra) (Doc ID 1400486.2)


From this link you can click into the VCP Information Center Master Index or drill into the Demantra White Papers and Topical Essays.

Regards!  Jeff

Wednesday Oct 07, 2015

Are you on the latest release? There is a mandatory patch that requires application.

Hello!   If you have upgraded to this release, there is a mandatory patch:

Oracle Demantra Post Release Mandatory Patch Application - Patch 19945449 (Note 1960180.1).


Demantra required patch 19945449 (ARU# 18420277, Demantra patch 12240066) is now available.
It was discovered that the engine throws a segmentation fault while running a CDP consumption profile.

Please see the additional information in the readme Notes:

Please refer to Oracle Support note 453127.1 if more than one instance of Demantra is installed on the same machine.

If you are using a WAR file to deploy the application to the web server, recreate and redeploy the WAR file after applying the patch on the centralized machine, in order to ensure the changes made by the patch are propagated to the web server.

It is necessary to clear the Java Plug-in cache on the client machine to ensure the changes made by this patch are loaded by the browser.
To clear the Java Plug-in cache, click Start -> Control Panel and double-click the Java icon in the control panel.
Then click Settings under the Temporary Internet Files section -> Delete Files.

Monday Oct 05, 2015

Patch 21089579 Avoiding Duplicate Rows During Import Generic Patch for 7.3.1.x and 12.2.x

Hello!  A customer brought patch 21089579 to my attention.  Please consider this a mandatory patch if you are importing data; it will help you avoid duplicate rows during the import process.  If you see this issue after importing data, note that because these are only descriptive values, I would expect the data load process to synchronize them on the next data load.  The level table descriptions should be updated to match the staging table after the EP_LOAD_ITEMS and EP_LOAD_LOCATIONS procedures complete.

Please apply this patch if you are on Demantra Version: 7.3.1.x to 12.2.x


Fixed in

* Please note that this patch does not remove any existing "duplicate" rows in ITEMS, LOCATION, MDP_MATRIX or SALES_DATA.

Monday Sep 21, 2015

New Demantra Log File Parser in PERL

Hello!  There is a new Demantra Log File Parser available.  It is written in Perl.  Take a look.  As always, comments are welcome.

MOS Note 2053982.1:  Demantra Demand Planning Log File Parser

Monday Aug 24, 2015

How to avoid ORA-06512 and ORA-20000 when Concurrent Statistics Gathering is enabled. New in 12.1 Database, Concurrent Statistics Gathering, Simultaneous for Multiple Tables or Partitions

Oracle Database 12.1 introduces a new feature, Concurrent Statistics Gathering. 

Concurrent statistics collection is simply the ability to gather statistics on multiple tables, or table partitions, at the same time.  When CONCURRENT statistics gathering is enabled, you can execute each statistics gathering job in parallel.  This combination is useful when you need to analyze large tables, partitions, or subpartitions.  This is accomplished  using a combination of the job scheduler, advanced queuing and resource manager.  Concurrent statistics collection can reduce the time it takes to gather statistics, provided the system can accommodate the extra workload.

Functionality wise, it is a way to gather stats on multiple tables, table partitions or subpartitions at the same time.

From a user perspective, the concurrent statistics collection functionality is very simple.  You set the CONCURRENT global preference to the required value using the DBMS_STATS package.  The RDBMS determines if concurrency is appropriate and if so, the level of concurrency to use. 
See MOS Note: 2034376.1, How to avoid ORA-06512 and ORA-20000 when Concurrent Statistics Gathering is enabled. New in 12.1 Database, Concurrent Statistics Gathering, Simultaneous for Multiple Tables or Partitions
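If you want to try it, the preference is set through the DBMS_STATS package.  A minimal sketch only; the values shown are the documented 12.1 options, and '&demantra_schema_name' follows the substitution-variable convention used elsewhere in this blog:

   -- Check the current value of the CONCURRENT preference
   select dbms_stats.get_prefs('CONCURRENT') from dual;

   -- Enable concurrent gathering for manual and automatic jobs
   -- (valid 12.1 values: MANUAL, AUTOMATIC, ALL, OFF)
   exec dbms_stats.set_global_prefs('CONCURRENT', 'ALL');

   -- Gather schema statistics; the database decides the actual level of concurrency.
   -- Concurrency relies on the job scheduler and Resource Manager (see the MOS note above).
   exec dbms_stats.gather_schema_stats(ownname => '&demantra_schema_name');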

Tuesday Jun 30, 2015

Upgrading Demantra 7.3.1.x to 12.2.x can produce duplicate rows. There are two ways to solve this issue.

Upgrading Demantra 7.3.1.x to 12.2.x can produce duplicate rows.  There are two ways to solve this: apply patch 21089579 before your upgrade to 12.2.x, or wait for the platform release that resolves this issue.  Apply this patch before your upgrade from 7.3.1.x to 12.2.x.  After an application upgrade, the SITE_TYPE_CODE, SITE_TYPE_DESC, ACCT_TYPE_CODE and ACCT_TYPE_DESC columns do not have a default of '0'.  Please note that this patch does not remove any existing "duplicate" rows in ITEMS, LOCATION, MDP_MATRIX or SALES_DATA.

Upgrading from Demantra 7.3.1.x to 12.2.x? This can produce duplicate rows. There are two ways to solve. See patch 21089579 and the details in this note. (Doc ID 2025954.1)
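If you want to check whether your environment is exposed before upgrading, the column defaults can be inspected directly.  A minimal sketch; it assumes the four columns live on the LOCATION table, so adjust the table name if your data model differs:

   -- Confirm whether the affected columns carry the expected default of '0'
   select table_name, column_name, data_default, nullable
     from user_tab_columns
    where table_name = 'LOCATION'
      and column_name in ('SITE_TYPE_CODE','SITE_TYPE_DESC',
                          'ACCT_TYPE_CODE','ACCT_TYPE_DESC');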

Thursday Jun 18, 2015

Demantra Certification Study Guides, Exam Preparation

Hello All!   We have the certification documents available!  They are located at

There you will find:

  • The PDF from May 2015 webcast, Demantra Certification.  Are you attempting to get certified?  Let's walk through the process!
  • Mfg_DEM_Advisor_Webcast_2015_0610.pdf
  • Mfg_DEM_Advisor_Webcast_Certification_2015_0610-Part-2.pdf
  • Demantra-Certification-Technical.pdf
    • This is still in draft; however, it contains plenty of technical points that will add to your understanding of Demantra troubleshooting and management
  • Demantra-Certification-Topics.pdf

Of course, there are plenty of webcasts, as noted in Demantra-Certification-Topics.pdf.  Remember, the test is based on a specific release.  I would advise that you stay with the manuals as made available at Oracle Demantra Documentation Library, DOCs, TOIs and Training Available (Doc ID 443969.1). 

There is one more site that will help you drill into specific topics.  This site has whitepapers from Demantra DEV and Support Proactive Services that will provide detailed analysis and direction to Demantra functional and technical topics.  The MOS note is:

Development and Proactive Services Document Library. CRITICAL Updates, Comprehensive, Impactful White Papers, Guides, Notes +! Oracle Value Chain Planning Suite ASCP GOP Demantra RP APCC SNO IO DSR PS SPP, Note 1669052.1

Additionally, the Demantra Information Center is being updated and integrated into the NEW Value Chain Planning (VCP) Information Center!  I will provide details as soon as they are available.

As always, feel free to contact me.   Best to you in your efforts!   Jeff

Saturday Jun 13, 2015

Demantra Certification Training Inventory Coming Soon!

Hello All!   As promised during the 10-Jun-2015 Demantra webcast, I will be publishing the following on Monday in the Demantra Community.   There will be additional docs that will help you move towards certification success.  Remember, if you fail, you will know where to refocus your study for attempt two OR even attempt 3!!  These are the docs that I have so far:


I will be in touch soon.   Thank You!   Jeff

Friday May 22, 2015

Demantra Worksheet Performance - A summary guide at Customer Request

Worksheet performance.  There are dozens of notes.  It can be challenging to find the best approach. 

  • If you are on a release that includes TABLE_REORG, see the following three notes.  Upgrade to the latest version of TABLE_REORG.  Run TABLE_REORG with the 'T' option and review the suggestions in the LOG_TABLE_REORG table.
  • Demantra TABLE_REORG procedure. Did you know that TABLE_REORG has replaced REBUILD_SCHEMA and REBUILD_TABLES? (Doc ID 2005086.1)
    - Demantra TABLE_REORG Tool New Release with Multiple Updates! Partitions, DROP_TEMPS and More! to 12.2.x. (Doc ID 1980408.1)
  • If you have an error, see: Demantra table_reorg Procedure Failed ORA- on sales_data mdp_matrix promotion_data How do I Restart? rupd$_ mlog$_ I have Table cannot be redefinitioned in the LOG_TABLE_REORG table (Doc ID 2006779.1)

I would consider these notes to be the best regarding worksheet performance:

  • Oracle Demantra Worksheet Performance - A White Paper (Doc ID 470852.1)
  • Oracle Demantra Worksheet Performance FAQ/TIPS 7.3+! (Doc ID 1110517.1)
  • Demantra 12.2.4 Worksheet Performance Enhancements Parameter dynamic_hint_enabled, Enable Dynamic Degree of Parallelism Hint for Worksheets. Development Recommended Proper Setup and Use (Doc ID 1923933.1)
  • Demantra Development Suggested Performance Advice Plus Reference Docs (Doc ID 1157173.1)
  • Oracle Demantra Worksheets Caching, Details how the Caching Functionality can be Leveraged to Potentially Increase Performance (Doc ID 1627652.1)
  • The Column Prediction_Status, MDP_Matrix and Engine. How are they Related? Understand Prediction_status Values (Doc ID 1509754.1)

Also, see:
- Demantra Gathering Statistics on Partitioned Objects Oracle RDBMS 11gR2 (Doc ID 1601596.1)
- Demantra 11g Statistics New Features and Best Practices Gather Schema Stats (Doc ID 1458911.1)

I would review all parameters mentioned in the docs above and:

1. Monitor the workstation memory consumption and CPU utilization as the worksheet is being loaded.
   * You may have to adjust the memory ceiling for Java
2. Manage MDP_MATRIX.  Are there dead/unused combinations?  When running the engine, you can manage the footprint of the input.  If MDP_MATRIX
   is carrying sizeable dead combinations and/or entries without a matching entry in SALES_DATA, you are increasing processing load.  Check out
   note 1509754.1.  The attachment explains the principle.
3. Using the notes above, can you cache?  Can you use filters?  Can you use open with? 
   A series can be cached, aggregated by item and cached in the branch_data_items table.  This improves performance of worksheets that are aggregated
   across locations and that do not have any location or matrix filtering.
4. Run the index advisor.  Does it suggest additional indexes? 
5. If you do not have the index advisor, produce an AWR report.  The AWR should cover the period when the user opens the worksheet.  For example, take a
   beginning snapshot, wait 10-15 minutes, then tell the user to open the worksheet.  After the open succeeds, wait 10 minutes and take the ending snapshot
   (see the sketch after this list).  What are the top SQLs?  What are the contentions?
6. Do you have your large tables on their own tablespaces?  This means each large table has a tablespace to itself, and each large index has a
   tablespace to itself.
7. The worksheet is retrieving rows to display.  Is there row chaining causing multiple block reads?  That should be revealed in the AWR, or run the
   appropriate SQL (see the sketch after this list).
8. Worksheet design is important.  The worksheet designers set up what they need.  However, that does not mean that the worksheet design blends well
   with available processing capabilities.  Know the forecast branch health.  I think this is discussed in 1509754.1.  The following SQL reveals the
   distribution of forecastable combinations per level:

   select level_id,count(*) from mdp_matrix
   where prediction_status = 1
   group by level_id
   order by level_id;

   If you have one branch at 100,000 combinations and the remaining branches at 5,000 and 10,000, that is a problem and points to a setup/design issue.
   Meaning, if you have branch as a level and one branch indeed has 100,000 combinations while the other two branches account for smaller
   volumes of 5,000 and 10,000, the chosen levels of the worksheet need to be revisited.  Perhaps a level lower than branch is better suited to
   processing the data.  While this and #2 above are probably out of your control, it will help explain the worksheet loading and engine processing times.
9. Reduce the amount of memory that your worksheet selects:
   - Remove series if possible
   - Reduce the span of time
   - Apply filters
10. Review all server and client expressions.  Are they affecting performance?
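For items 5 and 7 above, the snapshots and a basic row-chaining check can be driven from SQL*Plus.  A minimal sketch, assuming you are licensed for the Diagnostics Pack (AWR):

   -- Item 5: bracket the worksheet open with AWR snapshots
   exec dbms_workload_repository.create_snapshot;
   -- ... user opens the worksheet and it finishes loading ...
   exec dbms_workload_repository.create_snapshot;
   -- Then run ?/rdbms/admin/awrrpt.sql against the two snapshot IDs

   -- Item 7: a rough, system-wide indicator of chained/migrated row activity
   select name, value
     from v$sysstat
    where name = 'table fetch continued row';
   -- For a per-table list, use ANALYZE TABLE ... LIST CHAINED ROWS INTO CHAINED_ROWS
   -- (create the CHAINED_ROWS table first with ?/rdbms/admin/utlchain.sql)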

Wednesday May 13, 2015

TABLE_REORG Causing ORA-42012: error occurred while completing the redefinition and ORA-00600

Hello!  The latest information for you.

Submitting the TABLE_REORG procedure for a table that has a foreign key constraint referring to the same table leaves that constraint in a disabled state.  Foreign key constraints that refer to other tables behave as expected; i.e., the constraint is re-enabled at the end of the redefinition.

When this condition exists, TABLE_REORG will fail with the following:

ORA-42012: error occurred while completing the redefinition
ORA-00600: internal error code, arguments: [17183], [0x3FFF81BF400], [], [], [], [], [], [], [], [], [], []

It does generate a trace file and dump file.

ERROR at line 1:
ORA-42012: error occurred while completing the redefinition
ORA-00600: internal error code, arguments: [17183], [0x3FFF81BF400], [], [], [], [], [], [], [], [], [], []
ORA-06512: at "SYS.DBMS_REDEFINITION", line 82
ORA-06512: at "SYS.DBMS_REDEFINITION", line 1524
ORA-06512: at line 1
ORA-06512: at "DEMANTRA.TABLE_REORG", line 1193

To resolve this issue, please apply Patch 13867469 if it is available for your database version and platform.  The fix for Bug 13867469 should be included in a future 12.2 release.

- See MOS Note <Note 13572659.8> for additional details.  Bug 13572659 - DBMS_REDEFINITION disables Foreign Keys used for REFERENCE partitioning

- Also available, patch 20954948 : MERGE REQUEST ON TOP OF FOR BUGS 13040943 13572659 13642044

As a workaround, if you did not receive an ORA-00600, re-enable the constraint after the redefinition.
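A minimal sketch of that workaround; the constraint name in the ALTER statement is a placeholder, so substitute the foreign key reported in your environment:

   -- List foreign key constraints left disabled after the redefinition
   select table_name, constraint_name, status
     from dba_constraints
    where owner = upper('&Schema_owner')
      and constraint_type = 'R'
      and status = 'DISABLED';

   -- Re-enable each one; FK_SALES_DATA_EXAMPLE is a placeholder name
   alter table sales_data enable constraint fk_sales_data_example;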

Note: The above workaround is not applicable for a nested table built on a partitioned parent table. Reference: <Note 1929007.1>.
Foreign Constraint On Nested Table is Created With Status Disabled.

Tuesday May 05, 2015


When you submit TABLE_REORG, DBMS_REDEFINITION is used.  This will create an MLOG$_ database object.  The table you supplied to the TABLE_REORG procedure has a primary key (PK).  Since there is a PK on the table, when DBMS_REDEFINITION creates the MLOG$_ object, it automatically creates a RUPD$_ object as well.  These objects are meant to be used for Java RepAPI.  If you execute a 'drop snapshot log on tablename', the snapshot log as well as the temporary snapshot log are dropped.  However, dropping the MLOG$_ object directly is not best practice. 

If these objects exist prior to executing TABLE_REORG, you will see the following message in the LOG_TABLE_REORG table: 'Table cannot be redefinitioned.'  This means that one or both objects/segments exist.  For example, if a TABLE_REORG for MDP_MATRIX failed, the following segments would most likely be left behind:

   MLOG$_MDP_MATRIX
   RUPD$_MDP_MATRIX

Use the following SQL to verify (10g and above):

-- List leftover MLOG$_ / RUPD$_ segments in the Demantra schema
select substr(object_name,1,30)
from dba_objects
where regexp_like(object_name, '^(MLOG|RUPD)\$_')
and owner = '&Schema_owner';

While you can drop these temporary RDBMS segments, it is best practice to abort the failed redefinition with DBMS_REDEFINITION.ABORT_REDEF_TABLE, which cleans them up:

   exec dbms_redefinition.abort_redef_table(
           uname       => '&demantra_schema_name',
           orig_table  => '&original_table_name',
           int_table   => '&interim_table_name');

In the above, supply the arguments:

- DM or your schema name
- MDP_MATRIX is the original_table_name
- MLOG$_MDP_MATRIX is the interim_table_name

Verify that the objects are dropped.  Submit the TABLE_REORG again AFTER repairing the cause of the last failure.

Thursday Apr 30, 2015

System Wide, Environment Performance Point Summary + Important Performance MOS Notes

Hello!  Wanted to post a great summary of performance action points.  Also, take a look at these two important Demantra performance related MOS notes.

  • Demantra TABLE_REORG procedure. Did you know that TABLE_REORG has replaced REBUILD_SCHEMA and REBUILD_TABLES? MOS Note 2005086.1
  • For more information: Demantra Gathering Statistics on Partitioned Objects Oracle RDBMS 11gR2, MOS Note 1601596.1
  • As a Demantra implementer or Demantra DBA, you should be able to explain the action taken and provide justification of the current environment for each of these areas.
  • Remember, Demantra does not support editioning or hot patching.

Database Performance and Configurations

Prime Table Reorganization and Partitioning
- A few tables in Demantra account for 95% of the data storage: MDP_MATRIX, SALES_DATA and PROMOTION_DATA
- If any of these tables has more than 10 million rows, it is prone to performance issues (see the quick check after this list)
- Develop a strategy to partition these tables; consult the worksheet design team, as this can impact their user experience as well
- Have multiple partition schemes ready to be tested
- Compute the optimal column order using null statistics and column attributes
- Compute the optimal PCTFREE, PCTUSED and INITRANS values for the tables
- Ensure that the schema stats are up to date
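A quick way to see whether you are past the 10-million-row mark mentioned above is to check the optimizer statistics.  A sketch only; it assumes schema statistics are reasonably current:

   select table_name, num_rows, last_analyzed
     from dba_tables
    where owner = upper('&Schema_owner')
      and table_name in ('MDP_MATRIX','SALES_DATA','PROMOTION_DATA')
    order by num_rows desc;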

Database Performance and Configurations

Review the CBO Settings
- Analysis of monitored long operations and tuning approach for these
- Index reduction analysis using index usage analysis (see the sketch after this list)
- Review of Index Addition to improve targeted query performance
- Review of Parallelism settings for possible impact on CBO
- Review of Logging settings and performance impact
- Ensure that you make provision for Database Tuning in your project plan
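For the index usage analysis above, Oracle's built-in index monitoring is one option.  A minimal sketch; MDP_MATRIX_IDX1 is a placeholder index name:

   -- Start monitoring a candidate index
   alter index mdp_matrix_idx1 monitoring usage;

   -- After a representative workload, check whether it was ever used
   -- (query as the schema owner; V$OBJECT_USAGE only shows the current user's indexes)
   select index_name, monitoring, used, start_monitoring
     from v$object_usage
    where index_name = 'MDP_MATRIX_IDX1';

   -- Stop monitoring when done
   alter index mdp_matrix_idx1 nomonitoring usage;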

Application Server Configurations
- There are multiple parameters in the application server property files which need to be reviewed, but the following are extremely important, as they
  are closely related to your environment:
- threadpool.query_run.per_user
- threadpool.query_run.size
- worksheet.full.load
- client.worksheet.calcSummaryExpression

Extending Batch Load Process
- Enable Incremental Load
- Extend EP_LOAD_PROCESS to run in parallel mode
- Extend PROPORT to run in parallel mode
- These procedures are in PL/SQL; the extension is not invasive and can be implemented without touching the delivered functionality (see the sketch below)
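One non-invasive way to run two loads concurrently is the database scheduler.  This is only a sketch: the job names are made up, and the parameterless calls to EP_LOAD_ITEMS and EP_LOAD_LOCATIONS are an assumption, so check the actual signatures and confirm with your implementation team that they can safely run at the same time.

   -- Launch the two load procedures as concurrent one-off jobs
   begin
     dbms_scheduler.create_job(
       job_name   => 'DEM_LOAD_ITEMS_JOB',
       job_type   => 'PLSQL_BLOCK',
       job_action => 'begin ep_load_items; end;',
       enabled    => true,
       auto_drop  => true);

     dbms_scheduler.create_job(
       job_name   => 'DEM_LOAD_LOCATIONS_JOB',
       job_type   => 'PLSQL_BLOCK',
       job_action => 'begin ep_load_locations; end;',
       enabled    => true,
       auto_drop  => true);
   end;
   /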

Analytical Engine Performance (Things to Consider)
- The bootstrap run will be the longest run and is not an indication of the engine runs thereafter
- A larger forecast horizon increases the number of records that will be created and impacts the processing time of other processes.
- Gather schema stats after every forecast run and, essentially, before the forecast is run (see the sketch after this list).
- Forecast versioning: keep it to the number of versions that is absolutely necessary.
- Ensure that SALES_DATA and MDP_MATRIX have their own tablespaces.  Have autoextend turned on for them.  Turn off redo logging for these tables.
- Increase the bulk loader size to, say, 20000; this ensures the frequency of loading data into the database is not too high
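Two of the points above translate directly into SQL.  A minimal sketch (NOLOGGING only affects direct-path operations and changes recoverability, so clear it with your DBA first):

   -- Gather schema statistics before the forecast run
   exec dbms_stats.gather_schema_stats(ownname => '&demantra_schema_name');

   -- Reduce redo generation on the large tables for direct-path loads
   alter table sales_data nologging;
   alter table mdp_matrix nologging;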

Determining the number of engines that you can execute in your environment.
General rule: 1.5 engines per CPU; if you have 4 CPUs you can go up to 6 engines.  Do not load the blade server to 100%.  You could increase the number of
engines based on the server performance and your needs.  The engines will spawn twice their number of sessions on the database, so ensure that you have
enough CPUs to manage these threads.

Worksheet Design Guidelines for Performance
- Enable Simple Filters for all the user groups
- Leverage Open with functionality
- Design Open with Tree Content in the Collaborator Workbench Homepage
- Limit the Number of Aggregation levels in the worksheet definition
- Place the Aggregation levels in the page of the layout
- Limit the number of series that the worksheet contains, if possible
- Limit the time horizon of the worksheet, if possible.  For example, for accuracy worksheets, limit it to the time period for which forecast
  accuracy has to be displayed

Cache the Exception worksheets
- Create a functional index on the exception data series columns (see the sketch after this list)
- Level Caching
- The Demantra system can cache level members on a level-by-level basis.
- Explore the option of caching bigger levels for better GUI performance.  Setting an optimal threshold for level caching is a trial-and-error process.
- More details regarding Level Caching can be found in the Demantra Implementation Guide under section “Managing Level Caching”
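For the functional index suggestion above, a sketch only; EXCEPTION_FLAG is a hypothetical series column on SALES_DATA and the index name is made up, so substitute the series column your exception worksheet actually filters on:

   -- Index only the rows the exception worksheet retrieves; other rows stay out of the index
   create index dem_exception_fidx
      on sales_data (case when exception_flag > 0 then item_id end);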

Client Configurations
- JRE Settings
- Garbage collections
- Heap size settings


This blog delivers the latest information regarding performance and install/upgrade.  Comments are welcome!

