Monday Oct 05, 2015

Patch 21089579: Avoiding Duplicate Rows During Import, Generic Patch for 7.3.1.x and 12.2.x

Hello!  A customer brought patch 21089579 to my attention.  Please consider this a mandatory patch if you are importing data; it will help you avoid duplicate rows during the import process.  If you see this issue after importing data, note that because these are only descriptive values, I would expect the data load process to synchronize them on the next data load.  The level table descriptions should be updated to match the staging table after the EP_LOAD_ITEMS and EP_LOAD_LOCATIONS procedures complete.

Please apply this patch if you are on Demantra version 7.3.1.x through 12.2.x.


Fixed in

* Please note that this patch does not remove any existing "duplicate" rows in ITEMS, LOCATION, MDP_MATRIX or SALES_DATA.

Wednesday Oct 01, 2014

Demantra Performance and Setup Analyzer v2.00 is FINALLY Available!

Hello Demantra Customer!  Version 2.00 of the Demantra Performance and Setup Analyzer has been released!
You can pick and choose whichever SQL you prefer for the task at hand.   See MyOracleSupport note
Demantra Analyzer script for Setup and Performance Monitoring (11.5.X and R12), 1618885.1

I am considering adding more topical analysis and deeper digging into additional details for the next release. 

Any questions or comments, please email.

Thank You!

Wednesday May 21, 2014

Topical Procedural Demantra Debugging Using Log Files

Demantra Debugging Using Log Files

Default log location is the Demantra_root/Collaborator/virtual_directory/portal/logs directory.
Review the comments in the Demantra_root/Collaborator/virtual_directory/portal/conf/logconf.lcf  file to specify this log file information:
- Name and location
- Maximum size of the log file
- Number to keep
- Log categories and log level

You can also set log levels at runtime with the following URL:
http://Demantra_root/Collaborator/virtual_directory/admin/loggerManager.jsp



Web Application troubleshooting:
- The Demantra Web application includes 2 components:
  Application Server
  – Demantra Web application deployment inside a Web server.

  Client
  – User-side browser and resources (CPU, memory) running the Sun Java plug-in.

  They are connected by a network (LAN, WAN...), with SQL*Net linking the application server to the database (see the SQL*Net log).

  For better troubleshooting we need to understand where the problem is: on the server side, the client side, or both?

For application out-of-memory errors:
- Identify where the problem is by using logs:
  Collaborator log (application server side); plug-in log (client side).

Server side issues:
- Heap dump from the web server
- Thread dump from the web server, in case of an application freeze or slowdown
- Application server memory configuration (-Xmx)
- Count of level members (is level member caching needed?)

Client side issues:
- Heap dump from the Java plug-in
- Thread dump from the Java plug-in
- Plug-in memory configuration (-Xmx)

For installation errors, there are a few files that can be analyzed:
• Installer log
• Screenshots of the selections in the different Installer panels
• The db_audit_log table, for object changes made by the upgrade
• The DB_Exception_Log table

Data Model: for data model errors, there are a few files that can be analyzed:
• An export of the data model
• The db_audit_log table, for object changes made by the upgrade
• The LOG_DATA_MODEL table (an automatic LOG_IT log)

Data Loading
– EP_LOAD errors:
• Enable the LOG_IT process on the relevant Demantra stored procedures, as defined in the LOG_IT_PARAMS table
• Review the DB_Exception_Log table.
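As a sketch of the first bullet, LOG_IT can be switched on for a specific load procedure by editing its row in LOG_IT_PARAMS (the logging level value and the generated LOG_* table naming follow the LOG_IT post further down this page; the procedure_name column name is an assumption to verify against your schema):

```sql
-- Hypothetical sketch: enable detailed LOG_IT tracing for EP_LOAD_SALES.
-- The procedure_name column is an assumption; verify with: DESC log_it_params
UPDATE log_it_params
   SET logging_level = 3
 WHERE UPPER(procedure_name) = 'EP_LOAD_SALES';
COMMIT;

-- After the next EP_LOAD_SALES run, a LOG_EP_LOAD_SALES table should exist:
SELECT table_name FROM user_tables WHERE table_name = 'LOG_EP_LOAD_SALES';
```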

EP_LOAD performance:
• Count of data in the staging tables
• DB_Params table
  – EP_LOAD performance-related parameters
• Execute "Call_DM_Build_Procedures" after changing parallelization parameters in DB_Params
• DB hardware definitions
• System memory, disk I/O, and CPU utilization monitoring
  – either a system performance monitor or Linux top
• AWR/ADDM reports
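A quick way to size the run and review the relevant parameters from the checklist above (a sketch only; the staging table name and the DB_Params pname/value column names are assumptions that vary by version and configuration):

```sql
-- How much data is staged for the next EP_LOAD run?
SELECT COUNT(*) AS sales_rows FROM t_src_sales;

-- Which EP_LOAD-related tuning parameters are currently set?
-- pname/value column names are assumptions; verify with: DESC db_params
SELECT pname, value
  FROM db_params
 WHERE UPPER(pname) LIKE '%EP_LOAD%';
```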

Data Loading, Integration:

Integration Errors:
• Collaborator & Integration log (Enable sql logging: appserver.sql, appserver.update.sql)
• Concurrent request logs (collaborator.login.user)
• apscheck.sql (Note:246150.1)

Integration Performance:
• Count of data from the staging tables.
• APS_Params
  – Integration Interface related parameters
• DB HW definitions.
• System memory, disk I/O, and CPU utilization monitoring
  - either a system performance monitor or Linux top.
• AWR/ADDM reports.

DB deadlock / internal database errors:
• Collaborator log
• Database udump logs (USER_DUMP_DEST specifies the pathname of the directory where the DB server writes debugging trace files on behalf of a user process)
• Enable the LOG_IT process on selected Demantra stored procedures, as defined in the LOG_IT_PARAMS table.


Worksheet (WS) performance and errors:
• Identify where the problem is:
• Server side
  – AWR/ADDM reports; DB-side CPU and memory; application server CPU and configuration; relevant APS_PARAMS parameters
• Client side
  - CPU utilization (and for how long)
• Server side issues:
• Demantra Index Advisor report
• Thread dump (in case of an application freeze or slowdown)
• System memory, disk I/O, and CPU utilization monitoring
  - either a system performance monitor or Linux top
• DB hardware definitions
• Number of concurrent users at the time (concurrent user log)
- Also see Level Member Caching Purpose Performance Guidance (Doc ID 1627689.1)
- Also see Oracle Demantra Worksheets Caching, Details how the Caching Functionality can be Leveraged to Potentially Reduce Performance (Doc ID 1627652.1)


Client side issues:
• Plug-in -Xmx setting
• WS wizard panel screenshots
• Client side log
  – if requested (number of combinations in the WS)

Engine errors:
Always provide a full engine log (manager and engine) with the relevant logging option.
• DB errors:
• Engine log with SQL logging
  – depends on where the DB error is: engine manager or engine2k

Processor errors:
• Engine log with DebugPro logging, groups 100/110
• Engine out-of-memory errors:
• Number of tasks
• Count of active combinations (SELECT COUNT(1) FROM mdp_matrix WHERE prediction_status = 1)
• Engine not running:
• On Linux
  – application server log
• Engine performance:
• AWR report
• Check whether CPU hyper-threading is enabled on the engine servers
  – disable it if enabled
• Engine blades/server + DB hardware definitions
• DB/engine blades memory, disk I/O, and CPU utilization monitoring
  - either a system performance monitor or Linux top
• For more information, see Oracle Support note 800030.1, "Demantra Engine Troubleshooting" presentation (April 6th, 2011)

Configure auditing capabilities for Business Modeler (BM) configuration changes:
• This is a customization enhancement introduced by the DEV application team in a white paper ("Implementing Demantra Audit Log")
  - Configuring an Audit Log in Demantra Using Database Triggers (Doc ID 1629974.1)
    * Currently captures inserts/updates on the Sessions/Computed_fields tables via a trigger-based process
• The mechanism and idea can easily be extended to cover further metadata audit opportunities
• Difference from a BAL diff
  – This mechanism captures all of the changes, not just how the latest change differs between schemas
• The logs are kept in a separate table (AUDIT_LOG), so there is no impact on the out-of-the-box schema.
• AUDIT_LOG table includes the following information (columns):
  - log_date
  - session_id
  - os_user
  - host_name
  - module
  - user_id
  - object_type
  - object_name
  - old_value
  - new_value
  - message
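The trigger-based mechanism described in the white paper can be sketched roughly as follows, using the AUDIT_LOG columns listed above. This is only an illustration, not the white paper's actual code: the :OLD/:NEW column names on COMPUTED_FIELDS are assumptions to verify against your schema.

```sql
-- Hypothetical sketch of the trigger-based audit process described above,
-- capturing inserts/updates on COMPUTED_FIELDS into AUDIT_LOG.
CREATE OR REPLACE TRIGGER trg_audit_computed_fields
AFTER INSERT OR UPDATE ON computed_fields
FOR EACH ROW
BEGIN
  INSERT INTO audit_log
    (log_date, session_id, os_user, host_name, module, user_id,
     object_type, object_name, old_value, new_value, message)
  VALUES
    (SYSDATE,
     SYS_CONTEXT('USERENV', 'SESSIONID'),
     SYS_CONTEXT('USERENV', 'OS_USER'),
     SYS_CONTEXT('USERENV', 'HOST'),
     SYS_CONTEXT('USERENV', 'MODULE'),
     :NEW.last_update_user,              -- assumed column
     'SERIES',
     :NEW.computed_name,                 -- assumed column
     :OLD.dbexp,                         -- assumed column (NULL on insert)
     :NEW.dbexp,                         -- assumed column
     CASE WHEN INSERTING THEN 'INSERT' ELSE 'UPDATE' END);
END;
/
```

Because the audit rows land in the separate AUDIT_LOG table, the out-of-the-box schema is untouched, which is the design point called out above.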

Wednesday Mar 12, 2014

Troubleshooting Strategy and Processes that Execute Enormous Data Sets in Demantra

Hello Demantra Customers!   There is a new white paper that discusses large data set loading.

See MOS Note 1634925.1

Recently we have encountered a few scenarios where customers tried to process huge amounts of data within a Demantra process.

  • A customer wanted to load over 50M records via the EP_LOAD process on a weekly basis.
  • A customer wanted to process millions of rows via the BLE process on a daily basis.
  • A customer wanted to load 40M rows via an integration interface on a weekly basis.

In all of the above, the customer complained about the system's inability to process that amount of data: either it was too slow, or it simply could not complete at all, meaning the process errored out.
In all of the above examples, an escalated defect was logged and development worked on troubleshooting the problem.

An additional common theme across all of the above examples, and many more, is that there was no real need for a product fix; although performance improvement opportunities may have been identified, they were not the final solution that addressed the problem.
What helped was understanding that there was no real need to process all of the data, and recognizing the data that really needed to be loaded.  The data could have been loaded more efficiently using some best practices and creative thinking.

The scope of this document is to provide some guidance on how to troubleshoot such situations.
The document focuses on three main areas:

  • Learning and understanding the business needs
  • Understanding the actual data that needs to be loaded, and the gap between this number and the number of rows that the customer actually tries to load
  • Providing some best practices that can help the customer work with the data efficiently

This document will not deal with the initial Data load processes although some of the best practices can be adopted for such situations as well.
We will illustrate the concepts in this document using three real life examples of service requests and/or defects that were logged on behalf of the customer.


Friday Sep 14, 2012

Introducing the new Demantra Guided Resolutions!

There is a new method to find your solution, called a guided resolution or search helper.

Currently, we cover 5 specific topical areas:

Oracle Demantra 7.x and Above Data Loading Guided Resolution Note 1461899.1

Oracle Demantra Load Data Issues Not Necessarily Error Code Related Guided Resolution Note 1468884.1

Oracle Demantra 7.x and Above Workflow Issues Guided Resolution Note 1353217.1

Oracle Demantra 7.x and Above Worksheet Related Guided Resolution Note 1486639.1

Oracle Demantra 7.x and Above Engine Related Guided Resolution Note 1486634.1

The guides will be updated with the latest proven solutions on a regular basis, keeping the content relevant.

What is a guided resolution?   What is the best practice using a guided resolution?

How to Use the Demantra Guided Resolution, a Proactive Services Tutorial.  Note 1473992.1

Thursday Sep 06, 2012

Using DB_PARAMS to Tune the EP_LOAD_SALES Performance

The DB_PARAMS table can be used to tune EP_LOAD_SALES performance.  The
AWR report supplied shows 16 CPUs, so I imagine you can run with 8 or more
parallel threads.  This can be done by setting the following DB_PARAMS
parameters.  Note that most of the parameter changes simply change a 2 or 4 into an 8:

DBHintEp_Load_SalesUseParallel = TRUE
DBHintEp_Load_SalesUseParallelDML = TRUE
DBHintEp_Load_SalesInsertErr = + parallel(@T_SRC_SALES@ 8) full(@T_SRC_SALES@)
DBHintEp_Load_SalesInsertLd  = + parallel(@T_SRC_SALES@ 8)
DBHintEp_Load_SalesMergeSALES_DATA = + parallel(@T_SRC_SALES_LD@ 8) full(@T_SRC_SALES_LD@)
DBHintMdp_AddUpdateIs_Fictive0SD = + parallel(s 8 )
DBHintMdp_AddUpdateIs_Fictive2SD = + parallel(s 8 )
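These hints live as rows in the DB_PARAMS table, so one way to apply them is plain SQL (a sketch only; the pname/value column names are assumptions to verify against your schema). As noted in the debugging post above, run Call_DM_Build_Procedures afterward so the rebuilt procedures pick up the new hints:

```sql
-- Sketch: raise EP_LOAD_SALES parallelism to 8 threads.
-- pname/value column names are assumptions; verify with: DESC db_params
UPDATE db_params SET value = 'TRUE'
 WHERE pname IN ('DBHintEp_Load_SalesUseParallel',
                 'DBHintEp_Load_SalesUseParallelDML');

UPDATE db_params
   SET value = '+ parallel(@T_SRC_SALES@ 8) full(@T_SRC_SALES@)'
 WHERE pname = 'DBHintEp_Load_SalesInsertErr';
COMMIT;

-- Rebuild the stored procedures so the new hints take effect:
EXEC CALL_DM_BUILD_PROCEDURES;
```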

Tuesday Aug 14, 2012

Is There a User Interface to Manage LOG_IT Logging Actions?

The only way to configure LOG_IT is to update the data in the LOG_IT_PARAMS table.

First, check whether LOG_IT logging is available in your version for the procedure in question.

To update the logging level to 3 for the API procedure, update its row in LOG_IT_PARAMS:

SET logging_level = 3

Check that the update is in place.

This should fire a trigger that re-creates LOG_IT with 'API_CREATE_ORA_DEM_USER' added to an IF condition.

When you run the procedure to test, it should create a log table, 'LOG_API_CREATE_ORA_DEM_USER'.

If LOG_IT did not get rebuilt to include 'API_CREATE_ORA_DEM_USER', rebuild it manually:

EXEC BUILD_LOG_IT_PROCEDURE;  -- rebuilds LOG_IT
EXEC BUILD_ORDER_COMPILE;     -- makes a list of invalid procedures
EXEC COMPILE_ALL;             -- recompiles all the invalid procedures

See document 1408753.1, Demantra LOG_IT Setup Execution Explanation ORA Errors Detailed Log Production.

Shipment and Booking History - Self Service Load not Populating t_src_item_tmpl

I am running EBS Collections: Legacy Systems > Shipment and Booking History - Self Service to load the DemHistory.dat sample data.
Only the t_src_sales_tmpl table is populated; t_src_item_tmpl and t_src_loc_tmpl are not.  This is expected, as I do
not have the same items and locations in EBS, since this is sample data.

As per the Implementation Guide, prior to launching this collection you must complete ASCP collections for the legacy instance.  I followed
document 402222.1 and downloaded the QATemplate.  It includes many .dat files, such as Category.dat, item.dat, and so on.

1. Would you please advise whether I need to load every set of *.dat files in order to load booking and shipment data, or only specific .dat
files like TradingPartner.dat and Item.dat?

2. It seems the sample data from TradingPartner.dat does not match what is in DemHistory.dat. Is this expected?

Yes, for a legacy instance, the user first needs to load the reference data: Items, Trading Partners, Trading Partner Sites, Demand Classes, and Sales Channels.
For hierarchies, they need to load Item Categories, Regions, Zones, etc.

1. The user needs to load only the .dat files required for the reference data.
2. I am not sure about this question. Does it mean that the examples given in TradingPartner.dat and DemHistory.dat do not match?
   If yes, then that is expected.

   For the actual data load, load the customers and sites first, and then load sales data for them using DemHistory.dat.

Loading CTO Model Without Option Classes - is it Possible?

Loading CTO Model and Options without the Option classes on the BOM?  Is it possible using the standard integration?

Please evaluate the following option:

Set the profile option 'MSD_DEM: Calculate Planning Percentage' to 'Yes, for "Consume & Derive" Options only'.
With this profile option setting, the standard integration brings in only the models and options.

Option classes are not collected into Demantra.  The options are rolled up to the model, skipping the in-between option classes.

  a. The option class item attributes should be set as usual.
  b. Publish planning factors is not supported with this profile option.  You can only publish the total demand
     (independent forecast + dependent forecast) to Advanced Supply Chain Planning (ASCP).

Purging History for Net Change and Complete Refresh Collections

Data Load Real Time Customer Question:

When we run Shipment and Booking History Download, the date range of the data profile Purge History Data is updated at run time with
the date range chosen while running the concurrent program Shipment and Booking History Download.

We noticed that every time we run net-change collections, the date parameters on the Purge History integration interface are set based on
the dates chosen in the concurrent program Shipment and Booking History Download.
But when we run the concurrent program in complete refresh mode, the dates on the Purge History integration interface are set with dates
that fall outside the history, and hence no history is purged.  Is it the intended design that history is not purged during Demantra complete
refresh collections, and only during net-change collections?

To answer your question, history should be purged for both net-change and complete refresh collections.  If this is not working, please contact Oracle Support.

Note that the complete refresh collection is not meant for regular weekly runs.  Ideally it should be run only once (bootstrap), and
only if you want to collect all the data from Order Management (OM) into Demantra.

You should run only net-change collections for regular runs.  For the bootstrap data load, if OM has more data than Demantra requires,
run a net-change collection with an appropriate date range.

Tuesday Jul 10, 2012

Data Loading Issues? Try the new Demantra Data Load Guided Resolution

Hello!   Do you have data loading issues?  Perhaps you are trying the new partial schema export tool.

New to Demantra: the Data Load Guided Resolution, document 1461899.1.  This interactive guide will help you quickly locate known solutions to previously discovered issues, from performance, ORA, and ODPM errors to collections-related issues that have no hard error number.   The guide covers diagnosis of data being imported into Demantra as well as data being exported from Demantra.

Contact me with any questions or suggestions.   Thank You!

Tuesday Jun 26, 2012

Latest EP_LOAD_SALES Performance Improvement for 7.3.1.x and 12.2

Hello!   Take a look at the latest EP_LOAD_SALES solution.  New for 7.3.1.x and 12.2, there are 3 additional parameters to control EP_LOAD_SALES.

Thursday Mar 08, 2012

Loading Product Family into Demantra Using Legacy Collection

I am using legacy collection to load product family into Demantra.  In the source EBS/Value Chain Planning (VCP)
instance I can see my product family and members in the ASCP plan.

I would like to have the product family dimension populated in a Demantra hierarchy.  I have used the demhistory
flat file to upload forecasts for my items (members), and I expect to have the product family dimension populated.

I started to analyze the Demantra collection and EP launch, and it seems to me that the staging table T_SRC_ITEM_TMPL is
populated by the process "Collect Level Type (2)", or perhaps a subprocess.

The staging table stores the link between members and the product family itself in the ebs_product_family_code field, but it
is populated with the default value (0).

The profile 'MSD_DEM: Master Org' is used for collecting item -> product family rollups into Demantra.
The item-to-product-family rollup needs to be available in ASCP under the master organization specified by this profile.

This is a source-side profile and cannot be set by hand for a legacy source.  To set the value, run the
request 'Configure Legacy Profiles' from the 'Demand Management System Administrator' responsibility.  The parameters for this
request are as follows:

- Legacy Instance
- Master Org
- Category set

After running this, check for records in the msd_dem_legacy_setup_params table.  Then try running the flat file load and verify
that the product family is populated in the t_src_item_tmpl table in the Demantra schema.

Also, the product family member/product family relationship comes from the msc_system_items table for the master organization,
not the organization where the product family and its members are defined.

You can check for the link between members and the product family in the master org within msc_system_items; the product family
should then be visible within Demantra.
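A sketch of that check (the product_family_id column and the bind variable are assumptions; verify the exact column names against your msc_system_items definition):

```sql
-- Does the master org carry the item -> product family rollup?
-- :master_org_id is the organization set in 'MSD_DEM: Master Org'.
SELECT item_name, product_family_id
  FROM msc_system_items
 WHERE organization_id = :master_org_id
   AND product_family_id IS NOT NULL;
```

If this returns no rows for your items, the rollup is missing in the master organization, which matches the symptom of ebs_product_family_code defaulting to 0 in t_src_item_tmpl.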


This blog delivers the latest information regarding performance and install/upgrade.  Comments are welcome.

