Wednesday May 21, 2014

Topical Procedural Demantra Debugging Using Log Files



Default log location is the Demantra_root/Collaborator/virtual_directory/portal/logs directory.
Review the comments in the Demantra_root/Collaborator/virtual_directory/portal/conf/logconf.lcf file to specify the following log file information:
- Name and location
- Maximum size of the log file
- Number to keep
- Log categories and log level

You can also set log levels at runtime with the following URL:
http://Demantra_root/Collaborator/virtual_directory/admin/loggerManager.jsp

 



When installation errors occur, there are a few files that can be analyzed:
• Installer log
• Screenshots of the selections in the different Installer panels
• db_audit_log table, for object changes made by the upgrade (see the queries below)
• DB_Exception_Log table
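For the two database-side checks, a quick query of each table is usually enough. A minimal sketch (the table names come from the list above; add filters appropriate to your schema):

SELECT * FROM db_audit_log;      -- object changes made by the upgrade
SELECT * FROM db_exception_log;  -- logged database exceptions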


Data Model: when data model errors occur, there are a few files that can be analyzed:
• An export of the data model
• db_audit_log table, for object changes made by the upgrade
• LOG_DATA_MODEL table (an automatic LOG_IT log)


Data Loading
– Ep_Load errors:
• Enable the LOG_IT process on the relevant Demantra stored procedures, as defined in the LOG_IT_PARAMS table
• DB_Exception_Log table


Ep_Load performance:
• Count of data in the staging tables (see the sketch after this list)
• DB_Params table
  – Ep_Load performance related parameters
• Execute “Call_DM_Build_Procedures” after changing parallelization parameters in DB_Params
• DB HW definitions
• System memory, disk IO and CPU utilization monitoring
  - either system performance monitor or Linux top events
• AWR/ADDM reports
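As a starting point, the staging table count and the current EP_LOAD related settings can be pulled with two queries. A minimal sketch: t_src_sales is the usual sales staging table, and the pname/pval key/value layout assumed for DB_Params should be verified against your schema:

SELECT COUNT(*) FROM t_src_sales;

SELECT pname, pval
FROM db_params
WHERE LOWER(pname) LIKE '%ep_load%';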


Data Loading, Integration:

Integration Errors:
• Collaborator & Integration log (enable SQL logging: appserver.sql, appserver.update.sql)
• Concurrent request logs (collaborator.login.user)
• apscheck.sql (Note 246150.1)


Integration Performance:
• Count of data in the staging tables
• APS_Params
  – Integration interface related parameters
• DB HW definitions
• System memory, disk IO and CPU utilization monitoring
  - either system performance monitor or Linux top events
• AWR/ADDM reports


DB deadlock / internal database error:
• Collaborator log
• Database udump trace files (USER_DUMP_DEST specifies the path of the directory where the DB server writes debugging trace files on behalf of a user process; see the lookup below)
• Enable the LOG_IT process on selected Demantra stored procedures, as defined in the LOG_IT_PARAMS table
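To find the udump directory on the database server, the standard Oracle parameter lookup applies:

SELECT value
FROM v$parameter
WHERE name = 'user_dump_dest';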


Web Application troubleshooting:
• The Demantra Web application includes two components:
• Application Server
  – the Demantra Web application deployment inside a Web server.
• Client
  – the user side browser and resources (CPU, memory) running the Sun Java Plug-in.
• They are connected by a network (LAN, WAN, ...); the SQL*Net log can help with network-level issues.
• For better troubleshooting we need to understand where the problem is: on the server side, the client side, or both.


Application out of memory errors:
• Identify where the problem is by using logs:
• Collaborator log
  – server side log.
• Plug-in log
  – client side log.
• Server side issues:
• Heap dump from the web server
• Thread dump from the web server (in case of application freeze or slowdown)
• Application server memory configuration (-Xmx)
• Count of level members (is level member caching needed?)
• Client side issues:
• Heap dump from the Java plug-in
• Thread dump from the Java plug-in
• Plug-in memory configuration (-Xmx)


WS performance and errors:
• Identify where the problem is:
• Server side
  – AWR/ADDM reports, DB side CPU and memory, application server CPU and configuration, relevant APS_PARAMS parameters.
• Client side
  – CPU utilization (and for how long).
• Server side issues:
• Demantra Index Advisor report
• Thread dump (in case of application freeze or slowdown)
• System memory, disk IO and CPU utilization monitoring
  - either system performance monitor or Linux top events
• DB HW definitions
• Number of concurrent users at the time (concurrent user log)
• Also see Level Member Caching Purpose Performance Guidance (Doc ID 1627689.1)
• Also see Oracle Demantra Worksheets Caching, Details how the Caching Functionality can be Leveraged to Potentially Reduce Performance (Doc ID 1627652.1)

 

Client side issues:
• Plug-in -Xmx setting
• WS wizard panels screen shots
• Client side log
  – if requested (number of combinations in the WS)


Engine errors:
Always provide a full engine log (manager and engine) with the relevant logging option.

DB errors:
• Engine log with SQL logging
  – depends on where the DB error is: engine manager or engine2k.

Processor error:
• Engine log with DebugPro logging, groups 100/110

Engine out of memory errors:
• Number of tasks
• Count of active combinations (SELECT COUNT(1) FROM mdp_matrix WHERE prediction_status = 1)

Engine not running:
• On Linux
  – application server log

Engine performance:
• AWR report
• Check whether CPU hyper-threading is enabled on the engine servers
  – disable it if enabled
• Engine blades/server + DB HW definitions
• DB/engine blades memory, disk IO and CPU utilization monitoring
  - either system performance monitor or Linux top events
• For more information see Oracle Support note 800030.1, the "Demantra Engine Troubleshooting" presentation (April 6th, 2011)


Configure auditing capabilities for BM configuration changes:
• This is a customization enhancement introduced by the DEV application team in a white paper (“Implementing Demantra Audit Log”)
  - Configuring an Audit Log in Demantra Using Database Triggers (Doc ID 1629974.1)
    * Currently captures inserts/updates on the Sessions/Computed_fields tables via a trigger based process
• The mechanism and idea can easily be extended to further metadata audit opportunities
• Difference from BAL diff
  – this mechanism captures all the changes, not just how the latest change differs between schemas
• The logs are kept in a separate table (AUDIT_LOG), so there is no impact on the out of the box schema.
• The AUDIT_LOG table includes the following columns (a sketch of the trigger mechanism follows this list):
  - log_date
  - session_id
  - os_user
  - host_name
  - module
  - user_id
  - object_type
  - object_name
  - old_value
  - new_value
  - message
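As an illustration of the trigger-based mechanism, here is a minimal sketch of an audit trigger. The AUDIT_LOG columns follow the list above; the audited column (description) is a hypothetical placeholder, since the real trigger bodies are defined in the white paper:

CREATE OR REPLACE TRIGGER computed_fields_audit_trg
AFTER UPDATE ON computed_fields
FOR EACH ROW
BEGIN
  -- SYS_CONTEXT supplies the session details for the audit row.
  INSERT INTO audit_log (log_date, session_id, os_user, host_name, module,
                         user_id, object_type, object_name,
                         old_value, new_value, message)
  VALUES (SYSDATE,
          SYS_CONTEXT('USERENV', 'SESSIONID'),
          SYS_CONTEXT('USERENV', 'OS_USER'),
          SYS_CONTEXT('USERENV', 'HOST'),
          SYS_CONTEXT('USERENV', 'MODULE'),
          SYS_CONTEXT('USERENV', 'SESSION_USER'),
          'TABLE',
          'COMPUTED_FIELDS',
          :OLD.description,   -- hypothetical audited column
          :NEW.description,   -- hypothetical audited column
          'Computed field updated');
END;
/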

Wednesday Mar 12, 2014

Troubleshooting Strategy for Processes that Execute Enormous Data Sets in Demantra

Hello Demantra Customers!   There is a new white paper that discusses large data set loading.

See MOS Note 1634925.1

Recently we have encountered a few scenarios where customers tried to process huge amounts of data within a Demantra process.
Examples:

  • A customer wanted to load over 50M records via the EP_LOAD process on a weekly basis.
  • A customer wanted to process millions of rows via the BLE process on a daily basis.
  • A customer wanted to load 40M rows via an integration interface on a weekly basis.


In all of the above, the customer complained about the system's inability to process the amount of data: either it was too slow, or the process simply could not be completed at all and erred out.
In all of the above examples an escalated defect was logged, and development worked on troubleshooting the problem.

An additional common theme across these examples, and many more, is that there was no real need for a product fix: although performance improvement opportunities may have been identified, they were not the final solution that addressed the problem.
What helped was understanding that there is no real need to process all the data, and recognizing the data that really needed to be loaded.  The data could have been loaded in a more efficient way using some best practices and creative thinking.

The scope of this document is to provide some guidance on how to troubleshoot such customer situations.
The document focuses on three main areas:

  • Learning and understanding the business needs
  • Understanding the actual data that needs to be loaded, and the gap between that number and the number of rows the customer actually tries to load
  • Providing some best practices that can help the customer work with the data efficiently


This document does not deal with the initial data load processes, although some of the best practices can be adopted for such situations as well.
We will illustrate the concepts in this document using three real life examples of service requests and/or defects that were logged on behalf of customers.

 

Thursday May 23, 2013

Have you heard about the Partial Extract Tool? Do you use the Partial Extract Tool?

Hello Demantra Users!  There has been a complete rewrite of the Demantra Data Extraction Tool guide.  Check out Demantra Non Invasive Partial Schema Export Utility MOS Note 1448266.1. 

Friday Dec 14, 2012

The Column Prediction_Status, MDP_Matrix and Engine. How are they Related? Understand Prediction_status Values

Do you know what these values are telling you?

COUNT(*) PREDICTION_STATUS DO_FORE DO_AGGRI AGGRI_98 AGGRI_99 LEVEL_ID
19854 99 1 1 1 1 3
1077 99 0 1 1 1 0
262691 99 1 1 -1
56 99 0 1 1 1 2
1 98 1 1 1 1
1 99 0 1 1 1
748796 1 1 1 4
351633 1 1 1 1 1 2
1877829 97 1 1 4
840 99 1 1 1 1
27 99 0 1 1 1 3
1 97 1 1 -1
66712 99 1 1 1 1 2
53213 1 1 1 1 1 3
2560 98 1 1 4

 

Check out The Column Prediction_Status, MDP_Matrix and Engine. How are they Related? Understand Prediction_status Values (Doc ID 1509754.1)

This customer is adding processing burden while adding no value.  The incoming data should be scrubbed to eliminate the overhead.

Friday Sep 14, 2012

Introducing the new Demantra Guided Resolutions!

There is a new method to find your solution called guided resolution or search helper.

Currently, we cover 5 specific topical areas:

Oracle Demantra 7.x and Above Data Loading Guided Resolution Note 1461899.1

Oracle Demantra Load Data Issues Not Necessarily Error Code Related Guided Resolution Note 1468884.1

Oracle Demantra 7.x and Above Workflow Issues Guided Resolution Note 1353217.1

Oracle Demantra 7.x and Above Worksheet Related Guided Resolution Note 1486639.1

Oracle Demantra 7.x and Above Engine Related Guided Resolution Note 1486634.1


The guides will be updated with the latest proven solutions on a regular basis, keeping the content relevant.


What is a guided resolution?   What is the best practice for using a guided resolution?

How to Use the Demantra Guided Resolution, a Proactive Services Tutorial.  Note 1473992.1

Thursday Sep 06, 2012

Using DB_PARAMS to Tune the EP_LOAD_SALES Performance

The DB_PARAMS table can be used to tune EP_LOAD_SALES performance.  The
AWR report supplied shows 16 CPUs, so I imagine you can run with 8 or more
parallel threads.  This can be done by setting the following DB_PARAMS
parameters.  Note that most of the parameter changes simply change a 2 or 4 into
an 8 (a sketch of applying one of these settings follows the list):

DBHintEp_Load_SalesUseParallel = TRUE
DBHintEp_Load_SalesUseParallelDML = TRUE
DBHintEp_Load_SalesInsertErr = + parallel(@T_SRC_SALES@ 8) full(@T_SRC_SALES@)
DBHintEp_Load_SalesInsertLd  = + parallel(@T_SRC_SALES@ 8)
DBHintEp_Load_SalesMergeSALES_DATA = + parallel(@T_SRC_SALES_LD@ 8) full(@T_SRC_SALES_LD@)
DBHintMdp_AddUpdateIs_Fictive0SD = + parallel(s 8 )
DBHintMdp_AddUpdateIs_Fictive2SD = + parallel(s 8 )
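For reference, a hedged sketch of applying one of these settings directly in the schema. It assumes DB_PARAMS uses pname/pval key/value columns (verify against your schema), and it follows the earlier advice to rebuild the stored procedures after changing parallelization parameters:

UPDATE db_params
SET pval = '+ parallel(@T_SRC_SALES@ 8) full(@T_SRC_SALES@)'
WHERE pname = 'DBHintEp_Load_SalesInsertErr';

COMMIT;

-- Rebuild the stored procedures so the new hints take effect:
EXEC Call_DM_Build_Procedures;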

Tuesday Aug 14, 2012

Is There a User Interface to Manage LOG_IT Logging Actions?

The only way to configure LOG_IT is to update the data in the LOG_IT_PARAMS table.

First, see if LOG_IT logging is available in that version for the procedure:

SELECT * FROM log_it_params WHERE pname = 'API_CREATE_ORA_DEM_USER';

To update the logging level to 3 for the API procedure, run:

UPDATE LOG_IT_PARAMS
SET logging_level = 3
WHERE pname = 'API_CREATE_ORA_DEM_USER';

COMMIT;

Check that the update is there:

SELECT * FROM log_it_params WHERE pname = 'API_CREATE_ORA_DEM_USER';

This should run a trigger that re-creates LOG_IT with 'API_CREATE_ORA_DEM_USER' added to an IF condition.

When you run the procedure to test, it should create a log table named LOG_API_CREATE_ORA_DEM_USER.

If LOG_IT did not get rebuilt to include 'API_CREATE_ORA_DEM_USER', rebuild it manually:

EXEC BUILD_LOG_IT_PROCEDURE; -- rebuilds LOG_IT
EXEC BUILD_ORDER_COMPILE;    -- builds a list of invalid procedures
EXEC COMPILE_ALL;            -- recompiles all the invalid procedures
 
See document 1408753.1, Demantra LOG_IT Setup Execution Explanation ORA Errors Detailed Log Production.

Shipment and Booking History - Self Service Load Not Populating t_src_item_tmpl

I am running EBS Collections: Legacy Systems > Shipment and Booking History - Self Service to load DemHistory.dat sample data.
Only the t_src_sales_tmpl table is populated; t_src_item_tmpl and t_src_loc_tmpl are not populated.  This is expected, as I do
not have the same items and locations on EBS, since this is sample data.

As per the Implementation Guide, prior to launching this collection, complete ASCP collections for the legacy instance.  I followed
document 402222.1 and downloaded the QATemplate.  It includes many .dat files such as Category.dat, item.dat, ...

Questions:
1. Would you please advise if I need to load each set of *.dat files in order to load booking & shipment data?  Or only specific .dat
files like TradingPartner.dat and Item.dat?

2. It seems the sample data from TradingPartner.dat does not match what is in the DemHistory.dat sample file. Is this expected?

Answer:
-------
Yes, for a legacy instance, the user first needs to load the reference data - Items, Trading Partners, Trading Partner Sites, Demand Classes, Sales Channels.
For hierarchies, they need to load Item Categories, Regions and Zones, etc.

1. The user needs to load only the .dat files required for the reference data.
2. If the question is that the examples given in TradingPartner.dat and DemHistory.dat do not match each other, then yes, that is expected.

   For the actual data load, load the customers and sites first, and then load the sales data for them using DemHistory.dat.

Loading CTO Model Without Option Classes - is it Possible?

Is it possible to load a CTO Model and Options without the Option Classes on the BOM, using the standard integration?


Answer:
-------
Please evaluate the following option:

Set the profile option 'MSD_DEM: Calculate Planning Percentage' to 'Yes, for "Consume & Derive" Options only'.
With this profile option setting, the standard integration brings in only the Models and Options.

Option Classes are not collected into Demantra.  The options are rolled up to the model, skipping the in-between option classes.

Note:
  a. The option class item attributes should be set as usual.
  b. Publishing planning factors is not supported with this profile option.  You can only publish the total demand
     (independent forecast + dependent forecast) to Advanced Supply Chain Planning (ASCP).

Purging History for Net Change and Complete Refresh Collections

Data Load Real Time Customer Question:

When we run Shipment and Booking History Download, the date range of the data profile Purge History Data is updated at run time with
the date range chosen while running the concurrent program Shipment and Booking History Download.

We noticed that every time we run net change collections, the date parameters on the Purge History integration interface are set based on
the dates chosen in the concurrent program Shipment and Booking History Download.
 
But when we run the concurrent program in Complete Refresh mode, the dates on the Purge History integration interface are set with dates
which fall outside the history, and hence no history is purged.  Is this the intended design, that history will not be purged during Demantra
Complete Refresh collections, and only during Net Change collections?


To answer your question, history should be purged for both net-change and complete refresh collections.  If this is not working, please
contact Oracle Support.

Note: the complete refresh collection is not meant for regular weekly runs.  Ideally it should be run only once (bootstrap), and that too
only if you want to collect all the data from Order Management (OM) into Demantra.

You should run only net-change collection for regular runs.  Also, for a bootstrap data load, if OM has more data than required for Demantra,
a net-change collection with an appropriate date range should be run.

Tuesday Jul 10, 2012

Data Loading Issues? Try the new Demantra Data Load Guided Resolution

Hello!   Do you have data loading issues?  Perhaps you are trying the new partial schema export tool.  

New to Demantra: the Data Load Guided Resolution, document 1461899.1.  This interactive guide will help you quickly locate known solutions to previously discovered issues, from performance, ORA and ODPM errors to collections related issues that have no hard error number.  The guide covers the diagnosis of both data being imported into Demantra and data being exported from Demantra.

Contact me with any questions or suggestions.   Thank You!

Tuesday Jun 26, 2012

Latest EP_LOAD_SALES Performance Improvement for 7.3.1.4 and 12.2

Hello!   Take a look at the latest EP_LOAD_SALES solution.  New for 7.3.1.4 and 12.2, there are 3 additional parameters to control EP_LOAD_SALES.

Wednesday Apr 25, 2012

Partial Database Schema Update

The Non Invasive Partial Schema Export Utility Extracting Partial Schema Dumps for Analysis (Doc ID 1448266.1) is now available in a patch.

Using MOS, download patch 13930627.

Regards!

Sunday Apr 15, 2012

Tired of Downloading Your Entire Schema? How About a Partial Download?

Proceed to document 1448266.1, Demantra Non Invasive Partial Schema Export Utility Extracting Partial Schema Dumps for Analysis.  In this document you will find instructions and 2 SQL scripts.  Give it a try.  If you have a problem, feel free to post to this BLOG or the community located at:

https://communities.oracle.com/portal/server.pt/community/demantra_solutions/231

Regards!   Jeff
