Friday Sep 27, 2013

New Demanta Performance and Setup Analyzer on the way!

Hello Demantra Customers!  A new tool is being developed in Oracle Proactive Services: the Demantra Performance and Setup Analyzer.  We are reviewing the data points now, and release is scheduled for November 2013.  Check out the areas of coverage below.  Any comments?  Please post.  Regards, Jeff

Some parameters are listed twice to deliver in-context results.

========================================
========================================
VERIFICATION
========================================
========================================
Verify database name

Verify Database User (DBUser)

Present the tnsnames.ora file to verify settings

Verify the ds.ini file

Check the Demantra version

Check that the RIGHT PRIVILEGES are GRANTED

On the engine server, echo $ENGINE_ROOT, $ORACLE_HOME, $PATH, $LD_LIBRARY_PATH, $JAVA_HOME

Verify profiles 'MSD_DEM: Schema' and 'MSD_DEM: Host URL'

Verify profile MSD_DEM: Debug Mode

SQL to reveal the setting:
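
For example, one possible query, run against the EBS database as the APPS user (the standard FND profile tables are assumed):

SELECT p.user_profile_option_name, v.level_id, v.profile_option_value
FROM fnd_profile_options_vl p,
     fnd_profile_option_values v
WHERE p.profile_option_id = v.profile_option_id
  AND p.user_profile_option_name IN
      ('MSD_DEM: Schema', 'MSD_DEM: Host URL', 'MSD_DEM: Debug Mode');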

========================================
========================================
PROCESSING
========================================
========================================

Are you running Multiple Batch Processes in Parallel?

Check for active combinations and combinations that could be archived in the BIIO tables

Number of Active combinations in MDP_MATRIX
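
As a sketch, assuming the standard convention that PREDICTION_STATUS = 1 marks an active combination:

SELECT COUNT(*) AS active_combinations
FROM mdp_matrix
WHERE prediction_status = 1;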

Improving Your Batch Forecast

Tasks and Branches, TreeHierarchyTest.exe

Managing Simulations

MDP_MATRIX - ACTIVE COMBINATIONS and ENGINE PERFORMANCE

Find Missing Dates
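
One possible way to spot gaps, assuming a weekly base time unit in SALES_DATA (adjust the 7 to your bucket size):

SELECT prev_date AS gap_starts_after, sales_date AS next_date
FROM (SELECT sales_date,
             LAG(sales_date) OVER (ORDER BY sales_date) AS prev_date
      FROM (SELECT DISTINCT sales_date FROM sales_data))
WHERE sales_date - prev_date > 7;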

========================================
PARAMETERS
========================================

Verify the basic setup.

Verify sys_params: AppServerURL, AppServerLocation
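
A quick check (SYS_PARAMS is the standard Demantra parameter table):

SELECT pname, pval
FROM sys_params
WHERE pname IN ('AppServerURL', 'AppServerLocation');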

threadpool.update.data.manual.size

threadpool.update.table.manual.size

threadpool.update.comb.manual.size

threadpool.update.record.manual.size

MaxDBConnections

Debug Mode

tunnel.client.maxConnections 

threadpool.update.comb.batch.size

threadpool.update.record.batch.size

MaxUpdateThreads Check

Is Auditing Turned on?

MaxSqlInExpressionTokens

StatsLowRowLimit

Import Block Size

enginestarter port

Are any engine libraries missing?

Are the permissions set correctly for Engine.exe?

========================================
========================================
Database Objects
========================================
========================================

Check for corruption
- ANALYZE TABLE SALES_DATA VALIDATE STRUCTURE CASCADE FAST;
- ANALYZE TABLE MDP_MATRIX VALIDATE STRUCTURE CASCADE FAST;
- ANALYZE TABLE PROMOTION_DATA VALIDATE STRUCTURE CASCADE FAST;

Implementing Parallelization

Chained Row count
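
A sketch for counting chained rows.  Note that CHAIN_CNT in USER_TABLES is populated by ANALYZE, not by DBMS_STATS; COMPUTE can be slow on large tables, and you may want to re-gather optimizer statistics with DBMS_STATS afterward:

ANALYZE TABLE sales_data COMPUTE STATISTICS;
SELECT table_name, num_rows, chain_cnt
FROM user_tables
WHERE table_name IN ('SALES_DATA', 'MDP_MATRIX', 'PROMOTION_DATA');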
 
Should you rebuild tables by Primary Key (PK)?

Partitioning

INITRANS

PCTFREE, PCTUSED

Excess Rows

Not Null Columns

Indexes, Unused

========================================
========================================
Database Parameters
========================================
========================================

DB_BLOCK_SIZE

OPTIMIZER_INDEX

Optimizer_Mode

Cursor_Sharing

db_file_multiblock_read_count (MBRC)

SQL Plan Baselines

parallel_dml

SQLNET.EXPIRE_TIME

========================================
========================================
Statistics
========================================
========================================

Verify the health of large tables
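
For example, a quick look at the size and statistics freshness of the big tables:

SELECT table_name, num_rows, blocks, last_analyzed
FROM user_tables
WHERE table_name IN ('SALES_DATA', 'MDP_MATRIX', 'PROMOTION_DATA')
ORDER BY num_rows DESC;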

System Statistics

Gathering Statistics
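
A minimal sketch using DBMS_STATS (the schema name 'DEMANTRA' is an assumption; substitute your own):

BEGIN
  dbms_stats.gather_table_stats(
    ownname          => 'DEMANTRA',  -- assumed schema name
    tabname          => 'SALES_DATA',
    estimate_percent => dbms_stats.auto_sample_size,
    cascade          => TRUE);       -- gather index statistics as well
END;
/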

========================================
========================================
Java Application Server, Client Cache
========================================
========================================

What is the Java version on the application server and client?

Do your workstations have the Java runtime parameters set up to support the memory of their system, per Note <>?

Clearing Cache could help performance
- we discuss all tiers

========================================
========================================
Logs, Display bottom 75 lines
========================================
========================================

DB_EXCEPTION_LOG

Do you have an engine2k.log file?

EngineManager.log

collaborator.log

EngineManagerPreRunLog.txt

DB_TIMING_LOG

DB_SECTION_LOG

========================================
========================================
Middle Tier
========================================
========================================

Cache

Clearing Server Caches

Limit Number of retained files

Verify the Connection Pool

connectionTimeout

========================================
========================================
Engine
========================================
========================================

Number of Engines, ComputerNames, localhost

Is logging turned on for the engine?

Engine Profiles

Verify date formats
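
One way to see the database-level date settings, to compare against what the engine expects:

SELECT parameter, value
FROM nls_database_parameters
WHERE parameter IN ('NLS_DATE_FORMAT', 'NLS_DATE_LANGUAGE');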

Clearing Forecast Values

Verify that the six system parameters specifying engine used tablespaces are set correctly
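
As a sketch, the tablespace-related parameters can be surfaced from SYS_PARAMS; the LIKE pattern below is only a guess intended to catch all six, since the exact parameter names vary by version:

SELECT pname, pval
FROM sys_params
WHERE LOWER(pname) LIKE '%space%'
ORDER BY pname;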

Verify engineplatform

Verify enginebaseURL

========================================
========================================
$ENGINE_ROOT/bin/Settings.xml file
========================================
========================================

EngineUnixPortConfig

EngUnixDebugMode

ComputerNames, localhost

Is the Engine Running?  Where?

Verify SQLLDR

OCCITest

Present permissions for DS_CONFIG.sh

Present permissions for $ENGINE_ROOT/lib and $ENGINE_ROOT/bin

Do you have at least one combination that meets the rule for Insert_Units to work on and insert rows into the future?

Are you viewing the correct series?

Are you using the correct profile?

The parameter MaxEngMemory was introduced in 7.3.0.1.

Are you using promotions?

Are you set up for a distributed engine?

Verify settings.xml EngineUnixPortConfig

Verify the Logging Level of the Run Time Environment

Determining if the Engine Failed

Verify Null Pn Rows/Columns

========================================
========================================
Forecast Specific
========================================
========================================

TargetTaskSize

BranchIDMultiple

========================================
========================================
Rolling Updates
========================================
========================================

RollingUpdatesUseJobQueues

RollingUpdatesMaxQueues

RollingUpdatesTimeout

RollingUpdatesRenameColumns

RunInsertUnits

========================================
========================================
Worksheet Specific
========================================
========================================

MaxAvailableFilterMembers

AppServerURL

MaxSqlInExpressionTokens

worksheet.full.load

client.worksheet.calcSummaryExpressions

Excess Rows

threadpool.query_run.per_user

threadpool.query_run.size

worksheet.data.comb.block_size

ApprovalProcessScope

Review your worksheet hints

Configurable Combinations in the Worksheet
- Client.uilimitations.maxcombs.ws
- Client.uilimitations.maxcells.ws
- Client.uilimitations.maxcells
- Client.uilimitations.maxdiskspace
- Client.uilimitations.warning

========================================
========================================
Data Loading
========================================
========================================

Verify any data loading errors
- select * from BIIO_SUPPLY_PLANS_err;
- select * from BIIO_SUPPLY_PLANS_POP_err;
- select * from BIIO_OTHER_PLAN_DATA_ERR;
- select * from t_src_item_tmpl_err;
- select * from t_src_loc_tmpl_err;
- select * from t_src_sales_tmpl_err;

Monitor Data Load Process

Ep_Load_Sales_LoadNullActualQty

Ep_Load_Sales_LoadFromStagingTableDirectly

Ep_Load_Sales_DisableEnableTriggers

Ep_Load_Sales_SALES_DATA_Merge_LoopControl

ep_load_do_commits

========================================
========================================
Workflow
========================================
========================================

Workflow Status

Verify your schema

========================================
========================================
insert_units
========================================
========================================

Verify Combinations for insert_units

========================================
========================================
Proport in Parallel
========================================
========================================

ProportTableLabel

ProportRunsInCycle

ProportParallelJobs
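
These can be read straight from SYS_PARAMS, for example:

SELECT pname, pval
FROM sys_params
WHERE pname IN ('ProportTableLabel', 'ProportRunsInCycle', 'ProportParallelJobs');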

========================================
========================================
appserver properties.bat
========================================
========================================

Display the appserver properties.bat
- Displays the file and delivers a detailed explanation of the settings

Tuesday Aug 14, 2012

Is There a User Interface to Manage LOG_IT Logging Actions?

The only way to configure LOG_IT is to update the data in the LOG_IT_PARAMS table.

First, see if LOG_IT logging is available in your version for the procedure:

SELECT * FROM log_it_params WHERE pname = 'API_CREATE_ORA_DEM_USER';

To update the logging level to 3 for the API procedure, run:

UPDATE LOG_IT_PARAMS
SET logging_level = 3
WHERE pname = 'API_CREATE_ORA_DEM_USER';

COMMIT;

Check that the update is in place:
SELECT * FROM log_it_params WHERE pname = 'API_CREATE_ORA_DEM_USER';

This should run a trigger that re-creates LOG_IT with 'API_CREATE_ORA_DEM_USER' added to an IF condition.

When you run the procedure to test, it should create a log table named 'LOG_API_CREATE_ORA_DEM_USER'.

If LOG_IT did not get rebuilt to include 'API_CREATE_ORA_DEM_USER', rebuild it manually:

EXEC BUILD_LOG_IT_PROCEDURE;  -- rebuilds LOG_IT
EXEC BUILD_ORDER_COMPILE;     -- makes a list of invalid procedures
EXEC COMPILE_ALL;             -- recompiles all the invalid procedures
 
See document 1408753.1, Demantra LOG_IT Setup Execution Explanation ORA Errors Detailed Log Production.

Shipment and Booking History - Self Service Load Not Populating t_src_item_tmpl

I am running EBS Collections: Legacy Systems > Shipment and Booking History - Self Service to load the DemHistory.dat sample data.
Only the t_src_sales_tmpl table is populated; t_src_item_tmpl and t_src_loc_tmpl are not.  This is expected, as I do
not have the same items and locations in EBS, since this is sample data.

Per the Implementation Guide, prior to launching this collection, complete ASCP collections for the legacy instance.  I followed
document 402222.1 and downloaded the QATemplate.  It includes many .dat files such as Category.dat, item.dat, and so on.

Questions:
1. Would you please advise whether I need to load each set of *.dat files in order to load booking and shipment data, or only specific .dat
files like TradingPartner.dat and Item.dat?

2. It seems the sample data from TradingPartner.dat does not match what is in the DemHistory.dat sample file.  Is this expected?

Answer:
-------
Yes, for a legacy instance, the user first needs to load the reference data: Items, Trading Partners, Trading Partner Sites, Demand Classes, and Sales Channels.
For hierarchies, they need to load Item Categories, Regions, Zones, and so on.

1. The user needs to load only the .dat files required for the reference data.
2. If the question is whether the example data given in TradingPartner.dat and DemHistory.dat do not match, then yes, that is expected.

   For the actual data load, load the customers and sites first, and then load the sales data for them using DemHistory.dat.

Loading CTO Model Without Option Classes - is it Possible?

Is it possible to load a CTO Model and Options without the Option Classes on the BOM, using the standard integration?


Answer:
-------
Please evaluate the following option:

Set the profile option 'MSD_DEM: Calculate Planning Percentage' to 'Yes, for "Consume & Derive" Options only'.
With this profile option setting, the standard integration brings in only the Models and Options.

Option Classes are not collected into Demantra.  The options are rolled up to the model, skipping the in-between option classes.

Note:
  a. The option class item attributes should be set as usual.
  b. Publish planning factors is not supported with this profile option.  You can only publish the total demand
     (independent forecast + dependent forecast) to Advanced Supply Chain Planning (ASCP).

Purging History for Net Change and Complete Refresh Collections

Data Load Real Time Customer Question:

When we run Shipment and Booking History Download, the date range of the data profile Purge History Data is updated at run time with
the date range chosen while running the concurrent program Shipment and Booking History Download.

We noticed that every time we run net-change collections, the date parameters on the Purge History Integration Interface are set based on
the dates chosen in the concurrent program Shipment and Booking History Download.

But when we run the concurrent program in Complete Refresh mode, the dates on the Purge History Integration Interface are set with dates
which fall outside the history, and hence no history is purged.  Is it the intended design that history will not be purged during Demantra Complete
Refresh collections, and only during Net Change collections?


To answer your question, history should be purged for both net-change and complete refresh collections.  If this is not working, please contact Oracle
Support.

Note that the complete refresh collection is not meant for regular weekly runs.  Ideally it should be run only once (bootstrap), and
only if you want to collect all the data from Order Management (OM) into Demantra.

You should run only the net-change collection for regular runs.  Also, for the bootstrap data load, if OM has more data than is required for Demantra,
run a net-change collection with an appropriate date range.
