Tuesday Aug 14, 2012

Is There a User Interface to Manage LOG_IT Logging Actions?

The only way to configure LOG_IT is to update the data in the LOG_IT_PARAMS table.

First, check whether LOG_IT logging is available in your version for the procedure:

SELECT * FROM log_it_params WHERE pname = 'API_CREATE_ORA_DEM_USER';

To set the logging level to 3 for the API procedure, run:

UPDATE LOG_IT_PARAMS
SET logging_level = 3
WHERE pname = 'API_CREATE_ORA_DEM_USER';

COMMIT;

Check that the update took effect:

SELECT * FROM log_it_params WHERE pname = 'API_CREATE_ORA_DEM_USER';

This should fire a trigger that re-creates the LOG_IT procedure with 'API_CREATE_ORA_DEM_USER' added to an IF condition.
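
For illustration only, the rebuilt LOG_IT would then contain a branch along these lines; every identifier below is hypothetical, not the actual generated code:

-- Hypothetical sketch of the generated branch (names illustrative only)
IF p_proc_name = 'API_CREATE_ORA_DEM_USER' THEN
   -- write the message to the procedure's dedicated log table
   INSERT INTO log_api_create_ora_dem_user (log_date, log_text)
   VALUES (SYSDATE, p_message);
END IF;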

When you run the procedure to test it, a log table named 'LOG_API_CREATE_ORA_DEM_USER' should be created.
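
You can confirm that the log table was created with a standard dictionary query (run as the schema owner):

SELECT table_name FROM user_tables WHERE table_name = 'LOG_API_CREATE_ORA_DEM_USER';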

If LOG_IT was not rebuilt to include 'API_CREATE_ORA_DEM_USER', rebuild it manually:

EXEC BUILD_LOG_IT_PROCEDURE;  -- rebuilds LOG_IT
EXEC BUILD_ORDER_COMPILE;     -- builds a list of invalid procedures
EXEC COMPILE_ALL;             -- recompiles all the invalid procedures
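
After the recompile, a standard dictionary check confirms that nothing is still invalid:

SELECT object_name, object_type
FROM user_objects
WHERE status = 'INVALID'
ORDER BY object_type, object_name;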
 
See document 1408753.1, Demantra LOG_IT Setup Execution Explanation ORA Errors Detailed Log Production.

Shipment and Booking History - Self Service Load Not Populating t_src_item_tmpl

I am running EBS Collections: Legacy Systems > Shipment and Booking History - Self Service to load the DemHistory.dat sample data.
Only the t_src_sales_tmpl table is populated; t_src_item_tmpl and t_src_loc_tmpl are not.  This is expected, as I do not
have the same items and locations on EBS, since this is sample data.

As per the Implementation Guide, prior to launching this collection you must complete ASCP collections for the legacy instance.  I followed
document 402222.1 and downloaded QATemplate.  It includes many .dat files such as Category.dat, Item.dat, and so on.

Questions:
1. Would you please advise whether I need to load each set of *.dat files in order to load booking and shipment data, or only specific .dat
files like TradingPartner.dat and Item.dat?

2. It seems the sample data in TradingPartner.dat does not match what is in the DemHistory.dat sample file. Is this expected?

Answer:
-------
Yes, for a legacy instance you first need to load the reference data: Items, Trading Partners, Trading Partner Sites, Demand Classes, and Sales Channels.
For hierarchies, you need to load Item Categories, Regions, Zones, and so on.

1. You need to load only the .dat files required for the reference data.
2. If the question is that the example data in TradingPartner.dat does not match the example data in DemHistory.dat, then yes, that is expected.

   For the actual data load, load the customers and sites first, and then load the sales data for them using DemHistory.dat.
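
As a quick sanity check after loading the reference data and running the collection, row counts on the staging tables show what was populated (a minimal sketch; t_src_loc_tmpl is assumed here to be the location staging table):

SELECT 'T_SRC_ITEM_TMPL' AS tbl, COUNT(*) AS row_cnt FROM t_src_item_tmpl
UNION ALL
SELECT 'T_SRC_LOC_TMPL', COUNT(*) FROM t_src_loc_tmpl
UNION ALL
SELECT 'T_SRC_SALES_TMPL', COUNT(*) FROM t_src_sales_tmpl;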

Loading CTO Model Without Option Classes - Is it Possible?

Is it possible to load a CTO Model and its Options without the Option Classes on the BOM, using the standard integration?


Answer:
-------
Please evaluate the following option:

Set the profile option 'MSD_DEM: Calculate Planning Percentage' to 'Yes, for "Consume & Derive" Options only'.
With this setting, the standard integration brings in only the Models and Options.

Option Classes are not collected into Demantra.  The Options are rolled up to the Model, skipping the intermediate Option Classes.

Note:
  a. The option class item attributes should be set as usual.
  b. Publishing planning factors is not supported with this profile option setting.  You can only publish the total demand
     (independent forecast + dependent forecast) to Advanced Supply Chain Planning (ASCP).
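
To confirm the current site-level value of the profile option, a query along these lines against the standard EBS FND tables can be used (a sketch only; verify the joins in your release):

SELECT t.user_profile_option_name, v.profile_option_value
FROM fnd_profile_options_tl t,
     fnd_profile_options o,
     fnd_profile_option_values v
WHERE t.profile_option_name = o.profile_option_name
  AND o.profile_option_id = v.profile_option_id
  AND o.application_id = v.application_id
  AND v.level_id = 10001  -- 10001 = site level
  AND t.language = USERENV('LANG')
  AND t.user_profile_option_name = 'MSD_DEM: Calculate Planning Percentage';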

Purging History for Net Change and Complete Refresh Collections

Data Load Real Time Customer Question:

When we run Shipment and Booking History Download, the date range of the data profile Purge History Data is updated at run time with
the date range chosen while running the concurrent program, Shipment and Booking History Download.

We noticed that every time we run net-change collections, the date parameters on the Purge History Integration Interface are set based on
the dates chosen in the concurrent program Shipment and Booking History Download.
 
But when we run the concurrent program in Complete Refresh mode, the dates on the Purge History Integration Interface are set to dates
that fall outside the history, and hence no history is purged.  Is it the intended design that history will not be purged during Demantra Complete
Refresh collections, but only during Net Change collections?


To answer your question, history should be purged for both net-change and complete refresh collections.  If this is not working, please contact Oracle
Support.

Note that the complete refresh collection is not meant for regular weekly runs.  Ideally it should be run only once (as a bootstrap), and even then
only if you want to collect all the data from Order Management (OM) into Demantra.

You should run only the net-change collection for regular runs.  Also, for the bootstrap data load, if OM has more data than required for Demantra,
run a net-change collection with an appropriate date range.
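
To confirm whether a purge actually removed old history, checking the date range remaining in SALES_DATA is a simple sanity check:

SELECT MIN(sales_date) AS oldest_history, MAX(sales_date) AS newest_history
FROM sales_data;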

Thursday Aug 02, 2012

Are you Implementing Partitions for Demantra? Consider These Points.

1) Partition columns must be a subset of the primary key columns.

2) Each partition should have its own tablespace.  All partitions of a partitioned object must reside in tablespaces of a single block size.

3) Each of the large tables should have its own tablespace.

4) Set the following parameters so that the Analytical Engine can find the partition on which any combination resides
   (a verification query appears in the sketch after this list):

    Parameter            Purpose
    PartitionColumnItem  Specifies the name of the column that partitions the data by item.
    PartitionColumnLoc   Specifies the name of the column that partitions the data by location.

    Note: When the SALES_DATA table is not partitioned by a level column, you need to set:

    UPDATE init_params_0
    SET value_string = '<partition column name>'
    WHERE pname IN ('PartitionColumnItem', 'PartitionColumnLoc');


5) Compute the optimal PCTFREE, PCTUSED and INITRANS values for the tables.

6) Ensure that the Schema statistics are up to date.

7) When creating partitions, consider your main worksheet levels.  Does your primary key follow the worksheet levels?  Partitions should also follow your worksheet levels and primary key.  If you have several worksheets that have different levels, weigh your options according to usage.
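
A minimal sketch tying points 4 through 6 together (the DEMANTRA schema name and the PCTFREE/INITRANS values shown are assumptions; adjust them to your install):

-- Point 4: verify the partition-column parameters
SELECT pname, value_string
FROM init_params_0
WHERE pname IN ('PartitionColumnItem', 'PartitionColumnLoc');

-- Point 5: example of setting block-space parameters on a large table
ALTER TABLE sales_data PCTFREE 10 INITRANS 4;

-- Point 6: refresh schema statistics (schema name is an assumption)
EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'DEMANTRA', cascade => TRUE);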

Also, please review the following My Oracle Support documents:

Oracle Demantra Implementing Partitions for Performance (Doc ID 1227173.1)
Demantra Performance Overview and Recommendations High Impact Discussion Points (Doc ID 1162795.1)
Partitioned Sales_data Table But Engine Run Is Slower (Doc ID 1331482.1)
