Friday May 22, 2015

Demantra Worksheet Performance - A summary guide at Customer Request

Worksheet performance: there are dozens of notes, and it can be challenging to find the best approach.

  • If you are on 7.3.1.4 or greater, see the following three notes.  Upgrade to the latest version of TABLE_REORG.  Run TABLE_REORG with the 'T' option and review the suggestions in the LOG_TABLE_REORG table.
  • Demantra TABLE_REORG procedure. Did you know that TABLE_REORG has replaced REBUILD_SCHEMA and REBUILD_TABLES? (Doc ID 2005086.1)
    - Demantra TABLE_REORG Tool New Release with Multiple Updates! Partitions, DROP_TEMPS and More! 7.3.1.3 to 12.2.x. (Doc ID 1980408.1)
  • If you hit an error: Demantra TABLE_REORG Procedure Failed ORA- on SALES_DATA MDP_MATRIX PROMOTION_DATA How do I Restart? RUPD$_ MLOG$_ 'Table cannot be redefinitioned' in the LOG_TABLE_REORG table (Doc ID 2006779.1)

I would consider these notes to be the best regarding worksheet performance:

  • Oracle Demantra Worksheet Performance - A White Paper (Doc ID 470852.1)
  • Oracle Demantra Worksheet Performance FAQ/TIPS 7.3+! (Doc ID 1110517.1)
  • Demantra 12.2.4 Worksheet Performance Enhancements Parameter dynamic_hint_enabled, Enable Dynamic Degree of Parallelism Hint for Worksheets. Development Recommended Proper Setup and Use (Doc ID 1923933.1)
  • Demantra Development Suggested Performance Advice Plus Reference Docs (Doc ID 1157173.1)
  • Oracle Demantra Worksheets Caching, Details how the Caching Functionality can be Leveraged to Potentially Increase Performance (Doc ID 1627652.1)
  • The Column Prediction_Status, MDP_Matrix and Engine. How are they Related? Understand Prediction_status Values (Doc ID 1509754.1)

Also, see:
  • Demantra Gathering Statistics on Partitioned Objects Oracle RDBMS 11gR2 (Doc ID 1601596.1)
  • Demantra 11g Statistics New Features and Best Practices Gather Schema Stats (Doc ID 1458911.1)

I would review all parameters mentioned in the docs above and:

1. Monitor the workstation memory consumption and CPU utilization as the worksheet is being loaded.
   * You may have to adjust the memory ceiling for Java
2. Manage MDP_MATRIX.  Are there dead/unused combinations?  When running the engine, you can manage the footprint of the input.  If MDP_MATRIX
   is carrying a sizeable number of dead combinations and/or entries without a matching entry in SALES_DATA, you are increasing processing load.  Check out
   note 1509754.1; the attachment explains the principle.  A sizing SQL sketch appears after this list.
3. Using the notes above, can you cache?  Can you use filters?  Can you use open with? 
   A series can be cached, aggregated by item and cached in the branch_data_items table.  This improves performance of worksheets that are aggregated
   across locations and that do not have any location or matrix filtering.
4. Run the index advisor.  Does it suggest additional indexes? 
5. If you do not have the index advisor, produce an AWR report.  The AWR snapshots should bracket the worksheet open; see the sketch after this list.
   For example, take a starting snapshot.  Wait 10-15 minutes.  Tell the user to open the worksheet.  After the open succeeds, wait 10 minutes.
   Take the ending snapshot.  What are the top SQLs?  What are the contentions?
6. Do you have your large tables on their own tablespaces?  This means each large table has a tablespace to itself, and each large index has a
   tablespace to itself.
7. The worksheet is retrieving rows to display.  Is row chaining causing multiple block reads?  That should be revealed in the AWR report, or run the
   appropriate SQL (a sketch follows this list).
8. Worksheet design is important.  The worksheet designers set up what they need.  However, that does not mean that the worksheet design blends well
   with available processing capabilities.  Know the forecast branch health.  I think this is discussed in note 1509754.1.  The following SQL reveals the
   tree:

   select level_id, count(*)
   from mdp_matrix
   where prediction_status = 1
   group by level_id
   order by level_id;

   If you have one branch at 100,000 combinations and the remaining branches at 5,000 and 10,000, that is a problem.  That would point to a setup/design issue.
   Meaning that if you have branch as a level and it just so happens that one branch indeed has 100,000 combinations while the other two branches account for smaller
   volume, 5,000 and 10,000, the chosen levels of the worksheet need to be revisited.  Perhaps a level lower than branch is better suited to
   processing the data.  While this and #2 above are probably out of your control, it will help explain the worksheet loading and engine processing
   time.
9. Reduce the amount of memory that your worksheet selects:
   - Remove series if possible
   - Reduce the span of time
   - Apply filters
10. Review all server and client expressions.  Are they affecting performance?
11. Run DROP_TEMPS
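
For item 2, here is a minimal SQL sketch for sizing dead combinations.  It assumes the usual ITEM_ID/LOCATION_ID keys; adjust if your matrix is keyed differently:

select count(*) as combos_without_sales
from mdp_matrix m
where not exists
  (select 1
   from sales_data s
   where s.item_id = m.item_id
   and s.location_id = m.location_id);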
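
For item 5, a sketch of bracketing the worksheet open with AWR snapshots (requires the Diagnostics Pack license and DBA privileges):

exec DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;
-- have the user open the worksheet, then wait ~10 minutes
exec DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;
-- generate the report between the two snapshot ids:
@?/rdbms/admin/awrrpt.sql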
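
For item 7, one way to check for chained rows without disturbing optimizer statistics is ANALYZE ... LIST CHAINED ROWS (the CHAINED_ROWS table is created once via ?/rdbms/admin/utlchain.sql):

analyze table sales_data list chained rows into chained_rows;

select count(*) as chained
from chained_rows
where table_name = 'SALES_DATA';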

Tuesday May 05, 2015

Demantra TABLE_REORG Procedure Failed ORA- on SALES_DATA MDP_MATRIX PROMOTION_DATA How do I Restart? RUPD$_ MLOG$_

When you submit TABLE_REORG, DBMS_REDEFINITION is used.  This will create an MLOG$_ database object.  If the table you supplied to TABLE_REORG has a Primary Key (PK), then when DBMS_REDEFINITION creates the MLOG$_ object it automatically creates a RUPD$_ object as well.  These objects are meant to be used for Java RepAPI.  If you execute a 'drop snapshot log on tablename', the snapshot log as well as the temporary snapshot log are dropped.  However, dropping the MLOG$_ object directly is not best practice.

If these objects exist prior to executing TABLE_REORG, you will see the following in the LOG_TABLE_REORG table: 'Table cannot be redefinitioned.'  This means that one or both objects/segments exist.  For example, if a TABLE_REORG run for MDP_MATRIX failed, the following segments would most likely be left behind:

RUPD$_MDP_MATRIX
MLOG$_MDP_MATRIX

Use the following SQL to verify (10g and above):

select substr(object_name,1,30)
from dba_objects
where regexp_like(object_name, '^(MLOG\$|RUPD\$)')   -- the $ must be escaped to match the literal character
and owner = '&Schema_owner';

While you can drop these temporary RDBMS segments, it is best practice to use the following:

BEGIN
 DBMS_REDEFINITION.ABORT_REDEF_TABLE (
   uname       => '&demantra_schema_name',
   orig_table  => '&original_table_name',
   int_table   => '&interim_table_name');
END;

In the above, supply the arguments:

- uname: DM, or your schema name
- orig_table: MDP_MATRIX
- int_table: MLOG$_MDP_MATRIX
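
For example, for the failed MDP_MATRIX reorg above (DM is an assumed schema name; substitute your own):

BEGIN
 DBMS_REDEFINITION.ABORT_REDEF_TABLE (
   uname       => 'DM',
   orig_table  => 'MDP_MATRIX',
   int_table   => 'MLOG$_MDP_MATRIX');
END;
/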

Verify that the objects are dropped.  Submit the TABLE_REORG again AFTER repairing the cause of the last failure.

Wednesday Apr 29, 2015

Did you know that TABLE_REORG has replaced REBUILD_SCHEMA and REBUILD_TABLES?

Demantra TABLE_REORG procedure. 

Did you know that TABLE_REORG has replaced REBUILD_SCHEMA and REBUILD_TABLES?  TABLE_REORG
can address partitioned tables and is more efficient!  Get the latest release at My Oracle Support using bug 17640575.

TABLE_REORG has really replaced and improved the functionality of REBUILD_SCHEMA and REBUILD_TABLES.
It rebuilds the table in primary key order and moves empty columns to the end of the row.

REBUILD_SCHEMA uses ALTER TABLE MOVE tablespace to reduce chained rows of all tables in the schema.
However, it does not support partitioned tables.

REBUILD_TABLES is similar.  It was originally designed for MDP_MATRIX / SALES_DATA, but it can run for all tables and
also for a specific table.  Since 2010 it has supported partitioned tables.

The procedure MOVE_TABLE was fixed to handle partitioned tables.  It is also dated; ANALYZE TABLE is still used in parts of the
code (e.g. for SALES_DATA and MDP_MATRIX).  For an "all tables" run it uses ANALYZE_SCHEMA, which does use DBMS_STATS.GATHER_TABLE_STATS.

An "all tables" run covers tables where the stats value CHAIN_CNT > 0.  It does not automatically include SALES_DATA unless the SYS_PARAMS parameter Rebuild_Sales_Table = 1.

REBUILD_TABLES ( table name,  stats check,  sales,  all tables flag)

exec REBUILD_TABLES ( null, 1, null, 1) ;    -- with ANALYZE_SCHEMA(100000), for none or really old stats
exec REBUILD_TABLES ( null, 0, null, 1) ;

exec REBUILD_TABLES ( null, 0, 1, 1) ;   -- will include SALES_DATA
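
Since REBUILD_TABLES can also run for a specific table, a single-table call would look like this sketch (the flag values mirror the examples above; verify against note 1964291.1):

exec REBUILD_TABLES ( 'MDP_MATRIX', 0, null, 0) ;   -- a hypothetical single-table run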

For more information see: Troubleshooting TABLE_REORG Package issues - RDF Snapshot drop when process fails + TABLE_REORG Guide MOS Note 1964291.1


Gathering statistics on partitioned tables.  Best practice:

For all 11gR2 environments with large partitioned or subpartitioned objects, turn on incremental statistics using this command:

exec dbms_stats.set_table_prefs('OWNER','TABLE','INCREMENTAL','TRUE');

Once this is set for any given table, gather statistics on that table using the normal tool (FND_STATS in EBS or DBMS_STATS elsewhere).
The first gather after turning it on will take longer than previous analyses.  Going forward we will see the following:

1.  The normal DBMS_STATS or FND_STATS gather will only collect statistics on lower-level partitions if the statistics on that partition are stale.  This is a significant change: going forward, a gather command using the default options may in fact perform no re-analysis of the actual data if the modifications to the table do not warrant it.

2.  If a subpartition is stale, the normal gather will ONLY collect statistics on that subpartition.  The statistics for that subpartition's partition will be re-derived, as will the global
    statistics; no other statistics will be gathered.

Making this change promises to reduce gather stats by hours in some cases.
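
As a quick sanity check, here is what enabling and verifying the preference could look like for SALES_DATA (the DM schema name is an assumption):

exec dbms_stats.set_table_prefs('DM','SALES_DATA','INCREMENTAL','TRUE');
exec dbms_stats.gather_table_stats('DM','SALES_DATA');
select dbms_stats.get_prefs('INCREMENTAL','DM','SALES_DATA') from dual;   -- should return TRUE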

For more information: Demantra Gathering Statistics on Partitioned Objects Oracle RDBMS 11gR2, MOS Note 1601596.1

Wednesday Apr 01, 2015

New SQL to Determine Out of Order Ratio and Cluster Factor

Hello!  Please run the following for both MDP_MATRIX and SALES_DATA.     Questions?  Email me at Jeffery.goulette@oracle.com

1. The first SQL will need to be adjusted to accommodate your PK for SALES_DATA and your PK for MDP_MATRIX.  Adjust the SELECT and FROM clauses to match your PK.   In the case of this sample, the PK was:

ITEM_ID
LOCATION_ID
SALES_DATE
IS_T_EP_SPF


-- ShowOOR.sql

SELECT (ROUND((
 (SELECT COUNT(*) AS CNT
 FROM
   (SELECT ITEM_ID,
     LOCATION_ID,
     SALES_DATE,
     IS_T_EP_SPF ,
     RELATIVE_FNO ,
     BLOCK_NUMBER ,
     ROW_NUMBER ,
     DATA_ROW ,
     (LAG(DATA_ROW) OVER(PARTITION BY RELATIVE_FNO, BLOCK_NUMBER ORDER BY ROW_NUMBER)) AS PREV_DATA_ROW
   FROM
     (SELECT ITEM_ID,
       LOCATION_ID,
       SALES_DATE,
       IS_T_EP_SPF ,
       RELATIVE_FNO ,
       BLOCK_NUMBER ,
       ROW_NUMBER ,
       (DENSE_RANK() OVER(PARTITION BY RELATIVE_FNO, BLOCK_NUMBER ORDER BY ITEM_ID,LOCATION_ID,SALES_DATE,IS_T_EP_SPF)) AS DATA_ROW
     FROM
       (SELECT ITEM_ID,
         LOCATION_ID,
         SALES_DATE,
         IS_T_EP_SPF ,
         DBMS_ROWID.ROWID_RELATIVE_FNO(ROWID) RELATIVE_FNO ,
         DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID) AS BLOCK_NUMBER ,
         DBMS_ROWID.ROWID_ROW_NUMBER(ROWID)   AS ROW_NUMBER
       FROM SALES_DATA
       ) C
     ) B
   ) A
 WHERE DATA_ROW != PREV_DATA_ROW
 AND DATA_ROW   != PREV_DATA_ROW + 1
 )                               /
 (SELECT COUNT(*) FROM SALES_DATA
 )),3)*100) AS "Out Of Order Ratio %"
FROM DUAL;


-- ShowCF.sql
undefine table_name
SELECT ui.index_name, us.blocks as "Table Blocks", ui.clustering_factor as "Index clustering Factor", ut.num_rows as "Table Rows"
FROM   user_indexes     ui,
       user_tables      ut,
       user_constraints uc,
       user_segments us
WHERE  ui.table_name = ut.table_name
AND    ut.table_name = uc.table_name
AND    ui.index_name = uc.index_name
AND    ut.table_name = us.segment_name
AND    us.segment_type = 'TABLE'
AND    uc.constraint_type = 'P'
AND    ut.table_name      = '&table_name';

Thursday Feb 26, 2015

Setting Worksheet Related Parameters and Hardware Requirement Example

Hello!   This is an example of determining how to set critical APS_PARAMS for worksheet performance:


If you expect to have 150 users with a concurrency rate of 30%, your settings and hardware should be:

   150 users * 30% = 45 concurrent users

   threadpool.query_run.per_user = 8

   threadpool.query_run.size = number of concurrent users * threadpool.query_run.per_user = 45 * 8 = 360

   MaxDBConnections = threadpool.query_run.size + 10 = 370

   To run a production environment with those settings you should have:

   Database: DB machine with approximately 62 CPUs (MaxDBConnections / 6 = 370 / 6 ≈ 62)

   Application Server: AP with 27 CPUs (threadpool.query_run.size / 16 + 4 = 360 / 16 + 4 ≈ 27) and 4GB of memory

Only with this hardware can you run Demantra with 150 users at a 30% concurrency rate.  Is your hardware powerful enough for your environment?
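
If you maintain these parameters directly in the schema rather than through the Business Modeler, the update could look like the sketch below.  The PNAME/VALUE column names on APS_PARAMS are an assumption here; verify them against your schema before running anything.

update aps_params set value = '360' where pname = 'threadpool.query_run.size';
update aps_params set value = '370' where pname = 'MaxDBConnections';
commit;
-- restart the application server so the new pool sizes take effect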

Tuesday Feb 17, 2015

Demantra Table_Reorg. A new version has been released!

Demantra Customers, there is a new release of the table_reorg Tool.  This patch will install updates to the TABLE_REORG tool.  Demantra Version: 7.3.1.x and 12.2.x.  The minimum 7.3.1.x version is 7.3.1.3.   You can download this patch using bug 17640575 in My Oracle Support.  For additional information see MOS note:
Demantra TABLE_REORG Tool New Release with Multiple Updates! Partitions, Drop_temps and More! 7.3.1.x to 12.2.x. (Doc ID 1980408.1)

Wednesday Dec 10, 2014

Hello Demantra Customers! Are you at version 12.2.4.1? A new cumulative patch will be released soon.

Hello Demantra Customers!  Are you at version 12.2.4.1?  A new cumulative patch will be released soon.  This 12.2.4.1 cumulative patch is different from the 12.2.x patch strategies in the respect
that it will only contain fixes that are new to 12.2.4.1.  This cumulative patch is scheduled to be released later this month, December 2014.  To monitor, see note 1952805.1 or visit the Oracle Demantra BLOG at https://blogs.oracle.com/demantra/

Wednesday Oct 01, 2014

Demantra Performance and Setup Analyzer v2.00 is FINALLY Available!

Hello Demantra Customer!  There is a new release of the Demantra Performance and Setup Analyzer!  Version 2.00 has been released!
You can pick and choose whichever SQL you prefer for the task at hand.   See the MyOracleSupport note
Demantra Analyzer script for Setup and Performance Monitoring (11.5.X and R12), 1618885.1

I am considering adding more topical analysis and deeper digging into additional details for the next release. 

Any questions / comments, please email Jeffery.goulette@oracle.com

Thank You!

Demantra 12.2.4 Performance Enhancement Data Points that you need to know!

Demantra 12.2.4 Customers, There is a new MyOracleSupport note highly recommended by Demantra Development.  There are new performance enhancements available!  However, there is additional information in this note that is critical to implement some of these new enhancements.

Please see, Demantra 12.2.4 Worksheet Performance Enhancements Parameter dynamic_hint_enabled. Development Recommended Proper Setup and Use (Doc ID 1923933.1)

Thursday Sep 25, 2014

Demantra Integration Webcast 24-Sep-2014

Hello!   Were you able to attend the Demantra webcast on 24-Sep-2014?   The topic was integration.  You can watch a replay found at:
Demantra Solutions Advisor Webcast Calendar And Archive (Doc ID 800030.1); scroll to Integrating Demantra With Oracle Applications.

 

Coming up:

1) 8th OCT, V2 Analyzer review and Data Mining Live Demo

2) I am thinking about breaking the 24-Sep-2014 session into two presentations.  This would allow us to dig deeper into debugging the integration and workflow.  These would concentrate on data once it arrives in the denorm table AND on data mapping.

Has anyone used the e_plan_tree database object?

SQL> desc msdem.e_plan_tree
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 E_PLAN_NODE_ID                            NOT NULL NUMBER(10)
 E_PLAN_CHILD_NODE                                  NUMBER(10)
 TABLE_NAME                                NOT NULL VARCHAR2(30)
 FIELD_NAME                                NOT NULL VARCHAR2(30)
 E_PLAN_NAME                                        VARCHAR2(30)
 E_PLAN_TITLE                                       VARCHAR2(100 CHAR)
 E_PLAN_BASE_DIM                                    VARCHAR2(30)
 E_PLAN_TYPE                                        VARCHAR2(10)
 E_PLAN_DATA_TYPE                                   VARCHAR2(10)
 E_PLAN_RELATION                                    NUMBER(10)
 SOURCE_NAME                                        VARCHAR2(30)
 E_PLAN_AGGRI                                       VARCHAR2(400)
 MODEL_VERSION                             NOT NULL NUMBER(5)
 VERSION_NAME                                       VARCHAR2(30)
 FICTIVE_COLUMN_NAME                                VARCHAR2(30)
 E_PLAN_TABLE_NAME                                  VARCHAR2(30)
 HYRARCHY_NO                                        NUMBER(5)
 ENABLE_LEVEL                                       VARCHAR2(20)
 USE_EP_GEN_ID_FOR_FILTER                           NUMBER(1)
 USE_ITEM_LOC_ID                                    VARCHAR2(20)
 AGGRI_FUNC                                         VARCHAR2(400)
 WAVG_DEPENDANT                                     VARCHAR2(30)
 PARTS_VERSION                             NOT NULL NUMBER(5)
 IS_HIERARCHY_LEVEL                                 NUMBER(1)
 PARENT_FIELD_NAME                                  VARCHAR2(30)
 APPLICATION_ID                                     VARCHAR2(63)

 

Demantra Performance and Setup News

Hello!   Sorry for being away so long!  I have been preparing version 2 of the Demantra Performance and Setup Analyzer.  More on that soon.


Have you seen the latest 12.2.1, 12.2.2, 12.2.3 and 12.2.4 Install Upgrade ADVISOR: Demantra (Doc ID 1664177.2)?  This is an excellent store
of hand-picked notes and white papers, including white papers and How To documents from Development.  Check it out and let me know what you think.
Regards!  Jeffv

Friday Jul 11, 2014

Demantra Index Cluster Factor Major Update Performance Impact

Hello Everyone!   Please check out the following important information regarding the cluster factor and the impact to the CBO.   Please comment if you like.

Regards!   Jeff

The Cluster Factor (CF) of an index is a very important statistic used by the Cost Based Optimizer (CBO) to determine
the most efficient execution plan.  However, there is a major flaw in the current CF calculation.  Simply put, the CF
is calculated by performing a Full Index Scan and looking at the rowid of each index entry.  If the table block being referenced
is different from the one referenced by the previous index entry, the CF is incremented.  If the table block being referenced is the same
as the previous index entry's, the CF is not incremented.  So the CF gives an indication of how well ordered the data in the table is
in relation to the chosen index entries.

The lower the CF (closer to the number of blocks in the table than to the number of rows), the more efficient it is to use the index, as
fewer table blocks would need to be accessed to retrieve the necessary data via the chosen index.

So far so good.  However, what if part of an incoming row is stored in block 1 and the remainder in block 2?  When the row is
required, blocks 1 and 2 will both be read into cache, resulting in near-zero waiting for the entire row.  Now fast forward: what if
the table has 100M rows and this situation occurs 30% of the time?  Your current CF is most likely skewed
and incorrect.

Now for the good news, bug 13262857 - INDEX CLUSTERING FACTOR COMPUTATION IS PESSIMISTIC.  The computation of the index clustering
factor in dbms_stats package is pessimistic about the caching ratio of the table blocks.  It assumes that at most one block from the
table is cached.

This is an enhancement that allows a user to specify the number of blocks that the dbms_stats package will consider when gathering the index clustering
factor statistics.  Prior to this enhancement dbms_stats assumed 1, and that remains the default after the enhancement.  The enhancement allows the user to
specify a value between 1 and 255.  There is also an AUTO option; if specified, dbms_stats will use 1% of the table blocks, up to
0.1% of the buffer cache size in blocks.

The new CF approach is currently available with patches that can be applied on both Exadata databases and Oracle versions 11.1.0.7, 11.2.0.2 and 11.2.0.3.
The Patch ID is 15830250.

Once applied, there is a new statistics collection preference called TABLE_CACHED_BLOCKS.  This sets the number of
table blocks we can assume are already cached when performing an index scan, which can be ignored when incrementing the CF during statistics gathering.
The default is 1, as stated, the same as today, but it can be set to a value between 1 and 255, meaning that during the collection of index statistics
the process will not increment the CF if the table block being referenced by the current index entry has already been referenced by any of the prior 255 index
entries (if set to 255).  It basically sets the appropriate parameter in the sys_op_countchg function used to calculate the CF value during statistics gathering
to not increment the CF if the current table block has already been accessed "x" index entries previously.

The TABLE_CACHED_BLOCKS preference can be set by either the DBMS_STATS.SET_TABLE_PREFS, DBMS_STATS.SET_SCHEMA_PREFS or DBMS_STATS.SET_DATABASE_PREFS procedures.
Apply the patch, change the TABLE_CACHED_BLOCKS preference for this table and re-calculate the index statistics:

SQL> exec dbms_stats.set_table_prefs(ownname=>'DM', tabname=>'SALES_DATA', pname=>'TABLE_CACHED_BLOCKS', pvalue=>50);

PL/SQL procedure successfully completed.

SQL> exec dbms_stats.gather_index_stats(ownname=>'DM', indname=>'&index_name', estimate_percent=>null);

PL/SQL procedure successfully completed.

SQL> SELECT t.table_name, i.index_name, t.blocks, t.num_rows, i.clustering_factor
  2  FROM user_tables t, user_indexes i
  3  WHERE t.table_name = i.table_name AND t.table_name = 'SALES_DATA';

Although I found this bug internally, many thanks to several on the WWW for their comments.



Wednesday May 21, 2014

Topical Procedural Demantra Debugging Using Log Files

Demantra Debugging Using Log Files


Default log location is the Demantra_root/Collaborator/virtual_directory/portal/logs directory.
Review the comments in the Demantra_root/Collaborator/virtual_directory/portal/conf/logconf.lcf  file to specify this log file information:
- Name and location
- Maximum size of the log file
- Number to keep
- Log categories and log level

You can also set log levels at runtime with the following URL:
http://Demantra_root/Collaborator/virtual_directory/admin/loggerManager.jsp

 


Web Application troubleshooting:
- The Demantra Web application includes 2 components:
  Application Server
  – the Demantra Web application deployment inside a Web server.

  Client
  – the user-side browser and resources (CPU, memory) running the SUN Java Plug-in.

  They are connected by a network (LAN, WAN...).
  SQL*Net log

  For better troubleshooting we need to understand where the problem is: on the server side, the client side, or both.


If you see application out-of-memory errors:
- Identify where the problem is by using logs:
  Collaborator log (application server side log); Plug-in log (client side log).


Server side issues:
- Heap dump from the webserver
- Thread dump from the Webserver, in case of application freeze or slow down
- Application server memory configuration ( -Xmx)
- Count of Level members, need level member caching?


Client side issues:
- Heap dump from the Java plug-in
- Thread dump from the Java plug-in
- Plug-in memory configuration ( -Xmx)


When you have installation errors, there are a few files that can be analyzed:
• Installer log
• Screenshots of the selections in the different Installer panels
• db_audit_log table, for object changes made by the upgrade
• DB_Exception_Log table


Data Model: When you have data model errors, there are a few files that can be analyzed:
• An Export of the Data model
• Review db_audit_log table for object changes made by the upgrade.
• LOG_DATA_MODEL table (an automatic LOG_IT log).


Data Loading
– Ep_Load, Ep_load Errors:
• Enable LOG_IT process on relevant Demantra stored procedures as defined in LOG_IT_PARAMS table
• DB_Exception_Log.


Ep_Load performance:
• Count of data from the staging tables.
• DB_Params table
  – Ep_Load performance related parameters (a query sketch follows this list)
• Execute “Call_DM_Build_Procedures” after changing parallelization parameters in DB_Params
• DB HW definitions.
• System memory, disk IO and CPU utilization monitoring
  - either system performance monitor or Linux top events
• AWR /ADDM reports.
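
A quick way to review the EP_LOAD parallelization parameters before and after a change (a sketch; the PNAME/VALUE column names on DB_PARAMS are an assumption, so verify them in your schema):

select pname, value
from db_params
where lower(pname) like '%parallel%'
order by pname;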


Data Loading, Integration:

Integration Errors:
• Collaborator & Integration log (Enable sql logging: appserver.sql, appserver.update.sql)
• Concurrent request logs (collaborator.login.user)
• apscheck.sql (Note:246150.1)


Integration Performance:
• Count of data from the staging tables.
• APS_Params
  – Integration Interface related parameters
• DB HW definitions.
• System memory, disk IO and CPU utilization monitoring
  - either system performance monitor or Linux top events.
• AWR/ADDM reports.


DB deadlock/ internal database error:
• Collaborator Log
• Database \ udump logs (USER_DUMP_DEST specifies the pathname for a directory where the DB server will write debugging trace files on behalf of a user process.)
• Enable LOG_IT process on selected Demantra stored procedures as defined in LOG_IT_Params table.


WS performance and errors:
• Identify where the problem is:
• Server side
  – AWR/ADDM reports, DB-side CPU and memory; application server CPU and configuration; relevant APS_PARAMS parameters.
• Client side
  - CPU utilization (and for how long).
• Server side issues:
• Demantra Index Advisor report.
• Thread dump (in case of application freeze or slow down).
• System memory, disk IO and CPU utilization monitoring
  - either system performance monitor or Linux top events.
• DB HW definitions.
• Number of concurrent users at the time (concurrent user log)
- Also see Level Member Caching Purpose Performance Guidance (Doc ID 1627689.1)
- Also see Oracle Demantra Worksheets Caching, Details how the Caching Functionality can be Leveraged to Potentially Increase Performance (Doc ID 1627652.1)

 

Client side issues:
• Plug-in -Xmx setting
• WS wizard panels screen shots.
• Client side log
  – If requested (Number of combinations in the WS).


Engine errors:
Always provide a full Engine log (manager and engine) with relevant logging option.
• DB errors:
• Engine log with SQL logging
  – depends on where the DB error occurs: engine manager or engine2k.


Processor error:
• Engine log with DebugPro logging, groups 100/110
• Engine Out of memory errors:
• No of Tasks
• Count of Active Combinations (Select Count(1) from MDP_Matrix where prediction_status = 1)
• Engine not running:
• On Linux
  – Application server log
• Engine performance:
• AWR report
• Check whether CPU hyper-threading is enabled on the engine servers
  – disable it if it is enabled
• Engine Blades/Server + DB HW definitions.
• DB/Engine Blades Memory Disk IO and CPU Utilization Monitoring
  - Either system performance monitor or Linux top events.
• For more information check Oracle Support note 800030.1, "Demantra Engine Troubleshooting" Presentation (April 6th, 2011)


Configure auditing capabilities for BM configuration changes:
• This is a customization enhancement introduced by the DEV application team in a white paper ("Implementing Demantra Audit Log")
  - Configuring an Audit Log in Demantra Using Database Triggers (Doc ID 1629974.1)
    * Currently captures inserts/updates on the Sessions/Computed_fields tables via a trigger-based process
• The mechanism and idea can easily be enhanced for further metadata audit opportunities
• Difference from BAL diff
  – This mechanism captures all the changes, not just how the latest change differs between schemas
• The logs are kept in a separate table (AUDIT_LOG), so there is no impact on the out-of-the-box schema.
• AUDIT_LOG table includes the following information (columns):
  - log_date
  - session_id
  - os_user
  - host_name
  - module
  - user_id
  - object_type
  - object_name
  - old_value
  - new_value
  - Message
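
Given those columns, here is a minimal sketch of the trigger-based approach (the COMPUTED_FIELDS column referenced is an assumption; see Doc ID 1629974.1 for the supported version):

CREATE OR REPLACE TRIGGER computed_fields_audit_trg
AFTER INSERT OR UPDATE ON computed_fields
FOR EACH ROW
BEGIN
  INSERT INTO audit_log
    (log_date, session_id, os_user, host_name, module,
     object_type, object_name, old_value, new_value, message)
  VALUES
    (SYSDATE,
     SYS_CONTEXT('USERENV','SESSIONID'),
     SYS_CONTEXT('USERENV','OS_USER'),
     SYS_CONTEXT('USERENV','HOST'),
     SYS_CONTEXT('USERENV','MODULE'),
     'TABLE',
     'COMPUTED_FIELDS',
     :OLD.comp_expression,    -- assumed column name; null on insert
     :NEW.comp_expression,    -- assumed column name
     'COMPUTED_FIELDS insert/update');
END;
/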

Wednesday Mar 12, 2014

Troubleshooting Strategy and Processes that Execute Enormous data sets in Demantra

Hello Demantra Customers!   There is a new white paper that discusses large data set loading.

See MOS Note 1634925.1

Recently we have encountered a few scenarios where customers tried to process huge amounts of data within a Demantra process.
Examples:

  • A customer wanted to load over 50M records via the EP_LOAD process on a weekly basis.
  • A customer wanted to process millions of rows via the BLE process on a daily basis.
  • A customer wanted to load 40M rows via an integration interface on a weekly basis.


In all of the above, the customer complained about the system's inability to process the amount of data: either it was too slow or it simply could not be completed at all, meaning the process erred out.
In each of these examples an escalated defect was logged, and development worked on troubleshooting the problem.

An additional common theme across all the above examples, and many more, is that there was no real need for a product fix; although performance improvement opportunities may have been identified, they were not the final solution that addressed the problem.
What helped was understanding that there was no real need to process all the data, and recognizing the data that really needed to be loaded.  The data could have been loaded in a more efficient way using some best practices and creative thinking.

The scope of this document is to provide some guidance on how to troubleshoot such customer situations.
The document focuses on three main areas:

  • Learning and understanding the business needs
  • Understanding the actual data that needs to be loaded, and the gap between this number and the number of rows that the customer actually tries to load.
  • Provide some best practices that can help the customer work with the data efficiently.


This document will not deal with the initial data load processes, although some of the best practices can be adopted for such situations as well.
We will illustrate the concepts in this document using three real life examples of service requests and/or defects that were logged on behalf of the customer.

 

Wednesday Feb 19, 2014

Demantra Performance and Setup Analyzer V1, MOS Doc ID 1618885.1

The Demantra Setup and Performance Analyzer is a self-service health-check script which reviews the overall footprint, analyzes the current configurations and settings for the environment, and provides feedback and recommendations on best practices.  This is a non-invasive script which provides recommended actions to be performed on the instance it was run on.  For production instances, always apply any changes to a recent clone first to ensure an expected outcome.

Benefits:
  • Immediate Analysis and Output of your Demantra Environment
    • Reveals Current Environment
    • Delivers Known Recommendations and Solutions
    • Perform Sanity Checks for Known Issues
  • Easy to add Tool for regular Maintenance
  • Execute Analysis anytime to compare current environment 
Audience:
  • DBAs / system administrators should run the script and review the output
  • Output will usually be reviewed by a technical application DBA or super user
  • This script was designed for 7.3.1 and later.

Where to Run the Script & How to fix Items Identified:

You may run the script on a Production environment as it's non-invasive.  We recommend fixing items identified from the output first in a clone of production to test/verify the expected outcome.  Visit Doc ID 1618885.1 to download the script and view a complete output example. 
