Thursday Sep 25, 2014

Demantra Performance and Setup News

Hello!   Sorry for being away so long!  I have been preparing version 2 of the Demantra Performance and Setup Analyzer.  More on that soon.


Have you seen the latest 12.2.1, 12.2.2, 12.2.3 and 12.2.4 Install Upgrade ADVISOR: Demantra (Doc ID 1664177.2)?  This is an excellent store
of hand-picked notes and white papers, including white papers and how-to documents from Development.  Check it out and let me know what you think.
Regards!  Jeffv

Friday Jul 11, 2014

Demantra Index Cluster Factor Major Update Performance Impact

Hello Everyone!   Please check out the following important information regarding the cluster factor and its impact on the CBO.   Please comment if you like.

Regards!   Jeff

 The Cluster Factor (CF) of an index is a very important statistic used by the Cost Based Optimizer (CBO) to determine
the most efficient execution plan.  However, there is a major flaw in the current CF calculation.  Simply put, the CF
is calculated by performing a full index scan and looking at the rowid of each index entry.  If the table block being referenced
is different from that of the previous index entry, the CF is incremented.  If the table block being referenced is the same
as the previous index entry's, the CF is not incremented.  So the CF gives an indication of how well ordered the data in the table is
in relation to the chosen index entries.

The lower the CF (closer to the number of blocks in the table than to the number of rows), the more efficient it is to use the index, as
fewer table blocks need to be accessed to retrieve the necessary data via the chosen index.
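
To make this concrete, here is a rough sketch (not the internal dbms_stats code) that approximates a CF by walking SALES_DATA in index-key order and counting table block changes.  The (ITEM_ID, LOCATION_ID, SALES_DATE) key is an assumption taken from the partitioning post further down; ordering by the key columns approximates the index scan order.

SELECT COUNT(*) AS approx_clustering_factor
  FROM (SELECT DBMS_ROWID.ROWID_RELATIVE_FNO(ROWID) AS fno,
               DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID) AS blk,
               LAG(DBMS_ROWID.ROWID_RELATIVE_FNO(ROWID))
                 OVER (ORDER BY item_id, location_id, sales_date) AS prev_fno,
               LAG(DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID))
                 OVER (ORDER BY item_id, location_id, sales_date) AS prev_blk
          FROM sales_data)
 WHERE prev_blk IS NULL            -- the first entry always counts
    OR blk != prev_blk             -- the table block changed since the previous entry
    OR fno != prev_fno;            -- or the datafile changed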

So far so good.  However, what if part of the incoming row is stored in block 1 and the remainder in block 2?  When the row is
required, blocks 1 and 2 are read into the cache, resulting in near-zero waiting for the entire row.  OK, fast forward: what if
the table has 100M rows and this situation occurs 30% of the time?  The current CF calculation is most likely skewed
and incorrect.

Now for the good news: bug 13262857 - INDEX CLUSTERING FACTOR COMPUTATION IS PESSIMISTIC.  The computation of the index clustering
factor in the dbms_stats package is pessimistic about the caching of table blocks: it assumes that at most one block from the
table is cached.

This is an enhancement that allows a user to specify the number of blocks the dbms_stats package will consider when gathering the index clustering
factor statistics.  Prior to this enhancement dbms_stats assumed 1, and that remains the default.  The enhancement allows the user to
specify a value between 1 and 255.  There is also an AUTO option; if specified, dbms_stats uses 1% of the table blocks, up to
0.1% of the buffer cache size (in blocks).

The new CF approach is currently available via patches that can be applied on Exadata databases and on Oracle versions 11.1.0.7, 11.2.0.2 and 11.2.0.3.
The patch ID is 15830250.

Once applied, there is a new statistics collection preference that can be defined, called TABLE_CACHED_BLOCKS.  This sets the number of
table blocks we can assume are already cached when performing an index scan, and which can therefore be ignored when incrementing the CF during statistics gathering.
The default is 1, the same as before the patch.  It can be set to a value between 1 and 255, meaning that during index statistics collection
the process will not increment the CF if the table block referenced by the current index entry has already been referenced by any of the prior N index
entries (up to 255).  It effectively sets the corresponding parameter of the sys_op_countchg function used to calculate the CF during statistics gathering
so that the CF is not incremented if the current table block was already accessed within the previous "x" index entries.

The TABLE_CACHED_BLOCKS preference can be set by either the DBMS_STATS.SET_TABLE_PREFS, DBMS_STATS.SET_SCHEMA_PREFS or DBMS_STATS.SET_DATABASE_PREFS procedures.
Apply the patch, change the TABLE_CACHED_BLOCKS preference for this table and re-calculate the index statistics:

SQL> exec dbms_stats.set_table_prefs(ownname=>'DM', tabname=>'SALES_DATA', pname=>'TABLE_CACHED_BLOCKS', pvalue=>50);

PL/SQL procedure successfully completed.

SQL> exec dbms_stats.gather_index_stats(ownname=>'DM', indname=>'SALES_DATA', estimate_percent=>null);

PL/SQL procedure successfully completed.

SQL> SELECT t.table_name, i.index_name, t.blocks, t.num_rows, i.clustering_factor
2  FROM user_tables t, user_indexes i
3  WHERE t.table_name = i.table_name AND i.index_name='SALES_DATA';
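
To confirm the preference stuck before re-gathering, dbms_stats.get_prefs can be queried.  A small sketch, assuming the same DM schema and a patched (or 12c) dbms_stats:

SQL> SELECT dbms_stats.get_prefs('TABLE_CACHED_BLOCKS', 'DM', 'SALES_DATA') FROM dual;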

Although I found this bug internally, many thanks to several people on the web for their comments.



Wednesday May 21, 2014

Topical Procedural Demantra Debugging Using Log Files



The default log location is the Demantra_root/Collaborator/virtual_directory/portal/logs directory.
Review the comments in the Demantra_root/Collaborator/virtual_directory/portal/conf/logconf.lcf file to specify this log file information:
- Name and location
- Maximum size of the log file
- Number to keep
- Log categories and log level

You can also set log levels at runtime with the following URL:
http://Demantra_root/Collaborator/virtual_directory/admin/loggerManager.jsp

 


Web Application troubleshooting:
- The Demantra Web application includes 2 components:
  Application Server
  – the Demantra Web application deployment inside a Web server.

  Client
  – the user-side browser and resources (CPU, memory) running the Sun Java plug-in.

  They are connected by a network (LAN, WAN...).
  SQL*Net log

  For better troubleshooting we need to understand where the problem is: on the server side, the client side, or both.


If you hit application out-of-memory errors:
- Identify where the problem is by using the logs:
  Collaborator log, the application server side log.   Plug-in log, the client side log.


Server side issues:
- Heap dump from the web server
- Thread dump from the web server, in case of an application freeze or slowdown
- Application server memory configuration (-Xmx)
- Count of level members (is level member caching needed?)


Client side issues:
- Heap dump from the Java plug-in
- Thread dump from the Java plug-in
- Plug-in memory configuration (-Xmx)


When having installation errors there are a few files that can be analyzed:
• Installer log
• Screenshots of the selections in the different Installer panels
• db_audit_log table, for object changes made by the upgrade
• DB_Exception_Log table


Data Model: when having data model errors there are a few files that can be analyzed:
• An export of the data model
• db_audit_log table, for object changes made by the upgrade
• LOG_DATA_MODEL table (an automatic LOG_IT log)


Data Loading
– Ep_Load, Ep_Load errors:
• Enable the LOG_IT process on the relevant Demantra stored procedures as defined in the LOG_IT_PARAMS table (see the sketch after this list)
• DB_Exception_Log
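
A rough sketch of enabling LOG_IT for one procedure.  The LOG_IT_PARAMS column names used here are assumptions for illustration; describe the table in your schema before running anything like this:

SQL> -- column names are hypothetical; DESC LOG_IT_PARAMS first
SQL> UPDATE log_it_params SET active = 1 WHERE proc_name = 'EP_LOAD_SALES';
SQL> COMMIT;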


Ep_Load performance:
• Count of data in the staging tables
• DB_Params table
  – Ep_Load performance-related parameters
• Execute "Call_DM_Build_Procedures" after changing parallelization parameters in DB_Params (a sketch follows this list)
• DB HW definitions
• System memory, disk IO and CPU utilization monitoring
  – either a system performance monitor or Linux top events
• AWR/ADDM reports
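
A minimal sketch of that parameter-change flow.  The parameter name and the PNAME/VALUE column layout of DB_PARAMS are assumptions; verify both against your release before using:

SQL> -- hypothetical parameter name; check DB_PARAMS for the EP_LOAD parallel settings in your release
SQL> UPDATE db_params SET value = '8' WHERE pname = 'EP_LOAD_parallel_degree';
SQL> COMMIT;
SQL> -- rebuild the dynamic procedures so the new setting takes effect
SQL> exec CALL_DM_BUILD_PROCEDURES;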


Data Loading, Integration:

Integration Errors:
• Collaborator & Integration log (Enable sql logging: appserver.sql, appserver.update.sql)
• Concurrent request logs (collaborator.login.user)
• apscheck.sql (Note 246150.1)


Integration Performance:
• Count of data from the staging tables.
• APS_Params
  – Integration Interface related parameters
• DB HW definitions.
• System memory, disk IO and CPU utilization monitoring
  - either a system performance monitor or Linux top events.
• AWR/ADDM reports.


DB deadlock/ internal database error:
• Collaborator Log
• Database udump logs (USER_DUMP_DEST specifies the pathname of the directory where the DB server writes debugging trace files on behalf of a user process)
• Enable LOG_IT process on selected Demantra stored procedures as defined in LOG_IT_Params table.



WS performance and errors:
• Identify where the problem is:
• Server side
  – AWR/ADDM reports, DB-side CPU and memory; application server CPU and configuration; APS_PARAMS relevant parameters.
• Client side
  - CPU utilization (and for how long).
• Server side issues:
• Demantra Index Advisor report.
• Thread dump (in case of application freeze or slow down).
• System Memory Disk IO and CPU Utilization monitoring
  - either system performance monitor or Linux top events.
• DB HW definitions.
• Number of concurrent users at the time (concurrent user log)
- Also see Level Member Caching Purpose Performance Guidance (Doc ID 1627689.1)
- Also see Oracle Demantra Worksheets Caching, Details how the Caching Functionality can be Leveraged to Potentially Improve Performance (Doc ID 1627652.1)

 

Client side issues:
• Plug-in -Xmx setting
• WS wizard panel screenshots
• Client side log
  – if requested (number of combinations in the WS)


Engine errors:
Always provide a full engine log (manager and engine) with the relevant logging option.
• DB errors:
• Engine log with SQL logging
  – depends on whether the DB error is in the engine manager or engine2k.


Processor error:
• Engine log with DebugPro logging, groups 100/110
• Engine out-of-memory errors:
• Number of tasks
• Count of active combinations (select count(1) from MDP_Matrix where prediction_status = 1)
• Engine not running:
• On Linux
  – application server log
• Engine performance:
• AWR report
• Check whether CPU hyper-threading is enabled on the engine servers
  – disable it if enabled
• Engine blades/server + DB HW definitions
• DB/engine blade memory, disk IO and CPU utilization monitoring
  - either a system performance monitor or Linux top events
• For more information see Oracle Support note 800030.1, the "Demantra Engine Troubleshooting" presentation (April 6, 2011)


Configure auditing capabilities for BM configuration changes:
• This is a customization enhancement introduced by the application development team in a white paper ("Implementing Demantra Audit Log")
  - Configuring an Audit Log in Demantra Using Database Triggers (Doc ID 1629974.1); a skeletal example follows this section
    * Currently captures inserts/updates on the Sessions/Computed_Fields tables via a trigger-based process
• The mechanism and idea can easily be extended to further metadata audit opportunities
• Difference from BAL diff
  – this mechanism captures all the changes, not just how the latest change differs between schemas
• The logs are kept in a separate table (AUDIT_LOG), so there is no impact on the out-of-the-box schema.
• AUDIT_LOG table includes the following information (columns):
  - log_date
  - session_id
  - os_user
  - host_name
  - module
  - user_id
  - object_type
  - object_name
  - old_value
  - new_value
  - Message
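
A skeletal sketch of the trigger-based approach described above, assuming the AUDIT_LOG table from Doc ID 1629974.1 is in place.  The COMPUTED_FIELDS column names (name, expression, user_id) are assumptions for illustration; the note has the supported version:

CREATE OR REPLACE TRIGGER computed_fields_audit_trg
AFTER INSERT OR UPDATE ON computed_fields
FOR EACH ROW
DECLARE
  v_action VARCHAR2(10);
BEGIN
  -- INSERTING/UPDATING must be evaluated in PL/SQL, not inside the SQL statement
  v_action := CASE WHEN INSERTING THEN 'INSERT' ELSE 'UPDATE' END;
  INSERT INTO audit_log (log_date, session_id, os_user, host_name, module,
                         user_id, object_type, object_name,
                         old_value, new_value, message)
  VALUES (SYSDATE,
          SYS_CONTEXT('USERENV', 'SID'),
          SYS_CONTEXT('USERENV', 'OS_USER'),
          SYS_CONTEXT('USERENV', 'HOST'),
          SYS_CONTEXT('USERENV', 'MODULE'),
          :NEW.user_id,          -- hypothetical column
          'COMPUTED_FIELD',
          :NEW.name,             -- hypothetical column
          :OLD.expression,       -- hypothetical column
          :NEW.expression,       -- hypothetical column
          v_action);
END;
/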

Wednesday Mar 12, 2014

Troubleshooting Strategy and Processes that Execute Enormous Data Sets in Demantra

Hello Demantra Customers!   There is a new white paper that discusses large data set loading.

See MOS Note 1634925.1

Recently we have encountered a few scenarios where customers tried to process huge amounts of data within a Demantra process.
Examples:

  • A customer wanted to load over 50M records via the EP_LOAD process on a weekly basis.
  • A customer wanted to process millions of rows via the BLE process on a daily basis.
  • A customer wanted to load 40M rows via an integration interface on a weekly basis.


In all of the above the customer complained about the system's inability to process the amount of data: either it was too slow, or the process simply could not complete at all and erred out.
In each case an escalated defect was logged, and development worked to troubleshoot the problem.

An additional common theme across these examples, and many more, is that there was no real need for a product fix; although performance improvement opportunities may have been identified, they were not the final solution that addressed the problem.
What helped was understanding that there was no real need to process all the data, and recognizing which data really needed to be loaded.  The data could then be loaded more efficiently using some best practices and creative thinking.

The scope of this document is to provide some guidance on how to troubleshoot such customer situations.
The document focuses on three main areas:

  • Learning and understanding the business needs
  • Understanding the actual data that needs to be loaded, and the gap between that number and the number of rows the customer actually tries to load
  • Providing some best practices that can help the customer work with the data efficiently


This document does not deal with the initial data load processes, although some of the best practices can be adopted for such situations as well.
We illustrate the concepts in this document using three real-life examples of service requests and/or defects that were logged on behalf of customers.

 

Wednesday Feb 19, 2014

Demantra Performance and Setup Analyzer V1, MOS Doc ID 1618885.1

The Demantra Setup and Performance Analyzer is a Self-Service Health-Check script which reviews the overall footprint, analyzes the current configurations and settings for the environment providing feedback and recommendations on Best Practices.  This is a non-invasive script which provides recommended actions to be performed on the instance it was run on.  For production instances, always apply any changes to a recent clone to ensure an expected outcome.  

Benefits:
  • Immediate analysis and output of your Demantra environment
    • Reveals the current environment
    • Delivers known recommendations and solutions
    • Performs sanity checks for known issues
  • Easy-to-add tool for regular maintenance
  • Execute the analysis anytime to compare against the current environment
Audience:
  • DBAs / system administrators should run the script and review the output
  • Output will usually be reviewed by a technical application DBA or super user
  • This script was designed for 7.3.1 and later.

Where to Run the Script & How to fix Items Identified:

You may run the script on a production environment as it's non-invasive.  We recommend first applying items identified in the output to a clone of production to test/verify the expected outcome.  Visit Doc ID 1618885.1 to download the script and view a complete output example.

Tuesday Dec 03, 2013

Upgrading to 12.2.2? Do you want extra speed? Use a logon trigger for parallelism

When upgrading your Demantra application from an older version directly to 12.2.2, you can minimize the upgrade time by forcing parallelism with a logon trigger on the server.

   Creation of the new indexes on sales data can take a long time.

   As part of the upgrade, a script builds a new set of indexes, one per engine profile, on each profile's quantity_form expression.

   The creation of these indexes can take a long time on big data tables.

    - Remember to disable or drop the trigger when the upgrade has completed.

   The following trigger was created as the SYS database user and tested successfully by creating a DEMANTRA database user and logging in with it:

 CREATE OR REPLACE TRIGGER force_parallel_ddl_trg
 AFTER LOGON ON DATABASE
 BEGIN
   IF (USER = 'DEMANTRA') THEN
     EXECUTE IMMEDIATE 'ALTER SESSION FORCE PARALLEL DDL';
   END IF;
 END force_parallel_ddl_trg;
 /
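
To confirm the trigger took effect for a DEMANTRA session, one option is to check v$session from a DBA session; PDDL_STATUS should show FORCED while the trigger is in place:

SQL> SELECT username, pddl_status FROM v$session WHERE username = 'DEMANTRA';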

 Make sure to:

  1. Modify DEMANTRA to be your database schema name.
  2. Drop the trigger once it is no longer needed, using the following:

 DROP TRIGGER force_parallel_ddl_trg;

Thursday Nov 21, 2013

Upgrading to Demantra 12.2.2? There are known Issues and Fixes

When upgrading your Demantra application from an older version directly to 12.2.2 there are some issues of which you need to be aware.
- You are upgrading to Oracle Demantra 12.2.2
- ORA errors, Excel missing cut/paste data, performance issues, Notes, Calendar and other known upgrade issues.

See MOS note:

Oracle Demantra Platform Upgrade to 12.2.2 Known Issues Worksheet Performance Calendar Excel ORA-0179 ORA-06512 ORA-31003 java.lang.NullPointerException (Doc ID 1602288.1)

Tuesday Nov 19, 2013

Oracle Demantra Gathering Statistics on Large Partitioned Tables in 11gR2

Hello!   Have you implemented partitions on a large Demantra table?  Are you using RDBMS version 11gR2?  Then consider implementing incremental statistics.

For all 11gR2 environments with large partitioned or subpartitioned objects, turn on incremental statistics using this command:

exec dbms_stats.set_table_prefs('OWNER','TABLE','INCREMENTAL','TRUE');

Once this is set for any given table, gather statistics on that table using the normal tool (fnd_stats in EBS or dbms_stats elsewhere).  The first gather after turning it on will take longer than previous analyses.  Going forward we will see the following:

1 – The normal dbms_stats (or fnd_stats) gather will only collect statistics on lower-level partitions if the statistics on that partition are stale.  This is a significant change: going forward, a gather command run with its default options may in fact perform no re-analysis of the actual data if the modifications to the table don't warrant it.

2 – If a subpartition is stale, the gather will ONLY collect statistics on that subpartition.  The statistics for its partition will be re-derived, as will the global statistics; no other statistics will be gathered.

Making this change can reduce statistics gathering time by hours in some cases.
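
A concrete sketch for a partitioned Demantra table, assuming the schema is named DEMANTRA and SALES_DATA is the partitioned table:

exec dbms_stats.set_table_prefs('DEMANTRA','SALES_DATA','INCREMENTAL','TRUE');

-- confirm the preference took effect
select dbms_stats.get_prefs('INCREMENTAL','DEMANTRA','SALES_DATA') from dual;

-- the first gather after enabling builds the synopses and takes longer
exec dbms_stats.gather_table_stats('DEMANTRA','SALES_DATA');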

 

Saturday Oct 26, 2013

Demantra Partitioning and the First PK Column

  We have found that it is necessary in Demantra to have an index whose leading column matches the partition key, although it does not have to be the PK.  It is OK
  to create a new index instead of changing the PK.

  For example, if my PK on SALES_DATA is (ITEM_ID, LOCATION_ID, SALES_DATE) and I decide to partition by SALES_DATE, then I should add an index starting
  with the partition key, like this: (SALES_DATE, ITEM_ID, LOCATION_ID).

  * Note that the first column of the new index matches the partition key.

  It might also be helpful to create a 2nd index with the other PK columns reversed (SALES_DATE, LOCATION_ID, ITEM_ID). Again, the first column
  matches the partition key.
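
  For instance, a sketch of the first index (the index name is arbitrary, and LOCAL assumes SALES_DATA is already partitioned by SALES_DATE):

  CREATE INDEX sales_data_partkey_idx ON sales_data (sales_date, item_id, location_id) LOCAL;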

Friday Sep 27, 2013

New Demantra Performance and Setup Analyzer on the way!

Hello Demantra Customers!  There is a new tool being developed in Oracle Proactive Services: the Demantra Performance and Setup Analyzer.  We are reviewing the data points now, and the release is scheduled for November 2013.   Check out the areas of coverage below.  Any comments?  Please post.  Regards!  Jeff

Some parameters are listed twice to deliver in-context results.

========================================
========================================
VERIFICATION
========================================
========================================
Verify database name

Verify Database User

DBUser

Present the tnsnames.ora file to verify settings

Verify the ds.ini file

Check the Demantra version

Check the RIGHT PRIVILEGES are GRANTED

Echo on engine server, $ENGINE_ROOT, $ORACLE_HOME, $PATH, $LD_LIBRARY_PATH, $JAVA_HOME

Verify profiles 'MSD_DEM: Schema' and 'MSD_DEM: Host URL'

Verify profile MSD_DEM: Debug Mode

ADD SQL to reveal the setting

========================================
========================================
PROCESSING
========================================
========================================

Are you running Multiple Batch Processes in Parallel?

Checking for active combinations and combinations that could be archived in BIIO table

Number of Active combinations in MDP_MATRIX

Improving Your Batch Forecast

Tasks and Branches, TreeHierarchyTest.exe

Managing Simulations

MDP_MATRIX - ACTIVE COMBINATIONS and ENGINE PERFORMANCE

Find Missing Dates

========================================
PARAMETERS
========================================

Verify the basic setup.

Verify Sys_params - AppServerURL AppServerLocation

threadpool.update.data.manual.size

threadpool.update.table.manual.size

threadpool.update.comb.manual.size=

threadpool.update.record.manual.size=

MaxDBConnections

Debug Mode

tunnel.client.maxConnections 

threadpool.update.comb.batch.size

threadpool.update.record.batch.size

MaxUpdateThreads Check

Is Auditing Turned on?

MaxSqlInExpressionTokens

StatsLowRowLimit

Import Block Size

enginestarter port

Are any engine libraries missing?

Are the permissions set correctly for Engine.exe

========================================
========================================
Database Objects
========================================
========================================

Check for corruption
- ANALYZE TABLE SALES_DATA VALIDATE STRUCTURE CASCADE FAST;
- ANALYZE TABLE MDP_MATRIX VALIDATE STRUCTURE CASCADE FAST;
- ANALYZE TABLE PROMOTION_DATA VALIDATE STRUCTURE CASCADE FAST;

Implementing Parallelization

Chained Row count
 
Should you rebuild tables by Primary Key (PK)?

Partitioning

INITRANS

PCTFREE, PCTUSED

Excess Rows

Not Null Columns

Indexes, Unused

========================================
========================================
Database Parameters
========================================
========================================

DB_BLOCK_SIZE

OPTIMIZER_INDEX

Optimizer_Mode

Cursor_Sharing

db_file_multiblock_read_count (MBRC)

SQL Plan Baselines

parallel_dml

SQLNET.EXPIRE_TIME

========================================
========================================
Statistics
========================================
========================================

Verify the health of large tables

System Statistics

Gathering Statistics

========================================
========================================
Java Application Server, Client Cache
========================================
========================================

Which is the java version on application server and client?

Do your workstations have the Java runtime parameters setup to support the memory of their system per Note <>?

Clearing Cache could help performance
- we discuss all tiers

========================================
========================================
Logs, Display bottom 75 lines
========================================
========================================

DB_EXCEPTION_LOG

Do you have an engine2k.log file?

EngineManager.log

collaborator.log

EngineManagerPreRunLog.txt

DB_TIMING_LOG

DB_SECTION_LOG

========================================
========================================
Middle Tier
========================================
========================================

Cache

Clearing Server Caches

Limit Number of retained files

Verify the Connection Pool

connectionTimeout

========================================
========================================
Engine
========================================
========================================

Number of Engines, ComputerNames, localhost

Is logging turned on for the engine?

Engine Profiles

Verify date formats

Clearing Forecast Values

Verify that the six system parameters specifying engine used tablespaces are set correctly

Verify engineplatform

Verify enginebaseURL

====================================
====================================
$ENGINE_ROOT/bin/Settings.xml file
====================================
====================================

EngineUnixPortConfig

EngUnixDebugMode

ComputerNames, localhost

Is the Engine Running?  Where?

Verify SQLLDR

OCCITest

Present permissions for DS_CONFIG.sh

Present permissions for $ENGINE_ROOT/lib and $ENGINE_ROOT/bin

Do you have at least 1 combination that meets the rule for Insert_Units to work on and insert rows into the future?

Are you viewing the correct series?

Are you using the correct profile?

The Parameter MaxEngMemory is introduced from 7.3.0.1 onwards

Are you using promotions?

Are you setup for a distributed engine?

Verify settings.xml EngineUnixPortConfig

Verify the Logging Level of the Run Time Environment

Determining if the Engine Failed

Verify Null Pn Rows/Column verification

========================================
========================================
Forecast Specific
========================================
========================================

TargetTaskSize

BranchIDMultiple

========================================
========================================
Rolling Updates
========================================
========================================

RollingUpdatesUseJobQueues

RollingUpdatesMaxQueues

RollingUpdatesTimeout

RollingUpdatesRenameColumns

RunInsertUnits

========================================
========================================
Worksheet Specific
========================================
========================================

MaxAvailableFilterMembers

AppServerURL

MaxSqlInExpressionTokens

worksheet.full.load

client.worksheet.calcSummaryExpressions

Excess Rows

threadpool.query_run.per_user

threadpool.query_run.size

worksheet.data.comb.block_size

ApprovalProcessScope

Review your worksheet hints

Configurable Combinations in the Worksheet
- Client.uilimitations.maxcombs.ws
- Client.uilimitations.maxcells.ws
- Client.uilimitations.maxcells
- Client.uilimitations.maxdiskspace
- Client.uilimitations.warning

========================================
========================================
Data Loading
========================================
========================================

Verify any data loading errors
- select * from BIIO_SUPPLY_PLANS_err;
- select * from BIIO_SUPPLY_PLANS_POP_err;
- select * from BIIO_OTHER_PLAN_DATA_ERR;
- select * from t_src_item_tmpl_err;
- select * from t_src_loc_tmpl_err;
- select * from t_src_sales_tmpl_err;

Monitor Data Load Process

Ep_Load_Sales_LoadNullActualQty

Ep_Load_Sales_LoadFromStagingTableDirectly

Ep_Load_Sales_DisableEnableTriggers

Ep_Load_Sales_SALES_DATA_Merge_LoopControl

ep_load_do_commits

========================================
========================================
Workflow
========================================
========================================

Workflow Status

Verify your schema

========================================
========================================
insert_units
========================================
========================================

Verify Combinations for insert_units

========================================
========================================
Proport in Parallel
========================================
========================================

ProportTableLabel

ProportRunsInCycle

ProportParallelJobs

========================================
========================================
appserver properties.bat
========================================
========================================

Display the appserver properties.bat
- Displays the file and delivers detailed explanation of settings

Implementing Large-Scale Demantra Table Rebuilds To Improve Performance with Zero Downtime Using DBMS_REDEFINITION

Greetings Demantra users! There is an excellent note detailing Oracle's implementation of Demantra in a big data environment.  This was Oracle's largest-ever internal value chain planning database table.  Read about how we did this, implementing RDBMS partitions, in Implementing Large-Scale Demantra Table Rebuilds To Improve Performance with Zero Downtime (Doc ID 1587179.1).
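
As a minimal outline of the online-redefinition flow the note describes (the interim table name SALES_DATA_INTERIM is hypothetical and must be pre-created with the target partitioning; the note has the full procedure):

SQL> exec DBMS_REDEFINITION.CAN_REDEF_TABLE('DEMANTRA', 'SALES_DATA', DBMS_REDEFINITION.CONS_USE_PK);
SQL> exec DBMS_REDEFINITION.START_REDEF_TABLE('DEMANTRA', 'SALES_DATA', 'SALES_DATA_INTERIM');

-- copy indexes, triggers, constraints and grants onto the interim table
DECLARE
  num_errors PLS_INTEGER;
BEGIN
  DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS('DEMANTRA', 'SALES_DATA', 'SALES_DATA_INTERIM',
    DBMS_REDEFINITION.CONS_ORIG_PARAMS, TRUE, TRUE, TRUE, TRUE, num_errors);
END;
/

SQL> exec DBMS_REDEFINITION.FINISH_REDEF_TABLE('DEMANTRA', 'SALES_DATA', 'SALES_DATA_INTERIM');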

Wednesday Sep 04, 2013

Are you ready for a Demantra Worksheet Webcast?

The Key to Optimizing Demantra Worksheet Performance will be presented September 11, 2013 at 10am Central.
Please see Demantra Solutions Advisor Webcast Calendar And Archive (Doc ID 800030.1) in My Oracle Support.

Worksheet Performance Problems? Try this action plan

The index advisor can be run for a user, all users, a worksheet, all worksheets or the entire schema.  I suggest running it for the two slowest worksheets and implementing the results.
1) Are the workstations configured to use the maximum amount of Java memory?
2) Are you using client expressions?  
3) Are you using server expressions?  Can any of the data be adjusted and saved prior to worksheet open?
4) Have you isolated the SQL that drives the slowest-responding worksheets for tuning/index improvements?  (A starter query follows this list.)
5) Have you submitted the longest-running SQL to Support for suggestions?
6) Has that longest-running SQL been executed outside of the worksheet display process, to determine whether the expected results are realistic given the available hardware and its mathematical/physical limits?
7) Have you analyzed your approach to retrieving/adjusting forecast data? The past approach may not be the best. For example, aggregation levels, forecast horizon, the amount of families or business lines retrieved at once for any particular worksheet.
8) Is your instance undersized before adding any new sales_data rows?  A complete review of
production and supporting instances, such as test, could be conducted.  Use note 1301231.1.
9) What is your forecast horizon?  Is it required for every worksheet, even during the planners' day-to-day forecast-adjustment tasks?  Can you use a 'master plan' worksheet approach?
10) A complete review of all Demantra parameters and environment variables needs to be conducted.  These should match the capabilities of the RDBMS and the application server.
11) A complete review of the application server settings needs to be conducted to allow for maximum performance.
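
For steps 4 through 6, a hedged starting point for finding the heaviest Demantra SQL, assuming the schema is named DEMANTRA:

SELECT *
  FROM (SELECT sql_id, ROUND(elapsed_time / 1e6) AS elapsed_secs, executions, sql_text
          FROM v$sql
         WHERE parsing_schema_name = 'DEMANTRA'
         ORDER BY elapsed_time DESC)
 WHERE ROWNUM <= 10;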

Thursday Aug 08, 2013

Worksheet Issues During or After Upgrade?

 Proactive action during/after an upgrade may help.  Please review the following:

  1. Clean the Java cache by deleting the Java cache Windows folder
  2. Set the Java console debug mode to '5'
  3. Reproduce the issue
  4. Review the client Java console
  5. Review the collaborator.log
  6. Produce an action plan

Wednesday Feb 13, 2013

Demantra MDP_Matrix 7.3.1.3 and Above using Table_Reorg to Rebuild by Primary Key.


Demantra MDP_MATRIX table reorg by primary key.  This is a non-partitioned table.

Sample action plan to reorganize the Demantra MDP_MATRIX table by primary key.  This has been used recently and was successful.

USE at your own RISK, after careful study and adjustment to meet your needs.  This was produced for an 11gR2 RDBMS environment.

Step 1:- Create tablespace for MDP_MATRIX
=================================================================
Create tablespace MDP_MATRIX_DATA
datafile '/my_instance/oradata/data01/MDP_MATRIX_DATA_01.dbf' size 5000m
LOGGING ONLINE PERMANENT BLOCKSIZE 16384
EXTENT MANAGEMENT LOCAL AUTOALLOCATE DEFAULT
NOCOMPRESS  SEGMENT SPACE MANAGEMENT AUTO;

Alter tablespace MDP_MATRIX_DATA add datafile '/my_instance/oradata/data01/MDP_MATRIX_DATA_02.dbf' size 5000m;
Alter tablespace MDP_MATRIX_DATA add datafile '/my_instance/oradata/data01/MDP_MATRIX_DATA_03.dbf' size 5000m;

Step 2:- Create tablespace for the MDP_MATRIX indexes
=================================================================
Create tablespace MDP_MATRIX_IDX
datafile '/my_instance/oradata/data01/MDP_MATRIX_IDX_01.dbf' size 5000m
LOGGING ONLINE PERMANENT BLOCKSIZE 16384
EXTENT MANAGEMENT LOCAL AUTOALLOCATE DEFAULT
NOCOMPRESS  SEGMENT SPACE MANAGEMENT AUTO;

Alter tablespace MDP_MATRIX_IDX add datafile '/my_instance/oradata/data01/MDP_MATRIX_IDX_02.dbf' size 5000m;
Alter tablespace MDP_MATRIX_IDX add datafile '/my_instance/oradata/data01/MDP_MATRIX_IDX_03.dbf' size 5000m;

Step 3:- Move the table MDP_MATRIX to MDP_MATRIX_DATA
=================================================================
conn demantra/<Passwd>

alter session force parallel dml;

alter table MDP_MATRIX move tablespace MDP_MATRIX_DATA parallel 7;

alter table MDP_MATRIX parallel 3;


Step 4:- Measure the out-of-order ratio before the reorg
===========================================================================================

NOTE:- Monitor the MDP_MATRIX_DATA and MDP_MATRIX_IDX tablespaces closely during the activity below.  The sample query below was written against SALES_DATA and its PK columns (SALES_DATE, ITEM_ID, LOCATION_ID); substitute the MDP_MATRIX PK columns to measure that table.


SELECT /*+ PARALLEL(SALES_DATA,4) */(ROUND(((SELECT /*+ PARALLEL(SALES_DATA,4) */ COUNT(*) AS CNT     FROM (SELECT /*+ PARALLEL(SALES_DATA,4) */ SALES_DATE , ITEM_ID
          , LOCATION_ID,RELATIVE_FNO,BLOCK_NUMBER ,ROW_NUMBER,DATA_ROW
     ,(LAG(DATA_ROW) OVER(PARTITION BY RELATIVE_FNO, BLOCK_NUMBER ORDER BY
              ROW_NUMBER)) AS PREV_DATA_ROW FROM (SELECT /*+ PARALLEL(SALES_DATA,4) */ SALES_DATE, ITEM_ID, LOCATION_ID
          ,RELATIVE_FNO ,BLOCK_NUMBER
          ,ROW_NUMBER
     ,(DENSE_RANK() OVER(PARTITION BY RELATIVE_FNO, BLOCK_NUMBER ORDER
              BY SALES_DATE, ITEM_ID, LOCATION_ID)) AS DATA_ROW
          FROM (SELECT  /*+ PARALLEL(SALES_DATA,8) */ SALES_DATE, ITEM_ID, LOCATION_ID
          ,DBMS_ROWID.ROWID_RELATIVE_FNO(ROWID) RELATIVE_FNO
          ,DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID) AS BLOCK_NUMBER
          ,DBMS_ROWID.ROWID_ROW_NUMBER(ROWID) AS ROW_NUMBER
          FROM SALES_DATA
          ) C
          ) B
          ) A
          WHERE DATA_ROW != PREV_DATA_ROW
          AND DATA_ROW != PREV_DATA_ROW + 1)/(SELECT /*+ PARALLEL(SALES_DATA,4) */ COUNT(*) FROM SALES_DATA)),3)*100) AS "Out Of Order Ratio %"
FROM DUAL;


Step 5:- Reorder the MDP_MATRIX Table
===========================================================================================
Reorder the MDP_MATRIX Table as per Primary Key

Script to grant_table_reorg (locate the sql and adjust your path)
SQL> @your_instance_name/oracle/sales_data/grant_table_reorg.sql

SQL> exec table_reorg.reorg('DEMANTRA','MDP_MATRIX','C');

Script to revoke_table_reorg  (locate the sql and adjust your path)
SQL> @your_instance_name/oracle/sales_data/revoke_table_reorg.sql

Step 6:- Rebuild the MDP_MATRIX indexes and move them to the MDP_MATRIX_IDX tablespace
===========================================================================================
spool INDEX_REBUILD.lst
Alter Index WS_MDPMATRIX_1_IDX rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index WS_MDPMATRIX_2_IDX rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index WS_MDPMATRIX_3_IDX rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index DIVIDER_I_X_1 rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index WS_MDPMATRIX_4_IDX rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index MATRIX_IND rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_E1_ITEM_CAT_1_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_E1_ITEM_CAT_2_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_E1_ITEM_CAT_3_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_E1_ITEM_CAT_4_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_E1_ITEM_CAT_5_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_E1_ITEM_CAT_6_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_E1_IT_BR_CAT_1_EP_ID_IDX rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_E1_IT_BR_CAT_2_EP_ID_IDX rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_E1_IT_BR_CAT_3_EP_ID_IDX rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_E1_IT_BR_CAT_4_EP_ID_IDX rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_E1_IT_BR_CAT_5_EP_ID_IDX rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_E1_PARENT_AD_NUM_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_EBS_ACCOUNT_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_EBS_BUS_GROUP_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_EBS_CUSTOMER_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_EBS_CUST_CLASS_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_EBS_DEMAND_CLASS_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_EBS_LEGAL_ENTITY_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_EBS_OPER_UNIT_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_EBS_PROD_CAT_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_EBS_PROD_FAMILY_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_EBS_SALES_CH_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_EBS_SUPPLIER_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_EBS_TP_ZONE_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_EBS_ZONE_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_ITEM_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_I_ATT_10_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_I_ATT_1_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_I_ATT_2_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_I_ATT_3_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_I_ATT_4_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_I_ATT_5_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_I_ATT_6_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_I_ATT_7_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_I_ATT_8_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_I_ATT_9_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_LR1_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_LR2A_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_LR2_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_LS1_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_LS2_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_LS3_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_LS4_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_LS5_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_LS6_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_L_ATT_10_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_L_ATT_1_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index MDP_PK rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_L_ATT_6_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_L_ATT_7_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_L_ATT_8_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_L_ATT_9_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_ORGANIZATION_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_ORG_TYPE_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_P1_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_P2A1_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_P2A2_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_P2A_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_P2B_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_P3_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_P4_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_SITE_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index MDP_MATRIX_1465_IDX rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_E1_ITEM_CAT_7_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_L_ATT_2_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_L_ATT_3_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_L_ATT_4_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_L_ATT_5_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index DO_FORE_IND rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index LOC_NO rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index MDP_BRANCH_ID_IDX rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index MDP_IN rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index MDP_LOC rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index MDP_MATRIX_481_IDX rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index MDP_MATRIX_537_IDX rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index MDP_MATRIX_706_IDX rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index MDP_MATRIX_ITEM_ID_IDX rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index MM_COMP_LEAD_DATE_IDX rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index PREDICTION_STATUS_IND rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index PROP_CHANGES_IND rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_CORP_CODE_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_E1_BR_CAT_1_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_E1_BR_CAT_2_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_E1_BR_CAT_3_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_E1_BR_CAT_4_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_E1_BR_CAT_5_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_E1_BR_CITY_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_E1_BR_COUNTRY_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_E1_BR_STATE_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_E1_CUST_CAT_1_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_E1_CUST_CAT_2_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_E1_CUST_CAT_3_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_E1_CUST_CAT_4_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_E1_CUST_CAT_5_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_E1_CUST_CAT_6_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_E1_CUST_CAT_7_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_E1_CUST_CITY_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_E1_CUST_CTRY_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_E1_CUST_STATE_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_P2_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index DIVIDER_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
spool off;

spool Change_parallel.lst
Alter Index WS_MDPMATRIX_1_IDX parallel 1;
Alter Index WS_MDPMATRIX_2_IDX parallel 1;
Alter Index WS_MDPMATRIX_3_IDX parallel 1;
Alter Index DIVIDER_I_X_1 parallel 1;
Alter Index WS_MDPMATRIX_4_IDX parallel 1;
Alter Index MATRIX_IND parallel 1;
Alter Index T_EP_E1_ITEM_CAT_1_I_X parallel 1;
Alter Index T_EP_E1_ITEM_CAT_2_I_X parallel 1;
Alter Index T_EP_E1_ITEM_CAT_3_I_X parallel 1;
Alter Index T_EP_E1_ITEM_CAT_4_I_X parallel 1;
Alter Index T_EP_E1_ITEM_CAT_5_I_X parallel 1;
Alter Index T_EP_E1_ITEM_CAT_6_I_X parallel 1;
Alter Index T_EP_E1_IT_BR_CAT_1_EP_ID_IDX parallel 1;
Alter Index T_EP_E1_IT_BR_CAT_2_EP_ID_IDX parallel 1;
Alter Index T_EP_E1_IT_BR_CAT_3_EP_ID_IDX parallel 1;
Alter Index T_EP_E1_IT_BR_CAT_4_EP_ID_IDX parallel 1;
Alter Index T_EP_E1_IT_BR_CAT_5_EP_ID_IDX parallel 1;
Alter Index T_EP_E1_PARENT_AD_NUM_I_X parallel 1;
Alter Index T_EP_EBS_ACCOUNT_I_X parallel 1;
Alter Index T_EP_EBS_BUS_GROUP_I_X parallel 1;
Alter Index T_EP_EBS_CUSTOMER_I_X parallel 1;
Alter Index T_EP_EBS_CUST_CLASS_I_X parallel 1;
Alter Index T_EP_EBS_DEMAND_CLASS_I_X parallel 1;
Alter Index T_EP_EBS_LEGAL_ENTITY_I_X parallel 1;
Alter Index T_EP_EBS_OPER_UNIT_I_X parallel 1;
Alter Index T_EP_EBS_PROD_CAT_I_X parallel 1;
Alter Index T_EP_EBS_PROD_FAMILY_I_X parallel 1;
Alter Index T_EP_EBS_SALES_CH_I_X parallel 1;
Alter Index T_EP_EBS_SUPPLIER_I_X parallel 1;
Alter Index T_EP_EBS_TP_ZONE_I_X parallel 1;
Alter Index T_EP_EBS_ZONE_I_X parallel 1;
Alter Index T_EP_ITEM_I_X parallel 1;
Alter Index T_EP_I_ATT_10_I_X parallel 1;
Alter Index T_EP_I_ATT_1_I_X parallel 1;
Alter Index T_EP_I_ATT_2_I_X parallel 1;
Alter Index T_EP_I_ATT_3_I_X parallel 1;
Alter Index T_EP_I_ATT_4_I_X parallel 1;
Alter Index T_EP_I_ATT_5_I_X parallel 1;
Alter Index T_EP_I_ATT_6_I_X parallel 1;
Alter Index T_EP_I_ATT_7_I_X parallel 1;
Alter Index T_EP_I_ATT_8_I_X parallel 1;
Alter Index T_EP_I_ATT_9_I_X parallel 1;
Alter Index T_EP_LR1_I_X parallel 1;
Alter Index T_EP_LR2A_I_X parallel 1;
Alter Index T_EP_LR2_I_X parallel 1;
Alter Index T_EP_LS1_I_X parallel 1;
Alter Index T_EP_LS2_I_X parallel 1;
Alter Index T_EP_LS3_I_X parallel 1;
Alter Index T_EP_LS4_I_X parallel 1;
Alter Index T_EP_LS5_I_X parallel 1;
Alter Index T_EP_LS6_I_X parallel 1;
Alter Index T_EP_L_ATT_10_I_X parallel 1;
Alter Index T_EP_L_ATT_1_I_X parallel 1;
Alter Index MDP_PK parallel 1;
Alter Index T_EP_L_ATT_6_I_X parallel 1;
Alter Index T_EP_L_ATT_7_I_X parallel 1;
Alter Index T_EP_L_ATT_8_I_X parallel 1;
Alter Index T_EP_L_ATT_9_I_X parallel 1;
Alter Index T_EP_ORGANIZATION_I_X parallel 1;
Alter Index T_EP_ORG_TYPE_I_X parallel 1;
Alter Index T_EP_P1_I_X parallel 1;
Alter Index T_EP_P2A1_I_X parallel 1;
Alter Index T_EP_P2A2_I_X parallel 1;
Alter Index T_EP_P2A_I_X parallel 1;
Alter Index T_EP_P2B_I_X parallel 1;
Alter Index T_EP_P3_I_X parallel 1;
Alter Index T_EP_P4_I_X parallel 1;
Alter Index T_EP_SITE_I_X parallel 1;
Alter Index MDP_MATRIX_1465_IDX parallel 1;
Alter Index T_EP_E1_ITEM_CAT_7_I_X parallel 1;
Alter Index T_EP_L_ATT_2_I_X parallel 1;
Alter Index T_EP_L_ATT_3_I_X parallel 1;
Alter Index T_EP_L_ATT_4_I_X parallel 1;
Alter Index T_EP_L_ATT_5_I_X parallel 1;
Alter Index DO_FORE_IND parallel 1;
Alter Index LOC_NO parallel 1;
Alter Index MDP_BRANCH_ID_IDX parallel 1;
Alter Index MDP_IN parallel 1;
Alter Index MDP_LOC parallel 1;
Alter Index MDP_MATRIX_481_IDX parallel 1;
Alter Index MDP_MATRIX_537_IDX parallel 1;
Alter Index MDP_MATRIX_706_IDX parallel 1;
Alter Index MDP_MATRIX_ITEM_ID_IDX parallel 1;
Alter Index MM_COMP_LEAD_DATE_IDX parallel 1;
Alter Index PREDICTION_STATUS_IND parallel 1;
Alter Index PROP_CHANGES_IND parallel 1;
Alter Index T_EP_CORP_CODE_I_X parallel 1;
Alter Index T_EP_E1_BR_CAT_1_I_X parallel 1;
Alter Index T_EP_E1_BR_CAT_2_I_X parallel 1;
Alter Index T_EP_E1_BR_CAT_3_I_X parallel 1;
Alter Index T_EP_E1_BR_CAT_4_I_X parallel 1;
Alter Index T_EP_E1_BR_CAT_5_I_X parallel 1;
Alter Index T_EP_E1_BR_CITY_I_X parallel 1;
Alter Index T_EP_E1_BR_COUNTRY_I_X parallel 1;
Alter Index T_EP_E1_BR_STATE_I_X parallel 1;
Alter Index T_EP_E1_CUST_CAT_1_I_X parallel 1;
Alter Index T_EP_E1_CUST_CAT_2_I_X parallel 1;
Alter Index T_EP_E1_CUST_CAT_3_I_X parallel 1;
Alter Index T_EP_E1_CUST_CAT_4_I_X parallel 1;
Alter Index T_EP_E1_CUST_CAT_5_I_X parallel 1;
Alter Index T_EP_E1_CUST_CAT_6_I_X parallel 1;
Alter Index T_EP_E1_CUST_CAT_7_I_X parallel 1;
Alter Index T_EP_E1_CUST_CITY_I_X parallel 1;
Alter Index T_EP_E1_CUST_CTRY_I_X parallel 1;
Alter Index T_EP_E1_CUST_STATE_I_X parallel 1;
Alter Index T_EP_P2_I_X parallel 1;
Alter Index DIVIDER_I_X parallel 1;
spool off;

Step 7:- Recompile the Invalid Objects
===========================================================================================

Run utlrp.sql or UTL_RECOMP to recompile all the objects.
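
A minimal sketch, run as SYS or another DBA user, assuming the schema is named DEMANTRA:

SQL> -- recompile invalid objects in the DEMANTRA schema using 4 parallel threads
SQL> exec UTL_RECOMP.RECOMP_PARALLEL(4, 'DEMANTRA');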

Spool Data_collection_after_MDP_Tablereorg.lst
select object_type,count(1) from dba_objects where owner='DEMANTRA' group by object_type order by  2;
select object_name,object_type,owner,status from dba_objects where status='INVALID';
select index_name,table_name,status from dba_indexes where table_name='MDP_MATRIX';
select constraint_name,table_name,status from dba_constraints where table_name='MDP_MATRIX';
Spool Off


Step 8:- Gather the Schema Stats Again
===========================================================================================
SQL> execute DBMS_STATS.DELETE_TABLE_STATS(ownname => 'DEMANTRA', tabname => 'SALES_DATA');
SQL> execute DBMS_STATS.DELETE_TABLE_STATS(ownname => 'DEMANTRA', tabname => 'MDP_MATRIX');

exec dbms_stats.GATHER_SCHEMA_STATS(OWNNAME=>'DEMANTRA', estimate_percent=>30 ,DEGREE=> 10);


Step 9:- Check the out-of-order ratio after the reorg
===========================================================================================

SELECT /*+ PARALLEL(SALES_DATA,4) */(ROUND(((SELECT /*+ PARALLEL(SALES_DATA,4) */ COUNT(*) AS CNT     FROM (SELECT /*+ PARALLEL(SALES_DATA,4) */ SALES_DATE , ITEM_ID
          , LOCATION_ID,RELATIVE_FNO,BLOCK_NUMBER ,ROW_NUMBER,DATA_ROW
     ,(LAG(DATA_ROW) OVER(PARTITION BY RELATIVE_FNO, BLOCK_NUMBER ORDER BY
              ROW_NUMBER)) AS PREV_DATA_ROW FROM (SELECT /*+ PARALLEL(SALES_DATA,4) */ SALES_DATE, ITEM_ID, LOCATION_ID
          ,RELATIVE_FNO ,BLOCK_NUMBER
          ,ROW_NUMBER
     ,(DENSE_RANK() OVER(PARTITION BY RELATIVE_FNO, BLOCK_NUMBER ORDER
              BY SALES_DATE, ITEM_ID, LOCATION_ID)) AS DATA_ROW
          FROM (SELECT  /*+ PARALLEL(SALES_DATA,8) */ SALES_DATE, ITEM_ID, LOCATION_ID
          ,DBMS_ROWID.ROWID_RELATIVE_FNO(ROWID) RELATIVE_FNO
          ,DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID) AS BLOCK_NUMBER
          ,DBMS_ROWID.ROWID_ROW_NUMBER(ROWID) AS ROW_NUMBER
          FROM SALES_DATA
          ) C
          ) B
          ) A
          WHERE DATA_ROW != PREV_DATA_ROW
          AND DATA_ROW != PREV_DATA_ROW + 1)/(SELECT /*+ PARALLEL(SALES_DATA,4) */ COUNT(*) FROM SALES_DATA)),3)*100) AS "Out Of Order Ratio %"
FROM DUAL;

Spool Data_collection_after_MDP_column_reorg.lst
select object_type,count(1) from dba_objects where owner='DEMANTRA' group by object_type order by  2;
select object_name,object_type,owner,status from dba_objects where status='INVALID';
select index_name,table_name,status from dba_indexes where table_name='MDP_MATRIX';
select constraint_name,table_name,status from dba_constraints where table_name='MDP_MATRIX';
Spool Off

About

This blog delivers the latest information regarding Demantra performance and install/upgrade.  Comments welcome!
