Tuesday Nov 19, 2013

Oracle Demantra Gathering Statistics on Large Partitioned Tables in 11gR2

Hello!   Have you implemented partitions on a large Demantra table?  Are you using RDBMS version 11gR2?  Then consider implementing incremental statistics.

For all 11gR2 environments with large partitioned or subpartitioned objects, turn on incremental statistics using this command:

exec dbms_stats.set_table_prefs('OWNER','TABLE','INCREMENTAL','TRUE');

Once this is set for a given table, gather statistics on that table using the normal tool (FND_STATS in EBS, or DBMS_STATS elsewhere). The first gather after turning incremental statistics on will take longer than previous analyses. Going forward you will see the following:

1 – The normal DBMS_STATS (or FND_STATS) gather will only collect statistics on a lower-level partition if the statistics on that partition are stale. This is a significant change: going forward, a gather run with default options may perform no re-analysis of the actual data at all if the modifications to the table don't warrant it.

2 – If a subpartition is stale, the normal gather will ONLY collect statistics on that subpartition. The statistics for its parent partition and the global statistics will be re-derived; no other statistics will be gathered.

Making this change can reduce statistics-gathering time by hours in some cases.
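
A minimal sketch, reusing the OWNER/TABLE placeholders from the command above, to confirm the preference took effect and to run the next gather:

select dbms_stats.get_prefs('INCREMENTAL','OWNER','TABLE') from dual;

exec dbms_stats.gather_table_stats(ownname => 'OWNER', tabname => 'TABLE', cascade => TRUE);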

 

Saturday Oct 26, 2013

Demantra Partitioning and the First PK Column

  We have found that it is necessary in Demantra to have an index that leads with the partition key, although it does not have to be the PK.  It is
  fine to create a new index instead of changing the PK.

  For example, if my PK on SALES_DATA is (ITEM_ID, LOCATION_ID, SALES_DATE) and I decide to partition by SALES_DATE, then I should add an index starting
  with the partition key, like this: (SALES_DATE, ITEM_ID, LOCATION_ID).

  * Note that the first column of the new index matches the partition key.

  It might also be helpful to create a second index with the other PK columns reversed: (SALES_DATE, LOCATION_ID, ITEM_ID).  Again, the first column
  matches the partition key.
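
  A minimal sketch of the two indexes described above (the index names are hypothetical, and LOCAL assumes SALES_DATA is partitioned by SALES_DATE):

  create index SALES_DATA_PART_IDX1 on SALES_DATA (SALES_DATE, ITEM_ID, LOCATION_ID) local parallel 4;
  create index SALES_DATA_PART_IDX2 on SALES_DATA (SALES_DATE, LOCATION_ID, ITEM_ID) local parallel 4;

  alter index SALES_DATA_PART_IDX1 noparallel;
  alter index SALES_DATA_PART_IDX2 noparallel;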

Wednesday Oct 23, 2013

Demantra Performance Clustering Factor Out of Order Ratio TABLE_REORG CHECK_REORG (Doc ID 1594372.1)

Hello!   There is a new document available:

Demantra Performance Clustering Factor Out of Order Ratio TABLE_REORG CHECK_REORG (Doc ID 1594372.1)


The table reorganization can be set up to run automatically in version 7.3.1.5.

In version 12.2.2 we run the TABLE_REORG.CHECK_REORG function at every application server restart.
If the function recommends a reorg, then we strongly encourage you to reorganize the database object.
This is documented in the official documentation.

In versions 7.3.1.3 and 7.3.1.4, the TABLE_REORG module exists and can be used.
It has two main functions that are documented in the Implementation Guide Supplement,
Release 7.3, Part No. E26760-03, chapter 4.

In short, if you are using version 7.3.1.3 or higher, you can check whether a reorg is needed with the following two steps:

1. Run TABLE_REORG.CHECK_REORG('T');
2. Check the table LOG_TABLE_REORG for recommendations
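
A minimal sketch of the two steps in SQL*Plus (review the most recent rows written to LOG_TABLE_REORG for the recommendation):

SQL> exec TABLE_REORG.CHECK_REORG('T');
SQL> select * from LOG_TABLE_REORG;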

If you are on a version before 7.3.1.3, you will need to follow the instructions below to determine if you need to do a manual reorg.

How to determine if a table reorg is needed

1. Development strongly encourages you to gather statistics on the table in question.  The preferred percentage for the gather is 100% (see the sketch after step 3).

2. Run the following SQL to evaluate how table reorg might affect Primary Key (PK) based access:

 
SELECT ui.index_name,trunc((ut.num_rows/ui.clustering_factor)/(ut.num_rows/ut.blocks),2)
FROM user_indexes ui, user_tables ut, user_constraints uc
WHERE ui.table_name=ut.table_name
AND ut.table_name=uc.table_name
AND ui.index_name=uc.index_name
AND UC.CONSTRAINT_TYPE='P'
AND ut.table_name=upper('&enter_table_name');
 

3. Based on the result:

   - Value above 0.75: a reorg is not required.
   - Value between 0.5 and 0.75: a reorg is recommended.
   - Value below 0.5: a reorg is highly recommended.
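
For step 1, a minimal sketch of a 100% gather on the table being evaluated (the DEMANTRA schema name is an assumption; adjust to your installation):

exec dbms_stats.gather_table_stats(ownname => 'DEMANTRA', tabname => upper('&enter_table_name'), estimate_percent => 100, cascade => TRUE);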

Friday Sep 27, 2013

Implementing Large-Scale Demantra Table Rebuilds To Improve Performance with Zero Downtime Using DBMS_REDEFINITION

Greetings Demantra Users! There is an excellent note detailing Oracle's implementation of Demantra in a big data environment; this was Oracle's largest-ever internal value chain planning database table.  Read about how we did it by implementing RDBMS partitions and rebuilding the table online with DBMS_REDEFINITION.  See Implementing Large-Scale Demantra Table Rebuilds To Improve Performance with Zero Downtime (Doc ID 1587179.1).
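
If you want to see the general shape of such an online rebuild, here is a minimal DBMS_REDEFINITION sketch; the DEMANTRA schema name and the interim table SALES_DATA_REDEF (pre-created with the desired partitioning) are assumptions, and the note above covers the full, supported procedure:

DECLARE
  l_errors PLS_INTEGER;
BEGIN
  -- confirm the table can be redefined online using its primary key
  DBMS_REDEFINITION.CAN_REDEF_TABLE('DEMANTRA', 'SALES_DATA', DBMS_REDEFINITION.CONS_USE_PK);
  -- start copying rows into the partitioned interim table
  DBMS_REDEFINITION.START_REDEF_TABLE('DEMANTRA', 'SALES_DATA', 'SALES_DATA_REDEF',
                                      options_flag => DBMS_REDEFINITION.CONS_USE_PK);
  -- clone indexes, triggers, constraints and grants onto the interim table
  DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS('DEMANTRA', 'SALES_DATA', 'SALES_DATA_REDEF',
                                          copy_indexes => DBMS_REDEFINITION.CONS_ORIG_PARAMS,
                                          num_errors   => l_errors);
  -- swap the tables, with only a brief lock at the end
  DBMS_REDEFINITION.FINISH_REDEF_TABLE('DEMANTRA', 'SALES_DATA', 'SALES_DATA_REDEF');
END;
/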

Wednesday Jul 17, 2013

Demantra Using Oracle database Real Application Clusters (RAC)

Demantra version:7.3.1.2

Database version:11.2.0.3

Due to the large amount of data in our system we are planning to implement RAC to improve performance.  We have already followed other Oracle performance guidelines such as reordering columns, partitioning the SALES_DATA table, and so on.  Since we expect more stores and items to be introduced, which will further increase the data volume, we are planning to go to RAC, as the Demantra Installation Guide states that RAC is supported with Demantra.

We would like the following information before proceeding with this major step:

1.) For Distributed Engine Scaling - Oracle Doc ID 468688.1 states that "The number of engines/machines defined in the Engine Administrator should not exceed 3 times the number of database CPUs".  In the case of RAC, is this calculation based on the cumulative CPU count across all RAC nodes?  That is, when using RAC, can we set the total number of engines to up to 3 times the cumulative CPU count of all database RAC nodes?

Answer 1: For Distributed Engine Scaling -- using the cumulative CPU count is not quite right for RAC, since there will be extra interconnect overhead in a RAC system.  You can start from the "3 times the CPUs" rule, but you might have to scale it back based on observed performance.

2.) Using a Distributed Engine with one master and multiple slave machines. Is it possible to point different slave engines to different RAC nodes?

Answer 2: No.  RAC will automatically load balance the connections when you connect using the SCAN addresses.

Thursday Jun 27, 2013

Demantra Implementation Tip Windows and Unix or Linux

Hello!  Are you implementing using a third party or consulting resources?  

Recently we have seen cases where customers no longer have a Windows installation.  After the initial install and configuration, once the instance has gone live, the Windows installation is either deleted or, more likely, is no longer with the customer because it was done on the implementer's laptop in the first place.  As a result, when Support later asks the customer to apply a patch or upgrade, they do not have a Windows installation.  This started happening after Oracle Demantra gave customers the option to configure the engine on Unix.

Workaround:

It is advisable that the customer keep their Windows installation intact for further patching and/or upgrades.  It is also possible that the implementer installed Demantra on their own Windows box and you no longer have access to it.  With the web server and engine on Unix, and the silent installer having downloaded all the Business Modeler executables to the user's client machine, you may no longer need the Windows installation.

I have not tested the above 

Wednesday Feb 13, 2013

Demantra MDP_Matrix 7.3.1.3 and Above using Table_Reorg to Rebuild by Primary Key.


Demantra MDP_Matrix table reorg by Primary Key.  This is a non partitioned table.

Sample Action Plan to create partitions on Demantra Sales_data table.  This has been used recently and was successful.

USE at your own RISK after careful study and adjustment to meet your needs.  This was produced to use in an 11gr2 RDBMS environment.   

Step 1:- Create tablespace for MDP_MATRIX
=================================================================
Create tablespace MDP_MATRIX_DATA
datafile '/my_instance/oradata/data01/MDP_MATRIX_DATA_01.dbf' size 5000m
LOGGING ONLINE PERMANENT BLOCKSIZE 16384
EXTENT MANAGEMENT LOCAL AUTOALLOCATE DEFAULT
NOCOMPRESS  SEGMENT SPACE MANAGEMENT AUTO;

Alter tablespace MDP_MATRIX_DATA add datafile '/my_instance/oradata/data01/MDP_MATRIX_DATA_02.dbf' size 5000m;
Alter tablespace MDP_MATRIX_DATA add datafile '/my_instance/oradata/data01/MDP_MATRIX_DATA_03.dbf' size 5000m;

Step 2:- Create tablespace for the MDP_MATRIX indexes
=================================================================
Create tablespace MDP_MATRIX_IDX
datafile '/my_instance/oradata/data01/MDP_MATRIX_IDX_01.dbf' size 5000m
LOGGING ONLINE PERMANENT BLOCKSIZE 16384
EXTENT MANAGEMENT LOCAL AUTOALLOCATE DEFAULT
NOCOMPRESS  SEGMENT SPACE MANAGEMENT AUTO;

Alter tablespace MDP_MATRIX_IDX add datafile '/my_instance/oradata/data01/MDP_MATRIX_IDX_02.dbf' size 5000m;
Alter tablespace MDP_MATRIX_IDX add datafile '/my_instance/oradata/data01/MDP_MATRIX_IDX_03.dbf' size 5000m;

Step 3:- Move the table MDP_MATRIX to MDP_MATRIX_DATA
=================================================================
conn demantra/<Passwd>

alter session force parallel dml;

alter table MDP_MATRIX move tablespace MDP_MATRIX_DATA parallel 7;

alter table MDP_MATRIX parallel 3;
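
A quick check, run as the schema owner, to confirm which indexes the move left unusable (they are rebuilt in Step 6):

select index_name, status from user_indexes where table_name = 'MDP_MATRIX' and status <> 'VALID';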


Step 4:- Check the row-order (out-of-order) ratio before the reorg
===========================================================================================

NOTE:- Closely monitor the MDP_MATRIX_DATA and MDP_MATRIX_IDX tablespaces during the activity below.


-- Note: this sample out-of-order ratio query is written for SALES_DATA and its
-- primary key columns; adjust the table and key columns for the table being checked.
SELECT ROUND(
         (SELECT /*+ PARALLEL(SALES_DATA,4) */ COUNT(*)
            FROM (SELECT SALES_DATE, ITEM_ID, LOCATION_ID, RELATIVE_FNO, BLOCK_NUMBER, ROW_NUMBER, DATA_ROW,
                         LAG(DATA_ROW) OVER (PARTITION BY RELATIVE_FNO, BLOCK_NUMBER
                                             ORDER BY ROW_NUMBER) AS PREV_DATA_ROW
                    FROM (SELECT SALES_DATE, ITEM_ID, LOCATION_ID, RELATIVE_FNO, BLOCK_NUMBER, ROW_NUMBER,
                                 DENSE_RANK() OVER (PARTITION BY RELATIVE_FNO, BLOCK_NUMBER
                                                    ORDER BY SALES_DATE, ITEM_ID, LOCATION_ID) AS DATA_ROW
                            FROM (SELECT /*+ PARALLEL(SALES_DATA,8) */ SALES_DATE, ITEM_ID, LOCATION_ID,
                                         DBMS_ROWID.ROWID_RELATIVE_FNO(ROWID) AS RELATIVE_FNO,
                                         DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID) AS BLOCK_NUMBER,
                                         DBMS_ROWID.ROWID_ROW_NUMBER(ROWID)   AS ROW_NUMBER
                                    FROM SALES_DATA) C
                         ) B
                 ) A
           WHERE DATA_ROW != PREV_DATA_ROW
             AND DATA_ROW != PREV_DATA_ROW + 1)
         / (SELECT /*+ PARALLEL(SALES_DATA,4) */ COUNT(*) FROM SALES_DATA)
       , 3) * 100 AS "Out Of Order Ratio %"
  FROM DUAL;


Step 5:- Reorder the MDP_MATRIX Table
===========================================================================================
Reorder the MDP_MATRIX Table as per Primary Key

Script to grant_table_reorg (locate the sql and adjust your path)
SQL> @your_instance_name/oracle/sales_data/grant_table_reorg.sql

SQL> exec table_reorg.reorg('DEMANTRA','MDP_MATRIX','C');

Script to revoke_table_reorg  (locate the sql and adjust your path)
SQL> @your_instance_name/oracle/sales_data/revoke_table_reorg.sql

Step 6:- Rebuild the MDP_MATRIX indexes and move them to the MDP_MATRIX_IDX tablespace
===========================================================================================
spool INDEX_REBUILD.lst
Alter Index WS_MDPMATRIX_1_IDX rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index WS_MDPMATRIX_2_IDX rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index WS_MDPMATRIX_3_IDX rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index DIVIDER_I_X_1 rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index WS_MDPMATRIX_4_IDX rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index MATRIX_IND rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_E1_ITEM_CAT_1_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_E1_ITEM_CAT_2_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_E1_ITEM_CAT_3_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_E1_ITEM_CAT_4_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_E1_ITEM_CAT_5_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_E1_ITEM_CAT_6_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_E1_IT_BR_CAT_1_EP_ID_IDX rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_E1_IT_BR_CAT_2_EP_ID_IDX rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_E1_IT_BR_CAT_3_EP_ID_IDX rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_E1_IT_BR_CAT_4_EP_ID_IDX rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_E1_IT_BR_CAT_5_EP_ID_IDX rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_E1_PARENT_AD_NUM_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_EBS_ACCOUNT_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_EBS_BUS_GROUP_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_EBS_CUSTOMER_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_EBS_CUST_CLASS_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_EBS_DEMAND_CLASS_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_EBS_LEGAL_ENTITY_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_EBS_OPER_UNIT_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_EBS_PROD_CAT_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_EBS_PROD_FAMILY_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_EBS_SALES_CH_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_EBS_SUPPLIER_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_EBS_TP_ZONE_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_EBS_ZONE_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_ITEM_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_I_ATT_10_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_I_ATT_1_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_I_ATT_2_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_I_ATT_3_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_I_ATT_4_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_I_ATT_5_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_I_ATT_6_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_I_ATT_7_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_I_ATT_8_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_I_ATT_9_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_LR1_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_LR2A_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_LR2_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_LS1_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_LS2_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_LS3_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_LS4_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_LS5_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_LS6_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_L_ATT_10_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_L_ATT_1_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index MDP_PK rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_L_ATT_6_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_L_ATT_7_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_L_ATT_8_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_L_ATT_9_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_ORGANIZATION_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_ORG_TYPE_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_P1_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_P2A1_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_P2A2_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_P2A_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_P2B_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_P3_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_P4_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_SITE_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index MDP_MATRIX_1465_IDX rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_E1_ITEM_CAT_7_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_L_ATT_2_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_L_ATT_3_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_L_ATT_4_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_L_ATT_5_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index DO_FORE_IND rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index LOC_NO rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index MDP_BRANCH_ID_IDX rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index MDP_IN rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index MDP_LOC rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index MDP_MATRIX_481_IDX rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index MDP_MATRIX_537_IDX rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index MDP_MATRIX_706_IDX rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index MDP_MATRIX_ITEM_ID_IDX rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index MM_COMP_LEAD_DATE_IDX rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index PREDICTION_STATUS_IND rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index PROP_CHANGES_IND rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_CORP_CODE_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_E1_BR_CAT_1_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_E1_BR_CAT_2_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_E1_BR_CAT_3_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_E1_BR_CAT_4_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_E1_BR_CAT_5_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_E1_BR_CITY_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_E1_BR_COUNTRY_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_E1_BR_STATE_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_E1_CUST_CAT_1_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_E1_CUST_CAT_2_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_E1_CUST_CAT_3_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_E1_CUST_CAT_4_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_E1_CUST_CAT_5_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_E1_CUST_CAT_6_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_E1_CUST_CAT_7_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_E1_CUST_CITY_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_E1_CUST_CTRY_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_E1_CUST_STATE_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index T_EP_P2_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
Alter Index DIVIDER_I_X rebuild Online parallel (degree 7)  tablespace MDP_MATRIX_IDX;
spool off;

spool Change_parallel.lst
Alter Index WS_MDPMATRIX_1_IDX parallel 1;
Alter Index WS_MDPMATRIX_2_IDX parallel 1;
Alter Index WS_MDPMATRIX_3_IDX parallel 1;
Alter Index DIVIDER_I_X_1 parallel 1;
Alter Index WS_MDPMATRIX_4_IDX parallel 1;
Alter Index MATRIX_IND parallel 1;
Alter Index T_EP_E1_ITEM_CAT_1_I_X parallel 1;
Alter Index T_EP_E1_ITEM_CAT_2_I_X parallel 1;
Alter Index T_EP_E1_ITEM_CAT_3_I_X parallel 1;
Alter Index T_EP_E1_ITEM_CAT_4_I_X parallel 1;
Alter Index T_EP_E1_ITEM_CAT_5_I_X parallel 1;
Alter Index T_EP_E1_ITEM_CAT_6_I_X parallel 1;
Alter Index T_EP_E1_IT_BR_CAT_1_EP_ID_IDX parallel 1;
Alter Index T_EP_E1_IT_BR_CAT_2_EP_ID_IDX parallel 1;
Alter Index T_EP_E1_IT_BR_CAT_3_EP_ID_IDX parallel 1;
Alter Index T_EP_E1_IT_BR_CAT_4_EP_ID_IDX parallel 1;
Alter Index T_EP_E1_IT_BR_CAT_5_EP_ID_IDX parallel 1;
Alter Index T_EP_E1_PARENT_AD_NUM_I_X parallel 1;
Alter Index T_EP_EBS_ACCOUNT_I_X parallel 1;
Alter Index T_EP_EBS_BUS_GROUP_I_X parallel 1;
Alter Index T_EP_EBS_CUSTOMER_I_X parallel 1;
Alter Index T_EP_EBS_CUST_CLASS_I_X parallel 1;
Alter Index T_EP_EBS_DEMAND_CLASS_I_X parallel 1;
Alter Index T_EP_EBS_LEGAL_ENTITY_I_X parallel 1;
Alter Index T_EP_EBS_OPER_UNIT_I_X parallel 1;
Alter Index T_EP_EBS_PROD_CAT_I_X parallel 1;
Alter Index T_EP_EBS_PROD_FAMILY_I_X parallel 1;
Alter Index T_EP_EBS_SALES_CH_I_X parallel 1;
Alter Index T_EP_EBS_SUPPLIER_I_X parallel 1;
Alter Index T_EP_EBS_TP_ZONE_I_X parallel 1;
Alter Index T_EP_EBS_ZONE_I_X parallel 1;
Alter Index T_EP_ITEM_I_X parallel 1;
Alter Index T_EP_I_ATT_10_I_X parallel 1;
Alter Index T_EP_I_ATT_1_I_X parallel 1;
Alter Index T_EP_I_ATT_2_I_X parallel 1;
Alter Index T_EP_I_ATT_3_I_X parallel 1;
Alter Index T_EP_I_ATT_4_I_X parallel 1;
Alter Index T_EP_I_ATT_5_I_X parallel 1;
Alter Index T_EP_I_ATT_6_I_X parallel 1;
Alter Index T_EP_I_ATT_7_I_X parallel 1;
Alter Index T_EP_I_ATT_8_I_X parallel 1;
Alter Index T_EP_I_ATT_9_I_X parallel 1;
Alter Index T_EP_LR1_I_X parallel 1;
Alter Index T_EP_LR2A_I_X parallel 1;
Alter Index T_EP_LR2_I_X parallel 1;
Alter Index T_EP_LS1_I_X parallel 1;
Alter Index T_EP_LS2_I_X parallel 1;
Alter Index T_EP_LS3_I_X parallel 1;
Alter Index T_EP_LS4_I_X parallel 1;
Alter Index T_EP_LS5_I_X parallel 1;
Alter Index T_EP_LS6_I_X parallel 1;
Alter Index T_EP_L_ATT_10_I_X parallel 1;
Alter Index T_EP_L_ATT_1_I_X parallel 1;
Alter Index MDP_PK parallel 1;
Alter Index T_EP_L_ATT_6_I_X parallel 1;
Alter Index T_EP_L_ATT_7_I_X parallel 1;
Alter Index T_EP_L_ATT_8_I_X parallel 1;
Alter Index T_EP_L_ATT_9_I_X parallel 1;
Alter Index T_EP_ORGANIZATION_I_X parallel 1;
Alter Index T_EP_ORG_TYPE_I_X parallel 1;
Alter Index T_EP_P1_I_X parallel 1;
Alter Index T_EP_P2A1_I_X parallel 1;
Alter Index T_EP_P2A2_I_X parallel 1;
Alter Index T_EP_P2A_I_X parallel 1;
Alter Index T_EP_P2B_I_X parallel 1;
Alter Index T_EP_P3_I_X parallel 1;
Alter Index T_EP_P4_I_X parallel 1;
Alter Index T_EP_SITE_I_X parallel 1;
Alter Index MDP_MATRIX_1465_IDX parallel 1;
Alter Index T_EP_E1_ITEM_CAT_7_I_X parallel 1;
Alter Index T_EP_L_ATT_2_I_X parallel 1;
Alter Index T_EP_L_ATT_3_I_X parallel 1;
Alter Index T_EP_L_ATT_4_I_X parallel 1;
Alter Index T_EP_L_ATT_5_I_X parallel 1;
Alter Index DO_FORE_IND parallel 1;
Alter Index LOC_NO parallel 1;
Alter Index MDP_BRANCH_ID_IDX parallel 1;
Alter Index MDP_IN parallel 1;
Alter Index MDP_LOC parallel 1;
Alter Index MDP_MATRIX_481_IDX parallel 1;
Alter Index MDP_MATRIX_537_IDX parallel 1;
Alter Index MDP_MATRIX_706_IDX parallel 1;
Alter Index MDP_MATRIX_ITEM_ID_IDX parallel 1;
Alter Index MM_COMP_LEAD_DATE_IDX parallel 1;
Alter Index PREDICTION_STATUS_IND parallel 1;
Alter Index PROP_CHANGES_IND parallel 1;
Alter Index T_EP_CORP_CODE_I_X parallel 1;
Alter Index T_EP_E1_BR_CAT_1_I_X parallel 1;
Alter Index T_EP_E1_BR_CAT_2_I_X parallel 1;
Alter Index T_EP_E1_BR_CAT_3_I_X parallel 1;
Alter Index T_EP_E1_BR_CAT_4_I_X parallel 1;
Alter Index T_EP_E1_BR_CAT_5_I_X parallel 1;
Alter Index T_EP_E1_BR_CITY_I_X parallel 1;
Alter Index T_EP_E1_BR_COUNTRY_I_X parallel 1;
Alter Index T_EP_E1_BR_STATE_I_X parallel 1;
Alter Index T_EP_E1_CUST_CAT_1_I_X parallel 1;
Alter Index T_EP_E1_CUST_CAT_2_I_X parallel 1;
Alter Index T_EP_E1_CUST_CAT_3_I_X parallel 1;
Alter Index T_EP_E1_CUST_CAT_4_I_X parallel 1;
Alter Index T_EP_E1_CUST_CAT_5_I_X parallel 1;
Alter Index T_EP_E1_CUST_CAT_6_I_X parallel 1;
Alter Index T_EP_E1_CUST_CAT_7_I_X parallel 1;
Alter Index T_EP_E1_CUST_CITY_I_X parallel 1;
Alter Index T_EP_E1_CUST_CTRY_I_X parallel 1;
Alter Index T_EP_E1_CUST_STATE_I_X parallel 1;
Alter Index T_EP_P2_I_X parallel 1;
Alter Index DIVIDER_I_X parallel 1;
spool off;

Step 7:- Recompile the Invalid Objects
===========================================================================================

Run utlrp.sql (or the UTL_RECOMP package) to recompile all invalid objects.
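
A minimal sketch using UTL_RECOMP (run as SYS; the thread count and schema name are assumptions):

exec UTL_RECOMP.RECOMP_PARALLEL(4, 'DEMANTRA');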

Spool Data_collection_after_MDP_Tablereorg.lst
select object_type,count(1) from dba_objects where owner='DEMANTRA' group by object_type order by  2;
select object_name,object_type,owner,status from dba_objects where status='INVALID';
select index_name,table_name,status from dba_indexes where table_name='MDP_MATRIX';
select constraint_name,table_name,status from dba_constraints where table_name='MDP_MATRIX';
Spool Off


Step 8:- Gather the Schema Stats Again
===========================================================================================
SQL> DROP TEMPS
SQL> execute DBMS_STATS.DELETE_TABLE_STATS(ownname => 'DEMANTRA', tabname => 'SALES_DATA');
SQL> execute DBMS_STATS.DELETE_TABLE_STATS(ownname => 'DEMANTRA', tabname => 'MDP_MATRIX');

exec dbms_stats.GATHER_SCHEMA_STATS(OWNNAME=>'DEMANTRA', estimate_percent=>30 ,DEGREE=> 10);


Step 9:- Check the out-of-order ratio after the reorg
===========================================================================================

-- Same out-of-order ratio query as in Step 4 (written for SALES_DATA; adjust the table and key columns as needed).
SELECT ROUND(
         (SELECT /*+ PARALLEL(SALES_DATA,4) */ COUNT(*)
            FROM (SELECT SALES_DATE, ITEM_ID, LOCATION_ID, RELATIVE_FNO, BLOCK_NUMBER, ROW_NUMBER, DATA_ROW,
                         LAG(DATA_ROW) OVER (PARTITION BY RELATIVE_FNO, BLOCK_NUMBER
                                             ORDER BY ROW_NUMBER) AS PREV_DATA_ROW
                    FROM (SELECT SALES_DATE, ITEM_ID, LOCATION_ID, RELATIVE_FNO, BLOCK_NUMBER, ROW_NUMBER,
                                 DENSE_RANK() OVER (PARTITION BY RELATIVE_FNO, BLOCK_NUMBER
                                                    ORDER BY SALES_DATE, ITEM_ID, LOCATION_ID) AS DATA_ROW
                            FROM (SELECT /*+ PARALLEL(SALES_DATA,8) */ SALES_DATE, ITEM_ID, LOCATION_ID,
                                         DBMS_ROWID.ROWID_RELATIVE_FNO(ROWID) AS RELATIVE_FNO,
                                         DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID) AS BLOCK_NUMBER,
                                         DBMS_ROWID.ROWID_ROW_NUMBER(ROWID)   AS ROW_NUMBER
                                    FROM SALES_DATA) C
                         ) B
                 ) A
           WHERE DATA_ROW != PREV_DATA_ROW
             AND DATA_ROW != PREV_DATA_ROW + 1)
         / (SELECT /*+ PARALLEL(SALES_DATA,4) */ COUNT(*) FROM SALES_DATA)
       , 3) * 100 AS "Out Of Order Ratio %"
  FROM DUAL;

Spool Data_collection_after_MDP_column_reorg.lst
select object_type,count(1) from dba_objects where owner='DEMANTRA' group by object_type order by  2;
select object_name,object_type,owner,status from dba_objects where status='INVALID';
select index_name,table_name,status from dba_indexes where table_name='MDP_MATRIX';
select constraint_name,table_name,status from dba_constraints where table_name='MDP_MATRIX';
Spool Off

Thursday Oct 11, 2012

System Wide Performance Sanity Check Procedures

Do you need to boost your overall implementation performance?

Do you need a direction to pinpoint possible performance opportunities?

Are you looking for a general performance guide?

Try MOS note 69565.1

This paper describes a holistic methodology that defines a systematic approach to resolve complex Application performance problems.  It has been successfully used on many critical accounts.  The 'end-to-end' tuning approach encompasses the client, network and database and has proven far more effective than isolated tuning exercises.  It has been used to define and measure targets to ensure success. 

Even though it was created in 2001 and checked for relevance on 13-Oct-2008, the procedures are still very valuable.

Regards!

 

Thursday Aug 02, 2012

Are you Implementing Partitions for Demantra? Consider These Points.

1) Partition columns must be a subset of the primary key columns

2) Each partition should have its own tablespace.  All partitions of a partitioned object must reside in tablespaces of a single block size.

3) Each of the large tables should have its own tablespace.

4) Set the following parameters so that the Analytical Engine can find the partition on which any combination resides:
  
    Parameter              Purpose
    PartitionColumnItem    Specifies the name of the column that partitions the data by item.
    PartitionColumnLoc     Specifies the name of the column that partitions the data by location.

    Note: When the SALES_DATA table is not partitioned by a level column, you need to set these parameters to the partitioning column, for example:
    update init_params_0 set value_string = '<partition_column_name>'
    where pname in ('PartitionColumnItem', 'PartitionColumnLoc');


5) Compute the optimal PCTFREE, PCTUSED and INITRANS values for the tables.

6) Ensure that the schema statistics are up to date (see the query sketch after this list).

7) When creating partitions, consider your main worksheet levels.  Does your primary key follow the worksheet levels?  Partitions should also follow your worksheet levels and primary key.  If you have several worksheets that have different levels, weigh your options according to usage.
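
For point 6, a minimal sketch, run as the schema owner, to check when the large tables were last analyzed and whether their statistics are stale:

select table_name, last_analyzed, stale_stats
from user_tab_statistics
where table_name in ('SALES_DATA', 'MDP_MATRIX');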

Also, please review the following MyOracleSupport Documents:

Oracle Demantra Implementing Partitions for Performance (Doc ID 1227173.1)
Demantra Performance Overview and Recommendations High Impact Discussion Points (Doc ID 1162795.1)
Partitioned Sales_data Table But Engine Run Is Slower (Doc ID 1331482.1)

Friday Jul 06, 2012

Demantra 7.3.1.3 Controlling MDP_MATRIX Combinations Assigned to Forecasting Tasks Using TargetTaskSize

New 7.3.1.3 parameter: TargetTaskSize

Old parameter: BranchID Multiple (deprecated from 7.3.1.3 onwards)

Parameter Location: Parameters > System Parameters > Engine > Proport
 
Default: 0
 
Engine Mode: Both
 
Details: Specifies how many MDP_MATRIX combinations the analytical engine attempts to assign to each forecasting task.  Allocation is affected by forecast tree branch size.  TargetTaskSize is automatically calculated; it holds the preferred branch size as a number of lowest-level combinations.
This parameter is adjusted to a lower value for smaller schemas, depending on the number of available engines.

  - As the forecast is generated the engine goes up the tree using max_fore_level and not top_level -1.  Max_fore_level has
    to be less than or equal to top_level -1.  Due to this requirement, combinations falling under the same top level -1
    member must be in the same task.  A member of the top level -1 of the forecast tree is known as a branch.  An engine
    task is therefore comprised of one or more branches.

    - Reveal the current task size
      Go to Engine Administrator --> View --> Branch Information and run it against your Demantra schema.  This is deprecated in 7.3.1.3, since there is no longer a means of adjusting the branch size directly; the focus is now on proper hierarchy / forecast tree design.

    - Control of tasks
      The number of tasks created is the lowest of: the number of branches (as defined by the top level -1 members in the
      forecast tree), the number of engine sessions, and the value of TargetTaskSize.  You may be used to including the
      branch multiplier in this calculation; as of 7.3.1.3, the BranchID Multiple is deprecated.

    - Discovery of current branch size
      Review the second-highest level in the forecast tree (below highest/highest), as this is the level which determines
      the size of the branches.  If a few resulting tasks are too large, it is recommended that the forecast tree level
      driving the branches be revised or, at times, removed from the forecast tree entirely.

    - Control of forecast tree branch size
        - Run the following sql to determine how even the branches are being split by the engine:
 
          select count(*),branch_id from mdp_matrix where prediction_status = 1 and do_fore = 1 group by branch_id;
 
          This will show whether some individual branches have an unusually large number of rows, which might indicate
          that the engine is not dividing up the parallel tasks efficiently.

        - Based on the results of this sql, we may want to adjust the branch id multiplier and/or the number of engines
          (both of these settings are found in the Engine Administrator)

          select count(*), level_id from mdp_matrix where prediction_status = 1 and do_fore = 1 group by level_id;

          This shows at which level of the forecast tree the forecast is being generated.  Having a majority of
          combinations higher on the forecast tree might indicate either a poorly designed forecast tree and/or engine
          parameters that are too strict.

          Based on the results of this we would adjust the Forecast Tree to see if choosing a different hierarchy might
          produce a forecast, with more combinations, at a lower level.

          For example:

            - Review the 2nd highest level in the forecast tree, below highest/highest, as this is the level which
              determines the size of the branches.

            - If a few resulting tasks are too large it is recommended that the forecast tree level driving branches
              be revised or at times completely removed from the forecast tree.
 
            - For example, if the highest level of the forecast tree is set to Brand/All Locations.

            - You have 10 brands but 2 of the brands account for 67% and 29% of all combinations.

            - There is a distinct possibility that the tasks resulting from these 2 branches will be too large for
              a single engine to process.  Some possible solutions could be to remove the Brand level and instead
              use a different product grouping which has a more even distribution, possibly Product Group.
 
            - It is also possible to add a location dimension to this forecast tree level, for example Customer. 
              This will also reduce forecast tree branch size and will deliver a balanced task allocation.

            - A correctly configured Forecast Tree is something that is done by the Implementation team and is
              not the responsibility of Oracle Support. 

Allocation will be affected by forecast tree branch size.  When TargetTaskSize is set to 0, the default value, the system
automatically calculates a value for 'TargetTaskSize' depending on the number of engines.

  - QUESTION:  Does this mean that if TargetTaskSize is 1, we use tree branch size to allocate branches to tasks instead
               of automatically calculating the size?
    ANSWER: DEV Strongly recommends that the setting of TargetTaskSize remain at the DEFAULT of ZERO (0).

  - How to control the number of engines?
    Determine how many CPUs are on the machine(s) running the engine.  As mentioned earlier, the general rule is that
    you should designate two engines per available CPU.  So, for example, if you are running the engine on a machine
    that has 4 CPUs, then you can have up to 8 engines designated in the Engine Administrator.  In this type of
    architecture, instead of having one 'localhost' in your Engine Settings screen, you would have 'localhost'
    repeated eight times in this field.

    Where do I set the number of engines?
       
        To add multiple computers where the engine will run, back up the Settings.xml file under the
        Analytical Engines\bin\ folder, then edit it and add the selected machines.
       
        For example, this will allow 3 engines to start:

          <Entry>
            <Key argument="ComputerNames" />
            <Value type="string" argument="localhost,localhost,localhost" />
          </Entry>

Otherwise, if there are no additional engines defined, the calculated value of 'TargetTaskSize' is used.
(Oracle does not recommend changing the default value.)

TargetTaskSize holds the engine's preferred branch size, expressed as a number of level 1 combinations.

  - Level 1 combinations, known as group size

The engine manager will use this parameter to attempt creating branches with similar size.
  * The engine manager will not create engines that do not have a branch.

The engine divider algorithm uses the value of 'TargetTaskSize' as a system-preferred branch size to create branches that
are more equal in size which improves engine performance.  The engine divider will try to add as many tasks as possible to
an existing branch, up to the limit of 'TargetTaskSize' level 1 combinations, before adding new branches.

Coming up next:
- The engine divider
- Group size
- Level 1 combinations
- MAX_FORE_LEVEL
- Engine Parameters

 

Wednesday Jun 06, 2012

Need a Quick Sure Method to Produce a Formatted Explain Plan? This will help!

Please use the following on the production machine to get a formatted explain
plan and a SQL trace using the slow SQL (e.g. 'T_COMB_LIST.COMB_ID = 216') or
any other value that takes longer:

-- Open a new session in SQL*Plus
-- Make sure you are using an up-to-date PLAN_TABLE
-- This can be done by dropping it and recreating it by running:
-- SQL> @?/rdbms/admin/utlxplan.sql

set lines 1000
set pages 1000
spool xplan_1.txt
EXPLAIN PLAN FOR
<<<<Replace this line with exactly the same query you used above. Force hard
parse by modifying the case of a character>>>>
@?/rdbms/admin/utlxplp
spool off
EXIT
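
If you prefer DBMS_XPLAN output, the same plan can be formatted from the PLAN_TABLE in that first session, just before EXIT; a minimal sketch:

select * from table(dbms_xplan.display);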

-- Open a second session in SQL*Plus
ALTER SESSION SET max_dump_file_size = unlimited;
ALTER SESSION SET tracefile_identifier = '10046';
ALTER SESSION SET statistics_level = ALL;
ALTER SESSION SET events '10046 trace name context forever, level 12';
<<<<Replace this line with exactly the same query you used above. Force hard
parse by modifying the case of a character>>>>
select 'verify cursor closed' from dual;
ALTER SYSTEM SET EVENTS '10046 trace name context off';
EXIT

Make sure the spooled file is formatted properly and that the 10046 trace has the relevant explain plan in it.  Please upload both files (the 10046 trace is generated in udump).

Need instructions to find udump?  

sqlplus "/ as sysdba"
show parameters dump_dest

This will show you bdump, cdump and udump locations.

Thursday May 24, 2012

Announcing: Oracle Demantra 7.3.1.3 Available

We are pleased to announce the availability of Oracle Demantra 7.3.1.3.

Please review the release readme and all 7.3.1.3 documentation carefully to determine content details for this patch.  You can also find documentation in the Demantra Documentation Library on My Oracle Support (note 443969.1).

The release is currently available on My Oracle Support as Patch 14076370.

Regards

Wednesday Apr 04, 2012

Need to Determine the Engine Status?

If you need to establish the status of the engine, begin with this SQL:

select status, engine, engine_version, fore_column_name
from dm.forecast_history;

The status of an engine run is stored in the FORECAST_HISTORY table, in the “status” field.  That table also contains the FORE_COLUMN_NAME field, which holds the name of the column in SALES_DATA in which the relevant forecast is stored.

Here are the possible statuses:
-1, -2 : The engine failed in the initialization phase, that is, before the engine manager created the engines.
 0 : The engine stopped in the optimization phase, that is, after the engines were created.
 1 : The engine finished the run successfully.
 2 : The forecast was never calculated for the relevant column named in FORE_COLUMN_NAME.
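
A minimal sketch that decodes those status codes in one query (the dm schema prefix follows the query above; adjust to your installation):

select fore_column_name, engine, engine_version,
       case status
         when -2 then 'Failed in initialization'
         when -1 then 'Failed in initialization'
         when  0 then 'Stopped in the optimization phase'
         when  1 then 'Finished successfully'
         when  2 then 'Forecast never calculated for this column'
       end as run_status
from dm.forecast_history;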


 

Tuesday Apr 03, 2012

Add-ons for Firefox - Java Plugin has been blocked JRE versions below 1.6.0_31 or between 1.7.0 and 1.7.0_2

As of 3-APR-12 some Firefox versions automatically block all Java plugins which are Java versions less than 1.6.0_31.

7.3.0.2 AND ALL OF 7.3.1
Therefore every user that has a problem and is running a Demantra version that supports Java higher than 1.6.0_19 can upgrade their Java plugin to 1.6.0_31.  Demantra 7.3.0.2
and all versions of 7.3.1 support it.

OLDER VERSION of DEMANTRA
Anyone who has a problem while running on an older Demantra version that does not support Java versions newer than 1.6.0_19 cannot use Firefox with Demantra.  Instead they can either use Windows Internet Explorer (IE) without a Java upgrade or they can upgrade their Demantra.  If they upgrade to the latest Java version then the Demantra plugin will not work - not with Firefox and not with IE.

Tuesday Mar 06, 2012

Implementing Oracle OATM and Demantra? A Performance Tip For You!

Recommendation for Demantra?

If you are using a striped storage device, be it striped in hardware (like RAID 1+0 or RAID 5) or striped in software over many disks, then there is less need to separate the database objects into different tablespaces the way OATM does.

When doing an index lookup the DB server reads from the index and then it reads from the data table.  If both the index and the table are on the same physical disk then the DB server will wait between the index read and the data read while the disk head moves and the disk spins around to the right spot.

But if your storage device is an array of many disks then odds are that the index block and the data table block are not on the same physical disk anyway.  The more disks in the array the lower the odds that the DB server will need to wait.

That is why the number of disks is more important than their size.  The more disks the faster the I/O.

About

This blog delivers the latest information regarding performance and install/upgrade. Comments welcome
