Tuesday Nov 26, 2013

Introduction and Tips on Oracle BI Apps Variable Refresh

Author: Chuan Shi

Introduction

The ETL logic in BI Apps uses parameters in packages, interfaces, load plans, and knowledge modules (KMs) to control ETL behavior. Parameters can be configured by customers, pre-seeded in the ETL code, or maintained internally:

  • Data Load parameters that can be configured by customers are maintained at product line (PLV) and fact/dimension group levels in the BI Apps Configuration Manager (BIACM).
  • Some parameters are pre-seeded in the ETL code OOTB, e.g. DATASOURCE_NUM_ID.
  • Other parameters are used internally by the load plan to control the execution of the load plan, e.g. EXECUTION_ID.

ETL parameters are handled using ODI variables. The purpose of this blog is to explain how ODI variables are refreshed, to ensure that correct values are used in Oracle BI Apps ETL.

ODI Variable Classification

To ensure that correct values are used in ETL, variables are refreshed at run time. A variable can be refreshed in either a package or a load plan. Variables can be classified into four categories based on whether and where they are refreshed in ODI (and if a variable needs to be refreshed, then there must be a refreshing logic specified for it, which will be discussed later).


The four categories are:

  1. Not refreshed

Definition: Generally, internal parameters used by the load plan to control the ETL execution; therefore they shall not be refreshed.

Examples: DATASOURCE_NUM_ID, EXECUTION_ID

  2. Refreshed in the package

Definition: Variables that cannot feasibly be refreshed in load plans. (The refreshing logic of such a variable depends on ETL run facts: it sources from DW tables and uses the QUALIFY user-defined function (UDF), and ODI fails to interpret QUALIFY at the load plan level in some cases.)

Examples: IS_INCREMENTAL, LAST_EXTRACT_DATE

  3. Hardcoded in the load plan

Definition: Variables whose values are hardcoded by overwriting in load plans. These variables take the hardcoded values in ETL.

Examples: DOMAIN_CODE, LOOKUP_TYPE

  4. Refreshed in the load plan

Definition: Variables whose values are configured in BIACM by customers. (In other words, the values of these variables come from BIACM. The refreshing logic of such a variable uses a UDF to extract its value from BIACM.)

Examples: UPDATE_ALL_HISTORY, TYPE2_FLG

For a complete list of ODI variables, please refer to this post.

Refreshing Variables in a Package

Refreshing variables in a package is straightforward. One needs to create Refresh Variable steps for the variables to be refreshed. The screenshot below shows examples of refreshing IS_INCREMENTAL.

Hard-coding Variables in a Load Plan

Some variables, such as DOMAIN_CODE and LOOKUP_TYPE, are hardcoded in load plan components. To do this, go to the load plan component in which the variables are to be hardcoded, select the Overwrite checkbox, and provide the hardcoded values for the variables. The screenshot below shows examples of hardcoding DOMAIN_CODE and LOOKUP_TYPE.

Refreshing Variables from BIACM

BIACM is a central UI where customers can define the values of data load parameters (i.e., ODI variables), among many other features it offers. ODI variables that are refreshed in load plans have their values extracted from BIACM. We also refer to such variables as BIACM variables.

BIACM Variables are classified into truly global variables, PLV specific variables, and fact/dimension group level variables.

  • A truly global variable (e.g., 13P_CALENDAR_ID) is a variable that has the same value in all product lines (e.g., EBS11510, PSFT90, etc.) and for all fact/dimension groups. Truly global variables are refreshed centrally in the load plan system components.
  • A PLV specific variable (e.g., LANGUAGE_BASE) is a variable that takes the same value for all fact/dimension groups within a product line, but different values of the variable can be used in different product lines. They are refreshed individually in consuming load plan dev components.
  • A fact/dimension group level variable (e.g., UPDATE_ALL_HISTORY) is group specific. It can take different values in different fact/dimension groups within the same PLV and across different PLVs. They are refreshed individually in consuming load plan dev components.

From a variable value overriding perspective:

  • A truly global variable has a unique value. PLVs and fact/dimension groups cannot override its value.
  • A PLV variable has product line specific values (e.g., LANGUAGE_BASE takes the value of US in EBS product lines but ENG in PSFT product lines). The value is the same for all fact/dimension groups within that product line.
  • A fact/dimension group level variable has group specific values (e.g., TYPE2_FLG has the value of Yes in Position Dimension Hierarchy, while it has the value of No in Asset Dimension). Also, such a variable has a global default value. If a fact/dimension group does not specify the value of such a variable for its use, then the global default value will be used whenever this variable is called by that group (e.g., the global default value of TYPE2_FLG is No).

These variables are defined in ODI. To ensure that variable refreshing works correctly, there are some rules on the definitions of ODI variables:

  • Set the ‘Keep History’ option to ‘No History’;
  • Always provide a default value (the default value is picked if refreshing from BIACM does not return a value for some reason; without a default, the ETL will fail). As a good practice, set the ODI default value of the variable to the same as the variable's global value in BIACM.

(Whenever the Keep History option or the Default Value of a variable is changed, the scenarios that use this variable need to be regenerated.)

Once ODI variables are defined, a refreshing logic is needed to refresh them from BIACM. In this regard,

  • The ODI UDF GET_CM_PARAM is used.
  • To return the correct value for a variable, we need to specify the following in the refreshing logic:
      • variable name;
      • product line;
      • fact/dimension group.
  • Syntax: getcmparam($(param_code),$(DATASOURCE_NUM_ID))
      • $(param_code) is the name of the variable (e.g., TYPE2_FLG);
      • $(DATASOURCE_NUM_ID) is used to specify the product line.

For PLV/group level variables, we pass #DATASOURCE_NUM_ID as $(DATASOURCE_NUM_ID);

e.g., getcmparam('TYPE2_FLG','#DATASOURCE_NUM_ID')

For truly global variables, we pass #WH_DATASOURCE_NUM_ID as a pseudo-PLV ID.

e.g., getcmparam('13P_CALENDAR_ID','#WH_DATASOURCE_NUM_ID')

  • Do not pass the fact/dimension group directly into the syntax; it is determined by where the variable is refreshed.

BIACM variables are refreshed in load plans. To refresh a variable in a load plan, the following three steps are required (they have been done OOTB):

Step 1: Specify the correct logical schema and refreshing logic in the refreshing tab of the variable definition.

The logical schema has to be CM_BIAPPS11G.

The refreshing logic should be getcmparam() with appropriate inputs, e.g.,

getcmparam('13P_CALENDAR_ID','#WH_DATASOURCE_NUM_ID')

Step 2: Update the variable definition in the variables tab of the load plan.

Go to the load plan component where you want to refresh the variables. In the Variables tab, right click on the variables and select ‘Refresh Variable Definition’ so that the variable definition in the LP is synchronized with its real definition. Once this is done, verify that the logical schema is showing CM_BIAPPS11G, and the select statement is showing the embedded SQL in the getcmparam() function.

Step 3: Check the refreshing checkbox at the appropriate LP step.

For truly global variables, Step 3 is instead performed centrally in the load plan system components.
The logic behind getcmparam() guarantees that the appropriate value of the variable is returned from BIACM, given the name of the variable, the DATASOURCE_NUM_ID passed in, and the LPC step where it is refreshed.

Values stored in BIACM are strings, so all ODI variables refreshed from BIACM come back as strings. Each consuming piece of code (where the variables are used) must convert the data type accordingly. For example, dates are returned as strings in the format yyyy-mm-dd hh:mi:ss; the TO_DATE_VAR UDF is used to convert the returned string to DATE format. Number values are returned as strings as well.
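For instance, the conversion that TO_DATE_VAR effectively performs can be sketched in plain Oracle SQL (a sketch only, assuming a 24-hour clock; #MY_DATE_VAR is a hypothetical variable name used for illustration):

SELECT TO_DATE('#MY_DATE_VAR', 'YYYY-MM-DD HH24:MI:SS') FROM DUAL;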

Checklist when Things Go Wrong

What can go wrong?

  • The value of a variable used in ETL is not in line with expectation.
  • A refreshed variable has no value returned, which fails the ETL run.

Overriding Rule (1)

  • In a load plan, when a variable is refreshed in a parent step (e.g., the root step), its value is inherited by all of its child steps, unless the variable is refreshed/overwritten in a child step.
  • However, if a variable is refreshed and/or overwritten in a child step, the value from that step overrides the value refreshed from the parent step. Other child steps at the same level are NOT affected; they still inherit the value refreshed in the parent step.

Overriding Rule (2) (unlikely to happen but it exists)

If a variable is refreshed both in a package and in a load plan, then the value refreshed from the package will override the value refreshed from the load plan.

When the value of a variable returned from BIACM is not in line with expectation:

  • Confirm where the variable is refreshed, e.g., BIACM? ETL tables in DW? etc.
  • For BIACM PLV or group level variables:
      • Check its value(s) in the BIACM UI. For PLV variables, check the value in each product line; for group level variables, check the group specific values as well as the global default value.
      • Check if the variable is refreshed in a root step of a load plan (refresh checkbox checked). Also check if the root step is named after a fact/dim group.
      • Check if the variable is incorrectly refreshed or hardcoded in a child step belonging to the root step (avoid overriding rule 1).
      • Check the ODI default value of this variable. If BIACM returns (null), i.e., nothing, for this variable, its ODI default value will be used. Also, if we check the overwrite box (but not the refresh checkbox) of a variable in a load plan step but forget to provide the value, then the ODI default value will be used.
      • Note: overriding rule (2) is unlikely to happen to BIACM variables.
  • For variables refreshed from ETL tables in DW, an incorrect value likely indicates a run issue. Check the run of that specific task.

When a variable has no value:

  • Confirm where the variable should be refreshed, e.g., BIACM? ETL tables in DW? etc.
  • In rare cases, a variable may not have a value returned when it is refreshed, and this leads to ETL failures.
  • ODI behaves like this: it first refreshes the variable from its designated source (e.g., BIACM). If its source returns (null), i.e., nothing, for this variable, the ODI default value of this variable will be used in ETL. However, if the ODI default value is not provided, then this variable will not have a value.

Tuesday Nov 19, 2013

Tips and Usage of Group Account Number Configuration

Author: Sridhar Kasam

Introduction

The general concept discussed in this blog applies to all OBIA releases; the specific Configuration Manager screenshots are only applicable starting with release 11.1.1.7.0.

When you implement Financial Analytics, one of the most important steps is to set up Group Account Numbers. Many issues are reported due to incorrect setup of Group Account Numbers.

- What is Group Account Number?

o Group Account Number logically groups GL natural accounts into reportable group(s), so users can view financial reports at a higher level than that of a GL natural account.


Example: Assign natural account range 1210 – 1240 to Group Account Number “AR” (Accounts Receivable).

CHART OF ACCOUNTS ID | FROM ACCT | TO ACCT | GROUP_ACCT_NUM
101                  | 1110      | 1110    | CASH
101                  | 1210      | 1240    | AR
101                  | 1280      | 1280    | OTHER CA
101                  | 1340      | 1340    | PPAID EXP

o Group Account Number is used to establish the relationship between GL natural accounts and Financial Statement Item Codes. Financial Statement Item Codes are mapped to base fact tables for the GL reconciliation process.

Example: Group Account Number, Financial Statement Item Code & Base Fact Table

GROUP_ACCT_NUM | FIN_STMT_ITEM_CODE | BASE FACT TABLE
ACC DEPCN      | OTHERS             | W_GL_OTHER_F
ACC LIAB       | OTHERS             | W_GL_OTHER_F
AP             | AP                 | W_AP_XACT_F
AR             | AR                 | W_AR_XACT_F
CASH           | OTHERS             | W_GL_OTHER_F
CMMN STOCK     | OTHERS             | W_GL_OTHER_F
COGS           | COGS               | W_GL_COGS_F
REVENUE        | REVENUE            | W_GL_REVN_F

- How to configure Group Account Number(s) and Financial Statement Item Code(s)?

o Group Account Numbers are defined / configured through a configurable csv file. For Oracle, the file name is file_group_acct_codes_ora.csv; for PeopleSoft, it is file_group_acct_codes_psft.csv; and for JDE, it is file_group_acct_codes_jde.csv. Users specify which GL natural accounts are assigned to a particular group account, within a chart of accounts for Oracle, a GL Business Unit for PeopleSoft, or a company for JDE. Please ensure that the account ranges are continuous and do not overlap. If necessary, you can have multiple ranges and/or accounts assigned to the same group account. Additionally, please ensure that the accounts and ranges provided are all leaf accounts, not parent nodes/ranges.
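For illustration only, the Oracle csv file might then contain rows like the following (values taken from the example ranges above; check the header row of the delivered file for the exact column order):

CHART OF ACCOUNTS ID,FROM ACCT,TO ACCT,GROUP_ACCT_NUM
101,1110,1110,CASH
101,1210,1240,AR
101,1280,1280,OTHER CA
101,1340,1340,PPAID EXP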

o By mapping GL accounts to group account numbers and then associating group accounts to a Financial Statement Item code, an indirect association is also built between GL accounts and Financial Statement Item codes. In BI Apps 11.1.1.7.1 and later, the association of group account numbers to financial statement item codes is performed in Oracle BI Applications Configuration Manager; in earlier releases such as 7.9.6.3, this configuration was done using the configurable file file_grpact_fstmt.csv.

Note: Customers should not modify the mapping relationship between Group Account Number and Financial Statement Item Code for Group Account Numbers AP, AR, COGS and REVENUE.

o It is not always necessary to assign all out-of-the-box group account numbers to account ranges. If customers do not plan to use the GL logical metrics that include Group Account Number in their definitions, they do not need to configure the majority of the Group Account Numbers. However, if they plan to implement Payables, Receivables, Revenue, or COGS facts, they still need to configure the Group Account Numbers AP, AR, REVENUE, or COGS, because these Group Account Numbers are also used in the GL reconciliation process against these facts.

o If, for some reason, the out-of-the-box group account numbers are not sufficient, the high-level steps to add additional group accounts are (11g):

§ Define new Group Account Numbers in Configuration Manager

§ Assign the newly created Group Account Number to a financial statement code (e.g. AP, AR, REVENUE, COGS, OTHERS).

§ Assign GL account to Group Accounts in source specific csv file.

§ Model the RPD to include a new group account measure and expose in presentation layer.

- Why is it important?

Group Account Number configuration is important because it determines whether the right GL accounts are assigned to the group account number(s). In other words, it determines the accuracy of most reports in Financial Analytics where Group Account Number is part of the report. Group Account Numbers, in combination with Financial Statement Item Codes, are also leveraged in the GL reconciliation process to ensure that sub ledger data reconciles with GL journal entries.

Group Account Number Usage in Financial Analytics

- GL reports

o For "Account Receivable" in balance sheet reports, we calculate the total amount for GL natural accounts 1210 to 1240 for ledgers with chart of accounts id 101.

o Because of this association, and the subsequent configuration to expose the group account number metric in the RPD, users can report on Group Account Number metrics. For example, in a Balance Sheet report, users can report on "AR Amount" from "Fact – Fins – GL Balance" (logical fact), corresponding to the "AR" group account.
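As a conceptual illustration of this calculation (the table and column names here are invented for readability and are not the actual warehouse schema), the "AR" line of a balance sheet effectively computes:

SELECT SUM(amount) AS ar_amount
FROM gl_balances
WHERE chart_of_accounts_id = 101
  AND natural_account BETWEEN 1210 AND 1240;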

- Reconciliation

o Because of the indirect relationship built between GL accounts and Financial Statement Item codes, it is possible to run GL reconciliation to ensure that sub ledger data reconciles with GL journal entries. For example, after an invoice has been transferred to GL, a user might decide to adjust the invoice in GL. In that case, it is important that the adjustment amount trickles down to the sub ledger base fact along with the balance fact. To identify such sub ledger transactions in GL, the reconciliation process uses Financial Statement Item codes.


Common Issues due to Incorrect Group Account Number Setup

1. Sub ledger transactions are not posted.

Group account numbers are associated with financial statement item codes. The GL reconciliation process uses the financial statement item codes to identify the sub ledger fact tables to be reconciled with GL journals. If the group account number assignment to a GL account is incorrect, sub ledger facts remain "unposted".

Because sub ledger balance facts are based on posted transactions, reports such as AP balances or AR balances may return incorrect results.

For example: GL account "1210" is supposed to be associated with the "AR" (Accounts Receivable) group account but was mistakenly associated with "AP" (Accounts Payable). In that case, as part of the ETL process, all the GL journal lines for account 1210 are reconciled against sub ledger accounting records in the AP fact. But in reality these GL journal lines came from AR, not AP, so because no corresponding entries are found in "AP", they remain "unposted".

2. Unnecessary Manual records are created in sub ledger facts.

When a group account number is incorrectly assigned to an account, unnecessary records with transaction type “Manual” are created in sub ledger facts (e.g. W_AP_XACT_F, W_GL_REVN_F etc). Those records do not have sub ledger transaction details (e.g. supplier, customer, transaction number, and so on).

Following the same example, because no corresponding entries are found in "AP", the ETL process inserts "Manual" records into the AP fact, treating these entries as manual journal entries created directly in the GL system.

3. GL reports return incorrect results.

When group account number assignments are incorrect, GL metrics such as "AP" and "AR" from the example above will show incorrect amounts. Therefore, whenever you see a GL metric showing an incorrect amount, the first thing to check is whether the group account number used by that metric has the correct GL account ranges assigned to it.
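One way to verify an assignment is to query the warehouse directly. A minimal sketch, assuming the GL account dimension W_GL_ACCOUNT_D carries the assignment in a column named GROUP_ACCOUNT_NUM (verify the exact column name in your release):

SELECT GL_ACCOUNT_NUM, GROUP_ACCOUNT_NUM
FROM W_GL_ACCOUNT_D
WHERE GL_ACCOUNT_NUM IN ('1210', '1240');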

Extracting PSFT Chartfield Hierarchy Data in OBIA 11.1.1.7.1

Author: Akshay Satyendranath 

BI Apps 11.1.1.7.1 supports 26 chartfields, 3 of which are considered mandatory and are mapped to specific dimensions as shown below.

Department – Cost Center Dimension

Fund – Balancing Segment Dimension

Account – Natural Account Dimension

All the remaining chartfields are mapped to one dimension, the GL Segment Dimension. The physical table is shared by all chartfields, but there are multiple logical dimensions in the RPD that you can use. You will need to apply appropriate filters to these logical dimensions to restrict the data to the relevant chartfield.

If you want to extract the trees defined for these chartfields into the DW, you will need to configure some parameters as explained below. For each chartfield, there are two sets of parameters that you can configure.

  1. List of Tree Structures. This parameter is considered mandatory; configure it by giving a comma separated list of all the Tree Structures you need to extract for that chartfield.
  2. List of Trees within the Tree Structures configured in 1. This parameter is considered optional. If you don't configure it, all the Trees pertaining to the Tree Structures configured in 1 will be extracted. If you need only specific trees within those tree structures, configure this parameter by giving a comma separated list of SETID + Tree Name.

The actual parameter names for each dimension are given in the table below.

Dimension         | Parameter Names
Cost Center       | TREE_STRUCT_ID_LIST, TREE_SETID_NAME_LIST
Balancing Segment | TREE_STRUCT_ID_LIST, TREE_SETID_NAME_LIST
Natural Account   | TREE_STRUCT_ID_LIST, TREE_SETID_NAME_LIST
GL Segment        | TREE_STRUCT_ID_LIST_<CF>, TREE_SETID_NAME_LIST_<CF>


Since the GL Segment dimension supports multiple chartfields, there is one set of parameters for each chartfield. You will need to configure as many parameters as the number of chartfields you need in BI.

For example, say you have configured the Program chartfield; then you will need to configure TREE_STRUCT_ID_LIST_PROGRAM and TREE_SETID_NAME_LIST_PROGRAM. Screenshots for one such set of parameters, along with sample values, are given below.
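As a hypothetical example of the expected format (the tree structure and tree names below are invented; the second parameter takes SETID~TREENAME pairs, matching the output of the reference SQL further down):

TREE_STRUCT_ID_LIST_PROGRAM = PROGRAM
TREE_SETID_NAME_LIST_PROGRAM = SHARE~PROG_TREE1,SHARE~PROG_TREE2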

The following are the FSM tasks you need to configure for setting these parameters:

1) Configure Data Load Parameters for People Soft Trees

2) Configure Trees to be extracted for GL Account Chartfields for PeopleSoft

The following section gives some sample SQL queries that you can use as a reference to get the values of these parameters for each chartfield. Please note that these are references only; you should validate the parameter values before you configure them.

1) For Tree Structures

SELECT TREE_STRCT_ID FROM PSTREESTRCT WHERE DTL_RECNAME=<Detail Table of your CF>;

E.g. for the Department chartfield which maps to the Cost Center Dimension you can use

SELECT TREE_STRCT_ID FROM PSTREESTRCT WHERE DTL_RECNAME='DEPT_TBL';

2) For Trees within Tree Structures

SELECT A.SETID||'~'||A.TREE_NAME
FROM PSTREEDEFN A, PSTREESTRCT B
WHERE A.TREE_STRCT_ID = B.TREE_STRCT_ID
  AND B.DTL_RECNAME = <Detail Table of your CF>;

E.g. for the Department chartfield which maps to the Cost Center Dimension you can use

SELECT A.SETID||'~'||A.TREE_NAME
FROM PSTREEDEFN A, PSTREESTRCT B
WHERE A.TREE_STRCT_ID = B.TREE_STRCT_ID
  AND B.DTL_RECNAME = 'DEPT_TBL';

Note:

1) This feature allows you to extract multiple effective-dated trees as well. By default, the out-of-the-box code extracts only the current version of any tree.

2) If you need to extract multiple effective dated versions, then you will need to set the value of the parameter TREE_CURRENT_VERSION_ONLY to N.

3) There is a patch you need to apply when you extract multiple versions; otherwise the code creates duplicates for each version. Please check with the support team to get the relevant patch.
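As a quick sanity check before enabling multi-version extraction, you can list the effective-dated versions of a tree directly in the PeopleSoft catalog; a sketch, using a hypothetical tree name:

SELECT SETID, TREE_NAME, EFFDT
FROM PSTREEDEFN
WHERE TREE_NAME = 'DEPT_TREE'
ORDER BY EFFDT;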


Wednesday Nov 13, 2013

How to handle BIAPPS ETL failures due to duplicate data issues in Oracle DW


Author: Amit Kothari

Sometimes there are duplicate data rows in the source tables. Because of this, while running an ODI Load Plan (LP) in BI Applications 11.1.1.7.1, ODI sessions may error out while creating the unique indexes. It is always recommended to fix the source data issues first, but that may not be feasible, or you may just want to fix the data in the warehouse and continue with the LP run. This blog shows you one way to do that: first we find the table and the indexed columns for the failed index, and then we substitute them in a delete SQL to remove the duplicate rows.

Sample Error:

ODI-1217: Session EXEC_TABLE_MAINT_PROC (12383500) fails with return code 20000.
ODI-1226: Step TABLE_MAINT_PROC fails after 1 attempt(s).
ODI-1232: Procedure TABLE_MAINT_PROC execution fails.
ODI-1228: Task TABLE_MAINT_PROC (Procedure) fails on the target ORACLE connection BIAPPS_DW.
Caused By: java.sql.SQLException: ORA-20000: Error creating Index/Constraint : W_EMPLOYEE_D_U1=> ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found
ORA-06512: at line 152

Solution:

From SQL Developer, run this SQL in your ODI repository schema, based on the failed index name from the error above. It returns the indexed columns for the failed index:

SELECT st.table_name, sc.col_name
FROM SNP_TABLE st, SNP_KEY sk, SNP_KEY_COL skc, SNP_COL sc
WHERE sk.key_name = 'W_EMPLOYEE_D_U1'
  AND sk.i_table = st.i_table
  AND skc.i_key = sk.i_key
  AND sc.i_col = skc.i_col
  AND st.i_table = sc.i_table;


Running the above SQL gives the table name W_EMPLOYEE_D and the columns EFFECTIVE_FROM_DT, DATASOURCE_NUM_ID, INTEGRATION_ID.

These are the columns with duplicate rows, which caused the unique index creation to fail. Now we can delete the duplicate rows via a delete SQL; make sure you back up the table data before issuing the delete statement.
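Before deleting, you can inspect the duplicates with a grouping query on the same key columns; re-running it after the delete (it should return no rows) confirms the cleanup:

SELECT INTEGRATION_ID, DATASOURCE_NUM_ID, EFFECTIVE_FROM_DT, COUNT(*)
FROM W_EMPLOYEE_D
GROUP BY INTEGRATION_ID, DATASOURCE_NUM_ID, EFFECTIVE_FROM_DT
HAVING COUNT(*) > 1;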

From SQL Developer, run the delete SQL in your Oracle target schema and then simply restart the ODI load plan. A sample delete SQL based on the above index follows; substitute the columns and the table name for your use case.

DELETE FROM W_EMPLOYEE_D A
WHERE ROWID > (SELECT MIN(ROWID)
               FROM W_EMPLOYEE_D B
               WHERE A.INTEGRATION_ID = B.INTEGRATION_ID
                 AND A.DATASOURCE_NUM_ID = B.DATASOURCE_NUM_ID
                 AND A.EFFECTIVE_FROM_DT = B.EFFECTIVE_FROM_DT);
COMMIT;
