Thursday Feb 05, 2015

ETL Design for sharing the sequence ID

Author: Archana Dixit

Introduction

This blog describes an ETL design for sharing the same sequence ID among several tables. The Party dimension is an exceptional case where records from different streams merge into one table. Organization, person, and group party types are loaded into separate tables W_PARTY_ORG_D, W_PARTY_PER_D, and W_PARTY_GROUP_D, and later get merged into a master table, W_PARTY_D.

 

These dimensions are slowly changing dimensions out of the box in BI Applications. They also carry a slowly changing type 1 WID named SCD1_WID, which keeps the same value on the new record even when an SCD type 2 change occurs. For example, if the Organization Name is changed from ‘ABC Software’ to ‘ABCD Software’, the new current record still has the same value for SCD1_WID.

 

This WID is populated with a sequence-generated numeric value. The knowledge modules (KMs) use a separate DB sequence for each target table while loading data into the W_PARTY_ORG_D, W_PARTY_PER_D, and W_PARTY_GROUP_D tables, resulting in different sequence numbers for the SCD1_WID column in the master table W_PARTY_D. The following steps describe an approach to share the same sequence ID.

Step 1:

The KM-driven option ‘OBI_SCD1_WID’ should be disabled to prevent the KM from creating separate DB sequences. Set the default value of the OBI_SCD1_WID IKM option to ‘false’ as shown in the screenshot below.

 

Step 2:

Create a mapping to populate the helper table W_PARTY_PER_T1_D. The source for this mapping should be the corresponding staging table W_PARTY_PER_DS. The mapping expression for the SCD1_WID column should read from the sequence created by the previous load stream (in this case the W_PARTY_ORG_D load). Set the mapping expression to NEXTVAL(W_PARTY_ORG_S1W) and uncheck the ‘Update’ checkbox.
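Conceptually, the mapping expression just pulls the next value from the shared Oracle sequence created for the W_PARTY_ORG_D load. A minimal SQL sketch of what NEXTVAL(W_PARTY_ORG_S1W) resolves to (any schema qualification depends on your environment):

-- Sketch only: equivalent of the ODI mapping expression NEXTVAL(W_PARTY_ORG_S1W)
SELECT W_PARTY_ORG_S1W.NEXTVAL AS SCD1_WID
FROM DUAL;

Because the same sequence feeds both the W_PARTY_ORG_D and W_PARTY_PER_T1_D loads, the SCD1_WID values remain consistent when the streams merge into W_PARTY_D.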

 

Step 3:

In the Flow tab, the DETECTION_STRATEGY IKM option should be set to ‘NONE’.

 

Step 4:

Configure the load plan (LP) components to execute the scenarios loading W_PARTY_PER_T1_D, W_PARTY_PER_D, and W_PARTY_D in serial mode, in the order shown below.

 

Thursday Jan 22, 2015

DB Link with Oracle BI Applications

Authors: Simon Miller, Director OBIA, Oracle Customer Engineering and Advocacy Lab (CEAL team) and Patrick Block, Principal, OBIA Development

In an earlier blog post, 3 Modes for Moving Data to the BI Applications DW from a Source Application Database, the available options for setting up data source connections were provided in detail, along with screenshots.  It should be noted that this information, written for OBIA 11.1.1.7.1, is still applicable to 11.1.1.8.1. While not all customers will deploy the SDS (Source Dependent Store with GoldenGate), most customers should consider using the DB link option. The DB link allows for direct Oracle database to Oracle database communication, and the data won’t need to flow through your ODI Agent/WLS.  The objective of this blog is to elaborate on the above-mentioned post and add tips and troubleshooting guidance.
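For reference, a DB link from the warehouse database to the source database takes the usual Oracle form; the link name, user, password, and TNS alias below are placeholders, not values shipped with BI Applications:

CREATE DATABASE LINK EBS_SRC_LINK
  CONNECT TO src_reader IDENTIFIED BY src_password
  USING 'EBS_SOURCE_TNS';

-- Quick connectivity check through the link
SELECT COUNT(*) FROM dual@EBS_SRC_LINK;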

[Read More]

Thursday May 08, 2014

BI apps 11.1.1.8.1 is available now

Oracle Business Intelligence (BI) Applications 11.1.1.8.1 is now available on the Oracle Software Delivery Cloud (eDelivery), and on the Oracle BI Applications OTN page. This is the second major release on the 11g code line leveraging the power of ODI, and certified with the latest version of Oracle BI Foundation 11.1.1.7.

Highlights from the announcement:

What’s New

This new release of Oracle Business Intelligence Applications 11g adds:

1. Prebuilt Connector for Oracle Procurement and Spend Analytics with Oracle JD Edwards

2. Expanded functional content coverage across existing Oracle BI Applications

3. New utilities and enhancements to increase productivity and further drive down Total Cost of Ownership (TCO)

New Prebuilt Connector for BI Applications

· Oracle Procurement and Spend Analytics introduces a new adapter that provides out-of-box integration between Oracle Procurement and Spend Analytics and Oracle’s JD Edwards EnterpriseOne, enabling purchasing and sourcing organizations to quickly identify savings opportunities and improve operational performance through decision-ready, best practice analytics.

New Content Enhancements: Significant expansion of content improves existing Oracle BI Applications including:

· Oracle Human Resources Analytics expands analysis on HR programs and workforce performance with the introduction of Talent Management Analysis, to help HR and business managers assess talent strengths and build potential leaders by delivering greater insight into job profiles. Improvements to Absence Management, new support for Workforce Frozen Snapshots and Workforce Flex Fields are also available.

· Oracle Financial Analytics improves company Financial Performance with the new Subledger Accounting Module, providing complete account balance analysis, including supporting reference balances and drill down to accounting transactions. Aging Snapshot restoration for Receivables and Payables, drilldown from Payables to Purchase Orders, and Fixed Assets support for GL Posting Status are features also included in the release.

· Oracle Project Analytics expands insight into project-based data with new Earned Value Management Analysis, providing visibility for Project Stakeholders into Planned Value, Earned Value, and Actual Cost. Analysis of variances and indices, derived from Cost and Schedule Variance, CPI, TCPI, SPI and TSPI, is also available.

· Oracle Supply Chain & Order Management Analytics introduces Costing and Inventory Turn support for Oracle JDE EnterpriseOne. In addition, the ability to save and restore the Inventory Snapshot when executing a full load is now supported.

· Oracle Student Information Analytics introduces new Financial Aid Analysis, containing term-based detail information associated with a student award such as amount, status, disbursement, and aid type.

· Oracle Manufacturing Analytics and Enterprise Asset Management Analytics have been re-introduced into the BI Applications data model. Although newly implemented on ODI, both modules now provide the same broad functionality available with BI Applications 7.9.6.4.

· Unstructured Data for Information Discovery – The BI Applications Data Model has been extended to include unstructured CRM and ERP text fields. Sample Information Discovery templates – including ODI interfaces and runtime security definitions – provide the ability to quickly create sample Information Discovery Applications from combined structured and unstructured data.

New Utilities and Enhancements to increase productivity and drive down TCO

· New Data Lineage Tool allows customers and implementers to track lineage from source to target – improving information transparency across their enterprise

· Configuration Management expands System Setup and Load Plan definition capabilities, including the registration of multiple instances for the same Product Line Version, UI enhancements to support DW connection configuration and Load Plan definition, and Load Plan definition migration.

Fact Partitioning IKMs and Parallel Index Builds are key performance innovations introduced with this release. The former tags partitions so that only the indexes that need to be rebuilt during incremental loads are processed. The latter uses ODI parallel processes so that indexes can be built using multiple sessions (faster than, and in addition to, DB parallelism).


 

Monday Apr 21, 2014

How does TYPE2_FLG Work in ETL

Author: Vivian(Weibei) Li

Introduction

TYPE2_FLG is usually used in slowly changing dimensions in BI Applications. This flag indicates whether the dimension is type 2, and it determines how data is stored during ETL. This blog gives you a better understanding of how TYPE2_FLG works in ETL.

Background

Slowly Changing dimension

There are many fundamental dimensions such as Customer, Product, Location, and Employee in BI Applications. The attributes in these dimensions are revised from time to time. Sometimes the revised attributes merely correct an error in the data, but many times they represent a true change at a point in time. These changes arrive unexpectedly, sporadically, and far less frequently than fact table measurements, so we call this topic slowly changing dimensions (SCDs).

Slowly changing dimension (SCD) handling for entities like Employee, Customer, Product, and others determines how historical changes in the dimension tables are handled and how to respond to the changes. There are three different kinds of responses: slowly changing dimension (SCD) Types 1, 2, and 3.

Type 1: Overwrite the attributes with new changes

Type 2: Add a New Dimension Record

Type 3: Add a New Field

This blog focuses on type 2. In the Type 2 SCD model, the whole history is stored in the database. An additional dimension record is created, the segmentation between the old record values and the new (current) value is easy to extract, and the history is clear. A minimum of three additional columns should be added to the dimension row to support type 2 changes: 1) row effective date or date/time stamp (EFFECTIVE_FROM_DT); 2) row expiration date or date/time stamp (EFFECTIVE_TO_DT); and 3) current row indicator (CURRENT_FLG).

SRC_EFF_FROM_DT and EFFECTIVE_FROM_DT

The two columns represent different concepts even though they have similar names, and many customers get confused between them.

SRC_EFF_FROM_DT is extracted from the effective start date of the source (mainly from the main driving source) if the source keeps history. If the source doesn’t store history, or the history is not extracted, it is hard-coded to #LOW_DATE.

EFFECTIVE_FROM_DT is a system column in the dimension table used to track history. Remember that knowledge modules (KMs) provide repeatable logic that can be reused across ETL tasks; updating the SCD-related columns, such as EFFECTIVE_FROM_DT, is usually handled by the KM. EFFECTIVE_FROM_DT is modified when a new type 2 record is inserted in an incremental run, and it is usually set to the changed-on date from the source. EFFECTIVE_FROM_DT does not always map to the source effective dates.

In the type 2 SCD model, EFFECTIVE_FROM_DT is the date used to track history.

TYPE2_FLG in BI Application

TYPE2_FLG is a flag used to indicate whether the dimension is type 2. This flag is used in many dimensions in BI Applications, such as employee, user, position, and so on. This flag is very important because it determines how history is stored.

TYPE2_FLG has two values: ‘Y’ and ‘N’. ‘Y’ means the dimension is a type 2, and ‘N’ means the dimension is type 1. Type 2 dimensions store the history, while type 1 dimensions only store the current record.

For example, if the supervisor is changed from Peter to Susan for an employee on 01/02/2012:

Type 1

EMPLOYEE_ID | SUPERVISOR_NAME | CURRENT_FLG
123         | Susan           | Y

Type 2

EMPLOYEE_ID | EFFECTIVE_FROM_DT | EFFECTIVE_TO_DT | SUPERVISOR_NAME | CURRENT_FLG
123         | 01/02/2012        | Future          | Susan           | ‘Y’
123         | 01/01/1999        | 01/02/2012      | Peter           | ‘N’

As shown above, the type 1 dimension overwrites the supervisor with the new supervisor and only stores the current record. The type 2 dimension inserts a new record with the new supervisor name and keeps the old record as history. EFFECTIVE_FROM_DT, EFFECTIVE_TO_DT, and CURRENT_FLG are modified accordingly: EFFECTIVE_TO_DT is changed to 01/02/2012 and CURRENT_FLG is set to ‘N’ for the old record, while CURRENT_FLG is set to ‘Y’ for the new record with the new EFFECTIVE_FROM_DT.
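The KM handles this automatically, but the net effect is roughly the following SQL (a simplified sketch only; the real KM logic and column list are more extensive, and the far-future end date is a placeholder for the warehouse ‘high date’):

-- Close out the old current record
UPDATE W_EMPLOYEE_D
   SET EFFECTIVE_TO_DT = TO_DATE('01/02/2012', 'MM/DD/YYYY'),
       CURRENT_FLG     = 'N'
 WHERE EMPLOYEE_ID = '123'
   AND CURRENT_FLG  = 'Y';

-- Insert the new current record
INSERT INTO W_EMPLOYEE_D
  (EMPLOYEE_ID, EFFECTIVE_FROM_DT, EFFECTIVE_TO_DT, SUPERVISOR_NAME, CURRENT_FLG)
VALUES
  ('123', TO_DATE('01/02/2012', 'MM/DD/YYYY'), TO_DATE('01/01/3714', 'MM/DD/YYYY'), 'Susan', 'Y');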

How to Setup TYPE2_FLG

The out-of-the-box code in BI Applications sets the default values. For type 2 dimensions, TYPE2_FLG is usually set to ‘Y’.

TYPE2_FLG can be configured in the BI Applications Configuration Manager (BIACM). This variable is configured per dimension group.

The screenshot above shows that you can configure the value of this flag for different dimension groups by clicking the parameter value and overriding it with a different value.

Note: You can only configure the TYPE2_FLG for the dimension groups that are in this BIACM list. The dimension groups that are not in the list cannot be configured.

You should set the value of TYPE2_FLG carefully. If you override TYPE2_FLG to ‘N’ for a type 2 dimension, you may run into issues, described in more detail in the next section.

Possible Issues Related to TYPE2_FLG

As mentioned earlier, the value of TYPE2_FLG may sometimes be set to ‘N’ for a type 2 dimension. This can cause issues.

In BI Applications, the SDE mapping brings history from the source in the initial full load for some adaptors, such as EBS. TYPE2_FLG affects how these historical records are stored. The following compares the behavior when TYPE2_FLG is set to ‘Y’ versus ‘N’ for a type 2 dimension.

Case 1-TYPE2_FLG = ‘Y’

Let’s take the employee dimension (a type 2 dimension) as an example.

Source

EMPLOYEE_ID | SRC_EFF_FROM_DT | SUPERVISOR_NAME | ROW #
123         | 01/01/1999      | Peter           | 1
123         | 01/02/2012      | Susan           | 2

When loading the data into the data warehouse in the initial full run, both rows (including historical record #1) are loaded. TYPE2_FLG is ‘Y’ in this case; the KM, which handles the loading behavior, uses this value to determine that the employee dimension is type 2 and to choose the corresponding storing method.

Because TYPE2_FLG=’Y’ in this case, the KM modifies EFFECTIVE_TO_DT and CURRENT_FLG for the two records as shown below.

EMPLOYEE_ID | EFFECTIVE_FROM_DT | EFFECTIVE_TO_DT | SUPERVISOR_NAME | CURRENT_FLG
123         | 01/02/2012        | Future          | Susan           | ‘Y’
123         | 01/01/1999        | 01/02/2012      | Peter           | ‘N’

Case 2 - TYPE2_FLG =’N’

This time, TYPE2_FLG is set to ‘N’ for the employee dimension (a type 2 dimension), which is incorrect. The KM treats it as type 1 rather than type 2.

Source

EMPLOYEE_ID | SRC_EFF_FROM_DT | SUPERVISOR_NAME | ROW #
123         | 01/01/1999      | Peter           | 1
123         | 01/02/2012      | Susan           | 2

When loading the data into the data warehouse, both rows are loaded because the history from the source is stored. However, because TYPE2_FLG is ‘N’, the KM does not modify EFFECTIVE_TO_DT and CURRENT_FLG accordingly, and this causes issues.

Employee Table in Data warehouse

EMPLOYEE_ID | EFFECTIVE_FROM_DT | EFFECTIVE_TO_DT | SUPERVISOR_NAME | CURRENT_FLG
123         | 01/02/2012        | Future          | Susan           | ‘Y’
123         | 01/01/1999        | Future          | Peter           | ‘Y’

As shown above, the two records have overlapping time ranges, and both have CURRENT_FLG set to ‘Y’. This can produce duplicates when resolving the employee from the facts. For example, the transaction date 02/04/2013 falls within the time range of both records, so both are picked up, causing duplicates in the facts.
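A query like the following (a sketch using the standard warehouse key columns) helps reveal dimension rows that overlap in this way:

-- Find integration IDs with more than one 'current' row
SELECT INTEGRATION_ID, DATASOURCE_NUM_ID, COUNT(*) AS CURRENT_ROWS
  FROM W_EMPLOYEE_D
 WHERE CURRENT_FLG = 'Y'
 GROUP BY INTEGRATION_ID, DATASOURCE_NUM_ID
HAVING COUNT(*) > 1;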

How to Debug TYPE2_FLG Issues

As discussed in the previous section, to avoid this kind of issue, set the value of TYPE2_FLG carefully, and keep it as ‘Y’ for out-of-the-box type 2 dimensions.

In addition, when you get duplicates in a fact, you can perform the following checks:

  • Check where the duplicates come from in the fact, and if they come from a dimension, identify the problematic dimension.
  • Check the data in that dimension for duplicates and see whether the loading behavior resembles case 2 in the previous section. A quick first check is whether multiple records have CURRENT_FLG=’Y’ (see the sample query above).
  • Check the value of TYPE2_FLG in the ODI repository:

1. Open the session log of the task

2. Open ‘Definition’

3. Expand ‘Variable and Sequence Values’

4. Find TYPE2_FLG and check the value

5. If the value is ‘N’ but the dimension is type 2, you may have hit the issue described in the previous section.

Here are also some tips for finding out the type of a dimension. You can find this information in the ODI repository.

  • For one dimension, such as the employee dimension, first find the dimension table name, for example W_EMPLOYEE_D
  • Go to ODI repository -> ‘Designer’ -> ‘Models’
  • Find the dimension table and open it by double-clicking it
  • Go to ‘Definition’ and check the OLAP type. An OLAP type of ‘Slowly Changing Dimension’ tells you that this dimension is type 2

  • You can also find out which attributes are type 2 by checking the column attribute

1. Expand the dimension table, for example, W_EMPLOYEE_D and then expand Columns

2. Open the attribute of a column by double clicking it

3. Go to ‘Description’ and check ‘Slowly Changing Dimension Behavior’

As shown above, the ‘Add Rows on Change’ option tells you that this attribute is type 2.

Conclusion

This blog helps you understand how TYPE2_FLG works in ETL and recognize the importance of this flag. It also gives you a way to debug possible TYPE2_FLG issues.

Wednesday Mar 05, 2014

BI apps 11.1.1.7.1 Cumulative Patch 1 is available now

BI Applications 11.1.1.7.1 Cumulative Patch 1 is available now.

Patch 17546336 - BIAPPS 11.1.1.7.1 ODI CUMULATIVE PATCH 1 (Patch) can be downloaded from My Oracle Support.

 "To download patch , navigate to My Oracle Support > Patches & Update. Search with the patch number (17546336) on the Patch Name or Number field. Follow the instructions in the Readme to apply the patch."

Tuesday Nov 26, 2013

Introduction and Tips of Oracle BI Apps Variable Refresh

Author: Chuan Shi

Introduction

The ETL logic in BI Apps uses parameters in packages, interfaces, load plans, and knowledge modules (KMs) to control ETL behavior. Parameters can be configured by customers, pre-seeded in the ETL code, or maintained internally:

  • Data Load parameters that can be configured by customers are maintained at product line (PLV) and fact/dimension group levels in the BI Apps Configuration Manager (BIACM).
  • Some parameters are pre-seeded in the ETL code OOTB, e.g. DATASOURCE_NUM_ID.
  • Other parameters are used internally by the load plan to control the execution of the load plan, e.g. EXECUTION_ID.

ETL parameters are handled using ODI variables. The purpose of this blog is to explain ODI variable refreshing, which ensures that the correct values of variables are used in Oracle BI Apps ETL.

ODI Variable Classification

To ensure correct values are used in ETL, variables are refreshed at run time. A variable can be refreshed in either a package or a load plan. Variables can be classified into four categories based on whether and where they are refreshed in ODI (and if a variable needs to be refreshed, there must be a refreshing logic specified for it, which is discussed later).


The four categories are:

  1. Not refreshed

Definition: Generally, internal parameters used by the load plan to control the ETL execution; therefore they are not refreshed.

Examples: DATASOURCE_NUM_ID, EXECUTION_ID

  2. Refreshed in the package

Definition: Variables that cannot feasibly be refreshed in load plans. (The refreshing logic of such a variable depends on ETL run facts: it sources from DW tables and uses the QUALIFY user-defined function (UDF). ODI fails to interpret QUALIFY at the load plan level in some cases.)

Examples: IS_INCREMENTAL, LAST_EXTRACT_DATE

  3. Hardcoded in the load plan

Definition: Variables whose values are hardcoded by overwriting in load plans. The variables will take the hardcoded values in ETL.

Examples: DOMAIN_CODE, LOOKUP_TYPE

  4. Refreshed in the load plan

Definition: Variables whose values are configured in BIACM by customers. (In other words, the values of these variables come from BIACM. The refreshing logic of such a variable uses a UDF to extract its value from BIACM.)

Examples: UPDATE_ALL_HISTORY, TYPE2_FLG

For a complete list of ODI variables, please refer to this post.

Refreshing Variables in a Package

Refreshing variables in a package is straightforward. One needs to create Refresh Variable steps for the variables to be refreshed. The screenshot below shows examples of refreshing IS_INCREMENTAL.

Hard-coding Variables in a Load Plan

Some variables, such as DOMAIN_CODE and LOOKUP_TYPE, are hardcoded in load plan components. To do that, go to the load plan component to which the variables are to be hardcoded, select the Overwrite checkbox, and provide the hardcoded values for the variables. The screenshot below shows examples of hardcoding DOMAIN_CODE and LOOKUP_TYPE.

Refreshing Variables from BIACM

BIACM is a central UI where customers can define the values of data load parameters (i.e., ODI variables), among many other features. ODI variables that are refreshed in load plans have their values extracted from BIACM; we also refer to such variables as BIACM variables.

BIACM Variables are classified into truly global variables, PLV specific variables, and fact/dimension group level variables.

  • A truly global variable (e.g., 13P_CALENDAR_ID) has the same value in all product lines (e.g., EBS11510, PSFT90, etc.) and for all fact/dimension groups. Truly global variables are refreshed centrally in the load plan system components.
  • A PLV-specific variable (e.g., LANGUAGE_BASE) takes the same value for all fact/dimension groups within a product line, but different values can be used in different product lines. These are refreshed individually in consuming load plan dev components.
  • A fact/dimension group level variable (e.g., UPDATE_ALL_HISTORY) is group specific. It can take different values in different fact/dimension groups within the same PLV and across different PLVs. These are refreshed individually in consuming load plan dev components.

From a variable value overriding perspective:

  • A truly global variable has a unique value. PLVs and fact/dimension groups cannot override its value.
  • A PLV variable has product line specific values (e.g., LANGUAGE_BASE takes the value of US in EBS product lines but ENG in PSFT product lines). The value is the same for all fact/dimension groups within that product line.
  • A fact/dimension group level variable has group specific values (e.g., TYPE2_FLG has the value of Yes in Position Dimension Hierarchy, while it has the value of No in Asset Dimension). Also, such a variable has a global default value. If a fact/dimension group does not specify the value of such a variable for its use, then the global default value will be used whenever this variable is called by that group (e.g., the global default value of TYPE2_FLG is No).

These variables are defined in ODI. To ensure that variable refreshing works correctly, there are some rules on the definitions of ODI variables:

  • Set the ‘Keep History’ option to ‘No History’;
  • Always provide a default value (the default value is picked up if refreshing from BIACM does not return a value for some reason; otherwise the ETL will fail). As a good practice, the ODI default value of the variable can be set the same as the global value of the variable in BIACM.

(Whenever the Keep History option or the Default Value of a variable is changed, the scenarios that use this variable need to be regenerated.)

Once ODI variables are defined, a refreshing logic is needed to refresh them from BIACM. In this regard,

  • The ODI UDF GET_CM_PARAM is used
  • To return the correct value for a variable, we need to specify the following in the refreshing logic:
  • variable name;
  • product line;
  • fact/dimension group.
  • Syntax: getcmparam($(param_code),$(DATASOURCE_NUM_ID))
  • $(param_code) is the name of the variable (e.g., TYPE2_FLG)
  • $(DATASOURCE_NUM_ID) is used to specify the product line.

For PLV/group level variables, we pass #DATASOURCE_NUM_ID as $(DATASOURCE_NUM_ID);

e.g., getcmparam('TYPE2_FLG','#DATASOURCE_NUM_ID')

For truly global variables, we pass #WH_DATASOURCE_NUM_ID as a pseudo-PLV ID.

e.g., getcmparam('13P_CALENDAR_ID','#WH_DATASOURCE_NUM_ID')

  • Do not pass the fact/dimension group directly into the syntax; it is determined by where the variable is refreshed.

BIACM variables are refreshed in load plans. To refresh a variable in a load plan, the following three steps are required (they have been done OOTB):

Step 1: Specify the correct logical schema and refreshing logic in the refreshing tab of the variable definition.

The logical schema has to be CM_BIAPPS11G.

The refreshing logic should be getcmparam() with appropriate inputs, e.g.,

getcmparam('13P_CALENDAR_ID','#WH_DATASOURCE_NUM_ID')

Step 2: Update the variable definition in the variables tab of the load plan.

Go to the load plan component where you want to refresh the variables. In the Variables tab, right click on the variables and select ‘Refresh Variable Definition’ so that the variable definition in the LP is synchronized with its real definition. Once this is done, verify that the logical schema is showing CM_BIAPPS11G, and the select statement is showing the embedded SQL in the getcmparam() function.

Step 3: Check the refreshing checkbox at the appropriate LP step.

For truly global variables, Step 3 becomes:


The logic behind getcmparam() guarantees that the appropriate value of the variable is returned from BIACM given the name of the variable, the DATASOURCE_NUM_ID passed in, and the LPC step where it is refreshed.

Values stored in BIACM are strings. Therefore, all ODI variables refreshed from BIACM come in as strings. Each piece of consuming code (where the variables are used) should convert the data type accordingly. For example, dates are returned as strings in the format yyyy-mm-dd hh:mi:ss; the TO_DATE_VAR UDF is used to convert the returned string to DATE format. Numeric values are returned as strings as well.
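As an illustration in plain Oracle SQL (this is not the TO_DATE_VAR UDF itself, and #MY_DATE_PARAM is a hypothetical variable name):

-- Converting a BIACM string value into a DATE; the variable name is a placeholder
SELECT TO_DATE('#MY_DATE_PARAM', 'YYYY-MM-DD HH24:MI:SS') FROM DUAL;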

Checklist when Things Go Wrong

What can go wrong?

  • The value of a variable used in ETL is not in line with expectation.
  • A variable refreshed has no value returned, and it fails ETL run.

Overriding Rule (1)

  • In a load plan, when a variable is refreshed in the parent step (e.g., the root step), its value will be inherited by all its child steps, unless this variable is refreshed/overwritten in a child step.

  • However, if a variable is refreshed and/or overwritten in a child step, the value refreshed from this step will override the value refreshed from the parent step. Other child steps of the same level will NOT be affected. They will still inherit the value refreshed in the parent step.

Overriding Rule (2) (unlikely to happen but it exists)

If a variable is refreshed both in a package and in a load plan, then the value refreshed from the package will override the value refreshed from the load plan.

When the value of a variable returned from BIACM is not in line with expectation:

  • Confirm where the variable is refreshed, e.g., BIACM? ETL tables in DW? etc.
  • For BIACM PLV or group level variables:
  • Check its value(s) in BIACM UI. For PLV variables, check its value in each product line; for group level variables, check its group specific values as well as global default value.
  • Check if the variable is refreshed in a root step of a load plan (refresh checkbox checked). At the same time, check whether the root step is named after a fact/dimension group.
  • Check if the variable is incorrectly refreshed or hardcoded in a child step belonging to the root step (avoid overriding rule 1).
  • Check the ODI default value of this variable. If BIACM returns (null), i.e., nothing, for this variable, its ODI default value will be used. Also, if we check the overwrite box (but not the refresh checkbox) of a variable in a load plan step, but forget to provide the value, then the ODI default value will be used.
  • Note: overriding rule (2) is unlikely to happen to BIACM variables.
  • For variables refreshed from ETL tables in the DW, an incorrect value likely indicates a run issue; check the run of that specific task.

When a variable has no value:

  • Confirm where the variable should be refreshed, e.g., BIACM? ETL tables in DW? etc.
  • In rare cases, a variable may not have a value returned when it is refreshed, and this leads to ETL failures.
  • ODI behaves like this: it first refreshes the variable from its designated source (e.g., BIACM). If its source returns (null), i.e., nothing, for this variable, the ODI default value of this variable will be used in ETL. However, if the ODI default value is not provided, then this variable will not have a value.

Tuesday Nov 19, 2013

Tips and Usage of Group Account Number Configuration

Author: Sridhar Kasam

Introduction

The general concept discussed in this blog applies to all OBIA releases. The specific Configuration Manager screenshots are only applicable starting with 11.1.1.7.0.

When you implement Financial Analytics, one of the most important steps is to set up Group Account Numbers. Many issues are reported due to incorrect setup of Group Account Numbers.

- What is Group Account Number?

o Group Account Number logically groups GL natural accounts into reportable group(s), so users can view financial reports at a higher level than that of a GL natural account.


Example: Assign natural account range 1210 – 1240 to Group Account Number “AR” (Accounts Receivable).

CHART OF ACCOUNTS ID | FROM ACCT | TO ACCT | GROUP_ACCT_NUM
101                  | 1110      | 1110    | CASH
101                  | 1210      | 1240    | AR
101                  | 1280      | 1280    | OTHER CA
101                  | 1340      | 1340    | PPAID EXP

o Group Account Number is used to establish the relationship between GL natural accounts and Financial Statement Item Code. Financial Statement Item Code is mapped to base fact tables for the GL reconciliation process.

Example: Group Account Number, Financial Statement Item Code & Base Fact Table

GROUP_ACCT_NUM | FIN_STMT_ITEM_CODE | BASE FACT TABLE
ACC DEPCN      | OTHERS             | W_GL_OTHER_F
ACC LIAB       | OTHERS             | W_GL_OTHER_F
AP             | AP                 | W_AP_XACT_F
AR             | AR                 | W_AR_XACT_F
CASH           | OTHERS             | W_GL_OTHER_F
CMMN STOCK     | OTHERS             | W_GL_OTHER_F
COGS           | COGS               | W_GL_COGS_F
REVENUE        | REVENUE            | W_GL_REVN_F

- How to configure Group Account Number(s) and Financial Statement Item Code(s)?

o Group Account Numbers are defined / configured through a configurable CSV file. In the case of Oracle, the file name is file_group_acct_codes_ora.csv; for PeopleSoft, it is file_group_acct_codes_psft.csv; and for JDE, it is file_group_acct_codes_jde.csv. Users specify which GL natural accounts are assigned to a particular group account within a chart of accounts, GL Business Unit, or company for Oracle, PeopleSoft, and JDE sources respectively. Please ensure that the account ranges are continuous without any overlaps. If necessary, you can have multiple ranges and/or accounts assigned to the same group account. Additionally, please ensure that the accounts being provided are all leaf accounts / ranges and not parent nodes / ranges. A sample of the Oracle file is shown below.
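For illustration, a few rows of file_group_acct_codes_ora.csv based on the example ranges above might look like the following (a sketch only; check the header row of the delivered file, since the exact column names may differ):

CHART OF ACCOUNTS ID,FROM ACCT,TO ACCT,GROUP_ACCT_NUM
101,1110,1110,CASH
101,1210,1240,AR
101,1280,1280,OTHER CA
101,1340,1340,PPAID EXP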

o By mapping GL accounts to group account numbers and then associating group accounts to a financial statement item code, an indirect association is built between GL accounts and financial statement item codes as well. In BI Apps 11.1.1.7.1 or later, the association of group account numbers to financial statement item codes is performed in Oracle BI Applications Configuration Manager. In earlier releases such as 7.9.6.3, this configuration was done using the configurable file file_grpact_fstmt.csv.

Note: Customers should not modify the mapping relationship between Group Account Number and Financial Statement Item Code for Group Account Numbers AP, AR, COGS and REVENUE.

o It is not always necessary to assign all out-of-the-box group account numbers to account ranges. If customers are not planning to use the logical GL metrics that use Group Account Number as part of their metric definitions, they do not need to configure the majority of the Group Account Numbers. However, if they are still planning to implement the Payables, Receivables, Revenue, or COGS facts, they still need to configure Group Account Numbers AP, AR, REVENUE, or COGS, because these Group Account Numbers are also used in the GL reconciliation process against these facts.

o If for some reason the out-of-the-box group account numbers are not sufficient, the high-level steps to add additional group accounts are (11g):

§ Define new Group Account Numbers in Configuration Manager

§ Assign the newly created Group Account Number to a financial statement code (e.g. AP, AR, REVENUE, COGS, OTHERS).

§ Assign GL account to Group Accounts in source specific csv file.

§ Model the RPD to include a new group account measure and expose in presentation layer.

- Why is it important?

Group Account Number configuration is important because it ensures that the right GL accounts are assigned to the group account number(s). In other words, it determines the accuracy of most of the reports from Financial Analytics in which Group Account Number is part of the report. Group Account Numbers, in combination with Financial Statement Item Codes, are also leveraged in the GL reconciliation process to ensure that sub ledger data reconciles with GL journal entries.

Group Account Number Usage in Financial Analytics

- GL reports

o For “Accounts Receivable” in balance sheet reports, we calculate the total amount for GL natural accounts 1210 to 1240 for ledgers with chart of accounts ID 101.

o Because of this association and subsequent configuration to expose the group account number metric in RPD, users can now report on Group Account Number metric. For example in a Balance Sheet Report, users can now report on “AR Amount” from “Fact – Fins – GL Balance” (Logical Fact) corresponding to “AR” group account.

- Reconciliation

o Because of the indirect relationship that is built between GL accounts and financial statement item codes, it is now possible to perform GL reconciliation to ensure that the sub ledger data reconciles with GL journal entries. For example, it is possible that after an invoice has been transferred to GL, the user decides to adjust the invoice in GL. In that case, it is important that the adjustment amount trickles down to the sub ledger base fact along with the balance fact. So, to determine such sub ledger transactions in GL, the reconciliation process uses Financial Statement Item Codes.


Common Issues due to Incorrect Group Account Number Setup

1. Sub ledger transactions are not posted.

Group account numbers are associated with financial statement item codes. The GL reconciliation process uses the financial item codes to identify sub ledger fact tables to be reconciled with GL journals. If the group account number assignment to GL account is incorrect, sub ledger facts remain “unposted”.

Because sub ledger balance facts are based on posted transactions, reports such as AP balances or AR balances may return incorrect results.

For example: GL account “1210” is supposed to be associated with the “AR” (Accounts Receivable) group account but was mistakenly associated with “AP” (Accounts Payable). In that case, as part of the ETL process, all the GL journal lines for account 1210 are reconciled against sub ledger accounting records in the AP fact. But in reality these GL journal lines came from AR and not AP, so because the corresponding entries cannot be found in “AP”, they remain “unposted”.

2. Unnecessary Manual Records are created in Sub ledger facts.

When a group account number is incorrectly assigned to an account, unnecessary records with transaction type “Manual” are created in sub ledger facts (e.g. W_AP_XACT_F, W_GL_REVN_F etc). Those records do not have sub ledger transaction details (e.g. supplier, customer, transaction number, and so on).

Following the same example above, because the corresponding entries cannot be found in “AP”, the ETL process inserts “Manual” records into the AP fact, treating these entries as manual journal entries created directly in the GL system.

3. GL reports return incorrect results.

When group account number assignments are not correct, GL metrics such as “AP” and “AR” from the above example will have incorrect amounts. Therefore, whenever you see GL metrics showing incorrect amounts, the first thing to check is whether the group account number used by that metric has the correct GL account ranges assigned to it.

Extracting PSFT Chartfield Hierarchy Data in OBIA 11.1.1.7.1

Author: Akshay Satyendranath 

BI Apps 11.1.1.7.1 supports 26 chartfields, 3 of which are considered mandatory chartfields and are mapped to specific dimensions as shown below.

Department – Cost Center Dimension

Fund – Balancing Segment Dimension

Account – Natural Account Dimension

All the remaining chartfields are mapped to one specific dimension, the GL Segment dimension. The physical table is shared for all chartfields, but there are multiple logical dimensions in the RPD that you can use. You will need to apply the appropriate filters to these logical dimensions to restrict the data to the relevant chartfields.

If you want to extract the trees defined for these chartfields into the DW, you will need to configure some parameters as explained below. For each chartfield, there are two sets of parameters that you can configure.

  1. List of Tree Structures
  2. List of Trees within the Tree Structures configured in 1.

The first parameter is mandatory; you will need to configure it by giving a comma-separated list of all the Tree Structures you need to extract for that chartfield. The second parameter is optional; if you don’t configure it, all the Trees pertaining to the Tree Structures configured in 1 will be extracted. If you need only specific trees to be extracted within those tree structures, configure this parameter by giving a comma-separated list of SETID + Tree Name.

The actual parameter names for each dimension are given in the table below

Dimension         | Parameter Name
Cost Center       | TREE_STRUCT_ID_LIST, TREE_SETID_NAME_LIST
Balancing Segment | TREE_STRUCT_ID_LIST, TREE_SETID_NAME_LIST
Natural Account   | TREE_STRUCT_ID_LIST, TREE_SETID_NAME_LIST
GL Segment        | TREE_STRUCT_ID_LIST_<CF>, TREE_SETID_NAME_LIST_<CF>


Since the GL Segment dimension supports multiple chartfields, there is one set of parameters for each chartfield. You will need to configure as many parameters as the number of chartfields you need in BI.

For example, if you have configured the Program chartfield, you will need to configure TREE_STRUCT_ID_LIST_PROGRAM and TREE_SETID_NAME_LIST_PROGRAM. Screenshots for one such set of parameters, along with sample values, are given below.

The following are the FSM tasks you need to configure for setting these parameters:

1) Configure Data Load Parameters for PeopleSoft Trees

2) Configure Trees to be extracted for GL Account Chartfields for PeopleSoft

The following section gives you some sample SQL statements that you can use as a reference to get the values of these parameters for each chartfield. Please note that these are just references, and you will have to validate the parameter values before you configure them.

1) For Tree Structures

SELECT TREE_STRCT_ID FROM PSTREESTRCT WHERE DTL_RECNAME=<Detail Table of your CF>;

E.g. for the Department chartfield which maps to the Cost Center Dimension you can use

SELECT TREE_STRCT_ID FROM PSTREESTRCT WHERE DTL_RECNAME='DEPT_TBL';

2) For Trees within Tree Structures

SELECT A.SETID||'~'||A.TREE_NAME FROM PSTREEDEFN A, PSTREESTRCT B WHERE A.TREE_STRCT_ID=B.TREE_STRCT_ID AND B.DTL_RECNAME=<Detail Table of your CF>;

E.g. for the Department chartfield which maps to the Cost Center Dimension you can use

SELECT A.SETID||'~'||A.TREE_NAME FROM PSTREEDEFN A, PSTREESTRCT B WHERE A.TREE_STRCT_ID=B.TREE_STRCT_ID AND B.DTL_RECNAME='DEPT_TBL';

Note:

1) This feature allows you to extract multiple effective dated trees as well. By default the out of the box code extracts only the current version of any tree.

2) If you need to extract multiple effective dated versions, then you will need to set the value of the parameter TREE_CURRENT_VERSION_ONLY to N.

3) There is a patch which you need to apply when you extract multiple versions or else the code creates duplicates for each version. Please check with the support team to get the relevant patch.


Wednesday Nov 13, 2013

How to handle BIAPPS ETL failures due to duplicate data issues in Oracle DW

Author: Amit Kothari

Sometimes there are duplicate data rows in the source tables. Because of this, while running an ODI Load Plan (LP) in BI Applications 11.1.1.7.1, the ODI sessions may error out while creating the unique indexes. It is always recommended that the source data issues be fixed first, but that may not always be feasible, or you may just want to fix the data in the warehouse and continue with the LP run. This blog shows you one way to do that: first we find the table and the indexed columns for the failed index, and then we substitute them into a delete SQL statement to remove the duplicate rows.

Sample Error:

ODI-1217: Session EXEC_TABLE_MAINT_PROC (12383500) fails with return code 20000.
ODI-1226: Step TABLE_MAINT_PROC fails after 1 attempt(s).
ODI-1232: Procedure TABLE_MAINT_PROC execution fails.
ODI-1228: Task TABLE_MAINT_PROC (Procedure) fails on the target ORACLE connection BIAPPS_DW.
Caused By: java.sql.SQLException: ORA-20000: Error creating Index/Constraint : W_EMPLOYEE_D_U1=> ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found
ORA-06512: at line 152

Solution:

From SQL Developer, run this SQL in your ODI repository schema using the failed index name from the error above; it will give you the indexed columns for the failed index. For example:

SELECT st.table_name, sc.col_name
FROM snp_table st, snp_key sk, snp_key_col skc, snp_col sc
WHERE sk.key_name = 'W_EMPLOYEE_D_U1'
  AND sk.i_table = st.i_table
  AND skc.i_key = sk.i_key
  AND sc.i_col = skc.i_col
  AND st.i_table = sc.i_table;


When we run the above SQL, we get the table name W_EMPLOYEE_D and the columns EFFECTIVE_FROM_DT, DATASOURCE_NUM_ID, and INTEGRATION_ID.

These are the columns that have duplicate rows, resulting in the failure of the unique index creation. Now we can proceed to delete the duplicate rows via a delete SQL statement; make sure you back up the table data before issuing the delete statement.
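Before deleting anything, you may want to review the offending rows; a quick check (a sketch based on the same key columns) is:

SELECT INTEGRATION_ID, DATASOURCE_NUM_ID, EFFECTIVE_FROM_DT, COUNT(*) AS DUP_COUNT
  FROM W_EMPLOYEE_D
 GROUP BY INTEGRATION_ID, DATASOURCE_NUM_ID, EFFECTIVE_FROM_DT
HAVING COUNT(*) > 1;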

From SQL Developer, run this SQL in your Oracle target schema and then simply restart the ODI load plan. A sample delete SQL based on the above index is as follows; just substitute the columns and the table name for your use case.

DELETE FROM W_EMPLOYEE_D A
WHERE ROWID > (SELECT MIN(ROWID)
                 FROM W_EMPLOYEE_D B
                WHERE A.INTEGRATION_ID = B.INTEGRATION_ID
                  AND A.DATASOURCE_NUM_ID = B.DATASOURCE_NUM_ID
                  AND A.EFFECTIVE_FROM_DT = B.EFFECTIVE_FROM_DT);
COMMIT;

Thursday Sep 12, 2013

Registering BI Applications Data Sources

This blog post explains how to set up data sources from Configuration Manager and gives additional background about where this source data is stored and how to troubleshoot connection issues.[Read More]

Cleaning up ODI I$ and C$ work tables

ODI uses I$ and C$ work tables for E-LT processing.  If there are failures, sometimes these work tables can be left over.  This post explains how BI Apps provides a utility to help clean these up.[Read More]

Tuesday May 14, 2013

Load Plan Generator – An Inside Look

Authored By : Don Co Seng

Oracle BI Applications (BIAPPS) 11.1.1.7.1 was recently released and is now available for download. One of the biggest features in this release is the introduction of Oracle Data Integrator (ODI) as the embedded ETL tool. With the introduction of ODI, a new method of orchestrating the task of loading your data warehouse is now required. This requirement brings about the introduction of Load Plan Generator (LPG).

LPG is a utility for generating ODI load plans for a desired subset of fact tables to be populated into the BIAPPS data warehouse from one or more source systems. An ODI load plan is an executable object in ODI that allows you to organize tasks in a pre-defined order based on the fact tables being loaded. LPG is invoked from Configuration Manager (CM) and makes use of metadata stored in CM and the ODI repository; no separate repository is required for LPG. This results in significantly lower metadata development and maintenance costs, as LPG uses the same metadata as the ETL tool repository.

Key Concepts

Before we get to the inner workings of LPG, let’s take a quick look at a few key concepts that drive the LPG behavior.

BI Apps taxonomy

At the top of the BIAPPS taxonomy are the different offerings available, such as Financial Analytics, HR, CRM, etc. Under the offerings are the functional areas, such as accounts payable in finance or payroll in HR. Below the functional areas are the different fact groups, like “AP Transactions and Balance” or “Payroll Balance”. Each fact group has dimension groups associated with it. A dimension group can be specific to a particular fact group or shared across different fact groups. The BI Apps taxonomy drives both load plan generation and the setup flows in Functional Setup Manager.

Offering (contains) -> Functional Areas (contains) -> Fact Groups (associated to) -> Dimension Groups

Phase

The BIAPPS load phases are similar to those of previous BIAPPS releases. There are 3 main phases: Source Data Extract (SDE), Source Independent Load (SIL), and Post Load Process (PLP). The SDE phase consists of tasks extracting data from your different source systems; this phase loads all your staging tables and requires source system downtime. The SIL phase loads data from your staging tables into their respective dimension or fact tables. The PLP phase loads data into aggregate tables or other facts requiring additional data processing. The SIL and PLP phases require data warehouse downtime to complete the load process.

With BI Apps 11.1.1.7.1, due to the clear separation of the SDE and SIL/PLP phases, it is possible to limit source system and warehouse downtime to the corresponding phases.

Design Time vs. Runtime dependencies

The LPG algorithm deals with the design time dependencies and run time dependencies separately. Any intra-entity specific design time dependencies are seeded in the repository. For example if a dimension has multiple steps to load the target table in SIL phase, these steps are seeded once in the ODI repository as they are known at design time and never change. The run time dependencies i.e. association of dimension to fact, or association of a dimension or fact to corresponding staging tables on the basis of sources are calculated by the LPG engine.

This approach reduces the number of iterations and the metadata required to get task ordering correct at the entire graph level, as the design time dependencies are always consistently honored. LPG behavior is deterministic and does not require multiple iterations to get your entire graph to behave correctly.

Load Plan Generator

Now, let’s take a closer look at how LPG works. At the core of LPG are load plan components. The load plan components capture the design time dependencies. There is a load plan component per dimension or fact per phase in general. Load plan components are used as building blocks by LPG to generate a complete load plan. Load plan components are further classified into two categories:

Development components are defined at the grain of fact groups or dimension groups as described earlier. Each component contains one or more references to actual ODI scenarios. Each development component contains all the steps required for loading a particular dimension or fact group, and the order of scenarios is pre-defined during development. Most development components are for a specific dimension or fact group; however, there are also those defined for supporting tables like persistent staging or general tables.

System components are defined to capture the phases and ordering of phases. The system components are mostly static, since BIAPPS load phases rarely change. For special cases, the dependencies across dimension groups and fact groups are defined here. Also, inclusion of support tables such as persistent staging table is controlled here. System components contain references to development components.

Generating a Load Plan

As mentioned earlier, LPG is invoked from CM. Generating a load plan is a two step process. First, a user creates a load plan definition using CM. A load plan definition is mainly a list of one or more fact groups per source system which the user desires to load. The user then invokes LPG to generate a load plan based on this load plan definition. When complete, the generated load plan is stored back in a load plan folder in ODI repository.

Load Plan generation logic

So, how does LPG figure out which dimension or fact group component to include in the load plan to be generated? LPG generates a load plan utilizing the following information:

1. Fact tables belonging to selected fact groups

o LPG determines the fact tables and fact group association via an ODI flex field at the fact table level in the ODI data store.

2. Dimensions dependencies to Facts

o The main source of dependency information between dimensions and facts is the foreign key constraints between them. Dimension-to-dimension dependency (snowflake) is also supported up to the 2nd level.

3. Staging tables related to Facts and Dimensions

o Staging and warehouse tables are resolved via BIAPPS data model naming standard. For example XXX_DS is the staging table for XXX_D dimension table.

4. Related PS or TMP tables used as source in scenarios

o PS and TMP dependencies are resolved based on which are used as source or lookup table in scenarios required for the dimension or fact group.

5. Keywords in load plan steps for domains and class dimensions

o An additional resolution is through the use of keywords in load plan steps, which are used mainly for resolving steps within class dimensions or domains. These are generic tables used across facts / dimensions, and the keywords help associate a subset of tasks for these tables with specific facts or dimensions.

Table Maintenance

For table maintenance, additional steps are added in the generated load plan to perform truncate, index management, and table analysis based on pre-defined settings and table types. We will cover this in more detail in a follow up post.

Multisource Support

For multi-source support, the user can create a load plan definition containing one or more source systems for a particular fact group. There are 3 sequencing options in this release for multi-source support:

1. Extract-Load, Extract-Load, Extract-Load (EL, EL, EL) – LPG generates separate load plans containing all phases for each source system. The load plans are executed serially, one after another.

2. Extract, Extract, Extract, Load (E, E, E, L) – LPG generates multiple SDE-only load plans, one for each source, and a single SIL/PLP load plan. Each SDE-only load plan sources from a particular adaptor. The SIL/PLP load plan should be executed after all of the SDE load plans. The load plans are executed serially, one after another.

3. Extract-Extract-Extract-Load (E-E-E-L) – LPG generates a single load plan containing multiple SDE phases and one SIL phase. This load plan simultaneously extracts data from the different source systems. After the SDE phases complete, the SIL and then the PLP phase run.

For more details and information on LPG, please refer to the documentation available on OTN.

Thursday May 09, 2013

Oracle BI apps 11.1.1.7.1 (BI apps on ODI)

From the announcement today


Oracle Business Intelligence (BI) Applications 11.1.1.7.1 (also known as 11g PS1) is now available on the Oracle Software Delivery Cloud (eDelivery) and on the Oracle Technology Network (OTN). This is the first major release on the 11g code line leveraging the power of ODI, and certified with the latest version of Oracle BI Foundation 11.1.1.7.

What’s New

This major new release of Oracle Business Intelligence Applications 11g adds:

1. New in-memory analytic applications

2. Significantly expanded functional content across existing CRM and ERP Oracle BI Applications

3. Re-design to leverage the power of Oracle Data Integrator and, optionally, GoldenGate

4. New out-of-the-box utilities to increase productivity and further drive down Total Cost of Ownership (TCO).

New Oracle BI Applications:

· Oracle Student Information Analytics enables academic institutions to improve recruitment, lower dropout rate by tracking student attendance in real-time, monitor graduation rate, manage student financial transactions, and match up school resources with student needs more effectively.

· Oracle Indirect Spend Planning – a prebuilt planning application that teams with Oracle Procurement and Spend Analytics to enable businesses to optimize spend through what-if modeling

New Content Enhancements: Significant expansion of content improves existing front and back-office Oracle BI Applications including:

· Oracle Human Resources Analytics expands analysis on HR programs and workforce performance with new

Time and Labor Analysis, global Payroll Analysis and Headcount Gain and Loss.

· Oracle Financial Analytics improves company Financial Performance with new Fixed Assets Analysis, providing complete lifecycle of assets from acquisitions, capitalization to retirements. Budgetary Control Analysis, ideal for public sector, delivers insight into expense & revenue budgets including analysis of budgeted amounts, encumbrances, expenditures, funds available, overspending, and recognized revenues.

· Oracle Project Analytics expands insight into project-based data with new Resource Management Analysis, providing visibility into project requirements, trending, utilization, competencies and alignment. Reconciliation of project sub-ledger and General Ledger assist project accountants in reconciling differences between the two ledgers for cost and revenue transactions. Perspective based dashboards provide an optimized Project Analytics user interface that orients the content around specific user perspectives.

· Oracle Supply Chain & Order Management Analytics introduces new Enhanced Inventory Analysis, providing deep insight into Inventory Cycle Count, Inventory Aging, Inventory Expiration/Obsolescence tracking and Inventory balances and transaction details at lot level. Item Cost History Analysis allows insight into historical cost of Supply Chain related items.

· Oracle Procurement & Spend Analytics introduces new Sourcing Analysis, providing a new comprehensive Sourcing Business Process that includes negotiations, awards, savings and performance. Award decisions will help identify trends and check if there is a margin for any savings opportunity.

· Oracle CRM Analytics expands insight into front office analytics with Indirect Forecasting and Service Contracts support. Forecasting Analysis is now possible when submissions are made up a forecast hierarchy that is outside of the reporting hierarchy. Oracle Price Analytics has added E-Business Suite to the list of supported sources. Customers now have a ready means to assess pricing performance and arrest revenue and margin leakage. Robust price waterfall analysis is possible with the new Waterfall view.

Using New Oracle BI Foundation Suite 11.1.1.7 Features:

· Visualization Enhancements, and User Interaction Improvements

· BI Mobile HD Enhancements: BI Applications available on mobile without additional development

New Oracle Data Integration: BI Apps are completely re-architected to leverage Oracle Data Integrator and Oracle GoldenGate. 

· BI Applications is now available with next-generation data integration. Oracle Data Integrator (ODI) delivers unique next-generation Extract, Load and Transform (E-LT) technology that improves performance and reduces data integration costs, even across heterogeneous systems, while improving productivity.

· Functional Setup Manager provides a Web based Administrative User Interface to track and manage implementation projects and their required functional setup steps

· Configuration Manager provides a centralized Web based administrative user interface to simplify the setup and maintenance of Oracle Business Intelligence Applications.

· A new optional pre-packaged solution, that leverages GoldenGate, eliminates the need for an ETL batch window and minimizes the impact on OLTP systems, while providing near real-time reporting and enabling more reliable change data capture and delete support.

· Customers may continue alternatively to use Oracle BI Apps 7.9.6.x and Informatica ETL.

Thursday Sep 13, 2012

BI Applications overview

Welcome to the Oracle BI Applications blog! This blog will talk about various features, general roadmap, description of functionality, and implementation steps related to Oracle BI Applications. In the first post we start with an overview of the BI Apps and will delve deeper into some of the topics below in the upcoming weeks and months. If there are other topics you would like us to talk about, please feel free to provide feedback on that.

The Oracle BI Applications are a set of pre-built applications that enable pervasive BI by providing role-based insight for each functional area, including sales, service, marketing, contact center, finance, supplier/supply chain, HR/workforce, and executive management. For example, Sales Analytics includes role-based applications for sales executives, sales management, and front-line sales reps, each of whom has different needs.

The applications integrate and transform data from a range of enterprise sources—including Siebel, Oracle, PeopleSoft, SAP, and others—into actionable intelligence for each business function and user role.

This post starts with the key benefits and characteristics of Oracle BI Applications. In a series of subsequent posts, each of these points will be explained in detail.

Why BI apps?

  • Demonstrate the value of BI to business users by showing reports, dashboards, and models that can answer their business questions as part of the sales cycle
  • Demonstrate the technical feasibility of a BI project, significantly lowering risk and improving the chance of success
  • Build vs. buy benefit
  • No need to start with a blank sheet of paper
  • Help consolidate disparate systems
  • Data integration in M&A situations
  • Insulate BI consumers from changes in the OLTP system
  • Surface issues of poor or missing OLTP data, and improve data quality and accuracy

Prebuilt Integrations

BI apps support prebuilt integrations against leading ERP sources: Fusion Applications, E-Business Suite, PeopleSoft, JD Edwards, Siebel, and SAP.

  • Co-developed with inputs from functional experts in BI and Applications teams.
  • Out of the box dimensional model to source model mappings
  • Multi source and Multi Instance support

Rich Data Model

BI apps have a very rich dimensional data model, built over 10 years, that incorporates best practices from a BI modeling perspective and reflects source system complexities.

  • Conformed dimensional model across all business subject areas allows cross-functional reporting, e.g. a customer/supplier 360 view (a minimal sketch follows this list)
  • Over 360 fact tables across 7 product areas
  • CRM – 145, SCM – 47, Financials – 28, Procurement – 20, HCM – 27, Projects – 18, Campus Solutions – 21, PLM – 56
  • Supported by 300 physical dimensions
  • Support for extensive calendars: Gregorian, enterprise, and ledger-based
  • Conformed data model and metrics for real time vs warehouse based reporting
  •  Multi-tenant enabled
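
To make the conformed dimension idea concrete, here is a minimal, illustrative sketch in Python: a single customer dimension joined to facts from two different subject areas for a "customer 360" style result. The table and column names (customer_wid, order_amount, open_requests) are hypothetical simplifications, not the actual warehouse schema.

```python
import pandas as pd

# Hypothetical, simplified stand-ins for a conformed customer dimension
# and two fact tables from different subject areas.
customer_dim = pd.DataFrame({
    "customer_wid": [1, 2],
    "customer_name": ["Acme Corp", "Globex Inc"],
})
sales_fact = pd.DataFrame({
    "customer_wid": [1, 1, 2],
    "order_amount": [1000.0, 500.0, 2500.0],
})
service_fact = pd.DataFrame({
    "customer_wid": [1, 2, 2],
    "open_requests": [2, 1, 3],
})

# Because both facts reference the same conformed dimension key,
# cross-functional reporting reduces to aggregate-then-join.
customer_360 = (
    customer_dim
    .merge(sales_fact.groupby("customer_wid", as_index=False)["order_amount"].sum(),
           on="customer_wid", how="left")
    .merge(service_fact.groupby("customer_wid", as_index=False)["open_requests"].sum(),
           on="customer_wid", how="left")
)
print(customer_360)
```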

Extensive BI related transformations

BI apps ETL and data integration support the various transformations required for dimensional models and reporting requirements. These have been distilled into common patterns and abstracted logic that can be readily reused across different modules.

  • Slowly Changing Dimension support (a minimal sketch follows this list)
  • Hierarchy flattening support
  • Row / Column Hybrid Hierarchy Flattening
  • As Is vs. As Was hierarchy support
  • Currency conversion: support for 3 corporate, CRM, ledger, and transaction currencies
  • UOM conversion
  • Internationalization / Localization
  • Dynamic Data translations
  • Code standardization (Domains)
  • Historical Snapshots
  • Cycle and process lifecycle computations
  • Balance Facts
  • Equalization of GL accounting chartfields/segments
  • Standardized values for categorizing GL accounts
  • Reconciliation between GL and subledgers to track accounted/transferred/posted transactions to GL
  • Materialization of data only available through costly and complex APIs e.g. Fusion Payroll, EBS / Fusion Accruals
    • Complex event interpretation of source data, e.g.:
    • What constitutes a transfer
    • Deriving supervisors via the position hierarchy
    • Deriving the primary assignment in PSFT
    • Categorizing and transposing payroll balances into specific metrics to support side-by-side comparison of measures such as fixed salary, variable salary, tax, bonus, and overtime payments
    • Counting of events, e.g. converting events into fact counters so that the number of hires can easily be summed and compared alongside total transfers and terminations
    • Multi-pass processing of multiple sources (e.g. headcount, salary, promotion, performance) to allow side-by-side comparison
    • Adding value to data through banding, additional domain classifications, and groupings to allow higher-level analytical reporting and data discovery
    • Calculation of complex measures, for example:
    • COGS, DSO, DPO, inventory turns, etc.
    • Transfers within a hierarchy, or into/out of a hierarchy, relative to a viewpoint in the hierarchy
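
To make the slowly changing dimension item above concrete, here is a minimal sketch, in plain Python, of the Type 2 pattern: when a tracked attribute changes, the current row is end-dated and a new current row is inserted. The row layout (integration_id, effective_from/effective_to, current_flag) and the rename example are illustrative assumptions, not the actual warehouse columns or knowledge module logic.

```python
from datetime import date

# Simplified in-memory "dimension table": one dict per row.
# Column names are illustrative stand-ins, not the real warehouse columns.
party_dim = [
    {"integration_id": "ORG-100", "org_name": "Acme Corp",
     "effective_from": date(2014, 1, 1), "effective_to": None, "current_flag": "Y"},
]

def apply_scd2(dim_rows, integration_id, new_name, load_date):
    """Type 2 change: end-date the current row and insert a new current row."""
    for row in dim_rows:
        if row["integration_id"] == integration_id and row["current_flag"] == "Y":
            if row["org_name"] == new_name:
                return  # no change detected, nothing to do
            row["effective_to"] = load_date
            row["current_flag"] = "N"
            break
    dim_rows.append({"integration_id": integration_id, "org_name": new_name,
                     "effective_from": load_date, "effective_to": None,
                     "current_flag": "Y"})

# Example: a rename produces a second, current version of the organization.
apply_scd2(party_dim, "ORG-100", "Acme Corporation", date(2015, 2, 5))
for row in party_dim:
    print(row)
```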

Configurability and Extensibility support 

BI apps support extensibility for various entities, either as automated extensibility or as part of the extension methodology.

  • Key Flexfield and Descriptive Flexfield support
  • Extensible attribute support (JDE)
  • Conformed Domains

ETL Architecture

BI apps offer a modular adapter architecture that allows multiple product lines to be loaded into a single conformed model.

  • Multi Source
  • Multi Technology
  • Orchestration: generates a load plan that accounts for task dependencies and the customer's deployment, sequencing many complex ETL tasks into a single plan (a minimal sketch follows this list)
  • Plan optimization allowing parallel ETL tasks
  • Oracle: bitmap indexes and partition management
  • High availability support
  • Follow the sun support
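
As a rough illustration of what load plan orchestration involves, the sketch below orders a handful of hypothetical ETL tasks by their dependencies and prints which tasks could run in parallel at each step. The task names and dependency graph are invented for illustration; the real load plan generator is far more involved.

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical ETL tasks mapped to their prerequisite tasks.
tasks = {
    "extract_orders": set(),
    "extract_customers": set(),
    "load_customer_dim": {"extract_customers"},
    "load_order_fact": {"extract_orders", "load_customer_dim"},
    "build_aggregates": {"load_order_fact"},
}

sorter = TopologicalSorter(tasks)
sorter.prepare()
while sorter.is_active():
    ready = list(sorter.get_ready())   # tasks with no unmet dependencies
    print("run in parallel:", ready)   # a real orchestrator would dispatch these
    sorter.done(*ready)
```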

TCO

BI apps provide several utilities and capabilities that help lower the overall total cost of ownership and ensure a rapid implementation.

  • Improved cost of ownership – lower cost to deploy
  • On-going support for new versions of the source application
  • Task-based setup flows
  • Data Lineage
  • Functional setup performed in the Web UI by functional users
  • Configuration
  • Test to Production support

Security

BI apps support both data and object security, enabling implementations to quickly configure the application to meet reporting security needs.

  • Fine-grained object security at the report, dashboard, and presentation catalog level
  • Data Security integration with source systems
  • Extensible to support external data security rules

Extensive Set of KPIs

  • Over 7000 base and derived metrics across all modules
  • Time series calculations (YoY, % growth, etc.), illustrated in the sketch after this list
  • Common Currency and UOM reporting
  • Cross subject area KPIs (analyzing HR vs GL data, drill from GL to AP/AR, etc)
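
To spell out what a simple time series KPI such as year-over-year growth involves, here is a minimal sketch; the revenue figures are invented, and a real dashboard would compute this from the warehouse time dimension rather than a hard-coded dictionary.

```python
# Hypothetical annual revenue figures, keyed by year.
revenue_by_year = {2012: 80.0, 2013: 100.0, 2014: 125.0}

def yoy_growth_pct(metric_by_year, year):
    """Year-over-year % growth: (current - prior) / prior * 100."""
    prior = metric_by_year.get(year - 1)
    current = metric_by_year.get(year)
    if prior in (None, 0) or current is None:
        return None  # not enough history to compute the KPI
    return (current - prior) / prior * 100.0

for yr in sorted(revenue_by_year):
    print(yr, yoy_growth_pct(revenue_by_year, yr))
```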

Prebuilt reports and dashboards

  • 3000+ prebuilt reports supporting a large number of industries
  • Hundreds of role based dashboards
  • Dynamic currency conversion at dashboard level

Highly tuned Performance

The BI apps have been tuned over the years for both ETL and dashboard performance. The applications use best practices and advanced database features to deliver the best possible performance.

  • Optimized data model for BI and analytic queries
  • Prebuilt aggregates, and the ability for customers to easily create their own aggregates on warehouse facts, allow for scalable end-user performance
  • Incremental extracts and loads (a minimal sketch follows this list)
  • Incremental Aggregate build
  • Automatic table index and statistics management
  • Parallel ETL loads
  • Source system deletes handling
  • Low-latency extract with GoldenGate
  • Micro ETL support
  • Bitmap Indexes
  • Partitioning support
  • Modularized deployment, start small and add other subject areas seamlessly
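
As an illustration of the incremental extract item above, the sketch below filters source rows on a last-update timestamp against the previous ETL run date. The row layout and bookmark handling are hypothetical simplifications of what the actual extract mappings and load plan variables do.

```python
from datetime import datetime

# Hypothetical source rows carrying a last-update timestamp.
source_rows = [
    {"id": 1, "amount": 100.0, "last_update": datetime(2015, 2, 1)},
    {"id": 2, "amount": 250.0, "last_update": datetime(2015, 2, 4)},
    {"id": 3, "amount": 75.0,  "last_update": datetime(2015, 2, 6)},
]

def incremental_extract(rows, last_etl_run):
    """Return only rows changed since the previous ETL run."""
    return [r for r in rows if r["last_update"] > last_etl_run]

last_etl_run = datetime(2015, 2, 3)   # bookmark persisted from the previous load
changed = incremental_extract(source_rows, last_etl_run)
print(changed)                        # only rows updated after the bookmark are re-extracted
# After a successful load, the bookmark would be advanced to the current run date.
```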

Source-Specific Staging and Real-Time Schema

  • Support for source specific operational reporting schema for EBS, PSFT, Siebel and JDE

Application Integrations

The BI apps also integrate with source systems, as well as with other applications that add value through BI and enable BI consumption during operational decision making.

  • Embedded dashboards for Fusion, EBS and Siebel applications
  • Action Link support
  • Marketing Segmentation
  • Sales Predictor Dashboard
  • Territory Management

External Integrations

The BI apps data integration choices include support for loading external data.

  • External data enrichment choices: UNSPSC, item class, etc. (extensible)
  • Spend Classification

Broad Deployment Choices

  • Exalytics support
  • Databases: Oracle, Exadata, Teradata, DB2, MSSQL
  • ETL tool of choice: ODI (coming), Informatica

Extensible and Customizable

  • Extensible architecture and Methodology to add custom and external content
  • Upgradable across releases

Thanks for reading this long post, and be on the lookout for future posts. We look forward to your valuable feedback on these topics, as well as suggestions on what other topics you would like us to cover.
