Friday Apr 11, 2014

Snapshot Facts in OBIA (3)

Authors: Anbing Xue, Zhi Lin

Delta Snapshot History Fact

Besides the trend lines of snapshots, we also have a requirement to plot the trend lines of the delta changes on a transaction along a timeline. The changes here can be either quantitative (on amounts, units, etc.) or qualitative (on names, descriptions, etc.). Hence we invented a new kind of snapshot fact to specifically address it.

Typical Data Model

The typical attributes of a delta snapshot history fact are–

SNAPSHOT_DT

IMAGE_TYPE

Many attributes of the original transactions are kept and inherited, like–

Primary Key

Foreign keys to the dimensions

Implementation

The delta snapshot history fact captures and stores a new pair of images whenever we detect a change on the original transaction. The image pair is essential, especially for qualitative changes. Usually it consists of one pre image (or “negative” image) and one post image (or “positive” image). For example,

IMAGE_NUM  ORDER_ID  SHIP_TO_LOCATION  AMOUNT  SNAPSHOT_DT  IMAGE_TYPE
1          001       New York City      $100   Mar 1, 2014  POST
2          001       New York City     -$100   Mar 2, 2014  PRE
3          001       New York City      $120   Mar 2, 2014  POST
4          001       New York City     -$120   Mar 3, 2014  PRE
5          001       Boston             $120   Mar 3, 2014  POST

Basically, this delta snapshot history fact stores a regular snapshot per change, and adds the “negative” snapshot from before the change. It therefore gains a unique capability: reporting the delta’s trend line, simply by combining both kinds of snapshots together.
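For instance, the delta trend of an order can be reported by summing the pre and post images per snapshot date. A minimal SQL sketch, assuming a hypothetical delta snapshot table W_ORDER_DELTA_SNP_F with the columns shown above:

SELECT SNAPSHOT_DT,
       SUM(AMOUNT) AS NET_CHANGE   -- pre images carry negated amounts, so the sum per date is the delta
FROM   W_ORDER_DELTA_SNP_F         -- hypothetical table holding the image pairs above
WHERE  ORDER_ID = '001'
GROUP BY SNAPSHOT_DT
ORDER BY SNAPSHOT_DT;

With the sample data above, this returns +$100 on Mar 1, +$20 on Mar 2, and $0 on Mar 3 (the Mar 3 change was purely qualitative: the ship-to location moved to Boston).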

Besides, we also introduced more flexibility for users to configure the subset of columns they want to track. Based on that configuration, the ETL creates new snapshots only for changes on the tracked columns. Changes on the other columns instead trigger an update to the existing snapshots, in order to keep them in sync with the original transactions.

However, extra ETL complexity has to be introduced to handle pre and post images separately, plus the flexibility of tracking only a subset of changes rather than all of them. On the other hand, the number of records is much smaller than in a regular daily snapshot fact: the data size of this fact is proportional to the number of changes.

Snapshot Facts in OBIA (2)

Authors: Divya Nandakumar, Zhi Lin

Range Snapshot History Fact

In a data warehouse implementation, the volume of OLTP transactions can already be very big, and consequently the volume of the snapshot fact can be humongous, depending on the snapshot frequency. The dilemma is that better accuracy of change history is achieved with more frequent captures, which makes the data size overwhelming and badly impacts performance.

A solution is to create a variation of the snapshot history fact, which we call the range (or period) snapshot history fact. The idea is simple: concatenate consecutive snapshots of a transaction into a single new snapshot if they share identical images. The new snapshot is labeled with starting as well as ending timestamps, which indicate the time period the image lasts. This merges duplicate snapshots and reduces the resulting data size significantly.

Typical data model

The typical attributes of a range snapshot history fact are–

EFFECTIVE_FROM_DT

EFFECTIVE_TO_DT

Many attributes of the original transactions are kept and inherited, like–

Primary Key

Foreign keys to the dimensions

Implementations

Concatenate daily snapshots into a range

This is the conventional way to build up range snapshots. Two consecutive daily snapshots sharing an identical status can be merged into one snapshot spanning the two days. Two snapshots having different statuses are stored as two separate snapshots, one for each day.

Concatenation of these daily snapshots is created by gathering related daily snapshot records together. The degree of data condensation that can be achieved is remarkable, because the gathering may span an arbitrary range of time rather than a fixed period such as a week or month. If the triggering event occurs very often, for example 20 times a day, this approach is not advisable. Meanwhile, every detail is preserved, as no daily snapshot is dropped from the concatenation.

The ETL flow requires daily snapshots to start with, and does a group-by on the “interesting” status columns to merge identical rows. Its dependency on an accumulation of daily snapshots means extra work and large storage. Incremental load can be a challenge, especially for a back-dated snapshot. Also, this method assumes no gap between daily snapshots, which can lead to an exception that is difficult to handle in ETL.

A status change in these related daily snapshots could trigger a snapshot record to be entered into the data warehouse.  
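As an illustration, the group-by concatenation described above can be sketched in SQL (a minimal sketch, assuming a hypothetical daily snapshot table W_ORDER_DAILY_SNP_F with one row per order per day; consecutive days sharing the same status collapse into a single range):

SELECT ORDER_ID,
       ORDER_STATUS,
       MIN(SNAPSHOT_DT) AS EFFECTIVE_FROM_DT,
       MAX(SNAPSHOT_DT) AS EFFECTIVE_TO_DT
FROM  (SELECT ORDER_ID, ORDER_STATUS, SNAPSHOT_DT,
              -- consecutive days with an unchanged status share the same grouping value
              SNAPSHOT_DT - ROW_NUMBER() OVER (PARTITION BY ORDER_ID, ORDER_STATUS
                                               ORDER BY SNAPSHOT_DT) AS GRP
       FROM   W_ORDER_DAILY_SNP_F)          -- hypothetical daily snapshot fact
GROUP BY ORDER_ID, ORDER_STATUS, GRP
ORDER BY ORDER_ID, EFFECTIVE_FROM_DT;

The ROW_NUMBER trick assigns the same GRP value to an unbroken run of days with the same status, which is what lets the group-by collapse the run into one range.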

Range snapshots directly from transactions

Here we invented a new way to build range snapshots that overcomes the shortcomings of the conventional method above. We removed the dependency on daily snapshots and build range snapshots directly by scanning through all transaction footprints.

We introduced a few key points to achieve this.

1)      Create multiple range snapshots trailing each footprint (transaction). For example, one order placed on Mar 25, 2012 by Adam derives the trailing range snapshots below. The period duration of each snapshot is one year here, which is configurable.

Customer  Status   Status Start Date  Status End Date
Adam      Active   Mar 25, 2012       Mar 25, 2013
Adam      Dormant  Mar 25, 2013       Mar 25, 2014
Adam      Lost     Mar 25, 2014       Mar 25, 2015

2)       Collapse all the trailing series generated in (1) so that only one status comes out at any point in time, using priority rules. In the same example, the overriding priority rule is Active > Dormant > Lost.

3)       On top of the collapsed results, concatenate the snapshots having identical statuses into ranges, each labeled with its starting and ending timestamps.

The challenge of incremental load, especially for back-dated records, can be solved relatively easily here, because all the source information, the transaction footprints, is usually persisted anyway. In the same example, our ETL can be as simple as deleting the records for a particular customer from the target table and recreating them from scratch every time that customer places an order.
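A minimal sketch of that incremental strategy, using hypothetical names (ORDERS as the persisted transaction footprint, W_CUST_STATUS_RANGE_F as the target); only step 1, generating the trailing series, is shown here, with the collapse and concatenation steps to follow:

DELETE FROM W_CUST_STATUS_RANGE_F
WHERE  CUSTOMER_ID = :p_customer_id;

INSERT INTO W_CUST_STATUS_RANGE_F (CUSTOMER_ID, STATUS, STATUS_START_DT, STATUS_END_DT)
SELECT o.CUSTOMER_ID,
       s.STATUS,
       ADD_MONTHS(o.ORDER_DT, 12 * (s.YR - 1)),   -- one-year trailing periods behind each order
       ADD_MONTHS(o.ORDER_DT, 12 * s.YR)
FROM   ORDERS o
CROSS JOIN (SELECT 'Active'  AS STATUS, 1 AS YR FROM DUAL UNION ALL
            SELECT 'Dormant', 2 FROM DUAL UNION ALL
            SELECT 'Lost',    3 FROM DUAL) s
WHERE  o.CUSTOMER_ID = :p_customer_id;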

Here we still achieve a great amount of data compression along with robust ETL processing. The incremental load, however, is still not precise to the most granular level: an incremental load involving one transaction for every customer would end up truncating and rebuilding the entire target table.

Snapshot Facts in OBIA (1)

Authors: Divya Nandakumar, Zhi Lin

 

Snapshot History Fact

A snapshot captures an image of source data at a certain point in time, and preserves the data image plus a (snapshot) time label. A regular, transactional fact intends to store data in data warehouse format and reflect OLTP transactions with near-real-time latency. In this context, a regular fact basically captures the near-current snapshot of the OLTP transactions, and supports status quo analysis.

A snapshot history fact (or snapshot fact, in short) accumulates a series of snapshots and preserves them all, each with a different time label. In this way, the change history of each transaction is preserved by the snapshot series. This fact is very useful for trend analysis over time.

Typical Data Model

The typical attribute of a snapshot history fact is–

SNAPSHOT_PERIOD_DT

Many attributes of the original transactions are kept and inherited, like–

Primary Key

Foreign keys to the dimensions

Implementation

Rolling period of daily snapshots

Consider a business need to analyze DAILY quantity-on-hand inventory levels by product and store, for a business process like “Retail store inventory”. The granularity would be daily inventory by product at each store; the dimensions would be date, product, and store; and the fact would be quantity on hand.

Storing the source system's daily snapshots indefinitely would have a serious impact on data warehouse storage. The first solution is to keep daily snapshots only for a limited rolling period, for example the last 90 days. The daily snapshot table accumulates daily snapshots from 1 day, 2 days, …, up to 90 days. After that, it always drops the oldest daily snapshot before adding a new one. Hence the ETL should always delete the snapshots older than 90 days first, and then append a new snapshot.

This method keeps snapshots at a fairly granular, daily level. However, older snapshots are not kept, so it is not good for long-term historical trending.
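A minimal sketch of the rolling-window maintenance, assuming a hypothetical daily snapshot table W_INVENTORY_DAILY_SNP_F and a hypothetical source extract INVENTORY_BALANCES:

DELETE FROM W_INVENTORY_DAILY_SNP_F
WHERE  SNAPSHOT_PERIOD_DT < TRUNC(SYSDATE) - 90;   -- drop snapshots older than the rolling window

INSERT INTO W_INVENTORY_DAILY_SNP_F (PRODUCT_ID, STORE_ID, QTY_ON_HAND, SNAPSHOT_PERIOD_DT)
SELECT PRODUCT_ID, STORE_ID, QTY_ON_HAND, TRUNC(SYSDATE)   -- today's image
FROM   INVENTORY_BALANCES;                                 -- hypothetical daily source extract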

 

Monthly snapshots

At the end of the month all accounts have their month ending balance captured. The event is the end of the month, and the month is stored as part of the data warehouse. The selection program reads through the operational data and upon encountering a record that meets the qualifications, moves the record to the data warehouse. At the end of the month, each account is queried and the balance of the account at the end of the month is transferred to the data warehouse environment. One account may have had no activities during the month and another account may have had 200 activities during the month. Both accounts will show up as exactly one record in the data warehouse environment. No continuity of activity is assumed using this technique.

The passage of time - day end, week end, month end, etc. - is all common ways of triggering a snapshot. But the periodic passage of time is hardly the only way that snapshots are triggered.

The monthly snapshot table stores the snapshots of all previous months' historical data. The ETL design has a preload mapping which deletes the data loaded for the current month, based on the current month-end date, and then loads the latest data for the current month.
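A minimal sketch of that preload step, assuming a hypothetical monthly snapshot table W_ACCT_BALANCE_MONTH_SNP_F keyed by the month-end date and a hypothetical source extract ACCOUNT_BALANCES:

DELETE FROM W_ACCT_BALANCE_MONTH_SNP_F
WHERE  SNAPSHOT_PERIOD_DT = LAST_DAY(TRUNC(SYSDATE));      -- remove the current month's snapshot

INSERT INTO W_ACCT_BALANCE_MONTH_SNP_F (ACCOUNT_ID, BALANCE_AMT, SNAPSHOT_PERIOD_DT)
SELECT ACCOUNT_ID, BALANCE_AMT, LAST_DAY(TRUNC(SYSDATE))   -- reload it with the latest balances
FROM   ACCOUNT_BALANCES;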

 

In this way, we “aggregate” up from daily snapshots and achieve great compaction of the data size. Longer-term history trending can be stored and reported. However, we lose the more granular levels of detail in between every two month ends.

 

Wednesday Mar 05, 2014

BI apps 11.1.1.7.1 Cumulative Patch 1 is available now

BI Applications 11.1.1.7.1 Cumulative Patch 1 is available now.

Patch 17546336 - BIAPPS 11.1.1.7.1 ODI CUMULATIVE PATCH 1 (Patch) can be downloaded from My Oracle Support.

 "To download patch , navigate to My Oracle Support > Patches & Update. Search with the patch number (17546336) on the Patch Name or Number field. Follow the instructions in the Readme to apply the patch."

Monday Feb 24, 2014

How to Implement Object Security in Project Analytics in OBIA 11.1.1.7.1

Author: Vijay Rajgopal

Introduction

This blog details the steps needed to implement object security for any custom objects which the Customer has created in the Project Analytics Module in OBIA 11.1.1.7.1 onwards.

Object-level security controls the visibility of logical objects based on a user's duty/application roles. Access to the following objects can be restricted using object-level security: presentation tables, presentation table columns, subject areas, reports, dashboards, and Project-specific shared folders.

To apply object security over a subject area, individual tables, or individual columns, the default access for the Authenticated User application role must be set to No Access.

We need to explicitly grant read access to the duty roles (which are adaptor-specific, as listed below) that can access/view the particular subject area, individual table, or individual column.

Supported OBIA release: 11.1.1.7.1 onwards

  1. Project Analytics Application Roles used for enforcing object security –

In Enterprise Manager, select WebLogic Domain -> bifoundation_domain -> Security -> Application Roles. Select the obi application stripe and search for role names starting with OBIA; you will see the list of all application roles that start with OBIA.


Following is the list of OOTB duty roles by adaptor

EBS Adaptor Duty Roles –

OBIA_EBS_PROJECT_EXECUTIVE_ANALYSIS_DUTY

OBIA_EBS_PROJECT_MANAGEMENT_ANALYSIS_DUTY

PSFT Adaptor Duty Roles –

OBIA_PSFT_PROJECT_EXECUTIVE_ANALYSIS_DUTY

OBIA_PSFT_PROJECT_MANAGEMENT_ANALYSIS_DUTY

Fusion Adaptor Duty Roles –

OBIA_PROJECT_EXECUTIVE_ANALYSIS_DUTY

OBIA_PROJECT_MANAGEMENT_ANALYSIS_DUTY

  2. Project Analytics object security implementation -

2.1 Subject Area:

Eg: Project - Cost GL Reconciliation is a newly added area for EBS and PSFT adaptors. We want to ensure that this subject area is not seen by Fusion Adaptor customers.

Bring down the OBIEE Server, backup the existing rpd and open the rpd in the Admin tool.

Double click Project - Cost GL Reconciliation -> Permissions


As you can see, read access has been granted explicitly to the duty roles associated with the EBS and PSFT adaptors. All other duty roles inherit the default access from the Authenticated User application role, which is set to No Access. This ensures that this subject area is not visible to Fusion adaptor users.

2.2 Presentation Table:

Eg: Dim – Analysis Type is supported only for PSFT adaptor. We hide this presentation table from EBS and Fusion Adaptor customers.

Under Project - Billing -> Analysis Type -> Permissions


As can be seen above, only users associated with PSFT duty roles are able to view the Analysis Type table. For EBS and Fusion adaptor users this table is hidden.

2.3 Individual Columns:

Eg: Interproject Billing Amount metric in Project-Billing subject area is supported only for EBS and Fusion adaptors. We hide this individual column from PSFT customers.

Under Project - Billing -> Fact – Project Billing -> Interproject Invoice Amount -> Permissions


As can be seen above, this metric is visible to EBS and Fusion adaptor users and hidden from PSFT adaptor users.

Save the rpd, do a consistency check and deploy the updated rpd in the OBIEE server.

  3. Additional Information –

General Details about 11.1.1.7.1 OBIA can be found here

Wednesday Feb 19, 2014

Notes for implementing Universal adapter for OBIA Project analytics

Author: Amit Kothari

Introduction: This blog outlines the steps for implementing OBIA Project Analytics for Universal Adaptor. Similar steps can be followed for other modules.

Supported OBIA releases: 11.1.1.7.1 onwards

Supported Apps releases: Universal Adapter.

Steps:

Please refer to the OBIA documentation and the DMR as a starting point for this exercise. Also refer to this blog entry.

Please log in to ODI Designer to see the OBIA Projects Universal interfaces; the source files can be seen in the Model layer.

1. High level steps to import data into the data warehouse through the Universal adapter.

a. Populate the csv files with your data (e.g. file_proj_budget_fs.csv is the source file for the w_proj_budget_fs table). Typically the customer writes an extract program, such as a shell script or PL/SQL program, which creates these data files from a non-supported source OLTP system (see the sketch after this list).

b. Refer to the steps below for details of how to populate these files.

c. Build a Load Plan with fact groups: "900: Universal Adaptor Instance"."Project".

d. Run the Load Plan that you created in the previous step.

e. Note: If applicable this Load Plan must be run after the regular Load Plan to populate Oracle Business Analytics Warehouse for the other Subject Areas has completed.
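As referenced in step (a), a hypothetical SQL*Plus sketch of such an extract program is shown below. The legacy table LEGACY_PROJECT_BUDGETS and its columns are examples only; the real select list and column order must match the layout expected by file_proj_budget_fs.csv.

SET HEADING OFF PAGESIZE 0 FEEDBACK OFF TRIMSPOOL ON LINESIZE 2000
SPOOL file_proj_budget_fs.csv
SELECT BUDGET_ID || ',' || PROJECT_ID || ',' || BUDGET_AMOUNT || ',' || '900'   -- 900 = Universal Adaptor instance DATASOURCE_NUM_ID
FROM   LEGACY_PROJECT_BUDGETS;
SPOOL OFF

A shell script or PL/SQL procedure writing the same comma-separated lines would work equally well.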

2. The configuration file or files for this task are provided on installation of Oracle BI Applications at one of the following locations:

a. Source-independent files: <Oracle Home for BI>\biapps\etl\data_files\src_files\.

b. Source-specific files: <Oracle Home for BI>\biapps\etl\data_files\src_files\<source adaptor>.

c. Your system administrator will have copied these files to another location and configured ODI connections to read from this location. Work with your system administrator to obtain the files. When configuration is complete, your system administrator will need to copy the configured files to the location from which ODI reads these files.

d. Refer to the Appendix section ‘Setting Up the Delimiter for a Source File’.

  1. As a general rule, default numeric columns to 0 and string columns to '__NOT_APPLICABLE__’ so that we do not run into ‘Not Null’ errors when the ETLs start loading data.
  2. Date columns should be populated in the CSV file as a number in the format YYYYMMDDHH24MISS or kept null
  3. The dimension ID fields in the fact staging tables have to be populated with the integration_id of the various dimensions. This is very important; otherwise the dimension wid fields in the fact tables will default to 0. Please refer to the ODI Model or the DMR for the star schema diagrams and other FK info.
  4. Similarly, the common dimensions which Projects uses, like W_INT_ORG_D, W_MCAL_DAY_D, W_MCAL_CONTEXT_G, W_EMPLOYE_D, W_JOB_D, W_INVENTORY_PRODUCT_D, etc., also need to be populated correctly via source files.
  5. W_MCAL_CONTEXT_G has a class field that holds one of two values – GL or PROJECTS. To resolve the project accounting dates in the fact tables, there must be data present in this table for class ‘PROJECTS’.
  6. There are various domain codes which are loaded into the warehouse staging table W_DOMAIN_MEMBER_GS. In order to load this table, the generic file file_domain_member_gs.csv has to be populated with the correct domain codes.
    1. The granularity of this file is each domain member per language for any of the domains listed above.
    2. Domain codes for Projects are listed in the Appendix. Just load the domains based on the Facts/Dims you are planning to load.

Table B-164 file_domain_member_gs.csv Field Descriptions (Data Type and Sample Data are not available for any of the fields)

DOMAIN_CODE – This should be populated with the Domain Code corresponding to the Source Domain that is to be configured.
DOMAIN_TYPE_CODE – Defaulted to 'S' - indicates this is a Source Domain Code.
DOMAIN_MEMBER_CODE – This should be populated with the CODE value supplied in any of the above files.
DOMAIN_MEMBER_NAME – This should be populated with the NAME value that corresponds to the Member Code supplied.
DOMAIN_MEMBER_DESCR – Not available.
DOMAIN_MEMBER_REF_CODE – Hardcode to '__NOT_APPLICABLE__'.
DOMAIN_MEMBER_DEFN_TYPE_CODE – Not available.
LANGUAGE_CODE – Warehouse Language Code.
SRC_LANGUAGE_CODE – Source Language Code.
INTEGRATION_ID – This is the unique ID for the record. The INTEGRATION_ID for this file can also be populated as DOMAIN_CODE~DOMAIN_MEMBER_CODE.
DATASOURCE_NUM_ID – The unique Data Source ID of the Source Instance you are configuring.

Appendix

A. Setting Up the Delimiter for a Source File

When you load data from a Comma Separated Values (CSV) formatted source file, if the data contains a comma character (,), you must enclose the source data with a suitable enclosing character known as a delimiter that does not exist in the source data.

Note: Alternatively, you could configure your data extraction program to enclose the data with a suitable enclosing character automatically.

For example, you might have a CSV source data file with the following data:

Months, Status
January, February, March, Active
April, May, June, Active

If you loaded this data without modification, ODI would load 'January' as the Months value, and 'February' as the Status value. The remaining data for the first record (that is, March, Active) would not be loaded.

To enable ODI to load this data correctly, you might enclose the data in the Months field within the double-quotation mark enclosing character (" ") as follows:

Months, Status
"January, February, March", Active
"April, May, June", Active

After modification, ODI would load the data correctly. In this example, for the first record ODI would load 'January, February, March' as the Months value, and 'Active' as the Status value.

To set up the delimiter for a source file:

1. Open the CSV file containing the source data.

2. Enclose the data fields with the enclosing character that you have chosen (for example, ").

You must choose an enclosing character that is not present in the source data. Common enclosing characters include single quotation marks (') and double quotation marks (").

3. Save and close the CSV file.

4. In ODI Designer, display the Models view, and expand the Oracle BI Applications folder.

Identify the data stores that are associated with the modified CSV files. The CSV file that you modified might be associated with one or more data stores.

5. In ODI Designer, change the properties for each of these data stores to use the enclosing character, as follows:

1. Double-click the data source, to display the DataStore: <Name> dialog.

2. Display the Files tab.

3. Use the Text Delimiter field to specify the enclosing character that you used in step 2 to enclose the data.

4. Click OK to save the changes.

You can now load data from the modified CSV file.

B. Project Domains

PROJECT_ANALYSIS_TYPE

PROJECT_ASSIGNMENT_STATUS

PROJECT_ASSIGNMENT_TYPE

PROJECT_BILLING_TYPE

PROJECT_BILL_HOLD_CODE

PROJECT_BILL_HOLD_FLG

PROJECT_BUDGET_CHANGE_REASON

PROJECT_BUDGET_CLASS

PROJECT_BUDGET_ENTRY_LEVEL

PROJECT_BUDGET_LEVEL

PROJECT_BUDGET_STATUS

PROJECT_BUDGET_TIME_PHASE

PROJECT_BUDGET_TYPE

PROJECT_BUDGET_VER_TYPE

PROJECT_CLASS_CATEGORY

PROJECT_CLASS_CODE

PROJECT_COMMITMENT_SOURCE_TYPE

PROJECT_COMMITMENT_TYPE

PROJECT_COST_CODE_TYPE

PROJECT_CROSS_CHARGE

PROJECT_CROSS_CHARGE_TYPE

PROJECT_DOCUMENT_ENTRY

PROJECT_DOCUMENT_SOURCE

PROJECT_EVENT_CLASS

PROJECT_EVENT_INVOICING_STATUS

PROJECT_EVENT_SOURCE

PROJECT_EVT_REV_DISTRIB

PROJECT_EXPENDITURE_CATEGORY

PROJECT_EXPENDITURE_CLASS

PROJECT_EXPENDITURE_SUBCATEGORY

PROJECT_FIN_PLAN_TYPE

PROJECT_FUNDING_APPROVAL

PROJECT_FUNDING_CATEGORY

PROJECT_FUNDING_LEVEL

PROJECT_INVOICE_CLASS

PROJECT_INVOICE_HDR_TYPE

PROJECT_INVOICE_LINE_TYPE

PROJECT_INV_STATUS

PROJECT_MFG_RESOURCE_TYPE

PROJECT_PRIORITY

PROJECT_PROGRESS_STATUS

PROJECT_REQUIREMENT_STATUS

PROJECT_RESOURCE_CLASS

PROJECT_RESOURCE_CODE

PROJECT_RESOURCE_TYPE

PROJECT_REVENUE_CATEGORY

PROJECT_REVENUE_STATUS

PROJECT_ROLE

PROJECT_SECURITY

PROJECT_SERVICE_TYPE

PROJECT_STATUS

PROJECT_SUB_STATUS

PROJECT_TASK_PRIORITY

PROJECT_TASK_STATUS

PROJECT_TASK_TYPE

PROJECT_TRANSACTION_SOURCE

PROJECT_TRANSFER_STATUS

PROJECT_TYPE

PROJECT_TYPE_CLASS

PROJECT_WORK_TYPE

How to Include Fusion DFFs Into the OBIA 11.1.1.7.1 On-Premise Data Warehouse

Author: Saurabh Gautam

Introduction

This is a technote that explains the steps needed to extract Fusion descriptive flexfield (DFF) information into the on-premise Oracle BI Applications (OBIA) warehouse from on-premise Fusion Applications (not SaaS).

Note: The OBIA changes have to be done manually.

Supported OBIA release: 11.1.1.7.1 onwards

Supported Apps release: Fusion Release 5 onwards

A. Enable and Deploy DFF

1. Enable the descriptive flexfield in Fusion Apps and mark the DFF/attributes as BI Enabled, e.g. enable PJC_EXP_ITEMS_DESC_FLEX.

2. Deploy the flexfield VO.

3. Refer to this link for more info.

B. Setup the rpd

1. Bring down the OBIEE server and presentation services.

2. Open the Oracle BI Applications repository file (*.rpd) via the admin tool.

3. Import the newly deployed DFF VO (e.g. FscmTopModelAM.PjcEiBIAM.FLEX_BI_PjcEi_VI) into the rpd. Select the appropriate logical table (e.g. Dim – Project Costing Details) while importing. This should import the DFF VO and also create the physical join to the appropriate VO. E.g. screenshot below:

1. Make sure that the VO name is <= 80 chars; if not, create an alias on that VO with a name <= 80 chars.

2. Save the rpd, start the BI server

C. ETL Changes in ODI

 1. Please note that the steps documented in this note follow our standard customization process. This is needed in order for our future ODI metadata patches to work in your repository. As part of the standard customization process, you will copy the existing mapping folder to a custom folder, make changes in the custom folder, delete the OOTB scenario from the original mapping folder, and then generate the new scenarios in the custom folder using the original OOTB scenario name.

Please refer to the customization guide before you start on this.

2. Open the ODI Studio client and login to the appropriate repository. Go to the Model tab->Fusion 1.0->Oracle Fusion 1.0 FSCM/HCM/CRM folder

3. Import the newly deployed DFF VO using the RKM BIAPPS Oracle BI

4.

5. Open the Oracle BI Applications -> Oracle BI Applications model sub folder and add the fields to the target DS/FS and D/F data warehouse (DW) tables in the correct folder.

6. Apply these target table changes to the target warehouse by doing an alter table.
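For example, the alter statements might look like the following sketch (table and column names are hypothetical; the actual staging and target tables depend on the DFF you enabled):

ALTER TABLE W_PROJ_COST_LINE_FS ADD (ATTRIBUTE1 VARCHAR2(150 CHAR));   -- staging table
ALTER TABLE W_PROJ_COST_LINE_F  ADD (ATTRIBUTE1 VARCHAR2(150 CHAR));   -- target fact table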

7. Click on the Designer tab and navigate to the appropriate SDE folder for Fusion under the BI Apps project: Mappings -> SDE_FUSION_V1_Adaptor. Duplicate the appropriate SDE folder and copy it to your CUSTOM_SDE folder.

8. Open the temporary interface (icon marked in yellow) in that custom folder.

9.

10. Pull the DFF VO into the mapping tab of the interface.

11. Join the DFF VO to the base VO and drag the DFF VO fields which need to be extracted into the DW into the right-hand target pane.

12. Open the main interface and map the fields from the temporary interface to the target.

13. Save all the objects. Before generating the new scenario, rename the original scenario in the base OOTB folder from which you copied the folder.

14. Navigate to Packages -> Scenarios, right click on the scenario name, and select the ‘Generate’ option to generate the scenario. Rename the scenario to use the original out-of-the-box scenario name.

15. Similarly, copy the appropriate Dim or Fact in the SILOS folder to the CUSTOM_SILOS folder, and then map the new DS/FS fields to the D/F table in the main interface. Save. Before generating the new scenario, rename the original scenario in the base OOTB folder from which you copied the folder.

16. Navigate to Packages -> Scenarios, right click on the scenario name, and select the ‘Generate’ option to generate the scenario. Rename the scenario to use the original out-of-the-box scenario name.

17. Unit test all the changes

D. RPD changes

1. Open the rpd in the admin tool in your dev environment. In the physical layer, add the new fields to the modified _D/_F table under the DataWarehouse connection pool.

2. Drag the new fields from the alias to the BMM layer, rename them to give them business names, and drag them to the presentation layer.

3. Run the consistency check and save the rpd.

4. Deploy the modified rpd, restart the BI server, and test the new fields from Answers.

E. Additional Information

General Details about 11.1.1.7.1 OBIA can be found here

Note: These fixes are to be applied in the right folder; for example, apply them in the SDE_Fusion_Adaptor folder of the ODI repository if you are running the Fusion app. If you have customized the maps mentioned above, then please apply the steps above carefully.

BIAPPS List of ODI Variables

Author: Chuan Shi

The tables below list variables of different categories in terms of whether and where they are refreshed. Note that the intention of this blog is just to provide general information about ODI variables. If you are an Oracle BI Apps customer and have specific questions regarding any ODI variable, please contact Oracle Support.

Variables that are not refreshed

Variables that are refreshed

Monday Feb 17, 2014

How to Compare RPDs

Author: Vivian(Weibei) Li

Introduction

Comparing RPDs is a necessary and useful process in RPD development. It is good to consider doing it in the following use cases.

  • You have a customized RPD and want to find out the differences between the customized RPD and the OOTB RPD.
  • You did some customization on the RPD and want to check if the customization is done as expected.
  • The administrator wants to check the differences between a later RPD and older RPDs.
  • You want to upgrade your RPD and you are not sure if there are any upgrade issues from one release to another release. You can compare the RPDs for the two releases to catch issues before upgrading.

There are many more cases to compare RPDs. This blog describes how to compare the RPDs.

There are two ways to compare RPDs, one is to use the compare utility and another is to use the compare dialogue in the admin tool.

You can choose either way to compare your RPDs. Here is a comparison of the two methods and some tips.

  • RPD compare utility is executed by the command line, so it doesn’t need the admin tool open.
  • RPD compare dialogue can be directly used in admin tool.
  • When two local RPDs are compared, especially when they require different OBIEE versions, RPD compare utility is recommended.
  • When modifications are performed in the RPD with the admin tool and it is to be compared with a benchmark RPD (for example, the original RPD), the RPD compare dialogue is recommended. See the details in the next sections.

Compare RPDs with Utility

There are utilities for RPD comparison in OBIEE. You can call the utilities with a few commands in the Windows command prompt. The steps to compare RPDs with this option are listed below.

Step 1 - Choose your OBIEE version

If the two releases you compare need different versions of OBIEE, use the later version of OBIEE to do the comparison.
For example, there are two releases, say release 1 and release 2, and release 2 uses the later version of OBIEE. To compare the two releases, you need to use the OBIEE version for release 2.

Step 2 – Check out the subset of RPDs

It is recommended to check out the subset of the RPDs that you want to compare, containing the projects you are interested in. Comparing the entire RPDs would be very inefficient and time-consuming.

Step 3 – Equalize the RPDs

Before comparing the RPDs, you should equalize the RPDs first with equalizerpds utility. The equalizerpds utility will equalize the Upgrade ID of objects in two separate repositories. If objects have the same Upgrade ID, they are considered to be the same object. The utility compares Upgrade IDs from the first repository (typically the original repository) with Upgrade IDs from the second repository (typically the modified repository). Then, the utility equalizes the Upgrade IDs of objects with the same name, using the Upgrade ID from the original repository.

equalizerpds.exe can be found under <ORA_HOME>\Oracle_BI1\bifoundation\server\bin.

· Syntax

The equalizerpds utility takes the following parameters:

equalizerpds [-B original_repository_password] -C original_repository_name

[-E modified_repository_password] -F modified_repository_name [-J rename_map_file]

[-O output_repository_name] [-Y equalStringSet]

· Explanation

Original and modified repository – Use the base repository as the ‘original’ repository and the repository you want to compare as the ‘modified’ repository.

For example, if you want to compare release 1 and release 2 RPDs to find out upgrading issues to upgrade from release 1 to release 2, put release 1 RPD as the original and release 2 RPD as the modified repository.

rename_map_file

When you equalize objects, you can lose track of object renames because legitimate object renames become different objects. In other words, intentional renames you did in the repository might be changed to different Upgrade IDs, so subsequent merges erroneously treat the renamed object as a new object. To avoid this situation, enter the before and after names of intentionally renamed objects in a rename map file that you then pass to the utility. The equalizerpds utility uses the information in the file to ensure that the original IDs are used in the renamed current objects.

rename_map_file is a text file containing a list of objects that were renamed and that you want to equalize. The format is a tab-separated file with the following columns:

                    TypeName     Name1     Name2
       For example, logical column "Core"."Dim - Customer"."Address4" is re-named as 
       "Core"."Dim - Customer"."Address 4" from Release 1 to Release 2. The file can be written as
                    Logical Column "Core"."Dim - Customer"."Address4" "Core"."Dim - Customer"."Address 4"

       Tip: How to find out the TypeName value?
Query your object with Query Repository tool in the admin tool, and you will find the TypeName value in the result. 
1.       Open the admin tool. Go to Tools->Query Repository

 

2.       In the popup dialogue, query your object.
3.       You will find the TypeName value in the result.

 

You can put this file in any folder in your machine, and give the absolute path in rename_map_file parameter. See the example below.

· An equalization command example

equalizerpds -B pwd123 -C C:\rpdcomparison\release1\release1.rpd -E pwd123 -F C:\rpdcomparison\release2\release2.rpd -J C:\rpdcomparison\rename-map-file.txt -O C:\rpdcomparison\release2\equalizedrpd.rpd

Step 4 – Compare the RPDs

Now you can compare the RPDs with the comparerpd utility. comparerpd.exe can be found under <ORA_HOME>\Oracle_BI1\bifoundation\server\bin.

· Syntax

The comparerpd utility takes the following parameters:

comparerpd [-P modified_rpd_password] -C modified_rpd_pathname

[-W original_rpd_password] -G original_rpd_pathname {-O output_csv_file_name |

-D output_patch_file_name | -M output_mds_xml_directory_name} -E -8

· Explanation

Original and modified repository – Use the base repository as the ‘original’ repository and the repository you want to compare as the ‘modified’ repository. The ‘modified’ repository should be the equalized RPD obtained from step 3.

-O output_csv_file_name is the name and location of a csv file where you want to store the repository object comparison output.

-D output_patch_file_name is the name and location of an XML patch file where you want to store the differences between the two repositories.

-M output_mds_xml_directory_name is the top-level directory where you want to store diff information in MDS XML format.

Note: You can specify an output CSV file using -O, an XML patch file using -D, or an MDS XML directory tree using -M. You cannot specify more than one output type at the same time.

· A comparison command example

comparerpd -P pwd123 -C C:\rpdcomparison\release2\equalizedrpd.rpd -W pwd123 -G C:\rpdcomparison\release1\release1.rpd -O C:\rpdcomparison\results.csv

Compare RPDs with Compare Dialogue

In this section, I will describe how to compare RPDs with the compare dialogue. The compare dialogue must be used with the admin tool open.

Compare the MUD RPD Before and After the Modification

· Open your fixed MUD RPD.

· Go to ‘File’->‘Multiuser’->’Compare with Original’.

It will compare the current modified rpd with the local copy of the original rpd.

Compare the local RPDs

You can also use compare dialogue to do the comparison for the local RPDs.

Note: It is recommended to use the compare utility to compare the local RPDs. Remember to extract the subset of RPDs before comparing. Comparing the entire RPDs will be time-consuming.

· Open your local RPD, let’s say RPD1, with the admin tool. This RPD is the base RPD in your comparison.

· Go to ‘File’->’Compare’.

· Select the repository or XML file in the popup compare dialogue.

· Enter the password for the repository which you want to compare to, let’s say RPD2. RPD2 will be the modified repository.

· The compare dialogue will open the RPD2, equalize RPD2, then compare the RPD2 with RPD1, and finally show the results.

· You can see what the object looks like in RPD1 and RPD2 by clicking ‘View 1’ and ‘Edit 2’.

View 1 – the object in RPD1

Edit 2 – the object in RPD2

· Save the diff file as .CSV file in your local machine.

In summary, RPD comparison is a good tool for RPD development. Using it appropriately builds more confidence in your RPD modifications, fixes, and upgrades. Adding RPD comparison as a standard step makes RPD development smoother and less error-prone.

Wednesday Jan 22, 2014

Configuring Enterprise Calendar for Oracle BI Apps Time Dimension

Author: Chuan Shi

Introduction

One of the key common dimensions of Oracle BI Apps is the Time dimension. It contains calendars of different natures to support different types of analysis within various subjects in Oracle BI Apps. Different types of Calendar include:

  • Gregorian Calendar
  • Fiscal Calendar (multiple)
  • Enterprise Calendar (unique enterprise wide)

The Enterprise Calendar (or reporting calendar) enables cross subject area analysis. The Enterprise Calendar data warehouse tables have the W_ENT prefix. Within a single BI Apps deployment by the customer, only one fiscal calendar can be chosen as the Enterprise Calendar. The purpose of this blog is to explain how to configure the Enterprise Calendar.

Configure the Enterprise Calendar

The Enterprise Calendar can be set to one of the OLTP sourced fiscal calendars, or to one of the warehouse generated fiscal calendars (e.g., the 4-4-5 calendar and 13 period calendar supported by Oracle BI Apps). This can be done by setting the following source system parameters in the Business Intelligence Applications Configuration Manager (BIACM):

  • GBL_CALENDAR_ID (used to set the ID of the calendar to be used as the Enterprise Calendar)
  • GBL_DATASOURCE_NUM_ID (used to set the DSN of the source from which the Enterprise Calendar is chosen)

The following sections show how to set up these two parameters for the Enterprise Calendar in different scenarios.

Scenario 1: Using an Oracle EBS fiscal calendar as the Enterprise Calendar

  • GBL_CALENDAR_ID: This parameter is used to select the Enterprise Calendar. In EBS, it should have the format of MCAL_CAL_NAME~MCAL_PERIOD_TYPE. For example, GBL_CALENDAR_ID will be 'Accounting~41' if MCAL_CAL_NAME = 'Accounting' and MCAL_PERIOD_TYPE = '41'.

Note 1: MCAL_CAL_NAME and MCAL_PERIOD_TYPE are sourced from PERIOD_SET_NAME and PERIOD_TYPE of the GL_PERIODS table (an Oracle EBS OLTP table). To see a valid list of combinations of MCAL_CAL_NAME~MCAL_PERIOD_TYPE, run the following query in the OLTP:

SELECT DISTINCT PERIOD_SET_NAME || '~' || PERIOD_TYPE FROM GL_PERIODS;

Note 2: The available EBS calendars are also loaded into the OLAP warehouse table W_MCAL_CAL_D. Therefore, they can be viewed by running the following query in DW:

SELECT MCAL_CAL_ID, MCAL_CAL_NAME, MCAL_CAL_CLASS, DATASOURCE_NUM_ID

FROM W_MCAL_CAL_D

WHERE DATASOURCE_NUM_ID = <the value corresponding to the EBS version that you use>;

  • GBL_DATASOURCE_NUM_ID: For EBS, this parameter should be the DATASOURCE_NUM_ID of the source system from where the calendar is taken. For example, if you are running EBS R11.5.10 and the DATASOURCE_NUM_ID for this source is 310, then you need to set GBL_DATASOURCE_NUM_ID to 310.

GBL_CALENDAR_ID and GBL_DATASOURCE_NUM_ID are set in BIACM, and this will be covered in a later section.

Scenario 2: Using a PeopleSoft fiscal calendar as the Enterprise Calendar

  • GBL_CALENDAR_ID: This parameter is used to select the Enterprise Calendar. In PSFT, it should have the format of SETID~CALENDAR_ID. For example, GBL_CALENDAR_ID will be 'SHARE~01' if SET_ID = 'SHARE' and CALENDAR_ID = '01'.

Note 1: SETID and CALENDAR_ID are sourced from the PS_CAL_DEFN_TBL table (a PeopleSoft OLTP table). To see a valid list of combinations of SETID~CALENDAR_ID, run the following query in the OLTP:

SELECT DISTINCT SETID || '~' || CALENDAR_ID FROM PS_CAL_DEFN_TBL;

Note 2: The available PeopleSoft calendars are also loaded into the OLAP warehouse table W_MCAL_CAL_D. Therefore, they can be viewed by running the following query in DW:

SELECT MCAL_CAL_ID, MCAL_CAL_NAME, MCAL_CAL_CLASS, DATASOURCE_NUM_ID

FROM W_MCAL_CAL_D

WHERE DATASOURCE_NUM_ID = <the value corresponding to the PeopleSoft version that you use>;

  • GBL_DATASOURCE_NUM_ID: For PSFT, this parameter should be the DATASOURCE_NUM_ID of the source system from where the calendar is taken. For instance, if you are running PeopleSoft 9.0 FSCM Instance and the DATASOURCE_NUM_ID for this source is 518, then you need to set GBL_DATASOURCE_NUM_ID to 518.
Note: OLTP sourced calendars are not supported in PeopleSoft HCM pillars. Therefore, if you are running PSFT HCM and want to choose an Enterprise Calendar, you need to choose one of the DW generated calendars (i.e., the 4-4-5 or 13 period calendar), which is explained in Scenario 3.

Scenario 3: Using a warehouse generated calendar as the Enterprise calendar

  • GBL_CALENDAR_ID: This parameter should be the CALENDAR_ID of the DW generated Calendar (so either 4-4-5 or 13 period calendar). By default, the 4-4-5 calendar has a CALENDAR_ID of '10000', and the 13-period calendar has a CALENDAR_ID of '10001'.
  • GBL_DATASOURCE_NUM_ID: If the Enterprise Calendar is a DW generated calendar, this parameter should be the DATASOURCE_NUM_ID value of the OLAP (data warehouse). For example, if the DATASOURCE_NUM_ID for your DW is 999, then GBL_DATASOURCE_NUM_ID should be set to 999.

Note 1: Customers can generate additional warehouse generated calendars which can be picked as the Enterprise Calendar.

Note 2: The available data warehouse calendars are also loaded into the OLAP warehouse table W_MCAL_CAL_D. Therefore, they can be viewed by running the following query in DW:

SELECT MCAL_CAL_ID, MCAL_CAL_NAME, MCAL_CAL_CLASS, DATASOURCE_NUM_ID

FROM W_MCAL_CAL_D

WHERE DATASOURCE_NUM_ID = <DSN of your DW>

Setting GBL_CALENDAR_ID and GBL_DATASOURCE_NUM_ID in BIACM

To set GBL_CALENDAR_ID and GBL_DATASOURCE_NUM_ID, log into BIACM and click on Manage Data Load Parameters in the navigation bar on the left. Once directed to the Manage Data Load Parameters page, type GBL_CALENDAR_ID in the parameter field and choose Code as the parameter type. Then click the Search button, after which the parameter with its current value is returned. Refer to the example below, which shows 10000 as the current value of GBL_CALENDAR_ID.

To change the value of GBL_CALENDAR_ID, click on its current value, and then an edit dialog pops up.

Provide the desired value in the Parameter Value field (note that you do not need to include single quotes in the value, e.g., use Accounting~41 rather than ‘Accounting~41’), and then click Save and Close to save your change. The new value of GBL_CALENDAR_ID has been set.

The setting procedure for GBL_DATASOURCE_NUM_ID is similar. It should be first retrieved by searching for this variable. Once it is returned, click on its current value and then an edit dialog pops up. Change its parameter value there and then save the change.

Setting GBL_CALENDAR_ID and GBL_DATASOURCE_NUM_ID in a multi-source ETL

In a multi-source ETL run, multiple fiscal calendars from different data sources can be loaded. However, in this case, ONLY ONE calendar can be chosen as the Enterprise Calendar. For example, if you have two data sources, PeopleSoft and Oracle, then you can only choose either a calendar from PeopleSoft or a calendar from Oracle as the Enterprise Calendar. The two parameters GBL_CALENDAR_ID and GBL_DATASOURCE_NUM_ID should be set in BIACM according to the calendar that you choose. Never provide more than one value for GBL_CALENDAR_ID or GBL_DATASOURCE_NUM_ID in BIACM; that would cause the ETL run to fail.

Configure summary calendar file (file_summary_calendar.csv) for PeopleSoft ETL

Author: Chuan Shi

Purpose and Audience

This article explains how to configure an ETL source file for time dimension from PeopleSoft (PSFT) adaptors, i.e., file_summary_calendar.csv. This file is needed to populate the fiscal quarter information for fiscal calendars sourced from PSFT. It applies to all Oracle BI Apps customers who use PSFT sources.

Oracle BI Apps supports calendars of different natures, including Gregorian calendar, fiscal calendar, and enterprise calendar. Fiscal calendars are stored in W_MCAL_%_D data warehouse tables, where W_MCAL_PERIOD_D is the most important table (loaded from the corresponding staging table W_MCAL_PERIOD_DS). Among many attributes of this table is the MCAL_QTR column which stores fiscal quarter information. The mapping logic of this column for PSFT OLTPs requires some special design. In what follows, we explain what needs to be done to ensure MCAL_QTR being populated correctly.

Populate MCAL_QTR (fiscal quarter information) for PSFT sourced fiscal calendars

In PSFT, there are concepts of detailed calendars (stored in PS_CAL_DETP_TBL) and summary calendars (stored in PS_CAL_SUMP_TBL). Definitions of all calendars are stored in PS_CAL_DEFN_TBL, where one can check if a calendar is a detailed or summary calendar.

Detailed fiscal calendars are loaded into W_MCAL_%_D tables. However, the detailed calendars by themselves do not have information about fiscal quarter. Therefore, for MCAL_QTR, an ETL logic is used to correctly allocate each period of a given detailed fiscal calendar to one of the four fiscal quarters (so that its MCAL_QTR is populated), by using a summary calendar that corresponds to the detailed calendar. As a result, the prerequisite of this ETL logic is to correctly connect a detailed calendar to a summary calendar. However, such dependency is not available from the PSFT sources, and it is a customer decision that determines which detailed calendar should be associated with which summary calendar.

Therefore, in our design, we create a source file called file_summary_calendar.csv. Customers are required to configure this csv file by associating each applicable detailed calendar to an appropriate summary calendar (i.e., each row in the file is a relationship between a paired detailed and summary calendar). This csv file is loaded into a DW table called W_MCAL_PSFT_SUMP_CONFIG_G. Consequently, this table contains relationships of different detailed calendars and their summary calendars. It is in turn used to help populate MCAL_QTR in W_MCAL_PERIOD_DS.

To summarize, in order to have MCAL_QTR correctly populated for a detailed calendar, you need:

  1. A detailed calendar defined in PSFT source;
  2. A summary calendar defined in PSFT source that is appropriate for the detailed calendar;
  3. Put an entry of the pair of detailed and summary calendar in file_summary_calendar.csv file.

Examples of how to configure the csv file are provided in the appendix.

Commonly known reasons for MCAL_QTR being unpopulated or populated incorrectly

A detailed calendar and its corresponding summary calendar should be synchronized perfectly in the PSFT OLTP source data, and their pair should be put in this csv file for MCAL_QTR to be populated correctly. However, not all detailed calendars need to be put into this csv file, because the fiscal quarter (MCAL_QTR) concept does not apply to some detailed calendars by nature (examples are provided below). Unsynchronized detailed and summary calendars lead to MCAL_QTR being populated incorrectly or left unpopulated.

There are four commonly known reasons why MCAL_QTR is unpopulated or incorrectly populated for a fiscal period. They are due either to source data issues or to the configuration of the csv file. The four cases are:

  • Case 1: The fiscal quarter concept is not applicable to a detailed calendar. For example, if a detailed calendar is a year level calendar, then each row of this calendar in W_MCAL_PERIOD_DS is a fiscal year. It does not have a granularity lower than year. Therefore, it is not possible (or does not make sense) to populate quarter information to this calendar.

  • Case 2: Not all the years of a detailed calendar are covered by the corresponding summary calendar. Therefore, MCAL_QTR is null for the periods in those uncovered years. For example, the detailed calendar covers from fiscal year 1990 to 2020, while its paired summary calendar only covers from fiscal year 2001 to 2020. In this case, the periods between 1990 and 2000 will not have MCAL_QTR populated.

  • Case 3: The summary calendar is not suitable for that detailed calendar. This means that the quarter allocation ETL logic fails to correctly allocate periods of the detailed calendar into one of the four fiscal quarters.

  • Case 4: Entries of appropriate detailed and summary calendars are missing in the .csv file.

I provide below several examples of detailed calendars. Refer to the Comment column to see if they should be put in the csv file along with an appropriate summary calendar.

Detailed Calendar Examples:

When you have to create a summary calendar for one or more detailed calendars (so that they can be put in the csv file), the key is that, in any given year, all the periods of the corresponding detailed calendar (so in a weekly calendar there are 52 or 53 periods) must be correctly allocated to the 4 fiscal quarters. This requires the summary calendar to have correct values in DETL_PERIOD_FROM and DETL_PERIOD_TO of PS_CAL_SUMP_TBL. For example, if you want to create a summary calendar for all your detailed weekly calendars, then the data entries in PS_CAL_SUMP_TBL for a given year (say 1990) should look like the sketch below.
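(A simplified sketch: only the period-allocation columns of PS_CAL_SUMP_TBL are shown, with abbreviated column headings, and a 53-week year is assumed.)

FISCAL_YEAR   SUMMARY_PERIOD (quarter)   DETL_PERIOD_FROM   DETL_PERIOD_TO
1990          1                          1                  13
1990          2                          14                 26
1990          3                          27                 39
1990          4                          40                 53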

This is because this summary calendar will be used to assign fiscal quarters to those weekly detailed calendars, and such a calendar has 52 or 53 periods (i.e., 52 or 53 weeks) in a given fiscal year. So, the DETL_PERIOD_FROM and DETL_PERIOD_TO values indicate that periods 1-13 (of a weekly detailed calendar) belong to fiscal quarter 1, periods 14-26 belong to fiscal quarter 2, etc. As you can see from this example, this summary calendar can be used for multiple weekly detailed calendars. Therefore, you don’t have to create different summary calendars for different weekly detailed calendars. However, for each detailed weekly calendar, you have to put an entry for it, together with this summary calendar, in the csv file.

As a comparison, if instead you need to create a summary calendar used for monthly detailed calendars, you may expect its DETL_PERIOD_FROM and DETL_PERIOD_TO (for a given year, say 1990) to look like the sketch below.
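(Again a simplified sketch with the same abbreviated columns.)

FISCAL_YEAR   SUMMARY_PERIOD (quarter)   DETL_PERIOD_FROM   DETL_PERIOD_TO
1990          1                          1                  3
1990          2                          4                  6
1990          3                          7                  9
1990          4                          10                 12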

This is because this summary calendar is used to assign quarters to those monthly detailed calendars, and a monthly calendar only has 12 periods (i.e., 12 months) in a given year. So, the DETL_PERIOD_FROM and DETL_PERIOD_TO values allow periods 1-3 (of a detailed monthly calendar) to have fiscal quarter 1, periods 4-6 to have fiscal quarter 2, etc. Again, you can use this summary calendar for multiple monthly detailed calendars that you may have, but each of them requires an entry in the csv file.

Appendix: Configuring file_summary_calendar.csv

This csv file has the following header:

Description,Contains static fiscal period (weeks) information,,,,,,,,,,,,

Examples,The flat file can provide data for the FISCAL_YEAR- FISCAL_MONT- FISCAL_WEEK_NUM and FISCAL_WEEK_START_DT as 1996- 03- 14- and 03121996,,,,,,,,,,,,

Last Updated By,Siebel Analytics R&D,,,,,,,,,,,,

Last Updated On,10/17/2001 20:13,,,,,,,,,,,,

DETAIL_CALENDAR_SETID,DETAIL_CALENDAR_ID,SUMMARY_CALENDAR_SETID_QTR,

SUMMARY_CALENDAR_QTR,SUMMARY_CALENDAR_SETID_YEAR,SUMMARY_CALENDAR_YEAR,

SUMMARY_CALENDAR_SETID_MONTH,SUMMARY_CALENDAR_MONTH,SUMMARY_CALENDAR_SETID_HALF,

SUMMARY_CALENDAR_HALF,W_INSERT_DT,W_UPDATE_DT,TENANT_ID,X_CUSTOM

The first four columns are the most important ones. Suppose that you have a monthly detailed calendar whose SETID = 'SHARE' and CALENDAR_ID = '01'; as well as a weekly detailed calendar whose SETID = 'SHARE' and CALENDAR_ID = '02'. Their corresponding summary calendars have SETID = 'SHARE' and CALENDAR_ID = 'QM', and SETID = 'SHARE' and CALENDAR_ID = 'QW', respectively. Then, you need to put the following two entries in the csv file (after the header):

SHARE,01,SHARE,QM,,,,,,,,,,

SHARE,02,SHARE,QW,,,,,,,,,,

And therefore, your final csv file becomes:

Description,Contains static fiscal period (weeks) information,,,,,,,,,,,,

Examples,The flat file can provide data for the FISCAL_YEAR- FISCAL_MONT- FISCAL_WEEK_NUM and FISCAL_WEEK_START_DT as 1996- 03- 14- and 03121996,,,,,,,,,,,,

Last Updated By,Siebel Analytics R&D,,,,,,,,,,,,

Last Updated On,10/17/2001 20:13,,,,,,,,,,,,

DETAIL_CALENDAR_SETID,DETAIL_CALENDAR_ID,SUMMARY_CALENDAR_SETID_QTR,

SUMMARY_CALENDAR_QTR,SUMMARY_CALENDAR_SETID_YEAR,SUMMARY_CALENDAR_YEAR,

SUMMARY_CALENDAR_SETID_MONTH,SUMMARY_CALENDAR_MONTH,SUMMARY_CALENDAR_SETID_HALF,

SUMMARY_CALENDAR_HALF,W_INSERT_DT,W_UPDATE_DT,TENANT_ID,X_CUSTOM

SHARE,01,SHARE,QM,,,,,,,,,,

SHARE,02,SHARE,QW,,,,,,,,,,

Tuesday Nov 26, 2013

Introduction and Tips of Oracle BI Apps Variable Refresh

Author: Chuan Shi

Introduction

The ETL logic in BI Apps uses parameters in packages, interfaces, load plans, and knowledge modules (KM) to control the ETL behaviors. Parameters can be configured by customers, pre-seeded in the ETL code, or maintained internally:

  • Data Load parameters that can be configured by customers are maintained at product line (PLV) and fact/dimension group levels in the BI Apps Configuration Manager (BIACM).
  • Some parameters are pre-seeded in the ETL code OOTB, e.g. DATASOURCE_NUM_ID.
  • Other parameters are used internally by the load plan to control the execution of the load plan, e.g. EXECUTION_ID.

ETL parameters are handled by using ODI variables. The purpose of this blog is to explain ODI variable refreshing to ensure that correct values of variables are used in Oracle BI Apps ETL.

ODI Variable Classification

To ensure correct values to be used in ETL, variables are refreshed at run time. A variable can be refreshed in either a package or a load plan. Variables can be classified into four categories based on whether and where they are refreshed in ODI (and if a variable needs to be refreshed, then there must be a refreshing logic specified for it, which will be discussed later).


The four categories are:

  1. Not refreshed

Definition: Generally, internal parameters used by the load plan to control the ETL execution, and therefore they shall not be refreshed

Examples: DATASOURCE_NUM_ID, EXECUTION_ID

  2. Refreshed in the package

Definition: Variables that are not feasible to refresh in load plans. (The refreshing logic of such a variable depends on ETL run facts. It sources from DW tables and uses the QUALIFY user-defined function (UDF). ODI fails to interpret QUALIFY at the load plan level in some cases.)

Examples: IS_INCREMENTAL, LAST_EXTRACT_DATE

  3. Hardcoded in the load plan

Definition: Variables whose values are hardcoded by overwriting in load plans. The variables will take the hardcoded values in ETL.

Examples: DOMAIN_CODE, LOOKUP_TYPE

  4. Refreshed in the load plan

Definition: Variables whose values are configured in BIACM by customers. (In other words, the values of these variables come from BIACM. The refreshing logic of such a variable uses a UDF to extract its value from BIACM.)

Examples: UPDATE_ALL_HISTORY, TYPE2_FLG

For a complete list of ODI variables, please refer to this post.

Refreshing Variables in a Package

Refreshing variables in a package is straightforward. One needs to create Refresh Variable steps for the variables to be refreshed. The screenshot below shows examples of refreshing IS_INCREMENTAL.

Hard-coding Variables in a Load Plan

Some variables, such as DOMAIN_CODE and LOOKUP_TYPE are hardcoded in load plan components. To do that, go to the load plan component to which the variables are to be hardcoded, select the Overwrite checkbox and provide the hardcode values for the variables. The screenshot below shows examples of hardcoding DOMAIN_CODE and LOOKUP_TYPE.

Refreshing Variables from BIACM

BIACM is a central UI where customers can define the values of data load parameters (i.e., ODI variables), among many other features offered. ODI variables that are refreshed in load plans have their values extracted from BIACM. We also refer to such variables as BIACM variables.

BIACM Variables are classified into truly global variables, PLV specific variables, and fact/dimension group level variables.

  • A truly global variable (e.g., 13P_CALENDAR_ID) is a variable that has the same value in all product lines (i.e., EBS11510, PSFT90, etc) and for all fact/dimension groups. Truly global variables are refreshed centrally in the load plan system components.
  • A PLV specific variable (e.g., LANGUAGE_BASE) is a variable that takes the same value for all fact/dimension groups within a product line, but different values of the variable can be used in different production lines. They are refreshed individually in consuming load plan dev components.
  • A fact/dimension group level variable (e.g., UPDATE_ALL_HISTORY) is group specific. It can take different values in different fact/dimension groups within the same PLV and across different PLVs. They are refreshed individually in consuming load plan dev components.

From a variable value overriding perspective:

  • A truly global variable has a unique value. PLVs and fact/dimension groups cannot override its value.
  • A PLV variable has product line specific values (e.g., LANGUAGE_BASE takes the value of US in EBS product lines but ENG in PSFT product lines). The value is the same for all fact/dimension groups within that product line.
  • A fact/dimension group level variable has group-specific values (e.g., TYPE2_FLG has the value Yes in Position Dimension Hierarchy, while it has the value No in Asset Dimension). Such a variable also has a global default value: if a fact/dimension group does not specify a value for the variable, the global default value is used whenever the variable is called by that group (e.g., the global default value of TYPE2_FLG is No). A small sketch of this resolution order follows the list.
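The sketch below (Python, hypothetical; the real resolution happens inside BIACM and the getcmparam() UDF, not in code like this) restates the resolution order: a truly global variable has one value, a PLV variable resolves per product line, and a group level variable resolves to its group-specific value or else to its global default.

# Hypothetical illustration of how a BIACM variable value is resolved.
# The real lookup is done by BIACM / the getcmparam() UDF; this only
# mirrors the overriding rules described above.
def resolve_biacm_value(var, product_line=None, group=None):
    if var["scope"] == "global":
        # Truly global: one value; PLVs and groups cannot override it.
        return var["global_value"]
    if var["scope"] == "plv":
        # PLV specific: one value per product line, same for all groups.
        return var["plv_values"][product_line]
    # Group level: group-specific value, else the global default.
    return var["group_values"].get(group, var["global_default"])

# Example mirroring TYPE2_FLG from the text:
type2_flg = {
    "scope": "group",
    "group_values": {"Position Dimension Hierarchy": "Yes", "Asset Dimension": "No"},
    "global_default": "No",
}
print(resolve_biacm_value(type2_flg, group="Position Dimension Hierarchy"))  # Yes
print(resolve_biacm_value(type2_flg, group="Some Other Group"))              # No (global default)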

These variables are defined in ODI. To ensure that variable refreshing works correctly, there are some rules on the definitions of ODI variables:

  • Set the ‘Keep History’ option to ‘No History’;
  • Always provide a default value (the default value is used if refreshing from BIACM does not return a value for some reason; without it, the ETL will fail). As a good practice, the ODI default value of the variable can be set to the same value as the global value of the variable in BIACM.

(Whenever the Keep History option or the Default Value of a variable is changed, the scenarios that use this variable need to be regenerated.)

Once ODI variables are defined, a refreshing logic is needed to refresh them from BIACM. In this regard,

  • The ODI UDF GET_CM_PARAM is used.
  • To return the correct value for a variable, we need to specify the following in the refreshing logic:
      • the variable name;
      • the product line;
      • the fact/dimension group.
  • Syntax: getcmparam($(param_code),$(DATASOURCE_NUM_ID))
      • $(param_code) is the name of the variable (e.g., TYPE2_FLG).
      • $(DATASOURCE_NUM_ID) is used to specify the product line.

For PLV/group level variables, we pass #DATASOURCE_NUM_ID as $(DATASOURCE_NUM_ID);

e.g., getcmparam('TYPE2_FLG','#DATASOURCE_NUM_ID')

For truly global variables, we pass #WH_DATASOURCE_NUM_ID as a pseudo-PLV ID.

e.g., getcmparam('13P_CALENDAR_ID','#WH_DATASOURCE_NUM_ID')

  • Do not pass the fact/dimension group directly into the syntax; it is determined by where the variable is refreshed (see the sketch after this list).
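As a recap of the syntax, the short sketch below (Python, with a hypothetical helper name) simply assembles the two getcmparam() expressions shown above, depending on whether the variable is truly global:

# Hypothetical helper that assembles the getcmparam() refreshing expression.
# It only illustrates which argument goes where; it is not an ODI API.
def getcmparam_expression(param_code, truly_global=False):
    # Truly global variables use the pseudo-PLV id #WH_DATASOURCE_NUM_ID;
    # PLV / group level variables use #DATASOURCE_NUM_ID.
    plv_ref = "#WH_DATASOURCE_NUM_ID" if truly_global else "#DATASOURCE_NUM_ID"
    return "getcmparam('{0}','{1}')".format(param_code, plv_ref)

print(getcmparam_expression("TYPE2_FLG"))
# getcmparam('TYPE2_FLG','#DATASOURCE_NUM_ID')
print(getcmparam_expression("13P_CALENDAR_ID", truly_global=True))
# getcmparam('13P_CALENDAR_ID','#WH_DATASOURCE_NUM_ID')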

BIACM variables are refreshed in load plans. To refresh a variable in a load plan, the following three steps are required (they have been done OOTB):

Step 1: Specify the correct logical schema and refreshing logic in the refreshing tab of the variable definition.

The logical schema has to be CM_BIAPPS11G.

The refreshing logic should be getcmparam() with appropriate inputs, e.g.,

getcmparam('13P_CALENDAR_ID','#WH_DATASOURCE_NUM_ID')

Step 2: Update the variable definition in the variables tab of the load plan.

Go to the load plan component where you want to refresh the variables. In the Variables tab, right click on the variables and select ‘Refresh Variable Definition’ so that the variable definition in the LP is synchronized with its real definition. Once this is done, verify that the logical schema is showing CM_BIAPPS11G, and the select statement is showing the embedded SQL in the getcmparam() function.

Step 3: Check the refreshing checkbox at the appropriate LP step.

For truly global variables, Step 3 is instead performed at the load plan system component steps, where these variables are refreshed centrally:


The logic behind getcmparam() guarantees that the appropriate value of the variable is returned from BIACM, given the name of the variable, the DATASOURCE_NUM_ID passed in, and the LPC step where it is refreshed.

Values stored in BIACM are strings, so all ODI variables refreshed from BIACM come in as strings. Each piece of consuming code (where the variables are used) must convert the data type accordingly. For example, dates are returned as strings in the format yyyy-mm-dd hh:mi:ss; the TO_DATE_VAR UDF is used to convert the returned string to DATE format. Number values are returned as strings as well.
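The Python snippet below mimics the kind of conversion the consuming code has to do (the actual TO_DATE_VAR UDF is an ODI construct, not Python; the date format string matches the yyyy-mm-dd hh:mi:ss layout mentioned above):

from datetime import datetime

# Illustration only: BIACM returns every value as a string, so consuming
# code must convert explicitly. In ODI this is what the TO_DATE_VAR UDF
# is used for; here we simply mimic the conversions in Python.
def to_date_var(value):
    # Dates come back as 'yyyy-mm-dd hh:mi:ss'
    return datetime.strptime(value, "%Y-%m-%d %H:%M:%S")

def to_number_var(value):
    # Numbers also come back as strings
    return float(value)

print(to_date_var("2014-03-01 00:00:00"))  # 2014-03-01 00:00:00
print(to_number_var("101"))                # 101.0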

Checklist when Things Go Wrong

What can go wrong?

  • The value of a variable used in ETL is not in line with expectation.
  • A refreshed variable has no value returned, and this fails the ETL run.

Overriding Rule (1)

  • In a load plan, when a variable is refreshed in the parent step (e.g., the root step), its value will be inherited by all its child steps, unless this variable is refreshed/overwritten in a child step.

  • However, if a variable is refreshed and/or overwritten in a child step, the value refreshed in that step will override the value refreshed in the parent step. Other child steps at the same level will NOT be affected; they will still inherit the value refreshed in the parent step.

Overriding Rule (2) (unlikely to happen but it exists)

If a variable is refreshed both in a package and in a load plan, then the value refreshed from the package will override the value refreshed from the load plan.
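The two rules boil down to a simple precedence order, summarised in the conceptual sketch below (Python, not ODI behaviour code):

# Conceptual illustration of the two overriding rules above (not ODI code).
def effective_value(parent_step_value=None, child_step_value=None, package_value=None):
    # Rule (2): a value refreshed in a package wins over the load plan.
    if package_value is not None:
        return package_value
    # Rule (1): a child step refresh/overwrite wins over the parent step;
    # sibling steps keep inheriting the parent value.
    if child_step_value is not None:
        return child_step_value
    return parent_step_value

print(effective_value(parent_step_value="A"))                        # A (inherited)
print(effective_value(parent_step_value="A", child_step_value="B"))  # B (rule 1)
print(effective_value(parent_step_value="A", package_value="C"))     # C (rule 2)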

When the value of a variable returned from BIACM is not in line with expectation:

  • Confirm where the variable is refreshed, e.g., BIACM? ETL tables in DW? etc.
  • For BIACM PLV or group level variables:
      • Check its value(s) in the BIACM UI. For PLV variables, check the value in each product line; for group level variables, check the group-specific values as well as the global default value.
      • Check whether the variable is refreshed in a root step of a load plan (refresh checkbox checked). Also check whether the root step is named after a fact/dim group.
      • Check whether the variable is incorrectly refreshed or hardcoded in a child step belonging to the root step (to avoid overriding rule 1).
      • Check the ODI default value of this variable. If BIACM returns (null), i.e., nothing, for this variable, its ODI default value will be used. Also, if we check the overwrite box (but not the refresh checkbox) of a variable in a load plan step but forget to provide the value, then the ODI default value will be used.
      • Note: overriding rule (2) is unlikely to happen to BIACM variables.
  • For variables refreshed from ETL tables in the DW, an incorrect value likely indicates a run issue. Check the run of that specific task.

When a variable has no value:

  • Confirm where the variable should be refreshed, e.g., BIACM? ETL tables in DW? etc.
  • In rare cases, a variable may not have a value returned when it is refreshed, and this leads to ETL failures.
  • ODI behaves like this: it first refreshes the variable from its designated source (e.g., BIACM). If the source returns (null), i.e., nothing, for this variable, the ODI default value of the variable is used in ETL. However, if no ODI default value is provided, then the variable will not have a value, as sketched below.
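The fallback can be restated in a few lines (Python, conceptual only):

# Conceptual restatement of the refresh fallback described above (not ODI code).
def refreshed_value(source_value, odi_default=None):
    # ODI first refreshes the variable from its designated source (e.g., BIACM).
    if source_value is not None:
        return source_value
    # If the source returns (null), the ODI default value is used.
    if odi_default is not None:
        return odi_default
    # No source value and no ODI default: the variable has no value.
    raise ValueError("variable has no value; the ETL run would fail here")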

Tuesday Nov 19, 2013

Tips and Usage of Group Account Number Configuration

Author: Sridhar Kasam

Introduction

The general concept discussed in this blog applies to all OBIA releases. The Configuration Manager screenshots shown are only applicable starting with release 11.1.1.7.0.

When you implement Financial Analytics, one of the most important steps is to set up Group Account Numbers. Many issues are reported due to incorrect setup of Group Account Numbers.

- What is Group Account Number?

o Group Account Number logically groups GL natural accounts into reportable group(s), so users can view financial reports at a higher level than that of a GL natural account.


Example: Assign natural account range 1210 – 1240 to Group Account Number “AR” (Accounts Receivable).

CHART OF ACCOUNTS ID | FROM ACCT | TO ACCT | GROUP_ACCT_NUM
101                  | 1110      | 1110    | CASH
101                  | 1210      | 1240    | AR
101                  | 1280      | 1280    | OTHER CA
101                  | 1340      | 1340    | PPAID EXP
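To make the range assignment concrete, the small Python sketch below looks up the group account number for a natural account using the sample ranges above (purely illustrative; this is not the OBIA ETL logic):

# Illustrative lookup of a group account number from the sample ranges above.
GROUP_ACCT_RANGES = [
    # (chart_of_accounts_id, from_acct, to_acct, group_acct_num)
    (101, 1110, 1110, "CASH"),
    (101, 1210, 1240, "AR"),
    (101, 1280, 1280, "OTHER CA"),
    (101, 1340, 1340, "PPAID EXP"),
]

def group_account(coa_id, natural_account):
    for coa, low, high, group in GROUP_ACCT_RANGES:
        if coa == coa_id and low <= natural_account <= high:
            return group
    return None  # account not assigned to any group

print(group_account(101, 1225))  # AR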

o Group Account Number is used to establish the relationship between GL natural accounts and Financial Statement Item Codes. Financial Statement Item Codes are mapped to base fact tables for the GL reconciliation process.

Example: Group Account Number, Financial Statement Item Code & Base Fact Table

GROUP_ACCT_NUM | FIN_STMT_ITEM_CODE | BASE FACT TABLE
ACC DEPCN      | OTHERS             | W_GL_OTHER_F
ACC LIAB       | OTHERS             | W_GL_OTHER_F
AP             | AP                 | W_AP_XACT_F
AR             | AR                 | W_AR_XACT_F
CASH           | OTHERS             | W_GL_OTHER_F
CMMN STOCK     | OTHERS             | W_GL_OTHER_F
COGS           | COGS               | W_GL_COGS_F
REVENUE        | REVENUE            | W_GL_REVN_F

- How to configure Group Account Number(s) and Financial Statement Item Code(s)?

o Group Account Numbers are defined / configured through a configurable csv file. For Oracle the file name is file_group_acct_codes_ora.csv; for PeopleSoft, it is file_group_acct_codes_psft.csv; and for JDE, it is file_group_acct_codes_jde.csv. Users specify which GL natural accounts are assigned to a particular group account within a chart of accounts, GL Business Unit, or company for Oracle, PeopleSoft and JDE sources respectively. Please ensure that the account ranges are continuous and do not overlap (a simple sanity check is sketched after this list). If necessary you can assign multiple ranges and/or individual accounts to the same group account. Additionally, please ensure that the accounts provided are all leaf accounts / ranges and not parent nodes / ranges.

o By mapping GL accounts to group account numbers and then associating group accounts with a Financial Statement Item Code, an indirect association is also built between GL accounts and Financial Statement Item Codes. In BI APPS 11.1.1.7.1 or later, the association of group account numbers to Financial Statement Item Codes is performed in Oracle BI Applications Configuration Manager, while in earlier releases such as 7.9.6.3 this configuration was done using the configurable file file_grpact_fstmt.csv.

Note: Customers should not modify the mapping relationship between Group Account Number and Financial Statement Item Code for Group Account Numbers AP, AR, COGS and REVENUE.

o It is not always necessary to assign all out-of-the-box group account numbers to account ranges. If customers are not planning to use the logical metrics under GL that use Group Account Number as part of their metric definitions, they do not need to configure the majority of the Group Account Numbers. However, if they are still planning to implement the Payables, Receivables, Revenue, or COGS facts, they still need to configure Group Account Numbers AP, AR, REVENUE, or COGS, because these Group Account Numbers are also used in the GL reconciliation process against these facts.

o If for some reason, out of the box group account numbers are not sufficient, then the high level steps to add additional group accounts are (11g):

§ Define new Group Account Numbers in Configuration Manager

§ Assign the newly created Group Account Number to a Financial Statement Item Code (e.g. AP, AR, REVENUE, COGS, OTHERS).

§ Assign GL accounts to the new Group Account in the source-specific csv file.

§ Model the RPD to include a new group account measure and expose it in the presentation layer.
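Returning to the csv ranges discussed above, a quick pre-load sanity check can catch overlapping ranges before they cause reporting issues. The sketch below is a hypothetical Python check (not a shipped OBIA utility) and assumes the Oracle-style columns shown in the earlier example (CHART OF ACCOUNTS ID, FROM ACCT, TO ACCT, GROUP_ACCT_NUM):

import csv

# Hypothetical sanity check for file_group_acct_codes_ora.csv: report any
# overlapping account ranges within the same chart of accounts.
def find_overlaps(csv_path):
    ranges = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            ranges.append((row["CHART OF ACCOUNTS ID"],
                           int(row["FROM ACCT"]), int(row["TO ACCT"]),
                           row["GROUP_ACCT_NUM"]))
    # Sort by chart of accounts, then range start, and compare neighbours.
    ranges.sort()
    overlaps = []
    for (coa1, lo1, hi1, g1), (coa2, lo2, hi2, g2) in zip(ranges, ranges[1:]):
        if coa1 == coa2 and lo2 <= hi1:
            overlaps.append(((coa1, lo1, hi1, g1), (coa2, lo2, hi2, g2)))
    return overlaps

# Example: find_overlaps("file_group_acct_codes_ora.csv") returns an empty
# list when the configured ranges do not overlap.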

- Why is it important?

Group Account Number configuration is important because it ensures that the right GL accounts are assigned to the group account number(s). In other words, it determines the accuracy of most Financial Analytics reports in which Group Account Number is part of the report. Group Account Numbers, in combination with Financial Statement Item Codes, are also leveraged in the GL reconciliation process to ensure that sub-ledger data reconciles with GL Journal entries.

Group Account Number Usage in Financial Analytics

- GL reports

o For “Accounts Receivable” in balance sheet reports, we calculate the total amount for GL natural accounts 1210 through 1240 for ledgers with chart of accounts id 101 (a toy illustration follows after this list).

o Because of this association and the subsequent configuration to expose the group account number metric in the RPD, users can now report on Group Account Number metrics. For example, in a Balance Sheet report, users can report on “AR Amount” from “Fact – Fins – GL Balance” (logical fact) corresponding to the “AR” group account.
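As a toy illustration of that roll-up (not the actual RPD or ETL logic), summing GL natural account balances into the “AR” group account could look like this:

# Toy illustration: roll GL natural account balances up to the "AR" group
# account (natural accounts 1210-1240, chart of accounts 101). The balance
# figures here are made up; this is not the actual RPD or ETL logic.
balances = {1210: 40.0, 1220: 25.0, 1240: 35.0, 1340: 10.0}

ar_amount = sum(amount for acct, amount in balances.items() if 1210 <= acct <= 1240)
print(ar_amount)  # 100.0, reported as "AR Amount" in the Balance Sheet report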

- Reconciliation

o Because of the indirect relationship built between GL accounts and Financial Statement Item Codes, it is now possible to perform GL reconciliation to ensure that the sub-ledger data reconciles with GL Journal entries. For example, after an invoice has been transferred to GL, the user might decide to adjust the invoice in GL. In that case, it is important that the adjustment amount trickles down to the sub-ledger base fact along with the balance fact. To determine such sub-ledger transactions in GL, the reconciliation process uses Financial Statement Item Codes.