Monday Apr 21, 2014

How does TYPE2_FLG Work in ETL

Author: Vivian(Weibei) Li


TYPE2_FLG is usually used with slowly changing dimensions in BI Applications. This flag indicates whether a dimension is type 2, and it determines how data is stored during ETL. This post gives you a better understanding of how TYPE2_FLG works in ETL.


Slowly Changing Dimensions

There are many fundamental dimensions, such as Customer, Product, Location and Employee, in BI Applications. The attributes in these dimensions are revised from time to time. Sometimes the revised attributes merely correct an error in the data, but many times they represent a true change at a point in time. These changes arrive unexpectedly, sporadically and far less frequently than fact table measurements, so we call this topic slowly changing dimensions (SCDs).

For slowly changing dimension (SCD) entities like Employee, Customer, Product and others, we must decide how historical changes in the dimension tables are handled and how to respond to changes. Three different kinds of responses are used: slowly changing dimension (SCD) Types 1, 2 and 3.

Type 1: Overwrite the attributes with new changes

Type 2: Add a New Dimension Record

Type 3: Add a New Field

This blog covers Type 2. In the Type 2 SCD model, the whole history is stored in the database: an additional dimension record is created for each change, the segmentation between the old record values and the new (current) value is easy to extract, and the history is clear. A minimum of three additional columns should be added to a dimension row with Type 2 changes: 1) row effective date or date/time stamp (EFFECTIVE_FROM_DT); 2) row expiration date or date/time stamp (EFFECTIVE_TO_DT); and 3) current row indicator (CURRENT_FLG).


SRC_EFF_FROM_DT and EFFECTIVE_FROM_DT are different concepts, though they have similar names. We have seen many customers get confused about these two columns.

SRC_EFF_FROM_DT is extracted from the effective start date of the source (mainly from the main driving source) if the source keeps history. If the source does not store history, or the history is not extracted, it is hard coded to #LOW_DATE.

EFFECTIVE_FROM_DT is a system column in the dimension table used to track history. Remember that we use knowledge modules (KMs) for repeatable logic that can be reused across ETL tasks; updating the SCD-related columns, such as EFFECTIVE_FROM_DT, is usually handled by the KM. EFFECTIVE_FROM_DT is modified when a new type 2 record is inserted in an incremental run, usually to the changed-on date from the source. EFFECTIVE_FROM_DT does not always map to the source effective dates.

In type 2 SCD model, EFFECTIVE_FROM_DT is the date used to track the history.

TYPE2_FLG in BI Application

TYPE2_FLG is a flag that indicates whether a dimension is type 2. It is used in many dimensions in BI Applications, such as Employee, User and Position. This flag is very important because it determines the history storing behavior.

TYPE2_FLG has two values: ‘Y’ and ‘N’. ‘Y’ means the dimension is type 2, and ‘N’ means the dimension is type 1. Type 2 dimensions store history, while type 1 dimensions only store the current record.

For example, suppose the supervisor for an employee is changed from Peter to Susan on 01/02/2012:

Type 1

  SUPERVISOR_NAME = Susan (the old value, Peter, is overwritten; only the current record is kept)

Type 2

  Old record: SUPERVISOR_NAME = Peter, EFFECTIVE_TO_DT = 01/02/2012, CURRENT_FLG = 'N'
  New record: SUPERVISOR_NAME = Susan, EFFECTIVE_FROM_DT = 01/02/2012, CURRENT_FLG = 'Y'

As shown above, a type 1 dimension overwrites the supervisor with the new supervisor and only stores the current record. A type 2 dimension inserts a new record with the new supervisor name and keeps the old record as history. EFFECTIVE_FROM_DT, EFFECTIVE_TO_DT and CURRENT_FLG are modified accordingly: for the old record, EFFECTIVE_TO_DT is changed to 01/02/2012 and CURRENT_FLG is set to ‘N’; for the new record, CURRENT_FLG is set to ‘Y’ with the new EFFECTIVE_FROM_DT.
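The behavior described above can be sketched in a few lines of Python. This is an illustrative sketch, not the actual ODI knowledge module code; the column names follow the blog, and the "high" end date used for the current row is a placeholder assumption.

```python
from datetime import date

HIGH_DATE = date(9999, 12, 31)  # placeholder "never expires" end date (assumption)

def apply_change(dim_rows, new_supervisor, change_date, type2_flg):
    """Apply a changed attribute to a dimension, honoring TYPE2_FLG."""
    if type2_flg == 'N':
        # Type 1: overwrite in place; only the current value is kept.
        for row in dim_rows:
            if row['CURRENT_FLG'] == 'Y':
                row['SUPERVISOR_NAME'] = new_supervisor
        return dim_rows
    # Type 2: expire the current row, then insert a new current row.
    for row in dim_rows:
        if row['CURRENT_FLG'] == 'Y':
            row['EFFECTIVE_TO_DT'] = change_date
            row['CURRENT_FLG'] = 'N'
    dim_rows.append({
        'SUPERVISOR_NAME': new_supervisor,
        'EFFECTIVE_FROM_DT': change_date,
        'EFFECTIVE_TO_DT': HIGH_DATE,
        'CURRENT_FLG': 'Y',
    })
    return dim_rows

rows = [{'SUPERVISOR_NAME': 'Peter',
         'EFFECTIVE_FROM_DT': date(2010, 1, 1),
         'EFFECTIVE_TO_DT': HIGH_DATE,
         'CURRENT_FLG': 'Y'}]
apply_change(rows, 'Susan', date(2012, 1, 2), type2_flg='Y')
# The Peter row is expired (CURRENT_FLG='N', EFFECTIVE_TO_DT=01/02/2012)
# and a new current Susan row is inserted.
```

With type2_flg='N' the same call would simply overwrite Peter with Susan, leaving a single row and no history.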

How to Set Up TYPE2_FLG

The out-of-the-box code in BI Applications sets the default values. For type 2 dimensions, the flag is usually set to ‘Y’.

TYPE2_FLG can be configured in BIACM. This variable is configured per dimension group.

The screenshot above shows that you can configure the value of this flag for different dimension groups by clicking the parameter value and overwriting it with a different value.

Note: You can only configure the TYPE2_FLG for the dimension groups that are in this BIACM list. The dimension groups that are not in the list cannot be configured.

You should set the value of TYPE2_FLG carefully. If you override TYPE2_FLG to ‘N’ for a type 2 dimension, you may run into issues. More details are described in the next section.

Possible Issues Related to TYPE2_FLG

As mentioned earlier, the value of TYPE2_FLG may for some reason be set to ‘N’ for a type 2 dimension. This may cause issues.

In BI Applications, the SDE mapping brings history from the source in the initial full load in some adapters, such as EBS. TYPE2_FLG affects the storing behavior for these historical records. The following compares the behaviors when TYPE2_FLG is set to ‘Y’ versus ‘N’ for a type 2 dimension.

Case 1-TYPE2_FLG = ‘Y’

Let’s take the employee dimension (a type 2 dimension) as an example.

When the data is loaded into the data warehouse in the initial full run, both rows (including the historical record #1) are loaded. Because TYPE2_FLG is ‘Y’ in this case, the KM, which handles the loading behavior, uses this value to determine that the employee dimension is type 2 and, accordingly, the storing method.

The KM modifies EFFECTIVE_TO_DT and CURRENT_FLG for the two records because TYPE2_FLG = ‘Y’ in this case.

Case 2 - TYPE2_FLG =’N’

This time, TYPE2_FLG is set to ‘N’ for the employee dimension (a type 2 dimension), which is incorrect. The KM treats it as type 1 rather than type 2.

When the data is loaded into the data warehouse, both rows are loaded because the history from the source is extracted. However, because TYPE2_FLG is ‘N’, the KM does not modify EFFECTIVE_TO_DT and CURRENT_FLG accordingly, and this causes issues.

Employee Table in Data Warehouse

As shown above, the two records have overlapping time ranges, and both have CURRENT_FLG = ‘Y’. This can produce duplicates when resolving the employee from the facts. For example, the transaction date 02/04/2013 falls into the time range of both records, so both are extracted, causing duplicates in the facts.
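The duplicate resolution can be reproduced with a small sketch. The data, dates and table layout below are made up for illustration, and sqlite3 stands in for the warehouse database:

```python
import sqlite3

# Two employee dimension rows left in an overlapping state because
# TYPE2_FLG was 'N' during the load: neither row was expired.
conn = sqlite3.connect(':memory:')
conn.execute("""CREATE TABLE w_employee_d (
    integration_id TEXT, supervisor TEXT,
    effective_from_dt TEXT, effective_to_dt TEXT, current_flg TEXT)""")
conn.executemany("INSERT INTO w_employee_d VALUES (?,?,?,?,?)", [
    ('EMP1', 'Peter', '2012-01-01', '9999-12-31', 'Y'),  # never expired
    ('EMP1', 'Susan', '2013-01-01', '9999-12-31', 'Y'),  # overlapping range
])

# A fact row dated 2013-04-02 resolves to BOTH dimension rows.
dups = conn.execute("""
    SELECT COUNT(*) FROM w_employee_d
    WHERE integration_id = 'EMP1'
      AND '2013-04-02' >= effective_from_dt
      AND '2013-04-02' <  effective_to_dt""").fetchone()[0]
print(dups)  # 2 -> the fact join produces duplicate rows

# Quick sanity check: more than one CURRENT_FLG = 'Y' row per
# integration_id signals the problem.
bad = conn.execute("""
    SELECT integration_id FROM w_employee_d
    WHERE current_flg = 'Y'
    GROUP BY integration_id HAVING COUNT(*) > 1""").fetchall()
print(bad)  # [('EMP1',)]
```

The second query is the same check suggested in the debugging section below: look for multiple current rows per natural key.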

How to Debug TYPE2_FLG Issues

As discussed in the previous section, to avoid this kind of issue you should set the value of TYPE2_FLG carefully, and set it to ‘Y’ for out-of-the-box type 2 dimensions.

In addition, when you get duplicates in a fact, you can do the following checks.

  • Check where the duplicates in the fact come from, and identify the problematic dimension if they come from a dimension.
  • Check the data in that dimension for duplicates, to see whether you see loading behavior similar to case 2 in the previous section. You can first simply check whether multiple records have CURRENT_FLG=’Y’.
  • Check the value of TYPE2_FLG in the ODI repository.

1. Open the session log of the task

2. Open ‘Definition’

3. Expand ‘Variable and Sequence Values’

4. Find TYPE2_FLG and check the value

5. If the value is ‘N’ but the dimension is type 2, you may have hit the issue described in the previous section.

I would also like to provide some tips for finding out the type of a dimension. You can find this information in the ODI repository.

  • For a dimension, such as the employee dimension, first find the dimension table name, for example W_EMPLOYEE_D
  • Go to the ODI repository -> ‘Designer’ -> ‘Models’
  • Find the dimension table and open it by double-clicking it
  • Go to ‘Definition’ and check the OLAP type. An OLAP type of slowly changing dimension tells you that this dimension is type 2

  • You can also find out which attributes are type 2 by checking the column attributes

1. Expand the dimension table, for example, W_EMPLOYEE_D and then expand Columns

2. Open the attribute of a column by double clicking it

3. Go to ‘Description’ and check ‘Slowly Changing Dimension Behavior’

As shown above, the ‘Add Rows on Change’ option tells you that this attribute is type 2.


This blog has explained how TYPE2_FLG works in ETL and the importance of this flag, and has given you a way to debug possible TYPE2_FLG issues.

Thursday Apr 17, 2014

3 Modes for Moving Data to the BI Applications DW from a Source Application Database

In BI Applications the adaptors for the following product lines use the LKM BIAPPS SQL to Oracle (Multi Transport) to move the data from the source Application database to the target BI Applications DW database:

  • E-Business Suite
  • Siebel
  • PeopleSoft
  • JDE

A key feature of this LKM, developed specifically for BI Applications, is that the data from the source system may be transported in 3 different ways, and using a parameter set in Configuration Manager the mode can be selected to suit how the system has been set up, thereby optimizing ETL performance. This blog post details those 3 modes, and how to configure BI Applications to use the mode that best suits the deployment.

[Read More]

Friday Apr 11, 2014

Snapshot Facts in OBIA (3)

Authors: Anbing Xue, Zhi Lin

Delta Snapshot History Fact

Besides the trend lines of snapshots, there is a requirement to plot the trend lines of the delta changes on a transaction along the timeline. The changes can be either quantitative (amounts, units, etc.) or qualitative (names, descriptions, etc.). Hence we invented a new kind of snapshot fact to specifically address this.

Typical Data Model

The typical attributes of a delta snapshot history fact are:



Many attributes of the original transactions are kept and inherited, like–

Primary Key

Foreign keys to the dimensions


The delta snapshot history fact captures and stores a new pair of images whenever we detect a change on the original transaction. The image pair is essential, especially for qualitative changes. Usually it consists of one pre image (or “negative” image) and one post image (or “positive” image). For example:


(Example table: an order in New York City changes several times, producing pre/post image pairs with snapshot dates of Mar 1, Mar 2 and Mar 3, 2014.)


Basically, the delta snapshot history fact stores regular snapshots per change, and adds the “negative” snapshots before each change. It therefore has a unique ability to report a delta trend line, simply by combining both kinds of snapshots together.

Besides, we also introduced flexibility for users to configure the subset of columns they are interested in tracking. Based on the configuration, the ETL creates new snapshots only for changes on the columns of interest. Changes on the other columns instead trigger an update to existing snapshots, to keep them in sync with the original transactions.

However, extra ETL complexity has to be introduced to handle pre and post images separately, plus the flexibility to track a subset of the changes rather than all of them. In return, the number of records is much smaller than for regular daily snapshot facts: the data size of this fact is proportional to the number of changes.
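A minimal sketch of the pre/post image idea follows. It is illustrative only: the column names, the negation of the quantitative column, and the change-detection rule are assumptions, not OBIA's actual ETL logic.

```python
# When a tracked column changes, emit a "negative" pre image that reverses
# the old quantitative value and a "positive" post image with the new value.
def delta_snapshots(old_row, new_row, snapshot_date, tracked_cols):
    if all(old_row[c] == new_row[c] for c in tracked_cols):
        return []  # no change on tracked columns: update in place instead
    pre = dict(old_row, snapshot_dt=snapshot_date, image_type='PRE')
    pre['amount'] = -old_row['amount']   # quantitative reversal ("negative" image)
    post = dict(new_row, snapshot_dt=snapshot_date, image_type='POST')
    return [pre, post]

old = {'order_id': 1, 'city': 'New York City', 'amount': 100}
new = {'order_id': 1, 'city': 'New York City', 'amount': 130}
pair = delta_snapshots(old, new, 'Mar 2, 2014', tracked_cols=['amount'])
# Summing amounts over the pair (-100 + 130) yields the delta, +30, so a
# delta trend line falls out of simply aggregating the snapshots.
```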

Snapshot Facts in OBIA (2)

Authors: Divya Nandakumar, Zhi Lin

Range Snapshot History Fact

In a data warehouse implementation, the volume of OLTP transactions can already be very big, and consequently the volume of the snapshot fact can be humongous, depending on the snapshot frequency. The dilemma is that better accuracy of change history is achieved with more frequent captures, which makes the data size overwhelming and badly impacts performance.

A solution is to create a variation of the snapshot history fact, which we call the range snapshot history fact. The idea is simple: concatenate consecutive snapshots of a transaction into a single snapshot if they happen to share identical images. The new snapshot is labeled with start and end timestamps, which indicate the time period the image lasted. This merges duplicate snapshots and significantly reduces the resulting data size.

Typical data model

The typical attributes of a range snapshot history fact are–



Many attributes of the original transactions are kept and inherited, like–

Primary Key

Foreign keys to the dimensions


Concatenate daily snapshots into a range

This is the conventional way to build up range snapshots. Two consecutive daily snapshots sharing an identical status can be merged into one snapshot spanning the two days. Two having different statuses are stored as two separate snapshots, one for each day.

Concatenation of these daily snapshots is created by gathering related daily snapshot records together. The degree of condensation that can be achieved is remarkable, because the gathering may span an arbitrary range of time, unlike a fixed period of a week or month. Meanwhile, every detail is preserved, as no daily snapshot is dropped from the concatenation. However, if the triggering event occurs very often, for example 20 times a day, this approach is not advisable.

The ETL flow requires daily snapshots to start with, and does a group-by on the “interesting” statuses to merge identical rows. Its dependency on an accumulation of daily snapshots means an extra task and large storage. Incremental load can be a challenge, especially for a back-dated snapshot. Also, this method assumes no gap between daily snapshots, which can lead to an exception that is difficult to handle in ETL.

A status change in these related daily snapshots triggers a snapshot record to be entered into the data warehouse.
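The concatenation step can be sketched as follows. This is an illustrative Python sketch, using integer day numbers in place of dates and assuming a gap-free daily series, as the text notes:

```python
# Merge consecutive daily snapshots that share an identical status into one
# range snapshot carrying start and end days.
def concat_daily(snapshots):
    """snapshots: list of (day, status) tuples sorted by day."""
    ranges = []
    for day, status in snapshots:
        if ranges and ranges[-1][2] == status:
            ranges[-1][1] = day              # extend the open range
        else:
            ranges.append([day, day, status])  # start a new range
    return [tuple(r) for r in ranges]

daily = [(1, 'Open'), (2, 'Open'), (3, 'Open'), (4, 'Shipped'), (5, 'Shipped')]
print(concat_daily(daily))
# [(1, 3, 'Open'), (4, 5, 'Shipped')] -- five daily rows condense to two
```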

Range snapshots directly from transactions

Here we invented a new way to overcome the shortcomings of the conventional method above. We removed the dependency on daily snapshots and build range snapshots directly by scanning through all transaction footprints.

We introduced a few key points to achieve this:

1) Create multiple range snapshots trailing each footprint (transaction). For example, one order placed on Mar 25, 2012 by Adam derives the trailing range snapshots below. The period duration in each snapshot is one year here, which is configurable.



Status Start Date    Status End Date
Mar 25, 2012         Mar 25, 2013
Mar 25, 2013         Mar 25, 2014
Mar 25, 2014         Mar 25, 2015

2) Collapse all trailing series generated in (1) so that only one status remains at any point in time, using priority rules. In the same example, the priority rule to override is: Active > Dormant > Lost.

3) On top of the results of the collapsing, concatenate the snapshots having identical statuses.

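The three steps can be sketched as follows. This is illustrative Python: integer years stand in for dates, and the statuses trailing each order (Active for the first year, then Dormant, then Lost) are assumptions made for the example:

```python
PRIORITY = {'Active': 3, 'Dormant': 2, 'Lost': 1}

def trailing(order_year):
    # Step 1: each order derives a trail of one-year range snapshots.
    return [(order_year, 'Active'), (order_year + 1, 'Dormant'),
            (order_year + 2, 'Lost')]

def collapse(order_years):
    # Step 2: at each point in time keep only the highest-priority status.
    by_year = {}
    for oy in order_years:
        for year, status in trailing(oy):
            cur = by_year.get(year)
            if cur is None or PRIORITY[status] > PRIORITY[cur]:
                by_year[year] = status
    # Step 3: concatenate consecutive identical statuses into ranges.
    ranges = []
    for year in sorted(by_year):
        status = by_year[year]
        if ranges and ranges[-1][2] == status and ranges[-1][1] == year - 1:
            ranges[-1][1] = year
        else:
            ranges.append([year, year, status])
    return [tuple(r) for r in ranges]

# Orders in 2012 and 2013: the 2013 order's Active year overrides the
# Dormant year trailing the 2012 order.
print(collapse([2012, 2013]))
# [(2012, 2013, 'Active'), (2014, 2014, 'Dormant'), (2015, 2015, 'Lost')]
```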

The challenge of incremental load, especially back-dated records, can be solved relatively easily here, as all the source information (the transaction footprints) is usually persisted anyway. In the same example, our ETL can be as simple as deleting the records for a particular customer from the target table and recreating them from scratch every time that customer places an order.

Here we still achieve a great amount of data compression and robust ETL processing. The incremental load is still not precise to the most granular level, though: one incremental load involving one transaction per customer would end up truncating and rebuilding the entire target table.

Snapshot Facts in OBIA (1)

Authors: Divya Nandakumar, Zhi Lin


Snapshot History Fact

A snapshot captures an image of source data at a certain point in time, and preserves the data image plus a (snapshot) time label. A regular, transactional fact intends to store data in data warehouse format and reflect OLTP transactions with near-real-time latency. In this context, a regular fact basically captures the near-current snapshot of the OLTP transactions, and can support status quo analysis.

A snapshot history fact (or snapshot fact, for short) accumulates a series of snapshots and preserves them all, each with a different time label. In this way, the change history of each transaction is preserved by the snapshot series. This fact is very useful for trend analysis over time.

Typical Data Model

The typical attribute of a snapshot history fact is–


Many attributes of the original transactions are kept and inherited, like–

Primary Key

Foreign keys to the dimensions


Rolling period of daily snapshots

Consider a business need to analyze daily quantity-on-hand inventory levels by product and store, for a business process like retail store inventory. The granularity would be daily inventory by product at each store; the dimensions are date, product and store; and the fact is quantity on hand.

Storing daily snapshots of the source system would have a serious impact on data warehouse storage. The first solution is to keep daily snapshots for a limited rolling period, for example the last 90 days. The daily snapshot table accumulates daily snapshots from 1 day, 2 days, ..., up to 90 days. After that, it always drops the oldest daily snapshot before adding a new one. Hence the ETL should always delete the snapshots older than 90 days first, and then append the new snapshot.
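The rolling-window load above can be sketched in Python. This is illustrative only, with an in-memory list of rows standing in for the snapshot table:

```python
from datetime import date, timedelta

RETENTION_DAYS = 90  # rolling window length from the example

def load_daily_snapshot(table, todays_rows, today):
    """Delete snapshots older than the window, then append today's snapshot."""
    cutoff = today - timedelta(days=RETENTION_DAYS)
    table = [row for row in table if row['snapshot_dt'] >= cutoff]  # drop oldest
    table.extend(dict(r, snapshot_dt=today) for r in todays_rows)   # append new
    return table

table = [{'product': 'P1', 'qty': 5, 'snapshot_dt': date(2014, 1, 1)}]
table = load_daily_snapshot(table, [{'product': 'P1', 'qty': 7}], date(2014, 4, 21))
# The Jan 1 snapshot is older than 90 days, so only the new row remains.
```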

This method keeps snapshots at a fairly granular, daily level. However, older snapshots are not kept, so it is not good for long-term historical trending.


Monthly snapshots

At the end of the month, all accounts have their month-ending balance captured. The event is the end of the month, and the month is stored as part of the data warehouse record. The selection program reads through the operational data and, upon encountering a record that meets the qualifications, moves the record to the data warehouse. At the end of the month, each account is queried and its month-end balance is transferred to the data warehouse environment. One account may have had no activity during the month and another may have had 200 activities; both show up as exactly one record in the data warehouse. No continuity of activity is assumed with this technique.

The passage of time (day end, week end, month end, etc.) is a common way of triggering a snapshot, but it is hardly the only way that snapshots are triggered.

The monthly snapshot table stores the snapshots of all previous periods’ historical data. The ETL design has a preload mapping that deletes the data loaded for the current month, based on the current month-end date, and then loads the latest data for the current month.
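The preload-then-load pattern can be sketched as follows (illustrative Python; the column names are assumptions):

```python
# Delete the rows already loaded for the current month (keyed by the
# month-end date), then reload the latest data for that month.
def load_month_end(table, latest_rows, month_end):
    table = [r for r in table if r['month_end_dt'] != month_end]   # preload delete
    table.extend(dict(r, month_end_dt=month_end) for r in latest_rows)
    return table

table = [{'account': 'A1', 'balance': 100, 'month_end_dt': '2014-03-31'},
         {'account': 'A1', 'balance': 120, 'month_end_dt': '2014-04-30'}]
# Re-running within April replaces the April row instead of duplicating it.
table = load_month_end(table, [{'account': 'A1', 'balance': 150}], '2014-04-30')
```

This makes the monthly load re-runnable: executing it twice for the same month leaves exactly one snapshot per account per month end.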


In this way, we “aggregate” up from daily snapshots and achieve great compaction of the data size. Longer-term history trending can be stored and reported. However, we lose the finer levels of detail in between every two month ends.


Wednesday Mar 05, 2014

BI apps Cumulative Patch 1 is available now

The BI Applications cumulative patch is available now.

Patch 17546336 - BIAPPS ODI CUMULATIVE PATCH 1 (Patch) can be downloaded from My Oracle Support.

"To download the patch, navigate to My Oracle Support > Patches & Updates. Search with the patch number (17546336) in the Patch Name or Number field. Follow the instructions in the Readme to apply the patch."

Monday Feb 24, 2014

How to Implement Object Security in Project Analytics in OBIA

Author: Vijay Rajgopal


This blog details the steps needed to implement object security for any custom objects which the Customer has created in the Project Analytics Module in OBIA onwards.

Object-level security controls the visibility of logical objects based on a user's duty/application roles. Access to the following objects can be restricted using object-level security: presentation tables, presentation table columns, subject areas, reports, dashboards, and project-specific shared folders.

To apply object security over a subject area, an individual table, or an individual column, the default access for the Authenticated User application role must be set to No Access.

We then explicitly grant read access to the duty roles (which are based on the adaptor, as explained below) that can access/view the particular subject area, table or column.

Supported OBIA release: onwards

  1. Project Analytics Application Roles used for enforcing object security –

In Enterprise Manager, select WebLogic -> Domain -> bifoundation_domain -> Security -> Application Roles. Select the obi application stripe and search for role names starting with OBIA; you will see the list of all application roles that start with OBIA.

Following is the list of OOTB duty roles by adaptor

EBS Adaptor Duty Roles –



PSFT Adaptor Duty Roles –



Fusion Adaptor Duty Roles –



  2. Project Analytics object security implementation -

2.1 Subject Area:

E.g.: Project - Cost GL Reconciliation is a newly added subject area for the EBS and PSFT adaptors. We want to ensure that this subject area is not seen by Fusion adaptor customers.

Bring down the OBIEE Server, backup the existing rpd and open the rpd in the Admin tool.

Double-click Project - Cost GL Reconciliation -> Permissions

As you can see, read access has been granted explicitly to the duty roles associated with the EBS and PSFT adaptors. All other duty roles inherit the default access from the Authenticated User application role, which is set to No Access. This ensures that this subject area is not visible to Fusion adaptor users.

2.2 Presentation Table:

E.g.: Dim – Analysis Type is supported only for the PSFT adaptor. We hide this presentation table from EBS and Fusion adaptor customers.

Under Project - Billing -> Analysis Type -> Permissions

As can be seen above, only users associated with PSFT duty roles are able to view the Analysis Type table. For EBS and Fusion adaptor users this table is hidden.

2.3 Individual Columns:

E.g.: the Interproject Billing Amount metric in the Project - Billing subject area is supported only for the EBS and Fusion adaptors. We hide this individual column from PSFT customers.

Under Project - Billing -> Fact – Project Billing -> Interproject Invoice Amount -> Permissions

As can be seen above, this metric is visible to EBS and Fusion adaptor users and hidden from PSFT adaptor users.

Save the rpd, do a consistency check, and deploy the updated rpd to the OBIEE server.

  3. Additional Information –

General Details about OBIA can be found here

Wednesday Feb 19, 2014

Notes for implementing Universal adapter for OBIA Project analytics

Author: Amit Kothari

Introduction: This blog outlines the steps for implementing OBIA Project Analytics with the Universal Adaptor. Similar steps can be followed for other modules.

Supported OBIA releases: onwards

Supported Apps releases: Universal Adapter.


Please refer to the OBIA documentation and the DMR as a starting point for this exercise. Also refer to this blog entry.

Please log in to ODI Designer to see the OBIA Projects Universal interfaces; the source files can be seen in the Model layer.

1. High level steps to import data into the data warehouse through the Universal adapter.

a. Populate the CSV files with your data (e.g., file_proj_budget_fs.csv is the source file for the w_proj_budget_fs table). Typically the customer writes an extract program, such as a shell script or PL/SQL program, which creates these data files from a source OLTP system that is not otherwise supported.

b. Refer to the following steps for details on how to populate these files.

c. Build a Load Plan with fact groups: "900: Universal Adaptor Instance"."Project".

d. Run the Load Plan that you created in the previous step.

e. Note: If applicable, this Load Plan must be run after the regular Load Plan that populates Oracle Business Analytics Warehouse for the other Subject Areas has completed.

2. The configuration file or files for this task are provided on installation of Oracle BI Applications at one of the following locations:

a. Source-independent files: <Oracle Home for BI>\biapps\etl\data_files\src_files\.

b. Source-specific files: <Oracle Home for BI>\biapps\etl\data_files\src_files\<source adaptor>.

c. Your system administrator will have copied these files to another location and configured ODI connections to read from this location. Work with your system administrator to obtain the files. When configuration is complete, your system administrator will need to copy the configured files to the location from which ODI reads these files.

d. Refer to the Appendix section ‘Setting Up the Delimiter for a Source File’.

  1. As a general rule, default numeric columns to 0 and string columns to '__NOT_APPLICABLE__' so that you do not run into ‘Not Null’ errors when the ETL starts loading data.
  2. Date columns should be populated in the CSV file as a number in the format YYYYMMDDHH24MISS, or kept null.
  3. The dimension ID fields in the fact staging tables have to be populated with the INTEGRATION_ID of the corresponding dimensions. This is very important; otherwise the dimension WID fields in the fact tables will default to 0. Please refer to the ODI model or the DMR for the star schema diagrams and other FK information.
  4. Similarly, the common dimensions which Projects uses, like W_INT_ORG_D, W_MCAL_DAY_D, W_MCAL_CONTEXT_G, W_EMPLOYEE_D, W_JOB_D, W_INVENTORY_PRODUCT_D, etc., also need to be populated correctly via source files.
  5. W_MCAL_CONTEXT_G has a CLASS field that holds two values: GL or PROJECTS. To resolve the project accounting dates in the fact tables, there must be data present in this table for class ‘PROJECTS’.
  6. There are various domain codes which are loaded into the warehouse staging table W_DOMAIN_MEMBER_GS. In order to load this table, the generic file file_domain_member_gs.csv has to be populated with the correct domain codes.
    1. The granularity of this file is each domain member per language for any of the domains listed above.
    2. Domain codes for Projects are listed in the Appendix. Just load the domains for the facts/dimensions you are planning to load.
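A sketch of an extract row following the conventions above (illustrative Python; the column names are made up for the example, not the real w_proj_budget_fs layout):

```python
import csv
import io
from datetime import datetime

def fmt_date(dt):
    """Dates go into the CSV as YYYYMMDDHH24MISS numbers, or empty for null."""
    return dt.strftime('%Y%m%d%H%M%S') if dt else ''

row = {
    'INTEGRATION_ID': 'BUDGET~101',
    'PROJECT_ID': 'PROJ~55',              # integration_id of the project dimension
    'BUDGET_AMT': 0,                      # numeric default
    'BUDGET_NAME': '__NOT_APPLICABLE__',  # string default, avoids Not Null errors
    'APPROVED_ON_DT': fmt_date(datetime(2014, 4, 21, 13, 30, 0)),
}

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(row))
writer.writeheader()
writer.writerow(row)
print(buf.getvalue())
```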

Table B-164 file_domain_member_gs.csv Field Descriptions (one entry per field, in order; the source lists the data types and sample data as "Not available"):

  • This should be populated with the Domain Code corresponding to the Source Domain that is to be configured.
  • Defaulted to 'S' - indicates this is a Source Domain Code.
  • This should be populated with the CODE value supplied in any of the above files.
  • This should be populated with the NAME value that corresponds to the Member Code supplied.
  • Not available.
  • Hardcode to '__NOT_APPLICABLE__'.
  • Not available.
  • Warehouse Language Code.
  • Source Language Code.
  • This is the unique ID for the record. The INTEGRATION_ID for this file can also be populated as DOMAIN_CODE~DOMAIN_MEMBER_CODE.
  • The unique Data Source ID of the Source Instance you are configuring.
A. Setting Up the Delimiter for a Source File

When you load data from a Comma Separated Values (CSV) formatted source file, if the data contains a comma character (,), you must enclose the source data with a suitable enclosing character known as a delimiter that does not exist in the source data.

Note: Alternatively, you could configure your data extraction program to enclose the data with a suitable enclosing character automatically.

For example, you might have a CSV source data file with the following data:

Months, Status
January, February, March, Active
April, May, June, Active

If you loaded this data without modification, ODI would load 'January' as the Months value, and 'February' as the Status value. The remaining data for the first record (that is, March, Active) would not be loaded.

To enable ODI to load this data correctly, you might enclose the data in the Months field within the double-quotation mark enclosing character (" ") as follows:

Months, Status
"January, February, March", Active
"April, May, June", Active

After modification, ODI would load the data correctly. In this example, for the first record ODI would load 'January, February, March' as the Months value, and 'Active' as the Status value.
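Python's csv module applies such an enclosing character automatically, which is one way to build the extract program mentioned in the note above. A sketch of the same example (QUOTE_MINIMAL quotes only the fields that contain the separator):

```python
import csv
import io

buf = io.StringIO()
writer = csv.writer(buf, quoting=csv.QUOTE_MINIMAL)
writer.writerow(['Months', 'Status'])
writer.writerow(['January, February, March', 'Active'])  # gets quoted
writer.writerow(['April, May, June', 'Active'])          # gets quoted

# Reading it back shows the embedded commas survive the round trip.
rows = list(csv.reader(io.StringIO(buf.getvalue())))
print(rows[1])  # ['January, February, March', 'Active'] -- two fields, not four
```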

To set up the delimiter for a source file:

1. Open the CSV file containing the source data.

2. Enclose the data fields with the enclosing character that you have chosen (for example, (").

You must choose an enclosing character that is not present in the source data. Common enclosing characters include single quotation marks (') and double quotation marks (").

3. Save and close the CSV file.

4. In ODI Designer, display the Models view, and expand the Oracle BI Applications folder.

Identify the data stores that are associated with the modified CSV files. The CSV file that you modified might be associated with one or more data stores.

5. In ODI Designer, change the properties for each of these data stores to use the enclosing character, as follows:

1. Double-click the data source, to display the DataStore: <Name> dialog.

2. Display the Files tab.

3. Use the Text Delimiter field to specify the enclosing character that you used in step 2 to enclose the data.

4. Click OK to save the changes.

You can now load data from the modified CSV file.

B. Project Domains

How to Include Fusion DFFs Into the OBIA On-Premise Data Warehouse

Author: Saurabh Gautam


This is a technote that explains the steps needed to extract Fusion Descriptive Flexfield (DFF) information into the on-premise Oracle BI Applications (OBIA) warehouse from on-premise Fusion Applications (not SaaS).

Note: The OBIA changes have to be done manually.

Supported OBIA release: onwards

Supported Apps release: Fusion Release 5 onwards

A. Enable and Deploy DFF

1. Enable the Descriptive Flexfield in Fusion Apps and mark the DFF attributes as BI Enabled. For example, enable PJC_EXP_ITEMS_DESC_FLEX.

2. Deploy the Flexfield VO.

3. Refer to this link for more info.

B. Setup the rpd

1. Bring down the OBIEE server and presentation services.

2. Open the Oracle BI Applications repository file (*.rpd) via the admin tool.

3. Import the newly deployed DFF VO (e.g., FscmTopModelAM.PjcEiBIAM.FLEX_BI_PjcEi_VI) into the rpd. Select the appropriate logical table (e.g., Dim – Project Costing Details) while importing. This should import the DFF VO and also create the physical join to the appropriate VO. E.g., see the screenshot below:

1. Make sure that the VO name is <= 80 characters; if not, create an alias on that VO with a name <= 80 characters.

2. Save the rpd and start the BI server.

C. ETL Changes in ODI

1. Please note that the steps documented in this note follow our standard customization process. This is needed for future ODI metadata patches to work in your repository. As part of the standard customization process, you copy the existing mapping folder to a custom folder, make changes in the custom folder, delete the OOTB scenario from the original mapping folder, and then generate the new scenarios in the custom folder using the original OOTB scenario name.

Please refer to the customization guide before you start on this.

2. Open the ODI Studio client and log in to the appropriate repository. Go to the Model tab -> Fusion 1.0 -> Oracle Fusion 1.0 FSCM/HCM/CRM folder.

3. Import the newly deployed DFF VO using the RKM BIAPPS Oracle BI


5. Open the Oracle BI Applications -> Oracle BI Applications model subfolder and add the fields to the target DS/FS and D/F data warehouse (DW) tables in the correct folder.

6. Apply these target table changes to the target warehouse by doing an alter table.

7. Click on the Designer tab and navigate to the appropriate SDE folder for Fusion under the BI Apps project: Mappings -> SDE_FUSION_V1_Adaptor. Duplicate the appropriate SDE folder and copy it to your CUSTOM_SDE folder.

8. Open the temporary interface (icon marked in yellow) in that custom folder.


10. Pull the DFF VO into the mapping tab of the interface.

11. Join the DFF VO to the base VO and drag the DFF VO fields that need to be extracted into the DW into the right-hand target pane.

12. Open the main interface and map the fields from the temporary interface to the target.

13. Save all the objects. Before generating the new scenario, rename the original scenario in the base OOTB folder from which you copied the folder.

14. Navigate to the Packages->Scenarios and on the scenario name right click and select the ‘Generate’ option to generate the scenario. Rename the scenario name to use the original out of box scenario name.

15. Similarly, copy the appropriate Dim or Fact in the SILOS folder to the CUSTOM_SILOS folder, and then map the new DS/FS fields to the D/F table in the main interface. Save.
Before generating the new scenario, rename the original scenario in the base OOTB folder from which you copied the folder.

16. Navigate to the Packages->Scenarios and on the scenario name right click and select the ‘Generate’ option to generate the scenario. Rename the scenario name to use the original out of box scenario name.

17. Unit test all the changes

D. RPD changes

1. Open the rpd in the admin tool in your dev environment. In the physical layer, add the new fields to the modified _D/_F table under the DataWarehouse connection pool.

2. Drag the new fields from the alias to the BMM layer, rename it to give it a business name and drag it to the presentation layer.

3. Run the consistency check and save the rpd.

4. Deploy the modified rpd, restart the BI server, and test the new fields in Answers.

E. Additional Information

General Details about OBIA can be found here

Note: These fixes are to be applied in the right folder. For example, apply them in the SDE_Fusion_Adaptor folder of the ODI repository if you are running the Fusion app. If you have customized the maps mentioned above, then apply the steps above carefully.

BIAPPS List of ODI Variables

Author: Chuan Shi

The tables below list variables of different categories in terms of whether and where they are refreshed. Note that the intention of this blog is just to provide general information about ODI variables. If you are an Oracle BIAPPS customer and have specific questions regarding any ODI variable, please contact Oracle Support.

Variables that are not refreshed

Variables that are refreshed

Monday Feb 17, 2014

How to Compare RPDs

Author: Vivian(Weibei) Li


Comparing RPDs is a necessary and useful process in RPD development. Consider doing it in the following use cases.

  • You have a customized RPD and want to find out the differences between the customized RPD and the OOTB RPD.
  • You did some customization on the RPD and want to check if the customization is done as expected.
  • The administrator wants to check the differences between a later RPD and older RPDs.
  • You want to upgrade your RPD and you are not sure if there are any upgrade issues from one release to another. You can compare the RPDs for the two releases to catch issues before upgrading.

There are many more cases in which comparing RPDs is useful. This blog describes how to compare RPDs.

There are two ways to compare RPDs: one is the compare utility, and the other is the compare dialogue in the admin tool.

You can choose either way to compare your RPDs. Here is a comparison of the two methods, with some tips.

  • The RPD compare utility is executed from the command line, so it does not need the admin tool open.
  • The RPD compare dialogue is used directly in the admin tool.
  • When two local RPDs are compared, especially when they require different OBIEE versions, the RPD compare utility is recommended.
  • When modifications are performed on the RPD with the admin tool and it is to be compared with benchmark RPDs (for example, the original RPD), the RPD compare dialogue is recommended. See the details in the next sections.

Compare RPDs with Utility

There are utilities for RPD comparison in OBIEE. You can call them with a few commands in the Windows command prompt. Here are the steps to compare RPDs with this option.

Step 1 - Choose your OBIEE version

If the two releases you are comparing require different versions of OBIEE, use the later version of OBIEE to do the comparison.
For example, suppose there are two releases, release 1 and release 2, and release 2 uses the later version of OBIEE. To compare the two releases, you need to use the OBIEE version for release 2.

Step 2 – Check out the subset of RPDs

It is recommended to check out subsets of the RPDs that you want to compare, containing only the projects you are interested in. Comparing the entire RPDs would be very inefficient and time-consuming.

Step 3 – Equalize the RPDs

Before comparing the RPDs, you should first equalize them with the equalizerpds utility. The equalizerpds utility equalizes the Upgrade IDs of objects in two separate repositories. If objects have the same Upgrade ID, they are considered to be the same object. The utility compares Upgrade IDs from the first repository (typically the original repository) with Upgrade IDs from the second repository (typically the modified repository). Then, the utility equalizes the Upgrade IDs of objects with the same name, using the Upgrade ID from the original repository.
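Conceptually, the equalization step can be sketched as follows. This is a minimal Python illustration of the idea only, not the actual utility's logic: objects are matched by name, and the modified repository's Upgrade IDs are overwritten with the originals.

```python
def equalize_upgrade_ids(original, modified):
    """Conceptual sketch of Upgrade ID equalization.

    original, modified: dicts mapping object name -> Upgrade ID.
    Returns a copy of `modified` in which any object whose name also
    exists in `original` takes the original repository's Upgrade ID,
    so a later comparison treats same-named objects as the same object.
    """
    equalized = dict(modified)
    for name, upgrade_id in original.items():
        if name in equalized:
            equalized[name] = upgrade_id
    return equalized

# Hypothetical example: "Dim - Customer" takes its original ID 101;
# "Fact - Sales" exists only in the modified repository and is untouched.
orig = {'"Core"."Dim - Customer"': 101}
mod = {'"Core"."Dim - Customer"': 205, '"Core"."Fact - Sales"': 206}
print(equalize_upgrade_ids(orig, mod))
```

Objects that exist only in the modified repository keep their own IDs, which is why they later show up as additions in the comparison output.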

equalizerpds.exe can be found under <ORA_HOME>\Oracle_BI1\bifoundation\server\bin.

· Syntax

The equalizerpds utility takes the following parameters:

equalizerpds [-B original_repository_password] -C original_repository_name

[-E modified_repository_password] -F modified_repository_name [-J rename_map_file]

[-O output_repository_name] [-Y equalStringSet]

· Explanation

Original and modified repository – Use the base repository as the ‘original’ repository and the repository you want to compare as the ‘modified’ repository.

For example, if you want to compare release 1 and release 2 RPDs to find out upgrading issues to upgrade from release 1 to release 2, put release 1 RPD as the original and release 2 RPD as the modified repository.


When you equalize objects, you can lose track of object renames, because legitimate object renames become different objects. In other words, objects you intentionally renamed in the repository might be given different Upgrade IDs, so subsequent merges erroneously treat the renamed object as a new object. To avoid this situation, enter the before and after names of intentionally renamed objects in a rename map file that you then pass to the utility. The equalizerpds utility uses the information in the file to ensure that the original IDs are used for the renamed objects.

rename_map_file is a text file containing a list of objects that were renamed and that you want to equalize. The format is a tab-separated file with the following columns:

                    TypeName     Name1     Name2
       For example, the logical column "Core"."Dim - Customer"."Address4" is renamed to
       "Core"."Dim - Customer"."Address 4" from Release 1 to Release 2. The file can be written as:
                    Logical Column     "Core"."Dim - Customer"."Address4"     "Core"."Dim - Customer"."Address 4"

       Tip: How to find the TypeName value?
Query your object with the Query Repository tool in the admin tool, and you will find the TypeName value in the result.
1. Open the admin tool. Go to Tools -> Query Repository.


2. In the popup dialogue, query your object.
3. You will find the TypeName value in the result.


You can put this file in any folder on your machine, and give the absolute path in the rename_map_file parameter. See the example below.
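As a quick illustration of the layout, the Python sketch below writes and reads back a rename map file in this tab-separated, three-column format. The file path and object names are hypothetical, taken from the example above.

```python
import csv
import os
import tempfile

# One renamed object per row: TypeName, old name, new name (tab-separated).
rows = [
    ("Logical Column",
     '"Core"."Dim - Customer"."Address4"',
     '"Core"."Dim - Customer"."Address 4"'),
]

# Write the rename map file that equalizerpds consumes via its -J parameter.
path = os.path.join(tempfile.gettempdir(), "rename-map-file.txt")
with open(path, "w", newline="") as f:
    csv.writer(f, delimiter="\t").writerows(rows)

# Read it back to verify the three-column, tab-separated layout round-trips.
with open(path, newline="") as f:
    parsed = [tuple(r) for r in csv.reader(f, delimiter="\t")]
print(parsed == rows)
```

Any text editor works just as well; the only requirement is one renamed object per line with the three values separated by tabs.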

· An equalization command example

equalizerpds -B pwd123 -C C:\rpdcomparison\release1\release1.rpd -E pwd123 -F C:\rpdcomparison\release2\release2.rpd -J C:\rpdcomparison\rename-map-file.txt -O C:\rpdcomparison\release2\equalizedrpd.rpd

Step 4 – Compare the RPDs

Now you can compare the RPDs with the comparerpd utility. comparerpd.exe can be found under <ORA_HOME>\Oracle_BI1\bifoundation\server\bin.

· Syntax

The comparerpd utility takes the following parameters:

comparerpd [-P modified_rpd_password] -C modified_rpd_pathname

[-W original_rpd_password] -G original_rpd_pathname {-O output_csv_file_name |

-D output_patch_file_name | -M output_mds_xml_directory_name} -E -8

· Explanation

Original and modified repository – Use the base repository as the ‘original’ repository and the repository you want to compare as the ‘modified’ repository. The ‘modified’ repository should be the equalized RPD obtained in Step 3.

-O output_csv_file_name is the name and location of a csv file where you want to store the repository object comparison output.

-D output_patch_file_name is the name and location of an XML patch file where you want to store the differences between the two repositories.

-M output_mds_xml_directory_name is the top-level directory where you want to store diff information in MDS XML format.

Note: You can specify an output CSV file using -O, an XML patch file using -D, or an MDS XML directory tree using -M. You cannot specify more than one output type at the same time.

· A comparison command example

comparerpd -P pwd123 -C C:\rpdcomparison\release2\equalizedrpd.rpd -W pwd123 -G C:\rpdcomparison\release1\release1.rpd -O C:\rpdcomparison\results.csv
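Putting Steps 3 and 4 together, the sketch below only assembles the two command lines for the equalize-then-compare workflow; it does not invoke the utilities, and all paths and passwords are the hypothetical values from the examples above.

```python
def build_equalize_cmd(orig_rpd, mod_rpd, out_rpd, pwd, rename_map=None):
    """Command line for equalizerpds (Step 3); -J is optional."""
    cmd = ["equalizerpds", "-B", pwd, "-C", orig_rpd,
           "-E", pwd, "-F", mod_rpd, "-O", out_rpd]
    if rename_map:
        cmd += ["-J", rename_map]
    return cmd

def build_compare_cmd(orig_rpd, equalized_rpd, out_csv, pwd):
    """Command line for comparerpd (Step 4); -O selects CSV output."""
    return ["comparerpd", "-P", pwd, "-C", equalized_rpd,
            "-W", pwd, "-G", orig_rpd, "-O", out_csv]

eq = build_equalize_cmd(r"C:\rpdcomparison\release1\release1.rpd",
                        r"C:\rpdcomparison\release2\release2.rpd",
                        r"C:\rpdcomparison\release2\equalizedrpd.rpd",
                        "pwd123",
                        r"C:\rpdcomparison\rename-map-file.txt")
cp = build_compare_cmd(r"C:\rpdcomparison\release1\release1.rpd",
                       r"C:\rpdcomparison\release2\equalizedrpd.rpd",
                       r"C:\rpdcomparison\results.csv",
                       "pwd123")
print(" ".join(eq))
print(" ".join(cp))
```

Note how the equalized RPD produced by the first command is what gets passed as the ‘modified’ repository (-C) to the second.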

Compare RPDs with Compare Dialogue

In this section, I will describe how to compare RPDs with the compare dialogue. The compare dialogue must be used with the admin tool open.

Compare the MUD RPD Before and After the Modification

· Open your fixed MUD RPD.

· Go to ‘File’->‘Multiuser’->’Compare with Original’.

It will compare the current modified rpd with the local copy of the original rpd.

Compare the local RPDs

You can also use compare dialogue to do the comparison for the local RPDs.

Note: It is recommended to use the compare utility to compare local RPDs. Remember to extract subsets of the RPDs before comparing; comparing the entire RPDs will be time-consuming.

· Open your local RPD, let’s say RPD1 with admin tool. This RPD is the base RPD in your comparison.

· Go to ‘File’->’Compare’.

· Select the repository or XML file in the popup compare dialogue.

· Enter the password for the repository which you want to compare to, let’s say RPD2. RPD2 will be the modified repository.

· The compare dialogue will open RPD2, equalize it, compare it with RPD1, and finally show the results.

· You can see what the object looks like in RPD1 and RPD2 by clicking ‘View 1’ and ‘Edit 2’.

View 1 – the object in RPD1

Edit 2 – the object in RPD2

· Save the diff file as .CSV file in your local machine.

In summary, RPD comparison is a good tool for RPD development. Using it appropriately will give you more confidence in your RPD modifications, fixes, and upgrades. Adding RPD comparison as a standard step makes RPD development smoother and less error-prone.

Wednesday Jan 22, 2014

Configuring Enterprise Calendar for Oracle BI Apps Time Dimension

Author: Chuan Shi


One of the key common dimensions of Oracle BI Apps is the Time dimension. It contains calendars of different natures to support different types of analysis within various subjects in Oracle BI Apps. Different types of Calendar include:

  • Gregorian Calendar
  • Fiscal Calendar (multiple)
  • Enterprise Calendar (unique enterprise wide)

The Enterprise Calendar (or reporting calendar) enables cross subject area analysis. The Enterprise Calendar data warehouse tables have the W_ENT prefix. Within a single BI Apps deployment by the customer, only one fiscal calendar can be chosen as the Enterprise Calendar. The purpose of this blog is to explain how to configure the Enterprise Calendar.

Configure the Enterprise Calendar

The Enterprise Calendar can be set to one of the OLTP sourced fiscal calendars, or to one of the warehouse generated fiscal calendars (e.g., the 4-4-5 calendar and 13 period calendar supported by Oracle BI Apps). This can be done by setting the following source system parameters in the Business Intelligence Applications Configuration Manager (BIACM):

  • GBL_CALENDAR_ID (used to set the ID of the calendar to be used as the Enterprise Calendar)
  • GBL_DATASOURCE_NUM_ID (used to set the DSN of the source from which the Enterprise Calendar is chosen)

The following sections show how to set up these two parameters for the Enterprise Calendar in different scenarios.

Scenario 1: Using an Oracle EBS fiscal calendar as the Enterprise Calendar

  • GBL_CALENDAR_ID: This parameter is used to select the Enterprise Calendar. In EBS, it should have the format of MCAL_CAL_NAME~MCAL_PERIOD_TYPE. For example, GBL_CALENDAR_ID will be 'Accounting~41' if MCAL_CAL_NAME = 'Accounting' and MCAL_PERIOD_TYPE = '41'.

Note 1: MCAL_CAL_NAME and MCAL_PERIOD_TYPE are sourced from PERIOD_SET_NAME and PERIOD_TYPE of the GL_PERIODS table (an Oracle EBS OLTP table). To see a valid list of combinations of MCAL_CAL_NAME~MCAL_PERIOD_TYPE, run a query such as the following in the OLTP:

SELECT DISTINCT PERIOD_SET_NAME, PERIOD_TYPE FROM GL_PERIODS;
Note 2: The available EBS calendars are also loaded into the OLAP warehouse table W_MCAL_CAL_D. Therefore, they can be viewed by running a query such as the following in the DW:

SELECT * FROM W_MCAL_CAL_D
WHERE DATASOURCE_NUM_ID = <the value corresponding to the EBS version that you use>;

  • GBL_DATASOURCE_NUM_ID: For EBS, this parameter should be the DATASOURCE_NUM_ID of the source system from where the calendar is taken. For example, if you are running EBS R11.5.10 and the DATASOURCE_NUM_ID for this source is 310, then you need to set GBL_DATASOURCE_NUM_ID to 310.

GBL_CALENDAR_ID and GBL_DATASOURCE_NUM_ID are set in BIACM, and this will be covered in a later section.
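The GBL_CALENDAR_ID value is simply the two source values joined with a tilde, as a small Python sketch shows (the function name is mine; the example values come from the scenario above):

```python
def make_gbl_calendar_id(part1, part2):
    """Compose a GBL_CALENDAR_ID value from its two components.

    For EBS: part1 = MCAL_CAL_NAME, part2 = MCAL_PERIOD_TYPE.
    The same NAME~ID pattern is used for other sources.
    """
    return f"{part1}~{part2}"

print(make_gbl_calendar_id("Accounting", "41"))  # Accounting~41
```

The tilde is a literal part of the value you enter in BIACM, not a placeholder.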

Scenario 2: Using a PeopleSoft fiscal calendar as the Enterprise Calendar

  • GBL_CALENDAR_ID: This parameter is used to select the Enterprise Calendar. In PSFT, it should have the format of SETID~CALENDAR_ID. For example, GBL_CALENDAR_ID will be 'SHARE~01' if SET_ID = 'SHARE' and CALENDAR_ID = '01'.

Note 1: SETID and CALENDAR_ID are sourced from the PS_CAL_DEFN_TBL table (a PeopleSoft OLTP table). To see a valid list of combinations of SETID~CALENDAR_ID, run a query such as the following in the OLTP:

SELECT DISTINCT SETID, CALENDAR_ID FROM PS_CAL_DEFN_TBL;
Note 2: The available PeopleSoft calendars are also loaded into the OLAP warehouse table W_MCAL_CAL_D. Therefore, they can be viewed by running a query such as the following in the DW:

SELECT * FROM W_MCAL_CAL_D
WHERE DATASOURCE_NUM_ID = <the value corresponding to the PeopleSoft version that you use>;

  • GBL_DATASOURCE_NUM_ID: For PSFT, this parameter should be the DATASOURCE_NUM_ID of the source system from where the calendar is taken. For instance, if you are running PeopleSoft 9.0 FSCM Instance and the DATASOURCE_NUM_ID for this source is 518, then you need to set GBL_DATASOURCE_NUM_ID to 518.
Note: OLTP sourced calendars are not supported in PeopleSoft HCM pillars. Therefore, should you w