Friday Apr 11, 2014

Snapshot Facts in OBIA (3)

Authors: Anbing Xue, Zhi Lin

Delta Snapshot History Fact

Besides the trend lines of snapshots, we also have a requirement to plot the trend lines of the delta changes on a transaction along a timeline. The changes here can be either quantitative (amount, units, etc.) or qualitative (names, descriptions, etc.). Hence we invented a new kind of snapshot fact specifically to address this.

Typical Data Model

The typical attributes of a delta snapshot history fact are–

SNAPSHOT_DT

IMAGE_TYPE

Many attributes of the original transactions are kept and inherited, like–

Primary Key

Foreign keys to the dimensions

Implementation

The delta snapshot history fact captures and stores a new pair of images whenever we detect a change on the original transaction. The image pair is essential, especially for qualitative changes. It usually consists of one pre image (or “negative” image) and one post image (or “positive” image). For example:

IMAGE_NUM | ORDER_ID | SHIP_TO_LOCATION | AMOUNT | SNAPSHOT_DT | IMAGE_TYPE
1 | 001 | New York City | $100 | Mar 1, 2014 | POST
2 | 001 | New York City | -$100 | Mar 2, 2014 | PRE
3 | 001 | New York City | $120 | Mar 2, 2014 | POST
4 | 001 | New York City | -$120 | Mar 3, 2014 | PRE
5 | 001 | Boston | $120 | Mar 3, 2014 | POST

Basically, this delta snapshot history fact stores a regular snapshot per change and adds the “negative” snapshot of the state before the change. This gives it a unique ability to report the delta’s trend line, simply by aggregating both kinds of snapshots together.
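To make the idea concrete, here is a minimal reporting sketch. The table name follows the example above but is hypothetical, not a shipped OBIA object; summing the PRE and POST images per snapshot date yields the net delta for that date.

-- Illustrative only: delta trend line from a hypothetical delta snapshot fact.
-- For order 001 above this returns +$100 (Mar 1), +$20 (Mar 2) and $0 (Mar 3,
-- where only the qualitative SHIP_TO_LOCATION changed).
SELECT snapshot_dt,
       SUM(amount) AS delta_amount
FROM   w_order_delta_snapshot_f
WHERE  order_id = '001'
GROUP BY snapshot_dt
ORDER BY snapshot_dt;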

We also introduced the flexibility for users to configure the subset of columns they are interested in tracking. Based on this configuration, the ETL creates new snapshots only for changes on the tracked columns. Changes on the other columns trigger an update to existing snapshots instead, in order to stay in sync with the original transactions.

The trade-off is extra ETL complexity to handle pre and post images separately, plus the flexibility to track a subset rather than all of the columns. In return, the number of records is much smaller than in regular daily snapshot facts, since the data size of this fact is proportional to the number of changes.

Snapshot Facts in OBIA (2)

Authors: Divya Nandakumar, Zhi Lin

Range Snapshot History Fact

In a data warehouse implementation, the volume of OLTP transactions can already be very large, and consequently the volume of the snapshot fact can be enormous, depending on the snapshot frequency. The dilemma is that better accuracy of change history requires more frequent captures, which makes the data size overwhelming and badly impacts performance.

A solution is to create a variation of the snapshot history fact, which we call the range (or period) snapshot history fact. The idea is simple: concatenate consecutive snapshots of a transaction into a single snapshot whenever they share identical images. The new snapshot is labeled with start and end timestamps, which indicate the time period the image lasts. This merges duplicate snapshots and reduces the resulting data size significantly.

Typical data model

The typical attributes of a range snapshot history fact are–

EFFECTIVE_FROM_DT

EFFECTIVE_TO_DT

Many attributes of the original transactions are kept and inherited, like–

Primary Key

Foreign keys to the dimensions

Implementations

Concatenate daily snapshots into a range

This is the conventional way to build range snapshots. Two consecutive daily snapshots sharing an identical status can be merged into one snapshot spanning both days. Two with different statuses are stored as two separate snapshots, one for each day.

The concatenation is produced by gathering related daily snapshot records together. The degree of data condensation that can be achieved is remarkable, because the gathering can span an arbitrary range of time rather than a fixed period such as a week or a month. If the triggering event occurs very often, for example 20 times a day, this approach is not advisable. Meanwhile, every detail is preserved, since no daily snapshot is dropped from the concatenation.

The ETL flow requires daily snapshots to start with, and then groups by the “interested” status columns to merge identical rows. Its dependency on accumulating daily snapshots means an extra task and large storage. Incremental load can be a challenge, especially for a back-dated snapshot. Also, this method assumes there are no gaps between daily snapshots, which could otherwise lead to exceptions that are difficult to handle in ETL.

A status change in these related daily snapshots could trigger a snapshot record to be entered into the data warehouse.  
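As an illustration of that group-by/merge step, here is a minimal SQL sketch. The daily snapshot table and its columns are hypothetical, and the sketch relies on the no-gap assumption mentioned above (a classic “gaps and islands” pattern).

-- Illustrative only: concatenate consecutive daily snapshots that share the same status.
SELECT order_id,
       status,
       MIN(snapshot_dt) AS effective_from_dt,
       MAX(snapshot_dt) AS effective_to_dt
FROM  (SELECT order_id,
              status,
              snapshot_dt,
              -- consecutive days with the same status produce the same group key
              snapshot_dt - ROW_NUMBER() OVER (PARTITION BY order_id, status
                                               ORDER BY snapshot_dt) AS grp
       FROM   w_order_daily_snapshot_f)
GROUP BY order_id, status, grp
ORDER BY order_id, effective_from_dt;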

Range snapshots directly from transactions

Here we invented a new way to overcome the shortcomings of the conventional method above. We removed the dependency on daily snapshots and build range snapshots directly by scanning through all transaction footprints.

A few key points we introduced to achieve this:

1) Create multiple range snapshots trailing each footprint (transaction). For example, one order placed on Mar 25, 2012 by Adam derives the trailing range snapshots below (a SQL sketch of this step appears after the numbered steps). The period duration of each snapshot is one year here, which is configurable.

Customer | Status | Status Start Date | Status End Date
Adam | Active | Mar 25, 2012 | Mar 25, 2013
Adam | Dormant | Mar 25, 2013 | Mar 25, 2014
Adam | Lost | Mar 25, 2014 | Mar 25, 2015

2) Collapse all the trailing series generated in (1) so that only one status remains at any point in time, using priority rules. In the same example, the overriding priority is Active > Dormant > Lost.

3) On top of the collapsed results, concatenate the snapshots having identical statuses.

As before, each concatenated snapshot is labeled with start and end timestamps that indicate the time period the image lasts, which merges duplicate snapshots and reduces the resulting data size significantly.
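Here is a minimal sketch of step (1), assuming a hypothetical ORDERS footprint table and the one-year duration used in the example; steps (2) and (3) would then collapse overlapping rows by the Active > Dormant > Lost priority and concatenate identical statuses.

-- Illustrative only: derive the trailing status ranges for every order footprint.
SELECT customer, 'Active'  AS status, order_dt AS status_start_dt, ADD_MONTHS(order_dt, 12) AS status_end_dt
FROM   orders
UNION ALL
SELECT customer, 'Dormant', ADD_MONTHS(order_dt, 12), ADD_MONTHS(order_dt, 24)
FROM   orders
UNION ALL
SELECT customer, 'Lost',    ADD_MONTHS(order_dt, 24), ADD_MONTHS(order_dt, 36)
FROM   orders;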

The challenge of incremental load, especially back-dated records, can be solved relatively easily here, since all of the source information (the transaction footprints) is usually persisted anyway. In the same example, our ETL can be as simple as deleting the records for a particular customer from the target table and recreating them from scratch every time that customer places an order.
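That incremental pattern could look like the following sketch (hypothetical target table and view; the view would apply steps (1)-(3) above to the persisted footprints).

-- Illustrative only: rebuild one customer's range snapshots whenever that customer places an order.
DELETE FROM w_customer_status_range_f
WHERE  customer = :customer;

INSERT INTO w_customer_status_range_f (customer, status, status_start_dt, status_end_dt)
SELECT customer, status, status_start_dt, status_end_dt
FROM   customer_status_range_v          -- applies steps (1)-(3) to the ORDERS footprints
WHERE  customer = :customer;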

Here we still achieve a great amount of data compression and robust ETL processing. The incremental load, however, is not yet precise to the most granular level: an incremental load that touches one transaction per customer would end up truncating and rebuilding the entire target table.

Snapshot Facts in OBIA (1)

Authors: Divya Nandakumar, Zhi Lin

 

Snapshot History Fact

A snapshot captures an image of source data at a certain point in time, and preserves the data image plus a (snapshot) time label. A regular, transactional fact stores data in data warehouse format and reflects OLTP transactions with near-real-time latency. In this context, a regular fact basically captures the near-current snapshot of the OLTP transactions and can support status quo analysis.

A snapshot history fact (or snapshot fact, for short) accumulates a series of snapshots and preserves them all, each with a different time label. In this way, the change history of each transaction is preserved by the snapshot series. This fact is very useful for trend analysis over time.

Typical Data Model

The typical attribute of a snapshot history fact is–

SNAPSHOT_PERIOD_DT

Many attributes of the original transactions are kept and inherited, like–

Primary Key

Foreign keys to the dimensions

Implementation

Rolling period of daily snapshots

Take a business need to analyze daily quantity-on-hand inventory levels by product and store, for a business process like retail store inventory. The granularity would be daily inventory by product at each store; the dimensions would be date, product, and store; and the fact would be quantity on hand.

Storing the source system's daily snapshots indefinitely would have a serious impact on data warehouse storage. The first solution is to keep daily snapshots only for a limited rolling period, for example the last 90 days. The daily snapshot table accumulates snapshots for 1 day, 2 days, …, up to 90 days. After that, it always drops the oldest daily snapshot before adding a new one. Hence the ETL should always delete the snapshots older than 90 days first, and then append the new snapshot.
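A minimal sketch of that ETL step, with hypothetical table names (the real OBIA flow is implemented as ODI interfaces):

-- Illustrative only: maintain a rolling 90-day window of daily snapshots.
DELETE FROM w_inventory_daily_snapshot_f
WHERE  snapshot_period_dt < TRUNC(SYSDATE) - 90;

INSERT INTO w_inventory_daily_snapshot_f (snapshot_period_dt, product_wid, store_wid, qty_on_hand)
SELECT TRUNC(SYSDATE), product_wid, store_wid, qty_on_hand
FROM   w_inventory_balance;              -- current quantity on hand per product and store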

This method keeps snapshots at a fairly granular, daily level. However, older snapshots are not kept, so it is not good for long-term historical trending.

 

Monthly snapshots

At the end of the month, all accounts have their month-ending balance captured. The triggering event is the end of the month, and the month is stored as part of the snapshot in the data warehouse. The selection program reads through the operational data and, upon encountering a record that meets the qualifications, moves the record to the data warehouse. At the end of the month, each account is queried and its month-end balance is transferred to the data warehouse environment. One account may have had no activity during the month and another may have had 200 activities; both will show up as exactly one record in the data warehouse environment. No continuity of activity is assumed with this technique.

The passage of time - day end, week end, month end, etc. - is the most common way of triggering a snapshot, but the periodic passage of time is hardly the only way that snapshots are triggered.

The monthly snapshot table stores the snapshots for all prior months, plus a snapshot of the current month that reflects data as of the previous day. The ETL design has a preload mapping that deletes the data loaded for the current month, based on the current month-end date, and then reloads the latest data for the current month.
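A minimal sketch of that preload-then-load pattern, again with hypothetical table names:

-- Illustrative only: refresh the current month's snapshot on every run.
DELETE FROM w_account_monthly_snapshot_f
WHERE  snapshot_period_dt = LAST_DAY(TRUNC(SYSDATE));

INSERT INTO w_account_monthly_snapshot_f (snapshot_period_dt, account_wid, balance_amt)
SELECT LAST_DAY(TRUNC(SYSDATE)), account_wid, balance_amt
FROM   w_account_balance;                -- latest balance per account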

 

In this way, we “aggregate” up from daily snapshots and achieve great compaction of the data size. Longer-term historical trends can be stored and reported. However, we lose the finer levels of detail between every two month ends.

 

Monday Feb 24, 2014

How to Implement Object Security in Project Analytics in OBIA 11.1.1.7.1

Author: Vijay Rajgopal

Introduction

This blog details the steps needed to implement object security for any custom objects which the Customer has created in the Project Analytics Module in OBIA 11.1.1.7.1 onwards.

Object-level security controls the visibility of logical objects based on a user's duty/application roles. Access to the following objects can be restricted using object-level security: presentation tables, presentation table columns, subject areas, reports, dashboards, and project-specific shared folders.

To apply object security to a subject area, individual table, or individual column, the default access for the Authenticated User application role must be set to No Access.

We then explicitly grant Read access to the duty roles (which are adaptor-specific, as listed below) that should be able to access or view the particular subject area, individual table, or individual column.

Supported OBIA release: 11.1.1.7.1 onwards

  1. Project Analytics Application Roles used for enforcing object security –

In Enterprise Manager, select WebLogic Domain -> bifoundation_domain -> Security -> Application Roles. Select the obi application stripe and search for role names starting with OBIA; you will see the list of all application roles that start with OBIA.


Following is the list of OOTB duty roles by adaptor

EBS Adaptor Duty Roles –

OBIA_EBS_PROJECT_EXECUTIVE_ANALYSIS_DUTY

OBIA_EBS_PROJECT_MANAGEMENT_ANALYSIS_DUTY

PSFT Adaptor Duty Roles –

OBIA_PSFT_PROJECT_EXECUTIVE_ANALYSIS_DUTY

OBIA_PSFT_PROJECT_MANAGEMENT_ANALYSIS_DUTY

Fusion Adaptor Duty Roles –

OBIA_PROJECT_EXECUTIVE_ANALYSIS_DUTY

OBIA_PROJECT_MANAGEMENT_ANALYSIS_DUTY

  2. Project Analytics object security implementation -

2.1 Subject Area:

Eg: Project - Cost GL Reconciliation is a newly added area for EBS and PSFT adaptors. We want to ensure that this subject area is not seen by Fusion Adaptor customers.

Bring down the OBIEE server, back up the existing rpd, and open the rpd in the Admin tool.

Double-click Project - Cost GL Reconciliation -> Permissions


As you can see, Read access has been granted explicitly to the duty roles associated with the EBS and PSFT adaptors. All other duty roles inherit the default access from the Authenticated User application role, which is set to No Access. This ensures that this subject area is not visible to Fusion adaptor users.

2.2 Presentation Table:

Eg: Dim – Analysis Type is supported only for PSFT adaptor. We hide this presentation table from EBS and Fusion Adaptor customers.

Under Project - Billing -> Analysis Type -> Permissions


As can be seen above, only users associated with the PSFT duty roles can view the Analysis Type table. For EBS and Fusion adaptor users, this table is hidden.

2.3 Individual Columns:

Eg: Interproject Billing Amount metric in Project-Billing subject area is supported only for EBS and Fusion adaptors. We hide this individual column from PSFT customers.

Under Project - Billing -> Fact – Project Billing -> Interproject Invoice Amount -> Permissions


As can be seen above, this metric is visible to EBS and Fusion adaptor users and hidden from PSFT adaptor users.

Save the rpd, do a consistency check and deploy the updated rpd in the OBIEE server.

  3. Additional Information –

General Details about 11.1.1.7.1 OBIA can be found here

Wednesday Feb 19, 2014

Notes for implementing Universal adapter for OBIA Project analytics

Author: Amit Kothari

Introduction: This blog outlines the steps for implementing OBIA Project Analytics for Universal Adaptor. Similar steps can be followed for other modules.

Supported OBIA releases: 11.1.1.7.1 onwards

Supported Apps releases: Universal Adapter.

Steps:

Please refer to the OBIA documentation and the DMR as a starting point for this exercise. Also refer to this blog entry.

Please log in to ODI Designer to see the OBIA Projects Universal interfaces; the source files can be seen in the Model layer.

1. High level steps to import data into the data warehouse through the Universal adapter.

a. Populate the CSV files with your data (e.g., file_proj_budget_fs.csv is the source file for the w_proj_budget_fs table). Typically the customer writes an extract program, such as a shell script or PL/SQL program, which creates these data files from an otherwise unsupported source OLTP system.

b. Refer to the steps below for details on how to populate these files.

c. Build a Load Plan with fact groups: "900: Universal Adaptor Instance"."Project".

d. Run the Load Plan that you created in the previous step.

e. Note: If applicable, this Load Plan must be run after the regular Load Plan that populates the Oracle Business Analytics Warehouse for the other subject areas has completed.

2. The configuration file or files for this task are provided on installation of Oracle BI Applications at one of the following locations:

a. Source-independent files: <Oracle Home for BI>\biapps\etl\data_files\src_files\.

b. Source-specific files: <Oracle Home for BI>\biapps\etl\data_files\src_files\<source adaptor>.

c. Your system administrator will have copied these files to another location and configured ODI connections to read from this location. Work with your system administrator to obtain the files. When configuration is complete, your system administrator will need to copy the configured files to the location from which ODI reads these files.

d. Refer to the Appendix section ‘Setting Up the Delimiter for a Source File’.

  1. As a general rule, default numeric columns to 0 and string columns to '__NOT_APPLICABLE__' so that we do not run into 'Not Null' errors when the ETL starts loading data.
  2. Date columns should be populated in the CSV file as a number in the format YYYYMMDDHH24MISS or kept null
  3. The dimension ID fields in the fact staging tables have to be populated with the INTEGRATION_ID of the various dimensions. This is very important; otherwise the dimension WID fields in the fact tables will default to 0 (see the sketch after this list). Please refer to the ODI Model or the DMR for the star schema diagrams and other FK information.
  4. Similarly, the common dimensions that Projects uses, such as W_INT_ORG_D, W_MCAL_DAY_D, W_MCAL_CONTEXT_G, W_EMPLOYE_D, W_JOB_D, W_INVENTORY_PRODUCT_D, etc., also need to be populated correctly via source files.
  5. W_MCAL_CONTEXT_G has a CLASS field that holds one of two values – GL or PROJECTS. To resolve the project accounting dates in the fact tables, there must be data present in this table for class 'PROJECTS'.
  6. There are various domain codes which are loaded into the warehouse staging table W_DOMAIN_MEMBER_GS. In order to load this table, the generic file File_domain_member_gs.csv has to be populated with the correct domain codes.
    1. The granularity of this file is each domain member per language for any of the domains listed above.
    2. Domain codes for Projects are listed in the Appendix. Just load the domains based on the Facts/Dims you are planning to load.
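To illustrate why point 3 matters, here is a minimal sketch of how a SIL-style load typically resolves a dimension ROW_WID from the staging ID. The table and column names are illustrative only; in OBIA the resolution is done by the delivered ODI lookups.

-- Illustrative only: the staging PROJECT_ID must match the dimension INTEGRATION_ID
-- (for the same DATASOURCE_NUM_ID); unresolved rows fall back to the 0 ("Unspecified") WID.
SELECT fs.integration_id,
       COALESCE(d.row_wid, 0) AS project_wid
FROM   w_proj_cost_line_fs fs
LEFT OUTER JOIN w_project_d d
       ON  d.integration_id    = fs.project_id
       AND d.datasource_num_id = fs.datasource_num_id;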

Table B-164 file_domain_member_gs.csv Field Descriptions

Column Name | Data Type | Sample Data | Description
DOMAIN_CODE | Not available. | Not available. | This should be populated with the Domain Code corresponding to the Source Domain that is to be configured.
DOMAIN_TYPE_CODE | Not available. | Not available. | Defaulted to 'S' - indicates this is a Source Domain Code.
DOMAIN_MEMBER_CODE | Not available. | Not available. | This should be populated with the CODE value supplied in any of the above files.
DOMAIN_MEMBER_NAME | Not available. | Not available. | This should be populated with the NAME value that corresponds to the Member Code supplied.
DOMAIN_MEMBER_DESCR | Not available. | Not available. | Not available.
DOMAIN_MEMBER_REF_CODE | Not available. | Not available. | Hardcode to '__NOT_APPLICABLE__'.
DOMAIN_MEMBER_DEFN_TYPE_CODE | Not available. | Not available. | Not available.
LANGUAGE_CODE | Not available. | Not available. | Warehouse Language Code.
SRC_LANGUAGE_CODE | Not available. | Not available. | Source Language Code.
INTEGRATION_ID | Not available. | Not available. | This is the unique ID for the record. The INTEGRATION_ID for this file can also be populated as DOMAIN_CODE~DOMAIN_MEMBER_CODE.
DATASOURCE_NUM_ID | Not available. | Not available. | The unique Data Source ID of the Source Instance you are configuring.

Appendix

A. Setting Up the Delimiter for a Source File

When you load data from a Comma Separated Values (CSV) formatted source file, if the data contains a comma character (,), you must enclose the source data with a suitable enclosing character known as a delimiter that does not exist in the source data.

Note: Alternatively, you could configure your data extraction program to enclose the data with a suitable enclosing character automatically.

For example, you might have a CSV source data file with the following data:

Months, Status
January, February, March, Active
April, May, June, Active

If you loaded this data without modification, ODI would load 'January' as the Months value, and 'February' as the Status value. The remaining data for the first record (that is, March, Active) would not be loaded.

To enable ODI to load this data correctly, you might enclose the data in the Months field within the double-quotation mark enclosing character (" ") as follows:

Months, Status
"January, February, March", Active
"April, May, June", Active

After modification, ODI would load the data correctly. In this example, for the first record ODI would load 'January, February, March' as the Months value, and 'Active' as the Status value.

To set up the delimiter for a source file:

1. Open the CSV file containing the source data.

2. Enclose the data fields with the enclosing character that you have chosen (for example, ").

You must choose an enclosing character that is not present in the source data. Common enclosing characters include single quotation marks (') and double quotation marks (").

3. Save and close the CSV file.

4. In ODI Designer, display the Models view, and expand the Oracle BI Applications folder.

Identify the data stores that are associated with the modified CSV files. The CSV file that you modified might be associated with one or more data stores.

5. In ODI Designer, change the properties for each of these data stores to use the enclosing character, as follows:

1. Double-click the data store to display the DataStore: <Name> dialog.

2. Display the Files tab.

3. Use the Text Delimiter field to specify the enclosing character that you used in step 2 to enclose the data.

4. Click OK to save the changes.

You can now load data from the modified CSV file.

B. Project Domains

PROJECT_ANALYSIS_TYPE

PROJECT_ASSIGNMENT_STATUS

PROJECT_ASSIGNMENT_TYPE

PROJECT_BILLING_TYPE

PROJECT_BILL_HOLD_CODE

PROJECT_BILL_HOLD_FLG

PROJECT_BUDGET_CHANGE_REASON

PROJECT_BUDGET_CLASS

PROJECT_BUDGET_ENTRY_LEVEL

PROJECT_BUDGET_LEVEL

PROJECT_BUDGET_STATUS

PROJECT_BUDGET_TIME_PHASE

PROJECT_BUDGET_TYPE

PROJECT_BUDGET_VER_TYPE

PROJECT_CLASS_CATEGORY

PROJECT_CLASS_CODE

PROJECT_COMMITMENT_SOURCE_TYPE

PROJECT_COMMITMENT_TYPE

PROJECT_COST_CODE_TYPE

PROJECT_CROSS_CHARGE

PROJECT_CROSS_CHARGE_TYPE

PROJECT_DOCUMENT_ENTRY

PROJECT_DOCUMENT_SOURCE

PROJECT_EVENT_CLASS

PROJECT_EVENT_INVOICING_STATUS

PROJECT_EVENT_SOURCE

PROJECT_EVT_REV_DISTRIB

PROJECT_EXPENDITURE_CATEGORY

PROJECT_EXPENDITURE_CLASS

PROJECT_EXPENDITURE_SUBCATEGORY

PROJECT_FIN_PLAN_TYPE

PROJECT_FUNDING_APPROVAL

PROJECT_FUNDING_CATEGORY

PROJECT_FUNDING_LEVEL

PROJECT_INVOICE_CLASS

PROJECT_INVOICE_HDR_TYPE

PROJECT_INVOICE_LINE_TYPE

PROJECT_INV_STATUS

PROJECT_MFG_RESOURCE_TYPE

PROJECT_PRIORITY

PROJECT_PROGRESS_STATUS

PROJECT_REQUIREMENT_STATUS

PROJECT_RESOURCE_CLASS

PROJECT_RESOURCE_CODE

PROJECT_RESOURCE_TYPE

PROJECT_REVENUE_CATEGORY

PROJECT_REVENUE_STATUS

PROJECT_ROLE

PROJECT_SECURITY

PROJECT_SERVICE_TYPE

PROJECT_STATUS

PROJECT_SUB_STATUS

PROJECT_TASK_PRIORITY

PROJECT_TASK_STATUS

PROJECT_TASK_TYPE

PROJECT_TRANSACTION_SOURCE

PROJECT_TRANSFER_STATUS

PROJECT_TYPE

PROJECT_TYPE_CLASS

PROJECT_WORK_TYPE

How to include Fusion DFFs Into OBIA 11.1.1.7.1 In Premise Data Warehouse

Author: Saurabh Gautam

Introduction

This is a technote that explains the steps needed to extract Fusion descriptive flexfield (DFF) information into the on-premises Oracle BI Applications (OBIA) warehouse from on-premises Fusion applications (not SaaS).

Note: The OBIA changes have to be done manually.

Supported OBIA release: 11.1.1.7.1 onwards

Supported Apps release: Fusion Release 5 onwards

A. Enable and Deploy DFF

1. Enable the descriptive flexfield in Fusion Apps and mark the DFF/attributes as BI Enabled, e.g., enable PJC_EXP_ITEMS_DESC_FLEX.

2. Deploy the flexfield VO.

3. Refer to this link for more info.

B. Setup the rpd

1. Bring down the OBIEE server and presentation services.

2. Open the Oracle BI Applications repository file (*.rpd) via the admin tool.

3. Import the newly deployed DFF VO (e.g., FscmTopModelAM.PjcEiBIAM.FLEX_BI_PjcEi_VI) into the rpd. Select the appropriate logical table (e.g., Dim – Project Costing Details) while importing. This should import the DFF VO and also create the physical join to the appropriate VO, as in the screenshot below:

4. Make sure that the VO name is <= 80 characters; if not, create an alias on that VO with a name <= 80 characters.

5. Save the rpd and start the BI server.

C. ETL Changes in ODI

 1. Please note that the steps documented in this note follow our standard customization process. This is needed in order for our future ODI metadata patches to work in your repository. As part of the standard customization process, you will copy the existing mapping folder to a custom folder, make changes in the custom folder, delete the OOTB scenario from the original mapping folder, and then generate the new scenarios in the custom folder using the original OOTB scenario name.

Please refer to the customization guide before you start on this.

2. Open the ODI Studio client and log in to the appropriate repository. Go to the Model tab -> Fusion 1.0 -> Oracle Fusion 1.0 FSCM/HCM/CRM folder.

3. Import the newly deployed DFF VO using the RKM BIAPPS Oracle BI

4.

5. Open the Oracle BI Applications -> Oracle BI Applications model sub-folder and add the fields to the target DS/FS and D/F data warehouse (DW) tables in the correct folder.

6. Apply these target table changes to the target warehouse by doing an alter table.
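For example, the alter could look like the following (illustrative table and column names only; use your actual target tables and the DFF attribute columns you added in the ODI model):

-- Illustrative only: add the new DFF attribute columns to the staging and warehouse tables.
ALTER TABLE W_PROJ_COST_LINE_FS ADD (ATTRIBUTE1 VARCHAR2(150 CHAR), ATTRIBUTE2 VARCHAR2(150 CHAR));
ALTER TABLE W_PROJ_COST_LINE_F  ADD (ATTRIBUTE1 VARCHAR2(150 CHAR), ATTRIBUTE2 VARCHAR2(150 CHAR));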

7. Click the Designer tab and navigate to the appropriate SDE folder for Fusion under the BI Apps project: Mappings -> SDE_FUSION_V1_Adaptor. Duplicate the appropriate SDE folder and copy it to your CUSTOM_SDE folder.

8. Open the temporary interface (icon marked in yellow) in that custom folder.

9.

10. Pull the DFF VO into the mapping tab of the interface.

11. Join the DFF VO to the base VO and drag the DFF VO fields that need to be extracted into the DW into the right-hand target pane.

12. Open the main interface and map the fields from the temporary interface to the target.

13. Save all the objects. Before generating the new scenario, rename the original scenario in the base OOTB folder from which you copied.

14. Navigate to Packages -> Scenarios, right-click the scenario name, and select the ‘Generate’ option to generate the scenario. Rename the scenario to use the original out-of-the-box scenario name.

15. Similarly, copy the appropriate Dim or Fact in the SILOS folder to the CUSTOM_SILOS folder, and then map the new DS/FS fields to the D/F table in the main interface. Save. Before generating the new scenario, rename the original scenario in the base OOTB folder from which you copied.

16. Navigate to Packages -> Scenarios, right-click the scenario name, and select the ‘Generate’ option to generate the scenario. Rename the scenario to use the original out-of-the-box scenario name.

17. Unit test all the changes

D. RPD changes

1. Open the rpd in the Admin tool in your dev environment. In the physical layer, add the new fields to the modified _D/_F table under the DataWarehouse connection pool.

2. Drag the new fields from the alias to the BMM layer, rename them to give them business names, and drag them to the presentation layer.

3. Run the consistency check and save the rpd.

4. Deploy the modified rpd, restart the BI server, and test the new fields from an analysis in Answers.

E. Additional Information

General Details about 11.1.1.7.1 OBIA can be found here

Note: These fixes are to be applied in the right folder; e.g., apply them in the SDE_Fusion_Adaptor folder of the ODI repository if you are running the Fusion application. If you have customized the maps mentioned above, then please apply the steps above carefully.

Monday Jun 10, 2013

Oracle BI Apps - Support for Unit of Measure conversion.

Authored by : Manjunath Gangadhar

Overview

UOM or Unit Of Measure is the method of expressing the quantity of an item so that it can be tracked and accounted for in a system.

A data warehouse that stores and analyzes Items or Products must have a UOM standard defined.

This post describes the OBIA requirements, design, and implementation of UOM conversion.

What is UOM Conversion

Let’s start with an example:

Let’s say on 15th Jan, 2013, 100 Ea of Mont Blanc pens were sold. Here Ea is Each, a unit of measure that says 100 individual Mont Blanc pens were sold.

On 25th Jan, 2013, 10 DZN of Mont Blanc pens were sold.

In this example, Ea (Each) and DZN (Dozen) are the UOMs for the item Mont Blanc pens.

Let’s say the user wants to see a report that provides the Total Number Of Mont Blanc Pens sold in Jan 2013. If we just sum the two numbers (100 + 10) and show 110 Total Pens sold, that would be wrong.

Instead, the total should be 220 Ea, since 1 DZN = 12 Ea (100 + 10 × 12 = 220).

This conversion, where 1 DZN = 12 Ea, is what we call the UOM conversion.

Why is UOM needed in OBIA

OBIA caters to various source systems like EBS, PSFT, Siebel, JDE, Fusion etc. Each system has its unique way of defining UOMs in the OLTP. The systems also define the UOM conversion to aggregate and show data correctly.

Most of our customers implement multiple OLTP systems. This presents a unique opportunity in BI to define a universal conversion as the data flows into OBIA DW.

Source Systems

Let’s take examples of some source systems that OBIA supports.

EBS

EBS allows a “Standard UOM” at the item level, defined for the master organization. This Standard UOM can be overridden at the individual inventory org level and is defined as the “Primary UOM”. (An inventory org is the organization/location where the item is physically manufactured or stocked.)

EBS also allows UOMs to be classified into various UOM Classes. UOM classes are used to group together units of measure that have similar characteristics or that measure the same dimension. Example: Length, Area, Weight, Volume, Time etc.

The conversions within EBS are of two types:

Inter Class: meaning across classes. These conversions are stored in EBS, from one UOM class to another UOM class at the Base UOM level. The Base UOM code is the lowest UOM Code for the UOM Class e.g., ml for the Volume Class, inches for the Length Class, etc.

Intra Class: meaning within the class. These conversions are stored in EBS, from one UOM Code to the Base UOM Code within that UOM Class.

Further the UOM conversions can be Item specific (e.g., Pens 1 Box = 100 Ea) or Item Independent (e.g., 1 DZN = 12 Ea)

PeopleSoft

Item is defined with a Standard UOM at the Item Level for the Business Unit (BU). This UOM can be overridden at the individual Inventory Org Level and is defined as the “Primary UOM”.

UOM conversions are defined in both directions, i.e., UOM A to UOM B and vice versa.

Further the UOM conversions can be Item specific (e.g., Pens 1 Box = 100 Ea) or Item Independent (e.g., 1 DZN = 12 Ea)

Fusion

The Fusion OLTP UOM design is very similar to EBS. It has the same concepts of UOM class and of inter-class and intra-class conversions. The EBS concepts of Standard UOM and Primary UOM serve the same purpose in Fusion Apps as well.

Siebel does not have a concept of UOM Conversion.

OBIA Design and Solution

OBIA UOM Extraction Logic

EBS/Fusion – Both the inter-class and intra-class conversions related to an item, as well as conversions that are independent of an item (called Resource UOMs), are extracted into OBIA.

PSFT – PeopleSoft does not have a concept of UOM class. However, the item-specific and item-independent UOMs and their related conversions are similar, and both are extracted into OBIA.

Conformed UOM Domain

OBIA Design uses the Conformed Domains Concept for UOM. Since OBIA DW contains data from different source systems, and UOM codes/names could be different across sources, the Conformed Domain for UOM is helpful.

Example – A dozen could be called DZ in EBS and Dzn in PSFT. However when we analyze Facts or quantities along with the related UOM in OBIA, we use a conformed UOM which could be called “Dozen”.

Configuration Manager and UOMs

Let’s dig into some more details on UOM domains. We have a source domain called UOM and a conformed domain W_UOM. The source domains get populated through the Domains ETL, which the user executes before running the main ETL. Once the source domain UOMs are present in Configuration Manager, the customer has two options to map the source UOMs to conformed UOMs–

  1. Enter all the Conformed UOM codes into CM and map the Source UOMs to Conformed UOMs. Example – Map DZ of EBS and Dzn of PSFT to the newly created Conformed UOM Dozen.
  2. Use the “Sync to Source” option of CM. By using this method, we can create Conformed UOM codes for all the Source UOMs present in a particular Source system and CM automatically maps them in a 1:1 fashion. This is the preferred method since the conformed UOMs would be similar to that of one of the Source systems that the user has.

In case of multi source environments, the user will have to map the unmapped UOMs from the other Source System(s) to the corresponding Conformed UOMs.

UOM Data in OBIA PS1 (11.1.1.7.1)

W_UOM_CONVERSION_G is a table which stores the UOM Conversions. It stores attributes like From UOM, To UOM and Conversion Rate. Irrespective of whether source systems store both A->B and B->A conversions, this table will have conversion rates captured for both permutations. This table stores the Conformed UOMs.

Master Item and Inventory Item – The master item information is captured in W_PRODUCT_D, and W_INVENTORY_PRODUCT_D stores the inventory item information. As mentioned in previous sections, a UOM defined at the item level is called the Standard UOM, and a UOM defined at the inventory level is the Primary UOM. Hence, naturally, W_PRODUCT_D stores the Standard UOM and W_INVENTORY_PRODUCT_D holds the Primary UOM in OBIA.

These two tables store the source values for the UOMs. There are reusable lookups built upon these tables, LKP_PRODUCT_BASE_UOM and LKP_INVENTORY_PRODUCT_PRIMARY_UOM, which contain the logic to return the conformed UOM code.

Fact Tables will store the Conformed UOM codes. Fact tables will use the Conformed UOMs returned by the LKP_PRODUCT_BASE_UOM and LKP_INVENTORY_PRODUCT_PRIMARY_UOM during ETL for UOM Lookup and UOM Conversion Rate Lookup. The related UOM User Defined Functions (UDFs) are LKP_UOM_CONVERSION_CODE and LKP_UOM_CONVERSION_RATE.

As we have seen, ETL uses Conformed UOM codes at all places. Since the ETL has a dependency on the Source UOM to Conformed UOM Domain mapping, it becomes mandatory that the user does this Domain mapping before running the main ETL.

Pseudo-Code of UOM in OBIA

Fact ETL follows the pseudo-code below for UOM conversion; the conversion rate obtained here is used either in ETL or in the RPD to sum up the quantity-related metrics.

Fact Staging

Use the Reusable Lookups (LKP_PRODUCT_BASE_UOM and LKP_INVENTORY_PRODUCT_PRIMARY_UOM). Get the Primary Conformed UOM and Standard Conformed UOM.

->

Pass these as inputs, along with the Product ID and Inventory Org ID (present in fact staging and sourced from the source tables), to the UOM UDFs (LKP_UOM_CONVERSION_CODE and LKP_UOM_CONVERSION_RATE). The UDF logic is as follows:

If Primary Rate is null -> Return Standard Rate -> If that is null return default value of 1
If Primary Code is null -> Return Standard Code -> If that is null return the From UOM Code (the input value)

->

Fact Table

The fact tables might also store the UOM code; in such cases, the UOM stored in the fact tables is the conformed UOM code.
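A minimal SQL rendering of that lookup logic is sketched below. It is illustrative only — in OBIA the logic lives in the ODI lookups and UDFs named above — and the staging table name and the W_UOM_CONVERSION_G column names are assumptions.

-- Illustrative only: convert the transaction quantity to the Primary UOM,
-- falling back to the Standard UOM, and finally to a rate of 1.
SELECT f.quantity,
       COALESCE(prim.conversion_rate, std.conversion_rate, 1)              AS uom_conversion_rate,
       f.quantity * COALESCE(prim.conversion_rate, std.conversion_rate, 1) AS converted_quantity
FROM   fact_staging f
LEFT OUTER JOIN w_uom_conversion_g prim
       ON  prim.from_uom_code = f.txn_uom_code
       AND prim.to_uom_code   = f.primary_uom_code    -- from LKP_INVENTORY_PRODUCT_PRIMARY_UOM
LEFT OUTER JOIN w_uom_conversion_g std
       ON  std.from_uom_code  = f.txn_uom_code
       AND std.to_uom_code    = f.standard_uom_code;  -- from LKP_PRODUCT_BASE_UOM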

Technical Example

Let’s take a look at a dashboard page. On the Procurement -> Spend Analyzer -> Item Details dashboard page, we have a report called “Spend Details By Item”. The report shows the spend details for each item, along with item details and other metrics like Invoice Unit Price (average, minimum, maximum), Invoice Quantity, and UOM Code. A screenshot of the report is shown below. The Invoice Quantity metric uses the conversion rate that was calculated in ETL. Even though the transaction for the invoiced quantity could have happened in a different UOM (say, Box), we have converted it to Ea, and hence the summed-up quantity shows the accurate value seen in the report.


Screenshot showing the metrics: Invoice Quantity calculation uses the Standard Conversion Rate calculated in ETL as mentioned in section “Pseudo-Code of UOM in OBIA”.

Invoice Quantity calculation:

The report also shows the UOM Code column. This comes from the fact table itself, as shown by the lineage information below. Fact tables store the conformed UOMs.

The fact table W_AP_INV_DIST_F stores the conformed UOM code in this column. The conformed UOM code is evaluated in ETL using a UDF called Domain Member Map; its pseudo-code is shown below. It accepts the source UOM code and returns the conformed UOM code. The user should have done the Source UOM -> Conformed UOM mapping as mentioned in the section “Configuration Manager and UOMs” for this UDF to work correctly.

DOMAIN_MEMBER_MAP( 'UOM', <Source UOM Value>, #DATASOURCE_NUM_ID, 'W_UOM' )

Conclusion

UOM is widely used across subject areas and it plays an important role in evaluating quantity metrics. We exclusively use Conformed UOMs in Model, ETL and Reporting Layers as explained in this post.
