Monday Jun 08, 2015

Loading Custom Calendars using the Universal Adaptor

 

Author: Chuan Shi

Introduction

In the Oracle BI Apps time dimension, we support fiscal calendars extracted from customers' OLTP systems. However, there are situations where customers need to use a custom calendar that does not exist in the OLTP. For example,

  • Siebel customers do not have any fiscal calendars in the OLTP
  • EBS/PSFT/Fusion customers do not have the specific calendar they want to use in their OLTP.

In these cases, the custom fiscal calendar can be seeded in the following csv files (which are shipped OOTB) and extracted via the Universal adaptor:

  • file_mcal_cal_d.csv*
  • file_mcal_period_ds.csv
  • file_mcal_context_g.csv*

* Note that these two files are replaced by file_mcal_cal_ds.csv and file_mcal_context_gs.csv in Oracle BI Apps 11.1.1.10.1 release and beyond.

This blog article provides an explanation and an example of how to do this.

 

 

Load Custom Calendar via Universal Adaptor

A customer can load any custom fiscal calendar via the universal adaptor. There are three SDE tasks for fiscal calendars in this adaptor and they are:

  • SDE_Universal_TimeDimension_Calendar
  • SDE_Universal_TimeDimension_MCalPeriod
  • SDE_Universal_TimeDimension_MCalContext

These tasks load the fiscal calendar data seeded in the corresponding csv files into relevant DW tables. As long as the customer provides the fiscal calendar data correctly for the custom fiscal calendar in those csv files, this calendar will be loaded into the DW.

To demonstrate the idea, assume that a customer wants to load a fiscal calendar in which each fiscal year starts on July 1st of the current calendar year and ends on June 30th of the next calendar year; each fiscal period is a calendar month (i.e., the first period of each fiscal year runs from July 1st to July 31st); and the date range of the calendar is from July 1st, 2015 to June 30th, 2017 (i.e., two fiscal years, FY 2016 and FY 2017, are covered). Next, we discuss how to prepare the data for these three SDE tasks.

 

Prepare Data for FILE_MCAL_CAL_D.CSV

FILE_MCAL_CAL_D.CSV is used to register the custom calendar in W_MCAL_CAL_D. The granularity of this DW table is at the calendar level, and each row specifies a unique fiscal calendar. In the csv file, a customer is required to provide the information needed to populate this calendar. The columns are:

  • ROW_WID: A unique numerical ID that identifies the calendar. Note that you may have other calendars loaded from the OLTP or generated in the DW; each of those calendars, together with your custom calendar, must have a unique ROW_WID.
  • MCAL_CAL_ID: A string that specifies the ID of the custom calendar.
  • MCAL_CAL_NAME: A string that specifies the name of the custom calendar.
  • MCAL_CAL_CLASS: A string that specifies the class of the calendar, e.g., OLTP Sourced or Generated. For a custom calendar, OLTP Sourced can be used.
  • FISCAL_CALENDAR_FLG: Y if this is a fiscal calendar; else N.
  • BUDGETARY_CALENDAR_FLG: Y if this is a budgetary calendar; else N.
  • INTEGRATION_ID: The unique identifier of the calendar; it can be the same as the MCAL_CAL_ID.
  • DATASOURCE_NUM_ID: DSN for this calendar.
  • W_INSERT_DT: Insert date; format is YYYYMMDDHHMISS, e.g., 20150513000000.
  • W_UPDATE_DT: Update date; format is YYYYMMDDHHMISS, e.g., 20150513000000.
  • TENANT_ID: DEFAULT
  • X_CUSTOM: 0

Regardless of whether a column is a string or a number, do not use quotes in the csv file. Suppose that we name the custom calendar we are interested in Universal~Custom. Then the content of the sample csv file for this calendar is:
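A minimal illustrative row (this is a sketch only: the header of the shipped csv file defines the actual column order, and the values reuse ROW_WID 4999 and DSN 900 from the examples later in this article) could look like the following:

ROW_WID,MCAL_CAL_ID,MCAL_CAL_NAME,MCAL_CAL_CLASS,FISCAL_CALENDAR_FLG,BUDGETARY_CALENDAR_FLG,INTEGRATION_ID,DATASOURCE_NUM_ID,W_INSERT_DT,W_UPDATE_DT,TENANT_ID,X_CUSTOM
4999,Universal~Custom,Universal~Custom,OLTP Sourced,Y,N,Universal~Custom,900,20150513000000,20150513000000,DEFAULT,0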


 

Prepare Data for FILE_MCAL_PERIOD_DS.CSV

FILE_MCAL_PERIOD_DS.CSV is used to load all the fiscal periods of the custom calendar into W_MCAL_PERIOD_DS. The granularity of this DW table is at the fiscal period level, and each row specifies a unique fiscal period. In the csv file, a customer is required to provide the information needed to populate all the periods of the calendar. The columns are:

  • MCAL_CAL_ID: A string that specifies the ID of the custom calendar.
  • MCAL_CAL_NAME: A string that specifies the name of the custom calendar.
  • MCAL_CAL_CLASS: A string that specifies the class of the calendar, e.g., OLTP Sourced or Generated. For a custom calendar, OLTP Sourced can be used.
  • ADJUSTMENT_PERIOD_FLG: Y if the period is an adjustment period; else N.
  • MCAL_PERIOD_TYPE: A meaningful string that specifies the type of the period, e.g., Custom.
  • MCAL_PERIOD_NAME: Name of the fiscal period.
  • MCAL_PERIOD: A number that gives the period number, e.g., 1, 2, etc.
  • MCAL_PERIOD_ST_DT: Period start date; format is YYYYMMDDHHMISS, e.g., 20150701000000.
  • MCAL_PERIOD_END_DT: Period end date; format is YYYYMMDDHHMISS, e.g., 20150731000000.
  • MCAL_QTR: A number that gives the fiscal quarter (e.g., 1, 2, 3, or 4) to which this period belongs.
  • MCAL_YEAR: Four-digit number that gives the fiscal year (e.g., 2016) to which this period belongs.
  • MCAL_QTR_START_DT: Start date of the fiscal quarter to which this period belongs; format is YYYYMMDDHHMISS, e.g., 20150701000000.
  • MCAL_QTR_END_DT: End date of the fiscal quarter to which this period belongs; format is YYYYMMDDHHMISS, e.g., 20150930000000.
  • MCAL_YEAR_START_DT: Start date of the fiscal year to which this period belongs; format is YYYYMMDDHHMISS, e.g., 20150701000000.
  • MCAL_YEAR_END_DT: End date of the fiscal year to which this period belongs; format is YYYYMMDDHHMISS, e.g., 20160630000000.
  • INTEGRATION_ID: The unique identifier of the fiscal period; it is suggested that you use the concatenation of MCAL_CAL_ID and MCAL_PERIOD_NAME.
  • PERIOD_CLASSIFICATION_CODE: __NOT_APPLICABLE__
  • DATASOURCE_NUM_ID: DSN for this calendar.
  • W_INSERT_DT: Insert date; format is YYYYMMDDHHMISS, e.g., 20150513000000. You can leave it empty if you don't want to provide it.
  • W_UPDATE_DT: Update date; format is YYYYMMDDHHMISS, e.g., 20150513000000. You can leave it empty if you don't want to provide it.
  • TENANT_ID: DEFAULT
  • X_CUSTOM: 0

Regardless of whether a column is a string or a number, do not use quotes in the csv file. It is very important to provide correct values for crucial columns such as the start and end dates of fiscal periods, quarters, and years. If they are incorrect, they will be loaded into W_MCAL_PERIOD_D as bad data, and when this table is aggregated to load W_MCAL_QTR_D and W_MCAL_YEAR_D, there could be ETL failures.

As in our example:

  • Each fiscal year starts from July 1st of the current year and ends on June 30th of the next year;
  • Each fiscal period is a month.

Using fiscal year 2016 as an example, the 12 fiscal periods should have the following values (also be careful about the leap year):


The content of the sample csv file for all the periods in our example is:
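Since our calendar covers FY 2016 and FY 2017, the file would contain 24 period rows. As a sketch only (assuming the column order listed above; the period names and integration IDs are illustrative), the first period of FY 2016 and the leap-year February period could look like:

MCAL_CAL_ID,MCAL_CAL_NAME,MCAL_CAL_CLASS,ADJUSTMENT_PERIOD_FLG,MCAL_PERIOD_TYPE,MCAL_PERIOD_NAME,MCAL_PERIOD,MCAL_PERIOD_ST_DT,MCAL_PERIOD_END_DT,MCAL_QTR,MCAL_YEAR,MCAL_QTR_START_DT,MCAL_QTR_END_DT,MCAL_YEAR_START_DT,MCAL_YEAR_END_DT,INTEGRATION_ID,PERIOD_CLASSIFICATION_CODE,DATASOURCE_NUM_ID,W_INSERT_DT,W_UPDATE_DT,TENANT_ID,X_CUSTOM
Universal~Custom,Universal~Custom,OLTP Sourced,N,Custom,FY2016 Period 01,1,20150701000000,20150731000000,1,2016,20150701000000,20150930000000,20150701000000,20160630000000,Universal~Custom~FY2016 Period 01,__NOT_APPLICABLE__,900,,,DEFAULT,0
Universal~Custom,Universal~Custom,OLTP Sourced,N,Custom,FY2016 Period 08,8,20160201000000,20160229000000,3,2016,20160101000000,20160331000000,20150701000000,20160630000000,Universal~Custom~FY2016 Period 08,__NOT_APPLICABLE__,900,,,DEFAULT,0

Note that period 8 of FY 2016 ends on 20160229000000 because February 2016 falls in a leap year.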


 

Prepare Data for FILE_MCAL_CONTEXT_G.CSV

Note that not all customers may need this file. FILE_MCAL_CONTEXT_G.CSV is mainly used by Financials customers, who have different organizations or ledgers, each of which may use a different fiscal calendar. The relationship between an organization (or ledger) and its fiscal calendar is therefore stored in the W_MCAL_CONTEXT_G table. (Siebel customers do not need to configure this csv file.)

In the csv file, a customer is required to provide the information needed to populate this table. The columns are:

  • ORG_ID: A string that specifies the ID of the organization; either this or LEDGER_ID must not be null.
  • ORG_CODE: A string that specifies the code of the organization; this can be null.
  • LEDGER_ID: A string that specifies the ID of the ledger; either this or ORG_ID must not be null.
  • LEDGER_NAME: A string that specifies the name of the ledger; this can be null.
  • CALENDAR_ID: The ID of the calendar, e.g., Universal~Custom in our example.
  • CALENDAR_NAME: The name of the calendar.
  • INTEGRATION_ID: The unique identifier of the org/ledger and calendar combination. For example, you can concatenate ORG_ID or LEDGER_ID with CALENDAR_ID; you can also prefix the concatenated string with GL to stand for general ledger, e.g., GL~3142~Universal~Custom.
  • MCAL_CAL_WID: The WID of the calendar. This must be the same as the ROW_WID chosen for this calendar. For example, we use 4999 as the ROW_WID for the calendar Universal~Custom; if we use Universal~Custom as the calendar for this org or ledger, then we need to put 4999 for MCAL_CAL_WID.
  • DATASOURCE_NUM_ID: DSN.
  • W_INSERT_DT: Insert date; format is YYYYMMDDHHMISS, e.g., 20150513000000.
  • W_UPDATE_DT: Update date; format is YYYYMMDDHHMISS, e.g., 20150513000000.
  • TENANT_ID: DEFAULT
  • X_CUSTOM: 0

Regardless of whether a column is a string or a number, do not use quotes in the csv file. Using the Universal~Custom calendar from our example, the content of the sample csv file is:
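As a sketch (assuming the column order listed above, using ledger ID 3142 from the INTEGRATION_ID example, and leaving the organization columns and LEDGER_NAME null since either an org or a ledger is sufficient), a row could look like:

ORG_ID,ORG_CODE,LEDGER_ID,LEDGER_NAME,CALENDAR_ID,CALENDAR_NAME,INTEGRATION_ID,MCAL_CAL_WID,DATASOURCE_NUM_ID,W_INSERT_DT,W_UPDATE_DT,TENANT_ID,X_CUSTOM
,,3142,,Universal~Custom,Universal~Custom,GL~3142~Universal~Custom,4999,900,20150513000000,20150513000000,DEFAULT,0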


 

Add Universal Adaptor to the Generated Execution Plan

Once these csv files are ready, the customer should generate a load plan that includes the Universal adaptor so that these tasks are picked up in the generated execution plan (EP). Running the EP will then load the custom calendar specified in these csv files into the DW, and the customer can use it later in reporting.

 

Modify or Extend Custom Calendar during Incremental Loads

Before the full load, customers are advised to prepare data in file_mcal_period_ds.csv for all the fiscal years they want the fiscal calendar to cover. However, it is quite possible that customers will later want to extend a calendar loaded via the Universal adaptor, or to make corrections to existing data loaded via the Universal adaptor. In these cases, customers need to modify the corresponding csv files before the next incremental load.

  • If customers need to extend a custom calendar, they need to add period entries to file_mcal_period_ds.csv for the fiscal years they want to extend the calendar to. Existing entries can either be kept in the csv file or removed. The w_insert_dt and w_update_dt of the newly inserted rows should be provided correctly in the csv files.
  • If customers need to correct a specific entry in the csv file, they should make sure the w_update_dt column of that entry is updated appropriately so that the incremental logic can pick it up during the next incremental load.

 

Use the Custom Calendar as the Enterprise Calendar

Since the custom calendar loaded via the Universal adaptor is a fiscal calendar, it can be used as the enterprise calendar. Recall that the enterprise calendar is simply a chosen fiscal calendar. This means the customer must first load the custom calendar into the W_MCAL% tables via the Universal adaptor and is then free to choose it as the enterprise calendar.

To set this custom calendar as the enterprise calendar, the customer should set GBL_CALENDAR_ID and GBL_DATASOURCE_NUM_ID correctly in BIACM. For the calendar considered in this article, the calendar the customer wants to use is called Universal~Custom and its DSN is 900. Therefore, in BIACM, the customer should set GBL_CALENDAR_ID to Universal~Custom and GBL_DATASOURCE_NUM_ID to 900. After ETL is run, the W_ENT% tables will be populated. (Note that W_ENT_WEEK_D will NOT be populated, since the customer is not using a DW generated calendar as the enterprise calendar.)

For more on how to set the enterprise calendar, the readers are referred to the following article:

https://blogs.oracle.com/biapps/entry/configuring_enterprise_calendar_for_oracle

Wednesday Apr 22, 2015

Customer RFM Analysis


Author: Grace Wang

Customer RFM

RFM (Recency, Frequency, Monetary) is a marketing technique used to determine quantitatively which customers are the best ones by examining:

  • Recency – How recently did the customer purchase?
  • Frequency – How often do they purchase?
  • Monetary Value – How much do they spend?

The method is based on the marketing axiom that “80% of your business comes from 20% of your customers.” The RFM methodology provides an easy and economical way to help a business identify those valuable customers and is widely used in various industries. A case study performed at the University of North Carolina Kenan-Flagler Business School has shown:

  • Customers who purchased recently were more likely to buy again versus customers who had not purchased in a while
  • Customers who purchased frequently were more likely to buy again versus customers who had made just one or two purchases
  • Customers who had spent the most money in total were more likely to buy again.  The most valuable customers tended to continue to become even more valuable

RFM Methodology

There are variations on how RFM is done. The most common method starts from a table with Customer Name, Date of Purchase, and Purchase Value.

  • Using RFM analysis, customers are ranked and assigned a ranking number of 1,2,3,4, or 5 (with 5 being highest) based on quintile distribution for each RFM parameter.
  • The ranking numbers make up the score in each category (Recency Score / Frequency Score / Monetary Score); the three scores together are referred to as an RFM "cell", e.g., 555 or 231.
  • The Customer data is sorted to determine which customers were "the best customers" in the past, with a cell ranking of "555" being ideal.

Customer RFM in OBIA

In OBIA, Customer Recency / Frequency / Monetary category assignment and score calculation are done at the PLP (Post Load Process) stage during ETL. When the CRM Revenue Fact table (W_REVN_F) completes data loading, a PLP task accesses the “won” opportunities in the CRM Revenue Fact table and evaluates RFM based on the Opportunity Closed Date for Recency, the Number of Opportunities for Frequency, and the Opportunity Closed Revenue for Monetary.

Each Customer is then ranked in the three categories and assigned a Recency Category Code and Recency Score, a Frequency Category Code and Frequency Score, and a Monetary Category Code and Monetary Score. The RFM cell score is also calculated for each customer.


The RFM category codes and scores are computed in one task (PLP_PartyDimension_RFMInfo) and stored in the Customer Dimension table (W_PARTY_D) through one update task (PLP_PartyDimension_UpdateRFMInfo).
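The exact quintile logic lives inside the PLP task, but purely as a rough sketch of the idea (not the shipped ODI code), and assuming a customer key column such as CUSTOMER_WID on W_REVN_F plus an adaptor-specific filter for won opportunities, the three scores could be derived with window functions like this:

-- Sketch only: CUSTOMER_WID and the won-opportunity filter are assumptions.
SELECT customer_wid,
       NTILE(5) OVER (ORDER BY MAX(close_dt_wid))        AS recency_score,   -- later close date = higher score
       NTILE(5) OVER (ORDER BY COUNT(DISTINCT opty_wid)) AS frequency_score, -- more won opportunities = higher score
       NTILE(5) OVER (ORDER BY SUM(doc_closed_revn))     AS monetary_score   -- more closed revenue = higher score
FROM   w_revn_f
-- WHERE <restrict to "won" opportunities; the filter column depends on the adaptor>
GROUP  BY customer_wid;
-- The RFM cell (e.g., 555) is then recency_score*100 + frequency_score*10 + monetary_score.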


Below is a quick glance of RFM ETL process:


RECENCY – How long ago did the customer last make a purchase?

  • Source: MAX(W_REVN_F.CLOSE_DT_WID)
  • Target: W_PARTY_D.W_RECENCY_CAT_CODE and W_PARTY_D.RECENCY_SCORE
  • Ranking: Rank customers by the WON opportunity Closed Date; the later the Closed Date, the higher the rank.
  • Domain Codes: W_DBM_QUINTILE_TYPE, W_RECENCY_CAT_CODE

FREQUENCY – How many purchases has the customer made?

  • Source: COUNT(DISTINCT W_REVN_F.OPTY_WID)
  • Target: W_PARTY_D.W_FREQUENCY_CAT_CODE and W_PARTY_D.FREQUENCY_SCORE
  • Ranking: Rank customers by the number of WON opportunities; the more opportunities, the higher the rank.
  • Domain Codes: W_DBM_QUINTILE_TYPE, W_FREQUENCY_CAT_CODE

MONETARY – How much revenue has the customer generated?

  • Source: SUM(W_REVN_F.DOC_CLOSED_REVN)
  • Target: W_PARTY_D.W_MONETARY_CAT_CODE and W_PARTY_D.MONETARY_SCORE
  • Ranking: Rank customers by the WON revenue; the higher the revenue, the higher the rank.
  • Domain Codes: W_DBM_QUINTILE_TYPE, W_MONETARY_CAT_CODE


Customer RFM in RPD

A total of ten logical columns related to RFM are defined in the Customer Dimension in the OBIA RPD.

  • Recency Category Code: Contains one of the values 1st Quintile, 2nd Quintile, 3rd Quintile, 4th Quintile, 5th Quintile.
  • Recency Category: The descriptive value of the Recency Category Code: Most Recent, Very Recent, Recent, Somewhat Recent, Less Recent.
  • Recency Score: Contains the score from 1 to 5 for Recency.
  • Frequency Category Code: Contains one of the values 1st Quintile, 2nd Quintile, 3rd Quintile, 4th Quintile, 5th Quintile.
  • Frequency Category: The descriptive value of the Frequency Category Code: Most Frequent, Very Frequent, Frequent, Somewhat Frequent, Less Frequent.
  • Frequency Score: Contains the score from 1 to 5 for Frequency.
  • Monetary Category Code: Contains one of the values 1st Quintile, 2nd Quintile, 3rd Quintile, 4th Quintile, 5th Quintile.
  • Monetary Category: The descriptive value of the Monetary Category Code: Most Valued, Very Valued, Valued, Somewhat Valued, Less Valued.
  • Monetary Score: Contains the score from 1 to 5 for Monetary.
  • RFM Score: Contains the value of the RFM cell, ranging from 111 to 555.

Sample Customer RFM Webcat

 


More Applications

RFM is an easy and economic way to mine current data to provide a quick snapshot of customers. As a Customer dimension attribute, the RFM columns can be used as filters in various Customer-related reports to produce more insightful analysis. At the same time, the RFM methodology can be extended to different subject areas. For example,

  • Instead of the Revenue fact, RFM can be performed on the Web Visit and Web Submit facts from online marketing. We could identify which cluster of customers visited the website most recently and most frequently with high order amounts.
  • Using the Purchase Order fact from Spend, RFM can be applied to Suppliers. We could identify which suppliers are used most recently and most frequently by the company and account for the most spending.
  • With RFM scores captured on a periodic basis, RFM can also help detect changes in customer purchasing behavior. For example, if we keep the monthly RFM scores for each customer in a separate table, then by doing time series analysis, a declining RFM score trend may indicate that the customer is losing interest, while an ascending RFM score shows the customer is picking up momentum.

Thursday Feb 05, 2015

ETL Design for sharing the sequence ID

Author: Archana Dixit

Introduction

This blog describes an ETL design to share the same sequence ID among several tables. The Party dimension is an exceptional case where records from different streams merge into one table: organization, person, and group party types are loaded into separate tables W_PARTY_ORG_D, W_PARTY_PER_D, and W_PARTY_GROUP_D and are later merged into a master table, W_PARTY_D.

 

These dimensions are slowly changing dimensions out of the box in BI Applications. They also carry a slowly changing type 1 WID named SCD1_WID, which keeps the same value on the new record even in the case of an SCD type 2 change. For example, if the Organization Name is changed from ‘ABC Software’ to ‘ABCD Software’, the new current record still has the same value for SCD1_WID.

 

This WID is populated with a sequence-generated numeric value. The knowledge modules (KMs) use a separate DB sequence for each target table while loading data into the W_PARTY_ORG_D, W_PARTY_PER_D, and W_PARTY_GROUP_D tables, resulting in different sequence numbers in the SCD1_WID column of the master table W_PARTY_D. The following steps describe an approach to share the same sequence.

Step 1:

The KM option ‘OBI_SCD1_WID’ should be disabled to prevent the KM from creating separate DB sequences. Set the default value of the OBI_SCD1_WID IKM option to ‘false’ as shown in the screenshot below.

 

Step 2:

Create a mapping to populate the helper table W_PARTY_PER_T1_D. The source for this mapping should be the corresponding staging table, W_PARTY_PER_DS. The mapping expression for the SCD1_WID column should read from the sequence created by the previous load stream (in this case, the W_PARTY_ORG_D load). Set the mapping expression to NEXTVAL(W_PARTY_ORG_S1W) and uncheck the ‘Update’ checkbox.
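As a rough illustration of this step (a conceptual sketch, not the ODI-generated code), and assuming the helper table carries at least INTEGRATION_ID, DATASOURCE_NUM_ID, and SCD1_WID, the load is equivalent to:

-- Sketch only: the helper-table column list is an assumption; W_PARTY_ORG_S1W is the
-- sequence created by the W_PARTY_ORG_D load stream, as described above.
INSERT INTO w_party_per_t1_d (integration_id, datasource_num_id, scd1_wid)
SELECT stg.integration_id,
       stg.datasource_num_id,
       w_party_org_s1w.NEXTVAL
FROM   w_party_per_ds stg;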

 

Step 3:

In the Flow tab, set the DETECTION_STRATEGY IKM option to ‘NONE’.

 

Step 4:

Configure the load plan components to execute the scenarios loading W_PARTY_PER_T1_D, W_PARTY_PER_D, and W_PARTY_D in serial mode, in the following order.

 

Tuesday Jan 27, 2015

Fact Table Partitioning with Oracle BI Applications

Authors:
Patrick Block, Principal Member Technical Staff, Oracle BI Apps Development
Simon Miller, Director OBIA, Oracle Customer Engineering and Advocacy Lab (CEAL team)

Support for table partitioning was introduced to the BI Applications 11.x release series starting with version 11.1.1.8.1. Partitioning support is largely automated: you define the partitions on the fact table directly in the database, and then the BI Apps fact table IKMs and the Oracle database do the work. The bitmap indexes on the fact table are automatically created as local (rather than the default global) indexes, and only the bitmap indexes on the partitions that have data changes are rebuilt. The combination of local indexes, which can be created in parallel on each partition, and rebuilding only the required indexes considerably reduces the overall time taken to load the fact or aggregate table.

While the documentation on the topic: BI Application Configuration Guide - Review Table Partitioning for Human Resources Analytics, uses an HR Analytics example, this feature is supported for all BI Apps Fact tables. The following blog post details how Fact Partitioning can be implemented.[Read More]

Thursday Jan 22, 2015

DB Link with Oracle BI Applications

Authors: Simon Miller, Director OBIA, Oracle Customer Engineering and Advocacy Lab (CEAL team) and Patrick Block, Principal, OBIA Development

In an earlier blog post, 3 Modes for Moving Data to the BI Applications DW from a Source Application Database, the available options for setting up data source connections were described in detail, along with screenshots. Note that this information, written for OBIA 11.1.1.7.1, is still applicable to 11.1.1.8.1. While not all customers will deploy the SDS (Source Dependent Store with GoldenGate), most customers should consider using the DB link option. A DB link allows direct Oracle-database-to-Oracle-database communication, so the data does not need to flow through your ODI Agent/WLS. The objective of this blog is to elaborate on the above-mentioned post and add tips and troubleshooting guidance.

[Read More]

Thursday Jul 31, 2014

Configure DW generated 4-4-5 calendar and 13 period calendar

Author: Chuan Shi

Introduction

Oracle BI Apps provides two data warehouse generated fiscal calendars OOTB: the 4-4-5 calendar (with three different disciplines, i.e., 4-4-5, 4-5-4, and 5-4-4, one of which can be chosen) and the 13 period calendar. These fiscal calendars are populated according to the configuration rules chosen by the customer, such as when the calendar starts, when it ends, and whether the customer wants to use the 4-4-5, 4-5-4, or 5-4-4 discipline for the 4-4-5 calendar. This is done by configuring file_mcal_config_g.csv, which is used by the ETL logic. The purpose of this article is to explain how to configure file_mcal_config_g.csv. To start with, we briefly explain the 4-4-5 calendar and the 13 period calendar.

4-4-5 Calendar

This calendar is generally used for weekly and period analysis. Each fiscal year is composed of twelve 4-4-5 periods. A 4-4-5 period is defined to be either 4 or 5 weeks long. A week can start on any day (Sunday, Monday, etc.) and ends on the 7th day after the start day. For example, if the customer starts weeks on Monday, then all weeks end on the following Sunday. A 4-4-5 calendar can use a 4-4-5 discipline, a 4-5-4 discipline, or a 5-4-4 discipline.

In a 4-4-5 discipline, the first period is composed of the first 4 weeks of a given fiscal year. The next 4 weeks compose the second period. The next 5 weeks compose the third period, and so on. For example, the first three periods of a 4-4-5 discipline are shown in Figure 1. (The 5-4-4 and 4-5-4 disciplines are defined similarly.)

1 Week = 7 days (the first period can start from any day – configurable)

Figure 1. First three periods of a 4-4-5 discipline

A regular 4-4-5 calendar has 52 = (4+4+5 + 4+4+5 + 4+4+5 + 4+4+5) weeks.

13 Period Calendar

This calendar is used when period-to-period comparisons are needed. (Note that the 4-4-5 calendar is not good for this, since a 5-week period is 25% longer than a 4-week period.) It is mostly used in Retail. Each fiscal year is composed of 13 periods, where each period is 4 weeks long. Again, a week can start on any day (Sunday, Monday, etc.) and ends on the 7th day after the start day. Figure 2 shows the first three periods of a 13 period calendar.

1 Week = 7 days (the first period can start from any day – configurable)

Figure 2. First three periods of a 13 period calendar

A regular 13 period calendar has 13 periods where each period has 4 weeks. Therefore, a regular 13 period calendar also contains 52 weeks.

52 Weeks vs. 53 Weeks

From the introduction above, we see that for both the 4-4-5 calendar and the 13 period calendar, a regular fiscal year contains 52 weeks, or equivalently 364 days. However, a Gregorian calendar year contains either 365 days or 366 days (in leap years). So, if we require every fiscal year (of the 4-4-5 calendar or the 13 period calendar) to have precisely 364 days, gaps will accumulate between the fiscal calendar cycle and the Gregorian calendar cycle.

Therefore, what happens is that after every several 52-week fiscal years, the difference between the fiscal calendar cycle and the Gregorian calendar cycle reaches 7 days. Once that happens, we allocate those 7 days to the last period of the current fiscal year. This means that after every several 52-week fiscal years, there is a 53-week fiscal year, in which the last period has one extra week. For example, in a 4-4-5 discipline, the last period of a 53-week fiscal year has 6 weeks instead of 5. In a 13 period calendar, the last period of a 53-week fiscal year has 5 weeks instead of 4.

Whether a fiscal year has 52 or 53 weeks depends on the start date and end date of that fiscal year. (For example, a customer can choose to load the first fiscal year with 53 weeks.) The start date of the first fiscal year is chosen by the customer. The end date of the first fiscal year, as well as the start and end dates of subsequent fiscal years of the calendar, are determined by the ETL logic, which uses a parameter called REFERENCE_DATE that is also set by the customer. We discuss these next.

Configuring DW Generated Calendars

For the two DW generated calendars, customers are allowed to configure the following:

  • Start date of the first fiscal year of a given calendar
  • End date of the last fiscal year of a given calendar
  • Discipline of the 4-4-5 calendar
  • Reference date that determines if a fiscal year has 52 or 53 weeks

The configuration is done by using the csv file named file_mcal_config_g.csv, whose content is provided below:

Description,,,,,,,,,,,,
Examples,,,,,,,,,,,,
Last Updated By,,,,,,,,,,,,
Last Updated On,,,,,,,,,,,,
CALENDAR_ID,CALENDAR_NAME,CALENDAR_CLASS,PERIOD_TYPE,CAL_ST_DT,CAL_END_DT,CAL_OFFSET,WEEK_ALLOCATION_RULE,REFERENCE_DATE,W_INSERT_DT,W_UPDATE_DT,TENANT_ID,X_CUSTOM
10000,4-4-5,Generated,4-4-5,20021229000000,20301231000000,0,4-4-5,1230,,,DEFAULT,0
10001,13,Generated,13,20021229000000,20301231000000,0,13,1230,,,DEFAULT,0

Customers are required to provide the desired values for both the 4-4-5 calendar and the 13 period calendar. The columns to be configured are:


The first three configurable columns (CAL_ST_DT, CAL_END_DT, and WEEK_ALLOCATION_RULE) are straightforward, so we only explain the last one, REFERENCE_DATE. Our ETL logic assumes that the end date of a given fiscal year must be within +/- 3 days of the REFERENCE_DATE. For example, if a customer sets REFERENCE_DATE to 1230 (Dec 30), then the ETL logic knows that the end date of any given fiscal year should fall between DEC-27 (DEC-30 minus 3 days) and JAN-02 (DEC-30 plus 3 days).

Recall that a week starts on the day that CAL_ST_DT falls on and ends on the 7th day after it. As a result, in this example with CAL_ST_DT = 20021229000000 (a Sunday) and REFERENCE_DATE = 1230, the end date of a given fiscal year will be the unique Saturday that falls between December 27 and January 2 inclusive. Once the end date of the fiscal year is determined, the number of weeks is also determined. This is how the ETL logic determines whether a given fiscal year has 52 or 53 weeks.

The table below shows the data for fiscal years 2002 to 2014 for a 5-4-4 discipline calendar with REFERENCE_DATE = 1230. We can verify that the MCAL_YEAR_END_DT of each of these years falls between Dec 27 and Jan 02. Some of the fiscal years have 53 weeks while the rest have 52 weeks.

Thursday May 08, 2014

BI apps 11.1.1.8.1 is available now

Oracle Business Intelligence (BI) Applications 11.1.1.8.1 is now available on the Oracle Software Delivery Cloud (eDelivery) and on the Oracle BI Applications OTN page. This is the second major release on the 11g code line leveraging the power of ODI, and it is certified with the latest version of Oracle BI Foundation, 11.1.1.7.

 Highlights from the announcement.

What’s New

This new release of Oracle Business Intelligence Applications 11g adds:

1. Prebuilt Connector for Oracle Procurement and Spend Analytics with Oracle JD Edwards

2. Expanded functional content coverage across existing Oracle BI Applications

3. New utilities and enhancements to increase productivity and further drive down Total Cost of Ownership (TCO)

New Prebuilt Connector for BI Applications

· Oracle Procurement and Spend Analytics introduces a new adapter that provides out-of-box integration between Oracle Procurement and Spend Analytics and Oracle’s JD Edwards EnterpriseOne, enabling purchasing and sourcing organizations quickly identify savings opportunities and improve operational performance through decision-ready, best practice analytics.

New Content Enhancements: Significant expansion of content improves existing Oracle BI Applications including:

· Oracle Human Resources Analytics expands analysis on HR programs and workforce performance with the introduction of Talent Management Analysis, to help HR and business managers assess talent strengths and build potential leaders by delivering greater insight into job profiles. Improvements to Absence Management, new support for Workforce Frozen Snapshots and Workforce Flex Fields are also available.

· Oracle Financial Analytics improves company Financial Performance with the new Subledger Accounting Module, providing complete account balance analysis, including supporting reference balances and drill down to accounting transactions. Aging Snapshot restoration for Receivables and Payables, drilldown from Payables to Purchase Orders, and Fixed Assets support for GL Posting Status are features also included in the release.

· Oracle Project Analytics expands insight into project-based data with new Earned Value Management Analysis, providing visibility for Project Stakeholders into Planned Value, Earned Value, and Actual Cost. Analysis of variances and indices, derived from Cost and Schedule Variance, CPI, TCPI, SPI and TSPI, is also available.

· Oracle Supply Chain & Order Management Analytics introduces Costing and Inventory Turn support for Oracle JDE EnterpriseOne. In addition, the ability to save and restore the Inventory Snapshot when executing a full load is now supported.

· Oracle Student Information Analytics introduces new Financial Aid Analysis, containing term-based detail information associated with a student award such as amount, status, disbursement, and aid type.

· Oracle Manufacturing Analytics and Enterprise Asset Management Analytics have been re-introduced into the BI Applications data model. Although new with ODI, both modules now provide the same broad functionality available with BI Applications 7.9.6.4.

· Unstructured Data for Information Discovery – The BI Applications Data Model has been extended to include unstructured CRM and ERP text fields. Sample Information Discovery templates – including ODI interfaces and runtime security definitions –provide the ability to quickly create sample Information Discovery Applications from combined structured and unstructured data.

New Utilities and Enhancements to increase productivity and drive down TCO

· New Data Lineage Tool allows customers and implementers to track lineage from source to target – improving information transparency across their enterprise

· Configuration Management expands System Setup and Load Plan definition capabilities, including the registration of multiple instances for the same Product Line Version, UI enhancements to support DW connection configuration and Load Plan definition, and Load Plan definition migration.

Fact Partitioning IKMs and Parallel Index Builds are key performance innovations introduced with this release. The former enables a feature that tags partitions so that only the indexes that need to be rebuilt during incremental loads are rebuilt. The latter uses ODI parallel processes so that indexes can be built using multiple sessions (faster than, and in addition to, DB parallelism).


 

Monday Apr 21, 2014

How does TYPE2_FLG Work in ETL

Author: Vivian(Weibei) Li

Introduction

TYPE2_FLG is used in slowly changing dimensions in BI Applications. This flag indicates whether a dimension is type 2, and it determines how data is stored during ETL. This blog gives you a better understanding of how TYPE2_FLG works in ETL.

Background

Slowly Changing dimension

There are many fundamental dimensions, such as Customer, Product, Location, and Employee, in BI Applications. The attributes in these dimensions are revised from time to time. Sometimes the revised attributes merely correct an error in the data, but many times they represent a true change at a point in time. These changes arrive unexpectedly, sporadically, and far less frequently than fact table measurements, so we call this topic slowly changing dimensions (SCDs).

Slowly changing dimension (SCD) entities like Employee, Customer, Product, and others determine how historical changes in the dimension tables are handled and how to respond to the changes. Three different kinds of responses are used: slowly changing dimension (SCD) Types 1, 2, and 3.

Type 1: Overwrite the attributes with new changes

Type 2: Add a New Dimension Record

Type 3: Add a New Field

We are talking about type 2 in this blog. In the Type 2 SCD model, the whole history is stored in the database. An additional dimension record is created, the segmentation between the old record values and the new (current) value is easy to extract, and the history is clear. A minimum of three additional columns should be added to the dimension row with type 2 changes: 1) row effective date or date/time stamp (EFFECTIVE_FROM_DT); 2) row expiration date or date/time stamp (EFFECTIVE_TO_DT); and 3) current row indicator (CURRENT_FLG).

SRC_EFF_FROM_DT and EFFECTIVE_FROM_DT

The two columns represent different concepts even though they have similar names. We have seen many customers get confused about them.

SRC_EFF_FROM_DT is extracted from the effective start date of the source (mainly from the main driving source) if the source keeps history. If the source does not store history, or the history is not extracted, it is hard coded to #LOW_DATE.

EFFECTIVE_FROM_DT is a system column in the dimension table used to track history. Remember that we use knowledge modules (KMs) for repeatable logic that can be reused across ETL tasks. Updating the SCD-related columns, such as EFFECTIVE_FROM_DT, is usually handled by the KM. EFFECTIVE_FROM_DT is set when a new type 2 record is inserted in an incremental run, and it is usually set to the changed-on date from the source. EFFECTIVE_FROM_DT does not always map to the source effective dates.

In type 2 SCD model, EFFECTIVE_FROM_DT is the date used to track the history.

TYPE2_FLG in BI Application

TYPE2_FLG is a flag that indicates whether a dimension is type 2 or not. This flag is used in many dimensions in BI Applications, such as employee, user, position, and so on. It is very important because it determines the history-storing behavior.

TYPE2_FLG has two values: ‘Y’ and ‘N’. ‘Y’ means the dimension is a type 2, and ‘N’ means the dimension is type 1. Type 2 dimensions store the history, while type 1 dimensions only store the current record.

For example, if the supervisor is changed from Peter to Susan for an employee on 01/02/2012:

Type 1

EMPLOYEE_ID   SUPERVISOR_NAME   CURRENT_FLG
123           Susan             Y

Type 2

EMPLOYEE_ID   EFFECTIVE_FROM_DT   EFFECTIVE_TO_DT   SUPERVISOR_NAME   CURRENT_FLG
123           01/02/2012          Future            Susan             Y
123           01/01/1999          01/02/2012        Peter             N
As shown above, the type 1 dimension overwrites the supervisor with the new supervisor name and only stores the current record. The type 2 dimension inserts a new record with the new supervisor name and keeps the old record as history. EFFECTIVE_FROM_DT, EFFECTIVE_TO_DT, and CURRENT_FLG are modified accordingly: EFFECTIVE_TO_DT is changed to 01/02/2012 and CURRENT_FLG is set to 'N' for the old record, while CURRENT_FLG is set to 'Y' for the new record with the new EFFECTIVE_FROM_DT.
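Conceptually, the KM's type 2 handling for this example is equivalent to the sketch below (not the actual KM code); INTEGRATION_ID is assumed to be the natural key, and a far-future date stands in for 'Future':

-- Close out the current row for employee 123.
UPDATE w_employee_d
SET    effective_to_dt = DATE '2012-01-02',
       current_flg     = 'N'
WHERE  integration_id  = '123'
AND    current_flg     = 'Y';

-- Insert the new current row carrying the changed supervisor.
INSERT INTO w_employee_d (integration_id, effective_from_dt, effective_to_dt,
                          supervisor_name, current_flg)
VALUES ('123', DATE '2012-01-02', DATE '9999-12-31', 'Susan', 'Y');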

How to Setup TYPE2_FLG

The out-of-the-box code in BI Applications sets the default values. For type 2 dimensions, TYPE2_FLG is usually set to ‘Y’.

The TYPE2_FLG can be configured in BIACM. This variable is configured per dimension group.

The screenshot above shows that you can configure the value of this flag for different dimension groups by clicking the parameter value and overwriting it with a different value.

Note: You can only configure the TYPE2_FLG for the dimension groups that are in this BIACM list. The dimension groups that are not in the list cannot be configured.

You should set the value of TYPE2_FLG carefully. If you override TYPE2_FLG to ‘N’ for a type 2 dimension, you may run into issues. More details are described in the next section.

Possible Issues Related to TYPE2_FLG

As mentioned earlier, the value of TYPE2_FLG may sometimes be set to ‘N’ for a type 2 dimension. This can cause issues.

In BI Applications, the SDE mapping brings history from the source in the initial full load for some adaptors, such as EBS. TYPE2_FLG affects the storing behavior for these historical records. The following compares the behaviors when TYPE2_FLG is set to ‘Y’ versus ‘N’ for a type 2 dimension.

Case 1 – TYPE2_FLG = ‘Y’

Let’s take the employee dimension (a type 2 dimension) as an example.

Source

EMPLOYEE_ID   SRC_EFF_FROM_DT   SUPERVISOR_NAME   ROW #
123           01/01/1999        Peter             1
123           01/02/2012        Susan             2

When loading the data into the data warehouse in the initial full run, both rows (including the historical record #1) are loaded. TYPE2_FLG is ‘Y’ in this case; the KM, which handles the loading behavior, uses this value to determine the type of the employee dimension and, accordingly, the storing method.

The KM modifies EFFECTIVE_TO_DT and CURRENT_FLG for the two records, since TYPE2_FLG = ‘Y’ in this case.

EMPLOYEE_ID   EFFECTIVE_FROM_DT   EFFECTIVE_TO_DT   SUPERVISOR_NAME   CURRENT_FLG
123           01/02/2012          Future            Susan             Y
123           01/01/1999          01/02/2012        Peter             N

Case 2 – TYPE2_FLG = ‘N’

This time, TYPE2_FLG is set to ‘N’ for the employee dimension (a type 2 dimension), which is incorrect. The KM will treat it as type 1 rather than type 2.

Source

EMPLOYEE_ID   SRC_EFF_FROM_DT   SUPERVISOR_NAME   ROW #
123           01/01/1999        Peter             1
123           01/02/2012        Susan             2

When loading the data into the data warehouse, both rows are loaded because the history from the source is extracted. However, because TYPE2_FLG is ‘N’, the KM does not modify EFFECTIVE_TO_DT and CURRENT_FLG accordingly, and this causes issues.

Employee Table in Data warehouse

EMPLOYEE_ID   EFFECTIVE_FROM_DT   EFFECTIVE_TO_DT   SUPERVISOR_NAME   CURRENT_FLG
123           01/02/2012          Future            Susan             Y
123           01/01/1999          Future            Peter             Y

As shown above, the two records cover overlapping time ranges and both have CURRENT_FLG = ‘Y’. This can produce duplicates when resolving the employee from the facts. For example, the transaction date 02/04/2013 falls into the time range of both records, so both will be matched, causing duplicates in the facts.

How to Debug TYPE2_FLG Issues

As discussed in the previous section, to avoid this kind of issue you should set the value of TYPE2_FLG carefully and keep it as ‘Y’ for out-of-the-box type 2 dimensions.

In addition, when you get the duplicates in the fact, you can do the following checks.

  • Check where the duplicates in the fact come from, and identify the problematic dimension if they come from a dimension.
  • Check the data in that dimension to see whether you observe loading behavior similar to Case 2 in the previous section. You can start by simply checking whether multiple records have CURRENT_FLG = ‘Y’ (a sample query follows this list).
  • Check the value of TYPE2_FLG in the ODI repository.
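As a simple illustration of the second check (assuming INTEGRATION_ID is the natural key of W_EMPLOYEE_D), the following query lists employees that have more than one current row:

SELECT integration_id, COUNT(*) AS current_rows
FROM   w_employee_d
WHERE  current_flg = 'Y'
GROUP  BY integration_id
HAVING COUNT(*) > 1;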

1. Open the session log of the task

2. Open ‘Definition’

3. Expand ‘Variable and Sequence Values’

4. Find TYPE2_FLG and check the value

5. If the value is ‘N’ but the dimension is type 2, you may be hitting the issue described in the previous section.

Here are some tips for finding out the type of a dimension. You can find this information in the ODI repository.

  • For a dimension such as the employee dimension, you should first know the dimension table name, for example W_EMPLOYEE_D.
  • Go to the ODI repository -> ’Designer’ -> ’Models’.
  • Find the dimension table and open it by double-clicking it.
  • Go to ‘Definition’ and check the OLAP type. An OLAP type of Slowly Changing Dimension tells you that this dimension is type 2.

  • You can also find out which attributes are type 2 by checking the column attributes:

1. Expand the dimension table, for example, W_EMPLOYEE_D and then expand Columns

2. Open the attribute of a column by double clicking it

3. Go to ‘Description’ and check ‘Slowly Changing Dimension Behavior’

As shown above, ‘Add Rows on Change’ option tells you that this attribute is type 2.

Conclusion

This blog helps you understand how TYPE2_FLG works in ETL and recognize the importance of this flag. It also gives you a way to debug the possible TYPE2_FLG issue.

Thursday Apr 17, 2014

3 Modes for Moving Data to the BI Applications DW from a Source Application Database

In BI Applications 11.1.1.7.1 the adaptors for the following product lines use the LKM BIAPPS SQL to Oracle (Multi Transport) to move the data from the source Application database to the target BI Applications DW database:

  • E-Business Suite
  • Siebel
  • PeopleSoft
  • JDE

A key feature of this LKM, developed specifically for BI Applications, is that the data from the source system may be transported in 3 different ways; using a parameter set in Configuration Manager, the mode can be selected to suit how the system has been set up, thereby optimizing ETL performance. This blog post details those 3 modes and how to configure BI Applications to use the mode that best suits the deployment.

[Read More]

Friday Apr 11, 2014

Snapshot Facts in OBIA (3)

Authors: Anbing Xue, Zhi Lin

Delta Snapshot History Fact

Besides the trend lines of snapshots, there is a requirement to plot the trend lines of the delta changes of a transaction along the timeline. The changes can be either quantitative (on amounts, units, etc.) or qualitative (on names, descriptions, etc.). Hence we introduced a new kind of snapshot fact to specifically address this.

Typical Data Model

The typical attributes of a delta snapshot history fact are:

  • SNAPSHOT_DT
  • IMAGE_TYPE

Many attributes of the original transactions are kept and inherited, such as:

  • The primary key
  • Foreign keys to the dimensions

Implementation

The delta snapshot history fact captures and stores a new pair of images whenever a change is detected on the original transaction. The image pair is essential, especially for qualitative changes. Usually it consists of one pre-image (or “negative” image) and one post-image (or “positive” image). For example,

IMAGE_NUM   ORDER_ID   SHIP_TO_LOCATION   AMOUNT   SNAPSHOT_DT   IMAGE_TYPE
1           001        New York City      $100     Mar 1, 2014   POST
2           001        New York City      -$100    Mar 2, 2014   PRE
3           001        New York City      $120     Mar 2, 2014   POST
4           001        New York City      -$120    Mar 3, 2014   PRE
5           001        Boston             $120     Mar 3, 2014   POST

Basically, this delta snapshot history fact stores a regular snapshot per change and adds a “negative” snapshot of the state before the change. It therefore has the unique ability to report the delta trend line simply by combining both kinds of snapshots.
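For illustration, the delta trend line for the example order above can be produced simply by summing the PRE and POST images per snapshot date (the fact table name here is just a placeholder):

SELECT snapshot_dt, SUM(amount) AS net_change
FROM   delta_snapshot_f
WHERE  order_id = '001'
GROUP  BY snapshot_dt
ORDER  BY snapshot_dt;
-- Returns +$100 for Mar 1 (new order), +$20 for Mar 2 (amount increase),
-- and $0 for Mar 3 (only the ship-to location changed).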

We also introduced more flexibility for users to configure the subset of columns they are interested in tracking. Based on this configuration, the ETL creates new snapshots only for changes to the tracked columns. Changes to the other columns trigger an update to the existing snapshots instead, in order to stay in sync with the original transactions.

Extra ETL complexity has to be introduced to handle pre- and post-images separately, plus the flexibility to track a subset rather than all columns. However, the number of records is much smaller than for regular daily snapshot facts; the data size of this fact is proportional to the number of changes.

Snapshot Facts in OBIA (2)

Authors: Divya Nandakumar, Zhi Lin

Range Snapshot History Fact

In a data warehouse implementation, the volume of OLTP transactions can already be very large, and consequently the volume of the snapshot fact can be enormous, depending on the snapshot frequency. The dilemma is that better accuracy of change history is achieved with more frequent captures, which makes the data size overwhelming and badly impacts performance.

A solution is to create a variation of the snapshot history fact, which we call the snapshot period (or range snapshot) history fact. The idea is simple: concatenate consecutive snapshots of a transaction into a single new snapshot if they happen to share identical images. The new snapshot is labeled with start and end timestamps, which indicate the time period the image lasts. This merges duplicate snapshots and reduces the resulting data size significantly.

Typical data model

The typical attributes of a range snapshot history fact are:

  • EFFECTIVE_FROM_DT
  • EFFECTIVE_TO_DT

Many attributes of the original transactions are kept and inherited, such as:

  • The primary key
  • Foreign keys to the dimensions

Implementations

Concatenate daily snapshots into a range

This is the conventional way to build up range snapshots. Two consecutive daily snapshots sharing an identical status can be merged into one snapshot spanning the two days. Two having different statuses are stored as two separate snapshots, one for each day.

The concatenation of these daily snapshots is created by gathering related daily snapshot records together. The degree of data condensation that can be achieved is remarkable, because the gathering may span an arbitrary range of time, unlike a fixed period such as a week or a month. If the triggering event occurs very often, for example 20 times a day, this approach is not advisable. Meanwhile, every detail is preserved, as no daily snapshot is dropped from the concatenation.

The ETL flow requires daily snapshots to start with, and does a group-by on the “interesting” status columns to merge identical rows. Its dependency on an accumulation of daily snapshots means an extra task and large storage. Incremental loads can be a challenge, especially for back-dated snapshots. Also, this method assumes there is no gap between daily snapshots, which could otherwise lead to exceptions that are difficult to handle in ETL.

A status change in these related daily snapshots could trigger a snapshot record to be entered into the data warehouse.  
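As a sketch of this concatenation (table and column names are illustrative, not OOTB objects), the group-by can be implemented with a classic gaps-and-islands pattern:

WITH flagged AS (
  SELECT order_id, snapshot_dt, status,
         CASE WHEN status = LAG(status) OVER (PARTITION BY order_id ORDER BY snapshot_dt)
              THEN 0 ELSE 1 END AS status_changed
  FROM   daily_snapshot_f
),
grouped AS (
  SELECT order_id, snapshot_dt, status,
         SUM(status_changed) OVER (PARTITION BY order_id ORDER BY snapshot_dt) AS grp
  FROM   flagged
)
SELECT order_id, status,
       MIN(snapshot_dt) AS effective_from_dt,
       MAX(snapshot_dt) AS effective_to_dt
FROM   grouped
GROUP  BY order_id, status, grp;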

Range snapshots directly from transactions

Here we introduce a new way to overcome the shortcomings of the conventional method above. We remove the dependency on daily snapshots and build range snapshots directly by scanning through all transaction footprints.

A few key points were introduced to achieve this.

1) Create multiple range snapshots trailing each footprint (transaction). For example, one order placed on Mar 25, 2012 by Adam derives the trailing range snapshots below. The period duration in each snapshot is one year here, which is configurable.

Customer   Status    Status Start Date   Status End Date
Adam       Active    Mar 25, 2012        Mar 25, 2013
Adam       Dormant   Mar 25, 2013        Mar 25, 2014
Adam       Lost      Mar 25, 2014        Mar 25, 2015

2) Collapse all the trailing series generated in (1) so that only one status remains at any point in time, using priority rules. In the same example, the override priority is Active > Dormant > Lost.

3) On top of the collapsed results, concatenate the snapshots having identical statuses.

The new snapshot is labeled with start and end timestamps, which indicate the time period the status lasts. This merges duplicate snapshots and reduces the resulting data size significantly.

The challenge of incremental loads, especially back-dated records, can be solved relatively easily here, as all the source information (the transaction footprints) is usually persisted anyway. In the same example, our ETL can be as simple as deleting the records for a particular customer from the target table and recreating them from scratch every time that customer places an order.

Here we still achieve a great amount of data compression and robust ETL processing. The incremental load, however, is still not precise to the most granular level: one incremental load involving one transaction per customer would end up truncating and rebuilding the entire target table.

Snapshot Facts in OBIA (1)

Authors: Divya Nandakumar, Zhi Lin

 

Snapshot History Fact

A snapshot captures an image of source data at a certain point in time and preserves the data image plus a (snapshot) time label. A regular, transactional fact is intended to store data in data warehouse format and reflect OLTP transactions with near-real-time latency. In this context, a regular fact basically captures the near-current snapshot of the OLTP transactions and can support status quo analysis.

A snapshot history fact (or snapshot fact, for short) accumulates a series of snapshots and preserves all of them, each with a different time label. In this way, the change history of each transaction is preserved by the snapshot series. This fact is very useful for trend analysis over time.

Typical Data Model

The typical attribute of a snapshot history fact is:

  • SNAPSHOT_PERIOD_DT

Many attributes of the original transactions are kept and inherited, such as:

  • The primary key
  • Foreign keys to the dimensions

Implementation

Rolling period of daily snapshots

Consider a business need to analyze DAILY quantity-on-hand inventory levels by product and store for a business process like “Retail store inventory”. The granularity could be daily inventory by product at each store; the dimensions are date, product, and store; and the fact is quantity on hand.

Storing the source system’s daily snapshots would have a serious impact on data warehouse storage. The first solution is to keep daily snapshots only for a limited rolling period, for example the last 90 days. The daily snapshot table accumulates daily snapshots from 1 day, 2 days, ..., up to 90 days. After that, it always drops the oldest daily snapshot before it adds a new one. Hence the ETL should always delete the snapshots older than 90 days first and then append a new snapshot.
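A minimal sketch of this rolling-window load, with purely illustrative table and column names, looks like:

-- Drop snapshots older than the 90-day window.
DELETE FROM w_inventory_daily_snp_f
WHERE  snapshot_dt < TRUNC(SYSDATE) - 90;

-- Append today's snapshot of quantity on hand by product and store.
INSERT INTO w_inventory_daily_snp_f (product_wid, store_wid, snapshot_dt, qty_on_hand)
SELECT product_wid, store_wid, TRUNC(SYSDATE), qty_on_hand
FROM   inventory_balance_source;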

This method keeps snapshots at a fairly granular, daily level. However, older snapshots are not kept, so it is not good for long-term historical trending.

 

Monthly snapshots

At the end of the month all accounts have their month ending balance captured. The event is the end of the month, and the month is stored as part of the data warehouse. The selection program reads through the operational data and upon encountering a record that meets the qualifications, moves the record to the data warehouse. At the end of the month, each account is queried and the balance of the account at the end of the month is transferred to the data warehouse environment. One account may have had no activities during the month and another account may have had 200 activities during the month. Both accounts will show up as exactly one record in the data warehouse environment. No continuity of activity is assumed using this technique.

The passage of time – day end, week end, month end, etc. – is a common way of triggering a snapshot, but the periodic passage of time is hardly the only way that snapshots are triggered.

The monthly snapshot table stores the month-end snapshots of all previous periods’ historical data. The ETL design has a preload mapping that deletes the data already loaded for the current month, based on the current month-end date, and then loads the latest data for the current month.
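A minimal sketch of this preload delete plus reload, again with illustrative names, looks like:

-- Remove whatever was already loaded for the current month end.
DELETE FROM w_acct_balance_month_snp_f
WHERE  snapshot_period_dt = LAST_DAY(TRUNC(SYSDATE));

-- Reload the latest balances for the current month end.
INSERT INTO w_acct_balance_month_snp_f (account_wid, snapshot_period_dt, balance_amt)
SELECT account_wid, LAST_DAY(TRUNC(SYSDATE)), balance_amt
FROM   account_balance_source;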

 

In this way, we “aggregate” up from daily snapshots and achieve great compaction of the data size. Longer-term history trending can be stored and reported. However, we lose the finer levels of detail between every two month ends.

 

Wednesday Mar 05, 2014

BI apps 11.1.1.7.1 Cumulative Patch 1 is available now

BI Applications 11.1.1.7.1 cumulative patch 1 is available now.

Patch 17546336 - BIAPPS 11.1.1.7.1 ODI CUMULATIVE PATCH 1 (Patch) can be downloaded from My Oracle Support.

 "To download patch , navigate to My Oracle Support > Patches & Update. Search with the patch number (17546336) on the Patch Name or Number field. Follow the instructions in the Readme to apply the patch."

Monday Feb 24, 2014

How to Implement Object Security in Project Analytics in OBIA 11.1.1.7.1

Author: Vijay Rajgopal

Introduction

This blog details the steps needed to implement object security for any custom objects which the Customer has created in the Project Analytics Module in OBIA 11.1.1.7.1 onwards.

Object-level security controls the visibility of logical objects based on a user's duty/application roles. Access to the following objects can be restricted using object-level security: presentation tables, presentation table columns, subject areas, reports, dashboards, and Project-specific shared folders.

To apply object security to a subject area, an individual table, or an individual column, the default access for the Authenticated User application role must be set to No Access.

We then need to explicitly grant read access to the duty roles (which are adaptor-specific, as listed below) that should be able to access or view the particular subject area, table, or column.

Supported OBIA release: 11.1.1.7.1 onwards

  1. Project Analytics Application Roles used for enforcing object security –

In Enterprise Manager, select WebLogic Domain -> bifoundation_domain -> Security -> Application Roles. Select the obi application stripe and search for role names starting with OBIA; you will see the list of all application roles that start with OBIA.


Following is the list of OOTB duty roles by adaptor

EBS Adaptor Duty Roles –

OBIA_EBS_PROJECT_EXECUTIVE_ANALYSIS_DUTY

OBIA_EBS_PROJECT_MANAGEMENT_ANALYSIS_DUTY

PSFT Adaptor Duty Roles –

OBIA_PSFT_PROJECT_EXECUTIVE_ANALYSIS_DUTY

OBIA_PSFT_PROJECT_MANAGEMENT_ANALYSIS_DUTY

Fusion Adaptor Duty Roles –

OBIA_PROJECT_EXECUTIVE_ANALYSIS_DUTY

OBIA_PROJECT_MANAGEMENT_ANALYSIS_DUTY

  2. Project Analytics object security implementation -

2.1 Subject Area:

Eg: Project - Cost GL Reconciliation is a newly added area for EBS and PSFT adaptors. We want to ensure that this subject area is not seen by Fusion Adaptor customers.

Bring down the OBIEE server, back up the existing RPD, and open the RPD in the Admin tool.

Double click Project - Cost GL Reconciliation -> Permissions


As you can see, read access has been granted explicitly to the duty roles associated with the EBS and PSFT adaptors. All other duty roles inherit the default access from the Authenticated User application role, which is set to No Access. This ensures that this subject area is not visible to Fusion adaptor users.

2.2 Presentation Table:

Eg: Dim – Analysis Type is supported only for PSFT adaptor. We hide this presentation table from EBS and Fusion Adaptor customers.

Under Project - Billing -> Analysis Type -> Permissions


As can be seen above, only users associated with the PSFT duty roles are able to view the Analysis Type table. For EBS and Fusion adaptor users, this table is hidden.

2.3 Individual Columns:

Eg: Interproject Billing Amount metric in Project-Billing subject area is supported only for EBS and Fusion adaptors. We hide this individual column from PSFT customers.

Under Project - Billing -> Fact – Project Billing -> Interproject Invoice Amount -> Permissions


As can be seen above, this metric can be viewed by EBS and Fusion adaptor users and is hidden from PSFT adaptor users.

Save the RPD, run a consistency check, and deploy the updated RPD to the OBIEE server.

  3. Additional Information

General Details about 11.1.1.7.1 OBIA can be found here

Wednesday Feb 19, 2014

Notes for implementing Universal adapter for OBIA Project analytics

Author: Amit Kothari

Introduction: This blog outlines the steps for implementing OBIA Project Analytics for Universal Adaptor. Similar steps can be followed for other modules.

Supported OBIA releases: 11.1.1.7.1 onwards

Supported Apps releases: Universal Adapter.

Steps:

Please refer to the OBIA documentation and the DMR as a starting point for this exercise. Also refer to this blog entry.

Please log in to ODI Designer to see the OBIA Projects Universal interfaces; the source files can be seen in the Model layer.

1. High level steps to import data into the data warehouse through the Universal adapter.

a. Populate the csv files with your data (e.g., file_proj_budget_fs.csv is the source file for the w_proj_budget_fs table). Typically the customer writes an extract program, such as a shell script or PL/SQL program, which creates these data files from a non-supported source OLTP system (see the sketch after this list).

b. Refer to the steps below for details on how to populate these files.

c. Build a Load Plan with fact groups: "900: Universal Adaptor Instance"."Project".

d. Run the Load Plan that you created in the previous step.

e. Note: If applicable, this Load Plan must be run after the regular Load Plan that populates the Oracle Business Analytics Warehouse for the other Subject Areas has completed.
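For illustration, here is a minimal Python sketch of such an extract program, which simply writes rows into file_proj_budget_fs.csv. The column names and sample values below (including the DSN 900) are hypothetical placeholders; take the real column list and order from the ODI model / DMR for the target staging table.

import csv

# Hypothetical subset of columns; confirm the full list against the ODI datastore.
COLUMNS = ["INTEGRATION_ID", "DATASOURCE_NUM_ID", "BUDGET_TYPE_CODE",
           "BUDGET_AMT", "APPROVED_ON_DT"]

rows = [
    # One extracted record: dates as YYYYMMDDHH24MISS numbers, numeric default 0,
    # string default '__NOT_APPLICABLE__' (see the rules further below).
    {"INTEGRATION_ID": "PRJ1001~V1", "DATASOURCE_NUM_ID": 900,
     "BUDGET_TYPE_CODE": "__NOT_APPLICABLE__", "BUDGET_AMT": 0,
     "APPROVED_ON_DT": 20140131000000},
]

with open("file_proj_budget_fs.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=COLUMNS)
    writer.writeheader()
    writer.writerows(rows)

In practice the rows would come from a query against the non-supported source OLTP system rather than being hard-coded.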

2. The configuration file or files for this task are provided on installation of Oracle BI Applications at one of the following locations:

a. Source-independent files: <Oracle Home for BI>\biapps\etl\data_files\src_files\.

b. Source-specific files: <Oracle Home for BI>\biapps\etl\data_files\src_files\<source adaptor>.

c. Your system administrator will have copied these files to another location and configured ODI connections to read from this location. Work with your system administrator to obtain the files. When configuration is complete, your system administrator will need to copy the configured files to the location from which ODI reads these files.

d. Refer to the Appendix section ‘Setting Up the Delimiter for a Source File’.

  1. As a general rule, default numeric columns to 0 and string columns to '__NOT_APPLICABLE__' so that the ETL does not run into 'Not Null' errors when it starts loading data (a defaulting sketch follows this list).
  2. Date columns should be populated in the CSV file as a number in the format YYYYMMDDHH24MISS, or kept null.
  3. The dimension ID fields in the fact staging tables have to be populated with the INTEGRATION_ID of the corresponding dimensions. This is very important; otherwise the dimension WID fields in the fact tables will default to 0. Please refer to the ODI model or the DMR for the star schema diagrams and other FK information.
  4. Similarly, the common dimensions that Projects uses, such as W_INT_ORG_D, W_MCAL_DAY_D, W_MCAL_CONTEXT_G, W_EMPLOYEE_D, W_JOB_D, W_INVENTORY_PRODUCT_D, etc., also need to be populated correctly via source files.
  5. W_MCAL_CONTEXT_G has a CLASS field that holds one of two values, GL or PROJECTS. To resolve the project accounting dates in the fact tables, there must be data present in this table for class 'PROJECTS'.
  6. There are various domain codes that are loaded into the warehouse staging table W_DOMAIN_MEMBER_GS. In order to load this table, the generic file file_domain_member_gs.csv has to be populated with the correct domain codes.
    1. The granularity of this file is each domain member per language for any of the domains listed above.
    2. Domain codes for Projects are listed in the Appendix. Just load the domains based on the Facts/Dims you are planning to load.
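The defaulting rules above can be applied programmatically before the files are written. Here is a small Python sketch; the column classifications (NUMERIC_COLS, STRING_COLS, DATE_COLS) are hypothetical placeholders and must be derived from the actual datastore definitions.

from datetime import datetime

NUMERIC_COLS = {"BUDGET_AMT"}          # hypothetical examples only
STRING_COLS  = {"BUDGET_TYPE_CODE"}
DATE_COLS    = {"APPROVED_ON_DT"}

def apply_defaults(row: dict) -> dict:
    """Apply the defaulting rules above to one fact staging record."""
    out = dict(row)
    for col in NUMERIC_COLS:           # rule 1: numeric columns default to 0
        if out.get(col) in (None, ""):
            out[col] = 0
    for col in STRING_COLS:            # rule 1: string columns default to '__NOT_APPLICABLE__'
        if not out.get(col):
            out[col] = "__NOT_APPLICABLE__"
    for col in DATE_COLS:              # rule 2: dates as YYYYMMDDHH24MISS numbers, or left null
        val = out.get(col)
        if isinstance(val, datetime):
            out[col] = int(val.strftime("%Y%m%d%H%M%S"))
        elif not val:
            out[col] = None
    # Rule 3: dimension ID columns (e.g. a hypothetical PROJECT_ID) must carry the
    # INTEGRATION_ID of the corresponding dimension row so the fact WIDs resolve
    # instead of defaulting to 0.
    return out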

Table B-164: file_domain_member_gs.csv field descriptions (data type and sample data are not documented for these columns)

DOMAIN_CODE: Populate with the Domain Code corresponding to the Source Domain that is to be configured.
DOMAIN_TYPE_CODE: Defaulted to 'S', which indicates this is a Source Domain Code.
DOMAIN_MEMBER_CODE: Populate with the CODE value supplied in any of the above files.
DOMAIN_MEMBER_NAME: Populate with the NAME value that corresponds to the Member Code supplied.
DOMAIN_MEMBER_DESCR: Not available.
DOMAIN_MEMBER_REF_CODE: Hardcode to '__NOT_APPLICABLE__'.
DOMAIN_MEMBER_DEFN_TYPE_CODE: Not available.
LANGUAGE_CODE: Warehouse Language Code.
SRC_LANGUAGE_CODE: Source Language Code.
INTEGRATION_ID: The unique ID for the record. The INTEGRATION_ID for this file can also be populated as DOMAIN_CODE~DOMAIN_MEMBER_CODE.
DATASOURCE_NUM_ID: The unique Data Source ID of the Source Instance you are configuring.
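As a sketch of how such records could be produced, the following Python snippet builds one file_domain_member_gs.csv row per domain member, with INTEGRATION_ID formed as DOMAIN_CODE~DOMAIN_MEMBER_CODE as described above. The language codes, the DATASOURCE_NUM_ID, and the defaults for the undocumented columns are assumptions; confirm them against your own configuration and the ODI datastore.

import csv

COLUMNS = ["DOMAIN_CODE", "DOMAIN_TYPE_CODE", "DOMAIN_MEMBER_CODE",
           "DOMAIN_MEMBER_NAME", "DOMAIN_MEMBER_DESCR", "DOMAIN_MEMBER_REF_CODE",
           "DOMAIN_MEMBER_DEFN_TYPE_CODE", "LANGUAGE_CODE", "SRC_LANGUAGE_CODE",
           "INTEGRATION_ID", "DATASOURCE_NUM_ID"]

def domain_member_row(domain_code, member_code, member_name,
                      language_code="US",        # assumed warehouse language code
                      src_language_code="US",    # assumed source language code
                      datasource_num_id=900):    # assumed universal adaptor DSN
    return {
        "DOMAIN_CODE": domain_code,
        "DOMAIN_TYPE_CODE": "S",                               # 'S' = Source Domain Code
        "DOMAIN_MEMBER_CODE": member_code,
        "DOMAIN_MEMBER_NAME": member_name,
        "DOMAIN_MEMBER_DESCR": member_name,                    # assumption; not documented
        "DOMAIN_MEMBER_REF_CODE": "__NOT_APPLICABLE__",        # per the table above
        "DOMAIN_MEMBER_DEFN_TYPE_CODE": "__NOT_APPLICABLE__",  # assumption; not documented
        "LANGUAGE_CODE": language_code,
        "SRC_LANGUAGE_CODE": src_language_code,
        "INTEGRATION_ID": f"{domain_code}~{member_code}",
        "DATASOURCE_NUM_ID": datasource_num_id,
    }

with open("file_domain_member_gs.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=COLUMNS)
    writer.writeheader()
    writer.writerow(domain_member_row("PROJECT_STATUS", "ACTIVE", "Active"))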

Appendix

A. Setting Up the Delimiter for a Source File

When you load data from a Comma-Separated Values (CSV) source file and the data itself contains a comma character (,), you must enclose the affected source data with a suitable enclosing character (a text delimiter) that does not otherwise appear in the source data.

Note: Alternatively, you could configure your data extraction program to enclose the data with a suitable enclosing character automatically.

For example, you might have a CSV source data file with the following data:

Months, Status
January, February, March, Active
April, May, June, Active

If you loaded this data without modification, ODI would load 'January' as the Months value, and 'February' as the Status value. The remaining data for the first record (that is, March, Active) would not be loaded.

To enable ODI to load this data correctly, you might enclose the data in the Months field within the double-quotation mark enclosing character (" ") as follows:

Months, Status
"January, February, March", Active
"April, May, June", Active

After modification, ODI would load the data correctly. In this example, for the first record ODI would load 'January, February, March' as the Months value, and 'Active' as the Status value.
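If you prefer the alternative mentioned in the Note above (having the extraction program add the enclosing character automatically), a minimal Python sketch would be:

import csv

rows = [
    {"Months": "January, February, March", "Status": "Active"},
    {"Months": "April, May, June",         "Status": "Active"},
]

with open("months_status.csv", "w", newline="") as f:
    # QUOTE_MINIMAL encloses only those fields that contain the comma delimiter.
    writer = csv.DictWriter(f, fieldnames=["Months", "Status"],
                            quotechar='"', quoting=csv.QUOTE_MINIMAL)
    writer.writeheader()
    writer.writerows(rows)

This produces rows of the form "January, February, March",Active, so the Months value survives as a single field when ODI loads the file.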

To set up the delimiter for a source file:

1. Open the CSV file containing the source data.

2. Enclose the data fields with the enclosing character that you have chosen (for example, ").

You must choose an enclosing character that is not present in the source data. Common enclosing characters include single quotation marks (') and double quotation marks (").

3. Save and close the CSV file.

4. In ODI Designer, display the Models view, and expand the Oracle BI Applications folder.

Identify the data stores that are associated with the modified CSV files. The CSV file that you modified might be associated with one or more data stores.

5. In ODI Designer, change the properties for each of these data stores to use the enclosing character, as follows:

1. Double-click the data store to display the DataStore: <Name> dialog.

2. Display the Files tab.

3. Use the Text Delimiter field to specify the enclosing character that you used in step 2 to enclose the data.

4. Click OK to save the changes.

You can now load data from the modified CSV file.

B. Project Domains

PROJECT_ANALYSIS_TYPE

PROJECT_ASSIGNMENT_STATUS

PROJECT_ASSIGNMENT_TYPE

PROJECT_BILLING_TYPE

PROJECT_BILL_HOLD_CODE

PROJECT_BILL_HOLD_FLG

PROJECT_BUDGET_CHANGE_REASON

PROJECT_BUDGET_CLASS

PROJECT_BUDGET_ENTRY_LEVEL

PROJECT_BUDGET_LEVEL

PROJECT_BUDGET_STATUS

PROJECT_BUDGET_TIME_PHASE

PROJECT_BUDGET_TYPE

PROJECT_BUDGET_VER_TYPE

PROJECT_CLASS_CATEGORY

PROJECT_CLASS_CODE

PROJECT_COMMITMENT_SOURCE_TYPE

PROJECT_COMMITMENT_TYPE

PROJECT_COST_CODE_TYPE

PROJECT_CROSS_CHARGE

PROJECT_CROSS_CHARGE_TYPE

PROJECT_DOCUMENT_ENTRY

PROJECT_DOCUMENT_SOURCE

PROJECT_EVENT_CLASS

PROJECT_EVENT_INVOICING_STATUS

PROJECT_EVENT_SOURCE

PROJECT_EVT_REV_DISTRIB

PROJECT_EXPENDITURE_CATEGORY

PROJECT_EXPENDITURE_CLASS

PROJECT_EXPENDITURE_SUBCATEGORY

PROJECT_FIN_PLAN_TYPE

PROJECT_FUNDING_APPROVAL

PROJECT_FUNDING_CATEGORY

PROJECT_FUNDING_LEVEL

PROJECT_INVOICE_CLASS

PROJECT_INVOICE_HDR_TYPE

PROJECT_INVOICE_LINE_TYPE

PROJECT_INV_STATUS

PROJECT_MFG_RESOURCE_TYPE

PROJECT_PRIORITY

PROJECT_PROGRESS_STATUS

PROJECT_REQUIREMENT_STATUS

PROJECT_RESOURCE_CLASS

PROJECT_RESOURCE_CODE

PROJECT_RESOURCE_TYPE

PROJECT_REVENUE_CATEGORY

PROJECT_REVENUE_STATUS

PROJECT_ROLE

PROJECT_SECURITY

PROJECT_SERVICE_TYPE

PROJECT_STATUS

PROJECT_SUB_STATUS

PROJECT_TASK_PRIORITY

PROJECT_TASK_STATUS

PROJECT_TASK_TYPE

PROJECT_TRANSACTION_SOURCE

PROJECT_TRANSFER_STATUS

PROJECT_TYPE

PROJECT_TYPE_CLASS

PROJECT_WORK_TYPE
