
The Oracle BI Applications blog provides the latest tips and tricks on the Oracle BI Applications (OBIA) product.

Recent Posts

OBIA 11g : Analyzing ODI ETL Load Performance - Part 1

When you run an ODI ETL load plan, it takes a certain amount of time. Customers often want to understand how that time is split across the load plan and analyze the top-running maps to see if they can be optimized to run faster. In this first part of the series, we show how you can easily analyze the time taken by the load plan and identify the top-running maps.

Using ODI Studio, you can look at the Load Plan logs and see the time taken by each phase or, if required, by each step in the load plan. However, since there is no easy option to sort by the time taken, it can be painful to scan the complete logs in ODI Studio to identify the top-running maps. The following backend queries (to be run against the ODI repository schema) show the same details that you see in ODI Studio, but let you analyze them more easily.

Query to get the overall load plan run timings for an instance:

--List of Load Plan runs for a load plan instance id
SELECT
  LP.LOAD_PLAN_NAME,
  LPI.I_LOAD_PLAN,
  LPR.I_LP_INST, LPR.NB_RUN,
  TO_CHAR(LPR.START_DATE, 'MM-DD-YYYY HH24:MI:SS') LPRUN_START_DATE,
  TO_CHAR(LPR.END_DATE, 'MM-DD-YYYY HH24:MI:SS') LPRUN_END_DATE,
  LPR.STATUS --D Done, E Error, M Warning, R Running, W Waiting, A Already done in a previous run
FROM SNP_LPI_RUN LPR, SNP_LP_INST LPI, SNP_LOAD_PLAN LP
WHERE
  LPR.I_LP_INST=&LOAD_PLAN_INSTANCE_ID
  AND LP.I_LOAD_PLAN=LPI.I_LOAD_PLAN
  AND LPI.I_LP_INST=LPR.I_LP_INST
ORDER BY LPR.START_DATE
;

The query will prompt you for the Load Plan Instance id. You can easily get it from ODI Studio for the load plan run you want to analyze. The screenshot below shows sample output of the above query.
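If you prefer to find the instance id from the repository rather than from ODI Studio, a small variation of the query above (a sketch reusing the same repository tables; verify column names against your ODI version) lists the most recent load plan instances so you can pick the one to analyze:

--Recent load plan instances (sketch)
SELECT
  LP.LOAD_PLAN_NAME,
  LPI.I_LP_INST,
  MIN(LPR.START_DATE) FIRST_RUN_START,
  MAX(LPR.END_DATE) LAST_RUN_END
FROM SNP_LPI_RUN LPR, SNP_LP_INST LPI, SNP_LOAD_PLAN LP
WHERE LP.I_LOAD_PLAN=LPI.I_LOAD_PLAN
  AND LPI.I_LP_INST=LPR.I_LP_INST
GROUP BY LP.LOAD_PLAN_NAME, LPI.I_LP_INST
ORDER BY MAX(LPR.START_DATE) DESC
;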
Next, analyze the individual step timings of a single load plan instance. The query below provides the timings at each load plan step. Use the LVL column to filter and see the timings at higher levels in the load plan (see examples below). (Download the queries here: ODI ETL Performance Backend Queries)

--LP Instance Latest Run Step Timings
SELECT
  LP.I_LOAD_PLAN, LP.LOAD_PLAN_NAME, LPI.I_LP_INST,
  LPLOG.NB_RUN,
  COUNT(1) OVER( ORDER BY STEP_NUM)-1 LPSTEPNUM,
  SH.LVL,
  LPAD(' ', 2 * SH.LVL - 1,' ') || LPS.LP_STEP_NAME STEP_HIERARCHY,
  LPLOG.STATUS, --D Done, E Error, M Warning, R Running, W Waiting, A Already done in a previous run
  LPLOG.NB_ROW,
  LPLOG.SESS_NO,
  CASE WHEN LPLOG.SESS_NO IS NOT NULL THEN TRUNC((SESS.SESS_END-SESS.SESS_BEG)*24*60)||':'||LPAD(TRUNC(MOD((SESS.SESS_END-SESS.SESS_BEG)*24*60*60,60)),2,'0')
    ELSE TRUNC((LPLOG.END_DATE-LPLOG.START_DATE)*24*60)||':'||LPAD(TRUNC(MOD((LPLOG.END_DATE-LPLOG.START_DATE)*24*60*60,60)),2,'0')
  END DURATION_FORMATTED,
  CASE WHEN LPLOG.SESS_NO IS NOT NULL THEN TRUNC((SESS.SESS_END-SESS.SESS_BEG)*24*60*60)
    ELSE TRUNC((LPLOG.END_DATE-LPLOG.START_DATE)*24*60*60)
  END DURATION_SEC,
  CASE WHEN LPLOG.SESS_NO IS NOT NULL THEN TO_CHAR(SESS.SESS_BEG, 'MM-DD-YYYY HH24:MI:SS')
    ELSE TO_CHAR(LPLOG.START_DATE, 'MM-DD-YYYY HH24:MI:SS')
  END START_TIME,
  CASE WHEN LPLOG.SESS_NO IS NOT NULL THEN TO_CHAR(SESS.SESS_END, 'MM-DD-YYYY HH24:MI:SS')
    ELSE TO_CHAR(LPLOG.END_DATE, 'MM-DD-YYYY HH24:MI:SS')
  END END_TIME,
  NVL(LPS.SCEN_NAME,LPS.VAR_NAME) SCEN_VAR_NAME,
  LPS.LP_STEP_TYPE,
  CASE WHEN LPS.IND_ENABLED =1 THEN 'Y' ELSE 'N' END IS_ENABLED,
  LPS.I_LP_STEP,
  LPS.PAR_I_LP_STEP,
  LPS.RESTART_TYPE,
  LPS.SCEN_NAME, LPS.SCEN_VERSION,
  LPS.VAR_NAME, LPS.VAR_OP, LPS.VAR_VALUE,
  LPS.SESS_KEYWORDS, LPS.VAR_LONG_VALUE
FROM SNP_LOAD_PLAN LP
INNER JOIN SNP_LP_INST LPI ON LPI.I_LOAD_PLAN=LP.I_LOAD_PLAN
INNER JOIN
  (SELECT I_LOAD_PLAN,I_LP_STEP,LP_STEP_NAME,STEP_PATH,TYPE_PREF,LVL,STEP_ORDER,RANK() OVER(ORDER BY TYPE_PREF DESC,ROWNUM) STEP_NUM FROM (
    SELECT I_LOAD_PLAN,I_LP_STEP,LP_STEP_NAME,STEP_PATH,
      CASE WHEN SUBSTR(STEP_TYPE_PATH,1,4)='->EX' THEN 1 ELSE 0 END TYPE_PREF,
      LVL,STEP_ORDER FROM
    (
    SELECT * FROM (
    SELECT I_LOAD_PLAN,I_LP_STEP,LP_STEP_NAME,SYS_CONNECT_BY_PATH(LP_STEP_NAME, '->') STEP_PATH,
        SYS_CONNECT_BY_PATH(LP_STEP_TYPE, '->') STEP_TYPE_PATH, LEVEL LVL, STEP_ORDER
        FROM SNP_LP_STEP
        START WITH PAR_I_LP_STEP IS NULL
        CONNECT BY
            I_LOAD_PLAN=PRIOR I_LOAD_PLAN AND
            PAR_I_LP_STEP = PRIOR I_LP_STEP
        ORDER SIBLINGS BY STEP_ORDER
        ) WHERE SUBSTR(STEP_TYPE_PATH,1,4)!='->EX' --Exclude Exception Steps
    ))
  ) SH ON LP.I_LOAD_PLAN=SH.I_LOAD_PLAN
LEFT OUTER JOIN SNP_LPI_STEP LPS
  ON LPI.I_LP_INST=LPS.I_LP_INST
  AND LPS.I_LP_STEP=SH.I_LP_STEP
LEFT OUTER JOIN SNP_SCEN S
  ON LPS.SCEN_NAME=S.SCEN_NAME
LEFT OUTER JOIN SNP_LPI_STEP_LOG LPLOG
  ON LPLOG.I_LP_INST=LPS.I_LP_INST AND LPLOG.I_LP_STEP=LPS.I_LP_STEP
LEFT OUTER JOIN SNP_SESSION SESS
  ON LPLOG.SESS_NO=SESS.SESS_NO
WHERE LPI.I_LP_INST=&LOAD_PLAN_INSTANCE_ID
AND (LPLOG.NB_RUN=(SELECT DISTINCT
                    MAX(QLPR.NB_RUN) OVER() MAX_NB_RUN
                    FROM SNP_LPI_RUN QLPR, SNP_LP_INST QLPI
                    WHERE QLPR.I_LP_INST=LPI.I_LP_INST
                    AND QLPI.I_LP_INST=QLPR.I_LP_INST
                    )
    OR LPLOG.NB_RUN IS NULL)
ORDER BY LP.I_LOAD_PLAN, STEP_NUM
;

How to export query output from SQL Developer: Run the query from SQL Developer, then right-click on the output and select Export. Specify xlsx format and then click Next to finish the export.
Screenshots below. Once you have the Excel sheet, you can easily analyze the timings as shown in the examples below.

Phase-wise summary: Use the LVL column to filter on the top levels (level 1, 2, 3). See the screenshot below.

Top-running maps: In the Excel sheet, sort on DURATION_SEC and filter on LP_STEP_TYPE=RS (i.e. only Scenario steps) to get the top-running scenarios. The session numbers are also present in the output. You can then analyze the individual top-running sessions further by looking at the sessions in ODI Studio, checking which particular step in the session takes most of the time, and looking at the SQL for that step. Obtain the AWR report and SQL Tuning Advisor report for that query and see how it can be tuned. In part 2 of this blog series, we discuss certain settings that influence the overall load plan duration.
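If you prefer to get the top-running scenario steps directly from the repository instead of sorting in Excel, the following stripped-down sketch can be used. It assumes the same repository tables used above and ignores the restartable-run handling of the full query:

--Top running scenario steps for a load plan instance (sketch)
SELECT
  LPS.LP_STEP_NAME,
  LPS.SCEN_NAME,
  LPLOG.SESS_NO,
  TRUNC((SESS.SESS_END-SESS.SESS_BEG)*24*60*60) DURATION_SEC
FROM SNP_LPI_STEP LPS
INNER JOIN SNP_LPI_STEP_LOG LPLOG
  ON LPLOG.I_LP_INST=LPS.I_LP_INST AND LPLOG.I_LP_STEP=LPS.I_LP_STEP
INNER JOIN SNP_SESSION SESS
  ON LPLOG.SESS_NO=SESS.SESS_NO
WHERE LPS.I_LP_INST=&LOAD_PLAN_INSTANCE_ID
  AND LPS.LP_STEP_TYPE='RS' --Scenario steps only
ORDER BY DURATION_SEC DESC
;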


BIAPPS on PAAS

Lift and Shift of Oracle BIAPPS Artifacts to Oracle Analytics Cloud

Authors: Swathi Singamareddygari, Anand Sadaiyan

Table of Contents: Disclaimer; Section:1 Deliverables; Section:2 Lifting and Shifting Application Roles and Web Catalogue; Section:3 Lifting and Shifting Repository; Section:4 FAQ; Section:5 Limitations

Disclaimer: This document does not replace the Oracle Analytics Cloud Service Documentation Library or other Cloud Services documents. It serves as a supplement for lifting and shifting Business Intelligence Applications artifacts (BIAPPS / OBIA) to Oracle Analytics Cloud (OAC). This document is written based on Oracle Analytics Cloud version 17.2.1.0.0. Screenshots included in this document might differ slightly from what you see on your screen. Note: It is always a good practice to take a snapshot of the current environment in Oracle Analytics Cloud before lifting and shifting the artifacts. Ensure that you create a snapshot before proceeding.

Section:1 Deliverables. Download the Oracle Analytics Cloud deliverables zip file (BIAPPS_10.2.0_OACV1.zip) from ARU 21167611 (download patch 21167611 from Oracle Support). The deliverables are based on the BIAPPS 10.2 release. The zip file consists of: 1) OracleBIApps_10.2.0_OACV1.rpd (password: welcome1) 2) BIAPPS_10.2.0_OACV1.bar (password: Admin123)

Section:2 Lifting and Shifting Application Roles and Web Catalogue. Uploading the application roles and web catalogue: Login to the Oracle Analytics Cloud VA page. From the Oracle Analytics Cloud home page, navigate to the Console and click Snapshots. Click Upload Snapshot to upload the delivered BAR file (see Uploading Snapshots). If a virus scanner is not configured, click "Proceed without a virus scanner". Select the delivered BAR file and enter "Admin123" as the password. Select the uploaded snapshot, click the Restore action, and in the Restore Snapshot popup select Application Roles and Catalog, then click Restore to restore the snapshot for the application roles and web catalogue (see Restoring Snapshots). Verify the imported application roles in the Application Role Management page (Console -> Users and Roles -> Application Roles). Verify the webcat artifacts by clicking the Dashboard menu from Classic Home.

Section:3 Lifting and Shifting Repository. Note: Any modifications to the repository should be done in the on-premises environment. No modifications are allowed in Oracle Analytics Cloud. Allow 5 minutes for the Oracle Analytics Cloud environment to refresh after the repository upload. The Oracle BI Applications repository is delivered along with the Oracle Analytics Cloud deliverables zip file. Uploading the repository: Modify the delivered repository with the proper connection details (connection pools and schema variables OLAPTBO, CM_TBO, etc.). Login to the Oracle Analytics Cloud environment. Navigate to the Console from the home page and click Snapshots. Replace the Oracle Analytics Cloud data model with the on-premises repository using the "Replace Data Model" option. If a virus scanner is not configured, click "Proceed without a virus scanner". Choose the on-premises repository and provide the repository password ("welcome1" without quotes). Verify the uploaded RPD by navigating to Analyses and clicking "Create Analysis"; you should see the available subject areas.
For more details on repository Lifting and Shifting refer to  Uploading an On-Premises Data Model to Oracle Cloud Service Section:4 FAQ Custom application roles permissions are not getting applied on the Repository objects. Solution: Create the custom application roles in VA first and the then upload the Repository.   Not able to change permissions for Webcat object and getting Assertion Failure error. Solution: Delete any unresolved accounts available in the webcat object and then change the permissions. Section:5 Limitations Following features which are used in Oracle BI Applications are not supported in Oracle Analytics Cloud environment. Group KPI KPI Watchlist List Format Segment Scorecard Name Path Signature Campaign Members Load Format /shared/Marketing/Segmentation/List Formats/Campaign Members Load Format Campaign Load Campaign Members Suspects Load Format /shared/Marketing/Segmentation/List Formats/Campaign Members Suspects Load Format Campaign Load Consumer Campaign Members Load Format /shared/Marketing/Segmentation/List Formats/Consumer Campaign Members Load Format Campaign Load Consumer Leads Import Load Format /shared/Marketing/Segmentation/List Formats/Consumer Leads Import Load Format Campaign Load Leads Import Load Format /shared/Marketing/Segmentation/List Formats/Leads Import Load Format Campaign Load Campaign Load - Contacts and Prospects Example /shared/Marketing/Segmentation/List Formats/Siebel List Formats/Campaign Load - Contacts and Prospects Example Campaign Load Campaign Load - Database Writeback Example /shared/Marketing/Segmentation/List Formats/Siebel List Formats/Campaign Load - Database Writeback Example Campaign Load Mutual Exclusion Campaign Load - Contacts and Prospects Example /shared/Marketing/Segmentation/List Formats/Siebel List Formats/Mutual Exclusion Campaign Load - Contacts and Prospects Example Campaign Load Suspects Import Load Format /shared/Marketing/Segmentation/List Formats/Suspects Import Load Format Campaign Load All Groups /shared/Human Capital Management/_filters/Human Resources - Workforce Deployment/Role Dashboards/All Groups Group Below Top Performance /shared/Human Capital Management/_filters/Human Resources - Workforce Deployment/Role Dashboards/Below Top Performance Group Global /shared/Human Capital Management/_filters/Human Resources - Workforce Deployment/Role Dashboards/Global Group Top Performers /shared/Human Capital Management/_filters/Human Resources - Workforce Deployment/Role Dashboards/Top Performers Group Average Negotiation Cycle Time /shared/Procurement/Procurement Scorecard/KPIs/Customer/Average Negotiation Cycle Time KPI Fulfilled Requisition Lines past expected date /shared/Procurement/Procurement Scorecard/KPIs/Customer/Fulfilled Requisition Lines past expected date KPI Late Receipts /shared/Procurement/Procurement Scorecard/KPIs/Customer/Late Receipts KPI Processed Requisition Lines past expected date /shared/Procurement/Procurement Scorecard/KPIs/Customer/Processed Requisition Lines past expected date KPI Procurement Cycle Time /shared/Procurement/Procurement Scorecard/KPIs/Customer/Procurement Cycle Time KPI Unfulfilled Requisition Lines past expected date /shared/Procurement/Procurement Scorecard/KPIs/Customer/Unfulfilled Requisition Lines past expected date KPI Off-Contract Spend /shared/Procurement/Procurement Scorecard/KPIs/Financial/Off-Contract Spend KPI Perfect invoices /shared/Procurement/Procurement Scorecard/KPIs/Financial/Perfect invoices KPI Realized Cost Savings /shared/Procurement/Procurement 
Scorecard/KPIs/Financial/Realized Cost Savings KPI Invoice Automation /shared/Procurement/Procurement Scorecard/KPIs/Operations/Invoice Automation KPI Manual Requisition Lines Rate /shared/Procurement/Procurement Scorecard/KPIs/Operations/Manual Requisition Lines Rate KPI PO Transactions per Buyer /shared/Procurement/Procurement Scorecard/KPIs/Operations/PO Transactions per Buyer KPI Processed Negotiation Lines /shared/Procurement/Procurement Scorecard/KPIs/Operations/Processed Negotiation Lines KPI # of Suppliers per Category /shared/Procurement/Procurement Scorecard/KPIs/Supplier/# of Suppliers per Category KPI % of Spend By Diversified Suppliers /shared/Procurement/Procurement Scorecard/KPIs/Supplier/% of Spend By Diversified Suppliers KPI On-Time Delivery performance /shared/Procurement/Procurement Scorecard/KPIs/Supplier/On-Time Delivery performance KPI Quality Performance /shared/Procurement/Procurement Scorecard/KPIs/Supplier/Quality Performance KPI Returns /shared/Procurement/Procurement Scorecard/KPIs/Supplier/Returns KPI Exact Match Rate /shared/Supply Chain Management/Analytic Library/Embedded Content/Logistics/KPIs/Exact Match Rate KPI Hit/Miss Accuracy /shared/Supply Chain Management/Analytic Library/Embedded Content/Logistics/KPIs/Hit\/Miss Accuracy KPI Inventory Value /shared/Supply Chain Management/Analytic Library/Embedded Content/Logistics/KPIs/Inventory Value KPI Average Change Order Approval Time /shared/Supply Chain Management/Analytic Library/Embedded Content/PIM/KPIs/Average Change Order Approval Time KPI Average Change Order Cycle Time /shared/Supply Chain Management/Analytic Library/Embedded Content/PIM/KPIs/Average Change Order Cycle Time KPI Average New Item Creation Approval Time /shared/Supply Chain Management/Analytic Library/Embedded Content/PIM/KPIs/Average New Item Creation Approval Time KPI Average New Item Creation Cycle Time /shared/Supply Chain Management/Analytic Library/Embedded Content/PIM/KPIs/Average New Item Creation Cycle Time KPI Percentage of Shared Categories /shared/Supply Chain Management/Analytic Library/Embedded Content/PIM/KPIs/Percentage of Shared Categories KPI List Export - Contacts Example /shared/Marketing/Segmentation/List Formats/Siebel List Formats/List Export - Contacts Example List Export Analytics Data Load - Leads Example /shared/Marketing/Segmentation/List Formats/Siebel List Formats/Analytics Data Load - Leads Example Marketing BI Data Load Analytics Data Load - Responses Example /shared/Marketing/Segmentation/List Formats/Siebel List Formats/Analytics Data Load - Responses Example Marketing BI Data Load Campaign Members Export Format /shared/Marketing/Segmentation/List Formats/Campaign Members Export Format Marketing Email Server Email Personalization - Contacts - OLTP Example /shared/Marketing/Segmentation/List Formats/Siebel List Formats/Email Personalization - Contacts - OLTP Example Marketing Email Server _CauseAndEffectLinkages /shared/Procurement/Procurement Scorecard/_CauseAndEffectLinkages Scorecard Cause And Effect Linkages Cause & Effect Map: Improve Response Time /shared/Procurement/Procurement Scorecard/Cause & Effect Map: Improve Response Time Scorecard Causes And Effects View Automate Invoice Processing /shared/Procurement/Procurement Scorecard/Automate Invoice Processing Scorecard Initiative Consolidate Supplier Base /shared/Procurement/Procurement Scorecard/Consolidate Supplier Base Scorecard Initiative Develop and Implement New policies to support Contract compliance /shared/Procurement/Procurement 
Scorecard/Develop and Implement New policies to support Contract compliance Scorecard Initiative Establish and Monitor SLAs /shared/Procurement/Procurement Scorecard/Establish and Monitor SLAs Scorecard Initiative Implement Internet Supplier Portal /shared/Procurement/Procurement Scorecard/Implement Internet Supplier Portal Scorecard Initiative Implement Self Service Procurement Application /shared/Procurement/Procurement Scorecard/Implement Self Service Procurement Application Scorecard Initiative Implement Spend Analytics /shared/Procurement/Procurement Scorecard/Implement Spend Analytics Scorecard Initiative Initiatives /shared/Procurement/Procurement Scorecard/Initiatives Scorecard Initiative Monitor Performance and provide regular feedback on quarterly basis /shared/Procurement/Procurement Scorecard/Monitor Performance and provide regular feedback on quarterly basis Scorecard Initiative Monitor Spend and Savings on Monthly Basis /shared/Procurement/Procurement Scorecard/Monitor Spend and Savings on Monthly Basis Scorecard Initiative Monitor Spend by Diversified Suppliers on monthly basis /shared/Procurement/Procurement Scorecard/Monitor Spend by Diversified Suppliers on monthly basis Scorecard Initiative Reward high performing employees /shared/Procurement/Procurement Scorecard/Reward high performing employees Scorecard Initiative _initiativeTree /shared/Procurement/Procurement Scorecard/_initiativeTree Scorecard Initiative Tree Mission /shared/Procurement/Procurement Scorecard/Mission Scorecard Mission Control Spend /shared/Procurement/Procurement Scorecard/Control Spend Scorecard Objective Develop and Retain Strategic Suppliers /shared/Procurement/Procurement Scorecard/Develop and Retain Strategic Suppliers Scorecard Objective Improve Response Time /shared/Procurement/Procurement Scorecard/Improve Response Time Scorecard Objective Improve Supplier Performance /shared/Procurement/Procurement Scorecard/Improve Supplier Performance Scorecard Objective Increase Productivity /shared/Procurement/Procurement Scorecard/Increase Productivity Scorecard Objective New Objective /shared/Procurement/Procurement Scorecard/New Objective Scorecard Objective New Objective 1 /shared/Procurement/Procurement Scorecard/New Objective 1 Scorecard Objective Procurement Scorecard /shared/Procurement/Procurement Scorecard/Procurement Scorecard Scorecard Objective Promote Supplier Diversity /shared/Procurement/Procurement Scorecard/Promote Supplier Diversity Scorecard Objective Reduce Operational Costs /shared/Procurement/Procurement Scorecard/Reduce Operational Costs Scorecard Objective Reduce Out-of-process Spend /shared/Procurement/Procurement Scorecard/Reduce Out-of-process Spend Scorecard Objective Customer /shared/Procurement/Procurement Scorecard/Customer Scorecard Perspective Financial /shared/Procurement/Procurement Scorecard/Financial Scorecard Perspective Operations /shared/Procurement/Procurement Scorecard/Operations Scorecard Perspective Supplier /shared/Procurement/Procurement Scorecard/Supplier Scorecard Perspective _Perspectives /shared/Procurement/Procurement Scorecard/_Perspectives Scorecard Perspective List _scorecardSettings /shared/Procurement/Procurement Scorecard/_scorecardSettings Scorecard Settings Strategy Map /shared/Procurement/Procurement Scorecard/Strategy Map Scorecard Strategy Map View _strategyTree /shared/Procurement/Procurement Scorecard/_strategyTree Scorecard Strategy Tree Strategy Tree /shared/Procurement/Procurement Scorecard/Strategy Tree Scorecard Strategy Tree View 
Vision /shared/Procurement/Procurement Scorecard/Vision Scorecard Vision Suspect Sync Segment /shared/Marketing/Segmentation/Segments/Suspect Sync Segment Segment Logistics KPI Watchlist /shared/Supply Chain Management/Analytic Library/Embedded Content/Logistics/KPIs/Logistics KPI Watchlist Watchlist PIM KPI Watchlist /shared/Supply Chain Management/Analytic Library/Embedded Content/PIM/KPIs/PIM KPI Watchlist Watchlist   All blogs related to BIAPPS on PAAS


BIAPPS on PAAS

Lift and Shift of BIAPPS Artifacts to Oracle Business Intelligence Cloud Service

Authors: Swathi Singamareddygari, Anand Sadaiyan

Table of Contents: Disclaimer; Section 1: Lifting and Shifting Application Roles; Section 2: Lifting and Shifting the Repository; Section 3: Lifting and Shifting the Web Catalogue; Section 4: Repository ADF and Consistency Checks

Disclaimer: This document does not replace the Oracle Business Intelligence Cloud Service Documentation Library or other Cloud Services documents. It serves as a supplement for lifting and shifting Business Intelligence Applications artifacts (BIAPPS / OBIA) to BI Cloud Service (BICS). This document is written based on BI Cloud version 17.2.5. Screenshots included in this document might differ slightly from what you see on your screen. Note: This is a supplementary document for installing Oracle BI Applications on PaaS with Oracle BI Cloud Service (Doc ID 2136376.1). It is always a good practice to take a snapshot of the current environment in Oracle BI Cloud Service before performing any lift and shift activities. Ensure that you create a snapshot before proceeding.

Section 1: Lifting and Shifting Application Roles. Oracle BI Applications has delivered a BAR file (obia_bics_v0.9.bar) with the out-of-the-box BIAPPS application roles. Download the BAR file from My Oracle Support (Doc ID 2136376.1) and upload the application roles to the Oracle BI Cloud Service environment. Uploading the Application Roles BAR file: Login to the Oracle BI Cloud Service environment. From the Oracle BI Cloud Service home page, navigate to the Console and click Snapshots and Models. Click Upload Snapshot to upload the delivered Application Roles BAR file (see Uploading Snapshots). Select the delivered obia_bics_v0.9.bar file and enter "Admin123" as the password. Select the uploaded snapshot, click the Restore action, and in the Restore Snapshot popup select Application Roles and click Restore to restore the snapshot for the application roles (see Restoring Snapshots). Verify the imported application roles in the Application Role Management page (Console -> Users and Roles -> Application Roles).

Section 2: Lifting and Shifting the Repository. Overview: Administrators can upload the on-premises repository to Oracle BI Cloud Service using the steps mentioned below. While setting up Oracle BI Applications on the PaaS environment, the repository is provisioned with the required connection details. Administrators can obtain the Oracle BI Applications provisioned repository from the Compute environment, for example: /u01/app/oracle/middleware/instances/instance1/bifoundation/OracleBIServerComponent/coreapplication_obis1/repository. Note: You can't import the provisioned repository directly into Oracle BI Cloud Service because the severity of the consistency check has been increased in Oracle BI Cloud Service compared to Oracle BI EE 11.1.1.9. Hence you must fix the consistency check issues before uploading the repository to the Oracle BI Cloud Service environment. Fixing ADF and consistency check issues: You must fix the ADF and consistency check issues with the repository in your local environment and then upload the 'fixed' repository to Oracle BI Cloud Service. For ease of use, all the changes are documented in Section 4: Repository ADF and Consistency Checks. It is mandatory to fix the consistency check issues. Steps to be followed for lifting and shifting the repository: Login to the Oracle BI Cloud Service environment. Navigate to the Console from the home page and click Snapshots and Models.
Replace the Oracle BI Cloud Service data model with the on-premises repository using the "Replace Data Model" option and provide the password for the on-premises repository. Ensure that you select the repository in which you fixed the consistency check issues. Verify the uploaded repository by navigating to Analyses and clicking "Create Analysis"; you should see the available subject areas. For more details on repository lifting and shifting, refer to Uploading an On-Premises Data Model to Oracle BI Cloud Service.

Section 3: Lifting and Shifting the Web Catalogue. Administrators can upload web catalogue artifacts from another Oracle BI Cloud Service or Oracle BI Enterprise Edition 11.1.1.9.0 or later. Lifting and shifting the web catalogue mainly involves two activities: archiving each functional area folder from the web catalogue available on the compute node, and unarchiving each folder to Company Shared in the Oracle BI Cloud Service environment.

Lifting the presentation catalogue on the compute node: 1. Open the Oracle BI Applications analytics that is on the compute node, for example https://<host_name>:<port>/analytics. 2. Navigate to the Catalog and expand Shared Folders in the left pane. 3. Select the functional area folder and click Archive in the Tasks pane. 4. Select the Keep Permissions and Keep Timestamps options and click OK. 5. Repeat steps 3 and 4 for all functional area folders. Once archiving is done, login to the Oracle BI Cloud Service environment and follow the steps for shifting the BI presentation catalogue.

Shifting the presentation catalogue in the Oracle BI Cloud Service environment: 1. Login to the Oracle BI Cloud Service environment. 2. Navigate to the Catalog from the home page. 3. Expand the Company Shared folder in the Folders section. 4. If you prefer to keep the "Sample App" folder, skip steps 5 and 6. 5. Select the SampleApp folder and click Delete under the Tasks pane. 6. Click OK in the Confirm Delete page. 7. Select the Company Shared folder in the Folders pane. 8. Select Unarchive from the Tasks pane, browse to the archive file created for each functional area, and select "Preserve" for the ACL option. 9. Repeat steps 7 and 8 to unarchive all functional area folders to the Company Shared folder. 10. Click on the Dashboard menu to see all the dashboards. 11. Click on any dashboard and check for results.

Section 4: Repository ADF and Consistency Checks. For ease of use, all the consistency check issues have been documented in the attached spreadsheet (BICS_Lift_And_Shift_17x.xls). Detailed steps are provided for fixing the consistency check issue for "SET ID-based Security". Follow similar steps to fix the consistency check issues for the other application roles mentioned in the attached spreadsheet. For fixing the ADF issues, follow the technote attached to ARU 20780126. If you are using the trimmed Oracle BI Applications repository, then not all the issues mentioned here may be applicable; fix only the applicable issues. Any modifications to the repository should be done in the local environment and it is your responsibility to maintain the repository.

Detailed steps for fixing the SET ID-based Security application role: Open the repository with the BI Server Admin tool and click Manage and then Identity. In the Identity Manager window, search for the application role "SET ID-based Security", right-click and select Properties. Click Permissions and then the Data Filters tab. Modify the "Core"."Dim - Customer"."Set Id" data filter expression to "Core"."Dim - Customer"."Set Id"=VALUEOF(NQ_SESSION."SET_ID").
All blogs related to BIAPPS on PAAS


Backup - BIAPPS on PAAS

BIAPPS on PAAS – Backup and Restore - Introduction (Part1)

BI Applications (BIApps) is an integrated application involving multiple components. A backup of BIApps therefore means a backup of the integrated application, not just the data warehouse or the database. The high-level architecture of BIAPPS on PAAS is shown below. At a minimum, four different cloud service instances are involved when using BIAPPS on PAAS. The following table shows the components/software installed on each:

DBCS (Database Cloud Service): database that has the ODI repository, BIACM repository, SDS schemas and the data warehouse schema.
BICS (BI Cloud Service): RPD, webcat, jazn file.
Compute Cloud: WebLogic Server, ODI Server, BIACM, BIAPPS shiphome, customer data stored as files (e.g. source files, Universal Adaptor files); optionally, if installed: Corente VPN, VNC, dev tools like SQL Developer, a browser, ODI Studio and their associated files.
Storage Service: backups for Compute/DB.

You will need to back up all of the above instances to be able to successfully restore the BI Applications. Each of the cloud services provides its own backup mechanism. Relevant information for backing up each of these cloud services is available in the Oracle Cloud documentation and will be detailed in subsequent blogs in this backup series. Customers may also want to look at Oracle Database Backup Cloud Service, a separate optional cloud service that is available to take Oracle Database backups.

However, is it adequate if you just backed up each of the cloud instances independently? The following section details some of the considerations related to this question by drawing on a few examples.

Example 1: The database was backed up on Saturday 11th March at 11pm. The Compute Cloud instance was backed up earlier at 10am the same day. Configuration done on WebLogic (JDBC data sources, memory settings, etc.) is not stored in the database. So any configuration done between 10am 11th March and 11pm 11th March would be lost. In that sense, the backup and restore does not truly reflect the state of the integrated BIAPPS environment as it was at 11pm on 11th March.

Example 2: BICS was backed up at 10pm on Sunday 12th March. Continuing from Example 1, we have the database backup from 11pm on 11th March and the BICS backup from 10pm on Sunday 12th March. If any changes were made to the database (like adding a new table/column) followed by a change in the RPD, then when we restore the database and the BICS instance there is a chance of a failure, since they are no longer in sync.

As you can see from the above examples, backing up the cloud instances at different points in time can cause a potential problem. That said, there is no single button that can be clicked to back up all the instances at the exact same time. However, with the right process in place, it is still easy enough to back up and restore the BIAPPS application. Following are some best practices/guidelines that can help avoid the above issues:

The most volatile of the BIAPPS components is the database. That said, the database is primarily changed when the ETL is run. So it is probably a good idea to back up the database outside the ETL window and as frequently as possible. Configuration changes done in BIACM also reside in the database, but these are less likely to occur once the initial configuration is done.
Similarly, ODI repository changes also reside in the database, but in a production instance these should not be made every day but rather during limited, controlled windows.

The BICS RPD is tightly coupled with the database. So the customer could restrict RPD changes to certain limited days in a month and ensure that there is a proper database backup along with an accompanying BICS backup outside the change period. In other words, have a quiet period for making RPD changes and ensure that BICS and the DB are backed up in that quiet period.

Most of the configuration required for WebLogic is done during the initial configuration. So after the full load, ensure there is a blackout period when you back up all the cloud instances. Subsequently, similar to the BICS quiet periods, ensure that changes to WebLogic and other domains on the Compute instance are done only during certain days, and ensure that they are backed up during the quiet periods.

Most of the cloud services have either an API or a command line utility to back up that instance. You could consider using those to automate the backup of all those instances. Better still, you can have that script kicked off automatically at the end of the ETL load plan.

When restoring the system from a backup, consider the impact of any extract from source systems. Most of the source systems have incremental extracts. If the last extracted date is stored in the database, then that date will also be restored as part of the database restore. However, if the extract date is stored outside the BIAPPS database (e.g. Fusion, or any on-premises sources which you are replicating via a replication tool), then you will need to ensure that after the database restore you reset the extract dates to match the data in the database and also clear any pending data in transit (like in UCM).

A full reset of the SDS and the warehouse, followed by a full load, will fix any issues with the SDS/warehouse. However, full loads are expensive and certain source systems have restrictions on how much data can be extracted in a day (e.g. Taleo). Further, you can potentially lose any existing snapshot data when resetting the warehouse (if the snapshot data is not available in the source).

When you restore a database, you will be restoring all the tables and all the schemas; it is not easy to restore a single table. Therefore it is best to keep activities that impact different schemas separate. For example, if doing major configuration in BIACM, do it when no ETL is running and take an ad hoc backup before and after those changes. Similarly, when promoting code to the ODI production repository, do it outside the ETL window and at a time when no BIACM changes are happening, and take a backup before and after those changes. This will ensure that you can use the DB backup to restore the database to the point in time before those changes were made without worrying about the impact on other schemas. For the same reasons, if you are making a change to a single warehouse table, keep a backup of that table (and other dependent tables) in the warehouse schema along with the data, so that you can use those to restore the table rather than use the complete database backup (a minimal sketch of such a table-level backup follows).
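The following is that sketch; W_GL_BALANCE_F is used purely as a placeholder table name, so substitute the warehouse table you are actually changing:

--Keep a copy of a single warehouse table before a risky change (sketch; table name is illustrative)
CREATE TABLE W_GL_BALANCE_F_BKP AS SELECT * FROM W_GL_BALANCE_F;

--If the change needs to be rolled back, the copy can be used to restore the data, e.g.:
--TRUNCATE TABLE W_GL_BALANCE_F;
--INSERT INTO W_GL_BALANCE_F SELECT * FROM W_GL_BALANCE_F_BKP;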
There are other components that are also involved in the BIAPPS on PAAS solution and need to be included in the backup strategy. These include, but are not limited to:

Source systems: These are the systems from which BIAPPS gets its data. The backup of those systems is also required when considering the entire application; however, those are typically taken care of by the source system administrators and hence are not detailed here.

Replication tools: If you are not using a VPN to connect to the on-premises source systems, then it is likely you have some kind of replication mechanism to transfer the data from the on-premises source system to the SDS. Your backup strategy ought to cover those as well.

Identity domain / users and roles: These are usually maintained from the Service Admin Console (SAC). Refer to the SAC documentation on how to back these up.

Any network/security rules you set up between these various instances.

The customer ought to therefore understand the entire BIAPPS architecture and then design the backup strategy accordingly. The customer will also likely have Dev/Test/Prod environments, each of which is a complete BIAPPS application in itself, and the backup strategy must cover all those environments. Special care should also be taken if the customer has a T2P (Test to Production) process and one of the environments needs to be restored.

The subsequent blogs in this series will drill into the relevant backup functionality that is present for the individual components that make up the BIAPPS on PAAS solution. Below are a few links that point to the backup documentation for the relevant cloud services: Backing up Deployments on Database Cloud Service; About Database Backup Cloud Service (an optional cloud service that can be used to back up Oracle databases); Backing up and Restoring Storage Volumes - Compute; Backing up and Restoring BICS.

Disclaimer: Refer to the latest BIAPPS and Oracle Cloud documentation as things might have changed since this blog was written.

All blogs related to BIAPPS on PAAS | BIAPPS on PAAS Backup Blog Series


Cloud Adaptor

Including Fusion-on-Cloud Extensions in BIAPPS (Troubleshooting) - Part 3

Authors: Yadukrishna Munugoti (Oracle) and Guna Vasireddy (Oracle)

Blog 1 and blog 2 in this series provided the steps to get Fusion flexfields into BIAPPS. This blog, the third in the series, details some common issues you might encounter when doing so and how to resolve them.

ODI-40444: Column not found: XXXXXXXXXXXXXXXXXX

After doing the required configuration to get the Fusion flexfields into BIAPPS, you might at times encounter the ODI-40444: Column not found error. This error can occur for a couple of reasons.

1. The flex columns are not present in the Fusion BI Server OTBI RPD. This may happen if the user introduced flexfields in the Fusion Application for the VO but forgot to run the BI Extender (ESS job: Import Oracle Fusion Data Extensions for Transactional Business Intelligence) to push those changes to the Fusion BI Server OTBI RPD. This has been the primary reason for this error. To verify whether this is the cause, check the source files (.csv and .mdcsv) for the flex VO that are present in UCM (blog). If the column list in the .mdcsv file (the VO's metadata file) does not match the actual extract column list in the .csv file (the data file), then this is indeed the cause. To fix this issue, it is recommended to reset and do a full extract/load. The steps would be: (a) Clean up all existing MANIFEST.MF files by deleting them from UCM (Oracle WebCenter Content Server) (b) Run the BI Extender (c) Reset to full extract using BICCC (refer to the BICCC documentation) (d) Re-run the data extract from the BICCC Console (e) Reset SDS/WH (refer to the "Functional Configuration Reference Guide" -> Manage Load Plans UI in the BIAPPS documentation) (f) Run the ODI load plan.

2. You are on the BIAPPS 10.1 release and the csv file name exceeds 128 characters. As per the ODI File technology setting in that BIAPPS version, the maximum allowed file name length is 128. Hence, while running the FTS map, the file name is automatically trimmed to 128 characters and the FTS map is not able to read the csv file (hence the ODI-40444 issue). Download and apply the tech note available in patch 22278058 to resolve this issue. So check the length of the file name in UCM; if it is longer than 128 characters and you are on BIAPPS 10.1, apply the above patch.

Verify whether the flex columns are present in the OTBI RPD

In case you want to check whether the flex columns were successfully imported into the OTBI RPD before proceeding with the BICC extract, follow the steps below to validate.

Figure 1: Issue SQL from BI Server.

Login to the OTBI FA analytics link, go to the Issue SQL OBIEE link, and run a simple query to check the existence of the column(s) in the RPD for a VO. For example, below is the query for an HCM VO:

select_physical *
FROM "oracle.apps.hcm.model.analytics.applicationModule.HcmTopModelAnalyticsGlobalAM_HcmTopModelGlobalAMLocal".."HcmTopModelAnalyticsGlobalAM.BaseWorkerAsgDFFBIAM.FLEX_BI_BaseWorkerAsgDFF_VI"

Note the count of columns returned by the above query. Now get the count of all BI-enabled columns using the SQL below; this count should match the column count from the previous step. If you see any difference, you will have to run the BI Extender job to bring them in sync. Note: you can run this SQL using the Issue SQL link from BI Presentation Services, just like the previous statement. Change the flex VO's name/catalog to the one you are looking for before running the SQL below. Below is the example for an HCM VO.
EXECUTE PHYSICAL CONNECTION POOL "oracle.apps.hcm.model.analytics.applicationModule.HcmTopModelAnalyticsGlobalAM_HcmTopModelGlobalAMLocal"."Connection Pool" <ADFQuery mode="view_object_attribute"><ViewObject>HcmTopModelAnalyticsGlobalAM.ContractLegDDFBIAM.FLEX_BI_ContractLegDDF_VI</ViewObject> <Attribute/> </ADFQuery>

This way you can make sure that all BI-enabled flex VO columns were successfully imported into the OTBI RPD.

Limit the number of flex columns using the BI Enabled flag

Are all the flexfields you defined on the VO relevant for BI? If not, you can restrict the flex extensions in BIAPPS to only those that are relevant to BI. This helps improve performance by restricting the data transfer to only the relevant columns. There is also a limit on how many columns get automatically propagated to the warehouse, and this helps ensure the number of columns stays within that limit. When you define a flexfield in Fusion, there is a flag called BI Enabled. Enabling this flag ensures that the flexfield gets extended to the RPD and then eventually flows into BIAPPS. If some columns are not required for BI, you can disable the flag for those columns.

Figure 2: BI Enabled flag on the Fusion screen. (Click on the image to view a bigger version.)

On editing the selected flexfield, you can see the flag for existing flexfields. Disable the flag for the ones that are not required for BI. Note: Disabling this flag will disable it for OTBI as well. So disable it only if it is not required for OTBI and BIAPPS; there is currently no way to disable it only for BIAPPS.


Cloud Adaptor

Combined Data and Deleted Rows Extract when extracting from Fusion using BICCC

Authors: Sankara Sampath (Oracle) and Guna Vasireddy (Oracle)

Starting from PB14, Fusion 11 has a new option that allows you to combine the Data and Deleted Rows extracts into a single job using BICCC (BI Cloud Connector Console). Internally the individual extracts are run one after the other, and two manifest files are generated along with the corresponding files. The Cloud Data Extract is run first, followed by the Deleted Rows Extract.

Figure 1: Screenshot showing the new combined extract.

Earlier, a customer who wanted both would have had to wait for the data extract to finish before starting the Deleted Rows extract. Scheduling them meant they needed a fair estimate of how long the extracts would take and had to schedule them apart accordingly. This became problematic when data volumes increased during certain periods (e.g. payroll data during month end). With the combined extract, they no longer have to worry about that: the system automatically triggers the Deleted Rows Extract on completion of the Cloud Data Extract.

FAQs:

Q: What happens if the Cloud Data Extract fails for whatever reason? Would the Deleted Rows extract still be triggered when using the combined extract?
A: No. When the Data Extract fails, the Delete Record Extract will NOT be triggered.

Q: At what point is the selection of VOs taken into account? If I started this combined extract and then changed the VO selection while the Cloud Data Extract part is running, would the Deleted Rows extract use the original selection (as it was at the start of the job) or the selection present when the Deleted Rows Extract starts?
A: It is NOT recommended to make any VO selection change while a schedule is in progress, as this will cause undesirable results.

Q: If the Deleted Rows Extract fails for some reason, would the next schedule for the combined extract still be picked up?
A: Yes, the next schedule will work as expected, as the Delete Record Extract always runs in FULL mode and does NOT depend on the last extract date.

Q: Is there a way to restart from the failed point? If the Cloud Data Extract succeeds and the Deleted Rows extract fails, can I restart the job to do only the Deleted Rows extract again?
A: No. It is NOT possible to restart any scheduled job.


EBS

Special use cases for Replication to SDS via OGG

Authors: Guna Vasireddy (Oracle), Sai Rajendra Vadakattu (Oracle), Ankur Jain (Oracle)

This post covers three special cases: replicating a table without a primary key, replicating views, and replicating OLTP packages.

Replicating a table without a primary key

Use case: The customer is using Oracle GoldenGate (OGG) to replicate the source to the SDS and would like to configure BIAPPS to use the SDS instead of the source. Note: The SDS table differs from the source table in primarily two ways. First, BIAPPS expects CDC columns in the SDS; these are used by the BIAPPS mappings and are automatically generated when you use the BIAPPS-provided tools to create the SDS tables (refer to the BIAPPS documentation for more details). Second, hard deletes in the source get translated as soft deletes in the SDS; the column CDC$_DML_CODE='D' for such rows in the SDS.

Issue: Some of the source tables used by BIAPPS neither have a primary key in the source nor have a key defined in the BIAPPS ODI repository. At times, the source system enforces the primary key in the application layer and does not have primary keys at the database level. In such cases, the key would have been identified and mentioned in the BIAPPS ODI repository, and the BIAPPS documentation specifies how to configure OGG to use those columns using the keycols clause. There are other source tables that don't have a primary key at all. An example is the MTL_TRANSACTION_ACCOUNTS table in EBS: this table does not have a primary/unique key or a combination of columns that uniquely identifies a row. This blog details how such tables can be replicated.

Solution: Follow the steps below to replicate a table without primary keys using OGG. Essentially we ask OGG to treat all the columns in the table as the identifier using the keycols clause.

1. Make a list of all the columns present in the table (a dictionary query for this is sketched after this section).
2. Using the list, prepare the keycols clause for OGG, including all the columns.
3. Edit the replicat param file to specify the keycols clause for the tables being replicated.
4. Restart OGG so that the new replicat file becomes effective.
5. Do a full replication of the table initially using either a dblink or other methods (like expdp), as would have been done with other tables. Subsequently use OGG to perform the incremental replication.
6. If the table is known to have large volumes, then to improve performance consider creating a regular index on the join columns. These are usually the columns on which the table is joined with other tables in the SDE maps. Create this after doing the initial full replication of the table.

The following is the observed behavior when using the above solution:

Action on source: Insert records. Result in target DB: Record inserted. As expected: Y
Action on source: Update record. Result in target DB: Existing record gets updated. As expected: Y
Action on source: Delete record. Result in target DB: Existing record gets marked for delete (CDC$_DML_CODE='D'). As expected: Y
Action on source: Two duplicate rows exist in the source, matching in all columns; perform inserts/updates/deletes and check. Result in target DB: The same two duplicate rows exist in the target; OGG randomly picks one of the two rows for updates/deletes. As expected: N

So as long as there are no duplicates in the tables (i.e. rows having values matching in all columns), the above approach works fine. It is unlikely that the source table will have two exact duplicate rows, so this approach should be expected to work.
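To build the full column list referenced in step 1 above, a simple data dictionary query such as the following can be used. This is a sketch: it assumes you can query ALL_TAB_COLUMNS in the source database, and it uses MTL_TRANSACTION_ACCOUNTS as the example table.

--List all columns of the source table, in positional order (sketch)
SELECT column_name
FROM all_tab_columns
WHERE table_name = 'MTL_TRANSACTION_ACCOUNTS'
  AND owner = 'INV' --adjust to your source schema
ORDER BY column_id;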
The solution above is explained by taking the example of one EBS table, MTL_TRANSACTION_ACCOUNTS, which falls under this category. The replicat file is modified to include all the columns.

Sample Replicat file:

--------------------------------------------------
-- MTL_TRANSACTION_ACCOUNTS
--------------------------------------------------
-- Process Inserts --
GETINSERTS
IGNOREUPDATES
IGNOREDELETES
MAP EBS_SRC.MTL_TRANSACTION_ACCOUNTS, TARGET SRC_REPLICA.MTL_TRANSACTION_ACCOUNTS
,COLMAP (USEDEFAULTS
,CDC$_RPL_LAST_UPDATE_DATE = @DATENOW ()
,CDC$_SRC_LAST_UPDATE_DATE = @GETENV ('GGHEADER', 'COMMITTIMESTAMP')
,CDC$_DML_CODE = 'I')
,keycols(TRANSACTION_ID, REFERENCE_ACCOUNT, LAST_UPDATE_DATE, LAST_UPDATED_BY, CREATION_DATE, CREATED_BY, LAST_UPDATE_LOGIN, INVENTORY_ITEM_ID, ORGANIZATION_ID, TRANSACTION_DATE, TRANSACTION_SOURCE_ID, TRANSACTION_SOURCE_TYPE_ID, TRANSACTION_VALUE, PRIMARY_QUANTITY, GL_BATCH_ID, ACCOUNTING_LINE_TYPE, BASE_TRANSACTION_VALUE, CONTRA_SET_ID, RATE_OR_AMOUNT, BASIS_TYPE, RESOURCE_ID, COST_ELEMENT_ID, ACTIVITY_ID, CURRENCY_CODE, CURRENCY_CONVERSION_DATE, CURRENCY_CONVERSION_TYPE, CURRENCY_CONVERSION_RATE, REQUEST_ID, PROGRAM_APPLICATION_ID, PROGRAM_ID, PROGRAM_UPDATE_DATE, ENCUMBRANCE_TYPE_ID, REPETITIVE_SCHEDULE_ID, GL_SL_LINK_ID, USSGL_TRANSACTION_CODE, INV_SUB_LEDGER_ID);

-- Process Updates --
GETUPDATES
IGNOREINSERTS
IGNOREDELETES
MAP EBS_SRC.MTL_TRANSACTION_ACCOUNTS, TARGET SRC_REPLICA.MTL_TRANSACTION_ACCOUNTS
,COLMAP (USEDEFAULTS
,CDC$_RPL_LAST_UPDATE_DATE = @DATENOW ()
,CDC$_SRC_LAST_UPDATE_DATE = @GETENV ('GGHEADER', 'COMMITTIMESTAMP')
,CDC$_DML_CODE = 'U')
,keycols(TRANSACTION_ID, REFERENCE_ACCOUNT, LAST_UPDATE_DATE, LAST_UPDATED_BY, CREATION_DATE, CREATED_BY, LAST_UPDATE_LOGIN, INVENTORY_ITEM_ID, ORGANIZATION_ID, TRANSACTION_DATE, TRANSACTION_SOURCE_ID, TRANSACTION_SOURCE_TYPE_ID, TRANSACTION_VALUE, PRIMARY_QUANTITY, GL_BATCH_ID, ACCOUNTING_LINE_TYPE, BASE_TRANSACTION_VALUE, CONTRA_SET_ID, RATE_OR_AMOUNT, BASIS_TYPE, RESOURCE_ID, COST_ELEMENT_ID, ACTIVITY_ID, CURRENCY_CODE, CURRENCY_CONVERSION_DATE, CURRENCY_CONVERSION_TYPE, CURRENCY_CONVERSION_RATE, REQUEST_ID, PROGRAM_APPLICATION_ID, PROGRAM_ID, PROGRAM_UPDATE_DATE, ENCUMBRANCE_TYPE_ID, REPETITIVE_SCHEDULE_ID, GL_SL_LINK_ID, USSGL_TRANSACTION_CODE, INV_SUB_LEDGER_ID);

-- Process Deletes --
IGNOREINSERTS
IGNOREUPDATES
GETDELETES
UPDATEDELETES
MAP EBS_SRC.MTL_TRANSACTION_ACCOUNTS, TARGET SRC_REPLICA.MTL_TRANSACTION_ACCOUNTS
,COLMAP (USEDEFAULTS
,CDC$_RPL_LAST_UPDATE_DATE = @DATENOW ()
,CDC$_SRC_LAST_UPDATE_DATE = @GETENV ('GGHEADER', 'COMMITTIMESTAMP')
,CDC$_DML_CODE = 'D')
,keycols(TRANSACTION_ID, REFERENCE_ACCOUNT, LAST_UPDATE_DATE, LAST_UPDATED_BY, CREATION_DATE, CREATED_BY, LAST_UPDATE_LOGIN, INVENTORY_ITEM_ID, ORGANIZATION_ID, TRANSACTION_DATE, TRANSACTION_SOURCE_ID, TRANSACTION_SOURCE_TYPE_ID, TRANSACTION_VALUE, PRIMARY_QUANTITY, GL_BATCH_ID, ACCOUNTING_LINE_TYPE, BASE_TRANSACTION_VALUE, CONTRA_SET_ID, RATE_OR_AMOUNT, BASIS_TYPE, RESOURCE_ID, COST_ELEMENT_ID, ACTIVITY_ID, CURRENCY_CODE, CURRENCY_CONVERSION_DATE, CURRENCY_CONVERSION_TYPE, CURRENCY_CONVERSION_RATE, REQUEST_ID, PROGRAM_APPLICATION_ID, PROGRAM_ID, PROGRAM_UPDATE_DATE, ENCUMBRANCE_TYPE_ID, REPETITIVE_SCHEDULE_ID, GL_SL_LINK_ID, USSGL_TRANSACTION_CODE, INV_SUB_LEDGER_ID);

Suggested columns for indexes in SDS: Looking at the columns used as joins for this table, TRANSACTION_ID is a likely candidate for an index on the SDS.

Table: MTL_TRANSACTION_ACCOUNTS. Suggested column for index: TRANSACTION_ID.

Note: These are regular indexes, not unique/primary keys, and they should help the mappings that extract from this SDS table.
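A minimal sketch of such an index, using the SDS schema name from the sample replicat file above (the index name is illustrative; adjust the schema to your own SDS schema):

--Regular (non-unique) index on the usual join column of the replicated table (sketch)
CREATE INDEX MTL_TXN_ACCOUNTS_N1 ON SRC_REPLICA.MTL_TRANSACTION_ACCOUNTS (TRANSACTION_ID);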
FAQs:

1) If I don't specify any keycols, GoldenGate is supposed to use all the columns as key columns. How is this different from that?
Not specifying any keycols is the same as specifying keycols on all the columns. Specifying keycols, however, makes it clear to the person reviewing the file how OGG will do the replication.

2) Is there any performance impact due to this?
OGG will need to store the values of all the columns in order to find the row in the target and perform the same operation, so there will definitely be an overhead. Where primary keys are available, you should use them. Use this approach only for cases where it is not possible to identify such a key.

3) I don't have a primary key index on my source and we are not allowed to modify the source tables. However, I know the columns that make up the primary key. What do I do?
If you have a primary/unique key/index defined on the source/target, OGG will automatically use it. In the absence of such keys, OGG by default will treat all columns as keycols, as specified above. However, if you do know the columns that make up the primary key and do not want to define the primary key index at the database level, then you can use the keycols clause detailed above to specify only those columns. In other words, keycols can be used to specify which subset of columns makes up the primary key in the source/target, and that would be more efficient than specifying all the columns.

4) If we specify all columns as keycols, won't we end up with new rows every time there is an update to the source?
That is not how keycols work. Keycols specify the columns that will be used to find the record in the target; the operation that was performed on the source is then applied to that row in the target. Consider a case where the source has three columns and a row with col1=a, col2=b, col3=c. In the source, col2 is now updated to b2. The operation OGG performs is roughly equivalent to this SQL:

UPDATE table SET col2 = 'b2' WHERE col1 = 'a' AND col2 = 'b' AND col3 = 'c'

So OGG uses the values of the columns mentioned in keycols to identify the row in the target and then applies the same operation to that row.

Replicating Views

Use case: When GoldenGate is the replication technology, the mappings which use a view as the source are set to run in non-SDS mode, and such mappings will try to connect directly to the source OLTP. If the customer has a security restriction on direct connections to the source OLTP, then such mappings will fail.

Solution: The following workaround can be used for running such mappings. Create tables/materialized views on top of the views in the OLTP schema, trigger a process to refresh these objects in the OLTP before the ETL starts, and replicate these MVs to target tables in the SDS using OGG. There are a few views from the source which are used in the BI ETL; a list of such views is provided at the end. The SDS schema should already have these views as tables. The first-time replication should be a full load of the views to the target SDS tables; populate CDC$_SRC_LAST_UPDATE_DATE and CDC$_DML_CODE with the current date and 'I' respectively. For incremental loads, a materialized view can be created on the source OLTP on top of the view:

CREATE MATERIALIZED VIEW test_mv
BUILD IMMEDIATE
REFRESH COMPLETE ON DEMAND
AS select * from test_v;

In the above SQL command, REFRESH can be set to FORCE or COMPLETE. For any change in the underlying view definition, the MV might need to be recreated. A procedure/process needs to refresh the MV before the BI load plan runs; use the Oracle DB procedure DBMS_MVIEW.REFRESH to achieve this.
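A minimal sketch of such a refresh call, assuming the example MV name test_mv used above ('C' requests a complete refresh):

--Refresh the materialized view before the BI load plan runs (sketch)
BEGIN
  DBMS_MVIEW.REFRESH(list => 'TEST_MV', method => 'C');
END;
/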
For the replicat parameter file: the OGG replicat param files for OBIA are generated using an ODI procedure. From the generated file, remove the complete entry for these views and add a normal entry like:

------------------------------------------------
-- Test_v
------------------------------------------------
MAP ggtest.test_mv, TARGET ankur_test.test_v,
COLMAP (USEDEFAULTS,
CDC$_SRC_LAST_UPDATE_DATE = @GETENV ('GGHEADER','COMMITTIMESTAMP'),
CDC$_DML_CODE = 'I'),
KEYCOLS(COL1,COL2,COL3);

If no PK is defined at the target, you should choose appropriate key columns and define them in the OGG replicat processes using the KEYCOLS keyword. In ODI, the source OLTP connection should be altered to point to the SDS itself.

Note: If faced with performance issues (for example, if the data in the views is huge), users should explore an alternative approach to achieve this view replication.

List of such views in E-Business Suite: PO_VENDORS, JTF_TASK_ASSIGNMENTS, GL_SETS_OF_BOOKS, ORG_ORGANIZATION_DEFINITIONS, BOM_BILL_OF_MATERIALS, MTL_ITEM_LOCATIONS_KFV, CST_ORGANIZATION_DEFINITIONS, CS_LOOKUPS, PA_EGO_LIFECYCLES_PHASES_V, GL_CODE_COMBINATIONS_KFV, PON_AUCTION_HEADERS_ALL_V, MTL_ITEM_CATALOG_GROUPS_B_KFV, AP_INVOICES_V, PER_WORKFORCE_CURRENT_X

List of such views in PeopleSoft: CM_ITEM_METH_VW, DEPENDENT_BENEF, EMPLOYMENT, PERSON_ADDRESS

Replicating OLTP Packages

There might be a few cases where ODI maps use a package or function from the OLTP. Since these would not be available in the SDS, these mappings cannot normally be run against the SDS.

Solution: These need to be handled on a case-by-case basis. Technical notes for such cases are available on Oracle Support. For example, Units of Measure (UOM) is not supported for an SDS-based E-Business Suite source. A tech note is available for OTBI-E, but the same, with a few changes, can be leveraged for OBIA: http://docs.oracle.com/applications/biapps102/relnotes/GUID-A4A75F7A-8AFC-4279-9AE1-1448AEEE25F2.htm#BIARL-GUID-4E9C40DF-6A1D-4521-8AEF-6D67D8EAED69

Similar to the above tech note, a table called W_SDS_UOM_CONVERSION can be created in the EBS source system, the same as what is defined in the SDS schema. You can develop a stored procedure which truncates the table and loads it with each of the four SQL statements given in the above tech note (remove all the to_char conditions from the select clause, as those are used for generating CSV files). This stored procedure should be scheduled to run prior to the running of the load plan. Then configure GoldenGate to extract and replicate this table to the SDS.
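A skeleton of such a stored procedure is sketched below. The procedure name is illustrative, and the INSERT statements are the four SQL statements from the tech note referenced above (not reproduced here), so this is only a frame to drop them into:

--Skeleton refresh procedure for W_SDS_UOM_CONVERSION (sketch; procedure name is illustrative)
CREATE OR REPLACE PROCEDURE REFRESH_W_SDS_UOM_CONVERSION AS
BEGIN
  EXECUTE IMMEDIATE 'TRUNCATE TABLE W_SDS_UOM_CONVERSION';
  --INSERT INTO W_SDS_UOM_CONVERSION (...) SELECT ...;  --statement 1 from the tech note
  --INSERT INTO W_SDS_UOM_CONVERSION (...) SELECT ...;  --statement 2 from the tech note
  --INSERT INTO W_SDS_UOM_CONVERSION (...) SELECT ...;  --statement 3 from the tech note
  --INSERT INTO W_SDS_UOM_CONVERSION (...) SELECT ...;  --statement 4 from the tech note
  COMMIT;
END;
/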

Authors: "Guna Vasireddy - Oracle","Sai Rajendra Vadakattu - Oracle ", "Ankur Jain - Oracle"   Replicating Table without Primary Key Replicating Views Replicating OLTP Packages     Replicating Table...

Customization

Importing Custom tables from Source Systems

At times, the source system for BIAPPS might have additional tables that you want to extract data from and bring into BIAPPS. This blog specifies the steps to import a source table that exists in the source system but is not present in the BIAPPS ODI repository. It is assumed that the user has already registered the source system before attempting any of the steps in this document. Please refer to the BIAPPS documentation in case you need help with the source registration.
Identify the Model in the BIAPPS Repository: The first step is to identify the model in the BIAPPS repository corresponding to the source system in question. For the purpose of this blog, we will assume there is an EBS table CST_ITEM_COST_DETAILS that we want to import from an EBS 12.2 source system. The models are visible under the Models tab in the Designer window of ODI Studio. Select the right model corresponding to the source system version and then double click on the model in ODI Studio. The screenshot below shows the EBS 12.2 model opened. Once you have the model open in ODI Studio, select the Reverse Engineer tab and click on Customized. This will allow the user to select the BIAPPS RKM. Ensure that the RKM relevant for the source system is selected. In the above example, the RKM ought to be RKM BIAPPS E-Business Suite(Oracle).BI Apps Project. Specify the table you want to import using the LIST_OF_TABLES option. You can leave the other parameters at their defaults. In the example above, LIST_OF_TABLES would be set to CST_ITEM_COST_DETAILS to import the table of that name from the source system. Once the appropriate RKM is selected and the required table is specified in the RKM options, click on Reverse Engineer at the top to start the process.
Monitor the RKM Session Log: Once you start the reverse engineer process, it will start a session that is visible from Operator just like any other scenario session. Monitor that session to check whether the process completed. If the session failed, view the session log to see the reasons for the failure. The screenshot below shows the session in progress as visible from Operator. Once the run completes successfully, you should be able to see the desired table in the ODI repository. The table would normally get placed in the appropriate sub-model. Certain RKMs (like the one for the BIAPPS warehouse) may place it under a generic sub-model. If you are not able to locate the newly imported table (as there are many sub-models), you can run a backend query against the ODI repository to identify the sub-model it got imported under (a variant of this query that also lists the imported columns is given at the end of this post).
select m.mod_name, s.smod_name, t.res_name table_name
from snp_table t
inner join snp_model m on t.i_mod = m.i_mod
inner join snp_sub_model s on t.i_sub_model = s.i_smod
where t.res_name = 'CST_ITEM_COST_DETAILS'
Common Pitfalls: Some of the common mistakes that should be avoided are listed below:
Not selecting the Customized radio button in the Reverse Engineer tab. If you don't do that, ODI will use the standard RKM, which is not designed to extract table metadata from ERP sources.
Not selecting the appropriate RKM. Once you select the Customized radio button, you will have the option to select the RKM from the "Knowledge Module" drop-down list. Ensure that the RKM relevant to the source system is selected in the drop-down.
Specifying the table list in the wrong place, or not specifying a list at all. The table(s) to be imported should be specified against the KM option LIST_OF_TABLES. Use a comma-separated list in case you have more than one table (e.g. TABLE1,TABLE2).
Do not enclose names in quotes or braces. Do not specify the names against the MASK text box that appears above the KM options; that won't be used. Do not run the RKM without specifying a list, as that will attempt to import all tables from the source and can unnecessarily increase the size of the ODI repository, thereby slowing it down.
If the table you are trying to import is a synonym in the schema of the user you are connecting to the database with, then the table will get imported as a synonym and won't show any columns. If you want to import the columns as well, you should connect to the database as the user who owns the table that the synonym points to.
The ERP-specific RKM may read application-specific tables to get the table/column metadata. For example, the EBS RKM in the example above looks at the fnd_tables/fnd_columns tables in EBS to get the table/column metadata. If the custom table that you are importing is not present in those application tables, then the RKM will run fine but nothing will be imported into ODI. If you still want to import such tables into ODI, use the ODI-provided standard RKM instead. Please note, however, that doing so will not populate any of the BIAPPS table/column flexfields or any additional information that BIAPPS mappings use (like CDC columns), and hence adding these tables to existing BIAPPS mappings may cause failures. The customer should instead use them in their custom mappings only. The steps to run the standard RKM are as below:
a) Open the model
b) Go to the Reverse Engineer tab
c) Select Standard
d) Provide the table name in the Mask field
e) Click on Reverse Engineer
f) The table should then get imported directly under the model (not under any sub-model). Now move the table to the desired sub-model.
Refer to the ODI standard documentation for more details.
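Building on the sub-model query shown earlier in this post, the following is a sketch of a variant that also lists the imported columns. It assumes the ODI repository table SNP_COL with its COL_NAME and I_TAB columns; verify the exact repository table and column names for your ODI version before relying on it:

-- List the sub-model and the columns that were imported for the table
select m.mod_name, s.smod_name, t.res_name table_name, c.col_name
from snp_table t
inner join snp_model m on t.i_mod = m.i_mod
inner join snp_sub_model s on t.i_sub_model = s.i_smod
left outer join snp_col c on c.i_tab = t.i_tab
where t.res_name = 'CST_ITEM_COST_DETAILS'
order by c.col_name;

If the query returns a row with a NULL col_name, the table was most likely imported as a synonym without columns (see the pitfall above).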


ETL

Troubleshooting OBIA Load Plan Schedules

In OBIA, the normal process is to use OBIACM to generate the load plan and then schedule it. However, at times customers report issues with load plan schedules. This blog explains why such things are likely to happen and how to resolve them.
Symptoms Observed:
Duplicate load plan schedules, so the same load plan runs multiple times, yet only one schedule is visible in OBIACM.
Schedule not visible in OBIACM, but the load plan runs daily as if it were scheduled.
Schedule shows up in OBIACM, but the load plan is not triggered accordingly.
Why this might happen? Whenever you re-generate a load plan, OBIACM creates a new load plan in ODI. When you then schedule the load plan, OBIACM applies the schedule to the last generated load plan (corresponding to that load plan definition). From OBIA 11.1.1.10.1 onwards, the old load plan schedules are automatically carried forward to the new load plan on regeneration; for versions prior to that, the old schedules are not carried forward to the newly generated load plan. Also, schedules created directly in ODI using ODI Studio may or may not show up in BIACM, especially if the schedules are not set against the latest load plan for a particular load plan definition. Another reason for duplicate schedules is when you have an HA/cluster setup for the ODI agent. Whenever you update or create a load plan schedule from BIACM, you must click on Update Schedules for the new schedule to take effect; not doing so means the new load plan schedule will not take effect.
How to fix it? To fix the issue, log in to ODI Studio and go to the Operator tab -> Scheduling -> All Schedules. Delete all the schedules until no more entries show up. This ensures that no more schedules are present. Now log in to OBIACM and set up a new schedule as per your requirement. Figure 1 - Screenshot showing no schedules when viewed from the Operator tab after cleaning up all existing schedules.
How to prevent this? To prevent this from occurring, take the following precautions:
If you are on an OBIA version prior to release 11.1.1.10.1, before re-generating or deleting a load plan from BIACM, ensure there are no active schedules against it.
Do not set up schedules directly in ODI Studio, as they may not show up in OBIACM.
If you have HA or load balancing, ensure that you have followed the ODI documentation on how to configure the setup so as to prevent each ODI agent from triggering the load plan schedule. (Refer to MOS doc 2075828.1)
Do not export/import load plans or their schedules from one environment to another.
If the ODI agent is already running at the time you define the schedule in OBIACM, you must click the Update Schedules icon in the toolbar to add the schedule to the ODI agent scheduler.
Note: If you have created a custom load plan in ODI manually, then you cannot use OBIACM to schedule it. Only load plans generated using OBIACM are visible in OBIACM and hence only those can be scheduled using OBIACM; use ODI Studio to schedule such custom load plans instead. Do not schedule load plans in such a way that one load plan execution overlaps with another; this is not supported and will cause data corruption that requires a full re-load to fix.


Cloud Adaptor

OBIA Fusion Cloud Adaptor - View extracted files in UCM

Authors: "Sireesha Mandava - Oracle" When using Cloud Adaptor, the files will get uploaded to UCM/Storage Service. This blog explains how to view the files in UCM. Every extract that is run creates what is referred to as a Manifest file. This file contains the list of files that make up that extract. The file name is MANIFEST.MF which gets renamed to <timestamp>_MANIFEST_<TIMESTAMP>.MF post successful download via the ODI Load Plan. At times you want to verify the files that got extracted before running the load plan. This blog explains the steps to do so. Locate the Manifest File: From the UCM console, search for MANIFEST.MF. Note: If files are already downloaded, the manifest would have renamed as explained above.Change your search criteria accordingly in such a case. You will see the list of Manifest files in the search results as below: Click on the hyperlink to view the contents of the manifest file. How to use MANIFEST Extractor uploads a MANIFEST.MF file along with all the extracted VO's csv/mdcsv/pecsv in compressed/zip form. MANIFEST file will have the list of files that are uploaded in the current batch run. MANIFEST file is NOT compressed/zip. There can be more than one MANIFEST.MF file; one per extract run. .csv files are VO data files. .mdcsv files are metadata files which has column and its data type definition for Flex VOs. .pecsv files are data files having only primary key column values, which are used to identify deleted records in warehouse. File name format "file_{VONAME}-batch{number}-{TIMESTAMP}". Sample file_crmanalyticsam_agreementam_agreementitempvo-batch1813107209-20140712_123812 All files will be uploaded with file extension .zip irrespective of the above mentioned file types. BatchDetails files can be ignored as its no more used and will not be generated/uploaded in future.   Sample UCM MANIFEST.MF file content - PLV_KEY=FUSION_10_0 crmanalyticsam_partiesanalyticsam_customer;9526;b2af2bf486366e2c2cb7598849f0df2e crmanalyticsam_opportunityam_revenue;9527;9417782870595b5ba34ce08874cfbdf7 crmanalyticsam_partiesanalyticsam_customercontact;9528;879a0328eb936d06c57ee1cb6607516b crmanalyticsam_partiesanalyticsam_customeraccount;9529;35a8c42e079d36a12eaaad1878e8d61e crmanalyticsam_opportunityam_opportunity;9530;fd0aece1bc067abe1abac7a4fd121ea7 First row of the MANIFEST has information about fusion apps source version used by BI Apps warehouse integration which can be ignored for custom warehouse integration. Further lines will have information about the uploaded files in the form vo_name;ucm_document_id;md5_check_sum_value. For example crmanalyticsam_partiesanalyticsam_customer;9526;b2af2bf486366e2c2cb7598849f0df2e Here '9526' is the 'ucm_document_id' of uploaded file, 'b2af2bf486366e2c2cb7598849f0df2e' is 'md5_check_sum_value'.So the file corresponding to the entry above would be named something like file_crmanalyticsam_partiesanalyticsam_customer-batch1813107209-20140712_123812.csv While downloading content from UCM, search for DOCTITLE "MANIFEST.MF" with sort by DOCDATE in DESC order. This will provide all the MANIFEST.Mf UCM docIds in order. Download each MANIFEST file using docId. Read its content to download data files using their respective 'ucm_document_id' by parsing the above mentioned form. May use MD5 check sum to verify downloaded file content. 
After unzipping, you can process the files based on their file extension (.csv, .mdcsv, .pecsv). Once the data files are processed, the corresponding MANIFEST.MF file in UCM is stamped with a TIMESTAMP prefix like '{TIMESTAMP}_MANIFEST.MF' so that it is not picked up in the next download.
View csv or mdcsv files in UCM: Once you have identified the manifest file corresponding to the extract you are interested in, you can get the name of the csv/mdcsv/pecsv file from the manifest. Then search by that file name to locate the file in UCM and view it (similar to the steps mentioned under "Locate the Manifest File" above).

Authors: "Sireesha Mandava - Oracle" When using Cloud Adaptor, the files will get uploaded to UCM/Storage Service. This blog explains how to view the files in UCM. Every extract that is run creates what...

BIAPPS on PAAS

BICS Lift and Shift of BIAPPS Content

Authors: Anand Sadaiyan, Swathi Singamareddygari
Table of Contents
Disclaimer
Section 1: RPD Lift and Shift
Section 2: Application Roles Lift and Shift
Section 3: Webcat Lift and Shift
Section 4: Using RDC to connect to an on-premises Oracle database
Section 4.1: Troubleshooting BI Cloud Service RDC
Section 5: Limitations
Disclaimer: This document does not replace the Oracle Business Intelligence Cloud Service Documentation Library or other Cloud Services documents. It serves as a supplement for doing a Lift and Shift of BIAPPS content to BICS. This document is written based on BI Cloud version 16.3.6.0.0; GUI screenshots shown in the document might differ from the actual interfaces.
Section 1: RPD Lift and Shift
Note: It is always good practice to take a snapshot of the current BICS environment before doing a Lift and Shift, so that if any issue occurs you can restore to the previous working version.
Overview: Administrators can upload data models built with Oracle BI Enterprise Edition to Oracle BI Cloud Service. After uploading a data model file (.rpd) to the cloud, content authors can build reports in the usual way. Any modifications to the uploaded RPD should be done in the local environment; no modifications are allowed in BICS. The provisioned RPD can be obtained from the Compute environment, for example /u01/app/oracle/middleware/instances/instance1/bifoundation/OracleBIServerComponent/coreapplication_obis1/repository on the compute node.
Modify the Database as a Service listener.ora to use SID as Service: If you chose an Oracle 12c Database as a Service, then for the Database as a Service connection to work with BI Cloud Service you must add the following line to the Database as a Service listener.ora configuration (where "listener" is the name of your listener) and restart the TNS listener: USE_SID_AS_SERVICE_LISTENER=on. More information can be found here.
Steps to be followed for the RPD Lift and Shift:
1. Log in to the BICS environment.
2. Navigate to the Console from the home page and click on Snapshots and Models.
3. Replace the BICS data model with the on-premises RPD using the "Replace Data Model" option and provide the password for the on-premises RPD.
4. Verify the uploaded RPD by navigating to Analyses. By clicking "Create Analysis", all the available Subject Areas are displayed.
5. For more details on the RPD Lift and Shift, please refer to Uploading an On-Premises Data Model to Oracle BI Cloud Service.
Section 2: Application Roles Lift and Shift
Steps to be followed for uploading the application roles:
1. From the BICS home page navigate to the Console and click on Snapshots and Models.
2. Upload the delivered Application Roles bar file by clicking on Upload Snapshot.
3. Choose the delivered "obia_bics.bar" file and enter the password "Admin123".
4. For more details on uploading a snapshot refer to Uploading Snapshots.
5. Select the uploaded snapshot and click on the Restore button (check only Application Roles) to restore the snapshot for application roles.
6. For more details on restoring a snapshot refer to Restoring Snapshots.
7. Verify the imported application roles, which are available under "Users and Roles" -> "Application Roles".
Section 3: Webcat Lift and Shift
Administrators can upload content from another Oracle BI Cloud Service or from Oracle BI Enterprise Edition 11.1.1.9.0 or later. At a high level, the following activities will be performed:
Replace /shared with /company_shared in the webcat available on the compute node.
1) Archive and un-archive to a temp folder in the BICS environment.
2) From the temp folder, move the content to Company Shared.
In the Compute node:
1. Zip the webcat available on the compute node (e.g. OBIEE_HOME/instances/instance1/bifoundation/OracleBIPresentationServicesComponent/coreapplication_obips1/catalog) and copy it to a different path.
2. Unzip the webcat file on the compute node.
3. Invoke the Catalog Manager using the command ./runcat.sh, which is available in the path OBIEE_HOME/instances/instance1/bifoundation/OracleBIPresentationServicesComponent/coreapplication_obips1/catalogmanager.
4. The Catalog Manager will open.
5. Now open the unzipped webcat through the Catalog Manager in offline mode.
6. Expand the tree structure in the left pane.
7. Select shared and click on "XML Search and Replace" available under Tools.
8. In the "XML Search/Replace" window enter "/shared" in the old text and "/company_shared" in the new text as shown below, and click OK.
9. After replacing, the Items Changed window will be shown as below.
10. Archive the shared folder by selecting it and clicking File -> Archive.
11. Once archiving is done, log in to the BICS instance and follow the steps given below.
BICS Environment:
1. Log in to the BICS environment.
2. Navigate to the catalog from the home page.
3. Under "Company Shared" create a temporary folder. In the example below we have used "temp"; it can be any name of your choice.
4. Expand Company Shared and select the temp folder created in step 3.
5. Select the Un-archive option available in the Tasks pane and browse to the archive file created earlier in the Compute node steps.
6. Once un-archiving is done, navigate to "temp\Shared Folders", select any functional area folder like "Human Capital Management" and click on Copy, available in the Tasks section.
7. Now select the Company Shared folder and click on Paste.
8. You will observe that the Human Capital Management folder gets copied under Company Shared.
9. Repeat steps 6 and 7 to copy all the folders to the Company Shared folder.
10. Click on the Dashboards menu; you will then be able to see all the dashboards.
11. Click on any dashboard and check the results.
Section 4: Using RDC to connect to an on-premises Oracle database
Overview: Oracle BI Applications use variables/initialization blocks which require a connection to the source OLTP (i.e. E-Business Suite, PSFT) databases in order to execute SQL queries. To achieve this in the cloud deployment we use the BI Cloud Service feature Remote Data Connector (RDC).
Perform the following steps:
1. Download the Oracle BI Cloud Service Remote Data Connector.
2. Deploy the RDC on the on-premises WebLogic server, following the installation and configuration steps in Configuring the Remote Data Connector (RDC) for BI Cloud Service (BICS).
3. Create a data source pointing to the on-premises OLTP database in the WebLogic server.
4. Copy the public key from BICS and paste it in WLS under "<domain_home>/webcenter/rdc_keys/obi-remotedataconnector".
5. Using the 12c BI Administration tool, choose "Load Java Datasources", provide the on-premises WLS connection details and test the connectivity.
6. Modify the OLTP connection in the RPD to use JDBC (JNDI). The JDBC (JNDI) interface is visible only after the Java data sources have been loaded (step 5) and the connection is successful.
Section 4.1: Troubleshooting BI Cloud Service RDC
For the latest troubleshooting information please consult the BICS RDC team.
Oracle BI Administration tool errors:
Unable to see "JDBC (JNDI)" in Oracle BI Administration tool 12c, after loading Java data sources.
Close the Oracle BI Administration tool and re-open it. Do not open the repository in the Administration tool; load the Java data sources first, then open the repository.
"Load Java Datasources" fails, OR "401--Unauthorized" when visiting /obiee/javads?status: Check the URL. Make sure that the WebLogic username and password are correct. Disable RDC's metadata security by running the following in the same shell as the one starting WebLogic:
Linux: export DISABLE_RDC_METADATA_SECURITY=1
Windows: set DISABLE_RDC_METADATA_SECURITY=1
Unable to import tables or check the number of rows in the Oracle BI Administration tool, getting a "401--Unauthorized" error when trying to do so: RDC is not designed to allow access through the Oracle BI Administration tool, for security reasons. The repository should be created without using RDC (connecting directly via ODBC, or by other means), and then the connection type should be switched to "JDBC (JNDI)". Make sure you switch to the "Miscellaneous" tab after switching to the "JDBC (JNDI)" connection type. After these changes the repository is ready for uploading to BICS.
BI Cloud Service errors:
Query fails with a "401--Unauthorized" error:
1. Make sure you have copied the public key obtained from BI Cloud Service to $DOMAIN_HOME/rdc_keys/<deployment_name>/oracle_bics_rdc.pem. Here $DOMAIN_HOME is WebLogic's domain home and <deployment_name> is generally the name of the WAR file without the ".war" extension. For example, if you deployed obi-remotedataconnector.war, the deployment name by default is obi-remotedataconnector.
2. Another issue could be that you didn't visit the "Miscellaneous" tab after switching the connection to JDBC (JNDI). Open the Oracle BI Administration tool, switch to the Miscellaneous tab of the connection pool, and then press OK. Here "Use SQL over HTTP" should be "true".
Query fails with "Unknown Call Interface JDBC (JNDI)":
1. Make sure you don't have other connection pools that also use JDBC (JNDI) but point to an incorrect URL. All JDBC (JNDI) connections should go through valid URLs.
2. Also, before uploading the repository, switch to the Miscellaneous tab in each JDBC (JNDI) connection pool within the Oracle BI Administration tool and then save. The best way to check that everything is good is to follow the steps below:
a. Open the repository (if already open, close and reopen)
b. Right click on the connection pool
c. Select "Copy" and paste into a notepad window
d. Each connection pool of JDBC (JNDI) type should have 'use.sql.over.http'='true' in the PROPERTIES section within the text that was copied.
3. A known bug prevents JDBC (JNDI) connections from going through when a non-working URL was used in the repository. This happens even if the repository is updated with the correct URL later. Make sure you have the correct, working URL before uploading the repository.
Section 5: Limitations
Webcat lift and shift does not carry forward the webcat permissions; this will be available in upcoming releases.
Multi-currency is not supported in the current version; it will be available in upcoming releases.
The following features which are used in BIAPPS are not supported in the BICS environment: a. Condition b. Group c. KPI d. KPI Watchlist e. List Format f. Segment g. Scorecard.
The full list of affected catalog objects is below:
Name | Path | Signature
Specific Period Condition | /shared/Customer Relationship Management/Service/Role Dashboards Content/Service Executive/Common Reports/Specific Period Condition | Condition
Trailing Days Condition | /shared/Customer Relationship Management/Service/Role Dashboards Content/Service Executive/Common Reports/Trailing Days Condition | Condition
All Groups | /shared/Human Capital Management/_filters/Human Resources - Workforce Deployment/Role Dashboards/All Groups | Group
Below Top Performance | /shared/Human Capital Management/_filters/Human Resources - Workforce Deployment/Role Dashboards/Below Top Performance | Group
Global | /shared/Human Capital Management/_filters/Human Resources - Workforce Deployment/Role Dashboards/Global | Group
Top Performers | /shared/Human Capital Management/_filters/Human Resources - Workforce Deployment/Role Dashboards/Top Performers | Group
Average Negotiation Cycle Time | /shared/Procurement/Procurement Scorecard/KPIs/Customer/Average Negotiation Cycle Time | KPI
Fulfilled Requisition Lines past expected date | /shared/Procurement/Procurement Scorecard/KPIs/Customer/Fulfilled Requisition Lines past expected date | KPI
Late Receipts | /shared/Procurement/Procurement Scorecard/KPIs/Customer/Late Receipts | KPI
Processed Requisition Lines past expected date | /shared/Procurement/Procurement Scorecard/KPIs/Customer/Processed Requisition Lines past expected date | KPI
Procurement Cycle Time | /shared/Procurement/Procurement Scorecard/KPIs/Customer/Procurement Cycle Time | KPI
Unfulfilled Requisition Lines past expected date | /shared/Procurement/Procurement Scorecard/KPIs/Customer/Unfulfilled Requisition Lines past expected date | KPI
Off-Contract Spend | /shared/Procurement/Procurement Scorecard/KPIs/Financial/Off-Contract Spend | KPI
Perfect invoices | /shared/Procurement/Procurement Scorecard/KPIs/Financial/Perfect invoices | KPI
Realized Cost Savings | /shared/Procurement/Procurement Scorecard/KPIs/Financial/Realized Cost Savings | KPI
Invoice Automation | /shared/Procurement/Procurement Scorecard/KPIs/Operations/Invoice Automation | KPI
Manual Requisition Lines Rate | /shared/Procurement/Procurement Scorecard/KPIs/Operations/Manual Requisition Lines Rate | KPI
PO Transactions per Buyer | /shared/Procurement/Procurement Scorecard/KPIs/Operations/PO Transactions per Buyer | KPI
Processed Negotiation Lines | /shared/Procurement/Procurement Scorecard/KPIs/Operations/Processed Negotiation Lines | KPI
# of Suppliers per Category | /shared/Procurement/Procurement Scorecard/KPIs/Supplier/# of Suppliers per Category | KPI
% of Spend By Diversified Suppliers | /shared/Procurement/Procurement Scorecard/KPIs/Supplier/% of Spend By Diversified Suppliers | KPI
On-Time Delivery performance | /shared/Procurement/Procurement Scorecard/KPIs/Supplier/On-Time Delivery performance | KPI
Quality Performance | /shared/Procurement/Procurement Scorecard/KPIs/Supplier/Quality Performance | KPI
Returns | /shared/Procurement/Procurement Scorecard/KPIs/Supplier/Returns | KPI
Exact Match Rate | /shared/Supply Chain Management/Analytic Library/Embedded Content/Logistics/KPIs/Exact Match Rate | KPI
Hit/Miss Accuracy | /shared/Supply Chain Management/Analytic Library/Embedded Content/Logistics/KPIs/Hit\/Miss Accuracy | KPI
Inventory Value | /shared/Supply Chain Management/Analytic Library/Embedded Content/Logistics/KPIs/Inventory Value | KPI
Average Change Order Approval Time | /shared/Supply Chain Management/Analytic Library/Embedded Content/PIM/KPIs/Average Change Order Approval Time | KPI
Average Change Order Cycle Time | /shared/Supply Chain Management/Analytic Library/Embedded Content/PIM/KPIs/Average Change Order Cycle Time | KPI
Average New Item Creation Approval Time | /shared/Supply Chain Management/Analytic Library/Embedded Content/PIM/KPIs/Average New Item Creation Approval Time | KPI
Average New Item Creation Cycle Time | /shared/Supply Chain Management/Analytic Library/Embedded Content/PIM/KPIs/Average New Item Creation Cycle Time | KPI
Percentage of Shared Categories | /shared/Supply Chain Management/Analytic Library/Embedded Content/PIM/KPIs/Percentage of Shared Categories | KPI
List Export - Contacts Example | /shared/Marketing/Segmentation/List Formats/Siebel List Formats/List Export - Contacts Example | List Export
Suspect Sync Segment | /shared/Marketing/Segmentation/Segments/Suspect Sync Segment | Segment
Automate Invoice Processing | /shared/Procurement/Procurement Scorecard/Automate Invoice Processing | Scorecard Initiative
Consolidate Supplier Base | /shared/Procurement/Procurement Scorecard/Consolidate Supplier Base | Scorecard Initiative
Develop and Implement New policies to support Contract compliance | /shared/Procurement/Procurement Scorecard/Develop and Implement New policies to support Contract compliance | Scorecard Initiative
Establish and Monitor SLAs | /shared/Procurement/Procurement Scorecard/Establish and Monitor SLAs | Scorecard Initiative
Implement Internet Supplier Portal | /shared/Procurement/Procurement Scorecard/Implement Internet Supplier Portal | Scorecard Initiative
Implement Self Service Procurement Application | /shared/Procurement/Procurement Scorecard/Implement Self Service Procurement Application | Scorecard Initiative
Implement Spend Analytics | /shared/Procurement/Procurement Scorecard/Implement Spend Analytics | Scorecard Initiative
Initiatives | /shared/Procurement/Procurement Scorecard/Initiatives | Scorecard Initiative
Monitor Performance and provide regular feedback on quarterly basis | /shared/Procurement/Procurement Scorecard/Monitor Performance and provide regular feedback on quarterly basis | Scorecard Initiative
Monitor Spend and Savings on Monthly Basis | /shared/Procurement/Procurement Scorecard/Monitor Spend and Savings on Monthly Basis | Scorecard Initiative
Monitor Spend by Diversified Suppliers on monthly basis | /shared/Procurement/Procurement Scorecard/Monitor Spend by Diversified Suppliers on monthly basis | Scorecard Initiative
Reward high performing employees | /shared/Procurement/Procurement Scorecard/Reward high performing employees | Scorecard Initiative
Vision | /shared/Procurement/Procurement Scorecard/Vision | Scorecard Vision
All blogs related to BIAPPS on PAAS


BIAPPS on PAAS

BICS-DataSync use steps for OBIA on PaaS

Contents
Scope
Business Needs and Overview
1. Create the placeholder SDS schema using RCU
2. Run the python script to create the datasource in WebLogic Server and the wiring in ODI
3. Set up the source connection in BIACM
4. Run the SDS DDL generate scenario to create SDS objects in the SDS schema
5. Datasync - Configure the target-auto-columns.txt file
6. Datasync - Create connections for the target SDS schema and source OLTP in the Data Sync tool
7. Datasync - Import source tables in Data Sync
8. Datasync - Define the load strategy for tables
9. Datasync - Datasync property changes
10. Datasync - Create a job and execute it to load the data from the selected source to the target SDS schema
Pros/Cons
Disclaimer: The following is intended for information purposes only. It is not a recommendation to use Datasync to populate the Source Dependent Schema (SDS); Datasync is one of the choices available to populate the SDS, and it is entirely the customer's responsibility to make it work up to the SDS.
Scope: BIApps versions 11.1.1.10.1 and 11.1.1.10.2; Oracle BICS Data Sync 2.2; applicable for PeopleSoft, E-Business Suite, Siebel and other on-premises sources.
Business Needs and Overview: For Oracle BI Applications on PaaS, Oracle BI Applications components are deployed on different OPC offerings like PaaS, IaaS, DBaaS and BICS cloud services. OBIA-related database schemas are maintained in DBaaS. It might be required to replicate the on-premises source data to DBaaS using a replication tool like GoldenGate or Datasync. The following summary steps can be followed for using Oracle Business Intelligence Cloud Service Data Sync to move your on-premises data to DBaaS.
1. Create the placeholder SDS schema using RCU: The SDS is a separate schema, usually stored on the same database as the Oracle Business Analytics Warehouse, which contains data extracted from an OLTP schema on a separate machine. The OLTP schema is treated as the source and the SDS schema as the target of the replication tool processes which maintain the replicated SDS. To create such SDS schemas, use the Business Analytics Applications Suite RCU. This will create an empty placeholder SDS schema for the selected source.
2. Run the python script to create the datasource in WebLogic Server and the wiring in ODI: http://docs.oracle.com/applications/biapps102/install/GUID-F76321CA-24EE-4B1A-BC68-D247D5C18487.htm#BIAIN-GUID-F76321CA-24EE-4B1A-BC68-D247D5C18487
3. Set up the source connection in BIACM: Follow the Business Intelligence Applications Installation Guide.
4. Run the SDS DDL generate scenario to create SDS objects in the SDS schema. Note: For GoldenGate, also provide values for BIAPPS.UTIL_GENDDL_GG_USER_SOURCE and BIAPPS.UTIL_GENDDL_GG_USER_DW.
5. Datasync - Configure the target-auto-columns.txt file: SDS tables have a few change data capture (CDC) columns which are used for selecting the incremental dataset and for delete processing. The target-auto-columns.txt file should be used to automatically update these CDC columns of the target SDS schema tables defined in Datasync. Datasync adds all the columns declared in this file to all target tables which are imported in Datasync after configuring this file. After placing the above file, Datasync adds these CDC columns to the target table with a default value. (A quick sanity-check query for these columns is given at the end of this post.)
6. Datasync - Create connections for the target SDS schema and source OLTP in the Data Sync tool: Follow the Datasync user guide.
7.
Datasync - Import source tables in Data Sync: The source tables used in BIA may be far fewer than the total number of tables present in the source (OLTP) schema, so it is recommended that you import only the required tables in Data Sync. To create the list of tables to be imported, get the list of all the tables present in the SDS schema created in the earlier steps, either in comma-separated or one-table-per-row format. Then use that list to import those specific tables in Datasync: click the 'Data From Table' button, choose the appropriate option and provide the list of tables to be imported.
8. Datasync - Define the load strategy for tables: Datasync requires a Last Update Date (LUD) column to be present in the source table to enable the table for incremental extraction. Once a LUD column is identified, the incremental load strategy can be set by defining the key and the incremental filter on the LUD column. Note: For any change in the project, make sure to update the job by right clicking and choosing 'Update Job'.
9. Datasync - Datasync property changes: The SDS schema is created using metadata from the ODI model. Datasync might try to alter it by comparing it with the source OLTP; to avoid that, set the value of "automatically create/alter table" to false in the system properties.
10. Datasync - Create a job and execute it to load the data from the selected source to the target SDS schema: Once a job is created, disabling the Drop/Create Indices checkbox might be required, as not all the source indexes will be required for the target SDS schema; also, job tasks might fail when Datasync tries to create an index on the DSYS_ Datasync columns, as these columns might not be present in the target schema. Run the job.
Pros/Cons:
Cons/Limitations:
Datasync currently doesn't support deletes, hence a row deleted in the source will still remain in the target.
Incremental data replication is supported only if a Last Update Date column exists in the source.
For the update strategy, keys need to be defined manually.
CLOB column replication is not supported.
Pros:
Good monitoring, troubleshooting and scheduling features.
SSH movement is supported, so the data movement is encrypted.
Comes with BICS, no extra licensing.
Datasync supports replication of database views as well.
All blogs related to BIAPPS on PAAS
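As referenced in step 5, a minimal sanity check you could run after a load to confirm the CDC columns were populated in a replicated SDS table; the schema name SDS_EBS and the table PO_VENDORS are only examples, substitute your own SDS schema and table:

-- Distribution of DML codes and the most recent change timestamp in one SDS table
SELECT CDC$_DML_CODE,
       COUNT(*)                       AS row_count,
       MAX(CDC$_SRC_LAST_UPDATE_DATE) AS last_change
FROM   sds_ebs.po_vendors
GROUP BY CDC$_DML_CODE;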


BIAPPS on PAAS

BIAPPS Lift and Shift to OAC / BICS

This blog is the landing page for migrating Oracle BIAPPS content (RPD, webcat, Jazn) to OAC (Oracle Analytics Cloud) or BICS (Business Intelligence Cloud Service). For returning users: welcome back. If you are looking for the content that used to be in this blog, it has been moved to a different link based on the classification; please read further to understand the classifications. The steps for uploading BIAPPS artifacts to OAC (external compute) and BICS (internal compute) vary based upon the corresponding version of BICS or OAC, so different blogs have been published based on the latest versions available, and they are all grouped together in this blog. Note: if you are not sure about the version of BICS or OAC being used, please contact Oracle Support. All the blogs published in this series are based on the BIAPPS 10.2 version.
Links for Oracle Analytics Cloud (BIAPPS / OBIA with OAC):
OAC 17.2.1 - https://blogs.oracle.com/biapps/biapps_lift_shift_to_oac (this blog is based on BIAPPS 10.2)
OAC 17.2.1 - To use OAC with the rest of the BIAPPS versions, refer to DOC ID 2290664.1
Links for Oracle Business Intelligence Cloud Services (BIAPPS / OBIA with BICS):
BICS 16.x.x - https://blogs.oracle.com/biapps/bics16x-lift-and-shift-of-biapps-content
BICS 17.2.x - https://blogs.oracle.com/biapps/biapps_lift_shift_to_bics
All blogs related to BIAPPS on PAAS


Architecture

BIAPPS on PAAS – Source Connectivity

The white paper OBIA 11g: Oracle BI Applications Installation on Cloud Services (Doc ID 2136376.1) describes how you can install BIAPPS on PaaS, in other words OBIA on the cloud. This article explains the options available to connect to various cloud and non-cloud sources when using BIAPPS on PaaS.
Connectivity to Cloud Sources: For supported cloud sources like Fusion and Taleo, BI Apps connects to the sources via https and then loads the data into the SDS. From there the data is transformed and loaded into the warehouse (DWH).
Connectivity to On-Premises Sources: There are two options when you need to load data from on-premises sources into BI Apps on PaaS.
Option 1: Using Corente VPN. In this option, you establish a VPN connection between the Oracle Cloud PaaS machines and the on-premises source, thereby allowing two-way communication between the two. From a BIAPPS perspective, it is as though both machines are on the same network and no additional configuration is required. Diagram 2 below shows the data flow when using VPN. Refer to the Oracle Cloud documentation for more information on Corente VPN.
Option 2: Using a replication tool to load the SDS via an SSH tunnel. In this option, you use a replication tool like DataSync (which comes with BICS), Oracle GoldenGate (which has a cloud option as well) or any other mechanism to load the SDS via an SSH tunnel. The same is also shown in the above diagram. The replication client could reside on the on-premises network and be configured to pull data from the on-premises source and load the SDS in the database on Oracle DBCS (Database Cloud Service). Data is then moved from the SDS to the warehouse by the ODI ETL mappings.
Comparison between the two options:
Tunnel: Corente VPN is a bi-directional tunnel; the replication tool is uni-directional, with data flowing from on-premises to the cloud.
Source load: With Corente VPN, data is extracted at the time of the ETL load, so if load on the source is a concern the ETL has to be timed to run when load on the source is minimal; with a replication tool, replication is done outside the ETL load and can be scheduled to run at a time when the load on the source is minimal.
Footprint: Corente VPN needs an additional OCPU to host the Corente Gateway on the cloud; the replication tool will likely need its client hosted on an on-premises machine.
SDS: With Corente VPN, an SDS is not required for data from on-premises sources and data is loaded directly into the warehouse; with a replication tool, an SDS is mandatory and to that extent additional space is required.
Configuration: With Corente VPN, no additional configuration is required after installing BIAPPS and registering the source; with a replication tool, additional configuration is required to set up the replication (specify the tables to replicate, set up the replication schedule, etc.).
Mapping support: Corente VPN works with all BIAPPS mappings; with a replication tool, certain BIAPPS mappings which rely on source PL/SQL packages or views may not be supported on the SDS.
Both options have their pros and cons, and customers will have to understand them and decide which option is best suited for them. The choice rests solely with the customer, so it is best to try out the various options; most Oracle Cloud offerings (including BICS/Data Sync and Corente VPN) have trial versions that can be used for this purpose. Also note that replication to the SDS is the customer's responsibility; BIAPPS can be configured to pick up data from the SDS once it is loaded.
Disclaimer: Refer to the latest BIAPPS and Oracle Cloud documentation, as things might have changed since this blog was written. All blogs related to BIAPPS on PAAS


Cloud Adaptor

Including Fusion-on-Cloud Extensions in BIAPPS - Part 2

Authors: "Guna Vasireddy - Oracle" and "Yadukrishna Munugoti - Oracle " The blog explains how to get Fusion DFFs into BI Apps when Fusion is On Premise. This blog explains how you can do the same if Fusion Applications is on Cloud. When using BI Apps for Fusion (Cloud adaptor) , certain Flex VOs are already seeded out of the box. The steps to follow for getting Fusion Flex Extensions would depend on whether the Flex VO is already seeded for extraction Out of the Box (OOTB) or not. The first blog on this topic covered the steps for cases where the VO is already seeded. This blog covers the use-case where the VO is not seeded. Extract and Load a Custom Flex VO (VO not seeded): When using BI Apps for Fusion (Cloud adaptor) , certain Flex VOs are already seeded out of the box. The steps to follow for getting Fusion Flex Extensions into BI Apps would depend on whether the Flex VO is already seeded or not. The below steps apply when the VO is not seeded by default (so the VO does not get extracted by  default). Step-wise instructions as below: Do the required extensions in Fusion Applications Run BI Extender to get the VO/Flexfields into RPD. Documentation (Section 2.3.6). Note if the VO/Columns do not come into the RPD, then we cannot extract data from it. Enable extraction for that custom VO into BICC. BICC does a select * for such VOs which are added as custom VOs. So if the VO is very wide and you are having extraction issues, consider disabling certain fields as BI enabled (refer Documentation (Section 2.3.5) on how to disable and run BI extender to make the changes effective). (Note: if you turn off BI Enabled, then the column will not appear in OTBI reports as well). Run an extract for that VO. The mdcsv file that is generated will help define the VO/Columns in ODI. Refer to BICC documentation for more detail. On ODI side, we need to essentially load the flat files ( data csv file and the .pecsv files) into the SDS and then take the flow all the way to the warehouse. Create the VO in the ODI BI Sever model (for example " Oracle Fusion XX HCM"). Tip: In Fusion 10, the extractor provides mdcsv files which will contain the metadata of the VO columns. Refer to that mdcsv file for metadata (Column Name, Datatypes etc) when you are manually adding the columns in ODI. Do not add the column flexfields (Like Column Short Name), when adding the columns. The RKM step (step #3) below will take care of that. You will need to add all columns that appear in the mdcsv file.  Define Primary key on Data stores. Also update the ODI key flex field values with the count of number of key columns that you have defined under the Key. So for example, if the primary key is based on 2 columns, set the flexfield(ODI Number of Unique Columns) against the ODI Constraint to 2 Run the RKM against BI Server model in "REFRESH_SDS_FLEXFIELDS" mode to populate the VO/Column ODI flexfields. Choose introspect_source =false since we can't connect to the source directly. Set Mask property as relevant. For Example for the VO that is ending with FLEX_BI_BaseWorkerAsgDFF_VI, the property would be set to “%.FLEX_BI_BaseWorkerAsgDFF_VI”  as shown below. This is to prevent attempting to refresh the model for all the VOs inadvertently. Create File data store under corresponding Oracle Fusion XX Replication Stage Files model which is a FILE technology model, by  duplicating newly created VO definition under Oracle Fusion XX HCM model and moving it to the Oracle Fusion XX Replication Stage Files model. 
By doing it this way, you can retain all the required data store flexfields. Example: duplicating the newly created VO definition under the Oracle Fusion XX HCM model. The screenshot below shows it for one CRM VO. Move the duplicated object from the Oracle Fusion XX HCM model to the Oracle Fusion XX Replication Stage Files model as shown below.
Set the file data store name and alias appropriately. In the example below they are set to FILE_CRMANALYTICSAM_ACTIVITIESAM_ACTIVITY, with the alias FILE_CRMANALYTICSAM_ACTIVITIESAM_ACTIVITY. This will change as per the VO name. Set the resource name to file_crmanalyticsam_activitiesam_activity#CLOUD_FILE_BATCH_ID.csv (please note that the file resource name should be in lowercase). Go to the Files tab and update the format section as shown below. Go to the Columns tab and make sure that every column's data type is set to String and the length is set to 4000 as shown below. This is only for the File VO and not the actual VO. Save the changes.
Create a copy of an existing FTS mapping folder and then edit the interface, package etc. to point to the new VO. Make sure you copy the FTS map from the right adaptor version: if you are loading a custom VO for Fusion 10, take the copy of a map from the Fusion 10 adaptor only, not Fusion 9. The FTS interface is a one-to-one mapping; refer to an existing interface and create the new one accordingly. The package will also include a call to an ODI procedure to load the PE csv file (primary extract, or deleted-records file); change the parameters being passed to it accordingly. The screenshot below shows the steps to change in the package. You will first need to create the interface before changing the package. As per the customization guidelines, package and scenario names should end with the word "CUSTOM". For example, if you are creating a package and scenario for a custom VO "CrmAnalyticsAM.ActivitiesAM.Activity", then your package name should be "FTS_FUSION_CrmAnalyticsAM.ActivitiesAM.Activity_Custom" and your scenario name should be "SDE_FUSION_10_ADAPTOR_FTS_FUSION_CRMANALYTICSAM_ACTIVITIESAM_ACTIVITY_CUSTOM".
At this point it is like a regular customization. Create or customize SDE/SIL/PLP mappings to take the new VO/columns through to the warehouse tables. Run the Generate SDS DDL to create the table in the SDS (this is also part of the regular load plan, and the table will get created then if you don't explicitly run it; a quick check you can run against the SDS is shown at the end of this post). Include the custom maps in the generated load plan at the appropriate places (i.e. FTS, SDE, SIL etc.). No additional steps are required to download the files from UCM; all files that are extracted and present in UCM will automatically get downloaded when running the ODI load plan. So, assuming you have included the custom VO/column for extract, nothing else needs to be done for the download. Refer to the BIAPPS Customization Documentation for generic guidelines around customization. Refer to Configure Fusion Source in the BIAPPS Installation Guide for details on the Flex VOs that are supported out of the box (this varies with the BIAPPS release, so ensure you are looking up the right release). The third blog in this series explains how to troubleshoot common issues when trying to work with Fusion Flex extensions.
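To confirm that the Generate SDS DDL step actually created a table for the new VO in the SDS schema, a simple data dictionary check like the one below can be run against the warehouse database; the schema name SDS_FUSION and the name pattern are placeholders, substitute your own SDS schema and the table name ODI generated for your VO:

-- Confirm the SDS table generated for the custom VO exists
SELECT owner, table_name
FROM   all_tables
WHERE  owner = 'SDS_FUSION'            -- your SDS schema (placeholder)
AND    table_name LIKE '%ACTIVITY%';   -- adjust to the generated table name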

Authors: "Guna Vasireddy - Oracle" and "Yadukrishna Munugoti - Oracle " The blogexplains how to get Fusion DFFs into BI Apps when Fusion is On Premise. This blog explains how you can do the same if...

Cloud Adaptor

Including Fusion-on-Cloud Extensions in BIAPPS

Authors: "Guna Vasireddy - Oracle" and "Yadukrishna Munugoti - Oracle " The blog explains how to get Fusion DFFs into BI Apps when Fusion is On Premise. This blog explains how you can do the same if Fusion Applications is on Cloud. When using BI Apps for Fusion (Cloud adaptor), certain Flex VOs are already seeded out of the box. The steps to follow for getting Fusion Flex Extensions into BIAPPS for such VOs are as below. Extract and Load Fusion Custom Flex Extensions(VO already seeded): Note:This is only for cases where the VO is already seeded. Do the required extensions in Fusion Run BI Extender to get the VO/Flexfields into RPD. Documentation (Section 2.3.6). Note if the VO/Columns do not come into the RPD, then we cannot extract data from it. Ensure extraction for that Flex VO is enabled in BICC. BICC does a select * for such VOs. So if the VO is very wide and you are having extraction issues, consider disabling certain fields as BI enabled. Run an extract for that VO. Refer to BICC documentation for more details. If the VO is already seeded and support exists for that VO in ODI for that Fusion Release, then the flex columns will get auto-extended and mapped both in ODI and OBIEE. Nothing additional needs to be done. Refer to BIAPPS Configure Fusion Source in BIAPPS Installation Guide for details around Flex VOs that are supported out of the box.(this varies with BIAPPS release, so ensure you are looking up the right release). If the VO you need is not seeded out of the box, certain additional steps will need to be performed. The second blog in this series covers this use-case. The third blog in this series explains how to troubleshoot common issues when trying to  work with Fusion Flex Extensions.  

Authors: "Guna Vasireddy - Oracle" and "Yadukrishna Munugoti - Oracle " The blog explains how to get Fusion DFFs into BI Apps when Fusion is On Premise. This blog explains how you can do the same...

ETL

Monitoring BIAPPS ETL via Oracle Enterprise Manager

Monitor ETL Status
Monitor ETL Duration Phase-wise
Sample Alerts
Background: The BIAPPS ETL, or load plan, can be monitored from the ODI clients or BIACM. However, what if you wanted to automate the monitoring? Is there a way to detect that the ETL is running? Is there a way to be notified if a particular ETL phase (like SDE/SIL/PLP) is taking longer than expected? This blog shows how it is possible using the latest BIAPPS version 11.1.1.10.2 and Oracle Database Enterprise Manager (OEM), which comes with the database. There is a new table, starting with the BIAPPS 10.2 release, called W_ETL_STATE. The following columns in that table are relevant for this blog (an ad-hoc query on this table is shown at the end of this post):
PROCESS_TYPE - Will contain ETL for ETL runs.
STATE_CODE - Represents the current state of the process. Contains one of the following: STARTED, FAILED, COMPLETED.
SUCC_PHASE_CODE - Represents the last successfully completed phase of the process. E.g. if SUCC_PHASE_CODE='SDS_COMPLETED' and STATE_CODE='FAILED', it implies that the ETL failed after the SDS was successfully loaded.
This table is automatically populated by the BIAPPS load plan (assuming it is a generated load plan and not a custom-built one). We can write a query on this table and use it to achieve the desired monitoring.
Monitor ETL Status: User-defined metric for ETL status: warning when the ETL is running and critical when the ETL fails. Follow the steps below to create a user-defined metric in Oracle Database Enterprise Manager (OEM).
1) Open the User Defined Metric screen from OEM (see screenshot below).
2) In the screen that opens, choose Create to create a new metric and then populate the values as shown below.
The SQL to be entered is as below:
SELECT 'ETL_STATUS',
  CASE WHEN STATE_CODE='STARTED' THEN CURRENT_ETL_PHASE
    ELSE STATE_CODE END ETL_STATUS
FROM (SELECT STATE_CODE,
    CASE WHEN SUCC_PHASE_CODE='INIT_COMPLETED' THEN 'RUNNING'
         WHEN SUCC_PHASE_CODE='SDS_COMPLETED' THEN 'RUNNING'
         WHEN SUCC_PHASE_CODE='SDE_COMPLETED' THEN 'RUNNING'
         WHEN SUCC_PHASE_CODE='SIL_COMPLETED' THEN 'RUNNING'
         WHEN SUCC_PHASE_CODE='PLP_COMPLETED' THEN 'LOAD_DONE' END AS CURRENT_ETL_PHASE
FROM W_ETL_STATE);
Define the threshold as below. Set up a schedule as per your preference. Once you have defined the metric and the schedule is active, it will show up on your home screen whenever the ETL is running or has FAILED.
Monitor ETL Duration Phase-wise: User-defined metric for phase-wise ETL duration: warning when any phase takes over x hours and critical when an ETL phase takes over y hours. Follow the steps below to create the required user-defined metric in Oracle Database Enterprise Manager (OEM).
1) Open the User Defined Metric screen from OEM (see screenshot below).
2) In the screen that opens, choose Create to create a new metric and then populate the values as shown below.
The SQL to be entered is as below:
SELECT 'SDS_DURATION',
   CASE WHEN SUCC_PHASE_CODE='INIT_COMPLETED' THEN (SYSDATE-W_UPDATE_DT)*24
   ELSE -1 END AS SDS_DURATION_HOURS
FROM W_ETL_STATE
UNION ALL
SELECT 'SDE_DURATION',
   CASE WHEN SUCC_PHASE_CODE='SDS_COMPLETED' THEN (SYSDATE-W_UPDATE_DT)*24
    ELSE -1 END AS SDE_DURATION_HOURS
FROM W_ETL_STATE
UNION ALL
SELECT 'SIL_DURATION',
   CASE WHEN SUCC_PHASE_CODE='SDE_COMPLETED' THEN (SYSDATE-W_UPDATE_DT)*24
    ELSE -1 END AS SIL_DURATION_HOURS
FROM W_ETL_STATE
UNION ALL
SELECT 'PLP_DURATION',
   CASE WHEN SUCC_PHASE_CODE='SIL_COMPLETED' THEN (SYSDATE-W_UPDATE_DT)*24
    ELSE -1 END AS PLP_DURATION_HOURS
FROM W_ETL_STATE;
Define the threshold as required.
Set up a schedule as per your preference. Once you have defined it, it will show up on your home screen. Whenever any ETL phase takes over 2 hours it will raise a warning, and when it exceeds 3 hours it will raise a critical alert. Note: The 2 hours and 3 hours shown in this blog are only examples; you can change them as per your requirement. You can also set up individual alerts for each phase, each having a different value for warning and critical. In this example, each phase is expected to run in under 2 hours.
Sample Alerts: ETL failed and SIL duration is over 188 hours, so both are raised as critical. ETL running (so warning) and SDE duration is over 189 hours (so critical). Note: Once a warning/alert is raised, it is not raised again until the value changes. You can also see the history of alerts from Oracle Database EM. This opens a screen as below which shows the history. A similar screen with history can be seen for the ETL_STATUS metric.
Note: BIAPPS 10.1 customers can apply Patch 25196751, available on the MOS site, and then follow the instructions in this blog to achieve the same functionality. No patch is required if you are on BIAPPS 10.2, as this functionality is available out of the box.
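Outside OEM, the same table can also be checked ad hoc from any SQL client; below is a minimal query using only the columns described above, to be run against the warehouse schema that owns W_ETL_STATE:

-- Current state of the ETL as recorded by the generated load plan
SELECT PROCESS_TYPE,
       STATE_CODE,        -- STARTED / FAILED / COMPLETED
       SUCC_PHASE_CODE,   -- last phase that finished successfully
       W_UPDATE_DT        -- when the state was last updated
FROM   W_ETL_STATE
WHERE  PROCESS_TYPE = 'ETL';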


Analytic Applications

Cross Referencing Taleo Recruiting Data With HCM

Applies to OTBI-Enterprise Version 2 PB 10
TABLE OF CONTENTS
Business Needs and Overview
Feature Overview
Supported Use Cases
Oracle HCM Cloud and Oracle Talent Cloud (Taleo) Integration using the OOTB Integration Product
How to set up?
Configure the required parameters
Custom Integration of Oracle Talent Cloud (Taleo) with Any HCM System
How to set up?
Configure the required parameters
Provide Cross Reference data to OTBI-E
Download templates and prepare your Cross Reference Data Files
Understand and Prepare your Cross Reference Data Files
file_tlo_appl_empl_xref_ps.csv
file_tlo_busn_location_xref_ps.csv
file_tlo_dept_int_org_xref_ps.csv
file_tlo_int_org_xref_ps.csv
file_tlo_job_xref_ps.csv
file_tlo_user_empl_xref_ps.csv
Upload your Cross Reference Data Files
Upload your Cross Reference Data Files using Automation
Configuration and Implementation Tips
Business Needs and Overview: The Oracle Talent Cloud (Taleo) Recruiting module uses some basic OLF data (Organization, Location, Job) from the HCM system to get on with its business. More often than not, the HCM system represents the master source of data for such items. You would pre-populate Taleo Recruiting with all potential Organizations/Departments, Locations and Job Codes from your HCM system. These are the items your recruiters or hiring managers will "select" (and not "enter") when they create a Job Requisition in Taleo. In terms of data, we essentially have the same data set in two different systems. Candidates usually come straight into Taleo Recruiting through the portal. Some get dropped during the recruitment cycle, while some may be selected for on-boarding. For those selected for hire, Taleo sends the candidate's information back to the HCM system, where the hire process kicks in. With this, we end up having the same person in two different systems, again. These data can get changed in the individual systems, and without a proper master data management mechanism there is no way one can get a 360-degree view of the data in its "master" form. We need to be able to process the knowledge that the "Candidate X" who was interviewed in the Taleo system is essentially the "Employee Y" in our HCM system. Similarly, the Job "S/W Dev 1" in the Taleo system is essentially the same as the Job "Software Developer IC1" in our HCM system. In the reporting layer, we should not need to worry about how to tie up these data that came from different sources. Our reporting should always automatically (unless chosen otherwise) show the master attributes of these entities, and the metrics should add up to this master data.
Feature Overview: OTBI-Enterprise offers a unique feature to cross reference your master data when working with a given source's data. In short, the way it works is this: a) OTBI-E stores the master-to-source relation at its most granular level. This is the biggest challenge and we will discuss it later in our use cases. b) Load the data warehouse normally, as a multi-source ETL. c) As post-load steps, enrich the facts to refer to the master dimension data. There is no difference in what you see in the final reporting layer (OBIEE Answers). You use OTBI-E as usual, without paying any attention whatsoever to master data. You pull your data elements into your report and rest assured that what you see is the synched-up data between your HCM system and Taleo Recruiting (obviously, provided the Cross Reference feature was enabled to start with).
For the OTBI-E Recruiting subject area, the following are the dimensions that can potentially go through this Cross Referencing process:
· Employee/Candidate
· User
· Organization/Department
· Location
· Job
· Assignment
· HR Person/HR Person Legislation
You can use any of the above dimensions in your report and take advantage of the feature. On top of these, you can always use additional (and important) dimensions such as "Job Requisitions", "Recruitment Events" and/or "Recruitment Source" seamlessly. However, since these dimensions are mastered only in your Taleo Recruiting source, they do not have to go through the cross referencing process.

The success of the entire feature depends on step (a) above: storing the Master-to-Source relation for each entity at the most granular level. In the following sections on supported use cases, we will touch upon that topic and later go into more detail on what this really involves.

Supported Use Cases

At a high level, there are two major supportable use cases. Oracle HCM Cloud (Fusion HCM) has an Out-Of-The-Box (OOTB) integration product at the application level that talks to Oracle Talent Cloud (Taleo). This product allows those initial "pushes" of seed data (mostly OLF) from Fusion HCM into Taleo. Furthermore, the on-boarding process also allows the Taleo Candidate to be "hired" in Fusion HCM as a reverse push from Taleo into Fusion HCM. Finally, that candidate (now an employee in Fusion HCM) gets pushed again from Fusion HCM into Taleo to update its property as an "Internal Employee Candidate". So, the first use case is this.

1. Support those Fusion HCM and Taleo customers that use the OOTB integration product.

The flip side covers those cases where this OOTB product is not usable for a variety of reasons (a no-fit, or a Taleo-first-Fusion-second kind of situation), or even the cases where Taleo is your recruiting cloud solution but your HCM source isn't Fusion HCM – say it is PeopleSoft, E-Business Suite or something else. Whatever the case, in order to get going with Taleo you must have propagated that OLF data from your HCM source into Taleo, and you must have adopted business processes that kick in the "Hire" event in your HCM source once a candidate is selected for hire in Taleo. And so on. To do all this, you didn't take advantage of an Oracle-provided integration product (because there is none) – you did it yourself. Therefore, the second use case is this.

2. Support all other customers who use a custom approach to HCM – Taleo integration. Note that this may also include you, a Fusion HCM customer, if you didn't employ the OOTB integration solution.

We will now go over these two use cases one at a time. The complexities differ, and so does the configuration that you carry out in OTBI-Enterprise.

Oracle HCM Cloud and Oracle Talent Cloud (Taleo) Integration using the OOTB Integration Product

So you are a Fusion HCM customer and were able to make use of the OOTB integration product to integrate your Fusion HCM with your Taleo Recruiting. Nice! More than half the work is already done. OTBI-E already knows about the Master-to-Source relations and you don't need to tell us anything about them. We will pull in things as necessary. What else do you need to configure? Just a few items, as described below.

How to set up?

There are three parameters you need to configure, and that is all.
They are:

Configure the required parameters

CROSS_REFERENCE_ENABLED
This parameter controls the cross reference feature. If, for whatever reason, you do not wish to enable the feature, keep it at "N". Otherwise, switch it to "Y". This decision doesn't have to be taken at the very first load; you can choose to turn it on at a later point as well. However, remember that once it is turned ON (that is, set to "Y") there is no turning back. As mentioned, OTBI-Enterprise will have already enriched your fact tables to refer to the master dimensions, and this cannot be undone. If you later choose to disable the feature, you need to carry out a "Reset Data Warehouse" – in other words, a full load.

TLO_INTEGRATED_HCM_DSN (Taleo Integrated HCM Data Source Number)
When you registered your Fusion HCM source in BIACM, you picked a Data Source Number that represents this source. Put that same number here. This is how you indicate to OTBI-Enterprise that your Fusion HCM source is the "master" source. Obviously, the Taleo source becomes the subordinate one, but you don't have to say so; OTBI-Enterprise will assume that. If CROSS_REFERENCE_ENABLED was set to "N" in the previous step, it doesn't matter what you set for this parameter. However, if CROSS_REFERENCE_ENABLED is "Y", OTBI-Enterprise needs a valid Data Source Number here.

TLO_CUSTOM_INTEGRATION (Taleo Custom Integration)
This parameter indicates that you do not use the OOTB Fusion-Taleo integration product and that you have done the application layer integration yourself. For the current use case, set this to "N". The default is "N", so just leave it as is.

Custom Integration of Oracle Talent Cloud (Taleo) with Any HCM System

So you don't use the Fusion-Taleo integration product at the application layer. You could be a Fusion HCM customer who didn't want to use that OOTB product, or a PeopleSoft HCM or E-Business Suite HRMS customer using Taleo for your recruiting. Or your HCM source is something completely non-Oracle (SAP, Workday, home-grown, whatever). But since you are running your business with Taleo Recruiting, you must have pushed that OLF data into Taleo and developed a process to turn "on-boarding" in Taleo into a "hire" in your HCM source. In other words, you know how to relate entities between your HCM source and your Taleo source. That is exactly what OTBI-Enterprise needs from you as data feeds. In addition, you set the same three parameters as before. We go over the details below.

How to set up?

Let's get the easy part out of the way first – setting up the parameters.

Configure the required parameters

CROSS_REFERENCE_ENABLED
This parameter controls the cross reference feature. If, for whatever reason, you do not wish to enable the feature, keep it at "N". Otherwise, switch it to "Y". This decision doesn't have to be taken at the very first load; you can choose to turn it on at a later point as well. However, remember that once it is turned ON (that is, set to "Y") there is no turning back. As mentioned, OTBI-Enterprise will have already enriched your fact tables to refer to the master dimensions, and this cannot be undone. If you later choose to disable the feature, you need to carry out a "Reset Data Warehouse" – in other words, a full load.
TLO_INTEGRATED_HCM_DSN (Taleo Integrated HCM Data Source Number)
When you registered your HCM source (E-Business Suite, PeopleSoft or Universal) in BIACM, you picked a Data Source Number that represents this source. Put that same number here. This is how you indicate to OTBI-Enterprise that your HCM source is the "master" source. Obviously, the Taleo source becomes the subordinate one, but you don't have to say so; OTBI-Enterprise will assume that. If CROSS_REFERENCE_ENABLED was set to "N" in the previous step, it doesn't matter what you set for this parameter. However, if CROSS_REFERENCE_ENABLED is "Y", OTBI-Enterprise needs a valid Data Source Number here.

TLO_CUSTOM_INTEGRATION (Taleo Custom Integration)
This parameter indicates that you do not use the OOTB Fusion-Taleo integration product and that you have done the application layer integration yourself. For the current use case, set this to "Y". The default is "N", so make sure you change it to "Y".

Provide Cross Reference data to OTBI-E

Providing the cross reference data (the master-to-source relation) is the challenging part here, but there is no way around it. OTBI-Enterprise needs to know the master entity ID for each of your source entity IDs. The steps themselves are not too difficult. There are currently six CSV files covering all the dimensions for which OTBI-Enterprise supports cross referencing today. These are more "configuration files" than "data files" – although technically you can argue the other way round. You download these configuration (template) files for your Taleo source from BIACM, add data to them, and upload them back. As and when your cross reference data changes, you re-download, edit and re-upload the files.

Note that you don't need to go overboard and provide cross reference data for all files in the first go. You can stagger the file delivery at times comfortable to you. The same holds for corrections to already-provided cross reference data: you can send them in as and when you find time. But until OTBI-Enterprise receives the correct cross reference data set, it cannot do proper enrichment of the facts. You might see two entities that are really the "same" appear as though they were "different" in OTBI-Enterprise. The moment OTBI-Enterprise knows they are the same, they will be merged into one. Detailed steps below.

Download templates and prepare your Cross Reference Data Files

Log into BIACM. Go to System Setups -> Define Business Intelligence Applications Instance. Select your Taleo source (say Taleo 14B, as in this case) and click on the Actions menu. Click on Manage Source Configuration Data Files. You should see the "Manage Source Configuration Data Files" screen, as below. Click on the "Export Source Configuration Data Files" button at the top. It displays the six files on the left side. Move them to the right, and click on "Export". This exports the six files as a zip into your local downloads folder. Unzip the six files, as below:
1. file_tlo_appl_empl_xref_ps.csv
2. file_tlo_busn_location_xref_ps.csv
3. file_tlo_dept_int_org_xref_ps.csv
4. file_tlo_int_org_xref_ps.csv
5. file_tlo_job_xref_ps.csv
6. file_tlo_user_empl_xref_ps.csv
These are the six files in which you need to provide the cross reference data to OTBI-Enterprise. To learn more about the content of these files, go through the following section.
Understand and Prepare your Cross Reference Data Files

Depending on your master source (E-Business Suite, PeopleSoft, or Fusion), look carefully at the description of the "MSTR_*" columns for that source. The "SRC_*" columns relate to Taleo – these are your source entities and follow Taleo terminology (Candidate Number, and so on). Go through the file and column descriptions of the six cross reference CSV files below. Be very clear about what you provide here, as your reporting experience is going to depend heavily on it. (In the listings below, DSN stands for Data Source Number; "Required" corresponds to the Reqd. flag in the original specification.)

file_tlo_appl_empl_xref_ps.csv
Description: Stores the Taleo Candidate to Master Employee reference.
1. SRC_DATASOURCE_NUM_ID (Source DSN), Numeric(10), Required – Oracle Talent Cloud (Taleo) DSN as configured during BIACM source registration.
2. SRC_CANDIDATE_NUMBER (Source Candidate Number), Numeric(20), Required – Oracle Talent Cloud (Taleo) Candidate Number.
3. MSTR_DATASOURCE_NUM_ID (Master DSN), Numeric(10), Required – DSN of the master HCM source (PSFT, EBS, or Fusion), as configured during BIACM source registration.
4. MSTR_PERSON_OR_EMPL_ID (Master Person or Employee ID), String(80), Required – tracks the master information; its meaning differs by source:
   Oracle PeopleSoft: PS_PERSON.EMPLID
   Oracle E-Business Suite: PER_ALL_PEOPLE_F.PERSON_ID
   Oracle Fusion: FscmTopModelAM.PersonAM.GlobalPersonForETLPVO.PersonId
5. MSTR_PRIMARY_ASSIGNMENT_ID (Master Primary Assignment ID), String(80), Required – tracks the master information; its meaning differs by source:
   Oracle PeopleSoft: PS_JOB.EMPLID||'~'||PS_JOB.EMPL_RCD (note that the EMPL_RCD should correspond to the primary assignment, JOB_INDICATOR = 'P' in the same table)
   Oracle E-Business Suite: PER_ALL_ASSIGNMENTS_F.ASSIGNMENT_ID
   Oracle Fusion: HcmTopModelAnalyticsGlobalAM.AssignmentAM.AssignmentTempPVO.AssignmentId
6. MSTR_PRIMARY_LEGISLATION_CODE (Master Primary Legislation Code), String(80), Required – tracks the master information; its meaning differs by source:
   Oracle PeopleSoft: PS_JOB.REG_REGION
   Oracle E-Business Suite: HR_ORGANIZATION_INFORMATION.ORG_INFORMATION9
   Oracle Fusion: HcmTopModelAnalyticsGlobalAM.AnalyticsServiceAM.TerritoriesPVO.TerritoryCode

file_tlo_busn_location_xref_ps.csv
Description: Stores the Taleo Location to Master Location reference.
1. SRC_DATASOURCE_NUM_ID (Source DSN), Numeric(10), Required – Oracle Talent Cloud (Taleo) DSN as configured during BIACM source registration.
2. SRC_LOCATION_NUMBER (Source Location Number), Numeric(20), Required – Oracle Talent Cloud (Taleo) Location Number.
3. MSTR_DATASOURCE_NUM_ID (Master DSN), Numeric(10), Required – DSN of the master HCM source (PSFT, EBS, or Fusion), as configured during BIACM source registration.
4. MSTR_LOCATION_ID (Master Location ID), String(80), Required – tracks the master information; its meaning differs by source:
   Oracle PeopleSoft: PS_LOCATION_TBL.SETID || '~' || PS_LOCATION_TBL.LOCATION
   Oracle E-Business Suite: HR_LOCATIONS_ALL.LOCATION_ID
   Oracle Fusion: HcmTopModelAnalyticsGlobalAM.LocationAM.HRLocationsPVO.LocationPEOLocationId
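To make the candidate cross reference layout above concrete, here is a minimal, purely illustrative data row for file_tlo_appl_empl_xref_ps.csv, assuming a PeopleSoft master. The DSNs (310 for Taleo, 210 for PeopleSoft), the candidate number and the EMPLID are made-up values; follow whatever header and formatting conventions the exported template itself uses:

SRC_DATASOURCE_NUM_ID,SRC_CANDIDATE_NUMBER,MSTR_DATASOURCE_NUM_ID,MSTR_PERSON_OR_EMPL_ID,MSTR_PRIMARY_ASSIGNMENT_ID,MSTR_PRIMARY_LEGISLATION_CODE
310,4588123,210,KU0042,KU0042~0,USA

The row says: Taleo candidate 4588123 (source DSN 310) is the same person as PeopleSoft employee KU0042 (master DSN 210), whose primary assignment is EMPL_RCD 0 and whose primary legislation is USA.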
file_tlo_dept_int_org_xref_ps.csv
Description: Stores the Taleo Department to Master Department reference.
1. SRC_DATASOURCE_NUM_ID (Source DSN), Numeric(10), Required – Oracle Talent Cloud (Taleo) DSN as configured during BIACM source registration.
2. SRC_DEPARTMENT_NUMBER (Source Department Number), Numeric(20), Required – Oracle Talent Cloud (Taleo) Department Number.
3. MSTR_DATASOURCE_NUM_ID (Master DSN), Numeric(10), Required – DSN of the master HCM source (PSFT, EBS, or Fusion), as configured during BIACM source registration.
4. MSTR_INT_ORG_ID (Master Internal Organization ID), String(80), Required – tracks the master information; its meaning differs by source:
   Oracle PeopleSoft: PS_DEPT_TBL.SETID||'~'||PS_DEPT_TBL.DEPT_ID
   Oracle E-Business Suite: HR_ALL_ORGANIZATION_UNITS.ORGANIZATION_ID
   Oracle Fusion: FscmTopModelAM.OrganizationAM.OrganizationUnitPVO.OrganizationId

file_tlo_int_org_xref_ps.csv
Description: Stores the Taleo Organization to Master Organization reference.
1. SRC_DATASOURCE_NUM_ID (Source DSN), Numeric(10), Required – Oracle Talent Cloud (Taleo) DSN as configured during BIACM source registration.
2. SRC_ORGANIZATION_NUMBER (Source Organization Number), Numeric(20), Required – Oracle Talent Cloud (Taleo) Organization Number.
3. MSTR_DATASOURCE_NUM_ID (Master DSN), Numeric(10), Required – DSN of the master HCM source (PSFT, EBS, or Fusion), as configured during BIACM source registration.
4. MSTR_INT_ORG_ID (Master Internal Organization ID), String(80), Required – tracks the master information; its meaning differs by source:
   Oracle PeopleSoft: PS_BUS_UNIT_TBL_HR.BUSINESS_UNIT
   Oracle E-Business Suite: HR_ALL_ORGANIZATION_UNITS.ORGANIZATION_ID
   Oracle Fusion: FscmTopModelAM.OrganizationAM.OrganizationUnitPVO.OrganizationId

file_tlo_job_xref_ps.csv
Description: Stores the Taleo Job Information to Master Job reference.
1. SRC_DATASOURCE_NUM_ID (Source DSN), Numeric(10), Required – Oracle Talent Cloud (Taleo) DSN as configured during BIACM source registration.
2. SRC_JOB_INFO_NUMBER (Source Job Information Number), Numeric(20), Required – Oracle Talent Cloud (Taleo) Job Information Number.
3. MSTR_DATASOURCE_NUM_ID (Master DSN), Numeric(10), Required – DSN of the master HCM source (PSFT, EBS, or Fusion), as configured during BIACM source registration.
4. MSTR_JOB_ID (Master Job ID), String(80), Required – tracks the master information; its meaning differs by source:
   Oracle PeopleSoft: PS_JOBCODE_TBL.SET_ID || '~' || PS_JOBCODE_TBL.JOB_CODE
   Oracle E-Business Suite: PER_JOBS.JOB_ID
   Oracle Fusion: HcmTopModelAnalyticsGlobalAM.JobAM.JobPVO.JobId
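A single made-up row may also help illustrate the composite-key convention for PeopleSoft masters in file_tlo_job_xref_ps.csv; the DSNs, job information number, SETID and job code below are invented (an EBS or Fusion master would simply carry the numeric JOB_ID or JobId instead):

SRC_DATASOURCE_NUM_ID,SRC_JOB_INFO_NUMBER,MSTR_DATASOURCE_NUM_ID,MSTR_JOB_ID
310,9023,210,SHARE~410020

Here Taleo job information 9023 maps to PeopleSoft job code 410020 under SETID SHARE, written as SETID~JOB_CODE exactly as the column description above specifies.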
file_tlo_user_empl_xref_ps.csv
Description: Stores the Taleo User to Master Employee reference.
1. SRC_DATASOURCE_NUM_ID (Source DSN), Numeric(10), Required – Oracle Talent Cloud (Taleo) DSN as configured during BIACM source registration.
2. SRC_USER_NUMBER (Source User Number), Numeric(20), Required – Oracle Talent Cloud (Taleo) User Number.
3. MSTR_DATASOURCE_NUM_ID (Master DSN), Numeric(10), Required – DSN of the master HCM source (PSFT, EBS, or Fusion), as configured during BIACM source registration.
4. MSTR_PERSON_OR_EMPL_ID (Master Person or Employee ID), String(80), Required – tracks the master information; its meaning differs by source:
   Oracle PeopleSoft: PS_PERSON.EMPLID
   Oracle E-Business Suite: PER_ALL_PEOPLE_F.PERSON_ID
   Oracle Fusion: FscmTopModelAM.PersonAM.GlobalPersonForETLPVO.PersonId
5. MSTR_PRIMARY_ASSIGNMENT_ID (Master Primary Assignment ID), String(80), Required – tracks the master information; its meaning differs by source:
   Oracle PeopleSoft: PS_JOB.EMPLID||'~'||PS_JOB.EMPL_RCD (note that the EMPL_RCD should correspond to the primary assignment, JOB_INDICATOR = 'P' in the same table)
   Oracle E-Business Suite: PER_ALL_ASSIGNMENTS_F.ASSIGNMENT_ID
   Oracle Fusion: HcmTopModelAnalyticsGlobalAM.AssignmentAM.AssignmentTempPVO.AssignmentId
6. MSTR_USER_ID (Master User ID), String(80), Required – tracks the master information; its meaning differs by source:
   Oracle PeopleSoft: PSOPRDEFN.OPRID
   Oracle E-Business Suite: FND_USER.USER_ID
   Oracle Fusion: HcmTopModelAnalyticsGlobalAM.UserAM.UserPVO.UserHistoryPEOUsername

Once ready, fill out these six CSVs. The next step is to upload them back to OTBI-Enterprise.

Upload your Cross Reference Data Files

The upload navigation is the same as the download. Log into BIACM. Go to System Setups -> Define Business Intelligence Applications Instance. Select your Taleo source (say Taleo 14B, as in this case) and click on the Actions menu. Click on Manage Source Configuration Data Files, then click on the "Import Source Configuration Data Files" menu at the top. The dialog that follows lets you import the files one at a time. You can either zip up all the files and upload one zip file, or upload the individual files separately. Upload/Import the six files.

Upload your Cross Reference Data Files using Automation

Speaking of change management for the cross reference data: information such as Job, Department or Organization will probably not change all that often. However, the Candidate-to-Employee cross reference and the User cross reference are examples that can potentially be fast changing, and downloading or uploading the corresponding CSV files manually may not be optimal. OTBI-Enterprise BIACM offers some automation around this using web services. If you can program the preparation of the CSV files (managing the change captures and so on) at your end, you can use the BIACM web services in your program to upload them directly to where they are supposed to go. ETL source configuration files can be imported using REST-style web services exposed by the Configuration Manager application. Once the files are ready, use the following REST API to import them programmatically:

REST API to import ETL Source Files
Method: http://<host:port>/biacm/rest/biacmservice/importETLSourceFiles
Request Type: POST
Parameters:
PLV – Product Line Version of the source registered in CM (for Taleo 14B the value would be TALEO_14_B, for Taleo 15A it is TALEO_15_A, and so on)
DSN – Data Source Number of the source registered in CM (in this case, the Taleo DSN)
File – The full name and path of the file to be uploaded
Authentication: provide the username/password used to log in to CM.
Note that multiple files can be imported as one single ZIP file. OTBI-Enterprise will unzip them while storing them in the necessary folder.

An Example

Any REST client can be used to invoke the web service. Here is an example using curl (the URL is quoted so that the shell does not treat the "&" as a command separator):

curl -i -F "file=@<file to be uploaded>" "http://<host:port>/biacm/rest/biacmservice/importETLSourceFiles?plv=<plv>&dsn=<dsn>" -u username:password

Response Codes:
200 – Success
400 – Validation Errors
401 – Unauthorized

Configuration and Implementation Tips

Ø If you choose to make use of the feature, make sure you configure all three parameters correctly. If you choose not to use it, set CROSS_REFERENCE_ENABLED to 'N'; the remaining two parameters are then not needed.
Ø If you enable the Cross Reference feature and later wish to disable it, you will need a full load. So think it through carefully before making the call.
Ø Of all the CSV files, it is apparent that "file_tlo_appl_empl_xref_ps.csv" will potentially change very frequently. It is worth developing some automation around the creation of this file. The rest of the files should not change that frequently (hopefully).
Ø Provide data only when there is a source-to-master mapping. Source entities that do not have a corresponding master do not need to be added to the files – except in the one use case below, marked with an asterisk (*).
Ø How do you correct the source-to-master relations (the cross reference data set)?
o You indicated a wrong master for a given source and want to correct it now. Just provide the same source-to-master mapping the next day, this time with the correct master. OTBI-Enterprise will honor the correction and make the appropriate change.
o You want to undo a master-source relation; that is, a source-to-master mapping is no longer valid and the source is its own master. (*) Simply removing the old source-to-master mapping from the following day's load will not help. You need to provide that source mapping again in the following day's file, except that this time it is a source-to-source map. This tells OTBI-Enterprise that the source entity is the master of itself – in other words, that it has no master mapping. An example follows.
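To make that second correction scenario concrete, suppose (continuing the made-up values used earlier) you had mapped Taleo job information 9023 (DSN 310) to PeopleSoft job SHARE~410020 (DSN 210) and now want to undo that mapping. The corrective row in file_tlo_job_xref_ps.csv points the Taleo entity back at itself:

SRC_DATASOURCE_NUM_ID,SRC_JOB_INFO_NUMBER,MSTR_DATASOURCE_NUM_ID,MSTR_JOB_ID
310,9023,310,9023

Once this source-to-source row is processed, OTBI-Enterprise treats the Taleo job as its own master.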



An Implementer’s Guide to External Data Support for Universal Sources

Applies to OTBI-Enterprise Version 2 PB 2

TABLE OF CONTENTS
Overview of the External Data Support Feature
Support for On-Premise Oracle AU sources
Support for Non-Oracle sources
Supported Subject Areas – HCM+
Practical Use Cases
Getting Started with External Data – Non-Oracle Sources
Sequence of Steps – High Level
Deep Dive into the Actual Steps
Register Source and Configure Storage Service Container
Build Load Plans
Prepare Data Files and Upload
Upload Data Files to SSC
Run your Load Plans
Managing Deletes in Source System
General Tips
Resolving Domains
Special Cases
UNSPSC
Data Security with External Data
Potential Automations
File Upload to Storage Service Container

Overview of the External Data Support Feature

Oracle Business Intelligence Applications traditionally supported packaged data warehouses using pre-built ETL adaptors for on-premise Oracle sources such as E-Business Suite, PeopleSoft, Siebel, JD Edwards and so on. For non-Oracle sources, template-based Universal Adaptors were supported through flat files (CSV format). With the inception of Oracle Fusion Cloud and other cloud-based applications, such as Oracle Talent Cloud (Taleo), ETL adaptors for those were also delivered. With the latest product line, OTBI-Enterprise, Oracle delivers its data warehouse solution in the cloud. Up to Version 1, however, Oracle Fusion HCM Cloud was the only supported source. With Version 2, Oracle Talent Cloud (Taleo) was added to the support matrix, making it possible to analyze HCM data (from Fusion HCM) and recruiting data (from Taleo) together. With OTBI-Enterprise's latest patch bundle (2), support is extended to HCM data from two potential on-premise Oracle sources, namely E-Business Suite Applications 12.2 and PeopleSoft Applications 9.2, using the pre-built ETL adaptors for these sources. On top of that, support is now also extended to HCM subject areas coming out of non-Oracle sources via the Universal Adaptors. Put together, this is what is being termed "External Data Support". With this, you can upload your on-premise EBS or PSFT data to the on-cloud data warehouse, along with the mix of your other potential cloud-based sources such as Oracle Fusion HCM Cloud or Oracle Talent Cloud (Taleo), in case you have those.

Support for On-Premise Oracle AU sources

For Oracle sources (like E-Business Suite, or PeopleSoft), see my other blog on Oracle sources, called "An Implementer's Guide to External Data Support for Oracle Sources".

Support for Non-Oracle sources

For non-Oracle sources (which could be practically anything, including non-Oracle cloud applications) there are obviously no pre-built adaptors. However, Oracle BI Applications' standardized data warehouse data model makes it possible for you to populate the Universal Staging area tables, and from there the Oracle BI Applications ETL takes over. The Source Independent Load routines read from these Universal Staging area tables and populate the end data warehouse tables, and the Post Load Process then populates the required aggregate tables. Finally, the reporting layer is already attached to the warehouse. In short, the majority of the heavy lifting is already taken care of; you just need to populate the Universal Staging tables. How? Oracle provides documentation that tells you which staging tables (by subject area) are needed.
Along with that comes information mapping those tables to the CSV files. Finally, once you know the list of CSV files to work on, Oracle's documentation provides detailed information about the structure of those files, their data types, column (functional) descriptions, key information and any other related information. With all that knowledge, you create the required Universal CSV files and upload them to the Storage Service Container (SSC). OTBI-Enterprise then takes over, loads them into the data warehouse and connects the dots to the reporting layer. Here is a quick diagrammatic representation of non-Oracle (Universal Adaptor) External Data support, illustrating how it fits into the rest of the OTBI-Enterprise picture.

Supported Subject Areas – HCM+

The HCM+ acronym stands for certain additional areas outside of core HCM, such as GL Journals. The "Workforce Effectiveness" subject area uses this and is supported for External Data. The following shows the entire HCM+ support matrix for Universal sources.

Practical Use Cases

You will most likely have a mix of source systems against which you want your data warehouse to be built. For example, you may have upgraded to Oracle Fusion HCM Cloud for your core HCM data. However, your payroll processing is outsourced to a third-party vendor (maybe ADP). You might still be using EBS for your Financials (with the upgrade to Oracle Fusion FSCM a work in progress). And you use Oracle Talent Cloud (Taleo) for recruiting data. Such a mix is quite normal. With the delivery of External Data Support, this is how you can work out your solution in OTBI-Enterprise and have your data warehouse in the cloud:
Ø Use the Oracle Fusion HCM Cloud adaptor for core HCM, except the Payroll subject area.
Ø Use the External Data – Universal Adaptor approach for Payroll data from the third party (ADP). This is the case we are dealing with in this document.
Ø Use External Data – EBS for GL Journals [that's the only subject area supported with HCM+]. See my other blog on Oracle sources, called "An Implementer's Guide to External Data Support for Oracle Sources".
Ø Use the Oracle Talent Cloud (Taleo) adaptor for the Recruiting subject area.
There could be many such cases and combinations. As long as you understand what is supported (the matrix above), you get to decide which data comes from where.

Getting Started with External Data – Non-Oracle Sources

When it comes to non-Oracle sources, OTBI-Enterprise uses the Universal Adaptor to populate the data warehouse tables. The Universal Adaptor relies on CSV data files being present in the ..\srcFiles directory. The actual directory gets provisioned through setup tasks in BI Applications Configuration Manager (BIACM). The ETL maps to these CSV files and loads the Universal Staging area tables. From there, the regular ETL processes – source independent load, post load and so on – kick in. Note that there is no SDS schema when it comes to the Universal Adaptor; the ETL reads directly from the files. Populating the UA CSV files is usually the biggest challenge for customers: first, figuring out which files are to be populated and what their names should be; next, understanding the content that goes into each of these CSV files. We will go over these in detail below.

Sequence of Steps – High Level

At a high level, the sequence of steps goes like this:
1. Register the source and configure the Storage Service Container (SSC)
2. Pick subject areas (FGs, or Fact Groups) and build the Load Plans [regular and Domains-only]
3. Populate the required data files and the manifest file (list of files)
4. Upload the data files (for the Domains-only LP) to SSC
5. Run the Domains-only LP
6. Upload the data files (for the regular LP) and a new manifest file to SSC
7. Run the regular LP
8. Repeat steps 6 and 7 for incremental ETL runs

Deep Dive into the Actual Steps

Let's take a deeper look at what goes into each of these steps.

Register Source and Configure Storage Service Container

First, you need to enable the BI Applications Offerings. To do that:
Ø Login to BIACM
Ø Click on Tasks -> Source Setups -> Manage Business Applications
Ø Select the offerings you prefer
Ø Click Save
The following is a screenshot showing the Human Resources and Financials offerings enabled. The Custom offering always comes by default; for OTBI-Enterprise, the recommendation is to disable it, since there is not much customization you will get to do.
Now we proceed to source registration. Register the Universal Adaptor source for External Data. This is how:
Ø Click on Tasks -> Source Setups -> Define Business Intelligence Applications Instance
Ø On the Source Systems tab, click on Create, then provide a Source Instance Name and a Data Source Number, as shown in the screenshot below. It is recommended that you keep 900 as the data source number for Universal file sources.
Ø Click on Next. Note that the Source Files folder path gets populated. At the end of a successful registration, BIACM will provision this folder automatically.
Ø Create a new container (drop down from "Container" and pick "Create Container") if you are configuring for the first time. Pick any name. The screenshot below shows the name given as "Container_Universal".
Ø Click on Save. The source system will be registered and the Storage Service Container created.

Build Load Plans

Next, build your load plans. Two load plans are needed: a "Domains-Only Load Plan", which you will run first, and a regular load plan, which you will run on an ongoing basis (the first run goes in Full mode, thereafter in Incremental mode).
Ø Click on Tasks -> Load Plan Administration -> Manage Load Plans -> Add. Give a name to your LP; we call it "UADomains_LP1". Pick the right LP type and select the source instance that you set up while registering your source. Since we do the Domains-only LP first, the correct LP type is "Domain-Only Extract and Load (SDE and SIL)".
Ø Click on Next, and then pick the FGs (Fact Groups) as needed. Use the information in the HCM+ support matrix mentioned above in this document and make sure you pick from what is supported.
Ø Click Generate. Wait for the Domains LP generation to complete.
Now build another load plan, which you will use for regular loads, full and incremental.
Ø Click on Tasks -> Load Plan Administration -> Manage Load Plans -> Add. Give a name to your LP; we call it "UAFull_LP1". Pick the right LP type and select the source instance that you set up while registering your source. Since this is the regular LP, the LP type should be "Source Extract and Load (SDE, SIL and PLP)".
Ø Click on Next, and then pick the FGs (Fact Groups) as needed. Use the same FGs as when creating the Domains-Only LP.
Ø Click Generate. Wait for the Full LP generation to complete.
This completes the step of building all necessary load plans.

Prepare Data Files and Upload

This is the most critical part of the implementation project. Let's break it down into two parts: preparation of the files, and upload to SSC.

Prepare Data Files

Depending upon the FGs you picked for your LP, the first thing you need to gather is the list of CSV files you will actually need.
OTBI-Enterprise documents this quite clearly, by subject area. You can download that from the OTBI-Enterprise Document Library under the section "View Universal Adaptor Reference for HCM Subject Areas". A sneak peek of that file is here:

Once you know the names of the physical CSV files, and also the names of the logical ODI model stores, the next (and probably the biggest) concern is knowing what goes in them. Fortunately, OTBI-Enterprise provides decent documentation on the entire topic – all that you need to know. For example, what the grain of the data set should be, which staging area table and which final dimension or fact table the data will get into, a summary of the nature of the data at a high level, and so on. Next, it provides the shape of the CSV file: column header names, data types and lengths, whether the column is mandatory or not [some actually aren't], whether the column is a domain code [including the list of potential target warehouse conformed domain codes], and lastly the description of the column, through which you get to know what the column is about. You can download this documentation from the OTBI-Enterprise Document Library under the section "View External Data File Load Specifications". A sneak peek at that document is provided here for reference purposes. This should be enough information for you to get started. OTBI-Enterprise even provides sample CSV files (with no data), so the structure of the CSV file is already there; you just need to add data to it.

There are a whole bunch of rules about the data that you are going to provide. Here is the list you should keep in mind:
Ø The first four lines mandatorily contain header information. You can put whatever you want here; OTBI-Enterprise treats these four lines as non-data and skips them.
Ø The fifth line is the column header line. This should match the sample CSVs OTBI-Enterprise provides. If you are using one of their sample files instead of creating your own, you are OK here. This line is also treated as non-data, and OTBI-Enterprise will skip it.
Ø Data starts from the sixth row.
Ø The file name should be of the format file_<table name>-timestamp.csv. For example, file_absence_event_fs-20140916_004935.csv, where the timestamp format is YearMonthDay_24HourMinuteSecond [20140916_004935].
Ø Only raw CSVs are accepted. Compressed (zipped) files aren't allowed.
Ø The maximum size of a file is 5 GB.
Ø Ensure you structure your CSV file with the columns in the order specified in the OTBI-E documentation, or start from the OTBI-Enterprise provided sample CSVs. When there is no data for a column, leave an empty placeholder. For example, if the format specifies First Name, Middle Name, Last Name, with Middle Name not required, you must still submit the CSV data as "Mary","","Washington".
Ø Since a comma can itself be part of your data, all character data must be encapsulated within double-quotes, if that is your escape character.
Ø Non-character columns must not be encapsulated within double-quotes.
Ø Remove any new line characters such as \r or \n.
Ø Date data must be provided as per the OTBI-Enterprise documentation. It is usually of string type in the format YYYYMMDDHH24MISS. Don't encapsulate it within double-quotes.
So that's about your data files. You will populate a set of such CSV data files, as illustrated below.
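As a purely illustrative sketch of these rules (the column names and values below are made up and are not taken from the actual load specification; always follow the sample CSV and the specification for the real column list), a file named file_absence_event_fs-20140916_004935.csv could start like this:

HEADER LINE 1 - free text, skipped
HEADER LINE 2 - free text, skipped
HEADER LINE 3 - free text, skipped
HEADER LINE 4 - free text, skipped
INTEGRATION_ID,DATASOURCE_NUM_ID,ABSENCE_TYPE,START_DT,DELETE_FLG
"ABS~1001",900,"SICK_LEAVE",20140901000000,"N"
"ABS~1002",900,"VACATION",20140902000000,"N"

Note how the character columns are quoted, the numeric DSN is not, and the date is a plain YYYYMMDDHH24MISS string without quotes.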
In addition to the data files, OTBI-Enterprise's External Data support for the Universal Adaptor requires another file, called the "Manifest" file. This is more like an index file, containing the names of all the data CSV files to be consumed in a given lot. The requirements on the manifest file are as follows:
Ø The MANIFEST should be the last file to be uploaded, after all the data files have been uploaded. The download program looks at the manifest to determine which files ought to be downloaded. After successful download and processing of the CSV data files, the MANIFEST file gets renamed so that it is NOT picked up again, and the data CSV files are marked for expiry (purged when they reach the stipulated number of expiry days).
Ø The MANIFEST file naming format is "MANIFEST-TIMESTAMP.MF", where the timestamp format is YearMonthDay_24HourMinuteSecond [20150417_035459]. It is mandatory to have the timestamp in the MANIFEST file name, as files with the same name would be overwritten in the Cloud Storage Service. For example, MANIFEST-20150417_035459.MF.
Ø The MANIFEST file should NOT be compressed/zipped.
A sample data CSV file and a sample manifest file are shown in the following screenshot.
Finally, you have all your CSV data files ready and populated, along with the manifest file for the first batch. You now upload these to the SSC; we discuss that next. For now, put all your data CSV files and the MF file into a particular folder, say "C:/temp/upload". All files from there will go into SSC.

Upload Data Files to SSC

OTBI-Enterprise actually leaves it up to the customer how they want to upload the files to the Storage Service Container. It is assumed that customers can get a bit technical and prepare their own script to upload the files. There are quite a few ways to do that, and utilities are available as well. One of them is CyberDuck, which has a module for Oracle Storage Cloud Service. I am going to discuss a specific programming option using Java (see the Potential Automations section). This isn't supported by OTBI-Enterprise, but hey, it works! You are free to adopt this program, change it to your needs, or come up with your own. Assuming that you've done it, this concludes your data preparation and upload step.

Run your Load Plans

First, you are required to run the Domains-Only Load Plan.
Ø Click on Tasks -> Load Plan Administration -> Manage Load Plans. Then pick the Domains-Only LP called "UADomains_LP1" and click on Execute. Check the execution status and wait for it to finish.
Once this finishes, you need to re-upload dated CSV files that are needed for the regular LP, along with a new dated manifest file. You don't have to clean up the files already existing in SSC, but it is good practice to remove clutter from time to time. Then run the regular LP; the navigation is the same as above.

Managing Deletes in Source System

Managing deletes for External Data with non-Oracle sources can be a bit tricky. In this case OTBI-Enterprise, having no clue about the actual source system, relies on you to provide the information about deleted data. Handling data deleted from the source is crucial for the data warehouse; otherwise you end up with incorrect numbers. Note that, unlike the case with Oracle sources, OTBI-Enterprise does not maintain a replica schema (an SDS) of the OLTP within the OTBI-E pod, because it is not aware of what the source system looks like. It therefore turns the responsibility over to you to report the deleted data through the same flat file (CSV) approach. The CSV data files, as we discussed above, are provided one per staging area table, and the column headers of each CSV exactly match the columns of the corresponding staging area table.
There is one such column in all staging area tables (and hence in all CSV files) called DELETE_FLG. When you upload data files for the initial load, the value of this column would be "N" for every record, since the records exist in the source. In an incremental run, if you have found updates on a previously uploaded record, you re-upload that record with the changes. Similarly, if a record that you had uploaded earlier is now hard deleted in your source, you need to re-upload that same record, this time with DELETE_FLG set to "Y".

Take a close look at the above screenshot. This is an incremental data file that you are about to upload. Usually the PK definition of a universal staging table (and hence of the CSV file) is made up of the columns INTEGRATION_ID and DATASOURCE_NUM_ID. For date-effective tables, a third column gets included as well, making it a three-column PK: INTEGRATION_ID, DATASOURCE_NUM_ID and SRC_EFF_FROM_DT. For this example, we have only the first option (a two-column PK). You had originally uploaded data for the PK '900,300100044288208~300100044268215' with DELETE_FLG = 'N', and the data warehouse has been using this record in its calculations and analysis so far. Later, you figured out that this record was hard deleted in your source. You should have a process (within your source) that spits out deleted records along with new/updated records. New/updated records go their usual way with DELETE_FLG = "N", but deleted records must have the DELETE_FLG column set to "Y", as shown in the above picture. The downstream data warehouse process treats this as a soft delete and moves on accordingly, removing the record from all calculations and analysis. That's pretty much the idea. You can choose to provide all deleted records for all files in a completely separate batch, where every record is marked with DELETE_FLG = "Y", or you can mix them with your daily regular data files – your choice. A small made-up illustration follows.
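Here is a minimal sketch of the convention, using the PK from the example above; the column order and any column other than INTEGRATION_ID, DATASOURCE_NUM_ID and DELETE_FLG are invented for illustration, and the four free-text header lines plus the column header line are omitted:

Initial load row:      "300100044288208~300100044268215",900,"SICK_LEAVE","N"
Incremental load row:  "300100044288208~300100044268215",900,"SICK_LEAVE","Y"

The PK is identical in both rows; only DELETE_FLG changes from "N" to "Y", which is what tells the warehouse to soft-delete the record and drop it from its calculations.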
General Tips

Resolving Domains

Resolving source domain member codes to target domain member codes happens as part of the SDE interfaces in the case of pre-built adaptors (for E-Business Suite, PeopleSoft, Fusion and so on). For Universal sources, however, you are responsible for providing the target domain member codes in the CSV files themselves. This is quite an important step. Fortunately, there are data model standards in place through which you know that a given column is a target domain code column: target domain column names are usually prefixed with "W_", and the available documentation also makes a note of it. We can take an example from the OTBI-Enterprise Document Library, under the section "View External Data File Load Specifications". We took a similar example above, but that was for a fact source, which usually doesn't carry a domain; an example of a dimension source will probably help explain the concept. See below:

So, the first and second columns of this file are for you to populate your source domain member code values for "Status" (indicating requisition or applicant status) and "Reason" (indicating the reason, if available, for such a requisition or applicant status). These are your "raw" source values. The third column in this file is a target domain column; its values must come from the list of values OTBI-Enterprise requires. Based on the data in the first two source domain member code columns, you need to do the mapping and "choose" the appropriate target domain member code value.

Assume that the list of target domain member code values includes items like "ASSESSMENT_INTERVIEW", "ASSESSMENT_INTERVIEW1" and "ASSESSMENT_INTERVIEW2" (with names "Assessment Interview", "Assessment First Interview" and "Assessment Second Interview" respectively). There are actually 22 out-of-the-box values for this target domain. Also assume that your source domain member code values include items like "WT", "INT1", "INT2", "INT3" and "FINT" (with names "Written Test", "Interview Round 1", "Interview Round 2", "Interview Round 3" and "Final Interview" respectively). In this case, your job is to map your 5 source values to this list of 3 possible target values, and you have to get this done in the CSV file itself, before providing it to OTBI-Enterprise. That kind of logic needs to exist at your end (see the illustrative sketch at the end of this section). You cannot change the list of target values unless they are coded as "extensible"; if extensible, you are allowed to add more values to the list in BIACM.

You must be wondering where to get the list of allowed target domain member code values. Simple:
Ø Log into BIACM and navigate to Manage Warehouse Domains.
Ø Search on the domain code (for example, W_RECRUITMENT_EVENT).
Ø Note down the Warehouse Members in the bottom pane. A screenshot is below.

While on this topic, let me go over another closely related issue – domain mapping hierarchies. In the above file, the first and second columns (source domain) lead to mapping the third column (target domain) – that's great. But what about the fourth column? That is also a target domain column. How do we map that? How do we map the fifth, the sixth, and so on? All of them seem to be target domain columns. The best thing is to read up the Functional Task Manager documentation for this task, if there is any. For any complicated mappings like this, OTBI-Enterprise provides additional information help topics. For the scope of this document, you only need to understand this: you may simply have to copy the mapping from BIACM, if it falls under a domain mapping "hierarchy". See this screenshot (navigation: BIACM -> Manage Domain Mappings and Hierarchies -> search on Domain Code like "W_RECRUITMENT"), then go to the Hierarchies tab within the middle pane. You will notice the hierarchy up from Recruitment Event to Recruitment Sub Stage to Recruitment Stage. This is a target domain hierarchy; the parent value at any level is based on the child values. And since the lowest level is a target domain (Recruitment Event, in this case), OTBI-Enterprise has pre-done the mapping up to Sub Stage (see above). The mapping is visible in the bottom-most pane, and the next screenshot shows the mapping from Sub Stage to Stage level. As a Universal Adaptor user, all you need to do is copy these "mappings" from BIACM and reflect them in your CSV files. This is true for any target domain column that is part of a domain hierarchy. If it is not (which you can find out by looking into BIACM in the above way), you need to do the mapping from your source domains to the target domains yourself. Mapping domains is mandatory before even starting a full load in OTBI-Enterprise; a whole lot depends heavily on these mappings.
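To pin down the 5-to-3 mapping discussed above, here is one possible, purely illustrative assignment of the made-up source codes to W_RECRUITMENT_EVENT target codes; your own business rules decide the actual mapping, and the real file has its own column layout:

WT   (Written Test)       -> ASSESSMENT_INTERVIEW
INT1 (Interview Round 1)  -> ASSESSMENT_INTERVIEW1
INT2 (Interview Round 2)  -> ASSESSMENT_INTERVIEW2
INT3 (Interview Round 3)  -> ASSESSMENT_INTERVIEW2
FINT (Final Interview)    -> ASSESSMENT_INTERVIEW2

In the CSV you would simply write the chosen target code into the W_ target domain column on every data row that carries the corresponding source code.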
Special Cases

UNSPSC

The OTBI-Enterprise product dimension uses a customer-configurable source file called "file_unspsc.csv". Out of the box, a few rows of sample data are provided, but customers who have paid for UNSPSC (United Nations Standard Products and Services Code) data from unspsc.org are expected to provide their own "paid" data. If you are one such customer, you need to download this sample file from OTBI-Enterprise, put your own UNSPSC data in it in the specified format, and upload it to the Storage Service. That is no different from how it works for Oracle sources (E-Business Suite, or PeopleSoft). However, for non-Oracle sources there is a bit of a special case, which is why I am talking about UNSPSC in this section. The problem is that even though this is essentially a one-time activity (until the international standard changes), you need to upload this same file in each and every batch of ETL, over and over again. For Universal CSV files, once you upload to SSC along with a manifest file, OTBI-Enterprise downloads everything to the Source Files folder (remember that from your source registration step?). So file_unspsc.csv gets downloaded and consumed in the ETL. Next time, before downloading, OTBI-Enterprise deletes all files from the source files folder, so file_unspsc.csv gets deleted too. For genuinely changing data files this is fine, but for UNSPSC data it wasn't really warranted. Still, that's the way it is. Long story short, just remember to provide file_unspsc.csv in every batch. If you do use UNSPSC, provide it with valid data; if not, just keep providing OTBI-Enterprise's sample file. You can find this information in the OTBI-Enterprise Document Library, under the Known Issues section for UNSPSC.

Data Security with External Data

For non-Oracle sources, OTBI-Enterprise relies on data security measures built on top of the data warehouse itself, leveraging configurations you make in BIACM. This won't necessarily "match" your source applications, but that is how far you may be able to get; with more configuration options from OTBI-Enterprise in the future, we can only get closer. The following section describes how OTBI-Enterprise supports data security and what you are recommended to configure. Before that, it is assumed that you have all your OTBI-Enterprise users pre-configured in the OTBI-E pod, that you have a clear plan for which of these users should get what kind of data access, and that you have configured everything else and run a complete full load of the ETL. Here are the steps, once you are ready:
Ø Log into BIACM.
Ø In the "Tasks" section on the left hand panel, click on "Manage Data Security".
Ø Select the source instance that you created before generating your load plans. In the following screenshot, the source instance selected is "EBS1222", which is the same one we created earlier in our example.
Ø From the bottom pane, click the "plus" button to add a security object.
Ø Pick your source instance (EBS1222 in this case), and then select your "Secure by Object". If you are planning to enable Department-based data security, pick the appropriate item from the drop-down box.
Ø From the "Users" list, pick the user (or users) who should have similar Department-based security restrictions. In this case, we are picking a user called TM-MFITZIMMONS. Select this user and click the single arrow to move the user to the right hand side. Note that if you have multiple users planned, you can use the user filter to search for them.
Ø Scroll down to see the list of Departments. Select (or multi-select) the Departments to which your user(s) should have access. Click the single arrow to move them to the right hand side, then click OK.
Ø The security objects that you've configured will be shown in the bottom pane, as shown below.
Ø If you have other users, or wish to secure by other (supported) dimension objects, carry out the same steps again.
Instead of having to do this for every user (or group of users with similar security requirements), OTBI-Enterprise provides a way to import all your security configurations from an external CSV file. For the sake of this document, I am not repeating what is already documented in the OTBI-Enterprise Document Library, in the Setting up Data Security section. Even the single-user process I described above is covered in that same document library. However, I had to pick one approach that relates to the examples discussed earlier, and it happened to be the first one. Anyway, that's pretty much it.

Potential Automations

None of these options are supported by OTBI-Enterprise. They are just options for you to use, or to come up with similar (maybe "better") ones of your own.

File Upload to Storage Service Container

The first area of potential automation is a way to upload your files to the Storage Service Container programmatically. This is going to be a daily exercise, so it had better be automated. Here is a small and quick Java program that uses the class libraries from Oracle Cloud Storage Services and helps you move towards automation. This program expects a property file as input, called "StorageService_Manager_Input.properties", in which you provide certain details about your storage container setup, as follows:
STORAGE_SERVICE_URL -> Storage service URL that Oracle gave you.
STORAGE_SERVICE_NAME -> Name of the Storage Service. Oracle gives you this as well.
STORAGE_SERVICE_USER_NAME -> Your username to access the Storage Service.
STORAGE_SERVICE_PASSWORD -> Your password to access the Storage Service.
STORAGE_SERVICE_CONTAINER_NAME -> The name of the container, as you configured it while registering your source.
ACTION -> The action you want this program to take. Options are:
  LIST_CONTAINERS -> Lists all containers within the storage service.
  UPLOAD_FILES -> Uploads all your files from "UPLOAD_FILES_FROM_DIRECTORY" into the container "STORAGE_SERVICE_CONTAINER_NAME".
  LIST_FILES -> Lists all files currently existing in the container "STORAGE_SERVICE_CONTAINER_NAME".
  DELETE_FILES -> Deletes all files from the container "STORAGE_SERVICE_CONTAINER_NAME".
  CREATE_CONTAINER -> Creates a container by the name "STORAGE_SERVICE_CONTAINER_NAME".
UPLOAD_FILES_FROM_DIRECTORY -> The location on your local server where all the files are waiting to be uploaded.
Create and save such a property file by the name "StorageService_Manager_Input.properties". A sample is shown below.
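The following is a minimal sketch of what that properties file could look like; the URL, service name, credentials and directory are placeholder values you would replace with the details Oracle gave you:

# StorageService_Manager_Input.properties (illustrative values only)
STORAGE_SERVICE_URL=https://storage.example.oraclecloud.com
STORAGE_SERVICE_NAME=Storage-mydomain
STORAGE_SERVICE_USER_NAME=cloud.admin@example.com
STORAGE_SERVICE_PASSWORD=MyPassword123
STORAGE_SERVICE_CONTAINER_NAME=Container_Universal
ACTION=UPLOAD_FILES
UPLOAD_FILES_FROM_DIRECTORY=C:/temp/upload

With ACTION=UPLOAD_FILES, the program below picks up every file in C:/temp/upload (your data CSVs plus the MANIFEST file) and pushes them into the Container_Universal container configured during source registration.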
Following is the Java code. Copy it into an appropriate IDE like Eclipse, save it as "StorageService_Manager.java", and then compile it to create a .jar file; call it "StorageService_Manager.jar". Needless to say, you need to have the Oracle Cloud Service libraries referenced here, and a few more. I had these under my Library folder (see screenshot):

import java.io.*;
import java.util.*;

import oracle.cloud.storage.CloudStorage;
import oracle.cloud.storage.CloudStorageConfig;
import oracle.cloud.storage.CloudStorageFactory;
import oracle.cloud.storage.exception.SystemException;
import oracle.cloud.storage.model.Key;
import oracle.cloud.storage.model.StorageInputStream;

public class StorageService_Manager {

    public static Properties inParam;
    public static String storageUrl;
    public static String storageServiceName;
    public static String storageUserName;
    public static String storagePassword;
    public static String storageContainerName;
    public static String storageAction;
    public static String uploadFromDirectory;
    public static CloudStorage myConnection;

    public static void main(String[] args) {
        try {
            // Read the input property file
            initProperties();
            storageUrl           = inParam.getProperty("STORAGE_SERVICE_URL").trim();
            storageServiceName   = inParam.getProperty("STORAGE_SERVICE_NAME").trim();
            storageUserName      = inParam.getProperty("STORAGE_SERVICE_USER_NAME").trim();
            storagePassword      = inParam.getProperty("STORAGE_SERVICE_PASSWORD").trim();
            storageContainerName = inParam.getProperty("STORAGE_SERVICE_CONTAINER_NAME").trim();
            uploadFromDirectory  = inParam.getProperty("UPLOAD_FILES_FROM_DIRECTORY").trim();
            storageAction        = inParam.getProperty("ACTION").trim();

            // Build the config and the cloud storage connection object
            CloudStorageConfig myConfig = new CloudStorageConfig();
            myConfig.setServiceName(storageServiceName)
                    .setUsername(storageUserName)
                    .setPassword(storagePassword.toCharArray())
                    .setServiceUrl(storageUrl);
            myConnection = CloudStorageFactory.getStorage(myConfig);

            // Dispatch on the requested ACTION
            if (storageAction.toUpperCase().equals("LIST_CONTAINERS")) {
                listContainers();
            }
            if (storageAction.toUpperCase().equals("UPLOAD_FILES")) {
                uploadFiles();
            }
            if (storageAction.toUpperCase().equals("LIST_FILES")) {
                listFiles();
            }
            if (storageAction.toUpperCase().equals("DELETE_FILES")) {
                deleteFiles();
            }
            if (storageAction.toUpperCase().equals("CREATE_CONTAINER")) {
                createStorageContainer();
            }
            System.out.println("End ...");
        } catch (Exception e) {
            e.printStackTrace();
            System.out.println("Error has occurred \n" + e);
        }
    }

    private static void createStorageContainer() {
        System.out.println("Creating container :" + storageContainerName);
        try {
            oracle.cloud.storage.model.Container c = myConnection.createContainer(storageContainerName);
        } catch (Exception e) {
            e.printStackTrace();
            System.out.println("Error in creating container\n" + e);
        }
    }

    // Not wired to an ACTION value in this version; renames every object in the
    // container by prefixing "done_" (handy if you want to mark processed files).
    private static void renameFiles() {
        System.out.println("Renaming files in container :" + storageContainerName);
        java.util.List<Key> cobjects = myConnection.listObjects(storageContainerName, null);
        for (int i = 0; i < cobjects.size(); i++) {
            System.out.println("Renaming :" + cobjects.get(i).getKey());
            StorageInputStream sins = myConnection.retrieveObject(storageContainerName, cobjects.get(i).getKey());
            myConnection.storeObject(storageContainerName, "done_" + cobjects.get(i).getKey(), "text/plain", sins);
            myConnection.deleteObject(storageContainerName, cobjects.get(i).getKey());
        }
    }

    private static void deleteFiles() {
        System.out.println("Deleting files in container :" + storageContainerName);
        try {
            java.util.List<Key> cobjects = myConnection.listObjects(storageContainerName, null);
            for (int i = 0; i < cobjects.size(); i++) {
                System.out.println("Deleting :" + cobjects.get(i).getKey());
                myConnection.deleteObject(storageContainerName, cobjects.get(i).getKey());
            }
        } catch (SystemException ex) {
            System.out.println("Error in deleting files in container :" + storageContainerName);
            ex.printStackTrace(System.out);
        }
    }

    private static void listFiles() {
        System.out.println("Listing files in container :" + storageContainerName);
        try {
            java.util.List<Key> cobjects = myConnection.listObjects(storageContainerName, null);
            for (int i = 0; i < cobjects.size(); i++) {
                System.out.println(cobjects.get(i) + " :" + cobjects.get(i).getKey());
            }
        } catch (SystemException ex) {
            System.out.println("Error in listing files in container :" + storageContainerName);
            ex.printStackTrace(System.out);
        }
    }

    private static void uploadFiles() {
        try {
            File f = new File(uploadFromDirectory);
            File files[] = f.listFiles();
            System.out.println("Uploading files to container :" + storageContainerName);
            for (File file : files) {
                try {
                    FileInputStream fis = new FileInputStream(file);
                    String fileName = file.getName();
                    System.out.println("Uploading :" + fileName);
                    myConnection.storeObject(storageContainerName, fileName, "text/plain", fis);
                } catch (FileNotFoundException ex) {
                    System.out.println("Error in uploading file :" + file.getName());
                    ex.printStackTrace(System.out);
                }
            }
        } catch (NullPointerException ex) {
            System.out.println("Error in uploading files in container :" + "Upload file directory does not exist");
            ex.printStackTrace(System.out);
        } catch (Exception ex) {
            System.out.println("Error in uploading files in container :" + storageContainerName);
            ex.printStackTrace(System.out);
        }
    }

    private static void listContainers() {
        System.out.println("Listing Containers");
        try {
            List<oracle.cloud.storage.model.Container> containerList = myConnection.listContainers();
            for (int i = 0; i < containerList.size(); i++) {
                System.out.println(containerList.get(i) + " :" + containerList.get(i).getName());
            }
        } catch (Exception ex) {
            System.out.println("Error in listing containers");
            ex.printStackTrace(System.out);
        }
    }

    private static void initProperties() throws IOException {
        // The property file is expected to be on the classpath
        inParam = new Properties();
        inParam.load(ClassLoader.getSystemResourceAsStream("StorageService_Manager_Input.properties"));
    }
}

Note that the original listing checked "DELETE_FILES" twice (once under a "create container" comment); the dispatch above is corrected so that CREATE_CONTAINER actually calls createStorageContainer(), and renameFiles() is left in as an unused helper.

That's it. Running it is fairly simple. Be in the directory where you have your JAR, then run:

java -cp .;%CLASS_PATH% -jar StorageService_Manager.jar >> storage_out.log

(Keep in mind that when you launch with -jar, the -cp option is ignored, so the dependent Oracle Cloud Storage jars need to be referenced from the jar's own manifest Class-Path; alternatively, run the main class directly with everything on the classpath.) The output file "storage_out.log" records everything that happens on the console.



An Implementer’s Guide to External Data Support for Oracle Sources

Applies to OTBI-Enterprise Version 2 PB 2

TABLE OF CONTENTS
Overview of the External Data Support Feature
Support for On-Premise Oracle AU sources
Support for Non-Oracle sources
Supported Subject Areas – HCM+
Practical Use Cases
Getting Started with External Data – Oracle AU Sources
Sequence of Steps – High Level
Deep Dive into the Actual Steps
Register Source and Configure Storage Service Container
Build Load Plans
Prepare Data Files and Upload
Prepare Data Files
Upload Data Files to SSC
Run your Load Plans
Managing Deletes in Source System
General Tips
Special Cases
UOM
Data Security with External Data
Potential Automations
File Upload to Storage Service Container
CSV file generator for Oracle Sources

Overview of the External Data Support Feature

Oracle Business Intelligence Applications has traditionally supported packaged data warehouses using pre-built ETL adaptors for on-premise Oracle sources such as E-Business Suite, PeopleSoft, Siebel, JD Edwards and so on. For non-Oracle sources, template-based Universal Adaptors were supported through flat files (CSV format). With the inception of Oracle Fusion Cloud and other cloud-based applications, such as Oracle Talent Cloud (Taleo), ETL adaptors for those were delivered as well. With its latest product line, OTBI-Enterprise, Oracle delivers the data warehouse solution on the cloud. However, up to Version 1, Oracle Fusion HCM Cloud was the only supported source. With Version 2, Oracle Talent Cloud (Taleo) was added to the support matrix, making it possible to analyze HCM data (from Fusion HCM) and recruiting data (from Taleo) together.

With OTBI-Enterprise's latest patch bundle (2), support is extended to HCM data from two on-premise Oracle sources, namely E-Business Suite Applications 12.2 and PeopleSoft Applications 9.2, using the pre-built ETL adaptors for these sources. On top of that, support is now also extended to HCM subject areas coming out of non-Oracle sources via the Universal Adaptors. Put together, this is what is termed "External Data Support". With this, you can now upload your on-premise EBS or PSFT data to the on-cloud data warehouse, alongside your other potential cloud-based sources such as Oracle Fusion HCM Cloud or Oracle Talent Cloud (Taleo), in case you have those.

Support for On-Premise Oracle AU sources

At a high level, the way it works is quite simple. You upload a set of your OLTP tables' data (EBS or PSFT) in CSV file format to designated Oracle Storage Service Containers (henceforth called SSC). OTBI-Enterprise then downloads those files at run time from the SSC and populates an on-cloud SDS (Source Data Store) database schema. This acts as a mirror image of the actual on-premise source system. The respective ETL adaptors then kick in and the final data warehouse gets populated. With the reporting layer already built in the pod, you get direct access to OBIEE Answers and Dashboards. Here is a quick diagrammatic representation of AU External Data support, illustrating how it fits into the rest of the OTBI-Enterprise picture.

Support for Non-Oracle sources

For non-Oracle sources (which could be practically anything, including non-Oracle cloud applications), see my other blog on Universal Adaptors, called "An Implementer's Guide to External Data Support for Universal Sources".
Supported Subject Areas – HCM+

OTBI-Enterprise External Data support does not cover HCM subject areas that make extensive use of OLTP procedures and packages. For example, for E-Business Suite Applications, the HCM subject area "Accruals" relies on certain EBS PL/SQL packages. As it stands now, there is no easy way to replicate those packages within the cloud SDS. Therefore, the Accruals subject area from EBS isn't supported in OTBI-Enterprise Version 2. The HCM+ acronym stands for certain additional areas outside of core HCM, such as GL Journals. The "Workforce Effectiveness" subject area uses this, and it is supported for External Data. The following shows the entire HCM+ support matrix, by source.

Practical Use Cases

You would most likely have a mix of source systems against which you would want your data warehouse to be built. For example, you may have upgraded to Oracle Fusion HCM Cloud for your core HCM data. However, your payroll processing is outsourced to a third-party vendor (maybe ADP). You might still be using EBS for your Financials (with an upgrade to Oracle Fusion FSCM work in progress). And you use Oracle Talent Cloud (Taleo) for recruiting data. Such a mix is quite normal. With the delivery of External Data Support, this is how you can work out your solution in OTBI-Enterprise and have your data warehouse on the cloud.
Ø Use the Oracle Fusion HCM Cloud Adaptor for core HCM, except the Payroll subject area.
Ø Use the External Data – Universal Adaptor approach for Payroll data from the third party (ADP). See my other blog, "An Implementer's Guide to External Data Support for Universal Sources".
Ø Use External Data – EBS for GL Journals [that's the only subject area supported with HCM+]. This is the case we are dealing with in this document.
Ø Use the Oracle Talent Cloud (Taleo) Adaptor for the recruiting subject area.
There could be many such cases and combinations. As long as you understand what is supported (matrix above), you get to decide which data comes from where.

Getting Started with External Data – Oracle AU Sources

For Oracle sources (E-Business Suite 12.2 or PeopleSoft Applications 9.2), OTBI-Enterprise uses the corresponding ETL Adaptors to populate the data warehouse tables. These Adaptors rely on a relational database where all the required OLTP tables are present. For OTBI-Enterprise, this is the SDS schema, where all the required OLTP tables (no OLTP-specific PL/SQL packages, procedures or functions) are replicated by the Cloud Replicator. From there, the regular ETL processes, namely the source dependent extracts (SDE), source independent loads (SILOS), and post load processes (PLP), kick in. Providing the CSV files for each required OLTP table is the biggest challenge here. Fortunately, OTBI-Enterprise provides template SQL files that help you figure out which tables are required and how to extract the data from those tables. We will go over these in detail below. Once the CSV files are in the Storage Service Container, OTBI-Enterprise picks them up from there and loads up the SDS.

Sequence of Steps – High Level

At a high level, the sequence of steps goes like this:
1. Register Source (PSFT or EBS), Configure Storage Service Container (SSC)
2. Pick subject areas (FGs or Fact Groups) and build Load Plans [Regular and Domains-only]
3. Download ORDER.SQL, ORDER_DDL.SQL and ORDER_PE.SQL files for each LP
4. Follow ORDER.SQL against the Domains-only LP and prepare the data exports in CSV format for the Domains-only LP
5. Upload data files (for Domains-only LP) to SSC
6. Run Domains-only LP
7. Configure domains
8. Follow ORDER.SQL against the regular LP and prepare the data exports in CSV format for the regular LP
9. Upload data files (for regular LP) to SSC
10. Run regular LP
11. Repeat steps 8 through 10 for incremental ETL runs
12. If you have deletes in the OLTP for a table, follow ORDER_PE.SQL for that table, prepare data exports in PECSV format, upload them to SSC and run them along with the usual incremental run

Deep Dive into the Actual Steps

Let's take a deeper look into what goes on in each of these steps.

Register Source and Configure Storage Service Container

First, you need to enable the BI Applications Offerings. To do that:
Ø Log in to BIACM
Ø Click on Tasks -> Source Setups -> Manage Business Applications
Ø Select the offerings you prefer
Ø Click Save
The following is a screenshot showing the Human Resources and Financial Offerings enabled. Custom Offerings always come enabled by default. For OTBI-Enterprise, the recommendation is to disable them, since there's not much customization you will get to do.

Now we proceed to source registration. Register an E-Business Suite Applications 12.2 or PeopleSoft Applications 9.2 source for External Data. This is how (taking the EBS example):
Ø Click on Tasks -> Source Setups -> Define Business Intelligence Applications Instance
Ø Source Systems Tab -> Click on Create -> Pick Product Line as Oracle E-Business Suite, and Product Line Version as Oracle E-Business Suite R12.2.x. Set a name for your source instance, say EBS122, and pick a Data Source Number, maybe 201. Set Source Load Technology to "Replicator External File".
Ø Click on Next. Note that the Source Files folder path gets populated. At the end of a successful registration, BIACM will provision this folder automatically.
Ø Create a new Container (drop down from "Container" and pick "Create Container") if configuring for the first time. Pick any name. The screenshot below shows the name given as "ExternalData_EBS".
Ø Click on Save. The source system will be registered and the Storage Service Container created.

Build Load Plans

Next, build your load plans. Two load plans are needed: a "Domains-Only Load Plan", which you will run first, and your regular load plan, which you will run regularly (the first time in Full mode, thereafter in Incremental mode).
Ø Click on Tasks -> Load Plan Administration -> Manage Load Plans -> Add. Give a name to your LP. We call it "DomainsOnly_EBS" in this example. Pick the right LP type and select the source instance that you set while registering your source. Since we do the Domains-Only LP first, the correct LP Type is "Domain-Only Extract and Load (SDE and SIL)".
Ø Click on Next, and then pick the FGs (Fact Groups) as needed. Use the information in the HCM+ support matrix mentioned above in this document. Make sure you pick from what is supported.
Ø Click Generate. Wait for the Domains LP generation to complete.
Now build another Load Plan, which you will be using for regular loads, full and incremental.
Ø Click on Tasks -> Load Plan Administration -> Manage Load Plans -> Add. Give a name to your LP. We call it "Full_EBS". Pick the right LP type and select the source instance that you set while registering your source. Since we are building the regular LP now, the LP Type should be "Source Extract and Load (SDE, SIL and PLP)".
Ø Click on Next, and then pick the FGs (Fact Groups) as needed. Use the same FGs as you used while creating the Domains-Only LP.
Ø Click Generate. Wait for the Full LP generation to complete.
This completes the step of building all necessary Load Plans.

Prepare Data Files and Upload

This is the most critical part of the implementation project. Let's break it down into two parts: preparation of the files, and upload to the SSC.

Prepare Data Files

You need to create two sets of data CSV files. The first set contains those needed by the Domains-Only ETL (this is one-time only), and the second set those needed by the Full ETL (you repeat this for each incremental run). The process for creating the files is exactly the same in both cases. The following illustrates the steps for the Domains-Only ETL. To be clear, file preparation is a two-step process. First, you generate the Order Files (SQL files that Oracle generates for you depending upon your FG selections in the LP); second, you use these Order Files and run them against your EBS source to extract the data in CSV format.

Let's start with the first step, generating the Order Files.
Ø Click on Tasks -> Load Plan Administration -> Manage Load Plans -> Select the Domains-Only LP that you just generated (DomainsOnly_EBS).
Ø Keeping that selected, click on the toolbar button whose mouse-over reads "Export Source System Extract SQL Command Files".
Ø It pops up a new window, where under "Actions" you click on the one that says "Execute Source System Extract SQL Command Files Scenario".
Ø Since you are actually running an ODI scenario behind the scenes, you will be prompted to provide information about your Agent and so on. Provide those, and then click OK.
Ø It takes a few seconds to run and then you get the results back in a new popup window. Three .SQL files are presented to you.
Ø Select the files and export them to your local box, using the 'Export' button.
We will go over what is in each of these files, and how it helps you. But before that, like I mentioned earlier, the process is exactly the same so far for the Full_EBS Load Plan. Here is a screenshot of the final step on that LP. Note that the .SQL files have the Load Plan name prefixed to them, as in "Full_EBS_***" or "DomainsOnly_EBS_***", so you can't mix them up. You can actually download all six together, but while creating and providing CSV files for consumption, take care to upload the ones needed by the appropriate LP. Now that the Order Files are generated, let's go to the next step.

Generate your extract CSV files

These three sets of .SQL files are meant to help you generate your data CSV files. To understand how and why, let's take a closer look at the contents of these files. Each of these files has a comment section at the beginning that explains what the file contains and how to use the information in it. The main file you will use is called ORDER.SQL. The other two files are ORDER_DDL.SQL and ORDER_PE.SQL. For the purposes of this section, we will not spend much time on these two files, apart from mentioning that ORDER_DDL.SQL provides you with the structure of the required OLTP tables, as per the OTBI-Enterprise model. You could use this to compare with your real OLTP and spot differences upfront. ORDER_PE.SQL is similar to ORDER.SQL in the sense that it also provides a bunch of SQL SELECT statements. However, the intent of ORDER_PE.SQL is to select only the Primary Keys from each OLTP table.
To support physical hard deletes from your OLTP, this helps OTBI-Enterprise synch up the data in the SDS with your OLTP and mark deleted records as "deleted" [set the CDC_$DML_CODE to 'D']. The data file CSVs resulting from this ORDER_PE.SQL have the extension .PECSV, while the main ones against ORDER.SQL have the extension .CSV. Here are screenshots of the comment section for all three files. We will go over the details of the last and most important one, ORDER.SQL, right after that.

The above is a screenshot showing a sample ORDER_DDL.SQL file. The comments should be self-explanatory.

The above is a screenshot showing a sample ORDER_PE.SQL file. The comments should be self-explanatory.

The above is a screenshot showing a sample ORDER.SQL file. The comments should be reasonably clear, but here are a few tips on this topic (an illustrative statement follows this list):
Ø If an OLTP table has a LAST_UPDATED_DATE column (which most E-Business Suite tables do, not necessarily true for PeopleSoft), the ORDER.SQL file will have added a WHERE clause using that date as a filter. A couple of points about this, but first a screenshot to show what I'm talking about.
o You should maintain values for these variables for each of your extracts – initial, then first incremental, then second incremental and so on.
o For the initial load, if you prefer to load everything for all tables, simply remove this WHERE clause, or provide a reasonably old date (like 01-JAN-1970) if you don't want to edit the file. You can also use this format to specify the date from which you want your data warehouse to have data. For example, 01-JAN-2006, in case you want to start with 10 years of data.
o Once the extract for your initial run is over, the date when you did it becomes the date for your first incremental. When the first incremental is over, the date when you did it becomes the date for your second incremental, and so on.
o You should plan to maintain all such dates and use them appropriately before each extraction. You may want to set up an automation script for this.
Ø Certain date-effective tables are to be handled in a different way. The ORDER.SQL provides you with the template SQL to take care of that. Apart from setting the value of the #LAST_EXPORTED_DATE variable, it is recommended that you do not modify this part. This is critical in maintaining the sanctity of the date-effective data. Here's a screenshot displaying that portion of the SQL. This is not an action, just more of an FYI.
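To make this concrete, here is what such an extract statement could look like, in the spirit of ORDER.SQL. This is a sketch only: the table (AP_SUPPLIERS), the column list and the audit column name are assumptions for illustration, and your generated ORDER.SQL dictates the real statements, but it shows where the #LAST_EXPORTED_DATE variable sits in the WHERE clause.

--Illustrative only; the generated ORDER.SQL lists the exact columns per table
SELECT
  VENDOR_ID,
  VENDOR_NAME,
  SEGMENT1,
  LAST_UPDATE_DATE
FROM AP_SUPPLIERS
WHERE LAST_UPDATE_DATE > #LAST_EXPORTED_DATE   --audit column name may differ per table
;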
Now that you have your SQLs handy and your date parameter (#LAST_EXPORTED_DATE) values picked/programmed, it is time to crank out the CSV files. There are certain rules you need to be aware of when dumping data into these CSV files. These are as follows:
Ø The first line is the column header line. This should match the columns selected in ORDER.SQL. If you are using the ORDER.SQL without modifications to the SELECT clause, you are OK here. This is assumed to be a non-data row, and OTBI-Enterprise only uses it to match up with the SDS schema and populate the right data against the right column.
Ø Data starts from the second row.
Ø File Naming Standards:
o For Regular Data Files (.CSV)
§ Format - file_TableName-DateTime.CSV where
§ TableName - Table name of the OLTP table whose data is being generated (table name in UPPER CASE).
§ Date - YearMonthDay [20141017]
§ Time - (24)HourMinuteSecond [230512]
§ Ex: file_PS_HRS_APP_PROFILE-20140916_010515.csv [OLTP table used here is PS_HRS_APP_PROFILE]
o For Primary Extract Files (.PECSV)
§ Format - file_keys_TableName-DateTime.PECSV where
§ TableName - Table name of the OLTP table whose Primary Key data is being generated (table name in UPPER CASE).
§ Date - YearMonthDay [20141017]
§ Time - (24)HourMinuteSecond [230512]
§ Ex: file_keys_PS_HRS_APP_PROFILE-20140916_120512.pecsv [OLTP table used here is PS_HRS_APP_PROFILE]
Ø Only raw CSVs are accepted. Compressed (zipped) files aren't allowed.
Ø File Size: The maximum size of a file is 5 GB. If a table is large, you can split it into multiple files as part of the same batch. Use the timestamp to differentiate them. E.g.
file_PS_HRS_APP_PROFILE-20150708_132847.csv
file_PS_HRS_APP_PROFILE-20150708_132848.csv
The 5 GB limitation is for a single file in Cloud Storage. However, from a Cloud Replicator standpoint, there are further limits via Source Connection properties for the "cumulative file size limit per table". Initial (Full) extract and Incremental extract have different properties for the file size limit. During a replicate call, a cumulative file size check is performed per table and exceptions are raised when the limit is hit. The default values of the seeded Connection Properties are as follows:
MAX_CUMMULATIVE_INITIAL_FILE_SIZE_IN_MB = 5000
MAX_CUMMULATIVE_INCREMENTAL_FILE_SIZE_IN_MB = 1000
The cumulative size is the sum of all the available files per table in the container, irrespective of date. For example, the following files are for different dates (the 16th and the 17th), and all their file sizes are summed up as the cumulative size.
file_PS_HRS_APP_PROFILE-20140916_010515.csv
file_PS_HRS_APP_PROFILE-20140916_100512.csv
file_PS_HRS_APP_PROFILE-20141017_230512.csv
So if your cumulative file size (the sum of multiple-day files, or the sum of single-day split files) goes beyond these limits, you should change the above parameters in your Cloud Replicator setup, or just remove the properties completely (so that this check isn't even performed).
Ø File content standards:
o CSV delimited data format should have fields/columns separated by the comma (,) character and records/rows terminated by the newline (\n) character. Carriage return (\r) should be avoided for portability across platforms.
o Any newline characters present in the column data need to be removed, as they will cause problems during the load. Yes, this is a modification of the source data, but all such columns are typically comment-like columns, unlikely to be used for analysis.
o CSV should include column headers; columns can be in any arbitrary order.
o Supported CSV column value data types are VARCHAR, INTEGER, DOUBLE, NUMBER, DATE, and TIMESTAMP.
o All CSV column values should be enclosed within double quotes (") and a null value should be represented by a pair of empty double quotes. E.g. "1997","Ford","E350","". Note: this applies even to numeric, date and all other data types, not just character columns.
o Character values can contain the comma character; as long as they are enclosed within the delimiter (double quotes) it is fine. If character values contain double quotes, those should be escaped as a pair of double quotes (""), strictly consecutive characters.
As an example, let's say that the raw values are:
1997,Ford,E350,Super, "luxurious" truck,"Fully enclosed value"
Then the values that go into the CSV file would be:
"1997","Ford","E350","Super,""luxurious"" truck","""Fully enclosed value"""
o Date/Time Formats: Date values can have a time component. Timestamp does NOT support TIMEZONE. The default Date and Timestamp format is 'YYYY-MM-DD"T"HH24:MI:SS'
o For example, Date/Timestamp values:
CSV value '2014-11-25T10:00:00', DB stored value '25-NOV-14 10.00.00.000000000 AM'
CSV value '2014-11-25T16:30:00', DB stored value '25-NOV-14 04.30.00.000000000 PM'
CSV value '2014-11-25T00:00:00', DB stored value '25-NOV-14 00.00.00.000000000 AM'
CSV value (can ignore time component) '2014-11-25', DB stored value '25-NOV-14 00.00.00.000000000 AM'
o CLOB data type values have to be trimmed and limited to VARCHAR2(4000). It does not make sense to extract BLOB columns, as they store binary data and cannot be used for reporting.
A sample data CSV file is shown in the following screenshot.

Finally, you have all your CSV data files ready and populated for the first batch. You now upload these to the SSC. We discuss that next. But for now, put all your data CSV files in a particular folder, say "C:/temp/upload". All files from here will go into the SSC. I am going to discuss a specific programming option using Java (see the Potential Automations section) to generate CSV files against your source tables. This isn't supported by OTBI-Enterprise though. You are free to adopt this program, change it to your needs, or come up with your own.

Upload Data Files to SSC

OTBI-Enterprise actually leaves it up to the customer how they want to upload the files to the Storage Service Containers. However, it is assumed that customers can get a bit technical and prepare their own script to upload the files. There are quite a few ways to do that, and utilities available as well. One of them is called CyberDuck, which has a module for Oracle Storage Cloud Service. I am going to discuss a specific programming option using Java (see the Potential Automations section). This isn't supported by OTBI-Enterprise, but hey, it works! You are free to adopt this program, change it to your needs, or come up with your own. Assuming that you've done it, this concludes your data preparation and upload step.

Run your Load Plans

At first, you are required to run the Domains-Only Load Plan. Upload the CSV files that are required for this LP. Then run this LP.
Ø Click on Tasks -> Load Plan Administration -> Manage Load Plans. Then pick the Domains-Only LP called "DomainsOnly_EBS" and click on Execute. Check the execution status and wait for it to finish.
Once this finishes, you need to carry out the necessary domain configurations. That is a whole different topic, out of scope for this write-up. Then you upload the CSV files that are required for the full LP (Full_EBS) and run that LP, using the same navigation as above. You don't have to clean up the files existing in the SSC, but it is good practice to remove clutter from time to time.

Managing Deletes in Source System

Hard deletes often happen on the source system. These have to be reflected in the data warehouse, otherwise it stands a chance of reporting incorrect data. In the case of OTBI-Enterprise, the hard deletes that happen on your source system are first reflected in the SDS schema. The ETL routines are already capable of handling DELETEs, once found.
The following section discusses how OTBI-Enterprise reflects hard deletes in the SDS layer, and what your actions are. Remember the ORDER_PE.SQL file that was produced (as part of the three .SQL files you downloaded)? That's the one that comes into use now. The ORDER_PE.SQL file provides select statements against the primary key of each OLTP table, without any date filters, by design. See a portion of the file below:

The tables in the SDS schema already have the record that has now been deleted from the source system. When you provide OTBI-Enterprise with the set of PK data, it compares it with what the SDS already has and identifies what went missing from your provided PK data set. Those records are assumed to be "hard deleted" from the source. The SDS then sets a column called CDC_$DML_FLG to "D", indicating 'Delete'. The downstream ETL processes know how to take care of this. Note that OTBI-Enterprise never hard deletes a record from either the SDS or the data warehouse. It only marks the record as deleted; in other words, it soft deletes the record.

So that's how it works. Now, your action. If you happen to know that certain records were hard deleted from certain tables in your source, you should use the corresponding SQL statement from ORDER_PE.SQL to populate your PK data file (format mentioned above for .PECSV files). For example, if you happen to know that records were hard deleted from only two tables, AP_SUPPLIERS and AP_TERMS_LINES, you should provide only two .PECSV files (one against each table) in the upcoming incremental batch. Not all of them. That brings up another point. These .PECSV files are not required when you are doing an initial load; they are required only for incremental loads. You decide when you are ready to provide the .PECSV files. You can provide them during any batch, starting from the first incremental run, along with the main data files in the same batch. Meaning, files like file_AP_SUPPLIERS-20140916_010515.csv and file_keys_AP_SUPPLIERS-20140916_120512.pecsv can go in the same batch.

If you are unsure what got hard deleted in your source system, or missed keeping track of it, you really have two choices:
Ø Provide .PECSV files for all tables [following the ORDER_PE.SQL file] in the next batch
Ø Discard the warehouse and carry out another fresh full load
Going by the first approach, this could prove to be quite a costly process. If 90% of the tables did not have deletes, you are forcing OTBI-Enterprise to do a full brute-force comparison of keys unnecessarily. But then, you can get a bit smart and provide only .PECSV files in this batch and not provide any real data files. This way, you are reserving this batch (and the following ETL run) solely to manage your deletes. Maybe do that during a long weekend or so. Ideally though, you should start keeping track of your deletes, maybe using database triggers, if your DBA allows that. Going by the second approach, you get a clean data warehouse, but at the cost of losing all the rich historical information. This isn't recommended, and should be carried out only as a last resort.
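If you do go the trigger route, here is a minimal sketch of what such delete tracking could look like. The shadow table, trigger name and key column are assumptions for illustration (AP_SUPPLIERS keyed by VENDOR_ID); adapt them to the tables you actually track, and clear the log after each extract.

--Hypothetical delete log for one table; repeat for each table you care about
CREATE TABLE XX_AP_SUPPLIERS_DEL_LOG (
  VENDOR_ID   NUMBER,
  DELETED_ON  DATE DEFAULT SYSDATE
);

CREATE OR REPLACE TRIGGER XX_AP_SUPPLIERS_DEL_TRG
AFTER DELETE ON AP_SUPPLIERS
FOR EACH ROW
BEGIN
  --capture the key of every hard-deleted row
  INSERT INTO XX_AP_SUPPLIERS_DEL_LOG (VENDOR_ID, DELETED_ON)
  VALUES (:OLD.VENDOR_ID, SYSDATE);
END;
/

With a log like this you know exactly which tables need a .PECSV file in the next incremental batch, instead of extracting keys for everything.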
General Tips

A large share of the problems usually comes (a) from data quality, and (b) from data structures. Data quality is an issue that OTBI-Enterprise's Health Check feature will warn you about. When you run your ETL, the Health Check routines fire up at the beginning of the Load Plan. If any discrepancies are found in the data, the Health Check will warn you about them. You can download the Health Check log as an HTML file, and look for "Errors" or "Failures". You can probably live with "Warnings", but it is a good idea to address those as well. The failures are almost always related to bad data, or looping records in your source. These may be acceptable to the source application, but not to the BI Applications. If you allow bad data to flow into the data warehouse (those failures and errors), it is bound to cause unknown issues downstream. The above discussion about Health Check applies equally to Fusion Cloud, and you may already be aware of it. The current write-up is about External Data, so the next item (b) is probably more relevant here: the issue of data structures.

OTBI-Enterprise supports E-Business Suite R12.2 and PeopleSoft 9.2 source applications. This means it supports these as their base versions. Nowhere does it claim that it supports R12.2 with Patch 14, for instance, or PeopleSoft 9.2 with Bundle 8. And if you have applied newer patches on top of the base versions of your sources, chances are that you have newer data structures for some of the underlying tables. Remember, when OTBI-Enterprise generates the ORDER.SQL file, it is unaware of your source versions/tables. It doesn't get a chance to "connect and see" across the cloud. What it generates is purely based on its knowledge of the data structures at the base level. But there are ways you can prepare yourself for surprises. Remember that ORDER_DDL.SQL file OTBI-Enterprise generated? It's time to put that to work. The ORDER_DDL.SQL file contains the DDL statements (CREATE TABLE) for the source tables against the base version/release of the source application. This is what OTBI-Enterprise expects your data structures to look like. You could use these DDL statements, run them in a new database schema to create a replica of the source tables (without data, of course), and compare the objects against your real source application database schema. Read the comments section of the ORDER_DDL.SQL file and you will know exactly "what" needs to be done. And now that you have read this section, you know "why" you need to do this.
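One simple way to do that comparison, once the ORDER_DDL.SQL scripts have been run in a scratch schema, is a dictionary query like the sketch below. The schema names (OTBIE_REPLICA for the replica, APPS for the EBS tables) are assumptions; substitute your own owners.

--Columns OTBI-Enterprise expects that are missing or typed differently in your source
SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE
FROM   ALL_TAB_COLUMNS
WHERE  OWNER = 'OTBIE_REPLICA'
MINUS
SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE
FROM   ALL_TAB_COLUMNS
WHERE  OWNER = 'APPS'
ORDER BY TABLE_NAME, COLUMN_NAME
;

Anything this returns is a structure difference you want to reconcile before you start generating CSVs.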
Special Cases

UOM

I mentioned earlier that OTBI-Enterprise does not support usage of your source-side PL/SQL packages, procedures or functions, or even the E-Business Suite specific functions called "Fast Formulas". However, in some corner cases there is a workaround. Currently, this exists only for UOM (Units of Measure) Conversion for E-Business Suite, because there is no other way OTBI-Enterprise could make use of the conversion rates used in the source. And of course, without UOM conversions properly done, BI numbers would be off. The way UOM is supported for E-Business Suite is documented in Oracle's OTBI-E Document Library in the Known Issues section for UOM. It is slightly different from what we've discussed so far, so it is worth mentioning. At a high level, there are four ETL interfaces that capture this UOM conversion rate data. Each of these calls a PL/SQL procedure at the E-Business Suite end and loads a temporary table, which is put to work in downstream ETLs. Since the PL/SQL procedure won't be available in the SDS schema in the OTBI-Enterprise pod, you are asked to run four complex SQLs (doing the job of the procedure) and export the data out to a particular CSV file [the name of this file is important, and is documented]. OTBI-Enterprise already knows how to read and process this particular file and take it further down.

Data Security with External Data

For Oracle sources like E-Business Suite Applications and PeopleSoft Applications, the IDM (Identity Manager) at the source (on-premise) end cannot be synched up with the one at the OTBI-Enterprise (cloud) end. There is no way to guarantee single sign-on across the cloud. Also, the security grants data from on-premise source applications shouldn't be replicated to the cloud; there could be potential security threats due to the latency of that data. For non-Oracle sources, there is no equivalent comparison. For either of these cases, therefore, OTBI-Enterprise relies on data security measures built on top of the data warehouse itself, leveraging configurations you make in BIACM. This won't necessarily "match" your source applications, but that is how far you may be able to get. With more configuration options OTBI-Enterprise provides in the future, we can only get closer. The following section describes how OTBI-Enterprise supports data security and what you are recommended to configure. Prior to that, it is assumed that you have all your OTBI-Enterprise users pre-configured in the OTBI-E pod. Also, you should have a clear plan in mind of which of these users should get what kind of data access. And lastly, it is also assumed that you have configured everything else and have run a complete full load of the ETL. Here are the steps, once you are ready:
Ø Log in to BIACM.
Ø In the "Tasks" section on your left-hand panel, click on "Manage Data Security".
Ø Select the source instance that you created before generating your Load Plans. In the following screenshot, the source instance selected was "EBS1222", which is the same one we created earlier in our example.
Ø From the bottom pane, click the "plus" button to add a security object.
Ø Pick your source instance (EBS1222 in this case), and then select your "Secure by Object". If you are planning to enable Department based data security, pick the appropriate item from the drop-down box.
Ø From the "Users" list, pick your user (or users) who would have similar Department based security restrictions. In this case, we are picking a user called TM-MFITZIMMONS. Select this user and click the single arrow to move this user to the right-hand side. Note that if you have multiple users planned, you could use the User Filter to search for them.
Ø Scroll down to see the list of Departments. Select (or multi-select) the Departments to which your user(s) would have access. Click the single arrow to move them to the right-hand side. Then click OK.
Ø The security objects that you've configured will be shown in the bottom pane, as shown below.
Ø If you have other users, or wish to secure by other (supported) dimension objects, you need to carry out the same steps.
Instead of having to do this for every user (or group of users with similar security requirements), OTBI-Enterprise provides a way to import all your security configurations from an external CSV file. For the sake of this document, I am not repeating what is already documented within the OTBI-Enterprise Document Library, in the Setting up Data Security section. Even the single-user-based process, as I described above, is also mentioned in the same document library. However, I had to pick one that relates to the examples I discussed earlier, and it happened to be the first approach. Anyway, that's pretty much about it.

Potential Automations

None of these options are supported by OTBI-Enterprise. These are just options for you to use, or to come up with similar (maybe "better") ones on your own.
File Upload to Storage Service Container

The first area of potential automation would be a way to upload your files to the Storage Service Container programmatically. This is going to be a daily exercise, so it had better be automated. Here is a small and quick Java program that utilizes class libraries from Oracle Cloud Storage Services and helps you move towards automation. This program expects a property file as input, called "StorageService_Manager_Input.properties". In this, you provide certain details about your storage container setup, as follows:
STORAGE_SERVICE_URL -> Storage service URL that Oracle gave you.
STORAGE_SERVICE_NAME -> Name of the Storage Service. Oracle gives you this as well.
STORAGE_SERVICE_USER_NAME -> Your username to access the Storage Service.
STORAGE_SERVICE_PASSWORD -> Your password to access the Storage Service.
STORAGE_SERVICE_CONTAINER_NAME -> The name of the container, as you configured it while registering your source.
ACTION -> The action you want this program to take. Options are:
LIST_CONTAINERS -> Gives a listing of all containers within your Storage Service
UPLOAD_FILES -> Uploads all your files from "UPLOAD_FILES_FROM_DIRECTORY" into the container "STORAGE_SERVICE_CONTAINER_NAME"
LIST_FILES -> Gives a listing of all files currently existing in the container "STORAGE_SERVICE_CONTAINER_NAME"
DELETE_FILES -> Deletes all files from the container "STORAGE_SERVICE_CONTAINER_NAME"
CREATE_CONTAINER -> Creates a container with the name "STORAGE_SERVICE_CONTAINER_NAME"
UPLOAD_FILES_FROM_DIRECTORY -> The location on your local server where all your files are waiting to be uploaded.
Create and save such a property file with the name "StorageService_Manager_Input.properties". A sample screenshot is below:
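In case the screenshot does not come through, here is a hypothetical example of what the file could contain; every value is a placeholder, not a real URL, account or path. The container name and directory simply echo the examples used earlier in this document.

STORAGE_SERVICE_URL=<storage service URL that Oracle gave you>
STORAGE_SERVICE_NAME=<your storage service name>
STORAGE_SERVICE_USER_NAME=<your user name>
STORAGE_SERVICE_PASSWORD=<your password>
STORAGE_SERVICE_CONTAINER_NAME=ExternalData_EBS
ACTION=UPLOAD_FILES
UPLOAD_FILES_FROM_DIRECTORY=C:/temp/upload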
Following is the Java code. Copy it into an appropriate IDE like Eclipse and save it as "StorageService_Manager.java". Then compile it to create a .jar file; call it "StorageService_Manager.jar". Needless to mention, you need to have the Oracle Cloud Service libraries referenced here, and a few more. I had these under my Library folder (see screenshot):

import java.io.*;
import java.util.*;
import oracle.cloud.storage.CloudStorage;
import oracle.cloud.storage.CloudStorageConfig;
import oracle.cloud.storage.CloudStorageFactory;
import oracle.cloud.storage.exception.SystemException;
import oracle.cloud.storage.model.Key;
import oracle.cloud.storage.model.StorageInputStream;

public class StorageService_Manager {

    public static Properties inParam;
    public static String storageUrl;
    public static String storageServiceName;
    public static String storageUserName;
    public static String storagePassword;
    public static String storageContainerName;
    public static String storageAction;
    public static String uploadFromDirectory;
    public static CloudStorage myConnection;

    public static void main(String[] args) {
        try{
            initProperties();
            storageUrl = inParam.getProperty("STORAGE_SERVICE_URL").trim();
            storageServiceName = inParam.getProperty("STORAGE_SERVICE_NAME").trim();
            storageUserName = inParam.getProperty("STORAGE_SERVICE_USER_NAME").trim();
            storagePassword = inParam.getProperty("STORAGE_SERVICE_PASSWORD").trim();
            storageContainerName = inParam.getProperty("STORAGE_SERVICE_CONTAINER_NAME").trim();
            uploadFromDirectory = inParam.getProperty("UPLOAD_FILES_FROM_DIRECTORY").trim();
            storageAction = inParam.getProperty("ACTION").trim();

            //config and cloud storage object
            CloudStorageConfig myConfig = new CloudStorageConfig();
            myConfig.setServiceName(storageServiceName)
                    .setUsername(storageUserName)
                    .setPassword(storagePassword.toCharArray())
                    .setServiceUrl(storageUrl);
            myConnection = CloudStorageFactory.getStorage(myConfig);

            //list containers within a storage
            if(storageAction.toUpperCase().equals("LIST_CONTAINERS")) {
                listContainers();
            }
            //upload files
            if(storageAction.toUpperCase().equals("UPLOAD_FILES")) {
                uploadFiles();
            }
            //list files
            if(storageAction.toUpperCase().equals("LIST_FILES")) {
                listFiles();
            }
            //delete files
            if(storageAction.toUpperCase().equals("DELETE_FILES")) {
                deleteFiles();
            }
            //create container
            if(storageAction.toUpperCase().equals("CREATE_CONTAINER")) {
                createStorageContainer();
            }
            System.out.println("End ...");
        }catch (Exception e){
            e.printStackTrace();
            System.out.println("Error has occurred \n" + e);
        }
    }

    private static void createStorageContainer() {
        System.out.println("Creating container :" + storageContainerName);
        try {
            oracle.cloud.storage.model.Container c = myConnection.createContainer(storageContainerName);
        }catch (Exception e){
            e.printStackTrace();
            System.out.println("Error in creating container \n" + e);
        }
    }

    //not wired to any ACTION above; handy if you want to rename files after processing
    private static void renameFiles() {
        System.out.println("Renaming files in container :" + storageContainerName);
        java.util.List<Key> cobjects = myConnection.listObjects(storageContainerName, null);
        for (int i = 0; i < cobjects.size(); i++) {
            System.out.println("Renaming :" + cobjects.get(i).getKey());
            StorageInputStream sins = myConnection.retrieveObject(storageContainerName, cobjects.get(i).getKey());
            myConnection.storeObject(storageContainerName, "done_" + cobjects.get(i).getKey(), "text/plain", sins);
            myConnection.deleteObject(storageContainerName, cobjects.get(i).getKey());
        }
    }

    private static void deleteFiles() {
        System.out.println("Deleting files in container :" + storageContainerName);
        try {
            java.util.List<Key> cobjects = myConnection.listObjects(storageContainerName, null);
            for (int i = 0; i < cobjects.size(); i++) {
                System.out.println("Deleting :" + cobjects.get(i).getKey());
                myConnection.deleteObject(storageContainerName, cobjects.get(i).getKey());
            }
        }catch(SystemException ex) {
            System.out.println("Error in deleting files in container :" + storageContainerName);
            ex.printStackTrace(System.out);
        }
    }

    private static void listFiles() {
        System.out.println("Listing files in container :" + storageContainerName);
        try {
            java.util.List<Key> cobjects = myConnection.listObjects(storageContainerName, null);
            for (int i = 0; i < cobjects.size(); i++) {
                System.out.println(cobjects.get(i) + " :" + cobjects.get(i).getKey());
            }
        }catch(SystemException ex) {
            System.out.println("Error in listing files in container :" + storageContainerName);
            ex.printStackTrace(System.out);
        }
    }

    private static void uploadFiles() {
        try {
            File f = new File(uploadFromDirectory);
            File files[] = f.listFiles();
            System.out.println("Uploading files to container :" + storageContainerName);
            for (File file : files) {
                try {
                    FileInputStream fis = new FileInputStream(file);
                    String fileName = file.getName();
                    System.out.println("Uploading :" + fileName);
                    myConnection.storeObject(storageContainerName, fileName, "text/plain", fis);
                }catch(FileNotFoundException ex) {
                    System.out.println("Error in uploading file :" + file.getName());
                    ex.printStackTrace(System.out);
                }
            }
        }catch (NullPointerException ex){
            System.out.println("Error in uploading files in container :" + "Upload file directory does not exist");
            ex.printStackTrace(System.out);
        }
        catch (Exception ex){
            System.out.println("Error in uploading files in container :" + storageContainerName);
            ex.printStackTrace(System.out);
        }
    }

    private static void listContainers() {
        System.out.println("Listing Containers");
        try {
            List<oracle.cloud.storage.model.Container> containerList = myConnection.listContainers();
            for (int i = 0; i < containerList.size(); i++) {
                System.out.println(containerList.get(i) + " :" + containerList.get(i).getName());
            }
        }catch(Exception ex) {
            System.out.println("Error in listing containers");
            ex.printStackTrace(System.out);
        }
    }

    private static void initProperties() throws IOException {
        inParam = new Properties();
        inParam.load(ClassLoader.getSystemResourceAsStream("StorageService_Manager_Input.properties"));
    }
}

That's it. Running it is fairly simple. Be in the directory where you have your JAR. Then run it as:
java -cp .;%CLASS_PATH% -jar StorageService_Manager.jar >> storage_out.log
The output file "storage_out.log" records all that happens on the console.

CSV file generator for Oracle Sources

With the ORDER.SQL file (or, for that matter, the ORDER_PE.SQL file) at hand, the next step is to run those SQLs against your source database, set the #LAST_EXPORTED_DATE parameter with each run, and export the result sets into CSV files following the OTBI-Enterprise rules and regulations – and doing the same thing on a daily basis is a nightmare if done manually. This is yet another area that calls for obvious automation. Here is a small and quick Java program that could potentially carry out the job and take you a step closer to complete automation. Actually, two programs. The first one extends a few classes from the open source Java CSV libraries to meet our needs (rules etc.). It is called "MyResultSetHelperService.java". The second one is the main program responsible for spitting out the CSVs, called "CSVLoader.java". You need to have both of these together in your project and compile them into a jar called "CSVLoader.jar". I will provide the source code for both of these Java programs, but before that I'll explain how this thing works. The utility expects a property file as input, called "CSVLoader_Input.properties". In this, you provide certain details about your CSV file generation setup, as follows:
SQL_FILE_NAME -> Name and path of your ORDER.SQL or ORDER_PE.SQL
SRC_DB_URL -> Standard JDBC URL pointing to your source database. Format is: jdbc:oracle:thin:@host:port:SID
SRC_DB_USER -> Source database username. Should have SELECT access.
SRC_DB_PSWD -> Source database password.
OUTPUT_DIR -> Name and path of a directory where the CSVs will be created
LAST_EXPORTED_DATE -> Last exported date. Format is: YYYY-MM-DD
FULL_LOAD -> Whether or not this is a full load
N -> With this setting, the utility will apply the date you provided in LAST_EXPORTED_DATE to the WHERE clause your SQL file has. Note that if you wish to extract from a certain date even for the first initial load, you can trick the utility by using this option and setting your initial date accordingly. Needless to mention, with this setting a valid date value is needed for the LAST_EXPORTED_DATE parameter.
Y -> With this setting, the utility will ignore all the WHERE clauses your SQL file has and extract all data. Whatever you provide in the LAST_EXPORTED_DATE parameter is ignored with this setting.
REMOVE_LINE_SEP -> Whether or not to remove line separators.
N -> Will not remove
Y -> Will remove. This is what you should set, since that's what OTBI-Enterprise requires.
Create and save such a property file with the name "CSVLoader_Input.properties". A sample screenshot is below. I have used this to extract data for my first full load since year 2010. Note that I have set the FULL_LOAD option to 'N' and have provided the value of LAST_EXPORTED_DATE as 2010-01-01.
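Again, in case the screenshot does not render, here is a hypothetical rendition of that property file, matching the settings just described; the SQL file name, connection details and paths are placeholders.

SQL_FILE_NAME=C:/temp/order_files/Full_EBS_ORDER.SQL
SRC_DB_URL=jdbc:oracle:thin:@host:1521:SID
SRC_DB_USER=<read-only source user>
SRC_DB_PSWD=<password>
OUTPUT_DIR=C:/temp/upload
LAST_EXPORTED_DATE=2010-01-01
FULL_LOAD=N
REMOVE_LINE_SEP=Y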
MyResultSetHelperService.java

import java.io.IOException;
import java.io.Reader;
import java.sql.Clob;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.sql.Types;
import java.util.ArrayList;
import java.util.List;

import com.opencsv.ResultSetHelperService;

public class MyResultSetHelperService extends ResultSetHelperService {

    // Note: SimpleDateFormat patterns are case sensitive, so use yyyy-MM-dd (not YYYY-MM-DD)
    static final String MY_DEFAULT_DATE_FORMAT = "yyyy-MM-dd";
    static final String MY_DEFAULT_TIMESTAMP_FORMAT = "yyyy-MM-dd'T'HH:mm:ss";
    static final int NVARCHAR = -9;
    static final int NCHAR = -15;
    static final int LONGNVARCHAR = -16;
    static final int NCLOB = 2011;

    final String removeLineFeeds;

    public MyResultSetHelperService(final String removeLineFeeds) {
        super();
        this.removeLineFeeds = removeLineFeeds;
    }

    // Overridden to apply the custom date formats
    public String[] getColumnValues(ResultSet rs) throws SQLException, IOException {
        return this.getColumnValues(rs, false, MY_DEFAULT_DATE_FORMAT, MY_DEFAULT_TIMESTAMP_FORMAT);
    }

    // Overridden to apply the custom date formats
    public String[] getColumnValues(ResultSet rs, boolean trim) throws SQLException, IOException {
        return this.getColumnValues(rs, trim, MY_DEFAULT_DATE_FORMAT, MY_DEFAULT_TIMESTAMP_FORMAT);
    }

    // Copied as-is from the super class so that the local getColumnValue method below is used
    public String[] getColumnValues(ResultSet rs, boolean trim, String dateFormatString, String timeFormatString)
            throws SQLException, IOException {
        List<String> values = new ArrayList<>();
        ResultSetMetaData metadata = rs.getMetaData();
        for (int i = 0; i < metadata.getColumnCount(); i++) {
            values.add(getColumnValue(rs, metadata.getColumnType(i + 1), i + 1, trim, dateFormatString, timeFormatString));
        }
        String[] valueArray = new String[values.size()];
        return values.toArray(valueArray);
    }

    // Overridden to return only the first 4000 characters of a CLOB
    private static String read(Clob c) throws SQLException, IOException {
        StringBuilder sb = new StringBuilder((int) c.length());
        Reader r = c.getCharacterStream();
        char[] cbuf = new char[CLOBBUFFERSIZE];
        int n;
        while ((n = r.read(cbuf, 0, cbuf.length)) != -1) {
            sb.append(cbuf, 0, n);
        }
        String value = sb.toString();
        if (value.length() > 4000) {
            value = value.substring(0, 4000);
        }
        return value;
    }

    // Overridden to trim newline characters from column values and to add ROWID support
    public String getColumnValue(ResultSet rs, int colType, int colIndex, boolean trim,
            String dateFormatString, String timestampFormatString) throws SQLException, IOException {
        String value = "";
        switch (colType) {
        case Types.BIT:
        case Types.JAVA_OBJECT:
            value = handleObject(rs.getObject(colIndex));
            break;
        case Types.BOOLEAN:
            boolean b = rs.getBoolean(colIndex);
            value = Boolean.valueOf(b).toString();
            break;
        case NCLOB: // todo: use rs.getNClob
        case Types.CLOB:
            Clob c = rs.getClob(colIndex);
            if (c != null) {
                value = read(c);
            }
            // Code added to trim newline characters from the column value
            if (value != null) {
                value = removeLineBreaks(value);
            }
            break;
        case Types.BIGINT:
            value = handleLong(rs, colIndex);
            break;
        case Types.DECIMAL:
        case Types.DOUBLE:
        case Types.FLOAT:
        case Types.REAL:
        case Types.NUMERIC:
            value = handleBigDecimal(rs.getBigDecimal(colIndex));
            break;
        case Types.INTEGER:
        case Types.TINYINT:
        case Types.SMALLINT:
            value = handleInteger(rs, colIndex);
            break;
        case Types.DATE:
            value = handleDate(rs, colIndex, dateFormatString);
            break;
        case Types.TIME:
            value = handleTime(rs.getTime(colIndex));
            break;
        case Types.TIMESTAMP:
            value = handleTimestamp(rs.getTimestamp(colIndex), timestampFormatString);
            break;
        case NVARCHAR: // todo: use rs.getNString
        case NCHAR: // todo: use rs.getNString
        case LONGNVARCHAR: // todo: use rs.getNString
        case Types.LONGVARCHAR:
        case Types.VARCHAR:
        case Types.CHAR:
            String columnValue = rs.getString(colIndex);
            if (trim && columnValue != null) {
                value = columnValue.trim();
            } else {
                value = columnValue;
            }
            // Code added to trim newline characters from the column value
            if (value != null) {
                value = removeLineBreaks(value);
            }
            break;
        case Types.ROWID:
            value = rs.getString(colIndex);
            break;
        default:
            value = "";
        }
        if (value == null) {
            value = "";
        }
        return value;
    }

    // Local method to strip newline characters from a column value when REMOVE_LINE_SEP is 'Y'
    public String removeLineBreaks(String origStr) {
        if (removeLineFeeds.equalsIgnoreCase("Y")) {
            return origStr.replaceAll("\\r\\n", " ").replaceAll("\\r", " ").replaceAll("\\n", " ");
        } else {
            return origStr;
        }
    }
}

---------------------------
CSVLoader.java
-----------------------------

import java.io.BufferedReader;
import java.io.File;
import java.io.FileOutputStream;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.text.DateFormat;
import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.Properties;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

import com.opencsv.CSVWriter;

public class CSVLoader {

    public static Properties inParam;

    public static void main(String[] args) {
        String inFileName = "C:/Temp/Order.sql";
        String srcDB_URL = "jdbc:oracle:thin:@abc.com:1521:orcl";
        String srcDB_User = "user";
        String srcDB_Pswd = "password";
        String outDir = "";
        String lastExportedDt = "2000-01-01";
        String fullLoadFlag = "Y";
        String outFileName = "";
        String removeLineSep = "Y";
        String line = null;
        String sqlStmt = "";
        String tblName = "";
        char sqlStarted = 'N';
        char sqlEnded = 'N';
        Connection conn = null;

        try {
            // Load the properties and set them
            initProperties();
            inFileName = inParam.getProperty("SQL_FILE_NAME").trim();
            srcDB_URL = inParam.getProperty("SRC_DB_URL").trim();
            srcDB_User = inParam.getProperty("SRC_DB_USER").trim();
            srcDB_Pswd = inParam.getProperty("SRC_DB_PSWD").trim();
            outDir = inParam.getProperty("OUTPUT_DIR").trim();
            lastExportedDt = inParam.getProperty("LAST_EXPORTED_DATE").trim();
            fullLoadFlag = inParam.getProperty("FULL_LOAD").trim();
            removeLineSep = inParam.getProperty("REMOVE_LINE_SEP").trim();
            System.out.println("\nOutput Directory: " + outDir);
            System.out.println("\nFull Load Flag: " + fullLoadFlag);
            System.out.println("\nRemove Line Separator Flag: " + removeLineSep);

            FileReader fileReader = new FileReader(inFileName);
            BufferedReader bufferedReader = new BufferedReader(fileReader);

            // Set up the source DB connection
            DriverManager.registerDriver(new oracle.jdbc.driver.OracleDriver());
            conn = DriverManager.getConnection(srcDB_URL, srcDB_User, srcDB_Pswd);
            conn.setAutoCommit(false);
            Statement statement = conn.createStatement();

            // Read the SQL file
            while ((line = bufferedReader.readLine()) != null) {
                line = line.trim();
                if (line.length() > 0
                        && (line.equals("SELECT") || line.startsWith("SELECT "))) {
                    sqlStarted = 'Y';
                }
                if (line.endsWith(";")) {
                    sqlEnded = 'Y';
                }
                if (sqlStarted == 'Y') {
                    sqlStmt = sqlStmt + " " + line;
                }
                if (sqlEnded == 'Y') {
                    // Remove the trailing ;
                    sqlStmt = sqlStmt.replaceAll(";", "");
                    // Replace the #LAST_EXPORTED_DATE placeholder
                    sqlStmt = sqlStmt.replaceAll("#LAST_EXPORTED_DATE", "TO_DATE('" + lastExportedDt + "', 'YYYY-MM-DD')");
                    System.out.println("\nsqlStmt :" + sqlStmt);
                    // Get the table name from the SQL string
                    tblName = getTableName(sqlStmt);
                    System.out.println("tblName :" + tblName);
                    if (fullLoadFlag.equalsIgnoreCase("Y")) {
                        sqlStmt = trimWhereClause(sqlStmt);
                        System.out.println("trimmed sqlStmt :" + sqlStmt);
                    }
                    try {
                        // Execute the SELECT statement
                        ResultSet resultData = statement.executeQuery(sqlStmt);
                        // Get the load date/time
                        DateFormat dateFormat = new SimpleDateFormat("yyyyMMdd_HHmmss");
                        Calendar cal = Calendar.getInstance();
                        String loadDateTime = dateFormat.format(cal.getTime());
                        // Set the output file name
                        if (inFileName.toUpperCase().contains("ORDER_PE.SQL")) {
                            outFileName = outDir + "/" + "file_keys_" + tblName + "-" + loadDateTime + ".pecsv";
                        } else {
                            outFileName = outDir + "/" + "file_" + tblName + "-" + loadDateTime + ".csv";
                        }
                        System.out.println("Out FileName: " + outFileName);
                        // Write to the output CSV file in UTF-8
                        //CSVWriter writer = new CSVWriter(new FileWriter(new File(outFileName)), ',');
                        CSVWriter writer = new CSVWriter(new OutputStreamWriter(new FileOutputStream(outFileName), "UTF-8"), ',');
                        writer.setResultService(new MyResultSetHelperService(removeLineSep));
                        writer.writeAll(resultData, true);
                        writer.close();
                        resultData.close();
                    } catch (Exception e) {
                        e.printStackTrace();
                        System.out.println("Error in executing query: " + sqlStmt);
                        System.out.println("Error: " + e);
                    }
                    sqlEnded = 'N';
                    sqlStarted = 'N';
                    sqlStmt = "";
                }
            } // end while
            System.out.println("\n Source Extraction and CSV load done");
        } catch (Exception e) {
            e.printStackTrace();
            System.out.println("Error" + e);
        } finally {
            try { if (null != conn) conn.close(); } catch (Exception e) {}
        }
    }

    // Extract the table name following the FROM keyword of a SELECT statement
    public static String getTableName(String sqlStr) {
        String patternStr = "SELECT\\s(.*?)\\sFROM\\s"
                + "((?!WHERE).*?)"
                + "(?:WHERE\\s|\\Z)(.*)";
        String tmpStr = sqlStr.replaceAll(";", "").replaceAll("\\r\\n", " ").replaceAll("\\s+", " ").trim();
        Pattern pattern = Pattern.compile(patternStr);
        Matcher matcher = pattern.matcher(tmpStr);
        if (matcher.matches()) {
            String tableName = matcher.group(2);
            if (tableName.contains(" ")) {
                tableName = tableName.substring(0, tableName.indexOf(" "));
            }
            return tableName;
        } else {
            return null;
        }
    }

    public static void initProperties() throws IOException {
        inParam = new Properties();
        inParam.load(ClassLoader.getSystemResourceAsStream("CSVLoader_Input.properties"));
    }

    // Drop the WHERE clause so a full extract is performed when FULL_LOAD = Y
    public static String trimWhereClause(String sqlStr) {
        String tmpStr = sqlStr.replaceAll(";", "").replaceAll("\\r\\n", " ").replaceAll("\\s+", " ").trim();
        if (tmpStr.contains("WHERE ")) {
            tmpStr = tmpStr.substring(0, tmpStr.indexOf("WHERE "));
        }
        return tmpStr;
    }
}

That's it. Running it is fairly simple. Be in the directory where you have your JAR, then run:

java -cp .;%CLASS_PATH% -jar CSVLoader.jar >> out.log

The output file "out.log" records everything written to the console, in case you need it later. The CSV files you need will be written to the directory specified by the OUTPUT_DIR parameter. Note that this utility is not multi-threaded; if you use it, consider extending it to run multi-threaded, because exporting data from your source serially can take a long time.
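For reference, the utility reads its settings from a file named CSVLoader_Input.properties on the classpath. A minimal sketch of that file is shown below; the property keys are the ones the code reads with getProperty, while all of the values are illustrative placeholders for your own environment.

SQL_FILE_NAME=C:/Temp/Order.sql
SRC_DB_URL=jdbc:oracle:thin:@yourhost:1521:orcl
SRC_DB_USER=user
SRC_DB_PSWD=password
OUTPUT_DIR=C:/Temp/output
LAST_EXPORTED_DATE=2000-01-01
FULL_LOAD=Y
REMOVE_LINE_SEP=Y

Place the file where the utility's classpath can see it (for example the directory you run the JAR from), since it is loaded via ClassLoader.getSystemResourceAsStream.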


Knowledge

Utility For Selectively Disabling Upgrade Load Plan Steps

Author: Gourav Sarkar, Software Development Manager, Oracle BI-Apps Development
Introduction: In OBIA, standalone load plans are provided out-of-the-box (OOTB) for data upgrades from one version to the next. Customers need to execute these standalone load plans after upgrading to a new BI-Apps version. The data upgrade is required so that customers can apply the data fixes related to the OBIA upgrade.
Impact to a Customized Warehouse: If customizations have been made to the DW ETL, the upgrade will overwrite the customized data in the DW. To avoid this, a utility is provided to help customers apply the upgrade selectively. The challenges in this scenario are:
(1) Customizations that add new DW tables
(2) Customizations in out-of-the-box DW tables, i.e. adding new columns to OOTB tables
For the first scenario, the OOTB upgrade ETL will not regress or overwrite the customizations. In the second scenario, the OOTB tables may get truncated as part of the upgrade ETL run. In that situation, customers will need to re-run their customized code or use the utility to selectively run the upgrade ETL. The steps below outline how the utility can be used to selectively run the OOTB upgrade ETL.
Utility Functionality: The utility disables the Load Plan steps that customers do not want to execute. It also intelligently finds and disables the related load plan steps that truncate / reset the target tables (the target tables associated with the scenarios that customers do not want to reload). If a reset / truncate table step has multiple target tables associated with it, the utility removes only the target table that customers do not want to reload from that step. The utility consists of 2 parts:
(1) CSV Files. The CSV files act as inputs to the utility. Each adaptor version (e.g. EBS 12.2, EBS 12.1.3, EBS 12, etc.) has separate csv files. The CSV files contain all SDE, SIL and PLP scenarios used in the upgrade load plans to load the various target tables. Each CSV file contains 4 columns: (a) Load Plan step name, (b) Scenario Name, (c) Target table name populated by the scenario, (d) Enable Flag (Y/N). The enable flag controls the enabling/disabling of load plan steps. If the flag is set to 'N', the load plan step pertaining to the scenario, along with the associated reset / truncate table steps, will be disabled.
(2) The Groovy Code. The Groovy code is the heart of the utility and disables the Load Plan steps based on the .CSV input file.
Running the Utility:
(1) List the target tables (staging or warehouse tables) to which you have added new columns and for which you do not want to run the upgrade maps.
(2) Create a copy of the .csv file (e.g. "Copy_Of_EBS1213_DW_Upgrade.csv") pertaining to the specific adaptor. The csv files for the various adaptors are available here.
(3) Set the Enable_flag value to 'N' (the last column in each row) in the copied version of the corresponding .CSV file for the tables listed in point #1. For example, if you do not want to load the W_AP_AGING_INVOICE_A table, find the W_AP_AGING_INVOICE_A entry (or entries) in the .csv file and set the enable_flag to 'N' (as depicted below; a sample edited row is also sketched after the steps).
(4) Remove the other rows (which you do not want to modify) from the copied version of the .csv file. Please ensure that there are no additional blank rows at the end of the .csv file.
(5) Save the .csv file.
(6) Log into ODI Studio using your login credentials.
(7) Create a duplicate copy of the upgrade load plan that you want to execute. Also create backups of the tables listed in point #1.
(8) Open Tools -> Groovy -> New Script.
(9) Copy the code available here and paste it into the new Groovy script editor.
(10) Execute the script by clicking the green triangle (Execute) button.
(11) The program will run and the following window will pop up.
(12) Select the upgrade Load Plan name that you want to execute, enter the location of the copied version of the .CSV file, and click the "OK" button.
(13) The program runs and disables the steps flagged in the .csv file.
(14) After successful Groovy script execution, a log file named "DisableLPSteps.log" will be created in the same directory where you placed the copied version of the .csv file.
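For reference, a sketch of an edited row in the copied .csv file is shown below. It follows the four columns described above (Load Plan step name, Scenario Name, Target table name, Enable Flag); the first two values are placeholders for the actual step and scenario names already present in the OOTB file for your adaptor, and only the last column should be changed.

<LOAD_PLAN_STEP_NAME>,<SCENARIO_NAME>,W_AP_AGING_INVOICE_A,N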


Customization

Loading Custom Calendars using the Universal Adaptor

Loading Custom Calendars using the Universal Adaptor
Author: Chuan Shi
Introduction
In the Oracle BI Apps time dimension, we support fiscal calendars extracted from customers' OLTP systems. However, there can be situations where customers need to use a custom calendar that does not exist in any OLTP. For example, Siebel customers do not have any fiscal calendars in the OLTP, and EBS/PSFT/Fusion customers may not have the specific calendar they want to use in their OLTP. In these cases, the custom fiscal calendar can be seeded in the following csv files (shipped OOTB) and extracted via the UNIVERSAL adaptor: file_mcal_cal_d.csv*, file_mcal_period_ds.csv, and file_mcal_context_g.csv*. (* Note that these two files are replaced by file_mcal_cal_ds.csv and file_mcal_context_gs.csv in the Oracle BI Apps 11.1.1.10.1 release and beyond.) This blog article provides an explanation and an example of how to do this.
Load a Custom Calendar via the Universal Adaptor
A customer can load any custom fiscal calendar via the universal adaptor. There are three SDE tasks for fiscal calendars in this adaptor: SDE_Universal_TimeDimension_Calendar, SDE_Universal_TimeDimension_MCalPeriod, and SDE_Universal_TimeDimension_MCalContext. These tasks load the fiscal calendar data seeded in the corresponding csv files into the relevant DW tables. As long as the customer provides the fiscal calendar data correctly in those csv files, the custom calendar will be loaded into the DW. To demonstrate the idea, we assume that a customer wants to load a fiscal calendar where each fiscal year starts on July 1st of the current calendar year and ends on June 30th of the next calendar year; each fiscal period is a calendar month (i.e., the first period of each fiscal year runs from July 1st to July 31st); and the date range of this fiscal calendar is from July 1st 2015 to June 30th 2017 (i.e., two fiscal years, FY 2016 and FY 2017). We discuss how to prepare the data for these three SDE tasks next.
Prepare Data for FILE_MCAL_CAL_D.CSV
FILE_MCAL_CAL_D.CSV is used to register the custom calendar in W_MCAL_CAL_D. The granularity of this DW table is the calendar level: each row specifies a unique fiscal calendar. In the csv file, a customer is required to provide the information needed to populate this calendar:
ROW_WID – A unique numerical ID that identifies the calendar. Note that you may have other calendars loaded from the OLTP and DW; each of these calendars, together with your custom calendar, must have a unique ROW_WID.
MCAL_CAL_ID – A string that specifies the ID of the custom calendar.
MCAL_CAL_NAME – A string that specifies the name of the custom calendar.
MCAL_CAL_CLASS – A string that specifies the class of the calendar, e.g., OLTP Sourced or Generated. For a custom calendar, OLTP Sourced can be used.
FISCAL_CALENDAR_FLG – Y if this is a fiscal calendar; else N.
BUDGETARY_CALENDAR_FLG – Y if this is a budgetary calendar; else N.
INTEGRATION_ID – The unique identifier of the calendar; it can be the same as the MCAL_CAL_ID.
DATASOURCE_NUM_ID – The DSN for this calendar.
W_INSERT_DT – Insert date; the format is YYYYMMDDHHMISS, e.g., 20150513000000.
W_UPDATE_DT – Update date; the format is YYYYMMDDHHMISS, e.g., 20150513000000.
TENANT_ID – DEFAULT.
X_CUSTOM – 0.
No matter whether a column is a string or a number, do not use any quotes in the csv file. Suppose that for the custom calendar we are interested in, we name it Universal~Custom.
Then, the content of the sample csv file for this calendar is:
Prepare Data for FILE_MCAL_PERIOD_DS.CSV
FILE_MCAL_PERIOD_DS.CSV is used to load all the fiscal periods of the custom calendar into W_MCAL_PERIOD_DS. The granularity of this DW table is the fiscal period level: each row specifies a unique fiscal period. In the csv file, a customer is required to provide the information needed to populate all the periods of the calendar:
MCAL_CAL_ID – A string that specifies the ID of the custom calendar.
MCAL_CAL_NAME – A string that specifies the name of the custom calendar.
MCAL_CAL_CLASS – A string that specifies the class of the calendar, e.g., OLTP Sourced or Generated. For a custom calendar, OLTP Sourced can be used.
ADJUSTMENT_PERIOD_FLG – Y if the period is an adjustment period; else N.
MCAL_PERIOD_TYPE – A meaningful label for the type of the period, e.g., Custom.
MCAL_PERIOD_NAME – The name of the fiscal period.
MCAL_PERIOD – A number that shows the period number, e.g., 1, 2, etc.
MCAL_PERIOD_ST_DT – Period start date; the format is YYYYMMDDHHMISS, e.g., 20150701000000.
MCAL_PERIOD_END_DT – Period end date; the format is YYYYMMDDHHMISS, e.g., 20150731000000.
MCAL_QTR – A number that shows the fiscal quarter (e.g., 1, 2, 3 or 4) to which this period belongs.
MCAL_YEAR – A four digit number that shows the fiscal year (e.g., 2016) to which this period belongs.
MCAL_QTR_START_DT – Start date of the fiscal quarter to which this period belongs; the format is YYYYMMDDHHMISS, e.g., 20150701000000.
MCAL_QTR_END_DT – End date of the fiscal quarter to which this period belongs; the format is YYYYMMDDHHMISS, e.g., 20150930000000.
MCAL_YEAR_START_DT – Start date of the fiscal year to which this period belongs; the format is YYYYMMDDHHMISS, e.g., 20150701000000.
MCAL_YEAR_END_DT – End date of the fiscal year to which this period belongs; the format is YYYYMMDDHHMISS, e.g., 20160630000000.
INTEGRATION_ID – The unique identifier of the fiscal period; it is suggested that you use the concatenation of MCAL_CAL_ID and MCAL_PERIOD_NAME.
PERIOD_CLASSIFICATION_CODE – __NOT_APPLICABLE__.
DATASOURCE_NUM_ID – The DSN for this calendar.
W_INSERT_DT – Insert date; the format is YYYYMMDDHHMISS, e.g., 20150513000000. You can leave it empty if you do not want to provide it.
W_UPDATE_DT – Update date; the format is YYYYMMDDHHMISS, e.g., 20150513000000. You can leave it empty if you do not want to provide it.
TENANT_ID – DEFAULT.
X_CUSTOM – 0.
No matter whether a column is a string or a number, do not use any quotes in the csv file. It is very important for a customer to provide correct values for crucial columns such as the start and end dates of fiscal periods, quarters, and years. If they are incorrect, they will be loaded into W_MCAL_PERIOD_D as bad data, and when this table is aggregated to load W_MCAL_QTR_D and W_MCAL_YEAR_D, there could be ETL failures. In our example, each fiscal year starts on July 1st of the current year and ends on June 30th of the next year, and each fiscal period is a month. Using fiscal year 2016 as an example, the 12 fiscal periods should have the following values (also be careful about leap years):
The content of the sample csv file for all the periods in our example is:
Prepare Data for FILE_MCAL_CONTEXT_G.CSV
Note that not all customers need this. FILE_MCAL_CONTEXT_G.CSV is mainly used by Financials customers who have different organizations or ledgers, where each organization or ledger may use a different fiscal calendar.
Therefore the relationship between organization and fiscal calendar is stored in the W_MCAL_CONTEXT_G table. (Siebel customers do not need to configure this csv file.) In the csv file, a customer is required to provide the information needed to populate this table:
ORG_ID – A string that specifies the ID of the organization; either this or the LEDGER_ID should not be null.
ORG_CODE – A string that specifies the code of the organization; this can be null.
LEDGER_ID – A string that specifies the ID of the ledger; either this or the ORG_ID should not be null.
LEDGER_NAME – A string that specifies the name of the ledger; this can be null.
CALENDAR_ID – The ID of the calendar, e.g., Universal~Custom in our example.
CALENDAR_NAME – The name of the calendar.
INTEGRATION_ID – The unique identifier of the org/ledger and calendar combination; e.g., you can concatenate ORG_ID or LEDGER_ID with CALENDAR_ID, and you can also prefix the concatenated string with GL to stand for general ledger, e.g., GL~3142~Universal~Custom.
MCAL_CAL_WID – The WID of the calendar. This must be the same as the ROW_WID chosen for the given calendar. In our example, we use 4999 as the ROW_WID for the calendar Universal~Custom, and since we use Universal~Custom as the calendar for this org or ledger, we need to put 4999 for the MCAL_CAL_WID.
DATASOURCE_NUM_ID – DSN.
W_INSERT_DT – Insert date; the format is YYYYMMDDHHMISS, e.g., 20150513000000.
W_UPDATE_DT – Update date; the format is YYYYMMDDHHMISS, e.g., 20150513000000.
TENANT_ID – DEFAULT.
X_CUSTOM – 0.
No matter whether a column is a string or a number, do not use any quotes in the csv file. Suppose that for the custom calendar we are interested in, we name it Universal~Custom. Then, the content of the sample csv file for this calendar is:
Add the Universal Adaptor to the Generated Execution Plan
Once these csv files are ready, a customer should generate a load plan that also includes the universal adaptor so that these tasks are picked up in the generated EP. Running the EP will then load the custom calendar specified in these csv files into the DW, and the customer can use it later in reporting.
Modify or Extend a Custom Calendar during Incremental Loads
Before the full load, customers are advised to prepare data in file_mcal_period_ds.csv for all the fiscal years they want the fiscal calendar to cover. However, it is quite possible that customers will later want to extend such a calendar loaded through the universal adaptor, or want to make corrections to existing data loaded via the universal adaptor. In these cases, the customer needs to modify the corresponding csv files before the next incremental load. To extend a custom calendar, add additional period entries in file_mcal_period_ds.csv for the fiscal years to which you want to extend the calendar. Existing entries can be kept in the csv file or removed. The w_insert_dt and w_update_dt of the newly inserted rows should be provided correctly in the csv files. To correct a specific entry in the csv file, make sure the w_update_dt column of that entry is updated appropriately so that the incremental logic can pick it up during the next incremental load.
Use the Custom Calendar as the Enterprise Calendar
Since the custom calendar loaded via the universal adaptor is a fiscal calendar, it can be used as the enterprise calendar. Recall that the enterprise calendar is nothing but a chosen fiscal calendar.
This means the customer must first load the custom calendar into the W_MCAL% tables via the universal adaptor, and is then free to choose it as the enterprise calendar. To set this custom calendar as the enterprise calendar, in BIACM the customer should set GBL_CALENDAR_ID and GBL_DATASOURCE_NUM_ID appropriately for the enterprise calendar. For the calendar considered in this article, the calendar the customer wants to use is called Universal~Custom and its DSN is 900. So, in BIACM, the customer should set GBL_CALENDAR_ID to Universal~Custom and GBL_DATASOURCE_NUM_ID to 900. This way, after the ETL is run, the W_ENT% tables will be populated. (Note that W_ENT_WEEK_D will NOT be populated, because the customer is not using a DW generated calendar as the enterprise calendar.) For more on how to set the enterprise calendar, readers are referred to the following article: https://blogs.oracle.com/biapps/entry/configuring_enterprise_calendar_for_oracle
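Once the load plan has run, a quick sanity check against the warehouse confirms that the custom calendar landed as expected. This is only a hedged sketch: the W_MCAL_CAL_D columns follow the csv description above, the calendar name, ROW_WID (4999) and the assumption that MCAL_CAL_WID and MCAL_YEAR carry over to W_MCAL_PERIOD_D are taken from the example in this article.

--Confirm the custom calendar was registered
SELECT ROW_WID, MCAL_CAL_ID, MCAL_CAL_NAME, MCAL_CAL_CLASS, DATASOURCE_NUM_ID
FROM   W_MCAL_CAL_D
WHERE  MCAL_CAL_NAME = 'Universal~Custom';

--Count the loaded periods per fiscal year (expect 12 each for 2016 and 2017 in this example)
SELECT MCAL_YEAR, COUNT(*) AS PERIOD_COUNT
FROM   W_MCAL_PERIOD_D
WHERE  MCAL_CAL_WID = 4999
GROUP BY MCAL_YEAR
ORDER BY MCAL_YEAR;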


Analytic Applications

Customer RFM Analysis

Customer RFM Analysis
Author: Grace Wang
Customer RFM
RFM (Recency, Frequency, Monetary) is a marketing technique used to determine quantitatively which customers are the best ones by examining: Recency – how recently did the customer purchase? Frequency – how often do they purchase? Monetary Value – how much do they spend? The method is based on the marketing axiom that "80% of your business comes from 20% of your customers." The RFM methodology provides an easy and economical way to help businesses identify their most valuable customers and is widely used across industries. A case study performed at the University of North Carolina Kenan-Flagler Business School showed that: customers who purchased recently were more likely to buy again than customers who had not purchased in a while; customers who purchased frequently were more likely to buy again than customers who had made just one or two purchases; and customers who had spent the most money in total were more likely to buy again. The most valuable customers tended to continue to become even more valuable.
RFM Methodology
There are variations of the RFM method. The most common approach starts from a table with Customer Name, Date of Purchase and Purchase Value. Using RFM analysis, customers are ranked and assigned a ranking number of 1, 2, 3, 4 or 5 (with 5 being the highest) based on the quintile distribution of each RFM parameter. The ranking numbers make up the score in each category (Recency Score / Frequency Score / Monetary Score), and the three scores together are referred to as an RFM "cell", e.g. 555 or 231. The customer data is sorted to determine which customers were "the best customers" in the past, with a cell ranking of "555" being ideal.
Customer RFM in OBIA
In OBIA, the Customer Recency / Frequency / Monetary category assignment and score calculation are done at the PLP (Post Load Process) stage during ETL. When the CRM Revenue Fact table (W_REVN_F) completes data loading, a PLP task reads the "won" opportunities in the CRM Revenue Fact table and evaluates RFM based on the Opportunity Closed Date for Recency, the Number of Opportunities for Frequency and the Opportunity Closed Revenue for Monetary. Each customer is then ranked in the three categories and assigned a Recency Category Code and Recency Score, a Frequency Category Code and Frequency Score, and a Monetary Category Code and Monetary Score. The RFM cell score is also calculated for each customer. The RFM category codes and scores are computed in one task (PLP_PartyDimension_RFMInfo) and stored in the Customer Dimension table (W_PARTY_D) through one update task (PLP_PartyDimension_UpdateRFMInfo). Below is a quick glance at the RFM ETL process:
RECENCY – How long ago did the customer last make a purchase? Source: MAX(W_REVN_F.CLOSE_DT_WID). Target: W_PARTY_D.W_RECENCY_CAT_CODE and RECENCY_SCORE. Ranking: the customer is ranked by the WON opportunity closed date; the later the closed date, the higher the rank. Domain code: W_DBM_QUINTILE_TYPE / W_RECENCY_CAT_CODE.
FREQUENCY – How many purchases has the customer made? Source: COUNT(DISTINCT W_REVN_F.OPTY_WID). Target: W_PARTY_D.W_FREQUENCY_CAT_CODE and FREQUENCY_SCORE. Ranking: the customer is ranked by the number of WON opportunities; the more opportunities, the higher the rank. Domain code: W_DBM_QUINTILE_TYPE / W_FREQUENCY_CAT_CODE.
MONETARY – How much revenue has the customer generated? Source: SUM(W_REVN_F.DOC_CLOSED_REVN). Target: W_PARTY_D.W_MONETARY_CAT_CODE and MONETARY_SCORE. Ranking: the customer is ranked by WON revenue; the higher the revenue, the higher the rank. Domain code: W_DBM_QUINTILE_TYPE / W_MONETARY_CAT_CODE.
Customer RFM in the RPD
A total of ten logical columns related to RFM are defined on the Customer Dimension in the OBIA RPD:
Recency Category Code – Contains the value 1st Quintile, 2nd Quintile, 3rd Quintile, 4th Quintile or 5th Quintile.
Recency Category – The descriptive value of the Recency Category Code: Most Recent, Very Recent, Recent, Somewhat Recent, Less Recent.
Recency Score – Contains the score from 1 to 5 for Recency.
Frequency Category Code – Contains the value 1st Quintile, 2nd Quintile, 3rd Quintile, 4th Quintile or 5th Quintile.
Frequency Category – The descriptive value of the Frequency Category Code: Most Frequent, Very Frequent, Frequent, Somewhat Frequent, Less Frequent.
Frequency Score – Contains the score from 1 to 5 for Frequency.
Monetary Category Code – Contains the value 1st Quintile, 2nd Quintile, 3rd Quintile, 4th Quintile or 5th Quintile.
Monetary Category – The descriptive value of the Monetary Category Code: Most Valued, Very Valued, Valued, Somewhat Valued, Less Valued.
Monetary Score – Contains the score from 1 to 5 for Monetary.
RFM Score – Contains the value of the RFM cell, ranging from 111 to 555.
Sample Customer RFM Webcat
More Applications
RFM is an easy and economical way to mine current data and provide a quick snapshot of a customer. As a Customer dimension attribute, the RFM columns can be used as filters in various customer-related reports to produce more insightful analysis. At the same time, the RFM methodology can be extended to different subject areas. For example, instead of the Revenue Fact, RFM can be performed on the Web Visit and Web Submit facts from online marketing: we could identify which cluster of customers visited the website most recently and most frequently with a high order amount. Using the Purchase Order Fact from Spend, RFM can be applied to suppliers: we could identify which suppliers are used most recently and most frequently by the company and account for the most spending. With RFM scores captured periodically, RFM can also help detect changes in customer purchasing behavior. For example, if we keep the monthly RFM scores for each customer in a separate table, then by doing time series analysis a declining RFM score trend may indicate the customer is losing interest, while an ascending RFM score shows the customer is picking up momentum.
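To make the quintile assignment concrete, here is a hedged SQL sketch of the RFM scoring using the source expressions listed above. It is an illustration of the methodology only, not the logic of the shipped PLP_PartyDimension_RFMInfo task; the PARTY_WID grouping column and the "won opportunities" filter placeholder are assumptions.

--Illustrative RFM quintile scoring over won opportunity revenue (not the OOTB PLP mapping)
SELECT PARTY_WID,
       NTILE(5) OVER (ORDER BY MAX(CLOSE_DT_WID))        AS RECENCY_SCORE,
       NTILE(5) OVER (ORDER BY COUNT(DISTINCT OPTY_WID)) AS FREQUENCY_SCORE,
       NTILE(5) OVER (ORDER BY SUM(DOC_CLOSED_REVN))     AS MONETARY_SCORE
FROM   W_REVN_F
--WHERE <filter to won opportunities>
GROUP BY PARTY_WID;

Because NTILE orders ascending, the latest close dates, highest opportunity counts and largest revenue all land in bucket 5, matching the "5 is highest" convention described above.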


Architecture

ETL Design for sharing the sequence ID

Introduction
This blog describes an ETL design for sharing the same sequence ID among several tables. The Party Dimension is an exceptional case where records from different streams merge into one table. Organization, person and group party types are loaded into separate tables (W_PARTY_ORG_D, W_PARTY_PER_D and W_PARTY_GROUP_D) and later merged into a master table, W_PARTY_D. These dimensions are slowly changing dimensions out-of-the-box in BI Applications. They also carry a slowly changing type 1 WID named SCD1_WID, which keeps the same value on the new record even in the case of an SCD type 2 change. For example, if the Organization Name is changed from 'ABC Software' to 'ABCD Software', the current record still has the same value for SCD1_WID. This WID is populated with a sequence-generated numeric value. The knowledge modules (KM) use a separate DB sequence for each target table while loading data into W_PARTY_ORG_D, W_PARTY_PER_D and W_PARTY_GROUP_D, which results in different sequence number ranges for the SCD1_WID column in the master table W_PARTY_D. The following steps describe an approach to share the same sequence ID.
Step 1: Disable the KM-driven option 'OBI_SCD1_WID' so that it does not create separate DB sequences. Set the default value of the OBI_SCD1_WID IKM option to 'false' as shown in the screenshot below.
Step 2: Create a mapping to populate the helper table W_PARTY_PER_T1_D. The source for this mapping should be the corresponding staging table, W_PARTY_PER_DS. The mapping expression for the SCD1_WID column should read from the sequence created by the previous load stream (in this case the W_PARTY_ORG_D load). Set the mapping expression to NEXTVAL(W_PARTY_ORG_S1W) and uncheck the 'Update' checkbox.
Step 3: In the flow tab, set the DETECTION_STRATEGY IKM option to 'NONE'.
Step 4: Configure the LP components to execute the scenarios loading W_PARTY_PER_T1_D, W_PARTY_PER_D and W_PARTY_D in serial mode, in that order.
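The effect of Step 2 can be pictured with a hedged SQL sketch: the person stream draws its SCD1_WID values from the same database sequence (W_PARTY_ORG_S1W, created by the W_PARTY_ORG_D load), so the values stay in one number range when the rows merge into W_PARTY_D. The column list is abbreviated for illustration only; the actual load is performed by the ODI mapping and IKM, not by hand-written SQL.

--Conceptual illustration only: the person stream reuses the organization stream's sequence
INSERT INTO W_PARTY_PER_T1_D (INTEGRATION_ID, SCD1_WID)
SELECT STG.INTEGRATION_ID,
       W_PARTY_ORG_S1W.NEXTVAL   --same sequence that feeds W_PARTY_ORG_D.SCD1_WID
FROM   W_PARTY_PER_DS STG;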


Architecture

Fact Table Partitioning with Oracle BI Applications

Authors: Patrick Block, Principal Member Technical Staff, Oracle BI Apps Development; Simon Miller, Director OBIA, Oracle Customer Engineering and Advocacy Lab (CEAL team)
Support for table partitioning was introduced to the BI Applications 11.x release series starting with version 11.1.1.8.1. Partitioning support is largely automated: define the partitions on the fact table directly in the database, and the BI Apps Fact Table IKMs and the Oracle database do the work. The bitmap indexes on the fact table are automatically created as Local (rather than the default Global) indexes, and only the bitmap indexes on partitions that have data changes are rebuilt. The combination of local indexes, which can be created in parallel on each partition, and rebuilding only the required indexes considerably reduces the overall time taken to load the fact or aggregate table. While the documentation on the topic (BI Application Configuration Guide - Review Table Partitioning for Human Resources Analytics) uses an HR Analytics example, this feature is supported for all BI Apps fact tables. The following blog post details how fact partitioning can be implemented.
Partitioning Support
The process for defining the partitions is simple. An ODI procedure is provided in the BI Apps repository that generates a template .SQL file for implementing partitions on a particular table; edit this .SQL file to reflect the desired partition strategy and execute it. After that, everything is automatically taken care of.
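Once a fact table has been partitioned (per the steps below), you can confirm that its bitmap indexes were indeed created as local indexes directly from the data dictionary. This is a hedged sketch; W_AP_HOLDS_F is simply the example table used later in this post.

--Check that the indexes on the partitioned fact table are LOCAL
SELECT INDEX_NAME, PARTITIONING_TYPE, LOCALITY
FROM   USER_PART_INDEXES
WHERE  TABLE_NAME = 'W_AP_HOLDS_F';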

 Partitions can be introduced at any time - prior to an initial load or after several subsequent incremental loads.  We strongly recommend initially loading the DW tables in an unpartitioned state.  Once the tables are loaded, determine an appropriate partition strategy based on your data profile, then implement the partitions per this document.  Performing an initial load on partitioned tables can take longer than performing an initial load on unpartitioned tables - the Oracle database has to spend extra cycles creating multiple new partitions and determining which partition a record should be assigned to.  It is generally faster to perform the initial load then implement the partitions.  On partitioning, the Oracle database will assign the records to the appropriate partitions in bulk rather than on the fly during the initial load.

Once a table is partitioned, subsequent incremental loads should go much faster. Typically a partitioning strategy should be designed so that incremental load data is assigned to a single partition or a few partitions; rebuilding the bitmap indexes on just those partitions is much faster than rebuilding them across the entire data set. This is where most of the E-LT performance improvement comes from. Note that with an initial load, bitmap indexes are rebuilt across all partitions anyway, which is a major reason why you do not see a performance improvement during an initial load on partitioned tables. The following types of partitioning are currently supported (sub-partitioning is also supported):
Range – Assigns rows to partitions based on column values falling within a given range. Commonly used with dates, for example partitioning by month or year.
Interval – An extension of Range partitioning which instructs the database to automatically create partitions of a specified interval when data inserted into the table exceeds all of the existing range partitions.
List – Similar to partitioning by Range, except that the partition is selected based on columns matching one of a set of discrete values (the list) for each partition, for example grouping US states into Territory partitions.
Hash – Hash partitioning enables easy partitioning of data that does not lend itself to range or list partitioning. It maps data to partitions based on a hashing algorithm that Oracle applies to the partitioning key.
Composite: Range - Range – Supports partitioning by two ranges, e.g. order_date and shipping_date.
Composite: Range - List – Supports partitioning by range with a list as the sub-partitioning type.
Composite: Range - Hash – Supports partitioning by range with a hash as the sub-partitioning type.
Composite: Hash - (anything) – Supports partitioning by hash with any other partitioning type as the sub-partitioning type.
Configuring Partitioning
A utility is provided to generate a basic database script that enables partitioning for a given fact table. The script generates a SQL file that enables Range partitioning with an Interval of 1. A Number column must be provided in the utility as the partitioning key; if another column type needs to be partitioned, edit the generated SQL file to use a different column as the partitioning key.
1. Execute the utility to generate the partition script
Execute the IMPLEMENT_DW_TABLE_PARTITIONS scenario found under Components => DW => Oracle => Implement DW Table Partitions. The utility can be run at any time, either before the table is loaded or after. Any existing bitmap indexes will be converted to local indexes. When executed, you will be prompted with the following options:
SCRIPT_LOCATION – The location where the script should be created if the CREATE_SCRIPT_FILE option is true.
PARTITION_KEY – The column to be used as the partition key. As only Interval partitioning is supported by this procedure, the column datatype must be Number (though Date is supported by Oracle, this procedure does not support the Date datatype). Only a single column is supported by Interval partitioning.
TABLE_NAME – The table to be partitioned.
RUN_DDL – Whether you wish to have this run immediately or not. It is best to review the script first.
The procedure will create a file named 'BIA_DW_Create_Partitioned_Table_DDL_<Unique Identifier>.sql'.
2. Edit the SQL file
The generated script implements range partitioning with an interval of 1 on a Number column. If another partitioning strategy is desired, or range partitioning with an interval larger than 1, or range partitioning using a Date column, edit the BIA_DW_Create_Partitioned_Table_DDL_<Unique Identifier>.sql file and alter it as necessary. Generally, the only change required is to the PARTITION clause: the SQL generated by this procedure includes a range PARTITION clause with INTERVAL(1) on the chosen number column.
Advanced Implementations
Monthly Interval on a Date Wid column
You may wish to leverage range partitioning for W_AP_HOLDS_F on the HOLD_DATE_WID column with an interval of one month. HOLD_DATE_WID is a number column in the format YYYYMMDD. Update the script as follows, using an interval of 100 (Date Wid columns are numbers in the format YYYYMMDD, where the hundreds position represents the month) and establish an appropriate transition point for the initial partition:
PARTITION BY RANGE (HOLD_DATE_WID) INTERVAL(100)
(PARTITION p0 VALUES LESS THAN (20100101))
Partitioning with Virtual Columns
If a table does not have an appropriate column for the desired partitioning but a portion of a column does have the required information, you can create a virtual column and configure it as your partition key. For example, W_GL_BALANCE_F.BALANCE_DT_WID is used to join to the Fiscal Calendar dimension. The key is fairly complex, but a portion of this column is appropriate for partitioning: it may contain the value '201020110120000', which includes the date value '20110120' in YYYYMMDD format. First, add a virtual column to the fact table:
ALTER TABLE W_GL_BALANCE_F add BALANCE_DT_V AS (TO_NUMBER(SUBSTR(BALANCE_DT_WID, 5, 8)));
Next, edit the partition script:
PARTITION BY RANGE (BALANCE_DT_V) INTERVAL(100)
(PARTITION p0 VALUES LESS THAN (20100101))
Finally, execute the script on the data warehouse database with a user who has sufficient privileges.
Updateable Partition Keys
It is possible to select a column that can be updated as the partition key. This requires that row movement be enabled for the table. Without row movement enabled, you will get the following error message if an attempt is made to update the partition key: 'ORA-14402: updating partition key column would cause a partition change'. By default, row movement is enabled in the SQL script when the table is partitioned, so no additional steps are required to support updateable partition keys.
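Two small, hedged sketches tie the pieces above together: the general shape of the default interval-1 clause produced by the utility (assuming a numeric partition key such as HOLD_DATE_WID, as in the monthly example), and how row movement can be enabled manually if you ever hit ORA-14402 on a table that was partitioned outside the generated script.

--Shape of the default generated clause: range partitioning with an interval of 1
PARTITION BY RANGE (HOLD_DATE_WID) INTERVAL(1)
(PARTITION p0 VALUES LESS THAN (20100101))

--Enable row movement manually (the generated script already does this for you)
ALTER TABLE W_AP_HOLDS_F ENABLE ROW MOVEMENT;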


Architecture

DB Link with Oracle BI Applications

Authors: Simon Miller, Director OBIA, Oracle Customer Engineering and Advocacy Lab (CEAL team) and Patrick Block, Principal, OBIA Development
In an earlier blog post, 3 Modes for Moving Data to the BI Applications DW from a Source Application Database, the available options for setting up data source connections were described in detail, along with screenshots. It should be noted that this information, written for OBIA 11.1.1.7.1, is still applicable to 11.1.1.8.1. While not all customers will deploy the SDS (Source Dependent Store with GoldenGate), most customers should consider using the DB link option. The DB link allows direct Oracle-database-to-Oracle-database communication, so the data does not need to flow through your ODI Agent/WLS. The objective of this blog is to elaborate on the above-mentioned post and add tips and troubleshooting guidance. Firstly, the naming conventions required can be a bit tricky, and while the result looks odd, it does work. When your DB link is set up, you can test it as shown below. In this example, 310 is the DSN number defined in the BIACM and EBS11510 is the DSN name. The DB link was set up with the following syntax:
CREATE DATABASE LINK "EBS11510.WORLD@DSN_310"
    CONNECT TO "SDS_EBS11510_FULL" IDENTIFIED BY VALUES '054339818D629B44BA4880F2536FBAB4C0223F5FAC15FE03350F952A3FB376523F'
    USING '(DESCRIPTION= (ADDRESS=(PROTOCOL=tcp)(HOST=slcae159.us.oracle.com)(PORT=1581)) (CONNECT_DATA= (SID=dev159a)))';
If you get an error when trying to create the DB link, check whether your GLOBAL_NAMES initialization parameter is set to TRUE, which requires the database link name to match the remote database's global name. When it is FALSE, you can name the database link whatever you want (which is what you need here); see the Oracle Database 12.1 documentation, Names for Database Links. You can find the <DATASERVER_NAME> and <DSN_ID> in the BIACM or the ODI Studio client. In the BIACM, the data server name is called 'Oracle Data Integrator Data Server Name' and the DSN ID is called 'Data Source Number'. There are a couple of other gotchas to be aware of in order to avoid commonly seen errors. We are ultimately looking for a DB link whose name corresponds to the Remote Object Mask below, where %SCHEMA corresponds to the 'Instance / dblink' property, %OBJECT is the hardcoded value 'WORLD' and %DSERVER is derived from the data source num id. The screenshots below are for an EBS R12.1.3 source with DSN name EBSR1213_DEFAULT and DSN ID 335. Make sure the 'Instance/dblink (Data Server)' property matches the 'Name' property. The 'Name' corresponds to the 'Oracle Data Integrator Data Server Name' property as seen in the BIACM when editing your source. While this should be set automatically when defining your data sources, you should verify that the 'default' checkbox is checked in the physical schema (in the ODI Studio client) and that the 'Local Object Mask' and 'Remote Object Mask' are configured correctly. Finally, please continue to refer to the 3 Modes for Moving Data to the BI Applications DW from a Source Application Database post, which explains how to let the BIACM know to use the DB link.
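As a hedged sketch of the test referred to above (the original screenshot is not reproduced here), you can check the GLOBAL_NAMES setting and run a simple query across the link. The link name is the example one from this post, so substitute your own DSN name and ID; note the quotes are needed because of the special characters in the link name.

--Check whether GLOBAL_NAMES forces the link name to match the remote global name
SELECT NAME, VALUE FROM V$PARAMETER WHERE NAME = 'global_names';

--Simple smoke test across the example DB link
SELECT SYSDATE FROM DUAL@"EBS11510.WORLD@DSN_310";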


Architecture

Configure DW generated 4-4-5 calendar and 13 period calendar

Author: Chuan Shi Introduction Oracle BI APPs provide two data warehouse generated fiscal calendars OOTB. They are the 4-4-5 calendar (with three different disciplines, i.e., 4-4-5, 4-5-4 and 5-4-4, and one of them can be chosen) and 13 period calendar. These fiscal calendars can be populated according to the configuration rule chosen by the customer, such as when such a calendar starts, when it ends, and if customer wants to use 4-4-5, 4-5-4, or 5-4-4 discipline for the 4-4-5 calendar. This is done by configuring file_mcal_config_g.csv, which is used by the ETL logic. The purpose of this article is to explain how to configure file_mcal_config_g.csv. To start with, we briefly explain the 4-4-5 calendar and the 13 period calendar. 4-4-5 Calendar This calendar is in general used for weekly and period analysis. Each fiscal year is composed of twelve 4-4-5 periods. A 4-4-5 period is defined to be either 4 or 5 weeks long. A week can start with any day (Sunday, Monday, etc) and ends on the 7th day following the start day. For example, if customer starts weeks on every Monday, then all weeks end on the following Sunday. A 4-4-5 calendar can use a 4-4-5 discipline, a 4-5-4 discipline, or a 5-4-4 discipline. In a 4-4-5 discipline, the first period is composed of the first 4 weeks of a given fiscal year. The next 4 weeks compose the second period. The next 5 weeks compose the third period and so on. For example, the first three periods of a 4-4-5 discipline is shown in Figure 1. (5-4-4 and 4-5-4 disciplines are defined similarly.) 1 Week = 7 days (the first period can start from any day – configurable) Figure 1. First three periods of a 4-4-5 discipline A regular 4-4-5 calendar has 52 = (4+4+5 + 4+4+5 + 4+4+5 + 4+4+5) weeks. 13 Period Calendar This calendar is used when period to period comparisons are needed. (Note the 4-4-5 calendar is not good for this, since a 5 week period is 25% longer than a 4 week period.) It is mostly used in Retail. Each fiscal year is composed of 13 periods, where each period is 4 weeks long. Again, a week can start with any day (Sunday, Monday, etc) and ends on the 7th day following the start day. Figure 2 shows the first three periods of a 13 period calendar. 1 Week = 7 days (the first period can start from any day – configurable) Figure 2. First three periods of a 13 period calendar A regular 13 period calendar has 13 periods where each period has 4 weeks. Therefore, a regular 13 period calendar also contains 52 weeks. 52 Weeks vs. 53 Weeks From the introduction above, we see that for both the 4-4-5 calendar and 13 period calendar, a regular fiscal year contain 52 weeks, or equivalently 364 days. However, each Gregorian calendar year contain either 365 days or 366 days (in leap years). So, if we require all fiscal calendar years (or the 4-4-5 calendar or 13 period calendar) to precisely have 364 days, there will be accumulated gaps between the fiscal calendar cycle and the Gregorian calendar cycle. Therefore, what happen is that, for every several such fiscal years that have 52 weeks, the difference between the fiscal calendar cycles and Gregorian calendar cycles becomes 7 days. Once that happens, we allocate that 7 days to the last period of the current fiscal year. It means that in every several fiscal years of 52 weeks, there will be a fiscal year that is 53 weeks, where the last period of that fiscal year has one more week. For example, in a 4-4-5 discipline, the last period of a 53 week fiscal year will have 6 weeks, instead of 5. 
In a 13 period calendar, the last period of a 53-week fiscal year will have 5 weeks instead of 4. Whether a fiscal year has 52 or 53 weeks depends on the start date and end date of each fiscal year. (For example, a customer can choose to load the first fiscal year with 53 weeks.) The start date of the first fiscal year is chosen by the customer. The end date of the first fiscal year, as well as the start and end dates of subsequent fiscal years of the calendar, are determined by an ETL logic that uses a parameter called REFERENCE_DATE, which is also set by the customer. We discuss these next.
Configuring DW Generated Calendars
For the two DW generated calendars, customers are allowed to configure the following:
Start date of the first fiscal year of a given calendar
End date of the last fiscal year of a given calendar
Discipline of the 4-4-5 calendar
Reference date that determines whether a fiscal year has 52 or 53 weeks
The configuration is done using the csv file named file_mcal_config_g.csv, whose content is provided below:
Description,,,,,,,,,,,,
Examples,,,,,,,,,,,,
Last Updated By,,,,,,,,,,,,
Last Updated On,,,,,,,,,,,,
CALENDAR_ID,CALENDAR_NAME,CALENDAR_CLASS,PERIOD_TYPE,CAL_ST_DT,CAL_END_DT,CAL_OFFSET,WEEK_ALLOCATION_RULE,REFERENCE_DATE,W_INSERT_DT,W_UPDATE_DT,TENANT_ID,X_CUSTOM
10000,4-4-5,Generated,4-4-5,20021229000000,20301231000000,0,4-4-5,1230,,,DEFAULT,0
10001,13,Generated,13,20021229000000,20301231000000,0,13,1230,,,DEFAULT,0
Customers are required to provide the desired values for both the 4-4-5 calendar and the 13 period calendar. The columns to be configured correspond to the four items listed above (the calendar start and end dates, the 4-4-5 discipline, and the reference date). The first three are straightforward, so we only explain the last one. Our ETL logic assumes that the end date of a given fiscal year must be within +/- 3 days of the REFERENCE_DATE. For example, if a customer sets REFERENCE_DATE to 1230 (Dec 30), the ETL logic knows that the end date of any given fiscal year should fall between DEC-27 (DEC-30 minus 3 days) and JAN-02 (DEC-30 plus 3 days). Recall that a week starts on the day of the week that CAL_ST_DT falls on and ends on the 7th day following it. As a result, in this example with CAL_ST_DT = 20021229000000 (which is a Sunday) and REFERENCE_DATE = 1230, the end date of a given fiscal year will be the unique Saturday that falls between December 27 and January 2 inclusive. Once the end date of the fiscal year is determined, the number of weeks is also determined. This is how the ETL logic determines whether a given fiscal year has 52 or 53 weeks. The table below shows the data for fiscal years from 2002 to 2014 for a 5-4-4 discipline calendar with REFERENCE_DATE = 1230. We can verify that the MCAL_YEAR_END_DT of all these years falls between Dec 27 and Jan 02. Some of the fiscal years have 53 weeks while the rest have 52 weeks.
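For example, to switch the OOTB 4-4-5 calendar from the 4-4-5 discipline to 5-4-4 while keeping the same date range and reference date, only one column of its row in file_mcal_config_g.csv needs to change. This is a hedged sketch based on the file layout shown above; it assumes WEEK_ALLOCATION_RULE is the column that carries the discipline, since that is the column holding "4-4-5" in the default row.

10000,4-4-5,Generated,4-4-5,20021229000000,20301231000000,0,5-4-4,1230,,,DEFAULT,0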


Product News

BI apps 11.1.1.8.1 is available now

Oracle Business Intelligence (BI) Applications 11.1.1.8.1 is now available on the Oracle Software Delivery Cloud (eDelivery), and on the Oracle BI Applications OTN page. This is the second major release on the 11g code line leveraging the power of ODI, and certified with the latest version of Oracle BI Foundation 11.1.1.7 Highlights from the announcement.   What’s New This new release of Oracle Business Intelligence Applications 11g adds: 1. Prebuilt Connector for Oracle Procurement and Spend Analytics with Oracle JD Edwards 2. Expanded functional content coverage across existing Oracle BI Applications 3. New utilities and enhancements to increase productivity and further drive down Total Cost of Ownership (TCO) New Prebuilt Connector for BI Applications · Oracle Procurement and Spend Analytics introduces a new adapter that provides out-of-box integration between Oracle Procurement and Spend Analytics and Oracle’s JD Edwards EnterpriseOne, enabling purchasing and sourcing organizations quickly identify savings opportunities and improve operational performance through decision-ready, best practice analytics. New Content Enhancements: Significant expansion of content improves existing Oracle BI Applications including: · Oracle Human Resources Analytics expands analysis on HR programs and workforce performance with the introduction of Talent Management Analysis, to help HR and business managers assess talent strengths and build potential leaders by delivering greater insight into job profiles. Improvements to Absence Management, new support for Workforce Frozen Snapshots and Workforce Flex Fields are also available. · Oracle Financial Analytics improves company Financial Performance with the new Subledger Accounting Module, providing complete account balance analysis, including supporting reference balances and drill down to accounting transactions. Aging Snapshot restoration for Receivables and Payables, drilldown from Payables to Purchase Orders, and Fixed Assets support for GL Posting Status are features also included in the release. · Oracle Project Analytics expands insight into project-based data with new Earned Value Management Analysis, providing visibility for Project Stakeholders into Planned Value,  Earned Value, and Actual Cost. Analysis of variances and indices, derived from Cost and Schedule Variance, CPI, TCPI, SPI and TSPI, is also available. · Oracle Supply Chain & Order Management Analytics introduces Costing and Inventory Turn support for Oracle JDE EnterpriseOne. In addition, the ability to save and restore the Inventory Snapshot when executing a full load is now supported. · Oracle Student Information Analytics introduces new Financial Aid Analysis, containing term-based detail information associated with a student award such as amount, status, disbursement, and aid type. · Oracle Manufacturing Analytics and Enterprise Asset Management Analytics have been re-introduced into the BI Applications data model. Although new with ODI, both modules provide now the same broad functionality available with BI Applications 7.9.6.4 · Unstructured Data for Information Discovery – The BI Applications Data Model has been extended to include unstructured CRM and ERP text fields. Sample Information Discovery templates – including ODI interfaces and runtime security definitions –provide the ability to quickly create sample Information Discovery Applications from combined structured and unstructured data. 
New Utilities and Enhancements to increase productivity and drive down TCO · New Data Lineage Tool allows customers and implementers to track lineage from source to target – improving information transparency across their enterprise · Configuration Management expands System Setup and Load Plan definition capabilities, including the registration of multiple instances for the same Product Line Version, UI enhancements to support DW connection configuration and Load Plan definition, and Load Plan definition migration. • Fact Partitioning IKMs and Parallel Index Builds are key performance Innovations introduced with this release. The former enables a feature that tags partitions so that only the indexes that need to be re-built during incremental loads are executed. The latter uses ODI parallel process so that Indexes can be built using multiple session (faster than and in addition to DB parallel).  


Architecture

How does TYPE2_FLG Work in ETL

Author: Vivian (Weibei) Li
Introduction
TYPE2_FLG is used in slowly changing dimensions in BI Applications. This flag indicates whether a dimension is type 2, and it determines the data storing behavior in ETL. This blog aims to give you a better understanding of how TYPE2_FLG works in ETL.
Background
Slowly Changing Dimensions
There are many fundamental dimensions such as Customer, Product, Location and Employee in BI Applications. The attributes in these dimensions are revised from time to time. Sometimes the revised attributes merely correct an error in the data, but many times they represent a true change at a point in time. These changes arrive unexpectedly, sporadically and far less frequently than fact table measurements, so we call this topic slowly changing dimensions (SCDs). Slowly changing dimension entities like Employee, Customer, Product and others determine how historical changes in the dimension tables are handled and how to respond to the changes. Three different kinds of responses are used: slowly changing dimension (SCD) Types 1, 2 and 3.
Type 1: Overwrite the attributes with the new changes
Type 2: Add a new dimension record
Type 3: Add a new field
We are talking about Type 2 in this blog. In the Type 2 SCD model, the whole history is stored in the database. An additional dimension record is created, the segmentation between the old record values and the new (current) value is easy to extract, and the history is clear. A minimum of three additional columns should be added to the dimension row for type 2 changes: 1) a row effective date or date/time stamp (EFFECTIVE_FROM_DT); 2) a row expiration date or date/time stamp (EFFECTIVE_TO_DT); and 3) a current row indicator (CURRENT_FLG).
SRC_EFF_FROM_DT and EFFECTIVE_FROM_DT
These two columns are different concepts even though they have similar names, and we have seen many customers get confused by them. SRC_EFF_FROM_DT is extracted from the effective start date of the source (mainly from the main driving source) if the source keeps history. If the source does not store history or the history is not extracted, it is hard coded to #LOW_DATE. EFFECTIVE_FROM_DT is a system column in the dimension table used to track history. Remember that we use knowledge modules (KM) for repeatable logic that can be reused across ETL tasks; updating the SCD related columns, such as EFFECTIVE_FROM_DT, is usually handled by the KM. EFFECTIVE_FROM_DT is modified when inserting a new type 2 record in an incremental run, and it is usually set to the changed-on date from the source. EFFECTIVE_FROM_DT does not always map to the source effective dates. In the type 2 SCD model, EFFECTIVE_FROM_DT is the date used to track history.
TYPE2_FLG in BI Applications
TYPE2_FLG is a flag used to indicate whether a dimension is type 2 or not. This flag is used in many dimensions in BI Applications, such as employee, user, position, and so on. It is very important because it determines the history storing behavior. TYPE2_FLG has two values: 'Y' and 'N'. 'Y' means the dimension is type 2, and 'N' means the dimension is type 1. Type 2 dimensions store history, while type 1 dimensions only store the current record.
For example, suppose an employee's supervisor changes from Peter to Susan on 01/02/2012:

Type 1
EMPLOYEE_ID  SUPERVISOR_NAME  CURRENT_FLG
123          Susan            Y

Type 2
EMPLOYEE_ID  EFFECTIVE_FROM_DT  EFFECTIVE_TO_DT  SUPERVISOR_NAME  CURRENT_FLG
123          01/02/2012         Future           Susan            Y
123          01/01/1999         01/02/2012       Peter            N

As shown above, the type 1 dimension overwrites the supervisor with the new supervisor and stores only the current record. The type 2 dimension inserts a new record with the new supervisor name and keeps the old record as history. EFFECTIVE_FROM_DT, EFFECTIVE_TO_DT and CURRENT_FLG are modified accordingly: EFFECTIVE_TO_DT is changed to 01/02/2012 and CURRENT_FLG is set to 'N' for the old record, while CURRENT_FLG is set to 'Y' for the new record with the new EFFECTIVE_FROM_DT.

How to Set Up TYPE2_FLG

The out-of-the-box code in BI Applications sets the default values; for type 2 dimensions TYPE2_FLG is usually set to 'Y'. TYPE2_FLG can be configured in BIACM, and the variable is configured per dimension group. The screenshot above shows that you can configure the value of this flag for different dimension groups by clicking the parameter value and overwriting it with a different value.

Note: You can only configure TYPE2_FLG for the dimension groups that appear in this BIACM list; dimension groups that are not in the list cannot be configured. Set the value of TYPE2_FLG carefully: if you override TYPE2_FLG to 'N' for a type 2 dimension, you may run into issues. More details are described in the next section.

Possible Issues Related to TYPE2_FLG

As mentioned earlier, the value of TYPE2_FLG may sometimes be set to 'N' for a type 2 dimension, and this can cause issues. In BI Applications, the SDE mapping brings history from the source in the initial full load for some adaptors, such as EBS. TYPE2_FLG affects how these historical records are stored. The following compares the behavior when TYPE2_FLG is set to 'Y' versus 'N' for a type 2 dimension.

Case 1 - TYPE2_FLG = 'Y'

Let's take the employee dimension (a type 2 dimension) as an example.

Source
EMPLOYEE_ID  SRC_EFF_FROM_DT  SUPERVISOR_NAME  ROW #
123          01/01/1999       Peter            1
123          01/02/2012       Susan            2

When the data is loaded into the data warehouse in the initial full run, both rows (including historical record #1) are loaded. Because TYPE2_FLG is 'Y' in this case, the KM, which handles the loading behavior, uses this value to determine that the employee dimension is type 2 and stores the rows accordingly. The KM modifies EFFECTIVE_TO_DT and CURRENT_FLG for the two records:

EMPLOYEE_ID  EFFECTIVE_FROM_DT  EFFECTIVE_TO_DT  SUPERVISOR_NAME  CURRENT_FLG
123          01/02/2012         Future           Susan            Y
123          01/01/1999         01/02/2012       Peter            N

Case 2 - TYPE2_FLG = 'N'

This time TYPE2_FLG is set to 'N' for the employee dimension (a type 2 dimension), which is incorrect. The KM will treat it as type 1 rather than type 2.

Source
EMPLOYEE_ID  SRC_EFF_FROM_DT  SUPERVISOR_NAME  ROW #
123          01/01/1999       Peter            1
123          01/02/2012       Susan            2

When the data is loaded into the data warehouse, both rows are loaded because the history from the source is stored. However, because TYPE2_FLG is 'N', the KM will not modify EFFECTIVE_TO_DT and CURRENT_FLG accordingly, and this will cause issues.
Employee table in the data warehouse
EMPLOYEE_ID  EFFECTIVE_FROM_DT  EFFECTIVE_TO_DT  SUPERVISOR_NAME  CURRENT_FLG
123          01/02/2012         Future           Susan            Y
123          01/01/1999         Future           Peter            Y

As shown above, the two records have overlapping time ranges, and both have CURRENT_FLG = 'Y'. This can produce duplicates when resolving the employee from the facts. For example, a transaction date of 02/04/2013 falls into the time range of both records, so both will be extracted, causing duplicates in the facts.

How to Debug TYPE2_FLG Issues

As discussed in the previous section, to avoid this kind of issue you should set the value of TYPE2_FLG carefully and keep it at 'Y' for out-of-the-box type 2 dimensions. In addition, when you get duplicates in a fact, you can perform the following checks.

Check where the duplicates come from in the fact, and identify the problematic dimension if they come from a dimension.

Check the data in that dimension for duplicates to see whether you observe the loading behavior described in Case 2 of the previous section. You can first simply check whether multiple records have CURRENT_FLG = 'Y' (a sample query for this appears at the end of this post).

Check the value of TYPE2_FLG in the ODI repository:
1. Open the session log of the task
2. Open 'Definition'
3. Expand 'Variable and Sequence Values'
4. Find TYPE2_FLG and check the value
5. If the value is 'N' but the dimension is type 2, you may have hit the issue described in the previous section.

I would also like to share some tips for finding out the type of a dimension. You can find this information in the ODI repository. For a dimension such as the employee dimension, you should first know the dimension table name, for example W_EMPLOYEE_D.

Go to ODI repository -> 'Designer' -> 'Models'
Find the dimension table and open it by double-clicking it
Go to 'Definition' and check the OLAP type. The type 'Slowly Changing Dimension' tells you that this dimension is type 2

You can also find out which attributes are type 2 by checking the column attributes:
1. Expand the dimension table, for example W_EMPLOYEE_D, and then expand Columns
2. Open the attribute of a column by double-clicking it
3. Go to 'Description' and check 'Slowly Changing Dimension Behavior'

As shown above, the 'Add Rows on Change' option tells you that this attribute is type 2.

Conclusion

This blog helps you understand how TYPE2_FLG works in ETL and recognize the importance of this flag. It also gives you a way to debug possible TYPE2_FLG issues.
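As a concrete follow-up to the debugging checks above, the query below (a minimal sketch) looks for dimension members with more than one current record, which is the symptom described in Case 2. It assumes the standard W_EMPLOYEE_D grain of INTEGRATION_ID plus DATASOURCE_NUM_ID; adjust the table and key columns for the dimension you are debugging.

--Dimension members with more than one CURRENT_FLG = 'Y' record
SELECT
  INTEGRATION_ID,
  DATASOURCE_NUM_ID,
  COUNT(*) AS CURRENT_ROWS
FROM W_EMPLOYEE_D
WHERE CURRENT_FLG = 'Y'
GROUP BY INTEGRATION_ID, DATASOURCE_NUM_ID
HAVING COUNT(*) > 1
ORDER BY CURRENT_ROWS DESC;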


Architecture

3 Modes for Moving Data to the BI Applications DW from a Source Application Database

In BI Applications 11.1.1.7.1 the adaptors for the following product lines use the LKM BIAPPS SQL to Oracle (Multi Transport) to move data from the source application database to the target BI Applications DW database: E-Business Suite, Siebel, PeopleSoft and JDE. A key feature of this LKM, developed specifically for BI Applications, is that the data from the source system can be transported in 3 different ways, and using parameters set in Configuration Manager the mode can be selected to suit how the system has been set up, thereby optimizing ETL performance. This blog post details those 3 modes and how to configure BI Applications to use the mode that best suits the deployment. The transport modes are:

JDBC mode - the data is moved via JDBC from the source up to the ODI Agent and from the ODI Agent down to the target. The ODI Agent transfers the data into a dynamically created temporary target table. This is the default mode.

Database Link mode - the data is moved over a predefined database link. A view is created on the source Oracle database, and a synonym is used instead of a temporary target table to reference the data across the database link.

SDS mode - the data is referenced directly from an SDS replicated table. The SDS table is continuously and asynchronously replicated using Oracle GoldenGate. A view is used instead of a temporary target table, and the view references the source data in the local SDS schema.

JDBC mode is the default because it works in all deployments; however, the other modes, if they have been set up and enabled, allow significant improvements in ETL performance. The following details each mode.

JDBC Mode

This is the default mode based on the default settings of the underlying variables. In this mode data is transferred from the source via JDBC. This is the slowest of the three modes. Use this mode when you do not have an SDS and when your source is non-Oracle. The Array Fetch Size on the Physical Data Server used to run the interface can impact the performance of this mode; refer to the ODI documentation for more about that parameter. JDBC mode is used when the KM option OBI_SRC_VIA_DBLINK (defaulted to variable ETL_SRC_VIA_DBLINK) is not enabled (N) and the variable IS_SDS_DEPLOYED is not enabled (N).

Database Link Mode

In this mode data is transferred from the source via a DB link. This mode is usually faster than JDBC mode, assuming the network between the source and the warehouse remains constant. The LKM is controlled via the option OBI_SRC_VIA_DBLINK, which is defaulted to the variable ETL_SRC_VIA_DBLINK. When the ETL parameter ETL_SRC_VIA_DBLINK is set to Yes, and SDS is not enabled (see next section), transport is done via the DB link. Database Link mode can be used only when the source database is Oracle and a compatible DB link can be created from the Oracle warehouse to the source. The LKM does not create the DB link; it relies on a DB link already existing with the following naming convention:

<DATASERVER_NAME>.WORLD@DSN_<DSN_ID>

So if the Data Server name for EBS is EBS11510 and the DSN for EBS11510 is 310, then the DB link name should be:

EBS11510.WORLD@DSN_310

This DB link should exist in the warehouse and be accessible using the warehouse credentials (the user specified against the warehouse connection in ODI).
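As a rough illustration of that convention, the DB link could be created and verified from the warehouse schema as below. This is a hedged sketch only: the source user, password and TNS alias are placeholders, and your DBA may prefer different privileges or a private versus public link.

--Create the DB link in the warehouse schema (placeholder credentials and TNS alias)
CREATE DATABASE LINK EBS11510.WORLD@DSN_310
  CONNECT TO source_read_user IDENTIFIED BY source_password
  USING 'EBS_SOURCE_TNS_ALIAS';

--Verify the link resolves before running the load plan
SELECT 1 FROM DUAL@EBS11510.WORLD@DSN_310;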
For details of what a Data Server is and how it is defined, please take a look at this prior blog post: Registering BI Applications Data Sources. Database Link mode is used when the option OBI_SRC_VIA_DBLINK (defaulted to variable ETL_SRC_VIA_DBLINK) is enabled (Y) and the variable IS_SDS_DEPLOYED is not enabled (N). The ETL parameter ETL_SRC_VIA_DBLINK is set in Configuration Manager in the Manage Data Load Parameters screen and can be set for the source instance.

SDS Mode

When the IS_SDS_DEPLOYED variable is set to Y, the LKM uses the SDS as the source. A view is created pointing to the SDS schema, and that view is used to insert data into the target. The SDS resides on the same database as the warehouse, so this is the fastest mode as no data is moved over the network. Use this mode only when you have an SDS deployed and populated. Refer to the BI Applications Administrator's Guide and the section on Administering Oracle GoldenGate and SDS for more details on how to set up the SDS and populate it using Oracle GoldenGate. SDS mode is used when the variable IS_SDS_DEPLOYED is enabled (Y). The parameter IS_SDS_DEPLOYED is set in Configuration Manager in the Manage Data Load Parameters screen and can be set either for the instance or for each individual Dimension or Fact Group for more granular control. Note that if IS_SDS_DEPLOYED is not enabled for a Fact Group or Dimension while others for the same instance are enabled, the LKM falls back to one of the other modes for that group: if ETL_SRC_VIA_DBLINK is enabled it will use Database Link mode, otherwise it will use JDBC mode.


Architecture

Snapshot Facts in OBIA (3)

Authors: Anbing Xue, Zhi Lin

Delta Snapshot History Fact

Besides trend lines of snapshots, there is a requirement to plot trend lines of the delta changes on a transaction along the time line. The changes here can be either quantitative (on amounts, units, etc.) or qualitative (on names, descriptions, etc.). Hence we introduced a new kind of snapshot fact to specifically address this.

Typical Data Model

The typical attributes of a delta snapshot history fact are:
SNAPSHOT_DT
IMAGE_TYPE
Many attributes of the original transactions are kept and inherited, such as:
Primary key
Foreign keys to the dimensions

Implementation

The delta snapshot history fact captures and stores a new pair of images whenever a change is detected on the original transaction. The image pair is essential, especially for qualitative changes. It usually consists of one pre image (or "negative" image) and one post image (or "positive" image). For example:

IMAGE_NUM  ORDER_ID  SHIP_TO_LOCATION  AMOUNT  SNAPSHOT_DT  IMAGE_TYPE
1          001       New York City     $100    Mar 1, 2014  POST
2          001       New York City     -$100   Mar 2, 2014  PRE
3          001       New York City     $120    Mar 2, 2014  POST
4          001       New York City     -$120   Mar 3, 2014  PRE
5          001       Boston            $120    Mar 3, 2014  POST

Basically, this delta snapshot history fact stores a regular snapshot per change, and adds the "negative" snapshot before the change. It therefore gains a unique ability to report the delta's trend line simply by clubbing both kinds of snapshots together. In addition, we introduced flexibility for users to configure the subset of columns they are interested in tracking. Based on this configuration, the ETL creates new snapshots only for changes on the columns of interest; changes on the other columns trigger an update to existing snapshots instead, in order to stay in sync with the original transactions. The trade-off is extra ETL complexity to handle pre and post images separately, plus the flexibility to track a subset rather than all changes. The number of records is much smaller than for regular daily snapshot facts, and the data size of this fact is proportional to the number of changes.
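To illustrate how the pre and post images club together into a delta trend line, here is a hedged sketch of a reporting query. The fact table name W_ORDER_DELTA_SNAPSHOT_F is hypothetical (only the column shape comes from the example above), so substitute the actual delta snapshot fact in your model.

--Net delta per day: PRE rows carry negated amounts, so a simple SUM per
--SNAPSHOT_DT yields the change introduced on that day.
SELECT
  SNAPSHOT_DT,
  SUM(AMOUNT) AS DELTA_AMOUNT
FROM W_ORDER_DELTA_SNAPSHOT_F   --hypothetical table name
WHERE ORDER_ID = '001'
GROUP BY SNAPSHOT_DT
ORDER BY SNAPSHOT_DT;

With the sample rows above this returns +100 for Mar 1, +20 for Mar 2 and 0 for Mar 3 (only the ship-to location changed on Mar 3).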


Architecture

Snapshot Facts in OBIA (2)

Authors: Divya Nandakumar, Zhi Lin

Range Snapshot History Fact

In a data warehouse implementation, the volume of OLTP transactions can already be very large, and consequently the volume of the snapshot fact can be enormous, depending on the snapshot frequency. The dilemma is that better accuracy of change history is achieved with more frequent captures, which makes the data size overwhelming and badly impacts performance. A solution is to create a variation of the snapshot history fact, which we call the range snapshot history fact. The idea is simple: concatenate consecutive snapshots of a transaction into a single new snapshot if they happen to share identical images. The new snapshot is labeled with a starting and an ending timestamp, which indicate the time period the image lasts. This merges duplicate snapshots and reduces the resulting data size significantly.

Typical data model

The typical attributes of a range snapshot history fact are:
EFFECTIVE_FROM_DT
EFFECTIVE_TO_DT
Many attributes of the original transactions are kept and inherited, such as:
Primary key
Foreign keys to the dimensions

Implementations

Concatenate daily snapshots into a range

This is the conventional way to build range snapshots. Two consecutive daily snapshots sharing an identical status can be merged into one snapshot spanning the two days; two with different statuses are stored as two separate snapshots, one for each day. The concatenation is produced by gathering related daily snapshot records together, and a status change in these related daily snapshots triggers a new snapshot record in the data warehouse. The degree of condensation that can be achieved is remarkable, because the gathering may span an arbitrary range of time rather than a fixed period such as a week or month. If the triggering event occurs very often, for example 20 times a day, this approach is not advisable. Meanwhile, every detail is preserved, as no daily snapshot is dropped from the concatenation. The ETL flow requires daily snapshots to start with, and does a group-by on the "interested" status columns to merge identical rows (a sketch of this merge appears at the end of this post). Its dependency on an accumulation of daily snapshots means an extra task and large storage. Incremental load can be a challenge, especially for a back-dated snapshot. Also, this method assumes no gaps between daily snapshots, which could lead to an exception that is difficult to handle in ETL.

Range snapshots directly from transactions

Here we introduce a new way to overcome the shortcomings of the conventional method above. We remove the dependency on daily snapshots and build range snapshots directly by scanning through all transaction footprints. A few key points make this possible:

1) Create multiple range snapshots trailing each footprint (transaction). For example, an order placed on Mar 25, 2012 by Adam derives trailing range snapshots as below. The period duration in each snapshot is one year here, which is configurable.

Customer  Status   Status Start Date  Status End Date
Adam      Active   Mar 25, 2012       Mar 25, 2013
Adam      Dormant  Mar 25, 2013       Mar 25, 2014
Adam      Lost     Mar 25, 2014       Mar 25, 2015

2) Collapse all trailing series generated in (1) so that only one status comes out at any point of time, using priority rules. In this example, the priority rule to override is Active > Dormant > Lost.

3) On top of the collapsed results, concatenate the snapshots having identical statuses.
The challenge of incremental loads, especially back-dated records, is solved here relatively easily, because all the source information (the transaction footprints) is usually persisted anyway. Continuing the example, the ETL can be as simple as deleting the records for a particular customer from the target table and recreating them from scratch every time that customer places an order. We still achieve a great amount of data compression and robust ETL processing. The incremental load, however, is still not precise to the most granular level: an incremental load that touches one transaction for every customer would end up effectively truncating and rebuilding the entire target table.
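For the conventional approach (concatenating daily snapshots that share an identical status), the merge referred to above can be expressed with a classic gaps-and-islands grouping. The table and column names (DAILY_SNAPSHOT_F, RANGE_SNAPSHOT_F, TXN_ID, STATUS) are illustrative only, and the sketch assumes contiguous daily snapshots, matching the assumption called out above.

--Collapse unbroken runs of identical daily status into one range per transaction
INSERT INTO RANGE_SNAPSHOT_F (TXN_ID, STATUS, EFFECTIVE_FROM_DT, EFFECTIVE_TO_DT)
SELECT
  TXN_ID,
  STATUS,
  MIN(SNAPSHOT_DT)     AS EFFECTIVE_FROM_DT,
  MAX(SNAPSHOT_DT) + 1 AS EFFECTIVE_TO_DT
FROM (
  SELECT
    TXN_ID,
    STATUS,
    SNAPSHOT_DT,
    --Consecutive days with the same status share the same group key
    SNAPSHOT_DT - ROW_NUMBER() OVER (
      PARTITION BY TXN_ID, STATUS ORDER BY SNAPSHOT_DT) AS GRP
  FROM DAILY_SNAPSHOT_F
)
GROUP BY TXN_ID, STATUS, GRP;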


Architecture

Snapshot Facts in OBIA (1)

Authors: Divya Nandakumar, Zhi Lin

Snapshot History Fact

A snapshot captures an image of the source data at a certain point of time, and preserves that data image plus a (snapshot) time label. A regular, transactional fact stores data in data warehouse format and reflects OLTP transactions with near-real-time latency. In this context, a regular fact basically captures the near-current snapshot of the OLTP transactions and is able to support status quo analysis. A snapshot history fact (or snapshot fact, in short) accumulates a series of snapshots and preserves them all, each with a different time label. In this way, the change history of each transaction is preserved by the snapshot series. This fact is very useful for trend analysis over time.

Typical Data Model

The typical attribute of a snapshot history fact is:
SNAPSHOT_PERIOD_DT
Many attributes of the original transactions are kept and inherited, such as:
Primary key
Foreign keys to the dimensions

Implementation

Rolling period of daily snapshots

Consider a business need to analyze DAILY quantity-on-hand inventory levels by product and store for a business process like "Retail store inventory". The granularity would be daily inventory by product at each store, the dimensions would be date, product and store, and the fact would be quantity on hand. Storing a daily snapshot of the source system would have a serious impact on data warehouse storage. The first solution is to keep daily snapshots for a limited rolling period, for example the last 90 days. The daily snapshot table accumulates daily snapshots from 1 day, 2 days, ..., up to 90 days. After that, it always drops the oldest daily snapshot before adding a new one. Hence the ETL should always delete the snapshots older than 90 days first and then append the new snapshot (a sketch of this pattern appears at the end of this post). This method keeps snapshots at a fairly granular (daily) level. However, older snapshots are not kept, so it is not good for long-term historical trending.

Monthly snapshots

At the end of the month all accounts have their month-ending balance captured. The event is the end of the month, and the month is stored as part of the data warehouse. The selection program reads through the operational data and, upon encountering a record that meets the qualifications, moves the record to the data warehouse. At the end of the month, each account is queried and the balance of the account at the end of the month is transferred to the data warehouse environment. One account may have had no activity during the month and another account may have had 200 activities; both accounts show up as exactly one record in the data warehouse environment. No continuity of activity is assumed with this technique. The passage of time (day end, week end, month end, etc.) is a common way of triggering a snapshot, but it is hardly the only way that snapshots are triggered. The monthly snapshot table stores the month-end snapshots for all previous periods. The ETL design has a preload mapping which deletes the data loaded for the current month, based on the current month-end date, and then loads the latest data for the current month. In this way, we "aggregate" up from daily snapshots and achieve great compaction of the data size. Longer-term history trending can be stored and reported. However, we lose the finer level of detail between any two month ends.
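Here is a minimal sketch of the rolling-window maintenance described above for the 90-day case. The table names (DAILY_INVENTORY_SNAPSHOT_F and the staging source) are hypothetical; the point is only the delete-then-append pattern.

--Drop snapshots that have aged out of the 90-day rolling window
DELETE FROM DAILY_INVENTORY_SNAPSHOT_F
WHERE SNAPSHOT_PERIOD_DT < TRUNC(SYSDATE) - 90;

--Append today's snapshot of quantity on hand by product and store
INSERT INTO DAILY_INVENTORY_SNAPSHOT_F
  (SNAPSHOT_PERIOD_DT, PRODUCT_WID, STORE_WID, QTY_ON_HAND)
SELECT TRUNC(SYSDATE), PRODUCT_WID, STORE_WID, QTY_ON_HAND
FROM INVENTORY_BALANCE_STG;   --hypothetical staging source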


Architecture

How to Implement Object Security in Project Analytics in OBIA 11.1.1.7.1

Author: Vijay Rajgopal

Introduction

This blog details the steps needed to implement object security for any custom objects created in the Project Analytics module in OBIA 11.1.1.7.1 onwards. Object-level security controls the visibility of logical objects based on a user's duty/application roles. Access to the following objects can be restricted using object-level security: presentation tables, presentation table columns, subject areas, reports, dashboards, and project-specific shared folders. To apply object security over a subject area, an individual table or an individual column, the default access for the Authenticated User application role must be set to No Access. We then explicitly grant read access to the duty roles (which are adaptor-specific, as listed below) that may access or view that particular subject area, table or column.

Supported OBIA release: 11.1.1.7.1 onwards

Project Analytics application roles used for enforcing object security

In Enterprise Manager select WebLogic Domain -> bifoundation_domain -> Security -> Application Roles, select the obi application stripe and search for role names starting with OBIA to see the list of all OBIA application roles.

Following is the list of OOTB duty roles by adaptor:

EBS adaptor duty roles -
OBIA_EBS_PROJECT_EXECUTIVE_ANALYSIS_DUTY
OBIA_EBS_PROJECT_MANAGEMENT_ANALYSIS_DUTY

PSFT adaptor duty roles -
OBIA_PSFT_PROJECT_EXECUTIVE_ANALYSIS_DUTY
OBIA_PSFT_PROJECT_MANAGEMENT_ANALYSIS_DUTY

Fusion adaptor duty roles -
OBIA_PROJECT_EXECUTIVE_ANALYSIS_DUTY
OBIA_PROJECT_MANAGEMENT_ANALYSIS_DUTY

Project Analytics object security implementation

2.1 Subject Area

E.g.: Project - Cost GL Reconciliation is a newly added subject area for the EBS and PSFT adaptors. We want to ensure that this subject area is not seen by Fusion adaptor customers. Bring down the OBIEE server, back up the existing rpd, and open the rpd in the Admin tool. Double-click Project - Cost GL Reconciliation -> Permissions. As you can see, read access has been granted explicitly to the duty roles associated with the EBS and PSFT adaptors. All other duty roles inherit the default access from the Authenticated User application role, which is set to No Access. This ensures that this subject area is not visible to Fusion adaptor users.

2.2 Presentation Table

E.g.: Dim – Analysis Type is supported only for the PSFT adaptor. We hide this presentation table from EBS and Fusion adaptor customers. Under Project - Billing -> Analysis Type -> Permissions, only users associated with PSFT duty roles can view the Analysis Type table; for EBS and Fusion adaptor users this table is hidden.

2.3 Individual Columns

E.g.: The Interproject Billing Amount metric in the Project - Billing subject area is supported only for the EBS and Fusion adaptors. We hide this individual column from PSFT customers. Under Project - Billing -> Fact – Project Billing -> Interproject Invoice Amount -> Permissions, this metric is visible to EBS and Fusion adaptor users and hidden from PSFT adaptor users.

Save the rpd, do a consistency check, and deploy the updated rpd to the OBIEE server.

Additional Information

General details about OBIA 11.1.1.7.1 can be found here


Analytic Applications

Notes for implementing Universal adapter for OBIA Project analytics

Author: Amit Kothari

Introduction: This blog outlines the steps for implementing OBIA Project Analytics for the Universal Adaptor. Similar steps can be followed for other modules.

Supported OBIA releases: 11.1.1.7.1 onwards
Supported Apps releases: Universal Adaptor

Steps: Please refer to the OBIA documentation and the DMR as a starting point for this exercise. Also refer to this blog entry. Log in to the ODI Designer to see the OBIA Projects Universal interfaces; the source files can be seen in the Model layer.

1. High-level steps to import data into the data warehouse through the Universal adaptor:
a. Populate the csv files with your data (e.g., file_proj_budget_fs.csv is the source file for the w_proj_budget_fs table). Typically the customer writes an extract program, such as a shell script or PL/SQL program, which creates these data files from a non-supported source OLTP system.
b. Refer to the step details on how to populate these files.
c. Build a Load Plan with fact groups: "900: Universal Adaptor Instance"."Project".
d. Run the Load Plan that you created in the previous step.
e. Note: If applicable, this Load Plan must be run after the regular Load Plan that populates Oracle Business Analytics Warehouse for the other subject areas has completed.

2. The configuration file or files for this task are provided on installation of Oracle BI Applications at one of the following locations:
a. Source-independent files: <Oracle Home for BI>\biapps\etl\data_files\src_files\.
b. Source-specific files: <Oracle Home for BI>\biapps\etl\data_files\src_files\<source adaptor>.
c. Your system administrator will have copied these files to another location and configured ODI connections to read from that location. Work with your system administrator to obtain the files. When configuration is complete, your system administrator will need to copy the configured files back to the location from which ODI reads them.
d. Refer to the Appendix section 'Setting Up the Delimiter for a Source File'.

3. As a general rule, default numeric columns to 0 and string columns to '__NOT_APPLICABLE__' so that the ETL does not run into 'Not Null' errors when it starts loading data.

4. Date columns should be populated in the CSV file as a number in the format YYYYMMDDHH24MISS, or kept null.

5. The dimension ID fields in the fact staging tables have to be populated with the INTEGRATION_ID of the various dimensions. This is very important; otherwise the dimension WID fields in the fact tables will default to 0 (a simple check for this is sketched near the end of this post). Please refer to the ODI Model or the DMR for the star schema diagrams and other FK information.

6. Similarly, the common dimensions which Projects uses, like W_INT_ORG_D, W_MCAL_DAY_D, W_MCAL_CONTEXT_G, W_EMPLOYEE_D, W_JOB_D, W_INVENTORY_PRODUCT_D, etc., also need to be populated correctly via source files.

7. W_MCAL_CONTEXT_G has a CLASS field that holds two values: GL or PROJECTS. To resolve the project accounting dates in the fact tables there must be data present in this table for class 'PROJECTS'.

8. Various domain codes are loaded into the warehouse staging table W_DOMAIN_MEMBER_GS. In order to load this table, the generic file file_domain_member_gs.csv has to be populated with the correct domain codes (sample rows are shown near the end of this post). The granularity of this file is one domain member per language for any of the domains. Domain codes for Projects are listed in the Appendix. Load only the domains based on the facts and dimensions you are planning to load.
Table B-164 file_domain_member_gs.csv Field Descriptions (Data Type and Sample Data are not available for these fields)

DOMAIN_CODE - This should be populated with the Domain Code corresponding to the Source Domain that is to be configured.
DOMAIN_TYPE_CODE - Defaulted to 'S' - indicates this is a Source Domain Code.
DOMAIN_MEMBER_CODE - This should be populated with the CODE value supplied in any of the above files.
DOMAIN_MEMBER_NAME - This should be populated with the NAME value that corresponds to the Member Code supplied.
DOMAIN_MEMBER_DESCR - Not available.
DOMAIN_MEMBER_REF_CODE - Hardcode to '__NOT_APPLICABLE__'.
DOMAIN_MEMBER_DEFN_TYPE_CODE - Not available.
LANGUAGE_CODE - Warehouse Language Code.
SRC_LANGUAGE_CODE - Source Language Code.
INTEGRATION_ID - This is the unique ID for the record. The INTEGRATION_ID for this file can also be populated as DOMAIN_CODE~DOMAIN_MEMBER_CODE.
DATASOURCE_NUM_ID - The unique Data Source ID of the Source Instance you are configuring.

Appendix

A. Setting Up the Delimiter for a Source File

When you load data from a Comma Separated Values (CSV) formatted source file, if the data contains a comma character (,), you must enclose the source data with a suitable enclosing character known as a delimiter that does not exist in the source data. Note: Alternatively, you could configure your data extraction program to enclose the data with a suitable enclosing character automatically.

For example, you might have a CSV source data file with the following data:

Months, Status
January, February, March, Active
April, May, June, Active

If you loaded this data without modification, ODI would load 'January' as the Months value and 'February' as the Status value. The remaining data for the first record (that is, March, Active) would not be loaded. To enable ODI to load this data correctly, you might enclose the data in the Months field within the double-quotation mark enclosing character (") as follows:

Months, Status
"January, February, March", Active
"April, May, June", Active

After modification, ODI would load the data correctly. In this example, for the first record ODI would load 'January, February, March' as the Months value and 'Active' as the Status value.

To set up the delimiter for a source file:
1. Open the CSV file containing the source data.
2. Enclose the data fields with the enclosing character that you have chosen (for example, "). You must choose an enclosing character that is not present in the source data. Common enclosing characters include single quotation marks (') and double quotation marks (").
3. Save and close the CSV file.
4. In ODI Designer, display the Models view, and expand the Oracle BI Applications folder. Identify the data stores that are associated with the modified CSV files. The CSV file that you modified might be associated with one or more data stores.
5. In ODI Designer, change the properties for each of these data stores to use the enclosing character, as follows:
   1. Double-click the data source to display the DataStore: <Name> dialog.
   2. Display the Files tab.
   3. Use the Text Delimiter field to specify the enclosing character that you used in step 2 to enclose the data.
   4. Click OK to save the changes.
You can now load data from the modified CSV file.   Project Domains PROJECT_ANALYSIS_TYPE PROJECT_ASSIGNMENT_STATUS PROJECT_ASSIGNMENT_TYPE PROJECT_BILLING_TYPE PROJECT_BILL_HOLD_CODE PROJECT_BILL_HOLD_FLG PROJECT_BUDGET_CHANGE_REASON PROJECT_BUDGET_CLASS PROJECT_BUDGET_ENTRY_LEVEL PROJECT_BUDGET_LEVEL PROJECT_BUDGET_STATUS PROJECT_BUDGET_TIME_PHASE PROJECT_BUDGET_TYPE PROJECT_BUDGET_VER_TYPE PROJECT_CLASS_CATEGORY PROJECT_CLASS_CODE PROJECT_COMMITMENT_SOURCE_TYPE PROJECT_COMMITMENT_TYPE PROJECT_COST_CODE_TYPE PROJECT_CROSS_CHARGE PROJECT_CROSS_CHARGE_TYPE PROJECT_DOCUMENT_ENTRY PROJECT_DOCUMENT_SOURCE PROJECT_EVENT_CLASS PROJECT_EVENT_INVOICING_STATUS PROJECT_EVENT_SOURCE PROJECT_EVT_REV_DISTRIB PROJECT_EXPENDITURE_CATEGORY PROJECT_EXPENDITURE_CLASS PROJECT_EXPENDITURE_SUBCATEGORY PROJECT_FIN_PLAN_TYPE PROJECT_FUNDING_APPROVAL PROJECT_FUNDING_CATEGORY PROJECT_FUNDING_LEVEL PROJECT_INVOICE_CLASS PROJECT_INVOICE_HDR_TYPE PROJECT_INVOICE_LINE_TYPE PROJECT_INV_STATUS PROJECT_MFG_RESOURCE_TYPE PROJECT_PRIORITY PROJECT_PROGRESS_STATUS PROJECT_REQUIREMENT_STATUS PROJECT_RESOURCE_CLASS PROJECT_RESOURCE_CODE PROJECT_RESOURCE_TYPE PROJECT_REVENUE_CATEGORY PROJECT_REVENUE_STATUS PROJECT_ROLE PROJECT_SECURITY PROJECT_SERVICE_TYPE PROJECT_STATUS PROJECT_SUB_STATUS PROJECT_TASK_PRIORITY PROJECT_TASK_STATUS PROJECT_TASK_TYPE PROJECT_TRANSACTION_SOURCE PROJECT_TRANSFER_STATUS PROJECT_TYPE PROJECT_TYPE_CLASS PROJECT_WORK_TYPE
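Two quick illustrations for the steps above. First, as referenced in step 5, a post-load sanity check for unresolved dimension keys; the fact table and column (W_PROJ_COST_LINE_F, PROJECT_WID) are examples only, so run the same pattern against the facts and foreign keys you actually load:

--Fact rows whose dimension key failed to resolve (defaulted to 0)
SELECT COUNT(*) AS UNRESOLVED_ROWS
FROM W_PROJ_COST_LINE_F   --example fact table
WHERE PROJECT_WID = 0;    --example dimension WID column

Second, a couple of hypothetical rows for file_domain_member_gs.csv, following the field order of Table B-164 above. The DATASOURCE_NUM_ID shown (900) assumes the Universal Adaptor instance registered above; verify the exact column order and any audit columns against the template file delivered with your installation:

DOMAIN_CODE,DOMAIN_TYPE_CODE,DOMAIN_MEMBER_CODE,DOMAIN_MEMBER_NAME,DOMAIN_MEMBER_DESCR,DOMAIN_MEMBER_REF_CODE,DOMAIN_MEMBER_DEFN_TYPE_CODE,LANGUAGE_CODE,SRC_LANGUAGE_CODE,INTEGRATION_ID,DATASOURCE_NUM_ID
PROJECT_STATUS,S,ACTIVE,Active,,__NOT_APPLICABLE__,,US,US,PROJECT_STATUS~ACTIVE,900
PROJECT_TYPE,S,CAPITAL,Capital,,__NOT_APPLICABLE__,,US,US,PROJECT_TYPE~CAPITAL,900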


Customization

How to include Fusion DFFs Into OBIA 11.1.1.7.1 On-Premise Data Warehouse

Author: Saurabh Gautam

Introduction

This technote explains the steps needed to extract Fusion Descriptive Flexfield (DFF) information into the on-premise Oracle BI Applications (OBIA) warehouse from on-premise Fusion Applications (not SaaS). Note: The OBIA changes have to be done manually.

Supported OBIA release: 11.1.1.7.1 onwards
Supported Apps release: Fusion Release 5 onwards

A. Enable and Deploy the DFF
1. Enable the Descriptive Flexfield in Fusion Apps and mark the DFF/attributes as BI Enabled. For example, enable PJC_EXP_ITEMS_DESC_FLEX.
2. Deploy the Flexfield VO.
3. Refer to this link for more info.

B. Set up the rpd
1. Bring down the OBIEE server and Presentation Services.
2. Open the Oracle BI Applications repository file (*.rpd) via the Admin tool.
3. Import the newly deployed DFF VO (for example FscmTopModelAM.PjcEiBIAM.FLEX_BI_PjcEi_VI) into the rpd. Select the appropriate logical table (for example Dim – Project Costing Details) while importing. This should import the DFF VO and also create the physical join to the appropriate VO (see the example screenshot).
4. Make sure that the VO name is <= 80 characters; if not, create an alias on that VO with a name <= 80 characters.
5. Save the rpd and start the BI server.

C. ETL Changes in ODI
1. Please note that the steps documented in this note follow our standard customization process. This is needed in order for our future ODI metadata patches to work in your repository. As part of the standard customization process, you will copy the existing mapping folder to a custom folder, make changes in the custom folder, delete the OOTB scenario from the original mapping folder, and then generate the new scenarios in the custom folder using the original OOTB scenario name. Please refer to the customization guide before you start on this.
2. Open the ODI Studio client and log in to the appropriate repository. Go to the Model tab -> Fusion 1.0 -> Oracle Fusion 1.0 FSCM/HCM/CRM folder.
3. Import the newly deployed DFF VO using the RKM BIAPPS Oracle BI.
4. Open the Oracle BI Applications -> Oracle BI Applications model sub-folder and add the fields to the target DS/FS and D/F data warehouse (DW) tables in the correct folder.
5. Apply these target table changes to the target warehouse by altering the tables (a hedged ALTER TABLE example appears at the end of this post).
6. Click on the Designer tab and navigate to the appropriate SDE folder for Fusion under the BI Apps project: Mappings -> SDE_FUSION_V1_Adaptor. Duplicate the appropriate SDE folder and copy it to your CUSTOM_SDE folder.
7. Open the temporary interface (icon marked in yellow) in that custom folder.
8. Pull the DFF VO into the mapping tab of the interface.
9. Join the DFF VO to the base VO and drag the DFF VO fields which need to be extracted into the DW into the right-hand target pane.
10. Open the main interface and map the fields from the temporary interface to the target.
11. Save all the objects. Before generating the new scenario, rename the original scenario in the base OOTB folder from which you copied the folder.
12. Navigate to Packages -> Scenarios, right-click on the scenario name and select the 'Generate' option to generate the scenario. Rename the scenario name to use the original out-of-the-box scenario name.
13. Similarly, copy the appropriate Dim or Fact in the SILOS folder to the CUSTOM_SILOS folder, and then map the new DS/FS fields to the D/F table in the main interface. Save. Before generating the new scenario, rename the original scenario in the base OOTB folder from which you copied the folder.
14. Navigate to Packages -> Scenarios, right-click on the scenario name and select the 'Generate' option to generate the scenario. Rename the scenario name to use the original out-of-the-box scenario name.
15. Unit test all the changes.

D. RPD changes
1. Open the rpd in the Admin tool in your dev environment; in the physical layer add the new fields to the modified _D/_F table under the DataWarehouse connection pool.
2. Drag the new fields from the alias to the BMM layer, rename them to give them business names, and drag them to the presentation layer.
3. Run the consistency check and save the rpd.
4. Deploy the modified rpd, restart the BI server, and test the new fields from an analysis.

E. Additional Information
General details about OBIA 11.1.1.7.1 can be found here.
Note: These fixes are to be applied in the right folder; for example, apply them in the SDE_Fusion_Adaptor folder of the ODI repository if you are running the Fusion application. If you have customized the maps mentioned above, please apply the steps above carefully.
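For step 5 of section C (applying the new columns to the warehouse), the change is a simple ALTER TABLE per extended staging and target table. This is a hedged sketch; the table and column names below are placeholders for whichever DS/FS and D/F tables you actually extended, and the data type should match the imported DFF attribute.

--Placeholder names: add the new DFF attribute to the staging and target tables
ALTER TABLE W_PROJ_COST_LINE_FS ADD (DFF_ATTRIBUTE1 VARCHAR2(150 CHAR));
ALTER TABLE W_PROJ_COST_LINE_F  ADD (DFF_ATTRIBUTE1 VARCHAR2(150 CHAR));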


Knowledge

How to Compare RPDs

Author: Vivian(Weibei) Li

Introduction

Comparing RPDs is a necessary and useful process in RPD development. It is worth doing in the following use cases:

You have a customized RPD and want to find out the differences between the customized RPD and the OOTB RPD.
You did some customization on the RPD and want to check whether the customization was done as expected.
The administrator wants to check the differences between a later RPD and older RPDs.
You want to upgrade your RPD and you are not sure whether there are any upgrade issues from one release to another. You can compare the RPDs for the two releases to catch issues before upgrading.

There are many more cases where comparing RPDs is useful. This blog describes how to compare RPDs. There are two ways to do it: one is to use the compare utility and the other is to use the compare dialog in the Admin tool. You can choose either way to compare your RPDs. Here is a comparison of the two methods and some tips:

The RPD compare utility is executed from the command line, so it does not need the Admin tool open. The RPD compare dialog is used directly in the Admin tool.
When two local RPDs are compared, especially when they require different OBIEE versions, the RPD compare utility is recommended.
When modifications are performed on the RPD with the Admin tool and it is to be compared with a benchmark RPD (for example, the original RPD), the RPD compare dialog is recommended.

See the details in the next sections.

Compare RPDs with the Utility

OBIEE provides utilities for RPD comparison, which you can call with a few commands at the Windows command prompt. The steps are listed below.

Step 1 - Choose your OBIEE version

If you compare two releases that need different versions of OBIEE, use the later version of OBIEE to do the comparison. For example, suppose there are two releases, release 1 and release 2, and release 2 uses the later version of OBIEE. To compare the two releases, you need to use the OBIEE version for release 2.

Step 2 - Check out the subset of the RPDs

It is recommended to check out the subset of the RPDs that you want to compare, containing only the projects you are interested in. Comparing the entire RPDs is very inefficient and time-consuming.

Step 3 - Equalize the RPDs

Before comparing the RPDs, you should first equalize them with the equalizerpds utility. The equalizerpds utility equalizes the Upgrade IDs of objects in two separate repositories. If objects have the same Upgrade ID, they are considered to be the same object. The utility compares Upgrade IDs from the first repository (typically the original repository) with Upgrade IDs from the second repository (typically the modified repository). Then the utility equalizes the Upgrade IDs of objects with the same name, using the Upgrade ID from the original repository. equalizerpds.exe can be found under <ORA_HOME>\Oracle_BI1\bifoundation\server\bin.

· Syntax

The equalizerpds utility takes the following parameters:

equalizerpds [-B original_repository_password] -C original_repository_name [-E modified_repository_password] -F modified_repository_name [-J rename_map_file] [-O output_repository_name] [-Y equalStringSet]

· Explanation

Original and modified repository – Use the base repository as the 'original' repository and the repository you want to compare as the 'modified' repository.
For example, if you want to compare release 1 and release 2 RPDs to find out upgrading issues to upgrade  from release 1 to release 2, put release 1 RPD as the original and release 2 RPD as the modified repository. rename_map_file When you equalize objects, you can lose track of object renames because legitimate object renames become different objects. In other words, intentional renames you did in the repository might be changed to different Upgrade IDs, so subsequent merges erroneously treat the renamed object as a new object. To avoid this situation, enter the before and after names of intentionally renamed objects in a rename map file that you then pass to the utility. The equalizerpds utility uses the information in the file to ensure that the original IDs are used in the renamed current objects. rename_map_file is a text file containing a list of objects that were renamed and that you want to equalize. The format is a tab-separated file with the following columns:                     TypeName     Name1     Name2        For example, logical column "Core"."Dim - Customer"."Address4" is re-named as "Core"."Dim - Customer"."Address 4" from Release 1 to Release 2. The file can be written as                     Logical Column "Core"."Dim - Customer"."Address4" "Core"."Dim - Customer"."Address 4" Tip: How to find out the TypeName value? Query your object with Query Repository tool in the admin tool, and you will find the TypeName value in the result. 1.       Open the admin tool. Go to Tools->Query Repository   2.       In the popup dialogue, query your object. 3.       You will find the Type value in the result.   You can put this file in any folder in your machine, and give the absolute path in rename_map_file parameter. See the example below. · An equalization command example equalizerpds -B pwd123 -C C:\rpdcomparison\release1\release1.rpd -E pwd123 -F C:\rpdcomparison\release2\release2.rpd -J C:\rpdcomparison\rename-map-file.txt -O C:\rpdcomparison\release2\equalizedrpd.rpd Step 4 – Compare the RPDs Now you can compare the RPDs with the comparerpd utility. comparerpd.exe can be found under <ORA_HOME>\Oracle_BI1\bifoundation\server\bin. · Syntax The comparerpd utility takes the following parameters: comparerpd [-P modified_rpd_password] -C modified_rpd_pathname [-W original_rpd_password] -G original_rpd_pathname {-O output_csv_file_name | -D output_patch_file_name | -M output_mds_xml_directory_name} -E -8 · Explanation Original and modified repository – Use the base repository as the ‘original’ repository and use repository you want to compare as ‘modified’ repository. The ‘modified’ repository should be the equalized RPD got from step 3. -O output_csv_file_name is the name and location of a csv file where you want to store the repository object comparison output. -D output_patch_file_name is the name and location of an XML patch file where you want to store the differences between the two repositories. -M output_mds_xml_directory_name is the top-level directory where you want to store diff information in MDS XML format. Note: You can specify an output CSV file using -O, an XML patch file using -D, or an MDS XML directory tree using -M. You cannot specify more than one output type at the same time. 
· A comparison command example

comparerpd -P pwd123 -C C:\rpdcomparison\release2\equalizedrpd.rpd -W pwd123 -G C:\rpdcomparison\release1\release1.rpd -O C:\rpdcomparison\results.csv

Compare RPDs with the Compare Dialog

This section describes how to compare RPDs with the compare dialog. The compare dialog must be used with the Admin tool open.

Compare the MUD RPD Before and After Modification

· Open your fixed MUD RPD.
· Go to 'File' -> 'Multiuser' -> 'Compare with Original'. This compares the current modified rpd with the local copy of the original rpd.

Compare local RPDs

You can also use the compare dialog to compare local RPDs. Note: It is recommended to use the compare utility for local RPDs, and remember to extract the subset of the RPDs before comparing; comparing the entire RPDs is time-consuming.

· Open your local RPD, say RPD1, with the Admin tool. This RPD is the base RPD in your comparison.
· Go to 'File' -> 'Compare'.
· Select the repository or XML file in the compare dialog that pops up.
· Enter the password for the repository you want to compare to, say RPD2. RPD2 will be the modified repository.
· The compare dialog opens RPD2, equalizes it, compares it with RPD1, and finally shows the results.
· You can see what an object looks like in RPD1 and RPD2 by clicking 'View 1' and 'Edit 2':
View 1 – the object in RPD1
Edit 2 – the object in RPD2
· Save the diff file as a .CSV file on your local machine.

In summary, RPD comparison is a good tool for RPD development. Using it appropriately builds more confidence in your RPD modifications, fixes and upgrades, and adding RPD comparison as a standard step makes RPD development smoother and less error-prone.


Customization

Configuring Enterprise Calendar for Oracle BI Apps Time Dimension

Author: Chuan Shi Introduction One of the key common dimensions of Oracle BI Apps is the Time dimension. It contains calendars of different natures to support different types of analysis within various subjects in Oracle BI Apps. Different types of Calendar include: Gregorian Calendar Fiscal Calendar (multiple) Enterprise Calendar (unique enterprise wide) The Enterprise Calendar (or reporting calendar) enables cross subject area analysis. The Enterprise Calendar data warehouse tables have the W_ENT prefix. Within a single BI Apps deployment by the customer, only one fiscal calendar can be chosen as the Enterprise Calendar. The purpose of this blog is to explain how to configure the Enterprise Calendar. Configure the Enterprise Calendar The Enterprise Calendar can be set to one of the OLTP sourced fiscal calendars, or to one of the warehouse generated fiscal calendars (e.g., the 4-4-5 calendar and 13 period calendar supported by Oracle BI Apps). This can be done by setting the following source system parameters in the Business Intelligence Applications Configuration Manager (BIACM): GBL_CALENDAR_ID (used to set the ID of the calendar to be used as the Enterprise Calendar) GBL_DATASOURCE_NUM_ID (used to set the DSN of the source from which the Enterprise Calendar is chosen) The following sections show how to set up these two parameters for the Enterprise Calendar in different scenarios. Scenario 1: Using an Oracle EBS fiscal calendar as the Enterprise Calendar GBL_CALENDAR_ID: This parameter is used to select the Enterprise Calendar. In EBS, it should have the format of MCAL_CAL_NAME~MCAL_PERIOD_TYPE. For example, GBL_CALENDAR_ID will be 'Accounting~41' if MCAL_CAL_NAME = 'Accounting' and MCAL_PERIOD_TYPE = '41'. Note 1: MCAL_CAL_NAME and MCAL_PERIOD_TYPE are sourced from PERIOD_SET_NAME and PERIOD_TYPE of the GL_PERIODS table (an Oracle EBS OLTP table). To see a valid list of combinations of MCAL_CAL_NAME~MCAL_PERIOD_TYPE, run the following query in the OLTP: SELECT DISTINCT PERIOD_SET_NAME || '~' || PERIOD_TYPE FROM GL_PERIODS; Note 2: The available EBS calendars are also loaded into the OLAP warehouse table W_MCAL_CAL_D. Therefore, they can be viewed by running the following query in DW: SELECT MCAL_CAL_ID, MCAL_CAL_NAME, MCAL_CAL_CLASS, DATASOURCE_NUM_ID FROM W_MCAL_CAL_D WHERE DATASOURCE_NUM_ID = <the value corresponding to the EBS version that you use>; GBL_DATASOURCE_NUM_ID: For EBS, this parameter should be the DATASOURCE_NUM_ID of the source system from where the calendar is taken. For example, if you are running EBS R11.5.10 and the DATASOURCE_NUM_ID for this source is 310, then you need to set GBL_DATASOURCE_NUM_ID  to 310. GBL_CALENDAR_ID and GBL_DATASOURCE_NUM_ID are set in BIACM, and this will be covered in a later section. Scenario 2: Using a PeopleSoft fiscal calendar as the Enterprise Calendar GBL_CALENDAR_ID: This parameter is used to select the Enterprise Calendar. In PSFT, it should have the format of  SETID~CALENDAR_ID. For example, GBL_CALENDAR_ID will be 'SHARE~01' if SET_ID = 'SHARE' and CALENDAR_ID = '01'. Note 1: SETID and CALENDAR_ID are sourced from the PS_CAL_DEFN_TBL table (a PeopleSoft OLTP table). To see a valid list of combinations of SETID~CALENDAR_ID, run the following query in the OLTP: SELECT DISTINCT SETID || '~' || CALENDAR_ID FROM PS_CAL_DEFN_TBL; Note 2: The available PeopleSoft calendars are also loaded into the OLAP warehouse table W_MCAL_CAL_D. 
Therefore, they can be viewed by running the following query in DW: SELECT MCAL_CAL_ID, MCAL_CAL_NAME, MCAL_CAL_CLASS, DATASOURCE_NUM_ID FROM W_MCAL_CAL_D WHERE DATASOURCE_NUM_ID = <the value corresponding to the PeopleSoft version that you use>; GBL_DATASOURCE_NUM_ID: For PSFT, this parameter should be the DATASOURCE_NUM_ID of the source system from where the calendar is taken. For instance, if you are running PeopleSoft 9.0 FSCM Instance and the DATASOURCE_NUM_ID for this source is 518, then you need to set GBL_DATASOURCE_NUM_ID to 518. Note: OLTP sourced calendars are not supported in PeopleSoft HCM pillars. Therefore, should you want to choose an enterprise calendar if you are running PSFT HCM, you need to choose one of the DW generated calendars (i.e., 4-4-5 or 13 period calendar), which is explained in Scenario 3. Scenario 3: Using a warehouse generated calendar as the Enterprise calendar GBL_CALENDAR_ID: This parameter should be the CALENDAR_ID of the DW generated Calendar (so either 4-4-5 or 13 period calendar). By default, the 4-4-5 calendar has a CALENDAR_ID of '10000', and the 13-period calendar has a CALENDAR_ID of '10001'. GBL_DATASOURCE_NUM_ID: If Enterprise Calendar is a DW generated Calendar, this parameter should be the DATASOURCE_NUM_ID value of the OLAP (Data warehouse). For example, if the DATASOURCE_NUM_ID for your DW is 999, then GBL_ DATASOURCE_NUM_ID should be set to 999. Note 1: Customers can generate additional warehouse generated calendars which can be picked as the Enterprise Calendar. Note 2: The available data warehouse calendars are also loaded into the OLAP warehouse table W_MCAL_CAL_D. Therefore, they can be viewed by running the following query in DW: SELECT MCAL_CAL_ID, MCAL_CAL_NAME, MCAL_CAL_CLASS, DATASOURCE_NUM_ID FROM W_MCAL_CAL_D WHERE DATASOURCE_NUM_ID = <DSN of your DW> Setting GBL_CALENDAR_ID and GBL_DATASOURCE_NUM_ID in BIACM To set GBL_CALENDAR_ID and GBL_DATASOURCE_NUM_ID, log into BIACM, and click on Manage Data Load Parameters from the navigation bar on the left. Once being directed to the Manager Data Load Parameters page, type in GBL_CALENDAR_ID in the parameter field and choose Code as the parameter type. Then click the Search button after which the parameter with its current value will be returned. Refer to the example below, which shows 10000 as the current value of GBL_CALENDAR_ID. To change the value of GBL_CALENDAR_ID, click on its current value, and then an edit dialog pops up.   Provide the desired value in the Parameter Value field (note that you do not need to include single quotes in the value, e.g., use Accounting~41 rather than ‘Accounting~41’), and then click Save and Close to save your change. The new value of GBL_CALENDAR_ID has been set. The setting procedure for GBL_DATASOURCE_NUM_ID is similar. It should be first retrieved by searching for this variable. Once it is returned, click on its current value and then an edit dialog pops up. Change its parameter value there and then save the change. Setting GBL_CALENDAR_ID and GBL_DATASOURCE_NUM_ID in a multi-source ETL In a multi-source ETL run, multiple fiscal calendars from different data sources can be loaded. However, in this case, ONLY ONE calendar can be chosen as the Enterprise Calendar. For example, if you have two data sources, PeopleSoft and Oracle, then you can only choose either a calendar from PeopleSoft or a calendar from Oracle as the Enterprise Calendar. 
The two parameters GBL_CALENDAR_ID and GBL_DATASOURCE_NUM_ID should be set in BIACM according to the calendar that you choose. Never provide more than one value for GBL_CALENDAR_ID or GBL_DATASOURCE_NUM_ID in BIACM; doing so will cause the ETL run to fail.
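As a hedged sanity check after setting the two parameters, you can confirm that the calendar you pointed to exists in the warehouse calendar dimension for that source. This assumes that MCAL_CAL_ID in W_MCAL_CAL_D carries the same composite identifier you supplied for GBL_CALENDAR_ID (as the earlier W_MCAL_CAL_D queries suggest); if it does not in your release, filter on MCAL_CAL_NAME instead. The literal values below are the examples from Scenario 1.

--Verify the configured Enterprise Calendar exists for the configured source
SELECT MCAL_CAL_ID, MCAL_CAL_NAME, MCAL_CAL_CLASS, DATASOURCE_NUM_ID
FROM W_MCAL_CAL_D
WHERE MCAL_CAL_ID = 'Accounting~41'   --value set for GBL_CALENDAR_ID
  AND DATASOURCE_NUM_ID = 310;        --value set for GBL_DATASOURCE_NUM_ID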


Knowledge

Configure summary calendar file (file_summary_calendar.csv) for PeopleSoft ETL

Author: Chuan Shi Purpose and Audience This article explains how to configure an ETL source file for time dimension from PeopleSoft (PSFT) adaptors, i.e., file_summary_calendar.csv. This file is needed to populate the fiscal quarter information for fiscal calendars sourced from PSFT. It applies to all Oracle BI Apps customers who use PSFT sources. Oracle BI Apps supports calendars of different natures, including Gregorian calendar, fiscal calendar, and enterprise calendar. Fiscal calendars are stored in W_MCAL_%_D data warehouse tables, where W_MCAL_PERIOD_D is the most important table (loaded from the corresponding staging table W_MCAL_PERIOD_DS). Among many attributes of this table is the MCAL_QTR column which stores fiscal quarter information. The mapping logic of this column for PSFT OLTPs requires some special design. In what follows, we explain what needs to be done to ensure MCAL_QTR being populated correctly. Populate MCAL_QTR (fiscal quarter information) for PSFT sourced fiscal calendars In PSFT, there are concepts of detailed calendars (stored in PS_CAL_DETP_TBL) and summary calendars (stored in PS_CAL_SUMP_TBL). Definitions of all calendars  are stored in PS_CAL_DEFN_TBL, where one can check if a calendar is a detailed or summary calendar. Detailed fiscal calendars are loaded into W_MCAL_%_D tables. However, the detailed calendars by themselves do not have information about fiscal quarter. Therefore, for MCAL_QTR, an ETL logic is used to correctly allocate each period of a given detailed fiscal calendar to one of the four fiscal quarters (so that its MCAL_QTR is populated), by using a summary calendar that corresponds to the detailed calendar. As a result, the prerequisite of this ETL logic is to correctly connect a detailed calendar to a summary calendar. However, such dependency is not available from the PSFT sources, and it is a customer  decision that determines which detailed calendar should be associated with which summary calendar. Therefore, in our design, we create a source file called file_summary_calendar.csv. Customers are required to configure this csv file by associating each applicable detailed calendar to an appropriate summary calendar (i.e., each row in the file is a relationship between a paired detailed and summary calendar). This csv file is loaded into a DW table called W_MCAL_PSFT_SUMP_CONFIG_G. Consequently, this table contains relationships of different detailed calendars and their summary calendars. It is in turn used to help populate MCAL_QTR in W_MCAL_PERIOD_DS. To summarize, in order to have MCAL_QTR correctly populated for a detailed calendar, you need: A detailed calendar defined in PSFT source; A summary calendar defined in PSFT source that is appropriate for the detailed calendar; Put an entry of the pair of detailed and summary calendar in file_summary_calendar.csv file. Examples of how to configure the csv file is provided in the appendix. Commonly known reasons for MCAL_QTR being unpopulated or populated incorrectly A detailed calendar and its corresponding summary calendar should be synchronized perfectly in PSFT OLTP source data and their pair should be put in this csv file for MCAL_QTR to be populated correctly. However, not all detailed calendars need to be put into this csv file, because the fiscal quarter (MCAL_QTR) concept does not apply to some detailed calendars by nature (we will provide examples below). Unsynchronized detailed and summary calendars lead to MCAL_QTR being populated incorrectly or unpopulated. 
There are four commonly known reasons why MCAL_QTR is unpopulated or incorrectly populated for a fiscal period. They are due either to source data issues or to the configuration of the csv file. The four cases are:

Case 1: The fiscal quarter concept is not applicable to a detailed calendar. For example, if a detailed calendar is a year-level calendar, then each row of this calendar in W_MCAL_PERIOD_DS is a fiscal year. It does not have a granularity lower than year, so it is not possible (and does not make sense) to populate quarter information for this calendar.

Case 2: Not all the years of a detailed calendar are covered by the corresponding summary calendar, so MCAL_QTR is null for the periods in the uncovered years. For example, the detailed calendar covers fiscal years 1990 to 2020, while its paired summary calendar only covers fiscal years 2001 to 2020. In this case, the periods between 1990 and 2000 will not have MCAL_QTR populated.

Case 3: The summary calendar is not suitable for the detailed calendar. This means that the quarter allocation ETL logic fails to correctly allocate the periods of the detailed calendar into one of the four fiscal quarters.

Case 4: Entries for appropriate detailed and summary calendars are missing in the csv file.

Several examples of detailed calendars are provided below. Refer to the Comment column to see whether they should be put in the csv file along with an appropriate summary calendar.

Detailed Calendar Examples:

When you have to create a summary calendar for one or more detailed calendars (so that they can be put in the csv file), the key is that in any given year, all the periods of the corresponding detailed calendar (so in a weekly calendar there are 52 or 53 periods) should be correctly allocated to the four fiscal quarters. This requires the summary calendar to have correct values in DETL_PERIOD_FROM and DETL_PERIOD_TO of PS_CAL_SUMP_TBL.

For example, if you want to create a summary calendar for all your detailed weekly calendars, then the data entries in PS_CAL_SUMP_TBL for a given year (say 1990) should look like the first part of the sketch below. This is because this summary calendar will be used to assign fiscal quarters to those weekly detailed calendars, and such a calendar has 52 or 53 periods (i.e., 52 or 53 weeks) in a given fiscal year. So, the DETL_PERIOD_FROM and DETL_PERIOD_TO values indicate that periods 1-13 (of a weekly detailed calendar) belong to fiscal quarter 1, periods 14-26 belong to fiscal quarter 2, etc. As you can see from this example, this summary calendar can be used for multiple weekly detailed calendars. Therefore, you don't have to create different summary calendars for different weekly detailed calendars. However, for each detailed weekly calendar, you have to put an entry for it, together with this summary calendar, in the csv file.

As a comparison, if instead you need to create a summary calendar to be used for monthly detailed calendars, you may expect its DETL_PERIOD_FROM and DETL_PERIOD_TO (for a given year, say 1990) to look like the second part of the sketch below. This is because this summary calendar is used to assign quarters to those monthly detailed calendars, and a monthly calendar only has 12 periods (i.e., 12 months) in a given year. So, the DETL_PERIOD_FROM and DETL_PERIOD_TO values allow periods 1-3 (of a detailed monthly calendar) to have fiscal quarter 1, periods 4-6 to have fiscal quarter 2, etc. Again, you can use this summary calendar for multiple monthly detailed calendars that you may have, but each of them requires an entry in the csv file.
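For reference, the following sketch shows what the period-range values described above could look like. Only DETL_PERIOD_FROM and DETL_PERIOD_TO are taken from this post; the surrounding layout (the summary period column and the choice of a 53-week year) is illustrative and should be verified against PS_CAL_SUMP_TBL in your PSFT environment.

Summary calendar used with weekly detailed calendars (fiscal year 1990):

Summary Period   DETL_PERIOD_FROM   DETL_PERIOD_TO
1                1                  13
2                14                 26
3                27                 39
4                40                 53

Summary calendar used with monthly detailed calendars (fiscal year 1990):

Summary Period   DETL_PERIOD_FROM   DETL_PERIOD_TO
1                1                  3
2                4                  6
3                7                  9
4                10                 12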
Appendix: Configuring file_summary_calendar.csv

This csv file has the following header:

Description,Contains static fiscal period (weeks) information,,,,,,,,,,,,
Examples,The flat file can provide data for the FISCAL_YEAR- FISCAL_MONT- FISCAL_WEEK_NUM and FISCAL_WEEK_START_DT as 1996- 03- 14- and 03121996,,,,,,,,,,,,
Last Updated By,Siebel Analytics R&D,,,,,,,,,,,,
Last Updated On,10/17/2001 20:13,,,,,,,,,,,,
DETAIL_CALENDAR_SETID,DETAIL_CALENDAR_ID,SUMMARY_CALENDAR_SETID_QTR,SUMMARY_CALENDAR_QTR,SUMMARY_CALENDAR_SETID_YEAR,SUMMARY_CALENDAR_YEAR,SUMMARY_CALENDAR_SETID_MONTH,SUMMARY_CALENDAR_MONTH,SUMMARY_CALENDAR_SETID_HALF,SUMMARY_CALENDAR_HALF,W_INSERT_DT,W_UPDATE_DT,TENANT_ID,X_CUSTOM

The first four columns are the most important ones. Suppose that you have a monthly detailed calendar whose SETID = 'SHARE' and CALENDAR_ID = '01', as well as a weekly detailed calendar whose SETID = 'SHARE' and CALENDAR_ID = '02'. Their corresponding summary calendars have SETID = 'SHARE' and CALENDAR_ID = 'QM', and SETID = 'SHARE' and CALENDAR_ID = 'QW', respectively. Then, you need to put the following two entries in the csv file (after the header):

SHARE,01,SHARE,QM,,,,,,,,,,
SHARE,02,SHARE,QW,,,,,,,,,,

Your final csv file therefore becomes:

Description,Contains static fiscal period (weeks) information,,,,,,,,,,,,
Examples,The flat file can provide data for the FISCAL_YEAR- FISCAL_MONT- FISCAL_WEEK_NUM and FISCAL_WEEK_START_DT as 1996- 03- 14- and 03121996,,,,,,,,,,,,
Last Updated By,Siebel Analytics R&D,,,,,,,,,,,,
Last Updated On,10/17/2001 20:13,,,,,,,,,,,,
DETAIL_CALENDAR_SETID,DETAIL_CALENDAR_ID,SUMMARY_CALENDAR_SETID_QTR,SUMMARY_CALENDAR_QTR,SUMMARY_CALENDAR_SETID_YEAR,SUMMARY_CALENDAR_YEAR,SUMMARY_CALENDAR_SETID_MONTH,SUMMARY_CALENDAR_MONTH,SUMMARY_CALENDAR_SETID_HALF,SUMMARY_CALENDAR_HALF,W_INSERT_DT,W_UPDATE_DT,TENANT_ID,X_CUSTOM
SHARE,01,SHARE,QM,,,,,,,,,,
SHARE,02,SHARE,QW,,,,,,,,,,
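The queries below can help when filling in and verifying the file: the first two list the calendar IDs you can pair up, and the third flags fiscal periods that still have no quarter after the ETL run. This is a sketch rather than shipped product code: apart from PS_CAL_DETP_TBL, PS_CAL_SUMP_TBL, W_MCAL_PERIOD_D, DETL_PERIOD_FROM/DETL_PERIOD_TO and MCAL_QTR, which this post names, the column names (SETID, CALENDAR_ID, FISCAL_YEAR, MCAL_CAL_WID, MCAL_PERIOD_NAME) are assumptions to verify against your PSFT and DW schemas.

--Detailed calendars available in the PSFT source (candidate values for the first two csv columns)
SELECT DISTINCT SETID, CALENDAR_ID
FROM PS_CAL_DETP_TBL
ORDER BY SETID, CALENDAR_ID;

--Summary calendars and their period ranges (candidate values for the summary calendar columns)
SELECT SETID, CALENDAR_ID, FISCAL_YEAR, DETL_PERIOD_FROM, DETL_PERIOD_TO
FROM PS_CAL_SUMP_TBL
ORDER BY SETID, CALENDAR_ID, FISCAL_YEAR;

--After the ETL run: fiscal periods that still have no fiscal quarter populated
SELECT MCAL_CAL_WID, MCAL_PERIOD_NAME
FROM W_MCAL_PERIOD_D
WHERE MCAL_QTR IS NULL
ORDER BY MCAL_CAL_WID;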



Introduction and Tips of Oracle BI Apps Variable Refresh

Author: Chuan Shi

Introduction

The ETL logic in BI Apps uses parameters in packages, interfaces, load plans, and knowledge modules (KM) to control the ETL behavior. Parameters can be configured by customers, pre-seeded in the ETL code, or maintained internally:

Data load parameters that can be configured by customers are maintained at product line (PLV) and fact/dimension group levels in the BI Apps Configuration Manager (BIACM).
Some parameters are pre-seeded in the ETL code OOTB, e.g. DATASOURCE_NUM_ID.
Other parameters are used internally by the load plan to control the execution of the load plan, e.g. EXECUTION_ID.

ETL parameters are handled by using ODI variables. The purpose of this blog is to explain ODI variable refreshing to ensure that correct values of variables are used in Oracle BI Apps ETL.

ODI Variable Classification

To ensure correct values are used in ETL, variables are refreshed at run time. A variable can be refreshed in either a package or a load plan. Variables can be classified into four categories based on whether and where they are refreshed in ODI (and if a variable needs to be refreshed, there must be a refreshing logic specified for it, which is discussed later). The four categories are:

Not refreshed
Definition: Generally, internal parameters used by the load plan to control the ETL execution; therefore they shall not be refreshed.
Examples: DATASOURCE_NUM_ID, EXECUTION_ID

Refreshed in the package
Definition: Variables that cannot feasibly be refreshed in load plans. (The refreshing logic of such a variable depends on ETL run facts. It sources from DW tables and uses the QUALIFY user-defined function (UDF). ODI fails to interpret QUALIFY at the load plan level in some cases.)
Examples: IS_INCREMENTAL, LAST_EXTRACT_DATE

Hardcoded in the load plan
Definition: Variables whose values are hardcoded by overwriting in load plans. The variables take the hardcoded values in ETL.
Examples: DOMAIN_CODE, LOOKUP_TYPE

Refreshed in the load plan
Definition: Variables whose values are configured in BIACM by customers. (In other words, the values of these variables come from BIACM. The refreshing logic of such a variable uses a UDF to extract its value from BIACM.)
Examples: UPDATE_ALL_HISTORY, TYPE2_FLG

For a complete list of ODI variables, please refer to this post.

Refreshing Variables in a Package

Refreshing variables in a package is straightforward. One needs to create Refresh Variable steps for the variables to be refreshed. The screenshot below shows examples of refreshing IS_INCREMENTAL.

Hard-coding Variables in a Load Plan

Some variables, such as DOMAIN_CODE and LOOKUP_TYPE, are hardcoded in load plan components. To do that, go to the load plan component in which the variables are to be hardcoded, select the Overwrite checkbox, and provide the hardcoded values for the variables. The screenshot below shows examples of hardcoding DOMAIN_CODE and LOOKUP_TYPE.

Refreshing Variables from BIACM

BIACM is a central UI where customers can define the values of data load parameters (i.e., ODI variables), along with many other features it offers. ODI variables that are refreshed in load plans have their values extracted from BIACM. We also refer to such variables as BIACM variables. BIACM variables are classified into truly global variables, PLV specific variables, and fact/dimension group level variables.
A truly global variable (e.g., 13P_CALENDAR_ID) is a variable that has the same value in all product lines (i.e., EBS11510, PSFT90, etc.) and for all fact/dimension groups. Truly global variables are refreshed centrally in the load plan system components.

A PLV specific variable (e.g., LANGUAGE_BASE) is a variable that takes the same value for all fact/dimension groups within a product line, but different values of the variable can be used in different product lines. These variables are refreshed individually in the consuming load plan dev components.

A fact/dimension group level variable (e.g., UPDATE_ALL_HISTORY) is group specific. It can take different values in different fact/dimension groups within the same PLV and across different PLVs. These variables are refreshed individually in the consuming load plan dev components.

From a variable value overriding perspective:

A truly global variable has a unique value. PLVs and fact/dimension groups cannot override its value.
A PLV variable has product line specific values (e.g., LANGUAGE_BASE takes the value US in EBS product lines but ENG in PSFT product lines). The value is the same for all fact/dimension groups within that product line.
A fact/dimension group level variable has group specific values (e.g., TYPE2_FLG has the value Yes in Position Dimension Hierarchy, while it has the value No in Asset Dimension). Such a variable also has a global default value. If a fact/dimension group does not specify the value of such a variable, the global default value is used whenever this variable is called by that group (e.g., the global default value of TYPE2_FLG is No).

These variables are defined in ODI. To ensure that variable refreshing works correctly, there are some rules on the definitions of ODI variables:

Set the ‘Keep History’ option to ‘No History’;
Always provide a default value. (The default value is picked if refreshing from BIACM does not return a value for some reason; otherwise the ETL will fail.) As a good practice, the ODI default value of the variable can be set to the same as the global value of the variable in BIACM.

(Whenever the Keep History option or the Default Value of a variable is changed, the scenarios that use this variable need to be regenerated.)

Once ODI variables are defined, a refreshing logic is needed to refresh them from BIACM. In this regard, the ODI UDF GET_CM_PARAM is used. To return the correct value for a variable, we need to specify the following in the refreshing logic: the variable name, the product line, and the fact/dimension group.

Syntax: getcmparam($(param_code),$(DATASOURCE_NUM_ID))

$(param_code) is the name of the variable (e.g., TYPE2_FLG).
$(DATASOURCE_NUM_ID) is used to specify the product line.

For PLV/group level variables, we pass #DATASOURCE_NUM_ID as $(DATASOURCE_NUM_ID), e.g., getcmparam('TYPE2_FLG','#DATASOURCE_NUM_ID')
For truly global variables, we pass #WH_DATASOURCE_NUM_ID as a pseudo-PLV ID, e.g., getcmparam('13P_CALENDAR_ID','#WH_DATASOURCE_NUM_ID')

Do not pass the fact/dimension group directly into the syntax. It is determined by where the variable is refreshed.

BIACM variables are refreshed in load plans. To refresh a variable in a load plan, the following three steps are required (they have been done OOTB):

Step 1: Specify the correct logical schema and refreshing logic in the refreshing tab of the variable definition. The logical schema has to be CM_BIAPPS11G.
The refreshing logic should be getcmparam() with appropriate inputs, e.g., getcmparam('13P_CALENDAR_ID','#WH_DATASOURCE_NUM_ID').

Step 2: Update the variable definition in the Variables tab of the load plan. Go to the load plan component where you want to refresh the variables. In the Variables tab, right click on the variables and select ‘Refresh Variable Definition’ so that the variable definition in the load plan is synchronized with its real definition. Once this is done, verify that the logical schema shows CM_BIAPPS11G and that the select statement shows the SQL embedded in the getcmparam() function.

Step 3: Check the refresh checkbox at the appropriate load plan step. For truly global variables, Step 3 is performed at the load plan system component step, since they are refreshed centrally there.

The logic behind getcmparam() guarantees that the appropriate value of the variable is returned from BIACM, given the name of the variable, the DATASOURCE_NUM_ID passed in, and the load plan component step where it is refreshed.

Values stored in BIACM are strings, so all ODI variables refreshed from BIACM come in as strings. Each of the consuming codes (where the variables are used) should convert the data type accordingly. For example, dates are returned as strings in the format yyyy-mm-dd hh:mi:ss, and the TO_DATE_VAR UDF is used to convert the returned string to DATE format. Number values are returned as strings as well.

Checklist when Things Go Wrong

What can go wrong?

The value of a variable used in ETL is not in line with expectations.
A variable refreshed has no value returned, and it fails the ETL run.

Overriding Rule (1): In a load plan, when a variable is refreshed in a parent step (e.g., the root step), its value is inherited by all its child steps, unless the variable is refreshed or overwritten in a child step. If a variable is refreshed and/or overwritten in a child step, the value from that step overrides the value refreshed from the parent step. Other child steps at the same level are NOT affected; they still inherit the value refreshed in the parent step.

Overriding Rule (2) (unlikely to happen, but it exists): If a variable is refreshed both in a package and in a load plan, the value refreshed from the package overrides the value refreshed from the load plan.

When the value of a variable returned from BIACM is not in line with expectations:

Confirm where the variable is refreshed, e.g., BIACM or ETL tables in the DW.
For BIACM PLV or group level variables, check its value(s) in the BIACM UI. For PLV variables, check the value in each product line; for group level variables, check the group specific values as well as the global default value.
Check whether the variable is refreshed in a root step of a load plan (refresh checkbox checked). At the same time, check whether the root step is named after a fact/dimension group.
Check whether the variable is incorrectly refreshed or hardcoded in a child step belonging to the root step (to avoid overriding rule 1).
Check the ODI default value of this variable. If BIACM returns (null), i.e., nothing, for this variable, its ODI default value is used. Also, if the overwrite box (but not the refresh checkbox) of a variable is checked in a load plan step but no value is provided, the ODI default value is used. Note: overriding rule (2) is unlikely to happen to BIACM variables.

For variables refreshed from ETL tables in the DW, an incorrect value is likely to indicate a run issue. Check the run of that specific task.

When a variable has no value:

Confirm where the variable should be refreshed, e.g., BIACM or ETL tables in the DW. In rare cases, a variable may not have a value returned when it is refreshed, and this leads to ETL failures. ODI behaves like this: it first refreshes the variable from its designated source (e.g., BIACM). If the source returns (null), i.e., nothing, for the variable, the ODI default value of the variable is used in ETL. However, if the ODI default value is not provided, then the variable will not have a value.
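To tie the pieces above together, here is a minimal sketch of the refreshing logic and of consuming a refreshed value. The two getcmparam() calls are taken from this post; the final filter line is hypothetical (the column name and the variable used there are made up for illustration) and only shows why TO_DATE_VAR is needed when a string value returned from BIACM must be compared against a DATE column.

--Refreshing logic for a fact/dimension group level variable (entered in the variable's refreshing tab, logical schema CM_BIAPPS11G)
getcmparam('TYPE2_FLG','#DATASOURCE_NUM_ID')

--Refreshing logic for a truly global variable
getcmparam('13P_CALENDAR_ID','#WH_DATASOURCE_NUM_ID')

--Hypothetical consuming filter in an interface: the value arrives as the string 'yyyy-mm-dd hh:mi:ss', so it is converted before the comparison
... AND LAST_UPDATE_DT > TO_DATE_VAR('#INITIAL_EXTRACT_DATE')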



Tips and Usage of Group Account Number Configuration

Author: Sridhar Kasam

Introduction

The general concept discussed in this blog applies to all OBIA releases. The specific screenshots of Configuration Manager are only available starting with 11.1.1.7.0. When you implement Financial Analytics, one of the most important steps is to set up Group Account Numbers. Many issues are reported due to incorrect setup of Group Account Numbers.

- What is Group Account Number?

o Group Account Number logically groups GL natural accounts into reportable group(s), so users can view financial reports at a higher level than that of a GL natural account. Example: assign natural account range 1210 - 1240 to Group Account Number "AR" (Accounts Receivable).

CHART OF ACCOUNTS ID   FROM ACCT   TO ACCT   GROUP_ACCT_NUM
101                    1110        1110      CASH
101                    1210        1240      AR
101                    1280        1280      OTHER CA
101                    1340        1340      PPAID EXP

o Group Account Number is used to establish the relationship between GL natural accounts and Financial Statement Item Codes. Financial Statement Item Codes are mapped to base fact tables for the GL reconciliation process. Example: Group Account Number, Financial Statement Item Code and base fact table.

GROUP_ACCT_NUM   FIN_STMT_ITEM_CODE   BASE FACT TABLE
ACC DEPCN        OTHERS               W_GL_OTHER_F
ACC LIAB         OTHERS               W_GL_OTHER_F
AP               AP                   W_AP_XACT_F
AR               AR                   W_AR_XACT_F
CASH             OTHERS               W_GL_OTHER_F
CMMN STOCK       OTHERS               W_GL_OTHER_F
COGS             COGS                 W_GL_COGS_F
REVENUE          REVENUE              W_GL_REVN_F

- How to configure Group Account Number(s) and Financial Statement Item Code(s)?

o Group Account Numbers are defined / configured through a configurable csv file. In the case of Oracle the file name is file_group_acct_codes_ora.csv; for PeopleSoft it is file_group_acct_codes_psft.csv; and for JDE it is file_group_acct_codes_jde.csv. Users specify which GL natural accounts are assigned to a particular group account within a chart of accounts, GL business unit, or company for Oracle, PeopleSoft and JDE sources respectively (sample rows for the Oracle file are shown at the end of this post). Please ensure that the account ranges are continuous without any overlaps. If necessary you can have multiple ranges and/or accounts assigned to the same group account. Additionally, please ensure that the accounts provided are all leaf accounts / ranges and not parent nodes / ranges.

o By mapping GL accounts to group account numbers and then associating group accounts to a Financial Statement Item Code, an indirect association is built between GL accounts and Financial Statement Item Codes as well. In BI Apps 11.1.1.7.1 or later, the association of group account numbers to Financial Statement Item Codes is performed in Oracle BI Applications Configuration Manager. In earlier releases such as 7.9.6.3, this configuration was done using a configurable file, file_grpact_fstmt.csv. Note: customers should not modify the mapping relationship between Group Account Number and Financial Statement Item Code for Group Account Numbers AP, AR, COGS and REVENUE.

o It is not always necessary to assign all out-of-the-box group account numbers to account ranges. If customers are not planning to use the logical metrics under GL that use Group Account Number in their metric definitions, they do not need to configure the majority of the Group Account Numbers. However, if they are still planning to implement Payables, Receivables, Revenue, or COGS facts, they still need to configure Group Account Numbers AP, AR, REVENUE, or COGS, because these Group Account Numbers are also used for the GL reconciliation process against these facts.
o If for some reason the out-of-the-box group account numbers are not sufficient, then the high level steps to add additional group accounts are (11g):

§ Define new Group Account Numbers in Configuration Manager.
§ Assign the newly created Group Account Number to a financial statement item code (e.g. AP, AR, REVENUE, COGS, OTHERS).
§ Assign GL accounts to the Group Account in the source specific csv file.
§ Model the RPD to include a new group account measure and expose it in the presentation layer.

- Why is it important?

Group Account Number configuration is important because it determines whether the right GL accounts are assigned to the group account number(s). In other words, it determines the accuracy of most of the reports in Financial Analytics where Group Account Number is part of the report. Group Account Numbers, in combination with Financial Statement Item Codes, are also leveraged in the GL reconciliation process to ensure that sub ledger data reconciles with GL journal entries.

Group Account Number Usage in Financial Analytics

- GL reports

For "Accounts Receivable" in balance sheet reports, we calculate the total amount for GL natural accounts from 1210 to 1240 for ledgers with chart of accounts id 101.

o Because of this association, and the subsequent configuration to expose the group account number metric in the RPD, users can report on the Group Account Number metric. For example, in a Balance Sheet report, users can report on "AR Amount" from "Fact - Fins - GL Balance" (logical fact) corresponding to the "AR" group account.

- Reconciliation

o Because of the indirect relationship built between GL accounts and Financial Statement Item Codes, it is possible to perform GL reconciliation to ensure that the sub ledger data reconciles with GL journal entries. For example, after an invoice has been transferred to GL, the user might decide to adjust the invoice in GL. In that case, it is important that the adjustment amount is trickled down to the sub ledger base fact along with the balance fact. To identify such sub ledger transactions in GL, the reconciliation process uses Financial Statement Item Codes.

Common Issues due to Incorrect Group Account Number Setup

1. Sub ledger transactions are not posted. Group account numbers are associated with financial statement item codes. The GL reconciliation process uses the financial statement item codes to identify the sub ledger fact tables to be reconciled with GL journals. If the group account number assignment to a GL account is incorrect, sub ledger facts remain "unposted". As sub ledger balance facts are based on posted transactions, AP balance or AR balance reports, for example, may return incorrect results. For example: GL account "1210" is supposed to be associated with the "AR" (Accounts Receivable) group account but was mistakenly associated with "AP" (Accounts Payable). In that case, as part of the ETL process, all the GL journal lines for account 1210 are reconciled against sub ledger accounting records in the AP fact. But in reality these GL journal lines came from AR and not AP, so because the corresponding entries could not be found in "AP", they remain "unposted".

2. Unnecessary manual records are created in sub ledger facts. When a group account number is incorrectly assigned to an account, unnecessary records with transaction type "Manual" are created in sub ledger facts (e.g. W_AP_XACT_F, W_GL_REVN_F, etc.). Those records do not have sub ledger transaction details (e.g. supplier, customer, transaction number, and so on). Following the same example above, because the corresponding entries could not be found in "AP", the ETL process inserts "Manual" records into the AP fact because it assumes these entries are manual journal entries created directly in the GL system.

3. GL reports return incorrect results. When group account number assignments are not correct, GL metrics such as "AP" and "AR" from the above example will show incorrect amounts. Therefore, whenever you see a GL metric showing an incorrect amount, the first thing you should check is whether the group account number used by that metric has the correct GL account ranges assigned to it.
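Two quick references that may help when setting up and debugging Group Account Numbers. The csv rows below restate the chart-of-accounts table from earlier in this post in file format; they only illustrate the row layout, so check the header row shipped in your copy of file_group_acct_codes_ora.csv before editing it. The query is a sketch only: W_GL_ACCOUNT_D and its GL_ACCOUNT_NUM, GROUP_ACCOUNT_NUM and FIN_STMT_ITEM_CODE columns are assumptions about the GL account dimension that are not named in this post, so verify the names against your OBIA release.

Illustrative rows for file_group_acct_codes_ora.csv:

CHART OF ACCOUNTS ID,FROM ACCT,TO ACCT,GROUP_ACCT_NUM
101,1110,1110,CASH
101,1210,1240,AR
101,1280,1280,OTHER CA
101,1340,1340,PPAID EXP

Sketch of a warehouse check after the ETL run, to review which GL natural accounts landed in a given group account:

SELECT GL_ACCOUNT_NUM, GROUP_ACCOUNT_NUM, FIN_STMT_ITEM_CODE
FROM W_GL_ACCOUNT_D
WHERE GROUP_ACCOUNT_NUM IN ('AP','AR')
ORDER BY GROUP_ACCOUNT_NUM, GL_ACCOUNT_NUM;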



Extracting PSFT Chartfield Hierarchy Data in OBIA 11.1.1.7.1

Author: Akshay Satyendranath

BI Apps 11.1.1.7.1 supports 26 chartfields, with 3 of them considered mandatory chartfields that are mapped to specific dimensions as shown below.

Department - Cost Center Dimension
Fund - Balancing Segment Dimension
Account - Natural Account Dimension

All the remaining chartfields are mapped to one specific dimension, which is the GL Segment Dimension. The physical table is shared for all chartfields, but there are multiple logical dimensions in the RPD that you can use. You will need to apply the appropriate filters on these logical dimensions to filter the data relevant to those chartfields.

If you want to extract the trees defined for these chartfields into the DW, you will need to configure some parameters as explained below. For each chartfield, there are two sets of parameters that you can configure:

· List of Tree Structures.
· List of Trees within the Tree Structures configured in 1.

The first parameter is considered mandatory, and you will need to configure it by giving a comma separated list of all the Tree Structures you need to extract for that chartfield. The second parameter is considered optional. If you don't configure it, all the Trees pertaining to the Tree Structures configured in 1 are extracted. If you need only specific trees to be extracted within those tree structures, configure this parameter by giving a comma separated list of SETID + Tree Name. The actual parameter names for each dimension are given in the table below.

Dimension            Parameter Names
Cost Center          TREE_STRUCT_ID_LIST, TREE_SETID_NAME_LIST
Balancing Segment    TREE_STRUCT_ID_LIST, TREE_SETID_NAME_LIST
Natural Account      TREE_STRUCT_ID_LIST, TREE_SETID_NAME_LIST
GL Segment           TREE_STRUCT_ID_LIST_<CF>, TREE_SETID_NAME_LIST_<CF>

Since the GL Segment dimension supports multiple chartfields, there is one set of parameters for each chartfield, and you will need to configure as many parameters as the number of chartfields you need in BI. For example, if you have configured the Program chartfield, then you will need to configure TREE_STRUCT_ID_LIST_PROGRAM and TREE_SETID_NAME_LIST_PROGRAM. Screenshots for one such set of parameters, along with sample values, are given below.

The following are the FSM tasks you need to configure for setting these parameters:

1) Configure Data Load Parameters for People Soft Trees
2) Configure Trees to be extracted for GL Account Chartfields for PeopleSoft

The following section gives some sample SQLs that you can use as a reference to get the values of these parameters for each chartfield. Please note that these are to be used only as references, and you will have to validate the parameter values before you configure them.

1) For Tree Structures:

SELECT TREE_STRCT_ID FROM PSTREESTRCT WHERE DTL_RECNAME=<Detail Table of your CF>;

E.g. for the Department chartfield, which maps to the Cost Center Dimension, you can use:

SELECT TREE_STRCT_ID FROM PSTREESTRCT WHERE DTL_RECNAME='DEPT_TBL';

2) For Trees within Tree Structures:

SELECT A.SETID||'~'||A.TREE_NAME FROM PSTREEDEFN A, PSTREESTRCT B WHERE A.TREE_STRCT_ID=B.TREE_STRCT_ID AND B.DTL_RECNAME=<Detail Table of your CF>;

E.g. for the Department chartfield, which maps to the Cost Center Dimension, you can use:

SELECT A.SETID||'~'||A.TREE_NAME FROM PSTREEDEFN A, PSTREESTRCT B WHERE A.TREE_STRCT_ID=B.TREE_STRCT_ID AND B.DTL_RECNAME='DEPT_TBL';
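To make the configuration concrete, suppose the two queries above returned a tree structure named DEPARTMENT and a single tree SHARE~DEPARTMENTS for the Department chartfield; these names are hypothetical and used only for illustration. The corresponding parameter values for the Cost Center Dimension would then be entered as comma separated lists along these lines:

TREE_STRUCT_ID_LIST = DEPARTMENT
TREE_SETID_NAME_LIST = SHARE~DEPARTMENTS

If two trees within the same tree structure were needed, the second parameter would simply list both, e.g. SHARE~DEPARTMENTS,SHARE~DEPT_SECURITY (again, hypothetical names). Validate the actual values returned by the queries in your environment before entering them in the FSM tasks listed above.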