
The Oracle BI Applications Blog provides the latest tips and tricks on the Oracle BI Applications (OBIA) product.

BICS-DataSync use steps for OBIA on PaaS

Contents

Scope

Business Needs and Overview

1. Create the placeholder SDS schema using RCU

2. Run the python script to create datasource in WebLogic Server and wiring in ODI

3. Set up the source connection in BIACM

4. Run the SDS DDL generate scenario to create SDS objects in the SDS schema

5. Datasync - Configure target-auto-columns.txt file

6. Datasync - Create connections for the target SDS schema and source OLTP in the Datasync tool

7. Datasync - Import source tables in Datasync

8. Datasync - Define load strategy for tables

9. Datasync - Datasync property changes

10. Datasync - Create a job and execute it to load data from the selected source to the target SDS schema

Pros/Cons

 

Disclaimer

The following is intended for informational purposes only. It is not a recommendation to use Datasync to populate the Source Dependent Schema (SDS). Datasync is one of several options for populating the SDS, and it is entirely the customer's responsibility to make it work up to the SDS.

Scope

BIApps versions 11.1.1.10.1 and 11.1.1.10.2

Oracle BICS Data Sync 2.2

Applicable to PeopleSoft, E-Business Suite, Siebel, and other on-premises sources

Business Needs and Overview

For Oracle BI Applications on PaaS, the Oracle BI Applications components are deployed across different Oracle Public Cloud (OPC) offerings such as PaaS, IaaS, DBaaS, and BICS.

The OBIA-related database schemas are maintained in DBaaS. It may be necessary to replicate the on-premises source data to DBaaS using a replication tool such as GoldenGate or Datasync. The summary steps below describe how to use Oracle Business Intelligence Cloud Service Data Sync to move your on-premises data to DBaaS.

 

1. Create the placeholder SDS schema using RCU

The SDS is a separate schema, usually stored in the same database as the Oracle Business Analytics Warehouse, which contains data extracted from an OLTP schema on a separate machine. The OLTP schema is treated as the source, and the SDS schema as the target, of the replication tool processes that maintain the replicated SDS. To create an SDS schema, use the Business Analytics Applications Suite RCU; this creates an empty placeholder SDS schema for the selected source.

2. Run the python script to create datasource in WebLogic Server and wiring in ODI

http://docs.oracle.com/applications/biapps102/install/GUID-F76321CA-24EE-4B1A-BC68-D247D5C18487.htm#BIAIN-GUID-F76321CA-24EE-4B1A-BC68-D247D5C18487

3. Set up the source connection in BIACM

Follow Business Intelligence Applications Installation Guide

4. Run the SDS DDL generate scenario to create SDS objects in the SDS schema

 

Note: For GoldenGate, also provide values for BIAPPS.UTIL_GENDDL_GG_USER_SOURCE and BIAPPS.UTIL_GENDDL_GG_USER_DW.

5. Datasync - Configure target-auto-columns.txt file

SDS tables have a few change data capture (CDC) columns that are used for selecting the incremental dataset and for delete processing. The target-auto-columns.txt file should be used to automatically update these CDC columns on the target SDS schema tables defined in Datasync. Datasync adds all the columns declared in this file to every target table imported into Datasync after the file is configured.

After the file is in place, Datasync adds these CDC columns to the target tables with default values.
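Once the file is in place, it can be worth sanity-checking that the expected CDC columns actually landed on each target table. The sketch below is a hypothetical helper, not part of Datasync: the column names CDC_$_SRC_LAST_UPDATE_DATE, CDC_$_RPL_LAST_UPDATE_DATE, and CDC_$_DML_CODE are the SDS CDC columns described in the OBIA documentation (confirm them against your release), and the table metadata is assumed to come from a query against ALL_TAB_COLUMNS for the SDS schema.

```python
# Hypothetical sanity check: verify that each SDS target table carries the
# CDC columns Datasync is expected to add via target-auto-columns.txt.
# Confirm the column names against your OBIA release.

CDC_COLUMNS = {
    "CDC_$_SRC_LAST_UPDATE_DATE",
    "CDC_$_RPL_LAST_UPDATE_DATE",
    "CDC_$_DML_CODE",
}

def tables_missing_cdc_columns(table_columns):
    """table_columns: dict mapping table name -> iterable of column names
    (e.g. built from ALL_TAB_COLUMNS for the SDS schema).
    Returns a dict of table name -> sorted list of missing CDC columns."""
    missing = {}
    for table, cols in table_columns.items():
        gap = CDC_COLUMNS - {c.upper() for c in cols}
        if gap:
            missing[table] = sorted(gap)
    return missing

# Example with in-memory metadata instead of a live dictionary query:
meta = {
    "S_CONTACT": ["ROW_ID", "LAST_UPD", "CDC_$_SRC_LAST_UPDATE_DATE",
                  "CDC_$_RPL_LAST_UPDATE_DATE", "CDC_$_DML_CODE"],
    "S_ORG_EXT": ["ROW_ID", "LAST_UPD"],
}
print(tables_missing_cdc_columns(meta))
# {'S_ORG_EXT': ['CDC_$_DML_CODE', 'CDC_$_RPL_LAST_UPDATE_DATE', 'CDC_$_SRC_LAST_UPDATE_DATE']}
```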

 

 

6. Datasync - Create connections for the target SDS schema and source OLTP in the Datasync tool

Follow the Datasync user guide.

 

7. Datasync - Import source tables in Datasync

The number of source tables used by BI Applications may be far smaller than the total number of tables present in the source (OLTP) schema, so it is recommended that you import only the required tables into Datasync. To build the list of tables to import, get the list of all the tables present in the SDS schema created in the earlier steps, either comma-separated or one table per row.

Now use this list to import those specific tables into Datasync:

Click the 'Data From Table' button.

Choose the appropriate option and provide the list of tables to be imported.
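The table list can be prepared with a short script. This is a minimal sketch under the assumption that you can query the Oracle data dictionary view ALL_TABLES as a user who can see the SDS schema; the schema name SDS_PSFT is purely illustrative.

```python
# Sketch: build the table list for the Datasync import from the SDS schema.
# Run LIST_TABLES_SQL with whatever Oracle client/driver you have available;
# the formatter then produces either of the two input styles mentioned above.

SDS_SCHEMA = "SDS_PSFT"  # hypothetical SDS schema name from the RCU step

LIST_TABLES_SQL = (
    "SELECT table_name FROM all_tables "
    "WHERE owner = '{owner}' ORDER BY table_name".format(owner=SDS_SCHEMA)
)

def format_table_list(tables, comma_separated=True):
    """Format table names either comma-separated or one per row."""
    return ", ".join(tables) if comma_separated else "\n".join(tables)

# Example with a hard-coded result set instead of a live query:
tables = ["PS_JOB", "PS_NAMES", "PS_PERSON"]
print(format_table_list(tables))         # PS_JOB, PS_NAMES, PS_PERSON
print(format_table_list(tables, False))  # one table per line
```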

8. Datasync - Define load strategy for tables

Datasync requires a Last Update Date (LUD) column to be present in the source table to enable that table for incremental extraction. Once an LUD column is identified, the incremental load strategy can be set by defining the key and an incremental filter on the LUD column.
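Conceptually, the incremental strategy amounts to "extract rows whose LUD is newer than the last refresh, then upsert them by key". The in-memory sketch below illustrates that logic only (the column names ROW_ID and LAST_UPD are hypothetical); it is not Datasync's implementation.

```python
from datetime import datetime

def incremental_upsert(target, source_rows, key="ROW_ID",
                       lud="LAST_UPD", last_refresh=datetime.min):
    """Sketch of an incremental load: pick source rows updated after the
    last refresh and insert/update them in the target (a dict keyed by
    the table key). Deleted source rows are NOT handled; see the Cons
    section below."""
    for row in source_rows:
        if row[lud] > last_refresh:
            target[row[key]] = row  # update if key exists, else insert
    return target

target = {1: {"ROW_ID": 1, "NAME": "old", "LAST_UPD": datetime(2017, 1, 1)}}
source = [
    {"ROW_ID": 1, "NAME": "new",   "LAST_UPD": datetime(2017, 2, 1)},  # changed
    {"ROW_ID": 2, "NAME": "add",   "LAST_UPD": datetime(2017, 2, 2)},  # new row
    {"ROW_ID": 3, "NAME": "stale", "LAST_UPD": datetime(2016, 1, 1)},  # skipped
]
incremental_upsert(target, source, last_refresh=datetime(2017, 1, 15))
print(sorted(target))  # [1, 2]
```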

Note: After any change in the project, make sure to update the job by right-clicking it and choosing 'Update Job'.

9. Datasync - Datasync property changes

Because the SDS schema is created from the ODI model metadata, Datasync might try to alter it to match the source OLTP. To avoid this, set the value of the "automatically create/alter table" system property to false.

 

10. Datasync - Create a job and execute it to load data from the selected source to the target SDS schema

Once a job is created, you might need to disable the Drop/Create Indices checkbox, because not all of the source indexes are required on the target SDS schema. Job tasks might also fail when Datasync tries to create indexes on the DSYS_ Datasync columns, since these columns might not be present in the target schema.

 

Run the Job

Pros/Cons:

Cons/Limitations

  • Datasync currently doesn't support deletes, so a row deleted in the source will still remain in the target.
  • Incremental data replication is supported only if a Last Update Date column exists in the source.
  • For the update strategy, keys need to be defined manually.
  • CLOB column replication is not supported.
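Because deletes are not replicated, a common mitigation (not a Datasync feature; this is a hypothetical sketch) is to periodically compare primary keys between the source and the SDS copy and flag the orphans for delete processing:

```python
def find_deleted_keys(source_keys, sds_keys):
    """Keys present in the SDS copy but no longer in the source: candidates
    for delete processing (e.g. marking the row deleted in a CDC column)."""
    return sorted(set(sds_keys) - set(source_keys))

# Example: key 7 was deleted in the source after the last replication run.
print(find_deleted_keys([1, 2, 3], [1, 2, 3, 7]))  # [7]
```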

Pros:

  • Good monitoring, troubleshooting, and scheduling features.
  • SSH data movement is supported, so the data movement is encrypted.
  • Comes with BICS; no extra licensing.
  • Datasync also supports replication of database views.

All blogs related to BIAPPS on PAAS

 

 
