Monday Nov 09, 2015

An Implementer’s Guide to External Data Support for Universal Sources


Applies to OTBI-Enterprise Version 2 PB 2


TABLE OF CONTENTS

Overview of the External Data Support Feature

Support for On-Premise Oracle AU sources

Support for Non-Oracle sources

Supported Subject Areas – HCM+

Practical Use Cases

Getting Started with External Data – Non-Oracle Sources

Sequence of Steps – High Level

Deep Dive into the Actual Steps

Register Source and Configure Storage Service Container

Build Load Plans

Prepare Data Files and Upload

Prepare Data Files

Upload Data Files to SSC

Run your Load Plans

Managing Deletes in Source System

General Tips

Resolving Domains

Special Cases

UNSPSC

Data Security with External Data

Potential Automations

File Upload to Storage Service Container


Overview of the External Data Support Feature

Oracle Business Intelligence Applications traditionally supported packaged data warehouses using pre-built ETL adaptors for on-premise Oracle sources such as E-Business Suite, PeopleSoft, Siebel, JD Edwards and so on. For Non-Oracle sources, a template-based Universal Adaptor was supported through flat files (CSV format).

With the inception of Oracle Fusion Cloud and other cloud-based applications, such as Oracle Talent Cloud (Taleo), ETL adaptors for those were also delivered.

With their latest product line, OTBI-Enterprise, Oracle delivers their data warehouse solution on the cloud. However, through Version 1, Oracle Fusion HCM Cloud was the only supported source. With Version 2, Oracle Talent Cloud (Taleo) was also added to the support matrix, thereby making it possible to analyze HCM data (from Fusion HCM) and recruiting data (from Taleo) together.

With OTBI-Enterprise's latest patch bundle (2), they extended support for HCM data from two potential on-premise Oracle sources, namely E-Business Suite Applications 12.2 and PeopleSoft Applications 9.2, using their pre-built ETL adaptors for these sources. On top of that, support is now also extended for HCM subject areas coming out of Non-Oracle sources via the Universal Adaptors. Put together, this is what is being termed "External Data Support". With this, you will now be able to upload your on-premise EBS or PSFT data to the on-cloud data warehouse, alongside your other potential cloud-based sources like Oracle Fusion HCM Cloud or Oracle Talent Cloud (Taleo), in case you have those.

Support for On-Premise Oracle AU sources

For Oracle sources (like E-Business Suite, or PeopleSoft), see my other blog on Oracle Sources, called “An Implementer’s Guide to External Data Support for Oracle Sources”.

Support for Non-Oracle sources

For Non-Oracle sources (which could be practically anything, including Non-Oracle cloud applications), obviously, there are no pre-built adaptors. However, Oracle BI Applications' standardized data warehouse models make it possible for you to populate your Universal Staging area tables, and thereon, the Oracle BI Applications ETL takes over. The Source Independent Load routines read from these Universal Staging area tables and populate the end data warehouse tables, and the Post Load Process takes it further to populate the required Aggregate tables. Finally, the reporting layer is already attached to the warehouse. In short, the majority of the heavy lifting of the data is already taken care of; you just need to populate the Universal Staging tables. How?

Oracle provides documentation through which you get to know which staging tables (by subject area) are needed. Along with that comes information about the mapping of those tables to the CSV files. Finally, once you know the list of CSV files to work on, Oracle's documentation provides detailed information about the structure of those files, their data types, column (functional) descriptions, key information and any other related information. With all that knowledge, you create the required Universal CSV files and upload them to SSC. OTBI-Enterprise then takes over, loads them to the data warehouse and connects the dots to the reporting layer.

Here is a quick diagrammatic representation of Non-Oracle (Universal Adaptor) External Data support, illustrating how it fits into the rest of the OTBI-Enterprise picture.

Supported Subject Areas – HCM+

HCM+ stands for core HCM plus certain additional areas, such as GL Journals. The "Workforce Effectiveness" subject area uses this, and is supported for External Data. The following shows the entire HCM+ support matrix for Universal sources.

Practical Use Cases

You would most likely have a mix of source systems against which you would want your data warehouse to be built. For example, you may have upgraded to Oracle Fusion HCM cloud for your core HCM data. However, your payroll processing is outsourced to a third party vendor (maybe ADP). You might still be using EBS for your Financials (upgrade to Oracle Fusion FSCM work-in-progress). And you use Oracle Talent Cloud (Taleo) for recruiting data. Such a mix is quite normal. With the delivery of External Data Support, this is how you can work out your solution in OTBI-Enterprise and have your data warehouse on the cloud.

Ø Use Oracle Fusion HCM Cloud Adaptor for core HCM, except Payroll subject area.

Ø Use External Data – Universal adaptor approach for Payroll data from third party (ADP). This is the case we are dealing with in this document.

Ø Use External Data – EBS for GL Journals [that’s the only subject area supported with HCM+]. See my other blog on Oracle Sources, called “An Implementer’s Guide to External Data Support for Oracle Sources”.

Ø Use Oracle Talent Cloud (Taleo) Adaptor for recruiting subject area.

There could be many such cases and combinations. As long as you understand what is supported (matrix above) you get to decide which data comes from where.

Getting Started with External Data – Non-Oracle Sources

When it comes to Non-Oracle sources, OTBI-Enterprise uses the Universal Adaptor to populate the data warehouse tables. The Universal Adaptor relies on CSV data files being present in the ..\srcFiles directory. The actual directory gets provisioned through set up tasks in BI Applications Configuration Manager (BIACM). The ETL maps to these CSV files and loads the Universal Staging area tables. Thereon, the regular ETL processes like the source independent load, post load etc. kick in. Note that there is no SDS schema when it comes to the Universal Adaptor. The ETL reads directly from the files.

Populating the UA CSV files is usually the biggest challenge for customers. First, to figure out which files are to be populated and what their names should be. Next, to understand the content that goes into each of these CSV files. We will go over these in detail below.

Sequence of Steps – High Level

At a high level, the sequence of steps goes like this:

1. Register Source, configure Storage Service Container (SSC)

2. Pick subject areas (FGs, or Fact Groups) and build Load Plans [Regular and Domains-only]

3. Populate required data files and manifest file (list of files)

4. Upload data files (for Domains-only LP) to SSC

5. Run Domain-only LP

6. Upload data files (for regular LP) and new manifest file to SSC

7. Run regular LP

8. Repeat steps 6 and 7 for incremental ETL runs

Deep Dive into the Actual Steps

Let’s take a deeper look into what goes in each of these steps.

Register Source and Configure Storage Service Container

First, you need to enable BI Applications Offerings. To do that,

Ø Login to BIACM

Ø Click on Tasks → Source Setups → Manage Business Applications

Ø Select the offerings you prefer

Ø Click Save

The following is a screenshot showing the Human Resources and Financials Offerings enabled. The Custom Offering always comes in by default. For OTBI-Enterprise, the recommendation is to disable it, since there isn't much customization you will get to do.

Now we proceed to source registration. Register Universal Adaptor source for External Data. This is how:

Ø Click on Tasks → Source Setups → Define Business Intelligence Applications Instance

Ø Source Systems Tab → Click on Create → Then provide a Source Instance Name and a Data Source Number, as shown in the screenshot below. It is recommended that you keep 900 as the data source number for Universal file sources.

Ø Click on Next. Note that the Source Files folder path gets populated. At the end of the successful registration, BIACM will provision this folder automatically.

Ø Create a new Container (Drop Down from “Container” and pick “Create Container”) if configuring for the first time. Pick any name. The screenshot below shows that the name given is “Container_Universal”.

Ø Click on Save. The Source system will be registered and the Storage Service Container will be created.

Build Load Plans

Next, build your load plans. Two load plans are needed: a "Domains-Only Load Plan", which you will run first, and your regular load plan, which you will run regularly (the first run goes in Full mode, thereafter in Incremental mode).

Ø Click on Tasks → Load Plan Administration → Manage Load Plans → Add

Give a name to your LP. We call it "UADomains_LP1". Pick the right LP type and select the source instance that you set while registering your source. Since we do the Domains-Only LP first, the correct LP Type should be "Domain-Only Extract and Load (SDE and SIL)".

Ø Click on Next, and then pick the FGs (Fact Groups) as needed. Use the information in the HCM+ support matrix mentioned above in this document. Make sure you pick from what is supported.

Ø Click Generate. Wait for the Domains LP generation to complete.

Now build another Load Plan which you will be using for regular loads, full and incremental.

Ø Click on Tasks → Load Plan Administration → Manage Load Plans → Add

Give a name to your LP. We call it "UAFull_LP1". Pick the right LP type and select the source instance that you set while registering your source. Since we are doing the regular LP now, the LP Type should be "Source Extract and Load (SDE, SIL and PLP)".

Ø Click on Next, and then pick the FGs (Fact Groups) as needed. Use the same FGs as you used while creating the Domains-Only LP.

Ø Click Generate. Wait for the Full LP generation to complete.

This completes the step of building all necessary Load Plans.

Prepare Data Files and Upload

This is the most critical part of the implementation project. Let's break it down into two parts: preparation of files, and upload to SSC.

Prepare Data Files

Depending upon the FGs you picked for your LP, the first thing you need to gather is the list of CSV files you will actually need. OTBI-Enterprise documents it quite clearly, by subject area. You can download that from the OTBI-Enterprise Document Library under the section "View Universal Adaptor Reference for HCM Subject Areas". A sneak peek of that file is here:

Once you know the names of the physical CSV files, and also the corresponding logical ODI model store names, the next (and probably the biggest) concern is to know what goes in there. Fortunately, OTBI-Enterprise provides decent documentation about the entire topic – all that you need to know. For example, what the grain of the data set should be, which staging area table and which final dimension or fact table this data will get into, a summary about the nature of the data at a high level, and so on. Next, it provides the shape of the CSV file: column header names, data types and lengths, whether the column is mandatory or not [some actually aren't], whether the column is a domain code [including the list of potential target warehouse conformed domain codes], and lastly the description of the column, through which you get to know what the column is about.

You can download this documentation from the OTBI-Enterprise Document Library under the section "View External Data File Load Specifications". A sneak peek at that document is provided here for reference purposes.

This should be enough information for you to get started. OTBI-Enterprise even provides sample CSV files (with no data) so that the structure of the CSV file is already there. You just need to add data to it. There are a whole bunch of rules about the data that you are going to provide. Here is the list you should keep in mind (a hypothetical example follows the list):

Ø The first four lines mandatorily contain header information. You can put whatever you want here; OTBI-Enterprise will assume these are non-data and skip these four lines.

Ø The fifth line is the column header line. This should match with the sample CSVs OTBI-Enterprise provides. If you are using one of their sample files instead of creating your own, you are OK here. This is also assumed to be a non-data row, and OTBI-Enterprise will skip it.

Ø Data starts from the sixth row.

Ø File name should be of the format file_<table name>-timestamp.csv. For example, file_absence_event_fs-20140916_004935.csv, where timestamp format is YearMonthDay_24HourMinuteSecond [20140916_004935]

Ø Only raw CSVs are accepted. Compressed (zipped) files aren't allowed.

Ø The maximum size of a file is 5 GB.

Ø Ensure you structure your CSV file with the columns in the order specified in the OTBI-E documentation, or start from the OTBI-Enterprise provided sample CSVs. When there is no data for a column, leave an empty placeholder. For example, if the format specifies First Name, Middle Name, Last Name, with Middle Name not required, you must still submit the CSV file data as "Mary","","Washington".

Ø Since a comma itself can be part of your data, all character data must be encapsulated within double-quotes, the escape (text qualifier) character.

Ø Non-character columns must not be encapsulated within double-quotes.

Ø Remove any new line characters such as \r or \n.

Ø Date data must be defined as per OTBI-Enterprise documentation. It is usually of string type in the format: YYYYMMDDHH24MISS. Don’t encapsulate this within double-quotes.
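To put these rules together, here is a purely hypothetical fragment of such a data file. The file name follows the format given above, but the column names and values are made up for illustration; your real column list must come from the OTBI-Enterprise sample CSV for that staging table.

file_absence_event_fs-20150901_120000.csv

HEADER - free text, ignored by OTBI-Enterprise
HEADER - free text, ignored by OTBI-Enterprise
HEADER - free text, ignored by OTBI-Enterprise
HEADER - free text, ignored by OTBI-Enterprise
INTEGRATION_ID,DATASOURCE_NUM_ID,ABSENCE_TYPE_CODE,ABSENCE_START_DT,DELETE_FLG
"101~2001",900,"SICK_LEAVE",20150817090000,"N"
"102~2001",900,"VACATION",20150818000000,"N"

Note how the character columns are double-quoted, the numeric DATASOURCE_NUM_ID is not, and the date is an unquoted string in YYYYMMDDHH24MISS format.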

So that's about your data file. You will populate a set of such CSV data files. Now, OTBI-Enterprise's External Data support for the Universal Adaptor requires another file, called the "Manifest" file. This is more like an index file, containing all the data CSV file names that would be consumed in a given lot. The requirements on the manifest file are as follows (a hypothetical illustration follows the list):

Ø The MANIFEST should be the last file to get uploaded, after uploading all data files. The download program looks at the Manifest to determine which files ought to be downloaded. After successful download and processing of the CSV data files, the MANIFEST file gets renamed so that it is NOT picked up again, and the data CSV files are marked for expiry (purged when they reach the stipulated number of expiry days).

Ø The MANIFEST file naming format is "MANIFEST-TIMESTAMP.MF". The timestamp format is YearMonthDay_24HourMinuteSecond [20150417_035459]. It is mandatory to have the timestamp in the MANIFEST file name, as identical file names would be overwritten in the Cloud Storage Service. For example, MANIFEST-20150417_035459.MF.

Ø The MANIFEST file should NOT be compressed/zipped.
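As a purely hypothetical illustration (the product documentation and sample files are the authority on the exact expected content), a manifest for such a batch would simply list the data CSV file names it covers, one per line:

MANIFEST-20150901_121500.MF

file_absence_event_fs-20150901_120000.csv
file_pay_payroll_fs-20150901_120500.csv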

A sample data CSV file and a sample manifest file are shown in the following screenshot.

Finally, you have all your CSV data files ready and populated, along with the manifest file for the first batch. You now upload these to the SSC. We discuss that next. But for now, put all your data CSV files and the MF file into a particular folder, say "C:/temp/upload". All files from here will go into SSC.

Upload Data Files to SSC

OTBI-Enterprise actually leaves it up to the customer how they want to upload the files to the Storage Service Containers. However, it is assumed that customers could get a bit technical and prepare their own script to upload the files. There are quite a few ways to do that, and utilities are available as well. One of them is CyberDuck, which has a module for Oracle Storage Cloud Service.

I am going to discuss a specific programming option using Java (see the Potential Automations section). This isn’t supported by OTBI-Enterprise, but hey, it works! You are free to adopt this program, change it to your needs, or come up with your own.

Assuming that you’ve done it, this concludes your data preparation and upload step.

Run your Load Plans

First, you are required to run the Domains-Only Load Plan.

Ø Click on Tasks → Load Plan Administration → Manage Load Plans. Then pick the Domains-Only LP called "UADomains_LP1" and click on Execute. Check on the Execution status and wait for it to finish.

Once this finishes, you need to upload freshly dated CSV files that are needed for the regular LP, along with a new dated manifest file. You don't have to clean up the files existing in SSC, but it is a good practice to remove clutter from time to time. Then run the regular LP, using the same navigation as above.

Managing Deletes in Source System

Managing deletes for External Data with Non-Oracle sources can be a bit tricky. In this case, OTBI-Enterprise, having no clue about the actual source system, relies on you to provide the information related to deleted data. Managing deleted data (from the source) in the data warehouse is crucial, otherwise you end up getting incorrect numbers. Note that unlike the case with Oracle sources, OTBI-Enterprise does not maintain a replica schema (an SDS) of the OLTP within the OTBI-E pod. That's because OTBI-Enterprise isn't aware of what the source system looks like. Therefore, the responsibility falls on you to report deleted data through the same flat file (CSV) approach.

The CSV data files, as we discussed above, are to be provided one per staging area table. The column headers of the CSV exactly match up with the columns of the staging area table. There is one such column in all staging area tables (and hence in all CSV files) called "DELETE_FLG". When you upload data files for the initial load, all values for this column would be "N". In your incremental run, assuming you have found some updates on a previously uploaded record, you would need to re-upload that record with the changes. Similarly, if a record that you had uploaded earlier is now hard deleted in your source, you will need to re-upload that same record, this time with "DELETE_FLG" set to "Y".

Take a close look at the above screenshot. This is an incremental data file that you are about to upload. Usually the PK definition of a universal staging table (and hence of the CSV file) is made up of the columns:

INTEGRATION_ID, and DATASOURCE_NUM_ID

For date-effective tables, a third column gets included as well, making it a three-column PK:

INTEGRATION_ID, DATASOURCE_NUM_ID and SRC_EFF_FROM_DT

For this example, we have only the first option (two-column PK). You had originally uploaded data for this PK: '900,300100044288208~300100044268215' with DELETE_FLG = 'N'. The data warehouse has been using this record for its calculations/analysis so far. Later, you figured out that this record was hard deleted in your source. You should now have a process (within your source) that spits out deleted records along with new/updated records. New/updated records go their usual way with DELETE_FLG as "N", but deleted records must have the DELETE_FLG column set to "Y", as shown in the above picture. The downstream data warehouse process will treat this as a soft delete and move on accordingly, removing this record from all calculations and analysis. That's pretty much the idea.

You can choose to provide all deleted records for all files in a completely separate batch, where all records are marked with DELETE_FLG as "Y", or you can mix them with your daily regular data files – your choice.
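To make this concrete, here is a hypothetical fragment of such an incremental file for the record discussed above (only a few columns shown; a real file carries every column of the staging table, plus the usual four header lines and the column header line on top):

INTEGRATION_ID,DATASOURCE_NUM_ID,...,DELETE_FLG
"300100044288208~300100044268215",900,...,"Y"

The row repeats the original PK values, with DELETE_FLG flipped to "Y", which is all the downstream soft-delete logic needs.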

General Tips

Resolving Domains

Resolving source domain member codes to target domain member codes happens as a part of the SDE interfaces in the case of pre-built adaptors (for E-Business Suite, PeopleSoft, Fusion etc.). However, for Universal sources, you are responsible for providing the target domain member codes in the CSV files themselves. This is quite an important step. Fortunately, there are data model standards in place through which you know that a given column is a target domain code column. Target domain column names are usually prefixed with "W_". The available documentation will also make a note of that. We can take an example from the OTBI-Enterprise Document Library under the section "View External Data File Load Specifications". We took a similar example above, but that was for a fact source, which usually doesn't carry a domain. An example of a dimension source would probably help to understand this concept. See below:

So, the first and second columns of this file are for you to populate your source domain member code values for "Status" (indicating requisition or applicant status) and "Reason" (indicating the reason, if available, for such a requisition or applicant status). These are your "raw" source values. Then, the third column in this file is a target domain column. The value would be one from the list of values OTBI-Enterprise requires. Based on the data in the first two source domain member code columns, you need to do the mapping and "choose" the appropriate target domain member code value.

Assume that the list of target domain member code values include items like “ASSESSMENT_INTERVIEW”, “ASSESSMENT_INTERVIEW1” and “ASSESSMENT_INTERVIEW2” (names are “Assessment Interview”, “Assessment First Interview” and “Assessment Second Interview” respectively). There are actually 22 out of the box values for this target domain.

Also assume that your source domain member code values include items like “WT”, “INT1”, “INT2”, “INT3” and “FINT” (names are “Written Test”, “Interview Round 1”, “Interview Round 2”, “Interview Round 3” and “Final Interview” respectively).

In this case, your job would be to map your 5 source values to the possible list of 3 target values. And you have to get this done in the CSV file itself, before providing it to OTBI-Enterprise. That kind of logic needs to be there at your end. You cannot change the list of target values, unless they are coded as “extensible”. If extensible, you will be allowed to add more to the list in BIACM.
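If you are generating these CSV files programmatically, this source-to-target mapping can be as simple as a lookup table applied while writing each row. Here is a minimal Java sketch based on the hypothetical codes above; the actual target member codes must come from BIACM (see the next paragraph), and how you collapse your five source values into the available targets is entirely your own business decision.

import java.util.HashMap;
import java.util.Map;

public class RecruitmentEventDomainMapper {

    // Hypothetical mapping from raw source status codes to OTBI-E target
    // domain member codes for W_RECRUITMENT_EVENT, following the example above.
    private static final Map<String, String> SOURCE_TO_TARGET = new HashMap<String, String>();
    static {
        SOURCE_TO_TARGET.put("WT",   "ASSESSMENT_INTERVIEW");   // Written Test
        SOURCE_TO_TARGET.put("INT1", "ASSESSMENT_INTERVIEW1");  // Interview Round 1
        SOURCE_TO_TARGET.put("INT2", "ASSESSMENT_INTERVIEW2");  // Interview Round 2
        SOURCE_TO_TARGET.put("INT3", "ASSESSMENT_INTERVIEW2");  // Interview Round 3 (collapsed)
        SOURCE_TO_TARGET.put("FINT", "ASSESSMENT_INTERVIEW2");  // Final Interview (collapsed)
    }

    // Returns the target domain member code for a raw source code, or null if
    // unmapped - decide yourself whether that is an error or maps to a default.
    public static String resolve(String sourceStatusCode) {
        return SOURCE_TO_TARGET.get(sourceStatusCode);
    }

    public static void main(String[] args) {
        // Example: resolving one field value while generating a CSV row.
        System.out.println("INT2 -> " + resolve("INT2"));
    }
}

In a real extract you would probably externalize this map to a configuration file or table so that functional users can maintain it without code changes.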

You must be wondering where to get this list of allowed target domain member code values? Simple.

Ø Log into BIACM and navigate to Manage Warehouse Domains.

Ø Do your search based on the domain code (for example, W_RECRUITMENT_EVENT)

Ø Note down the Warehouse Members in the bottom pane. A screenshot below.

While on this topic, let me go over another closely related issue here – Domain Mapping Hierarchies. In the above file, the first and second columns (source domain) lead to mapping the third column (target domain) – that's great. But what about the fourth column? That is also a target domain column. How do we map that? How do we map the fifth, the sixth, etc.? All of them seem to be target domain columns. The best thing is to read up on the Functional Task Manager documentation for this task, if there is any. For any complicated mappings like this, OTBI-Enterprise provides additional help topics. For the scope of this document, you need to understand only this – you may have to simply copy the mapping from BIACM, if it falls under a domain mapping "hierarchy".

See this screenshot (Navigation: BIACM → Manage Domain Mappings and Hierarchies → Search on Domain Code like "W_RECRUITMENT"). Then go to the Hierarchies tab within the middle pane.

You notice the hierarchy up from Recruitment Event to Recruitment Sub Stage to Recruitment Stage. This is a target domain hierarchy. The parent value at any level is based on the child values. And since the lowest level is a target domain (Recruitment Event, in this case), OTBI-Enterprise has pre-done the mapping up to Sub Stage (see above). The mapping is visible in the bottom-most pane.

The next screenshot shows the mapping from Sub Stage to Stage level.

As a Universal adaptor user, all you need to do is copy these "mappings" from BIACM and reflect them in your CSV files. This is true for any target domain columns that are part of a domain hierarchy. If not (which you can find out by looking into BIACM in the above way), you would need to do the mapping from your source domains to the target domains.

Mapping domains is mandatory before even starting a full load in OTBI-Enterprise. A whole lot depends heavily on these mappings.

Special Cases

UNSPSC

The OTBI-Enterprise product dimension uses a customer-configurable source file called "file_unspsc.csv". Out of the box, they provide a few rows of sample data, but customers who have paid for UNSPSC (United Nations Standard Products and Services Code) data from unspsc.org are expected to provide their own "paid" data. If you are one such customer, you need to download this sample file from OTBI-Enterprise, put your own UNSPSC data in it in the specified format, and upload it to Storage Service.

That's no different from how it works for Oracle sources (E-Business Suite, or PeopleSoft). However, for Non-Oracle sources, there is a bit of a special case, which is why I am talking about UNSPSC in this section.

The problem is, even though this is essentially a one-time activity (until the international standard changes), you need to upload this same file in each and every batch of ETL, over and over again. For Universal CSV files, once you upload to SSC along with a manifest file, OTBI-Enterprise downloads everything to the Source Files folder (remember that from your source registration step?). So, file_unspsc.csv gets downloaded and consumed in the ETL. Next time, before downloading, OTBI-Enterprise deletes all files from the source files folder. So, file_unspsc.csv gets deleted. For genuinely changing data files, this is OK. But for UNSPSC data, this wasn't warranted. However, that's the way it is.

Long story short, just remember to provide the file_unspsc.csv in every batch. If you do use UNSPSC, provide it with valid data. If not, just keep providing OTBI-Enterprise's sample file. You can find this information in the OTBI-Enterprise Document Library under the Known Issues section for UNSPSC.

Data Security with External Data

For Non-Oracle sources, OTBI-Enterprise relies on data security measures built on top of the data warehouse itself, leveraging configurations you do in BIACM. This won't necessarily "match" your source applications, but that is how far you may be able to get. With more configuration options OTBI-Enterprise provides in the future, we can only get closer. The following section describes how OTBI-Enterprise supports data security and what they recommend you configure.

Prior to that, it is assumed that you have all your OTBI-Enterprise users pre-configured in the OTBI-E pod. Also, you should have a clear plan for which of these users should get what kind of data access. And lastly, it is also assumed that you have configured the rest of the things and have run a complete full load of the ETL.

Here are the steps, once you are ready:

Ø Log into BIACM.

Ø On the “Tasks” section on your left hand panel, click on “Manage Data Security”.

Ø Select the source instance that you created before generating your Load Plans. In the following screenshot, the source instance selected was "EBS1222".

Ø From the bottom pane, click the “plus” button to add a security object.

Ø Pick your source instance (EBS1222 in this case), and then select your “Secure by Object”. If you are planning to enable Department based data security, pick the appropriate item from the drop down box.

Ø From the “Users” list, pick your user (or users) who would have similar Department based security restrictions. In this case, we are picking a user called TM-MFITZIMMONS. Select this user and click the single arrow to get this user on the right hand side. Note that if you have multiple users planned, you could use the User Filter to search for them.

Ø Scroll down to see the list of Departments. Select (or multi-select) the Departments to which your user(s) should have access. Click the single arrow to get them to the right hand side. Then click OK.

Ø The Security objects that you’ve configured will be shown in the bottom pane, as shown below.

Ø If you have other users or wish to secure by other (supported) dimension objects, you need to carry out the same steps.

Instead of having to do this for every user (or group of users with similar security requirements), OTBI-Enterprise provides a way to import all your security configurations from an external CSV file. For the sake of this document, I am not repeating what is already documented within the OTBI-Enterprise Document Library – Setting up Data Security section. Even the single-user-based process, as I described above, is mentioned in the same document library. However, I had to pick one that relates to the examples I discussed earlier, and it happened to be the first approach. Anyway, that's pretty much about it.

Potential Automations

None of these options are supported by OTBI-Enterprise. These are just options for you to use, or to come up with similar (maybe "better") ones on your own.

File Upload to Storage Service Container

The first area of potential automation would be a way to upload your files to Storage Service Containers programmatically. This is going to be a daily exercise, so it had better be automated. Here is a small and quick Java program that utilizes class libraries from Oracle Cloud Storage Services, and helps you move towards automation.

This program expects a property file as an input, called "StorageService_Manager_Input.properties". In this, you provide certain details about your storage container setup, as follows (a hypothetical sample follows the list):

STORAGE_SERVICE_URL → Storage service URL that Oracle gave you.

STORAGE_SERVICE_NAME → Name of the Storage Service. Oracle gives you this as well.

STORAGE_SERVICE_USER_NAME → Your username to access the Storage Service.

STORAGE_SERVICE_PASSWORD → Your password to access the Storage Service.

STORAGE_SERVICE_CONTAINER_NAME → The name of the container, as you configured it while registering your source.

ACTION → The action you want this program to take. Options are:

LIST_CONTAINERS → Gives a listing of all containers within your Storage Service

UPLOAD_FILES → Uploads all your files from "UPLOAD_FILES_FROM_DIRECTORY" into the container "STORAGE_SERVICE_CONTAINER_NAME"

LIST_FILES → Gives a listing of all files currently existing in the container "STORAGE_SERVICE_CONTAINER_NAME"

DELETE_FILES → Deletes all files from the container "STORAGE_SERVICE_CONTAINER_NAME"

CREATE_CONTAINER → Creates a container by the name "STORAGE_SERVICE_CONTAINER_NAME"

UPLOAD_FILES_FROM_DIRECTORY → This is the location on your local server where all your files are waiting to be uploaded.
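For reference, a hypothetical StorageService_Manager_Input.properties could look like the following; the URL, service name and credentials are placeholder values, while the container name and upload directory match the examples used earlier in this document.

STORAGE_SERVICE_URL=https://yourdomain.storage.oraclecloud.com
STORAGE_SERVICE_NAME=Storage-yourdomain
STORAGE_SERVICE_USER_NAME=your.user@example.com
STORAGE_SERVICE_PASSWORD=yourPassword
STORAGE_SERVICE_CONTAINER_NAME=Container_Universal
ACTION=UPLOAD_FILES
UPLOAD_FILES_FROM_DIRECTORY=C:/temp/upload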

Create and save such a property file by the name “StorageService_Manager_Input.properties”. A Sample screenshot below:

Following is the Java code. Copy it to an appropriate IDE like Eclipse, and save the code as "StorageService_Manager.java". Then, compile it to create a .jar file. Call it "StorageService_Manager.jar". Needless to say, you need to have the Oracle Cloud Service libraries referenced here, and a few more. I had these under my Library folder (see screenshot):

import java.io.*;

import java.util.*;

import oracle.cloud.storage.CloudStorage;

import oracle.cloud.storage.CloudStorageConfig;

import oracle.cloud.storage.CloudStorageFactory;

import oracle.cloud.storage.exception.SystemException;

import oracle.cloud.storage.model.Key;

import oracle.cloud.storage.model.StorageInputStream;

public class StorageService_Manager {

public static Properties inParam;

public static String storageUrl;

public static String storageServiceName;

public static String storageUserName;

public static String storagePassword;

public static String storageContainerName;

public static String storageAction;

public static String uploadFromDirectory;

public static CloudStorage myConnection;

public static void main(String[] args) {

try{

initProperties();

storageUrl = inParam.getProperty("STORAGE_SERVICE_URL").trim();

storageServiceName = inParam.getProperty("STORAGE_SERVICE_NAME").trim();

storageUserName = inParam.getProperty("STORAGE_SERVICE_USER_NAME").trim();

storagePassword = inParam.getProperty("STORAGE_SERVICE_PASSWORD").trim();

storageContainerName = inParam.getProperty("STORAGE_SERVICE_CONTAINER_NAME").trim();

uploadFromDirectory = inParam.getProperty("UPLOAD_FILES_FROM_DIRECTORY").trim();

storageAction = inParam.getProperty("ACTION").trim();

//config and cloud storage object

CloudStorageConfig myConfig = new CloudStorageConfig();

myConfig.setServiceName(storageServiceName)

.setUsername(storageUserName)

.setPassword(storagePassword.toCharArray())

.setServiceUrl(storageUrl);

myConnection = CloudStorageFactory.getStorage(myConfig);

//list containers within a storage

if(storageAction.toUpperCase().equals("LIST_CONTAINERS")) {

listContainers();

}

//upload files

if(storageAction.toUpperCase().equals("UPLOAD_FILES")) {

uploadFiles();

}

//list files

if(storageAction.toUpperCase().equals("LIST_FILES")) {

listFiles();

}

//delete files

if(storageAction.toUpperCase().equals("DELETE_FILES")) {

deleteFiles();

}

//create container

if(storageAction.toUpperCase().equals("CREATE_CONTAINER")) {

createStorageContainer();

}

System.out.println("End ...");

}catch (Exception e){

e.printStackTrace();

System.out.println("Error has ocuured \n" +e);

}

}

private static void createStorageContainer() {

System.out.println("Creating container :" + storageContainerName);

try {

oracle.cloud.storage.model.Container c = myConnection.createContainer(storageContainerName);

}catch (Exception e){

e.printStackTrace();

System.out.println("Error in creating container \n" +e);

}

}

private static void renameFiles() {

System.out.println("Renaming files in container :" + storageContainerName);

java.util.List<Key> cobjects = myConnection.listObjects(storageContainerName, null);

for (int i = 0; i < cobjects.size(); i++) {

System.out.println("Renaming :" + cobjects.get(i).getKey());

StorageInputStream sins = myConnection.retrieveObject(storageContainerName, cobjects.get(i).getKey());

myConnection.storeObject(storageContainerName, "done_"+cobjects.get(i).getKey(), "text/plain", sins);

myConnection.deleteObject(storageContainerName, cobjects.get(i).getKey());

}

}

private static void deleteFiles() {

System.out.println("Deleting files in container :" + storageContainerName);

try {

java.util.List<Key> cobjects = myConnection.listObjects(storageContainerName, null);

for (int i = 0; i < cobjects.size(); i++) {

System.out.println("Deleting :" + cobjects.get(i).getKey());

myConnection.deleteObject(storageContainerName, cobjects.get(i).getKey());

}

}catch(SystemException ex) {

System.out.println("Error in deleting files in container :" + storageContainerName);

ex.printStackTrace(System.out);

}

}

private static void listFiles() {

System.out.println("Listing files in container :" + storageContainerName);

try {

java.util.List<Key> cobjects = myConnection.listObjects(storageContainerName, null);

//System.out.println(cobjects);

for (int i = 0; i < cobjects.size(); i++) {

System.out.println(cobjects.get(i) + " :" + cobjects.get(i).getKey());

}

}catch(SystemException ex) {

System.out.println("Error in listing files in container :" + storageContainerName);

ex.printStackTrace(System.out);

}

}

private static void uploadFiles() {

try {

File f = new File(uploadFromDirectory);

File files[] = f.listFiles();

System.out.println("Uploading files to container :" + storageContainerName);

for (File file : files) {

try {

FileInputStream fis = new FileInputStream(file);

String fileName = file.getName();

System.out.println("Uploading :" + fileName);

myConnection.storeObject(storageContainerName, fileName, "text/plain", fis);

}catch(FileNotFoundException ex) {

System.out.println("Error in uploading file :" + file.getName());

ex.printStackTrace(System.out);

}

}

}catch (NullPointerException ex){

System.out.println("Error in uploading files in container :" + "Upload file directory does not exist");

ex.printStackTrace(System.out);

}

catch (Exception ex){

System.out.println("Error in uploading files in container :" + storageContainerName);

ex.printStackTrace(System.out);

}

}

private static void listContainers() {

System.out.println("Listing Containers");

try {

List<oracle.cloud.storage.model.Container> containerList = myConnection.listContainers();

for (int i = 0; i < containerList.size(); i++) {

System.out.println(containerList.get(i) + " :" + containerList.get(i).getName());

}

}catch(Exception ex) {

System.out.println("Error in listing containers");

ex.printStackTrace(System.out);

}

}

private static void initProperties() throws IOException {

inParam = new Properties();

inParam.load(ClassLoader.getSystemResourceAsStream("StorageService_Manager_Input.properties"));

}

}

That's it. Running it is fairly simple. Be in the directory where you have your JAR, then run it as:

java -cp .;%CLASS_PATH%;StorageService_Manager.jar StorageService_Manager >> storage_out.log

The output file “storage_out.log” records all that happens on the console.
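One caveat when using this program for Universal sources: the manifest rules above require the MANIFEST (.MF) file to be the last one uploaded, while uploadFiles() walks the directory in whatever order File.listFiles() returns. A simple tweak (my own suggestion, not part of OTBI-Enterprise or the program above) is to sort the array right after it is built, so that any .MF file sorts last:

// Add right after "File files[] = f.listFiles();" inside uploadFiles()
java.util.Arrays.sort(files, new java.util.Comparator<File>() {
    public int compare(File a, File b) {
        boolean aMf = a.getName().toUpperCase().endsWith(".MF");
        boolean bMf = b.getName().toUpperCase().endsWith(".MF");
        if (aMf == bMf) {
            return a.getName().compareTo(b.getName());
        }
        return aMf ? 1 : -1; // manifest files go last
    }
});

Alternatively, keep the manifest in a separate folder and run the program twice – data files first, manifest second.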

An Implementer’s Guide to External Data Support for Oracle Sources

Applies to OTBI-Enterprise Version 2 PB 2


TABLE OF CONTENTS

Overview of the External Data Support Feature

Support for On-Premise Oracle AU sources

Support for Non-Oracle sources

Supported Subject Areas – HCM+

Practical Use Cases

Getting Started with External Data – Oracle AU Sources

Sequence of Steps – High Level

Deep Dive into the Actual Steps

Register Source and Configure Storage Service Container

Build Load Plans

Prepare Data Files and Upload

Prepare Data Files

Upload Data Files to SSC

Run your Load Plans

Managing Deletes in Source System

General Tips

Special Cases

UOM

Data Security with External Data

Potential Automations

File Upload to Storage Service Container

CSV file generator for Oracle Sources


Overview of the External Data Support Feature

Oracle Business Intelligence Applications traditionally supported packaged data warehouses using pre-built ETL adaptors for on-premise Oracle sources such as E-Business Suite, PeopleSoft, Siebel, JD Edwards and so on. For Non-Oracle sources, a template-based Universal Adaptor was supported through flat files (CSV format).

With the inception of Oracle Fusion Cloud and other cloud-based applications, such as Oracle Talent Cloud (Taleo), ETL adaptors for those were also delivered.

With their latest product line, OTBI-Enterprise, Oracle delivers their data warehouse solution on the cloud. However, through Version 1, Oracle Fusion HCM Cloud was the only supported source. With Version 2, Oracle Talent Cloud (Taleo) was also added to the support matrix, thereby making it possible to analyze HCM data (from Fusion HCM) and recruiting data (from Taleo) together.

With OTBI-Enterprise's latest patch bundle (2), they extended support for HCM data from two potential on-premise Oracle sources, namely E-Business Suite Applications 12.2 and PeopleSoft Applications 9.2, using their pre-built ETL adaptors for these sources. On top of that, support is now also extended for HCM subject areas coming out of Non-Oracle sources via the Universal Adaptors. Put together, this is what is being termed "External Data Support". With this, you will now be able to upload your on-premise EBS or PSFT data to the on-cloud data warehouse, alongside your other potential cloud-based sources like Oracle Fusion HCM Cloud or Oracle Talent Cloud (Taleo), in case you have those.

Support for On-Premise Oracle AU sources

At a high level, the way it works is quite simple. You upload a set of your OLTP tables' data (EBS or PSFT) in CSV file format to designated Oracle Storage Service Containers (henceforth called SSC). OTBI-Enterprise will then download those files at run-time from SSC and populate an on-cloud SDS (Source Data Store) database schema. This acts as the mirror image of the actual on-premise source system. The respective ETL adaptors then get kicked in and the final data warehouse gets populated. With the reporting layer already built in the pod, you get direct access to OBIEE Answers and Dashboards. Here is a quick diagrammatic representation of AU External Data support, illustrating how it fits into the rest of the OTBI-Enterprise picture.

Support for Non-Oracle sources

For Non-Oracle sources (could be practically anything, including Non-Oracle cloud applications), see my other blog on Universal Adaptors, called “An Implementer’s Guide to External Data Support for Universal Sources”.

Supported Subject Areas – HCM+

OTBI-Enterprise External Data support does not support any HCM subject areas that make extensive use of OLTP procedures and packages. For example, for E-Business Suite Applications, the HCM subject area "Accruals" relies on certain EBS PL/SQL packages. As it stands now, there is no easy way to replicate those packages within the cloud SDS. Therefore, the Accruals subject area from EBS isn't supported in OTBI-Enterprise Version 2.

HCM+ stands for core HCM plus certain additional areas, such as GL Journals. The "Workforce Effectiveness" subject area uses this, and is supported for External Data. The following shows the entire HCM+ support matrix, by source.

Practical Use Cases

You would most likely have a mix of source systems against which you would want your data warehouse to be built. For example, you may have upgraded to Oracle Fusion HCM cloud for your core HCM data. However, your payroll processing is outsourced to a third party vendor (maybe ADP). You might still be using EBS for your Financials (upgrade to Oracle Fusion FSCM work-in-progress). And you use Oracle Talent Cloud (Taleo) for recruiting data. Such a mix is quite normal. With the delivery of External Data Support, this is how you can work out your solution in OTBI-Enterprise and have your data warehouse on the cloud.

Ø Use Oracle Fusion HCM Cloud Adaptor for core HCM, except Payroll subject area.

Ø Use External Data – Universal adaptor approach for Payroll data from third party (ADP). See my other blog on Universal sources, called "An Implementer's Guide to External Data Support for Universal Sources".

Ø Use External Data – EBS for GL Journals [that’s the only subject area supported with HCM+]. This is the case we are dealing with in this document.

Ø Use Oracle Talent Cloud (Taleo) Adaptor for recruiting subject area.

There could be many such cases and combinations. As long as you understand what is supported (matrix above) you get to decide which data comes from where.

Getting Started with External Data – Oracle AU Sources

For Oracle sources (E-Business Suite 12.2 or PeopleSoft Applications 9.2), OTBI-Enterprise uses the corresponding ETL adaptors to populate the data warehouse tables. These adaptors rely on a relational database where all the required OLTP tables would be present. For OTBI-Enterprise, this would be the SDS schema, where all the required OLTP tables (no OLTP-specific PL/SQL packages, procedures or functions) will be replicated by the Cloud Replicator. Thereon, the regular ETL processes like the source dependent extracts (SDE), source independent load (SILOS), and post load (PLP) kick in.

Providing the CSV files against each required OLTP table is the biggest challenge here. Fortunately, OTBI-Enterprise provides decent template SQL files that help you figure out which tables are required and how to extract the data from those tables. We will go over these in detail below.

Once the CSV files are there in the Storage Service Container, OTBI-Enterprise picks them up from there and loads up the SDS.

Sequence of Steps – High Level

At a high level, the sequence of steps goes like this:

1. Register Source (PSFT or EBS), Configure Storage Service Container (SSC)

2. Pick subject areas (FGs or Fact Groups) and build Load Plans [Regular and Domains-only]

3. Download ORDER.SQL, ORDER_DDL.SQL and ORDER_PE.SQL files for each LP

4. Follow ORDER.SQL against Domains-only LP and prepare the data exports in CSV format for Domains-only LP

5. Upload data files (for Domains-only LP) to SSC

6. Run Domain-only LP

7. Configure domains

8. Follow ORDER.SQL against regular LP and prepare the data exports in CSV format for regular LP

9. Upload data files (for regular LP) to SSC

10. Run regular LP

11. Repeat steps 8 through 10 for incremental ETL runs

12. If you have deletes in the OLTP for a table, follow ORDER_PE.SQL for that table, prepare data exports in PECSV format, upload them to SSC and run them along with the usual incremental run

Deep Dive into the Actual Steps

Let’s take a deeper look into what goes in each of these steps.

Register Source and Configure Storage Service Container

First, you need to enable BI Applications Offerings. To do that,

Ø Login to BIACM

Ø Click on Tasks → Source Setups → Manage Business Applications

Ø Select the offerings you prefer

Ø Click Save

The following is a screenshot showing the Human Resources and Financials Offerings enabled. The Custom Offering always comes in by default. For OTBI-Enterprise, the recommendation is to disable it, since there isn't much customization you will get to do.

Now we proceed to source registration. Register E-Business Suite Applications 12.2 or PeopleSoft Applications 9.2 source for External Data. This is how (taking the EBS example):

Ø Click on Tasks → Source Setups → Define Business Intelligence Applications Instance

Ø Source Systems Tab → Click on Create → Pick Product Line as Oracle E-Business Suite, and Product Line Version as Oracle E-Business Suite R12.2.x. Set a name for your source instance, say EBS122, and pick a Data Source Number, maybe 201. Set Source Load Technology to "Replicator External File".

Ø Click on Next. Note that the Source Files folder path gets populated. At the end of the successful registration, BIACM will provision this folder automatically.

Ø Create a new Container (Drop Down from “Container” and pick “Create Container”) if configuring for the first time. Pick any name. The screenshot below shows that the name given is “ExternalData_EBS”.

Ø Click on Save. The Source system will be registered and the Storage Service Container will be created.

Build Load Plans

Next, build your load plans. Two load plans are needed: a "Domains-Only Load Plan", which you will run first, and your regular load plan, which you will run regularly (the first run goes in Full mode, thereafter in Incremental mode).

Ø Click on Tasks → Load Plan Administration → Manage Load Plans → Add

Give a name to your LP. We call it "DomainsOnly_EBS" in this example. Pick the right LP type and select the source instance that you set while registering your source. Since we do the Domains-Only LP first, the correct LP Type should be "Domain-Only Extract and Load (SDE and SIL)".

Ø Click on Next, and then pick the FGs (Fact Groups) as needed. Use the information in the HCM+ support matrix mentioned above in this document. Make sure you pick from what is supported.

Ø Click Generate. Wait for the Domains LP generation to complete.

Now build another Load Plan which you will be using for regular loads, full and incremental.

Ø Click on Tasks → Load Plan Administration → Manage Load Plans → Add

Give a name to your LP. We call it "Full_EBS". Pick the right LP type and select the source instance that you set while registering your source. Since we are doing the regular LP now, the LP Type should be "Source Extract and Load (SDE, SIL and PLP)".

Ø Click on Next, and then pick the FGs (Fact Groups) as needed. Use the same FGs as you used while creating the Domains-Only LP.

Ø Click Generate. Wait for the Full LP generation to complete.

This completes the step of building all necessary Load Plans.

Prepare Data Files and Upload

This is the most critical part of the implementation project. Let's break it down into two parts: preparation of files, and upload to SSC.

Prepare Data Files

You need to create two sets of data CSV files. The first set would have those that are needed by the Domains-Only ETL (this is one-time only), and the second set those for the Full ETL (you repeat this for each incremental run). The process for creating the files in both cases is exactly the same. The following illustrates the steps for the Domains-Only ETL. To be clear, file preparation is a two-step process. First, you generate the Order Files (SQL files that Oracle generates for you depending upon your FG selections in the LP), and second, you use these Order Files and run them against your EBS source to extract the data out in CSV format.

Let's start with the first step, generating the Order Files.

Ø Click on Tasks → Load Plan Administration → Manage Load Plans → Select the Domains-Only LP that you just generated (DomainsOnly_EBS).

Ø Keeping that selected, click on a button on the toolbar, the mouse-over of which reads “Export Source System Extract SQL Command Files”.

Ø It pops up a new window, where under "Actions", you would click on the one that says "Execute Source System Extract SQL Command Files Scenario".

Ø Since you are actually running an ODI scenario behind the scenes, you will be prompted to provide information about your Agent and so on. Provide those, and then click OK.

Ø It takes a few seconds to run and then you get the results back in a new popup window. Three .SQL files would be presented to you.

Ø Select the files and export them out to your local box, using the ‘Export’ button.

We will go over what is in each of these files, and how it helps you. But before that, as I mentioned earlier, the process so far is exactly the same for the Full_EBS Load Plan. Here is a screenshot of the final step on that LP.

Note that the .SQL files have the Load Plan name prefixed to them, as in "Full_EBS_***" or "DomainsOnly_EBS_***". You can't mix them up. You can actually download all six together, but while creating and providing CSV files for consumption, you would take care to upload the ones needed by the appropriate LP.

Now that Order Files are generated, let’s go to the next step.

Generate your extract CSV files

These three .SQL files are supposed to help you generate your data CSV files. To understand how and why, let's take a closer look at the contents of these files. Each of these files has a comment section at the beginning that explains what the file contains and how one should use the information in it. The main file that will be used is called ORDER.SQL.

The other two files are ORDER_DDL.SQL and ORDER_PE.SQL. In the interest of this section, we will not spend more time on these two files, apart from mentioning that ORDER_DDL.SQL provides you with the structure of the required OLTP tables, as per the OTBI-Enterprise model. You could use this to compare with your real OLTP and spot differences upfront. The ORDER_PE.SQL is similar to ORDER.SQL in the sense that it also provides a bunch of SQL SELECT statements. However, the intent of ORDER_PE.SQL is to select only the Primary Keys from each OLTP table. To support physical hard deletes from your OLTP, this helps OTBI-Enterprise to synch up the data in the SDS with your OLTP and mark deleted records as "deleted" [set the CDC_$DML_CODE to 'D']. The data file CSVs resulting from ORDER_PE.SQL have the extension .PECSV, while the main ones against ORDER.SQL have the extension .CSV.

Here are screenshots of the comment section for all the three files. We will go over the details of the last and the most important one, ORDER.SQL, right after that.

The above is a screenshot showing a sample ORDER_DDL.SQL file. The comments should be self-explanatory.

The above is a screenshot showing a sample ORDER_PE.SQL file. The comments should be self-explanatory.

The above is a screenshot showing a sample ORDER.SQL file. The comments should be reasonably clear, but here are a few tips on this topic:

Ø If an OLTP table has a LAST_UPDATED_DATE column (which most of the E-Business Suite tables do; not necessarily true for PeopleSoft), the ORDER.SQL file would have added a WHERE clause using that date as a filter. A couple of points about this, but first a screenshot to display what I'm talking about.

o You should maintain values for these variables for each of your extracts – initial, then first incremental, then second incremental and so on.

o For the initial load, if you prefer to load everything for all tables, simply remove this WHERE clause, or provide a reasonably old date (like 01-JAN-1970) if you prefer not to edit the SQL. You can also use this to specify the date since when you want your data warehouse to have data. For example, 01-JAN-2006, in case you want 10 years of data to start with.

o Once the extract for your initial run is over, the date when you did so becomes the date for your first incremental. When the first incremental is over, the date when you did so becomes the date for your second incremental, and so on.

o You should plan to maintain all such dates and use them appropriately before each extraction. You may want to set up an automation script for this.

Ø Certain date-effective tables are to be handled in a different way. The ORDER.SQL provides you with the template SQL to take care of that. Apart from setting the value of the #LAST_EXPORTED_DATE variable, it is recommended that you do not modify this part. This is critical in maintaining the sanctity of the date-effective data. Here's a screenshot displaying that portion of the SQL. This is not an action, just more of an FYI.

Now that you have your SQLs handy, your date parameter (#LAST_EXPORTED_DATE) values picked/programmed, it is time to crank up the CSV files. There are certain rules you need to be aware of, when dumping data into these CSV files. These are as follows:

Ø The first line is the column header line. This should match the columns selected in ORDER.SQL. If you are using the ORDER.SQL without modifications in the SELECT clause, you are OK here. This is assumed to be a non-data row, and OTBI-Enterprise will only use it to match up with the SDS schema and populate the right data against the right column.

Ø Data starts from the second row.

Ø File Naming Standards:

o For Regular Data Files (.CSV)

§ Format - file_TableName-DateTime.CSV where

§ TableName - Table name of the OLTP table whose data is being generated (table name in UPPER CASE).

§ Date - YearMonthDay [20141017]

§ Time - (24)HourMinuteSecond [230512]

§ Ex: file_PS_HRS_APP_PROFILE-20140916_010515.csv [OLTP table used here is PS_HRS_APP_PROFILE]

o For Primary Extract Files (.PECSV)

§ Format - file_keys_TableName-DateTime.PECSV where

§ TableName - Table name of the OLTP table whose Primary Key data is being generated (table name in UPPER CASE).

§ Date - YearMonthDay [20141017]

§ Time - (24)HourMinuteSecond [230512]

§ Ex: file_keys_PS_HRS_APP_PROFILE-20140916_120512.pecsv [OLTP table used here is PS_HRS_APP_PROFILE]

Ø Only raw CSVs are accepted. Compressed (zipped) files aren't allowed.

Ø File Size: The maximum size of a file is 5 GB. If a table is large, you can split it into multiple files as a part of the same batch. Use the timestamp to differentiate them. E.g.

file_PS_HRS_APP_PROFILE-20150708_132847.csv

file_PS_HRS_APP_PROFILE-20150708_132848.csv

The 5 GB limitation is for a single file in Cloud Storage. However, from a Cloud Replicator standpoint, there are further limits, via Source Connection properties, on the cumulative file size per table. Initial (Full) extract and Incremental extract have different properties for the file size limit. During a replicate call, a cumulative file size limit check is performed per table, and exceptions are raised when the limit is hit.

Default values of seeded Connection Properties are as follows:

MAX_CUMMULATIVE_INITIAL_FILE_SIZE_IN_MB = 5000

MAX_CUMMULATIVE_INCREMENTAL_FILE_SIZE_IN_MB = 1000

Cumulative size is the sum of all the available files per table in the container, irrespective of date. For example, the following files are for two different dates (the 16th and the 17th), and all their file sizes are summed up as the cumulative size.

file_PS_HRS_APP_PROFILE-20140916_010515.csv

file_PS_HRS_APP_PROFILE-20140916_100512.csv

file_PS_HRS_APP_PROFILE-20141017_230512.csv

So if your cumulative file size (the sum of multiple days' files, or the sum of a single day's split files) goes beyond 5 GB, you should change the above parameters in your Cloud Replicator set up, or just remove the properties completely (so that this check isn't even performed).

Ø File content standards:

o The CSV delimited data format should have fields/columns separated by the comma (,) character and records/rows terminated by the newline (\n) character. Carriage returns (\r) should be avoided for portability across platforms.

o Any newline characters present in the column data need to be removed, as they will cause problems during the load. Yes, this is a modification to the source data, but such columns are typically comment-like and unlikely to be used for analysis.

o The CSV should include column headers; columns can be in any arbitrary order.

o CSV column value supported data types are VARCHAR, INTEGER, DOUBLE, NUMBER, DATE, and TIMESTAMP.

o All CSV column values should be enclosed within double quotes (") and a null value should be represented by a pair of empty double quotes, e.g. "1997","Ford","E350","". Note: this applies to numeric, date and all other data types, not just character columns.

o Character values can contain the comma character; as long as they are enclosed within the delimiter (double quotes), this is fine. If character values contain double quotes, those quotes should be escaped as a pair of strictly consecutive double quotes (""). As an example, let’s say that the raw values are:

1997,Ford,E350,Super, "luxurious" truck,"Fully enclosed value"

Then, the values that go into the CSV file would be:

"1997","Ford","E350","Super, ""luxurious"" truck","""Fully enclosed value"""

o Date/Time formats: Date values can have a time component. Timestamp does NOT support TIMEZONE. The default Date and Timestamp format is 'YYYY-MM-DD"T"HH24:MI:SS'.

o For example, Date/Timestamp values:

CSV value '2014-11-25T10:00:00', DB stored value '25-NOV-14 10.00.00.000000000 AM'

CSV value '2014-11-25T16:30:00', DB stored value '25-NOV-14 04.30.00.000000000 PM'

CSV value '2014-11-25T00:00:00', DB stored value '25-NOV-14 00.00.00.000000000 AM'

CSV value (can ignore time component) '2014-11-25', DB stored value '25-NOV-14 00.00.00.000000000 AM'

o CLOB data type values have to be trimmed and limited to VARCHAR2(4000). It does not make sense to extract BLOB columns, as they store binary data and cannot be used for reporting.
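To make these rules concrete, here is a minimal sketch in Java (not an OTBI-Enterprise utility; the table name, values, and timestamp are purely illustrative) showing how a file name following the file_TableName-Date_Time.csv convention and a single quoted data row could be produced:

import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.util.Arrays;
import java.util.stream.Collectors;

public class CsvRuleSketch {

    // Quote one value per the content standards: strip newlines, double any
    // embedded double quotes, wrap in double quotes, and write null as "".
    static String quote(String raw) {
        if (raw == null) {
            return "\"\"";
        }
        String cleaned = raw.replace("\r", "").replace("\n", " ");
        return "\"" + cleaned.replace("\"", "\"\"") + "\"";
    }

    public static void main(String[] args) {
        // Hypothetical OLTP table and extraction time, used only to illustrate
        // the file_TableName-Date_Time.csv naming convention.
        String tableName = "PS_HRS_APP_PROFILE";
        LocalDateTime extractedAt = LocalDateTime.of(2014, 9, 16, 1, 5, 15);
        String fileName = "file_" + tableName + "-"
                + extractedAt.format(DateTimeFormatter.ofPattern("yyyyMMdd_HHmmss")) + ".csv";

        // One illustrative record: note the quoted comma, the doubled quotes,
        // the empty-quote null, and the timestamp without a timezone.
        String[] rawValues = {"1997", "Ford", "E350", "Super, \"luxurious\" truck", null,
                "2014-11-25T10:00:00"};
        String row = Arrays.stream(rawValues)
                .map(CsvRuleSketch::quote)
                .collect(Collectors.joining(","));

        System.out.println(fileName); // file_PS_HRS_APP_PROFILE-20140916_010515.csv
        System.out.println(row);      // "1997","Ford","E350","Super, ""luxurious"" truck","","2014-11-25T10:00:00"
    }
}

Running it prints the file name on one line and the fully quoted record on the next, which you can compare against the rules above.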

A sample data CSV file is shown in the following screenshot.

Finally, you have all your CSV data files ready and populated for the first batch. You now upload these to the SSC; we discuss that next. For now, put all your data CSV files into a particular folder, say “C:/temp/upload”. All files from there will go into the SSC.

Later, I discuss a specific programming option using Java to generate CSV files against your source tables (see the Potential Automations section). This isn’t supported by OTBI-Enterprise, though. You are free to adopt that program, change it to your needs, or come up with your own.

Upload Data Files to SSC

OTBI-Enterprise actually leaves it up to customers how they want to upload the files to the Storage Service Containers. The assumption is that customers can get a bit technical and prepare their own script to upload the files. There are quite a few ways to do that, and utilities are available as well. One of them is Cyberduck, which has a module for Oracle Storage Cloud Service.

I am going to discuss a specific programming option using Java (see the Potential Automations section). This isn’t supported by OTBI-Enterprise, but hey, it works! You are free to adopt this program, change it to your needs, or come up with your own.

Assuming that you’ve done it, this concludes your data preparation and upload step.

Run your Load Plans

First, you are required to run the Domains-Only Load Plan. Upload the CSV files that are required for this LP, and then run it.

Ø Click on Tasks → Load Plan Administration → Manage Load Plans. Then pick the Domains-Only LP called “DomainsOnly_EBS” and click Execute. Check the execution status and wait for it to finish.

Once this finishes, you need to carry out the necessary domain configurations. That is a whole different topic, out of scope for this write-up. Then you upload the CSV files that are required for the full LP (Full_EBS) and run that LP (same navigation as above). You don’t have to clean up the files already in the SSC, but it is good practice to remove clutter from time to time.

Managing Deletes in Source System

Hard deletes often happen on the source system. These have to be reflected in the data warehouse; otherwise it stands a chance of reporting incorrect data. In the case of OTBI-Enterprise, the hard deletes that happen on your source system are first reflected in the SDS schema. The ETL routines are already capable of handling deletes, once found.

The following section discusses more about how OTBI-Enterprise manages to reflect hard deletes in the SDS layer, and what your actions are.

Remember the ORDER_PE.SQL file that was produced (as part of the three .SQL files that you downloaded)? That’s the one that comes into use now. The ORDER_PE.SQL file provides SELECT statements against the primary key columns of the OLTP tables, without any date filters, by design. See a portion of the file below:

The tables in the SDS schema already have the records that have since been deleted from the source system. When you provide OTBI-Enterprise with the set of PK data, it compares it with what the SDS already has; any records present in the SDS but missing from your provided PK data set are assumed to have been “hard deleted” from the source. The SDS then sets a column called CDC_$DML_FLG to “D”, indicating ‘Delete’. The downstream ETL processes know how to take care of this. Note that OTBI-Enterprise never hard deletes a record from either the SDS or the data warehouse; it only marks the record as deleted, in other words, soft deletes it. So that’s how it works.

Now, your action.

If you happen to know that records were hard deleted from certain tables in your source, you should take the corresponding SQL statement from ORDER_PE.SQL and use it to populate your PK data file (the .PECSV format mentioned above). For example, if you know that records were hard deleted from only two tables, AP_SUPPLIERS and AP_TERMS_LINES, you should provide only two .PECSV files (one per table) in the upcoming incremental batch, not all of them.

That brings up another point. These .PECSV files are not required when you are doing an initial load; they are required only for incremental loads. You decide when you are ready to provide the .PECSV files. You can provide them during any batch, starting from the first incremental run, along with the main data file in the same batch. That is, files like file_AP_SUPPLIERS-20140916_010515.csv and file_keys_AP_SUPPLIERS-20140916_120512.pecsv can go in the same batch.

If you are unsure what got hard deleted in your source system, or failed to keep track of the deletes, you really have two choices:

Ø Provide .PECSV for all tables [following the ORDER_PE.SQL file] in the next batch

Ø Discard the warehouse and carry out another fresh full load

Going by the first approach, this could prove to be quite a costly process. If 90% of the tables did not have deletes, you are forcing OTBI-Enterprise to do a full brute-force comparison of keys unnecessarily. But then, you can get a bit smart and provide only .PECSV files in this batch, with no real data files. This way, you are reserving this batch (and the following ETL run) solely to manage your deletes. Maybe do that during a long weekend or so. Ideally, though, you should start keeping track of your deletes, maybe using database triggers, if your DBA allows that.

Going by the second approach, you get a clean data warehouse, but at the cost of losing all the rich historical information. This isn’t recommended, and should only be carried out as a last resort.

General Tips

A large share of problems usually comes from (a) data quality and (b) data structures. Data quality is an issue that OTBI-Enterprise’s Health Check feature will warn you about. When you run your ETL, the Health Check routines fire up at the beginning of the Load Plan. If any discrepancies are found in the data, the Health Check will warn you. You can download the Health Check log as an HTML file and look for “Errors” or “Failures”. You can probably live with “Warnings”, but it is a good idea to address those as well. The failures are almost always related to bad data, or looping records in your source. These may be acceptable to the source application, but not to BI Applications. If you allow bad data to flow into the data warehouse (those failures and errors), it is bound to cause unknown issues downstream.

The above discussion about the Health Check applies equally to Fusion Cloud, and you may already be aware of it. The current write-up is about external data, so the next item, (b) the issue of data structures, is more relevant here.

OTBI-Enterprise supports E-Business Suite R12.2 and PeopleSoft 9.2 source applications. This means it supports these as the base versions. Nowhere does it claim to support R12.2 with Patch 14, for instance, or PeopleSoft 9.2 with Bundle 8. If you have applied newer patches on top of the base versions of your sources, chances are you have newer data structures for your underlying tables. Remember, when OTBI-Enterprise generates the ORDER.SQL file, it is unaware of your source versions/tables; it doesn’t get a chance to “connect and see” across the cloud. What it generates is purely based on its knowledge of the data structures at the base level.

But there are ways you can prepare yourself for surprises. Remember that ORDER_DDL.SQL file OTBI-Enterprise generated? It’s time you put that to work.

The ORDER_DDL.SQL file contains the DDL statements (CREATE TABLE) for the source tables at the base version/release of the source application. This is what OTBI-Enterprise expects your data structures to look like. You could run these DDL statements in a new database schema to create a replica of the source tables (without data, of course), and compare the objects against your real source application database schema. Read the comments section of the ORDER_DDL.SQL file and you will know exactly “what” needs to be done. And now that you have read this section, you know “why” you need to do it.
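If you want to script that comparison rather than eyeball it, here is a rough JDBC sketch (not an OTBI-Enterprise tool; the connection string, schema names, and table name below are placeholders) that diffs the columns of one table between the replica schema built from ORDER_DDL.SQL and your real source schema:

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.util.Set;
import java.util.TreeSet;

public class SchemaDiffSketch {

    // Collect "COLUMN_NAME TYPE_NAME" entries for one table in one schema
    // using standard JDBC metadata (schema and table names upper case for Oracle).
    static Set<String> columnsOf(Connection con, String schema, String table) throws Exception {
        Set<String> cols = new TreeSet<>();
        DatabaseMetaData md = con.getMetaData();
        try (ResultSet rs = md.getColumns(null, schema, table, "%")) {
            while (rs.next()) {
                cols.add(rs.getString("COLUMN_NAME") + " " + rs.getString("TYPE_NAME"));
            }
        }
        return cols;
    }

    public static void main(String[] args) throws Exception {
        // Placeholder connection details; both schemas are assumed to be
        // reachable from the same database connection.
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@host:port:SID", "user", "password")) {
            String replicaSchema = "ORDER_DDL_REPLICA"; // schema created from ORDER_DDL.SQL
            String sourceSchema = "SYSADM";             // your real source application schema
            String table = "PS_HRS_APP_PROFILE";        // illustrative table name

            Set<String> expected = columnsOf(con, replicaSchema, table);
            Set<String> actual = columnsOf(con, sourceSchema, table);

            Set<String> missingInSource = new TreeSet<>(expected);
            missingInSource.removeAll(actual);
            Set<String> extraInSource = new TreeSet<>(actual);
            extraInSource.removeAll(expected);

            System.out.println("Expected by OTBI-Enterprise but missing in source: " + missingInSource);
            System.out.println("Present in source but not expected: " + extraInSource);
        }
    }
}

Any non-empty difference is a heads-up that your patched source structures have drifted from the base version OTBI-Enterprise assumes, and that the corresponding ORDER.SQL statements may need attention.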

Special Cases

UOM

I mentioned earlier that OTBI-Enterprise does not support the use of your source-side PL/SQL packages, procedures or functions, or even the E-Business Suite specific functions called “Fast Formula”. However, in some corner cases there is limited support. Currently, that support exists only for UOM (Units of Measure) conversion for E-Business Suite, because there is no other way OTBI-Enterprise could make use of the conversion rates used in the source. And of course, without UOM conversions properly done, BI numbers would be off.

The way UOM is supported for E-Business Suite is documented in Oracle’s OTBI-E Document Library, in the Known Issues section for UOM. It is slightly different from what we’ve discussed so far, so it is worth mentioning. At a high level, there are four ETL interfaces that capture this UOM conversion rate data. Each of these calls a PL/SQL procedure at the E-Business Suite end and loads a temporary table, which is put to work in downstream ETLs. Since the PL/SQL procedure won’t be available in the SDS schema in the OTBI-Enterprise pod, you are asked to run four complex SQLs (doing the job of the procedure) and export the data out to a particular CSV file [the name of this file is important, and is documented]. OTBI-Enterprise already knows how to read and process this particular file and take it further downstream.

Data Security with External Data

For Oracle sources like E-Business Suite Applications and PeopleSoft Applications, the IDM (Identity Manager) at the source (on-premise) end cannot be synced up with the one at the OTBI-Enterprise (cloud) end. There is no way to guarantee single sign-on across the cloud. Also, the security grants data from on-premise source applications shouldn’t be replicated to the cloud; there could be potential security threats due to data latency.

For Non-Oracle sources, there is no equivalent to compare against.

In either case, therefore, OTBI-Enterprise relies on data security measures built on top of the data warehouse itself, leveraging configurations you make in BIACM. This won’t necessarily “match” your source applications, but that is about as far as you can get. With more configuration options OTBI-Enterprise provides in the future, we can only get closer. The following section describes how OTBI-Enterprise supports data security and what you are recommended to configure.

Prior to that, it is assumed that you have all your OTBI-Enterprise users pre-configured in the OTBI-E pod. Also, you should have a clear plan for which of these users should get what kind of data access. Lastly, it is assumed that you have completed the rest of the configuration and have run a complete full load of the ETL.

Here are the steps, once you are ready:

Ø Log into BIACM.

Ø In the “Tasks” section on the left-hand panel, click on “Manage Data Security”.

Ø Select the source instance that you created before generating your Load Plans. In the following screenshot, the source instance selected was “EBS1222”, which is the same one we created earlier in our example.

Ø From the bottom pane, click the “plus” button to add a security object.

Ø Pick your source instance (EBS1222 in this case), and then select your “Secure by Object”. If you are planning to enable Department based data security, pick the appropriate item from the drop down box.

Ø From the “Users” list, pick your user (or users) who would have similar Department based security restrictions. In this case, we are picking a user called TM-MFITZIMMONS. Select this user and click the single arrow to get this user on the right hand side. Note that if you have multiple users planned, you could use the User Filter to search for them.

Ø Scroll down to see the list of Departments. Select (or multi-select) the Departments to which your user(s) should have access. Click the single arrow to move them to the right-hand side. Then click OK.

Ø The Security objects that you’ve configured will be shown in the bottom pane, as shown below.

Ø If you have other users or wish to secure by other (supported) dimension objects, you need to carry out the same steps.

Instead of having to do this for every user (or group of users with similar security requirements), OTBI-Enterprise provides a way to import all your security configurations from an external CSV file. For the sake of this document, I am not repeating what is already documented in the OTBI-Enterprise Document Library, in the Setting up Data Security section. Even the single-user process I described above is covered in the same document library; I simply picked the approach that relates to the examples discussed earlier. Anyway, that’s pretty much it.

Potential Automations

None of these options are supported by OTBI-Enterprise. They are just options for you to use, or starting points for coming up with similar (maybe better) ones on your own.

File Upload to Storage Service Container

The first area of potential automation is a way to upload your files to Storage Service Containers programmatically. This is going to be a daily exercise, so it had better be automated. Here is a small and quick Java program that utilizes class libraries from the Oracle Cloud Storage Service and helps you move towards automation.

This program expects a properties file as input, called “StorageService_Manager_Input.properties”. In it, you provide certain details about your storage container setup, as follows:

STORAGE_SERVICE_URL → The storage service URL that Oracle gave you.

STORAGE_SERVICE_NAME → The name of the Storage Service. Oracle gives you this as well.

STORAGE_SERVICE_USER_NAME → Your username to access the Storage Service.

STORAGE_SERVICE_PASSWORD → Your password to access the Storage Service.

STORAGE_SERVICE_CONTAINER_NAME → The name of the container, as configured while registering your source.

ACTION → The action you want this program to take. Options are:

LIST_CONTAINERS → Lists all containers within your storage service

UPLOAD_FILES → Uploads all your files from “UPLOAD_FILES_FROM_DIRECTORY” into the container “STORAGE_SERVICE_CONTAINER_NAME”

LIST_FILES → Lists all files currently existing in the container “STORAGE_SERVICE_CONTAINER_NAME”

DELETE_FILES → Deletes all files from the container “STORAGE_SERVICE_CONTAINER_NAME”

CREATE_CONTAINER → Creates a container with the name “STORAGE_SERVICE_CONTAINER_NAME”

UPLOAD_FILES_FROM_DIRECTORY → The location on your local server where all your files are waiting to be uploaded.

Create and save such a properties file with the name “StorageService_Manager_Input.properties”. A sample screenshot is below:
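For reference, here is what such a properties file could look like; every value below is a placeholder and should be replaced with the details Oracle gave you and the container you registered:

STORAGE_SERVICE_URL=https://yourIdentityDomain.storage.oraclecloud.com
STORAGE_SERVICE_NAME=Storage-yourIdentityDomain
STORAGE_SERVICE_USER_NAME=your.user@example.com
STORAGE_SERVICE_PASSWORD=yourPassword
STORAGE_SERVICE_CONTAINER_NAME=yourContainerName
ACTION=UPLOAD_FILES
UPLOAD_FILES_FROM_DIRECTORY=C:/temp/upload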

Following is the Java code. Copy it into an appropriate IDE like Eclipse and save it as “StorageService_Manager.java”. Then compile it and package it into a .jar file called “StorageService_Manager.jar”. Needless to say, you need to have the Oracle Cloud Storage Service libraries referenced here, and a few more. I had these under my library folder (see screenshot):

import java.io.*;

import java.util.*;

import oracle.cloud.storage.CloudStorage;

import oracle.cloud.storage.CloudStorageConfig;

import oracle.cloud.storage.CloudStorageFactory;

import oracle.cloud.storage.exception.SystemException;

import oracle.cloud.storage.model.Key;

import oracle.cloud.storage.model.StorageInputStream;

public class StorageService_Manager {

public static Properties inParam;

public static String storageUrl;

public static String storageServiceName;

public static String storageUserName;

public static String storagePassword;

public static String storageContainerName;

public static String storageAction;

public static String uploadFromDirectory;

public static CloudStorage myConnection;

public static void main(String[] args) {

try{

initProperties();

storageUrl = inParam.getProperty("STORAGE_SERVICE_URL").trim();

storageServiceName = inParam.getProperty("STORAGE_SERVICE_NAME").trim();

storageUserName = inParam.getProperty("STORAGE_SERVICE_USER_NAME").trim();

storagePassword = inParam.getProperty("STORAGE_SERVICE_PASSWORD").trim();

storageContainerName = inParam.getProperty("STORAGE_SERVICE_CONTAINER_NAME").trim();

uploadFromDirectory = inParam.getProperty("UPLOAD_FILES_FROM_DIRECTORY").trim();

storageAction = inParam.getProperty("ACTION").trim();

//config and cloud storage object

CloudStorageConfig myConfig = new CloudStorageConfig();

myConfig.setServiceName(storageServiceName)

.setUsername(storageUserName)

.setPassword(storagePassword.toCharArray())

.setServiceUrl(storageUrl);

myConnection = CloudStorageFactory.getStorage(myConfig);

//list containers within a storage

if(storageAction.toUpperCase().equals("LIST_CONTAINERS")) {

listContainers();

}

//upload files

if(storageAction.toUpperCase().equals("UPLOAD_FILES")) {

uploadFiles();

}

//list files

if(storageAction.toUpperCase().equals("LIST_FILES")) {

listFiles();

}

//delete files

if(storageAction.toUpperCase().equals("DELETE_FILES")) {

deleteFiles();

}

//create container

if(storageAction.toUpperCase().equals("CREATE_CONTAINER")) {

createStorageContainer();

}

System.out.println("End ...");

}catch (Exception e){

e.printStackTrace();

System.out.println("Error has ocuured \n" +e);

}

}

private static void createStorageContainer() {

System.out.println("Creating container :" + storageContainerName);

try {

oracle.cloud.storage.model.Container c = myConnection.createContainer(storageContainerName);

}catch (Exception e){

e.printStackTrace();

System.out.println("Error in creating container \n" +e);

}

}
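
// Helper that renames processed files with a "done_" prefix (retrieve, re-store, delete original).
// Note: it is not wired to any ACTION option in the dispatch above; call it explicitly if you need it.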

private static void renameFiles() {

System.out.println("Renaming files in container :" + storageContainerName);

java.util.List<Key> cobjects = myConnection.listObjects(storageContainerName, null);

for (int i = 0; i < cobjects.size(); i++) {

System.out.println("Renaming :" + cobjects.get(i).getKey());

StorageInputStream sins = myConnection.retrieveObject(storageContainerName, cobjects.get(i).getKey());

myConnection.storeObject(storageContainerName, "done_"+cobjects.get(i).getKey(), "text/plain", sins);

myConnection.deleteObject(storageContainerName, cobjects.get(i).getKey());

}

}

private static void deleteFiles() {

System.out.println("Deleting files in container :" + storageContainerName);

try {

java.util.List<Key> cobjects = myConnection.listObjects(storageContainerName, null);

for (int i = 0; i < cobjects.size(); i++) {

System.out.println("Deleting :" + cobjects.get(i).getKey());

myConnection.deleteObject(storageContainerName, cobjects.get(i).getKey());

}

}catch(SystemException ex) {

System.out.println("Error in deleting files in container :" + storageContainerName);

ex.printStackTrace(System.out);

}

}

private static void listFiles() {

System.out.println("Listing files in container :" + storageContainerName);

try {

java.util.List<Key> cobjects = myConnection.listObjects(storageContainerName, null);

//System.out.println(cobjects);

for (int i = 0; i < cobjects.size(); i++) {

System.out.println(cobjects.get(i) + " :" + cobjects.get(i).getKey());

}

}catch(SystemException ex) {

System.out.println("Error in listing files in container :" + storageContainerName);

ex.printStackTrace(System.out);

}

}

private static void uploadFiles() {

try {

File f = new File(uploadFromDirectory);

File files[] = f.listFiles();

System.out.println("Uploading files to container :" + storageContainerName);

for (File file : files) {

try {

FileInputStream fis = new FileInputStream(file);

String fileName = file.getName();

System.out.println("Uploading :" + fileName);

myConnection.storeObject(storageContainerName, fileName, "text/plain", fis);

}catch(FileNotFoundException ex) {

System.out.println("Error in uploading file :" + file.getName());

ex.printStackTrace(System.out);

}

}

}catch (NullPointerException ex){

System.out.println("Error in uploading files in container :" + "Upload file directory does not exist");

ex.printStackTrace(System.out);

}

catch (Exception ex){

System.out.println("Error in uploading files in container :" + storageContainerName);

ex.printStackTrace(System.out);

}

}

private static void listContainers() {

System.out.println("Listing Containers");

try {

List<oracle.cloud.storage.model.Container> containerList = myConnection.listContainers();

for (int i = 0; i < containerList.size(); i++) {

System.out.println(containerList.get(i) + " :" + containerList.get(i).getName());

}

}catch(Exception ex) {

System.out.println("Error in listing containers");

ex.printStackTrace(System.out);

}

}

private static void initProperties() throws IOException {

inParam = new Properties();

inParam.load(ClassLoader.getSystemResourceAsStream("StorageService_Manager_Input.properties"));

}

}

That’s it. Running it is fairly simple. Be in the directory where you have your JAR, then run:

java -cp .;%CLASS_PATH% -jar StorageService_Manager.jar >> storage_out.log

The output file “storage_out.log” records all that happens on the console.

CSV file generator for Oracle Sources

With the ORDER.SQL file (or, for that matter, the ORDER_PE.SQL file) in hand, the next step is to run those SQLs against your source database, set the #LAST_EXPORTED_DATE parameter with each run, and export the result sets to CSV files following OTBI-Enterprise’s rules; doing all of that on a daily basis is a nightmare if done manually. This is yet another area that calls for obvious automation. Here is a small and quick Java program that could potentially carry out the job and take you a step closer to complete automation.

Actually, two programs. The first one extends a class from the open source opencsv library to meet our needs (the rules above, etc.); that one is “MyResultSetHelperService.java”. The second is the main program responsible for spitting out the CSVs, called “CSVLoader.java”. You need both of them together in your project, compiled into a jar called “CSVLoader.jar”.

I will provide the source code for both of these Java programs, but before that I’ll explain how this thing works. The utility expects a properties file as input, called “CSVLoader_Input.properties”. In it, you provide certain details about your CSV file generation setup, as follows:

SQL_FILE_NAME → Name and path of your ORDER.SQL or ORDER_PE.SQL

SRC_DB_URL → Standard JDBC URL pointing to your source database

Format is: jdbc:oracle:thin:@host:port:SID

SRC_DB_USER → Source database username. Should have SELECT access.

SRC_DB_PSWD → Source database password.

OUTPUT_DIR → Name and path of the directory where the CSVs will be created

LAST_EXPORTED_DATE → Last exported date. Format is: YYYY-MM-DD

FULL_LOAD → Whether or not this is a full load

N → With this setting, the utility will apply the date you provided in LAST_EXPORTED_DATE to the WHERE clause in your SQL file. Note that if you wish to extract data only since a certain date even for the first initial load, you can trick the utility by using this option and setting your initial date accordingly. Needless to say, with this setting a valid date value is needed for the LAST_EXPORTED_DATE parameter.

Y → With this setting, the utility will ignore all the WHERE clauses your SQL file has and extract all data. Whatever you provide in the LAST_EXPORTED_DATE parameter is ignored with this setting.

REMOVE_LINE_SEP → Whether or not to remove line separators.

N → Will not remove

Y → Will remove. This is what you should set, since that is OTBI-Enterprise’s requirement.

Create and save such a properties file with the name “CSVLoader_Input.properties”. A sample screenshot is below. I have used this to extract data for my first full load starting from the year 2010. Note that I have set the FULL_LOAD option to ‘N’ and have provided the value of LAST_EXPORTED_DATE as 2010-01-01.
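In case the screenshot is hard to read, here is roughly what that properties file could contain; the SQL file path, connection URL, and credentials are placeholders you would replace with your own:

SQL_FILE_NAME=C:/temp/ORDER.SQL
SRC_DB_URL=jdbc:oracle:thin:@host:port:SID
SRC_DB_USER=your_db_user
SRC_DB_PSWD=yourPassword
OUTPUT_DIR=C:/temp/upload
LAST_EXPORTED_DATE=2010-01-01
FULL_LOAD=N
REMOVE_LINE_SEP=Y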

Following are the two Java programs you could use.

MyResultSetHelperService.java


import java.io.IOException;

import java.io.Reader;

import java.sql.Clob;

import java.sql.ResultSet;

import java.sql.ResultSetMetaData;

import java.sql.SQLException;

import java.sql.Types;

import java.util.ArrayList;

import java.util.List;

import com.opencsv.ResultSetHelperService;

public class MyResultSetHelperService extends ResultSetHelperService {

static final java.lang.String MY_DEFAULT_DATE_FORMAT = "yyyy-MM-dd";

static final java.lang.String MY_DEFAULT_TIMESTAMP_FORMAT = "yyyy-MM-dd'T'HH:mm:ss";

static final int NVARCHAR = -9;

static final int NCHAR = -15;

static final int LONGNVARCHAR = -16;

static final int NCLOB = 2011;

final String removeLineFeeds;

public MyResultSetHelperService(final String removeLineFeeds) {

super();

this.removeLineFeeds = removeLineFeeds;

}

//overridden method for date formats

public String[] getColumnValues(ResultSet rs) throws SQLException, IOException {

return this.getColumnValues(rs, false, MY_DEFAULT_DATE_FORMAT, MY_DEFAULT_TIMESTAMP_FORMAT);

}

//overridden method for date formats

public String[] getColumnValues(ResultSet rs, boolean trim) throws SQLException, IOException {

return this.getColumnValues(rs, trim, MY_DEFAULT_DATE_FORMAT, MY_DEFAULT_TIMESTAMP_FORMAT);

}

//copied as-is from the super class so that the overridden getColumnValue method below is used

public String[] getColumnValues(ResultSet rs, boolean trim, String dateFormatString, String timeFormatString) throws SQLException, IOException {

List<String> values = new ArrayList<>();

ResultSetMetaData metadata = rs.getMetaData();

for (int i = 0; i < metadata.getColumnCount(); i++) {

//System.out.println(metadata.getColumnTypeName(i+1));

values.add(getColumnValue(rs, metadata.getColumnType(i + 1), i + 1, trim, dateFormatString, timeFormatString));

}

String[] valueArray = new String[values.size()];

return values.toArray(valueArray);

}

//overridden method to get 4000 chars from CLOB

private static String read(Clob c) throws SQLException, IOException {

StringBuilder sb = new StringBuilder((int) c.length());

Reader r = c.getCharacterStream();

char[] cbuf = new char[CLOBBUFFERSIZE];

int n;

while ((n = r.read(cbuf, 0, cbuf.length)) != -1) {

sb.append(cbuf, 0, n);

}

String value = sb.toString();

if(value.length()>4000){

value = value.substring(0, 4000);

}

return value;

}

//overridden method to trim new line characters from column values

//and to add ROWID support

public String getColumnValue(ResultSet rs, int colType, int colIndex, boolean trim, String dateFormatString, String timestampFormatString)

throws SQLException, IOException {

String value = "";

switch (colType) {

case Types.BIT:

case Types.JAVA_OBJECT:

value = handleObject(rs.getObject(colIndex));

break;

case Types.BOOLEAN:

boolean b = rs.getBoolean(colIndex);

value = Boolean.valueOf(b).toString();

break;

case NCLOB: // todo : use rs.getNClob

case Types.CLOB:

Clob c = rs.getClob(colIndex);

if (c != null) {

value = read(c);

}

//code added to trim new line chars from column value

if(value!=null){

value = removeLineBreaks(value);

}

break;

case Types.BIGINT:

value = handleLong(rs, colIndex);

break;

case Types.DECIMAL:

case Types.DOUBLE:

case Types.FLOAT:

case Types.REAL:

case Types.NUMERIC:

value = handleBigDecimal(rs.getBigDecimal(colIndex));

break;

case Types.INTEGER:

case Types.TINYINT:

case Types.SMALLINT:

value = handleInteger(rs, colIndex);

break;

case Types.DATE:

value = handleDate(rs, colIndex, dateFormatString);

break;

case Types.TIME:

value = handleTime(rs.getTime(colIndex));

break;

case Types.TIMESTAMP:

value = handleTimestamp(rs.getTimestamp(colIndex), timestampFormatString);

break;

case NVARCHAR: // todo : use rs.getNString

case NCHAR: // todo : use rs.getNString

case LONGNVARCHAR: // todo : use rs.getNString