
The Oracle BI Applications blog provides the latest tips and tricks on the Oracle BI Applications (OBIA) product.

An Implementer’s Guide to External Data Support for Universal Sources

 

 

Applies to OTBI-Enterprise Version 2 PB 2

 

TABLE OF CONTENTS

Overview of the External Data Support Feature

Support for On-Premise Oracle AU sources

Support for Non-Oracle sources

Supported Subject Areas – HCM+

Practical Use Cases

Getting Started with External Data – Non-Oracle Sources

Sequence of Steps – High Level

Deep Dive into the Actual Steps

Register Source and Configure Storage Service Container

Build Load Plans

Prepare Data Files and Upload

Prepare Data Files

Upload Data Files to SSC

Run your Load Plans

Managing Deletes in Source System

General Tips

Resolving Domains

Special Cases

UNSPSC

Data Security with External Data

Potential Automations

File Upload to Storage Service Container


Overview of the External Data Support Feature

Oracle Business Intelligence Applications has traditionally supported packaged data warehouses using pre-built ETL adaptors for on-premise Oracle sources such as E-Business Suite, PeopleSoft, Siebel, JD Edwards and so on. For non-Oracle sources, template-based Universal Adaptors were supported through flat files (CSV format).

With the inception of Oracle Fusion Cloud and other cloud-based applications, such as Oracle Talent Cloud (Taleo), ETL adaptors for those sources were also delivered.

With its latest product line, OTBI-Enterprise, Oracle delivers its data warehouse solution on the cloud. However, up to Version 1, Oracle Fusion HCM Cloud was the only supported source. With Version 2, Oracle Talent Cloud (Taleo) was added to the support matrix, making it possible to analyze HCM data (from Fusion HCM) and recruiting data (from Taleo) together.

With OTBI-Enterprise’s latest patch bundle (2), support is extended to HCM data from two potential on-premise Oracle sources, namely E-Business Suite Applications 12.2 and PeopleSoft Applications 9.2, using the pre-built ETL adaptors for these sources. On top of that, support is now also extended to HCM subject areas coming out of non-Oracle sources via the Universal Adaptors. Put together, this is what is being termed “External Data Support”. With this, you will now be able to upload your on-premise EBS or PSFT data to the on-cloud data warehouse, along with the mix of your other potential cloud-based sources such as Oracle Fusion HCM Cloud or Oracle Talent Cloud (Taleo), in case you have those.

Support for On-Premise Oracle AU sources

For Oracle sources (like E-Business Suite, or PeopleSoft), see my other blog on Oracle Sources, called “An Implementer’s Guide to External Data Support for Oracle Sources”.

Support for Non-Oracle sources

For non-Oracle sources (which could be practically anything, including non-Oracle cloud applications), there are obviously no pre-built adaptors. However, Oracle BI Applications’ standardized data warehouse models make it possible for you to populate the Universal Staging area tables, and from there the Oracle BI Applications ETL takes over. The Source Independent Load routines read from these Universal Staging area tables and populate the end data warehouse tables, and the Post Load Process takes it further to populate the required aggregate tables. Finally, the reporting layer is already attached to the warehouse. In short, the majority of the heavy lifting of the data is already taken care of; you just need to populate the Universal Staging tables. How?

Oracle provides documentation through which you get to know which staging tables (by subject area) are needed, along with information on how those tables map to the CSV files. Once you know the list of CSV files to work on, Oracle’s documentation provides detailed information about the structure of those files, their data types, column (functional) descriptions, key information and other related information. With all that knowledge, you create the required Universal CSV files and upload them to the Storage Service Container (SSC). OTBI-Enterprise then takes over, loads them into the data warehouse and connects the dots to the reporting layer.

Here is a quick diagrammatic representation of non-Oracle (Universal Adaptor) External Data support, illustrating how it fits into the rest of the OTBI-Enterprise picture.

 

Supported Subject Areas – HCM+

The HCM+ acronym stands for certain additional areas outside of core HCM, such as GL Journals. The “Workforce Effectiveness” subject area uses this, and it is supported for External Data. The following shows the entire HCM+ support matrix for Universal sources.

Practical Use Cases

You will most likely have a mix of source systems against which you want your data warehouse to be built. For example, you may have upgraded to Oracle Fusion HCM Cloud for your core HCM data. However, your payroll processing is outsourced to a third-party vendor (maybe ADP). You might still be using EBS for your Financials (with an upgrade to Oracle Fusion FSCM in progress). And you use Oracle Talent Cloud (Taleo) for recruiting data. Such a mix is quite normal. With the delivery of External Data Support, this is how you can work out your solution in OTBI-Enterprise and have your data warehouse on the cloud.

Ø Use Oracle Fusion HCM Cloud Adaptor for core HCM, except Payroll subject area.

Ø Use the External Data – Universal Adaptor approach for Payroll data from a third party (ADP). This is the case we are dealing with in this document.

Ø Use External Data – EBS for GL Journals [that’s the only subject area supported with HCM+]. See my other blog on Oracle Sources, called “An Implementer’s Guide to External Data Support for Oracle Sources”.

Ø Use Oracle Talent Cloud (Taleo) Adaptor for recruiting subject area.

There could be many such cases and combinations. As long as you understand what is supported (matrix above) you get to decide which data comes from where.

Getting Started with External Data – Non-Oracle Sources

When it comes to non-Oracle sources, OTBI-Enterprise uses the Universal Adaptor to populate the data warehouse tables. The Universal Adaptor relies on CSV data files being present in the ..\srcFiles directory. The actual directory gets provisioned through setup tasks in BI Applications Configuration Manager (BIACM). The ETL maps to these CSV files and loads the Universal Staging area tables. From there, the regular ETL processes such as the source independent load, post load, etc. kick in. Note that there is no SDS schema when it comes to the Universal Adaptor; the ETL reads directly from the files.

Populating the UA CSV files is usually the biggest challenge for customers: first, figuring out which files are to be populated and what their names should be; next, understanding the content that goes into each of these CSV files. We will go over these in detail below.

 

Sequence of Steps – High Level

At a high level, the sequence of steps goes like this:

1. Register Source, configure Storage Service Container (SSC)

2. Pick subject areas (FGs, or Fact Groups) and build Load Plans [Regular and Domains-only]

3. Populate required data files and manifest file (list of files)

4. Upload data files (for Domains-only LP) to SSC

5. Run Domain-only LP

6. Upload data files (for regular LP) and new manifest file to SSC

7. Run regular LP

8. Repeat steps 6 and 7 for incremental ETL runs

Deep Dive into the Actual Steps

Let’s take a deeper look into what goes in each of these steps.

Register Source and Configure Storage Service Container

First, you need to enable BI Applications Offerings. To do that,

Ø Login to BIACM

Ø Click on Tasks -> Source Setups ->Manage Business Applications

Ø Select the offerings you prefer

Ø Click Save

The following screenshot shows the Human Resources and Financial offerings enabled. The Custom offering always comes enabled by default; for OTBI-Enterprise, the recommendation is to disable it, since there is not much customization you will be able to do.

Now we proceed to source registration. Register the Universal Adaptor source for External Data as follows:

Ø Click on Tasks -> Source Setups -> Define Business Intelligence Applications Instance

Ø Source Systems Tab ->Click on Create -> Then provide a Source Instance Name and a Data Source Number, as shown in the screenshot below. It is recommended that you keep 900 as the data source number for Universal file sources.

Ø Click on Next. Note that the Source Files folder path gets populated. At the end of the successful registration, BIACM will provision this folder automatically.

Ø Create a new Container (Drop Down from “Container” and pick “Create Container”) if configuring for the first time. Pick any name. The screenshot below shows that the name given is “Container_Universal”.

Ø Click on Save. The Source system will be registered and the Storage Service Container is created

Build Load Plans

Next, build your load plans. Two load plans are needed: a “Domains-Only Load Plan”, which you will run first, and your regular load plan, which you will run on an ongoing basis (the first run in Full mode, thereafter in Incremental mode).

Ø Click on Tasks -> Load Plan Administration -> Manage Load Plans-> Add

Give a name to your LP. We call it “UADomains_LP1”. Pick the right LP type and select the source instance that you set up while registering your source. Since we are building the Domains-Only LP first, the correct LP Type is “Domain-Only Extract and Load (SDE and SIL)”.

Ø Click on Next, and then pick the FGs (Fact Groups) as needed. Use the information in the HCM+ support matrix mentioned above in this document. Make sure you pick from what is supported.

Ø Click Generate. Wait for the Domains LP generation to complete.

Now build another Load Plan which you will be using for regular loads, full and incremental.

Ø Click on Tasks -> Load Plan Administration -> Manage Load Plans -> Add

Give a name to your LP. We call it “UAFull_LP1”. Pick the right LP type and select your source instance that you had set while registering your source. Since we are doing regular LP now, the LP Type should be “Source Extract and Load (SDE, SIL and PLP)”.

Ø Click on Next, and then pick the FGs (Fact Groups) as needed. Use the same FGs as you used while creating the Domains-Only LP.

Ø Click Generate. Wait for the Full LP generation to complete.

This completes the step of building all necessary Load Plans.

Prepare Data Files and Upload

This is the most critical part of the implementation project. Let’s break it down into two parts: preparation of the files, and upload to the SSC.

Prepare Data Files

Depending on the FGs you picked for your LP, the first thing you need to gather is the list of CSV files you will actually need. OTBI-Enterprise documents this quite clearly, by subject area. You can download that from the OTBI-Enterprise Document Library under the section “View Universal Adaptor Reference for HCM Subject Areas”. A sneak peek of that file is here:

 

Once you know the names of the physical CSV files, and also the names of the logical ODI model stores, the next (and probably the biggest) concern is to know what goes in there. Fortunately, OTBI-Enterprise provides decent documentation about the entire topic – all that you need to know. For example, what the grain of the data set should be, which staging area table and which final dimension or fact table this data will get into, a summary about the nature of the data at a high level, and so on. Next, it provides the shape of the CSV file: column header names, data types and lengths, whether the column is mandatory or not [some actually aren’t], whether the column is a domain code [including the list of potential target warehouse conformed domain codes], and lastly the description of the column, through which you get to know what the column is about.

You can download this documentation from the OTBI-Enterprise Document Library under the section “View External Data File Load Specifications”. A sneak peek at that document is provided here for reference purposes.

This should be reasonable enough information for you to get started. OTBI-Enterprise even provides sample CSV files (with no data) so that the structure of the CSV file is already there; you just need to add data to it. There are a whole bunch of rules about the data that you are going to provide. Here is the list you should keep in mind (a small sketch after the list illustrates these rules):

Ø The first four lines are treated as header lines. You can put whatever you want here; OTBI-Enterprise assumes these are non-data lines and skips them.

Ø The fifth line is the column header line. This should match with the sample CSVs OTBI-Enterprise provides. If you are using one of their sample files instead of creating your own, you are OK here. This is also assumed to be a non-data row, and OTBI-Enterprise will skip it.

Ø Data starts from the sixth row.

Ø The file name should be of the format file_<table name>-<timestamp>.csv, for example file_absence_event_fs-20140916_004935.csv, where the timestamp format is YearMonthDay_24HourMinuteSecond [20140916_004935].

Ø Only raw CSVs are accepted. Compressed (zipped) files are not allowed.

Ø The maximum size limit of a file is 5 GB.

Ø Ensure you structure your CSV file with the columns in the order specified in the OTBI-E documentation, or start from the OTBI-Enterprise provided sample CSVs. When there is no data for a column, leave an empty placeholder. For example, if the format specifies First Name, Middle Name, Last Name, with Middle Name not required, you must still submit the CSV data as "Mary","","Washington".

Ø Since a comma can itself be part of your data, all character data must be encapsulated within double-quotes (the text qualifier).

Ø Non-character columns must not be encapsulated within double-quotes.

Ø Remove any new line characters such as \r or \n.

Ø Date data must be provided as per the OTBI-Enterprise documentation. It is usually of string type in the format YYYYMMDDHH24MISS. Do not encapsulate it within double-quotes.

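To make the rules above concrete, here is a minimal sketch (not an official utility) that writes one data file in this shape. The table name (absence_event_fs) and the column list are purely illustrative assumptions; take the real column order from the OTBI-Enterprise sample CSVs or the file load specifications.

import java.io.PrintWriter;
import java.text.SimpleDateFormat;
import java.util.Date;

public class SampleCsvWriter {
    public static void main(String[] args) throws Exception {
        // File name format: file_<table name>-<YearMonthDay_24HourMinuteSecond>.csv
        String timestamp = new SimpleDateFormat("yyyyMMdd_HHmmss").format(new Date());
        String fileName = "file_absence_event_fs-" + timestamp + ".csv";

        try (PrintWriter out = new PrintWriter(fileName, "UTF-8")) {
            // Lines 1-4: free-form header lines, skipped by OTBI-Enterprise
            out.println("# Source: third-party payroll extract");
            out.println("# Extracted: " + timestamp);
            out.println("#");
            out.println("#");
            // Line 5: column header line (must match the sample CSV provided by OTBI-Enterprise)
            out.println("INTEGRATION_ID,DATASOURCE_NUM_ID,FIRST_NAME,MIDDLE_NAME,LAST_NAME,DELETE_FLG");
            // Line 6 onwards: data rows. Character data is double-quoted, numbers are not,
            // and optional columns keep an empty placeholder.
            out.println("\"EMP1001\",900,\"Mary\",\"\",\"Washington\",\"N\"");
        }
        System.out.println("Wrote " + fileName);
    }
}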
So that’s about your data files. You will populate a set of such CSV data files. In addition, OTBI-Enterprise’s External Data support for the Universal Adaptor requires another file, called the “MANIFEST” file. This is more like an index file, containing the names of all the data CSV files to be consumed in a given batch. The requirements on the manifest file are as follows:

Ø The MANIFEST file should be the last file uploaded, after all data files. The download program looks at the manifest to determine which files ought to be downloaded. After successful download and processing of the CSV data files, the MANIFEST file gets renamed so that it is NOT picked up again, and the data CSV files are marked for expiry (purged when they reach the stipulated number of expiry days).

Ø The MANIFEST file naming format is "MANIFEST-TIMESTAMP.MF", where the timestamp format is YearMonthDay_24HourMinuteSecond [20150417_035459]. It is mandatory to have the timestamp in the MANIFEST file name, as identical file names would be overwritten in the Cloud Storage Service. For example: MANIFEST-20150417_035459.MF.

Ø The MANIFEST file should NOT be compressed/zipped.

A sample data CSV file and a sample manifest file are shown in the following screenshot.

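If you prefer to script the manifest creation, here is a minimal sketch. It assumes, based on the description above, that the manifest simply lists the data CSV file names for the batch, one per line; confirm the exact content format against the OTBI-Enterprise documentation. The folder path is the same example staging folder used below.

import java.io.File;
import java.io.PrintWriter;
import java.text.SimpleDateFormat;
import java.util.Date;

public class ManifestWriter {
    public static void main(String[] args) throws Exception {
        File uploadDir = new File("C:/temp/upload");   // folder holding this batch of data files
        File[] batchFiles = uploadDir.listFiles();
        if (batchFiles == null) {
            System.out.println("Upload directory not found: " + uploadDir);
            return;
        }
        String timestamp = new SimpleDateFormat("yyyyMMdd_HHmmss").format(new Date());
        File manifest = new File(uploadDir, "MANIFEST-" + timestamp + ".MF");
        try (PrintWriter out = new PrintWriter(manifest, "UTF-8")) {
            for (File f : batchFiles) {
                // list only the data CSV files, not any previously generated manifest
                if (f.getName().toLowerCase().endsWith(".csv")) {
                    out.println(f.getName());
                }
            }
        }
        System.out.println("Wrote " + manifest.getAbsolutePath());
    }
}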
Finally, you have all your CSV data files ready and populated, along with the manifest file for the first batch. You now upload these to the SSC; we discuss that next. For now, put all your data CSV files and the MF file in a particular folder, say “C:/temp/upload”. All files from there will go into the SSC.

Upload Data Files to SSC

OTBI-Enterprise actually leaves it up to the customer how they want to upload the files to the Storage Service Container. However, it is assumed that customers can get a bit technical and prepare their own script to upload the files. There are quite a few ways to do that, and utilities are available as well. One of them is CyberDuck, which has a module for Oracle Storage Cloud Service.

I am going to discuss a specific programming option using Java (see the Potential Automations section). This isn’t supported by OTBI-Enterprise, but hey, it works! You are free to adopt this program, change it to your needs, or come up with your own.

Assuming that you’ve done it, this concludes your data preparation and upload step.

Run your Load Plans

At first, you are required to run the Domains-Only Load Plan.

Ø Click on Tasks -> Load Plan Administration -> Manage Load Plans. Then pick the Domains-Only LP called “UADomains_LP1” and click on Execute. Check on the Execution status and wait for it to finish.

Once this finishes, you need to re-upload dated CSV files as needed for the regular LP, along with a new dated manifest file. You don’t have to clean up the files already in the SSC, but it is good practice to remove clutter from time to time. Then run the regular LP, using the same navigation as above.

Managing Deletes in Source System

Managing deletes for External Data with non-Oracle sources can be a bit tricky. In this case, OTBI-Enterprise, having no clue about the actual source system, relies on you to provide the deleted-data information. Managing data deleted from the source is crucial in the data warehouse; otherwise you end up getting incorrect numbers. Note that, unlike the case with Oracle sources, OTBI-Enterprise does not maintain a replica schema (an SDS) of the OLTP within the OTBI-E pod. That’s because OTBI-Enterprise isn’t aware of what the source system looks like. Therefore, it turns the responsibility over to you to report the deleted data through the same flat file (CSV) approach.

The CSV data files, as discussed above, are provided one per staging area table. The column headers of the CSV exactly match the columns of the staging area table. There is one such column in all staging area tables (and hence in all CSV files) called “DELETE_FLG”. When you upload data files for the initial load, all values for this column would be “N”. In your incremental run, assuming you have found an update on a previously uploaded record, you would re-upload that record with the changes. Similarly, if a record that you had uploaded earlier is now hard deleted in your source, you will need to re-upload that same record, this time with “DELETE_FLG” set to “Y”.

Take a close look at the above screenshot. This is an incremental data file that you are about to upload. Usually, the PK definition of a universal staging table (and hence of its CSV file) is made up of these columns:

INTEGRATION_ID, and DATASOURCE_NUM_ID

For date-effective tables, a third column is included as well, making it a three-column PK:

INTEGRATION_ID, DATASOURCE_NUM_ID and SRC_EFF_FROM_DT

For this example, we have only the first option (a two-column PK). You had originally uploaded data for this PK: '900,300100044288208~300100044268215' with DELETE_FLG = ‘N’. The data warehouse has been using this record for its calculations and analysis so far. Later, you figured out that this record was hard deleted in your source. You should now have a process (within your source) that spits out deleted records along with new/updated records. New/updated records go their usual way with DELETE_FLG as “N”, but deleted records must have the DELETE_FLG column set to “Y”, as shown in the above picture. The downstream data warehouse process will treat this as a soft delete and move on accordingly, removing this record from all calculations and analysis. That’s pretty much the idea.

You can choose to provide all deleted records for all files in a completely separate batch, where all records are marked with DELETE_FLG as “Y”, or you can mix them with your daily regular data files – your choice.
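As a tiny illustration of the idea, the sketch below re-emits the PK from the example above with only DELETE_FLG flipped. The column order and the row layout are hypothetical; a real record would carry all of the staging table’s columns exactly as previously loaded.

public class SoftDeleteExample {
    public static void main(String[] args) {
        // PK columns from the example above
        String integrationId   = "300100044288208~300100044268215";
        String dataSourceNumId = "900";

        // Originally loaded (active) row and its later soft-delete version.
        // Character data is double-quoted; the numeric data source number is not.
        String activeRow  = "\"" + integrationId + "\"," + dataSourceNumId + ",\"N\"";
        String deletedRow = "\"" + integrationId + "\"," + dataSourceNumId + ",\"Y\"";

        System.out.println("Initial/updated load : " + activeRow);
        System.out.println("Incremental (deleted): " + deletedRow);
    }
}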

General Tips

Resolving Domains

Resolving source domain member codes to target domain member codes happens as part of the SDE interfaces in the case of pre-built adaptors (for E-Business Suite, PeopleSoft, Fusion, etc.). For Universal sources, however, you are responsible for providing the target domain member codes in the CSV files themselves. This is quite an important step. Fortunately, there are data model standards in place through which you know that a given column is a target domain code column: target domain column names are usually prefixed with “W_”. The available documentation also makes a note of that. We can take an example from the OTBI-Enterprise Document Library under the section “View External Data File Load Specifications”. We took a similar example above, but that was for a fact source, which usually doesn’t carry a domain. An example of a dimension source will probably help to understand this concept. See below:

 

So, the first and second columns of this file are for you to populate with your source domain member code values for “Status” (indicating requisition or applicant status) and “Reason” (indicating the reason, if available, for such a requisition or applicant status). These are your “raw” source values. The third column in this file is a target domain column; its values must come from the list of values OTBI-Enterprise requires. Based on the data in the first two source domain member code columns, you need to do the mapping and “choose” the appropriate target domain member code value.

Assume that the list of target domain member code values include items like “ASSESSMENT_INTERVIEW”, “ASSESSMENT_INTERVIEW1” and “ASSESSMENT_INTERVIEW2” (names are “Assessment Interview”, “Assessment First Interview” and “Assessment Second Interview” respectively). There are actually 22 out of the box values for this target domain.

Also assume that your source domain member code values include items like “WT”, “INT1”, “INT2”, “INT3” and “FINT” (names are “Written Test”, “Interview Round 1”, “Interview Round 2”, “Interview Round 3” and “Final Interview” respectively).

In this case, your job would be to map your 5 source values to the possible list of 3 target values, and you have to get this done in the CSV file itself, before providing it to OTBI-Enterprise. That kind of logic needs to exist at your end. You cannot change the list of target values unless they are coded as “extensible”. If extensible, you will be allowed to add more values to the list in BIACM.
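Here is a minimal sketch of such a mapping, using the example codes from this section. How each source code is assigned to a target member is a business decision on your side; the specific pairs below are illustrative only.

import java.util.HashMap;
import java.util.Map;

public class RecruitmentEventMapping {
    public static void main(String[] args) {
        Map<String, String> toTargetDomain = new HashMap<>();
        toTargetDomain.put("WT",   "ASSESSMENT_INTERVIEW");   // Written Test
        toTargetDomain.put("INT1", "ASSESSMENT_INTERVIEW1");  // Interview Round 1
        toTargetDomain.put("INT2", "ASSESSMENT_INTERVIEW2");  // Interview Round 2
        toTargetDomain.put("INT3", "ASSESSMENT_INTERVIEW2");  // Interview Round 3
        toTargetDomain.put("FINT", "ASSESSMENT_INTERVIEW");   // Final Interview

        // While building the CSV, resolve each raw source code to its target member code
        String sourceCode = "INT2";
        System.out.println(sourceCode + " -> " + toTargetDomain.get(sourceCode));
    }
}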

You must be wondering where to get this list of allowed target domain member code values. Simple:

Ø Log into BIACM and navigate to Manage Warehouse Domains.

Ø Do your search based on the domain code (for example, W_RECRUITMENT_EVENT)

Ø Note down the Warehouse Members in the bottom pane. A screenshot below.

While on this topic, let me go over another closely related issue – domain mapping hierarchies. In the above file, the first and second columns (source domain) lead to the mapping of the third column (target domain) – that’s great. But what about the fourth column? That is also a target domain column. How do we map that? How do we map the fifth, the sixth, and so on? All of them seem to be target domain columns. The best thing is to read up on the Functional Task Manager documentation for this task, if there is any. For complicated mappings like this, OTBI-Enterprise provides additional help topics. For the scope of this document, you need to understand only this – you may have to simply copy the mapping from BIACM, if it falls under a domain mapping “hierarchy”.

See this screenshot (Navigation: BIACM -> Manage Domain Mappings and Hierarchies-> Search on Domain Code like “W_RECRUITMENT”). Then go to the Hierarchies tab within the middle pane.

Notice the hierarchy going up from Recruitment Event to Recruitment Sub Stage to Recruitment Stage. This is a target domain hierarchy. The parent value at any level is based on the child values. And since the lowest level is a target domain (Recruitment Event, in this case), OTBI-Enterprise has pre-done the mapping up to Sub Stage (see above). The mapping is visible in the bottom-most pane.

The next screenshot shows the mapping from Sub Stage to Stage level.

As a Universal Adaptor user, all you need to do is copy these “mappings” from BIACM and reflect them in your CSV files. This is true for any target domain columns that are part of a domain hierarchy. If a column is not (which you can find out by looking into BIACM in the above way), you need to do the mapping from your source domains to the target domains yourself.

Mapping domains is mandatory before even starting a full load in OTBI-Enterprise. A whole lot depends heavily on these mappings.

Special Cases

UNSPSC

The OTBI-Enterprise product dimension uses a customer-configurable source file called “file_unspsc.csv”. Out of the box, it comes with a few rows of sample data, but customers who have paid for UNSPSC (United Nations Standard Products and Services Code) data from unspsc.org are expected to provide their own “paid” data. If you are one such customer, you need to download this sample file from OTBI-Enterprise, put your own UNSPSC data in it in the specified format, and upload it to the Storage Service.

That is no different from how it works for Oracle sources (E-Business Suite or PeopleSoft). However, for non-Oracle sources, there is a bit of a special case, which is why I am covering UNSPSC in this section.

The problem is, even though this is essentially a one-time activity (until the international standard changes), you need to upload this same file in each and every batch of the ETL, over and over again. For Universal CSV files, once you upload to the SSC along with a manifest file, OTBI-Enterprise downloads everything to the Source Files folder (remember that from your source registration step?). So, file_unspsc.csv gets downloaded and consumed in the ETL. The next time, before downloading, OTBI-Enterprise deletes all files from the Source Files folder, so file_unspsc.csv gets deleted. For genuinely changing data files, this is OK; for UNSPSC data, it wasn’t really warranted. However, that’s the way it is.

Long story short, just remember to provide file_unspsc.csv in every batch. If you do use UNSPSC, provide it with valid data; if not, just keep providing OTBI-Enterprise’s sample file. You can find this information in the OTBI-Enterprise Document Library under the Known Issues section for UNSPSC.

Data Security with External Data

For non-Oracle sources, OTBI-Enterprise relies on data security measures built on top of the data warehouse itself, leveraging the configurations you do in BIACM. This won’t necessarily “match” your source applications, but that is as far as you may be able to get; with more configuration options OTBI-Enterprise provides in the future, we can only get closer. The following section describes how OTBI-Enterprise supports data security and what it recommends you configure.

Before that, it is assumed that you have all your OTBI-Enterprise users pre-configured in the OTBI-E pod. Also, you should have a clear plan for which of these users should get what kind of data access. And lastly, it is assumed that you have configured everything else and have run a complete full load of the ETL.

Here are the steps, once you are ready:

Ø Log into BIACM.

Ø On the “Tasks” section on your left hand panel, click on “Manage Data Security”.

Ø Select the source instance that you created before generating your Load Plans. In the following screenshot, the source instance selected was “EBS1222”, which is the same one we created earlier in our example.

Ø From the bottom pane, click the “plus” button to add a security object.

Ø Pick your source instance (EBS1222 in this case), and then select your “Secure by Object”. If you are planning to enable Department based data security, pick the appropriate item from the drop down box.

Ø From the “Users” list, pick your user (or users) who would have similar Department based security restrictions. In this case, we are picking a user called TM-MFITZIMMONS. Select this user and click the single arrow to get this user on the right hand side. Note that if you have multiple users planned, you could use the User Filter to search for them.

Ø Scroll down to see the list of Departments. Select (or multi-select) the Departments to which your user(s) would have access to. Click the single arrow to get them to the right hand side. Then click OK.

Ø The Security objects that you’ve configured will be shown in the bottom pane, as shown below.

Ø If you have other users or wish to secure by other (supported) dimension objects, you need to carry out the same steps.

Instead of having to do this for every user (or group of users with similar security requirements), OTBI-Enterprise provides a way to import all your security configurations from an external CSV file. For the sake of this document, I am not repeating what is already documented in the OTBI-Enterprise Document Library, under the Setting up Data Security section. The single-user process I described above is also covered in the same document library; I simply had to pick the approach that relates to the examples discussed earlier, and it happened to be the first one. Anyway, that’s pretty much it.

Potential Automations

None of these options are supported by OTBI-Enterprise. These are just options for you to use, or to come up with similar (maybe “better”) ones on your own.

File Upload to Storage Service Container

The first area of potential automation would be a way to upload your files to the Storage Service Container programmatically. This is going to be a daily exercise, so it had better be automated. Here is a small and quick Java program that utilizes class libraries from Oracle Cloud Storage Services and helps you move towards automation.

This program expects a property file as input, called “StorageService_Manager_Input.properties”. In it, you provide certain details about your storage container setup, as follows:

STORAGE_SERVICE_URL -> Storage service URL that Oracle gave you.

STORAGE_SERVICE_NAME -> Name of the Storage Service. Oracle gives you this as well.

STORAGE_SERVICE_USER_NAME-> Your username to access Storage Service.

STORAGE_SERVICE_PASSWORD -> Your password to access Storage Service.

STORAGE_SERVICE_CONTAINER_NAME -> The name of the container, as you configured while registering your source.

ACTION -> The action you want this program to take. Options are:

LIST_CONTAINERS -> Gives a listing of all containers within the Storage Service

UPLOAD_FILES -> Uploads all your files from “UPLOAD_FILES_FROM_DIRECTORY” into the container “STORAGE_SERVICE_CONTAINER_NAME”

LIST_FILES -> Gives a listing of all files currently existing in the container “STORAGE_SERVICE_CONTAINER_NAME”

DELETE_FILES -> Deletes all files from container “STORAGE_SERVICE_CONTAINER_NAME”

CREATE_CONTAINER -> Creates a container by the name “STORAGE_SERVICE_CONTAINER_NAME”

UPLOAD_FILES_FROM_DIRECTORY -> This is the location in your local server where all your files are waiting to be uploaded.

Create and save such a property file by the name “StorageService_Manager_Input.properties”. A Sample screenshot below:
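For reference, such a property file might look like the following. Every value here is a placeholder; substitute the URL, service name and credentials Oracle gave you for your pod, and the container name you registered earlier.

STORAGE_SERVICE_URL=<storage service URL from Oracle>
STORAGE_SERVICE_NAME=<storage service name from Oracle>
STORAGE_SERVICE_USER_NAME=<your storage user name>
STORAGE_SERVICE_PASSWORD=<your storage password>
STORAGE_SERVICE_CONTAINER_NAME=Container_Universal
ACTION=UPLOAD_FILES
UPLOAD_FILES_FROM_DIRECTORY=C:/temp/upload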

The following is the Java code. Copy it into an appropriate IDE such as Eclipse and save it as “StorageService_Manager.java”. Then compile it and create a .jar file; call it “StorageService_Manager.jar”. Needless to say, you need to have the Oracle Cloud Storage Service libraries referenced here, and a few more. I had these under my library folder (see screenshot):

 

 

import java.io.*;
import java.util.*;
import oracle.cloud.storage.CloudStorage;
import oracle.cloud.storage.CloudStorageConfig;
import oracle.cloud.storage.CloudStorageFactory;
import oracle.cloud.storage.exception.SystemException;
import oracle.cloud.storage.model.Key;
import oracle.cloud.storage.model.StorageInputStream;
public class StorageService_Manager {
    public static Properties inParam;
    public static String storageUrl;
    public static String storageServiceName;
    public static String storageUserName;
    public static String storagePassword;
    public static String storageContainerName;
    public static String storageAction;
    public static String uploadFromDirectory;
    public static CloudStorage myConnection;
    public static void main(String[] args) {
        try{
            initProperties();
            storageUrl = inParam.getProperty("STORAGE_SERVICE_URL").trim();
            storageServiceName = inParam.getProperty("STORAGE_SERVICE_NAME").trim();
            storageUserName = inParam.getProperty("STORAGE_SERVICE_USER_NAME").trim();
            storagePassword = inParam.getProperty("STORAGE_SERVICE_PASSWORD").trim();
            storageContainerName = inParam.getProperty("STORAGE_SERVICE_CONTAINER_NAME").trim();
            uploadFromDirectory = inParam.getProperty("UPLOAD_FILES_FROM_DIRECTORY").trim();
            storageAction = inParam.getProperty("ACTION").trim();
            //config and cloud storage connection object
            CloudStorageConfig myConfig = new CloudStorageConfig();
            myConfig.setServiceName(storageServiceName)
                    .setUsername(storageUserName)
                    .setPassword(storagePassword.toCharArray())
                    .setServiceUrl(storageUrl);
            myConnection = CloudStorageFactory.getStorage(myConfig);
            //list containers within the storage service
            if(storageAction.toUpperCase().equals("LIST_CONTAINERS")) {
                listContainers();
            }
            //upload files
            if(storageAction.toUpperCase().equals("UPLOAD_FILES")) {
                uploadFiles();
            }
            //list files
            if(storageAction.toUpperCase().equals("LIST_FILES")) {
                listFiles();
            }
            //delete files
            if(storageAction.toUpperCase().equals("DELETE_FILES")) {
                deleteFiles();
            }
            //create container
            if(storageAction.toUpperCase().equals("CREATE_CONTAINER")) {
                createStorageContainer();
            }
            System.out.println("End ...");
        }catch (Exception e){
            e.printStackTrace();
            System.out.println("Error has ocuured \n" +e);
        }
    }
    private static void createStorageContainer() {
        System.out.println("Creating container
:" + storageContainerName);
        try {
            oracle.cloud.storage.model.Container c = myConnection.createContainer(storageContainerName);
        }catch (Exception e){
            e.printStackTrace();
            System.out.println("Error in creating container\n" +e);
        }
    }
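    // Note: renameFiles() below is not wired to any ACTION in main(); it renames
    // already-processed objects in the container by prefixing "done_" to their names.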
    private static void renameFiles() {
        System.out.println("Renaming files in container :" + storageContainerName);
        java.util.List<Key> cobjects = myConnection.listObjects(storageContainerName, null);
        for (int i = 0; i < cobjects.size(); i++) {
            System.out.println("Renaming :" + cobjects.get(i).getKey());
            StorageInputStream sins = myConnection.retrieveObject(storageContainerName, cobjects.get(i).getKey());
            myConnection.storeObject(storageContainerName, "done_"+cobjects.get(i).getKey(), "text/plain", sins);
            myConnection.deleteObject(storageContainerName, cobjects.get(i).getKey());
        }
    }
    private static void deleteFiles() {
        System.out.println("Deleting files in container :" + storageContainerName);
        try {
            java.util.List<Key> cobjects = myConnection.listObjects(storageContainerName, null);
            for (int i = 0; i < cobjects.size(); i++) {
                System.out.println("Deleting :" + cobjects.get(i).getKey());
                myConnection.deleteObject(storageContainerName, cobjects.get(i).getKey());
            }
        }catch(SystemException ex) {
            System.out.println("Error in deleting files in container :" + storageContainerName);
            ex.printStackTrace(System.out);
        }
    }
    private static void listFiles() {
        System.out.println("Listing files in container :" + storageContainerName);
        try {
            java.util.List<Key> cobjects = myConnection.listObjects(storageContainerName, null);
            //System.out.println(cobjects);
            for (int i = 0; i < cobjects.size(); i++) {
                System.out.println(cobjects.get(i) + " :" + cobjects.get(i).getKey());
            }
        }catch(SystemException ex) {
            System.out.println("Error in listing files in container :" + storageContainerName);
            ex.printStackTrace(System.out);
        }
    }
    private static void uploadFiles() {
        try {
            File f = new File(uploadFromDirectory);
            File files[] = f.listFiles();
            System.out.println("Uploading files to container :" + storageContainerName);
            for (File file : files) {
                try {
                    FileInputStream fis = new FileInputStream(file);
                    String fileName = file.getName();
                    System.out.println("Uploading :" + fileName);
                    myConnection.storeObject(storageContainerName, fileName, "text/plain", fis);
                }catch(FileNotFoundException ex) {
                    System.out.println("Error in uploading file :" + file.getName());
                    ex.printStackTrace(System.out);
                }
            }
        }catch (NullPointerException ex){
            System.out.println("Error in uploading files in container :" + "Upload file directory does not exist");
            ex.printStackTrace(System.out);
        }
        catch (Exception ex){
            System.out.println("Error in uploading files in container :" + storageContainerName);
            ex.printStackTrace(System.out);
        }
    }
    private static void listContainers() {
        System.out.println("Listing Containers");
        try {
            List<oracle.cloud.storage.model.Container> containerList = myConnection.listContainers();
            for (int i = 0; i < containerList.size(); i++) {
                System.out.println(containerList.get(i) + " :" + containerList.get(i).getName());
            }
        }catch(Exception ex) {
            System.out.println("Error in listing containers");
            ex.printStackTrace(System.out);
        }
    }
    private static void initProperties() throws IOException {
        inParam = new Properties();
        inParam.load(ClassLoader.getSystemResourceAsStream("StorageService_Manager_Input.properties"));
    }
}

That’s it. Running it is fairly simple. Be in the directory where you have your JAR, then run:

java -cp .;%CLASS_PATH%;StorageService_Manager.jar StorageService_Manager >> storage_out.log

The output file “storage_out.log” records all that happens on the console.
