The Oracle BI Applications Blog provides the latest tips and tricks on the Oracle BI Applications (OBIA) product.

An Implementer’s Guide to External Data Support for Oracle Sources

Applies to OTBI-Enterprise Version 2 PB 2



Overview of the External Data Support Feature

Support for On-Premise Oracle AU sources

Support for Non-Oracle sources

Supported Subject Areas – HCM+

Practical Use Cases

Getting Started with External Data – Oracle AU Sources

Sequence of Steps – High Level

Deep Dive into the Actual Steps

Register Source and Configure Storage Service Container

Build Load Plans

Prepare Data Files and Upload

Prepare Data Files

Upload Data Files to SSC

Run your Load Plans

Managing Deletes in Source System

General Tips

Special Cases

UOM

Data Security with External Data

Potential Automations

File Upload to Storage Service Container

CSV file generator for Oracle Sources

Overview of the External Data Support Feature

Oracle Business Intelligence Applications has traditionally supported packaged data warehouses using pre-built ETL adaptors for on-premise Oracle sources such as E-Business Suite, PeopleSoft, Siebel, JD Edwards, and so on. For non-Oracle sources, template-based Universal Adaptors were supported through flat files (CSV format).

With the inception of Oracle Fusion Cloud and other cloud-based applications, such as Oracle Talent Cloud (Taleo), ETL adaptors for these were delivered as well.

With its latest product line, OTBI-Enterprise, Oracle delivers its data warehouse solution on the cloud. However, Version 1 supported Oracle Fusion HCM Cloud as the only source. With Version 2, Oracle Talent Cloud (Taleo) was added to the support matrix, making it possible to analyze HCM data (from Fusion HCM) and recruiting data (from Taleo) together.

With OTBI-Enterprise’s latest patch bundle (2), support is extended to HCM data from two on-premise Oracle sources, namely E-Business Suite Applications 12.2 and PeopleSoft Applications 9.2, using the pre-built ETL adaptors for these sources. On top of that, support is now also extended to HCM subject areas coming from non-Oracle sources via the Universal Adaptors. Put together, this is what is termed “External Data Support”. With this, you can now upload your on-premise EBS or PSFT data to the on-cloud data warehouse, alongside your other potential cloud-based sources such as Oracle Fusion HCM Cloud or Oracle Talent Cloud (Taleo), in case you have those.

Support for On-Premise Oracle AU sources

At a high level, the way it works is quite simple. You upload a set of your OLTP tables’ data (EBS or PSFT) in CSV file format to designated Oracle Storage Service Containers (henceforth called SSC). OTBI-Enterprise then downloads those files at run-time from the SSC and populates an on-cloud SDS (Source Data Store) database schema. This acts as a mirror image of the actual on-premise source system. The respective ETL adaptors then kick in and the final data warehouse gets populated. With the reporting layer already built into the pod, you get direct access to OBIEE Answers and Dashboards. Here is a quick diagrammatic representation of AU External Data support, illustrating how it fits into the rest of the OTBI-Enterprise picture.


Support for Non-Oracle sources

For non-Oracle sources (which could be practically anything, including non-Oracle cloud applications), see my other blog on Universal Adaptors, called “An Implementer’s Guide to External Data Support for Universal Sources”.

Supported Subject Areas – HCM+

OTBI-Enterprise External Data support does not cover HCM subject areas that make extensive use of OLTP procedures and packages. For example, for E-Business Suite Applications, the HCM subject area “Accruals” relies on certain EBS PL/SQL packages. As it stands now, there is no easy way to replicate those packages within the cloud SDS. Therefore, the Accruals subject area from EBS isn’t supported in OTBI-Enterprise Version 2.

The acronym HCM+ stands for core HCM plus certain additional areas, such as GL Journals. The “Workforce Effectiveness” subject area uses this, and is supported for External Data. The following shows the entire HCM+ support matrix, by source.

Practical Use Cases

You would most likely have a mix of source systems against which you would want your data warehouse to be built. For example, you may have upgraded to Oracle Fusion HCM cloud for your core HCM data. However, your payroll processing is outsourced to a third party vendor (maybe ADP). You might still be using EBS for your Financials (upgrade to Oracle Fusion FSCM work-in-progress). And you use Oracle Talent Cloud (Taleo) for recruiting data. Such a mix is quite normal. With the delivery of External Data Support, this is how you can work out your solution in OTBI-Enterprise and have your data warehouse on the cloud.

Ø Use Oracle Fusion HCM Cloud Adaptor for core HCM, except Payroll subject area.

Ø Use the External Data – Universal Adaptor approach for Payroll data from the third party (ADP). See my other blog on Universal Adaptors, called “An Implementer’s Guide to External Data Support for Universal Sources”.

Ø Use External Data – EBS for GL Journals [that’s the only subject area supported with HCM+]. This is the case we are dealing with in this document.

Ø Use Oracle Talent Cloud (Taleo) Adaptor for recruiting subject area.

There could be many such cases and combinations. As long as you understand what is supported (see the matrix above), you get to decide which data comes from where.

Getting Started with External Data – Oracle AU Sources

For Oracle sources (E-Business Suite 12.2 or PeopleSoft Applications 9.2), OTBI-Enterprise uses the corresponding ETL adaptors to populate the data warehouse tables. These adaptors rely on a relational database where all the required OLTP tables are present. For OTBI-Enterprise, this is the SDS schema, where all the required OLTP tables (but no OLTP-specific PL/SQL packages, procedures, or functions) are replicated by the Cloud Replicator. From there, the regular ETL processes, such as the source dependent extracts (SDE), source independent loads (SILOS), and post load (PLP), kick in.

Providing the CSV files for each required OLTP table is the biggest challenge here. Fortunately, OTBI-Enterprise provides decent template SQL files that help you figure out which tables are required and how to extract the data from them. We will go over these in detail below.

Once the CSV files are there in the Storage Service Container, OTBI-Enterprise picks them up from there and loads up the SDS.

Sequence of Steps – High Level

At a high level, the sequence of steps goes like this:

1. Register Source (PSFT or EBS), Configure Storage Service Container (SSC)

2. Pick subject areas (FGs or Fact Groups) and build Load Plans [Regular and Domains-only]

3. Download ORDER.SQL, ORDER_DDL.SQL and ORDER_PE.SQL files for each LP

4. Follow ORDER.SQL against Domains-only LP and prepare the data exports in CSV format for Domains-only LP

5. Upload data files (for Domains-only LP) to SSC

6. Run Domains-only LP

7. Configure domains

8. Follow ORDER.SQL against regular LP and prepare the data exports in CSV format for regular LP

9. Upload data files (for regular LP) to SSC

10. Run regular LP

11. Repeat steps 8 through 10 for incremental ETL runs

12. If you have deletes in the OLTP for a table, follow ORDER_PE.SQL for that table, prepare data exports in PECSV format, upload them to the SSC, and run them along with the usual incremental run

Deep Dive into the Actual Steps

Let’s take a deeper look into what goes on in each of these steps.

Register Source and Configure Storage Service Container

First, you need to enable BI Applications Offerings. To do that,

Ø Login to BIACM

Ø Click on Tasks -> Source Setups -> Manage Business Applications

Ø Select the offerings you prefer

Ø Click Save

The following screenshot shows the Human Resources and Financial offerings enabled. The Custom offering always comes by default. For OTBI-Enterprise, the recommendation is to disable it, since there is not much customization you will get to do.

Now we proceed to source registration. Register an E-Business Suite Applications 12.2 or a PeopleSoft Applications 9.2 source for External Data. Here is how (taking the EBS example):

Ø Click on Tasks -> Source Setups -> Define Business Intelligence Applications Instance

Ø Source Systems Tab -> Click on Create -> Pick Product Line as Oracle E-Business Suite, and Product Line Version as Oracle E-Business Suite R12.2.x.
Set a name for your source instance, say EBS122, and pick a Data Source Number, say 201. Set Source Load Technology to “Replicator External File”.

Ø Click on Next. Note that the Source Files folder path gets populated. At the end of the successful registration, BIACM will provision this folder automatically.

Ø Create a new Container (Drop Down from “Container” and pick “Create Container”) if configuring for the first time. Pick any name. The screenshot below shows that the name given is “ExternalData_EBS”.

Ø Click on Save. The source system will be registered and the Storage Service Container created.

Build Load Plans

Next, build your load plans. Two load plans are needed: a “Domains-Only Load Plan”, which you will run first, and your regular load plan, which you will run regularly (the first run goes in Full mode, thereafter in Incremental mode).

Ø Click on Tasks -> Load Plan Administration -> Manage Load Plans -> Add

Give a name to your LP. We call it “DomainsOnly_EBS” in this example. Pick the right LP type and select the source instance that you set up while registering your source. Since we do the Domains-Only LP first, the correct LP Type is “Domain-Only Extract and Load (SDE and SIL)”.

Ø Click on Next, and then pick the FGs (Fact Groups) as needed. Use the information in the HCM+ support matrix mentioned above in this document. Make sure you pick from what is supported.

Ø Click Generate. Wait for the Domains LP generation to complete.

Now build another Load Plan which you will be using for regular loads, full and incremental.

Ø Click on Tasks -> Load Plan Administration -> Manage Load Plans -> Add

Give a name to your LP. We call it “Full_EBS”. Pick the right LP type and select your source instance that you had set while registering your source. Since we are doing regular LP now, the LP Type should be “Source Extract and Load (SDE, SIL and PLP)”.

Ø Click on Next, and then pick the FGs (Fact Groups) as needed. Use the same FGs as you used while creating the Domains-Only LP.

Ø Click Generate. Wait for the Full LP generation to complete.

This completes the step of building all necessary Load Plans.

Prepare Data Files and Upload

This is the most critical part of the implementation project. Let’s break it down into two parts: preparation of files, and upload to SSC.

Prepare Data Files

You need to create two sets of data CSV files. The first set has those needed by the Domains-Only ETL (this is one-time only), and the second is for the Full ETL (you repeat this for each incremental run). The process for creating the files is exactly the same in both cases. The following illustrates the steps for the Domains-Only ETL. To be clear, file preparation is a two-step process. First, you generate the Order Files (SQL files that Oracle generates for you depending upon your FG selections in the LP); second, you use these Order Files and run them against your EBS source to extract the data out in CSV format.

Let’s start with the first step, generating the Order Files

Ø Click on Tasks -> Load Plan Administration -> Manage Load Plans -> Select the Domains-Only LP that you just now generated (DomainsOnly_EBS).

Ø Keeping that selected, click on a button on the toolbar, the mouse-over of which reads “Export Source System Extract SQL Command Files”.

Ø It pops up a new window, where under “Actions”, you click on the one that says “Execute Source System Extract SQL Command Files Scenario”.

Ø Since you are actually running an ODI scenario behind the scenes, you will be prompted to provide information about your Agent. Provide it, and then click OK.

Ø It takes a few seconds to run, and then you get the results back in a new popup window. Three .SQL files are presented to you.

Ø Select the files and export them out to your local box, using the ‘Export’ button.

We will go over what is in each of these files, and how it helps you. But before that, as I mentioned earlier, the process so far is exactly the same for the Full_EBS Load Plan. Here is a screenshot of the final step on that LP.

Note that the .SQL files have the Load Plan name prefixed, as in “Full_EBS_***” or “DomainsOnly_EBS_***”, so you can’t mix them up. You can actually download all six together, but while creating and providing the CSV files for consumption, take care to upload the ones needed by the appropriate LP.

Now that Order Files are generated, let’s go to the next step.

Generate your extract CSV files

These three sets of .SQL files are meant to help you generate your data CSV files. To understand how and why, let’s take a closer look at the contents of these files. Each file has a comment section at the beginning that explains what the file contains and how to use the information in it. The main file to be used is called ORDER.SQL.

The other two files are ORDER_DDL.SQL and ORDER_PE.SQL. For the purposes of this section, we will not spend much time on these two files, apart from mentioning that ORDER_DDL.SQL provides you with the structure of the required OLTP tables, as per the OTBI-Enterprise model. You could use this to compare with your real OLTP and spot differences upfront. ORDER_PE.SQL is similar to ORDER.SQL in the sense that it also provides a set of SQL SELECT statements. However, the intent of ORDER_PE.SQL is to select only the primary keys from each OLTP table. To support physical hard deletes from your OLTP, this helps OTBI-Enterprise sync up the data in the SDS with your OLTP and mark deleted records as “deleted” [by setting CDC_$DML_CODE to ‘D’]. The data file CSVs resulting from ORDER_PE.SQL have the extension .PECSV, while the main ones against ORDER.SQL have the extension .CSV.

Here are screenshots of the comment section for all the three files. We will go over the details of the last and the most important one, ORDER.SQL, right after that.

The above is a screenshot showing a sample ORDER_DDL.SQL file. The comments should be self-explanatory.

The above is a screenshot showing a sample ORDER_PE.SQL file. The comments should be self-explanatory.

The above is a screenshot showing a sample ORDER.SQL file. The comments should be reasonably clear, but here are a few tips on this topic:

Ø If an OLTP table has a LAST_UPDATED_DATE column (which most E-Business Suite tables do, though not necessarily PeopleSoft tables), the ORDER.SQL file will have added a WHERE clause using that date as a filter. A couple of points about this, but first a screenshot to display what I’m talking about.

o You should maintain values for these variables for each of your extracts – initial, then first incremental, then second incremental and so on.

o For the initial load, if you prefer to load everything for all tables, simply remove this WHERE clause, or provide a reasonably old date (like 01-JAN-1970) if you prefer not to edit the file. You can also use this filter to specify the date from which you want your data warehouse to have data. For example, 01-JAN-2006, in case you want 10 years of data to start with.

o Once the extract for your initial run is over, the date when you did so becomes the date for your first incremental. When the first incremental is over, the date when you did so becomes the date for your second incremental, and so on.

o You should plan to maintain all such dates and use them appropriately before each extraction. You may want to set up an automation script for this.

Ø Certain date-effective tables have to be handled in a different way. The ORDER.SQL provides you with the template SQL to take care of that. Apart from setting the value of the #LAST_EXPORTED_DATE variable, it is recommended that you do not modify this part. This is critical to maintaining the sanctity of the date-effective data. Here’s a screenshot displaying that portion of the SQL. This is not an action, just more of an FYI.
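
The date bookkeeping described in the points above lends itself to a small script. Here is a minimal Python sketch (the state-file name and the date format are my own choices, not anything OTBI-Enterprise mandates) that remembers each extract date per table, for use as the next #LAST_EXPORTED_DATE:

```python
import json, os
from datetime import datetime

STATE_FILE = "last_exported_dates.json"   # hypothetical local state file

def load_state():
    # Previous extract dates, keyed by table name.
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            return json.load(f)
    return {}

def record_extract(tables, when=None):
    # After a successful extract, remember its date per table so it can
    # be plugged into #LAST_EXPORTED_DATE before the next extraction.
    when = when or datetime.now()
    state = load_state()
    stamp = when.strftime("%d-%b-%Y").upper()   # e.g. 01-JAN-2016
    for table in tables:
        state[table] = stamp
    with open(STATE_FILE, "w") as f:
        json.dump(state, f, indent=2)
    return state
```

Run `record_extract` once after every successful export, and read `load_state()` before preparing the next one.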

Now that you have your SQLs handy and your date parameter (#LAST_EXPORTED_DATE) values picked/programmed, it is time to crank out the CSV files. There are certain rules you need to be aware of when dumping data into these CSV files. They are as follows:

Ø The first line is the column header line. It should match the columns selected in ORDER.SQL. If you are using the ORDER.SQL without modifications to the SELECT clause, you are OK here. This is assumed to be a non-data row, and OTBI-Enterprise only uses it to match up with the SDS schema and populate the right data against the right column.

Ø Data starts from the second row.

Ø File Naming Standards:

o For Regular Data Files (.CSV)

§ Format - file_TableName-Date_Time.CSV, where

§ TableName - Table name of the OLTP table whose data is being generated (table name in UPPER CASE).

§ Date - YearMonthDay [20141017]

§ Time - (24)HourMinuteSecond [230512]

§ Ex: file_PS_HRS_APP_PROFILE-20140916_010515.csv [OLTP table used here is PS_HRS_APP_PROFILE]

o For Primary Extract Files (.PECSV)

§ Format - file_keys_TableName-Date_Time.PECSV, where

§ TableName - Table name of the OLTP table whose Primary Key data is being generated (table name in UPPER CASE).

§ Date - YearMonthDay [20141017]

§ Time - (24)HourMinuteSecond [230512]

§ Ex: file_keys_PS_HRS_APP_PROFILE-20140916_120512.pecsv [OLTP table used here is PS_HRS_APP_PROFILE]
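
As a sanity check, the naming convention above can be produced by a small helper (a sketch; the function name is my own):

```python
from datetime import datetime

def data_file_name(table, when, keys_only=False):
    # Build file_TABLE-YYYYMMDD_HHMISS.csv, or file_keys_...pecsv for a
    # Primary Extract file, per the naming standard. Table is upper-cased.
    stamp = when.strftime("%Y%m%d_%H%M%S")
    if keys_only:
        return "file_keys_%s-%s.pecsv" % (table.upper(), stamp)
    return "file_%s-%s.csv" % (table.upper(), stamp)

# data_file_name("ps_hrs_app_profile", datetime(2014, 9, 16, 1, 5, 15))
# → file_PS_HRS_APP_PROFILE-20140916_010515.csv
```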

Ø Only raw CSVs are accepted. Compressed (zipped) files aren’t allowed.

Ø File Size: The maximum size of a single file is 5 GB. If a table is large, you can split it into multiple files as part of the same batch. Use the timestamp to differentiate them. E.g.


The 5 GB limitation is for a single file in Cloud Storage. However, from a Cloud Replicator standpoint, further limits are imposed via Source Connection properties, using a “cumulative file size limit per table”. Initial (Full) extract and Incremental extract have different properties for the file size limit. During a replicate call, the cumulative file size limit check is performed per table, and exceptions are raised when the limit is hit.

Default values of seeded Connection Properties are as follows:



The cumulative size is the sum of all the available files per table in the container, irrespective of date. For example, the following files are for two different dates, the 16th and the 17th, and all their file sizes are summed up as the cumulative size.


So if your cumulative file size (the sum of multiple days’ files, or the sum of a single day’s split files) goes beyond 5 GB, you should change the above parameters in your Cloud Replicator setup, or just remove the properties completely (so that the check isn’t performed at all).
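
Before uploading, you can estimate the cumulative size per table yourself with a small sketch like the one below (the file-name pattern follows the naming standard above; the 5 GB constant mirrors the seeded limit):

```python
import os, re
from collections import defaultdict

LIMIT_BYTES = 5 * 1024 ** 3   # the seeded 5 GB cumulative limit

NAME_RE = re.compile(r"^file_(?:keys_)?(.+)-\d{8}_\d{6}\.(?:csv|pecsv)$",
                     re.IGNORECASE)

def cumulative_sizes(folder):
    # Sum file sizes per table across all dates in the folder, the same
    # way the Replicator's cumulative check treats the container.
    sizes = defaultdict(int)
    for name in os.listdir(folder):
        m = NAME_RE.match(name)
        if m:
            sizes[m.group(1)] += os.path.getsize(os.path.join(folder, name))
    return dict(sizes)

def over_limit(sizes, limit=LIMIT_BYTES):
    # Tables whose cumulative size would trip the Replicator's check.
    return sorted(t for t, s in sizes.items() if s > limit)
```

Running `over_limit(cumulative_sizes("C:/temp/upload"))` before each batch tells you which tables need the limit properties adjusted or removed.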

Ø File content standards:

o The CSV delimited data format should have fields/columns separated by the comma (,) character and records/rows terminated by the newline (\n) character. Carriage returns (\r) should be avoided for portability across platforms.

o Any newline characters present in the column data need to be removed, as they will cause problems during the load. Yes, this is a modification to the source data, but all such columns are typically comment-like columns, and unlikely to be used for analysis.

o The CSV should include column headers; columns can be in any arbitrary order.

o CSV column value supported data types are VARCHAR, INTEGER, DOUBLE, NUMBER, DATE, and TIMESTAMP.

o All CSV column values should be enclosed within double quotes (") and a null value should be represented by a pair of empty double quotes, e.g.
"1997","Ford","E350","". Note: this applies even to numeric, date, and all other data types, not just character columns.

o Character values can contain the comma character, as long as they are enclosed within the delimiter (double quotes). If character values contain double quotes, those should be escaped as a pair of strictly consecutive double quotes (""). As an example, let’s say that the raw values are:

1997,Ford,E350,Super, "luxurious" truck,"Fully enclosed value"

Then, the values that go into the CSV file would be:

"1997","Ford","E350","Super,""luxurious"" truck","""Fully enclosed value"""
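
As it happens, Python’s csv module with QUOTE_ALL reproduces exactly these quoting rules; a small sketch (the newline scrubbing follows the rule above):

```python
import csv, io

def clean(value):
    # NULL becomes an empty quoted string; embedded newlines are
    # scrubbed, since they would break the load.
    return "" if value is None else str(value).replace("\r", " ").replace("\n", " ")

buf = io.StringIO()
writer = csv.writer(buf, quoting=csv.QUOTE_ALL, lineterminator="\n")
writer.writerow([clean(v) for v in
                 [1997, "Ford", "E350", 'Super, "luxurious" truck', None]])
print(buf.getvalue())
# "1997","Ford","E350","Super, ""luxurious"" truck",""
```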

o Date/Time Formats: Date values can have a time component. Timestamp does NOT support TIMEZONE. The default Date and Timestamp format is 'YYYY-MM-DD"T"HH24:MI:SS'.

o For example, Date/Timestamp values:

CSV value '2014-11-25T10:00:00', DB stored value '25-NOV-14 10.00.00 AM'

CSV value '2014-11-25T16:30:00', DB stored value '25-NOV-14 04.30.00 PM'

CSV value '2014-11-25T00:00:00', DB stored value '25-NOV-14 12.00.00 AM'

CSV value (time component can be omitted) '2014-11-25', DB stored value '25-NOV-14 12.00.00 AM'
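
The format string 'YYYY-MM-DD"T"HH24:MI:SS' translates directly into a strftime pattern; a quick sketch:

```python
from datetime import datetime

CSV_TS_FORMAT = "%Y-%m-%dT%H:%M:%S"   # i.e. 'YYYY-MM-DD"T"HH24:MI:SS'

def csv_timestamp(dt):
    # Render a datetime the way the loader expects it (no timezone).
    return dt.strftime(CSV_TS_FORMAT)

print(csv_timestamp(datetime(2014, 11, 25, 16, 30, 0)))
# 2014-11-25T16:30:00
```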

o CLOB data type values have to be trimmed and limited to VARCHAR2(4000). It does not make sense to extract BLOB columns, as they store binary data and cannot be used for reporting.

A sample data CSV file is shown in the following screenshot.

Finally, you have all your CSV data files ready and populated for the first batch. You now upload these to the SSC; we discuss that next. For now, put all your data CSV files into a particular folder, say “C:/temp/upload”. All files from there will go into the SSC.

I am going to discuss a specific programming option using Java (see the Potential Automations section) to generate CSV files against your source tables. This isn’t supported by OTBI-Enterprise, though. You are free to adopt this program, change it to your needs, or come up with your own.
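
The generator itself is easy to sketch. The version below is an unofficial Python illustration (not the Java program mentioned above): it runs one SELECT taken from ORDER.SQL and writes a compliant .CSV. It uses sqlite3 purely as a runnable stand-in; against a real EBS or PSFT database you would swap in an Oracle driver connection.

```python
import csv, os, sqlite3
from datetime import datetime

def dump_table(conn, table, select_sql, out_dir="."):
    # Run one SELECT from ORDER.SQL and write file_TABLE-Date_Time.csv,
    # header row first, every value double-quoted, NULL as "".
    cur = conn.execute(select_sql)
    columns = [d[0] for d in cur.description]
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    path = os.path.join(out_dir, "file_%s-%s.csv" % (table.upper(), stamp))
    with open(path, "w", newline="") as f:
        writer = csv.writer(f, quoting=csv.QUOTE_ALL, lineterminator="\n")
        writer.writerow(columns)
        for row in cur:
            writer.writerow(["" if v is None else v for v in row])
    return path
```

You would loop this over every SELECT statement in ORDER.SQL, substituting your #LAST_EXPORTED_DATE values first.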

Upload Data Files to SSC

OTBI-Enterprise leaves it up to the customer how they want to upload the files to the Storage Service Containers. It is assumed that customers can get a bit technical and prepare their own script to upload the files. There are quite a few ways to do that, and utilities are available as well. One of them is Cyberduck, which has a module for Oracle Storage Cloud Service.

I am going to discuss a specific programming option using Java (see the Potential Automations section). This isn’t supported by OTBI-Enterprise, but hey, it works! You are free to adopt this program, change it to your needs, or come up with your own.
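
For a sense of what such a script looks like, here is a minimal Python sketch. The URL shape and the X-Auth-Token header follow the Swift-compatible REST API that Oracle Storage Cloud Service exposes, but treat the endpoint details as assumptions and verify them against your service documentation; `BASE_URL` and `token` are placeholders for your account’s values.

```python
import os
from urllib import request

def build_put_request(base_url, container, filename, token):
    # Construct (but do not send) the PUT that uploads one file into the
    # container; base_url and token come from your Storage Cloud account.
    url = "%s/%s/%s" % (base_url.rstrip("/"), container,
                        os.path.basename(filename))
    req = request.Request(url, method="PUT")
    req.add_header("X-Auth-Token", token)
    return req

# Sending everything in the upload folder (paths are assumptions):
# for name in os.listdir("C:/temp/upload"):
#     req = build_put_request(BASE_URL, "ExternalData_EBS", name, token)
#     with open(os.path.join("C:/temp/upload", name), "rb") as f:
#         request.urlopen(req, data=f.read())
```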

Assuming that you’ve done it, this concludes your data preparation and upload step.

Run your Load Plans

First, you are required to run the Domains-Only Load Plan. Upload the CSV files that are required for this LP, then run it.

Ø Click on Tasks -> Load Plan Administration -> Manage Load Plans. Then pick the Domains-Only LP called “DomainsOnly_EBS” and click on Execute. Check the execution status and wait for it to finish.

Once this finishes, you need to carry out the necessary domain configurations. That is a whole different topic, out of scope for this write-up. Then you upload the CSV files that are required for the full LP (Full_EBS), and run that LP (same navigation as above). You don’t have to clean up the files existing in the SSC, but it is good practice to remove clutter from time to time.

Managing Deletes in Source System

Hard deletes often happen in the source system. These have to be reflected in the data warehouse; otherwise it stands a chance of reporting incorrect data. In the case of OTBI-Enterprise, the hard deletes that happen in your source system are first reflected in the SDS schema. The ETL routines are already capable of handling deletes, once found.

The following section discusses how OTBI-Enterprise manages to reflect hard deletes in the SDS layer, and what your actions are.

Remember the ORDER_PE.SQL file that was produced (as part of the three .SQL files you downloaded)? That’s the one that comes into use now. The ORDER_PE.SQL file provides SELECT statements against the primary key of each OLTP table, without any date filters, by design. See a portion of the file below:

The tables in the SDS schema still have the records that have now been deleted from the source system. When you provide OTBI-Enterprise with the set of PK data, it compares it against what the SDS already has and finds the records missing from your provided PK data set. Those are assumed to be “hard deleted” from the source. The SDS then sets a column called CDC_$DML_CODE to ‘D’, indicating Delete. The downstream ETL processes know how to take care of this. Note that OTBI-Enterprise never hard deletes a record from either the SDS or the data warehouse. It only marks the record as deleted, in other words, soft deletes it. So that’s how it works.
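
Conceptually, the comparison amounts to a simple key diff; a toy Python illustration (not the actual Replicator logic):

```python
def find_hard_deletes(sds_keys, extracted_keys):
    # Keys the SDS has but the latest .PECSV extract does not are
    # presumed hard-deleted at source and get flagged with 'D'.
    return sorted(set(sds_keys) - set(extracted_keys))

print(find_hard_deletes(["101", "102", "103"], ["101", "103"]))
# ['102']
```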

Now, your action.

If you happen to know that certain records were hard deleted from certain tables in your source, you should take the corresponding SELECT statements from ORDER_PE.SQL and use them to populate your PK data files (format mentioned above for .PECSV files). For example, if you happen to know that records were hard deleted from only two tables, AP_SUPPLIERS and AP_TERMS_LINES, you should provide only two .PECSV files (one for each table) to the upcoming incremental batch. Not all.

That brings up another point. These .PECSV files are not required when you are doing an initial load; they are required only for incremental loads. You decide when you are ready to provide them. You can provide them during any batch, starting from the first incremental run, along with the main data file in the same batch. Meaning, files like file_AP_SUPPLIERS-20140916_010515.csv and file_keys_AP_SUPPLIERS-20140916_120512.pecsv can go in the same batch.

If you are unsure what got hard deleted in your source system, or failed to keep track of it, you really have two choices:

Ø Provide .PECSV for all tables [following the ORDER_PE.SQL file] in the next batch

Ø Discard the warehouse and carry out another fresh full load

Going by the first approach, this could prove to be quite a costly process. If 90% of the tables did not have deletes, you are forcing OTBI-Enterprise to do a brute-force comparison of keys unnecessarily. But then, you can get a bit smart and provide only .PECSV files in this batch and no real data files. This way, you keep this batch (and the following ETL run) reserved for managing your deletes. Maybe do that during a long weekend or so. Ideally, though, you should start keeping track of your deletes, maybe using database triggers, if your DBA allows that.

Going by the second approach, you get a clean data warehouse, but at the cost of losing all your rich historical information. This isn’t recommended, and should be carried out only as a last resort.

General Tips

Most problems usually come (a) from data quality, and (b) from data structures. Data quality is an issue that OTBI-Enterprise’s Health Check feature will warn you about. When you run your ETL, the Health Check routines fire up at the beginning of the Load Plan. If any discrepancies are found in the data, the Health Check will warn you of that. You can download the Health Check log as an HTML file, and look for “Errors” or “Failures”. You can probably live with “Warnings”, but it is a good idea to address those as well. The failures are almost always related to bad data, or looping records in your source. These may be acceptable to the source application, but not to BI Applications. If you allow bad data to flow into the data warehouse (those failures and errors), it is bound to cause unknown issues downstream.

The above discussion about Health Check applies equally to Fusion Cloud, and you may already be aware of it. The current write-up is about External Data, so the next item (b), the issue of data structures, is more relevant here.

OTBI-Enterprise supports E-Business Suite R12.2 and PeopleSoft 9.2 source applications. This means it supports these as base versions. Nowhere does it claim to support R12.2 with Patch 14, for instance, or PeopleSoft 9.2 with Bundle 8. If you have applied newer patches on top of the base versions of your sources, chances are you might have newer data structures for your underlying tables. Remember, when OTBI-Enterprise generates the ORDER.SQL file, it is unaware of your source versions/tables. It doesn’t get a chance to “connect and see” across the cloud. What it generates is purely based on its knowledge of the data structures at the base level.

But there are ways you can prepare yourself for surprises. Remember that ORDER_DDL.SQL file OTBI-Enterprise generated? It’s time you put that to work.

The ORDER_DDL.SQL file contains the DDL statements (CREATE TABLE) for the source tables at the base version/release of the source application. This is what OTBI-Enterprise expects your data structures to look like. You could take these DDL statements, run them in a new database schema to create a replica of the source tables (without data, of course), and compare the objects against your real source application database schema. Read the comments section of the ORDER_DDL.SQL file and you will know exactly “what” needs to be done. And now that you have read this section, you know “why” you need to do it.

Special Cases


I mentioned earlier that OTBI-Enterprise does not support usage of your source-side PL/SQL packages, procedures, or functions, or even the E-Business Suite-specific functions called “Fast Formulas”. However, in some corner cases, there is support of a sort. Currently, this is done only for UOM (Units of Measure) Conversion for E-Business Suite, because there is no other way OTBI-Enterprise could make use of the conversion rates used in the source. And of course, without UOM Conversions properly done, the BI numbers would be off.

The way UOM is supported for E-Business Suite is documented in Oracle’s OTBI-E Document Library, in the Known Issues section for UOM. It is slightly different from what we’ve discussed so far, so it is worth mentioning. At a high level, there are four ETL interfaces that capture this UOM Conversion rate data. Each of these calls a PL/SQL procedure at the E-Business Suite end and loads up a temporary table, which is put to work in downstream ETLs. Since the PL/SQL procedure won’t be available in the SDS schema in the OTBI-Enterprise pod, you are asked to run four complex SQLs (doing the job of the procedure) and export the data out to a particular CSV file [the name of this file is important, and is documented]. OTBI-Enterprise already knows how to read and process this particular file and take it further down.

Data Security with External Data

For Oracle sources like E-Business Suite Applications and PeopleSoft Applications, the IDM (Identity Manager) at the source (on-premise) end cannot be synched up with the one at the OTBI-Enterprise (cloud) end. There is no way to guarantee single sign-on across the cloud. Also, the security grants data from on-premise source applications shouldn’t be replicated to the cloud; there could be potential security threats due to data latency.

For non-Oracle sources, there is no equivalent comparison.

For either of these cases, therefore, OTBI-Enterprise relies on data security measures built on top of the data warehouse itself, leveraging configurations you make in BIACM. This won’t necessarily “match” your source applications, but that is as far as you may be able to get. With more configuration options that OTBI-Enterprise provides in the future, we can only get closer. The following section describes how OTBI-Enterprise supports data security and what it recommends you configure.

Prior to that, it is assumed that you have all your OTBI-Enterprise users pre-configured in the OTBI-E pod. You should also have a clear plan for which of these users should get what kind of data access. And lastly, it is assumed that you have completed the rest of the configuration and have run a complete full load of the ETL.

Here are the steps, once you are ready:

Ø Log into BIACM.

Ø In the “Tasks” section on the left-hand panel, click on “Manage Data Security”.

Ø Select the source instance that you created before generating your Load Plans. In the following screenshot, the source instance selected was “EBS1222”, which is the same one we created earlier in our example.

Ø From the bottom pane, click the “plus” button to add a security object.

Ø Pick your source instance (EBS1222 in this case), and then select your “Secure by Object”. If you are planning to enable Department based data security, pick the appropriate item from the drop down box.

Ø From the “Users” list, pick your user (or users) who would have similar Department based security restrictions. In this case, we are picking a user called TM-MFITZIMMONS. Select this user and click the single arrow to get this user on the right hand side. Note that if you have multiple users planned, you could use the User Filter to search for them.

Ø Scroll down to see the list of Departments. Select (or multi-select) the Departments to which your user(s) should have access. Click the single arrow to get them to the right hand side. Then click OK.

Ø The Security objects that you’ve configured will be shown in the bottom pane.

Ø If you have other users, or wish to secure by other (supported) dimension objects, carry out the same steps. Instead of repeating this for every user (or group of users with similar security requirements), OTBI-Enterprise also provides a way to import all your security configurations from an external CSV file. For the sake of this document, I am not repeating what is already documented within the OTBI-Enterprise Document Library, in the Setting up Data Security section. Even the single-user based process, as described above, is covered in the same document library. However, I had to pick the one that relates to the examples discussed earlier, and it happened to be the first approach. Anyway, that’s pretty much about it.
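As a sketch of how such an import file could be produced programmatically, here is a minimal Java helper that flattens a user-to-department mapping into CSV rows. The column layout used here (USER_NAME, SECURED_OBJECT, SECURED_VALUE) is purely illustrative and an assumption on my part – use the exact template documented in the OTBI-Enterprise Document Library.

```java
import java.util.*;

public class SecurityCsvBuilder {
    // Flattens a user -> secured-value mapping into CSV rows.
    // NOTE: the header and column layout below are hypothetical placeholders,
    // not the documented OTBI-E import template.
    public static String buildCsv(Map<String, List<String>> userToValues, String securedObject) {
        StringBuilder sb = new StringBuilder("USER_NAME,SECURED_OBJECT,SECURED_VALUE\n");
        for (Map.Entry<String, List<String>> e : userToValues.entrySet()) {
            for (String value : e.getValue()) {
                sb.append(e.getKey()).append(',')
                  .append(securedObject).append(',')
                  .append(value).append('\n');
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // One user granted access to two departments (sample values)
        Map<String, List<String>> grants = new LinkedHashMap<>();
        grants.put("TM-MFITZIMMONS", Arrays.asList("Dept100", "Dept200"));
        System.out.print(buildCsv(grants, "Department"));
    }
}
```

Generating the file from a spreadsheet or an HR extract this way avoids the click-through for every user, while the actual import still happens through BIACM.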

Potential Automations

None of these options are supported by OTBI-Enterprise. They are just options for you to use, or starting points from which to come up with similar (maybe “better”) ones of your own.

File Upload to Storage Service Container

The first area of potential automation would be a way to upload your files to Storage Service Containers programmatically. This is going to be a daily exercise, so it had better be automated. Here is a small and quick Java program that utilizes class libraries from Oracle Cloud Storage Services, and helps you move towards automation.

This program expects a property file as an input, called “StorageService_Manager_Input.properties”. In it, you provide certain details about your storage service setup, as follows:

STORAGE_SERVICE_URL -> Storage service URL that Oracle gave you.

STORAGE_SERVICE_NAME -> Name of the Storage Service. Oracle gives you this as well.

STORAGE_SERVICE_USER_NAME -> Your username to access Storage Service.

STORAGE_SERVICE_PASSWORD -> Your password to access Storage Service.

STORAGE_SERVICE_CONTAINER_NAME -> The name of the container, as you configured while registering your source.

ACTION -> The action you want this program to take. Options are:

LIST_CONTAINERS -> Gives a listing of all containers within your storage service

LIST_FILES -> Gives a listing of all files currently existing in the container “STORAGE_SERVICE_CONTAINER_NAME”

DELETE_FILES -> Deletes all files from container “STORAGE_SERVICE_CONTAINER_NAME”

UPLOAD_FILES -> Uploads all files from “UPLOAD_FILES_FROM_DIRECTORY” into the container “STORAGE_SERVICE_CONTAINER_NAME”

UPLOAD_FILES_FROM_DIRECTORY -> The location on your local server where all your files are waiting to be uploaded.

Create and save such a property file by the name “StorageService_Manager_Input.properties”.
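A plain-text sample of such a property file might look like this (all values are placeholders – use the URL, service name and credentials Oracle gave you, and the container name you registered with your source):

```properties
# StorageService_Manager_Input.properties -- sample values only
STORAGE_SERVICE_URL=https://yourdomain.storage.oraclecloud.com
STORAGE_SERVICE_NAME=Storage-yourdomain
STORAGE_SERVICE_USER_NAME=your.user@yourcompany.com
STORAGE_SERVICE_PASSWORD=yourPassword
STORAGE_SERVICE_CONTAINER_NAME=EBS1222
ACTION=UPLOAD_FILES
UPLOAD_FILES_FROM_DIRECTORY=C:/Temp/UploadReady
```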

Following is the Java code. Copy it to an appropriate IDE like Eclipse and save it as “StorageService_Manager.java”. Then compile it to create a .jar file; call it “StorageService_Manager.jar”. Needless to mention, you need to have the Oracle Cloud Storage Service libraries referenced here, and a few more. I had these under my Library folder:

import java.io.*;
import java.util.*;
import oracle.cloud.storage.CloudStorage;
import oracle.cloud.storage.CloudStorageConfig;
import oracle.cloud.storage.CloudStorageFactory;
import oracle.cloud.storage.exception.SystemException;
import oracle.cloud.storage.model.Key;
import oracle.cloud.storage.model.StorageInputStream;

public class StorageService_Manager {
    public static Properties inParam;
    public static String storageUrl;
    public static String storageServiceName;
    public static String storageUserName;
    public static String storagePassword;
    public static String storageContainerName;
    public static String storageAction;
    public static String uploadFromDirectory;
    public static CloudStorage myConnection;

    public static void main(String[] args) {
        try {
            // load properties and set parameters
            initProperties();
            storageUrl = inParam.getProperty("STORAGE_SERVICE_URL").trim();
            storageServiceName = inParam.getProperty("STORAGE_SERVICE_NAME").trim();
            storageUserName = inParam.getProperty("STORAGE_SERVICE_USER_NAME").trim();
            storagePassword = inParam.getProperty("STORAGE_SERVICE_PASSWORD").trim();
            storageContainerName = inParam.getProperty("STORAGE_SERVICE_CONTAINER_NAME").trim();
            uploadFromDirectory = inParam.getProperty("UPLOAD_FILES_FROM_DIRECTORY").trim();
            storageAction = inParam.getProperty("ACTION").trim();
            // configure and connect to the cloud storage service
            CloudStorageConfig myConfig = new CloudStorageConfig();
            myConfig.setServiceName(storageServiceName)
                    .setUsername(storageUserName)
                    .setPassword(storagePassword.toCharArray())
                    .setServiceUrl(storageUrl);
            myConnection = CloudStorageFactory.getStorage(myConfig);
            // dispatch on the requested action
            if (storageAction.equalsIgnoreCase("LIST_CONTAINERS")) {
                listContainers();
            }
            // upload files
            if (storageAction.equalsIgnoreCase("UPLOAD_FILES")) {
                uploadFiles();
            }
            // list files
            if (storageAction.equalsIgnoreCase("LIST_FILES")) {
                listFiles();
            }
            // delete files
            if (storageAction.equalsIgnoreCase("DELETE_FILES")) {
                deleteFiles();
            }
            // create container
            if (storageAction.equalsIgnoreCase("CREATE_CONTAINER")) {
                createStorageContainer();
            }
            // rename files
            if (storageAction.equalsIgnoreCase("RENAME_FILES")) {
                renameFiles();
            }
            System.out.println("End ...");
        } catch (Exception e) {
            System.out.println("Error has occurred \n" + e);
        }
    }

    private static void createStorageContainer() {
        System.out.println("Creating container :" + storageContainerName);
        try {
            oracle.cloud.storage.model.Container c = myConnection.createContainer(storageContainerName);
        } catch (Exception e) {
            System.out.println("Error in creating container \n" + e);
        }
    }

    private static void renameFiles() {
        // there is no rename API; copy each object to a new key, then delete the old one
        System.out.println("Renaming files in container :" + storageContainerName);
        java.util.List<Key> cobjects = myConnection.listObjects(storageContainerName, null);
        for (int i = 0; i < cobjects.size(); i++) {
            System.out.println("Renaming :" + cobjects.get(i).getKey());
            StorageInputStream sins = myConnection.retrieveObject(storageContainerName, cobjects.get(i).getKey());
            myConnection.storeObject(storageContainerName, "done_" + cobjects.get(i).getKey(), "text/plain", sins);
            myConnection.deleteObject(storageContainerName, cobjects.get(i).getKey());
        }
    }

    private static void deleteFiles() {
        System.out.println("Deleting files in container :" + storageContainerName);
        try {
            java.util.List<Key> cobjects = myConnection.listObjects(storageContainerName, null);
            for (int i = 0; i < cobjects.size(); i++) {
                System.out.println("Deleting :" + cobjects.get(i).getKey());
                myConnection.deleteObject(storageContainerName, cobjects.get(i).getKey());
            }
        } catch (SystemException ex) {
            System.out.println("Error in deleting files in container :" + storageContainerName);
        }
    }

    private static void listFiles() {
        System.out.println("Listing files in container :" + storageContainerName);
        try {
            java.util.List<Key> cobjects = myConnection.listObjects(storageContainerName, null);
            for (int i = 0; i < cobjects.size(); i++) {
                System.out.println(cobjects.get(i) + " :" + cobjects.get(i).getKey());
            }
        } catch (SystemException ex) {
            System.out.println("Error in listing files in container :" + storageContainerName);
        }
    }

    private static void uploadFiles() {
        try {
            File f = new File(uploadFromDirectory);
            File files[] = f.listFiles();
            System.out.println("Uploading files to container :" + storageContainerName);
            for (File file : files) {
                try {
                    FileInputStream fis = new FileInputStream(file);
                    String fileName = file.getName();
                    System.out.println("Uploading :" + fileName);
                    myConnection.storeObject(storageContainerName, fileName, "text/plain", fis);
                } catch (FileNotFoundException ex) {
                    System.out.println("Error in uploading file :" + file.getName());
                }
            }
        } catch (NullPointerException ex) {
            System.out.println("Error in uploading files in container :" + "Upload file directory does not exist");
        } catch (Exception ex) {
            System.out.println("Error in uploading files in container :" + storageContainerName);
        }
    }

    private static void listContainers() {
        System.out.println("Listing Containers");
        try {
            List<oracle.cloud.storage.model.Container> containerList = myConnection.listContainers();
            for (int i = 0; i < containerList.size(); i++) {
                System.out.println(containerList.get(i) + " :" + containerList.get(i).getName());
            }
        } catch (Exception ex) {
            System.out.println("Error in listing containers");
        }
    }

    private static void initProperties() throws IOException {
        // read the input property file from the current directory
        inParam = new Properties();
        inParam.load(new FileInputStream("StorageService_Manager_Input.properties"));
    }
}

That’s it. Running it is fairly simple. Be in the directory where you have your JAR. Then run as:

java -cp .;%CLASS_PATH%;StorageService_Manager.jar StorageService_Manager >> storage_out.log

The output file “storage_out.log” records all that happens on the console.

CSV file generator for Oracle Sources

With the ORDER.SQL file (or, for that matter, the ORDER_PE.SQL file) at hand, the next step is to run those SQLs against your source database, set the #LAST_EXPORTED_DATE parameter with each run, and export the result set to a CSV file following OTBI-Enterprise’s rules and regulations. Doing all of that on a daily basis is a nightmare if done manually. This is yet another area that calls for an obvious automation. Here is a small and quick Java program that could potentially carry out the job and take you a step closer to complete automation.

Actually, two programs. The first one extends a few classes from the open source opencsv Java library to meet our needs (rules etc.). That one is called “MyResultSetHelperService.java”. The second one is the main program responsible for spitting out the CSVs, called “CSVLoader.java”. You need both of these together in your project; compile them to a jar called “CSVLoader.jar”.

I will provide the source code for both of these Java programs, but before that I’ll explain how this thing works. The utility expects a property file as an input, called “CSVLoader_Input.properties”. In it, you provide certain details about your CSV file generation, as follows:

SQL_FILE_NAME -> Name and path of your ORDER.SQL or ORDER_PE.SQL

SRC_DB_URL -> Standard JDBC URL pointing to your source database

Format is: jdbc:oracle:thin:@host:port:SID

SRC_DB_USER -> Source database username. Should have SELECT access.

SRC_DB_PSWD -> Source database password.

OUTPUT_DIR -> Name and path of a directory where the CSVs will be created

LAST_EXPORTED_DATE -> Last exported date. Format is: YYYY-MM-DD

FULL_LOAD -> Whether or not full load

N -> With this setting, the utility will set the date you provided in LAST_EXPORTED_DATE in the WHERE clause your SQL file has. Note that if you wish to extract since a certain date even for the first initial load, you could trick the utility by using this option and setting your initial date likewise. Needless to mention, with this setting, a valid date value is needed for the LAST_EXPORTED_DATE parameter.

Y -> With this setting, the utility will ignore all the WHERE clauses your SQL file has and extract all data. Whatever you provide in the LAST_EXPORTED_DATE parameter is ignored with this setting.

REMOVE_LINE_SEP -> Whether or not to remove line separator.

N -> Will not remove

Y -> Will remove. This is what you should set, since that is OTBI-Enterprise’s requirement.

Create and save such a property file by the name “CSVLoader_Input.properties”. I have used this to extract data for my first full load since the year 2010. Note that I have set the FULL_LOAD option to ‘N’ and have provided the value of LAST_EXPORTED_DATE as 2010-01-01.
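In plain text, such a property file could look like this (host, user and paths are placeholders; the date values mirror the scenario just described):

```properties
# CSVLoader_Input.properties -- sample values only
SQL_FILE_NAME=C:/Temp/ORDER.SQL
SRC_DB_URL=jdbc:oracle:thin:@yourdbhost:1521:ORCL
SRC_DB_USER=apps
SRC_DB_PSWD=yourPassword
OUTPUT_DIR=C:/Temp/UploadReady
LAST_EXPORTED_DATE=2010-01-01
FULL_LOAD=N
REMOVE_LINE_SEP=Y
```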

Following are the two JAVA programs you could use.


import java.io.IOException;
import java.io.Reader;
import java.sql.Clob;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.sql.Types;
import java.util.ArrayList;
import java.util.List;
import com.opencsv.ResultSetHelperService;

public class MyResultSetHelperService extends ResultSetHelperService {
    static final java.lang.String MY_DEFAULT_DATE_FORMAT = "yyyy-MM-dd";
    static final java.lang.String MY_DEFAULT_TIMESTAMP_FORMAT = "yyyy-MM-dd'T'HH:mm:ss";
    static final int NVARCHAR = -9;
    static final int NCHAR = -15;
    static final int LONGNVARCHAR = -16;
    static final int NCLOB = 2011;
    final String removeLineFeeds;

    public MyResultSetHelperService(final String removeLineFeeds) {
        this.removeLineFeeds = removeLineFeeds;
    }

    //overridden method for date formats
    public String[] getColumnValues(ResultSet rs) throws SQLException, IOException {
        return this.getColumnValues(rs, false, MY_DEFAULT_DATE_FORMAT, MY_DEFAULT_TIMESTAMP_FORMAT);
    }

    //overridden method for date formats
    public String[] getColumnValues(ResultSet rs, boolean trim) throws SQLException, IOException {
        return this.getColumnValues(rs, trim, MY_DEFAULT_DATE_FORMAT, MY_DEFAULT_TIMESTAMP_FORMAT);
    }

    //copied as-is from the super class so that the overridden getColumnValue method is used
    public String[] getColumnValues(ResultSet rs, boolean trim, String dateFormatString, String timeFormatString) throws SQLException, IOException {
        List<String> values = new ArrayList<>();
        ResultSetMetaData metadata = rs.getMetaData();
        for (int i = 0; i < metadata.getColumnCount(); i++) {
            values.add(getColumnValue(rs, metadata.getColumnType(i + 1), i + 1, trim, dateFormatString, timeFormatString));
        }
        String[] valueArray = new String[values.size()];
        return values.toArray(valueArray);
    }

    //overridden method to keep only the first 4000 chars of a CLOB
    private static String read(Clob c) throws SQLException, IOException {
        StringBuilder sb = new StringBuilder((int) c.length());
        Reader r = c.getCharacterStream();
        char[] cbuf = new char[CLOBBUFFERSIZE];
        int n;
        while ((n = r.read(cbuf, 0, cbuf.length)) != -1) {
            sb.append(cbuf, 0, n);
        }
        String value = sb.toString();
        if (value.length() > 4000) {
            value = value.substring(0, 4000);
        }
        return value;
    }

    //overridden method to trim new line characters in column values
    //and to add ROWID support
    public String getColumnValue(ResultSet rs, int colType, int colIndex, boolean trim, String dateFormatString, String timestampFormatString)
            throws SQLException, IOException {
        String value = "";
        switch (colType) {
        case Types.BIT:
        case Types.JAVA_OBJECT:
            value = handleObject(rs.getObject(colIndex));
            break;
        case Types.BOOLEAN:
            boolean b = rs.getBoolean(colIndex);
            value = Boolean.valueOf(b).toString();
            break;
        case NCLOB: // todo : use rs.getNClob
        case Types.CLOB:
            Clob c = rs.getClob(colIndex);
            if (c != null) {
                value = read(c);
                //code added to trim new line chars from column value
                value = removeLineBreaks(value);
            }
            break;
        case Types.BIGINT:
            value = handleLong(rs, colIndex);
            break;
        case Types.DECIMAL:
        case Types.DOUBLE:
        case Types.FLOAT:
        case Types.REAL:
        case Types.NUMERIC:
            value = handleBigDecimal(rs.getBigDecimal(colIndex));
            break;
        case Types.INTEGER:
        case Types.TINYINT:
        case Types.SMALLINT:
            value = handleInteger(rs, colIndex);
            break;
        case Types.DATE:
            value = handleDate(rs, colIndex, dateFormatString);
            break;
        case Types.TIME:
            value = handleTime(rs.getTime(colIndex));
            break;
        case Types.TIMESTAMP:
            value = handleTimestamp(rs.getTimestamp(colIndex), timestampFormatString);
            break;
        case NVARCHAR: // todo : use rs.getNString
        case NCHAR: // todo : use rs.getNString
        case LONGNVARCHAR: // todo : use rs.getNString
        case Types.LONGVARCHAR:
        case Types.VARCHAR:
        case Types.CHAR:
            String columnValue = rs.getString(colIndex);
            if (trim && columnValue != null) {
                value = columnValue.trim();
            } else {
                value = columnValue;
            }
            if (value != null) {
                //code added to trim new line chars from column value
                value = removeLineBreaks(value);
            }
            break;
        case Types.ROWID:
            value = rs.getString(colIndex);
            break;
        default:
            value = "";
        }
        if (value == null) {
            value = "";
        }
        return value;
    }

    // local method to trim newline chars from column value, honoring the REMOVE_LINE_SEP flag
    public String removeLineBreaks(String origStr) {
        if ("Y".equalsIgnoreCase(removeLineFeeds)) {
            return origStr.replaceAll("\\r\\n", " ").replaceAll("\\r", " ").replaceAll("\\n", " ");
        }
        return origStr;
    }
}


import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.FileReader;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.text.DateFormat;
import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.Properties;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import com.opencsv.CSVWriter;

public class CSVLoader {
    public static Properties inParam;

    public static void main(String[] args) {
        //defaults, overridden by the property file
        String inFileName = "C:/Temp/Order.sql";
        String srcDB_URL = "jdbc:oracle:thin:@abc.com:1521:orcl";
        String srcDB_User = "user";
        String srcDB_Pswd = "password";
        String outDir = "";
        String lastExportedDt = "2000-01-01";
        String fullLoadFlag = "Y";
        String outFileName = "";
        String removeLineSep = "Y";
        String line = null;
        String sqlStmt = "";
        String tblName = null;
        String loadDateTime = null;
        char sqlStarted = 'N';
        char sqlEnded = 'N';
        Connection conn = null;
        try {
            //load properties and set parameters
            initProperties();
            inFileName = inParam.getProperty("SQL_FILE_NAME").trim();
            srcDB_URL = inParam.getProperty("SRC_DB_URL").trim();
            srcDB_User = inParam.getProperty("SRC_DB_USER").trim();
            srcDB_Pswd = inParam.getProperty("SRC_DB_PSWD").trim();
            outDir = inParam.getProperty("OUTPUT_DIR").trim();
            lastExportedDt = inParam.getProperty("LAST_EXPORTED_DATE").trim();
            fullLoadFlag = inParam.getProperty("FULL_LOAD").trim();
            removeLineSep = inParam.getProperty("REMOVE_LINE_SEP").trim();
            System.out.println("\nOutput Directory:" + outDir);
            System.out.println("\nFull Load Flag:" + fullLoadFlag);
            System.out.println("\nRemove Line Separator Flag:" + removeLineSep);
            FileReader fileReader = new FileReader(inFileName);
            BufferedReader bufferedReader = new BufferedReader(fileReader);
            // set source DB connection
            DriverManager.registerDriver(new oracle.jdbc.driver.OracleDriver());
            conn = DriverManager.getConnection(srcDB_URL, srcDB_User, srcDB_Pswd);
            Statement statement = conn.createStatement();
            // Read the sql file, one SELECT statement at a time
            while ((line = bufferedReader.readLine()) != null) {
                line = line.trim();
                if (sqlStarted == 'N'
                        && (line.equals("SELECT") || line.startsWith("SELECT "))) {
                    sqlStarted = 'Y';
                }
                if (line.endsWith(";")) {
                    sqlEnded = 'Y';
                }
                if (sqlStarted == 'Y') {
                    sqlStmt = sqlStmt + " " + line;
                }
                if (sqlEnded == 'Y') {
                    //remove ;
                    sqlStmt = sqlStmt.replaceAll(";", "");
                    //replace LAST_EXPORTED_DATE
                    sqlStmt = sqlStmt.replaceAll("#LAST_EXPORTED_DATE", "TO_DATE('" + lastExportedDt + "', 'YYYY-MM-DD')");
                    System.out.println("\nsqlStmt :" + sqlStmt);
                    //get table name from the sql string
                    tblName = getTableName(sqlStmt);
                    System.out.println("tblName :" + tblName);
                    if (fullLoadFlag.equalsIgnoreCase("Y")) {
                        //full load: drop the WHERE clause and extract everything
                        sqlStmt = trimWhereClause(sqlStmt);
                        System.out.println("trimmed sqlStmt :" + sqlStmt);
                    }
                    try {
                        //execute the select sql
                        ResultSet resultData = statement.executeQuery(sqlStmt);
                        //get load date time
                        DateFormat dateFormat = new SimpleDateFormat("yyyyMMdd_HHmmss");
                        Calendar cal = Calendar.getInstance();
                        loadDateTime = dateFormat.format(cal.getTime());
                        //set output file name; primary-extract (ORDER_PE.SQL) runs
                        //produce .pecsv key files [this condition is reconstructed]
                        if (inFileName.toUpperCase().contains("_PE")) {
                            outFileName = outDir + "/" + "file_keys_" + tblName + "-" + loadDateTime + ".pecsv";
                        } else {
                            outFileName = outDir + "/" + "file_" + tblName + "-" + loadDateTime + ".csv";
                        }
                        System.out.println("Out FileName: " + outFileName);
                        //write to output csv file in UTF-8
                        CSVWriter writer = new CSVWriter(new OutputStreamWriter(new FileOutputStream(outFileName), "UTF-8"), ',');
                        writer.setResultService(new MyResultSetHelperService(removeLineSep));
                        writer.writeAll(resultData, true);
                        writer.close();
                    } catch (Exception e) {
                        System.out.println("Error in executing query: " + sqlStmt);
                        System.out.println("Error: " + e);
                    }
                    //reset for the next statement in the file
                    sqlEnded = 'N';
                    sqlStarted = 'N';
                    sqlStmt = "";
                }
            } //end while
            System.out.println("\n Source Extraction and CSV load done");
        } catch (Exception e) {
            System.out.println("Error" + e);
        } finally {
            try { if (null != conn) conn.close(); } catch (Exception e) {}
        }
    }

    public static String getTableName(String sqlStr) {
        String patternStr = "SELECT\\s(.*?)\\sFROM\\s"
                + "((?!WHERE).*?)"
                + "(?:WHERE\\s|\\Z)(.*)";
        String tmpStr = sqlStr.replaceAll(";", "").replaceAll("\\r\\n", " ").replaceAll("\\s+", " ").trim();
        Pattern pattern = Pattern.compile(patternStr);
        Matcher matcher = pattern.matcher(tmpStr);
        if (matcher.matches()) {
            String tableName = matcher.group(2);
            if (tableName.contains(" "))
                tableName = tableName.substring(0, tableName.indexOf(" "));
            return tableName;
        } else return null;
    }

    public static void initProperties() throws IOException {
        //read the input property file from the current directory
        inParam = new Properties();
        inParam.load(new FileInputStream("CSVLoader_Input.properties"));
    }

    public static String trimWhereClause(String sqlStr) {
        String tmpStr = sqlStr.replaceAll(";", "").replaceAll("\\r\\n", " ").replaceAll("\\s+", " ").trim();
        if (tmpStr.contains("WHERE "))
            tmpStr = tmpStr.substring(0, tmpStr.indexOf("WHERE "));
        return tmpStr;
    }
}


That’s it. Running it is fairly simple. Be in the directory where you have your JAR. Then run as:

java -cp .;%CLASS_PATH%;CSVLoader.jar CSVLoader >> out.log

The output file “out.log” records all that happens on the console, in case you care. All that you need (the CSV files) will now be there in the directory as specified in OUTPUT_DIR parameter.

Note that this utility is not multi-threaded. If you are using it, I’d suggest you improve on it and make it multi-threaded. Exporting data from your source serially may take a whole lot of time.
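As a hint of what that could look like, here is a minimal, self-contained sketch of fanning the per-statement extracts out to a fixed thread pool. The extractOne method below is a hypothetical stand-in for “run one SQL and write one CSV” – in the real utility it would wrap the executeQuery/CSVWriter portion of CSVLoader, and each thread should use its own JDBC connection, since connections and statements are not safe to share across threads.

```java
import java.util.*;
import java.util.concurrent.*;

public class ParallelExtract {
    // Hypothetical stand-in: in the real utility this would execute one SQL
    // against the source and write one CSV, returning the output file name.
    static String extractOne(String tableName) {
        return "file_" + tableName + ".csv";
    }

    // Submits one extract task per SQL statement to a fixed-size pool and
    // collects the output file names in submission order.
    public static List<String> runAll(List<String> tableNames, int threads) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        List<Future<String>> futures = new ArrayList<>();
        for (String t : tableNames) {
            futures.add(pool.submit(() -> extractOne(t)));
        }
        List<String> outFiles = new ArrayList<>();
        for (Future<String> f : futures) {
            outFiles.add(f.get()); // blocks; rethrows any per-extract failure
        }
        pool.shutdown();
        return outFiles;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runAll(Arrays.asList("PER_ALL_PEOPLE_F", "PER_JOBS"), 4));
    }
}
```

A pool of four to eight threads is usually a sensible starting point; going much wider mostly shifts the bottleneck to the source database.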
