Author: Thejas B Shetty (Oracle SSI)
The following article represents the views of the author only & Oracle does not endorse it.
It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle.
In the on-premise version of Hyperion Planning, it is possible to use the pre-packaged ODI Knowledge Modules (KMs) for Essbase to integrate data with the plan types. These Knowledge Modules use the Essbase Java APIs to connect and communicate directly with the underlying Essbase cubes (plan types), with additional options to execute calculation scripts, MaxL commands, etc.
In the cloud version of Hyperion Planning, namely Planning and Budgeting Cloud Service (PBCS) / Enterprise Planning and Budgeting Cloud Service (EPBCS), customers do not have direct access to the underlying Essbase cube and hence cannot communicate or integrate with it directly.
Oracle has no plans to provide native access to the underlying Essbase cubes of PBCS/EPBCS applications, and hence recommends alternative methods like EPM Automate or the REST APIs to communicate with PBCS application components and perform the various integration operations. These integration operations are pre-defined as Jobs within the Simplified Interface, which are then invoked either from EPM Automate or through direct REST API calls.
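To make this concrete, here is a minimal sketch of how a pre-defined job can be launched through the Planning REST jobs endpoint. The endpoint shape follows the documented EPM Cloud REST API, but the pod URL, application name, job name, file name, and credentials below are all placeholders:

```python
import base64
import json

def build_job_request(service_url, app_name, job_type, job_name, parameters=None):
    # POST body and URL for launching a pre-defined job
    # (POST {service_url}/HyperionPlanning/rest/v3/applications/{app}/jobs)
    url = "%s/HyperionPlanning/rest/v3/applications/%s/jobs" % (service_url, app_name)
    body = {"jobType": job_type, "jobName": job_name}
    if parameters:
        body["parameters"] = parameters
    return url, json.dumps(body)

def basic_auth_header(domain, user, password):
    # PBCS expects "domain.username" as the user identity (see the topology
    # setup later in this article)
    token = base64.b64encode(("%s.%s:%s" % (domain, user, password)).encode()).decode()
    return {"Authorization": "Basic " + token, "Content-Type": "application/json"}

# Example: launch a data-import job named "LoadActuals" (placeholder names)
url, payload = build_job_request("https://mypod.pbcs.us2.oraclecloud.com",
                                 "Vision", "IMPORT_DATA", "LoadActuals",
                                 {"importFileName": "data_12345.txt"})
```

EPM Automate is essentially a wrapper around these same calls, which is why either route ends up invoking the same pre-defined Jobs.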
Many customers who have extensively used ODI in the past to integrate with Hyperion Planning applications would still like to use ODI with PBCS the same way, without investing significant time and effort in building new interfaces using FDMEE/Data Management. It would also help customers migrating their on-premise Planning applications to PBCS/EPBCS to move their historical data from the source Essbase cubes to the new PBCS/EPBCS application. Any transformation or re-mapping of data can then be managed in the ODI layer instead of manipulating the data in either the source Planning application or the new PBCS/EPBCS application.
However, the lack of direct access to the underlying Essbase cube of a PBCS/EPBCS application makes it impossible to use the existing Essbase Knowledge Modules. It forces the ODI developer either to invoke an additional command-line utility, EPM Automate, from ODI OS commands/procedures, or to learn a new technology, the REST APIs, and leverage it to perform the various jobs within PBCS/EPBCS. Neither approach fits in a single ODI Mapping; both require sequencing multiple components in an ODI package.
I have created ODI KMs for PBCS/EPBCS in ODI 12c that utilize the PBCS REST APIs. The KM options are similar to the on-premise Essbase KM options, and the KM code, written in Jython, invokes REST APIs against PBCS to perform the equivalent jobs. Additional KM options specific to PBCS have been added to make the integration process simpler, yet robust.
This way, any data in the source, regardless of technology or platform, can be integrated with PBCS/EPBCS using the data stores on an ODI Mapping, treating the target as if it were an on-premise Essbase data store. For an ODI developer, the KM code decouples the underlying technology (REST APIs) from the standard knowledge and skills needed to load data into PBCS, letting the developer concentrate only on the core ODI components and PBCS functionality. All other processes, like generating the file, uploading the data file to the cloud, clearing data, etc., are done within the KM code by invoking REST APIs in the background. The required end-point URLs for the REST API calls are also hardcoded within the KM, so the ODI developer does not need to know them.
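As a rough illustration of what happens behind the scenes, the load can be thought of as an ordered sequence of REST calls: upload the generated file, optionally clear the cube, then run the import job. The sketch below only builds the endpoints; the function and job names are placeholders, and the interop API version string (11.1.2.3.600) is the commonly documented one but may differ per pod:

```python
def km_load_sequence(service_url, app, data_file, import_job, clear_job=None):
    # Returns the ordered list of (method, endpoint) pairs the load would issue
    steps = []
    # 1. Upload the generated data file via the LCM interop API
    steps.append(("POST",
                  "%s/interop/rest/11.1.2.3.600/applicationsnapshots/%s/contents"
                  % (service_url, data_file)))
    jobs = "%s/HyperionPlanning/rest/v3/applications/%s/jobs" % (service_url, app)
    # 2. Optionally run a CLEAR_CUBE job first (jobName=clear_job in the body)
    if clear_job:
        steps.append(("POST", jobs))
    # 3. Run the pre-defined IMPORT_DATA job against the uploaded file
    steps.append(("POST", jobs))
    return steps
```

The KM wraps this whole sequence, which is why a single IKM on the mapping is enough where an EPM Automate approach would need several package steps.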
These KMs are presently not officially supported by Oracle, and hence no SRs can be raised for them. However, by sharing them with a wider community, I encourage people to use them, modify them, and contribute their valuable input and feedback. You can find these KMs in the ODI Exchange within ODI Studio as well as on this website: link.
No pre-requisite .jar files are required to use the PBCS Knowledge Modules. All the Java libraries used within the KM code should already be present in the ODI agent/studio from a default installation of ODI.
However, a new ODI Technology named EPM Cloud Service needs to be added to the ODI repository before the KMs can be used. The ODI technology can be downloaded from here.
Make sure that the machine where the ODI Agent (or ODI Studio) is installed has network connectivity to the PBCS application URL, and that no firewall restrictions block access to it.
Create a new Data Server under the EPM Cloud Service technology.
Choose a name for the Data Server.
Update the Topology for the Data Server as below, entering the PBCS cloud URL in the ServiceURL (Data Server) field.
Please note that this Service URL should end with oraclecloud.com and must not be suffixed with anything else, including trailing special characters or spaces.
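A quick sanity check for this rule can be expressed as a few lines of Python (the pod URL below is a placeholder; this check applies to cloud pods, not the local VM case noted next):

```python
def valid_cloud_service_url(url):
    # Must end with oraclecloud.com, with no trailing slash, space,
    # port, or other suffix after it
    return url == url.strip() and url.endswith("oraclecloud.com")

valid_cloud_service_url("https://mypod.pbcs.us2.oraclecloud.com")    # OK
valid_cloud_service_url("https://mypod.pbcs.us2.oraclecloud.com/")   # rejected: trailing slash
```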
*Oracle employees using the PBCS/EPBCS VM on their laptops can use the Service URL as http://192.168.56.101:9000 (Please modify the IP address as per your VM configuration)
Update the Username / Password used for connecting to PBCS/EPBCS application.
The Username should be of the format: domain.username for all PBCS/EPBCS GSE/Test/Production pods.
*Oracle employees using the PBCS/EPBCS VM on their laptops can enter just the username without the domain, e.g. epm_default_cloud_admin
Create a physical schema, under this Data Server, to represent the ApplicationType and ApplicationName.
PBCS should be used as the Application Type for both PBCS and EPBCS applications. The KM has not been tested with FCCS/TRCS and hence may or may not work with those applications.
Use the RKM EPM Cloud Service-PBCS Knowledge Module to reverse engineer the PBCS/EPBCS Application Datastores into your ODI Models, pointing to an EPM Cloud Service type of logical schema.
A separate datastore is fetched for each of the BSO/ASO plan types within the PBCS/EPBCS application:
·<AppName>_<CubeName>_Data: Used for loading/extracting data into/from PBCS/EPBCS.
The datastore will have each of the dimensions, excluding the attribute dimensions, as ODI column attributes. The numeric column attribute for the data amount is added at the end.
Use any datastore that contains source data on the left-hand side of an ODI Mapping, and connect it with a PBCS/EPBCS datastore on the right-hand side. If the source datastore is an RDBMS technology (Oracle / MS SQL Server / Generic SQL), no staging area is required. To load data from other technologies, including text files, use an appropriate LKM (e.g. LKM File to SQL) to extract data into a SQL staging area (e.g. SUNOPSIS_MEMORY_ENGINE, an Oracle staging area, etc.). The source datastore need not be in the same format as the PBCS/EPBCS one and can have different column names or a different column count; the IKM applies the mappings defined before loading data into PBCS/EPBCS.
Map the target expressions from the source datastore.
The below image shows an example of how a text file can be integrated with PBCS/EPBCS using an Oracle Database as staging area.
Click on the PBCS/EPBCS datastore in Target_Group on the Physical tab and choose the IKM as IKM SQL to EPM Cloud Service-PBCS(DATA).
The IKM Options are categorized into 3 groups:
The name of the job of type IMPORT_DATA defined in the PBCS UI. This job should have its Source Type set to Essbase and the same cube name as the plan type of the target datastore. The Source File name can be any arbitrary name ending with a .txt extension; the actual file name used for the data import is generated dynamically by ODI, using the ODI session number as a suffix, and overrides any static filename already defined in the job.
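The dynamic naming can be sketched as below. The exact naming convention is my illustration of the "session number as suffix" behavior described above, not the KM's verbatim code:

```python
def import_file_name(base_name, session_no):
    # Suffix the ODI session number so each load uses a unique file name,
    # e.g. "pbcsdata.txt" for session 12345 becomes "pbcsdata_12345.txt"
    stem = base_name[:-4] if base_name.lower().endswith(".txt") else base_name
    return "%s_%s.txt" % (stem, session_no)
```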
Flag to indicate whether logging should be performed during the load process. If set to Yes, the log is written to the file specified by the LOG_FILE_NAME option.
The fully qualified name of the file to which the log is written. Do not use backslashes (\) in the filename.
Boolean flag to indicate whether to run a CLEAR_CUBE job on the cloud before data is loaded. This requires a job of type CLEAR_CUBE to be set up once in the PBCS UI, and the name of that job must be referenced in the CLEAR_CUBE_JOB_NAME option. The definition of the clear-cube operation, with its various options as shown below, lives in the PBCS UI and cannot be controlled from the ODI options.
Name of the CLEAR_CUBE job defined in the PBCS UI.
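Because the clear scope lives entirely in the job definition, the REST request the KM needs to send is tiny: just the job type and the job name. A hedged sketch of that body (the endpoint shape follows the EPM Cloud jobs API shown earlier; "ClearPlan1" is a placeholder):

```python
import json

def clear_cube_payload(job_name):
    # JSON body for launching a pre-defined CLEAR_CUBE job; the clear
    # scope itself is configured in the job definition in the PBCS UI
    return json.dumps({"jobType": "CLEAR_CUBE", "jobName": job_name})

clear_cube_payload("ClearPlan1")
```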
Determines whether to run a business rule before the data load. This is useful if you want to selectively clear an intersection of the cube by passing run-time parameters; the CLEAR_CUBE job does not let you choose a selective slice of the cube to clear.
Name of the pre-load business rule as defined/deployed from the Calc Manager.
Run time parameters for pre-load business rule.
In the case of multiple parameters, separate them with commas.
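One way the comma-delimited option string can map onto a rule-launch request is sketched below. The Name=Value parsing and the RULES job type reflect the EPM Cloud jobs API, but the KM's exact parsing may differ, and the rule and member names are placeholders:

```python
def rule_payload(rule_name, param_string=""):
    # Turn "Entity=E100,Scenario=Actual" into the parameters map of a
    # RULES job request; an empty string yields a request with no parameters
    params = {}
    for pair in filter(None, (p.strip() for p in param_string.split(","))):
        name, _, value = pair.partition("=")
        params[name.strip()] = value.strip()
    body = {"jobType": "RULES", "jobName": rule_name}
    if params:
        body["parameters"] = params
    return body

rule_payload("ClearRegion", "Entity=E100,Scenario=Actual")
```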
Determines whether to run a business rule after the data load. This is useful if you intend to aggregate or copy data after it is loaded, by passing run-time parameters.
Name of the post-load business rule as defined/deployed from the Calc Manager.
Run time parameters for post-load business rule.
In the case of multiple parameters, separate them with commas.
Sample Process Logging:
This concludes the initial setup and gets you up and running with the ODI 12c Knowledge Module for loading data into PBCS/EPBCS. In the next blog, I will cover more Knowledge Modules, including features to extract data.
For any troubleshooting /queries/suggestions, do not hesitate to contact me: firstname.lastname@example.org