Wednesday Jul 09, 2014

ODI 11g - HDFS Files to Oracle, community post from ToadWorld

There is a new tutorial on using ODI to move HDFS files into an Oracle database using OLH-OSCH from Deepak Vohra on the ToadWorld blog. This article covers all the setup required in great detail and will be very helpful if you're planning on integrating with HDFS files.

http://www.toadworld.com/platforms/oracle/w/wiki/10957.integrating-hdfs-file-data-with-oracle-database-11g-in-oracle-data-integrator-11g.aspx

Thursday Jul 03, 2014

ODI 11g - Hive to Oracle with OLH-OSCH, community post from ToadWorld

There is a new blog post on using ODI to move data from Hive to Oracle using OLH-OSCH from Deepak Vohra on the ToadWorld blog. It covers everything from installation to the configuration of paths and configuration files. So if you are going down this route it's worth checking out; he goes into great detail on everything that needs to be done and set up.

http://www.toadworld.com/platforms/oracle/w/wiki/10955.integrating-apache-hive-table-data-with-oracle-database-11g-in-oracle-data-integrator-11g.aspx

Big thanks to Deepak for sharing his experiences and providing the blog to get folks up and running.

Saturday Jun 21, 2014

Oracle Data Integration Continues to Shine on Oracle Exadata

In previous posts I have shared with you how Oracle offers the best data integration for Oracle Exadata. Oracle’s key differentiation in providing solutions that are engineered to work together applies to critical add-on technologies such as data integration and replication as well. Our customers’ successes with Oracle GoldenGate and Oracle Data Integrator are a continuous flow of confirmation that Oracle’s differentiated offering brings real results, simplifies IT, and enables innovation.

Today, I have two brand new customer examples to share with you. 11ST uses Oracle GoldenGate for real-time data integration into the data warehouse, and Abu Dhabi National Oil Company Distribution uses Oracle GoldenGate for achieving high availability for its private cloud environment running on Exadata.

  • 11ST is Korea’s largest online marketplace, where customers and sellers can freely trade retail goods ranging from clothing and food to electronic equipment. With over 10x turnover growth over the last 4 years, the company faced challenges in delivering high-speed transaction processing and a good customer experience. So the company decided to implement Oracle Exadata as its core database platform to accelerate data-processing speed for online transactions, and for the data warehouse to improve analytical capabilities with a solution that can handle intensive analytical workloads in a scalable fashion. With Exadata, the company increased storage capacity by up to 5x and decreased storage costs, while improving customer satisfaction by accelerating data-processing speed for online transactions.
11ST chose Oracle GoldenGate as the real-time data integration solution for the data warehouse running on Exadata. Their goal is to enable staff access to more timely data throughout the day such as online sales tracking and earnings by period. Oracle GoldenGate supports fast decision-making by completing daily batch loads 4.5x faster—in two hours instead of nine hours.

The key business transformation is seen in their customers’ experience. With real-time data synchronization between the enterprise data warehouse and marketing systems, the company can react quickly to changing customer tastes, such as for women’s clothing and jewelry, to improve its offering and customer service. Using timely data and advanced analytics, the company runs personalized marketing programs, such as cosmetics or shoe promotions, through mobile, private-brand services, which was not possible with the legacy system. You can read more about the 11ST story here.

Especially for data warehousing solutions, many customers leverage the comprehensive and integrated Oracle Data Integration platform for Exadata. Below is a diagram that shows how customers can achieve an end-to-end solution.

Avea and Paychex are two other good examples of using Oracle Data Integration in Exadata environments for improved analytical capabilities.

  • Abu Dhabi National Oil Company Distribution (ADNOC Distribution) is a United Arab Emirates (UAE) government-owned company that specializes in marketing and distributing petroleum products within the country. The organization decided to implement Oracle Exadata Database Machine to consolidate five Oracle Databases—supporting enterprise resource planning (ERP), online transaction processing (OLTP) applications, and business intelligence (BI) applications—onto a single database platform within a private database cloud.

As this private cloud environment supports the backbone of a business with 250 million financial transactions annually, ADNOC Distribution cannot afford even a minute's downtime. To ensure 24/7 uptime and meet service-level agreements, the company uses Oracle Exadata with Oracle Real Application Clusters and Oracle Automatic Storage Management. Using Oracle GoldenGate, ADNOC Distribution maximized system availability and improved its ability to integrate Oracle Exadata with non-Oracle platforms, such as Microsoft SQL Server or IBM DB2. You can read more about the ADNOC Distribution story here.

In a recent webcast we discussed Oracle GoldenGate's offering for consolidation into a private cloud. I also recommend watching the webcast Zero Downtime Consolidation to Oracle Database 12c with Oracle GoldenGate 12c on demand to learn how to use Oracle GoldenGate for maximizing availability in private cloud environments.

Friday May 30, 2014

Looking for Cutting-Edge Data Integration: 2014 Excellence Awards

2014 Oracle Excellence Awards Data Integration

It is nomination time!!!

This year's Oracle Fusion Middleware Excellence Awards will honor customers and partners who are creatively using various products across Oracle Fusion Middleware. Think you have something unique and innovative with one or a few of our Oracle Data Integration products?

We would love to hear from you! Please submit today.

The deadline for the nomination is June 20, 2014.

What you win:

  • An Oracle Fusion Middleware Innovation trophy

  • One free pass to Oracle OpenWorld 2014

  • Priority consideration for placement in Profit magazine, Oracle Magazine, or other Oracle publications & press release

  • Oracle Fusion Middleware Innovation logo for inclusion on your own Website and/or press release

Let us reminisce a little…

For details on the 2013 Data Integration Winners:

Royal Bank of Scotland’s Market and International Banking and The Yalumba Wine Company, check out this blog post: 2013 Oracle Excellence Awards for Fusion Middleware Innovation… and the Winners for Data Integration are…

and for details on the 2012 Data Integration Winners:

Raymond James and Morrisons, check out this blog post: And the Winners of Fusion Middleware Innovation Awards in Data Integration are… 

Now to view the 2013 Winners (for all categories).

We hope to honor you!

Here's what you need to do: 

Click here to submit your nomination today.  And just a reminder: the deadline to submit a nomination is 5pm Pacific Time on June 20, 2014.

Thursday May 15, 2014

Oracle Data Integrator Webcast Archives

Have you missed some of our Oracle Data Integrator (ODI) Product Management Webcasts?

Don’t worry – we do record and post these webcasts for your viewing pleasure. Recent topics include Oracle Data Integrator (ODI) and Oracle GoldenGate Integration, BigData Lite, the Oracle Warehouse Builder (OWB) Migration Utility, and the Management Pack for Oracle Data Integrator (ODI), along with various other topics focused on Oracle Data Integrator (ODI) 12c. We run these webcasts monthly, so please check back regularly.

You can find the Oracle Data Integrator (ODI) Webcast Archives here.

And for a bit more detail:

The webcasts are publicized on the ODI OTN Forum if you want to view them live.  You will find the announcement at the top of the page, with the title and details for the upcoming webcast.

Thank you – and happy listening!

Monday May 12, 2014

Check it out – BI Apps 11.1.1.8.1 is now available!

As of May 8, 2014, Oracle Business Intelligence (BI) Applications 11.1.1.8.1 is available on the Oracle Software Delivery Cloud (eDelivery), and on the Oracle BI Applications OTN page. This is the second major release on the 11g code line leveraging the power of Oracle Data Integrator (ODI), and certified with the latest version of Oracle BI Foundation 11.1.1.7. For more details on this release and what’s new – check it out!

Friday May 02, 2014

3 Key Practices For Using Big Data Effectively for Enhanced Customer Experience

As organizations focus on differentiating their offerings via a superior customer experience, they are looking into ways to leverage big data in this effort. A couple of weeks ago my colleague Pete Schutt and I hosted a webcast on this very topic: Turning Big Data into Real-Time Action for a Greater Customer Experience.

In this webcast we talked about 3 key practices to make the most out of big data for improving customer experience, which are:

  1. Know your customer by leveraging big data: Leverage all relevant data (internal and external; structured, semi-structured, and unstructured) to understand and predict customers’ needs and preferences accurately.
  2. Capture, analyze, and act on data fast to create value: Achieve accurate insight and take the right action fast so your action is still relevant to the customer’s situation.
  3. Empower employees and systems with insight and smarter decisions: In this step you ensure that the capability to act right and fast is not limited to a few in the organization, but extends to everyone and every system that interacts with and influences the customers’ experience.


After explaining why these practices are critical to improving customer experience, we discussed Oracle’s complete big data analytics and management platform, as well as the specific products and architectural approaches to execute on these 3 key areas. We focused particularly on data integration for fast and timely data acquisition and business analytics for real-time insight and action, and how they fit together in a real-time analytics architecture.

You can watch this webcast now on demand via the link below:

Turning Big Data into Real-Time Action for a Greater Customer Experience

In this webcast we received many great questions and I have provided below a few of them along with the answers.

Is real-time action related to the Internet of Things?

Yes, more physical things will be connected to the internet, often wirelessly with RFID tags or other sensors and Java, to record where they are and what they are doing (or not doing). The IoT becomes more practical by automating the information process from capture to analysis to appropriate and immediate action.

What does Oracle have for real-time mobile analytics?

Oracle BI Mobile App Designer empowers business users to easily create interactive analytical applications on any device without writing a single line of code, and to take action and respond to events in the context of their day-to-day business activities.

Can these real-time systems be managed by business users?

Yes, you need the agility for business owners to be able to respond, experiment, and adapt in real time as the environment or consumer behavior changes. The systems have to be intuitive enough for users with the business content and context to easily visualize, understand, and change the patterns they're looking at and the rules that are being enforced.

Can the real-time systems use other statistical models or algorithms?

Yes. Oracle Advanced Analytics offers an enterprise version of R, and Oracle RTD can source and publish scores from other advanced analytical models such as R, SAS, or SPSS.

Where do we get more information about ODI for big data?

You can start with the Oracle Data Integrator Application Adapter for Hadoop. Also take a look at the Oracle BigDataLite Virtual Machine, a pre-built environment to get you started, reflecting the core software of Oracle's Big Data Appliance 2.4. BigDataLite is a VirtualBox VM that contains a fully configured Cloudera Hadoop distribution CDH 4.5, an Oracle Database 12c, Oracle's Big Data Connectors, Oracle Data Integrator 12.1.2, and other software. You can use this environment to see ODI 12c in action integrating big data with Oracle Database using ODI's declarative graphical design, efficient E-LT loads, and Knowledge Modules designed to optimize big data integration.

For GoldenGate, can a target be something other than a database, e.g. queue?

Yes, GoldenGate can deliver database changes into JMS message queues and topics, as well as in flat-file format. The Oracle GoldenGate Application Adapters would need to be used for those use cases. For low-impact real-time data integration into Hadoop systems, customers will need to use the Java Adapter within the same GoldenGate Application Adapters license as well.
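As an illustrative sketch only (the process, file, and queue names here are hypothetical, and the exact property keys are version-dependent, so consult the Application Adapters documentation), a capture-side process can hand changes to the bundled Java Adapter, which then pushes them to a JMS destination:

```
-- jmsext.prm: Extract hands each change record to the Java user exit (names hypothetical)
EXTRACT jmsext
SOURCEDEFS ./dirdef/source.def
CUSEREXIT ./libggjava_ue.so CUSEREXIT PASSTHRU INCLUDEUPDATEBEFORES
TABLE SALES.ORDERS;

# jmsext.properties: route the records to a JMS queue via a configured handler
gg.handlerlist=jmsq
gg.handler.jmsq.type=jms
gg.handler.jmsq.destination=queue/oggChanges
gg.handler.jmsq.format=xml
```

The same adapter framework is what the flat-file writer uses; only the handler configuration changes.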

What other data warehouses does Oracle support for real-time data integration?

Oracle's data integration offering is heterogeneous for both sources and targets. Both Oracle Data Integrator and Oracle GoldenGate work with non-Oracle data warehouses including Teradata, DB2, Netezza, and Greenplum.

I invite you to watch this webcast on demand to hear the details of our solution discussion and the Q&A with the audience. For more information on big data integration and analytics, you can review Bridging Two Worlds: Big Data and Enterprise Data and Big Data @ Work: Turning Customer Interactions into Opportunities.



Tuesday Apr 29, 2014

What's New in Oracle GoldenGate 12c for DB2 iSeries

Oracle GoldenGate 12c (12.1.2.0.1) for IBM DB2 iSeries was released on February 15, 2014. The new version delivers the following new features: 

  • Native Delivery (Replicat): this new feature allows the user to install Oracle GoldenGate on the IBM i server and deliver the data directly to the IBM DB2 for i database. In previous releases, users had to install Oracle GoldenGate on a remote server and apply the data to the IBM DB2 for i database via ODBC. Native delivery not only improves performance but also supports the GRAPHIC, VARGRAPHIC and DBCLOB datatypes, which are not supported in the ODBC remote delivery for IBM DB2 for i.
  • Schema Wildcarding: as with other databases that Oracle GoldenGate supports, schema wildcarding is now available for the IBM DB2 for i database as well.
  • Auto discard: the discard file is now created by default.
  • Coordinated Delivery (Replicat): coordinated delivery is supported in the native delivery (Replicat).
  • BATCHSQL: supported on IBM i 7.1 and higher versions.
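For illustration, a minimal native-delivery Replicat parameter file using the new schema wildcarding might look like the following. All names here are hypothetical, and the exact connection syntax should be verified against the IBM DB2 for i installation guide:

```
-- rfin.prm: Replicat running natively on the IBM i server (no ODBC hop)
REPLICAT rfin
TARGETDB FINDB, USERID ogguser, PASSWORD **********
-- a discard file is now created by default ("auto discard"), so this line is optional
DISCARDFILE ./dirrpt/rfin.dsc, APPEND
-- schema wildcarding, new for IBM DB2 for i in this release
MAP FINLIB.*, TARGET FINLIB.*;
```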

Friday Apr 25, 2014

What's New in Oracle GoldenGate 12c for Teradata

Oracle GoldenGate 12c (12.1.2.0.1) for Teradata was released on April 24, 2014; it is currently available for download at eDelivery (https://edelivery.oracle.com). In this release, the following new features are available:

  • Capture and delivery support for Teradata 14.10. The support is based on Teradata Access Module (TAM) 13.10 with no support for the capture of the NUMBER or VARRAY datatypes.
  • Delivery support for Teradata 15.0.
  • Coordinated delivery (replicat): provides the ability to spawn multiple Oracle GoldenGate replicat processes from one parameter file.
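As a sketch of what coordinated delivery looks like in practice (the object and process names below are made up, and the exact syntax should be checked against the GoldenGate 12c reference), the Replicat is added with the COORDINATED option and work is then partitioned across threads within a single parameter file:

```
-- GGSCI: register a coordinated Replicat with up to 5 apply threads
ADD REPLICAT rtd, COORDINATED MAXTHREADS 5, EXTTRAIL ./dirdat/td

-- rtd.prm: one parameter file spawns and feeds all threads
REPLICAT rtd
TARGETDB mytddsn, USERID ogguser, PASSWORD **********
-- keep one high-volume table on its own thread, spread the rest across a range
MAP SRC.ORDERS, TARGET TGT.ORDERS, THREAD (1);
MAP SRC.*, TARGET TGT.*, THREADRANGE (2-5);
```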

Oracle GoldenGate for Teradata’s statement of direction has been updated in the Oracle GoldenGate Statement of Direction document. 

The Oracle GoldenGate for Teradata Best Practice Document is updated at: 

  • Oracle GoldenGate Best Practice: Configuring Oracle GoldenGate for Teradata Databases (Doc ID 1323119.1)

For more information on the coordinated delivery support, please refer to the following blogs:


Long Running Jobs in Oracle Business Intelligence Applications (OBIA) and Recovery from Failures

Written by Jayant Mahto, Oracle Data Integrator Product Management

In Oracle Business Intelligence Applications 11.1.1.7.1 (OBIA), the Data Warehouse load is performed using Oracle Data Integrator (ODI). In ODI, using packages and load plans, one can create quite a complex load job that kicks off many scenarios in a coordinated fashion. Such a complex load job may run for a long time, may fail before completing the entire job successfully, and will then require restarts to recover from the failure and complete.

This blog uses the complex load plan defined in Load Plan for Oracle Business Intelligence Applications 11.1.1.7.1 (OBIA) to illustrate the method of recovery from failures. Similar methods can be used for the recovery of complex load plans defined independently in Oracle Data Integrator (ODI). Note that this post does not go into the details of troubleshooting a failed load plan; it only discusses the different restart parameters that affect the behavior of a restarted job.

The failures can happen due to the following reasons:

  • Access failure – source/target database down, network failure, etc.
  • Agent failure.
  • Problem with the database – such as running out of space or another DB-related issue.
  • Data-related failure – exceptions not handled gracefully, such as a null in a not-null column.

It is important to find out the reason for the failure and address it before attempting to restart the load plan; otherwise the same failure may happen again. In order to recover from the failure successfully, the restart parameters in the load plan steps need to be selected carefully. These parameters are selected by the developers at design time of the load plan. The goal is to make the restarts robust enough that an administrator can restart the load plan without knowing the details of the failed steps. This is why it is the developer’s responsibility to select restart parameters for the load plan steps in a way that guarantees the correct set of steps will be re-run during restart, so that data integrity is maintained.

In the case of OBIA, the generated load plans have appropriate restart parameters for the out-of-the-box steps. If you are adding custom steps, then you need to choose similar restart parameters for them.

Now let us look at a typical load plan and the restart parameters at various steps.

Restart of a serial load plan step:


The SDE Dimension Group step highlighted above is a serial step. Let us say the load plan failed when running the 3 SDE Dims GEO_DIM step. Since this is a serial step and it has been set to “Restart from Failure”, the load plan on restart would start from 3 SDE Dims GEO_DIM only and not run the 3 SDE Dims USER_DIM step again. This parameter is widely used in the OBIA serial steps.

The other restart parameter for a serial step is “Restart all children”. This causes all the child steps to be re-run during restart, even if only one failed and the others succeeded. This parameter can be useful in some cases, and developers decide when to apply it.

Restart of a parallel load plan step:


The Workforce Dependant Facts step highlighted above is a parallel step with the restart parameter set to “Restart from failed children”. It means all 5 parallel steps under it would be kicked off in parallel (subject to free sessions being available). Now, amongst those 5 steps, let us say 2 of them completed (indicated by the green boxes above) and then the load plan failed. When the load plan is restarted, all the steps that did not complete or failed will be started again (in this example Learning Enrollment Fact, Payroll Facts and Recruitment Facts). This parameter is widely used in the OBIA parallel steps.

The other restart parameter for a parallel step is “Restart all children”. This causes all the child steps to be re-run during restart, even if only one failed and the others succeeded. This parameter can be useful in some cases, and developers decide when to apply it.

Restart of the scenario session:

At the lowest level of any load plan are the scenario steps. While the parent steps (serial or parallel) are used to set dependencies, the scenario steps are what finally load the tables. A scenario step in turn can have one or more steps (corresponding to the number of steps inside the package).

It is important to understand the structure of a session that gets created for the execution of a scenario step to understand the failure points and how the restart takes place.

The following diagram illustrates different components in a session:


The restart parameters for the scenario steps in the load plan are:

  • Restart from a new session – creates a new session for the failed scenario during restart and executes all the steps again.
  • Restart from a failed task – reuses the old session for the failed scenario during restart and starts from the failed task.
  • Restart from a failed step – reuses the old session for the failed scenario during restart and re-executes all the tasks in the failed step. This is the most common parameter used by OBIA and is illustrated below.


In the above example, scenario step 2 failed when running. It internally has 3 steps (all under the same session in the Operator log, but identified with different step numbers 0, 1, 2 in this case). As per the setting corresponding to the OBIA standard, the scenario would execute from the failed step, which is step number 2, Table_Maint_Proc (from substep 3 Initialize Variables onwards, as shown in the diagram).

Note that the successful tasks such as “3 – Procedure – TABLE_MAINT_PROC – Initialize variables” will be executed again during restart since the scenario restart parameter is set to “Restart from failed step” in the Load Plan.

Summary:

OBIA has certain coding standards for setting up restart parameters, as discussed above. For serial and parallel steps, the parameters “Restart from failure” and “Restart from failed children” allow the completed steps to be skipped. For scenario steps (which actually kick off the load sessions), the restart parameter “Restart from failed step” skips the completed steps in the session and reruns all the tasks in the failed step, allowing recovery of an incomplete step.

This standard allows a hands-free approach: a failed load plan can be restarted by an administrator who has no knowledge of what the load plan is doing.


Tuesday Apr 22, 2014

Fusion Application Incremental Data Migration

Written by Ayush Ganeriwal, Oracle Data Integrator Product Management

In the last post we discussed how Fusion Applications uses ODI to transform and move data from the interface tables to the internal tables. In this article we will look at how Fusion Applications uses ODI for extracting data from legacy applications and loading it into the interface tables.

Fusion Applications teams have created a large number of ODI interfaces and packages to address this use case. These ODI artifacts are shipped to Fusion Applications customers for performing the initial and incremental data migration from their legacy applications to the Fusion Applications interface tables. The shipped artifacts can be customized to match the customizations in the customer’s environment. The diagram below depicts the data migration from Siebel to Fusion Customer Relationship Management (CRM). The whole process is performed in two stages: first the data is extracted from the underlying database of Siebel’s production system into a staging area in an Oracle database, and then it is migrated into the Fusion Applications interface tables. ODI is used for both of these operations. Once the initial migration is done, the trickle of change data is replicated and transformed through the combination of ODI and GoldenGate.


Incremental Migration using ODI and GoldenGate

Initial Load

The initial load is the bulk data movement process where a snapshot of the data from the legacy application is moved into the staging area in the Oracle database. Depending upon the underlying database type of the legacy application, an appropriate ODI Knowledge Module is used in the interfaces to get the best performance. For instance, for Oracle-to-Oracle data movement, Knowledge Modules based on DBLINK are used to move data natively through a database link.
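Conceptually, a DBLINK-based Knowledge Module generates set-based SQL of roughly the following shape (the object names here are illustrative, not the actual KM output):

```
-- one-time setup: a link from the Oracle staging database to the legacy Oracle source
CREATE DATABASE LINK legacy_src
  CONNECT TO stage_user IDENTIFIED BY stage_pwd
  USING 'LEGACY_TNS_ALIAS';

-- the snapshot then moves in a single set-based statement, entirely inside
-- the database engines, instead of row-by-row through a middle tier
INSERT /*+ APPEND */ INTO odi_stage.s_customer
SELECT row_id, name, cust_stat_cd
FROM   siebel.s_customer@legacy_src;
```

Keeping the transfer inside the databases is what makes this the best-performing option for the Oracle-to-Oracle case.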

Replication

The replication process takes care of moving data incrementally after the initial load is complete. Oracle GoldenGate is leveraged for this incremental change data replication, continuously replicating the changes into the staging area. The trickle of change data is then moved from the staging area to the Fusion Applications interface tables through ODI’s change data capture processes, using ODI Journalizing Knowledge Modules.

Thanks again for reading about ODI in the Fusion Applications!  This was the last in a series of three posts.  To review the related posts:  Oracle Data Integrator (ODI) Usage in Fusion Applications and Fusion Application Bulk Import Process.

Wednesday Apr 16, 2014

Learn about Oracle Data Integrator (ODI) Agents

Check out two new ODI A-Team blog posts – all about the Oracle Data Integrator (ODI) Agents! Understand where to install the ODI standalone agent, and find out more about the ODI agent flavors and installation types. Which one(s) make sense for you?

Understanding Where to Install the ODI Standalone Agent

ODI Agents: Standalone, JEE and Colocated 

Happy reading!

Tuesday Apr 15, 2014

Fusion Application Bulk Import Process

Written by Ayush Ganeriwal, Oracle Data Integrator Product Management

In the previous blog post we looked at the Fusion Applications end-to-end bulk data integration use cases. Now let’s take a closer look at the Bulk Import process that transforms and moves data from interface tables to internal tables. For this use case ODI is bundled along with Fusion Applications and gets configured transparently by the Fusion Applications provisioning process. The entire process is automated and controlled through the Fusion Applications user interface. Provisioning also seeds the ODI repository with the Fusion Application-specific Models, Interfaces, Procedures and Packages, which are then dynamically modified through the ODI SDK for any Fusion Application customizations.


Fusion Application Bulk Import Process

The above diagram shows the Bulk Import process in Fusion Applications, where ODI is used for data transformation. Here the interface tables are the source tables, populated by other processes before the Bulk Import process is kicked off. The Fusion Application internal tables are the target for these integrations, where the data needs to be loaded. These internal tables are used directly by Fusion Application functionality; therefore a number of data validations are applied so that only good-quality data is loaded into them. The data validation errors are monitored and corrected through the Fusion Application user interface. The metadata of the Fusion Application tables is not fixed and gets modified as the application is customized for the customer’s requirements. Any change in such source or target tables requires corresponding adjustments in the ODI artifacts too; this is taken care of by AppComposer, which uses the ODI SDK to make such changes in the ODI artifacts. If auditing is enabled, then any change in the internal table data or in the ODI artifacts is recorded in a centralized auditing table.

Packaged ODI Artifacts

There are a large number of ODI models, interfaces and packages seeded in the default ODI repository used for Bulk Import. These ODI artifacts are built based upon the base metadata of Fusion Application schema.

Extensibility

As part of customization, Fusion Application entities are added or modified as per the customer’s requirements. Such customizations result in changes to the underlying Fusion Application internal tables and interface tables, and require the ODI artifacts to be updated accordingly. The Fusion Application development team has built an extensibility framework to update the ODI artifacts dynamically along with any change in the Fusion Application schema. It leverages the ODI SDK to perform changes in the ODI repository. The dynamic generation of ODI artifacts is automatically kicked off as part of the patching and upgrade process. The Fusion Application AppComposer user interface also supports explicitly triggering this process, so that administrators can regenerate ODI artifacts whenever they make customizations.

Validation Error Reporting

The validation errors are populated in intermediate tables and are exposed through BI Publisher so that admin users can correct and recycle these error records.

Auditing

The Fusion Application auditing framework keeps track of which changes were performed by each user and when. There are two levels of auditing captured in the Fusion Application audit table for the Bulk Import use case: first, metadata changes in ODI artifacts made through the ODI SDK during customizations; second, transactional data changes in the Fusion Application tables made as part of ODI interface execution. For these purposes the ODI team has exposed substitution APIs that are used by the Fusion Application development team to customize ODI KMs to perform such auditing during the actual data movement.

Provisioning and Upgrade

The provisioning process takes care of installing and configuring ODI for the Fusion Application instance.

It takes care of automatically creating the ODI repository schemas, configuring topology, setting up ODI agents, setting up configurations for the ODI-ESS bridge, seeding packaged ODI artifacts, applying modifications to the seeded artifacts, and creating internal users in IDM for external authentication. There is a separate process to apply patches or upgrade the environment to a newer release. Such patching or upgrade processes not only take care of importing newer ODI artifacts but also kick off a CRM extensibility process that modifies the ODI artifacts as per the Fusion Application customizations.

External Authentication

There is a dedicated IDM configured with each Fusion Application instance, and all Fusion Application components are expected to have their users authenticated through this centralized IDM. For the Bulk Import use case ODI is configured with external authentication, and internal users are created in IDM that are used for communicating with the ODI agent and kicking off ODI jobs.

Enterprise Scheduler Service (ESS) - ODI Bridge

The ODI scenarios are kicked off through the ODI-ESS bridge. It is a separate library built for ODI-ESS integration and gets deployed along with the Enterprise Scheduler Service (ESS) in the Fusion Application environment. It supports both synchronous and asynchronous modes of invocation for ODI jobs. In the asynchronous mode the session status is reported back through callbacks to the ESS services. A topology editor is provided to manage the ESS callback service connectivity, exclusively for Fusion Application use cases.

Note: Use of the ESS-ODI Bridge is currently restricted to the Fusion Application use case.

High Availability

The ODI agent is deployed on a WebLogic cluster in the Fusion Application environment to take advantage of ODI's high availability capabilities. By default there is only one managed server in the WebLogic cluster created for ODI, but as the load increases more managed servers can be added to the cluster to distribute the execution of ODI sessions among the ODI agent instances in the cluster.

Stay tuned for the last post on this topic coming soon.  This was part two in a series of three posts.  The initial post can be found here.

Friday Apr 11, 2014

Oracle Data Integrator (ODI) Usage in Fusion Applications (FA)

Written by Ayush Ganeriwal, Oracle Data Integrator Product Management

Oracle Data Integrator (ODI) is the bulk data transformation platform for Fusion Applications (FA). ODI is used by Fusion Customer Relationship Management (CRM), Fusion Human Capital Management (HCM), Fusion Supply Chain Management (SCM), Fusion Incentive Compensation (IC) and the Fusion Financials family of products, and many other Fusion Applications teams are following suit. Among all these product families, CRM is the biggest consumer of ODI, leveraging a breadth of ODI features and functionality, some of which were developed specifically for Fusion Applications use. The ODI features they utilize include: the ODI SDK, high availability, external authentication, various out-of-the-box and customized Knowledge Modules, the ODI-ESS bridge, callbacks to ESS EJBs, auditing, open tools, etc. In this post we will first talk about the different Fusion Applications use cases at a higher level and then take a closer look at the different integration points.

Figure 1 shows the data integration needs of a typical on-premise Fusion Applications deployment.

  1. Bulk Import: Fusion Applications exposes a set of interface tables as the entry point for data loads from any outside source. The bulk import process validates this data and loads it into the internal tables, which can then be used by the Fusion Application.
  2. Data Migration: extracting data from external applications, legacy applications or any other data source and loading it into the Fusion Application’s interface tables. ODI can be used for such data loads.
  3. Preparing Data Files: converting data into Comma Separated Values (CSV) files that can be imported through the Fusion Application’s file import wizard. ODI can be used to extract data into such CSV files.

Figure 1: Data Integration Needs in On-Premise Fusion Application

Figure 2 shows the on-demand or cloud environment requirements, which are slightly different as there is no direct connectivity available to the interface tables.

  1. Bulk Import: Fusion Applications exposes a set of interface tables as the entry point for any data load from any outside source. The bulk import process validates this data and then loads it into the internal tables, which can then be used by the application.
  2. Preparing Data Files: converting data into CSV files that can be imported through the Fusion Application’s file import wizard. ODI can be used for the creation of such CSV files.
  3. Uploading data files: the data files are uploaded to the Tenant File repository through either the Fusion Application’s file import page or the Oracle WebCenter Content Document Transfer Utility. The WebCenter utility is built using the ODI open tool framework, allowing the entire process to be orchestrated through an ODI package.
  4. Loading Interface Tables: the data files are loaded into the interface tables so that they can be consumed by the Bulk Import process. ODI is used for loading these interface tables.

Figure 2: Data Integration Needs for On-Demand Fusion Application

Stay tuned for more blog posts on this topic coming next week. This was part one in a series of three posts.