Wednesday Apr 30, 2014

ODI - Who Changed What and When? Search!

Cell phones ringing, systems with problems, what's going on? How do you diagnose? Sound like a common problem? Some ODI runtime object failed in a production run, so what's the first thing to do? Be suspicious. Be very suspicious. I have lost count of the number of times I have heard 'I changed nothing' over the years. The ODI repository holds audit information for all objects, so you can see the created date and user as well as the updated date and user for each object. This lets you at least identify whether an object was updated.

Firstly you can check the object you are suspicious of; you can do this in the studio, the operator navigator or the console. The information is also available in the SDK and, for the more adventurous, in the repository tables. Below you can see who updated the object and when.

You can also check other objects such as variables (were they updated, and what are their values?) and load plans.
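For the SDK route, a minimal sketch along the following lines can pull the same audit fields programmatically. The connection settings and the scenario name are placeholders, and the findLatestByName finder and the getLastUser()/getLastDate() accessors are assumptions to verify against the SDK javadoc for your release.

import oracle.odi.core.OdiInstance;
import oracle.odi.core.config.MasterRepositoryDbInfo;
import oracle.odi.core.config.OdiInstanceConfig;
import oracle.odi.core.config.PoolingAttributes;
import oracle.odi.core.config.WorkRepositoryDbInfo;
import oracle.odi.core.security.Authentication;
import oracle.odi.domain.runtime.scenario.OdiScenario;
import oracle.odi.domain.runtime.scenario.finder.IOdiScenarioFinder;

public class WhoTouchedMyScenario {
  public static void main(String[] args) {
    // Repository connection details are placeholders - substitute your own.
    MasterRepositoryDbInfo master = new MasterRepositoryDbInfo(
        "jdbc:oracle:thin:@localhost:1521:orcl", "oracle.jdbc.OracleDriver",
        "ODI_MASTER", "master_pwd".toCharArray(), new PoolingAttributes());
    WorkRepositoryDbInfo work = new WorkRepositoryDbInfo("WORKREP1", new PoolingAttributes());
    OdiInstance odi = OdiInstance.createInstance(new OdiInstanceConfig(master, work));
    try {
      Authentication auth = odi.getSecurityManager()
          .createAuthentication("SUPERVISOR", "sup_pwd".toCharArray());
      odi.getSecurityManager().setCurrentThreadAuthentication(auth);

      IOdiScenarioFinder finder = (IOdiScenarioFinder) odi
          .getTransactionalEntityManager().getFinder(OdiScenario.class);
      // findLatestByName and the audit accessors below are assumptions - check the javadoc.
      OdiScenario scen = finder.findLatestByName("LOAD_SALES_DW"); // hypothetical scenario name
      if (scen != null) {
        System.out.println(scen.getName() + " " + scen.getVersion()
            + " last updated by " + scen.getLastUser() + " on " + scen.getLastDate());
      }
    } finally {
      odi.close();
    }
  }
}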

Casting the net wider - Search

There is a very useful but not very well known feature in ODI for searching for objects, invoke it from the Search -> Find ODI Object menu.

For example, if I wanted to find the objects that have been updated in the last 7 days, this search could be used - it works for runtime objects like scenarios and load plans as well as design objects. Below you can see I have set the search scope to 'Scenarios', which restricts the search to scenarios, scenario variables and scheduled executions. Here is a table summarizing each search scope and the objects within it:

Search Scope                     Object Types
  Projects                       Folder, Package, Mapping, Procedure, Knowledge Module, Variable, User Function, Sequence
  Meta-data                      Model, Diagram, Datastore, Column, Key, Reference, Condition, Sub-Model, Model Folder
  Scenarios                      Scenario, Scenario Variable, Scheduled Execution
  Organization                   Project, Folder, Model Folder, Model, Sub-Model, Load Plan and Scenario Folder
  Procedure / Knowledge Modules  Knowledge Module, Option, Procedure, Procedure Command
  Packages                       Package, Step
  Load Plans                     Load Plan

The search supports filtering objects by last update date, by object type (scenario, load plan and so on), by object name and by updating user. You can also record a description of the search in the text field in the favorite criteria section.

So if you are working in a large customer deployment and have to cast the net wider, this lets you quickly find objects that have recently been updated across an ODI repository. This has many uses: identifying potentially suspicious changes, cross-checking what has actually changed against what you have been told has changed, and understanding the changes so you can perform subsequent actions such as exporting and importing them.

The results are returned in the result panel, which can be viewed either as a hierarchical tree (including any foldering) or as a linear list; from there you can open and edit individual objects. Below you can see the results in a linear list. Selecting the 'Hierarchical View' tab shows the foldering.

One thing I did note is that there is no wildcard support for users, so you have to pick a specific user. It would be better if you could search for all objects updated within some date period regardless of user, and then sort the results by date, name and so on. The audit information is also available via the SDK and the unsupported but useful repository tables, so you can build custom scripts.
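If you do need that 'any user in the last N days' view today, a quick script against the unsupported work repository tables is one way to get it. Below is a sketch that assumes the SNP_SCEN table and its LAST_USER/LAST_DATE audit columns; check the table and column names against your repository release before relying on it, and substitute your own connection details.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class RecentlyUpdatedScenarios {
  public static void main(String[] args) throws Exception {
    // Work repository connection - placeholders, substitute your own.
    try (Connection con = DriverManager.getConnection(
            "jdbc:oracle:thin:@localhost:1521:orcl", "ODI_WORK", "work_pwd");
         // SNP_SCEN and its LAST_DATE/LAST_USER columns are assumed (unsupported) names.
         PreparedStatement ps = con.prepareStatement(
             "SELECT scen_name, scen_version, last_user, last_date FROM snp_scen "
           + "WHERE last_date > (SYSDATE - ?) ORDER BY last_date DESC")) {
      ps.setInt(1, 7); // anything updated in the last 7 days, by any user
      try (ResultSet rs = ps.executeQuery()) {
        while (rs.next()) {
          System.out.printf("%s %s updated by %s on %s%n",
              rs.getString(1), rs.getString(2), rs.getString(3), rs.getTimestamp(4));
        }
      }
    }
  }
}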

When a search is made you can also save it (as an XML file) and share it across teams; just open the saved file whenever you wish to reuse the search criteria. This is a useful way of discovering what has been updated recently, whether runtime objects or design artifacts.

Tuesday Apr 29, 2014

What's New in Oracle GoldenGate 12c for DB2 iSeries

Oracle GoldenGate 12c for IBM DB2 iSeries was released on February 15, 2014. The new version delivers the following new features:

  • Native Delivery (Replicat): this new feature allows the user to install Oracle GoldenGate on the IBM i server and deliver the data directly to the IBM DB2 for i database. In previous releases, users had to install Oracle GoldenGate on a remote server and apply the data to the IBM DB2 for i database via ODBC. The native delivery not only improves performance but also supports the GRAPHIC, VARGRAPHIC and DBCLOB datatypes, which are not supported in the ODBC remote delivery for IBM DB2 for i.
  • Schema Wildcarding: as with the other databases that Oracle GoldenGate supports, schema wildcarding is now available for the IBM DB2 for i database as well.
  • Auto discard: the discard file is now created by default.
  • Coordinated Delivery (Replicat): coordinated delivery is supported with the new native delivery (Replicat).
  • BATCHSQL: supported on IBM i 7.1 and higher versions.

Friday Apr 25, 2014

What's New in Oracle GoldenGate 12c for Teradata

Oracle GoldenGate 12c for Teradata was released on April 24, 2014; it is currently available for download on eDelivery. In this release, the following new features are available:

  • Capture and delivery support for Teradata 14.10. The support is based on Teradata Access Module (TAM) 13.10 with no support for the capture of the NUMBER or VARRAY datatypes.
  • Delivery support for Teradata 15.0.
  • Coordinated delivery (replicat): provides the ability to spawn multiple Oracle GoldenGate replicat processes from one parameter file.

Oracle GoldenGate for Teradata’s statement of direction has been updated in the Oracle GoldenGate Statement of Direction document. 

The Oracle GoldenGate for Teradata Best Practice Document is updated at: 

  • Oracle GoldenGate Best Practice: Configuring Oracle GoldenGate for Teradata Databases (Doc ID 1323119.1)

For more information on coordinated delivery support, please refer to the following blogs:

Long Running Jobs in Oracle Business Intelligence Applications (OBIA) and Recovery from Failures

Written by Jayant Mahto, Oracle Data Integrator Product Management

In Oracle Business Intelligence Applications (OBIA), the data warehouse load is performed using Oracle Data Integrator (ODI). In ODI, using packages and load plans, one can create quite a complex load job that kicks off many scenarios in a coordinated fashion. Such a complex load job may run for a long time, may fail before completing the entire job successfully, and will then require restarts to recover from the failure and complete.

This blog uses the complex load plan defined in Load Plan for Oracle Business Applications (OBIA) to illustrate the method of recovery from failures. Similar methods can be used in the recovery of complex load plans defined independently in Oracle Data Integrator (ODI). Note that this post does not go into the details of troubleshooting a failed load plan and only talks about the different restart parameters that affect the behavior of a restarted job.

Failures can happen for the following reasons:

  • Access failure – source/target DB down, network failure, etc.
  • Agent failure.
  • Problem with the database – such as running out of space or some other DB-related issue.
  • Data-related failure – exceptions not caught gracefully, such as a null in a not-null column.

It is important to find the reason for the failure and address it before attempting to restart the load plan; otherwise the same failure may happen again. To recover from a failure successfully, the restart parameters on the load plan steps need to be selected carefully. These parameters are chosen by the developers at design time. The goal is to make restarts robust enough that an administrator can restart the load plan without knowing the details of the failed steps. It is therefore the developer's responsibility to choose restart parameters that guarantee the correct set of steps is re-run during restart, so that data integrity is maintained.

In the case of OBIA, the generated load plans have appropriate restart parameters on the out-of-the-box steps. If you are adding custom steps then you need to choose similar restart parameters for them.

Now let us look at a typical load plan and the restart parameters at various steps.

Restart of a serial load plan step:

The SDE Dimension Group step highlighted above is a serial step. Let us say the load plan failed while running the 3 SDE Dims GEO_DIM step. Since this is a serial step and it has been set to “Restart from Failure”, on restart the load plan would start from 3 SDE Dims GEO_DIM only and not run 3 SDE Dims USER_DIM again. This parameter is widely used in the OBIA serial steps.

The other restart parameter for a serial step is “Restart all children”. This causes all the child steps to be re-run during restart even if only one failed and the others succeeded. This parameter can be useful in some cases; it is up to the developers to decide.

Restart of a parallel load plan step:

The Workforce Dependant Facts step highlighted above is a parallel step with its restart parameter set to “Restart from failed children”. This means all 5 parallel steps under it are kicked off in parallel (subject to free sessions being available). Now let us say 2 of those 5 steps completed (indicated by the green boxes above) and then the load plan failed. When the load plan is restarted, all the steps that failed or did not complete will be started again (in this example Learning Enrollment Fact, Payroll Facts and Recruitment Facts). This parameter is widely used in the OBIA parallel steps.

The other restart parameter for a parallel step is “Restart all children”. This causes all the child steps to be re-run during restart even if only one failed and the others succeeded. This parameter can be useful in some cases; it is up to the developers to decide.
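To make these two options concrete, here is a small standalone sketch (plain Java, not ODI code) that mimics the behaviour described above for both serial and parallel steps: children that completed in the previous run are skipped when restarting from failure/failed children, while “Restart all children” reruns everything. The third child name is hypothetical.

import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Standalone illustration (not ODI code) of the restart semantics described above:
 * with "Restart from Failure" (serial) or "Restart from failed children" (parallel),
 * children that completed in the previous run are skipped on restart, whereas
 * "Restart all children" reruns everything under the step.
 */
public class RestartSemantics {
  enum Status { NOT_RUN, DONE, FAILED }

  public static void main(String[] args) {
    // State left behind by the failed run; the third child name is hypothetical.
    Map<String, Status> children = new LinkedHashMap<>();
    children.put("3 SDE Dims USER_DIM", Status.DONE);
    children.put("3 SDE Dims GEO_DIM", Status.FAILED);
    children.put("3 SDE Dims OTHER_DIM", Status.NOT_RUN);

    restart(children, false); // Restart from failure / failed children: USER_DIM is skipped
    restart(children, true);  // Restart all children: everything reruns
  }

  static void restart(Map<String, Status> children, boolean restartAllChildren) {
    System.out.println(restartAllChildren ? "-- Restart all children" : "-- Restart from failure");
    for (Map.Entry<String, Status> child : children.entrySet()) {
      boolean rerun = restartAllChildren || child.getValue() != Status.DONE;
      System.out.println((rerun ? "run  " : "skip ") + child.getKey());
    }
  }
}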

Restart of the scenario session:

At the lowest level in any load plan are the scenario steps. While the parent steps (serial or parallel) are used to set the dependencies, the scenario steps are what finally load the tables. A scenario step in turn can have one or more steps (corresponding to the number of steps inside the package).

To understand the failure points and how a restart takes place, it is important to understand the structure of the session that gets created for the execution of a scenario step.

The following diagram illustrates different components in a session:

The restart parameters for the scenario steps in the load plan are:

  • Restart from a new session – This creates a new session for the failed scenario during restart and executes all the steps again.
  • Restart from a failed task – This uses the old session for the failed scenario during restart and starts from the failed task.
  • Restart from a failed step – This uses the old session for the failed scenario during restart and re-executes all the tasks in the failed step again. This is the most common parameter used by OBIA and is illustrated below.

In the above example, scenario step 2 failed while running. It internally has 3 steps (all under the same session in the Operator log, but identified with different step numbers 0, 1 and 2 in this case). As per the OBIA standard setting, the scenario would execute from the failed step, which is step number 2 Table_Maint_Proc (from substep 3 Initialize Variables onwards, as shown in the diagram).

Note that the successful tasks such as “3 – Procedure – TABLE_MAINT_PROC – Initialize variables” will be executed again during restart since the scenario restart parameter is set to “Restart from failed step” in the Load Plan.
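To make the step-versus-task granularity concrete, here is a small standalone sketch (plain Java, not ODI code) that mimics the behaviour just described: completed session steps are skipped, but every task inside the failed step is re-executed, even tasks that had succeeded. Step and task names other than Table_Maint_Proc and Initialize variables are hypothetical.

import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

/**
 * Standalone illustration (not ODI code) of "Restart from failed step" at session level:
 * session steps that completed earlier are skipped, but every task inside the failed
 * step is re-executed, including tasks that succeeded before the failure.
 */
public class SessionRestart {
  public static void main(String[] args) {
    // Session steps and their tasks from the failed run; names other than
    // Table_Maint_Proc / Initialize variables are hypothetical.
    Map<String, List<String>> steps = new LinkedHashMap<>();
    steps.put("0 Completed step", Arrays.asList("task A", "task B"));
    steps.put("1 Completed step", Arrays.asList("task C"));
    steps.put("2 Table_Maint_Proc", Arrays.asList("3 Initialize variables (succeeded)",
                                                  "4 Maintenance task (failed)"));
    String failedStep = "2 Table_Maint_Proc";

    boolean reached = false;
    for (Map.Entry<String, List<String>> step : steps.entrySet()) {
      reached = reached || step.getKey().equals(failedStep);
      if (!reached) {
        System.out.println("skip step " + step.getKey()); // already completed, skipped
        continue;
      }
      for (String task : step.getValue()) {               // rerun every task in the step
        System.out.println("run  " + step.getKey() + " / " + task);
      }
    }
  }
}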


OBIA has a coding standard for setting up restart parameters, as discussed above. For serial and parallel steps, the parameters “Restart from Failure” and “Restart from failed children” allow completed steps to be skipped. For scenario steps (which actually kick off the load sessions), the restart parameter “Restart from failed step” skips the completed steps in the session and reruns all the tasks in the failed step, allowing recovery of an incomplete step.

This standard allows an administrator who has no knowledge of what the load plan is doing to restart a failed load plan hands-free.

Tuesday Apr 22, 2014

Fusion Application Incremental Data Migration

Written by Ayush Ganeriwal, Oracle Data Integrator Product Management

In the last post we discussed how Fusion Applications uses ODI to transform and move data from its interface tables to the internal tables. In this article we will look at how Fusion Applications uses ODI to extract data from legacy applications and load it into the interface tables.

Fusion Applications have created a large number of ODI interfaces and packages for addressing this use case. These ODI artifacts are shipped to Fusion Application customers for performing the initial and incremental data migration from their legacy applications to the Fusion Applications interface tables. The shipped artifacts can be customized to match the customizations in the customer's environment. The diagram below depicts the data migration from Siebel to Fusion Customer Relationship Management (CRM). The whole process is performed in two stages: first the data is extracted from the underlying database of Siebel's production system into a staging area in an Oracle database, and then it is migrated into the Fusion Applications interface tables. ODI is used for both of these operations. Once the initial migration is done, the trickle of change data is replicated and transformed through the combination of ODI and GoldenGate.

Incremental Migration using ODI and GoldenGate

Initial Load

The initial load is the bulk data movement process where a snapshot of the data from the legacy application is moved into the staging area in the Oracle database. Depending upon the underlying database type of the legacy application, an appropriate ODI Knowledge Module is used in the interfaces to get the best performance. For instance, for Oracle to Oracle data movement the DBLINK-based Knowledge Modules are used to move data natively through a database link.


The replication process takes care of moving data incrementally after the initial load is complete. Oracle GoldenGate is leveraged for this incremental change data replication, continuously replicating the changes into the staging area. The trickle of change data is then moved from the staging area to the Fusion Applications interface tables through ODI's change data capture processes using ODI Journalizing Knowledge Modules.

Thanks again for reading about ODI in the Fusion Applications!  This was the last in a series of three posts.  To review the related posts:  Oracle Data Integrator (ODI) Usage in Fusion Applications and Fusion Application Bulk Import Process.

Wednesday Apr 16, 2014

Learn about Oracle Data Integrator (ODI) Agents

Check out two new ODI A-Team blog posts – all about the Oracle Data Integrator (ODI) Agents! Understand where to install the ODI standalone agent, and find out more about the ODI agent flavors and installation types. Which one(s) make sense for you?

Understanding Where to Install the ODI Standalone Agent

ODI Agents: Standalone, JEE and Colocated 

Happy reading!

Tuesday Apr 15, 2014

Fusion Application Bulk Import Process

Written by Ayush Ganeriwal, Oracle Data Integrator Product Management

In the previous blog post we looked at the Fusion Applications end-to-end bulk data integration use cases. Now let’s take a closer look at the Bulk Import process that transforms and moves data from interface tables to internal tables. For this use case ODI is bundled along with Fusion Applications and gets configured transparently by the Fusion Application provisioning process. The entire process is automated and controlled through the Fusion Application user interface. The provisioning process also seeds the ODI repository with Fusion Application-specific Models, Interfaces, Procedures and Packages, which are then dynamically modified through the ODI SDK for any Fusion Application customizations.

Fusion Application Bulk Import Process

The above diagram shows the Bulk Import process in Fusion Applications, where ODI is used for data transformation. Here the interface tables are the source tables, populated by other processes before the Bulk Import process is kicked off. The Fusion Application internal tables, into which the data needs to be loaded, are the target of these integrations. These internal tables are used directly by Fusion Application functionality, therefore a number of data validations are applied so that only good quality data is loaded into them. The data validation errors are monitored and corrected through the Fusion Application user interface. The metadata of Fusion Application tables is not fixed and gets modified as the application is customized for the customer's requirements. Any change in such source or target tables requires corresponding adjustments in the ODI artifacts too; this is taken care of by the AppComposer, which uses the ODI SDK to make those changes. If auditing is enabled then any change to the internal table data or to the ODI artifacts is recorded in a centralized auditing table.

Packaged ODI Artifacts

There are a large number of ODI models, interfaces and packages seeded in the default ODI repository used for Bulk Import. These ODI artifacts are built based upon the base metadata of Fusion Application schema.


As part of the customization, Fusion Application entities are added or modified as per the customer’s requirements. Such customizations result in changes in the underlying Fusion Application internal tables and interface tables, and require the ODI artifacts to be updated accordingly. The Fusion Application development team has built an extensibility framework to update the ODI artifacts dynamically along with any change in the Fusion Application schema. It leverages the ODI SDK for performing any changes in the ODI repository. The dynamic generation of ODI artifacts is automatically kicked off as part of the patching and upgrade process. The Fusion Application AppComposer user interface also supports explicitly triggering this process so that administrators can regenerate ODI artifacts whenever they make any customizations.

Validation Error Reporting

The validation errors are populated in intermediate tables and are exposed through BI Publisher so that admin users can correct and recycle these error records.


Auditing

The Fusion Application auditing framework keeps track of which changes were performed by each user and at what time. There are two levels of auditing captured in the Fusion Application audit table for the Bulk Import use case: first, metadata changes made to ODI artifacts through the ODI SDK during customizations; second, transactional changes to the Fusion Application table data made as part of ODI interface execution. For these purposes the ODI team has exposed some substitution APIs that the Fusion Application development team uses to customize ODI KMs to perform this auditing during the actual data movement.

Provisioning and Upgrade

The provisioning process takes care of installing and configuring ODI for the Fusion Application instance.

It automatically creates the ODI repository schemas, configures the topology, sets up ODI agents, sets up the configuration for the ODI-ESS bridge, seeds the packaged ODI artifacts, applies modifications to the seeded artifacts and creates internal users in IDM for external authentication. There is a separate process to apply patches or upgrade the environment to a newer release. Such patching or upgrade processes not only import newer ODI artifacts but also kick off a CRM extensibility process that modifies the ODI artifacts as per the Fusion Application customizations.

External Authentication

There is a dedicated IDM configured with each Fusion Application instance, and all Fusion Application components are expected to have their users authenticated through this centralized IDM. For the Bulk Import use case ODI is configured with external authentication, and internal users are created in IDM to communicate with the ODI agent and kick off ODI jobs.

Enterprise Scheduler Service (ESS) - ODI Bridge

The ODI scenarios are kicked off through the ODI-ESS bridge. It is a separate library built for the ODI-ESS integration and is deployed along with the Enterprise Scheduler Service (ESS) in the Fusion Application environment. It supports both synchronous and asynchronous invocation of ODI jobs. In the asynchronous mode the session status is reported back through callbacks to the ESS services. A topology editor is provided to manage the ESS callback service connectivity, exclusively for Fusion Application use cases.

Note: Use of ESS-ODI Bridge is restricted to Fusion Application use case only at the moment.

High Availability

The ODI agent is deployed on a WebLogic cluster in the Fusion Application environment to take advantage of ODI's high availability capabilities. By default there is only one managed server in the WebLogic cluster created for ODI, but as the load increases more managed servers can be added to the cluster to distribute the execution of ODI sessions among the ODI agent instances in the cluster.

Stay tuned for the last post on this topic coming soon.  This was part two in a series of three posts.  The initial post can be found here.

Friday Apr 11, 2014

ODI 12c - Expression Hints

The ODI 12c mapping designer lets you design a mapping using components and define expressions for each of those components. Like 11g, 12c has hints that let ODI know how you would like the physical plan for your design to be computed, and from this the code is generated. I think the rules about how some of this works are not widely known - both in 11g and 12c people second-guess how they think it works. There's no magic to it, so let's have a look. Underpinning the mapping are the technology definitions; this is where datatypes and the datatype mappings between technologies are defined. This lets ODI be very flexible in supporting arbitrary data transformations between systems and in how data and its datatype are mapped across heterogeneous systems.

Putting the heterogeneous nature aside, we can look at how datatypes are transformed in a simple distributed example for Oracle (the example is for demonstration; in reality the database link LKM would be used, which does no staging). The source table has 2 columns that are both VARCHAR2(30); one column holds an actual string, the other holds a date. The target table has 2 columns that are VARCHAR2(10) and DATE. Note that one target column is shorter than its source and the other is a DATE datatype rather than a string.

We can define the simple table to table mapping as below and define the expressions on the target table.

By default the expressions have no hint defined and will execute where the table is executed - in this case on the target system. We can see how the C$ table would be defined by previewing the DDL code in the physical design; we will see the type/DDL in the syntax of the particular technology. Below you can see the source datatype information is propagated - the length is still 30.

If we look at the target table we can see the expressions defined on it; in the example below I have selected the TGTEMP table and can see the 2 expressions. I could change where the expression is executed for this particular physical design, but I will not do that; instead I will go back to the logical designer and set the hint there, so that all of my potential physical designs leverage it.

Use the 'Execute on Hint' property for the attribute; you can see the different options there - at the moment it has the value 'No hint'.

Below I have selected 'Source' to indicate I want the SUBSTR expression executed on the source system. 

After this change has been made, if you look at the physical design you will see that the datatype information on our AP is now different. Because the data has been prepared on the source, the datatype for our C$ column now takes on the definition of the target (VARCHAR2(10)).
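To summarize the behaviour shown in the two screenshots, here is a tiny standalone sketch (plain Java, not ODI code) of the rule for our VARCHAR2(30) source column and VARCHAR2(10) target column.

/**
 * Standalone illustration (not ODI code) of the C$ datatype behaviour shown above
 * for the VARCHAR2(30) source column mapped to a VARCHAR2(10) target column.
 */
public class StagingTypeRule {
  // With no hint the expression runs where the target table runs, so the staging
  // (C$) column simply propagates the source definition; with the 'Source' hint the
  // expression is evaluated before staging, so the C$ column takes the target definition.
  static String stagingType(boolean executeOnSource, String sourceType, String targetType) {
    return executeOnSource ? targetType : sourceType;
  }

  public static void main(String[] args) {
    System.out.println(stagingType(false, "VARCHAR2(30)", "VARCHAR2(10)")); // no hint -> VARCHAR2(30)
    System.out.println(stagingType(true,  "VARCHAR2(30)", "VARCHAR2(10)")); // Source  -> VARCHAR2(10)
  }
}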

This gives you some idea as to how the hints work for expressions in a datastore. ODI 12c also has an expression component that lets you define groups of expressions. I generally think this is good for when an expression is reused within a mapping, but everyone works differently with tools; I have heard that some people like using it for all complex expressions rather than defining them within a component such as a target table, because they can then easily 'see' where the complex expressions are. Each to their own - the good thing is that you can do whatever you like. One benefit of using the expression component is that ODI will by default push that expression component as close to the source as possible, and you can easily grab the entire component and push it to the stage, target or wherever.

The mapping below defines the expressions on the expression component, again there are no hints defined on the component or individual expressions.

When the physical design for this mapping is inspected we can see the expression component is by default on the source execution unit. This goes for other components too (filter, join etc.). In this case you can see both of the columns in the AP take on the downstream target table's datatypes (VARCHAR2(10) and DATE).

Changing the hint for the expression component to 'Stage' on the logical design will place the expression in the downstream execution unit. If I had just switched the hint to stage for the expression component then, in the physical design, the expression would go in TARGET_UNIT_1. In 11g, ODI also supported a concept where the stage was different from the target. This is still available in 12c and is configured by defining what you want to execute in the stage using these hints, plus defining what the stage is (similar to 11g, except you don't have to switch tabs and the gestures are simpler). So firstly, define the expression to execute on the stage using that hint. Then, on the logical mapping, if you click on the canvas background you will see a property named 'Staging Location Hint'; you can set this to the logical schema location for the staging area if you have one. By default it is not set, as the staging area is the same as the target.

Let's change this to MEMORY_ENGINE just to see what the physical design looks like. We see we now have multiple execution units and the middle one where we executed the expression component is the 'stage' executing on the MEMORY_ENGINE location.

The hints are defined on the logical design. You can also hard-wire physical changes in the physical design; I will go into that in a subsequent post, but wanted to stick to the logical hints here to demystify how this works. I hope this is some useful background - I think for ODIers coming from 11g it will help.

Oracle Data Integrator (ODI) Usage in Fusion Applications (FA)

Written by Ayush Ganeriwal, Oracle Data Integrator Product Management

Oracle Data Integrator (ODI) is the bulk data transformation platform for Fusion Applications (FA). ODI is used by Fusion Customer Relationship Management (CRM), Fusion Human Capital Management (HCM), Fusion Supply Chain Management (SCM), Fusion Incentive Compensation (IC) and Fusion Financials family products, and many other Fusion Application teams are following suit. Among all these product families CRM is the biggest consumer of ODI, leveraging a breadth of ODI features and functionality, some of which were developed specifically for Fusion Applications use. Some ODI features they utilize include the ODI SDK, high availability, external authentication, various out-of-the-box and customized Knowledge Modules, the ODI-ESS bridge, callbacks to ESS EJBs, auditing, open tools, etc. In this post we will first talk about the different Fusion Application use cases at a high level and then take a closer look at the different integration points.

Figure 1 shows the data integration needs of a typical on-premise Fusion Applications deployment.

  1. Bulk Import: Fusion Applications exposes a set of interface tables as the entry point for data loads from any outside source. The bulk import process validates this data and loads it into the internal tables, which can then be used by the Fusion Application.
  2. Data Migration: Extracting data from external applications, legacy applications or any other data source and loading it into the Fusion Application's interface tables. ODI can be used for this data load.
  3. Preparing Data Files: Converting data into Comma Separated Values (CSV) files that can be imported through the Fusion Application's file import wizard. ODI can be used to extract data into such CSV files.

Figure 1: Data Integration Needs in On-Premise Fusion Application

Figure 2 shows the on-demand or cloud environment requirements, which are slightly different as there is no direct connectivity available to the interface tables.

  1. Bulk Import: Fusion Applications exposes a set of interface tables as the entry point for any data load from any outside source. The bulk import process validates this data and then loads it into the internal tables, which can then be used by the application.
  2. Preparing Data Files: Converting data into CSV files that can be imported through the Fusion Application's file import wizard. ODI can be used for the creation of such CSV files.
  3. Uploading Data Files: The data files are uploaded to the Tenant File repository through either the Fusion Application's file import page or the Oracle WebCenter Content Document Transfer Utility. The WebCenter utility is built using the ODI open tool framework, allowing the entire process to be orchestrated through an ODI package.
  4. Loading Interface Tables: The data files are loaded into the interface tables so that the data can be consumed by the Bulk Import process. ODI is used for loading these interface tables.

Figure 2: Data Integration Needs for On-Demand Fusion Application

Stay tuned for more blog posts on this topic coming next week. This was part one in a series of three posts.

Friday Apr 04, 2014

Turning Big Data into Real-Time Action for a Greater Customer Experience

The power has shifted to us, the consumers. The digital revolution allows us to access a broader set of services and communicate without boundaries. Today we demand more and better choices in a competitive market, putting pressure on businesses to catch up with our expectations.

By offering a differentiated and improved experience to their customers, organizations see that they can drive revenue growth via higher loyalty and improved brand perception. Because technology is a key enabler for delivering a superb and consistent customer experience across all touchpoints, customer experience solutions have become a top priority for CIOs in recent years. Thanks to the availability of big data analytics, organizations can now analyze a broader variety of data, rather than a few basic data points, and gain deeper insight into their customers and operations. In turn, this deeper insight helps align their business to provide a seamless customer experience.

In our digital, fast-paced world we produce large volumes of data with unprecedented velocity. This data contains perishable value that requires fast capture, analysis, and action to be able to influence operations or the interaction with the customer. Otherwise the insight or action may become irrelevant, which significantly decreases the value for the customer and the organization. To extract the maximum value from highly dynamic and perishable data, you need to process it much faster and take timely action. This is the main premise behind Oracle's Fast Data solutions, which we have discussed in previous blogs and webcasts.

Real-time data integration and analytics play a crucial role in our new world of big and fast data. Organizations that look into leveraging big data to create a greater customer experience need to evaluate the analytical foundation behind their customer-facing systems and the resulting interactions, and determine whether they can improve how and when they collect, analyze, and act on their ever-growing data assets.

In our next webcast my colleague Pete Schutt in the Oracle Business Analytics team and I will discuss how organizations can create value for their customers using real-time customer analytics, and how to leverage big data to build a solid business analytics foundation using the latest features of Oracle Data Integration and Oracle Business Analytics. We will provide multiple customer examples for different solution architectures.

Join us on April 15th 10am PT/ 1pm ET by registering via the link below.

Turning Big Data into Real-Time Action for a Greater Customer Experience

Tuesday, April 15th 10am PT/ 1pm ET

Until we meet at this webcast, please review my related article on this topic published on DBTA earlier this year:

Tuesday Apr 01, 2014

ODI 12c - Mapping Builder

A few years ago I posted a utility (see the interface builder post here) to build interfaces from driver files; here I have updated it for 12c to build mappings from driver files. The example uses a tab-delimited text file to control the mapping creation, but it could easily be changed to drive the design from whatever format you want to capture the mapping in.

The mapping can be as complete or incomplete as you’d like; it could just contain the objects, or it could be concise and semantically complete.

The control file is VERY simple and, just like ODI, requests the minimal amount of information required. The basic format is as follows:

Directive   Column 2              Column 3                    Column 4          Column 5              Column 6
  source    <model>               <datastore>                 <alias>
  .....can add many
  target    <model>               <datastore>
  filter    <filter_condition>    <filter_datastore_alias>    <alias>
  lookup    <model>               <datastore>                 <driver_alias>    <lookup_condition>    <alias>
  join      <join_condition>      <alias>
  ....can add many of the components above.
  mapping   <column>              <expression>

So for example the control file below can define the sources, target, joins, mapping expressions and so on:

  • source SOURCE EMP EMP
  • filter EMP.SAL > 1 EMP AFILTER
  • mapping COMM ABS(BONUS.COMM)

When executed, this generates the mapping below with the join, filter, lookup and target expressions from the file;

You should be able to join the dots between the control file sample and the mapping design above. You will need to compile and execute the code in OdiMappingBuilder;

java -classpath <cp> OdiMappingBuilder jdbc:oracle:thin:@localhost:1521:ora112 oracle.jdbc.OracleDriver ODI_MASTER mypwd WORKREP1 SUPERVISOR myodipwd DEMOS SDK DEMO1 <

The name of the mapping to be created is passed on the command line. You can intersperse other documentation lines between the control lines so long as they don't clash with the control keywords in the first column. See the driver file below, viewed from within Excel.
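If you want to adapt the utility to your own driver format, the parsing side is straightforward. Below is a minimal sketch (not the actual OdiMappingBuilder source) that tokenizes the tab-delimited lines and dispatches on the directive keyword in the first column; in the real utility each case would call the ODI SDK rather than print.

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class DriverFileReader {
  public static void main(String[] args) throws IOException {
    // Reads a tab-delimited driver file and dispatches on the directive keyword in the
    // first column; anything that is not a recognized directive is treated as a
    // documentation line and ignored, as described above.
    try (BufferedReader in = new BufferedReader(new FileReader(args[0]))) {
      String line;
      while ((line = in.readLine()) != null) {
        String[] tok = line.split("\t");
        String directive = tok.length > 0 ? tok[0].trim().toLowerCase() : "";
        switch (directive) {
          // In the real utility each case would call the ODI SDK; here we just print.
          case "source":  System.out.println("add source  " + tok[1] + "." + tok[2]); break;
          case "target":  System.out.println("set target  " + tok[1] + "." + tok[2]); break;
          case "filter":  System.out.println("add filter  " + tok[1] + " on " + tok[2]); break;
          case "join":    System.out.println("add join    " + tok[1]); break;
          case "lookup":  System.out.println("add lookup  " + tok[1] + "." + tok[2]); break;
          case "mapping": System.out.println("map column  " + tok[1] + " := " + tok[2]); break;
          default:        break; // documentation line - skip
        }
      }
    }
  }
}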

Anyway, these are some useful snippets of code for those learning the SDK (download OdiMappingBuilder here), or for those wanting to capture the design outside ODI and generate ODI mappings. Have fun!

Looking for Cutting-Edge Data Integration: 2014 Excellence Awards