Wednesday Sep 26, 2012

Data-Driven SOA with Oracle Data Integrator

By Mike Eisterer,

Data integration is more than simply moving data in bulk or in real time; it is also about unifying information for improved business agility and integrating it into today’s service-oriented architectures. SOA enables organizations to easily define services which may then be discovered and leveraged by varying consumers. These consumers may be applications, customer-facing portals, or complex business rules that assemble services to automate processes. Data as a foundational service provider is a key component of today’s successful SOA implementations.

Oracle offers the broadest and most integrated portfolio of products to help you define, organize, orchestrate and consume data services.

If you are attending Oracle OpenWorld next week, you will have ample opportunity to see the latest Oracle Data Integrator live in action and work with it yourself in two offered Hands-on Labs. Visit the hands-on lab to gain experience firsthand:

 

 

Oracle Data Integrator and Oracle SOA Suite: Hands-on Lab (HOL10480)

Wed Oct 3rd, 11:45 AM, Marriott Marquis - Salon 1/2


 

To learn more about Oracle Data Integrator, please visit our introductory Hands-on Lab:

Introduction to Oracle Data Integrator (HOL10481)

Mon Oct 1st, 3:15 PM, Marriott Marquis - Salon 1/2


If you are not able to attend OpenWorld, please check out our latest resources for Data Integration.

Tuesday Sep 11, 2012

Oracle Data Integrator at Oracle OpenWorld 2012: Sessions, Demos and Hands-On Labs

 By Mike Eisterer

 Oracle OpenWorld is just a few weeks away and the Oracle Data Integrator team would like to introduce you to the sessions, demos and hands-on labs we will be offering this year. We will be out in force at the show with four demo pods and two hands-on labs, plus numerous speaking sessions.

Sessions this year will provide valuable information towards the use and direction of Oracle Data Integration solutions, including:

  • Tackling Big Data Analytics with Oracle Data Integrator, October 1, 2012 - 12:15 PM, at Moscone West – 3005
  • Real-Time Data Integration with Oracle Data Integrator, with Raymond James, October 1, 2012 - 4:45 PM, at Moscone West – 3005
  • Future Strategy, Direction, and Roadmap of Oracle’s Data Integration Platform, October 1, 2012 - 10:45 AM, at Moscone West – 3005
  • Customer Perspectives: Oracle Data Integrator, October 3, 2012 - 1:15 PM, at Marriott Marquis – GoldenGate C3
 To see the full list of data integration sessions, please check out our Focus-On for Data Integration.

Demos this year will be running Monday through Wednesday in Moscone South and we will be showcasing:

· Oracle Data Integrator for Big Data (Moscone South, S-236)

· Oracle Data Integrator for Enterprise Data Warehousing (Moscone South, S-238)

· Oracle Data Integrator and Service Integration (Moscone South, S-235)

· Oracle Data Integrator and Oracle GoldenGate for Oracle Applications (Moscone South, S-240)

Hands-on labs will feature instructor-led exercises providing direct experience with Oracle Data Integrator, including:

· “Introduction to Oracle Data Integrator” where students will learn how to define sources and create mappings to extract, load and transform data.

· “Oracle Data Integrator and Oracle SOA Suite” where students will define integration flows as web services, access web services as a transformation and integrate ODI sessions into a BPEL process.

If you are not able to attend OpenWorld, please check out our latest resources for Data Integration.

In the coming weeks you will see more blogs about our products’ new capabilities and what to expect at OpenWorld.

 I hope to see you at OpenWorld and stay in touch via our future blogs.

Sunday Jul 22, 2012

Is Big Data just Super Sexy Batch?

One of the key expectations we have for big data and our information architecture is to yield faster, better and more insightful analytics. That appeal of processing so much information quickly is arguably why the Hadoop technologies were originally invented. But is it nothing more than super sexy batch? Yes - on the sexy part. But there’s definitely an important real-time element involved. Read the rest of the article to see more on our take on the intersection of batch, real-time, big data, and business analytics. [Read More]

Friday Jun 01, 2012

Looking for the Latest Data Integration Reading? Read On...

Looking for some exciting reading on data integration?  Among the hundreds of books in the market on data integration, there really wasn't a book that was heads-down focused on Oracle Data Integrator.  Not anymore.  Recently published is the first book truly dedicated to Oracle Data Integrator.  The title of the book, which is published by Packt Publishing, is:

Getting Started with Oracle Data Integrator 11g – A Hands-On Tutorial
Authors: Peter C. Boyd-Bowman, Christophe Dupupet, Denis Gray, David Hecksel,  Julien Testut, Bernard Wheeler



I would like to extend my hearty Congratulations to everyone who contributed to this book!

You can get more information about 'Getting Started with Oracle Data Integrator 11g – A Hands-On Tutorial' including the table of contents and a sample chapter at http://www.packtpub.com/oracle-data-integrator-11g-getting-started/book.

The book is now available on Amazon.com, Amazon.co.uk, Barnes & Noble and Safari Books Online. Order your copy today!

Wednesday Mar 14, 2012

New Feature in ODI 11.1.1.6: Smart Export and Import

By Jayant Mahto

Oracle Data Integrator 11.1.1.6.0 introduces a major new feature called Smart Export and Import. This post will give you an overview of this feature.

The ODI export and import feature has been used in previous releases to move ODI objects into and out of the ODI repository. Smart Export and Import builds on top of these existing capabilities to avoid common pitfalls and guide end users through the process.

[Read More]

Monday Jun 20, 2011

Oracle Data Integrator 11.1.1.5 Complex Files as Sources and Targets

Overview

ODI 11.1.1.5 adds the new Complex File technology for use with file sources and targets. The goal is to read or write file structures that are too complex to be parsed using the existing ODI File technology. This includes:

    • Different record types in one list that use different parsing rules
    • Hierarchical lists, for example customers with nested orders
    • Parsing instructions in the file data, such as delimiter types, field lengths, type identifiers
    • Complex headers, such as multiple header lines or parseable information in the header
    • Skipping of lines
    • Conditional or choice fields

Similar to the ODI File and XML File technologies, the complex file parsing is done through a JDBC driver that exposes the flat file as relational table structures. Complex files are mapped to one or more table structures, as opposed to the (simple) file technology, which always has a one-to-one relationship between file and table. The resulting set of tables follows the same concept as the ODI XML driver: table rows have additional PK-FK relationships to express hierarchy, as well as order values to maintain the file order in the resulting tables.


The parsing instruction format used for complex files is the nXSD (native XSD) format that is already in use with Oracle BPEL. This format extends the XML Schema standard by adding additional parsing instructions to each element. Using nXSD parsing technology, the native file is converted into an internal XML format. It is important to understand that the XML is streamed to improve performance; there is no memory-based size limitation on the native file, because the XML data is never fully materialized. The internal XML is then converted to a relational schema using the same mapping rules as the ODI XML driver.

How to Create an nXSD file

Complex file models depend on the nXSD schema for the given file. This nXSD file has to be created using a text editor or the Native Format Builder Wizard that is part of Oracle BPEL. BPEL is included in the ODI Suite, but not in standalone ODI Enterprise Edition. The nXSD format extends the standard XSD format through nxsd attributes. NXSD is a valid XML Schema, since the XSD standard allows extra attributes with their own namespaces.

The following is a sample NXSD schema blog.xsd:

<?xml version="1.0"?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            xmlns:nxsd="http://xmlns.oracle.com/pcbpel/nxsd"
            xmlns:tns="http://xmlns.oracle.com/pcbpel/demoSchema/csv"
            targetNamespace="http://xmlns.oracle.com/pcbpel/demoSchema/csv"
            elementFormDefault="qualified" attributeFormDefault="unqualified"
            nxsd:encoding="US-ASCII" nxsd:stream="chars" nxsd:version="NXSD">
  <xsd:element name="Root">
    <xsd:complexType><xsd:sequence>
      <xsd:element name="Header">
        <xsd:complexType><xsd:sequence>
          <xsd:element name="Branch" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy=","/>
          <xsd:element name="ListDate" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="${eol}"/>
        </xsd:sequence></xsd:complexType>
      </xsd:element>
      <xsd:element name="Customer" maxOccurs="unbounded">
        <xsd:complexType><xsd:sequence>
          <xsd:element name="Name" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy=","/>
          <xsd:element name="Street" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy=","/>
          <xsd:element name="City" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="${eol}"/>
        </xsd:sequence></xsd:complexType>
      </xsd:element>
    </xsd:sequence></xsd:complexType>
  </xsd:element>
</xsd:schema>

The nXSD schema annotates elements to describe their position and delimiters within the flat text file. The schema above uses almost exclusively the nxsd:terminatedBy instruction to look for the next terminator characters. There are various constructs in nXSD to parse fixed-length fields, look ahead in the document for string occurrences, perform conditional logic, use variables to remember state, and more.
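For instance, a fixed-length field could be described with an element along these lines (an illustrative example with a made-up element name; the style and length attributes are the same ones that appear in the driver error message shown later in this post):

<xsd:element name="AccountNumber" type="xsd:string" nxsd:style="fixedLength" nxsd:length="10"/>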

nXSD files can either be written manually using an XML Schema editor or created using the Native Format Builder Wizard. Both the Native Format Builder Wizard and the nXSD language are described in the Application Server Adapter Users Guide. The way to start the Native Format Builder in BPEL is to create a new File Adapter; in step 8 of the Adapter Configuration Wizard, a new Schema for Native Format can be created.


The Native Format Builder guides you through a number of steps to generate the nXSD based on a sample native file. If the format is complex, it is often a good idea to “approximate” it with a similar simple format and then add the complex components manually. The resulting *.xsd file can be copied and used as the format for ODI; other BPEL constructs, such as the file adapter definition, are not relevant for ODI. Using this technique, it is also possible to parse the same file format in SOA Suite and ODI, for example using SOA Suite for small real-time messages and ODI for large batches.

The nXSD schema in this example describes a file with a header row, followed by data rows of 3 comma-delimited string fields each, for example blog.dat:

Redwood City Downtown Branch, 06/01/2011
Ebeneezer Scrooge, Sandy Lane, Atherton
Tiny Tim, Winton Terrace, Menlo Park

The ODI Complex File JDBC driver exposes the file structure through a set of relational tables with PK-FK relationships. The tables for this example are:

Table ROOT (1 row):

  ROOTPK - Primary key for the root element
  SNPSFILENAME - Name of the file
  SNPSFILEPATH - Path of the file
  SNPSLOADDATE - Date of load

Table HEADER (1 row):

  ROOTFK - Foreign key to the ROOT record
  HEADERORDER - Order of the row in the native document
  BRANCH - Data
  BRANCHORDER - Order of Branch within the row
  LISTDATE - Data
  LISTDATEORDER - Order of ListDate within the row

Table CUSTOMER (2 rows):

  ROOTFK - Foreign key to the ROOT record
  CUSTOMERORDER - Order of the rows in the native document
  NAME - Data
  NAMEORDER - Order of Name within the row
  STREET - Data
  STREETORDER - Order of Street within the row
  CITY - Data
  CITYORDER - Order of City within the row

Every table has PK and/or FK fields to reflect the document hierarchy through relationships. In this example this is trivial, since the HEADER and all CUSTOMER records point back to the PK of ROOT; deeper nested documents require this to identify parent elements. All child element tables also have an order field (HEADERORDER, CUSTOMERORDER) to define the order of rows, as well as order fields for each column, in case the order of columns varies in the original document and needs to be maintained. If order is not relevant, these fields can be ignored.
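Once exposed this way, the file can be queried like any relational schema through the driver. As a small illustrative sketch against the tables described above (not taken from the product documentation), the customer rows could be read back in their original file order with:

select C.NAME, C.STREET, C.CITY
from ROOT R, CUSTOMER C
where C.ROOTFK = R.ROOTPK
order by C.CUSTOMERORDER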

How to Create a Complex File Data Server in ODI

After creating the nXSD file and a test data file, and storing them on a file system accessible to ODI, you can go to the ODI Topology Navigator to create a Data Server and Physical Schema under the Complex File technology.


This technology follows the conventions of other ODI technologies and is very similar to the XML technology. The parsing settings such as the source native file, the nXSD schema file, the root element, as well as the external database can be set in the JDBC URL:

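As an illustration only (the exact property names used below are an assumption on my part and should be verified in Appendix C of the driver reference mentioned below), a Complex File JDBC URL looks similar to an ODI XML driver URL, for example:

jdbc:snps:complexfile?f=/odi/demo/blog.dat&d=/odi/demo/blog.xsd&re=Root&s=BLOG&ro=true&dbprops=blog_db.properties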

The use of an external database defined by dbprops is optional, but it is strongly recommended for production use; ideally, the staging database should be used for this. Also, when a complex file is used exclusively for reading, it is recommended to set the ro=true property to ensure the file is not unnecessarily synchronized back from the database when the connection is closed. A data file must be present at the filename path at design time; without this file, operations such as testing the connection, reading the model data, or reverse-engineering the model will fail.

All properties of the Complex File JDBC Driver are documented in the Oracle Fusion Middleware Connectivity and Knowledge Modules Guide for Oracle Data Integrator in Appendix C: Oracle Data Integrator Driver for Complex Files Reference.

David Allan has created a great viewlet Complex File Processing - 0 to 60 which shows the creation of a Complex File data server as well as a model based on this server.

How to Create Models Based on a Complex File Schema

Once the physical schema and logical schema have been created, the Complex File technology can be used to create a Model as if it were based on a database. When reverse-engineering the Model, a datastore (table) is created for each XSD element of complex type. Using complex files as sources is straightforward; when using them as targets, you have to make sure that all dependent tables have matching PK-FK pairs. The same applies to the XML driver.

Debugging and Error Handling

There are different ways to test an nXSD file. The Native Format Builder Wizard can be used even if the nXSD wasn’t created in it; it will show issues related to the schema and/or test data. In ODI, the nXSD will be parsed and run against the existing test data file when testing a connection in the Data Server. If either the nXSD has an error or the data does not comply with the schema, an error will be displayed.

Sample error message:

Error while reading native data.
[Line=1, Col=5] Not enough data available in the input, when trying to read data of length "19" for "element with name D1" from the specified position, using "style" as "fixedLength" and "length" as "". Ensure that there is enough data from the specified position in the input.

Complex File FAQ

Is the size of the native file limited by available memory?
No, since the native data is streamed through the driver, only the available space in the staging database limits the size of the data. There are limits on individual field sizes, though; a single large object field needs to fit in memory.

Should I always use the complex file driver instead of the file driver in ODI now?
No, use the file technology for all simple file parsing tasks, for example any fixed-length or delimited files that just have one row format and can be mapped into a simple table. Because of its narrow assumptions the ODI file driver is easy to configure within ODI and can stream file data without writing it into a database. The complex file driver should be used whenever the use case cannot be handled through the file driver.

Should I use the complex file driver to parse standard file formats such as EDI, HL7, FIX, SWIFT, etc.? 
The complex file driver is technically able to parse most standard file formats; the user would have to develop an nXSD schema for the expected message. However, in some instances the use case requires a supporting infrastructure, such as message validation, acknowledgement messages, routing rules, etc. In these cases, products such as Oracle B2B or Oracle Service Bus for Financial Services are better suited and can be combined with ODI.

Are we generating XML out of flat files before we write it into a database?
We don’t materialize any XML as part of parsing a flat file, either in memory or on disk. The data produced by the parser is streamed through Java objects that simply use the XSD-derived nXSD schema as their type system. We use the nXSD schema because it is the standard for describing complex flat file metadata in Oracle Fusion Middleware, and it enables users to share schemas across products.

Is the nXSD file interchangeable with SOA Suite?
Yes, ODI can use the same nXSD files as SOA Suite, allowing mixed use cases with the same data format.

Can I start the Native Format Builder from the ODI Studio?
No, the Native Format Builder has to be started from a JDeveloper with BPEL instance. You can get BPEL as part of the SOA Suite bundle. Users without SOA Suite can manually develop nXSD files using XSD editors.

When is the database data written back to the native file?
Data is synchronized using the SYNCHRONIZE and CREATE FILE commands, and when the JDBC connection is closed. It is recommended to set the ro or read_only property to true when a file is exclusively used for reading so that no unnecessary write-backs occur.

Is the nXSD metadata part of the ODI Master or Work Repository?
No, the data server definition in the master repository only contains the JDBC URL with file paths; the nXSD files have to be accessible on the file systems where the JDBC driver is executed during production, either by copying or by using a network file system.

Where can I find sample nXSD files?
The Application Server Adapter Users Guide contains nXSD samples for various different use cases.

Friday Jun 03, 2011

Oracle Data Integrator - Key to Success on Data Integration Projects

Data is the lifeline of any enterprise.  Now, figuring out how to integrate that data so that it is available to you in the systems that require it -- well, that's a different story.  Data integration projects are notoriously challenging due to multiple siloed systems, a plethora of home-grown systems and interfaces created eons ago, mismatched data formats, different source and target databases -- and these are just a few of the issues.

Oracle Data Integrator was designed to help tackle these issues and maximize success on data integration projects. Built on an ELT architecture, ODI is the most complete solution for bulk data movement and data transformation.  High performance, heterogeneity, developer productivity, reduced cost and faster time-to-value are just a sampling of what ODI has to offer.  Take a look at the latest whitepaper on what makes data integration projects successful and how Oracle Data Integrator is integral to helping organizations trust that their data is where it needs to be while saving time, saving money and extending their existing IT investments. Read here

Tuesday May 24, 2011

What’s New in the Data Integration Market?

Data integration is essential to many strategic IT initiatives including master data management (MDM), modernization, and service-oriented architecture (SOA). That’s why it is not surprising that the data integration market continues to grow in its size, importance and impact on enterprise information systems. The technology has evolved from simple extract, transform, and load (ETL) procedures to include real time data movement, bi-directional replication/synchronization along with data quality and data profiling and data services.

 The user base for data integration tools has also evolved from low-level technology for developers and DBAs to high-level environments for operations staff, data stewards, business analysts and enterprise architects. In short, what was once merely the background “plumbing” underlying our information systems now influences just about every aspect of IT, and touches just about every type of stakeholder.

 In a new white paper we have examined the state of the data integration market including the trends in business intelligence, data warehousing, data quality, consolidation, cloud computing and IT modernization initiatives that are driving data-intensive projects. You can access this brand new white paper along with other data integration focused white papers and resources here.

Sunday Apr 17, 2011

Make Oracle Your Choice for ETL

[Read More]

Friday Nov 20, 2009

Parallel Processing in ODI

This post assumes that you have some level of familiarity with ODI. The concepts of Packages, Interfaces, Procedures and Scenarios are used here assuming that you understand them in the context of ODI. If you need more details on these elements, please refer to the ODI Tutorial for a quick introduction, or to the complete ODI documentation for detailed information.

ODI: Parallel Processing

A common question in ODI is how to run processes in parallel. When you look at a typical ODI package, all steps are described in a serial fashion and will be executed in sequence.


However, this same package can parallelize and synchronize processes if needed.

PARALLEL PROCESSES

The first piece of the puzzle, if you want to parallelize your executions, is that a package can invoke other packages once they have been compiled into scenarios (the generation of scenarios is described later in this post). You can then have a master package that orchestrates other scenarios. There is no limit to how many levels of nesting you can have, as long as your processes make sense: your master package invokes a secondary package which, in turn, invokes another package...

When you invoke these scenarios, you have two possible execution modes: synchronous and asynchronous.


A synchronous execution will serialize the scenario execution with other steps in the package: ODI executes the scenario, and only after its execution is completed, runs the next step.

An asynchronous execution will only invoke the scenario but will immediately execute the next step in the calling package: the scenario will then run in parallel with the next step. You can use this option to start multiple scenarios concurrently: they will all run in parallel, independently of one another.

SYNCHRONIZING PROCESSES

Once we have started multiple processes in parallel, a common requirement is to synchronize these processes: some steps may run in parallel, but at times we will need all separate threads to be completed before we proceed with a final series of steps. ODI provides a tool for this: OdiWaitForChildSession.


An interesting feature is that as you start your different processes in parallel, they can each be assigned a keyword (this is just one of the parameters you can set when you start a scenario). When you synchronize the processes, you can select which processes will be synchronized based on a selection of keywords.
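For illustration, the steps of such a master package roughly correspond to the following tool calls (the scenario names and the keyword are made up for this example, and the parameter names should be double-checked against the ODI Tools reference for your release):

OdiStartScen -SCEN_NAME=LOAD_CUSTOMERS -SCEN_VERSION=-1 -SYNC_MODE=2 -KEYWORDS=NIGHTLY
OdiStartScen -SCEN_NAME=LOAD_PRODUCTS -SCEN_VERSION=-1 -SYNC_MODE=2 -KEYWORDS=NIGHTLY
OdiWaitForChildSession -PARENT_SESS_NO=<%=odiRef.getSession("SESS_NO")%> -POLL_INT=10 -KEYWORDS=NIGHTLY

The two scenarios start asynchronously (SYNC_MODE=2) and run in parallel; the wait step then blocks until the child sessions tagged with the NIGHTLY keyword have completed.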

ADDING SCENARIOS TO YOUR PACKAGE FOR PARALLEL PROCESSING

To add a scenario to your package, simply drag and drop the generated scenario in the package, and edit the execution parameters as needed. In particular, remember to set the execution mode to Asynchronous.

You can generate a scenario from a package, from an interface, or from a procedure. The last two will be more atomic (one interface or one procedure only per execution unit). The typical way to generate a scenario is to right-click on one of these objects and to select Generate Scenario.

The generation of scenarios can also be automated with ODI processes that would invoke the ODI tool OdiGenerateAllScen. The parameters of this tool will let you define which scenarios are being generated automatically.

In all cases, scenarios can be found in the object tree, under the object they were generated from - or in the Operator interface, in the Scenarios tab.

While you are developing your different objects, keep in mind that you can Regenerate existing scenarios. This is faster than deleting existing ones only to re-create them with the same version number. To re-generate a scenario, simply right-click on the existing version and select Regenerate ... .

From an execution perspective, you can specify that the scenario you will execute is version -1 (negative one) to ensure that the latest version number is always the one executed. This is a lot easier than editing the parameters with each new release.

DISPLAYING PARALLEL PROCESSING

You will notice that as of 10.1.3.4, ODI does not graphically differentiate between serialized and parallelized executions: all are represented in a serial manner. One way to make parallel executions more visible is to stack up the objects vertically, versus the more natural horizontal layout for serialized objects. (If we have electricians reading this, the layout will be very familiar to them, but this is only a coincidence...)


OTHER OBJECTS THAN SCENARIOS

Scenarios are not the only objects that will allow for parallel (or Asynchronous) execution. If you look at the ODI tool OdiOSCommand, you will notice a Synchronous option that will allow you to define if the external component you are executing will run in parallel with the current process, or if it will be serialized in your process. The same is true for the Data Quality tool OdiDataQuality.

EXECUTION LOGS

As you start running more processes in parallel, be ready to see more processes being executed concurrently in the Operator interface. If you are only interested in seeing the master processes, though, the Hierarchy tab will allow you to limit your view to parent processes. Child processes will be listed under the Child Sessions entry under each session.

Likewise, when you access the logs from the web front end, you can view the Parent processes only.

Enjoy!

Screenshots were taken using version 10.1.3.5 of ODI. Actual icons and graphical representations may vary with other versions of ODI.

Monday Nov 09, 2009

The Benefits of ODI Knowledge Modules: a Template Approach for Better Data Integration

This post assumes that you have some level of familiarity with ODI. The concept of Knowledge Modules is used here assuming that you understand it in the context of ODI. If you need more details on these elements, please refer to the ODI Tutorial for a quick introduction, or to the complete ODI documentation for detailed information.

At the core, ODI knowledge modules are templates of code that will be leveraged for data integration tasks: they pre-define data integration steps that are required to extract, load, stage - if needed - and integrate data.

Several types of Knowledge Modules are available, and are grouped in families for Loading operations (LKMs), Integration operations (IKMs), Data Control operations (CKMs), and more.

For a more detailed description of what a knowledge module is, simply picture the multiple steps required to load data from a flat file into a database. You can connect to the file using JDBC, or leverage the database native loading utility (sqlldr for Oracle, bcp for SQL Server or Sybase, load for db2, etc.). External tables are another alternative for databases that support this feature.
As you use one or the other technique, you may first want to stage the data before loading your actual target table; in other cases, staging will only slow down your data load.

As far as the integration in the target system is concerned, again multiple strategies are available: simple inserts, inserts and updates, upserts, slowly changing dimension... these techniques may be as simple as one step, or be a complex series of commands that must be issued to your database for proper execution.

The Knowledge Modules will basically list these steps so that a developer who needs to repeat the same integration pattern only has to select the appropriate templates, versus re-developing the same logic over and over again.
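As a simplified sketch of what such a template can look like (loosely modeled on the shipped append-style IKMs, not an exact copy of any of them), the body of an insert step mixes fixed SQL with odiRef substitution calls that ODI resolves at code-generation time:

insert into <%=odiRef.getTable("L", "TARG_NAME", "A")%>
(
<%=odiRef.getColList("", "[COL_NAME]", ",\n  ", "", "INS")%>
)
select
<%=odiRef.getColList("", "[EXPRESSION]", ",\n  ", "", "INS")%>
from <%=odiRef.getFrom()%>
where (1=1)
<%=odiRef.getJoin()%>
<%=odiRef.getFilter()%>

At generation time, the placeholders are replaced with the target table name, the mapped columns and expressions, and the join and filter clauses defined in the interface.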

The immediate benefits of this approach are well known and well documented:
- All developers use the same approach, and code development is consistent across the company, hence guaranteeing the quality of the code
- Productivity is greatly improved, as proven paths are re-used rather than re-developed
- Code improvements and modifications can be centralized and have a much broader impact: optimizations and regulatory changes are done once and inherited by all processes
- Maintenance is greatly simplified

To fully appreciate all the benefits of using knowledge Modules, there is a lot more that needs to be exposed and understood about the technology. This post is a modest attempt at addressing this need.

GENERATION OF CODE AND TRANSFORMATIONS

Most tools today will offer the ability to generate SQL code (or some other type of code, such as scripts) on your source or target system. As most products come with a transformation engine, they will also generate proprietary code for this engine where data is staged (I'll skip the debate here as to whether a transformation engine is a staging area or not - the point being that code can be generated on either source, "middle-tier" or target).

However, real life requirements are rarely either/or. Often times, it makes sense to leverage all systems to optimize the processing: spread out the load for the transformations, reduce the amount of data to be transferred over the network, process the data where it is versus moving the data around solely for the purpose of transformations.

To achieve this, Data Integration tools must be able to distribute the transformation logic across the different systems.


Only ODI will effectively generate code and transformations on all systems. This feature is only possible thanks to the KM technology.

Beyond the ability to generate code, you have to make sure that the generated code is the best possible code for the selected technology. Too often, tools first generate code that is then translated for the appropriate database. With the KMs technology, no translation is required: the generated code was initially conceived explicitly for a given technology, hence taking advantage of all the specifics of this technology.

And since the KMs are technology specific, there is no limit to what can be leveraged on the databases, including user defined functions or stored procedures.


CODE ADAPTABILITY

Whenever a tool generates code, the most common complaint is that there is very little (if any) control over the generated result. What if a simple modification of the code could provide dramatic performance improvements? Basic examples would include index management, statistics generation, joins management, and a lot more.

The KM technology is open and extensible so that developers have complete control over the code generation process. Beyond the ability to optimize the code, they can extend their solution to define and enforce in-house best practices, and comply with corporate, industry or regulatory requirements. KM modifications are done directly from the developer's graphical interface.

One point that can easily be adapted is whether data have to be materialized throughout the integration process. Some out-of-the-box KMs will explicitly land data in a physical file or tables. Others will avoid I/Os by leveraging pipes instead of files, views and synonyms instead of tables. Again, developers can adapt the behavior to their actual requirements.

EXPANDING THE TOOL TO NEW TECHNOLOGIES

How much time does it take to adapt your code to a new release of your database? How much time does it take to add a new technology altogether? In both cases, KMs will provide a quick and easy answer.

Let us start with the case of a new version of the database. While our engineering teams will release new KMs as quickly as possible to take advantage of the latest releases of any new database, you do not have to wait for them. A new release typically means new parameters for your DDL and DML, as well as new functions for your existing transformations. Adapt the existing KMs with the features you need, and in minutes your code is ready to leverage the latest and greatest of your database.

Likewise, if you ever need to define a new technology that is not listed by ODI (in spite of the already extensive list we provide), simply define the behavior of this technology in the Topology interface, and design technology-specific KMs to take advantage of the specific features of this database. I can guarantee you that at least 80% of the code you need is already available in an existing KM... thus dramatically reducing the amount of effort required to generate code for your own technology.

ARE KM MODIFICATIONS REQUIRED?

I am a strong advocate of the customization of KMs: I like to get the best I can out of what I am given. But often times, good enough is more than enough. I will always remember trying to optimize performance for a customer: we did not know initially what our processing window would be - other than "give us your best possible performance". The first out-of-the-box KM we tried processed the required 30,000,000 records in 20 minutes. Due to IT limitations, we could only leverage lesser systems for the faster KMs... but still brought the processing time down to 6 minutes for the same volume of data. We started modifying KMs to get even better results, when the customer admitted that we actually had 3 hours for the process to complete... At this point, spending time on KM modifications was clearly not needed anymore.

KMs are meant to give the best possible performance out of the box. But every environment is unique, and assuming that we can have the best possible code for you before knowing your own specific challenges would be an illusion - hence the ability to push the product and the code to the limit.

Another common question is: do you have to leverage both source and target systems as part of your transformations? Clearly, the answer is no. But in most cases, it is crucial to have the flexibility to leverage all systems, versus being cornered into using only one of them. Over time, you will want to reduce the volume of data transferred over the network; you will want to distribute some of your processing... all the more reason to leverage all available engines in your environment.

Do not hesitate and share with us how you extend your KMs!

Screenshots were taken using version 10.1.3.5 of ODI. Actual icons and graphical representations may vary with other versions of ODI.

 

Friday Oct 09, 2009

Did You Know that ODI Automatically Summarizes Data Errors?

Looking for Data Integration at OpenWorld 2009? Click here!

This post assumes that you have some level of familiarity with ODI. The concepts of Interface, Flow and Static Control, as well as Knowledge Module, are used here assuming that you understand them in the context of ODI. If you need more details on these elements, please refer to the ODI Tutorial for a quick introduction, or to the complete ODI documentation for detailed information.


TABLES GENERATED BY ODI TO IDENTIFY ERRORS

If you take advantage of either Flow Control or Static Control in your interfaces, you know that ODI will automatically trap errors for you as you run your interfaces.

When you select the Controls tab of your interface, where you will decide which Knowledge Module will be used to identify the errors, you have an option to drop the Error table and another one to drop the Check table. Have you ever wondered what these are?

Interface Controls Tab

The Error table is the table that will be created by ODI to store all errors trapped by the FLOW_CONTROL and STATIC_CONTROL of your interface. You have probably already used the error table. This table is structured after your target table, along with administrative information needed to re-cycle or re-process the invalid records. It is loaded by ODI with all records that fail to pass the validation of the rules defined on your Target table. This feature is often referred to as a Data Quality Firewall as only the "good" data will make it to the target table.

Once all errors have been identified for a given interface, ODI will summarize them into another table: the Check table. There will be only one such table per data server: all target tables in the server (irrespective of their schema) will share the same summary table. The name of this table is defined by default by the CKMs as SNP_CHECK_TAB.


LOCATION OF THE CHECK TABLE

You will find the check table in the default work schema of your server. To locate this schema, you have to go back to topology, in the Physical Architecture tab. Expand your data server to list the different physical schemas. One of the schemas is your default schema and will be identified by a checkmark on the schema icon (see SALES_DWH in the example below).

Default Schema

When you edit the schema, it has an associated work schema. The work schema associated to your default schema is your default work schema: ODI_TMP in the following example.

Default Work Schema

 

Note that you can change your default schema by selecting/unselecting the default option in the schema definition. But remember that you will always need exactly one default schema for each server.

 

Now that we know where to find this table, let's look at its structure:

  • CATALOG_NAME, SCHEMA_NAME: location of the table that was being loaded (i.e. the target table)
  • RESOURCE_NAME, FULL_RES_NAME: name of the table that was being loaded
  • ERR_TYPE: type of control that was performed (Flow Control or Static Control)
  • ERR_MESS: plain English error message associated with the error
  • CHECK_DATE: date and time of the control
  • ORIGIN: name of the ODI process that identified the errors
  • CONS_NAME: name of the constraint (as defined in the ODI Models) that defines the rule that the record violated
  • CONS_TYPE: type of error (duplicate primary key, invalid reference, conditional check failed, Null Value)
  • ERR_COUNT: number of records identified by the process that failed to pass that specific control rule.

 


A sample of the data available in that summary table is shown below (we split the content in 2 screenshots to make this more readable - this is one and only one table):

Errors Summary Data

Errors Summary Data2

There are many possible uses for this table: decision making in your ODI processes based on the number or type of errors identified, basic reporting on errors trapped by ODI, trend analysis on the evolution of errors over time...
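For example, assuming the default SNP_CHECK_TAB name and the ODI_TMP work schema used above (adjust both to your own environment), a simple summary per constraint could be obtained with a query along these lines:

select ORIGIN, CONS_NAME, CONS_TYPE, sum(ERR_COUNT) as TOTAL_ERRORS
from ODI_TMP.SNP_CHECK_TAB
group by ORIGIN, CONS_NAME, CONS_TYPE
order by TOTAL_ERRORS desc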

Do not hesitate and share with us how you leverage this table!

Screenshots were taken using version 10.1.3.5 of ODI. Actual icons and graphical representations may vary with other versions of ODI.

 

Thursday Oct 01, 2009

Creating a New Knowledge Module for Sample Data Sets Generation

Looking for Data Integration at OpenWorld 2009? Click here!

The posts in this series assume that you have some level of familiarity with ODI. The concepts of Interface and Knowledge Module are used here assuming that you understand them in the context of ODI. If you need more details on these elements, please refer to the ODI Tutorial for a quick introduction, or to the complete ODI documentation for detailed information.
In particular, to learn more on Knowledge Modules, I strongly recommend the Knowledge Module Developer's Guide - Fundamentals that comes with the product. You will have to download and install ODI to access this document in the Documentation Library.

This post will look into "when" and "how" to create a knowledge module. Then it will walk through some of the choices that can be made when designing a Knowledge Module.

To illustrate the descriptions, we are working on an example described previously in this post.


1. WHAT BELONGS TO A KNOWLEDGE MODULE?

The first element to look into is what parts of the logic of your code are reusable. What you are typically looking for are the following:

  • Sequences of steps that are repeated commonly, even though some steps may be optional. For instance: creation of a staging table, creation of a script or parameter file for a utility, invoking an external program, extraction of data from a database, etc.
  • For each step, identification of the non variable parts vs. the variable parts. For instance, in a select statement, the body of the code remains the same. In the following example, the elements in brackets are variables, the others are fixed:
    • Insert into [TableName] ([ListOfColumns]) select ([ListOfColumns and Expressions]) from [List of Tables] where [conditions]

  • For your module to be re-usable, you want to make sure that no information that physically ties your code to any specific system or table structure remains hard-coded. The idea behind the KMs is that as developers build their transformations and mappings, ODI will "fill in the blanks" with the appropriate data.

The easiest way to get started with a knowledge module is actually to take an existing one and modify it. As the syntax has already been validated in existing KMs, the amount of work required to produce valid code will be greatly reduced.
In most cases, column names and mapping expressions do not belong in a knowledge module. The exception would be administrative columns that you add as part of the logic of your KM. For instance, most Incremental Update knowledge modules that ship with ODI create an IND_UPDATE column to differentiate records that will be updated from those that will be inserted. These columns definitely belong in the code of the KM.

 

Likewise, you may want to create your own tables (administrative tables, audit tables, etc.) with a very static name. These can be created by the Knowledge Module. But in general, it is better to name the dynamic table after the table being loaded, to prevent multiple processes running in parallel from trying to use the same intermediate table.


2. DO I HAVE A CASE FOR A KNOWLEDGE MODULE?

Any technique used to extract data out of a database (or file, or messaging system, or web service for that matter) can be a good opportunity to create a new KM. The same is true for loading techniques and integration techniques: inserts, updates, slowly changing dimension, etc.

In the scenario that we are contemplating, we want to insert data (albeit random data) into a table, so we probably have a good case for a knowledge module.

The first step is usually to look for available techniques, try the code independently of any knowledge module, and check out how it behaves: how is performance? How does the code behave when data volume grows? You want to make sure that the code you will integrate as a template is as good as it can be before you share it with the entire corporation!

Typically, extracting from a source system to stage data is done in an LKM. Loading data into a target table is done with an IKM. In our case, we will clearly create an IKM.


3. RANDOM DATA GENERATION: CODE GENESIS

For our example, will start with a KM that works exclusively for Oracle Databases. Adaptations of the code will be possible later on to make similar processes run on other databases.

The Oracle database provides a fantastic feature that we can leverage to generate a large number of records: GROUP BY CUBE, which returns all the possible permutations for the selected columns. So the following code:

select NULL from dual group by cube(1,1,1)

returns 8 records (2 to the power 3). Add columns to the list, and you are adding an exponential number of records.

Now when I played with this function on my (very) little installation of the database, I seemed to hit a limit for (very) large permutation numbers. I have to admit that I am not using the function in its expected fashion, so I cannot really complain. But at least I can easily generate 1024 records (2 to the power 10). Now from a usability perspective, I do not really want to use that number for the users of my KM (1024 has a geeky flavor to it, doesn't it?). How about generating a table with just 1,000 records?

The following code will do the trick:

select NULL
from (select NULL from dual group by cube(1,1,1,1,1,1,1,1,1,1))
where rownum <= 1000

Note that so far, all the instructions we have are hard-coded. We still do not have anything that would be dynamic in nature.

Now we need to use the above query to create some table with our 1,000 records. Again, we can hard-code the table name - but this does not make for very portable code. In particular, from one environment to the next, the database schema names will vary. We have three options to create our staging table, from the least portable to the most portable:

  • Hardcoded table name and schema name: myschema.SEED
  • Dynamic schema name, hardcoded table name: let ODI retrieve the proper schema name through its substitution mechanism and automatically update the code at execution time (generated code: myschema.SEED)
  • Fully dynamic table name and schema name: dynamic tables are usually named after the target table with some sort of suffix, such as _SEED (generated code: if you are loading TRG_CUSTOMERS, then the seed table name is myschema.TRG_CUSTOMERS_SEED)

Best practice is of course to use the last one of these options to allow for multiple processes to run in parallel. To keep our explanations simple, we will use the second option above - but keep in mind that best practice would be to use the fully dynamic one.

 

As we will use our KM over and over, it is important to make the developer's life easy. Steps have to be included here to create our seeding table, drop it when we are done, and make sure before we create it that it is not there from a previous run that could have failed.
The typical sequence of steps for a KM creating any type of staging table is:

  • Drop table (and ignore errors - if there is no table, we are fine)
  • Create table (re-create it to reflect any possible meta-data changes)
  • Load the table with staging data - in our case a sequence of numbers that we will be able to leverage later on for filtering... (please be patient: we will come back to this). Here a rownum will do the trick...

Now that we have the code to insert data into our staging table, we can put all the pieces together and have the first three steps of our knowledge module. Keep in mind that you have to be consistent from one step to the next as you name your table. The actual knowledge module with all the matching code is available here (look for KM_IKM Oracle - Build Sample Data - Gen II.xml).
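In plain SQL, before any KM substitution and sticking with the hardcoded myschema.SEED name for readability, those three steps boil down to something like this sketch:

-- step 1: drop the staging table (errors ignored if it does not exist yet)
drop table myschema.SEED;
-- step 2: (re-)create it
create table myschema.SEED (SEED_ID number);
-- step 3: load it with a sequence of 1,000 numbers
insert into myschema.SEED (SEED_ID)
select rownum
from (select NULL from dual group by cube(1,1,1,1,1,1,1,1,1,1))
where rownum <= 1000;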

 


4. COMPLETING THE CODE

So far our table only has 1,000 records. Not much in terms of volume. But all we need now to create a table with 1,000,000 records... is a Cartesian product (you know, the one thing your mother told you NOT to do with a database? It comes in very handy here!):

insert into [Target] ([Columns]) Select * from SEED S1, SEED s2

And if we want to return less records, all we have to do is filter on the S2 table. For instance the clause:

where S2.SEED_ID<=10

will return 10,000 records! Remember when we stored rownums in this table earlier? This is where they become very handy...
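Putting the pieces together, a concrete sketch of the final statement (the target table TRG_SAMPLE and its SAMPLE_ID column are hypothetical names used only for this example) could be:

insert into myschema.TRG_SAMPLE (SAMPLE_ID)
select rownum
from myschema.SEED S1, myschema.SEED S2
where S2.SEED_ID <= 10;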

So far the only thing we have done is to generate a fairly large number of records. Where the exercise becomes even more interesting is if we can generate data for each record that matches our requirements for sample data. In a previous post we have seen how to generate User Functions in ODI to abstract this type of random generation. The code for the sample data generation typically does not belong to the Knowledge Module as it would not give us enough flexibility for all the possible combinations out there.

The User Function examples used before can generate numerics and strings. We could expand their design to generate random phone numbers, or random social security numbers... and work with random data that will now look like real data, instead of exposing sensitive information.

5. ADDING FLEXIBILITY TO THE KNOWLEDGE MODULE

The fact that User Functions do not belong to the code of the Knowledge Module does not mean that there is no flexibility in the Knowledge Modules. Here, we are building a sample table out of thin air: it is very possible that the table does not exist in the first place. Or if we want to run multiple tests with different amounts of records each time, we may want to truncate the table before each new run.

KM options are a very simple way to toggle such behaviors in a KM. We can create 2 options: CREATE_TABLE and TRUNCATE. Then create the appropriate steps in our KM:

  • Create Target Table
  • Truncate table

When you want to associate an option to a given step, edit the step itself; then click on the "option" tab and un-check "always execute". Select only the appropriate option in the list and click OK...

 


As we define the options, it is also good to think of the most common usage for the KM. In our case, chances are we will often want to create the table and truncate it for successive runs: we can then define that these steps will be executed by default (set the default for the options to "Yes"). More conventional KMs would typically have these options, but their defaults would be set to "No".

We now have a complete Knowledge Module that can be used to generate between 1,000 and 1,000,000 records in any table of your choice, complete with options that will let the users of the KM adapt the behavior to their actual needs...

Again, if you want to review all the code in details, it is available here (look for KM_IKM Oracle - Build Sample Data - Gen II.xml).

Enjoy!

Screenshots were taken using version 10.1.3.5 of ODI. Actual icons and graphical representations may vary with other versions of ODI.

Sunday Sep 20, 2009

ODI User Functions: A Case Study

Looking for Data Integration at OpenWorld 2009? Check out here!

The posts in this series assume that you have some level of familiarity with ODI. The concepts of Interface and User Function are used here assuming that you understand them in the context of ODI. If you need more details on these elements, please refer to the ODI Tutorial for a quick introduction, or to the complete ODI documentation for detailed information.

This post will give some examples of how and where user functions can be used in ODI. We will look back at a previous post and see why and how user functions were created for this case.

1. A CASE FOR USER FUNCTIONS

As I was trying to design a simple way to generate random data, I needed a simple approach to specify the type of data to be generated. An example would be to easily generate a random string.

Working on Oracle for this example, I could leverage the database function DBMS_RANDOM.STRING. It takes 2 parameters: one for the type of characters (uppercase, lowercase, mixed case), and one for the length of the string. But I want a little more than this: I also want the ability to force my generation to have a minimum number of characters. For this, I now need to generate a random number. The database function DBMS_RANDOM.VALUE does this, but returns a decimal value. Well, the TRUNC function can take care of this... but my mapping expression becomes somewhat complex:

DBMS_RANDOM.STRING(Format, TRUNC(DBMS_RANDOM.VALUE(MinLen,MaxLen)))

Imagine now using this formula over and over again in your mappings - not the easiest and most readable portion of code to handle. And maintenance will easily become a nightmare if you simply cut and paste...

A user function makes sense at this point, and will provide the following benefits:


  • A readable name that makes the usage and understanding of the transformation formula a lot easier
  • A central place to maintain the transformation code
  • The ability to share the transformation logic with other developers who do not have to come up with the code for this anymore.

 

From a code generation perspective, if you use the User Function in your interfaces, ODI will replace the function name with the associated code and send that code to the database. Databases will never see the User Function name - the substitution is part of the code generation.

2. BUILDING THE USER FUNCTION

You need to provide 3 elements when you build a user function:


  • A name (and a group name to organize the functions)
  • A syntax
  • The actual code to be generated when the developers will use the functions

 


2.1 Naming the User Function

The name of the user function is yours to choose, but make sure that it properly describes what this function is doing: this will make the function all the more usable for others. You will also notice a drop down menu that will let you select a group for this function. If you want to create a new group, you can directly type the group name in the drop down itself.

For our example, we will name the user Function RandomString and create a new group called RandomGenerators.

User Function Name


2.2 The User Function Syntax

The next step will be to define the syntax for the function. Parameters are defined with a starting dollar sign ($) and enclosed in parentheses: $(ParameterName). You can name your parameters as you want: these names will be used, along with the $ and (), when you put together the code for the user function. You can have no parameters or as many parameters as you want...

In our case, we need 3 parameters: One for the string format, one for the minimum length of the generated string, one for the maximum length. Our syntax will hence be:

RandomString($(Format), $(MinLen), $(MaxLen))

If you want to force the data types for the parameters, you can do so by adding the appropriate character after the parameter name: s for String, n for Numeric and d for Date. In that case our User Function syntax would be:

RandomString($(Format)s, $(MinLen)n, $(MaxLen)n)

User Function Syntax


2.3 Code of the User Function

The last part in the definition of the user function will be to define the associated SQL code. To do this, click on the Implementation tab and click the Add button. A window will pop up with two parts: the upper part is for the code per se, the bottom part is for the selection of the technologies where this syntax can be used. Whenever you use the User Function in your mappings, if the underlying technology is one of the ones you have selected, ODI will substitute the function name with the code you enter here.

For our example, select Oracle in the list of Linked Technologies and type the following code in the upper part:

DBMS_RANDOM.STRING($(Format), TRUNC(DBMS_RANDOM.VALUE($(MinLen),$(MaxLen))))

Note that we are replacing here the column names with the names of the parameters we have defined in the syntax field previously.

User Function Code

Click on the Ok button to save your syntax and on the Ok or Apply button to save your User Function.

3. EXPANDING THE USER FUNCTIONS TO OTHER TECHNOLOGIES

Once we are done with the previous step, the user function is ready to be used. One downside with our design so far: it can only be used on one technology. Chances are we will need more flexibility.

One key feature of the User Functions is that they give you the opportunity to enter the matching syntax for any other database of your choice. No matter which technology you use later on, you will only have to provide the name of the user function, irrespective of the underlying technology.

To add other implementations and technologies, simply go back to the Implementation tab of the User Function, click the Add button. Select the technology for which you are defining the code, and add the code.

Note that you can select multiple technologies for any given code implementation: these technologies will then be listed next to one another.

4. USING THE USER FUNCTIONS IN INTERFACES

To use the User Functions in your interfaces, simply enter them in your mappings, filters, joins and constraints the same way you would use SQL code.

Interface Mappings
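For example, with the RandomString function defined above, a mapping expression such as RandomString('U', 5, 10) (where 'U' asks DBMS_RANDOM for uppercase characters) is expanded by ODI into the Oracle implementation we entered earlier:

DBMS_RANDOM.STRING('U', TRUNC(DBMS_RANDOM.VALUE(5,10)))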

When you execute your interface, the generated code can be reviewed in the Operator interface. Note that you should never see the function names in the generated code. If you do, check the following elements:


  • User Function names are case sensitive. Make sure that you are using the appropriate combination of uppercase and lowercase characters
  • Make sure that you are using the appropriate number of parameters, and that they have the appropriate type (string, number or date)
  • Make sure that there is a definition for the User Function for the technology on which it is running. This last case may be the easiest one to overlook, so try to keep it in mind!

As long as you see SQL code in place of the User Function name, the substitution happened successfully.

 

Enjoy!

All Screenshots were taken using version 10.1.3.5 of ODI. Actual icons and graphical representations may vary with other versions of ODI.

Sunday Sep 13, 2009

How to Define Multi Record Format Files in ODI

The posts in this series assume that you have some level of familiarity with ODI. The concepts of Datastore, Model and Logical Schema are used here assuming that you understand them in the context of ODI. If you need more details on these elements, please refer to the ODI Tutorial for a quick introduction, or to the complete ODI documentation for more details.

It is not unusual to have to load files containing various record formats. For example, a company might store orders and order lines using distinct record formats in the same flat file, or you might have one single file containing a header, some records and a footer.
In this post, we'll use the following source file as an example:


1,101,2009/09/01,Computer Parts
2,234,101,Motherboard, Asus P6T,239.99
2,235,101,CPU,Intel Celeron 430,40
1,102,2009/09/02,Computer Parts
2,301,102,CPU,AMD Phenom II X4,170
1,103,2009/09/05,Printers
2,401,103,Inkjet Printer,Canon iP4600,69.99
2,402,103,Inkjet Printer,Epson WF30,39.99
2,403,103,Inkjet Printer,HP Deskjet D2660,49.99

As we can see, the Order and Order Line records have different formats (one has 4 fields, the other has 6); they could also have different field separators.

Identifying the Record Codes

The first step in order to handle such a file in ODI is to identify a record code; this record code should be unique for a particular record type. In our example the record code will be used by ODI to identify whether a record is an Order or an Order Line. All the Order records should have the same record code, and this also applies to the Order Line records.
In our example the first field indicates the record code:
- 1 for Order records.
- 2 for Order Line records.

Define the Datastores

We assume that you have already created a Model using a Logical Schema that points to the directory containing your source file.

We will start by defining a datastore for the Order records.

Right-click on the File model and select Insert Datastore.
In the Definition tab, enter a name and specify the flat file resource name in the Resource Name field. 


 

 

In the Files tab, specify your flat file settings (delimiter, field separator etc.)

Refer to the ODI documentation for additional information regarding how to define a flat file datastore.

 

 

In our example the Order records have 4 fields:
- RECORD_TYPE
- ORDER_ID
- ORDER_DATE
- ORDER_TYPE

Go to the columns tab and add those 4 columns to your datastore.

Now specify the Record Code in the 'Rec. Code' field of the RECORD_TYPE column.

 

 


Click OK.

 

 

In the Models view, right-click on the datastore and select View Data to display the file content and make sure it is defined correctly. 

 


The data is filtered based on the record code value; we only see the Order records.
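For the sample file above, the View Data window would therefore show only the three records whose first field is 1:

1,101,2009/09/01,Computer Parts
1,102,2009/09/02,Computer Parts
1,103,2009/09/05,Printers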

 

We will now apply the same approach to the Order Lines record.

Right-click on the File model and select Insert Datastore to add a second datastore for the Order Lines record.

In the Definition tab, enter a name and specify the flat file resource name in the Resource Name field. We are pointing this datastore to the same file we used for the Order records. 

 


In the Files tab, specify your flat file settings (delimiter, field separator etc.).
Refer to the ODI documentation for additional information regarding how to define a flat file datastore.

 

 

In our example the Order Lines records have 6 fields:
- RECORD_TYPE
- LORDER_ID
- ORDER_ID
- LINE_ORDER_TYPE
- ITEM
- PRICE

Go to the columns tab and add those 6 columns to your datastore.

Now specify the Record Code in the 'Rec. Code' field of the RECORD_TYPE column.

 


Click OK.

 

Right-click on the datastore and select View Data to display the file content and make sure it is defined correctly.

 


The data is filtered based on the record code value; we only see the Order Line records.

You can now use those 2 datastores in your interfaces.

 

All Screenshots were taken using version 10.1.3.5 of ODI. Actual icons and graphical representations may vary with other versions of ODI.

About

Learn the latest trends, use cases, product updates, and customer success examples for Oracle's data integration products-- including Oracle Data Integrator, Oracle GoldenGate and Oracle Enterprise Data Quality
