Wednesday Jan 29, 2014

ODI 12.1.2 Demo on the Oracle BigDataLite Virtual Machine

Oracle's big data team has just announced the Oracle BigDataLite Virtual Machine, a pre-built environment reflecting the core software of Oracle's Big Data Appliance 2.4. BigDataLite is a VirtualBox VM that contains a fully configured Cloudera Hadoop distribution (CDH 4.5), Oracle Database 12c, Oracle's Big Data Connectors, Oracle Data Integrator 12.1.2, and other software.

You can use this environment to see ODI 12c in action integrating big data with Oracle DB using ODI's declarative graphical design, efficient E-LT loads, and Knowledge Modules designed to optimize big data integration. 

The sample data contained in BigDataLite represents the fictional Oracle MoviePlex on-line movie streaming company. The ODI sample performs the following two steps:

  • Pre-process application logs within Hadoop: All user activity on the MoviePlex web site is gathered on HDFS in Avro format. ODI reads these logs through Hive and processes the activities by aggregating, filtering, joining, and unioning the records in an ODI flow-based mapping. All processing is performed inside Hive map-reduce jobs controlled by ODI, and the resulting data is stored in a staging table within Hive.
  • Loading user activity data from Hadoop into Oracle: The previously pre-processed data is loaded from Hadoop into an Oracle 12c database, where it can be used as the basis for Business Intelligence reports. ODI uses the Oracle Loader for Hadoop (OLH) connector, which executes distributed map-reduce processes to load data in parallel from Hadoop into Oracle. ODI transparently configures and invokes this connector through the Hive-to-Oracle Knowledge Module.

Both steps are orchestrated and executed through an ODI Package workflow. 

Demo Instructions

Please follow these steps to execute the ODI demo in BigDataLite:

  1. Download and install BigDataLite. Please follow the instructions in the Deployment Guide on the download page.
  2. Start the VM and log in as user oracle, password welcome1.
  3. Start the Oracle Database 12c by double-clicking the icon on the desktop.


  4. Start ODI 12.1.2 by clicking the icon on the toolbar.


  5. Press Connect To Repository... on the ODI Studio window. 


  6. Press OK in the ODI Login dialog.


  7. Switch to the Designer tab, open the Projects accordion and expand the projects tree to Movie > First Folder > Mappings. Double-click on the mapping Transform Hive Avro to Hive Staging.


  8. Review the mapping that transforms source Avro data by aggregating, joining, and unioning data within Hive. You can also review the mapping Load Hive Staging to Oracle the same way. 


  9. In the Projects accordion expand the projects tree to Movie > First Folder > Packages. Double-click on the package Process Movie Data.


  10. The Package workflow for Process Movie Data opens. You can review the package.


  11. Press the Run icon on the toolbar. Press OK for the Run and Information: Session started dialogs. 




  12. You can follow the progress of the load by switching to the Operator tab and expanding All Executions and the topmost Process Movie Data entry. You can refresh the display by pressing the refresh button or by setting Auto-Refresh. 


  13. Depending on the environment, the load can take 5-15 minutes. When the load is complete, the execution will show all green checkmarks. You can traverse the operator log and double-click entries to explore statistics and executed commands. 

This demo shows only some of ODI's big data capabilities.


Wednesday Jan 22, 2014

Deep Dive Into Oracle Data Integrator Changed Data Capture Leveraging Oracle GoldenGate

Check out the blog post below from Christophe – it first covers the details of Oracle Data Integrator's (ODI) Journalizing Knowledge Modules (JKMs), and then takes a deeper dive into the particulars of the seamless out-of-the-box integration between ODI and Oracle GoldenGate.

http://www.ateam-oracle.com/understanding-the-odi-jkms-and-how-they-work-with-oracle-goldengate/

Happy reading!

Monday Nov 09, 2009

The Benefits of ODI Knowledge Modules: a Template Approach for Better Data Integration

This post assumes that you have some level of familiarity with ODI. The concept of Knowledge Modules is used here assuming that you understand it in the context of ODI. If you need more details on these elements, please refer to the ODI Tutorial for a quick introduction, or to the complete ODI documentation for detailed information.

At the core, ODI Knowledge Modules are code templates leveraged for data integration tasks: they pre-define the integration steps required to extract, load, stage (if needed), and integrate data.

Several types of Knowledge Modules are available, and are grouped in families for Loading operations (LKMs), Integration operations (IKMs), Data Control operations (CKMs), and more.

For a more detailed description of what a knowledge module is, simply picture the multiple steps required to load data from a flat file into a database. You can connect to the file using JDBC, or leverage the database native loading utility (sqlldr for Oracle, bcp for SQL Server or Sybase, load for db2, etc.). External tables are another alternative for databases that support this feature.
As you use one or the other technique, you may first want to stage the data before loading your actual target table; in other cases, staging will only slow down your data load.
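As an illustration of one of these techniques, here is a hedged sketch of a file load through an Oracle external table; the table, column, directory, and file names are all hypothetical:

```sql
-- Hypothetical example: expose a flat file as an Oracle external table,
-- then load the target with a plain INSERT ... SELECT (no separate staging).
CREATE TABLE customers_ext (
  cust_id   NUMBER,
  cust_name VARCHAR2(100)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY src_dir          -- assumed database directory object
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('customers.csv')         -- assumed source file
);

INSERT INTO customers (cust_id, cust_name)
SELECT cust_id, cust_name FROM customers_ext;
```

A Knowledge Module would generate the equivalent of these statements, substituting the actual table, column, and file names at design time.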

As far as the integration in the target system is concerned, again multiple strategies are available: simple inserts, inserts and updates, upserts, slowly changing dimension... these techniques may be as simple as one step, or be a complex series of commands that must be issued to your database for proper execution.
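For instance, an "inserts and updates" (upsert) strategy on Oracle can be collapsed into a single MERGE statement; an Incremental Update template might generate something along these lines (table and column names are hypothetical):

```sql
-- Hypothetical upsert step of the kind an Incremental Update KM would generate:
-- update matching rows in the target, insert the rest.
MERGE INTO customers t
USING stg_customers s
  ON (t.cust_id = s.cust_id)
WHEN MATCHED THEN
  UPDATE SET t.cust_name = s.cust_name
WHEN NOT MATCHED THEN
  INSERT (cust_id, cust_name)
  VALUES (s.cust_id, s.cust_name);
```

A slowly changing dimension strategy, by contrast, would expand into a longer series of such commands.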

The Knowledge Modules will basically list these steps so that a developer who needs to repeat the same integration pattern only has to select the appropriate templates, versus re-developing the same logic over and over again.

The immediate benefits of this approach are well known and well documented:
- All developers use the same approach, and code development is consistent across the company, guaranteeing the quality of the code
- Productivity is greatly improved, as proven paths are reused rather than re-developed
- Code improvements and modifications can be centralized and have a much broader impact: optimizations and regulatory changes are done once and inherited by all processes
- Maintenance is greatly simplified

To fully appreciate all the benefits of using Knowledge Modules, there is a lot more that needs to be exposed and understood about the technology. This post is a modest attempt at addressing this need.

GENERATION OF CODE AND TRANSFORMATIONS

Most tools today will offer the ability to generate SQL code (or some other type of code, such as scripts) on your source or target system. As most products come with a transformation engine, they will also generate proprietary code for this engine where data is staged (I'll skip the debate here as to whether a transformation engine is a staging area or not - the point being that code can be generated on either source, "middle-tier" or target).

However, real life requirements are rarely either/or. Often times, it makes sense to leverage all systems to optimize the processing: spread out the load for the transformations, reduce the amount of data to be transferred over the network, process the data where it is versus moving the data around solely for the purpose of transformations.

To achieve this, Data Integration tools must be able to distribute the transformation logic across the different systems.

KMCodeExecution.PNG

Only ODI will effectively generate code and transformations on all systems. This feature is only possible thanks to the KM technology.

Beyond the ability to generate code, you have to make sure that the generated code is the best possible code for the selected technology. Too often, tools first generate code that is then translated for the appropriate database. With the KMs technology, no translation is required: the generated code was initially conceived explicitly for a given technology, hence taking advantage of all the specifics of this technology.

And since the KMs are technology specific, there is no limit to what can be leveraged on the databases, including user defined functions or stored procedures.

KMCodeGeneration.PNG

CODE ADAPTABILITY

Whenever a tool generates code, the most common complaint is that there is very little (if any) control over the generated result. What if a simple modification of the code could provide dramatic performance improvements? Basic examples would include index management, statistics generation, joins management, and a lot more.

The KM technology is open and extensible, giving developers complete control over the code generation process. Beyond the ability to optimize the code, they can extend their solution to define and enforce in-house best practices, and comply with corporate, industry, or regulatory requirements. KM modifications are done directly from the developer's graphical interface.

One point that can easily be adapted is whether data has to be materialized throughout the integration process. Some out-of-the-box KMs will explicitly land data in physical files or tables. Others will avoid I/O by leveraging pipes instead of files, or views and synonyms instead of tables. Again, developers can adapt the behavior to their actual requirements.

EXPANDING THE TOOL TO NEW TECHNOLOGIES

How much time does it take to adapt your code to a new release of your database? How much time does it take to add a new technology altogether? In both cases, KMs will provide a quick and easy answer.

Let us start with the case of a new version of the database. While our engineering teams will release new KMs as quickly as possible to take advantage of the latest releases of any new database, you do not have to wait for them. A new release typically means new parameters for your DDL and DML, as well as new functions for your existing transformations. Adapt the existing KMs with the features you need, and in minutes your code is ready to leverage the latest and greatest of your database.

Likewise, if you ever need to define a new technology that is not listed by ODI (in spite of the already extensive list we provide), simply define the behavior of this technology in the Topology interface, and design technology-specific KMs to take advantage of the specific features of this database. I can guarantee you that 80% of the code you need (at least!) is already available in an existing KM, dramatically reducing the amount of effort required to generate code for your own technology.

ARE KM MODIFICATIONS REQUIRED?

I am a strong advocate of the customization of KMs: I like to get the best I can out of what I am given. But oftentimes, good enough is more than enough. I will always remember trying to optimize performance for a customer: we did not know initially what our processing window would be - other than "give us your best possible performance". The first out-of-the-box KM we tried processed the required 30,000,000 records in 20 minutes. Due to IT limitations, we could only run the faster KMs on lesser systems, but we still brought the time down to 6 minutes for the same volume of data. We started modifying KMs to get even better results, when the customer admitted that we actually had 3 hours for the process to complete... At that point, spending time on KM modifications was clearly not needed anymore.

KMs are meant to give the best possible performance out of the box. But every environment is unique, and assuming that we can have the best possible code for you before knowing your own specific challenges would be an illusion - hence the ability to push the product and the code to the limit.

Another common question is: do you have to leverage both source and target systems as part of your transformations? Clearly, the answer is no. But in most cases, it is crucial to have the flexibility to leverage all systems, rather than being cornered into using only one of them. Over time, you will want to reduce the volume of data transferred over the network; you will want to distribute some of your processing... all the more reason to leverage all available engines in your environment.

Do not hesitate and share with us how you extend your KMs!

Screenshots were taken using version 10.1.3.5 of ODI. Actual icons and graphical representations may vary with other versions of ODI.

 

Friday Oct 09, 2009

Did You Know that ODI Automatically Summarizes Data Errors?

Looking for Data Integration at OpenWorld 2009? Click here!

This post assumes that you have some level of familiarity with ODI. The concepts of Interface, Flow and Static Control, as well as Knowledge Module are used here assuming that you understand them in the context of ODI. If you need more details on these elements, please refer to the ODI Tutorial for a quick introduction, or to the complete ODI documentation for detailed information.


TABLES GENERATED BY ODI TO IDENTIFY ERRORS

If you take advantage of either Flow Control or Static Control in your interfaces, you know that ODI will automatically trap errors for you as you run your interfaces.

When you select the Controls tab of your interface, where you will decide which Knowledge Module will be used to identify the errors, you have an option to drop the Error table and another one to drop the Check table. Have you ever wondered what these are?

Interface Controls Tab

The Error table is the table that will be created by ODI to store all errors trapped by the FLOW_CONTROL and STATIC_CONTROL of your interface. You have probably already used the error table. This table is structured after your target table, along with administrative information needed to re-cycle or re-process the invalid records. It is loaded by ODI with all records that fail to pass the validation of the rules defined on your Target table. This feature is often referred to as a Data Quality Firewall as only the "good" data will make it to the target table.

Once all errors have been identified for a given interface, ODI will summarize them into another table: the Check table. There will be only one such table per data server: all target tables in the server (irrespective of their schema) will share the same summary table. The name of this table is defined by default by the CKMs as SNP_CHECK_TAB.


LOCATION OF THE CHECK TABLE

You will find the check table in the default work schema of your server. To locate this schema, you have to go back to topology, in the Physical Architecture tab. Expand your data server to list the different physical schemas. One of the schemas is your default schema and will be identified by a checkmark on the schema icon (see SALES_DWH in the example below).

Default Schema

When you edit the schema, it has an associated work schema. The work schema associated with your default schema is your default work schema: ODI_TMP in the following example.

Default Work Schema

 

Note that you can change your default schema by selecting/unselecting the default option in the schema definition. But remember that you will always need exactly one default schema for each server.

 

Now that we know where to find this table, let's look at its structure:

  • CATALOG_NAME, SCHEMA_NAME: location of the table that was being loaded (i.e. the target table)
  • RESOURCE_NAME, FULL_RES_NAME: name of the table that was being loaded
  • ERR_TYPE: type of control that was performed (Flow Control or Static Control)
  • ERR_MESS: plain English error message associated with the error
  • CHECK_DATE: date and time of the control
  • ORIGIN: name of the ODI process that identified the errors
  • CONS_NAME: name of the constraint (as defined in the ODI Models) that defines the rule that the record violated
  • CONS_TYPE: type of error (duplicate primary key, invalid reference, conditional check failed, Null Value)
  • ERR_COUNT: number of records identified by the process that failed to pass that specific control rule.

 

ErrorsSummaryStructure.PNG

A sample of the data available in that summary table is shown below (we split the content into two screenshots to make it more readable - this is one and only one table):

Errors Summary Data

Errors Summary Data2

There are many possible uses for this table: decision making in your ODI processes based on the number or type of errors identified, basic reporting on errors trapped by ODI, trend analysis of the evolution of errors over time...
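For example, a simple error report can be built directly on the Check table described above; this sketch assumes the default SNP_CHECK_TAB name and the columns listed earlier:

```sql
-- Summarize trapped errors per target table and violated constraint,
-- worst offenders first.
SELECT resource_name,
       cons_name,
       cons_type,
       SUM(err_count) AS total_errors
FROM   snp_check_tab
GROUP  BY resource_name, cons_name, cons_type
ORDER  BY total_errors DESC;
```

The same query, filtered on CHECK_DATE, could feed a trend analysis of data quality over time.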

Do not hesitate and share with us how you leverage this table!

Screenshots were taken using version 10.1.3.5 of ODI. Actual icons and graphical representations may vary with other versions of ODI.

 

Thursday Oct 01, 2009

Creating a New Knowledge Module for Sample Data Sets Generation


The posts in this series assume that you have some level of familiarity with ODI. The concepts of Interface and Knowledge Module are used here assuming that you understand them in the context of ODI. If you need more details on these elements, please refer to the ODI Tutorial for a quick introduction, or to the complete ODI documentation for detailed information.
In particular, to learn more about Knowledge Modules, I strongly recommend the Knowledge Module Developer's Guide - Fundamentals that comes with the product. You will have to download and install ODI to access this document in the Documentation Library.

This post will look into "when" and "how" to create a knowledge module. Then it will walk through some of the choices that can be made when designing a Knowledge Module.

To illustrate the descriptions, we are working on an example described previously in this post.


1. WHAT BELONGS TO A KNOWLEDGE MODULE?

The first element to look into is what parts of the logic of your code are reusable. What you are typically looking for are the following:

  • Sequences of steps that are repeated commonly, even though some steps may be optional. For instance: creation of a staging table, creation of a script or parameter file for a utility, invoking an external program, extraction of data from a database, etc.
  • For each step, identification of the non-variable parts vs. the variable parts. For instance, in a select statement, the body of the code remains the same. In the following example, the elements in brackets are variable, the others are fixed:
    • Insert into [TableName] ([ListOfColumns]) select ([ListOfColumns and Expressions]) from [List of Tables] where [conditions]

  • For your module to be re-usable, you want to make sure that no information that physically ties your code to a specific system or table structure is hard-coded. The idea behind the KMs is that as developers build their transformations and mappings, ODI will "fill in the blanks" with the appropriate data.
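To make the distinction concrete, here is one hypothetical instance of the bracketed template above, with the variable parts filled in the way ODI would at generation time (all names are illustrative):

```sql
-- Template:
--   Insert into [TableName] ([ListOfColumns])
--   select ([ListOfColumns and Expressions]) from [List of Tables] where [conditions]
-- One possible generated instance:
INSERT INTO sales_dwh.trg_customers (cust_id, cust_name)
SELECT src.cust_id,
       UPPER(src.cust_name)          -- a mapping expression supplied by the developer
FROM   src_customers src
WHERE  src.cust_id IS NOT NULL;      -- a condition supplied by the developer
```

Only the fixed skeleton belongs to the KM; the table names, columns, expressions, and conditions come from the mapping.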

The easiest way to get started with a knowledge module is actually to take an existing one and modify it. As the syntax has already been validated in existing KMs, the amount of work required to produce valid code will be greatly reduced.
In most cases, column names and mapping expressions do not belong in a knowledge module. The exception would be administrative columns that you add as part of the logic of your KM. For instance, most Incremental Update knowledge modules that ship with ODI create an IND_UPDATE column to differentiate records that will be updated from those that will be inserted. Such columns definitely belong in the code of the KM.

 

Likewise, you may want to create your own tables (administrative tables, audit tables, etc.) with a very static name. These can be created by the Knowledge Module. But in general, it is better to name intermediate tables dynamically after the table being loaded, to prevent multiple processes running in parallel from trying to use the same intermediate table.


2. DO I HAVE A CASE FOR A KNOWLEDGE MODULE?

Any technique used to extract data out of a database (or file, or messaging system, or web service for that matter) can be a good opportunity to create a new KM. The same is true for loading techniques and integration techniques: inserts, updates, slowly changing dimension, etc.

In the scenario that we are contemplating, we want to insert data (albeit random data) into a table, so we probably have a good case for a knowledge module.

The first step is usually to look for available techniques, try the code independently of any knowledge module, and check out how it behaves: how is performance? How does the code behave when data volume grows? You want to make sure that the code you will integrate as a template is as good as it can be before you share it with the entire corporation!

Typically, extracting from a source system to stage data is done in an LKM. Loading data into a target table is done with an IKM. In our case, we will clearly create an IKM.


3. RANDOM DATA GENERATION: CODE GENESIS

For our example, we will start with a KM that works exclusively with Oracle databases. The code can be adapted later on to make similar processes run on other databases.

The Oracle database provides a fantastic feature that we can leverage to generate a large number of records: GROUP BY CUBE. It returns all the possible grouping combinations for the selected columns. So the following code:

select NULL from dual group by cube(1,1,1)

returns 8 records (2 to the power 3). Add columns to the list, and you add an exponential number of records.

Now when I played with this function on my (very) little installation of the database, I seemed to hit a limit for (very) large permutation numbers. I have to admit that I am not using the function in its intended fashion, so I cannot really complain. But at least I can easily generate 1024 records (2 to the power 10). Now from a usability perspective, I do not really want to use that number for the users of my KM (1024 has a geeky flavor to it, doesn't it?). How about generating a table with just 1,000 records?

The following code will do the trick (note that the ROWNUM filter has to go in an outer query, since a WHERE clause cannot follow a GROUP BY):

select NULL from (
    select NULL from dual group by cube(1,1,1,1,1,1,1,1,1,1)
)
where rownum <= 1000

Note that so far, all the instructions we have are hard-coded. We still do not have anything that would be dynamic in nature.

Now we need to use the above query to create some table with our 1,000 records. Again, we can hard-code the table name - but this does not make for very portable code. In particular, from one environment to the next, the database schema names will vary. We have three options to create our staging table, from the least portable to the most portable:

  • Hardcoded table name and schema name: myschema.SEED
  • Dynamic schema name, hardcoded table name: let ODI retrieve the proper schema name and automatically update the code at execution time:  (Generated code: myschema.SEED)
  • Fully dynamic table name and schema name (usually, dynamic tables are named after the target table with some sort of extension): _SEED (generated code: if you are loading TRG_CUSTOMERS, then the SEED table name is myschema.TRG_CUSTOMERS_SEED)

Best practice is of course to use the last one of these options to allow for multiple processes to run in parallel. To keep our explanations simple, we will use the second option above - but keep in mind that best practice would be to use the fully dynamic one.

 

As we will use our KM over and over, it is important to make the developer's life easy. Steps have to be included here to create our seeding table, drop it when we are done, and make sure before we create it that it is not left over from a previous run that could have failed.
The typical sequence of steps for a KM creating any type of staging table is:

  • Drop table (and ignore errors - if there is no table, we are fine)
  • Create table (re-create it to reflect any possible meta-data changes)
  • Load the table with staging data - in our case a sequence of numbers that we will be able to leverage later on for filtering... (please be patient: we will come back to this). Here a rownum will do the trick...
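Put together, these three steps might look like the following sketch (the SEED table name and its single column are illustrative; ODI would substitute the proper schema at execution time):

```sql
-- Step 1: drop the staging table, ignoring "table does not exist" errors.
DROP TABLE seed;

-- Step 2: re-create it.
CREATE TABLE seed (seed_id NUMBER);

-- Step 3: load it with a sequence of numbers, using ROWNUM over the
-- 1,000 rows produced by the GROUP BY CUBE trick described above.
INSERT INTO seed (seed_id)
SELECT ROWNUM
FROM (SELECT NULL FROM dual GROUP BY CUBE(1,1,1,1,1,1,1,1,1,1))
WHERE ROWNUM <= 1000;
```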

Now that we have the code to insert data into our staging table, we can put all the pieces together and have the first three steps of our knowledge module. Keep in mind that you have to be consistent from one step to the next as you name your table. The actual knowledge module with all the matching code is available here (look for KM_IKM Oracle - Build Sample Data - Gen II.xml).

 


4. COMPLETING THE CODE

So far our table only has 1,000 records. Not much in terms of volume. But all we need now to create a table with 1,000,000 records... is a Cartesian product (you know, the one thing your mother told you NOT to do with a database? It comes in very handy here!):

insert into [Target] ([Columns]) select * from SEED S1, SEED S2

And if we want to return less records, all we have to do is filter on the S2 table. For instance the clause:

where S2.SEED_ID<=10

will return 10,000 records! Remember when we stored rownums in this table earlier? This is where they become very handy...
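Combining the Cartesian product with the filter, a single statement can load anywhere between 1,000 and 1,000,000 records; the bracketed elements follow the same placeholder convention as above:

```sql
-- 1,000 rows from S1 x 10 rows from S2 = 10,000 rows in the target.
-- Adjust the filter on S2 (1 to 1000) to scale the volume.
INSERT INTO [Target] ([Columns])
SELECT [Expressions]
FROM   seed s1, seed s2
WHERE  s2.seed_id <= 10;
```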

So far the only thing we have done is to generate a fairly large number of records. Where the exercise becomes even more interesting is if we can generate data for each record that matches our requirements for sample data. In a previous post we have seen how to generate User Functions in ODI to abstract this type of random generation. The code for the sample data generation typically does not belong to the Knowledge Module as it would not give us enough flexibility for all the possible combinations out there.

The User Function examples used before can generate numerics and strings. We could expand their design to generate random phone numbers, or random social security numbers... and work with random data that will now look like real data, instead of exposing sensitive information.

5. ADDING FLEXIBILITY TO THE KNOWLEDGE MODULE

The fact that User Functions do not belong in the code of the Knowledge Module does not mean that there is no flexibility in the Knowledge Modules. Here, we are building a sample table out of thin air: it is very possible that the table does not exist in the first place. Or if we want to run multiple tests with different amounts of records each time, we may want to truncate the table before each new run.

KM options are a very simple way to toggle such behaviors in a KM. We can create 2 options: CREATE_TABLE and TRUNCATE. Then create the appropriate steps in our KM:

  • Create Target Table
  • Truncate table

When you want to associate an option to a given step, edit the step itself; then click on the "option" tab and un-check "always execute". Select only the appropriate option in the list and click OK...

 

KM_Option.PNG

As we define the options, it is also good to think of the most common usage for the KM. In our case, chances are we will often want to create the table and truncate it for successive runs: we can then define that these steps will be executed by default (set the default for these options to "Yes"). More conventional KMs would typically have these options, but their defaults would be set to "No".

We now have a complete Knowledge Module that can be used to generate between 1,000 and 1,000,000 records in any table of your choice, complete with options that will let the users of the KM adapt the behavior to their actual needs...

Again, if you want to review all the code in details, it is available here (look for KM_IKM Oracle - Build Sample Data - Gen II.xml).

Enjoy!

Screenshots were taken using version 10.1.3.5 of ODI. Actual icons and graphical representations may vary with other versions of ODI.

Sunday Sep 20, 2009

ODI User Functions: A Case Study


The posts in this series assume that you have some level of familiarity with ODI. The concepts of Interface and User Function are used here assuming that you understand them in the context of ODI. If you need more details on these elements, please refer to the ODI Tutorial for a quick introduction, or to the complete ODI documentation for detailed information.

This post will give some examples of how and where user functions can be used in ODI. We will look back at a previous post and see why and how user functions were created for this case.

1. A CASE FOR USER FUNCTIONS

As I was trying to design a simple way to generate random data, I needed a simple approach to specify the type of data to be generated. An example would be to easily generate a random string.

Working on Oracle for this example, I could leverage the database function DBMS_RANDOM.STRING. It takes 2 parameters: one for the type of characters (uppercase, lowercase, mixed case), and one for the length of the string. But I want a little more than this: I also want the ability to force my generation to have a minimum number of characters. For this, I now need to generate a random number. The database function DBMS_RANDOM.VALUE does this, but returns a decimal value. Well, the TRUNC function can take care of this... but my mapping expression becomes somewhat complex:

DBMS_RANDOM.STRING(Format, TRUNC(DBMS_RANDOM.VALUE(MinLen,MaxLen)))

Imagine now using this formula over and over again in your mappings - not the easiest or most readable portion of code to handle. And maintenance will easily become a nightmare if you simply cut and paste...

A user function makes sense at this point, and will provide the following benefits:


  • A readable name that makes the usage and understanding of the transformation formula a lot easier
  • A central place to maintain the transformation code
  • The ability to share the transformation logic with other developers who do not have to come up with the code for this anymore.

 

From a code generation perspective, if you use the User Function in your interfaces, ODI will replace the function name with the associated code and send that code to the database. Databases will never see the User Function name - the substitution is part of the code generation.
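To illustrate the substitution, a mapping expression written with the User Function (defined below) would be generated for Oracle like this; the parameter values are illustrative:

```sql
-- What the developer writes in the mapping:
--   RandomString('U', 5, 10)
-- What ODI actually sends to Oracle after substitution:
DBMS_RANDOM.STRING('U', TRUNC(DBMS_RANDOM.VALUE(5, 10)))
```

Here 'U' requests uppercase characters, and the generated string is between 5 and 10 characters long.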

2. BUILDING THE USER FUNCTION

You need to provide 3 elements when you build a user function:


  • A name (and a group name to organize the functions)
  • A syntax
  • The actual code to be generated when the developers will use the functions

 


2.1 Naming the User Function

The name of the user function is yours to choose, but make sure that it properly describes what the function is doing: this will make the function all the more usable for others. You will also notice a drop-down menu that lets you select a group for this function. If you want to create a new group, you can type the group name directly in the drop-down itself.

For our example, we will name the user Function RandomString and create a new group called RandomGenerators.

User Function Name


2.2 The User Function Syntax

The next step will be to define the syntax for the function. Parameters are defined with a leading dollar sign and are enclosed in parentheses: $(ParameterName). You can name your parameters as you want: these names will be used when you put together the code for the user function, along with the $ and (). You can have no parameters or as many parameters as you want...

In our case, we need 3 parameters: One for the string format, one for the minimum length of the generated string, one for the maximum length. Our syntax will hence be:

RandomString($(Format), $(MinLen), $(MaxLen))

If you want to force the data types for the parameters, you can do so by adding the appropriate character after the parameter name: s for String, n for Numeric and d for Date. In that case our User Function syntax would be:

RandomString($(Format)s, $(MinLen)n, $(MaxLen)n)

User Function Syntax


2.3 Code of the User Function

The last part in the definition of the user function is to define the associated SQL code. To do this, click on the Implementation tab and click the Add button. A window will pop up with two parts: the upper part is for the code per se, the bottom part is for the selection of the technologies where this syntax can be used. Whenever you use the User Function in your mappings, if the underlying technology is one of the ones you have selected, ODI will substitute the function name with the code you enter here.

For our example, select Oracle in the list of Linked Technologies and type the following code in the upper part:

DBMS_RANDOM.STRING($(Format), TRUNC(DBMS_RANDOM.VALUE($(MinLen),$(MaxLen))))

Note that here we use, in place of actual column names, the parameter names we defined previously in the syntax field.
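To make the substitution concrete, here is a sketch (with hypothetical argument values) of what ODI sends to Oracle when a developer uses the function in a mapping:

```sql
-- What the developer types in the mapping (hypothetical values):
--   RandomString('U', 5, 10)
-- What Oracle actually receives after ODI substitutes the function:
DBMS_RANDOM.STRING('U', TRUNC(DBMS_RANDOM.VALUE(5, 10)))
```

The database never parses RandomString itself; it only ever sees the DBMS_RANDOM expression.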

User Function Code

Click on the Ok button to save your syntax and on the Ok or Apply button to save your User Function.

3. EXPANDING THE USER FUNCTIONS TO OTHER TECHNOLOGIES

Once we are done with the previous step, the user function is ready to be used. One downside with our design so far: it can only be used on one technology. Chances are we will need more flexibility.

One key feature of User Functions is that they give you the opportunity to enter the matching syntax for any other database of your choice. No matter which technology you use later on, you will only have to provide the name of the user function, irrespective of the underlying technology.

To add other implementations and technologies, simply go back to the Implementation tab of the User Function and click the Add button. Select the technology for which you are defining the code, and add the code.

Note that you can select multiple technologies for any given code implementation: these technologies will then be listed next to one another.
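As an illustration, here is a hedged sketch of what an implementation for another technology could look like. This hypothetical PostgreSQL version is only an approximation: it ignores the Format parameter and returns hexadecimal characters (and, since md5 produces 32 characters, it caps out at a length of 32):

```sql
-- Hypothetical PostgreSQL implementation for the
-- RandomString($(Format), $(MinLen), $(MaxLen)) syntax
-- (approximation: hex characters only, Format ignored, max length 32)
substr(md5(random()::text), 1,
       (floor(random() * ($(MaxLen) - $(MinLen) + 1)) + $(MinLen))::int)
```

The point is that the mapping expression stays RandomString(...) everywhere; only the per-technology implementation changes.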

4. USING THE USER FUNCTIONS IN INTERFACES

To use the User Functions in your interfaces, simply enter them in your mappings, filters, joins and constraints the same way you would use SQL code.

Interface Mappings

When you execute your interface, the generated code can be reviewed in the Operator interface. Note that you should never see the function names in the generated code. If you do, check the following elements:


  • User Function names are case sensitive. Make sure that you are using the appropriate combination of uppercase and lowercase characters
  • Make sure that you are using the appropriate number of parameters, and that they have the appropriate type (string, number or date)
  • Make sure that there is a definition for the User Function for the technology on which it is running. This last case may be the easiest one to overlook, so try to keep it in mind!

As long as you see SQL code in place of the User Function name, the substitution happened successfully.

 

Enjoy!

All Screenshots were taken using version 10.1.3.5 of ODI. Actual icons and graphical representations may vary with other versions of ODI.

Wednesday Sep 09, 2009

Generating Sample Data with ODI: A Case Study For Knowledge Modules and User Functions


The posts in this series assume that you have some level of familiarity with ODI. The concepts of Interface, Model, Knowledge Module and User Function are used here assuming that you understand them in the context of ODI. If you need more details on these elements, please refer to the ODI Tutorial for a quick introduction, or to the complete ODI documentation for detailed information.


We've all been there: we start coding, waiting for a set of sample data to be available. We move along with the code... and the data is not available. Or we need to build a small (or not so small) data set quickly. Sure, we all have sample databases left and right for that purpose. But recently I was looking for a decent size data set for some tests (more than the traditional 30 sample records) and could not put my hands on what I needed. What the heck: why not have ODI build this for me?

The techniques that we will leverage for this are the following:

  • Creation of a temporary interface to create the sample table (See this previous post for details on how to create a temporary interface)
  • Creation of a new knowledge module to generate enough records in the new table
  • Creation of ODI User Functions to simplify the generation of random values

 


All the objects mentioned in this article can be downloaded. Save this XML file if you want to import into your repository a project that already contains all the objects (IKM and User Functions). Click here if you want to download a file that will let you import the different objects individually; in the latter case, you will have to unzip the file before importing the objects.

The samples provided here have all been designed for an Oracle database, but can be modified and adapted for other technologies.

Today we will discuss the different elements that allow us to generate the sample data set. In future posts, we will dissect the Knowledge Modules and User Functions to see what technological choices were made based on the different challenges that had to be solved.

1. THE INTERFACE

For more details on how to create a temporary interface, you can refer to this post. For our example, we will create a new table in an existing schema. When you create your temporary interface, remember to set the following elements:

  • Select your staging area (in the Definition tab of the interface)
  • Name your target table
  • Select the location of your target table (work schema / data schema)
  • Name the Columns, and set their individual data type and length

 


Interface Definition Tab

For our example, we will use a fairly simple table structure:

TABLE NAME: SAMPLER

COLUMNS:
SAMPLER_ID number(3)
SAMPLER_NAME varchar2(30)
SAMPLER_PROMOTION varchar2(10)
SAMPLER_PRICE number(10,2)
SAMPLER_RELEASE_DATE date

 

Sampler Interface Table Creation
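For reference, the table created by the temporary interface should be equivalent to the following DDL (a sketch; ODI generates the actual statement from the interface definition):

```sql
-- Equivalent of the structure defined in the temporary interface
CREATE TABLE SAMPLER (
  SAMPLER_ID           NUMBER(3),
  SAMPLER_NAME         VARCHAR2(30),
  SAMPLER_PROMOTION    VARCHAR2(10),
  SAMPLER_PRICE        NUMBER(10,2),
  SAMPLER_RELEASE_DATE DATE
);
```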

2. USER FUNCTIONS

The Oracle database comes with a package called DBMS_RANDOM. Other random generators can be used (DBMS_CRYPTO, for instance, has random generation functions as well). These functions take varying numbers of parameters, and if we realized after creating dozens of mappings that using the "other" package would have been better... we would be in a lot of trouble. Creating user functions will allow us to:

  • Have a naming convention that is simplified
  • Limit the number of parameters
  • Limit the complexity of the code
  • Later maintain the code independently of our interfaces, in a centralized location: if we decide to change the code entirely, we will make modifications in one single place - no matter how often we use that function.

 


For our example, we will create 5 user functions in ODI (again, these can be downloaded here):

  • RandomDecimal(Min, Max): generates a random value (with decimals) between the Min and Max values
  • RandomNumber(Min, Max): generates a random value (without decimals) between the Min and Max values
  • RandomBool(): generates a 0 or a 1
  • RandomDate(MinDate, MaxDate): returns a date between MinDate and MaxDate (make sure MinDate and MaxDate are valid dates for Oracle)
  • RandomString(Format, Min, Max): generates a random string with a minimum of Min characters and a maximum of Max characters. Valid formats are:
    • 'u', 'U' - returning string in uppercase alpha characters
    • 'l', 'L' - returning string in lowercase alpha characters
    • 'a', 'A' - returning string in mixed case alpha characters
    • 'x', 'X' - returning string in uppercase alpha-numeric characters
    • 'p', 'P' - returning string in any printable characters.

 


SamplerUserFunctions.PNG
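For reference, here are possible Oracle implementations for these functions, based on DBMS_RANDOM (a sketch only; the downloadable functions may differ in details such as rounding or bounds handling):

```sql
-- RandomDecimal($(Min), $(Max)): value with decimals between Min and Max
DBMS_RANDOM.VALUE($(Min), $(Max))

-- RandomNumber($(Min), $(Max)): integer value between Min and Max
TRUNC(DBMS_RANDOM.VALUE($(Min), $(Max) + 1))

-- RandomBool(): 0 or 1
TRUNC(DBMS_RANDOM.VALUE(0, 2))

-- RandomDate($(MinDate), $(MaxDate)): date between the two bounds
-- (relies on the parameters being valid Oracle dates, as noted above)
$(MinDate) + TRUNC(DBMS_RANDOM.VALUE(0, $(MaxDate) - $(MinDate)))

-- RandomString($(Format), $(Min), $(Max)): string of Min to Max characters
DBMS_RANDOM.STRING($(Format), TRUNC(DBMS_RANDOM.VALUE($(Min), $(Max))))
```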

We can either use these functions as is or as part of a more complex code logic, such as a case...when statement.

For our example, we will build the following mappings:

Column                  Mapping
SAMPLER_ID              RandomNumber(1, 999)
SAMPLER_NAME            RandomString('A', 1, 30)
SAMPLER_PROMOTION       case when RandomBool()=0 then 'FALSE' else 'TRUE' end
SAMPLER_PRICE           RandomDecimal(1, 10000)
SAMPLER_RELEASE_DATE    RandomDate('01-JAN-2000', sysdate)


In ODI, the mappings will look like this:

Sampler Interface Mappings

3. THE KNOWLEDGE MODULE

Since we do not have any source table in this interface, we only need an IKM. The IKM provided with this example needs to be imported in your project.

Because the purpose of this KM is to generate sample data, it will have a few options where the default values will be different from the usual KMs:

  • TRUNCATE defaults to 'YES': we assume here that if you re-run the interface, you want to create a new sample. If you only want to add more records to an existing table, simply set this option to 'NO' in your interface.
  • CREATE_TABLE defaults to 'YES': we assume that the table to be loaded does not exist yet. You can turn that option to 'NO' if there is no need to create the table.
  • THOUSANDS_OF_RECORDS: set this to any value between 1 and 1,000 to generate between 1,000 and 1,000,000 records

 

Sampler IKM

Once you have set the values for your KM, you can run the interface and let it generate the random data set.
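Under the covers, a KM like this one typically multiplies rows with a row generator. Here is a hedged sketch of the kind of statement it could issue on Oracle after user-function substitution (the actual IKM code may differ), shown for THOUSANDS_OF_RECORDS = 10, i.e. 10,000 rows:

```sql
-- Hypothetical sketch of the generated load step (not the actual IKM code)
INSERT INTO SAMPLER
SELECT
  TRUNC(DBMS_RANDOM.VALUE(1, 999)),
  DBMS_RANDOM.STRING('A', TRUNC(DBMS_RANDOM.VALUE(1, 30))),
  CASE WHEN TRUNC(DBMS_RANDOM.VALUE(0, 2)) = 0 THEN 'FALSE' ELSE 'TRUE' END,
  DBMS_RANDOM.VALUE(1, 10000),
  TO_DATE('01-JAN-2000', 'DD-MON-YYYY')
    + TRUNC(DBMS_RANDOM.VALUE(0,
        SYSDATE - TO_DATE('01-JAN-2000', 'DD-MON-YYYY')))
FROM dual
CONNECT BY LEVEL <= 10 * 1000;  -- THOUSANDS_OF_RECORDS * 1000 rows
```

The CONNECT BY LEVEL trick on dual is a common Oracle idiom for generating an arbitrary number of rows from nothing.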

With the above configuration, and using a standard laptop (dual core 1.86 GHz processor and 2 GB of RAM) equipped with Oracle XE, my statistics were as follows:

10,000 records generated in 5 seconds
100,000 records generated in 24 to 35 seconds (about 30 seconds on average)
1,000,000 records generated in 211 to 235 seconds (about 4 minutes on average)

Note that the machine was not dedicated to this process and was running other processes.

Statistics are available in the Operator interface.

Sampler Stats

To review the data loaded by ODI in your target table, simply reverse-engineer this table in a model, then right-click on the table and select View Data to see what was generated!
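Alternatively, a quick query against the target schema shows the same thing (assuming the SAMPLER table defined above):

```sql
-- Peek at a few generated rows, then check the total row count
SELECT * FROM SAMPLER WHERE ROWNUM <= 10;
SELECT COUNT(*) FROM SAMPLER;
```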

SamplerData.PNG

4. EXPANDING TO OTHER TECHNOLOGIES

One question: why did I stop here and not try to make this work for other technologies? Well, it turns out that ODI is really meant to move and transform data. As long as I have at least ONE table with random data in any one of my databases, it is faster to just create a regular ODI interface and move the data across... The design takes less than a minute, and the data transfer should not take much time either. Who would spend more time coding when the solution is that simple?

But if you want to make this work for other databases, here are your entry points:

  • Duplicate the KM and modify it to use SQL that would work on these other databases
  • Update the user functions to make sure that they use the appropriate functions for the given databases
  • Use the same logic to create your interface

Enjoy!

 


All Screenshots were taken using version 10.1.3.5 of ODI. Actual icons and graphical representations may vary with other versions of ODI.
