Wednesday Jul 08, 2015

ODI KMs for Business Intelligence Cloud Service

In this article we will learn how to leverage Oracle Data Integrator’s extensible Knowledge Module (KM) framework to create knowledge modules that load data into the Oracle Business Intelligence Cloud Service (BICS). The following instructions target BICS instances using the Schema Cloud Service; if your BICS instance uses Database as a Service, you can load data directly into DBaaS tables as described in the blog post ODI 12c and DBaaS in the Oracle Public Cloud. More details on implementing Knowledge Modules can be found in the Knowledge Module Developer Guide.

BICS exposes REST APIs that allow you to programmatically create, manage, and load schemas, tables, and data into Oracle BI Cloud Service. We will invoke these REST APIs using the Jersey client libraries, which provide a wrapper implementation for invoking RESTful web services. A sample implementation is available at: RKM and IKM for Oracle BI Cloud Service
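As a rough sketch of what such an invocation looks like: the sample implementation uses the Jersey client in Java, while the Python sketch below is only illustrative. The `/dataload/v1/tables/{table}/data` path, table name, and credentials are assumptions for illustration, not the documented BICS resource paths.

```python
import base64
import json
import urllib.request

def build_load_request(base_url, table, rows, username, password):
    """Builds (but does not send) an HTTP request that appends rows to a BICS table."""
    body = json.dumps(rows).encode("utf-8")
    req = urllib.request.Request(
        url="%s/dataload/v1/tables/%s/data" % (base_url, table),  # assumed path
        data=body,
        method="PUT",
    )
    req.add_header("Content-Type", "application/json")
    # BICS REST calls authenticate per request; HTTP Basic auth shown here
    token = base64.b64encode(("%s:%s" % (username, password)).encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    return req

# A knowledge module would batch source rows and send one such request per
# batch, e.g. with urllib.request.urlopen(req).
req = build_load_request("https://mybics.oraclecloud.com", "SALES",
                         [{"REGION": "EMEA", "AMOUNT": 1200}], "user", "secret")
```

A real KM would additionally handle batching thresholds, error responses, and retries; the point here is only the shape of the per-request call.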

Wednesday Jul 01, 2015

ODI - Integration with Oracle Storage Cloud Service

Oracle Data Integrator’s open tool framework can be leveraged to quickly get access to the Oracle Storage Cloud Service, which is gradually becoming an essential part of integrating on-premises data with many cloud services. A reference implementation of an open tool for Oracle Storage Cloud is now available on the Data Integration project at ODI OpenTool for Oracle Storage Cloud, and it can be used and modified to suit your integration needs.

Monday May 11, 2015

Oracle Big Data Preparation Cloud Service (BDP) – Coming Soon

What are your plans around Big Data and Cloud?

If your organization has already begun to explore these topics, you might be interested in a new offering from Oracle that will dramatically simplify how you use your data in Hadoop and the Cloud:

Oracle Big Data Preparation Cloud Service (BDP)

There is a perception that most of the time spent in Big Data projects is dedicated to harvesting value. The reality is that 90% of the time in Big Data projects is actually spent on data preparation. Data may be structured, but more often it will be semi-structured, such as weblogs, or fully unstructured, such as free-form text. The content is vast, inconsistent, incomplete, often off topic, and drawn from multiple differing formats and sources. In this environment each new dataset takes weeks or months of effort to process, frequently requiring programmers to write custom scripts. Minimizing data preparation time is the key to unlocking the potential of Big Data.

Oracle Big Data Preparation Cloud Service (BDP) addresses this very reality. BDP is a non-technical, web-based tool that sets out to minimize data preparation time in an effort to quickly unlock the potential of your data. The BDP tool provides an interactive set of services that automate, streamline, and guide the process of data ingestion, preparation, enrichment, and governance without costly manual intervention.

The technology behind this service is amazing; it intuitively guides the user with a machine learning driven recommendation engine based on semantic data classification and natural language processing algorithms. But the best part is that non-technical staff can use this tool as easily as they use Excel, resulting in a significant cost advantage for data intensive projects by reducing the amount of time and resources required to ingest and prepare new datasets for downstream IT processes.

Curious to find out more? We invite you to view a short demonstration of BDP below:

Let us know what you think!

Stay tuned as we write more about this offering… and visit here often!

Tuesday May 05, 2015

Oracle Data Integrator for Big Data Webcast - Recap

We followed our recent announcement of Oracle Data Integrator (ODI) for Big Data with an in-depth webcast featuring Denis Gray, Product Management Director for Oracle Data Integration, and me. It was a deep dive into the product features, the differentiators, and an inside look at how ODI for Big Data supports the various Apache projects. If you missed it, you can watch it again on demand here.

We also talked about Oracle Metadata Management, a data governance tool that brings trust and transparency to Big Data projects within Oracle and 3rd party solutions.

You will want to watch this if you are interested in knowing:

a. How to become an ETL developer for Big Data without learning Java coding

b. Why ODI for Big Data is a standout technology, architecture-wise, for Big Data processing

c. A comparative study on Big Data ETL vendors and offerings in the market.

Below are some of the questions that we encountered during the session.

How is unstructured data handled by ODI?

We have different solutions for unstructured input.

We constantly post best practices and lessons learned here on our blog. These blog posts on ODI for Big Data will also help you get started:

ODI for Big Data Announcement post

Big Data Lite Free Demo and Download

This white paper Top 5 Big Data Integration Mistakes to Avoid also talks about the most common pitfalls enterprises make when approaching a big data project.

Is Oracle DI for Big Data a separately licensed product from Oracle DI?

Oracle Data Integrator Enterprise Edition Advanced Big Data Option is a separately licensed option. It is an option which would be purchased in addition to Oracle Data Integrator Enterprise Edition for advanced big data processing. More on this option can be found at the website and on the datasheet.

How do I load data from Oracle to Hadoop in an event-driven manner?

You can use Oracle GoldenGate for Big Data for this: it captures all committed transactions on a source Oracle database and delivers them to Hive, HDFS, HBase, Flume, Kafka, and others. You can learn more about Oracle GoldenGate for Big Data here.

Can a customer be just fine with ODI rather than purchasing Oracle Warehouse Builder for data warehousing projects? What are Oracle's strategic directions for these products?

Oracle Data Integrator (ODI) is Oracle's strategic product for data integration. You can read the full statement of direction on this topic here. There are also automated migration utilities that help migrate your OWB work into the ODI environment.

Does the ODI that comes with Financial Analytics also have the big data capability, or is it only in the full version?

Financial Analytics Hub uses a restricted use license of ODI which is meant for use specifically with the Financial Analytics products as outlined in the license.

ODI EE has basic big data functionality such as Hive, Sqoop, and HBase. The Pig, Spark, and Oozie functionality requires ODI EE as well as the Advanced Big Data Option for ODI.

When customers expand beyond the specialized financial analytics, they upgrade to the full license of ODI EE and ODI EE for Big Data.

Can I use Impala (Cloudera's advancement on Hive)? Does ODI recognize Impala?

We have customers using Impala with ODI. This is supported through our regular JDBC support.

Will the utilization of Hadoop cut down on the need for a lot of manual coding, or will manual coding still be an essential part of Oracle Data Integration for Big Data?

ODI is specifically made to avoid manual coding and to provide a graphical, metadata-driven way of doing Big Data integration. The value of using tools instead of manual coding has been understood in the Data Integration community for decades, and this realization is now coming to the Big Data community through painful experiences.

See also this article: Top 5 Big Data Integration Mistakes to Avoid

Are resources required for ODI managed by YARN in Hadoop?

ODI uses standard Hadoop subsystems like Hive, HDFS, Oozie, Pig, Spark, and HBase, which are managed by YARN, so we implicitly take advantage of YARN.

Can you please share any performance benchmarks we have with the other competitors?

We might suggest a couple of whitepapers for your review on this subject:

Data Integration Platforms for Big Data and the Enterprise: Customer Perspectives on IBM, Informatica and Oracle

The Oracle Data Integrator Architecture

Best Practices for Real-time Data Warehousing

Oracle Data Integrator Performance Guide

What is the major differentiator in the usage of Hive, Spark, or Pig within ODI?

ODI provides a graphical and logical abstraction over these engines, so you can design your transformation without concern for what the implementation engine will be. You can choose your engine after the fact based on its performance characteristics, such as the in-memory performance of Spark. If a new Hadoop engine comes up in the future, it is easy to retool your logical design to run on it.

Can you explain the difference between using the GG Big Data Adapters and the GG Java flat file Adapters?

The OGG Big Data adapter comes with four pre-packaged adapters for Hive, HDFS, Flume, and HBase that have been developed and tested by Oracle R&D. It also comes with custom Java and JMS support like the "Java flat file App Adapter". The only capability exclusive to the "Java flat file App Adapter" is flat file support.

Oracle ODI is an alternate/equivalent to which particular product of Apache?

ODI is not competing against any Apache/Hadoop project; instead, it integrates with them. We utilize HDFS, Hive, Spark, Pig, HBase, and Sqoop for our transformations, unlike other DI vendors who deploy their own engine on the Hadoop cluster.

So is ODI generating Hive/Pig/MapReduce code behind the scenes when you define mappings?

You are correct. You design a logical mapping and pick an execution engine (Spark/Pig/Hive), and ODI will generate and execute code for that engine. You have full visibility into this code, unlike with a proprietary ETL engine.

Is Oracle Metadata Management (OMM) part of ODI or a separate product?

Oracle Metadata Management is a separate product from ODI, but a complementary one; please find the datasheet here.

Can you please share the details on GoldenGate for Big Data?

You can find more info here.

Can you share the names of some blogs for effective ODI data integration?

The ODI team posts here regularly, and there is a rich community of bloggers writing about ODI:

And many, many more: search for ODI blogs to find more.

ODI can pull data on a regular schedule (say, every 2 minutes). GoldenGate does it in real time. So if a 2-minute delay is acceptable, do we need GG for Big Data?

That is the general principle. If you are looking for real-time replication with sub-second latency, then GoldenGate is the product. If you are looking for heavy processing of Big Data, then ODI is the answer. They are actually complementary and work off of one another: customers use GG for data ingestion and ODI for data processing.

I'm an Oracle apps DBA and Oracle performance DBA. Can I use my existing skill set to transition into Oracle DI for Big Data? Is this completely different from the DBA skill set?

ODI is popular with DBAs because the generated SQL code (RDBMS or Hive/Impala) is visible, and all of our runtime actions are "white box", so you can see what's happening. You can review queries and their query plans and optimize them using our Knowledge Module framework.

Monday Apr 06, 2015

Announcing Oracle Data Integrator for Big Data

Proudly announcing the availability of Oracle Data Integrator for Big Data. This release is the latest in the series of advanced Big Data updates and features that Oracle Data Integration is rolling out for customers to help take their Hadoop projects to the next level.

Increasing Big Data Heterogeneity and Transparency

This release sees significant additions in heterogeneity and governance for customers. Some significant highlights of this release include:

  • Support for Apache Spark,
  • Support for Apache Pig, and
  • Orchestration using Oozie.

Click here for a detailed list of what is new in Oracle Data Integrator (ODI).

Oracle Data Integrator for Big Data helps transform and enrich data within the big data reservoir/data lake without users having to learn the languages necessary to manipulate it. ODI for Big Data generates native code that is then run on the underlying Hadoop platform without requiring any additional agents. ODI separates the design interface used to build logic from the physical implementation layer that runs the code. This allows ODI users to build business and data mappings without having to learn HiveQL, Pig Latin, and MapReduce.

Oracle Data Integrator for Big Data Webcast

We invite you to join us on the 30th of April for our webcast to learn more about Oracle Data Integrator for Big Data and to get your questions about Big Data Integration answered. We discuss how the newly announced Oracle Data Integrator for Big Data:

  • Provides advanced scale and expanded heterogeneity for big data projects 
  • Uniquely complements Hadoop’s strengths to accelerate decision making, and 
  • Ensures sub-second latency with Oracle GoldenGate for Big Data.

Friday Feb 27, 2015

How to Future Proof Your Big Data Investments - An Oracle webcast with Cloudera

Cutting through the Big Data Clutter

The Big Data world is changing rapidly, giving rise to new standards, languages, and architectures. Customers are unclear about which Big Data technology will benefit their business the most, and how to future-proof their Big Data investments.

This webcast helps customers sift through the changing Big Data architectures and build their own resilient Big Data platform. Oracle and Cloudera experts discuss how enterprise platforms need to provide more flexibility to handle real-time and in-memory computations for Big Data.

The speakers introduce the 4th generation architecture for Big Data that allows expanded and critical capabilities to exist alongside each other. Customers can now see higher returns on their Big Data investment by ingesting real-time data and improving data transformation for their Big Data analytics solutions. By choosing Oracle Data Integrator, Oracle GoldenGate and Oracle Enterprise Metadata Management, customers gain the ability to keep pace with changing Big Data technologies like Spark, Oozie, Pig and Flume without losing productivity, while reducing risk through robust Big Data governance.

In this webcast we also discuss the newly announced Oracle GoldenGate for Big Data. With this release, customers can stream real time data from their heterogeneous production systems into Hadoop and other Big Data systems like Apache Hive, HBase and Flume. This brings real time capabilities to customer’s Big Data architecture allowing them to enhance their big data analytics and ensure their Big Data reservoirs are up-to-date with production systems.

Click here to mark your calendars and join us for the webcast to understand Big Data Integration and ensure that you are investing in the right Big Data Integration solutions.

Tuesday Dec 09, 2014

Big Data Governance– Balancing Big Risks and Bigger Profits

To me, nothing exemplifies the real value that Big Data brings to life better than the role it played in the last edition of the FIFA soccer World Cup. Stephen Hawking predicted that England’s chance of winning a game drops by 60 percent every time the temperature increases by 5ºC. Moreover, he found that England plays better in stadiums situated 500 meters above sea level, and performs better if the games kick off later than 3 PM local time. In short, England’s soccer team struggles to cope with the conditions in hot and humid countries.

We all have heard, meditated and opined about the value of Big Data, the panacea for all problems. And it is true. Big Data has started delivering real profits and wins to businesses. But as with any data management program, profit benefits should be maximized while striving to minimize potential risks and costs.

Customer Data is Especially Combustible Data

The biggest lift for businesses using Big Data is obtained through the mining of customer data. By storing and analyzing seemingly disparate customer attributes and running analytic models over the whole data set (data sampling is dying a painful demise), businesses are able to accurately predict buying patterns and customer preferences and to create products and services that cater to today’s demanding consumers. But this veritable mine of customer information is combustible. By that I mean that a small leak is enough to undo any benefits hitherto extracted, through ensuing blowbacks like financial restitution, regulatory constriction, and, most important of all, immense reputational damage. And this is why Big Data should always be well governed. Data Governance is an aspect of data security that helps safeguard Big Data in business enterprises.

Big Data Governance

Big Data Governance is but a part (albeit a very important part) of a larger Big Data Security strategy. Big Data security should involve considerations along the efficient and economic storage of data, retrieval of data and consumption of data. It should also deal with backups, disaster management and other traditional considerations.

When properly implemented, a good governance program serves as a crystal ball into the data flows within the organization. It will answer questions about how safe the data is and who can and should be able to lay their hands on it, and it will proactively prevent data leakage and misuse. Because when dealing with big reservoirs of data, small leakages can go unnoticed.

Friday Oct 10, 2014

Oracle Data Integrator Webcast Archives

Check out the recorded webcasts on Oracle Data Integrator! 

Each month the Product Management Team hosts a themed session for your viewing pleasure.  Recent topics include Oracle Data Integrator (ODI) and Big Data, Oracle Data Integrator (ODI) and Oracle GoldenGate Integration, BigData Lite, the Oracle Warehouse Builder (OWB) Migration Utility, the Management Pack for Oracle Data Integrator (ODI), along with other various topics focused on Oracle Data Integrator (ODI) 12c.

You can find the Oracle Data Integrator (ODI) Webcast Archives here.

Take a look at the individual sessions:

The webcasts are publicized on the ODI OTN Forum if you want to view them live.  You will find the announcement at the top of the page, with the title and details for the upcoming webcast.

Thank you – and happy listening!

Monday Sep 15, 2014

You are invited! See the Power of Innovation with Oracle Fusion Middleware

Are you going to be at Oracle OpenWorld? If so, don't miss the opportunity to meet the most innovative Oracle Fusion Middleware projects of 2014. Winners of 2014 Oracle Excellence Awards for Fusion Middleware Innovation will be announced in a special Awards Ceremony on Tuesday September 30th during Oracle OpenWorld.

Oracle Fusion Middleware Innovation Awards honor customers with cutting-edge use of Oracle Fusion Middleware technologies to solve unique business challenges or create business value. Winners are selected based on the uniqueness of their business case, business benefits, level of impact relative to the size of the organization, complexity and magnitude of implementation, and the originality of architecture. Data Integration is among the 9 award categories recognized this year, along with Oracle Exalogic Elastic Cloud, Oracle Cloud Application Foundation, Oracle Service Oriented Architecture, Oracle Business Process Management, Oracle WebCenter, Oracle Identity Management, Oracle Development Tools and Framework, and Big Data and Business Analytics.

If you are planning to attend Oracle OpenWorld in San Francisco or plan to be in the area during Oracle OpenWorld, we hope you can join us, and bring back to your organization real-life examples of Fusion Middleware in action.

   Oracle Excellence Awards Ceremony: Oracle Fusion Middleware: Meet This Year’s Most Impressive Innovators(Session ID: CON7029)

   When: Tuesday September 30, 2014

   Time: Champagne Reception 4:30 pm, Ceremony 5-5:45 pm PT

   Where: Yerba Buena Center for the Arts, YBCA Theater (next to Moscone North) 700 Howard St., San Francisco, CA, 94103

To learn more about last year’s Data Integration winners please read our blog post: 2013 Oracle Excellence Awards for Fusion Middleware Innovation… and the Winners for Data Integration are…

To attend this Award Ceremony, Oracle OpenWorld Badges are required. You can register for this session through the Schedule Builder on the Oracle OpenWorld website. If you are not attending the conference, but will be in the area and would like to join the celebration – please contact us here.

We hope to see you there!

Monday Aug 11, 2014

Oracle is a Leader in the Gartner 2014 Magic Quadrant for Data Integration Tools

Oracle maintains its position as a leader in the 2014 “Magic Quadrant for Data Integration Tools” report. This year’s report expands into the key trends of Big Data and Cloud in addition to classic data integration capabilities and visions available in the market.

On the importance of Data Integration, Gartner states, “Data integration is central to enterprises' information infrastructure. Enterprises pursuing the frictionless sharing of data are increasingly favoring technology tools that are flexible in regard to time-to-value demands, integration patterns, optimization for cost and delivery models, and synergies with information and application infrastructures.” Of the vendors it also notes, “Evolving their relevance and competitive positioning requires vendors to extend their vision and deepen their capability to harness market inertia and broaden applicability of data integration offerings. This is in line with buyer expectation for optimal functions, performance and scalability in data integration tools, so that they operate well with the same vendor's technology stack and, increasingly, interoperate across related information and application infrastructures.”

More than 4,500 organizations spanning nearly every industry are using Oracle Data Integration to cost-effectively manage and create value from ever-growing streams of structured and unstructured data, and address emerging deployment styles. Oracle continues to invest in its Data Integration offerings to help customers expand their business by:

  • Embracing and leading in Cloud Computing, Big Data and Business Analytics initiatives
  • Delivering real-time data integration solutions for maximum availability
  • Promoting cross-functional interoperability between heterogeneous systems and applications
  • Providing a complete and integrated choice of offerings across a variety of styles and deployments including ETL/ELT, data replication, data quality, metadata management, data virtualization and data governance.

Download the Report Now

About the Magic Quadrant: Gartner does not endorse any vendor, product, or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.  

Sunday Jul 13, 2014

New Big Data Features in ODI 12.1.3

Oracle Data Integrator (ODI) 12.1.3 extends its Hadoop support through a number of exciting new capabilities. The new features include:

  • Loading of RDBMS data from and to Hadoop using Sqoop
  • Support for Apache HBase databases
  • Support for Hive append functionality
With these new additions ODI provides full connectivity to load, transform, and unload data in a Big Data environment.

The diagram below shows all ODI Hadoop knowledge modules with KMs added in ODI 12.1.3 in red. 

Sqoop support

Apache Sqoop is designed for efficiently transferring bulk data between Hadoop and relational databases such as Oracle, MySQL, Teradata, DB2, and others. Sqoop operates by creating multiple parallel map-reduce processes across a Hadoop cluster, connecting to an external database, and transferring data from or to Hadoop storage in a partitioned fashion. Data can be stored in Hadoop using HDFS, Hive, or HBase. ODI adds two knowledge modules: IKM SQL to Hive-HBase-File (SQOOP) and IKM File-Hive to SQL (SQOOP).

Loading from and to Sqoop in ODI is straightforward. Create a mapping with the database source and Hadoop target (or vice versa) and apply any necessary transformation expressions.

In the physical design of the mapping, make sure to set the LKM of the target to LKM SQL Multi-Connect.GLOBAL and choose a Sqoop IKM, such as IKM SQL to Hive-HBase-File (SQOOP). Change the MapReduce output directory IKM property MAPRED_OUTPUT_BASE_DIR to an appropriate HDFS directory. Review all other properties and tune as necessary. Using these simple steps you should be able to perform a quick Sqoop load.
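Under the covers, a Sqoop IKM orchestrates a sqoop command line along the lines of the sketch below. This is an illustrative reconstruction, not the exact command the KM generates; the connection details and table names are placeholders.

```python
def build_sqoop_import(jdbc_url, username, table, hive_table, num_mappers=4):
    """Returns a sqoop CLI invocation that loads an RDBMS table into Hive."""
    return [
        "sqoop", "import",
        "--connect", jdbc_url,          # e.g. jdbc:oracle:thin:@//dbhost:1521/orcl
        "--username", username,
        "--table", table,               # source table in the relational database
        "--hive-import",                # write the extracted data into Hive
        "--hive-table", hive_table,
        "--num-mappers", str(num_mappers),  # parallel map tasks across the cluster
    ]

cmd = build_sqoop_import("jdbc:oracle:thin:@//dbhost:1521/orcl",
                         "scott", "EMP", "default.emp", num_mappers=8)
```

The `--num-mappers` flag is what gives Sqoop its partitioned, parallel transfer behavior described above.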

For more information please review the great ODI Sqoop article from Benjamin Perez-Goytia, or read the ODI 12.1.3 documentation about Sqoop.

HBase support

ODI adds support for HBase as a source and target. HBase metadata can be reverse-engineered using the RKM HBase knowledge module, and HBase can be used as source and target of a Hive transformation using LKM HBase to Hive and IKM Hive to HBase. Sqoop KMs also support HBase as a target for loads from a database. 
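Hive reaches HBase through the HBase storage handler, and a Hive-side external table definition like the one sketched below is the kind of mapping such KM-based access builds on. The table and column names here are purely illustrative, and the generated DDL is a simplification of what the KMs emit.

```python
def hbase_external_table_ddl(hive_table, hbase_table, columns_mapping):
    """Returns Hive DDL exposing an HBase table for use in a Hive mapping."""
    return (
        "CREATE EXTERNAL TABLE {0} (rowkey STRING, val STRING)\n"
        "STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'\n"
        "WITH SERDEPROPERTIES ('hbase.columns.mapping' = '{1}')\n"
        "TBLPROPERTIES ('hbase.table.name' = '{2}')"
    ).format(hive_table, columns_mapping, hbase_table)

# Expose the HBase table CUSTOMERS to Hive, mapping the row key and one
# column family/qualifier (cf:val) onto relational columns.
ddl = hbase_external_table_ddl("customers_hv", "CUSTOMERS", ":key,cf:val")
```

Once such a mapping exists, HBase data can participate in Hive transformations like any other Hive table, which is what the LKM HBase to Hive / IKM Hive to HBase pairing relies on.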

For more information please read the ODI 12.1.3 documentation about HBase.

Hive Append support

Prior to Hive 0.8 there was no direct way to append data to an existing table. Earlier Hive KMs emulated append logic by renaming the existing table and concatenating the old and new data into a new table with the prior name. This emulated append operation caused major data movement, particularly when the target table was large.

Starting with version 0.8 Hive has been enhanced to support appending. All ODI 12.1.3 Hive KMs have been updated to support the append capability by default but provide backward compatibility to the old behavior through the KM property HIVE_COMPATIBLE=0.7. 
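The two strategies can be sketched as follows; the generated statements are simplified stand-ins for what the KMs actually emit, shown only to contrast the native append with the pre-0.8 emulation.

```python
def hive_load_statements(target, staging, hive_compatible="0.8"):
    """Sketch of the HiveQL a KM would emit for the given compatibility level."""
    if hive_compatible >= "0.8":
        # Native append: existing rows stay in place, no data movement
        return ["INSERT INTO TABLE %s SELECT * FROM %s" % (target, staging)]
    # Pre-0.8 emulation: rebuild the target by concatenating old and new data
    return [
        "ALTER TABLE {t} RENAME TO {t}_old".format(t=target),
        "CREATE TABLE {t} AS SELECT * FROM {t}_old UNION ALL SELECT * FROM {s}"
            .format(t=target, s=staging),
        "DROP TABLE {t}_old".format(t=target),
    ]
```

Setting HIVE_COMPATIBLE=0.7 on the KM selects the second, rewrite-the-table path; the default takes the single-statement append.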


ODI 12.1.3 provides an optimal and easy-to-use way to perform data integration in a Big Data environment. ODI utilizes the processing power of the data storage and processing environment rather than relying on a proprietary transformation engine. This core "ELT" philosophy finds its perfect match in a Hadoop environment, where ODI can provide unique value through a native and easy-to-use data integration environment.

Thursday Jun 26, 2014

Announcing ODI 12.1.3 – Building on Big Data Capabilities.

Continuing the constant innovation that goes into each Oracle Data Integrator release, this release focuses on supporting Big Data standards that we have seen sweeping the market.

Big Data Heterogeneity

Recognizing the need to integrate data in various forms and from various sources without losing productivity, ODI delivers the following capabilities. Having been chosen as the strategic Extract, Load and Transform (EL-T) standard for Oracle (ODI is the de facto standard for Oracle applications), it is natural that the product’s latest release focuses on some of the most impactful features for our customers. Some of the new Big Data features include

        a. JSON support,

        b. Hadoop SQOOP integration and

        c. Hadoop HBase Integration.

ODI now supports data loading from relational databases to HDFS, Hive, and HBase, and from HDFS and Hive into relational databases.
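To illustrate the idea behind the JSON support, which maps hierarchical documents onto relational rows, here is a toy flattener. It is not ODI's implementation, just a sketch of the concept, with made-up field names.

```python
import json

def flatten(record, prefix=""):
    """Flattens nested JSON objects into a single dict of column -> value."""
    row = {}
    for key, value in record.items():
        name = prefix + key if not prefix else prefix + "_" + key
        if isinstance(value, dict):
            row.update(flatten(value, name))  # recurse into nested objects
        else:
            row[name] = value
    return row

doc = json.loads('{"id": 1, "customer": {"name": "Acme", "region": "EMEA"}}')
row = flatten(doc)
# row == {"id": 1, "customer_name": "Acme", "customer_region": "EMEA"}
```

In ODI the equivalent mapping is declared as metadata rather than coded, so the same hierarchical source can feed ordinary relational targets.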

All the data movement features are further enhanced through the use of ODI’s Knowledge Modules. Knowledge Modules are out-of-the-box functional components that can be easily customized and reused, getting your project off the ground quicker with productivity gains. As an example, there are Knowledge Modules that write to Hive and help append data to existing data files.

Enhanced Security and Performance:

All the focus on Big Data does not mean a lesser focus on classic product features. ODI 12.1.3 comes with upgraded Federal Information Processing Standard (FIPS) compliant security standards for all its passwords and sensitive information.

It also improves performance in two ways: ODI can now load target tables using multiple parallel connections, and improved control over data loading allows users to customize the concurrency of jobs.

To learn more about ODI please visit the ODI home page. For a more in-depth technical and business scoop, the Resource Kit is an excellent starting point. Find more customer validation here.

Monday May 12, 2014

Check it out – BI Apps is now available!

As of May 8, 2014, Oracle Business Intelligence (BI) Applications is available on the Oracle Software Delivery Cloud (eDelivery), and on the Oracle BI Applications OTN page. This is the second major release on the 11g code line leveraging the power of Oracle Data Integrator (ODI), and certified with the latest version of Oracle BI Foundation. For more details on this release and what’s new, check it out!

Monday Feb 24, 2014

Highlighting Oracle Data Integrator 12c (ODI12c)

Towards the last two months of 2013 we highlighted several of ODI12c's features with a full-length blog post for each. This was so popular that we bring you a one-stop shop where you can browse through the various entries at your convenience. This is a great page to bookmark, even if we say so ourselves, if you are using or thinking of using ODI12c.

Kicking off the ODI12c Blog Series

Talks about ODI12c themes and features at a high level, shedding light on the new release's focus areas.

ODI 12c's Mapping Designer - Combining Flow Based and Expression Based Mapping

Talks about ODI's new declarative designer combined with the familiar flow-based designer.

Big Data Matters with ODI12c

Talks about ODI12c enterprise solutions for the movement, translation, and transformation of information and data across heterogeneous and Big Data environments.

ODI 12c - Aggregating Data

Looks at the aggregation component introduced in ODI 12c for composing data with relational operations such as sum, average, and so forth.

ODI 12c - Parallel Table Load

Looks at the ODI 12c capability of parallel table load from the aspect of the mapping developer and the knowledge module developer - two quite different viewpoints.

In-Session Parallelism in ODI12c

Discusses the new in-session parallelism, the intelligence to concurrently execute part of the mappings that are independent of each other, introduced in the ODI12c release.

ODI 12c - Mapping SDK the ins and outs

Talks about the ODI 12c SDK, which provides a mechanism to accelerate data integration development using patterns and the APIs in the SDK.

ODI 12c - XML improvements

Explains ODI support for advanced XML Schema constructs including union, list, substitution groups, mixed content, and annotations.

ODI 12c - Components and LKMs/IKMs

Illustrates capabilities of ODI 12c's knowledge module framework in combination with the new component based mapper.

Welcome Oracle Management Pack for Oracle Data Integrator! Let’s maximize the value of your Oracle Data Integrator investments!

To help you make the most of Oracle Data Integrator, and to deliver a superior ownership experience in an effort to minimize systems management costs, Oracle recently released Oracle Management Pack for Oracle Data Integrator.

ODI 12c - Mapping SDK Auto Mapping

If you want to properly leverage the 12c release, the new mapping designer and SDK are the way forward.

ODI 12c - Table Functions, Parallel Unload to File and More

Helps you integrate an existing table function implementation into a flow.

And below is a list of "how to" and "hands on" blog posts to help you get started with ODI 12c.

ODI 12c - Getting up and running fast

A quick A-B-C to show you how to quickly get up and running with ODI 12c, from getting the software to creating a repository via wizard or the command line, then installing an agent for running load plans and the like.

Time to Get Started with Oracle Data Integrator 12c!

We would like to highlight for you a great place to begin your journey with ODI. Here you will find the Getting Started section for ODI.

ODI 12c - Slowly Changing Dimensions

Helps setup a slowly changing dimension load in ODI 12c, everything from defining the metadata on the datastore to loading the data.
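Conceptually, a Type 2 slowly changing dimension load — the pattern the SCD knowledge module automates — amounts to closing out the current row and inserting a new version. A hypothetical two-step sketch, with made-up table and column names:

```sql
-- Step 1: close out current rows whose tracked attributes changed.
UPDATE dim_customer d
SET    d.end_date = SYSDATE,
       d.current_flag = 'N'
WHERE  d.current_flag = 'Y'
AND    EXISTS (SELECT 1 FROM stg_customer s
               WHERE  s.customer_id = d.customer_id
               AND    s.city <> d.city);

-- Step 2: insert a new current version for changed and brand-new customers
-- (changed rows were closed in step 1, so they no longer have a 'Y' row).
INSERT INTO dim_customer (customer_id, city, start_date, end_date, current_flag)
SELECT s.customer_id, s.city, SYSDATE, NULL, 'Y'
FROM   stg_customer s
WHERE  NOT EXISTS (SELECT 1 FROM dim_customer d
                   WHERE  d.customer_id = s.customer_id
                   AND    d.current_flag = 'Y');
```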

ODI 12c - Temporal Data Loading

The temporal validity feature in Oracle Database 12c is a great fit for any time-based data. If you are working with dimensional data that varies over time, the temporal validity capabilities of 12c are well worth checking out.
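To give a flavor of the feature, here is a sketch of temporal validity DDL and an as-of query; the table and column names are illustrative:

```sql
-- Declare a valid-time period on the table (Oracle 12c temporal validity).
CREATE TABLE product_price (
  product_id  NUMBER,
  price       NUMBER,
  valid_start DATE,
  valid_end   DATE,
  PERIOD FOR valid_time (valid_start, valid_end)
);

-- Query prices as of a specific point in valid time.
SELECT product_id, price
FROM   product_price
       AS OF PERIOD FOR valid_time TO_DATE('2014-01-01', 'YYYY-MM-DD');
```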

ODI 12.1.2 Demo on the Oracle BigDataLite Virtual Machine

Oracle's big data team has just announced the Oracle BigDataLite Virtual Machine, a pre-built environment to get you started with the core software of Oracle's Big Data Appliance 2.4. BigDataLite is a VirtualBox VM that contains a fully configured Cloudera Hadoop distribution (CDH 4.5), Oracle Database 12c, Oracle's Big Data Connectors, Oracle Data Integrator 12.1.2, and other software.

Webcast - Oracle Data Integrator 12c and Oracle Warehouse Builder

If you missed the recent Oracle Data Integrator 12c and Oracle Warehouse Builder live webcast, you can catch up on the events and connect with us with your feedback here. We discuss customer examples, ODI 12c new features, Big Data compatibility, the Oracle Warehouse Builder Migration Utility and Support, and a live Q&A, among other topics.

For more information on Oracle Data Integrator, visit the ODI Resource Center.

Thursday Jan 30, 2014

Webcast - Oracle Data Integrator 12c and Oracle Warehouse Builder

If you missed the recent Oracle Data Integrator 12c and Oracle Warehouse Builder live webcast, do not worry. You can still catch up on the events and connect with us with your feedback here. We discuss customer examples, ODI 12c new features, Big Data compatibility, the Oracle Warehouse Builder Migration Utility and Support, and a live Q&A, among other topics.

Register to listen: ODI 12c and OWB Webcast.

Here are some questions from the audience that were answered following the webcast.



Is the management pack free to ODI customers?

While the Management Pack for ODI works to consolidate and manage your ODI infrastructure, it is a separately licensed product.

You can learn more about the Management Pack for ODI in this Data Sheet.

Also visit the Oracle Management Pack for ODI homepage for more details.

Where can we download ODI 12c?

ODI 12c can be downloaded in various ways, including Virtual Machines and complete downloads, from the Oracle Technology Network.

This page also gives the latest on training, patches, and other happenings around Oracle Data Integrator, so keep it bookmarked to follow the latest on ODI.

Thank you.

Happy to help! Please let me know if you need any other assistance. Thank you for your participation.

Could you expand on how the time reduction is achieved through parallelism?

As you might suspect, the time reduction depends on various factors, including database tuning, network capabilities, and code efficiency.

ODI’s parallelism offers parallel threading and intelligent identification of parallelizable components.

This blog on the in-session parallelism functionality explains the workings and logic behind the process.

What has Oracle done to ensure any applications written today using existing KM's will work going forward for any ODI upgrades?

While we do our best to ensure Knowledge Module consistency across versions, any Knowledge Modules you have changed would require additional testing to make sure everything works as it should.

Customization allows your Knowledge Modules to work most efficiently, but it also means they diverge from the out-of-the-box KMs provided. Unfortunately, there is no simple answer other than further testing.

Could you tell more about integrations to Workday? (real-time)

Workday integration is currently provided through some of our partners, mostly with customized Knowledge Modules. For help in contacting such a partner, please reach out to your Oracle Data Integration Sales Manager.

Does Oracle Business Intelligence Application offer a choice to use ODI or a third party DAC?

The Oracle BI Apps version released in May 2013 was built on Oracle Data Integrator and fully functions only with ODI.