Wednesday Jan 29, 2014

ODI 12.1.2 Demo on the Oracle BigDataLite Virtual Machine

Oracle's big data team has just announced the Oracle BigDataLite Virtual Machine, a pre-built environment to get you started with the core software of Oracle's Big Data Appliance 2.4. BigDataLite is a VirtualBox VM that contains a fully configured Cloudera Hadoop distribution (CDH 4.5), Oracle Database 12c, Oracle's Big Data Connectors, Oracle Data Integrator 12.1.2, and other software.

You can use this environment to see ODI 12c in action, integrating big data with Oracle Database using ODI's declarative graphical design, efficient E-LT loads, and Knowledge Modules designed to optimize big data integration. 

The sample data contained in BigDataLite represents the fictional Oracle MoviePlex on-line movie streaming company. The ODI sample performs the following two steps:

  • Pre-process application logs within Hadoop: All user activity on the MoviePlex web site is gathered on HDFS in Avro format. ODI reads these logs through Hive and processes the activities by aggregating, filtering, joining, and unioning the records in an ODI flow-based mapping. All processing is performed inside Hive map-reduce jobs controlled by ODI, and the resulting data is stored in a staging table within Hive.
  • Load user activity data from Hadoop into Oracle: The pre-processed data is loaded from Hadoop into an Oracle 12c database, where it can be used as the basis for Business Intelligence reports. ODI uses the Oracle Loader for Hadoop (OLH) connector, which executes distributed Map-Reduce processes to load data in parallel from Hadoop into Oracle. ODI transparently configures and invokes this connector through the Hive-to-Oracle Knowledge Module.

Both steps are orchestrated and executed through an ODI Package workflow. 
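
To give a flavor of the first step, here is a minimal, hypothetical HiveQL sketch of the kind of aggregation ODI pushes down into Hive for the staging table; the table and column names are illustrative and are not the actual MoviePlex schema.

-- Illustrative only: the real statement is generated by ODI from the flow-based mapping.
INSERT OVERWRITE TABLE movieapp_log_stage
SELECT custid, movieid, activity, COUNT(*) AS activity_count
FROM movieapp_log_avro
WHERE activity IS NOT NULL
GROUP BY custid, movieid, activity;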

Demo Instructions

Please follow these steps to execute the ODI demo in BigDataLite:

  1. Download and install BigDataLite, following the instructions in the Deployment Guide on the download page.
  2. Start the VM and log in as user oracle, password welcome1.
  3. Start the Oracle Database 12c by double-clicking the icon on the desktop.


  4. Start ODI 12.1.2 by clicking the icon on the toolbar.


  5. Press Connect To Repository... on the ODI Studio window. 


  6. Press OK in the ODI Login dialog.


  7. Switch to the Designer tab, open the Projects accordion and expand the projects tree to Movie > First Folder > Mappings. Double-click on the mapping Transform Hive Avro to Hive Staging.


  8. Review the mapping that transforms source Avro data by aggregating, joining, and unioning data within Hive. You can also review the mapping Load Hive Staging to Oracle the same way. 


  9. In the Projects accordion expand the projects tree to Movie > First Folder > Packages. Double-click on the package Process Movie Data.


  10. The Package workflow for Process Movie Data opens. You can review the package.


  11. Press the Run icon on the toolbar. Press OK for the Run and Information: Session started dialogs. 




  12. You can follow the progress of the load by switching to the Operator tab and expanding All Executions and the topmost Process Movie Data entry. You can refresh the display by pressing the refresh button or setting Auto-Refresh. 


  13. Depending on the environment, the load can take 5-15 minutes. When the load is complete, the execution will show all green check marks. You can traverse the Operator log and double-click entries to explore statistics and executed commands. 

This demo shows only some of ODI's big data capabilities; you can find more information about them in the ODI documentation and on this blog.


Monday Nov 04, 2013

Big Data Matters with ODI12c

contributed by Mike Eisterer

On October 17th, 2013, Oracle announced the release of Oracle Data Integrator 12c (ODI12c). This release brings improvements across Oracle's Data Integration portfolio of solutions, particularly for Big Data integration.

Why Big Data = Big Business

Organizations are gaining greater insight and actionability through the increased storage, processing and analytical capabilities offered by Big Data solutions. New technologies and frameworks such as HDFS, NoSQL, Hive and MapReduce deliver these benefits today. As more data is collected, analytical requirements grow, the complexity of managing transformations and aggregations of data compounds, and organizations find themselves in need of scalable Data Integration solutions.

ODI12c provides enterprise solutions for moving, translating and transforming information and data across heterogeneous and Big Data environments through:

  • The ability for existing ODI and SQL developers to leverage new Big Data technologies.
  • A metadata focused approach for cataloging, defining and reusing Big Data technologies, mappings and process executions.
  • Integration between many heterogeneous environments and technologies such as HDFS and Hive.
  • Generation of Hive Query Language.

Working with Big Data using Knowledge Modules

ODI12c provides developers with the ability to define sources and targets and visually develop mappings that move and transform data. As the mappings are created, ODI12c leverages a rich library of prebuilt integrations, known as Knowledge Modules (KMs). These KMs are contextual to the technologies and platforms being integrated; the steps and actions needed to manage the data integration are pre-built and configured within the KMs. 

The Oracle Data Integrator Application Adapter for Hadoop provides a series of KMs, specifically designed to integrate with Big Data Technologies.  The Big Data KMs include:

  • Check Knowledge Module
  • Reverse Engineer Knowledge Module
  • Hive Transform Knowledge Module
  • Hive Control Append Knowledge Module
  • File to Hive (LOAD DATA) Knowledge Module
  • File-Hive to Oracle (OLH-OSCH) Knowledge Module 

Nothing beats an example:

To demonstrate the use of the KMs that are part of the ODI Application Adapter for Hadoop, a mapping may be defined to move data between files and Hive targets. 

The mapping is defined by dragging the source and target into the mapping, performing the attribute (column) mapping (see Figure 1) and then selecting the KM which will govern the process. 

In this mapping example, movie data is being moved from an HDFS source into a Hive table.  Some of the attributes, such as “CUSTID to custid”, have been mapped over.

Figure 1  Defining the Mapping

Before the proper KM can be assigned to define the technology for the mapping, it needs to be added to the ODI project. The Big Data KMs are made available to the project through the KM import process; generally, this is done prior to defining the mapping.


Figure 2  Importing the Big Data Knowledge Modules

Following the import, the KMs are available in the Designer Navigator.

Figure 3  The Project View in Designer, Showing Installed IKMs

Once the KM is imported, it may be assigned to the mapping target.  This is done by selecting the Physical View of the mapping and examining the Properties of the Target.  In this case MOVIAPP_LOG_STAGE is the target of our mapping.


Figure 4  Physical View of the Mapping and Assigning the Big Data Knowledge Module to the Target

Alternative KMs could have been selected as well, providing flexibility and abstracting the logical mapping from the physical implementation. The same mapping may then be applied to other technologies.

The mapping is now complete and is ready to run.  We will see more in a future blog about running a mapping to load Hive.

To complete the quick ODI for Big Data overview, let us take a closer look at what the IKM File to Hive is doing for us. ODI provides differentiated capabilities by pre-building into the KM the process and steps that would otherwise have to be manually developed, tested and implemented. As shown in Figure 5, the KM prepares the Hive session, manages the Hive tables, performs the initial load from HDFS and then performs the insert into Hive. HDFS and Hive options are selected graphically, as shown in the properties in Figure 4.


Figure 5  Process and Steps Managed by the KM
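
As a rough, hedged sketch of those steps (not the KM's exact generated code), the work amounts to HiveQL along these lines; the C$ work table name and column list are assumptions for illustration only.

-- Hypothetical equivalent of the KM steps; the real statements are generated from the model metadata.
-- Initial load of the file from HDFS into an ODI-style work table:
LOAD DATA INPATH '/user/oracle/moviework/applog' OVERWRITE INTO TABLE C$_MOVIAPP_LOG_STAGE;
-- Insert into the Hive target, applying the column mappings:
INSERT OVERWRITE TABLE MOVIAPP_LOG_STAGE
SELECT custid, movieid, activity FROM C$_MOVIAPP_LOG_STAGE;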

What’s Next

Big Data, the shape-shifting business challenge that it is, is fast evolving into the deciding factor between market leaders and the rest.

Now that an introduction to ODI and Big Data has been provided, look for additional blogs coming soon using the Knowledge Modules which make up the Oracle Data Integrator Application Adapter for Hadoop:

  • Importing Big Data Metadata into ODI, Testing Data Stores and Loading Hive Targets
  • Generating Transformations using Hive Query language
  • Loading Oracle from Hadoop Sources

For more information now, please visit the Oracle Data Integrator Application Adapter for Hadoop web site, http://www.oracle.com/us/products/middleware/data-integration/hadoop/overview/index.html

Do not forget to tune in to the ODI12c Executive Launch webcast on the 12th to hear more about ODI12c and GG12c.

Thursday Sep 12, 2013

Stream Relational Transactions into Big Data Systems

Are you one of the organizations adopting 'big data systems' to manage and analyze a class of data typically referred to as big data? If so, you may know that big data includes data that could be structured, semi-structured or unstructured, each of which originates from a variety of different sources. Another characterization of big data is the data's volume, velocity, and veracity. Because of its promise to help harness the data deluge we are faced with, the adoption of big data solutions is becoming quite pervasive. In this blog post I'd like to discuss how to leverage Oracle GoldenGate's real-time replication for big data systems.

The term 'big data systems' is an umbrella term covering a wide variety of technologies, each of which is used for a specific purpose. Broadly speaking, big data technologies address batch, transactional, and real-time processing requirements; choosing the appropriate technology depends heavily on the use case being addressed.

While gaining business intelligence from transactional data continues to be a dominant factor in decision making, businesses have realized that gaining intelligence from the other forms of data they have been collecting will enable them to achieve a more complete view, address additional business objectives, and make better decisions. The following table illustrates, for several industry verticals, the forms of data involved and the objective the business attempts to achieve using them.

Industry | Data | Objective
Healthcare | Practitioner's notes, machine statistics | Best practices and reduced hospitalization
Retail | Weblogs, click streams | Micro-segmentation recommendations
Banking | Weblogs, fraud reports | Fraud detection, risk analysis
Utilities | Smart meter readings, call center data | Real-time and predictive utilization analysis

Role of transactional data

When other forms of data are used for analytics, better contextual intelligence is obtained when the analysis is combined with transactional data. Low-latency transactional data in particular brings value to dynamically changing operations that day-old data cannot deliver. In most organizations, the vast majority of applications' transactional data is captured in relational databases. To ensure an efficient supply of transactional data for big data analytics, the data integration solution should address several requirements:

  • Reliable change data capture and delivery mechanism
  • Minimal resource consumption when extracting data from the relational data source
  • Secured data delivery
  • Ability to customize data delivery
  • Support for heterogeneous database sources
  • Easy to install, configure and maintain

A solution that can reliably stream database transactions to the desired target ensures that effort is spent on data analysis rather than data acquisition. And when the solution is non-intrusive and has minimal impact on the source database, it reduces the need for additional resources and changes on the source.

Oracle GoldenGate is a time tested and proven product for real-time, heterogeneous relational database replication. Oracle GoldenGate addresses the challenges listed above and is widely used by organizations for mission critical data replication among relational databases. Furthermore, GoldenGate moves transactional data in real-time to support timely operational business intelligence needs.

Oracle GoldenGate Integration Options for Big Data Analytics

A variety of integration options are available with the Oracle GoldenGate product to facilitate delivering transactions from relational databases into non-relational targets.

Oracle GoldenGate provides pre-built adapters which integrate with Flat Files and Messaging Systems. Please refer to Oracle GoldenGate for Java - Administration Guide and Oracle GoldenGate for Flat Files - Administration Guide for more information.

Oracle GoldenGate also provides Java APIs and a framework for developing custom integrations to Java-enabled targets. Using this capability, custom adapters or handlers can be developed to address specific requirements. In this blog post I'd like to focus on the Oracle GoldenGate Java APIs for developing custom integrations to big data systems.

As mentioned earlier, 'big data systems' is an umbrella term describing a wide variety of technologies, each of which is used for a specific purpose. Among the various big data systems, Hadoop and its suite of technologies are widely adopted for processing big data. The diagram below illustrates a general high-level architecture for integrating with Hadoop.

[Diagram: the Oracle GoldenGate Pump process hosting a custom adapter, configured through the Pump parameter file and the adapter properties file]

You can implement a custom adapter or handler for the big data system using Oracle GoldenGate's Java API. The custom adapter is deployed as an integral part of the Oracle GoldenGate Pump process. The Pump and the custom adapter are configured through the Pump parameter file and the custom adapter's properties file, respectively. Depending upon the requirements, the properties for the custom adapter need to be determined and implemented.

The Pump process executes the adapter in its address space. The Pump reads the Trail File created by the Oracle GoldenGate Capture process and passes the transactions to the adapter. Based on the configuration, the adapter writes the transactions into Hadoop.

Enabling big data systems to coexist with relational systems helps organizations better serve customers and improve their decision-making capabilities. Oracle GoldenGate, which has an excellent track record across many aspects of enterprise data management, provides the capability to integrate with big data systems. In upcoming blog posts, we will discuss in depth the implementation and configuration of integrating Oracle GoldenGate with Hadoop technologies. 

Thursday Aug 29, 2013

ODI - File transformation debatching

Carrying on from the blog post on performing file transformations within ODI using nXSD, in this post I will show how a large file can be debatched into smaller chunks/shards. The example uses a COBOL data file as input and debatches this file into parts using the debatching capability of the adapter.

COBOL is a great example of a complex data source for Hadoop and Big Data initiatives, since there are plenty of such systems that companies wish to unlock potential gems from. It's big data, and there are tonnes of it! The ODI file transformer tool on java.net has been extended to include an optional parameter that defines the number of rows in each chunk/shard. This parameter lets you take control of the relative size of the files being processed - perhaps the platform has certain characteristics that work better than others. Hadoop has challenges with millions of small files and with very, very large ones, so being able to prepare the data is useful.

The COBOL copybook was used as input to the Native Format Builder and an nXSD was generated for it; I used the following copybook:

        02  DTAR020.
        03  DTAR020-KCODE-STORE-KEY.
            05 DTAR020-KEYCODE-NO      PIC X(08).
            05 DTAR020-STORE-NO        PIC S9(03)   COMP-3.
        03  DTAR020-DATE               PIC S9(07)   COMP-3.
        03  DTAR020-DEPT-NO            PIC S9(03)   COMP-3.
        03  DTAR020-QTY-SOLD           PIC S9(9)    COMP-3.
        03  DTAR020-SALE-PRICE         PIC S9(9)V99 COMP-3.

The data file came from an example on the web. Below is an actual example of the ODIFileTransformer command; the SHARDROWS parameter defines the number of rows to be written to each data file. The tool simply adds an integer index to the end of the output parameter - a little basic, I know; the source is on java.net if you feel like being creative.

ODIFileTransformer "-INPUT=D:\input\cbl_data.bin" "-SCHEMA=D:\reference\cbl.xsd" "-OUTPUT=d:\output\out.xml" "-ROOT=ROOT" "-SHARDROWS=2" 

Executing this generates many output files with, in this example, 2 rows in each. It is a contrived example just to illustrate the debatching; below are the 2 rows in one generated file:

<ROOT xmlns="http://TargetNamespace.com/CBL">
   <DTAR020>
      <DTAR020-KCODE-STORE-KEY>
         <DTAR020-KEYCODE-NO>69684558</DTAR020-KEYCODE-NO>
         <DTAR020-STORE-NO>+020</DTAR020-STORE-NO>
      </DTAR020-KCODE-STORE-KEY>
      <DTAR020-DATE>+0040118</DTAR020-DATE>
      <DTAR020-DEPT-NO>+280</DTAR020-DEPT-NO>
      <DTAR020-QTY-SOLD>+000000001</DTAR020-QTY-SOLD>
      <DTAR020-SALE-PRICE>+00000019.00</DTAR020-SALE-PRICE>
   </DTAR020>
   <DTAR020>
      <DTAR020-KCODE-STORE-KEY>
         <DTAR020-KEYCODE-NO>69684558</DTAR020-KEYCODE-NO>
         <DTAR020-STORE-NO>+020</DTAR020-STORE-NO>
      </DTAR020-KCODE-STORE-KEY>
      <DTAR020-DATE>+0040118</DTAR020-DATE>
      <DTAR020-DEPT-NO>+280</DTAR020-DEPT-NO>
      <DTAR020-QTY-SOLD>-000000001</DTAR020-QTY-SOLD>
      <DTAR020-SALE-PRICE>-00000019.00</DTAR020-SALE-PRICE>
   </DTAR020>
</ROOT>

This is a small extension to the earlier version of the tool and provides a useful debatching capability. The ZIP containing the tool is on the java.net site; see the earlier post for some background details. ...and then what? Well, that would be telling.

Friday Mar 15, 2013

Pervasive Access to Any Data

In my previous blog, I shared with you the five key data integration requirements, which can be summarized as: integrating any data from any source, stored on premise or in the cloud, with maximum performance and availability, to achieve 24/7 access to timely and trusted information. Today, I want to focus on the requirement for integrating “any data”.

We all feel the impact of huge growth in the amount of raw data collected on a daily basis. And big data is a popular topic of information technology these days. Highly complex, large volumes of data bring opportunities and challenges to IT and business audiences. The opportunities, as discussed in McKinsey’s report, are vast, and companies are ready to tap into big data to differentiate and innovate in today’s competitive world.

One of the key challenges of big data is managing unstructured data, which is estimated to be 80% of enterprise data. Structured and unstructured data must coexist and be used in conjunction with each other in order to gain maximum insight. This means organizations must collect, organize, and analyze data from sensors, conversations, e-commerce websites, social networks, and many other sources.

Big data also changes the perspective on information management. It changes the question from “How do you look at your data?” to “How do you look at the data that is relevant to you?” This shift in perspective has huge implications for information-management best practices and the technologies applied. Data integration technologies now need to handle unstructured and semi-structured data, in addition to structured transactional data, to support a complete picture of the enterprise that will drive higher efficiency, productivity and innovation.

Oracle addresses big data requirements with a complete solution.

In addition to Oracle Big Data Appliance for acquiring and organizing big data, Oracle offers Oracle Big Data Connectors that enable an integrated data set for analysis. Big Data Connectors is a software suite that integrates big data across the enterprise. Oracle Data Integrator offers an Application Adapter for Hadoop, which is part of the Big Data Connectors, and allows organizations to build Hadoop metadata within Oracle Data Integrator, load data into Hadoop, transform data within Hadoop, and load data directly into Oracle Database using Oracle Loader for Hadoop. Oracle Data Integrator has the necessary capabilities for integrating structured, semi-structured, and unstructured data to support organizations with transforming any type of data into real value.

If you would like to learn more about how to use Oracle’s data integration offering for your big data initiatives take a look at our resources on Bridging the Big Data Divide with Oracle Data Integration.

Monday Feb 25, 2013

Connecting Velocity to Value: Introducing Oracle Fast Data

To understand fast data, one must first look at one of the most compelling new breakthroughs in data management: big data. Big data solutions address the challenge today’s businesses face in managing the increasing volume, velocity, and variety of all data - not just data within the organization but also data about it. Much of the buzz and to-do around big data has centered on Hadoop and NoSQL technologies, but little has been said about velocity. Velocity is about the speed at which this data is generated, and in many cases the economic value of the data diminishes just as fast. As a result, companies need to process large volumes of data in real time and make decisions more rapidly to create value from highly perishable, high-volume data in business operations.

This is where fast data comes in. Fast data solutions help manage the velocity (and scale) of any type of data and any type of event to enable precise action for real-time results.

Fast data solutions draw on multiple technologies, and some of the concepts, such as complex event processing and business activity monitoring, have been in use in areas such as the financial services industry for years. But often the pieces were used in isolation - a complex event processing engine as a standalone application to apply predefined business rules to filter data, for example. When these concepts are tied to analytics, capabilities expand to allow improved real-time insights. By tying together these strands, companies can filter/correlate, move/transform, analyze, and finally act on information from big data sources quickly and efficiently, enabling both real-time analysis and further business intelligence work once the information is stored.

Oracle’s Fast Data solutions offer multiple technologies that work hand in hand to create value out of high-velocity, high-volume data. They are designed to optimize efficiency and scale for processing high volumes of events and transactions.


Tuesday Jan 15, 2013

ODI - Hive and MongoDB

I've been experimenting with another Hive storage handler, this time for MongoDB; there are a few out there, including this one from MongoDB. The one I have been using supports basic primitive types and also supports read and write, using the standard approach of a storage handler class and custom properties to describe the data mask. This lets you access MongoDB via a Hive external table very easily and abstracts away a lot of integration complexity - which also makes it ideal for use in ODI. I have been using it on my Linux VM, where I have Hive running, to access MongoDB running on another machine. The storage handler is found here; I used it to access the same example I blogged about here. Below is the external table definition;

ADD JAR /home/oracle/mongo/hive-mongo.jar;

create external table mongo_emps(EMPNO string, ENAME string, SAL int)
stored by "org.yong3.hive.mongo.MongoStorageHandler"
with serdeproperties ( "mongo.column.mapping" = "EMPNO,ENAME,SAL" )
tblproperties ( "mongo.host" = "<my_mongo_ipaddress>", "mongo.port" = "27017",
"mongo.db" = "test", "mongo.collection" = "myColl" );

Very simple. A nice aspect of the Hive external table is the SerDe properties that can be specified - simple, but a flexible approach. I can then reverse engineer this into ODI (see the reverse engineering post here) and use it in my Hive integration mappings to read from, and potentially write to, MongoDB.
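
Once reversed into ODI, the table behaves like any other Hive datastore. As a quick, hedged illustration (emp_staging below is a hypothetical Hive source, not part of the example), both directions are plain HiveQL:

-- read from MongoDB through the external table
SELECT ename, sal FROM mongo_emps WHERE sal > 2500;

-- the handler supports writes too, so a mapping can load MongoDB via an insert
INSERT INTO TABLE mongo_emps
SELECT empno, ename, sal FROM emp_staging;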

The supported primitive types can also project nested document types; for example, in the document below (taken from here), name, contribs and awards are projected as strings even though they hold JSON structures;

{
  "_id" : 1,
  "name" : {
    "first" : "John",
    "last" : "Backus"
  },
  "birth" : ISODate("1924-12-03T05:00:00Z"),
  "death" : ISODate("2007-03-17T04:00:00Z"),
  "contribs" : [ "Fortran", "ALGOL", "Backus-Naur Form", "FP" ],
  "awards" : [
    {
      "award" : "W.W. McDowell Award",
      "year" : 1967,
      "by" : "IEEE Computer Society"
    },
    {
      "award" : "National Medal of Science",
      "year" : 1975,
      "by" : "National Science Foundation"
    },
    {
      "award" : "Turing Award",
      "year" : 1977,
      "by" : "ACM"
    },
    {
      "award" : "Draper Prize",
      "year" : 1993,
      "by" : "National Academy of Engineering"
    }
  ]
}

can be processed with the following external table definition, which then can be used in ODI;

create external table mongo_bios(name string, birth string, death string, contribs string, awards string)
stored by "org.yong3.hive.mongo.MongoStorageHandler"
with serdeproperties ( "mongo.column.mapping" = "name,birth,death,contribs,awards" )
tblproperties ( "mongo.host" = "<my_ip_address>", "mongo.port" = "27017",
"mongo.db" = "test", "mongo.collection" = "bios" );

All very simple, and that's what makes it so appealing. Anyway, that's a quick follow-on - using external tables with MongoDB and Hive - to the SQL-oriented approach I described here that used Java table functions.

Wednesday Jan 02, 2013

ODI - Hive and NoSQL, the code

This post includes the Java client demonstration code used in the Hive and NoSQL post illustrated here. The BasicBigData.java code is a NoSQL client which populates a key-value store that is queryable using the Hive external table from that post. It didn't take long to code, and a few peeks at the NoSQL javadoc got it going. You can take this Java code, compile it and run it (instructions for compiling are similar to the verification demo here - it is very easy).

The Java code uses the NoSQL major/minor key path constructor to describe the key; below is a snippet that defines the birthdate for Bob Smith;

ArrayList<String> mjc1 = new ArrayList<String>();   // major key path: /Smith/Bob
mjc1.add("Smith");
mjc1.add("Bob");
...
ArrayList<String> mnrb = new ArrayList<String>();   // minor key path: birthdate
mnrb.add("birthdate");
...
store.put(Key.createKey(mjc1,mnrb),Value.createValue("05/02/1975".getBytes()));
...

In the referenced post, to actually aggregate the key values, we used the Hive collect_set aggregation function (see here for Hive aggregation functions); collect_set returns a set of objects with duplicates eliminated. To get the aggregation behavior in ODI with the correct GROUP BY, we must tell ODI about the Hive aggregation function. We can define a new language element for collect_set in the Topology tree, define the element as a group function, and also define the expression for Hive under the Implementation tab;

We are then able to define expressions which reference this aggregation function and get the exact syntax defined in the earlier post. Below we see the Hive expressions using collect_set;

From this design and the definition of the aggregation function in ODI, when it is executed you can see the generated Hive QL with the correct columns in the grouping function.
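
The generated statement looks roughly like the following sketch, using the MY_KV_TABLE external table from the referenced post; the target table name is illustrative, and the columns wrapped in collect_set stay out of the GROUP BY:

-- sketch of the ODI-generated Hive QL (target name is hypothetical)
INSERT OVERWRITE TABLE kv_person_target
SELECT lastname, firstname,
       collect_set(birthdate)[0], collect_set(phonenumber)[0], collect_set(userid)[0]
FROM MY_KV_TABLE
GROUP BY lastname, firstname;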

The target Hive datastore in the interface I defined has been loaded with the key values from the NoSQL key-value store - cool!

Those are a few of the missing pieces which would let you query NoSQL through Hive external tables, hopefully some useful pointers. 

Monday Dec 31, 2012

ODI - Hive and NoSQL

The Hive external table lets us do lots of cool stuff, including processing data from NoSQL. We have seen how custom SerDes are used; Hive storage handlers also provide some cool capabilities. Using the Hive storage handler defined here, an external table can be defined to project data from a NoSQL key-value store. The external table can then be used as a source in ODI - very simple.

The illustration on GitHub has the following data stored in an Oracle NoSQL Database (the key is the lastname/firstname, etc.):

  • /Smith/Bob/-/birthdate: 05/02/1975
  • /Smith/Bob/-/phonenumber: 1111-1111
  • /Smith/Bob/-/userid: 1
  • /Smith/Patricia/-/birthdate: 10/25/1967
  • /Smith/Patricia/-/phonenumber: 2222-2222
  • /Smith/Patricia/-/userid: 2
  • /Wong/Bill/-/birthdate: 03/10/1982
  • /Wong/Bill/-/phonenumber: 3333-3333
  • /Wong/Bill/-/userid: 3

Using the Hive external table and the custom storage handler for a key value store, we define a mask to project the data through the external table. 

ADD JAR /home/oracle/kv/HiveKVStorageHandler.jar;
CREATE EXTERNAL TABLE MY_KV_TABLE (lastname string, firstname string, birthdate string, phonenumber string, userid string)
      STORED BY 'org.vilcek.hive.kv.KVHiveStorageHandler'
      WITH SERDEPROPERTIES ("kv.major.keys.mapping" = "lastname,firstname", "kv.minor.keys.mapping" = "birthdate,phonenumber,userID")
      TBLPROPERTIES ("kv.host.port" = "localhost:5000", "kv.name" = "kvstore");

There are a few interesting properties here;

  • we specify the key-value store using TBLPROPERTIES, identifying the host/port and the store name (kvstore).
  • the SerDe properties contain the mapping of the keys to column names; you will get a row for each value of birthdate, phonenumber and userID.

Fairly straightforward. We can then reverse engineer this into ODI, using the same mechanism as I described in previous posts here setting the ODI_HIVE_SESSION_JARS and so forth. The data projected looks like this;

hive> SELECT * FROM MY_KV_TABLE;
OK
Smith   Patricia   10/25/1967   NULL        NULL
Smith   Patricia   NULL         2222-2222   NULL
Smith   Patricia   NULL         NULL        2
Smith   Bob        05/02/1975   NULL        NULL
Smith   Bob        NULL         1111-1111   NULL
Smith   Bob        NULL         NULL        1
Wong    Bill       03/10/1982   NULL        NULL
Wong    Bill       NULL         3333-3333   NULL
Wong    Bill       NULL         NULL        3

In ODI, by defining the Hive collect_set function as an aggregation function, we can then aggregate the data and pivot it to get one row per key;

SELECT lastname, firstname, collect_set(birthdate)[0], collect_set(phonenumber)[0], collect_set(userid)[0]
      FROM MY_KV_TABLE
      GROUP BY lastname, firstname;

So another interesting illustration of external tables in Hive and what they can provide.

Saturday Dec 29, 2012

ODI - Hive and Complex JSON

Time for a nested JSON example using Hive external tables. ODI treats nested complex types in Hive the same way it treats types in other technologies such as Oracle: the type name is captured, not the definition - you can see XMLType or SDO_GEOMETRY as examples within the ODI Oracle technology. The Hive technology in ODI also has the ARRAY, STRUCT and MAP types seeded. For the nested JSON example I will use the example defined on the JSON SerDe page here.

The external table definition is below; I defined it in Hive and reverse engineered it into ODI just like in the previous post. Note it uses a different SerDe from the post here - when I tried the Google SerDe the data was not projected properly (all data was projected as null... so beware of the components used). Just like in the previous post we need to add the JAR whenever it is used (and don't forget the ODI_HIVE_SESSION_JARS), either from Hive or ODI;

ADD JAR /home/oracle/json/hive-serde-1.0.jar;
CREATE EXTERNAL TABLE message (
  messageid string,
  messagesize int,
  sender string,
  recipients array<string>,
  messageparts array<struct<
    extension: string,
    size: int
  >>,
  headers map<string,string>
)
ROW FORMAT SERDE 'com.proofpoint.hive.serde.JsonSerde'
LOCATION '/user/oracle/json_complex';

This external table has ARRAY fields, STRUCT fields and MAP fields, so we are going above and beyond simple types. The data I will use is the same as the referenced web page;

{
  "messageId": "34dd0d3c-f53b-11e0-ac12-d3e782dff199",
  "messageSize": 12345,
  "sender": "alice@example.com",
  "recipients": ["joe@example.com", "bob@example.com"],
  "messageParts": [
    {
      "extension": "pdf",
      "size": 4567
    },
    {
      "extension": "jpg",
      "size": 9451
    }
  ],
  "headers": {
    "Received-SPF": "pass",
    "X-Broadcast-Id": "9876"
  }
}

Again, I will use the Hive RKM that I mentioned in the post here in order to reverse engineer the external table defined in Hive into ODI. Below you can see the table and how it is represented in ODI. You can see the recipients and messageparts columns are defined as ARRAY, and headers is a MAP.

We can view the data in ODI just like other tables - remember the external table projects the JSON data from HDFS through Hive;

When the datastore is used in an interface, these are the columns that are available in the mapping; you can use any of the Hive functions or operators available on these complex types, plus constructors for building them. For example, to retrieve the first element in the recipients array the following Hive QL can be used;

  • select sender, recipients[0] from message;

 You can build such expressions in ODI;
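
As a hedged illustration, the same complex-type operators cover the STRUCT and MAP columns too; the expressions below use only the columns defined in the external table above:

-- array element, struct field and map lookup on the message table
SELECT sender,
       recipients[0],
       messageparts[0].extension,
       size(messageparts),
       headers['Received-SPF']
FROM message;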

You just need to be aware of the datatypes you are using and the functions available. I haven't gone into complex SQL/Hive QL here, but you can see the basic mechanics are pretty straightforward. One point that comes home here is the varying functionality of the non-core pieces of Hadoop: the first two JSON SerDe libraries I used support different JSON capabilities - so be aware.

ODI - Hive External Tables, reverse engineering

Here we see Hive external tables over JSON data files in action, being reverse engineered from Hive into ODI. Carrying on from my earlier post on Hive reverse engineering here, we will tackle external tables with primitive data types. Just like Oracle external tables for accessing data on the filesystem, Hive external tables can access big data files in HDFS. This example uses a JSON SerDe for Hive which was downloaded from here. The external tables in Hive are quite nice! You just specify the HDFS directory and all files in that directory are projected through the external table definition; you can also specify sophisticated SerDe classes that comply with the Hive SerDe interfaces - such classes can handle complex types as well, which makes them very useful.

Before launching ODI or the agent for this, I added the SerDe JAR file to the following environment variable used by ODI; here is what I did on my system;

  • export ODI_HIVE_SESSION_JARS=$ODI_HIVE_SESSION_JARS:/home/oracle/json/hive-json-serde-0.2.jar

I have 2 JSON files with data in HDFS under /user/oracle/json; I will specify this directory in the external table definition, along with the JSON SerDe. Within my Hive shell I can define the external table, remembering to add the JSON SerDe file (using the ADD JAR command) just before I create the external table.

ADD JAR /home/oracle/json/hive-json-serde-0.2.jar;
CREATE EXTERNAL TABLE IF NOT EXISTS my_table (
          field1 string, field2 int, field3 string, field4 double
       )
       ROW FORMAT SERDE 'org.apache.hadoop.hive.contrib.serde2.JsonSerde'
       LOCATION '/user/oracle/json';

The data in the first JSON file looks like the following; this was just an example from the Google Code snippets;

  • {"field1":"data1","field2":100,"field3":"more data1","field4":123.001}
  • {"field1":"data2","field2":200,"field3":"more data2","field4":123.002}
  • {"field1":"data3","field2":300,"field3":"more data3","field4":123.003}
  • {"field1":"data4","field2":400,"field3":"more data4","field4":123.004}

I have defined these fields in the external table. Back in ODI, in my model below I am using the customized reverse engineering with the RKM Hive mentioned above; for this reverse I just want one table imported, so I set the mask to my_table. Clicking the 'Reverse Engineer' button on the top left of the panel starts the reverse engineering.

If there are errors, check the ODI Operator - that's where you will see anything that goes wrong. The table in ODI holds just some rudimentary information such as the number of columns; the full external table definition stays in Hive, which is the same as when we reverse engineer an Oracle external table. Once the table is captured in ODI I can view the data;
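
For instance, a quick hedged check from the Hive shell against the table defined above shows the same rows that ODI's data viewer displays:

SELECT field1, field2, field4 FROM my_table WHERE field2 >= 200;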

The custom Hive RKM I used is here - this has some changes to the one shipped with ODI to include external table reverse (I included EXTERNAL_TABLE in the getTables API call), plus support for custom SerDe JARs loaded via the environment variable ODI_HIVE_SESSION_JARS (just like the ODI IKMs load the JARs, the RKM also needs to).

Anyway, that's a quick peek at external tables in Hive and reverse engineering them into ODI. Once the table is captured in ODI you can use it in mappings just like other tables!

Friday Dec 28, 2012

ODI - Basic Hive Queries

Here we will see a basic example that joins the movie lens data and then loads a Hive table, based on the tables from the Reverse Engineering Hive Tables post. The Hive table was defined and created via ODI; I duplicated the movies table and added a column for the rating, just for demo purposes...

When I build my interface, I add movies as my source and movies_info as my target; the auto mapping completes much of the mapping. The rating (which is not mapped below) comes from another table - this is where ODI's incremental design is nice: I can add a new datastore as a source, map columns from it, and then describe the join. 

 After I have added the movie ratings table, I will define the join just by dragging movie_id from movies to the ratings table movie_id column. That's the join...mostly defined.

The other thing you need to check is that the ordered join property is set. This will generate the ordered join (ANSI style, but using the Hive technology's template) syntax.

We can also perform transformations using built-in or user-defined functions; below, for example, I am applying the Hive built-in UPPER function to the movie name.

In the physical (flow) view I am using the Hive Control Append IKM; I am using ODI to create the target table in Hive and to truncate it if it already exists. I also have the control flow switched off. 
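
The resulting Hive QL is roughly the following sketch; the ratings table and the column names are assumptions based on the movie lens example, and the real statement is generated by the IKM from the interface:

-- illustrative only: generated by the Hive Control Append IKM from the mapping
INSERT OVERWRITE TABLE movies_info
SELECT UPPER(MOV.name), MOV.movie_id, RAT.rating
FROM movies MOV JOIN ratings RAT ON (MOV.movie_id = RAT.movie_id);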

Executing this is just like any other interface, except that we leverage Hive to perform the heavy lifting. The execution can be inspected in the ODI Operator or console, and the resulting table inspected when complete. 

Friday Sep 21, 2012

Tackling Big Data Analytics with Oracle Data Integrator

 By Mike Eisterer

 The term big data draws a lot of attention, but behind the hype there's a simple story. For decades, companies have been making business decisions based on transactional data stored in relational databases. Beyond that critical data, however, is a potential treasure trove of less structured data: weblogs, social media, email, sensors, and documents that can be mined for useful information.

 Companies are facing emerging technologies, increasing data volumes, numerous data varieties and the processing power needed to efficiently analyze data which changes with high velocity.

 

 

Oracle offers the broadest and most integrated portfolio of products to help you acquire and organize these diverse data sources and analyze them alongside your existing data to find new insights and capitalize on hidden relationships.

 

Oracle Data Integrator Enterprise Edition (ODI) is critical to any enterprise big data strategy. ODI and the Oracle Data Connectors provide native access to Hadoop, leveraging such technologies as MapReduce, HDFS and Hive. Along with ODI’s metadata-driven approach to extracting, loading and transforming data, companies may now integrate their existing data with big data technologies and deliver timely and trusted data to their analytic and decision support platforms. In this session, you’ll learn about ODI and Oracle Big Data Connectors and how, coupled together, they provide the critical integration with multiple big data platforms.

 

Tackling Big Data Analytics with Oracle Data Integrator

October 1, 2012 12:15 PM at MOSCONE WEST – 3005

For other data integration sessions at OpenWorld, please check our Focus-On document

If you are not able to attend OpenWorld, please check out our latest resources for Data Integration.

Monday Aug 20, 2012

Hadoop - Invoke Map Reduce

Carrying on from the previous post on Hadoop and HDFS with ODI packages, this is another commonly needed task - how to execute existing map-reduce code from ODI. I will show this using ODI Packages and the Open Tool framework.

The Hadoop JobConf SDK is the class needed for initiating jobs, whether local or remote - so the ODI agent could be hosted on a system other than the Hadoop cluster, for example, and just fire jobs off to the cluster. Some useful posts, such as this older one on executing map-reduce jobs from Java (following the reply from Thomas Jungblut in this post), also helped me get up to speed.

Where better to start than the WordCount example (see a version of it here, with both mapper and reducer as inner classes); let’s see how this can be invoked from an ODI package. The HadoopRunJob below is a tool I added via the Open Tool framework; it basically wrappers the JobConf SDK, and its parameters are defined in ODI.

You can see some of the parameters below: I define the various class names I need, plus other parameters including the Hadoop job name, and can also specify the job tracker to fire the job on (for a client-server style architecture). The input path and output path are also defined as parameters. You can see the first tool in the package calls the copy-file-to-HDFS tool - this simply demonstrates copying the files needed by the WordCount program into HDFS ready for it to run.

Nice and simple, and it shields a lot of complexity behind some simple tools. The JAR file containing WordCount needed to be available to the ODI Agent (or Studio, since I invoked it with the local agent), and that was it. When the package is executed, the agent processes the code and executes the steps just like normal. If I run the package above it will successfully copy the files to HDFS and perform the word count. On a second execution of the package an error will be reported because the output directory already exists, as below.

I left the example like this to illustrate that we can then extend the package design to have conditional branching to handle errors after a flow, just like the following;

Here after executing the word count, the status is checked and you can conditionally branch on success or failure – just like any other ODI package. I used the beep just for demonstration.

The HadoopRunJob tool used above was built using the v1 MapReduce SDK; with MR2/YARN this will change again - these kinds of changes hammer home the need for better tooling to abstract common concepts for users to exploit.

You can see from these posts that we can provide useful tooling behind the basic mechanics of Hadoop and HDFS very easily, along with the power of generating map-reduce jobs from interface designs which you can see from the OBEs here.

Thursday Aug 16, 2012

Hadoop and HDFS - file system tools

Underpinning the Oracle Big Data Appliance (and any other Hadoop cluster) is HDFS. Working with files in HDFS is much like working with regular file systems (although quite different under the hood), but to manipulate them from the OS you use a different API: Hadoop prefixes commands with 'hadoop fs', as in 'hadoop fs -mkdir' or 'hadoop fs -rm', whereas local file systems use just mkdir or rm. ODI has file management tools available in the package editor - there are tools for preparing files and moving them around. The HDFS commands can act just like any other tool in ODI; let’s see how!

Here I will show you how I added a bunch of tools to perform common HDFS file system commands and map-reduce job execution from within the ODI package editor – this is over and above the support ODI has for building interfaces that exploit the Hadoop ecosystem.

You will see how users can easily perform HDFS actions and execute map-reduce jobs - I will save the latter for another post. Using ODI's Open Tool extensibility in the package editor, I have wrapped the following;

Tool | Description
HDFSCopyFromLocalFile | Copy a file (or group of files) from the local file system to HDFS.
HDFSCopyToLocalFile | Copy a file (or group of files) from HDFS to the local file system.
HDFSMkDirs | Create directories in HDFS. Will create intermediate directories in a path that are missing.
HDFSMoveFromLocalFile | Move a file (or group of files) from the local file system to HDFS.
HDFSRm | Delete directories in HDFS - can recursively delete also.
HadoopRunJob | Run a map-reduce job by defining the job parameters - mapper class, reducer class, formats, input, output etc.

These are common HDFS actions that every Hadoop training walks the user through, and they are equivalents of the local file system tools ODI has under the Files group in the package toolbox. You can see from the example below that the HDFSCopyFromLocalFile call has a source URI to copy and a destination on HDFS - it's very simple to use. This uses the FileSystem SDK to manipulate the HDFS file system. The HDFSCopyFromLocalFile below copies the file /home/oracle/weblogs/apache08122012_1290.log from the local file system to HDFS at /users/oracle/logs/input. Very simple.

The above flow basically removes a directory structure, creates a directory, copies some files, runs a few map-reduce jobs and then tidies up with some removal. I built these tools using ODI’s Open Tool SDK; it's another great way of extending the product. In the image above you can see a bunch of tools in the Plugins folder of the Toolbox. These tools use Hadoop’s SDKs, including org.apache.hadoop.fs.FileSystem, org.apache.hadoop.fs.Path and org.apache.hadoop.conf.Configuration. Another cool aspect of ODI Open Tools is that you can call the tools from commands within a KM task, or from procedure commands.

The dependent JARs must be available for ODI to execute them - a common reply in forums across the web to developers using these SDKs. The HadoopRunJob uses the JobConf SDK to define the map-reduce job (following the reply from Thomas Jungblut in this post); I will cover this in another blog post. For more on integrating Hadoop and ODI, see the self-study material here.

About

Learn the latest trends, use cases, product updates, and customer success examples for Oracle's data integration products-- including Oracle Data Integrator, Oracle GoldenGate and Oracle Enterprise Data Quality
