Thursday Sep 05, 2013

How to Load Oracle Tables From Hadoop Tutorial (Part 4 - OSCH Hello World)

Oracle OSCH: A “World Hello” Example


In this post we will walk through Alice in Wonderland's looking glass and do a "Hello World" example for Oracle SQL Connector for HDFS (i.e. OSCH). The title above, "World Hello," is a play on words meant to drive home the relationship between the two loading models, OLH and OSCH. Both can be used to load an Oracle table, but while OLH runs on Hadoop and uses Hadoop's MapReduce engine to write data into Oracle tables, OSCH uses the SQL engine running on Oracle to read data living in HDFS files. OLH pushes data to Oracle on demand from Hadoop. OSCH pulls data from HDFS on demand from Oracle.

Below we will first review the OSCH execution model. We will then discuss configuration. OSCH has a few more moving parts to worry about than OLH, which invariably creates hiccups, but if you follow these instructions, in the following order, they should be minimized.

  • Perform one-time configuration steps
  • Create an Oracle external table that works against local test data
  • Load the same local test data into an HDFS directory
  • Recreate the external table to reference a Hadoop cluster
  • Use OSCH External Table publishing tool to point the external table to the test data location in HDFS

The OSCH Execution Model

OSCH was explained in the first lesson of the tutorial, but since we are revisiting it in depth, let’s review how it works.

OSCH is simply the plumbing that lets Oracle external tables access HDFS content. Oracle external tables are a well-established mechanism for reading content that is not populated or managed by Oracle. For conventional Oracle external tables, the content lives as files visible to the OS where the Oracle system is running.  These would be either local files, or shared network files (e.g. NFS). When you create an Oracle external table you point it to a set of files that constitute data that can be rendered as SQL tables. Oracle external table definitions call these "location" files.

Before OSCH was invented, external tables introduced an option called the PREPROCESSOR directive. Originally it allowed a user to preprocess a single location file before the content was streamed into Oracle. For instance, if your content consisted of zip files, the PREPROCESSOR option could specify that "unzip -p" is to be called with each location file, which would unzip the files before passing the unzipped content to Oracle. The output of an executable specified in the PREPROCESSOR directive is always stdout (hence the "-p" option for the unzip call). A PREPROCESSOR executable is a black box to Oracle. All Oracle knows is that when it launches the executable and feeds it a location file path as an argument, the executable will feed back a stream of bits that represents data of an external table.
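
As a concrete sketch of that pre-OSCH pattern: because the PREPROCESSOR clause names an executable but does not pass extra arguments, the "unzip -p" case is typically handled with a tiny wrapper script. Everything below (the directory objects, file names, and table) is hypothetical and just illustrates the shape of the mechanism:

#!/bin/bash
# Hypothetical wrapper (e.g. /home/oracle/bin/unzip_pp.sh); must be executable by "oracle".
# Oracle passes the location file path as $1; "unzip -p" writes the unzipped content to stdout.
/usr/bin/unzip -p "$1"

SQLPLUS> CREATE DIRECTORY exec_dir AS '/home/oracle/bin';

-- assumes a directory object data_dir pointing at the zip files already exists
CREATE TABLE zipped_exttab ("C1" VARCHAR2(4000))
ORGANIZATION EXTERNAL
( TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_dir
  ACCESS PARAMETERS
  ( RECORDS DELIMITED BY NEWLINE
    PREPROCESSOR exec_dir:'unzip_pp.sh'
    FIELDS TERMINATED BY ','
  )
  LOCATION ('data1.zip')
) REJECT LIMIT UNLIMITED;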

OSCH repurposed the PREPROCESSOR directive to provide access to HDFS. Instead of calling something like "unzip" it calls an OSCH tool that streams HDFS file content from Hadoop. The files it reads from HDFS are specified as OSCH metadata living locally in the external table "location" files. (These metadata files are created using OSCH's publishing tool.) In other words, for OSCH, location files do not contain HDFS content, but contain references to HDFS files living in a Hadoop cluster. The OSCH-supplied preprocessor expects to find OSCH metadata in this file.

All this is encapsulated within the Oracle external table definition. The preprocessor logic gets invoked every time one issues a SELECT statement in SQL against the external table. At run time, the OSCH preprocessor is invoked, which opens a "location" file with metadata. It parses the metadata and then generates a list of HDFS files, which it opens and reads one at a time, piping the content into Oracle. (The metadata also includes optional CODEC directives, so if the HDFS content needs to be decompressed before being fed to Oracle, the OSCH preprocessor can handle it.)

BTW, if you just got nervous about the performance implications of the “one at a time” phrase above, don’t be. This model is massively scalable.

One Time Configuration Steps


Understand the Requirements for Installing and Configuring OSCH

The things you will need for installing and configuring OSCH include:
  • Access to the system where Oracle is running and to the OS account where Oracle is running (typically the Unix account "oracle”)
  • Access to SQL*Plus and permission to connect as DBA
  • Ability to create an Oracle user (e.g. "oschuser") with enough permission to create an external table and directory objects
  • The OSCH kit 
  • The Hadoop client kit for the Hadoop cluster you want to access
  • The Hadoop client configuration for HDFS access
  • Permission to read, write, and delete files in HDFS as OS user "oracle" (i.e. "oracle" is a Hadoop user)
The formal documentation to install OSCH is here. Below I outline the process that has worked best for me.

Install the Bits

Log on to the system where Oracle is running as "oracle". Carve out an independent directory structure (e.g. /home/oracle/osch) outside of the directory structure of ORACLE_HOME. Install the OSCH kit (called "orahdfs-2.2.0") and the Hadoop client kit ("hadoop-2.0.0"). I typically make these peers. Both kits need to be unzipped. Hadoop client kits typically require some building to create a few native libraries, usually related to CODECs. You will also unzip the Hadoop configuration files ("hadoop-conf"). Finally you want to create a default directory for location files that will be referenced by external tables. This is the "exttab" directory below. This directory needs read and write privileges set for "oracle".

At this point you should have a directory structure that looks something like this:

/home/oracle/osch/orahdfs-2.2.0
/home/oracle/osch/hadoop-2.0.0
/home/oracle/osch/hadoop-conf
/home/oracle/osch/exttab
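
A minimal sketch of how that structure might be laid down, assuming the kit archives and the configuration bundle have already been copied to /home/oracle/osch (the archive names are illustrative):

# as OS user "oracle" on the Oracle system
cd /home/oracle/osch
unzip orahdfs-2.2.0.zip            # OSCH kit
tar xzf hadoop-2.0.0.tar.gz        # Hadoop client kit (or unzip, depending on packaging)
unzip hadoop-conf.zip              # cluster configuration files
mkdir exttab                       # default directory for external table location files
chmod 755 exttab                   # "oracle" needs read and write access here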

Configure HDFS

Follow the standard Hadoop client instructions that allow you to access the Hadoop cluster via HDFS from the Oracle system while logged in as "oracle". Typically this means calling hadoop and pointing it at the hadoop-conf files you copied over.

With Hadoop you will want to be able to create, read, and write files under HDFS /user/oracle directory. For the moment carve out an area where we will put test data to read from HDFS using OSCH.

hadoop --config /home/oracle/osch/hadoop-conf fs -mkdir /user/oracle/osch/exttab
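
While you are at it, a quick put/cat/rm round trip confirms that "oracle" really does have read, write, and delete access in HDFS (the local file used here is arbitrary):

hadoop --config /home/oracle/osch/hadoop-conf fs -put /etc/hosts /user/oracle/osch/exttab/ping.txt
hadoop --config /home/oracle/osch/hadoop-conf fs -cat /user/oracle/osch/exttab/ping.txt
hadoop --config /home/oracle/osch/hadoop-conf fs -rm /user/oracle/osch/exttab/ping.txt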

Configure OSCH

In the OSCH kit you will need to configure the preprocessor that is used to access the Hadoop cluster and read HDFS files. It is in the OSCH kit under the bin directory, and is called hdfs_stream. This is a bash script which invokes an OSCH executable under the covers. You need to edit the script and provide a definition for OSCH_HOME. You will also need to export modified PATH and JAVA_LIBRARY_PATH definitions to pick up the Hadoop client binaries.

e.g.
OSCH_HOME=/home/oracle/osch/orahdfs-2.2.0
export PATH=/home/oracle/osch/hadoop-2.0.0/bin:/usr/bin:/bin
export JAVA_LIBRARY_PATH=/home/oracle/osch/hadoop-2.0.0/lib/native

Optionally, hdfs_stream allows you to specify where external table log files go. By default they go into the log directory under the OSCH installation (e.g. /home/oracle/osch/orahdfs-2.2.0/log).

When you've completed this step, interactively invoke hdfs_stream with a single bogus argument "foo", again on the Oracle system logged in as "oracle".

e.g.

./hdfs_stream foo
OSCH: Error reading or parsing location file foo

This might seem lame, but it is a good sanity check that ensures Oracle can execute the script while processing an external table. If you get a Java stack trace rather than the above error message, the paths you defined in hdfs_stream are probably broken and need to be fixed.

Configure Oracle for OSCH

In this step you need to first connect to Oracle as SYSDBA and create an Oracle DIRECTORY object that points to the file location where hdfs_stream exists. You create one of these to be shared by any Oracle users running OSCH to connect to a particular Hadoop cluster.

SQLPLUS> CREATE DIRECTORY osch_bin_path AS '/home/oracle/osch/orahdfs-2.2.0/bin';

Assuming you’ve created a vanilla Oracle user (e.g. "oschuser") which will own the external table, you want to grant execute privileges on the osch_bin_path directory.

SQLPLUS>GRANT EXECUTE ON DIRECTORY osch_bin_path TO oschuser;

Now reconnect to Oracle as “oschuser” and create an additional directory to point to the directory where location files live.

SQLPLUS> CREATE DIRECTORY exttab_default_directory AS '/home/oracle/osch/exttab';

At this point you have configured OSCH to run against a Hadoop cluster. Now you move on to create external tables to map to content living in HDFS.

Create an Oracle External Table that works against Local Test Data

You want to create an external table definition that mirrors the table you want to load (e.g. reflecting the same column names and data types.)

Even the simplest local external table definitions take some time to get right, and 99% of the external table verbiage needed to get it working against HDFS is identical to getting it to work against local files, so it makes sense to get a vanilla local external table working before trying it against HDFS. 

What you want to do is take a small representative set of the sample data that you want to access in HDFS and consolidate it into a single file local to the Oracle system and readable by the "oracle" user. Call it testdata.txt and put it in the /home/oracle/osch/exttab directory, which is our directory for location files. I would recommend starting with a simple text CSV file.
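
If you need something to start with, a tiny made-up CSV like this works (seven fields per line, to match the columnCount used in the tool invocation below; the values mean nothing):

# /home/oracle/osch/exttab/testdata.txt
1,100,alpha,2013-09-01,09:00:00,hello world,7
2,200,beta,2013-09-02,10:30:00,hello again,14
3,300,gamma,2013-09-03,11:45:00,goodbye,21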

To make things easier we will use the OSCH External Table tool to create an external table definition that you can use as a template to tweak to conform to your data.  This tool can be run from any system that can connect to the Oracle database, but in this case we are going to stay put and run it locally where Oracle is running as the OS "oracle" user.

The tool requires two environment settings to run: JAVA_HOME and CLASSPATH, where CLASSPATH needs to reference the tool's jar files:

export JAVA_HOME=/usr/lib/jdk
export CLASSPATH=/home/oracle/osch/orahdfs-2.2.0/jlib/*

For our running example it would look like this:

/home/oracle/osch/hadoop-2.0.0/bin/hadoop jar
  /home/oracle/osch/orahdfs-2.2.0/jlib/orahdfs.jar oracle.hadoop.exttab.ExternalTable
  -D oracle.hadoop.connection.url=jdbc:oracle:thin:@localhost/dbm
  -D oracle.hadoop.connection.user=oschuser
  -D oracle.hadoop.exttab.tableName=helloworld_exttab
  -D oracle.hadoop.exttab.dataPaths=/user/oracle/osch/exttab
  -D oracle.hadoop.exttab.defaultDirectory=exttab_default_directory
  -D oracle.hadoop.exttab.locationFileCount=1
  -D oracle.hadoop.exttab.columnCount=7
  -createTable --noexecute

Let’s decompose this command.

The following invokes the OSCH External Table tool by pointing to the OSCH jar file (“orahdfs.jar”):

/home/oracle/osch/hadoop-2.0.0/bin/hadoop jar
/home/oracle/osch/orahdfs-2.2.0/jlib/orahdfs.jar oracle.hadoop.exttab.ExternalTable

These two lines connect to the Oracle database service ("dbm") as Oracle user “oschuser”:

  -D oracle.hadoop.connection.url=jdbc:oracle:thin:@localhost/dbm
  -D oracle.hadoop.connection.user=oschuser

This identifies the name of the external table we want to create:

-D oracle.hadoop.exttab.tableName=helloworld_exttab

This tells the tool the directory in HDFS where data lives:

-D oracle.hadoop.exttab.dataPaths=/user/oracle/osch/exttab

This indicates where the location files will live (using the name of the Oracle directory created above that maps to "/home/oracle/osch/exttab"):

-D oracle.hadoop.exttab.defaultDirectory=exttab_default_directory

This indicates how many location files we generate. For now since we are only loading one HDFS file, we need only one location file to reference it, so we feed it a value of 1:

-D oracle.hadoop.exttab.locationFileCount=1

 This indicates how many columns are in the table:

-D oracle.hadoop.exttab.columnCount=7

Finally we tell the tool to just pretend to create an external table.  This will generate an external table definition and output it to the console:

-createTable --noexecute

The generated console output should look something like this:

Oracle SQL Connector for HDFS Release 2.2.0 - Production

Copyright (c) 2011, 2013, Oracle and/or its affiliates. All rights reserved.

The create table command was not executed.

The following table would be created.

CREATE TABLE "OSCHUSER"."HELLOWORLD_EXTTAB"
(
 "C1"                             VARCHAR2(4000),
 "C2"                             VARCHAR2(4000),
 "C3"                             VARCHAR2(4000),
 "C4"                             VARCHAR2(4000),
 "C5"                             VARCHAR2(4000),
 "C6"                             VARCHAR2(4000),
 "C7"                             VARCHAR2(4000)
)
ORGANIZATION EXTERNAL
(
   TYPE ORACLE_LOADER
   DEFAULT DIRECTORY "EXTTAB_DEFAULT_DIRECTORY"
   ACCESS PARAMETERS
   (
     RECORDS DELIMITED BY 0X'0A'
     CHARACTERSET AL32UTF8
     STRING SIZES ARE IN CHARACTERS
     PREPROCESSOR "OSCH_BIN_PATH":'hdfs_stream'
     FIELDS TERMINATED BY 0X'2C'
     MISSING FIELD VALUES ARE NULL
     (
       "C1" CHAR(4000),
       "C2" CHAR(4000),
       "C3" CHAR(4000),
       "C4" CHAR(4000),
       "C5" CHAR(4000),
       "C6" CHAR(4000),
       "C7" CHAR(4000)
     )
   )
   LOCATION
   (
     'osch-20130904094340-966-1'
   )
) PARALLEL REJECT LIMIT UNLIMITED;

Cut and paste the console output to an editor (or cut and paste the text above), temporarily remove the PREPROCESSOR directive, and rename the location file (i.e. "osch-20130904094340-966-1") to "testdata.txt" (the name of your data file).  You then want to twiddle with the external table verbiage and change the dummy column names (e.g. C1), data types (e.g. VARCHAR2), and field definitions (e.g. CHAR) to reflect the table you want to load. (The details for creating Oracle external tables are explained here.)  Note that the rest of the verbiage (e.g. RECORDS DELIMITED BY) is used to support standard CSV text files, so if the data in your test file is correctly formed as CSV input, this stuff should be left as is.
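
For the local test, the edited definition might end up looking something like this (just a sketch: the generated dummy columns are kept as-is, and only the PREPROCESSOR directive and the location file name have changed; in practice you would also substitute your real column names and types):

CREATE TABLE "OSCHUSER"."HELLOWORLD_EXTTAB"
(
 "C1" VARCHAR2(4000),
 -- ... "C2" through "C6" exactly as generated above ...
 "C7" VARCHAR2(4000)
)
ORGANIZATION EXTERNAL
(
   TYPE ORACLE_LOADER
   DEFAULT DIRECTORY "EXTTAB_DEFAULT_DIRECTORY"
   ACCESS PARAMETERS
   (
     RECORDS DELIMITED BY 0X'0A'
     CHARACTERSET AL32UTF8
     STRING SIZES ARE IN CHARACTERS
     -- PREPROCESSOR directive removed for the local test
     FIELDS TERMINATED BY 0X'2C'
     MISSING FIELD VALUES ARE NULL
     (
       "C1" CHAR(4000),
       -- ... "C2" through "C6" exactly as generated above ...
       "C7" CHAR(4000)
     )
   )
   LOCATION
   (
     'testdata.txt'
   )
) PARALLEL REJECT LIMIT UNLIMITED;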

When you think your external table definition is correct, create the table in Oracle and  try accessing the data from SQL:

SQLPLUS>SELECT * FROM helloworld_exttab;

After you've got a SQL SELECT statement working, it's time to load the same data file into HDFS and recreate the external table for remote access.

Load an HDFS directory with Local Test Data File

Using the hadoop client on your Oracle system, upload the test file you got working into the HDFS data directory you created earlier.

hadoop fs -put /home/oracle/osch/exttab/testdata.txt /user/oracle/osch/exttab

Recreate the External Table Using the PREPROCESSOR Directive

Now drop the local external table, and recreate it using the identical syntax that worked above, but putting back the PREPROCESSOR directive:

PREPROCESSOR "OSCH_BIN_PATH":hdfs_stream

This will redirect processing to HDFS files living in your Hadoop cluster. Don’t try doing a SELECT statement yet. The last step is to recreate location files so they point to content living in HDFS.


Using the OSCH Publishing Tool to point to test data living in HDFS

By adding the PREPROCESSOR directive, you now have an external table that is bound to data living in a Hadoop cluster. You now want to point the external table to data living somewhere in HDFS. For our case that is the data living in the HDFS directory we created and populated above: “/user/oracle/osch/exttab”.

First delete the local data file, testdata.txt, living under /home/oracle/osch/exttab.  That way, if the external table works, we know it's not fooling us by simply accessing local data.

Then rerun the External Table tool with the "publish" command:

/home/oracle/osch/hadoop-2.0.0/bin/hadoop jar
  /home/oracle/osch/orahdfs-2.2.0/jlib/orahdfs.jar oracle.hadoop.exttab.ExternalTable
  -D oracle.hadoop.connection.url=jdbc:oracle:thin:@localhost/dbm
  -D oracle.hadoop.connection.user=oschuser
  -D oracle.hadoop.exttab.tableName=helloworld_exttab
  -D oracle.hadoop.exttab.dataPaths=/user/oracle/osch/exttab
  -D oracle.hadoop.exttab.locationFileCount=1
  -publish

This time the tool executes the "publish" command, which connects to the Oracle database, prompts for "oschuser"'s password, reads the files living in the HDFS under “/user/oracle/osch/exttab” and creates one location file that references our singleton data file "testdata.txt" that we moved into HDFS.  If you look at your local directory, “ /home/oracle/osch/exttab”, you will see that it has been populated with a machine generated file (e.g. “osch-20130821102309-6509-1”) which contains XML verbiage referring to testdata.txt in HDFS.
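
A quick sanity check of what publish left behind might look like this (the generated file name will of course differ on your system):

ls /home/oracle/osch/exttab
# e.g. osch-20130821102309-6509-1  -- XML metadata referencing testdata.txt in HDFS
hadoop --config /home/oracle/osch/hadoop-conf fs -ls /user/oracle/osch/exttab
# testdata.txt should still be listed in HDFS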

Test an Oracle External Table that works against HDFS Data

Now connect to Oracle as "oschuser" and issue the same SQL query you did when the data was local.  You should get the same results you got earlier (though the order of the rows might differ).

SQLPLUS>SELECT * FROM helloworld_exttab;

At this point you have SQL access to content living in HDFS.   To use it to load an Oracle table (e.g. "helloworld") you need to use either an INSERT statement:

SQLPLUS> INSERT INTO helloworld SELECT * FROM helloworld_exttab;

or a CREATE TABLE statement.

SQLPLUS>CREATE TABLE helloworld as SELECT * from helloworld_exttab;

What Did We Just Do?


Aside from the one-time initialization steps, what we did was create an external table and test it locally to see if it would work with a particular data file format; we then recreated the external table definition, adding the PREPROCESSOR directive to point to HDFS content living in a Hadoop cluster.  We then used the OSCH External Table tool to point the external table to a directory in HDFS with data files having the same format.

The bindings here are simple to understand:

  • The PREPROCESSOR directive references hdfs_stream which binds external tables to a particular Hadoop cluster
  • The External Table publishing tool binds an external table to a set of data files living in that cluster

If you want to access multiple Hadoop clusters, you simply need to create a copy of "hdfs_stream", giving it a new name (e.g. "hdfs_stream_2"), configure it to work against the other cluster, and use the PREPROCESSOR directive to call "hdfs_stream_2".

If you want two external tables to point to two different data sources of the same format, then create a new external table with the same attributes, and use OSCH External Table tool to point to another directory in HDFS.

One question that frequently comes up has to do with using OSCH for SQL access.  Specifically, since external tables map HDFS data, are they useful for doing general purpose Oracle SQL queries against HDFS data, not just for loading an Oracle table?

If the data set is very large and you intend to run multiple SQL queries, then you will want to load it into an Oracle table and run your queries against that. The reason has to do with the "black box" design of external tables. The storage is not controlled by Oracle, so there are no indexes and no internal structures that Oracle would need to make SQL access efficient. A SELECT statement against any external table is a full table scan, something the Oracle SQL optimizer tries to avoid because it is resource expensive.

One last point: always use the external table definition to facilitate the conversion of text to Oracle native data types (e.g. NUMBER, INTEGER, TIMESTAMP, DATE).  Do not rely on CAST and other functions (e.g. TO_DATE) in SQL.  The data type conversion code in external tables is much more efficient.
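
For example, for a date field it is better to declare the conversion in the external table's field list than to select raw text and convert it in every query. A hypothetical sketch (the column name and format mask are made up):

-- In the external table: column list declares a DATE, field list tells the access driver how to parse it
--   column list:  "HIREDATE" DATE
--   field list:   "HIREDATE" CHAR(19) DATE_FORMAT DATE MASK "YYYY-MM-DD HH24:MI:SS"

-- Rather than declaring the column as VARCHAR2 and converting on every SELECT:
SQLPLUS> SELECT TO_DATE(hiredate, 'YYYY-MM-DD HH24:MI:SS') FROM helloworld_exttab;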

Next Steps

The goal of this post was to get a toy example working with a single data file. The next post will focus on how to tune OSCH for large data sets living in HDFS and exploit Oracle's parallel query infrastructure for high performance loads.  We will also discuss the pros and cons of using OSCH versus OLH.

Monday Jun 17, 2013

How to Load Oracle Tables From Hadoop Tutorial (Part 2 - Hello World)



A “Hello World” Example using OLH

This post will discuss the basic mechanics for loading an Oracle table using Oracle Loader for Hadoop (OLH). For this “Hello World” discussion, we will use JDBC to drive the example, loading delimited text living in HDFS files into a simple un-partitioned table called “FIVDTI” living in an Oracle schema. It will illustrate the bare bones structure of an OLH job, and touch upon the minimal configuration properties you need to set to get something working end to end. This tutorial assumes that you know how to run basic MapReduce jobs and know how to connect to Oracle with SQL*Plus to create and drop tables using Oracle SQL.

Restating what was explained in the introduction to this tutorial, OLH uses a MapReduce job to read data living in HDFS and to load it into a target table living in Oracle. If the data you are loading is in a typical form (e.g. delimited text or CSV files), you should be able to load a table interactively with a single command.

Requirements for Running an OLH Job

Since OLH runs MapReduce jobs, the OLH command will need to run either on some system in the Hadoop cluster, or on a system that has client access to the cluster. That system will also need JDBC access to an Oracle database that has a schema with the table that you want to load. Finally, the system will need access to an OLH installation and its jar files (i.e. OLH_HOME in our running example below).

If you want to use a Hadoop client, setting it up on a development node is not a lot of hard work. You simply need to download and install the Hadoop software that is running on the cluster using the configuration files specific to the cluster.

You want to sanity test everything to make sure the plumbing works. If you can kick off the Hadoop Wordcount MapReduce job and can read/write/delete HDFS files interactively using Hadoop, your Hadoop plumbing should be ready to accept an OLH job. (See http://hadoop.apache.org/docs/r1.0.4/mapred_tutorial.html for details on sanity checking MapReduce.)

At this point it would be good to carve out a subdirectory in HDFS that you own, which will hold the result directories of OLH jobs that you want to run. (In our working example in this post the result log directory will live in "/user/oracle/olh_test/results/fivdti".) You use hadoop to do this:

hadoop fs -mkdir /user/oracle/olh_test/results/fivdti

You want to make sure that you are either the owner of this HDFS directory or at least have read and write access to it.

Connecting to an Oracle database requires you to connect using an Oracle connection URL with the name of the Oracle user and the Oracle password. In this example, we are assuming the Oracle schema and the Oracle user are the same. (See http://radiofreetooting.blogspot.com/2007/02/user-schema.html for an explanation of the difference between the two concepts.)

Before kicking off an OLH job it is worth taking the time to ensure the Oracle connection credentials are correct. Remember that OLH will load an Oracle table across many Hadoop map or reduce tasks, each of which will try to make a JDBC connection to the database, so you want to eliminate trial-and-error authentication hiccups before handing the credentials to OLH. (See http://www.dba-oracle.com/t_oracle_jdbc_connection_testing.htm for details.)
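
A quick way to rule out credential problems is to make the same connection by hand with SQL*Plus, using the host, port, and service name that will later go into the OLH connection URL (the values below match the connection configuration shown later in this post):

sqlplus olhp/welcome1@//myoraclehost:1511/dbm
SQLPLUS> SELECT table_name FROM user_tables;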

While configuring and testing the OLH framework you will also want to do some minimal administration of Hadoop and Oracle. Specifically you will want to browse the Hadoop JobTracker URL, where you can monitor a load job. You will also need to connect to Oracle via SQL*Plus to administer your schema and to manually inspect the target table using SQL. For purposes of OLH, installing a formal Oracle client on your development system, though convenient, is overkill. You simply need to log onto a system where SQL*Plus is available (e.g. the system where Oracle is running) and where you can connect to the Oracle database.

The Structure of an OLH Command


Let’s start by looking at an OLH command that you would invoke to kick off an OLH MapReduce job. Again this will use JDBC to connect to the Oracle database and load a table with delimited text living in files in HDFS.

$HADOOP_HOME/bin/hadoop jar
  $OLH_HOME/jlib/oraloader.jar oracle.hadoop.loader.OraLoader
  -D oracle.hadoop.loader.jobName=OLHP_fivdti_dtext_jdbc_0_722
  -D oracle.hadoop.loader.loaderMapFile=file:/tmp/loaderMap_fivdti.xml
  -D mapred.reduce.tasks=0
  -D mapred.input.dir=/user/olh_performance/fivdti/56000000_90
  -D mapred.output.dir=/user/oracle/olh_test/results/fivdti/722
  -conf /tmp/oracle_connection.xml
  -conf /tmp/dtextInput.xml
  -conf /tmp/jdbcOutput.xml

The command starts with an invocation of hadoop (living in the Hadoop client) passing the “jar” command.

$HADOOP_HOME/bin/hadoop jar

This is followed by a reference to OLH’s jar file (“oraloader.jar”) and the OLH loader class (fully qualified by its classpath) that defines an OLH load job.

$OLH_HOME/jlib/oraloader.jar oracle.hadoop.loader.OraLoader

The following two properties are consumed by OLH. (Note that the space between a -D and the property name is not a typo. It is Hadoop's convention for setting properties that are directed to Hadoop rather than to the JVM running underneath it.)

The first is to give the job a meaningful name when it executes in Hadoop. When creating a name, I find it useful to capture the name of the table being loaded, the type of input, the load method, the number of reducers used, and a unique job number for the OLH run (i.e. 722).

-D oracle.hadoop.loader.jobName=OLHP_fivdti_dtext_jdbc_0_722

MapReduce jobs are long running batch jobs. While debugging and tuning OLH load operations you will want to spend a lot of time looking at Hadoop’s JobTracker URL to find your job by name and see how it behaves.

The second property points to the loader map. The loader map is an XML file that indicates what Oracle table will be loaded, and indicates the fields in a delimited text file that map to columns in the target table.

-D oracle.hadoop.loader.loaderMapFile=file:/tmp/loaderMap_fivdti.xml

The next three properties are consumed by Hadoop.

The first designates the number of reduce tasks to run. The reduce stage in OLH is used for improving load performance by sorting records by table partitions and optionally sorting by table columns. If the table being loaded is not partitioned and there is no value in sorting by columns, then you will want to set this value to zero. This means that the MapReduce job only runs the map stage, which simply loads the Oracle table.  Note that for JDBC, a zero value can be used even if the table is partitioned.  But if a table is partitioned you probably will want to use the OCI Direct Load method, which does require a non-zero value.  (Using OCI Direct Load to load partitioned tables will be discussed in depth in the next post.)

-D mapred.reduce.tasks=0

The second property designates the HDFS directory that holds input files of delimited text that are to serve as the payload for the target table.

-D mapred.input.dir=/user/olh_performance/fivdti/56000000_90

Finally you specify an HDFS directory where OLH can write logging output specific to the job.

In this case we carved out a directory under /user/oracle/olh_test/results, and assigned a unique integer for the job (i.e. 722). An important note: the subdirectory "722" is created by Hadoop once the job is submitted. If it already exists when the job is submitted, Hadoop will fail the job.

-D mapred.output.dir=/user/oracle/olh_test/results/fivdti/722/

The final arguments are files that complete configuration. For purposes of modularity they are split across three configuration files that can be reused for different types of jobs.

The first file has the credentials needed to connect to the Oracle database.

-conf /tmp/oracle_connection.xml

The second file designates the form of input (“delimited text”).

-conf /tmp/dtextInput.xml

The last file designates the output (i.e. online loading of an Oracle table via JDBC).

-conf /tmp/jdbcOutput.xml

Again all of these properties embedded in these various configuration files could be bundled into one large configuration file or called out explicitly using in-line “-D” properties.

However, assuming you don't have a penchant for obscurity or verbosity, it's probably prudent to break down the various configuration settings into configuration files that are organized to be reusable. Tuning Hadoop to run big MapReduce jobs can take some time to get right, and once you get there it's a good idea to isolate the settings so they can be easily reused for different jobs. It's also easier to test different combinations on the fly (e.g. trying JDBC and then trying OCI Direct Path) against the same table. You just need to swap jdbcOutput.xml for the analogous specification for OCI Direct Path.
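
For instance, an OCI Direct Path output configuration would be a separate small file that simply names a different output format class (the file name here is made up; the class name is the one shipped with OLH, and the OCI path is covered in detail in the next post):

<!-- /tmp/ociOutput.xml -->
<configuration>
  <property>
    <name>mapreduce.outputformat.class</name>
    <value>oracle.hadoop.loader.lib.output.OCIOutputFormat</value>
  </property>
</configuration>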

Loader Maps

Loader maps are used to map fields read as input to columns living in the Oracle table that will be loaded. They also identify the table being loaded (e.g. “FIVDTI”) and the schema (“OLHP”) where the table resides.

First let’s look at the Oracle table we are loading:

SQL> describe fivdti

 Name                          Null?    Type
 ----------------------------- -------- ------------------------
 F1                                     NUMBER
 I2                                     NUMBER(38)
 V3                                     VARCHAR2(50)
 D4                                     DATE
 T5                                     TIMESTAMP(6)
 V6                                     VARCHAR2(200)
 I7                                     NUMBER(38)

Again, the loader map file is identified by the following OLH property:

-D oracle.hadoop.loader.loaderMapFile=file:/tmp/loaderMap_fivdti.xml

The file contains this specification:

<LOADER_MAP>
   <SCHEMA>OLHP</SCHEMA>
   <TABLE>FIVDTI</TABLE>
   <COLUMN field="F0">F1</COLUMN>
   <COLUMN field="F1">I2</COLUMN>
   <COLUMN field="F2">V3</COLUMN>
   <COLUMN field="F3" format="yyyy-MM-dd HH:mm:ss">D4</COLUMN>
   <COLUMN field="F4">T5</COLUMN>
   <COLUMN field="F5">V6</COLUMN>
   <COLUMN field="F6">I7</COLUMN>
</LOADER_MAP>

We need to map fields in the delimited text to the column names specified in the table. Field names in delimited text files default to a simple naming convention, "F0, F1, F2…", reflecting the physical order in which they appear in a line of CSV text. These field names are then paired with column names. At the top of the specification are the schema name and table name.

What is critical about loader maps is ensuring that the text fields being loaded can be legally converted to the Oracle data types of the mapped columns, considering issues such as precision and scale that are asserted on the Oracle columns. Typically for standard scalar types this is straightforward. However, DATE columns are fussier and require the user to describe an explicit format. (Strictly speaking, loader maps are not required in cases where column names reflect field names and the DATE data type is not used, but in this case we are using default field names that are different from the formal column names of the Oracle table.)

Configuration Specified in Files

Let’s look at the settings living in the configuration files specified above.

OLH Connection Properties

The oracle_connection.xml file in the example above contains the credentials needed to find the Oracle database and to connect to a schema. This information gets passed to the OLH MapReduce jobs.

The connection URL follows the “jdbc:oracle:thin” pattern. You need to know the host where the Oracle database is running (e.g. “myoraclehost”), the listening port (e.g. 1511), and the database service name (e.g. “dbm”). You also need to identify the Oracle user (e.g. “olhp”) and password (e.g. “welcome1”). (In a production environment you will want to use Oracle Wallet to avoid storing passwords in clear text, but we will save that issue for a later post.)

<configuration>
  <property>
    <name>oracle.hadoop.loader.connection.url</name>
    <value>jdbc:oracle:thin:@myoraclehost:1511/dbm</value>
  </property>
  <property>
    <name>oracle.hadoop.loader.connection.user</name>
    <value>olhp</value>
  </property>
  <property>
    <name>oracle.hadoop.loader.connection.password</name>
    <value>welcome1</value>
  </property>
</configuration>

OLH Input Format

The dtextInput.xml configuration file in the example above describes the physical characteristics of the rows of data that OLH will read. OLH provides off-the-shelf built-in input format implementations that cover many common formats that you would expect to see living in Hadoop data files. These include delimited text and CSV, text with regular expressions, Hive formats, and Oracle NoSQL database. In general, it’s much easier to generate output in a form that is compliant with the existing built-ins, although rolling your own input format class is a supported option.

In our running example we are using the default settings for delimited text, which is CSV. Additional properties for delimited text allow you to specify alternative field terminators and to require explicit field enclosers (useful when you want to use CSV as an input format but have commas embedded in field values); see the sketch after the base configuration below.

<configuration>
  <property>
    <name>mapreduce.inputformat.class</name>
    <value>oracle.hadoop.loader.lib.input.DelimitedTextInputFormat</value>
  </property>
</configuration>
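
As a sketch of the tweaks mentioned above, a tab-delimited variant with quoted fields might look like this (terminator and encloser property names as documented for OLH delimited text input; the values are illustrative):

<configuration>
  <property>
    <name>mapreduce.inputformat.class</name>
    <value>oracle.hadoop.loader.lib.input.DelimitedTextInputFormat</value>
  </property>
  <property>
    <name>oracle.hadoop.loader.input.fieldTerminator</name>
    <value>\u0009</value>
  </property>
  <property>
    <name>oracle.hadoop.loader.input.initialFieldEncloser</name>
    <value>"</value>
  </property>
  <property>
    <name>oracle.hadoop.loader.input.trailingFieldEncloser</name>
    <value>"</value>
  </property>
</configuration>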

OLH Output Format

The jdbcOutput.xml file is used to specify how we want to load Oracle. Output formats can be divided into two types: those that load the data directly into an Oracle database table, or those that store the data in a new set of files in HDFS that can be pulled into Oracle later.

For our running example, we are going to use JDBC.

<configuration>
  <property>
    <name>mapreduce.outputformat.class</name>
    <value>oracle.hadoop.loader.lib.output.JDBCOutputFormat</value>
  </property>
</configuration>

Running the OLH Job

When you kick off the OLH Job it will give you console output that looks something like this:

Copyright (c) 2011, 2013, Oracle and/or its affiliates. All rights reserved.
13/05/30 09:18:00 INFO loader.OraLoader: Oracle Loader for Hadoop Release 2.1.0 - Production

Copyright (c) 2011, 2013, Oracle and/or its affiliates. All rights reserved.

13/05/30 09:18:00 INFO loader.OraLoader: Built-Against: hadoop-2.0.0-mr1-cdh4.1.2 hive-0.9.0-cdh4.1.2 avro-1.6.3 jackson-1.8.8
13/05/30 09:18:02 INFO loader.OraLoader: oracle.hadoop.loader.loadByPartition is disabled because mapred.reduce.tasks=0
13/05/30 09:18:02 INFO loader.OraLoader: oracle.hadoop.loader.enableSorting disabled: cannot sort by key when number of reducers is zero
13/05/30 09:18:02 INFO output.DBOutputFormat: Setting map tasks speculative execution to false for : oracle.hadoop.loader.lib.output.JDBCOutputFormat
13/05/30 09:18:02 WARN loader.OraLoader: Sampler error: the number of reduce tasks must be greater than one; the configured value is 0 . Job will continue without sampled information.
13/05/30 09:18:02 INFO loader.OraLoader: Sampling time=0D:0h:0m:0s:14ms (14 ms)
13/05/30 09:18:02 INFO loader.OraLoader: Submitting OraLoader job OraLoader .
13/05/30 09:18:02 INFO input.FileInputFormat: Total input paths to process : 90
13/05/30 09:18:04 INFO loader.OraLoader: map 0% reduce 0%
13/05/30 09:18:19 INFO loader.OraLoader: map 1% reduce 0%
13/05/30 09:18:20 INFO loader.OraLoader: map 2% reduce 0%
13/05/30 09:18:22 INFO loader.OraLoader: map 3% reduce 0%

….
13/05/30 09:21:13 INFO loader.OraLoader: map 95% reduce 0%
13/05/30 09:21:16 INFO loader.OraLoader: map 96% reduce 0%
13/05/30 09:21:18 INFO loader.OraLoader: map 97% reduce 0%
13/05/30 09:21:20 INFO loader.OraLoader: map 98% reduce 0%
13/05/30 09:21:23 INFO loader.OraLoader: map 99% reduce 0%
13/05/30 09:21:31 INFO loader.OraLoader: map 100% reduce 0%
13/05/30 09:21:33 INFO loader.OraLoader: Job complete: OraLoader (job_201305201106_0524)

Note that because we are loading a simple non-partitioned table, this is a map-only job where loading is done by mappers and there is no reduce phase. The warning message at the outset of the job is about an OLH feature called the sampler. It is used when tables are partitioned to balance the reducers doing a load. Since the target table is not partitioned the warning about the sampler being disabled is not interesting.

Where to look if something went wrong

I’ve run OLH jobs interactively daily for more than a year. When something goes wrong, Hadoop console output will make it obvious, and typically gives a pretty good idea of what problem you are having. I also rely on Hadoop’s Job and Task tracker UIs which allow you to drill down to a failed task and look at the output it produces: typically Java stack dumps and log messages that detail the problems it was having.

The results directory in HDFS that was specified in the “mapred.output.dir" setting in the OLH command contains lots of information about a job run. In the directory there will be a top level report called “oraloader-report.txt”. It offers a clean breakdown of time spent by tasks running in Hadoop that were used to load the target table. It probably is the quickest way of looking at the workloads and determining if they are unbalanced.

Hadoop and Connection Resources


About the only additional issue that you need to be concerned about for this kind of job is the number of concurrent connections the Oracle database accepts. This becomes a problem when the number of Hadoop tasks that are loading a table concurrently exceeds the number of connections that Oracle will accept. If that happens the loading tasks will fail with an ORA-00020 error.

You want to check the number of map and reduce slots that are configured for Hadoop. For map-only jobs, if the number of map slots is less than the number of connections Oracle accepts, there won't be a problem. The same holds true for full MapReduce jobs if the number of reduce slots is less than the maximum number of Oracle connections accepted.

If this is not true you need to artificially restrict the concurrency of load tasks.

For map-only jobs (like the one illustrated above) this means you will need to restrict the number of map slots in the cluster available to the OLH job to something less than the number of connections Oracle allows.

For full MapReduce OLH jobs (which are more typical) loading occurs in the reduce phase, and this can be easily controlled in the OLH command by tweaking the “mapred.reduce.tasks” property mentioned above, and setting it to an appropriate number.

-D mapred.reduce.tasks=20
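
To see what the database side will tolerate before settling on a number, you can check the relevant limits from SQL*Plus (the processes and sessions parameters bound how many concurrent load tasks Oracle will accept; exact values vary by configuration):

SQLPLUS> SHOW PARAMETER processes
SQLPLUS> SHOW PARAMETER sessions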

Summary

To summarize, a bare bones OLH configuration typically needs the following information:

  • How to connect to Oracle
  • How many reduce tasks to run
  • The form of input (e.g. CSV) and output (e.g. JDBC)
  • An HDFS directory containing input files
  • An HDFS directory where OLH can write information about job results
  • A loader map that tells what fields correspond to what columns in a table



That's it.  Easy-peasy lemon squeezy.  All OLH commands pretty much look like the one used above with minor substitutions.  Creating configurations for other tables typically requires only creating a new loader map specification. All the other configuration files can be reused.

The next post will discuss using OLH with OCI Direct Load which is what you will want to use when loading big tables that are sorted by key or are partitioned. We will spend a lot of time discussing performance issues such as load balancing, using the OLH sampler, and using the SDP transport protocol.

Thursday Nov 08, 2012

Welcome!

Welcome to the Oracle Big Data Connectors blog, which will focus on posts related to integrating data on a Hadoop cluster with Oracle Database.

In particular the blog will focus on best practices, usage notes, and performance tips for using Oracle Loader for Hadoop and Oracle Direct Connector for HDFS, which are part of Oracle Big Data Connectors. Oracle Big Data Connectors 1.0 also includes Oracle R Connector for Hadoop and Oracle Data Integrator Application Adapters for Hadoop.

Oracle Loader for Hadoop: Oracle Loader for Hadoop loads data from Hadoop to Oracle Database. It runs as a MapReduce job on Hadoop to partition, sort, and convert the data into an Oracle-ready format, offloading to Hadoop the processing that is typically done using database CPUs. The data is then loaded into the database by the Oracle Loader for Hadoop job (online load) or written out as Oracle Data Pump files for load and access later (offline load) with Oracle Direct Connector for HDFS.

Oracle Direct Connector for HDFS: Oracle Direct Connector for HDFS is a connector for high speed access of data on HDFS from Oracle Database. With this connector Oracle SQL can be used to directly query data on HDFS. The data can be Oracle Data Pump files generated by Oracle Loader for Hadoop or delimited text files. The connector can also be used to load data into the database using SQL.
