Tuesday Oct 22, 2013

How to Load Oracle Tables From Hadoop Tutorial (Part 5 - Leveraging Parallelism in OSCH)

Using OSCH: Beyond Hello World

In the previous post we discussed a “Hello World” example for OSCH focusing on the mechanics of getting a toy end-to-end example working. In this post we are going to talk about how to make it work for big data loads. We will explain how to optimize an OSCH external table for load, paying particular attention to Oracle’s DOP (degree of parallelism), the number of external table location files we use, and the number of HDFS files that make up the payload. We will provide some rules that serve as best practices when using OSCH.

The assumption is that you have read the previous post, have an end-to-end OSCH external table working, and now want to ramp up the size of the loads.

Using OSCH External Tables for Access and Loading

OSCH external tables are no different from any other Oracle external tables.  They can be used to access HDFS content using Oracle SQL:

SELECT * FROM my_hdfs_external_table;

or to load a table in Oracle using the same SQL access:

INSERT INTO my_oracle_table SELECT * FROM my_hdfs_external_table;

To speed up the load time, you will want to control the degree of parallelism (i.e. DOP) and add two SQL hints.

ALTER SESSION FORCE PARALLEL DML PARALLEL 8;
ALTER SESSION FORCE PARALLEL QUERY PARALLEL 8;
INSERT /*+ append pq_distribute(my_oracle_table, none) */ INTO my_oracle_table SELECT * FROM my_hdfs_external_table;

There are various ways of hinting at the level of DOP you want to use.  The ALTER SESSION statements above force the issue, assuming you (the user of the session) are allowed to assert the DOP (more on that in the next section).  Alternatively you could embed additional parallel hints directly into the INSERT and SELECT clauses respectively.

/*+ parallel(my_oracle_table,8) */
/*+ parallel(my_hdfs_external_table,8) */
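
For example, here is a sketch of what the fully hinted statement might look like for our running example, assuming a DOP of 8 and the table names used above. (Note that parallel DML also needs to be enabled or forced at the session level for the INSERT side to actually execute in parallel.)

-- Enable parallel DML for this session, then load using the hinted INSERT ... SELECT
ALTER SESSION ENABLE PARALLEL DML;

INSERT /*+ append pq_distribute(my_oracle_table, none) parallel(my_oracle_table, 8) */ INTO my_oracle_table
  SELECT /*+ parallel(my_hdfs_external_table, 8) */ * FROM my_hdfs_external_table;

COMMIT;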

Note that the "append" hint lets you load a target table by reserving space above a given "high watermark" in storage and uses Direct Path load.  In other words, it doesn't try to fill blocks that are already allocated and partially filled. It uses unallocated blocks.  It is an optimized way of loading a table without incurring the typical resource overhead associated with run-of-the-mill inserts.  The "pq_distribute" hint in this context unifies the INSERT and SELECT operators to make data flow during a load more efficient.

Finally, your target Oracle table should be defined with "NOLOGGING" and "PARALLEL" attributes.  The combination of "NOLOGGING" and the "append" hint disables REDO logging and its overhead.  The "PARALLEL" clause tells Oracle to try to use parallel execution when operating on the target table.
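
For example, a minimal sketch of such a target table definition (the column names and types here are placeholders; use whatever matches your HDFS data):

CREATE TABLE my_oracle_table
(
  c1 VARCHAR2(4000),
  c2 NUMBER,
  c3 DATE
)
NOLOGGING
PARALLEL;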

Determine Your DOP

It might feel natural to build your datasets in Hadoop and then afterwards figure out how to tune the OSCH external table definition, but you should work backwards. You should focus first on the Oracle database, specifically on the DOP you want to use when loading (or accessing) HDFS content using external tables.

The DOP in Oracle controls how many PQ slaves are launched in parallel when executing an external table. Typically the DOP is something you want Oracle to control transparently, but for loading content from Hadoop with OSCH, it's something that you will want to control.

Oracle computes the maximum DOP that can be used by an Oracle user. The maximum value that can be assigned is an integer typically equal to the number of CPUs per Oracle instance, times the number of cores per CPU, times the number of Oracle instances. For example, suppose you have a RAC environment with 2 Oracle instances, and each system has 2 CPUs with 32 cores. The maximum DOP would be 128 (i.e. 2*2*32).
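
If you want a rough, first-cut sense of what an instance reports, one hedged sketch is to query a few standard initialization parameters; keep in mind that the DOP your DBA will actually allow is a policy decision, not this arithmetic:

-- Parameters that feed into the parallelism ceiling on this instance
SELECT name, value
  FROM v$parameter
 WHERE name IN ('cpu_count', 'parallel_threads_per_cpu', 'parallel_max_servers');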

In point of fact if you are running on a production system, the maximum DOP you are allowed to use will be restricted by the Oracle DBA. This is because using a system maximum DOP can subsume all system resources on Oracle and starve anything else that is executing. Obviously on a production system where resources need to be shared 24x7, this can’t be allowed to happen.

The use cases for being able to run OSCH with a maximum DOP are when you have exclusive access to all the resources on an Oracle system. This can be when you are first seeding tables in a new Oracle database, or when normal activity in the production database can be safely taken off-line for a few hours to free up resources for a big incremental load. Using OSCH on high-end machines (specifically Oracle Exadata and Oracle BDA cabled with InfiniBand), this mode of operation can load up to 15TB per hour.

The bottom line is that you should first figure out what DOP you will be allowed to run with by talking to the DBAs who manage the production system. You then use that number to derive the number of location files, and (optionally) the number of HDFS data files that you want to generate, assuming that is flexible.

Rule 1: Find out the maximum DOP you will be allowed to use with OSCH on the target Oracle system

Determining the Number of Location Files

Let’s assume that the DBA told you that your maximum DOP was 8. You want the number of location files in your external table to be big enough to utilize all 8 PQ slaves, and you want them to represent equally balanced workloads. Remember location files in OSCH are metadata lists of HDFS files and are created using OSCH’s External Table tool. They also represent the workload size given to an individual Oracle PQ slave (i.e. a PQ slave is given one location file to process at a time, and only it will process the contents of the location file.)

Rule 2: The size of the workload of a single location file (and the PQ slave that processes it) is the sum of the content size of the HDFS files it lists

For example, if a location file lists 5 HDFS files which are each 100GB in size, the workload size for that location file is 500GB.

The number of location files that you generate is something you control by providing a number as input to OSCH’s External Table tool.

Rule 3: The number of location files chosen should be a small multiple of the DOP

Each location file represents one workload for one PQ slave. So the goal is to keep all slaves busy and try to give them equivalent workloads. Obviously if you run with a DOP of 8 but have 5 location files, only five PQ slaves will have something to do; the other three will have nothing to do and will quietly exit. If you run with 9 location files, then the PQ slaves will pick up the first 8 location files and, assuming they have equal workloads, will finish up at about the same time. But the first PQ slave to finish its job will then be rescheduled to process the ninth location file, potentially doubling the end-to-end processing time. So for this DOP, using 8, 16, or 32 location files would be a good idea.

Determining the Number of HDFS Files

Let’s start with the next rule and then explain it:

Rule 4: The number of HDFS files should ideally be a multiple of the number of location files, and the files should be roughly the same size

In our running example, the DOP is 8. This means that the number of location files should be a small multiple of 8. Remember that each location file represents a list of unique HDFS files to load, and that the sum of the files listed in each location file is a workload for one Oracle PQ slave. The OSCH External Table tool will look in an HDFS directory for a set of HDFS files to load.  It will generate N location files (where N is the value you gave to the tool). It will then try to divvy up the HDFS files and do its best to make sure the workload across location files is as balanced as possible. (The tool uses a greedy algorithm that grabs the biggest HDFS file and delegates it to a particular location file. It then looks for the next biggest file and puts it in some other location file, and so on.) The tool's ability to balance is reduced if the HDFS file sizes are grossly out of balance or there are too few of them.

For example, suppose my DOP is 8 and the number of location files is 8. Suppose I have only 8 HDFS files, where one file is 900GB and the others are 100GB. When the tool tries to balance the load it will be forced to put the singleton 900GB file into one location file, and put each of the 100GB files in the 7 remaining location files. The load balance skew is 9 to 1. One PQ slave will be working overtime, while the slacker PQ slaves are off enjoying happy hour.

If however the total payload (1600 GB) were broken up into smaller HDFS files, the OSCH External Table tool would have an easier time generating a list where the workload for each location file is relatively the same.  Applying Rule 4 above to our DOP of 8, we could divide the workload into 160 files that were approximately 10 GB in size.  For this scenario the OSCH External Table tool would populate each location file with 20 HDFS file references, and all location files would have similar workloads (approximately 200GB per location file).

As a rule, when the OSCH External Table tool has more, smaller files to deal with, it will be able to create more balanced loads. How small should HDFS files get? Not so small that the HDFS open and close file overhead starts having a substantial impact. For our performance test system (Exadata/BDA with InfiniBand), I compared three OSCH loads of 1 TiB. One load had 128 HDFS files living in 64 location files, where each HDFS file was about 8GB. I then did the same load with 12800 files, where each HDFS file was about 80MB in size. The end-to-end load time was virtually the same. However, when I went ridiculously small (i.e. 128000 files at about 8MB per file), it started to make an impact and slow down the load time.

What happens if you break rules 3 or 4 above? Nothing draconian, everything will still function. You just won’t be taking full advantage of the generous DOP that was allocated to you by your friendly DBA.

The key point of the rules articulated above is this: if you know that HDFS content is ultimately going to be loaded into Oracle using OSCH, it makes sense to chop it up into the right number of files of roughly the same size, derived from the DOP that you expect to use for loading.

Next Steps

So far we have talked about OLH and OSCH as alternative models for loading. That’s not quite the whole story. They can be used together in a way that provides for more efficient OSCH loads and allows one to be more flexible about scheduling on a Hadoop cluster and an Oracle Database to perform load operations. The next lesson will talk about Oracle Data Pump files generated by OLH, and loaded using OSCH. It will also outline the pros and cons of using various load methods.  This will be followed up with a final tutorial lesson focusing on how to optimize OLH and OSCH for use on Oracle's engineered systems: specifically Exadata and the BDA.


Thursday Sep 05, 2013

How to Load Oracle Tables From Hadoop Tutorial (Part 4 - OSCH Hello World)

Oracle OSCH: A “World Hello” Example

In this post we will walk through Alice in Wonderland's looking glass and do a “Hello World” example for Oracle SQL Connector for HDFS (i.e. OSCH). The above title, “World Hello”, is a play on words meant to drive home the relationship between the two loading models: OLH and OSCH. Both can be used to load an Oracle table, but while OLH runs on Hadoop and uses Hadoop's MapReduce engine to write data into Oracle tables, OSCH uses the SQL engine running on Oracle to read data living in HDFS files. OLH pushes data to Oracle on demand from Hadoop. OSCH pulls data from HDFS on demand from Oracle.

Below we will first review the OSCH execution model and then discuss configuration. OSCH has a few more moving parts to worry about than OLH, which invariably creates hiccups, but if you follow my instructions in the following order, these should be minimized.

  • Perform one-time configuration steps
  • Create an Oracle external table that works against local test data
  • Load the same local test data into an HDFS directory
  • Recreate the external table to reference a Hadoop cluster
  • Use OSCH External Table publishing tool to point the external table to the test data location in HDFS

The OSCH Execution Model

OSCH was explained in the first lesson of the tutorial, but since we are revisiting it in depth, let’s review how it works.

OSCH is simply the plumbing that lets Oracle external tables access HDFS content. Oracle external tables are a well-established mechanism for reading content that is not populated or managed by Oracle. For conventional Oracle external tables, the content lives as files visible to the OS where the Oracle system is running.  These would be either local files or shared network files (e.g. NFS). When you create an Oracle external table you point it to a set of files that constitute data that can be rendered as SQL tables. Oracle external table definitions call these “location” files.

Before OSCH was invented, external tables introduced an option called the PREPROCESSOR directive. Originally it was an option that allowed a user to preprocess a single location file before the content was streamed into Oracle. For instance, if your contents were zip files, the PREPROCESSOR option could specify that “unzip -p” is to be called with each location file, which would unzip the files before passing the unzipped content to Oracle. The output of an executable specified in the PREPROCESSOR directive is always stdout (hence the “-p” option for the unzip call). A PREPROCESSOR executable is a black box to Oracle. All Oracle knows is that when it launches the executable and feeds it a location file path as an argument, the executable will feed back a stream of bits that represents the data of an external table.

OSCH repurposed the PREPROCESSOR directive to provide access to HDFS. Instead of calling something like “unzip” it calls an OSCH tool that streams HDFS file content from Hadoop. The files it reads from HDFS are specified as OSCH metadata living locally in the external table “location” files. (These metadata files are created using OSCH's publishing tool.) In other words, for OSCH, location files do not contain HDFS content, but contain references to HDFS files living in a Hadoop cluster. The OSCH-supplied preprocessor expects to find OSCH metadata in this file.

All this is encapsulated within the Oracle external table definition. The preprocessor logic gets invoked every time one issues a SELECT statement in SQL against the external table. At run time, the OSCH preprocessor is invoked, which opens a “location” file containing metadata. It parses the metadata and then generates a list of HDFS files that it will open, read one at a time, and pipe into Oracle. (The metadata also includes optional CODEC directives, so if the HDFS content needs to be decompressed before being fed to Oracle, the OSCH preprocessor can handle it.)

BTW, if you just got nervous about the performance implications of the “one at a time” phrase above, don’t be. This model is massively scalable.

One Time Configuration Steps

Understand the Requirements for Installing and Configuring OSCH

The things you will need for installing and configuring OSCH include:
  • Access to the system where Oracle is running and to the OS account where Oracle is running (typically the Unix account "oracle”)
  • Access to SQL*Plus and permission to connect as DBA
  • Ability to create an Oracle user (e.g. "oschuser") with enough permission to create an external table and directory objects
  • The OSCH kit 
  • The Hadoop client kit for the Hadoop cluster you want to access
  • The Hadoop client configuration for HDFS access
  • Permission to read, write, and delete files in HDFS as OS user "oracle" (i.e. "oracle" is a Hadoop user)
The formal documentation to install OSCH is here. Below I try to outline the process that has worked best for me.

Install the Bits

Log on to the system where Oracle is running as “oracle”. Carve out an independent directory structure (e.g. /home/oracle/osch) outside of the directory structure of ORACLE_HOME. Install the OSCH kit (called “orahdfs-2.2.0”) and the Hadoop client kit (“hadoop-2.0.0”). I typically make these peers. Both kits need to be unzipped. Hadoop client kits typically require some building to create a few native libraries, usually related to CODECs. You will also unzip the Hadoop configuration files (“hadoop-conf”). Finally, create a default directory for location files that will be referenced by external tables. This is the “exttab” directory below. This directory needs read and write privileges set for “oracle”.

At this point you should have a directory structure that looks something like this:

/home/oracle/osch/orahdfs-2.2.0
/home/oracle/osch/hadoop-2.0.0
/home/oracle/osch/hadoop-conf
/home/oracle/osch/exttab

Configure HDFS

Follow the standard Hadoop client instructions that allow you to access the Hadoop cluster via HDFS from the Oracle system while logged in as “oracle”. Typically this means calling hadoop pointing to the hadoop-conf files you copied over.

With Hadoop you will want to be able to create, read, and write files under the HDFS /user/oracle directory. For the moment carve out an area where we will put test data to read from HDFS using OSCH.

hadoop --config /home/oracle/osch/hadoop-conf fs -mkdir /user/oracle/osch/exttab

Configure OSCH

In the OSCH kit you will need to configure the preprocessor that is used to access the Hadoop cluster and read HDFS files. It is in the OSCH kit under the bin directory, and is called hdfs_stream. This is a bash script which invokes an OSCH executable under the covers. You need to edit the script and provide a definition for OSCH_HOME. You will also need to export modified PATH and JAVA_LIBRARY_PATH definitions to pick up the Hadoop client binaries.

e.g.
OSCH_HOME=/home/oracle/osch/orahdfs-2.2.0
export PATH=/home/oracle/osch/hadoop-2.0.0/bin:/usr/bin:/bin
export JAVA_LIBRARY_PATH=/home/oracle/osch/hadoop-2.0.0/lib/native

Optionally, hdfs_stream allows you to specify where external table log files go. By default they go into the log directory living in the OSCH installation (e.g. /home/oracle/osch/orahdfs-2.2.0/log).

When you’ve completed this step, interactively invoke hdfs_stream with a single bogus argument, “foo”, again on the Oracle system logged in as “oracle”.

e.g.

./hdfs_stream foo
OSCH: Error reading or parsing location file foo

This might seem lame, but it is a good sanity check that ensures Oracle can execute the script while processing an external table. If you get a Java stack trace rather than the above error message, the paths you defined in hdfs_stream are probably broken and need to be fixed.

Configure Oracle for OSCH

In this step you need to first connect to Oracle as SYSDBA and create an Oracle DIRECTORY object that points to the file location where hdfs_stream exists. You create one of these to be shared by any Oracle users running OSCH to connect to a particular Hadoop cluster.

SQLPLUS> CREATE DIRECTORY osch_bin_path as ‘/home/oracle/osch/orahdfs-2.2.0/bin’;

Assuming you’ve created a vanilla Oracle user (e.g. "oschuser") which will own the external table, you want to grant execute privileges on the osch_bin_path directory.

SQLPLUS>GRANT EXECUTE ON DIRECTORY osch_bin_path TO oschuser;

Now reconnect to Oracle as “oschuser” and create an additional directory object that points to the directory where location files live.

SQLPLUS> CREATE DIRECTORY exttab_default_directory AS ‘/home/oracle/osch/exttab’;

At this point you have configured OSCH to run against a Hadoop cluster. Now you move on to create external tables to map to content living in HDFS.

Create an Oracle External Table that works against Local Test Data

You want to create an external table definition that mirrors the table you want to load (e.g. reflecting the same column names and data types.)

Even the simplest local external table definitions take some time to get right, and 99% of the external table verbiage needed to get it working against HDFS is identical to getting it to work against local files, so it makes sense to get a vanilla local external table working before trying it against HDFS. 

What you want to do is take a small, representative set of the sample data that you want to access in HDFS and consolidate it into a single file local to the Oracle system and readable by the “oracle” user. Call it testdata.txt and put it in the /home/oracle/osch/exttab directory, which is our directory for location files. I would recommend starting with a simple CSV text file.

To make things easier we will use the OSCH External Table tool to create an external table definition that you can use as a template to tweak to conform to your data.  This tool can be run from any system that can connect to the Oracle database, but in this case we are going to stay put and run it locally where Oracle is running as the OS "oracle" user.

The tool requires two environment settings to run: JAVA_HOME and CLASSPATH, which needs to reference the tool's jar files:

export JAVA_HOME=/usr/lib/jdk
export CLASSPATH=/home/oracle/osch/orahdfs-2.2.0/jlib/*

For our running example it would look like this:

/home/oracle/osch/hadoop-2.0.0/bin/hadoop jar
  /home/oracle/osch/orahdfs-2.2.0/jlib/orahdfs.jar oracle.hadoop.exttab.ExternalTable
  -D oracle.hadoop.connection.url=jdbc:oracle:thin:@localhost/dbm
  -D oracle.hadoop.connection.user=oschuser
  -D oracle.hadoop.exttab.tableName=helloworld_exttab
  -D oracle.hadoop.exttab.dataPaths=/user/oracle/osch/exttab
  -D oracle.hadoop.exttab.defaultDirectory=exttab_default_directory
  -D oracle.hadoop.exttab.locationFileCount=1
  -D oracle.hadoop.exttab.columnCount=7
  -createTable --noexecute

Let’s decompose this command.

The following invokes the OSCH External Table tool by pointing to the OSCH jar file (“orahdfs.jar”):

/home/oracle/osch/hadoop-2.0.0/bin/hadoop jar
/home/oracle/osch/orahdfs-2.2.0/jlib/orahdfs.jar oracle.hadoop.exttab.ExternalTable

These two lines connect to the Oracle database service ("dbm") as Oracle user “oschuser”:

  -D oracle.hadoop.connection.url=jdbc:oracle:thin:@localhost/dbm
  -D oracle.hadoop.connection.user=oschuser

This identifies the name of the external table we want to create:

-D oracle.hadoop.exttab.tableName=helloworld_exttab

This tells the tool the directory in HDFS where data lives:

-D oracle.hadoop.exttab.dataPaths=/user/oracle/osch/exttab

This indicates where the location files will live (using the name of the Oracle directory created above that maps to "/home/oracle/osch/exttab"):

-D oracle.hadoop.exttab.defaultDirectory=exttab_default_directory

This indicates how many location files we generate. For now since we are only loading one HDFS file, we need only one location file to reference it, so we feed it a value of 1:

-D oracle.hadoop.exttab.locationFileCount=1

This indicates how many columns are in the table:

-D oracle.hadoop.exttab.columnCount=7

Finally we tell the tool to just pretend to create an external table.  This will generate an external table definition and output it to the console:

-createTable --noexecute

The generated console output should look something like this:

Oracle SQL Connector for HDFS Release 2.2.0 - Production

Copyright (c) 2011, 2013, Oracle and/or its affiliates. All rights reserved.

The create table command was not executed.

The following table would be created.

CREATE TABLE "OSCHUSER"."HELLOWORLD_EXTTAB"
(
 "C1"                             VARCHAR2(4000),
 "C2"                             VARCHAR2(4000),
 "C3"                             VARCHAR2(4000),
 "C4"                             VARCHAR2(4000),
 "C5"                             VARCHAR2(4000),
 "C6"                             VARCHAR2(4000),
 "C7"                             VARCHAR2(4000)
)
ORGANIZATION EXTERNAL
(
   TYPE ORACLE_LOADER
   DEFAULT DIRECTORY "EXTTAB_DEFAULT_DIRECTORY"
   ACCESS PARAMETERS
   (
     RECORDS DELIMITED BY 0X'0A'
     CHARACTERSET AL32UTF8
     STRING SIZES ARE IN CHARACTERS
     PREPROCESSOR "OSCH_BIN_PATH":'hdfs_stream'
     FIELDS TERMINATED BY 0X'2C'
     MISSING FIELD VALUES ARE NULL
     (
       "C1" CHAR(4000),
       "C2" CHAR(4000),
       "C3" CHAR(4000),
       "C4" CHAR(4000),
       "C5" CHAR(4000),
       "C6" CHAR(4000),
       "C7" CHAR(4000)
     )
   )
   LOCATION
   (
     'osch-20130904094340-966-1'
   )
) PARALLEL REJECT LIMIT UNLIMITED;

Cut and paste the console output to an editor (or cut and paste the text above), temporarily remove the PREPROCESSOR directive, and rename the location file (i.e. "osch-20130904094340-966-1") to "testdata.txt" (the name of your data file).  You then want to twiddle with the external table verbiage and change the dummy column names (e.g. C1), data types (e.g. VARCHAR2), and field definitions (e.g. CHAR) to reflect the table you want to load. (The details for creating Oracle external tables are explained here.)  Note that the rest of the verbiage (e.g. RECORDS DELIMITED BY) is used to support standard CSV text files, so if the data in your test file is correctly formed as CSV input, then this stuff should be left as is.
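
To make that edit concrete, here is a trimmed sketch of the local-test variant. Only three of the dummy columns are kept for brevity, and the column names and types are placeholders you would change to match your data; note that the PREPROCESSOR line is gone and the location file is simply the local test file:

CREATE TABLE "OSCHUSER"."HELLOWORLD_EXTTAB"
(
 "C1"                             VARCHAR2(4000),
 "C2"                             VARCHAR2(4000),
 "C3"                             VARCHAR2(4000)
)
ORGANIZATION EXTERNAL
(
   TYPE ORACLE_LOADER
   DEFAULT DIRECTORY "EXTTAB_DEFAULT_DIRECTORY"
   ACCESS PARAMETERS
   (
     RECORDS DELIMITED BY 0X'0A'
     CHARACTERSET AL32UTF8
     STRING SIZES ARE IN CHARACTERS
     FIELDS TERMINATED BY 0X'2C'
     MISSING FIELD VALUES ARE NULL
     (
       "C1" CHAR(4000),
       "C2" CHAR(4000),
       "C3" CHAR(4000)
     )
   )
   LOCATION
   (
     'testdata.txt'
   )
) PARALLEL REJECT LIMIT UNLIMITED;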

When you think your external table definition is correct, create the table in Oracle and  try accessing the data from SQL:

SQLPLUS>SELECT * FROM helloworld_exttab;

After you’ve got a SQL SELECT statement working, it's time to load the same data file into HDFS and recreate the external table for remote access.

Load an HDFS directory with Local Test Data File

Using your Hadoop client on the Oracle system, upload the test file you just got working into the HDFS data directory you created earlier.

hadoop fs -put /home/oracle/osch/exttab/testdata.txt /user/oracle/osch/exttab

Recreate the External Table Using the PREPROCESSOR Directive

Now drop the local external table, and recreate it using the identical syntax that worked above, but putting back the PREPROCESSOR directive:

PREPROCESSOR "OSCH_BIN_PATH":'hdfs_stream'

This will redirect processing to HDFS files living in your Hadoop cluster. Don’t try doing a SELECT statement yet. The last step is to recreate location files so they point to content living in HDFS.

Using the OSCH Publishing Tool to point to test data living in HDFS

By adding the PREPROCESSOR directive, you now have an external table that is bound to data living in a Hadoop cluster. You now want to point the external table to data living somewhere in HDFS. For our case that is the data living in the HDFS directory we created and populated above: “/user/oracle/osch/exttab”.

First delete the local data file, testdata.txt, living under /home/oracle/osch/exttab.  That way we know that if the external table works, it's not fooling us by simply accessing local data.

Then rerun the External Table tool with the "publish" command:

/home/oracle/osch/hadoop-2.0.0/bin/hadoop jar
  /home/oracle/osch/orahdfs-2.2.0/jlib/orahdfs.jar oracle.hadoop.exttab.ExternalTable
  -D oracle.hadoop.connection.url=jdbc:oracle:thin:@localhost/dbm
  -D oracle.hadoop.connection.user=oschuser
  -D oracle.hadoop.exttab.tableName=helloworld_exttab
  -D oracle.hadoop.exttab.dataPaths=/user/oracle/osch/exttab
  -D oracle.hadoop.exttab.locationFileCount=1
  -publish

This time the tool executes the "publish" command, which connects to the Oracle database, prompts for "oschuser"'s password, reads the files living in HDFS under “/user/oracle/osch/exttab”, and creates one location file that references our singleton data file "testdata.txt" that we moved into HDFS.  If you look at your local directory, “/home/oracle/osch/exttab”, you will see that it has been populated with a machine-generated file (e.g. “osch-20130821102309-6509-1”) which contains XML verbiage referring to testdata.txt in HDFS.

Test an Oracle External Table that works against HDFS Data

Now connect to Oracle as “oschuser” and issue the same SQL query you did when the data was local.  You should get the same results you did earlier (though the order of the rows might be different).

SQLPLUS>SELECT * FROM helloworld_exttab;

At this point you have SQL access to content living in HDFS.   To use it to load an Oracle table (e.g. "helloworld") you need to use either an INSERT statement:

SQLPLUS> INSERT INTO helloworld SELECT * FROM helloworld_exttab;

or a CREATE TABLE statement:

SQLPLUS>CREATE TABLE helloworld as SELECT * from helloworld_exttab;

What Did We Just Do?

Aside from the one-time initialization steps, what we did was create an external table and test it locally to see if it would work with a particular data file format. We then recreated the external table definition, adding the PREPROCESSOR directive to point to HDFS living in a Hadoop cluster, and used the OSCH External Table tool to point the external table to a directory in HDFS with data files having the same format.

The bindings here are simple to understand:

  • The PREPROCESSOR directive references hdfs_stream which binds external tables to a particular Hadoop cluster
  • The External Table publishing tool binds an external table to a set of data files living in that cluster

If you want to access multiple Hadoop clusters, you simply need to create a copy of “hdfs_stream” with a new name (e.g. “hdfs_stream_2”), configure it to work against the other cluster, and change the PREPROCESSOR directive to call “hdfs_stream_2”.

If you want two external tables to point to two different data sources of the same format, then create a new external table with the same attributes, and use OSCH External Table tool to point to another directory in HDFS.

One question that frequently comes up has to do with using OSCH for SQL access.  Specifically, since external tables map HDFS data, are they useful for doing general purpose Oracle SQL queries against HDFS data, not just for loading an Oracle table?

If the data set is very large and you intend to run multiple SQL queries, then you want to load it into an Oracle table and run your queries against that. The reason has to do with the “black box” design of external tables. The storage is not controlled by Oracle, so there are no indices and no internal structures that Oracle would need to make SQL access efficient. SELECT statements against any external table are full table scans, something the Oracle SQL optimizer tries to avoid because it is resource expensive.
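
As a sketch of that pattern using the running example (the indexed column and the filter value are hypothetical): materialize the HDFS content once, then index and query the Oracle copy.

-- Materialize the HDFS content once into a regular Oracle table
CREATE TABLE helloworld NOLOGGING PARALLEL AS
  SELECT * FROM helloworld_exttab;

-- Indices (and other internal structures) are only possible on the real table
CREATE INDEX helloworld_c1_ix ON helloworld (c1);

-- Subsequent queries hit the indexed Oracle table, not HDFS
SELECT * FROM helloworld WHERE c1 = 'some_value';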

One last point: always use the external table definition to convert text to Oracle native datatypes (e.g. NUMBER, INTEGER, TIMESTAMP, DATE).  Do not rely on CAST and other functions (e.g. to_date) in SQL.  The data type conversion code in external tables is much more efficient.
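
As a hedged illustration of what that looks like (the table, columns, date mask, and location file name here are hypothetical; the access parameters mirror the generated template above): declare native types in the column list and describe the text format in the field list, rather than declaring everything VARCHAR2 and wrapping columns in to_date() or CAST in every query.

CREATE TABLE sales_exttab
(
  sale_id    NUMBER,
  amount     NUMBER,
  sale_date  DATE
)
ORGANIZATION EXTERNAL
(
   TYPE ORACLE_LOADER
   DEFAULT DIRECTORY "EXTTAB_DEFAULT_DIRECTORY"
   ACCESS PARAMETERS
   (
     RECORDS DELIMITED BY 0X'0A'
     PREPROCESSOR "OSCH_BIN_PATH":'hdfs_stream'
     FIELDS TERMINATED BY 0X'2C'
     MISSING FIELD VALUES ARE NULL
     (
       sale_id   CHAR(20),
       amount    CHAR(30),
       sale_date CHAR(10) DATE_FORMAT DATE MASK "YYYY-MM-DD"
     )
   )
   LOCATION
   (
     'osch-generated-location-file'
   )
) PARALLEL REJECT LIMIT UNLIMITED;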

Next Steps

This post was to get a toy example working with a single data file. The next post will focus on how to tune OSCH for large data sets living in HDFS and exploit Oracle's Parallel Query infrastructure for high-performance loads.  We will also discuss the pros and cons of using OSCH versus OLH.

Tuesday Apr 30, 2013

How to Load Oracle Tables From Hadoop Tutorial (Part 1 - Overview)


Introduction

This is the first of a series of blog posts that will discuss how to load data living in the Hadoop Ecosphere into Oracle tables. The goal is to give insights, discuss pros and cons, and best practices for achieving optimal load times and flexibility from an experienced developer’s point of view.

Oracle and Hadoop are complementary technologies where the whole is greater than the sum of the parts. They both have parallel architectures, which, if used intelligently can move data at an impressive rate. Last year, we achieved a load rate of 12TB (terabytes) per hour between Oracle Exadata and Hadoop running on Oracle’s Big Data Appliance (BDA). The ability to distill big data in Hadoop and then to seamlessly move large result sets into the Oracle stack creates enormous added value in solving Big Data problems.

In supporting customers who need this functionality, we’ve noticed that more often than not we are talking to people who are either Hadoop experts or Oracle heavyweights, but not both. In our attempt to explain these two technologies we will offer breakout sections with some rudimentary background notes about Hadoop and Oracle that we think are important to understand, so you can use these tools effectively. Additional specialized topics will also go into loading issues specific to RAC and Exadata environments.

Why Use Oracle Big Data Connectors?

Hadoop developers might be asking themselves the following question: Oracle has been around for a long time managing huge sets of data in tables. Those tables had to be loaded somehow, right? What’s the added value of the Big Data Connectors? Can’t we use the standard utilities Oracle has provided to load tables?

The quick answer is yes. But if you are dealing with Big Data, you really don’t want to.

Some Background about Conventional Loading Tools and Oracle

Oracle's off-the-shelf utility for loading data from external sources is called SQL*Loader. It does a great job loading files of various formats into an Oracle table.

The following SQL*Loader control file illustrates what this utility does:

LOAD DATA
INFILE file1.dat
INFILE file2.dat
INFILE file3.dat
APPEND
INTO TABLE emp
( empno  POSITION(1:4)   INTEGER EXTERNAL,
  ename  POSITION(6:15)  CHAR,
  deptno POSITION(17:18) CHAR,
  mgr    POSITION(20:23) INTEGER EXTERNAL
)

SQL*Loader is being told to open three files and append the existing table “emp” with data from those files, whose column mapping, physical position, and representation are articulated between the parentheses. SQL*Loader is really powerful for processing files of various formats.

But to use this tool with Hadoop you need to work around several problems. The first of which is that Hadoop content lives in Hadoop Distributed File System (HDFS) files, not standard OS file systems. SQL*Loader does not know how to access HDFS directly, so the “INFILE” verbiage is a non-starter.

You could work around this problem two ways. One way is to copy the file from Hadoop onto a local disk on a system where SQL*Loader is installed. The problem with this solution is that Hadoop files are big, very often bigger than any storage you have on a single system. Remember that a single Hadoop file can potentially be huge (say 18TB, larger than the digital content of the Library of Congress). That’s a big storage requirement for a single system, especially for a transient requirement such as staging data. Also you can assume that whatever storage requirements you have today for Big Data, they will certainly grow fast.

Secondly, in order to get the data from HDFS into an Oracle table you are doubling the amount of IO resources consumed. (“Read from HDFS, write into an Oracle table” becomes “Read from HDFS, write to staged file, read from staged file, write into an Oracle table”). When operating against Big Data, doubling the IO overhead is worth avoiding.

An alternative approach is to use FUSE technology (Mountable HDFS) that creates a mount point for HDFS. It is an elegant solution but it is substantially slower than Oracle Big Data Connectors (by a factor of 5) and consumes about three times the CPU.

And in both cases you would be forced to run SQL*Loader on the machine where Oracle lives, not because of some functional limitation of SQL*Loader (you can run it anywhere) but because of the practicalities of working with HDFS which is inherently distributed. Running SQL*Loader on a non-Oracle system means you are moving huge data blocks of distributed data living on any number of Hadoop DataNodes through the network to a single system which will be tasked to pass the entire payload over the network again to Oracle. This model doesn’t scale.

Exploiting Parallelism in Oracle and Hadoop to Load Data

The best solution for loading data from Hadoop to Oracle is to use and align the mechanisms for doing parallel work in both environments.

Parallelism in Oracle Loader for Hadoop (OLH)

For OLH this means running MapReduce programs in Hadoop to break up a load operation into tasks running on all available MapReduce nodes in a Hadoop cluster. These MapReduce tasks run concurrently, naturally dividing the workload into discrete payloads that use Oracle MapReduce code to connect to Oracle Database remotely and load data into a target table. It’s a natural parallel model for Hadoop since the loading logic is encapsulated and run as vanilla MapReduce jobs. And it’s a natural model for Oracle, since the Oracle database system is being tasked to serve multiple clients (i.e MapReduce tasks) loading data at once, using standard client-server architecture that’s been around for decades.

Parallelism in Oracle SQL Connector for Hadoop Distributed File System (OSCH)

OSCH is the alternative approach that marries two other parallel mechanisms: Oracle Parallel Query for Oracle External Tables and Hadoop HDFS Client. To explain how these mechanisms align, let’s first talk about External tables and Parallel Query.

External Tables

External tables are tables defined in Oracle which manage data not living in Oracle. For example, suppose you had an application that managed and frequently updated some structured text files in a system, but you needed to access that data to join it to some Oracle table. You would define an Oracle external table that points to the same structured text files updated by the application, accompanied by verbiage that looks strikingly similar to the SQL*Loader verbiage discussed above. That’s not a coincidence: Oracle external tables use the SQL*Loader driver, which executes SQL*Loader code under the covers.

Parallel Query

Parallel Query (PQ) is a “divide and conquer” strategy that decomposes a SQL statement into partitioned tasks that can execute in parallel and merge the results. PQ exploits the fact that SQL tables are symmetric and can be logically subdivided into horizontal partitions (i.e. sets of rows). With PQ if you want to execute:

SELECT last_name FROM emp WHERE salary > 30000

Oracle can decompose this query into smaller units of work which perform the identical query in parallel against mutually exclusive sets of rows in the “emp” table. For PQ to give you this advantage it needs to be enabled and properly configured (a detail we will talk about in a future post). For now you simply need to understand that PQ works to break down SQL statements into worker bees (i.e. PQ slaves) that divide the load and execute in parallel. In particular, PQ can be enabled for external tables, which allows SQL to access data outside of Oracle in parallel. The amount of parallelism an external table has is configurable and is dictated by the DOP (degree of parallelism). The DOP can be asserted in various ways: as an attribute of a table, within a SQL statement using the table, or at the session level after the user connects to Oracle.
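
To make that concrete, here is a quick sketch of the three ways a DOP of 8 might be asserted, using the “emp” example above:

-- 1. As an attribute of the table:
ALTER TABLE emp PARALLEL 8;

-- 2. As a hint within an individual SQL statement:
SELECT /*+ parallel(emp, 8) */ last_name FROM emp WHERE salary > 30000;

-- 3. At the session level, after the user connects:
ALTER SESSION FORCE PARALLEL QUERY PARALLEL 8;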

HDFS Client

Now let’s talk about Hadoop HDFS Client. This is a Java API living in Hadoop that acts as a client to HDFS file systems. It looks like your standard file system programmatic interface: with open, read, write, and close methods. But because it works against HDFS which distributes individual blocks of a file across a Hadoop cluster, there is a lot of parallelism going on in the back end. Blocks are served up to HDFS by Hadoop DataNodes that are daemons running on Hadoop nodes, serving up data blocks that are stored locally to the node. If you run a lot of HDFS Clients concurrently against different HDFS files, you are doing lots of concurrent IO and concurrent streaming of data, from every Hadoop node that has a running DataNode. In other words you are maximizing retrieval of data from the Hadoop Cluster.

Putting It All Together

OSCH works by using all these mechanisms together. It defines a specialized External table which can invoke HDFS Client software to access data in HDFS. And when PQ is enabled for this type of External table, a SQL select statement gets decomposed into N PQ slaves (where N is the DOP). In other words a SQL select statement can kick off N PQ slaves that are each accessing K Hadoop DataNodes in parallel. Access of HDFS blocks by PQ slaves maximizes disk IO, network bandwidth, and processing both in Oracle and in Hadoop.

With this model, you load an Oracle table (e.g. “MY_TABLE”) by executing a single SQL INSERT statement, one that gets its data from a subordinate SELECT statement referencing the external table that retrieves data from HDFS (e.g. “MY_EXTERNAL_TABLE”).

INSERT INTO MY_TABLE SELECT * FROM MY_EXTERNAL_TABLE;

Actually I lied. It takes two statements.

COMMIT;

Just sayin.

Next Topic

In the following post we will look at OLH in depth, starting with JDBC. We will look at the execution model and discuss the basics of configuring and tuning a MapReduce job used to load a table living in Oracle.

Author’s Background

My background in this space involves both product development and performance. I was pulled into this project about 20 months ago from doing work in Oracle middleware (specifically Oracle Coherence). I am currently working with a talented team that developed OLH and OSCH from scratch. My contribution was to design and prototype OSCH to the point that it scaled, and then spin up on Oracle BDA/Exadata/Hadoop internals to do performance benchmarks and testing. Because I’m almost the newest member of the team, the experience of spinning up in this space is still fresh in my mind, so I have a healthy respect for what it’s like to wrap one's brain around both technologies. Many readers will have much deeper knowledge in either the Oracle space or in Hadoop, so questions or clarifications are welcome.
