Thursday Feb 06, 2014

Try the connectors in Oracle BigDataLite VM

You can now try all the examples technical expert Bob has been blogging about in a VM.  Download the VM from here, and the instructions (hands-on lab) for using the connectors in the VM from here. The VM contains complete examples for using the connectors and other big data products. Check it out!

The VM contains a single-node Hadoop cluster, an Oracle Database, the Oracle Big Data Connectors, and a lot more.  With everything running in the same VM it is not suited for performance testing, but it can be used to play around with the technology and test functionality.

Tuesday Oct 22, 2013

How to Load Oracle Tables From Hadoop Tutorial (Part 5 - Leveraging Parallelism in OSCH)

Using OSCH: Beyond Hello World

In the previous post we discussed a “Hello World” example for OSCH focusing on the mechanics of getting a toy end-to-end example working. In this post we are going to talk about how to make it work for big data loads. We will explain how to optimize an OSCH external table for load, paying particular attention to Oracle’s DOP (degree of parallelism), the number of external table location files we use, and the number of HDFS files that make up the payload. We will provide some rules that serve as best practices when using OSCH.

The assumption is that you have read the previous post and have some end-to-end OSCH external tables working, and now you want to ramp up the size of the loads.

Using OSCH External Tables for Access and Loading

OSCH external tables are no different from any other Oracle external tables.  They can be used to access HDFS content using Oracle SQL:

SELECT * FROM my_hdfs_external_table;

or they can use the same SQL access to load a table in Oracle:

INSERT INTO my_oracle_table SELECT * FROM my_hdfs_external_table;

To speed up the load time, you will want to control the degree of parallelism (i.e. DOP) and add two SQL hints.

ALTER SESSION FORCE PARALLEL DML PARALLEL  8;
ALTER SESSION FORCE PARALLEL QUERY PARALLEL 8;
INSERT /*+ append pq_distribute(my_oracle_table, none) */ INTO my_oracle_table SELECT * FROM my_hdfs_external_table;

There are various ways of hinting at the level of DOP you want to use.  The ALTER SESSION statements above force the issue, assuming you (the user of the session) are allowed to assert the DOP (more on that in the next section).  Alternatively you could embed additional parallel hints directly into the INSERT and SELECT clauses respectively.

/*+ parallel(my_oracle_table,8) */
/*+ parallel(my_hdfs_external_table,8) */
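
Putting it all together, a minimal sketch of a fully hinted load statement (reusing the table names above and assuming a DOP of 8) might look like this:

INSERT /*+ append parallel(my_oracle_table,8) pq_distribute(my_oracle_table, none) */ INTO my_oracle_table
  SELECT /*+ parallel(my_hdfs_external_table,8) */ * FROM my_hdfs_external_table;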

Note that the "append" hint lets you load a target table by reserving space above a given "high watermark" in storage and uses Direct Path load.  In other words, it doesn't try to fill blocks that are already allocated and partially filled. It uses unallocated blocks.  It is an optimized way of loading a table without incurring the typical resource overhead associated with run-of-the-mill inserts.  The "pq_distribute" hint in this context unifies the INSERT and SELECT operators to make data flow during a load more efficient.

Finally, your target Oracle table should be defined with the "NOLOGGING" and "PARALLEL" attributes.   The combination of "NOLOGGING" and the "append" hint disables REDO logging and its overhead.  The "PARALLEL" clause tells Oracle to try to use parallel execution when operating on the target table.
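
As a sketch (the table name is from the example above; the columns here are hypothetical), the target table definition might look like this:

CREATE TABLE my_oracle_table
(
  c1  VARCHAR2(4000),
  c2  VARCHAR2(4000),
  c3  VARCHAR2(4000)
)
NOLOGGING
PARALLEL;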

Determine Your DOP

It might feel natural to build your datasets in Hadoop, then afterwards figure out how to tune the OSCH external table definition, but you should start backwards. You should focus on Oracle database, specifically the DOP you want to use when loading (or accessing) HDFS content using external tables.

The DOP in Oracle controls how many PQ slaves are launched in parallel when executing an external table. Typically the DOP is something you want to Oracle to control transparently, but for loading content from Hadoop with OSCH, it's something that you will want to control.

Oracle computes the maximum DOP that can be used by an Oracle user. The maximum value that can be assigned is an integer value typically equal to the number of CPUs on your Oracle instances, times the number of cores per CPU, times the number of Oracle instances. For example, suppose you have a RAC environment with 2 Oracle instances. And suppose that each system has 2 CPUs with 32 cores. The maximum DOP would be 128 (i.e. 2*2*32).
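
If you are curious about the inputs to that calculation on your own instance, a quick sanity check (a generic sketch, nothing OSCH-specific, and it assumes you can query V$ views) is to look at the relevant initialization parameters:

SELECT name, value
  FROM v$parameter
 WHERE name IN ('cpu_count', 'parallel_threads_per_cpu', 'parallel_max_servers');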

In point of fact, if you are running on a production system, the maximum DOP you are allowed to use will be restricted by the Oracle DBA. This is because using the system maximum DOP can consume all system resources on Oracle and starve anything else that is executing. Obviously, on a production system where resources need to be shared 24x7, this can’t be allowed to happen.

The use case for running OSCH with a maximum DOP is when you have exclusive access to all the resources on an Oracle system. This can be when you are first seeding tables in a new Oracle database, or when normal activity in the production database can safely be taken off-line for a few hours to free up resources for a big incremental load. Using OSCH on high-end machines (specifically Oracle Exadata and Oracle BDA cabled with InfiniBand), this mode of operation can load up to 15TB per hour.

The bottom line is that you should first figure out what DOP you will be allowed to run with by talking to the DBAs who manage the production system. You then use that number to derive the number of location files, and (optionally) the number of HDFS data files that you want to generate, assuming that is flexible.

Rule 1: Find out the maximum DOP you will be allowed to use with OSCH on the target Oracle system

Determining the Number of Location Files

Let’s assume that the DBA told you that your maximum DOP was 8. You want the number of location files in your external table to be big enough to utilize all 8 PQ slaves, and you want them to represent equally balanced workloads. Remember location files in OSCH are metadata lists of HDFS files and are created using OSCH’s External Table tool. They also represent the workload size given to an individual Oracle PQ slave (i.e. a PQ slave is given one location file to process at a time, and only it will process the contents of the location file.)

Rule 2: The size of the workload of a single location file (and the PQ slave that processes it) is the sum of the content size of the HDFS files it lists

For example, if a location file lists 5 HDFS files which are each 100GB in size, the workload size for that location file is 500GB.

The number of location files that you generate is something you control by providing a number as input to OSCH’s External Table tool.

Rule 3: The number of location files chosen should be a small multiple of the DOP

Each location file represents one workload for one PQ slave, so the goal is to keep all slaves busy and to give them equivalent workloads. Obviously, if you run with a DOP of 8 but have only 5 location files, five PQ slaves will have something to do while the other three will have nothing to do and will quietly exit. If you run with 9 location files, the PQ slaves will pick up the first 8 location files and, assuming they have equal workloads, will finish at about the same time. But the first PQ slave to finish will then be rescheduled to process the ninth location file, potentially doubling the end-to-end processing time. So for this DOP, using 8, 16, or 32 location files would be a good idea.

Determining the Number of HDFS Files

Let’s start with the next rule and then explain it:

Rule 4: The number of HDFS files should be a multiple of the number of location files, and the files should be of roughly the same size

In our running example, the DOP is 8. This means that the number of location files should be a small multiple of 8. Remember that each location file represents a list of unique HDFS files to load, and that the sum of the files listed in each location file is a workload for one Oracle PQ slave. The OSCH External Table tool will look in an HDFS directory for a set of HDFS files to load.  It will generate N location files (where N is the value you gave to the tool). It will then try to divvy up the HDFS files and do its best to make sure the workload across location files is as balanced as possible. (The tool uses a greedy algorithm that grabs the biggest HDFS file and delegates it to a particular location file. It then looks for the next biggest file and puts it in some other location file, and so on.) The tool's ability to balance is reduced if HDFS file sizes are grossly out of balance or if there are too few of them.

For example, suppose my DOP is 8 and the number of location files is 8. Suppose I have only 8 HDFS files, where one file is 900GB and the others are 100GB each. When the tool tries to balance the load it will be forced to put the singleton 900GB file into one location file and put each of the 100GB files in the 7 remaining location files. The load balance skew is 9 to 1: one PQ slave will be working overtime while the slacker PQ slaves are off enjoying happy hour.

If, however, the total payload (1600GB) were broken up into smaller HDFS files, the OSCH External Table tool would have an easier time generating a list where the workload for each location file is relatively the same.  Applying Rule 4 above to our DOP of 8, we could divide the workload into 160 files of approximately 10GB each.  For this scenario the OSCH External Table tool would populate each location file with 20 HDFS file references, and all location files would have similar workloads (approximately 200GB per location file).

As a rule, the OSCH External Table tool can create more balanced loads when it has more, smaller files to work with. How small should HDFS files get? Not so small that the HDFS open and close file overhead starts having a substantial impact. On our performance test system (Exadata/BDA with InfiniBand), I compared three OSCH loads of 1 TiB. One load had 128 HDFS files spread across 64 location files, where each HDFS file was about 8GB. I then did the same load with 12800 files, where each HDFS file was about 80MB in size. The end-to-end load time was virtually the same. However, when I got ridiculously small (i.e. 128000 files at about 8MB per file), it started to make an impact and slow down the load time.

What happens if you break rules 3 or 4 above? Nothing draconian, everything will still function. You just won’t be taking full advantage of the generous DOP that was allocated to you by your friendly DBA.

The key point of the rules articulated above is this: if you know that HDFS content is ultimately going to be loaded into Oracle using OSCH, it makes sense to chop it up into the right number of files of roughly the same size, derived from the DOP that you expect to use for loading.
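
Before generating location files, it is worth eyeballing the size distribution of the HDFS files you plan to load. A simple sketch of such a check (the path is hypothetical; substitute the HDFS directory holding your data files, and drop the -h flag if your Hadoop client does not support it):

hadoop fs -du -h /user/oracle/mydata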

Next Steps

So far we have talked about OLH and OSCH as alternative models for loading. That’s not quite the whole story. They can be used together in a way that makes OSCH loads more efficient and gives you more flexibility in scheduling load operations across a Hadoop cluster and an Oracle database. The next lesson will talk about Oracle Data Pump files generated by OLH and loaded using OSCH. It will also outline the pros and cons of the various load methods.  This will be followed up with a final tutorial lesson focusing on how to optimize OLH and OSCH for use on Oracle's engineered systems: specifically Exadata and the BDA.


Thursday Sep 05, 2013

How to Load Oracle Tables From Hadoop Tutorial (Part 4 - OSCH Hello World)

Oracle OSCH: A “World Hello” Example

In this post we will walk through Alice in Wonderland's looking glass and do a “Hello World” example for Oracle SQL Connector for HDFS (i.e. OSCH). The title above, “World Hello”, is a play on words meant to drive home the relationship between the two loading models: OLH and OSCH. Both can be used to load an Oracle table, but while OLH runs on Hadoop and uses Hadoop’s MapReduce engine to write data into Oracle tables, OSCH uses the SQL engine running on Oracle to read data living in HDFS files. OLH pushes data to Oracle on demand from Hadoop. OSCH pulls data from HDFS on demand from Oracle.

Below we will first review the OSCH execution model and then discuss configuration. OSCH has a few more moving parts to worry about than OLH, which invariably creates hiccups, but if you follow my instructions, in the following order, these should be minimized.

  • Perform one-time configuration steps
  • Create an Oracle external table that works against local test data
  • Load the same local test data into an HDFS directory
  • Recreate the external table to reference a Hadoop cluster
  • Use OSCH External Table publishing tool to point the external table to the test data location in HDFS

The OSCH Execution Model

OSCH was explained in the first lesson of the tutorial, but since we are revisiting it in depth, let’s review how it works.

OSCH is simply the plumbing that lets Oracle external tables access HDFS content. Oracle external tables are a well-established mechanism for reading content that is not populated or managed by Oracle. For conventional Oracle external tables, the content lives as files visible to the OS where the Oracle system is running.  These would be either local files or shared network files (e.g. NFS). When you create an Oracle external table you point it to a set of files that constitute data that can be rendered as SQL tables. Oracle external table definitions call these “location” files.

Before OSCH was invented, external tables introduced an option called the PREPROCESSOR directive. Originally it allowed a user to preprocess a single location file before the content was streamed into Oracle. For instance, if your content consisted of zip files, the PREPROCESSOR option could specify that “unzip -p” is to be called with each location file, which would unzip the files before passing the unzipped content to Oracle. The output of an executable specified in the PREPROCESSOR directive always goes to stdout (hence the “-p” option on the unzip call). A PREPROCESSOR executable is a black box to Oracle. All Oracle knows is that when it launches the executable and feeds it a location file path as an argument, the executable will feed back a stream of bits representing the data of an external table.

OSCH repurposed the PREPROCESSOR directive to provide access to HDFS. Instead of calling something like “unzip” it calls an OSCH tool that streams HDFS file content from Hadoop. The files it reads from HDFS are specified as OSCH metadata living locally in the external table “location” files. (These metadata files are created using OSCH’s publishing tool.) In other words, for OSCH, location files do not contain HDFS content, but contain references to HDFS files living in a Hadoop cluster. The OSCH-supplied preprocessor expects to find OSCH metadata in this file.

All this is encapsulated within the Oracle external table definition. The preprocessor logic gets invoked every time you issue a SELECT statement in SQL against the external table. At run time, the OSCH preprocessor is invoked, which opens a “location” file containing metadata. It parses the metadata and then generates a list of files in HDFS that it will open, one at a time, and read, piping the content into Oracle. (The metadata also includes optional CODEC directives, so if the HDFS content needs to be decompressed before being fed to Oracle, the OSCH preprocessor can handle that too.)

BTW, if you just got nervous about the performance implications of the “one at a time” phrase above, don’t be. This model is massively scalable.

One Time Configuration Steps

Understand the Requirements for Installing and Configuring OSCH

The things you will need for installing and configuring OSCH include:
  • Access to the system where Oracle is running and to the OS account where Oracle is running (typically the Unix account "oracle")
  • Access to SQL*Plus and permission to connect as DBA
  • Ability to create an Oracle user (e.g. "oschuser") with enough permission to create an external table and directory objects
  • The OSCH kit
  • The Hadoop client kit for the Hadoop cluster you want to access
  • The Hadoop client configuration for HDFS access
  • Permission to read, write, and delete files in HDFS as OS user "oracle" (i.e. "oracle" is a Hadoop user)

The formal documentation to install OSCH is here. Below I outline the process that has worked best for me.

Install the Bits

Logon to the system where Oracle is running as “oracle”. Carve out an independent directory structure (e.g. /home/oracle/osch) outside the ORACLE_HOME directory structure. Install the OSCH kit (called “orahdfs-2.2.0”) and the Hadoop client kit (“hadoop-2.0.0”); I typically make these peers. Both kits need to be unzipped. Hadoop client kits typically require some building to create a few native libraries, usually related to CODECs. You will also unzip the Hadoop configuration files (“hadoop-conf”). Finally, you want to create a default directory for location files that will be referenced by external tables. This is the “exttab” directory below. This directory needs read and write privileges set for “oracle”.
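
A minimal sketch of laying that out (the download location and archive names are assumptions; substitute whatever your kits are actually called):

mkdir -p /home/oracle/osch
cd /home/oracle/osch
unzip /tmp/orahdfs-2.2.0.zip          # OSCH kit
tar xzf /tmp/hadoop-2.0.0.tar.gz      # Hadoop client kit
unzip /tmp/hadoop-conf.zip            # Hadoop client configuration
mkdir exttab                          # default directory for location files
chmod 750 exttab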

At this point you should have a directory structure that looks something like this:

/home/oracle/osch/orahdfs-2.2.0
/home/oracle/osch/hadoop-2.0.0
/home/oracle/osch/hadoop-conf
/home/oracle/osch/exttab

Configure HDFS

Follow the standard Hadoop client instructions that allow you to access the Hadoop cluster via HDFS from the Oracle system while logged in as “oracle”. Typically this means calling Hadoop and pointing it at the hadoop-conf files you copied over.

With Hadoop you will want to be able to create, read, and write files under the HDFS /user/oracle directory. For the moment, carve out an area where we will put test data to read from HDFS using OSCH:

hadoop --config /home/oracle/osch/hadoop-conf fs -mkdir /user/oracle/osch/exttab
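
If the command succeeds, a quick check that "oracle" can actually see the new directory (using the same client configuration) is:

hadoop --config /home/oracle/osch/hadoop-conf fs -ls /user/oracle/osch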

Configure OSCH

In the OSCH kit you will need to configure the preprocessor that is used to access the Hadoop cluster and read HDFS files. It is in the OSCH kit under the bin directory, and is called hdfs_stream. This is a bash script which invokes an OSCH executable under the covers. You need to edit the script and provide a definition for OSCH_HOME. You will also need to modify and export modified PATH and JAVA_LIBRARY_PATH definitions to pick up Hadoop client binaries.

e.g.
OSCH_HOME=/home/oracle/osch/orahdfs-2.2.0
export PATH=/home/oracle/osch/hadoop-2.0.0/bin:/usr/bin:/bin
export JAVA_LIBRARY_PATH=/home/oracle/osch/hadoop-2.0.0/lib/native

Optionally, hdfs_stream allows you to specify where external table log files go. By default they go into the log directory under the OSCH installation (e.g. /home/oracle/osch/orahdfs-2.2.0/log).

When you’ve completed this step, interactively invoke hdfs_stream with a single bogus argument, “foo”, again on the Oracle system logged in as “oracle”.

e.g.

./hdfs_stream foo
OSCH: Error reading or parsing location file foo

This might seem lame, but it is a good sanity check that ensures Oracle can execute the script while processing an external table. If you get a Java stack trace rather than the above error message, the paths you defined in hdfs_stream are probably broken and need to be fixed.

Configure Oracle for OSCH

In this step you need to first connect to Oracle as SYSDBA and create an Oracle DIRECTORY object that points to the file location where hdfs_stream exists. You create one of these to be shared by any Oracle users running OSCH to connect to a particular Hadoop cluster.

SQLPLUS> CREATE DIRECTORY osch_bin_path AS '/home/oracle/osch/orahdfs-2.2.0/bin';

Assuming you’ve created a vanilla Oracle user (e.g. "oschuser") which will own the external table, you want to grant execute privileges on the osch_bin_path directory.

SQLPLUS>GRANT EXECUTE ON DIRECTORY osch_bin_path TO oschuser;

Now reconnect to Oracle as “oschuser” and create an additional directory to point to the directory where location files live.

SQLPLUS> CREATE DIRECTORY exttab_default_directory AS '/home/oracle/osch/exttab';
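
As a quick sanity check from the "oschuser" session, you can confirm that both directory objects exist and point where you expect:

SQLPLUS> SELECT directory_name, directory_path FROM all_directories
         WHERE directory_name IN ('OSCH_BIN_PATH', 'EXTTAB_DEFAULT_DIRECTORY');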

At this point you have configured OSCH to run against a Hadoop cluster. Now you move on to create external tables to map to content living in HDFS.

Create an Oracle External Table that works against Local Test Data

You want to create an external table definition that mirrors the table you want to load (e.g. reflecting the same column names and data types.)

Even the simplest local external table definitions take some time to get right, and 99% of the external table verbiage needed to get it working against HDFS is identical to getting it to work against local files, so it makes sense to get a vanilla local external table working before trying it against HDFS. 

What you want to do is take a small, representative set of the sample data that you want to access in HDFS and localize it as a single file, local to the Oracle system and readable by the “oracle” user. Call it testdata.txt and put it in the /home/oracle/osch/exttab directory, which is our directory for location files.   I would recommend starting with a simple text CSV file.
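
For instance, a made-up seven-column CSV sample (the columns and values here are purely hypothetical, but they match the columnCount of 7 used below) could be as simple as:

1,John,Smith,Engineering,E3,50000,US
2,Mary,Jones,Marketing,E4,60000,US
3,Raj,Patel,Finance,E2,45000,IN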

To make things easier we will use the OSCH External Table tool to create an external table definition that you can use as a template to tweak to conform to your data.  This tool can be run from any system that can connect to the Oracle database, but in this case we are going to stay put and run it locally where Oracle is running as the OS "oracle" user.

The tool requires two environment settings to run: JAVA_HOME and CLASSPATH, the latter of which needs to reference the tool's jar files:

export JAVA_HOME=/usr/lib/jdk
export CLASSPATH=/home/oracle/osch/orahdfs-2.2.0/jlib/*

For our running example it would look like this:

/home/oracle/osch/hadoop-2.0.0/bin/hadoop jar
  /home/oracle/osch/orahdfs-2.2.0/jlib/orahdfs.jar oracle.hadoop.exttab.ExternalTable
  -D oracle.hadoop.connection.url=jdbc:oracle:thin:@localhost/dbm
  -D oracle.hadoop.connection.user=oschuser
  -D oracle.hadoop.exttab.tableName=helloworld_exttab
  -D oracle.hadoop.exttab.dataPaths=/user/oracle/osch/exttab
  -D oracle.hadoop.exttab.defaultDirectory=exttab_default_directory
  -D oracle.hadoop.exttab.locationFileCount=1
  -D oracle.hadoop.exttab.columnCount=7
  -createTable --noexecute

Let’s decompose this command.

The following invokes the OSCH External Table tool by pointing to the OSCH jar file (“orahdfs.jar”):

/home/oracle/osch/hadoop-2.0.0/bin/hadoop jar
/home/oracle/osch/orahdfs-2.2.0/jlib/orahdfs.jar oracle.hadoop.exttab.ExternalTable

These two lines connect to the Oracle database service ("dbm") as Oracle user “oschuser”:

  -D oracle.hadoop.connection.url=jdbc:oracle:thin:@localhost/dbm
  -D oracle.hadoop.connection.user=oschuser

This identifies the name of the external table we want to create:

-D oracle.hadoop.exttab.tableName=helloworld_exttab

This tells the tool the directory in HDFS where data lives:

-D oracle.hadoop.exttab.dataPaths=/user/oracle/osch/exttab

This indicates where the location files will live (using the name of the Oracle directory created above that maps to "/home/oracle/osch/exttab"):

-D oracle.hadoop.exttab.defaultDirectory=exttab_default_directory

This indicates how many location files we generate. For now since we are only loading one HDFS file, we need only one location file to reference it, so we feed it a value of 1:

-D oracle.hadoop.exttab.locationFileCount=1

 This indicates how many columns are in the table:

-D oracle.hadoop.exttab.columnCount=7

Finally we tell the tool to just pretend to create an external table.  This will generate an external table definition and output it to the console:

-createTable --noexecute

The generated console output should look something like this:

Oracle SQL Connector for HDFS Release 2.2.0 - Production

Copyright (c) 2011, 2013, Oracle and/or its affiliates. All rights reserved.

The create table command was not executed.

The following table would be created.

CREATE TABLE "OSCHUSER"."HELLOWORLD_EXTTAB"
(
 "C1"                             VARCHAR2(4000),
 "C2"                             VARCHAR2(4000),
 "C3"                             VARCHAR2(4000),
 "C4"                             VARCHAR2(4000),
 "C5"                             VARCHAR2(4000),
 "C6"                             VARCHAR2(4000),
 "C7"                             VARCHAR2(4000)
)
ORGANIZATION EXTERNAL
(
   TYPE ORACLE_LOADER
   DEFAULT DIRECTORY "EXTTAB_DEFAULT_DIRECTORY"
   ACCESS PARAMETERS
   (
     RECORDS DELIMITED BY 0X'0A'
     CHARACTERSET AL32UTF8
     STRING SIZES ARE IN CHARACTERS
     PREPROCESSOR "OSCH_BIN_PATH":'hdfs_stream'
     FIELDS TERMINATED BY 0X'2C'
     MISSING FIELD VALUES ARE NULL
     (
       "C1" CHAR(4000),
       "C2" CHAR(4000),
       "C3" CHAR(4000),
       "C4" CHAR(4000),
       "C5" CHAR(4000),
       "C6" CHAR(4000),
       "C7" CHAR(4000)
     )
   )
   LOCATION
   (
     'osch-20130904094340-966-1'
   )
) PARALLEL REJECT LIMIT UNLIMITED;

Cut and paste the console output into an editor (or cut and paste the text above), temporarily remove the PREPROCESSOR directive, and rename the location file (i.e. "osch-20130904094340-966-1") to "testdata.txt" (the name of your data file).  You then want to twiddle with the external table verbiage and change the dummy column names (e.g. C1), data types (e.g. VARCHAR2), and field definitions (e.g. CHAR) to reflect the table you want to load. (The details for creating Oracle external tables are explained here.)  Note that the rest of the verbiage (e.g. RECORDS DELIMITED BY) is there to support standard CSV text files, so if the data in your test file is correctly formed CSV input, that part should be left as is.
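
As an illustration only (the column names and types below are hypothetical and should be replaced with ones that match your data), the tweaked, locally testable version for the seven-column sample above might end up looking like this:

CREATE TABLE "OSCHUSER"."HELLOWORLD_EXTTAB"
(
 "EMP_ID"       NUMBER,
 "FIRST_NAME"   VARCHAR2(100),
 "LAST_NAME"    VARCHAR2(100),
 "DEPT"         VARCHAR2(100),
 "GRADE"        VARCHAR2(10),
 "SALARY"       NUMBER,
 "COUNTRY"      VARCHAR2(10)
)
ORGANIZATION EXTERNAL
(
   TYPE ORACLE_LOADER
   DEFAULT DIRECTORY "EXTTAB_DEFAULT_DIRECTORY"
   ACCESS PARAMETERS
   (
     RECORDS DELIMITED BY 0X'0A'
     CHARACTERSET AL32UTF8
     STRING SIZES ARE IN CHARACTERS
     FIELDS TERMINATED BY 0X'2C'
     MISSING FIELD VALUES ARE NULL
     (
       "EMP_ID"       CHAR(4000),
       "FIRST_NAME"   CHAR(4000),
       "LAST_NAME"    CHAR(4000),
       "DEPT"         CHAR(4000),
       "GRADE"        CHAR(4000),
       "SALARY"       CHAR(4000),
       "COUNTRY"      CHAR(4000)
     )
   )
   LOCATION
   (
     'testdata.txt'
   )
) PARALLEL REJECT LIMIT UNLIMITED;

Note that the PREPROCESSOR line is gone and the location file now names testdata.txt, exactly as described above.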

When you think your external table definition is correct, create the table in Oracle and  try accessing the data from SQL:

SQLPLUS>SELECT * FROM helloworld_exttab;

After you’ve got a SQL SELECT statement working, it's time to load the same data file into HDFS and recreate the external table for remote access.

Load an HDFS directory with Local Test Data File

Using your Hadoop client on the Oracle system, upload the test file you got working into the HDFS data directory you created earlier:

hadoop fs -put /home/oracle/osch/exttab/testdata.txt /user/oracle/osch/exttab

Recreate the External Table Using the PREPROCESSOR Directive

Now drop the local external table, and recreate it using the identical syntax that worked above, but putting back the PREPROCESSOR directive:

PREPROCESSOR "OSCH_BIN_PATH":'hdfs_stream'

This will redirect processing to HDFS files living in your Hadoop cluster. Don’t try doing a SELECT statement yet. The last step is to recreate location files so they point to content living in HDFS.

Using the OSCH Publishing Tool to point to test data living in HDFS

By adding the PREPROCESSOR directive, you now have an external table that is bound to data living in a Hadoop cluster. You now want to point the external table to data living somewhere in HDFS. For our case that is the data living in the HDFS directory we created and populated above: “/user/oracle/osch/exttab”.

First, delete the local data file, testdata.txt, living under /home/oracle/osch/exttab.  That way we know that if the external table works, it's not fooling us by simply accessing local data.
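
On the Oracle system, logged in as "oracle", that amounts to:

rm /home/oracle/osch/exttab/testdata.txt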

Then rerun the External Table tool with the "publish" command:

/home/oracle/osch/hadoop-2.0.0/bin/hadoop jar
  /home/oracle/osch/orahdfs-2.2.0/jlib/orahdfs.jar oracle.hadoop.exttab.ExternalTable
  -D oracle.hadoop.connection.url=jdbc:oracle:thin:@localhost/dbm
  -D oracle.hadoop.connection.user=oschuser
  -D oracle.hadoop.exttab.tableName=helloworld_exttab
  -D oracle.hadoop.exttab.dataPaths=/user/oracle/osch/exttab
  -D oracle.hadoop.exttab.locationFileCount=1
  -publish

This time the tool executes the "publish" command, which connects to the Oracle database, prompts for "oschuser"'s password, reads the files living in HDFS under “/user/oracle/osch/exttab”, and creates one location file that references our singleton data file "testdata.txt" that we moved into HDFS.  If you look at your local directory, “/home/oracle/osch/exttab”, you will see that it has been populated with a machine-generated file (e.g. “osch-20130821102309-6509-1”) which contains XML verbiage referring to testdata.txt in HDFS.

Test an Oracle External Table that works against HDFS Data

Now connect to Oracle as “oschuser” and issue the same SQL query you did when the data was local.  You should get identical results to the ones you got earlier (though the order of the rows might be different).

SQLPLUS>SELECT * FROM helloworld_exttab;

At this point you have SQL access to content living in HDFS.   To use it to load an Oracle table (e.g. "helloworld") you need to use either an INSERT statement:

SQLPLUS> INSERT INTO helloworld SELECT * FROM helloworld_exttab;

or a CREATE TABLE statement:

SQLPLUS>CREATE TABLE helloworld as SELECT * from helloworld_exttab;

What Did We Just Do?

Aside from the one-time initialization steps, what we did was create an external table and test it locally to see if it would work with a particular data file format. Then we recreated the external table definition, adding the PREPROCESSOR directive to point to HDFS content living in a Hadoop cluster.  We then used the OSCH External Table tool to point the external table to a directory in HDFS with data files having the same format.

The bindings here are simple to understand:

  • The PREPROCESSOR directive references hdfs_stream which binds external tables to a particular Hadoop cluster
  • The External Table publishing tool binds an external table to a set of data files living in that cluster

If you want to access multiple Hadoop clusters, you simply need to create a copy of “hdfs_stream” with a new name (e.g. "hdfs_stream_2”), configure it to work against the other cluster, and use the PREPROCESSOR directive to call “hdfs_stream_2”.

If you want two external tables to point to two different data sources of the same format, then create a new external table with the same attributes, and use the OSCH External Table tool to point it to another directory in HDFS.

One question that frequently comes up has to do with using OSCH for SQL access.  Specifically, since external tables map HDFS data, are they useful for doing general purpose Oracle SQL queries against HDFS data, not just for loading an Oracle table?

If the data set is very large and you intend to run multiple SQL queries, then you want to load it into an Oracle table and run your queries against that table. The reason has to do with the “black box” design of external tables. The storage is not controlled by Oracle, so there are no indexes and no internal structures that Oracle could use to make SQL access efficient. A SELECT statement against any external table is a full table scan, something the Oracle SQL optimizer tries to avoid because it is resource expensive.

One last point: always use the external table definition to do the conversion of text to Oracle native datatypes (e.g. NUMBER, INTEGER, TIMESTAMP, DATE).  Do not rely on CAST and other functions (e.g. to_date) in SQL.  The data type conversion code in external tables is much more efficient.
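
As a minimal hedged sketch of the difference (the table, columns, and location file below are hypothetical), the conversion is declared in the external table's field list rather than in the query:

CREATE TABLE events_exttab
(
  "EVENT_ID"   NUMBER,
  "EVENT_DATE" DATE
)
ORGANIZATION EXTERNAL
(
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY "EXTTAB_DEFAULT_DIRECTORY"
  ACCESS PARAMETERS
  (
    RECORDS DELIMITED BY 0X'0A'
    PREPROCESSOR "OSCH_BIN_PATH":'hdfs_stream'
    FIELDS TERMINATED BY 0X'2C'
    MISSING FIELD VALUES ARE NULL
    (
      "EVENT_ID"   CHAR(20),
      "EVENT_DATE" CHAR(10) DATE_FORMAT DATE MASK "YYYY-MM-DD"
    )
  )
  LOCATION
  (
    'osch-eventdata-1'
  )
) PARALLEL REJECT LIMIT UNLIMITED;

-- EVENT_DATE arrives as a native DATE; no to_date() call is needed:
SELECT event_id, event_date FROM events_exttab;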

Next Steps

This post was about getting a toy example working with a single data file. The next post will focus on how to tune OSCH for large data sets living in HDFS and how to exploit Oracle's parallel query infrastructure for high performance loads.  We will also discuss the pros and cons of using OSCH versus OLH.
