Thursday Mar 12, 2015

When does "free" challenge that old adage, "You get what you pay for"?

This post generated a lot of attention and follow-on discussion about the "numbers" seen in the post and related YCSB benchmark results.

Some readers commented, "Only 26,000 tps on 3 machines? I can get 50,000+ on one," and

"Only 10,000 YCSB Workload B tps on 6 nodes? I can get more than that on one."

I thought it was worth stating the obvious, because sometimes what is perfectly clear to one person is completely opaque to another: numbers like "1 million tps" are meaningless without context. A dead simple example to illustrate the point: I might be able to do 50K inserts of an account balance (50K txns) in half a second on a given server machine. Take that same server and try to insert 50K fingerprint images (50K txns), and if you can get that done in half a second, call me, because magic of that nature is priceless and we should talk.

So for clarity,

[Read More]

Tuesday Jan 20, 2015

Using R to plot data from NoSql tables

Oracle NoSQL tables are relatively new, and the communication between those tables and other Big Data systems and tools is still under construction. There is a package (rkvstore) that gives R access to NoSQL key-value data, but this package does not support tables. This post presents a way to call R from Java and to work with NoSQL table data using the R package Rserve, and shows how to generate a plot of this data.


You need to install into R the package Rserve, which can be found here (then Download/Files).

To launch the R server, first install the Rserve package:

R CMD INSTALL  <path to the Rserve.tar.gz  source package> 

Run R



R server : gets data from a Java program and returns the result

NoSql Tables

To create and load Nosql tables refer to this post.

Java Code

The main steps of the Java program are:

  • Connect to the kvstore
  • Get data from a table (via an iterator)
  • Create an R session
  • Transform data from the iterator to an R format
  • Assign data to R variables
  • Generate the R elements to make the plot
  • Display the data
  • Disconnect
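The "transform data from the iterator to an R format" step above comes down to collecting column values into primitive arrays, since Rserve's RConnection.assign() works on arrays. A minimal stdlib-only sketch of that step (the class and method names are illustrative, not part of the KV or Rserve APIs):

```java
import java.util.List;

// Illustrative: collect a column of numeric table values into a double[]
// vector, the form Rserve's RConnection.assign() expects for plotting.
class RVectors {
    public static double[] toRVector(List<? extends Number> column) {
        double[] v = new double[column.size()];
        for (int i = 0; i < v.length; i++) {
            v[i] = column.get(i).doubleValue();
        }
        return v;
    }
}
```

In the real program, one such vector per plotted column would be assigned to an R variable before generating the plot.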

You can find the Java source code for this blog entry here.


Run Java with the following class and arguments:

nosql.r.RViewTablesBlog kvstore kvhost:kvport tablename fligthrangebegin fligthrangeend

Use -1 for fligthrangebegin and fligthrangeend to ignore the flight range.


Add kvclient.jar from NoSQL, and REngine.jar and RServeEngine.jar from Rserve, to the classpath.


R returns an image similar to:

Plot result from R

Enjoy ! 

Monday Jan 12, 2015

Big Data SQL for Oracle NoSQL Database (part 1)


We recently announced Oracle Big Data SQL support for Oracle NoSQL Database. This feature allows Oracle Database users to connect to external data repositories like Oracle NoSQL Database and/or Hadoop, and enables them to fetch data from any or all of the repositories (at once) through a single SQL query. We believe this is a significant step forward, as it streamlines the adoption of big data technologies into a traditional technology stack.

If you are following the big data space, you would agree that everyone likes its scalability, availability, and performance-at-cost aspects, but almost everyone struggles to modify their enterprise applications to work with big data technologies, because those technologies don't talk the same language as traditional database systems.

For innovation to happen fast, we need to lower the adoption barrier for the developers and data scientists who have been at the core of this data revolution. By providing the same SQL declarative language for relational and non-relational data, we have made it simpler for everyone to adopt these big data technologies.

SQL on Oracle NoSQL Database

Having an access layer using a declarative language is definitely a great thing, but a common trend these days is that every NoSQL vendor writes its own custom SQL-like layer. If you want an existing application to access relational data as well as non-relational data stored in a NoSQL database, you cannot write a single SQL query that spans the two separate stores, which again complicates application development and manageability.

Oracle Big Data SQL for Oracle NoSQL Database fills this gap by providing exactly the same SQL access layer to both stores, so queries look exactly the same no matter which store you access. To achieve this, Oracle Big Data SQL requires that Oracle NoSQL Database support access to its data via Hive. Thus, plugging Oracle NoSQL Database into Oracle Database is as simple as three steps: 1 (creating a NoSQL store and a table), 2 (configuring Hive to access the NoSQL table) and 3 (configuring Oracle Database to talk to HCatalog).

Please note that Big Data SQL provides much better query performance than running a query directly on generic external tables, because with the former approach the computation (for selecting and formatting data) is moved to where the data is, as opposed to moving all the data to the Oracle Database node first and then applying the predicates. The same proprietary cell technology (developed for Exadata) is used to apply the query predicates in parallel across all the distributed NoSQL partitions, making the overall query run faster.

In the rest of the article I will give you the technical details of the first two steps, so you can get hands-on experience with this feature.

Figure 1: Flow diagram showing steps to access data in Oracle NoSQL Database using hive queries.


Step 1: Create a TABLE in Oracle NoSQL Database and load some data that we would like to access from HIVE later on.

  • For this demonstration we will be using KVLITE, but you can very well deploy a real cluster by following the steps covered in this webinar. Remember to set the environment variable KVHOME to point to the directory where you unzipped the latest Oracle NoSQL Database binaries (download here), and KVROOT (e.g. /u01/kvroot) to the location where you would like the metadata files to be stored.

$java -jar $KVHOME/lib/kvstore.jar kvlite -root $KVROOT -host localhost &

  • Once you have an Oracle NoSQL Database cluster deployed (in our case a single instance), connect to the administration shell using the runadmin command, and then from the kv prompt connect to the store (kvstore in my case).

$java -jar $KVHOME/lib/kvstore.jar runadmin -port 5000 -host localhost

kv-> CONNECT STORE -name kvstore 

  • Create the MOVIE table with a few columns (to capture movie-specific information):

kv-> EXECUTE "CREATE TABLE MOVIE( \
             id INTEGER, \
             original_title STRING,\
             overview STRING,\
             poster_path STRING,\
             release_date STRING,\
             vote_count INTEGER,\
             runtime INTEGER,\
             popularity DOUBLE,\
             genres STRING,\
             PRIMARY KEY (id) )" 

  • SHOW TABLES to confirm it was created successfully, and DESCRIBE to make sure all the column types are appropriate as well.



  • Next we will load some movie data into the table. You can download movie.dat, a file with details of about 10 movies represented as JSON strings. We will load these strings directly into the MOVIE table.

kv-> PUT TABLE -name MOVIE -file /tmp/movie.dat
Inserted 10 rows in movie table

  • You can now COUNT the rows and GET the rows to make sure data is indeed there.

kv-> AGGREGATE TABLE -name MOVIE -count
Row count: 10
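For reference, each line of movie.dat is a single JSON object whose field names match the MOVIE columns, and PUT TABLE parses one row per line. A hypothetical helper that composes such a line (reduced to three columns for brevity; not part of the KV API):

```java
// Illustrative: compose one movie.dat-style JSON line whose field names
// match a subset of the MOVIE table columns (helper name is hypothetical).
class MovieJson {
    public static String toJsonLine(int id, String title, int voteCount) {
        return String.format(
            "{\"id\":%d,\"original_title\":\"%s\",\"vote_count\":%d}",
            id, title, voteCount);
    }
}
```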



Step 2: Create a Hive external table using the Storage Handler for Oracle NoSQL Database (shipped in release 3.2.5).

  • Before we begin doing anything with Hive, we first need to install Hadoop on our system.
  • After Hadoop is installed and tested, download the Hive archive from here. I am using apache-hive-0.14.0 at the time of this article, but you can download the latest version. Unzip the archive and create the environment variable HIVE_HOME pointing to this directory.
  • Here are some of the environment variables set in my ~/.bashrc file:

export HADOOP_HOME=/u01/home/hadoop/hadoop-2.2.0
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"

export KVHOME=/u01/nosql/kvhome

export HIVE_HOME=/u01/home/hadoop/apache-hive-0.14.0
export HIVE_AUX_JARS_PATH=$KVHOME/lib/kvclient.jar

export PATH=$PATH:$HOME/bin:$JAVA_HOME/bin:$HIVE_HOME/bin:\

  • NOTICE that we have HIVE_AUX_JARS_PATH set to kvclient.jar. We need to set this variable so that Hive can find the Storage Handler for Oracle NoSQL Database, which ships in the client driver itself.
  • Now the only thing left is creating an external table to mirror the table that exists in Oracle NoSQL Database. Let's connect to hive first and then create the table using this DDL:


hive> CREATE EXTERNAL TABLE 
         MOVIE( id INT, 
                original_title STRING,
                overview STRING, 
                poster_path STRING,
                release_date STRING, 
                vote_count INT, 
                runtime INT, 
                popularity DOUBLE, 
                genres STRING) 
         STORED BY 'oracle.kv.hadoop.hive.table.TableStorageHandler' 
         TBLPROPERTIES  ( "oracle.kv.kvstore"="kvstore",
                          "oracle.kv.hosts"="localhost:5000",
                          "oracle.kv.tableName"="MOVIE");

Time taken: 0.174 seconds

          Note: STORED BY is where we define the storage handler class for Oracle NoSQL Database, and TBLPROPERTIES is where we define the name of the store, the host name and port where the storage node agent is listening (in our case we have only one node), and the name of the table in NoSQL DB. We have MOVIE as the table there, so that is what we use in the oracle.kv.tableName field.

  • That is all one needs to do to configure Hive to work with Oracle NoSQL Database. Pretty simple, right? 

Step 2b: Now that we have NoSQL DB & Hive setup done, we just need to test running hive queries.

  • Here are some of the simple queries to fetch MOVIE data using Hive Query Language (HQL):

hive> SELECT id, original_title, release_date, genres 
        FROM MOVIE;

hive> SELECT id, original_title, release_date, genres 
        FROM MOVIE 
        WHERE release_date >1990;

hive> SELECT id, original_title, release_date, genres 
        FROM MOVIE 
        WHERE genres LIKE '%Comedy%'; 

  • Some other queries that would require M/R jobs to be spun off to get the results:

hive> SELECT count(*) 
        FROM MOVIE 
        WHERE genres LIKE '%Drama%'; 

hive> SELECT id, original_title, release_date, genres 
        FROM MOVIE 
        WHERE genres LIKE '%Drama%' 
        ORDER BY release_date;

hive> SELECT MAX(release_date) 
        FROM MOVIE;


Using the Hive StorageHandler from the latest release, one can access NoSQL data as Hive external tables. The metadata of these external tables is stored in HCatalog, which is later referenced by Oracle Big Data SQL to decide where to dispatch the queries in parallel. Configuring Oracle NoSQL Database has become so simple (using the out-of-the-box Hive StorageHandler) that it takes only a couple of steps to complete.

In part 2 of this blog I will talk about how the Hive metastore can be configured with Oracle Database to finally run SQL queries on NoSQL data.

Tuesday Jan 06, 2015

Announcement: Oracle NoSQL Database 3.2.5

Oracle announced Oracle NoSQL Database 3.2.5 on December 8, 2014. This release offers Big Data SQL support, a RESTful API, a C Table Driver, SQL-like DDL, Apache Hive support and much more. Please read the full announcement to learn more, and click here to download the latest release.

Tuesday Dec 02, 2014

Using Nosql Tables with Spark

The goal of this post is to explain how to use NoSQL tables and how to put their content into a file on HDFS using the Java API for Spark. In HDFS, the table content will be presented in a comma-separated style (CSV).

Oracle's latest Big Data Appliance, the X4-2, offers Cloudera Enterprise Technology software, including Cloudera CDH, and Oracle NoSQL Database, including tables.

The Cloudera part offers several ways of integrating with Spark (see Using Nosql and Spark): standalone or via Yarn (see Running Spark Applications).

The NoSQL part allows the use of tables. Tables can be defined within the NoSQL console by issuing the following command:

java -Xmx256m -Xms256m -jar $KVHOME/lib/kvstore.jar runadmin -host <host> -port <store port> -store <store name>

There are two parts to defining and creating a table. First, the definition, which includes the table name, table fields, primary key and shard key (a "prefix" of the primary key), and which ends with the keyword "exit":

table create -name flightTestExtract

add-field -name param -type STRING

add-field -name flight -type STRING

add-field -name timeref -type LONG

add-field -name value -type INTEGER

primary-key -field timeref -field param -field flight 

shard-key -field timeref
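The shard key above must be a prefix of the primary key (here timeref is a prefix of timeref, param, flight). That constraint can be expressed as a simple check (illustrative helper, stdlib only, not part of the KV API):

```java
import java.util.List;

// Illustrative: verify the NoSQL table constraint that the shard key
// fields form a prefix of the primary key fields.
class ShardKeyCheck {
    public static boolean isPrefix(List<String> shardKey, List<String> primaryKey) {
        if (shardKey.size() > primaryKey.size()) return false;
        return shardKey.equals(primaryKey.subList(0, shardKey.size()));
    }
}
```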


Second, the plan, which performs the table creation and the index definition and creation:

plan add-table -wait -name flightTestExtract

plan add-index -wait -name flightIndex -table  flightTestExtract -field flight -field param -field timeref

plan add-index -wait -name paramIndex -table  flightTestExtract -field param -field flight -field timeref

Inserting into the table can be done with the put command:

put table -name flightTestExtract -json "{\"param\":\"11\",\"flight\":\"8\",\"timeref\":61000000000002,\"value\":1764248535}"

put table -name flightTestExtract -json "{\"param\":\"12\",\"flight\":\"8\",\"timeref\":61000000000002,\"value\":-1936513330}"

put table -name flightTestExtract -json "{\"param\":\"11\",\"flight\":\"6\",\"timeref\":61000000000013,\"value\":1600130521}"

put table -name flightTestExtract -json "{\"param\":\"11\",\"flight\":\"8\",\"timeref\":61000000000013,\"value\":478674806}"

The latest NoSQL patch, 3.1.7, has some new Java classes that can be used to get table data into Hadoop. The class oracle.kv.hadoop.table.TableInputFormat can be used to build a Spark JavaPairRDD:

JavaPairRDD<PrimaryKey, Row> jrdd = sc.newAPIHadoopRDD(hconf, TableInputFormat.class, PrimaryKey.class, Row.class);

The oracle.kv.table.PrimaryKey class corresponds to the fields of the primary key of the table, for example in JSON style:

{"timeref":61000000000013, "param":"11","flight":"8"}

The oracle.kv.table.Row class corresponds to the fields of a table row, for example in JSON style:


If we want to save the content of the table on HDFS in a CSV style, we have to:

  • apply a flatMap on the rows of the RDD 
    flatMap(func): each input item can be mapped to 0 or more output items (so func should return a Seq rather than a single item) 
  • save the result on HDFS

The following inner class defines the map:

     static class FlatMapRow_Str implements FlatMapFunction<Row, String> {

        @Override
        public Iterable<String> call(Row s) {

            // Concatenate every field value of the row, comma separated
            List<String> lstr = s.getFields();

            String tabedValues = "";

            for (String field : lstr)

                tabedValues += s.get(field) + ",";

            // One CSV line is produced per row
            return Arrays.asList(tabedValues);
        }
    }

The code to do the job is: 

//Obtain the Row RDD       

JavaRDD<Row> rddvalues = jrdd.values();

//Obtain the csv style form of the RDD 

JavaRDD<String> csvStr = rddvalues.flatMap(new FlatMapRow_Str());

//Save the results on hdfs 

csvStr.saveAsTextFile(pathPrefix + "/" + tableName + "csvStr");

The last step is to test using Yarn:

spark-submit --master yarn --jars /u01/nosql/kv-ee/lib/kvclient.jar --class table.SparkNosqlTable2HadoopBlog /u01/nosql/kv-ee/examples/table/deploy/sparktables.jar <nosql store name> <nosql store url> <table name> <path prefix>

<nosql store url> is <store host>:<store port> 

You can get the Java source code here.

Monday Nov 10, 2014

Is your NoSQL ready for SSD - benchmark results from SanDisk

It’s well known that the killer NoSQL use case is scalability for large data sets under high concurrency. However, what is less talked about is the fact that performance is often the key driver. Given a high-performance profile, NoSQL then provides the ability to scale that performance out linearly by adding more servers. If not for the high-performance profile to begin with, oftentimes an alternative technology like a distributed cache would be used instead, fronting a more traditional database technology.

Given that performance is often a key driver, it’s no surprise to find that flash-based storage devices are increasingly common in NoSQL deployments. In fact, popular cloud providers of NoSQL services (Amazon and Google) provide those services exclusively on SSD-backed infrastructure. The SSD form factor gives not only the best performance, but also more manageable service-level agreements, by taking disk I/O bottlenecks out of the equation. At Oracle, we are also seeing customers deploy the Oracle NoSQL Database on servers backed by SSD storage.

[Read More]

Tuesday Oct 21, 2014

Using Nosql and Spark (How to Start)

Spark is an open-source data analytics cluster computing framework, built outside of Hadoop’s two-stage MapReduce paradigm but able to run on top of HDFS. Because of its successful approach, Spark has quickly been adopted and is seen as an attractive choice for the future of data processing in Hadoop. The question of how to link NoSQL and Spark often concerns Big Data architects and developers.

Let's take a quick look at this question.

Spark revolves around the concept of a resilient distributed dataset (RDD), which is a fault-tolerant collection of elements that can be operated on in parallel. RDDs can be created by referencing a dataset in an external storage system, such as a shared filesystem, HDFS, HBase, or any data source offering a Hadoop InputFormat.

Good news: NoSQL can be integrated with Spark-Java using KVInputFormat and KVAvroInputFormat, two NoSQL Java classes that extend the Hadoop abstract class InputFormat.

How to proceed

  1. Get Spark resources
  2. Define the configuration parameters for Nosql (connection, store name,  key prefix)
  3. Define the Spark resilient distributed dataset to get data from Nosql
  4. Dispatch Nosql key-value subset into a dataset for key and a dataset for values
  5. Do some computations
  6. Release Spark resources

Get Spark resources

SparkConf sparkConf = new SparkConf().setAppName("SparkKVInput"); // the AppName allows tracking of the job

JavaSparkContext sc = new JavaSparkContext(sparkConf);

Define the configuration parameters

Configuration hconf = new Configuration();

hconf.set("oracle.kv.kvstore", "kvstore");

hconf.set("oracle.kv.parentKey", "/H/10"); // just a major key prefix

hconf.setStrings("oracle.kv.hosts", "bigdatalite.localdomain:5000"); 

Define the Spark resilient distributed dataset to get data from Nosql

JavaPairRDD<Text,Text> jrdd = sc.newAPIHadoopRDD(hconf,KVInputFormat.class, Text.class, Text.class);

The dataset parameters are the configuration, the InputFormat extension, the Java class for keys and the Java class for values. The classes for keys and values are stated in the javadoc of KVInputFormat:

public class KVInputFormat 

extends KVInputFormatBase<Text,Text>

Dispatch Nosql key-value subset into a dataset for key and a dataset for values

Setting the datasets for keys and values is easy: 

JavaRDD<Text> rddkeys = jrdd.keys();

JavaRDD<Text> rddvalues = jrdd.values();

Manipulating them directly is not possible: Spark does not know how to serialize Text. A mapping is needed to transform the Text dataset into a String dataset. The following code does the trick for the keys:

JavaRDD<String> strkeys = rddkeys.map(new Function<Text, String>() {
    public String call(Text t) { return t.toString(); }
});

The code for the values (strvalues) is very similar.

Do some computations

Print key and values:

List <String> keys = strkeys.collect();

List <String> values = strvalues.collect();

for(int idx=0;idx<values.size();idx++){

       String val = values.get(idx);

       String key = keys.get(idx);

       System.out.println(key + " --> " + val);
}

Release Spark resources

sc.stop(); //release JavaSparkContext 

How to test it

Put some data on the kvstore

  • put kv -key /T/1/1/-/1 -value V11_1
  • put kv -key /T/1/1/-/2 -value V11_2
  • put kv -key /T/1/1/-/3 -value V11_3
  • put kv -key /T/1/2/-/3 -value V12_3
  • put kv -key /T/2/2/-/1 -value V22_1
  • put kv -key /T/2/2/-/2 -value V22_2
  • Generate a jar containing kvclient.jar, avro.jar and the class with the Spark code, SparkKVInput. This class has three parameters, which are the configuration parameters: oracle.kv.kvstore, oracle.kv.hosts and oracle.kv.parentKey.

    Generate a jar (spark.jar) containing the nosql kvclient.jar and kvavro.jar

    An example calling command on a bigdatalite 4.1 is:
    spark-submit --class SparkKVInput <path to spark.jar location>/spark.jar kvstore bigdatalite.localdomain:5000 /T/1

    results are:

    14/10/21 11:26:54 INFO SparkContext: Job finished: collect at, took 0.341590587 s





    14/10/21 11:26:54 INFO SparkUI: Stopped Spark web UI at http://bigdatalite.localdomain:4040

    Hope this first introduction to the use of NoSQL key/values with Spark will help you go deeper with Spark as an engine to manipulate NoSQL data. 

    Loading into Nosql using Hive

    The main purpose of this post is to show how tightly NoSQL and Hive can be tied together; the focus will be the upload of data into NoSQL from Hive.

    The post (here) discussed the use of Hive external tables to select data from Oracle NoSQL. We used a HiveStorageHandler implementation. We have reworked that implementation to load data from HDFS or a local file system into NoSQL via Hive. Only the uploading of text data is currently supported.

    Two kinds of data files can be uploaded:

    Case 1: Files containing plain text data like the following comma separated lines:

    • 10,5,001,545973390
    • 10,5,010,1424802007
    • 10,5,011,164988888 

    Case 2: Files containing a JSON field corresponding to a given AVRO schema like the following tab separated lines:

    •  10 5 173 {"samples": [{"delay": 0, "value": -914351026}, {"delay": 1, "value": 1842307749}, {"delay": 2, "value": -723989379}, {"delay": 3, "value": -1665788954}, {"delay": 4, "value": 91277214}, {"delay": 5, "value": 1569414562}, {"delay": 6, "value": -877947100}, {"delay": 7, "value": 498879656}, {"delay": 8, "value": -1245756571}, {"delay": 9, "value": 812356097}]}
    •  10 5 174 {"samples": [{"delay": 0, "value": -254460852}, {"delay": 1, "value": -478216539}, {"delay": 2, "value": -1735664690}, {"delay": 3, "value": -1997506933}, {"delay": 4, "value": -1062624313}]}
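A Case 2 line is three tab-separated key components (flight, sensor, timeref) followed by the JSON payload; parsing one is a bounded split (illustrative helper, stdlib only):

```java
// Illustrative: split a Case 2 upload line into its three key components
// and the trailing JSON payload. The limit of 4 keeps the JSON intact.
class Case2Line {
    public static String[] split(String line) {
        return line.split("\t", 4); // flight, sensor, timeref, json
    }
}
```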

    How to do it?

    1. Define the external table

    2. Create and load a native Hive table

    3. Insert into the external table a selection from the native Hive table

    Case 1:

    1.Define the external table

    CREATE EXTERNAL TABLE MY_KV_PI_10_5_TABLE (flight string, sensor string, timeref string, stuff string)

          STORED BY ''

          WITH SERDEPROPERTIES ("kv.major.keys.mapping" = "flight,sensor", "kv.minor.metadata" = "false", "kv.minor.keys.mapping" = "timeref", "kv.key.prefix" = "PI/10/5", "kv.value.type" = "string", "kv.key.range" = "", "" = "bigdatalite:5000", "" = "kvstore","kv.key.ismajor" = "true");

    2. Create and load a native Hive table

    CREATE TABLE kv_pi_10_5_load (flight string, sensor string, timeref string, stuff string) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\054' STORED AS TEXTFILE;

    LOAD DATA LOCAL INPATH '/home/oracle/hivepath/pi_10_5' OVERWRITE INTO TABLE kv_pi_10_5_load;

    3. Insert into the external table a selection from the native Hive table

    INSERT INTO TABLE my_kv_pi_10_5_table SELECT * from kv_pi_10_5_load;

    The external table definition fixes a major key and its complete key components. This definition is used when inserting: the flight and sensor values of the data are ignored, and the timeref elements are loaded using the NoSQL operation API for batching the insertions.

    Case 2:

    1.Define the external table

    CREATE EXTERNAL TABLE MY_KV_RI_10_5_TABLE (flight string, sensor string, timeref string, stuff string)

          STORED BY ''

          WITH SERDEPROPERTIES ("kv.major.keys.mapping" = "flight,sensor", "kv.minor.metadata" = "false", "kv.minor.keys.mapping" = "timeref", "kv.key.prefix" = "RI/10/5", "kv.value.type" = "avro", "kv.key.range" = "","kv.key.ismajor" = "true", "kv.avro.schema" = "","" = "bigdatalite:5000", "" = "kvstore");

     When creating the external table used for upload into NoSQL, a new parameter is used: "kv.avro.schema" = ""

    It is the NoSQL name for an Avro schema; in terms of the Avro schema definition, it is the schema namespace, a dot ("."), and the schema name. 

     2. Create and load a native Hive table

     CREATE TABLE kv_ri_10_5_load (flight string, sensor string, timeref string, stuff string) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\011' STORED AS TEXTFILE;

    LOAD DATA LOCAL INPATH '/home/oracle/hivepath/ri_10_5' OVERWRITE INTO TABLE kv_ri_10_5_load;

     3. Insert into the external table a selection from the native Hive table

     INSERT INTO TABLE my_kv_ri_10_5_table SELECT * FROM kv_ri_10_5_load;

    How to verify the upload? 

    Two possibilities:

    • a select query on Hive
    • a get on the kvstore

    Let's do it on the Nosql client command line

    Case 1: Verify a random line existence

     kv-> get kv  -key /PI/10/5/-/010 -all



    1 Record returned

    Case 2: Verify a random line existence

    kv-> get kv  -key /RI/10/5/-/173 -all
    {
      "samples" : [ {
        "delay" : 0,
        "value" : -914351026
      }, {
        "delay" : 1,
        "value" : 1842307749
      }, {
        "delay" : 2,
        "value" : -723989379
      }, {
        "delay" : 3,
        "value" : -1665788954
      }, {
        "delay" : 4,
        "value" : 91277214
      }, {
        "delay" : 5,
        "value" : 1569414562
      }, {
        "delay" : 6,
        "value" : -877947100
      }, {
        "delay" : 7,
        "value" : 498879656
      }, {
        "delay" : 8,
        "value" : -1245756571
      }, {
        "delay" : 9,
        "value" : 812356097
      } ]
    }



    1 Record returned

    Let's do it on the hive command line

    Case 1: Verify a random line existence

    select *  from MY_KV_PI_10_5_TABLE where timeref = "010";


    10 5 010 1424802007

    Case 2: Verify a random line existence

    hive> select *  from MY_KV_RI_10_5_TABLE where timeref = "173";



    10 5 173 {"samples": [{"delay": 0, "value": -914351026}, {"delay": 1, "value": 1842307749}, {"delay": 2, "value": -723989379}, {"delay": 3, "value": -1665788954}, {"delay": 4, "value": 91277214}, {"delay": 5, "value": 1569414562}, {"delay": 6, "value": -877947100}, {"delay": 7, "value": 498879656}, {"delay": 8, "value": -1245756571}, {"delay": 9, "value": 812356097}]}

    You can get a JDeveloper 12c project here.

    We have made a round trip between NoSQL and Hive:

    1. Key-value subsets of a NoSQL database can be viewed using the select query language of Hive 
    2. Data from Hive tables can be uploaded into NoSQL key-value pairs


    Wednesday Oct 15, 2014

    Revisiting Nosql into Hive External Tables

    About two years ago, in the Data Integration blog, David Allan wrote about the integration between Oracle NoSQL and Hive. In his post, David mentioned the work of Alexander Vilcek to implement a HiveStorageHandler giving access to NoSQL data via Hive SQL select commands.

    Unfortunately, the Java code no longer compiles with the NoSQL Java API (3.0+). I've modified the Java source code to comply with the 3.0 KV API. The new Hive storage handler has new definition possibilities.

    The current limitations are the same as those stated for the original Vilcek project code:

    • Supports only external non-native Hive tables.
    • Writing data to Oracle NoSQL DB is not supported.
    • Parsing of Hive SerDe properties is still very rudimentary, and spaces between NoSQL DB key definitions in the key-mapping properties of the Hive table create statement will cause key names to be misinterpreted.
    • Column names and types specified in the Hive table definition are ignored; only the NoSQL DB Major and Minor Key mappings in the Hive table create statement define the column names.
    • A NoSQL DB value for a given key is always interpreted as a string in the Hive table.

    Hive CREATE TABLE Syntax:

    CREATE EXTERNAL TABLE <hive_table_name> (column_name column_type,column_name column type, ...)
    STORED BY ''

    WITH SERDEPROPERTIES ("kv.major.keys.mapping" = "<majorKey1,majorKey2,...>", "kv.minor.keys.mapping" = "<minorKey1,minorKey2,...>"

    "kv.minor.metadata" = "true|false", "kv.key.prefix" = "H", "kv.value.type" = "<avro|other>", "kv.key.range" = "<key restriction range>", "kv.key.ismajor" = "true|false","" = "<kvstore hostname>:<kvstore port number>", "" = "<kvstore name>"); 

    Example 1:

    Data stored in Oracle NoSQL Database:

    /Smith/Bob/-/birthdate: 05/02/1975

    /Smith/Bob/-/phonenumber: 1111-1111 

    For this kind of data, the minor keys birthdate and phonenumber are metadata (field names), and the value of the key-value pair is the real value for that metadata.
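    The key paths of Example 1 split at the "-" separator: the major components carry the last and first names, and the minor component names the field. A small illustrative parser (stdlib only, not the KV Key API):

```java
// Illustrative: split a key path like /Smith/Bob/-/birthdate at the "-"
// separator into major components and a minor component.
class KeyPath {
    public static String[] majorComponents(String path) {
        String major = path.split("/-/", 2)[0];
        return major.substring(1).split("/"); // drop the leading "/"
    }
    public static String minorComponent(String path) {
        String[] parts = path.split("/-/", 2);
        return parts.length == 2 ? parts[1] : "";
    }
}
```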

    Example 2:



    For this kind of data, the minor key is a timestamp for a measure (45678, 45640) of a gauge of a plane. There is no metadata on the minor key.

    The new table definitions have the following parameters:

    • "kv.value.type": when the value type is avro, the value is transformed into a string by using the JSON transformation of the Avro value
    • "kv.minor.metadata": its true or false value distinguishes between the two uses of key-value pairs, true for Example 1, false for Example 2
    • "kv.key.prefix": allows defining a starting path for a major key
    • "kv.key.ismajor": its value is true if the key prefix is a complete major key path, false otherwise  
    • "kv.key.range": used in cases where the key prefix does not correspond to a major key, or when there is no key prefix

    In a real big data store you should not leave both "kv.key.prefix" and "kv.key.range" empty.

    Detail of metadata Example 1:

    The following put operations have been done on our store using the kvstore client commands: 

    • put kv -key /H/Smith/Bob/-/birthdate -value 05/02/1975
    • put kv -key /H/Smith/Bob/-/phonenumber -value 1111-1111
    • put kv -key /H/Smith/Bob/-/userid -value 1
    • put kv -key /H/Smith/Patricia/-/birthdate -value 10/25/1967
    • put kv -key /H/Smith/Patricia/-/phonenumber -value 2222-2222
    • put kv -key /H/Smith/Patricia/-/userid -value 2
    • put kv -key /H/Wong/Bill/-/birthdate -value 03/10/1982
    • put kv -key /H/Wong/Bill/-/phonenumber -value 3333-3333
    • put kv -key /H/Wong/Bill/-/userid -value 3

    A table creation to include all the above insertions is:

    CREATE EXTERNAL TABLE MY_KV_TABLE (lastname string, firstname string, birthdate string, phonenumber string, userid string)

          STORED BY ''

          WITH SERDEPROPERTIES ("kv.major.keys.mapping" = "lastname,firstname", "kv.minor.keys.mapping" = "birthdate,phonenumber,userid", 

    "kv.minor.metadata" = "true", "kv.key.prefix" = "H", "kv.value.type" = "String", "kv.key.range" = "", "kv.key.ismajor" = "false", "" = "bigdatalite:5000", "" = "kvstore");

    A select * from my_kv_table returns 9 values.

    The creation and select commands require that the jar containing the storage handler be part of the Hive lib jars; let's say this jar is kvhive.jar.

    Any select applied to the table (other than select *) invokes map/reduce procedures and needs hive add jar commands targeting kvclient.jar and kvhive.jar.

     For example: 

    hive> SELECT lastname, firstname, collect_set(birthdate)[0], collect_set(phonenumber)[0], collect_set(userid)[0] FROM my_kv_table GROUP BY lastname, firstname;


    Smith      Bob           05/02/1975      1111-1111      1
    Smith      Patricia      10/25/1967      2222-2222      2
    Wong       Bill          03/10/1982      3333-3333      3

    Detail of a non-metadata example 2:

    We have defined an Avro schema to record the list of min and max values of a given measure for a range of time stamps. 

    The Hive table declaration in this case is:

    CREATE EXTERNAL TABLE MY_KV_A_TABLE (flight string, sensor string, timeref string, stuff string)

          STORED BY ''

          WITH SERDEPROPERTIES ("kv.major.keys.mapping" = "flight,sensor", "kv.minor.metadata" = "false", "kv.minor.keys.mapping" = "timeref", "kv.key.prefix" = "A/8/11", "kv.value.type" = "avro", "kv.key.range" = "", "" = "bigdatalite:5000", "" = "kvstore","kv.key.ismajor" = "true");

     In this case the value type specified is avro. In the key prefix A/8/11, A is the analytic data prefix, 8 is plane number 8, and 11 is gauge measure number 11.

    A select * from my_kv_a_table returns records of the type:

     8 11 W2412 {"values": [{"min": -2.141749606E9, "max": 2.14331348E9}, {"min": -2.118390555E9, "max": 2.090954548E9}, {"min": -2.130274733E9, "max": 2.098153258E9}, {"min": -2.082685648E9, "max": 2.023497823E9}]}

     The {"values": [...]} portion is the JSON format of the Avro values stored in the NoSQL database.

    The HiveStorageHandler JDeveloper code can be found here.

    Monday Sep 15, 2014

    Using multiDelete for efficient cleanup of old data

    In a recent project, one of our field engineers (Gustavo Arango) was confronted by a problem: he needed to efficiently delete millions of keys beneath a key space where he did not know the complete major key path, which ended in a time stamp.   He quickly discovered a way to efficiently find these major key paths and then use them to perform high speed multi-value deletions without causing unusual chattiness (a call for each key-value pair) on the network.   I thought it would be useful to review this solution here and give you a link to a Github example of working code to play with and understand the behavior. 

    This is possible by using Oracle NoSQL's method storeKeysIterator( ) on your connection to the cluster.    This iterator can be used to obtain a list of store keys in a specific range, in a single call and without loading all of the value data beneath the keys:

    First, you need a partial key path:

    Key key = Key.createKey( "/root/pieceOfKey" );

    Next you need a range: 

    KeyRange kr = new KeyRange( "aastart" ,true, "zzfinish", true);

    Now you can get your Key iterator (getKVStore() returns a connection handle to the cluster, an instance of KVStore):

    Iterator<Key> iter = getKVStore().storeKeysIterator(Direction.UNORDERED, 0, key, kr, Depth.DESCENDANTS_ONLY);

    So, this is nice: storeKeysIterator queries the cluster and returns ONLY the keys that start with that partial path, optionally restricted by the range specifier (no need to give a range if you want all the keys).

    Now, to efficiently delete many keys with a single call to the cluster, you need to have a full major key path.  So, now that you have the whole set of keys, you can ask them what is their full major key path and then use that information to do the efficient multiDelete operation, which in Gustavo's case of storing and managing dense time series data, meant millions of keys/values being deleted with only a small number of actual network calls and very little data being transferred across the network. 

    boolean enditerate = false;
    int res = 0;

    while (!enditerate) {
        if (iter.hasNext()) {
            Key iterkey = iter.next();
            // Reduce the key to its major path, so multiDelete can remove
            // everything beneath it in a single network call.
            iterkey = Key.createKey(iterkey.getMajorPath());
            int delNb = getKVStore().multiDelete(iterkey, kr,
                    Depth.PARENT_AND_DESCENDANTS, durability, 0, null);
            res += delNb;
        } else {
            enditerate = true;
        }
    }

    If you want to get tricky and do this in parallel, you can wrap this in a worker thread and place the initialization of the iterator inside the while loop, effectively treating the iterator like a queue.  This causes a new query to happen every time the loop iterates, but in the meantime a lot of other threads may have deleted half of that queue.

    In fact, there are also parallel versions of the store iterator that will query the entire cluster in parallel, using the number of threads that works best given your hardware configuration and the number of shards in your cluster.  You can learn more about them in the online documentation.
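    A minimal sketch of that worker-thread pattern follows, with the storeKeysIterator()/multiDelete() calls replaced by a thread-safe queue of major key paths and a counter (the real KVStore plumbing is in the Github example), so only the threading shape is illustrated:

```java
import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ParallelCleanup {

    // Drain a shared queue of major key paths with nThreads workers; in the
    // real version each poll would be followed by one multiDelete() call.
    public static int deleteAll(List<String> majorPaths, int nThreads)
            throws InterruptedException {
        ConcurrentLinkedQueue<String> queue =
                new ConcurrentLinkedQueue<>(majorPaths);
        AtomicInteger deleted = new AtomicInteger();
        ExecutorService pool = Executors.newFixedThreadPool(nThreads);
        for (int i = 0; i < nThreads; i++) {
            pool.submit(() -> {
                // Workers compete for paths; each dequeued path stands in for
                // one high-fanout multiDelete() against the cluster.
                while (queue.poll() != null) {
                    deleted.incrementAndGet();
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(30, TimeUnit.SECONDS);
        return deleted.get();
    }
}
```

    The queue plays the role the iterator plays in the text: any worker may consume a path before its siblings get to it, which is exactly why re-querying inside the loop is safe.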

    If you are interested in playing with the ideas in this blog, there is a Github example project in Eclipse that will get you started. 

    Monday Jun 09, 2014

    Master-slave vs. peer-to-peer architecture: benefits and problems

    Almost two decades ago, I was a member of a database development team that introduced adaptive locking. Locking, the most popular concurrency control technique in database systems, is pessimistic. Locking ensures that two or more conflicting operations on the same data item don’t “trample” on each other’s toes and corrupt the data. In a nutshell, here’s the issue we were trying to address. In everyday life, traffic lights serve the same purpose: they ensure that traffic flows smoothly and, when everyone follows the rules, there are no accidents at intersections.

    As I mentioned earlier, the problem with typical locking protocols is that they are pessimistic. Regardless of whether there is another conflicting operation in the system or not, you have to hold a lock! Acquiring and releasing locks can be quite expensive, depending on how many objects the transaction touches. Every transaction has to pay this penalty. To use the earlier traffic light analogy, if you have ever waited at a red light in the middle of nowhere with no one on the road, wondering why you need to wait when there’s clearly no danger of a collision, you know what I mean.

    The adaptive locking scheme that we invented was able to minimize the number of locks that a transaction held by detecting whether there were one or more transactions that needed conflicting access; if not, you could get by without holding any lock at all. In many “well-behaved” workloads, there are few conflicts, so this optimization is a huge win. If, on the other hand, there are many concurrent, conflicting requests, the algorithm gracefully degrades to the “normal” behavior with minimal cost.
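    As a toy sketch of the idea (my own simplification, not the actual algorithm we shipped, which also needs a de-escalation protocol to force a lone transaction back to locking when a second one arrives mid-flight):

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.ReentrantLock;

public class AdaptiveLocking {
    private final AtomicInteger activeTxns = new AtomicInteger();
    private final ReentrantLock lock = new ReentrantLock();

    public void beginTxn() { activeTxns.incrementAndGet(); }
    public void endTxn()   { activeTxns.decrementAndGet(); }

    /** Returns true if a real lock was taken (and must later be released). */
    public boolean acquireIfNeeded() {
        if (activeTxns.get() <= 1) {
            return false;   // sole transaction: the common, cheap path
        }
        lock.lock();        // conflicts possible: pay the normal locking cost
        return true;
    }

    public void releaseIfHeld(boolean held) {
        if (held) lock.unlock();
    }
}
```

    The common case (a single active transaction) pays nothing; the lock cost appears only once concurrency does.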

    We were able to reduce the number of lock requests per TPC-B transaction from 178 requests down to 2! Wow! This is a dramatic improvement in concurrency as well as transaction latency.

    The lesson from this exercise was that if you can identify the common scenario and optimize for that case so that only the uncommon scenarios are more expensive, you can make dramatic improvements in performance without sacrificing correctness.

    So how does this relate to the architecture and design of some of the modern NoSQL systems? NoSQL systems can be broadly classified as master-slave sharded, or peer-to-peer sharded systems. NoSQL systems with a peer-to-peer architecture have an interesting way of handling changes. Whenever an item is changed, the client (or an intermediary) propagates the changes synchronously or asynchronously to multiple copies (for availability) of the data. Since the change can be propagated asynchronously, during some interval in time, it will be the case that some copies have received the update, and others haven’t.

    What happens if someone tries to read the item during this interval? The client in a peer-to-peer system will fetch the same item from multiple copies and compare them to each other. If they’re all the same, then every copy that was queried has the same (and up-to-date) value of the data item, so all’s good. If not, then the system provides a mechanism to reconcile the discrepancy and to update stale copies.
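    The read path just described can be sketched like this (hypothetical types and method names of my own, not any particular product's API):

```java
import java.util.Comparator;
import java.util.List;

public class QuorumRead {

    /** One replica's copy of a data item, tagged with a version number. */
    public record Copy(String replica, long version, String value) {}

    public static Copy read(List<Copy> copies) {
        // Every read pays for contacting multiple replicas...
        Copy newest = copies.stream()
                .max(Comparator.comparingLong(Copy::version))
                .orElseThrow();
        // ...and for repairing any stale copy it happens to find.
        for (Copy c : copies) {
            if (c.version() < newest.version()) {
                repair(c.replica(), newest);
            }
        }
        return newest;
    }

    private static void repair(String replica, Copy newest) {
        // Write-back of the up-to-date value (omitted in this sketch).
    }
}
```

    Note that the multi-copy fetch and the comparison happen on every read, whether or not any copy was actually stale.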

    So what’s the problem with this? There are two major issues:

    First, IT’S HORRIBLY PESSIMISTIC because, in the common case, it is unlikely that the same data item will be updated and read from different locations at around the same time! For every read operation, you have to read from multiple copies. That’s pretty expensive, especially if the data is stored in multiple geographically separate locations and network latencies are high.

    Second, if the copies are not all the same, the application has to reconcile the differences and propagate the correct value to the out-dated copies. This means that the application program has to handle discrepancies in the different versions of the data item and resolve the issue (which can further add to cost and operation latency).

    Resolving discrepancies is only one part of the problem. What if the same data item was updated independently on two different nodes (copies)? In that case, due to the asynchronous nature of change propagation, you might end up with different versions of the data item in different copies. In this case, the application program also has to resolve conflicts and then propagate the correct value to the copies that are outdated or have incorrect versions. This can get really complicated. My hunch is that there are many peer-to-peer-based applications that don’t handle this correctly, and worse, don’t even know it. Imagine having hundreds of millions of records in your database – how can you tell whether a particular data item is incorrect or out of date? And what price are you willing to pay to ensure that the data can be trusted? Multiple network messages per read request? Discrepancy and conflict resolution logic in the application, and potentially, additional messages? All this overhead, when all you were trying to do was read a data item.

    Wouldn’t it be simpler to avoid this problem in the first place? Master-slave architectures like the Oracle NoSQL Database handle this very elegantly. A change to a data item is always sent to the master copy. Consequently, the master copy always has the most current and authoritative version of the data item. The master is also responsible for propagating the change to the other copies (for availability and read scalability). Client drivers are aware of master copies and replicas, and they are also aware of the “currency” of a replica. In other words, each NoSQL Database client knows how stale a replica is.

    This vastly simplifies the job of the application developer. If the application needs the most current version of the data item, the client driver will automatically route the request to the master copy. If the application is willing to tolerate some staleness of data (e.g. a version that is no more than 1 second out of date), the client can easily determine which replica (or set of replicas) can satisfy the request, and route the request to the most efficient copy. This results in a dramatic simplification in application logic and also minimizes network requests (the driver sends the request to exactly the right replica, not many).
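    The routing decision can be sketched as follows (the names here are illustrative inventions of mine; the actual Oracle NoSQL client expresses this through consistency settings rather than this hypothetical API):

```java
import java.util.Map;

public class ReplicaRouter {

    /**
     * Pick a copy whose known replication lag is within the staleness the
     * application will tolerate. "master" always has lag 0 and is the
     * fallback when no replica is fresh enough.
     */
    public static String route(Map<String, Long> lagMsByCopy, long maxLagMs) {
        for (Map.Entry<String, Long> e : lagMsByCopy.entrySet()) {
            // Offload the read to any replica that is fresh enough.
            if (!e.getKey().equals("master") && e.getValue() <= maxLagMs) {
                return e.getKey();
            }
        }
        return "master";   // nothing fresh enough: must go to the master
    }
}
```

    The point of the sketch is that the decision is made once, in the driver, instead of being re-litigated on every read by comparing multiple copies.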

    So, back to my original point. A well-designed and well-architected system minimizes or eliminates unnecessary overhead and avoids pessimistic algorithms wherever possible in order to deliver a highly efficient and high performance system. If you’ve ever programmed an Oracle NoSQL Database application, you’ll know the difference!

    Monday May 12, 2014

    Article on NoSQL Database by James Anthony (CTO - e-DBA)

    James Anthony recently published this article about the latest release of Oracle NoSQL Database.  James does an excellent job describing the basic NoSQL concepts such as CAP theorem and ACID transactions.  His insights into how to use database systems and NoSQL systems are based on extensive experience building large production applications.

     Definitely worth a read.

    Friday Dec 20, 2013

    Big Data and NoSQL expert e-DBA lends a helping hand

    Found this offer on one of the Oracle Partner sites and thought it would be worth sharing with the community.

    Oracle partner e-DBA offers an Oracle NoSQL Database free trial including personalized assistance for getting set up, building applications, loading your data and highlighting its scalable performance for your specific project objectives.  This is an excellent opportunity to get some direct support from a partner who has successfully deployed the Oracle NoSQL database into production for mission critical, high value, low latency application requirements. See the following site for complete details:

    Hope everyone is having an awesome holiday season.  Cheers .... Robert

    Monday Dec 09, 2013

    Look I shrunk the keys

    Whether we use a relational or a non-relational database to durably persist our data on disk, we are all aware of the major role indexes play in accessing that data in real time. There is one aspect most of us tend to overlook while designing the index key, though: how to size it efficiently.

    In traditional databases, where we used to store hundreds of thousands to a few million records, sizing the index key didn’t (that often) come up as a very high priority; but in a NoSQL database, where you are going to persist a few billion to trillions of records, every byte saved in the key goes a long way.

    That is exactly what came up this week while working on a POC, and I thought I should share the best practices and tricks of the trade that you can also use in developing your application. So here is my numero uno recommendation for designing index keys:

    • Keep it Small.

    Now there is nothing new there that you didn't know already, right? Right but I just wanted to highlight it, so if there is anything you remember from this post then it is this one bullet item.

    All right, here is what I was dealing with this week: a couple billion records of telematics/spatial data that we needed to capture and query based on the timestamp (of the feed that was received) and the x and y co-ordinates of the system. To run the kind of queries we wanted to run (on spatial data), we came up with this as an index key:


    How we used the above key structure to run spatial queries is another blog post, but for this one I would just say that when we plugged the values into the variables, our key became 24 bytes (1+13+5+5) long. Here’s how:

    Table Prefix => type char = 1 byte (eg. S)

    Timestamp => type long = 13 bytes (eg.1386286913165)

    X co-ordinate => type string = 5 bytes (eg. 120.78 degree, 31.87 degree etc)

    Y co-ordinate => type string = 5 bytes (eg. 132.78 degree, 33.75 degree etc)

    With the amount of hardware we had available for the POC, we could only create a 4-shard cluster. So to store two billion records we needed to store (2B records / 4 shards) 500 million records on each of the four shards. Using the DbCacheSize utility, we calculated we would need about 32 GB of JE cache on each Replication Node (RN).

    $java -d64 -XX:+UseCompressedOops -jar $KVHOME/lib/je.jar DbCacheSize -records 500000000 

    -key 24 

    === Database Cache Size ===
     Minimum Bytes        Maximum Bytes          Description
    ---------------       ---------------        -----------
     29,110,826,240   32,019,917,056         Internal nodes only

    But we knew that if we could shrink the key size (without losing information) we could save a lot of memory and improve query time as well (since search is a function of the number of records and the size of each record). So we built a simple encoding program that uses a range of 62 ASCII characters (0-9, a-z, A-Z) to encode any number. You can find the program from here or build your own, but what is important to note is that we were able to represent the same information with fewer bytes:

    The 13-byte timestamp (e.g. 1386286913165) became 7 bytes (e.g. opc2BTn)

    Each 5-byte X/Y co-ordinate (e.g. 132.78) became 3 bytes (e.g. a9X/6dF)

    That is, a 14-byte encoded key (1 + 7 + 3 + 3 bytes). So what’s the fuss about shrinking the keys (it’s only a 10-byte saving), you ask? Well, we plugged the numbers into the DbCacheSize utility again, and this time the verdict was that we needed only 20 GB of JE cache to store the same half a billion records on each RN. That’s roughly a 40% improvement (12 GB saved per Replication Node) and definitely an impressive start.

    $java -d64 -XX:+UseCompressedOops -jar $KVHOME/lib/je.jar DbCacheSize -records 500000000 
    -key 14 
    === Database Cache Size ===
     Minimum Bytes        Maximum Bytes          Description
    ---------------       ---------------        -----------
     16,929,008,448       19,838,099,264         Internal nodes only
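    For the curious, a minimal base-62 encoder along these lines might look like the following (my own sketch; the original program's alphabet ordering may differ, so its output will not match the examples above byte-for-byte):

```java
public class Base62 {

    // 10 digits + 26 lowercase + 26 uppercase = 62 symbols.
    private static final String ALPHABET =
        "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ";

    /** Encode a non-negative number into base-62 text. */
    public static String encode(long value) {
        if (value == 0) return "0";
        StringBuilder sb = new StringBuilder();
        long v = value;
        while (v > 0) {
            sb.append(ALPHABET.charAt((int) (v % 62)));
            v /= 62;
        }
        return sb.reverse().toString();
    }

    /** Inverse of encode(). */
    public static long decode(String s) {
        long v = 0;
        for (char c : s.toCharArray()) {
            v = v * 62 + ALPHABET.indexOf(c);
        }
        return v;
    }
}
```

    A 13-digit millisecond timestamp fits in 7 base-62 characters because 62^7 is about 3.5 trillion, comfortably above current epoch-millisecond values. One caveat worth knowing: variable-length base-62 strings do not sort in numeric order, so pad to a fixed width if your queries rely on key ordering.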

    To conclude: you have just seen how a simple encoding technique can save you big time when you are dealing with billions of records. Next time you design an index key, think a little harder about how you can shrink it down!

    Thursday Oct 31, 2013

    Oracle Social Analytics with the Big Data Appliance

    Found an awesome demo put together by one of the Oracle NoSQL Database partners, eDBA, on using the Big Data Appliance to do social analytics.

    In this video, James Anthony shows off the BDA, Hadoop, the Oracle Big Data Connectors and how they can be used and integrated with the Oracle Database to do end-to-end sentiment analysis leveraging Twitter data.   A really great demo, well worth watching. 



