Thursday May 19, 2016

TechTarget writes about Oracle NoSQL

Jack Vaughan of TechTarget has written about Oracle NoSQL Database. In the article, he describes the history of Oracle NoSQL Database and why this type of technology will be needed for new kinds of cloud applications, and he includes comments from industry analysts. Read it at: http://searchdatamanagement.techtarget.com/news/450296683/Oracle-NoSQL-An-oxymoron-waiting-to-happen

Thursday May 12, 2016

Charles Pack Examines the Benefits of Oracle NoSQL for the Oracle RDBMS DBA

Take a look at this article, where Charles Pack of CSX Technologies talks about the benefits of using Oracle NoSQL Database for Oracle RDBMS DBAs. Read the story HERE.

Thursday May 05, 2016

Oracle NoSQL Database on display

Come see and learn about Oracle NoSQL Database at the Data Summit in NYC on May 10 and 11. Info at: http://www.dbta.com/DataSummit/2016/

Learn about the new features of Oracle NoSQL Database at the event. Bring your toughest questions.

Thursday Apr 28, 2016

Submit your OOW 2016 Proposals

If you have a great story or solution that includes Oracle NoSQL Database and want to talk about it, please submit at: 

https://www.oracle.com/openworld/call-for-proposals.html

Closes on May 9, 2016

  • Conference location: San Francisco, California, USA
  • Date: Sunday, September 18 to Thursday, September 22, 2016

Friday Apr 22, 2016

Oracle OpenWorld 2016 - Call for Proposals

The call for proposals for Oracle OpenWorld 2016 is open. Please submit your proposals by May 9, 2016. The proposal web site is:

 https://www.oracle.com/openworld/call-for-proposals.html

 Submit if you have an interesting use of Oracle NoSQL Database to talk about. 

Wednesday Mar 16, 2016

Oracle NoSQL Database Cluster YCSB Testing with Fusion ioMemory™ Storage

Highly distributed systems with large data stores in the form of NoSQL databases are becoming increasingly important to enterprises, not just to hyperscale organizations. NoSQL databases are being deployed for capturing patient sensor data in health care, smart meter analysis in utilities, customer sentiment analysis in retail, and various other use cases in different industries. NoSQL database systems help organizations store, manage, and analyze huge amounts of data on a distributed system architecture. The sheer volume of data, and the distributed system design needed to manage it at a reasonable cost, necessitated a different category of database systems, leading to NoSQL databases. Oracle NoSQL Database is part of the NoSQL database family and is based on a distributed, key-value architecture.

This technical white paper describes a three-node Oracle NoSQL Database Cluster deployment procedure on Fusion ioMemory™ storage. The following points are emphasized:

  • Highlights performance and scalability advantages compared to traditional spinning disks.
  • Because enterprises evaluate and assess new technologies for enterprise-wide adoption, the Yahoo Cloud Serving Benchmark (YCSB), the standard benchmark tool for this kind of testing, is the tool used in this paper to evaluate Oracle NoSQL Database.
  • Analysis and discussion are provided for throughput and latency testing results with YCSB.

 

Download now at:   https://www.sandisk.com/content/dam/sandisk-main/en_us/assets/resources/enterprise/white-papers/oracle-nosql-cluster-ycsb-testing-with-fusion-iomemory.pdf

Tuesday Mar 01, 2016

Support for Oracle NoSQL Database CE available

For those of you who have downloaded Oracle NoSQL Database Community Edition (CE), did you know that two forms of support are available? Some customers may think that by downloading the CE version they are left on their own. This is not true. Questions and answers can be posted to:

https://community.oracle.com/community/database/high_availability/nosql_database, where many users monitor the questions and respond to inquiries.

In addition, support from Oracle is available to anyone.  Go to:  https://shop.oracle.com/pls/ostore/f?p=DSTORE:PRODUCT:::NO:RP,6:P6_LPI:124789930244771286351865     where details are available.

Thanks to all who use Oracle NoSQL Database and continue to create innovative applications and use the family of Oracle products. 

Tuesday Feb 23, 2016

Oracle NoSQL Database Sessions at All India Oracle Users Group (AIOUG)

Come listen to two talks by members of the Oracle NoSQL team at the AIOUG event in Hyderabad, India on February 27, 2016. Ashutosh Naik and Anand Chandak will be presenting on how to set up an Oracle NoSQL Database environment and will discuss innovative customers and how they are taking advantage of the Oracle NoSQL Database. Details can be found below.  The full announcement is at:

https://blogs.oracle.com/NoSQL/resource/HydBigdataDay.pdf

Ganga Conference Hall

Tech Mahindra Learning world (Just Adjacent to Oracle India Office- Hi Tech City)

TechMahindra Infocity Campus
Hi-Tech City

 

Monday Feb 15, 2016

New Oracle NoSQL Database Whitepaper on Bulk Put and Bulk Get

A new whitepaper is available, describing the new Bulk Get and Bulk Put APIs in the Oracle NoSQL Database product. Download it from our Oracle NoSQL Database page, under Whitepapers.

http://www.oracle.com/technetwork/database/database-technologies/nosqldb/overview/index.html

Monday Jan 04, 2016

Oracle NoSQL BulkGet API

Recently, we have been getting questions from our developer community and customers about the most efficient way to retrieve large amounts of data in a single operation using Oracle NoSQL Database. An example of such a request comes from an eCommerce website, where a customer wants to retrieve all the phones in the $200 to $500 price range from Apple, Samsung, Nokia, Motorola, and a host of other manufacturers, returning all the details including the product images.
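Before the full post, here is a minimal sketch of what such a bulk retrieval could look like with the Java table API. The PHONES table, its key fields, and the exact bulk tableIterator overload used are assumptions for illustration only; check the bulk get javadoc shipped with your kvclient release for the real signatures.

import java.util.Arrays;
import java.util.List;
import oracle.kv.KVStore;
import oracle.kv.KVStoreConfig;
import oracle.kv.KVStoreFactory;
import oracle.kv.table.PrimaryKey;
import oracle.kv.table.Row;
import oracle.kv.table.Table;
import oracle.kv.table.TableAPI;
import oracle.kv.table.TableIterator;

public class BulkGetSketch {
    public static void main(String[] args) {
        KVStore store = KVStoreFactory.getStore(
                new KVStoreConfig("kvstore", "localhost:5000"));
        TableAPI tableAPI = store.getTableAPI();

        // Hypothetical PHONES table keyed by (MANUFACTURER, SKU).
        Table phones = tableAPI.getTable("PHONES");

        // One partial primary key per manufacturer; each key selects
        // all of that manufacturer's rows.
        PrimaryKey apple = phones.createPrimaryKey();
        apple.put("MANUFACTURER", "Apple");
        PrimaryKey samsung = phones.createPrimaryKey();
        samsung.put("MANUFACTURER", "Samsung");
        List<PrimaryKey> keys = Arrays.asList(apple, samsung);

        // Bulk get: a single scatter/gather scan over all the keys.
        // The overload taking an Iterator<PrimaryKey> is an assumption;
        // verify it against the javadoc of your release.
        TableIterator<Row> it =
                tableAPI.tableIterator(keys.iterator(), null, null);
        try {
            while (it.hasNext()) {
                Row row = it.next();
                // Filter on price, read the image field, etc.
                System.out.println(row.toJsonString(false));
            }
        } finally {
            it.close();
        }
        store.close();
    }
}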

[Read More]

Tuesday Nov 17, 2015

Turn Big Data into Fast Data Using Oracle NoSQL and Fusion ioMemory

Prasad Venkatachar at SanDisk has been working with the Oracle NoSQL Database and with Fusion ioMemory to see how using flash affects the performance of various NoSQL workloads. The Fusion ioMemory application accelerators deliver extreme performance when running a variety of workloads. Specifically for the Oracle NoSQL Database, the results show an impressive increase in performance compared to hard disk drives. The YCSB benchmark was run using different configurations and the results are explained for a write-heavy workload (50% write / 50% read), a read-heavy workload (5% write / 95% read) and a read-only workload (100% read). The latencies remained constant over larger workloads, as did the performance, even as the amount of data grew. Read more details about using Fusion ioMemory by clicking here

Wednesday Aug 12, 2015

Migrating/Importing MongoDB Documents into NoSQL Tables

Summary 

This paper presents how to migrate the documents in MongoDB's collections into tables and child tables in Oracle NoSQL. The idea is to take a relatively complex document as an example, define a mapping file that maps the basic fields of the document into a table, and map the embedded collections of the document into child tables. The Java class that we provide generates the NoSQL table structures and inserts the data. The components of each element of a collection are inserted into the store in the same operation.
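Before walking through the example, here is a minimal sketch of the core loop such a loader performs, using the FAMILY example developed below. The class and field names are illustrative only; the provided class is driven entirely by the mapping file and also creates the tables and child tables.

import com.mongodb.MongoClient;
import com.mongodb.client.MongoCollection;
import org.bson.Document;
import oracle.kv.KVStore;
import oracle.kv.KVStoreConfig;
import oracle.kv.KVStoreFactory;
import oracle.kv.table.Row;
import oracle.kv.table.Table;
import oracle.kv.table.TableAPI;

public class FamilyLoaderSketch {
    public static void main(String[] args) {
        // Connection parameters taken from the mapping file described below.
        MongoClient mongo = new MongoClient("localhost", 27017);
        MongoCollection<Document> families =
                mongo.getDatabase("gadb").getCollection("family");

        KVStore store = KVStoreFactory.getStore(
                new KVStoreConfig("kvstore", "bigdatalite:5000"));
        TableAPI tableAPI = store.getTableAPI();
        Table familyTable = tableAPI.getTable("FAMILY");

        for (Document doc : families.find()) {
            Row row = familyTable.createRow();
            row.put("FIRSTNAME", doc.getString("firstname"));
            row.put("LASTNAME", doc.getString("lastname"));
            row.put("GENDER", doc.getString("gender"));
            tableAPI.put(row, null, null);
            // The real loader also walks the 'childrens' array here and
            // inserts rows into FAMILY.CHILDREN and its child tables.
        }
        mongo.close();
        store.close();
    }
}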

A Json example 

 Let's use an example of a family item from a MongoDB collection:

{ "_id" : ObjectId("55c4c6576e4ae64b5997d39e"),

"firstname" : "lena",

"lastname" : "clark",

"gender" : "W",

"childrens" : [

{ "name" : "bob",

"schools" : [ {"street" : "90, pine street","id" : "Saint volume"},

{"street" : "134, mice street","id" : "Saint appearance"}

],

"hobbies" : ["soccer","photo"]

},

{ "name" : "joseph",

"schools" : [ {"street" : "168, merely street","id" : "Saint slipped"} ],

"hobbies" : ["tennis","piano"]

},

{ "name" : "sandy",

"schools" : [{"street" : "227, thread street","id" : "Saint discovery"}],

"hobbies" : ["football","guitar"]

}

]

}

In this case the main document has the following fields: '_id', 'firstname', 'lastname', 'gender' and 'childrens'. 'childrens' is an embedded collection containing 'name', 'schools' and 'hobbies'. 'schools' is again a nested collection with 'street' and 'id' fields, and 'hobbies' is a list. We can map them into several nested tables:

  • the main table, FAMILY, represents the family itself,
  • FAMILY.CHILDREN holds the 'childrens' items, and
  • FAMILY.CHILDREN.SCHOOLS and FAMILY.CHILDREN.HOBBIES store the schools and hobbies information.

The mapping file 

The mapping file is a properties file; in addition to the field mappings it contains the connection information for the MongoDB database and the NoSQL store:

  • the name of the Nosql store: Nosql.Store=kvstore
  • the host and port of the nosql store: Nosql.URL=bigdatalite:5000
  • the mongodb host: MongoDB.host=localhost
  • the mongodb port: MongoDB.port=27017
  • the mongodb database: MongoDB.DB=gadb

Mapping principles

Define the main collection, its fields and its main table mapping

For each field define its type and its mapping value. Note that this can be a recursive step.

For each table define the primary key index components. 

Mapping extracts

Mapping collection and table with its primary keys

  • mongo.collection=family
  • mongo.collection.map=FAMILY
  • FAMILY.indexcols=LASTNAME,FIRSTNAME
indexcols is the keyword that introduces the comma-separated list of key columns; the order is important. The prefix before indexcols is a NoSQL table name.

Family fields

  • family.fields=lastname,firstname,gender,childrens
  • family.firstname.type=string
  • family.firstname.map=FIRSTNAME
  • family.childrens.type=collection
  • family.childrens.map=CHILDREN
fields is the keyword that introduces the comma-separated list of fields of a collection. For each field, type corresponds to the type of a column in a NoSQL table (string, integer, long, float, double or boolean are accepted). Two other values are used: array and collection; array is for lists of basic types, collection is for more complex collections. When type is a basic type, map indicates a column of the mapped table; when type is array or collection, map introduces a new table.

Children mappings

  • CHILDREN.indexcols=NAME
  • childrens.fields=name,schools,hobbies
  • childrens.name.type=string
  • childrens.name.map=NAME
  • childrens.schools.type=collection
  • childrens.schools.map=SCHOOLS
  • childrens.hobbies.type=array
  • childrens.hobbies.map=HOBBIES

School mappings 

  • schools.fields=street,id
  • schools.indexcols=ID
street and id are basic string fields; their type and map entries are not shown.

Hobbies mappings

  • hobbies.fields=hobbies
  • hobbies.hobbies.type=string
  • hobbies.hobbies.map=HOBBY
  • HOBBIES.indexcols=HOBBY

children.hobbies is an array of strings mapped to the child table HOBBIES. There is no field name for the array elements in the main collection, so I've chosen to use hobbies (the name of the collection) as the field name in order to define a mapping.
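Putting the extracts together, the complete mapping file (mappingfam.properties) looks roughly as follows. The lastname, gender, street and id entries were not shown above; they are filled in here by analogy, so treat them as assumptions:

Nosql.Store=kvstore
Nosql.URL=bigdatalite:5000
MongoDB.host=localhost
MongoDB.port=27017
MongoDB.DB=gadb

mongo.collection=family
mongo.collection.map=FAMILY
FAMILY.indexcols=LASTNAME,FIRSTNAME

family.fields=lastname,firstname,gender,childrens
family.lastname.type=string
family.lastname.map=LASTNAME
family.firstname.type=string
family.firstname.map=FIRSTNAME
family.gender.type=string
family.gender.map=GENDER
family.childrens.type=collection
family.childrens.map=CHILDREN

CHILDREN.indexcols=NAME
childrens.fields=name,schools,hobbies
childrens.name.type=string
childrens.name.map=NAME
childrens.schools.type=collection
childrens.schools.map=SCHOOLS
childrens.hobbies.type=array
childrens.hobbies.map=HOBBIES

schools.indexcols=ID
schools.fields=street,id
schools.street.type=string
schools.street.map=STREET
schools.id.type=string
schools.id.map=ID

HOBBIES.indexcols=HOBBY
hobbies.fields=hobbies
hobbies.hobbies.type=string
hobbies.hobbies.map=HOBBY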

Tables generated

Get child tables from FAMILY   

kv-> show tables -parent FAMILY
Tables:
  FAMILY.CHILDREN
  FAMILY.CHILDREN.HOBBIES
  FAMILY.CHILDREN.SCHOOLS

Get table indexes

kv-> show indexes -table FAMILY
Indexes on table FAMILY
  FAMILYIndex (LASTNAME, FIRSTNAME)
kv-> show indexes -table FAMILY.CHILDREN
Indexes on table FAMILY.CHILDREN
  CHILDRENIndex (NAME)
kv-> show indexes -table FAMILY.CHILDREN.SCHOOLS
Indexes on table FAMILY.CHILDREN.SCHOOLS
  SCHOOLSIndex (ID)
kv-> show indexes -table FAMILY.CHILDREN.HOBBIES
Indexes on table FAMILY.CHILDREN.HOBBIES
  HOBBIESIndex (HOBBY)

Getting data from tables

Get our example family

kv-> get table -name FAMILY -field LASTNAME -value "clark" -field FIRSTNAME -value "lena"

{"FIRSTNAME":"lena","LASTNAME":"clark","GENDER":"W"}

Get our family children

kv-> get table -name FAMILY.CHILDREN -field LASTNAME -value "clark" -field FIRSTNAME -value "lena"

{"LASTNAME":"clark","FIRSTNAME":"lena","NAME":"bob"}

{"LASTNAME":"clark","FIRSTNAME":"lena","NAME":"joseph"}

{"LASTNAME":"clark","FIRSTNAME":"lena","NAME":"sandy"}

Get our family children schools

kv-> get table -name FAMILY.CHILDREN.SCHOOLS -field LASTNAME -value "clark" -field FIRSTNAME -value "lena"

{"LASTNAME":"clark","FIRSTNAME":"lena","NAME":"bob","STREET":"134, mice street","ID":"Saint appearance"}

{"LASTNAME":"clark","FIRSTNAME":"lena","NAME":"bob","STREET":"90, pine street","ID":"Saint volume"}

{"LASTNAME":"clark","FIRSTNAME":"lena","NAME":"joseph","STREET":"168, merely street","ID":"Saint slipped"}

{"LASTNAME":"clark","FIRSTNAME":"lena","NAME":"sandy","STREET":"227, thread street","ID":"Saint discovery"} 

Get our family children hobbies

kv-> get table -name FAMILY.CHILDREN.HOBBIES -field LASTNAME -value "clark" -field FIRSTNAME -value "lena"

{"LASTNAME":"clark","FIRSTNAME":"lena","NAME":"bob","HOBBY":"photo"}

{"LASTNAME":"clark","FIRSTNAME":"lena","NAME":"bob","HOBBY":"soccer"}

{"LASTNAME":"clark","FIRSTNAME":"lena","NAME":"joseph","HOBBY":"piano"}

{"LASTNAME":"clark","FIRSTNAME":"lena","NAME":"joseph","HOBBY":"tennis"}

{"LASTNAME":"clark","FIRSTNAME":"lena","NAME":"sandy","HOBBY":"football"}

{"LASTNAME":"clark","FIRSTNAME":"lena","NAME":"sandy","HOBBY":"guitar"}

Running the example

jar files needed

MongoDB Java driver: mongo-java-driver-3.0.0.jar (this is the version we have used)

NoSQL client: kvclient.jar (it should be a version that supports tables, 12.3+; it has been tested with 3.3.4)

Main Java class: mongoloadnosql.MongoDB2Nosql (Java source code is here)

Parameters

The tool has 5 parameters:

  • -limit <integer>, number of documents to load, 0 is for all the documents
  • -skip <integer>, offset of the first document to load, similar to the skip function in MongoDB, 0 means no skip of documents
  • -mapfile <file>, properties file to load
  • -create [true|<anything other than true>], if true the NoSQL API functions for creation of tables and indexes are issued
  • -insert [true|<anything other than true>], if true the NoSQL API functions for insertion are issued

Launch command

This command creates the NoSQL tables and indexes if they do not exist, and inserts all the collection items, using the properties file mappingfam.properties:

java -classpath <tool_dir>/classes:<KVHOME>/lib/kvclient.jar:<MONGODB_CLASSPATH>/mongo-java-driver-3.0.0.jar mongoloadnosql.Mongo2Nosql -limit 0 -skip 0 -mapfile mappingfam.properties -create true -insert true
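For example, to reload only the first 1000 documents into tables that already exist (skipping table creation), the same command can be run with different parameter values; an illustrative variation of the options described above:

java -classpath <tool_dir>/classes:<KVHOME>/lib/kvclient.jar:<MONGODB_CLASSPATH>/mongo-java-driver-3.0.0.jar mongoloadnosql.Mongo2Nosql -limit 1000 -skip 0 -mapfile mappingfam.properties -create false -insert true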

Caveats and warnings

Currently there is no way to map MongoDB references (neither referenced relationships nor DBRefs).

Fields should be presented in the order defined by the primary key: (lastname, firstname) rather than (firstname, lastname).

The attached Java code is only meant to illustrate how to import/migrate MongoDB data into NoSQL tables in an efficient and consistent way; it has not been tested in all kinds of situations and it is not intended to be free of bugs.

Bonus

The following mapping file allows you to map MongoDB documents having the structure of a post.

In this case there is an embedded object "origine", defined as {"owner" : "gus", "site" : "recent_safety.com"}, which is not a collection.

There is no primary key other than the MongoDB '_id' field.

Enjoy trying this example also.

Thursday Mar 12, 2015

When does "free" challenge that old adage, "You get what you pay for"?

This post generated a lot of attention and follow on discussion about the "numbers" seen in the post and related YCSB benchmark results.

Some readers commented, "Only 26,000 tps on 3 machines? I can get 50,000+ on one." and

"Only 10,000 YCSB Workload B tps on 6 nodes? I can get more than that on one."

Thought it was worth stating the obvious, because sometimes what is perfectly clear to one person is completely opaque to another. Numbers like "1 million tps" are meaningless without context. A dead simple example to illustrate the point: I might be able to do 50K inserts of an account balance (50K txns) in half a second on a given server machine, but take that same server and try to insert 50K fingerprint images (50K txns), and if you can get that done in half a second, call me, because magic of that nature is priceless and we should talk.

So for clarity,

[Read More]

Tuesday Dec 02, 2014

Using NoSQL Tables with Spark

The goal of this post is to explain how to use NoSQL tables and how to put their content into a file on HDFS using the Java API for Spark. In HDFS, the table content will be written in a comma-separated (CSV) style.

Oracle's latest Big Data Appliance, "X4-2", offers Cloudera Enterprise technology software, including Cloudera CDH, and Oracle NoSQL Database, including tables.

The Cloudera part offers several ways of integrating with Spark (see Using Nosql and Spark): standalone or via YARN (see Running Spark Applications).

The NoSQL part allows the use of tables. Tables can be defined within the NoSQL admin console, which is started by issuing the following command:

java -Xmx256m -Xms256m -jar $KVHOME/lib/kvstore.jar runadmin -host <host> -port <store port> -store <store name>

There are two parts to defining and creating a table. The definition includes the table name, table fields, primary key, and shard key (a "prefix" of the primary key), and ends with the keyword "exit":

table create -name flightTestExtract
add-field -name param -type STRING
add-field -name flight -type STRING
add-field -name timeref -type LONG
add-field -name value -type INTEGER
primary-key -field timeref -field param -field flight
shard-key -field timeref
exit

The plan commands then perform the table creation and the index definition and creation:

plan add-table -wait -name flightTestExtract
plan add-index -wait -name flightIndex -table flightTestExtract -field flight -field param -field timeref
plan add-index -wait -name paramIndex -table flightTestExtract -field param -field flight -field timeref

Inserting into the table can be done with the put command:

put table -name flightTestExtract -json "{\"param\":\"11\",\"flight\":\"8\",\"timeref\":61000000000002,\"value\":1764248535}"

put table -name flightTestExtract -json "{\"param\":\"12\",\"flight\":\"8\",\"timeref\":61000000000002,\"value\":-1936513330}"

put table -name flightTestExtract -json "{\"param\":\"11\",\"flight\":\"6\",\"timeref\":61000000000013,\"value\":1600130521}"

put table -name flightTestExtract -json "{\"param\":\"11\",\"flight\":\"8\",\"timeref\":61000000000013,\"value\":478674806}"

The latest NoSQL patch, 3.1.7, has some new Java classes that can be used to get table data into Hadoop. The class oracle.kv.hadoop.table.TableInputFormat can be used to build a Spark JavaRDD:

JavaPairRDD<PrimaryKey, Row> jrdd = sc.newAPIHadoopRDD(hconf, TableInputFormat.class, PrimaryKey.class, Row.class);

The oracle.kv.table.PrimaryKey class corresponds to the fields of the primary key of the table, for example in JSON style:

{"timeref":61000000000013, "param":"11","flight":"8"}

The oracle.kv.table.Row class corresponds to the fields of a table row, for example in JSON style:

{"param":"11","flight":"8","timeref":61000000000013,"value":478674806}

If we want to save the content of the table on HDFS in CSV style, we have to:

  • apply a flatMap on the rows of the RDD 
    flatMap(func) each input item can be mapped to 0 or more output items (so func should return a Seq rather than a single item). 
  • save the result on hdfs

The following inner class defines the map:

// Requires java.util.Arrays, java.util.List,
// org.apache.spark.api.java.function.FlatMapFunction and oracle.kv.table.Row.
static class FlatMapRow_Str implements FlatMapFunction<Row, String> {
    @Override
    public Iterable<String> call(Row s) {
        // Build one CSV line from the values of all the row's fields.
        List<String> lstr = s.getFields();
        StringBuilder csvValues = new StringBuilder();
        for (String field : lstr) {
            if (csvValues.length() > 0)
                csvValues.append(",");
            csvValues.append(s.get(field));
        }
        return Arrays.asList(csvValues.toString());
    }
}

The code to do the job is: 

// Obtain the Row RDD
JavaRDD<Row> rddvalues = jrdd.values();
// Obtain the CSV-style form of the RDD
JavaRDD<String> csvStr = rddvalues.flatMap(new FlatMapRow_Str());
// Save the results on HDFS
csvStr.saveAsTextFile(pathPrefix + "/" + tableName + "csvStr");

The last step is to test using Yarn:

spark-submit --master yarn --jars /u01/nosql/kv-ee/lib/kvclient.jar --class table.SparkNosqlTable2HadoopBlog /u01/nosql/kv-ee/examples/table/deploy/sparktables.jar <nosql store name> <nosql store url> <table name> <path prefix>

<nosql store url> is <store host>:<store port> 

You can get the java source code here

Tuesday Oct 21, 2014

Loading into NoSQL using Hive

The main purpose of this post is to show how tightly we can tie NoSQL and Hive; the focus is on uploading data into NoSQL from Hive.

An earlier post (here) discussed the use of Hive external tables to select data from Oracle NoSQL, using a HiveStorageHandler implementation. We have reworked this implementation to load data from HDFS or a local file system into NoSQL via Hive. Only the upload of text data is currently supported.

Two kinds of data files can be uploaded:

Case 1: Files containing plain text data like the following comma separated lines:

  • 10,5,001,545973390
  • 10,5,010,1424802007
  • 10,5,011,164988888 

Case 2: Files containing a JSON field corresponding to a given AVRO schema like the following tab separated lines:

  •  10 5 173 {"samples": [{"delay": 0, "value": -914351026}, {"delay": 1, "value": 1842307749}, {"delay": 2, "value": -723989379}, {"delay": 3, "value": -1665788954}, {"delay": 4, "value": 91277214}, {"delay": 5, "value": 1569414562}, {"delay": 6, "value": -877947100}, {"delay": 7, "value": 498879656}, {"delay": 8, "value": -1245756571}, {"delay": 9, "value": 812356097}]}
  •  10 5 174 {"samples": [{"delay": 0, "value": -254460852}, {"delay": 1, "value": -478216539}, {"delay": 2, "value": -1735664690}, {"delay": 3, "value": -1997506933}, {"delay": 4, "value": -1062624313}]}

How to do it?

1. Define the external table

2. Create and load a native Hive table

3. Insert into the external table a selection from the native Hive table

Case 1:

1. Define the external table

CREATE EXTERNAL TABLE MY_KV_PI_10_5_TABLE (flight string, sensor string, timeref string, stuff string)
      STORED BY 'nosql.example.oracle.com.NosqlStorageHandler'
      WITH SERDEPROPERTIES ("kv.major.keys.mapping" = "flight,sensor", "kv.minor.metadata" = "false", "kv.minor.keys.mapping" = "timeref", "kv.key.prefix" = "PI/10/5", "kv.value.type" = "string", "kv.key.range" = "", "kv.host.port" = "bigdatalite:5000", "kv.name" = "kvstore", "kv.key.ismajor" = "true");

2. Create and load a native Hive table

CREATE TABLE kv_pi_10_5_load (flight string, sensor string, timeref string, stuff string) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\054' STORED AS TEXTFILE;

LOAD DATA LOCAL INPATH '/home/oracle/hivepath/pi_10_5' OVERWRITE INTO TABLE kv_pi_10_5_load;

3. Insert into the external table a selection from the native Hive table

INSERT INTO TABLE my_kv_pi_10_5_table SELECT * from kv_pi_10_5_load;

The external table definition specifies the major key and its complete key components. This definition is used when inserting: the flight and sensor values of the incoming data are ignored (they are fixed by the key prefix), and the timeref elements are loaded using the NoSQL operation API, which batches the insertions.

Case 2:

1. Define the external table

CREATE EXTERNAL TABLE MY_KV_RI_10_5_TABLE (flight string, sensor string, timeref string, stuff string)
      STORED BY 'nosql.example.oracle.com.NosqlStorageHandler'
      WITH SERDEPROPERTIES ("kv.major.keys.mapping" = "flight,sensor", "kv.minor.metadata" = "false", "kv.minor.keys.mapping" = "timeref", "kv.key.prefix" = "RI/10/5", "kv.value.type" = "avro", "kv.key.range" = "", "kv.key.ismajor" = "true", "kv.avro.schema" = "com.airbus.zihb.avro.SampleIntSet", "kv.host.port" = "bigdatalite:5000", "kv.name" = "kvstore");

When creating the external table used for the upload into NoSQL, a new parameter is used: "kv.avro.schema" = "com.airbus.zihb.avro.SampleIntSet"

It is the NoSQL name of an Avro schema; in terms of the Avro schema definition, it is the schema namespace "." schema name.
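As an illustration, an Avro schema whose full name is com.airbus.zihb.avro.SampleIntSet and that matches the JSON values shown in case 2 would look roughly like this (a sketch; the actual schema registered in the store may differ):

{
  "type": "record",
  "namespace": "com.airbus.zihb.avro",
  "name": "SampleIntSet",
  "fields": [
    { "name": "samples",
      "type": { "type": "array",
                "items": { "type": "record",
                           "name": "Sample",
                           "fields": [
                             { "name": "delay", "type": "int" },
                             { "name": "value", "type": "int" } ] } } }
  ]
}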

 2. Create and load a native Hive table

 CREATE TABLE kv_ri_10_5_load (flight string, sensor string, timeref string, stuff string) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\011' STORED AS TEXTFILE;

LOAD DATA LOCAL INPATH '/home/oracle/hivepath/ri_10_5' OVERWRITE INTO TABLE kv_ri_10_5_load;

 3. Insert into the external table a selection from the native Hive table

INSERT INTO TABLE my_kv_ri_10_5_table SELECT * from kv_ri_10_5_load;

How to verify the upload?

Two possibilities:

  • a select query on Hive
  • a get on the kvstore

Let's do it on the Nosql client command line

Case 1: Verify a random line existence

 kv-> get kv  -key /PI/10/5/-/010 -all

/PI/10/5/-/010

1424802007

1 Record returned

Case 2: Verify a random line existence

kv-> get kv  -key /RI/10/5/-/173 -all
/RI/10/5/-/173
{
  "samples" : [ {
    "delay" : 0,
    "value" : -914351026
  }, {
    "delay" : 1,
    "value" : 1842307749
  }, {
    "delay" : 2,
    "value" : -723989379
  }, {
    "delay" : 3,
    "value" : -1665788954
  }, {
    "delay" : 4,
    "value" : 91277214
  }, {
    "delay" : 5,
    "value" : 1569414562
  }, {
    "delay" : 6,
    "value" : -877947100
  }, {
    "delay" : 7,
    "value" : 498879656
  }, {
    "delay" : 8,
    "value" : -1245756571
  }, {
    "delay" : 9,
    "value" : 812356097
  }

 ]

}

1 Record returned

Let's do it on the hive command line

Case 1: Verify a random line existence

select *  from MY_KV_PI_10_5_TABLE where timeref = "010";

OK

10 5 010 1424802007

Case 2: Verify a random line existence

hive> select *  from MY_KV_RI_10_5_TABLE where timeref = "173";

... 

OK

10 5 173 {"samples": [{"delay": 0, "value": -914351026}, {"delay": 1, "value": 1842307749}, {"delay": 2, "value": -723989379}, {"delay": 3, "value": -1665788954}, {"delay": 4, "value": 91277214}, {"delay": 5, "value": 1569414562}, {"delay": 6, "value": -877947100}, {"delay": 7, "value": 498879656}, {"delay": 8, "value": -1245756571}, {"delay": 9, "value": 812356097}]}

You can get a Jdeveloper 12c project here

We have made a round trip between NoSQL and Hive:

  1. Key-value subsets of a NoSQL database can be viewed using Hive's SELECT query language
  2. Data from Hive tables can be uploaded into NoSQL key-value pairs

 

About

This blog is about everything NoSQL. An open place to express thoughts on this exciting topic and exchange ideas with other enthusiasts learning and exploring what the coming generation of data management will look like in the face of social digital modernization. A collective dialog to invigorate the imagination and drive innovation straight into the heart of our efforts to better our existence through technological excellence.
