Monday Jan 25, 2016

Integrating Apache Spark with Oracle NoSQL Database

A new whitepaper from the Oracle NoSQL Database team looks at the integration between Apache Spark and the Oracle NoSQL Database. In this whitepaper, learn about:

• The high-level architecture of Oracle NoSQL Database and Apache Spark, and the integration between them

• Log analysis use case

• How to implement the use case using Oracle NoSQL Database and Apache Spark 

The whitepaper can be found HERE.
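
To give a flavor of the implementation, here is a minimal Java sketch of a Spark job reading an Oracle NoSQL Database table through the Hadoop InputFormat that ships with the product. This is an illustrative sketch, not code from the whitepaper: the configuration property names (oracle.kv.kvstore, oracle.kv.hosts, oracle.kv.tableName) and the LogTable schema are assumptions, so consult the whitepaper for the exact API.

import org.apache.hadoop.conf.Configuration;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;

import oracle.kv.hadoop.table.TableInputFormat;
import oracle.kv.table.PrimaryKey;
import oracle.kv.table.Row;

import scala.Tuple2;

public class NoSqlLogAnalysis {
    public static void main(String[] args) {
        JavaSparkContext sc =
            new JavaSparkContext(new SparkConf().setAppName("NoSQLLogAnalysis"));

        // Hadoop configuration telling the InputFormat which store and table
        // to scan. The property names below are assumptions; see the whitepaper.
        Configuration conf = new Configuration();
        conf.set("oracle.kv.kvstore", "kvstore");
        conf.set("oracle.kv.hosts", "bigdatalite:5000");
        conf.set("oracle.kv.tableName", "LogTable");   // hypothetical log table

        // Each record arrives as a (PrimaryKey, Row) pair.
        JavaPairRDD<PrimaryKey, Row> logs = sc.newAPIHadoopRDD(
            conf, TableInputFormat.class, PrimaryKey.class, Row.class);

        // Trivial log analysis: count records per HTTP status code.
        logs.values()
            .mapToPair(row -> new Tuple2<>(row.get("status").asInteger().get(), 1))
            .reduceByKey(Integer::sum)
            .collect()
            .forEach(t -> System.out.println(t._1() + " -> " + t._2()));

        sc.stop();
    }
}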

Monday Jan 04, 2016

Oracle NoSQL BulkGet API

Recently, we have been getting queries from our developer community and customers who want to know the most efficient way to retrieve large amounts of data in a single operation using Oracle NoSQL Database. An example would be an eCommerce website where a potential customer wants to retrieve all the phones priced between $200 and $500 from Apple, Samsung, Nokia, Motorola, and a host of other manufacturers, returning all the product details, including images.
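
This is exactly the scenario the bulk get API targets (see the 12.2.3.5.2 announcement below): the application hands over a batch of primary keys, and the store streams back every matching row in one parallel, batched scan. A minimal sketch, assuming the tableIterator overload that accepts a list of primary-key iterators; the Phones table and its fields are hypothetical:

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

import oracle.kv.KVStore;
import oracle.kv.KVStoreConfig;
import oracle.kv.KVStoreFactory;
import oracle.kv.table.PrimaryKey;
import oracle.kv.table.Row;
import oracle.kv.table.Table;
import oracle.kv.table.TableAPI;
import oracle.kv.table.TableIterator;

public class PhoneBulkGet {
    public static void main(String[] args) {
        KVStore store = KVStoreFactory.getStore(
            new KVStoreConfig("kvstore", "localhost:5000"));
        TableAPI tableAPI = store.getTableAPI();
        Table phones = tableAPI.getTable("Phones");   // hypothetical table

        // One partial primary key per manufacturer of interest; this assumes
        // manufacturer is the leading (shard) key component.
        List<PrimaryKey> keys = new ArrayList<>();
        for (String maker : new String[] {"Apple", "Samsung", "Nokia", "Motorola"}) {
            PrimaryKey pk = phones.createPrimaryKey();
            pk.put("manufacturer", maker);
            keys.add(pk);
        }

        // The bulk-get overload takes a list of key iterators and fetches
        // all matching rows in a single parallel scan.
        List<Iterator<PrimaryKey>> keyIterators = new ArrayList<>();
        keyIterators.add(keys.iterator());

        TableIterator<Row> rows = tableAPI.tableIterator(keyIterators, null, null);
        try {
            while (rows.hasNext()) {
                Row row = rows.next();
                double price = row.get("price").asDouble().get();
                if (price >= 200 && price <= 500) {
                    System.out.println(row.toJsonString(false));
                }
            }
        } finally {
            rows.close();
        }
        store.close();
    }
}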

[Read More]

Tuesday Dec 15, 2015

Oracle NoSQL Database 12.2.3.5.2 Available

Oracle NoSQL Database version 12.2.3.5.2 is now available for download. We strongly recommend that you download this new version. The highlights are:

  • Bulk Put API - A high-performance API that allows the application to insert multiple records in a single API call. The Bulk Put API is available for the table as well as the key/value data model. It provides significant performance gains over single-row inserts by reducing network round trips and by performing ordered, batched inserts on internally sorted data, letting application developers work more effectively with very large datasets (see the sketch after this list).
  • Kerberos integration - Oracle NoSQL Database Enterprise Edition (EE) supports authentication using a Kerberos service. Kerberos is an industry-standard authentication protocol for large client/server systems. With Kerberos, Oracle NoSQL DB and application developers can take advantage of the existing authentication infrastructure and processes within your enterprise. To use Oracle NoSQL DB with Kerberos, you must have a properly configured Kerberos deployment, configure Kerberos service principals for Oracle NoSQL DB, and add Kerberos user principals to Oracle NoSQL DB. Please refer to the security guide for details, and also see the sample code.
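
A minimal sketch of the Bulk Put API, assuming the EntryStream callback interface and the TableAPI.put(List<EntryStream<Row>>, BulkWriteOptions) overload introduced in this release; the Users table is hypothetical:

import java.util.Arrays;
import java.util.Iterator;

import oracle.kv.BulkWriteOptions;
import oracle.kv.EntryStream;
import oracle.kv.KVStore;
import oracle.kv.KVStoreConfig;
import oracle.kv.KVStoreFactory;
import oracle.kv.table.Row;
import oracle.kv.table.Table;
import oracle.kv.table.TableAPI;

public class BulkLoadSketch {
    public static void main(String[] args) {
        KVStore store = KVStoreFactory.getStore(
            new KVStoreConfig("kvstore", "localhost:5000"));
        TableAPI tableAPI = store.getTableAPI();
        Table users = tableAPI.getTable("Users");   // hypothetical table

        // A stream that hands rows to the loader one at a time;
        // the loader sorts and batches them internally.
        EntryStream<Row> stream = new EntryStream<Row>() {
            private final Iterator<String> names =
                Arrays.asList("alice", "bob", "carol").iterator();
            @Override public String name() { return "user-stream"; }
            @Override public Row getNext() {         // returning null ends the stream
                if (!names.hasNext()) return null;
                Row row = users.createRow();
                row.put("name", names.next());
                return row;
            }
            @Override public void completed() { }
            @Override public void keyExists(Row entry) { }
            @Override public void catchException(RuntimeException e, Row entry) {
                throw e;
            }
        };

        // null picks the default bulk write options (assumption).
        tableAPI.put(Arrays.asList(stream), (BulkWriteOptions) null);
        store.close();
    }
}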

 Read more and download from HERE.

Tuesday Nov 17, 2015

Turn Big Data into Fast Data Using Oracle NoSQL and Fusion ioMemory

Prasad Venkatachar at SanDisk has been working with the Oracle NoSQL Database and Fusion ioMemory to see how using flash affects the performance of various NoSQL workloads. The Fusion ioMemory application accelerators deliver extreme performance when running a variety of workloads. Specifically for the Oracle NoSQL Database, the results show an amazing increase in performance compared to hard disk drives. The YCSB benchmark was run using different configurations, and the results are explained for a write-heavy workload (50% write / 50% read), a read-heavy workload (5% write / 95% read) and a read-only workload (100% read). Latencies and throughput remained constant even as the amount of data grew. Read more details about using Fusion ioMemory by clicking here.

Friday Nov 06, 2015

Oracle NoSQL Database demonstrates record performance with SPARC M7

Oracle's SPARC T7-4 server containing the SPARC M7 processor delivered 1.9 million ops/sec on 1.6 billion records for the Yahoo Cloud Serving Benchmark (YCSB) 95% read/5% update workload. Oracle NoSQL Database was used in these tests. NoSQL is important for Big Data Analysis and for Cloud Computing. The details can be found at the Performance blog.

Tuesday Nov 03, 2015

OOW'15 Oracle NoSQL HOLs

Last week at OOW'15 we ran Hands On Labs (HOLs) for the NoSQL Database. There were essentially two tracks: 1) administrators and 2) developers. I was often asked to provide the links for the labs; they are available on our OTN pages, and I am also posting them on this blog. The link contains a VM with pre-built scripts and a guide on the desktop that walks through the HOL step by step.

 1) Admin Track

  • Contains steps to deploy a 3x3 cluster, simulating a scenario of 3 machines, each with 3 disks 
  • Extends the cluster by adding a node to each shard (3x4), thereby increasing the read throughput 
  • Backup and Recovery of your store
  • Securing the store

  2) Developer Track

  • Deploys the cluster
  • Create tables (parent and child)
  • Create Secondary Indexes
  • Populate the tables
  • Query using the CLI and an application:
    • Range Queries
    • Retrieve both parent and child records in a single call (see the sketch after this list)
    • Usage of secondary indexes
  • Integrate with RDBMS using external tables
  • Data modelling exercise
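
As a flavor of the developer-track exercises, here is a minimal Java sketch of retrieving a parent row together with its child rows in a single call. The Users/Users.Addresses tables and fields are hypothetical, not the exact lab schema:

import java.util.Arrays;
import java.util.List;

import oracle.kv.KVStore;
import oracle.kv.KVStoreConfig;
import oracle.kv.KVStoreFactory;
import oracle.kv.table.MultiRowOptions;
import oracle.kv.table.PrimaryKey;
import oracle.kv.table.Row;
import oracle.kv.table.Table;
import oracle.kv.table.TableAPI;

public class ParentChildGet {
    public static void main(String[] args) {
        KVStore store = KVStoreFactory.getStore(
            new KVStoreConfig("kvstore", "localhost:5000"));
        TableAPI tableAPI = store.getTableAPI();

        Table parent = tableAPI.getTable("Users");            // hypothetical parent
        Table child  = tableAPI.getTable("Users.Addresses");  // hypothetical child

        PrimaryKey pk = parent.createPrimaryKey();
        pk.put("id", 42);

        // Include the child table so parent and child rows come back together.
        MultiRowOptions opts = new MultiRowOptions(null, null, Arrays.asList(child));
        List<Row> rows = tableAPI.multiGet(pk, opts, null);
        for (Row row : rows) {
            System.out.println(row.getTable().getFullName() + ": "
                               + row.toJsonString(false));
        }
        store.close();
    }
}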

The HOLs were designed to be very simple, and they give us an opportunity to help folks in our community get started with Oracle NoSQL DB. But I also realize that a 1-hour session is too short a time to convey all of the above, especially for folks trying the product for the first time. There are some very intricate details around distributed systems, topology management, data modelling, and so on. That's something we'll work on as we run these sessions next year. I'd also like to hear from you: please send your feedback on the content, material, etc.

Friday Oct 30, 2015

Oracle NoSQL Database Mentioned in Press Release

See the Oracle press release from October 27, 2015, regarding the Oracle Cloud Platform for Big Data.  Read more here.

Friday Oct 09, 2015

Announcement - Oracle NoSQL Database 12.1.3.4.7

Oracle announced Oracle NoSQL Database 3.4.7 in October 2015. This release offers new features including: new commands to perform failover and switchover operations when there is a loss of quorum; a Bulk Get API that takes as input a list of primary keys and returns all the records matching those keys; an off-heap cache that allows users to allocate and use additional memory outside the limits of the Java heap, reducing the impact of Java garbage collection; and support for Big Data SQL and Apache Hive integration. Please read the full announcement at http://bit.ly/1qBUMuP. To download, visit our download page (http://bit.ly/1mpL2f9) for the latest release.

Check out the release presentation here.

Monday Aug 24, 2015

Setting up a single node NoSQL cluster



[Read More]

Thursday Aug 20, 2015

Invoking OracleNoSQL based Java application from PL/SQL

Recently, we ran into an interesting use case with one of our large supermarket customers, who wanted to take the output of a PL/SQL process and store it in Oracle NoSQL Database, to be consumed later by one of their retail applications - very quickly, and in a way that can scale to support the high volume of data they expect. Oracle NoSQL DB is the obvious choice because it provides high-throughput, low-latency read/write operations and can scale to support large volumes of data.

Coming to the integration, one of the highlights of Oracle NoSQL Database is that it integrates very well with the rest of the Oracle tech stack. The simplest way to write to Oracle NoSQL DB from a PL/SQL procedure is to call a Java stored procedure that uses the native NoSQL DB API to insert data into the database, and the simplest way to read from Oracle NoSQL DB in a stored procedure is to use an external table in the query, so that data from Oracle NoSQL DB can be passed to the Oracle Database query processor. Another option is to use GoldenGate to move data from the Oracle Database to NoSQL DB. We have already blogged about the GoldenGate integration, so in this blog I am going to focus on the Java stored procedure approach.

In case you are not familiar with Java stored procedures: a Java stored procedure essentially contains Java public static methods that are published to PL/SQL and stored in an Oracle database for general use. This allows a Java stored procedure to be executed from an application as if it were a PL/SQL stored procedure. When called by client applications, a Java stored procedure can accept arguments, reference Java classes, and return Java result values.

So, to help our customer, we created a POC that showcases this integration. The steps involved are:

  1. First, create the NoSQL DB tables that will store the data from the Oracle Database.
  2. Create a Java application that uses the native NoSQL driver to perform CRUD operations on NoSQL DB.
  3. Load the Java application classes created in step 2 into the Oracle Database using the loadjava utility.
  4. Create a Java stored procedure that takes the data from PL/SQL and updates the NoSQL database (see the sketch after this list).
  5. Next, publish the Java stored procedure in the Oracle data dictionary. To do that, you write a call spec, which maps the Java method name, parameter types, and return type to their SQL counterparts.
  6. Finally, call the Java stored procedure from a PL/SQL block to perform the updates.
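
To make the flow concrete, here is a minimal sketch of the kind of class loaded in step 3 and published in step 5. The store, table, column, and procedure names are all hypothetical; the actual POC code is in the zip mentioned below:

import oracle.kv.KVStore;
import oracle.kv.KVStoreConfig;
import oracle.kv.KVStoreFactory;
import oracle.kv.table.Row;
import oracle.kv.table.Table;
import oracle.kv.table.TableAPI;

// Loaded into the Oracle Database with loadjava, then published
// through a PL/SQL call spec along these lines:
//
//   CREATE OR REPLACE PROCEDURE put_record(id NUMBER, payload VARCHAR2)
//   AS LANGUAGE JAVA
//   NAME 'NosqlBridge.putRecord(int, java.lang.String)';
//
public class NosqlBridge {
    public static void putRecord(int id, String payload) {
        KVStore store = KVStoreFactory.getStore(
            new KVStoreConfig("kvstore", "localhost:5000"));
        try {
            TableAPI tableAPI = store.getTableAPI();
            Table table = tableAPI.getTable("RetailData");  // hypothetical table
            Row row = table.createRow();
            row.put("id", id);
            row.put("payload", payload);
            tableAPI.put(row, null, null);   // nulls select the default options
        } finally {
            store.close();
        }
    }
}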

The POC is available for download as a zip file from our OTN page (refer to the PL/SQL Integration entry under Demo/Sample Programs). The README file bundled in the zip has all the detailed steps and files needed for this integration.

With this approach, the NoSQL access is transparent to the Oracle DB application. NoSQL DB is an excellent choice here, and using the Java stored procedure approach, the customer can exploit the advantages of BOTH repositories effectively and with better TCO.



Wednesday Aug 12, 2015

Migrating/Importing MongoDB Documents into NoSQL Tables

Summary 

This paper presents how to migrate documents from MongoDB collections into tables and child tables in Oracle NoSQL Database. The idea is to take a relatively complex document as an example, define a mapping file that maps the basic fields of the document into a table, and map the embedded collections of the document into child tables. The Java class that we provide generates the NoSQL table structures and inserts the data. The set of components of each element of a collection is inserted into the store in the same operation.

A Json example 

 Let's use an example of a family item from a MongoDB collection:

{ "_id" : ObjectId("55c4c6576e4ae64b5997d39e"),

"firstname" : "lena",

"lastname" : "clark",

"gender" : "W",

"childrens" : [

{ "name" : "bob",

"schools" : [ {"street" : "90, pine street","id" : "Saint volume"},

{"street" : "134, mice street","id" : "Saint appearance"}

],

"hobbies" : ["soccer","photo"]

},

{ "name" : "joseph",

"schools" : [ {"street" : "168, merely street","id" : "Saint slipped"} ],

"hobbies" : ["tennis","piano"]

},

{ "name" : "sandy",

"schools" : [{"street" : "227, thread street","id" : "Saint discovery"}],

"hobbies" : ["football","guitar"]

}

]

}

In this case the main document has the following fields: '_id', 'firstname', 'lastname', 'gender' and 'childrens'. 'childrens' is an embedded collection containing 'name', 'schools' and 'hobbies'; 'schools' is again a nested collection with 'street' and 'id' fields, and 'hobbies' is a list. We can map them into several nested tables (see the sketch after this list):

  • the main table represents FAMILY,
  • FAMILY.CHILDREN  gets 'childrens' items and
  • FAMILY.CHILDREN.SCHOOLS and FAMILY.CHILDREN.HOBBIES store schools and hobbies information.
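
To make the mapping concrete, here is a minimal Java sketch of what it means at insert time: the parent row carries the basic fields, and each child row repeats its parent's key columns. It uses the MongoDB 3.0 Java driver and the NoSQL table API, covers only FAMILY and FAMILY.CHILDREN, and is purely illustrative; the full generic loader is the class provided with this post.

import java.util.List;

import org.bson.Document;

import com.mongodb.MongoClient;
import com.mongodb.client.MongoCollection;

import oracle.kv.KVStore;
import oracle.kv.KVStoreConfig;
import oracle.kv.KVStoreFactory;
import oracle.kv.table.Row;
import oracle.kv.table.Table;
import oracle.kv.table.TableAPI;

public class FamilyLoaderSketch {
    @SuppressWarnings("unchecked")
    public static void main(String[] args) {
        MongoClient mongo = new MongoClient("localhost", 27017);
        MongoCollection<Document> families =
            mongo.getDatabase("gadb").getCollection("family");

        KVStore store = KVStoreFactory.getStore(
            new KVStoreConfig("kvstore", "bigdatalite:5000"));
        TableAPI tableAPI = store.getTableAPI();
        Table familyTab = tableAPI.getTable("FAMILY");
        Table childTab  = tableAPI.getTable("FAMILY.CHILDREN");

        for (Document family : families.find()) {
            // Parent row: the basic fields of the document.
            Row fRow = familyTab.createRow();
            fRow.put("LASTNAME", family.getString("lastname"));
            fRow.put("FIRSTNAME", family.getString("firstname"));
            fRow.put("GENDER", family.getString("gender"));
            tableAPI.put(fRow, null, null);

            // Child rows: one per element of the embedded collection;
            // a child row repeats its parent's key columns.
            List<Document> childDocs = (List<Document>) family.get("childrens");
            for (Document childDoc : childDocs) {
                Row cRow = childTab.createRow();
                cRow.put("LASTNAME", family.getString("lastname"));
                cRow.put("FIRSTNAME", family.getString("firstname"));
                cRow.put("NAME", childDoc.getString("name"));
                tableAPI.put(cRow, null, null);
            }
        }
        store.close();
        mongo.close();
    }
}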

The mapping file 

The mapping file is a properties file; it also contains the connection information to access the MongoDB database and the NoSQL store:

  • the name of the Nosql store: Nosql.Store=kvstore
  • the host and port of the nosql store: Nosql.URL=bigdatalite:5000
  • the mongodb host: MongoDB.host=localhost
  • the mongodb port: MongoDB.port=27017
  • the mongodb database: MongoDB.DB=gadb

Mapping principles

Define the main collection, its fields and its main table mapping

For each field define its type and its mapping value. Note that this can be a recursive step.

For each table define the primary key index components. 

Mapping extracts

Mapping collection and table with its primary keys

  • mongo.collection=family
  • mongo.collection.map=FAMILY
  • FAMILY.indexcols=LASTNAME,FIRSTNAME
indexcols is the keyword that introduces the comma-separated list of columns of the key; order is important. The prefix before indexcols is a NoSQL table name.

Family fields

  • family.fields=lastname,firstname,gender,childrens
  • family.firstname.type=string
  • family.firstname.map=FIRSTNAME
  • family.childrens.type=collection
  • family.childrens.map=CHILDREN
fields is the keyword that introduces the comma-separated list of fields of a collection. For each field, type corresponds to the type of a column in a NoSQL table (string, integer, long, float, double or boolean are accepted). Two other values are used: array and collection. array is for lists of basic types; collection is for more complex collections. When type is a basic type, map indicates a column of the mapped table; when type is array or collection, map introduces a new table.

Children mappings

  • CHILDREN.indexcols=NAME
  • childrens.fields=name,schools,hobbies
  • childrens.name.type=string
  • childrens.name.map=NAME
  • childrens.schools.type=collection
  • childrens.schools.map=SCHOOLS
  • childrens.hobbies.type=array
  • childrens.hobbies.map=HOBBIES

School mappings 

  • schools.fields=street,id
  • schools.indexcols=ID
street and id are basic string fields; their type and map entries are not shown.

Hobbies mappings

  • hobbies.fields=hobbies
  • hobbies.hobbies.type=string
  • hobbies.hobbies.map=HOBBY
  • HOBBIES.indexcols=HOBBY

children.hobbies is an array of strings mapped to the child table HOBBIES. There is no name in the main collection for this field, so I've chosen to use hobbies (the name of the collection) as the field name in order to define a mapping.

Tables generated

Get child tables from FAMILY   

kv-> show tables -parent FAMILY
Tables:
  FAMILY.CHILDREN
  FAMILY.CHILDREN.HOBBIES
  FAMILY.CHILDREN.SCHOOLS

Get table indexes

kv-> show indexes -table FAMILY
Indexes on table FAMILY
  FAMILYIndex (LASTNAME, FIRSTNAME)

kv-> show indexes -table FAMILY.CHILDREN
Indexes on table FAMILY.CHILDREN
  CHILDRENIndex (NAME)

kv-> show indexes -table FAMILY.CHILDREN.SCHOOLS
Indexes on table FAMILY.CHILDREN.SCHOOLS
  SCHOOLSIndex (ID)

kv-> show indexes -table FAMILY.CHILDREN.HOBBIES
Indexes on table FAMILY.CHILDREN.HOBBIES
  HOBBIESIndex (HOBBY)

Getting data from tables

Get our example family

kv-> get table -name FAMILY -field LASTNAME -value "clark" -field FIRSTNAME -value "lena"
{"FIRSTNAME":"lena","LASTNAME":"clark","GENDER":"W"}

Get our family children

kv-> get table -name FAMILY.CHILDREN -field LASTNAME -value "clark" -field FIRSTNAME -value "lena"
{"LASTNAME":"clark","FIRSTNAME":"lena","NAME":"bob"}
{"LASTNAME":"clark","FIRSTNAME":"lena","NAME":"joseph"}
{"LASTNAME":"clark","FIRSTNAME":"lena","NAME":"sandy"}

Get our family children schools

kv-> get table -name FAMILY.CHILDREN.SCHOOLS -field LASTNAME -value "clark" -field FIRSTNAME -value "lena"
{"LASTNAME":"clark","FIRSTNAME":"lena","NAME":"bob","STREET":"134, mice street","ID":"Saint appearance"}
{"LASTNAME":"clark","FIRSTNAME":"lena","NAME":"bob","STREET":"90, pine street","ID":"Saint volume"}
{"LASTNAME":"clark","FIRSTNAME":"lena","NAME":"joseph","STREET":"168, merely street","ID":"Saint slipped"}
{"LASTNAME":"clark","FIRSTNAME":"lena","NAME":"sandy","STREET":"227, thread street","ID":"Saint discovery"}

Get our family children hobbies

kv-> get table -name FAMILY.CHILDREN.HOBBIES -field LASTNAME -value "clark" -field FIRSTNAME -value "lena"
{"LASTNAME":"clark","FIRSTNAME":"lena","NAME":"bob","HOBBY":"photo"}
{"LASTNAME":"clark","FIRSTNAME":"lena","NAME":"bob","HOBBY":"soccer"}
{"LASTNAME":"clark","FIRSTNAME":"lena","NAME":"joseph","HOBBY":"piano"}
{"LASTNAME":"clark","FIRSTNAME":"lena","NAME":"joseph","HOBBY":"tennis"}
{"LASTNAME":"clark","FIRSTNAME":"lena","NAME":"sandy","HOBBY":"football"}
{"LASTNAME":"clark","FIRSTNAME":"lena","NAME":"sandy","HOBBY":"guitar"}

Running the example

jar files needed

MongoDB Java driver: mongo-java-driver-3.0.0.jar (this is the version we have used)

NoSQL client: kvclient.jar (a version that supports tables, 12.3+; it has been tested with 3.3.4)

Main Java class: mongoloadnosql.MongoDB2Nosql (the Java source code is here)

Parameters

The tool has 5 parameters:

  • -limit <integer>, the number of documents to load; 0 loads all the documents
  • -skip <integer>, the offset of the first document to load, similar to the skip function in MongoDB; 0 means no documents are skipped
  • -mapfile <file>, the properties file to load
  • -create [true|anything other than true], if true the NoSQL API functions for creating tables and indexes are issued
  • -insert [true|anything other than true], if true the NoSQL API functions for insertion are issued

Launch command

This command creates the NoSQL tables and indexes if they do not exist, and inserts all the collection's items, using the properties file mappingfam.properties:

java -classpath <tool_dir>/classes:<KVHOME>/lib/kvclient.jar:<MONGODB_CLASSPATH>/mongo-java-driver-3.0.0.jar mongoloadnosql.Mongo2Nosql -limit 0 -skip 0 -mapfile mappingfam.properties -create true -insert true

Caveats and warnings

Currently there is no way to map MongoDB references (neither referenced relationships nor DBRefs).

Fields should be presented in the order defined by their primary keys: (lastname,firstname) rather than (firstname,lastname).

The attached Java code illustrates how to import/migrate MongoDB data into NoSQL tables in an efficient and consistent way; it has not been tested in all kinds of situations and it is not intended to be free of bugs.

Bonus

The following mapping file allows mapping MongoDB documents that have the structure of a post.

In this case there is an embedded object "origine", defined as {"owner" : "gus","site" : "recent_safety.com"}, which is not a collection.

There is no primary key other than the MongoDB '_id' field.

Enjoy trying this example also.

Sunday Jun 14, 2015

Uploading NoSQL tables from Golden Gate User Exits

Golden Gate and NoSQL Integration

The aim of this post is to illustrate how to use GoldenGate to stream relational transactions to Oracle NoSQL 12.3.4. We follow the structure of the post that illustrated how to use GoldenGate to stream data into HBase.

As shown in the diagram below, integrating the database with NoSQL is accomplished by developing a custom handler using Oracle GoldenGate's and NoSQL's Java APIs.

The custom handler is deployed as an integral part of the Oracle GoldenGate Pump process. The Pump process and the custom adapter are configured through the Pump parameter file and the custom adapter's properties file. The Pump reads the Trail File created by the Oracle GoldenGate Capture process and passes the transactions to the adapter. Based on the configuration, the adapter writes the transactions into the NoSQL table.

You can find the Java code for the handler at this Github repository in folder StreamFromGG.

The steps to generate and test an example are:

  1. Prepare the database to stream data from a table
  2. Define a NoSQL table
  3. Define the pump extract parameter file for the database
  4. Define the extract parameter file and the adapter properties file for NoSQL
  5. Register the extract process
  6. Start the GoldenGate extract processes
  7. Do some data manipulation on the database and verify the content of the NoSQL table

Let's take an example.

Prepare the database to stream data from a table

This part is not detailed; let's say that a database user is defined on Oracle 12c to allow GoldenGate transactional streaming. The database parameters are set to log transactional SQL commands in the appropriate way to satisfy GoldenGate requirements.

We will focus on the database table T2 from the gha schema, whose definition is:

CREATE TABLE "GHA"."T2" 

   ( "ID" NUMBER, 

"CREATED" TIMESTAMP (6), 

"NOM" VARCHAR2(32 BYTE), 

"VILLE" VARCHAR2(128 BYTE), 

CONSTRAINT "PK_T2" PRIMARY KEY ("ID", "CREATED")

Define a NoSQL table

 After connecting to the NoSQL store the following commands create the table T2:

table create -name T2
# Add table fields
add-field -name ID -type STRING
add-field -name NOM -type STRING
add-field -name CREATED -type STRING
add-field -name VILLE -type STRING
# Assign fields as the primary key
primary-key -field ID -field CREATED
shard-key -field ID
exit
# Add table to the database
plan add-table -wait -name T2

Define the pump extract parameter file for the database

The extract for the database first requires the use of the defgen utility to create what is called a data definition file, which contains the definition of the source table.

The content of the extract parameter's file is:

EXTRACT E_ghat2
TARGETDEFS ./dirsql/t2.sql
SETENV (ORACLE_SID=cdb)
userid c##ogg, password ogg
exttrail /u01/ogg/dirdat/T2
GETUPDATEBEFORES
table orcl.gha.t2 TARGET gha.t2;

The extract name is E_ghat2, the table definition file is t2.sql, the Oracle user for the transactional streaming is c##ogg, the generated trail files are prefixed with T2, and the container of the schema gha is orcl.

Define the extract parameter file and the adapter properties file for NoSQL

When using GoldenGate Java adapters, there are two files: one defines the extract parameters, the other gives the Java-specific properties for the adapter (the default name for this file is <extract_name>.properties; if a different name is used, it should be given in the extract parameters). Our extract name is nosqlt2. Part of the content of nosqlt2.properties is:

jvm.bootoptions= -Xms64m -Xmx512M -Dlog4j.configuration=log4j.properties -Djava.class.path=dirprm:/u01/nosql/kv-ee/lib/jackson-core-asl.jar:/u01/nosql/kv-ee/lib/jackson-mapper-asl.jar:/u01/nosql/kv-ee/lib/avro.jar:/u01/ogg/ggjava/oggnosql.jar:/u01/nosql/kv-ee/lib/kvclient.jar:/u01/ogg/ggjava/ggjava.jar:/usr/lib/hadoop/client/commons-configuration-1.6.jar:/etc/hadoop/conf:/usr/lib/hadoop/client/commons-cli.jar

#Nosql Handler.
gg.handlerlist=nosqlhandler
gg.handler.nosqlhandler.type=com.goldengate.delivery.handler.nosql.NosqlHandler
gg.handler.nosqlhandler.NosqlStore=kvstore
gg.handler.nosqlhandler.NosqlUrl=bigdatalite:5000
gg.handler.nosqlhandler.NosqlTable=T2
gg.handler.nosqlhandler.NosqlCols=ID,CREATED,NOM,VILLE
gg.handler.nosqlhandler.NosqlPKCols=ID,CREATED
gg.handler.nosqlhandler.NosqlShardCols=ID
gg.handler.nosqlhandler.NosqlMappings=ID,ID;CREATED,CREATED;NOM,NOM;VILLE,VILLE

The meaning of these properties is: 

  • jvm.bootoptions, gives the path for the NoSQL Java classes, including JSON data handling and the jar for the NoSQL adapter
  • gg.handlerlist, gives the list of handlers; in this case nosqlhandler will be used to identify the properties
  • gg.handler.nosqlhandler.type, gives the class used as the adapter
  • gg.handler.nosqlhandler.NosqlStore, gives the name of the NoSQL store to connect to
  • gg.handler.nosqlhandler.NosqlUrl, gives the NoSQL store URL (hostname:port)
  • gg.handler.nosqlhandler.NosqlTable, gives the name of the table
  • gg.handler.nosqlhandler.NosqlCols, gives a comma-separated list of the NoSQL table columns
  • gg.handler.nosqlhandler.NosqlPKCols, gives a comma-separated list of the NoSQL table primary key columns
  • gg.handler.nosqlhandler.NosqlShardCols, gives a comma-separated list of the NoSQL table shard columns (should be a non-empty subset of the primary key columns)
  • gg.handler.nosqlhandler.NosqlMappings, gives a semicolon-separated list of mapping pairs (source column,target column)

The adapter implementation of NoSQL data manipulation (delete, update, create) uses the shard column values to batch operations into the NoSQL database. The batched operations are executed only when the stored shard key value changes, as sketched below.
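
A sketch of that batching idea, assuming the table API's TableOperationFactory and the execute() call, which requires all operations in a batch to share the same shard key. This is not the handler's actual code; see the Github repository for that.

import java.util.ArrayList;
import java.util.List;

import oracle.kv.table.ReturnRow;
import oracle.kv.table.Row;
import oracle.kv.table.TableAPI;
import oracle.kv.table.TableOperation;
import oracle.kv.table.TableOperationFactory;

public class ShardBatcher {
    private final TableAPI tableAPI;
    private final TableOperationFactory factory;
    private final List<TableOperation> batch = new ArrayList<>();
    private String currentShardKey;

    public ShardBatcher(TableAPI tableAPI) {
        this.tableAPI = tableAPI;
        this.factory = tableAPI.getTableOperationFactory();
    }

    // Accumulate puts while the shard key stays the same;
    // flush the pending batch as soon as it changes.
    public void put(Row row, String shardKey) throws Exception {
        if (currentShardKey != null && !currentShardKey.equals(shardKey)) {
            flush();
        }
        currentShardKey = shardKey;
        batch.add(factory.createPut(row, ReturnRow.Choice.NONE, false));
    }

    // Execute all pending operations as one atomic batch.
    public void flush() throws Exception {
        if (!batch.isEmpty()) {
            tableAPI.execute(batch, null);   // null = default write options
            batch.clear();
        }
    }
}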

Register the extract process

Use the ggsci utility to issue the following commands (replacing <OGG_HOME> with its real value):

add extract E_GHAT2, integrated tranlog, begin now
add exttrail <OGG_HOME>/dirdat/T2, extract E_GHAT2, megabytes 10
register extract E_GHAT2 database container (orcl)
add extract NOSQLT2, exttrailsource <OGG_HOME>/dirdat/T2

 Start the GoldenGate extract processes

Use the ggsci utility to start e_ghat2 and nosqlt2.

Verify that the processes are running:

GGSCI (bigdatalite.localdomain) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING     E_GHAT2     00:00:02      00:01:07
EXTRACT     RUNNING     NOSQLT2     49:22:05      00:01:56

Data manipulation and verifications between Oracle 12c and NoSQL

Get the table count on NoSQL

kv-> aggregate table -name t2 -count
Row count: 4329

Delete data from t2 on Oracle 12c

delete t2 where id = 135
commit

2 rows deleted

Recompute the table count on NoSQL

kv-> aggregate table -name t2 -count
Row count: 4327

Note that the last batch of NoSQL operations is flushed when the extract nosqlt2 is stopped.

Thursday Mar 19, 2015

Forrester Wave places NoSQL Database among the leaders

We are very pleased that Oracle NoSQL Database has been recognized as one of the leaders in the key-value NoSQL product category by Forrester Research.  Please see http://www.oracle.com/us/corporate/analystreports/forrester-wave-nosql-2348063.pdf for the full report. 

In the past few years, we’ve witnessed growing adoption of NoSQL technologies to address specific data management problems.   In many of the early adopter scenarios, the NoSQL applications were developed and managed as self-contained, standalone repositories of semi-structured data.

In recent months, it has become clear that such data silos are very expensive to implement and maintain.  Big data and NoSQL users now understand that well integrated NoSQL and SQL systems are the key to effective data management in today’s world.  An integrated set of products for managing NoSQL and relational data is critical for delivering business value in a cost effective manner. Oracle NoSQL Database is fully integrated with Oracle Database and related technologies, thus making it an excellent choice for enterprise-grade, mission-critical NoSQL applications.  As mentioned in the Forrester Wave report, “Many Oracle customers use Oracle NoSQL to balance the need for scale-out workloads of simpler key-value data, with the rich set of relational data management capabilities needed in their core business systems, or when supporting new applications that have frequently changing key-value data, such as profiles for fraud, personalization, and sensor data management”.

Thursday Mar 12, 2015

When does "free" challenge that old adage, "You get what you pay for"?

This post generated a lot of attention and follow-on discussion about the "numbers" seen in the post and related YCSB benchmark results.

Some readers commented, "Only 26,000 tps on 3 machines? I can get 50,000+ on one," and

"Only 10,000 YCSB Workload B tps on 6 nodes? I can get more than that on one."

I thought it was worth stating the obvious, because sometimes what is perfectly clear to one person is completely opaque to another. Numbers like "1 million tps" are meaningless without context. A dead simple example to illustrate the point: I might be able to do 50K inserts of an account balance (50K txns) in half a second on a given server machine. But take that same server and try to insert 50K fingerprint images (50K txns), and if you can get that done in half a second, call me, because magic of that nature is priceless and we should talk.

So for clarity,

[Read More]

Tuesday Jan 20, 2015

Using R to plot data from NoSql tables

Oracle NoSQL tables are relatively new, and communication between those tables and other big data systems and tools is still under construction. There is a package (rkvstore) that gives access to NoSQL key/value data from R, but this package does not support tables. This post presents a way to access R from Java and to work with table data from NoSQL using the R package Rserve, and shows how to generate a plot of this data.


RServe

You need to install the Rserve package into R; it can be found here (under Download/Files).

To launch the R server, first install the Rserve package:

R CMD INSTALL <path to the Rserve.tar.gz source package>

Then run R and start the server:

> library("Rserve")
> Rserve()

The R server gets data from a Java program and returns the result.


NoSql Tables

To create and load NoSQL tables, refer to this post.


Java Code

The main steps of the Java program are (see the sketch after this list):

  • Connect to the kvstore
  • Get data from a table (via an iterator)
  • Create an R session
  • Transform data from the iterator to an R format
  • Assign data to R variables
  • Generate the R elements to make the plot
  • Display the data
  • Disconnect
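
A minimal sketch of those steps using the Rserve client classes (REngine.jar, RServeEngine.jar). The data array stands in for values pulled from the NoSQL table iterator, and the plot is written to a file rather than displayed; names and values are illustrative only.

import org.rosuda.REngine.REXP;
import org.rosuda.REngine.Rserve.RConnection;

public class RPlotSketch {
    public static void main(String[] args) throws Exception {
        RConnection r = new RConnection();            // connect to the local Rserve

        double[] delays = {12.0, 3.5, 27.1, 8.4};     // values from the table iterator
        r.assign("delays", delays);                   // assign data to an R variable

        // Build the plot on the R side and write it to a file.
        r.eval("png('/tmp/delays.png')");
        r.eval("plot(delays, type='b', main='Flight delays')");
        r.eval("dev.off()");

        REXP mean = r.eval("mean(delays)");           // pull a result back to Java
        System.out.println("mean delay: " + mean.asDouble());

        r.close();
    }
}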

You can find the Java source code for this blog entry here.

Usage

Run Java with the following class and arguments:

nosql.r.RViewTablesBlog kvstore kvhost:kvport tablename flightrangebegin flightrangeend

Use -1 for flightrangebegin and flightrangeend to ignore the flight range.

 

Add kvclient.jar from NoSQL, plus REngine.jar and RServeEngine.jar from Rserve, to the classpath.


Results 

R returns an image similar to:

[Plot result from R]

Enjoy ! 

About

This blog is about everything NoSQL. An open place to express thoughts on this exciting topic and exchange ideas with other enthusiasts learning and exploring what the coming generation of data management will look like in the face of social digital modernization. A collective dialog to invigorate the imagination and drive innovation straight into the heart of our efforts to better our existence through technological excellence.
