
The Oracle NoSQL Database Blog covers all things Oracle NoSQL Database. On-Prem, Cloud and more.

Recent Posts

Oracle NoSQL Database Cloud Service – 10 Minutes to Hello World in Node.js

Guest Post by: Dave Rubin, Senior Director, NoSQL and Embedded Database Development, Oracle

I recently posted a hello world example coded in Python and showed how easy it is for Python developers to get started with the Oracle NoSQL Database cloud. In this post, I will show the same hello world example coded in JavaScript. If you are a JavaScript developer and already have access to the Oracle Cloud, I believe you can also be up and running in 10 minutes or less using the Oracle NoSQL Database Cloud Service with the JavaScript SDK.

In the very first "hello world" post, I also talked about why you might want to use a NoSQL database for certain applications. For that discussion, check out the post 15-minutes-to-hello-world. The remainder of this post focuses on writing JavaScript code for your first Oracle NoSQL Database cloud application.

Getting Started with the Oracle NoSQL Database Cloud Service

The Oracle NoSQL Database Cloud Service is a serverless, fully managed data store that delivers predictable single-digit millisecond response times and allows the application to scale on demand via provisioning API calls. There are five simple steps to getting started with the Oracle NoSQL Database Cloud Service:

1. Download the Oracle NoSQL Database SDK.
2. Create a compartment for your table (if you do not want your table in the root compartment).
3. Connect to the Oracle NoSQL Database Cloud Service.
4. Create a table with provisioned reads/sec, writes/sec, and GB storage.
5. Write data to the table and read data from the table.

Furthermore, you can use free cloud credits to do all of this and not pay a single penny. Once you have created a cloud account, you can either navigate to the Oracle NoSQL Database table management console or, if you are like most developers, quickly move on to writing your first hello world code.
For this release of the Oracle NoSQL Database Cloud Service, you can write your first hello world program using Python, Node.js, Java, or Go. Future releases of the service will support C++, C#, and other popular programming languages. I will use JavaScript, running in Node.js, for the remainder of this blog.

Download the Oracle NoSQL Database SDK

Since access to the Oracle NoSQL Cloud Service is via HTTP, you can run your application directly on your laptop and connect to the database service over the internet. While I would never recommend using the internet as a network transport for performance-sensitive applications, it works perfectly for our hello world example. In fact, you would likely deploy a real application inside your own tenancy, co-located in the same Oracle Cloud Infrastructure region as your Oracle NoSQL table, and use the Oracle Cloud Infrastructure Service Gateway to connect to the NoSQL Cloud Service.

Like other open-source JavaScript packages, you can find the Oracle NoSQL Database JavaScript SDK on npm. Note that the JavaScript SDK is intended for server-side development, so it must run inside Node.js. When you search for Oracle NoSQL, you will see that the latest Oracle NoSQL driver that lets you connect to the cloud service, or to Oracle NoSQL Database running anywhere, is called oracle-nosqldb; its current version is 5.2.2, so I will install this one locally on my laptop. Also note that the SDK requires Node.js version 12.0.0 or higher.

Below, I am running sudo npm install -g oracle-nosqldb to install the Oracle NoSQL Database JavaScript SDK on my laptop and make it available globally.

Create a Compartment for Your Table

If you would like your table to be created in your own compartment (e.g., namespace) rather than the root compartment, you can create a compartment by navigating to the Compartments section of the Identity menu item in the Oracle Cloud console.
Connect to the Oracle NoSQL Database Cloud Service

The Oracle NoSQL Database Cloud Service uses the Oracle Cloud Infrastructure native Identity and Access Management (IAM) for authentication and authorization. In the JavaScript API documentation for the SDK (https://oracle.github.io/nosql-node-sdk), you will notice a NoSQLClient class, which is what I will use to instantiate a connection to the cloud service and authenticate. Note the IAMConfig configuration object, since this is what I will use to supply my credentials to the NoSQLClient class.

Before you can authenticate your application with the cloud service, you must generate a key pair and upload your public key to the Oracle Cloud. The instructions at https://docs.cloud.oracle.com/en-us/iaas/Content/API/Concepts/apisigningkey.htm#How will guide you through generating an API signing key and uploading the public portion of that key to the Oracle Cloud.

The following information is needed to successfully authenticate with the cloud service:

- Tenancy and user OCID – This page (https://docs.cloud.oracle.com/en-us/iaas/Content/API/Concepts/apisigningkey.htm#Other) will help you locate your tenancy and user OCIDs.
- Private key file and fingerprint – The file holding your private key, generated when you created the API signing key, should be safely stored in a location known only to you. To find out how to retrieve the fingerprint of the key, see https://docs.cloud.oracle.com/en-us/iaas/Content/API/Concepts/apisigningkey.htm#four
- Passphrase – If you provided a password for your key, you must supply it in the IAMConfig passphrase field.

The code below is structured as follows:

- getConnection – This is where you will fill in your specific credentials. This function performs authentication with the Oracle NoSQL Cloud Service and returns a handle to the service.
- createTable – Creates a table with two columns: an id column of type LONG and a content column of type JSON.
- writeARecord – Writes a single record to the hello_world table.
- readARecord – Reads a single record from the hello_world table and returns it.
- doHelloWorld – Gets a connection to the service, creates the table, writes a single record, reads that record back, and finally prints that record to stdout.

```javascript
'use strict';

const NoSQLClient = require('oracle-nosqldb').NoSQLClient;
const Region = require('oracle-nosqldb').Region;

/**
 * Call the main function for this example.
 **/
doHelloWorld();

/**
 * This function will authenticate with the cloud service,
 * create a table, write a record to the table, then read that record back.
 **/
async function doHelloWorld() {
    try {
        let handle = await getConnection(Region.US_ASHBURN_1);
        await createTable(handle);
        await writeARecord(handle, { id: 1, content: { 'hello': 'world' } });
        console.log('Wrote a record with primary key 1');
        let theRecord = await readARecord(handle, 1);
        console.log('Successfully read the record: ' +
            JSON.stringify(theRecord.row));
        process.exit(0);
    } catch (error) {
        console.log(error);
        process.exit(-1);
    }
}

/**
 * Create and return an instance of a NoSQLClient object.
 * NOTE that you need to fill in your cloud credentials and the
 * compartment where you want your table created. Compartments
 * can be dot-separated paths. For example: developers.dave.
 *
 * @param {Region} whichRegion An element in the Region enumeration
 * indicating the cloud region you wish to connect to
 */
function getConnection(whichRegion) {
    return new NoSQLClient({
        compartment: 'Your compartment name goes here',
        region: whichRegion,
        auth: {
            iam: {
                tenantId: 'The OCID of your tenancy goes here',
                userId: 'Your user OCID goes here',
                fingerprint: 'The fingerprint for your key pair goes here',
                privateKeyFile: 'A fully qualified path to your private ' +
                    'key file goes here',
                passphrase: 'The passphrase used to create your private ' +
                    'key goes here'
            }
        }
    });
}

/**
 * This function will create the hello_world table with two columns:
 * one LONG column which will be the primary key and one JSON column.
 *
 * @param {NoSQLClient} handle An instance of NoSQLClient
 */
async function createTable(handle) {
    const createDDL = 'CREATE TABLE IF NOT EXISTS hello_world ' +
        '(id LONG, content JSON, PRIMARY KEY(id))';
    console.log('Create table: ' + createDDL);
    let res = await handle.tableDDL(createDDL, {
        complete: true,
        tableLimits: {
            readUnits: 1,
            writeUnits: 1,
            storageGB: 1
        }
    });
}

/**
 * Writes a single record to the hello_world table.
 *
 * @param {NoSQLClient} handle An instance of NoSQLClient
 * @param {Object} record A JSON object representing the record to
 * write to hello_world
 */
async function writeARecord(handle, record) {
    await handle.put('hello_world', record);
}

/**
 * Reads and returns a record from the hello_world table.
 *
 * @param {NoSQLClient} handle An instance of NoSQLClient
 * @param {number} pk The primary key of the record to retrieve
 */
async function readARecord(handle, pk) {
    return await handle.get('hello_world', { 'id': pk });
}
```

If you take the code above, place it in a file named HelloWorld.js, and fill in your credentials in the getConnection() function as well as the fully qualified name of the compartment where you want your table created, you can run the file with node HelloWorld.js.

For more information about the JavaScript SDK and pointers to more example code on GitHub, take a look at https://www.npmjs.com/package/oracle-nosqldb.

Using the Native Oracle Cloud Infrastructure Console

You can also explore your tables using the Oracle Cloud Infrastructure console. On the left-hand menu, simply navigate to the NoSQL Database menu item and click on the hello_world table link to see the table details page. Clicking the Table rows control and then the Run query button will display the record that you just inserted into the table.

If you have questions regarding this exercise, please send an email to oraclenosql-info_ww@oracle.com with "Hello World - JavaScript" in the subject line, and someone will get back to you as soon as possible.



Oracle NoSQL Database Cloud Service – 10 Minutes to Hello World in Python

Guest Post by: Dave Rubin, Senior Director, NoSQL and Embedded Database Development, Oracle

The Oracle NoSQL Database development team is thrilled to have the Oracle NoSQL Database Cloud Service fully integrated on the native Oracle Cloud Infrastructure. In a previous post, I showed how easy it was to write a simple Hello World application in Java using the recently introduced Oracle NoSQL Database Cloud Service. In fact, I claimed that you could write your first HelloWorld application in Java in 15 minutes or less. In this post, I will demonstrate the same HelloWorld application written in Python; if you are a Python developer and already have access to the Oracle Cloud, I believe your first HelloWorld application can be written in less than 10 minutes.

In the first post, I also talked about why you might want to use a NoSQL database for certain applications. For that discussion, check out the post 15-minutes-to-hello-world. The remainder of this post focuses on writing Python code for your first Oracle NoSQL Database cloud application.

Getting Started with the Oracle NoSQL Database Cloud Service

The Oracle NoSQL Database Cloud Service is a serverless, fully managed data store that delivers predictable single-digit millisecond response times and allows applications to scale on demand via provisioning API calls. There are five simple steps to getting started with the Oracle NoSQL Database Cloud Service:

1. Download the Oracle NoSQL Database SDK.
2. Create a compartment for your table (if you do not want your table in the root compartment).
3. Connect to the Oracle NoSQL Database Cloud Service.
4. Create a table with provisioned reads/sec, writes/sec, and GB storage.
5. Write data to the table and read data from the table.

Furthermore, you can use free cloud credits to do all of this and not pay a single penny.
Once you have created a cloud account, you can either navigate to the Oracle NoSQL Database table management console or, if you are like most developers, quickly move on to writing your first hello world code. For this release of the Oracle NoSQL Database Cloud Service, you can write your first hello world program using Python, Node.js, Java, or Go. Future releases of the service will support C++, C#, and other popular programming languages. I will use Python for the remainder of this blog.

Download the Oracle NoSQL Database SDK

Since access to the Oracle NoSQL Cloud Service is via HTTP, you can run your application directly on your laptop and connect to the database service over the internet. While I would never recommend using the internet as a network transport for performance-sensitive applications, it works perfectly for our hello world example. In fact, you would likely deploy a real application inside your own tenancy, co-located in the same Oracle Cloud Infrastructure region as your Oracle NoSQL table, and use the Oracle Cloud Infrastructure Service Gateway to connect to the NoSQL Cloud Service.

Like other open-source Python packages, you can find the Oracle NoSQL Database Python SDK on PyPI. When you search for Oracle NoSQL, you will see that the latest Oracle NoSQL driver that lets you connect to the cloud service, or to Oracle NoSQL Database running anywhere, is called borneo, so I will install this one locally on my laptop. Below, I am running pip install borneo to install the Oracle NoSQL Database Python SDK on my laptop. Since we will be connecting to the Oracle NoSQL Database Cloud Service, we must also install the Oracle Cloud Infrastructure package by running pip install oci.
More detailed installation instructions can be found at https://nosql-python-sdk.readthedocs.io/en/latest/installation.html.

Create a Compartment for Your Table

If you would like your table to be created in your own compartment (e.g., namespace) rather than the root compartment, you can create a compartment by navigating to the Compartments section of the Identity menu item in the cloud console.

Connect to the Oracle NoSQL Database Cloud Service

The Oracle NoSQL Database Cloud Service uses the Oracle Cloud Infrastructure native Identity and Access Management (IAM) for authentication and authorization. In the Python API documentation for the SDK (https://nosql-python-sdk.readthedocs.io/en/latest/api.html), you will notice a borneo.iam package, which currently contains a single class, SignatureProvider. We will use this class to provide our authentication information to the cloud service.

Before you can authenticate your application with the cloud service, you must generate a key pair and upload your public key to the Oracle Cloud. The instructions at https://docs.cloud.oracle.com/en-us/iaas/Content/API/Concepts/apisigningkey.htm#How will guide you through generating an API signing key and uploading the public portion of that key to the Oracle Cloud.

The following information is needed to successfully authenticate with the cloud service:

- Tenancy and user OCID – This page (https://docs.cloud.oracle.com/en-us/iaas/Content/API/Concepts/apisigningkey.htm#Other) will help you locate your tenancy and user OCIDs.
- Private key file and fingerprint – The file holding your private key, generated when you created the API signing key, should be safely stored in a location known only to you. To find out how to retrieve the fingerprint of the key, see https://docs.cloud.oracle.com/en-us/iaas/Content/API/Concepts/apisigningkey.htm#four
- Passphrase – If you provided a password for your key, you must supply it to the SignatureProvider.

The code below is structured as follows:

- get_connection – This is where you will fill in your specific credentials. This function performs authentication with the Oracle NoSQL Cloud Service and returns a handle to the service.
- create_table – Creates a table with two columns: an id column of type LONG and a content column of type JSON.
- write_a_record – Writes a single record to the hello world table.
- read_a_record – Reads a single record from the hello world table and returns it.
- The main entry point – Gets a connection to the service, creates the table, writes a single record, reads that record back, and finally prints that record to stdout.
```python
import os
from borneo import (Regions, NoSQLHandle, NoSQLHandleConfig, PutRequest,
                    TableRequest, GetRequest, TableLimits, State)
from borneo.iam import SignatureProvider


# Given a region and compartment, instantiate a connection to the
# cloud service and return it
def get_connection(region, compartment):
    print('Connecting to the Oracle NoSQL Cloud Service')
    provider = SignatureProvider(
        tenant_id='Your tenancy OCID goes here',
        user_id='Your user OCID goes here',
        private_key='A fully qualified path to your key file goes here',
        fingerprint='The fingerprint for your key pair goes here',
        pass_phrase='The pass phrase for your key goes here')
    config = NoSQLHandleConfig(region, provider)
    config.set_default_compartment(compartment)
    return NoSQLHandle(config)


# Given a handle to the Oracle NoSQL Database cloud service,
# this function will create the hello_world table with two columns
# and set the read units, write units, and GB storage each to 1
def create_table(handle):
    statement = ('create table if not exists hello_world (id long, '
                 'content JSON, primary key(id))')
    print('Creating table: ' + statement)
    request = TableRequest().set_statement(statement)\
        .set_table_limits(TableLimits(1, 1, 1))
    # Ask the cloud service to create the table, waiting for a total of
    # 40000 milliseconds and polling the service every 3000 milliseconds
    # to see if the table is active
    table_result = handle.do_table_request(request, 40000, 3000)
    if table_result.get_state() != State.ACTIVE:
        table_result.wait_for_completion(handle, 40000, 3000)
    # Check to see if the table is in the ACTIVE state
    if table_result.get_state() != State.ACTIVE:
        raise NameError('Table hello_world is in an unexpected state ' +
                        str(table_result.get_state()))


# Given a handle to the Oracle NoSQL Database cloud service, the name of
# the table to write the record to, and a dictionary formatted as a
# record for the table, this function will write the record to the table
def write_a_record(handle, table_name, record):
    request = PutRequest().set_table_name(table_name)
    request.set_value(record)
    handle.put(request)


# Given a handle to the Oracle NoSQL Database cloud service, the name of
# the table to read from, and the primary key value for the table,
# this function will read the record from the table and return it
def read_a_record(handle, table_name, pk):
    request = GetRequest().set_table_name(table_name)
    request.set_key({'id': pk})
    return handle.get(request)


def main():
    handle = get_connection(Regions.US_ASHBURN_1,
                            'Your compartment name goes here')
    create_table(handle)
    record = {'id': 1, 'content': {'hello': 'world'}}
    write_a_record(handle, 'hello_world', record)
    print('Wrote record: \n\t' + str(record))
    the_written_record = read_a_record(handle, 'hello_world', 1)
    print('Read record: \n\t' + str(the_written_record.get_value()))
    os._exit(os.EX_OK)


if __name__ == '__main__':
    main()
```

If you take the code above, place it in a file named HelloWorld.py, and fill in your credentials in the get_connection() function as well as the fully qualified name of the compartment where you want your table created, you can run the file with python HelloWorld.py.

For more information about the Python SDK and pointers to more example code on GitHub, take a look at https://pypi.org/project/borneo/.

Using the Native Oracle Cloud Infrastructure Console

You can also explore your tables using the Oracle Cloud Infrastructure console. On the left-hand menu, simply navigate to the NoSQL Database menu item and click on the hello_world table link to see the table details page. Clicking the Table rows control and then the Run query button will display the record that you just inserted into the table.

If you have questions regarding this exercise, please send an email to oraclenosql-info_ww@oracle.com with "Hello World - Python" in the subject line, and someone will get back to you as soon as possible.
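The literal credential strings in get_connection() above are easy to leak into source control. One alternative, sketched below using only the standard library, is to read them from environment variables before handing them to SignatureProvider. The variable names here are this sketch's own convention, not something the borneo SDK defines:

```python
import os

# Hypothetical environment variable names used by this sketch only.
REQUIRED = ('NOSQL_TENANCY_OCID', 'NOSQL_USER_OCID',
            'NOSQL_PRIVATE_KEY_FILE', 'NOSQL_FINGERPRINT')


def load_credentials(env=os.environ):
    """Collect the values SignatureProvider needs, failing fast with a
    clear message if any required setting is absent."""
    missing = [name for name in REQUIRED if name not in env]
    if missing:
        raise KeyError('missing credential settings: ' + ', '.join(missing))
    return {
        'tenant_id': env['NOSQL_TENANCY_OCID'],
        'user_id': env['NOSQL_USER_OCID'],
        'private_key': env['NOSQL_PRIVATE_KEY_FILE'],
        'fingerprint': env['NOSQL_FINGERPRINT'],
        # The passphrase is only needed when the private key is encrypted.
        'pass_phrase': env.get('NOSQL_KEY_PASSPHRASE'),
    }
```

With this in place, get_connection() could build its provider with SignatureProvider(**load_credentials()) instead of embedding the strings in code.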



Oracle NoSQL Database Multi-Region Table - Part 2

This blog is part two of the series on multi-region tables. In the first blog, we saw the need for a geographically distributed database and its advantages in terms of providing low-latency local reads and writes. We also looked at the potential use cases that can benefit from this feature. In this blog, we'll look at multi-region tables in more detail, as well as the different components that make up this feature, and finally how to correctly set up a multi-region table in Oracle NoSQL Database for an active-active architecture.

In the last blog, we looked at an active-active architecture. From a database point of view, here are some of the critical requirements that a database should satisfy to provide a correct active-active setup:

- Geographic distribution – Deploying a system across a broad set of regions separated by a significant speed-of-light distance (> 10 ms). The system should automatically replicate the data to all the connected regions without requiring manual work from the end users.
- Performance (low read/write latency) – The ability to minimize read/write latency based on user location.
- Write anywhere – The ability to write to any record, in any region, at any time, with the understanding that these writes may conflict when a record with the same key is updated in multiple regions.

Multi-Region Tables and Multi-Region Architecture

At a high level, a multi-region architecture is two or more independent, geographically distributed Oracle NoSQL Database clusters bridged by bi-directional replication. For example, a user can deploy three Oracle NoSQL Database instances (also referred to as stores) in Frankfurt, London, and Dublin. Now, suppose you want to collect and maintain similar data across these multiple regions. You need a mechanism to create tables that can span multiple regions and keep themselves updated with the inputs from all the participating regions. You can achieve this using multi-region tables. A multi-region table is a read-anywhere and write-anywhere table that lives in multiple regions.

Figure 1: Multi-region Architecture

Please note: the term "region" has different meanings in different contexts. Unlike the Oracle NoSQL Database Cloud Service (NDCS), where "region" means one of the OCI regions, e.g., us-east (Ashburn, VA), us-west (Phoenix, AZ), or eu-central (Frankfurt), here a region means an independent Oracle NoSQL Database installation, and each installation may itself be geographically distributed. In this architecture, all the distributed stores form a fully connected graph, and all multi-region tables in those regions are kept synchronized.

Within each region, a new component called the Cross-Region (XRegion) Service is deployed and is responsible for subscribing to committed changes on multi-region tables. Each region's committed changes are streamed directly out of the store's transaction log and delivered to all remote subscribers. Please refer to our documentation for more details about the architecture and its various components, including the XRegion Service.

With the above as background, let's set up a multi-region table and walk through its life cycle with an example. Consider an Oracle NoSQL Database deployment with two regions, Frankfurt and London. We create a table called Users to store user details in each of the regions. Let's summarize the steps that you must perform to create and manage the table:

1. Prepare the JSON config file and create a writable directory for each region. The JSON config file describes how the regions will connect (e.g., the hosts and ports) as well as the security file used.
2. Deploy stores in each region.
3. Start the XRegion Service in each region.
4. Define the local region's name, and define the remote regions.
5. Create a multi-region table in each region.

We will then perform data operations like INSERT, UPDATE, and DELETE on the table in one region and see those changes propagated to the other region.

Preparation

The first step is to create a home directory for the XRegion Service and copy the example JSON config file into that directory. The JSON config file should specify this directory as the "path" (see example below).

```
bash-4.1$ mkdir /home/aachanda/xrshome
bash-4.1$ ls -lrt
total 1
-rw-r--r--+ 1 aachanda g900 303 April  1 16:25 json.config
```

Prepare the JSON Config Files

These are the two JSON config files used to create a two-region multi-region table across the regions London (LND) and Frankfurt (FRA). The attributes in these files are explained below.

Region LND:

```json
{
  "path" : "/home/aachanda/work/xrshome",
  "agentGroupSize" : 1,
  "agentId" : 0,
  "region" : "LND",
  "store" : "mystore",
  "helpers" : [ "localhost:5000" ],
  "regions" : [ {
    "name" : "FRA",
    "store" : "mystore",
    "helpers" : [ "fra-nosql-1.oracle.com:5000" ]
  } ]
}
```

Region FRA:

```json
{
  "path" : "/home/aachanda/work/xrshome",
  "agentGroupSize" : 1,
  "agentId" : 0,
  "region" : "FRA",
  "store" : "mystore",
  "helpers" : [ "localhost:5000" ],
  "regions" : [ {
    "name" : "LND",
    "store" : "mystore",
    "helpers" : [ "lnd-nosql-1.oracle.com:5000" ]
  } ]
}
```

- path: The root directory of the XRegion Service. The service will use this directory for logs, statistics, and other auxiliary files, so it must be readable and writable by the service.
- agentGroupSize and agentId: These two parameters specify the number of service agents and the agent id (0-based numbering) in the group. The service uses them to form a group of agents serving the local region to achieve horizontal scalability. The current release supports only a single service for each local region, so in each JSON config file "agentGroupSize" is set to 1 and "agentId" is set to 0.
- region: The local region name. Users can give the local region a name different from its store name. This name will be used in DDL to create a multi-region table in a remote region. For example, if the local region name is "LND", the name "LND" must be used when the user creates a multi-region table in the remote region "FRA".
- store and helpers: The store name and helper hosts of the local NoSQL Database store. These helper hosts are the same ones that are used to connect to the store from a KV client.

Deploy the NoSQL Database Stores

Each region needs to deploy its store as usual; please see the Oracle NoSQL Database documentation for instructions on deploying a store. After a store is deployed, you can use the PING command to check its health:

```
bash-4.1$ java -jar $KVHOME/dist/lib/kvstore.jar ping -port 5000 -host localhost
Pinging components of store mystore based upon topology sequence #12
8 partitions and 1 storage nodes
Time: 2020-04-06 05:41:38 UTC   Version: 20.1.12
Shard Status: healthy:1 writable-degraded:0 read-only:0 offline:0 total:1
Admin Status: healthy
Zone [name=FRA id=zn1 type=PRIMARY allowArbiters=false masterAffinity=false]
  RN Status: online:1 read-only:0 offline:0
Storage Node [sn1] on fra-nosql-1:5000
  Zone: [name=FRA id=zn1 type=PRIMARY allowArbiters=false masterAffinity=false]
  Status: RUNNING   Ver: 20.1.12 2020-03-27 04:12:18 UTC  Build id: 7fce2c227666 Edition: Enterprise
  Admin [admin1]   Status: RUNNING,MASTER
  Rep Node [rg1-rn1]   Status: RUNNING,MASTER sequenceNumber:131 haPort:5011 available storage size:27 GB
```

Start the XRegion Service

You need to start the XRegion Service in each region using the xrstart command with the path to the JSON config file.
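Notice that the two JSON config files described above are mirror images of each other: each names itself in "region" and lists every other region, with that region's own helper hosts, under "regions". A short Python sketch (standard library only; the store name, path, and host names are taken from the example deployment, otherwise illustrative) makes the pattern explicit:

```python
import json

# Helper hosts for each region, as in the example deployment above.
REGION_HELPERS = {
    'LND': ['lnd-nosql-1.oracle.com:5000'],
    'FRA': ['fra-nosql-1.oracle.com:5000'],
}


def xregion_config(local, path='/home/aachanda/work/xrshome',
                   store='mystore'):
    """Build the XRegion Service config dict for region `local`:
    every other region becomes an entry under "regions" with that
    region's own helper hosts."""
    return {
        'path': path,
        'agentGroupSize': 1,  # a single agent per region in this release
        'agentId': 0,
        'region': local,
        'store': store,
        'helpers': ['localhost:5000'],
        'regions': [
            {'name': name, 'store': store, 'helpers': helpers}
            for name, helpers in sorted(REGION_HELPERS.items())
            if name != local
        ],
    }


if __name__ == '__main__':
    # Print the json.config contents for both regions.
    for region in sorted(REGION_HELPERS):
        print(json.dumps(xregion_config(region), indent=2))
```

Adding a third region later (Dublin, say) would then only require one new entry in REGION_HELPERS and regenerating each region's file.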
Here is an example to start the service in region LND, and we need to start the service at FRA as well.  If the local region is not up, the XRegion Service will start and poll the local region until it is up and running. bash-4.1$ nohup java -Xms256m -Xmx2048m -jar $KVHOME/dist/lib/kvstore.jar xrstart -config /home/aachanda/ xrshome/json.config > /home/aachanda/ /xrshome/nohup.out & [1] 5618 bash-4.1$ jps | grep xrstart 3:5618 /home/aachanda/KV/kvhome/dist/lib/kvstore.jar xrstart -config /home/aachanda/xrshome/json.config bash-4.1$ cat /home/aachanda/work/xrshome/nohup.out Cross-region agent (region=FRA, store=mystore, helpers=[localhost:5000]) starts up from config file=/home/aachanda/xrshome/json.config Define Region Before creating the first multi-region table in each participating region, you must set a local region name and define each remote region in the local region. After that, we can verify that the region is set up by execute 'show regions'. In FRA, we shall name Frankfurt as local and define London as remote  bash-4.1$ java -jar $KVHOME/dist/lib/kvstore.jar runadmin -host fra-nosql-1 -port 5000 kv-> connect store -name mystore Connected to mystore at fra-nosql-1:5000. kv-> execute 'SET LOCAL REGION FRA’ Statement completed successfully kv-> execute 'CREATE REGION LND’ Statement completed successfully kv-> execute 'show regions' regions FRA (local, active) LND (remote, active) kv-> exit  In LND, we shall name London as local and define Frankfurt as remote bash-4.1$ java -jar $KVHOME/dist/lib/kvstore.jar runadmin -host lnd-nosql-1 -port 5000 kv-> connect store -name mystore Connected to mystore at lnd-nosql-1:5000. 
kv-> execute 'SET LOCAL REGION LND’ Statement completed successfully kv-> execute 'CREATE REGION FRA’ Statement completed successfully kv-> execute 'show regions' regions LND (local, active) FRA (remote, active) kv-> exit Create a multi-region table in LND and FRA You must create a multi-region table on each store in the connected graph, and specify the list of regions that the table should span. Now we create the Users table in LND with FRA as a remote region. kv-> execute ‘CREATE TABLE Users(uid INTEGER, person JSON,PRIMARY KEY(uid))IN REGIONS FRA’ Statement completed successfully(uid))IN REGIONS FRA’  Next, we create the Users table in FRA with LND as the remote region.  kv-> execute ‘CREATE TABLE Users(uid INTEGER, person JSON,PRIMARY KEY(uid))IN REGIONS LND’ Statement completed successfully(uid))IN REGIONS FRA’ Verify the Table Is Ready Now we have the table Users created in two regions, FRA and LND. After the table is created, we can check the status of the table. The remote regions will show up in the "regions" section at the end of the output. For example, if we do "show tables" in FRA, the remote region of table "Users" is LND kv-> show table -name Users { "json_version" : 1, "type" : "table", "name" : "Users", "shardKey" : [ "uid" ], "primaryKey" : [ "uid" ], "fields" : [ { "name" : "uid", "type" : "INTEGER", "nullable" : false, "default" : null }, { "name" : "person", "type" : "JSON", "nullable" : true, "default" : null } ], "regions" : { "1" : "FRA" "2" : "LND" } } Perform the Data Operations Now we are ready to try some DML operations. Suppose we are in FRA. INSERT/UPDATE We first insert a few rows in FRA and update one of them. bash-4.1$ java -jar $KVHOME/dist/lib/sql.jar -helper-hosts fra-1-nosql:5000 -store mystore sql-> insert into users values(1,{"firstName":"jack","lastName":"ma","location":"FRA"});
{"NumRowsInserted":1}
1 row returned
sql-> insert into users values(2,{"firstName":"foo","lastName":"bar","location":null});
{"NumRowsInserted":1}
1 row returned
sql-> update users u set u.person.location = "FRA" where uid = 2;
{"NumRowsUpdated":1}
1 row returned

...and now we query the table at LND:

bash-4.1$ java -jar $KVHOME/dist/lib/sql.jar -helper-hosts lnd-1-nosql:5000 -store mystore
sql-> select * from users;
{"uid":1,"person":{"firstName":"jack","lastName":"ma","location":"FRA"}}
{"uid":2,"person":{"firstName":"foo","lastName":"bar","location":"FRA"}}
2 rows returned

Now we update a row (uid=1), for example, to indicate that this user has traveled to LND, by changing the location from "FRA" to "LND":

sql-> update users u set u.person.location = "LND" where uid = 1;
{"NumRowsUpdated":1}
1 row returned

We go back to FRA and query the table; note that for the row with uid=1, the user's location has changed from "FRA" to "LND":

bash-4.1$ java -jar $KVHOME/dist/lib/sql.jar -helper-hosts fra-1-nosql:5000 -store mystore
sql-> select * from users;
{"uid":1,"person":{"firstName":"jack","lastName":"ma","location":"LND"}}
{"uid":2,"person":{"firstName":"foo","lastName":"bar","location":"FRA"}}
2 rows returned

DELETE

Finally, let us delete a row in FRA. While we are still in FRA, we can go ahead and delete the row where uid=1:

bash-4.1$ java -jar $KVHOME/dist/lib/sql.jar -helper-hosts fra-1-nosql:5000 -store mystore
sql-> delete from users where uid=1;
{"numRowsDeleted":1}
1 row returned

We then query the table at LND and see that the row has been deleted there as well:

bash-4.1$ java -jar $KVHOME/dist/lib/sql.jar -helper-hosts lnd-1-nosql:5000 -store mystore
sql-> select * from users;
{"uid":2,"person":{"firstName":"foo","lastName":"bar","location":"FRA"}}
1 row returned

Developers can also read and write multi-region tables using the existing NoSQL Database APIs, just like any non-multi-region table. For API details, please see the Oracle NoSQL Database Table Documentation.

At any given point in time, users can add or remove regions as shown below.

Remove Region

For example, if you would like to remove Frankfurt from the example above, then:

In the London store, remove Frankfurt from its remote region list:
kv-> execute 'alter table Users drop regions FRA'

In the Frankfurt store, remove London from its remote region list:
kv-> execute 'alter table Users drop regions LND'

Depending on application requirements, the table at Frankfurt may have to be dropped first. If not, the table still exists at Frankfurt but will be out of sync with the London region.

Add Region

You can expand a multi-region table to new regions; another way to look at this is that you are adding a new region to an existing multi-region table. In the above example, if you would like to add another region, say Paris, to the multi-region table "Users", you first need to create that table in the Paris region, specifying the existing regions when creating the multi-region table in Paris. Second, the new region (Paris) needs to be added to Frankfurt and London with an ALTER TABLE DDL command:

kv-> execute 'alter table Users add regions PAR'

The Paris region first initializes the table by copying the rows from LND and FRA. The table at Paris is fully active and able to serve reads and writes during initialization, but some rows from the other regions might not show up until the table copy is done.
A couple of points to note about the current behavior of multi-region tables:

Asynchronous Propagation: With the above INSERT, UPDATE, and DELETE operations, changes are synchronized across regions asynchronously. That is, when you write a row in the FRA region, the write operation executes entirely in the FRA (local) region without waiting for the subscribing regions to update.

Heterogeneous Topologies: Stores can have different topologies. For example, a NoSQL store in Frankfurt may have three shards while a NoSQL store in London has only a single shard. Each region can also independently support elasticity operations, enabling the addition or removal of shards separately in each region.

Regional Security: You need to authenticate with each region and gain the proper access privileges to create, read, and write a multi-region table in that region. Modifying security constraints for a given table in one region does not affect other regions.

Automatic Failover: When a region fails, multi-region tables in other live regions continue to work as usual and are not affected by the failed region. When the failed region comes back, its multi-region tables are re-synced with the others.

This concludes the two-part series on the multi-region table feature. Happy exploring!

Acknowledgment: A special thanks to Junyi Xie, the lead developer behind this feature, and Dave Rubin, our engineering head, for proofreading this blog and suggesting changes.
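Because propagation is asynchronous, the same row can be written concurrently in two regions, and every region must converge on the same winner. A minimal Python sketch of one common strategy, last-writer-wins by timestamp (illustrative only; the names and tie-breaking rule here are assumptions, not Oracle NoSQL's internal implementation, which is handled by the XRegion service):

```python
# Illustrative last-writer-wins merge for concurrently updated rows.
# All names here are hypothetical; Oracle NoSQL's actual conflict
# resolution is performed internally by the XRegion service.

def resolve(local_row, remote_row):
    """Return the winning version of a row updated in two regions.

    Each row is a dict with 'value', 'ts' (commit timestamp), and
    'region'. The later timestamp wins; ties break on region name so
    every region deterministically picks the same winner.
    """
    if remote_row["ts"] != local_row["ts"]:
        return max(local_row, remote_row, key=lambda r: r["ts"])
    # Tie: fall back to a total order on region names.
    return max(local_row, remote_row, key=lambda r: r["region"])

fra = {"value": {"location": "FRA"}, "ts": 101, "region": "FRA"}
lnd = {"value": {"location": "LND"}, "ts": 102, "region": "LND"}
print(resolve(fra, lnd)["value"]["location"])  # the later write wins
```

The key property is commutativity: `resolve(a, b)` and `resolve(b, a)` return the same row, so FRA and LND converge no matter the order in which they receive each other's changes.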



Oracle NoSQL Database Multi-Region Table - Part1

We are excited to announce the release of Oracle NoSQL Database version 20.1. With this release, we are highlighting our multi-region table feature. In this blog, we'll look at a high-level introduction of the feature and different customer scenarios that can benefit from it.

Introduction

When it comes to measuring an application's performance, network latency and throughput are the most commonly used metrics. Network latency is the minimum (latent) time a data packet takes to travel across the network, typically expressed as the round-trip time (RTT), and throughput is the quantity of data transmitted across the network during a specified period. As Einstein outlined in his theory of relativity, the speed of light is the maximum speed at which conventional matter and information can travel. As such, the speed of light places a hard upper limit on the propagation time of any data packet. The speed of light is 299,792,458 meters per second, or 186,282 miles per second. However, that is the speed of light in a vacuum, and data packets travel through a medium such as a copper wire or a fiber-optic cable, which further slows down the signal. Let's take an example: the distance between New York and San Francisco is about 2578 miles (4148 km). In the most optimistic scenario, where the packet travels along the great-circle path (the shortest distance between two points on the globe) between the cities, the RTT in fiber (light being about 30% slower in normal optical fiber than in a vacuum) is about 42 ms. Between Europe (London) and the US east coast (New York), it is about 56 ms. But in reality, packets take a much longer route, passing through multiple routers before they reach their final destination.
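The back-of-the-envelope numbers above are easy to reproduce. A short Python sketch of the great-circle fiber RTT calculation, assuming, as the article does, that light in fiber travels roughly 30% slower than in a vacuum:

```python
# Best-case round-trip time over fiber along the great-circle path.
C_VACUUM_KM_S = 299_792.458          # speed of light in vacuum, km/s
C_FIBER_KM_S = C_VACUUM_KM_S * 0.7   # ~30% slower in optical fiber

def fiber_rtt_ms(distance_km: float) -> float:
    """Round-trip time in milliseconds for one out-and-back hop."""
    return 2 * distance_km / C_FIBER_KM_S * 1000

print(f"NY-SF best-case fiber RTT: {fiber_rtt_ms(4148):.0f} ms")
```

This yields roughly 40 ms for New York to San Francisco, close to the ~42 ms figure in the article (the exact value depends on the refractive index assumed for the fiber); real RTTs are roughly double these best-case numbers because packets do not follow the great circle.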
At each hop along the route, there are additional routing, processing, queueing, and transmission delays, as a result of which the actual RTT between the US East and West coasts is approximately 80 milliseconds, and between the US and Europe approximately 140 milliseconds - which may not be acceptable to applications such as eCommerce, multi-player online gaming, collaborative work, and mobile. Today's applications are getting more and more personalized (e.g., recommendation engines, personalized discounts and offers), interactive, and immersive. For such applications, fast response times - or conversely, low latency - are of utmost importance. Published studies show that every 100-millisecond delay in website load time can hurt conversion rates by 7%. Other studies from internet companies show that web performance translates directly to dollars and cents; for example, a 2,000 ms delay on Bing search pages decreased per-user revenue by 4.3%.

In the last several years, we have heard from many large customers that they want to deploy Oracle NoSQL Database in a geographically distributed fashion because of the global nature of their business. All of these applications demand very high availability and ALWAYS-ON (24x7x365) operation across multiple data centers separated by a significant speed-of-light distance (> 10 ms).

The two most popular methods adopted by network and system managers today to achieve this are active-passive and active-active configurations. An active-passive configuration typically consists of at least two data centers or regions. However, as the name "active-passive" implies, not all regions serve write requests. One region actively serves write requests, while the second region is passive: able to support reads and ready to support failover in case the active region fails.
In Oracle NoSQL Database, this can be done easily: the active data center will have a primary zone, and the secondary (passive) data center is set up as a secondary zone. In this case, ALL master nodes will always be in the active data center, and changes to data will be propagated asynchronously to the secondary zone. Read more about zones in our documentation.

An active-active configuration comprises at least two regions. In this configuration, the application is deployed across multiple regions, and all the regions actively serve read and write requests simultaneously. This architecture provides the following benefits:

Globally dispersed audience - When supporting a geographically distributed customer base, local processing is advantageous over remote processing. Local processing offers the lowest latencies and the best performance, and eliminates wide-area network access.

Read and write anywhere - The ability to read and write the same record in any region at any time. This provides better usage of data center resources than the active-passive approach, in that all of your data centers can serve writes as well as reads. Furthermore, it affords a high degree of disaster recovery flexibility, allowing you to route both read and write requests to any data center in the event of a regional disaster.

Seamless disaster recovery - An outage in one region does not stop the others. You can re-route your requests to a region that is up and running.

Enter the Multi-Region Table, or MR Table, which helps realize active-active configurations with Oracle NoSQL Database.

Multi-Region Table

A Multi-Region Table is a global logical table that eliminates the problematic and error-prone work of replicating data between regions, enabling a developer to focus on application business logic. All updates performed in one region are automatically propagated to all the other regions.
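In an active-active deployment, the application (or a routing tier in front of it) sends each request to the closest healthy region and re-routes on failure. A minimal Python sketch of that routing decision (all region names and latency figures are illustrative assumptions; real deployments typically rely on DNS-based geo-routing or a global load balancer):

```python
# Route a request to the lowest-latency region that is up.
REGIONS = {               # measured RTT from the client, in ms (illustrative)
    "FRA": {"rtt_ms": 12, "healthy": True},
    "LND": {"rtt_ms": 18, "healthy": True},
    "PHX": {"rtt_ms": 140, "healthy": True},
}

def pick_region(regions: dict) -> str:
    """Return the healthy region with the lowest RTT."""
    healthy = {name: r for name, r in regions.items() if r["healthy"]}
    if not healthy:
        raise RuntimeError("no healthy region available")
    return min(healthy, key=lambda name: healthy[name]["rtt_ms"])

print(pick_region(REGIONS))          # nearest region serves the request
REGIONS["FRA"]["healthy"] = False    # simulate a regional outage
print(pick_region(REGIONS))          # traffic fails over to the next region
```

Because every region accepts writes, failover here is just a routing change; no promotion step is needed as in the active-passive case.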
An important by-product of this propagation is disaster-proofing an application with data redundancy. Because the data is in multiple regions, you can guard against a region's failure: if your application gets a failure because a region is unavailable, it can redirect that request to a different region. This is a cost-effective disaster recovery solution.

Here's a high-level architecture diagram of a scenario where a company deploys three on-premise instances of Oracle NoSQL Database, one each at Phoenix (US West), Frankfurt (Germany), and Tokyo (Japan), with a table called ShoppingCart that stores the shopping cart information from customers shopping in our globally distributed store.

In such a setup involving multiple geographic locations, each independent Oracle NoSQL Database deployment is referred to as a Region. An architecture having two or more independent, geographically distributed NoSQL instances is known as a Multi-Region Architecture. The table ShoppingCart is a Multi-Region Table: a read-anywhere and write-anywhere table that lives in multiple regions.

The glue that connects one region to another is our Cross-Region Service (XRegion Service). The XRegion Service is a standalone service that runs in each of the regions and has knowledge of all the regions identified in an MR table. It subscribes to changes to MR tables from all the remote regions and persists those changes to the local region.

Figure 1. Depicting multi-region architecture and multi-region tables.

Use Cases

Let's look at some real-world use cases that can benefit from this feature and ensure that your company is ahead of the curve in today's globally competitive landscape.

The Traveling User/Package

There are two specific use cases that we have heard, one from a leading mobile company with a global presence and the other from a leading courier/logistics company.
On closer inspection, however, these turn out to be the same use case. In the case of the mobile company, the application team had a mandate from management to make their mobile eCommerce store's user experience seamless and performant no matter where in the world a user happens to access it. In the traveling-user problem, a user in California wakes up in the morning, shops online, and adds a mobile phone to the shopping cart. The west coast region serves this session, and the user experiences good read and write request latency from the local data center. This user then gets on a plane to Germany, lands 13 hours later, gets to the hotel, connects to the wifi network, goes to the mobile company's online store, and finds that there's another model of the phone that looks more appealing to him. So he decides to update the shopping cart with this new model and continues to browse the mobile eCommerce store. The EU regional data center, which is the most proximal data center, serves this session and provides the user with the same low-latency read and write experience he had in the US.

In the case of the courier/logistics company, the traveling entity is a package, and the package delivery status is updated in two different regions. Both cases benefit from the multi-region table because a user can read and write the same data record, with the same key, in two different data centers with local latencies.

The Telco Problem - Offering a Family Plan

This use case comes from a large Telco company. Let's say you have a family of five: three young adults and both parents. Like a lot of parents, you want to find the best-cost option for cell phone service for the family. After spending a few hours scouring the internet, you find the Unlimited Plus plan by GoFam. You get five lines for one fixed rate along with unlimited nationwide talk and text, and 50 GB of data.
This is perfect because your kids are off in college, one in Hawaii, one in Seattle, and one in Boston, while you are in Chicago. Everyone can use as much data as they want up to 50 GB, at which point you will be charged a premium for data overage. All data access activity occurs in different regions and different time zones across the US. To be able to view a consolidated report of data access at any time during the billing cycle, data for the account must be synchronized across all regions. Furthermore, for a good, low-latency user experience, application access to the account should be routed to the application server and NoSQL Database store closest to the user. The Telco can now offer its customers accurate consolidated summaries, as well as alerts when their overall data usage across regions approaches the threshold. All of the plumbing to make this happen is taken care of by Oracle NoSQL Database multi-region tables.

We'll follow up with another blog on how to set up the MR Table and look at the architecture in more detail. If you are interested in learning more about this feature, refer to our product documentation.



Oracle NoSQL Database Version 20.1 is now available

We are pleased to announce the release of Oracle NoSQL Database Version 20.1. This release contains the following new features:

Multi-Region Table - Oracle NoSQL Database 20.1 introduces tables that can span multiple geographic regions. This feature bi-directionally replicates a table's data and keeps it in sync across all regions. Multi-region tables have an active-active architecture that can serve users in their local regions, enable predictably low-latency access, and provide disaster recovery in the face of full-region failures. Oracle NoSQL Database can automatically resolve write conflicts, which allows the application developer to focus on the application's business logic. Key benefits to the customer include read/write anywhere, disaster recovery, and always-on availability. Read more about this feature in our docs here.

Untyped JSON Index - JSON indexing capabilities have been enhanced in Oracle NoSQL 20.1, providing more flexibility in the usage of schema-less JSON. Developers can create an index on one or more JSON fields without specifying the data type of the indexed fields. Untyped JSON indexes enable schema-less documents whose indexed values can contain any valid JSON atomic type (e.g., number, string, boolean). This is extremely powerful when the indexed values in your documents have different types.

SQL IN Operator - The IN operator is a logical operator that allows developers to test whether a specified value matches any value in a list. With this feature, SQL queries become more compact and are executed more efficiently. For example, you can now write:

SELECT * FROM Foo WHERE a IN (1,5,4)

instead of:

SELECT * FROM Foo WHERE a=1 or a=5 or a=4

Refer to our documentation for more details about the IN operator.
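The IN predicate is pure shorthand for the OR chain above; both queries select exactly the same rows. A quick client-side Python illustration of that equivalence over sample rows (the column name and values mirror the Foo example; this shows the semantics, not how the server evaluates it):

```python
# IN matches a value against a list -- equivalent to chained equality ORs.
rows = [{"a": 1}, {"a": 2}, {"a": 4}, {"a": 5}, {"a": 7}]

in_result = [r for r in rows if r["a"] in (1, 5, 4)]
or_result = [r for r in rows if r["a"] == 1 or r["a"] == 5 or r["a"] == 4]

print(in_result == or_result)  # True: same rows either way
print([r["a"] for r in in_result])
```

Beyond compactness, the server can evaluate an IN list more efficiently than a chain of OR predicates, which is the benefit the release notes call out.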
Some important product updates:

Inclusion of Oracle NoSQL Database Enterprise Edition (EE) with Oracle Database Enterprise Edition, and deprecation of Oracle NoSQL Database Basic Edition (BE). The Oracle Database Enterprise Edition (DB EE) license now includes Oracle NoSQL Database Enterprise Edition. Any customer who purchases or has purchased an Oracle Database Enterprise Edition license is entitled to download and use Oracle NoSQL Database Enterprise Edition. Support for Oracle NoSQL Database Enterprise Edition is included as part of the DB EE support contract if support was purchased. In addition, with this release we are also announcing the deprecation of the Oracle NoSQL Database Basic Edition (BE).

Oracle NoSQL Database is a multi-region database designed to provide a highly available, scalable, performant, flexible, and reliable data management solution to meet today's most demanding application workloads. You can choose the data model that best fits your data, and it can run anywhere, be it private, public, or hybrid cloud. If you want to take Oracle NoSQL Database 20.1 for a spin, the fastest and easiest way is to launch a cluster on Oracle Cloud Infrastructure, which is available in multiple regions around the world; you can run on other clouds too. Refer to this whitepaper for details on how to quickly set up an Oracle NoSQL Database cluster on Oracle Cloud Infrastructure (OCI).

To download, visit here. The product changelog is located here. The release notes are located here. Product documentation is located here. Join our mailing list - NoSQL-Database@oss.oracle.com. Visit our LinkedIn page.


Announcing Oracle NoSQL Database Cloud Service

We are pleased to announce that Oracle NoSQL Database Cloud Service,  the most versatile, elastic, and easy to use NoSQL Database on the market,  is now available on the Oracle Cloud as a pay-as-you-go, server-less, and fully managed service.   The new Oracle NoSQL service runs on the latest Oracle Cloud Infrastructure (OCI Gen 2), and delivers predictable single-digit millisecond response times at massive scale.  Its ability to seamlessly handle schema-less JSON and fixed schema data in addition to pure key-value data, provides users with flexible data modelling options and the ability to rapidly develop and deploy applications without a steep learning curve. Users can also easily scale their data sizes from 10s of KB to 100s of TB, and their operation throughputs from 10s per second to 100s of thousands per second, just by changing their provisioned capacity. The new Oracle NoSQL Database Cloud Service provides a wealth of features to meet the needs of today’s developers: Developer-Centric: NoSQL Database Cloud Service is designed for flexibility. The database supports Document (JSON), fixed-schema, and key-value data models, all with flexible ACID transaction guarantees.   Open: The service provides a SQL language interface, delivering innovative interoperability between document and fixed-schema data models. Users also have deployment options to run the same application in the cloud or on-premises, with no platform lock-in. Ease and Simplicity: With an available SDK and support for popular languages including Python, Go, Node.js and Java, Oracle NoSQL Cloud offers an easy to use application development solution.  A simple graphical user interface allows an application to be developed and deployed in minutes. Elastic Scalability: Oracle NoSQL Database Cloud Service scales to meet application throughput performance requirements while maintaining low and predictable response times. 
As workloads change with periodic business fluctuations, applications can increase or decrease their provisioned throughput to maintain a consistent user experience. Failsafe high availability of 99.995%, with instant automatic failure detection and failover, protects your applications against unplanned downtime, outages, and data loss.

Get Started in 3 Easy Steps

Combine the advantages of world-class database technology and the innovations of some of the best minds at Oracle.

1. Set up your Oracle NoSQL account
2. Explore the Oracle NoSQL Database service
3. Use your database

You start by either placing an order for Oracle NoSQL Database Cloud Service through Universal Credits or signing up for a 30-day free trial with $300 in credits through Oracle Cloud Free Trial. Next, sign in to the Oracle Cloud Infrastructure console. As the creator, you will be accorded the privileges of database administrator. Locate the service URL from the welcome email and then sign on to the Oracle NoSQL Database Cloud Service. In the console, select NoSQL Database. Use either the root compartment or create a compartment for your tables. Tables in Oracle NoSQL Database Cloud Service can be created in two modes: a) interactively through the console, or b) declaratively using a DDL statement. Either way allows you to create a table, specify primary keys, set the shard key, add columns to the table, and set up reserve capacity. If you are interested in writing an application, please refer to our getting-started tutorial, which walks through the steps to connect to Oracle NoSQL Database Cloud Service and perform basic table-level operations using a sample application.

Developers love us because:

Faster time to value - Use native data structures that you already love and use in your programming languages.

DevOps love us because:

They don't need to do anything - Oracle NoSQL Database Cloud Service is a server-less, fully managed service from Oracle.
Multiple models for multiple use cases - Combine use cases like Document, Spatial, Search, and Graph, or serve AI models against time-series data, in one place.

Do check Oracle Cloud Data Regions for the availability of Oracle NoSQL Database Cloud Service in a region near you.

Here is what a customer has to say: "Our clients have unique ways of describing their data. Oracle's schema-less NoSQL database let us save and retrieve data as an object. And moving each client's data to the cloud makes deployment fast and easy." - Jim Geldermann, Director of Technology, BizDoc

More Information

For more information, see the Oracle NoSQL Database Cloud Service Documentation and FAQ.


15 Minutes to Hello World

Oracle NoSQL Database Cloud Service – 15 Minutes to Hello World   Guest Post by: Dave Rubin, Senior Director, NoSQL and Embedded Database Development, Oracle The Oracle NoSQL Database Development team is thrilled to have the Oracle NoSQL Database Cloud Service fully integrated on the native Oracle Cloud Infrastructure.  In this post, I will walk through the steps to write a simple Hello World application using the recently integrated Oracle NoSQL Database Cloud Service.  You will also see that writing your first Hello World application can be accomplished in 15 minutes or less.  You may first be wondering, why use a NoSQL Database and what is the Oracle NoSQL Database Cloud Service?  These are great questions and ones that I will discuss before getting into the code. Why use a NoSQL Database? Modern application developers have many choices when faced with deciding when and how to persist a piece of data.   In recent years, NoSQL databases have become increasingly popular and are now seen as one of the necessary tools in the toolbox that every application developer must have at their disposal.  While tried and true relational databases are great at solving classic application problems like data normalization, strictly consistent data, and arbitrarily complex queries to access that data, NoSQL databases take a different approach.   Many of the more recent applications have been designed to personalize the user experience to the individual, ingest huge volumes of machine generated data, deliver blazingly fast, crisp user interface experiences, and deliver these experiences to large populations of concurrent users.  In addition, these applications must always be operational, with zero down-time, and with zero tolerance for failure.  The approach taken by Oracle NoSQL Database is to provide extreme availability and exceptionally predictable, single digit millisecond response times to simple queries at scale. 
The Oracle NoSQL Database Cloud Service is designed from the ground up for high availability, predictably fast responses, and resiliency to failure, all while operating at extreme scale. Largely, this is due to Oracle NoSQL Database's shared-nothing, replicated, horizontal scale-out architecture; by using the Oracle NoSQL Database Cloud Service, Oracle manages the scale-out, monitoring, tuning, and hardware/software maintenance, all while providing your application with predictable behavior.

Getting Started with the Oracle NoSQL Database Cloud Service

The Oracle NoSQL Database Cloud Service is a server-less, fully managed data store that delivers predictable single-digit millisecond response times and allows applications to scale on demand via provisioning API calls. There are four simple steps to getting started with the Oracle NoSQL Database Cloud Service:

1. Download the Oracle NoSQL Database driver
2. Connect to the Oracle NoSQL Database Cloud Service
3. Create a table with provisioned reads/sec, writes/sec, and GB storage
4. Write data to the table and read data from the table

Furthermore, you can use your free cloud credits to do all of this and not pay a single penny. Once you have created a cloud account, you can either navigate to the Oracle NoSQL Database table management console or, if you are like most developers, quickly move on to writing your first hello world code. For this release of the Oracle NoSQL Database Cloud Service, you can write your first hello world program using Python, Node.js, Java, or Go. Future releases of the service will support C++, C#, and other popular programming languages. I will use Java for the remainder of this blog. Java 8 and Java 11 are certified.

Download the Oracle NoSQL Database Driver

Since access to the Oracle NoSQL Cloud Service is via HTTP, you can run your application directly on your laptop and connect to the database service over the internet.
While I would never recommend using the internet as a network transport for performance sensitive applications, it works perfectly for our hello world example.  In fact, most likely you would want to deploy by running your application inside your own tenancy co-located in the same Oracle Cloud Infrastructure region as your NoSQL table and use the Oracle Cloud Infrastructure Service Gateway to connect to the NoSQL Cloud Service. I’ll navigate my browser to the following URL https://www.oracle.com/downloads/cloud/nosql-cloud-java-driver-downloads.html, where I am presented with the following options: I will choose the tar.gz file and download that one.  Once you unbundle this file, you will see a lib directory which contains the Oracle NoSQL Cloud Service driver along with a few other dependent libraries.  For this example, I am placing these files in /Users/drubin/NoSQL/oracle-nosql-java-sdk-5.2.11.  Notice the contents of the lib directory.  These are the only libraries that you will need to compile and run your hello world example. Connect Application to the Oracle NoSQL Database Cloud Service The Oracle NoSQL Database Cloud Service uses the Oracle Cloud Infrastructure native cloud Identity Access Manager (IAM) for authentication and authorization.  In the Javadoc (https://docs.oracle.com/en/cloud/paas/nosql-cloud/csnjv/index.html) for the driver, you will notice a new package entitled oracle.nosql.driver.iam, which currently contains a single class, SignatureProvider.    We will use this class to provide our authentication information to the cloud service.  Before you can authenticate your application with the cloud service, you must generate a key pair and upload your public key to the Oracle Cloud.  
The instructions here will guide you through generating an API signing key and uploading the public key to the Oracle Cloud: https://docs.cloud.oracle.com/en-us/iaas/Content/API/Concepts/apisigningkey.htm#How

The following information is needed to successfully authenticate with the cloud service:

Tenancy and user OCID - This page (https://docs.cloud.oracle.com/en-us/iaas/Content/API/Concepts/apisigningkey.htm#Other) will help you locate your tenancy and user OCIDs.

Private key file and fingerprint - The file holding your private key, generated when you created the API signing key, should be safely stored in a location known only to you. To find out how to retrieve the fingerprint of the private key, see https://docs.cloud.oracle.com/en-us/iaas/Content/API/Concepts/apisigningkey.htm#four

Passphrase - If you have provided a password for your key, then you must supply it to the SignatureProvider.

The code below is structured as follows:

The method getNoSQLConnection - This is where you will fill in your specific credentials. This method performs authentication with the Oracle NoSQL Cloud Service and returns a handle to the service.

The method createHelloWorldTable - This method creates a table with two columns: an ID column of type LONG and a content column of type JSON.

The method writeOneRecord - This method writes a single record to the hello world table.

The method readOneRecord - This method reads a single record from the hello world table and returns it as a JSON string.

The main() entry point - First gets a connection to the service, creates the table, writes a single record, reads that record back, and finally prints it to stdout.
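The Java source itself appeared as an image in the original post, so here is the same flow sketched in Python against a tiny in-memory stand-in for the service handle. Every name below (FakeNoSQLHandle and friends) is illustrative, not the SDK's API; with the real Java or Python SDK, the sequence of calls - connect, create table, put, get - is the same:

```python
import json

# In-memory stand-in for the NoSQL service handle. All names here are
# illustrative, not the Oracle NoSQL SDK's actual API.
class FakeNoSQLHandle:
    def __init__(self):
        self.tables = {}

    def create_table(self, name, read_units, write_units, storage_gb):
        # The real service provisions reads/sec, writes/sec, and GB storage.
        self.tables[name] = {}

    def put(self, table, row):
        self.tables[table][row["id"]] = row

    def get(self, table, key):
        return self.tables[table].get(key)

def get_nosql_connection():
    # Real code authenticates via IAM credentials (tenancy/user OCID,
    # key fingerprint, private key) supplied to a SignatureProvider.
    return FakeNoSQLHandle()

def create_hello_world_table(handle):
    # Two columns: an ID of type LONG and a JSON content column.
    handle.create_table("hello_world", read_units=10, write_units=10, storage_gb=1)

def write_one_record(handle):
    handle.put("hello_world", {"id": 1, "content": {"hello": "world"}})

def read_one_record(handle):
    return json.dumps(handle.get("hello_world", 1))

if __name__ == "__main__":
    handle = get_nosql_connection()
    create_hello_world_table(handle)
    write_one_record(handle)
    print(read_one_record(handle))  # the record, as a JSON string
```

The structure mirrors the method list above one-for-one, so translating it to the Java driver is a matter of swapping the fake handle for the real authenticated one.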
If you take the code above, place it in a file named HelloWorld.java, and fill in your credentials in the getNoSQLConnection() method, you can compile this file and run the resulting class file (NOTE the JSON output from running the program).

Using the Native Oracle Cloud Infrastructure Console

You can also explore your tables using the Oracle Cloud Infrastructure console. On the left-hand menu, simply navigate to the NoSQL Database menu item and click on the hello_world table link to see the table details page. Clicking on the Table rows control and then clicking the Run query button will display the record that you just inserted into the table. If you have questions regarding this exercise, please send an email to oraclenosql-info_ww@oracle.com with “Hello World” in the subject line, and someone will get back to you as soon as possible.


Scale out an Oracle NoSQL Database Cluster Like A Pro

This blog is part 3 of the series on using Oracle NoSQL Database on-premises. We started with KVLite, which can get you up and running with NoSQL DB in less than 5 minutes! Then we set up a single-shard Oracle NoSQL Database cluster, and in this edition, we will look at how to expand that single shard into three shards. A 3-shard Oracle NoSQL Database cluster with a replication factor of 3 is a common deployment topology that customers tend to use in production. In my last blog, we looked at the architecture of the Oracle NoSQL Database and the components that make up the cluster. If you haven't read that, it is useful to do so first, so you can understand all the different parts that make up an Oracle NoSQL Database cluster. In any system, achieving scalability by reducing communication between components is valuable - this is why Big Data/NoSQL systems are sharded. Each shard is independent of the others; in effect, you have lots of "little databases" running independently of each other, with very little communication between the shards. There is a trade-off to sharding; you may not be able to provide the same functionality that a single system does. For example, at the time of writing this blog, Oracle NoSQL Database doesn't support transactions across shards, because that would involve too much communication and would compromise the scalability of the system. Secondly, in a distributed system, the probability that SOME component SOMEWHERE in the system will fail grows dramatically as the number of components increases. For example, if the mean time between failures for a single disk is 1,000 hours and you have 1,000 disks in the system, some disk drive, somewhere, is going to fail every hour! So the system has to be highly available to compensate for the high probability of failure. The only way to achieve high availability is through redundancy (keep the information in two or more places).
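A quick back-of-the-envelope check of that failure math, sketched in Java with the numbers from the example above:

```java
public class Mtbf {
    // Expected failures per hour across n independent components,
    // each with the given mean time between failures (in hours).
    static double failuresPerHour(int components, double mtbfHours) {
        return components / mtbfHours;
    }

    public static void main(String[] args) {
        // 1000 disks, each with a 1000-hour MTBF:
        // on average, one disk failure somewhere every hour
        System.out.println(failuresPerHour(1000, 1000.0)); // 1.0
    }
}
```

This is a mean rate, not a guarantee, but it makes the scaling intuition concrete: the expected failure rate grows linearly with the number of components.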
To continue the earlier example, if all 1,000 disks had the same information, then it wouldn't matter if there was a disk failure every hour. Of course, the price of redundancy is reduced capacity, so a well-designed distributed system achieves the right balance between "decoupled components" for scalability and redundancy for availability. We have tested the Oracle NoSQL Database on systems as large as 300 nodes and demonstrated linear scalability of performance. Before we dive into the details of cluster expansion, let's look at the different options that exist today for expanding an Oracle NoSQL Database cluster. You can expand an Oracle NoSQL Database cluster in two ways:

By increasing the replication factor

By adding a shard

Let's look at each of these scenarios. When you expand an Oracle NoSQL kvstore by increasing the replication factor, you add more copies (replication nodes) of the data in each shard of the kvstore. Because each shard can contain only one master node, the additional replication nodes increase the number of nodes that handle reads, thereby improving read request throughput. It also increases the durability to withstand disk or machine failures. However, the increased durability and read throughput have costs: more hardware resources are needed to host and serve the additional copies of the data, and write performance is slower because each shard has more replica nodes to send updates to. For further information on how to identify your replication factor and to understand the implications of the factor value, see Replication Factor.

Figure 1: Increasing replication factor from RF=3 to RF=4

Let's look at the second scenario, where you add a shard to the Oracle NoSQL Database cluster. When you add a shard to the cluster, you add a master node and some replica nodes, depending on the existing replication factor.
Now, because one of the new replication nodes is a master node, the kvstore has an additional replication node to service database write operations, thereby improving database write throughput. Recall that database writes are always directed to the leader (master) node. When you add a new shard, replica nodes that service database read operations are also added, thus improving read throughput. So, when you add a shard, it increases both the read and write throughput.

Figure 2: Adding a new shard

With the above as background, we'll look at how to add shards to the cluster. We'll use the same topology that we used in the last blog, and we will add two more shards, so from a single shard we go to three shards, each with a replication factor of 3. The steps to add new shards remain pretty much the same as creating a single shard, i.e., bootstrap the new storage nodes, start the SNA, and configure the store. There's one additional step that is needed, which is to redistribute the topology so that the partitions are spread across the new shards. The process of redistribution is automatic; the system picks partitions from each existing shard and moves them to the new shards, thus reducing human errors. In the topology that we used in the previous blog, we used three storage nodes, each with capacity one. In the expansion, we will add two more storage nodes (sn4 and sn5), each with a capacity of three, i.e., the new storage nodes will each have three disks. Let's dive in. First, bootstrap both of the new storage nodes.
Notice that we set the capacity to 3 and we specify a directory for each storage disk.

Bootstrap storage node 4

java -jar $KVHOME/lib/kvstore.jar makebootconfig \
        -root $KVROOT \
        -store-security none \
        -capacity 3 \
        -harange 5010,5030 \
        -port 5000 \
        -memory_mb 200 \
        -host kvhost04 \
        -storagedir $KVDATA/u01 \
        -storagedirsize 100-MB \
        -storagedir $KVDATA/u02 \
        -storagedirsize 100-MB \
        -storagedir $KVDATA/u03 \
        -storagedirsize 100-MB

It is essential here that the number of -storagedir entries matches the capacity of the storage node. In the above example, we specified 100-MB for each storage directory size; however, different values for -storagedirsize are supported.

Bootstrap storage node 5

java -jar $KVHOME/lib/kvstore.jar makebootconfig \
        -root $KVROOT \
        -store-security none \
        -capacity 3 \
        -harange 5010,5030 \
        -port 5000 \
        -memory_mb 200 \
        -host kvhost05 \
        -storagedir $KVDATA/u01 \
        -storagedirsize 100-MB \
        -storagedir $KVDATA/u02 \
        -storagedirsize 100-MB \
        -storagedir $KVDATA/u03 \
        -storagedirsize 100-MB

Next, start the SNA on each of the above storage nodes:

java -jar $KVHOME/lib/kvstore.jar start -root $KVDATA/sn4/kvroot &
java -jar $KVHOME/lib/kvstore.jar start -root $KVDATA/sn5/kvroot &

Finally, we deploy and configure the storage nodes. Since we are expanding the cluster, we need to redistribute the data, and for that there's a command: topology redistribute. The redistribute command works only if new Storage Nodes are added to make creating new Replication Nodes possible for new shards. With new shards, the system distributes partitions across the new shards, resulting in more Replication Nodes to service write operations.
When we redistribute the store, the system automatically picks (without user intervention) partitions from each of the existing shards, thus ensuring there's no data hotspot as well as reducing human error. The following example demonstrates adding a set of Storage Nodes (kvhost04 and kvhost05) and redistributing the data to those nodes:

kv-> plan deploy-sn -znname Boston -host kvhost04 -port 5000 -wait
Executed plan 7, waiting for completion...
Plan 7 ended successfully
kv-> plan deploy-sn -znname Boston -host kvhost05 -port 5000 -wait
Executed plan 8, waiting for completion...
Plan 8 ended successfully
kv-> pool join -name BostonPool -sn sn4
Added Storage Node(s) [sn4] to pool BostonPool
kv-> pool join -name BostonPool -sn sn5
Added Storage Node(s) [sn5] to pool BostonPool
kv-> topology clone -current -name newTopo
Created newTopo
kv-> topology redistribute -name newTopo -pool BostonPool
Redistributed: newTopo
kv-> plan deploy-topology -name newTopo -wait
Executed plan 11, waiting for completion...
Plan 11 ended successfully

The redistribute command incorporates the new Storage Node capacity that you added to the BostonPool and creates new shards. The command also migrates partitions to the new shards. That's it! You have successfully expanded the cluster.
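As a rough illustration of what the redistribute step above automates (not the actual implementation), here is how an even spread of the store's partitions works out, assuming the store was created with 120 partitions as in the earlier post in this series:

```java
public class Redistribute {
    // Spread a fixed number of partitions as evenly as possible across
    // shards; mimics the effect of "topology redistribute".
    static int[] partitionsPerShard(int partitions, int shards) {
        int[] counts = new int[shards];
        int base = partitions / shards;      // every shard gets at least this many
        int remainder = partitions % shards; // the first few shards get one extra
        for (int i = 0; i < shards; i++) {
            counts[i] = base + (i < remainder ? 1 : 0);
        }
        return counts;
    }

    public static void main(String[] args) {
        // Going from 1 shard to 3 with 120 partitions: 40 partitions per shard
        System.out.println(java.util.Arrays.toString(partitionsPerShard(120, 3)));
        // prints [40, 40, 40]
    }
}
```

The real system also accounts for storage node capacity when placing the new replication nodes; this sketch only shows the partition arithmetic.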



Setup Single Sharded Oracle NoSQL Database: Quickly and Easily

This blog is part 2 of the series on using Oracle NoSQL Database on-premises. Last time we saw how getting started with KVLite can be accomplished in under 5 minutes! In this edition, we'll look at how to set up a single-shard Oracle NoSQL Database cluster with 3 replicas.

Oracle NoSQL Cluster Architecture

Before we dive into setting up an Oracle NoSQL Database cluster, it is essential to have a look at the high-level architecture and the various components that form its underpinnings. Oracle NoSQL Database reads and writes data by performing network requests against an Oracle NoSQL Database data store (“store” for short), referred to as the KVStore. The KVStore is a collection of one or more Storage Nodes, each of which hosts one or more Replication Nodes. Data is automatically spread across these Replication Nodes by internal KVStore mechanisms. The store can contain multiple Storage Nodes. A Storage Node is a physical (or virtual) machine with its own local storage. The machine is intended to be commodity hardware. While not a requirement, each Storage Node is typically identical to all other Storage Nodes within the store. Every Storage Node hosts one or more Replication Nodes as determined by its capacity. A Storage Node's capacity serves as a rough measure of the hardware resources associated with it. We recommend configuring each Storage Node with a capacity equal to the number of available disks on the machine. Such a configuration permits the placement of each Replication Node on its own disk, ensuring that Replication Nodes on the Storage Node are not competing for I/O resources. Stores can contain Storage Nodes with different capacities, and Oracle NoSQL Database ensures that a Storage Node is assigned a load proportional to its capacity. The store's Replication Nodes are organized into shards. You can think of a shard as a placeholder for a subset of the sharded data.
A single shard contains multiple Replication Nodes. Each shard has a master node, elected via a Paxos-based protocol, as shown in the figure below. The master node performs all database write activities. Each shard also contains one or more read-only replicas. The master node copies all new write activity data to the replicas, which are then used to service read-only operations. While there can be only one master node per shard at any given time, any of the other shard members can become a master node; an exception to this is nodes in a secondary zone. When an application asks to retrieve the record for a given key, the Oracle NoSQL Database driver will hash a portion of the key (denoted as “the shard key” during table creation) to identify the shard that houses the data. Once the shard is identified, the Oracle NoSQL Database driver can choose to read the data from the most optimal replica in the shard, depending on the requested consistency level. With respect to write operations, the Oracle NoSQL Database driver will always route the write requests to the leader (Elected Leader in Figure 1) node of the shard. Hence, from the perspective of workload scaling, the application can generally think of this architecture as being scaled by adding shards. Oracle NoSQL Database supports online elastic expansion of the cluster by adding shards.

Figure 1: High-level architecture of Oracle NoSQL Database

Oracle NoSQL Database Cluster Deployment

At a high level, deploying a NoSQL database cluster requires the following steps:

Bootstrap Storage Node (SN): The physical machine or the storage node that we are going to use needs to be bootstrapped first. This process creates a config file in the KVRoot directory that holds the cluster deployment info.
Start Storage Node Agent (SNA): After bootstrapping, we start the Storage Node Agent, or SNA, which works like a listener that enables communication between the admin node and the other replication nodes (RN). We run bootstrapping and start SNAs on all the storage nodes.

Configure the NoSQL Database store:

Name your store

Create a Data Center (Zone)

Deploy the storage nodes and start the admin nodes

Create the storage node pool and assign the storage nodes

Create and deploy the replication nodes

We are now set to start the deployment. It's important that users read the installation prerequisites listed in the documentation. Download the kv-ee-<version>.tar.gz (.zip) server software from our OTN page and unpack it into a directory. Refer to our doc about Installation and Prerequisites. Set the following environment variables on all the storage nodes: KVHOME points to the directory where you unzipped the latest Oracle NoSQL Database binaries, KVROOT to where you would like metadata files to be stored, and KVDATA to the storage directories.

KVHOME=/home/oracle/nosqldb/kv-<version>
KVROOT=/home/oracle/nosqldb/root
KVDATA=/u02/nosqldb/data
mkdir ${KVROOT}
mkdir ${KVDATA}

Bootstrap the Storage Nodes: In this step, we bootstrap the storage nodes that are deployed as part of the Oracle NoSQL Database cluster. We ship a utility called makebootconfig that does the bootstrapping job for us. I suggest reading the utility documentation to understand the various options that are available. Remember, we are setting up a single-shard NoSQL DB cluster with a Replication Factor of 3 spread across three different storage nodes (kvhost01, kvhost02, kvhost03) with a capacity of 1 each. We are setting up an unsecured NoSQL cluster, but you can set up a secured cluster by configuring the security-related configuration parameters in makebootconfig. We bootstrap the storage nodes one by one.
Bootstrap storage node 1

java -jar $KVHOME/lib/kvstore.jar makebootconfig \
        -root $KVROOT \
        -store-security none \
        -capacity 1 \
        -harange 5010,5030 \
        -port 5000 \
        -memory_mb 200 \
        -host kvhost01 \
        -storagedir $KVDATA/u01 \
        -storagedirsize 50-MB

Bootstrap storage node 2

java -jar $KVHOME/lib/kvstore.jar makebootconfig \
        -root $KVROOT \
        -store-security none \
        -capacity 1 \
        -harange 6010,6030 \
        -port 5000 \
        -memory_mb 200 \
        -host kvhost02 \
        -storagedir $KVDATA/u02 \
        -storagedirsize 50-MB

Bootstrap storage node 3

java -jar $KVHOME/lib/kvstore.jar makebootconfig \
        -root $KVROOT \
        -store-security none \
        -capacity 1 \
        -harange 7010,7030 \
        -port 5000 \
        -memory_mb 200 \
        -host kvhost03 \
        -storagedir $KVDATA/u03 \
        -storagedirsize 50-MB

Start Storage Node Agent (SNA)

Once a Storage Node is bootstrapped, the next step is to start the Oracle NoSQL Database Storage Node Agent (SNA). Notice that we are passing the KVROOT of the SN to the start command.

nohup java -Xmx64m -Xms64m -jar $KVHOME/lib/kvstore.jar start -root $KVROOT &

This step needs to be done on each of the storage nodes. At this stage, you can use the ping command to verify that the Oracle NoSQL Database client library can contact the SNA. If the Storage Nodes do not start up, you can look through the adminboot and snaboot logs in the KVROOT directory to identify the problem.

Configure the Oracle NoSQL Database store

Assuming the Storage Nodes all started successfully, you can configure the KVStore. To do this, you use the CLI command interface.
Start runadmin:

java -Xmx64m -Xms64m -jar $KVHOME/lib/kvstore.jar runadmin -port 5000 -host kvhost01

Follow the steps below.

Name your KVStore. The name of your store is essentially used to form a path to records kept in the store. Here we are calling it mystore:

kv-> configure -name mystore
Store configured: mystore

Create a Zone. Once you have started the command line interface and configured a store name, you can create at least one zone (or data center). In this example we are creating a zone called "Boston" with a replication factor of 3 using the plan command:

kv-> plan deploy-zone -name "Boston" -rf 3 -wait

Deploy all the storage nodes to the zone and start the admin nodes. Every Oracle NoSQL Database store has an administration database. You must deploy the Storage Nodes (SNs), in this case "sn1", "sn2", and "sn3", starting with the one the command line interface is currently connected to, and deploy an Administration process on each node. Note that all three hosts were bootstrapped with registry port 5000:

## Deploy First Storage Node ##
plan deploy-sn -znname "Boston" -port 5000 -wait -host kvhost01
plan deploy-admin -sn sn1 -wait
## Deploy Second Storage Node ##
plan deploy-sn -znname "Boston" -port 5000 -wait -host kvhost02
plan deploy-admin -sn sn2 -wait
## Deploy Third Storage Node ##
plan deploy-sn -znname "Boston" -port 5000 -wait -host kvhost03
plan deploy-admin -sn sn3 -wait

Create the storage node pool and add storage nodes to the pool. Once the storage nodes are deployed, we create a storage node pool. This pool is used to contain all the storage nodes in the store. In this example we call this pool "BostonPool" and then join the storage nodes to the pool:

kv-> pool create -name BostonPool
kv-> pool join -name BostonPool -sn sn1
kv-> pool join -name BostonPool -sn sn2
kv-> pool join -name BostonPool -sn sn3

Create and deploy the Replication Nodes. The final step in your configuration process is to create Replication Nodes on every node using the topology create and plan deploy-topology commands.
topology create -name 3x3 -pool BostonPool -partitions 120
topology preview -name 3x3
plan deploy-topology -name 3x3 -wait

As a final sanity check, you can confirm that all of the plans succeeded using the show plans command.

kv-> show plans
1 Deploy Zone (1)          SUCCEEDED
2 Deploy Storage Node (2)  SUCCEEDED
3 Deploy Admin Service (3) SUCCEEDED
4 Deploy Storage Node (4)  SUCCEEDED
5 Deploy Storage Node (5)  SUCCEEDED
6 Deploy-RepNodes          SUCCEEDED

Having done that, you can exit the command line interface, and you are done.
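Earlier we noted that the driver hashes the shard key to pick the shard that houses a record. The toy sketch below illustrates that routing idea only; the real driver uses its own hash function and a partition map maintained by the store, not String.hashCode, and the even partition-to-shard split is an assumption for illustration.

```java
public class KeyRouting {
    // Hash the shard key to one of the store's partitions.
    static int partitionFor(String shardKey, int partitions) {
        return Math.floorMod(shardKey.hashCode(), partitions);
    }

    // Map the partition to a shard, assuming partitions are divided
    // evenly across shards (120 partitions over 3 shards = 40 each).
    static int shardFor(String shardKey, int partitions, int shards) {
        return partitionFor(shardKey, partitions) * shards / partitions;
    }

    public static void main(String[] args) {
        // The same shard key always routes to the same shard,
        // so all rows sharing a shard key can be read and written together.
        System.out.println(shardFor("user:1234", 120, 3));
    }
}
```

The key property to take away is determinism: any client, anywhere, computes the same partition for the same shard key, so no central lookup service sits on the data path.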



How to be productive with Oracle NoSQL Database in less than 5 mins!

We often get requests from our users and customers on how they can get started with using Oracle NoSQL Database on-premises. I am starting this series of blogs where we look into the various deployment options that are available with the Oracle NoSQL Database. In this first edition, we'll look at the simplest and easiest option to start with, called KVLite.

KVLite

Are you looking to be productive with Oracle NoSQL Database in less than 5 minutes? Are you looking to try the NoSQL Database APIs and quickly test your application business logic without setting up a full-blown database server? Are you looking to evaluate Oracle NoSQL DB with a small dataset without allocating the resources that a production server requires? If the answer to any of the above questions is yes, then you need KVLite. KVLite is a simplified, lightweight version of the Oracle NoSQL Database. It provides a single-node store that is not replicated. It runs in a single process without requiring any administrative interface. Users can configure, start, and stop KVLite using a command-line interface. KVLite is intended for use by application developers who need to unit test their Oracle NoSQL Database application. It is not intended for production deployment or performance measurements. KVLite is installed when you install Oracle NoSQL Database. It is available in the kvstore.jar file in the lib directory of your Oracle NoSQL Database distribution. KVLite can be run within a Docker container, on a VM, or on bare metal machines. At the time of writing this blog, the current version of NoSQL DB is 19.3, and the following are the prerequisites for running KVLite:

Java 8 or greater

A minimum of 5GB disk space

Starting KVLite

You start KVLite by using the kvlite utility, which can be found in KVHOME/lib/kvstore.jar. If you use this utility without any command line options, then KVLite will run with the following default values:

The store name is kvstore.
The hostname is the local machine.

The registry port is 5000.

The directory where Oracle NoSQL Database data is placed (known as KVROOT) is ./kvroot.

The administration process is turned on using port 5001.

Let's dive in. Open any terminal and type the following command:

$ java -Xmx64m -Xms64m -jar lib/kvstore.jar kvlite

Expected output:

Generated password for user admin: password
User login file: ./kvroot/security/user.security
Created new kvlite store with args:
-root ./kvroot -store kvstore -host localhost -port 5000 -secure-config enable

In a second shell, ping the KVLite instance to verify that KVLite started:

$ java -Xmx64m -Xms64m -jar lib/kvstore.jar ping -host localhost -port 5000 -security kvroot/security/user.security

Expected output:

Pinging components of store kvstore based upon topology sequence #14
10 partitions and 1 storage nodes
Time: 2017-05-02 09:34:43 UTC   Version: 12.2.4.4.6
Shard Status: healthy:1 writable-degraded:0 read-only:0 offline:0
Admin Status: healthy
Zone [name=KVLite id=zn1 type=PRIMARY allowArbiters=false]   RN Status: online:1 offline:0
Storage Node [sn1] on localhost:5000
   Zone: [name=KVLite id=zn1 type=PRIMARY allowArbiters=false]
   Status: RUNNING   Ver: 12cR2.4.4.6 2017-04-13 06:54:25 UTC  Build id: d6a9b947763f
   Admin [admin1]   Status: RUNNING,MASTER
   Rep Node [rg1-rn1] Status: RUNNING,MASTER sequenceNumber:204 haPort:5006

The status indicates that KVLite is up and running. Here we started KVLite in secure mode; if we want to start in unsecure mode, execute the kvstore.jar file with the -secure-config disable flag. That's it, you are done!


Multi-Region Data Replication

In today’s competitive global market, businesses are facing the challenge of delivering faster and better services to their customers. Enterprises need to develop new innovative applications and make them available globally in the shortest time possible. The complexity of deploying, operating, and maintaining these applications in multiple regions to serve the global customers can be daunting. Users expect to complete their online activities with smooth and fast user experience at any time and from anywhere. To meet such expectations, enterprises need to host applications and data at distributed regions closest to the users. Data needs to be actively replicated across these regions with predictable low latency. Oracle NoSQL Database is designed for today’s most demanding workloads with high volume, high velocity, and high variety. It meets the requirements of applications for  predictable low latency, multiple data models, rapid development, elastic capacity scaling, ease of operations, and management. Data and applications can reside anywhere and interoperate with a single application interface. The built-in multi-region data replication feature enables application data written in any region to be replicated transparently across multiple regions right away. For example, a mobile application company would like to serve their customers in three different regions: Frankfurt, London, and Dublin. When a user updates his user profile via a smartphone in Frankfurt, the new data is replicated in London and Dublin immediately. Similar replication will be performed when an update happens in Dublin or London.  If updates happen at all regions, Oracle NoSQL multi-region architecture handles the complexity of conflicts and replicates the data accordingly. The mobile company can easily set up the Oracle NoSQL multi-region data replication to serve their customers in different regions. The diagram below illustrates the underlying multi-region architecture. 
To enable multi-region data replication, NoSQL multi-region tables are created in the Frankfurt, London, and Dublin clusters. These tables are read-anywhere and write-anywhere tables hosted in these regions. One or more Cross-Region Services can be deployed in each region; each enables data streaming from remote regions to the local region. Each cluster in a region is fully autonomous and may have a different topology, capacity, elastic expansion policy, operations, and maintenance.

Steps to Deploy Multi-Region Data Replication in a Region

Step 1: Deploy the cross-region service in each region. Create a directory for the Cross-Region (XRegion) Service in the local NoSQL cluster, create a JSON config file in the directory with the required parameters (for example, a config file to stream data to London from Frankfurt and Dublin), and then start the XRegion Service in each region.

Step 2: Install Oracle NoSQL Database in each region using the standard steps and create the regions.

Step 3: Create multi-region tables in each region. Use CREATE TABLE to create a multi-region table, for example with a SQL command run in London.

Step 4: Perform read and write operations. After creating the multi-region table in each region, you are ready to perform read or write operations on the table using the existing data access APIs or DML statements. Try writing a few test records in the London region and verify those records are automatically replicated in the Dublin and Frankfurt regions.
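The post's config example did not survive extraction. As a hedged sketch only (field names follow the XRegion json.config format as we recall it; store names, hostnames, and region labels are placeholders, so verify the exact schema against the documentation for your release), a London agent streaming from Frankfurt and Dublin might look like:

```json
{
  "path": "/home/oracle/nosqldb/xregion",
  "agentGroupSize": 1,
  "agentId": 0,
  "region": "LON",
  "store": "londonstore",
  "helpers": [ "lonhost1:5000" ],
  "regions": [
    { "name": "FRA", "store": "frankfurtstore", "helpers": [ "frahost1:5000" ] },
    { "name": "DUB", "store": "dublinstore", "helpers": [ "dubhost1:5000" ] }
  ]
}
```

The service would then be started with something like java -jar $KVHOME/lib/kvstore.jar xrstart -config <path to json.config> (command name per our recollection of the documentation), and a multi-region table created with DDL along the lines of CREATE TABLE users (id INTEGER, profile JSON, PRIMARY KEY(id)) IN REGIONS FRA, DUB; again, treat both as illustrative rather than exact.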


Hardware Strategies for Horizontal and Vertical Scalability for Oracle NoSQL Database

Since almost all NoSQL databases have built-in high availability, it is easy to change the hardware configuration, which involves shutting down, replacing, or adding a server to a cluster, while maintaining continuous operations. Seamless, online scalability is one of the key attributes of NoSQL systems, which are sharded, shared-nothing databases. Users can start with a small cluster (a few servers) and then grow the cluster (by adding more hardware) as throughput and/or storage requirements grow. Similarly, users can also contract the cluster as throughput and/or storage requirements shrink due to decreased workloads. Hence, most NoSQL databases are considered to have horizontal scalability. Figure 1 shows horizontal scalability during different operational workloads.

Figure 1 - Horizontal Scaling for various workloads

Scalability is appealing to businesses for a variety of reasons. In many cases, the peak workload is not known in advance of the initial setup of a NoSQL database cluster. Scalability enables the organization to deploy enough capacity to meet the expected workload without unnecessary over-expenditure on hardware initially. As the workload increases, more hardware can be added to meet the peak demand. As the workload decreases, the hardware can be reclaimed and re-purposed for other applications. The cluster can grow and shrink as the workload changes with zero downtime. Most NoSQL systems claim linear or near-linear scalability as the cluster size grows. Linear scalability simply means that when more hardware is added to the cluster, it is expected to deliver performance that is roughly proportional to its capacity. In other words, if the number of servers in the NoSQL cluster is doubled, it is expected to handle roughly double the workload (throughput and storage) compared to the original configuration, provided the additional servers have the same processing and storage capabilities.
Since almost all NoSQL databases have built-in high availability, it is easy to replace hardware incrementally, which involves shutting down a machine, removing it from the cluster, bringing in a new server, and adding it to the cluster. Everything continues to work without downtime or interruption of the application. From a capital expenditure (CAPEX) perspective, it can be more cost-effective to purchase newer and "bigger" servers because hardware costs decline over time. Newer hardware has more processing and storage capability relative to older generations of hardware at the same price. Replacing older hardware with newer, more powerful and capable hardware gives a NoSQL database vertical scalability. From an operational cost (OPEX) perspective, a cluster with fewer servers is preferable to a cluster with a large number of servers, assuming both options deliver the same performance and storage capabilities. Hardware needs to be actively managed, needs space and cooling, and consumes power. Smaller clusters can improve operational efficiency. Over time, the combined effect of growing the number of servers in a cluster and/or replacing old servers with newer hardware will result in a cluster which has a mix of servers of varying processing and storage capabilities and capacity. It is common to find such "mixed" clusters in production NoSQL applications that have been upgraded over a period of time. In the context of "mixed" cluster scenarios, it is important to choose a NoSQL solution that can leverage hardware with varying processing and storage capabilities in order to address the application requirements efficiently and effectively. As mentioned earlier, most NoSQL products claim horizontal scalability. But does an administrator really want to maintain a large cluster for NoSQL product X if a smaller cluster running a different NoSQL product will do the job, or exceed the requirements?
The example in Figure 2 shows that over time, and with increased storage and processing power, a 12-machine system could be replaced with just a 3-machine system.

Figure 2 - Replacing smaller servers with larger, more capable ones.

When evaluating a NoSQL product, it is wise to consider its vertical as well as horizontal scalability characteristics. Oracle NoSQL Database is designed for both horizontal and vertical scalability in order to deliver the most cost-effective NoSQL solution for modern applications. For every server deployed in an Oracle NoSQL Database cluster, the administrator specifies the capacity (e.g., storage) of the server at initial setup. Oracle NoSQL Database uses this information to automatically assign workloads such that each server process manages a subset of data roughly proportional to the storage capacity of that server. A smaller server is assigned less workload than a larger server. Similarly, each process uses RAM based on the amount of physical memory available on the machine. Figure 3 shows an Oracle NoSQL Database cluster with mixed hardware capacities.

Figure 3 - A mixed cluster

Oracle NoSQL Database distributes the workload across the available hardware based on the capabilities of each server. More importantly, during cluster creation and expansion/contraction operations, the system ensures that the replicas of every shard (remember, Oracle NoSQL Database is a sharded, HA database) are distributed across different physical servers. This rule ensures that the failure of any single server never results in the loss of a complete shard. Also, in steady-state operation, servers might be shut down and restarted at various points in time. Oracle NoSQL Database automatically monitors the health and workload of each server in the cluster and rebalances the workload across servers in order to avoid "hotspots".
All of this happens automatically, without any administrative intervention, ensuring reliable and predictable performance and high availability over mixed clusters and cluster transitions. Oracle NoSQL Database leverages hardware effectively in order to meet the performance and availability requirements of the most demanding and innovative applications. When deciding which NoSQL solution to use, vertical scalability is just as important as horizontal scalability in ensuring the lowest total cost of ownership for the overall solution.
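The capacity-proportional workload assignment described above comes down to simple proportions. A hypothetical sketch in Python (server names, capacity units, and the partition count are made up; this is not how the store computes its layout internally):

```python
# Each server declares a capacity at setup time; the store assigns each
# server a share of the data roughly proportional to that capacity.
servers = {"sn1": 1, "sn2": 1, "sn3": 3}  # hypothetical capacity units

total = sum(servers.values())
shares = {name: cap / total for name, cap in servers.items()}

# With 1000 partitions, the larger server manages about 3x the data
# of each smaller one.
partitions = {name: round(1000 * share) for name, share in shares.items()}
print(partitions)  # {'sn1': 200, 'sn2': 200, 'sn3': 600}
```

The same proportional reasoning explains why a mixed cluster stays balanced: each server's load tracks its declared capacity rather than a flat per-server split.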


NoSQL Database

Power your Geo Enabled application with Oracle NoSQL Database

Introduction

GeoJSON is becoming a very popular data format among many GIS technologies and services; it is simple, lightweight, and straightforward. According to the GeoJSON Specification (RFC 7946): "GeoJSON is a format for encoding a variety of geographic data structures [...]. A GeoJSON object may represent a region of space (a Geometry), a spatially bounded entity (a Feature), or a list of Features (a FeatureCollection). GeoJSON supports the following geometry types: Point, LineString, Polygon, MultiPoint, MultiLineString, MultiPolygon, and GeometryCollection."

Oracle NoSQL Database supports all of the above geometry types and allows you to query, create indexes, and use geo-defined functions to extract data.

GeoJSON Types

GeoJSON is represented as JSON and defines the following geometrical entities:

Point - For a Point, "coordinates" is a single position.
{ "type" : "point", "coordinates" : [ 23.549, 35.2908 ] }

LineString - A LineString is one or more connected lines; the end-point of one line is the start-point of the next line. The "coordinates" member is an array of two or more positions.
{ "type" : "LineString", "coordinates" : [ [121.9447, 37.2975], [121.9500, 37.3171], [121.9892, 37.3182], [122.1554, 37.3882], [122.2899, 37.4589], [122.4273, 37.6032], [122.4304, 37.6267], [122.3975, 37.6144] ] }

Polygon - A Polygon defines a surface area by specifying its outer perimeter and the perimeters of any potential holes inside the area.
{ "type" : "polygon", "coordinates" : [ [ [23.48, 35.16], [24.30, 35.16], [24.30, 35.50], [24.16, 35.61], [23.74, 35.70], [23.56, 35.60], [23.48, 35.16] ] ] }

MultiPoint - For a MultiPoint, the "coordinates" field is an array of two or more positions.
{ "type" : "MultiPoint", "coordinates" : [ [121.9447, 37.2975], [121.9500, 37.3171], [122.3975, 37.6144] ] }

MultiLineString - For a MultiLineString, the "coordinates" member is an array of LineString coordinate arrays.
{ "type": "MultiLineString", "coordinates": [ [ [100.0, 0.0], [101.0, 1.0] ], [ [102.0, 2.0], [103.0, 3.0] ] ] }

MultiPolygon - For a MultiPolygon, the "coordinates" member is an array of Polygon coordinate arrays.
{ "type": "MultiPolygon", "coordinates": [ [ [ [102.0, 2.0], [103.0, 2.0], [103.0, 3.0], [102.0, 3.0], [102.0, 2.0] ] ], [ [ [100.0, 0.0], [101.0, 0.0], [101.0, 1.0], [100.0, 1.0], [100.0, 0.0] ] ] ] }

GeometryCollection - Instead of a "coordinates" field, a GeometryCollection has a "geometries" field whose value is an array of geometry objects.
{ "type": "GeometryCollection", "geometries": [ { "type": "Point", "coordinates": [100.0, 0.0] }, { "type": "LineString", "coordinates": [ [101.0, 0.0], [102.0, 1.0] ] } ] }

Querying GeoJSON Data

Oracle NoSQL Database provides four functions to query for geo data that has a relationship with a given geometry.

geo_intersect - geometries that intersect with a GeoJSON geometry

select t.poi as park
from PointsOfInterest t
where t.poi.kind = "nature park" and
      geo_intersect(t.poi.location,
                    { "type" : "polygon",
                      "coordinates" : [[ [121.94, 36.28], [117.52, 37.38], [119.99, 39.00], [120.00, 41.97], [124.21, 41.97], [124.39, 40.42], [121.94, 36.28] ]] })

geo_inside - geometries within a bounding GeoJSON geometry
geo_within_distance - geospatial objects within a specified distance of a point on a sphere

geo_near - geospatial objects in proximity to a point

select t.poi as gas_station,
       geo_distance(t.poi.location,
                    { "type" : "LineString",
                      "coordinates" : [ [121.9447, 37.2975], [121.9500, 37.3171], [121.9892, 37.3182], [122.1554, 37.3882], [122.2899, 37.4589], [122.4273, 37.6032], [122.4304, 37.6267], [122.3975, 37.6144] ] }) as distance
from PointsOfInterest t
where t.poi.kind = "gas station" and
      geo_near(t.poi.location,
               { "type" : "LineString",
                 "coordinates" : [ [121.9447, 37.2975], [121.9500, 37.3171], [121.9892, 37.3182], [122.1554, 37.3882], [122.2899, 37.4589], [122.4273, 37.6032], [122.4304, 37.6267], [122.3975, 37.6144] ] },
               1609)

Indexes on GeoJSON Data

Indexing GeoJSON data is similar to indexing JSON data in Oracle NoSQL Database. When defining a GeoJSON index, the "geometry" or "point" keyword must be used after the index path. For optimal performance, "point" should be used when rows in the table are expected to have single-point geometries. Indexing of geometries is based on geohashing, which encodes a longitude/latitude pair as a string. A more detailed explanation of indexes and geohashing can be found here.

create index idx_test1 on testTable(coord.point as point);

Using Geo Queries

Hyperlocal marketing is the process of targeting prospective customers in a highly specific, geographically restricted area, sometimes just a few blocks or streets, often with the intention of reaching people conducting "near me" searches on their mobile device. For example, suppose a regular customer is in the vicinity of your book store. As a store that maintains customer profiles and purchase details, you have created ad campaigns for different customer profiles to increase footfall. Hyperlocal marketing lets you push notifications with personalized offers to those customers when they are nearby.
Sample data:

{
  "id" : 1,
  "info" : {
    "kind" : "store1",
    "country": "greece",
    "region" : "crete",
    "county" : "chania",
    "city" : null,
    "point" : { "type" : "point", "coordinates" : [ 23.549, 35.2908 ] }
  }
}
{
  "id" : 2,
  "info" : {
    "kind" : "store2",
    "country": "greece",
    "region" : "crete",
    "county" : "iraklion",
    "city" : null,
    "point" : { "type" : "point", "coordinates" : [ 24.9, 35.4 ] }
  }
}

Table

The table has two columns, with the GeoJSON data stored in "info":

create table points (id integer, info json, primary key(id));

Indexes

We create two indexes: one on the "point" attribute, and a composite one that also covers the "kind" and "city" attributes.

create index idx_ptn on points(info.point as point)

create index idx_kind_ptn_city on points(info.kind as string,
                                         info.point as point,
                                         info.city as string)

Queries

Here we search for all stores within a configured radius of the current point (the location of the customer):

#
# All stores
#
declare
    $radius2 double;
    $point1 double;
    $point2 double;

select /* FORCE_PRIMARY_INDEX(points) */
       id, p.info.point
from points p
where geo_within_distance(p.info.point,
                          { "type" : "point",
                            "coordinates" : [$point1, $point2]
                          },
                          $radius2)

More detailed documentation and examples of GeoJSON support can be found in the official Oracle NoSQL Database SQL Specification.
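The geohashing that underpins these geometry indexes encodes a longitude/latitude pair into a base-32 string so that nearby points share a common prefix. A minimal encoder in Python, for illustration only (Oracle NoSQL Database's internal encoding may differ in its details):

```python
# Standard geohash: alternately bisect the longitude and latitude ranges,
# emit one bit per bisection, and pack each group of 5 bits into a
# base-32 character.
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash_encode(lat, lon, precision=11):
    lat_range = [-90.0, 90.0]
    lon_range = [-180.0, 180.0]
    bits = []
    use_lon = True  # geohash starts with a longitude bit
    while len(bits) < precision * 5:
        rng = lon_range if use_lon else lat_range
        val = lon if use_lon else lat
        mid = (rng[0] + rng[1]) / 2
        if val >= mid:
            bits.append(1)
            rng[0] = mid  # keep the upper half of the range
        else:
            bits.append(0)
            rng[1] = mid  # keep the lower half of the range
        use_lon = not use_lon
    chars = []
    for i in range(0, len(bits), 5):
        n = 0
        for b in bits[i:i + 5]:
            n = (n << 1) | b
        chars.append(BASE32[n])
    return "".join(chars)

# Nearby points share a prefix, which is what makes a string index
# useful for proximity queries.
print(geohash_encode(57.64911, 10.40744))     # u4pruydqqvj
print(geohash_encode(57.64911, 10.40744, 5))  # u4pru
```

Because close-by coordinates share a geohash prefix, a plain ordered string index over the hashes can answer "near me" style lookups with range scans.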


Oracle NoSQL Database Version 19.3 is Now Available

This release ushers in a new paradigm in data management flexibility and simplicity, empowering enterprises to run and grow their businesses to best serve customers. Data and applications can reside anywhere and interoperate through a single application interface to meet today's and future business needs. Oracle NoSQL Database is designed for today's most demanding workloads: high volume, high velocity, and high variety, and for applications that require predictable low latency, multiple data models, rapid development, elastic capacity scaling, and ease of operations and management. With a single common application interface, developers can easily build applications that run and interoperate in a hybrid cloud environment. Oracle NoSQL Database is used in applications such as Internet of Things, customer 360, contextual online advertising, fraud detection, social networks, real-time big data, mobile applications, user personalization, online gaming, and more.

New Features

-- HTTP Proxy - A new middle-tier component that enables client application drivers to connect to the Oracle NoSQL Database Cloud Service and on-premises Oracle NoSQL Database using a single common application programming interface. This feature opens endless possibilities for applications running in a hybrid cloud environment.

-- String Pattern Matching - A new regex_like function that searches for a specific string pattern at the beginning, middle, or end of a text field in a column or a JSON document. This feature offers a quick and easy-to-use search option for textual content.

-- Dynamic Streaming - New APIs to add or remove a subscribed table in a live stream without terminating and recreating the stream. This feature offers more flexibility in the streaming subscription interface.

Resources

Visit here to download
Visit here to read the release notes
Visit the Oracle NoSQL LinkedIn page
Join our mailing list - NoSQL-Database@oss.oracle.com



Migrate MongoDB data to Oracle NoSQL Database

Introduction

Migrating data from one database to another entails design decisions related to data consistency, downtime of applications, and the key design differences between the two databases. When it comes to migrating across NoSQL databases, the challenges are multifold, considering the many flavors available today, such as document, key-value, wide column, and graph. Oracle NoSQL Database makes it easy to migrate data from different data models, be it document, key-value, or wide column databases.

Oracle NoSQL Database is a truly multi-model NoSQL data store that allows the customer to choose between key-value, JSON, or table representations of their data. It is shard-local ACID and provides control at the operation level for consistency and durability. One can leverage the efficient Elasticsearch integration for full-text index needs, including full-text indexes on JSON documents. One can store large objects in Oracle NoSQL Database; the database provides a rich and extremely efficient LOB API. The database allows you to start with the key-value model of storing data and then easily migrate to the table model to take advantage of table semantics and the SQL-like query feature. If you need joins, the database also allows customers to model their data as parent-child tables instead of large nested documents.

Export-Import utility

Oracle NoSQL Database includes an Export/Import utility that allows you to export and import binary data files from/to the NoSQL database. With the 19.1 release of the database, this utility has been further enhanced to process multiple data formats. The utility now supports exporting data from the NoSQL database to data files in Binary or JSON format, and importing data files in Binary, JSON, or MongoDB JSON format into Oracle NoSQL Database.
Supported data formats:

Binary - export from and import into Oracle NoSQL Database
JSON - export from and import into Oracle NoSQL Database
MongoDB JSON - import into Oracle NoSQL Database

Understanding Source and Sink:

A source in the Import utility is file-based and contains the data exported from a database. Suppose we want to migrate data from MongoDB to Oracle NoSQL Database: all the data from MongoDB is exported in JSON format and stored in files, and these files are identified as the "source". A sink in the Import utility is always Oracle NoSQL Database. Having identified the MongoDB-extracted JSON files as the source, the Oracle NoSQL Database we import this data into is called the "sink".

User Experience - Export:

Using the export utility, one can export all the data/metadata from the Oracle NoSQL Database store to the local file system, including:

Application-created data
Schema definitions such as table definitions, Avro schemas, and index definitions
The TTL of every table record

The utility creates a package containing data, schema, and logging information in a well-defined format that can be consumed by the import utility. The syntax for export:

# Export the entire kvstore or a given table to an export store.
# Use a config file to specify export store parameters.
java -jar KVHOME/lib/kvtool.jar export
        -export-all | -table table_names | -namespace namespaces
        -store storeName
        -helper-hosts helper_hosts
        -config config_file_name
        [-format BINARY | JSON]
        [-username user]
        [-security security-file-path]
        [-verbose]

User Experience - Import:

Using the import utility, one can import data from the local file system into Oracle NoSQL Database.
Using the utility one can import:

MongoDB JSON format exported data into Oracle NoSQL Database
All the schema definitions and data (table, Avro, and none-format data) from a backed-up store into the Oracle NoSQL Database store; the utility first imports all the schema definitions and then imports the user data
Individual tables, or tables within a specified namespace
The TTL of each record

The syntax for import:

# Import schema definitions and data from the local file system into an Oracle NoSQL store:
java -jar KVHOME/lib/kvtool.jar import
        -import-all | -table table_names | -namespace namespaces | -external
        -store storeName
        -helper-hosts helper_hosts
        -config config_file_name
        [-status status_file]
        [-format BINARY | JSON | MONGODB_JSON]
        [-username user]
        [-security security-file-path]
        [-verbose]

Using Export-Import:

Examples can also be found in the official documentation page, Using Export Import Utility. For this article, let us consider an existing vacation-booking application that stores country information in MongoDB and decides to migrate its data to the enterprise-scale Oracle NoSQL Database. The country information is stored as a JSON document and contains information such as landlocked status, shared borders, latitude and longitude, currency, official name, official languages spoken, and more such information that might be useful for a person trying to book a vacation to a desired country.
A sample JSON document is given below:

{
  "_id": { "$oid": "55a0f42f20a4d760b5fc3153" },
  "altSpellings": ["US", "USA", "United States of America"],
  "area": 9.37261e+06,
  "borders": ["CAN", "MEX"],
  "callingCode": ["1"],
  "capital": "Washington D.C.",
  "cca2": "US",
  "cca3": "USA",
  "ccn3": "840",
  "cioc": "USA",
  "currency": ["USD", "USN", "USS"],
  "demonym": "American",
  "landlocked": false,
  "languages": { "eng": "English" },
  "latlng": [38, -97],
  "name": {
    "common": "United States",
    "native": {
      "eng": {
        "common": "United States",
        "official": "United States of America"
      }
    },
    "official": "United States of America"
  },
  "region": "Americas",
  "subregion": "Northern America",
  "tld": [".us"],
  "translations": {
    "deu": { "common": "Vereinigte Staaten von Amerika", "official": "Vereinigte Staaten von Amerika" },
    "fin": { "common": "Yhdysvallat", "official": "Amerikan yhdysvallat" },
    "fra": { "common": "États-Unis", "official": "Les états-unis d'Amérique" },
    "hrv": { "common": "Sjedinjene Američke Države", "official": "Sjedinjene Države Amerike" },
    "ita": { "common": "Stati Uniti D'America", "official": "Stati Uniti d'America" },
    "jpn": { "common": "アメリカ合衆国", "official": "アメリカ合衆国" },
    "nld": { "common": "Verenigde Staten", "official": "Verenigde Staten van Amerika" },
    "por": { "common": "Estados Unidos", "official": "Estados Unidos da América" },
    "rus": { "common": "Соединённые Штаты Америки", "official": "Соединенные Штаты Америки" },
    "spa": { "common": "Estados Unidos", "official": "Estados Unidos de América" }
  }
}

You can see that this is a typical MongoDB record with the auto-generated $oid. The Export-Import utility can read this MongoDB JSON format and load it into Oracle NoSQL Database. Let us look at how easily this can be done in a few steps. The Export/Import utility requires a JSON config file.
You can customize your import operation by including one or many of the following options:

{
    "configFileVersion": <version>,
    "abortOnError": [false | true],
    "errorOutput": <error-dir>,
    "errorFileSizeMB": <error-file-chunk-size-mb>,
    "errorFileCount": <error-file-count>,
    "path": <dir-or-file>,
    "namespace": <namespace>,
    "tableName": <table-name>,
    "ignoreFields": [<field1>, <field2>, ...],
    "renameFields": {
        <old-name>: <new-name>,
        ...
    },
    "charset": <charset-name>,
    "ddlSchemaFile": <file>,
    "continueOnDdlError": <false | true>,
    "streamConcurrency": <stream-parallelism>,
    "overwrite": <false | true>,
    "ttlRelativeDate": <date-to-use in UTC>,
    "dateTimeToLong": <true | false>
}

The most frequently used config options are:

path - path to the JSON data file
namespace - the namespace in Oracle NoSQL Database in which you would like the imported table to be created
ignoreFields - the attributes in the JSON document that you want to ignore
renameFields - the attributes in the JSON document that you would like to rename in the Oracle NoSQL Database table
ddlSchemaFile - points to the table schema DDL script file

Run the import utility:

java -jar kvtool.jar import -helper-hosts localhost:5000 -store kvstore -external -format MONGODB_JSON -config ./mongoimp/config/config.json -verbose

The parameters for kvtool.jar are:

import - specifies that the current operation is an import
-helper-hosts - the Oracle NoSQL Database connection details
-store - the registered name of the kvstore
-external - specifies that the data to import was generated by a source other than Oracle NoSQL Database
-format - specifies the format to import
-config - the location of the config file explained above

With just two steps you can migrate your MongoDB data to Oracle NoSQL Database. Happy migration!
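The config file above can also be assembled programmatically before running kvtool. A sketch in Python that writes a minimal MONGODB_JSON import config for the migration scenario (the path, namespace, table, and field names here are hypothetical, chosen only to match the country-data example):

```python
import json

# Minimal MONGODB_JSON import config using only the commonly used options.
# All values below are assumptions for illustration.
config = {
    "configFileVersion": 1,
    "path": "./mongoimp/data",            # directory with exported JSON files
    "namespace": "vacation",              # hypothetical target namespace
    "tableName": "countries",             # hypothetical target table
    "ignoreFields": ["translations"],     # drop attributes we do not need
    "renameFields": {"cca2": "countryCode2"},
    "overwrite": True,
}

with open("config.json", "w") as f:
    json.dump(config, f, indent=4)

# The file is now ready to pass to: java -jar kvtool.jar import -config config.json ...
with open("config.json") as f:
    print(json.load(f)["tableName"])  # countries
```

Generating the config this way makes it easy to script migrations of many collections, varying only the path and table name per run.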



Announcing a feature rich Oracle NoSQL Database Export-Import

We are pleased to announce the release of Oracle NoSQL Database Version 19.1. This release contains the following new features:

Export-Import utility: With this release, we have enhanced the export-import utility, making it easier to move data in and out of Oracle NoSQL Database. The utility now supports:

Export table data from Oracle NoSQL Database and store the data as JSON formatted files on a local (or network mounted) file system.
Import ad-hoc JSON data generated from a relational database or other sources, and JSON data generated via MongoDB strict export.
Export data and metadata from one or more existing Oracle NoSQL Database tables, raw key/value based data, and large object data to a compact binary format.
Read data from, or write data to, files in the file system.
Import one or more tables into an Oracle NoSQL Database.
Restart from a checkpoint if an import or export fails before completion.

JSON datatype support for Full Text Search: Users can now create full-text search indexes on a JSON field and on the attributes within that JSON field.

Oracle continues to enhance Oracle NoSQL Database to meet customer requirements.

To download - Visit Here
The Release Notes can be found Here.
Join our mailing list - NoSQL-Database@oss.oracle.com
Visit our LinkedIn page
Follow us @OracleNoSQL and oracle-nosql


Working with Oracle NoSQL Database Aggregate Expressions

Introduction

As an application developer building modern applications, you are constantly required to handle speed for both inserts and querying of data. Oracle NoSQL Database is a scalable, distributed NoSQL database, designed to provide highly reliable, flexible, and available data management across a configurable set of machines. It is truly multi-model, with the flexibility to define your data model as key-value, strict table schema, or JSON document structure. While modern applications need data-model flexibility, aggregate operations that can process records and return computed results simplify application code to a great extent. Oracle NoSQL Database provides a rich set of aggregate operations that perform calculations on your data sets via its SQL-like query language. This SQL-like interface can be used to query data from a flat relational data model, hierarchical typed data, and schema-less JSON data models seamlessly. Queries can be executed from either the command-line SQL interface or the Java API.

SQL-like Query Language

SQL (Structured Query Language) is widely used in programming, primarily for managing data held in data stores. It is defined in the form of statements, which are classified as DML (Data Manipulation Language) and DDL (Data Definition Language), among others. For a detailed introduction with examples of the SQL-like query language for Oracle NoSQL Database, see Getting Started with SQL for Oracle NoSQL Database. As of version 18.3 of Oracle NoSQL Database, the SQL-like query language includes support for non-updating queries (SELECT), updating queries (INSERT, DELETE), and DDL queries (CREATE, ALTER). Currently, Oracle NoSQL Database supports atomic types (Integer, Long, Float, Double, Number, String, Boolean, Binary, Timestamp), complex types (Array, Map, Record), and the JSON data type.
It also includes the ability to perform simple aggregates and join operations on parent-child tables. In this post, let us look at how simple aggregates can provide a real-time or near-real-time representation of your table data.

Use Case - Customer 360

Let's take a cab aggregator application (web/mobile) that allows a typical customer to book a cab. The customer creates a demographic profile on the application before starting to use the core service. A 360-degree customer view helps companies get a complete picture of a customer's buying and behavioral patterns by aggregating data from the various touch points a customer may use to purchase or receive a service. This requires identifying new ways to capture customer data, which could be structured (third-party applications) or unstructured (social media channels), combining it in a central location, and analyzing it. Oracle NoSQL Database, with its ability to be truly multi-model, is a perfect fit to build and store your Customer 360 profile and run aggregate queries that help analyze and identify customer needs over and beyond serving their requests. Typical questions that can be answered using the aggregate operations are:

How many movies has a customer booked while booking a cab to a multiplex cinema hall?
How many metro train tickets has a customer booked while booking a cab to the nearest metro station?
What is the total revenue generated by a customer?
What is the minimum amount spent by a customer on a booked cab?
What is the maximum amount spent by a customer on a booked cab?
What is the average amount spent by a customer on a booked cab?

A list of all the supported aggregate operations can be found here. Let us see some examples of how aggregation queries in Oracle NoSQL Database can be used to analyze a customer's behavior.
Examples:

Below are examples of queries that can be run on a typical Customer 360 profile for the cab aggregator industry. The tables used are:

DataLake
CREATE TABLE DataLake (ID INTEGER, profile JSON, PRIMARY KEY (ID))

DataLake.MetroBooking
CREATE TABLE DataLake.MetroBooking (MB_ID INTEGER GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT BY 1 MAXVALUE 10000), metroBookingDetails JSON, PRIMARY KEY (MB_ID))

DataLake.MovieBooking
CREATE TABLE DataLake.MovieBooking (MV_ID INTEGER GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT BY 1 MAXVALUE 10000), movieBookingDetails JSON, PRIMARY KEY (MV_ID))

NOTE: The assumption made below is that the cab aggregator mobile app includes value-added services as customer touch points, such as movie booking and train ticket booking, used while customers book cabs to reach a multiplex theater or the nearest local train station.

Movies booked by the user/customer with ID 1 in the last year (in the query, i = each month of the year):

select count(*) as count
from DataLake.MovieBooking dlm
where month(cast(dlm.movieBookingDetails.createTime as Timestamp(0))) = "+i+" and dlm.ID = 1

The query above uses the following constructs:
Aggregate function - count
Timestamp function - month
Utility function - cast (the timestamp is represented as a String within the JSON)

Train tickets booked by the user/customer with ID 1 in the last year (in the query, i = each month of the year):

select count(*) as count
from DataLake.MetroBooking dlm
where month(cast(dlm.metroBookingDetails.createTime as Timestamp(0))) = "+i+" and dlm.ID = 1

The query above uses the following constructs:
Aggregate function - count
Timestamp function - month
Utility function - cast (the timestamp is represented as a String within the JSON)

Total number of cabs booked by the user/customer with ID 1:

select size(dl.profile.customerRideDetails) as size
from DataLake dl
where dl.ID = 1

The query above uses the following constructs:
Aggregate function - size

Total revenue generated by the user/customer with ID 1 in booking
metro train tickets:

select sum(cast(dlm.metroBookingDetails.fare as INTEGER))
from DataLake.MetroBooking dlm
where dlm.ID = 1

The query above uses the following constructs:
Aggregate function - sum
Utility function - cast (the fare is represented as a String within the JSON)

Total revenue generated by the user/customer with ID 1 on booked cab rides:

select sum(dl.profile.customerRideDetails[].price) as sum
from DataLake dl
where dl.ID = 1

The query above uses the following constructs:
Aggregate function - sum

Summary

Oracle NoSQL Database supports aggregate functions such as sum, count, size, avg, max, and min. More details can be found in the official documentation: Getting Started with SQL for Oracle NoSQL Database and the SQL for NoSQL Specification.
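The aggregate functions listed above behave like their everyday arithmetic counterparts. A quick illustration in Python over hypothetical cab-ride prices (the data is made up):

```python
# Hypothetical cab-ride prices for one customer; the SQL aggregates
# sum/count/avg/min/max behave like these Python counterparts.
prices = [12.5, 8.0, 23.0, 15.5]

total = sum(prices)            # sum(...)
rides = len(prices)            # count(*)
average = total / rides        # avg(...)
cheapest = min(prices)         # min(...)
priciest = max(prices)         # max(...)

print(total, rides, average, cheapest, priciest)
# 59.0 4 14.75 8.0 23.0
```

The value of running these as SQL aggregates rather than in application code is that the computation happens inside the database, close to the data, instead of shipping every record to the client.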



15 minutes to Oracle NoSQL Database Cloud Service

Introduction

Oracle NoSQL Database Cloud Service is a fully managed NoSQL database cloud service for today's most demanding applications that require predictable low latencies, flexible data models, and elastic scaling for dynamic workloads. The database cloud service is suitable for applications such as Internet of Things, user-experience personalization, real-time fraud detection, and online display advertising. Developers focus on application development without the hassle of managing back-end servers, storage expansion, cluster deployments, software installation/patches/upgrades, backup, operating systems, and high-availability configurations. If you would like to understand some of the key features of Oracle NoSQL Database Cloud Service, please read this.

Getting started

Getting started with Oracle NoSQL Database Cloud Service requires only your favorite Java application IDE; the one prerequisite is a valid Oracle Cloud subscription. For more detailed steps on activating your Oracle Cloud account, creating an Oracle NoSQL Database cloud user, and the basic configuration your application needs to connect to the Oracle NoSQL Database Cloud Service, read this.

Your First Application

To get started with the Oracle NoSQL Database Cloud Service, you need to create a table. Once you have the basic configuration information mentioned above, such as client credentials, entitlement ID, IDCS URL, and service endpoint URL, you can create your table and start building your application.
The following are the service endpoint URLs supported today, with more regions being added in the coming months:
- North America East - ndcs.uscom-east-1.oraclecloud.com
- North America West - ndcs.uscom-west-1.oraclecloud.com
- Europe - ndcs.eucom-central-1.oraclecloud.com

In addition to the configuration information, you will need the Cloud Java Driver and the Eclipse Plugin to ease your application development. Instructions for creating your first table, including where to find your client credentials, can be found here.

Tools

Some of the tools provided by the Oracle NoSQL Database Cloud service to take you further in building a complete user application are:
- Oracle NoSQL Database Capacity Estimator and Cost Estimator
- Oracle NoSQL Database Cloud Simulator
- Oracle NoSQL Database Cloud Eclipse plugin



Ease Your Oracle NoSQL Database Cloud Application Development with Eclipse

Want to get an Oracle NoSQL Database Cloud Service application up and running quickly? If you missed how you can run a locally deployable Oracle NoSQL Database Cloud Simulator to build, test and debug your Java application, read this. To enhance your experience of building applications with the Oracle NoSQL Database Cloud Service, we are announcing a plugin for one of the most commonly used open-source Integrated Development Environments (IDEs): Eclipse. With this Eclipse plugin you can:
- Connect to a running instance of the Oracle NoSQL Database Cloud Simulator or the Oracle NoSQL Database cloud service to explore development/test data from tables associated with your Oracle Cloud account
- Build and test your Oracle NoSQL Database queries
- Paginate results
- Add a NoSQL Nature to an existing project, which automatically adds the Oracle NoSQL Database Cloud service client and dependent libraries to your project classpath
- Browse examples

Excited? Read further to learn how to install the plugin and explore the features it provides.

Install the Oracle NoSQL Database Cloud Simulator

The Oracle NoSQL Database Cloud Simulator is distributed as cloudsim.jar within the SDK. The latest SDK and driver packages can be downloaded from the Oracle Technology Network. Installing the Cloud Simulator is as simple as unzipping the distribution. Detailed instructions can be found in the official documentation.

Starting the Oracle NoSQL Database Cloud Simulator

The Oracle NoSQL Database Cloud SDK comes with helper scripts that start the Cloud Simulator. The only required parameter is –root, which specifies where the database files are created. Other options and detailed steps to start the Cloud Simulator are here.

Install the Eclipse Plugin for the Oracle NoSQL Database Cloud Simulator

NOTE: The Oracle NoSQL Database Eclipse Plugin has been built to work with Eclipse Neon 4.6 and later releases.
Now that you have started the Cloud Simulator, you can install the Eclipse plugin to build your application. Follow the screenshots below for detailed installation instructions. Before installing, ensure you have downloaded the plugin distribution and unzipped it to a desired location.
- Help -> Install New Software
- Point to the directory where you unzipped the distribution. If the plugin is already installed, this step will update the installation.
- Accept the license and click Finish.

The installation takes a few minutes to complete and then prompts you to restart Eclipse. Once Eclipse has restarted, verify the plugin installation:
- Right-click on your existing application -> Configure. You should see an option to Add Oracle NoSQL Nature.
- Alternatively: Window -> Show View -> Other. You should see Oracle NoSQL – Schema Explorer.

Configure the plugin

Once the plugin is installed and verified, you need to tell the plugin where to find the Oracle NoSQL Database Cloud SDK. Instructions on downloading the SDK and client libraries can be found here. Once you have the SDK, follow these steps to set preferences for the plugin: Window -> Preferences -> Oracle NoSQL Preferences. The plugin allows you to work with either the Cloud Simulator or the Oracle NoSQL Database Cloud Service.

If you would like your application to work with the Cloud Simulator:
- Select the Profile Type CloudSim
- Enter the Service URL – http://localhost:8080
- Tenant Identifier – the namespace under which all your tables will be created
- Location of the downloaded SDK

If you would like your application to work with the Oracle NoSQL Database Cloud Service:
- Select the Profile Type Cloud
- Enter the Endpoint – ndcs.uscom-east-1.oraclecloud.com/
- Entitlement ID – the 9-digit entitlement ID associated with your cloud account
- IDCS URL – the IDCS authentication URL associated with your cloud account

NOTE: The endpoint is specific to one of the available data centers.
Please ensure you use the right endpoint for the region where your Oracle Cloud account was created. NOTE: If you don’t have the Entitlement ID and IDCS URL, follow this to obtain the details specific to your Oracle Cloud account.

Adding an Oracle NoSQL Database Nature to an existing Eclipse project

To add an Oracle NoSQL Nature to your existing application: right-click on the application -> Configure -> Add Oracle NoSQL Nature.

Browse the Oracle NoSQL Database examples

To browse the examples bundled as part of the SDK, the plugin allows you to create a new Eclipse project of type NoSQL Examples:
- File -> New -> Other -> Oracle NoSQL – NoSQL Examples
- Click Next and enter the desired project name
- Select one or all of the bundled examples and click Finish

This creates a new project with the required Oracle NoSQL Database libraries and the source of the examples.

Explore tables and data

A very useful feature the plugin provides is the ability to browse the data that you store and use in your development/test environment in the Cloud Simulator, or data stored in your Oracle NoSQL Database Cloud Service. Be sure to select the appropriate Profile Type before exploring tables and data. With the Data Explorer a developer can:
- View all the development and test tables created
- View the detailed structure of the tables, including primary key, shard key, indexes, columns and types
- Build, test and run Oracle NoSQL queries on development/test data before using them within an application
- Paginate data stored in the Cloud Simulator; the amount of data shown in a single view can be configured in Window -> Preferences

Explore data stored in the Oracle NoSQL Database Cloud Service

Before you explore data stored in your Oracle NoSQL Database Cloud Service account, be sure to read Getting Started with Oracle NoSQL Database Cloud and Develop your NoSQL Application in Oracle Cloud.
Once you have read these and have set up your Oracle NoSQL Database Eclipse plugin preferences with the details of your cloud account, select the Profile Type Cloud and follow the steps outlined earlier. You can see how easy it is to get started building, testing and debugging your application for the Oracle NoSQL Database Cloud Service.



Build an Oracle NoSQL Database cloud service application locally

Want to build your Oracle NoSQL Database cloud Java application locally before migrating it to the Oracle Cloud? We are excited to provide a tool for accelerating your Oracle NoSQL Database cloud application development and testing: the Oracle NoSQL Database Cloud Simulator. The Cloud Simulator, or “cloudsim”, allows you to develop, debug and test your application against a locally deployable database instance with functionality similar to the actual Oracle NoSQL Database cloud service. When you are ready to run your application in production on Oracle Cloud, all that is needed is to change the endpoint from cloudsim to the cloud service endpoint, with the right authentication and authorization credentials. Refer to Getting Started with Oracle NoSQL Database cloud for details on creating a cloud account. Details and an example of the minimal code changes can be found in our Authentication and Authorization FAQ. To start building your application for the Oracle NoSQL Database cloud service, you need the Oracle NoSQL Cloud SDK and the Oracle NoSQL Cloud Java Driver; both can be obtained from the official cloud download page. The Oracle NoSQL Cloud Simulator is part of the Oracle NoSQL Cloud SDK and is a standalone, local version of the Oracle NoSQL Database cloud service. It gives you a head start in exploring the easy-to-use APIs of the Oracle NoSQL Database cloud driver without having to purchase or use any Oracle NoSQL Database cloud credits. You can run the simulator as a background process on your development machine, which could be your laptop, a VM, or a VM shared by all developers on the application team. The Oracle NoSQL Cloud Simulator supports all the Java client APIs required to communicate with the Oracle NoSQL Database cloud service, which means code written once is portable to run against the actual cloud service.
A few things to remember:
- The Oracle NoSQL Cloud SDK can only be used in standalone/local development environments on Windows, Mac and Linux platforms, for development and test.
- Developers should have the Java Development Kit (JDK) version 10 installed on their client systems.
- Read this for a more detailed comparison between the Oracle NoSQL Database Cloud Simulator and the Oracle NoSQL Database cloud service.

Excited? If you would like to read more:
- Oracle NoSQL Database cloud – Developer On-boarding
- If you are a developer using the Eclipse IDE
- Develop your applications using programming languages other than Java

Keep watching this space to learn about our commitment to enhancing the Oracle NoSQL Database cloud developer experience.
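The cloudsim-to-production switch described above amounts to changing an endpoint and supplying credentials. A minimal sketch of that pattern follows; the helper function, field names, and environment variables are hypothetical illustrations, not the actual Oracle NoSQL SDK API.

```python
import os

def make_endpoint_config(use_cloudsim: bool) -> dict:
    """Hypothetical config builder: only the endpoint (and credentials)
    differ between the local Cloud Simulator and the cloud service."""
    if use_cloudsim:
        # Local Cloud Simulator: no authentication needed.
        return {"endpoint": "http://localhost:8080", "auth": None}
    # Real service endpoint plus authentication/authorization details.
    return {
        "endpoint": "https://ndcs.uscom-east-1.oraclecloud.com",
        "auth": {
            "entitlement_id": os.environ.get("NOSQL_ENTITLEMENT_ID", "<your-id>"),
            "idcs_url": os.environ.get("NOSQL_IDCS_URL", "<your-idcs-url>"),
        },
    }

print(make_endpoint_config(True)["endpoint"])   # http://localhost:8080
```

Keeping this choice in one place is what makes the code written against cloudsim portable to the actual cloud service.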


IDENTITY column in Oracle NoSQL Database

Introduction

Oracle NoSQL Database introduces the IDENTITY column, which auto-increments a value each time a row is inserted into a table.

IDENTITY Column

The primary key for a row in an Oracle NoSQL Database table must be unique. But how can we ensure that the primary key is always unique? One way would be to use a formula to generate it; this may work, but it is complex and not foolproof. The IDENTITY column feature instead generates unique numeric values in sequence. Such columns are typically used for primary keys and are termed identity columns in the relational database world. With the 18.3 release, Oracle NoSQL Database tables can:
- Include an identity column (primary key or shard key) associated with a Sequence Generator.
- Associate a Sequence Generator with any numeric (Integer/Long) column in the table that requires an auto-incremented value.

An IDENTITY column's Sequence Generator can be created with several configurable attributes that offer flexibility:
- START WITH. The first value in the sequence. The default is 1.
- INCREMENT BY. The value added to the current sequence value to generate the next one. The increment can be positive or negative: when INCREMENT BY is positive, values ascend; when it is negative, values descend and the Sequence Generator decrements from the last value with each iteration. The default is 1.
- MINVALUE. The lower bound of the sequence value.
- MAXVALUE. The upper bound of the sequence value.
- CACHE. Determines how many values are reserved for each client to use when assigning IDENTITY numbers. The default is 1000.
- CYCLE or NO CYCLE. Indicates whether the Sequence Generator will reuse IDENTITY numbers. The default is NO CYCLE. With CYCLE, IDENTITY numbers are no longer guaranteed unique once the first cycle completes.

Use Case – Customer 360

This use case is about a typical bank with a growing customer base.
The bank is interested in understanding customer behavior and performing targeted marketing. It has multiple systems that capture different aspects of customer information and wants to collate all of them into one table as a JSON document. Having built an interface that produces one JSON document, the bank now wants to store these documents in Oracle NoSQL Database, and only needs to decide what the primary key of the table should be. Let us look at the different table structures that can be created using the Sequence Generator, which generates the primary key while the bank's interface only builds the JSON document.

IDENTITY column on a primary key:

CREATE TABLE customer (ID INTEGER GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT BY 1 MAXVALUE 10000), customerProfile JSON, PRIMARY KEY (ID));

IDENTITY column on a shard key:

CREATE TABLE customer (ID INTEGER GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT BY 1 MAXVALUE 10000), SSN INTEGER, customerProfile JSON, PRIMARY KEY (SHARD(ID), SSN));

IDENTITY column on an integer column (not a primary/shard key):

CREATE TABLE customer (ID STRING, customerProfile JSON, uniqueID INTEGER GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT BY 1 MAXVALUE 10000), PRIMARY KEY (ID));

IDENTITY column on an indexed column:

CREATE TABLE customer (ID STRING, customerProfile JSON, uniqueId INTEGER GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT BY 10 MAXVALUE 10000), PRIMARY KEY (ID));
CREATE INDEX uniqueIdIndex ON customer (uniqueId);

ALTER TABLE to add an IDENTITY column:

CREATE TABLE customer (ID STRING, customerProfile JSON, PRIMARY KEY (ID));
ALTER TABLE customer (ADD uniqueId INTEGER GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT BY 1 MAXVALUE 10000));

IDENTITY column on a parent and a child table:
CREATE TABLE customer (ID INTEGER GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT BY 1 MAXVALUE 10000), customerProfile JSON, PRIMARY KEY (ID));
CREATE TABLE customer.address (addressID INTEGER GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT BY 1 MAXVALUE 10000), customerAddress JSON, PRIMARY KEY (addressID));

Summary

A Sequence Generator can be used to generate numbers sequentially whenever a new row is added to the database. It is commonly used to generate primary keys, but can also be used for columns other than primary keys. More details and limitations can be found in the official documentation – Sequence Generator.
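The Sequence Generator attributes discussed in this post (START WITH, INCREMENT BY, MAXVALUE, CYCLE) can be modeled with a small Python generator. This is only a toy sketch of the semantics; the real generator lives inside Oracle NoSQL Database and also hands out cached blocks of values per client (the CACHE attribute).

```python
def sequence(start=1, increment=1, maxvalue=None, cycle=False):
    """Toy model of an IDENTITY column's Sequence Generator."""
    value = start
    while True:
        if maxvalue is not None and value > maxvalue:
            if not cycle:
                return          # NO CYCLE: the sequence is exhausted
            value = start       # CYCLE: values repeat and are no longer unique
        yield value
        value += increment

# Mirrors GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT BY 1 MAXVALUE 10000)
gen = sequence(start=1, increment=1, maxvalue=10000)
ids = [next(gen) for _ in range(3)]
print(ids)  # [1, 2, 3]
```

Note how CYCLE trades uniqueness for an unbounded supply of values, which is why NO CYCLE is the default for primary keys.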


Oracle NoSQL Database Version 18.3 Now Available

We are pleased to announce the release of Oracle NoSQL Database version 18.3. This release contains the following new features:
- Querying GeoJSON data: Introduces support for spatial queries on RFC 7946-compliant GeoJSON data. A number of spatial functions, as well as indexing for GeoJSON data, have been added.
- IDENTITY column: Tables can now specify an IDENTITY column, unburdening the application developer from having to generate unique primary keys.
- Namespace: A convenience container that eases the administration of authorization, import/export, and application development for applications that must manage their own tenants.
- Admin Web Service: The Oracle NoSQL Admin service can now be accessed as a web service over the standard HTTP/S ports (80/443). This new access paradigm exposes RESTful interfaces, easing the task of operational automation.
- SQL for NoSQL enhancements: Oracle NoSQL Database continues to power applications that require very fast access to data and flexible data models, with SQL functionality enhanced to include INSERT and UPSERT.

Oracle continues to enhance Oracle NoSQL Database to meet customer requirements. To download, visit here. To read about the new features, visit here. The release notes can be found here. Join our mailing list – NoSQL-Database@oss.oracle.com – and visit our LinkedIn page.


JSON Partial Update using the Oracle NoSQL Database Query Language

Background

A general direction we see in both NoSQL and relational databases is the adoption of the document model. While document-oriented database implementations differ on the details, in general they all assume documents encapsulate and encode data (or information) in some standard format or encoding such as XML, YAML, JSON, or BSON. JavaScript Object Notation (JSON) has become the accepted medium of interchange between numerous Internet services, and for unstructured data. The JSON-style document model enables customers to build services that are schema-less. Sure, you could always store JSON as an opaque blob in the database, but then you could not easily work with the JSON data at the database level. Starting with Oracle NoSQL Database 4.2.10, we added support for a native JSON data type, as defined by RFC 7159, that enables efficient access to data in JSON documents. The JSON data type provides the following benefits over storing JSON-formatted strings in a string column:
- Automatic JSON document validation: Only valid JSON documents can be stored in a JSON column, so you get automatic validation of your data. Invalid documents produce an error during the “put” or insert operation in Oracle NoSQL Database.
- Efficient access: JSON documents stored in JSON columns are converted to an internal, optimized binary format that allows quick read access to document elements. Say you want to find a particular name/value pair in a JSON document. If you stored JSON documents as blobs, your best option would be to read the JSON data into memory and run it through a JSON parser, scanning and converting the document at the application level; needless to say, that would not be performant. Conversely, with a native JSON data type, the database parses and stores JSON internally in a native format, and querying JSON data is no different from querying data in other columns.
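The automatic-validation benefit above can be mimicked at the application level with any standards-compliant JSON parser. This sketch uses Python's standard json module to show the kind of check the database performs on a JSON column at write time; it is an illustration, not the database's internal code path.

```python
import json

def validate_json_document(text: str) -> bool:
    """Return True if text is a valid JSON document, False otherwise."""
    try:
        json.loads(text)
        return True
    except json.JSONDecodeError:
        return False

print(validate_json_document('{"name": "Joe", "age": 25}'))  # True
print(validate_json_document('{"name": "Joe", age: }'))      # False
```

With a native JSON column, a document failing this kind of check is rejected at the "put" or insert, so invalid data never reaches storage.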
- Performance: Improve query performance by creating indexes on specific fields within the JSON column value.

We continue to enhance Oracle NoSQL Database support for the JSON data type. Recent enhancements include:
- Indexes into JSON columns. An index is a JSON index if it indexes at least one field contained inside JSON data. We support both simple-type JSON indexes and multi-key JSON indexes (which index arrays or maps). This gives developers access to nested attributes embedded deep within a JSON document.
- Queries (SQL for NoSQL) involving JSON data types.
- Updating JSON documents using the SQL UPDATE statement. An UPDATE statement can be used to update a row in a table that contains a JSON column. This also allows users to perform partial updates, that is, to update a subset of fields in the JSON column.

While each of the above points deserves a separate blog post, in this post we are going to focus on the last one: updating a JSON document with the SQL UPDATE statement.

Updating JSON Documents

While some systems don’t recommend manipulating JSON documents in place, Oracle NoSQL Database's SQL UPDATE statement makes it possible for developers to modify or partially update (a subset of fields in) JSON documents. Let’s take a real-world example where you want to model user profile data (personal data associated with specific users) in Oracle NoSQL Database. A user profile makes an interesting case for the JSON data type, because profile data can have varying sets of attributes: users have different interests and expense patterns, some users have more than one work address, some have more than one type of phone (work, home, etc.), some like to specify {firstInitial}{lastname} while others want {firstname}.{lastname}, and so on.
Thus, user profile data doesn’t really conform to the rigid schema (schema, data types, etc.) of a traditional system, and there is a need for a modeling option that reduces the ceremony (you don’t have to define schemas) and increases flexibility (store all sorts of data without prior definition). Enter native support for JSON in Oracle NoSQL Database, which provides scalability and flexibility together and makes developers more productive. Let’s create a simple table called JSONPersons with a column person that holds the user profile:

CREATE TABLE IF NOT EXISTS JSONPersons (
  id integer,
  person JSON,
  primary key (id)
);

And here is sample data that we want to insert (notice how each user has a different set of fields):

{
  "id": 1,
  "person": {
    "firstname": "David",
    "lastname": "Morrison",
    "age": 25,
    "income": 100000,
    "lastLogin": "2016-10-29T18:43:59.8319",
    "address": {
      "street": "150 Route 2",
      "city": "Antioch",
      "state": "TN",
      "zipcode": 37013,
      "phones": [{"type": "home", "areacode": 423, "number": 8634379}]
    },
    "connections": [2, 3],
    "expenses": {"food": 1000, "gas": 180}
  }
}

{
  "id": 2,
  "person": {
    "firstname": "Peter",
    "lastname": "Smith",
    "age": 38,
    "income": 80000,
    "lastLogin": "2016-10-19T09:18:05.5555",
    "address": {
      "street": "364 Mulberry Street",
      "city": "Leominster",
      "state": "MA",
      "phones": [
        {"type": "work", "areacode": 339, "number": 4120211},
        {"type": "work", "areacode": 339, "number": 8694021},
        {"type": "home", "areacode": 339, "number": 1205678},
        null,
        {"type": "home", "areacode": 305, "number": 8064321}
      ]
    },
    "connections": [3, 5, 1, 2],
    "expenses": {"food": 6000, "books": 240, "clothes": 2000, "shoes": 1200}
  }
}

{
  "id": 3,
  "person": {
    "mynumber": 5,
    "myarray": [1, 2, 3, 4]
  }
}

You can store the above JSON records in a JSON file and use the load utility to insert the records into Oracle NoSQL Database. Alternatively, you can use the API to “put” the JSON records into the table.
After successfully inserting the data, you may need to modify the user records (change field values, add or remove elements from a user profile). It is reasonable to say that the modification should satisfy the following requirements:
- It should take place at the server, eliminating the read-modify-write cycle: the need to fetch the whole user row at the client, compute new values for the targeted fields (potentially based on their current values), and then send the entire row back to the server.
- It should be thread-safe and not lead to race conditions if updates happen concurrently from different sessions.
- It should be atomic: other threads should see the most up-to-date values.

Enter the SQL UPDATE statement. UPDATE statements make server-side updates directly, without requiring a read/modify/write cycle. Both syntactically and semantically, the UPDATE statement of Oracle NoSQL Database is similar to the UPDATE statement of standard SQL, but with extensions to handle the richer data model of Oracle NoSQL Database. Let’s take a look at how this works.

Case 1: Changing field values, the simplest form of update

Say you want to modify the user with id 3 and set the field mynumber to 300. We can use the SET clause as shown below:

sql-> UPDATE JSONPersons j SET j.person.mynumber = 300 WHERE j.id = 3 RETURNING id, j.person.mynumber AS MyNumber;
+----+----------+
| id | MyNumber |
+----+----------+
|  3 |      300 |
+----+----------+
1 row returned

Case 2: Adding elements to an array

Say we want to add more elements (50 and 100) to the array field myarray for the user with id 3: 50 at position 0 and 100 at the end.
Let’s use the ADD clause:

sql-> UPDATE JSONPersons j ADD j.person.myarray 0 50, ADD j.person.myarray 100 WHERE j.id = 3 RETURNING *;
+----+-----------------+
| id | person          |
+----+-----------------+
|  3 | myarray         |
|    |   50            |
|    |   1             |
|    |   2             |
|    |   3             |
|    |   4             |
|    |   100           |
|    | mynumber | 300  |
+----+-----------------+
1 row returned

Notice that multiple ADD clauses are used in the query above.

Case 3: Removing elements from an array

Say we want to remove the elements myarray and mynumber from the user with id 3; clearly, this user looks different from the other users. We can do that with a single UPDATE statement:

sql-> UPDATE JSONPersons j REMOVE j.person.myarray, REMOVE j.person.mynumber WHERE j.id = 3 RETURNING *;
+----+--------+
| id | person |
+----+--------+
|  3 |        |
+----+--------+
1 row returned

Case 4: Adding elements to a map

Next, say we want to modify the user with id 3 and add a firstname and lastname. Here we use a single PUT clause that specifies a map with multiple elements:

sql-> UPDATE JSONPersons j PUT j.person {"firstname" : "Wendy", "lastname" : "Purvis"} WHERE j.id = 3 RETURNING *;
+----+--------------------+
| id | person             |
+----+--------------------+
|  3 | firstname | Wendy  |
|    | lastname  | Purvis |
+----+--------------------+
1 row returned

Next, we can specify the age, connections, expenses, income, and lastLogin fields using multiple PUT clauses in a single UPDATE statement:

sql-> UPDATE JSONPersons j PUT j.person {"age": 43}, PUT j.person {"connections": [2, 3]}, PUT j.person {"expenses": {"food": 1100, "books": 210, "travel": 50}}, PUT j.person {"income": 80000}, PUT j.person {"lastLogin": "2017-06-29T16:12:35.0285"} WHERE j.id = 3 RETURNING *;
+----+-----------------------------------------+
| id | person                                  |
+----+-----------------------------------------+
|  3 | age         | 43                        |
|    | connections |                           |
|    |   2                                     |
|    |   3                                     |
|    | expenses    |                           |
|    |   books     | 210                       |
|    |   food      | 1100                      |
|    |   travel    | 50                        |
|    | firstname   | Wendy                     |
|    | income      | 80000                     |
|    | lastLogin   | 2017-06-29T16:12:35.0285  |
|    | lastname    | Purvis                    |
+----+-----------------------------------------+
1 row returned

Conclusion

Oracle NoSQL Database, with native support for the JSON data type, makes it easy to store data that doesn’t conform to a rigid schema. In addition, the SQL UPDATE statement provides a simple and intuitive way to add, modify, and remove elements from JSON documents, eliminating the need for a read-modify-write cycle. Do try this at home!
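The four UPDATE cases walked through in this post map naturally onto in-memory JSON (dict and list) operations. This Python sketch mirrors each clause to show the intended semantics; the real statements of course run server-side inside Oracle NoSQL Database, which is the whole point of avoiding the read-modify-write cycle.

```python
# The JSON document for the row with id 3, as inserted earlier.
person = {"mynumber": 5, "myarray": [1, 2, 3, 4]}

# Case 1: SET j.person.mynumber = 300
person["mynumber"] = 300

# Case 2: ADD j.person.myarray 0 50, ADD j.person.myarray 100
person["myarray"].insert(0, 50)   # add 50 at position 0
person["myarray"].append(100)     # add 100 at the end

# Case 3: REMOVE j.person.myarray, REMOVE j.person.mynumber
del person["myarray"]
del person["mynumber"]

# Case 4: PUT j.person {"firstname": "Wendy", "lastname": "Purvis"}
person.update({"firstname": "Wendy", "lastname": "Purvis"})

print(person)  # {'firstname': 'Wendy', 'lastname': 'Purvis'}
```

Doing these steps client-side would require fetching the row, mutating it, and writing it back; the UPDATE clauses push exactly these mutations to the server atomically.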


Greater developer flexibility building JSON applications with Oracle NoSQL Database

In this article, we look at how to store and retrieve JSON data with Oracle NoSQL Database, our database of choice because it further enhances developer flexibility. If you are a developer building microservices for web or mobile applications that communicate with other services over HTTP using the REST (REpresentational State Transfer) architecture, then you are usually working with JSON data. JSON has become the choice of developers building microservices thanks to its simplicity and flexibility. As developers started using JSON for communication between microservices, writing and reading that JSON data to and from the back-end database became the need of the hour, because it saved developers from serializing and deserializing JSON data fetched from the database as the result of a query, further unleashing the power of JSON. Over the last few years, NoSQL databases have stepped up to provide comprehensive support for storing and querying JSON data, and relational databases have followed the trend.

Flexibility of JSON

JSON is a common data interchange format and gives developers flexibility when designing and evolving data models for microservices. It is useful for cases where data is unorganized. Manipulating JSON data boils down to two aspects of application development:
- Modeling data to unleash JSON's flexibility
- Querying the free and wild JSON data effectively

It is important to note that databases store JSON data in a manner in which the record itself stores the type information. Each record is therefore self-describing rather than schema-less, yet records do not have to adhere to any conventional structure within the scope of a table or record collection.
As an example, if we are building a contacts database for a Person entity, a sample JSON record could look like this:

{
  "account": 1,
  "person": {
    "lastName": "Jones",
    "firstName": "Joe",
    "address": {
      "home": {
        "street": "15 Elm",
        "city": "Lakeville",
        "zip": "12345"
      },
      "work": {
        "street": "12 Main",
        "city": "Lakeville",
        "zip": "12345"
      }
    },
    "phone": {
      "home": "800-555-1234",
      "work": "877-123-4567"
    }
  }
}

Note that there might be Person records for which the phone map is not available, allowing the application to work with JSON data that conforms to various combinations of its object structure. If the developer gets a new requirement that "a Person could have 2 or more phone numbers", they would be able to incorporate that use case with minimal code changes, as opposed to having to adhere to one standard structure for all records. Application developers can unleash this flexibility when they use JSON. Before we go further, it is also important to note that, while the JSON data itself can have its own structure, in all cases the information context, including the (static or dynamic) type of each field, is known to the application. In other words, the application still has the business context of the data and knows how to manipulate the JSON. It does not make sense to store two JSON records that have entirely different structures and, more importantly, different information contexts (one record representing a person contact, another a shopping cart) in one table or collection. The application (or object-mapping frameworks, when used) brings type consistency to the free and wild JSON data format. Now let's look at how Oracle NoSQL Database extends this flexibility, for both data modeling and querying.
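Because records need not share one structure, application code reads optional JSON fields defensively. A minimal sketch, using the sample Person record above and assuming a record where the phone map happens to be absent:

```python
# A Person record without the optional "phone" map.
record = {
    "account": 1,
    "person": {
        "lastName": "Jones",
        "firstName": "Joe",
    },
}

# dict.get() lets the application tolerate records where "phone" is missing,
# which is exactly the flexibility the self-describing JSON model allows.
phone = record["person"].get("phone", {})
home_number = phone.get("home", "not on file")
print(home_number)  # not on file
```

The same code works unchanged for records that do carry a phone map, so adding or dropping optional fields never breaks readers.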
Data Modeling

Oracle NoSQL Database allows developers to manipulate data using the popular and well-understood paradigm of tables, with JSON as a native data type for a table column: the CREATE TABLE DDL call can create JSON column types. Let's see how this model allows for greater flexibility. Consider the following CREATE TABLE statement:

CREATE TABLE personContacts (
  account INTEGER,
  person JSON,
  PRIMARY KEY (account)
);

Executing the statement creates a table with a primary key and a JSON column. This model is equivalent to document NoSQL stores in that each record in the table is a JSON document, but with the primary (or shard) key defined outside the document. You can assume that some of the properties of the person JSON data are always defined (either by default or with some value). It is also possible to define the same table another way:

CREATE TABLE personContacts (
  account INTEGER,
  person JSON,
  primary_phone INTEGER,
  PRIMARY KEY (account)
);

Here the table schema allows the self-described JSON type to be mixed with columns of other data types, taking common fields or properties out of the JSON so the application gains better control where applicable.

Querying

Oracle NoSQL Database provides comprehensive ways to query individual fields or properties within the JSON data type, including the ability to create indexes on those fields. Querying JSON is possible through API calls as well as through the SQL-like interface. In this blog we will look at some examples using the SQL-like interface; for the APIs supported to perform queries over JSON data, please see this section in the product documentation. Querying could be a topic for a series of blog posts in itself, but here I will discuss some of the high-level aspects that contribute to the flexibility, and in turn the power, of querying JSON data.
Oracle NoSQL Database supports expressions, which represent a set of operations performed over the stored records to produce a result, using various operators (arithmetic, logical, value and sequence comparisons), function calls and other grammatical constructs. Consider the query below:

sql-> select p.person.lastName, p.account from personContacts p where p.person.address.home.city = "Lakeville";
{"lastName":"Jones","account":1}

1 row returned

The query returns the lastName and account number of every user who lives in "Lakeville." The expression steps into the JSON data to identify the lastName property. The example is a typical Select-From-Where query in which the where clause uses specific attributes from the JSON data as a predicate. Such expressions can be written to form a variety of SFW (Select-From-Where) queries that retrieve results based on complex criteria or conditions, and they can appear at multiple places in the supported grammar, further enhancing flexibility. The supported grammar for SFW queries is:

SELECT <expression>
FROM <table name>
[WHERE <expression>]
[ORDER BY <expression> [<sort order>]]
[OFFSET <number>]
[LIMIT <number>];

The beauty of the query support is that the same set of queries works on the complex data types supported by Oracle NoSQL Database, such as Arrays, Maps, and Records. As a developer, if you want to move from wild and free JSON to a more structured schema approach by mapping JSON data to multiple column types, the queries remain untouched. So as you firm up your application schema, you have a choice: stay with JSON-formatted data or move to a "type consistent" table schema. Oracle NoSQL Database version 4.5 or higher includes support for the JSON features discussed here; you can download it from here. To learn more about building real-world, web-scale JSON data applications with Oracle NoSQL Database, refer to this extensive tutorial.
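As an illustration of indexing into JSON, an index backing the home-city predicate used above could be declared roughly as follows. This is a sketch: the typed-JSON index DDL shown here is an assumption, and the exact syntax may vary across releases, so verify it against the documentation for your version.

```sql
-- Hypothetical index on a typed JSON field; verify syntax against your release.
CREATE INDEX idx_home_city ON personContacts (person.address.home.city AS STRING);
```

With such an index in place, the Select-From-Where query above can be satisfied without scanning every document in the table.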



Real Time Tracking with Oracle NoSQL Database

I recently got back from a short trip to Boston to watch my daughter race in the Boston Marathon. As soon as my wife and I found some spots to watch, about 1 mile from the finish line, we got out our phones and fired up the tracking software. We were both disappointed in our ability to get timely updates on our daughter's progress. Remember that once you have a space to stand and watch, you basically don't move for more than 4 hours while trying to figure out when your favorite runner will pass your spot. An effective and efficient tracking application is critical for this. I got to thinking about the application for tracking runners, now that RFID tags are commonplace and so inexpensive. Each numbered bib that a runner wears contains an RFID chip that is activated as the runner passes over or through a data activation mat. Here is what the sensor looks like from an actual Boston Marathon bib. During the race, at specific intervals, the time of activation of the sensor is captured and stored, and some simple computations are then performed, such as the most recent minutes/mile and an extrapolation of the expected finishing time. A NoSQL application for timing runners would be quite straightforward to develop. Let's look at two of the basics.

Registration – when someone registers to run in a race, basic information must be acquired, including name, address, phone and birthday. The birthday is actually quite important, as qualifying times are based on age at the time of the race, as is how a participant places within their respective age group. For example, a JSON document could be created at registration time with the following information.
{
  "RunnerId": 12345,
  "Runner": {
    "firstname": "John",
    "lastname": "Doe",
    "birthdayyear": "1985",
    "birthdaymonth": "02",
    "birthdayday": "15",
    "email": "john.doe@example.net",
    "payment": "Mastercard",
    "paymentstatus": "paid",
    "social": {
      "twitter": "@jdoe",
      "instagram": "jdoe_pics"
    }
  }
}

As the race begins, each runner passes over a mat on the ground, which activates the RFID chip and records the start time. As the runner progresses over the race course, at specified intervals the runner crosses more of these mats and the times are recorded. Simple math can then determine the elapsed time for that specific runner and the minutes per mile over the past interval, as well as extrapolate the expected finish time. The JSON data sent as the race progresses may look like the record below, which is quite small and can be transmitted to the servers quite quickly, or even batched up (depending on the transmitting device's capability) and sent every few hundred runners, or when there is a break in the runners crossing the mat:

{
  "RunnerId": 12345,
  "milestone": "milestone_5k",
  "timestamp": "2017-04-12T10:21:00"
}

This information can then be added to the race record for the runner as they make progress:

"Marathon_Boston": {
  "RunnerId": 12345,
  "start_time": "2017-04-12T10:00:00",
  "milestone_5k": "2017-04-12T10:21:00",
  "milestone_10k": "2017-04-12T10:44:00",
  "milestone_15k": "2017-04-12T11:10:00",
  "milestone_20k": "2017-04-12T11:25:00",
  "milestone_25k": "2017-04-12T11:42:00",
  "milestone_30k": "2017-04-12T11:56:00",
  "milestone_35k": "2017-04-12T12:09:00",
  "milestone_40k": "2017-04-12T12:28:00",
  "milestone_41k": "2017-04-12T12:42:00",
  "milestone_42k_end": "2017-04-12T12:45:00"
}

Overall, this would be an ideal application for a NoSQL system. The amount of data, even for a 35,000-person race, would not be very much, and even less so as the runners spread out compared to the starting gates.
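The "simple math" mentioned above can be sketched in a few lines of self-contained Java (the class and method names are illustrative, not part of any Oracle NoSQL API): given two milestone timestamps and the distance between them, compute the pace and extrapolate a finish time.

```java
import java.time.Duration;
import java.time.LocalDateTime;

// Illustrative sketch: pace and projected finish from milestone timestamps.
public class RacePace {

    static final double MARATHON_KM = 42.195;

    // Minutes per kilometer over the interval between two milestones.
    static double paceMinPerKm(LocalDateTime from, LocalDateTime to, double km) {
        return Duration.between(from, to).toSeconds() / 60.0 / km;
    }

    // Projected finish, assuming the pace so far holds for the whole race.
    static LocalDateTime projectedFinish(LocalDateTime start,
                                         LocalDateTime lastMilestone,
                                         double kmSoFar) {
        long elapsedSec = Duration.between(start, lastMilestone).toSeconds();
        long totalSec = Math.round(elapsedSec * MARATHON_KM / kmSoFar);
        return start.plusSeconds(totalSec);
    }

    public static void main(String[] args) {
        LocalDateTime start = LocalDateTime.parse("2017-04-12T10:00:00");
        LocalDateTime at5k  = LocalDateTime.parse("2017-04-12T10:21:00");
        System.out.println(paceMinPerKm(start, at5k, 5.0));  // 4.2 min/km
        System.out.println(projectedFinish(start, at5k, 5.0));
    }
}
```

A tracking service would run this kind of computation each time a milestone record like the one above arrives, then push the updated projection to spectators' phones.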
If we assume that each runner's record consumes about 1 KB of data, then for the entire race there would only be about 35 MB of raw data. If we then assume a replication factor of 3, and include some overhead, the entire race data would need about 225 MB of storage, which could easily fit on a USB thumb drive. High-speed SSDs can store data in the terabyte (TB) range, so thousands of marathons' worth of results could be stored in a single Oracle NoSQL Database. This still doesn't answer the question of why the updates were so slow, but according to my source in the Boston area, the downtown is notorious for poor cell service; add many thousands of race watchers trying to use their tracking apps at basically the same time, and you can start to understand the delays. At least we know that if the system were based on NoSQL, it would not be the culprit.


Oracle NoSQL Database on Docker

Docker makes it easier for developers to package, deploy and run applications by using containers, and makes applications portable across operating systems and cloud platforms. The latest release of Oracle NoSQL Database (version 4.3) is now available as a Docker container on Docker Hub. This means that if you already have Docker installed, you can get set up and running with Oracle NoSQL Database with just one command:

$ docker run -d --name=kvlite oracle/nosql

The Docker container for Oracle NoSQL Database downloads the latest Oracle NoSQL Database Community Edition and jump-starts a kvlite instance, a single-node, single-shard Oracle NoSQL Database store. Developers can start playing with Oracle NoSQL Database, including some of the new and exciting features of version 4.3, in no time. Oracle NoSQL Database provides strong security support, and starting with version 4.3 security is enabled by default. Please note, however, that the Docker version by default bootstraps the kvlite instance with security disabled:

CMD ["java", "-jar", "lib/kvstore.jar", "kvlite", "-secure-config", "disable"]

Building your own Docker image with security enabled is very simple:
1. Clone github.com/oracle/docker-images/nosql
2. Update 4.3.11/Dockerfile to include the following command, which will bring up a secured kvlite instance when the container is run:

CMD ["java", "-jar", "lib/kvstore.jar", "kvlite"]

To connect to the secured kvlite instance, follow the commands to start the CLI as noted in the Oracle NoSQL Database documentation, here. The Oracle NoSQL Database Docker container will have you storing and retrieving data in minutes, but there is a lot more you can do with this. Abhishek Gupta put together a simple Jersey (JAX-RS implementation) web application that uses Oracle NoSQL Database over REST APIs.
In this example, he builds two Docker containers, one for Oracle NoSQL Database and another that runs the Jersey application. You can follow these simple steps and extend the web application to build your own Oracle NoSQL Database application. Happy coding!

Docker and the Docker logo are trademarks or registered trademarks of Docker, Inc. in the United States and/or other countries. Docker, Inc. and other parties may also have trademark rights in other terms used herein.


Oracle NoSQL Database Keeps Your Data Secure

Recent news has brought back the focus on how a poorly secured database server can cause irreversible damage to the reputation of the software vendor, apart from many other tangible and intangible losses. The security features in Oracle NoSQL Database make it a member of the Oracle family of products, which pride themselves on being very secure. This blog briefly describes these security features.

1) There are two levels of security: network security, and user authentication and authorization. Network security provides an outer layer of protection at the network level and is configured during the installation process using cryptographic keys, X.509 certificate chains and trusted certificates. This means that communication between the client and server nodes, and also between the server nodes themselves, is encrypted using the SSL/TLS protocol. User authentication and authorization can either be managed using Oracle NoSQL Database utilities or relegated to any Kerberos-compliant LDAP/single-sign-on server.

2) Starting with release 4.3 of Oracle NoSQL Database, the security features are enabled by default.

3) Access to a secure Oracle NoSQL Database is limited to authenticated users. Oracle NoSQL Database provides tools for user and password management.

4) Password credentials for connecting to the database are stored in a client-side Oracle Wallet, a secure software container used to store authentication and signing credentials. With Oracle Wallet, applications no longer need to embed user names and passwords in application code and scripts. This considerably reduces risk because the user credentials are no longer exposed in the clear, and password management policies are more easily enforced without changing application code whenever user names or passwords change.

5) Oracle NoSQL Database provides a set of default rules for creating and updating a user password in order to enhance security. These rules enable the administrator to enforce strong password policies such as minimum and maximum password length; minimum number of upper-case and lower-case characters, digits and special characters; password expiry; a list of restricted passwords; and the maximum number of remembered passwords that cannot be reused when setting a new password.

6) Successfully authenticated users receive an identifier for a login session that allows a single login operation to be shared across Storage Nodes. That session has an initial lifetime associated with it, after which the session is no longer valid. The server notifies the user with an error once the session is no longer valid, and the application then needs to re-authenticate.

7) Oracle NoSQL Database provides role-based authorization. A user can be assigned one or more roles. A role can either be a built-in system role (readonly, writeonly, readwrite, dbadmin, sysadmin and public) or a user-defined role. The built-in roles map to one or more privileges. A privilege can either be a system privilege or an object (table-level) privilege: a system privilege grants the user the ability to perform a store-wide action, while an object privilege grants the ability to perform an action only on a specific object (table).

8) With Kerberos integration, Oracle NoSQL Database can relegate authentication and authorization to any Kerberos-compliant LDAP or single-sign-on server.

In summary, Oracle takes security very seriously for all of its products, and Oracle NoSQL Database has been designed from the start to be secure and to protect user data. Please refer to the Security Guide for more details on any of the above security features.


Oracle NoSQL Database performance on Oracle Cloud Infrastructure (OCI) IaaS

We measured the performance of Oracle NoSQL Database release 4.3 running on Oracle Cloud Infrastructure (OCI) compute nodes using the YCSB workload. YCSB is the most popular benchmark for NoSQL systems. We are pleased to report the following performance results of our test.

Performance Benchmark Results
  Clients: 6
  Records: 450 million
  Insert throughput: 276,352 ops/sec
  Mixed throughput: 843,969 ops/sec
  Insert latency (avg/95%/99%): 4.29/12/26 ms
  Read latency (avg/95%/99%): 1.11/3/8 ms
  Update latency (avg/95%/99%): 5.79/13/61 ms

Hardware Configuration
We used three DenseIO nodes to run the Oracle NoSQL Database server and three Standard nodes for the YCSB driver application.
  DenseIO node: 36 OCPU, 512 GB RAM, 28.8 TB NVMe SSD, 10 Gbps network.
  Standard node: 36 OCPU, 256 GB RAM, 10 Gbps network.

Software Configuration
We used Oracle Linux Server release 6.8 on the client as well as the server nodes, and Oracle NoSQL DB version 4.3, which can be downloaded from here. The server was configured as a 9-shard system with three-way replication (RF = 3).

Benchmark Configuration
We used the standard YCSB parameters: 13-byte keys and 1024-byte values. The database size was 450 million records. All insert and update operations were performed with a durability policy of simple majority, and all read operations with the NONE_REQUIRED read consistency policy. A total of 6 client processes were used, with 2 client processes running on each of the 3 Standard Compute Capacity nodes. Each client used 200 threads for the insert workload as well as the mixed read/update workload.

NOTE: Learn how Oracle assists developers by automating the installation of Oracle NoSQL Database on Oracle Cloud Infrastructure (OCI). Click here for details.


Oracle NoSQL Database for IoT environments

How can NoSQL be used in an IoT solution to manage truck parking spaces? A problem that exists in many domains is how to allocate and efficiently use a finite resource when demand exceeds the normal supply. For example, there is a finite supply of truck parking spaces at truck stops, and on any given night there may be more demand than supply. How does a trucker know if a space is available at the next rest stop, or whether to drive the extra hour to the one after it? With federal rules governing the amount of time a driver may be on the road and required rest periods, optimizing the drive/rest cycle and using this limited resource is critical to running a truck stop business.

Pain point: thousands of truck parking spaces need to be managed (reserved, charged for, analyzed and maintained, along with collateral fuel and store purchases), and demand for these spaces is high. There are about 1.5 million large trucks in the US, and new federal regulations require truck drivers to stop and rest more frequently. This is creating huge demand for parking spaces, and thus the opportunity to monetize them. The customer needs to be able to monitor the availability of each space, charge for its use, and tie this information into their back-office applications. This is their business, so it is crucial to capture the data quickly and efficiently. A combination of sensor technology at the parking sites and wireless reservation systems with real-time capacity information can dramatically increase revenue, and increase driver safety as well, since drivers can plan more efficient drive and rest cycles. An architecture that includes an IoT service to acquire the data, plus NoSQL to store it before sending it on to a longer-term database, can greatly increase revenue and operating efficiency.


Oracle NoSQL Database - SQL Query Overview & Demo

SQL (Structured Query Language) is one of the most popular query languages for accessing databases. Oracle NoSQL Database (release 4.2 or later) offers this query feature to simplify application development: developers can now use their knowledge and experience in SQL with Oracle NoSQL Database. This blog provides a quick and easy guide to get you started testing SQL queries in Oracle NoSQL Database. The following steps will guide you through the setup process.

Step 1: Oracle NoSQL Database Setup
If you have not installed Oracle NoSQL Database, please download the product here. Either the Enterprise or Community edition is fine; the Enterprise edition is used in this example. Place the package in /home/oracle/nosql, then follow the commands below to unzip it and launch KVLite, a single-node Oracle NoSQL Database:

gunzip <name of the downloaded package>
  Example: gunzip kv-ee-4.2.10.tar.gz
tar -xvf <name of the gunzipped package>
  Example: tar -xvf kv-ee-4.2.10.tar
KVHOME=<directory of the unzipped package>
  Example: KVHOME=/home/oracle/nosql/kv-4.2.10
java -jar $KVHOME/lib/kvstore.jar kvlite

Step 2: SQL Test Environment Setup
A small sample database and scripts will be used to set up the test environment; a table is used to model the data. Download this demo file, then place and unzip it in the same parent directory where Oracle NoSQL Database is located (for example, /home/oracle/nosql):

gunzip TestSQL.tar.gz
tar -xvf TestSQL.tar

After the file is unzipped, a folder called TestSQL should appear. Follow the commands below to set up the database and the SQL test environment:

cd TestSQL
./testSQL.sh

Step 3: SQL Testing
A file called testSQL.txt (in the TestSQL folder) contains a collection of sample SQL statements that can be used for testing. For further details on SQL usage in Oracle NoSQL products, please refer to the online documentation.


Oracle NoSQL Database - putIfVersion() API for read-modify-write operations

I read an interesting article about how hackers were able to steal 896 BTC and cause Flexcoin to shut down because the loss was too large for the company to bear. The article highlights the need for software implementers to provide APIs designed to prevent unexpected behavior in read-modify-write (RMW) operations. For example, if you need to update an account balance, you perform the following sequence of operations:

Step 1: curr_value = get(key, "my_account_no"); // read from the database into the application
Step 2: new_value = curr_value - amount_to_be_deducted; // update the value in the application
Step 3: put(key, new_value); // store the updated value in the database

A database system that supports multi-statement transactions ensures that the account balance for my_account_no cannot be changed by another user until the update has been committed. Unfortunately, most NoSQL database systems do not support multi-statement transactions. If concurrent users perform RMW operations like the one described above on the same record, it is possible to get incorrect results, as the Flexcoin article illustrates. Oracle NoSQL Database provides APIs that help prevent this problem, even in the presence of concurrent activity. In this blog, we are going to talk about one such useful API: putIfVersion. KVStore.putIfVersion is a simple yet powerful API that can be used when updating a value to ensure that it has not changed since the last time it was read.

Version putIfVersion(Key key, Value value, Version matchVersion)
    Put a key/value pair, but only if the version of the existing value matches the matchVersion argument.
Version putIfVersion(Key key, Value value, Version matchVersion, ReturnValueVersion prevValue, Durability durability, long timeout, TimeUnit timeoutUnit)
    Put a key/value pair, but only if the version of the existing value matches the matchVersion argument.
Let us see how putIfVersion can be used to avoid consistency issues in the application. Consider the same sequence of RMW operations:

Step 1: curr_value = get(key, "my_account_no");
Step 2: new_value = curr_value - amount_to_be_deducted;
Step 3: put(key, new_value);

Step two can be more elaborate, with checks to ensure that new_value is still valid (e.g. non-zero, above a certain limit, etc.). Now, if two or more application threads update the same value at the same time, it is possible that both threads read the current value before either put() operation executes. At the end of this sequence the stored value would be incorrect: the amount was deducted twice, but the database reflects only a single deduction. The putIfVersion() API solves this problem. It protects the application by making an additional check during the update, ensuring that the value has not changed since it was last read:

putIfVersion(key, new_value, curr_version); // curr_version is the Version returned when the value was read

If another thread updated the record in the meantime, the version no longer matches, the call fails, and the application can re-read and retry. Although the example above illustrates the usage of putIfVersion() for the Key/Value data model, Oracle NoSQL Database implements the putIfVersion() API for both the Key/Value and Table data models. Refer to the sample code here to leverage the putIfVersion() API in your application. Happy (and safe) coding!
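The read-modify-write-with-version-check discipline above can be illustrated with a self-contained sketch. This is not the Oracle NoSQL client API: it pairs a value with a version in an in-memory AtomicReference purely to show the retry pattern, and all class and method names here are illustrative.

```java
import java.util.concurrent.atomic.AtomicReference;

// Self-contained illustration of the putIfVersion idea: a value paired with
// a version, updated only when the version still matches what was read.
public class VersionedBalance {

    static final class Versioned {
        final long value;
        final long version;
        Versioned(long value, long version) { this.value = value; this.version = version; }
    }

    private final AtomicReference<Versioned> slot;

    VersionedBalance(long initial) {
        slot = new AtomicReference<>(new Versioned(initial, 0));
    }

    long balance() { return slot.get().value; }

    // Analogous to putIfVersion: succeeds only if nobody updated in between.
    boolean putIfVersion(long newValue, Versioned matchVersion) {
        return slot.compareAndSet(matchVersion, new Versioned(newValue, matchVersion.version + 1));
    }

    // Read-modify-write with retry: the safe way to deduct an amount.
    void deduct(long amount) {
        while (true) {
            Versioned curr = slot.get();             // Step 1: read value + version
            long newValue = curr.value - amount;     // Step 2: modify in the application
            if (putIfVersion(newValue, curr)) {      // Step 3: conditional write
                return;                              // success
            }                                        // another thread raced us: retry
        }
    }

    public static void main(String[] args) throws InterruptedException {
        VersionedBalance acct = new VersionedBalance(1000);
        Thread t1 = new Thread(() -> acct.deduct(100));
        Thread t2 = new Thread(() -> acct.deduct(100));
        t1.start(); t2.start(); t1.join(); t2.join();
        System.out.println(acct.balance());  // 800: both deductions applied exactly once
    }
}
```

Without the version check, the two racing threads could both compute 900 from the same read and the final balance would be wrong; with it, the loser of the race simply retries against the fresh value.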


Oracle NoSQL Database 4.0 Released

Oracle NoSQL Database 4.0 is now released and available for download from the Oracle Technology Network download pages.

Oracle NoSQL Database Version 4.0 – New Features
· Full text search – ability to perform full text searches over the data using Elasticsearch.
· Time-To-Live – efficient aging-out of "expired" data; a common IoT requirement.
· SQL Query – declarative query language for developers more comfortable with SQL than API-level access.
· Predicate Pushdown – ability to process predicates from Big Data SQL in NoSQL Database nodes, leading to improved performance and scalability.
· Import/Export – easy backup/restore of data, or moving data between different Oracle NoSQL Database stores.

Oracle NoSQL Database continues to enable innovative applications that require very fast access to data and flexible interfaces.
· Integrated – connects with many other Oracle products for a complete, productive environment.
· Fast – utilizes parallelism at a number of levels.
· Flexible – flexible schema, not predefined.
· Simple to implement – setup, administration and integration are straightforward.
· Reliable – built-in data replication and HA support for availability and business continuity.
· Scalable – add more servers to handle more data and throughput with zero downtime.

Learn More About Oracle NoSQL Database:
LinkedIn Group – https://www.linkedin.com/groups/4147754
Twitter – @OracleNoSQL
Community – https://community.oracle.com/community/database/high_availability/nosql_database


Oracle NoSQL Database Cluster YCSB Testing with Fusion ioMemory™ Storage

Highly distributed systems with large data stores in the form of NoSQL databases are becoming increasingly important to enterprises, not just to hyperscale organizations. NoSQL databases are being deployed for capturing patient sensor data in health care, smart meter analysis in utilities, customer sentiment analysis in retail, and various other use cases in different industries. NoSQL database systems help organizations store, manage, and analyze huge amounts of data on distributed system architecture. The sheer volume of data and the distributed system design needed to manage this large data at a reasonable cost necessitated a different category of database systems, leading to NoSQL databases. Oracle NoSQL Database is part of the NoSQL database family and is based on a distributed, key-value architecture. This technical white paper describes a three-node Oracle NoSQL Database cluster deployment procedure on Fusion ioMemory™ storage. The following points are emphasized:
· Performance and scalability advantages compared to traditional spinning disks.
· Because enterprises evaluate and assess new technologies for enterprise-wide adaptability, the Yahoo Cloud Serving Benchmark (YCSB) is the standard benchmark tool employed for testing, and it is the tool used in this paper to evaluate Oracle NoSQL Database.
· Analysis and discussion of throughput and latency testing results with YCSB.

Download now at: https://www.sandisk.com/content/dam/sandisk-main/en_us/assets/resources/enterprise/white-papers/oracle-nosql-cluster-ycsb-testing-with-fusion-iomemory.pdf


Oracle NoSQL BulkPut

Our customers have often asked us: "What's the fastest and most efficient way to insert a large number of records into Oracle NoSQL Database?" Very recently, a shipping company reached out to us with a specific requirement to use Oracle NoSQL Database for their ship management application, which tracks the movements of container ships carrying cargo from port to port. The cargo ships are all fitted with GPS and other tracking devices, which relay each ship's location to the application every few seconds. The application is then queried for 1) the location of all ships, displayed on a map, and 2) a specific ship's trajectory over a given period of time, also displayed on the map. As the volume of location data grew, the company found it hard to scale the application and is now looking for a back-end system that can ingest this large data set very efficiently.

Historically, we have supported the option to execute a batch of operations for records that share the same shard key, which is what our large airline customer (Airbus) has done: they pre-sort the data by shard key and then perform a multi-record insert when the shard key changes. Rather than sending and storing one record at a time, they can send a large number of records in a single operation. This saves network round trips, but they could only batch-insert records that shared the same shard key. With Oracle NoSQL Database release 3.5.2, we have added the ability to do a bulk put of records across different shards, allowing application developers to work more effectively with very large data sets. The BulkPut API is available for the Table as well as the Key/Value data model. The API provides significant performance gains over single-row inserts by reducing network round trips as well as by doing ordered inserts in batches, on internally sorted data, across different shards in parallel.
This feature is released in a controlled fashion, so there aren't Javadocs available for this API in this release, but we encourage you to use it and give us feedback.

API
KV interface: loads Key/Value pairs supplied by special-purpose streams into the store.
public void put(List<EntryStream<KeyValue>> streams, BulkWriteOptions bulkWriteOptions)

Table interface: loads rows supplied by special-purpose streams into the store.
public void put(List<EntryStream<Row>> streams, BulkWriteOptions bulkWriteOptions)

streams - the streams that supply the rows to be inserted.
bulkWriteOptions - non-default arguments controlling the behavior of the bulk write operations.

Stream interface:
public interface EntryStream<E> {
    String name();
    E getNext();
    void completed();
    void keyExists(E entry);
    void catchException(RuntimeException exception, E entry);
}

Performance
We ran the YCSB benchmark with the new BulkPut API on a 3x3 (3 shards, each with 3 copies of the data) NoSQL cluster running on bare metal servers, ingesting 50M records per shard, or 150M records across the data store, using 3 parallel threads per shard (9 in total for the store) and 6 parallel input streams per SN (54 in total across the store). The results of the benchmark run are shown in the graph below, which compares the throughput (ops/sec) of the bulk versus the simple put API with a NoSQL store having 1000 partitions, under durability settings of None and Simple Majority. With either durability setting there is over a 100% increase in throughput.

Sample Example
Here is a link to a program uploaded to the GitHub repository; the sample demonstrates how to use the BulkPut API in your application code. Refer to the readme file for details of how to execute the program.

Summary
If you are looking at bulk loading data into Oracle NoSQL Database, the BulkPut API provides the most efficient and fastest (as demonstrated by YCSB) way to ingest large amounts of data.
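The EntryStream contract above can be satisfied by a simple list-backed implementation. The sketch below declares a local copy of the interface (mirroring the signature quoted above) so it compiles stand-alone; in a real application you would implement oracle.kv.EntryStream and pass a list of such streams to the bulk put(...) call.

```java
import java.util.Iterator;
import java.util.List;

// Local copy of the stream contract quoted above, so the sketch compiles
// stand-alone; in a real application you would implement oracle.kv.EntryStream.
interface EntryStream<E> {
    String name();
    E getNext();           // null signals end of stream
    void completed();
    void keyExists(E entry);
    void catchException(RuntimeException exception, E entry);
}

// Minimal list-backed stream: hands out entries one at a time.
public class ListEntryStream<E> implements EntryStream<E> {

    private final String name;
    private final Iterator<E> it;
    private int duplicates = 0;

    public ListEntryStream(String name, List<E> entries) {
        this.name = name;
        this.it = entries.iterator();
    }

    public String name() { return name; }
    public E getNext() { return it.hasNext() ? it.next() : null; }
    public void completed() { System.out.println(name + " done, duplicates=" + duplicates); }
    public void keyExists(E entry) { duplicates++; }   // count (or log/overwrite) duplicates
    public void catchException(RuntimeException e, E entry) { throw e; }

    public static void main(String[] args) {
        ListEntryStream<String> s = new ListEntryStream<>("stream-1", List.of("a", "b"));
        System.out.println(s.getNext());  // a
        System.out.println(s.getNext());  // b
        System.out.println(s.getNext());  // null
    }
}
```

Splitting the input across several such streams (for example, one per input file) is what lets the bulk API sort and load the shards in parallel.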
Check it out now and download the latest version of Oracle NoSQL Database at www.oracle.com/nosql. I'd like to thank my colleague Jin Zhao for his input on the performance numbers.


Oracle NoSQL BulkGet API

As seen from timing graph (a) below, getting 30M rows using the simple get API would have taken 420 seconds, which drops to 149 seconds with 72 executor threads (plotted on the X axis) using the bulk get API. This is almost a 3X improvement! And as seen in graph (b), throughput went to 200K ops/sec with 72 executor threads, from 68K ops/sec using simple get operations; again, almost a 3X improvement. In these charts, bulk-X is the maximum number of concurrent requests, which specifies the maximum degree of parallelism (in effect the maximum number of NoSQL client-side threads) to be used when running an iteration. To achieve good performance we want to keep the Replication Nodes working as much as possible, so as a starting point we suggest using 3 * the number of available RNs (Replication Nodes). For example, if you are running a 3x3 store (3 shards, each with 3 copies of the data), you could start with 3 * 9 = 27 concurrent requests. Of course, the optimal value varies with the nature of the application's requirements: some applications may want to be unobtrusive and use minimal resources (but efficiently), with elapsed time being a lower priority, for example when running analytics on secondary zones; on the other hand, you may want strong real-time latency when running multi-get on primary zones. Please refer to the Java documentation for the API; both the Key/Value and Table interfaces are supported.

Performance
In our internal YCSB benchmark runs we found that we could retrieve 30M rows in 149 seconds with 72 executor threads, running on 3x3 shards, with 90 reader threads, where each record is 100 bytes.
Refer to the below chart  for details of the benchmark runs                                         (a)                                                                                                      (b) As seen from the timing graph (a)  above, getting 30M rows using simple get api  would have taken us 420 seconds which reduces to 149ms with 72 executor threads (Plotted on X-Axis) using the bulk get API. This is almost a 3X improvement ! And as seen on the graph (b) the throughput went to 200K ops/s with 72 executor threads from 68k ops/sec using simple get operation. That is again a 3X improvement! In the above charts the bulk-X  is the maximum number of concurrent request that specifies the maximum degree of parallelism (in effect the maximum number of NoSQL Client side threads) to be used when running an iteration. To achieve good performance we want to keep the Replication Nodes working as much as possible, so as a starting point we suggest to use 3 * available RN (Replication Nodes). For e.g.  if you are running a 3x3 store then you could start with 3*9 ( 3 shards each with 3 copies of data) = 27 concurrent requests. Off course, the optimal value varies based on the nature of the application requirements - some may want to be unobtrusive and use minimal resources (but efficiently) with elapsed time being a lower priority for e.g. running analytic on secondary zones and on the other hand you may want strong a real-time latency running multi-get on Primary Zones.   
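The suggested starting point above (3 * the number of available Replication Nodes) can be sketched as a tiny helper. This is a hypothetical illustration, not part of the product API:

```java
public class ConcurrencyHint {

    /**
     * Suggested starting value for the maximum number of concurrent
     * requests: 3 * the number of Replication Nodes, where the RN count
     * is shards * replicationFactor (e.g. a 3x3 store has 9 RNs).
     */
    static int suggestedConcurrency(int shards, int replicationFactor) {
        return 3 * shards * replicationFactor;
    }

    public static void main(String[] args) {
        // A 3x3 store: 3 shards, each with 3 copies of the data.
        System.out.println(suggestedConcurrency(3, 3)); // prints 27
    }
}
```

From there, tune the value up or down depending on whether elapsed time or resource usage matters more to the application.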
Table Definition and Sample BulkGet Example

Below is some sample code to demonstrate how to use this API.

PhoneTable (manufacturer string,
            price double,
            ...
            primary key(shard(manufacturer), price));

import java.util.ArrayList;
import java.util.List;

import oracle.kv.Direction;
import oracle.kv.KVStore;
import oracle.kv.KVStoreConfig;
import oracle.kv.KVStoreFactory;
import oracle.kv.StoreIteratorException;
import oracle.kv.table.FieldRange;
import oracle.kv.table.MultiRowOptions;
import oracle.kv.table.PrimaryKey;
import oracle.kv.table.Row;
import oracle.kv.table.Table;
import oracle.kv.table.TableAPI;
import oracle.kv.table.TableIterator;
import oracle.kv.table.TableIteratorOptions;

public class BulkGetExample {

    /*
     * Connection parameters are hard-coded in this example; they could be
     * taken as input to the program or read from a configuration file.
     */
    private final String storeName = "kvstore";
    private final String hostName = "localhost";
    private final int port = 5000;
    private final String tableName = "PhoneTable";

    final KVStore store;

    public static void main(final String args[]) {
        try {
            BulkGetExample runTest = new BulkGetExample();
            runTest.run();
        } catch (Exception e) {
            System.err.print("BulkGetExample run failed: " + e.getMessage());
        }
    }

    BulkGetExample() {
        store = KVStoreFactory.getStore(
            new KVStoreConfig(storeName, hostName + ":" + port));
    }

    void run() {
        final String[] manufacturers = {"Nokia", "Apple", "Samsung", "Motorola"};
        final List<PrimaryKey> keys =
            new ArrayList<PrimaryKey>(manufacturers.length);

        final TableAPI tableAPI = store.getTableAPI();
        final Table phoneTable = tableAPI.getTable(tableName);
        if (phoneTable == null) {
            throw new IllegalArgumentException("Table not found: " + tableName);
        }

        for (String manufacturer : manufacturers) {
            final PrimaryKey pk = phoneTable.createPrimaryKey();
            pk.put("manufacturer", manufacturer);
            keys.add(pk);
        }

        /* Initialize MultiRowOptions: price range in [200, 500]. */
        final FieldRange range = phoneTable.createFieldRange("price");
        range.setStart(200d, true).setEnd(500d, true);
        final MultiRowOptions mro = new MultiRowOptions(range, null, null);

        /*
         * Initialize TableIteratorOptions.
         *
         * batchResultsSize (200) is the maximum number of results batches
         * that can be held in the NoSQL client before they are processed.
         *
         * parallelism (9) is the maximum number of concurrent executor
         * threads; this run was on a 3x3 store (9 Replication Nodes).
         */
        final int batchResultsSize = 200;
        final int parallelism = 9;
        final TableIteratorOptions tio =
            new TableIteratorOptions(Direction.UNORDERED,
                                     null /* Consistency */,
                                     0 /* requestTimeout */,
                                     null /* TimeUnit */,
                                     parallelism,
                                     batchResultsSize);

        TableIterator<Row> itr = null;
        int count = 0;
        try {
            itr = tableAPI.tableIterator(keys.iterator(), mro, tio);
            while (itr.hasNext()) {
                final Row phone = itr.next();
                System.out.println(phone.toJsonString(false));
                count++;
                /* ... */
            }
            System.out.println(count + " rows returned.");
        } catch (StoreIteratorException sie) {
            /* Handle exception... */
        } finally {
            if (itr != null) {
                itr.close();
            }
        }
    }
}

The above example returns an iterator over the rows matching the manufacturers supplied by the key iterator, restricted to the price range [200, 500]. If it is also desired to retrieve, say, the images of all the phones along with the other details, the images can be modeled as a child table (for efficiency reasons) and retrieved in the same single API call.

Summary

The Oracle NoSQL bulk get API provides an effective and performant way to fetch data in bulk from the Oracle NoSQL Database. As demonstrated by the YCSB runs, using this API you can expect between a 2X and 3X performance improvement when retrieving data in bulk.


Oracle NoSQL Database 12.2.3.5.2 Available

Oracle NoSQL Database, version 12.2.3.5.2, is now available for download. We strongly recommend that you download this new version. The highlights are:

Bulk Put API - A high-performance API that allows the application to insert multiple records (bulk put) in a single API call. The Bulk Put API is available for both the table and key/value data models. It provides significant performance gains over single-row inserts by reducing network round trips and by doing ordered batch inserts on internally sorted data, allowing application developers to work more effectively with very large datasets.

Kerberos integration - Oracle NoSQL Database Enterprise Edition (EE) supports authentication using a Kerberos service. Kerberos is an industry-standard authentication protocol for large client/server systems. With Kerberos, Oracle NoSQL DB and application developers can take advantage of the existing authentication infrastructure and processes within your enterprise. To use Oracle NoSQL DB with Kerberos, you must have a properly configured Kerberos deployment, configure Kerberos service principals for Oracle NoSQL DB, and add a Kerberos user principal to Oracle NoSQL DB. Please refer to the security guide for details, and also see the sample code.

Read more and download from HERE.
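To see why batching inserts reduces network round trips, here is a small self-contained back-of-the-envelope model. This is a toy illustration, not the product API (the real entry point is the bulk put call on the NoSQL client):

```java
public class BulkPutRoundTrips {

    /** One network round trip per record: N records cost N trips. */
    static int singlePutTrips(int records) {
        return records;
    }

    /** Records grouped into batches: N records cost ceil(N / batchSize) trips. */
    static int bulkPutTrips(int records, int batchSize) {
        return (records + batchSize - 1) / batchSize;
    }

    public static void main(String[] args) {
        int records = 1_000_000;
        System.out.println("single put: " + singlePutTrips(records) + " round trips");
        System.out.println("bulk put (batch=500): " + bulkPutTrips(records, 500) + " round trips");
    }
}
```

The sorted, batched inserts add a further gain on top of this, since the store can apply each batch sequentially rather than seeking per record.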


OOW'15 Oracle NoSQL HOLs

Last week at OOW'15 we did Hands On Labs for the NoSQL Database. There were essentially two tracks: 1) administrators and 2) developers. I was often asked to provide the links for the labs; they are available on our OTN pages, and I am also posting them on this blog. The link contains a VM with pre-built scripts and a guide on the desktop that walks through the HOL step by step.

1) Admin track:
- Deploy a 3x3 cluster; we simulate a scenario of 3 machines, each with 3 disks
- Extend the cluster by adding a node to each shard (3x4), thereby increasing the read throughput
- Backup and recovery of your store
- Securing the store

2) Developer track:
- Deploy the cluster
- Create tables (parent and child)
- Create secondary indexes
- Populate the tables
- Query using the CLI and an application: range queries, retrieving both parent and child records in a single call, usage of secondary indexes
- Integrate with an RDBMS using external tables
- Data modelling exercise

The HOLs were designed to be very simple, and they give us an opportunity to help folks in our community get started with Oracle NoSQL DB, but I also realize that one hour is too short a time to convey all of the above, especially for folks who are trying the product for the first time. There are some very intricate details around distributed systems, topology management, data modelling, etc. That's something we'll work on as we do these sessions next year. I'd also like to hear from you: your feedback on content, material, etc. is welcome.


Invoking OracleNoSQL based Java application from PL/SQL

Recently, we ran into an interesting use case with one of our large supermarket customers, who wanted to take the output of a PL/SQL process and store it in Oracle NoSQL Database, to be consumed later by one of their retail applications, very quickly and in a way that can scale to support the high volume of data they expect. Oracle NoSQL DB is the obvious choice here because it provides high-throughput, low-latency read/write operations and can scale to support large volumes of data. Coming to the integration, one of the highlights of Oracle NoSQL Database is that it integrates very well with the rest of the Oracle technology stack. The simplest way to write to Oracle NoSQL DB from a PL/SQL procedure is to call a Java stored procedure that uses the native NoSQL DB API to insert data into the database, and the simplest way to read from Oracle NoSQL DB in a stored procedure is to use an external table in the query, so that data from Oracle NoSQL DB can be passed to the Oracle Database query processor. Another possible option is to use GoldenGate to move data from the Oracle Database to NoSQL DB. We have already blogged about the GoldenGate integration, so in this post I am going to focus on the Java stored procedure based approach. In case you are not familiar with Java stored procedures: a Java stored procedure essentially contains Java public static methods that are published to PL/SQL and stored in an Oracle database for general use. This allows a Java stored procedure to be executed from an application as if it were a PL/SQL stored procedure. When called by client applications, a Java stored procedure can accept arguments, reference Java classes, and return Java result values. So, to help our customer, we created a POC that showcases this integration.
The steps involved in this integration are:
1. First, create the NoSQL DB tables that will store the data from the Oracle Database.
2. Create a Java application using the native NoSQL driver to perform CRUD operations on NoSQL DB.
3. Load the Java application classes created in step 2 into the Oracle database using the loadjava utility.
4. Create a Java stored procedure that takes the data from PL/SQL and updates the NoSQL Database.
5. Next, publish the Java stored procedures in the Oracle data dictionary. To do that, you write call specs, which map Java method names, parameter types, and return types to their SQL counterparts.
6. Finally, call the Java stored procedure from a PL/SQL block to perform the updates.
The POC is available for download in a zip file from our OTN page (refer to the PL/SQL integration in the Demo/Sample Programs). The READ-ME file bundled in the zip has all the detailed steps and files needed for this integration. With this approach, the NoSQL access is transparent to the Oracle DB application. NoSQL DB is an excellent choice here, and using this Java stored procedure approach, the customer can exploit the advantages of BOTH repositories effectively and with better TCO.
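For reference, a Java stored procedure is just a class with public static methods. The sketch below (hypothetical names; the NoSQL write is stubbed out so the snippet is self-contained, where the real POC would call the native NoSQL driver) shows the kind of method that gets published to PL/SQL through a call spec:

```java
public class NoSqlBridge {

    /**
     * A method of this shape would be published to PL/SQL via a call spec,
     * e.g. (hypothetical):
     *   CREATE FUNCTION put_kv(k VARCHAR2, v VARCHAR2) RETURN VARCHAR2
     *     AS LANGUAGE JAVA
     *     NAME 'NoSqlBridge.putRow(java.lang.String, java.lang.String)
     *           return java.lang.String';
     */
    public static String putRow(String key, String value) {
        if (key == null || key.isEmpty()) {
            return "ERROR: empty key";
        }
        // In the real POC this is where the native driver would be called,
        // e.g. store.getTableAPI().put(row, null, null);
        return "OK:" + key;
    }

    public static void main(String[] args) {
        System.out.println(putRow("cust-42", "{\"basket\": 3}")); // prints OK:cust-42
    }
}
```

The return value gives PL/SQL a simple status string to check, which keeps the call-spec signature to plain VARCHAR2 types.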


Migrating/Importing MongoDB Documents into Nosql Tables

Summary

This paper presents how to migrate documents in MongoDB collections into tables and child tables in Oracle NoSQL. The idea is to take as an example a relatively complex document, define a mapping file to map the basic fields of the document to a table, and map the embedded collections of the document to child tables. The Java class that we provide generates the NoSQL table structures and inserts the data. The set of components of each element of the collection is inserted into the store in the same operation.

A JSON example

Let's use an example of a family item from a MongoDB collection:

{
  "_id" : ObjectId("55c4c6576e4ae64b5997d39e"),
  "firstname" : "lena",
  "lastname" : "clark",
  "gender" : "W",
  "childrens" : [
    { "name" : "bob",
      "schools" : [
        {"street" : "90, pine street", "id" : "Saint volume"},
        {"street" : "134, mice street", "id" : "Saint appearance"}
      ],
      "hobbies" : ["soccer","photo"]
    },
    { "name" : "joseph",
      "schools" : [
        {"street" : "168, merely street", "id" : "Saint slipped"}
      ],
      "hobbies" : ["tennis","piano"]
    },
    { "name" : "sandy",
      "schools" : [{"street" : "227, thread street", "id" : "Saint discovery"}],
      "hobbies" : ["football","guitar"]
    }
  ]
}

In this case the main document has the following fields: '_id', 'firstname', 'lastname', 'gender' and 'childrens'. 'childrens' is an embedded collection containing 'name', 'schools' and 'hobbies'; 'schools' is again a nested collection with 'street' and 'id' fields, and 'hobbies' is a list. We can map them into several nested tables: the main table represents FAMILY, FAMILY.CHILDREN gets the 'childrens' items, and FAMILY.CHILDREN.SCHOOLS and FAMILY.CHILDREN.HOBBIES store the schools and hobbies information.
The mapping file

The mapping file is a properties file; it also contains the connection information to access the MongoDB database and the NoSQL store:

- the name of the NoSQL store: Nosql.Store=kvstore
- the host and port of the NoSQL store: Nosql.URL=bigdatalite:5000
- the MongoDB host: MongoDB.host=localhost
- the MongoDB port: MongoDB.port=27017
- the MongoDB database: MongoDB.DB=gadb

Mapping principles

Define the main collection, its fields, and its main table mapping. For each field, define its type and its mapping value; note that this can be a recursive step. For each table, define the primary key index components.

Mapping extracts

Mapping the collection and table with its primary keys:

mongo.collection=family
mongo.collection.map=FAMILY
FAMILY.indexcols=LASTNAME,FIRSTNAME

indexcols is the keyword that introduces the comma-separated list of columns of the key; order is important. The indexcols prefix is a NoSQL table name.

Family fields:

family.fields=lastname,firstname,gender,childrens
family.firstname.type=string
family.firstname.map=FIRSTNAME
family.childrens.type=collection
family.childrens.map=CHILDREN

fields is the keyword that introduces the comma-separated list of fields of a collection. For each field, type corresponds to the type of a column in a NoSQL table (string, integer, long, float, double or boolean are accepted). Two other values are used: array and collection. array is for lists of basic types; collection is for more complex collections. When type is a basic type, map indicates a column of the mapped table; when the type is array or collection, map introduces a new table.

Children mappings:

CHILDREN.indexcols=NAME
childrens.fields=name,schools,hobbies
childrens.name.type=string
childrens.name.map=NAME
childrens.schools.type=collection
childrens.schools.map=SCHOOLS
childrens.hobbies.type=array
childrens.hobbies.map=HOBBIES

School mappings:

schools.fields=street,id
schools.indexcols=ID

street and id are basic string fields; their type and map are not shown.
Hobbies mappings:

hobbies.fields=hobbies
hobbies.hobbies.type=string
hobbies.hobbies.map=HOBBY
HOBBIES.indexcols=HOBBY

childrens.hobbies is an array of strings mapped to the child table HOBBIES. There is no name in the main collection for the field; I've chosen to use hobbies (the name of the collection) as the field name, to be able to define a mapping.

Tables generated

Get the child tables of FAMILY:

kv-> show tables -parent FAMILY
Tables:
 FAMILY.CHILDREN
 FAMILY.CHILDREN.HOBBIES
 FAMILY.CHILDREN.SCHOOLS

Get the table indexes:

kv-> show indexes -table FAMILY
Indexes on table FAMILY
FAMILYIndex (LASTNAME, FIRSTNAME)
kv-> show indexes -table FAMILY.CHILDREN
Indexes on table FAMILY.CHILDREN
CHILDRENIndex (NAME)
kv-> show indexes -table FAMILY.CHILDREN.SCHOOLS
Indexes on table FAMILY.CHILDREN.SCHOOLS
SCHOOLSIndex (ID)
kv-> show indexes -table FAMILY.CHILDREN.HOBBIES
Indexes on table FAMILY.CHILDREN.HOBBIES
HOBBIESIndex (HOBBY)

Getting data from tables

Get our example family:

kv-> get table -name FAMILY -field LASTNAME -value "clark" -field FIRSTNAME -value "lena"
{"FIRSTNAME":"lena","LASTNAME":"clark","GENDER":"W"}

Get our family's children:

kv-> get table -name FAMILY.CHILDREN -field LASTNAME -value "clark" -field FIRSTNAME -value "lena"
{"LASTNAME":"clark","FIRSTNAME":"lena","NAME":"bob"}
{"LASTNAME":"clark","FIRSTNAME":"lena","NAME":"joseph"}
{"LASTNAME":"clark","FIRSTNAME":"lena","NAME":"sandy"}

Get our family's children's schools:

kv-> get table -name FAMILY.CHILDREN.SCHOOLS -field LASTNAME -value "clark" -field FIRSTNAME -value "lena"
{"LASTNAME":"clark","FIRSTNAME":"lena","NAME":"bob","STREET":"134, mice street","ID":"Saint appearance"}
{"LASTNAME":"clark","FIRSTNAME":"lena","NAME":"bob","STREET":"90, pine street","ID":"Saint volume"}
{"LASTNAME":"clark","FIRSTNAME":"lena","NAME":"joseph","STREET":"168, merely street","ID":"Saint slipped"}
{"LASTNAME":"clark","FIRSTNAME":"lena","NAME":"sandy","STREET":"227, thread street","ID":"Saint discovery"}

Get our family's children's hobbies:
kv-> get table -name FAMILY.CHILDREN.HOBBIES -field LASTNAME -value "clark" -field FIRSTNAME -value "lena"
{"LASTNAME":"clark","FIRSTNAME":"lena","NAME":"bob","HOBBY":"photo"}
{"LASTNAME":"clark","FIRSTNAME":"lena","NAME":"bob","HOBBY":"soccer"}
{"LASTNAME":"clark","FIRSTNAME":"lena","NAME":"joseph","HOBBY":"piano"}
{"LASTNAME":"clark","FIRSTNAME":"lena","NAME":"joseph","HOBBY":"tennis"}
{"LASTNAME":"clark","FIRSTNAME":"lena","NAME":"sandy","HOBBY":"football"}
{"LASTNAME":"clark","FIRSTNAME":"lena","NAME":"sandy","HOBBY":"guitar"}

Running the example

Jar files needed:
- MongoDB Java driver: mongo-java-driver-3.0.0.jar (this is the version we used)
- NoSQL client: kvclient.jar (it should be a version supporting tables, 12.3+; this has been tested with 3.3.4)
- main Java class: mongoloadnosql.MongoDB2Nosql (the Java source code is here)

Parameters

The tool has 5 parameters:
- -limit <integer>: number of documents to load; 0 means all documents
- -skip <integer>: offset of the first document to load, similar to the skip function in MongoDB; 0 means no documents are skipped
- -mapfile <file>: properties file to load
- -create [true|<anything else than true>]: if true, the NoSQL API functions for the creation of tables and indexes are issued
- -insert [true|<anything else than true>]: if true, the NoSQL API functions for insertion are issued

Launch command

The following command creates the NoSQL tables and indexes if they do not exist, and inserts all the collection items using the properties file mappingfam.properties:

java -classpath <tool_dir>/classes:<KVHOME>/lib/kvclient.jar:<MONGODB_CLASSPATH>/mongo-java-driver-3.0.0.jar mongoloadnosql.Mongo2Nosql -limit 0 -skip 0 -mapfile mappingfam.properties -create true -insert true

Caveats and warnings

- Currently there is no way to map MongoDB references (neither referenced relationships nor DBRefs).
- Fields should be presented in the order defined by their primary keys: (lastname,firstname) instead of (firstname,lastname).
- The attached Java code is just to illustrate how to import/migrate MongoDB data into NoSQL tables in an efficient and consistent way; it has not been tested in all kinds of situations and it is not intended to be free of bugs.

Bonus

The following mapping file allows mapping MongoDB documents having the structure of a post. In this case there is an embedded object "origine", defined as {"owner" : "gus","site" : "recent_safety.com"}, which is not a collection. There is no primary key other than the MongoDB '_id' field. Enjoy trying this example as well.
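The mapping-file conventions described above (the <collection>.fields list plus per-field type and map entries) can be exercised with a small, self-contained reader. This is a hypothetical helper for illustration, not the migration tool itself:

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;

public class MappingReader {

    /** Returns "field (type) -> TABLE_OR_COLUMN" entries for one collection. */
    static List<String> mappingsFor(String propsText, String collection) {
        Properties p = new Properties();
        try {
            p.load(new StringReader(propsText));
        } catch (IOException e) {
            throw new RuntimeException(e);   // cannot happen for a StringReader
        }
        List<String> out = new ArrayList<String>();
        for (String f : p.getProperty(collection + ".fields", "").split(",")) {
            String type = p.getProperty(collection + "." + f + ".type");
            String map = p.getProperty(collection + "." + f + ".map");
            out.add(f + " (" + type + ") -> " + map);
        }
        return out;
    }

    public static void main(String[] args) {
        // Extract of the mapping file from the post.
        String props =
            "childrens.fields=name,schools,hobbies\n" +
            "childrens.name.type=string\n" +
            "childrens.name.map=NAME\n" +
            "childrens.schools.type=collection\n" +
            "childrens.schools.map=SCHOOLS\n" +
            "childrens.hobbies.type=array\n" +
            "childrens.hobbies.map=HOBBIES\n";
        for (String m : mappingsFor(props, "childrens")) {
            System.out.println(m);
        }
    }
}
```

Fields whose type is collection or array map to child tables (SCHOOLS, HOBBIES), while basic types map to columns, which is exactly the recursion the tool follows when generating the table hierarchy.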


Uploading NoSQL tables from Golden Gate User Exits

Golden Gate and NoSQL Integration

The aim of this post is to illustrate how to use Golden Gate to stream relational transactions to Oracle NoSQL 12.3.4. We follow the structure of the post which illustrated how to use Golden Gate to stream data into HBase. As shown in the diagram below, integrating a database with NoSQL is accomplished by developing a custom handler using Oracle GoldenGate's and NoSQL's Java APIs. The custom handler is deployed as an integral part of the Oracle GoldenGate Pump process. The Pump process and the custom adapter are configured through the Pump parameter file and the custom adapter's properties file. The Pump reads the Trail File created by the Oracle GoldenGate Capture process and passes the transactions to the adapter. Based on the configuration, the adapter writes the transactions into a NoSQL table. You can find the Java code for the handler in the StreamFromGG folder of this GitHub repository.

The steps to generate and test an example are:
1. Prepare the database to stream data from a table
2. Define a NoSQL table
3. Define the pump extract parameter file for the database
4. Define the extract parameter file and the adapter properties file for NoSQL
5. Register the extract process
6. Start the GoldenGate extract processes
7. Do some data manipulation on the database and verify the content of the NoSQL table

Let's take an example.

Prepare the database to stream data from a table

This part is not detailed here; let's say that a database user is defined on Oracle 12c to allow GoldenGate transactional streaming. The database parameters are set to log transactional SQL commands in the appropriate way to satisfy GoldenGate requirements.
We will focus on the database table T2 from the gha schema, whose definition is:

CREATE TABLE "GHA"."T2"
   ( "ID" NUMBER,
     "CREATED" TIMESTAMP (6),
     "NOM" VARCHAR2(32 BYTE),
     "VILLE" VARCHAR2(128 BYTE),
     CONSTRAINT "PK_T2" PRIMARY KEY ("ID", "CREATED") )

Define a NoSQL table

After connecting to the NoSQL store, the following commands create the table T2:

table create -name T2
# Add table fields
add-field -name ID -type STRING
add-field -name NOM -type STRING
add-field -name CREATED -type STRING
add-field -name VILLE -type STRING
# Assign fields as the primary key
primary-key -field ID -field CREATED
shard-key -field ID
exit
# Add table to the database
plan add-table -wait -name T2

Define the pump extract parameter file for the database

The extract for the database first requires the use of the defgen utility to create what is called a data definition file, which contains the definition of the source table. The content of the extract parameter file is:

EXTRACT E_ghat2
TARGETDEFS ./dirsql/t2.sql
SETENV (ORACLE_SID=cdb)
userid c##ogg, password ogg
exttrail /u01/ogg/dirdat/T2
GETUPDATEBEFORES
table orcl.gha.t2 TARGET gha.t2;

The extract name is E_ghat2, the table definition file is t2.sql, the Oracle user for the transactional streaming is c##ogg, the trail files generated are prefixed with T2, and the container of the schema gha is orcl.

Define the extract parameter file and the adapter properties file for NoSQL

When using GoldenGate Java adapters, there are two files: one defines the extract parameters, the other gives the Java-specific properties for the adapter (the default name for this file is <extract_name>.properties; if a different name is used it should be given in the extract parameters). Our extract name is nosqlt2.
Part of the content of nosqlt2.properties is:

jvm.bootoptions= -Xms64m -Xmx512M -Dlog4j.configuration=log4j.properties -Djava.class.path=dirprm:/u01/nosql/kv-ee/lib/jackson-core-asl.jar:/u01/nosql/kv-ee/lib/jackson-mapper-asl.jar:/u01/nosql/kv-ee/lib/avro.jar:/u01/ogg/ggjava/oggnosql.jar:/u01/nosql/kv-ee/lib/kvclient.jar:/u01/ogg/ggjava/ggjava.jar:/usr/lib/hadoop/client/commons-configuration-1.6.jar:/etc/hadoop/conf:/usr/lib/hadoop/client/commons-cli.jar
#Nosql Handler.
gg.handlerlist=nosqlhandler
gg.handler.nosqlhandler.type=com.goldengate.delivery.handler.nosql.NosqlHandler
gg.handler.nosqlhandler.NosqlStore=kvstore
gg.handler.nosqlhandler.NosqlUrl=bigdatalite:5000
gg.handler.nosqlhandler.NosqlTable=T2
gg.handler.nosqlhandler.NosqlCols=ID,CREATED,NOM,VILLE
gg.handler.nosqlhandler.NosqlPKCols=ID,CREATED
gg.handler.nosqlhandler.NosqlShardCols=ID
gg.handler.nosqlhandler.NosqlMappings=ID,ID;CREATED,CREATED;NOM,NOM;VILLE,VILLE

The meaning of these properties is:
- jvm.bootoptions: gives the classpath for the NoSQL Java classes, including JSON data handling and the jar for the NoSQL adapter
- gg.handlerlist: gives the list of handlers; in this case nosqlhandler will be used to identify the properties
- gg.handler.nosqlhandler.type: gives the class used as the adapter
- gg.handler.nosqlhandler.NosqlStore: gives the name of the NoSQL store to connect to
- gg.handler.nosqlhandler.NosqlUrl: gives the NoSQL store URL (hostname:port)
- gg.handler.nosqlhandler.NosqlTable: gives the name of the table
- gg.handler.nosqlhandler.NosqlCols: gives a comma-separated list of the NoSQL table columns
- gg.handler.nosqlhandler.NosqlPKCols: gives a comma-separated list of the NoSQL table primary key columns
- gg.handler.nosqlhandler.NosqlShardCols: gives a comma-separated list of the NoSQL table shard columns (should be a non-empty subset of the primary key columns)
- gg.handler.nosqlhandler.NosqlMappings: gives a semicolon-separated list of mapping pairs (source column,target column)

The adapter implementation of NoSQL data manipulation (delete, update, create) uses the shard column values to batch operations into the NoSQL database. The execution of the batched operations happens only when the stored shard value changes.

Register the extract process

Use the ggsci utility to issue the following commands (replacing <OGG_HOME> with its real value):

add extract E_GHAT2, integrated tranlog, begin now
add exttrail <OGG_HOME>/dirdat/T2, extract E_GHAT2, megabytes 10
register extract E_GHAT2 database container (orcl)
add extract NOSQLT2, exttrailsource <OGG_HOME>/dirdat/T2

Start the GoldenGate extract processes

Use the ggsci utility to start e_ghat2 and nosqlt2, and verify that the processes are running:

GGSCI (bigdatalite.localdomain) 1> info all
Program     Status      Group       Lag at Chkpt  Time Since Chkpt
MANAGER     RUNNING
EXTRACT     RUNNING     E_GHAT2     00:00:02      00:01:07
EXTRACT     RUNNING     NOSQLT2     49:22:05      00:01:56

Data manipulation and verification between Oracle 12c and NoSQL

Get the table count on NoSQL:

kv-> aggregate table -name t2 -count
Row count: 4329

Delete data from t2 on Oracle 12c:

delete t2 where id = 135
commit
2 rows deleted

Recompute the table count on NoSQL:

kv-> aggregate table -name t2 -count
Row count: 4327

Note that the last batch of NoSQL operations is flushed when the extract nosqlt2 is stopped.
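The flush-on-shard-change batching used by the adapter can be sketched in a self-contained way. This is a hypothetical simulation of the idea, not the adapter code: operations accumulate while the shard key stays the same and are flushed as one batch when it changes (plus a final flush when the extract stops):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ShardBatcher {

    /**
     * Groups an ordered stream of shard-key values into batches, flushing
     * whenever the shard key changes, with a final flush at the end.
     */
    static List<List<String>> batchByShard(List<String> shardKeys) {
        List<List<String>> batches = new ArrayList<List<String>>();
        List<String> current = new ArrayList<String>();
        String lastShard = null;
        for (String key : shardKeys) {
            if (lastShard != null && !key.equals(lastShard)) {
                batches.add(current);              // shard changed: flush
                current = new ArrayList<String>();
            }
            current.add(key);
            lastShard = key;
        }
        if (!current.isEmpty()) {
            batches.add(current);                  // final flush
        }
        return batches;
    }

    public static void main(String[] args) {
        List<String> keys = Arrays.asList("135", "135", "200", "200", "200", "7");
        System.out.println(batchByShard(keys).size()); // prints 3
    }
}
```

This is also why deleting the two rows with id = 135 above produced a single batched operation against the store, and why the very last batch only goes out when the extract is stopped.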


Forrester Wave places NoSQL Database among the leaders

We are very pleased that Oracle NoSQL Database has been recognized as one of the leaders in the key-value NoSQL product category by Forrester Research. Please see http://www.oracle.com/us/corporate/analystreports/forrester-wave-nosql-2348063.pdf for the full report.

In the past few years, we've witnessed growing adoption of NoSQL technologies to address specific data management problems. In many of the early adopter scenarios, the NoSQL applications were developed and managed as self-contained, standalone repositories of semi-structured data. In recent months, it has become clear that such data silos are very expensive to implement and maintain. Big data and NoSQL users now understand that well integrated NoSQL and SQL systems are the key to effective data management in today's world. An integrated set of products for managing NoSQL and relational data is critical for delivering business value in a cost-effective manner. Oracle NoSQL Database is fully integrated with Oracle Database and related technologies, making it an excellent choice for enterprise-grade, mission-critical NoSQL applications. As mentioned in the Forrester Wave report, "Many Oracle customers use Oracle NoSQL to balance the need for scale-out workloads of simpler key-value data, with the rich set of relational data management capabilities needed in their core business systems, or when supporting new applications that have frequently changing key-value data, such as profiles for fraud, personalization, and sensor data management".


Using R to plot data from NoSql tables

Oracle NoSQL tables are relatively new, and the communication between those tables and other big data systems and tools is still under construction. There is a package (rkvstore) that gives access to NoSQL key-value data from R, but this package does not support tables. This post presents a way to call R from Java and work with NoSQL table data using the R package Rserve, and shows how to generate a plot of this data.

Rserve

You need to install the Rserve package into R; it can be found here (under Download/Files). To launch the R server, first install the Rserve package:

R CMD INSTALL <path to the Rserve.tar.gz source package>

Then run R:

> library("Rserve")
> Rserve()

The R server gets data from a Java program and returns the result.

NoSQL tables

To create and load NoSQL tables, refer to this post.

Java code

The main steps of the Java program are:
1. Connect to the kvstore
2. Get data from a table (via an iterator)
3. Create an R session
4. Transform data from the iterator to an R format
5. Assign data to R variables
6. Generate the R elements to make the plot
7. Display the data
8. Disconnect

You can find the Java source code for this blog entry here.

Usage

Run Java with the following class and arguments:

nosql.r.RViewTablesBlog kvstore kvhost:kvport tablename flightrangebegin flightrangeend

Use -1 for flightrangebegin and flightrangeend to ignore the flight range. Add kvclient.jar from NoSQL and REngine.jar and RServeEngine.jar from Rserve to the classpath.

Results

R returns an image similar to the one below. Enjoy!
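The "transform data from the iterator to an R format" step amounts to turning row values into primitive arrays, which is the shape Rserve's assign call accepts before the R side can plot them. A self-contained sketch of that step (hypothetical helper; the real program reads its rows from a NoSQL TableIterator):

```java
import java.util.Arrays;
import java.util.List;

public class ToRVectors {

    /**
     * Flattens one numeric column of row data into the double[] that
     * would then be handed to R (e.g. assigned to an R variable before
     * evaluating a plot expression on the Rserve side).
     */
    static double[] column(List<double[]> rows, int index) {
        double[] out = new double[rows.size()];
        for (int i = 0; i < rows.size(); i++) {
            out[i] = rows.get(i)[index];
        }
        return out;
    }

    public static void main(String[] args) {
        List<double[]> rows = Arrays.asList(
            new double[]{101, 12.5},   // flight, delay
            new double[]{102, -3.0},
            new double[]{103, 7.25});
        System.out.println(Arrays.toString(column(rows, 1)));
    }
}
```

Once the columns are plain arrays, the R session only ever sees R vectors, so the plotting code stays ordinary R.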
