Monday Dec 09, 2013

Look I shrunk the keys

Whether we use a relational or a non-relational database to durably persist our data on disk, we are all aware of the major role indexes play in accessing that data in real time. There is one aspect, though, that most of us tend to overlook while designing the index key: how to size it efficiently.

Not that I want to discount it, but in traditional databases, where we used to store hundreds of thousands to a few million records, sizing the index key didn't come up (that often) as a very high priority. In a NoSQL database, where you are going to persist a few billion to trillions of records, every byte saved in the key goes a long way.

That is exactly what came up this week while working on a POC, and I thought I should share the best practices and tricks of the trade that you can also use in developing your own applications. So here is my numero uno recommendation for designing index keys:

  • Keep it Small.

Now there is nothing there that you didn't know already, right? Right, but I just wanted to highlight it: if there is one thing you remember from this post, let it be this bullet item.

All right, here is what I was dealing with this week: a couple of billion records of telematics/spatial data that we needed to capture and query based on the timestamp (of the feed that was received) and the x and y co-ordinates of the system. To run the kind of queries we wanted to run (on spatial data), we came up with this index key:

/S/{timestamp}{x-coordinate}{y-coordinate}

How we used the above key structure to run spatial queries is another blog post; for this one I would just say that when we plugged values into the variables, our key became 24 bytes (1+13+5+5) long. Here's how:

  • Table prefix => type char = 1 byte (e.g. S)

  • Timestamp => type long = 13 bytes (e.g. 1386286913165)

  • X co-ordinate => type string = 5 bytes (e.g. 120.78 degrees, 31.87 degrees, etc.)

  • Y co-ordinate => type string = 5 bytes (e.g. 132.78 degrees, 33.75 degrees, etc.)
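
For illustration, here is a minimal sketch of how such a key string might be assembled. The variable names are hypothetical (the actual POC code isn't shown in this post), and it assumes the co-ordinates are stored as bare digits with the decimal point dropped, which is what makes each of them 5 bytes:

// Hypothetical sketch only; not the actual POC code. Assumes co-ordinates are
// stored as bare digits (decimal point dropped), which makes them 5 bytes each.
public class KeySketch {
    public static void main(String[] args) {
        String prefix = "S";                        // table prefix: 1 byte
        String ts = Long.toString(1386286913165L);  // timestamp: 13 bytes
        String x = "12078";                         // 120.78 degrees: 5 bytes
        String y = "13278";                         // 132.78 degrees: 5 bytes
        // Key path whose component data adds up to 1 + 13 + 5 + 5 = 24 bytes
        String key = "/" + prefix + "/" + ts + x + y;
        int bytes = prefix.length() + ts.length() + x.length() + y.length();
        System.out.println(key + " (" + bytes + " bytes of key data)");
    }
}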

With the amount of hardware we had available (for the POC) we could only create a 4-shard cluster. So to store two billion records we needed to keep (2B records / 4 shards) 500 million records on each of the four shards. Using the DbCacheSize utility, we calculated that we would need about 32 GB of JE cache on each Replication Node (RN).

$java -d64 -XX:+UseCompressedOops -jar $KVHOME/lib/je.jar DbCacheSize -records 500000000 -key 24

=== Database Cache Size ===
 Minimum Bytes        Maximum Bytes          Description
---------------       ---------------        -----------
 29,110,826,240       32,019,917,056         Internal nodes only

But we knew that if we could shrink the key size (without losing any information) we could save a lot of memory and improve the query time as well (search cost is a function of the number of records and the size of each record). So we built a simple encoding program that uses a range of 62 ASCII characters (0-9, a-z, A-Z) to encode any numeric value. You can find the program here or build your own, but what is important to note is that we were able to represent the same information with fewer bytes:

The 13-byte timestamp (e.g. 1386286913165) became 7 bytes (e.g. opc2BTn)

Each 5-byte X/Y co-ordinate (e.g. 132.78) became 3 bytes (e.g. a9X/6dF)

That gives a 14-byte encoded key (1 + 7 + 3 + 3 bytes). So what's the fuss about shrinking the keys (it's just a 10-byte saving), you might ask? Well, we plugged the numbers into the DbCacheSize utility again, and this time the verdict was that we needed only about 20 GB of JE cache to store the same half a billion records on each RN. That's roughly a 40% improvement (about 12 GB saved per Replication Node) and definitely an impressive start.

$java -d64 -XX:+UseCompressedOops -jar $KVHOME/lib/je.jar DbCacheSize -records 500000000 -key 14

=== Database Cache Size ===
 Minimum Bytes        Maximum Bytes          Description
---------------       ---------------        -----------
 16,929,008,448       19,838,099,264         Internal nodes only
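
For reference, here is a minimal base-62 encoder sketch along the same lines. This is not the actual program we used (the exact output characters depend on the alphabet ordering you pick); it simply shows how a 13-digit number collapses to 7 characters and a 5-digit number to 3:

// Minimal base-62 encoder sketch; the alphabet ordering (and hence the exact
// output characters) is an arbitrary choice for illustration.
public class Base62 {
    private static final String ALPHABET =
        "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ";

    // Encode a non-negative long as a base-62 string.
    public static String encode(long value) {
        if (value == 0) return "0";
        StringBuilder sb = new StringBuilder();
        while (value > 0) {
            sb.append(ALPHABET.charAt((int) (value % 62)));
            value /= 62;
        }
        return sb.reverse().toString();
    }

    public static void main(String[] args) {
        System.out.println(encode(1386286913165L)); // 13 digits -> 7 characters
        System.out.println(encode(13278L));         //  5 digits -> 3 characters
    }
}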

To conclude: you have just seen how a simple encoding technique can save you big time when you are dealing with billions of records. Next time you design an index key, think a little harder about how you can shrink it down!

Thursday Oct 24, 2013

Fast Data - Big Data's Achilles heel

At OOW 2013, in Mark Hurd and Thomas Kurian's keynote, they discussed Oracle's Fast Data software solution stack and a number of customers deploying Oracle's Big Data / Fast Data solutions, in particular Oracle NoSQL Database. Since then, there have been a large number of requests seeking clarification on how the Fast Data software stack works together to deliver on the promise of real-time Big Data solutions. Fast Data is a software solution stack that deals with one aspect of Big Data: high velocity. The software in the Fast Data solution stack involves three key pieces and their integration: Oracle Event Processing, Oracle Coherence, and Oracle NoSQL Database. All three of these technologies address high-throughput, low-latency data management requirements. [Read More]

Friday Oct 11, 2013

Accolades - Oracle NoSQL customers speak out with praise

For all of you participating in the Oracle NoSQL Database community and following the product's evolution, there have been a number of changes emerging on Oracle OTN for the NoSQL Database.

In particular, on the main page Dave Segleau's NoSQL Now presentation on Enterprise NoSQL is prominently displayed. It is a great discussion of the trends involved in NoSQL adoption, highlighting the most important aspects of NoSQL technology selection and what Oracle in particular is bringing to the movement. Many of you know that getting companies to speak up publicly about their use of Oracle technology is much harder than it is for pure open-source startups, so I am particularly pleased with the accolades starting to emerge from the users of Oracle NoSQL. Plus, there is new content getting published every day to help our growing community champion NoSQL technology adoption within their teams and organizations.

Starting to grow: I've noticed that our Meetup group is also gaining a lot of momentum. We are now over 400 members strong and growing aggressively. There is an awesome Meetup coming next week (Oct 15th at Elance, 441 Logue Avenue, Mountain View, CA) where Mike Olson, co-founder and Chief Strategy Officer of Cloudera, will be talking about the virtues of NoSQL key-value stores. There are already 88 people signed up for this event, so hurry up and join now or you may end up on a wait-list.

Spread the word and tell your friends: an enterprise-backed NoSQL is on the move!

Monday Jul 15, 2013

High-Five for Rolling Upgrade in Oracle NoSQL Database

In today's world of e-commerce, where businesses operate 24/7, the equation for revenue made or lost is sometimes expressed in terms of latency, i.e. the faster you serve your customers, the more likely they are to do business with you. Now imagine, in a scenario where every millisecond counts, your online business becoming inaccessible for anywhere from a few minutes to a couple of hours because you needed to apply an important patch or upgrade.

You get the idea of how important it is to stay online and available even during a planned hardware or software upgrade. The Oracle NoSQL Database 12c R1 (12.1.2.1.8) release puts you on a track where you can upgrade your NoSQL cluster with no disruption to your business services. It makes this possible by providing smart administration tools that calculate the safest combination of storage nodes that can be brought down in parallel and upgraded, keeping all the shards in the database available for reads and writes at all times.

Let's take a look at a real-world example. Say I have deployed a 9x3 database cluster, i.e. 9 shards with 3 replicas per shard (a total of 27 replication nodes) on 9 physical nodes. I have a highly available cluster (thanks to the intelligent topology feature shipped in 11gR2.2.0.23) with the replicas of each shard spread across three physical nodes, so there is no single point of failure. All right, here is what my topology looks like:

[user@host09 kv-2.1.1]$ java -jar lib/kvstore.jar runadmin -port 5000  -host host01

kv-> ping
Pinging components of store mystore based upon topology sequence #136
mystore comprises 90 partitions and 9 Storage Nodes
Storage Node [sn1] on host01:5000    Datacenter: Boston [dc1]    Status: RUNNING   Ver: 12cR1.2.1.1
        Rep Node [rg2-rn1]      Status: RUNNING,REPLICA at sequence number: 45 haPort: 5012
        Rep Node [rg3-rn1]      Status: RUNNING,MASTER at sequence number: 41 haPort: 5013
        Rep Node [rg1-rn1]      Status: RUNNING,REPLICA at sequence number: 45 haPort: 5011
Storage Node [sn2] on host02:5000    Datacenter: Boston [dc1]    Status: RUNNING   Ver: 12cR1.2.1.1
        Rep Node [rg1-rn2]      Status: RUNNING,MASTER at sequence number: 45 haPort: 5010
        Rep Node [rg3-rn2]      Status: RUNNING,REPLICA at sequence number: 41 haPort: 5012
        Rep Node [rg2-rn2]      Status: RUNNING,REPLICA at sequence number: 45 haPort: 5011
Storage Node [sn3] on host03:5000    Datacenter: Boston [dc1]    Status: RUNNING   Ver: 12cR1.2.1.1
        Rep Node [rg2-rn3]      Status: RUNNING,MASTER at sequence number: 45 haPort: 5011
        Rep Node [rg3-rn3]      Status: RUNNING,REPLICA at sequence number: 41 haPort: 5012
        Rep Node [rg1-rn3]      Status: RUNNING,REPLICA at sequence number: 45 haPort: 5010
Storage Node [sn4] on host04:5000    Datacenter: Boston [dc1]    Status: RUNNING   Ver: 12cR1.2.1.1
        Rep Node [rg6-rn1]      Status: RUNNING,MASTER at sequence number: 41 haPort: 5012
        Rep Node [rg4-rn1]      Status: RUNNING,REPLICA at sequence number: 45 haPort: 5010
        Rep Node [rg5-rn1]      Status: RUNNING,REPLICA at sequence number: 45 haPort: 5011
Storage Node [sn5] on host05:5000    Datacenter: Boston [dc1]    Status: RUNNING   Ver: 12cR1.2.1.1
        Rep Node [rg6-rn2]      Status: RUNNING,REPLICA at sequence number: 41 haPort: 5012
        Rep Node [rg4-rn2]      Status: RUNNING,REPLICA at sequence number: 45 haPort: 5010
        Rep Node [rg5-rn2]      Status: RUNNING,MASTER at sequence number: 45 haPort: 5011
Storage Node [sn6] on host06:5000    Datacenter: Boston [dc1]    Status: RUNNING   Ver: 12cR1.2.1.1
        Rep Node [rg5-rn3]      Status: RUNNING,REPLICA at sequence number: 45 haPort: 5011
        Rep Node [rg4-rn3]      Status: RUNNING,MASTER at sequence number: 45 haPort: 5010
        Rep Node [rg6-rn3]      Status: RUNNING,REPLICA at sequence number: 41 haPort: 5012
Storage Node [sn7] on host07:5000    Datacenter: Boston [dc1]    Status: RUNNING   Ver: 12cR1.2.1.1
        Rep Node [rg9-rn1]      Status: RUNNING,MASTER at sequence number: 41 haPort: 5012
        Rep Node [rg7-rn1]      Status: RUNNING,REPLICA at sequence number: 45 haPort: 5010
        Rep Node [rg8-rn1]      Status: RUNNING,REPLICA at sequence number: 45 haPort: 5011
Storage Node [sn8] on host08:5000    Datacenter: Boston [dc1]    Status: RUNNING   Ver: 12cR1.2.1.1
        Rep Node [rg8-rn2]      Status: RUNNING,MASTER at sequence number: 45 haPort: 5011
        Rep Node [rg9-rn2]      Status: RUNNING,REPLICA at sequence number: 41 haPort: 5012
        Rep Node [rg7-rn2]      Status: RUNNING,REPLICA at sequence number: 45 haPort: 5010
Storage Node [sn9] on host09:5000    Datacenter: Boston [dc1]    Status: RUNNING   Ver: 12cR1.2.1.1
        Rep Node [rg7-rn3]      Status: RUNNING,MASTER at sequence number: 45 haPort: 5010
        Rep Node [rg8-rn3]      Status: RUNNING,REPLICA at sequence number: 45 haPort: 5011
        Rep Node [rg9-rn3]      Status: RUNNING,REPLICA at sequence number: 41 haPort: 5012


Notice that each storage node (sn1-sn9) is hosting one MASTER node and two REPLICA nodes, each belonging to a different shard. Now if you would like to upgrade the active cluster to the latest version of Oracle NoSQL Database without downtime, all you need to do is grab the latest binaries from OTN and lay down the bits (NEW_KVHOME) on each of the 9 nodes (or just once if you have a shared drive accessible from all the nodes). Then, from the administration command-line interface (CLI), simply run 'show upgrade':

[user@host09 kv-2.1.8]$ java -jar lib/kvstore.jar runadmin -port 5000  -host host01

kv-> show upgrade
Calculating upgrade order, target version: 12.1.2.1.8, prerequisite: 11.2.2.0.23
sn3 sn4 sn7
sn1 sn8 sn5
sn2 sn6 sn9


The SNs in each horizontal row represent the storage nodes that can be patched/upgraded in parallel, and the rows represent the sequential order, i.e. you can upgrade sn3, sn4 & sn7 in parallel, and once all three are done, you can move on to the next row (sn1, sn8 & sn5), and so on. You might be asking: what if you have a fairly large cluster and don't want to upgrade the nodes by hand - can't this process be automated with a script? Well, we have already done that for you as well. An example script is available for you to try out and can be found at:
<KVROOT>/examples/upgrade/onlineUpgrade
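
If you prefer the manual route, the per-node steps boil down to stopping the storage node agent using the old release and restarting it from the new one, roughly along these lines (a rough sketch only; $KVROOT, $OLD_KVHOME and $NEW_KVHOME are placeholders for your own paths, and the Administrator's Guide remains the authoritative reference). Repeat on every SN in the row, wait for them to come back up, and then move on to the next row:

[user@host03 ~]$ java -jar $OLD_KVHOME/lib/kvstore.jar stop -root $KVROOT
[user@host03 ~]$ nohup java -jar $NEW_KVHOME/lib/kvstore.jar start -root $KVROOT &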


I hope you find this feature as useful as I do. If you need more details on this topic, I recommend that you visit Upgrading an Existing Oracle NoSQL Database Deployment in the Administrator's Guide. If you are new to Oracle NoSQL Database, get the complete product documentation from here and learn about the product from self-paced web tutorials with some hands-on exercises as well.

About

This blog is about everything NoSQL. An open place to express thoughts on this exciting topic and exchange ideas with other enthusiasts learning and exploring what the coming generation of data management will look like in the face of social digital modernization. A collective dialog to invigorate the imagination and drive innovation straight into the heart of our efforts to better our existence through technological excellence.
