
  • July 15, 2013

High-Five for Rolling Upgrade in Oracle NoSQL Database

In today's world of e-commerce, where businesses operate 24/7, the equation for revenue made or lost is sometimes expressed in terms of latency: the faster you serve your customers, the more likely they are to do business with you. Now imagine that in this scenario, where every millisecond counts, your online business becomes inaccessible for anywhere from a few minutes to a couple of hours because you needed to apply an important patch or upgrade.


You get the idea of how important it is to stay online and available even during a planned hardware or software upgrade. The Oracle NoSQL Database 12c R1 (12.1.2.1.8) release lets you upgrade your NoSQL cluster with no disruption to your business services. It makes this possible by providing smart administration tools that calculate the safest combination of storage nodes that can be brought down and upgraded in parallel, while keeping every shard in the database available for reads and writes at all times.


Let's take a look at a real-world example. Say I have deployed a 9x3 database cluster, i.e. 9 shards with 3 replicas per shard (27 replication nodes in total) on 9 physical nodes. I have a highly available cluster (thanks to the intelligent topology feature shipped in 11gR2.2.0.23), with the replicas of each shard spread across three physical nodes so that there is no single point of failure. Here is how my topology looks:

[user@host09 kv-2.1.1]$ java -jar lib/kvstore.jar runadmin -port 5000 -host host01


kv-> ping

Pinging components of store mystore based upon topology sequence #136

mystore comprises 90 partitions and 9 Storage Nodes

Storage Node [sn1] on host01:5000    Datacenter: Boston [dc1]    Status: RUNNING   Ver: 12cR1.2.1.1
        Rep Node [rg2-rn1]      Status: RUNNING,REPLICA at sequence number: 45 haPort: 5012
        Rep Node [rg3-rn1]      Status: RUNNING,MASTER at sequence number: 41 haPort: 5013
        Rep Node [rg1-rn1]      Status: RUNNING,REPLICA at sequence number: 45 haPort: 5011

Storage Node [sn2] on host02:5000    Datacenter: Boston [dc1]    Status: RUNNING   Ver: 12cR1.2.1.1
        Rep Node [rg1-rn2]      Status: RUNNING,MASTER at sequence number: 45 haPort: 5010
        Rep Node [rg3-rn2]      Status: RUNNING,REPLICA at sequence number: 41 haPort: 5012
        Rep Node [rg2-rn2]      Status: RUNNING,REPLICA at sequence number: 45 haPort: 5011

Storage Node [sn3] on host03:5000    Datacenter: Boston [dc1]    Status: RUNNING   Ver: 12cR1.2.1.1
        Rep Node [rg2-rn3]      Status: RUNNING,MASTER at sequence number: 45 haPort: 5011
        Rep Node [rg3-rn3]      Status: RUNNING,REPLICA at sequence number: 41 haPort: 5012
        Rep Node [rg1-rn3]      Status: RUNNING,REPLICA at sequence number: 45 haPort: 5010

Storage Node [sn4] on host04:5000    Datacenter: Boston [dc1]    Status: RUNNING   Ver: 12cR1.2.1.1
        Rep Node [rg6-rn1]      Status: RUNNING,MASTER at sequence number: 41 haPort: 5012
        Rep Node [rg4-rn1]      Status: RUNNING,REPLICA at sequence number: 45 haPort: 5010
        Rep Node [rg5-rn1]      Status: RUNNING,REPLICA at sequence number: 45 haPort: 5011

Storage Node [sn5] on host05:5000    Datacenter: Boston [dc1]    Status: RUNNING   Ver: 12cR1.2.1.1
        Rep Node [rg6-rn2]      Status: RUNNING,REPLICA at sequence number: 41 haPort: 5012
        Rep Node [rg4-rn2]      Status: RUNNING,REPLICA at sequence number: 45 haPort: 5010
        Rep Node [rg5-rn2]      Status: RUNNING,MASTER at sequence number: 45 haPort: 5011

Storage Node [sn6] on host06:5000    Datacenter: Boston [dc1]    Status: RUNNING   Ver: 12cR1.2.1.1
        Rep Node [rg5-rn3]      Status: RUNNING,REPLICA at sequence number: 45 haPort: 5011
        Rep Node [rg4-rn3]      Status: RUNNING,MASTER at sequence number: 45 haPort: 5010
        Rep Node [rg6-rn3]      Status: RUNNING,REPLICA at sequence number: 41 haPort: 5012

Storage Node [sn7] on host07:5000    Datacenter: Boston [dc1]    Status: RUNNING   Ver: 12cR1.2.1.1
        Rep Node [rg9-rn1]      Status: RUNNING,MASTER at sequence number: 41 haPort: 5012
        Rep Node [rg7-rn1]      Status: RUNNING,REPLICA at sequence number: 45 haPort: 5010
        Rep Node [rg8-rn1]      Status: RUNNING,REPLICA at sequence number: 45 haPort: 5011

Storage Node [sn8] on host08:5000    Datacenter: Boston [dc1]    Status: RUNNING   Ver: 12cR1.2.1.1
        Rep Node [rg8-rn2]      Status: RUNNING,MASTER at sequence number: 45 haPort: 5011
        Rep Node [rg9-rn2]      Status: RUNNING,REPLICA at sequence number: 41 haPort: 5012
        Rep Node [rg7-rn2]      Status: RUNNING,REPLICA at sequence number: 45 haPort: 5010

Storage Node [sn9] on host09:5000    Datacenter: Boston [dc1]    Status: RUNNING   Ver: 12cR1.2.1.1
        Rep Node [rg7-rn3]      Status: RUNNING,MASTER at sequence number: 45 haPort: 5010
        Rep Node [rg8-rn3]      Status: RUNNING,REPLICA at sequence number: 45 haPort: 5011
        Rep Node [rg9-rn3]      Status: RUNNING,REPLICA at sequence number: 41 haPort: 5012



Notice that each storage node (sn1-sn9) hosts one MASTER node and two REPLICA nodes, each belonging to a different shard. Now, if you would like to upgrade the active cluster to the latest version of Oracle NoSQL Database without downtime, all you need to do is grab the latest binaries from OTN and lay down the bits (NEW_KVHOME) on each of the 9 nodes (or just once, if you have a shared drive accessible from all the nodes). Then, from the administration command-line interface (CLI), simply run 'show upgrade':

[user@host09 kv-2.1.8]$ java -jar lib/kvstore.jar runadmin -port 5000 -host host01


kv-> show upgrade

Calculating upgrade order, target version: 12.1.2.1.8, prerequisite: 11.2.2.0.23

sn3 sn4 sn7

sn1 sn8 sn5

sn2 sn6 sn9



The SNs in each horizontal row are the storage nodes that can be patched or upgraded in parallel, and the rows are processed sequentially: you can upgrade sn3, sn4 and sn7 in parallel, and once all three are done you move on to the next row (sn1, sn8 and sn5), and so on.
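
Upgrading a node within a row boils down to stopping that node's Storage Node Agent, switching it over to the new release, and starting it again. A minimal sketch of that step, assuming the store root is /var/kvroot and the old and new releases live under OLD_KVHOME and NEW_KVHOME (all three paths are placeholders for this example):

[user@host03 ~]$ java -jar OLD_KVHOME/lib/kvstore.jar stop -root /var/kvroot
[user@host03 ~]$ nohup java -jar NEW_KVHOME/lib/kvstore.jar start -root /var/kvroot &

Once the agent is back up on the new version, its replication nodes rejoin their shards and catch up; a quick 'ping' from the admin CLI should show them RUNNING again before you tackle the remaining nodes of the row.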
You may be asking: what if you have a fairly large cluster and don't want to upgrade the nodes manually; can't the whole process be automated with a script? We have already done that for you as well. An example script is available for you to try out at:
<KVROOT>/examples/upgrade/onlineUpgrade
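
If you do want to roll your own wrapper before looking at the shipped example, the core idea is just a loop over the rows printed by 'show upgrade'. Here is a rough, hypothetical bash sketch (host names, paths, and passwordless ssh are all assumptions made for illustration; the bundled onlineUpgrade script remains the supported starting point):

#!/bin/bash
# Hypothetical sketch of a rolling upgrade driven by the 'show upgrade' order.
KVROOT=/var/kvroot                      # assumed store root on every host
OLD_KVHOME=/opt/oracle/kv-2.1.1         # assumed location of the old release
NEW_KVHOME=/opt/oracle/kv-2.1.8         # assumed location of the new release

upgrade_sn() {                          # stop and restart one storage node over ssh
  ssh "$1" "java -jar $OLD_KVHOME/lib/kvstore.jar stop -root $KVROOT && \
            nohup java -jar $NEW_KVHOME/lib/kvstore.jar start -root $KVROOT > /dev/null 2>&1 &"
}

# Rows copied from the 'show upgrade' output above.
for row in "host03 host04 host07" "host01 host08 host05" "host02 host06 host09"; do
  for host in $row; do
    upgrade_sn "$host" &                # storage nodes within a row go in parallel
  done
  wait                                  # finish the whole row before starting the next
done

A production script would also verify, for example with 'ping' from the admin CLI, that every replication node on the upgraded storage nodes is back to RUNNING before moving to the next row; that is exactly the kind of bookkeeping the shipped example takes care of for you.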



I hope you find this feature as useful as I do. If you need more details on this topic, I recommend the section Upgrading an Existing Oracle NoSQL Database Deployment in the Administrator's Guide. If you are new to Oracle NoSQL Database, get the complete product documentation from here and learn about the product from the self-paced web tutorials, which include some hands-on exercises as well.
