Oracle NoSQL Database Performance Tests
By Charles Lamb on Jan 12, 2012
Our colleagues at Cisco gave us access to their Unified Computing System (UCS) labs for some Oracle NoSQL Database performance testing. Specifically, they let us use a dozen C210 servers for hosting the Oracle NoSQL Database Rep Nodes and a handful of C200 servers for driving load.
The C210 machines were configured with 96GB RAM, dual Xeon X5670 CPUs (2.93 GHz), and 16 x 7200 rpm SAS drives. The drives were configured into two sets of 8 drives, each in a RAID-0 array using the hardware controller, and then combined into one large RAID-0 volume using the OS. The OS was Linux 2.6.32-130.el6.x86_64.
Cisco 10GigE switches were used to connect all the machines (Rep Nodes and load drivers).
We used the Yahoo! Cloud Serving Benchmark (YCSB) as the client for the tests. Our key size was 13 bytes and our data size 1108 bytes (that's how our serialization turned out for 1K of data).
We ran two phases: a load, and a 50/50 read/update benchmark. Because YCSB only supports a Java integer's worth of records (about 2.1 billion), we created 400 million records per NoSQL Database Rep Group. The "KVS Size" column shows the total number of records in the K/V Store, followed by the number of rep groups and the replication factor in parentheses. For example, "400m(1x3)" means 400m total records in a K/V Store consisting of 1 Rep Group with a Replication Factor of 3 (3 Replication Nodes per Rep Group).
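For reference, the two phases above correspond to YCSB's standard load and run cycles driving its CoreWorkload. A workload file along the following lines would reproduce the 50/50 mix; the record count and read/update proportions come from the text, while the field sizing, operation count, and request distribution are assumptions about how the ~1K payload was generated:

```properties
# Sketch of a YCSB workload file for these tests. Values not stated
# in the post are marked as assumptions.
workload=com.yahoo.ycsb.workloads.CoreWorkload
recordcount=400000000        # 400m records per Rep Group (stated)
operationcount=100000000     # assumed; not stated in the post
readproportion=0.5
updateproportion=0.5
insertproportion=0
scanproportion=0
# 10 fields x 100 bytes = 1000 bytes of user data, which would
# serialize to roughly the 1108 bytes mentioned (sizes assumed)
fieldcount=10
fieldlength=100
requestdistribution=zipfian  # assumed
```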
The clients ran on the C200 nodes, which were configured with dual Xeon X5670 CPUs and 96GB of memory, although only the CPU speed really matters on that side of the equation, since the clients were neither memory nor I/O bound. Typically, we ran 90 client threads per YCSB client process. In the tables below, the total number of client processes is shown in the "Clients" column, and the total number of client threads (generally 90 per client) is shown in the "Total Client Threads" column.
The Oracle NoSQL Database Rep Node cache sizes were configured so that the B+Tree internal nodes fit into memory, but the leaf nodes (the data) did not. Specifically, we configured the Rep Nodes with 32GB of JVM heap and 22GB of cache. The 50/50 Read/Update results therefore reflect a single disk I/O per YCSB operation. Durability was the NoSQL Database recommended (and default) value of no_sync, simple_majority, no_sync, and the Consistency we used for the 50/50 read/update test was Consistency.NONE.
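The cache-sizing claim can be sanity-checked with back-of-envelope arithmetic from the numbers above. The per-entry internal-node overhead below is an assumption for illustration (JE's DbCacheSize utility computes the real figure), but the stated record count and sizes alone show why the data cannot be cached while the index can:

```python
# Back-of-envelope check of the cache sizing for one Rep Node holding
# a full copy of a 400m-record Rep Group. The per-entry overhead is
# an assumption, not a figure from the post.

records = 400_000_000            # records per Rep Group (stated)
key_bytes = 13                   # key size (stated)
data_bytes = 1108                # serialized record size (stated)
cache_bytes = 22 * 1024**3       # 22GB cache per Rep Node (stated)

# Leaf (data) nodes: ~400GB, far larger than the cache, so a read or
# update of a random record costs one disk I/O.
total_data = records * data_bytes
assert total_data > cache_bytes

# Internal nodes: one key per record plus bookkeeping. Assuming
# roughly 40 bytes of per-entry overhead (assumed), they just fit.
overhead_per_entry = 40
internal_bytes = records * (key_bytes + overhead_per_entry)
assert internal_bytes < cache_bytes

print(f"data: {total_data / 1024**3:.0f} GiB, "
      f"internal nodes: {internal_bytes / 1024**3:.0f} GiB, "
      f"cache: {cache_bytes / 1024**3:.0f} GiB")
```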
Insert Results

|KVS Size|Clients|Total Client Threads|Insert Avg Latency (ms)|95% Latency (ms)|99% Latency (ms)|

50/50 Read/Update Results

|KVS Size|Clients|Total Client Threads|Total Throughput (ops/sec)|Avg Read Latency (ms)|95% Read Latency (ms)|99% Read Latency (ms)|Avg Update Latency (ms)|95% Update Latency (ms)|99% Update Latency (ms)|
The results demonstrate the excellent scalability, high throughput, and low latency of Oracle NoSQL Database.
I want to say "thank you" to my colleagues at Cisco for sharing their extremely capable hardware, lab, and staff with us for these tests.